
HAProxy Provider Guide

Comprehensive guide to the HAProxy provider in GAL (Gateway Abstraction Layer)

Table of Contents

  1. Overview
  2. Installation & Setup
  3. Quick Start
  4. Configuration Options
  5. Provider Comparison

Further documentation: - Feature Implementations - details on ACLs, auth, rate limiting, and health checks - Best Practices & Troubleshooting - best practices and troubleshooting


Overview

What is HAProxy?

HAProxy (High Availability Proxy) is an extremely fast and reliable TCP/HTTP load balancer. It is used by some of the world's largest websites (GitHub, Reddit, Stack Overflow, etc.) and is known for:

  • Extreme performance: 100,000+ requests per second
  • Enterprise-grade reliability: highest availability
  • Advanced load balancing: 10+ algorithms
  • Flexible health checks: active & passive
  • Low resource usage: minimal CPU/RAM footprint

Feature Matrix

| Feature | Support Level | Implementation | Notes |
|---|---|---|---|
| Load Balancing | | | |
| Round Robin | ✅ Full | balance roundrobin | Even distribution |
| Least Connections | ✅ Full | balance leastconn | Fewest connections |
| IP Hash (Source) | ✅ Full | balance source | Session persistence |
| Weighted | ✅ Full | weight per server | Capacity-based |
| URI Hash | ✅ Full | balance uri | URL-based |
| Header Hash | ✅ Full | balance hdr(name) | Header-based |
| Health Checks | | | |
| Active HTTP Checks | ✅ Full | option httpchk | Periodic probing |
| Active TCP Checks | ✅ Full | check | TCP connection check |
| Passive Checks | ✅ Full | fall/rise thresholds | Traffic-based |
| Custom Health Checks | ✅ Full | http-check expect | Flexible validation |
| Traffic Management | | | |
| Rate Limiting | ✅ Full | stick-table | IP & header-based |
| Sticky Sessions | ✅ Full | cookie, source | Cookie or IP-based |
| Connection Pooling | ✅ Full | http-server-close | Connection reuse |
| Timeouts | ✅ Full | timeout * | Granular timeouts |
| Security | | | |
| Basic Authentication | ✅ Full | ACL + auth backend | Via ACLs |
| Header Manipulation | ✅ Full | http-request/response | Add/set/remove |
| CORS | ⚠️ Limited | Custom headers | Via http-response |
| JWT Authentication | ⚠️ Lua | Lua scripting | Lua script required |
| Observability | | | |
| Access Logs | ✅ Full | log global | Structured logging |
| Stats Page | ✅ Full | stats section | Web UI & API |
| Runtime API | ✅ Full | stats socket | Dynamic config |

Legend: - ✅ Full: natively supported - ⚠️ Limited: restricted or requires additional modules - ⚠️ Lua: requires Lua scripting - ❌ Not Supported: not available


Installation & Setup

HAProxy Installation

Ubuntu/Debian:

# HAProxy 2.8+ (latest stable)
sudo apt update
sudo apt install haproxy

# Check the version
haproxy -v

CentOS/RHEL:

# Enable the EPEL repository
sudo yum install epel-release

# Install HAProxy
sudo yum install haproxy

# Check the version
haproxy -v

Docker:

# HAProxy 2.9 official image
docker pull haproxy:2.9-alpine

# Start with a config file
docker run -d \
  -p 80:80 \
  -v $(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
  haproxy:2.9-alpine

GAL Installation

# Install from PyPI
pip install gal-gateway

# Check the provider
gal list-providers
# → haproxy - HAProxy Load Balancer

Quick Start

Example 1: Basic Load Balancing

config.yaml:

version: "1.0"
provider: haproxy

global:
  host: "0.0.0.0"
  port: 80

services:
  - name: api_service
    type: rest
    protocol: http
    upstream:
      targets:
        - host: api-1.internal
          port: 8080
        - host: api-2.internal
          port: 8080
      load_balancer:
        algorithm: round_robin  # even distribution

    routes:
      - path_prefix: /api

Generation:

gal generate --config config.yaml --provider haproxy > haproxy.cfg

Generated haproxy.cfg:

global
    log         127.0.0.1 local0
    maxconn     4000
    daemon
    stats socket /var/lib/haproxy/stats level admin

defaults
    mode                    http
    log                     global
    option                  httplog
    timeout client          30s
    timeout server          30s
    timeout connect         5s

frontend http_frontend
    bind 0.0.0.0:80

    acl is_api_service_route0 path_beg /api
    use_backend backend_api_service if is_api_service_route0

backend backend_api_service
    balance roundrobin
    server server1 api-1.internal:8080 check
    server server2 api-2.internal:8080 check

Testing:

# Validate the config
haproxy -c -f haproxy.cfg

# Start HAProxy
haproxy -f haproxy.cfg

# Send test requests
curl http://localhost/api

Example 2: Load Balancing + Health Checks

services:
  - name: api_service
    type: rest
    protocol: http
    upstream:
      targets:
        - host: api-1.internal
          port: 8080
          weight: 2  # receives twice as much traffic
        - host: api-2.internal
          port: 8080
          weight: 1
      health_check:
        active:
          enabled: true
          http_path: /health
          interval: "10s"
          healthy_threshold: 2
          unhealthy_threshold: 3
          healthy_status_codes: [200, 204]
      load_balancer:
        algorithm: least_conn  # fewest active connections

    routes:
      - path_prefix: /api

Generated haproxy.cfg (backend):

backend backend_api_service
    balance leastconn
    option httpchk GET /health HTTP/1.1
    http-check expect status 200,204

    server server1 api-1.internal:8080 check inter 10s fall 3 rise 2 weight 2
    server server2 api-2.internal:8080 check inter 10s fall 3 rise 2 weight 1

Example 3: Rate Limiting + Headers + CORS

services:
  - name: api_service
    type: rest
    protocol: http
    upstream:
      host: api.internal
      port: 8080

    routes:
      - path_prefix: /api
        rate_limit:
          enabled: true
          requests_per_second: 100
          burst: 200
          key_type: ip_address
          response_status: 429

        headers:
          request_add:
            X-Request-ID: "{{uuid}}"
            X-Gateway: "HAProxy"
          response_add:
            X-Frame-Options: "DENY"
            X-Content-Type-Options: "nosniff"

        cors:
          enabled: true
          allowed_origins:
            - "https://app.example.com"
          allowed_methods:
            - GET
            - POST
            - PUT
            - DELETE
          allow_credentials: true

Generated haproxy.cfg (frontend):

frontend http_frontend
    bind 0.0.0.0:80

    # Rate Limiting
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src if is_api_service_route0
    http-request deny deny_status 429 if is_api_service_route0 { sc_http_req_rate(0) gt 100 }

    # Request Headers
    http-request set-header X-Request-ID "%[uuid()]" if is_api_service_route0
    http-request set-header X-Gateway "HAProxy" if is_api_service_route0

    # Response Headers
    http-response set-header X-Frame-Options "DENY" if is_api_service_route0
    http-response set-header X-Content-Type-Options "nosniff" if is_api_service_route0

    # CORS
    http-response set-header Access-Control-Allow-Origin "https://app.example.com" if is_api_service_route0
    http-response set-header Access-Control-Allow-Methods "GET, POST, PUT, DELETE" if is_api_service_route0
    http-response set-header Access-Control-Allow-Credentials "true" if is_api_service_route0

    use_backend backend_api_service if is_api_service_route0


Configuration Options

Global Configuration

global:
  host: "0.0.0.0"        # listen address (default: 0.0.0.0)
  port: 80               # listen port (default: 10000)
  admin_port: 9901       # admin/stats port (not used by HAProxy)
  timeout: "30s"         # default timeout (default: 30s)

Mapping to haproxy.cfg: - host:port → bind directive in the frontend - timeout → timeout client, timeout server

Upstream Configuration

upstream:
  # Option 1: single host (simple)
  host: api.example.com
  port: 8080

  # Option 2: multiple targets (load balancing)
  targets:
    - host: api-1.internal
      port: 8080
      weight: 2          # load balancing weight (default: 1)
    - host: api-2.internal
      port: 8080
      weight: 1

  # Health Checks
  health_check:
    active:
      enabled: true
      http_path: /health        # health check path (default: /health)
      interval: "10s"           # check interval (default: 10s)
      timeout: "5s"             # check timeout (default: 5s)
      healthy_threshold: 2      # successes until healthy (default: 2)
      unhealthy_threshold: 3    # failures until unhealthy (default: 3)
      healthy_status_codes:     # status codes counted as success
        - 200
        - 201
        - 204

    passive:
      enabled: true
      max_failures: 5           # max failures (default: 5)
      unhealthy_status_codes:   # status codes counted as failures
        - 500
        - 502
        - 503
        - 504

  # Load Balancing
  load_balancer:
    algorithm: round_robin      # round_robin, least_conn, ip_hash, weighted
    sticky_sessions: false      # Cookie-based Session Persistence
    cookie_name: "SERVERID"     # cookie name (if sticky_sessions=true)

HAProxy Mapping:

| GAL Option | HAProxy Directive | Description |
|---|---|---|
| targets[].host:port | server name host:port | Backend server |
| targets[].weight | server ... weight N | Load balancing weight |
| algorithm: round_robin | balance roundrobin | Round robin |
| algorithm: least_conn | balance leastconn | Least connections |
| algorithm: ip_hash | balance source | IP hash (source) |
| algorithm: weighted | balance roundrobin + weight | Weighted round robin |
| active.enabled | option httpchk | Active health check |
| active.http_path | option httpchk GET /path | Health check path |
| active.interval | check inter N | Check interval |
| active.unhealthy_threshold | fall N | Failures until unhealthy |
| active.healthy_threshold | rise N | Successes until healthy |
| passive.max_failures | fall N | Passive failure limit |
| sticky_sessions: true | cookie NAME insert | Cookie persistence |
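As an illustration, the mapping above can be turned into a tiny generator sketch in Python. This is purely illustrative: `render_backend` and `BALANCE_DIRECTIVES` are hypothetical names, not part of the GAL codebase.

```python
# Sketch: map a GAL load_balancer block to HAProxy backend lines.
BALANCE_DIRECTIVES = {
    "round_robin": "balance roundrobin",
    "least_conn": "balance leastconn",
    "ip_hash": "balance source",
    "weighted": "balance roundrobin",  # weights go on the server lines
}

def render_backend(name, targets, algorithm="round_robin"):
    lines = [f"backend backend_{name}", f"    {BALANCE_DIRECTIVES[algorithm]}"]
    for i, t in enumerate(targets, start=1):
        line = f"    server server{i} {t['host']}:{t['port']} check"
        if t.get("weight", 1) != 1:
            line += f" weight {t['weight']}"  # only emit non-default weights
        lines.append(line)
    return "\n".join(lines)

cfg = render_backend(
    "api_service",
    [{"host": "api-1.internal", "port": 8080, "weight": 2},
     {"host": "api-2.internal", "port": 8080}],
    algorithm="weighted",
)
print(cfg)
```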

Route Configuration

routes:
  - path_prefix: /api           # path prefix (ACL)
    methods:                    # HTTP methods (optional)
      - GET
      - POST

    # Rate Limiting
    rate_limit:
      enabled: true
      requests_per_second: 100  # RPS limit
      burst: 200                # burst capacity
      key_type: ip_address      # ip_address or header
      key_header: X-API-Key     # header name (if key_type=header)
      response_status: 429      # HTTP status when the limit is hit

    # Header Manipulation
    headers:
      request_add:              # add request headers
        X-Request-ID: "{{uuid}}"
        X-Gateway: "HAProxy"
      request_set:              # set (overwrite) request headers
        User-Agent: "HAProxy/1.0"
      request_remove:           # remove request headers
        - X-Internal-Token

      response_add:             # add response headers
        X-Frame-Options: "DENY"
      response_set:             # set (overwrite) response headers
        Server: "HAProxy"
      response_remove:          # remove response headers
        - X-Powered-By

    # CORS
    cors:
      enabled: true
      allowed_origins:
        - "https://app.example.com"
        - "https://www.example.com"
      allowed_methods:
        - GET
        - POST
        - PUT
        - DELETE
        - OPTIONS
      allowed_headers:
        - Content-Type
        - Authorization
        - X-API-Key
      expose_headers:
        - X-Request-ID
      allow_credentials: true
      max_age: 86400            # preflight cache (seconds)

Feature Implementations

1. Load Balancing Algorithms

HAProxy supports 10+ load balancing algorithms. GAL implements the most important ones:

Round Robin (default)

Description: Distributes requests evenly across all servers.

load_balancer:
  algorithm: round_robin

HAProxy:

backend backend_api
    balance roundrobin
    server server1 api-1.internal:8080
    server server2 api-2.internal:8080

Use case: homogeneous servers with similar capacity.

Least Connections

Description: Picks the server with the fewest active connections.

load_balancer:
  algorithm: least_conn

HAProxy:

backend backend_api
    balance leastconn
    server server1 api-1.internal:8080
    server server2 api-2.internal:8080

Use case: long-running requests or highly variable request durations.

IP Hash (Source)

Description: Consistent server selection based on the client IP.

load_balancer:
  algorithm: ip_hash

HAProxy:

backend backend_api
    balance source
    server server1 api-1.internal:8080
    server server2 api-2.internal:8080

Use case: session persistence without cookies.
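The consistency property can be modeled in a few lines of Python. This is a simplified sketch: the real balance source directive hashes the address and maps it onto the weighted, currently healthy server set, which this model ignores.

```python
import hashlib

servers = ["api-1.internal:8080", "api-2.internal:8080"]

def pick_server(client_ip: str) -> str:
    # Hash the source address and map it onto the server list.
    digest = hashlib.md5(client_ip.encode()).digest()
    return servers[int.from_bytes(digest[:4], "big") % len(servers)]

# The same client IP always lands on the same backend:
assert pick_server("203.0.113.7") == pick_server("203.0.113.7")
```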

Weighted Load Balancing

Description: Distributes traffic according to server capacity.

targets:
  - host: powerful-server.internal
    port: 8080
    weight: 3  # receives 75% of the traffic
  - host: small-server.internal
    port: 8080
    weight: 1  # receives 25% of the traffic

load_balancer:
  algorithm: weighted

HAProxy:

backend backend_api
    balance roundrobin
    server server1 powerful-server.internal:8080 weight 3
    server server2 small-server.internal:8080 weight 1

Use case: heterogeneous servers with differing capacity.
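With a 3:1 weighting the long-run split is 75%/25%. A naive static weighted round-robin illustrates this (HAProxy uses a smoother interleaving, but the resulting distribution is the same):

```python
from itertools import cycle

weights = {"powerful-server.internal:8080": 3, "small-server.internal:8080": 1}

# Expand each server according to its weight, then cycle through the schedule.
schedule = cycle([srv for srv, w in weights.items() for _ in range(w)])

counts = {srv: 0 for srv in weights}
for _ in range(1000):
    counts[next(schedule)] += 1

print(counts)  # → 750 requests for weight 3, 250 for weight 1
```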

2. Health Checks

Active Health Checks

Description: Periodically probes the backend servers.

health_check:
  active:
    enabled: true
    http_path: /health
    interval: "5s"
    timeout: "3s"
    healthy_threshold: 2      # 2 successful checks → healthy
    unhealthy_threshold: 3    # 3 failed checks → unhealthy
    healthy_status_codes:
      - 200
      - 204

HAProxy:

backend backend_api
    option httpchk GET /health HTTP/1.1
    http-check expect status 200,204

    server server1 api-1.internal:8080 check inter 5s fall 3 rise 2

Parameters: - check: enables health checks - inter 5s: probe every 5 seconds - fall 3: 3 failures → unhealthy - rise 2: 2 successes → healthy
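The fall/rise logic is a small state machine over consecutive check results; a Python sketch of the idea (illustrative, not GAL or HAProxy source):

```python
class ServerHealth:
    """Tracks consecutive check results the way fall/rise thresholds do."""

    def __init__(self, fall: int = 3, rise: int = 2):
        self.fall, self.rise = fall, rise
        self.healthy = True
        self.streak = 0  # consecutive results opposing the current state

    def record(self, check_ok: bool) -> bool:
        if check_ok == self.healthy:
            self.streak = 0          # state confirmed, reset the counter
        else:
            self.streak += 1
            threshold = self.fall if self.healthy else self.rise
            if self.streak >= threshold:
                self.healthy = not self.healthy
                self.streak = 0
        return self.healthy

s = ServerHealth(fall=3, rise=2)
results = [s.record(ok) for ok in (False, False, False, True, True)]
print(results)  # → [True, True, False, False, True]
```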

Passive Health Checks

Description: Monitors live traffic and marks failing servers.

health_check:
  passive:
    enabled: true
    max_failures: 5

HAProxy:

backend backend_api
    server server1 api-1.internal:8080 check fall 5 rise 2

Use case: combine with active checks for maximum reliability.

3. Rate Limiting

HAProxy implements rate limiting with stick-tables.

IP-based Rate Limiting

rate_limit:
  enabled: true
  requests_per_second: 100
  burst: 200
  key_type: ip_address
  response_status: 429

HAProxy:

frontend http_frontend
    # stick-table for IP-based rate limiting
    stick-table type ip size 100k expire 30s store http_req_rate(10s)

    # Track Client IP
    http-request track-sc0 src if is_route

    # Deny if the rate exceeds 100
    http-request deny deny_status 429 if is_route { sc_http_req_rate(0) gt 100 }

Explanation: - stick-table type ip: one table entry per client IP - store http_req_rate(10s): request rate over a 10-second window - track-sc0 src: tracks the source IP (the client) - sc_http_req_rate(0) gt 100: checks whether the rate exceeds 100
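A simplified per-IP model of this mechanism in Python (real stick-tables store an approximated rate rather than individual timestamps, but the effect is comparable):

```python
from collections import defaultdict, deque

WINDOW = 10.0   # seconds, matches http_req_rate(10s)
LIMIT = 100     # matches "gt 100" in the deny rule

requests = defaultdict(deque)  # client IP -> timestamps of recent requests

def allow(ip: str, now: float) -> bool:
    q = requests[ip]
    while q and now - q[0] >= WINDOW:
        q.popleft()                 # drop entries outside the window
    if len(q) >= LIMIT:
        return False                # over the rate -> deny (429)
    q.append(now)
    return True

# 100 requests within the window pass, the 101st is denied:
results = [allow("198.51.100.9", t * 0.01) for t in range(101)]
print(results.count(True), results.count(False))  # → 100 1
```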

Header-based Rate Limiting

rate_limit:
  enabled: true
  requests_per_second: 1000
  key_type: header
  key_header: X-API-Key

HAProxy:

frontend http_frontend
    http-request track-sc0 hdr(X-API-Key) if is_route
    http-request deny deny_status 429 if is_route { sc_http_req_rate(0) gt 1000 }

Use case: different limits per API key/tenant.

4. Header Manipulation

Request Headers

headers:
  request_add:
    X-Request-ID: "{{uuid}}"
    X-Real-IP: "{{client_ip}}"
  request_set:
    User-Agent: "HAProxy/1.0"
  request_remove:
    - X-Internal-Token

HAProxy:

frontend http_frontend
    # Add Headers
    http-request set-header X-Request-ID "%[uuid()]" if is_route
    http-request set-header X-Real-IP "%[src]" if is_route

    # Set headers (overwrite)
    http-request set-header User-Agent "HAProxy/1.0" if is_route

    # Remove Headers
    http-request del-header X-Internal-Token if is_route

Template variables: - {{uuid}} → %[uuid()]: unique request ID - {{now}} → %[date()]: ISO 8601 timestamp

Response Headers

headers:
  response_add:
    X-Frame-Options: "DENY"
    X-Content-Type-Options: "nosniff"
  response_set:
    Server: "HAProxy"
  response_remove:
    - X-Powered-By

HAProxy:

frontend http_frontend
    # Add Response Headers
    http-response set-header X-Frame-Options "DENY" if is_route
    http-response set-header X-Content-Type-Options "nosniff" if is_route

    # Set Response Headers
    http-response set-header Server "HAProxy" if is_route

    # Remove Response Headers
    http-response del-header X-Powered-By if is_route

5. CORS Configuration

HAProxy supports CORS via response headers.

cors:
  enabled: true
  allowed_origins:
    - "https://app.example.com"
  allowed_methods:
    - GET
    - POST
    - PUT
    - DELETE
  allowed_headers:
    - Content-Type
    - Authorization
  allow_credentials: true
  max_age: 86400

HAProxy:

frontend http_frontend
    http-response set-header Access-Control-Allow-Origin "https://app.example.com" if is_route
    http-response set-header Access-Control-Allow-Methods "GET, POST, PUT, DELETE" if is_route
    http-response set-header Access-Control-Allow-Headers "Content-Type, Authorization" if is_route
    http-response set-header Access-Control-Allow-Credentials "true" if is_route
    http-response set-header Access-Control-Max-Age "86400" if is_route

Note: preflight OPTIONS requests may need to be handled manually.
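To reason about what such a manual preflight handler has to return, here is a Python sketch deriving the response headers from the cors block above (illustrative only; in HAProxy itself this would be expressed with http-request/http-response rules):

```python
# Sketch: headers a preflight (OPTIONS) response needs, derived from
# the GAL cors block. Unknown origins get no CORS headers at all.
cors = {
    "allowed_origins": ["https://app.example.com"],
    "allowed_methods": ["GET", "POST", "PUT", "DELETE"],
    "allowed_headers": ["Content-Type", "Authorization"],
    "allow_credentials": True,
    "max_age": 86400,
}

def preflight_headers(origin: str) -> dict:
    if origin not in cors["allowed_origins"]:
        return {}  # unknown origin: answer without CORS headers
    headers = {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": ", ".join(cors["allowed_methods"]),
        "Access-Control-Allow-Headers": ", ".join(cors["allowed_headers"]),
        "Access-Control-Max-Age": str(cors["max_age"]),
    }
    if cors["allow_credentials"]:
        headers["Access-Control-Allow-Credentials"] = "true"
    return headers

print(preflight_headers("https://app.example.com"))
print(preflight_headers("https://evil.example"))  # → {}
```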

6. Sticky Sessions

load_balancer:
  sticky_sessions: true
  cookie_name: "SERVERID"

HAProxy:

backend backend_api
    cookie SERVERID insert indirect nocache

    server server1 api-1.internal:8080 cookie server1
    server server2 api-2.internal:8080 cookie server2

Explanation: - cookie SERVERID insert: inserts the cookie - indirect: the cookie only travels between client and HAProxy - nocache: prevents caching - cookie server1: server-specific cookie value
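What this does per request can be modeled roughly as follows (a simplified sketch of the cookie insert behavior, not HAProxy source):

```python
import random

SERVERS = {"server1": "api-1.internal:8080", "server2": "api-2.internal:8080"}

def route(request_cookies):
    """Return (chosen backend, cookies to set on the response).

    Rough model of `cookie SERVERID insert indirect nocache`: a valid
    SERVERID cookie pins the request; otherwise a server is picked and
    the cookie is inserted into the response.
    """
    sticky = request_cookies.get("SERVERID")
    if sticky in SERVERS:
        return SERVERS[sticky], {}          # pinned, nothing to insert
    chosen = random.choice(list(SERVERS))
    return SERVERS[chosen], {"SERVERID": chosen}

backend, set_cookies = route({})            # first request: cookie assigned
backend2, extra = route(set_cookies)        # follow-up sticks to same server
assert backend == backend2 and extra == {}
```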

Source IP-based Persistence

load_balancer:
  algorithm: ip_hash

HAProxy:

backend backend_api
    balance source

Use case: when cookies are not an option (e.g. native apps).

7. Traffic Splitting & Canary Deployments

Feature: weight-based traffic distribution for A/B testing, canary deployments, and blue/green deployments.

Status: fully supported (since v1.4.0)

HAProxy supports traffic splitting natively via server weights with balance roundrobin.

Canary Deployment (90/10 Split)

Use case: roll out a new version cautiously (10% canary, 90% stable).

routes:
  - path_prefix: /api/v1
    traffic_split:
      enabled: true
      targets:
        - name: stable
          weight: 90
          upstream:
            host: backend-stable
            port: 8080
        - name: canary
          weight: 10
          upstream:
            host: backend-canary
            port: 8080

HAProxy Config:

backend backend_api_v1
    balance roundrobin
    option httpclose
    option forwardfor

    # 90% Stable, 10% Canary
    server server_stable backend-stable:8080 check inter 10s fall 3 rise 2 weight 90
    server server_canary backend-canary:8080 check inter 10s fall 3 rise 2 weight 10

Explanation: - balance roundrobin: round-robin distribution with weighting - weight 90: the stable backend receives 90% of the traffic - weight 10: the canary backend receives 10% of the traffic - check inter 10s: health check every 10 seconds

A/B Testing (50/50 Split)

Use case: test two versions on equal footing.

traffic_split:
  enabled: true
  targets:
    - name: version_a
      weight: 50
      upstream:
        host: api-v2-a
        port: 8080
    - name: version_b
      weight: 50
      upstream:
        host: api-v2-b
        port: 8080

HAProxy Config:

backend backend_ab_testing
    balance roundrobin

    server version_a api-v2-a:8080 check weight 50
    server version_b api-v2-b:8080 check weight 50

Blue/Green Deployment

Use case: instant switch between two environments (100% → 0%).

traffic_split:
  enabled: true
  targets:
    - name: blue
      weight: 0    # currently inactive
      upstream:
        host: api-blue
        port: 8080
    - name: green
      weight: 100  # currently active
      upstream:
        host: api-green
        port: 8080

Deployment strategy:

  1. Initial: blue = 100%, green = 0%
  2. Deploy the new version to the green environment
  3. Test green thoroughly
  4. Switch: blue = 0%, green = 100% (re-generate haproxy.cfg, reload)
  5. Rollback on problems: green = 0%, blue = 100%
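The switch step amounts to flipping the weights and re-generating the config. A minimal helper sketch, assuming weights always describe percentages (`switch` is a hypothetical name, not part of GAL):

```python
def switch(targets, active):
    """Route 100% of traffic to `active`, 0% to everything else."""
    updated = [dict(t, weight=100 if t["name"] == active else 0) for t in targets]
    assert sum(t["weight"] for t in updated) == 100  # sanity check
    return updated

targets = [
    {"name": "blue", "weight": 0, "upstream": {"host": "api-blue", "port": 8080}},
    {"name": "green", "weight": 100, "upstream": {"host": "api-green", "port": 8080}},
]

# Rollback: green -> 0%, blue -> 100% (then re-generate haproxy.cfg and reload)
rolled_back = switch(targets, active="blue")
print([(t["name"], t["weight"]) for t in rolled_back])  # → [('blue', 100), ('green', 0)]
```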

Gradual Rollout (5% → 25% → 50% → 100%)

Use case: step-by-step migration with monitoring.

Phase 1: 5% Canary

targets:
  - {name: stable, weight: 95, upstream: {host: api-stable, port: 8080}}
  - {name: canary, weight: 5, upstream: {host: api-canary, port: 8080}}

Phase 2: 25% Canary (after monitoring)

targets:
  - {name: stable, weight: 75, upstream: {host: api-stable, port: 8080}}
  - {name: canary, weight: 25, upstream: {host: api-canary, port: 8080}}

Phase 3: 50% Canary (confidence building)

targets:
  - {name: stable, weight: 50, upstream: {host: api-stable, port: 8080}}
  - {name: canary, weight: 50, upstream: {host: api-canary, port: 8080}}

Phase 4: 100% Canary (Full Migration)

targets:
  - {name: canary, weight: 100, upstream: {host: api-canary, port: 8080}}

HAProxy Traffic Splitting Features

| Feature | HAProxy Support | Implementation |
|---|---|---|
| Weight-based splitting | ✅ Native | server ... weight N |
| Health checks | ✅ Native | check inter Ns fall N rise N |
| Sticky sessions | ✅ Native | cookie NAME insert |
| Dynamic weights | ⚠️ Runtime API | HAProxy Runtime API (socat/socket) |
| Header-based routing | ⚠️ ACLs | via http-request set-header + ACLs |
| Cookie-based routing | ⚠️ ACLs | via hdr(cookie) ACLs |

Best practices: - Start small: begin with 5-10% canary traffic - Monitor metrics: error rate, latency, and throughput on both targets - Health checks: always enable them for automatic failover - Gradual increase: 5% → 25% → 50% → 100% over several days - Rollback plan: quick reset via config reload (systemctl reload haproxy)

Docker E2E test results:

# Test: 1000 requests with a 90/10 split
Stable Backend:  900 requests (90.0%)
Canary Backend:  100 requests (10.0%)
Failed Requests: 0 (0.0%)

See also: - Traffic Splitting Guide - full documentation - examples/traffic-split-example.yaml - six example scenarios - tests/docker/haproxy/ - Docker Compose E2E tests


HAProxy-specific Details

haproxy.cfg Structure

A generated haproxy.cfg consists of four main sections:

# 1. Global Settings
global
    log         127.0.0.1 local0
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats level admin

# 2. Defaults (applied to all frontends/backends)
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  forwardfor except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    30s
    timeout queue           30s
    timeout connect         5s
    timeout client          30s
    timeout server          30s
    timeout http-keep-alive 10s
    timeout check           5s
    maxconn                 3000

# 3. Frontend (incoming requests)
frontend http_frontend
    bind 0.0.0.0:80

    # ACLs for routing
    acl is_api path_beg /api

    # Rate Limiting
    stick-table type ip size 100k expire 30s store http_req_rate(10s)

    # Backend Routing
    use_backend backend_api if is_api

# 4. Backend (Upstream Services)
backend backend_api
    balance roundrobin
    option httpchk GET /health HTTP/1.1

    server server1 api-1.internal:8080 check
    server server2 api-2.internal:8080 check

ACLs (Access Control Lists)

GAL automatically generates ACLs for routing:

# Path-based
acl is_api_route0 path_beg /api

# Method-based
acl is_api_method method GET POST

# Combined
use_backend backend_api if is_api_route0 is_api_method

Important ACL types: - path_beg: path begins with - path_end: path ends with - path_reg: path matches a regex - hdr(name): header value - method: HTTP method
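The routing logic these ACLs express can be sketched with rough Python equivalents (illustrative only; HAProxy evaluates ACLs natively):

```python
import re

# Rough Python counterparts of the ACL matchers listed above:
def path_beg(path, prefix):
    return path.startswith(prefix)

def path_end(path, suffix):
    return path.endswith(suffix)

def path_reg(path, pattern):
    return re.search(pattern, path) is not None

# Combines "acl is_api_route0 path_beg /api" with
# "acl is_api_method method GET POST", as in the snippet above.
def use_backend(path, method):
    if path_beg(path, "/api") and method in {"GET", "POST"}:
        return "backend_api"
    return None  # no matching ACL -> default backend / 503

print(use_backend("/api/users", "GET"))   # → backend_api
print(use_backend("/health", "GET"))      # → None
```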

Stats Page & Runtime API

HAProxy ships with a built-in stats page and Runtime API:

global
    stats socket /var/lib/haproxy/stats level admin
    stats timeout 30s

# Optional: Web UI Stats Page
listen stats
    bind *:8080
    stats enable
    stats uri /haproxy-stats
    stats refresh 30s
    stats admin if TRUE

Access: - Web UI: http://localhost:8080/haproxy-stats - Runtime API: echo "show info" | socat /var/lib/haproxy/stats -

Logging

By default, HAProxy logs via syslog:

global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice

defaults
    log     global
    option  httplog

Syslog configuration (rsyslog):

# /etc/rsyslog.d/haproxy.conf
$ModLoad imudp
$UDPServerRun 514

local0.* /var/log/haproxy/access.log
local1.* /var/log/haproxy/errors.log

Log format:

Nov 18 12:34:56 localhost haproxy[1234]: 192.168.1.100:54321 [18/Nov/2025:12:34:56.789] http_frontend backend_api/server1 0/0/1/2/3 200 1234 - - ---- 1/1/0/0/0 0/0 "GET /api/users HTTP/1.1"
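The payload after the syslog prefix can be dissected with a regular expression; a sketch covering the most useful fields (the full httplog format carries more, see the HAProxy documentation):

```python
import re

# The part of the sample line after the "haproxy[1234]: " syslog prefix:
LOG_LINE = ('192.168.1.100:54321 [18/Nov/2025:12:34:56.789] http_frontend '
            'backend_api/server1 0/0/1/2/3 200 1234 - - ---- 1/1/0/0/0 0/0 '
            '"GET /api/users HTTP/1.1"')

# client:port [accept date] frontend backend/server timers status bytes ... "request"
PATTERN = re.compile(
    r'(?P<client>\S+):(?P<port>\d+) \[(?P<date>[^\]]+)\] (?P<frontend>\S+) '
    r'(?P<backend>\S+)/(?P<server>\S+) (?P<timers>\S+) (?P<status>\d+) '
    r'(?P<bytes>\d+) .* "(?P<request>[^"]+)"'
)

m = PATTERN.match(LOG_LINE)
print(m.group("client"), m.group("status"), m.group("request"))
# → 192.168.1.100 200 GET /api/users HTTP/1.1
```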


Provider Comparison

Feature Matrix: HAProxy vs. Other Providers

Feature Envoy Kong APISIX Traefik Nginx HAProxy
Load Balancing
Round Robin
Least Connections
IP Hash ✅ (source)
Weighted
Health Checks
Active HTTP
Active TCP
Passive ⚠️
Security
Basic Auth ⚠️ Lua ✅ ACL
JWT Auth ⚠️ ⚠️ ⚠️ Lua
API Key ⚠️ Lua ⚠️ ⚠️ ⚠️ ACL
Headers
CORS ⚠️
Traffic Management
Rate Limiting
Sticky Sessions
Circuit Breaker ⚠️ ⚠️ ⚠️
Performance
RPS (100k+) ⚠️ ⚠️
Memory Usage Medium High Medium Low Low Very Low
CPU Usage Medium High Medium Low Low Very Low

Legend: - ✅ Full: fully supported - ⚠️ Limited: restricted or requires additional modules - ❌ Not Supported: not available

When to Choose HAProxy?

✅ Ideal for: - extreme performance requirements: 100k+ RPS - low resource usage: minimal CPU/RAM - layer 4 & 7 load balancing: TCP + HTTP - enterprise production: highest reliability - complex routing: ACL-based routing - stats & monitoring: built-in stats page

⚠️ Limitations: - no native JWT: requires Lua - limited CORS: headers only - static configuration: reload required - fewer plugins: no plugin ecosystem like Kong's

Alternatives: - Nginx: simpler, but fewer load balancing features - Traefik: dynamic configuration, good Docker integration - Envoy: modern service mesh integration - Kong/APISIX: full API gateways with plugins