
Traefik Provider Guide

Comprehensive guide to the Traefik provider in GAL (Gateway Abstraction Layer)

Table of Contents

  1. Overview
  2. Quick Start
  3. Installation and Setup
  4. Configuration Options
  5. Feature Implementations
  6. Provider Comparison

Further documentation:

  • Feature Implementations - details on middlewares, auth, rate limiting, circuit breaker
  • Best Practices & Troubleshooting - best practices, troubleshooting


Overview

Traefik is a modern HTTP reverse proxy and load balancer built specifically for cloud-native environments. It offers automatic service discovery, Let's Encrypt integration, and a user-friendly dashboard.

Why Traefik?

  • 🔄 Auto-discovery: automatic detection of services (Docker, Kubernetes, Consul)
  • 🔒 Let's Encrypt: native HTTPS with automatic certificate renewal
  • 📊 Dashboard: real-time monitoring and configuration visualization
  • ☁️ Cloud-native: Docker, Kubernetes, Swarm, Mesos, Consul, etcd, ZooKeeper
  • ⚡ Zero-downtime: hot reload without dropping connections
  • 🎯 Middleware system: flexible request/response manipulation

Feature Matrix

Feature               | Traefik Support                    | GAL Implementation
Load Balancing        | ✅ Full                            | upstream.load_balancer
Active Health Checks  | ✅ Native                          | upstream.health_check.active
Passive Health Checks | ⚠️ Limited                         | upstream.health_check.passive
Rate Limiting         | ✅ rateLimit middleware            | route.rate_limit
Authentication        | ✅ Basic; JWT (Traefik Enterprise) | route.authentication
CORS                  | ✅ headers middleware              | route.cors
Timeout & Retry       | ✅ serversTransport, retry         | route.timeout, route.retry
Circuit Breaker       | ✅ circuitBreaker middleware       | upstream.circuit_breaker
WebSocket             | ✅ Native                          | route.websocket
Header Manipulation   | ✅ headers middleware              | route.headers
Body Transformation   | ❌ Not native                      | route.body_transformation

Legend: ✅ = fully supported | ⚠️ = partially supported | ❌ = not supported


Quick Start

Example 1: Basic Load Balancing

services:
  - name: api_service
    protocol: http
    upstream:
      targets:
        - host: api-1.internal
          port: 8080
        - host: api-2.internal
          port: 8080
      load_balancer:
        algorithm: round_robin
    routes:
      - path_prefix: /api

Generated Traefik configuration:

http:
  routers:
    api_service_router_0:
      rule: "PathPrefix(`/api`)"
      service: api_service
  services:
    api_service:
      loadBalancer:
        servers:
          - url: "http://api-1.internal:8080"
          - url: "http://api-2.internal:8080"

Example 2: Basic Auth + Rate Limiting

services:
  - name: secure_api
    protocol: http
    upstream:
      host: api.internal
      port: 8080
    routes:
      - path_prefix: /api
        authentication:
          enabled: true
          type: basic
          basic_auth:
            users:
              admin: password123
        rate_limit:
          enabled: true
          requests_per_second: 100

Generated Traefik configuration:

http:
  routers:
    secure_api_router_0:
      rule: "PathPrefix(`/api`)"
      service: secure_api
      middlewares:
        - secure_api_router_0_auth
        - secure_api_router_0_ratelimit

  middlewares:
    secure_api_router_0_auth:
      basicAuth:
        users:
          - "admin:$apr1$..."  # Hashed password

    secure_api_router_0_ratelimit:
      rateLimit:
        average: 100
        burst: 200

  services:
    secure_api:
      loadBalancer:
        servers:
          - url: "http://api.internal:8080"

Example 3: Complete Production Setup

services:
  - name: production_api
    protocol: http
    upstream:
      targets:
        - host: api-1.internal
          port: 8080
        - host: api-2.internal
          port: 8080
      load_balancer:
        algorithm: round_robin
      health_check:
        active:
          enabled: true
          path: /health
          interval: 5s
      circuit_breaker:
        enabled: true
        max_failures: 5
    routes:
      - path_prefix: /api
        rate_limit:
          enabled: true
          requests_per_second: 100
        timeout:
          connect: 5s
          read: 30s
        retry:
          enabled: true
          attempts: 3
        cors:
          enabled: true
          allowed_origins: ["https://app.example.com"]

Installation and Setup

Docker (Recommended)

# Start Traefik with Docker
docker run -d \
  --name traefik \
  -p 80:80 \
  -p 443:443 \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(pwd)/traefik.yml:/etc/traefik/traefik.yml \
  traefik:latest

Docker Compose

version: "3"
services:
  traefik:
    image: traefik:latest
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./traefik.yml:/etc/traefik/traefik.yml
      - ./dynamic-config.yml:/etc/traefik/dynamic-config.yml
    command:
      - "--api.dashboard=true"
      - "--providers.file.filename=/etc/traefik/dynamic-config.yml"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"

Kubernetes (Helm)

# Add the Traefik Helm repository
helm repo add traefik https://traefik.github.io/charts
helm repo update

# Install the Traefik ingress controller
helm install traefik traefik/traefik \
  --namespace traefik \
  --create-namespace \
  --set dashboard.enabled=true \
  --set service.type=LoadBalancer

Generate the GAL Configuration

# Generate the Traefik configuration
gal generate --config config.yaml --provider traefik --output traefik-dynamic.yml

# Or via Docker
docker run --rm -v $(pwd):/app ghcr.io/pt9912/x-gal:latest \
  generate --config config.yaml --provider traefik --output traefik-dynamic.yml

Apply the Configuration

# Static Configuration (traefik.yml)
cat > traefik.yml <<EOF
api:
  dashboard: true

providers:
  file:
    filename: /etc/traefik/dynamic-config.yml
    watch: true

entrypoints:
  web:
    address: ":80"
  websecure:
    address: ":443"
EOF

# Dynamic configuration (generated by GAL)
cp traefik-dynamic.yml /etc/traefik/dynamic-config.yml

# Start/reload Traefik
docker restart traefik

# Open the dashboard: http://localhost:8080

Configuration Options

Global Config

global:
  host: 0.0.0.0
  port: 80
  log_level: info

Traefik Mapping (traefik.yml):

entryPoints:
  web:
    address: ":80"

log:
  level: INFO

Upstream Config

services:
  - name: my_service
    upstream:
      targets:
        - host: backend-1.internal
          port: 8080
        - host: backend-2.internal
          port: 8080
      load_balancer:
        algorithm: round_robin
      health_check:
        active:
          enabled: true
          path: /health

Traefik Mapping:

http:
  services:
    my_service:
      loadBalancer:
        servers:
          - url: "http://backend-1.internal:8080"
          - url: "http://backend-2.internal:8080"
        healthCheck:
          path: /health
          interval: 5s

Route Config

routes:
  - path_prefix: /api
    rate_limit:
      enabled: true
      requests_per_second: 100

Traefik Mapping:

http:
  routers:
    my_service_router_0:
      rule: "PathPrefix(`/api`)"
      service: my_service
      middlewares:
        - my_service_router_0_ratelimit

  middlewares:
    my_service_router_0_ratelimit:
      rateLimit:
        average: 100
        burst: 200

Feature Implementations

1. Load Balancing

Traefik maps GAL's load-balancing algorithms as follows (session persistence is configured via loadBalancer.sticky):

GAL Algorithm | Traefik Implementation | Description
round_robin   | Default (no config)    | Even distribution
least_conn    | ⚠️ Not available       | Traefik falls back to its round-robin default
ip_hash       | sticky.cookie          | Session persistence via cookie
weighted      | servers.weight         | Weighted distribution

Implementation (gal/providers/traefik.py:230-261):

# Services
output.append("  services:")
for service in config.services:
    output.append(f"    {service.name}:")
    output.append("      loadBalancer:")
    output.append("        servers:")

    # Targets
    if service.upstream:
        if service.upstream.targets:
            for target in service.upstream.targets:
                weight = target.weight if target.weight else 1
                url = f"http://{target.host}:{target.port}"
                output.append(f"          - url: \"{url}\"")
                if weight != 1:
                    output.append(f"            weight: {weight}")

Sticky Sessions (gal/providers/traefik.py:425):

# Sticky sessions (IP hash)
if service.upstream and service.upstream.load_balancer:
    if service.upstream.load_balancer.algorithm == "ip_hash":
        output.append("        sticky:")
        output.append("          cookie:")
        output.append("            name: lb")

Example:

upstream:
  targets:
    - host: api-1.internal
      port: 8080
      weight: 3
    - host: api-2.internal
      port: 8080
      weight: 1
  load_balancer:
    algorithm: weighted
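
The weighted configuration above distributes requests proportionally to the target weights. A minimal Python sketch of how a 3:1 weighting plays out over eight requests (illustrative only; Traefik internally uses a smooth weighted round-robin rather than this naive pool expansion):

```python
from itertools import cycle

# Hypothetical targets matching the example above: weight 3 vs. weight 1.
targets = [("api-1.internal", 3), ("api-2.internal", 1)]

# Expand each target by its weight, then cycle through the pool.
pool = [host for host, weight in targets for _ in range(weight)]
picker = cycle(pool)

picks = [next(picker) for _ in range(8)]
counts = {host: picks.count(host) for host, _ in targets}
print(counts)  # api-1 receives 3x the traffic of api-2
```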

2. Health Checks

Traefik offers active health checks (passive checks only in a limited form, via the circuit breaker).

Active Health Checks (gal/providers/traefik.py:262-277):

# Health checks
if service.upstream and service.upstream.health_check:
    hc = service.upstream.health_check
    if hc.active and hc.active.enabled:
        output.append("        healthCheck:")
        output.append(f"          path: {hc.active.path}")
        output.append(
            f"          interval: {hc.active.interval}"
        )
        output.append(
            f"          timeout: {hc.active.timeout}"
        )

Passive health checks: Traefik has no native passive health checks. Use the circuit breaker as an alternative.

Example:

upstream:
  health_check:
    active:
      enabled: true
      path: /health
      interval: 5s
      timeout: 3s
      healthy_threshold: 2
      unhealthy_threshold: 3
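
Conceptually, the fields above describe a probe-and-threshold loop: hit the health path on each interval and flip a target's state only after enough consecutive results. A hedged Python sketch of that behavior; `probe` and `update` are illustrative helpers, not part of GAL or Traefik:

```python
import urllib.request
import urllib.error

def probe(url: str, timeout: float = 3.0) -> bool:
    """True if the target answers with a 2xx/3xx status within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def update(state, ok, healthy_threshold=2, unhealthy_threshold=3):
    """Advance (healthy, successes, failures) by one probe result."""
    healthy, successes, failures = state
    if ok:
        successes, failures = successes + 1, 0
        healthy = healthy or successes >= healthy_threshold
    else:
        successes, failures = 0, failures + 1
        healthy = healthy and failures < unhealthy_threshold
    return healthy, successes, failures

# Three consecutive failures mark the target unhealthy;
# two consecutive successes bring it back.
state = (True, 0, 0)
for ok in [False, False, False, True, True]:
    state = update(state, ok)
print(state[0])  # True
```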

3. Rate Limiting

Traefik uses the rateLimit middleware.

Implementation (gal/providers/traefik.py:347-359):

# Rate limiting middlewares (route-level)
for service in config.services:
    for i, route in enumerate(service.routes):
        if route.rate_limit and route.rate_limit.enabled:
            router_name = f"{service.name}_router_{i}"
            rl = route.rate_limit
            output.append(f"    {router_name}_ratelimit:")
            output.append("      rateLimit:")
            output.append(f"        average: {rl.requests_per_second}")
            burst = (
                rl.burst if rl.burst else rl.requests_per_second * 2
            )
            output.append(f"        burst: {burst}")

Example:

routes:
  - path_prefix: /api
    rate_limit:
      enabled: true
      requests_per_second: 100
      burst: 200

Generated middleware:

middlewares:
  api_service_router_0_ratelimit:
    rateLimit:
      average: 100  # requests per second
      burst: 200    # burst capacity
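
average and burst correspond to a token bucket: tokens refill at `average` per second and the bucket holds at most `burst` tokens. A small Python sketch of that model (an illustration of the concept, not Traefik's actual implementation):

```python
# Assumed values from the generated middleware above.
average, burst = 100, 200

def allow(tokens: float, elapsed: float) -> tuple[bool, float]:
    """Refill for `elapsed` seconds, then try to spend one token."""
    tokens = min(burst, tokens + elapsed * average)
    if tokens >= 1:
        return True, tokens - 1
    return False, tokens

# An instantaneous burst of 200 requests passes; the 201st is rejected.
tokens = float(burst)
results = []
for _ in range(201):
    ok, tokens = allow(tokens, elapsed=0.0)
    results.append(ok)
print(results.count(True))  # 200
print(results[-1])          # False
```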

4. Authentication

Traefik supports Basic Auth natively; JWT is only available in Traefik Enterprise.

Basic Authentication (gal/providers/traefik.py:361-377):

# Basic auth middlewares
for service in config.services:
    for i, route in enumerate(service.routes):
        if route.authentication and route.authentication.enabled:
            auth = route.authentication
            if auth.type == "basic":
                router_name = f"{service.name}_router_{i}"
                output.append(f"    {router_name}_auth:")
                output.append("      basicAuth:")
                output.append("        users:")
                if auth.basic_auth and auth.basic_auth.users:
                    for username, password in auth.basic_auth.users.items():
                        # htpasswd format required; plaintext must be hashed
                        output.append(f'          - "{username}:$apr1$..."')

JWT Authentication: Traefik Open Source has no native JWT support. Use Traefik Enterprise or the ForwardAuth middleware with an external service.

Example:

routes:
  - path_prefix: /api
    authentication:
      enabled: true
      type: basic
      basic_auth:
        users:
          admin: password123
          user: pass456
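
Since Traefik's basicAuth expects htpasswd-encoded entries rather than the plaintext passwords from the GAL config, the values have to be hashed first. A sketch using htpasswd's `{SHA}` scheme (the format `htpasswd -s` produces); bcrypt or MD5/apr1 entries would require a third-party library:

```python
import base64
import hashlib

def htpasswd_sha(username: str, password: str) -> str:
    """Build a user entry in htpasswd's {SHA} format: base64 of SHA-1."""
    digest = hashlib.sha1(password.encode()).digest()
    return f"{username}:{{SHA}}{base64.b64encode(digest).decode()}"

print(htpasswd_sha("admin", "password123"))
```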

5. CORS

Traefik uses the headers middleware for CORS.

Implementation (gal/providers/traefik.py:379-409):

# CORS middlewares
for service in config.services:
    for i, route in enumerate(service.routes):
        if route.cors and route.cors.enabled:
            router_name = f"{service.name}_router_{i}"
            cors = route.cors
            output.append(f"    {router_name}_cors:")
            output.append("      headers:")
            output.append("        accessControlAllowMethods:")
            for method in cors.allowed_methods or ["*"]:
                output.append(f"          - {method}")
            output.append("        accessControlAllowOriginList:")
            for origin in cors.allowed_origins:
                output.append(f"          - {origin}")
            if cors.allowed_headers:
                output.append("        accessControlAllowHeaders:")
                for header in cors.allowed_headers:
                    output.append(f"          - {header}")
            if cors.allow_credentials:
                output.append(
                    "        accessControlAllowCredentials: true"
                )
            if cors.max_age:
                output.append(
                    f"        accessControlMaxAge: {cors.max_age}"
                )

Example:

routes:
  - path_prefix: /api
    cors:
      enabled: true
      allowed_origins:
        - "https://app.example.com"
        - "https://admin.example.com"
      allowed_methods: ["GET", "POST", "PUT", "DELETE"]
      allowed_headers: ["Content-Type", "Authorization"]
      allow_credentials: true
      max_age: 86400

6. Timeout & Retry

Timeout Configuration (gal/providers/traefik.py:489-502):

# Timeout (serversTransport)
has_timeout = any(
    route.timeout for service in config.services for route in service.routes
)
if has_timeout:
    output.append("  serversTransports:")
    output.append("    default:")
    for service in config.services:
        for route in service.routes:
            if route.timeout:
                timeout = route.timeout
                output.append("      forwardingTimeouts:")
                output.append(f"        dialTimeout: {timeout.connect}")
                output.append(
                    f"        responseHeaderTimeout: {timeout.read}"
                )
                output.append(f"        idleConnTimeout: {timeout.idle}")
                break

Retry Configuration (gal/providers/traefik.py:411-422):

# Retry middlewares (route-level)
for service in config.services:
    for i, route in enumerate(service.routes):
        if route.retry and route.retry.enabled:
            router_name = f"{service.name}_router_{i}"
            retry = route.retry
            output.append(f"    {router_name}_retry:")
            output.append("      retry:")
            output.append(f"        attempts: {retry.attempts}")
            output.append(
                f"        initialInterval: {retry.base_interval}"
            )

Example:

routes:
  - path_prefix: /api
    timeout:
      connect: 5s
      read: 30s
      idle: 300s
    retry:
      enabled: true
      attempts: 3
      base_interval: 100ms
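
The retry settings above imply a wait schedule derived from base_interval. A sketch of the waits for attempts=3, assuming exponential doubling between attempts (the exact backoff curve is an implementation detail of Traefik's retry middleware, so treat this as an approximation):

```python
def backoff_intervals(attempts: int, base_ms: int) -> list[int]:
    """Waits between tries; `attempts` counts total tries, so there are
    attempts - 1 waits, each doubling the previous one."""
    return [base_ms * 2 ** i for i in range(attempts - 1)]

print(backoff_intervals(3, 100))  # [100, 200]
```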

7. Circuit Breaker

Traefik uses the circuitBreaker middleware.

Implementation (gal/providers/traefik.py:424-445):

# Circuit breaker middlewares
for service in config.services:
    if service.upstream and service.upstream.circuit_breaker:
        cb = service.upstream.circuit_breaker
        if cb.enabled:
            output.append(f"    {service.name}_circuitbreaker:")
            output.append("      circuitBreaker:")
            # Traefik uses an expression syntax,
            # e.g. "NetworkErrorRatio() > 0.30" or "ResponseCodeRatio(500, 600, 0, 600) > 0.25"
            failure_ratio = (
                cb.max_failures / 100
            )  # interpret max_failures as a failure percentage
            output.append(
                f'        expression: "NetworkErrorRatio() > {failure_ratio}"'
            )

Example:

upstream:
  circuit_breaker:
    enabled: true
    max_failures: 5  # 5% failure rate
    timeout: 30s

Generated middleware:

middlewares:
  api_service_circuitbreaker:
    circuitBreaker:
      expression: "NetworkErrorRatio() > 0.05"
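
The percent-to-ratio conversion behind that expression can be reproduced in a few lines, mirroring the generator code shown above:

```python
def circuit_breaker_expression(max_failures: int) -> str:
    """max_failures is read as a failure percentage and emitted as a
    Traefik NetworkErrorRatio() expression."""
    ratio = max_failures / 100  # percent -> ratio
    return f"NetworkErrorRatio() > {ratio}"

print(circuit_breaker_expression(5))  # NetworkErrorRatio() > 0.05
```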

8. WebSocket

Traefik supports WebSocket natively, with no additional configuration required.

Implementation (gal/providers/traefik.py:425):

# WebSocket support (native in Traefik)
if route.websocket and route.websocket.enabled:
    output.append("        passHostHeader: true")
    output.append("        responseForwarding:")
    output.append("          flushInterval: 100ms")

Example:

routes:
  - path_prefix: /ws
    websocket:
      enabled: true
      idle_timeout: 300s

9. Header Manipulation

Traefik uses the headers middleware for request/response header manipulation.

Request Headers:

middlewares:
  api_service_router_0_headers:
    headers:
      customRequestHeaders:
        X-Request-ID: "{{uuid}}"
        X-Gateway: "GAL-Traefik"

Response Headers:

middlewares:
  api_service_router_0_headers:
    headers:
      customResponseHeaders:
        X-Server: "Traefik"
        X-Response-Time: "{{timestamp}}"

Example:

routes:
  - path_prefix: /api
    headers:
      request:
        add:
          X-Request-ID: "{{uuid}}"
          X-Gateway: "GAL-Traefik"
        remove:
          - X-Internal-Header
      response:
        add:
          X-Server: "Traefik"

10. Body Transformation

⚠️ Limitation: Traefik Open Source does not support native body transformation.

Alternatives:

  1. ForwardAuth middleware with an external service:

    middlewares:
      body-transformer:
        forwardAuth:
          address: "http://transformer-service:8080/transform"
    

  2. Custom Traefik plugin (Go development required):

    // traefik-plugin-body-transformer
    package traefik_plugin_body_transformer
    
    func (t *BodyTransformer) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
        // Body transformation logic
    }
    

  3. Alternative provider: Envoy, Kong, APISIX, Nginx, and HAProxy support body transformation natively.

GAL behavior (gal/providers/traefik.py:151-160):

# Body Transformation warning
if route.body_transformation and route.body_transformation.enabled:
    logger.warning(
        f"Body transformation for route '{route.path_prefix}' "
        "is not natively supported by Traefik. Consider using:\n"
        "  1. ForwardAuth middleware with external transformation service\n"
        "  2. Custom Traefik plugin (requires Go development)\n"
        "  3. Alternative provider: Envoy, Kong, APISIX, Nginx, HAProxy"
    )

11. Traffic Splitting & Canary Deployments

Feature: weight-based traffic distribution for A/B testing, canary deployments, and blue/green deployments.

Status: fully supported (since v1.4.0)

Traefik supports traffic splitting natively via weighted services.

Canary Deployment (90/10 Split)

Use case: roll out a new version cautiously (10% canary, 90% stable).

routes:
  - path_prefix: /api/v1
    traffic_split:
      enabled: true
      targets:
        - name: stable
          weight: 90
          upstream:
            host: backend-stable
            port: 8080
        - name: canary
          weight: 10
          upstream:
            host: backend-canary
            port: 8080

Traefik Config (traefik.yml):

http:
  routers:
    canary_deployment_api_route0:
      rule: "PathPrefix(`/api/v1`)"
      service: canary_deployment_api_route0_service
      entryPoints:
        - web

  services:
    # Weighted Service: 90% stable, 10% canary
    canary_deployment_api_route0_service:
      weighted:
        services:
          - name: canary_deployment_api_stable_service
            weight: 90
          - name: canary_deployment_api_canary_service
            weight: 10

    # Stable Backend
    canary_deployment_api_stable_service:
      loadBalancer:
        servers:
          - url: "http://backend-stable:8080"

    # Canary Backend
    canary_deployment_api_canary_service:
      loadBalancer:
        servers:
          - url: "http://backend-canary:8080"

Explanation:

  • weighted.services: weighted service with multiple targets
  • weight: 90: the stable backend receives 90% of the traffic
  • weight: 10: the canary backend receives 10% of the traffic
  • loadBalancer.servers: backend URLs
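
The resulting 90/10 split can be sanity-checked with a quick simulation (a statistical sketch using random weighted choice, not Traefik's exact weighted round-robin):

```python
import random

random.seed(42)  # deterministic for reproducibility
weights = {"stable": 90, "canary": 10}
names, w = list(weights), list(weights.values())

hits = {"stable": 0, "canary": 0}
for _ in range(10_000):
    hits[random.choices(names, weights=w)[0]] += 1

# Over many requests the observed shares converge on the weights.
print(hits["stable"] / 100, hits["canary"] / 100)  # ≈ 90.0 / 10.0 percent
```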

A/B Testing (50/50 Split)

Use case: test two versions on equal footing.

traffic_split:
  enabled: true
  targets:
    - name: version_a
      weight: 50
      upstream:
        host: api-v2-a
        port: 8080
    - name: version_b
      weight: 50
      upstream:
        host: api-v2-b
        port: 8080

Traefik Config:

http:
  services:
    ab_testing_service:
      weighted:
        services:
          - name: version_a_service
            weight: 50
          - name: version_b_service
            weight: 50

    version_a_service:
      loadBalancer:
        servers:
          - url: "http://api-v2-a:8080"

    version_b_service:
      loadBalancer:
        servers:
          - url: "http://api-v2-b:8080"

Blue/Green Deployment

Use case: instant switch between two environments (100% → 0%).

traffic_split:
  enabled: true
  targets:
    - name: blue
      weight: 0    # currently inactive
      upstream:
        host: api-blue
        port: 8080
    - name: green
      weight: 100  # currently active
      upstream:
        host: api-green
        port: 8080

Deployment strategy:

  1. Initial: Blue = 100%, Green = 0%
  2. Deploy the new version to the Green environment
  3. Test Green thoroughly
  4. Switch: Blue = 0%, Green = 100% (re-generate traefik.yml, hot reload)
  5. Roll back on problems: Green = 0%, Blue = 100%

Gradual Rollout (5% → 25% → 50% → 100%)

Use case: stepwise migration with monitoring.

Phase 1: 5% Canary

targets:
  - {name: stable, weight: 95, upstream: {host: api-stable, port: 8080}}
  - {name: canary, weight: 5, upstream: {host: api-canary, port: 8080}}

Phase 2: 25% Canary (after monitoring)

targets:
  - {name: stable, weight: 75, upstream: {host: api-stable, port: 8080}}
  - {name: canary, weight: 25, upstream: {host: api-canary, port: 8080}}

Phase 3: 50% Canary (building confidence)

targets:
  - {name: stable, weight: 50, upstream: {host: api-stable, port: 8080}}
  - {name: canary, weight: 50, upstream: {host: api-canary, port: 8080}}

Phase 4: 100% Canary (Full Migration)

targets:
  - {name: canary, weight: 100, upstream: {host: api-canary, port: 8080}}
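
One easy mistake when editing phases by hand is ending up with weights that no longer sum to 100. A small sanity check for the phase definitions above:

```python
# Weight tables transcribed from the four rollout phases above.
phases = [
    {"stable": 95, "canary": 5},
    {"stable": 75, "canary": 25},
    {"stable": 50, "canary": 50},
    {"canary": 100},
]

# Every phase must cover 100% of the traffic.
for i, phase in enumerate(phases, 1):
    assert sum(phase.values()) == 100, f"phase {i} weights must sum to 100"
print("all phases valid")
```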

Traefik Traffic Splitting Features

Feature                 | Traefik Support       | Implementation
Weight-based Splitting  | ✅ Native             | weighted.services[].weight
Health Checks           | ✅ Native             | loadBalancer.healthCheck
Sticky Sessions         | ✅ Native             | loadBalancer.sticky.cookie
Dynamic Reconfiguration | ✅ Native             | file provider hot reload
Header-based Routing    | ⚠️ headers middleware | via headers.customRequestHeaders + routing rules
Cookie-based Routing    | ⚠️ Router rules       | via HeadersRegexp rule matching
Mirroring               | ✅ Native             | mirroring.service for traffic shadowing

Best practices:

  • Start small: begin with 5-10% canary traffic
  • Monitor metrics: error rate, latency, throughput via the Traefik dashboard/Prometheus
  • Health checks: always enable them for automatic failover
  • Gradual increase: 5% → 25% → 50% → 100% over several days
  • Hot reload: Traefik reloads the dynamic configuration automatically (no downtime)
  • Rollback plan: revert quickly via a config update (< 1 second)

Docker E2E test results:

# Test: 1000 requests with a 90/10 split (✅ passed)
Stable backend:  900 requests (90.0%)
Canary backend:  100 requests (10.0%)
Failed requests: 0 requests (0.0%)

See also:

  • Traffic Splitting Guide - full documentation
  • examples/traffic-split-example.yaml - 6 example scenarios
  • tests/docker/traefik/ - Docker Compose E2E tests


Provider Comparison

Feature        | Traefik    | Envoy      | Kong       | APISIX     | Nginx      | HAProxy
Ease of Use    | ⭐⭐⭐⭐⭐ | ⭐⭐       | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐   | ⭐⭐⭐     | ⭐⭐⭐
Auto-Discovery | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐     | ⭐⭐⭐⭐   | ⚠️         | ⚠️
Let's Encrypt  | ⭐⭐⭐⭐⭐ | ⚠️         | ⭐⭐⭐     | ⭐⭐⭐     | ⭐⭐⭐     | ⚠️
Dashboard      | ⭐⭐⭐⭐⭐ | ⚠️         | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⚠️         | ⭐⭐⭐
Performance    | ⭐⭐⭐⭐   | ⭐⭐⭐⭐   | ⭐⭐⭐     | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐
Plugin System  | ⭐⭐⭐     | ⭐⭐⭐     | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⚠️         | ⚠️

Traefik vs Envoy

  • Traefik: simpler, better auto-discovery, Let's Encrypt integration
  • Envoy: more features, better observability, service mesh integration

Traefik vs Kong

  • Traefik: better Docker/Kubernetes integration, Let's Encrypt, free
  • Kong: more plugins, stronger auth features, more mature ecosystem

Traefik vs APISIX

  • Traefik: simpler configuration, better dashboard, Let's Encrypt
  • APISIX: higher performance, more plugins, Lua programmability

Traefik vs Nginx/HAProxy

  • Traefik: dynamic configuration, auto-discovery, dashboard, Let's Encrypt
  • Nginx/HAProxy: higher performance, lower overhead, more established