Envoy Access Logs and Request Correlation

Introduction

This guide explains how to configure Envoy access logs in Istio-based Kuadrant deployments to enable request correlation using x-request-id. Access logs provide detailed information about each request processed by the gateway, including timing, response codes, and request identifiers that can be correlated with traces and application logs.

Prerequisites

  • A Kubernetes cluster with Istio and Kuadrant installed
  • Istio configured as your Gateway API provider

Understanding Request Correlation

Request correlation allows you to track a single request across multiple services and components using a unique identifier. In Envoy and Istio, this is typically done using the x-request-id header, which is:

  • Automatically generated by Envoy for each incoming request (if not already present)
  • Propagated to upstream services
  • Included in access logs for correlation with application logs and traces
  • Used to correlate requests across gateways, Authorino, Limitador, and backend services
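The generate-or-preserve behaviour can be sketched in a few lines. This is an illustrative helper mirroring the semantics described above, not Envoy's actual implementation:

```python
import uuid

def ensure_request_id(headers):
    """Reuse an incoming x-request-id, or generate a fresh UUID when the
    header is absent -- mirroring Envoy's default behaviour."""
    if "x-request-id" not in headers:
        headers["x-request-id"] = str(uuid.uuid4())
    return headers

# A request arriving without the header gets a fresh ID;
# one that already carries an ID keeps it unchanged.
print(ensure_request_id({})["x-request-id"])
print(ensure_request_id({"x-request-id": "a1b2c3d4"})["x-request-id"])
```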

Configuring Access Logs with the Telemetry API

Basic Configuration

Enable access logs using Istio's Telemetry API. This is the recommended approach for configuring access logs in Istio.

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: access-logs
  namespace: istio-system  # Or gateway-system, depending on your setup
spec:
  accessLogging:
    - providers:
        - name: envoy

This enables access logs with Envoy's default log format.

Structured Logging (JSON Format)

For better parsing and integration with log aggregation systems (Loki, Elasticsearch, etc.), enable JSON-formatted access logs:

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: access-logs-json
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
      filter:
        expression: "response.code >= 400"  # Optional: only log errors

The Telemetry resource turns access logging on; the encoding itself is controlled separately. To enable JSON encoding, set accessLogEncoding: JSON in the Istio mesh configuration.

Check which Istio installation method you're using:

# Sail Operator (modern/recommended)
kubectl get istio -A

# Classic IstioOperator (legacy)
kubectl get istiooperator -A

If using Istio Sail Operator (recommended):

apiVersion: sailoperator.io/v1
kind: Istio
metadata:
  name: default
spec:
  namespace: istio-system
  values:
    meshConfig:
      accessLogFile: /dev/stdout
      accessLogEncoding: JSON
      accessLogFormat: |
        {
          "start_time": "%START_TIME%",
          "method": "%REQ(:METHOD)%",
          "path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
          "protocol": "%PROTOCOL%",
          "response_code": "%RESPONSE_CODE%",
          "response_flags": "%RESPONSE_FLAGS%",
          "bytes_received": "%BYTES_RECEIVED%",
          "bytes_sent": "%BYTES_SENT%",
          "duration": "%DURATION%",
          "upstream_service_time": "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%",
          "x_forwarded_for": "%REQ(X-FORWARDED-FOR)%",
          "user_agent": "%REQ(USER-AGENT)%",
          "request_id": "%REQ(X-REQUEST-ID)%",
          "authority": "%REQ(:AUTHORITY)%",
          "upstream_host": "%UPSTREAM_HOST%",
          "upstream_cluster": "%UPSTREAM_CLUSTER%",
          "route_name": "%ROUTE_NAME%"
        }

If using classic IstioOperator (legacy installations):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio
spec:
  meshConfig:
    accessLogFile: /dev/stdout
    accessLogEncoding: JSON
    accessLogFormat: |
        {
          "start_time": "%START_TIME%",
          "method": "%REQ(:METHOD)%",
          "path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
          "protocol": "%PROTOCOL%",
          "response_code": "%RESPONSE_CODE%",
          "response_flags": "%RESPONSE_FLAGS%",
          "bytes_received": "%BYTES_RECEIVED%",
          "bytes_sent": "%BYTES_SENT%",
          "duration": "%DURATION%",
          "upstream_service_time": "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%",
          "x_forwarded_for": "%REQ(X-FORWARDED-FOR)%",
          "user_agent": "%REQ(USER-AGENT)%",
          "request_id": "%REQ(X-REQUEST-ID)%",
          "authority": "%REQ(:AUTHORITY)%",
          "upstream_host": "%UPSTREAM_HOST%",
          "upstream_cluster": "%UPSTREAM_CLUSTER%",
          "route_name": "%ROUTE_NAME%"
        }

Key Fields for Request Correlation

The most important fields for request correlation are:

  • request_id (%REQ(X-REQUEST-ID)%): The unique request identifier generated by Envoy
  • start_time (%START_TIME%): Request start time for temporal correlation
  • route_name (%ROUTE_NAME%): The route that matched the request (useful for policy debugging)
  • upstream_cluster (%UPSTREAM_CLUSTER%): The upstream service that handled the request

Correlating with Kuadrant Components

Request Flow and Correlation Points

A typical request flows through these components:

Client → Envoy Gateway → [Wasm Shim] → Authorino → Limitador → Backend Service
         ↓                                    ↓            ↓
    Access Logs                          Auth Logs    Rate Limit Logs

Configuring Kuadrant for Request Correlation

To enable request correlation across Kuadrant components, configure the httpHeaderIdentifier in the Kuadrant CR:

apiVersion: kuadrant.io/v1beta1
kind: Kuadrant
metadata:
  name: kuadrant
  namespace: kuadrant-system
spec:
  observability:
    dataPlane:
      httpHeaderIdentifier: x-request-id
      defaultLevels:
        - debug: "true"  # Optional: Controls OTEL trace filtering for the wasm-shim
    tracing:
      defaultEndpoint: rpc://jaeger.jaeger.svc.cluster.local:4317
      insecure: true

This configuration:

  • Tells wasm-shim to include the x-request-id header value in trace spans
  • Enables request correlation across Envoy access logs, Authorino, Limitador, and wasm-shim traces
  • Optionally controls OpenTelemetry trace filtering via defaultLevels

Important - Understanding Kuadrant Observability vs Envoy Access Logs:

  • Envoy Access Logs (configured via Istio Telemetry API above): HTTP request/response logs visible via kubectl logs on gateway pods
  • Kuadrant dataPlane.defaultLevels: Controls trace span filtering sent to your tracing collector (Jaeger/Tempo), not gateway pod logs
  • Kuadrant dataPlane.httpHeaderIdentifier: Adds the specified header value to trace spans, enabling correlation with access logs

For detailed information on wasm-shim observability configuration and how to enable debug logging in gateway pods, see the Tracing documentation.

Example Log Correlation

With proper configuration, you can correlate logs across all components using the x-request-id:

Envoy Access Log (JSON):

{
  "start_time": "2026-01-23T15:45:12.345Z",
  "method": "GET",
  "path": "/api/users",
  "response_code": 200,
  "request_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "route_name": "toystore-route"
}

Authorino Log:

{"level":"info","ts":"2026-01-23T15:45:12.350Z","request_id":"a1b2c3d4-e5f6-7890-abcd-ef1234567890","msg":"auth check succeeded","identity":"alice"}

Limitador Log:

Request received: ... "x-request-id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890" ...
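Given lines from the different components, grouping on the shared x-request-id ties them together. A minimal sketch using a regex over mixed-format lines (the sample lines are abbreviated from the examples above):

```python
import re
from collections import defaultdict

# Sample lines from Envoy (JSON), Authorino (JSON), and Limitador (text).
rid = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
lines = [
    '{"request_id": "%s", "response_code": 200}' % rid,
    '{"level":"info","request_id":"%s","msg":"auth check succeeded"}' % rid,
    'Request received: ... "x-request-id": "%s" ...' % rid,
]

# Match either a JSON "request_id" field or a literal x-request-id header.
pattern = re.compile(r'"(?:request_id|x-request-id)"\s*:\s*"([^"]+)"')

by_id = defaultdict(list)
for line in lines:
    match = pattern.search(line)
    if match:
        by_id[match.group(1)].append(line)

print(len(by_id[rid]))  # all three lines share the same request ID
```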

Integration with Tracing

When both access logging and tracing are enabled, you can correlate traces with access logs:

  1. Access logs show the x-request-id
  2. Traces include the x-request-id as a span attribute
  3. Use Grafana to jump from logs to traces and vice versa

See the tracing documentation for details on enabling tracing.

Filtering Access Logs

To reduce log volume, filter access logs based on specific criteria:

Log Only Errors

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: access-logs-errors-only
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
      filter:
        expression: "response.code >= 400"

Log Specific Routes

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: access-logs-api-only
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
      filter:
        expression: 'request.url_path.startsWith("/api/")'

Exclude Health Checks

apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: access-logs-no-healthz
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
      filter:
        expression: '!request.url_path.startsWith("/healthz")'
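Filter expressions use CEL and can be combined with boolean operators. For example, a sketch that logs only errors while still excluding health checks (the resource name here is illustrative):

```yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: access-logs-errors-no-healthz
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: envoy
      filter:
        expression: 'response.code >= 400 && !request.url_path.startsWith("/healthz")'
```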

Envoy Access Log Format Variables

Common Envoy access log format variables:

Variable             Description
%START_TIME%         Request start time
%REQ(HEADER)%        Request header value (e.g., %REQ(X-REQUEST-ID)%)
%RESP(HEADER)%       Response header value
%PROTOCOL%           Protocol (HTTP/1.1, HTTP/2, etc.)
%RESPONSE_CODE%      HTTP response code
%RESPONSE_FLAGS%     Response flags indicating issues (UH, UF, etc.)
%BYTES_RECEIVED%     Bytes received from client
%BYTES_SENT%         Bytes sent to client
%DURATION%           Total request duration in milliseconds
%UPSTREAM_HOST%      Upstream host address
%UPSTREAM_CLUSTER%   Upstream cluster name
%ROUTE_NAME%         Route name that matched

For a complete list, see the Envoy access log documentation.
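These variables can also be assembled into a compact text format instead of JSON. A sketch of a meshConfig fragment (the field selection and order are a choice, not a requirement):

```yaml
meshConfig:
  accessLogFile: /dev/stdout
  accessLogEncoding: TEXT
  accessLogFormat: |
    [%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% %DURATION% "%REQ(X-REQUEST-ID)%" %ROUTE_NAME%
```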

Troubleshooting

Access Logs Not Appearing

  1. Check Telemetry configuration:

    kubectl get telemetry -n istio-system
    kubectl describe telemetry access-logs -n istio-system
    

  2. Verify gateway pod logs:

    kubectl logs -n istio-system -l istio=gateway --tail=50
    

  3. Check Istio configuration:

    kubectl get istio -n istio-system -o yaml | grep -A 10 "accessLog"
    

Request ID Not Propagating

Ensure that:

  • The x-request-id header is not being overwritten by upstream services
  • Envoy is configured to generate request IDs (enabled by default)
  • Application code preserves the x-request-id header when making outbound requests
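In application code, preserving the header usually means copying it from the incoming request onto every outbound request. A hypothetical sketch (the helper is illustrative, not a specific framework API):

```python
# Incoming request headers, as seen by the application (illustrative).
incoming = {"x-request-id": "a1b2c3d4-e5f6", "accept": "application/json"}

CORRELATION_HEADERS = ("x-request-id",)

def outbound_headers(incoming_headers, extra=None):
    """Build headers for an outbound call, copying correlation headers
    forward so the request ID survives the hop."""
    headers = dict(extra or {})
    for name in CORRELATION_HEADERS:
        if name in incoming_headers:
            headers[name] = incoming_headers[name]
    return headers

print(outbound_headers(incoming, {"content-type": "application/json"}))
```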

Log Format Issues

If using custom JSON format and logs appear malformed:

  • Validate JSON syntax (trailing commas are not allowed in JSON)
  • Check for proper escaping of special characters
  • Test format with a simple configuration first
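A quick way to catch JSON syntax problems in a custom format is to substitute a dummy value for every %...% command operator and parse the result. A sketch (the template is abbreviated from the mesh config above):

```python
import json
import re

# The accessLogFormat template from the mesh config (abbreviated).
template = '''{
  "start_time": "%START_TIME%",
  "response_code": "%RESPONSE_CODE%",
  "request_id": "%REQ(X-REQUEST-ID)%"
}'''

# Replace each %...% operator with a dummy value, then parse; json.loads
# raises on trailing commas, bad quoting, and similar mistakes.
candidate = re.sub(r"%[^%]+%", "x", template)
json.loads(candidate)
print("format template is valid JSON")
```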

Best Practices

  1. Use structured (JSON) logging for easier parsing and analysis
  2. Always include x-request-id in your log format for correlation
  3. Filter logs to reduce volume and costs (e.g., exclude health checks)
  4. Aggregate logs centrally using Loki, Elasticsearch, or similar
  5. Correlate with traces for complete observability
  6. Configure log rotation to prevent disk space issues
  7. Use log levels appropriately (info for access logs, debug for detailed troubleshooting)

Additional Resources