Distributed Tracing: Understanding Requests Across Services

Introduction#

Distributed tracing tracks a single request as it flows through multiple services, capturing timing, errors, and context at each step. Without tracing, a slow API call that crosses 5 microservices leaves you with 5 sets of logs and no way to correlate them. With tracing, you see the entire request timeline in one view — which service was slow, where an error originated, and how services depend on each other.

Core Concepts#

Trace: the complete end-to-end record of a single request
  Trace ID: unique identifier shared by all spans in a trace

Span: a single unit of work within a trace
  Span ID: unique identifier for this span
  Parent Span ID: the span that started this one
  Start time, duration, status, attributes, events

Context propagation: passing Trace ID and Span ID between services
  HTTP: via W3C Trace Context headers (traceparent, tracestate)
  gRPC: via metadata
  Message queues: via message headers

Example trace:
  frontend (100ms)
    └─ api-gateway (95ms)
         ├─ auth-service (5ms)
         └─ order-service (85ms)
              ├─ postgres (15ms)
              ├─ inventory-service (30ms)
              └─ payment-service (35ms)
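
The propagated context travels in a compact HTTP header. As a sketch (field layout from the W3C Trace Context spec; the helper function is ours, not part of any library), a `traceparent` value can be unpacked like this:

```python
# W3C traceparent format: "version-trace_id-span_id-flags" (all hex fields).
# Illustrative helper, not a library API.
def parse_traceparent(header: str) -> dict:
    version, trace_id, span_id, flags = header.split("-")
    return {
        "version": version,          # "00" for the current spec version
        "trace_id": trace_id,        # 32 hex chars, shared by every span in the trace
        "parent_span_id": span_id,   # 16 hex chars, the caller's span
        "sampled": bool(int(flags, 16) & 0x01),  # sampling decision flag
    }

ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```

The downstream service continues the same trace by creating child spans under `trace_id`, with `parent_span_id` as their parent.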

OpenTelemetry Setup (Python)#

import os

from opentelemetry import trace
from opentelemetry.sdk.resources import (
    DEPLOYMENT_ENVIRONMENT,
    SERVICE_NAME,
    SERVICE_VERSION,
    Resource,
)
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.instrumentation.httpx import HTTPXClientInstrumentor
from opentelemetry.instrumentation.sqlalchemy import SQLAlchemyInstrumentor

def setup_tracing(service_name: str, otlp_endpoint: str = "http://otel-collector:4317"):
    """Configure OpenTelemetry with OTLP export."""
    provider = TracerProvider(
        resource=Resource.create({
            SERVICE_NAME: service_name,
            SERVICE_VERSION: os.getenv("APP_VERSION", "unknown"),
            DEPLOYMENT_ENVIRONMENT: os.getenv("ENVIRONMENT", "development"),
        })
    )

    # Export to OTLP collector (Jaeger, Tempo, Honeycomb, etc.)
    exporter = OTLPSpanExporter(endpoint=otlp_endpoint)
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)

    return trace.get_tracer(service_name)

# Auto-instrument common libraries
def instrument_app(app):
    FastAPIInstrumentor.instrument_app(app)    # HTTP requests/responses
    HTTPXClientInstrumentor().instrument()      # outbound HTTP calls
    SQLAlchemyInstrumentor().instrument()       # database queries

tracer = setup_tracing("order-service")
instrument_app(app)  # app: your FastAPI application instance

Manual Span Creation#

from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer(__name__)

async def process_order(order_id: str, user_id: str) -> dict:
    with tracer.start_as_current_span("process_order") as span:
        # Add attributes for filtering and debugging
        span.set_attribute("order.id", order_id)
        span.set_attribute("user.id", user_id)
        span.set_attribute("order.source", "api")

        try:
            # Nested spans for each sub-operation
            with tracer.start_as_current_span("validate_inventory") as inv_span:
                inventory = await check_inventory(order_id)
                inv_span.set_attribute("inventory.available", inventory["available"])

            with tracer.start_as_current_span("charge_payment") as pay_span:
                pay_span.set_attribute("payment.provider", "stripe")
                payment = await charge_customer(user_id, inventory["total"])
                pay_span.set_attribute("payment.id", payment["id"])

            # Add events (point-in-time annotations)
            span.add_event("order_confirmed", {
                "payment_id": payment["id"],
                "total": inventory["total"],
            })

            return {"order_id": order_id, "status": "confirmed"}

        except PaymentDeclined as e:
            span.set_status(Status(StatusCode.ERROR, str(e)))
            span.record_exception(e)
            raise

        except Exception as e:
            span.set_status(Status(StatusCode.ERROR, "Unexpected error"))
            span.record_exception(e)
            raise

Context Propagation Between Services#

import httpx
from fastapi import FastAPI, Request
from opentelemetry import context as otel_context
from opentelemetry.propagate import inject, extract

# Outbound HTTP: inject trace context into headers
async def call_inventory_service(order_id: str) -> dict:
    headers = {}
    inject(headers)  # adds traceparent and tracestate headers

    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"http://inventory-service/items/{order_id}",
            headers=headers,
        )
    return response.json()

# Inbound HTTP: extract trace context from request (FastAPI)
app = FastAPI()

@app.middleware("http")
async def extract_trace_context(request: Request, call_next):
    # Extract the parent span context from incoming headers and attach it
    context = extract(dict(request.headers))
    token = otel_context.attach(context)
    try:
        return await call_next(request)
    finally:
        otel_context.detach(token)

Kafka Message Tracing#

import json

from confluent_kafka import Producer, Consumer
from opentelemetry.propagate import inject, extract
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def produce_with_tracing(producer: Producer, topic: str, message: dict) -> None:
    """Inject trace context into Kafka message headers."""
    with tracer.start_as_current_span(
        f"kafka.produce {topic}", kind=trace.SpanKind.PRODUCER
    ) as span:
        span.set_attribute("messaging.system", "kafka")
        span.set_attribute("messaging.destination", topic)

        # Inject inside the span so the consumer links to the produce span
        headers = {}
        inject(headers)
        producer.produce(
            topic=topic,
            value=json.dumps(message).encode(),
            headers=list(headers.items()),
        )

def consume_with_tracing(message) -> None:
    """Extract trace context from Kafka message headers."""
    headers = {k: v.decode() for k, v in (message.headers() or [])}
    context = extract(headers)

    with tracer.start_as_current_span(
        f"kafka.consume {message.topic()}",
        context=context,
        kind=trace.SpanKind.CONSUMER,
    ) as span:
        span.set_attribute("messaging.system", "kafka")
        span.set_attribute("messaging.source", message.topic())
        span.set_attribute("messaging.partition", message.partition())
        span.set_attribute("messaging.offset", message.offset())

        process_message(json.loads(message.value()))

Docker Compose: Jaeger for Local Development#

version: "3.8"
services:
  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686"  # Jaeger UI
      - "4317:4317"    # OTLP gRPC
      - "4318:4318"    # OTLP HTTP
    environment:
      COLLECTOR_OTLP_ENABLED: "true"

  api:
    build: .
    environment:
      OTEL_EXPORTER_OTLP_ENDPOINT: "http://jaeger:4317"
      OTEL_SERVICE_NAME: "api-service"
      OTEL_TRACES_SAMPLER: "traceidratio"
      OTEL_TRACES_SAMPLER_ARG: "0.1"  # sample 10% in production
    ports:
      - "8080:8080"
    depends_on: [jaeger]

Sampling Strategies#

from opentelemetry import trace
from opentelemetry.sdk.trace.sampling import (
    Decision,
    ParentBased,
    Sampler,
    SamplingResult,
    TraceIdRatioBased,
)

# Production: sample 10% of root spans; child spans follow the parent's decision
production_sampler = ParentBased(root=TraceIdRatioBased(0.1))

# Custom sampler: follow a sampled parent, otherwise rate-sample.
# Note: head-based samplers decide when a span *starts*, so "always keep
# error traces" requires tail sampling (e.g. in the OpenTelemetry Collector),
# where the decision is made after spans complete.
class RateSampler(Sampler):
    def __init__(self, ratio: float = 0.1):
        self._ratio_sampler = TraceIdRatioBased(ratio)

    def should_sample(self, parent_context, trace_id, name,
                      kind=None, attributes=None, links=None, trace_state=None):
        # Always sample if the parent span was sampled
        parent_span_context = trace.get_current_span(parent_context).get_span_context()
        if parent_span_context.trace_flags.sampled:
            return SamplingResult(Decision.RECORD_AND_SAMPLE)

        # Rate-based sampling for new (root) traces
        return self._ratio_sampler.should_sample(
            parent_context, trace_id, name, kind, attributes, links, trace_state
        )

    def get_description(self) -> str:
        return "RateSampler"
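
Trace-ID ratio sampling is deterministic: the decision is a pure function of the trace ID, so every service sampling the same trace at the same ratio agrees. A simplified recreation of the idea (not the SDK's exact arithmetic):

```python
# Simplified sketch of trace-ID ratio sampling: compare the low 64 bits
# of the trace ID against ratio * 2**64. Same trace ID -> same decision,
# so no trace ends up partially sampled across services.
def ratio_sampled(trace_id: int, ratio: float) -> bool:
    bound = int(ratio * (1 << 64))
    return (trace_id & ((1 << 64) - 1)) < bound

trace_id = 0x4BF92F3577B34DA6A3CE929D0E0E4736
ratio_sampled(trace_id, 1.0)  # ratio 1.0 keeps every trace
ratio_sampled(trace_id, 0.0)  # ratio 0.0 drops every trace
```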

Querying Traces#

# Jaeger UI: http://localhost:16686
# - Search by service, operation, tags, duration
# - Visualize trace timeline
# - Compare traces

# Common trace queries:
# - Traces with error spans
# - Traces > 2s duration
# - Traces touching payment-service
# - Traces with specific user_id attribute

# Grafana Tempo + TraceQL:
# { .service.name = "order-service" && duration > 2s }
# { .http.status_code = 500 }
# { .user.id = "user:12345" }

Conclusion#

Distributed tracing transforms debugging in microservice architectures from log archaeology to visual timeline analysis. OpenTelemetry provides vendor-neutral instrumentation — instrument once, export to any backend (Jaeger, Zipkin, Honeycomb, Datadog). Auto-instrumentation of FastAPI, httpx, and SQLAlchemy handles 90% of spans automatically. Add manual spans for business-critical paths with semantic attributes. In production, sample 10-20% of traffic and always capture error traces to balance storage cost with observability.
