
High Cardinality Metrics: The Hidden Cost

Introduction

High cardinality metrics can quietly destabilize a monitoring stack. A single label carrying user IDs or request IDs can multiply series counts, bloating storage, driving up memory use, and slowing queries. Experienced teams treat cardinality as a design constraint, not an afterthought.

Where Cardinality Comes From

Cardinality grows with label combinations. Each unique set of label values creates a new time series. Common mistakes include:

  • Adding user_id or session_id labels.
  • Encoding URLs or query parameters into labels.
  • Using unbounded error_message values.
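
To see why this matters, multiply the distinct values of each label: every combination is its own series. The sketch below is illustrative only, with assumed counts chosen purely to show the arithmetic.

// Back-of-the-envelope check (assumed numbers, not measurements): a metric's
// series count is the product of the distinct values of every label attached to it.
int routes = 50;            // bounded: one value per route template
int statusCodes = 10;       // bounded: a handful of distinct codes
int userIds = 100_000;      // effectively unbounded: grows with traffic

long withoutUserId = (long)routes * statusCodes;            // 500 series
long withUserId    = (long)routes * statusCodes * userIds;  // 50,000,000 series

Console.WriteLine($"route x status           : {withoutUserId:N0} series");
Console.WriteLine($"route x status x user_id : {withUserId:N0} series");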

Cost of Excessive Series

High series counts increase memory usage, slow down queries, and lengthen alert evaluation. They also make downsampling and retention policies harder to enforce.
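
As a rough, assumption-heavy illustration: if each active series costs on the order of a few kilobytes of memory (the real figure depends on the backend, label sizes, and churn), the unbounded label from the earlier sketch turns a couple of megabytes into hundreds of gigabytes.

// Illustrative only: the per-series overhead below is an assumption, not a benchmark.
const long AssumedBytesPerActiveSeries = 4_096;

long boundedSeries   = 50L * 10;            // route x status
long unboundedSeries = 50L * 10 * 100_000;  // route x status x user_id

double boundedMiB   = boundedSeries * AssumedBytesPerActiveSeries / (1024.0 * 1024);
double unboundedGiB = unboundedSeries * AssumedBytesPerActiveSeries / (1024.0 * 1024 * 1024);

Console.WriteLine($"bounded labels : ~{boundedMiB:F1} MiB of series overhead");
Console.WriteLine($"user_id label  : ~{unboundedGiB:F1} GiB of series overhead");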

C# Example: Safer Attribute Usage

This example keeps the metric low-cardinality while still capturing useful context: the userId parameter is accepted but never attached as a label, because per-user detail belongs in logs or traces rather than in time series.

using System.Diagnostics.Metrics;

// One Meter per component; the counter carries only bounded labels.
private static readonly Meter Meter = new("billing-service");
private static readonly Counter<long> RequestCounter = Meter.CreateCounter<long>("http_requests_total");

public void RecordRequest(string route, int statusCode, string userId) {
    // Only bounded dimensions become labels: the route template and the status code.
    // userId is deliberately never attached as a label; emit per-user detail to
    // structured logs or trace attributes instead.
    RequestCounter.Add(1,
        new KeyValuePair<string, object?>("route", route),
        new KeyValuePair<string, object?>("status", statusCode));
}

Safer Alternatives

  • Use exemplars to link a metric to a trace without exploding cardinality.
  • Aggregate by route or operation name instead of raw URLs (see the sketch after this list).
  • Apply sampling to high-volume success paths.
  • Use logs for user-level attribution.
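
As a sketch of the second point above, a small normalizer can collapse raw request paths into bounded route templates before they are used as a label. The helper name and regexes here are illustrative assumptions, not part of any particular library.

using System.Text.RegularExpressions;

// Hypothetical helper: map raw request paths onto bounded route templates so that
// /users/12345/orders and /users/67890/orders land on the same series.
static string NormalizeRoute(string path)
{
    // Drop the query string entirely; parameter values are unbounded.
    int q = path.IndexOf('?');
    if (q >= 0) path = path[..q];

    // Replace GUID-shaped segments with a placeholder.
    path = Regex.Replace(path,
        "/[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}", "/{id}");

    // Replace purely numeric segments with a placeholder.
    path = Regex.Replace(path, @"/\d+(?=/|$)", "/{id}");

    return path;
}

// NormalizeRoute("/users/12345/orders?page=3") returns "/users/{id}/orders"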

Conclusion

High cardinality metrics are a hidden cost that can break observability at scale. A small set of carefully chosen labels preserves queryability without compromising system stability.
