Caching Patterns: Write-Through, Write-Back, and Cache-Aside
Introduction
Caching is an architectural decision that changes consistency, latency, and failure behavior. The three most common patterns are write-through, write-back, and cache-aside. Each pattern has distinct tradeoffs that must align with your data consistency requirements.
Write-Through Cache
Every write goes to the cache and the database synchronously, so the cache always holds the latest committed value and reads can be served directly from it.
Advantages
- Strong consistency between cache and database
- Predictable reads
Disadvantages
- Higher write latency
- Cache becomes a critical dependency
public async Task UpdateOrderAsync(Order order)
{
    // Persist to the database first, then refresh the cache synchronously
    // so readers never observe a value the database has not committed.
    _dbContext.Orders.Update(order);
    await _dbContext.SaveChangesAsync();
    await _redis.StringSetAsync(CacheKey(order.Id), JsonSerializer.Serialize(order));
}
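The read side of write-through is simple because writes keep the cache current; on a miss the entry was evicted or never written. A minimal sketch, assuming the same _redis connection and CacheKey helper as above:

```csharp
public async Task<Order?> GetOrderAsync(long orderId)
{
    // With write-through, every committed write already refreshed the cache,
    // so a miss means eviction rather than staleness.
    var cached = await _redis.StringGetAsync(CacheKey(orderId));
    return cached.HasValue
        ? JsonSerializer.Deserialize<Order>(cached!)
        : null;
}
```

Whether a miss falls back to the database or returns null depends on whether you combine write-through with a lazy-loading read path.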
Write-Back (Write-Behind) Cache
Writes go to the cache first and are asynchronously flushed to the database.
Advantages
- Low write latency
- Batch writes reduce database load
Disadvantages
- Risk of data loss if cache fails
- Requires a durable queue or background worker
public async Task UpdateOrderAsync(Order order)
{
    // Acknowledge the write as soon as the cache accepts it; a durable
    // queue carries the change to the database asynchronously.
    await _redis.StringSetAsync(CacheKey(order.Id), JsonSerializer.Serialize(order));
    await _queue.PublishAsync(new PersistOrderMessage(order));
}
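The flush step above depends on a consumer that drains the queue and persists changes. A minimal sketch of such a worker, assuming a hypothetical IQueueConsumer abstraction and an OrdersDbContext (both illustrative names, not from the original post):

```csharp
// Hypothetical background worker that drains the persistence queue.
public class OrderPersistenceWorker : BackgroundService
{
    private readonly IQueueConsumer<PersistOrderMessage> _consumer;
    private readonly OrdersDbContext _dbContext;

    public OrderPersistenceWorker(IQueueConsumer<PersistOrderMessage> consumer,
                                  OrdersDbContext dbContext)
    {
        _consumer = consumer;
        _dbContext = dbContext;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Pull a batch so many cached writes become one database round trip.
            var batch = await _consumer.ReceiveBatchAsync(maxCount: 100, stoppingToken);
            foreach (var message in batch)
            {
                _dbContext.Orders.Update(message.Order);
            }
            await _dbContext.SaveChangesAsync(stoppingToken);

            // Acknowledge only after persistence succeeds, so a crash
            // mid-flush redelivers the messages instead of losing them.
            await _consumer.AcknowledgeAsync(batch, stoppingToken);
        }
    }
}
```

The acknowledge-after-persist ordering is what turns "risk of data loss" into "risk of duplicate writes", which an idempotent update absorbs safely.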
Cache-Aside (Lazy Loading)
The application reads from the cache, and on a miss it loads from the database and populates the cache.
Advantages
- Simple to implement
- Cache is optional on reads
Disadvantages
- Cache stampede risk
- Data can become stale without invalidation
public async Task<Order> GetOrderAsync(long orderId)
{
    // Fast path: serve from the cache when the entry is present.
    var cached = await _redis.StringGetAsync(CacheKey(orderId));
    if (cached.HasValue)
    {
        return JsonSerializer.Deserialize<Order>(cached);
    }

    // Miss: load from the database and repopulate with a 10-minute TTL.
    var order = await _dbContext.Orders.FindAsync(orderId);
    if (order != null)
    {
        await _redis.StringSetAsync(CacheKey(orderId), JsonSerializer.Serialize(order), TimeSpan.FromMinutes(10));
    }
    return order;
}
Choosing the Right Pattern
- Write-through for data that must be consistent across reads.
- Write-back for high write throughput with eventual consistency.
- Cache-aside for read-heavy workloads with tolerable staleness.
Best Practices
- Use TTLs to prevent unbounded growth.
- Add cache stampede protection with distributed locks.
- Monitor cache hit ratio and eviction rates.
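Stampede protection can be sketched with a short-lived Redis lock: the first caller to miss acquires the lock and rebuilds the entry, while concurrent callers wait briefly and re-read the cache instead of all hitting the database. A minimal version using StackExchange.Redis's SET NX semantics, reusing the hypothetical CacheKey helper from the examples above:

```csharp
public async Task<Order?> GetOrderWithLockAsync(long orderId)
{
    var cached = await _redis.StringGetAsync(CacheKey(orderId));
    if (cached.HasValue)
    {
        return JsonSerializer.Deserialize<Order>(cached!);
    }

    // SET lockKey value NX EX 5 — only one caller wins the rebuild lock;
    // the TTL guarantees the lock releases even if this process dies.
    var lockKey = $"lock:{CacheKey(orderId)}";
    if (await _redis.StringSetAsync(lockKey, Environment.MachineName,
                                    TimeSpan.FromSeconds(5), When.NotExists))
    {
        try
        {
            var order = await _dbContext.Orders.FindAsync(orderId);
            if (order != null)
            {
                await _redis.StringSetAsync(CacheKey(orderId),
                    JsonSerializer.Serialize(order), TimeSpan.FromMinutes(10));
            }
            return order;
        }
        finally
        {
            await _redis.KeyDeleteAsync(lockKey);
        }
    }

    // Another caller is rebuilding; back off briefly, then retry the cache.
    await Task.Delay(TimeSpan.FromMilliseconds(100));
    return await GetOrderWithLockAsync(orderId);
}
```

This is a sketch, not a complete lock: for strict correctness the delete should verify lock ownership (for example, via a Lua script comparing the stored value), and the retry should be bounded.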
Conclusion
Caching patterns are not interchangeable. Map the pattern to your consistency requirements first, then validate with load testing to ensure the desired latency gains.
This post is licensed under CC BY 4.0 by the author.