C# Memory Management and Garbage Collection Deep Dive
Introduction to Memory Management in C#
Memory management is one of the most critical aspects of building high-performance C# applications. While the .NET garbage collector (GC) handles most memory management automatically, understanding how it works and how to optimize for it can significantly improve application performance and reduce memory-related issues.
This guide provides a comprehensive look at C# memory management, garbage collection internals, common pitfalls, and optimization techniques.
Stack vs Heap Memory
Understanding the Stack
The stack is a region of memory that stores local value-type variables, method parameters, and method call frames. It follows the Last-In-First-Out (LIFO) principle.
public void StackExample()
{
int x = 10; // Allocated on stack
double y = 3.14; // Allocated on stack
bool flag = true; // Allocated on stack
// All cleaned up automatically when method exits
}
public struct Point // Value type
{
public int X;
public int Y;
}
public void StructExample()
{
Point p = new Point { X = 5, Y = 10 }; // Allocated on stack
// No garbage collection needed
}
Stack Characteristics:
- Fast allocation and deallocation
- Limited size (typically 1MB per thread)
- Automatic cleanup
- Thread-specific
- Stores local value types and method parameters (boxing can still move a value type to the heap; see the example below)
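Note that a value type stays off the heap only while it lives in a local variable, parameter, or another stack-allocated location. Boxing (converting a value type to object or to an interface it implements) copies the value into a new heap object that the GC must later collect. A minimal example:
public void BoxingExample()
{
    int n = 42;               // value on the stack
    object boxed = n;         // boxing: allocates a heap object holding a copy of n
    int unboxed = (int)boxed; // unboxing copies the value back to the stack
    // Non-generic collections (e.g., ArrayList) and APIs taking object parameters
    // cause the same kind of hidden boxing allocations
}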
Understanding the Heap
The heap stores reference types and is managed by the garbage collector.
public class Customer // Reference type
{
public string Name { get; set; }
public int Age { get; set; }
}
public void HeapExample()
{
Customer customer = new Customer // Object allocated on heap
{
Name = "John", // String allocated on heap
Age = 30 // Value stored in object on heap
};
// Object remains on heap until garbage collected
}
Heap Characteristics:
- Slower allocation than stack
- Much larger size (limited by system memory)
- Managed by garbage collector
- Shared across all threads
- Stores reference types
Mixed Allocations
public class Order
{
public int Id; // Stored in Order object on heap
public DateTime Date; // Stored in Order object on heap
public List<OrderItem> Items; // Reference stored in the Order object; the list it points to is a separate heap allocation
}
public void MixedExample()
{
Order order = new Order(); // Order object on heap
order.Id = 1; // Value stored in heap object
order.Date = DateTime.Now; // Struct stored in heap object
order.Items = new List<OrderItem>(); // New heap allocation
}
Garbage Collection Basics
Generational Garbage Collection
The .NET GC uses a generational approach to optimize collection performance.
Generation 0 (Gen 0):
- Short-lived objects
- Collected frequently
- Fast collection
- Typical size: 256KB - 4MB
Generation 1 (Gen 1):
- Buffer between Gen 0 and Gen 2
- Medium-lived objects
- Collected less frequently
- Typical size: Similar to Gen 0
Generation 2 (Gen 2):
- Long-lived objects
- Collected infrequently
- Slower collection
- Can grow very large
public class GCGenerationExample
{
public void DemonstrateGenerations()
{
var temp = new byte[100];
Console.WriteLine($"Generation: {GC.GetGeneration(temp)}"); // 0
GC.Collect();
Console.WriteLine($"After GC: {GC.GetGeneration(temp)}"); // 1
GC.Collect();
Console.WriteLine($"After 2nd GC: {GC.GetGeneration(temp)}"); // 2
}
}
GC Modes
Workstation GC:
- Default for client applications
- Single-threaded or concurrent
- Lower latency
- Less CPU usage
Server GC:
- Default for server workloads (e.g., ASP.NET Core applications)
- Multi-threaded (one GC heap and thread per logical CPU)
- Higher throughput
- More CPU usage
<!-- Configure in .csproj -->
<PropertyGroup>
<ServerGarbageCollection>true</ServerGarbageCollection>
<ConcurrentGarbageCollection>true</ConcurrentGarbageCollection>
</PropertyGroup>
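You can also check which mode is active at runtime through GCSettings; a minimal sketch:
using System;
using System.Runtime;

public static class GCModeInfo
{
    public static void Print()
    {
        // True when server GC is enabled (e.g., via <ServerGarbageCollection>true</ServerGarbageCollection>)
        Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");

        // Typically Interactive when background (concurrent) GC is enabled, Batch otherwise
        Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");
    }
}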
GC Collection Triggers
Garbage collection occurs when:
- Gen 0 threshold is exceeded
- GC.Collect() is called explicitly
- The system is low on memory
- System memory pressure
// Monitor GC behavior
public class GCMonitor
{
public void MonitorGC()
{
GC.RegisterForFullGCNotification(10, 10);
while (true)
{
GCNotificationStatus status = GC.WaitForFullGCApproach();
if (status == GCNotificationStatus.Succeeded)
{
Console.WriteLine("GC is approaching");
// Take action: stop accepting requests, etc.
}
status = GC.WaitForFullGCComplete();
if (status == GCNotificationStatus.Succeeded)
{
Console.WriteLine("GC completed");
// Resume normal operations
}
}
}
}
Memory Allocation Patterns
Object Pooling
Reuse objects instead of creating new ones to reduce GC pressure.
// Requires: using System.Collections.Concurrent; (and System.Text for the StringBuilder usage below)
public class ObjectPool<T> where T : class, new()
{
private readonly ConcurrentBag<T> _objects = new ConcurrentBag<T>();
private readonly int _maxSize;
public ObjectPool(int maxSize = 100)
{
_maxSize = maxSize;
}
public T Rent()
{
return _objects.TryTake(out T item) ? item : new T();
}
public void Return(T item)
{
// Note: the Count check is not atomic, so the pool can briefly exceed _maxSize under contention
if (_objects.Count < _maxSize)
{
_objects.Add(item);
}
}
}
// Usage
public class DataProcessor
{
private static readonly ObjectPool<StringBuilder> _stringBuilderPool
= new ObjectPool<StringBuilder>(20);
public string ProcessData(List<string> items)
{
var sb = _stringBuilderPool.Rent();
try
{
foreach (var item in items)
{
sb.Append(item);
}
return sb.ToString();
}
finally
{
sb.Clear();
_stringBuilderPool.Return(sb);
}
}
}
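For production code, the Microsoft.Extensions.ObjectPool package offers a ready-made, policy-based pool so you don't have to maintain your own. A sketch of the same StringBuilder scenario, assuming that package is referenced:
using System.Collections.Generic;
using System.Text;
using Microsoft.Extensions.ObjectPool;

public class PooledStringBuilderProcessor
{
    // StringBuilderPooledObjectPolicy clears returned builders and discards oversized ones
    private static readonly ObjectPool<StringBuilder> _pool =
        new DefaultObjectPoolProvider().Create(new StringBuilderPooledObjectPolicy());

    public string ProcessData(IEnumerable<string> items)
    {
        StringBuilder sb = _pool.Get();
        try
        {
            foreach (var item in items)
            {
                sb.Append(item);
            }
            return sb.ToString();
        }
        finally
        {
            _pool.Return(sb); // the policy resets the builder for the next caller
        }
    }
}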
ArrayPool
Use ArrayPool<T> for temporary arrays to avoid allocations.
using System.Buffers;
public class ArrayPoolExample
{
public byte[] ProcessData(int size)
{
// BAD - allocates a new array on every call; the buffer cannot be reused afterwards
byte[] buffer = new byte[size];
// Process data
return buffer;
}
public byte[] ProcessDataOptimized(int size)
{
// GOOD - rent from the shared pool (the rented array may be larger than the requested size)
byte[] buffer = ArrayPool<byte>.Shared.Rent(size);
try
{
// Process data
byte[] result = new byte[size];
Array.Copy(buffer, result, size);
return result;
}
finally
{
// Return to pool
ArrayPool<byte>.Shared.Return(buffer);
}
}
}
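A related option is MemoryPool<T>: its Rent returns an IMemoryOwner<T>, so returning the buffer is tied to Dispose and works naturally with a using statement. A minimal sketch:
using System;
using System.Buffers;

public class MemoryPoolExample
{
    public int SumBytes(ReadOnlySpan<byte> source)
    {
        // Disposing the owner returns the underlying buffer to the pool
        using IMemoryOwner<byte> owner = MemoryPool<byte>.Shared.Rent(source.Length);
        Span<byte> buffer = owner.Memory.Span.Slice(0, source.Length);
        source.CopyTo(buffer);

        int sum = 0;
        foreach (byte b in buffer)
        {
            sum += b;
        }
        return sum;
    }
}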
Span<T> and Memory<T>
Use Span<T> and Memory<T> for efficient memory access without allocations.
public class SpanExample
{
// BAD - creates substring (allocates)
public string GetSubstringBad(string input)
{
return input.Substring(0, 10);
}
// GOOD - uses span (no allocation)
public ReadOnlySpan<char> GetSubstringGood(string input)
{
return input.AsSpan(0, Math.Min(10, input.Length));
}
// Parse without allocation
public bool TryParseInt(ReadOnlySpan<char> input, out int result)
{
return int.TryParse(input, out result);
}
// Array slicing without copying
public void ProcessArraySegment(int[] numbers)
{
Span<int> slice = numbers.AsSpan(10, 20); // assumes the array has at least 30 elements
for (int i = 0; i < slice.Length; i++)
{
slice[i] *= 2; // Modifies original array
}
}
}
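Span<T> is a ref struct, so it can live only on the stack and cannot be captured across an await. Memory<T> covers those cases; a minimal sketch, assuming a readable stream:
using System;
using System.Buffers;
using System.IO;
using System.Threading.Tasks;

public class MemoryExample
{
    // Memory<byte> can cross await boundaries, unlike Span<byte>
    public async Task<int> ReadChunkAsync(Stream stream)
    {
        byte[] buffer = ArrayPool<byte>.Shared.Rent(4096);
        try
        {
            Memory<byte> memory = buffer.AsMemory(0, 4096);
            int bytesRead = await stream.ReadAsync(memory);
            return bytesRead;
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}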
Large Object Heap (LOH)
Objects of 85,000 bytes or more are allocated on the Large Object Heap (LOH).
LOH Characteristics
public class LOHExample
{
// Small object - allocated on regular heap
public void AllocateSmall()
{
byte[] small = new byte[80000]; // Gen 0
}
// Large object - allocated on LOH
public void AllocateLarge()
{
byte[] large = new byte[90000]; // LOH (Gen 2)
}
public void CheckAllocation()
{
byte[] small = new byte[80000];
byte[] large = new byte[90000];
Console.WriteLine($"Small: Gen {GC.GetGeneration(small)}"); // 0
Console.WriteLine($"Large: Gen {GC.GetGeneration(large)}"); // 2
}
}
LOH Fragmentation
The LOH is not compacted by default, which can lead to fragmentation over time.
// Requires: using System.Runtime; (GCSettings) and using System.Buffers; (ArrayPool)
public class LOHCompaction
{
public void EnableLOHCompaction()
{
// Enable LOH compaction for next full GC
GCSettings.LargeObjectHeapCompactionMode =
GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
}
public void AvoidLOHFragmentation()
{
// Use pooling for large arrays
byte[] buffer = ArrayPool<byte>.Shared.Rent(100000);
try
{
// Use buffer
}
finally
{
ArrayPool<byte>.Shared.Return(buffer);
}
}
}
Memory Leaks in Managed Code
Even with GC, memory leaks can occur when objects are unintentionally kept alive.
Event Handler Leaks
// BAD - creates memory leak
public class LeakyPublisher
{
public event EventHandler DataChanged;
public void NotifyChange()
{
DataChanged?.Invoke(this, EventArgs.Empty);
}
}
public class LeakySubscriber
{
public LeakySubscriber(LeakyPublisher publisher)
{
// Subscriber is kept alive by publisher!
publisher.DataChanged += OnDataChanged;
}
private void OnDataChanged(object sender, EventArgs e)
{
// Handle event
}
}
// GOOD - properly unsubscribe
public class ProperSubscriber : IDisposable
{
private readonly LeakyPublisher _publisher;
public ProperSubscriber(LeakyPublisher publisher)
{
_publisher = publisher;
_publisher.DataChanged += OnDataChanged;
}
private void OnDataChanged(object sender, EventArgs e)
{
// Handle event
}
public void Dispose()
{
_publisher.DataChanged -= OnDataChanged;
}
}
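Because ProperSubscriber detaches its handler in Dispose, tying its lifetime to a using block (or to your DI container's disposal) is enough to break the link from the publisher:
public void UseSubscriber(LeakyPublisher publisher)
{
    using (var subscriber = new ProperSubscriber(publisher))
    {
        publisher.NotifyChange(); // handled by the subscriber
    } // Dispose removes the handler, so the publisher no longer keeps the subscriber alive
}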
Static Reference Leaks
// BAD - static references keep objects alive forever
public class Cache
{
private static readonly Dictionary<string, byte[]> _cache =
new Dictionary<string, byte[]>();
public static void Add(string key, byte[] data)
{
_cache[key] = data; // Never released!
}
}
// GOOD - use weak references or time-based expiration
// (note: a WeakReference does not keep the data alive, so an entry may be
// collected before it expires; this trades retention for lower memory pressure)
public class BetterCache
{
private readonly Dictionary<string, CacheEntry> _cache =
new Dictionary<string, CacheEntry>();
private class CacheEntry
{
public WeakReference<byte[]> Data { get; set; }
public DateTime Expiry { get; set; }
}
public void Add(string key, byte[] data, TimeSpan ttl)
{
CleanExpired();
_cache[key] = new CacheEntry
{
Data = new WeakReference<byte[]>(data),
Expiry = DateTime.UtcNow + ttl
};
}
public bool TryGet(string key, out byte[] data)
{
data = null;
if (_cache.TryGetValue(key, out var entry))
{
if (entry.Expiry > DateTime.UtcNow)
{
return entry.Data.TryGetTarget(out data);
}
_cache.Remove(key);
}
return false;
}
private void CleanExpired()
{
var expired = _cache
.Where(kvp => kvp.Value.Expiry <= DateTime.UtcNow)
.Select(kvp => kvp.Key)
.ToList();
foreach (var key in expired)
{
_cache.Remove(key);
}
}
}
Finalizer Leaks
// BAD - finalizer keeps object in memory longer
public class ExpensiveResource
{
private byte[] _data = new byte[1000000];
~ExpensiveResource()
{
// Finalizer delays collection
Cleanup();
}
private void Cleanup()
{
// Cleanup code
}
}
// GOOD - implement IDisposable without finalizer
public class ManagedResource : IDisposable
{
private byte[] _data = new byte[1000000];
private bool _disposed = false;
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (!_disposed)
{
if (disposing)
{
// Dispose managed resources
_data = null;
}
_disposed = true;
}
}
}
IDisposable Pattern
Proper implementation of IDisposable for resource management.
Standard Dispose Pattern
public class ProperDisposable : IDisposable
{
private IntPtr _unmanagedResource;
private Stream _managedResource;
private bool _disposed = false;
public ProperDisposable()
{
_unmanagedResource = AllocateUnmanagedResource();
_managedResource = new FileStream("data.txt", FileMode.Open);
}
// Public dispose method
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
// Protected dispose method
protected virtual void Dispose(bool disposing)
{
if (!_disposed)
{
if (disposing)
{
// Dispose managed resources
_managedResource?.Dispose();
}
// Free unmanaged resources
if (_unmanagedResource != IntPtr.Zero)
{
FreeUnmanagedResource(_unmanagedResource);
_unmanagedResource = IntPtr.Zero;
}
_disposed = true;
}
}
// Finalizer - only if you have unmanaged resources
~ProperDisposable()
{
Dispose(false);
}
private IntPtr AllocateUnmanagedResource() => IntPtr.Zero;
private void FreeUnmanagedResource(IntPtr handle) { }
}
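For operating-system handles specifically, the usual recommendation is to wrap the raw IntPtr in a SafeHandle: the runtime then guarantees release through a critical finalizer, and your own class rarely needs a finalizer at all. A rough sketch; the native library, OpenResource, and CloseResource below are hypothetical placeholders:
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

// Hypothetical P/Invoke declarations, for illustration only
internal static class NativeMethods
{
    [DllImport("myresource.dll")]
    public static extern MyResourceHandle OpenResource();

    [DllImport("myresource.dll")]
    public static extern bool CloseResource(IntPtr handle);
}

// SafeHandle supplies the finalization logic, so ReleaseHandle runs even if
// Dispose is never called and this wrapper needs no finalizer of its own
public sealed class MyResourceHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    public MyResourceHandle() : base(ownsHandle: true) { }

    protected override bool ReleaseHandle() => NativeMethods.CloseResource(handle);
}

public sealed class SafeHandleConsumer : IDisposable
{
    private readonly MyResourceHandle _handle = NativeMethods.OpenResource();

    public void Dispose() => _handle.Dispose();
}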
Using Statement
// Traditional using statement
public void TraditionalUsing()
{
using (var resource = new DisposableResource())
{
resource.DoWork();
} // Dispose called automatically
}
// Using declaration (C# 8+)
public void ModernUsing()
{
using var resource = new DisposableResource();
resource.DoWork();
// Dispose called at end of scope
}
// Multiple resources
public void MultipleResources()
{
using var resource1 = new DisposableResource();
using var resource2 = new DisposableResource();
resource1.DoWork();
resource2.DoWork();
// Disposed in reverse order of declaration
}
Memory Profiling and Diagnostics
Using Performance Counters
// NOTE: PerformanceCounter is Windows-only; on modern .NET it requires the
// System.Diagnostics.PerformanceCounter package. Cross-platform alternatives
// include EventCounters and the dotnet-counters tool.
public class GCPerformanceMonitor
{
private PerformanceCounter _gen0Counter;
private PerformanceCounter _gen1Counter;
private PerformanceCounter _gen2Counter;
private PerformanceCounter _heapSizeCounter;
public void Initialize()
{
string processName = Process.GetCurrentProcess().ProcessName;
_gen0Counter = new PerformanceCounter(
".NET CLR Memory", "# Gen 0 Collections", processName);
_gen1Counter = new PerformanceCounter(
".NET CLR Memory", "# Gen 1 Collections", processName);
_gen2Counter = new PerformanceCounter(
".NET CLR Memory", "# Gen 2 Collections", processName);
_heapSizeCounter = new PerformanceCounter(
".NET CLR Memory", "# Bytes in all Heaps", processName);
}
public void LogStatistics()
{
Console.WriteLine($"Gen 0 Collections: {_gen0Counter.NextValue()}");
Console.WriteLine($"Gen 1 Collections: {_gen1Counter.NextValue()}");
Console.WriteLine($"Gen 2 Collections: {_gen2Counter.NextValue()}");
Console.WriteLine($"Heap Size: {_heapSizeCounter.NextValue() / 1024 / 1024}MB");
}
}
Memory Diagnostics
public class MemoryDiagnostics
{
public void PrintMemoryInfo()
{
// GC Statistics
Console.WriteLine($"Gen 0 Collections: {GC.CollectionCount(0)}");
Console.WriteLine($"Gen 1 Collections: {GC.CollectionCount(1)}");
Console.WriteLine($"Gen 2 Collections: {GC.CollectionCount(2)}");
// Memory Information
var gcInfo = GC.GetGCMemoryInfo();
Console.WriteLine($"Heap Size: {gcInfo.HeapSizeBytes / 1024 / 1024}MB");
Console.WriteLine($"Fragmented: {gcInfo.FragmentedBytes / 1024 / 1024}MB");
Console.WriteLine($"Total Memory: {GC.GetTotalMemory(false) / 1024 / 1024}MB");
// Process Memory
var process = Process.GetCurrentProcess();
Console.WriteLine($"Working Set: {process.WorkingSet64 / 1024 / 1024}MB");
Console.WriteLine($"Private Memory: {process.PrivateMemorySize64 / 1024 / 1024}MB");
}
public void ForceGCAndReport()
{
Console.WriteLine("Before GC:");
PrintMemoryInfo();
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
Console.WriteLine("\nAfter GC:");
PrintMemoryInfo();
}
}
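Newer runtimes also expose allocation totals directly, which helps spot allocation-heavy code paths without attaching a profiler. A small sketch; GC.GetTotalAllocatedBytes requires .NET Core 3.0 or later:
using System;

public class AllocationTracker
{
    public void MeasureAllocations(Action action)
    {
        // Bytes allocated so far by this thread (whether or not they survived)
        long before = GC.GetAllocatedBytesForCurrentThread();
        action();
        long after = GC.GetAllocatedBytesForCurrentThread();
        Console.WriteLine($"Allocated by action: {(after - before) / 1024.0:F1} KB");

        // Process-wide allocation total; pass true for a more precise (slower) value
        Console.WriteLine($"Total allocated: {GC.GetTotalAllocatedBytes(precise: false) / 1024 / 1024} MB");
    }
}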
Best Practices
1. Minimize Allocations
// BAD - allocates on each call
public string FormatBad(List<string> items)
{
string result = "";
foreach (var item in items)
{
result += item + ", "; // New string each iteration!
}
return result;
}
// GOOD - use StringBuilder
public string FormatGood(List<string> items)
{
var sb = new StringBuilder();
foreach (var item in items)
{
sb.Append(item).Append(", ");
}
return sb.ToString();
}
// BETTER - use string.Join
public string FormatBetter(List<string> items)
{
return string.Join(", ", items);
}
2. Use Value Types Appropriately
// Use struct for small, immutable data
public readonly struct Point
{
public int X { get; }
public int Y { get; }
public Point(int x, int y)
{
X = x;
Y = y;
}
}
// Use class for larger, mutable data
public class ComplexObject
{
public string Name { get; set; }
public List<Point> Points { get; set; }
// Many more properties...
}
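When a struct grows to more than a few fields, passing it by value copies the whole thing on every call. Passing it with the in modifier passes a readonly reference instead, and declaring the struct readonly prevents the compiler from making hidden defensive copies when its members are called through that reference. A short sketch:
public readonly struct Rectangle
{
    public double Left { get; }
    public double Top { get; }
    public double Width { get; }
    public double Height { get; }

    public Rectangle(double left, double top, double width, double height)
    {
        Left = left;
        Top = top;
        Width = width;
        Height = height;
    }

    public double Area() => Width * Height;
}

public static class Geometry
{
    // 'in' passes the struct by readonly reference: no 32-byte copy per call
    public static bool Contains(in Rectangle outer, in Rectangle inner)
    {
        return inner.Left >= outer.Left &&
               inner.Top >= outer.Top &&
               inner.Left + inner.Width <= outer.Left + outer.Width &&
               inner.Top + inner.Height <= outer.Top + outer.Height;
    }
}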
3. Avoid Premature Optimization
// Profile first, then optimize
public class ProfileBeforeOptimize
{
// A Stopwatch is fine for a quick check; for rigorous numbers use a benchmarking
// library such as BenchmarkDotNet (see the sketch after this class)
public void MeasurePerformance()
{
var sw = Stopwatch.StartNew();
// Your code here
sw.Stop();
Console.WriteLine($"Time: {sw.ElapsedMilliseconds}ms");
}
}
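For repeatable numbers that include per-operation allocations and GC counts, a benchmarking library beats ad-hoc timing. A sketch assuming the BenchmarkDotNet package is referenced (run it in a Release build):
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // reports allocated bytes and GC collections per operation
public class StringConcatBenchmarks
{
    private readonly string[] _items = { "alpha", "beta", "gamma", "delta" };

    [Benchmark(Baseline = true)]
    public string Concatenation()
    {
        string result = "";
        foreach (var item in _items) result += item;
        return result;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new StringBuilder();
        foreach (var item in _items) sb.Append(item);
        return sb.ToString();
    }
}

public class BenchmarkProgram
{
    public static void Main() => BenchmarkRunner.Run<StringConcatBenchmarks>();
}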
4. Be Careful with Finalizers
// Only use finalizers when you have unmanaged resources
public class WithUnmanagedResource : IDisposable
{
private IntPtr _handle;
private bool _disposed = false;
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (!_disposed)
{
if (_handle != IntPtr.Zero)
{
// Free unmanaged resource
FreeHandle(_handle);
_handle = IntPtr.Zero;
}
_disposed = true;
}
}
~WithUnmanagedResource()
{
Dispose(false);
}
private void FreeHandle(IntPtr handle) { }
}
Conclusion
Understanding C# memory management and garbage collection is crucial for building high-performance applications. Key takeaways include:
- Understand stack vs heap allocation
- Know how generational GC works
- Use object pooling and ArrayPool for frequently allocated objects
- Leverage Span<T> and Memory<T> for zero-allocation scenarios
- Be aware of the Large Object Heap and its implications
- Implement IDisposable correctly for resource management
- Avoid common memory leak patterns
- Profile and measure before optimizing
- Use proper diagnostic tools to understand memory usage
By applying these principles and techniques, you can write memory-efficient C# code that scales well and performs optimally.