HTTP/2 vs HTTP/3: What Changes for Backend Engineers

Introduction

HTTP/1.1 was designed in 1997 when web pages consisted of a handful of files. Modern applications make hundreds of requests per page load. HTTP/2 and HTTP/3 address HTTP/1.1’s fundamental limitations. Understanding what changed helps you configure servers correctly and diagnose performance issues.

HTTP/1.1 Limitations

HTTP/1.1 issues:
1. Head-of-line blocking: requests are processed in order per connection
2. Verbose headers: repeated on every request (Cookie, User-Agent, etc.)
3. No server push: client must ask for every resource
4. 6-connection browser limit per origin: workaround was domain sharding
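
The repeated-header cost in particular is easy to quantify. A back-of-envelope sketch in Python (all byte counts are assumed, illustrative values, not measurements):

```python
# Illustrative only: how much header data HTTP/1.1 resends per page load.
COOKIE_BYTES = 500   # assumed Cookie header size
OTHER_BYTES = 300    # assumed User-Agent, Accept, Referer, etc.
REQUESTS = 100       # requests in one page load

resent = REQUESTS * (COOKIE_BYTES + OTHER_BYTES)
print(f"HTTP/1.1 resends ~{resent / 1024:.0f} KiB of headers per page load")  # ~78 KiB
```

With HPACK (HTTP/2), that 800 bytes would be sent in full once and referenced with a few bytes on every later request.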

HTTP/2

HTTP/2 runs over TCP and adds:

Multiplexing: multiple requests over one TCP connection. Requests and responses are interleaved as binary frames.

Header compression (HPACK): headers are compressed and delta-encoded. A 500-byte Cookie header is sent in full once; subsequent requests refer to it with a few bytes.

Server Push: server can proactively send resources (largely abandoned: browsers and nginx have since removed support).

Stream prioritization: clients can hint at request priority (HTTP/2's complex priority tree was later deprecated; HTTP/3 uses the simpler Extensible Priorities scheme of RFC 9218).

# nginx: HTTP/2 configuration
server {
    listen 443 ssl http2;  # enable HTTP/2 (on nginx >= 1.25.1, use the "http2 on;" directive instead)
    ssl_certificate     /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    # HTTP/2 tuning
    http2_max_concurrent_streams 128;  # max parallel streams per connection
    http2_push_preload on;             # translate Link: </> rel=preload headers to push
                                       # (push and this directive were removed in nginx 1.25.1)

    location / {
        proxy_pass http://backend;
        # nginx proxies to upstreams over HTTP/1.x only: proxy_http_version
        # accepts just 1.0 and 1.1. HTTP/2 to an upstream is available only
        # via grpc_pass, for gRPC backends.
        proxy_http_version 1.1;
    }
}
# Verify HTTP/2 is active
curl -v --http2 https://example.com 2>&1 | grep "HTTP/2"
# < HTTP/2 200

# Check HTTP/2 connection reuse
nghttp -v https://example.com

HTTP/2 TCP Head-of-Line Blocking

HTTP/2 multiplexing solves application-level HOL blocking but not transport-level. All HTTP/2 streams share one TCP connection. A single lost packet stalls all streams until TCP retransmits it.

On high-loss networks (mobile, satellite), HTTP/2 can perform worse than HTTP/1.1 with multiple connections.
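
A toy Monte Carlo model makes the difference concrete (all parameters are assumptions for illustration): each of 100 streams is one packet; a lost packet is retransmitted after a 200 ms timeout. Over one TCP connection every stream waits for the slowest packet; with independent streams, only the lost one waits.

```python
import random

random.seed(0)  # reproducible toy simulation

def mean_completion_ms(shared, streams=100, loss=0.05, rtt=50, rto=200, trials=2000):
    """Mean per-stream completion time. shared=True models HTTP/2-over-TCP
    (one lost packet stalls delivery of every stream); shared=False models
    HTTP/3/QUIC (each stream is delayed only by its own losses)."""
    total = 0.0
    for _ in range(trials):
        delays = [rto if random.random() < loss else 0 for _ in range(streams)]
        if shared:
            total += streams * (rtt + max(delays))  # everyone waits for the worst
        else:
            total += sum(rtt + d for d in delays)   # independent delivery
    return total / (trials * streams)

print(f"shared TCP connection : {mean_completion_ms(True):6.1f} ms")
print(f"independent streams   : {mean_completion_ms(False):6.1f} ms")
```

With these assumptions the shared connection averages roughly rtt + rto (a loss somewhere among 100 packets is almost certain), while independent streams average rtt + loss * rto, about four times faster.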

HTTP/3 (QUIC)

HTTP/3 moves from TCP to QUIC, a UDP-based transport originally developed at Google and standardized by the IETF as RFC 9000.

QUIC advantages:

  • Connection establishment: 1 RTT for a new connection (TCP plus TLS needs 2-3), and 0-RTT on reconnection (the client caches connection parameters)
  • Independent streams: packet loss in one stream does not block others
  • Connection migration: connection ID is decoupled from IP address — mobile clients switching from WiFi to cellular maintain their connection
  • Built-in encryption: TLS 1.3 is part of QUIC, not layered on top
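
The connection-establishment point is plain round-trip arithmetic. A sketch (the 50 ms RTT is an assumed value; real handshakes can overlap slightly differently):

```python
def time_to_first_response_ms(rtt_ms, handshake_rtts):
    """Handshake round trips plus one round trip for the request itself."""
    return (handshake_rtts + 1) * rtt_ms

RTT = 50  # assumed client-to-server round-trip time in ms
print(time_to_first_response_ms(RTT, 2))  # TCP + TLS 1.3, new connection: 150
print(time_to_first_response_ms(RTT, 1))  # QUIC, new connection:          100
print(time_to_first_response_ms(RTT, 0))  # QUIC 0-RTT resumption:          50
```

On a 50 ms link, 0-RTT resumption delivers the first response three times sooner than a fresh TCP+TLS connection.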
# nginx: HTTP/3 (requires nginx >= 1.25 with quic module)
server {
    # HTTP/3 on UDP port 443
    listen 443 quic reuseport;
    # HTTP/2 fallback on TCP
    listen 443 ssl;
    http2 on;            # nginx >= 1.25.1 syntax for enabling HTTP/2

    ssl_certificate     /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    http3 on;
    ssl_early_data on;   # enable 0-RTT

    # Advertise HTTP/3 support
    add_header Alt-Svc 'h3=":443"; ma=86400';
}
# Test HTTP/3 (requires a curl build with HTTP/3 support)
curl --http3 https://example.com -v 2>&1 | head -5

# Check if server advertises HTTP/3
curl -sI https://example.com | grep -i alt-svc
# alt-svc: h3=":443"; ma=86400

gRPC and HTTP/2

gRPC requires HTTP/2. By default, a gRPC channel multiplexes all calls over a single HTTP/2 connection per server endpoint.

// gRPC server: HTTP/2 is automatic
import (
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/keepalive"
)

server := grpc.NewServer(
    grpc.KeepaliveParams(keepalive.ServerParameters{
        MaxConnectionIdle: 15 * time.Second, // close connections idle this long
        Time:              5 * time.Second,  // ping clients after 5s of inactivity
        Timeout:           1 * time.Second,  // close if a ping goes unanswered for 1s
    }),
)
# Python: verify HTTP version in use (needs: pip install 'httpx[http2]')
import asyncio

import httpx

async def main():
    async with httpx.AsyncClient(http2=True) as client:
        response = await client.get("https://example.com")
        print(response.http_version)  # "HTTP/2"

asyncio.run(main())

Performance Impact

Scenario: 100 small API requests (illustrative timings)

HTTP/1.1 (6 connections): ~150ms (sequential within each connection)
HTTP/2 (1 connection, multiplexed): ~20ms (all parallel)
HTTP/3 (QUIC, 0-RTT reconnect): ~15ms (no handshake on reconnect)

Scenario: high-loss network (5% packet loss)
HTTP/2: worse than HTTP/1.1 due to TCP HOL blocking
HTTP/3: significantly better — streams are independent
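
The first scenario's figures fall out of simple queueing arithmetic. A sketch, with the per-request round trip (~9 ms) chosen as an assumption that roughly reproduces the numbers above:

```python
import math

RTT_MS = 9  # assumed round-trip time per request (illustrative)

def http1_time_ms(requests=100, connections=6, rtt=RTT_MS):
    # Each of the 6 connections serves its share of requests strictly in order.
    return math.ceil(requests / connections) * rtt

def http2_time_ms(requests=100, rtt=RTT_MS):
    # All requests go out in parallel on one connection: roughly two round
    # trips of queueing and transfer, regardless of request count.
    return 2 * rtt

print(http1_time_ms())  # 153 -> the "~150ms" above
print(http2_time_ms())  # 18  -> the "~20ms" above
```

The point of the model: HTTP/1.1 time grows linearly with requests-per-connection, while multiplexed time stays near a constant number of round trips.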

What Backend Engineers Need to Do

  1. Enable HTTP/2 in your load balancer/nginx (usually one line). There is almost no reason not to.
  2. Remove domain sharding — it hurts HTTP/2 by splitting requests across multiple connections and preventing multiplexing.
  3. Remove resource bundling (JS/CSS concatenation) if it was done only for HTTP/1.1 performance — HTTP/2 makes it unnecessary.
  4. Consider HTTP/3 if you serve mobile clients or operate in high-latency regions. All major browsers support it; the main server-side requirement is UDP port 443 being reachable through firewalls and load balancers.
  5. Keep HTTP/2 end to end for gRPC internal services; bidirectional streaming depends on it.

Conclusion

HTTP/2 multiplexing eliminates the practical need for connection-pooling tricks like domain sharding. HTTP/3's QUIC transport solves TCP's head-of-line blocking, at the cost of requiring network infrastructure that passes UDP on port 443. Enabling HTTP/2 on your load balancer is low-risk and high-reward; HTTP/3 takes more infrastructure work but delivers meaningful improvements for mobile and high-latency connections.
