## Introduction
Real-time communication is a requirement in many modern applications: live dashboards, chat, notifications, collaborative editing. The three dominant approaches are WebSockets, Server-Sent Events (SSE), and Long Polling. Each has distinct trade-offs, and picking the wrong one adds complexity without benefit.
## Long Polling
Long polling is HTTP-based. The client sends a request, and the server holds the connection open until it has data to send or a timeout occurs. The client immediately opens a new request after receiving a response.
```python
# FastAPI: long polling endpoint
import asyncio

from fastapi import FastAPI

app = FastAPI()

# Simulated event store: entries look like "<id>:<payload>"
pending_events: list[str] = []


@app.get("/poll")
async def long_poll(last_id: int = 0, timeout: int = 30):
    deadline = asyncio.get_running_loop().time() + timeout
    while asyncio.get_running_loop().time() < deadline:
        events = [e for e in pending_events if int(e.split(":")[0]) > last_id]
        if events:
            return {"events": events}
        await asyncio.sleep(0.5)
    return {"events": []}
```
Characteristics:
- Works over plain HTTP/1.1
- Firewall and proxy friendly
- High overhead: each response requires a new request with full HTTP headers
- Not suitable for very high message frequency
When to use: Legacy environments, firewalls that block persistent connections, or when you need broad compatibility with minimal infrastructure changes.
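The whole cycle can be sketched in-process, without a network. The names here (`server_poll`, `client_collect`) are illustrative, not part of any framework: the server parks each request until data arrives or a deadline passes, and the client issues a new request as soon as each response comes back.

```python
import asyncio


async def server_poll(pending, new_event, last_id, timeout):
    """Hold the 'request' open until fresh data exists or the deadline passes."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while loop.time() < deadline:
        fresh = [(i, p) for i, p in pending if i > last_id]
        if fresh:
            return fresh  # respond as soon as there is data
        new_event.clear()
        try:
            await asyncio.wait_for(new_event.wait(), deadline - loop.time())
        except asyncio.TimeoutError:
            pass  # timed out: re-check once, then return []
    return []


async def client_collect(pending, new_event, count):
    """Re-poll immediately after each response, tracking the last seen id."""
    got, last_id = [], 0
    while len(got) < count:
        for event_id, payload in await server_poll(pending, new_event, last_id, 1.0):
            got.append(payload)
            last_id = max(last_id, event_id)
    return got


async def demo():
    pending, new_event = [], asyncio.Event()
    task = asyncio.create_task(client_collect(pending, new_event, 2))
    await asyncio.sleep(0.01)
    pending.append((1, "hello")); new_event.set()
    await asyncio.sleep(0.01)
    pending.append((2, "world")); new_event.set()
    return await task

# asyncio.run(demo()) returns ["hello", "world"]
```

Note how the client never sleeps between requests: the waiting all happens server-side, which is exactly what distinguishes long polling from plain periodic polling.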
## Server-Sent Events (SSE)
SSE is a one-directional, server-to-client stream over HTTP. The server sends a stream of text/event-stream messages. The browser’s EventSource API handles reconnection automatically.
```python
# FastAPI: SSE endpoint
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()


async def event_generator():
    event_id = 0
    while True:
        await asyncio.sleep(1)
        event_id += 1
        yield f"id: {event_id}\ndata: heartbeat at {event_id}\n\n"


@app.get("/stream")
async def stream():
    return StreamingResponse(
        event_generator(),
        media_type="text/event-stream",
        headers={
            "Cache-Control": "no-cache",
            "X-Accel-Buffering": "no",  # disable Nginx buffering
        },
    )
```
```javascript
// Client-side: EventSource
const source = new EventSource("/stream");

source.onmessage = (event) => {
  console.log("Received:", event.data);
};

source.onerror = () => {
  console.log("Connection lost, retrying...");
};
```
Characteristics:
- Unidirectional (server to client only)
- Built-in reconnection via the `Last-Event-ID` header
- Works over HTTP/1.1 and HTTP/2
- HTTP/2 multiplexes many SSE streams over one connection, making it efficient at scale
- Not suitable when the client also needs to push frequent data to the server
When to use: Notifications, live feeds, dashboards, log tailing — anywhere the data flow is primarily server-to-client.
## WebSockets
WebSockets provide a full-duplex, persistent TCP connection negotiated via an HTTP Upgrade handshake.
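During that handshake the server proves it understood the upgrade by hashing the client's `Sec-WebSocket-Key` with a fixed GUID defined in RFC 6455. FastAPI handles this internally; the sketch below only shows what the negotiation computes.

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket handshake.
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"


def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept response header value."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# Using the example key from RFC 6455:
# websocket_accept("dGhlIHNhbXBsZSBub25jZQ==") -> "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
```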
```python
# FastAPI: WebSocket endpoint
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()


class ConnectionManager:
    def __init__(self):
        self.active: list[WebSocket] = []

    async def connect(self, ws: WebSocket):
        await ws.accept()
        self.active.append(ws)

    def disconnect(self, ws: WebSocket):
        self.active.remove(ws)

    async def broadcast(self, message: str):
        for ws in self.active:
            await ws.send_text(message)


manager = ConnectionManager()


@app.websocket("/ws")
async def websocket_endpoint(ws: WebSocket):
    await manager.connect(ws)
    try:
        while True:
            data = await ws.receive_text()
            await manager.broadcast(f"User says: {data}")
    except WebSocketDisconnect:
        manager.disconnect(ws)
```
```javascript
// Client-side: WebSocket
const ws = new WebSocket("wss://example.com/ws");

ws.onopen = () => ws.send("Hello");
ws.onmessage = (event) => console.log("Received:", event.data);
ws.onclose = () => console.log("Disconnected");
```
Characteristics:
- Full-duplex: both sides can send at any time
- Low overhead per message once connected (a few bytes of frame header instead of full HTTP headers)
- Requires stateful server; horizontal scaling needs a shared pub/sub layer (e.g., Redis)
- Some proxies and firewalls require explicit WebSocket support
- No built-in reconnection; must be implemented manually
When to use: Chat, collaborative editing, multiplayer games, or any use case requiring low-latency bidirectional messaging.
## Side-by-Side Comparison
| Property | Long Polling | SSE | WebSocket |
|---|---|---|---|
| Direction | Bidirectional (via new requests) | Server to client | Full-duplex |
| Protocol | HTTP | HTTP | WebSocket (ws/wss) |
| Reconnection | Manual | Built-in | Manual |
| Overhead | High (full HTTP per message) | Low | Very low |
| Proxy/Firewall compatibility | Excellent | Good | Moderate |
| Horizontal scaling | Easy (stateless) | Moderate | Requires pub/sub |
| Browser support | Universal | Modern browsers | Modern browsers |
## Scaling Considerations
Long polling and SSE are stateless-friendly. Each request can hit any server instance.
WebSockets are stateful. A client is bound to one server process for the duration of the connection. To broadcast messages across instances, use a pub/sub broker:
```python
# Redis pub/sub for WebSocket fan-out
import asyncio

import redis.asyncio as aioredis
from fastapi import FastAPI, WebSocket

app = FastAPI()
redis = aioredis.from_url("redis://localhost")


@app.websocket("/ws/{room}")
async def room_ws(ws: WebSocket, room: str):
    await ws.accept()
    pubsub = redis.pubsub()
    await pubsub.subscribe(room)

    async def listener():
        # Forward messages published to this room down the socket
        async for message in pubsub.listen():
            if message["type"] == "message":
                await ws.send_text(message["data"].decode())

    task = asyncio.create_task(listener())
    try:
        while True:
            data = await ws.receive_text()
            await redis.publish(room, data)
    except Exception:
        pass
    finally:
        task.cancel()  # stop the listener when the client disconnects
        await pubsub.unsubscribe(room)
```
## Best Practices
- Use SSE by default for server-to-client data streams. It is simpler to operate and HTTP/2 makes it scalable.
- Use WebSockets when you need true bidirectional, low-latency communication (chat, games, collaborative tools).
- Avoid long polling for new systems; use it only when other transports are blocked by infrastructure constraints.
- Always set a heartbeat or ping interval on WebSocket and SSE connections to detect stale connections through NAT/proxies.
- Implement exponential backoff on client reconnection attempts.
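For the last point, a full-jitter backoff is a common choice. This is a sketch; the function name and defaults are choices, not a standard API:

```python
import random


def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Full-jitter exponential backoff: a random delay drawn uniformly from
    [0, min(cap, base * 2**attempt)] for the given attempt number."""
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

Drawing uniformly from the whole interval, rather than sleeping exactly `base * 2**attempt`, spreads reconnects out so a restarted server is not hit by every client at the same instant.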
## Conclusion
Long polling is a workaround, not a design choice. SSE covers most notification and feed use cases with minimal complexity. WebSockets are the right tool when bidirectional, low-latency communication is a core requirement. Reach for the simplest option that meets your requirements.