Kubernetes Network Policies in Practice


Introduction

By default, every pod in Kubernetes can communicate with every other pod. Network Policies restrict that communication. They are the primary mechanism for micro-segmentation in Kubernetes clusters. This post covers the policy model, common patterns, and how to verify policies are working.

How Network Policies Work

Network Policies are enforced by the CNI plugin (Calico, Cilium, Weave Net). If your CNI does not support NetworkPolicy, the objects are accepted by the API server but have no effect.

# Check if your CNI supports NetworkPolicy
# For Calico:
kubectl get daemonset -n kube-system calico-node
# For Cilium:
kubectl get daemonset -n kube-system cilium

Policy Basics

A NetworkPolicy selects pods via podSelector and defines ingress and egress rules.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api          # applies to pods with app=api label
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress   # from the ingress namespace...
    - podSelector:
        matchLabels:
          app: frontend   # ...OR from frontend pods in this namespace (separate list items are ORed)
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: postgres   # allow to postgres pods
    ports:
    - protocol: TCP
      port: 5432
  - to:
    - namespaceSelector: {}   # allow DNS to any namespace
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53                # DNS falls back to TCP for large responses

Important: once any NetworkPolicy selects a pod for a given direction (Ingress or Egress), all traffic in that direction not explicitly allowed is denied. Pods not selected by any policy continue to accept and send all traffic.
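The selection rule can be sketched in a few lines of illustrative Python. This is a toy decision function, not a real controller: it handles only matchLabels (no matchExpressions, no namespaces), just to make the "selected means default deny" behavior concrete.

```python
# Toy sketch of the NetworkPolicy selection rule. Pods and policies are
# plain dicts standing in for the API objects; only matchLabels is modeled.

def selects(pod_labels: dict, pod_selector: dict) -> bool:
    """An empty selector ({}) matches every pod in the namespace."""
    return all(pod_labels.get(k) == v for k, v in pod_selector.items())

def ingress_allowed(pod_labels: dict, policies: list,
                    source_matches_some_rule: bool) -> bool:
    """Decision for one pod's incoming traffic."""
    selecting = [p for p in policies if selects(pod_labels, p["podSelector"])]
    if not selecting:
        return True  # no policy selects the pod: everything is allowed
    # At least one policy selects the pod: default deny unless a rule matches.
    return source_matches_some_rule

pod = {"app": "api"}
policies = [{"podSelector": {"app": "api"}}]

print(ingress_allowed(pod, [], False))        # no policies -> True
print(ingress_allowed(pod, policies, False))  # selected, no matching rule -> False
print(ingress_allowed(pod, policies, True))   # selected, rule matches -> True
```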

Default Deny All

Start with a default-deny policy, then add explicit allows.

# Deny all ingress and egress for the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}   # empty selector = applies to all pods
  policyTypes:
  - Ingress
  - Egress

After applying this, all pods in production are isolated. Add targeted policies to restore necessary traffic.

Common Patterns

Allow Ingress from Load Balancer Only

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx
      podSelector:                 # same list item as the namespaceSelector:
        matchLabels:               # BOTH must match (ANDed)
          app.kubernetes.io/name: ingress-nginx

Allow Monitoring Scraping

18
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: production
spec:
  podSelector: {}    # all pods
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: monitoring
      podSelector:
        matchLabels:
          app: prometheus
    ports:
    - port: 9090
    - port: 8080    # common metrics port

Database Access: Only From App Tier

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 5432
  egress: []    # postgres has no egress (or add replication if needed)

Egress to External Services

# Allow egress to specific external CIDR
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8      # block internal network
        - 172.16.0.0/12
        - 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 443
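
The cidr/except semantics can be checked with Python's standard ipaddress module. Here ip_block_allows is a hypothetical helper written for illustration, not part of any Kubernetes client library: a destination is allowed if it falls inside cidr and inside none of the except ranges.

```python
import ipaddress

# Hypothetical helper mirroring ipBlock semantics: allowed when the address
# is inside `cidr` and outside every range listed in `excepts`.
def ip_block_allows(ip: str, cidr: str, excepts: list) -> bool:
    addr = ipaddress.ip_address(ip)
    if addr not in ipaddress.ip_network(cidr):
        return False
    return not any(addr in ipaddress.ip_network(e) for e in excepts)

PRIVATE = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]

print(ip_block_allows("93.184.216.34", "0.0.0.0/0", PRIVATE))  # public IP -> True
print(ip_block_allows("10.1.2.3", "0.0.0.0/0", PRIVATE))       # internal -> False
```

Keep in mind that ipBlock is evaluated against packet IPs as the CNI sees them, so traffic that is SNATed on its way in or out of the cluster may not match the way you expect.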

Testing and Debugging

# Test connectivity between pods
kubectl run test-pod --image=nicolaka/netshoot -it --rm -- bash

# From inside test-pod:
curl -v http://api.production.svc.cluster.local:8080/health
nc -zv postgres.production.svc.cluster.local 5432

# Verify NetworkPolicy is applied
kubectl describe networkpolicy -n production

# With Cilium: inspect policy enforcement
kubectl exec -n kube-system cilium-xxxxx -- cilium policy get

# With Calico: inspect policies and check Felix logs for enforcement
calicoctl get networkpolicy -n production -o yaml
kubectl logs -n kube-system calico-node-xxxxx -c calico-node | grep -i policy

Namespace Isolation Template

A complete namespace isolation setup:

# 1. Default deny all
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: production
spec:
  podSelector: {}
  policyTypes: [Ingress, Egress]
---
# 2. Allow DNS
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  egress:
  - ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
---
# 3. Allow intra-namespace communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: production
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  egress:
  - to:
    - podSelector: {}

Conclusion

Start every namespace with a default-deny policy and a DNS exception, then add specific allow rules. Always test connectivity after applying policies using a debug pod. NetworkPolicy is pod-to-pod L3/L4 control only; for L7 rules (HTTP paths, headers), use CiliumNetworkPolicy or a service mesh.
