Security & Hardening L1 · INTRO ~45 min

Default-Deny NetworkPolicy with Allowlist Rules

Implement a zero-trust network model for a namespace: start with default-deny for all ingress and egress, then add precise allow rules for specific pod selectors and ports. Verify connectivity with kubectl exec tests.

Objective

NetworkPolicies implement microsegmentation within Kubernetes. Without them, all pods in a cluster can reach all other pods by default — a flat network that allows lateral movement if any workload is compromised. This exercise creates a default-deny baseline, then opens specific communication paths, verifying each with connectivity tests from inside pods.

NetworkPolicy enforcement requires a CNI that supports it: Calico, Cilium, Weave, or Azure CNI with network policy enabled. Flannel and basic kubenet do NOT enforce NetworkPolicies — they accept the objects but ignore them.
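
A quick way to see which CNI agent is running before you start (agent pod names vary by distribution, so the grep pattern here is only a heuristic, not an authoritative check):

```shell
# Look for a policy-capable CNI agent in kube-system (names vary by install method)
kubectl get pods -n kube-system -o name | grep -Ei 'calico|cilium|weave' \
  || echo "No policy-capable CNI agent found by name -- verify enforcement manually"
```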

Prerequisites

A cluster whose CNI enforces NetworkPolicy (see the note above), and kubectl access with permission to create namespaces, pods, and NetworkPolicy objects.

Steps

01

Set up the test environment

Create a namespace with a frontend, backend, and database pod. The intended communication is: client → frontend → backend → database. No other paths should be allowed.

# Create namespace for the exercise
kubectl create namespace netpol-demo

# Deploy test pods with labels for policy targeting
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  namespace: netpol-demo
  labels:
    app: frontend
    tier: web
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
  namespace: netpol-demo
  labels:
    app: backend
    tier: api
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: database
  namespace: netpol-demo
  labels:
    app: database
    tier: data
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: external-client
  namespace: netpol-demo
  labels:
    app: external-client
spec:
  containers:
  - name: curl
    image: curlimages/curl:latest
    command: ["sleep", "3600"]
EOF

kubectl wait pods --all -n netpol-demo --for=condition=Ready --timeout=120s
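
Every allow rule in the later steps matches pods by label, so it is worth confirming the labels before writing policies:

```shell
# Show the labels the NetworkPolicies will select on
kubectl get pods -n netpol-demo --show-labels
```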

02

Verify open connectivity BEFORE applying policies

Document the baseline — all pods should be able to reach each other. This confirms your CNI is functional before you test policy enforcement.

# Get pod IPs
kubectl get pods -n netpol-demo -o wide

# From external-client, reach all pods (should all succeed before policies)
FRONTEND_IP=$(kubectl get pod frontend -n netpol-demo -o jsonpath='{.status.podIP}')
BACKEND_IP=$(kubectl get pod backend -n netpol-demo -o jsonpath='{.status.podIP}')
DB_IP=$(kubectl get pod database -n netpol-demo -o jsonpath='{.status.podIP}')

# Test connectivity from external-client
kubectl exec -n netpol-demo external-client -- \
  curl -s -o /dev/null --connect-timeout 3 http://$FRONTEND_IP && echo "PASS: frontend reachable"

kubectl exec -n netpol-demo external-client -- \
  curl -s -o /dev/null --connect-timeout 3 http://$DB_IP && echo "PASS: database reachable (should be blocked later)"

03

Apply the default-deny policies

A default-deny policy selects all pods (an empty podSelector) and lists a policyType without defining any rules for it. Any pod selected by such a policy has all traffic in that direction denied unless another policy explicitly allows it.

# default-deny-all.yaml
cat << 'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: netpol-demo
spec:
  podSelector: {}      # selects ALL pods in namespace
  policyTypes:
  - Ingress
  # No ingress rules = deny all ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: netpol-demo
spec:
  podSelector: {}
  policyTypes:
  - Egress
  # No egress rules = deny all egress
EOF

# Verify connectivity is now blocked (should timeout)
kubectl exec -n netpol-demo external-client -- \
  curl -s --connect-timeout 3 http://$FRONTEND_IP 2>&1 || echo "BLOCKED: frontend unreachable (expected)"

Default-deny-egress blocks DNS as well. After applying this, pods cannot resolve hostnames. You must add an explicit egress rule for DNS (port 53 UDP/TCP to kube-dns) if your pods need name resolution.

04

Allow DNS egress for all pods

Without DNS, pods cannot resolve service names. Allow egress on port 53 (UDP and TCP) so pods can reach kube-dns (CoreDNS) in the kube-system namespace. Note that the rule below has no 'to' clause, so it permits port-53 egress to any destination, not just kube-dns.

# allow-dns-egress.yaml
cat << 'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: netpol-demo
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
EOF
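
Allowing port 53 to any destination is a common simplification. On clusters where namespaces carry the standard kubernetes.io/metadata.name label (set automatically since Kubernetes 1.22) and the DNS pods carry the conventional k8s-app: kube-dns label, the policy can be scoped to CoreDNS itself. A sketch under those label assumptions:

```shell
# allow-dns-egress-scoped.yaml — tighter variant of the policy above
cat << 'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: netpol-demo
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
EOF
```

Combining namespaceSelector and podSelector in a single 'to' entry ANDs them: the destination must be a kube-dns pod in the kube-system namespace.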

05

Create specific allow rules for the intended communication paths

Now define the allowed communication paths: external ingress to frontend, frontend to backend, backend to database. Because default-deny covers both directions, each hop needs a matching pair of rules: an egress rule on the sending pod and an ingress rule on the receiving pod. No other paths will be allowed.

# allow-frontend-ingress.yaml — allows anyone to reach frontend:80
cat << 'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-ingress
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - port: 80
      protocol: TCP
    # No 'from' field = allow from anywhere
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 80
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-database
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - port: 80
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-egress-to-backend
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: backend
    ports:
    - port: 80
      protocol: TCP
---
# With default-deny egress in place, the senders also need egress rules:
# external-client must be allowed out to frontend, and backend out to database,
# or the tests in the next step will fail on the sending side.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-egress-to-frontend
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: external-client
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 80
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-egress-to-database
  namespace: netpol-demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - port: 80
      protocol: TCP
EOF
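
Before running the connectivity tests, it can help to confirm what the API server actually stored:

```shell
# List all policies in the namespace, then inspect one in detail
kubectl get networkpolicy -n netpol-demo
kubectl describe networkpolicy allow-frontend-ingress -n netpol-demo
```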

06

Verify allowed and blocked connections

Test each communication path. Allowed connections should succeed (HTTP 200); blocked connections should time out.

# ALLOWED: external-client → frontend (should succeed)
kubectl exec -n netpol-demo external-client -- \
  curl -s --connect-timeout 5 -o /dev/null -w "%{http_code}" \
  http://$FRONTEND_IP
# Expected: 200

# BLOCKED: external-client → database (should timeout)
kubectl exec -n netpol-demo external-client -- \
  curl -s --connect-timeout 3 http://$DB_IP 2>&1
# Expected: curl: (28) Connection timed out

# ALLOWED: frontend → backend
kubectl exec -n netpol-demo frontend -- \
  curl -s --connect-timeout 5 -o /dev/null -w "%{http_code}" \
  http://$BACKEND_IP
# Expected: 200

# BLOCKED: frontend → database (no direct path)
kubectl exec -n netpol-demo frontend -- \
  curl -s --connect-timeout 3 http://$DB_IP 2>&1
# Expected: curl: (28) Connection timed out

# BLOCKED: external-client → backend (must go through frontend)
kubectl exec -n netpol-demo external-client -- \
  curl -s --connect-timeout 3 http://$BACKEND_IP 2>&1
# Expected: curl: (28) Connection timed out
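
The same exec-and-curl pattern repeats for every path, so a small wrapper keeps the output readable. The helper name probe and its ALLOW/BLOCK labels are illustrative, not kubectl features; a minimal sketch:

```shell
# probe LABEL CMD... : run CMD, print ALLOW if it exits 0, BLOCK otherwise
probe() {
  local label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "ALLOW  $label"
  else
    echo "BLOCK  $label"
  fi
}

# Self-contained demo of the helper:
probe "always-succeeds" true    # prints: ALLOW  always-succeeds
probe "always-fails" false      # prints: BLOCK  always-fails

# In this exercise it wraps the curl tests, e.g.:
# probe "client->frontend" kubectl exec -n netpol-demo external-client -- \
#   curl -s --connect-timeout 3 http://$FRONTEND_IP
```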

07

Clean up

kubectl delete namespace netpol-demo

Success Criteria

external-client can reach frontend on port 80, and frontend can reach backend; every other pod-to-pod path in the namespace (client to backend, client to database, frontend to database) times out.

Key Concepts

NetworkPolicies are additive allowlists: default-deny comes from selecting all pods with a policyType that has no rules, and once both directions are denied, every permitted path needs an egress rule on the sender plus an ingress rule on the receiver. Enforcement depends entirely on the CNI.

Further Reading

The Kubernetes documentation on Network Policies covers selector semantics, ipBlock rules, and known limitations.