Security & Hardening · Level 1 (Intro) · ~30 min

Enforce Pod Security Standards with Namespace Labels

Apply Pod Security Standards to namespaces using labels. Attempt to deploy a privileged pod and observe the admission rejection. Understand the three PSS profiles and when to apply each in a real environment.

Objective

Pod Security Standards (PSS) replaced PodSecurityPolicy in Kubernetes 1.25. They are enforced entirely through namespace labels — no webhook or admission controller installation required. In this exercise you will configure all three PSS modes (enforce, warn, audit) on a namespace, verify they work correctly by attempting policy-violating deployments, and build a decision matrix for which profile to apply to different workload types.

Prerequisites

A Kubernetes cluster running v1.25 or later (where the Pod Security admission controller is GA) and kubectl access with permission to create and label namespaces.

Pod Security Standards Profiles

Profile    | Use Case                                                         | What It Restricts
privileged | System/infrastructure workloads (CNI, CSI, monitoring agents)    | Nothing — all capabilities allowed
baseline   | Most application workloads; prevents known privilege escalations | hostNetwork, hostPID, privileged containers, dangerous capabilities, hostPath volumes
restricted | High-security workloads, multi-tenant environments               | Everything in baseline + requires non-root, dropped capabilities, and a seccomp profile

Steps

01

Create a namespace with all three PSS modes

Using all three modes simultaneously is useful when migrating workloads to a stricter policy. Enforce blocks violations, warn alerts developers, audit records them for review.

# Create namespace with restricted profile in all modes
kubectl create namespace pss-demo

kubectl label namespace pss-demo \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/enforce-version=v1.29 \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/warn-version=v1.29 \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/audit-version=v1.29

# Verify labels are applied
kubectl get namespace pss-demo --show-labels
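
The same labels can also be managed declaratively, which fits GitOps workflows where namespace policy lives in version control. A sketch of an equivalent Namespace manifest (filename is illustrative):

```yaml
# pss-demo-ns.yaml — declarative equivalent of the kubectl label commands
apiVersion: v1
kind: Namespace
metadata:
  name: pss-demo
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: "v1.29"
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/warn-version: "v1.29"
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/audit-version: "v1.29"
```

Apply it with kubectl apply -f pss-demo-ns.yaml.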
02

Attempt to deploy a privileged pod (should be rejected)

This pod requests host networking, privileged mode, and root — all common ingredients of container escapes. The restricted profile enforces several controls that block each of them.

# privileged-pod.yaml — will be rejected by restricted PSS
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: bad-pod
  namespace: pss-demo
spec:
  hostNetwork: true
  containers:
  - name: bad-container
    image: ubuntu:22.04
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
      runAsUser: 0
      allowPrivilegeEscalation: true
EOF

# Expected rejection output:
# Error from server (Forbidden): error when creating "STDIN":
# pods "bad-pod" is forbidden: violates PodSecurity "restricted:v1.29":
#   host namespaces (hostNetwork=true),
#   privileged (containers "bad-container" must not set
#     securityContext.privileged=true),
#   allowPrivilegeEscalation != false (containers "bad-container" must
#     set securityContext.allowPrivilegeEscalation=false),
#   unrestricted capabilities (containers "bad-container" must set
#     securityContext.capabilities.drop=["ALL"]),
#   runAsNonRoot != true (pod or containers "bad-container" must not
#     set securityContext.runAsUser=0),
#   seccompProfile (pod or containers "bad-container" must set
#     securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
03

Deploy a compliant pod with restricted profile

A pod meeting the restricted profile must run as non-root, drop all capabilities, set a seccomp profile, and disallow privilege escalation. A read-only root filesystem is not required by the profile, but it is good practice and is used here — which is why the writable paths nginx needs are mounted as emptyDir volumes.

# compliant-pod.yaml
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: good-pod
  namespace: pss-demo
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: nginxinc/nginx-unprivileged:1.25
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: var-run
      mountPath: /var/run
    - name: var-cache
      mountPath: /var/cache/nginx
  volumes:
  - name: tmp
    emptyDir: {}
  - name: var-run
    emptyDir: {}
  - name: var-cache
    emptyDir: {}
EOF

# Verify it runs
kubectl get pod good-pod -n pss-demo
# Expected: STATUS = Running
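To double-check what was actually admitted, you can read back the pod- and container-level security context fields with jsonpath (a quick inspection sketch against the running cluster; output is unformatted JSON):

```shell
# Pod-level security context (runAsNonRoot, seccompProfile, etc.)
kubectl get pod good-pod -n pss-demo \
  -o jsonpath='{.spec.securityContext}'

# Container-level security context (dropped capabilities, etc.)
kubectl get pod good-pod -n pss-demo \
  -o jsonpath='{.spec.containers[0].securityContext}'
```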
04

Test warn mode — see warnings without blocking

Create a namespace with only warn mode (not enforce) to see how PSS can surface would-be violations during a migration without blocking deployments.

# Create warn-only namespace (allows but warns)
kubectl create namespace pss-warn-demo
kubectl label namespace pss-warn-demo \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/warn-version=v1.29

# Deploy the bad pod — it will succeed but generate a warning
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: warned-pod
  namespace: pss-warn-demo
spec:
  containers:
  - name: app
    image: nginx:latest
    securityContext:
      runAsUser: 0
EOF

# Expected: warnings in the output, but the pod IS created:
# Warning: would violate PodSecurity "restricted:v1.29":
#   runAsNonRoot != true (container "app" must not set runAsUser=0)
#   (plus warnings for allowPrivilegeEscalation, capabilities, and seccompProfile)
# pod/warned-pod created

kubectl get pod warned-pod -n pss-warn-demo
Warn mode is ideal when migrating existing namespaces to stricter policies. Enable warn first, observe which workloads would be affected, fix them, then switch to enforce.
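
Before flipping warn to enforce on a real namespace, you can preview the impact with a server-side dry run: the API server evaluates the namespace's existing pods against the candidate policy and returns warnings without actually changing the label (shown here against the pss-warn-demo namespace from this step):

```shell
# Server-side dry run: reports which existing pods would violate
# restricted enforcement, without applying the label
kubectl label --dry-run=server --overwrite namespace pss-warn-demo \
  pod-security.kubernetes.io/enforce=restricted
```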
05

Apply baseline profile to an infrastructure namespace

System agents that need host namespaces — hostPID or hostNetwork, as node exporter and Falco do — exceed even the baseline profile and must run in a namespace labeled privileged. Baseline is the right floor for infrastructure pods that merely run as root or skip the seccomp and capability hardening that restricted demands.

# Create monitoring namespace with baseline (not restricted)
kubectl create namespace monitoring
kubectl label namespace monitoring \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/enforce-version=v1.29

# This pod runs as root with default capabilities and no seccomp
# profile — rejected by restricted, allowed under baseline
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: infra-agent-test
  namespace: monitoring
spec:
  containers:
  - name: agent
    image: prom/node-exporter:latest
    securityContext:
      runAsUser: 0
EOF

kubectl get pod infra-agent-test -n monitoring
# Expected: Running (root and default capabilities pass baseline)
06

Clean up

kubectl delete namespace pss-demo pss-warn-demo monitoring

PSS Profile Decision Matrix

Workload Type                      | Recommended Profile | Rationale
User application workloads         | restricted          | No legitimate need for host access or root
Prometheus, metrics-server         | baseline            | No host access needed, but default manifests often miss restricted requirements (seccomp, dropped capabilities)
CNI plugins (Calico, Cilium)       | privileged          | Requires host network manipulation and raw socket access
CSI drivers                        | privileged          | Requires privileged access to mount block devices
Falco, security agents             | privileged          | Requires kernel module loading or eBPF probes
Log shippers (Fluentd, Fluent Bit) | privileged          | Needs hostPath access to node log files, which baseline forbids
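
To see where each profile is actually applied across a cluster, kubectl's custom-columns output can surface the PSS labels on every namespace (dots inside label keys must be escaped with `\.`):

```shell
# List every namespace with its enforce/warn/audit PSS levels
# ("<none>" means no label, i.e. privileged by default)
kubectl get namespaces -o custom-columns=\
'NAME:.metadata.name,'\
'ENFORCE:.metadata.labels.pod-security\.kubernetes\.io/enforce,'\
'WARN:.metadata.labels.pod-security\.kubernetes\.io/warn,'\
'AUDIT:.metadata.labels.pod-security\.kubernetes\.io/audit'
```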

Success Criteria

- bad-pod is rejected in pss-demo with a "violates PodSecurity" error
- good-pod reaches Running in pss-demo
- warned-pod is created in pss-warn-demo, accompanied by a "would violate PodSecurity" warning
- the baseline-labeled monitoring namespace admits a pod that restricted would reject
- all three demo namespaces are deleted at cleanup

Further Reading

- Kubernetes documentation: "Pod Security Standards"
- Kubernetes documentation: "Enforce Pod Security Standards with Namespace Labels"
- Kubernetes documentation: "Migrate from PodSecurityPolicy to the Built-In PodSecurity Admission Controller"