Objective
One of the highest-leverage platform engineering automations is eliminating the manual ticket-driven process of onboarding new teams to a Kubernetes cluster. This exercise builds a webhook (HTTP API) that accepts a JSON request with team and tier fields and idempotently creates the namespace with the full standard resource set. The exercise applies resources directly via kubectl server-side apply, but the approach is GitOps-compatible: the webhook can instead write YAML to a Git repo rather than directly to the cluster, so Flux or Argo CD performs the actual apply.
Request (POST /namespaces)
└── Validate input (team name, tier)
└── Generate namespace YAML bundle
    ├── Namespace
    ├── ResourceQuota (tier-specific)
    ├── LimitRange (tier-specific)
    ├── NetworkPolicy (default-deny + allow-same-ns)
    └── RoleBinding (team engineers → edit role)
└── Commit to Git repo (namespaces/<team>/)
└── Return 201 with bundle summary
Prerequisites
- Python 3.9+ with pip
- A Kubernetes cluster with kubectl configured
- A GitHub personal access token with repo write permission (for GitOps mode)
- Familiarity with Python, HTTP APIs, and Kubernetes YAML
Steps
01
Install dependencies and set up project structure
pip install fastapi uvicorn pyyaml pygithub

## Project layout:
## ns-provisioner/
## ├── main.py          ← FastAPI app
## ├── templates.py     ← YAML generation functions
## ├── config.py        ← tier quotas, settings
## └── requirements.txt
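The layout above includes a requirements.txt. A minimal version listing the four packages from the install command might look like the following; pin exact versions as appropriate for your environment:

```
fastapi
uvicorn
pyyaml
pygithub
```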
02
Define tier quotas in config.py
ns-provisioner/config.py
# Resource quotas and limit ranges by tier
TIERS = {
    "small": {
        "quota": {
            "requests.cpu": "4",
            "requests.memory": "8Gi",
            "limits.cpu": "8",
            "limits.memory": "16Gi",
            "pods": "20",
        },
        "default_request": {"cpu": "100m", "memory": "128Mi"},
        "default_limit": {"cpu": "500m", "memory": "512Mi"},
    },
    "medium": {
        "quota": {
            "requests.cpu": "16",
            "requests.memory": "32Gi",
            "limits.cpu": "32",
            "limits.memory": "64Gi",
            "pods": "100",
        },
        "default_request": {"cpu": "200m", "memory": "256Mi"},
        "default_limit": {"cpu": "1000m", "memory": "1Gi"},
    },
    "large": {
        "quota": {
            "requests.cpu": "64",
            "requests.memory": "128Gi",
            "limits.cpu": "128",
            "limits.memory": "256Gi",
            "pods": "500",
        },
        "default_request": {"cpu": "500m", "memory": "512Mi"},
        "default_limit": {"cpu": "2000m", "memory": "4Gi"},
    },
}

VALID_TIERS = list(TIERS.keys())
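A quick sanity check worth running against a tier table like this: the per-container default limit should fit inside the tier's hard quota. The sketch below uses a simplified quantity parser (an assumption; it handles only the suffixes used in TIERS, not the full Kubernetes resource.Quantity grammar):

```python
# Simplified Kubernetes quantity parser for sanity-checking tier configs.
# Handles only the suffixes used above: "m" (millicores), Ki/Mi/Gi/Ti (memory).
SUFFIXES = {"m": 1e-3, "Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def parse_quantity(q: str) -> float:
    for suffix, factor in SUFFIXES.items():
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q)  # bare number, e.g. "4" cores

# Example checks against the "small" tier:
assert parse_quantity("500m") < parse_quantity("8")      # default cpu limit < cpu quota
assert parse_quantity("512Mi") < parse_quantity("16Gi")  # default mem limit < mem quota
```

Running a loop of such assertions over all tiers at startup catches typos like "512Gi" where "512Mi" was meant.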
03
Write YAML generation functions in templates.py
ns-provisioner/templates.py
from typing import Optional

import yaml

from config import TIERS


def namespace(team: str, tier: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": team,
            "labels": {
                "app.kubernetes.io/managed-by": "ns-provisioner",
                "platform/team": team,
                "platform/tier": tier,
                # Pod Security Standards enforcement
                "pod-security.kubernetes.io/enforce": "baseline",
                "pod-security.kubernetes.io/warn": "restricted",
            },
        },
    }


def resource_quota(team: str, tier: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "ResourceQuota",
        "metadata": {"name": "default", "namespace": team},
        "spec": {"hard": TIERS[tier]["quota"]},
    }


def limit_range(team: str, tier: str) -> dict:
    t = TIERS[tier]
    return {
        "apiVersion": "v1",
        "kind": "LimitRange",
        "metadata": {"name": "default", "namespace": team},
        "spec": {
            "limits": [{
                "type": "Container",
                "defaultRequest": t["default_request"],
                "default": t["default_limit"],
            }]
        },
    }


def network_policy_default_deny(team: str) -> dict:
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-all", "namespace": team},
        "spec": {"podSelector": {}, "policyTypes": ["Ingress", "Egress"]},
    }


def network_policy_allow_same_ns(team: str) -> dict:
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "allow-same-namespace", "namespace": team},
        "spec": {
            "podSelector": {},
            "policyTypes": ["Ingress", "Egress"],
            "ingress": [{"from": [{"podSelector": {}}]}],
            "egress": [{"to": [{"podSelector": {}}]}],
        },
    }


def rbac_binding(team: str, group: Optional[str] = None) -> dict:
    # Bind the team's SSO group to the built-in edit ClusterRole
    group = group or f"team-{team}"
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": "team-edit", "namespace": team},
        "subjects": [{
            "kind": "Group",
            "name": group,
            "apiGroup": "rbac.authorization.k8s.io",
        }],
        "roleRef": {
            "kind": "ClusterRole",
            "name": "edit",
            "apiGroup": "rbac.authorization.k8s.io",
        },
    }


def build_bundle(team: str, tier: str) -> str:
    docs = [
        namespace(team, tier),
        resource_quota(team, tier),
        limit_range(team, tier),
        network_policy_default_deny(team),
        network_policy_allow_same_ns(team),
        rbac_binding(team),
    ]
    return yaml.dump_all(docs, default_flow_style=False, sort_keys=False)
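build_bundle's contract is that its output parses back into the expected documents in order, which is what both kubectl and a GitOps controller rely on. A standalone round-trip check of that contract, using a two-document stand-in for the full six-document bundle so it runs without the project files:

```python
import yaml

# Stand-in for templates.build_bundle with two of the six documents
docs = [
    {"apiVersion": "v1", "kind": "Namespace", "metadata": {"name": "payments"}},
    {"apiVersion": "v1", "kind": "ResourceQuota",
     "metadata": {"name": "default", "namespace": "payments"}},
]
bundle = yaml.dump_all(docs, default_flow_style=False, sort_keys=False)

# The multi-doc string must round-trip back to the same dicts, in order
parsed = list(yaml.safe_load_all(bundle))
assert [d["kind"] for d in parsed] == ["Namespace", "ResourceQuota"]
assert parsed[0] == docs[0]
```

In the project itself, the same check against the real build_bundle makes a good unit test.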
04
Build the FastAPI webhook in main.py
ns-provisioner/main.py
import re
import subprocess

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, field_validator

from config import VALID_TIERS
from templates import build_bundle

app = FastAPI(title="Namespace Provisioner")


class NamespaceRequest(BaseModel):
    team: str
    tier: str

    # Pydantic v2 validators (use @validator without @classmethod on Pydantic v1)
    @field_validator("team")
    @classmethod
    def validate_team(cls, v):
        # DNS-label-safe: no trailing hyphen, 2-31 chars
        if not re.match(r"^[a-z][a-z0-9-]{0,29}[a-z0-9]$", v):
            raise ValueError("team must be lowercase alphanumeric/hyphens, 2-31 chars")
        return v

    @field_validator("tier")
    @classmethod
    def validate_tier(cls, v):
        if v not in VALID_TIERS:
            raise ValueError(f"tier must be one of {VALID_TIERS}")
        return v


@app.post("/namespaces", status_code=201)
def provision_namespace(req: NamespaceRequest):
    yaml_bundle = build_bundle(req.team, req.tier)
    # Apply directly to cluster (for direct-mode, not GitOps)
    result = subprocess.run(
        ["kubectl", "apply", "--server-side", "-f", "-"],
        input=yaml_bundle.encode(),
        capture_output=True,
    )
    if result.returncode != 0:
        raise HTTPException(status_code=500, detail=result.stderr.decode())
    return {
        "namespace": req.team,
        "tier": req.tier,
        "resources_created": [
            "Namespace",
            "ResourceQuota",
            "LimitRange",
            "NetworkPolicy/default-deny-all",
            "NetworkPolicy/allow-same-namespace",
            "RoleBinding/team-edit",
        ],
        "kubectl_output": result.stdout.decode(),
    }
05
Run the webhook and test idempotency
# Start the server
uvicorn main:app --reload --port 8080

# In another terminal — provision a namespace (first call)
curl -s -X POST http://localhost:8080/namespaces \
  -H "Content-Type: application/json" \
  -d '{"team": "payments", "tier": "medium"}' | jq .
## {
##   "namespace": "payments",
##   "tier": "medium",
##   "resources_created": ["Namespace", "ResourceQuota", "LimitRange", ...],
##   "kubectl_output": "namespace/payments serverside-applied\n..."
## }

# Call again — idempotent (--server-side apply handles this)
curl -s -X POST http://localhost:8080/namespaces \
  -H "Content-Type: application/json" \
  -d '{"team": "payments", "tier": "medium"}' | jq '.kubectl_output'
## "namespace/payments serverside-applied\n..." (no error — idempotent)

# Verify resources in the cluster
kubectl get namespace payments --show-labels
kubectl -n payments get resourcequota,limitrange,networkpolicy,rolebinding
In a GitOps environment, replace the kubectl apply subprocess with a GitHub API call that commits the YAML bundle to your cluster-state repo. Flux or Argo CD will then detect the new files and apply them. This keeps the webhook stateless and the cluster state fully auditable in Git.
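A sketch of that replacement using PyGithub (installed in step 01). The repo name, branch, and file path here are assumptions; adjust them to your cluster-state repository layout:

```python
def bundle_path(team: str) -> str:
    # Matches the layout the GitOps controller watches: namespaces/<team>/
    return f"namespaces/{team}/resources.yaml"


def commit_bundle(team: str, yaml_bundle: str, token: str, repo_name: str) -> None:
    # Import inside the function so direct mode works without PyGithub configured
    from github import Github, UnknownObjectException

    repo = Github(token).get_repo(repo_name)  # e.g. "my-org/cluster-state"
    path = bundle_path(team)
    message = f"ns-provisioner: provision namespace {team}"
    try:
        # Update if the file already exists — a re-run with identical content
        # produces an empty diff, so the operation stays idempotent
        existing = repo.get_contents(path, ref="main")
        repo.update_file(path, message, yaml_bundle, existing.sha, branch="main")
    except UnknownObjectException:
        repo.create_file(path, message, yaml_bundle, branch="main")
```

In provision_namespace, this call would take the place of the subprocess.run block, with the token and repo name supplied via config.py or environment variables.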
06
Validate input rejection
# Invalid team name (uppercase)
curl -s -X POST http://localhost:8080/namespaces \
  -H "Content-Type: application/json" \
  -d '{"team": "MyTeam", "tier": "small"}' | jq .
## 422 Unprocessable Entity — team must be lowercase alphanumeric/hyphens

# Invalid tier
curl -s -X POST http://localhost:8080/namespaces \
  -H "Content-Type: application/json" \
  -d '{"team": "analytics", "tier": "xl"}' | jq .
## 422 Unprocessable Entity — tier must be one of ['small', 'medium', 'large']

# Valid request
curl -s -X POST http://localhost:8080/namespaces \
  -H "Content-Type: application/json" \
  -d '{"team": "analytics", "tier": "large"}' | jq .resources_created
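The team-name rule can also be exercised offline with the standard library. This sketch uses a slightly tightened variant of the pattern (additionally rejecting a trailing hyphen, which Kubernetes DNS labels forbid):

```python
import re

# DNS-label-safe team names: lowercase start, 2-31 chars, no trailing hyphen
TEAM_RE = re.compile(r"^[a-z][a-z0-9-]{0,29}[a-z0-9]$")

valid = ["payments", "analytics", "ml-platform"]
invalid = ["MyTeam", "a", "-payments", "payments-", "team_underscore"]

assert all(TEAM_RE.match(t) for t in valid)
assert not any(TEAM_RE.match(t) for t in invalid)
```

Keeping a table like this in a unit test documents the naming contract without needing the server running.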
Success Criteria
- POST /namespaces returns 201 and all six resources exist in the new namespace
- Repeating the same request succeeds without errors (idempotent server-side apply)
- Invalid team names and tiers are rejected with 422 before anything touches the cluster
Further Reading
- Server-side apply: kubernetes.io/docs/reference/using-api/server-side-apply — enables idempotent, field-manager-aware applies
- FastAPI documentation: fastapi.tiangolo.com — validation, dependency injection
- Hierarchical Namespace Controller: github.com/kubernetes-sigs/hierarchical-namespaces — production-grade multi-tenancy
- Capsule operator: projectcapsule.dev — tenant isolation at scale