Cluster Architecture L2 · PRACTICAL ~90 min

Multi-Cloud Workload Deployment Comparison

Deploy the same workload on EKS and GKE using a shared Terraform module with provider-specific overrides. Build a structured comparison matrix of control plane differences, default CNI, and managed add-on gaps across cloud providers.

Objective

Write a reusable Terraform module for a standard Kubernetes workload deployment. Deploy it against both EKS and GKE clusters using provider-specific override files. Document the differences you observe in a comparison matrix covering networking, authentication, node images, load balancer behaviour, and managed add-on availability. This exercise builds the muscle memory needed to design cloud-agnostic platform layers.

Prerequisites

Steps

01

Design the shared module structure

The module deploys a Deployment, Service, HorizontalPodAutoscaler, and PodDisruptionBudget. Provider-specific behaviour (load balancer annotations, node selectors) goes in override files, not the module itself.

multi-cloud-workload/
├── modules/
│   └── app-deployment/
│       ├── main.tf         # Deployment, Service, HPA, PDB
│       ├── variables.tf
│       └── outputs.tf
├── envs/
│   ├── eks/
│   │   ├── main.tf         # calls module with AWS overrides
│   │   ├── providers.tf    # AWS provider config
│   │   └── terraform.tfvars
│   └── gke/
│       ├── main.tf         # calls module with GCP overrides
│       ├── providers.tf    # GCP provider config
│       └── terraform.tfvars
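
The module's input variables, referenced throughout main.tf, might be declared as in the following sketch (types match the usage shown below; the empty default for service_annotations is an assumption that lets the module run without any provider-specific configuration):

```hcl
# modules/app-deployment/variables.tf (illustrative sketch)
variable "app_name"  { type = string }
variable "namespace" { type = string }
variable "image"     { type = string }
variable "port"      { type = number }
variable "replicas"  { type = number }

variable "cpu_request"    { type = string }
variable "memory_request" { type = string }
variable "cpu_limit"      { type = string }
variable "memory_limit"   { type = string }

# Empty by default, so the module deploys cleanly with no cloud-specific input
variable "service_annotations" {
  type    = map(string)
  default = {}
}
```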
02

Write the shared module (modules/app-deployment/main.tf)

The module accepts service annotations as a variable, allowing each environment to inject provider-specific load balancer configuration without forking the module.

# modules/app-deployment/main.tf
resource "kubernetes_deployment_v1" "app" {
  metadata {
    name      = var.app_name
    namespace = var.namespace
    labels    = { app = var.app_name }
  }
  spec {
    replicas = var.replicas
    selector { match_labels = { app = var.app_name } }
    template {
      metadata { labels = { app = var.app_name } }
      spec {
        container {
          name  = var.app_name
          image = var.image
          resources {
            requests = { cpu = var.cpu_request, memory = var.memory_request }
            limits   = { cpu = var.cpu_limit,   memory = var.memory_limit   }
          }
          readiness_probe {
            http_get {
              path = "/healthz"
              port = var.port
            }
            initial_delay_seconds = 5
            period_seconds        = 10
          }
        }
      }
    }
  }
}

resource "kubernetes_service_v1" "app" {
  metadata {
    name        = var.app_name
    namespace   = var.namespace
    # Provider-specific annotations injected here
    annotations = var.service_annotations
  }
  spec {
    selector = { app = var.app_name }
    type     = "LoadBalancer"
    port {
      port        = 80
      target_port = var.port
    }
  }
}

resource "kubernetes_horizontal_pod_autoscaler_v2" "app" {
  metadata {
    name      = var.app_name
    namespace = var.namespace
  }
  spec {
    scale_target_ref {
      api_version = "apps/v1"
      kind        = "Deployment"
      name        = var.app_name
    }
    min_replicas = var.replicas
    max_replicas = var.replicas * 3
    metric {
      type = "Resource"
      resource {
        name = "cpu"
        target {
          type                = "Utilization"
          average_utilization = 70
        }
      }
    }
  }
}
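
Step 01 also lists a PodDisruptionBudget among the module's resources. A minimal sketch, assuming a simple policy that keeps at least one pod available during voluntary disruptions:

```hcl
# modules/app-deployment/main.tf (continued) — illustrative sketch
resource "kubernetes_pod_disruption_budget_v1" "app" {
  metadata {
    name      = var.app_name
    namespace = var.namespace
  }
  spec {
    # Keep at least one replica running during node drains and upgrades
    min_available = 1
    selector {
      match_labels = { app = var.app_name }
    }
  }
}
```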
03

EKS environment override (envs/eks/main.tf)

EKS requires specific annotations for the AWS Load Balancer Controller to provision an NLB. Without them, the legacy in-tree controller falls back to provisioning a Classic Load Balancer instead of an NLB.

# envs/eks/main.tf
module "app" {
  source    = "../../modules/app-deployment"
  app_name  = "demo-app"
  namespace = "default"
  image     = "nginx:1.25"
  port      = 80
  replicas  = 3

  cpu_request    = "100m"
  memory_request = "128Mi"
  cpu_limit      = "500m"
  memory_limit   = "256Mi"

  # AWS-specific: NLB via AWS Load Balancer Controller
  service_annotations = {
    "service.beta.kubernetes.io/aws-load-balancer-type"             = "external"
    "service.beta.kubernetes.io/aws-load-balancer-nlb-target-type"  = "ip"
    "service.beta.kubernetes.io/aws-load-balancer-scheme"           = "internet-facing"
  }
}

# Data sources supplying EKS connection details to the kubernetes provider
data "aws_eks_cluster" "cluster" { name = "my-eks-cluster" }
data "aws_eks_cluster_auth" "cluster" { name = "my-eks-cluster" }
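
These data sources are typically consumed in envs/eks/providers.tf to wire up the kubernetes provider. A sketch, assuming the data sources above:

```hcl
# envs/eks/providers.tf (sketch)
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
```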
04

GKE environment override (envs/gke/main.tf)

For a Service of type LoadBalancer, GKE provisions an external passthrough Network Load Balancer with no annotations required. The cloud.google.com/neg annotation additionally enables container-native load balancing via Network Endpoint Groups when the Service backs an Ingress, which is Google's recommended approach on GKE.

# envs/gke/main.tf
module "app" {
  source    = "../../modules/app-deployment"
  app_name  = "demo-app"
  namespace = "default"
  image     = "nginx:1.25"
  port      = 80
  replicas  = 3

  cpu_request    = "100m"
  memory_request = "128Mi"
  cpu_limit      = "500m"
  memory_limit   = "256Mi"

  # GCP-specific: container-native load balancing
  service_annotations = {
    "cloud.google.com/neg" = "{\"ingress\": true}"
  }
}

# GKE provider configuration
data "google_client_config" "default" {}
data "google_container_cluster" "cluster" {
  name     = "my-gke-cluster"
  location = "us-central1"
}
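
As on the EKS side, these data sources feed the kubernetes provider, typically in envs/gke/providers.tf. A sketch, assuming the data sources above:

```hcl
# envs/gke/providers.tf (sketch)
provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.cluster.endpoint}"
  cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
  token                  = data.google_client_config.default.access_token
}
```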
05

Deploy to both clusters

# Deploy to EKS
cd envs/eks
terraform init && terraform apply -auto-approve

# Switch to EKS context and verify
aws eks update-kubeconfig --name my-eks-cluster --region us-east-1
kubectl get svc demo-app -o wide
kubectl get pods -o wide --show-labels

# Deploy to GKE
cd ../gke
terraform init && terraform apply -auto-approve

# Switch to GKE context and verify
gcloud container clusters get-credentials my-gke-cluster \
  --region us-central1
kubectl get svc demo-app -o wide
kubectl get pods -o wide --show-labels
06

Collect data for the comparison matrix

Run these commands on each cluster and record the output in your matrix. Pay attention to differences in node OS image, CNI plugin, default storage classes, and load balancer provisioning time.

# 1. CNI plugin in use
kubectl get pods -n kube-system | grep -E 'cni|calico|cilium|aws-node|flannel'

# 2. Default storage classes
kubectl get storageclass

# 3. Node OS image
kubectl get nodes -o json | jq '.items[].status.nodeInfo.osImage'

# 4. Kubernetes version and control plane info
kubectl version --output=yaml

# 5. Cluster DNS provider
kubectl get pods -n kube-system | grep dns

# 6. Managed add-ons present
kubectl get pods -n kube-system -o wide

# 7. Load balancer external IP assignment time
time kubectl wait svc/demo-app --for=jsonpath='{.status.loadBalancer.ingress}' \
  --timeout=300s
07

Complete the comparison matrix

Fill in this matrix based on your observations. The gaps identify where your platform abstraction layer needs to normalise differences.

Dimension          | EKS                                           | GKE                                 | Notes
Default CNI        | aws-node (VPC CNI)                            | kubenet or Dataplane V2             | EKS CNI uses ENIs; GKE Dataplane V2 is eBPF-based Cilium
Network Policy     | Calico (optional add-on)                      | Built-in via Dataplane V2           | Must explicitly install Calico on EKS
Node OS            | Amazon Linux 2 or Bottlerocket                | Container-Optimized OS (cos)        | Different kernel tuning defaults
Load Balancer      | AWS LBC (NLB/ALB)                             | Cloud Load Balancing (NEG)          | Different annotation schemas
Storage Default    | gp2 (EBS)                                     | standard (pd-standard)              | Both zonal; cross-zone volume attachment is unsupported
Auth Method        | IAM + aws-auth ConfigMap / EKS access entries | Google IAM + RBAC                   | EKS recently added the EKS Access Entries API
Managed Add-ons    | CoreDNS, kube-proxy, VPC CNI, EBS CSI         | All bundled and GKE-managed         | GKE manages add-on upgrades automatically
Control Plane Logs | CloudWatch Logs (opt-in per component)        | Cloud Logging (enabled by default)  | EKS requires explicit enablement per log type

Success Criteria

Key Concepts

Further Reading