This document explains how Terraform is configured for both AWS EKS deployment and local development with kind, including the benefits of using 127.0.0.1.nip.io for local hostnames.
The Terraform infrastructure is organized into three main components:
```
terraform/
├── eks/            # AWS EKS production deployment
├── kind-local/     # Local development with kind
└── module-aws/     # Reusable AWS modules
    ├── eks/        # EKS cluster module
    ├── karpenter/  # Karpenter autoscaler
    └── network/    # VPC and networking
```
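The `eks/` and `kind-local/` directories are the roots you actually apply, while `module-aws/` holds the shared building blocks they compose. As a rough illustration of how the pieces could be wired together, here is a minimal sketch of an `eks/` root calling the reusable modules; the variable names, module inputs, and output names are assumptions for illustration, not taken from the repository.

```hcl
# eks/main.tf — illustrative sketch only; inputs and outputs are assumed
module "network" {
  source       = "../module-aws/network"
  cluster_name = var.cluster_name
}

module "eks" {
  source             = "../module-aws/eks"
  cluster_name       = var.cluster_name
  vpc_id             = module.network.vpc_id             # assumed output name
  private_subnet_ids = module.network.private_subnet_ids # assumed output name
}

module "karpenter" {
  source       = "../module-aws/karpenter"
  cluster_name = module.eks.cluster_name # assumed output name
}
```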
The local development setup uses kind (Kubernetes IN Docker) to create a fully functional Kubernetes cluster on your local machine, complete with port mappings for external access and a read-only mount of the monorepo for ArgoCD:
resource "kind_cluster" "this" {
name = var.cluster_name
node_image = "kindest/node:${var.kubernetes_version}"
kubeconfig_path = "${path.root}/kubeconfig"
kind_config {
kind = "Cluster"
api_version = "[kind.x-k8s.io/v1alpha4](<http://kind.x-k8s.io/v1alpha4>)"
networking {
disable_default_cni = true # Use Cilium instead
kube_proxy_mode = "none" # Cilium replaces kube-proxy
}
node {
role = "control-plane"
# Port mappings for external access
extra_port_mappings {
container_port = 80 # HTTP traffic
host_port = 80
}
extra_port_mappings {
container_port = 30025 # Maildev SMTP
host_port = 1025
}
extra_port_mappings {
container_port = 30432 # PostgreSQL
host_port = 5432
}
extra_port_mappings {
container_port = 30094 # Kafka
host_port = 9094
}
# Mount monorepo for ArgoCD access
extra_mounts {
host_path = "${path.root}/../../"
container_path = "/mnt/monorepo-template.git"
read_only = true
}
}
}
}
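Each `extra_port_mappings` entry forwards a host port to a port on the kind node container, so in-cluster services exposed as NodePorts on those container ports become reachable on localhost. As a hedged sketch of how the Maildev mapping above could be consumed, the following NodePort service pins its node port to 30025; the resource name, namespace, selector labels, and target port are assumptions for illustration, not the repository's actual manifests.

```hcl
# Illustrative sketch using the hashicorp/kubernetes provider.
# Metadata, selector, and target_port are assumed values.
resource "kubernetes_service_v1" "maildev_smtp" {
  metadata {
    name      = "maildev"
    namespace = "maildev"
  }

  spec {
    type = "NodePort"

    selector = {
      app = "maildev" # assumed pod label
    }

    port {
      name        = "smtp"
      port        = 1025
      target_port = 1025  # assumed Maildev SMTP container port
      node_port   = 30025 # matches container_port in the kind mapping above
    }
  }
}
```

With a service like this in place, `127.0.0.1:1025` on the host reaches Maildev through the 1025 → 30025 mapping defined on the kind node.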
Cilium provides advanced networking capabilities, replacing both the default CNI and kube-proxy and enabling Gateway API support:
```yaml
# cilium.values.yaml
ipam:
  mode: kubernetes

kubeProxyReplacement: true

gatewayAPI:
  enabled: true
  enableAlpn: true
  enableAppProtocol: true
  hostNetwork:
    enabled: true

envoy:
  enabled: true
```
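These values are typically passed to the Cilium Helm chart during cluster bootstrap. A minimal sketch using the `helm_release` resource from the hashicorp/helm provider is shown below; the release name, chart version, and values-file path are assumptions rather than the repository's actual configuration.

```hcl
# Illustrative sketch: install Cilium with the values file shown above.
# Chart version and file path are assumed, not taken from the repository.
resource "helm_release" "cilium" {
  name       = "cilium"
  repository = "https://helm.cilium.io"
  chart      = "cilium"
  namespace  = "kube-system"
  version    = "1.16.5" # example version only

  values = [
    file("${path.module}/cilium.values.yaml")
  ]

  depends_on = [kind_cluster.this]
}
```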
Benefits: