
Conversation

@danielpanzella
Contributor

@danielpanzella danielpanzella commented Sep 30, 2025

Move pods onto a separate subnet by default.

BREAKING CHANGE: A new kubectl terraform provider is required

Summary by CodeRabbit

  • New Features

    • Added dedicated pod subnets and propagated them into networking and EKS.
    • Enabled automatic ENIConfig creation per pod subnet for VPC CNI.
    • Added a pod security group with access to database (MySQL) and Redis.
    • Exposed outputs for pod subnets and the pod security group ID.
  • Chores

    • Added the kubectl provider for manifest management.
    • Made key VPC CIDR inputs required; added inputs for pod subnets and their CIDRs.

@coderabbitai

coderabbitai bot commented Sep 30, 2025

Walkthrough

Adds pod subnet CIDRs and pod subnets to the networking module, exposes their IDs/CIDRs, wires pod subnet CIDRs/subnets into root and EKS modules, configures VPC CNI addon with per-subnet ENIConfig manifests via kubectl, and adds a pods security group with DB/ElastiCache ingress rules and an output for its ID.
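For context, a minimal sketch of what the per-subnet ENIConfig wiring could look like in modules/app_eks/add-ons.tf, using the resource names from the walkthrough (data.aws_subnet.pod_subnets, kubectl_manifest.vpc_eni_config); the for_each pattern and the securityGroups value are assumptions, not the PR's exact code:

# Sketch only: look up each pod subnet so its AZ and ID can drive an ENIConfig.
data "aws_subnet" "pod_subnets" {
  for_each = toset(var.network_pod_subnets)
  id       = each.value
}

# One ENIConfig per pod subnet; the VPC CNI matches it by availability zone name.
resource "kubectl_manifest" "vpc_eni_config" {
  for_each = data.aws_subnet.pod_subnets

  yaml_body = yamlencode({
    apiVersion = "crd.k8s.amazonaws.com/v1alpha1"
    kind       = "ENIConfig"
    metadata   = { name = each.value.availability_zone }
    spec = {
      subnet         = each.value.id
      securityGroups = [aws_security_group.pods.id] # assumed: attach the new pods SG
    }
  })
}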

Changes

Root Terraform wiring (main.tf, variables.tf, versions.tf)
Wire pod subnet CIDRs into networking when creating the VPC or from provided vars; concatenate pod CIDRs with private DB CIDRs for allowlists; pass pod subnet IDs to app_eks; add network_pod_subnets/network_pod_subnet_cidrs variables and require the kubectl provider.

Networking: pod subnets (modules/networking/main.tf, modules/networking/variables.tf, modules/networking/outputs.tf)
Add a pod_subnet_cidrs variable; create aws_subnet.pod_subnets when create_vpc is true; expose pod_subnets and pod_subnet_cidrs outputs; mark several VPC variables as required and adjust the elasticache_subnet_cidrs default.

EKS module: networking and add-ons (modules/app_eks/main.tf, modules/app_eks/variables.tf, modules/app_eks/outputs.tf, modules/app_eks/add-ons.tf, modules/app_eks/versions.tf)
Add a network_pod_subnets variable; create aws_security_group.pods and ingress rules to the database and ElastiCache SGs (a sketch follows this table); extend aws_eks_addon.vpc_cni with configuration_values; add data.aws_subnet.pod_subnets and kubectl_manifest.vpc_eni_config to create a per-subnet ENIConfig; output pod_security_group_id; require the kubectl provider.
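As noted in the EKS module row above, a sketch of the pods security group and its ingress rules, assuming the default MySQL (3306) and Redis (6379) ports and hypothetical variable names for the peer security group IDs; only aws_security_group.pods itself is named in the walkthrough:

resource "aws_security_group" "pods" {
  name   = "${var.namespace}-pods" # naming scheme is an assumption
  vpc_id = var.network_id          # hypothetical input for the VPC ID
}

# Let pod ENIs reach MySQL by adding an ingress rule on the database SG.
resource "aws_security_group_rule" "pods_to_database" {
  type                     = "ingress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  security_group_id        = var.database_security_group_id # hypothetical input
  source_security_group_id = aws_security_group.pods.id
}

# Same pattern for Redis on the ElastiCache SG.
resource "aws_security_group_rule" "pods_to_elasticache" {
  type                     = "ingress"
  from_port                = 6379
  to_port                  = 6379
  protocol                 = "tcp"
  security_group_id        = var.elasticache_security_group_id # hypothetical input
  source_security_group_id = aws_security_group.pods.id
}

Keying the rules to a source security group rather than the pod CIDRs means database and Redis access follows the pod ENIs regardless of how the pod subnets are sized.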

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Root as Root (main.tf)
  participant Net as module.networking
  participant AWS as AWS (VPC/Subnets)
  participant EKS as module.app_eks
  participant Addon as EKS Addon (vpc-cni)
  participant K8s as Kubernetes API

  Root->>Net: provide pod_subnet_cidrs or use module output
  Net->>AWS: create aws_subnet.pod_subnets (when create_vpc)
  Net-->>Root: output pod_subnets / pod_subnet_cidrs
  Root->>EKS: pass network_pod_subnets + SG context
  EKS->>AWS: create aws_security_group.pods and SG rules
  EKS->>Addon: enable custom network config (configuration_values)
  EKS->>K8s: create kubectl_manifest.vpc_eni_config per subnet
  Note over Addon,K8s: ENIConfig manifests provided for VPC CNI to map pods to subnets
sequenceDiagram
  autonumber
  participant Pod as Pod (scheduled)
  participant CNI as VPC CNI
  participant K8s as Kubernetes API (ENIConfig)
  participant EC2 as EC2 (ENIs/Subnets)

  Pod->>CNI: request IP for pod
  CNI->>K8s: read ENIConfig for zone/subnet
  CNI->>EC2: provision/attach ENI in pod subnet
  EC2-->>CNI: ENI with IP(s) assigned
  CNI-->>Pod: return pod IP (ENI uses pods SG)
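The "enable custom network config" step in the first diagram maps to the configuration_values added to aws_eks_addon.vpc_cni. A sketch under the assumption that the standard VPC CNI custom-networking settings are used; the cluster reference is a placeholder:

resource "aws_eks_addon" "vpc_cni" {
  cluster_name = aws_eks_cluster.this.name # placeholder; the module's actual cluster resource name may differ
  addon_name   = "vpc-cni"

  # With these env vars the CNI stops allocating pod IPs from the node's subnet and
  # instead looks up the ENIConfig whose name matches the node's topology.kubernetes.io/zone label.
  configuration_values = jsonencode({
    env = {
      AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG = "true"
      ENI_CONFIG_LABEL_DEF               = "topology.kubernetes.io/zone"
    }
  })
}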

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

Poem

In a warren of CIDRs I quietly hop,
I map every subnet and never stop.
ENIConfigs planted, one per zone's floor,
Pods find their IPs and scurry out the door.
I twitch my whiskers — network stitched, ready to roar 🐇✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Description Check: ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
Title Check: ✅ Passed. The title succinctly and accurately describes the main change (moving pods onto a separate subnet by default), aligning with the PR's objectives and file updates without unnecessary detail or noise.
Docstring Coverage: ✅ Passed. No functions found in the changes; docstring coverage check skipped.
✨ Finishing touches
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch danielpanzella/separate-pod-subnets

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7bd39b7 and 93a5ff6.

📒 Files selected for processing (2)
  • modules/app_eks/add-ons.tf (1 hunks)
  • modules/networking/main.tf (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • modules/app_eks/add-ons.tf
  • modules/networking/main.tf

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
main.tf (1)

55-182: Re-run terraform fmt for this file
CI flagged terraform fmt -check. Please format this file before merging.

modules/app_eks/add-ons.tf (1)

71-112: Format updates before merge
terraform fmt -check is failing here as well. Please run terraform fmt so the kubectl manifest block and addon config comply with Terraform style.

🧹 Nitpick comments (2)
variables.tf (1)

247-299: Clarify pod subnet variable descriptions.

Both new variables still use the generic “private subnet” phrasing, which makes it easy to mix them up with the existing private/database subnets. A short wording tweak keeps callers from mis-wiring IDs or CIDRs.

Apply something like:

 variable "network_pod_subnets" {
-  description = "A list of the identities of the private subnetworks in which pods will be deployed."
+  description = "Pod subnet IDs used for ENIConfig / pod IP assignment."
 }

 variable "network_pod_subnet_cidrs" {
-  description = "List of private subnet CIDR ranges to create in VPC."
+  description = "Pod subnet CIDR ranges reserved for pod ENIs."
 }
modules/app_eks/variables.tf (1)

139-142: Flag the new required network_pod_subnets input for consumers.

This module now needs pod subnet IDs to plan successfully; please update the module docs/examples so downstream users (especially those sourcing modules/app_eks directly) know to supply them.
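A short example of what direct consumers would now need to add to their module call; everything except network_pod_subnets and the pod_subnets output named in this review is elided or assumed:

module "app_eks" {
  source = "./modules/app_eks"

  # New required input: IDs of the dedicated pod subnets.
  network_pod_subnets = module.networking.pod_subnets

  # ... existing arguments unchanged ...
}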

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2f23591 and 7bd39b7.

📒 Files selected for processing (11)
  • main.tf (4 hunks)
  • modules/app_eks/add-ons.tf (1 hunks)
  • modules/app_eks/main.tf (3 hunks)
  • modules/app_eks/outputs.tf (1 hunks)
  • modules/app_eks/variables.tf (1 hunks)
  • modules/app_eks/versions.tf (1 hunks)
  • modules/networking/main.tf (1 hunks)
  • modules/networking/outputs.tf (2 hunks)
  • modules/networking/variables.tf (1 hunks)
  • variables.tf (2 hunks)
  • versions.tf (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: Terraform Lint and Format Check
modules/app_eks/add-ons.tf

[error] 1-1: Terraform fmt -check -recursive detected formatting issues in this file.

modules/networking/main.tf

[error] 1-1: Terraform fmt -check -recursive detected formatting issues in this file.

main.tf

[error] 1-1: Terraform fmt -check -recursive detected formatting issues in this file.

🔇 Additional comments (5)
modules/app_eks/versions.tf (1)

3-6: Provider constraint in-sync.

Thanks for scoping the kubectl provider requirement inside the module—this keeps direct consumers aligned with the root constraints.
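Presumably the module-level block mirrors the root constraint reviewed further down, along these lines:

terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.19.0"
    }
  }
}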

modules/app_eks/outputs.tf (1)

21-23: New pod SG output makes sense.

Surfacing this ID will help downstream modules reference the pod isolation rules without spelunking module internals.
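The output itself is presumably a one-liner along these lines (the description text is an assumption):

output "pod_security_group_id" {
  description = "ID of the security group attached to pod ENIs."
  value       = aws_security_group.pods.id
}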

modules/networking/variables.tf (3)

32-36: LGTM!

The elasticache_subnet_cidrs variable correctly uses an empty list default, making it truly optional while maintaining consistency with the "(Optional)" description.


38-41: LGTM!

The new pod_subnet_cidrs variable is correctly defined as required (no default attribute) and follows the established pattern for subnet CIDR variables. This addition properly supports the PR's objective of moving pods onto separate subnets.


12-15: Required variables correctly have no defaults. The cidr, private_subnet_cidrs, public_subnet_cidrs, and database_subnet_cidrs blocks contain no default attributes, making them truly required; elasticache_subnet_cidrs retains default = [] and the new pod_subnet_cidrs has no default.
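For illustration, a required declaration in that pattern might read (type and description wording are assumptions):

variable "pod_subnet_cidrs" {
  type        = list(string)
  description = "List of pod subnet CIDR ranges to create in the VPC."
  # No default, so every caller must size the pod address space explicitly.
}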

Comment on lines 41 to 51
resource "aws_subnet" "pod_subnets" {
count = var.create_vpc ? length(var.pod_subnet_cidrs) : 0

vpc_id = module.vpc.vpc_id
cidr_block = var.pod_subnet_cidrs[count.index]
availability_zone = data.aws_availability_zones.available.names[count.index]

tags = {
Name = "${var.namespace}-vpc-pods-${data.aws_availability_zones.available.names[count.index]}"
}
}

⚠️ Potential issue | 🟡 Minor

Run terraform fmt on this module
terraform fmt -check is failing on this file. Please run terraform fmt (with -recursive if needed) so CI passes.

🤖 Prompt for AI Agents
In modules/networking/main.tf around lines 41 to 51 the file is failing
terraform fmt -check; run terraform fmt on this file (or run terraform fmt
-recursive at the module root) to reformat the HCL so the CI check passes, then
re-add the formatted file to the commit.

⚠️ Potential issue | 🔴 Critical

Pod subnets lack route-table associations — traffic will blackhole
These subnets never get associated with the private route tables, so they fall back to the VPC’s main table (only the local route by default). Pods launched via the custom ENI will lose egress/NAT connectivity and won’t reach databases or external services. Please associate each pod subnet with the same private route table (one per AZ) or otherwise ensure they inherit the expected routes.(repost.aws)

Apply something along these lines:

 resource "aws_subnet" "pod_subnets" {
   count = var.create_vpc ? length(var.pod_subnet_cidrs) : 0

   vpc_id            = module.vpc.vpc_id
   cidr_block        = var.pod_subnet_cidrs[count.index]
   availability_zone = data.aws_availability_zones.available.names[count.index]

   tags = {
     Name = "${var.namespace}-vpc-pods-${data.aws_availability_zones.available.names[count.index]}"
   }
 }

+resource "aws_route_table_association" "pod_subnets" {
+  count          = var.create_vpc ? length(var.pod_subnet_cidrs) : 0
+  subnet_id      = aws_subnet.pod_subnets[count.index].id
+  route_table_id = module.vpc.private_route_table_ids[count.index]
+}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

resource "aws_subnet" "pod_subnets" {
  count = var.create_vpc ? length(var.pod_subnet_cidrs) : 0

  vpc_id            = module.vpc.vpc_id
  cidr_block        = var.pod_subnet_cidrs[count.index]
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name = "${var.namespace}-vpc-pods-${data.aws_availability_zones.available.names[count.index]}"
  }
}

resource "aws_route_table_association" "pod_subnets" {
  count          = var.create_vpc ? length(var.pod_subnet_cidrs) : 0
  subnet_id      = aws_subnet.pod_subnets[count.index].id
  route_table_id = module.vpc.private_route_table_ids[count.index]
}
🤖 Prompt for AI Agents
In modules/networking/main.tf around lines 41 to 51, the aws_subnet
"pod_subnets" resources are not associated with any private route table so pod
traffic will use the VPC main table and lose egress/NAT; fix by creating
aws_route_table_association resources that associate each pod_subnet (by
count.index) with the corresponding private route table for that AZ (e.g., a map
or list variable of private route table IDs indexed by AZ or count), ensure a
private route table exists per AZ (or derive the correct one from module.vpc
outputs), and reference the subnet_id as aws_subnet.pod_subnets[count.index].id
and the route_table_id from the per-AZ private table so each pod subnet gets the
proper route association.

Comment on lines +12 to +15
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "~> 1.19.0"
    }

⚠️ Potential issue | 🟠 Major

Switch to the maintained kubectl provider.

gavinbunney/kubectl has been archived for years and is known to drift/remove resources on EKS ≥1.27, which will break the new ENIConfig flow (see aws-ia/terraform-aws-eks-blueprints#1675 and multiple upgrade guides recommending the maintained fork). Please move to the drop-in replacement alekc/kubectl before shipping this breaking change.

Suggested diff:

-    kubectl = {
-      source  = "gavinbunney/kubectl"
-      version = "~> 1.19.0"
-    }
+    kubectl = {
+      source  = "alekc/kubectl"
+      version = "~> 2.0"
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

    kubectl = {
      source  = "alekc/kubectl"
      version = "~> 2.0"
    }
🤖 Prompt for AI Agents
In versions.tf around lines 12 to 15, the provider block uses the archived
source "gavinbunney/kubectl"; replace it with the maintained drop-in fork
"alekc/kubectl" and pin an appropriate compatible version (e.g. keep the same
constraint like "~> 1.19.0" or update to the intended maintained release), then
run terraform init -upgrade to fetch the new provider and validate plan/apply to
ensure no provider source drift; update any provider aliases or references if
present to match the new source.
