feat!: Move pods onto a separate subnet by default. #369
BREAKING CHANGE: A new kubectl terraform provider is required
Walkthrough
Adds pod subnet CIDRs and pod subnets to the networking module, exposes their IDs/CIDRs, wires pod subnet CIDRs/subnets into the root and EKS modules, configures the VPC CNI addon with per-subnet ENIConfig manifests via kubectl, and adds a pods security group with DB/ElastiCache ingress rules and an output for its ID.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Root as Root (main.tf)
    participant Net as module.networking
    participant AWS as AWS (VPC/Subnets)
    participant EKS as module.app_eks
    participant Addon as EKS Addon (vpc-cni)
    participant K8s as Kubernetes API
    Root->>Net: provide pod_subnet_cidrs or use module output
    Net->>AWS: create aws_subnet.pod_subnets (when create_vpc)
    Net-->>Root: output pod_subnets / pod_subnet_cidrs
    Root->>EKS: pass network_pod_subnets + SG context
    EKS->>AWS: create aws_security_group.pods and SG rules
    EKS->>Addon: enable custom network config (configuration_values)
    EKS->>K8s: create kubectl_manifest.vpc_eni_config per subnet
    Note over Addon,K8s: ENIConfig manifests provided for VPC CNI to map pods to subnets
```
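The per-subnet ENIConfig manifests in the flow above are small, zone-scoped objects. A hedged sketch of what the kubectl_manifest wiring can look like (resource and variable names here are illustrative, not necessarily the PR's):

```hcl
# Sketch only: one ENIConfig per pod subnet. The manifest is named after the
# subnet's availability zone so the VPC CNI (running with
# ENI_CONFIG_LABEL_DEF = topology.kubernetes.io/zone) can match nodes to it.
data "aws_subnet" "pod" {
  count = length(var.network_pod_subnets)
  id    = var.network_pod_subnets[count.index]
}

resource "kubectl_manifest" "vpc_eni_config" {
  count = length(var.network_pod_subnets)

  yaml_body = yamlencode({
    apiVersion = "crd.k8s.amazonaws.com/v1alpha1"
    kind       = "ENIConfig"
    metadata   = { name = data.aws_subnet.pod[count.index].availability_zone }
    spec = {
      subnet         = var.network_pod_subnets[count.index]
      securityGroups = [aws_security_group.pods.id]
    }
  })
}
```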
```mermaid
sequenceDiagram
    autonumber
    participant Pod as Pod (scheduled)
    participant CNI as VPC CNI
    participant K8s as Kubernetes API (ENIConfig)
    participant EC2 as EC2 (ENIs/Subnets)
    Pod->>CNI: request IP for pod
    CNI->>K8s: read ENIConfig for zone/subnet
    CNI->>EC2: provision/attach ENI in pod subnet
    EC2-->>CNI: ENI with IP(s) assigned
    CNI-->>Pod: return pod IP (ENI uses pods SG)
```
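For the CNI to consult ENIConfigs at all, custom networking has to be switched on in the addon. A minimal sketch of that configuration, assuming a managed vpc-cni addon (the env var names are the documented CNI settings; the resource and cluster names are illustrative):

```hcl
# Sketch: enable VPC CNI custom networking via addon configuration_values.
resource "aws_eks_addon" "vpc_cni" {
  cluster_name = aws_eks_cluster.this.name
  addon_name   = "vpc-cni"

  configuration_values = jsonencode({
    env = {
      # Tell the CNI to place pod ENIs per ENIConfig instead of the node subnet.
      AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG = "true"
      # Look up the ENIConfig whose name matches the node's zone label.
      ENI_CONFIG_LABEL_DEF = "topology.kubernetes.io/zone"
    }
  })
}
```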
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
main.tf (1)
55-182: Re-run terraform fmt for this file
CI flagged terraform fmt -check. Please format this file before merging.

modules/app_eks/add-ons.tf (1)
71-112: Format updates before merge
terraform fmt -check is failing here as well. Please run terraform fmt so the kubectl manifest block and addon config comply with Terraform style.
🧹 Nitpick comments (2)
variables.tf (1)
247-299: Clarify pod subnet variable descriptions. Both new variables still use the generic "private subnet" phrasing, which makes it easy to mix them up with the existing private/database subnets. A short wording tweak keeps callers from mis-wiring IDs or CIDRs.
Apply something like:

```diff
 variable "network_pod_subnets" {
-  description = "A list of the identities of the private subnetworks in which pods will be deployed."
+  description = "Pod subnet IDs used for ENIConfig / pod IP assignment."
 }
 variable "network_pod_subnet_cidrs" {
-  description = "List of private subnet CIDR ranges to create in VPC."
+  description = "Pod subnet CIDR ranges reserved for pod ENIs."
 }
```

modules/app_eks/variables.tf (1)
139-142: Flag the new required network_pod_subnets input for consumers. This module now needs pod subnet IDs to plan successfully; please update the module docs/examples so downstream users (especially those sourcing modules/app_eks directly) know to supply them.
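For those docs/examples, a hedged sketch of the wiring a direct consumer would now need (module labels and output names are assumptions based on this PR's walkthrough, not confirmed signatures):

```hcl
# Illustrative usage for direct consumers of modules/app_eks.
module "app_eks" {
  source = "./modules/app_eks"

  # New required input: pod subnet IDs, typically sourced from the
  # networking module's new outputs.
  network_pod_subnets      = module.networking.pod_subnets
  network_pod_subnet_cidrs = module.networking.pod_subnet_cidrs

  # ...existing inputs unchanged...
}
```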
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- main.tf (4 hunks)
- modules/app_eks/add-ons.tf (1 hunks)
- modules/app_eks/main.tf (3 hunks)
- modules/app_eks/outputs.tf (1 hunks)
- modules/app_eks/variables.tf (1 hunks)
- modules/app_eks/versions.tf (1 hunks)
- modules/networking/main.tf (1 hunks)
- modules/networking/outputs.tf (2 hunks)
- modules/networking/variables.tf (1 hunks)
- variables.tf (2 hunks)
- versions.tf (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: Terraform Lint and Format Check
modules/app_eks/add-ons.tf
[error] 1-1: Terraform fmt -check -recursive detected formatting issues in this file.
modules/networking/main.tf
[error] 1-1: Terraform fmt -check -recursive detected formatting issues in this file.
main.tf
[error] 1-1: Terraform fmt -check -recursive detected formatting issues in this file.
🔇 Additional comments (5)
modules/app_eks/versions.tf (1)
3-6: Provider constraint in sync. Thanks for scoping the kubectl provider requirement inside the module; this keeps direct consumers aligned with the root constraints.
modules/app_eks/outputs.tf (1)
21-23: New pod SG output makes sense. Surfacing this ID will help downstream modules reference the pod isolation rules without spelunking module internals.
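As a sketch, that output can be as small as the following (the output name is illustrative; the PR's actual identifier is not shown in this review):

```hcl
# Sketch: expose the pods security group ID so downstream modules can
# reference it instead of reaching into app_eks internals.
output "pods_security_group_id" {
  description = "ID of the security group attached to pod ENIs"
  value       = aws_security_group.pods.id
}
```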
modules/networking/variables.tf (3)
32-36: LGTM! The elasticache_subnet_cidrs variable correctly uses an empty list default, making it truly optional while maintaining consistency with the "(Optional)" description.
38-41: LGTM! The new pod_subnet_cidrs variable is correctly defined as required (no default attribute) and follows the established pattern for subnet CIDR variables. This addition properly supports the PR's objective of moving pods onto separate subnets.
12-15: Required variables correctly have no defaults. The cidr, private_subnet_cidrs, public_subnet_cidrs, and database_subnet_cidrs blocks contain no default attributes, making them truly required; elasticache_subnet_cidrs retains default = [] and the new pod_subnet_cidrs has no default.
```hcl
resource "aws_subnet" "pod_subnets" {
  count = var.create_vpc ? length(var.pod_subnet_cidrs) : 0

  vpc_id            = module.vpc.vpc_id
  cidr_block        = var.pod_subnet_cidrs[count.index]
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name = "${var.namespace}-vpc-pods-${data.aws_availability_zones.available.names[count.index]}"
  }
}
```
Run terraform fmt on this module
terraform fmt -check is failing on this file. Please run terraform fmt (with -recursive if needed) so CI passes.
🤖 Prompt for AI Agents
In modules/networking/main.tf around lines 41 to 51 the file is failing
terraform fmt -check; run terraform fmt on this file (or run terraform fmt
-recursive at the module root) to reformat the HCL so the CI check passes, then
re-add the formatted file to the commit.
Pod subnets lack route-table associations — traffic will blackhole
These subnets never get associated with the private route tables, so they fall back to the VPC’s main table (only the local route by default). Pods launched via the custom ENI will lose egress/NAT connectivity and won’t reach databases or external services. Please associate each pod subnet with the same private route table (one per AZ) or otherwise ensure they inherit the expected routes.(repost.aws)
Apply something along these lines:
```diff
 resource "aws_subnet" "pod_subnets" {
   count             = var.create_vpc ? length(var.pod_subnet_cidrs) : 0
   vpc_id            = module.vpc.vpc_id
   cidr_block        = var.pod_subnet_cidrs[count.index]
   availability_zone = data.aws_availability_zones.available.names[count.index]
   tags = {
     Name = "${var.namespace}-vpc-pods-${data.aws_availability_zones.available.names[count.index]}"
   }
 }
+
+resource "aws_route_table_association" "pod_subnets" {
+  count          = var.create_vpc ? length(var.pod_subnet_cidrs) : 0
+  subnet_id      = aws_subnet.pod_subnets[count.index].id
+  route_table_id = module.vpc.private_route_table_ids[count.index]
+}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```hcl
resource "aws_subnet" "pod_subnets" {
  count             = var.create_vpc ? length(var.pod_subnet_cidrs) : 0
  vpc_id            = module.vpc.vpc_id
  cidr_block        = var.pod_subnet_cidrs[count.index]
  availability_zone = data.aws_availability_zones.available.names[count.index]
  tags = {
    Name = "${var.namespace}-vpc-pods-${data.aws_availability_zones.available.names[count.index]}"
  }
}

resource "aws_route_table_association" "pod_subnets" {
  count          = var.create_vpc ? length(var.pod_subnet_cidrs) : 0
  subnet_id      = aws_subnet.pod_subnets[count.index].id
  route_table_id = module.vpc.private_route_table_ids[count.index]
}
```
🤖 Prompt for AI Agents
In modules/networking/main.tf around lines 41 to 51, the aws_subnet
"pod_subnets" resources are not associated with any private route table so pod
traffic will use the VPC main table and lose egress/NAT; fix by creating
aws_route_table_association resources that associate each pod_subnet (by
count.index) with the corresponding private route table for that AZ (e.g., a map
or list variable of private route table IDs indexed by AZ or count), ensure a
private route table exists per AZ (or derive the correct one from module.vpc
outputs), and reference the subnet_id as aws_subnet.pod_subnets[count.index].id
and the route_table_id from the per-AZ private table so each pod subnet gets the
proper route association.
```hcl
kubectl = {
  source  = "gavinbunney/kubectl"
  version = "~> 1.19.0"
}
```
Switch to the maintained kubectl provider.
gavinbunney/kubectl has been archived for years and is known to drift/remove resources on EKS ≥1.27, which will break the new ENIConfig flow (see aws-ia/terraform-aws-eks-blueprints#1675 and multiple upgrade guides recommending the maintained fork). Please move to the drop-in replacement alekc/kubectl before shipping this breaking change.
Suggested diff:
```diff
-    kubectl = {
-      source  = "gavinbunney/kubectl"
-      version = "~> 1.19.0"
-    }
+    kubectl = {
+      source  = "alekc/kubectl"
+      version = "~> 2.0"
+    }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```hcl
kubectl = {
  source  = "alekc/kubectl"
  version = "~> 2.0"
}
```
🤖 Prompt for AI Agents
In versions.tf around lines 12 to 15, the provider block uses the archived source "gavinbunney/kubectl"; replace it with the maintained drop-in fork "alekc/kubectl" and pin a compatible version of that fork (e.g. "~> 2.0"), then run terraform init -upgrade to fetch the new provider and validate plan/apply to ensure no provider source drift; update any provider aliases or references if present to match the new source.