aws_vpc
Stable
Direct
Source Code Link

AWS Virtual Private Cloud (VPC)

This module configures the following infrastructure resources for a Virtual Private Cloud:

  • Establishes a VPC

  • Deploys subnets with associated CIDR reservations and route tables

  • Launches NAT instances with static Elastic IP addresses correctly associated and mapped

  • Creates an internet gateway so that resources assigned public IPs in the VPC are reachable from the internet

  • Sets up VPC peering as required with resources outside the VPC

  • Enables full VPC flow logs with appropriate retention and tiering for compliance and cost management

  • Adds an S3 Gateway endpoint for free network traffic to/from AWS S3

Usage

CIDR Blocks and Subnets

A critical decision when deploying this module is choosing the CIDR blocks for your VPC and subnets. This is very difficult to change later and will usually require redeploying your entire VPC and all the resources it contains.

We strongly recommend choosing the largest possible CIDR block for your VPC: 10.0.0.0/16 (the default for this module). Ensure that you have at least 100 IPs available in each public subnet and at least 1,000 IPs available in each private / isolated subnet. Choosing a large VPC CIDR gives you the most flexibility. [1]
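As a minimal sketch, an invocation of this module using the default CIDR might look like the following (the module source path and VPC name are illustrative placeholders, not values prescribed by this documentation):

module "vpc" {
  source = "../aws_vpc" # hypothetical path; point at wherever this module lives in your repository

  vpc_name = "production-primary" # illustrative name

  # vpc_cidr defaults to "10.0.0.0/16", so it is omitted here
}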

If you use the default CIDR of 10.0.0.0/16, your subnets will be automatically configured as follows: [2] [3]

SLA Level 1 [4]

| Name       | Type     | Availability Zone | CIDR          | Available IPs |
|------------|----------|-------------------|---------------|---------------|
| PUBLIC_A   | Public   | A                 | 10.0.0.0/24   | 254           |
| PUBLIC_B   | Public   | B                 | 10.0.1.0/24   | 254           |
| N/A        | Reserved | N/A               | 10.0.2.0/24   | 254           |
| PRIVATE_A  | Private  | A                 | 10.0.64.0/18  | 16,382        |
| N/A        | Reserved | N/A               | 10.0.128.0/18 | 16,382        |
| N/A        | Reserved | N/A               | 10.0.192.0/18 | 16,382        |
| ISOLATED_A | Isolated | A                 | 10.0.16.0/20  | 4,094         |
| N/A        | Reserved | N/A               | 10.0.32.0/20  | 4,094         |
| N/A        | Reserved | N/A               | 10.0.48.0/20  | 4,094         |
| N/A        | Reserved | N/A               | 10.0.3.0/24   | 254           |
| N/A        | Reserved | N/A               | 10.0.4.0/22   | 1,022         |
| N/A        | Reserved | N/A               | 10.0.8.0/21   | 2,046         |

We recommend keeping the above Reserved CIDR blocks unallocated in case you want to upgrade your SLA target in the future.

SLA Level 2+

| Name       | Type     | Availability Zone | CIDR          | Available IPs |
|------------|----------|-------------------|---------------|---------------|
| PUBLIC_A   | Public   | A                 | 10.0.0.0/24   | 254           |
| PUBLIC_B   | Public   | B                 | 10.0.1.0/24   | 254           |
| PUBLIC_C   | Public   | C                 | 10.0.2.0/24   | 254           |
| PRIVATE_A  | Private  | A                 | 10.0.64.0/18  | 16,382        |
| PRIVATE_B  | Private  | B                 | 10.0.128.0/18 | 16,382        |
| PRIVATE_C  | Private  | C                 | 10.0.192.0/18 | 16,382        |
| ISOLATED_A | Isolated | A                 | 10.0.16.0/20  | 4,094         |
| ISOLATED_B | Isolated | B                 | 10.0.32.0/20  | 4,094         |
| ISOLATED_C | Isolated | C                 | 10.0.48.0/20  | 4,094         |
| N/A        | Reserved | N/A               | 10.0.3.0/24   | 254           |
| N/A        | Reserved | N/A               | 10.0.4.0/22   | 1,022         |
| N/A        | Reserved | N/A               | 10.0.8.0/21   | 2,046         |

Custom Network Layout

If you choose a different network layout, we recommend this site to help divide your network.

To configure the network, you will need to manually specify both the subnets and nat_associations inputs.
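As a sketch, a custom layout might look like the following (the subnet names, zones, and CIDRs are illustrative; the nat_associations semantics follow the input descriptions below):

subnets = {
  "PUBLIC_A" = {
    az         = "a"            # short-form availability zone
    cidr_block = "10.0.0.0/24"
    public     = true           # routable to and from the public internet
  }
  "PRIVATE_A" = {
    az         = "a"
    cidr_block = "10.0.64.0/18"
    public     = false
  }
}

# Egress traffic from PRIVATE_A is NATed through resources in PUBLIC_A
nat_associations = {
  "PRIVATE_A" = "PUBLIC_A"
}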

Network Address Translation (NAT)

If you are unfamiliar with NAT, you should review the NAT concept documentation.

NAT is the one component of the VPC configuration that we have enhanced beyond the typical AWS-recommended setup.

Specifically, we do NOT use AWS NAT Gateways by default. They are far too expensive for behavior that should ultimately be free (and is in other cloud providers). For many organizations, NAT Gateway costs alone can account for 10-50% of total AWS spend.

Instead, we deploy a Panfactum-enhanced version of the fck-nat project. Using this pattern, our module launches self-hosted NAT nodes in EC2 autoscaling groups and reduces the costs of NAT by over 90%.

This setup does come with some limitations:

  • Outbound network bandwidth is limited to 5 Gbit/s per AZ (vs 25 Gbit/s for AWS NAT Gateways)
  • Outbound network connectivity in each AZ is impacted by the health of a single EC2 node

In practice, these limitations rarely impact an organization, especially as they only impact outbound connections (not inbound traffic):

  • If you need > 5 Gbit/s of outbound public internet traffic, you would usually establish a private network tunnel to the destination to improve throughput beyond even 25 Gbit/s.
  • The EC2 nodes are extremely stable, as NAT relies only on functionality that is native to the Linux kernel (we have never seen a NAT node crash).
  • The primary downside is that during NAT node upgrades, outbound network connectivity will be temporarily suspended. This typically manifests as a brief (1-2 min) delay in outbound traffic. Upgrades are typically only necessary every 6 months, so you can still easily achieve high uptime in this configuration.

Providers

The following providers are needed by this module:

  • aws (5.80.0)

  • pf (0.0.7)

Required Inputs

The following input variables are required:

vpc_name

Description: The name of the VPC resource.

Type: string

Optional Inputs

The following input variables are optional (have default values):

nat_associations

Description: A mapping of NATed egress network traffic between subnets. Keys represent the source subnets. Values represent destination subnets that will contain the NAT resources.

Type: map(string)

Default: {}

subnets

Description: Subnet configuration

Type:

map(object({
    az          = string                    # Availability zone (either the short form 'a' or the full name 'us-east-2a')
    cidr_block  = string                    # Subnet IP block
    public      = bool                      # If subnet is routable to and from the public internet
    extra_tags  = optional(map(string), {}) # Additional tags for the subnet
    description = optional(string)          # A description of the subnet's purpose
  }))

Default: {}

vpc_cidr

Description: The main CIDR range for the VPC.

Type: string

Default: "10.0.0.0/16"

vpc_extra_tags

Description: Extra tags to add to the VPC resource.

Type: map(string)

Default: {}

vpc_flow_logs_enabled

Description: Whether to enable VPC flow logs

Type: bool

Default: false

vpc_flow_logs_expire_after_days

Description: How many days until VPC flow logs expire.

Type: number

Default: 30
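For example, to enable flow logs with a longer retention window than the default, you might set the following (the 90-day value is illustrative):

vpc_flow_logs_enabled           = true
vpc_flow_logs_expire_after_days = 90 # retain logs for 90 days instead of the 30-day default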

vpc_peer_acceptances

Description: A list of VPC peering requests to accept. All VPC peers will be routable from all subnets.

Type:

map(object({
    allow_dns                 = bool   # Whether the remote VPC can use the DNS in this VPC.
    cidr_block                = string # The CIDR block to route to the remote VPC.
    vpc_peering_connection_id = string # The peering connection ID produced from the VPC peer request.
  }))

Default: {}
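As a sketch, accepting a single peering request might look like this (the map key, CIDR block, and connection ID are placeholder values):

vpc_peer_acceptances = {
  "shared-services" = {
    allow_dns                 = true                    # let the remote VPC use this VPC's DNS
    cidr_block                = "10.1.0.0/16"           # the remote VPC's CIDR to route
    vpc_peering_connection_id = "pcx-0123456789abcdef0" # from the peer request
  }
}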

Outputs

The following outputs are exported:

nat_ips

Description: The static Elastic IP addresses used by the NAT instances for outbound traffic.

subnet_info

Description: Outputs a map of Subnet info.

test_config

Description: Configuration for the pf-vpc-network-test command

vpc_cidr

Description: The main CIDR range for the VPC.

vpc_id

Description: The ID of the VPC.

Maintainer Notes

None.

Footnotes

  1. If you need to choose a smaller block for some reason (e.g., VPC peering), that is completely fine, but you will want to ensure that it isn't too small. A hard lower limit is a /19 network, which provides about 8,192 (2^(32-19)) IP addresses.

  2. The public subnets are small because we will only deploy a handful of resources that can be directly reached from the public internet (e.g., load balancers). The private subnets are the largest because that is where the vast majority of the Kubernetes workloads will run.

  3. We reserve a few CIDR ranges so that you can provision extra subnets in the future should you need to. This can be an extremely helpful escape hatch that prevents you from needing to mutate existing subnets (causing a service disruption).

  4. For more information on SLA levels, see this guide.