aws_vpc
Stable
Direct

AWS Virtual Private Cloud (VPC)

This module configures the following infrastructure resources for a Virtual Private Cloud:

  • The VPC itself

  • Subnets with associated CIDR reservations and route tables

  • NAT instances, each with a static Elastic IP address associated and mapped correctly

  • An internet gateway so that resources with public IPs in the VPC are reachable from the internet

  • VPC peering with resources outside the VPC, as required

  • Full VPC Flow Logs with appropriate retention and tiering for compliance and cost management

  • An S3 Gateway endpoint for free network traffic to/from AWS S3

Providers

The following providers are needed by this module:

Required Inputs

The following input variables are required:

subnets

Description: Subnet configuration

Type:

map(object({
    az         = string                    # Availability zone (either the short form 'a' or the full name 'us-east-2a')
    cidr_block = string                    # Subnet IP block
    public     = bool                      # If subnet is routable to and from the public internet
    extra_tags = optional(map(string), {}) # Additional tags for the subnet
  }))
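
For example, a layout with one public and one private subnet per availability zone might look like the following (the subnet names and CIDR ranges are illustrative, not required by the module):

{
  "public-a"  = { az = "a", cidr_block = "10.0.0.0/24", public = true }
  "public-b"  = { az = "b", cidr_block = "10.0.1.0/24", public = true }
  "private-a" = { az = "a", cidr_block = "10.0.64.0/18", public = false }
  "private-b" = { az = "b", cidr_block = "10.0.128.0/18", public = false }
}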

vpc_cidr

Description: The main CIDR range for the VPC.

Type: string

vpc_name

Description: The name of the VPC resource.

Type: string

Optional Inputs

The following input variables are optional (have default values):

nat_associations

Description: A mapping of NATed egress network traffic between subnets. Keys represent the source subnets. Values represent destination subnets that will contain the NAT resources.

Type: map(string)

Default: {}
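
For example, to route egress from the private subnets in the example above through NAT resources placed in the public subnet of the same availability zone (subnet names are illustrative):

{
  "private-a" = "public-a"
  "private-b" = "public-b"
}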

vpc_extra_tags

Description: Extra tags to add to the VPC resource.

Type: map(string)

Default: {}

vpc_flow_logs_enabled

Description: Whether to enable VPC flow logs

Type: bool

Default: false

vpc_flow_logs_expire_after_days

Description: How many days until VPC flow logs expire.

Type: number

Default: 30
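
For example, to enable flow logs and retain them for 90 days (the retention period is illustrative):

vpc_flow_logs_enabled           = true
vpc_flow_logs_expire_after_days = 90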

vpc_peer_acceptances

Description: A map of VPC peering requests to accept. All VPC peers will be routable from all subnets.

Type:

map(object({
    allow_dns                 = bool   # Whether the remote VPC can use the DNS in this VPC.
    cidr_block                = string # The CIDR block to route to the remote VPC.
    vpc_peering_connection_id = string # The peering connection ID produced from the VPC peer request.
  }))

Default: {}
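
For example, to accept a single peering request (the key name, CIDR block, and peering connection ID are placeholders):

{
  "shared-services" = {
    allow_dns                 = true
    cidr_block                = "10.1.0.0/16"
    vpc_peering_connection_id = "pcx-0123456789abcdef0"
  }
}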

Outputs

The following outputs are exported:

nat_ips

Description: The static public IP addresses used by the NAT instances for outbound traffic.

subnet_info

Description: A map of metadata about each deployed subnet.

test_config

Description: Configuration for the pf-vpc-network-test command

vpc_cidr

Description: The CIDR range of the VPC.

vpc_id

Description: The ID of the created VPC.

Usage

NAT

Our NAT implementation is a customized version of the fck-nat project.

This means that instead of using an AWS NAT Gateway, we perform NAT through EC2 instances deployed into autoscaling groups.

Why? NAT Gateways are extremely expensive for what they do. For many organizations, this single infrastructure component can account for 10-50% of total AWS spend. Because NAT is trivial to implement in Linux, we can reduce this spend by over 90% by implementing it ourselves, as we do in this module.

While we take inspiration from the fck-nat project, we enhance their scripts to also assign static public IPs. This is important in many environments for IP whitelisting purposes.

This setup does come with some limitations:

  • Outbound network bandwidth is limited to 5 Gbit/s per AZ (vs 25 Gbit/s for AWS NAT Gateways)
  • Outbound network connectivity in each AZ is impacted by the health of a single EC2 node

In practice, these limitations rarely affect an organization, especially as they apply only to outbound connections (not inbound traffic):

  • If you need > 5 Gbit/s of outbound public internet traffic, you would usually establish a private network tunnel to the destination to improve throughput beyond even 25 Gbit/s.
  • The EC2 nodes are extremely stable, as NAT relies only on functionality that is native to the Linux kernel (we have never seen a NAT node crash).
  • The primary downside is that outbound network connectivity is temporarily suspended during NAT node upgrades. This typically manifests as a ~2 minute delay in outbound traffic. Upgrades are usually only necessary every 6 months, so you can still easily achieve 99.99% uptime in this configuration.
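
As a sketch of how the inputs above fit together, the following invocation routes each private subnet's egress through the NAT instance in the public subnet of the same availability zone. The module source path, names, and CIDR ranges are assumptions for illustration, not values required by the module:

module "vpc" {
  source = "../aws_vpc" # Assumed local path to this module

  vpc_name = "production-primary"
  vpc_cidr = "10.0.0.0/16"

  subnets = {
    "public-a"  = { az = "a", cidr_block = "10.0.0.0/24", public = true }
    "public-b"  = { az = "b", cidr_block = "10.0.1.0/24", public = true }
    "private-a" = { az = "a", cidr_block = "10.0.64.0/18", public = false }
    "private-b" = { az = "b", cidr_block = "10.0.128.0/18", public = false }
  }

  # Each private subnet egresses through the NAT instance in its own AZ, so an
  # unhealthy NAT node only affects outbound traffic from that AZ.
  nat_associations = {
    "private-a" = "public-a"
    "private-b" = "public-b"
  }

  vpc_flow_logs_enabled = true
}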

Future Enhancements

NAT