Elastic Kubernetes Service (EKS)

This module provides our standard setup for a configurable AWS EKS Cluster. It includes:

  • An EKS Cluster. This cluster defines the Kubernetes control plane (managed by AWS) and provisions it to the specified set of availability zones.

  • A KMS key for encrypting control plane data at rest.

  • Setup of EKS Access Entries.

  • A set of “controller” node groups with a static size for running cluster-critical controllers. Nodes use the Bottlerocket distribution. Autoscaled nodes are deployed via our kube_karpenter module.

  • Security groups for both the cluster control plane and for the node groups.

    • The control plane accepts inbound traffic from the nodes and can send arbitrary outbound traffic.
    • The nodes accept inbound traffic from the control plane and from each other, and can send arbitrary outbound traffic.

  • Subnet tags that controllers in our other modules depend upon.

  • The requisite infrastructure for using IAM roles for service accounts (IRSA).

Usage

Installation

Choose Control Plane Subnets

Control plane subnets are the subnets within which AWS will deploy the EKS-managed Kubernetes API servers.

By default, the control plane subnets will be any subnets named PUBLIC_A, PUBLIC_B, or PUBLIC_C in the VPC indicated by the vpc_id input, as these are the subnets created by the aws_vpc module.

If you need to override the default behavior, you can specify the control_plane_subnets input. It requires at least 2 subnets (each in a different AZ).
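
For illustration, a minimal sketch of the override. The module source is a placeholder, and the assumption that subnets are referenced by name (matching the PUBLIC_A/PUBLIC_B/PUBLIC_C defaults) should be checked against the input's documentation:

```hcl
module "eks" {
  source = "..." # placeholder; use your actual reference to this module

  vpc_id = var.vpc_id

  # At least 2 subnets, each in a different AZ
  control_plane_subnets = ["PUBLIC_A", "PUBLIC_B"]
}
```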

Choose Node Subnets

Node subnets are the subnets within which your actual workloads will run once deployed to the Kubernetes cluster.

By default, the node subnets will be any subnets named PRIVATE_A, PRIVATE_B, or PRIVATE_C in the VPC indicated by the vpc_id input, as these are the subnets created by the aws_vpc module.

If you need to override the default behavior, you can specify the node_subnets input.

For an SLA target of level 2 or above, you MUST provide at least 3 subnets (each in a different AZ).
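
A sketch of a node subnet override sized for a level-2-or-above SLA target, with the same caveats as above (placeholder module source; subnet names assumed):

```hcl
module "eks" {
  source = "..." # placeholder; use your actual reference to this module

  vpc_id = var.vpc_id

  # 3 subnets, each in a different AZ, as required for SLA level 2+
  node_subnets = ["PRIVATE_A", "PRIVATE_B", "PRIVATE_C"]
}
```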

Overriding the Service CIDR

Kubernetes requires that you specify a range of IP addresses that can be allocated to Services deployed in Kubernetes. This is called the Service CIDR.

We provide a default CIDR range of 172.20.0.0/16. We strongly discourage overriding this default unless you have a demonstrated need.

If you do override it via the service_cidr input, you MUST provide a private CIDR range that does not overlap with your VPC or any of its subnets. This is necessary because Kubernetes performs its own routing and networking independently of AWS.

You will also need to choose a dns_service_ip, which must be within the service_cidr. If you use the default 172.20.0.0/16 CIDR, you should use 172.20.0.10, as this is the EKS default.
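
A sketch of what an override might look like, assuming a hypothetical 172.22.0.0/16 range that does not overlap the VPC:

```hcl
module "eks" {
  source = "..." # placeholder; use your actual reference to this module

  # Example private range; verify it does not overlap your VPC or its subnets
  service_cidr = "172.22.0.0/16"

  # Must fall inside service_cidr; x.x.0.10 mirrors the EKS default convention
  dns_service_ip = "172.22.0.10"
}
```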

Post-install Steps

This module is intended to be installed as part of this guide, which includes manual steps that must be run after applying the module.

RBAC

This module configures access to the cluster via EKS Access Entries.

See the table below for our standard Kubernetes groups, the AWS principals linked to each group (configured through this module's input variables), and a description of the intended permission level:

| Kubernetes Group | Default AWS Principals Linked | Extra AWS Principals Linked | Permission Level |
| --- | --- | --- | --- |
| pf:superusers | Superuser SSO Role, root IAM User | var.superuser_principal_arns | Full access to everything in the cluster. (AmazonEKSClusterAdminPolicy) |
| pf:admins | Admin SSO Role | var.admin_principal_arns | Write access to everything besides core cluster utilities. (AmazonEKSEditPolicy) |
| pf:readers | Reader SSO Role | var.reader_principal_arns | Read access to all resources, including secrets. (AmazonEKSAdminViewPolicy) |
| pf:restricted-readers | RestrictedReader SSO Role | var.restricted_reader_principal_arns | Read access to all resources, excluding secrets. (AmazonEKSViewPolicy) |

The SSO roles are installed into each account via aws_iam_identity_center_permissions and are automatically discovered by this module. Users with access to a particular AWS IAM SSO role will have the corresponding permissions in all Panfactum clusters in that AWS account.

You can explicitly grant additional AWS IAM principals (users and roles) access via the input variables outlined above (e.g., var.superuser_principal_arns).
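
For example, a sketch of linking extra principals (the ARNs are placeholders for your own users and roles):

```hcl
module "eks" {
  source = "..." # placeholder; use your actual reference to this module

  # Grant a CI role full cluster access (pf:superusers)
  superuser_principal_arns = ["arn:aws:iam::111111111111:role/ci-admin"]

  # Grant an analyst read access without secrets (pf:restricted-readers)
  restricted_reader_principal_arns = ["arn:aws:iam::111111111111:user/analyst"]
}
```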

Note that extra permissions are granted to the pf:admins and pf:restricted-readers Kubernetes groups via the kube_policies module. The AWS-managed access policies do not cover CRDs, so we add those permissions ourselves once the cluster is instantiated.