
Kubernetes Cluster

Objective

Deploy a Kubernetes cluster on AWS EKS using our aws_eks module.

A Quick Note

This is the first infrastructure component that will begin to incur nontrivial cost. EKS costs a minimum of $75 / month, and we recommend planning for at least $150 / month / cluster. 1

Configure Pull Through Cache

Many of the utilities we will run on the cluster are distributed as images from public registries such as quay.io, ghcr.io, docker.io, or registry.k8s.io. The cluster's ability to download these images is critical to its operational resiliency. Unfortunately, public registries have several downsides:

  • They can and frequently do experience service disruptions

  • Many impose rate limits on the number of images any IP is allowed to download in a given time window

  • Image downloads tend to be large and are subject to the bandwidth limitations of the upstream registry as well as the intermediate network infrastructure

To address these problems, we will configure a pull through cache using AWS ECR. Conceptually this works as follows:

Model for how pull through image caches operate in AWS

Instead of cluster nodes pulling images directly from a public registry, they will pull them from ECR, which is configured to download the image from the public registry only if the image is not already in its cache. In this way, most images will only ever need to be downloaded from a public source once, during the initial deployment.
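
Conceptually, assuming a hypothetical upstream prefix of github configured for ghcr.io, an image reference pulled through the cache looks roughly like the following (the account ID, region, and prefix are placeholders, and you will generally not write these URLs by hand):

    # Before: pulled directly from the public registry
    # image: ghcr.io/example-org/example-image:v1.0.0
    #
    # After: pulled through ECR, which fetches from ghcr.io only on a cache miss
    image: 111111111111.dkr.ecr.us-east-2.amazonaws.com/github/example-org/example-image:v1.0.0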

We provide a module for configuring this behavior: aws_ecr_pull_through_cache.

However, before we deploy it, you must first retrieve authentication credentials for some upstream repositories. 2

GitHub Credentials

You will need a GitHub user and an associated GitHub personal access token (PAT).

Use the following PAT settings:

  • Use a classic token.

  • Set the token to never expire.

  • Grant only the read:packages scope.

Docker Hub Credentials

You will need a Docker Hub user and an associated access token.

This token should have the Public Repo Read-only access permissions.

Deploy the Pull Through Cache Module

The following instructions apply to every environment-region combination where you will deploy a Kubernetes cluster:

  1. Choose the region where you want to deploy clusters.

  2. Add an aws_ecr_pull_through_cache directory to that region.

  3. Add a terragrunt.hcl to that directory that looks like this.

  4. Run pf-tf-init to enable the required providers.

  5. Run terragrunt apply.

  6. In your region.yaml, set extra_inputs.pull_through_cache_enabled to true like this. This ensures all modules deployed in this environment will utilize the cache.
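
As a rough sketch, the region.yaml change from step 6 looks something like this (the linked example is authoritative):

    # region.yaml (excerpt)
    extra_inputs:
      pull_through_cache_enabled: true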

Deploying the Cluster

Set up Terragrunt

  1. Choose the region where you want to deploy the cluster.

  2. Add an aws_eks directory to that region.

  3. Add a terragrunt.hcl to that directory that looks like this.

  4. Run pf-tf-init to enable the required providers.

  5. Do NOT apply the module yet.

Choose a Cluster Name

Your cluster name should be globally unique within your organization and descriptive. We will use it as an identifier in many tools, and it should be immediately apparent which cluster is being referred to when it is referenced by name. A good name would look like production-primary, indicating that this cluster is the primary cluster in the production environment.

Choose Kubernetes Version

We strongly recommend leaving this as the module default. The version was specifically chosen for compatibility with the rest of the Panfactum stack. See the module documentation if you need to override either the control plane or node group versions.

Choose Control Plane Subnets

For control_plane_subnets, you need to enter the names of at least 3 subnets (each in a different AZ) that you created in the aws networking guide. 3 This ensures the API server is resilient to an AZ outage.

We assume that you will use the three public subnets so that you can access the API server from your local machine. 4 We will go deeper into securing the API server in a subsequent section.

Choose a Service CIDR

For service_cidr, you will want to choose a private CIDR range that does not conflict with your VPC or any of its subnets. This is because Kubernetes performs its own routing and networking independently of AWS.

If you've been following the recommendations in this guide, we strongly recommend 172.20.0.0/16.

You will also need to choose a dns_service_ip which must be in the service_cidr. If you use the 172.20.0.0/16 CIDR, then you should use 172.20.0.10 as this is the EKS default.

Choose Node Subnets

For node_subnets, you have an important decision to make: how many availability zones do you want your nodes to run in?

More AZs will result in higher resiliency, but they will also result in increased cost, as network traffic that crosses availability zones incurs additional charges. 5

In production, we strongly recommend using three AZs, but in development or test environments it is perfectly acceptable to choose one. 6

The subnets you choose should be private and should each be in a different AZ if you are using more than one.

Enable Bootstrapping Mode

Set bootstrap_mode_enabled to true. This will ensure your node count and type is sufficient to complete the bootstrapping guide. Once autoscaling is installed, this can be set to false to reduce resource usage.
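
Putting the preceding choices together, the inputs block of your aws_eks terragrunt.hcl might look roughly like the sketch below. The subnet names are placeholders for the subnets you created in the AWS networking guide, and the input names (particularly cluster_name) should be confirmed against the module documentation and the linked example:

    # Illustrative inputs only -- confirm exact input names against the aws_eks module docs
    inputs = {
      cluster_name           = "production-primary"  # globally unique and descriptive

      # At least 3 public subnets, each in a different AZ (placeholder names)
      control_plane_subnets  = ["PUBLIC_A", "PUBLIC_B", "PUBLIC_C"]

      # Must not overlap your VPC or its subnets; 172.20.0.10 is the EKS default DNS IP
      service_cidr           = "172.20.0.0/16"
      dns_service_ip         = "172.20.0.10"

      # Private subnets for nodes; three AZs in production, one is acceptable in dev/test
      node_subnets           = ["PRIVATE_A", "PRIVATE_B", "PRIVATE_C"]

      # Ensures enough node capacity for bootstrapping; set to false once autoscaling is installed
      bootstrap_mode_enabled = true
    }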

Choose Cluster Availability Guarantees

By default, all cluster components are deployed with very strong availability guarantees. Our target is 99.995% uptime.

However, this comes at additional cost that is not necessary in all environments and deployment scenarios.

The uptime guarantees can be slightly weakened to 99.9% by setting extra_inputs.enhanced_ha_enabled to false in your region.yaml like this. This can save about $50 / month.
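
Combined with the pull through cache setting from earlier, the extra_inputs block in region.yaml might then look like this sketch:

    # region.yaml (excerpt) -- illustrative
    extra_inputs:
      pull_through_cache_enabled: true
      enhanced_ha_enabled: false # weakens the uptime target from 99.995% to 99.9%, saving roughly $50 / month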

Deploy the Cluster

You are now ready to run terragrunt apply.

This may take up to 20 minutes to complete.

When it is ready, you should see your EKS cluster in the AWS web console reporting as Active and without any health issues.

Kubernetes cluster dashboard in AWS web console

Connect to the Cluster

Set Up cluster_info Metadata and CA Certs

The Panfactum devenv comes with utilities that make connecting to your cluster a breeze.

First, we want to save important cluster metadata into your repository so other users can easily access the information even if they do not have permissions to interact directly with the infrastructure modules.

To download this metadata:

  1. Add a config.yaml file to your kube_dir directory: 7

    clusters:
      - module: "production/us-east-2/aws_eks"
    

    Every entry under clusters defines a new cluster that you want to be able to connect to. module points to its terragrunt directory under environments_dir.

  2. Replace module with the appropriate path for the cluster you just launched.

  3. Run pf-update-kube --build to dynamically generate a cluster_info file and download your cluster's CA certs.

As you add additional clusters, you will need to update config.yaml and re-run pf-update-kube --build. More information about this file can be found here.
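
For example, once a second cluster exists, config.yaml might look like this (the second module path is purely illustrative):

    clusters:
      - module: "production/us-east-2/aws_eks"
      - module: "development/us-east-2/aws_eks"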

Set up Kubeconfig

All utilities in the Kubernetes ecosystem rely on kubeconfig files to configure their access to various Kubernetes clusters.

In the Panfactum stack, that file is stored in your repo in the kube_dir directory. 8

To generate your kubeconfig:

  1. Add a config.user.yaml file that looks like this: 9

    clusters:
      - name: "production-primary"
        aws_profile: "production-superuser"
    
  2. Replace name with the name of the EKS cluster which can be found in cluster_info.

  3. Replace aws_profile with the AWS profile you want to use to authenticate with the cluster. For now, use the AWS profile that you used to deploy the aws_eks module for the cluster.

  4. Run pf-update-kube to generate your kubeconfig file.

Remember that you will need to update your config.user.yaml and re-run pf-update-kube as you add additional clusters. More information about this file can be found here.

Verify Connection

  1. Run kubectx to list all the clusters that were set up in the previous section. Selecting one sets your Kubernetes context, which defines which cluster command-line tools like kubectl will target. Select one now.

  2. Run kubectl cluster-info.

    You should receive a result that looks like this:

    Kubernetes control plane is running at https://99DF0D231CAEFBDA815F2D8F26575FB6.gr7.us-east-2.eks.amazonaws.com
    CoreDNS is running at https://99DF0D231CAEFBDA815F2D8F26575FB6.gr7.us-east-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    

Use k9s

Most of our cluster introspection and debugging will be done from a TUI called k9s. This comes bundled with the Panfactum devenv.

Let's verify what pods are running in the cluster:

  1. Run k9s

  2. Type :pods⏎ to list all pods in the cluster

  3. k9s will filter results by namespace and by default it is set to the default namespace. Press 0 to switch the filter to all namespaces.

  4. You should see a minimal list of pods that looks like this:

    k9s listing all pods

k9s is an incredibly powerful tool, and it is our recommended way for operators to interact directly with their clusters. If you have never used this tool before, we recommend getting up to speed with these tutorials.

Reset EKS Cluster

Unfortunately, AWS installs various utilities such as coredns and kube-proxy to every new EKS cluster. We provide hardened alternatives to these defaults, and their presence will conflict with Panfactum resources in later guide steps.

As a result, we need to reset the cluster to a clean state before we continue.

We provide a convenience command pf-eks-reset to perform the reset. Run this command now.

Note that this will temporarily prevent EC2 nodes from joining the cluster until the kube_rbac module is deployed (below). 10

Deploy Kubernetes Modules

In the Panfactum stack everything is deployed via OpenTofu (Terraform) modules, including Kubernetes manifests. 11 By constraining ourselves to a single IaC paradigm, we are able to greatly simplify operations for users of the stack.

Set up Kubernetes Provider

In order to start using our kubernetes modules, we must first configure the Kubernetes provider by setting some additional terragrunt variables.

In the region.yaml file for the region where you deployed the cluster, add the following fields:

  • kube_config_context: The context in your kubeconfig file to use for connecting to the cluster in this region. If this was set up using pf-update-kube, this is just the name of the cluster.
  • kube_api_server: The https address of the Kubernetes control plane shown when you run kubectl cluster-info

See this file as an example.
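
For the example cluster used throughout this guide, those additions might look like the following sketch (replace the values with the ones from your cluster_info file and your kubectl cluster-info output):

    # region.yaml (excerpt) -- placeholder values
    kube_config_context: "production-primary"
    kube_api_server: "https://99DF0D231CAEFBDA815F2D8F26575FB6.gr7.us-east-2.eks.amazonaws.com"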

The Kubernetes modules deployed in this region will now appropriately deploy to this cluster.

RBAC

Up until now, we have been using implicit EKS authentication to communicate with the cluster (the IAM user that created the cluster automatically has cluster access). We will now deploy the kube_rbac module which will allow other users to authenticate (and eventually use dynamic rather than static credentials). This relies on a paradigm called role-based access control which we will cover in more detail as we set up user roles and SSO.

Since we are using an EKS cluster, authentication will work via the AWS IAM Authenticator for Kubernetes. Users will use their IAM credentials to authenticate with the Kubernetes API server (hence why we have aws_profile set for each cluster in the config.user.yaml file).

Authentication is controlled via a ConfigMap found at kube-system/aws-auth. 12
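
For reference, a minimal aws-auth ConfigMap that authorizes a node IAM role looks roughly like this (the ARN is a placeholder; the kube_rbac module manages this ConfigMap for you, so you should not need to edit it by hand):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        # Allows EC2 nodes assuming this IAM role to join the cluster
        - rolearn: arn:aws:iam::111111111111:role/example-eks-node-role
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes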

Let's deploy this module now:

  1. Adjacent to your aws_eks module, add a kube_rbac directory.

  2. Set up a terragrunt.hcl that looks like this. For now, you only need to set the aws_node_role_arn input (see the sketch after this list). We will set up the other inputs when we configure SSO for your infrastructure.

  3. Run pf-tf-init to enable the required providers.

  4. Run terragrunt apply.

  5. To verify that cluster authentication is functional, you should be able to see your cluster nodes within k9s (:nodes):

    k9s showing node successfully joining the cluster

    Note that the nodes are in a NotReady status as we have not yet installed the cluster networking utilities which we will do in the next section.

  6. If all nodes join successfully, you have successfully set up your initial Kubernetes RBAC.
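
As referenced in step 2 above, the inputs block is minimal at this stage. A rough sketch, where the ARN is a placeholder for your EKS node IAM role and the surrounding include/source boilerplate comes from the linked example:

    # Illustrative inputs block for kube_rbac -- only the node role is needed for now
    inputs = {
      # IAM role assumed by your EKS nodes so they can join the cluster (placeholder ARN)
      aws_node_role_arn = "arn:aws:iam::111111111111:role/example-eks-node-role"
    }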

Priority Classes

Priority classes in Kubernetes instruct the cluster which pods to prioritize running should the cluster become resource constrained. If a utility is depended upon for proper cluster operation, we want to give it higher priority.

The Panfactum stack defines several priority levels to ensure that your cluster remains as healthy as possible even in adverse circumstances such as an unexpected AZ outage. These are defined in the kube_priority_classes module.

Let's deploy this module:

  1. Adjacent to your aws_eks module, add a kube_priority_classes directory.

  2. Set up a terragrunt.hcl that looks like this.

  3. Run pf-tf-init to enable the required providers.

  4. Run terragrunt apply.

Next Steps

Congratulations! You have officially deployed Kubernetes using infrastructure-as-code. Now that the cluster is running, we will begin working on the internal networking stack.


Footnotes

  1. This cost is well worth it. Even if you were to self-manage the control plane using a tool like kops, your raw infrastructure would likely cost at minimum $50 / month. From personal experience running many bare metal clusters, when you factor in the additional time and headache required to manage the control plane on your own, this is an incredible deal.

  2. Yes, even though we are only using their public images. This appears to be an AWS limitation.

  3. Ensure that you choose an odd number for proper resilience to AZ outages; this is a requirement of etcd, the database backend for Kubernetes.

  4. In fact, the rest of this guide depends on you doing this.

  5. In practice, this is fairly minimal unless you have very chatty applications. As a result, it is normally best to run whatever you'd run in production in all environments. This allows for better testing and issue emulation.

  6. Note that this must be an odd number in order for many high availability algorithms to work in the case of an AZ outage (e.g., Raft).

  7. By default, this is set to .kube in the root of your repository.

  8. We store the config file in your repo and not in the typical location (~/.kube/config) so that it does not interfere with other projects you are working on.

  9. This file is specific to every user as different users will have different access levels to the various clusters. Every user will need to set up their own <kube_dir>/config.user.yaml. This file is not committed to version control.

  10. Just like human users, EC2 nodes use AWS IAM roles to authenticate with the EKS Kubernetes API server. These roles must be authorized which we do in the kube_rbac module.

  11. Though we will often use third-party helm charts under the hood.

  12. The <namespace>/<resource> syntax is common in the Kubernetes ecosystem. kube-system/aws-auth should be interpreted as the aws-auth resource in the kube-system namespace. We use this syntax because most resources in Kubernetes are scoped to a particular namespace.