Kubernetes Cluster
Objective
Deploy a Kubernetes cluster on AWS EKS using our aws_eks module.
A Quick Note
This is the first infrastructure component that will begin to incur nontrivial cost. EKS costs at minimum $75 / month, and we recommend planning for at minimum $150 / month / cluster. 1
Configure Pull Through Cache
Many of the utilities we will run on the cluster are distributed as images from public registries such as quay.io, ghcr.io, docker.io, or registry.k8s.io. The cluster's ability to download these images is critical to its operational resiliency. Unfortunately, public registries have several downsides:
- They can and frequently do experience service disruptions.
- Many impose rate limits on the number of images any single IP is allowed to download in a given time window.
- Image downloads tend to be large and are subject to the bandwidth limitations of the upstream registry as well as the intermediate network infrastructure.
To address these problems, we will configure a pull through cache using AWS ECR. Conceptually this works as follows:
Instead of cluster nodes pulling images directly from a public registry, they will pull them from ECR which is then configured to download the image from the public registry only if it does not already contain the image in its cache. In this way, most images will only ever need to be downloaded from a public source once during the initial deployment.
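For example, a node that would otherwise pull an image directly from quay.io instead pulls it from an ECR repository in your account. The image name and cache prefix below are purely illustrative:

```
# Direct pull from the public registry (rate-limited, dependent on upstream uptime):
quay.io/some-org/some-image:v1.0.0

# Same image pulled through the ECR cache in your AWS account
# (the "quay" prefix is whatever the cache rule is configured to use):
<account-id>.dkr.ecr.us-east-2.amazonaws.com/quay/some-org/some-image:v1.0.0
```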
We provide a module for configuring this behavior: aws_ecr_pull_through_cache.
However, before we deploy it, you must first retrieve authentication credentials for some upstream repositories. 2
GitHub Credentials
You will need a GitHub user and an associated GitHub personal access token (PAT).
Use the following PAT settings:
- Use a classic token.
- Set the token to never expire.
- Grant only the read:packages scope.
Docker Hub Credentials
You will need a Docker Hub user and an associated access token.
This token should have the Public Repo Read-only access permission.
Deploy the Pull Through Cache Module
The following instructions apply for every environment-region combination where you will deploy a Kubernetes cluster:
- Choose the region where you want to deploy clusters.
- Add an aws_ecr_pull_through_cache directory to that region.
- Add a terragrunt.hcl to that directory that looks like this (a sketch is also provided below).
- Add a module.yaml that enables the aws provider.
- Run terragrunt apply.
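As a rough sketch of what that terragrunt.hcl might contain (the linked example is authoritative; the include block and input names here are assumptions based on the other Panfactum modules, so verify them against the aws_ecr_pull_through_cache module reference):

```hcl
include "panfactum" {
  path   = find_in_parent_folders("panfactum.hcl")
  expose = true
}

terraform {
  # Assumed convention for sourcing modules from the Panfactum stack
  source = include.panfactum.locals.pf_stack_source
}

inputs = {
  # Credentials gathered in the previous steps; prefer encrypted values
  # over committing plaintext secrets to version control.
  github_username         = "your-github-username"
  github_access_token     = "ghp_..."      # classic PAT with only read:packages
  docker_hub_username     = "your-docker-hub-username"
  docker_hub_access_token = "dckr_pat_..." # Public Repo Read-only token
}
```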
In the following sections, modules will have pull_through_cache_enabled
as an input. Setting this to true
will use the pull through cache that you just deployed. This is the default in the examples that we provide.
Deploying the Cluster
Set up Terragrunt
- Choose the region where you want to deploy the cluster.
- Add an aws_eks directory to that region.
- Add a terragrunt.hcl to that directory that looks like this (a consolidated sketch of the inputs appears after the configuration choices below).
- Add a module.yaml that enables both the aws and tls providers.
- Do NOT apply the module yet.
Choose a Cluster Name
Your cluster name should be globally unique within your organization and descriptive. We will use it
as an identifier in many tools, and it should be immediately apparent which cluster is being referred to if
referenced by name. A good name would look like production-primary
indicating that this cluster is the
primary cluster in the production environment.
Choose Kubernetes Version
We strongly recommend leaving this as the module default. The version was specifically chosen for compatibility with the rest of the Panfactum stack. See the module documentation if you need to override either the control plane or node group versions.
Choose Control Plane Subnets
For control_plane_subnets
, you need to enter the names of at least 3 subnets (each in a different AZ) that
you created in the aws networking guide. 3 This ensures the API server is resilient to an AZ outage.
We assume that you will use the three public subnets so that you can access the API server from your local machine. 4 We will go deeper into securing the API server in a subsequent section.
Choose a Service CIDR
For service_cidr
, you will want to choose a private CIDR range that does not conflict with your VPC or any of its subnets. That
is because Kubernetes performs its own routing and networking independently of AWS.
If you've been following the recommendations in this guide, we strongly recommend 172.20.0.0/16.
Choose Node Subnets
For controller_node_subnets, you have an important decision to make: how many availability zones do you want your nodes to run in?
More AZs will result in higher resiliency, but it will also result in increased cost, as network traffic that crosses availability zones incurs additional charges. 5
We generally recommend using three AZs, but in development or test environments it is perfectly acceptable to choose one. 6
The subnets you choose should be private and should each be in a different AZ if you are using more than one.
Choose Node Count and Size
The nodes created by this module are used to run cluster-critical controllers, without which your cluster will not
function properly. 7 You should use at least three to facilitate the high-availability algorithms used by some of the
controllers. The t3a.large
is the minimum recommended instance size if using three nodes.
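Putting the choices above together, the inputs in your aws_eks terragrunt.hcl might look roughly like the sketch below. Only control_plane_subnets, service_cidr, and controller_node_subnets are named earlier in this section; the remaining input names and the subnet names are assumptions, so check them against the aws_eks module reference:

```hcl
inputs = {
  # Assumed input names for the cluster identity
  cluster_name        = "production-primary"
  cluster_description = "Primary cluster for the production environment"

  # Kubernetes versions: leave the module defaults in place unless you have
  # a specific compatibility requirement.

  # Three public subnets (one per AZ) created in the AWS networking guide
  control_plane_subnets = ["PUBLIC_A", "PUBLIC_B", "PUBLIC_C"]

  # Must not overlap with your VPC CIDR or any of its subnets
  service_cidr = "172.20.0.0/16"

  # Three private subnets (one per AZ) for the controller nodes
  controller_node_subnets        = ["PRIVATE_A", "PRIVATE_B", "PRIVATE_C"]
  controller_node_count          = 3
  controller_node_instance_types = ["t3a.large"]

  # Default per the pull-through-cache section above; remove if the module
  # does not accept this input.
  pull_through_cache_enabled = true
}
```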
Deploy the Cluster
You are now ready to run terragrunt apply.
This may take up to 30 minutes to complete.
When it is ready, you should see your EKS cluster in the AWS web console reporting as Active
and without any health
issues.
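If you prefer the CLI, a quick status check (using the example cluster name and region from this guide) might look like this:

```bash
aws eks describe-cluster \
  --name production-primary \
  --region us-east-2 \
  --query 'cluster.status' \
  --output text
# Expected output once provisioning finishes: ACTIVE
```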
Connect to the Cluster
Set Up cluster_info Metadata and CA Certs
The Panfactum devenv comes with utilities that make connecting to your cluster a breeze.
First, we want to save important cluster metadata into your repository so other users can easily access the information even if they do not have permissions to interact directly with the infrastructure modules.
To download this metadata:
- Add a config.yaml file to your $PF_KUBE_DIR directory: 8

  ```yaml
  clusters:
    - module: "production/us-east-2/aws_eks"
  ```

  Every entry under clusters defines a new cluster that you want to be able to connect to. module points to its terragrunt directory under $PF_ENVIRONMENTS_DIR.
- Replace module with the appropriate path for the cluster you just launched.
- Run pf-update-kube --build to dynamically generate a cluster_info file and download your cluster's CA certs.
As you add additional clusters, you will need to update config.yaml
and re-run pf-update-kube --build
. More
information about this file can be found here.
Set up Kubeconfig
All utilities in the Kubernetes ecosystem rely on kubeconfig files to configure their access to various Kubernetes clusters.
In the Panfactum stack, that file is stored in your repo in the $PF_KUBE_DIR
directory. 9
To generate your kubeconfig:
- Add a config.user.yaml file that looks like this: 10

  ```yaml
  clusters:
    - name: "production-primary"
      aws_profile: "production-superuser"
  ```

- Replace name with the name of the EKS cluster, which can be found in cluster_info.
- Replace aws_profile with the AWS profile you want to use to authenticate with the cluster. For now, use the AWS profile that you used to deploy the aws_eks module for the cluster.
- Run pf-update-kube to generate your kubeconfig file.
Remember that you will need to update your config.user.yaml
and re-run pf-update-kube
as you add additional clusters. More
information about this file can be found here.
Verify Connection
- Run kubectx to list all the clusters that were set up in the previous section. Selecting one will set your Kubernetes context, which defines which cluster your command-line tools like kubectl will target. Select one now.
- Run kubectl cluster-info. You should receive a result that looks like this:

  ```
  Kubernetes control plane is running at https://99DF0D231CAEFBDA815F2D8F26575FB6.gr7.us-east-2.eks.amazonaws.com
  CoreDNS is running at https://99DF0D231CAEFBDA815F2D8F26575FB6.gr7.us-east-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  ```
Use k9s
Most of our cluster introspection and debugging will be done from a TUI called k9s. This comes bundled with the Panfactum devenv.
Let's verify what pods are running in the cluster:
- Run k9s.
- Type :pods⏎ to list all pods in the cluster.
- k9s filters results by namespace, and by default the filter is set to the default namespace. Press 0 to switch the filter to all namespaces.
- You should see a minimal list of pods that looks like this:
k9s is an incredibly powerful tool, and it is our recommended way for operators to interact directly with their clusters. If you have never used this tool before, we recommend getting up to speed with these tutorials.
Deploy Kubernetes Modules
In the Panfactum stack everything is deployed via OpenTofu (Terraform) modules, including Kubernetes manifests. 11 By constraining ourselves to a single IaC paradigm, we are able to greatly simplify operations for users of the stack.
Set up Kubernetes Provider
In order to start using our Kubernetes modules, we must first configure the Kubernetes provider by setting some additional terragrunt variables.
In the region.yaml
file for the region where you deployed the cluster, add the following fields:
- kube_config_context: The context in your kubeconfig file to use for connecting to the cluster in this region. If this was set up using pf-update-kube, this is just the name of the cluster.
- kube_api_server: The https address of the Kubernetes control plane reported when you run kubectl cluster-info.
See this file as an example.
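For instance, using the example cluster from this guide, the additions to region.yaml might look like this (the API server address will be whatever kubectl cluster-info reported for your cluster):

```yaml
# region.yaml (additions)
kube_config_context: "production-primary"
kube_api_server: "https://99DF0D231CAEFBDA815F2D8F26575FB6.gr7.us-east-2.eks.amazonaws.com"
```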
The Kubernetes modules deployed in this region will now appropriately deploy to this cluster.
RBAC
Up until now, we have been using implicit EKS authentication to communicate with the cluster (the IAM user that created the cluster automatically has cluster access). We will now deploy the kube_rbac module which will allow other users to authenticate (and eventually use dynamic rather than static credentials). This relies on a paradigm called role-based access control which we will cover in more detail as we set up user roles and SSO.
Since we are using an EKS cluster, authentication will work
via the AWS IAM Authenticator for Kubernetes. Users
will use their IAM credentials to authenticate with the Kubernetes API server (hence why we have aws_profile
set
for each cluster in the config.user.yaml
file).
Authentication is controlled via a ConfigMap found at
kube-system/aws-auth
. 12
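If you would like to inspect the current contents of that ConfigMap directly, you can do so with kubectl:

```bash
kubectl -n kube-system get configmap aws-auth -o yaml
```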
Let's deploy this module now:
- Adjacent to your aws_eks module, add a kube_rbac directory.
- Set up a terragrunt.hcl that looks like this (see the sketch after this list). For now, you only need to set the aws_node_role_arn input. We will set up the other inputs when we configure SSO for your infrastructure.
- Enable both the aws and kubernetes providers in the module.yaml.
- Run terragrunt apply.
- To verify that cluster authentication is still functional, we will need to terminate a node and make sure the replacement node is able to rejoin the cluster. 13 Select one of the nodes and terminate it as shown below:
- The autoscaling group should notice the missing node within a few minutes and launch a new node to replace the terminated one. If the node is able to reconnect, you should see it reporting as Ready within k9s (:nodes):
- If the node joins successfully, you have successfully set up your initial Kubernetes RBAC.
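Referenced from the list above, here is a rough sketch of what the kube_rbac terragrunt.hcl might look like. Wiring the node role ARN in via a Terragrunt dependency (rather than hardcoding it) is an assumption on our part, as is the node_role_arn output name, so confirm both against the aws_eks and kube_rbac module references:

```hcl
include "panfactum" {
  path   = find_in_parent_folders("panfactum.hcl")
  expose = true
}

terraform {
  source = include.panfactum.locals.pf_stack_source
}

# Read the node IAM role ARN from the adjacent aws_eks module
# (output name is an assumption; check the aws_eks module outputs)
dependency "cluster" {
  config_path = "../aws_eks"
}

inputs = {
  aws_node_role_arn = dependency.cluster.outputs.node_role_arn
}
```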
Priority Classes
Priority classes in Kubernetes instruct the cluster which pods to prioritize running should the cluster become resource constrained. If a utility is depended on for proper cluster operation, we want to give it higher priority.
The Panfactum stack defines several priority levels to ensure that your cluster remains as healthy as possible even in adverse circumstances such as an unexpected AZ outage. These are defined in the kube_priority_classes module.
Let's deploy this module:
- Adjacent to your aws_eks module, add a kube_priority_classes directory.
- Set up a terragrunt.hcl that looks like this.
- Enable the kubernetes provider in the module.yaml.
- Run terragrunt apply.
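Once the apply completes, you can confirm the new priority classes exist with kubectl:

```bash
kubectl get priorityclasses
```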
Next Steps
Congratulations! You have officially deployed Kubernetes using infrastructure-as-code. Now that the cluster is running, we will begin working on the internal networking stack.
Footnotes
1. This cost is well worth it. Even if you were to self-manage the control plane using a tool like kops, your raw infrastructure costs would likely be at minimum $50 / month. From personal experience running many bare metal clusters, when you factor in the additional time and headache required to manage the control plane on your own, this is an incredible deal.
2. Yes, even though we are only using their public images. This appears to be an AWS limitation.
3. Ensure that you choose an odd number for proper resilience to AZ outages. This is a requirement of etcd, the database backend for Kubernetes.
4. In fact, the rest of this guide depends on you doing this.
5. In practice, this is fairly minimal unless you have very chatty applications. As a result, it is normally best to run whatever you'd run in production in all environments. This allows for better testing and issue emulation.
6. Note that this must be an odd number in order for many high availability algorithms to work in the case of an AZ outage (e.g., Raft).
7. Additional "worker" nodes will be dynamically provisioned by Karpenter in a future section of this guide.
8. By default, this is set to .kube in the root of your repository.
9. We store the config file in your repo and not in the typical location (~/.kube/config) so that it does not interfere with other projects you are working on.
10. This file is specific to every user, as different users will have different access levels to the various clusters. Every user will need to set up their own $PF_KUBE_DIR/config.user.yaml. This file is not committed to version control.
11. Though we will often use third-party Helm charts under the hood.
12. The <namespace>/<resource> syntax is common in the Kubernetes ecosystem. kube-system/aws-auth should be interpreted as the aws-auth resource in the kube-system namespace. We use this syntax because most resources in Kubernetes are scoped to a particular namespace.
13. EC2 VMs are assigned IAM roles, and we only allow certain IAM roles to assume the necessary permissions to join the cluster as a Kubernetes node. This is defined in the aws-auth ConfigMap alongside the human user authentication directives.