Kubernetes Cluster
Objective
Deploy a Kubernetes cluster on AWS EKS using our aws_eks module.
A Quick Note
This is the first infrastructure component that will begin to incur nontrivial cost. EKS costs a minimum of $75 / month, and we recommend planning for at least $150 / month / cluster. 1
Configure Pull Through Cache
Many of the utilities we will run on the cluster are distributed as images from public registries such as quay.io, ghcr.io, docker.io, or registry.k8s.io. The cluster's ability to download these images is critical to its operational resiliency. Unfortunately, public registries have several downsides:
- They can and frequently do experience service disruptions.
- Many impose rate limits on the number of images any single IP address may download in a given time window.
- Image downloads tend to be large and are subject to the bandwidth limitations of the upstream registry as well as the intermediate network infrastructure.
To address these problems, we will configure a pull through cache using AWS ECR. Conceptually this works as follows:
Instead of pulling images directly from a public registry, cluster nodes pull them from ECR, which downloads the image from the public registry only if it is not already in its cache. In this way, most images only ever need to be downloaded from a public source once, during the initial deployment.
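For intuition, here is roughly what that redirection looks like in practice. The account ID, region, repository prefix, and image below are all hypothetical; the actual prefixes are determined by the pull-through cache rules that the module creates:

```
# Direct pull from a public registry (subject to rate limits and outages):
ghcr.io/external-secrets/external-secrets:v0.9.13

# The same image pulled through the ECR cache; ECR fetches it from ghcr.io
# only on the first request and serves it from its cache afterwards:
123456789012.dkr.ecr.us-east-2.amazonaws.com/github/external-secrets/external-secrets:v0.9.13
```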
We provide a module for configuring this behavior: aws_ecr_pull_through_cache.
However, before we deploy it, you must first retrieve authentication credentials for some upstream repositories. 2
GitHub Credentials
You will need a GitHub user and an associated GitHub personal access token (PAT).
Use the following PAT settings:
- Use a classic token.
- Set the token to never expire.
- Grant only the `read:packages` scope.
Docker Hub Credentials
You will need a Docker Hub user and an associated access token.
This token should have the Public Repo Read-only access permission.
Deploy the Pull Through Cache Module
The following instructions apply to every environment-region combination where you will deploy a Kubernetes cluster:
- Choose the region where you want to deploy clusters.
- Add an `aws_ecr_pull_through_cache` directory to that region.
- Add a `terragrunt.hcl` to that directory that looks like this (a hedged sketch follows this list).
- Run `pf-tf-init` to enable the required providers.
- Run `terragrunt apply`.
- In your `region.yaml`, set `extra_inputs.pull_through_cache_enabled` to `true` like this. This ensures all modules deployed in this environment will utilize the cache.
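If you are unsure what this `terragrunt.hcl` should contain, here is a minimal sketch. The include/source boilerplate is a generic terragrunt pattern (match whatever convention your repository established in earlier guide steps), and the input names are assumptions; consult the `aws_ecr_pull_through_cache` module documentation for the authoritative variables:

```hcl
# aws_ecr_pull_through_cache/terragrunt.hcl
include "panfactum" {
  path   = find_in_parent_folders("panfactum.hcl")
  expose = true
}

terraform {
  source = include.panfactum.locals.pf_stack_source
}

inputs = {
  # Hypothetical input names -- check the module documentation.
  docker_hub_username     = "your-docker-hub-user"
  docker_hub_access_token = "..." # inject via encrypted secrets, not plain text
  github_username         = "your-github-user"
  github_access_token     = "..."
}
```

The corresponding `region.yaml` change from the last step looks like:

```yaml
# region.yaml
extra_inputs:
  pull_through_cache_enabled: true
```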
Deploying the Cluster
Set up Terragrunt
- Choose the region where you want to deploy the cluster.
- Add an `aws_eks` directory to that region.
- Add a `terragrunt.hcl` to that directory that looks like this (see the sketch below).
- Run `pf-tf-init` to enable the required providers.
- Do NOT apply the module yet.
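As a rough orientation, the `aws_eks` `terragrunt.hcl` might look something like the sketch below. The boilerplate is a generic terragrunt pattern (match your repository's existing convention); only `bootstrap_mode_enabled` is named in this guide, so treat the other input names and values as assumptions and defer to the `aws_eks` module documentation. The specific settings are covered in the next few subsections:

```hcl
# aws_eks/terragrunt.hcl
include "panfactum" {
  path   = find_in_parent_folders("panfactum.hcl")
  expose = true
}

terraform {
  source = include.panfactum.locals.pf_stack_source
}

inputs = {
  # See "Choose a Cluster Name" below; the value is an example.
  cluster_name = "production-primary"

  # Leave the Kubernetes version at the module default unless you have a
  # specific compatibility requirement (see "Choose Kubernetes Version" below).

  # See "Enable Bootstrapping Mode" below.
  bootstrap_mode_enabled = true
}
```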
Choose a Cluster Name
Your cluster name should be globally unique within your organization and descriptive. We will use it as an identifier in many tools, and it should be immediately apparent which cluster is being referred to if referenced by name. A good name would look like `production-primary`, indicating that this cluster is the primary cluster in the production environment.
Choose Kubernetes Version
We strongly recommend leaving this as the module default. The version was specifically chosen for compatibility with the rest of the Panfactum stack. See the module documentation if you need to override either the control plane or node group versions.
Enable Bootstrapping Mode
Set `bootstrap_mode_enabled` to `true`. This will ensure your node count and type are sufficient to complete the bootstrapping guide. Once autoscaling is installed, this can be set to `false` to reduce resource usage.
Deploy the Cluster
You are now ready to run `terragrunt apply`.
This may take up to 20 minutes to complete.
When it is ready, you should see your EKS cluster in the AWS web console reporting as Active and without any health issues.
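If you prefer the command line to the web console, the AWS CLI can report the same status. The cluster name and profile below are examples; substitute your own:

```bash
aws eks describe-cluster \
  --name production-primary \
  --profile production-superuser \
  --query "cluster.status" \
  --output text
# Prints ACTIVE once provisioning has finished
```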
Connect to the Cluster
Set Up cluster_info Metadata and CA Certs
The Panfactum devShell comes with utilities that make connecting to your cluster a breeze.
First, we want to save important cluster metadata into your repository so other users can easily access the information even if they do not have permissions to interact directly with the infrastructure modules.
To download this metadata:
- Add a `config.yaml` file to your `kube_dir` directory: 3

  ```yaml
  clusters:
    - module: "production/us-east-2/aws_eks"
  ```

  Every entry under `clusters` defines a new cluster that you want to be able to connect to. `module` points to its terragrunt directory under `environments_dir`.

- Replace `module` with the appropriate path for the cluster you just launched.

- Run `pf-update-kube --build` to dynamically generate a `cluster_info` file and download your cluster's CA certs.
As you add additional clusters, you will need to update `config.yaml` and re-run `pf-update-kube --build`. More information about this file can be found here.
Set up Kubeconfig
All utilities in the Kubernetes ecosystem rely on kubeconfig files to configure their access to various Kubernetes clusters.
In the Panfactum stack, that file is stored in your repo in the `kube_dir` directory. 4
To generate your kubeconfig:
- Add a `config.user.yaml` file that looks like this: 5

  ```yaml
  clusters:
    - name: "production-primary"
      aws_profile: "production-superuser"
  ```

- Replace `name` with the name of the EKS cluster, which can be found in `cluster_info`.

- Replace `aws_profile` with the AWS profile you want to use to authenticate with the cluster. For now, use the AWS profile that you used to deploy the `aws_eks` module for the cluster.

- Run `pf-update-kube` to generate your kubeconfig file.
Remember that you will need to update your `config.user.yaml` and re-run `pf-update-kube` as you add additional clusters. More information about this file can be found here.
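For reference, an EKS entry in a kubeconfig generally looks like the generic sketch below, with authentication delegated to the AWS CLI via an exec plugin. This is an illustration only; the exact file that `pf-update-kube` writes may differ, and you should not need to edit it by hand:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: production-primary
    cluster:
      server: https://99DF0D231CAEFBDA815F2D8F26575FB6.gr7.us-east-2.eks.amazonaws.com
      certificate-authority-data: <base64-encoded CA certificate>
contexts:
  - name: production-primary
    context:
      cluster: production-primary
      user: production-primary
current-context: production-primary
users:
  - name: production-primary
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args: ["eks", "get-token", "--cluster-name", "production-primary"]
        env:
          - name: AWS_PROFILE
            value: production-superuser
```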
Verify Connection
- Run `kubectx` to list all the clusters that were set up in the previous section. Selecting one will set your Kubernetes context, which defines which cluster your command-line tools like `kubectl` will target. Select one now.

- Run `kubectl cluster-info`. You should receive a result that looks like this:

  ```
  Kubernetes control plane is running at https://99DF0D231CAEFBDA815F2D8F26575FB6.gr7.us-east-2.eks.amazonaws.com
  CoreDNS is running at https://99DF0D231CAEFBDA815F2D8F26575FB6.gr7.us-east-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  ```
Use k9s
Most of our cluster introspection and debugging will be done from a TUI called k9s. This comes bundled with the Panfactum devShell.
Let's verify what pods are running in the cluster:
- Run `k9s`.
- Type `:pods⏎` to list all pods in the cluster.
- k9s will filter results by namespace, and by default it is set to the `default` namespace. Press `0` to switch the filter to all namespaces.
- You should see a minimal list of pods that looks like this:
k9s is an incredibly powerful tool, and it is our recommended way for operators to interact directly with their clusters. If you have never used this tool before, we recommend getting up to speed with these tutorials.
Reset EKS Cluster
Unfortunately, AWS installs various utilities such as `coredns` and `kube-proxy` on every new EKS cluster. We provide hardened alternatives to these defaults, and their presence will conflict with Panfactum resources in later guide steps.
As a result, we need to reset the cluster to a clean state before we continue.
We provide a convenience command, `pf-eks-reset`, to perform the reset. Run this command now.
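Once the reset finishes, a quick sanity check (assuming your kubectl context still points at this cluster) is to list what remains in `kube-system`; the AWS-installed defaults such as `coredns` and `kube-proxy` should no longer appear:

```bash
kubectl get deployments,daemonsets -n kube-system
```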
Prepare to Deploy Kubernetes Modules
In the Panfactum stack everything is deployed via OpenTofu (Terraform) modules, including Kubernetes manifests. 6 By constraining ourselves to a single IaC paradigm, we are able to greatly simplify operations for users of the stack.
In order to start using our Kubernetes modules, we must first configure the Kubernetes provider by setting some additional terragrunt variables.
In the `region.yaml` file for the region where you deployed the cluster, add the following fields:

- `kube_config_context`: The context in your `kubeconfig` file to use for connecting to the cluster in this region. If this was set up using `pf-update-kube`, this is just the name of the cluster.
- `kube_api_server`: The `https` address of the Kubernetes control plane shown when you run `kubectl cluster-info`.

See this file as an example.
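For instance, the additions might look like the following. The values are examples; use your own context name and the control plane address reported by `kubectl cluster-info`:

```yaml
# region.yaml
kube_config_context: "production-primary"
kube_api_server: "https://99DF0D231CAEFBDA815F2D8F26575FB6.gr7.us-east-2.eks.amazonaws.com"
```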
The Kubernetes modules deployed in this region will now automatically target this cluster.
Next Steps
Congratulations! You have officially deployed Kubernetes using infrastructure-as-code. Now that the cluster is running, we will begin working on the internal networking stack.
Footnotes
1. This cost is well worth it. Even if you were to self-manage the control plane using a tool like kops, your raw infrastructure would likely cost at minimum $50 / month. From personal experience running many bare metal clusters, when you factor in the additional time and headache required to manage the control plane on your own, this is an incredible deal. ↩

2. Yes, even though we are only using their public images. This appears to be an AWS limitation. ↩

3. By default, this is set to `.kube` in the root of your repository. ↩

4. We store the config file in your repo and not in the typical location (`~/.kube/config`) so that it does not interfere with other projects you are working on. ↩

5. This file is specific to every user, as different users will have different access levels to the various clusters. Every user will need to set up their own `<kube_dir>/config.user.yaml`. This file is not committed to version control. ↩

6. Though we will often use third-party Helm charts under the hood. ↩