Internal Cluster Networking
Objective
Install the basic Kubernetes cluster networking primitives via the kube_core_dns and kube_cilium modules.
Background
In the Panfactum stack, CoreDNS handles cluster DNS resolution and Cilium handles all L3/L4 networking in the Kubernetes cluster.
In this guide, we won't go into detail about the underlying design decisions and networking concepts, so we recommend reviewing the concept documentation for more information.
Deploy Cilium
Cilium provides the workloads in your cluster with network interfaces that allow them to connect to each other and to the wider internet. Without this controller, your pods would not be able to communicate. We provide a module for deploying Cilium: kube_cilium.
Let's deploy it now.
Deploy the Cilium Module
- Create a new directory adjacent to your aws_eks module called kube_cilium.
- Add a terragrunt.hcl to that directory that looks like this; a rough sketch appears after this list.
- Run pf-tf-init to enable the required providers.
- Run terragrunt apply.
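The authoritative file is in the kube_cilium module documentation; as a minimal sketch, assuming the same layout as the other first-party modules in this guide (a panfactum.hcl in a parent folder that exposes the Panfactum stack source), it looks something like the following. The dependency block and input name are assumptions for illustration only.

```hcl
# kube_cilium/terragrunt.hcl -- hypothetical sketch; copy the authoritative
# version from the kube_cilium module documentation.
include "panfactum" {
  path   = find_in_parent_folders("panfactum.hcl")
  expose = true
}

terraform {
  # Assumes panfactum.hcl exposes the Panfactum stack source as a local,
  # as with the modules deployed earlier in this guide.
  source = include.panfactum.locals.pf_stack_source
}

# Hypothetical wiring: Cilium needs to know which EKS cluster it targets,
# so we read outputs from the adjacent aws_eks module.
dependency "aws_eks" {
  config_path = "../aws_eks"
}

inputs = {
  # Hypothetical input name -- check the module reference for the real one.
  eks_cluster_name = dependency.aws_eks.outputs.cluster_name
}
```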
If the deployment succeeds, you should see the various Cilium pods running. Additionally, all the nodes should now be in the Ready state. Both can be checked with kubectl, as sketched below.
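A quick way to confirm both conditions from your terminal (the cilium namespace is an assumption based on the module's defaults; adjust it if your deployment differs):

```bash
# List the Cilium pods; all should reach the Running status.
# The "cilium" namespace is an assumption -- adjust if yours differs.
kubectl get pods -n cilium

# Every node should now report a Ready status.
kubectl get nodes
```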
Deploy CoreDNS
Kubernetes provides human-readable DNS names for pods and services running inside the cluster (e.g., my-service.namespace.svc.cluster.local); however, it does not come with its own DNS server. The standard way to provide this functionality is via CoreDNS.
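To make the naming scheme concrete, once cluster DNS is running (deployed below) you can resolve a built-in Service by name from a throwaway pod. This is only an illustration; the pod name and image tag here are arbitrary choices:

```bash
# Resolve the built-in "kubernetes" Service from inside the cluster.
# The pod is deleted automatically on exit (--rm).
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 \
  -- nslookup kubernetes.default.svc.cluster.local
```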
We provide a module to deploy CoreDNS called kube_core_dns.
Let's deploy it now.
Deploy the CoreDNS Module
- Create a new directory adjacent to your aws_eks module called kube_core_dns.
- Add a terragrunt.hcl to that directory that looks like this; a rough sketch appears after this list.
- If you used our recommendation of 172.20.0.0/16 for the service_cidr in the cluster setup docs, you should use a service_ip of 172.20.0.10, as this is the well-known DNS IP in Kubernetes.
- Run pf-tf-init to enable the required providers.
- Run terragrunt apply.
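Again, the authoritative file is in the module documentation; a minimal sketch under the same assumptions as the kube_cilium example above looks like this. The service_ip input comes straight from the note in the list; everything else is illustrative:

```hcl
# kube_core_dns/terragrunt.hcl -- hypothetical sketch; copy the authoritative
# version from the kube_core_dns module documentation.
include "panfactum" {
  path   = find_in_parent_folders("panfactum.hcl")
  expose = true
}

terraform {
  source = include.panfactum.locals.pf_stack_source
}

inputs = {
  # 172.20.0.10 is the well-known cluster DNS IP when the service_cidr is
  # 172.20.0.0/16 (see the note in the step list above).
  service_ip = "172.20.0.10"
}
```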
If the deployment succeeds, you should see a core-dns deployment with either 1/2 or 2/2 pods running; a kubectl check is sketched below. If you see only 1/2 pods running, that is because we force the CoreDNS pods to run on nodes with different instance types for high availability. However, the cluster won't be able to dynamically provision the new instance types until you complete the autoscaling section of the bootstrapping guide. Once you complete that section, you will see that both CoreDNS pods have launched successfully. This should not have any impact in the interim.
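One way to check the rollout without assuming which namespace the module uses:

```bash
# Locate the core-dns deployment; expect READY to show 1/2 or 2/2 for now.
kubectl get deployments --all-namespaces | grep core-dns
```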
Next Steps
Now that basic networking is working, we will configure a policy engine for the cluster.