Deploying Workloads: Permissions
Overview
Often your workloads will need to interact with control plane APIs. To do this, they must be granted permissions for the relevant control plane system (e.g., AWS). While the mechanism differs by system (AWS permissions are assigned differently than Kubernetes permissions), all permissions are ultimately assigned to workloads via Service Accounts.
All our controller submodules automatically create a dedicated service account for the pods managed by the controller and provide its name via the service_account_name output.
We provide several submodules that allow you to easily assign permissions to service accounts:
| Control Plane System | Submodule | Description |
|---|---|---|
| Kubernetes API | None (see below) | No submodule needed; easily set up with basic Kubernetes RBAC resources |
| AWS | kube_sa_auth_aws | Sets up IAM Roles for Service Accounts (IRSA) |
| Vault | kube_sa_auth_vault | Sets up the Kubernetes auth method with HashiCorp Vault |
| Argo | kube_sa_auth_workflow | Provides the permissions necessary for the service account to be used in an Argo Workflow pod |
Several of these submodules may be used with the same service account in order to grant a single workload access to multiple control plane systems.
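For example, here is a minimal sketch of granting a workload AWS permissions by wiring the kube_sa_auth_aws submodule to a controller's service account. It assumes a deployment module like the example_deployment shown later on this page; the input names (service_account, service_account_namespace, iam_policy_json) and the S3 policy are illustrative assumptions, so consult the kube_sa_auth_aws module reference for the exact interface.

```hcl
# Sketch only: the submodule input names below are assumptions for illustration;
# see the kube_sa_auth_aws module reference for the exact variable names.

# A hypothetical IAM policy granting read access to an example S3 bucket
data "aws_iam_policy_document" "example" {
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::example-bucket/*"] # hypothetical bucket
  }
}

# Attaches the policy to the deployment's service account via IRSA
module "aws_permissions" {
  source = "${var.pf_module_source}kube_sa_auth_aws${var.pf_module_ref}"

  service_account           = module.example_deployment.service_account_name # assumed input name
  service_account_namespace = var.namespace                                  # assumed input name
  iam_policy_json           = data.aws_iam_policy_document.example.json      # assumed input name
}
```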
Kubernetes Permissions
When a pod is created, a JWT is generated for its service account and mounted inside the pod at /var/run/secrets/kubernetes.io/serviceaccount/token. This JWT is a "Bearer" token and can be presented to the Kubernetes API to authenticate: 1
```bash
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -H "Authorization: Bearer $TOKEN" https://$KUBERNETES_API/api/v1/namespaces/default/pods/ --insecure
```
This token is bound to the lifetime of the pod and will no longer work once the pod is removed from the cluster. Additionally, the token will be rotated every hour while the pod is still alive. 2
While the token "authenticates" the workload (i.e., provides an identity), it does not "authorize" the workload to access specific API endpoints, so API calls will return 403 error codes by default.
To provide permissions to service accounts, you must use the Kubernetes RBAC system by creating Roles and RoleBindings. 3
For example, this code snippet gives a Deployment's service account permissions to get, list, and update PVCs in its namespace:
module "example_deployment" {
source = "${var.pf_module_source}kube_deployment${var.pf_module_ref}"
name = "example"
namespace = var.namespace
...
}
resource "kubernetes_role" "example" {
metadata {
name = "example"
namespace = var.namespace
labels = module.example_deployment.labels
}
rule {
api_groups = [""]
resources = ["persistentvolumeclaims"]
verbs = ["get", "update", "list"]
}
}
resource "kubernetes_role_binding" "example" {
metadata {
name = "example"
namespace = var.namespace
labels = module.example_deployment.labels
}
subject {
kind = "ServiceAccount"
name = module.example_deployment.service_account_name
namespace = var.namespace
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "Role"
name = kubernetes_role.example.metadata[0].name
}
}
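If a workload genuinely needs to access objects across all namespaces, the cluster-scoped equivalents (ClusterRole and ClusterRoleBinding) can be used instead, though this is discouraged for most workloads (see footnote 3). A minimal sketch, reusing the same hypothetical example_deployment module:

```hcl
# Cluster-scoped variant: grants the service account PVC access in ALL namespaces.
# Prefer the namespaced Role / RoleBinding shown above whenever possible.
resource "kubernetes_cluster_role" "example" {
  metadata {
    name   = "example"
    labels = module.example_deployment.labels
  }

  rule {
    api_groups = [""]
    resources  = ["persistentvolumeclaims"]
    verbs      = ["get", "update", "list"]
  }
}

resource "kubernetes_cluster_role_binding" "example" {
  metadata {
    name   = "example"
    labels = module.example_deployment.labels
  }

  subject {
    kind      = "ServiceAccount"
    name      = module.example_deployment.service_account_name
    namespace = var.namespace # the ServiceAccount itself is still namespaced
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.example.metadata[0].name
  }
}
```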
Footnotes
1. Note that this behavior is handled automatically by most Kubernetes clients, such as kubectl, so you do not need to manually specify the token as shown in the example above.
2. When the token is rotated, the old token will continue to work for up to 90 days (see docs). However, this is an EKS-specific quirk intended to aid migration to time-bound tokens. In the future, tokens will stop working after they are rotated, so you should update your client code to be aware of this rotation logic.
3. Roles and RoleBindings are scoped to a specific namespace. Use ClusterRoles and ClusterRoleBindings if you need to give a service account permissions to access objects across all namespaces in a cluster. This is not recommended for most workloads due to the security impact.