# edge.26-04-05
Launches the new `pf` CLI with guided wizards for environment, cluster, domain, and SSO provisioning, upgrades Kubernetes to 1.33 and AWS provider to 6.x, migrates legacy devshell scripts to TypeScript, and consolidates several IaC modules.
## Add Alias to `aws_organization`
The `aws_organization` module now manages the IAM account alias for the management AWS account via a new required `alias` variable.
Add an `alias` input to your `aws_organization` module configuration before the next `terragrunt apply`:
```hcl
inputs = {
  alias = "my-org-management" # A human-readable string for the management account
  # ... other existing inputs
}
```

## Consolidate Contact Information Variables
The contact information variables on `aws_account` and `aws_registered_domains` have been consolidated to single objects for each contact.
Replace the individual contact field variables (e.g., `contact_first_name`, `contact_last_name`, etc.) with the new consolidated contact objects in your `aws_account` and `aws_registered_domains` module configurations. Refer to the updated module reference docs for the exact object structure.
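As a sketch only (the object and field names below are illustrative assumptions; the module reference docs have the authoritative schema), the consolidated form looks roughly like:

```hcl
# Illustrative sketch; exact object and field names may differ from the real schema.
inputs = {
  admin_contact = {
    first_name = "Jane" # was contact_first_name
    last_name  = "Doe"  # was contact_last_name
    # ...remaining contact fields
  }
  # ... other existing inputs
}
```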
## Migrate `aws_dns_zones` Inputs to `domains` Object
The inputs to `aws_dns_zones` have been consolidated from separate domain list variables into a single `domains` object that allows per-domain granular configuration.
Migrate your `aws_dns_zones` inputs so that each key in the `domains` object is a domain name and the value contains per-domain configuration. See the module reference for the new input schema.
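As an illustration under assumed field names (the module reference defines the real per-domain schema), the new shape is:

```hcl
# Sketch only; consult the module reference for the actual per-domain fields.
inputs = {
  domains = {
    "example.com" = {
      # per-domain configuration goes here
    }
    "dev.example.com" = {
      # per-domain configuration goes here
    }
  }
}
```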
## Rename Dedicated Cluster DNS Zone
Every cluster now has a dedicated DNS zone for hosting control-plane utilities. `kube_domain` is now a required configuration value and should be set in the `region.yaml` for every region that houses a Kubernetes cluster. The value must be a subdomain of a domain available to the environment.
- Add a `kube_domain` field to `region.yaml` for every region that contains a Kubernetes cluster. The value must be a subdomain of a domain already available in the environment (e.g., `kube.example.com` if `example.com` is managed).
- Deploy the corresponding DNS zone via `aws_dns_zones` before applying cluster modules.
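For example, if `example.com` is already managed in the environment, the `region.yaml` entry is a single key:

```yaml
# region.yaml
kube_domain: kube.example.com
```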
## Remove Secondary Provider from `tf_bootstrap_resources`
The `aws.secondary` provider alias and cross-region DynamoDB replica have been removed from `tf_bootstrap_resources`.
- Remove the `aws.secondary` provider alias from your `tf_bootstrap_resources` Terragrunt configuration if present.
- Re-apply `tf_bootstrap_resources` to remove the cross-region DynamoDB replica from your state lock table.
## Rename Backup Vault in `tf_bootstrap_resources`
Complete the secondary provider removal step above before proceeding. Both changes affect `tf_bootstrap_resources` and should be applied together.
The backup vault name in `tf_bootstrap_resources` now has a unique suffix to prevent conflicts. Before applying the updated module, manually delete the existing backup vault named `terraform-<env_name>`. Delete all recovery points first, then the vault.
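A sketch of the manual cleanup with the AWS CLI (assumes credentials for the account holding the vault; substitute your environment name for the placeholder):

```shell
VAULT="terraform-<env_name>" # substitute your environment name

# Delete all recovery points first...
aws backup list-recovery-points-by-backup-vault \
  --backup-vault-name "$VAULT" \
  --query 'RecoveryPoints[].RecoveryPointArn' --output text |
  tr '\t' '\n' |
  xargs -r -n1 aws backup delete-recovery-point \
    --backup-vault-name "$VAULT" --recovery-point-arn

# ...then delete the vault itself
aws backup delete-backup-vault --backup-vault-name "$VAULT"
```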
## Migrate Authentik Token to `region.secrets.yaml`
The `pf sso add` command previously stored the Authentik API token as `authentikUserToken` inside `authentik_core_resources/secrets.yaml`. The token has been relocated to `region.secrets.yaml` under the standardized key `authentik_token`.
If you have already run `pf sso add` and have an existing token stored in `<cluster-path>/authentik_core_resources/secrets.yaml` under the key `authentikUserToken`, migrate it to the region-level file:
```shell
sops --set '["authentik_token"] "your-token-here"' <region-path>/region.secrets.yaml
```

Ensure `authentik_token` is set in `region.secrets.yaml` (SOPS-encrypted) before re-applying any Authentik modules.
## Update Authentik URL to `sso.<domain>`
Complete the Authentik token migration step above before re-applying `kube_authentik`.
The Authentik subdomain has been standardized from `authentik.<domain>` to `sso.<domain>`.
If you previously deployed Authentik at `authentik.<domain>`, you must:
- Update the `authentik_url` value in `global.yaml` from `authentik.<domain>` to `sso.<domain>`.
- Create a DNS record for `sso.<domain>` pointing to the Authentik ingress.
- Re-apply `kube_authentik` so the module picks up the new domain.
- After verifying the new domain works, remove the old `authentik.<domain>` DNS record.
## Configure Authentik Organization Name
We now create the Authentik email template in `kube_authentik`, so the organization name is now a required input. Use the new `organization_name` output from the `kube_authentik` module as an input to the `authentik_core_resources` module.
- Add the `organization_name` variable to your `kube_authentik` module configuration.
- Wire the `organization_name` output from `kube_authentik` into the `authentik_core_resources` module as an input via a `dependency` block:

```hcl
dependency "authentik" {
  config_path = "../kube_authentik"
}

inputs = {
  organization_name = dependency.authentik.outputs.organization_name
  # ... other inputs
}
```
## Install KEDA
We now include KEDA in our base Panfactum clusters, and our modules assume that you have it installed. See the installation instructions in the documentation.
## Remove Node Image Cache Modules
This step must be completed before applying the Kubernetes 1.33 upgrade below. The Kyverno-based image cache system causes resource exhaustion issues in production clusters. Destroying it first prevents instability during the upgrade.
The `kube_node_image_cache` and `kube_node_image_cache_controller` modules have been removed entirely.
- Destroy any active `kube_node_image_cache` module deployments.
- Destroy any active `kube_node_image_cache_controller` module deployments.
- Remove the module directories for `kube_node_image_cache` and `kube_node_image_cache_controller` from your Terragrunt configuration.
- Remove the following input variables from any module configurations that set them:
  - `node_image_cached_enabled` — from `kube_airbyte`, `kube_alloy`, `kube_argo_event_bus`, `kube_authentik`, `kube_aws_ebs_csi`, `kube_cloudnative_pg`, `kube_gha_runners`, `kube_ingress_nginx`, `kube_linkerd`, `kube_monitoring`, `kube_nats`, `kube_opensearch`, `kube_pg_cluster`, `kube_redis_sentinel`, `kube_vault`
  - `node_image_cache_enabled` — from any module that sets it
  - `image_prepull_enabled` and `image_pin_enabled` — from container spec blocks in `kube_pod`, `kube_deployment`, `kube_daemon_set`, `kube_stateful_set`, `kube_cron_job`, and `kube_job`
  - `panfactum_node_image_cache_enabled` — from `kube_policies`
## Upgrade Kubernetes to 1.33
The default Kubernetes version has been upgraded to 1.33 (from 1.30, incrementing through 1.31 and 1.32 in this release).
- Review the Kubernetes 1.33 changelog for any deprecated APIs or behavior changes that affect your workloads.
- If you pin `kube_version` explicitly in `aws_eks`, update it to `1.33`. Otherwise, re-apply `aws_eks` to trigger the upgrade.
No additional action is required for the bundled compatibility fixes (EBS CSI driver pinning, Descheduler config migration, and Ingress-nginx annotation risk level).
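If you do pin the version, the change in your `aws_eks` configuration is a single line (other inputs unchanged):

```hcl
inputs = {
  kube_version = "1.33"
  # ... other existing inputs
}
```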
## Migrate `kube_cert_manager` and `kube_cert_issuers` to `kube_certificates`
`kube_cert_manager` and `kube_cert_issuers` have been consolidated into a single `kube_certificates` module.
1. Create a `kube_certificates` directory as a sibling to the `kube_cert_manager` and `kube_cert_issuers` directories.

2. Create a `terragrunt.hcl` file in the `kube_certificates` directory with the following contents:

   ```hcl
   include "panfactum" {
     path   = find_in_parent_folders("panfactum.hcl")
     expose = true
   }

   terraform {
     source = include.panfactum.locals.pf_stack_source
   }

   dependency "vault_core" {
     config_path  = "../vault_core_resources"
     skip_outputs = true
   }

   inputs = {
     alert_email = "..." # Copy from kube_cert_issuers
   }
   ```

3. From the region directory, run the following:

   ```shell
   (cd kube_cert_issuers; terragrunt state pull > state.json)
   (cd kube_cert_manager; terragrunt state pull > state.json)
   jq -s '
     .[0] as $f1
     | .[1] as $f2
     | $f1
     | .outputs = ($f1.outputs + $f2.outputs)
     | .resources = ($f1.resources + $f2.resources)
   ' kube_cert_issuers/state.json kube_cert_manager/state.json > kube_certificates/state.json
   jq '
     del(.resources[] | select(.type == "pf_kube_labels" and (has("module") | not)))
   ' kube_certificates/state.json > tmp && mv tmp kube_certificates/state.json
   rm kube_cert_issuers/state.json
   rm kube_cert_manager/state.json
   ```

4. Update the version of the Panfactum framework you are using.

5. Navigate to the `kube_certificates` directory.

6. Run `terragrunt init`.

7. Run `terragrunt state push state.json && rm state.json`.

8. Run `terragrunt apply` and review the changes. There should be only a few resources that will be replaced.

9. Remove the `kube_cert_issuers` and `kube_cert_manager` directories.

10. Replace any references to `kube_cert_issuers` and `kube_cert_manager` with `kube_certificates` in your code.
## Review `pull_through_cache_enabled` Default
`pull_through_cache_enabled` now defaults to `true` for `kube_nats`, `kube_pg_cluster`, `kube_redis_sentinel`, `kube_stateful_set`, and `kube_deployment`. If ECR pull-through caching is not configured in your environment, image pulls for these modules will fail.
Choose one of the following options:
- Option A: Deploy the `aws_ecr_pull_through_cache` module to enable pull-through caching before re-applying these modules.
- Option B: Explicitly set `pull_through_cache_enabled = false` on each of `kube_nats`, `kube_pg_cluster`, `kube_redis_sentinel`, `kube_stateful_set`, and `kube_deployment` in your module configurations.
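For Option B, the opt-out is the same one-line input in each affected module's configuration:

```hcl
inputs = {
  pull_through_cache_enabled = false # no ECR pull-through cache in this environment
  # ... other existing inputs
}
```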
## Remove `update_type` from `kube_cron_job`
The `update_type` variable has been removed from `kube_cron_job`.
Remove any `update_type` input from your `kube_cron_job` module configurations to avoid a Terraform error.
## Review `burstable_nodes_enabled` Default
`burstable_nodes_enabled` now defaults to `true`. If you explicitly set it to `false`, no action is needed. Otherwise, confirm your workloads are compatible with burstable (T-family) instances.
## Update `PF_SKIP_REPO_CHECK` Environment Variable
The `PF_SKIP_REPO_CHECK` environment variable has been renamed to `PF_SKIP_CHECK_REPO_SETUP`.
If you previously set `PF_SKIP_REPO_CHECK=1` to skip the repo setup check, update your environment, CI pipelines, and scripts to use `PF_SKIP_CHECK_REPO_SETUP=1` instead.
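For example, in a shell profile or CI environment block:

```shell
# Old name (no longer recognized):
#   export PF_SKIP_REPO_CHECK=1
# New name:
export PF_SKIP_CHECK_REPO_SETUP=1
```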
## Update `pf install-cluster` References
The `pf install-cluster` CLI command has been renamed to `pf cluster add`.
Replace all invocations of `pf install-cluster` with `pf cluster add` in your scripts, CI pipelines, and runbooks.
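A bulk rewrite can be scripted; the snippet below demonstrates on a throwaway file (GNU `sed` syntax, and the file path is illustrative — point it at your real scripts instead):

```shell
# Demo on a temporary file so nothing real is modified.
printf 'pf install-cluster --name prod\n' > /tmp/runbook-demo.sh
sed -i 's/pf install-cluster/pf cluster add/g' /tmp/runbook-demo.sh
cat /tmp/runbook-demo.sh # → pf cluster add --name prod
```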
## Initialize the `pf` CLI
This step must be completed after all migration steps above.
This release adds the new `pf` CLI tool. To begin using it:
- Complete all migration steps for the breaking changes above.
- Run `pf devshell sync`. Ensure this completes successfully before proceeding.
- Run `terragrunt apply` on all modules (or `terragrunt run-all apply`).