Karpenter NodePools
This module provisions Karpenter NodePools and NodeClasses that allow Karpenter to manage EC2 instances.
Usage
Limiting Maximum Node Size
Due to this issue, we have observed that Karpenter will occasionally provision extremely large nodes for no apparent reason. As a mitigation, this module provides two variables, max_node_memory_mb and max_node_cpu, that cap the maximum size of any node that can be provisioned.
If you need larger nodes than the defaults set by this module, you will need to raise those limits.
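For example, raising the limits might look like the following sketch. The module source path is an assumption and will differ depending on where this module lives in your repository; required inputs are elided for brevity.

```hcl
module "karpenter_node_pools" {
  source = "../kube_karpenter_node_pools" # assumed path

  # ... required inputs omitted ...

  # Allow nodes with up to 64 vCPUs and 128 GiB of memory
  # (double the module defaults of 32 and 65536).
  max_node_cpu       = 64
  max_node_memory_mb = 131072
}
```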
Providers
The following providers are needed by this module:
- aws (5.70.0)
- kubectl (2.0.4)
- kubernetes (2.27.0)
- pf (0.0.3)
- random (3.6.0)
Required Inputs
The following input variables are required:
cluster_ca_data
Description: The Base64-encoded CA data of the API server of the EKS cluster
Type: string
cluster_dns_service_ip
Description: The IP address of the cluster's DNS service.
Type: string
cluster_endpoint
Description: The URL of the API server of the EKS cluster
Type: string
cluster_name
Description: The name of the EKS cluster
Type: string
node_instance_profile
Description: The instance profile to use for launched nodes
Type: string
node_security_group_id
Description: The ID of the security group for nodes running in the EKS cluster
Type: string
node_subnets
Description: List of subnet names to deploy Karpenter nodes into.
Type: set(string)
node_vpc_id
Description: The ID of the VPC to deploy Karpenter nodes into.
Type: string
Optional Inputs
The following input variables are optional (have default values):
max_node_cpu
Description: The maximum number of vCPUs for any single provisioned node
Type: number
Default: 32
max_node_memory_mb
Description: The maximum memory for any single provisioned node (in MB)
Type: number
Default: 65536
monitoring_enabled
Description: Whether active monitoring has been added to the cluster
Type: bool
Default: false
node_labels
Description: Labels to apply to nodes generated by Karpenter
Type: map(string)
Default: {}
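Putting the inputs together, a minimal invocation might look like the following sketch. All values are placeholders, and the "eks" module referenced below is a hypothetical upstream module assumed to expose these values as outputs; it is not part of this module.

```hcl
module "karpenter_node_pools" {
  source = "../kube_karpenter_node_pools" # assumed path

  # Required inputs
  cluster_name           = module.eks.cluster_name
  cluster_endpoint       = module.eks.cluster_endpoint
  cluster_ca_data        = module.eks.cluster_ca_data
  cluster_dns_service_ip = "172.20.0.10" # placeholder
  node_instance_profile  = module.eks.node_instance_profile
  node_security_group_id = module.eks.node_security_group_id
  node_subnets           = ["private-subnet-a", "private-subnet-b"]
  node_vpc_id            = module.eks.vpc_id

  # Optional inputs
  monitoring_enabled = true
  node_labels = {
    "example.com/managed-by" = "karpenter" # placeholder label
  }
}
```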
Outputs
The following outputs are exported:
user_data
Description: n/a
Maintainer Notes
We make heavy use of random_id and create_before_destroy because Karpenter often updates its CRD spec, and changes to that spec require destroying the old CRs. However, we cannot naively destroy these CRs because (a) destroying a CR de-provisions all nodes created by it and (b) destroying all CRs at once would leave Karpenter unable to create new nodes for the disrupted pods. Obviously, this is not desirable in a live cluster.
As a result, we create new CRs before destroying the old ones so that when the old ones are destroyed, Karpenter can create replacement nodes for the disrupted pods.
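The pattern is roughly the following sketch. The resource names, the keepers hash, and the manifest template are simplified assumptions for illustration, not the module's actual code.

```hcl
# A new random suffix is generated whenever the CR spec changes
# (via the keepers block), which forces a new CR name and therefore
# a replacement rather than an in-place update.
resource "random_id" "node_pool" {
  byte_length = 8
  prefix      = "default-"

  keepers = {
    # Assumed local holding the rendered NodePool spec.
    spec_hash = md5(local.node_pool_spec)
  }
}

resource "kubectl_manifest" "node_pool" {
  # Assumed template file that names the CR after the random id.
  yaml_body = templatefile("${path.module}/node_pool.yaml", {
    name = random_id.node_pool.hex
  })

  # Create the replacement CR (letting Karpenter stand up new nodes)
  # before the old CR is destroyed.
  lifecycle {
    create_before_destroy = true
  }
}
```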