aws-nodegroup

Creates an additional nodegroup for the primary EKS cluster.

Note that the aws-eks module already creates a default nodegroup, so this module should only be used when you want one more.
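For illustration, a nodegroup like this might be declared in an Opta environment file roughly as follows. This is a sketch, not a complete file: the base/networking modules your environment needs are elided, and all names and values are placeholders.

```yaml
# Illustrative excerpt of an Opta environment file; values are placeholders.
name: staging
org_name: my-org
providers:
  aws:
    region: us-east-1
    account_id: "123456789012"
modules:
  # ... base/networking modules elided ...
  - type: aws-eks          # creates the cluster and its default nodegroup
  - type: aws-nodegroup    # the additional nodegroup described here
    name: extranodes
    node_instance_type: t3.large
    min_nodes: 3
    max_nodes: 15
```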

IAM Permissions given to the Nodegroup

Along with the nodegroup, Opta creates an AWS IAM role that is attached to each EC2 instance in the pool and handles all of the machine's IAM permissions, including the Kubernetes actions performed by the kubelet on the machine (for example, pulling an ECR image). Opta attaches the following policies to this role:

  • AmazonEKSWorkerNodePolicy
  • AmazonEKS_CNI_Policy
  • AmazonEC2ContainerRegistryReadOnly

The first two policies are needed for the EC2 instance to function properly as a Kubernetes node, and the last ensures it can read ECR images from this account. If you need more permissions, feel free to attach extra policies to this IAM role via the AWS CLI or the AWS web console; as long as you do not destroy or modify the existing attached policies, there should be no problem.
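For example, attaching one extra AWS-managed policy via the AWS CLI might look like this. The role name is a placeholder, not the name Opta actually generates; look up the real role created for the nodegroup in your account first.

```shell
# The role name below is a placeholder: find the actual nodegroup role
# in the IAM console before running this.
aws iam attach-role-policy \
  --role-name <nodegroup-iam-role-name> \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```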

THIS IAM ROLE IS NOT THE ONE USED BY YOUR CONTAINERS RUNNING IN THE CLUSTER. Opta handles creating appropriate IAM roles for each K8s service, but for any non-Opta-managed workloads in the cluster, please refer to the AWS documentation on IAM roles for service accounts (the OIDC provider is created by Opta).
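As a sketch of what that looks like for a non-Opta workload: you create an IAM role whose trust policy references the cluster's OIDC provider, then annotate the workload's Kubernetes service account with that role's ARN. All names and the ARN below are placeholders.

```yaml
# Placeholder service account for a non-Opta-managed workload.
# The IAM role must be created separately, with a trust policy that
# references the cluster's OIDC provider (already created by Opta).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-workload
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-workload-role
```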

Fields

| Name | Description | Default | Required |
| ---- | ----------- | ------- | -------- |
| `labels` | Labels for the Kubernetes nodes. | `{}` | False |
| `max_nodes` | Max number of nodes to allow via autoscaling. | `15` | False |
| `min_nodes` | Min number of nodes to allow via autoscaling. | `3` | False |
| `node_disk_size` | The size of the disk to give the nodes' EC2 instances, in GB. | `20` | False |
| `node_instance_type` | The EC2 instance type for the nodes. | `t3.medium` | False |
| `use_gpu` | Should we expect and use the GPUs present in the EC2 instances? | `False` | False |
| `spot_instances` | A boolean specifying whether to use spot instances for this nodegroup. The spot instances will be configured with a max price equal to the on-demand price, so there is no danger of overcharging. WARNING: by using spot instances you must accept the real risk of frequent abrupt node terminations and, although extremely rarely, even full blackouts (all nodes die). The former is a small risk, as containers of Opta services will be automatically restarted on surviving nodes; just make sure to specify a minimum of more than one container per service (Opta by default attempts to spread them out amongst many nodes). The latter is a graver concern, which can be addressed by having multiple nodegroups of different instance types, ideally with at least one non-spot (see the sketch after this table). | `False` | False |
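To illustrate the multi-nodegroup mitigation mentioned under `spot_instances`, a sketch of an environment's modules list might pair a spot group with a non-spot one. Module names and instance types here are placeholders.

```yaml
# Illustrative excerpt; module names and instance types are placeholders.
modules:
  # ... base/networking and aws-eks modules elided ...
  - type: aws-nodegroup
    name: ondemandnodes       # non-spot group keeps the cluster alive
    node_instance_type: t3.large
  - type: aws-nodegroup
    name: spotnodes           # cheaper, but nodes may be reclaimed abruptly
    node_instance_type: m5.large
    spot_instances: true
```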