Limitations

Arm node pools running Kubernetes versions earlier than 1.24.8-gke.1300
automatically add a taint during node pool creation to prevent workloads
without a matching toleration (typically non-Arm workloads) from being
scheduled on Arm nodes. Arm node pools in clusters at version
1.24.8-gke.1300 or later no longer add this taint. If you're upgrading a
cluster from a version earlier than 1.24.8-gke.1300, you must create this
taint yourself or otherwise account for its absence when upgrading; a sketch
follows these limitations.
Arm node pools on GKE on AWS don't support Cloud Service Mesh, Config Sync,
or Policy Controller. You must run these products on an x86 node
pool.
Clusters running Kubernetes version 1.24 need an x86 node pool to run the
Connect Agent.
If your cluster runs Kubernetes version 1.25 or later, you don't need an
x86 node pool.
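If you depend on the old tainting behavior after an upgrade, you can recreate
the taint manually. A minimal sketch, assuming the taint uses the well-known
kubernetes.io/arch=arm64 key and value with the NoSchedule effect (this exact
key is an assumption; verify the taint on an existing pre-upgrade node with
kubectl describe node first):

    # Taint all current Arm nodes so that only Pods with a matching
    # toleration can be scheduled onto them. The label selector matches
    # nodes whose architecture label is arm64.
    kubectl taint nodes -l kubernetes.io/arch=arm64 \
        kubernetes.io/arch=arm64:NoSchedule

Note that this taints only the nodes that exist at the time you run it; nodes
created later by scaling events won't carry the taint unless you reapply it.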
This page explains how to create an Arm node pool, why
multi-architecture images are the recommended way to deploy Arm workloads, and
how to schedule Arm workloads.
Before you begin
Before you create node pools for your Arm workloads, you need the following
resources:
An existing AWS cluster to create the node pool in. This cluster
must run Kubernetes version 1.24 or later.
An IAM instance profile for the node pool VMs.
A subnet where the node pool VMs will run.
If your cluster is running Kubernetes version 1.24, an x86 node pool to run
the Connect Agent.
For details on how to create a node pool in GKE on AWS, see
Create a node pool.
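Create an Arm node pool

GKE on AWS supports node pools built on the Canonical Ubuntu arm64 minimal
node image and the containerd runtime.

To create an Arm node pool and add it to an existing cluster, run the
following command:

    gcloud container aws node-pools create NODE_POOL_NAME \
        --cluster CLUSTER_NAME \
        --instance-type INSTANCE_TYPE \
        --root-volume-size ROOT_VOLUME_SIZE \
        --iam-instance-profile NODEPOOL_PROFILE \
        --node-version NODE_VERSION \
        --min-nodes MIN_NODES \
        --max-nodes MAX_NODES \
        --max-pods-per-node MAX_PODS_PER_NODE \
        --location GOOGLE_CLOUD_LOCATION \
        --subnet-id NODEPOOL_SUBNET \
        --ssh-ec2-key-pair SSH_KEY_PAIR_NAME \
        --config-encryption-kms-key-arn CONFIG_KMS_KEY_ARN

Replace the following: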
NODE_POOL_NAME: the name you choose for your node pool
CLUSTER_NAME: the name of the cluster to attach the node
pool to
INSTANCE_TYPE: one of the following instance types:
m6g
m6gd
t4g
r6g
r6gd
c6g
c6gd
c6gn
x2gd
c7g
im4gn
g5g
These instance types are powered by Arm-based AWS Graviton processors.
You also need to specify the instance size that you want. For example,
m6g.medium. For a complete list, see
Supported AWS instance types.
ROOT_VOLUME_SIZE: the desired size for each node's root
volume, in GiB
NODEPOOL_PROFILE: the IAM instance profile
for node pool VMs
NODE_VERSION: the Kubernetes version to install on each
node in the node pool, which must be version 1.24 or later. For example,
1.24.3-gke.200.
MIN_NODES: the minimum number of nodes the node pool can
contain
MAX_NODES: the maximum number of nodes the node pool can
contain
MAX_PODS_PER_NODE: the maximum number of Pods that can be
created on any single node in the pool
GOOGLE_CLOUD_LOCATION: the name of the Google Cloud
location from which this node pool will be managed
NODEPOOL_SUBNET: the ID of the subnet the node pool will
run on. If this subnet is outside of the VPC's primary CIDR block, you need
to take additional steps. For more information, see
security groups.
SSH_KEY_PAIR_NAME: the name of the AWS SSH key pair
created for SSH access (optional)
CONFIG_KMS_KEY_ARN: the Amazon Resource Name (ARN) of the
AWS KMS key that encrypts user data
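For example, a complete invocation with illustrative values (the pool,
cluster, and key pair names, location, subnet ID, instance profile, and KMS
key ARN below are hypothetical placeholders; substitute your own):

    gcloud container aws node-pools create arm-pool \
        --cluster my-cluster \
        --instance-type m6g.medium \
        --root-volume-size 100 \
        --iam-instance-profile gke-node-pool-profile \
        --node-version 1.24.3-gke.200 \
        --min-nodes 1 \
        --max-nodes 3 \
        --max-pods-per-node 110 \
        --location us-west1 \
        --subnet-id subnet-0123456789abcdef0 \
        --ssh-ec2-key-pair my-key-pair \
        --config-encryption-kms-key-arn arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab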
Understand multi-architecture images
Container images must be compatible with the architecture of the node where you
intend to run the Arm workloads. To ensure that your container image is
Arm-compatible, we recommend that you use multi-architecture ("multi-arch")
images.
A multi-arch image is an image that can support multiple architectures. It looks
like a single image with a single tag, but contains a set of images to run
on different machine architectures. Multi-arch images are
compatible with the Docker Image Manifest V2, Schema 2 format or the OCI Image
Index specification.
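For example, you can list the per-architecture entries behind a single tag
with docker manifest inspect (ubuntu:22.04 is just a convenient public
multi-arch image):

    # Print the manifest list for a multi-arch tag; the output contains one
    # manifest entry per platform, such as linux/amd64 and linux/arm64.
    docker manifest inspect ubuntu:22.04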
When you deploy a multi-arch image to a cluster, the container runtime
automatically chooses the image that is compatible with the architecture of the
node you're deploying to. Once you have a multi-arch image for a workload, you
can deploy this workload across multiple architectures. Scheduling a
single-architecture image onto an incompatible node causes an error at load
time.
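A common way to produce a multi-arch image is Docker Buildx. A minimal
sketch, assuming you have a Dockerfile and a registry you can push to (the
repository name is illustrative):

    # Build the same Dockerfile for x86 and Arm, then push a single
    # multi-arch tag; each node's container runtime pulls the matching
    # variant automatically.
    docker buildx build \
        --platform linux/amd64,linux/arm64 \
        --tag REGISTRY_HOST/my-app:v1 \
        --push .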
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-29 UTC."],[],[],null,["# Run Arm workloads in GKE on AWS\n\nGKE on AWS lets you run Arm workloads built for Arm-based\n[AWS Graviton processors](https://aws.amazon.com/ec2/graviton/).\n\nLimitations\n-----------\n\n- Arm node pools running Kubernetes versions earlier than 1.24.8-gke.1300\n automatically add a taint during node pool creation to prevent Arm workloads\n from being scheduled on non-Arm nodes. Arm node pools in clusters at version\n 1.24.8-gke.1300 or higher no longer add this taint. If you're upgrading from a\n cluster earlier than 1.24.8-gke.1300, you must create this\n taint yourself or otherwise take this into account when upgrading.\n\n- Arm node pools on GKE on AWS don't support Cloud Service Mesh, Config Sync,\n or Policy Controller. You must run these products on an x86 node\n pool.\n\n- Clusters running Kubernetes version 1.24 need an x86 node pool to run the\n [Connect Agent](/anthos/fleet-management/docs/connect-agent).\n If your cluster runs Kubernetes version 1.25 or later, you don't need an\n x86 node pool.\n\nThis page explains how to create an Arm node pool, why\nmulti-architecture images are the recommended way to deploy Arm workloads, and\nhow to schedule Arm workloads.\n\n### Before you begin\n\nBefore you create node pools for your Arm workloads, you need the following\nresources:\n\n- An existing AWS cluster to create the node pool in. 
This cluster must run Kubernetes version 1.24 or later.\n- An [IAM instance profile](/kubernetes-engine/multi-cloud/docs/aws/how-to/create-aws-iam-roles#create_a_node_pool_iam_role) for the node pool VMs.\n- A [subnet](/kubernetes-engine/multi-cloud/docs/aws/how-to/create-aws-vpc#create_the_node_pool_subnets) where the node pool VMs will run.\n- If your cluster is running Kubernetes version 1.24, an x86 node pool to run\n the [Connect Agent](/anthos/fleet-management/docs/connect-agent).\n\n For details on how to create a node pool in GKE on AWS, see\n [Create a node pool](/kubernetes-engine/multi-cloud/docs/aws/how-to/create-node-pool).\n\n### Create an Arm node pool\n\nGKE on AWS supports node pools built on the Canonical Ubuntu\narm64 minimal node image and `containerd` runtime.\n\nTo create an Arm node pool and add it to an existing cluster, run the following\ncommand: \n\n gcloud container aws node-pools create \u003cvar translate=\"no\"\u003eNODE_POOL_NAME\u003c/var\u003e \\\n --cluster \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e \\\n --instance-type \u003cvar translate=\"no\"\u003eINSTANCE_TYPE\u003c/var\u003e \\\n --root-volume-size \u003cvar translate=\"no\"\u003eROOT_VOLUME_SIZE\u003c/var\u003e \\\n --iam-instance-profile \u003cvar translate=\"no\"\u003eNODEPOOL_PROFILE\u003c/var\u003e \\\n --node-version \u003cvar translate=\"no\"\u003eNODE_VERSION\u003c/var\u003e \\\n --min-nodes \u003cvar translate=\"no\"\u003eMIN_NODES\u003c/var\u003e \\\n --max-nodes \u003cvar translate=\"no\"\u003eMAX_NODES\u003c/var\u003e \\\n --max-pods-per-node \u003cvar translate=\"no\"\u003eMAX_PODS_PER_NODE\u003c/var\u003e \\\n --location \u003cvar translate=\"no\"\u003eGOOGLE_CLOUD_LOCATION\u003c/var\u003e \\\n --subnet-id \u003cvar translate=\"no\"\u003eNODEPOOL_SUBNET\u003c/var\u003e \\\n --ssh-ec2-key-pair \u003cvar translate=\"no\"\u003eSSH_KEY_PAIR_NAME\u003c/var\u003e \\\n --config-encryption-kms-key-arn \u003cvar translate=\"no\"\u003eCONFIG_KMS_KEY_ARN\u003c/var\u003e\n\nReplace the following:\n\n- \u003cvar translate=\"no\"\u003eNODE_POOL_NAME\u003c/var\u003e: the name you choose for your node pool\n- \u003cvar translate=\"no\"\u003eCLUSTER_NAME\u003c/var\u003e: the name of the cluster to attach the node pool to\n- \u003cvar translate=\"no\"\u003eINSTANCE_TYPE\u003c/var\u003e: one of the following instance types:\n\n - `m6g`\n - `m6gd`\n - `t4g`\n - `r6g`\n - `r6gd`\n - `c6g`\n - `c6gd`\n - `c6gn`\n - `x2gd`\n - `c7g`\n - `im4gn`\n - `g5g`\n\n These instance types are powered by Arm-based AWS Graviton processors.\n You also need to specify the instance size that you want. For example,\n `m6g.medium`. For a complete list, see\n [Supported AWS instance types](/kubernetes-engine/multi-cloud/docs/aws/reference/supported-instance-types#supported_instance_types).\n- \u003cvar translate=\"no\"\u003eROOT_VOLUME_SIZE\u003c/var\u003e: the desired size for each node's root\n volume, in Gb\n\n- \u003cvar translate=\"no\"\u003eNODEPOOL_PROFILE\u003c/var\u003e: the IAM instance profile\n for node pool VMs\n\n- \u003cvar translate=\"no\"\u003eNODE_VERSION\u003c/var\u003e: the Kubernetes version to install on each\n node in the node pool, which must be version 1.24 or later. 
For example,\n `1.24.3-gke.200`.\n\n- \u003cvar translate=\"no\"\u003eMIN_NODES\u003c/var\u003e: the minimum number of nodes the node pool can\n contain\n\n- \u003cvar translate=\"no\"\u003eMAX_NODES\u003c/var\u003e: the maximum number of nodes the node pool can\n contain\n\n- \u003cvar translate=\"no\"\u003eMAX_PODS_PER_NODE\u003c/var\u003e: the maximum number of Pods that can be\n created on any single node in the pool\n\n- \u003cvar translate=\"no\"\u003eGOOGLE_CLOUD_LOCATION\u003c/var\u003e: the name of the Google Cloud\n\n location from which this node pool will be managed\n- \u003cvar translate=\"no\"\u003eNODEPOOL_SUBNET\u003c/var\u003e: the ID of the subnet the node pool will\n run on. If this subnet is outside of the VPC's primary CIDR block, you need\n to take additional steps. For more information, see\n [security groups](/kubernetes-engine/multi-cloud/docs/aws/reference/security-groups).\n\n- \u003cvar translate=\"no\"\u003eSSH_KEY_PAIR_NAME\u003c/var\u003e: the name of the AWS SSH key pair\n created for SSH access (optional)\n\n- \u003cvar translate=\"no\"\u003eCONFIG_KMS_KEY_ARN\u003c/var\u003e: the Amazon Resource Name (ARN) of the\n AWS KMS key that encrypts user data\n\nUnderstand multi-architecture images\n------------------------------------\n\nContainer images must be compatible with the architecture of the node where you\nintend to run the Arm workloads. To ensure that your container image is\nArm-compatible, we recommend that you use multi-architecture (\"multi-arch\")\nimages.\n\nA multi-arch image is an image that can support multiple architectures. It looks\nlike a single image with a single tag, but contains a set of images to run\non different machine architectures. Multi-arch images are\ncompatible with the Docker Image Manifest V2 Scheme 2 or OCI Image Index\nSpecifications.\n\nWhen you deploy a multi-arch image to a cluster, the container runtime\nautomatically chooses the image that is compatible with the architecture of the\nnode you're deploying to. Once you have a multi-arch image for a workload, you\ncan deploy this workload across multiple architectures. Scheduling a\nsingle-architecture image onto an incompatible node causes an error at load\ntime.\n\nTo learn more about how to use multi-arch images with Arm workloads, see\n[Build multi-architecture images for Arm workloads](/kubernetes-engine/docs/how-to/build-multi-arch-for-arm)\nin the Google Kubernetes Engine (GKE) documentation.\n\nWhat's next\n-----------\n\n- [Troubleshoot Arm workloads](/kubernetes-engine/multi-cloud/docs/aws/troubleshooting#troubleshoot-arm)."]]