Workload Identity

Workload Identity (WI) is a GKE feature that allows Kubernetes pods to authenticate to GCP APIs without manually managing IAM service account credentials.
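
Enabling the feature takes two gcloud commands; a minimal sketch, assuming an existing cluster and node pool (my-cluster, my-pool, the zone, and PROJECT_ID are placeholders):

$ gcloud container clusters update my-cluster \
    --zone=us-central1-a \
    --workload-pool=PROJECT_ID.svc.id.goog

$ gcloud container node-pools update my-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --workload-metadata=GKE_METADATA

The first command creates the cluster's Workload Identity pool; the second enables the GKE metadata server on the node pool.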

It's built on top of Workload Identity Pools, which allow federating external identity providers into Cloud IAM. It's the recommended way for customers to integrate their GKE workloads with other GCP services, since it provides a drop-in experience (the GKE metadata server) that maintains compatibility with existing Google client libraries.

Workload Identity solves the multitenancy and key management problems by giving each Kubernetes service account its own Google identity, and letting each pod automatically pull an access token for this identity using the existing Google client libraries.
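
Concretely, a Kubernetes service account (KSA) is mapped to a Google service account (GSA) with an IAM binding plus an annotation; a minimal sketch, where GSA_NAME, PROJECT_ID, NAMESPACE, and KSA_NAME are placeholders:

$ gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

$ kubectl annotate serviceaccount KSA_NAME \
    --namespace=NAMESPACE \
    iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

After this, pods running as KSA_NAME obtain access tokens for GSA_NAME through the client libraries' default credential chain, with no key files mounted.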

GKE Metadata Server

gke-metadata-server: a reimplementation of a subset of the GCE metadata server API, without legacy APIs or access to VM metadata. When a pod requests an access token, gke-metadata-server queries the CRI to identify the calling pod from the request's source IP, then exchanges the pod's Kubernetes token (a k8s JWT) for a Google access token using the IAM Security Token Service.
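
That exchange resembles the public Security Token Service token-exchange call; a minimal sketch of that API for illustration only (the audience and token values GKE uses internally are not shown here, and AUDIENCE and KUBERNETES_JWT are placeholders):

$ curl -s -X POST https://sts.googleapis.com/v1/token \
    -H "Content-Type: application/json" \
    -d '{
          "grantType": "urn:ietf:params:oauth:grant-type:token-exchange",
          "audience": "AUDIENCE",
          "scope": "https://www.googleapis.com/auth/cloud-platform",
          "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token",
          "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt",
          "subjectToken": "KUBERNETES_JWT"
        }'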

The GKE metadata server is the component of GKE Workload Identity that maps Kubernetes identities to Cloud IAM. It runs as a daemonset on every node in a Workload Identity-enabled node pool and implements a metadata API compatible with the Compute Engine and App Engine metadata servers, exposing that API to pods on the node.

The GKE metadata server includes a token endpoint that returns a Google access token based on the pod's Kubernetes service account. It also serves endpoints that return static metadata such as the numeric project ID, project ID, cluster name, hostname, GCE instance ID, and zone.
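
From inside a pod on a Workload Identity-enabled node pool, the token endpoint can be queried exactly as on a GCE VM; a minimal sketch:

$ curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"

The response is a JSON document containing access_token, expires_in, and token_type for the Google identity mapped to the pod's Kubernetes service account.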

To check the status of the gke-metadata-server pods:

$ kubectl get pods -A | grep gke-metadata-server

If the pod status is Running, it is working properly. If it is stuck in ImagePullBackOff, the Google service account (GSA) associated with your node pool lacks permission to pull the container image from your registry. Give it a role that includes the storage.objects.get permission, such as Storage Object Viewer (roles/storage.objectViewer).
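
A minimal sketch of granting that role, where PROJECT_ID and NODE_GSA are placeholders for the project and the node pool's Google service account:

$ gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:NODE_GSA@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"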

On Workload Identity-enabled node pools, all pod traffic bound for the GCE metadata server (metadata.google.internal, 169.254.169.254) is instead delivered to the gke-metadata-server daemonset.
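
One way to observe this from inside a pod is to ask the metadata server which identity it is serving; a minimal sketch:

$ curl -s -H "Metadata-Flavor: Google" \
    "http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/email"

On a Workload Identity-enabled node pool this returns the GSA mapped to the pod's Kubernetes service account rather than the node's service account.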

Where are the Nodes running?

The k8s control plane runs in Google-managed tenant projects, while user cluster nodes run in customer projects.