How the API server verifies identity -- certificates, tokens, OIDC, and what happens when they expire.
Know the difference between 401 (authn fail) and 403 (authz fail). Know that users do not exist as objects in K8s. Understand what a kubeconfig contains.
Set up OIDC auth with Dex and an external IdP. Configure cert rotation automation. Know kubeadm certs renew all and which components need restart afterward.
Design auth strategy for a multi-team cluster: OIDC for humans, bound SA tokens for workloads, break-glass certs stored in a vault. Implement cert rotation as a CronJob. Audit all ServiceAccounts for long-lived tokens.
Own authentication posture across 10+ clusters. Define cert lifecycle management policies. Lead migration from static kubeconfigs to OIDC. Define break-glass and recovery runbooks. Know how to recover a cluster from complete cert expiry with zero data loss.
Engineer runs kubectl get pods -- Error: certificate has expired
CRITICAL: Team checks all nodes -- control plane shows Running but API unreachable
WARNING: kubeadm certs check-expiration: all 6 certs expired yesterday
CRITICAL: kubeadm certs renew all completes -- controller-manager still using stale in-memory cert
WARNING: Static pod manifests touched -- all control-plane pods restart -- kubectl works again
The question this raises
How does Kubernetes authenticate every request, what certificates does it rely on, and how do you recover from a complete certificate expiry without losing the cluster?
An engineer runs kubectl get pods and gets: "error: You must be logged in to the server (Unauthorized)". The cluster nodes are reachable and other teams report normal operation. What is the most likely cause?
Lesson outline
Authentication answers one question: who are you?
It does NOT answer "what can you do?" -- that is authorization (RBAC). Every request to the Kubernetes API server must pass authn first. If authn fails, the server returns 401. If it passes but the action is unauthorized, it returns 403. These two steps are always separate.
User identity vs Service Account identity
Humans and external tools authenticate with certificates or OIDC tokens. Pods use ServiceAccount tokens automatically mounted at /var/run/secrets/kubernetes.io/serviceaccount/token. These are two completely separate authentication pipelines.
Kubernetes has no User object
kubectl get users fails with "the server doesn't have a resource type users" -- there is no User resource at all. Users exist only as strings inside certificates (CN field) or JWT claims. This surprises everyone the first time: you cannot list users, you can only create RoleBindings that reference username strings.
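Because users are only strings, "granting access" means writing that string into a binding. A sketch, assuming a `pod-reader` Role already exists and an engineer holds a client certificate with CN=alice (both names illustrative):

```yaml
# RoleBinding that references a username string -- no User object is created.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-read-pods
  namespace: default
subjects:
- kind: User                        # not a real object, just a string match
  name: alice                       # matches the CN field of a client cert
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Delete the certificate (or let it expire) and the binding simply matches nothing -- there is no account to disable.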
kubectl / curl
|
| HTTPS request
v
+-------------------+
| API Server |
| Auth Chain |
+-------------------+
|
+---> [1] Client Certificate -- CN=system:admin, O=system:masters
| (kubeconfig)
|
+---> [2] Bearer Token -- ServiceAccount JWT or static token file
| (Authorization: Bearer <token>)
|
+---> [3] OIDC Token -- Dex / Okta / Google JWT
| (--oidc-issuer-url, --oidc-client-id flags on API server)
|
+---> [4] Webhook Token Auth -- calls external HTTP endpoint
| (enterprise SSO bridge)
|
+---> [5] Bootstrap Token -- kubeadm join only, short-lived
|
+---> [6] Anonymous -- disabled in hardened clusters
|
v
Identity: username + groups
|
v
Authorization (RBAC)
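Method [3] in the chain above is enabled by flags on the API server itself. A sketch of the relevant static pod manifest fragment -- the issuer URL, client ID, and claim names are illustrative and must match your IdP:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
spec:
  containers:
  - command:
    - kube-apiserver
    - --oidc-issuer-url=https://dex.example.com   # must serve OIDC discovery
    - --oidc-client-id=kubernetes
    - --oidc-username-claim=email                 # JWT claim used as username
    - --oidc-groups-claim=groups                  # JWT claim used as groups
```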
How this concept changes your thinking
Service Account tokens in Kubernetes < 1.21
Before: "Long-lived tokens with no expiry -- leaked token = permanent access until SA deleted"
After: "Bound Service Account tokens (1.21+): audience-bound, time-limited, pod-bound -- invalid the moment the pod dies"
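Bound tokens can also be requested explicitly through a projected volume, which is how workloads get audience-scoped credentials for external systems. A sketch -- the names (my-app, vault) are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app
  containers:
  - name: app
    image: my-app:latest
    volumeMounts:
    - name: bound-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: bound-token
    projected:
      sources:
      - serviceAccountToken:
          path: token
          audience: vault          # token is only valid for this audience
          expirationSeconds: 3600  # kubelet rotates it before expiry
```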
OIDC vs cert-based human auth
Before: "Each engineer gets a kubeconfig with a personal cert -- no central revocation, certs last 1 year"
After: "OIDC via Dex + corporate IdP -- revoke from Okta, access gone instantly, 1-hour token TTL"
What happens on every kubectl command
1. kubectl reads kubeconfig -- finds cluster CA cert, client cert, client key
2. TLS handshake: API server presents its serving cert (signed by cluster CA), kubectl verifies it
3. kubectl presents its client cert -- API server verifies it against cluster CA
4. API server extracts CN (username) and O (groups) from the client cert Subject field
5. Username + groups passed to RBAC authorizer -- can this identity perform this verb on this resource?
6. Request proceeds or returns 403
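The CN/O extraction step can be seen with plain openssl: generate a throwaway client cert with the identity baked in, then read back the Subject the API server parses. Names and paths here are illustrative:

```shell
# Throwaway key + self-signed client cert: CN is the username, O is the group
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=alice/O=sre-team" \
  -keyout /tmp/alice.key -out /tmp/alice.crt 2>/dev/null

# Read back the Subject field exactly as the API server would
openssl x509 -in /tmp/alice.crt -noout -subject
# subject=CN = alice, O = sre-team   (exact spacing varies by openssl version)
```

In a real cluster the cert would be signed by the cluster CA rather than self-signed, but the identity lives in the same Subject fields.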
O=system:masters is God mode
Any certificate with O=system:masters group bypasses RBAC entirely -- hardcoded superuser. This is how break-glass certs work, but also how attackers with CA access own your cluster forever. Guard your cluster CA private key like the root password.
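To make that concrete, here is a sketch of what anyone holding the CA key can do -- the same steps serve a break-glass procedure or an attacker. Paths are throwaway; on a kubeadm cluster the real CA lives at /etc/kubernetes/pki/ca.crt and ca.key:

```shell
# 1. The cluster CA keypair (simulated here with a fresh one)
openssl genrsa -out /tmp/ca.key 2048 2>/dev/null
openssl req -x509 -new -key /tmp/ca.key -days 1 -subj "/CN=kubernetes" -out /tmp/ca.crt

# 2. Mint a client cert with O=system:masters -- superuser, no RoleBinding needed
openssl req -new -newkey rsa:2048 -nodes -keyout /tmp/admin.key \
  -subj "/CN=breakglass/O=system:masters" -out /tmp/admin.csr 2>/dev/null
openssl x509 -req -in /tmp/admin.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key \
  -CAcreateserial -days 1 -out /tmp/admin.crt 2>/dev/null

# 3. The API server accepts this cert for as long as it is valid; with no CRL
#    support, the only defense is protecting ca.key
openssl verify -CAfile /tmp/ca.crt /tmp/admin.crt
```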
```yaml
# Typical kubeconfig structure
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 cluster CA>
    server: https://api.cluster.example.com:6443
  name: production
users:
- name: sre-team
  user:
    client-certificate-data: <base64 client cert>  # CN=alice, O=sre-team
    client-key-data: <base64 client key>
contexts:
- context:
    cluster: production
    user: sre-team
    namespace: default
  name: production-sre
```
```bash
# Check when your kubeadm certs expire
kubeadm certs check-expiration
# CERTIFICATE   EXPIRES                  RESIDUAL TIME
# admin.conf    Nov 23, 2025 14:32 UTC   364d
# apiserver     Nov 23, 2025 14:32 UTC   364d

# Renew ALL certs together (on control-plane node)
kubeadm certs renew all

# Copy new admin kubeconfig to your ~/.kube/config
cp /etc/kubernetes/admin.conf ~/.kube/config

# Restart static control-plane pods to pick up new certs
# (they don't watch the filesystem -- touch forces kubelet restart)
touch /etc/kubernetes/manifests/kube-apiserver.yaml
touch /etc/kubernetes/manifests/kube-controller-manager.yaml
touch /etc/kubernetes/manifests/kube-scheduler.yaml

# Create a short-lived ServiceAccount token (K8s 1.24+)
kubectl create token my-sa --duration=8h --audience=my-service

# Inspect a ServiceAccount token's claims
kubectl create token default | cut -d. -f2 | base64 -d 2>/dev/null | python3 -m json.tool
```
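The last pipeline works because a JWT is three base64url segments joined by dots. A minimal reconstruction with a hand-built payload, so it runs without a cluster (the claim values are made up):

```shell
# A JWT is header.payload.signature, each segment base64url-encoded
payload='{"sub":"system:serviceaccount:default:default","aud":["my-service"],"exp":1700000000}'
token="fakeheader.$(printf '%s' "$payload" | base64 -w0).fakesig"

# Same pipeline as above: take segment 2, decode, pretty-print
printf '%s' "$token" | cut -d. -f2 | base64 -d 2>/dev/null | python3 -m json.tool
```

Real tokens use base64url (no padding), which plain `base64 -d` sometimes complains about -- that is why the original pipeline silences stderr.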
Blast radius when authentication breaks
Partial cert renewal -- leaves components using stale certs

```bash
# WRONG: Only renewing the apiserver cert
kubeadm certs renew apiserver
# Restarting only the API server
systemctl restart kube-apiserver
# Result: kube-controller-manager still has old cert in memory
# Error: "TLS handshake error... certificate has expired"
# Controller manager cannot authenticate to API server
```

```bash
# RIGHT: Renew ALL certs together
kubeadm certs renew all
# Distribute new kubeconfig to all operators
cp /etc/kubernetes/admin.conf ~/.kube/config
# Restart ALL control plane static pod components
touch /etc/kubernetes/manifests/kube-apiserver.yaml
touch /etc/kubernetes/manifests/kube-controller-manager.yaml
touch /etc/kubernetes/manifests/kube-scheduler.yaml
# Verify renewal succeeded
kubeadm certs check-expiration
```

kubeadm certs renew all renews every managed certificate together. Static pod components hold the old cert in memory -- they must be restarted by touching the manifest file. kubelet watches the /etc/kubernetes/manifests directory and immediately restarts the pod when the file modification time changes.
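Why `touch` is enough: kubelet treats a changed modification time as a changed file. A simulation with a scratch directory (the real path is /etc/kubernetes/manifests):

```shell
mkdir -p /tmp/manifests
echo 'apiVersion: v1' > /tmp/manifests/kube-apiserver.yaml
before=$(stat -c %Y /tmp/manifests/kube-apiserver.yaml)

sleep 1
touch /tmp/manifests/kube-apiserver.yaml   # what the recovery procedure does

after=$(stat -c %Y /tmp/manifests/kube-apiserver.yaml)
[ "$after" -gt "$before" ] && echo "mtime changed -- kubelet restarts the static pod"
```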
| Method | Lifetime | Revocation | Best for | Risk |
|---|---|---|---|---|
| Client Certificate | 1 year (kubeadm default) | No native revocation -- CRL not supported | Break-glass / bootstrap only | Leaked cert = permanent access until expiry |
| ServiceAccount JWT (bound) | Minutes to hours (configurable) | Instant -- pod deletion invalidates token | Pod-to-API access | Low -- audience + time bound |
| OIDC (Dex/Okta) | 1 hour typical | Instant -- revoke in IdP | Human users in orgs with IdP | Requires OIDC provider availability |
| Webhook Token | Provider-dependent | Real-time via webhook | Enterprise SSO bridge | Adds external dependency to auth path |
| Static token file | Never expires | Requires API server restart | Development only -- NEVER production | Tokens in plaintext on disk forever |
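The riskiest rows in the table correspond to concrete apiserver flags, so they are easy to audit. A sketch using a sample manifest -- on a real control-plane node, point the grep at /etc/kubernetes/manifests/kube-apiserver.yaml instead:

```shell
# Sample manifest standing in for the real apiserver static pod spec
cat > /tmp/kube-apiserver.yaml <<'EOF'
spec:
  containers:
  - command:
    - kube-apiserver
    - --token-auth-file=/etc/kubernetes/tokens.csv
    - --anonymous-auth=true
EOF

# Flag the two riskiest settings from the table above
grep -E -- '--(token-auth-file|anonymous-auth=true)' /tmp/kube-apiserver.yaml \
  && echo 'WARNING: static tokens and/or anonymous auth enabled'
```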
How Kubernetes authenticates users
📖 What the exam expects
Kubernetes supports multiple authentication methods: client certificates, bearer tokens, OIDC, and webhook token authentication. The API server tries each configured method in order and accepts the first successful one.
Asked in security-focused and platform engineering interviews. Senior roles always include a scenario about cert expiry or a user who can no longer access the cluster.
Strong answer: Immediately mentions O=system:masters and its danger, knows kubeadm cert renewal procedure, advocates for OIDC over per-user certs, has a break-glass story from experience.
Red flags: Confusing authentication with authorization, thinking kubectl get users works, not knowing cert expiry is a real operational concern, claiming static tokens are fine for production.
Related concepts
Suggested next: RBAC & Service Accounts: Identity and Authorization in Kubernetes