Kubernetes Cluster Architecture: Control Plane & Nodes
Understanding the distributed decision-making system that powers container orchestration: control plane, nodes, etcd, and the reconciliation loop.
How this might come up in interviews
This topic comes up in system design interviews for platform/infra roles ("Design a self-healing deployment system") and is also common in Kubernetes admin/CKA prep.
Common questions:
- What happens when you run kubectl apply?
- What is the difference between a Deployment and a ReplicaSet?
- How does Kubernetes handle a node failure?
- What would happen if etcd went down?
- Walk me through the kube-scheduler decision process.
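The first question in the list has a canonical flow worth rehearsing. The sketch below is a toy simulation, not real Kubernetes client code (all names and data structures are illustrative): the API server validates the manifest and persists the desired state to etcd, the scheduler binds the pod to a node, and that node's kubelet starts the container.

```python
# Toy simulation of the `kubectl apply` flow. None of this is real
# Kubernetes client code; the names are illustrative only.

etcd = {}  # stands in for the etcd key-value store

def api_server_apply(manifest):
    """kubectl sends the manifest to kube-apiserver, which validates
    it and persists the desired state to etcd."""
    assert "kind" in manifest and "name" in manifest, "invalid manifest"
    key = f"/registry/{manifest['kind'].lower()}/{manifest['name']}"
    etcd[key] = {**manifest, "status": "Pending"}
    return key

def scheduler_bind(key, nodes):
    """kube-scheduler watches for Pending pods and binds each one to a
    suitable node by writing the binding back through the API server."""
    pod = etcd[key]
    pod["nodeName"] = nodes[0]          # trivially pick the first node
    pod["status"] = "Scheduled"

def kubelet_run(key):
    """The kubelet on the bound node sees the assignment and asks the
    container runtime (e.g. containerd) to start the container."""
    etcd[key]["status"] = "Running"

# End-to-end: apply -> schedule -> run. Note that every step goes
# through the API server and etcd -- no component talks to a node
# directly on behalf of kubectl.
key = api_server_apply({"kind": "Pod", "name": "web"})
scheduler_bind(key, nodes=["node-1", "node-2"])
kubelet_run(key)
print(etcd[key]["status"])  # -> Running
```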
Strong answer: mentions etcd quorum requirements, knows the API server is stateless, can explain why self-healing works (the ReplicaSet controller watches the pod count), and notes that managed Kubernetes offerings hide the control plane.
Red flags: saying "kubectl talks to the node directly", assuming Docker is the container runtime (modern clusters use CRI runtimes such as containerd or CRI-O), or confusing which component schedules pods with which component runs them.
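The quorum point is easy to quantify: etcd uses Raft, so a cluster of n members can commit writes only while a majority, floor(n/2) + 1, is reachable. A quick sketch of the arithmetic:

```python
# etcd uses Raft: a cluster of n members can commit writes only while
# a majority (the quorum) is reachable.

def quorum(members: int) -> int:
    """Minimum number of live members needed to accept writes."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """How many members can fail before the cluster loses write quorum."""
    return members - quorum(members)

for n in (1, 3, 5):
    print(n, quorum(n), tolerated_failures(n))
# A 3-member cluster tolerates 1 failure; 5 members tolerate 2.
# If quorum is lost, etcd stops accepting writes, so the API server
# can no longer persist changes: existing pods keep running, but the
# cluster cannot reconcile or accept new state.
```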
Key takeaways
- Control plane = brain (API server is the only entry point)
- etcd stores ALL cluster state — losing it means losing your cluster
- Reconciliation loop: desired state vs actual state is the core K8s idea
- Worker nodes run your workloads — kubelet is the node agent
- Every kubectl command hits kube-apiserver first
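The reconciliation loop in the takeaways can be sketched as a function that compares desired replicas against live pods and creates or removes pods to close the gap. This is a simplified illustration, not the real ReplicaSet controller:

```python
# Simplified sketch of the ReplicaSet controller's reconciliation loop:
# compare desired state (spec.replicas) with actual state (live pods)
# and act to close the gap. Illustrative only.

def reconcile(desired_replicas, pods):
    """One pass of the control loop. Returns the converged pod list."""
    pods = [p for p in pods if p["healthy"]]   # node failure -> pod drops out
    while len(pods) < desired_replicas:        # too few: create replacements
        pods.append({"name": f"web-{len(pods)}", "healthy": True})
    while len(pods) > desired_replicas:        # too many: scale down
        pods.pop()
    return pods

# A node failure marks one pod unhealthy; the next reconcile pass
# replaces it -- which is all "self-healing" means.
pods = [{"name": "web-0", "healthy": True},
        {"name": "web-1", "healthy": False}]   # pod lost with its node
pods = reconcile(desired_replicas=2, pods=pods)
print(len(pods))  # -> 2
```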