The Simplified Tech


© 2026 TheSimplifiedTech. All rights reserved.

Interactive Explainer

Network Segmentation & Zero Trust in Kubernetes

A flat Kubernetes network where every pod can reach every other pod is a lateral movement attack path. Zero-trust micro-segmentation -- using NetworkPolicy, namespace isolation, and mTLS -- limits blast radius when a workload is compromised.

Relevant for: Mid-level, Senior, Staff
Why this matters at your level
Mid-level

Understand that Kubernetes has no network segmentation by default. Know what NetworkPolicy does and that CNI support is required. Be able to write a basic default-deny plus allow-DNS policy.

Senior

Design namespace segmentation with dependency mapping before applying default-deny. Understand CNI enforcement differences (Flannel vs Calico vs Cilium). Know the AND/OR selector semantics. Implement and verify with traffic observation tools.

Staff

Design the cluster topology decision (how many clusters per tier, whether mTLS replaces or augments NetworkPolicy). Define the organizational network segmentation policy. Own the migration from flat to segmented networking on production clusters without downtime.


~6 min read
Case study timeline: Data Plane Incident -- Flat Network Lateral Movement, Dev-to-Prod (2022)
  • Day 0 -- Cluster deployed with no NetworkPolicy (default-allow networking)
  • 6 months -- Dev engineer discovers a dev pod has direct network access to the prod DB
  • 6 months + 1 day (WARNING) -- Network audit finds zero NetworkPolicies applied across 23 namespaces
  • 6 months + 2 days (CRITICAL) -- Blast radius calculated: any compromised pod can reach any service
  • 6 months + 5 weeks (CRITICAL) -- Namespace-level default-deny NetworkPolicy applied after a 4-week audit of service dependencies

Incident metrics tracked: flat network exposure before detection, namespaces with no NetworkPolicy, remediation engineering effort, and NetworkPolicy in place at deployment.

The question this raises

How do you design network segmentation that limits lateral movement without breaking the legitimate service-to-service calls your application depends on?

Test your assumption first

You apply a default-deny NetworkPolicy to the payments namespace. Immediately, all pods in the namespace start failing their readiness probes -- service names cannot be resolved. What did you forget?

Lesson outline

What network segmentation solves

Kubernetes default: every pod can reach every pod

By default, Kubernetes has no network segmentation. Every pod can initiate connections to every other pod in the cluster regardless of namespace. If a container is compromised -- via a CVE, a supply chain attack, or a malicious image -- the attacker has unrestricted network access to every service, database, and metadata endpoint in the cluster. NetworkPolicy is the primary control that changes this.

The three segmentation layers in Kubernetes

  • NetworkPolicy (L3/L4) — Kubernetes-native policy enforced by the CNI plugin. Controls pod-to-pod and pod-to-external traffic by namespace/pod selector and port. Does nothing without a CNI that supports it (Flannel does NOT support NetworkPolicy -- use Calico or Cilium)
  • Namespace isolation — Namespaces provide a soft boundary -- combined with default-deny NetworkPolicy and RBAC, they prevent cross-namespace access. Without NetworkPolicy, namespace isolation is purely administrative (no network enforcement)
  • mTLS (Istio/Linkerd) — Mutual TLS at the application layer -- services must present a valid certificate to communicate. Even if a NetworkPolicy is bypassed, mTLS with AuthorizationPolicy ensures only authenticated services can call each other
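
As a sketch of the mTLS layer, the following Istio configuration requires mutual TLS for all workloads in a namespace and restricts callers by service identity. The namespace and service-account names are illustrative, and the exact `apiVersion` may vary by Istio release; treat this as a shape to adapt, not a drop-in manifest.

```yaml
# Hypothetical sketch: enforce mTLS for every workload in the payments namespace
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: require-mtls
  namespace: payments
spec:
  mtls:
    mode: STRICT        # reject any plaintext (non-mTLS) connection
---
# Only the orders-service identity may call payments workloads
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-orders-only
  namespace: payments
spec:
  action: ALLOW
  rules:
    - from:
        - source:
            # SPIFFE-style identity: cluster / namespace / service account
            principals: ["cluster.local/ns/orders/sa/orders-service"]
```

With this in place, even a request that slips past a misconfigured NetworkPolicy is rejected unless the caller presents a valid certificate for the allowed identity.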

The system view: default-deny with surgical allow

Zero-trust network segmentation pattern:

Without NetworkPolicy (default):
  pod-A (dev ns)   ----> pod-B (prod ns)   ALLOWED
  pod-A (dev ns)   ----> DB (prod ns)      ALLOWED  <- lateral movement risk
  pod-X (any ns)   ----> any pod           ALLOWED

With default-deny + surgical allow:
  [payments namespace]
    NetworkPolicy: default-deny ALL ingress + egress
    + allow ingress from [orders namespace] on port 8080
    + allow egress to [postgres namespace] on port 5432
    + allow egress to kube-dns (port 53)  <- CRITICAL: always allow DNS

  [orders namespace]
    NetworkPolicy: default-deny ALL ingress + egress
    + allow ingress from [api-gateway namespace] on port 8080
    + allow egress to [payments namespace] on port 8080
    + allow egress to kube-dns on port 53

  Result: dev pods cannot reach prod at all.
          A compromised orders pod can only reach payments and DNS.
          Blast radius is bounded to the service dependency graph.

CRITICAL SELECTORS:
  podSelector: {}          <- matches ALL pods in namespace
  namespaceSelector: {}    <- matches ALL namespaces (dangerous!)
  podSelector: {matchLabels: {app: payments-api}}  <- specific
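
The selector semantics above can be modeled as a toy evaluator in a few lines of Python. This is illustrative, not the real CNI logic: within one `from` element, `namespaceSelector` and `podSelector` are ANDed; separate `from` elements (and separate policies) are ORed. One real-API subtlety the model ignores: a lone `podSelector` in a `from` entry is scoped to the policy's own namespace.

```python
# Toy model of NetworkPolicy "from" matching (illustrative, not real CNI behavior).

def selector_matches(selector, labels):
    """Empty selector ({}) matches ALL; otherwise every matchLabel must be present."""
    required = selector.get("matchLabels", {})
    return all(labels.get(k) == v for k, v in required.items())

def peer_allowed(from_rules, peer_ns_labels, peer_pod_labels):
    """Separate 'from' elements are ORed; selectors inside one element are ANDed."""
    for rule in from_rules:
        ns_ok = ("namespaceSelector" not in rule) or \
            selector_matches(rule["namespaceSelector"], peer_ns_labels)
        pod_ok = ("podSelector" not in rule) or \
            selector_matches(rule["podSelector"], peer_pod_labels)
        if ns_ok and pod_ok:
            return True
    return False

# One element with BOTH selectors: AND -- the pod must be orders-service
# AND live in a team=orders namespace.
and_rule = [{"namespaceSelector": {"matchLabels": {"team": "orders"}},
             "podSelector": {"matchLabels": {"app": "orders-service"}}}]

# Two elements: OR -- any pod in a team=orders namespace, OR an orders-service pod
# (which the real API would scope to the policy's own namespace).
or_rules = [{"namespaceSelector": {"matchLabels": {"team": "orders"}}},
            {"podSelector": {"matchLabels": {"app": "orders-service"}}}]

print(peer_allowed(and_rule, {"team": "orders"}, {"app": "orders-service"}))  # True
print(peer_allowed(and_rule, {"team": "dev"}, {"app": "orders-service"}))     # False
print(peer_allowed(or_rules, {"team": "dev"}, {"app": "orders-service"}))     # True
```

The second call is the case that surprises people: the pod label matches, but the AND on the namespace label blocks it.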

How this concept changes your thinking

Situation: Applying default-deny NetworkPolicy to a production namespace

Before: "I will apply default-deny and then add allow rules as services start failing. This is safer than trying to map dependencies upfront."

After: "Apply in AUDIT mode first: deploy a CNI with NetworkPolicy logging (Cilium), log blocked connections for 48 hours without enforcing, then write allow rules from the observed traffic. Applying default-deny without a dependency map in production causes immediate outages."

Situation: Segmenting dev from prod in a multi-tenant cluster

Before: "Namespaces provide isolation -- dev engineers cannot get into prod because they do not have RBAC access to the prod namespace."

After: "RBAC prevents API access. It does NOT prevent network access. A dev pod can curl prod endpoints directly at the network layer regardless of RBAC. NetworkPolicy is required to enforce network isolation between namespaces."

How to implement zero-trust network segmentation

Zero-trust NetworkPolicy implementation sequence

  1. Verify your CNI supports NetworkPolicy enforcement. Calico and Cilium both enforce NetworkPolicy. Flannel does NOT -- if using Flannel, add Calico as a network policy engine on top.
  2. Map all service dependencies before applying any policy. Use network traffic observability tools (Cilium Hubble, Calico Enterprise flow logs, or kubectl exec + netstat) to identify every legitimate connection.
  3. Apply namespace labels for selector matching. Every namespace should have labels like environment: production and team: payments. These labels are used in NetworkPolicy namespaceSelector fields.
  4. Apply default-deny ingress and default-deny egress to each namespace as separate NetworkPolicy objects. Default-deny egress must explicitly allow DNS (port 53 to kube-dns) or all internal DNS resolution breaks.
  5. Layer in surgical allow rules for each legitimate service dependency. Allow rules are additive -- multiple NetworkPolicy objects in the same namespace are OR'd together.
  6. Verify with NetworkPolicy testing tools such as kubectl-netpol or Cilium Hubble to confirm allowed traffic passes and blocked traffic is dropped.
  7. Apply progressively: dev namespaces first (higher tolerance for errors), then staging, then production with careful monitoring.


network-policy-default-deny.yaml

# Step 1: Default deny ALL ingress and egress in a namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments
spec:
  # podSelector: {} with no matchLabels matches ALL pods in the namespace --
  # the default-deny anchor
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# Step 2: Allow DNS egress (without this, all service name resolution breaks).
# DNS egress MUST be explicitly allowed after default-deny or CoreDNS lookups fail silently.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
---
# Step 3: Allow specific ingress from the orders namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-orders
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        # namespaceSelector AND podSelector in the same "from" entry must BOTH match --
        # the AND operator, not OR. This is the most misunderstood selector combination.
        - namespaceSelector:
            matchLabels:
              team: orders
          podSelector:
            matchLabels:
              app: orders-service
      ports:
        - port: 8080

What breaks -- NetworkPolicy common mistakes

Blast radius: NetworkPolicy mistakes that break services

  • Forgetting DNS egress — Default-deny egress with no DNS allow rule breaks ALL service name resolution. Pods can still reach IPs directly but cannot resolve service names. Symptoms: connection refused for service names but not IPs
  • namespaceSelector {} without labels — namespaceSelector: {} (empty) matches ALL namespaces -- it is a wildcard, not "no namespace." This defeats the purpose of namespace isolation
  • CNI without NetworkPolicy support — NetworkPolicy objects are accepted by the API server regardless of CNI. If your CNI does not enforce them (Flannel), all pods think they are protected but no traffic is blocked
  • Missing podSelector AND namespaceSelector — Using only namespaceSelector allows any pod in that namespace to send traffic. For production isolation, always combine namespaceSelector with podSelector to limit to the specific service
  • Blocking health check probes — liveness and readiness probes from kubelet hit pod IPs directly -- they are NOT intercepted by NetworkPolicy. However, probes that go through Services ARE affected. Verify probes work after applying policy
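
The broad allow-DNS rule shown earlier permits port-53 egress to anything. A tighter variant scopes DNS egress to the kube-dns pods in kube-system. This is a sketch: the `kubernetes.io/metadata.name` namespace label is set automatically on recent Kubernetes versions (1.21+), and `k8s-app: kube-dns` is the conventional label on CoreDNS/kube-dns pods -- verify both in your cluster before relying on them.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-to-kube-dns-only
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        # Both selectors in one entry: ANDed -- kube-dns pods in kube-system only
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns   # conventional label on CoreDNS/kube-dns pods
      ports:
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
```

This closes the gap where a compromised pod could tunnel data out via port 53 to an arbitrary destination.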

Cilium Hubble: observe before enforcing

Cilium Hubble provides a real-time flow map of all pod-to-pod connections in the cluster. Before applying default-deny, use hubble observe to record all actual network traffic for 48 hours. Export as a dependency graph, then write NetworkPolicy from the real observed traffic. This eliminates the guesswork that causes outages when applying policy to production.
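
The observe-then-write workflow can be sketched as a small script that reduces exported flow records to a unique edge list, where each edge is a candidate allow rule. The JSON field names below are assumptions modeled loosely on `hubble observe -o json` output -- check them against your actual export before use.

```python
import json

def dependency_edges(flow_lines):
    """Reduce JSON flow records to unique (src ns, dst ns, dst pod, port) edges.

    Field names are assumptions loosely modeled on Hubble's flow format --
    verify against real `hubble observe -o json` output.
    """
    edges = set()
    for line in flow_lines:
        flow = json.loads(line)
        src = flow.get("source", {})
        dst = flow.get("destination", {})
        port = flow.get("l4", {}).get("TCP", {}).get("destination_port")
        if src.get("namespace") and dst.get("namespace") and port:
            edges.add((src["namespace"], dst["namespace"],
                       dst.get("pod_name", "?"), port))
    return sorted(edges)

# Two synthetic flow records; the second is a duplicate connection and is deduplicated
sample = [
    '{"source": {"namespace": "orders"}, "destination": {"namespace": "payments",'
    ' "pod_name": "payments-api-0"}, "l4": {"TCP": {"destination_port": 8080}}}',
    '{"source": {"namespace": "orders"}, "destination": {"namespace": "payments",'
    ' "pod_name": "payments-api-0"}, "l4": {"TCP": {"destination_port": 8080}}}',
]
for edge in dependency_edges(sample):
    print(edge)   # each edge becomes a candidate NetworkPolicy allow rule
```

Run against 48 hours of recorded traffic, the resulting edge list is the dependency map the lesson says to build before any default-deny is applied.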

Decision guide: when to add mTLS on top of NetworkPolicy

Network segmentation depth selection

Do you handle regulated data (PCI, HIPAA, financial) or have strict zero-trust requirements?
  • Yes: Implement mTLS (Istio or Linkerd) on top of NetworkPolicy. NetworkPolicy provides L3/L4 segmentation; mTLS provides L7 mutual authentication -- even if a network policy is misconfigured, services must present valid certs.
  • No: Continue to the next question.

Is your CNI Calico or Cilium (NetworkPolicy enforcement supported)?
  • Yes: Apply namespace-level default-deny NetworkPolicy with surgical allow rules. This is sufficient for most environments.
  • No: Add Calico network policy enforcement alongside your existing CNI, or replace it with Calico/Cilium. Flannel with no enforcement provides zero actual segmentation.

Do you have more than one sensitivity tier in the same cluster (e.g. dev and prod)?
  • Yes: Separate them into distinct clusters -- network segmentation between namespaces is always weaker than cluster isolation. Alternatively, apply strict NetworkPolicy with namespace labels and verify with regular audits.
  • No: Namespace-level default-deny with team-labeled namespaces provides adequate segmentation for a single-tier cluster.

Cost and complexity: segmentation depth vs operational overhead

  • No NetworkPolicy (default) -- Blast radius if a pod is compromised: entire cluster, all services and databases reachable. Implementation effort: zero, but existential risk.
  • Namespace default-deny + allow rules -- Blast radius: bounded to the namespace dependency graph. Effort: high upfront (dependency mapping), low ongoing maintenance.
  • NetworkPolicy + Istio mTLS + AuthorizationPolicy -- Blast radius: bounded to specific authenticated service identities. Effort: very high (requires Istio, AuthorizationPolicy design, mTLS migration).
  • Separate clusters per tier (dev/prod) -- Blast radius: bounded to the cluster, no cross-cluster lateral movement. Effort: highest (multiple clusters, cross-cluster networking complexity).

Exam Answer vs. Production Reality


Namespace isolation

What the exam expects: Kubernetes namespaces provide logical isolation. Combined with RBAC, teams cannot see or modify resources in other namespaces.

What production requires: RBAC limits API access only. Without NetworkPolicy enforced by a capable CNI, pods in one namespace can still open network connections to pods in any other namespace.


How this might come up in interviews

Asked in security-focused platform and SRE interviews: "How would you prevent lateral movement in a Kubernetes cluster?" Also in threat modeling questions for multi-tenant clusters.

Common questions:

  • By default, can a pod in namespace A reach a pod in namespace B? What controls this?
  • How do you implement default-deny network segmentation in Kubernetes without breaking DNS resolution?
  • What is the difference between namespaceSelector and podSelector in a NetworkPolicy from rule? When does AND vs OR apply?
  • Your CNI is Flannel. You apply a NetworkPolicy. Is traffic being blocked?
  • How would you migrate a production cluster from a flat network to micro-segmented NetworkPolicy without downtime?

Strong answer: Candidates who have mapped service dependencies before applying NetworkPolicy, used Hubble or similar tools to observe traffic, and understand the AND/OR selector semantics. Bonus: experience migrating a production cluster from flat to segmented networking.

Red flags: Candidates who think namespaces provide network isolation. Anyone who says "just apply a default-deny NetworkPolicy" without mentioning DNS egress exceptions or CNI requirements. Not knowing that Flannel does not enforce NetworkPolicy.

Related concepts

Explore topics that connect to this one.

  • Network Policies: Pod-to-Pod Firewalling
  • Istio mTLS Encryption
  • Zero Trust security

Suggested next

Often learned after this topic.

Kubernetes Audit Logging: Who Did What, When

