The Twelve-Factor App
The 12 principles every cloud app must follow: portability, reproducibility, and clean separation of config, code, and state.
What you'll learn
- Factor III (Config): credentials and env-specific config belong in environment variables, never in code.
- Factor VI (Stateless processes): no in-memory state; store sessions and data in backing services so you can scale horizontally.
- Factor XI (Logs): write to stdout and let the platform aggregate — never manage log files yourself.
- Factor V (Build/release/run): immutable builds with injected config at runtime enable reproducible, rollback-safe deployments.
- The test for Factor III: could you open-source the codebase right now without exposing credentials?
The startup that broke on Black Friday
It was 11:47 PM. Their biggest sale of the year had just started. Then the site went down.
The team scrambled to add servers, but they could not. Why? The app stored session data in memory on a single machine. Config was baked into the binary. Logs were written to local disk. There was no way to spin up a second instance — it would have different state, different config, and no logs anyone could read.
They had built a web app. They had not built a cloud-native app.
Adam Wiggins and the Heroku team saw this pattern across thousands of apps they hosted. In 2011, they wrote down twelve factors — twelve conditions an app must satisfy to be truly portable, scalable, and operable in the cloud.
What is the Twelve-Factor App?
A methodology for building software-as-a-service apps that are portable across environments, deployable to modern cloud platforms, scalable without architecture changes, and maintainable by a team of developers working in parallel.
The 12 factors — explained with real examples
Factor I–IV: Code, dependencies, config, and backing services
- I. Codebase — one codebase, many deploys — One repo per app. Multiple environments (staging, prod) are just deploys of the same code. BAD: different code branches per environment. GOOD: feature flags or env vars distinguish behaviors.
- II. Dependencies — explicitly declare and isolate all dependencies — Never rely on system-wide packages. BAD: `pip install` of a package that is not in requirements.txt. GOOD: requirements.txt, package.json, go.mod, Gemfile — versioned, checked in.
- III. Config — store config in the environment — Anything that varies between deploys belongs in env vars, not code. BAD: `const DB_URL = "postgres://prod-db.company.com"` in source code. GOOD: `process.env.DATABASE_URL`. This is also why you never commit .env files.
- IV. Backing services — treat them as attached resources — Databases, caches, queues — they are all "attached resources" accessed via a URL or credential in config. Swap a local MySQL for an RDS instance by changing one env var. Zero code change.
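Factors III and IV combine neatly: if the database is known to the app only as a URL in the environment, swapping backing services is a config change, not a code change. A minimal Node.js sketch (the function name and the example URLs are illustrative, not from the lesson):

```javascript
// db.js — the database is an attached resource (Factor IV): the app knows it
// only by a URL supplied via the environment (Factor III).
function parseDatabaseUrl(url) {
  const u = new URL(url);
  return {
    host: u.hostname,
    port: Number(u.port) || 5432, // default Postgres port if omitted
    user: u.username,
    password: u.password,
    database: u.pathname.slice(1), // strip the leading "/"
  };
}

// Local dev and production differ only in the env var, never in code:
const local = parseDatabaseUrl("postgres://admin:pw@localhost:5432/myapp_dev");
const prod = parseDatabaseUrl("postgres://admin:pw@mydb.rds.amazonaws.com:5432/myapp");

module.exports = { parseDatabaseUrl };
```

In a real app, `process.env.DATABASE_URL` would be passed in instead of the hard-coded strings shown here for demonstration.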
Factor V–VIII: Build, process, port, and concurrency
- V. Build, release, run — strictly separate build and run stages — Build: compile, bundle. Release: build artifact + config = immutable release. Run: execute. BAD: editing code on a running server (ssh into prod, `vim app.py`). GOOD: CI/CD pipeline creates immutable Docker images tagged by commit SHA.
- VI. Processes — execute the app as one or more stateless processes — Processes are stateless and share nothing. Any state (sessions, data) is stored in a backing service (Redis, DB). BAD: storing user sessions in memory. GOOD: JWT tokens or Redis-backed sessions.
- VII. Port binding — export services via port binding — The app is self-contained and exposes HTTP by binding to a port. No external web server required to run. BAD: app only runs inside Apache with mod_wsgi. GOOD: `app.listen(process.env.PORT)`.
- VIII. Concurrency — scale out via the process model — Scale horizontally by running more processes, not by making one process bigger. BAD: increasing JVM heap endlessly. GOOD: run 10 instances of the same process behind a load balancer.
Factor IX–XII: Disposability, dev/prod parity, logs, and admin
- IX. Disposability — maximize robustness with fast startup and graceful shutdown — Processes start fast and shut down gracefully. BAD: apps that take 5 minutes to start, lose jobs mid-flight on shutdown. GOOD: sub-second startup, SIGTERM handler that drains the queue before exiting.
- X. Dev/prod parity — keep development, staging, and production as similar as possible — BAD: dev uses SQLite, prod uses PostgreSQL (breaks when you need JSONB). GOOD: Docker Compose in dev uses the exact same Postgres version as production.
- XI. Logs — treat logs as event streams — The app writes to stdout. The platform aggregates, routes, and archives. BAD: `logger.setFile("/var/log/app.log")` — log rotation and disk management become your problem. GOOD: `console.log(JSON.stringify({level: "info", msg: ...}))` → shipped to Datadog by the platform.
- XII. Admin processes — run admin/management tasks as one-off processes — Database migrations, data backups, REPL sessions — run them as one-off commands, not baked into startup. BAD: app runs `db.migrate()` every time it starts. GOOD: `kubectl exec` a migration job before rolling out the new version.
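Factors IX and XI can be sketched together: structured JSON events go to stdout, and a SIGTERM handler drains work before exit. The field names (`ts`, `level`, `msg`) are a common convention, not something the lesson prescribes:

```javascript
// log.js — Factor XI: the app never opens a log file; it emits one JSON
// event per line on stdout and lets the platform route the stream.
function logEvent(level, msg, fields = {}) {
  const line = JSON.stringify({
    ts: new Date().toISOString(),
    level,
    msg,
    ...fields,
  });
  console.log(line); // stdout — captured by Docker/Kubernetes/Heroku
  return line; // returned so callers (and tests) can inspect it
}

logEvent("info", "checkout started", { userId: 42 });

// Factor IX: shut down gracefully — finish in-flight work, then exit.
process.on("SIGTERM", () => {
  logEvent("info", "SIGTERM received, draining");
  // ...drain the queue / close the server here before exiting...
  process.exit(0);
});
```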
The factor that kills most teams: Factor III (Config)
Of all twelve factors, III (Config) is broken most often, with the most severe consequences.
BAD: Config in code
    // app.js — this gets committed to GitHub
    const config = {
      database: "postgres://admin:password123@prod-db.company.com:5432/myapp",
      apiKey: "sk-live-abc123def456",
      stripeKey: "sk_live_xxxxxxxx"
    };
GOOD: Config in environment
    // app.js — zero secrets in source
    const config = {
      database: process.env.DATABASE_URL,
      apiKey: process.env.API_KEY,
      stripeKey: process.env.STRIPE_SECRET_KEY
    };

    # .env (never committed — listed in .gitignore)
    DATABASE_URL=postgres://admin:password@localhost:5432/myapp_dev
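A common hardening step in the spirit of Factor III (not part of the snippet above): validate required env vars at boot, so a missing value fails loudly at startup rather than mid-request. The variable names here mirror the example; adjust them to your app's actual config keys:

```javascript
// Assumed config keys for illustration.
const REQUIRED = ["DATABASE_URL", "API_KEY", "STRIPE_SECRET_KEY"];

// Fail fast: crash at boot with a clear message if config is incomplete.
function validateEnv(env = process.env) {
  const missing = REQUIRED.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  return true;
}
```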
The test: could you open-source your codebase right now without exposing credentials? If no, you are violating Factor III. Fix it before a disgruntled employee or accidental public repo does it for you.
A developer stores the production database password as a constant in `config.js` and commits it to the repo. Which factor does this violate?
Why Factor VI (Stateless processes) changes how you scale
Imagine needing to handle a traffic spike. With stateful processes, you cannot just add servers — each server has different in-memory state, and you have no way to route users back to "their" server reliably.
| Scenario | Stateful app | Twelve-Factor app |
|---|---|---|
| Traffic spike — need 10 more servers | Cannot — user sessions are in memory on server 1 | Launch 10 new instances in 30 seconds, load balancer routes anywhere |
| Server crashes mid-request | User loses their cart / session | User retries, another process handles it — no lost state (it is in Redis) |
| Deploy new version | Must maintain "sticky sessions" routing nightmare | Drain old instances, spin up new ones — stateless means no migration needed |
| Disaster recovery | Complex — need to replicate in-memory state | Trivial — start fresh processes, state is in the DB/cache |
The mental model: treat each process like a lambda function. It starts, does work using data from a backing service, returns a result, and exits. No local state. No assumptions about what ran before.
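One way to picture Factor VI in code: every request handler reads and writes session state through the backing store, never through process memory. In this sketch a `Map` stands in for Redis purely so the example runs standalone; the handler name is illustrative:

```javascript
// Stand-in for Redis: in production this would be a network call to a store
// shared by every instance behind the load balancer.
const sessionStore = new Map();

// Any process can serve any request, because the only state it touches
// lives in the (shared) backing service — not in its own memory.
function handleAddToCart(sessionId, item) {
  const session = sessionStore.get(sessionId) || { cart: [] };
  session.cart.push(item);
  sessionStore.set(sessionId, session); // in real life: redis.set(...)
  return session.cart;
}

// Two requests routed to "different instances" (same code, same store)
// still see the same cart:
handleAddToCart("sess-1", "book");
handleAddToCart("sess-1", "pen");
```

If an instance crashes between these calls, nothing is lost: the next request lands on any other instance and finds the cart in the store.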
The Black Friday postmortem: applying all 12 factors
Back to the startup that crashed. Here is what they violated and what they fixed:
Violated factors and their fixes
- Factor VI broken — Sessions in memory → migrated to Redis (Factor IV: backing service). Now they could run 20 instances.
- Factor III broken — DB URL in source code → moved to env vars. Bonus: staging no longer accidentally pointed to prod.
- Factor XI broken — Logs on local disk → stdout to CloudWatch. Now they could actually debug the outage from logs.
- Factor IX broken — Slow startup (45 seconds) → profiled and reduced to 2 seconds. Auto-scaling now actually helped.
Result
Next Black Friday: site stayed up. They scaled from 2 to 40 instances in 4 minutes. Cost: $180 in extra compute for 6 hours. Revenue saved: six figures.
How this might come up in interviews
Cloud architecture and backend interviews: interviewers use 12-Factor to assess whether you understand cloud-native design. Expect to explain why stateless processes matter for scaling, or how you handle config across environments.
Common questions:
- What is the Twelve-Factor App methodology?
- Which factor is most commonly violated, and how do you fix it?
- How does Factor VI (stateless processes) affect horizontal scaling?
- What is the difference between build, release, and run stages (Factor V)?
Key takeaways
- Factor III (Config): credentials and env-specific config belong in environment variables, never in code.
- Factor VI (Stateless processes): no in-memory state; store sessions and data in backing services so you can scale horizontally.
- Factor XI (Logs): write to stdout and let the platform aggregate — never manage log files yourself.
- Factor V (Build/release/run): immutable builds with injected config at runtime enable reproducible, rollback-safe deployments.
- The test for Factor III: could you open-source the codebase right now without exposing credentials?
Before you move on: can you answer these?
What does "backing service" mean in 12-Factor terminology?
Any service the app consumes over the network as part of its normal operation — databases, caches, message queues, SMTP servers. They are attached resources swappable via config.
Why must processes be stateless (Factor VI)?
So any process can handle any request, enabling horizontal scaling by adding identical instances without coordination or state migration.
Where should the app write its logs according to Factor XI?
To stdout as an event stream. The execution environment (Kubernetes, Heroku, AWS) captures and routes them to log aggregation systems.