What is an operating system?
How computers run apps: the layer beneath your code.
The role of an OS
An operating system manages hardware (CPU, memory, disk, network) and provides services to applications. It schedules processes, manages memory, provides the file system, and handles input and output.
Your code runs on top of the OS and uses its APIs and system calls. The OS abstracts hardware complexity, so you don't need to know how to talk directly to a hard drive or network card. This abstraction makes programming much easier.
The OS provides a consistent interface regardless of the underlying hardware. The same code can run on different machines because the OS handles the differences. This portability is one of the OS's key benefits.
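For a concrete feel for this, here is a minimal Python sketch of an ordinary file read going through the OS. It uses the low-level os wrappers so the underlying open/read/close system calls are visible; the path /etc/hostname is just an assumed Linux-style example, and on Linux you could watch the same calls with a tool like strace.

```python
# Minimal sketch: a plain file read relies on OS system calls
# (open, read, close); the program never touches the disk directly.
import os

# Ask the OS to open a file; it returns a file descriptor, a small integer
# handle the kernel uses to track the open file for this process.
fd = os.open("/etc/hostname", os.O_RDONLY)   # assumes a Linux-style path
data = os.read(fd, 4096)                     # kernel copies bytes from disk or cache
os.close(fd)                                 # release the kernel resource

print(data.decode().strip())
```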
Operating system: layered architecture
Applications
Your code, user apps, and services. They use OS APIs and system calls—they don’t talk to hardware directly.
Kernel
Core of the OS. Schedules processes, manages memory, provides the file system, and abstracts hardware.
Hardware
CPU, memory (RAM), disk, network. The kernel talks to these via drivers; applications never do directly.
Process management and scheduling
A process is a running program. The OS creates processes, schedules them to run on the CPU, and manages their lifecycle. With multiple processes running, the OS must decide which one gets CPU time; this is process scheduling.
Scheduling algorithms balance fairness, responsiveness, and efficiency. Preemptive scheduling can interrupt a process to give CPU time to another, ensuring all processes make progress. Knowing how scheduling works helps explain why your app might be slow or unresponsive.
In cloud environments, you might run multiple processes in containers or virtual machines. Understanding process management helps you configure resource limits, debug performance issues, and optimize resource usage.
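As a small, hedged illustration, the Python sketch below creates a child process and waits for it. The OS assigns the child its own PID and its own memory, and decides when each process actually runs on a CPU core.

```python
# Minimal sketch: creating and waiting on a child process.
# Each process gets its own PID and its own memory from the OS.
import os
import multiprocessing

def work():
    print(f"child  PID: {os.getpid()}")

if __name__ == "__main__":
    print(f"parent PID: {os.getpid()}")
    child = multiprocessing.Process(target=work)
    child.start()   # the OS creates a new process and schedules it
    child.join()    # the parent blocks until the child finishes
```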
Process
Separate program with its own memory
- Isolated memory space
- Heavy to create
- IPC (inter-process communication) needed to communicate
Thread
Lightweight execution within a process
- Shares process memory
- Lightweight to create
- Fast communication
Context Switching
The OS switches between processes and threads to give each CPU time. A high context-switch rate adds overhead and can hurt performance. Understanding this helps you optimize concurrent code.
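The sketch below shows the memory difference described in the cards above: a thread can update a variable in its process's memory directly, while a separate process only changes its own copy, so the parent would typically need IPC (a pipe or queue, for example) to see the result.

```python
# Minimal sketch: threads share their process's memory, separate processes do not.
import threading
import multiprocessing

counter = 0

def bump():
    global counter
    counter += 1

if __name__ == "__main__":
    t = threading.Thread(target=bump)
    t.start()
    t.join()
    print("after thread :", counter)   # 1: the thread updated shared memory

    p = multiprocessing.Process(target=bump)
    p.start()
    p.join()
    print("after process:", counter)   # still 1: the child changed only its own copy
```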
Memory management
Memory (RAM) is fast but limited. The OS manages memory allocation, ensuring each process gets the memory it needs without interfering with others. When RAM fills up, the OS moves less-used data to swap space on disk, which is much slower.
Memory management includes: allocation (giving memory to processes), deallocation (freeing memory when processes finish), and protection (preventing processes from accessing each other's memory).
Understanding memory helps you debug memory leaks, optimize applications, and configure cloud resources. Cloud providers let you choose instance sizes with different amounts of RAM, and knowing your application's memory footprint helps you pick the right size.
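One quick way to see what the OS is managing, on Linux only, is to read /proc/meminfo, which the kernel exposes as plain text. The sketch below is a minimal example; the field names (MemTotal, MemAvailable, SwapTotal, SwapFree) are standard on Linux but do not exist on other operating systems.

```python
# Minimal sketch (Linux-specific): read /proc/meminfo to see how much RAM
# and swap the OS is managing. Values are reported in kB.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = value.strip()
    return info

if __name__ == "__main__":
    info = meminfo()
    for key in ("MemTotal", "MemAvailable", "SwapTotal", "SwapFree"):
        print(f"{key:>12}: {info.get(key, 'n/a')}")
```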
RAM (Physical Memory)
Fast but limited
- Direct CPU access
- Very fast (nanoseconds)
- Expensive per GB
- Lost on power loss
Swap (Virtual Memory)
Disk space used as RAM
- Used when RAM is full
- Much slower (milliseconds)
- Can cause performance issues
- Persistent on disk
File systems and storage
Hierarchy & file systems
File systems give storage a hierarchy: directories and files. They handle read, write, and metadata. ext4 (Linux), NTFS (Windows), and ZFS differ in journaling, snapshots, and throughput.
One interface for all storage
The OS hides where data lives. Local disk, NFS, or even S3-style object stores (when mounted): applications use one file system interface, so the same code works across local and cloud.
In the cloud
Cloud storage comes in three forms: block (raw volumes for VMs, DBs), object (key-value blobs for backups, media), file (shared NFS/SMB). Pick by access pattern and latency.
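The sketch below shows that uniform interface in practice: the same pathlib code lists entries and reads metadata whether the directory sits on a local disk, an NFS mount, or a mounted cloud volume. The path /var/log is just an assumed example; any readable directory works.

```python
# Minimal sketch: one file-system interface, regardless of where the data lives.
from pathlib import Path

root = Path("/var/log")            # assumption: any readable directory works
for entry in sorted(root.iterdir())[:5]:
    st = entry.stat()              # metadata comes from the file system via the OS
    kind = "dir " if entry.is_dir() else "file"
    print(f"{kind}  {st.st_size:>10} bytes  {entry.name}")
```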
File systems and storage
File systems handle reading, writing, and organizing data on disk.
ext4: Linux default. Journaling, large files, good performance.
NTFS: Windows. Permissions, compression, large volumes.
ZFS: Advanced. Snapshots, checksums, pooling.
Local disk
Same interface
Network storage
Same interface
Cloud storage
Same interface
In the cloud: choose the right storage type
Block storage
Raw volumes (EBS, disks). OS formats with a file system. Best for databases, VMs.
Object storage
S3-style: key + data + metadata. No hierarchy. Best for backups, media, static assets.
File storage
NFS, SMB. Shared files and folders. Best for shared drives, home dirs.
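To make the difference tangible, here is a minimal, hedged sketch contrasting file storage (the OS file-system interface) with S3-style object storage (an HTTP API with keys instead of paths). It assumes boto3 is installed, AWS credentials are configured, and a bucket named "example-bucket" exists; the bucket and file names are hypothetical.

```python
# Minimal sketch: file storage vs object storage from an application's view.
import boto3

# File storage (local disk, NFS, etc.): the OS file-system interface.
with open("/tmp/report.txt", "w") as f:
    f.write("hello from the file system\n")

# Object storage (S3-style): keys plus data plus metadata, no real hierarchy.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket", Key="reports/report.txt",
              Body=b"hello from object storage\n")
obj = s3.get_object(Bucket="example-bucket", Key="reports/report.txt")
print(obj["Body"].read().decode())
```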
I/O and device management
What is I/O?
I/O is any data in or out: disk, network, keyboard, display. The OS drivers and system calls give apps a single, stable interface so they don’t talk to hardware directly.
Speed & how the OS helps
I/O is orders of magnitude slower than CPU (milliseconds vs nanoseconds). The OS uses buffering, caching, and async I/O so the CPU isn’t blocked waiting. Design for I/O when you care about throughput or latency.
In the cloud
In the cloud, I/O is often the bottleneck: network RTT, disk type (SSD vs HDD), and read/write patterns. Tune instance type, storage tier, and app I/O patterns together.
I/O and device management
I/O = input/output — the OS gives apps one consistent interface
Storage
Read / write disk
Network
Send / receive data
Input
Keyboard, mouse
Output
Display, sound
I/O is much slower than CPU — that's why the OS optimizes
CPU: nanoseconds. Millions of operations per second.
I/O: milliseconds. Blocking on I/O wastes CPU.
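A rough, machine-dependent sketch of that gap: copy 1 MB within RAM, then write the same 1 MB to disk and force it out of the OS cache with fsync. Exact numbers vary widely by hardware (the path /tmp/io_test.bin is just an assumed writable location), but the disk write is typically orders of magnitude slower.

```python
# Minimal sketch: RAM copy vs synchronous disk write for the same 1 MB payload.
import os
import time

payload = b"x" * 1_000_000

t0 = time.perf_counter()
buf = bytearray()
buf += payload                               # copy within RAM
ram_ms = (time.perf_counter() - t0) * 1000

t0 = time.perf_counter()
with open("/tmp/io_test.bin", "wb") as f:    # assumes a writable /tmp
    f.write(payload)
    f.flush()
    os.fsync(f.fileno())                     # bypass write buffering: hit the disk
disk_ms = (time.perf_counter() - t0) * 1000

print(f"RAM copy  : {ram_ms:.3f} ms")
print(f"Disk write: {disk_ms:.3f} ms")
```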
How the OS improves I/O performance
Buffering
Hold data in memory before writing or after reading. Smooths out bursts.
Caching
Keep frequently used data in fast storage. Reduces repeated slow I/O.
Async I/O
Don't block the CPU waiting. Start I/O, do other work, handle result later.
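The async idea in the last card can be sketched in a few lines of Python with asyncio. Here asyncio.sleep stands in for a slow network or disk operation; three one-second waits overlap instead of running back to back.

```python
# Minimal sketch: async I/O overlaps waiting instead of blocking on each call.
import asyncio
import time

async def fake_io(name, delay):
    await asyncio.sleep(delay)    # while one task waits, others can run
    return f"{name} done after {delay}s"

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(
        fake_io("db query", 1.0),
        fake_io("api call", 1.0),
        fake_io("file read", 1.0),
    )
    elapsed = time.perf_counter() - start
    print(results)
    print(f"three 1s waits finished in about {elapsed:.1f}s, not 3s")

asyncio.run(main())
```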
In the cloud, I/O can be the bottleneck
Network latency, storage speed (SSD vs HDD), and I/O patterns all affect performance. Understanding I/O helps you pick the right instance type and storage tier, and write efficient apps.
Concepts that matter for developers
Processes, threads, memory (RAM vs swap), file systems, and permissions are concepts every developer benefits from understanding. They explain why apps behave the way they do and how they interact with the machine, whether you deploy to the cloud or on-prem.
Understanding the OS helps you: debug performance issues (is it CPU, memory, or I/O?), optimize applications (knowing OS limits helps you work within them), and configure infrastructure (choosing instance sizes, configuring storage, etc.).
Even with cloud abstractions, OS concepts still matter. Containers share the host OS, virtual machines run OS instances, and managed services still use OS concepts under the hood. Understanding the OS makes you a better developer and operator.
Interactive: Memory, scheduling & context switch
See how virtual memory, process states, and context switching work—in small, animated steps.