DEV Community

alamriku


Comprehensive Software Engineering Technical Interview Guide

1. Explain the difference between inheritance and composition

Inheritance is when a child class takes on the properties and methods of a parent class. Think of it as an "is-a" relationship.

Composition is when a class contains objects of other classes. Think of it as a "has-a" relationship.

Practical Example:
Imagine we're building a music streaming app:

With inheritance:

```javascript
// Parent class
class MediaItem {
  constructor(title, duration) {
    this.title = title;
    this.duration = duration;
  }

  play() {
    console.log(`Playing: ${this.title}`);
  }
}

// Child class uses inheritance - "a Song IS A MediaItem"
class Song extends MediaItem {
  constructor(title, duration, artist) {
    super(title, duration); // Call parent constructor
    this.artist = artist;   // Add specialized property
  }

  // Add specialized method
  showArtist() {
    console.log(`Artist: ${this.artist}`);
  }
}

// Usage
const mySong = new Song("Happy Birthday", "3:45", "Various Artists");
mySong.play();       // Inherited from MediaItem
mySong.showArtist(); // Defined in Song
```

With composition:

```javascript
// Independent classes
class Song {
  constructor(title, artist, duration) {
    this.title = title;
    this.artist = artist;
    this.duration = duration;
  }

  play() {
    console.log(`Playing song: ${this.title}`);
  }
}

// Playlist class CONTAINS Songs (doesn't inherit from Song)
class Playlist {
  constructor(name) {
    this.name = name;
    this.songs = []; // Composition: a playlist HAS songs
  }

  addSong(song) {
    this.songs.push(song);
  }

  play() {
    console.log(`Playing playlist: ${this.name}`);
    this.songs.forEach(song => song.play());
  }
}

// Usage
const song1 = new Song("Happy Birthday", "Various Artists", "3:45");
const song2 = new Song("Yesterday", "The Beatles", "2:30");

const myPlaylist = new Playlist("My Favorites");
myPlaylist.addSong(song1);
myPlaylist.addSong(song2);
myPlaylist.play();
```

Why this matters: Composition is often preferred because it's more flexible. A song isn't a type of playlist, so inheritance wouldn't make sense here. With composition, you can easily add or remove songs from a playlist.

2. Why is data escape bad in OOP?

Data escape happens when an object exposes its internal data directly to the outside world, allowing external code to modify it directly. This breaks encapsulation (the principle that an object should control its own state).

Practical Example:
Let's consider a bank account class:

```javascript
// BAD: Internal data escapes
class BankAccount {
  constructor(owner) {
    this.owner = owner;
    this.transactions = []; // Internal data
  }

  deposit(amount) {
    this.transactions.push({ type: 'deposit', amount: amount, date: new Date() });
  }

  getTransactions() {
    return this.transactions; // Data escape! Returns direct reference
  }

  getBalance() {
    return this.transactions.reduce((total, t) =>
      t.type === 'deposit' ? total + t.amount : total - t.amount, 0);
  }
}

// Outside code can now directly modify the transactions
const account = new BankAccount("John");
account.deposit(100);
console.log(account.getBalance()); // 100

// Data escape allows dangerous manipulation:
const transactions = account.getTransactions();
transactions.push({ type: 'deposit', amount: 1000000, date: new Date() });
console.log(account.getBalance()); // 1000100 - Oops!
```

Fixed version:

```javascript
// GOOD: No data escape
class BankAccount {
  constructor(owner) {
    this.owner = owner;
    this.transactions = []; // Internal data
  }

  deposit(amount) {
    this.transactions.push({ type: 'deposit', amount: amount, date: new Date() });
  }

  getTransactions() {
    // Return a copy, not the original reference
    return [...this.transactions];
  }

  getBalance() {
    return this.transactions.reduce((total, t) =>
      t.type === 'deposit' ? total + t.amount : total - t.amount, 0);
  }
}

// Outside code cannot modify the internal transactions
const account = new BankAccount("John");
account.deposit(100);
console.log(account.getBalance()); // 100

// Trying to hack won't work:
const transactions = account.getTransactions();
transactions.push({ type: 'deposit', amount: 1000000, date: new Date() });
console.log(account.getBalance()); // Still 100

Why this matters: Data escape creates unpredictable behavior, security risks, and makes software harder to maintain. It's like giving someone the keys to your house when you just meant to show them a photo of it.
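One caveat worth knowing: a spread copy like `[...this.transactions]` is shallow — it copies the array, but the transaction objects inside are still shared, so a caller could still mutate an existing transaction's amount. A deep copy closes that gap too; here is a minimal sketch using `structuredClone` (available in modern Node and browsers):

```javascript
class BankAccount {
  constructor(owner) {
    this.owner = owner;
    this.transactions = [];
  }

  deposit(amount) {
    this.transactions.push({ type: 'deposit', amount, date: new Date() });
  }

  getTransactions() {
    // Deep copy: neither the array nor the objects inside escape
    return structuredClone(this.transactions);
  }

  getBalance() {
    return this.transactions.reduce(
      (total, t) => (t.type === 'deposit' ? total + t.amount : total - t.amount),
      0
    );
  }
}

const account = new BankAccount('John');
account.deposit(100);

// Even mutating a transaction inside the returned copy has no effect
const copy = account.getTransactions();
copy[0].amount = 1000000;
console.log(account.getBalance()); // 100
```

`Object.freeze` is another common option, but it only freezes in place rather than copying; deep cloning keeps the internal state fully private.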

3. Name tolerable cohesion types

Cohesion refers to how closely related the functions within a module or class are. Higher cohesion is better. From best to acceptable:

  1. Functional cohesion (best): All elements work together to perform a single, well-defined task.

```javascript
// Example of Functional Cohesion
class PasswordValidator {
  validateLength(password) {
    return password.length >= 8;
  }

  validateContainsNumber(password) {
    return /\d/.test(password);
  }

  validateContainsSymbol(password) {
    return /[!@#$%^&*]/.test(password);
  }

  // Main function that uses all the others
  validatePassword(password) {
    return this.validateLength(password) &&
           this.validateContainsNumber(password) &&
           this.validateContainsSymbol(password);
  }
}
```
  2. Sequential cohesion: The output from one element is the input to another.

```javascript
// Example of Sequential Cohesion
class OrderProcessor {
  validateOrder(order) {
    // Validation logic
    return { ...order, isValid: true };
  }

  calculateTotals(validatedOrder) {
    // Calculate subtotal, tax, shipping, etc.
    return { ...validatedOrder, subtotal: 100, tax: 8, total: 108 };
  }

  processPayment(orderWithTotals) {
    // Process payment
    return { ...orderWithTotals, paymentStatus: 'paid' };
  }

  // Uses the functions in sequence
  processOrder(order) {
    const validatedOrder = this.validateOrder(order);
    const orderWithTotals = this.calculateTotals(validatedOrder);
    return this.processPayment(orderWithTotals);
  }
}
```
  3. Communicational cohesion: All elements operate on the same input data.

```javascript
// Example of Communicational Cohesion
class CustomerAnalyzer {
  constructor(customer) {
    this.customer = customer;
  }

  calculateTotalSpent() {
    return this.customer.orders.reduce((sum, order) => sum + order.total, 0);
  }

  findMostPurchasedItem() {
    // Logic to find most purchased item
    return "Product X";
  }

  calculateAverageOrderValue() {
    return this.calculateTotalSpent() / this.customer.orders.length;
  }
}
```
  4. Procedural cohesion: Elements execute in a specific order but might not operate on the same data.

```javascript
// Example of Procedural Cohesion
class ApplicationStarter {
  checkSystemRequirements() {
    console.log("Checking system requirements...");
    return true;
  }

  initializeDatabase() {
    console.log("Initializing database connection...");
    return true;
  }

  loadConfiguration() {
    console.log("Loading application configuration...");
    return { appName: "MyApp", version: "1.0.0" };
  }

  startApplication() {
    const requirementsMet = this.checkSystemRequirements();
    if (!requirementsMet) return false;

    const dbInitialized = this.initializeDatabase();
    if (!dbInitialized) return false;

    const config = this.loadConfiguration();
    console.log(`Starting ${config.appName} v${config.version}...`);
    return true;
  }
}
```

Why this matters: Higher cohesion makes code more maintainable, reusable, and easier to understand. You want functions in a class to be closely related to deliver a specific purpose, rather than being a random collection of unrelated functionality.

4. What is serialization?

Serialization is the process of converting a complex data structure or object into a format that can be stored or transmitted and later reconstructed. It's like freezing an object in time so it can be thawed out later.

Practical Example:
Let's say you have a game with a player character:

```javascript
// Our game character class
class GameCharacter {
  constructor(name, level, health, inventory) {
    this.name = name;
    this.level = level;
    this.health = health;
    this.inventory = inventory;
    this.position = { x: 0, y: 0 };
  }

  moveRight() {
    this.position.x += 1;
  }

  attack() {
    console.log(`${this.name} attacks!`);
  }
}

// Create a player character
const player = new GameCharacter("Hero", 5, 100, ["sword", "shield", "potion"]);
player.moveRight();
player.moveRight();

// Now we want to save the game...
// Serialization: Convert to JSON string
const serialized = JSON.stringify(player);
console.log(serialized);
// Result: {"name":"Hero","level":5,"health":100,"inventory":["sword","shield","potion"],"position":{"x":2,"y":0}}

// Save to localStorage (or could be sent to a server)
localStorage.setItem('savedGame', serialized);

// Later, when the player comes back...
// Get the saved data
const savedData = localStorage.getItem('savedGame');

// Deserialization: Convert from JSON string back to object
const playerData = JSON.parse(savedData);

// Recreate the full object with methods
const loadedPlayer = new GameCharacter(
  playerData.name,
  playerData.level,
  playerData.health,
  playerData.inventory
);
loadedPlayer.position = playerData.position;
```

Important notes:

  • Basic JavaScript serialization using JSON.stringify() only preserves data, not methods or functions
  • More advanced serialization frameworks exist that can handle complex objects better
  • Different programming languages have different serialization mechanisms

Why this matters: Serialization lets you:

  • Save application state (like game progress)
  • Send data over a network (APIs return serialized data, typically as JSON)
  • Store objects in databases
  • Cache data

5. Difference between Ethernet protocol & IP

Ethernet protocol and IP (Internet Protocol) are both networking protocols but they operate at different layers of the network stack and serve different purposes.

Ethernet is a Layer 2 (Data Link) protocol that handles communication within a local network using MAC addresses (physical addresses).

IP is a Layer 3 (Network) protocol that handles routing between different networks using logical IP addresses.

Practical Example:
Let's say you're sending an email from your computer to someone across the country:

  1. You write an email and click send in your email application
  2. Your computer first wraps the email data in an IP packet:

    • It uses the IP protocol for end-to-end addressing across networks
    • The packet has a source IP address (your computer) and a destination IP address (the recipient's mail server)
    • The IP protocol handles routing the packet through multiple networks to reach its destination
  3. To actually reach your router, the IP packet is wrapped inside an Ethernet frame:

    • Ethernet handles this local, hop-to-hop communication
    • The frame has a source MAC address (your computer) and a destination MAC address (your router)
    • This gets your data to the router, but no further — at each hop, the router strips the Ethernet frame, reads the IP packet, and re-wraps it in a new frame for the next link
  4. Once the data arrives at the recipient's network:

    • Their router uses Ethernet protocol again to deliver it to the specific device
    • The data gets unwrapped from its IP packet, then from its Ethernet frame
    • The email is displayed in the recipient's email application

Visual representation:

```
┌─────────────────────┐
│     Email Data      │
├─────────────────────┤
│      IP Header      │  ← Helps get from network to network (IP addresses)
├─────────────────────┤
│   Ethernet Header   │  ← Helps get from device to device (MAC addresses)
└─────────────────────┘
```

Key distinctions:

  • Addressing: Ethernet uses MAC addresses (physical/hardware); IP uses IP addresses (logical)
  • Scope: Ethernet works within a local network; IP works across multiple networks
  • Purpose: Ethernet connects devices; IP connects networks
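The layering above can be mimicked with plain JavaScript objects. This is purely illustrative — real frames and packets are binary structures with many more fields, and the addresses below are made up:

```javascript
// Purely illustrative: model each layer as a wrapper object
const emailData = 'Hello from across the country!';

// Layer 3: IP packet — logical addressing, unchanged end to end
const ipPacket = {
  srcIP: '198.51.100.10',
  dstIP: '203.0.113.25',
  payload: emailData,
};

// Layer 2: Ethernet frame — physical addressing, rewritten at every hop
const ethernetFrame = {
  srcMAC: 'aa:bb:cc:dd:ee:01', // your computer
  dstMAC: 'aa:bb:cc:dd:ee:02', // your router (the next hop only!)
  payload: ipPacket,
};

// At each router: strip the old frame, keep the IP packet, re-wrap in a new frame
function nextHop(frame, newSrcMAC, newDstMAC) {
  return { srcMAC: newSrcMAC, dstMAC: newDstMAC, payload: frame.payload };
}

const hop2 = nextHop(ethernetFrame, 'aa:bb:cc:dd:ee:02', 'aa:bb:cc:dd:ee:03');
console.log(hop2.payload.dstIP); // '203.0.113.25' — IP addresses survive every hop
```

Notice that the MAC addresses change at each hop while the IP addresses never do — that is the key distinction in miniature.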

Why this matters: Understanding these different layers helps when troubleshooting network issues. If devices on the same network can't communicate, it might be an Ethernet issue. If you can reach local devices but not the internet, it's likely an IP issue.

6. How is DNS resolved?

DNS (Domain Name System) is like the internet's phone book, translating human-readable domain names (like google.com) to IP addresses (like 142.250.190.78) that computers use to identify each other.

Practical Example:
Let's walk through what happens when you type "www.example.com" in your browser:

  1. Check browser cache: Your browser first checks if it already knows the IP address for example.com from a recent visit

  2. Check OS cache: If not found, your browser asks your operating system if it knows

  3. Check router cache: If your OS doesn't know, it asks your router

  4. Ask your ISP's DNS resolver: If the router doesn't know, it asks your Internet Service Provider's DNS server

  5. Recursive resolution: If your ISP's DNS server doesn't know, it starts a search:

a. It asks a Root DNS server: "Who knows about .com domains?"

```
Query: "Who knows about .com?"
Response: "Ask the .com name servers at <IP addresses>"
```

b. It asks the .com DNS server: "Who knows about example.com?"

```
Query: "Who knows about example.com?"
Response: "Ask the example.com name servers at <IP addresses>"
```

c. It asks the example.com DNS server: "What's the IP for www.example.com?"

```
Query: "What's the IP for www.example.com?"
Response: "www.example.com is at 93.184.216.34"
```
  6. Return the answer: The IP address is sent back through the chain to your browser

  7. Caching: Each server in the chain typically caches this information for faster access next time

  8. Browser connects: Your browser connects to the IP address 93.184.216.34 to load the website
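Each cache in this chain behaves like a simple map of answers with a time-to-live (TTL). A minimal sketch of that idea — illustrative only, real resolvers are far more involved:

```javascript
// A toy DNS cache: stores answers with a time-to-live (TTL)
class DnsCache {
  constructor() {
    this.entries = new Map();
  }

  set(domain, ip, ttlSeconds) {
    this.entries.set(domain, { ip, expiresAt: Date.now() + ttlSeconds * 1000 });
  }

  get(domain) {
    const entry = this.entries.get(domain);
    if (!entry) return null;             // cache miss: ask the next server up
    if (Date.now() > entry.expiresAt) {  // expired: must re-resolve
      this.entries.delete(domain);
      return null;
    }
    return entry.ip;                     // cache hit: answer immediately
  }
}

const cache = new DnsCache();
cache.set('www.example.com', '93.184.216.34', 300); // cache for 5 minutes
console.log(cache.get('www.example.com')); // '93.184.216.34' (hit)
console.log(cache.get('unknown.example')); // null (miss — query upstream)
```

This is why a second visit to a site resolves almost instantly: the answer never has to leave your machine.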

Visual representation:

```
Browser → OS → Router → ISP DNS → Root DNS → .com DNS → example.com DNS
   ↑       ↑      ↑         ↑          │          │            │
   └───────┴──────┴─────────┴──────────┴──────────┴────────────┘
                Answer flows back through the chain
```

Why this matters: DNS is crucial for the user-friendly internet we know today. Without it, you'd need to remember IP addresses for every website you want to visit. The distributed nature of DNS also helps the internet remain robust and scalable.

7. Explain A record & CNAME

A record and CNAME are two different types of DNS records that serve different purposes:

A record (Address record): Directly maps a domain name to an IPv4 address.

CNAME (Canonical Name record): Maps a domain name to another domain name, creating an alias.

Practical Example:
Let's say you have a company website with these requirements:

  • Main site on your own server
  • Blog hosted on a third-party blogging platform
  • Multiple subdomains pointing to the main site

You might set up these DNS records:

```
# A record for main website (direct IP mapping)
example.com → 203.0.113.10

# CNAME for "www" (alias to main domain)
www.example.com → example.com

# CNAME for blog (alias to third-party platform)
blog.example.com → exampleblog.blogprovider.com

# A records for specific services with their own IPs
mail.example.com → 203.0.113.20
shop.example.com → a different IP address
```

When to use each:

Use A records when:

  • You need to point a domain directly to an IP address
  • You're setting up the "root" domain (like example.com)
  • You need to have the maximum control over DNS settings

Use CNAME records when:

  • You're pointing to a domain name that might change its IP
  • You're using a third-party service (like CDN, hosting provider)
  • You want multiple domains to point to the same place and automatically follow if the destination changes

Visual representation:

```
A record:      domain → IP address
               example.com → 203.0.113.10

CNAME record:  domain → another domain
               www.example.com → example.com
```

Limitations:

  • You can't use a CNAME record on the "apex" or "root" domain (example.com)
  • A name that has a CNAME record can't have any other record types at that same name
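For reference, the same setup in BIND zone-file syntax would look roughly like this (a config sketch using the illustrative names and IPs from the example above):

```
; A record for the apex/root domain
example.com.        IN  A      203.0.113.10

; CNAME aliases
www.example.com.    IN  CNAME  example.com.
blog.example.com.   IN  CNAME  exampleblog.blogprovider.com.

; A records for services with their own IPs
mail.example.com.   IN  A      203.0.113.20
```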

Why this matters: Using the right type of DNS record makes your website more maintainable. CNAMEs are particularly helpful when working with cloud services where IP addresses might change.

8. Why are both symmetric and asymmetric encryption needed in TLS?

TLS (Transport Layer Security) uses both symmetric and asymmetric encryption for secure communication on the internet:

Symmetric encryption: Uses the same key for both encryption and decryption. Fast but has a key exchange problem.

Asymmetric encryption: Uses a key pair (public and private keys). Secure for key exchange but slower.

Practical Example:
Let's walk through how TLS works when you connect to a secure website:

  1. Initial Handshake (using asymmetric encryption):

    • Your browser requests a secure connection to a website (e.g., your bank)
    • The bank's server sends its certificate containing its public key
    • Your browser verifies the certificate is valid and from a trusted authority
    • Your browser generates a random symmetric key
    • Your browser encrypts this symmetric key using the server's public key
    • Only the bank's server can decrypt this message using its private key
  2. Ongoing Communication (using symmetric encryption):

    • Now both your browser and the server have the same symmetric key
    • All further communication is encrypted and decrypted using this shared symmetric key
    • This symmetric encryption is much faster for the bulk of data transfer

Why use both?:

  • Asymmetric encryption solves the key exchange problem securely but is computationally expensive
  • Symmetric encryption is much faster for large amounts of data but requires both parties to have the same key

Analogy: Think of it like sending a lockbox to someone:

  1. They send you an open padlock (public key) but keep the key (private key)
  2. You put your message (symmetric key) in a box and lock it with their padlock
  3. Only they can unlock it with their private key
  4. Now you both have the same symmetric key for faster, secure communication

Visual representation:

```
Step 1: Key Exchange (Asymmetric)
Browser → Encrypts symmetric key with server's public key → Server
                                                              ↓
                                                  Decrypts with private key

Step 2: Data Transfer (Symmetric)
Browser ←→ Encrypt/decrypt all traffic with shared symmetric key ←→ Server
```

Why this matters: This hybrid approach gives us both security and performance. If we used only asymmetric encryption, secure websites would be too slow; if we used only symmetric encryption, we'd have no secure way to exchange the key initially.

9. How do you control cache in a response?

Caching is the process of storing copies of files or data in a temporary storage so they can be accessed more quickly. In web development, you can control how browsers and proxies cache your content using HTTP headers.

Practical Example:
Let's say you have a news website with:

  • Article content that changes rarely
  • User profile data that changes frequently
  • Stock prices that change constantly

Here's how you might implement caching for each:

```javascript
// Node.js/Express example
const express = require('express');
const app = express();

// Article that rarely changes - cache for 1 day
app.get('/articles/:id', (req, res) => {
  // Find the article
  const article = getArticleById(req.params.id);

  // Set cache headers - public means any cache can store it
  res.set('Cache-Control', 'public, max-age=86400'); // 86400 seconds = 1 day

  // Send the response
  res.json(article);
});

// User profile that changes occasionally - cache for 5 minutes
app.get('/users/:id/profile', (req, res) => {
  const profile = getUserProfile(req.params.id);

  // private means only the browser should cache it
  res.set('Cache-Control', 'private, max-age=300'); // 300 seconds = 5 minutes

  res.json(profile);
});

// Stock prices that change constantly - no caching
app.get('/stocks/current', (req, res) => {
  const prices = getCurrentStockPrices();

  // no-store means don't cache it at all
  res.set('Cache-Control', 'no-store');

  res.json(prices);
});
```

Main Cache-Control directives:

  • public: Any cache can store the response
  • private: Only browser caches, not intermediaries
  • max-age=seconds: How long to cache in seconds
  • no-cache: Store it but always validate before using
  • no-store: Don't store at all, always fetch fresh
  • must-revalidate: Must check if still valid when expired

Other cache-related headers:

```javascript
// ETag for conditional requests - server generates a unique identifier for the content version
res.set('ETag', '"123456789"');

// Expires header (older, less flexible than Cache-Control)
const inOneHour = new Date(Date.now() + 3600000).toUTCString();
res.set('Expires', inOneHour);
```

Testing cache settings:
In Chrome DevTools (Network tab), you can see how caching affects requests:

  • (from disk cache) or (from memory cache) means cache was used
  • Status 304 (Not Modified) means browser checked with server but content hasn't changed

Why this matters: Proper cache control can dramatically improve website performance and reduce server load, but incorrect caching can cause users to see outdated information. It's a critical part of web development.

10. Why is an HTTP request necessary for WebSocket?

WebSockets provide a persistent connection between client and server for real-time communication. However, the connection always starts with a regular HTTP request, which then gets "upgraded" to a WebSocket connection.

Practical Example:
Let's say you're building a chat application that needs real-time updates:

```javascript
// Client-side JavaScript
function connectToChat() {
  // This looks like WebSocket, but it actually starts with HTTP
  const socket = new WebSocket('wss://chat.example.com/socket');

  socket.onopen = () => {
    console.log('WebSocket connection established');
    socket.send(JSON.stringify({ type: 'join', room: 'general' }));
  };

  socket.onmessage = (event) => {
    const message = JSON.parse(event.data);
    displayMessage(message);
  };
}
```

Behind the scenes:

  1. Initial HTTP Request (from browser):
```
GET /socket HTTP/1.1
Host: chat.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13
```
  2. HTTP Response (from server):

```
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

  3. WebSocket Communication: After this HTTP handshake, the connection remains open and switches to the WebSocket protocol. Both sides can now send messages at any time.

Reasons for using HTTP to establish WebSockets:

  1. Compatibility: HTTP is universally supported, making initial connection easier

  2. Infrastructure: Reuses existing web infrastructure (ports, proxies, etc.)

  3. Authentication: Can use HTTP authentication mechanisms for the initial connection

  4. Fallback: Makes it easier to detect WebSocket support and fall back if needed

  5. Security: Firewalls and proxies already understand and can filter HTTP traffic

Simple Server Example (Node.js with Express and ws):

```javascript
const express = require('express');
const http = require('http');
const WebSocket = require('ws');

// Create Express app and HTTP server
const app = express();
const server = http.createServer(app);

// Create WebSocket server
const wss = new WebSocket.Server({ server });

// Handle WebSocket connections
wss.on('connection', (ws) => {
  console.log('Client connected');

  ws.on('message', (message) => {
    console.log('Received:', message);

    // Echo back to the sender
    ws.send(`You said: ${message}`);

    // Broadcast to all other clients
    wss.clients.forEach((client) => {
      if (client !== ws && client.readyState === WebSocket.OPEN) {
        client.send(`Someone said: ${message}`);
      }
    });
  });
});

// Start server
server.listen(3000, () => {
  console.log('Server listening on port 3000');
});
```

Why this matters: Understanding the HTTP-to-WebSocket upgrade process helps when:

  • Debugging connection issues
  • Configuring servers and proxies correctly
  • Implementing fallback mechanisms for browsers that don't support WebSockets
11. OSI Model vs TCP/IP Protocol

The OSI (Open Systems Interconnection) model and TCP/IP (Transmission Control Protocol/Internet Protocol) are both conceptual frameworks that help us understand network communications.

OSI Model (Theoretical)

The OSI model has 7 layers, each with specific responsibilities:

Physical Layer: Deals with the physical connection between devices (cables, switches, etc.)

Example: Ethernet cables, fiber optics, wireless signals

Data Link Layer: Handles node-to-node data transfer and error detection

Example: Ethernet protocols, MAC addresses

Network Layer: Manages routing of data packets across networks

Example: IP (Internet Protocol), routers

Transport Layer: Ensures complete data transfer and handles flow control

Example: TCP, UDP protocols

Session Layer: Establishes, maintains, and terminates connections

Example: NetBIOS, RPC

Presentation Layer: Translates data between the application layer and the network format

Example: Encryption/decryption, data compression, format conversion

Application Layer: Interfaces directly with end-user applications

Example: HTTP, FTP, SMTP (email protocols)

TCP/IP Model (Applied)
The TCP/IP model is the practical implementation that powers the modern internet, with 4 layers that roughly map to the OSI model:

Network Access/Link Layer: Combines OSI's physical and data link layers

Example: Ethernet, Wi-Fi

Internet Layer: Equivalent to OSI's network layer

Example: IP addressing, routing

Transport Layer: Same as OSI's transport layer

Example: TCP ensures reliable delivery, UDP provides faster but less reliable delivery

Application Layer: Combines OSI's session, presentation, and application layers

Example: HTTP, FTP, DNS

Practical Example
When you visit a website:

Your browser (Application layer) creates an HTTP request
TCP (Transport layer) breaks the request into packets and ensures reliable delivery
IP (Internet layer) adds addressing information to route the packets
Ethernet/Wi-Fi (Network Access layer) converts the packets to physical signals
The server receives and processes the request through the same layers in reverse

While OSI is valuable as a teaching model, TCP/IP is what's actually implemented in real-world networking.

12. Parallelism vs Concurrency

Parallelism

Parallelism is about doing multiple things simultaneously. It requires multiple processors or cores.

Example: A restaurant with multiple chefs working at different cooking stations. Each chef is actively cooking a different dish at the exact same time.

```javascript
// JavaScript example of parallelism using Web Workers
// main.js
const workers = [];
const results = [];
let completedWorkers = 0;

// Create 4 workers for parallel processing
for (let i = 0; i < 4; i++) {
  const worker = new Worker('worker.js');
  workers.push(worker);

  // Handle messages from worker
  worker.onmessage = function (e) {
    results.push(e.data);
    completedWorkers++;

    if (completedWorkers === workers.length) {
      // All workers completed, sum the results
      const sum = results.reduce((total, num) => total + num, 0);
      console.log('Total sum:', sum);
    }
  };

  // Assign each worker a subset of the numbers 1..100
  const start = i * 25 + 1;
  worker.postMessage({ start, count: 25 });
}

// worker.js
onmessage = function (e) {
  const { start, count } = e.data;
  let result = 0;

  for (let i = 0; i < count; i++) {
    result += performExpensiveCalculation(start + i);
  }

  postMessage(result);
};

function performExpensiveCalculation(n) {
  // Simulate an expensive calculation, then return the input
  let waste = 0;
  for (let i = 0; i < 1000000; i++) {
    waste += Math.sin(n * i);
  }
  return n;
}
```
Concurrency

Concurrency is about dealing with multiple things at once, but not necessarily simultaneously. It's about managing and interleaving multiple tasks, often on a single processor.

Example: A single chef rapidly switching between stirring multiple pots on a stove. The chef isn't cooking multiple dishes simultaneously, but is managing all of them by quickly switching context.

```javascript
// Node.js example of concurrency using async/await
async function fetchData(url) {
  console.log(`Fetching data from ${url}`);
  // Simulating a network request
  await new Promise(resolve => setTimeout(resolve, 1000));
  return `Data from ${url}`;
}

async function main() {
  // These run concurrently, not in parallel
  const results = await Promise.all([
    fetchData("example.com/api1"),
    fetchData("example.com/api2"),
    fetchData("example.com/api3")
  ]);

  console.log(results);
}

main().catch(error => console.error(error));
```
Key Differences

  • Parallelism requires multiple processors/cores and is about simultaneous execution
  • Concurrency can work on a single processor and is about managing multiple tasks' progress
  • Parallelism is about execution (doing multiple things at once)
  • Concurrency is about structure (designing programs to handle multiple things at once)

13. Process, Thread, and Coroutine

Process

A process is an independent program running in its own memory space. Each process has its own resources (memory, file handles) allocated by the operating system.

Example: Running Chrome, Word, and Spotify simultaneously on your computer. Each is a separate process with its own memory allocation.

```bash
# View processes on Linux/macOS
ps aux

# View processes on Windows (PowerShell)
Get-Process
```
Thread
A thread is a lightweight execution unit within a process. Multiple threads in the same process share memory space and resources.
Example: A web browser process might have different threads for:

  • Rendering the webpage
  • Downloading files
  • Processing JavaScript
  • Handling user interface actions

```javascript
// Node.js example of threads
const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  // This code runs in the main thread

  // Create a download thread
  const downloadWorker = new Worker(__filename);
  downloadWorker.on('message', (message) => {
    console.log(message);
  });
  downloadWorker.postMessage('download');

  // Create a render thread
  const renderWorker = new Worker(__filename);
  renderWorker.on('message', (message) => {
    console.log(message);
  });
  renderWorker.postMessage('render');

  console.log("Browser is ready");
} else {
  // This code runs in worker threads
  parentPort.once('message', (message) => {
    if (message === 'download') {
      console.log("Downloading file...");
      // Download logic would go here
      parentPort.postMessage('Download complete');
    } else if (message === 'render') {
      console.log("Rendering webpage...");
      // Rendering logic would go here
      parentPort.postMessage('Rendering complete');
    }
  });
}
```

Coroutine
A coroutine is a concurrency unit that allows pausing and resuming execution at specific points. Unlike threads, coroutines are cooperatively scheduled rather than preemptively scheduled by the OS.
Example: In a game, you might use coroutines to handle:

  • Character movement
  • Animation sequences
  • Loading assets in the background

<?php // PHP example of coroutines (using react/promise library) // This is simulated since PHP doesn't have native coroutines // In a real project, you would use composer to install react/promise class Game { public function loadAssets() { return new Promise(function ($resolve) { echo "Starting to load background assets\n"; // Simulate async loading sleep(1); echo "Background assets loaded\n"; $resolve(true); }); } public function runAnimation() { return new Promise(function ($resolve) { echo "Starting game animation\n"; // Simulate animation sleep(0.5); echo "Animation sequence complete\n"; $resolve(true); }); } public function start() { echo "Game is ready to play\n"; // Run both "coroutines" concurrently $promises = [ $this->loadAssets(), $this->runAnimation() ]; return Promise\all($promises); } } // Usage: $game = new Game(); $game->start()->then(function() { echo "All game systems ready\n"; }); 

Comparison

Processes have separate memory spaces, making them more resource-intensive but more isolated/secure
Threads share memory within a process, making them lighter but requiring synchronization
Coroutines are even lighter than threads and can be paused/resumed, but rely on cooperative rather than preemptive scheduling

How Paging Helps Memory Optimization

Paging is a memory management scheme that breaks virtual and physical memory into small fixed-size blocks: "pages" (virtual memory) and "frames" (physical memory).

How Paging Works

The system divides physical memory into fixed-size frames (typically 4KB)
Virtual memory (what programs "see") is divided into pages of the same size
The operating system maintains a "page table" that maps virtual pages to physical frames
Not all pages need to be in physical memory at once

Memory Optimization Benefits

Efficient Use of RAM: Only the parts of a program that are actively used need to be loaded into physical memory.
Overcoming Physical Memory Limitations: Programs can use more memory than physically available through swapping pages to disk.
Memory Isolation: Each process has its own page table, ensuring one process can't access another's memory.
Reduced Fragmentation: Since all pages are the same size, memory fragmentation is minimized.

Practical Example
Imagine you're working with Photoshop on a large 2GB image, but your computer only has 8GB of RAM and other programs are running:
Without paging:

You'd need the entire 2GB image in RAM at once
Other programs might not have enough memory

With paging:

Photoshop only keeps the parts of the image you're actively editing in RAM
Other parts are stored on disk and loaded only when needed
You can work on a file larger than your available RAM
The system can allocate memory more flexibly based on what's actively used

Virtual Memory Physical Memory (RAM)
+-------------+ +-------------+
| Page 1 | → Page Table → | Frame 3 | (loaded)
+-------------+ +-------------+
| Page 2 | → Page Table → | On Disk | (swapped out)
+-------------+ +-------------+
| Page 3 | → Page Table → | Frame 7 | (loaded)
+-------------+ +-------------+
| Page 4 | → Page Table → | On Disk | (swapped out)
+-------------+ +-------------+
When you start working on a section of the image that's on disk, a page fault occurs, and the OS loads that page into RAM (possibly swapping out another page that hasn't been used recently).
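The page-table lookup described above can be sketched as a toy address translator. This is an illustrative simulation only (page size, table contents, and the message format are all invented for the example):

```javascript
// Toy page-table lookup sketch (4 KB pages; all mappings hypothetical)
const PAGE_SIZE = 4096;

// Maps virtual page number -> physical frame number (null = swapped out to disk)
const pageTable = new Map([
  [0, 3],    // page 0 -> frame 3 (loaded)
  [1, null], // page 1 on disk (swapped out)
  [2, 7],    // page 2 -> frame 7 (loaded)
]);

function translate(virtualAddress) {
  const page = Math.floor(virtualAddress / PAGE_SIZE); // which page?
  const offset = virtualAddress % PAGE_SIZE;           // position within the page
  const frame = pageTable.get(page);
  if (frame === null || frame === undefined) {
    // In a real OS this traps into the kernel, which loads the page from disk
    return `page fault: page ${page} must be loaded from disk`;
  }
  return frame * PAGE_SIZE + offset; // physical address
}

console.log(translate(100));  // page 0 -> frame 3 -> 3 * 4096 + 100 = 12388
console.log(translate(4200)); // page 1 -> "page fault: ..."
```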

Why Concurrency Control is Important

Concurrency control ensures that correct results are produced when multiple operations or transactions access shared resources simultaneously.

The Problem Without Concurrency Control

Without concurrency control, concurrent operations can lead to:

Data Inconsistency: Different users seeing different values for the same data
Lost Updates: One user's changes overwriting another's without incorporating both
Incorrect Calculations: Results that don't reflect all operations

Practical Scenario: Banking System
Imagine a banking application where two operations happen simultaneously:

A customer withdraws $500 from an ATM
The same customer's automatic mortgage payment of $1000 is being processed

Starting account balance: $1500
Without concurrency control:

Operation 1 (ATM):                  Operation 2 (Mortgage):
1. Read balance: $1500              1. Read balance: $1500
2. Calculate new balance: $1000     2. Calculate new balance: $500
3. Write new balance: $1000         3. Write new balance: $500

Final balance: $500 (incorrect - should be $0)

The mortgage payment read the original balance before the ATM withdrawal completed, so the withdrawal was effectively "lost" when the mortgage payment wrote its result.

With concurrency control:

Operation 1 (ATM):                  Operation 2 (Mortgage):
1. LOCK account                     1. Wait for lock...
2. Read balance: $1500
3. Calculate new balance: $1000
4. Write new balance: $1000
5. UNLOCK account                   2. LOCK account
                                    3. Read balance: $1000
                                    4. Calculate new balance: $0
                                    5. Write new balance: $0
                                    6. UNLOCK account

Final balance: $0 (correct)

This is a simple concurrency control technique (locking) that ensures operations see a consistent state and don't interfere with each other.
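The lost-update interleaving above can be reproduced in a few lines of Node.js. This is a deliberately simplified sketch: the `await` between read and write stands in for any real processing delay (a network round trip, disk I/O) during which the other operation sneaks in:

```javascript
// Sketch: the lost-update problem, with an await simulating processing delay
let balance = 1500;

async function withdraw(amount) {
  const current = balance;                   // 1. read balance
  await new Promise(r => setTimeout(r, 10)); // simulate processing time
  balance = current - amount;                // 3. write back, clobbering any
                                             //    write that happened meanwhile
}

async function main() {
  // ATM withdrawal and mortgage payment run concurrently
  await Promise.all([withdraw(500), withdraw(1000)]);
  console.log(`Final balance: ${balance}`); // 500 — the other update was lost
  return balance;
}

main();
```

Both operations read $1500 before either writes, so whichever write lands last silently discards the other, exactly as in the table above.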
Locking, Queuing, and Atomic Operations

These are three fundamental mechanisms for controlling concurrent access to shared resources.

Locking

Locking reserves exclusive access to a resource for a specific thread or process. Types of locks:

Mutex: Meaning and Relationships
A mutex (short for "mutual exclusion") is a synchronization mechanism used in concurrent programming to prevent multiple threads or processes from simultaneously accessing a shared resource.
Mutex and Locks
A mutex is a specific type of lock that provides exclusive access to a resource. Here's how they relate:

Locks: A broader term for mechanisms that control access to resources. Locks can be shared (read locks) or exclusive (write locks).
Mutex: A specific type of lock that grants exclusive access to exactly one thread/process at a time.

A typical JavaScript Mutex implementation:

Maintains a locked state
Makes requesters wait in a queue when the resource is locked
Releases waiting callers in FIFO order

PHP can achieve similar locking with flock(), which serves the same purpose but at the file-system level rather than in memory.
Mutex and ACID
ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties for database transactions. Mutexes help implement the "I" in ACID:

Atomicity: Either all operations in a transaction succeed or none do.
Consistency: A transaction brings the database from one valid state to another.
Isolation: Concurrent transactions should not affect each other (this is where mutexes come in).
Durability: Once committed, transactions remain permanent.

Mutexes help achieve isolation by preventing concurrent access to the same data, which could otherwise cause race conditions or inconsistent reads and writes. For example, taking a mutex around a withdrawal (accountMutex.lock()) ensures that only one withdrawal operation happens at a time, preventing a race where two threads check the balance simultaneously and both withdraw funds.
Mutex and Queues
Mutexes and queues often work together in concurrent systems:

Queues organize pending work (like orders in your examples)
Mutexes ensure that queue operations (enqueue/dequeue) are thread-safe
Producer-Consumer Pattern: Producers add work to queues while consumers take work from queues, with mutexes protecting the queue itself

In practice:

A JavaScript queue implementation can process orders one at a time
A PHP database queue can use database transactions and row-level locking to ensure exactly one worker processes each order

Practical Applications
Mutexes are essential in many scenarios:

Financial transactions (such as the bank withdrawal example below)
Inventory systems (preventing overselling)
User registration (preventing duplicate usernames)
File operations (preventing corrupted writes)

The critical point about mutexes is that they enforce sequential access to resources that cannot safely be accessed concurrently, making them fundamental building blocks for reliable concurrent systems.
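A minimal promise-based mutex can be sketched in plain JavaScript. This is an illustrative, not production-grade, implementation (the class name and the bank-account usage are invented for the example); it shows the locked flag, the FIFO waiter queue, and how serializing access fixes the lost-update problem:

```javascript
// Minimal promise-based mutex sketch (illustrative only)
class Mutex {
  constructor() {
    this.locked = false;
    this.waiters = []; // FIFO queue of waiting lock() resolvers
  }

  lock() {
    return new Promise(resolve => {
      if (!this.locked) {
        this.locked = true;
        resolve();               // acquire immediately
      } else {
        this.waiters.push(resolve); // wait in line
      }
    });
  }

  unlock() {
    const next = this.waiters.shift();
    if (next) {
      next(); // hand the lock straight to the next waiter (stays locked)
    } else {
      this.locked = false;
    }
  }
}

// Usage: serialize withdrawals on a shared account
const accountMutex = new Mutex();
let balance = 1500;

async function withdraw(amount) {
  await accountMutex.lock();
  try {
    const current = balance;
    await new Promise(r => setTimeout(r, 10)); // simulated processing delay
    balance = current - amount;
  } finally {
    accountMutex.unlock(); // always release, even on error
  }
}

Promise.all([withdraw(500), withdraw(1000)])
  .then(() => console.log(`Balance: ${balance}`)); // 0 — no lost update
```

Because each withdrawal holds the mutex across its read-modify-write, the second operation sees the first one's result instead of a stale balance.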

Mutex (Mutual Exclusion): Only one thread can hold the lock at a time
Read-Write Lock: Multiple readers can access simultaneously, but writers need exclusive access
Semaphore: Allows a specified number of threads to access a resource

Example:

```php
<?php
// PHP example with database transaction for concurrency control
class BankAccount {
    private $pdo;

    public function __construct(PDO $pdo) {
        $this->pdo = $pdo;
    }

    public function withdraw($accountId, $amount) {
        try {
            // Start transaction for concurrency control
            $this->pdo->beginTransaction();

            // Get current balance with a row lock (FOR UPDATE)
            $stmt = $this->pdo->prepare(
                "SELECT balance FROM accounts WHERE id = :id FOR UPDATE"
            );
            $stmt->execute(['id' => $accountId]);
            $account = $stmt->fetch(PDO::FETCH_ASSOC);

            if (!$account) {
                throw new Exception("Account not found");
            }

            $balance = $account['balance'];

            if ($balance < $amount) {
                throw new Exception("Insufficient funds");
            }

            // Update balance
            $newBalance = $balance - $amount;
            $stmt = $this->pdo->prepare(
                "UPDATE accounts SET balance = :balance WHERE id = :id"
            );
            $stmt->execute([
                'balance' => $newBalance,
                'id' => $accountId,
            ]);

            // Commit the transaction
            $this->pdo->commit();

            return [
                'success' => true,
                // Note the escaped \$ — "${$amount}" would be variable-variable syntax
                'message' => "Withdrew \${$amount}, new balance: \${$newBalance}",
            ];
        } catch (Exception $e) {
            // Roll back on error
            $this->pdo->rollBack();
            return [
                'success' => false,
                'message' => $e->getMessage(),
            ];
        }
    }
}

// Usage:
// $pdo = new PDO("mysql:host=localhost;dbname=bank", "user", "password");
// $account = new BankAccount($pdo);
// $result = $account->withdraw(123, 500);
// echo $result['message'];
```

Queuing
Queuing organizes requests for resources in a structured order, usually First-In-First-Out (FIFO).
Example: Task Queue

```javascript
// Node.js example of task queuing with the Bull queue library
const Queue = require('bull');

// Create a queue (requires a running Redis instance)
const taskQueue = new Queue('data-processing');

// Process jobs in the queue
taskQueue.process(async (job) => {
  const task = job.data;
  console.log(`Processing task: ${task.name}`);

  // Simulate work
  await new Promise(resolve => setTimeout(resolve, 1000));

  console.log(`Completed task: ${task.name}`);
  return { processed: true, taskName: task.name };
});

// Add jobs to the queue
async function addJobs() {
  for (let i = 0; i < 10; i++) {
    await taskQueue.add({ name: `Task ${i}`, data: { /* task data */ } });
    console.log(`Added Task ${i} to queue`);
  }
}

// Start adding jobs
addJobs().catch(err => console.error('Error adding jobs:', err));
```

In a real application, you would keep the job producer and the job processor in separate files.
Atomic Operations
Atomic operations are indivisible, meaning they complete entirely or not at all, with no partial results visible to other threads.
Example:

```javascript
// JavaScript example of atomic operations using the Atomics API
// on shared memory (requires SharedArrayBuffer support)

// Create a shared buffer
const buffer = new SharedArrayBuffer(4); // 4 bytes
const view = new Int32Array(buffer);     // View as a 32-bit integer

// Initialize the counter to 0
view[0] = 0;

// Function to be run in each worker
function incrementCounter(view) {
  for (let i = 0; i < 1000; i++) {
    // Atomic add operation
    Atomics.add(view, 0, 1);
    // Equivalent to the non-atomic: view[0] = view[0] + 1;
    // but the atomic version is thread-safe
  }
}

// In a main script you would create multiple workers:
// const worker1 = new Worker('worker.js');
// const worker2 = new Worker('worker.js');
// worker1.postMessage(view);
// worker2.postMessage(view);

// After the workers complete:
// console.log(`Final count: ${view[0]}`); // 2000 with atomic operations
// Without atomics, the result would likely be less due to race conditions
```

When to Use Each

Locking: When you need exclusive access to a resource for complex operations
Queuing: When processing tasks in order or distributing work among multiple workers
Atomic Operations: For simple operations like counters or flags, with less overhead than locks

Deadlock vs Race Condition

Both deadlocks and race conditions are concurrency problems, but they manifest differently.

Deadlock

A deadlock occurs when two or more processes are waiting for each other to release resources, and neither can proceed.

Example: The Dining Philosophers Problem

Five philosophers sit at a round table with one fork between each pair. To eat, a philosopher needs both the fork on their left and the fork on their right.
```php
<?php
// PHP example of the Dining Philosophers problem that can deadlock
class Fork {
    private $id;
    private $inUse = false;

    public function __construct($id) {
        $this->id = $id;
    }

    public function pickUp() {
        if ($this->inUse) {
            return false;
        }
        $this->inUse = true;
        return true;
    }

    public function putDown() {
        $this->inUse = false;
        return true;
    }

    public function getId() {
        return $this->id;
    }
}

class Philosopher {
    private $name;
    private $leftFork;
    private $rightFork;

    public function __construct($name, Fork $leftFork, Fork $rightFork) {
        $this->name = $name;
        $this->leftFork = $leftFork;
        $this->rightFork = $rightFork;
    }

    public function eat() {
        echo "{$this->name} is hungry\n";

        // Try to pick up the left fork
        while (!$this->leftFork->pickUp()) {
            echo "{$this->name} waiting for left fork {$this->leftFork->getId()}\n";
            sleep(1);
        }
        echo "{$this->name} picked up left fork {$this->leftFork->getId()}\n";

        // This sleep increases the likelihood of deadlock
        sleep(1);

        // Try to pick up the right fork
        while (!$this->rightFork->pickUp()) {
            echo "{$this->name} waiting for right fork {$this->rightFork->getId()}\n";
            sleep(1);
        }
        echo "{$this->name} picked up right fork {$this->rightFork->getId()}\n";

        // Eat
        echo "{$this->name} is eating\n";
        sleep(2);

        // Put down the forks
        $this->rightFork->putDown();
        $this->leftFork->putDown();
        echo "{$this->name} is thinking\n";
    }
}

// Create 5 forks
$forks = [];
for ($i = 0; $i < 5; $i++) {
    $forks[$i] = new Fork($i);
}

// Create 5 philosophers
$philosophers = [];
for ($i = 0; $i < 5; $i++) {
    $leftFork = $forks[$i];
    $rightFork = $forks[($i + 1) % 5];
    $philosophers[$i] = new Philosopher("Philosopher $i", $leftFork, $rightFork);
}

// In real PHP you would fork a process per philosopher (pcntl_fork)
// or use a message queue for true parallel execution; for demonstration,
// we just call eat() sequentially
foreach ($philosophers as $philosopher) {
    $philosopher->eat();
}
```

This can cause deadlock if each philosopher picks up their left fork at the same time. None can pick up their right fork because it's already taken by another philosopher.
Race Condition
A race condition occurs when the behavior of a system depends on the timing or ordering of events that cannot be controlled.

Example: Shared Counter

```javascript
// JavaScript example of a race condition
class SharedCounter {
  constructor() {
    this.count = 0;
  }

  // This method has a race condition under real concurrency
  increment() {
    // This is actually three operations:
    // 1. Read count
    // 2. Add 1 to it
    // 3. Write the new value back
    this.count++;
  }

  getCount() {
    return this.count;
  }
}

// Simulating multiple "threads" with delayed callbacks.
// (Real workers can't share plain objects without SharedArrayBuffer,
// so this only simulates the interleaving.)
function simulateRaceCondition() {
  const counter = new SharedCounter();

  const workerCount = 10;
  const incrementsPerWorker = 1000;
  let completed = 0;

  for (let i = 0; i < workerCount; i++) {
    // Simulate asynchronous execution
    setTimeout(() => {
      for (let j = 0; j < incrementsPerWorker; j++) {
        counter.increment();
      }
      completed++;
      if (completed === workerCount) {
        console.log(`Final count: ${counter.getCount()}`);
        // Expected: 10,000. This single-threaded simulation is exact,
        // but in a truly concurrent environment the result would likely
        // be less because of lost updates between read and write.
      }
    }, Math.random() * 100);
  }
}

simulateRaceCondition();
```

Key Differences

Deadlock: Processes are permanently blocked waiting for each other
Race Condition: System behavior depends on uncontrollable timing of events
Deadlocks stop progress entirely; race conditions cause unpredictable results
Deadlocks involve resource allocation; race conditions involve timing
Deadlocks can be prevented by careful resource ordering; race conditions by proper synchronization

How Single-Threaded Redis Performs Better

Redis is an in-memory database that, unlike many other database systems, runs on a single thread for most operations. Counterintuitively, this design choice contributes to Redis's exceptional performance.

Why Single-Threaded Redis Is Fast

No Thread Synchronization Overhead

No locks, mutexes, or context switching
No CPU time wasted on thread coordination
No complex concurrency bugs

Memory Efficiency

No thread stacks to maintain
Simpler memory model with no thread-specific allocations

CPU Cache Optimization

Better CPU cache utilization with single-threaded design
Less cache invalidation due to multiple threads accessing same data

I/O Multiplexing

Uses event-driven architecture with event loops (epoll/kqueue)
Can handle thousands of connections with a single thread
Efficiently processes many operations in non-blocking fashion

Optimized Data Structures

Highly optimized in-memory data structures
Operations are computationally efficient (O(1) for many operations)

Practical Example
Consider a scenario where Redis handles 10,000 concurrent connections:
Multi-threaded approach (traditional database):

10,000 connections → Thread Pool (100 threads) → Database

  • Context switching overhead between threads
  • Lock contention for shared data
  • Memory overhead for each thread
  • Complex coordination logic

Redis single-threaded approach:

10,000 connections → Event Loop (1 thread) → Redis

  • Event notification when data is ready to read/write
  • No context switching between threads
  • No lock contention
  • Simple programming model

When Single-Threaded Design Falls Short

Redis's single-threaded design can become a bottleneck for:

CPU-intensive operations on very large datasets
Complex operations that block the event loop

That's why newer Redis versions (6.0+) implement a multithreaded I/O model while keeping the core processing single-threaded, getting the best of both worlds.

Covariance vs Contravariance

Covariance and contravariance are concepts related to type compatibility in programming languages, particularly those with strong typing systems. They determine when a more specific or more general type can be used in place of another.

Covariance

Covariance allows you to use a more derived (more specific) type than originally specified.

Example in JavaScript/TypeScript:
```typescript
// TypeScript example of covariance

// Animal hierarchy
class Animal {
  name: string;
  constructor(name: string) {
    this.name = name;
  }
}

class Dog extends Animal {
  breed: string;
  constructor(name: string, breed: string) {
    super(name);
    this.breed = breed;
  }
  bark() {
    return `${this.name} says woof!`;
  }
}

// Covariance with arrays:
// Animal[] is the base type; Dog[] is more specific (derived)
let animals: Animal[] = [];
let dogs: Dog[] = [new Dog("Buddy", "Golden Retriever")];

// Covariant behavior: can assign more specific to more general
animals = dogs; // This is allowed - covariance

// We can read from the animals array safely
console.log(animals[0].name); // OK

// But writing through the general type is unsound:
// animals.push(new Animal("Generic Animal"));
// This type-checks, yet now `dogs` contains a plain Animal,
// and calling dogs[1].bark() would fail at runtime
```

Covariance makes sense for "read-only" scenarios because you can always treat a more specific type as its more general base type when reading properties they share.
Contravariance
Contravariance allows you to use a more general (less specific) type than originally specified.
Example in JavaScript/TypeScript:

```typescript
// TypeScript example of contravariance with function types
type AnimalCallback = (animal: Animal) => void;
type DogCallback = (dog: Dog) => void;

function processDog(dog: Dog, callback: DogCallback) {
  callback(dog);
}

// Callbacks
const animalGreeter: AnimalCallback = (animal: Animal) => {
  console.log(`Hello, ${animal.name}`);
};

const dogGreeter: DogCallback = (dog: Dog) => {
  console.log(`Hello, ${dog.name} the ${dog.breed}`);
  console.log(dog.bark());
};

// Contravariant behavior with function parameters:
// a callback that accepts any Animal can stand in where a
// Dog-specific callback is expected...
const dogCallback: DogCallback = animalGreeter; // OK - contravariance

// ...but not the other way around: a Dog-specific callback might
// use dog-only members on an animal that isn't a Dog
// const animalCallback: AnimalCallback = dogGreeter; // Error

// The same rule applies when passing callbacks:
processDog(new Dog("Rex", "German Shepherd"), animalGreeter); // OK
```

Practical Example: Event Handling

A real-world example of contravariance is event handling in web development:

```typescript
// DOM event handling demonstrates contravariance
// MouseEvent is more specific than Event
function handleMouseEvent(event: MouseEvent) {
  console.log(`Mouse at ${event.clientX}, ${event.clientY}`);
}

function handleAnyEvent(event: Event) {
  console.log(`Event of type ${event.type} occurred`);
}

const button = document.querySelector('button');

// Contravariance in action:
// Can use a more general handler where a specific one is expected
button?.addEventListener('click', handleAnyEvent); // OK

// Cannot use a more specific handler where a general one is expected
// document.addEventListener('load', handleMouseEvent); // Error
```

Summary and Mnemonic
To remember the difference:

Covariance: Compatible with more specific types (like a cone narrowing down)
Contravariance: Goes contrary to intuition, compatible with more general types

In practical terms:

Use covariance for output/return types (reading)
Use contravariance for input/parameter types (writing)

This is often summarized as "covariant outputs, contravariant inputs."
Understanding these concepts helps create more flexible, type-safe interfaces in languages that support them, particularly when working with generics, inheritance hierarchies, and callback functions.

Computer Science Concepts Explained

Q: Explain IOC vs Dependency Inversion vs Dependency Injection

These are related but different concepts in software design:

Dependency Inversion Principle (DIP): High-level modules shouldn't depend on low-level modules; both should depend on abstractions.

Inversion of Control (IoC): A design pattern where control flow is inverted; framework calls your code instead of your code calling the framework.

Dependency Injection (DI): A technique implementing IoC by injecting dependencies instead of creating them internally.

Real-world example:
Imagine building a payment system for an e-commerce site:

Without these principles:

```javascript
// Tightly coupled - difficult to change
class OrderProcessor {
  constructor() {
    // Direct dependency on a specific implementation
    this.paymentGateway = new StripePaymentGateway();
  }

  processOrder(order) {
    // Process payment using Stripe
    this.paymentGateway.chargeCustomer(order.amount, order.customerId);
  }
}
```
```php
<?php
// Tightly coupled - difficult to change
class OrderProcessor {
    private $paymentGateway;

    public function __construct() {
        // Direct dependency on a specific implementation
        $this->paymentGateway = new StripePaymentGateway();
    }

    public function processOrder($order) {
        // Process payment using Stripe
        $this->paymentGateway->chargeCustomer($order['amount'], $order['customerId']);
    }
}
```

With DIP and DI:

```javascript
// JavaScript/Node.js example with DI

// "Interface" (abstraction) - in JS, this is just a common pattern
class PaymentGateway {
  processPayment(amount, customerId) {
    throw new Error("Method not implemented");
  }
}

// Low-level module implementing the interface
class StripePaymentGateway extends PaymentGateway {
  processPayment(amount, customerId) {
    // Stripe-specific code
    console.log(`Processing ${amount} for customer ${customerId} with Stripe`);
  }
}

// Alternative implementation
class PayPalPaymentGateway extends PaymentGateway {
  processPayment(amount, customerId) {
    // PayPal-specific code
    console.log(`Processing ${amount} for customer ${customerId} with PayPal`);
  }
}

// High-level module depends on the abstraction, not an implementation
class OrderProcessor {
  constructor(paymentGateway) {
    // Dependency injected through the constructor
    this.paymentGateway = paymentGateway;
  }

  processOrder(order) {
    this.paymentGateway.processPayment(order.amount, order.customerId);
  }
}

// Usage with dependency injection
const gateway = new StripePaymentGateway();
const processor = new OrderProcessor(gateway);
```
```php
<?php
// PHP example with DI

// Interface (abstraction)
interface PaymentGateway {
    public function processPayment($amount, $customerId);
}

// Low-level module implementing the interface
class StripePaymentGateway implements PaymentGateway {
    public function processPayment($amount, $customerId) {
        // Stripe-specific code
        echo "Processing $amount for customer $customerId with Stripe";
    }
}

// Alternative implementation
class PayPalPaymentGateway implements PaymentGateway {
    public function processPayment($amount, $customerId) {
        // PayPal-specific code
        echo "Processing $amount for customer $customerId with PayPal";
    }
}

// High-level module depends on the abstraction, not an implementation
class OrderProcessor {
    private $paymentGateway;

    // Dependency injected through the constructor
    public function __construct(PaymentGateway $paymentGateway) {
        $this->paymentGateway = $paymentGateway;
    }

    public function processOrder($order) {
        $this->paymentGateway->processPayment($order['amount'], $order['customerId']);
    }
}

// Usage with dependency injection
$gateway = new StripePaymentGateway();
$processor = new OrderProcessor($gateway);
```

Benefits in practice:

  1. Easier testing: You can inject mock implementations
  2. Flexibility: Switch from Stripe to PayPal without changing OrderProcessor
  3. Decoupling: Components can evolve independently

Web API Concepts for Beginners

This guide explains essential API and web development concepts with real-world examples to help novice technical developers understand these important topics.

1. What is the API First Approach?

API First is a development approach where you design the API before implementing the application that uses it.

Explanation: Think of building a house. An API First approach means creating detailed blueprints before laying any foundation or building walls. You decide all the specifications of the API (endpoints, request/response formats, etc.) before writing any implementation code.

Real-life example:
A team building a food delivery app would start by designing the API contract that defines:

  • How restaurants will register their menus
  • How customers will search and place orders
  • How delivery partners will receive and update delivery status

Only after this API design is complete and approved by all stakeholders would they begin coding the actual application. This ensures that mobile apps, web frontend, and third-party integrations can all be developed in parallel based on the agreed API contract.
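Such a contract is often captured in an OpenAPI document. A minimal hypothetical fragment for the food-delivery example (all paths, fields, and titles are invented for illustration):

```yaml
# Hypothetical fragment of an API-first contract (OpenAPI 3.0)
openapi: "3.0.3"
info:
  title: Food Delivery API
  version: "1.0.0"
paths:
  /restaurants/{id}/menu:
    get:
      summary: Fetch a restaurant's menu
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: string }
      responses:
        "200":
          description: The restaurant's menu
  /orders:
    post:
      summary: Place a new order
      responses:
        "201":
          description: Order created
```

Frontend, mobile, and backend teams can all build against this document in parallel, before a single endpoint is implemented.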

Benefits:

  • Teams can work in parallel
  • Better documentation from the start
  • Easier to identify design flaws early
  • More consistent user experience

2. Explain Semantic Versioning

Semantic Versioning (SemVer) is a versioning scheme using three numbers (X.Y.Z) where:

  • X = Major version (breaking changes)
  • Y = Minor version (new features, backward compatible)
  • Z = Patch version (bug fixes, backward compatible)

Explanation: Imagine your favorite mobile app. When it updates with just bug fixes, the version might change from 2.1.0 to 2.1.1 (patch). When it adds new features but everything still works the same, it might become 2.2.0 (minor). When it completely redesigns the interface or changes how you use it, it becomes 3.0.0 (major).

Real-life example:
The popular Node.js package Express:

  • Express 4.17.1 → 4.17.2: Fixed security vulnerabilities (patch)
  • Express 4.17.2 → 4.18.0: Added new optional features (minor)
  • Express 4.x.x → 5.0.0: Changed how middleware works, requiring code changes (major)

Why it matters:

  • Helps developers understand the risk of updating
  • Clearly communicates the nature of changes
  • Allows automated tools to safely update dependencies
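The X.Y.Z comparison rules above can be sketched as a small classifier (the function name is invented, and it assumes plain X.Y.Z strings without pre-release or build suffixes):

```javascript
// Sketch: classify the change between two SemVer versions (plain X.Y.Z only)
function changeType(from, to) {
  const [ma1, mi1, pa1] = from.split('.').map(Number);
  const [ma2, mi2, pa2] = to.split('.').map(Number);
  if (ma2 !== ma1) return 'major'; // breaking changes
  if (mi2 !== mi1) return 'minor'; // backward-compatible features
  if (pa2 !== pa1) return 'patch'; // backward-compatible bug fixes
  return 'none';
}

console.log(changeType('4.17.1', '4.17.2')); // patch
console.log(changeType('4.17.2', '4.18.0')); // minor
console.log(changeType('4.18.0', '5.0.0'));  // major
```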

3. Why is JWT Important?

JWT (JSON Web Token) is a compact, self-contained way to securely transmit information between parties as a JSON object.

Explanation: A JWT is like a digitally signed ID card. When you log into a website, instead of the server keeping track of your session, it gives you this special ID card (token). Every time you make a request, you show this ID card, and the server can verify it's legitimate without needing to look you up in a database.

Real-life example:
A single sign-on system for a company with multiple applications:

  • You log in once at the company portal
  • The authentication server gives you a JWT
  • When you access the HR app, email system, or document management system, you present this JWT
  • Each system can verify your identity and permissions without contacting the authentication server

Benefits:

  • Reduces database lookups (stateless)
  • Can contain user information and permissions
  • Works well in distributed systems
  • Enables single sign-on across multiple applications
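Structurally, a JWT is three base64url-encoded parts separated by dots: header.payload.signature. The sketch below only decodes the first two parts to show the shape; the token and its claims are invented, and real verification requires checking the signature with the secret or public key:

```javascript
// Sketch: decoding (NOT verifying) the parts of a JWT in Node.js
function decodeJwtPayload(token) {
  const [header, payload] = token.split('.');
  const fromB64url = (s) => Buffer.from(s, 'base64url').toString('utf8');
  return {
    header: JSON.parse(fromB64url(header)),   // e.g. { alg, typ }
    payload: JSON.parse(fromB64url(payload)), // the claims
  };
}

// A hypothetical token built by hand for illustration (fake signature):
const token = [
  Buffer.from(JSON.stringify({ alg: 'HS256', typ: 'JWT' })).toString('base64url'),
  Buffer.from(JSON.stringify({ sub: 'user-42', role: 'admin' })).toString('base64url'),
  'fake-signature',
].join('.');

console.log(decodeJwtPayload(token).payload.sub); // "user-42"
```

Anyone can decode a JWT like this, which is why tokens must never carry secrets; their security comes from the signature, which only the issuer can produce.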

4. Which OAuth Grant is Best for Public Clients?

Authorization Code Flow with PKCE (Proof Key for Code Exchange) is the best OAuth grant type for public clients like mobile or single-page applications.

Explanation: Think of OAuth like valet parking at a restaurant. Different types of "valet services" (grant types) exist for different situations. For public clients (where code is exposed to users), the Authorization Code Flow with PKCE is like having a special ticket system where:

  1. You request a valet ticket
  2. You create a secret code only you know
  3. You give the valet a transformed version of this code
  4. When getting your car back, you prove you're the owner by revealing your original secret

Real-life example:
A mobile banking app:

  1. User taps "Login"
  2. App generates a code verifier and its hashed challenge
  3. App opens the bank's authorization page with the challenge
  4. User logs in on that page
  5. App receives an authorization code
  6. App exchanges this code along with the original code verifier for access tokens
  7. User can now access their account details

Why it's best for public clients:

  • Protects against authorization code interception
  • Doesn't require storing client secrets in insecure environments
  • Provides a full authentication flow with better security than Implicit Flow
  • Recommended by OAuth 2.0 security best practices

5. When Will You Use Exponential Backoff with Jitter?

Exponential backoff with jitter is a retry strategy where you wait increasingly longer between retry attempts, plus a random amount of time (jitter).

Explanation: Imagine you're trying to enter a crowded store. If everyone who can't get in immediately tries again exactly 5 seconds later, they'll all collide again. Exponential backoff means waiting longer each time (2 seconds, then 4, then 8...). Adding jitter means each person waits a slightly different time, preventing everyone from trying again simultaneously.

Real-life example:
An e-commerce checkout system during a flash sale:

  1. User clicks "Complete Purchase" but the server is overloaded
  2. Instead of retrying immediately, the app waits 1 second + random 0-0.5 seconds
  3. If still failing, it waits 2 seconds + random 0-1 seconds
  4. Then 4 seconds + random 0-2 seconds, and so on
  5. This spreads out the retry traffic, giving the server a chance to recover

When to use it:

  • API rate limiting scenarios
  • Handling service unavailability
  • Distributed systems with temporary failures
  • High-concurrency environments
  • Microservice communication
  • Cloud service integrations that might experience temporary outages
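A minimal retry helper implementing this idea (the "full jitter" variant, where each wait is a random amount up to the current exponential cap; names and defaults are illustrative):

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Retry `operation`, doubling the backoff cap each attempt (1s, 2s, 4s...)
// and sleeping a random amount up to that cap so clients don't retry in sync.
async function retryWithBackoff(operation, { retries = 5, baseMs = 1000, maxMs = 30000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt >= retries) throw err;            // give up after the last retry
      const cap = Math.min(maxMs, baseMs * 2 ** attempt);
      await sleep(Math.random() * cap);             // full jitter
    }
  }
}

// Usage (hypothetical endpoint):
// const receipt = await retryWithBackoff(() => submitPurchase(order));
```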

6. What is OpenID Connect?

OpenID Connect (OIDC) is an identity layer built on top of OAuth 2.0 that allows clients to verify a user's identity and obtain basic profile information.

Explanation: If OAuth 2.0 is like a valet parking system that gives someone permission to drive your car, OpenID Connect adds identity verification - it confirms who that person is. OAuth answers "Can this application access this data?", while OIDC adds "Who is this user?"

Real-life example:
When you click "Sign in with Google" on a website:

  1. You're redirected to Google's authentication page
  2. After logging in, Google asks if you want to share your profile info with the website
  3. Upon approval, the website receives:
    • An access token (OAuth) for accessing Google services
    • An ID token (OIDC) containing verified information about you (email, name, etc.)
    • Basic profile information

Why it's important:

  • Standardizes user authentication across applications
  • Provides verified user information
  • Enables single sign-on experiences
  • Separates authentication from authorization
  • Reduces the need for sites to manage passwords
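To make the ID token concrete: it is a JWT whose middle segment carries the identity claims. The sketch below decodes the claims of a made-up, unsigned token in Node.js; a real client must also verify the token's signature and its iss/aud/exp claims before trusting anything in it:

```javascript
// An OIDC ID token is a JWT: header.payload.signature, base64url-encoded.
// This reads the claims only; real code MUST verify the signature first.
function decodeIdTokenClaims(idToken) {
  const payloadPart = idToken.split('.')[1];
  return JSON.parse(Buffer.from(payloadPart, 'base64url').toString('utf8'));
}

// A made-up token for illustration (no real header or signature):
const fakeClaims = { iss: 'https://accounts.example.com', sub: '12345', email: 'user@example.com' };
const fakeToken = `h.${Buffer.from(JSON.stringify(fakeClaims)).toString('base64url')}.sig`;

console.log(decodeIdTokenClaims(fakeToken).email); // "user@example.com"
```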

7. Explain Load Test vs Spike Test vs Stress Test

These are different types of performance testing that evaluate system behavior under various conditions:

1. Load Testing:
Testing how a system performs under expected normal conditions.

Explanation: Like testing how efficiently a restaurant can serve customers during regular business hours.

Real-life example:
A ticket booking system being tested with a simulation of 1,000 concurrent users making purchases, which is the average expected load during normal operations.

2. Spike Testing:
Testing how a system handles sudden, extreme increases in load.

Explanation: Testing how that same restaurant handles a surprise visit from a popular celebrity, causing a sudden rush of customers.

Real-life example:
An e-commerce site testing how it handles 50,000 users hitting the site simultaneously when a limited edition product drops or during a flash sale.

3. Stress Testing:
Testing a system's upper limits by gradually increasing load beyond normal capacity until it fails.

Explanation: Gradually adding more and more customers to the restaurant until you find the breaking point where service quality deteriorates.

Real-life example:
Gradually increasing users on a video streaming platform from 10,000 to 100,000 in increments of 10,000 every 10 minutes to find at what point video quality degrades or the system crashes.

Key differences:

  • Load testing confirms the system works under normal conditions
  • Spike testing confirms the system can handle sudden bursts
  • Stress testing identifies the breaking point and failure behavior
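The gradual ramp from the stress-test example can be captured as a simple stage schedule (a sketch only; real tools like k6 or JMeter express this in their own configuration):

```javascript
// Build the virtual-user count for each stage of a stress test: start at
// `from` users and add `step` more each interval until `to` is reached.
function rampStages(from, to, step) {
  const stages = [];
  for (let users = from; users <= to; users += step) {
    stages.push(users);
  }
  return stages;
}

// The streaming-platform example: 10,000 to 100,000 in steps of 10,000,
// one stage per 10-minute interval.
console.log(rampStages(10000, 100000, 10000));
```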

8. Which Ready-Made Tool Can You Use to Offload Static Public Files from the Server?

Content Delivery Networks (CDNs) are the best ready-to-use solution for offloading static files.

Explanation: Instead of your web server having to deliver images, CSS, and JavaScript files to every user, a CDN stores copies of these files on multiple servers around the world. Users download these files from the nearest CDN server, not your application server.

Real-life example:
A news website uses Cloudflare CDN to serve:

  • All images and videos
  • CSS stylesheets
  • JavaScript files
  • Downloadable PDFs

This frees up the main server to focus on generating dynamic content like personalized news feeds and processing user comments.

Popular CDN options:

  • Cloudflare
  • Amazon CloudFront
  • Akamai
  • Fastly
  • Google Cloud CDN
  • Microsoft Azure CDN

Benefits:

  • Faster loading times for users worldwide
  • Reduced bandwidth costs
  • Lower load on application servers
  • Better scalability during traffic spikes
  • Some protection against DDoS attacks

9. How to Optimize Load on the Database When the Same Query is Performed Multiple Times?

Caching is the primary solution for reducing database load from repeated queries.

Explanation: Instead of asking the database the same question repeatedly, you store the answer the first time and reuse it for subsequent requests. It's like a teacher writing a frequently-asked question on the whiteboard instead of repeating the answer to each student individually.

Real-life example:
A product catalog showing the top 20 bestselling items:

  1. First user visits the homepage
  2. System queries the database for top 20 products
  3. Results are stored in Redis cache with a 15-minute expiration
  4. Next 10,000 users who visit get the results from Redis instead of the database
  5. After 15 minutes, the data refreshes for the next visitors
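The steps above follow the cache-aside pattern, sketched here with an in-memory Map standing in for Redis (`fetchTopProducts` is a hypothetical database query):

```javascript
const cache = new Map();

// Cache-aside with a TTL: serve from cache on a hit, otherwise load from
// the database and store the result for `ttlMs` milliseconds.
async function getCached(key, ttlMs, loadFn) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
  const value = await loadFn();                            // miss: query the DB
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

// Usage, with the 15-minute TTL from the example:
// const top20 = await getCached('top-products', 15 * 60 * 1000, fetchTopProducts);
```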

Caching solutions:

  • Redis
  • Memcached
  • Application-level caching
  • ORM query caching
  • Database query caching

When NOT to opt for caching:

  • When data must always be real-time (stock trading prices)
  • When data is highly personalized for each user
  • When queries are rarely repeated
  • When data changes frequently and cache invalidation is complex
  • When the overhead of cache management exceeds the benefits
  • When storage space for the cache is limited compared to the dataset size

10. For Uploading Big Files, Which Solution Can Be Used to Reduce Load on the Server?

Multipart direct-to-storage uploads are the best solution for handling large file uploads efficiently.

Explanation: Instead of sending large files through your application server (like mailing a large package through a small office), users send their files directly to a storage service (like delivering directly to a warehouse). Your application server just coordinates this process without handling the actual data.

Real-life example:
A video editing web application:

  1. User selects a 4GB video file to upload
  2. The application gets a pre-signed URL from Amazon S3
  3. The browser uploads the file directly to S3 in multiple chunks
  4. The application server receives only metadata about the upload
  5. When complete, the server updates the database with the file location
  6. The server can then trigger any needed processing (like transcoding)
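The chunking in step 3 can be sketched as a pure function that splits a file into part ranges (5 MB is S3's documented minimum part size for all but the last part; the actual upload calls to the pre-signed URLs are omitted):

```javascript
// Split a file into byte ranges for a multipart upload.
function chunkRanges(fileSize, partSize = 5 * 1024 * 1024) {
  const parts = [];
  for (let start = 0; start < fileSize; start += partSize) {
    parts.push({
      partNumber: parts.length + 1,
      start,                                     // inclusive byte offset
      end: Math.min(start + partSize, fileSize)  // exclusive byte offset
    });
  }
  return parts;
}

// The 4 GB video from the example becomes 820 parts; each would be PUT to
// its own pre-signed URL, potentially several in parallel.
const parts = chunkRanges(4 * 1024 ** 3);
console.log(parts.length); // 820
```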

Solutions for large file uploads:

  • AWS S3 Multipart Upload
  • Google Cloud Storage Resumable Uploads
  • Azure Blob Storage Block Blobs
  • Cloudflare Stream
  • Specialized upload services like Uploadcare or Filestack

Benefits:

  • Reduces application server bandwidth usage
  • Improves upload reliability with resumable uploads
  • Scales better for concurrent uploads
  • Allows progress tracking for better user experience
  • Removes file size limitations imposed by application servers

System Architecture & Security Concepts for Beginners

1. What is horizontal scaling? Which tool can distribute requests across multiple servers under extreme load?

Horizontal scaling means adding more machines to your system to handle increased load, rather than upgrading the hardware of an existing server (which would be vertical scaling).

Think of it like this:

  • Vertical scaling: Replacing your computer with a more powerful one
  • Horizontal scaling: Adding more computers and dividing the work between them

When dealing with extreme load, a load balancer is the tool that distributes incoming requests across multiple servers. Popular load balancers include:

  • NGINX
  • HAProxy
  • AWS Elastic Load Balancer
  • Cloudflare
  • Kubernetes Ingress controllers

The load balancer acts as a traffic cop, directing each user request to the server that's best able to handle it at that moment, preventing any single server from becoming overwhelmed.

2. Explain event vs message

Events and messages are both ways systems communicate, but they serve different purposes:

Events:

  • Notifications that something happened
  • Don't necessarily expect a response
  • Often broadcast to multiple listeners
  • Example: "User logged in" or "Payment processed"
  • Usually describe something that occurred in the past

Messages:

  • Directed communication containing data or commands
  • Typically sent to a specific recipient
  • Often expect some form of processing or response
  • Example: "Process this payment" or "Update this user record"
  • Usually represent a request or instruction

Think of events like a news broadcast that anyone can tune into, while messages are more like personal letters addressed to specific recipients.

3. Which load balancing algorithm will you use for a session-based monolith?

For a session-based monolith application, sticky sessions (also called session affinity) is the most appropriate load balancing algorithm.

With sticky sessions:

  • Once a user is directed to a specific server, subsequent requests from that user go to the same server
  • The user's session data stays on one server
  • No need to share session data across all servers

This is necessary because session-based monoliths often store user session information in the server's memory. If a user's requests were randomly distributed across different servers, each server would have an incomplete view of the user's session.

Common ways to implement sticky sessions include:

  • Cookie-based session tracking
  • IP-based persistence
  • URL-based session identification

If you need to move beyond sticky sessions (for better scaling), you would typically need to implement a shared session store (like Redis) or move to stateless authentication (like JWT tokens).

4. Can you implement RBAC using PBAC?

Yes, you can implement Role-Based Access Control (RBAC) using Policy-Based Access Control (PBAC).

RBAC (Role-Based Access Control):

  • Users are assigned roles (like "admin", "editor", "viewer")
  • Roles are granted permissions to perform certain actions
  • Simple but relatively inflexible

PBAC (Policy-Based Access Control):

  • Access decisions are based on policies
  • Policies can consider multiple factors (user attributes, resource properties, time, location, etc.)
  • More flexible and powerful

To implement RBAC using PBAC:

  1. Create policies that check if a user has a specific role
  2. The policy would say: "If the user has role X, then grant permission Y"

For example, a simple PBAC policy implementing RBAC might look like:

```
IF user.role == "editor" THEN allow_action("edit_document")
```

PBAC is more powerful than RBAC because it can easily extend beyond just roles to include other contextual information in access decisions.
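A sketch of a tiny PBAC engine where each policy is just a predicate over the request context. The first two policies reproduce plain RBAC, while the third shows PBAC going beyond roles (the policy names and context shape are illustrative):

```javascript
// A policy grants an action when its predicate over the context is true.
const policies = [
  { action: 'edit_document', allow: (ctx) => ctx.user.role === 'editor' },
  { action: 'view_document', allow: (ctx) => ['editor', 'viewer'].includes(ctx.user.role) },
  // Beyond RBAC: an attribute-aware policy (admins, business hours only).
  { action: 'delete_document', allow: (ctx) => ctx.user.role === 'admin' && ctx.hour >= 9 && ctx.hour < 17 },
];

function isAllowed(action, ctx) {
  return policies.some((p) => p.action === action && p.allow(ctx));
}

console.log(isAllowed('edit_document', { user: { role: 'editor' } })); // true
console.log(isAllowed('edit_document', { user: { role: 'viewer' } })); // false
```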

5. What is mandatory access control?

Mandatory Access Control (MAC) is a strict security model where access decisions are made by the system based on security labels, not by the resource owners.

Key characteristics:

  • System-controlled: The system administrator sets the rules, not the data owner
  • Security labels: Both users and resources have security classifications or labels
  • Central policy: Access rules are defined by a central authority
  • Non-discretionary: Users cannot change or override the access rules

Examples in real-world systems:

  • Military and government systems use MAC (with classifications like "Top Secret", "Secret", "Confidential")
  • SELinux (Security-Enhanced Linux) uses a form of MAC
  • Apple's iOS implements MAC for app sandboxing

Unlike discretionary access control (DAC) where file owners can set permissions themselves, with MAC, even the owner of a file cannot grant access if the system policy doesn't allow it.

6. What issue can you face in a round robin load balancer?

Round robin load balancing distributes requests evenly across all servers in rotation, but it has several potential issues:

Main issues:

  1. Uneven server capabilities: If some servers are more powerful than others, they'll still receive the same number of requests as weaker servers
  2. Uneven request complexity: Not all requests require the same processing power
  3. Session problems: Without sticky sessions, users may lose their session data when directed to different servers
  4. Cache inefficiency: Each server maintains its own cache, reducing cache hit rates
  5. Server health ignorance: Round robin doesn't check if servers are healthy or overloaded

A common scenario:
Imagine one request is for a simple static page, while another requires complex database operations. Round robin treats both the same, potentially overloading some servers while others remain under-utilized.

Solutions include:

  • Weighted round robin (giving more powerful servers more requests)
  • Adding health checks
  • Implementing sticky sessions
  • Using more sophisticated algorithms like least connections or response time-based load balancing
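Weighted round robin, the first solution above, can be sketched by expanding each server into the rotation in proportion to its weight (production balancers such as NGINX use a smoother interleaving, but the net distribution is the same):

```javascript
// Each server appears in the rotation `weight` times, so stronger machines
// receive proportionally more requests.
function makeWeightedRoundRobin(servers) {
  const rotation = servers.flatMap((s) => Array(s.weight).fill(s.name));
  let i = 0;
  return () => rotation[i++ % rotation.length];
}

const next = makeWeightedRoundRobin([
  { name: 'big', weight: 3 },   // powerful server: 3 of every 4 requests
  { name: 'small', weight: 1 }, // weaker server: 1 of every 4 requests
]);
// Four requests: 'big', 'big', 'big', 'small', then the cycle repeats.
```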

7. What's the easiest way to defend against DDoS attacks?

The easiest and most effective way to defend against DDoS (Distributed Denial of Service) attacks is to use a cloud-based DDoS protection service.

Popular options include:

  • Cloudflare
  • AWS Shield
  • Akamai
  • Imperva
  • Fastly

These services work by:

  1. Providing a large network capacity that can absorb attack traffic
  2. Automatically detecting and filtering malicious traffic
  3. Only allowing legitimate requests to reach your actual servers
  4. Distributing the traffic across their global network

For smaller websites and applications, simply putting Cloudflare in front of your site can provide substantial DDoS protection with minimal configuration.

Other complementary measures include:

  • Rate limiting requests from single IP addresses
  • Configuring timeouts appropriately
  • Using content delivery networks (CDNs)
  • Implementing proper resource allocation limits

The key advantage of cloud-based solutions is that you don't need to handle the attack traffic directly - the protection service absorbs it before it reaches your infrastructure.
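Per-IP rate limiting, the first complementary measure above, is commonly implemented as a token bucket. A sketch (the capacity and refill rate are illustrative):

```javascript
// Per-IP token bucket: each IP holds up to `capacity` tokens, refilled at
// `refillPerSec`; a request is allowed only if a token is available.
function makeRateLimiter({ capacity = 10, refillPerSec = 1 } = {}) {
  const buckets = new Map();
  return function allow(ip, now = Date.now()) {
    let b = buckets.get(ip);
    if (!b) {
      b = { tokens: capacity, last: now };
      buckets.set(ip, b);
    }
    // Refill in proportion to the time elapsed since the last request.
    b.tokens = Math.min(capacity, b.tokens + ((now - b.last) / 1000) * refillPerSec);
    b.last = now;
    if (b.tokens < 1) return false; // over the limit: reject (e.g. HTTP 429)
    b.tokens -= 1;
    return true;
  };
}

const allow = makeRateLimiter({ capacity: 3, refillPerSec: 1 });
// The 4th immediate request from the same IP is rejected:
console.log([1, 2, 3, 4].map(() => allow('203.0.113.5', 0))); // [ true, true, true, false ]
```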

Software Architecture vs. System Design

Software architecture and system design are related concepts that many people confuse or use interchangeably, but they have distinct focuses and scopes.

Software Architecture:

  • Focus: Structure of a single software application or component
  • Scope: Internal organization of code, modules, and components
  • Concerns: Code structure, design patterns, maintainability, technical debt, code quality
  • Decisions: Programming paradigms, frameworks, modularization approach
  • Level: Generally more detailed and code-oriented
  • Examples: MVC, microservices, layered architecture, modular architecture, hexagonal architecture

System Design:

  • Focus: How multiple components work together to form a complete system
  • Scope: Entire ecosystems of services, applications, and infrastructure
  • Concerns: Scalability, reliability, availability, performance, data storage, security
  • Decisions: Service boundaries, communication protocols, infrastructure choices, data storage solutions
  • Level: Generally higher-level and more abstract
  • Examples: Distributed systems, cloud architectures, service-oriented architecture, API gateway patterns

An Analogy:
Think of it like building a house:

  • Software architecture is like designing the floor plan and structure of a single house - where the rooms go, how they connect, what materials to use.
  • System design is like planning an entire neighborhood - how houses connect to utilities, road layouts, community services, etc.

How They Intersect:
In modern development, these areas often overlap. For example:

  • A microservices architecture is both a software architecture pattern and requires system design considerations
  • Cloud-native applications need both strong software architecture and thoughtful system design

When You Need Each:

  • Focus on software architecture when working on code organization, internal structure, and ensuring the codebase is maintainable
  • Focus on system design when planning how different services will communicate, how data will flow through the system, and how the overall solution will scale

Web Communication Technologies Explained

WebSocket

What it is: A protocol that provides full-duplex (two-way) communication channels over a single TCP connection.

How it works:

  • Starts with an HTTP handshake, then upgrades to a persistent WebSocket connection
  • Both server and client can send messages at any time without needing to establish a new connection
  • Connection remains open until explicitly closed by either party

Best for:

  • Real-time applications like chat apps, live dashboards, collaborative editing
  • Cases where low latency and frequent updates between client and server are needed

Example code:

```javascript
// Client-side JavaScript
const socket = new WebSocket('ws://example.com/socket');

socket.onopen = () => {
  console.log('Connection established');
  socket.send('Hello Server!');
};

socket.onmessage = (event) => {
  console.log('Message from server:', event.data);
};
```
WebRTC (Web Real-Time Communication)

What it is: A collection of protocols and APIs that enable direct peer-to-peer communication between browsers.

How it works:

  • Allows audio, video, and data to be sent directly between browsers without requiring an intermediary server
  • Uses STUN/TURN servers for NAT traversal (getting through firewalls)
  • Includes sophisticated protocols for negotiating connections (using SDP)

Best for:

  • Video calling and conferencing
  • Peer-to-peer file sharing
  • Low-latency gaming
  • Any application where direct browser-to-browser communication is beneficial

Example code:

```javascript
// Very simplified example of establishing a data connection
const peerConnection = new RTCPeerConnection();
const dataChannel = peerConnection.createDataChannel("myChannel");

dataChannel.onmessage = (event) => {
  console.log("Message received:", event.data);
};

// Connection establishment would require signaling not shown here
```
SSE (Server-Sent Events)

What it is: A technology where a server can push updates to a client over an HTTP connection.

How it works:

  • Client establishes a persistent connection to the server
  • Server can send messages whenever it wants
  • Communication is one-way (server to client only)
  • Uses standard HTTP, not a custom protocol

Best for:

  • News feeds, stock tickers, notification systems
  • Any case where you need server-to-client updates but don't need to send data from client to server

Example code:

```javascript
// Client-side JavaScript
const eventSource = new EventSource('/events');

eventSource.onmessage = (event) => {
  console.log('New update:', event.data);
};

// Server-side (Node.js with Express example)
app.get('/events', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');

  const sendUpdate = () => {
    res.write(`data: ${new Date().toISOString()}\n\n`);
  };

  const intervalId = setInterval(sendUpdate, 1000);

  req.on('close', () => {
    clearInterval(intervalId);
  });
});
```
Long Polling

What it is: A technique where the client requests information from the server, and the server holds the request open until new data is available.

How it works:

  • Client makes an HTTP request to the server
  • Server doesn't respond immediately if no data is available
  • Server waits until it has new data or until a timeout occurs
  • Once the client receives a response, it immediately sends a new request

Best for:

  • Applications where real-time updates are needed but WebSockets aren't supported
  • Situations where updates are infrequent (to reduce the number of reconnections)

Example code:

```javascript
// Client-side JavaScript
function longPoll() {
  fetch('/updates')
    .then(response => response.json())
    .then(data => {
      console.log('Received update:', data);
      // Process the data...
      // Immediately request again
      longPoll();
    })
    .catch(error => {
      console.error('Error in long polling:', error);
      // Wait a bit before retrying after an error
      setTimeout(longPoll, 5000);
    });
}

// Start long polling
longPoll();
```

Webhooks

What it is: A way for one application to provide other applications with real-time information by making an HTTP request to a URL when certain events occur.

How it works:

  • Service A registers a URL with Service B
  • When something happens in Service B, it makes an HTTP request to Service A's URL
  • Service A processes the request and responds with a status code

Best for:

  • Integrating different services/APIs
  • Event notification systems
  • Automation workflows

Example code:

```javascript
// Setting up a webhook receiver with Express (Node.js)
app.post('/webhook', express.json(), (req, res) => {
  const event = req.body;
  console.log('Received webhook:', event);

  // Process the webhook data
  if (event.type === 'payment.succeeded') {
    // Update database, send confirmation email, etc.
  }

  // Acknowledge receipt
  res.status(200).send('Webhook received');
});
```

WAMP (Web Application Messaging Protocol)

What it is: An open standard WebSocket subprotocol that provides two messaging patterns: Remote Procedure Calls (RPC) and Publish & Subscribe.

How it works:

  • Uses WebSockets as the transport layer
  • Provides a structured way to do RPC calls between clients and servers
  • Enables publish/subscribe messaging where components can subscribe to topics
  • Supports advanced features like progressive results and call cancellation

Best for:

  • Distributed applications with many components that need to communicate
  • Applications that need both RPC and pub/sub patterns
  • Complex real-time applications

Example code:

```javascript
// Client-side with Autobahn.js library
const connection = new autobahn.Connection({
  url: 'ws://example.com/ws',
  realm: 'realm1'
});

connection.onopen = (session) => {
  // 1. Subscribe to a topic
  session.subscribe('com.example.update', (args) => {
    console.log('Received update:', args[0]);
  });

  // 2. Register a procedure others can call
  session.register('com.example.add', (args) => {
    return args[0] + args[1];
  });

  // 3. Publish to a topic
  session.publish('com.example.update', ['Hello, world!']);

  // 4. Call a remote procedure
  session.call('com.example.add', [2, 3]).then(
    (result) => console.log('2 + 3 =', result),
    (error) => console.error('Call failed:', error)
  );
};

connection.open();
```

Comparison Summary

| Technology   | Direction              | Persistent Connection | Best For                                           |
|--------------|------------------------|-----------------------|----------------------------------------------------|
| WebSocket    | Bidirectional          | Yes                   | Real-time apps with frequent two-way communication |
| WebRTC       | Peer-to-peer           | Yes                   | Direct browser-to-browser media and data exchange  |
| SSE          | Server → Client        | Yes                   | One-way server updates                             |
| Long Polling | Server → Client        | No (simulated)        | Simple real-time updates with broad compatibility  |
| Webhooks     | Service → Service      | No                    | Service-to-service notifications                   |
| WAMP         | Bidirectional + PubSub | Yes                   | Complex distributed applications                   |
