iamskyy666/golang-concurrency

Curated with 💖 by Soumadip "Skyy" Banerjee 👨🏻‍💻

⌚ (~68 min read)

1️⃣. CONCURRENCY vs PARALLELISM 💻

Excellent topic 🌟 — understanding concurrency vs parallelism is fundamental to mastering Go (and modern backend systems in general). Let’s break it down conceptually, practically, and visually, with programming analogies and examples.


🧠 The Core Difference

| Concept | Concurrency | Parallelism |
|---|---|---|
| Definition | Dealing with many tasks at once (managing multiple tasks). | Doing many tasks at exactly the same time (executing multiple tasks). |
| Goal | Structure programs to handle multiple tasks efficiently. | Speed up execution using multiple processors/cores. |
| Focus | Task switching and coordination. | Task distribution and simultaneous execution. |
| Requires multiple CPUs? | ❌ No | ✅ Yes |
| Example analogy | A single chef cooking multiple dishes by switching between them quickly. | Multiple chefs cooking multiple dishes at the same time. |

💡 In Programming Terms

🌀 Concurrency

  • Means that multiple tasks make progress during overlapping periods.
  • Even on a single CPU, the program switches rapidly between tasks (via scheduling or cooperative multitasking).
  • Key idea: structuring a program to handle multiple things logically at the same time.

⚡ Parallelism

  • Means truly simultaneous execution — multiple tasks run at the same instant.
  • Requires multiple cores or processors.
  • Key idea: actual simultaneous computation to improve speed.

🧩 Example (Conceptual)

Let’s say we have three tasks: T1 = load data, T2 = process data, T3 = save results.

🌀 Concurrent program:

Time → |T1 |T2 |T1 |T3 |T2 |T3 | 

→ One CPU switching between tasks fast enough to appear simultaneous.

⚡ Parallel program:

```
CPU 1: |T1 |T3 |
CPU 2: |T2 |
```

→ Multiple CPUs executing at the same time.


🧰 Example in Go

Concurrency (using goroutines)

```go
package main

import (
	"fmt"
	"time"
)

func task(name string) {
	for i := 1; i <= 3; i++ {
		fmt.Println(name, "step", i)
		time.Sleep(time.Millisecond * 500)
	}
}

func main() {
	go task("Task A")
	go task("Task B")
	time.Sleep(time.Second * 3)
	fmt.Println("✅ Done")
}
```

🌀 Here, goroutines run concurrently. The Go scheduler switches between them efficiently, even on one CPU core.


Parallelism (forcing multiple cores)

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func work(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	// runtime.NumCPU() reports how many CPUs the machine has,
	// not which CPU this worker runs on.
	fmt.Printf("Worker %d running (machine has %d CPUs)\n", id, runtime.NumCPU())
}

func main() {
	runtime.GOMAXPROCS(4) // allow up to 4 cores to execute Go code simultaneously
	var wg sync.WaitGroup
	for i := 1; i <= 4; i++ {
		wg.Add(1)
		go work(i, &wg)
	}
	wg.Wait()
	fmt.Println("✅ All done in parallel")
}
```

⚡ Now Go executes tasks in parallel, using multiple OS threads across CPU cores.


🧠 Key Takeaways

| Aspect | Concurrency | Parallelism |
|---|---|---|
| Concept | Structure / design | Execution |
| Needs multiple cores? | No | Yes |
| Main benefit | Responsiveness, scalability | Performance, speed |
| In Go | Goroutines, channels | Goroutines + GOMAXPROCS / multi-core scheduling |

🔭 Visualization

```
Concurrency:                  Parallelism:
|---A---|                     CPU1: |---A---|
      |---B---|               CPU2: |---B---|
(tasks interleaved)           (tasks truly simultaneous)
```

🧩 Real-world analogy

| Situation | Concurrency | Parallelism |
|---|---|---|
| Cooking dinner | One cook juggling multiple dishes | Several cooks each handling a dish |
| Handling HTTP requests | Server interleaving many requests | Cluster of servers each processing one request |
| Downloading files | One downloader pausing/resuming | Multiple downloaders each downloading a file |

In summary:

🧩 Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.


```go
package main

import (
	"fmt"
	"time"
)

func printNums() {
	for num := range 5 { // range over an integer (Go 1.22+)
		fmt.Println(time.Now())
		fmt.Println(num)
		time.Sleep(500 * time.Millisecond)
	}
}

func printLetters() {
	for _, letter := range "ABCDE" {
		fmt.Println(time.Now())
		fmt.Println(string(letter))
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	go printNums()
	go printLetters()
	time.Sleep(3 * time.Second)
}
```

💡 Output:

```
$ go run .
2025-10-13 16:57:28.5136966 +0530 IST m=+0.000603201
A
2025-10-13 16:57:28.5136966 +0530 IST m=+0.000603201
0
2025-10-13 16:57:29.0151129 +0530 IST m=+0.502019501
B
2025-10-13 16:57:29.0156584 +0530 IST m=+0.502565001
1
2025-10-13 16:57:29.5155282 +0530 IST m=+1.002434801
C
2025-10-13 16:57:29.5162236 +0530 IST m=+1.003130201
2
2025-10-13 16:57:30.01689 +0530 IST m=+1.503796601
3
2025-10-13 16:57:30.01689 +0530 IST m=+1.503796601
D
2025-10-13 16:57:30.5179877 +0530 IST m=+2.004894301
4
2025-10-13 16:57:30.5186667 +0530 IST m=+2.005573301
E
```

CONCURRENCY vs PARALLELISM — our code explained 💻


🧩 The Code

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// concurrency vs parallelism
func heavyTask(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("🟡 Task %d is starting..\n", id)
	// heavy ops. simulation:
	for range 100_000_000 {
	}
	fmt.Println(time.Now())
	fmt.Printf("✅ Task %d is finished!\n", id)
}

func main() {
	numOfThreads := 4
	runtime.GOMAXPROCS(numOfThreads)
	var wg sync.WaitGroup
	for i := range numOfThreads {
		wg.Add(1)
		heavyTask(i, &wg)
	}
	wg.Wait()
}
```

🧠 Step-by-step explanation

Let’s go through this program carefully, exactly as it is.


1️⃣ Package and imports

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)
```
  • fmt → for printing output to the console
  • runtime → to control how many OS threads can run simultaneously (through GOMAXPROCS)
  • sync → for synchronization (WaitGroup)
  • time → for printing timestamps

So far, standard setup — no concurrency yet.


2️⃣ heavyTask function

```go
func heavyTask(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("🟡 Task %d is starting..\n", id)
	for range 100_000_000 {
	}
	fmt.Println(time.Now())
	fmt.Printf("✅ Task %d is finished!\n", id)
}
```

Breakdown:

  • id int → the task number (0, 1, 2, 3).
  • wg *sync.WaitGroup → used to signal when a task finishes.

defer wg.Done() → This means when the function ends, it will decrement the WaitGroup counter by one.

for range 100_000_000 → This simulates heavy work with an empty busy loop. One caveat: ranging over an integer is only valid since Go 1.22. On older toolchains this line is a compile error, and we'd have to write the classic form instead:

```go
for i := 0; i < 100_000_000; i++ { }
```

Either way, the loop spins 100,000,000 times doing nothing, burning CPU to stand in for real work.


3️⃣ main() function

```go
func main() {
	numOfThreads := 4
	runtime.GOMAXPROCS(numOfThreads)
	var wg sync.WaitGroup
	for i := range numOfThreads {
		wg.Add(1)
		heavyTask(i, &wg)
	}
	wg.Wait()
}
```

Let’s analyze each line:

numOfThreads := 4 → We've defined a variable holding the number of tasks we want to run.

runtime.GOMAXPROCS(numOfThreads) → This sets the maximum number of CPU threads (OS-level threads) that can execute Go code simultaneously. In theory, this enables up to 4 parallel threads — but only if goroutines are actually created.

Right now, we have no goroutines, so this setting does nothing noticeable.


var wg sync.WaitGroup → Creates a WaitGroup instance to wait for all tasks to finish.


for i := range numOfThreads → As above, ranging over an integer requires Go 1.22+. On older versions we'd write:

```go
for i := 0; i < numOfThreads; i++ { }
```

On a modern toolchain both forms behave identically, iterating i from 0 to numOfThreads−1.


heavyTask(i, &wg) → Here’s the critical part: we are calling the function normally, not starting a goroutine. That means:

  • Task 0 runs fully → finishes → returns.
  • Then Task 1 starts → finishes → returns.
  • Then Task 2 → finishes → returns.
  • Then Task 3 → finishes → returns.

So even though our variable name is numOfThreads, all four tasks are running sequentially, not concurrently or in parallel.

✅ Each one prints start → timestamp → finish before the next starts.


wg.Wait() → Waits for all tasks to complete. But since every task runs synchronously, by the time we reach here, all tasks are already finished.

So wg.Wait() returns instantly.


🧩 Why the output looks like this

```
🟡 Task 0 is starting..
2025-10-13 17:21:19.818228 +0530 IST m=+0.054805601
✅ Task 0 is finished!
🟡 Task 1 is starting..
2025-10-13 17:21:19.8776103 +0530 IST m=+0.114187901
✅ Task 1 is finished!
🟡 Task 2 is starting..
2025-10-13 17:21:19.9311326 +0530 IST m=+0.167710201
✅ Task 2 is finished!
🟡 Task 3 is starting..
2025-10-13 17:21:19.9783995 +0530 IST m=+0.214977101
✅ Task 3 is finished!
```

This confirms that each task starts only after the previous one finishes. That’s why:

  • The timestamps are only tens of milliseconds apart.
  • We see no overlap between start and finish messages.
  • Even though we set GOMAXPROCS(4), we never told Go to run goroutines concurrently.

So — this program is purely sequential. Not concurrent, not parallel.


🧠 What this teaches us conceptually

| Concept | In this code |
|---|---|
| Concurrency | ❌ Not happening. Tasks are executed one by one. |
| Parallelism | ❌ Not happening. Even though 4 cores are available, only one is used. |
| Synchronization | WaitGroup is technically used, but unnecessary because all tasks are blocking. |
| Task switching | ❌ No switching; the main thread runs everything. |
| CPU usage | Roughly 1 core used (100%); others idle. |

🧠 Why Go behaves this way

  • In Go, functions run synchronously unless we prefix them with go. The go keyword tells Go’s runtime scheduler to create a goroutine and run it concurrently.
  • Without go, heavyTask executes in the same goroutine as main.
  • GOMAXPROCS(4) only matters when there are multiple runnable goroutines. It tells the runtime how many can execute in parallel on OS threads. If we have just one goroutine (the main one), there’s nothing to parallelize.

🧩 Summary

| Concept | Explanation |
|---|---|
| What the code does | Runs four heavy tasks one after another. |
| Why it's sequential | Because we never used go before heavyTask(...). |
| What GOMAXPROCS does here | Nothing visible — no goroutines to schedule across threads. |
| Why output timestamps increase | Each task runs fully before the next begins. |
| Is there concurrency? | ❌ No. Only one goroutine. |
| Is there parallelism? | ❌ No. Only one thread executing. |
| Is it valid Go code? | Yes, but only on Go 1.22+, where for range over an integer is legal; older toolchains reject it. |

🧠 In simple words

Even though we wanted to “run 4 heavy tasks in parallel,” our code actually runs:

🟡 Task 0 → ✅ Task 0 → 🟡 Task 1 → ✅ Task 1 → 🟡 Task 2 → ✅ Task 2 → 🟡 Task 3 → ✅ Task 3

one after another, synchronously.

No real concurrency. No parallel execution. Just sequential function calls.
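
For contrast, here is a minimal corrected sketch (using the classic loop form so it compiles on any Go version): adding the single keyword go in front of heavyTask turns the same program concurrent, and with GOMAXPROCS(4) the four busy loops can actually run in parallel.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func heavyTask(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("🟡 Task %d is starting..\n", id)
	for i := 0; i < 100_000_000; i++ { // busy loop simulating CPU-bound work
	}
	fmt.Println(time.Now())
	fmt.Printf("✅ Task %d is finished!\n", id)
}

func main() {
	numOfThreads := 4
	runtime.GOMAXPROCS(numOfThreads)
	var wg sync.WaitGroup
	for i := 0; i < numOfThreads; i++ {
		wg.Add(1)
		go heavyTask(i, &wg) // "go" is the one-word fix: tasks now overlap
	}
	wg.Wait()
}
```

With the go keyword, all four start messages print up front and the finish timestamps land close together, instead of tens of milliseconds apart.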


2️⃣. RACE-CONDITIONS & DEADLOCKS

Both Race Conditions and Deadlocks are classic concurrency hazards — they arise when multiple goroutines (or threads) interact with shared resources like variables, memory, or locks without proper synchronization.


🧩 1. RACE CONDITION

📖 Definition

A race condition occurs when:

Two or more goroutines access the same shared resource (like a variable or memory) concurrently, and at least one of them writes to it, leading to unpredictable or incorrect results.

Essentially, the output depends on the timing of the goroutines — which one “wins the race” to execute first.


🧠 Conceptual Example (Without Code)

Imagine two people trying to withdraw ₹500 from the same bank account (₹800 balance) at the same time.

  • Person A checks → balance = ₹800
  • Person B checks → balance = ₹800
  • Both withdraw ₹500 simultaneously
  • Both compute ₹800 − ₹500 = ₹300 and write ₹300 back, so the account ends at ₹300 even though ₹1,000 was withdrawn from an ₹800 balance (the second withdrawal should have been rejected)

This happens because both operations read before the other writes → a classic race.


💻 Golang Example

```go
package main

import (
	"fmt"
	"sync"
)

var counter = 0

func increment(wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < 1000; i++ {
		counter++ // ❌ not synchronized
	}
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go increment(&wg)
	go increment(&wg)
	wg.Wait()
	fmt.Println("Final Counter:", counter)
}
```

🧾 Expected Output

We might expect:

Final Counter: 2000 

But actual results vary:

Final Counter: 1764 

or

Final Counter: 1908 

Because both goroutines modify the shared variable at the same time → lost updates.


Fixing Race Conditions

Use mutexes (locks) or channels to control access.

```go
var counter = 0
var mu sync.Mutex

func increment(wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < 1000; i++ {
		mu.Lock()
		counter++
		mu.Unlock()
	}
}
```

Now, only one goroutine can update counter at a time → deterministic, safe.
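
The fix above uses a mutex; the channel route mentioned earlier works too. Here is a minimal sketch (the names updates and done are ours, for illustration): a single owner goroutine is the only one that ever touches the counter, and every other goroutine sends increments over a channel.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	updates := make(chan int)
	done := make(chan struct{})

	counter := 0
	go func() { // the only goroutine that ever touches counter
		for delta := range updates {
			counter += delta
		}
		close(done)
	}()

	var wg sync.WaitGroup
	for g := 0; g < 2; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				updates <- 1 // send increments instead of mutating shared state
			}
		}()
	}
	wg.Wait()
	close(updates)                          // no more increments coming
	<-done                                  // owner has folded in every update
	fmt.Println("Final Counter:", counter) // always 2000
}
```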


🧪 Detecting Race Conditions

Go provides a built-in race detector:

$ go run -race main.go

This will show warnings like:

WARNING: DATA RACE 

whenever unsynchronized access occurs.


⚠️ 2. DEADLOCK

📖 Definition

A deadlock occurs when:

Two or more goroutines are waiting indefinitely for each other to release a resource or lock, so none of them ever proceed.

Essentially, it’s like mutual blocking — everyone’s waiting, nobody moves.


🧠 Conceptual Example

Imagine two people trying to pick up two spoons to eat:

  • Person A picks Spoon 1 and waits for Spoon 2
  • Person B picks Spoon 2 and waits for Spoon 1
  • Both are stuck forever waiting for the other → deadlock.

💻 Golang Example

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var mu1, mu2 sync.Mutex
	var wg sync.WaitGroup
	wg.Add(2)

	go func() {
		defer wg.Done()
		mu1.Lock()
		fmt.Println("Goroutine 1 locked mu1")
		mu2.Lock()
		fmt.Println("Goroutine 1 locked mu2")
		mu2.Unlock()
		mu1.Unlock()
	}()

	go func() {
		defer wg.Done()
		mu2.Lock()
		fmt.Println("Goroutine 2 locked mu2")
		mu1.Lock()
		fmt.Println("Goroutine 2 locked mu1")
		mu1.Unlock()
		mu2.Unlock()
	}()

	wg.Wait()
}
```

🧾 Output

```
Goroutine 1 locked mu1
Goroutine 2 locked mu2
```

…and then it hangs forever (no further output). Because:

  • Goroutine 1 → waits for mu2 (held by Goroutine 2)
  • Goroutine 2 → waits for mu1 (held by Goroutine 1) Neither can proceed → deadlock.

Preventing Deadlocks

  1. Lock ordering → Always acquire locks in a fixed order (e.g., always mu1 then mu2).
  2. TryLock pattern (non-blocking locks).
  3. Use channels → Go channels can help avoid shared-state locking issues.

Example (fixed ordering):

```go
mu1.Lock()
mu2.Lock()
// critical section
mu2.Unlock()
mu1.Unlock()
```
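
For completeness, here is a hedged sketch of the TryLock pattern from item 2 (sync.Mutex gained a non-blocking TryLock() in Go 1.18; the lockBoth helper is our own illustration): if the second lock isn't free, release the first and retry, so a circular wait can never form.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// lockBoth acquires two mutexes without risking a circular wait:
// if the second is busy, it releases the first and retries.
func lockBoth(first, second *sync.Mutex) {
	for {
		first.Lock()
		if second.TryLock() { // non-blocking attempt (Go 1.18+)
			return // holding both
		}
		first.Unlock() // back off instead of waiting
		time.Sleep(time.Millisecond)
	}
}

func main() {
	var mu1, mu2 sync.Mutex
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		defer wg.Done()
		lockBoth(&mu1, &mu2) // prefers mu1 → mu2
		fmt.Println("goroutine 1 in critical section")
		mu2.Unlock()
		mu1.Unlock()
	}()
	go func() {
		defer wg.Done()
		lockBoth(&mu2, &mu1) // opposite order, yet no deadlock
		fmt.Println("goroutine 2 in critical section")
		mu1.Unlock()
		mu2.Unlock()
	}()
	wg.Wait()
}
```

The backoff avoids deadlock but can briefly livelock under contention; consistent lock ordering remains the simpler default.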

🧩 Quick Comparison Table

| Concept | Description | Result | Prevention |
|---|---|---|---|
| Race condition | Multiple goroutines access shared data simultaneously | Unpredictable or incorrect results | Synchronize using Mutex or channels |
| Deadlock | Goroutines wait forever for each other's locks | Program freezes | Enforce lock ordering or use channels |

🧠 Summary:

  • A race condition = data conflict due to unsynchronized access.
  • A deadlock = circular waiting where no one proceeds.
  • Both arise from poor concurrency control.
  • In Go, mutexes, channels, and lock ordering are key tools to prevent them.

Let’s go through the following program line by line, then understand the conceptual flow, what could go wrong, and why our output looks like that.


🧩 1️⃣ CODE STRUCTURE OVERVIEW

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// deadlocks - careful lock-ordering in concurrent programming
// Go does not have a built-in deadlock detection tool. Solution - profiling, debugging tools, etc.
// Use the runtime stack to understand the code.
func main() {
	var mu1, mu2 sync.Mutex

	go func() {
		mu1.Lock()
		fmt.Println("🔐 Goroutine 1 locked mu1.")
		time.Sleep(time.Second)
		mu2.Lock() // blocking
		fmt.Println("🔐 Goroutine 1 locked mu2.")
		mu1.Unlock()
		mu2.Unlock()
		fmt.Println("☑️ Goroutine 1 finished.")
	}()

	go func() {
		//mu2.Lock()
		mu1.Lock()
		fmt.Println("🔏 Goroutine 2 locked mu2.")
		time.Sleep(time.Second)
		// mu1.Lock()
		mu2.Lock() // blocking
		fmt.Println("🔏 Goroutine 2 locked mu1.")
		mu2.Unlock()
		mu1.Unlock()
		fmt.Println("☑️ Goroutine 2 finished.")
	}()

	// sleep the main thread
	time.Sleep(4 * time.Second)
	fmt.Println("✅ Main func() completed.")
	// select{}
}
```

💡 Output:

```
$ go run .
🔏 Goroutine 2 locked mu2.
🔏 Goroutine 2 locked mu1.
☑️ Goroutine 2 finished.
🔐 Goroutine 1 locked mu1.
🔐 Goroutine 1 locked mu2.
☑️ Goroutine 1 finished.
✅ Main func() completed.
```

We have two shared locks (mutexes):

var mu1, mu2 sync.Mutex

and two goroutines, each locking them in potentially different orders — which is how deadlocks can occur.


🧠 2️⃣ WHAT IS A DEADLOCK?

A deadlock happens when:

Two or more goroutines each hold a lock, and are waiting for each other’s lock to be released — so neither can proceed.

🔁 This is a circular wait:

  • Goroutine 1 → holds mu1, waits for mu2
  • Goroutine 2 → holds mu2, waits for mu1
  • ❌ None can progress — program hangs forever.

⚙️ 3️⃣ WHAT OUR CODE DOES

Let’s look carefully at both goroutines.


🧵 Goroutine 1

```go
go func() {
	mu1.Lock()
	fmt.Println("🔐 Goroutine 1 locked mu1.")
	time.Sleep(time.Second)
	mu2.Lock() // blocking if mu2 is already held by someone
	fmt.Println("🔐 Goroutine 1 locked mu2.")
	mu1.Unlock()
	mu2.Unlock()
	fmt.Println("☑️ Goroutine 1 finished.")
}()
```

So:

  • Locks mu1
  • Waits 1 second
  • Then tries to lock mu2
  • Then unlocks both and finishes

🧵 Goroutine 2

```go
go func() {
	//mu2.Lock()
	mu1.Lock()
	fmt.Println("🔏 Goroutine 2 locked mu2.")
	time.Sleep(time.Second)
	// mu1.Lock()
	mu2.Lock() // blocking
	fmt.Println("🔏 Goroutine 2 locked mu1.")
	mu2.Unlock()
	mu1.Unlock()
	fmt.Println("☑️ Goroutine 2 finished.")
}()
```

💡 Notice the commented lines:

```go
//mu2.Lock()
// mu1.Lock()
```

That means both goroutines actually lock in the same order:

first mu1, then mu2.

Even though the print message says "locked mu2", it’s actually locking mu1 first!


📈 4️⃣ EXECUTION FLOW (WHAT REALLY HAPPENS)

Let’s imagine both goroutines start roughly together.


🧩 Step-by-step (possible timeline):

| Time | Goroutine 1 | Goroutine 2 | Event |
|---|---|---|---|
| t=0s | Locks mu1 | Waiting for mu1 | 1 got mu1, 2 is blocked |
| t=1s | Still holds mu1, tries to lock mu2 | Still waiting for mu1 | 1 gets mu2 (since no one holds it) |
| t=1.1s | Unlocks both | | 1 finishes |
| t=1.2s | | Locks mu1 | 2 gets mu1 |
| t=2.2s | | Locks mu2 | 2 gets mu2 |
| t=2.3s | | Unlocks both | 2 finishes |
| t=4s | | | main exits |

✅ Both goroutines finish, no deadlock.


⚠️ 5️⃣ WHY THE PRINTS LOOK "WRONG"

🔏 Goroutine 2 locked mu2. 

This line is misleading because the print message is incorrect — it should say:

fmt.Println("🔏 Goroutine 2 locked mu1.")

Since we actually called mu1.Lock() first, not mu2.Lock().

So, the correct output should have been:

```
🔏 Goroutine 2 locked mu1.
🔐 Goroutine 1 locked mu1.
```

etc.

That’s why the printed sequence looks reversed from the actual locks.


💣 6️⃣ HOW TO CAUSE A DEADLOCK

If we uncomment those two lines to reverse the order of locking for goroutine 2:

```go
go func() {
	mu2.Lock()
	fmt.Println("🔏 Goroutine 2 locked mu2.")
	time.Sleep(time.Second)
	mu1.Lock() // now reversed
	fmt.Println("🔏 Goroutine 2 locked mu1.")
	mu2.Unlock()
	mu1.Unlock()
}()
```

Now we have:

  • Goroutine 1 locks mu1 → mu2
  • Goroutine 2 locks mu2 → mu1

This is classic deadlock potential:

  1. Goroutine 1 holds mu1 and waits for mu2
  2. Goroutine 2 holds mu2 and waits for mu1
  3. 🔒 Both are waiting forever — program hangs indefinitely

🧰 7️⃣ HOW TO FIX OR AVOID DEADLOCKS

  1. Always lock in a consistent order → If one goroutine locks mu1 then mu2, all others should do the same.

  2. Use a TryLock() pattern: sync.Mutex gained a non-blocking TryLock() in Go 1.18 (sketched earlier in this section).

  3. Use defer mu.Unlock() immediately after each lock to ensure cleanup.

  4. Use higher-level concurrency tools:

    • sync.RWMutex
    • sync.WaitGroup
    • channels instead of shared locks
  5. Detect and debug deadlocks:

    • Run with go run -race to catch data races.
    • For hangs, dump goroutine states with runtime.Stack() (a sketch follows) or by sending SIGQUIT to the running process.
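
A minimal sketch of that last point (the blocked goroutine here is contrived for the demo): runtime.Stack with all=true dumps every goroutine's stack, which is usually enough to spot who is stuck on which lock.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	var mu sync.Mutex
	mu.Lock()
	go func() {
		mu.Lock() // blocks forever: mu is never released
	}()

	time.Sleep(100 * time.Millisecond) // let the goroutine block

	buf := make([]byte, 1<<20)
	n := runtime.Stack(buf, true) // true = include every goroutine
	fmt.Printf("%s\n", buf[:n])   // the stuck goroutine shows sync.(*Mutex).Lock in its stack
}
```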

✅ 8️⃣ SUMMARY

| Concept | Meaning |
|---|---|
| sync.Mutex | Protects shared data from concurrent access |
| Lock order | Critical to prevent deadlocks |
| Deadlock | Goroutines hold locks in conflicting order and wait forever |
| Solution | Always maintain consistent lock ordering |

🧭 TL;DR

  • Our program does not deadlock because both goroutines lock mu1 → mu2 in the same order.
  • The misleading print statements make it look like they lock differently.
  • If we uncomment those lines and make one goroutine lock mu2 → mu1, we’ll trigger an actual deadlock — a key lesson in Go concurrency.

3️⃣. RWMutex 🌱

Excellent — understanding sync.RWMutex is a major leap toward mastering Go’s concurrency. It’s one of Go’s most efficient synchronization primitives, built on top of sync.Mutex — but with fine-grained control over readers and writers 🔒📖✍️.


🧠 1️⃣ The Problem with Regular Mutex

A sync.Mutex (mutual exclusion lock) allows only one goroutine to access a resource at a time — no matter whether it’s reading or writing.

That’s fine for safety… but inefficient when many goroutines only need to read shared data (since reads don’t modify anything).

Example: Shared cache map

```go
var mu sync.Mutex
var data = make(map[string]string)

func read(key string) string {
	mu.Lock() // ❌ blocks even if it's just reading
	defer mu.Unlock()
	return data[key]
}
```

Even if 100 goroutines just want to read, they all serialize — one at a time. No concurrency benefit.


⚙️ 2️⃣ The Solution: sync.RWMutex

Go gives us Read-Write Mutex, short for sync.RWMutex.

var rw sync.RWMutex

It has two kinds of locks:

| Lock type | Method | Meaning |
|---|---|---|
| 🔒 Write lock | rw.Lock() | Exclusive — only one writer at a time, no readers allowed |
| 📖 Read lock | rw.RLock() | Shared — multiple readers allowed concurrently |

🔄 3️⃣ Locking Rules

| Scenario | Allowed? | Description |
|---|---|---|
| Multiple readers (RLock) | ✅ Yes | Many goroutines can read at the same time |
| Reader + writer | ❌ No | Writer blocks until all readers finish |
| Multiple writers (Lock) | ❌ No | Only one writer at a time |
| Writer + reader | ❌ No | Readers block until the writer is done |

So, RWMutex favors concurrency for reads, while keeping safety for writes.


🧩 4️⃣ Example: Shared Resource Access

Let’s see a simple example 👇

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	rw   sync.RWMutex
	data = 0
)

func read(id int) {
	rw.RLock() // multiple readers can acquire this
	fmt.Printf("📖 Reader %d: reading data = %d\n", id, data)
	time.Sleep(500 * time.Millisecond)
	rw.RUnlock()
}

func write(id int, val int) {
	rw.Lock() // only one writer at a time
	fmt.Printf("✍️ Writer %d: writing data = %d\n", id, val)
	data = val
	time.Sleep(1 * time.Second)
	rw.Unlock()
}

func main() {
	for i := 1; i <= 3; i++ {
		go read(i)
	}
	go write(1, 42)
	for i := 4; i <= 6; i++ {
		go read(i)
	}
	time.Sleep(3 * time.Second)
}
```

📈 5️⃣ OUTPUT (Possible)

```
📖 Reader 1: reading data = 0
📖 Reader 2: reading data = 0
📖 Reader 3: reading data = 0
✍️ Writer 1: writing data = 42
📖 Reader 4: reading data = 42
📖 Reader 5: reading data = 42
📖 Reader 6: reading data = 42
```

🔍 6️⃣ What Happened

  1. Readers 1–3 acquired the read lock concurrently (RLock()).
  2. The writer (Lock()) had to wait until all readers released RUnlock().
  3. Writer updated data.
  4. Readers 4–6 started reading after the writer released the write lock.

Efficient: All readers ran in parallel. ✅ Safe: Writer still had exclusive access.


⚔️ 7️⃣ RWMutex vs Mutex — Comparison

| Feature | sync.Mutex | sync.RWMutex |
|---|---|---|
| Allows concurrent readers | ❌ No | ✅ Yes |
| Allows multiple writers | ❌ No | ❌ No |
| Reader–writer blocking | N/A | Readers block writers and vice versa |
| Ideal for | Mostly writes | Mostly reads |
| API | Lock() / Unlock() | Lock() / Unlock() + RLock() / RUnlock() |

So:

  • If most goroutines only read, use RWMutex.
  • If most goroutines write, plain Mutex is simpler (less overhead).

⚙️ 8️⃣ Internal Behavior (Under the Hood)

RWMutex internally maintains two counters and one mutex:

  • readerCount → number of readers currently holding the read lock
  • readerWait → number of readers a writer is waiting on
  • mutex → protects these counters and ensures write exclusivity

Algorithm sketch:

  • RLock() increments readerCount

    • If no writer is waiting → proceed
    • If writer is pending → block until writer finishes
  • Lock():

    • Blocks new readers
    • Waits for readerCount == 0
    • Then acquires exclusive control

So a writer waits for all readers to finish, and new readers wait for writers.
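
For reference, a sketch of the actual struct, paraphrased from the Go standard library source (src/sync/rwmutex.go; field names are from the source, exact types vary across Go versions):

```go
// Paraphrased from the Go 1.20-era standard library; details vary by version.
type RWMutex struct {
	w           Mutex        // held by a writer to exclude other writers
	writerSem   uint32       // writers wait here for active readers to finish
	readerSem   uint32       // readers wait here for a pending writer to finish
	readerCount atomic.Int32 // number of readers; negative while a writer is pending
	readerWait  atomic.Int32 // readers a departing writer must still wait for
}
```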


🧠 9️⃣ Common Mistakes

  1. ❌ Forgetting to call RUnlock() for every RLock() → causes deadlocks because writer never gets a chance.

  2. ❌ Mixing up Lock() and RLock() → easy to forget which one to use, especially in nested functions.

  3. ❌ Using RWMutex when reads are rare → incurs unnecessary overhead (since RWMutex has more bookkeeping).

  4. ❌ Unlocking without locking first → panics: sync: Unlock of unlocked RWMutex


🔐 🔟 Practical Use Cases

✅ Ideal for:

  • In-memory caches (frequent reads, occasional writes)
  • Configuration data shared by many goroutines
  • Real-time dashboards (many consumers, few updates)
  • Rate-limiters or statistics counters

💡 1️⃣1️⃣ BONUS: RWMutex + Map Example

```go
type SafeMap struct {
	m  map[string]int
	mu sync.RWMutex
}

func (s *SafeMap) Get(key string) (int, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	val, ok := s.m[key]
	return val, ok
}

func (s *SafeMap) Set(key string, val int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[key] = val
}
```
  • Multiple goroutines can call Get() concurrently.
  • Only one goroutine can Set() at a time.
  • Readers don’t block each other, but writers block everyone.
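
A quick usage sketch for the SafeMap above (it assumes the type as defined, plus fmt and sync imports; note the map must be initialized before use):

```go
func main() {
	s := &SafeMap{m: make(map[string]int)} // a nil map would panic on Set
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			s.Set("hits", 1) // writers serialize on mu.Lock()
		}()
	}
	wg.Wait()
	if v, ok := s.Get("hits"); ok { // readers share mu.RLock()
		fmt.Println("hits =", v)
	}
}
```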

✅ 1️⃣2️⃣ Summary Table

| Concept | Description |
|---|---|
| RLock() / RUnlock() | Shared read access for multiple goroutines |
| Lock() / Unlock() | Exclusive write access |
| Writer waiting behavior | Waits for all readers to release |
| Reader waiting behavior | Waits if a writer is active |
| When to use | Many reads, few writes |
| Benefit | Greatly improves read-heavy concurrency performance |

🧠 OVERVIEW

The following program demonstrates how a sync.RWMutex (Read–Write Mutex) allows:

  • Multiple goroutines to read the same data at once
  • But only one goroutine to write at a time, blocking all others 🔒

💻 CODE RECAP

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// RWMutex
// global vars
var (
	rwmu    sync.RWMutex
	counter int
)

func readCounter(wg *sync.WaitGroup) {
	defer wg.Done()
	rwmu.RLock()
	fmt.Println("Read Counter:", counter)
	rwmu.RUnlock()
}

func writeCounter(wg *sync.WaitGroup, val int) {
	defer wg.Done()
	rwmu.Lock()
	counter = val
	fmt.Printf("Written value %d for counter:", val)
	rwmu.Unlock()
}

func main() {
	var wg sync.WaitGroup
	for range 5 {
		wg.Add(1)
		go readCounter(&wg)
	}
	wg.Add(1)
	time.Sleep(time.Second) // wait..
	go writeCounter(&wg, 18)
	wg.Wait()
}
```

💡 Output:

```
$ go run .
Read Counter: 0
Read Counter: 0
Read Counter: 0
Read Counter: 0
Read Counter: 0
Written value 18 for counter:
```

Here, we have:

  • 5 goroutines reading a shared variable counter
  • 1 goroutine writing a new value into counter

🧩 GLOBAL VARIABLES

```go
var (
	rwmu    sync.RWMutex
	counter int
)
```
  • rwmu → protects the shared variable counter.
  • counter → shared data initially 0.

RWMutex provides two locking modes:

  • RLock()shared lock (multiple readers can hold it simultaneously).
  • Lock()exclusive lock (only one writer can hold it, blocks all readers).

📖 READ FUNCTION

```go
func readCounter(wg *sync.WaitGroup) {
	defer wg.Done()
	rwmu.RLock()
	fmt.Println("Read Counter:", counter)
	rwmu.RUnlock()
}
```

Let’s break it down:

  1. wg.Done() → signals completion of the goroutine to the main thread when it finishes.

  2. rwmu.RLock() → acquires the read lock.

    • Multiple goroutines can hold this lock concurrently.
    • It does not block other readers.
    • But if a writer has taken a Lock(), readers must wait.
  3. Reads the shared variable counter.

  4. RUnlock() → releases the read lock.

🧠 Key idea: Since all five goroutines are only reading, they’ll all run in parallel without waiting for one another.


✍️ WRITE FUNCTION

```go
func writeCounter(wg *sync.WaitGroup, val int) {
	defer wg.Done()
	rwmu.Lock()
	counter = val
	fmt.Printf("Written value %d for counter:", val)
	rwmu.Unlock()
}
```

Breakdown:

  1. wg.Done() — notifies completion.

  2. rwmu.Lock() — acquires exclusive access.

    • Blocks until no readers are holding RLock().
    • Prevents new readers from entering until unlocked.
  3. Updates the shared variable counter with a new value val.

  4. Prints the message.

  5. rwmu.Unlock() — releases the lock, allowing other goroutines to proceed.

🧠 Key idea: Only one goroutine can hold Lock() at a time — no reader or writer can interfere during this.


🏃‍♂️ MAIN FUNCTION FLOW

```go
func main() {
	var wg sync.WaitGroup
	for range 5 {
		wg.Add(1)
		go readCounter(&wg)
	}
	wg.Add(1)
	time.Sleep(time.Second) // wait..
	go writeCounter(&wg, 18)
	wg.Wait()
}
```

Let’s analyze what happens here chronologically 👇


🧩 Step 1 — Start Readers

The loop runs 5 times: Each iteration spawns a new goroutine executing readCounter().

Each of these 5 goroutines:

  1. Calls RLock() — all succeed concurrently (because no writer exists yet).

  2. Reads the shared counter value (0 initially).

  3. Prints:

    Read Counter: 0 
  4. Releases the read lock.

Since RWMutex allows multiple concurrent readers, all 5 print operations happen almost instantly, possibly in random order.


🧩 Step 2 — Sleep

time.Sleep(time.Second)

The main goroutine sleeps for 1 second before starting the writer. This gives the reader goroutines enough time to finish their work.

So, all 5 readers complete, and their RUnlock() calls release the read locks. Now, no goroutine holds the mutex.


🧩 Step 3 — Start Writer

go writeCounter(&wg, 18)

After the Sleep, one writer goroutine starts. It calls rwmu.Lock() and acquires exclusive access because no readers are active now.

Then it updates:

counter = 18

And prints:

Written value 18 for counter: 

It finally unlocks with rwmu.Unlock() and calls wg.Done().


🧩 Step 4 — WaitGroup Wait

wg.Wait()

This line blocks the main goroutine until all 6 goroutines (5 readers + 1 writer) finish their work. Once they all call Done(), wg.Wait() unblocks, and the program ends gracefully.


💡 OUTPUT EXPLAINED

```
Read Counter: 0
Read Counter: 0
Read Counter: 0
Read Counter: 0
Read Counter: 0
Written value 18 for counter:
```

Why this order?

  • All readers start almost simultaneously → they all print 0.
  • The writer starts later (after Sleep(1s)) → prints after them.
  • There’s no “read-after-write” in this example, so no reader sees 18.

✅ The readers read before the writer writes. ✅ The writer writes after all readers are done.

Hence, we see 5 reads of 0 → then a single write of 18.


🔍 KEY CONCURRENCY RULES DEMONSTRATED

| Operation | Can multiple goroutines do it? | Blocks whom? |
|---|---|---|
| RLock() (read) | ✅ Yes | Blocks writers |
| Lock() (write) | ❌ No | Blocks readers + writers |
| RUnlock() | Releases a reader lock | Lets a writer proceed |
| Unlock() | Releases the writer lock | Lets readers proceed |

✅ SUMMARY

| Concept | Description |
|---|---|
| sync.RWMutex | Provides both shared (read) and exclusive (write) locking. |
| Readers | Multiple can access concurrently if no writer is active. |
| Writers | Only one writer can hold the lock; blocks everyone else. |
| This program | Shows 5 concurrent readers accessing safely, then 1 exclusive writer updating data. |
| Output order | 5 reads of 0 → 1 write of 18. |

So, our code demonstrates reader concurrency + write exclusivity perfectly. If we later add another group of readers after the writer, they’ll read the updated value (18) — showing how RWMutex synchronizes consistent state across goroutines.
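
That extension is easy to try: appended after wg.Wait() at the end of the main function from the recap above (reusing its readCounter), a second wave of readers observes the new value.

```go
// Appended after wg.Wait() in the main function above.
var wg2 sync.WaitGroup
for range 3 {
	wg2.Add(1)
	go readCounter(&wg2) // each now prints "Read Counter: 18"
}
wg2.Wait()
```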


4️⃣. sync.NewCond 🚦

Let’s dive deep into sync.NewCond in Go, step by step, with practical understanding, internal working, and when to use it.


🧩 sync.NewCond — Condition Variables in Go

🧠 Conceptually

sync.NewCond provides a way for goroutines to wait until a certain condition becomes true.

It’s a low-level synchronization primitive that coordinates goroutines based on events rather than shared locks or channels.

While Mutex and RWMutex synchronize access to shared data, Cond synchronizes the timing of goroutines — it lets them wait for a condition to occur.


⚙️ Declaration

```go
func NewCond(l Locker) *Cond
```
  • Locker is any type that implements Lock() and Unlock() — usually a sync.Mutex or sync.RWMutex.
  • Returns a pointer to a sync.Cond structure.

Example:

```go
var mu sync.Mutex
cond := sync.NewCond(&mu)
```

🧩 Structure (Internally)

Conceptually, sync.Cond looks like this:

```go
type Cond struct {
	L       Locker // the associated lock
	notify  notifyList
	checker copyChecker
}
```

It maintains:

  • A list of goroutines waiting for a condition.
  • A locker that guards access to shared state.
  • Wait/Signal/Broadcast methods to control synchronization.

⚒️ 3 Main Methods


1️⃣ Wait()

cond.Wait()
  • The goroutine unlocks the associated lock (so others can modify the shared condition).
  • It then suspends execution — putting itself into the wait queue.
  • When it’s woken up via Signal or Broadcast, it relocks the mutex before returning.

So Wait() always follows this pattern:

```go
cond.L.Lock()
for !condition {
	cond.Wait()
}
// ... condition holds here ...
cond.L.Unlock()
```

🧠 Why the loop? Because a goroutine might wake up spuriously (without condition being true). So we recheck the condition before proceeding.


2️⃣ Signal()

cond.Signal()
  • Wakes up one waiting goroutine (if any).
  • The selected goroutine will reacquire the lock before returning from Wait().

Use this when only one waiter should proceed.


3️⃣ Broadcast()

cond.Broadcast()
  • Wakes up all waiting goroutines.
  • Each goroutine will try to reacquire the lock in turn.

Use this when a global event occurs (e.g., shared state has changed for all waiters).
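
A minimal sketch of Broadcast() (the ready flag and worker loop are ours, for illustration): several goroutines wait on the same condition, and a single Broadcast() releases them all.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu    sync.Mutex
		cond  = sync.NewCond(&mu)
		ready bool
		wg    sync.WaitGroup
	)

	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			mu.Lock()
			for !ready { // recheck after every wakeup
				cond.Wait()
			}
			mu.Unlock()
			fmt.Printf("worker %d released\n", id)
		}(i)
	}

	mu.Lock()
	ready = true
	cond.Broadcast() // wake every waiter at once
	mu.Unlock()
	wg.Wait()
}
```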


⚡ Example: Producer–Consumer

Let’s take a simple Go example that shows sync.Cond in action:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	queue = make([]int, 0, 10)
	mu    sync.Mutex
	cond  = sync.NewCond(&mu)
)

func main() {
	// Consumer goroutines
	for i := 1; i <= 3; i++ {
		go consumer(i)
	}

	// Producer
	for i := 1; i <= 10; i++ {
		time.Sleep(500 * time.Millisecond)
		mu.Lock()
		queue = append(queue, i)
		fmt.Printf("Produced: %d\n", i)
		cond.Signal() // wake one waiting consumer
		mu.Unlock()
	}
	time.Sleep(3 * time.Second)
}

func consumer(id int) {
	for {
		mu.Lock()
		for len(queue) == 0 {
			cond.Wait()
		}
		item := queue[0]
		queue = queue[1:]
		fmt.Printf("Consumer %d consumed: %d\n", id, item)
		mu.Unlock()
	}
}
```

🔍 Step-by-Step Execution

  1. Consumers start first → they Lock(), find queue empty, and call cond.Wait().
  2. Each consumer releases the lock and goes into waiting state.
  3. Producer adds an item to the queue and calls cond.Signal() → wakes one waiting consumer.
  4. Woken consumer reacquires the lock, consumes item, and unlocks.
  5. Cycle repeats.

⚖️ When to Use sync.Cond

✅ Use sync.Cond when:

  • We need goroutines to wait for a specific state change.
  • Channels are not sufficient or efficient, e.g., when coordinating many waiters on shared memory.
  • We need fine-grained control over wake-ups (Signal vs Broadcast).

❌ Don’t use it when:

  • We can achieve the same with channels (simpler, safer, idiomatic).
  • Our logic doesn’t require multiple waiting goroutines on the same condition.

🧠 Internals (under the hood)

Inside Go’s runtime:

  • Wait() uses runtime park/unpark (suspending a goroutine efficiently).
  • Signal()/Broadcast() use notifyList, which is a queue of waiting goroutines.
  • A woken goroutine (one for Signal(), all for Broadcast()) must reacquire the lock before returning from Wait() — ensuring safety.

🧩 Analogy

Imagine a waiting room outside a doctor’s cabin (the mutex):

  • All patients (goroutines) wait until the nurse (the condition variable) announces that the doctor is ready.
  • When nurse calls Signal(), one patient enters.
  • When nurse calls Broadcast(), all patients can prepare to enter, one by one, as doctor (lock) becomes available.

🔚 Summary

| Method | Purpose | Behavior |
|---|---|---|
| Wait() | Suspend goroutine until signaled | Releases & reacquires the lock |
| Signal() | Wake one waiter | Used for a single event |
| Broadcast() | Wake all waiters | Used for a global change |

💻 0️⃣ CODE RECAP:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// sync.NewCond() - Cond implements a condition variable, a rendezvous point
// for goroutines waiting for or announcing the occurrence of an event.

const bufferSize = 5

type Buffer struct {
	items []int
	mu    sync.Mutex
	cond  sync.Cond
}

func NewBuffer(size int) *Buffer {
	b := &Buffer{items: make([]int, 0, size)}
	b.cond = *sync.NewCond(&b.mu)
	return b
}

func (b *Buffer) Produce(item int) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.items) == bufferSize {
		b.cond.Wait()
	}
	b.items = append(b.items, item)
	fmt.Println("🔵 Produced:", item)
	b.cond.Signal()
}

func (b *Buffer) Consume() int {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.items) == 0 {
		b.cond.Wait() // This f(x) stops doing anything and waits for the other function to append to the slice
	}
	item := b.items[0]    // consuming
	b.items = b.items[1:] // extracting
	fmt.Println("🟢 Consumed:", item)
	b.cond.Signal()
	return item
}

func Producer(b *Buffer, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := range 10 {
		b.Produce(i + 100)
		time.Sleep(100 * time.Millisecond)
	}
}

func Consumer(b *Buffer, wg *sync.WaitGroup) {
	defer wg.Done()
	for range 10 {
		b.Consume()
		time.Sleep(200 * time.Millisecond)
	}
}

func main() {
	buffer := NewBuffer(bufferSize)
	var wg sync.WaitGroup
	wg.Add(2)
	go Producer(buffer, &wg)
	go Consumer(buffer, &wg)
	wg.Wait()
}
```

💡 Output:

```
$ go run .
🔵 Produced: 100
🟢 Consumed: 100
🔵 Produced: 101
🔵 Produced: 102
🟢 Consumed: 101
🔵 Produced: 103
🟢 Consumed: 102
🔵 Produced: 104
🔵 Produced: 105
🟢 Consumed: 103
🔵 Produced: 106
🔵 Produced: 107
🟢 Consumed: 104
🔵 Produced: 108
🔵 Produced: 109
🟢 Consumed: 105
🟢 Consumed: 106
🟢 Consumed: 107
🟢 Consumed: 108
🟢 Consumed: 109
```

🧠 1️⃣ The Big Picture

Our program is a classic Producer–Consumer problem implementation using sync.Cond (Condition Variable). Here’s what’s happening conceptually:

  • We have a shared buffer (slice) of limited capacity (bufferSize = 5).

  • The producer goroutine keeps adding integers to the buffer.

  • The consumer goroutine keeps removing integers from the buffer.

  • But they must coordinate:

    • Producer must wait if the buffer is full.
    • Consumer must wait if the buffer is empty.

That coordination is handled using:

  • sync.Mutex — for exclusive access (only one goroutine can touch items at a time).
  • sync.Cond — to signal and wait when conditions change.

🧩 2️⃣ Structure of the Code

🔹 The Buffer struct

```go
type Buffer struct {
	items []int
	mu    sync.Mutex
	cond  sync.Cond
}
```

It has:

  • items: a slice representing the queue of data.
  • mu: a mutex lock so only one goroutine can modify items at a time.
  • cond: a condition variable, created from sync.NewCond(&b.mu), that lets goroutines wait and signal based on conditions.

🏗️ 3️⃣ Creating the Buffer

```go
func NewBuffer(size int) *Buffer {
	b := &Buffer{items: make([]int, 0, size)}
	b.cond = *sync.NewCond(&b.mu)
	return b
}
```
  • make([]int, 0, size) creates a slice with capacity = 5, length = 0.
  • sync.NewCond(&b.mu) binds the condition variable to the same mutex (b.mu), so waiting/unlocking happens safely on the same lock.
  • We store this Cond into our struct.

⚙️ 4️⃣ Producing an Item

```go
func (b *Buffer) Produce(item int) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.items) == bufferSize {
		b.cond.Wait()
	}
	b.items = append(b.items, item)
	fmt.Println("🔵 Produced:", item)
	b.cond.Signal()
}
```

Let’s decode this step-by-step:

  1. Lock the mutex → The producer must have exclusive access before modifying the buffer.

  2. Check if buffer is full for len(b.items) == bufferSize { ... }

    • If it’s full, the producer can’t add anything more.
    • So it calls b.cond.Wait().
  3. What does Wait() do?

    • It atomically unlocks b.mu and suspends the goroutine.
    • The goroutine is now sleeping, waiting for a signal.
    • When another goroutine (the consumer) later calls Signal() or Broadcast(), this waiting goroutine wakes up.
    • Upon waking, Wait() automatically re-locks the mutex before returning.

    ⚠️ This is very important — Wait() guarantees that when it resumes, it holds the lock again, so no race occurs.

  4. Once buffer has space: After waking and confirming there’s room (for re-checks), the producer appends the new item:

    b.items = append(b.items, item)

    and prints "🔵 Produced: <item>".

  5. b.cond.Signal()

    • Signals (wakes up) one goroutine that’s waiting on this same condition.
    • In this case, it wakes the consumer, which may be waiting because the buffer was empty earlier.
    • It doesn’t unlock immediately — the unlock happens later when defer b.mu.Unlock() runs.

So the producer either:

  • Waits if the buffer is full, or
  • Adds an item and signals the consumer.

🧾 5️⃣ Consuming an Item

```go
func (b *Buffer) Consume() int {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.items) == 0 {
		b.cond.Wait()
	}
	item := b.items[0]
	b.items = b.items[1:]
	fmt.Println("🟢 Consumed:", item)
	b.cond.Signal()
	return item
}
```

Now, the consumer’s logic mirrors the producer:

  1. Lock the buffer before reading/removing items.

  2. Check if buffer is empty:

    • If len(b.items) == 0, there’s nothing to consume.

    • So the consumer waits:

      b.cond.Wait()

      → This releases the lock and suspends the goroutine.

  3. When awakened: The consumer wakes up once the producer calls b.cond.Signal() after producing something. It then re-locks the mutex and continues.

  4. Consume the item:

```go
item := b.items[0]
b.items = b.items[1:]
```
    • Removes the first element from the slice.
    • This is like taking the first product off the shelf.
  5. Print consumed item.

  6. Signal to producer:

    • Now that space is available in the buffer, the consumer calls:

      b.cond.Signal()
    • This wakes up one waiting producer, allowing it to produce the next item.

  7. Unlock automatically (due to defer) and return the consumed item.


🧑‍🤝‍🧑 6️⃣ The Producer and Consumer Goroutines

Producer

```go
func Producer(b *Buffer, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := range 10 {
		b.Produce(i + 100)
		time.Sleep(100 * time.Millisecond)
	}
}
```
  • Produces numbers from 100 to 109.
  • Sleeps a bit (100ms) between each production, simulating time delay.

Consumer

```go
func Consumer(b *Buffer, wg *sync.WaitGroup) {
	defer wg.Done()
	for range 10 {
		b.Consume()
		time.Sleep(200 * time.Millisecond)
	}
}
```
  • Consumes 10 items.
  • Sleeps longer (200ms), so it’s slower than the producer.
  • Because of that, the buffer sometimes fills up, forcing the producer to wait.

🧭 7️⃣ The Main Function

```go
func main() {
	buffer := NewBuffer(bufferSize)
	var wg sync.WaitGroup
	wg.Add(2)
	go Producer(buffer, &wg)
	go Consumer(buffer, &wg)
	wg.Wait()
}
```
  • Creates a buffer with a size of 5.
  • Adds 2 to the wait group (one for producer, one for consumer).
  • Runs both as goroutines.
  • Waits for both to finish (after producing and consuming 10 items each).

⚡ 8️⃣ The Execution Flow (How it really happens)

Let’s visualize roughly:

| Time | Event | Explanation |
|---|---|---|
| 0ms | 🔵 Producer adds 100 | Buffer = [100] |
| 100ms | 🟢 Consumer takes 100 | Buffer = [] |
| 200ms | 🔵 Producer adds 101 | Buffer = [101] |
| 300ms | 🔵 Producer adds 102 | Buffer = [101, 102] |
| 400ms | 🟢 Consumer takes 101 | Buffer = [102] |

Because the producer is faster (100ms) and the consumer slower (200ms), sometimes:

  • Buffer fills up (5 items), so producer Waits.
  • Then when consumer removes one, it Signals, waking producer.

This is why our output alternates between "Produced" and "Consumed" lines — but not perfectly; timing differences can cause clusters.


🔔 9️⃣ Understanding sync.NewCond — in depth

sync.NewCond(l sync.Locker) creates a Condition Variable bound to a Locker (like sync.Mutex).

A condition variable has three key methods:

  1. Wait()

    • Suspends the goroutine until it’s woken up.
    • Unlocks the mutex before waiting.
    • When it’s woken, it locks the mutex again.
  2. Signal()

    • Wakes one waiting goroutine.
  3. Broadcast()

    • Wakes all waiting goroutines.

The Cond is like a meeting point for goroutines. When one goroutine can’t proceed (e.g., buffer full), it waits there. Another goroutine (the opposite role) later signals that something has changed.


📈 🔟 The Sample Output Explained

```
🔵 Produced: 100
🟢 Consumed: 100
🔵 Produced: 101
🔵 Produced: 102
🟢 Consumed: 101
🔵 Produced: 103
🟢 Consumed: 102
🔵 Produced: 104
🔵 Produced: 105
🟢 Consumed: 103
🔵 Produced: 106
🔵 Produced: 107
🟢 Consumed: 104
🔵 Produced: 108
🔵 Produced: 109
🟢 Consumed: 105
🟢 Consumed: 106
🟢 Consumed: 107
🟢 Consumed: 108
🟢 Consumed: 109
```

Here’s what’s happening:

  • At the start, producer produces faster → buffer gradually fills.
  • Consumer consumes slower → sometimes multiple “Produced” in a row.
  • When buffer is full, producer pauses (Wait()).
  • Once consumer consumes, it signals → producer wakes → continues producing.
  • When both finish their 10 cycles, program exits cleanly.

🧩 1️⃣1️⃣ Core Idea Recap

| Concept | Role |
|---|---|
| sync.Mutex | Ensures exclusive access to shared data. |
| sync.Cond | Enables goroutines to wait for a condition to become true. |
| Wait() | Puts the goroutine to sleep while unlocking the mutex temporarily. |
| Signal() | Wakes up one waiting goroutine. |
| for !condition { Wait() } | Prevents spurious wakeups and ensures the condition is rechecked. |
| Producer | Waits when the buffer is full, signals when a new item is added. |
| Consumer | Waits when the buffer is empty, signals when space is freed. |

🧠 1️⃣2️⃣ Mental Analogy

Think of a shop shelf (buffer) that can hold 5 boxes.

  • 🧍‍♂️ Producer = worker putting boxes on the shelf. If the shelf is full, the worker waits until the consumer takes one.

  • 🧍‍♀️ Consumer = customer taking boxes off the shelf. If the shelf is empty, the customer waits until producer puts one.

  • Wait() = standing idle, waiting for an event.

  • Signal() = tapping the other person’s shoulder saying, “Okay, your turn!”


In summary:

Our code:

  • Creates a synchronized queue (shared buffer).
  • Uses sync.Cond to coordinate two goroutines.
  • Ensures neither goroutine wastes CPU cycles spinning.
  • Demonstrates how Go’s Cond enables producer-consumer coordination elegantly.

5️⃣. sync.Once 🟢

Perfect 🌟 — let’s now dive deeply into sync.Once — one of the most elegant synchronization primitives in Go. We’ll go step-by-step just like we did for sync.Cond, covering concept, use case, internal working, real-world analogy, and code examples.


🧠 1️⃣ What is sync.Once?

sync.Once ensures that a particular piece of code (a function) executes only once, even if multiple goroutines try to run it at the same time.

In other words:

  • It’s like a one-time initializer.
  • Even if 100 goroutines call the same function concurrently, sync.Once guarantees that only one of them actually runs the function body.
  • All others will wait until that one finishes.

🧩 2️⃣ Syntax

```go
import "sync"

var once sync.Once

once.Do(func() {
	// Code that should run only once
})
```

✅ The .Do() method takes a function (no arguments, no return values). If the function has already run once, future calls to .Do() do nothing.


⚙️ 3️⃣ Typical Use Case

sync.Once is most often used for lazy initialization — i.e., initializing a shared resource only when it’s first needed, and ensuring that initialization happens only once, even if many goroutines are running.

Examples:

  • Opening a database connection only once.
  • Loading a configuration file only once.
  • Initializing a global cache, logger, or network client.

💡 4️⃣ Simple Example

Let’s look at a simple demonstration 👇

```go
package main

import (
	"fmt"
	"sync"
)

var once sync.Once

func initialize() {
	fmt.Println("✅ Initialization done")
}

func worker(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println("Goroutine", id, "started")
	once.Do(initialize) // Only the first call executes initialize()
	fmt.Println("Goroutine", id, "running normally")
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 5; i++ {
		wg.Add(1)
		go worker(i, &wg)
	}
	wg.Wait()
}
```

🧾 Output (example)

```
Goroutine 1 started
✅ Initialization done
Goroutine 1 running normally
Goroutine 2 started
Goroutine 2 running normally
Goroutine 3 started
Goroutine 3 running normally
Goroutine 4 started
Goroutine 4 running normally
Goroutine 5 started
Goroutine 5 running normally
```

No matter how many goroutines call once.Do(initialize), 👉 initialize() runs exactly once, guaranteed.


🧮 5️⃣ Internal Working (What Happens Under the Hood)

Let’s understand how sync.Once achieves this.

Internally, sync.Once is implemented as:

```go
type Once struct {
	done uint32
	m    Mutex
}
```
  • done — an atomic flag (0 or 1) that tracks whether the function has already run.
  • m — a mutex to ensure exclusive execution of the Do() function.

When once.Do(fn) is called:

  1. It atomically checks if done == 1.

    • If yes, it returns immediately — the function has already run.
  2. If not:

    • It locks the mutex, calls the provided function.
    • Then sets done = 1.
    • Then unlocks the mutex.

It uses both atomic operations and mutex locking for correctness and speed.


🔍 6️⃣ Step-by-Step Flow

Let’s visualize this with multiple goroutines:

| Time | Goroutine | Action | Result |
|---|---|---|---|
| t₀ | G1 | Calls once.Do() | done == 0 → G1 executes the function |
| t₁ | G2 | Calls once.Do() | Sees done == 0, waits for the mutex |
| t₂ | G1 | Finishes, sets done = 1, unlocks the mutex | |
| t₃ | G2 | Sees done == 1 | Returns immediately |
| t₄ | G3, G4, G5 | All see done == 1 | Skip the function instantly |

Result: function runs only once (by G1), all others skip.


🧰 7️⃣ Real-World Analogy

Imagine a room with a light switch that turns on the generator.

  • Many people (goroutines) enter the dark room.
  • Each one says, “If the generator isn’t on yet, I’ll turn it on.”
  • The first person flips the switch.
  • Others come in later, see it’s already on, and skip flipping it.

That’s exactly how sync.Once works. 🟢 First goroutine runs the setup. ⚪ Others just continue once setup is done.


⚡ 8️⃣ Another Example — Lazy Initialization of a Database

```go
package main

import (
	"fmt"
	"sync"
)

type Database struct {
	connection string
}

var (
	dbInstance *Database
	once       sync.Once
)

func GetDatabaseInstance() *Database {
	once.Do(func() {
		fmt.Println("Initializing Database connection...")
		dbInstance = &Database{connection: "Connected to DB"}
	})
	return dbInstance
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			db := GetDatabaseInstance()
			fmt.Println("Goroutine", id, "using", db.connection)
		}(i)
	}
	wg.Wait()
}
```

Output:

```
Initializing Database connection...
Goroutine 2 using Connected to DB
Goroutine 1 using Connected to DB
Goroutine 3 using Connected to DB
```

Even though 3 goroutines called GetDatabaseInstance(), ✅ initialization happens only once.


🧩 9️⃣ Important Notes

1. sync.Once executes once per instance.

If we create multiple sync.Once variables, each is independent.

```go
var once1 sync.Once
var once2 sync.Once
```

Each of them can execute once.


2. A panic inside Do() still counts as "done."

Per the sync documentation, if the function passed to Do() panics, Do considers it to have returned; future calls of Do return without calling the function again.

So a failed (panicking) initialization is not retried automatically. If we need retry semantics, we must build them ourselves, e.g., with a mutex plus a success flag that is only set when initialization actually completes. A small sketch follows.
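
Here is a minimal demonstration (we recover from the panic so the program can continue):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var once sync.Once

	func() {
		defer func() { recover() }() // swallow the panic from f
		once.Do(func() { panic("init failed") })
	}()

	// Do already considers f to have returned, so this is a no-op:
	once.Do(func() { fmt.Println("second attempt") })
	fmt.Println("done") // prints only "done"
}
```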


3. Don’t copy a sync.Once value.

If we do something like:

```go
onceCopy := once
onceCopy.Do(fn)
```

→ The state won’t be shared, and fn might run more than once.

We should always use the same instance of sync.Once.


🧠 🔟 Summary Table

| Concept | Meaning |
|---|---|
| Purpose | Execute a block of code only once across all goroutines. |
| Method | once.Do(func) |
| Thread safety | Fully thread-safe. |
| Use case | Lazy initialization, singletons, configuration, setup. |
| Internal mechanism | Atomic flag + mutex. |
| If panic occurs | Do counts the function as returned; it is not retried. |

🪄 1️⃣1️⃣ Quick Mental Shortcut

When we think of:

  • sync.Mutex"Only one at a time"
  • sync.WaitGroup"Wait for all to finish"
  • sync.Cond"Wait for a condition to change"
  • sync.Once"Run this once only"

So sync.Once is our one-time guarantee mechanism in concurrent Go code.


```go
package main

import (
	"fmt"
	"sync"
)

var once sync.Once

func initialize() {
	fmt.Println("🔵 This will not be repeated no matter how many times we call this func. using once.Do()!")
}

func main() {
	var wg sync.WaitGroup
	for i := range 5 {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("✅ Goroutine #", i)
			once.Do(initialize)
		}()
	}
	wg.Wait()
}
```

💡 Output:

```
$ go run .
✅ Goroutine # 4
🔵 This will not be repeated no matter how many times we call this func. using once.Do()!
✅ Goroutine # 3
✅ Goroutine # 1
✅ Goroutine # 2
✅ Goroutine # 0
```

🌟 — The above Go program demonstrates sync.Once, a powerful synchronization primitive in Go’s sync package that ensures a specific piece of code runs only once, no matter how many goroutines call it concurrently.

Let’s go through it step by step, in deep detail 👇


🧠 Conceptual Understanding

🔹 What is sync.Once?

sync.Once is a struct provided by Go’s sync package. It guarantees that a function (passed to once.Do()) executes only once — even if multiple goroutines call it at the same time.

Once the function finishes, future calls to once.Do() will skip execution, without blocking or re-running the code.

This is extremely useful for:

  • One-time initialization (e.g., loading configuration, setting up database connection, creating singletons)
  • Ensuring thread-safe setup in concurrent programs.

🧩 Code Breakdown

```go
package main

import (
	"fmt"
	"sync"
)
```
  • We import sync for synchronization tools and fmt for printing output.

🟦 Global Declaration

var once sync.Once
  • We create a global variable once of type sync.Once.

  • It internally contains:

    • A boolean flag (done) to track if the function has already been executed.
    • A mutex lock for thread-safe access.

👉 Think of sync.Once as a thread-safe, one-time switch — once turned on, it can’t be turned off.


⚙️ Function to Execute Once

```go
func initialize() {
	fmt.Println("🔵 This will not be repeated no matter how many times we call this func. using once.Do()!")
}
```
  • This is the function we want to run only once.
  • Even if called from multiple goroutines simultaneously, Go ensures only one goroutine executes this, and the rest skip it.

💡 Main Function

```go
func main() {
	var wg sync.WaitGroup
```
  • wg is a WaitGroup that ensures the main goroutine waits until all spawned goroutines finish execution.

🚀 Launching Goroutines

```go
for i := range 5 {
	wg.Add(1)
	go func() {
		defer wg.Done()
		fmt.Println("✅ Goroutine #", i)
		once.Do(initialize)
	}()
}
```

Let’s break this part line by line 👇

🔸 for i := range 5

  • This is Go 1.22+ syntax — it means loop 5 times (i = 0,1,2,3,4).

🔸 wg.Add(1)

  • Increments the WaitGroup counter by 1 for each goroutine we spawn.

🔸 go func() {...}()

  • Launches a goroutine (a lightweight thread).

  • Each goroutine runs:

    • Prints "✅ Goroutine #i"
    • Calls once.Do(initialize)

🔸 once.Do(initialize)

  • This line is key:

    • The first goroutine that reaches this line calls initialize().
    • The rest of the goroutines skip execution once initialize() has already been called.

⚙️ Internally, sync.Once.Do() works like this:

```go
func (o *Once) Do(f func()) {
	if atomic.LoadUint32(&o.done) == 0 {
		o.doSlow(f)
	}
}

func (o *Once) doSlow(f func()) {
	o.m.Lock()
	defer o.m.Unlock()
	if o.done == 0 {
		defer atomic.StoreUint32(&o.done, 1) // marked done even if f panics
		f()                                  // run only once
	}
}
```

So the sequence is:

  1. First goroutine locks the mutex and runs initialize().
  2. The deferred atomic store sets done = 1 (this happens even if the function panics).
  3. All other goroutines skip execution because they see done == 1.

🧵 Wait for All Goroutines

wg.Wait()
  • Blocks the main goroutine until all 5 goroutines finish their execution.

🧾 OUTPUT EXPLAINED

```
✅ Goroutine # 4
🔵 This will not be repeated no matter how many times we call this func. using once.Do()!
✅ Goroutine # 3
✅ Goroutine # 1
✅ Goroutine # 2
✅ Goroutine # 0
```

Explanation:

  • Multiple goroutines started almost simultaneously.
  • One of them (here, Goroutine #4) was first to execute once.Do(initialize) and ran initialize().
  • Others also reached that point but skipped the function because once.done was already set to true.
  • Hence, we see the “🔵 This will not be repeated...” message only once.

🧠 Key Takeaways

| Concept | Explanation |
|---|---|
| Purpose | Ensures a function runs only once — thread-safe and concurrent-safe. |
| Main method | once.Do(func) |
| Internal mechanism | Uses a done flag + mutex to guarantee single execution. |
| Common use case | Initialization, singletons, lazy loading, configuration setup. |
| Thread safety | Fully thread-safe — no need to use extra locks. |

🧩 Visual Mental Model

```
        ┌──────────────┐
        │  once.Do(f)  │
        └──────┬───────┘
               │
   ┌───────────┴────────────┐
   │                        │
┌──▼──────┐          ┌──────▼──────┐
│ done==0 │ (first)  │ done==1     │ (subsequent)
└──┬──────┘          └──────┬──────┘
   │                        │
   ▼                        ▼
Run f() once          Skip execution
Set done = 1          Return immediately
```

⚙️ In One Sentence:

sync.Once ensures that a function executes only once across all goroutines — no matter how many try, only the first succeeds.


Let’s build a visual and intuitive analogy for sync.Once ❤️

🎯 Visual Analogy for sync.Once

Let’s imagine we’re a team of five engineers (our goroutines) working in a big company 🏢. We all arrive at the office early one morning, and our manager gives us this instruction:

“Someone must turn on the main server before work starts — but only once! If one of you already turned it on, the rest of you shouldn’t do it again.”

That “server initialization” is exactly like our initialize() function.


🧩 Step-by-Step Analogy

🧑‍💻 Step 1 — The Shared Rule (sync.Once)

Our team has a shared rulebook (the sync.Once object) that keeps track of whether the server has already been started.

Inside it, there’s:

  • A lock (to ensure only one person checks and modifies the rule at a time)
  • A flag (done = false initially), meaning “Server not started yet.”

⚙️ Step 2 — We All Start Working (Goroutines)

All 5 of us rush to start work at the same time (go func()).

Each of us checks the rulebook:

“Has anyone already started the server?”

If the answer is No (done == false):

  • One of us locks the book 🔒
  • Starts the server (initialize())
  • Marks in the book: “✅ Done = true”
  • Unlocks the book 🔓

🚫 Step 3 — The Rest Arrive

The other four engineers arrive and check the rulebook too.

But now the rule says:

“Server already started ✅”

So, each of them says:

“Cool, no need to repeat that step — I’ll move on.”


🧠 Step 4 — What Really Happened Internally

While this looks simple, the sync.Once mechanism behind the scenes ensures atomic, thread-safe execution:

  • Only one goroutine runs the code.
  • Others never run it again — not even partially.
  • Once marked as done, it stays done forever.

💡 The Mental Picture

```
      ┌──────────────────────────┐
      │  Shared Rulebook (once)  │
      │       done = false       │
      └────────────┬─────────────┘
                   │
     ┌─────────────┴──────────────┐
     │                            │
 ┌───▼───┐                 ┌──────▼─────┐
 │  G#1  │ -> done = false │ G#2, G#3,  │
 │       │ -> runs         │ G#4, G#5   │
 │       │    initialize() │ check rule │
 │       │ -> set done=true│ skip func  │
 └───────┘                 └────────────┘
```

🧾 In Our Program’s Terms

| Real-world analogy | Go equivalent |
| --- | --- |
| Rulebook | sync.Once |
| Flag inside rulebook | internal done flag |
| Lock on rulebook | internal mutex |
| Turning on the server | calling initialize() |
| Engineer who does it first | first goroutine to execute once.Do() |
| Other engineers | other goroutines skipping execution |

🧩 Summary in Our Words

  • We can call once.Do(f) from multiple goroutines, but only one will actually run f().
  • All other calls wait until it’s confirmed done, then return immediately.
  • This guarantees thread-safe, idempotent initialization.
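
That waiting behaviour is observable. Here is a minimal sketch using only the standard library; the 100 ms sleep is an artificial stand-in for slow initialization:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var (
		once sync.Once
		wg   sync.WaitGroup
	)
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			start := time.Now()
			once.Do(func() {
				time.Sleep(100 * time.Millisecond) // artificially slow init
			})
			// Do blocks every caller until the one running f finishes,
			// so each goroutine reports roughly the full 100 ms.
			fmt.Printf("goroutine %d waited ~%v\n", id, time.Since(start).Round(10*time.Millisecond))
		}(i)
	}
	wg.Wait()
}
```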

💬 One-Line Memory Hook

“In concurrency, sync.Once is like a door with one key — the first goroutine unlocks it, and for the rest, it’s already open.”


5️⃣. sync.Pool 🟣

Let’s dive deeply into sync.Pool in Go and understand how it works, why it exists, and how to use it efficiently. We’ll break it down step by step.

1️⃣ What is sync.Pool?

In Go, sync.Pool is a concurrent-safe object pool. It’s designed to reuse allocated objects to reduce the overhead of memory allocation and garbage collection (GC).

  • Think of it like a "recycling bin" for objects.
  • Instead of creating a new object every time, we take one from the pool.
  • After using it, we put it back for future reuse.

This is especially useful in high-performance programs where temporary objects are frequently created and discarded.


Key Characteristics

  1. Thread-safe: Multiple goroutines can get/put objects without explicit locks.
  2. Automatic GC support: Objects in the pool can be garbage collected if there’s memory pressure.
  3. Lazy initialization: We can define a New function to create a new object if the pool is empty.

2️⃣ Anatomy of sync.Pool

```go
type Pool struct {
	// New optionally specifies a function to generate
	// a value when Get would otherwise return nil.
	New func() interface{}
	// contains filtered or unexported fields
}

func (p *Pool) Get() interface{}
func (p *Pool) Put(x interface{})
```

Main methods:

  1. Get()

    • Returns an object from the pool if available.
    • If the pool is empty, it calls New (if defined) to create a new object.
  2. Put(x interface{})

    • Returns an object to the pool for later reuse.
    • After calling Put, the pool may or may not retain the object. GC can remove it.
  3. New (optional)

    • Function to allocate a new object if Get finds the pool empty.
```go
pool := sync.Pool{
	New: func() interface{} {
		return &MyStruct{}
	},
}
```

3️⃣ Why use sync.Pool?

  • Frequent allocation of temporary objects creates GC pressure.
  • sync.Pool reduces memory churn by reusing objects, improving performance.
  • Perfect for objects with short lifetimes.
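
A hedged way to check that claim on your own machine is a micro-benchmark. This sketch assumes a bytes.Buffer workload; put it in a *_test.go file and run go test -bench=. -benchmem (exact numbers vary by machine):

```go
package main

import (
	"bytes"
	"sync"
	"testing"
)

var benchPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// BenchmarkNoPool allocates a fresh buffer on every iteration.
func BenchmarkNoPool(b *testing.B) {
	for i := 0; i < b.N; i++ {
		buf := new(bytes.Buffer)
		buf.WriteString("hello")
	}
}

// BenchmarkWithPool reuses buffers via sync.Pool, cutting allocations
// and, with them, GC work.
func BenchmarkWithPool(b *testing.B) {
	for i := 0; i < b.N; i++ {
		buf := benchPool.Get().(*bytes.Buffer)
		buf.WriteString("hello")
		buf.Reset()
		benchPool.Put(buf)
	}
}
```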

Use Cases:

  1. Buffers for network I/O ([]byte)
  2. Structs used temporarily in computations
  3. Large objects reused multiple times (e.g., JSON encoders/decoders)
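
As a sketch of use case 1, here is how a []byte buffer pool might look. Storing a *[]byte (rather than the slice itself) avoids allocating an interface box for the slice header on every Put; the 4 KB capacity is an arbitrary assumption:

```go
package main

import (
	"fmt"
	"sync"
)

// Pool of reusable byte slices, stored as *[]byte so Put/Get
// don't allocate a fresh interface box for the slice header.
var slicePool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, 0, 4096) // arbitrary 4 KB capacity
		return &b
	},
}

func main() {
	bp := slicePool.Get().(*[]byte)
	buf := (*bp)[:0] // length 0, capacity preserved

	buf = append(buf, "request payload"...)
	fmt.Println(len(buf), cap(buf)) // 15 4096

	*bp = buf[:0] // reset before returning to the pool
	slicePool.Put(bp)
}
```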

Not ideal for:

  • Long-lived objects
  • Objects shared across multiple unrelated operations

4️⃣ How sync.Pool Works Internally

  • sync.Pool maintains per-P (processor) local storage.

    • Each P (logical processor in Go runtime) has its own list of pooled objects.
    • This reduces lock contention in high-concurrency scenarios.
  • When Put is called:

    • The object is added to the local P pool.
  • When Get is called:

    • First, it checks the local P pool.
    • If empty, it tries other Ps’ pools.
    • If still empty, it calls New.
  • Garbage Collection (GC) behavior:

    • On a GC cycle, the pool can be cleared.
    • This prevents sync.Pool from holding onto memory indefinitely, unlike other caches.
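
That GC behaviour can be observed directly. Treat this as a sketch, not a guarantee: when the pool is drained is implementation-defined, and since Go 1.13 a pooled object survives one GC cycle in a "victim" cache, hence the two runtime.GC() calls:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	pool := sync.Pool{
		New: func() interface{} { return "fresh" },
	}

	pool.Put("pooled")
	fmt.Println(pool.Get()) // "pooled": reused from the pool

	pool.Put("pooled")
	runtime.GC() // first cycle: object moves to the victim cache
	runtime.GC() // second cycle: victim cache is cleared
	fmt.Println(pool.Get()) // typically "fresh": New had to run again
}
```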

5️⃣ Example Usage

Let’s see a practical example:

```go
package main

import (
	"fmt"
	"sync"
)

type MyStruct struct {
	ID int
}

func main() {
	pool := sync.Pool{
		New: func() interface{} {
			fmt.Println("Creating new object")
			return &MyStruct{}
		},
	}

	// Get object (pool is empty -> New is called)
	obj1 := pool.Get().(*MyStruct)
	obj1.ID = 42
	fmt.Println("obj1 ID:", obj1.ID)

	// Put object back into pool
	pool.Put(obj1)

	// Get object again (reuse from pool)
	obj2 := pool.Get().(*MyStruct)
	fmt.Println("obj2 ID:", obj2.ID) // 42
}
```

Output:

```
Creating new object
obj1 ID: 42
obj2 ID: 42
```

✅ The object is reused, so no new allocation occurs on the second Get(). (The API doesn't guarantee reuse, but in this single-goroutine run the pooled object comes straight back.)


6️⃣ Tips & Best Practices

  1. Use New for lazy initialization: it prevents Get() from returning nil.

  2. Reset objects before putting back

```go
buffer := pool.Get().([]byte)
buffer = buffer[:0] // reset length before reuse
pool.Put(buffer)
```
  3. Do not store long-lived objects

    • GC may remove them unexpectedly.
    • sync.Pool is meant for temporary/reusable objects.
  4. Avoid storing pointers to large shared resources

    • Could lead to unexpected behavior if reused in parallel without reset.

7️⃣ Key Points to Remember

| Property | Details |
| --- | --- |
| Thread-safe | ✅ Multiple goroutines can Get and Put concurrently |
| Memory management | Objects may be cleared on GC |
| Best for | Temporary objects with short lifetimes |
| Not suitable for | Long-lived objects, persistent caches |

8️⃣ Real-world analogy

Imagine a pool of reusable coffee mugs:

  1. If someone needs a mug:

    • They take one from the shelf (pool).
    • If none are left, they grab a new mug (New function).
  2. After finishing:

    • They wash it and put it back (Put).
  3. If the shelf gets cleared occasionally (GC), mugs might be discarded to free space.

sync.Pool works exactly like that in code.


```go
package main

import (
	"fmt"
	"sync"
)

// sync.Pool
type Person struct {
	name string
	age  int
}

func main() {
	var pool = sync.Pool{
		New: func() interface{} {
			fmt.Println("🔵Creating a new Person")
			return &Person{}
		},
	}

	// Get an object from the pool
	person1 := pool.Get().(*Person)
	person1.name = "Skyy"
	person1.age = 30
	fmt.Println("💡Got Person 1️⃣:", person1)
	fmt.Printf("☑️ Person1 - Name: %s, Age: %d\n", person1.name, person1.age)

	pool.Put(person1)
	fmt.Println("✅ Returned Person to the pool.")

	// Retrieve the person
	person2 := pool.Get().(*Person)
	fmt.Println("💡Got Person 2️⃣:", person2)

	person3 := pool.Get().(*Person)
	fmt.Println("💡Got Person 3️⃣:", person3)
	person3.name = "Soumadip"

	// Returning obj{} to the pool again..
	pool.Put(person2)
	pool.Put(person3)
	fmt.Println("✅ Returned Person to the pool again.")

	person4 := pool.Get().(*Person)
	fmt.Println("💡Got Person 4️⃣:", person4)

	person5 := pool.Get().(*Person)
	fmt.Println("💡Got Person 5️⃣:", person5)
}
```

Output:

```
$ go run .
🔵Creating a new Person
💡Got Person 1️⃣: &{Skyy 30}
☑️ Person1 - Name: Skyy, Age: 30
✅ Returned Person to the pool.
💡Got Person 2️⃣: &{Skyy 30}
🔵Creating a new Person
💡Got Person 3️⃣: &{ 0}
✅ Returned Person to the pool again.
💡Got Person 4️⃣: &{Skyy 30}
💡Got Person 5️⃣: &{Soumadip 0}
```

Let's go line by line and concept by concept to understand our sync.Pool example (above program) deeply. We’ll explore what’s happening, why the output is like that, and the subtleties of Go’s object pooling.


1️⃣ Imports

```go
import (
	"fmt"
	"sync"
)
```
  • fmt → For printing logs.
  • sync → Contains concurrency primitives, including sync.Pool.

2️⃣ Struct Definition

```go
type Person struct {
	name string
	age  int
}
```
  • Defines a simple struct Person with two fields: name and age.
  • This struct will be pooled using sync.Pool.

3️⃣ Initializing the Pool

```go
var pool = sync.Pool{
	New: func() interface{} {
		fmt.Println("🔵Creating a new Person")
		return &Person{}
	},
}
```
  • sync.Pool{New: ...} initializes a pool of objects.
  • New function is called only when the pool is empty.
  • Here, it prints a message and returns a pointer to a new Person struct.

Important: Objects returned by Get() are pointers, so we can modify their fields.


4️⃣ First Get

```go
person1 := pool.Get().(*Person)
person1.name = "Skyy"
person1.age = 30
```
  1. pool.Get() is called.

    • Pool is empty → New is called → prints: 🔵Creating a new Person.
    • Returns &Person{}.
  2. Type assertion: .(*Person) converts interface{} to *Person.

  3. Assign fields:

    • name = "Skyy"
    • age = 30

Printing Person 1

```go
fmt.Println("💡Got Person 1️⃣:", person1)
fmt.Printf("☑️ Person1 - Name: %s, Age: %d\n", person1.name, person1.age)
```

Output:

```
💡Got Person 1️⃣: &{Skyy 30}
☑️ Person1 - Name: Skyy, Age: 30
```

✅ This shows that the Person object is correctly initialized and modified.


5️⃣ Returning Person to the Pool

```go
pool.Put(person1)
fmt.Println("✅ Returned Person to the pool.")
```
  • Put returns the object back into the pool.
  • Important: The object is not copied, only the pointer is stored.
  • Now the pool contains one Person object: &{Skyy 30}.

6️⃣ Second Get

```go
person2 := pool.Get().(*Person)
fmt.Println("💡Got Person 2️⃣:", person2)
```
  • Pool has one object → Get() returns the same pointer (person1).
  • Output:
```
💡Got Person 2️⃣: &{Skyy 30}
```
  • Notice: The object still has the old values (Skyy, 30) because we did not reset the object before putting it back.

7️⃣ Third Get

```go
person3 := pool.Get().(*Person)
fmt.Println("💡Got Person 3️⃣:", person3)
person3.name = "Soumadip"
```
  • Pool is now empty (we already took person2).

  • Get() calls New → prints: 🔵Creating a new Person.

  • Returns a fresh object: &{ 0} → default values.

  • Then, we set:

```go
person3.name = "Soumadip"
```
  • age remains 0 (not set yet).

Output:

```
💡Got Person 3️⃣: &{ 0}
```

8️⃣ Returning Objects Again

```go
pool.Put(person2)
pool.Put(person3)
fmt.Println("✅ Returned Person to the pool again.")
```
  • Both person2 and person3 are put back.

  • Pool now contains two objects:

    1. &{Skyy 30} (person2)
    2. &{Soumadip 0} (person3)

9️⃣ Fourth and Fifth Get

```go
person4 := pool.Get().(*Person)
fmt.Println("💡Got Person 4️⃣:", person4)

person5 := pool.Get().(*Person)
fmt.Println("💡Got Person 5️⃣:", person5)
```
  • Get person4 → the pool is not empty, so it returns the first available object. Go typically hands objects back in LIFO order per P (logical processor); here that is &{Skyy 30}. Output:

```
💡Got Person 4️⃣: &{Skyy 30}
```

  • Get person5 → the pool has one object left → returns &{Soumadip 0}. Output:

```
💡Got Person 5️⃣: &{Soumadip 0}
```

  • If we called Get() again, the pool would be empty → New would be invoked.


🔹 Key Insights from This Example

  1. Objects are reused as pointers → modifications persist unless reset.
  2. Pool does not automatically reset objects → we must manually clear fields before putting them back if required (see the sketch after this list).
  3. New is called only when pool is empty → reduces allocation overhead.
  4. Multiple objects can exist in the pool at once.
  5. Pool objects can be garbage collected → not guaranteed to persist forever.
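
Building on insight #2, here is a minimal sketch of resetting before Put. The Reset method is a common convention we define ourselves, not part of sync.Pool:

```go
package main

import (
	"fmt"
	"sync"
)

type Person struct {
	name string
	age  int
}

// Reset clears the fields so stale data never leaks to the next Get.
func (p *Person) Reset() {
	p.name = ""
	p.age = 0
}

func main() {
	pool := sync.Pool{
		New: func() interface{} { return &Person{} },
	}

	p := pool.Get().(*Person)
	p.name, p.age = "Skyy", 30

	p.Reset() // clear before returning to the pool
	pool.Put(p)

	fmt.Println(pool.Get().(*Person)) // &{ 0}: no stale values
}
```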

🔟 Why the Output is Like This

| Step | Action | Pool state | Output |
| --- | --- | --- | --- |
| Get 1 | New Person created | empty → New called | &{Skyy 30} |
| Put 1 | Return person1 | [&{Skyy 30}] | - |
| Get 2 | Reuse person1 | empty again | &{Skyy 30} |
| Get 3 | New Person created | empty → New called | &{ 0} |
| Put 2+3 | Return person2 & person3 | [&{Skyy 30}, &{Soumadip 0}] | - |
| Get 4 | Reuse first available | [&{Soumadip 0}] | &{Skyy 30} |
| Get 5 | Reuse remaining | empty | &{Soumadip 0} |

✅ Matches exactly with our observed output.


6️⃣. for-select-statement (BONUS ⭐) 🔄️

The for-select pattern in Go is a very common idiom for handling concurrent communication over channels.

1️⃣ What is for-select?

In Go, a select statement lets us wait on multiple channel operations simultaneously. Combining select with a for loop allows us to continuously monitor multiple channels until some condition breaks the loop.

The general structure is:

```go
for {
	select {
	case msg1 := <-ch1:
		// handle message from channel 1
	case msg2 := <-ch2:
		// handle message from channel 2
	case ch3 <- value:
		// send value to channel 3
	default:
		// optional: run if no channel is ready
	}
}
```

2️⃣ How it works

  • for loop: Keeps the select statement running indefinitely (or until we break it).

  • select statement: Waits for one of the channel operations inside it to become ready:

    • Receiving from a channel (<-ch)
    • Sending to a channel (ch <- value)
  • case blocks: Only one case executes at a time. If multiple are ready, Go picks one pseudo-randomly (see the sketch after this list).

  • default case (optional): Executes if no channels are ready—prevents blocking.
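
The random pick is easy to see with two buffered channels that are both ready; run this sketch a few times and the winner varies:

```go
package main

import "fmt"

func main() {
	a := make(chan string, 1)
	b := make(chan string, 1)
	a <- "from a"
	b <- "from b"

	// Both cases are ready, so select picks one pseudo-randomly.
	select {
	case msg := <-a:
		fmt.Println(msg)
	case msg := <-b:
		fmt.Println(msg)
	}
}
```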


3️⃣ Example: Handling multiple channels

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch1 := make(chan string)
	ch2 := make(chan string)

	// simulate sending data in goroutines
	go func() {
		for i := 1; i <= 3; i++ {
			ch1 <- fmt.Sprintf("ch1 message %d", i)
			time.Sleep(500 * time.Millisecond)
		}
		close(ch1)
	}()

	go func() {
		for i := 1; i <= 3; i++ {
			ch2 <- fmt.Sprintf("ch2 message %d", i)
			time.Sleep(700 * time.Millisecond)
		}
		close(ch2)
	}()

	// for-select loop
	for {
		select {
		case msg, ok := <-ch1:
			if !ok {
				ch1 = nil // a nil channel is never selected again
				fmt.Println("ch1 closed")
			} else {
				fmt.Println("Received from ch1:", msg)
			}
		case msg, ok := <-ch2:
			if !ok {
				ch2 = nil // a nil channel is never selected again
				fmt.Println("ch2 closed")
			} else {
				fmt.Println("Received from ch2:", msg)
			}
		}

		// exit once both channels are nil (closed); checking after every
		// iteration avoids selecting on two nil channels, which would block forever
		if ch1 == nil && ch2 == nil {
			fmt.Println("All channels closed. Exiting loop.")
			break
		}
	}
}
```

✅ Key points from this example:

  1. Detecting closed channels: the ok boolean from <-ch tells us when a channel has been closed.
  2. Silencing closed channels: a receive on a closed channel returns the zero value immediately, so we set the channel variable to nil; select never considers a nil channel, which prevents a busy loop of zero-value reads.
  3. Loop termination: break the for loop once all channels are nil.

4️⃣ Optional default case

The default case prevents blocking when no channel is ready:

```go
for {
	select {
	case msg := <-ch:
		fmt.Println(msg)
	default:
		fmt.Println("No messages yet, doing other work...")
		time.Sleep(100 * time.Millisecond)
	}
}
```
  • Without default, the loop blocks until a channel is ready.
  • With default, we can do non-blocking checks while performing other work.

5️⃣ Use cases for for-select

  1. Multiplexing channels: Listen to multiple channels at once.
  2. Timers and timeouts: Using time.After or time.Ticker channels.
  3. Event loops: Continuously process incoming messages until shutdown.
  4. Graceful shutdown: Using a done channel to signal termination.

Example with timeout:

```go
for {
	select {
	case msg := <-ch:
		fmt.Println(msg)
	case <-time.After(2 * time.Second):
		fmt.Println("Timeout! No message received.")
		return
	}
}
```

One caveat: time.After creates a fresh timer on every loop iteration, so for long-running loops a time.NewTimer or time.Ticker is more economical.

6️⃣ Summary Table

| Concept | Explanation |
| --- | --- |
| for | Keeps the loop running |
| select | Waits on multiple channel operations |
| case <-ch | Receives from a channel |
| case ch <- value | Sends to a channel |
| default | Executes if no channel is ready (non-blocking) |
| nil channels | Prevent select from considering closed channels |
| ok boolean | Detects whether a channel is closed |

```go
package main

import (
	"fmt"
	"time"
)

// for-select-statement
func main() {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop() // release the ticker's resources when main returns

	quit := make(chan string)

	go func() {
		time.Sleep(5 * time.Second)
		close(quit)
	}()

	for {
		select {
		case <-ticker.C:
			fmt.Println("Tick ✅")
		case <-quit:
			fmt.Println("Quitting.. ☑️")
			return
		}
	}
}
```

Output:

```
$ go run .
Tick ✅
Tick ✅
Tick ✅
Tick ✅
Tick ✅
Quitting.. ☑️
```

Let's go line by line and explain the above for-select Go code in depth so we fully understand how it works.


1️⃣ Package and Imports

```go
package main

import (
	"fmt"
	"time"
)
```
  • package main: Entry point of the Go program. This tells Go to compile an executable.
  • import "fmt": Used for printing output (fmt.Println).
  • import "time": Provides time-related functions, like time.NewTicker and time.Sleep.

2️⃣ Main function

```go
func main() {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop() // release the ticker when main returns
	quit := make(chan string)
```
  • ticker := time.NewTicker(time.Second)

    • Creates a ticker that sends a signal on its channel (ticker.C) every second.
    • Think of it like an alarm clock ticking every second.
  • quit := make(chan string)

    • Creates a channel of type string.
    • We'll use this channel to signal when to stop the loop.

3️⃣ Goroutine to stop the ticker

```go
go func() {
	time.Sleep(5 * time.Second)
	close(quit)
}()
```
  • go func() { ... }(): Launches an anonymous goroutine.

  • Inside the goroutine:

    1. time.Sleep(5 * time.Second): Waits 5 seconds.
    2. close(quit): Closes the quit channel to signal the main loop to stop.
  • By using a goroutine, this stop signal runs concurrently with the main loop.


4️⃣ The for-select loop

```go
for {
	select {
	case <-ticker.C:
		fmt.Println("Tick ✅")
	case <-quit:
		fmt.Println("Quitting.. ☑️")
		return
	}
}
```
  • for { ... }: Infinite loop, keeps running until we explicitly return.
  • select { ... }: Waits for multiple channel operations.

Case 1: case <-ticker.C

```go
case <-ticker.C:
	fmt.Println("Tick ✅")
```
  • <-ticker.C waits for a tick from the ticker, which happens every second.
  • When a tick arrives, it prints: "Tick ✅".
  • This will happen once per second until we quit.

Case 2: case <-quit

```go
case <-quit:
	fmt.Println("Quitting.. ☑️")
	return
```
  • <-quit waits for a value from the quit channel.
  • We didn’t send a value; instead, we closed the channel in the goroutine.
  • Reading from a closed channel immediately returns the zero value of the channel’s type (here, "" for string).
  • When this happens, we print "Quitting.. ☑️" and return, which stops the program.
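
That closed-channel behaviour is easy to verify in isolation:

```go
package main

import "fmt"

func main() {
	ch := make(chan string)
	close(ch)

	// A receive on a closed channel never blocks: it yields the
	// zero value ("" for string) and ok == false.
	v, ok := <-ch
	fmt.Printf("v=%q ok=%v\n", v, ok) // v="" ok=false
}
```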

5️⃣ Flow of Execution

  1. Ticker starts ticking every 1 second.

  2. Goroutine sleeps 5 seconds, then closes quit.

  3. The for-select loop continuously waits:

    • Every second, ticker sends a tick → prints "Tick ✅".
    • After 5 seconds, quit channel is closed → prints "Quitting.. ☑️" → exits loop.

6️⃣ Output Explanation

```
Tick ✅        ← 1 second
Tick ✅        ← 2 seconds
Tick ✅        ← 3 seconds
Tick ✅        ← 4 seconds
Tick ✅        ← 5 seconds
Quitting.. ☑️  ← after the quit channel closes
```
  • 5 ticks happen because the goroutine sleeps 5 seconds before closing quit.
  • As soon as quit closes, the loop stops.

7️⃣ Key Points to Note

| Concept | Explanation |
| --- | --- |
| time.NewTicker | Sends a tick on its channel periodically. |
| <-ticker.C | Receives the tick, blocking until one arrives. |
| quit channel | Signals termination; closing it lets select detect it. |
| for-select loop | Continuously waits on multiple channels and handles whichever is ready. |
| Goroutine | Runs concurrently, letting the main loop keep ticking while waiting for the quit signal. |

✅ This is a classic for-select pattern to handle repeated tasks with graceful shutdown in Go.

