Curated with 💖 by Soumadip "Skyy" Banerjee 👨🏻💻
Understanding concurrency vs parallelism is fundamental to mastering Go (and modern backend systems in general). Let’s break it down conceptually, practically, and visually, with programming analogies and examples.
Concept | Concurrency | Parallelism |
---|---|---|
Definition | Dealing with many tasks at once (managing multiple tasks). | Doing many tasks at exactly the same time (executing multiple tasks). |
Goal | Structure programs to handle multiple tasks efficiently. | Speed up execution using multiple processors/cores. |
Focus | Task switching and coordination. | Task distribution and simultaneous execution. |
Requires multiple CPUs? | ❌ No | ✅ Yes |
Example analogy | A single chef cooking multiple dishes by switching between them quickly. | Multiple chefs cooking multiple dishes at the same time. |
- **Concurrency** means that multiple tasks make progress during overlapping periods.
- Even on a single CPU, the program switches rapidly between tasks (via scheduling or cooperative multitasking).
- Key idea: structuring a program to handle multiple things logically at the same time.

- **Parallelism** means truly simultaneous execution — multiple tasks run at the same instant.
- It requires multiple cores or processors.
- Key idea: actual simultaneous computation to improve speed.
Let’s say we have three tasks: T1 = load data, T2 = process data, T3 = save results.
```
Time → |T1 |T2 |T1 |T3 |T2 |T3 |
```
→ One CPU switching between tasks fast enough to appear simultaneous.
```
CPU 1: |T1 |T3 |
CPU 2: |T2 |
```
→ Multiple CPUs executing at the same time.
```go
package main

import (
	"fmt"
	"time"
)

func task(name string) {
	for i := 1; i <= 3; i++ {
		fmt.Println(name, "step", i)
		time.Sleep(time.Millisecond * 500)
	}
}

func main() {
	go task("Task A")
	go task("Task B")
	time.Sleep(time.Second * 3)
	fmt.Println("✅ Done")
}
```
🌀 Here, goroutines run concurrently. The Go scheduler switches between them efficiently, even on one CPU core.
```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func work(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	// Note: runtime.NumCPU() reports the machine's CPU count,
	// not which CPU this worker happens to run on.
	fmt.Printf("Worker %d running (NumCPU = %d)\n", id, runtime.NumCPU())
}

func main() {
	runtime.GOMAXPROCS(4) // allow up to 4 OS threads to run Go code in parallel
	var wg sync.WaitGroup
	for i := 1; i <= 4; i++ {
		wg.Add(1)
		go work(i, &wg)
	}
	wg.Wait()
	fmt.Println("✅ All done in parallel")
}
```
⚡ Now Go executes tasks in parallel, using multiple OS threads across CPU cores.
Aspect | Concurrency | Parallelism |
---|---|---|
Concept | Structure / Design | Execution |
Need for multiple cores? | No | Yes |
Main Benefit | Responsiveness, scalability | Performance, speed |
In Go | Goroutines, channels | Goroutines + GOMAXPROCS or multi-core scheduling |
```
Concurrency:               Parallelism:
|---A---|                  CPU1: |---A---|
    |---B---|              CPU2: |---B---|
(tasks interleaved)        (tasks truly simultaneous)
```
Situation | Concurrency | Parallelism |
---|---|---|
Cooking dinner | One cook juggling multiple dishes | Several cooks each handling a dish |
Handling HTTP requests | Server interleaving many requests | Cluster of servers each processing one request |
Downloading files | One downloader pausing/resuming | Multiple downloaders each downloading a file |
In summary:
🧩 Concurrency is about dealing with lots of things at once. ⚡ Parallelism is about doing lots of things at once.
```go
package main

import (
	"fmt"
	"time"
)

func printNums() {
	for num := range 5 { // Go 1.22+: range over an integer
		fmt.Println(time.Now())
		fmt.Println(num)
		time.Sleep(500 * time.Millisecond)
	}
}

func printLetters() {
	for _, letter := range "ABCDE" {
		fmt.Println(time.Now())
		fmt.Println(string(letter))
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	go printNums()
	go printLetters()
	time.Sleep(3 * time.Second)
}

// 💡 Output:
// $ go run .
// 2025-10-13 16:57:28.5136966 +0530 IST m=+0.000603201
// A
// 2025-10-13 16:57:28.5136966 +0530 IST m=+0.000603201
// 0
// 2025-10-13 16:57:29.0151129 +0530 IST m=+0.502019501
// B
// 2025-10-13 16:57:29.0156584 +0530 IST m=+0.502565001
// 1
// 2025-10-13 16:57:29.5155282 +0530 IST m=+1.002434801
// C
// 2025-10-13 16:57:29.5162236 +0530 IST m=+1.003130201
// 2
// 2025-10-13 16:57:30.01689 +0530 IST m=+1.503796601
// 3
// 2025-10-13 16:57:30.01689 +0530 IST m=+1.503796601
// D
// 2025-10-13 16:57:30.5179877 +0530 IST m=+2.004894301
// 4
// 2025-10-13 16:57:30.5186667 +0530 IST m=+2.005573301
// E
```
```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// concurrency vs parallelism
func heavyTask(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("🟡 Task %d is starting..\n", id)
	// heavy ops. simulation:
	for range 100_000_000 {
	}
	fmt.Println(time.Now())
	fmt.Printf("✅ Task %d is finished!\n", id)
}

func main() {
	numOfThreads := 4
	runtime.GOMAXPROCS(numOfThreads)
	var wg sync.WaitGroup
	for i := range numOfThreads {
		wg.Add(1)
		heavyTask(i, &wg) // note: called synchronously — no `go` keyword
	}
	wg.Wait()
}
```
Let’s go through this program carefully, exactly as it is.
```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)
```
- `fmt` → for printing output to the console
- `runtime` → to control how many OS threads can run simultaneously (through `GOMAXPROCS`)
- `sync` → for synchronization (`WaitGroup`)
- `time` → for printing timestamps
So far, standard setup — no concurrency yet.
```go
func heavyTask(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("🟡 Task %d is starting..\n", id)
	for range 100_000_000 {
	}
	fmt.Println(time.Now())
	fmt.Printf("✅ Task %d is finished!\n", id)
}
```
- `id int` → the task number (0, 1, 2, 3).
- `wg *sync.WaitGroup` → used to signal when a task finishes.
- `defer wg.Done()` → when the function ends, it decrements the WaitGroup counter by one.
- `for range 100_000_000` → simulates heavy work with a busy loop. Since Go 1.22, `range` over an integer is valid Go: the loop body simply executes that many times. On older Go versions this was a compile error, and we would write the classic form instead:

```go
for i := 0; i < 100_000_000; i++ { }
```

On Go 1.22+, though, the `range`-over-integer form compiles and runs fine; it is just a CPU-burning loop.
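A tiny, self-contained sketch (assumes Go 1.22+) showing what range-over-an-integer does:

```go
package main

import "fmt"

func main() {
	// Go 1.22+: ranging over an integer yields 0, 1, ..., n-1
	for i := range 3 {
		fmt.Println(i) // prints 0, 1, 2
	}
	// Without the loop variable, it simply repeats n times
	for range 2 {
		fmt.Println("tick")
	}
}
```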
```go
func main() {
	numOfThreads := 4
	runtime.GOMAXPROCS(numOfThreads)
	var wg sync.WaitGroup
	for i := range numOfThreads {
		wg.Add(1)
		heavyTask(i, &wg)
	}
	wg.Wait()
}
```
- `numOfThreads := 4` → a variable holding the number of tasks we want (and the thread cap we pass to `GOMAXPROCS`).
- `runtime.GOMAXPROCS(numOfThreads)` → sets the maximum number of OS threads that can execute Go code simultaneously. In theory, this enables up to 4 parallel threads, but only if goroutines are actually created. Right now we spawn no extra goroutines, so this setting changes nothing noticeable.
- `var wg sync.WaitGroup` → creates a WaitGroup instance to wait for all tasks to finish.
- `for i := range numOfThreads` → again, `range` over an integer — valid since Go 1.22 and equivalent to:

```go
for i := 0; i < numOfThreads; i++ { }
```

So `i` takes the values 0 through 3.
- `heavyTask(i, &wg)` → here's the critical part: we call the function normally, not as a goroutine. That means:
- Task 0 runs fully → finishes → returns.
- Then Task 1 starts → finishes → returns.
- Then Task 2 → finishes → returns.
- Then Task 3 → finishes → returns.
So even though our variable is named `numOfThreads`, all four tasks run sequentially, not concurrently or in parallel.
✅ Each one prints start → timestamp → finish before the next starts.
- `wg.Wait()` → waits for all tasks to complete. But since every task runs synchronously, all tasks are already finished by the time we reach this line, so `wg.Wait()` returns instantly.
```
🟡 Task 0 is starting..
2025-10-13 17:21:19.818228 +0530 IST m=+0.054805601
✅ Task 0 is finished!
🟡 Task 1 is starting..
2025-10-13 17:21:19.8776103 +0530 IST m=+0.114187901
✅ Task 1 is finished!
🟡 Task 2 is starting..
2025-10-13 17:21:19.9311326 +0530 IST m=+0.167710201
✅ Task 2 is finished!
🟡 Task 3 is starting..
2025-10-13 17:21:19.9783995 +0530 IST m=+0.214977101
✅ Task 3 is finished!
```
This confirms that each task starts only after the previous one finishes. That’s why:
- The timestamps are only tens of milliseconds apart.
- We see no overlap between start and finish messages.
- Even though we set `GOMAXPROCS(4)`, we never told Go to run anything concurrently.
So — this program is purely sequential. Not concurrent, not parallel.
Concept | In this code |
---|---|
Concurrency | ❌ Not happening. Tasks are executed one-by-one. |
Parallelism | ❌ Not happening. Even though 4 cores are available, only one is used. |
Synchronization | ✅ WaitGroup is technically used, but unnecessary because all tasks are blocking. |
Task switching | ❌ No switching; main thread runs everything. |
CPU usage | Roughly 1 core used (100%). Others idle. |
- In Go, functions run synchronously unless we prefix the call with `go`. The `go` keyword tells Go's runtime scheduler to create a goroutine and run it concurrently.
- Without `go`, `heavyTask` executes in the same goroutine as `main`.
- `GOMAXPROCS(4)` only matters when there are multiple runnable goroutines. It tells the runtime how many can execute in parallel on OS threads. If we have just one goroutine (the main one), there's nothing to parallelize.
Concept | Explanation |
---|---|
What the code does | Runs four heavy tasks one after another. |
Why it’s sequential | Because we never used go before heavyTask(...) . |
What GOMAXPROCS does here | Nothing visible — no goroutines to schedule across threads. |
Why output timestamps increase | Each task runs fully before the next begins. |
Is there concurrency? | ❌ No. Only one goroutine. |
Is there parallelism? | ❌ No. Only one thread executing. |
Is it valid Go code? | ✅ Yes on Go 1.22+ — for range numOfThreads and for range 100_000_000 use range-over-an-integer, which older Go versions rejected. |
Even though we wanted to “run 4 heavy tasks in parallel,” our code actually runs:
🟡 Task 0 → ✅ Task 0 → 🟡 Task 1 → ✅ Task 1 → 🟡 Task 2 → ✅ Task 2 → 🟡 Task 3 → ✅ Task 3
— one after another, synchronously.
No real concurrency. No parallel execution. Just sequential function calls.
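For contrast, here is a hedged sketch of the concurrent version. The essential change is the `go` keyword in front of the call; with `GOMAXPROCS` left at its default (the number of CPUs), the goroutines can also run in parallel:

```go
package main

import (
	"fmt"
	"sync"
)

func heavyTask(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("🟡 Task %d is starting..\n", id)
	for range 100_000_000 { // busy-work simulation (Go 1.22+ range-over-int)
	}
	fmt.Printf("✅ Task %d is finished!\n", id)
}

func main() {
	var wg sync.WaitGroup
	for i := range 4 {
		wg.Add(1)
		go heavyTask(i, &wg) // `go` turns each call into a goroutine
	}
	wg.Wait() // now Wait actually has goroutines to wait for
}
```

With this version, all four "starting" lines typically print before any "finished" line: the tasks overlap instead of running back to back.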
Both Race Conditions and Deadlocks are classic concurrency hazards — they arise when multiple goroutines (or threads) interact with shared resources like variables, memory, or locks without proper synchronization.
A race condition occurs when:
Two or more goroutines access the same shared resource (like a variable or memory) concurrently, and at least one of them writes to it, leading to unpredictable or incorrect results.
Essentially, the output depends on the timing of the goroutines — which one “wins the race” to execute first.
Imagine two people trying to withdraw ₹500 from the same bank account (₹800 balance) at the same time.
- Person A checks → balance = ₹800
- Person B checks → balance = ₹800
- Both withdraw ₹500 simultaneously
- New balances (incorrectly) become ₹300 + ₹300 = ₹600 (should’ve been ₹300 total)
This happens because both operations read before the other writes → a classic race.
```go
package main

import (
	"fmt"
	"sync"
)

var counter = 0

func increment(wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < 1000; i++ {
		counter++ // ❌ not synchronized
	}
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go increment(&wg)
	go increment(&wg)
	wg.Wait()
	fmt.Println("Final Counter:", counter)
}
```
We might expect:
Final Counter: 2000
But actual results vary:
Final Counter: 1764
or
Final Counter: 1908
Because both goroutines modify the shared variable at the same time → lost updates.
Use mutexes (locks) or channels to control access.
```go
var counter = 0
var mu sync.Mutex

func increment(wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < 1000; i++ {
		mu.Lock()
		counter++
		mu.Unlock()
	}
}
```
Now only one goroutine can update `counter` at a time → deterministic, safe.
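For a plain counter like this, the `sync/atomic` package is a lighter-weight alternative to a mutex. A minimal sketch using the typed atomics added in Go 1.19:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

var counter atomic.Int64 // typed atomic counter (Go 1.19+)

func increment(wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < 1000; i++ {
		counter.Add(1) // atomic read-modify-write; no lock needed
	}
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go increment(&wg)
	go increment(&wg)
	wg.Wait()
	fmt.Println("Final Counter:", counter.Load()) // always 2000
}
```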
Go provides a built-in race detector:
$ go run -race main.go
This will show warnings like:
WARNING: DATA RACE
whenever unsynchronized access occurs.
A deadlock occurs when:
Two or more goroutines are waiting indefinitely for each other to release a resource or lock, so none of them ever proceed.
Essentially, it’s like mutual blocking — everyone’s waiting, nobody moves.
Imagine two people trying to pick up two spoons to eat:
- Person A picks Spoon 1 and waits for Spoon 2
- Person B picks Spoon 2 and waits for Spoon 1
- Both are stuck forever waiting for the other → deadlock.
```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var mu1, mu2 sync.Mutex
	var wg sync.WaitGroup
	wg.Add(2)

	go func() {
		defer wg.Done()
		mu1.Lock()
		fmt.Println("Goroutine 1 locked mu1")
		mu2.Lock()
		fmt.Println("Goroutine 1 locked mu2")
		mu2.Unlock()
		mu1.Unlock()
	}()

	go func() {
		defer wg.Done()
		mu2.Lock()
		fmt.Println("Goroutine 2 locked mu2")
		mu1.Lock()
		fmt.Println("Goroutine 2 locked mu1")
		mu1.Unlock()
		mu2.Unlock()
	}()

	wg.Wait()
}
```
```
Goroutine 1 locked mu1
Goroutine 2 locked mu2
```
…and then it hangs forever (no further output). Because:
- Goroutine 1 → waits for `mu2` (held by Goroutine 2)
- Goroutine 2 → waits for `mu1` (held by Goroutine 1)
- ❌ Neither can proceed → deadlock.
- Lock ordering → always acquire locks in a fixed order (e.g., always `mu1` then `mu2`).
- TryLock pattern (non-blocking locks).
- Use channels → Go channels can help avoid shared-state locking issues.
Example (fixed ordering):
```go
mu1.Lock()
mu2.Lock()
// critical section
mu2.Unlock()
mu1.Unlock()
```
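Since Go 1.18, `sync.Mutex` also offers `TryLock()`, which enables a back-off pattern instead of blocking. A minimal sketch (it assumes `mu1`, `mu2 sync.Mutex` and the `time` import; the retry sleep is an illustrative choice):

```go
// Acquire mu1, then *try* mu2; if mu2 is busy, release mu1 and retry.
// Releasing and retrying breaks the circular wait that causes deadlock.
for {
	mu1.Lock()
	if mu2.TryLock() {
		break // got both locks
	}
	mu1.Unlock()                 // back off so the other goroutine can proceed
	time.Sleep(time.Millisecond) // brief pause before retrying
}
// critical section using both resources
mu2.Unlock()
mu1.Unlock()
```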
Concept | Description | Result | Prevention |
---|---|---|---|
Race Condition | Multiple goroutines access shared data simultaneously | Unpredictable or incorrect results | Synchronize using Mutex or Channels |
Deadlock | Goroutines wait forever for each other’s locks | Program freezes | Enforce lock ordering or use Channels |
🧠 Summary:
- A race condition = data conflict due to unsynchronized access.
- A deadlock = circular waiting where no one proceeds.
- Both arise from poor concurrency control.
- In Go, mutexes, channels, and lock ordering are key tools to prevent them.
Let’s go through the following program line by line, then understand the conceptual flow, what could go wrong, and why our output looks like that.
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// deadlocks - careful lock-ordering in concurrent programming
// Go's runtime only detects a deadlock when *all* goroutines are blocked;
// partial deadlocks need profiling and debugging tools.
// use the runtime stack to understand the code

func main() {
	var mu1, mu2 sync.Mutex

	go func() {
		mu1.Lock()
		fmt.Println("🔐 Goroutine 1 locked mu1.")
		time.Sleep(time.Second)
		mu2.Lock() // blocking
		fmt.Println("🔐 Goroutine 1 locked mu2.")
		mu1.Unlock()
		mu2.Unlock()
		fmt.Println("☑️ Goroutine 1 finished.")
	}()

	go func() {
		//mu2.Lock()
		mu1.Lock()
		fmt.Println("🔏 Goroutine 2 locked mu2.")
		time.Sleep(time.Second)
		// mu1.Lock()
		mu2.Lock() // blocking
		fmt.Println("🔏 Goroutine 2 locked mu1.")
		mu2.Unlock()
		mu1.Unlock()
		fmt.Println("☑️ Goroutine 2 finished.")
	}()

	// sleep the main thread
	time.Sleep(4 * time.Second)
	fmt.Println("✅ Main func() completed.")
	// select{}
}

// 💡 Output:
// $ go run .
// 🔏 Goroutine 2 locked mu2.
// 🔏 Goroutine 2 locked mu1.
// ☑️ Goroutine 2 finished.
// 🔐 Goroutine 1 locked mu1.
// 🔐 Goroutine 1 locked mu2.
// ☑️ Goroutine 1 finished.
// ✅ Main func() completed.
```
We have two shared locks (mutexes):
```go
var mu1, mu2 sync.Mutex
```
and two goroutines, each locking them in potentially different orders — which is how deadlocks can occur.
A deadlock happens when:
Two or more goroutines each hold a lock, and are waiting for each other’s lock to be released — so neither can proceed.
🔁 This is a circular wait:
- Goroutine 1 → holds `mu1`, waits for `mu2`
- Goroutine 2 → holds `mu2`, waits for `mu1`
- ❌ Neither can progress — the program hangs forever.
Let’s look carefully at both goroutines.
```go
go func() {
	mu1.Lock()
	fmt.Println("🔐 Goroutine 1 locked mu1.")
	time.Sleep(time.Second)
	mu2.Lock() // blocking if mu2 is already held by someone
	fmt.Println("🔐 Goroutine 1 locked mu2.")
	mu1.Unlock()
	mu2.Unlock()
	fmt.Println("☑️ Goroutine 1 finished.")
}()
```
So:
- Locks mu1
- Waits 1 second
- Then tries to lock mu2
- Then unlocks both and finishes
```go
go func() {
	//mu2.Lock()
	mu1.Lock()
	fmt.Println("🔏 Goroutine 2 locked mu2.")
	time.Sleep(time.Second)
	// mu1.Lock()
	mu2.Lock() // blocking
	fmt.Println("🔏 Goroutine 2 locked mu1.")
	mu2.Unlock()
	mu1.Unlock()
	fmt.Println("☑️ Goroutine 2 finished.")
}()
```
💡 Notice the commented lines:
```go
//mu2.Lock()
// mu1.Lock()
```
That means both goroutines actually lock in the same order: first `mu1`, then `mu2`.
Even though the print message says `"locked mu2"`, it's actually locking `mu1` first!
Let’s imagine both goroutines start roughly together.
Time | Goroutine 1 | Goroutine 2 | Event |
---|---|---|---|
t=0s | Locks mu1 | Waiting for mu1 | 1 got mu1 , 2 is blocked |
t=1s | Still holds mu1 , tries to lock mu2 | Still waiting for mu1 | 1 gets mu2 (since no one holds it) |
t=1.1s | Unlocks both | — | 1 finishes |
t=1.2s | — | Locks mu1 | 2 gets mu1 |
t=2.2s | — | Locks mu2 | 2 gets mu2 |
t=2.3s | — | Unlocks both | 2 finishes |
t=4s | — | — | main exits |
✅ Both goroutines finish, no deadlock.
🔏 Goroutine 2 locked mu2.
This line is misleading because the print message is incorrect — it should say:
fmt.Println("🔏 Goroutine 2 locked mu1.")
Since we actually called `mu1.Lock()` first, not `mu2.Lock()`.
So, the correct output should have been:
```
🔏 Goroutine 2 locked mu1.
🔐 Goroutine 1 locked mu1.
```
etc.
That’s why the printed sequence looks reversed from the actual locks.
If we uncomment those two lines to reverse the order of locking for goroutine 2:
```go
go func() {
	mu2.Lock()
	fmt.Println("🔏 Goroutine 2 locked mu2.")
	time.Sleep(time.Second)
	mu1.Lock() // now reversed
	fmt.Println("🔏 Goroutine 2 locked mu1.")
	mu2.Unlock()
	mu1.Unlock()
}()
```
Now we have:
- Goroutine 1 locks `mu1 → mu2`
- Goroutine 2 locks `mu2 → mu1`
This is classic deadlock potential:
- Goroutine 1 holds `mu1` and waits for `mu2`
- Goroutine 2 holds `mu2` and waits for `mu1`
- 🔒 Both are waiting forever — the program hangs indefinitely
1. Always lock in a consistent order → if one goroutine locks `mu1` then `mu2`, all others should do the same.
2. Use the `TryLock()` pattern (non-blocking locks — `sync.Mutex.TryLock()` exists since Go 1.18, in the `sync` package, not `sync/atomic`).
3. Use `defer mu.Unlock()` immediately after each lock to ensure cleanup.
4. Use higher-level concurrency tools:
   - `sync.RWMutex`
   - `sync.WaitGroup`
   - channels instead of shared locks
5. Detect and debug deadlocks:
   - `go run -race` finds data races (not deadlocks); the runtime itself only panics with "all goroutines are asleep - deadlock!" when every goroutine is blocked.
   - Use `runtime.Stack()` dumps to inspect goroutine states.
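A minimal sketch of such a stack dump using `runtime.Stack`, useful for spotting goroutines stuck in `sync.Mutex.Lock`:

```go
package main

import (
	"fmt"
	"runtime"
)

// dumpGoroutines prints the stacks of all goroutines.
func dumpGoroutines() {
	buf := make([]byte, 1<<16)    // 64 KiB buffer; enough for small programs
	n := runtime.Stack(buf, true) // true = include all goroutines
	fmt.Printf("%s\n", buf[:n])
}

func main() {
	// In a real program, call this from a debug endpoint or signal handler
	// while the suspected deadlock is happening.
	dumpGoroutines()
}
```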
Concept | Meaning |
---|---|
sync.Mutex | Used to protect shared data from concurrent access |
Lock order | Critical to prevent deadlocks |
Deadlock | When goroutines hold locks in conflicting order and wait forever |
Solution | Always maintain consistent lock ordering |
- Our program does not deadlock because both goroutines lock `mu1 → mu2` in the same order.
- The misleading print statements make it look like they lock differently.
- If we uncomment those lines and make one goroutine lock `mu2 → mu1`, we'll trigger an actual deadlock — a key lesson in Go concurrency.
Understanding `sync.RWMutex` is a major leap toward mastering Go's concurrency. It's one of Go's most efficient synchronization primitives, built on top of `sync.Mutex` — but with fine-grained control over readers and writers 🔒📖✍️.
A `sync.Mutex` (mutual exclusion lock) allows only one goroutine to access a resource at a time — no matter whether it's reading or writing.
That’s fine for safety… but inefficient when many goroutines only need to read shared data (since reads don’t modify anything).
```go
var mu sync.Mutex
var data = make(map[string]string)

func read(key string) string {
	mu.Lock() // ❌ blocks even if it's just reading
	defer mu.Unlock()
	return data[key]
}
```
Even if 100 goroutines just want to read, they all serialize — one at a time. No concurrency benefit.
Go gives us the Read–Write Mutex: `sync.RWMutex`.

```go
var rw sync.RWMutex
```
It has two kinds of locks:
Lock Type | Method | Meaning |
---|---|---|
🔒 Write Lock | rw.Lock() | Exclusive — only one writer at a time, no readers allowed |
📖 Read Lock | rw.RLock() | Shared — multiple readers allowed concurrently |
Scenario | Allowed? | Description |
---|---|---|
Multiple readers (RLock ) | ✅ Yes | Many goroutines can read at the same time |
Reader + Writer | ❌ No | Writer blocks until all readers finish |
Multiple writers (Lock ) | ❌ No | Only one writer at a time |
Writer + Reader | ❌ No | Readers block until writer is done |
So, RWMutex favors concurrency for reads, while keeping safety for writes.
Let’s see a simple example 👇
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	rw   sync.RWMutex
	data = 0
)

func read(id int) {
	rw.RLock() // multiple readers can acquire this
	fmt.Printf("📖 Reader %d: reading data = %d\n", id, data)
	time.Sleep(500 * time.Millisecond)
	rw.RUnlock()
}

func write(id int, val int) {
	rw.Lock() // only one writer at a time
	fmt.Printf("✍️ Writer %d: writing data = %d\n", id, val)
	data = val
	time.Sleep(1 * time.Second)
	rw.Unlock()
}

func main() {
	for i := 1; i <= 3; i++ {
		go read(i)
	}
	go write(1, 42)
	for i := 4; i <= 6; i++ {
		go read(i)
	}
	time.Sleep(3 * time.Second)
}
```
```
📖 Reader 1: reading data = 0
📖 Reader 2: reading data = 0
📖 Reader 3: reading data = 0
✍️ Writer 1: writing data = 42
📖 Reader 4: reading data = 42
📖 Reader 5: reading data = 42
📖 Reader 6: reading data = 42
```
- Readers 1–3 acquired the read lock concurrently (`RLock()`).
- The writer (`Lock()`) had to wait until all readers called `RUnlock()`.
- The writer updated `data`.
- Readers 4–6 started reading after the writer released the write lock.
✅ Efficient: All readers ran in parallel. ✅ Safe: Writer still had exclusive access.
Feature | sync.Mutex | sync.RWMutex |
---|---|---|
Allows concurrent readers | ❌ No | ✅ Yes |
Allows multiple writers | ❌ No | ❌ No |
Reader–writer blocking | N/A | Readers block writers and vice versa |
Ideal for | Mostly writes | Mostly reads |
API | Lock() / Unlock() | Lock() / Unlock() + RLock() / RUnlock() |
So:
- If most goroutines only read, use `RWMutex`.
- If most goroutines write, a plain `Mutex` is simpler (less overhead).
`RWMutex` internally maintains two counters and one mutex:

- `readerCount` → number of readers currently holding the read lock
- `readerWait` → number of readers a writer is waiting on
- `mutex` → protects these counters and ensures write exclusivity
- `RLock()` increments `readerCount`:
  - If no writer is waiting → proceed
  - If a writer is pending → block until the writer finishes
- `Lock()`:
  - Blocks new readers
  - Waits for `readerCount == 0`
  - Then acquires exclusive control
So a writer waits for all readers to finish, and new readers wait for writers.
- ❌ Forgetting to call `RUnlock()` for every `RLock()` → causes deadlocks because the writer never gets a chance.
- ❌ Mixing up `Lock()` and `RLock()` → easy to forget which one to use, especially in nested functions.
- ❌ Using RWMutex when reads are rare → incurs unnecessary overhead (since RWMutex has more bookkeeping).
- ❌ Unlocking without locking → causes a panic: `sync: unlock of unlocked RWMutex`
✅ Ideal for:
- In-memory caches (frequent reads, occasional writes)
- Configuration data shared by many goroutines
- Real-time dashboards (many consumers, few updates)
- Rate-limiters or statistics counters
```go
type SafeMap struct {
	m  map[string]int
	mu sync.RWMutex
}

func (s *SafeMap) Get(key string) (int, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	val, ok := s.m[key]
	return val, ok
}

func (s *SafeMap) Set(key string, val int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[key] = val
}
```
- Multiple goroutines can call `Get()` concurrently.
- Only one goroutine can `Set()` at a time.
- Readers don't block each other, but writers block everyone.
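A quick usage sketch under the same assumptions (the `NewSafeMap` constructor and the demo `main` are ours, added so the map is initialized before use; `fmt` and `sync` are assumed imported):

```go
func NewSafeMap() *SafeMap {
	return &SafeMap{m: make(map[string]int)}
}

func main() {
	sm := NewSafeMap()
	sm.Set("hits", 1)

	var wg sync.WaitGroup
	for range 5 { // five concurrent readers, all safe thanks to RLock
		wg.Add(1)
		go func() {
			defer wg.Done()
			if v, ok := sm.Get("hits"); ok {
				fmt.Println("hits =", v)
			}
		}()
	}
	wg.Wait()
}
```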
Concept | Description |
---|---|
RLock() / RUnlock() | Shared read access for multiple goroutines |
Lock() / Unlock() | Exclusive write access |
Writer waiting behavior | Waits for all readers to release |
Reader waiting behavior | Waits if a writer is active |
When to use | Many reads, few writes |
Benefit | Greatly improves read-heavy concurrency performance |
The following program demonstrates how a `sync.RWMutex` (Read–Write Mutex) allows:
- Multiple goroutines to read the same data at once ✅
- But only one goroutine to write at a time, blocking all others 🔒
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// RWMutex

// global vars
var (
	rwmu    sync.RWMutex
	counter int
)

func readCounter(wg *sync.WaitGroup) {
	defer wg.Done()
	rwmu.RLock()
	fmt.Println("Read Counter:", counter)
	rwmu.RUnlock()
}

func writeCounter(wg *sync.WaitGroup, val int) {
	defer wg.Done()
	rwmu.Lock()
	counter = val
	fmt.Printf("Written value %d for counter:", val)
	rwmu.Unlock()
}

func main() {
	var wg sync.WaitGroup
	for range 5 {
		wg.Add(1)
		go readCounter(&wg)
	}
	wg.Add(1)
	time.Sleep(time.Second) // wait..
	go writeCounter(&wg, 18)
	wg.Wait()
}

// 💡 Output:
// $ go run .
// Read Counter: 0
// Read Counter: 0
// Read Counter: 0
// Read Counter: 0
// Read Counter: 0
// Written value 18 for counter:
```
Here, we have:
- 5 goroutines reading a shared variable `counter`
- 1 goroutine writing a new value into `counter`
```go
var (
	rwmu    sync.RWMutex
	counter int
)
```
- `rwmu` → protects the shared variable `counter`.
- `counter` → shared data, initially `0`.
`RWMutex` provides two locking modes:

- `RLock()` — shared lock (multiple readers can hold it simultaneously).
- `Lock()` — exclusive lock (only one writer can hold it; it blocks all readers).
```go
func readCounter(wg *sync.WaitGroup) {
	defer wg.Done()
	rwmu.RLock()
	fmt.Println("Read Counter:", counter)
	rwmu.RUnlock()
}
```
Let’s break it down:
- `wg.Done()` → signals completion of the goroutine to the main thread when it finishes.
- `rwmu.RLock()` → acquires the read lock.
  - Multiple goroutines can hold this lock concurrently.
  - It does not block other readers.
  - But if a writer has taken `Lock()`, readers must wait.
- Reads the shared variable `counter`.
- `RUnlock()` → releases the read lock.
🧠 Key idea: Since all five goroutines are only reading, they’ll all run in parallel without waiting for one another.
```go
func writeCounter(wg *sync.WaitGroup, val int) {
	defer wg.Done()
	rwmu.Lock()
	counter = val
	fmt.Printf("Written value %d for counter:", val)
	rwmu.Unlock()
}
```
Breakdown:
- `wg.Done()` — notifies completion.
- `rwmu.Lock()` — acquires exclusive access.
  - Blocks until no readers are holding `RLock()`.
  - Prevents new readers from entering until unlocked.
- Updates the shared variable `counter` with the new value `val`.
- Prints the message.
- `rwmu.Unlock()` — releases the lock, allowing other goroutines to proceed.
🧠 Key idea: Only one goroutine can hold `Lock()` at a time — no reader or writer can interfere during this.
```go
func main() {
	var wg sync.WaitGroup
	for range 5 {
		wg.Add(1)
		go readCounter(&wg)
	}
	wg.Add(1)
	time.Sleep(time.Second) // wait..
	go writeCounter(&wg, 18)
	wg.Wait()
}
```
Let’s analyze what happens here chronologically 👇
The loop runs 5 times: each iteration spawns a new goroutine executing `readCounter()`.

Each of these 5 goroutines:

1. Calls `RLock()` — all succeed concurrently (because no writer exists yet).
2. Reads the shared `counter` value (`0` initially).
3. Prints `Read Counter: 0`.
4. Releases the read lock.
Since `RWMutex` allows multiple concurrent readers, all 5 print operations happen almost instantly, possibly in random order.
time.Sleep(time.Second)
The main goroutine sleeps for 1 second before starting the writer. This gives the reader goroutines enough time to finish their work.
So, all 5 readers complete, and their `RUnlock()` calls release the read locks. Now, no goroutine holds the mutex.
go writeCounter(&wg, 18)
After the `Sleep`, one writer goroutine starts. It calls `rwmu.Lock()` and acquires exclusive access because no readers are active now.
Then it updates:
counter = 18
And prints:
Written value 18 for counter:
It finally unlocks with `rwmu.Unlock()`, and the deferred `wg.Done()` marks the writer as finished.
`wg.Wait()` blocks the main goroutine until all 6 goroutines (5 readers + 1 writer) finish their work. Once they all call `Done()`, `wg.Wait()` unblocks, and the program ends gracefully.
```
Read Counter: 0
Read Counter: 0
Read Counter: 0
Read Counter: 0
Read Counter: 0
Written value 18 for counter:
```
- All readers start almost simultaneously → they all print `0`.
- The writer starts later (after `Sleep(1s)`) → it prints after them.
- There's no "read-after-write" in this example, so no reader sees `18`.
✅ The readers read before the writer writes. ✅ The writer writes after all readers are done.
Hence, we see 5 reads of `0` → then a single write of `18`.
Operation | Can multiple goroutines do it? | Blocks whom? |
---|---|---|
RLock() (read) | ✅ Yes | Blocks writers |
Lock() (write) | ❌ No | Blocks readers + writers |
RUnlock() | releases reader lock | lets writer proceed |
Unlock() | releases writer lock | lets readers proceed |
Concept | Description |
---|---|
sync.RWMutex | Provides both shared (read) and exclusive (write) locking. |
Readers | Multiple can access concurrently if no writer is active. |
Writers | Only one writer can hold the lock; blocks everyone else. |
This program shows | 5 concurrent readers accessing safely, then 1 exclusive writer updating data. |
Output order | 5 reads of 0 → 1 write of 18. |
So, our code demonstrates reader concurrency + writer exclusivity perfectly. If we later add another group of readers after the writer, they'll read the updated value (18), showing how `RWMutex` synchronizes consistent state across goroutines.
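A hedged sketch of that extension, continuing the `main()` above (the extra sleep is only to make the ordering likely, not guaranteed):

```go
// ...after go writeCounter(&wg, 18) in main():
time.Sleep(time.Second) // give the writer time to finish (illustrative only)

for range 3 {
	wg.Add(1)
	go readCounter(&wg) // these readers now print: Read Counter: 18
}
wg.Wait()
```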
Let's dive deep into `sync.NewCond` in Go, step by step, with practical understanding, internal working, and when to use it.
`sync.NewCond` provides a way for goroutines to wait until a certain condition becomes true.
It’s a low-level synchronization primitive that coordinates goroutines based on events rather than shared locks or channels.
While `Mutex` and `RWMutex` synchronize access to shared data, `Cond` synchronizes the timing of goroutines — it lets them wait for a condition to occur.
```go
func NewCond(l Locker) *Cond
```
- `Locker` is any type that implements `Lock()` and `Unlock()` — usually a `sync.Mutex` or `sync.RWMutex`.
- It returns a pointer to a `sync.Cond` structure.
Example:
```go
var mu sync.Mutex
cond := sync.NewCond(&mu)
```
Conceptually, `sync.Cond` looks like this:
```go
type Cond struct {
	L       Locker // the associated lock
	notify  notifyList
	checker copyChecker
}
```
It maintains:
- A list of goroutines waiting for a condition.
- A locker that guards access to shared state.
- Wait/Signal/Broadcast methods to control synchronization.
cond.Wait()
- The goroutine unlocks the associated lock (so others can modify the shared condition).
- It then suspends execution — putting itself into the wait queue.
- When it's woken up via `Signal` or `Broadcast`, it relocks the mutex before returning.
So `Wait()` always follows this pattern:
```go
cond.L.Lock()
for !condition {
	cond.Wait()
}
// ...condition holds here...
cond.L.Unlock()
```
🧠 Why the loop? Because a goroutine might wake up spuriously (without condition being true). So we recheck the condition before proceeding.
cond.Signal()
- Wakes up one waiting goroutine (if any).
- The selected goroutine will reacquire the lock before returning from `Wait()`.
Use this when only one waiter should proceed.
cond.Broadcast()
- Wakes up all waiting goroutines.
- Each goroutine will try to reacquire the lock in turn.
Use this when a global event occurs (e.g., shared state has changed for all waiters).
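A minimal sketch of `Broadcast`: several goroutines wait on a single `ready` flag (a name we introduce here), and one announcer wakes them all:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	mu    sync.Mutex
	cond  = sync.NewCond(&mu)
	ready bool
)

func waiter(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	mu.Lock()
	for !ready { // recheck after every wakeup
		cond.Wait()
	}
	mu.Unlock()
	fmt.Printf("worker %d: got the go-ahead\n", id)
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go waiter(i, &wg)
	}

	time.Sleep(100 * time.Millisecond) // let the waiters block (illustrative)
	mu.Lock()
	ready = true
	cond.Broadcast() // wake *all* waiters at once
	mu.Unlock()

	wg.Wait()
}
```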
Let's take a simple Go example that shows `sync.Cond` in action:
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	queue = make([]int, 0, 10)
	mu    sync.Mutex
	cond  = sync.NewCond(&mu)
)

func main() {
	// Consumer goroutines
	for i := 1; i <= 3; i++ {
		go consumer(i)
	}

	// Producer
	for i := 1; i <= 10; i++ {
		time.Sleep(500 * time.Millisecond)
		mu.Lock()
		queue = append(queue, i)
		fmt.Printf("Produced: %d\n", i)
		cond.Signal() // wake one waiting consumer
		mu.Unlock()
	}

	time.Sleep(3 * time.Second)
}

func consumer(id int) {
	for {
		mu.Lock()
		for len(queue) == 0 {
			cond.Wait()
		}
		item := queue[0]
		queue = queue[1:]
		fmt.Printf("Consumer %d consumed: %d\n", id, item)
		mu.Unlock()
	}
}
```
1. Consumers start first → they `Lock()`, find the queue empty, and call `cond.Wait()`.
2. Each consumer releases the lock and goes into a waiting state.
3. The producer adds an item to the queue and calls `cond.Signal()` → wakes one waiting consumer.
4. The woken consumer reacquires the lock, consumes the item, and unlocks.
5. The cycle repeats.
✅ Use `sync.Cond` when:

- We need goroutines to wait for a specific state change.
- Channels are not sufficient or efficient, e.g., when coordinating many waiters on shared memory.
- We need fine-grained control over wake-ups (`Signal` vs `Broadcast`).
❌ Don’t use it when:
- We can achieve the same with channels (simpler, safer, idiomatic).
- Our logic doesn’t require multiple waiting goroutines on the same condition.
Inside Go's runtime:

- `Wait()` uses runtime park/unpark (suspending a goroutine efficiently).
- `Signal()`/`Broadcast()` use a notifyList — a queue of waiting goroutines.
- Woken goroutines must reacquire the lock before proceeding — ensuring safety (`Signal` wakes one waiter, `Broadcast` wakes all).
Imagine a waiting room outside a doctor’s cabin (the mutex):
- All patients (goroutines) wait until the nurse (the condition variable) announces that the doctor is ready.
- When nurse calls Signal(), one patient enters.
- When nurse calls Broadcast(), all patients can prepare to enter, one by one, as doctor (lock) becomes available.
Method | Purpose | Behavior |
---|---|---|
Wait() | Suspend goroutine until signaled | Releases & reacquires lock |
Signal() | Wake one waiter | Used for single event |
Broadcast() | Wake all waiters | Used for global change |
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// sync.NewCond() - Cond implements a condition variable, a rendezvous point
// for goroutines waiting for or announcing the occurrence of an event.

const bufferSize = 5

type Buffer struct {
	items []int
	mu    sync.Mutex
	cond  sync.Cond
}

func NewBuffer(size int) *Buffer {
	b := &Buffer{items: make([]int, 0, size)}
	b.cond = *sync.NewCond(&b.mu)
	return b
}

func (b *Buffer) Produce(item int) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.items) == bufferSize {
		b.cond.Wait()
	}
	b.items = append(b.items, item)
	fmt.Println("🔵 Produced:", item)
	b.cond.Signal()
}

func (b *Buffer) Consume() int {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.items) == 0 {
		b.cond.Wait() // this goroutine stops and waits for the producer to append to the slice
	}
	item := b.items[0]    // consuming
	b.items = b.items[1:] // extracting
	fmt.Println("🟢 Consumed:", item)
	b.cond.Signal()
	return item
}

func Producer(b *Buffer, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := range 10 {
		b.Produce(i + 100)
		time.Sleep(100 * time.Millisecond)
	}
}

func Consumer(b *Buffer, wg *sync.WaitGroup) {
	defer wg.Done()
	for range 10 {
		b.Consume()
		time.Sleep(200 * time.Millisecond)
	}
}

func main() {
	buffer := NewBuffer(bufferSize)
	var wg sync.WaitGroup
	wg.Add(2)
	go Producer(buffer, &wg)
	go Consumer(buffer, &wg)
	wg.Wait()
}

// 💡 Output:
// $ go run .
// 🔵 Produced: 100
// 🟢 Consumed: 100
// 🔵 Produced: 101
// 🔵 Produced: 102
// 🟢 Consumed: 101
// 🔵 Produced: 103
// 🟢 Consumed: 102
// 🔵 Produced: 104
// 🔵 Produced: 105
// 🟢 Consumed: 103
// 🔵 Produced: 106
// 🔵 Produced: 107
// 🟢 Consumed: 104
// 🔵 Produced: 108
// 🔵 Produced: 109
// 🟢 Consumed: 105
// 🟢 Consumed: 106
// 🟢 Consumed: 107
// 🟢 Consumed: 108
// 🟢 Consumed: 109
```
Our program is a classic Producer–Consumer implementation using `sync.Cond` (condition variable). Here's what's happening conceptually:
- We have a shared buffer (slice) of limited capacity (`bufferSize = 5`).
- The producer goroutine keeps adding integers to the buffer.
- The consumer goroutine keeps removing integers from the buffer.
- But they must coordinate:
  - The producer must wait if the buffer is full.
  - The consumer must wait if the buffer is empty.
That coordination is handled using:

- `sync.Mutex` — for exclusive access (only one goroutine can touch `items` at a time).
- `sync.Cond` — to signal and wait when conditions change.
```go
type Buffer struct {
	items []int
	mu    sync.Mutex
	cond  sync.Cond
}
```
It has:

- `items`: a slice representing the queue of data.
- `mu`: a mutex lock so only one goroutine can modify `items` at a time.
- `cond`: a condition variable, created from `sync.NewCond(&b.mu)`, that lets goroutines wait and signal based on conditions.
```go
func NewBuffer(size int) *Buffer {
	b := &Buffer{items: make([]int, 0, size)}
	b.cond = *sync.NewCond(&b.mu)
	return b
}
```
- `make([]int, 0, size)` creates a slice with length 0 and capacity `size` (5 here).
- `sync.NewCond(&b.mu)` binds the condition variable to the same mutex (`b.mu`), so waiting/unlocking happens safely on the same lock.
- We store this Cond in our struct.
```go
func (b *Buffer) Produce(item int) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.items) == bufferSize {
		b.cond.Wait()
	}
	b.items = append(b.items, item)
	fmt.Println("🔵 Produced:", item)
	b.cond.Signal()
}
```
Let’s decode this step-by-step:
1. Lock the mutex → the producer must have exclusive access before modifying the buffer.
2. Check whether the buffer is full:

   ```go
   for len(b.items) == bufferSize { ... }
   ```

   - If it's full, the producer can't add anything more.
   - So it calls `b.cond.Wait()`.
3. What does `Wait()` do?
   - It atomically unlocks `b.mu` and suspends the goroutine.
   - The goroutine is now sleeping, waiting for a signal.
   - When another goroutine (the consumer) later calls `Signal()` or `Broadcast()`, this waiting goroutine wakes up.
   - Upon waking, `Wait()` automatically re-locks the mutex before returning.

   ⚠️ This is very important — `Wait()` guarantees that when it resumes, it holds the lock again, so no race occurs.
4. Once the buffer has space: after waking and confirming there's room (the `for` loop re-checks), the producer appends the new item:

   ```go
   b.items = append(b.items, item)
   ```

   and prints `"🔵 Produced: <item>"`.
5. `b.cond.Signal()`:
   - Signals (wakes up) one goroutine that's waiting on this same condition.
   - In this case, it wakes the consumer, which may be waiting because the buffer was empty earlier.
   - It doesn't unlock immediately — the unlock happens later when `defer b.mu.Unlock()` runs.

So the producer either:

- Waits if the buffer is full, or
- Adds an item and signals the consumer.
```go
func (b *Buffer) Consume() int {
	b.mu.Lock()
	defer b.mu.Unlock()
	for len(b.items) == 0 {
		b.cond.Wait()
	}
	item := b.items[0]
	b.items = b.items[1:]
	fmt.Println("🟢 Consumed:", item)
	b.cond.Signal()
	return item
}
```
Now, the consumer’s logic mirrors the producer:
1. Lock the buffer before reading/removing items.
2. Check whether the buffer is empty:
   - If `len(b.items) == 0`, there's nothing to consume.
   - So the consumer waits: `b.cond.Wait()` → this releases the lock and suspends the goroutine.
3. When awakened: the consumer wakes up once the producer calls `b.cond.Signal()` after producing something. It then re-locks the mutex and continues.
4. Consume the item:

   ```go
   item := b.items[0]
   b.items = b.items[1:]
   ```

   - Removes the first element from the slice.
   - This is like taking the first product off the shelf.
5. Print the consumed item.
6. Signal the producer:
   - Now that space is available in the buffer, the consumer calls `b.cond.Signal()`.
   - This wakes up one waiting producer, allowing it to produce the next item.
7. Unlock automatically (due to `defer`) and return the consumed item.
```go
func Producer(b *Buffer, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := range 10 {
		b.Produce(i + 100)
		time.Sleep(100 * time.Millisecond)
	}
}
```
- Produces numbers from 100 to 109.
- Sleeps a bit (100ms) between each production, simulating time delay.
```go
func Consumer(b *Buffer, wg *sync.WaitGroup) {
	defer wg.Done()
	for range 10 {
		b.Consume()
		time.Sleep(200 * time.Millisecond)
	}
}
```
- Consumes 10 items.
- Sleeps longer (200ms), so it’s slower than the producer.
- Because of that, the buffer sometimes fills up, forcing the producer to wait.
```go
func main() {
	buffer := NewBuffer(bufferSize)
	var wg sync.WaitGroup
	wg.Add(2)
	go Producer(buffer, &wg)
	go Consumer(buffer, &wg)
	wg.Wait()
}
```
- Creates a `buffer` with a size of 5.
- Adds 2 to the wait group (one for the producer, one for the consumer).
- Runs both as goroutines.
- Waits for both to finish (after producing and consuming 10 items each).
Let’s visualize roughly:
Time | Event | Explanation |
---|---|---|
0ms | 🔵 Producer adds 100 | Buffer = [100] |
100ms | 🟢 Consumer takes 100 | Buffer = [] |
200ms | 🔵 Producer adds 101 | Buffer = [101] |
300ms | 🔵 Producer adds 102 | Buffer = [101, 102] |
400ms | 🟢 Consumer takes 101 | Buffer = [102] |
… | … | … |
Because the producer is faster (100ms) and the consumer slower (200ms), sometimes:
- Buffer fills up (5 items), so producer Waits.
- Then when consumer removes one, it Signals, waking producer.
This is why our output alternates between "Produced" and "Consumed" lines — but not perfectly; timing differences can cause clusters.
`sync.NewCond(l sync.Locker)` creates a condition variable bound to a `Locker` (like a `sync.Mutex`).
A condition variable has three key methods:
1. `Wait()`
   - Suspends the goroutine until it's woken up.
   - Unlocks the mutex before waiting.
   - When it's woken, it locks the mutex again.
2. `Signal()` — wakes one waiting goroutine.
3. `Broadcast()` — wakes all waiting goroutines.
The Cond is like a meeting point for goroutines. When one goroutine can’t proceed (e.g., buffer full), it waits there. Another goroutine (the opposite role) later signals that something has changed.
```
🔵 Produced: 100
🟢 Consumed: 100
🔵 Produced: 101
🔵 Produced: 102
🟢 Consumed: 101
🔵 Produced: 103
🟢 Consumed: 102
🔵 Produced: 104
🔵 Produced: 105
🟢 Consumed: 103
🔵 Produced: 106
🔵 Produced: 107
🟢 Consumed: 104
🔵 Produced: 108
🔵 Produced: 109
🟢 Consumed: 105
🟢 Consumed: 106
🟢 Consumed: 107
🟢 Consumed: 108
🟢 Consumed: 109
```
Here’s what’s happening:
- At the start, producer produces faster → buffer gradually fills.
- Consumer consumes slower → sometimes multiple “Produced” in a row.
- When the buffer is full, the producer pauses (`Wait()`).
- Once the consumer consumes, it signals → the producer wakes → continues producing.
- When both finish their 10 cycles, program exits cleanly.
Concept | Role |
---|---|
sync.Mutex | Ensures exclusive access to shared data. |
sync.Cond | Enables goroutines to wait for a condition to become true. |
Wait() | Puts the goroutine to sleep while unlocking the mutex temporarily. |
Signal() | Wakes up one waiting goroutine. |
for condition { Wait() } | Prevents spurious wakeups and ensures condition is rechecked. |
Producer | Waits when buffer full, signals when new item added. |
Consumer | Waits when buffer empty, signals when space freed. |
Think of a shop shelf (buffer) that can hold 5 boxes.
- 🧍‍♂️ Producer = worker putting boxes on the shelf. If the shelf is full, the worker waits until the consumer takes one.
- 🧍‍♀️ Consumer = customer taking boxes off the shelf. If the shelf is empty, the customer waits until the producer puts one.
- `Wait()` = standing idle, waiting for an event.
- `Signal()` = tapping the other person's shoulder saying, "Okay, your turn!"

✅ In summary:

Our code:

- Creates a synchronized queue (shared buffer).
- Uses `sync.Cond` to coordinate two goroutines.
- Ensures neither goroutine wastes CPU cycles spinning.
- Demonstrates how Go's `Cond` enables producer–consumer coordination elegantly.
Let's now dive deeply into `sync.Once` — one of the most elegant synchronization primitives in Go. We'll go step-by-step just like we did for `sync.Cond`, covering concept, use case, internal working, real-world analogy, and code examples.
`sync.Once` ensures that a particular piece of code (a function) executes only once, even if multiple goroutines try to run it at the same time.

In other words:

- It's like a one-time initializer.
- Even if 100 goroutines call the same function concurrently, `sync.Once` guarantees that only one of them actually runs the function body.
- All others will wait until that one finishes.
```go
import "sync"

var once sync.Once

once.Do(func() {
	// Code that should run only once
})
```
✅ The `.Do()` method takes a function (no arguments, no return values). If the function has already run once, future calls to `.Do()` do nothing.
`sync.Once` is most often used for lazy initialization — i.e., initializing a shared resource only when it's first needed, and ensuring that initialization happens only once, even if many goroutines are running.
Examples:
- Opening a database connection only once.
- Loading a configuration file only once.
- Initializing a global cache, logger, or network client.
Let’s look at a simple demonstration 👇
```go
package main

import (
	"fmt"
	"sync"
)

var once sync.Once

func initialize() {
	fmt.Println("✅ Initialization done")
}

func worker(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println("Goroutine", id, "started")
	once.Do(initialize) // Only the first call executes initialize()
	fmt.Println("Goroutine", id, "running normally")
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 5; i++ {
		wg.Add(1)
		go worker(i, &wg)
	}
	wg.Wait()
}
```
```
Goroutine 1 started
✅ Initialization done
Goroutine 1 running normally
Goroutine 2 started
Goroutine 2 running normally
Goroutine 3 started
Goroutine 3 running normally
Goroutine 4 started
Goroutine 4 running normally
Goroutine 5 started
Goroutine 5 running normally
```
No matter how many goroutines call `once.Do(initialize)`, 👉 `initialize()` runs exactly once, guaranteed.

Let's understand how `sync.Once` achieves this. Internally, `sync.Once` is implemented (simplified) as:
```go
type Once struct {
	done uint32
	m    Mutex
}
```
- `done` — an atomic flag (0 or 1) that tracks whether the function has already run.
- `m` — a mutex to ensure exclusive execution of the `Do()` function.
When `once.Do(fn)` is called:

1. It atomically checks whether `done == 1`.
   - If yes, it returns immediately — the function has already run.
2. If not:
   - It locks the mutex and calls the provided function.
   - Then it sets `done = 1` (via a deferred atomic store, so this happens even if the function panics).
   - Then it unlocks the mutex.

It uses both atomic operations and mutex locking for correctness and speed.
Let’s visualize this with multiple goroutines:
Time | Goroutine | Action | Result |
---|---|---|---|
t₀ | G1 | Calls once.Do() | done == 0 → G1 executes function |
t₁ | G2 | Calls once.Do() | Sees done == 0 , waits for mutex |
t₂ | G1 | Finishes, sets done = 1 , unlocks mutex | |
t₃ | G2 | Sees done == 1 , returns immediately | |
t₄ | G3, G4, G5 | All see done == 1 , skip function instantly |
Result: function runs only once (by G1), all others skip.
Imagine a room with a light switch that turns on the generator.
- Many people (goroutines) enter the dark room.
- Each one says, “If the generator isn’t on yet, I’ll turn it on.”
- The first person flips the switch.
- Others come in later, see it’s already on, and skip flipping it.
That’s exactly how sync.Once
works. 🟢 First goroutine runs the setup. ⚪ Others just continue once setup is done.
```go
package main

import (
	"fmt"
	"sync"
)

type Database struct {
	connection string
}

var (
	dbInstance *Database
	once       sync.Once
)

func GetDatabaseInstance() *Database {
	once.Do(func() {
		fmt.Println("Initializing Database connection...")
		dbInstance = &Database{connection: "Connected to DB"}
	})
	return dbInstance
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			db := GetDatabaseInstance()
			fmt.Println("Goroutine", id, "using", db.connection)
		}(i)
	}
	wg.Wait()
}
```
```
Initializing Database connection...
Goroutine 2 using Connected to DB
Goroutine 1 using Connected to DB
Goroutine 3 using Connected to DB
```
Even though 3 goroutines called `GetDatabaseInstance()`, ✅ initialization happens only once.
If we create multiple `sync.Once` variables, each is independent:

```go
var once1 sync.Once
var once2 sync.Once
```

Each of them can execute its function once.

What if the function passed to `Do` panics? `sync.Once` still marks itself as done: the internal `done` flag is set via a deferred store, so it runs even if the function panics. Future `Do()` calls will not run the function again, which means failed initialization is not retried. If we need retry-on-failure semantics, we have to build them ourselves (e.g., a mutex plus an error flag).
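A minimal sketch demonstrating this: the second `Do` call is skipped even though the first one panicked:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var once sync.Once

	func() {
		defer func() {
			if r := recover(); r != nil {
				fmt.Println("recovered from:", r)
			}
		}()
		once.Do(func() { panic("init failed") })
	}()

	// The Once is already marked done — this function never runs.
	once.Do(func() { fmt.Println("second attempt") })
	fmt.Println("no retry happened")
}
```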
If we do something like:

```go
onceCopy := once
onceCopy.Do(fn)
```

→ the state won't be shared, and `fn` might run more than once.
We should always use the same instance of `sync.Once` (pass it by pointer if needed — copying a `sync.Once` is flagged by `go vet`'s copylocks check).
Concept | Meaning |
---|---|
Purpose | Execute a block of code only once across all goroutines. |
Method | once.Do(func) |
Thread Safety | Fully thread-safe. |
Use Case | Lazy initialization, singletons, configuration, setup. |
Internal Mechanism | Atomic flag + Mutex. |
If panic occurs | Still marked as done; Do will not call the function again. |
When we think of:

- `sync.Mutex` → "Only one at a time"
- `sync.WaitGroup` → "Wait for all to finish"
- `sync.Cond` → "Wait for a condition to change"
- `sync.Once` → "Run this once only"

So `sync.Once` is our one-time guarantee mechanism in concurrent Go code.
```go
package main

import (
	"fmt"
	"sync"
)

var once sync.Once

func initialize() {
	fmt.Println("🔵 This will not be repeated no matter how many times we call this func, using once.Do()!")
}

func main() {
	var wg sync.WaitGroup
	for i := range 5 {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("✅ Goroutine #", i)
			once.Do(initialize)
		}()
	}
	wg.Wait()
}

// 💡 OUTPUT:
// $ go run .
// ✅ Goroutine # 4
// 🔵 This will not be repeated no matter how many times we call this func, using once.Do()!
// ✅ Goroutine # 3
// ✅ Goroutine # 1
// ✅ Goroutine # 2
// ✅ Goroutine # 0
```
The above Go program demonstrates `sync.Once`, a powerful synchronization primitive in Go's `sync` package that ensures a specific piece of code runs only once, no matter how many goroutines call it concurrently.
Let’s go through it step by step, in deep detail 👇
`sync.Once` is a struct provided by Go's `sync` package. It guarantees that a function (passed to `once.Do()`) executes only once — even if multiple goroutines call it at the same time.

Once the function finishes, future calls to `once.Do()` will skip execution, without blocking or re-running the code.
This is extremely useful for:
- One-time initialization (e.g., loading configuration, setting up database connection, creating singletons)
- Ensuring thread-safe setup in concurrent programs.
```go
package main

import (
	"fmt"
	"sync"
)
```
- We import `sync` for synchronization tools and `fmt` for printing output.
```go
var once sync.Once
```
- We create a global variable `once` of type `sync.Once`.
- It internally contains:
  - A flag (`done`) to track whether the function has already been executed.
  - A mutex lock for thread-safe access.
👉 Think of `sync.Once` as a thread-safe, one-time switch — once turned on, it can't be turned off.
```go
func initialize() {
	fmt.Println("🔵 This will not be repeated no matter how many times we call this func, using once.Do()!")
}
```
- This is the function we want to run only once.
- Even if called from multiple goroutines simultaneously, Go ensures only one goroutine executes this, and the rest skip it.
```go
func main() {
	var wg sync.WaitGroup
```
- `wg` is a `WaitGroup` that ensures the main goroutine waits until all spawned goroutines finish execution.
```go
for i := range 5 {
	wg.Add(1)
	go func() {
		defer wg.Done()
		fmt.Println("✅ Goroutine #", i)
		once.Do(initialize)
	}()
}
```
Let’s break this part line by line 👇
- `for i := range 5` — Go 1.22+ syntax; it loops 5 times (i = 0, 1, 2, 3, 4).
- `wg.Add(1)` — increments the WaitGroup counter by 1 for each goroutine we spawn.
- `go func() { ... }()` — launches a goroutine (a lightweight thread). Each goroutine:
  - Prints `✅ Goroutine #i`
  - Calls `once.Do(initialize)`
- `once.Do(initialize)` — this line is key:
  - The first goroutine that reaches this line calls `initialize()`.
  - The rest of the goroutines skip execution once `initialize()` has already been called.
⚙️ Internally, `sync.Once.Do()` works (simplified) like this:
```go
func (o *Once) Do(f func()) {
	if atomic.LoadUint32(&o.done) == 0 {
		o.doSlow(f)
	}
}

func (o *Once) doSlow(f func()) {
	o.m.Lock()
	defer o.m.Unlock()
	if o.done == 0 {
		defer atomic.StoreUint32(&o.done, 1) // deferred: marks done even if f panics
		f()                                  // run only once
	}
}
```
So the sequence is:

1. The first goroutine locks the mutex and runs `initialize()`.
2. The `done` flag is set to 1 (by the deferred store, even if `initialize()` panics).
3. All other goroutines skip execution because they see `done == 1`.
- `wg.Wait()` blocks the main goroutine until all 5 goroutines finish their execution.
```
✅ Goroutine # 4
🔵 This will not be repeated no matter how many times we call this func, using once.Do()!
✅ Goroutine # 3
✅ Goroutine # 1
✅ Goroutine # 2
✅ Goroutine # 0
```
- Multiple goroutines started almost simultaneously.
- One of them (here, Goroutine #4) was first to execute `once.Do(initialize)` and ran `initialize()`.
- Others also reached that point but skipped the function because `once.done` was already set.
- Hence, we see the "🔵 This will not be repeated..." message only once.
Concept | Explanation |
---|---|
Purpose | Ensures a function runs only once — thread-safe and concurrent-safe. |
Main Method | once.Do(func) |
Internal Mechanism | Uses a boolean flag + mutex to guarantee single execution. |
Common Use Case | Initialization, singletons, lazy loading, configuration setup. |
Thread Safety | Fully thread-safe — no need to use extra locks. |
```
            ┌──────────────┐
            │  once.Do(f)  │
            └──────┬───────┘
                   │
     ┌─────────────┴──────────────┐
     │                            │
┌────▼────┐                 ┌─────▼────┐
│ done=0? │ YES (1st time)  │ done=1?  │ NO (subsequent)
└────┬────┘                 └─────┬────┘
     │                            │
     ▼                            ▼
 Run f() once              Skip execution
 Set done=1                Return immediately
```
`sync.Once` ensures that a function executes only once across all goroutines — no matter how many try, only the first succeeds.
Let's build a visual and intuitive analogy for `sync.Once` ❤️
Let’s imagine we’re a team of five engineers (our goroutines) working in a big company 🏢. We all arrive at the office early one morning, and our manager gives us this instruction:
“Someone must turn on the main server before work starts — but only once! If one of you already turned it on, the rest of you shouldn’t do it again.”
That "server initialization" is exactly like our `initialize()` function.
Our team has a shared rulebook (the `sync.Once` object) that keeps track of whether the server has already been started.
Inside it, there’s:
- A lock (to ensure only one person checks and modifies the rule at a time)
- A flag (`done = false` initially), meaning "Server not started yet."
All 5 of us rush to start work at the same time (`go func()`).
Each of us checks the rulebook:
“Has anyone already started the server?”
If the answer is No (`done == false`):
- One of us locks the book 🔒
- Starts the server (`initialize()`)
- Marks in the book: "✅ Done = true"
- Unlocks the book 🔓
The other four engineers arrive and check the rulebook too.
But now the rule says:
“Server already started ✅”
So, each of them says:
“Cool, no need to repeat that step — I’ll move on.”
While this looks simple, the `sync.Once` mechanism behind the scenes ensures atomic, thread-safe execution:
- Only one goroutine runs the code.
- Others never run it again — not even partially.
- Once marked as done, it stays done forever.
```
        ┌──────────────────────────┐
        │  Shared Rulebook (once)  │
        │      done = false        │
        └──────────┬───────────────┘
                   │
 ┌─────────────────┴──────────────────┐
 │                                    │
┌──▼──┐                         ┌─────▼────┐
│ G#1 │ -> "Done = false"       │ G#2, G#3,│
│     │ -> Runs initialize()    │ G#4, G#5 │
│     │ -> Set done = true      │Check rule│
└─────┘                         │Skip func │
                                └──────────┘
```
Real-world analogy | Go equivalent |
---|---|
Rulebook | sync.Once |
Flag inside rulebook | internal done boolean |
Lock on rulebook | internal mutex |
Turning on the server | calling initialize() |
Engineer who does it first | first goroutine to execute once.Do() |
Other engineers | other goroutines skipping execution |
- We can call `once.Do(f)` from multiple goroutines, but only one will actually run `f()`.
- All other calls wait until it's confirmed done, then return immediately.
- This guarantees thread-safe, idempotent initialization.
"In concurrency, `sync.Once` is like a door with one key — the first goroutine unlocks it, and for the rest, it's already open."
Let's dive deeply into `sync.Pool` in Go and understand how it works, why it exists, and how to use it efficiently. We'll break it down step by step.
In Go, `sync.Pool` is a concurrent-safe object pool. It's designed to reuse allocated objects to reduce the overhead of memory allocation and garbage collection (GC).
- Think of it like a "recycling bin" for objects.
- Instead of creating a new object every time, we take one from the pool.
- After using it, we put it back for future reuse.
This is especially useful in high-performance programs where temporary objects are frequently created and discarded.
- Thread-safe: Multiple goroutines can get/put objects without explicit locks.
- Automatic GC support: Objects in the pool can be garbage collected if there’s memory pressure.
- Lazy initialization: We can define a `New` function to create a new object if the pool is empty.
```go
type Pool struct {
	// contains filtered or unexported fields
}

func (p *Pool) Get() interface{}
func (p *Pool) Put(x interface{})
```
Main methods:
1. `Get()`
   - Returns an object from the pool if available.
   - If the pool is empty, it calls `New` (if defined) to create a new object.
2. `Put(x interface{})`
   - Returns an object to the pool for later reuse.
   - After calling `Put`, the pool may or may not retain the object — GC can remove it.
3. `New` (optional)
   - A function to allocate a new object when `Get` finds the pool empty:

```go
pool := sync.Pool{
	New: func() interface{} {
		return &MyStruct{}
	},
}
```
- Frequent allocation of temporary objects creates GC pressure.
- `sync.Pool` reduces memory churn by reusing objects, improving performance.
- It's perfect for objects with short lifetimes.
Use Cases:
- Buffers for network I/O (`[]byte`)
- Structs used temporarily in computations
- Large objects reused multiple times (e.g., JSON encoders/decoders)
Not ideal for:
- Long-lived objects
- Objects shared across multiple unrelated operations
- `sync.Pool` maintains per-P (processor) local storage.
  - Each P (logical processor in the Go runtime) has its own list of pooled objects.
  - This reduces lock contention in high-concurrency scenarios.
- When `Put` is called:
  - The object is added to the local P’s pool.
- When `Get` is called:
  - First, it checks the local P’s pool.
  - If empty, it tries other Ps’ pools.
  - If still empty, it calls `New`.
- Garbage Collection (GC) behavior:
  - On a GC cycle, the pool can be cleared.
  - This prevents `sync.Pool` from holding onto memory indefinitely, unlike other caches.
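A small sketch of that GC behavior (the double `runtime.GC()` is deliberate: since Go 1.13 a pool's contents survive one cycle in a "victim cache" before being collected):

```go
package main

import (
	"bytes"
	"fmt"
	"runtime"
	"sync"
)

func main() {
	allocs := 0
	pool := sync.Pool{
		New: func() interface{} {
			allocs++
			return new(bytes.Buffer)
		},
	}

	buf := pool.Get().(*bytes.Buffer) // pool empty → New runs (allocs = 1)
	pool.Put(buf)
	_ = pool.Get() // reused from the pool → New does not run
	pool.Put(buf)

	runtime.GC()
	runtime.GC() // second cycle clears the victim cache too

	_ = pool.Get()                      // pool emptied by GC → New runs again
	fmt.Println("allocations:", allocs) // typically prints 2
}
```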
Let’s see a practical example:
```go
package main

import (
	"fmt"
	"sync"
)

type MyStruct struct {
	ID int
}

func main() {
	pool := sync.Pool{
		New: func() interface{} {
			fmt.Println("Creating new object")
			return &MyStruct{}
		},
	}

	// Get object (pool is empty -> New is called)
	obj1 := pool.Get().(*MyStruct)
	obj1.ID = 42
	fmt.Println("obj1 ID:", obj1.ID)

	// Put object back into pool
	pool.Put(obj1)

	// Get object again (reuse from pool)
	obj2 := pool.Get().(*MyStruct)
	fmt.Println("obj2 ID:", obj2.ID) // 42
}
```
Output:
```
Creating new object
obj1 ID: 42
obj2 ID: 42
```
✅ The object is reused, so no new allocation occurs on the second `Get()`.
- Use `New` for lazy initialization: it keeps `Get()` from ever returning `nil`.
- Reset objects before putting them back:

```go
buffer := pool.Get().([]byte)
buffer = buffer[:0] // reset length, keep capacity
pool.Put(buffer)
```
- Do not store long-lived objects:
  - GC may remove them unexpectedly.
  - `sync.Pool` is meant for temporary/reusable objects.
- Avoid storing pointers to large shared resources:
  - Could lead to unexpected behavior if reused in parallel without a reset.
Property | Details |
---|---|
Thread-safe | ✅ multiple goroutines can Get and Put concurrently |
Memory management | Objects may be cleared on GC |
Best for | Temporary objects with short lifetimes |
Not suitable for | Long-lived objects, persistent caches |
Imagine a pool of reusable coffee mugs:
- If someone needs a mug:
  - They take one from the shelf (pool).
  - If none are left, they grab a new mug (`New` function).
- After finishing:
  - They wash it and put it back (`Put`).
- If the shelf gets cleared occasionally (GC), mugs might be discarded to free space.
`sync.Pool` works exactly like that in code.
```go
package main

import (
	"fmt"
	"sync"
)

// sync.Pool
type Person struct {
	name string
	age  int
}

func main() {
	var pool = sync.Pool{
		New: func() interface{} {
			fmt.Println("🔵Creating a new Person")
			return &Person{}
		},
	}

	// Get an object from the pool
	person1 := pool.Get().(*Person)
	person1.name = "Skyy"
	person1.age = 30
	fmt.Println("💡Got Person 1️⃣:", person1)
	fmt.Printf("☑️ Person1 - Name: %s, Age: %d\n", person1.name, person1.age)

	pool.Put(person1)
	fmt.Println("✅ Returned Person to the pool.")

	// Retrieve the person
	person2 := pool.Get().(*Person)
	fmt.Println("💡Got Person 2️⃣:", person2)

	person3 := pool.Get().(*Person)
	fmt.Println("💡Got Person 3️⃣:", person3)
	person3.name = "Soumadip"

	// Returning obj{} to the pool again..
	pool.Put(person2)
	pool.Put(person3)
	fmt.Println("✅ Returned Person to the pool again.")

	person4 := pool.Get().(*Person)
	fmt.Println("💡Got Person 4️⃣:", person4)

	person5 := pool.Get().(*Person)
	fmt.Println("💡Got Person 5️⃣:", person5)
}

// O/P:
// $ go run .
// 🔵Creating a new Person
// 💡Got Person 1️⃣: &{Skyy 30}
// ☑️ Person1 - Name: Skyy, Age: 30
// ✅ Returned Person to the pool.
// 💡Got Person 2️⃣: &{Skyy 30}
// 🔵Creating a new Person
// 💡Got Person 3️⃣: &{ 0}
// ✅ Returned Person to the pool again.
// 💡Got Person 4️⃣: &{Skyy 30}
// 💡Got Person 5️⃣: &{Soumadip 0}
```
Let's go line by line and concept by concept to understand our `sync.Pool` example (the program above) deeply. We’ll explore what’s happening, why the output is like that, and the subtleties of Go’s object pooling.
```go
import (
	"fmt"
	"sync"
)
```
- `fmt` → For printing logs.
- `sync` → Contains concurrency primitives, including `sync.Pool`.
```go
type Person struct {
	name string
	age  int
}
```
- Defines a simple struct `Person` with two fields: `name` and `age`.
- This struct will be pooled using `sync.Pool`.
```go
var pool = sync.Pool{
	New: func() interface{} {
		fmt.Println("🔵Creating a new Person")
		return &Person{}
	},
}
```
- `sync.Pool{New: ...}` initializes a pool of objects.
- The `New` function is called only when the pool is empty.
- Here, it prints a message and returns a pointer to a new `Person` struct.
Important: Objects returned by `Get()` are pointers, so we can modify their fields.
```go
person1 := pool.Get().(*Person)
person1.name = "Skyy"
person1.age = 30
```
- `pool.Get()` is called.
  - Pool is empty → `New` is called → prints: 🔵Creating a new Person.
  - Returns `&Person{}`.
- Type assertion: `.(*Person)` converts `interface{}` to `*Person`.
- Assign fields: `name = "Skyy"`, `age = 30`.
```go
fmt.Println("💡Got Person 1️⃣:", person1)
fmt.Printf("☑️ Person1 - Name: %s, Age: %d\n", person1.name, person1.age)
```
Output:
```
💡Got Person 1️⃣: &{Skyy 30}
☑️ Person1 - Name: Skyy, Age: 30
```
✅ This shows that the `Person` object is correctly initialized and modified.
```go
pool.Put(person1)
fmt.Println("✅ Returned Person to the pool.")
```
- `Put` returns the object back into the pool.
- Important: The object is not copied; only the pointer is stored.
- Now the pool contains one `Person` object: `&{Skyy 30}`.
```go
person2 := pool.Get().(*Person)
fmt.Println("💡Got Person 2️⃣:", person2)
```
- Pool has one object → `Get()` returns the same pointer (`person1`).
- Output: `💡Got Person 2️⃣: &{Skyy 30}`
- Notice: The object still has the old values (`Skyy`, `30`) because we did not reset the object before putting it back.
```go
person3 := pool.Get().(*Person)
fmt.Println("💡Got Person 3️⃣:", person3)
person3.name = "Soumadip"
```
- Pool is now empty (we already took `person2`).
- `Get()` calls `New` → prints: 🔵Creating a new Person.
- Returns a fresh object: `&{ 0}` → default values.
- Then, we set: `person3.name = "Soumadip"`; `age` remains `0` (not set yet).
Output:
💡Got Person 3️⃣: &{ 0}
```go
pool.Put(person2)
pool.Put(person3)
fmt.Println("✅ Returned Person to the pool again.")
```
- Both `person2` and `person3` are put back.
- Pool now contains two objects:
  - `&{Skyy 30}` (`person2`)
  - `&{Soumadip 0}` (`person3`)
```go
person4 := pool.Get().(*Person)
fmt.Println("💡Got Person 4️⃣:", person4)
person5 := pool.Get().(*Person)
fmt.Println("💡Got Person 5️⃣:", person5)
```
- Get `person4` → pool is not empty, so it returns the first available object. Typically, Go returns objects in LIFO order per P (per logical processor); in practice here, it returns `&{Skyy 30}`. Output: `💡Got Person 4️⃣: &{Skyy 30}`
- Get `person5` → pool has one object left → returns `&{Soumadip 0}`. Output: `💡Got Person 5️⃣: &{Soumadip 0}`
- If we called `Get()` again, the pool would be empty → `New` would be invoked.
- Objects are reused as pointers → modifications persist unless reset.
- The pool does not automatically reset objects → we must manually clear fields before putting them back if required.
- `New` is called only when the pool is empty → reduces allocation overhead.
- Multiple objects can exist in the pool at once.
- Pool objects can be garbage collected → not guaranteed to persist forever.
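Putting the reset rule into practice for this `Person` pool might look like the following sketch (`putPerson` is a hypothetical helper, not part of the program above):

```go
package main

import (
	"fmt"
	"sync"
)

type Person struct {
	name string
	age  int
}

var pool = sync.Pool{
	New: func() interface{} { return &Person{} },
}

// putPerson zeroes the fields before returning the object to the pool,
// so the next Get() never observes stale data. (Hypothetical helper.)
func putPerson(p *Person) {
	p.name = ""
	p.age = 0
	pool.Put(p)
}

func main() {
	p := pool.Get().(*Person)
	p.name = "Skyy"
	putPerson(p)

	q := pool.Get().(*Person)
	fmt.Println("after reset:", q) // prints &{ 0}, no stale "Skyy"
}
```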
Step | Object | Pool State | Output |
---|---|---|---|
Get 1 | New Person | Empty → taken | &{Skyy 30} |
Put 1 | Return person1 | Pool: &{Skyy 30} | - |
Get 2 | Reuse person1 | Pool empty | &{Skyy 30} |
Get 3 | New Person | Pool empty → New | &{ 0} |
Put 2+3 | Return person2 & person3 | Pool: [&{Skyy 30}, &{Soumadip 0}] | - |
Get 4 | Reuse first available | Pool: [&{Soumadip 0}] | &{Skyy 30} |
Get 5 | Reuse remaining | Pool empty | &{Soumadip 0} |
✅ Matches exactly with our observed output.
The `for-select` pattern in Go is a very common idiom for handling concurrent communication with channels.
In Go, a `select` statement lets us wait on multiple channel operations simultaneously. Combining `select` with a `for` loop allows us to continuously monitor multiple channels until some condition breaks the loop.
The general structure is:
```go
for {
	select {
	case msg1 := <-ch1:
		// handle message from channel 1
	case msg2 := <-ch2:
		// handle message from channel 2
	case ch3 <- value:
		// send value to channel 3
	default:
		// optional: run if no channel is ready
	}
}
```
- `for` loop: Keeps the `select` statement running indefinitely (or until we break it).
- `select` statement: Waits for one of the channel operations inside it to become ready:
  - Receiving from a channel (`<-ch`)
  - Sending to a channel (`ch <- value`)
- `case` blocks: Only one case executes at a time. If multiple are ready, Go picks one randomly.
- `default` case (optional): Executes if no channels are ready—prevents blocking.
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch1 := make(chan string)
	ch2 := make(chan string)

	// simulate sending data in goroutines
	go func() {
		for i := 1; i <= 3; i++ {
			ch1 <- fmt.Sprintf("ch1 message %d", i)
			time.Sleep(500 * time.Millisecond)
		}
		close(ch1)
	}()

	go func() {
		for i := 1; i <= 3; i++ {
			ch2 <- fmt.Sprintf("ch2 message %d", i)
			time.Sleep(700 * time.Millisecond)
		}
		close(ch2)
	}()

	// for-select loop
	for {
		select {
		case msg, ok := <-ch1:
			if !ok {
				ch1 = nil // a nil channel is never selected again
				fmt.Println("ch1 closed")
			} else {
				fmt.Println("Received from ch1:", msg)
			}
		case msg, ok := <-ch2:
			if !ok {
				ch2 = nil // a nil channel is never selected again
				fmt.Println("ch2 closed")
			} else {
				fmt.Println("Received from ch2:", msg)
			}
		}

		// exit loop if both channels are nil (closed); note there is no
		// `continue` above, so this check runs after every iteration —
		// otherwise the final close would leave select blocked on two
		// nil channels forever
		if ch1 == nil && ch2 == nil {
			fmt.Println("All channels closed. Exiting loop.")
			break
		}
	}
}
```
- Detecting closed channels: Using the `ok` boolean from `<-ch` to see if a channel is closed.
- Avoiding a busy loop: A closed channel is always ready and keeps yielding zero values, so setting it to `nil` ensures it’s ignored in future `select`s.
- Loop termination: Break the `for` loop when all channels are done.
The `default` case prevents blocking when no channel is ready:
```go
for {
	select {
	case msg := <-ch:
		fmt.Println(msg)
	default:
		fmt.Println("No messages yet, doing other work...")
		time.Sleep(100 * time.Millisecond)
	}
}
```
- Without `default`, the loop blocks until a channel is ready.
- With `default`, we can do non-blocking checks while performing other work.
- Multiplexing channels: Listen to multiple channels at once.
- Timers and timeouts: Using `time.After` or `time.Ticker` channels.
- Event loops: Continuously process incoming messages until shutdown.
- Graceful shutdown: Using a `done` channel to signal termination.
Example with timeout:
```go
done := make(chan bool)
for {
	select {
	case msg := <-ch:
		fmt.Println(msg)
	case <-time.After(2 * time.Second):
		fmt.Println("Timeout! No message received.")
		done <- true
		return
	}
}
```
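Since that fragment references channels declared elsewhere, here is a self-contained runnable version of the timeout idea (the slow sender and its 3-second delay are made up for illustration; the `done` channel is dropped because nothing reads it here):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan string)

	// a sender that is slower than our 2-second patience
	go func() {
		time.Sleep(3 * time.Second)
		ch <- "too late"
	}()

	select {
	case msg := <-ch:
		fmt.Println(msg)
	case <-time.After(2 * time.Second):
		fmt.Println("Timeout! No message received.")
	}
}
```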
Concept | Explanation |
---|---|
for | Keeps the loop running |
select | Waits on multiple channel operations |
case <-ch | Receives from a channel |
case ch <- value | Sends to a channel |
default | Executes if no channel is ready (non-blocking) |
nil channels | Prevent select from considering closed channels |
ok boolean | Detects if a channel is closed |
```go
package main

import (
	"fmt"
	"time"
)

// for-select-statement
func main() {
	ticker := time.NewTicker(time.Second)
	quit := make(chan string)

	go func() {
		time.Sleep(5 * time.Second)
		close(quit)
	}()

	for {
		select {
		case <-ticker.C:
			fmt.Println("Tick ✅")
		case <-quit:
			fmt.Println("Quitting.. ☑️")
			return
		}
	}
}

// O/P:
// $ go run .
// Tick ✅
// Tick ✅
// Tick ✅
// Tick ✅
// Tick ✅
// Quitting.. ☑️
```
Let's go line by line and explain the above `for-select` Go code in depth so we fully understand how it works.
```go
package main

import (
	"fmt"
	"time"
)
```
- `package main`: Entry point of the Go program. This tells Go to compile an executable.
- `import "fmt"`: Used for printing output (`fmt.Println`).
- `import "time"`: Provides time-related functions, like `time.NewTicker` and `time.Sleep`.
```go
func main() {
	ticker := time.NewTicker(time.Second)
	quit := make(chan string)
```
- `ticker := time.NewTicker(time.Second)`
  - Creates a ticker that sends a signal on its channel (`ticker.C`) every second.
  - Think of it like an alarm clock ticking every second.
- `quit := make(chan string)`
  - Creates a channel of type `string`.
  - We'll use this channel to signal when to stop the loop.
```go
go func() {
	time.Sleep(5 * time.Second)
	close(quit)
}()
```
- `go func() { ... }()`: Launches an anonymous goroutine.
- Inside the goroutine:
  - `time.Sleep(5 * time.Second)`: Waits 5 seconds.
  - `close(quit)`: Closes the `quit` channel to signal the main loop to stop.
- By using a goroutine, this stop signal runs concurrently with the main loop.

```go
for {
	select {
	case <-ticker.C:
		fmt.Println("Tick ✅")
	case <-quit:
		fmt.Println("Quitting.. ☑️")
		return
	}
}
```

- `for { ... }`: Infinite loop, keeps running until we explicitly `return`.
- `select { ... }`: Waits for multiple channel operations.
```go
case <-ticker.C:
	fmt.Println("Tick ✅")
```
- `<-ticker.C` waits for a tick from the ticker, which happens every second.
- When a tick arrives, it prints: `"Tick ✅"`.
- This will happen once per second until we quit.
```go
case <-quit:
	fmt.Println("Quitting.. ☑️")
	return
```
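The bullets below hinge on one detail worth seeing in isolation: receiving from a closed channel never blocks (and never panics); it returns the zero value immediately. A tiny standalone demo:

```go
package main

import "fmt"

func main() {
	ch := make(chan string)
	close(ch)

	v := <-ch                                   // returns immediately with the zero value ""
	v2, ok := <-ch                              // ok == false reports "channel closed"
	fmt.Printf("v=%q v2=%q ok=%v\n", v, v2, ok) // v="" v2="" ok=false
}
```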
- `<-quit` waits for a value from the `quit` channel.
- We didn’t send a value; instead, we closed the channel in the goroutine.
- Reading from a closed channel immediately returns the zero value of the channel’s type (here, `""` for string).
- When this happens, we print `"Quitting.. ☑️"` and `return`, which stops the program.
- Ticker starts ticking every 1 second.
- Goroutine sleeps 5 seconds, then closes `quit`.
- The `for-select` loop continuously waits:
  - Every second, the ticker sends a tick → prints `"Tick ✅"`.
  - After 5 seconds, the `quit` channel is closed → prints `"Quitting.. ☑️"` → exits the loop.
```
Tick ✅        ← 1 second
Tick ✅        ← 2 seconds
Tick ✅        ← 3 seconds
Tick ✅        ← 4 seconds
Tick ✅        ← 5 seconds
Quitting.. ☑️  ← After quit channel closes
```
- 5 ticks happen because the goroutine sleeps 5 seconds before closing `quit`.
- As soon as `quit` closes, the loop stops.
Concept | Explanation |
---|---|
time.NewTicker | Sends a tick on its channel periodically. |
<-ticker.C | Receives the tick, blocking until one arrives. |
quit channel | Used to signal termination. Closing it allows select to detect it. |
for-select loop | Continuously waits on multiple channels and handles whichever is ready. |
Goroutine | Runs concurrently, allowing the main loop to continue ticking while waiting for quit signal. |
✅ This is a classic for-select pattern to handle repeated tasks with graceful shutdown in Go.
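One refinement worth adding in longer-lived programs (our suggestion; the original snippet omits it): a `time.Ticker` holds resources until it is stopped, so pairing `time.NewTicker` with `defer ticker.Stop()` is idiomatic. A sketch of the same pattern with that cleanup (the `chan struct{}` signal type is a common convention swapped in for illustration):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop() // release the ticker's resources when main returns

	quit := make(chan struct{}) // struct{} is a common choice for pure signals
	go func() {
		time.Sleep(3 * time.Second)
		close(quit)
	}()

	for {
		select {
		case <-ticker.C:
			fmt.Println("Tick ✅")
		case <-quit:
			fmt.Println("Quitting.. ☑️")
			return
		}
	}
}
```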