Bringing order to the chaos of the race condition.
Chapter 15: Sharing by Communicating
The archive was unusually loud that Tuesday. Not from voices, but from the rain hammering against the copper roof, a chaotic, drumming rhythm that filled the high ceilings.
Ethan was pacing. His laptop fan was spinning at maximum speed.
"It works, but it doesn't," he said, running a hand through his hair. "I'm trying to process these thousand log files. I used the go keyword to spawn a background job for each one. It's blazing fast."
"And the results?" Eleanor asked, calmly stirring her tea.
"Garbage. Sometimes I get 998 results. Sometimes 1005. Sometimes the program crashes with a map assignment error. It’s chaos."
He showed her the code:
func processLogs(logs []string) map[string]int {
    results := make(map[string]int)
    for _, log := range logs {
        go func(l string) {
            // Simulate processing
            user := parseUser(l)
            results[user]++ // THE BUG IS HERE
        }(log)
    }
    return results
}
"Right," Eleanor said, leaning in. "You've got a classic Race Condition. You spun up a thousand goroutines, and they're all fighting over that one map. They're overwriting each other's work because nothing is stopping them."
"So I need a lock? A Mutex?"
"You could use a Mutex," Eleanor conceded. "But then you're just pausing everything constantly to manage that one variable. In Go, we try to avoid that. We have a saying: 'Do not communicate by sharing memory; share memory by communicating.'"
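For contrast, the mutex route Ethan suggested would look roughly like this. It does work, but every worker has to stop and take the lock before touching the map. (A sketch only: parseUser here is a hypothetical stand-in that treats the first field of a log line as the user name.)

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// parseUser is a stand-in for Ethan's real parser: it takes the
// first whitespace-separated field of a log line as the user name.
func parseUser(line string) string {
	return strings.Fields(line)[0]
}

// processLogsMutex guards the shared map with a sync.Mutex and
// uses a sync.WaitGroup so the function waits for every worker.
func processLogsMutex(logs []string) map[string]int {
	results := make(map[string]int)
	var mu sync.Mutex
	var wg sync.WaitGroup
	for _, log := range logs {
		wg.Add(1)
		go func(l string) {
			defer wg.Done()
			user := parseUser(l)
			mu.Lock()
			results[user]++ // only one goroutine at a time gets past the lock
			mu.Unlock()
		}(log)
	}
	wg.Wait() // block until every worker has called Done
	return results
}

func main() {
	logs := []string{"alice login", "bob login", "alice logout"}
	fmt.Println(processLogsMutex(logs)["alice"]) // prints 2
}
```

Correct, but every increment funnels through one lock, which is exactly the constant pausing Eleanor wants to avoid.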
The Goroutine
"First," Eleanor said, "look at what you actually built. A Goroutine isn't just a function call. It's fire and forget. The Go scheduler manages them, multiplexing thousands of them onto a few actual OS threads."
"That sounds efficient," Ethan said.
"It is. But because they are independent, your main function—the one returning results—doesn't wait for any of them. It almost certainly returns before your workers have even finished."
"That explains the missing data," Ethan realized. "I'm returning an empty map while the workers are still running in the background."
"Exactly. You need a way to get the data back safely. Instead of letting everyone touch the map, let's just have them pass the data back to you."
The Channel
She opened a new file. "We use a Channel. It is a direct pipe for data between running tasks."
ch := make(chan string) // Create a channel of strings
"This handles the synchronization for you," Eleanor explained. "If you send data into it, the code pauses until someone is there to receive it. It forces the two sides to line up perfectly."
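That hand-off can be seen in a minimal sketch: one goroutine sends, the main goroutine receives, and the channel lines the two up.

```go
package main

import "fmt"

func main() {
	ch := make(chan string) // unbuffered: send and receive must meet

	go func() {
		ch <- "hello from the worker" // blocks until main receives
	}()

	msg := <-ch // blocks until the worker sends
	fmt.Println(msg)
}
```

No lock, no sleep: the receive on the last line is what lets the worker's send complete.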
She refactored Ethan's code.
func processLogs(logs []string) map[string]int {
    results := make(map[string]int)
    // 1. Create a channel to receive users
    userChan := make(chan string)
    // 2. Spawn the workers
    for _, log := range logs {
        go func(l string) {
            user := parseUser(l)
            userChan <- user // Pass the data to the channel
        }(log)
    }
    // 3. Collect the results
    for i := 0; i < len(logs); i++ {
        user := <-userChan // Wait for data to arrive
        results[user]++
    }
    // In a real system, we would also need a channel for errors!
    return results
}
"See the difference?" Eleanor asked. "Your workers calculate the user, but they don't touch the map. They just hand the result off. The main function waits, grabs the result, and updates the map. Only one thing touches the memory."
"So the channel effectively serializes the writes," Ethan realized.
"Precisely. It creates a single point of entry."
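Eleanor's refactor can be run end to end once parseUser is filled in. Here is a self-contained version, again with a hypothetical parseUser that takes the first field of each line as the user name:

```go
package main

import (
	"fmt"
	"strings"
)

// parseUser is a stand-in for the real parser: it returns the
// first whitespace-separated field of a log line as the user name.
func parseUser(line string) string {
	return strings.Fields(line)[0]
}

func processLogs(logs []string) map[string]int {
	results := make(map[string]int)
	userChan := make(chan string)
	for _, log := range logs {
		go func(l string) {
			userChan <- parseUser(l) // workers never touch the map
		}(log)
	}
	// Exactly len(logs) values will arrive, one per worker,
	// so this loop also serves as the "wait for everyone" step.
	for i := 0; i < len(logs); i++ {
		results[<-userChan]++
	}
	return results
}

func main() {
	logs := []string{"alice login", "bob login", "alice logout"}
	fmt.Println(processLogs(logs)) // map[alice:2 bob:1]
}
```

Because only the collecting loop writes to the map, running this under go run -race reports no data races.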
Blocking is a Feature
"But wait," Ethan asked. "What if the channel gets backed up?"
"By default, there's no queue for it to back up into," Eleanor said. "It's a direct hand-off. When a worker sends userChan <- user, it freezes right there. It blocks. It won't move to the next line until the main function receives that value."
"So they wait for each other?"
"Yes. That's why you don't need locks. The channel forces them to wait."
Buffered Channels
"Now," Eleanor added, "sometimes you don't want them locking up quite that often. You want a bit of a queue."
ch := make(chan string, 100) // Buffer of 100 slots
"This gives you a buffer. Your workers can drop off 100 items without waiting. It lets them run a bit faster than the consumer for short bursts. But be careful—once that buffer is full, they block again."
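The buffer's effect is easy to see in isolation: with capacity for two items, two sends complete with no receiver in sight.

```go
package main

import "fmt"

func main() {
	ch := make(chan string, 2) // room for two items

	// Both sends return immediately: the buffer absorbs them.
	ch <- "first"
	ch <- "second"
	// A third send here would block until something is received.

	fmt.Println(len(ch), cap(ch)) // 2 2
	fmt.Println(<-ch)             // first
	fmt.Println(<-ch)             // second
}
```

Values come back out in the order they went in, so a buffered channel behaves like a bounded FIFO queue.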
The select Statement
"One last thing," Eleanor said. "What if you're waiting on two things? Like getting a result or hitting a timeout?"
"I don't know. Check one, then the other?"
"No, that would get stuck on the first one. We use select. It allows you to listen to multiple channels at once."
select {
case msg := <-messageChan:
    fmt.Println("Received message:", msg)
case err := <-errorChan:
    fmt.Println("Received error:", err)
case <-time.After(time.Second):
    fmt.Println("Timed out!")
}
"It just runs whichever one is ready first," she explained. "If a message comes in, it runs that case. If the timer hits one second, it runs that case. It's the standard way to handle timeouts and cancellation."
Ethan looked at the code. "It's cleaner. No locks, no race conditions. Just data moving around."
"It's easier to reason about," Eleanor agreed, taking a sip of her tea. "Concurrency gets messy when everyone grabs for the same data. It stays clean when you just pass messages."
Key Concepts from Chapter 15
Goroutines (go func()):
Lightweight threads managed by the Go scheduler. They run independently and are multiplexed onto OS threads. The main function does not wait for them by default.
Channels (chan type):
The standard way to communicate between goroutines.
Send: ch <- value
Receive: value := <-ch
Synchronization:
Channels are safe for concurrent use. By default, sending and receiving blocks (pauses) execution until the other side is ready. This synchronizes your code without manual locks.
The Philosophy:
"Do not communicate by sharing memory; instead, share memory by communicating."
Avoid having multiple goroutines access the same variable (like a map) directly. Instead, pass the data through a channel to a single "owner" goroutine.
Buffered Channels:
make(chan int, 100) creates a channel with capacity. Sends only block when the buffer is full.
The select Statement:
Lets a goroutine wait on multiple channel operations at once. It executes whichever case is ready first. Essential for handling timeouts.
Next chapter: The Context Package. Ethan learns how to stop a runaway goroutine and manage request deadlines politely.
Aaron Rose is a software engineer and technology writer at tech-reader.blog and the author of Think Like a Genius.