Conquering Concurrency in Go with Mutex Magic


Writing concurrency-safe Go programs can be much like sipping coffee in a wildfire – you'd better not get burned by race conditions!
Photo credit: Maria Letta – free gophers pack

One of the many strengths of the Go (Golang) programming language is its foundational toolset for building concurrent programs that remain human-readable. That said, for most use cases a Go application will be fast enough to handle whatever workload you throw at it, without adding the extra complexity of concurrency.

When you do find yourself building for high throughput or expecting millions of requests to hit your service, you'll be venturing into the land of concurrency. Even once you've successfully wrapped your head around goroutines and channels, following how separate routines access your data, and making sure they all play nicely together, can still be a mind-boggling escapade and one wild mental workout!

Data Race

A data race occurs when at least two threads – or, in our case, goroutines – access the same data at the same time without synchronization, and at least one of them is writing to it.

Let's take a look at an example. Notice how in the code below we create 1000 goroutines, and each one simply increments the counter variable by 1.
Naturally, we'd expect the output to be 1000, since that is how many increments are performed.

A non-safe concurrent increment

package main

import (
	"fmt"
	"sync"
)

var (
	counter int
	wg      sync.WaitGroup
)

func main() {
	wg.Add(1000) //add 1000 goroutines to waitGroup

	for i := 0; i < 1000; i++ {
		go func() {
			//notify waitGroup that this goroutine is done
			defer wg.Done()

			//non-safe concurrent increment
			counter += 1
		}()
	}
	wg.Wait() //wait for all goroutines to finish

	fmt.Printf("counter: %d", counter)
}


Output

counter: 896

Wait, what?! Yup, that's right: we have a data race, and it's happening on more than one occasion. The bottom line is that two or more goroutines are mutating the counter variable at the same time.
For our code example, you can effectively think of some of the increments as simply being overwritten, or ignored if you will; basically, it's a race to change the value.


You will notice that our example program doesn't panic, since we are simply adjusting a value that already exists in memory; in other cases, however, the application will crash outright with a delightful panic or fatal error!
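For instance, the Go runtime detects unsynchronized writes to a map and aborts the whole program with a fatal error. Here is a minimal sketch (separate from our counter example, with purely illustrative names) that will usually die with fatal error: concurrent map writes:

A non-safe concurrent map write

package main

import "sync"

var (
	m  = make(map[int]int)
	wg sync.WaitGroup
)

func main() {
	wg.Add(1000)

	for i := 0; i < 1000; i++ {
		go func(n int) {
			defer wg.Done()

			//unsynchronized map write: the runtime will usually abort the program here
			m[n] = n
		}(i)
	}
	wg.Wait()
}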

If you run this code yourself you will get random numbers roughly between 800 and 1000, and yes, sometimes you will even get the correct answer of 1000. By reducing the number of goroutines to, say, 100, you will notice the correct answer appears more often, as the data race occurs at a much lower rate. Getting the correct answer simply means our application never hit the data race condition on that particular run; it does not mean our application is free of race conditions, and that is the main caveat of writing concurrent applications.
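The good news is that you don't have to rely on luck to spot these races: Go ships with a built-in race detector. Run the program with the -race flag and it will report conflicting accesses even on runs where the final answer happens to be correct. The addresses, goroutine numbers and line numbers below are only illustrative and will differ on your machine, but the output looks roughly like this:

Running the example with the race detector

go run -race main.go

Output (abridged)

==================
WARNING: DATA RACE
Read at 0x... by goroutine 8:
  main.main.func1()
      .../main.go:22 +0x...

Previous write at 0x... by goroutine 7:
  main.main.func1()
      .../main.go:22 +0x...
==================
counter: 954
Found 1 data race(s)
exit status 66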

For now, though, let's make our example concurrency-safe by making use of a mutex.

A concurrent-safe increment

package main

import (
	"fmt"
	"sync"
)

var (
	counter int
	mutex   sync.Mutex
	wg      sync.WaitGroup
)

func main() {
	wg.Add(1000) //add 1000 goroutines to waitGroup

	for i := 0; i < 1000; i++ {
		go func() {
			//notify waitGroup that this goroutine is done
			defer wg.Done()

			//concurrent-safe increment, using a mutex lock
			mutex.Lock()
			counter += 1
			mutex.Unlock()
		}()
	}
	wg.Wait() //wait for all goroutines to finish

	fmt.Printf("counter: %d", counter)
}

Output

counter: 1000

Whoop, we have successfully removed our data race!
Notice how each time you run the example you will always get 1000 as the answer, regardless of the number of goroutines created!

Basically, the mutex allows us to grab a Lock() that is shared across our entire application and all goroutines. Any subsequent call to Lock() blocks until the lock is released with the Unlock() method; this means that only one goroutine at a time can perform the increment on the counter variable (or, really, execute any code between the Lock() and Unlock() calls), keeping it safe across all routines.
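In larger programs it is also common to bundle the mutex together with the data it protects, and to pair each Lock() with a deferred Unlock() so the lock is always released, even if the code in between returns early or panics. Below is a minimal sketch of that pattern; the SafeCounter type and its method names are purely illustrative, not part of any library.

A counter type that carries its own mutex

package main

import (
	"fmt"
	"sync"
)

// SafeCounter bundles the counter value with the mutex that guards it.
type SafeCounter struct {
	mu    sync.Mutex
	value int
}

// Inc increments the counter; safe to call from any goroutine.
func (c *SafeCounter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock() //always released, even on early return or panic
	c.value++
}

// Value returns the current count; safe to call from any goroutine.
func (c *SafeCounter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.value
}

func main() {
	var (
		c  SafeCounter
		wg sync.WaitGroup
	)

	wg.Add(1000)
	for i := 0; i < 1000; i++ {
		go func() {
			defer wg.Done()
			c.Inc()
		}()
	}
	wg.Wait()

	fmt.Printf("counter: %d", c.Value())
}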

Conclusion

Writing concurrent Go applications can be hard, but applying the correct techniques and principles will take you a long way towards building robust programs and safeguarding against these issues.

Overall, the sync package's Mutex offers a high degree of control over critical sections, allowing fine-grained locking and unlocking, and it can be used to protect any shared resource. However, mutexes come with some overhead due to the locking and unlocking process, and improper use can lead to deadlocks or performance bottlenecks.
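As one more illustration of protecting a shared resource other than a plain counter, the same Lock()/Unlock() pattern keeps a map safe when many goroutines write to it at once. The cache variable and key names below are purely illustrative.

A concurrent-safe map write

package main

import (
	"fmt"
	"sync"
)

var (
	mutex sync.Mutex
	cache = make(map[string]int) //shared resource guarded by mutex
	wg    sync.WaitGroup
)

func main() {
	wg.Add(1000)

	for i := 0; i < 1000; i++ {
		go func(n int) {
			defer wg.Done()

			//lock around every access to the shared map
			mutex.Lock()
			cache[fmt.Sprintf("key-%d", n%10)] = n
			mutex.Unlock()
		}(i)
	}
	wg.Wait()

	fmt.Printf("entries in cache: %d", len(cache))
}

Output

entries in cache: 10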

Further reading on concurrency basics: A Tour of Go: Concurrency
