I spoke recently at Rubyconf 2011 on some advanced topics in threading. What surprised me was how little experience people had with threads so I decided to write this post to give people a little more background on threads. Matz actually recommends not using threads (see below for why) and I think this is a big reason why Rubyists tend not to understand threading.
Every time you execute irb, you are creating a process. Within each process, there is something executing the code in your process: a thread. Your operating system starts every process with a "main" thread. Ruby allows you to create as many additional threads as you want by calling Thread.new with a block of code to be executed. Once the block of code has finished executing, the thread is considered dead. If the main thread exits, the process dies.
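This lifecycle is easy to observe directly. Here is a small sketch (variable names are my own) showing that a thread is alive while its block runs and dead once join returns; without the join, the main thread could exit and kill the process before "done" ever printed:

```ruby
# A thread is alive while its block runs; join waits for the block
# to finish, after which the thread is dead.
t = Thread.new do
  sleep 0.1
  puts "done"
end

puts t.alive?  # usually true here: the block is still sleeping
t.join         # wait for the block to finish
puts t.alive?  # false: the thread is now dead
```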
```ruby
t1 = Thread.new do
  i = 0
  1_000_000.times do
    i += 1
  end
end

t2 = Thread.new do
  j = 0
  1_000_000.times do
    j += 1
  end
end

t1.join
t2.join
```
Above we have two threads independently counting up to one million, while the main thread waits for them to finish by calling join on each thread. These two threads execute concurrently ("operating or occurring at the same time") with your process's main thread. Not so hard, right?
Generally your computer can execute one thread per core. I have a dual-core CPU in this laptop, which means I can execute two threads at the exact same time.¹ Now imagine I want to parallelize my counting above. Instead of having one thread count to two million, I will have two threads count to one million each. That should execute twice as fast because I'll be using two threads and thus both cores:
```ruby
i = 0

t1 = Thread.new do
  1_000_000.times do
    i += 1
  end
end

t2 = Thread.new do
  1_000_000.times do
    i += 1
  end
end

t1.join
t2.join
puts i
```
You'd expect the result to print "2000000", right? Nice try.
```
> jruby threading.rb
1330864
```
Any time multiple threads try to change the same variables, they have the potential for race conditions. Why is this?
The race condition is fundamentally due to the multi-step process of changing a variable. Even a simple increment in most languages is actually a multi-step process:
```ruby
register = i            # read the current value from RAM into a register
register = register + 1 # increment it by one
i = register            # write the value back to the variable in RAM
```
One of the features of threads is that they are controlled by the operating system; the OS can decide to stop Thread 1 and start executing Thread 2 at any point in time. This means that the OS can stop your thread after it has read the value of i into a register. Imagine this sequence of events:
```ruby
i = 0
# OS is running Thread 1
register = i            # 0
register = register + 1 # 1
# OS switches to Thread 2
register = i            # 0
register = register + 1 # 1
i = register            # 1
# Now OS switches back to Thread 1
i = register            # 1
```
Now technically both threads have incremented i. Will the resulting value be 2? No: Thread 2's increment was lost when Thread 1's final write overwrote the memory. This is exactly why we saw 1330864 instead of 2000000; we lost a lot of increments to this race condition. To avoid race conditions, any variable change (fancy CS terminology: "mutation of shared state") must be done atomically, so that other threads cannot observe the change midway through.
Now you know the fundamental requirement for thread-safe code: any time you change a variable that is shared by multiple threads, the mutation must be done atomically. Unfortunately, Ruby and most other mainstream languages give you only one tool for this: the lock, aka the mutex.
Mutex is short for “mutual exclusion” as in “only one thread can be executing this code at a time”. Usage is simple:
```ruby
@mutex = Mutex.new

@mutex.synchronize do
  i += 1
end
```
Remember that increment is a three-step process, but because only one thread can be inside the synchronize block at a time, we won't have any race condition; the Mutex effectively makes the increment atomic.
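Applying this to the racy counter from above gives a version that is correct on any scheduler, MRI or JRuby: the read/modify/write happens entirely inside the synchronize block, so no updates are lost.

```ruby
# The broken two-thread counter, fixed with a Mutex around the increment.
i = 0
mutex = Mutex.new

t1 = Thread.new do
  1_000_000.times { mutex.synchronize { i += 1 } }
end

t2 = Thread.new do
  1_000_000.times { mutex.synchronize { i += 1 } }
end

t1.join
t2.join
puts i  # 2000000, every time
```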
Here’s the dirty secret that everyone who uses threads learns eventually: Threads have such a terrible reputation because locks are very painful to use in practice.
What are the alternatives? There are several:
- Atomic Instructions – turn multi-step operations into a single atomic operation
- Transactional Memory (STM) – ensure that changes are made as part of a transaction, which guarantees atomicity
- Actors – refactor our code so that only one thread may change a variable
My take is that locks grow the complexity of your codebase exponentially, and this is a major reason why Matz has always advised Rubyists to use processes rather than threads for concurrency. My recent Rubyconf talk on threads discusses these options. Clojure builds transactional memory into the language for coordinating changes to shared state. Scala and Erlang offer actors. Using plain old threads and locks is akin to writing in assembly language: there are better ways now.
In my opinion, the last option is preferable since you avoid the race condition in the first place: "Don't communicate by sharing state; share state by communicating." The fundamental idea behind actors is to give each thread a single responsibility and pass messages between threads according to those responsibilities.
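You can sketch this style with nothing but Ruby's thread-safe Queue (the names here are my own, not a library API): one "counter" thread owns the count, and the other threads send it messages instead of mutating shared state. No Mutex is needed because only one thread ever touches the variable.

```ruby
# An actor-style counter: threads communicate via a mailbox (Queue)
# rather than sharing the counter variable.
mailbox = Queue.new

counter = Thread.new do
  count = 0
  while (msg = mailbox.pop) != :done
    count += 1 if msg == :increment
  end
  count  # the thread's return value is its final count
end

producers = 2.times.map do
  Thread.new { 1_000_000.times { mailbox << :increment } }
end

producers.each(&:join)
mailbox << :done
puts counter.value  # 2000000; only one thread ever touched count
```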
My first piece of advice to Rubyists: avoid Thread.new. This is exactly what Matz is saying too. Instead, look for infrastructure that abstracts the use of threads into a safer concurrency model; see Celluloid and girl_friday, for instance. Of course, MRI is not particularly suited to highly concurrent applications; JRuby is a better choice. Other languages like Clojure and Erlang were designed with concurrency as a language feature right from the start.
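To make the idea concrete, here is a toy sketch of the kind of abstraction these libraries provide (this is my own illustration, not girl_friday's actual API): the thread lives inside the work queue, so calling code never touches Thread.new or Mutex directly. The real libraries add sizing, supervision, and error handling on top.

```ruby
# A miniature background work queue: jobs go in, a single internal
# worker thread processes them, and shutdown drains the queue.
class TinyWorkQueue
  def initialize(&handler)
    @jobs = Queue.new
    @worker = Thread.new do
      while (job = @jobs.pop) != :shutdown
        handler.call(job)
      end
    end
  end

  def push(job)
    @jobs << job
  end

  def shutdown
    @jobs << :shutdown
    @worker.join
  end
end

results = Queue.new
q = TinyWorkQueue.new { |n| results << n * 2 }
5.times { |n| q.push(n) }
q.shutdown
puts results.size  # 5 jobs processed by the worker thread
```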
I'm not saying that threads and locks should be removed from all software. Rather, we should treat them for what they are: low-level abstractions that developers should not be using directly. As with threads and locks, there is still a need for assembly language, but it should be used very sparingly. Understanding and knowing how to use higher-level concurrency abstractions like actors and STM will make the concurrent pieces of your application easier to write and maintain. Unfortunately, not all of these options are available on MRI, but all are available to JRuby via Java libraries.
¹ True with JRuby; not true with MRI, because of the infamous "Global Interpreter Lock".