
Understanding volatile via example

We have spent the last couple of months stabilizing the lock detection functionality in Plumbr. During this work we have stumbled into many tricky concurrency issues. Many of the issues are unique, but one particular type keeps appearing again and again.

You might have guessed it – misuse of the volatile keyword. We have detected and solved a bunch of issues where the extensive use of volatile made arbitrary parts of the application slower, extended lock holding times and eventually brought the JVM to its knees. Or vice versa – leaving the access policy too liberal has triggered some nasty concurrency issues.

I guess every Java developer recalls their first steps in the language: days and days spent with manuals and tutorials. Those tutorials all had a list of keywords, among which volatile was one of the scariest. As days passed and more and more code was written without any need for this keyword, many of us forgot the existence of volatile. Until production systems started either corrupting data or dying in unpredictable ways. Debugging such cases forced some of us to actually understand the concept. But I bet it was not a pleasant lesson to learn, so maybe I can save some of you the time by shedding light on the concept via a simple example.

Example of volatile in action

The example simulates a bank office, the kind where you pick a queue number from a ticketing machine and then wait to be invited once the queue in front of you has been processed. To simulate such an office, we have created the following example, consisting of two threads.

The first of the two threads is implemented as CustomerInLine. This thread does nothing but wait until the value in NEXT_IN_LINE matches the customer's ticket. The ticket number is hardcoded to be #4. When the time arrives (NEXT_IN_LINE >= 4), the thread announces that the waiting is over and finishes. This simulates a customer arriving at the office with some customers already in the queue.

The queuing implementation is in the Queue class, which runs a loop calling for the next customer and then simulating work with that customer by sleeping for 200ms. After calling a customer, the value stored in the class variable NEXT_IN_LINE is increased by one.

public class Volatility {

	// Shared counter holding the number of the customer currently being called.
	// Starting from 1 so that the queue serves customers #1 to #10.
	static int NEXT_IN_LINE = 1;

	public static void main(String[] args) throws Exception {
		// Start the waiting customer and the queue processing in parallel.
		new CustomerInLine().start();
		new Queue().start();
	}

	// Customer holding ticket #4, busy-waiting until the queue reaches his number.
	static class CustomerInLine extends Thread {
		@Override
		public void run() {
			while (true) {
				if (NEXT_IN_LINE >= 4) {
					break;
				}
			}
			System.out.format("Great, finally #%d was called, now it is my turn\n", NEXT_IN_LINE);
		}
	}

	// Queue calling the customers one by one and spending 200ms with each of them.
	static class Queue extends Thread {
		@Override
		public void run() {
			while (NEXT_IN_LINE < 11) {
				System.out.format("Calling for the customer #%d\n", NEXT_IN_LINE++);
				try {
					Thread.sleep(200);
				} catch (InterruptedException e) {
					e.printStackTrace();
				}
			}
		}
	}
}

So, when running this simple program, you might expect its output to be similar to the following:

Calling for the customer #1
Calling for the customer #2
Calling for the customer #3
Calling for the customer #4
Great, finally #4 was called, now it is my turn
Calling for the customer #5
Calling for the customer #6
Calling for the customer #7
Calling for the customer #8
Calling for the customer #9
Calling for the customer #10

As it turns out, the assumption is wrong. Instead, you will see the Queue working through the list of 10 customers while the hapless thread simulating customer #4 never announces that it has seen the invite. What happened, and why is the customer still sitting there waiting endlessly?

Analyzing the outcome

What you are facing here is a JIT optimization applied to the code, caching the access to the NEXT_IN_LINE variable. Both threads get their own local copy, and the CustomerInLine thread never sees the Queue actually increasing the value of the variable. If you now think this is some kind of horrible bug in the JVM, you are not fully correct – compilers are allowed to do this to avoid rereading the value each time. So you gain a performance boost, but at a cost: if other threads change the state, the thread caching the copy does not know it and operates on the outdated value.
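
Conceptually (this is just an illustration, not the literal code the JIT produces), the CustomerInLine.run() loop ends up behaving as if it had been written like this:

	@Override
	public void run() {
		int cached = NEXT_IN_LINE;   // value read once and kept in a register
		while (true) {
			if (cached >= 4) {       // forever compares the stale local copy
				break;
			}
		}
		System.out.format("Great, finally #%d was called, now it is my turn\n", cached);
	}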

This is precisely the case volatile is meant for. With this keyword in place, the compiler is warned that this particular state is volatile and the code is forced to reread the value on every iteration of the loop. Equipped with this knowledge, we have a simple fix: just change the declaration of NEXT_IN_LINE to the following, and your customers will not be left sitting in the queue forever:

static volatile int NEXT_IN_LINE = 1;

For those who are happy with just understanding the use case for volatile, you are good to go. Just be aware of the extra cost attached: when you start declaring everything volatile, you are forcing the CPU to forget about local caches and go straight to main memory, slowing down your code and clogging the memory bus.
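
To put the proper use case in context, the classic legitimate pattern is a simple status flag with a single writer, as in the generic stop-flag sketch below (this sketch is mine, not part of the original example):

public class StopFlag {

	// volatile guarantees that the worker thread sees the updated flag
	private static volatile boolean running = true;

	public static void main(String[] args) throws InterruptedException {
		Thread worker = new Thread(() -> {
			while (running) {
				// simulate doing some work
			}
			System.out.println("Worker noticed the flag and stopped");
		});
		worker.start();

		Thread.sleep(500);
		running = false;   // a single write, made visible to the worker by volatile
		worker.join();
	}
}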

Volatile under the hood

For those who wish to understand the issue in more detail, stay with me. To see what is happening underneath, let's turn on debug output to see the assembly code generated from the bytecode by the JIT. This is achieved by specifying the following JVM options:

-XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly
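
For example, assuming the Volatility class above has been compiled into the current directory and that the hsdis disassembler library is available to the JVM (without it, PrintAssembly cannot produce the disassembly), the run could look like this:

java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly Volatility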

Running the program with those options, both with and without the volatile keyword, gives us the following important insight.

Running the code without the volatile keyword shows that at instruction 0x00000001085c1c5a we have a comparison between two values. When the comparison fails, we continue through 0x00000001085c1c60 to 0x00000001085c1c66, which jumps back to 0x00000001085c1c60, and an infinite loop is born. Note that the value is loaded into the register only once, at 0x00000001085c1c56, before the loop is entered.

  0x00000001085c1c56: mov    0x70(%r10),%r11d
  0x00000001085c1c5a: cmp    $0x4,%r11d
  0x00000001085c1c5e: jge    0x00000001085c1c68  ; OopMap{off=64}
                                                ;*if_icmplt
                                                ; - Volatility$CustomerInLine::run@4 (line 14)
  0x00000001085c1c60: test   %eax,-0x1c6ac66(%rip)        # 0x0000000106957000
                                                ;*if_icmplt
                                                ; - Volatility$CustomerInLine::run@4 (line 14)
                                                ;   {poll}
  0x00000001085c1c66: jmp    0x00000001085c1c60  ;*getstatic NEXT_IN_LINE
                                                ; - Volatility$CustomerInLine::run@0 (line 14)
  0x00000001085c1c68: mov    $0xffffff86,%esi

With the volatile keyword in place, we can see that at instruction 0x000000010a5c1c40 we load the value into a register, and at 0x000000010a5c1c4a we compare it to our guard value of 4. If the comparison fails, we jump back from 0x000000010a5c1c4e to 0x000000010a5c1c40, loading the value again for the new check. This ensures that we will see the changed value of the NEXT_IN_LINE variable.

  0x000000010a5c1c36: data32 nopw 0x0(%rax,%rax,1)
  0x000000010a5c1c40: mov    0x70(%r10),%r8d    ; OopMap{r10=Oop off=68}
                                                ;*if_icmplt
                                                ; - Volatility$CustomerInLine::run@4 (line 14)
  0x000000010a5c1c44: test   %eax,-0x1c1cc4a(%rip)        # 0x00000001089a5000
                                                ;   {poll}
  0x000000010a5c1c4a: cmp    $0x4,%r8d
  0x000000010a5c1c4e: jl     0x000000010a5c1c40  ;*if_icmplt
                                                ; - Volatility$CustomerInLine::run@4 (line 14)
  0x000000010a5c1c50: mov    $0x15,%esi

Now, hopefully this explanation will save you from a couple of nasty bugs.

Reference: Understanding volatile via example from our JCG partner Nikita Salnikov-Tarnovski at the Plumbr Blog.