Java Memory Model and optimisation
Overview
Many developers of multi-threaded code are familiar with the idea that different threads can have a different view of a value they hold. However, this is not the only reason a thread might not see a change if the field is not made thread safe: the JIT itself can play a part.
Why do different threads see different values?
When you have multiple threads, they attempt to minimise how much they interact, e.g. by avoiding shared access to the same memory. To do this, each core keeps a separate local copy of the data, e.g. in its Level 1 cache. This cache is usually eventually consistent. I have seen short periods, from around one micro-second up to 10 milli-seconds, where two threads see different values. Eventually the thread is context switched, or the cache is cleared or updated. There is no guarantee as to when this will happen, but it is almost always much less than a second.
How can the JIT play a part?
The Java Memory Model says there is no guarantee that a field which is not accessed in a thread-safe manner will ever see an update made by another thread. This allows the JIT to make an optimisation where a value which is only read, and never written, in a piece of code is effectively inlined into that code. This means that even if the cache is updated, the change might never be reflected in the compiled code.
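Conceptually, the optimisation is a form of hoisting: the JIT may read the non-volatile field once and then spin on that cached value. The sketch below is illustrative pseudo-Java of the effect, not actual generated code:

// Source as written:
while (running) {
    longCalculation();
}

// What the JIT is effectively permitted to produce for a non-volatile field:
if (running) {             // 'running' is read once here...
    while (true) {         // ...and the loop never re-reads it,
        longCalculation(); // so a later write by another thread is never observed
    }
}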
An example
This code will run until a boolean is set to false.
package vanilla.java.perfeg.threads;

import java.util.concurrent.TimeUnit;

public class OptimisationMain {
    static class MyTask implements Runnable {
        private final int loopTimes;
        private boolean running = true; // deliberately not volatile
        boolean stopped = false;

        public MyTask(int loopTimes) {
            this.loopTimes = loopTimes;
        }

        @Override
        public void run() {
            try {
                while (running) {
                    longCalculation();
                }
            } finally {
                stopped = true;
            }
        }

        // busy work whose only effect is to take time proportional to loopTimes
        private void longCalculation() {
            for (int i = 1; i < loopTimes; i++)
                if (Math.log10(i) < 0)
                    throw new AssertionError();
        }
    }

    public static void main(String... args) throws InterruptedException {
        int loopTimes = Integer.parseInt(args[0]);
        MyTask task = new MyTask(loopTimes);
        Thread thread = new Thread(task);
        thread.setDaemon(true);
        thread.start();
        TimeUnit.MILLISECONDS.sleep(100);
        task.running = false;
        for (int i = 0; i < 200; i++) {
            TimeUnit.MILLISECONDS.sleep(500);
            System.out.println("stopped = " + task.stopped);
            if (task.stopped)
                break;
        }
    }
}
The longCalculation() call repeatedly performs work which has no impact on memory; the only difference the loopTimes argument makes is how long each call takes. By taking longer, it determines whether the loop in run() is compiled before or after running is set to false.
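To reproduce both behaviours, the program can be run with different loop counts and -XX:+PrintCompilation enabled. The package and class name below come from the compilation log; the classpath is an assumption and should point at wherever the class is compiled:

java -cp classes -XX:+PrintCompilation vanilla.java.perfeg.threads.OptimisationMain 100
java -cp classes -XX:+PrintCompilation vanilla.java.perfeg.threads.OptimisationMain 1000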
If I run this with 10 or 100 and -XX:+PrintCompilation I see
111   1       java.lang.String::hashCode (55 bytes)
112   2       java.lang.String::charAt (29 bytes)
135   3       vanilla.java.perfeg.threads.OptimisationMain$MyTask::longCalculation (35 bytes)
204   1 %  !  vanilla.java.perfeg.threads.OptimisationMain$MyTask::run @ 0 (31 bytes)
stopped = false
stopped = false
stopped = false
stopped = false
... many deleted ...
stopped = false
stopped = false
stopped = false
stopped = false
stopped = false
If I run this with 1000, you can see that run() hasn't been compiled and the thread stops:
112   1       java.lang.String::hashCode (55 bytes)
112   2       java.lang.String::charAt (29 bytes)
133   3       vanilla.java.perfeg.threads.OptimisationMain$MyTask::longCalculation (35 bytes)
135   1 %     vanilla.java.perfeg.threads.OptimisationMain$MyTask::longCalculation @ 2 (35 bytes)
stopped = true
Once the loop in run() has been compiled, the change is never seen, even though the thread will have been context switched etc. many times.
How to fix this
The simple solution is to make the field volatile. This guarantees that the field's value is consistent across threads, not just eventually consistent, which is all the cache might give you.
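Applied to the example above, the change is one keyword per field. A minimal sketch of the affected fields (stopped is also written by one thread and read by another, so it gets the same treatment):

static class MyTask implements Runnable {
    private final int loopTimes;
    // volatile makes a write by the main thread visible to the worker thread
    // and prevents the JIT from hoisting the read of 'running' out of the loop
    private volatile boolean running = true;
    volatile boolean stopped = false;
    // ... rest of the class unchanged
}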
Conclusion
While there are many examples of questions like "Why doesn't my thread stop?", the answer often has more to do with the Java Memory Model, which allows the JIT to effectively "inline" such fields, than with the hardware keeping multiple copies of the data in different caches.
Reference: Java Memory Model and optimisation from our JCG partner Peter Lawrey at the Vanilla Java blog.