Concurrency optimization – Reduce lock granularity
In such cases we have lock contention: only one of the threads will acquire the lock (on the resource) and all the other threads that want the lock will block. This synchronization feature does not come for free; both the JVM and the OS consume resources in order to provide you with a valid concurrency model. The three most fundamental factors that make a concurrency implementation resource intensive are:
- Context switching
- Memory synchronization
- Blocking
In order to write optimized synchronization code you have to be aware of these three factors and how to reduce them. There are many things you must watch out for when writing such code. In this article I will show you a technique to reduce these factors by reducing lock granularity.
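To make blocking concrete, here is a minimal sketch (class and method names are mine, not from the article): several threads compete for one monitor, so only one proceeds at a time while the rest block, and the OS may context-switch the blocked threads out.

```java
// Minimal sketch of lock contention: one monitor, several threads.
// Only one thread holds the lock at a time; the others block on entry.
public class ContentionDemo {
    private final Object lock = new Object();
    private long counter = 0;

    public void increment() {
        synchronized (lock) { // all threads serialize here
            counter++;
        }
    }

    public long counter() {
        synchronized (lock) {
            return counter;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ContentionDemo demo = new ContentionDemo();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) {
                    demo.increment();
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // The lock makes the count exact, at the price of blocking.
        System.out.println(demo.counter()); // prints 40000
    }
}
```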
Starting with the basic rule: Do not hold the lock longer than necessary.
Do whatever you need to do before acquiring the lock, use the lock only to act on the synchronized resource, and release it immediately. See a simple example:
public class HelloSync {
    private Map dictionary = new HashMap();

    public synchronized void borringDeveloper(String key, String value) {
        long startTime = (new java.util.Date()).getTime();
        value = value + "_" + startTime;
        dictionary.put(key, value);
        System.out.println("I did this in " +
            ((new java.util.Date()).getTime() - startTime) + " milliseconds");
    }
}
In this example we violate the basic rule, because we create two Date objects, call System.out.println(), and do many String concatenations while holding the lock. The only action that needs synchronization is dictionary.put(key, value). Alter the code and move the synchronization from method scope to this single line. Slightly better code is this:
public class HelloSync {
    private Map dictionary = new HashMap();

    public void borringDeveloper(String key, String value) {
        long startTime = (new java.util.Date()).getTime();
        value = value + "_" + startTime;
        synchronized (dictionary) {
            dictionary.put(key, value);
        }
        System.out.println("I did this in " +
            ((new java.util.Date()).getTime() - startTime) + " milliseconds");
    }
}
The above code can be written even better, but I just want to give you the idea. If you are wondering how, check java.util.concurrent.ConcurrentHashMap.
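For the curious, here is a sketch of how the dictionary could look with ConcurrentHashMap, which synchronizes internally so the explicit synchronized block disappears (the class name and method names are illustrative, not from the article):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: ConcurrentHashMap handles its own internal synchronization,
// so callers do not need an external lock for put/get.
public class HelloConcurrent {
    private final Map<String, String> dictionary = new ConcurrentHashMap<>();

    public void put(String key, String value) {
        long startTime = System.currentTimeMillis();
        value = value + "_" + startTime;
        dictionary.put(key, value); // thread-safe without a synchronized block
    }

    public String get(String key) {
        return dictionary.get(key);
    }

    public static void main(String[] args) {
        HelloConcurrent hello = new HelloConcurrent();
        hello.put("greeting", "hi");
        System.out.println(hello.get("greeting")); // e.g. hi_1700000000000
    }
}
```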
So, how can we reduce lock granularity? The short answer: by asking for locks as rarely as possible. The basic idea is to use separate locks to guard multiple independent state variables of a class, instead of a single lock with class scope. Check this simple example, which I have seen in many applications.
public class Grocery {
    private final ArrayList fruits = new ArrayList();
    private final ArrayList vegetables = new ArrayList();

    public synchronized void addFruit(int index, String fruit) {
        fruits.add(index, fruit);
    }

    public synchronized void removeFruit(int index) {
        fruits.remove(index);
    }

    public synchronized void addVegetable(int index, String vegetable) {
        vegetables.add(index, vegetable);
    }

    public synchronized void removeVegetable(int index) {
        vegetables.remove(index);
    }
}
The grocery owner can add/remove fruits and vegetables to/from his grocery shop. This implementation of Grocery guards both fruits and vegetables with the single Grocery lock, as the synchronization is done at method scope. Instead of this fat lock, we can use two separate guards, one for each resource (fruits and vegetables). Check the improved code below.
public class Grocery {
    private final ArrayList fruits = new ArrayList();
    private final ArrayList vegetables = new ArrayList();

    public void addFruit(int index, String fruit) {
        synchronized (fruits) {
            fruits.add(index, fruit);
        }
    }

    public void removeFruit(int index) {
        synchronized (fruits) {
            fruits.remove(index);
        }
    }

    public void addVegetable(int index, String vegetable) {
        synchronized (vegetables) {
            vegetables.add(index, vegetable);
        }
    }

    public void removeVegetable(int index) {
        synchronized (vegetables) {
            vegetables.remove(index);
        }
    }
}
After using two guards (splitting the lock), we will see less locking traffic than the original fat lock would have. This technique works best when we apply it to locks with moderate contention. If we apply it to locks with slight contention, the gain is small but still positive. If we apply it to locks with heavy contention, the result is not always better, and you must be aware of this.
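Lock splitting has a natural generalization called lock striping, which ConcurrentHashMap has famously used internally: instead of one lock per independent field, an array of locks guards partitions of a single data structure. A minimal sketch, with illustrative names of my own:

```java
// Sketch of lock striping: keys are hashed onto a fixed array of locks,
// so threads touching different stripes do not contend with each other.
public class StripedCounter {
    private static final int STRIPES = 8;
    private final long[] counts = new long[STRIPES];
    private final Object[] locks = new Object[STRIPES];

    public StripedCounter() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
        }
    }

    private int stripeFor(Object key) {
        // Mask the sign bit so the index is always non-negative.
        return (key.hashCode() & 0x7fffffff) % STRIPES;
    }

    public void increment(Object key) {
        int s = stripeFor(key);
        synchronized (locks[s]) { // contention only within one stripe
            counts[s]++;
        }
    }

    public long total() {
        long sum = 0;
        for (int i = 0; i < STRIPES; i++) {
            synchronized (locks[i]) {
                sum += counts[i];
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        StripedCounter counter = new StripedCounter();
        counter.increment("apples");
        counter.increment("carrots");
        counter.increment("apples");
        System.out.println(counter.total()); // prints 3
    }
}
```

Note that an aggregate operation like total() must visit every stripe, so striping pays off only when most operations touch a single partition.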
Please use this technique with care. If you suspect that a lock is heavily contended, then please follow these steps:
- Confirm the traffic of your production requirements, then multiply it by 3 or 5 (or even 10, if you want to be well prepared).
- Run the appropriate tests on your testbed, based on the new traffic.
- Compare both solutions and only then choose the most appropriate.
There are more techniques that can improve synchronization performance, but for all techniques the basic rule is one: Do not hold the lock longer than necessary.
This basic rule can be translated to "ask for locks as rarely as possible", as I have already explained, or to other solutions which I will try to describe in future articles.
Two more important pieces of advice:
- Be aware of the classes in the java.util.concurrent package (and its subpackages), as they provide very clever and useful implementations.
- Concurrency code can often be minimized by using good design patterns. Always keep Enterprise Integration Patterns in mind; they can save your nights.
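As one example of the first piece of advice, a counter that would otherwise need a synchronized method can be replaced by java.util.concurrent.atomic.AtomicLong, which avoids locking entirely (the wrapper class here is an illustrative sketch, not from the article):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: AtomicLong uses compare-and-swap instead of a lock,
// so no thread ever blocks on increment.
public class AtomicCounter {
    private final AtomicLong counter = new AtomicLong();

    public long increment() {
        return counter.incrementAndGet(); // atomic, lock-free
    }

    public long value() {
        return counter.get();
    }

    public static void main(String[] args) {
        AtomicCounter counter = new AtomicCounter();
        counter.increment();
        counter.increment();
        System.out.println(counter.value()); // prints 2
    }
}
```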
Reference: Reduce lock granularity – Concurrency optimization from our JCG partner Adrianos Dadis at Java, Integration and the virtues of source.