Odd practices in Java
There are a number of practices in Java which oddly baffle me. Here are but a few.
Using -Xmx and -Xms
The option -Xmx is widely used to set the maximum memory size. As noted in the Java HotSpot VM Options documentation: "Options that begin with -X are non-standard (not guaranteed to be supported on all VM implementations), and are subject to change without notice in subsequent releases of the JDK."
So you would think that such a widely used option would not still be non-standard. In fact there is a standard option -mx, and similarly -ms. I don't know why these standard options are not used more widely, or even documented.
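As a minimal sketch (the class name MaxMemoryCheck is mine), the two spellings behave the same on HotSpot; run the class with -Xmx1g and again with -mx1g and it should report much the same figure:

    // Prints the configured maximum heap size.
    // "java -Xmx1g MaxMemoryCheck" and "java -mx1g MaxMemoryCheck" should report much the same value.
    public class MaxMemoryCheck {
        public static void main(String[] args) {
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.printf("Max heap: %,d bytes (~%d MB)%n", maxBytes, maxBytes / (1024 * 1024));
        }
    }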
Using NIO for non-blocking IO only
Non-blocking IO was a new feature of NIO for sockets. However, the default behaviour for an NIO Socket is blocking, and files in NIO support blocking mode only. NIO2 provides an asynchronous interface, but it does so by passing your request off to an ExecutorService (which is cheating really, because it doesn't do anything you couldn't already do yourself).
Personally I prefer blocking NIO. It is only suitable when you have a low number of binary connections, but it is an option which doesn't get enough press IMHO.
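As a rough sketch of what I mean by blocking NIO (the host and port below are placeholders), a SocketChannel left in its default blocking mode can be read with plain ByteBuffers and no Selector:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    public class BlockingNioClient {
        public static void main(String[] args) throws IOException {
            // SocketChannel is blocking by default: no configureBlocking(false), no Selector.
            try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 9000))) {
                ByteBuffer buffer = ByteBuffer.allocateDirect(64 * 1024);
                while (channel.read(buffer) >= 0) {   // blocks until data arrives or the peer closes
                    buffer.flip();
                    // process the bytes between position and limit here ...
                    buffer.clear();
                }
            }
        }
    }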
Using the 32-bit JVM to save memory
The amount of memory you save with a 32-bit JVM is far less than you might think. Modern 64-bit JVMs use 32-bit (compressed) references by default for heap sizes up to 32 GB. It is unlikely you want a larger heap than that (if only to avoid very long Full GC times).
The 32-bit JVM still has a smaller object header than the 64-bit JVM, but the difference is fairly small. The 64-bit JVM can use more, and larger, registers (on AMD/Intel x64 systems) and a much larger address space, giving you fewer memory limitations.
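If you want to confirm that 32-bit references are actually being used, here is a sketch using the HotSpot-specific diagnostic bean (it assumes a 64-bit HotSpot JVM; the flag does not exist elsewhere):

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class CompressedOopsCheck {
        public static void main(String[] args) {
            // HotSpot-specific: reports whether compressed (32-bit) object references are in use.
            // On a 32-bit JVM, or a non-HotSpot JVM, this option is not available.
            HotSpotDiagnosticMXBean bean =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            System.out.println("UseCompressedOops = " + bean.getVMOption("UseCompressedOops").getValue());
        }
    }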
Using threads to make everything faster
Using multiple threads can increase your CPU utilisation and reduce the impact of IO latency, but it doesn't solve all performance problems. It won't make your disks go faster, increase your network bandwidth, enlarge your L3 cache, increase your CPU-to-main-memory bandwidth, or make your database significantly faster.
Similarly, making everything concurrent won't make much difference either. Do you need 1000 concurrent collections when you only have 8 cores? No matter how many threads you have, only 8 of them will be running at once, and if you have 1000 collections it is quite unlikely that two threads will be using the same collection at the same time.
Use concurrency selectively for critical resources. Otherwise you risk not only increasing overhead and slowing down your application, but far worse is the increase in complexity you introduce.
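As a rough illustration of sizing concurrency to the hardware rather than to the number of tasks (the task counts here are arbitrary), a fixed pool of availableProcessors() threads gets through the same work as a 1000-thread pool on an 8-core box, without the extra scheduling overhead and complexity:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class CoreSizedPool {
        public static void main(String[] args) throws InterruptedException {
            int cores = Runtime.getRuntime().availableProcessors();
            // Only 'cores' threads can make progress at once, so size the pool to match.
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            for (int i = 0; i < 1000; i++) {
                final int task = i;
                pool.submit(() -> {
                    // CPU-bound work; more threads than cores would only add contention.
                    long sum = 0;
                    for (int j = 0; j < 1_000_000; j++) sum += (long) j * task;
                    return sum;
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }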
Reference: Odd practices in Java from our JCG partner Peter Lawrey at the Vanilla Java blog.