What are the bad features of Java
Overview
When you first learn to develop, you see overly broad statements claiming that various features are bad: for design, for performance, for clarity, for maintainability, because they "feel like a hack", or simply because the author doesn't like them.
This might be backed by real-world experience where removing the use of the feature improved the code. Sometimes this is because the developers didn't know how to use the feature correctly, and sometimes the feature is inherently error prone (depending on whether you like it or not).
It is disconcerting when fashion, or your team, changes and the same feature becomes fine or even the preferred methodology.
In this post, I look at some of the features people like to hate and why I think that, used correctly, they should be a force for good. Features are not as yes/no, good/bad as many like to believe.
Checked Exceptions
I am often surprised at the degree to which developers don't like to think about error handling. New developers don't even like to read error messages. It's hard work; they complain the application crashed, "it's not working". They have no idea why the exception was thrown, when often the error message and stack dump tell them exactly what went wrong, if only they could see the clues. When I write out stack traces for tracing purposes, many just see a log shaped like a crash when there was no error. Reading error messages is a skill, and at first it can be overwhelming.
Similarly, handling exceptions in a useful manner is too often avoided: I have no idea what to do with this exception, so I would rather either log it and pretend it didn't happen, or just blow up and leave it to the operations people or the GUI user, who have the least ability to deal with the error.
Many experienced developers hate checked exceptions as a result. However, the more I hear this, the more I am glad Java has checked exceptions, as I am convinced such developers really would find it too easy to ignore exceptions and just let the application die if they were not annoyed by them.
Checked exceptions can be overused, of course. The question when throwing a checked exception should be: do I want to annoy the developer calling this code by forcing them to think a little about error handling? If the answer is yes, throw a checked exception.
IMHO, it is a failing of the lambda design that it doesn't handle checked exceptions transparently, i.e. as a natural block of code would, by throwing out any unhandled exception as it does for unchecked exceptions and errors. However, given the history of lambdas and functional programming, where they don't like side effects at all, let alone short-cut error handling, it is not surprising.
You can get around the limitation of lambdas by re-throwing a checked exception as if it were an unchecked one. This works because the JVM has no notion of checked exceptions; they are a compile-time check, like generics. My preferred method is to use Unsafe.rethrowException, but there are three other ways of doing this. Thread.currentThread().stop(e) no longer works in Java 8, despite the fact that it was always safe to do.
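One of the generics-based alternatives looks roughly like this. This is only a sketch of the idea, not the Unsafe.rethrowException approach mentioned above, and the helper and file names are invented for illustration. It relies on the unchecked cast being erased at runtime, so the original checked exception propagates unchanged:

import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public final class Rethrow {
    private Rethrow() {
    }

    // The unchecked cast fools the compiler; the JVM never checks "checkedness",
    // so the original exception is thrown unchanged at runtime.
    @SuppressWarnings("unchecked")
    public static <T extends Throwable> RuntimeException rethrow(Throwable t) throws T {
        throw (T) t;
    }

    static void read(String file) throws IOException {
        throw new IOException("cannot read " + file);
    }

    public static void main(String[] args) {
        List<String> files = Arrays.asList("a.txt", "b.txt");
        // the lambda cannot declare IOException, so it is rethrown as if unchecked
        files.forEach(f -> {
            try {
                read(f);
            } catch (IOException e) {
                throw rethrow(e);
            }
        });
    }
}

The caller still sees the original IOException at runtime; only the compiler has been persuaded to stop tracking it.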
Was Thread.currentThread().stop(e) unsafe?
The method Thread.stop(Throwable) was unsafe because it could cause another thread to trigger an exception in a random section of code. This could be a checked exception in a portion of code which didn't expect it, or an exception which is caught in some portions of the thread but not others, leaving you with no idea what it would do.
However, the main reason it was unsafe is that it could leave atomic operations in a synchronized or locked section of code in an inconsistent state, corrupting memory in subtle and untestable ways.
To add to the confusion, the stack trace of the Throwable didn’t match the stack trace of the thread where the exception was actually thrown.
But what about Thread.currentThread().stop(e)? This triggers the current thread to throw an exception on the current line. This is no worse than just using throw: you are performing an operation the compiler can't fully check. The problem is that the compiler doesn't always know what you are doing and whether it is really safe or not. For generics this is classed as an "unchecked cast", a warning which you can disable with an annotation. Java doesn't support the same sort of operation for checked exceptions so well, and you end up using hacks, or worse, hiding the true checked exception inside a runtime exception, meaning there is little hope the caller will handle it correctly.
Is using static bad?
This is a new "rule" for me. I understand where it is coming from, but there are more exceptions to this rule than cases where it should apply. Let us first consider all the contexts in which the overloaded meaning of static can be used.
- static mutable fields
- static immutable fields (final primitives, or final fields pointing to objects which are not changed)
- static methods
- static classes (which have no implicit reference to an outer instance)
- static initialiser blocks
I would agree that using static mutable fields is likely to be either a newbie bug or something to be avoided if at all possible. If you see static fields being altered in a constructor, it is almost certainly a bug. (Even if it is not, I would avoid it.) I believe this is the cause of the advice to avoid static altogether.
However, in all the other cases, using static is not only more performant, it is clearer. It shows that the field is not different for each instance, or that the method or class doesn't implicitly depend on an instance.
In short, static is good, and mutable static fields are the exception, not the rule.
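To illustrate the distinction, here is a small sketch; the class, fields and numbers are invented for the example:

class Temperature {
    // static immutable: a shared constant, clear and cheap
    static final double ABSOLUTE_ZERO_CELSIUS = -273.15;

    // static mutable: shared state updated from a constructor - almost certainly a bug,
    // as every new instance silently changes behaviour for every other user of the class
    static int instanceCount;

    private final double celsius;

    Temperature(double celsius) {
        this.celsius = celsius;
        instanceCount++; // the suspicious pattern mentioned above
    }

    // static method: depends only on its arguments, not on any instance
    static double toKelvin(double celsius) {
        return celsius - ABSOLUTE_ZERO_CELSIUS;
    }
}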
Are Singletons bad?
The problems with singletons come from two directions. They are effectively global mutable state, making them difficult to maintain or encapsulate, e.g. in a unit test, and they support auto-wiring, i.e. any component can access them, making your dependencies unclear and difficult to manage. For these reasons, some developers hate them.
However, good dependency injection is a methodology which should be applied to all your components, singletons or not, and you should avoid global mutable state, whether it lives in a singleton or not.
If you exclude global state and self-wiring components, you are left with singletons which are immutable and passed via dependency injection, and in this case they can work really elegantly. A common pattern I use to implement strategies is an enum with one instance which implements an interface.
enum MyComparator implements Comparator<MyObject> {
    INSTANCE;

    public int compare(MyObject o1, MyObject o2) {
        // something a bit too complicated to put in a lambda
    }
}
This instance can be passed as an implementation of Comparator via dependency injection, and without mutable state it can be used safely across threads and unit tests.
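A minimal usage sketch, assuming a List&lt;MyObject&gt; is available (the loadTrades() helper is hypothetical):

List<MyObject> trades = loadTrades();   // hypothetical helper returning the data to sort
trades.sort(MyComparator.INSTANCE);     // no new instances created, no shared mutable state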
Can I get a library or framework to do that very simple thing for me?
Libraries and frameworks can save you a lot of time and wasted effort getting your own code to do something which already works elsewhere.
Even if you want to write your own code, I strongly suggest you understand what existing libraries and frameworks do so you can learn from them. Writing it yourself is not a shortcut that lets you avoid understanding any existing solutions. A journalist once wrote with despair about an aspiring journalist who didn't like to read, only to write. The same applies in software development.
However, I have seen (on Stack Overflow) developers go to great lengths to avoid using their own code for even trivial examples. They feel that if they use a library it must be better than anything they could have written. The problem with this is that it assumes adding libraries doesn't come at a cost in complexity, that you have a really good understanding of the library, and that you will never need to learn to write code you can trust.
Some developers use frameworks to learn what is actually a methodology. Developers often use a framework for dependency injection when they could just do this in plain Java, but they don't trust themselves or their team to do so.
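For example, constructor injection needs nothing beyond the language itself. This is a rough sketch; the interface, class and values are invented for illustration:

// the component declares what it needs; nothing is looked up globally
interface PriceSource {
    double priceFor(String symbol);
}

class OrderValuer {
    private final PriceSource prices;

    OrderValuer(PriceSource prices) {   // the dependency is injected here
        this.prices = prices;
    }

    double value(String symbol, int quantity) {
        return prices.priceFor(symbol) * quantity;
    }
}

// "wiring" is just plain code, e.g. in main() or a unit test
PriceSource testPrices = symbol -> 100.0;           // a fake for testing
OrderValuer valuer = new OrderValuer(testPrices);
System.out.println(valuer.value("ACME", 3));        // 300.0

The wiring is explicit and easy to follow, and swapping in a fake for a unit test is a one-line change.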
In the high-performance space, the simpler the code and the less work your application does, the easier it is to maintain, with fewer moving parts, and the faster it will go. You need to use the minimum of libraries and frameworks, ones which are reasonably easy to understand, so you can get your system to perform at its best.
Is using double for money bad?
Using fractional numbers without any regard for rounding will give you unexpected results. On the plus side, errors with double are usually obviously wrong, like 10.99999999999998 instead of 11.
Some have the view that BigDecimal is the solution. However, the problem is that BigDecimal has its own gotchas, is much harder to validate/read/write, but worst of all can look correct when it is not. Take this example:
double d = 1.0 / 3 * 3 + 0.01;
BigDecimal bd1 = BigDecimal.valueOf(1.0)
        .divide(BigDecimal.valueOf(3), 2, RoundingMode.HALF_UP)
        .multiply(BigDecimal.valueOf(3))
        .add(BigDecimal.valueOf(0.01))
        .setScale(2, BigDecimal.ROUND_HALF_UP);
BigDecimal bd2 = BigDecimal.valueOf(1.0)
        .divide(BigDecimal.valueOf(3), 2, RoundingMode.HALF_UP)
        .multiply(BigDecimal.valueOf(3)
                .add(BigDecimal.valueOf(0.01)))
        .setScale(2, BigDecimal.ROUND_HALF_UP);
System.out.println("d: " + d);
System.out.println("bd1: " + bd1);
System.out.println("bd2: " + bd2);
This produces three different results. By sight, which one produces the right result? Can you tell the difference between bd1 and bd2?
This prints:
d: 1.01
bd1: 1.00
bd2: 0.99
Can you see from the output which is wrong? Actually the answer should be 1.01.
Another gotcha of BigDecimal is that equals and compareTo do not behave the same: equals() can be false when compareTo() returns 0. I.e. in BigDecimal, 1.0 equals 1.00 is false because the scales are different.
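A two-line illustration of that behaviour (inside any method, with java.math.BigDecimal imported):

BigDecimal one = new BigDecimal("1.0");
BigDecimal alsoOne = new BigDecimal("1.00");
System.out.println(one.equals(alsoOne));     // false - the scales (1 vs 2) differ
System.out.println(one.compareTo(alsoOne));  // 0 - numerically the same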
The problem I have with BigDecimal is that you get code which is often harder to understand and produces incorrect results which look like they could be right. BigDecimal is also significantly slower and produces lots of garbage. (This is improving with each version of Java.) There are situations where BigDecimal is the best solution, but it is not a given, as some would protest.
If BigDecimal is not a great alternative, is there any other? Often int and long are used with fixed precision, e.g. a whole number of cents instead of a fraction of dollars. This has some challenges in that you have to remember where the decimal place is. If Java supports value types, it might make sense to use these as wrappers for money, giving you more safety, but with the control, clarity and performance of dealing with whole-number primitives.
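A minimal sketch of the whole-number approach (the values are invented, and production code would also need to handle currencies, negatives and overflow):

// prices kept as a whole number of cents
long priceInCents = 10_99;                      // $10.99
long quantity = 3;
long totalInCents = priceInCents * quantity;    // exact: 3297 cents

// convert to a display string only at the edge of the system
String display = String.format("$%d.%02d", totalInCents / 100, totalInCents % 100);
System.out.println(display);                    // $32.97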
Using null values
For developers new to Java, getting repeated NullPointerExceptions is a draining experience. Do I really have to create a new instance of every object, every element in an array, in Java? Other languages don't require this as it is often done via embedded data structures. (Something which is being considered for Java.)
Even experienced Java developers have difficulty dealing with null values, and some see it as a big mistake to have null in the language at all. IMHO the problem is that the replacements are often far worse, such as NULL objects which don't throw an NPE but perhaps should have been initialised to something else. In Java 8, Optional is a good addition which makes the handling of a non-result clearer. I think it is useful for those who struggle with NullPointerException, as it forces you to consider that there might not be a result at all. It doesn't solve the problem of uninitialised fields, however.
Personally, I don't like it, as it solves a problem which can be solved more generally by handling null correctly, but I recognise that for many it is an improvement.
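A brief sketch of the Optional style, using an invented findNickname(long) lookup which may have no result:

Optional<String> nickname = findNickname(userId);   // hypothetical lookup

// the caller is forced to decide what "no result" means here
String display = nickname.orElse("<no nickname>");
nickname.ifPresent(n -> System.out.println("Hi " + n));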
A common question is: how was I supposed to know a variable was null? To my mind this is the wrong way around. It should be: why assume it couldn't be null? If you can't answer that, you have to assume it could be null, and an NPE shouldn't be any surprise if you don't check for it.
You could argue that Java could do with more syntactic sugar to make code which handles null cleaner, such as the Elvis operator, but I think the real problem is that developers are not thinking about null values enough. E.g. do you check that an enum variable is not null before you switch on it? (I think switch should support a case null:, but it doesn't, or at least fall through to default:, but it doesn't do that either.)
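For example, switching on an enum reference that might be null throws a NullPointerException, so the check has to be explicit. A small sketch with an invented enum:

enum Side { BUY, SELL }

static String describe(Side side) {
    if (side == null)          // without this, switch (side) throws NullPointerException
        return "unknown";
    switch (side) {
        case BUY:  return "buying";
        case SELL: return "selling";
        default:   return "unexpected " + side;
    }
}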
How important is it to write code fast?
Java is not a terse language, and without an IDE to write half the code for you, it would be really painful to write, especially if you spent all day writing code.
But this is what developers do all day, don't they? Actually, they don't. Developers don't spend much of their time writing code; they spend 90% (for new code) to 99% (for legacy code) of it understanding the problem.
You might say: I write 1000 lines of code in a day; later I re-write the code (often making it shorter), and some time later I fix it. However, if you were to take just the code you needed in the end, while it is still fresh in your mind (or take it from a print-out), and divide it by the total time you spent on the project, end to end, you are likely to find it was actually less than 100 lines of code per day, possibly less than 10 lines per day.
So what were you really doing all that time, if it wasn't writing the finished product? You were understanding what was required by the end users, and what was required to implement the solution.
Someone once told me; it doesn’t matter how fast, how big, how deep or how many holes you dig, if you are digging them in the wrong place.
Conclusion
I hear views from beginners to distinguished developers claiming you shouldn't / I can't imagine why you would / you should be sacked if you use X; you should only use Y. I find such statements are rarely 100% accurate. Often there are edge cases, and sometimes very common cases, where such statements are misleading or simply incorrect.
I would treat any such broad comments with scepticism. Often their authors find they have to qualify what was said once they see that others don't share the same view.
Reference: What are the bad features of Java from our JCG partner Peter Lawrey at the Vanilla Java blog.
Comments
The BigDecimal problem appears because you are using the double constructor (valueOf). At that point you already have the rounding error.
Try using new BigDecimal("0.1") with the String constructor, which does not give the rounding error.
I think you have illustrated my point for me. Developers don’t understand how double works but they don’t understand BigDecimal either.
BigDecimal.valueOf(double) does the same as new BigDecimal(Double.toString(double)), as stated in the Javadoc.
In any case, there is no way the alternative could result in a 2% error.
My point was that the source of the error isn't clear, and it is not in how the BigDecimals are constructed. It is in the use of rounding.
First, I think you don't get my comment. It is true that BigDecimal.valueOf(double) is the same as new BigDecimal(Double.toString(double)), but I didn't say to use that. I said to use new BigDecimal(String), which is completely different. "1.0" is NOT equal to Double.toString(0.1). So I think that I know both how double and BigDecimal work. I don't think you do. Try this code:
BigDecimal bd3 = new BigDecimal("1.0")
        .divide(new BigDecimal("3"), 3, BigDecimal.ROUND_HALF_UP)
        .multiply(new BigDecimal("3"))
        .add(new BigDecimal("0.01"))
        .setScale(2, BigDecimal.ROUND_HALF_UP);
Its output is 1.01. In the second instance, your original code is wrong. Answering your question "Can you tell the difference between bd1 and bd2?" YES. Let's see the…
My beginning programming students commonly make this same error. This is one of those times when I rather wish that the API didn’t overload so readily. I try to impress upon them that user input is pretty much always String data, so keep as close to the source as possible.
So the point is proven – BigDecimal is tricky and may be deceiving. Would you trust a junior developer to know all these gotchas when coding? I would completely expect a junior dev to introduce a bug here, and I can hardly imagine any senior dev noticing a subtle bug created by the wrong order of operations on BigDecimal during a quick code review.
That’s why I completely agree with Peter that one should use long to handle currency stuff.
Pietro, you've cheated, because the initial rounding value was 2 digits and you put 3. Peter, I think that your example is not very good because the error is being introduced by defining 2 digits after the point during rounding. Normally, if you want to use 2 digits of your result, you would leave 3 or more during the rounding to get better precision. The point is that you can control this in BigDecimal, but you have no control over it in double. I think that if you want monetary values that follow some strict rounding rules (which…
You can control the rounding in double by adding your own rounding e.g. with Math.round(x * n) / n;
It is a good example for the point I was trying to make.
Well, I've not cheated, this is math. I forgot to say, because I thought it was obvious, but maybe it isn't: when you divide you must (!) use higher precision. This is a math rule. And it's also said in the BigDecimal javadoc. When you divide you add decimals, and if you do not add them you lose them. All the subsequent calls are then wrong. The point of the whole post is not that you can't use BigDecimal. To use BigDecimal you must know math and of course the BigDecimal API. Because when using double all is done automatically, using BigDecimal…
Actually I think developers DO spend too much time on the actual coding – because Java requires insane amounts of typing.
I propose to make Java more compact. A LOT more compact! See my homepage for more :-)
Cheers
If you spend 2-3x as long typing, you spend less time understanding its quirks. If you have a decent IDE, you shouldn't be typing much more than you do in Scala or Ruby, as the IDE does the typing for you. In any case, you still spend 90% of your time understanding the problem; perhaps in a more terse language it is only 5% of your time typing.