JVM: How to analyze Thread Dump
My goal is to share with you the knowledge on Thread Dump analysis that I have accumulated over the last 10 years: hundreds of Thread Dump analysis cycles, covering dozens of common problem patterns across many JVM versions and JVM vendors.
Please bookmark this page and stay tuned for weekly articles.
Please also feel free to share this Thread Dump training plan with your work colleagues and friends.
Sounds good, I really need to improve my Thread Dump skills… so where do we start?
What I’m proposing to you is a complete Thread Dump training plan covering the following items. I will also provide you with real-life Thread Dump examples that you can study and understand.
1) Thread Dump overview & fundamentals
2) Thread Dump generation techniques and available tools
3) Thread Dump format differences between Sun HotSpot, IBM JRE and Oracle JRockit
4) Thread Stack Trace explanation and interpretation
5) Thread Dump analysis and correlation techniques
6) Thread Dump common problem patterns (Thread race, deadlock, hanging IO calls, garbage collection / OutOfMemoryError problems, infinite looping etc.)
7) Thread Dump examples via real life case studies
I really hope this Thread Dump analysis training plan will be beneficial for you so please stay tuned for weekly updates and articles!
But what if I still have questions or am still struggling to understand these training articles?
Don’t worry, and please consider me your trainer. I strongly encourage you to ask me any question on Thread Dump analysis (remember, there are no stupid questions), so I propose the following options to you for free; simply choose the communication channel that you are most comfortable with:
1) Submit your Thread Dump related question(s) by posting your comment(s) below the article (please feel free to remain Anonymous)
2) Submit your Thread Dump data to the Root Cause Analysis forum
3) Email me your Thread Dump related question(s) at phcharbonneau@hotmail.com
Can I send you my Thread Dump data from my production environment / servers?
Yes, please feel free to send me your generated Thread Dump data via email or Root Cause Analysis forum if you wish to discuss the root cause of your problem(s). Real life Thread Dump analysis is always the best way to learn.
I really hope that you will enjoy and share this Thread Dump analysis training plan. I will do my very best to provide you with quality material and answers to any question.
Before going deeper into Thread Dump analysis and problem patterns, it is very important that you understand the fundamentals. This post will cover the basics and help you better understand the interaction between the JVM, the middleware and your Java EE container.
Java VM overview
The Java virtual machine is really the foundation of any Java EE platform. This is where your middleware and applications are deployed and active.
The JVM provides the middleware software and your Java / Java EE program with:
– A runtime environment for your Java / Java EE program (bytecode format)
– Several program features and utilities (IO facilities, data structure, Threads management, security, monitoring etc.)
– Dynamic memory allocation and management via the garbage collector
Your JVM can reside on many OS (Solaris, AIX, Windows etc.) and, depending on your physical server specifications, you can install 1…n JVM processes per physical / virtual server.
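As a quick illustration, you can query some of these JVM facts from within your own Java program via the standard Runtime and System APIs. Find below a minimal sketch (the printed values will obviously differ per server and JVM configuration):

```java
public class JvmFacts {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Number of processors visible to this JVM process
        System.out.println("Available processors: " + rt.availableProcessors());
        // Maximum Java Heap this JVM may grow to (-Xmx), in MB
        System.out.println("Max heap (MB): " + rt.maxMemory() / (1024 * 1024));
        // OS and JVM implementation, as seen by this process
        System.out.println("OS: " + System.getProperty("os.name"));
        System.out.println("JVM: " + System.getProperty("java.vm.name"));
    }
}
```

Running this on each of your 1…n JVM processes is a quick way to confirm which VM vendor and Heap settings you are actually dealing with before digging into Thread Dump data.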
JVM and Middleware software interactions
Find below a diagram showing you a high level interaction view between the JVM, middleware and application(s).
This is showing you a typical and simple interaction diagram between the JVM, middleware and application. As you can see, Thread allocation for a standard Java EE application is done mainly between the middleware kernel itself and the JVM (there are some exceptions, when the application itself or some APIs create Threads directly, but this is not common and must be done very carefully).
Also, please note that certain Threads are managed internally within the JVM itself such as GC (garbage collection) Threads in order to handle concurrent garbage collections.
Since most Thread allocations are done by the Java EE container, it is important that you understand and recognize the Thread Stack Trace and identify it properly in the Thread Dump data. This will allow you to quickly understand the type of request that the Java EE container is attempting to execute.
From a Thread Dump analysis perspective, you will learn how to differentiate between the different Thread Pools found from the JVM and identify the request type.
This last section will provide you with an overview of what a JVM Thread Dump is for the HotSpot VM and the different Threads that you will find. Detail on the IBM VM Thread Dump format will be provided in part 4.
Please note that you will find the Thread Dump sample used for this article in the root cause analysis forum.
JVM Thread Dump – what is it?
A JVM Thread Dump is a snapshot taken at a given time which provides you with a complete listing of all created Java Threads.
Each individual Java Thread found gives you information such as:
– Thread name; often used by middleware vendors to identify the Thread Id along with its associated Thread Pool name and state (running, stuck etc.)
– Thread type & priority ex: daemon prio=3 ** middleware software typically creates its Threads as daemon Threads, meaning they run in the background, providing services to their user, e.g. your Java EE application **
– Java Thread ID ex: tid=0x000000011e52a800 ** this is the Java Thread Id obtained via java.lang.Thread.getId(), usually implemented as an auto-incrementing long 1..n **
– Native Thread ID ex: nid=0x251c ** crucial information, as this native Thread Id allows you to correlate, for example, which Threads from an OS perspective are using the most CPU within your JVM **
– Java Thread State and detail ex: waiting for monitor entry [0xfffffffea5afb000] java.lang.Thread.State: BLOCKED (on object monitor)
** Allows you to quickly learn the Thread state and its potential current blocking condition **
– Java Thread Stack Trace; this is by far the most important data that you will find in the Thread Dump. This is also where you will spend most of your analysis time, since the Java Stack Trace provides you with 90% of the information you need in order to pinpoint the root cause of many problem pattern types, as you will learn later in the training sessions
– Java Heap breakdown; starting with HotSpot VM 1.6, you will also find at the bottom of the Thread Dump snapshot a breakdown of the HotSpot memory spaces utilization, such as your Java Heap (YoungGen, OldGen) & PermGen space. This is quite useful when excessive GC is suspected as a possible root cause, since you can correlate it out-of-the-box with the Thread data / patterns found
Heap
 PSYoungGen      total 466944K, used 178734K [0xffffffff45c00000, 0xffffffff70800000, 0xffffffff70800000)
  eden space 233472K, 76% used [0xffffffff45c00000,0xffffffff50ab7c50,0xffffffff54000000)
  from space 233472K, 0% used [0xffffffff62400000,0xffffffff62400000,0xffffffff70800000)
  to   space 233472K, 0% used [0xffffffff54000000,0xffffffff54000000,0xffffffff62400000)
 PSOldGen        total 1400832K, used 1400831K [0xfffffffef0400000, 0xffffffff45c00000, 0xffffffff45c00000)
  object space 1400832K, 99% used [0xfffffffef0400000,0xffffffff45bfffb8,0xffffffff45c00000)
 PSPermGen       total 262144K, used 248475K [0xfffffffed0400000, 0xfffffffee0400000, 0xfffffffef0400000)
  object space 262144K, 94% used [0xfffffffed0400000,0xfffffffedf6a6f08,0xfffffffee0400000)
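As a side note, most of these per-Thread attributes (name, Java Thread Id, state and Stack Trace) can also be captured programmatically from within the JVM via the standard java.lang.management.ThreadMXBean API. Find below a minimal sketch; this is not a replacement for a full Thread Dump (native Thread Ids and the Heap breakdown are not included), but it is handy for custom monitoring:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class MiniThreadDump {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Snapshot of all live Threads, including locked monitors & synchronizers
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            // Thread name, Java Thread Id and state, as seen in a regular Thread Dump
            System.out.printf("\"%s\" tid=%d state=%s%n",
                    info.getThreadName(), info.getThreadId(), info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("\tat " + frame);
            }
        }
    }
}
```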
Thread Dump breakdown overview
In order for you to better understand, find below a diagram showing you a visual breakdown of a HotSpot VM Thread Dump and its common Thread Pools found:
For now, find below a detailed explanation for each Thread Dump section as per our sample HotSpot Thread Dump:
# Full thread dump identifier
This is basically the unique keyword that you will find in your middleware / standalone Java standard output log once you generate a Thread Dump (ex: via kill -3 <PID> on UNIX). This is the beginning of the Thread Dump snapshot data.
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.0-b11 mixed mode):
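Before you can send that kill -3 signal (or run jstack) you need the Java process id. A common trick on pre-Java 9 HotSpot VMs is to parse it from the RuntimeMXBean name; please note that the pid@hostname format is a HotSpot convention, not a guaranteed specification, so treat this as a hedged sketch:

```java
import java.lang.management.ManagementFactory;

public class ShowPid {
    public static void main(String[] args) {
        // HotSpot convention: the RuntimeMXBean name looks like "12345@hostname"
        String vmName = ManagementFactory.getRuntimeMXBean().getName();
        String pid = vmName.split("@")[0];
        // You can then generate a Thread Dump externally against this process
        System.out.println("Run: kill -3 " + pid + "  (UNIX) or: jstack " + pid);
    }
}
```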
# Java EE middleware, third party & custom application Threads
This portion is the core of the Thread Dump and where you will typically spend most of your analysis time. The number of Threads found will depend on the middleware software that you use, third-party libraries (which might have their own Threads) and your application (if it creates any custom Threads, which is generally not a best practice).
In our sample Thread Dump, WebLogic is the middleware used. Starting with WebLogic 9.2, a self-tuning Thread Pool is used with the unique identifier ‘weblogic.kernel.Default (self-tuning)’.
"[STANDBY] ExecuteThread: '414' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=3 tid=0x000000010916a800 nid=0x2613 in Object.wait() [0xfffffffe9edff000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(Native Method)
	- waiting on <0xffffffff27d44de0> (a weblogic.work.ExecuteThread)
	at java.lang.Object.wait(Object.java:485)
	at weblogic.work.ExecuteThread.waitForRequest(ExecuteThread.java:160)
	- locked <0xffffffff27d44de0> (a weblogic.work.ExecuteThread)
	at weblogic.work.ExecuteThread.run(ExecuteThread.java:181)
# HotSpot VM Thread
This is an internal Thread managed by the HotSpot VM in order to perform internal native operations. Typically you should not worry about this one unless you see high CPU (via Thread Dump & prstat / native Thread Id correlation).
"VM Periodic Task Thread" prio=3 tid=0x0000000101238800 nid=0x19 waiting on condition
# HotSpot GC Thread
When using the HotSpot parallel GC (quite common these days on multi-core hardware), the HotSpot VM creates, by default or as per your JVM tuning, a certain number of GC Threads. These GC Threads allow the VM to perform its periodic GC cleanups in a parallel manner, leading to an overall reduction of the GC time, at the expense of increased CPU utilization.
"GC task thread#0 (ParallelGC)" prio=3 tid=0x0000000100120000 nid=0x3 runnable
"GC task thread#1 (ParallelGC)" prio=3 tid=0x0000000100131000 nid=0x4 runnable
…
This is crucial data as well, since when facing GC related problems such as excessive GC, memory leaks etc., you will be able to correlate any high CPU observed from the OS / Java process(es) with these Threads using their native id value (nid=0x3). You will learn how to identify and confirm this problem in future articles.
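The correlation itself is simple arithmetic: the nid value is the native Thread id printed in hexadecimal, while OS tools such as prstat -L (Solaris) or top -H (Linux) display it in decimal, so you simply convert between the two bases. A minimal sketch:

```java
public class NidConvert {
    public static void main(String[] args) {
        // nid=0x3 from the GC Thread above -> decimal 3 (the LWP id in prstat / top output)
        String nid = "0x3";
        long decimal = Long.parseLong(nid.substring(2), 16);
        System.out.println(nid + " -> " + decimal);

        // Reverse direction: a hot LWP id seen at the OS level, converted back to a Thread Dump nid
        long lwp = 9500;
        System.out.println(lwp + " -> nid=0x" + Long.toHexString(lwp));
    }
}
```

Once you have the decimal id, grep the matching nid in your Thread Dump to find out exactly which Thread (GC, middleware or application) is burning the CPU.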
# JNI global references count
JNI (Java Native Interface) global references are basically Object references from the native code to a Java object managed by the Java garbage collector. Its role is to prevent collection of an object that is still in use by native code but technically with no “live” references in the Java code.
It is also important to keep an eye on JNI references in order to detect JNI-related leaks. These can happen if your program uses JNI directly or through third-party tools, such as monitoring agents, which are prone to native memory leaks.
JNI global references: 1925
# Java Heap utilization view
This data was added back in JDK 1.6 and provides you with a short and fast view of your HotSpot Heap. I find it quite useful when troubleshooting GC-related problems along with high CPU, since you get both the Thread Dump & the Java Heap in a single snapshot, allowing you to determine (or rule out) any pressure point in a particular Java Heap memory space along with the Thread computation being done at that time. As you can see in our sample Thread Dump, the Java Heap OldGen is maxed out!
Heap
 PSYoungGen      total 466944K, used 178734K [0xffffffff45c00000, 0xffffffff70800000, 0xffffffff70800000)
  eden space 233472K, 76% used [0xffffffff45c00000,0xffffffff50ab7c50,0xffffffff54000000)
  from space 233472K, 0% used [0xffffffff62400000,0xffffffff62400000,0xffffffff70800000)
  to   space 233472K, 0% used [0xffffffff54000000,0xffffffff54000000,0xffffffff62400000)
 PSOldGen        total 1400832K, used 1400831K [0xfffffffef0400000, 0xffffffff45c00000, 0xffffffff45c00000)
  object space 1400832K, 99% used [0xfffffffef0400000,0xffffffff45bfffb8,0xffffffff45c00000)
 PSPermGen       total 262144K, used 248475K [0xfffffffed0400000, 0xfffffffee0400000, 0xfffffffef0400000)
  object space 262144K, 94% used [0xfffffffed0400000,0xfffffffedf6a6f08,0xfffffffee0400000)
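If you prefer to sample these figures at runtime, the standard java.lang.management.MemoryMXBean API exposes a similar, though less granular, Heap utilization view (totals only, not the per-space YoungGen / OldGen / PermGen breakdown shown above). A minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapView {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // getMax() can return -1 when the maximum is undefined; fall back to committed
        long max = heap.getMax() > 0 ? heap.getMax() : heap.getCommitted();
        // Comparable to the "used / total" figures at the bottom of a Thread Dump
        System.out.printf("Heap used: %dK of %dK (%d%% used)%n",
                heap.getUsed() / 1024, max / 1024, heap.getUsed() * 100 / max);
    }
}
```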
I hope this article has helped you understand the basic view of a HotSpot VM Thread Dump. The next article will provide you with this same Thread Dump overview and breakdown for the IBM VM.
Please feel free to post any comment or question.
Reference: How to analyze Thread Dump – part 1, How to analyze Thread Dump – Part2: JVM Overview & How to analyze Thread Dump – Part 3: HotSpot VM from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog.
thanks
Hi, Please note that I will release the next series shortly which will include Thread Dump Stack Trace analysis approach and common problem patterns that will be available to share on Java Code Geeks. Thread Dump analysis complexity is often due to so many different problem patterns so this should really help any Java / Java EE individual involved in either application support or development (load testing etc.) to identify such patterns more quickly. This series will also include data correlation techniques; especially for problems related to high CPU, excessive garbage collection etc. I’m also looking forward for…
Hi, what are the links to the next articles?
Thanks,
Crystal.
Where can I find this article P-H
This “JVM: How to analyze Thread Dump” article of yours is ridiculously similar to “How to Analyze Java Thread Dumps” http://www.cubrid.org/blog/dev-platform/how-to-analyze-java-thread-dumps/ published a couple of months ago by CUBRID developers on their official blog.
I strongly believe these two should go together hand in hand as they are mutually supportive. Great article, by the way!
My application is running in Tomcat. If the server is running for many days without restarting Tomcat, we face a problem: the server CPU usage goes above 100% very frequently and comes back to normal immediately. By the time we take a thread dump, it has come back to normal. However, on continuous observation it is found that the high CPU usage nid is pointing to a line in the thread dump:
“Concurrent Mark-Sweep GC Thread” prio=10 tid=0x00007fb3581e3800 nid=0x74f3 runnable
A few more lines are given below:
“VM Thread” prio=10 tid=0x00007fb35826e800 nid=0x74f4 runnable
“Gang worker#0 (Parallel GC Threads)” prio=10 tid=0x00007fb358012800 nid=0x74d7 runnable…
Hi C Jacob, It is likely that your problem is related to a major GC collection. You have quite a few GC threads created, so I would recommend verifying if the increased CPU is due to GC threads performing excessive garbage collection, typical during a major collection. With 16 GC threads running concurrently, this can significantly impact CPU utilization until the memory is fully reclaimed. The CMS collector has a tendency to fragment over time, at some point requiring a major collection which can trigger quite a large surge of your CPU utilization. One more recommendation: enable -verbose:gc or use…
Very useful info. Thank you so much
Hi Jacob,
Did you find the solution to the above problem?
Even our application reaches 100% with the following logs:
“Concurrent Mark-Sweep GC Thread” prio=10 tid=0x00007fb3581e3800 nid=0x74f3 runnable
Few more lines are given below
“VM Thread” prio=10 tid=0x00007fb35826e800 nid=0x74f4 runnable
“Gang worker#0 (Parallel GC Threads)” prio=10 tid=0x00007fb358012800 nid=0x74d7 runnable
“Gang worker#1 (Parallel GC Threads)” prio=10 tid=0x00007fb358014000 nid=0x74d8 runnable
“Gang worker#2 (Parallel GC Threads)” prio=10 tid=0x00007fb358016000 nid=0x74d9 runnable
“Gang worker#3 (Parallel GC Threads)” prio=10 tid=0x00007fb358018000 nid=0x74da runnable
HI, thanks very much.
I’ve started working with WebLogic support, and I have some problems identifying the issues which can be discovered by analyzing a thread dump.
Regards
Why do you write that creating custom threads is not best practice?
System was built on
J2EE/Struts
Tomcat Catalina
IIS
We get a frequent crash (Tomcat) problem, even after adding enough RAM recently.
Task Logs: ‘attempt_201407291140_0007_m_000002_0’ stdout logs
2014-07-29 11:52:26
Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.45-b08 mixed mode):

“IPC Client (339343228) connection to HDN003/192.168.101.233:60020 from root” daemon prio=10 tid=0x00007f7ef91fe800 nid=0x316c runnable [0x00007f7ee0191000]
   java.lang.Thread.State: RUNNABLE
	at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
	at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
	at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
	- locked (a sun.nio.ch.Util$2)
	- locked (a java.util.Collections$UnmodifiableSet)
	- locked (a sun.nio.ch.EPollSelectorImpl)
	at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
	at java.io.FilterInputStream.read(FilterInputStream.java:133)
	at java.io.FilterInputStream.read(FilterInputStream.java:133)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection$PingInputStream.read(RpcClient.java:555)
	at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
	- locked (a java.io.BufferedInputStream)
	at java.io.DataInputStream.readInt(DataInputStream.java:387)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection.readResponse(RpcClient.java:1059)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection.run(RpcClient.java:721)

“org.apache.hadoop.hdfs.PeerCache@42d795d2” daemon prio=10 tid=0x00007f7ef9264000 nid=0x202d waiting on condition [0x00007f7ee0313000]
   java.lang.Thread.State: TIMED_WAITING (sleeping)
	at java.lang.Thread.sleep(Native Method)…
My application is running on IBM WAS7 and it’s a J2EE application. The application frequently throws a core dump, “Requested by event USER”, and the current thread details are below:

3XMTHREADINFO Anonymous native thread
3XMTHREADINFO1 (native thread ID:0x52D0189, native priority: 0x0, native policy:UNKNOWN)
3XMTHREADINFO3 Native callstack:
4XENATIVESTACK (0x090000000BB481DC [libj9prt24.so+0xa1dc])
4XENATIVESTACK (0x090000000BBA1140 [libj9dmp24.so+0x12140])
4XENATIVESTACK (0x090000000BB3FFAC [libj9prt24.so+0x1fac])
4XENATIVESTACK (0x090000000BB9ED88 [libj9dmp24.so+0xfd88])
4XENATIVESTACK (0x090000000BB9D188 [libj9dmp24.so+0xe188])
4XENATIVESTACK (0x090000000BB3FFAC [libj9prt24.so+0x1fac])
4XENATIVESTACK (0x090000000BB9CD84 [libj9dmp24.so+0xdd84])
4XENATIVESTACK (0x090000000BBA3968 [libj9dmp24.so+0x14968])
4XENATIVESTACK (0x090000000BB91488 [libj9dmp24.so+0x2488])
4XENATIVESTACK (0x090000000BB9567C [libj9dmp24.so+0x667c])
4XENATIVESTACK (0x090000000BB3FFAC [libj9prt24.so+0x1fac])
4XENATIVESTACK (0x090000000BB95620 [libj9dmp24.so+0x6620])
4XENATIVESTACK (0x090000000BB953D4 [libj9dmp24.so+0x63d4])
4XENATIVESTACK (0x090000000BBB26B8 [libj9dmp24.so+0x236b8])
4XENATIVESTACK (0x090000000C44281C [libjclscar_24.so+0x1b81c])
4XENATIVESTACK (0x090000000BB3FFAC [libj9prt24.so+0x1fac])
4XENATIVESTACK (0x090000000C442680 [libjclscar_24.so+0x1b680])
4XENATIVESTACK (0x090000000BB40FB4 [libj9prt24.so+0x2fb4])
4XENATIVESTACK (0x090000000BB29C70…
Hi, one of our customers is seeing 100% CPU utilization. So we asked them to take a thread dump and the top command output (per-thread view) to check the CPU utilization of every thread in our application. I found five threads consuming more CPU:

Proc1: Thread Name Gang worker#0 (Parallel CMS Threads) 89%
Proc2: Thread Name Gang worker#1 (Parallel CMS Threads) 89%
Proc3: Thread Name Gang worker#2 (Parallel CMS Threads) 89%
Proc4: Thread Name Gang worker#2 (Parallel CMS Threads) 89%
Proc5: Thread Name Concurrent Mark-Sweep GC Thread - this changes between 69, 99 and 11%

What’s the…
Updated the question above: I see the following in the dump.

JNI global references: 1181

Heap
 par new generation total 436928K, used 249467K [0x00000006b0000000, 0x00000006d0000000, 0x00000006d0000000)
  eden space 349568K, 66% used [0x00000006b0000000, 0x00000006be228a60, 0x00000006c5560000)
  from space 87360K, 20% used [0x00000006caab0000, 0x00000006cbc263f0, 0x00000006d0000000)
  to space 87360K, 0% used [0x00000006c5560000, 0x00000006c5560000, 0x00000006caab0000)
 concurrent mark-sweep generation total 4718592K, used 3590058K [0x00000006d0000000, 0x00000007f0000000, 0x00000007f0000000)
 concurrent-mark-sweep perm gen total 262144K, used 217383K [0x00000007f0000000, 0x0000000800000000, 0x0000000800000000)

2015-09-24T14:15:26.873+0200: 4505818.029: [CMS-concurrent-mark: 4.718/4.744 secs] [Times: user=18.14 sys=0.28, real=4.74 secs]
2015-09-24T14:15:26.873+0200: 4505818.030: [CMS-concurrent-preclean-start]
2015-09-24T14:15:27.803+0200: 4505818.959: [CMS-concurrent-preclean: 0.929/0.929 secs] [Times: user=0.94 sys=0.00, real=0.93 secs]
2015-09-24T14:15:27.803+0200: 4505818.959: [CMS-concurrent-abortable-preclean-start]
2015-09-24T14:15:28.721+0200: 4505819.877: [GC…
Hi P-H,
Greetings!
I’m working in a WebSphere environment. Last week I could see the javacore and HeapDumps. While analyzing the dumps I can see only the OutOfMemory issue:
“Cause of thread dump: Dump Event “systhrow” (00040000) Detail “java/lang/OutOfMemoryError” “Java heap space” received”
I cross-checked everything like Memory, CPU etc. Everything looks good. Moreover, I can see in the Thread status that some threads were in “Waiting on Condition” and some threads were “Blocked”. Could you please suggest what exactly I need to investigate to overcome this issue with the “Waiting on Condition” and “Blocked” states?
Thanks in advance.
I came across the Fastthread tool http://fastthread.io/ It parses complex thread dumps and presents you with insightful metrics and beautiful graphs.
Root cause analysis forum link says “Sorry, the page you were looking for in this blog does not exist.”
Hi Pierre,
I am running grails 2.1.0 with JDK1.7.0 on a Windows 7 (64Bit) OS. I try to run the command line and I get the following Exception: java.lang.RuntimeException thrown from the UncaughtExceptionHandler in thread “main”. Here is the thread dump:

“AWT-EventQueue-0 14.0.5#IU-139.1803.20, eap:false” prio=0 tid=0x0 nid=0x0 runnable
   java.lang.Thread.State: RUNNABLE (in native)
	at java.util.zip.ZipFile.read(Native Method)
	at java.util.zip.ZipFile.access$1400(ZipFile.java:60)
	at java.util.zip.ZipFile$ZipFileInputStream.read(ZipFile.java:716)
	at com.intellij.openapi.util.io.FileUtilRt.loadBytes(FileUtilRt.java:496)
	at com.intellij.openapi.util.io.FileUtil.loadBytes(FileUtil.java:1469)
	at com.intellij.util.lang.MemoryResource.load(MemoryResource.java:75)
	at com.intellij.util.lang.JarLoader.getResource(JarLoader.java:102)
	at com.intellij.util.lang.ClassPath$ResourceStringLoaderIterator.process(ClassPath.java:279)
	at com.intellij.util.lang.ClassPath$ResourceStringLoaderIterator.process(ClassPath.java:269)
	at com.intellij.util.lang.ClasspathCache.iterateLoaders(ClasspathCache.java:112)
	at com.intellij.util.lang.ClassPath.getResource(ClassPath.java:82)
	at com.intellij.util.lang.UrlClassLoader.findClass(UrlClassLoader.java:164)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at com.intellij.psi.impl.source.JavaFileElementType.createNode(JavaFileElementType.java:65)
	at com.intellij.lang.ASTFactory.lazy(ASTFactory.java:57)
	at com.intellij.psi.impl.source.PsiFileImpl.createContentLeafElement(PsiFileImpl.java:110)
	at com.intellij.psi.impl.source.PsiFileImpl.createFileElement(PsiFileImpl.java:358)
	at com.intellij.psi.impl.source.PsiFileImpl.b(PsiFileImpl.java:192)
	at com.intellij.psi.impl.source.PsiFileImpl.getTreeElement(PsiFileImpl.java:125)
	at com.intellij.psi.impl.source.PsiFileImpl.calcTreeElement(PsiFileImpl.java:749)
	at com.intellij.psi.impl.source.PsiFileImpl.getNode(PsiFileImpl.java:959)
	at com.intellij.psi.stubs.LightStubBuilder.buildStubTree(LightStubBuilder.java:50)
	at…
Hi Ian, did you ever get this figured out? Jack on here mentioned you can upload your Java Thread dumps via http://fastthread.io/ to their server, and get results online…
Good article!
About Java Thread dumps, http://fastthread.io/ is also good. It is a free web tool. You upload your Thread dumps to their server, and the results are provided online.