Java hangs

I have been using the Java Service Wrapper with a particular application for quite some time, and it works fine. Since updating our application to a new version a few days ago, the JVM has started to hang, and the wrapper then prints this in the log: "JVM appears hung: Timed out waiting for signal from JVM".

It then automatically kills the JVM and starts the application again. This happens after about 10 hours of operation, which makes debugging difficult.

Of course I will review the changes we made, but none of them were major, and I don't suspect any of them could have caused this kind of problem.

Where can I start trying to figure out what is happening? The debug messages from the application do not show anything interesting. If the JVM simply crashed, it would usually create a dump that could help with debugging, but since it hangs, no dump is created. If I configure the wrapper not to restart the service automatically, is there anything I can do to get useful information out of the JVM before restarting it?

It seems to me that the JVM should not hang because of typical programming errors. Have you encountered anything before that can cause a JVM to freeze like this?

+6
4 answers

I once had several different versions of the same library (JBPM) on the classpath. With the wrapper you can use wildcards to include jars. Be careful with this, though, as you may accidentally include more than you should.
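
As a rough sketch of what that looks like in the wrapper configuration (the lib/ directory and jar names below are made-up examples, not taken from the question), the classpath is built from numbered wrapper.java.classpath properties, and a wildcard entry pulls in every jar in a directory:

# wrapper.conf - classpath entries (example paths)
wrapper.java.classpath.1=../lib/myapp.jar
# The wildcard includes every jar in lib/, including any stale copy
# such as an old jbpm jar left behind by a previous build.
wrapper.java.classpath.2=../lib/*.jar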

Here is an IBM article that contains information on debugging in Java. It basically suggests that there are two things that can cause a hang:

  • An infinite loop,
  • A deadlock (a minimal sketch follows below).
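
For illustration only (this example is mine, not from the original answer), here is a minimal Java sketch of the deadlock case: two threads take the same two locks in opposite order, so each ends up waiting forever for the lock the other holds, and a thread dump taken at that point shows both threads as BLOCKED:

public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        // Thread 1 takes lockA, then waits for lockB.
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                pause(100); // give the other thread time to grab lockB
                synchronized (lockB) {
                    System.out.println("t1 acquired both locks");
                }
            }
        });
        // Thread 2 takes lockB, then waits for lockA - the opposite order.
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {
                pause(100);
                synchronized (lockA) {
                    System.out.println("t2 acquired both locks");
                }
            }
        });
        t1.start();
        t2.start();
        // Both threads now block forever; a thread dump (kill -QUIT or jstack)
        // reports the deadlock explicitly.
    }

    private static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}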

Since then I have had to debug other hang problems. On Linux, you can send the JVM a QUIT signal to make it dump all thread stacks to the console. That really helps to find out where the problem is. Use this command to do it: kill -QUIT <pid>
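
Roughly, and with example paths and a placeholder <pid> (not values from the answer), the steps look like this; note that the dump goes to the process's standard output, which the wrapper normally redirects into its own log file:

# Find the PID of the Java process started by the wrapper.
pgrep -f java

# Ask the JVM to print a thread dump (QUIT is the same as signal 3).
kill -QUIT <pid>

# jstack from the JDK captures the same dump to a file instead.
jstack <pid> > /tmp/threads.txt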

Edit 6/13/2017

These days I use jmap, included in the JDK, to dump the program's entire heap. Then I use the Eclipse Memory Analyzer to see the exact state of the program when it crashed. You can view the list of active threads and then inspect the variables in each stack frame.

/usr/java/latest/bin/jmap -dump:file=/tmp/app-crash.hprof <PID> 

Where <PID> is the process ID of the Java process.

+1

Read up on the wrapper.ping.timeout property. The wrapper software constantly pings your JVM to make sure it is alive. If that ping fails for any reason, the wrapper considers the process hung and tries to restart it.

Depending on how your application is architected, your JVM may be busy processing something else when the wrapper tries to ping it.
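
As an illustrative sketch (the values shown are examples, not tuning advice from the answer), the relevant settings live in wrapper.conf:

# wrapper.conf - ping settings (example values)
# How often the wrapper pings the JVM, in seconds.
wrapper.ping.interval=5
# How long the wrapper waits for a ping response before it declares
# the JVM hung and restarts it, in seconds.
wrapper.ping.timeout=30

Raising wrapper.ping.timeout gives a temporarily busy JVM more time to answer, but it only hides the problem if the application is genuinely stuck.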

+8

See if you can use VisualVM to find out what is happening. VisualVM monitors the application the whole time, and when it stops responding you may be able to determine what is wrong.

If the VM freezes, you can get the state of the threads... I think VisualVM will make that a bit easier than a plain Ctrl-Break (or whatever the key combination is).
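
If the JVM is so far gone that it no longer responds to the normal thread-dump signal, jstack's force mode may still get something out of it; this is a hedged sketch assuming a HotSpot JDK where the -F option exists (JDK 8 and earlier; later JDKs moved this functionality elsewhere):

# Force a thread dump from a hung HotSpot JVM.
jstack -F <pid> > /tmp/hung-threads.txt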

(Edit based on comment)

Tried this. The last time it hung, the number of threads and the amount of memory used were both quite low, so neither of those seems to be the problem. Unfortunately, after it hangs and the wrapper kills it, you can no longer get a thread dump.

Is there a way to run it without the wrapper so you can debug it? Also, the NetBeans profiler might give you a way to look at it when it stops (I will check later today and see if I can find out whether it behaves differently).

+2

What environment are you in? OS, JVM version, hardware architecture?

This sounds like a bug, and given that it takes many hours to show up, it sounds like a bug related to running out of some resource.

+1
