Can Java application servers kill threads? If so, how?

Killing threads is deprecated in Java (and, according to the javadoc, not even implemented), and interruption is only a request: the interrupted thread is expected to exit, but may not do so. (Not providing any way to kill a thread inside the JVM is an alarming design, but my question is not about the design.)
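To make the cooperative nature of interruption concrete, here is a minimal sketch (the class and method names are mine, not from any API): the worker stops only because its loop checks the interrupt flag. A loop that never checks the flag would keep running despite the interrupt.

```java
// Sketch: interruption is cooperative -- the target thread must check the flag.
// A thread that ignores it (e.g. a tight CPU loop) simply keeps running.
public class InterruptDemo {
    public static boolean runUntilInterrupted() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // simulated work; exiting here is entirely voluntary
            }
        });
        worker.start();
        worker.interrupt();       // request termination -- only a request
        worker.join(1000);        // wait up to 1s for the cooperative exit
        return !worker.isAlive(); // true: the worker honoured the request
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runUntilInterrupted()); // prints "true"
    }
}
```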

How do Java application servers unload applications? Can they somehow destroy the threads of the application being unloaded? If so, how? If not, can a single thread with an infinite loop in a deployed application bring down the entire application server, with no way to intervene?

Sorry that I am not simply writing a test for this, but I would like to know what is really going on there.

Tags: java, multithreading, jvm, appserver

You are not allowed to create your own threads inside an EJB server; the EJB specification forbids it.

It's not so unusual to spawn threads in a web container (like Tomcat), although you should think carefully before doing so, and remember to manage the life cycle of those threads.
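As a sketch of what "manage the life cycle" can mean in practice: create the worker pool when the application starts and shut it down when the application is undeployed. In a web container this would typically hook into a `ServletContextListener`; the class below is a plain illustration with invented names, not container API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of managing worker-thread life cycle the way a
// ServletContextListener would: pool created on startup,
// shut down cooperatively on undeploy.
public class WorkerLifecycle {
    private ExecutorService pool;

    public void start() {                  // ~ contextInitialized()
        pool = Executors.newFixedThreadPool(2);
    }

    public boolean stop() throws InterruptedException { // ~ contextDestroyed()
        pool.shutdownNow();                // interrupts all worker threads
        return pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        WorkerLifecycle wl = new WorkerLifecycle();
        wl.start();
        System.out.println(wl.stop());     // prints "true" when workers exit
    }
}
```

Note that `shutdownNow()` still relies on interruption, so a task that ignores the interrupt flag will outlive the undeploy, which is exactly the scenario the question worries about.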


Not providing any way to kill the thread inside the JVM is an alarming design, but my question is not related to the design.

Since your real question has already been answered, I will address the proposition quoted above.

The history is that the Java designers initially tried to solve the problem of killing and pausing threads, but they ran into a fundamental problem that could not be solved within the Java language.

The problem is that you simply cannot safely kill a thread that may be mutating shared data non-atomically, or that may be synchronizing with other threads via the wait/notify mechanism. If you kill a thread in that context, you get partially updated data structures and other threads waiting for notifications that will never arrive. In other words, destroying one thread can leave the rest of the application in an undefined and broken state.
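A small sketch of the first hazard (the class and invariant are invented for illustration): this object maintains an invariant across two fields, updated in two steps. A thread killed between the two writes would leave the invariant broken for every other thread, even though the update runs inside a `synchronized` block.

```java
// Sketch of the hazard: balance and entryCount must move together.
// A thread killed between the two writes below would leave them
// inconsistent -- holding the monitor does not protect against that,
// because Thread.stop() was specified to release held monitors.
public class Ledger {
    private int balance = 0;
    private int entryCount = 0;

    // a non-atomic two-step update; safe only if it runs to completion
    public synchronized void deposit(int amount) {
        balance += amount;
        // <-- a kill landing here breaks the invariant
        entryCount++;
    }

    public synchronized boolean invariantHolds(int expectedEntries) {
        return entryCount == expectedEntries;
    }

    public static void main(String[] args) {
        Ledger l = new Ledger();
        l.deposit(100);
        System.out.println(l.invariantHolds(1)); // prints "true"
    }
}
```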

Other languages/libraries (e.g. C, C++, C#) that allow killing threads suffer from the same problems I described above, even if the relevant specifications/textbooks do not make this clear. Although you can kill threads in them, you have to design and implement the entire application very carefully to make it safe. Generally speaking, it is too hard to do correctly.

So (hypothetically), what would you do to make thread killing safe in Java? Here are some ideas:

  • If your JVM implemented Isolates, you could run the killable computation in a child isolate. The problem is that a properly implemented isolate can only interact with other isolates by message passing, which is usually much more expensive.

  • The issue of shared mutable state could be resolved by banning mutation outright, or by adding transactions to the Java execution model. Either would fundamentally change Java.

  • The wait/notify problem could be solved by replacing it with a rendezvous or messaging mechanism, which makes it possible to tell the "other" thread that its partner has gone away. The "other" thread still has to be coded to recover from this.
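The messaging idea in the last bullet can be sketched with a standard blocking queue and a "poison pill" sentinel (the names here are mine): instead of blocking in `wait()` for a notification that may never come, the consumer blocks on the queue, and the departing thread sends an explicit sentinel announcing its exit.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the messaging alternative to wait/notify: the producer
// announces its departure with a sentinel message, so the consumer is
// never left waiting for a notification that will not arrive.
public class PoisonPillDemo {
    static final String POISON = "__STOP__"; // illustrative sentinel value

    public static int consume(BlockingQueue<String> queue)
            throws InterruptedException {
        int handled = 0;
        while (true) {
            String msg = queue.take();
            if (POISON.equals(msg)) break;   // partner announced its exit
            handled++;                       // process a real message
        }
        return handled;
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        q.put("a"); q.put("b"); q.put(POISON);
        System.out.println(consume(q)); // prints "2"
    }
}
```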

EDIT - in response to comments.

Mutex deadlock was not a problem for Thread.destroy(), since it was intended to release (break) all monitors held by the destroyed thread. The problem was that there was no guarantee that the data structure protected by the monitor would be in a consistent state afterwards.

If I understand the history correctly, Thread.suspend(), Thread.stop(), etc. really did cause problems in real Java 1.0 applications. And those problems were so serious and so intractable for application developers that the JVM designers decided the best course was to deprecate the methods. That would not have been an easy decision.

Now, if you are brave, you can still use these methods, and in some cases they may even be safe. But building an application around deprecated methods is not good software engineering practice.

