This is not well explained, but, as far as I can tell, server mode is synchronous per core, while workstation mode is asynchronous.
In other words, workstation mode is intended for a small number of long-running applications that need consistent performance. The garbage collector tries to "stay out of the way" but, as a result, is on average less efficient.
Server mode is intended for applications where each "job" is relatively short-lived and handled by a single core (edit: think of a multi-threaded web server). The idea is that each "job" gets the full power of a processor and finishes quickly, but that occasionally a core stops handling requests and cleans up memory. So in this case the hope is that the GC is more efficient on average, but a core is unavailable while it runs, so the application needs to be able to adapt to that.
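For concreteness, and assuming this is the .NET CLR being discussed, the two modes are selected with the documented gcServer and gcConcurrent runtime settings in app.config. A minimal sketch:

    <configuration>
      <runtime>
        <!-- true = server GC: one GC thread per core, stop-the-world collections -->
        <gcServer enabled="true"/>
        <!-- false = turn off concurrent collection -->
        <gcConcurrent enabled="false"/>
      </runtime>
    </configuration>

With gcServer on and gcConcurrent off you get the "stop and clean up" behaviour described above; the workstation defaults (gcServer off, gcConcurrent on) give the "stay out of the way" behaviour.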
In your case, it sounds like, because you have a single application whose threads are relatively coupled, you fit the model expected by the first mode better than the second.
But that's all just after-the-fact justification. Measure your system's performance (as ammoQ suggests: not your GC's performance, but how well the application behaves) and use whatever you measure to be better.
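As a rough sketch of that kind of measurement (RunRepresentativeWorkload is a placeholder for your real work; Stopwatch and GC.CollectionCount are standard .NET APIs, and GCSettings.IsServerGC just confirms which collector the process actually got):

    using System;
    using System.Diagnostics;
    using System.Runtime;

    class GcModeCheck
    {
        static void Main()
        {
            // Confirm which collector the process is actually running with.
            Console.WriteLine("Server GC: " + GCSettings.IsServerGC);

            // Snapshot collection counts, time the real workload, then compare.
            int gen0Before = GC.CollectionCount(0);
            int gen2Before = GC.CollectionCount(2);
            Stopwatch sw = Stopwatch.StartNew();

            RunRepresentativeWorkload(); // placeholder: the application's actual work

            sw.Stop();
            Console.WriteLine("Elapsed: " + sw.Elapsed);
            Console.WriteLine("Gen0 collections: " + (GC.CollectionCount(0) - gen0Before));
            Console.WriteLine("Gen2 collections: " + (GC.CollectionCount(2) - gen2Before));
        }

        static void RunRepresentativeWorkload()
        {
            // Stand-in for whatever the application really does under load.
        }
    }

The point is the same as above: compare the end-to-end numbers under each GC mode and keep whichever configuration the measurements favour.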
andrew cooke Nov 10 '09 at 11:33