Should we use server garbage collection or workstation garbage collection?

I have a large multithreaded C# application running on a four-processor multi-core server. We are currently using server-mode garbage collection, but testing showed that workstation GC mode is faster.

MSDN says:

Managed code applications that use the server API receive significant benefits from using the server-optimized garbage collector (GC) instead of the default workstation GC.

Workstation is the default GC mode and the only one available on single-processor computers. Workstation GC is hosted in console and Windows Forms applications. It performs full (generation 2) collections concurrently with the running program, thereby minimizing latency. This mode is useful for client applications, where perceived performance is usually more important than raw throughput.

Server GC is available only on multiprocessor computers. It creates a separate managed heap and thread for each processor and performs collections in parallel. During collection, all managed threads are paused (threads running native code are paused only when the native call returns). In this way, server GC mode maximizes throughput (the number of requests per second) and improves scalability as the number of processors increases. It especially shines on computers with four or more processors.

But we are not seeing this great performance! Does anyone have any tips?
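One thing worth ruling out first is whether the process is actually running the GC mode you think it is; the configured mode can silently fail to apply (for example, server GC is unavailable on a single-processor box). A minimal sketch that checks this at runtime via `System.Runtime.GCSettings`:

```csharp
using System;
using System.Runtime;

class GcModeCheck
{
    static void Main()
    {
        // True only when the process is really running the server GC,
        // regardless of what the .config file requested.
        Console.WriteLine("Server GC:    " + GCSettings.IsServerGC);
        Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
    }
}
```

Running this once under each configuration confirms the intended collector was picked up before any timing comparison is trusted.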

+24
garbage-collection c#
Nov 10 '09 at
2 answers

This is not well explained, but, as far as I can tell, server mode is synchronous per core, while workstation mode is asynchronous.

In other words, workstation mode is intended for a small number of long-running applications that need consistent performance. Garbage collection tries to "stay out of the way", but, as a consequence, is less efficient on average.

Server mode is intended for applications where each "job" is relatively short-lived and handled by a single core (edit: think of a multithreaded web server). The idea is that each "job" gets the full power of a CPU and finishes quickly, but that occasionally a core stops handling requests and cleans up memory. So in this case the hope is that the GC is more efficient on average, but the core is unavailable while it runs, so the application needs to be able to adapt to that.

In your case, it sounds like, because you have a single application whose threads are relatively coupled, you fit the model expected by the first mode better than the second.

But that's all post-hoc rationalization. Measure the performance of your system (as ammoQ said, not the performance of your GC, but how well the application behaves) and use whatever you measure to be better.
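One concrete way to do that comparison is to time a representative workload and count collections per generation under each GC mode. A rough sketch (the `RunWorkload` body here is a placeholder allocation loop; substitute a representative slice of the real application):

```csharp
using System;
using System.Diagnostics;

class GcBenchmark
{
    static void Main()
    {
        // Snapshot collection counts before the workload.
        int gen0 = GC.CollectionCount(0);
        int gen1 = GC.CollectionCount(1);
        int gen2 = GC.CollectionCount(2);

        var sw = Stopwatch.StartNew();
        RunWorkload(); // placeholder: swap in the real work
        sw.Stop();

        Console.WriteLine("Elapsed: {0} ms", sw.ElapsedMilliseconds);
        Console.WriteLine("Gen0/1/2 collections: {0}/{1}/{2}",
            GC.CollectionCount(0) - gen0,
            GC.CollectionCount(1) - gen1,
            GC.CollectionCount(2) - gen2);
    }

    static void RunWorkload()
    {
        // Dummy allocation-heavy loop so the sketch runs on its own.
        for (int i = 0; i < 1000000; i++)
        {
            var data = new byte[128];
            data[0] = 1;
        }
    }
}
```

Run the same binary twice, once with server GC enabled in the config and once without, and compare the numbers that matter to the application (end-to-end time here, or requests per second in a server).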

+15
Nov 10 '09 at 11:33

.NET 4.5 introduces concurrent server garbage collection.

http://msdn.microsoft.com/en-us/library/ee787088.aspx

To enable it, specify <gcServer enabled="true"/> and <gcConcurrent enabled="true"/> (the latter is the default, so it can be omitted).
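Put together in an app.config, that looks like the following (a minimal sketch; both elements go under the runtime element):

```xml
<configuration>
  <runtime>
    <gcServer enabled="true"/>
    <!-- Concurrent (background) GC is on by default, so this line is optional. -->
    <gcConcurrent enabled="true"/>
  </runtime>
</configuration>
```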

And there is a new SustainedLowLatency mode:

In the .NET Framework 4.5, SustainedLowLatency mode is available for both workstation and server GC. To enable it, set the GCSettings.LatencyMode property to GCLatencyMode.SustainedLowLatency.
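Setting it looks like the sketch below (GCSettings lives in System.Runtime). The try/finally restore is a common pattern so the rest of the application returns to the previous mode; `DoLatencyCriticalWork` is a hypothetical placeholder for the time-sensitive code:

```csharp
using System;
using System.Runtime;

class LowLatencySection
{
    static void Main()
    {
        GCLatencyMode previous = GCSettings.LatencyMode;
        try
        {
            // Suppress blocking gen-2 collections for a latency-critical section.
            GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
            DoLatencyCriticalWork();
        }
        finally
        {
            // Restore whatever mode the app was using before.
            GCSettings.LatencyMode = previous;
        }
    }

    static void DoLatencyCriticalWork()
    {
        // Dummy body so the sketch compiles and runs on its own.
        Console.WriteLine("working under SustainedLowLatency");
    }
}
```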

+5
Nov 02 '12 at 9:50
