Why does a WCF service configured with per-call instancing and multiple concurrency behave differently when called from separate processes than when called from multiple threads in one process?
I have an application that distributes data across a number of threads and makes calls to the WCF service (don't assume the lock contention is in my code; I have checked it again). During testing I noticed that increasing the number of threads in the distributing application does not increase the overall throughput of the WCF processing service: it stays at roughly 800 msg/min (messages processed per minute). But if I start a second instance of the application, the combined throughput rises to ~1200 msg/min.
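For reference, a minimal sketch of the service configuration described in the title (the contract and operation names here are hypothetical; only the `ServiceBehavior` settings reflect the question):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IMessageProcessor
{
    [OperationContract]
    void Process(string message); // hypothetical operation
}

// Per-call instancing with multiple concurrency: WCF creates a new
// service instance for every incoming call and allows calls to be
// dispatched in parallel rather than serialized on a single instance.
[ServiceBehavior(
    InstanceContextMode = InstanceContextMode.PerCall,
    ConcurrencyMode = ConcurrencyMode.Multiple)]
public class MessageProcessor : IMessageProcessor
{
    public void Process(string message)
    {
        // processing logic elided
    }
}
```

With this combination, any remaining serialization comes from throttling and connection limits rather than from the instancing model itself.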
What am I doing wrong? What did I miss? I do not understand this behavior.
UPDATE #1 (answers to questions in the comments)
Thanks for the quick answers. The maximum number of connections is set to 1000 in the configuration (yes, in system.net). According to this WCF article on instances and threading, the maximum number of concurrent calls defaults to 16 × the number of cores, so I assume that if ~30 threads make calls against a 2-CPU machine, the WCF service should accept essentially all of those calls concurrently?
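For context, the configuration entries mentioned above would look roughly like this (a sketch; the `address="*"` entry and the behavior name are placeholders, and the `serviceThrottling` values show explicit overrides of the defaults, which in WCF 4.0+ are 16 × processor count for `maxConcurrentCalls` and 100 × processor count for `maxConcurrentSessions`):

```xml
<configuration>
  <system.net>
    <connectionManagement>
      <!-- per-process cap on outgoing connections to each host -->
      <add address="*" maxconnection="1000" />
    </connectionManagement>
  </system.net>
  <system.serviceModel>
    <behaviors>
      <serviceBehaviors>
        <behavior name="ThrottledBehavior">
          <!-- explicit throttle; remove to fall back to the defaults -->
          <serviceThrottling maxConcurrentCalls="32"
                             maxConcurrentSessions="200"
                             maxConcurrentInstances="232" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>
```

Note that the `system.net` connection limit applies per client process, which is one of the settings worth checking when two processes outperform one multithreaded process.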
Could this have something to do with shared memory? That is probably the only difference between multiple threads and multiple processes, so I suspect it does.
I do not have the opportunity right now to test it with more processors, or with a single one. I will do so when I can.