Why is our software running much slower under virtualization?

I am trying to understand why our software runs so much slower under virtualization. Most of the statistics I have seen say it should be at worst a 10% performance penalty, but on a Windows virtual server the penalty is more like 100-400%. I have tried to profile the differences, but the profile results do not make much sense to me. Here is what I see when I profile on my 32-bit Vista box without virtualization: [profiler screenshot]

And here is a run on a 64-bit Windows 2008 server with virtualization: [profiler screenshot]

The slow run spends a very large amount of time in RtlInitializeExceptionChain, which shows up as 0.0 in the fast run. Any idea what it does? Also, when I attach to the process on my own machine, there is only a single thread, sitting in PulseEvent, but when I attach on the server there are two threads, in GetDurationFormatEx and RtlInitializeExceptionChain. As far as I know, the code we wrote uses only one thread. Also, for what it is worth, this is a console application written in pure C with no GUI.
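If you want to double-check how many threads the process really has at runtime, independent of what the profiler labels them, a Toolhelp snapshot will tell you. This is a minimal sketch of my own (not part of the original program), with only basic error handling:

```c
/* Sketch: count and list the threads of the current process via the
 * Toolhelp API. Build against the Windows SDK (kernel32). */
#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

int main(void)
{
    DWORD pid = GetCurrentProcessId();
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return 1;

    THREADENTRY32 te;
    te.dwSize = sizeof(te);
    int count = 0;

    if (Thread32First(snap, &te)) {
        do {
            /* The snapshot contains every thread in the system,
             * so filter down to our own process id. */
            if (te.th32OwnerProcessID == pid) {
                printf("thread id: %lu\n", te.th32ThreadID);
                ++count;
            }
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);

    printf("threads in this process: %d\n", count);
    return 0;
}
```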

Can anyone shed some light on any of this for me? Even just information on what some of these ntdll and kernel32 calls do would help. I am also not sure how much of the difference is 64-bit vs. 32-bit and how much is virtual vs. non-virtual. Unfortunately, I do not have easy access to other configurations to isolate the difference.

2 answers

I suppose we could separate the reasons for slow performance on a virtual machine into two classes:

1. Configuration skew

This category covers everything that has nothing to do with virtualization per se but where the configured virtual machine is simply not as beefy as the real machine. A really easy mistake is to give the virtual machine just one CPU core and then compare it against an application running on a dual-processor, 8-core, 16-hyperthread Intel Core i7 machine. In your case, at the very least you were not running the same OS; most likely there is other skew as well.
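A quick way to check for this kind of skew from inside the guest is to ask Windows how many logical processors it actually sees and compare that against the physical host. A minimal sketch (my own, not from the answer):

```c
/* Sketch: print what the guest OS thinks the hardware looks like,
 * so it can be compared with the host configuration. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetNativeSystemInfo(&si);   /* reports native info even when running under WOW64 */

    printf("logical processors visible: %lu\n", si.dwNumberOfProcessors);
    printf("processor architecture id:  %u\n", si.wProcessorArchitecture);
    return 0;
}
```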

2. Poor fit for virtualization

Things like databases that do a lot of locking do not virtualize well, so the typical overhead figures may not apply to your test case. It is not your exact case, but I have been told the penalty is 30-40% for MySQL. I notice an entry point named ...semaphore in your list; that is a sign of something that will virtualize slowly.

The basic problem is that constructs that cannot be executed natively in user mode require traps (slow all by themselves) and then further overhead in the hypervisor's emulation code.
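To get a feel for how much this costs on your two machines, you can time raw kernel-object operations directly. Here is a rough microbenchmark sketch of my own (not from the question) that hammers an uncontended semaphore; each wait/release pair forces kernel transitions, which is exactly the kind of operation that can trap into the hypervisor and slow down under virtualization. Compare the per-pair time on the physical box with the time inside the VM:

```c
/* Sketch: time 1,000,000 uncontended semaphore wait/release pairs. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE sem = CreateSemaphore(NULL, 1, 1, NULL);
    if (sem == NULL)
        return 1;

    const int iterations = 1000000;
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);

    for (int i = 0; i < iterations; ++i) {
        WaitForSingleObject(sem, INFINITE);   /* acquire (kernel object, no user-mode fast path) */
        ReleaseSemaphore(sem, 1, NULL);       /* release */
    }

    QueryPerformanceCounter(&end);
    double seconds = (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
    printf("%d wait/release pairs took %.3f s (%.0f ns per pair)\n",
           iterations, seconds, seconds * 1e9 / iterations);

    CloseHandle(sem);
    return 0;
}
```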


I am assuming that you are providing adequate resources to your virtual machines. The benefit of virtualization is consolidating five machines that each run at only 10-15% CPU/memory onto a single machine that runs at 50-75% CPU/memory, which still leaves you 25-50% headroom for those "spikes".

Personal anecdote: twenty machines were virtualized, but each of them used as much CPU as it could get. That caused problems when a single machine tried to use more compute than a single core could provide, so the hypervisor ended up spreading one virtual core across multiple physical cores, killing performance. Once we capped the CPU usage of each virtual machine to the maximum available from any one core, performance improved dramatically.
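If you want to test whether your own workload is sensitive to being confined to a single core (which is effectively what the hypervisor was doing in that anecdote), one quick experiment is to pin the process to one core and re-run your benchmark. A sketch under that assumption, not something from the original setup:

```c
/* Sketch: restrict the current process to CPU 0, then run the workload
 * and compare timings against an unrestricted run. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1)) {
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("now restricted to CPU 0; run the workload and compare timings\n");

    /* ... the real workload would go here ... */
    return 0;
}
```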

