Virtual Server Hardware Requirements

We decided to go with a virtualization solution for several of our development servers. I have an idea of what the hardware specifications would look like if we bought separate physical servers, but I don't know how to consolidate that information into the specification for a single virtualized server.

I know intuitively that the specifications are not simply additive: I can't just sum the RAM requirements of each machine to get the RAM needed for the virtual server. But I also can't treat them as fully parallel systems, because no matter how good the virtualization software is, it can't abstract away two guests trying to peg the CPU at the same time.

So my question is: is there a standard method for estimating the hardware requirements of a virtualized system, given the hardware requirements of the underlying virtual machines? Is there a constant +C of overhead for VMware / MS Virtual Server itself (and if so, what is C)?

P.S. I promise to move this to serverfault once it goes into beta (promise!)

+4
3 answers

Yes: add 25% extra resources for managing the virtual machines. So if I need four servers, each equivalent to a single-core 2 GHz machine with 2 GB of RAM, I need 10 GHz of computing power plus 10 GB of RAM (4 × 2 = 8, plus 25%). That lets every system run flat out and everything will be fine.

In the real world that will never happen; your servers won't all be running flat out all the time. You can get a feel for real usage by profiling your current servers to determine their actual requirements, then adding the extra 25% on top.
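
To make the arithmetic concrete, here is a minimal sketch of that rule of thumb (the per-server figures are hypothetical placeholders; substitute the numbers from your own profiling):

    # Rough consolidation estimate: sum each server's measured needs,
    # then add 25% headroom for hypervisor overhead and bursts.
    # The figures below are made-up examples -- use profiled values.
    servers = [
        {"name": "build", "cpu_ghz": 2.0, "ram_gb": 2.0},
        {"name": "ci",    "cpu_ghz": 2.0, "ram_gb": 2.0},
        {"name": "db",    "cpu_ghz": 2.0, "ram_gb": 2.0},
        {"name": "web",   "cpu_ghz": 2.0, "ram_gb": 2.0},
    ]

    OVERHEAD = 1.25  # the +25% rule of thumb from this answer

    total_cpu = sum(s["cpu_ghz"] for s in servers) * OVERHEAD
    total_ram = sum(s["ram_gb"] for s in servers) * OVERHEAD
    print(f"Host needs ~{total_cpu:.1f} GHz CPU and ~{total_ram:.1f} GB RAM")
    # -> Host needs ~10.0 GHz CPU and ~10.0 GB RAM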

For profiling, check out this tool: http://confluence.atlassian.com/display/JIRA/Profiling+Memory+and+CPU+usage+with+YourKit

+4

The requirements are actually additive. You should add up the memory requirements of each virtual machine and the disk requirements, and have at least one processor core per virtual machine. Then add whatever the host system itself needs. VMs can share a processor core to some extent if your performance requirements are very low, but they cannot share disk space or memory.
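
A minimal sketch of this additive model, assuming illustrative host-overhead figures (the reserve numbers below are assumptions, not vendor-published requirements):

    # Additive sizing: memory and disk sum across VMs; cores are
    # one per VM; then add the host OS / hypervisor's own needs.
    # Host overhead numbers below are illustrative assumptions.
    vms = [
        {"ram_gb": 2, "disk_gb": 40},
        {"ram_gb": 4, "disk_gb": 80},
        {"ram_gb": 2, "disk_gb": 40},
    ]

    HOST_RAM_GB = 2    # assumed reserve for the host OS / hypervisor
    HOST_DISK_GB = 20  # assumed host install + swap

    ram = sum(v["ram_gb"] for v in vms) + HOST_RAM_GB
    disk = sum(v["disk_gb"] for v in vms) + HOST_DISK_GB
    cores = len(vms) + 1  # one core per VM, plus one for the host

    print(f"{cores} cores, {ram} GB RAM, {disk} GB disk")
    # -> 4 cores, 10 GB RAM, 180 GB disk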

+3

The answers above run too high; the second one (1 core per VM) is closer. You can either 1) plan ahead, and probably over-provision, or 2) add capacity just in time. Do you have a reason you need to know up front (annual budget? your chosen host platform doesn't support host clustering, so you can't add later?)

Unless you have an incredibly simple usage profile, this will be hard to predict before you buy. The answer above (+25%) will be several times more than what modern server virtualization software (VMware, Xen, etc.) needs, since it manages resources dynamically; it's only accurate for desktop products like Virtual PC. I did the math on a napkin, then profiled my first environment (set of machines) on the host. I'm happy with the result.

Examples of things that will throw off your estimate:

  • Disk space: some systems (Lab Manager) store only each machine's delta from the base template. 10 deployed machines with 10 GB disks might use about 10 GB (the template) + 200 MB per machine; see the sketch after this list.
  • Disk space: but then you'll hit specific scenarios where you come to dislike deltas.
  • CPU / memory: this is a dev shop, so load will be bursty. Smart hosts don't hard-reserve memory and CPU.
  • CPU / memory: but then you'll want to do performance testing and will want guaranteed CPU cycles (not every host can do that).
  • We all virtualize for different reasons. Many guests in our shop have very little work to do; we just want to see how something behaves on a cluster of 3 servers of type X. Or we keep a bunch of odd client desktop configurations around, waiting for someone to use one as a test machine. They rarely consume many host resources.
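
To make the delta-disk math concrete, a napkin sketch (the ~200 MB per-machine delta is the illustrative figure from the first bullet above, not a measured value):

    # Delta-disk storage estimate (Lab Manager-style linked clones):
    # each deployed VM stores only its difference from the shared
    # base template, not a full copy of the disk.
    TEMPLATE_GB = 10.0      # full base template disk
    DELTA_GB_PER_VM = 0.2   # ~200 MB of changes per machine (assumed)
    NUM_VMS = 10

    full_clones = NUM_VMS * TEMPLATE_GB  # naive additive estimate
    linked_clones = TEMPLATE_GB + NUM_VMS * DELTA_GB_PER_VM

    print(f"Full clones:   {full_clones:.0f} GB")    # 100 GB
    print(f"Linked clones: {linked_clones:.0f} GB")  # 12 GB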

So: if you use something that doesn't do delta disks, disk space is reasonably computable. If you use Lab Manager (delta disks), disk space is genuinely hard to predict.

Memory and CPU usage: you'll mostly have to profile, or over-provision. I have far more guest CPUs than host CPUs and have had no problems with it, but that's down to the bursty usage in our QA environments.

+2
