The answers above are too high; the second one (1 core per VM) is closer. You can either 1) plan ahead and probably over-buy, or 2) add capacity over time. Is there a reason you need to know this up front (annual budget? your chosen host platform doesn't support host clustering, so you can't add hosts later?)
Unless you have an incredibly simple usage profile, it will be hard to predict demand before you buy. The answer above (+25%) would be several times more capacity than you need with modern server virtualization software (VMware, Xen, etc.), which manages resource sharing for you; it's only accurate for desktop products like VPC. I'd do the napkin math, then profile my first environment (set of machines) on the host and go from there.
Examples of things that will throw off your estimate:
- Disk space: some systems (e.g. Lab Manager) store only the delta from the base template, so 10 deployed machines with 10 GB disks might use only about 10 GB (the template) + 200 MB of deltas.
- Disk space: but then you'll run into specific scenarios where you don't want delta disks after all.
- CPU/memory: this is a dev shop, so load will be bursty and random. Smart hosts don't hard-reserve memory and CPU per guest.
- CPU/memory: but then you'll want to do performance testing and will want reserved CPU cycles (not every host can do that).
- We all virtualize for different reasons. Many of our guests don't do much work at all; we just want them around to see how something behaves on a cluster of 3 servers of type X, or we have a bunch of odd client desktops sitting idle until someone uses one of them for a test. They rarely consume many host resources.
So: if you use something that doesn't create delta disks, disk space is reasonably easy to compute; with Lab Manager (delta disks), disk space is really hard to predict (rough napkin math below).
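As a rough illustration of that napkin math (the numbers are hypothetical, reusing the ~200 MB per-VM delta figure above): a full-clone setup scales linearly with VM count, while a delta-disk setup mostly depends on how much each guest diverges from the template.

```python
def full_clone_gb(vm_count, disk_gb):
    # Every VM gets a complete copy of the virtual disk.
    return vm_count * disk_gb

def delta_disk_gb(vm_count, template_gb, avg_delta_gb):
    # One shared base template plus a per-VM delta of changed blocks.
    # avg_delta_gb is a guess -- it grows as guests write to disk.
    return template_gb + vm_count * avg_delta_gb

# 10 VMs deployed from a 10 GB template:
print(full_clone_gb(10, 10))        # 100 GB of storage
print(delta_disk_gb(10, 10, 0.2))   # ~12 GB today, but hard to predict later
```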
Memory and CPU usage: you'll most likely have to profile or over-provision. I have far more guest vCPUs than host CPUs and have had no problems with that, but only because of the bursty usage in our QA environments.
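A back-of-the-envelope way to sanity-check that kind of CPU overcommit, assuming you can estimate from profiling what fraction of guests are busy at once (all numbers here are made up):

```python
def cpu_overcommit_ok(guest_vcpus, host_cores, busy_fraction):
    # Estimate how many vCPUs are actually active at peak and compare
    # against the physical cores available. busy_fraction is a guess
    # from profiling (e.g. 0.2 = about 20% of guests busy at once).
    expected_busy = guest_vcpus * busy_fraction
    return expected_busy <= host_cores, expected_busy

# 40 guest vCPUs on a 16-core host, ~20% concurrently busy:
ok, busy = cpu_overcommit_ok(40, 16, 0.2)
print(ok, busy)  # True, 8.0 -- headroom despite a 2.5x vCPU overcommit
```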