VMware Server 1.x and 2.x are great for some quick and easy virtualization. Granted, they lack some options their big brothers have (live migration, for example), but overall they are fine. It is quite annoying, though, to find one of your virtual servers thrashing the IO on your host so badly that all systems (virtual and physical) grind to a halt.
Note that I am not talking about actual throughput caused by a heavily loaded system. I am talking about a few hundred kilobytes to a few megabytes of disk throughput that manage to clog up the host while the virtual system is not doing anything (no network activity, no CPU activity, no disk IO on the virtual side, and still the host gets thrashed).
It seems the thrashing can come from clashes between VMware's memory manager and the guest OS, and/or between the virtual drive controller and the guest disk driver. The result appears to be a looping 'optimization' that never ends and, instead of speeding the system up, grinds it to a halt.
The solution? Well, there isn’t a clear-cut solution to all the problems out there, but so far this one works for me. It tells VMware to allocate all the memory a guest could need up front (instead of allocating it when it’s needed) and disables memory paging completely (as everything fits in one go).
File /etc/vmware/config:

prefvmx.useRecommendedLockedMemSize = "TRUE"
prefvmx.minVmMemPct = "100"
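For convenience, the host-wide settings can be appended from a shell. This is a minimal sketch assuming a Linux host; run it as root, and check first that the keys are not already present in the file. The init script path is an assumption and may differ per distribution:

# Append the host-wide settings to the global VMware config.
cat >> /etc/vmware/config <<'EOF'
prefvmx.useRecommendedLockedMemSize = "TRUE"
prefvmx.minVmMemPct = "100"
EOF

# Restart the VMware services so the new settings take effect.
# (Assumed init script location; adjust for your distribution.)
/etc/init.d/vmware restart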
Guest “.vmx” file:
sched.mem.pshare.enable = "FALSE"
mainMem.useNamedFile = "FALSE"
MemTrimRate = "0"
MemAllowAutoScaleDown = "FALSE"
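The per-guest settings can be added the same way. A sketch, assuming the guest is powered off first and using an example path for the .vmx (substitute your own VM directory); if any of these keys already exist in the file, edit them in place rather than appending duplicates:

# Example path; point this at your own guest's .vmx file.
VMX="/var/lib/vmware/Virtual Machines/myguest/myguest.vmx"

# Append the per-VM memory settings while the guest is powered off.
cat >> "$VMX" <<'EOF'
sched.mem.pshare.enable = "FALSE"
mainMem.useNamedFile = "FALSE"
MemTrimRate = "0"
MemAllowAutoScaleDown = "FALSE"
EOF

Power the guest back on afterwards; the .vmx is read at power-on, so the changes only apply from the next boot of the virtual machine.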