A client recently purchased a 4-CPU Dell R900 rather than a 2-CPU Dell R710 to expand its VMware environment because the IT staff felt the bigger server would scale better. I hope it's not too late for them to return it.
The R710 may have only two CPUs, but they are the Intel Xeon 5570 CPUs of Dell's 11th-generation PowerEdge line. The revolutionary Intel 5500 series CPU (code-named Nehalem) was designed to optimize virtualization performance and includes Intel Virtualization Technology for Directed I/O, QuickPath Interconnect, integrated memory controllers, Hyper-Threading, hardware-based memory protection, Extended Page Tables, and much more.
Dell's Web site says that the "2-socket PowerEdge R710 achieved the best performance for virtualization, better than all other 2- and 4-socket servers." This claim is based upon VMmark scores, which show a 2-CPU (8-core) R710 handling slightly more VMs than a 4-CPU (24-core Xeon 7460) R900. (While the R900 was running ESX 3.5 rather than the vSphere used for the R710, its results were similar to those of the AMD-based 4-CPU, 16-core R905 running vSphere.)
Dell's server configuration tool shows a cost of $17,203 for a vSphere-ready R710 with 96GB of RAM versus a purchase price of $26,173 for a similarly configured R900 with four 6-core CPUs. While the R900 scales to 256GB vs. only 144GB for the R710, reaching 256GB adds $19,113 of cost, versus $966 to scale the R710 to 128GB. If 256GB is required, it makes more sense to simply purchase a second R710 (or Cisco UCS; see "Alternatives to Dell" below). Another advantage: the R710 uses only 2U of rack space vs. 4U for the R900.
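To put those numbers side by side, here is a minimal sketch in Python using the Dell configuration-tool quotes cited above; the cost-per-GB metric and the two-R710 scenario are my own illustration of the argument, not figures from Dell.

```python
# Rough hardware cost comparison using the 2009 Dell configuration-tool
# quotes cited above (treat them as point-in-time, illustrative figures).

r710_base = 17_203      # vSphere-ready R710 with 96GB of RAM
r710_to_128gb = 966     # incremental cost to scale the R710 to 128GB
r900_base = 26_173      # similarly configured R900, four 6-core CPUs
r900_to_256gb = 19_113  # incremental cost to scale the R900 to 256GB

r710_128 = r710_base + r710_to_128gb
r900_256 = r900_base + r900_to_256gb
print(f"R710 @ 128GB: ${r710_128:,}  (~{r710_128 / 128:,.0f} $/GB)")
print(f"R900 @ 256GB: ${r900_256:,}  (~{r900_256 / 256:,.0f} $/GB)")

# If 256GB is truly needed, two R710s deliver it for less money
# and occupy the same 4U of rack space as a single R900.
two_r710s = 2 * r710_128
print(f"Two R710s @ 128GB each: ${two_r710s:,} for 256GB in 4U")
```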
The purchase cost disparity, advanced capabilities, and reduced rack space requirements make the R710 the obvious choice as a virtualization host. But using only two CPUs provides further software licensing benefits. Here is a recap comparing three-year licensing costs, assuming the host is running both VMware vSphere Enterprise Plus and Cisco Nexus 1000V along with both Microsoft Windows Server Datacenter Edition and SQL Server Enterprise Edition. The Microsoft products are both licensed according to the physical host CPUs and allow an unlimited number of virtualized instances of the software to run on the host.
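The per-socket math behind that recap is easy to sketch. The example below assumes approximate 2009 per-CPU list prices and a flat annual support rate purely as placeholders; they are not the figures from the original comparison, but they show why halving the socket count roughly halves the licensing bill when every product is licensed per physical CPU.

```python
# Sketch of the per-socket licensing math. All prices and the support
# rate are PLACEHOLDER assumptions, not the figures from the recap above.

PER_SOCKET_LIST = {                      # hypothetical $ per CPU socket
    "VMware vSphere Enterprise Plus": 3_495,
    "Cisco Nexus 1000V": 695,
    "Windows Server Datacenter": 2_999,
    "SQL Server Enterprise": 24_999,
}
ANNUAL_SUPPORT_RATE = 0.20               # assumed support/maintenance per year
YEARS = 3

def three_year_cost(sockets: int) -> float:
    """Three-year license + support cost for one host, licensed per socket."""
    per_socket = sum(PER_SOCKET_LIST.values())
    return sockets * per_socket * (1 + ANNUAL_SUPPORT_RATE * YEARS)

for name, sockets in (("R710 (2 sockets)", 2), ("R900 (4 sockets)", 4)):
    print(f"{name}: ${three_year_cost(sockets):,.0f} over {YEARS} years")
```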
Alternatives to Dell
HP and IBM also offer Intel 5500-based servers. If the virtualization environment is sizable, however, we recommend evaluating Cisco UCS as an optimized virtual infrastructure hosting platform. UCS's superior performance and unified fabric capabilities can further reduce licensing expenses along with rack space, power, and cabling costs, and its integrated management capabilities help drive complexity out of operating the environment.
Just had the same scenario a couple of months back. 2x boxen are so much less expensive, especially in a blade environment. The only caveat with these 2x quads is big workloads.
Also had another customer whose previous virt. 'VAR' recommended HP BL680s for *slots*, only put in 2 procs, and filled them with 2GB DIMMs. So they can't be upgraded without pitching the RAM, and they use double the chassis space. What a waste of real estate and money.
Posted by: Jae Ellers | August 25, 2009 at 01:01 AM
Hey, so I looked it up: the enterprise licenses allow for 12-core CPUs (vs. 6-core) http://www.vmware.com/download/eula/multicore.html
Now if VMware could just figure out how to manage the licensing. Upgrading 3.5 to vSphere licenses has been a nightmare of surprise lapsed support, confusion, and delays.
Every week VMware finds a few more licenses we are owed, and they randomly throw us other departments' license codes and give ours to other depts...
Posted by: www.facebook.com/profile.php?id=658313066 | August 25, 2009 at 10:38 PM
Why Dell R710 vs Dell R610 in the analysis, I wonder? You can get the same CPUs and memory in a 1U footprint, without as much local disk, which many virtualization users don't really use anyway.
Posted by: Hans Jacobsen | August 25, 2009 at 11:11 PM
Hans, what model, other than blades, uses the 5500 series in 1U?
Posted by: Steve Kaplan (@ROIdude) | August 27, 2009 at 08:58 AM
Hi Fletcher, please let Lori or me know if we can be of any assistance.
Posted by: Steve Kaplan (@ROIdude) | August 27, 2009 at 08:59 AM
I don't know much about virtual networking, but I am pretty sure that whatever they have done, they have done for a reason.
Posted by: cheap computers | September 10, 2009 at 12:36 AM
Comparing 4.0 with 3.5 benchmarks is definitely misleading, as these are very different VMware builds.
Furthermore, even if the R710s provided better performance per VM, the assertion of higher density here is further misleading. Consider that as your density increases, so will the need for more sockets to support higher redundancy, scalability, and fault tolerance. We use 12 NICs and 2 dual-channel ports on our ESX servers, with room for expansion.
The R710s cannot compare to the expandability of the R900s in the enterprise scalability department.
Posted by: Aubrey Williams | September 13, 2009 at 02:11 AM
Aubrey, thanks for your comments. I had a similar concern about vSphere, which is why I also linked to the VMmark comparison with the 4-CPU R905. Its results were very similar to those of the 4-CPU R900 running ESX 3.5, which indicates to me that the density factor isn't significant. This was further validated during a presentation by Ron Oglesby (Dell Sr. Virtualization Consultant) at Virtualization Congress 2009, in which he showed a Dell chart listing the 2-CPU R710 as handling more VMs than the 4-CPU R900. Regarding your comment on the 12 NICs and 2 dual-channel ports on your ESX servers: wow. My suggestion, as discussed in the article, would be to evaluate Cisco's UCS. This should give you the ports you require while still enabling savings in Microsoft licensing, cabling, etc.
Posted by: Steve Kaplan (@ROIdude) | September 13, 2009 at 08:36 AM
You may have missed the point when it comes to the server management issue. Imagine you need to patch 30 ESX hosts when the alternative would only require patching 15. I would say deciding which model to go with is best done on a case-by-case basis. Enterprise environments may have different considerations, since the 6 cores are physical cores, whereas the Intel 5500 series leans more on logical cores. Performance means more when it comes to real-life experience. Just my 2 cents here.
Posted by: craig | September 29, 2009 at 11:34 PM
Craig, I don't understand your comment about server management. UCS, because of its increased memory and faster I/O capabilities, as well as its exclusive use of Intel Nehalem processors, has as high a density factor as, and more likely a higher one than, any other server platform.
Posted by: Steve Kaplan (@ROIdude) | December 24, 2009 at 09:21 AM