
July 16, 2010



Andrew VanSpronsen


Can't figure out why you chose to compare 24 blades when 32 would be a better comparison, as it divides evenly by both the HP and Cisco blades-per-chassis counts: 4 Cisco UCS chassis vs. 2 HP chassis.

Stopped reading there, as it seems like you are trying to dupe the reader. Very excited about UCS, though; currently involved in an eval ourselves. Matrix seems like a hurried, ugly response to UCS.

Steve Kaplan (@ROIdude)


I used 24 blades because I recently went through an ROI calculation for a customer with 26 blades, and 24 was convenient. But, I redid the calculations using 32 blades as you suggest. As you can see, it resulted in a still more advantageous price delta for UCS which generally scales more efficiently. Thank you for catching this, I appreciate it.

Patrick Thomas

Nice article

I recently started using UCS after being a long-time user of 15x c7000 chassis.

The one item missing in your price comparison that made a huge difference to our TCO was the aggregation-layer switches.

CoreSW - distributionSW - accessSW - servers

UCS basically collapses the distribution and access layers into the 6120/6140

UCS: 14 chassis = 112 servers
c7000: 7 chassis = 112 servers

Uplink out of the environment
UCS needs a minimum of 4 ports (1 port channel to each core)
c7000 needs a minimum of 28 ports (1 port channel from each chassis Ethernet switch to each core)
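Patrick's port totals can be reproduced with a quick back-of-the-envelope sketch. Only the totals (4 and 28 ports) come from the comment above; the per-channel link counts in this sketch are my assumption to make the arithmetic work:

```python
# Sketch of the minimum uplink-port arithmetic for 112 servers.
# Assumed: 2 core switches and 2-link port channels (these specifics
# are my guesses; only the 4-port and 28-port totals are Patrick's).

def ucs_uplinks(cores=2, links_per_channel=2):
    # UCS: the fabric interconnects aggregate every chassis, so only
    # the interconnect-to-core port channels leave the environment.
    return cores * links_per_channel

def c7000_uplinks(chassis=7, cores=2, links_per_channel=2):
    # c7000: each chassis has its own Ethernet switching, so each
    # chassis needs its own port channel to each core.
    return chassis * cores * links_per_channel

print(ucs_uplinks())    # 4 ports total
print(c7000_uplinks())  # 28 ports total
```

The point of the sketch is that the c7000 uplink count grows linearly with chassis count, while the UCS count stays fixed at the fabric-interconnect tier.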


It has been said in a previous post of the series, but again: you shouldn't be comparing Cisco UCS with Matrix. They are not equivalent; you should be comparing UCS to BladeSystem, or vBlocks to Matrix.

I am also missing a detailed comparison of bandwidth; I have the feeling that leaving just 8 ports (when maxing out the chassis count) for all the servers amounts to little to no bandwidth for the VMs.

On the other hand, having the management solution inside the hardware elements leaves you with a hard limit on the maximum number of chassis that can be managed. In the best case, you said it can handle up to 14 chassis (eventually 40)... this doesn't seem like "enterprise scalability". AND in that case, you would leave just one port for each chassis, and therefore no usable BW for the servers, much less the VMs.

And if you compared BladeSystem with UCS, you would realize there are far more issues to keep in mind when buying servers. I find the UCS offering a bit limited in server choice and connectivity: probably enough for a number of customers, but definitely not a complete portfolio.

Quite an interesting read, but you should be delivering a less biased article. I would be really interested in your point of view in a detailed comparison of Matrix & vBlocks.


Great post. I am just wondering about the vDS support in HP. I am not aware of any limitation that prevents vDS from being supported with VC/Flex-10. I have been trained and certified in both UCS and HP C-Class blades. I still (personally) lean toward preferring HP, but UCS is also a great fit for many customers. Cisco is certainly giving HP a run for its money.

(Disclaimer: I also work for a Cisco partner, but we are also an HP partner.)

Steve Kaplan (@ROIdude)

Thanks for your comments. I would welcome the opportunity to learn more about the TCO calculations you derived, if you are willing. Please email me: steve.kaplan at inxi.com. Thanks.

Steve Kaplan (@ROIdude)

I think I explained pretty well why I decided to go with the UCS vs. Matrix comparison. In fact, an article on Thursday in searchdatacenter.com leads off with a UCS vs. Matrix comparison. Unless the Matrix picks up momentum, I do not know whether I'll be doing any further comparisons, with UCS or other solutions.

Steve Kaplan (@ROIdude)

Thanks for the compliment. As both a Cisco & HP partner, I particularly value your perspective. When researching the article, I spoke with a couple of other partner friends in the same category. I confess I do not know about vDS port group support in either the VC/Flex-10 or Matrix configurations, though you may be able to find the latter in the compatibility guide I linked to in the article.


What about power consumption, footprint in the datacenter, and I/O throughput per chassis? Maybe you could consider putting these into your comparison as well.

Steve Kaplan (@ROIdude)

I did mention in the article that the Matrix, due to increased power requirements, can have only 2 enclosures per rack vs. 5 chassis for UCS (though it can accommodate twice as many servers per enclosure). As far as actual power consumption goes, it would be interesting to compare the two, although I honestly am not quite sure how to go about that. I will see if I can come up with something. In any case, I don't expect the difference to be material one way or the other.


Here's a recent 3rd party power comparison between Cisco UCS and HP BladeSystem:

It was sponsored by Cisco, but it also provides all the details of the power comparison. HP is free to point out the mistakes, if any. So far, hearing just crickets from HP on the report is telling...



Steve, I believe the momentum depends on the country; here in Spain, Matrix is doing well. But you are quite right, the hype around Cisco's products is always higher.

Of course I am not demanding that you write an apples-to-apples report, but as I mentioned, I feel you left out several technical points that should be in this write-up:

· Total bandwidth when maxing out the management switches (this is 40 chassis, I believe, with just 2x 10Gb FCoE links for each chassis of 8 half-height servers). Of that, you need 4Gbps for storage traffic, leaving you 6Gb of Ethernet for 8 servers. How much BW is there for the VMs then?

· If you would like to increase this BW, then you can use up to 8x 10Gb FCoE links, thus limiting chassis scalability. (I know you can use an external manager to manage several UCS 6140 switches from a single pane of glass, but this was mentioned as a weakness in HP's solution.)

· Full-height servers with the Catalina memory adapter are not balanced servers. Taking into account that you have very limited I/O bandwidth, you shouldn't be delivering so much memory for VMs... you are just deepening the I/O bottleneck while adding components to the server that weren't on Intel's mind when they designed the memory controller.

It is true that HP can/may use a higher number of links for each chassis, but this is for the greater good. There is a good point with the Palo adapter, where you can "partition" (I know this is not the right word; please excuse my limited English) up to 58 times. This is really good for assigning ports to VMs directly, but what about when you need more BW (again!)? With HP you can use another pair of Virtual Connects and a 2-port Flex-10 mezzanine, and there you go: 20 more Gb of Ethernet available. This leaves HP's half-height servers with up to 40Gb of Ethernet while maintaining 2x 8Gbps FC ports.

Maybe I am wrong about these numbers, but I really feel you should be taking them into account when talking about enterprise solutions, especially when you talk about scalability. Please correct me if I am wrong on any of those points.


Steve Kaplan (@ROIdude)


Thanks for your comment and question.

UCS is designed to deliver a robust computing environment that is simple to construct and engineer for application requirements. It provides a simple range of per-server bandwidth choices – 2.5Gb, 5Gb, and 10Gb within each chassis. Applications within a UCS cluster can be vMotioned to the appropriate chassis with the bandwidth required for the application. With web servers, add a chassis with just two links. With DB applications that might require 8Gb of sustained bandwidth, add a chassis with 4 links.
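Those per-server figures fall out of simple division; a minimal sketch, assuming 8 half-height blades per chassis and 10Gb fabric links (my reading of the figures above, not Cisco's published math):

```python
# Per-blade bandwidth for a UCS chassis, assuming 8 half-height
# blades evenly sharing 2, 4, or 8 fabric links of 10Gb each.
LINK_GBPS = 10
BLADES_PER_CHASSIS = 8

def per_blade_gbps(links):
    # Chassis bandwidth divided evenly across the blades.
    return links * LINK_GBPS / BLADES_PER_CHASSIS

for links in (2, 4, 8):
    print(f"{links} links -> {per_blade_gbps(links)} Gb per blade")
```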

As I mentioned, UCS will eventually scale to a maximum of 40 chassis with up to 1Tb of traffic with the currently shipping Interconnect. Part of UCS sizing is to plan around how many applications require what type of bandwidth and map accordingly. As the fabric becomes more capable, the investment in UCS hardware is maintained, since software updates accommodate these changes. Just add the appropriate links and vMotion VMs to the chassis with the appropriate bandwidth.

It appears that what you're really asking is how to scale to the maximum number of servers while accommodating rapid changes in bandwidth demand. As you note, the UCS Palo adapter enables dynamic distribution of bandwidth across the UCS environment as application demands change, enhanced with policy-driven tools such as the Nexus 1000V and UCSM service profiles. Few applications require more than 1Gb of bandwidth – those that do, especially at a sustained level, are quite rare. With the Palo, UCS can segment and allocate traffic to those apps needing bandwidth at a much more granular and efficient level than with Virtual Connect.

You criticize the Catalina ASIC because, for this first generation, Cisco did surrender a bandwidth step (going from 1333MHz memory to 1066MHz), but if using all 18 memory slots available on a Matrix system, you have to go from 1333MHz all the way down to 800MHz. And on the new UCS M2 full-width blades, all memory can now run at a maximum speed of 1333MHz. Here is a good article from last April by Pete Karelis on the benefits of Cisco's Catalina chip: http://www.goarticles.com/cgi-bin/showa.cgi?C=2753596

The discussion of memory bandwidth is a bit of a red herring in any case, since it only touches a very few high-end HPC types of applications – those requiring sustained high memory transfers for hours of compute cycles at a time. Average applications using Nehalem CPUs on the desktop show little impact on application performance from lower memory bandwidth. This review provides a good explanation: http://www.tomshardware.com/reviews/memory-scaling-i7,2325-11.html I also like the explanatory comment by Joe Onisick in response to this Blades Made Simple post: http://bladesmadesimple.com/2010/05/dell-flexmem-bridge-helps-save-50-on-virtualization-licensing/

Joe Onisick


It really seems as though you're missing the point of the architecture. Your comments argue against the plausibility of the maximum architectural limits within UCS. For the sake of argument, let's say you are totally correct:

1) 40 chassis would not provide enough bandwidth at 20Gbps shared across 8 blades.
2) 40Gbps of onboard I/O for a B250 blade would not be enough I/O to support utilizing 384GB of memory.

So with that, let's envision a UCS system using 10 chassis, each attached with the maximum of 4 links for 80Gbps of bandwidth. Let's place 4 B250 blades in each chassis, with 192GB of memory and 2x VIC/Palo cards each.

In this configuration I will have 40Gbps to each blade that I can granularly tune per application or VM. This provides I/O flexibility not found in any other architecture. Rather than having 20-30Mbps utilization (the industry average) on my redundant 4Gbps dedicated FC HBAs, I now have that bandwidth shared across LAN and SAN on the same pipe. During the day, when LAN traffic is heavy, it utilizes the necessary bandwidth; at night, the FCoE-based backup kicks in and has access to enough bandwidth to shrink my backup window.

Additionally, I have typically saved about 50% on memory costs by using less costly 4GB DIMMs to reach 192GB. This holds across most major manufacturers, except the one that lowered its memory pricing below its own costs (losing money on memory) in order to combat this Cisco advantage.

Additionally, without a single additional license or mandatory service hour, the system described above has one single point of management for the compute environment, all the way to the VMware networking. That means 40 blades managed under one system without anything else involved. If I'm running VMware at a very reasonable 25:1 virtualization ratio, that means I have 2-3 enterprise racks and 1 point of management running 1,000 VMs, all well within very reasonable CPU, memory, and I/O constraints.

The maximums of any system will be questionable for many workloads but have benefits in corner cases. The real value of an architecture is its flexibility: how much can I tailor it to the specific application requirements?

384GB has its use case in high-memory requirements and possibly databases, whereas 40 chassis at 20Gbps may be more than enough for web clusters or hosted environments.
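The arithmetic in this example hangs together; a quick sketch (the dual-fabric split of 4 links to each of two fabric interconnects is my assumption to match the stated 80Gbps per chassis):

```python
# Sketch of the example UCS configuration above: 10 chassis,
# 4 B250 blades each, 2x VIC/Palo cards per blade.
LINK_GBPS = 10

chassis = 10
links_per_chassis = 8        # assumed: 4 links to each of two fabrics
blades_per_chassis = 4       # full-width B250 blades
vic_ports_per_blade = 4      # 2x VIC/Palo cards, 2x 10Gb ports each

chassis_bw = links_per_chassis * LINK_GBPS    # 80 Gbps into each chassis
blade_io = vic_ports_per_blade * LINK_GBPS    # 40 Gbps available at each blade
total_blades = chassis * blades_per_chassis   # 40 blades under one manager
vms = total_blades * 25                       # 1000 VMs at a 25:1 ratio

print(chassis_bw, blade_io, total_blades, vms)
```

Note that 40Gbps at the blade against 80Gbps into the chassis means blades can burst to their full I/O but not all sustain it simultaneously, which is exactly the oversubscription-with-granular-tuning argument being made.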


Ken O

Steve -

Great analysis. You should also know that Forrester Research (James Staten) did a somewhat similar comparison, but also included offerings from Dell, Egenera, and IBM.

While you pointed it out in your analysis, I must emphasize that when doing a total-cost calculation, the Cisco UCS doesn't come complete with the management SW you'd typically need for an enterprise data center. So there's additional cost and integration there that's already included in the Matrix (Jose alluded to this).

With full disclosure, I work for Egenera -- and our architecture is very similar to that of UCS, except we use standard Ethernet and I/O. Plus, our SW already includes SW provisioning, HA, and DR services, so there is no additional mgmt SW to purchase :)

The more "real world" analyses we do, the better. It doesn't serve any pragmatic purposes simply to compare speeds-n-feeds.

Steve Kaplan (@ROIdude)


Thanks for your comment. I will be contacting you soon and hope to learn more about your views on Egenera, UCS, and converged infrastructure in general.


(Disclosure NetApp Employee)

I noticed that the V2P portion says "only with certain storage vendors"; I think it would be more accurate to say "only with NetApp or IBM N-Series" - see


For more details.

John Martin
Principal Technologist
NetApp ANZ

Steve Kaplan (@ROIdude)


Thanks for your email. You might have noted that the NetApp post to which you link starts off by referencing me; I am well aware of NetApp's capabilities. The reason I wrote "with specific storage vendors" is that I fairly recently read a post by EMC saying that their SANs could now do this as well (I believe it was by Chuck Hollis). Please let me know if you think I misinterpreted this claim.

Anthony Skipper

It is worth noting that the power consumption comparisons performed by "Principled Technologies" were done with 4 PDUs in the Cisco UCS and 6 PDUs in the HP gear. One of the ways you can tell someone doesn't know what they are doing is when they have 6 PDUs in an HP chassis.

Shahin Ansari

I enjoyed, and learned a lot from, reading these blogs. There were a number of comments regarding which HP solution should be compared with UCS. For me, what matters is feature functionality and effectiveness. I would really appreciate it if you could point me to a comparison of HP's other solutions with UCS.

HP Mambo Jumbo

I would like to share my personal experience with the Matrix. I used to work for a company in the UK which had the Matrix deployed in its data center. Because the Matrix has to be deployed by authorized personnel, HP sent a couple of engineers to install it for us. We had to go through the aforementioned 2-week deployment just to get the thing up and running, and we ended up with a generic deployment and configuration. The solution is not at all fit for the purpose it was marketed and sold to us for - public cloud. It has been almost a year now, and the Matrix is just sitting there collecting dust while the company struggles to find clients for it. The Matrix in our case consists of 2 racks, including storage, and the cabling is just appalling: a bunch of copper and fiber cables running inside and between the racks. Again, this is first-hand experience and has nothing to do with all the technical mambo-jumbo. I have seen several production deployments of UCS now, and they don't look anything like the Matrix: installed in a single rack with just a few neatly bundled cables, installed and managed by the clients themselves.

Steve Kaplan (@ROIdude)

HP Mambo Jumbo,

I have heard of some success stories, but also (albeit 2nd-hand, other than yours) of situations such as the one you've encountered. I am still very curious as to how HP counts thousands of Matrix customers, as I wrote about on March 17, 2011: http://www.bythebell.com/2011/03/whats-behind-the-surge-in-hp-matrix-customers.html
