Mar 06 2012

M3 goodies – a new UCS B200-M3 blade

The UCS B200-M3 blade is the third-generation blade for Cisco, built around the Intel E5-2600 processor. It is a half-width blade that supports dual E5-2600 processors and 24 DIMM slots, for a whopping 768GB of memory.

B200 M3

New to the B200-M3 are an internal USB port and dual Flash Card slots. The 24 DIMM slots deliver 768GB of memory when populated with 32GB DIMMs (which will not be available day one). Also new for the B200 is a modular LOM (LAN on Motherboard), which provides the network IO for the blade. There is also room for an extra mezzanine card that can be used for IO today or for other purposes in the future.
While technically an optional component, I expect that all but a few customers will order the blades with the mLOM. The mLOM that ships day one is the Cisco VIC1240.

The VIC1240 mLOM has eight 10Gb interfaces that can be used by the blade: four go to the A-side fabric and the other four to the B-side fabric (using all eight also requires a special mezzanine card called the “Port-Expander” on the blade). The actual number of available/active 10Gb interfaces depends on the IO Module used in the chassis. With 2208XP IO Modules in the chassis, all eight 10Gb interfaces are available to the blade.
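
To make that concrete, here is a back-of-the-envelope sketch (plain Python, not a Cisco tool) of how the active interface count falls out of the IOM model and the Port-Expander. The per-IOM lane counts are taken from this post and the 2104XP/2204XP numbers discussed in the next post below.

```python
# Back-of-the-envelope sketch: active 10Gb interfaces on a B200-M3 with
# the VIC1240 mLOM. The VIC1240 natively enables two lanes per fabric
# side; the "Port-Expander" mezzanine enables the remaining two per side.
# The IOM model caps how many 10G-KR lanes per half-width slot are wired.

IOM_LANES_PER_SLOT = {"2104XP": 1, "2204XP": 2, "2208XP": 4}

def active_10g_interfaces(iom: str, port_expander: bool) -> int:
    vic_lanes_per_fabric = 4 if port_expander else 2
    per_fabric = min(vic_lanes_per_fabric, IOM_LANES_PER_SLOT[iom])
    return 2 * per_fabric  # fabric A + fabric B

print(active_10g_interfaces("2208XP", port_expander=True))   # 8 (all lanes)
print(active_10g_interfaces("2204XP", port_expander=False))  # 4
```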

For those customers who require redundancy of the mezzanine/LOM itself, an extra VIC1280 mezzanine card can be installed in the B200-M3 to provide IO connectivity in addition to what the VIC1240 mLOM provides. While this does not increase the number of 10Gb interfaces (it stays at eight for the blade), it does deliver those eight 10Gb interfaces through two physically separate cards (one mLOM and one mezzanine). If that level of redundancy is important to you, it is now available on the half-width B200-M3 blades.

The details of all of the B200-M3's network connectivity will be covered in a later post, including all of the options for how bandwidth is delivered to a blade. It requires more detail than intended for this post.

Now before you flame me and say “those 8 10Gb interfaces are going to require a lot of expensive infrastructure components”, let me cover that.

Remember the UCS architecture: each chassis has two separate IO Modules (fabric A and fabric B), and that is all a chassis needs. There is no room, nor need, to plug in 4/6/8 expensive IO modules just to deliver extra bandwidth. The UCS chassis was developed with a midplane that already has all of the copper traces to support four 10G-KR lanes from each IO Module to each half-width slot. The current shipping 2208XP IO Module makes use of all of those 10G-KR lanes. If you have been deploying the 2208XP IO Module, you can now take advantage of that extra bandwidth simply by using the B200-M3 with the VIC1240 mLOM.
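
As a sanity check on what that midplane can carry, the arithmetic is simple. A quick sketch using the lane counts from the paragraph above (eight half-width slots assumed per chassis):

```python
# Midplane capacity per the lane counts above: four 10G-KR lanes from
# each of the two IOMs to each of the eight half-width slots.
slots, lanes_per_iom_per_slot, gb_per_lane, ioms = 8, 4, 10, 2

per_blade = lanes_per_iom_per_slot * gb_per_lane * ioms  # 80Gb per slot
chassis = per_blade * slots                              # 640Gb per chassis

print(f"{per_blade}Gb per half-width slot, {chassis}Gb per chassis")
```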

Mar 04 2012

New Cisco UCS Fabric and Management products

This is an exciting week for those who follow the Cisco UCS platform, with announcements around the Fabric Infrastructure, the Compute, and the Unified Management. While Intel's E5 “Romley” processor launch this week is sure to grab a lot of attention, the announcements around Cisco UCS Fabric Infrastructure and UCS Management might have an even bigger impact on you.

I’m going to start with the fabric hardware announcements and save the really cool UCS-Manager announcements for last.

Announcement

The new 6296UP completes the 6200 family of Fabric Interconnects (announced last year at Cisco Live in Las Vegas) and brings the port density to a level that allows customers to build UCS Domains containing 160 servers (rack or blade) with simultaneously high bandwidth. You might recall that in the previous 6100 series of Fabric Interconnects there was a trade-off between blade density and bandwidth. That is now a thing of the past.

The new 2204XP IO Module is, in my view, going to be the de-facto replacement for the current 2104XP IO Module. It brings all of the feature benefits of the 2208XP, but at a price point close to that of the existing 2104XP. With 4 * 10Gb uplinks, it provides the same bandwidth as the current 2104XP. However, in combination with any of the 6200UP series Fabric Interconnects, that 40Gb of bandwidth is now delivered in a port-channel. Bandwidth to the individual blades is doubled, from 1*10Gb on the 2104XP to 2*10Gb on the 2204XP.
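
To see the three IOM generations side by side, here is a small sketch built only on the numbers given in this post (an illustration, not a datasheet):

```python
# IOM comparison, per the numbers in this post.
ioms = {
    "2104XP": dict(uplinks=4, blade_links=1, port_channel=False),
    "2204XP": dict(uplinks=4, blade_links=2, port_channel=True),
    "2208XP": dict(uplinks=8, blade_links=4, port_channel=True),
}

for name, i in ioms.items():
    print(f"{name}: {i['uplinks'] * 10}Gb uplink bandwidth"
          f" ({'port-channeled' if i['port_channel'] else 'statically pinned'}),"
          f" {i['blade_links'] * 10}Gb per blade per fabric")
```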

The new VIC1240 introduces a form factor not seen before on the UCS blades: it is a modular LOM specifically designed for the new M3 blades that were announced this week. Functionally the VIC1240 is the same as the VIC1280; they share the same ASICs, and both provide 2*40Gb to the blade (each 40Gb consists of 4 * 10Gb lanes that are automatically port-channeled when the chassis contains one of the 2200XP series IO Modules).

Here is the complete family overview.

UCS family

A bit more detail around the 6296UP.

One of the two key functionalities is Unified Ports. Just like its smaller brother the 6248UP and the Nexus 5548/5596, the new 6296UP Fabric Interconnect has Unified Ports enabled on all ports.
UCS Components
Every port is capable of being an Ethernet port (1/10Gb), an FCoE port, or a native FC port (1/2/4/8G) simply by providing the right optic. Selecting the role of a port is a software setting that requires a reboot of the Fabric Interconnect (or of the expansion module, if you are only changing ports on an expansion module), so it pays to think up-front about how you want to divide the ports on the Fabric Interconnect. The advice is to start with Ethernet from port one and work up, and start with FC from the highest port and work down.
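
Since changing a port's role costs a reboot, it pays to plan the split once. Here is a toy planner (my own sketch, not part of UCSM) that follows the "Ethernet from the bottom, FC from the top" advice:

```python
# Toy planner for Unified Ports on a 6296UP (96 ports): Ethernet grows
# up from port 1, FC grows down from the highest port.

def plan_unified_ports(total_ports: int, eth_needed: int, fc_needed: int):
    if eth_needed + fc_needed > total_ports:
        raise ValueError("more port roles requested than physical ports")
    eth = list(range(1, eth_needed + 1))                         # 1..eth_needed
    fc = list(range(total_ports, total_ports - fc_needed, -1))   # top down
    return {"ethernet": eth, "fc": sorted(fc)}

plan = plan_unified_ports(96, eth_needed=80, fc_needed=8)
print(plan["ethernet"][:3], "...", plan["fc"])  # [1, 2, 3] ... [89..96]
```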

The second key functionality is the port-channel capability to the 2200XP series of IO Modules. In the first-generation Fabric Interconnects and IO Modules, the only option was static pinning. Static pinning gives predictability in terms of the bandwidth blades have available during failure conditions, but customers asked for more flexibility. The 6200UP, when connected to a 2200XP IOM, supports a single port-channel to the IO Module.
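
The difference between the two modes is easy to see in a simplified model (my own sketch; this is not the actual UCS pinning or hashing logic):

```python
# Simplified model of static pinning versus a port-channel.

def static_pinning(blade_slot: int, uplinks_up: list[int], uplinks_total: int) -> int:
    pin = blade_slot % uplinks_total        # blade is nailed to one uplink
    return 10 if pin in uplinks_up else 0   # uplink down -> no path until re-pinned

def port_channel(uplinks_up: list[int]) -> int:
    return 10 * len(uplinks_up)             # flows hashed across surviving members

# One of four uplinks fails (uplink 2 is down):
up = [0, 1, 3]
print(static_pinning(2, up, 4))  # 0  -> this blade's pinned uplink is gone
print(port_channel(up))          # 30 -> all blades share the remaining 30Gb
```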

A bit more detail around the 2204XP IO Module

Latency is reduced not just in the IO Module but also in the 6200UP Fabric Interconnects. For those customers who felt that latency was a potential problem, the new platform delivers low, consistent latency across 160 servers. When latency matters, the consistency of latency is usually even more relevant than the absolute latency – with VMs you never know where a VM might move to. Blade-to-blade latency inside UCS is a consistent 3us (0.5us for the ingress IOM, 2us for the FI, 0.5us for the egress IOM).

The 2204XP provides 16 10Gb ports to the 8 blades inside a chassis, delivering up to 20Gb of bandwidth from a single IOM to a blade. No production system should be running on a single fabric, meaning a total of 40Gb of bandwidth can be delivered to a single half-width blade with the 2204XP.

The actual details behind how that works are a topic for another blog post, as it will introduce you to networking on the new M3 blades as well as the current M2 blades.

UCS-Manager

The UCS Manager software received an upgrade to version 2.0.2. This might sound like a minor release, but it actually delivers hardware support for all of the new components discussed earlier, as well as support for the new M3 blades and the Nexus 2232.

I can hear you ask “What does a Nexus 2232 have to do with UCS?”

C series

In version 1.4 Cisco introduced the concept of adding a specific UCS rack server to the UCS Domain, managed by UCSM. The implementation at that time left room for improvement – it didn’t scale and it wasn’t consistent between blades and rack servers.
UCSM 2.0.2 and the Nexus 2232 change that. Now a single UCS Domain can contain 160 servers (UCS blades or UCS rack servers). All of the benefits UCSM provides to blades now become available for UCS rack servers. And in the process, the list of supported UCS C-Series servers grows to the full current shipping family plus the newly announced M3 series of rack servers.

I can still hear you ask “Ok, but what does that have to do with a Nexus 2232?”.

The Nexus 2232 is a Fabric Extender and is part of the Nexus 2200 series of Fabric Extenders. The IO Modules that go into the back of a UCS chassis are Fabric Extenders as well, and they too are part of the Nexus 2200 series – hence the product numbers 2204XP and 2208XP.

All the blades in a UCS chassis connect to the IO Modules, with both their data interfaces and their management interfaces. That all happens through the midplane in the chassis, which means you don’t see those connections.
The Nexus 2232 brings the same architecture to the UCS rack servers. The servers connect both a data and a management interface to the Nexus 2232, which itself is connected to the 6100 or 6200 series Fabric Interconnects (just like any IO Module is).

This is what allows a single UCS Domain to now contain 160 UCS rack servers, and you can create some exciting solutions with that – stay tuned.

Nov 29 2011

Truth or Dare… Power Calculators

Truth or Dare… didn’t we all play that game in our younger years, living with the consequences of a dare if you got it wrong? In the Data Center there are a few things you don’t want to get wrong, and one of them is power (outages).

Power

Vendors put a lot of effort into designing redundancy and high availability into their products, protocols, and architectures. Customers spend a lot of their valuable money building redundant, highly available infrastructure. All of that means nothing if the infrastructure goes down due to incorrect power sizing.

That is why Power Calculators exist (HP calls theirs a Power Advisor; I’ll stick to the more generic term Power Calculator for this blog). They are a valuable tool for ensuring power sizing is done in such a way that, under worst-case conditions, the PDU’s circuit breakers won’t trip. Of course, some customers will choose to run a level of risk acceptable to them by assuming not all servers will require maximum power at the same time. However, even for those customers, calculating that risk still requires truthful power sizing data.
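
The sizing exercise itself is simple arithmetic; what makes or breaks it is the truthfulness of the per-server number you feed in. A sketch with made-up wattages (my own illustration; the 80% continuous-load derating is common North American practice – check your local code):

```python
# Worst-case power sizing sketch (illustrative numbers, not from the report).
# The per-server wattage is the critical input -- exactly the number a
# Power Calculator is supposed to give you truthfully.

BREAKER_AMPS, VOLTS, DERATE = 30, 208, 0.80    # 80% continuous-load derating
usable_watts = BREAKER_AMPS * VOLTS * DERATE   # 4992W usable on this circuit

server_max_watts = 650                          # hypothetical calculator output
servers = int(usable_watts // server_max_watts) # 7 servers fit on paper
print(f"{usable_watts:.0f}W usable -> {servers} servers per circuit")

# If the calculator under-reports worst-case draw by 14%:
actual = servers * server_max_watts * 1.14      # ~5187W of real load
print(f"real worst-case load: {actual:.0f}W on a {usable_watts:.0f}W budget")
```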

Truth or Dare… Power-Calculators are either an accurate tool (Truth) or they are a marketing tool (Dare). Since customers have no easy way to test the power consumption of servers themselves, you rightfully expect and depend on vendors being truthful with you.

I was very surprised to learn that HP’s use of their Power Calculator tool is on the daring side. Cisco got curious about the power numbers HP was publishing and decided to do the testing that customers themselves normally would not do. The result of this first round of testing (Rack Servers) is rather shocking (pun intended).

UCS vs HP Power

Full report can be found here:
Server Power Calculator Analysis

The HP Power Calculator under-sizes power consumption by 14% on average – an error in the wrong direction. The Cisco Power Calculator seems a bit overly cautious, with an average margin of 25% on the safe side.
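
For reference, the margins in a chart like the one above boil down to a signed percentage between estimated and measured draw. A sketch with hypothetical watt figures (not the report's raw data):

```python
# Signed margin between calculator estimate and measured draw.
# Negative -> calculator under-sizes (dangerous); positive -> safety headroom.

def margin_pct(predicted_w: float, measured_w: float) -> float:
    return (predicted_w - measured_w) / measured_w * 100

# Hypothetical single-server numbers, for illustration only:
print(round(margin_pct(predicted_w=430, measured_w=500), 1))  # -14.0
print(round(margin_pct(predicted_w=625, measured_w=500), 1))  # 25.0
```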

This blog isn’t about who makes the most power-efficient server.

It is about making customers aware that Power Calculators cannot be used to compare server power consumption between vendors for marketing purposes. It is also about keeping vendors honest. If HP tells customers that they publish conservative data, then customers should be able to depend on that.

Here is what HP has to say about their Power Advisor (some selected quotes as teasers):

The HP Power Advisor utility reduces the research and guesswork… is intended to be a conservative estimator of power… Proprietary software exercises the processors to the highest possible power level and operates all peripherals while taking voltage and current measurements…

The entire HP Power Advisor Utility Description document is of course available to you.

My Question: Why do the HP servers consume (significantly) more power than the HP Power Advisor says they do?

Do you want to play Truth or Dare…

Jul 17 2011

UCS 2.0: New Innovation

Two years on the market and a 10% market share – UCS 1.0 has proven itself… welcome UCS 2.0.

Last week at Cisco Live 2011 in Las Vegas, where “what happens in Vegas is shared online”, UCS 2.0 was announced (orderability scheduled for Q3 CY11).

With UCS 1.0 we brought innovation to the x86 blade server market with Unified Fabric, Service Profiles, Extended Memory technology, a virtual chassis architecture, very extensive networking features and an XML API.


UCS 2.0 adds to this already great package by giving you even more choice, with an additional Fabric Interconnect, an additional chassis IO Module, and an additional blade Virtual Interface Card. These three new infrastructure components bring innovation, scalability, and feature enhancements.


One innovation is Unified Ports. Any port on the new 6248UP Fabric Interconnect can be 1GE, 10GE, FCoE, FC, or DCB. It no longer matters at purchasing time what type of ports you need and how many. Any port, any role.

Another innovation is dual 40G of bandwidth on the new VIC1280. This is dual 40G per half-width blade!!

And on the software side, some exciting new features that have been on the top-ask list from you: support for disjoint L2 networks and iSCSI boot.


If you have made it this far reading this blog, you must be interested in a lot more detail. For that I’m going to direct you to a very extensive blog write-up by M. Sean McGee, who presented the UCS 2.0 launch at Cisco Live 2011 in Las Vegas.

TJ

