Dec 06 2012

What Would You Build

What Would You Build – if you had the power to program the network.

In my presentations on Cisco ONE (Open Network Environment), and specifically the onePK Platform API section, I challenge people to think about what they would build if they could use an API to program their routers and switches.

CiscoONE

And this is the place I asked people to go to write down what programming against a network (device) would mean for them. The goal was to collect the creative ideas people have for problems that could potentially be solved if you could write a program to interact with your network devices. Since I don’t want to limit anybody’s creativity I won’t give any examples here; if you were at my presentations you will have heard me give a few.

I’m very interested to hear some of your great ideas.

Nov 04 2012

UCS Central preview – A larger scale view

When UCS was launched in June 2009, it raised the bar with its integrated management, UCS Manager: a single built-in management interface to configure every aspect of the UCS system, which also provides an open XML interface.

Before we move forward, we need to get a few definitions straight:

  • UCS Domain: A single UCS system, including a redundant pair of Fabric Interconnects and up to 160 servers.
  • UCS Manager: The management interface of a single UCS Domain, running inside the Fabric Interconnects.

UCS Manager is where administrators configure the pools, policies and templates that give UCS its tremendous flexibility. But what if you own multiple UCS Domains and wish to keep your pools, policies and templates consistent across them, and across multiple geographical sites or even continents? That functionality requires you to rely on external tools that leverage the XML API.
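To give a feel for what such an external tool looks like, here is a minimal Python sketch that logs in to the XML API of a single UCS Manager and pulls the blade inventory. The endpoint (/nuova) and methods (aaaLogin, configResolveClass, aaaLogout) are as I know them from the UCS XML API; the hostname and credentials are placeholders, so treat the whole thing as an illustration rather than a finished tool.

```python
# Minimal sketch of an external tool talking to the UCS Manager XML API.
# Hostname and credentials are placeholders; verify=False only for lab use.
import requests

UCSM = "https://ucsm.example.com/nuova"   # XML API endpoint of a UCS Manager

# Log in and grab the session cookie from the response.
login = requests.post(
    UCSM,
    data='<aaaLogin inName="admin" inPassword="password" />',
    verify=False,
)
cookie = login.text.split('outCookie="')[1].split('"')[0]

# Resolve all objects of class computeBlade: the blade inventory.
inventory = requests.post(
    UCSM,
    data=f'<configResolveClass cookie="{cookie}" classId="computeBlade" inHierarchical="false" />',
    verify=False,
)
print(inventory.text[:500])   # raw XML; a real tool would parse this properly

# Be polite and log the session out again.
requests.post(UCSM, data=f'<aaaLogout inCookie="{cookie}" />', verify=False)
```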

Welcome UCS Central

On November 1st, Cisco lifted the curtain on what was known as project “Pasadena” in a blog post that included a 30-minute video that is worth watching. UCS Central is a tool to manage multiple UCS Domains. What does ‘multiple’ mean, you might ask? When UCS was released in 2009 a single system supported 5 chassis (40 blades), which over time grew to the current 20 chassis (160 servers, blade and/or rack). Well, according to the demonstrations given at VMworld Barcelona, the phase 1 release will support 10,000 servers.

Now that is a big number – 10,000 servers in phase 1 should be more than enough for any one of you out there reading this blog. If not, I would love to hear from you 🙂

Before I get into some of the details, let’s start with Krish Sivakumar (Product Manager) and Roger Anderson (Technical Marketing Engineer) explaining UCS Central in this 5-minute video.

The architecture

UCS Central is not a standalone application that simply uses the XML API of UCS Manager to speak to it. That would have been the easy way out. Instead, it is built from the ground up to be fully integrated with UCS Manager, leveraging the same Data Management Engine (DME) that runs inside UCS Manager.

UCS Manager uses the concept of setting a policy centrally, but making the endpoints responsible for executing that policy. For example, the policy can be that blades need to run a new BIOS version. The Fabric Interconnect doesn’t push that down; instead, the blades receive a message that a new policy is available, and they individually reach out, collect the new policy and execute it.

UCS Central extends that model: a policy is set centrally and UCS Central sends out a message on the message bus. All subscribed UCS Managers receive the message that a new policy is available. The individual UCS Managers then contact UCS Central, grab the new policy and act on it (which in some cases means they in turn send out a message to components inside their own UCS domain).
This architecture scales very well and provides the additional benefit that no functionality is taken away from the local UCS Manager. The functionality of UCS Central is simply added.
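For readers who like code, here is a tiny conceptual sketch of that announce-and-pull pattern in Python. It is not UCS code: the message bus, controller and domain-manager classes are all inventions of mine, purely to illustrate that only a lightweight notification travels on the bus while each subscriber pulls and applies the full policy itself.

```python
# Conceptual sketch of the announce/pull policy model (not actual UCS code).
# A central controller publishes "policy available" notifications; each
# subscribed domain manager pulls the full policy itself and applies it.

class MessageBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, notification):
        for callback in self.subscribers:
            callback(notification)


class CentralController:
    """Stands in for UCS Central: stores policies, announces changes."""
    def __init__(self, bus):
        self.bus = bus
        self.policies = {}

    def set_policy(self, name, body):
        self.policies[name] = body
        # Only a lightweight notification goes on the bus, not the policy itself.
        self.bus.publish({"policy": name})

    def get_policy(self, name):
        return self.policies[name]


class DomainManager:
    """Stands in for a registered UCS Manager: pulls and applies policies."""
    def __init__(self, name, central, bus):
        self.name = name
        self.central = central
        bus.subscribe(self.on_notification)

    def on_notification(self, notification):
        body = self.central.get_policy(notification["policy"])  # pull, not push
        print(f"{self.name}: applying {notification['policy']} -> {body}")


bus = MessageBus()
central = CentralController(bus)
DomainManager("domain-A", central, bus)
DomainManager("domain-B", central, bus)
central.set_policy("ntp", {"servers": ["10.0.0.1", "10.0.0.2"]})
```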

How do you set this up?
It all starts at the UCS Manager level: the admin registers the UCS Manager with UCS Central by simply adding the IP address (or DNS name) and the shared secret. By default all of the policies remain in so-called local mode. Even when UCS Manager is registered to UCS Central, the local administrators remain fully in control of which functionality is delegated to UCS Central.

So what happens after registration?
UCS Central collects the full configuration of the UCS Manager so that the complete inventory is available centrally. All of the unique identifiers in use (MAC, WWPN, WWNN, UUID) are now also known, as well as all of the service profiles that have been created.

In phase 1 all of the policies relate to the UCS domain itself: date and timezone, NTP and DNS servers, SNMP credentials, AAA authentication settings and so on. These can all be defined in UCS Central, and each UCS Manager will collect these policies and configure itself accordingly. This ensures consistency and compliance across all UCS domains for the standard system settings.

Phase 1 also allows you to create pools for the unique identifiers within UCS Central. Up to now these were always configured at the UCS Domain level and had only local significance. UCS Central adds global awareness, while keeping the flexibility to still configure local pools (it remains fully aware of all local pools and which identifiers are consumed from them).
UUIDs, MAC addresses and Fibre Channel WWPNs/WWNNs can all be configured centrally, and at the UCS Manager level these central pools become one extra choice to select from.
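As a purely conceptual illustration of what a global pool buys you, here is a small Python sketch of a central allocator handing out non-overlapping MAC blocks to individual domains. This is not how UCS Central is implemented; the class, the 00:25:B5 prefix and the block size are just example values to show why central coordination keeps identifiers unique across domains.

```python
# Conceptual sketch: a central MAC pool handing out non-overlapping blocks
# per UCS domain, so identifiers stay unique across all domains.
# Not UCS code; the prefix and block size are arbitrary examples.

class GlobalMacPool:
    def __init__(self, prefix="00:25:B5", block_size=256):
        self.prefix = prefix
        self.block_size = block_size
        self.next_block = 0
        self.assignments = {}          # domain name -> (first, last) MAC

    def _mac(self, value):
        return (f"{self.prefix}:{(value >> 16) & 0xFF:02X}:"
                f"{(value >> 8) & 0xFF:02X}:{value & 0xFF:02X}")

    def allocate_block(self, domain):
        start = self.next_block * self.block_size
        self.next_block += 1
        block = (self._mac(start), self._mac(start + self.block_size - 1))
        self.assignments[domain] = block
        return block

pool = GlobalMacPool()
print("domain-A:", pool.allocate_block("domain-A"))
print("domain-B:", pool.allocate_block("domain-B"))
```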

Firmware management is centralized
UCS Central can be configured to check Cisco.com for new versions of software. It will collect a small XML file listing the latest releases, but it will not automatically download the actual files (those can be rather big). When policies are created or changed and the new setting references a version that has not yet been downloaded, UCS Central will download it from Cisco.com.
And as described earlier, if the policy applies to individual UCS Managers, they will get the message that they need to download a copy of that software from UCS Central and store it on the Fabric Interconnect. Very scalable and very practical.

What to expect next?
Phase 1 is a very exciting release and every customer should be looking at adding UCS Central to their UCS installation. The functionality provided in phase 1 is more than enough to get started and for you to decide how to structure the UCS Central implementation. Phase 2 was not demonstrated at VMworld, but it was mentioned that it will follow phase 1 very rapidly and add the functionality to create Service Profiles centrally.

Oh, you might ask what all of this good stuff is going to cost you… Although pricing was not finalized, during VMworld the message was that licensing would be UCS Domain-based, not server-based, and that the first 5 UCS Domains would be free. With a single UCS Domain scaling to 160 servers, that means up to 800 servers (5 * 160) managed via UCS Central for free.

I’ve had the opportunity to play with UCS Central for a little while now and I for one am looking forward to the release date and getting UCS Central into your hands. Let me know what you think and expect of UCS Central.

Jun 01 2012

Cisco UCS market share on the rise

IDC today released their Q1 CY2012 server market share data.

Let’s take a look at the blade-server-specific numbers according to IDC.


Bladed Server Market Results

The blade market continued its growth in the quarter with factory revenue increasing 7.3% year over year, with shipment growth increasing by 4.8% compared to 1Q11. Overall, bladed servers, including x86, EPIC, and RISC blades, accounted for $2.0 billion in revenues, representing 16.6% of quarterly server market revenue. More than 90% of all blade revenue is driven by x86-based blades, which now represent 21.3% of all x86 server revenue. HP maintained the number 1 spot in the server blade market in 1Q12 with 46.1% revenue share, while IBM finished with 18.9% revenue share. Cisco and Dell rounded out the top 4 with 12.8% and 8.7% factory revenue share, respectively. Cisco’s blade server revenue increased 48.8% year over year and gained 3.6 points of blade server market share when compared with the first quarter of 2011.

A 48.8% year-over-year increase in Cisco blade server revenue is validation that customers are adopting Cisco UCS. But what you can’t easily see from these numbers is Cisco’s market share of the x86 blade server market; IDC only lists the total blade server market share numbers. Remember that Cisco only sells x86 blades, which skews the Cisco market share number. So I calculated Cisco’s share of the x86 blade server market using the following logic. Let me know if you think I’m off with the calculations.

– Overall blade server revenue: $2.0B

– Cisco blade server marketshare of that $2B: 12.8% which equals $256M

– x86 blade server revenue is 90% of the $2B which is $1.8B.

That means Cisco has 14.2% x86 blade server marketshare ($256M/$1.8B).
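For anyone who wants to double-check, here is the same back-of-the-envelope arithmetic in a few lines of Python (the 90% x86 fraction is the approximation taken from the IDC quote above):

```python
# Back-of-the-envelope check of the x86 blade market share estimate.
total_blade_revenue = 2.0e9          # overall blade server revenue, $2.0B
cisco_share_total = 0.128            # Cisco's share of the total blade market
x86_fraction = 0.90                  # "more than 90%" of blade revenue is x86

cisco_revenue = total_blade_revenue * cisco_share_total      # ~$256M
x86_revenue = total_blade_revenue * x86_fraction             # ~$1.8B
cisco_x86_share = cisco_revenue / x86_revenue

print(f"Cisco x86 blade share: {cisco_x86_share:.1%}")       # -> 14.2%
```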

Do you agree?

Mar 17 2012

How UCS achieves 80GbE of bandwidth per blade

With the launch of the third generation of Unified Fabric, Cisco’s UCS system has moved well beyond the simple 2x10GbE interfaces that a single blade used to have. It now delivers up to 80GbE to a single blade, as dual 40GbE connections (40GbE per fabric). How is that achieved? What are the options?

There are four main parts that you need to understand in order to answer the above question.

– Backplane

– IO Modules

– Blades, the M2 and M3

– Adapters

In the following pictures and text you will see an A-side and a B-side. The data plane is split into two completely separate fabrics (A and B). Each side consists of a Fabric Interconnect (UCS 6100/6200) and an IO Module (2100/2200), and each blade is connected to both the A and B sides of the fabric. This is a fully active/active setup; the concept of active/standby does not exist within the Cisco UCS architecture.

Backplane

The Cisco 5108 chassis has used the same backplane since Cisco started shipping UCS in 2009. It is designed to connect 8 blades and 2 IO Modules, and the backplane contains 64 10Gb-KR lanes: 32 from IOM-A and 32 from IOM-B.

Backplane

All UCS chassis sold to date have this backplane, which provides a total of 8 10Gb-KR lanes to each blade slot. This means every chassis sold so far is completely ready to deliver 80Gb of bandwidth to every blade slot.


IO-Modules

The IO Modules are an implementation of the Cisco Fabric Extender (FEX) architecture, where the IO Module operates as a remote line card of the Fabric Interconnect. The IO Module has network-facing interfaces (NIFs), which connect the IOM to the Fabric Interconnect, and host-facing interfaces (HIFs), which connect the IOM to the adapters on the blades. All interfaces are 10Gb DCE (Data Center Ethernet).
There are three different IOMs, each providing a different number of interfaces.

Table NIF+HIF

Table Interfaces

With 8 blade slots in the chassis, the table on the right shows the number of 10Gb interfaces that are delivered to a single blade slot.

Let’s complete the picture and look at the total bandwidth going to each blade slot. Each chassis has an A-side and a B-side IOM, and each IOM provides the number of interfaces listed in the above table. Because UCS only runs active/active, the total bandwidth to each blade slot comes to:

IOM bandwidth
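Because the bandwidth table above is an image, here is the same math as a small Python sketch. The per-slot host interface counts (one 10Gb lane per slot for the 2104, two for the 2204, four for the 2208) are taken from the descriptions of pictures A, B and C further down.

```python
# Total bandwidth per blade slot for each IOM model, with both fabrics active.
# HIFs per blade slot: 2104 -> 1, 2204 -> 2, 2208 -> 4 (see pictures A/B/C below).
hifs_per_slot = {"2104": 1, "2204": 2, "2208": 4}
ioms_per_chassis = 2      # IOM-A and IOM-B, both active
lane_speed_gb = 10

for iom, hifs in hifs_per_slot.items():
    total = hifs * ioms_per_chassis * lane_speed_gb
    print(f"IOM {iom}: {total} Gb per blade slot")   # 20 / 40 / 80 Gb
```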

The Blades and Adapters

Hopefully by now you understand how the backplane delivers 8 10Gb-KR lanes to each blade slot and how the choice of IO Module determines how many of these are actually used. The blades themselves connect to the backplane via a connector that passes the 8 10Gb-KR lanes onto the motherboard.

Depending on the generation of blade (M2 or M3), these 8 10Gb-KR copper traces are distributed amongst the available adapters on the blade.

M2 blade

This is the easy one to explain because the M2 blades are designed with a single mezzanine adapter slot, so all 8 10Gb-KR lanes go to this single adapter. Depending on the adapter selected for the mezzanine slot, the actual number of interfaces available will differ, but every adapter has interfaces connected to both IOM-A and IOM-B.

M2+adapters+IOM

The adapter choice for the M2 blades is either a 2-interface or an 8-interface adapter. Remember from the backplane and IOM paragraphs that the IOM choice also determines the number of active interfaces on the adapter, as can be seen in the above picture and table.

The final piece to be aware of is the fully automatic port-channel creation between the VIC1280 and the 2204/2208. This creates dual 20GbE or dual 40GbE port-channels with automatic flow distribution across the lanes in each port-channel.

M2 port channel

M3 blade

The M3 blades offer both an mLOM slot and an extra mezzanine adapter slot, and split the 8 10Gb-KR lanes they receive from the backplane between the two. This means customers can run a redundant pair of IO adapters on the M3 blade if they desire (some RFPs explicitly ask for physically redundant IO adapters).

The M3 blade splits up the 8 10Gb-KR lanes by delivering 4 to the mLOM and the remaining 4 to the mezzanine slot. Of course these are connected evenly to the IOM-A and IOM-B side. There are also 4 lanes that run on the motherboard between the mezzanine and mLOM.
This means that the mLOM slot receives a total of 8 10Gb-KR lanes, 4 from the backplane and 4 from the mezzanine slot on the blade. This will become relevant when we discuss the Port-Extender options for the M3.

M3 blade

Now that it is clear how the 10Gb-KR lanes are distributed amongst the mLOM and mezzanine slots on the blade, let’s take a look at how they map to the three possible IO Modules. The mLOM today is always the VIC1240.

M3 blade+IOM

In picture A, with the 2104 as IO Module, there is a single 10Gb-KR lane from each IOM going to the mLOM VIC1240. The mezzanine slot in this scenario cannot be used for IO functionality, as there are no 10Gb-KR lanes connecting it to the IOM.

In picture B, with the 2204 as IO Module, there are two 10Gb-KR lanes from each IOM. The mLOM VIC1240 receives a single 10Gb-KR lane from each IOM, and the second lane goes to the mezzanine slot. There are three adapter choices for the mezzanine slot; it is my opinion that the “Port-Extender” will be the most popular choice for this configuration.

In picture C, with the 2208 as IO Module, there are four 10Gb-KR lanes from each IOM. The mLOM VIC1240 receives two 10Gb-KR lanes from each IOM, and the other two lanes go to the mezzanine slot. Again there are three adapter choices for the mezzanine slot, and here too I expect the “Port-Extender” to be the most popular choice.

Since I think the Port-Extender will be the most popular mezzanine adapter, let’s cover what it is. Simply put, it extends the four 10Gb-KR lanes from the backplane to the mLOM by using the “dotted gray” lanes in the above picture. Think of it as a pass-through module.
If we take pictures B and C and add a Port-Extender into each, let’s see what we get. Remember that whenever possible an automatic port-channel is created, as can be seen in the following picture.

Port Extender
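To put numbers on those two scenarios, here is a short Python sketch of the per-blade bandwidth you end up with when the VIC1240 plus Port-Extender is paired with a 2204 or a 2208. The lane counts per fabric come straight from pictures B and C above; the rest is simple multiplication.

```python
# Resulting bandwidth per M3 blade with VIC1240 + Port-Extender.
# Lane counts per fabric are taken from pictures B and C above.
lanes_per_fabric = {"2204": 2, "2208": 4}   # 10Gb-KR lanes per IOM to the blade
lane_speed_gb = 10

for iom, lanes in lanes_per_fabric.items():
    per_fabric = lanes * lane_speed_gb               # one port-channel per fabric
    total = per_fabric * 2                           # fabric A + fabric B
    print(f"{iom}: dual {per_fabric}GbE port-channels, {total}Gb total per blade")
# -> 2204: dual 20GbE, 40Gb total; 2208: dual 40GbE, 80Gb total
```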

Let’s do the same but now use a VIC1280 as a completely separate IO adapter in the mezzanine slot. This provides full adapter redundancy for the M3 blade, for those customers that need that extra level of redundancy. The automatic port-channels are created from both the mLOM VIC1240 and the VIC1280 mezzanine adapter.

VIC1280

In Summary

The UCS system can deliver up to 80GbE of bandwidth to each blade in a very effective way, without filling the chassis with a lot of expensive switches: just two IO Modules can deliver 80GbE to each blade. That, to my knowledge, is a first in the blade-server market and shows the UCS architecture was designed to be flexible and cost effective.

I hope you made it all the way to the end and now have a much better understanding of how to select the UCS components that will give you the optimal bandwidth for your needs.
