Formal Cisco Announcement


This does not mark the end of life or anything significant for Cisco Intelligent Automation for Cloud (CIAC). What it does mean is that Cisco now has a more holistic single pane of glass to manage NetApp FlexPod, EMC VSPEX, and VCE Vblock. In competitive situations, Cisco was left wide open on the end-to-end management gap from Cisco UCS Manager to a full-bore cloud environment without having to involve the likes of CA or BMC. Cloupia provides infrastructure element management with the ability to expose API hooks for CIAC to integrate with. The Cloupia Unified Infrastructure Controller allowed customers to take a menu-driven approach to managing their converged infrastructure stack and to perform some level of orchestration workflows (bare-metal OS installation; VM creation, deletion, and power on/off; storage provisioning). This should be complementary growth for both companies: demand for Cloupia was outpacing its meager staff, which was hampering its channel and sales growth. By adding Cisco's field sales organization, without fear of competitive backlash in the addressable market Cloupia was focused on, Cisco will be better positioned in the enterprise private cloud space.

It’s important to note that with the new ExpressPod architecture, Cloupia should be a critical selling component, as it allows a single admin to provision and manage the entire stack for a nominal license fee.

Ran into a situation where I needed to log into vCenter Operations 5 without having to go through HTTPS. Fortunately, the UI VM seems to be running a modified version of SUSE Linux, so I just had to alter the vCops Apache config to disable the HTTP-to-SSL redirect.

It should also be noted that the “admin” account is listed in sudoers, thus allowing root access to the native OS.

The Apache config file is located here: /usr/lib/vmware-vcops/user/conf/install/vcops-apache.conf

I copied the original config over to vcops-apache.conf.orig and then commented out the following in the Apache config:

<VirtualHost *:80>
JkMountCopy On

# Redirect all HTTP requests to HTTPS
# RewriteEngine On
# RewriteCond %{HTTPS} off
# RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}

Once I saved the config and restarted the vCops vApp, everything worked like a charm. Now I can easily publish through my lab Citrix NetScaler without having to monkey around with SSL. Hope this helps.
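For reference, the edit can be scripted. The sketch below is my own and demos the same change against a sample copy of the rewrite block in /tmp; on the appliance you would point it at the vcops-apache.conf path above instead (the restart step is just rebooting the vApp).

```shell
# Demo the edit against a sample copy of the rewrite block; on the appliance,
# point CONF at the real vcops-apache.conf instead and back it up first.
CONF=/tmp/vcops-apache.conf
cat > "$CONF" <<'EOF'
<VirtualHost *:80>
JkMountCopy On

# Redirect all HTTP requests to HTTPS
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
EOF
cp "$CONF" "$CONF.orig"                  # keep the original, as in the post
sed -i -E 's/^(Rewrite)/# \1/' "$CONF"   # comment out the three rewrite lines
grep '^# Rewrite' "$CONF"
```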


Passed my VCP5

So I’m one of the many who had until the end of the month to pass the VCP 5 without having to take the class. Honestly, I think VMware did a very fair job of making this exam relevant to implementation without focusing on limits that will change with every release. Granted, a lot has changed in vSphere 5, but it’s all good, especially with how High Availability was rewritten and Storage DRS was introduced. I do think there are some legitimate gripes: because the VCAP5-DCA and VCAP5-DCD have still not been released, experts are forced to start all the way at the bottom with no way to renew the VCP via a valid VCAP pass. That’s silly; I personally renew all of my Cisco certifications by retaking a valid CCIE written exam every 15 months or so. Anyway, I think VMware is finally hitting the content focus, but they still have a ways to go with the overall program.

Well, it’s almost that time of year again for VMware’s Partner Exchange hosted in Las Vegas. This year I’ll be attending and focusing mostly on End User Compute and Cloud Management. I’ve recently had some exposure to vCenter Operations 5 and have been really impressed at how they’ve executed the integration of CapacityIQ while providing a very easy user interface to present all of the different dashboards at my disposal now. Definitely worth checking out, and I’m really looking forward to my all-day bootcamp on Monday. If you’re attending, either drop me a note here or send me a tweet @angryjesters and hopefully we can have a beer.. or two..

Boot from SAN is not necessarily one of the easiest things in the world; however, Cisco UCS does take away a lot of the complexity with its Service Profiles and associated Boot Policies. I’m not going to get into an exhaustive post on booting from SAN with Cisco UCS, as I think it’s been readily documented and most people know how it works. However, one important caveat to keep in mind is that the Cisco M81KR does not have an “HBA BIOS” like those typically available on the Emulex or QLogic HBAs/CNAs that we’re all familiar with. If you’re unfamiliar with the HBA BIOS utilities, this is typically a Ctrl+E or Ctrl+Q sequence that you can type as a host is booting up so you can have the HBA log into the fabric and scan for its available LUNs.

Not to turn into a Debbie Downer, but there is a very easy way to circumvent this caveat and make use of the lovely UCSM CLI. As the host is booting, you can connect to the VIC adapter and have it log into the fabric and list its LUNs. Couple this with the dialogue from the UCS Boot Policy and we can easily troubleshoot any SAN booting issues. This method negates any need for a traditional HBA BIOS on the Cisco VIC family and demonstrates the real power of the UCSM CLI.

It should also be noted that once the host has completed booting, the host drivers will take over and the VIC firmware will no longer be accessible for scanning the fabric. See below for the command walkthrough.

For the example in this post, we’ll be logging into a Xiotech array with a WWPN of 20:00:00:1F:93:00:12:9E. Our Service Profile is associated with Blade 1/1 in our UCS deployment (a B200-M2, so the adapter is located in slot 1).
So first we’ll connect down to our VIC firmware:
UCS-6200-TOP-A# connect adapter 1/1/1
adapter 1/1/1 # connect
adapter 1/1/1 (top):1# attach-fls
Now we’ll list our vNIC IDs and force the VIC to log into the SAN fabric:
adapter 1/1/1 (fls):5# vnic
---- ---- ---- ------- -------
vnic ecpu type state   lif
---- ---- ---- ------- -------
7    1    fc   active  4
8    2    fc   active  5
adapter 1/1/1 (fls):4# login 7
lifid: 4
ID    PORTNAME                 NODENAME                 FID
0:    20:00:00:1f:93:00:12:9e  00:00:00:00:00:00:00:00  0xa70400
Looks like we’ve successfully logged into the fabric, as we’ve got a successful PLOGI (see above), and now we can report out which LUNs we have access to:
adapter 1/1/1 (fls):5# lunmap 7
lunmapid: 0  port_cnt: 1
lif_id: 4
PORTNAME                 NODENAME                 LUN               PLOGI
20:00:00:1f:93:00:12:9e  00:00:00:00:00:00:00:00  0000000000000000  Y

adapter 1/1/1 (fls):6# lunlist 7
vnic : 7 lifid: 4
– FLOGI State : flogi est (fc_id 0xa70804)
– PLOGI Sessions
– WWNN 20:00:00:1f:93:00:12:9e WWPN 20:00:00:1f:93:00:12:9e fc_id 0xa70400
– LUN’s configured (SCSI Type, Version, Vendor, Serial No.)
LUN ID : 0x0000000000000000 (0x0, 0x4, XIOTECH , 3BCC01C4)
– REPORT LUNs Query Response
LUN ID : 0x0000000000000000
– Nameserver Query Response
– WWPN : 20:00:00:1f:93:00:12:9e

Great – everything is working as expected and our Windows 2008 server successfully booted from our Xiotech array. Special thanks to Jeff Allen and some other helpful Cisco folks for pointing out this awesome feature.

ERSPAN on the Nexus 5xxx

In the NX-OS 5.1(3)N1 release for the Nexus 5000 family of switches, Encapsulated Remote Switched Port Analyzer (ERSPAN) was finally added. This was a long-standing feature enhancement request to allow easier capturing of traffic for monitoring and analysis, as ERSPAN allows you to statically place a network sniffer in the IP topology without having to relocate the sniffer to the local switch you want to monitor. ERSPAN copies the ingress/egress traffic of a given switch source and encapsulates it in a GRE tunnel back to an ERSPAN destination. This allows network operators to strategically place their network monitoring gear in a central location of their network and collect historical traffic patterns in great detail. This is a Good Thing(c).

Now, you’re probably irritated that you need a second device to terminate the ERSPAN session (I know I am); however, let’s put this into perspective. You can only have 2 active source SPAN sessions per Nexus 7000 / Nexus 3000 or even Catalyst 6500 chassis. With the inclusion of ERSPAN support on the Nexus 5000 series, you now have more telemetry points within the network. Depending on your access layer deployment, you could actually have more points of visibility within your network than you previously did if you’re leveraging the Nexus 5000 switches with Nexus 2000 fabric extenders for rackmount server deployments. Even with all of the VDC slicing and dicing you can do on a Nexus 7000, you’re only allowed 2 SPAN sources per chassis. Period. This got to be very troublesome in some early Nexus deployments where we were leveraging multiple VDCs. (I should point out that ACL captures are now supported on the Nexus 7000 as of NX-OS 5.2.) Now, you can use the Nexus 7000 and Nexus 3000 (as of NX-OS 5.0(3)U2(2)) to terminate up to 23 ERSPAN destination sessions per chassis.

Side note: you should use 5.0(3)U2(2b) on the Nexus 3000 because of a nasty memory leak in the monitor process that could cause the switch to crash.

There are however some important caveats to pay attention to:

  • The Nexus 5500 / 5000 switches ONLY support ERSPAN sources. You can NOT locally terminate an ERSPAN session on a Nexus 5500/5000 chassis. The ERSPAN traffic must be sent to a switch capable of terminating ERSPAN (i.e., one that supports ERSPAN destination sessions), such as the Nexus 7000, Nexus 3000, or a Catalyst 6500.
  • The Nexus 5000 switches (1st generation ) support only 2 ERSPAN sources while the Nexus 5500 switches (2nd generation) support 4 ERSPAN sources.
  • With Wireshark, if you specify “erspan.spanid == #”, you can filter on the specific ERSPAN session you want to see.
  • You can specify sources based on ethernet interfaces, VLANs, FEX interfaces, and port-channels. With VLAN sources, both Tx and Rx information will be sent.
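For the curious, that Wireshark filter is matching the 10-bit session ID carried in the ERSPAN Type II header inside the GRE payload. Here is a small parser sketch of my own, assuming the standard Type II field layout (this is not Cisco or Wireshark code):

```python
import struct

def erspan2_session_id(header: bytes) -> int:
    """Extract the session (span) ID from an 8-byte ERSPAN Type II header.

    Type II layout: ver(4) | vlan(12) | cos(3) | en(2) | t(1) | session(10),
    followed by reserved(12) | index(20).
    """
    if len(header) < 8:
        raise ValueError("ERSPAN Type II header is 8 bytes")
    word0, word1 = struct.unpack("!HH", header[:4])
    version = word0 >> 12
    if version != 1:            # Type II encodes version 1
        raise ValueError(f"not an ERSPAN Type II header (ver={version})")
    return word1 & 0x03FF       # low 10 bits carry the session ID

# Sample header: version 1, VLAN 100, session ID 10
sample = struct.pack("!HH", (1 << 12) | 100, 10) + b"\x00\x00\x00\x00"
print(erspan2_session_id(sample))  # 10
```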

Example config below – in this topology, the ERSPAN destination IP is the N3K’s management SVI and the ERSPAN origin is the N5K’s management SVI.

N5K:

monitor session 1 type erspan-source
erspan-id 10
vrf default
destination ip
source interface ethernet100/1/3
no shutdown

monitor erspan origin ip-address global

N7K // N3K:

interface Ethernet1/3
switchport monitor

monitor session 2 type erspan-destination
erspan-id 10
vrf default
source ip
destination interface ethernet1/3
no shutdown

Hope this helps!

A couple of weeks back, NX-OS 5.2.3 was released for the Nexus 7000 platform. This was the first maintenance release for the long-running NX-OS 5.2 software train. Unfortunately, almost as soon as it was posted to CCO, customers started to hit issues with the upgrade from 5.2.1 to 5.2.3. One of the scenarios that produced the bug conditions was mentioned on the Cisco Network Service Provider mailing list (cisco-nsp) – see here for the start of the thread. Cisco TAC quickly responded and has since deferred the 5.2.3 release in favor of 5.2.3a.

It should also be noted that NX-OS 6.0(2) is now available for the Nexus 7000. This is a short-running train that introduces hardware support for the Fabric 2 modules and the F248XP-25 linecard on the Nexus 7010 and Nexus 7018 chassis (the Nexus 7009 had FAB2 / F248XP-25 support at its FCS on 5.2).

What does this all mean? Well, it means you should always be judicious when introducing new software to your production infrastructure: while hardware and software manufacturers will do what’s possible for quality assurance, they can’t account for every variable in a given customer network. See some of the case examples in this book written by a friend of mine..

A common problem in designing Nexus 7000 installations is identifying which features are supported on which linecard. Many people are accustomed to the Catalyst 6500, where the features are primarily driven by the supervisor while the linecards impose limits of scale. With the Nexus 7000, the features are heavily dependent upon the linecard, as the Nexus 7000 is a purely distributed forwarding switch. See Tim Stevenson’s excellent hardware architecture overview of the Nexus 7000 in this year’s Cisco Live presentation BRKARC-3470.

A good way to understand the feature differences is that the classic “M” series is for multiservice solutions like OTV, LISP, and MPLS, while the “F” series is leveraged for access layer enhancements like low port-to-port latency, FCoE, FabricPath, and Fabric Extenders (FEX).
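Purely as an illustration (my own rough mapping of the rule of thumb above, not an official compatibility matrix), the M-versus-F decision can be expressed as a lookup:

```python
# Illustrative rule-of-thumb mapping of features to N7K linecard families.
# This sketches the guidance above; it is not an official support matrix.
M_SERIES = {"OTV", "LISP", "MPLS"}
F_SERIES = {"FCoE", "FabricPath", "FEX", "low-latency"}

def linecard_families(required_features: set) -> set:
    """Return which linecard families the required features pull in."""
    families = set()
    if required_features & M_SERIES:
        families.add("M")
    if required_features & F_SERIES:
        families.add("F")
    return families

print(sorted(linecard_families({"OTV", "FCoE"})))  # ['F', 'M']
```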

Hope this helps.

(click on the image for full view while I still figure out how to format)

For those of you looking to upgrade legacy/new Nexus 7000 customers to NXOS 5.2, you’ll want to read the following in the Release Notes: http://www.cisco.com/en/US/docs/switches/datacenter/sw/5_x/nx-os/release/notes/52_nx-os_release_note.html#wp86458

Now, most brand new (< 9 month old) hardware orders should include the 8 GB memory upgrade on the SUP1, as this became a depot enhancement back in November of last year.

Memory Requirements

The Cisco NX-OS software requires 4 GB of memory or 8 GB of memory, depending on the software version you use and the software features you enable.

An 8 GB supervisor memory upgrade kit, N7K-SUP1-8GBUPG=, allows for growth in the features and capabilities that can be delivered in existing Cisco Nexus 7000 Series supervisor modules. The memory upgrade kit is supported on Cisco Nexus 7000 Series systems running Cisco NX-OS Release 5.1 or later releases. Instructions for upgrading to the new memory are available in the “Upgrading Memory for Supervisor Modules” section of the Cisco Nexus 7000 Series Hardware Installation and Reference Guide.

The following guidelines can help you determine whether or not to upgrade an existing supervisor module:

•When the system memory usage exceeds 3 GB (75 percent of total memory), we recommend that you upgrade the memory to 8 GB. Use the show system resources command from any VDC context to check the system memory usage:

Nexus-7000# show system resources
Load average:   1 minute: 0.47   5 minutes: 0.24   15 minutes: 0.15
Processes   :   959 total, 1 running
CPU states  :   3.0% user,   3.5% kernel,   93.5% idle
Memory usage:   4115776K total,   2793428K used,   1322348K free <-------------

•If you create more than one VDC with XL mode enabled, or if you have more than two VDCs, 8 GB of memory is required.
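The 75 percent guideline is easy to script against that "show system resources" output. A small sketch of my own (the regex assumes the "Memory usage:" line format shown above):

```python
import re

def memory_upgrade_recommended(show_output: str, threshold: float = 0.75) -> bool:
    """Parse 'show system resources' output and apply the 75% guideline."""
    m = re.search(r"Memory usage:\s*(\d+)K total,\s*(\d+)K used", show_output)
    if not m:
        raise ValueError("no 'Memory usage' line found")
    total_kb, used_kb = int(m.group(1)), int(m.group(2))
    return used_kb / total_kb > threshold

sample = "Memory usage:   4115776K total,   2793428K used,   1322348K free"
print(memory_upgrade_recommended(sample))  # ~68% used -> False
```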

For additional guidance about whether or not to upgrade a supervisor module to 8 GB of memory, see Figure 1.

Figure 1 Supervisor Memory Upgrade Decision Flowchart

When you insert a supervisor module into a Cisco Nexus 7000 Series switch running Cisco NX-OS Release 5.1(x) or a later release, be aware that one of the following syslog messages will display, depending on the software version and the amount of memory for the supervisor module:

• If you are running Cisco NX-OS Release 5.1(1) or a later release and you have an 8-GB supervisor as the active supervisor and you insert a 4-GB supervisor module as the standby, it will be powered down. A severity 2 syslog message indicates that the memory amounts should be equivalent between the active and the standby supervisor:

2010 Dec 3 00:05:37 switch %$ VDC-1 %$ %SYSMGR-2-SUP_POWERDOWN: Supervisor in slot 10
is running with less memory than active supervisor in slot 9

In this situation, you have the option to upgrade the memory in the 4-GB supervisor or shut down the system and remove the extra memory from the 8-GB supervisor.

•  If you are running Cisco NX-OS Release 5.1(2) or a later release and you insert an 8-GB supervisor module as the standby with a 4-GB active supervisor, a severity 4 syslog message appears:

2010 Dec  1 23:32:08 switch %SYSMGR-4-ACTIVE_LOWER_MEM_THAN_STANDBY: Active supervisor
in slot 5 is running with less memory than standby supervisor in slot 6.

In this situation, you have the option to remove the extra memory or do a switchover and upgrade the memory in the 4-GB supervisor.