
Formal Cisco Announcement

Equities.com

This does not mark the end of life, or anything significant, for Cisco Intelligent Automation for Cloud. What it does mean is that Cisco now has a better holistic, single pane of glass to manage NetApp FlexPod, EMC VSPEX, and VCE Vblock. In competitive situations, Cisco was left wide open by the end-to-end management gap between Cisco UCS Manager and a full-bore cloud environment; closing that gap meant involving the likes of CA or BMC. Cloupia provides infrastructure element management with the ability to expose API hooks for CIAC to integrate with. The Cloupia Unified Infrastructure Controller gave customers a menu-driven approach to managing their converged infrastructure stack and performing some level of orchestration workflows (bare metal installation of an OS; VM creation, deletion, power on, and power off; provisioning of storage). This should be complementary growth for both companies: demand for Cloupia was there, but its meager staff was holding back its channel and sales growth. Coupled with Cisco's field sales organization, and without fear of competitive backlash from the addressable market Cloupia was focused on, it will better assert Cisco in the enterprise private cloud space.

It's important to note that, with the new ExpressPod architecture, Cloupia should be a critical selling component, as it allows a single admin to provision and manage the entire stack for a nominal license fee.

Ran into a situation where I needed to log into vCenter Operations 5 without having to go through HTTPS. Fortunately, the UI VM seems to be running a modified SUSE Linux build, so I just had to alter the vCops Apache config to disable its HTTP-to-SSL redirect.

It should also be noted that the "admin" account is listed in sudoers, allowing root access to the native OS.

Apache Config file is located here: /usr/lib/vmware-vcops/user/conf/install/vcops-apache.conf

I copied the original config over to vcops-apache.conf.orig and then commented out the following in the apache config:

<VirtualHost *:80>
JkMountCopy On

# Redirect all HTTP requests to HTTPS
# RewriteEngine On
# RewriteCond %{HTTPS} off
# RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
</VirtualHost>

Once I saved the config and restarted the vCops vApp, everything worked like a charm. Now I can easily publish through my lab Citrix NetScaler without having to monkey around with SSL. Hope this helps.
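
For reference, here's a minimal sketch of the whole procedure from a shell, assuming you SSH in as the admin account mentioned above (the Apache restart command at the end is an assumption on my part; rebooting the vApp as I did works just as well):

ssh admin@<vcops-ui-vm>                          # hostname is a placeholder
sudo su -                                        # admin is in sudoers, drops you to root
cd /usr/lib/vmware-vcops/user/conf/install
cp vcops-apache.conf vcops-apache.conf.orig      # keep a copy of the original
vi vcops-apache.conf                             # comment out the three Rewrite* lines shown above
/etc/init.d/apache2 restart                      # assumption; or just reboot the vApp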

 

Passed my VCP5

So I'm one of the many who had until the end of the month to pass the VCP5 without having to take the class. Honestly, I think VMware did a very fair job of making this exam relevant to implementation without focusing on limits that will change with every release. Granted, a lot has changed in vSphere 5, but it's all good, especially with how High Availability was rewritten and Storage DRS was introduced. I do think there are some legitimate gripes: the VCAP5-DCA and VCAP5-DCD still haven't been released, so forcing experts to start all the way at the bottom, without being able to renew the VCP with a valid VCAP pass, is silly. I personally renew all of my Cisco certifications by retaking a valid CCIE written exam every 15 months or so. Anyway, I think VMware is finally hitting the right content focus, but they still have a ways to go with the overall program.

Well, it's almost that time of year again for VMware's Partner Exchange, hosted in Las Vegas. This year I'll be attending and focusing mostly on End User Computing and Cloud Management. I've recently had some exposure to vCenter Operations 5 and have been really impressed at how they've executed the integration of CapacityIQ while providing a very easy user interface that presents all of the different dashboards now at my disposal. Definitely worth checking out, and I'm really looking forward to my all-day bootcamp on Monday. If you're attending, drop me a note here or send me a tweet @angryjesters, and hopefully we can have a beer.. or two..

Boot from SAN is not necessarily one of the easiest things in the world; however, Cisco UCS does take away a lot of the complexity with its Service Profiles and associated Boot Policies. I'm not going to write an exhaustive post on booting from SAN with Cisco UCS, as I think plenty of people have already documented how it works. However, one important caveat to keep in mind is that the Cisco M81KR does not have the "HBA BIOS" that is typically available on the Emulex or QLogic HBAs/CNAs we're all familiar with. If you're unfamiliar with the HBA BIOS utilities, this is typically a Ctrl+E or Ctrl+Q sequence that you type as the host is booting so the HBA can log into the fabric and scan for its available LUNs.

Not to turn into a Debbie Downer, but there is a very easy way to work around this caveat by making use of the lovely UCSM CLI. As the host is booting, you can connect to the VIC adapter and have it log into the fabric and list its LUNs. Couple this with the dialogue from the UCS Boot Policy and you can easily troubleshoot any SAN boot issues. This method negates the need for a traditional HBA BIOS on the Cisco VIC family and demonstrates the real power of the UCSM CLI.

It should also be noted that once the host has completed booting, the host drivers take over and the VIC firmware utilities will no longer be accessible for scanning the fabric. See below for the command walk-through.

For the example in this post, we'll be logging into a Xiotech array with a target WWPN of 20:00:00:1F:93:00:12:9E. Our Service Profile is associated with Blade 1/1 in our UCS deployment (a B200-M2, so the adapter is located in slot 1).
So first we’ll connect down to our VIC firmware:
UCS-6200-TOP-A#
UCS-6200-TOP-A# connect adapter 1/1/1
adapter 1/1/1 # connect
adapter 1/1/1 (top):1# attach-fls
Now we'll list our vNIC IDs and force the VIC to log into the SAN fabric:
adapter 1/1/1 (fls):5# vnic
---- ---- ---- ------- ----
vnic ecpu type state   lif
---- ---- ---- ------- ----
7    1    fc   active  4
8    2    fc   active  5
adapter 1/1/1 (fls):4# login 7
lifid: 4
ID    PORTNAME                 NODENAME                 FID
0:    20:00:00:1f:93:00:12:9e  00:00:00:00:00:00:00:00  0xa70400
Looks like we've successfully logged into the fabric, as we've got a successful PLOGI (see above), and now we can report which LUNs we have access to:
adapter 1/1/1 (fls):5# lunmap 7
lunmapid: 0  port_cnt: 1
lif_id: 4
PORTNAME                 NODENAME                 LUN               PLOGI
20:00:00:1f:93:00:12:9e  00:00:00:00:00:00:00:00  0000000000000000  Y

adapter 1/1/1 (fls):6# lunlist 7
vnic : 7 lifid: 4
- FLOGI State : flogi est (fc_id 0xa70804)
- PLOGI Sessions
- WWNN 20:00:00:1f:93:00:12:9e WWPN 20:00:00:1f:93:00:12:9e fc_id 0xa70400
- LUNs configured (SCSI Type, Version, Vendor, Serial No.)
LUN ID : 0x0000000000000000 (0x0, 0x4, XIOTECH , 3BCC01C4)
- REPORT LUNs Query Response
LUN ID : 0x0000000000000000
- Nameserver Query Response
- WWPN : 20:00:00:1f:93:00:12:9e

Great, everything is working as expected and our Windows 2008 server successfully booted from our Xiotech array. Special thanks to Jeff Allen and some other helpful Cisco folks for pointing out this awesome feature.

ERSPAN on the Nexus 5xxx

In the NX-OS 5.1(3)N1 release for the Nexus 5000 family of switches, Encapsulated Remote Switched Port Analyzer (ERSPAN) support was finally added. This has been a long-standing feature enhancement request, as ERSPAN makes capturing traffic for monitoring and analysis much easier: it lets you place a network sniffer statically in the IP topology rather than relocating it to whichever switch you want to monitor. ERSPAN copies the ingress/egress traffic of a given source on the switch and encapsulates it in a GRE tunnel back to an ERSPAN destination. This allows network operators to place their monitoring gear in a central location and collect historical traffic patterns in great detail. This is a Good Thing(c).

Now, you're probably irritated that you need a second device to terminate the ERSPAN session (I know I am); however, let's put this into perspective. You can only have 2 active source SPAN sessions per Nexus 7000 / Nexus 3000 or even Catalyst 6500 chassis. With the addition of ERSPAN support on the Nexus 5000 series, you now have more telemetry points within the network. Depending on your access layer deployment, you could actually have more points of visibility than you previously did if you're leveraging Nexus 5000 switches with Nexus 2000 fabric extenders for rackmount server deployments. Even with all of the VDC slicing and dicing you can do on a Nexus 7000, you're only allowed 2 SPAN sources per chassis. Period. This got to be very troublesome in some early Nexus deployments where we were leveraging multiple VDCs. (I should point out that ACL captures are now supported on the Nexus 7000 as of NX-OS 5.2.) You can also use the Nexus 7000 and Nexus 3000 (as of NX-OS 5.0(3)U2(2)) to support up to 23 ERSPAN destinations per chassis.

Side Note:  you should use 5.0(3)U2(2b) on the Nexus 3000 because of a nasty memory leak with the monitor process that would cause the switch to crash.

There are however some important caveats to pay attention to:

  • The Nexus 5500 / 5000 switches ONLY support ERSPAN sources. You can NOT locally terminate an ERSPAN session on a Nexus 5500/5000 chassis; the ERSPAN traffic must be sent to a switch capable of acting as an ERSPAN destination, such as a Nexus 7000, Nexus 3000, or Catalyst 6500.
  • The Nexus 5000 switches (1st generation) support only 2 ERSPAN sources, while the Nexus 5500 switches (2nd generation) support 4 ERSPAN sources.
  • With Wireshark, specifying “erspan.spanid == #” lets you filter on the specific ERSPAN session you want to see (see the sniffer-side sketch after this list).
  • You can specify sources based on Ethernet interfaces, VLANs, FEX interfaces, and port-channels. With VLAN sources, both Tx and Rx traffic is copied.
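
As a quick aside on that Wireshark point: if you capture the ERSPAN traffic while it is still in flight between the source and destination switches (on a tap or an intermediate SPAN, for example), it shows up GRE-encapsulated and Wireshark will decode it. A rough sketch, with the capture interface as a placeholder:

tcpdump -i eth1 -w erspan.pcap ip proto 47      # GRE is IP protocol 47

Then a display filter of erspan.spanid == 10 in Wireshark narrows things down to the session configured below.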

Example config below. In this topology, 10.1.1.2 is the N5K's management SVI and 10.1.1.5 is the N3K's management SVI.

N5K:

monitor session 1 type erspan-source
  erspan-id 10
  vrf default
  destination ip 10.1.1.5
  source interface ethernet100/1/3
  no shutdown

monitor erspan origin ip-address 10.1.1.2 global

N7K // N3K:

interface Ethernet1/3
  switchport
  switchport monitor

monitor session 2 type erspan-destination
  erspan-id 10
  vrf default
  source ip 10.1.1.2
  destination interface ethernet1/3
  no shutdown
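
To sanity-check both ends once the sessions are configured, verify that each session reports a state of "up" (a common reason for a "down" session is forgetting the "no shutdown" under the monitor session):

N5K# show monitor session 1
N3K# show monitor session 2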

Hope this helps!

A couple of weeks back, NX-OS 5.2.3 was released for the Nexus 7000 platform, the first maintenance release in the long-running 5.2 software train. Unfortunately, almost as soon as it was posted to CCO, customers started to hit issues upgrading from 5.2.1 to 5.2.3. One of the scenarios that triggered the bug condition was mentioned on the Cisco Network Service Provider mailing list (cisco-nsp); see here for the start of the thread. Cisco TAC quickly responded and has since deferred the 5.2.3 release in favor of 5.2.3a.

It should also be noted that NX-OS 6.0(2) is now available for the Nexus 7000. This is a short-running train that introduces hardware support for the Fabric 2 modules and the F248XP-25 on the Nexus 7010 and Nexus 7018 chassis (the Nexus 7009 had FAB2 / F248XP-25 support at its FCS on 5.2).

What does this all mean? It means you should always be judicious when introducing new software into your production infrastructure: while hardware and software manufacturers will do what's possible for quality assurance, they don't always account for every variable that can occur in a given customer network. See some of the case examples in this book written by a friend of mine.