Cisco APIC – Physical Interface Configuration Workflow – Part-2 (Port-Channel)

This article describes how to create a port-channel policy on the Cisco ACI fabric. You may refer to our previous article for the general interface configuration workflow. In this article we will jump directly to the Interface Policy Groups navigation pane to start the port-channel configuration.

1. Interface Policies – Policy Groups

  1. On the menu bar, navigate to Fabric → Access Policies.
  2. On the left navigation pane, expand Interface Policies.
  3. Right-click Policy Groups and select Create PC Interface Policy Group.
  4. interface_policy_groups_port-channel_1

  5. On the Create PC Interface Policy Group dialog box, perform the following:
    1. Provide a name for the policy.
    2. Choose the parameters you created earlier in the interface policies.
    3. Below is ours.
    4. interface_policy_groups_port-channel_2

    5. Click Submit.
  6. On the working pane, click the PC/VPC tab; you will find your port-channel policy in the list.
  7. interface_policy_groups_port-channel_3
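
The same object can also be created through the APIC REST API. Below is a minimal Python sketch of the JSON payload for a PC interface policy group. The class names (infraAccBndlGrp and the relation classes) come from the standard ACI object model; the group name PolGrp-to-Cat2960 is taken from this article's verification output, while the referenced interface policy names (LACP-Active, CDP-Enabled, LLDP-Enabled) and the AEP name are hypothetical placeholders for the policies created in Part-1.

```python
import json

payload = {
    "infraAccBndlGrp": {                        # bundle (PC/vPC) interface policy group
        "attributes": {
            "dn": "uni/infra/funcprof/accbundle-PolGrp-to-Cat2960",
            "name": "PolGrp-to-Cat2960",
            "lagT": "link",                     # "link" = port-channel, "node" = vPC
        },
        "children": [
            {"infraRsLacpPol":  {"attributes": {"tnLacpLagPolName": "LACP-Active"}}},
            {"infraRsCdpIfPol": {"attributes": {"tnCdpIfPolName": "CDP-Enabled"}}},
            {"infraRsLldpIfPol": {"attributes": {"tnLldpIfPolName": "LLDP-Enabled"}}},
            {"infraRsAttEntP":  {"attributes": {"tDn": "uni/infra/attentp-AEP-Lab"}}},
        ],
    }
}

# POST json.dumps(payload) to https://<apic>/api/mo/uni.json
# (session cookie obtained first from /api/aaaLogin.json)
print(json.dumps(payload, indent=2))
```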

2. Interface Policies – Profile

Now we are going to call our port-channel interface policy group and bind it to some interfaces.

  1. On the menu bar, navigate to Fabric → Access Policies.
  2. On the left navigation pane, expand Interface Policies.
  3. Right-click Profiles and select Create Leaf Interface Profile.
  4. interface_profile_port-channel_1

  5. On the Create Leaf Interface Profile dialog box, perform the following:
    1. Provide a name for the interface profile.
    2. Click the plus sign next to Interface Selectors.
    3. On the Create Access Port Selectors dialog box, perform the following actions:
      1. Provide a name for the port selector.
      2. Specify your Interface IDs.
      3. Select your Interface Policy Group.
      4. interface_profile_port-channel_2

      5. Click OK.
      6. interface_profile_port-channel_3

      7. Click Submit.
      8. interface_profile_port-channel_4.0

      We are going to add some additional information: the interface description.

      1. On the left navigation pane, expand your current port-channel interface profile (ours is IntProf-PortChannel) and click your interface selector (ours is PortSel-PortChannel).
      2. interface_profile_port-channel_4.1

      3. On the working pane, double-click your block interface list and type your interface description in the Access Port Block dialog box.
      4. interface_profile_port-channel_4.2

      5. Click Submit.
      6. interface_profile_port-channel_4.3

      7. On the left navigation pane, select your interface profile. Your interface selector will appear along with its description in the Port Block column.
      8. interface_profile_port-channel_4.4
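
The interface profile, selector and port block above map to a small object tree in the ACI model. Here is a minimal Python sketch of the equivalent REST payload; the profile and selector names (IntProf-PortChannel, PortSel-PortChannel) and the ports eth1/25-26 come from this article, while the port-block name and the description text are placeholder assumptions.

```python
import json

payload = {
    "infraAccPortP": {                           # leaf interface profile
        "attributes": {"dn": "uni/infra/accportprof-IntProf-PortChannel",
                       "name": "IntProf-PortChannel"},
        "children": [{
            "infraHPortS": {                     # access port selector
                "attributes": {"name": "PortSel-PortChannel", "type": "range"},
                "children": [
                    {"infraPortBlk": {           # interface IDs eth1/25-26
                        "attributes": {"name": "block2",
                                       "descr": "to Cat2960",       # placeholder description
                                       "fromCard": "1", "toCard": "1",
                                       "fromPort": "25", "toPort": "26"}}},
                    {"infraRsAccBaseGrp": {      # bind the PC interface policy group
                        "attributes": {"tDn": "uni/infra/funcprof/accbundle-PolGrp-to-Cat2960"}}},
                ],
            }
        }],
    }
}
print(json.dumps(payload, indent=2))
```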

3. Switch Policies – Profile

Now we are going to call our port-channel interface profile and bind it to leaf switches.

  1. On the menu bar, navigate to Fabric → Access Policies.
  2. On the left navigation pane, expand Switch Policies → Profiles → Leaf Profiles.
  3. Since we are going to configure these links on Leaf101, click the previously created Leaf101 switch profile.
  4. switch_profile_port-channel_1

  5. On the working pane, click the plus sign next to Associated Interface Selector Profiles.
  6. On the Create Interface Profile dialog box, select your interface profile.
  7. switch_profile_port-channel_2

  8. Click Submit.
  9. switch_profile_port-channel_3

4. Configuration Verification – Object CLI

4.1 APIC Verification

apic1# show port-channel map PolGrp-to-Cat2960 
Legends:
N/D : Not Deployed
PC: Port Channel
VPC: Virtual Port Channel
 
 Port-Channel Name  Type  Leaf ID, Name                     Fex Id  Port Channel   Ports                            
 ------------       ----  --------------------------------  ------  -------------  -------------------------------- 
 PolGrp-to-Cat2960  PC    101,LEAF101                               po2            eth1/25-26

APIC allocates the port-channel number automatically; in our case it is Po2.

4.2 LEAF101 Verification

LEAF101# show port-channel summary 
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
        M - Not in use. Min-links not met
        F - Configuration failed
-------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
-------------------------------------------------------------------------------
2     Po2(SU)     Eth      LACP      Eth1/25(P)   Eth1/26(P)

LEAF101# show lacp counters interface port-channel 2
                    LACPDUs         Marker      Marker Response    LACPDUs
Port              Sent   Recv     Sent   Recv     Sent   Recv      Pkts Err
---------------------------------------------------------------------
port-channel2
Ethernet1/25       20     13       0      0        0      0        0      
Ethernet1/26       11     13       0      0        0      0        0      


LEAF101# show interface ethernet 1/25-26 status   
----------------------------------------------------------------------------------------------
 Port           Name                Status     Vlan       Duplex   Speed    Type              
----------------------------------------------------------------------------------------------
 Eth1/25        to Cat2960-Gi0/25-2 out-of-ser  trunk      full     1G       --               
 Eth1/26        to Cat2960-Gi0/25-2 out-of-ser  trunk      full     1G       --

As you can see from the interface status, it is in the out-of-service state because we have not bound the interface to any Endpoint Group (EPG) yet. Happy labbing!!!

    Contributor:
    
    Ananto Yudi Hendrawan
    Network Engineer - CCIE Service Provider #38962, RHCE, VCP6-DCV
    nantoyudi@gmail.com

Cisco APIC – Physical Interface Configuration Workflow – Part-1 (Leaf Access)

This article describes how to configure a physical interface on the Cisco Application Centric Infrastructure (ACI) fabric leaf switches. The concept of interface configuration on the ACI fabric is different from that of a switch on a traditional network. We will explain each step of the configuration, and at the end of the article we will walk through configuring a physical interface step by step.

For ACI fabric device connectivity we need to focus on the Fabric tab on the APIC GUI menu bar. The Fabric tab is used to configure system-level features including, but not limited to, device discovery and inventory management, diagnostic tools, domain configuration, and switch and port behaviour. The Fabric pane is split into three sections: Inventory, Fabric Policies and Access Policies.

Since we are preparing physical interfaces on the leaf switches, we will focus on Access Policies. Let's take a look at the diagram below showing the physical interface configuration workflow.

physical_interface_workflow_diagram

The arrow direction indicates that each policy consumes the previous policy to construct its parameters.

1. Access Policies Overview

1.1 VLAN Pools

VLAN pools contain the VLANs used by the EPGs that the domain will be tied to. A domain is associated with a single VLAN pool.

1.2 Domains

According to Cisco, Endpoint Groups are considered the “who” in ACI; contracts are considered the “what/when/why”; AEPs can be considered the “where”; and domains can be thought of as the “how” of the fabric. Different domain types are created depending on how a device is connected to the leaf switch. There are four different domain types:

  • Physical domains are generally used for bare metal servers, or servers where hypervisor integration is not an option.
  • External Bridged domains are used for Layer 2 connections.
  • External Routed domains are used for Layer 3 connections.
  • VMM domains are used for EPGs in virtualized environments.

Domains act as the glue between the configuration done in the Fabric tab and the policy model and endpoint group configuration found in the Tenants pane.

1.3 Attachable Access Entity Profiles (AAEP/AEP)

Attachable Access Entity Profiles (AEPs) can be considered the “where” of the fabric configuration, and are used to group domains with similar requirements. AEPs are tied to interface policy groups.

1.4 Interface Policies

Interface policies are created to dictate interface behaviour, and are later tied to interface policy groups. For example, you may want one policy with LLDP enabled and another with LLDP disabled. These policies can be reused across interface policy groups.

1.5 Interface Policy Groups

Interface policy groups are templates that dictate interface behaviour and are associated with an AEP. An interface policy group pulls its parameters from the interface policies to describe how the link should behave, for example CDP enabled and LLDP disabled.

1.6 Interface Profiles

Interface profiles help tie the pieces together. Interface profiles contain blocks of ports – interface selectors – and are also tied to the interface policy groups you created earlier. At this point we are not talking about which interface on which switch, only arbitrary ports with several policies defined on them.

1.7 Switch Profiles

Switch profiles allow the selection of one or more leaf switches and associate interface profiles to configure the ports on those specific nodes. There are also switch policies and switch policy groups that precede switch profiles, but we are going to leave them at their defaults.

2. Access Policy Configuration

2.1 Vlan Pools

  1. On the menu bar, choose Fabric → Access Policies.
  2. In the Navigation pane, expand Pools.
  3. On the working pane, click Actions and select Create VLAN Pool.
  4. On the Create VLAN Pool dialog box, perform the following actions:
    • Provide a name for the pool.
    • Select the Allocation Mode.
    • vlan_pools_1

    • Click the plus sign next to Encap Blocks.
    • On the Create Ranges dialog box, perform the following actions:
      • Type the VLAN range you want to use; Cisco recommends allocating different VLAN ranges for each tenant.
      • Select the VLAN Allocation Mode.
      • vlan_pools_2

      • Click OK.
      • vlan_pools_3

      • Click Submit.

Now you have your VLAN pool listed on the Pools – VLAN working pane.

vlan_pools_4
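
The equivalent REST API call is a single POST. Below is a minimal Python sketch of the payload; fvnsVlanInstP and fvnsEncapBlk are standard ACI object model classes, the VLAN range 1–2500 is the one used in this lab, and the pool name VlanPool-Lab is a placeholder.

```python
import json

payload = {
    "fvnsVlanInstP": {                          # VLAN instance pool
        "attributes": {
            "dn": "uni/infra/vlanns-[VlanPool-Lab]-static",
            "name": "VlanPool-Lab",             # placeholder pool name
            "allocMode": "static",              # matches the Allocation Mode chosen in the GUI
        },
        "children": [{
            "fvnsEncapBlk": {                   # the Encap Block (VLAN range)
                "attributes": {"from": "vlan-1", "to": "vlan-2500",
                               "allocMode": "static"}
            }
        }],
    }
}

# POST json.dumps(payload) to https://<apic>/api/mo/uni.json after authenticating
print(json.dumps(payload, indent=2))
```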

2.2 Domains

    1. On the menu bar, choose Fabric → Access Policies.
    2. In the Navigation pane, expand Physical and External Domains and click Physical Domains.
    3. On the working pane, click Actions and select Create Physical Domain.
    4. On the Create Physical Domain dialog box, perform the following actions:
      • Provide a name for the domain.
      • Leave Associated Attachable Entity Profile empty; we will configure it later.
      • Click the VLAN Pool drop-down menu and choose your VLAN pool.
      • Leave the rest as is.

physical_domain_1

      • Click Submit.

Now you have your domain listed on the Physical Domains working pane.

physical_domain_2
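
A physical domain is a small object: its only child of interest here is the relation to the VLAN pool. A minimal payload sketch follows; physDomP and infraRsVlanNs are standard model classes, while both names are placeholders carried over from the VLAN pool example.

```python
import json

payload = {
    "physDomP": {                               # physical domain
        "attributes": {"dn": "uni/phys-PhysDom-Lab", "name": "PhysDom-Lab"},
        "children": [{
            "infraRsVlanNs": {                  # relation to the VLAN pool
                "attributes": {"tDn": "uni/infra/vlanns-[VlanPool-Lab]-static"}
            }
        }],
    }
}
print(json.dumps(payload, indent=2))
```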

2.3 Attachable Access Entity Profiles (AAEP/AEP)

  1. On the menu bar, choose Fabric → Access Policies.
  2. In the Navigation pane, expand Global Policies and click Attachable Access Entity Profiles.
  3. On the working pane, click Actions and select Create Attachable Access Entity Profile.
  4. On the Create Attachable Access Entity Profile dialog box, perform the following actions:
    • Provide a name.
    • Click the plus sign next to Domains (VMM, Physical or External) To Be Associated To Interfaces.
    • Select your domain.
    • Click Update.
    • AEP_1

    • Click Next.
    • AEP_2

    • We will do the interface policy group configuration later.
    • Click Finish.

Now you have your AEP listed on the Attachable Access Entity Profiles working pane.

AEP_3
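
The AEP itself is just a name plus relations to the domains it groups. A minimal payload sketch, using the placeholder names from the previous examples (infraAttEntityP and infraRsDomP are standard model classes):

```python
import json

payload = {
    "infraAttEntityP": {                        # attachable access entity profile
        "attributes": {"dn": "uni/infra/attentp-AEP-Lab", "name": "AEP-Lab"},
        "children": [{
            "infraRsDomP": {                    # relation to the physical domain
                "attributes": {"tDn": "uni/phys-PhysDom-Lab"}
            }
        }],
    }
}
print(json.dumps(payload, indent=2))
```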

2.4 Interface Policies

2.4.1 Policies

  1. On the menu bar, choose Fabric → Access Policies.
  2. In the Navigation pane, expand Interface Policies.
  3. Expand the Policies folder. You will see many parameters Cisco has provided for creating interface policies. Cisco recommends not using the default policy parameters, so we are going to configure a custom policy for each parameter. We will only pick several interface policies: Link Level, CDP, LLDP and Port Channel.
  4. interface_policies_1

    1. Link Level
      1. On the left navigation pane click Link Level.
      2. On the working pane, click Actions and select Create Link Level Policy.
      3. On the Create Link Level Policy dialog box, perform the following actions:
        • Provide a name.
        • Select the Auto Negotiation mode.
        • Choose the link speed from the Speed drop-down menu.

        link_level_1

      4. Click Submit.

      Now you have your link level policy on the Policies – Link Level work pane.

      link_level_2

    2. CDP
      1. On the left navigation pane click CDP Interface.
      2. On the working pane, click Actions and select Create CDP Interface Policy.
      3. On the Create CDP Interface Policy dialog box, perform the following actions:
        • Provide a name.
        • Select the Admin State mode.
        • CDP_1

        • Click Submit.
        • Repeat the above actions to create a policy with CDP disabled.

      Now you have your CDP interface policy on the Policies – CDP Interface work pane.

      CDP_2

    3. LLDP
      1. On the left navigation pane click LLDP Interface.
      2. On the working pane, click Actions and select Create LLDP Interface Policy.
      3. On the Create LLDP Interface Policy dialog box, perform the following actions:
        • Provide a name.
        • Select the Admin State mode.
        • LLDP_1

        • Click Submit.
        • Repeat the above actions to create a policy with LLDP disabled.

      Now you have your LLDP interface policy on the Policies – LLDP Interface work pane.

      LLDP_2

    4. Port Channel
      1. On the left navigation pane click Port Channel.
      2. On the working pane, click Actions and select Create Port Channel Policy.
      3. On the Create Port Channel Policy dialog box, perform the following actions:
        • Provide a name.
        • From the Mode drop-down menu, choose LACP Active.
        • Under the Control check boxes, select your requirements.
        • LACP_1

        • Click Submit.
        • Repeat the above actions for the other LACP types.

      Now you have your port channel policy on the Policies – Port Channel work pane.

      LACP_2

2.4.2 Policy Groups

  1. On the menu bar, choose Fabric → Access Policies.
  2. In the Navigation pane, expand Interface Policies → Policy Groups. Click Leaf Policy Groups.
  3. On the working pane, select the Interfaces tab, click Actions and select Create Leaf Access Port Policy Group.
  4. interface_policy_groups_1

    Note that from this point forward, you may create different types of interfaces (e.g. individual, port-channel, vPC, etc.).

  5. On the Create Leaf Access Port Policy Group dialog box, perform the following actions:
    • Provide a name for the policy.
    • Reference all of your interface policy parameters.
    • interface_policy_groups_2

    • Click Submit.

Now you have your interface policy group on the Leaf Policy Groups work pane.

interface_policy_groups_3
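
A leaf access port policy group is essentially a bundle of relations to the interface policies created above, plus the AEP. Below is a minimal payload sketch; infraAccPortGrp and the relation classes are standard model classes, and all the policy names (PolGrp-Access, LinkLevel-1G, CDP-Enabled, LLDP-Enabled, AEP-Lab) are placeholders for whatever you named yours.

```python
import json

payload = {
    "infraAccPortGrp": {                        # leaf access port policy group
        "attributes": {"dn": "uni/infra/funcprof/accportgrp-PolGrp-Access",
                       "name": "PolGrp-Access"},
        "children": [
            {"infraRsHIfPol":   {"attributes": {"tnFabricHIfPolName": "LinkLevel-1G"}}},
            {"infraRsCdpIfPol": {"attributes": {"tnCdpIfPolName": "CDP-Enabled"}}},
            {"infraRsLldpIfPol": {"attributes": {"tnLldpIfPolName": "LLDP-Enabled"}}},
            {"infraRsAttEntP":  {"attributes": {"tDn": "uni/infra/attentp-AEP-Lab"}}},
        ],
    }
}
print(json.dumps(payload, indent=2))
```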

2.4.3 Profiles

    1. On the menu bar, choose Fabric → Access Policies.
    2. In the Navigation pane, expand Interface Policies → Profiles. Click Leaf Profiles.
    3. On the working pane, click Actions and select Create Leaf Interface Profile.

On the Create Leaf Interface Profile dialog box, perform the following actions:

      • Provide a name.
      • Click the plus sign next to Interface Selectors and add the information.

interface_profile_1

      • On the Create Access Port Selector dialog box, perform the following actions:
        • Provide a name for the interface selector.
        • Type your Interface IDs.
        • Choose your Interface Policy Group.

interface_profile_2

        • Click OK.

interface_profile_3

      • Click Submit.

Now you have your interface profile on the Leaf Selector Profiles work pane.

interface_profile_4

Double-click your interface profile on the working pane to see its parameter construct in more detail.

interface_profile_5

2.5 Switch Policies

2.5.1 Profiles

  1. On the menu bar, choose Fabric → Access Policies.
  2. In the Navigation pane, expand Switch Policies → Profiles. Click Leaf Profiles.
  3. On the working pane, click Actions and select Create Leaf Profile.
  4. On the Create Leaf Profile STEP 1 > Profile dialog box, perform the following actions:
      • Provide a name.
      • Click the plus sign next to Leaf Selectors and add the information:
        • Provide a name.
        • Select the switch from the Blocks drop-down menu.

    switch_profile_1

      • Click Update.
      • Click Next.
  5. On the Create Leaf Profile STEP 2 > Associations dialog box, perform the following action:
      • Select your interface profile.

    switch_profile_2

    • Click Finish.

    Now you have your switch profile on the Profiles – Leaf Profiles work pane.

    switch_profile_3

    Double-click your switch profile on the working pane to see its parameter construct in more detail.

    switch_profile_4
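
The switch profile selects the leaf node(s) and attaches the interface profile. Below is a minimal payload sketch for node 101, which is the leaf used in this lab; infraNodeP, infraLeafS, infraNodeBlk and infraRsAccPortP are standard model classes (note the trailing underscores on the node block's from_/to_ attributes), and the profile and selector names are placeholders.

```python
import json

payload = {
    "infraNodeP": {                             # leaf switch profile
        "attributes": {"dn": "uni/infra/nprof-SwProf-Leaf101",
                       "name": "SwProf-Leaf101"},
        "children": [
            {"infraLeafS": {                    # leaf selector
                "attributes": {"name": "Leaf101-Sel", "type": "range"},
                "children": [{
                    "infraNodeBlk": {           # node block: leaf 101 only
                        "attributes": {"name": "block1",
                                       "from_": "101", "to_": "101"}}
                }],
            }},
            {"infraRsAccPortP": {               # attach the interface profile
                "attributes": {"tDn": "uni/infra/accportprof-IntProf-Lab"}}},
        ],
    }
}
print(json.dumps(payload, indent=2))
```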

Up to this point, your selected interface has been enabled with the parameters you created earlier. In technical terms, you have created an interface on Leaf101 switch port 1/24 with 1 Gbps link speed, CDP enabled, LLDP enabled, and the ability to use VLANs in the range 1 to 2500. This brings the physical interface up, but APIC sets the interface operational state to out-of-service. You need to bind an Endpoint Group (EPG) to the interface path you prepared to make it operational.

interface_status_1
interface_status_2

Happy labbing!!!

Source:
Operating Cisco Application Centric Infrastructure
Contributor:

Ananto Yudi Hendrawan
Network Engineer - CCIE Service Provider #38962, RHCE, VCP6-DCV
nantoyudi@gmail.com

Cisco APIC – Pod, Global and Monitoring Policies

This article describes how to configure additional Cisco APIC services that are covered under the Pod, Global and Monitoring Policies panes. Some of the configurations are optional, but they are very helpful during troubleshooting. Navigate to the left pane from Fabric → Fabric Policies. The items in the green frame below are the subjects of our activity.

pod_policies_1

In order to set up monitoring policies, you first need to navigate to Admin → External Data Collectors → Monitoring Destinations for the additional configuration needed to complete the policy construct.

external_data_collector_1

1. Monitoring Destinations

1.1 Syslog

  1. On the menu bar, choose Admin → External Data Collectors.
  2. In the Navigation pane, choose Monitoring Destinations → Syslog.
  3. On the working pane, click Actions and select Create Syslog Monitoring Destination Group.
  4. On the Create Syslog Monitoring Destination Group, STEP 1 > Profile dialog box, perform the following actions:
    • Provide a name for the profile.
    • Leave the other parameters as is.
    • Click Next.

    syslog_destination_1

  5. On the Create Syslog Monitoring Destination Group, STEP 2 > Remote Destinations dialog box, perform the following actions:
    • Provide the syslog server Host Name/IP.
    • Provide a name for the server.
    • Change the Admin State to enabled.
    • Set the Severity to warning (optional).
    • Set the Port to 514 (optional).
    • Select the Management EPG.
    • Click OK.

    snmp_destination_2

  6. Click Finish.
  7. Click Syslog on the left navigation pane; you now have a syslog server configured as a destination for the external data collector.

syslog_destination_3
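
The syslog destination group can also be pushed via the REST API. Below is a minimal payload sketch; syslogGroup and syslogRemoteDest are standard model classes, the server address 10.43.62.160 is the syslog server seen later in this article's DNS verification, and the group name, destination name and "warnings" severity string are assumptions.

```python
import json

payload = {
    "syslogGroup": {                            # syslog monitoring destination group
        "attributes": {"dn": "uni/fabric/slgroup-Syslog-Lab", "name": "Syslog-Lab"},
        "children": [{
            "syslogRemoteDest": {               # the remote syslog server
                "attributes": {"name": "syslog-server",
                               "host": "10.43.62.160",
                               "port": "514",
                               "adminState": "enabled",
                               "severity": "warnings"}   # assumed severity keyword
            }
        }],
    }
}
print(json.dumps(payload, indent=2))
```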

1.2 SNMP

  1. On the menu bar, choose Admin → External Data Collectors.
  2. In the Navigation pane, choose Monitoring Destinations → SNMP.
  3. On the working pane, click Actions and select Create SNMP Monitoring Destination Group.
  4. On the Create SNMP Monitoring Destination Group, STEP 1 > Profile dialog box, perform the following actions:
    • Provide a name for the profile.
    • Click Next.

    snmp_destination_1

  5. On the Create SNMP Monitoring Destination Group, STEP 2 > Trap Destinations dialog box, perform the following actions:
    • Provide the SNMP server Host Name/IP.
    • Provide a name for the server.
    • Set the Port.
    • Select the Version.
    • Provide the Community Name.
    • Select the Management EPG.
    • snmp_destination_2

  6. Click OK.
  7. Click Finish.
  8. Click SNMP on the left navigation pane; you now have an SNMP server configured as a destination for the external data collector.

snmp_destination_3
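
The SNMP trap destination group has a similar shape. Below is a minimal payload sketch; snmpGroup and snmpTrapDest are standard model classes, and the host 10.43.63.4, community ciscoSNMP, v2c version and port 161 are taken from this article's CLI verification output, while the group name is a placeholder.

```python
import json

payload = {
    "snmpGroup": {                              # SNMP monitoring destination group
        "attributes": {"dn": "uni/fabric/snmpgroup-SNMP-Dest-Lab",
                       "name": "SNMP-Dest-Lab"},
        "children": [{
            "snmpTrapDest": {                   # the trap receiver
                "attributes": {"host": "10.43.63.4",
                               "port": "161",
                               "ver": "v2c",
                               "secName": "ciscoSNMP",   # community for v2c
                               "v3SecLvl": "noauth"}
            }
        }],
    }
}
print(json.dumps(payload, indent=2))
```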

2. Monitoring Policies

2.1 Syslog

  1. On the menu bar, choose Fabric → Fabric Policies.
  2. In the navigation pane, choose Monitoring Policies → default → Callhome/SNMP/Syslog.
  3. monitoring_policies_1

  4. On the working pane, click Syslog and click the + sign on the right.
  5. On the Create Syslog Source dialog box, perform the following actions:
    • Provide a name for the syslog source.
    • Set the Min Severity level.
    • Check all the information you want to include in the syslog.
    • Select the Dest Group for the syslog.
    • Click Submit.

    monitoring_policies_syslog_1

  6. Now your syslog service is sending messages to the external server.

monitoring_policies_syslog_2

2.2 SNMP

  1. On the same working pane, click the SNMP tab.
  2. Click the plus sign at the top right of the working pane.
  3. On the Create SNMP Source dialog box, perform the following actions:
    • Provide the SNMP source Name.
    • Select the Dest Group server.
    • Click Submit.

    monitoring_policies_snmp_1

  4. Now you have the SNMP service enabled on your fabric system. The fabric will not send any SNMP information until you reference this configuration in the pod policies construct (we will work on it later).

monitoring_policies_snmp_2

3. Global Policies

3.1 DNS Profile

Moving on to the Global Policies settings, we are going to configure Domain Name Services (DNS). Setting up a DNS server allows the APIC to resolve various hostnames to IP addresses. This is useful when integrating VMM domains or other Layer 4 to Layer 7 devices where a hostname is referenced.

  1. On the menu bar, choose Fabric → Fabric Policies.
  2. In the navigation pane, choose Global Policies → DNS Profile → default.
  3. global_policies_dns_1

  4. In the work pane, in the Management EPG drop-down list, choose the appropriate management EPG. Note: The default is default (Out-of-Band).
  5. global_policies_dns_2

  6. Click + next to DNS Provider to add a DNS provider.
    • In the Address field, enter the provider address.
    • In the Preferred field, click the check box if you want to have this address as the preferred provider. Note: You can have only one preferred provider.
    • Click Update.

    global_policies_dns_3

  7. Repeat step 6 for each additional DNS provider.
  8. Click + next to DNS Domains to add a DNS domain.
    • In the Name field, enter the domain name, such as “mydomain.com”.
    • In the Default field, click the check box to make this domain the default domain. Note: you can have only one domain name as the default.
    • Click Update.

    global_policies_dns_4

  9. Click Submit.

global_policies_dns_5
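
The DNS profile configured above maps to the default dnsProfile object. Below is a minimal payload sketch; dnsProfile, dnsProv and dnsDomain are standard model classes, and the nameserver 10.43.62.153 and search domain testbed.lab come from this article's resolv.conf verification output.

```python
import json

payload = {
    "dnsProfile": {                             # the default fabric DNS profile
        "attributes": {"dn": "uni/fabric/dnsp-default", "name": "default"},
        "children": [
            {"dnsProv":   {"attributes": {"addr": "10.43.62.153",
                                          "preferred": "yes"}}},   # only one provider may be preferred
            {"dnsDomain": {"attributes": {"name": "testbed.lab",
                                          "isDefault": "yes"}}},   # only one default domain
        ],
    }
}
print(json.dumps(payload, indent=2))
```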

3.2 Verifying DNS configuration using the object model CLI

  • SSH to the APIC
  • Switch to object model CLI.
  • APIC-01# bash
    admin@APIC:~>
  • Check the resolv.conf file: make sure your DNS server appears in the configuration file.
  • admin@APIC:~> cat /etc/resolv.conf 
    # Generated by IFC
    search testbed.lab 
    
    nameserver 10.43.62.153

4. Pod Policies

4.1 Date and Time

Time synchronization is essential for monitoring, troubleshooting and operational tasks. There are two options for management connectivity of all ACI nodes and APICs: in-band management and/or out-of-band management. Depending on which management option was chosen for the fabric, the NTP configuration will vary. In this example we are going to configure NTP over in-band management.

  1. On the menu bar, choose Fabric → Fabric Policies.
  2. In the Navigation pane, choose Pod Policies → Policies.
  3. pod_policies_ntp_1

  4. In the Work pane, choose Actions → Create Date and Time Policy.
  5. pod_policies_ntp_2

  6. In the Create Date and Time Policy dialog box, perform the following actions:
    • Provide a name for the policy.
    • Click Next.

    pod_policies_ntp_3

    • Click the + sign to specify the NTP server information (provider) to be used.
    • In the Create Providers dialog box, enter the relevant information, including the following fields: Name, Description, Minimum Polling Interval, and Maximum Polling Interval.
    • If you are creating multiple providers, click the Preferred check box for the most reliable NTP source.
    • In the Management EPG drop-down list, choose the type of management communication you have.
    • Click OK.
    • pod_policies_ntp_4

    • Click Finish.

4.1.1 Date and Time (NTP) Configuration Verification (GUI)

  • On the navigation pane, choose Pod Policies → Policies → Date and Time. Your NTP policy is now listed on the working pane.
  • pod_policies_ntp_5

  • Expand Date and Time on the navigation pane and click your NTP policy; you will see your NTP configuration parameters.
  • pod_policies_ntp_6
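
The date and time policy maps to a datetimePol object. Below is a minimal payload sketch; datetimePol, datetimeNtpProv and datetimeRsNtpProvToEpg are standard model classes, while the policy name and the NTP server address are hypothetical placeholders (the article does not state the server used), and the in-band EPG DN matches the in-band management choice made in this example.

```python
import json

payload = {
    "datetimePol": {                            # date and time (NTP) policy
        "attributes": {"dn": "uni/fabric/time-NTP-Lab", "name": "NTP-Lab",
                       "adminSt": "enabled"},
        "children": [{
            "datetimeNtpProv": {                # NTP provider (placeholder address)
                "attributes": {"name": "10.0.0.1"},
                "children": [{
                    "datetimeRsNtpProvToEpg": {  # reach the server via in-band mgmt
                        "attributes": {"tDn": "uni/tn-mgmt/mgmtp-default/inb-default"}}
                }],
            }
        }],
    }
}
print(json.dumps(payload, indent=2))
```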

4.2 SNMP

Following up our SNMP configuration on Monitoring Policies, now we are going to set up our SNMP policy on the Pod Policies.

  1. On the menu bar, choose Fabric → Fabric Policies.
  2. In the Navigation pane, choose Pod Policies → Policies → SNMP.
  3. Click Actions on the working pane and select Create SNMP Policy.
  4. pod_policies_snmp_1

  5. In the Create SNMP Policy dialog box, perform the following actions:
    • Provide a name for the policy.
    • Change the Admin State to Enabled.
    • Fill in the Contact and Location.

    pod_policies_snmp_2

  6. Scroll down the working pane, click the + sign next to Community Policies and fill in the community string you use for SNMP. Click Update when you are done.
  7. pod_policies_snmp_3

  8. Click the + sign next to Client Group Policies; this opens the Create SNMP Client Group Profile dialog box. Provide the following information:
    1. Client Group Profile Name.
    2. Associated Management EPG.
    3. Click + next to Client Entries and fill in the address of the server.

    pod_policies_snmp_4

  9. Click OK
  10. Click Submit
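
The steps above can be sketched as a single REST payload. The class names (snmpPol, snmpCommunityP, snmpClientGrpP, snmpClientP, snmpRsEpg) are standard model classes; the policy name SNMP-Policy, client group SNMP_Client, community ciscoSNMP and client address 10.43.63.4 all appear in this article's CLI verification, while the contact/location strings are placeholders.

```python
import json

payload = {
    "snmpPol": {                                # fabric SNMP policy
        "attributes": {"dn": "uni/fabric/snmppol-SNMP-Policy",
                       "name": "SNMP-Policy",
                       "adminSt": "enabled",
                       "contact": "netops",     # placeholder contact
                       "loc": "lab"},           # placeholder location
        "children": [
            {"snmpCommunityP": {"attributes": {"name": "ciscoSNMP"}}},
            {"snmpClientGrpP": {                # client group profile
                "attributes": {"name": "SNMP_Client"},
                "children": [
                    {"snmpClientP": {"attributes": {"name": "snmp-server",
                                                    "addr": "10.43.63.4"}}},
                    {"snmpRsEpg": {             # associated in-band mgmt EPG
                        "attributes": {"tDn": "uni/tn-mgmt/mgmtp-default/inb-default"}}},
                ],
            }},
        ],
    }
}
print(json.dumps(payload, indent=2))
```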

4.2.1 SNMP Configuration Verification (GUI)

  • On the navigation pane, choose Pod Policies → Policies → SNMP. Your SNMP policy is now listed on the working pane.
  • pod_policies_snmp_5

  • Expand SNMP on the navigation pane and click your SNMP policy; you will see your SNMP configuration parameters.
  • pod_policies_snmp_6

    pod_policies_snmp_7

4.2.2 SNMP Configuration Verification (CLI)

APIC-01# show snmp clientgroups
 SNMP Policy      Name            Description       Client Entries        Associated Management EPG 
 -----------      -----------     -----------       --------------        -------------------------
 SNMP-Policy      SNMP_Client                       10.43.63.4            Inband (In-Band)
APIC-01# show snmp community 
 SNMP Policy           Community Name        Description                    
 --------------------  --------------------  ------------------------------ 
 SNMP-Policy           ciscoSNMP
APIC-01# show snmp hosts
 IP-Address            Version     Security Level  Community            
 --------------------  ----------  ----------      -------------------- 
 10.43.63.4            v2c         noauth          ciscoSNMP

Below is a more comprehensive command that gathers all the SNMP information.

APIC-01# show snmp summary 

Active Policy: SNMP-Policyl, Admin State: enabled

Local SNMP engineID: [Hex] 0x800000098019cc024a48db6e5900000000

----------------------------------------
Community            Description         
----------------------------------------
ciscoSNMP                                

------------------------------------------------------------
User                 Authentication       Privacy             
------------------------------------------------------------

------------------------------------------------------------
Client-Group         Mgmt-Epg                  Clients
------------------------------------------------------------
SNMP_Client   Inband (In-Band)          10.43.63.4

------------------------------------------------------------
Host                 Port  Version  Level      SecName             
------------------------------------------------------------
10.43.63.4           161   v2c      noauth     ciscoSNMP

All configurations for SNMP are set. We need to reference them later in the Pod Policy Groups.

4.3 Management Access

The Management Access navigation pane lets you customize the way you access the APIC controller.

  1. On the menu bar, choose Fabric → Fabric Policies.
  2. In the Navigation pane, choose Pod Policies → Policies → Management Access. By default, a default configuration exists for it.
  3. In the Work pane, choose Actions → Create Management Access Policy.
  4. pod_policies_management_access_1

  5. In the Create Management Access Policy dialog box, perform the following actions:
    • Provide a name for the policy.
    • Enable the HTTPS service.
    • Enable the SSH service.
    • pod_policies_management_access_2
    • pod_policies_management_access_2

    • Click Submit when you are done.
  6. In my current lab, I just utilized the default configuration. Click the default entry under Management Access on the navigation pane to see its configuration.
  7. pod_policies_management_access_3
    pod_policies_management_access_4

4.4 BGP Route Reflector

The ACI fabric route reflectors use multiprotocol border gateway protocol (MP-BGP) to distribute external routes within the fabric, so a full-mesh BGP topology is not required. To enable route reflectors in the ACI fabric, the fabric administrator must select at least one spine switch to be a route reflector and provide the autonomous system (AS) number for the fabric. Once route reflectors are configured, administrators can set up connectivity to external networks.

  1. On the menu bar, choose Fabric → Fabric Policies.
  2. In the Navigation pane, choose Pod Policies → Policies → BGP Route Reflector default.
  3. In the Work pane, perform the following actions:
    • Change the Autonomous System Number to match the required number for your network.
    • pod_policies_BGP_RR_1

    • Click the + sign next to Route Reflector Nodes and fill in the spine node number.
    • pod_policies_BGP_RR_2

    • Click Submit.
    • Repeat the above procedure if you have more than one spine switch.
  4. Below are the BGP route reflector nodes we have created so far.
  5. pod_policies_BGP_RR_3

  6. Click Submit on the working pane when you agree with the configuration.
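
The route reflector configuration lives in the default bgpInstPol object. Below is a minimal payload sketch; bgpInstPol, bgpAsP, bgpRRP and bgpRRNodePEp are standard model classes, while the AS number 65001 and spine node id 201 are hypothetical placeholders (the article does not state the values used).

```python
import json

payload = {
    "bgpInstPol": {                             # default fabric BGP instance policy
        "attributes": {"dn": "uni/fabric/bgpInstP-default", "name": "default"},
        "children": [
            {"bgpAsP": {"attributes": {"asn": "65001"}}},   # placeholder fabric ASN
            {"bgpRRP": {                        # route reflector policy
                "attributes": {},
                "children": [{
                    "bgpRRNodePEp": {"attributes": {"id": "201"}}  # placeholder spine id
                }],
            }},
        ],
    }
}
print(json.dumps(payload, indent=2))
```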

4.5 Pod Policies – Policy Group

This section ties together what we have created so far. All the policy constructs related to SNMP, Management Access and BGP Route Reflector will be bound together here.

  1. On the menu bar, choose Fabric → Fabric Policies.
  2. In the Navigation pane, choose Pod Policies → Policy Groups.
  3. Click Actions on the working pane and select Create Pod Policy Group.
  4. pod_policies_policy_groups_1

  5. On the Create Pod Policy Group dialog box, perform the following actions:
    • Provide a name for the policy.
    • Select the Date Time Policy you created earlier.
    • Use the default policy for the ISIS Policy, COOP Group Policy, BGP Route Reflector Policy and Management Access Policy.
    • Select the SNMP Policy you created earlier.
    • pod_policies_policy_groups_2

    • Click Submit.
  6. Your Pod Policies – Policy Group is now listed on the working pane.
  7. pod_policies_policy_groups_3
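
The pod policy group referencing the policies above can be sketched as follows. fabricPodPGrp and the relation classes are from the standard model; the group name PodPGrp-Lab and the NTP policy name are placeholders, while SNMP-Policy matches the policy verified earlier and "default" is the default BGP route reflector policy.

```python
import json

payload = {
    "fabricPodPGrp": {                          # pod policy group
        "attributes": {"dn": "uni/fabric/funcprof/podpgrp-PodPGrp-Lab",
                       "name": "PodPGrp-Lab"},
        "children": [
            {"fabricRsTimePol": {"attributes": {"tnDatetimePolName": "NTP-Lab"}}},
            {"fabricRsSnmpPol": {"attributes": {"tnSnmpPolName": "SNMP-Policy"}}},
            {"fabricRsPodPGrpBGPRRP": {"attributes": {"tnBgpInstPolName": "default"}}},
        ],
    }
}
print(json.dumps(payload, indent=2))
```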

 
4.6 Pod Policies – Profiles

Last but not least, we need to reference the Pod Policies – Policy Group in the Pod Policies – Profiles.

  1. On the menu bar, choose Fabric → Fabric Policies.
  2. In the Navigation pane, choose Pod Policies → Profiles → Pod Profile Default → default.
  3. On the Pod Selector dialog box, select the Pod Policies – Policy Group you created earlier from the Fabric Policy Group drop-down menu.
  4. pod_policies_profiles_1

  5. Click Submit.
  6. Select Pod Profile Default in the navigation pane to see your Pod Policies – Profiles configuration parameters.
  7. pod_policies_profiles_2

5. Service Verifications

It is time to verify all the services we have enabled in the subsections above. You may adjust your Pod Policies – Policy Group parameters; note that some actions may disrupt your fabric data plane traffic (a warning dialog box will appear before you submit such a change).

5.1 Syslog

Below is the output from my CentOS syslog server for LEAF-01.

[root@syslog ~]# tail -f /var/log/remotedevices/10.43.70.18/2017-10-15.log 
2017-10-15T12:29:28.630625+07:00 10.43.70.18  Oct 15 12:29:29 LEAF-01 %LOG_LOCAL7-3-SYSTEM_MSG [F1323][soaking][equipment-ft-missing][major][sys/ch/ftslot-1/fault-F1323] Fan tray missing in slot 1 in leaf LEAF-01
2017-10-15T12:29:28.637886+07:00 10.43.70.18  Oct 15 12:29:29 LEAF-01 %LOG_LOCAL7-3-SYSTEM_MSG [F1323][soaking][equipment-ft-missing][major][sys/ch/ftslot-2/fault-F1323] Fan tray missing in slot 2 in leaf LEAF-01
2017-10-15T12:29:59.573827+07:00 10.43.70.18  Oct 15 12:29:59 LEAF-01 %LOG_LOCAL7-3-SYSTEM_MSG [F1323][soaking_clearing][equipment-ft-missing][major][sys/ch/ftslot-1/fault-F1323] Fan tray missing in slot 1 in leaf LEAF-01
2017-10-15T12:29:59.599442+07:00 10.43.70.18  Oct 15 12:30:00 LEAF-01 %LOG_LOCAL7-3-SYSTEM_MSG [F1323][soaking_clearing][equipment-ft-missing][major][sys/ch/ftslot-2/fault-F1323] Fan tray missing in slot 2 in leaf LEAF-01
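Each ACI fault syslog message carries the fault code, lifecycle state, rule, severity, and distinguished name (DN) in the bracketed fields. If you collect these logs centrally, a small parser makes them easy to filter. Here is a sketch; the field order is taken from the sample lines above:

```python
import re

# Bracketed fields in an ACI fault syslog message:
# [code][lifecycle][rule][severity][dn] description
FAULT_RE = re.compile(
    r"\[(?P<code>F\d+)\]\[(?P<lifecycle>[^\]]+)\]\[(?P<rule>[^\]]+)\]"
    r"\[(?P<severity>[^\]]+)\]\[(?P<dn>[^\]]+)\]\s+(?P<descr>.*)"
)

def parse_fault(line):
    m = FAULT_RE.search(line)
    return m.groupdict() if m else None

sample = ("2017-10-15T12:29:28.630625+07:00 10.43.70.18  Oct 15 12:29:29 LEAF-01 "
          "%LOG_LOCAL7-3-SYSTEM_MSG [F1323][soaking][equipment-ft-missing][major]"
          "[sys/ch/ftslot-1/fault-F1323] Fan tray missing in slot 1 in leaf LEAF-01")
fault = parse_fault(sample)
print(fault)
```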

5.2 SNMP

Below is the output from our SNMP server for our LEAF-01 device.

monitoring_policies_snmp_output_1

5.3 DNS

Ping a host by DNS name that is reachable from the APIC (in-band/out-of-band) management network.

admin@APIC:~> ping -i 0.2 -c 5 syslog.testbed.lab
PING syslog.testbed.lab (10.43.62.160) 56(84) bytes of data.
64 bytes from 10.43.62.160: icmp_seq=1 ttl=64 time=0.394 ms
64 bytes from 10.43.62.160: icmp_seq=2 ttl=64 time=0.319 ms
64 bytes from 10.43.62.160: icmp_seq=3 ttl=64 time=0.279 ms
64 bytes from 10.43.62.160: icmp_seq=4 ttl=64 time=0.319 ms
64 bytes from 10.43.62.160: icmp_seq=5 ttl=64 time=0.341 ms

--- syslog.testbed.lab ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 800ms
rtt min/avg/max/mdev = 0.279/0.330/0.394/0.040 ms

5.4 NTP

Log in to your APIC and execute the command below.

APIC-01# show ntpq                                                                                      
 nodeid       remote          refid         st      t   when      poll      reach     delay     offset    jitter   
 --------  -  --------------  ------------  ------  --  --------  --------  --------  --------  --------  -------- 
 1         *  192.168.100.5   10.0.0.2      2       u   60        64        377       8.324     -0.050    0.060
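In this output, the * tally marks the peer the APIC is actually synchronized to, and a reach value of 377 (octal) means the last eight polls all succeeded. Below is a sketch of reading one data row; it assumes the tally column is present, as it is on the selected-peer row above:

```python
def parse_ntpq_row(row):
    """Split one data row of 'show ntpq' into named fields, assuming the
    column order shown above: nodeid, tally, remote, refid, st, t, when,
    poll, reach, delay, offset, jitter."""
    keys = ["nodeid", "tally", "remote", "refid", "st", "t",
            "when", "poll", "reach", "delay", "offset", "jitter"]
    return dict(zip(keys, row.split()))

row = (" 1         *  192.168.100.5   10.0.0.2      2       u   60        "
       "64        377       8.324     -0.050    0.060")
peer = parse_ntpq_row(row)
# Synchronized and healthy: selected peer (*) with full reach register (377).
synced = peer["tally"] == "*" and peer["reach"] == "377"
print(peer["remote"], "synchronized:", synced)
```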

Log in to one of your fabric switches to verify its NTP status.

pod_policies_ntp_7


5.5 BGP Route Reflector

Since we have only one spine and two leaf switches, each leaf will have only one BGP peering session, to the spine.

SPINE-01# show bgp vpnv4 unicast summary vrf all 
BGP summary information for VRF overlay-1, address family VPNv4 Unicast
BGP router identifier 10.0.152.94, local AS number 100
BGP table version is 869, VPNv4 Unicast config peers 3, capable peers 2
102 network entries and 104 paths using 16896 bytes of memory
BGP attribute entries [68/9792], BGP AS path entries [0/0]
BGP community entries [0/0], BGP clusterlist entries [0/0]

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.0.152.93     4   100   17682   17317      869    0    0     1w5d 92        
10.0.152.95     4   100    5525    5529      869    0    0    3d19h 4
LEAF-01# show bgp vpnv4 unicast summary vrf all 
BGP summary information for VRF overlay-1, address family VPNv4 Unicast
BGP router identifier 10.0.152.95, local AS number 100
BGP table version is 133, VPNv4 Unicast config peers 1, capable peers 1
36 network entries and 40 paths using 4788 bytes of memory
BGP attribute entries [10/1440], BGP AS path entries [0/0]
BGP community entries [0/0], BGP clusterlist entries [1/4]

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.0.152.94     4   100    5566    5562      133    0    0    3d20h 17
LEAF-02# show bgp vpnv4 unicast summary vrf all
BGP summary information for VRF overlay-1, address family VPNv4 Unicast
BGP router identifier 10.0.152.93, local AS number 100
BGP table version is 778, VPNv4 Unicast config peers 1, capable peers 1
99 network entries and 103 paths using 17028 bytes of memory
BGP attribute entries [4/576], BGP AS path entries [0/0]
BGP community entries [0/0], BGP clusterlist entries [1/4]

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.0.152.94     4   100   17355   17720      778    0    0     1w5d 4
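When you have many leaves, eyeballing these tables gets tedious. The sketch below pulls the neighbor table out of `show bgp vpnv4 unicast summary` output and maps each peer to its received prefix count; a non-numeric State/PfxRcd value would indicate a session that is not Established.

```python
def parse_bgp_neighbors(output):
    """Return {neighbor: prefixes_received} from the neighbor table.
    A non-numeric State/PfxRcd field (e.g. 'Idle') maps to None."""
    peers = {}
    in_table = False
    for line in output.splitlines():
        if line.startswith("Neighbor"):
            in_table = True          # header row seen; data rows follow
            continue
        if in_table and line.strip():
            cols = line.split()
            peers[cols[0]] = int(cols[-1]) if cols[-1].isdigit() else None
    return peers

sample = """Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.0.152.93     4   100   17682   17317      869    0    0     1w5d 92
10.0.152.95     4   100    5525    5529      869    0    0    3d19h 4"""
peers = parse_bgp_neighbors(sample)
print(peers)
```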

We will cover APIC routing in more detail in another article. Happy labbing!

Contributor:

Ananto Yudi Hendrawan
Network Engineer - CCIE Service Provider #38962, RHCE, VCP6-DCV
nantoyudi@gmail.com

Cisco APIC – Out-Of-Band (OOB) Management Connectivity Configuration

This article describes how to configure out-of-band (OOB) management connectivity on a Cisco ACI fabric using the GUI. When an ACI fabric is deployed with out-of-band management, each node of the fabric, including spines, leaves, and all members of the APIC cluster, is managed from outside the ACI fabric. Cisco provides three options for configuring node management addresses: Specific, Range, and All. This article will demonstrate the Specific and Range modes for node management addresses.

Below are the steps for creating the OOB management connectivity policy:

  1. Node Management Addresses
  2. Out-Of-Band Contract
  3. Node Management EPGs
  4. External Management Network Instance Profiles

1. Node Management Addresses (IP Allocation Mode)

1.1 Range Mode

  1. On the menu bar, choose TENANTS → mgmt.
  2. In the Navigation pane, expand Tenants mgmt.
  3. Right-click Node Management Addresses and click Create Node Management Addresses.
  4. On the Create Node Management Addresses dialog box, perform the following actions:
    • Provide a Policy Name.
    • For the Select Nodes By: radio button, choose Specific.
    • Under Nodes, select your devices' check boxes.
    • oob_part_1

    • Under Config, select the Out-of-Band Addresses check box. The Out-of-Band IP Addresses dialog box will expand. Perform the following actions in it:
      • Select default for the Out-Of-Band Management EPG.
      • In Out-Of-Band Gateway, enter the IP address of your OOB gateway.
      • In Out-Of-Band IP Addresses, enter an IP range for your devices. I entered a range of six addresses, since there are six leaves.
      • oob_part_2

      • Click Submit. A confirmation dialog box will pop up.
      • oob_part_3

      • Click Yes.
    • On the left navigation pane, expand Node Management Addresses.
    • Click on your leaf OOB policy.
    • You can see that the APIC has assigned an IP address to each leaf switch. Notice that the APIC assigns the addresses in no particular order.

      oob_part_4

    • Repeat the procedure above for your spines. Once it is done, you can see the result on the working pane when you click Node Management Addresses on the left navigation pane.
    • oob_part_5
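When you plan the range, it helps to expand it first and check that it covers exactly the number of switches. Here is a small sketch using Python's stdlib ipaddress module; the pool below is hypothetical, not the one in the screenshots:

```python
import ipaddress

def oob_range(first_ip, last_ip):
    """Expand an APIC-style OOB address range (first..last, inclusive)
    into the individual host addresses the fabric can draw from."""
    start = ipaddress.IPv4Address(first_ip)
    end = ipaddress.IPv4Address(last_ip)
    return [str(ipaddress.IPv4Address(a)) for a in range(int(start), int(end) + 1)]

# Hypothetical pool for six leaves; remember the APIC hands these out
# in no particular order.
pool = oob_range("10.43.70.11", "10.43.70.16")
print(pool, len(pool))
```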

1.2 Specific Mode

  1. On the menu bar, choose TENANTS → mgmt.
  2. In the Navigation pane, expand Tenants mgmt.
  3. Right-click Node Management Addresses and click Create Node Management Addresses.
  4. On the Create Node Management Addresses dialog box, perform the following actions:
    • Provide a Policy Name.
    • For the Select Nodes By: radio button, choose Specific.
    • Under Nodes, select your device's check box.
    • oob_specific_1

    • Under Config, select the Out-of-Band Addresses check box. The Out-of-Band IP Addresses dialog box will expand. Perform the following actions in it:
      • Select default for the Out-Of-Band Management EPG.
      • In Out-Of-Band Gateway, enter the IP address of your OOB gateway.
      • In Out-Of-Band IP Addresses, enter an IP range. Since we are assigning a specific IP address to the device, we enter the same IP address as both ends of the range.
      • oob_specific_2

      • Click Submit. A confirmation dialog box will pop up.
      • oob_part_3

      • Click Yes.
    • On the left navigation pane, expand Node Management Addresses.
    • Click on your device's OOB policy.
    • You can see that the APIC has assigned an IP address to your switch.

      oob_specific_3

    • Repeat the procedure above for your other devices. Once it is done, you can see the result on the working pane when you click Node Management Addresses on the left navigation pane.
    • oob_specific_4

As the activity above shows, even when we select the Specific option, the GUI presents the result using the range method. As additional information: whenever you finish a configuration under Node Management Addresses, related information is added to the IP Address Pools and Managed Node Connectivity Groups directories on the left navigation pane.

  • On the left navigation pane, click IP Address Pools.
  • oob_specific_5

  • On the left navigation pane, expand Managed Node Connectivity Groups and select one of your OOB device policy.
  • oob_specific_6

2. Out-Of-Band Contract

The second step of configuring out-of-band management connectivity is creating a contract to be used by the Out-Of-Band EPG.

  1. On the menu bar, choose TENANTS → mgmt.
  2. In the Navigation pane, expand Tenants mgmt.
  3. Expand Security Policies and click Out-Of-Band Contract.
  4. On the working pane, click Actions and select Create Out-Of-Band Contract.
  5. oob_specific_7

  6. On the Create Out-Of-Band Contract dialog box, perform the following actions:
    • Provide a contract Name.
    • Click the plus sign next to Subjects.
    • On the Create Contract Subject dialog box, perform the following actions:
      • Provide a Name for the subject.
      • Click the plus sign in the Filters area and choose the default filter from tenant common.
      • oob_specific_8

      • Click Update.
      • Click OK.
    • Click Submit

    Now your contract is listed on the Security Policies – Out-Of-Band Contract working pane.

    oob_specific_9
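The same contract can also be pushed as a REST payload under tenant mgmt. The sketch below only constructs the JSON body; the class names (vzOOBBrCP, vzSubj, vzRsSubjFiltAtt) and the attribute tnVzFilterName are assumptions drawn from the ACI object model, so verify them with the API Inspector first. OOB-CONTRACT and OOB-SUBJECT are placeholder names.

```python
import json

def oob_contract_payload(name, subject, filter_name="default"):
    # Assumed object tree: an out-of-band contract (vzOOBBrCP) with one
    # subject (vzSubj) referencing a filter from tenant common.
    return {
        "vzOOBBrCP": {
            "attributes": {"name": name},
            "children": [{
                "vzSubj": {
                    "attributes": {"name": subject},
                    "children": [{
                        "vzRsSubjFiltAtt": {
                            "attributes": {"tnVzFilterName": filter_name}
                        }
                    }],
                }
            }],
        }
    }

contract = oob_contract_payload("OOB-CONTRACT", "OOB-SUBJECT")
print(json.dumps(contract, indent=2))
```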

3. Node Management EPGs

Having finished the Out-Of-Band Contract configuration, it is time to apply it in the Node Management EPGs directory.

  1. On the menu bar, choose TENANTS → mgmt.
  2. In the Navigation pane, expand Tenants mgmt.
  3. Expand Node Management EPGs and click Out-Of-Band EPG – default.
  4. On the working pane, click the plus sign next to Provided Out-of-Band Contracts.
  5. oob_specific_10

  6. Click Update.
  7. On the QoS Class drop down menu, select Level1.
  8. oob_specific_11

  9. Click Submit
  10. oob_specific_12

4. External Management Network Instance Profiles

The final OOB connectivity configuration is creating an External Management Network Instance Profile. This profile consists of the consumed OOB contract and the IP addresses that are allowed to access the ACI fabric.

  1. On the menu bar, choose TENANTS → mgmt.
  2. In the Navigation pane, expand Tenants mgmt.
  3. Click External Management Network Instance Profiles.
  4. On the working pane, click Actions and select Create External Management Network Instance Profiles.
  5. On the Create External Management Network Instance Profiles dialog box, perform the following actions:
    • Provide a Name.
    • Click the plus sign next to Consumed Out-of-Band Contract and select your contract from the Out-of-Band Contract drop-down menu.
    • Click Update.
    • Click the plus sign next to Subnets and type the IP addresses that are allowed to access the ACI fabric devices.
    • Click Update.
    • oob_specific_13

    • Click Submit.

Your OOB configuration is now done, and your devices are accessible from the external network. Happy labbing!

Contributor:

Ananto Yudi Hendrawan
Network Engineer - CCIE Service Provider #38962, RHCE, VCP6-DCV
nantoyudi@gmail.com

Cisco APIC – Fabric Provisioning

This article describes how to provision a Cisco ACI fabric. It starts with the script install and goes through automatic fabric discovery as the final step of fabric provisioning. We are using the topology below during the fabric setup:

topology_1

APIC Controller Setup

After we connect all the ACI fabric devices, it is time to set up the system and make it operational. We begin with the APIC controller; Cisco also calls this activity a script install. Power on your APIC server and access the CIMC interface for console access. After a few minutes, you will see an initial setup dialog on the console like the one below.

apic_setup_01

apic_setup_02

Do notice that I left several parameters as is. One important parameter above is the infra VLAN. Make sure you use a VLAN ID that will not be needed elsewhere in future operations.

Fabric Discovery

After the APIC is ready, it is time to register all fabric switches (spine and leaf) with the APIC controller. Make sure all the switches in the fabric are physically connected. On the menu bar, navigate to Fabric → Inventory. In the Navigation pane, click Fabric Membership. In the Work pane, in the Fabric Membership table, a single leaf switch is displayed with an ID of 0; it is the leaf switch that is connected to apic1. The APIC uses LLDP to discover its neighbor devices.

fabric_discovery_01

Configure the ID by double-clicking the leaf switch row, and performing the following actions:

  • In the Node ID field, add the appropriate ID (in Cisco's example, leaf 1 is ID 101 and leaf 2 is ID 102). The ID must be a number greater than 100 because the first 100 IDs are reserved for APIC appliance nodes. I am using 201 for leaf 1, 202 for leaf 2, and 101 for spine 1.
  • In the Node Name field, add the name of the switch and click UPDATE.

fabric_discovery_02
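If you script the registration, the node ID rule above is worth encoding as a guard. A minimal sketch follows; the upper bound of 4000 is an assumption, so check the limits for your APIC release:

```python
def valid_node_id(node_id):
    """Fabric node IDs must be above 100, since IDs 1-100 are reserved
    for APIC appliance nodes. The ceiling of 4000 is an assumption."""
    return 100 < node_id <= 4000

# Node-ID plan used in this lab.
plan = {"SPINE-01": 101, "LEAF-01": 201, "LEAF-02": 202}
assert all(valid_node_id(i) for i in plan.values())
print(plan)
```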

After the information has been updated, your switch is assigned an IP address. When that is done, another switch will appear. In my case, since I have only one connection from the APIC to the leaf switch, it discovers the spine switch and then the second leaf, one after another.

fabric_discovery_03

Repeat the procedure above for each switch that shows up on the working pane.

fabric_discovery_04

To verify that all the fabric switches and the APIC controller are connected to each other, navigate to the Inventory pane and click Topology.

fabric_topology

Now your APIC fabric system is ready. Happy labbing!

Contributor:

Wahyu Herdyanto
F5 BIG-IP Specialist
wahyu.herdyanto@gmail.com

Wendra Pesliko
Network Datacenter Specialist
pesliko@gmail.com

Ananto Yudi Hendrawan
Network Engineer - CCIE Service Provider #38962, RHCE, VCP6-DCV
nantoyudi@gmail.com

 

Cisco APIC – Fabric OS Upgrade

ACI Fabric OS Upgrade Overview

This article describes how to upgrade the device OS on a Cisco ACI fabric. It demonstrates the upgrade process step by step on the spine switch, the leaf switches, and the APIC controller. We have only one spine switch, two leaf switches, and one APIC controller server in our environment.

According to Cisco, at a high level, the steps to upgrade the ACI fabric are as follows:

  • The procedure/steps for upgrade and downgrade are the same unless stated otherwise in the release notes of a specific release.
  • Download the ACI Controller image (APIC image) into the repository.
  • Download the ACI switch image into the repository.
  • Upgrade the ACI controller cluster (APICs).
  • Verify the fabric is operational.
  • Divide the switches into multiple groups; for example, divide them into two groups, red and blue.
  • Upgrade the red group of switches.
  • Verify the fabric is operational.
  • Upgrade the blue group of switches.
  • Verify the fabric is operational.
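The red/blue split in the steps above can be sketched as a simple alternating partition of the node IDs. This is a naive example: in a real fabric, make sure each group still contains at least one spine and that vPC peer leaves end up in different groups.

```python
def split_upgrade_groups(node_ids):
    """Alternate nodes between a red and a blue group so that roughly
    half of the switches stay in service while the other half upgrades."""
    ordered = sorted(node_ids)
    return ordered[0::2], ordered[1::2]

# Hypothetical fabric: spines 101/102, leaves 201/202.
red, blue = split_upgrade_groups([101, 102, 201, 202])
print("red:", red, "blue:", blue)
```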

Pre Upgrade Verifications

Before we jump into the step-by-step ACI fabric upgrade, we need to verify the current operating system on our fabric. In the APIC web UI, navigate to the user menu on the top right side. Click the username and a pop-up window will show. Select About and it will display your current APIC operating system version.

APIC_Software_Info_Before.png

If you need more information regarding the fabric switches' (spine and leaf) operating systems, from the APIC UI you can navigate to Admin → Firmware → Fabric Node Firmware. The easiest way to gather all the information is from the CLI on the APIC controller, as below.

APIC-01# show  version 
 Role        Id          Name                      Version              
 ----------  ----------  ------------------------  -------------------- 
 controller  1           APIC-01                   2.1(2g)              
 spine       101         SPINE-101                 n9000-12.1(2g)       
 leaf        201         LEAF-201                  n9000-12.1(2g)       
 leaf        202         LEAF-202                  n9000-12.1(2g)

Step-by-Step Upgrade

This section describes the step-by-step upgrade process on the Cisco ACI fabric (spine switch, leaf switches, and APIC controller).

Upload images

After you obtain the OS images from Cisco, it is time to upload them to the APIC server. I used HTTP download to transfer the images to the controller. I set up my PC as an HTTP server with a listing option, so it serves a directory file listing rather than a web page.

http_server.png

Now, from the APIC web UI, navigate to Admin → Firmware → Download Tasks → ACTION → Create Firmware Download Task. Fill in the dialog box with the required information and click Submit when you are done.

download_task_apic.png

Repeat the procedure above for the switch OS. Navigate to the Operational tab to view the progress.

download_task_progress.png

Once it is done, verify using the Firmware Repository tab on the left panel, or use the command below to verify it from the CLI.

APIC-01# show firmware repository 
 Name                                      Type        Version        Size(MB)   
 ----------------------------------------  ----------  -------------  ---------- 
 aci-apic-dk9.2.2.2j.bin                   controller  2.2(2j)        2966.320   
 aci-catalog-dk9.2.2.2j.bin                catalog     2.2(2j)        0.040      
 aci-n9000-dk9.12.2.2j.bin                 switch      12.2(2j)       1132.285

Controller Upgrade

The next step is upgrading the APIC controller. Right-click Controller Firmware and click Controller Upgrade. Provide all the information requested in the dialog box.

Do notice in the picture above that we have one major fault, which we are advised to resolve before continuing the controller upgrade. Navigate to System → Controllers → Faults and double-click the fault entry; it will display the fault list for that domain. Double-click the fault you want to inspect. Below is ours.

According to the information, it seems we have one port down on the controller. In that case we are good to go ahead with the upgrade, since I have another port working. From the CLI you can see the output below.

APIC-01# show faults controller
Code            : F0103
Severity        : major
Last Transition : 2017-08-06T23:47:33.420+07:00
Lifecycle       : raised
DN              : topology/pod-1/node-1/sys/cphys-[eth1/2]/fault-F0103
Description     : Physical Interface eth1/2 on Node 1 is now down

If you consider this fault unimportant, you can check it in the fault list and click the gear icon to apply Hide Acked Faults.

Click Submit on the Controller Upgrade dialog box and your APIC server begins the upgrade process. During the upgrade, you might lose connectivity to the APIC server until it finishes.

Fabric Switches Upgrade

For the switches, Cisco recommends configuring the upgrade process using group upgrades. For example, if we have two spines and two leaves, we can group spine one and leaf one as group one, and spine two and leaf two as group two. Defining the upgrade groups determines which set of switches upgrades its OS first. Group upgrades minimize downtime for endpoint devices, because at least one spine and one leaf switch remain in service to carry the data plane traffic.

Navigate to Admin → Firmware → Fabric Node Firmware → Firmware Groups, right click and select Create Firmware Group.

Two important pieces of information to provide are the Target Firmware Version and the Group Node IDs. The group node IDs define which devices will use this firmware as their target OS. Click Submit when you are done.

Below is the CLI output from the APIC.

APIC-01# show running-config firmware switch-group 
# Command: show running-config firmware switch-group
# Time: Sat Sep  2 22:24:08 2017
  firmware
    switch-group OS-12.2.2j
      switch 101
      switch 201
      switch 202
      firmware-version aci-n9000-dk9.12.2.2j.bin
      exit
    exit

Now navigate to Maintenance Groups, right-click, and select Create POD Maintenance Group. Fill in the required information. In this group I put spine 101 and leaf 201 as Group-1.

I left Run Mode and Scheduler as is. You can schedule an upgrade using the Scheduler drop-down menu.

Click Submit when you are done. Repeat the procedure above for the other leaf switch. You will see the summary of your upgrade groups as below.

Having finished the pre-upgrade activity, we can now start to upgrade the fabric switches. From the Maintenance Groups tree on the left panel, right-click Group-1 and select Upgrade Now; this initiates the upgrade process.

Once the upgrade of Group-1 is done, repeat the same procedure for Group-2. You can see the upgrade status by clicking each group name on the left panel under Maintenance Groups.

From the CLI you can use the command below to track the upgrade status of each fabric switch.

APIC-01# show firmware upgrade status 
  Node-Id     Current-Firmware      Target-Firmware       Status                     Upgrade-Progress(%)  
 ----------  --------------------  --------------------  -------------------------  -------------------- 
 1           apic-2.2(2j)          apic-2.2(2j)          success                    100                  
 101         n9000-12.2(2j)        n9000-12.2(2j)        success                    100                  
 201         n9000-12.2(2j)        n9000-12.2(2j)        success                    100                  
 202         n9000-12.2(2j)        n9000-12.2(2j)        success                    100
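For a larger fabric you can scan this table programmatically instead of reading it row by row. Here is a sketch that reports whether every node finished:

```python
def upgrade_complete(output):
    """Scan the data rows of 'show firmware upgrade status' and report
    whether every node reached 'success' at 100% progress."""
    ok = True
    for line in output.splitlines():
        cols = line.split()
        if cols and cols[0].isdigit():          # data rows start with a node ID
            status, progress = cols[-2], cols[-1]
            ok = ok and status == "success" and progress == "100"
    return ok

sample = """ Node-Id     Current-Firmware      Target-Firmware       Status                     Upgrade-Progress(%)
 1           apic-2.2(2j)          apic-2.2(2j)          success                    100
 101         n9000-12.2(2j)        n9000-12.2(2j)        success                    100"""
print(upgrade_complete(sample))
```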

That’s all, happy labbing!

Source:

Operating Cisco Application Centric Infrastructure

Contributor:
Wahyu Herdyanto
F5 BIG-IP Specialist
wahyu.herdyanto@gmail.com

Wendra Pesliko
Network Datacenter Specialist
pesliko@gmail.com

Ananto Yudi Hendrawan
Network Engineer - CCIE Service Provider #38962, RHCE, VCP6-DCV
nantoyudi@gmail.com

Cisco Nexus 7000 VDC Administration

This article describes how to manage Virtual Device Contexts (VDCs) on the Cisco Nexus 7010 series. Cisco’s VDC feature enables virtualization of a single physical device into one or more logical devices. In this lab we are going to cover the following scenarios:

  • Configuring Admin VDC
  • Configuring VDC Resource and Templates
  • Managing VDCs

Configuring Admin VDC

Admin VDC without migrate option

You can create an Admin VDC in one of the following ways:

  • After a fresh switch bootup.
  • Enter the system admin-vdc command after bootup. All the nonglobal configuration in the default VDC is lost after you enter this command.
  • Use the system admin-vdc migrate new-vdc-name command to migrate the nonglobal configuration of the default VDC to a new VDC.

In this section we are going to focus on creating the admin VDC after switch bootup. In this lab environment I am using a Nexus 7010 chassis with SUP2E and NX-OS 6.2(16).

Now let’s verify our default VDC configuration.

N7K-ADMIN# show run vdc
...
version 6.2(16)
no system admin-vdc
vdc N7K-ADMIN id 1
  limit-resource module-type m1 m1xl m2xl f2e 
  cpu-share 5
  allocate interface ethernet1/1-12
  limit-resource vlan minimum 16 maximum 4094
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource monitor-session-erspan-dst minimum 0 maximum 23
  limit-resource vrf minimum 2 maximum 4096
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 96 maximum 96
  limit-resource u6route-mem minimum 24 maximum 24
  limit-resource m4route-mem minimum 58 maximum 58
  limit-resource m6route-mem minimum 8 maximum 8
  limit-resource monitor-session-inband-src minimum 0 maximum 1
  limit-resource anycast_bundleid minimum 0 maximum 16
  limit-resource monitor-session-mx-exception-src minimum 0 maximum 1
  limit-resource monitor-session-extended minimum 0 maximum 12
N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                       state               mac                 type        lc      
------  --------                       -----               ----------          ---------   ------  
1       N7K-ADMIN                      active              40:55:39:0e:43:41   Ethernet    m1 f1 m1xl m2xl
N7K-ADMIN# show interface description 

-------------------------------------------------------------------------------
Interface                Description                                            
-------------------------------------------------------------------------------
mgmt0                    ***Management_Interface***

-------------------------------------------------------------------------------
Port          Type   Speed   Description
-------------------------------------------------------------------------------
Eth1/1        eth    1000    --
Eth1/2        eth    1000    --
Eth1/3        eth    1000    --
Eth1/4        eth    1000    --
Eth1/5        eth    1000    --
Eth1/6        eth    1000    --
Eth1/7        eth    1000    --
Eth1/8        eth    1000    --
Eth1/9        eth    1000    --
Eth1/10       eth    1000    --
Eth1/11       eth    1000    --
Eth1/12       eth    1000    --
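The limit-resource lines above define the per-VDC resource bounds. If you audit several chassis, it is handy to parse them into a structure; below is a sketch:

```python
def parse_limits(config):
    """Collect 'limit-resource <name> minimum X maximum Y' lines of a
    VDC block into {name: (min, max)}. Lines without minimum/maximum
    (e.g. 'limit-resource module-type ...') are skipped."""
    limits = {}
    for line in config.splitlines():
        parts = line.split()
        if parts[:1] == ["limit-resource"] and "minimum" in parts:
            limits[parts[1]] = (int(parts[3]), int(parts[5]))
    return limits

sample = """vdc N7K-ADMIN id 1
  limit-resource module-type m1 m1xl m2xl f2e
  limit-resource vlan minimum 16 maximum 4094
  limit-resource vrf minimum 2 maximum 4096"""
limits = parse_limits(sample)
print(limits)
```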

Now configure your system to use the admin VDC.

N7K-ADMIN(config)# system admin-vdc 
All non-global configuration from the default vdc will be removed, Are you sure you want to continue? (yes/no) [no] yes
N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                 type        lc      
------  --------                          -----               ----------          ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41   Admin       None
N7K-ADMIN# show interface description 

-------------------------------------------------------------------------------
Interface                Description                                            
-------------------------------------------------------------------------------
mgmt0                    ***Management_Interface***

Do notice that we no longer have any physical module interfaces; only the management interface is allowed on the admin VDC. Now let’s try to allocate some interfaces to it and see how the admin VDC responds.

N7K-ADMIN(config)# vdc N7K-ADMIN  
N7K-ADMIN(config-vdc)# allocate interface ethernet1/1-12
Moving ports will cause all config associated to them in source vdc to be removed. Are you sure you want to move the ports (y/n)?  [yes] 
ERROR: 1 or more interfaces are from a module of type not supported by this vdc

The error message makes it clear that the admin VDC only allows the management interface. Given its purpose as an admin VDC, that one interface is all it needs.

Admin VDC With Migrate Option

Another way to create the admin VDC is by adding the migrate option. This method allows you to keep your default VDC configuration. It is recommended for existing deployments where the default VDC carries production traffic whose downtime must be minimized.

Let’s verify our default VDC configuration before we make any changes.

N7K-ADMIN# sh run vdc
...
version 6.2(16)
no system admin-vdc
vdc N7K-ADMIN id 1
  limit-resource module-type m1 m1xl m2xl f2e 
  cpu-share 5
  limit-resource vlan minimum 16 maximum 4094
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource monitor-session-erspan-dst minimum 0 maximum 23
  limit-resource vrf minimum 2 maximum 4096
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 96 maximum 96
  limit-resource u6route-mem minimum 24 maximum 24
  limit-resource m4route-mem minimum 58 maximum 58
  limit-resource m6route-mem minimum 8 maximum 8
  limit-resource monitor-session-inband-src minimum 0 maximum 1
  limit-resource anycast_bundleid minimum 0 maximum 16
  limit-resource monitor-session-mx-exception-src minimum 0 maximum 1
  limit-resource monitor-session-extended minimum 0 maximum 12
N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                 type        lc      
------  --------                          -----               ----------          ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41   Ethernet    m1 m1xl m2xl f2e
N7K-ADMIN# show interface description 

-------------------------------------------------------------------------------
Interface                Description                                            
-------------------------------------------------------------------------------
mgmt0                    --

-------------------------------------------------------------------------------
Port          Type   Speed   Description
-------------------------------------------------------------------------------
Eth1/1        eth    1000    --
Eth1/2        eth    1000    --
Eth1/3        eth    1000    --
Eth1/4        eth    1000    --
Eth1/5        eth    1000    --
Eth1/6        eth    1000    --
Eth1/7        eth    1000    --
Eth1/8        eth    1000    --
Eth1/9        eth    1000    --
Eth1/10       eth    1000    --
Eth1/11       eth    1000    --
Eth1/12       eth    1000    --

Now we will configure the admin VDC on our system and add a new VDC (N7K-DEV) to which the default VDC configuration will be migrated.

N7K-ADMIN(config)# system admin-vdc migrate N7K-DEV
All non-global configuration from the default vdc will be removed, Are you sure you want to continue? (yes/no) [no] yes
Note: Interface mgmt0 will not have its ip address migrated to the new vdc
Note: During migration some configuration may not be migrated. Example: VTP will need to be reconfigured in the new vdc if it was enabled. Please refer to configuration guide for details
Please wait, this may take a while
Note: Ctrl-C has been temporarily disabled for the duration of this command
N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                 type        lc      
------  --------                          -----               ----------          ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41   Admin       None    
2       N7K-DEV                           active              40:55:39:0e:43:42   Ethernet    m1 m1xl m2xl f2e
N7K-ADMIN# show run vdc
...
version 6.2(16)
system admin-vdc
vdc N7K-ADMIN id 1
  cpu-share 5
  limit-resource vlan minimum 16 maximum 4094
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource monitor-session-erspan-dst minimum 0 maximum 23
  limit-resource vrf minimum 2 maximum 4096
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 96 maximum 96
  limit-resource u6route-mem minimum 24 maximum 24
  limit-resource m4route-mem minimum 58 maximum 58
  limit-resource m6route-mem minimum 8 maximum 8
  limit-resource monitor-session-inband-src minimum 0 maximum 1
  limit-resource anycast_bundleid minimum 0 maximum 16
  limit-resource monitor-session-mx-exception-src minimum 0 maximum 1
  limit-resource monitor-session-extended minimum 0 maximum 12
vdc N7K-DEV id 2
  limit-resource module-type m1 m1xl m2xl f2e 
  cpu-share 5
  allocate interface Ethernet1/1-12
  boot-order 1
  limit-resource vlan minimum 16 maximum 4094
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource monitor-session-erspan-dst minimum 0 maximum 23
  limit-resource vrf minimum 2 maximum 4096
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 96 maximum 96
  limit-resource u6route-mem minimum 24 maximum 24
  limit-resource m4route-mem minimum 58 maximum 58
  limit-resource m6route-mem minimum 8 maximum 8
  limit-resource monitor-session-inband-src minimum 0 maximum 1
  limit-resource anycast_bundleid minimum 0 maximum 16
  limit-resource monitor-session-mx-exception-src minimum 0 maximum 1
  limit-resource monitor-session-extended minimum 0 maximum 12

vdc resource template admin-vdc-migrate-template
  limit-resource vlan minimum 16 maximum 4094
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource monitor-session-erspan-dst minimum 0 maximum 23
  limit-resource vrf minimum 2 maximum 4096
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 96 maximum 96
  limit-resource u6route-mem minimum 24 maximum 24
  limit-resource m4route-mem minimum 58 maximum 58
  limit-resource m6route-mem minimum 8 maximum 8
  limit-resource monitor-session-inband-src minimum 0 maximum 1
  limit-resource anycast_bundleid minimum 0 maximum 16
  limit-resource monitor-session-mx-exception-src minimum 0 maximum 1
  limit-resource monitor-session-extended minimum 0 maximum 12

After the admin VDC is created, a VDC resource template is also created based on the admin VDC; we will cover VDC resource templates in the next section. Now try logging in to the new VDC using switchto vdc vdc_name and verify that it has the same interface membership the default VDC had.

N7K-ADMIN# switchto vdc N7K-DEV 
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2016, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
N7K-ADMIN-N7K-DEV#
N7K-ADMIN-N7K-DEV# sh interface description 

-------------------------------------------------------------------------------
Port          Type   Speed   Description
-------------------------------------------------------------------------------
Eth1/1        eth    1000    --
Eth1/2        eth    1000    --
Eth1/3        eth    1000    --
Eth1/4        eth    1000    --
Eth1/5        eth    1000    --
Eth1/6        eth    1000    --
Eth1/7        eth    1000    --
Eth1/8        eth    1000    --
Eth1/9        eth    1000    --
Eth1/10       eth    1000    --
Eth1/11       eth    1000    --
Eth1/12       eth    1000    --

Configuring VDC Resource and Templates

VDC resource templates set the minimum and maximum limits for shared physical device resources when you create the VDC. The Cisco NX-OS software reserves the minimum limit for the resource to the VDC. Any resources allocated to the VDC beyond the minimum are based on the maximum limit and availability on the device.

Below is an example of the VDC resource template we are using.

N7K-ADMIN(config)# vdc resource template new_vdc_template
N7K-ADMIN(config-vdc-template)# limit-resource vlan minimum 20 maximum 4094
N7K-ADMIN(config-vdc-template)#   limit-resource monitor-session minimum 0 maximum 2
N7K-ADMIN(config-vdc-template)#   limit-resource monitor-session-erspan-dst minimum 0 maximum 23
N7K-ADMIN(config-vdc-template)#   limit-resource vrf minimum 2 maximum 4096
N7K-ADMIN(config-vdc-template)#   limit-resource port-channel minimum 0 maximum 768
N7K-ADMIN(config-vdc-template)#   limit-resource u4route-mem minimum 96 maximum 96
N7K-ADMIN(config-vdc-template)#   limit-resource u6route-mem minimum 24 maximum 24
N7K-ADMIN(config-vdc-template)#   limit-resource m4route-mem minimum 58 maximum 58
N7K-ADMIN(config-vdc-template)#   limit-resource m6route-mem minimum 8 maximum 8
N7K-ADMIN(config-vdc-template)#   limit-resource monitor-session-inband-src minimum 0 maximum 1
N7K-ADMIN(config-vdc-template)#   limit-resource anycast_bundleid minimum 0 maximum 16
N7K-ADMIN(config-vdc-template)#   limit-resource monitor-session-mx-exception-src minimum 0 maximum 1
N7K-ADMIN(config-vdc-template)#   limit-resource monitor-session-extended minimum 0 maximum 12
N7K-ADMIN# show run vdc
...
vdc resource template new_vdc_template
  limit-resource vlan minimum 20 maximum 4094
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource monitor-session-erspan-dst minimum 0 maximum 23
  limit-resource vrf minimum 2 maximum 4096
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 96 maximum 96
  limit-resource u6route-mem minimum 24 maximum 24
  limit-resource m4route-mem minimum 58 maximum 58
  limit-resource m6route-mem minimum 8 maximum 8
  limit-resource monitor-session-inband-src minimum 0 maximum 1
  limit-resource anycast_bundleid minimum 0 maximum 16
  limit-resource monitor-session-mx-exception-src minimum 0 maximum 1
  limit-resource monitor-session-extended minimum 0 maximum 12

Now create a new VDC based on our VDC resource template.

N7K-ADMIN(config)# vdc N7K-PROD template new_vdc_template 
Note:  Creating VDC, one moment please ...
N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                 type        lc      
------  --------                          -----               ----------          ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41   Admin       None    
2       N7K-DEV                           active              40:55:39:0e:43:42   Ethernet    m1 m1xl m2xl f2e 
3       N7K-PROD                          active              40:55:39:0e:43:43   Ethernet    m1 m1xl m2xl f2e
N7K-ADMIN# show vdc N7K-PROD detail 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc id: 3
vdc name: N7K-PROD
vdc state: active
vdc mac address: 40:55:39:0e:43:43
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
CPU Share: 5
CPU Share Percentage: 33%
vdc create time: Fri Apr  7 09:25:56 2017
vdc reload count: 0
vdc uptime: 0 day(s), 0 hour(s), 1 minute(s), 32 second(s)
vdc restart count: 0
vdc type: Ethernet
vdc supported linecards: m1 m1xl m2xl f2e
N7K-ADMIN# show vdc N7K-PROD resource

     Resource                   Min       Max       Used      Unused    Avail    
     --------                   ---       ---       ----      ------    -----    
     vlan                       20        4094      5         15        4089     
     monitor-session            0         2         0         0         2        
     monitor-session-erspan-dst 0         23        0         0         23       
     vrf                        2         4096      2         0         4090     
     port-channel               0         768       0         0         768      
     u4route-mem                96        96        1         95        95       
     u6route-mem                24        24        1         23        23       
     m4route-mem                58        58        1         57        57       
     m6route-mem                8         8         1         7         7        
     monitor-session-inband-src 0         1         0         0         1        
     anycast_bundleid           0         16        0         0         16       
     monitor-session-mx-excepti 0         1         0         0         1        
     monitor-session-extended   0         12        0         0         12

As you can see in the resource output above, our VDC was assigned the limits defined in our resource template. Let's allocate some interfaces from the line card and verify them.

N7K-ADMIN(config-vdc)# allocate interface ethernet1/13-18
Moving ports will cause all config associated to them in source vdc to be removed. Are you sure you want to move the ports (y/n)?  [yes]
N7K-ADMIN# show vdc N7K-PROD membership 
Flags : b - breakout port
---------------------------------

vdc_id: 3 vdc_name: N7K-PROD interfaces:
        Ethernet1/13          Ethernet1/14          Ethernet1/15          
        Ethernet1/16          Ethernet1/17          Ethernet1/18

Log in to VDC N7K-PROD and verify that it has its interfaces. Since this is a new VDC, it behaves like a new switch: it will ask you to set up a password and perform other administrative tasks, just like when you enter a switch after a fresh bootup.

N7K-ADMIN# switchto vdc N7K-PROD 


         ---- System Admin Account Setup ----


Do you want to enforce secure password standard (yes/no) [y]: no

  Enter the password for "admin": 
  Confirm the password for "admin": 

         ---- Basic System Configuration Dialog VDC: 3 ----

This setup utility will guide you through the basic configuration of
the system. Setup configures only enough connectivity for management
of the system.

Please register Cisco Nexus7000 Family devices promptly with your
supplier. Failure to register may affect response times for initial
service calls. Nexus7000 devices must be registered to receive 
entitled support services.

Press Enter at anytime to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): no
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2016, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
N7K-ADMIN-N7K-PROD#
N7K-ADMIN-N7K-PROD# show interface description 

-------------------------------------------------------------------------------
Port          Type   Speed   Description
-------------------------------------------------------------------------------
Eth1/13       eth    1000    --
Eth1/14       eth    1000    --
Eth1/15       eth    1000    --
Eth1/16       eth    1000    --
Eth1/17       eth    1000    --
Eth1/18       eth    1000    --

Managing VDCs

Reloading VDCs

After we create VDCs, we can modify their parameters according to our network environment's needs. In this subsection we focus on performing administrative tasks on VDCs. Before we execute any commands, let's verify all the VDCs we have on our Nexus 7000.

N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                 type        lc      
------  --------                          -----               ----------          ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41   Admin       None    
2       N7K-DEV                           active              40:55:39:0e:43:42   Ethernet    m1 m1xl m2xl f2e 
3       N7K-PROD                          active              40:55:39:0e:43:43   Ethernet    f2

Now we will pick VDC N7K-PROD as the target for this test.

N7K-ADMIN# reload vdc N7K-PROD 
Are you sure you want to reload this vdc (y/n)?  [no] yes
N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                 type        lc      
------  --------                          -----               ----------          ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41   Admin       None    
2       N7K-DEV                           active              40:55:39:0e:43:42   Ethernet    m1 m1xl m2xl f2e 
3       N7K-PROD                          resume in progress  40:55:39:0e:43:43   Ethernet    f2

To measure how long the reload process takes, I ran a continuous ping to the management interface residing on the N7K-PROD VDC. After about 30 seconds, the N7K-PROD VDC was back online.

N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                 type        lc      
------  --------                          -----               ----------          ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41   Admin       None    
2       N7K-DEV                           active              40:55:39:0e:43:42   Ethernet    m1 m1xl m2xl f2e 
3       N7K-PROD                          active              40:55:39:0e:43:43   Ethernet    f2

Suspending VDCs

Having reloaded a VDC, we are now going to suspend one.

N7K-ADMIN(config)# vdc N7K-PROD suspend 
This command will suspend the VDC. (y/n)? [no] yes
Note: Suspending vdc N7K-PROD
N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                 type        lc      
------  --------                          -----               ----------          ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41   Admin       None    
2       N7K-DEV                           active              40:55:39:0e:43:42   Ethernet    m1 m1xl m2xl f2e 
3       N7K-PROD                          suspended           40:55:39:0e:43:43   Ethernet    f2

Using the same procedure as above to measure how long it takes to come back online, let's now resume the VDC.

N7K-ADMIN(config)# no vdc N7K-PROD suspend 
Note: Resuming vdc N7K-PROD

After about 30 seconds we can see that VDC N7K-PROD is active again.

N7K-ADMIN# show vdc

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                 type        lc      
------  --------                          -----               ----------          ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41   Admin       None    
2       N7K-DEV                           active              40:55:39:0e:43:42   Ethernet    m1 m1xl m2xl f2e 
3       N7K-PROD                          active              40:55:39:0e:43:43   Ethernet    f2

Managing VDC Interfaces

When you create a VDC, you can allocate I/O interfaces to it. One important point regarding port allocation: it is recommended to allocate all ports in the same port group to the same VDC. Beginning with Cisco NX-OS Release 5.2(1) for Nexus 7000 Series devices, all members of a port group are automatically allocated to the VDC when you allocate one of its interfaces.
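For example, on an F2 line card where four ports share a port group, allocating a single interface pulls in the rest of the group automatically. The sketch below illustrates this; the exact informational message varies by NX-OS release.

N7K-ADMIN(config)# vdc N7K-PROD
N7K-ADMIN(config-vdc)# allocate interface ethernet2/1
Entire port-group is not present in the command. Missing ports will be included automatically
Moving ports will cause all config associated to them in source vdc to be removed. Are you sure you want to move the ports (y/n)?  [yes] yes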

In this lab I am using two line cards: an N7K-M148GT-11 in slot 1 and an N7K-F248XP-25 in slot 2. Let's see how we can verify the port groups on each line card.

Module N7K-M148GT-11

M1_Port_Group

Module N7K-F248XP-25
F2_Port_Group

We omitted the rest of the output because this much is enough to understand how port groups are allocated on each line card. The interface number is listed in the FP port column and the port ASIC number in the MAC_0 column, which means that in slot 1 in the example above, interfaces 1 through 12 share the same port ASIC (0), and in slot 2, interfaces 1 through 4 share the same port ASIC (0).

When interfaces in different VDCs share the same port ASIC, reloading the VDC (with the reload vdc command) or provisioning interfaces to the VDC (with the allocate interface command) might cause short traffic disruptions (of 1 to 2 seconds) for these interfaces. If such behavior is undesirable, make sure to allocate all interfaces on the same port ASIC to the same VDC.
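Another way to see which ports share a port group is show interface capabilities, which prints a Port Group Members line on the F-series cards. The output below is illustrative; the member list depends on the line card model.

N7K-ADMIN# show interface ethernet 2/1 capabilities | include "Port Group"
  Port Group Members:    1,2,3,4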

VDC Boot Order

Imagine you have one VDC connected to web servers, another connected to app servers, and a third connected to databases. If your switch reloads due to a power outage or some other force majeure incident, you may want a specific VDC to come up first so the application tiers can communicate properly. Another parameter we can adjust on a VDC is the boot order; using the boot-order value, you can control which VDC comes up first.

Use the command below to adjust the boot-order value. By default, every VDC has a boot order of 1.

N7K-ADMIN(config)# vdc N7K-DEV
N7K-ADMIN(config-vdc)# boot-order 2
N7K-ADMIN# show vdc detail 
....
vdc id: 2
vdc name: N7K-DEV
vdc state: active
vdc mac address: 40:55:39:0e:43:42
vdc ha policy: BRINGDOWN
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 2
CPU Share: 5
CPU Share Percentage: 33%
vdc create time: Tue Apr 11 11:53:32 2017
vdc reload count: 0
vdc uptime: 0 day(s), 0 hour(s), 52 minute(s), 25 second(s)
vdc restart count: 0
vdc type: Ethernet
vdc supported linecards: m1 m1xl m2xl f2e 
...

You cannot modify the boot order of the admin/default VDC; an error occurs when you try.

N7K-ADMIN(config)# vdc N7K-ADMIN 
N7K-ADMIN(config-vdc)# boot-order 1
ERROR: Default vdc boot order cannot be changed

Now let’s do some test by reloading the box and see the progress from each VDC.

N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                type        lc      
------  --------                          -----               ----------         ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41  Admin       None    
2       N7K-DEV                           create pending      40:55:39:0e:43:42  Ethernet    m1 m1xl m2xl f2e 
3       N7K-PROD                          create pending      40:55:39:0e:43:43  Ethernet    f2
N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                 type        lc      
------  --------                          -----               ----------          ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41   Admin       None    
2       N7K-DEV                           create in progress  40:55:39:0e:43:42   Ethernet    m1 m1xl m2xl f2e 
3       N7K-PROD                          create pending      40:55:39:0e:43:43   Ethernet    f2
N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                 type        lc      
------  --------                          -----               ----------          ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41   Admin       None    
2       N7K-DEV                           active              40:55:39:0e:43:42   Ethernet    m1 m1xl m2xl f2e 
3       N7K-PROD                          create in progress  40:55:39:0e:43:43   Ethernet    f2
N7K-ADMIN# show vdc 

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3 

vdc_id  vdc_name                          state               mac                 type        lc      
------  --------                          -----               ----------          ---------   ------  
1       N7K-ADMIN                         active              40:55:39:0e:43:41   Admin       None    
2       N7K-DEV                           active              40:55:39:0e:43:42   Ethernet    m1 m1xl m2xl f2e 
3       N7K-PROD                          active              40:55:39:0e:43:43   Ethernet    f2

As you can see from the output above, the VDCs become active one after another. Without a boot order configured, all VDCs start to become active at the same time.
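You can quickly confirm the configured boot order of every VDC from the admin VDC by filtering show vdc detail (the values below are illustrative and depend on your configuration):

N7K-ADMIN# show vdc detail | include Order
vdc boot Order: 1
vdc boot Order: 2
vdc boot Order: 1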

VDC Hostname

You can change the format of the CLI prompt for nondefault VDCs. By default, the prompt is a combination of the default VDC name and the nondefault VDC name. You can change the prompt to contain only the nondefault VDC name using no vdc combined-hostname, configured from the admin/default VDC; the change affects the prompts of the nondefault VDCs. Let's verify a nondefault VDC's hostname before we change it.

N7K-ADMIN# switchto vdc N7K-PROD 
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2016, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
N7K-ADMIN-N7K-PROD#

Apply the config and see the change.

N7K-ADMIN(config)# no vdc combined-hostname
N7K-ADMIN# switchto vdc N7K-PROD 
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2016, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
N7K-PROD#

Now the nondefault VDC prompt no longer carries the admin/default VDC name in front of it.

VDC Management Interface

The Nexus Sup2E supervisor module has one physical management port. As mentioned earlier, this management interface belongs to the admin/default VDC.

N7K-ADMIN# show interface description 

-------------------------------------------------------------------------------
Interface                Description                                            
-------------------------------------------------------------------------------
mgmt0                    --

-------------------------------------------------------------------------------
Port          Type   Speed   Description
-------------------------------------------------------------------------------
Eth1/1        eth    1000    --
Eth1/2        eth    1000    --
Eth1/3        eth    1000    --
Eth1/4        eth    1000    --
Eth1/5        eth    1000    --
Eth1/6        eth    1000    --
Eth1/7        eth    1000    --
Eth1/8        eth    1000    --
Eth1/9        eth    1000    --
Eth1/10       eth    1000    --
Eth1/11       eth    1000    --
Eth1/12       eth    1000    --

When we create VDCs beyond the default VDC, the management interface is shared among them. You can configure each VDC's mgmt0 with an IP address in the same segment as the addresses on the other VDCs. For example, I configured the N7K-ADMIN VDC with 10.10.10.1/24, N7K-DEV with 10.10.10.2/24, N7K-PROD with 10.10.10.3/24, and the management PC with 10.10.10.100/24. Notice that on a nondefault VDC, the management interface is not shown when you execute show interface description until it is configured.

N7K-PROD# show interface description 


-------------------------------------------------------------------------------
Port          Type   Speed   Description
-------------------------------------------------------------------------------
Eth2/1        eth    10G     --
Eth2/2        eth    10G     --
Eth2/3        eth    10G     --
Eth2/4        eth    10G     --

You can configure the management interface just as you would on the admin/default VDC.

N7K-PROD(config)# interface mgmt 0
N7K-PROD(config-if)# vrf member management 
N7K-PROD(config-if)# ip address 10.10.10.3/24
N7K-PROD(config-if)# description ***Management_Link***
N7K-PROD(config-if)# no shut
N7K-PROD# show interface description 

-------------------------------------------------------------------------------
Interface                Description                                            
-------------------------------------------------------------------------------
mgmt0             ***Management_Link***

-------------------------------------------------------------------------------
Port          Type   Speed   Description
-------------------------------------------------------------------------------
Eth2/1        eth    10G     --
Eth2/2        eth    10G     --
Eth2/3        eth    10G     --
Eth2/4        eth    10G     --

Contributor:

Ananto Yudi Hendrawan
Network Engineer - CCIE Service Provider #38962, RHCE, VCP6-DCV
nantoyudi@gmail.com