DNA Center Fundamentals

Cisco ISE Integration

DNAC and ISE communicate via REST APIs; pxGrid must be enabled on Cisco ISE.

Devices identified by DNA Center during discovery are automatically pushed to ISE.

When provisioned devices are updated in DNA Center, those updates (management IP, SNMP settings, credentials) are pushed to ISE automatically.

When a device is deleted in DNAC, it is also deleted in ISE.

A prerequisite for these automatically synced changes is that the devices must be associated with a site where ISE is the AAA server.

You must define ISE CLI and GUI accounts, and the username and password must be the same for both access methods.

Configuring ISE integration

  • From DNAC, specify the IP address of the ISE PAN

  • Create a shared secret for RADIUS and TACACS authentication

  • Select a Cisco ISE server

  • Enter the username and password

  • Enter the FQDN of the ISE PAN node

  • Create a subscriber name for the DNAC pxGrid client

DNAC Device Inventory

For Discovery to add devices to inventory, you must provide credential sets and either IP ranges or a seed IP for CDP and LLDP discovery.

After initial discovery, the inventory is maintained by polling every 25 minutes. You can raise the default up to a maximum of 24 hours.

For LLDP and CDP you can set the maximum number of hops away from the seed device that you want to scan.
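A minimal sketch of what a CDP-based Discovery job body might look like via the DNAC Intent API (`POST /dna/intent/api/v1/discovery`). Field names follow the published Intent API, but verify them against your DNAC version; the credential IDs here are placeholders.

```python
import json

def build_cdp_discovery(name, seed_ip, max_hops, credential_ids):
    """Build a CDP-based discovery job: a seed IP plus a hop-count limit."""
    return {
        "name": name,
        "discoveryType": "CDP",        # could also be "Range" or "LLDP"
        "ipAddressList": seed_ip,      # seed device for CDP discovery
        "cdpLevel": max_hops,          # max hops away from the seed to scan
        "globalCredentialIdList": credential_ids,  # placeholder IDs
    }

payload = build_cdp_discovery("lab-discovery", "10.10.10.1", 3,
                              ["cli-cred-id", "snmp-cred-id"])
print(json.dumps(payload, indent=2))
```

Sending this would require an auth token from `/dna/system/api/v1/auth/token` first; the sketch only shows the request shape.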

DNAC attempts to find the best logical interface (a loopback) to use as the management interface when discovering a device.

  • If there is one loopback on the device, that will be used.

  • If there are multiple loopbacks, the highest IP address wins.

  • If there are no loopback interfaces, DNAC uses the Ethernet interface with the highest IP address.

  • If there is no Ethernet, it will use the highest serial interface IP.
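The selection order above can be sketched as a simple preference function (my own illustration of the documented behavior, not DNAC code):

```python
import ipaddress

def pick_mgmt_interface(interfaces):
    """Pick the management interface the way DNAC is documented to:
    loopbacks first, then Ethernet, then serial; highest IP wins."""
    for kind in ("loopback", "ethernet", "serial"):
        candidates = [i for i in interfaces if i["type"] == kind and i.get("ip")]
        if candidates:
            return max(candidates, key=lambda i: ipaddress.ip_address(i["ip"]))
    return None

ifaces = [
    {"name": "Loopback0", "type": "loopback", "ip": "10.0.0.1"},
    {"name": "Loopback1", "type": "loopback", "ip": "10.0.0.9"},
    {"name": "Gi1/0/1", "type": "ethernet", "ip": "192.168.1.1"},
]
print(pick_mgmt_interface(ifaces)["name"])  # -> Loopback1 (highest loopback IP)
```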

I see no mention of SVI interfaces; need to research/lab that.

Once in DNAC, each device will show one of the following reachability statuses:

  • Reachable

  • Connecting

  • Authentication Failed

  • Unreachable

You can add devices to the inventory using the GUI menu or via bulk import with CSV (the REST API is also an option).
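The same inventory (including the reachability statuses above) is exposed via the Intent API at `GET /dna/intent/api/v1/network-device`. The sample response below is illustrative; `hostname` and `reachabilityStatus` follow the published API field names.

```python
# Illustrative response shape from GET /dna/intent/api/v1/network-device
sample_response = {
    "response": [
        {"hostname": "edge-1", "reachabilityStatus": "Reachable"},
        {"hostname": "edge-2", "reachabilityStatus": "Unreachable"},
    ]
}

# Filter out anything DNAC can't currently reach
unreachable = [d["hostname"] for d in sample_response["response"]
               if d["reachabilityStatus"] != "Reachable"]
print(unreachable)  # -> ['edge-2']
```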

??DNA scanning default?? 5 min? max?

DNAC Configuration Management

DNAC will back up device configurations and keep the backups synchronized.

Templating… This will be its own post… Return to review configuration management.

On Cisco IOS, are the $h and $t variables passed to the device to resolve to hostname and time? Is this done by DNAC or a hidden IOS feature?

LAN Automation

  • Plan

  • Design

  • Discover

  • Provision

Layer 2 links are converted to Layer 3. Only supported in certain topologies (collapsed core, three-tier, etc.).

DNAC SWIM (Software Image Management)

  • Software Image Repository

  • Golden Image

  • Image Status

  • Software Maintenance Update

Special consideration should be given to non-Catalyst 9000 devices that require ROMMON upgrades, which can only be upgraded to the newest available ROMMON code from cisco.com. Two reboots are required for those upgrades (one for the ROMMON, the other for the base IOS image).

The upgrade process performs both pre- and post-checks for things like CDP neighbors, interface status, the configuration register, and the STP summary.

DNA Initial workflow

Design:

  • Build areas and sites

  • Define baseline credentials, ISE integration, templates, and images

  • Discover devices or manually add them to inventory

  • Assign devices to sites

  • Provision devices at sites based on templated policy

SD-ACCESS Fundamentals

Foundations for underlay network

The LAN Automation feature uses IS-IS; a manually built underlay can use EIGRP or OSPF.

In a manual deployment, Layer 2 or Layer 3 topologies are supported, but Layer 3 is recommended.

The MTU of the interfaces needs to accommodate the VXLAN and fabric headers. It's generally advised to enable jumbo frames of at least 9100 bytes, although 9216 or 9196 bytes are typically the limits of most switching hardware.
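The arithmetic behind the jumbo-frame advice is the VXLAN encapsulation overhead: roughly 50 bytes of outer headers on top of the original frame.

```python
# VXLAN encapsulation overhead on top of the inner Ethernet frame:
# outer Ethernet (14) + outer IP (20) + UDP (8) + VXLAN (8) = 50 bytes
VXLAN_OVERHEAD = 14 + 20 + 8 + 8

inner_frame = 9000                       # a jumbo inner frame
required_mtu = inner_frame + VXLAN_OVERHEAD
print(required_mtu)                      # 9050, comfortably under 9100
```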

Latency should be 100 ms or less across the network.

Overlay Network

LISP is the routing protocol (control plane).

VXLAN is used for encapsulation (data plane).

TrustSec is used for micro-segmentation (policy plane).

The three planes of SD-ACCESS

Control Plane using LISP

  • Host Tracking Database (HTDB)
    • Can support multiple ID type lookups (MAC, IPv4 /32, IPv6 /128)
  • Map Server (MS)
    • Learns EID-to-RLOC mappings from ETRs (Egress Tunnel Routers) and aggregates prefixes into an ALT (Alternative Logical Topology) to redistribute with BGP. There can be one or multiple MS nodes; if more than one, they all share the same database. The MS does not forward data, so it can run on smaller systems like routers. It can be a dedicated node or combined with the MR or a border node.
  • Map Resolver (MR)
    • Answers map requests from ITRs (Ingress Tunnel Routers). When an EID wants to talk to another EID, traffic goes to the fabric edge node; the edge node asks the control plane (MR) node for the EID-to-RLOC mapping. If the MR does not have one, it will send the request to the last known ETR that registered the EID.
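A toy sketch of the register/resolve split described above (my own illustration, not real LISP code): ETRs register EID-to-RLOC mappings into the HTDB, and the resolver answers ITR queries from it.

```python
class MapServer:
    """Minimal stand-in for the LISP map server's host tracking database."""

    def __init__(self):
        self.htdb = {}                 # EID -> RLOC (MAC, IPv4 /32, IPv6 /128)

    def register(self, eid, rloc):
        """Called on behalf of an ETR registering a locally attached host."""
        self.htdb[eid] = rloc

    def resolve(self, eid):
        """Map-resolver side: return the RLOC, or None if unknown
        (in real LISP the request would be forwarded toward the last ETR)."""
        return self.htdb.get(eid)

ms = MapServer()
ms.register("10.1.1.10/32", "192.168.255.1")   # edge node A's RLOC
print(ms.resolve("10.1.1.10/32"))              # -> 192.168.255.1
```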

Fabric edge and border nodes provide ETR and ITR

  • ETR

  • ITR

  • PROXY ETR

  • Proxy ITR

The LISP Instance ID is used to identify different VRFs. These map to the VNIs of the data plane.

Subnets can be stretched between different RLOCs; anycast gateways are used.

RLOCs only need to be reachable from the global routing table. EIDs are cached at the local node through conversational learning.

Border and edge nodes register with all control plane nodes, so the same type of device should be used for consistent performance.

Fabric Data Plane

Using VXLAN.

An overlay network is called a VXLAN segment, identified with a 24-bit VXLAN Network Identifier (VNI), which allows for 16 million VXLAN segments. The Layer 2 frame is encapsulated in a Layer 3 UDP packet.

SDA repurposes 16 reserved bits in the VXLAN header to facilitate 64,000 SGTs. The format is known as VXLAN-GPO.
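A sketch of that 8-byte header layout, based on the VXLAN group policy (VXLAN-GBP/GPO) draft format: the G and I flag bits are set, the 16 formerly reserved bits carry the SGT (group policy ID), and the VNI still occupies its 24 bits. Flag-bit values here follow my reading of the draft and should be double-checked.

```python
import struct

def pack_vxlan_gpo(vni, sgt):
    """Pack an 8-byte VXLAN-GPO header: flags, SGT in the 16 repurposed
    bits, 24-bit VNI, trailing reserved byte."""
    assert vni < 2**24 and sgt < 2**16
    flags = 0x88                       # G (group policy) + I (VNI valid) bits
    return struct.pack("!BBHI", flags, 0, sgt, vni << 8)

def unpack_vxlan_gpo(header):
    """Recover (vni, sgt) from a packed VXLAN-GPO header."""
    _flags, _rsvd, sgt, tail = struct.unpack("!BBHI", header)
    return tail >> 8, sgt

hdr = pack_vxlan_gpo(vni=8190, sgt=17)
print(len(hdr), unpack_vxlan_gpo(hdr))  # -> 8 (8190, 17)
```

The 16-bit SGT field is what bounds the "64,000 SGTs" figure (2^16 = 65,536 values).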

Mitigates flooding of BUM traffic.

Can run dual-stack.

Allows Layer 2 extension over Layer 3, similar to VPLS.

Policy Plane

Using SGTs and TrustSec.

Network- and group-based policies.

Macro- and micro-segmentation.

macro segmentation

VRFs are used for macro-segmentation and are completely isolated within the SD-Access fabric.

The default VRF is ‘DEFAULT_VN’.

An external device called a ‘fusion router’ (a firewall can also be used) provides inter-VRF routing. Standard policy can be applied here based on IP, SGT, or both.

To extend policy across service provider WAN links or devices that are not part of the fabric, VRF-lite can be used for hop-by-hop VRF routing. The other option is the VPNv4 address family in MP-BGP to carry macro-segmentation information.

micro segmentation

Based on SGTs; ACLs can be applied in hardware on switches and SGT-capable forwarding devices.

In IP-based transit, the SGT can be lost on decapsulation. There are two ways to solve this: the first is inline tagging with Cisco TrustSec Meta-Data (CMD) hop by hop (similar to VRF-lite hop by hop); the second is using SXP for IP-to-SGT bindings over GRE, IPsec, DMVPN, and GET VPN with compatible devices.

  • SGTs are enforced by border and edge nodes in the SDA fabric

  • TrustSec is used to carry SGT info in CMD on Layer 2 transports. This is the preferred solution if possible.

  • SXP is used to transport bindings over non-TrustSec-compatible networks

Control and border design

Control plane nodes and border nodes can be colocated for a simple design. To scale out, you may need to separate them, which requires iBGP to synchronize the databases.

Packet flow in SD-ACCESS from host perspective

Onboarding

  1. Host onboarding - the host is authenticated and authorized using AAA (ISE)

  2. Host onboarding - DHCP request and EID registration

      • DHCP option 82 is used to make sure the reply gets back to the right relay agent.

Host to Host communication

If within the same subnet:

  1. Host A sends an ARP request for the MAC of host B.

  2. Edge node A intercepts the ARP and queries the control plane (MR) for the MAC address of host B.

  3. The control plane checks the ARP table for the IP/MAC of host B.

  4. If found, the CP responds to edge A with the info.

  5. Once edge A has the MAC of host B, it queries the CP again for the RLOC of host B.

  6. If the CP has the info, it sends it to edge A.

  7. Edge A stores the info in its Layer 2 forwarding table.

  8. Edge A sends the ARP entry (converted to unicast) to the edge node of host B (call it C for now).

  9. Edge node C receives the ARP entry, forwards it to host B, and installs the host A entry in its L2 forwarding table.

  10. Host B can now reply with a unicast reply to host A. Since both edge nodes have Layer 2 forwarding information, the CP nodes are no longer involved.

  11. Traffic can flow between the two edge nodes; VXLAN encapsulation will carry their traffic to each other.
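The conversational-learning behavior in the steps above can be sketched as a toy simulation (my own illustration): the edge node queries the control plane once, caches the result, and serves later lookups locally.

```python
# Toy control-plane database: MAC -> registered IP/RLOC info
control_plane = {"host-b-mac": {"ip": "10.1.1.20", "rloc": "192.168.255.2"}}

class EdgeNode:
    def __init__(self, name):
        self.name = name
        self.l2_table = {}            # conversational cache: MAC -> RLOC
        self.cp_queries = 0           # how often we had to ask the CP

    def resolve(self, mac):
        if mac in self.l2_table:      # steps 10-11: CP no longer involved
            return self.l2_table[mac]
        self.cp_queries += 1          # steps 2-6: query the control plane
        rloc = control_plane[mac]["rloc"]
        self.l2_table[mac] = rloc     # step 7: install in the L2 table
        return rloc

edge_a = EdgeNode("edge-a")
edge_a.resolve("host-b-mac")          # first lookup hits the control plane
edge_a.resolve("host-b-mac")          # second lookup is served from cache
print(edge_a.cp_queries)              # -> 1
```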

SD-Access Distributed Campus

SXP not required

Transit Network

BGP AS 65540 is reserved for use on transit control plane.

The underlay network must provide reachability between the Loopback 0 interfaces on all fabric nodes.

IP Based

VXLAN packets are decapsulated into standard IP and mapped to VRF-lite instances.

SDA Native Transit

VXLAN is carried throughout. Requires jumbo frames; less than 10 ms latency is recommended. Metro Ethernet, VPLS, dark fiber, etc.

SD-WAN Transit

Uses OMP in the control plane and IPsec to secure traffic.

Deploy SDA

Provision devices:

  • Planned provisioning, defined in the hierarchy

  • Unclaimed provisioning

  • Smart Account sync