Cisco ACI with Avi Vantage Deployment Guide
Cisco Application Centric Infrastructure (ACI) is Cisco's software-defined networking solution for data centers and clouds. It increases operational efficiency, delivers network automation, and improves security across any combination of on-premises data centers and private and public clouds.
The key building blocks of Cisco ACI are Nexus 9000 hardware and the Application Policy Infrastructure Controller (APIC).
The APIC provides centralized policy automation and management for the ACI fabric, and establishes a common policy and management framework across physical, virtual, and cloud infrastructure.
ACI is based on an open architecture (open APIs and standards) that integrates Layer 4 to Layer 7 (L4-L7) services in the network. The ACI solution offers a robust implementation of multi-tenant security, quality of service (QoS), and high availability.
The following table lists commonly used ACI terms:

| Term | Description |
|------|-------------|
| ACI fabric | A Virtual Extensible LAN (VXLAN) overlay configured by APIC on leaf and spine switches to provide end-to-end connectivity for clients and servers. |
| Bridge domain | A Layer 2 segment analogous to a VLAN in a traditional network. |
| Endpoint group (EPG) | Endpoint groups are associated with endpoints in the network. Endpoints are identified by their domain connectivity (virtual, physical, or outside) and their connectivity method, for instance virtual machine port groups (VLAN, VXLAN), physical interfaces or VLANs (including virtual port channels), external VLANs, or external subnets. |
| Contract | A directional access list between provider and consumer EPGs, comprising one or more filters (ACEs) that identify and allow traffic between EPGs. By default, communication between EPGs is blocked and a contract is required to allow the traffic. Note: intra-EPG traffic is allowed by default, so no contract is required. |
| Application network profile | A container that groups one or more EPGs together with their associated connectivity policies. |
| Tenant | A network-wide administrative container, which acts as a logical container for application policies. |
The Avi Vantage platform provides enterprise-grade distributed ADC and Intelligent Web Application Firewall (iWAF) solutions for on-premises and public cloud infrastructure. Avi Vantage also provides built-in analytics that enhance the end-user application experience and ease operations for network administrators.
Avi Vantage is a complete software solution that runs on commodity x86 servers or as a virtual machine and is entirely driven by REST APIs.
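Because the platform is driven entirely by REST APIs, the workflows described in this guide can also be scripted. The following is a minimal sketch, not a definitive implementation: the Controller URL and credentials are placeholders, and it uses the session-based `/login` endpoint and `X-Avi-Version` header from the Avi API.

```python
import requests

def login_payload(username, password):
    """Credential body expected by the Avi Controller /login endpoint."""
    return {"username": username, "password": password}

def fetch_virtualservice_names(controller, username, password,
                               version="17.2.10"):
    """Log in and return the names of all configured virtual services.

    `controller` is a base URL such as https://controller.example.com
    (placeholder). The Avi API expects the X-Avi-Version header.
    """
    session = requests.Session()
    session.post(f"{controller}/login",
                 data=login_payload(username, password))
    resp = session.get(f"{controller}/api/virtualservice",
                       headers={"X-Avi-Version": version})
    resp.raise_for_status()
    return [vs["name"] for vs in resp.json()["results"]]
```

In a lab you would call `fetch_virtualservice_names("https://controller.example.com", "admin", "<password>")`; in production, verify TLS certificates and scope credentials appropriately.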
The product components include:
Avi Controller (control plane): Central policy and management plane that analyzes the real-time telemetry collected from Avi Service Engines and presents it in visual, actionable dashboards for administrators using an intuitive user interface built on RESTful APIs.
Avi Service Engines (data plane): Distributed load balancers with iWAF that are deployed closest to the applications across multiple cloud infrastructures. The Avi Service Engines collect and send real-time application telemetry to the Avi Controller.
The Avi Vantage architecture is Controller-led, decoupling the control plane from the data plane. This makes it possible to automate L4-L7 with the Avi Controller while ACI provides L2-L3 network automation for the Service Engines.
Below is the architectural representation of Avi Vantage deployment within ACI.
The following table lists the minimum software requirements for Avi Vantage integration with ACI:
| Component | Minimum Version |
|-----------|-----------------|
| Avi Controller | 17.2.10 or later |
| Cisco APIC | 1.0(3f) or later |
| VMware vCenter | 5.1, 5.5, 6.0, 6.5, or 6.7 |
Integration Options for Avi Vantage in ACI Fabric
This section discusses integrating Avi Vantage with ACI in different hosting infrastructures.
Network policy mode with write access vCenter cloud
This is a traditional mode in which EPGs are used for the Avi Service Engines. ACI is used only to provide access between the client network and the virtual service network. Avi Vantage integrates with vCenter in write access mode. This is the recommended deployment mode.
Network policy mode with no-access/read access vCenter cloud
This is a traditional mode in which EPGs are used for the Avi Service Engines. ACI is used only to provide access between the client network and the virtual service network. Avi Vantage is deployed as an external L3Out on VMware infrastructure with a no-access cloud (no integration with vCenter) or a read access cloud (read-only access to vCenter), and performs BGP peering with ACI.
Cisco CSP 2100/ Bare Metal Servers
- Network policy mode
This is a traditional mode in which EPGs are used for the Avi Service Engines. Avi Vantage is deployed as an external L3Out on Cisco CSP or bare-metal servers, and performs BGP peering with ACI.
Network Policy Mode with Avi Vantage on Write Access VMware Cloud
In this mode, Avi Vantage is integrated with VMware and the VMware infrastructure is used to configure the interfaces and port groups.
For deploying Avi Vantage in Network Policy Mode, the following minimum ACI configuration is required:
- A bridge domain containing the subnets to be used for virtual service IPs and Service Engine interfaces.
- An EPG referencing that bridge domain.
As seen above, the integration is with vCenter in write access mode. Given below is the configuration workflow for this mode.
To deploy Avi Vantage in vCenter with write access mode, refer to Installing Avi Vantage for VMware vCenter.
This is a traditional deployment where ACI provides access (contracts) between the clients and the virtual service, and between the Service Engines and the servers.
Configuring APIC contracts for Avi Vantage
After deploying Avi Vantage in vCenter write access mode, you need to create contracts that allow communication between the clients and the virtual service networks, and between the Service Engines and the servers.
The contracts can be configured in ACI for the following deployment modes:
Avi Vantage Deployed in Two-Arm Mode
In this mode, Avi Service Engine connectivity to the clients and servers uses different networks. You need to create a contract to allow communication between the client EPG and the virtual service EPG, and another between the Avi Vantage EPG and the server EPG.
Avi Vantage Deployed in One Arm Mode
In this mode, connectivity from the Avi Service Engine to the clients and servers uses the same interface. You need to create a contract to allow communication between the client EPG and the virtual service EPG, and another between the Avi Vantage EPG and the server EPG.
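The contracts above can be created in the APIC GUI or through its REST API. The sketch below is hedged: the contract, subject, and filter names are hypothetical, and it builds only the JSON body, following the standard vzBrCP/vzSubj/vzRsSubjFiltAtt object model, which you would POST to `/api/mo/uni/tn-<tenant>.json` on the APIC.

```python
def contract_payload(contract_name, subject_name, filter_name):
    """APIC managed-object body for a contract allowing one filter.

    vzBrCP is the contract, vzSubj its subject, and vzRsSubjFiltAtt
    the relation to an existing vzFilter (for example, one matching HTTP).
    """
    return {
        "vzBrCP": {
            "attributes": {"name": contract_name},
            "children": [{
                "vzSubj": {
                    "attributes": {"name": subject_name},
                    "children": [{
                        "vzRsSubjFiltAtt": {
                            "attributes": {"tnVzFilterName": filter_name}
                        }
                    }],
                }
            }],
        }
    }

# Hypothetical example: a contract consumed by the client EPG and
# provided by the virtual service EPG over an existing "http" filter.
body = contract_payload("client-to-vs", "http-subj", "http")
```

After creating the contract, you would still attach it in APIC as provided/consumed on the relevant EPGs, exactly as in the GUI workflow.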
Network Policy Mode with Avi Vantage on No Access or Read Access VMware Cloud
In this deployment, BGP is used for peering with the ACI fabric and for exchanging the virtual service routes. This deployment is mostly applicable to setups with no vCenter deployed, or with no vCenter integration with the Avi Controller.
In a vCenter no-access cloud, the Avi Controller has no access to vCenter resources. In a vCenter read access cloud, the Avi Controller integrates with vCenter but can only read vCenter resources; it cannot create Service Engines or other resources. In both cases, you need to deploy Service Engines manually.
For more information on VMware no-access or read access, please refer to Deploying in Read/No access mode.
To support BGP peering between the Avi Service Engines hosted on vCenter and the ACI fabric, the SEs must be connected to the ACI fabric and configured as an external routed device in APIC. APIC then treats them as an L3Out device.
For configuring BGP L3Out on Cisco APIC, please refer to the APIC configuration section in Avi Vantage Deployment for North-South Traffic.
To configure BGP on Avi Controller, navigate to Infrastructure > Routing, and select the cloud in which BGP peering is configured. Under BGP Peering, enter the peer IP, AS numbers, and other parameters.
For complete information on BGP on Avi Controller, refer to BGP Support for Scaling Virtual Services.
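The BGP peering described above can also be configured through the Avi REST API by updating the VRF context of the cloud. The following is a minimal sketch under stated assumptions: the `bgp_profile` field layout (local AS, iBGP flag, one IPv4 peer) follows the Avi object model as we understand it, and the peer address is a placeholder; verify the field names against your Controller version's API documentation.

```python
def bgp_profile(local_as, peer_ip, remote_as):
    """bgp_profile fragment for a VRF context update on the Avi Controller.

    Configures one BGP peer; peer_ip is the ACI border leaf address
    (hypothetical value below).
    """
    return {
        "local_as": local_as,
        "ibgp": local_as == remote_as,  # iBGP only when AS numbers match
        "peers": [{
            "remote_as": remote_as,
            "peer_ip": {"addr": peer_ip, "type": "V4"},
        }],
    }

# Placeholder values: SE side AS 600, ACI fabric AS 500.
profile = bgp_profile(600, "10.10.10.1", 500)
```

You would merge this fragment into the VRF context object retrieved from `/api/vrfcontext` and PUT it back, which is equivalent to filling in the BGP Peering section of the UI.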
Virtual Service Provisioning
After configuring BGP peering, proceed with creating virtual services.
In the Avi Controller, click the Create Virtual Service (Advanced) option. Enter all virtual service parameters, such as the name, IP address, and pool. Click Next and navigate to the Advanced tab. Enable the Advertise VIP via BGP option.
After the virtual service is created, the Avi Service Engine will start publishing the routes to ACI fabric.
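Virtual service creation can likewise be scripted. In the Avi object model, the Advertise VIP via BGP option corresponds, to our understanding, to the `enable_rhi` (route health injection) flag; the name, VIP, and pool reference below are placeholders. A hedged sketch of the body you would POST to `/api/virtualservice`:

```python
def virtualservice_payload(name, vip_addr, pool_ref):
    """Body for a virtual service whose VIP is advertised over BGP."""
    return {
        "name": name,
        "vip": [{"ip_address": {"addr": vip_addr, "type": "V4"}}],
        "pool_ref": pool_ref,
        "enable_rhi": True,  # "Advertise VIP via BGP" in the UI
    }

# Placeholder name, VIP, and pool reference.
vs = virtualservice_payload("vs-web", "10.20.30.40",
                            "/api/pool?name=web-pool")
```

Once the object is created, the Service Engine hosting the virtual service advertises the VIP route to its configured BGP peers, as described above.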
You can check the BGP peering status and advertised routes in APIC. Alternatively, to check BGP peering and neighbor status on Avi Vantage, log in to the SE CLI and run the commands shown below in the Quagga router shell. Refer to How to Access and Use Quagga Shell using Avi CLI for reference.
10-128-3-198> show ip bgp
BGP table version is 0, local router ID is 22.214.171.124
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
              i internal, r RIB-failure, S Stale, R Removed
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network            Next Hop          Metric LocPrf Weight Path
*> 126.96.36.199/32   0.0.0.0                0        32768  i
*> 188.8.131.52/24    184.108.40.206         0            0  500 670 ?

Total number of prefixes 2
10-128-3-198> show ip bgp summary
BGP router identifier 220.127.116.11, local AS number 600
RIB entries 3, using 336 bytes of memory
Peers 1, using 4568 bytes of memory

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
18.104.22.168   4   500   19369   19373        0    0    0 01w6d10h            1

Total number of neighbors 1
You can verify the virtual service routes in the ACI fabric. The sample below shows a virtual service route learned by a border leaf in the ACI fabric.
leaf-1# show bgp ipv4 unicast vrf VMware-NO-Access-Demo:vmware-no-vrf
BGP routing table information for VRF VMware-NO-Access-Demo:vmware-no-vrf, address family IPv4 Unicast
BGP table version is 12, local router ID is 22.214.171.124
Status: s-suppressed, x-deleted, S-stale, d-dampened, h-history, *-valid, >-best
Path type: i-internal, e-external, c-confed, l-local, a-aggregate, r-redist, I-injected
Origin codes: i - IGP, e - EGP, ? - incomplete, | - multipath, & - backup

   Network              Next Hop          Metric     LocPrf     Weight Path
*>r126.96.36.199/32     0.0.0.0                0        100      32768 ?
*>e188.8.131.52/32      184.108.40.206         0                     0 500 600
*>r220.127.116.11/24    0.0.0.0                0        100      32768 ?
*>r18.104.22.168/24     0.0.0.0                0        100      32768 ?
Network Policy Mode with Avi Vantage in Write Access VMware Cloud with BGP L3Out
In this deployment, BGP is used for peering with ACI fabric and to exchange the virtual service routes. This deployment is mostly applicable for setups where BGP auto scaling is required for virtual service scaling on Avi Service Engines.
In vCenter write access mode, the Avi Controller is configured with vCenter cloud connector. The Controller has write access permissions to vCenter and handles complete automation involved in creating Service Engines and placing them in the network. The Controller also scales the Service Engines based on the configured threshold.
For more information, refer to Installing Avi Vantage for VMware vCenter.
The BGP peering and virtual service configuration remains the same as mentioned in the Network Policy Mode with Avi Vantage on No Access or Read Access VMware Cloud section.
A port group with a static VLAN must exist in vCenter for the Service Engine data vNICs. Use an SVI interface with the static VLAN as the L3Out in the ACI fabric.
The Avi Controller provides real-time analytics dashboards that present application load balancing and security analytics in a single pane.
Avi Vantage virtual service real-time metrics provide details such as transactions per second, latency, and response times.
Avi Vantage logs provide a detailed view of each connection, including the end-to-end client/virtual service/server communication, which is useful for troubleshooting.
Avi Vantage WAF analytics provide information on real-time web security attacks against the virtual service.
The Avi Controller offers the following monitoring capabilities for Cisco ACI fabric:
- Monitors Load Balancer (SE) and application server health.
- Provides real-time application analytics.
- Protects applications against L4-L7 DDoS attacks.
- Monitors APIC EPG membership to automatically add or remove application instances from pools.
- Performs Load Balancer auto scaling based on real-time performance metrics (CPU, memory, bandwidth, connections, latency, etc).
- Provides point-and-click simplicity for iWAF policies with central control.
- Provides granular security insights on traffic flows and rule matches to enable precise policies using iWAF.
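The EPG-membership monitoring in the list above reduces to a reconciliation loop: read the endpoint IPs learned under an EPG from APIC (fvCEp objects), compare them with the current Avi pool members, and apply the difference. The comparison step can be sketched as follows; this is a simplified helper with hypothetical addresses, and a real integration would also handle ports and health state.

```python
def diff_membership(epg_ips, pool_ips):
    """Return (to_add, to_remove) so the Avi pool tracks the APIC EPG."""
    epg, pool = set(epg_ips), set(pool_ips)
    return sorted(epg - pool), sorted(pool - epg)

# An endpoint learned in the EPG but missing from the pool is added;
# a pool member no longer present in the EPG is removed.
to_add, to_remove = diff_membership(
    ["10.0.1.11", "10.0.1.12"],  # hypothetical fvCEp addresses from APIC
    ["10.0.1.12", "10.0.1.13"],  # current Avi pool member addresses
)
```

Running this periodically (or on APIC event subscriptions) keeps pool membership synchronized without manual changes.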