Avi Vantage on Linux KVM with SR-IOV data NIC

Overview

Starting with release 18.2.3, Avi Vantage supports installation on Linux KVM with SR-IOV data NICs.

Mode: SR-IOV mode

Explanation: Virtual network functions (VFs) created from a physical NIC (PF).

Comments: Allows physical NICs to be shared amongst VMs without sacrificing performance, since packets are switched in the hardware. A maximum of 32 virtual functions (VFs) can be configured per pNIC.

Drivers and Supported NICs:

  • ixgbe-vf driver supports these NICs (and bonding): 10Gb NICs 82599, X520, X540, X550, X552.

  • i40e-vf driver supports these NICs (bonding not supported): 10Gb NICs X710, 40Gb NICs XL710.

Prerequisites

Hardware

  1. Ensure Intel Virtualization Technology [Intel VT-d] is enabled in the CPU configuration in the BIOS. This can be turned on in the BIOS settings during boot: navigate to BIOS Settings > CPU Configuration and enable the Intel Virtualization Technology [Intel VT-d] option. Once the system has booted up, run lscpu or grep to check whether virtualization support is enabled:

     lscpu | egrep Virtualization 
     Virtualization:     VT-x    

    This confirms that hardware virtualization support is enabled.

     
     $ egrep -o '(vmx|svm)' /proc/cpuinfo | sort | uniq
     vmx
     

    The output should not be empty.

  2. Ensure SR-IOV support for PCI is enabled in the BIOS. This can also be turned on in the BIOS settings during boot.

  3. In the grub config file [/etc/default/grub], add intel_iommu=on to the GRUB_CMDLINE_LINUX statement. Rebuild the grub config via sudo update-grub on Ubuntu distros, or via grub2-mkconfig -o /boot/grub2/grub.cfg on others. After rebooting, ensure the same is reflected in the cat /proc/cmdline output. A minimal example is sketched below.
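
    For example, a minimal sketch could look like the following (illustrative only; keep any options already present on the GRUB_CMDLINE_LINUX line):

     # /etc/default/grub
     GRUB_CMDLINE_LINUX="intel_iommu=on"

     # rebuild the grub config (Ubuntu) and verify after reboot
     sudo update-grub
     cat /proc/cmdline | grep intel_iommu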

Software

Ubuntu Distro

DISTRIB_ID Ubuntu
DISTRIB_RELEASE 16.04
DISTRIB_CODENAME xenial
DISTRIB_DESCRIPTION Ubuntu 16.04.6 LTS
OS kernel version 4.4.0-131-generic
python version 2.7.12 and above
virsh version 1.3.1 and above
libvirt-bin version 1.3.1 and above
ansible version 2.0.0.2 and above
virt-manager version 1.3.2 and above
qemu-kvm version 2.5.0 and above
genisoimage version 1.1.11 (Linux) and above

RHEL/CentOS distro

CentOS Linux release 7.6.1810 (Core)
OS kernel version 3.10.0-957.5.1.el7.x86_64
python version 2.7.5 and above
virsh version 4.5.0 and above
libvirt-bin version 4.5.0 and above
ansible version 2.4.2.0 and above
virt-manager version 1.5.0 and above
qemu-kvm version 1.5.3
genisoimage version 1.1.11 (Linux) and above

Avi Vantage Software

Ensure that the Avi Vantage se.qcow2 and controller.qcow2 images are copied to the root directory of the host machine. The se.qcow2 image can be fetched from the Controller UI once the Controller is up, as explained later in this article.
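
For example, assuming the images are available on a workstation and the KVM host is reachable at a hypothetical address kvm-host, they could be copied over as follows:

 
 # copy the Controller and SE images to the host's root directory (illustrative paths)
 scp controller.qcow2 se.qcow2 root@kvm-host:/root/
 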

Ansible Playbook

Use the following commands to install Ansible and Avi Python SDK:

  • Ansible version 2.7 - pip install ansible
  • Avi python SDK - pip install avisdk

Use the following commands to download the required Ansible roles:

  • ansible-galaxy install avinetworks.avicontroller_kvm
  • ansible-galaxy install avinetworks.avise_kvm

Note: Usage of the above Ansible roles is explained in the sections below.

Virtual Functions (VF) from PF

Ensure the requisite number of VFs is created from the parent PF using the parent PF’s BDF value. Note that this has to be redone every time the host machine is rebooted, unless the configuration is made persistent as explained in the following section.

 
 echo <vf-num> > /sys/bus/pci/devices/<parent-PF-BDF>/sriov_numvfs
 

Using the lspci command, grep for the Virtual Functions carved out from the PF.
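
For example, to carve two VFs out of a hypothetical PF at BDF 0000:3b:00.0 and then list them (the BDF and VF count are illustrative only):

 
 # create 2 VFs on the PF, then list the resulting virtual functions
 echo 2 > /sys/bus/pci/devices/0000:3b:00.0/sriov_numvfs
 lspci | grep -i "Virtual Function"
 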

Making the virtual functions persistent

To make the Virtual Functions persistent across reboots, use the editor of your choice to create a udev rule similar to the following, where you specify the intended number of VFs (in this example, 2), up to the limit supported by the network interface card.

 
 vim /etc/udev/rules.d/enp14s0f0.rules
  ACTION=="add", SUBSYSTEM=="net", ENV{ID_NET_DRIVER}=="ixgbe",
  ATTR{device/sriov_numvfs}="2"  

In the preceding example, replace enp14s0f0 with the PF network device name(s) and adjust the value of ENV{ID_NET_DRIVER} to match the driver in use.

To find the driver in use, run the following command:

ethtool -i <PF_interface_name> | grep driver

The udev rule ensures that the required number of VFs is created at boot time.
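
For reference, a sample driver lookup is shown below, assuming the PF is named enp14s0f0 and is backed by the ixgbe driver (the output depends on your NIC):

 
 ethtool -i enp14s0f0 | grep driver
 driver: ixgbe
 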

Post Host Reboot

The following lists the steps to be executed after every reboot.

  1. Post reboot, all VMs will automatically be in the stopped (shut off) state. All the VM names can be checked using the virsh list --all output. Bring up all the VMs using the virsh start <VM-name> command.

  2. If the VF config was not made persistent as described above, repeat the steps mentioned under Virtual Functions from PF.

VFs should be configured with unique MAC addresses through PF using the following command:

ip link set <PF_interface_name> vf <VF_index> mac <unique_mac_addr>
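
For example, to assign illustrative (locally administered) MAC addresses to VFs 0 and 1 of a PF named eno2:

 
 # each VF gets a unique, locally administered MAC address
 ip link set eno2 vf 0 mac 52:54:00:0a:01:01
 ip link set eno2 vf 1 mac 52:54:00:0a:01:02
 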

Deploying Avi Controller

In avinetworks.avicontroller_kvm, edit the KVM host (inventory) file according to your Avi Controller configuration needs and then run the playbook as follows:

 
ansible-playbook kvm.yml -i inventory-file -vv
 

Verify the SE is able to connect to the Avi Controller by navigating to Infrastructure > Dashboard on Avi Controller UI (this may take a few minutes).

Controller Initial Setup

Use a browser to navigate to the Controller IP address and follow the steps to perform the initial setup:

  1. Configure an administrator password
  2. Set DNS information
  3. Select No Orchestrator

Deploying Service Engine

Upload the SE image to install Avi Service Engine in SR-IOV mode.

  1. On the Avi Controller, navigate to Infrastructure > Clouds.
  2. Click the download icon on the Default Cloud row and select Qcow2.
  3. Upload the se.qcow2 to the root directory of the host machine.

Configuring MAC VLAN

MAC VLAN needs to be configured before using the script to create the VM and pass through the interfaces. This is configured for the data path NICs by using either Transparent VLAN or Tagged VLAN.

Transparent VLAN

Transparent VLAN is used by the NIC to isolate VF traffic. From the perspective of the VF driver or guest OS, the NIC is in access mode and should not see any VLAN header. The transparent VLAN is specified by the system administrator at the time of VF creation. In the TX path, the NIC inserts the VLAN header into the packet before putting it on the wire. In the RX path, the NIC strips the VLAN header before handing the packet over to the VF driver.

For instance, if VF number 0 of the parent PF eno2 has to be placed in transparent VLAN 109, the following configuration can be used:

 
ifconfig eno2 down    
ip link set dev eno2 vf 0 trust on
ip link set dev eno2 vf 0 spoofchk off
ip link set eno2 vf 0 vlan 0
ip link set eno2 vf 0 vlan 109   # VLAN number of the transparent VLAN to be configured
ifconfig eno2 up
 

In this case, once the Avi Service Engine is up, navigate to Infrastructure > Service Engine on the Avi UI and select the relevant SE. Use the edit button to go to the respective interface and configure the IP address/mask, either statically or via DHCP.

Tagged VLAN

If the goal is instead to create a VLAN-tagged interface, the following steps can be used.

For instance, if VF number 0 of the parent PF eno2 has to be used with tagged VLAN 109, it can be configured as follows:

 
ifconfig eno2 down    
ip link set dev eno2 vf 0 trust on
ip link set dev eno2 vf 0 spoofchk off
ip link set eno2 vf 0 vlan 0
ifconfig eno2 up
 

Once the Avi SE is up, navigate to Infrastructure > Service Engine on the Avi UI and select the relevant SE. Use the edit button to go to Create VLAN Interface and configure a tagged VLAN interface in VLAN 109.

Bond interface

Bond interfaces can be specified via the SE Ansible YAML file input. The interface names are taken as input and the bond sequence can be specified as follows:

  • Bond-if sequence: 1,2 3,4
    Implies interfaces 1 and 2 form one bond and interfaces 3 and 4 form another bond (note the space between 1,2 and 3,4).

  • Bond-if sequence: 1,2,3,4
    Implies interfaces 1, 2, 3, and 4 are all in a single bond.

Note: Refer to the template example described in the README of the avinetworks.avise_kvm Ansible role for more details on the SE YAML file.

Disabling Avi Controller and Service Engines

VMs and their corresponding images can be cleared using the following commands:

 
 virsh undefine vm-name
 virsh destroy vm-name
 

At the root directory:

 
 rm -rf /var/lib/libvirt/images/vm-name.qcow2
 rm -rf vm-name
 

The rm -rf vm-name command deletes the local folder created in the root directory for the relevant VM.
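
For example, a complete teardown of a hypothetical SE VM named Avi-se-example, run from the root directory of the host, could look like the following (the VM is stopped before its definition is removed):

 
 # stop and undefine the VM
 virsh destroy Avi-se-example
 virsh undefine Avi-se-example
 # remove the disk image and the per-VM folder
 rm -rf /var/lib/libvirt/images/Avi-se-example.qcow2
 rm -rf Avi-se-example
 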

Note:

  1. If at any time the disk space of the host is getting exhausted, it might be due to the number of qcow2 images in /var/lib/libvirt/images/ used as part of VM creation. Clear up any unused ones by deleting the respective VM along with its qcow2 image.
  2. Ensure that stale Service Engine entries present in the Avi Controller GUI are manually cleaned up (force-deleted) after the SE VMs are destroyed.

Document Revision History

Date                 Change Summary
December 21, 2020    Updated VF mac address details for 20.1.3