This is Chapter 2 of the blog series: Open vSwitch with DPDK on Arm. This blog describes how to set up OvS with DPDK to run the PHY-PHY traffic test on an Arm platform. The high-level steps, in order, are:
1. Isolate CPU cores for the OvS threads
2. Configure hugepages
3. Bind the NIC interfaces to a DPDK-compatible driver
4. Set up OvS with DPDK
5. Configure the bridge, ports, and test flows
This blog assumes that you have built and installed OvS with DPDK in your home directory on your Arm platform. This Arm platform serves as the Device Under Test (DUT) for the traffic test. If you have not completed those steps, please refer to Chapter 1: Build and Install of this blog series.
CPU Isolation
Isolating a CPU prevents the Linux scheduler from assigning tasks and processes to it. This is useful when we want to dedicate a fixed number of CPUs on a multiprocessor system to specific tasks with the fewest unwanted interruptions. Once a CPU core has been isolated, processes and tasks can be assigned to it only manually, using the taskset command, cset commands, or any other software utilizing the CPU affinity syscalls.
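As a minimal illustration of such manual assignment, taskset can pin a process to an isolated core (the application name here is hypothetical):
$ sudo taskset -c 2 ./my_app    # launch my_app pinned to CPU core 2
$ taskset -cp <pid>             # query the CPU affinity of a running process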
It is important to isolate CPUs for traffic testing so that the OvS threads get as much uninterrupted execution time as possible. If your Arm system already has isolated cores and the system configuration is similar to the following, then you can skip this section and start from 'Hugepages Configuration'.
1. Check the number of CPU cores on your system using the lscpu command. The output on my N1SDP platform is as follows:
$ lscpu | grep 'CPU.s'
CPU(s):              4
On-line CPU(s) list: 0-3
NUMA node0 CPU(s):   0-3
2. Add the 'isolcpus' kernel boot parameter to the GRUB_CMDLINE_LINUX option in the /etc/default/grub file. I have isolated 3 CPU cores on my system, from CPU core 1 to CPU core 3. It is a good idea to also add the nohz_full and rcu_nocbs boot parameters to the GRUB_CMDLINE_LINUX option to further reduce interference on those CPUs. If a CPU is listed under the nohz_full parameter, the kernel stops sending timer ticks to it, so the CPU spends less time servicing interrupts and context switching. The rcu_nocbs parameter makes the listed CPUs run in "no RCU callbacks" mode, i.e., RCU callbacks are offloaded to and handled by a "housekeeping CPU" instead.
GRUB_CMDLINE_LINUX="isolcpus=1-3 nohz_full=1-3 rcu_nocbs=1-3"
3. Run update-grub to regenerate the GRUB configuration.
$ sudo update-grub
4. Reboot your system. Once your system is up, you can check that CPU cores have indeed been isolated.
$ cat /proc/cmdline | grep isolcpus
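On recent kernels you can also read the isolated CPU list directly from sysfs; on my setup this returns the range configured above:
$ cat /sys/devices/system/cpu/isolated
1-3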
Hugepages Configuration
1. You need to configure and allocate hugepages to run OvS with DPDK successfully. Most Arm systems support either a 2MB or a 1GB hugepage size.
For run-time allocation of 2MB hugepages, the sysctl utility is used:
$ sudo sysctl -w vm.nr_hugepages=N
where N is the number of 2MB pages.
Since we are allocating hugepages at runtime, this hugepage configuration will not be persistent across reboots. You can edit the /etc/sysctl.d/hugepages.conf file to make the allocation of hugepages permanent across reboots.
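For example, a sketch of making an allocation of 1024 2MB pages persistent (the page count here is only an illustration; size it for your workload):
$ echo 'vm.nr_hugepages = 1024' | sudo tee /etc/sysctl.d/hugepages.conf
$ sudo sysctl -p /etc/sysctl.d/hugepages.conf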
If your system supports 1GB as the default hugepage size, then it is not possible to reserve hugepages after the system has already booted. You can edit the /etc/default/grub file to add the kernel boot parameters for reserving 1GB hugepages:
GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=4"
Run update-grub after editing the file and reboot the system for the changes to take effect. You can verify the hugepage configuration with the following command:
$ grep Huge /proc/meminfo
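With the GRUB configuration above, the output should look something like this (the exact fields vary by kernel version):
AnonHugePages:         0 kB
HugePages_Total:       4
HugePages_Free:        4
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB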
For 1GB hugepages, it is best to allocate hugepages at the time of the first system boot or very soon after it. This prevents the physical memory from becoming fragmented and ensures that a contiguous memory segment can be allocated for each hugepage.
2. Once you have allocated hugepages, check whether hugetlbfs has been mounted. If the following output is seen, hugetlbfs has already been mounted and you do not need to mount it again.
$ mount | grep hugetlbfs
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,mode=1770,gid=78)
If hugetlbfs was not already mounted by default, mount it with the following command:
$ sudo mount -t hugetlbfs none /dev/hugepages
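If you are using 1GB hugepages and the default hugetlbfs mount uses a different page size, you can request the page size explicitly with the pagesize mount option (the mount point name below is an assumption for illustration):
$ sudo mkdir -p /dev/hugepages-1G
$ sudo mount -t hugetlbfs -o pagesize=1G none /dev/hugepages-1G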
DPDK Device Binding
1. Load the vfio-pci kernel module.
$ sudo modprobe vfio-pci
2. As I am using an Intel Network Interface Card (NIC), I need to bind the interfaces (which are connected to the traffic generator) to the vfio driver. This step may be skipped for other NIC types; please check the documentation from your NIC's manufacturer. The PCIe addresses of the NIC interfaces can be found in the output of the first command:
$ sudo $HOME/dpdk-19.11/usertools/dpdk-devbind.py --status
$ sudo $HOME/dpdk-19.11/usertools/dpdk-devbind.py --bind=vfio-pci <pci_address_1> <pci_address_2>
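The status output lists each device with its PCIe address and current driver; the entries below are purely illustrative (your addresses, NIC model, and interface names will differ):
Network devices using kernel driver
===================================
0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=enp1s0f0 drv=ixgbe unused=vfio-pci
0000:01:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=enp1s0f1 drv=ixgbe unused=vfio-pci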
OvS Setup
1. Clean the OvS environment by killing any existing OvS daemons and removing the files generated by a previous run.
$ sudo killall ovsdb-server ovs-vswitchd
$ sudo rm -f $HOME/var/run/openvswitch/*
$ sudo rm -f $HOME/etc/openvswitch/conf.db
$ sudo rm -f $HOME/var/log/openvswitch/ovs-vswitchd.log
2. Create directories for the OvS daemon. You need to perform this step only when you are setting up the PHY-PHY test for the first time.
$ mkdir -p $HOME/etc/openvswitch
$ mkdir -p $HOME/var/run/openvswitch
$ mkdir -p $HOME/var/log/openvswitch
3. Set the environment variables.
$ export DPDK_DIR=$HOME/dpdk-19.11
$ export PATH=$HOME/usr/share/openvswitch/scripts:$PATH
4. Before starting ovs-vswitchd itself, you need to start its configuration database, ovsdb-server. Create the database that ovsdb-server will serve before starting it.
$ cd $HOME/usr/bin
$ sudo ./ovsdb-tool create $HOME/etc/openvswitch/conf.db $HOME/usr/share/openvswitch/vswitch.ovsschema
5. Configure ovsdb-server to use the database created in the previous step, to listen on a Unix domain socket, and to connect to any managers specified in the database itself.
$ cd $HOME/usr/sbin
$ sudo ./ovsdb-server --remote=punix:$HOME/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
6. Initialize the database using ovs-vsctl. This is only necessary the first time after the database is created with ovsdb-tool but can be run at any time. ovs-vswitchd requires some additional configuration to enable DPDK functionality. DPDK configuration arguments can be passed to ovs-vswitchd via the other_config column of the OvS table. At a minimum, the dpdk-init option must be set to either true or try. Defaults are provided for all configuration options that have not been set explicitly.
$ cd $HOME/usr/bin
$ sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
$ sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:hw-offload=false
$ sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:max-idle=500000
$ sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x02
$ sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x04
$ sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=2048
$ sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:n-rxq=1
$ sudo ./ovs-vsctl --no-wait set Open_vSwitch . other_config:n-txq=1
For traffic testing, it is essential to assign an isolated CPU to the pmd-cpu-mask argument. Check the isolated CPUs on the system with the help of the following command:
$ cat /etc/default/grub | grep isolcpus
isolcpus=1-3
On my N1SDP platform, I see that CPUs 1-3 are isolated. In these masks, bit n corresponds to CPU n, so I can assign CPU 2 to the pmd-cpu-mask argument by setting it to 0x04 (binary 100); likewise, dpdk-lcore-mask=0x02 (binary 010) places the DPDK lcore thread on CPU 1.
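Once ovs-vswitchd is running (step 7 below), you can verify which core the PMD thread landed on. This sketch assumes ovs-appctl can locate the daemon's control socket; with the custom directories used here, pointing OVS_RUNDIR at the run directory should be enough:
$ export OVS_RUNDIR=$HOME/var/run/openvswitch
$ sudo -E $HOME/usr/bin/ovs-appctl dpif-netdev/pmd-rxq-show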
7. Start the main Open vSwitch daemon, telling it to connect to the same Unix domain socket created earlier.
$ export DB_SOCK=$HOME/var/run/openvswitch/db.sock
$ cd $HOME/usr/sbin
$ sudo ./ovs-vswitchd unix:$DB_SOCK --pidfile --detach --log-file=$HOME/var/log/openvswitch/ovs-vswitchd.log
If DPDK initialization is successful, the following entry appears in the log file:
|dpdk|INFO|DPDK Enabled - initialized
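You can check for it with a quick grep of the log file:
$ grep "DPDK Enabled" $HOME/var/log/openvswitch/ovs-vswitchd.log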
Bridge and Flow Configuration
1. Add a userspace bridge. The ovs-vsctl utility can be used for this purpose. Bridges should be created with datapath_type=netdev, which corresponds to the DPDK userspace datapath.
$ cd $HOME/usr/bin
$ sudo ./ovs-vsctl add-br dpdk-br1 -- set bridge dpdk-br1 datapath_type=netdev
2. Add two DPDK ports. The ovs-vsctl utility can also be used for this.
$ sudo ./ovs-vsctl add-port dpdk-br1 dpdk1 -- set Interface dpdk1 type=dpdk options:dpdk-devargs=<pci_address_1> ofport_request=1
$ sudo ./ovs-vsctl add-port dpdk-br1 dpdk2 -- set Interface dpdk2 type=dpdk options:dpdk-devargs=<pci_address_2> ofport_request=2
At this point, the configuration of the bridge and the ports associated with it can be seen with the following command:
$ sudo ./ovs-vsctl show
3. Bring up the bridge and its interfaces.
$ sudo ip link set dpdk-br1 up
4. Add test flows to forward packets between the DPDK ports. The flows are configured so that traffic received on either port is sent back out through the other port.
$ sudo ./ovs-ofctl add-flow dpdk-br1 in_port=1,action=output:2
$ sudo ./ovs-ofctl add-flow dpdk-br1 in_port=2,action=output:1
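To confirm that both flows are installed, dump the flow table of the bridge:
$ sudo ./ovs-ofctl dump-flows dpdk-br1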
At this point, the setup for the PHY-PHY test is complete, and all you need to do now is configure your traffic generator to send traffic to the DUT.
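While traffic is running, the per-port packet counters on the DUT provide a quick sanity check that frames are entering one DPDK port and leaving the other:
$ sudo ./ovs-ofctl dump-ports dpdk-br1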
Summary
This blog provided a step-by-step tutorial on how to set up OvS with DPDK for the PHY-PHY test. The setup is mostly derived from the official OvS documentation, but some important points relating to hugepages and DPDK configuration for OvS have also been highlighted. These should help you avoid potential issues and achieve an optimal traffic testing environment.
[CTAToken URL = "https://community.arm.com/management/arm-blog-review/b/blogs-under-review/posts/open-vswitch-with-dpdk-on-arm-build-and-install-from-source" target="_blank" text="Previous Blog in Series" class="green"]
[CTAToken URL = "https://community.arm.com/management/arm-blog-review/b/blogs-under-review/posts/open-vswitch-with-dpdk-setup-on-arm-for-phy-vm-phy-vhost-loopback-test" target="_blank" text="Next Blog in Series" class="green"]