DPDK, drivers and OVS!


DPDK uses its own vendor-supported poll mode drivers (PMDs), and the kernel needs to be told that. The kernel provides pass-through drivers for these devices; these drivers map/bind device memory into the user-space application and register interrupts. The application then uses the appropriate PMD to poll the memory of these devices and process arriving packets.
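
For example, binding a NIC to the vfio-pci pass-through driver can be done with driverctl, as sketched below (the PCI address 0000:05:00.0 is just an example; use your own device's address):

```shell
# Make sure the vfio-pci module is loaded
sudo modprobe vfio-pci

# Persistently override the driver for the device at this PCI address
sudo driverctl set-override 0000:05:00.0 vfio-pci

# Confirm the override took effect
sudo driverctl -v list-overrides
```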

Now, let's look at what OVS does when DPDK is set as the datapath. There are multiple binding drivers, such as vfio-pci and uio; here, vfio-pci is used as the binding driver for the NICs. These NICs are then added to an OVS bridge whose datapath type is set to netdev.

Let's take a look at the following. I have added two DPDK ports to an OVS bridge.

Port "dpdkbond0"
    Interface "dpdk0"
        type: dpdk
        options: {dpdk-devargs="0000:05:00.0", n_rxq="2"}
    Interface "dpdk1"
        type: dpdk
        options: {dpdk-devargs="0000:05:00.1", n_rxq="2"}
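
A configuration like the above can be created with commands along these lines (the bridge name br0 is an assumption; the PCI addresses are the ones from this example):

```shell
# Create a userspace-datapath bridge (assumed name: br0)
sudo ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

# Bond the two DPDK ports, pointing each interface at its PCI device
# and requesting two receive queues per port
sudo ovs-vsctl add-bond br0 dpdkbond0 dpdk0 dpdk1 \
    -- set Interface dpdk0 type=dpdk \
         options:dpdk-devargs=0000:05:00.0 options:n_rxq=2 \
    -- set Interface dpdk1 type=dpdk \
         options:dpdk-devargs=0000:05:00.1 options:n_rxq=2
```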

Now, let's check which drivers are used to bind these ports.

$ sudo driverctl -v list-overrides
0000:05:00.0 vfio-pci (Ethernet Controller X710 for 10GbE SFP+ (Ethernet 10G 4P X710 Adapter))
0000:05:00.1 vfio-pci (Ethernet Controller X710 for 10GbE SFP+ (Ethernet Converged Network Adapter X710))

The above is for Intel X710 NICs. Let's check what we see when Mellanox NICs are added to the bridge.

$ sudo driverctl -v list-overrides
0000:82:00.3 vfio-pci (MT27800 Family [ConnectX-5 Virtual Function])
0000:82:00.4 vfio-pci (MT27800 Family [ConnectX-5 Virtual Function])

We now know that the ports are bound using vfio-pci, and that OVS has to poll them using the appropriate PMDs. Take a look at the following.

$ sudo ovs-vsctl list Interface | grep dpdk0 -A8 -B26
 status              : {driver_name="net_i40e", if_descr="DPDK 17.11.4 net_i40e", if_type="6", max_hash_mac_addrs="0", max_mac_addrs="64", max_rx_pktlen="9018", max_rx_queues="192", max_tx_queues="192", max_vfs="0", max_vmdq_pools="32", min_rx_bufsize="1024", numa_id="0", pci-device_id="0x1572", pci-vendor_id="0x8086", port_no="0"}
 type                : dpdk 

As we see here, the PMD used for polling this NIC is net_i40e; i40e is the Intel driver family for the X710 series. Similarly, the Mellanox ConnectX-5 NICs are polled by the mlx5 PMD, which works on top of the mlx5_core kernel driver.
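
If you only want the PMD name, you can pull driver_name out of the status column. Here is a small sketch using the X710 status output above as sample input:

```shell
# Sample status field as printed by `ovs-vsctl list Interface`
# (trimmed from the X710 example above)
status='{driver_name="net_i40e", if_descr="DPDK 17.11.4 net_i40e", numa_id="0"}'

# Extract the value of driver_name with sed
driver=$(printf '%s\n' "$status" | sed -n 's/.*driver_name="\([^"]*\)".*/\1/p')
echo "$driver"   # prints: net_i40e
```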

So, physical NICs are understood. How about virtual NICs? See below.

# sudo ovs-vsctl list Interface |  grep vhu -A8 -B 20
 options             : {vhost-server-path="/var/lib/vhost_sockets/vhu7a70e87a-df"}
 status              : {features="0x0000000150208182", mode=client, num_of_vrings="2", numa="0", socket="/var/lib/vhost_sockets/vhu7a70e87a-df", status=connected, "vring_0_size"="1024", "vring_1_size"="1024"}
 type                : dpdkvhostuserclient

As we see, no driver is mentioned here. Notice the port type: it is vhost-user. vhost-user is a framework backed by virtio queues that uses Unix domain sockets. In this mode, QEMU acts as the vhost server, whereas OVS has the corresponding vhost client port. The guest VM uses the virtio-net paravirtualization driver, which lets communication between the guest's memory and OVS happen over the virtio queues. So, instead of a hardware PMD, OVS polls these virtual NICs through DPDK's virtio/vhost backend.
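
For completeness, a vhost-user client port like the one above is added roughly like this (the bridge name br0, port name vhu0, and socket path are illustrative; the socket itself is created by QEMU, which acts as the server):

```shell
# Add a vhost-user client port; OVS will connect to the socket that QEMU creates
sudo ovs-vsctl add-port br0 vhu0 -- set Interface vhu0 \
    type=dpdkvhostuserclient \
    options:vhost-server-path=/var/lib/vhost_sockets/vhu0
```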

I hope the reader of this blog now understands the basics of the drivers used for DPDK in OVS.
