
Recently, someone asked me this question. At first I thought, why not, and then the “networking guy” in me asked the same question. So I thought I would share the “networking” answer, since DPDK is ultimately a networking data path technology.
Before getting into DPDK, let's see how a traditional router/switch processes packets. The picture below shows the HW layout of a Cisco Nexus 3100 switch. Pay special attention to the ‘Trident 3’ area.

Now let's zoom into Trident 3. Below is the HW layout of the Trident 3 chipset.

This switch model has 32 x 100 Gbps ports, and those 32 ports are connected to the SerDes of Trident 3. Each 100 Gbps port is backed by one isolated Falcon core which does packet processing for all incoming/outgoing packets through that port at line rate. So, the 32 x 100 Gbps ports are backed by 32 Falcon cores in the Trident 3 chipset, and 32 x 100 Gbps = 3.2 Tbps is the switch's packet processing capacity.
Now let's come back to DPDK. DPDK is basically a server-side accelerated data path technology which takes advantage of PMD cores (poll mode driver cores) to process packets, since a server does not have a Trident 3 kind of ASIC dedicated to packet processing. PMD cores are isolated, dedicated CPU cores in the host system which are given to the DPDK libraries to poll packets directly from the NIC into user space.
So, if a server has a 100 Gbps port, the DPDK library will use one or more PMD cores to keep their eyes on that 100 Gbps port and process its packets as and when they arrive. Sounds similar to what we saw in the Cisco Nexus 3100 switch above. One more thing: without DPDK, the server falls back to the normal kernel networking data path (no HW-accelerated data path), which means any available (not dedicated) CPU core will process the packet.
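To make the polling idea concrete, here is a minimal sketch of what a PMD core's receive loop looks like with the DPDK API. It assumes EAL initialization, mempool creation and port/queue setup have already been done; BURST_SIZE and the loop body are illustrative placeholders, not a real application.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32   /* illustrative burst size */

static void pmd_poll_loop(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        /* Busy-poll the NIC RX ring: no interrupts, the core just spins. */
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);

        for (uint16_t i = 0; i < nb_rx; i++) {
            /* A real application (e.g. OVS-DPDK flow lookup and forwarding)
             * would process bufs[i] here; this sketch simply drops it. */
            rte_pktmbuf_free(bufs[i]);
        }
        /* nb_rx == 0 means nothing arrived, and the core loops again anyway.
         * Those empty iterations are the "idle cycles" in the PMD stats
         * shown later in this post. */
    }
}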
Then, where does MPPS come in? When I am testing DPDK data path throughput, I am basically stressing its packet polling capacity from the HW to user space. Let me introduce sk_buff to explain further. The core structure of Linux kernel networking is sk_buff, the socket buffer structure which is manipulated and used at various levels of the networking stack as a packet passes through it. DPDK's user-space counterpart is the rte_mbuf: for every packet that arrives, the PMD core has to pull it from the NIC's RX ring into an rte_mbuf, and from there onwards it is taken into the networking stack above it for processing (OVS-DPDK in this case).

So, the bottom line is: how many packets per second the PMD core can poll and turn into buffers is my testing vector for DPDK. Here, packet size matters a lot. For example, the PMD core is stressed far more if, on a 10 Gbps NIC, it has to poll 64-byte packets (about 14.88 million packets per second at line rate) than if the packets are 9000 bytes, in which case only about 139K packets per second need to be polled at line rate.
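The line-rate numbers above come from simple Ethernet framing arithmetic. Here is a small C helper that reproduces them; line_rate_pps() is just an illustrative function of mine, and the extra 20 bytes account for the 7-byte preamble, 1-byte SFD and 12-byte inter-frame gap that every frame occupies on the wire.

#include <stdio.h>

/* Packets per second at line rate for a given frame size. */
static double line_rate_pps(double link_bps, double frame_bytes)
{
    return link_bps / ((frame_bytes + 20.0) * 8.0);
}

int main(void)
{
    printf("10G,   64B: %.2f Mpps\n", line_rate_pps(10e9,   64) / 1e6);  /* ~14.88 */
    printf("10G, 9000B: %.1f Kpps\n", line_rate_pps(10e9, 9000) / 1e3);  /* ~138.6 */
    printf("100G,  64B: %.2f Mpps\n", line_rate_pps(100e9,  64) / 1e6);  /* ~148.8 */
    return 0;
}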
Remember, a PMD core is still just a CPU core: many small packets mean many per-packet cycles spent, while a few big packets mean fewer cycles spent. So, when we want to stress it, what do we do? We send many small packets, and that is why the unit here is MPPS (millions of packets per second).
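To see why small packets hurt, look at the per-packet cycle budget. Assuming, purely as an example, a 2.5 GHz PMD core on a 10 Gbps port:

2.5e9 cycles/sec / 14.88e6 pps (64-byte packets)  ≈ 168 cycles available per packet
2.5e9 cycles/sec / 138.6e3 pps (9000-byte packets) ≈ 18,000 cycles available per packet

That per-packet cycle budget is exactly what the "avg processing cycles per packet" counter in the stats below is measuring.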
Below are sample stats of a PMD core (output of ovs-appctl dpif-netdev/pmd-stats-show):
pmd thread numa_id 0 core_id 41:
packets received: 41291293567
packet recirculations: 0
avg. datapath passes per packet: 1.00
emc hits: 21413320312
smc hits: 0
megaflow hits: 19873465646
avg. subtable lookups per megaflow hit: 1.01
miss with success upcall: 4507608
miss with failed upcall: 1
avg. packets per output batch: 1.65
idle cycles: 7502943777110604 (96.67%)
processing cycles: 258432354923724 (3.33%)
avg cycles per packet: 187966.41 (7761376132034328/41291293567)
avg processing cycles per packet: 6258.76 (258432354923724/41291293567)
You can see “processing cycles”: this is our test vector. When it reaches 100%, we can say we have stressed the PMD core enough, and the packet rate at that point is its throughput number.
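A simple way to take such a measurement, assuming OVS-DPDK is already running, is to clear the PMD counters, send traffic for a fixed interval, and read them back; "packets received" divided by the interval then gives the Mpps figure.

sudo ovs-appctl dpif-netdev/pmd-stats-clear
# ... run traffic for a fixed interval, say 60 seconds ...
sudo ovs-appctl dpif-netdev/pmd-stats-show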
PMD cores are configured in OVS-DPDK as shown below. The mask is in hexadecimal format; converting it to binary (001000000000110000000010000000001100) and reading off the set bit positions gives the CPU core numbers: cores 2, 3, 13, 22, 23 and 33 are given to the DPDK libraries as isolated cores.
sudo ovs-vsctl set open_vswitch . other_config:pmd-cpu-mask="200C0200C"
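If you want to sanity-check the mask, here is a tiny C snippet (using the mask value from the command above) that prints the core numbers corresponding to the set bits:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t mask = 0x200C0200CULL;   /* pmd-cpu-mask from the command above */

    for (int core = 0; core < 64; core++) {
        if (mask & (1ULL << core)) {
            printf("core %d is a PMD core\n", core);  /* prints 2, 3, 13, 22, 23, 33 */
        }
    }
    return 0;
}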
I hope it is now clear why DPDK performance is measured in Mpps and not in Gbps.