test patch
VM environment with an Ethernet switch
single path testbed
Virtual machines (VMs) acting as TCP traffic senders (using iperf3) are hosted by Bhyve on a physical box (Beelink SER5 AMD Mini PC) running the FreeBSD 14.2 release. Another box of the same type runs the Ubuntu Linux desktop version and serves as the traffic receiver. The two boxes are connected through a 5-port Gigabit Ethernet switch (TP-Link TL-SG105), which has a shared 1 Mb (125 KB) packet buffer memory.
functionality showcase
This test shows the general TCP congestion window (cwnd) growth pattern before/after the patch, and also compares it with the cwnd growth pattern from the Linux kernel.
- an additional 40 ms of latency is added/emulated at the receiver (Ubuntu Linux, via traffic shaping). The minimum bandwidth-delay product (BDP) is 1000 Mbps x 40 ms == 5 MBytes, so I configured the sender/receiver with 10 MB buffers (2x BDP)
```
cc@Linux:~ % sudo tc qdisc add dev enp1s0 root netem delay 40ms
cc@Linux:~ % tc qdisc show dev enp1s0
qdisc netem 8001: root refcnt 2 limit 1000 delay 40ms
cc@Linux:~ %

root@n1fbsd:~ # ping -c 4 -S n1fbsd Linux
PING Linux (192.168.50.46) from n1fbsdvm: 56 data bytes
64 bytes from 192.168.50.46: icmp_seq=0 ttl=64 time=41.053 ms
64 bytes from 192.168.50.46: icmp_seq=1 ttl=64 time=41.005 ms
64 bytes from 192.168.50.46: icmp_seq=2 ttl=64 time=41.011 ms
64 bytes from 192.168.50.46: icmp_seq=3 ttl=64 time=41.168 ms

--- Linux ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 41.005/41.059/41.168/0.066 ms
root@n1fbsd:~ #

root@n1ubuntu24:~ # ping -c 4 -I 192.168.50.161 Linux
PING Linux (192.168.50.46) from 192.168.50.161 : 56(84) bytes of data.
64 bytes from Linux (192.168.50.46): icmp_seq=1 ttl=64 time=40.9 ms
64 bytes from Linux (192.168.50.46): icmp_seq=2 ttl=64 time=40.9 ms
64 bytes from Linux (192.168.50.46): icmp_seq=3 ttl=64 time=41.0 ms
64 bytes from Linux (192.168.50.46): icmp_seq=4 ttl=64 time=41.1 ms

--- Linux ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 40.905/40.981/41.108/0.078 ms
root@n1ubuntu24:~ #

root@n1fbsd:~ # sysctl -f /etc/sysctl.conf
net.inet.tcp.hostcache.enable: 0 -> 0
kern.ipc.maxsockbuf: 10485760 -> 10485760
net.inet.tcp.sendbuf_max: 10485760 -> 10485760
net.inet.tcp.recvbuf_max: 10485760 -> 10485760
root@n1fbsd:~ #

root@n1ubuntu24:~ # sysctl -p
net.core.rmem_max = 10485760
net.core.wmem_max = 10485760
net.ipv4.tcp_rmem = 4096 131072 10485760
net.ipv4.tcp_wmem = 4096 16384 10485760
net.ipv4.tcp_no_metrics_save = 1
root@n1ubuntu24:~ #

cc@Linux:~ % sudo sysctl -p
net.core.rmem_max = 10485760
net.core.wmem_max = 10485760
net.ipv4.tcp_rmem = 4096 131072 10485760
net.ipv4.tcp_wmem = 4096 16384 10485760
net.ipv4.tcp_no_metrics_save = 1
cc@Linux:~ %
```
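For reproducibility, the settings verified above can be summarized as a small config sketch (the sysctl names and values are taken from the output above; the file paths and the qdisc reset step are assumptions):

```
# FreeBSD sender VM, /etc/sysctl.conf (matches the `sysctl -f` output above)
net.inet.tcp.hostcache.enable=0
kern.ipc.maxsockbuf=10485760
net.inet.tcp.sendbuf_max=10485760
net.inet.tcp.recvbuf_max=10485760

# Linux sender VM and Linux receiver, /etc/sysctl.conf (matches the `sysctl -p` output above)
net.core.rmem_max = 10485760
net.core.wmem_max = 10485760
net.ipv4.tcp_rmem = 4096 131072 10485760
net.ipv4.tcp_wmem = 4096 16384 10485760
net.ipv4.tcp_no_metrics_save = 1

# Linux receiver: (re)apply the 40 ms netem delay on the test interface
sudo tc qdisc del dev enp1s0 root 2>/dev/null   # assumes no other qdisc config is needed
sudo tc qdisc add dev enp1s0 root netem delay 40ms
```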
- info for switching to the FreeBSD RACK TCP stack
```
root@n1fbsd:~ # kldstat
Id Refs Address                Size Name
 1    5 0xffffffff80200000  1f75ca0 kernel
 2    1 0xffffffff82810000    368d8 tcp_rack.ko
 3    1 0xffffffff82847000     f0f0 tcphpts.ko
root@n1fbsd:~ # sysctl net.inet.tcp.functions_default=rack
net.inet.tcp.functions_default: freebsd -> rack
root@n1fbsd:~ #
```
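To keep the RACK stack selected across reboots, something along these lines should work (a sketch based on standard FreeBSD loader.conf/sysctl.conf conventions, not taken from the test run itself):

```
# /boot/loader.conf: load the RACK TCP stack at boot (tcphpts.ko is pulled in as a dependency)
tcp_rack_load="YES"

# /etc/sysctl.conf: make RACK the default stack for new TCP connections
net.inet.tcp.functions_default=rack
```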
FreeBSD VM sender info |
FreeBSD 15.0-CURRENT (GENERIC) #0 main-6f6c07813b38: Fri Mar 21 2025 |
Linux VM sender info |
Ubuntu server 24.04.2 LTS (GNU/Linux 6.8.0-55-generic x86_64) |
Linux receiver info |
Ubuntu desktop 24.04.2 LTS (GNU/Linux 6.11.0-19-generic x86_64) |
- iperf3 command
```
receiver: iperf3 -s -p 5201 --affinity 1
sender:   iperf3 -B ${src} --cport ${tcp_port} -c ${dst} -p 5201 -l 1M -t 300 -i 1 -f m -VC ${name}
```
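The cwnd traces plotted below were sampled while each iperf3 flow was running. One possible collection method is sketched here (this is not necessarily how the attached plots were produced; the destination address, sampling interval, and log paths are assumptions):

```
# Linux sender: sample cwnd once per second from ss(8) while iperf3 runs
while sleep 1; do
    echo "$(date +%s) $(ss -tin dst 192.168.50.46 | grep -o 'cwnd:[0-9]*')" >> cwnd_linux.log
done

# FreeBSD sender: siftr(4) logs per-packet cwnd/ssthresh to a file
kldload siftr
sysctl net.inet.siftr.logfile=/var/log/siftr.log
sysctl net.inet.siftr.enabled=1
# ... run the iperf3 client ...
sysctl net.inet.siftr.enabled=0
```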
CUBIC growth pattern before/after patch in FreeBSD default stack:
CUBIC growth pattern before/after patch in FreeBSD RACK stack:
For reference, the CUBIC growth pattern from the Linux kernel is also attached:
single flow 300-second performance with a 1 Gbps x 40 ms BDP
| TCP CC | version | iperf3 single flow 300s average throughput | comment |
| --- | --- | --- | --- |
| CUBIC in FreeBSD default stack | before patch | 715 Mbits/sec | base |
| | after patch | 693 Mbits/sec | a benign -3.1% performance loss, but cwnd growth is finer grained and less bursty |
| CUBIC in FreeBSD RACK stack | before patch | 738 Mbits/sec | base |
| | after patch | 730 Mbits/sec | negligible -1.1% performance loss |
| CUBIC in Linux | kernel 6.8.0 | 725 Mbits/sec | reference |
CUBIC cwnd and throughput before patch in FreeBSD default stack:
CUBIC cwnd and throughput after patch in FreeBSD default stack:
CUBIC cwnd and throughput before patch in FreeBSD RACK stack:
CUBIC cwnd and throughput after patch in FreeBSD RACK stack:
CUBIC cwnd and throughput in Linux stack:
loss resilience in LAN test (no additional latency added)
Similar to the test in D46046, this configuration tests TCP CUBIC in the VM traffic sender with a 1%, 2%, 3%, or 4% incoming packet drop rate at the Linux receiver. No additional latency is added.
```
root@n1fbsd:~ # ping -c 4 -S n1fbsd Linux
PING Linux (192.168.50.46) from n1fbsdvm: 56 data bytes
64 bytes from 192.168.50.46: icmp_seq=0 ttl=64 time=0.499 ms
64 bytes from 192.168.50.46: icmp_seq=1 ttl=64 time=0.797 ms
64 bytes from 192.168.50.46: icmp_seq=2 ttl=64 time=0.860 ms
64 bytes from 192.168.50.46: icmp_seq=3 ttl=64 time=0.871 ms

--- Linux ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.499/0.757/0.871/0.151 ms
root@n1fbsd:~ #

root@n1ubuntu24:~ # ping -c 4 -I 192.168.50.161 Linux
PING Linux (192.168.50.46) from 192.168.50.161 : 56(84) bytes of data.
64 bytes from Linux (192.168.50.46): icmp_seq=1 ttl=64 time=0.528 ms
64 bytes from Linux (192.168.50.46): icmp_seq=2 ttl=64 time=0.583 ms
64 bytes from Linux (192.168.50.46): icmp_seq=3 ttl=64 time=0.750 ms
64 bytes from Linux (192.168.50.46): icmp_seq=4 ttl=64 time=0.590 ms

--- Linux ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3165ms
rtt min/avg/max/mdev = 0.528/0.612/0.750/0.082 ms
root@n1ubuntu24:~ #

## list of different packet loss rates
cc@Linux:~ % sudo iptables -A INPUT -p tcp --dport 5201 -m statistic --mode nth --every 100 --packet 0 -j DROP
cc@Linux:~ % sudo iptables -A INPUT -p tcp --dport 5201 -m statistic --mode nth --every 50 --packet 0 -j DROP
cc@Linux:~ % sudo iptables -A INPUT -p tcp --dport 5201 -m statistic --mode nth --every 33 --packet 0 -j DROP
cc@Linux:~ % sudo iptables -A INPUT -p tcp --dport 5201 -m statistic --mode nth --every 25 --packet 0 -j DROP
...
```
- iperf3 command
```
cc@Linux:~ % iperf3 -s -p 5201 --affinity 1
root@n1fbsd:~ # iperf3 -B n1fbsd --cport 54321 -c Linux -p 5201 -l 1M -t 30 -i 1 -f m -VC cubic
```
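The four drop rates can be switched on the receiver between runs with a small helper; a sketch (it assumes the only INPUT rules on the receiver are the ones added here):

```
# Linux receiver: replace the previous DROP rule with one for the new rate
set_drop_rate() {
    [ -n "$LAST" ] && sudo iptables -D INPUT -p tcp --dport 5201 \
        -m statistic --mode nth --every "$LAST" --packet 0 -j DROP
    sudo iptables -A INPUT -p tcp --dport 5201 \
        -m statistic --mode nth --every "$1" --packet 0 -j DROP
    LAST=$1
}

set_drop_rate 100   # ~1% loss, then run the 30 s iperf3 test from the sender
set_drop_rate 50    # ~2% loss
set_drop_rate 33    # ~3% loss
set_drop_rate 25    # ~4% loss
```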
- iperf3 single flow 30s average throughput under different packet loss rates at the Linux receiver
| TCP CC | version | 1% loss rate | 2% loss rate | 3% loss rate | 4% loss rate | comment |
| --- | --- | --- | --- | --- | --- | --- |
| CUBIC in FreeBSD default stack | before patch | 854 Mbits/sec | 331 Mbits/sec | 53.1 Mbits/sec | 22.9 Mbits/sec | |
| | after patch | 814 Mbits/sec | 150 Mbits/sec | 45.1 Mbits/sec | 24.2 Mbits/sec | |
| NewReno in FreeBSD default stack | 15-CURRENT | 814 Mbits/sec | 110 Mbits/sec | 40.4 Mbits/sec | 21.5 Mbits/sec | |
| CUBIC in FreeBSD RACK stack | before patch | 552 Mbits/sec | 353 Mbits/sec | 185 Mbits/sec | 130 Mbits/sec | |
| | after patch | 515 Mbits/sec | 337 Mbits/sec | 181 Mbits/sec | 127 Mbits/sec | |
| NewReno in FreeBSD RACK stack | 15-CURRENT | 462 Mbits/sec | 281 Mbits/sec | 163 Mbits/sec | 118 Mbits/sec | |
| CUBIC in Linux | kernel 6.8.0 | 665 Mbits/sec | 645 Mbits/sec | 550 Mbits/sec | 554 Mbits/sec | reference |
| NewReno in Linux | kernel 6.8.0 | 924 Mbits/sec | 862 Mbits/sec | 740 Mbits/sec | 599 Mbits/sec | reference |
tri-point topology testbed
Now two virtual machines (VMs) act as traffic senders at the same time. They are hosted by Bhyve on two separate physical boxes (Beelink SER5 AMD Mini PCs). The traffic receiver is the same Linux box as before, which keeps the tri-point topology simple.
The three physical boxes are connected through a 5-port Gigabit Ethernet switch (TP-Link TL-SG105). The switch has a shared 1 Mb (0.125 MB) packet buffer memory, which is just 2.5% of the 5 MByte BDP (1000 Mbps x 40 ms).
- an additional 40 ms of latency is added/emulated at the receiver, and the senders/receiver are configured with 2.5 MB buffers (50% of the BDP); see the sketch after the iperf3 commands below
- iperf3 commands
```
iperf3 -s -p 5201
iperf3 -B ${src} --cport ${tcp_port} -c ${dst} -p 5201 -l 1M -t 200 -i 1 -f m -VC ${name}

iperf3 -s -p 5202
iperf3 -B ${src} --cport ${tcp_port} -c ${dst} -p 5202 -l 1M -t 200 -i 1 -f m -VC ${name}
```
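A sketch of the setup around these commands (2.5 MB is taken here as 2621440 bytes, and the buffer sysctls mirror the single-path setup; both are assumptions):

```
# FreeBSD sender VMs, /etc/sysctl.conf: ~2.5 MB socket buffers (50% of the 5 MB BDP)
kern.ipc.maxsockbuf=2621440
net.inet.tcp.sendbuf_max=2621440
net.inet.tcp.recvbuf_max=2621440

# Linux sender VMs and Linux receiver, /etc/sysctl.conf
net.core.rmem_max = 2621440
net.core.wmem_max = 2621440
net.ipv4.tcp_rmem = 4096 131072 2621440
net.ipv4.tcp_wmem = 4096 16384 2621440

# Linux receiver: one iperf3 server per flow
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &

# Each sender VM starts its client at roughly the same time, one against port 5201
# and one against port 5202; link utilization is then the sum of the two flows'
# average throughputs reported by iperf3.
```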
| FreeBSD VM sender1 & sender2 info | FreeBSD 15.0-CURRENT (GENERIC) #0 main-6f6c07813b38: Fri Mar 21 2025 |
| Linux VM sender1 & sender2 info | Ubuntu server 24.04.2 LTS (GNU/Linux 6.8.0-55-generic x86_64) |
| Linux receiver info | Ubuntu desktop 24.04.2 LTS (GNU/Linux 6.11.0-19-generic x86_64) |
link utilization under two competing TCP flows
- the link utilization of two competing TCP flows, one started from each VM at the same time
| TCP CC | version | link utilization under two flows: average(flow1 + flow2) | comment |
| --- | --- | --- | --- |
| CUBIC in FreeBSD default stack | before patch | x Mbits/sec | |
| | after patch | x Mbits/sec | |
| NewReno in FreeBSD default stack | 15-CURRENT | x Mbits/sec | |
| CUBIC in FreeBSD RACK stack | before patch | x Mbits/sec | |
| | after patch | x Mbits/sec | |
| NewReno in FreeBSD RACK stack | 15-CURRENT | x Mbits/sec | |
| CUBIC in Linux | kernel 6.8.0 | x Mbits/sec | reference |
| NewReno in Linux | kernel 6.8.0 | x Mbits/sec | reference |