Test of patch D49540
Patch link: https://reviews.freebsd.org/D49540
- Table of bottleneck queue size as a ratio of the bandwidth-delay product (BDP):
bottleneck queue size | ratio to 1Gbps x 40ms BDP (5000000B) | ratio to 1Gbps x 80ms BDP (10000000B)
64 packets (96000B)   | 1.9% BDP | 1% BDP
96 packets (144000B)  | 2.9% BDP | 1.4% BDP
128 packets (192000B) | 3.8% BDP | 1.9% BDP
256 packets (384000B) | 7.7% BDP | 3.8% BDP
512 packets (768000B) | 15% BDP  | 7.7% BDP
1Mb (125KB)           | 2.6% BDP | 1.3% BDP
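The ratio columns above can be reproduced with a quick shell calculation (a sketch; queue sizes assume 1500-byte packets, and the 1Mb switch buffer is taken as 2^20/8 = 131072 bytes, which matches the 2.6%/1.3% rounding in the last row):

# sketch: reproduce the ratio columns of the table above
# BDP: 1Gbps x 40ms = 5,000,000B; 1Gbps x 80ms = 10,000,000B
for q in 96000 144000 192000 384000 768000 131072; do
    awk -v q="$q" 'BEGIN { printf "%7dB  %4.1f%% of 40ms BDP  %4.1f%% of 80ms BDP\n", q, 100*q/5000000, 100*q/10000000 }'
done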
VM environment with a 1GE switch
single path testbed
Virtual machines (VMs) acting as TCP traffic senders (iperf3) are hosted by bhyve on a physical box (Beelink SER5 AMD Mini PC) running FreeBSD 14.2-RELEASE. A second box hosts the VMs acting as traffic receivers. The two boxes are connected through a 5-port Gigabit Ethernet switch (TP-Link TL-SG105). The switch has 1Mb (125KB) of shared packet buffer memory, so the bottleneck buffer is at most about 2.6% of the 1Gbps x 40ms BDP (see the table above).
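A traffic run on this testbed would typically look like the following (a sketch only; the exact iperf3 options used for the results are not recorded on this page, and the host names follow the transcripts below):

# on the receiver VM (n3ubuntu24): start the iperf3 server
iperf3 -s
# on a sender VM (e.g. n1fbsd): a single bulk-transfer flow toward the receiver
iperf3 -c n3ubuntu24 -t 60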
functionality showcase
This test shows the general TCP congestion window (cwnd) growth pattern before and after the patch, and compares it with the cwnd growth pattern of the Linux kernel.
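One possible way to observe the cwnd growth during a run (an assumption for illustration, not necessarily how the results for this test were collected) is to sample the connection state on the Linux sender periodically:

# sample cwnd of connections to the receiver (192.168.50.32) every 100ms
while true; do
    printf '%s %s\n' "$(date +%s.%N)" "$(ss -tin dst 192.168.50.32 | grep -o 'cwnd:[0-9]*' | head -n 1)"
    sleep 0.1
done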
- An additional 40ms of latency is emulated in the Ubuntu VM receiver with traffic shaping (netem), as shown in the transcript below.
- The minimum bandwidth-delay product (BDP) is 1000Mbps x 40ms = 5 Mbytes, so the sender and receiver socket buffers are configured to 6MB (120% of BDP).
root@n3ubuntu24:~ # tc qdisc add dev enp0s5 root netem delay 40ms
root@n3ubuntu24:~ # tc qdisc show dev enp0s5
qdisc netem 8001: root refcnt 2 limit 1000 delay 40ms
root@n3ubuntu24:~ #

root@n1fbsd:~ # ping -c 4 -S n1fbsd n3ubuntu24
PING n3ubuntu24 (192.168.50.32) from n1fbsdvm: 56 data bytes
64 bytes from 192.168.50.32: icmp_seq=0 ttl=64 time=43.908 ms
64 bytes from 192.168.50.32: icmp_seq=1 ttl=64 time=44.173 ms
64 bytes from 192.168.50.32: icmp_seq=2 ttl=64 time=44.258 ms
64 bytes from 192.168.50.32: icmp_seq=3 ttl=64 time=44.421 ms

--- n3ubuntu24 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 43.908/44.190/44.421/0.186 ms
root@n1fbsd:~ #

root@n1fbsd:~ # sysctl -f /etc/sysctl.conf
net.inet.tcp.hostcache.enable: 0 -> 0
kern.ipc.maxsockbuf: 10485760 -> 10485760
net.inet.tcp.sendbuf_max: 6291456 -> 6291456
net.inet.tcp.recvbuf_max: 6291456 -> 6291456
root@n1fbsd:~ #

root@n3ubuntu24:~ # sysctl -p
net.core.rmem_max = 10485760
net.core.wmem_max = 10485760
net.ipv4.tcp_rmem = 4096 131072 6291456
net.ipv4.tcp_wmem = 4096 16384 6291456
net.ipv4.tcp_no_metrics_save = 1
root@n3ubuntu24:~ #
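The outputs above correspond to roughly the following persistent settings on the two hosts (a sketch reconstructed from the sysctl output shown above):

# FreeBSD sender: /etc/sysctl.conf
net.inet.tcp.hostcache.enable=0
kern.ipc.maxsockbuf=10485760
net.inet.tcp.sendbuf_max=6291456
net.inet.tcp.recvbuf_max=6291456

# Ubuntu receiver: /etc/sysctl.conf
net.core.rmem_max = 10485760
net.core.wmem_max = 10485760
net.ipv4.tcp_rmem = 4096 131072 6291456
net.ipv4.tcp_wmem = 4096 16384 6291456
net.ipv4.tcp_no_metrics_save = 1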
- Info for switching to the FreeBSD RACK TCP stack:
root@n1fbsd:~ # kldstat
Id Refs Address                Size Name
 1    5 0xffffffff80200000  1f75ca0 kernel
 2    1 0xffffffff82810000    368d8 tcp_rack.ko
 3    1 0xffffffff82847000     f0f0 tcphpts.ko
root@n1fbsd:~ # sysctl net.inet.tcp.functions_default=rack
net.inet.tcp.functions_default: freebsd -> rack
root@n1fbsd:~ #
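To make this stack selection persistent across reboots, the module can be loaded from loader.conf and the default set in sysctl.conf (a sketch; the transcript above applies the setting manually at runtime):

# /boot/loader.conf: load the RACK TCP stack module at boot
tcp_rack_load="YES"
# tcphpts_load="YES"    # if the HPTS module is not pulled in automatically

# /etc/sysctl.conf: make RACK the default TCP stack for new connections
net.inet.tcp.functions_default=rack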
FreeBSD VM sender info  | FreeBSD 15.0-CURRENT (GENERIC) main-581e064ddeb4: Fri May 30 2025
Linux VM sender info    | Ubuntu server 24.04.2 LTS (GNU/Linux 6.8.0-60-generic x86_64)
Linux VM receiver info  | Ubuntu server 24.04.2 LTS (GNU/Linux 6.8.0-60-generic x86_64)