Test Cluster One Pointers

This page collects useful hints for working with the FreeBSD netperf cluster.


zoo.FreeBSD.org is the central management host for the FreeBSD test cluster, and is the only link between the test boxes and the outside world. The host, along with its network connectivity, was donated by Sentex. SSH into this box to access the test cluster. Contact netperf-admin to get set up with an account there; this typically requires that you already have a FreeBSD.org account, which will be cloned to the box. Because the test cluster is geographically quite distinct from the main FreeBSD.org cluster, standard FreeBSD.org home directories are not available there. zoo.FreeBSD.org tends to run whatever the most reliable -STABLE branch is at a given time.

Network, console, and power configuration

Most test hosts are configured for remote access to their serial consoles and power controllers.

See TestClusterOneReservations for more specific information on how to reach the console and power control for your test box. In the common case, you can type 'console system-name' to connect to the console of the host, using the port number identified on the reservations page. Please contact the admins if you need help with this.
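For example, a console session from zoo might look like this (the hostname is illustrative; the detach sequence shown is conserver's usual default):

```
console tiger-1
# ... interact with the serial console ...
# detach with conserver's escape sequence, typically: ^Ec.
```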

Machine and power console accounts

Please contact netperf-admin to request the creation of a new account; normally, this is available only to FreeBSD developers with existing FreeBSD.org accounts, and SSH authentication keys will be propagated from freefall.FreeBSD.org.

If you need root access on zoo, please contact netperf-admin.

All test systems have a 'test' account with a password of 'test', as well as a root account without a password.

The username for apc2, apc3, apc4, and apc5 is 'apc'. The passwords are likewise.

Source access

Most test cluster users make use of Perforce to move locally edited source trees to the test cluster for testing; they may also do source code editing on zoo itself. A range of development models can be used, from netbooting kernels built on zoo or a test machine with an NFS root, to rebuilding and installing FreeBSD versions on test machine hard disks, to uploading kernels with embedded MFS file systems.

A copy of p4 is installed in /usr/local/bin. If you have not used p4 on multiple hosts at a time before, remember to create a new p4 client configuration (with a different name from your existing p4 client, for example username_zoo) when setting up the configuration. Test cluster users are encouraged to back up their data, ideally using revision control, and especially not to rely on the persistence of data on test machine hard disks.
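A minimal sketch of setting up a per-host client name (the client name here is illustrative; sh syntax is shown, with the csh equivalent in a comment):

```shell
# Give the zoo checkout its own Perforce client name so it does not collide
# with clients on your other hosts ("username_zoo" is an example name).
P4CLIENT=username_zoo; export P4CLIENT
# csh/tcsh equivalent:  setenv P4CLIENT username_zoo
# 'p4 client' would now create or edit a client spec named username_zoo.
echo "P4CLIENT=$P4CLIENT"
```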

A cvsup mirror of the FreeBSD CVS repository may be found in /zoo/cvsup/FreeBSD-CVS. You'll need to use cvs -R, as otherwise CVS will complain about being unable to write to the directory tree. The mirror is updated every couple of hours using cvsup.
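A read-only checkout from the mirror might look like this (csh syntax; the module path is illustrative):

```
setenv CVSROOT /zoo/cvsup/FreeBSD-CVS
cvs -R checkout src/bin/ls
```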


zoo.FreeBSD.org runs FreeBSD 11-STABLE, which may mean you have to build custom versions of config or build tools if you are building other versions of FreeBSD, especially kernels. Some users will keep a series of config versions in ~/bin for this reason.
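One possible way to keep a branch-specific config(8) around, as a sketch (the source path and binary name are illustrative, and the built binary may land in your obj tree rather than the source directory, depending on your setup):

```
cd /usr/src/usr.sbin/config && make
cp config ~/bin/config-current
```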

The /zoo file system is a good place to keep your scratch files; most users have a /zoo/username directory. You may find you wish to build on your test box, or you might want to build on zoo. If building on zoo, please run large builds under nice so as to avoid disrupting interactive sessions for other users.
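For example, nice(1) runs a command at reduced scheduling priority; substitute your real build command (such as 'make -j4 buildworld', not run here) for the echo:

```shell
# 'nice -n 20' gives the command the lowest user scheduling priority,
# so interactive sessions on zoo stay responsive.
nice -n 20 echo "low-priority build"
```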


By default, DHCPd is configured to provide an IP address as well as netboot parameters for booting via PXE. Test box BIOSes will TFTP /zoo/hostname/boot/pxeboot to begin the boot process. They will use /zoo/hostname as the root file system for the loader and, unless configured otherwise using /zoo/hostname/boot/loader.conf, will load their kernel, modules, etc., from there. By default, hosts will also use an NFS-mounted root, but loader.conf may be configured to force a boot from a local hard disk on the test box in the style of vfs.root.mountfrom=ufs:/dev/aacd0s1a or a similar device string. This may be particularly useful for testing RAID and file systems.
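A minimal loader.conf fragment for this, based on the setting above (the device name is an example; adjust it to your hardware):

```
# /zoo/hostname/boot/loader.conf: boot userland from a local disk instead of NFS
vfs.root.mountfrom="ufs:/dev/aacd0s1a"
```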

Warning: even when loader.conf is configured to boot user space from a local hard disk, the kernel and any modules loaded by the loader will still come from NFS. On the other hand, once booted, kldload will load modules from the local hard disk. This may result in confusion if you don't keep the two in sync, or aren't careful to load modules out of NFS after boot. It is possible to configure loader.conf to load the kernel from the local hard disk as well.

Network Configuration

Test hosts and zoo are linked by a gigabit switch used for access to power controllers and consoles, SSH to test hosts, NFS mounting, DHCP, etc. Please do not do extensive high-bandwidth testing on this network segment (192.168.5), as it will affect performance for other users and produce unreliable results (since it is a shared network segment). Several test hosts are configured with direct gigabit links between them, most notably the hosts tiger-1, tiger-2, and tiger-3, which each have four gigabit ports and are interconnected. cheetah and sloth are also connected to the tigers, and can be used for comparative measurements as well. These hosts should be used for any tests relating to IP forwarding, bridging, etc. leopard-1, leopard-2, leopard-3, and hydra-2 are linked via a dedicated 10 Gbps switch and are available for 10 Gbps testing.

Fetching external files

zoo is running micro_proxy from inetd to allow test machines to fetch individual files or packages. It is not intended for high-speed bulk downloading. Set HTTP_PROXY with setenv or your shell's equivalent (pointing at zoo's IPv6 link-local address if you prefer IPv6 to IPv4) to use it.
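A sketch of pointing a test machine at the proxy ("proxy-address:port" is a placeholder; substitute zoo's address on the test network):

```shell
# sh/bash form; fetch(1) and friends honor HTTP_PROXY.
HTTP_PROXY="http://proxy-address:port"; export HTTP_PROXY
# csh/tcsh equivalent:  setenv HTTP_PROXY http://proxy-address:port
echo "downloads will now go via $HTTP_PROXY"
```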

Time Server

The cluster has its own time server, which syncs to GPS. If you want to keep time on hosts in the test lab, you can either run PTPv2 (/usr/ports/net/ptpd2) or use NTP to talk to the m600.
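For the NTP option, a minimal ntp.conf fragment might look like this ("m600" here is a placeholder for the lab time server's actual address):

```
# /etc/ntp.conf on a test host: sync to the lab's GPS-backed time server
server m600 iburst
```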

Some interconnects


                                  switch1 mgmt .5
sloth    100 vr0  00:00:24:c8:d9:a0 --+ g1
         100 vr1  00:00:24:c8:d9:a1 -~|~--------- Inet
         100 vr2  00:00:24:c8:d9:a2 -~|~-----+
         100 vr3  00:00:24:c8:d9:a3 -~|~----~|~-+
                                      |      |  |
devnull 1000 e0a  00:a0:98:0d:31:88 --+ g2   |  |
         100 e0b  00:a0:98:0d:31:89 -~|~-----+  |
         100 mgmt 00:a0:98:0d:31:8a -~|~--------+
                               no/c --+ g3
        1000 backup link g3 switch2 --+ g4
                               no/c --+ g5
        1000        link g2 switch2 --+ g6
                               no/c --+ g7
                               no/c --+ g8 (copper)
                               no/c --+ g9 (fiber)

no/c == no carrier
???  == active, other end / iface / ...  unknown

Guidelines and Procedures for Use

Before being used, systems must be reserved using the wiki. This prevents accidental collisions in use and allows the cluster maintainers to decide when systems need recycling, may be redeployed for other uses, etc. TestClusterOneReservations may be used to reserve systems in the test cluster. For any long-term commitment of hardware (over a day or two), please contact the cluster administrators before making a reservation.

Please do not reconfigure BIOS settings on the hosts using the BIOS serial console without consulting the cluster admins. This can lead to difficult-to-diagnose problems, inconsistent performance measurement results, etc. Under no circumstances disable BIOS serial redirection (if available) or disable PXE booting.

Test boxes have no direct network access to the outside world; any data, programs, etc, must be moved to them via zoo.FreeBSD.org. This is intentional, and prevents high bandwidth network streams, etc, from accidentally being spewed out onto the Internet.

Root access on test boxes also gives privilege with respect to the /zoo NFS export, and root on test boxes can be used to reconfigure /zoo/hostname subtrees even though the normal account on zoo.FreeBSD.org is not privileged. Please take extra care to avoid damaging test configurations for other test boxes and users.

Feel free to reinstall the operating systems or repartition the disks on any test box you have a reservation for. However, when done with a testing session and reservation, please restore a vanilla install of FreeBSD-CURRENT on the testbox and its netboot tree so that other users don't have to clean up after you. Please coordinate any long-term use of the test cluster with the cluster administrators, and update the wiki to reflect your use.

Please log use of the test cluster for any significant project on the TestClusterOneReservations page, so that we can point at the interesting projects done there when soliciting further hardware donations. Please also mention the test cluster in commits and posts if you use it for significant testing and development as part of a project for the same reason.

Do not power off the systems using APM/ACPI (i.e., no 'shutdown -p now'), since this will mean (for at least some of the systems) that an on-site person has to push the power button, instead of users being able to power systems on remotely using the power controllers.
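When you are done with a box, reboot or halt it instead; both leave the power state controllable via the remote power controllers (command fragment, to be run as root on the test box):

```
shutdown -r now      # reboot
# shutdown -h now    # halt without powering off
```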

Todo/wanted/wish list/ideas list

We would love it if someone added conserver support to kgdb, so that kgdb could attach to kernels via the existing conserver setup. Right now, the two appear to be mutually exclusive, requiring disconnecting conserver from a host before it can be debugged using kgdb.

We would love it if release disk images and memory file system images could be constructed without root access on the host where the build is done. Whether via a userspace UFS file system tool, tarfs and tar-based root file system images, or something else, it would make life much easier if this support were integrated into our build and release environment. (See e.g. Tar output mode for installworld.)

RMI MIPS eval board use

RMI Corporation (http://www.rmicorp.com/) has generously loaned us two highly-threaded MIPS eval boards capable of running FreeBSD. lama1 is a 32-thread RMI XLR eval board, and millipede1 is a 16-thread RMI XLS eval board. The machines are set up to netboot FreeBSD 9.x. This is what the interrupted boot loader looks like:

Using Micron Flash for environment storage.
RMI Bootloader [Version: 1.6.0] for XLS416 on the [ATX_XLS_LTE]
(type 'h' for help)
Board OPN Number: [XLS4LTE-IIIA-20]
B0_XLS416 @ ATX_XLS_LTE $ 
Booting in 2 units. Press any key to halt...
[=== Autoboot Aborted. Dropping to Shell ===]


To netboot RMI's FreeBSD 6.1, interrupt the normal boot loader process when prompted, and then use the following commands (appropriately adapted). On lama1:


ifconfig -i gmac1
ifconfig -i gmac1 -g
tftpc -f /zoo/lama1/boot/RWATSON/kernel -s
userapp vfs.root.mountfrom=nfs: boot.netif.name=rge1 boot.netif.ip= boot.netif.netmask= boot.netif.hwaddr=00:0f:30:00:20:4f boot.nfsroot.server= boot.nfsroot.path=/zoo/lama1

On millipede1:

ifconfig -i pcie-dlink-0
ifconfig -i pcie-dlink-0 -g
tftpc -f /zoo/millipede1/boot/RWATSON/kernel -s
userapp vfs.root.mountfrom=nfs: boot.netif.name=msk0 boot.netif.ip= boot.netif.netmask= boot.netif.hwaddr=00:21:91:19:73:6b boot.nfsroot.server= boot.nfsroot.path=/zoo/millipede1

Booting Linux

To boot Linux from the CF card or the Promise IDE disk, type the following at the RMI boot prompt.

From the Promise IDE disk:

dload promise_0_1 1 /vmlinux.atx

From the CF card:

dload pcmcia_1 1 /vmlinux.atx 20000000

Boot options

From the XLR/XLS boot loader, the following boot flags can be passed to the FreeBSD kernel during boot:



d, D: enter the debugger as soon as possible

g, G: select gdb as the debugger instead of the default ddb

v, V: boot in verbose mode

s, S: boot to single-user mode

One or more of the above flags can be set using the following command from the XLR boot loader; for example:

userapp boot_flags=dv

TestClusterOnePointers (last edited 2018-02-22T14:02:37+0000 by EdMaste)