AFS: Andrew File System


Development of AFS started in 1983 at Carnegie Mellon University as part of the Andrew Project. Most parts of that project have since been replaced by newer implementations; only the file system survives today, even though it has had an eventful past.

The goal of the Andrew File System is to provide a scalable and global file system that allows storage and sharing of files independent of location and operating system. This has been achieved with a server side consisting of a namespace database and volume-based fileservers, and with clients for almost every desktop operating system available.

AFS and Kerberos

Part of the AFS server suite is the Authentication Server, which provided mutual authentication and was implemented using a set of algorithms developed during the Massachusetts Institute of Technology's Project Athena, known as Kerberos. This original authentication server is still available in the OpenAFS code, but has been superseded by newer implementations: Kerberos 5 from both MIT and Heimdal. The current AFS servers are compatible with Kerberos 5, even though the Kerberos ticket on the user side still has to be converted to an AFS token. In the example below this is done by an additional command, but some implementations of kinit can do this natively.

#>kinit <username>@<REALM>
Please enter the password for <username>@<REALM>: ********
#>aklog <cell>
Tokens held by the Cache Manager:
Tokens for afs@<cell> [Expires Mar  2 07:40]
   --End of list--

Several implementations of the conversion command are available; it may be called aklog (OpenAFS), afslog (Arla) or afs5log (something Red Hat has come up with).
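To see both halves of this, you can compare the Kerberos ticket cache with the AFS token list. The listing below is only an illustrative sketch (Heimdal-style klist output, with placeholder names); after aklog a service ticket for the cell's afs principal shows up in the cache and a matching token is held by the Cache Manager:

#>klist
Credentials cache: FILE:/tmp/krb5cc_1001
        Principal: <username>@<REALM>

  Issued           Expires          Principal
Mar  1 19:40:12  Mar  2 07:40:12  krbtgt/<REALM>@<REALM>
Mar  1 19:40:15  Mar  2 07:40:12  afs/<cell>@<REALM>
#>tokens
Tokens held by the Cache Manager:
Tokens for afs@<cell> [Expires Mar  2 07:40]
   --End of list--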

AFS design

One of the great things about AFS is that it is a global file system: it has a namespace through which all available AFS cells (a cell is an administrative unit) can be reached. The root of the namespace is usually the /afs directory. When you do an ls in this root directory, the AFS client (Arla in these examples) contacts the vlserver of its home cell to query the available cells and displays them:

#>ls /afs                           

If you go a step further into the filesystem and query a particular cell, the AFS client first contacts the vlserver of that cell to find the location of the cell's root volume. Next, the client contacts the fileserver holding this volume, which provides the actual data:

#>ls /afs/
admin           hp_ux102        i386_fbsd_51    i386_obsd29     pkg             sparc_nbsd13    sun4c_open_21   usr
alpha_dux40d    i386_fbsd22     i386_linux24    i386_obsd30     pkg.old         sparc_nbsd14    sun4m_413       var

Something similar happens when contacting another cell:

#>ls /afs/
backup  pittnet usr11   usr19   usr26   usr33   usr40   usr48   usr55   usr62   usr7    usr77   usr84   usr91   usr99
class   public  usr12   usr2    usr27   usr34   usr41   usr49   usr56   usr63   usr70   usr78   usr85   usr92   web

The examples above show two great features. First, AFS allows a user to browse through cells all over the world as easily as moving through a local filesystem. Second, it doesn't require any knowledge of the servers or shares within a cell; the namespace takes care of providing a view and pointing to the correct fileservers.
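If you are curious which fileserver actually provides a given path, the cache manager can be asked directly with fs whereis. The path and server name below are placeholders, so this is just a sketch of the syntax:

#>fs whereis /afs/<cell>/usr
File /afs/<cell>/usr is on host <fileserver>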

Have a look here to find more info on the server side.

AFS usage

Anonymous and authenticated access

The ls examples above can be done by anyone, since you are by definition part of the system:anyuser group. This group normally has list and read rights on the root of an AFS cell.

#> fs listacl /afs/
Access list for /afs/ is
Normal rights:
  system:administrators rlidwka
  system:anyuser rl

When descending into a directory to which you don't have rights, you will be stopped:

#> fs la /afs/
fs: You don't have the required access rights on '/afs/'

After becoming an authenticated user:

#> kinit hugo
hugo@MEILAND.NL's Password: 
kinit: NOTICE: ticket renewable lifetime is 1 week
#> aklog
#> tokens

Tokens held by the Cache Manager:

Tokens for [Expires Mar  2 20:41]
   --End of list--

#> fs la /afs/
Access list for /afs/ is
Normal rights:
  system:administrators rlidwka
  hugo rlidwka

#> echo "hello afs" > /afs/
#> cat /afs/ 
hello afs

There are 7 levels of access to a directory or the files in a directory:

  r  read        read the contents of files in the directory
  l  lookup      list the contents of the directory and examine its ACL
  i  insert      create new files or subdirectories in the directory
  d  delete      remove files or subdirectories from the directory
  w  write       modify the contents of files and change their mode bits
  k  lock        place advisory locks on files in the directory
  a  administer  change the ACL of the directory

These ACLs can only be set on directories, not at the file level.
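Rights are granted or revoked with fs setacl. The directory below is hypothetical and only sketches the syntax; fs listacl shows the result:

#> fs setacl -dir /afs/<cell>/usr/hugo/public -acl system:anyuser rl
#> fs listacl /afs/<cell>/usr/hugo/public
Access list for /afs/<cell>/usr/hugo/public is
Normal rights:
  system:administrators rlidwka
  hugo rlidwka
  system:anyuser rl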

OpenAFS support and FreeBSD

There are two main OpenAFS major versions at present: the 1.4 series and the 1.6 series. The 1.4 series has largely been untouched since FreeBSD 5.x and is not expected to work as a client on modern FreeBSD versions. As of mid-May 2011, OpenAFS 1.6.0 is nearing release; currently 1.6.0pre5 is available and has received enough attention to be usable on current FreeBSD versions. The client is not quite usable on FreeBSD 7.x, but on 8.x and 9-CURRENT the client is useful under moderate load. (There are known bugs which are expected to trigger under high load.) Unfortunately, due to the chunk-based caching architecture of the client, the VFS locking is very hard to get right when using a disk-based cache, so only a memory cache is usable.

OpenAFS installation through ports

#>cd /usr/ports/net
#>sh openafs.shar
#>cd openafs
#>make install

(tested on FreeBSD 8.1, 8.2 and a 9-CURRENT snapshot ca. May 2011)

OpenAFS has several configuration files; defaults are provided in many cases but are not reasonable for all configurations. For client functionality, afsd_enable="YES" must be set in rc.conf. The configuration file /usr/local/etc/openafs/ThisCell sets the "home cell" for the client machine, and /usr/local/etc/openafs/cacheinfo tells the client where to mount AFS, where the cache partition is (irrelevant for memory cache), and the size of the cache (in kB).

In order for the client to run, the mountpoint /afs must exist, and the "cache partition" (or directory), which defaults to /usr/vice/cache, must exist as well. The initscript will not start if any of these conditions are not met. Though the cache partition is irrelevant for a memory cache client, the cache manager still checks for its existence at startup.

When a disk-based cache is used, though, it is EXTREMELY IMPORTANT to ensure that the filesystem on which the disk cache lives has at least as much space available as claimed in the cacheinfo file; if the cache partition fills, bad things will occur. For this reason, it is preferred to have the disk cache live on a dedicated slice or partition.
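As a concrete sketch, a minimal memory-cache client setup could look like the following. The cell name is a placeholder, the 100000 kB cache size is an arbitrary example, and the afsd_flags variable is an assumption that the port's rc script follows the usual rc.conf ${name}_flags convention (the -memcache option itself is afsd's switch for selecting a memory cache):

# /etc/rc.conf
afsd_enable="YES"
afsd_flags="-memcache"    # assumption: pass afsd's -memcache option via the standard flags variable

# /usr/local/etc/openafs/ThisCell -- the client's home cell
<cell>

# /usr/local/etc/openafs/cacheinfo -- mountpoint:cache directory:cache size in kB
/afs:/usr/vice/cache:100000

#>mkdir /afs
#>mkdir -p /usr/vice/cache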

#>/usr/local/etc/rc.d/afsd start
#>ls /afs

Note that the above list of cells in /afs will vary based on the version of CellServDB and/or the default cell configured in the client. New versions of CellServDB can be downloaded from upstream; the ports packaging will be updated for future CellServDB changes.
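To check which cells the running client has picked up from CellServDB, the cache manager can be queried; the cell and server names below are placeholders:

#>fs listcells
Cell <cell> on hosts <dbserver1> <dbserver2> <dbserver3>.
...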
