AFS: Andrew File System
Development of AFS started in 1983 at Carnegie Mellon University with the Andrew Project. Most parts of that project have since been replaced by newer implementations; only the file system survives today, though it has had an eventful past:
- In 1988, Transarc was founded by Carnegie Mellon employees to commercially support server and client code for several different operating systems.
- To create an open version of the AFS client, the Arla project was started in 1993 at the Royal Institute of Technology in Stockholm, Sweden.
- In 1994, Transarc was sold to IBM, becoming the IBM Pittsburgh Lab in 1999.
- The next year, in 2000, IBM open-sourced the AFS code as OpenAFS under the IBM Public License.
The goal of the Andrew File System is to provide a scalable, global file system that allows storage and sharing of files independent of location and operating system. This has been achieved with a server side consisting of a namespace database and volume-based fileservers, together with clients for almost every desktop operating system available.
AFS and Kerberos
Part of the AFS server suite is the Authentication Server, which provides mutual authentication using a set of algorithms developed during Massachusetts Institute of Technology's Project Athena; hence these became known as Kerberos. This original authentication server is still available in the OpenAFS code, but has been superseded by newer implementations: Kerberos 5 from both MIT and Heimdal. The current AFS servers are compatible with Kerberos 5, although the Kerberos ticket on the user side still has to be converted to an AFS token. In the example below this is done by an additional command, but some implementations of kinit can do this natively.
#>kinit <username>@<REALM>
Please enter the password for <username>@<REALM>: ********
#>aklog <cell>
#>tokens
Tokens held by the Cache Manager:
Tokens for afs@<cell> [Expires Mar 2 07:40]
--End of list--
Several implementations of the conversion command are available; it may be called aklog (OpenAFS), afslog (Arla) or afs5log (a Red Hat variant).
One of the great things about AFS is that it is a global file system: it has a namespace through which all available AFS cells (a cell is an administrative unit) can be reached. The root of the namespace is normally the /afs directory. When doing an ls in this root directory, Arla contacts the vlserver of its parent cell to query the available cells and displays them:
#>ls /afs
.e.kth.se  besserwisser.org  iastate.edu  ncsa.uiuc.edu            stacken.kth.se
.kth.se    cern.ch           ies.auc.dk   northstar.dartmouth.edu  su.se
...
If you go a step further into the filesystem and query e.g. the cell stacken.kth.se, the AFS client will first contact the vlserver of this cell, which points it to the root volume of the cell. Next the client contacts the fileserver containing this volume, which provides the actual data:
#>ls /afs/stacken.kth.se
admin         hp_ux102     i386_fbsd_51  i386_obsd29  pkg      sparc_nbsd13  sun4c_open_21  usr
alpha_dux40d  i386_fbsd22  i386_linux24  i386_obsd30  pkg.old  sparc_nbsd14  sun4m_413      var
...
Something similar happens when contacting the cell pitt.edu:
#>ls /afs/pitt.edu
backup  pittnet  usr11  usr19  usr26  usr33  usr40  usr48  usr55  usr62  usr7   usr77  usr84  usr91  usr99
class   public   usr12  usr2   usr27  usr34  usr41  usr49  usr56  usr63  usr70  usr78  usr85  usr92  web
...
The examples above show two great features: first, AFS allows a user to browse through cells all over the world as easily as moving through a local filesystem. Second, it doesn't require any knowledge of the servers or shares within a cell; the namespace takes care of providing a view and pointing to the correct fileservers.
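These lookups can also be inspected by hand: the fs whereis command asks the cache manager which fileserver holds the volume backing a given path. A sketch (the fileserver host name below is a placeholder, not a real machine in that cell):

#> fs whereis /afs/stacken.kth.se
File /afs/stacken.kth.se is on host <fileserver>.stacken.kth.se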
Have a look at the server-side documentation to find more info on the server side.
Anonymous and authenticated access
The ls examples above can be done by anyone, since everyone is by definition part of the system:anyuser group. This group normally has list and read rights on the root of an AFS cell.
#> fs listacl /afs/meiland.nl/
Access list for /afs/meiland.nl/ is
Normal rights:
  system:administrators rlidwka
  system:anyuser rl
When descending into a directory to which you don't have rights, you will be stopped:
#> fs la /afs/meiland.nl/users/hugo/
fs: You don't have the required access rights on '/afs/meiland.nl/users/hugo/'
After becoming an authenticated user:
#> kinit hugo
hugo@MEILAND.NL's Password:
kinit: NOTICE: ticket renewable lifetime is 1 week
#> aklog meiland.nl
#> tokens
Tokens held by the Cache Manager:
Tokens for firstname.lastname@example.org [Expires Mar 2 20:41]
--End of list--
#> fs la /afs/meiland.nl/users/hugo
Access list for /afs/meiland.nl/users/hugo is
Normal rights:
  system:administrators rlidwka
  hugo rlidwka
#> echo "hello afs" > /afs/meiland.nl/users/hugo/hello.txt
#> cat /afs/meiland.nl/users/hugo/hello.txt
hello afs
There are seven access rights that can be granted on a directory or the files in it:
- r: read (files)
- l: list (directory)
- i: insert (directory)
- d: delete (directory)
- w: write (files)
- k: lock (files)
- a: administer (directory)
These ACLs can only be set on directories, not on individual files.
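As a sketch of how these rights are managed (the paths and the user name colleague are placeholders), an ACL entry is added with fs setacl and checked with fs listacl; here a user is given read and list access to a subdirectory:

#> fs setacl /afs/meiland.nl/users/hugo/public colleague rl
#> fs listacl /afs/meiland.nl/users/hugo/public
Access list for /afs/meiland.nl/users/hugo/public is
Normal rights:
  system:administrators rlidwka
  hugo rlidwka
  colleague rl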
OpenAFS support and FreeBSD
There are two main OpenAFS major versions at present: the 1.4 series and the 1.6 series. The 1.4 series has largely been untouched since FreeBSD 5.x and is not expected to work as a client on modern FreeBSD versions. As of mid-May 2011, OpenAFS 1.6.0 is nearing release; currently 1.6.0pre5 is available and has received enough attention to be usable on current FreeBSD versions. The client is not quite usable on FreeBSD 7.x, but on 8.x and 9-CURRENT the client is useful under moderate load. (There are known bugs which are expected to trigger under high load.) Unfortunately, due to the chunk-based caching architecture of the client, the VFS locking is very hard to get right when using a disk-based cache, so only a memory cache is usable.
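Since only a memory cache is usable, the cache manager should be told so explicitly. A sketch of the relevant rc.conf lines, assuming the port passes options to the cache manager through an afsd_flags variable (the variable name is an assumption here; -memcache and -blocks are standard afsd options, with -blocks giving the cache size in kB):

afsd_enable="YES"
afsd_flags="-memcache -blocks 65536"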
OpenAFS installation through ports
#>cd /usr/ports/net
#>fetch http://web.mit.edu/freebsd/openafs/openafs.shar
#>sh openafs.shar
#>cd openafs
#>make install
(tested on FreeBSD 8.1, 8.2 and a 9-CURRENT snapshot ca. May 2011)
OpenAFS has several configuration files; defaults are provided in many cases, but they are not reasonable for all configurations. For client functionality, afsd_enable="YES" must be set in rc.conf. The configuration file /usr/local/etc/openafs/ThisCell sets the "home cell" for the client machine, and /usr/local/etc/openafs/cacheinfo tells the client where to mount AFS, where the cache partition is (irrelevant for a memory cache), and the size of the cache (in kB).

In order for the client to run, the mountpoint /afs must exist, and the "cache partition" (or directory), which defaults to /usr/vice/cache, must exist as well; the initscript will not start if any of these conditions are not met. Though the cache partition is irrelevant for a memory-cache client, the cache manager still checks for its existence at startup. When a disk-based cache is used, though, it is EXTREMELY IMPORTANT to ensure that the filesystem on which the disk cache lives has at least as much space available as claimed in the cacheinfo file; if the cache partition fills, bad things will occur. For this reason, it is preferred to have the disk cache live on a dedicated slice or partition.
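As a concrete sketch for a memory-cache client (the cell name meiland.nl and the 50 MB cache size are placeholders, not defaults shipped by the port):

/etc/rc.conf:
afsd_enable="YES"

/usr/local/etc/openafs/ThisCell:
meiland.nl

/usr/local/etc/openafs/cacheinfo, in the format mountpoint:cache directory:cache size in kB:
/afs:/usr/vice/cache:50000

#>mkdir -p /afs /usr/vice/cache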
#>/usr/local/etc/rc.d/afsd start
#>ls /afs
1ts.org  kloe.infn.it
acm-csuf.org  kth.se
acm.uiuc.edu  laroia.net
adrake.org  lcp.nrl.navy.mil
ams.cern.ch  le.infn.it
andrew.cmu.edu  lees.mit.edu
anl.gov  lnf.infn.it
asu.edu  lngs.infn.it
athena.mit.edu  lrz-muenchen.de
atlas.umich.edu  mars.asu.edu
atlass01.physik.uni-bonn.de  math.cornell.edu
ba.infn.it  math.unifi.it
bazquux.org  mathi.uni-heidelberg.de
biocenter.helsinki.fi  mcc.ac.gb
bme.hu  md.kth.se
caspur.it  mech.kth.se
cats.ucsc.edu  membrain.com
cede.psu.edu  meteo.uni-koeln.de
cern.ch  mpe.mpg.de
cgv.tugraz.at  mrow.org
chem.cmu.edu  mrph.org
ciemat.es  msc.cornell.edu
citi.umich.edu  mstacm.org
clarkson.edu  msu.edu
club.cc.cmu.edu  mw.andrew.cmu.edu
cmf.nrl.navy.mil  nada.kth.se
cms.hu-berlin.de  ncsa.uiuc.edu
cnf.cornell.edu  nd.edu
coed.org  nersc.gov
combi.tfh-wildau.de  net.mit.edu
crc.nd.edu  nikhef.nl
cs.cmu.edu  nimlabs.org
cs.hm.edu  nomh.org
cs.pitt.edu  northstar.dartmouth.edu
cs.rose-hulman.edu  numenor.mit.edu
cs.stanford.edu  oc7.org
cs.uwm.edu  p-ng.si
cs.wisc.edu  pdc.kth.se
dapnia.saclay.cea.fr  pfriedma.org
dbic.dartmouth.edu  phy.bris.ac.uk
dementia.org  physics.ucsb.edu
desy.de  physics.unc.edu
dev.mit.edu  physics.wisc.edu
dia.uniroma3.it  physik.uni-freiburg.de
doe.atomki.hu  physik.uni-mainz.de
dsrw.org  physik.uni-wuppertal.de
e18.ph.tum.de  physnet.uni-hamburg.de
ece.cmu.edu  physto.se
eecs.berkeley.edu  pi.infn.it
eecs.harvard.edu  pitt.edu
enea.it  psc.edu
eng.utah.edu  psi.ch
engr.wisc.edu  psm.it
epfl.ch  qatar.cmu.edu
epitech.net  rhic.bnl.gov
es.net  riscpkg.org
ethz.ch  rl.ac.uk
extundo.com  roma3.infn.it
f9.ijs.si  rose-hulman.edu
fnal.gov  rpi.edu
freedaemon.com  rrz.uni-koeln.de
fusione.it  ruk.cuni.cz
glue.umd.edu  rz.uni-jena.de
gppc.de  s-et.aau.dk
grand.central.org  sanchin.se
hackish.org  sbp.ri.cmu.edu
hep-ex.physics.metu.edu.tr  scoobydoo.psc.edu
hep.caltech.edu  scotch.ece.cmu.edu
hep.man.ac.uk  setfilepointer.com
hep.sc.edu  sinenomine.net
hep.wisc.edu  sipb.mit.edu
hephy.at  slac.stanford.edu
i1.informatik.rwth-aachen.de  slackers.net
iastate.edu  soap.mit.edu
ic-afs.arc.nasa.gov  sodre.cx
ic.ac.uk  sph.umich.edu
icemb.it  stacken.kth.se
ics.muni.cz  su.se
ictp.it  sums.math.mcgill.ca
idahofuturetruck.org  syd.kth.se
ies.auc.dk  tgrid.it
ifca.unican.es  tproa.net
ifh.de  tu-bs.de
ific.uv.es  tu-chemnitz.de
illigal.uiuc.edu  ugcs.caltech.edu
impetus.uni-koeln.de  umbc.edu
in2p3.fr  umich.edu
inf.ed.ac.uk  uncc.edu
infn.it  uni-freiburg.de
ing.uniroma1.it  uni-hohenheim.de
interdose.net  uni-mannheim.de
ipp-garching.mpg.de  uni-paderborn.de
ir.stanford.edu  urz.uni-heidelberg.de
isis.unc.edu  usatlas.bnl.gov
isk.kth.se  vn.uniroma3.it
it.kth.se  wam.umd.edu
italia  wu-wien.ac.at
itp.tugraz.at  zcu.cz
jpl.nasa.gov  ziti.uni-heidelberg.de
kfki.hu  zone.mit.edu
Note that the above list of cells in /afs will vary based on the version of CellServDB and/or the default cell configured in the client. New versions of CellServDB can be downloaded from grand.central.org; the ports packaging will be updated for future CellServDB changes.