Adapting to New Storage Technologies

Organizer: AdrianPalmer <>


New technologies in storage can be (and are) seen as disruptive. Work needs to be identified and carried out to enable the use of new technology. The storage stack is changing, adding new layers and shifting existing ones. Zoned Block Devices and persistent memory require changes throughout the stack before they are usable by system administrators. Work is progressing in the Linux kernel, and corresponding work needs to happen in the BSD kernel as well. We'll explore the needed changes and how they affect probing/identification, the GEOM topology, the CAM layer, and possibly even the page cache. The major new change is the requirement of forward-only writes on the disk, now BOTH rotational and radial.
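To make the forward-write-only requirement concrete, here is a minimal userspace sketch of the rule a sequential zone imposes: a write must start exactly at the zone's write pointer and stay within the zone. All structure and function names are illustrative, not actual FreeBSD or ZBC interfaces.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of a sequential-write zone. */
struct zone {
	uint64_t start_lba;	/* first LBA of the zone */
	uint64_t nblocks;	/* zone length in blocks */
	uint64_t write_ptr;	/* next LBA that may be written */
};

/*
 * A write is accepted only if it begins exactly at the write pointer
 * and stays inside the zone -- the "forward-write-only" rule.
 */
static bool
zone_write_ok(const struct zone *z, uint64_t lba, uint64_t nblocks)
{
	if (lba != z->write_ptr)
		return (false);		/* not sequential */
	if (lba + nblocks > z->start_lba + z->nblocks)
		return (false);		/* crosses the zone end */
	return (true);
}

/* Advance the write pointer after a successful write. */
static void
zone_write_commit(struct zone *z, uint64_t nblocks)
{
	z->write_ptr += nblocks;
}
```

Any layer that sits above the drive (GEOM class, filesystem allocator) ends up enforcing some variant of this check, which is why the constraint ripples through the whole stack.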

Adrian's focus will be on SMR and Host Aware Zoned Block Devices, but discussion is welcome on persistent memory, especially to find synergies in the working stacks.

Filesystem design topics will also be welcome, but we will try to focus on the lower layers.


In order to attend you need to register for the storage summit, email the working group organizer, and have your working group slot confirmed by the organizer.

Please do NOT add yourself here. Your name will appear automatically once you have received the confirmation email. You do, however, need to put your name on the general storage summit attendees list.


Username / Affiliation | Topics of Interest

Adrian Palmer |

Session 1: Device Identification and the structural requirements

We'll look over the identification nuances and what needs to change to support the new structure: support for IO ordering guarantees, forward-write-only requirements, new commands, and topology. We'll dig into the CAM and GEOM layers. Solutions should be fast and have as few code paths as possible.
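One identification question the stack must answer early is what kind of zone it is talking to. A sketch of the zone types defined in the ZBC/ZAC drafts (values as in the T10 draft; the helper functions are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Zone types per the ZBC/ZAC drafts.  Host-managed drives expose
 * sequential-write-required zones; host-aware drives expose
 * sequential-write-preferred zones.
 */
enum zone_type {
	ZT_CONVENTIONAL  = 0x1,	/* random writes allowed */
	ZT_SEQ_REQUIRED  = 0x2,	/* host-managed: sequential writes enforced */
	ZT_SEQ_PREFERRED = 0x3	/* host-aware: sequential preferred, not enforced */
};

/* Will the device reject a non-sequential write to this zone type? */
static bool
zone_enforces_sequential(enum zone_type t)
{
	return (t == ZT_SEQ_REQUIRED);
}

/* Should upper layers (GEOM/FS) write sequentially anyway? */
static bool
zone_wants_sequential(enum zone_type t)
{
	return (t == ZT_SEQ_REQUIRED || t == ZT_SEQ_PREFERRED);
}
```

The distinction matters for the fast path: host-aware devices still work with today's code paths (at a performance cost), while host-managed devices make the sequential check mandatory.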

Session 2: Information dissemination and consumption

Where and how will information from the REPORT ZONES command be gathered, stored, combined, and used? This includes userspace storage and multi-volume management. Will CAM store this data, or will GEOM? How frequently will it need to be queried/updated/verified from the drive?
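Whichever layer caches the REPORT ZONES output, per-IO lookups against it must be cheap. A sketch of such a cache, with descriptor fields loosely following the ZBC draft (the lookup code itself is hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative cache entry for one zone from REPORT ZONES. */
struct zone_desc {
	uint64_t start_lba;
	uint64_t length;
	uint64_t write_ptr;
	uint8_t  type;
	uint8_t  condition;
};

/*
 * Find the zone containing `lba`.  Assumes the table is sorted by
 * start_lba, as REPORT ZONES returns it; binary search keeps per-IO
 * overhead low even with tens of thousands of zones cached in CAM
 * or GEOM.
 */
static const struct zone_desc *
zone_lookup(const struct zone_desc *tbl, size_t nzones, uint64_t lba)
{
	size_t lo = 0, hi = nzones;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;
		const struct zone_desc *z = &tbl[mid];

		if (lba < z->start_lba)
			hi = mid;
		else if (lba >= z->start_lba + z->length)
			lo = mid + 1;
		else
			return (z);
	}
	return (NULL);
}
```

The refresh question remains open either way: the cached write pointers go stale on every write, so the caching layer must either track them itself or re-query the drive.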

Session 3: Application/FS framing of IO

We'll look at how applications and filesystems can use the information and format their IO to match what is needed at the lower layers. Retrofitting FFS and ZFS could be quite the challenge, especially to 1) keep backwards compatibility and/or 2) make older devices/schemes forward-compatible.
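One small piece of "framing IO to the lower layers" is never issuing a write that crosses a zone boundary. A hypothetical helper for splitting an extent at the boundary (assuming a uniform zone size, which the ZBC drafts permit but do not require):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Given an extent starting at `lba` of `nblocks` blocks, return how
 * many blocks fit before the next zone boundary, so the caller can
 * split the IO there.  `zone_blocks` is the (assumed uniform) zone
 * size in blocks.
 */
static uint64_t
blocks_to_zone_end(uint64_t lba, uint64_t nblocks, uint64_t zone_blocks)
{
	uint64_t remain = zone_blocks - (lba % zone_blocks);

	return (nblocks < remain ? nblocks : remain);
}
```

A filesystem allocator would apply the same arithmetic at allocation time, so that extents never straddle zones in the first place.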

Background (SMR)

SMR is, in a big way, the driver of the ZAC/ZBC standards, but the standards are not limited to SMR. In fact, with a little work, the new specs can be retroactively applied to conventional drives, as well as to other media types (tape, flash, optical, etc.). Here are some links to the fundamental technology.

SMR Overview

T10 ZBC draft (registration required, but the material is covered in other presentations)

Linux Storage Stack Diagram: changes made in AHCI, libata, SD, the IO scheduler (possibly), device mappers (expected), and filesystems. I expect us to arrive at similar changes in the newbus, CAM, GEOM, and FS layers in FreeBSD.

SMR overview with FS

SMR Filesystem Design

Alternative solutions/code repositories (linux):

SMR Friendly File System -- comprehensive Stack overhaul (in pre-alpha stage)

Zoned Device Mapper -- conversion shim from zoned lower layer to conventional upper layer

libzbc -- Userspace library for zoned devices

SMR simulator -- device mapper to simulate zoned devices

SCSI development for ZBC -- Dr. Hannes Reinecke's working/development branch for ZBC

Background (Persistent Memory)

My understanding of persistent memory is limited, but here are a few thoughts.

Persistent memory can either be thought of as byte addressed and developed from the existing memory management code, or as block addressed, and developed from the existing IO stack code. We'll be focusing on the block addressed path.

For the block-addressed paradigm, I've seen a number of proposed changes: for the speed requirement, much of the storage stack is being stripped away, removing hooks for administrative checks, including antivirus. A challenge is detecting/correcting torn writes, especially if persistent memory is used as an extension of memory.
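A common approach to detecting torn writes is to commit a checksum alongside each block, so a partially completed write is caught at read time. A toy model of the idea, using FNV-1a purely for illustration (nothing here reflects an actual persistent-memory API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLK_DATA 64	/* toy block size */

/* A block that carries its own checksum. */
struct pmem_block {
	uint8_t  data[BLK_DATA];
	uint64_t csum;
};

/* FNV-1a hash -- a stand-in for whatever checksum the stack adopts. */
static uint64_t
fnv1a(const uint8_t *p, size_t n)
{
	uint64_t h = 1469598103934665603ULL;

	for (size_t i = 0; i < n; i++) {
		h ^= p[i];
		h *= 1099511628211ULL;
	}
	return (h);
}

static void
pmem_write(struct pmem_block *b, const uint8_t *src)
{
	memcpy(b->data, src, BLK_DATA);
	b->csum = fnv1a(b->data, BLK_DATA);	/* committed with the data */
}

/* Returns 1 if the block is intact, 0 if a torn write is detected. */
static int
pmem_verify(const struct pmem_block *b)
{
	return (fnv1a(b->data, BLK_DATA) == b->csum);
}
```

This only detects tears; correcting them needs a second mechanism (logging, copy-on-write, or a redundant copy), which is exactly the kind of hook that gets lost when the stack is stripped down for speed.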


I welcome feedback and additional background information for the working group. My goal is to arrive at a plan of direction for the kernel changes required by these new technologies, so work may begin.


Session 1:

Small audience. We talked about zoned device characteristics, how they can be used in various workloads, and implementation timelines projected in years.

Discussed the basic changes in CAM and GEOM. Decided 2 GEOM classes are needed: 1) one that exposes the zone information and 2) one that translates the zone information. The first would be for ZFS, the second for FFS.

Session 2:

Merged with the ZFS working group to discuss SMR for Session 3. Came up with an idea that could be implemented as a circular-buffer zone type. Began to discuss solutions among developers.

We began by briefly mentioning SMR (ZAC/ZBC) characteristics and proposing back references in ZFS. McKusick offered a USENIX paper to read on back references. Talk then shifted to ZFS write-in-order mechanics and their alignment to ZAC/ZBC zone constructs. An idea was floated on how to mimic GC in ZFS by adding a zone construct that 1) left behind holes as data was rewritten, and 2) rewrote zone data up to the point of a hole to merge in new data (old data was simply rewritten with no change). This advances the write pointer, but the problem remains that data in front of the WP is unreadable -- the solution to this is proposed as an amendment to ZAC/ZBC v2 (circular write zones).
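The circular-write-zone idea can be modeled as a write pointer that wraps around the zone, with only a window of recently written blocks behind the pointer readable. This is entirely hypothetical: the zone type discussed here was a proposed ZAC/ZBC v2 amendment, not part of the published drafts, and all names below are invented for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model of a circular write zone. */
struct circ_zone {
	uint64_t nblocks;	/* zone size in blocks */
	uint64_t write_ptr;	/* next block to write, wraps modulo nblocks */
	uint64_t valid;		/* live blocks immediately behind the WP */
};

/* Append `n` blocks; the oldest data is overwritten once the zone wraps. */
static void
circ_write(struct circ_zone *z, uint64_t n)
{
	z->write_ptr = (z->write_ptr + n) % z->nblocks;
	z->valid += n;
	if (z->valid > z->nblocks)
		z->valid = z->nblocks;	/* oldest data overwritten */
}

/* Is `blk` readable, i.e. within the valid window behind the WP? */
static bool
circ_readable(const struct circ_zone *z, uint64_t blk)
{
	uint64_t back = (z->write_ptr + z->nblocks - blk - 1) % z->nblocks;

	return (back < z->valid);
}
```

The model shows why the amendment was wanted: in a plain sequential zone the region ahead of the write pointer is unreadable and unusable until a reset, whereas here it is simply the oldest data awaiting rewrite.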

Agreement to move forward in baby steps for ZFS modifications. First step: teach ZFS to write forward only.

Session 3:

N/A - Working groups were all merged.

201602StorageSummit/NewStorageTechnologies (last edited 2016-03-07 21:45:45 by AdrianPalmer)