The pool format does not affect file system version compatibility or the ability to send file systems between pools.
Since most features can be enabled independently of each other, the on-disk format of the pool is specified by the set of all features marked as active on the pool. If the pool was created by another software version, this set may include unsupported features.
Each supported feature also has a short name. By convention, a feature's short name is the portion of its guid which follows the ':' (e.g. com.example:feature_name would have the short name feature_name); however, a feature's short name may differ across ZFS implementations if following the convention would result in name conflicts.
active
    This feature's on-disk format changes are in effect on the pool. Support for this feature is required to import the pool in read-write mode. If this feature is not read-only compatible, support is also required to import the pool in read-only mode (see "Read-only compatibility").

enabled
    An administrator has marked this feature as enabled on the pool, but the feature's on-disk format changes have not been made yet. The pool can still be imported by software that does not support this feature, but changes may be made to the on-disk format at any time which will move the feature to the active state. Some features may support returning to the enabled state after becoming active. See feature-specific documentation for details.

disabled
    This feature's on-disk format changes have not been made and will not be made unless an administrator moves the feature to the enabled state. Features cannot be disabled once they have been enabled.
The state of supported features is exposed through pool properties of the form feature@short_name.
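For example, an individual feature's state can be read through its pool property with zpool get (the pool name tank below is a placeholder):

```shell
# Query the state of a single feature on a pool named "tank".
zpool get feature@async_destroy tank

# List the state of every feature on the pool.
zpool get all tank | grep feature@
```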
inactive
    The feature is in the enabled state and therefore the pool's on-disk format is still compatible with software that does not support this feature.

readonly
    The feature is read-only compatible and the pool has been imported in read-only mode.
async_destroy
    Destroying a file system requires traversing all of its data in order to return its used space to the pool. Without async_destroy, the file system is not fully removed until all space has been reclaimed. If the destroy operation is interrupted by a reboot or power outage, the next attempt to open the pool will need to complete the destroy operation synchronously. When async_destroy is enabled, the file system's data will be reclaimed by a background process, allowing the destroy operation to complete without traversing the entire file system. The background process is able to resume interrupted destroys after the pool has been opened, eliminating the need to finish interrupted destroys as part of the open operation. The amount of space remaining to be reclaimed by the background process is available through the freeing property. This feature is only active while freeing is non-zero.
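A sketch of how the background reclaim can be observed, assuming a pool named tank with a file system bigfs (both names are placeholders):

```shell
# Destroy a large file system; with async_destroy enabled the
# command returns quickly and space is reclaimed in the background.
zfs destroy tank/bigfs

# Watch the space still waiting to be reclaimed; the feature
# is active while this value is non-zero.
zpool get freeing tank
```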
empty_bpobj
    This feature increases the performance of creating and using a large number of snapshots of a single filesystem or volume, and also reduces the disk space required. When there are many snapshots, each snapshot uses many Block Pointer Objects (bpobj's) to track blocks associated with that snapshot. However, in common use cases, most of these bpobj's are empty. This feature allows us to create each bpobj on-demand, thus eliminating the empty bpobjs. This feature is active while there are any filesystems, volumes, or snapshots which were created after enabling this feature.
filesystem_limits
    This feature enables filesystem and snapshot limits. These limits can be used to control how many filesystems and/or snapshots can be created at the point in the tree on which the limits are set. This feature is active once either of the limit properties has been set on a dataset. Once activated the feature is never deactivated.
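For example, the limits are set through the filesystem_limit and snapshot_limit dataset properties (the dataset name tank/projects is a placeholder):

```shell
# Allow at most 100 child file systems and 50 snapshots to be
# created under tank/projects.
zfs set filesystem_limit=100 tank/projects
zfs set snapshot_limit=50 tank/projects

# Setting either limit property moves the feature to active.
zpool get feature@filesystem_limits tank
```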
lz4_compress
    lz4 is a high-performance real-time compression algorithm that features significantly faster compression and decompression, as well as a higher compression ratio, than the older lzjb compression. Typically, lz4 compression is approximately 50% faster on compressible data and 200% faster on incompressible data than lzjb. It is also approximately 80% faster on decompression, while giving approximately a 10% better compression ratio. When the lz4_compress feature is set to enabled, the administrator can turn on lz4 compression on any dataset on the pool using the zfs(8) command. Also, all newly written metadata will be compressed with the lz4 algorithm. Since this feature is not read-only compatible, this operation will render the pool unimportable on systems without support for the lz4_compress feature. Booting off of lz4-compressed root pools is supported. This feature becomes active as soon as it is enabled and will never return to being enabled.
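Turning on lz4 compression for a dataset might look like this (pool and dataset names are placeholders):

```shell
# Enable lz4 compression on a dataset.
zfs set compression=lz4 tank/data

# Verify the property took effect.
zfs get compression tank/data
```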
multi_vdev_crash_dump
    This feature allows a dump device to be configured with a pool comprised of multiple vdevs. Those vdevs may be arranged in any mirrored or raidz configuration.
spacemap_histogram
    This feature allows ZFS to maintain more information about how free space is organized within the pool. If this feature is enabled, ZFS will set this feature to active when a new space map object is created or an existing space map is upgraded to the new format. Once the feature is active, it will remain in that state until the pool is destroyed.
extensible_dataset
    This feature allows more flexible use of internal ZFS data structures, and exists for other features to depend on. This feature will be active when the first dependent feature uses it, and will be returned to the enabled state when all datasets that use this feature are destroyed.
bookmarks
    This feature enables use of the zfs bookmark subcommand. This feature is active while any bookmarks exist in the pool. All bookmarks in the pool can be listed by running zfs list -t bookmark -r poolname.
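A short sketch of bookmark usage, assuming a pool tank with dataset tank/data and snapshots snap1 and snap2 (all names are placeholders):

```shell
# Create a bookmark from an existing snapshot.
zfs bookmark tank/data@snap1 tank/data#bm1

# List all bookmarks in the pool.
zfs list -t bookmark -r tank

# A bookmark can serve as the incremental source of a send
# even after the original snapshot has been destroyed.
zfs send -i tank/data#bm1 tank/data@snap2 > incremental.zfs
```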
enabled_txg
    Once this feature is enabled, ZFS records the transaction group number in which new features are enabled. This has no user-visible impact, but other features may depend on this feature. This feature becomes active as soon as it is enabled and will never return to being enabled.
hole_birth
    This feature improves performance of incremental sends ("zfs send -i") and receives for objects with many holes. The most common case of hole-filled objects is zvols. An incremental send stream from snapshot A to snapshot B contains information about every block that changed between A and B. Blocks which did not change between those snapshots can be identified and omitted from the stream using a piece of metadata called the 'block birth time', but birth times are not recorded for holes (blocks filled only with zeroes). Since holes created after A cannot be distinguished from holes created before A, information about every hole in the entire filesystem or zvol is included in the send stream. For workloads where holes are rare this is not a problem. However, when incrementally replicating filesystems or zvols with many holes (for example a zvol formatted with another filesystem) a lot of time will be spent sending and receiving unnecessary information about holes that already exist on the receiving side. Once the hole_birth feature has been enabled, the block birth times of all new holes will be recorded. Incremental sends between snapshots created after this feature is enabled will use this new metadata to avoid sending information about holes that already exist on the receiving side. This feature becomes active as soon as it is enabled and will never return to being enabled.
embedded_data
    This feature improves the performance and compression ratio of highly-compressible blocks. Blocks whose contents can compress to 112 bytes or smaller can take advantage of this feature. When this feature is enabled, the contents of highly-compressible blocks are stored in the block "pointer" itself (a misnomer in this case, as it contains the compressed data, rather than a pointer to its location on disk). Thus the space of the block (one sector, typically 512 bytes or 4KB) is saved, and no additional i/o is needed to read and write the data block. This feature becomes active as soon as it is enabled and will never return to being enabled.
zpool_checkpoint
    This feature enables the "zpool checkpoint" subcommand that can checkpoint the state of the pool at the time it was issued and later rewind back to it or discard it. This feature becomes active when the "zpool checkpoint" command is used to checkpoint the pool. The feature will only return back to being enabled when the pool is rewound or the checkpoint has been discarded.
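The checkpoint lifecycle described above can be sketched as follows (pool name tank is a placeholder):

```shell
# Take a checkpoint of the pool; this activates the feature.
zpool checkpoint tank

# Later, either discard the checkpoint...
zpool checkpoint -d tank

# ...or rewind the pool back to it at import time.
zpool export tank
zpool import --rewind-to-checkpoint tank
```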
device_removal
    This feature enables the "zpool remove" subcommand to remove top-level vdevs, evacuating them to reduce the total size of the pool. This feature becomes active when the "zpool remove" command is used on a top-level vdev, and will never return to being enabled.
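For example, removing a top-level vdev might look like this (the vdev name mirror-1 is a placeholder; actual names appear in zpool status output):

```shell
# Remove a top-level mirror vdev; its data is evacuated to the
# remaining vdevs in the pool.
zpool remove tank mirror-1

# Monitor the evacuation progress.
zpool status tank
```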
obsolete_counts
    This feature is an enhancement of device_removal, which will over time reduce the memory used to track removed devices. When indirect blocks are freed or remapped, we note that their part of the indirect mapping is "obsolete", i.e. no longer needed. See also the "zfs remap" subcommand in zfs(8). This feature becomes active when the "zpool remove" command is used on a top-level vdev, and will never return to being enabled.
spacemap_v2
    This feature enables the use of the new space map encoding which consists of two words (instead of one) whenever it is advantageous. The new encoding allows space maps to represent large regions of space more efficiently on-disk while also increasing their maximum addressable offset. This feature becomes active as soon as it is enabled and will never return to being enabled.
large_blocks
    The large_blocks feature allows the record size on a dataset to be set larger than 128KB. This feature becomes active once a recordsize property has been set larger than 128KB, and will return to being enabled once all filesystems that have ever had their recordsize larger than 128KB are destroyed. Please note that booting from datasets that have recordsize greater than 128KB is NOT supported by the FreeBSD boot loader.
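Activating the feature is a matter of raising the recordsize property on a dataset (dataset name is a placeholder):

```shell
# Raise the record size above the default 128KB maximum;
# this moves large_blocks to the active state.
zfs set recordsize=1M tank/media
zfs get recordsize tank/media
```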
large_dnode
    The large_dnode feature allows the size of dnodes in a dataset to be set larger than 512B. This feature becomes active once a dataset contains an object with a dnode larger than 512B, which occurs as a result of setting the dnodesize dataset property to a value other than legacy. The feature will return to being enabled once all filesystems that have ever contained a dnode larger than 512B are destroyed. Large dnodes allow more data to be stored in the bonus buffer, thus potentially improving performance by avoiding the use of spill blocks.
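For example, the dnodesize property controls this behavior (dataset name is a placeholder; valid values include legacy, auto, and fixed sizes such as 1k through 16k):

```shell
# Let ZFS size dnodes automatically based on bonus-buffer needs;
# new objects with dnodes larger than 512B activate the feature.
zfs set dnodesize=auto tank/data
zfs get dnodesize tank/data
```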
sha512
    The sha512 feature enables the use of the SHA-512/256 truncated hash algorithm (FIPS 180-4) for checksum and dedup. The native 64-bit arithmetic of SHA-512 provides an approximate 50% performance boost over SHA-256 on 64-bit hardware and is thus a good minimum-change replacement candidate for systems where hash performance is important, but these systems cannot for whatever reason utilize the faster skein algorithms. When the sha512 feature is set to enabled, the administrator can turn on the sha512 checksum on any dataset using the zfs set checksum=sha512 dataset command. This feature becomes active once a checksum property has been set to sha512, and will return to being enabled once all filesystems that have ever had their checksum set to sha512 are destroyed.
skein
    The skein feature enables the use of the Skein hash algorithm for checksum and dedup. Skein is a high-performance secure hash algorithm that was a finalist in the NIST SHA-3 competition. It provides a very high security margin and high performance on 64-bit hardware (80% faster than SHA-256). This implementation also utilizes the new salted checksumming functionality in ZFS, which means that the checksum is pre-seeded with a secret 256-bit random key (stored on the pool) before being fed the data block to be checksummed. Thus the produced checksums are unique to a given pool, preventing hash collision attacks on systems with dedup. When the skein feature is set to enabled, the administrator can turn on the skein checksum on any dataset using the zfs set checksum=skein dataset command. This feature becomes active once a checksum property has been set to skein, and will return to being enabled once all filesystems that have ever had their checksum set to skein are destroyed. Booting off of pools using skein is supported.
allocation_classes
    This feature enables support for separate allocation classes. This feature becomes active when a dedicated allocation class vdev (dedup or special) is created with "zpool create" or "zpool add". With device removal, it can be returned to the enabled state if all the top-level vdevs from an allocation class are removed.
The mdoc(7) implementation of this manual page was initially written by Martin Matuska <mm@FreeBSD.org>.
ZPOOL-FEATURES(7)                June 7, 2017