For an overview of creating and managing ZFS storage pools, see the zpoolconcepts(7) manual page.
The zpool command provides subcommands to create and destroy storage pools, add capacity to storage pools, and provide information about the storage pools. The following subcommands are supported:
| zpool -? | |
| Displays a help message. | |
| zpool -V, --version | |
| An alias for the zpool version subcommand. | |
| zpool version | |
| Displays the software version of the zpool userland utility and the ZFS kernel module. | |
| zpool-create(8) | |
| Creates a new storage pool containing the virtual devices specified on the command line. | |
| zpool-initialize(8) | |
| Begins initializing by writing to all unallocated regions on the specified devices, or all eligible devices in the pool if no individual devices are specified. | |
| zpool-destroy(8) | |
| Destroys the given pool, freeing up any devices for other use. | |
| zpool-labelclear(8) | |
| Removes ZFS label information from the specified device. | |
| zpool-attach(8)/zpool-detach(8) | |
| Converts a non-redundant disk into a mirror, or increases the redundancy level of an existing mirror (attach), or performs the inverse operation (detach). | |
| zpool-add(8)/zpool-remove(8) | |
| Adds the specified virtual devices to the given pool, or removes the specified device from the pool. | |
| zpool-replace(8) | |
| Replaces an existing device (which may be faulted) with a new one. | |
| zpool-split(8) | |
| Creates a new pool by splitting all mirrors in an existing pool (which decreases its redundancy). | |
| zpool-list(8) | |
| Lists the given pools along with a health status and space usage. | |
| zpool-get(8)/zpool-set(8) | |
| Retrieves the given list of properties (or all properties if all is used) for the specified storage pool(s), or sets the given property on the specified pool. | |
| zpool-status(8) | |
| Displays the detailed health status for the given pools. | |
| zpool-iostat(8) | |
| Displays logical I/O statistics for the given pools/vdevs. Physical I/O operations may be observed via iostat(1). | |
| zpool-events(8) | |
| Lists all recent events generated by the ZFS kernel modules. These events are consumed by zed(8) and used to automate administrative tasks such as replacing a failed device with a hot spare. That manual page also describes the subclasses and event payloads that can be generated. | |
| zpool-history(8) | |
| Displays the command history of the specified pool(s) or all pools if no pool is specified. | |
| zpool-scrub(8) | |
| Begins a scrub or resumes a paused scrub. | |
| zpool-checkpoint(8) | |
| Checkpoints the current state of the pool, which can be later restored by zpool import --rewind-to-checkpoint. | |
| zpool-trim(8) | |
| Initiates an immediate on-demand TRIM operation for all of the free space in a pool. This operation informs the underlying storage devices of all blocks in the pool which are no longer allocated and allows thinly provisioned devices to reclaim the space. | |
| zpool-sync(8) | |
| This command forces all in-core dirty data to be written to the primary pool storage and not the ZIL. It will also update administrative information including quota reporting. Without arguments, zpool sync will sync all pools on the system. Otherwise, it will sync only the specified pool(s). | |
| zpool-upgrade(8) | |
| Manage the on-disk format version of storage pools. | |
| zpool-wait(8) | |
| Waits until all background activity of the given types has ceased in the given pool. | |
| zpool-offline(8)/zpool-online(8) | |
| Takes the specified physical device offline or brings it online. | |
| zpool-resilver(8) | |
| Starts a resilver. If an existing resilver is already running, it will be restarted from the beginning. | |
| zpool-reopen(8) | |
| Reopens all the vdevs associated with the pool. | |
| zpool-clear(8) | |
| Clears device errors in a pool. | |
| zpool-import(8) | |
| Makes disks containing ZFS storage pools available for use on the system. | |
| zpool-export(8) | |
| Exports the given pools from the system. | |
| zpool-reguid(8) | |
| Generates a new unique identifier for the pool. | |
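Taken together, the subcommands above cover a pool's entire lifecycle. The sequence below is a minimal dry-run sketch, not part of the manual page itself: the pool name tank and devices sda/sdb are illustrative, and ZPOOL defaults to printing the commands rather than executing them (set ZPOOL=zpool as root with real devices to run them for real).

```shell
#!/bin/sh
# Dry-run sketch of a typical pool lifecycle using the subcommands above.
# ZPOOL defaults to "echo zpool", so each command is printed, not executed;
# the pool name "tank" and devices sda/sdb are examples only.
ZPOOL="${ZPOOL:-echo zpool}"
$ZPOOL create tank mirror sda sdb   # zpool-create(8): make a mirrored pool
$ZPOOL scrub tank                   # zpool-scrub(8): verify checksums
$ZPOOL status tank                  # zpool-status(8): inspect health
$ZPOOL export tank                  # zpool-export(8): release the pool
```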
The following exit values are returned:
| 0 | Successful completion. |
| 1 | An error occurred. |
| 2 | Invalid command line options were specified. |
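These exit values are what make zpool scriptable. A small sketch, assuming "$1" stands in for a previously captured "$?" (check_status is a hypothetical helper, not part of zpool):

```shell
#!/bin/sh
# Hypothetical helper mapping zpool's documented exit values to messages.
# "$1" stands in for a captured "$?"; zpool itself is never invoked here.
check_status() {
  case "$1" in
    0) echo "success" ;;
    1) echo "an error occurred" ;;
    2) echo "invalid command line options" ;;
    *) echo "unexpected exit value: $1" ;;
  esac
}
check_status 0   # prints "success"
check_status 2   # prints "invalid command line options"
```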
The following command creates a pool with a single raidz root vdev that consists of six disks:
# zpool create tank raidz sda sdb sdc sdd sde sdf
The following command creates a pool with two mirrors, where each mirror contains two disks:
# zpool create tank mirror sda sdb mirror sdc sdd
The following command creates a non-redundant pool using two disk partitions:
# zpool create tank sda1 sdb2
The following command adds two mirrored disks to the pool tank:
# zpool add tank mirror sda sdb
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH   ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE   -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE   -
zion       -      -      -         -      -      -      -  FAULTED  -
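Listing output like the above is easy to post-process. A sketch that flags pools whose HEALTH column is not ONLINE (the sample rows are copied from the listing above; on a live system you would pipe in `zpool list` output instead):

```shell
#!/bin/sh
# Flag pools whose health is not ONLINE in captured `zpool list` output.
# $(NF-1) is the HEALTH column: the second-to-last field on every row.
unhealthy="$(awk 'NR > 1 && $(NF-1) != "ONLINE" {print $1}' <<'EOF'
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH   ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE   -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE   -
zion       -      -      -         -      -      -      -  FAULTED  -
EOF
)"
echo "unhealthy pools: $unhealthy"   # prints "unhealthy pools: zion"
```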
The following command destroys the pool tank and any datasets contained within:
# zpool destroy -f tank
The following command exports the devices in pool tank so that they can be relocated or later imported:
# zpool export tank
# zpool import
pool: tank
id: 15451357997522795478
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
tank ONLINE
mirror ONLINE
sda ONLINE
sdb ONLINE
# zpool import tank
# zpool upgrade -a
This system is currently running ZFS version 2.
The following command creates a new pool with an available hot spare:
# zpool create tank mirror sda sdb spare sdc
If one of the disks were to fail, the pool would be reduced to the degraded state. The failed device can be replaced using the following command:
# zpool replace tank sda sdd
Once the data has been resilvered, the spare is automatically removed and is made available for use should another device fail. The hot spare can be permanently removed from the pool using the following command:
# zpool remove tank sdc
The following command creates a ZFS storage pool consisting of two, two-way mirrors and mirrored log devices:
# zpool create pool mirror sda sdb mirror sdc sdd log mirror sde sdf
The following command adds two disks for use as cache devices to a ZFS storage pool:
# zpool add pool cache sdc sdd
Once added, the cache devices gradually fill with content from main memory. Depending on the size of your cache devices, it could take over an hour for them to fill. Capacity and reads can be monitored using the iostat subcommand as follows:
# zpool iostat -v pool 5
Given this configuration:
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
The command to remove the mirrored log mirror-2 is:
# zpool remove tank mirror-2
The command to remove the mirrored data mirror-1 is:
# zpool remove tank mirror-1
# zpool list -v data
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
data 23.9G 14.6G 9.30G - 48% 61% 1.00x ONLINE -
raidz1 23.9G 14.6G 9.30G - 48%
sda - - - - -
sdb - - - 10G -
sdc - - - - -
# zpool status -c vendor,model,size
        NAME        STATE     READ WRITE CKSUM  vendor   model         size
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            U1      ONLINE       0     0     0  SEAGATE  ST8000NM0075  7.3T
            U10     ONLINE       0     0     0  SEAGATE  ST8000NM0075  7.3T
            U11     ONLINE       0     0     0  SEAGATE  ST8000NM0075  7.3T
            U12     ONLINE       0     0     0  SEAGATE  ST8000NM0075  7.3T
            U13     ONLINE       0     0     0  SEAGATE  ST8000NM0075  7.3T
            U14     ONLINE       0     0     0  SEAGATE  ST8000NM0075  7.3T

# zpool iostat -vc size
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----
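Scripted columns like these lend themselves to further post-processing. A sketch extracting per-row allocation figures from captured iostat output (the sample rows are copied from the output above; the field positions are assumed from its header):

```shell
#!/bin/sh
# Extract name/alloc/free from captured `zpool iostat` rows, skipping the
# two header lines and the dashed separator lines.
summary="$(awk '$1 !~ /^-+$/ && $2 ~ /G$/ {
  printf "%s alloc=%s free=%s\n", $1, $2, $3
}' <<'EOF'
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----
EOF
)"
printf '%s\n' "$summary"
```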
| ZFS_ABORT | Cause zpool to dump core on exit for the purposes of running ::findleaks. |
| ZFS_COLOR | Use ANSI color in zpool status and zpool iostat output. |
| ZPOOL_AUTO_POWER_ON_SLOT | Automatically attempt to turn on the drives enclosure slot power to a drive when running the zpool online or zpool clear commands. This has the same effect as passing the --power option to those commands. |
| ZPOOL_POWER_ON_SLOT_TIMEOUT_MS | The maximum time in milliseconds to wait for a slot power sysfs value to return the correct value after writing it. For example, after writing "on" to the sysfs enclosure slot power_control file, it can take some time for the enclosure to power down the slot and return "on" if you read back the 'power_control' value. Defaults to 30 seconds (30000ms) if not set. |
| ZPOOL_IMPORT_PATH | The search path for devices or files to use with the pool. This is a colon-separated list of directories in which zpool looks for device nodes and files. Similar to the -d option in zpool import. |
| ZPOOL_IMPORT_UDEV_TIMEOUT_MS | The maximum time in milliseconds that zpool import will wait for an expected device to be available. |
| ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE | If set, suppress warning about non-native vdev ashift in zpool status. The value is not used, only the presence or absence of the variable matters. |
| ZPOOL_VDEV_NAME_GUID | Cause zpool subcommands to output vdev guids by default. This behavior is identical to the zpool status -g command line option. |
| ZPOOL_VDEV_NAME_FOLLOW_LINKS | Cause zpool subcommands to follow links for vdev names by default. This behavior is identical to the zpool status -L command line option. |
| ZPOOL_VDEV_NAME_PATH | Cause zpool subcommands to output full vdev path names by default. This behavior is identical to the zpool status -P command line option. |
| ZFS_VDEV_DEVID_OPT_OUT | Older OpenZFS implementations had issues when attempting to display pool config vdev names if a devid NVP value is present in the pool's config. For example, a pool that originated on the illumos platform would have a devid value in the config, and zpool status would fail when listing the config. This would also be true for future Linux-based pools. A pool can be stripped of any devid values on import, or prevented from adding them on zpool create or zpool add, by setting ZFS_VDEV_DEVID_OPT_OUT. |
| ZPOOL_SCRIPTS_AS_ROOT | Allow a privileged user to run zpool status -c and zpool iostat -c. Normally, only unprivileged users are allowed to run -c. |
| ZPOOL_SCRIPTS_PATH | The search path for scripts when running zpool status -c and zpool iostat -c. This is a colon-separated list of directories and overrides the default ~/.zpool.d and /etc/zfs/zpool.d search paths. |
| ZPOOL_SCRIPTS_ENABLED | Allow a user to run zpool status -c and zpool iostat -c. If ZPOOL_SCRIPTS_ENABLED is not set, it is assumed that the user is allowed to run them. |
| ZFS_MODULE_TIMEOUT | Time, in seconds, to wait for /dev/zfs to appear. Defaults to 10, max 600 (10 minutes). If < 0, wait forever; if 0, don't wait. |
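Several of these variables are commonly set together in wrapper scripts. A minimal sketch (the values are illustrative, and the zpool invocation is commented out because it requires a live system with a pool named tank):

```shell
#!/bin/sh
# Environment tuning for zpool, using variables documented above.
export ZPOOL_VDEV_NAME_PATH=1    # full vdev path names (like zpool status -P)
export ZFS_COLOR=1               # ANSI color in status/iostat output
export ZFS_MODULE_TIMEOUT=60     # wait up to 60 seconds for /dev/zfs
# zpool status tank              # example invocation on a real system
env | grep '^ZFS_MODULE_TIMEOUT='
```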
| ZPOOL (8) | March 16, 2022 |