ZPOOL Administration

Managing Storage Pools

In ZFS, storage pools, or zpools, are the fundamental units of storage management. A zpool aggregates the capacity of physical devices into a single, logical storage space. All data, including datasets, snapshots, and volumes, is stored within zpools. The management of these storage pools is essential for ensuring that ZFS operates efficiently and reliably. The following sections describe how to create and destroy zpools, manage devices within a pool, and monitor the health of a zpool.

Creating and Destroying zpools

Creating a ZFS Storage Pool

To create a zpool, use the zpool create command. The syntax for creating a pool requires specifying a pool name and the devices that will be part of the pool. Below are several examples demonstrating different configurations.

Example 1: Creating a Single Mirror Pool
$ sudo zpool create mypool mirror /dev/ada1 /dev/ada2
$ sudo zpool status
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0

errors: No known data errors

This command creates a zpool named mypool using the devices /dev/ada1 and /dev/ada2 in a mirrored configuration, providing redundancy by storing identical data across both devices.

Example 2: Creating a Pool with Two Mirrors
$ sudo zpool create mypool mirror /dev/ada1 /dev/ada2 mirror /dev/ada3 /dev/ada4
$ sudo zpool status
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0

errors: No known data errors

This creates a zpool named mypool consisting of two mirrored pairs. The first mirror is made of /dev/ada1 and /dev/ada2, while the second mirror uses /dev/ada3 and /dev/ada4. ZFS stripes data across the two mirror vdevs, and each vdev keeps a redundant copy of the blocks written to it, combining increased capacity with redundancy.

Example 3: Creating a RAID-Z1 Pool
$ sudo zpool create myraidzpool raidz1 /dev/ada1 /dev/ada2 /dev/ada3 /dev/ada4
$ sudo zpool status
  pool: myraidzpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        myraidzpool ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0

errors: No known data errors

This command creates a RAID-Z1 pool named myraidzpool using four devices. RAID-Z1 provides parity protection, allowing the pool to tolerate the failure of one disk without data loss.


Each of these layouts offers a different trade-off between usable capacity, redundancy, and performance: mirrors favor read performance and fast rebuilds, while RAID-Z favors usable capacity.

Destroying a ZFS Storage Pool

To destroy a zpool, use the zpool destroy command. This operation is irreversible and will result in the loss of all data in the pool, so it should be used with caution:

$ sudo zpool destroy mypool

The mypool zpool and all datasets, snapshots, and volumes within it will be permanently deleted. Ensure that all important data is backed up before destroying a pool.
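If datasets within the pool are mounted and busy, the destroy operation may fail; the -f option forces the active datasets to be unmounted first:

$ sudo zpool destroy -f mypool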

Adding and Removing Devices

Adding Devices to a ZFS Storage Pool

ZFS allows additional devices to be added to an existing pool, increasing its capacity. Devices can be added to a pool using the zpool add command. For example, to add a new disk to the pool mypool:

$ sudo zpool add -f mypool /dev/ada3
$ sudo zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
          ada3      ONLINE       0     0     0

errors: No known data errors

This increases the storage capacity of the pool by adding /dev/ada3 as a new top-level vdev. Note that the -f flag is required here: the existing vdev is a mirror while the new disk is not, so zpool warns about a mismatched replication level, and data written to the new disk carries no redundancy of its own. Adding devices should be treated as permanent; top-level vdevs can only be removed on OpenZFS versions that support device removal, and RAID-Z vdevs cannot be removed at all.
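To grow a mirrored pool without sacrificing redundancy, add a complete mirror vdev instead of a bare disk. For example, assuming two spare disks (here the hypothetical /dev/ada5 and /dev/ada6) are available:

$ sudo zpool add mypool mirror /dev/ada5 /dev/ada6

This adds a second mirrored pair as a new top-level vdev; ZFS stripes new writes across both mirrors, and no -f flag is needed because the replication levels match.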

Removing Devices from a ZFS Storage Pool

ZFS supports removing devices from certain pool configurations. On OpenZFS releases with device removal support, the zpool remove command evacuates and removes a top-level vdev, such as the single disk added in the previous example (individual members of a mirror are instead removed with zpool detach, shown below):

$ sudo zpool remove mypool /dev/ada3
$ sudo zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0

errors: No known data errors

This removes the device /dev/ada3 from the mypool zpool, after ZFS has copied its data onto the remaining vdevs. Keep in mind that not all pool configurations allow device removal; RAID-Z vdevs, for example, cannot be removed, and ZFS refuses such requests rather than risk data loss.
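By contrast, an individual member of a mirror is removed with the zpool detach command, which leaves the remaining member(s) holding a complete copy of the data. For example, to detach one disk from the mirror:

$ sudo zpool detach mypool /dev/ada2

Detaching reduces the mirror's redundancy, so it is normally done only when a disk is being replaced or repurposed.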

Expanding and Resizing Pools

Expanding a ZFS Storage Pool

A ZFS storage pool can be expanded by adding additional devices or by replacing existing devices with larger ones. When every device in a vdev has been replaced with a larger one, ZFS can grow the pool to use the additional capacity, provided the autoexpand pool property is enabled. To replace a device, use the zpool replace command:

$ sudo zpool replace mypool /dev/ada1 /dev/ada4
$ sudo zpool status mypool
  pool: mypool
 state: ONLINE
  scan: resilver in progress since Tue Nov  5 16:02:33 2024
        2.10G scanned at 210M/s, 430M issued at 43.0M/s, 2.10G total
        430M resilvered, 20.0% done, 00:00:40 to go
config:

        NAME             STATE     READ WRITE CKSUM
        mypool           ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            replacing-0  ONLINE       0     0     0
              ada1       ONLINE       0     0     0
              ada4       ONLINE       0     0     0  (resilvering)
            ada2         ONLINE       0     0     0

errors: No known data errors

In this example, /dev/ada1 is replaced with /dev/ada4; during the resilver, the temporary replacing-0 vdev holds both the old and new devices. Once the replacement completes (and the other mirror member has also been upgraded), the extra space becomes usable, as shown below.
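Whether the extra space becomes available automatically depends on the autoexpand pool property, which is off by default. It can be enabled before the devices are replaced, or the expansion can be triggered manually per device afterwards:

$ sudo zpool set autoexpand=on mypool
$ sudo zpool online -e mypool /dev/ada4

The online -e form expands the named device to use all of its available space; with a mirror, the pool only grows once every member of the vdev has been upgraded.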

Resizing a ZFS Storage Pool

In most cases there is no need to resize a pool manually: ZFS grows the pool when new devices are added, and replacing devices with larger ones expands capacity as described above. The pool's capacity and usage can be checked with the zpool list command:

$ sudo zpool list mypool
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool  928G  10.5G   917G        -         -     0%     1%  1.00x  ONLINE  -

This command provides information about the current capacity and usage of the pool.
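For a per-vdev breakdown of capacity, fragmentation, and health, add the -v option (the exact columns vary between OpenZFS versions):

$ sudo zpool list -v mypool
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool      928G  10.5G   917G        -         -     0%     1%  1.00x  ONLINE  -
  mirror-0  928G  10.5G   917G        -         -     0%     1%      -  ONLINE
    ada1       -      -      -        -         -      -      -      -  ONLINE
    ada2       -      -      -        -         -      -      -      -  ONLINE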

Monitoring Pool Health

Checking the Health of a ZFS Storage Pool

ZFS provides tools to monitor the health of zpools and ensure that data is protected against corruption or device failure. The zpool status command is the primary tool for checking the status of a pool:

$ sudo zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada2    ONLINE       0     0     0

errors: No known data errors

This command displays detailed information about the pool’s configuration, the status of the devices, and any errors that have occurred. ZFS continuously checks for data consistency and reports any issues. If errors are detected, ZFS attempts to repair the data using redundant copies.
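For a quick health check across all pools on a system, the -x option prints status only for pools that are exhibiting problems:

$ sudo zpool status -x
all pools are healthy

When every pool is ONLINE and error-free, this single line is the entire output, which makes -x convenient for scripts and monitoring.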

Scrubbing a ZFS Storage Pool

Scrubbing is a process that verifies data integrity across the entire pool by reading all the data and correcting any errors using redundant copies. To initiate a scrub, use the zpool scrub command:

$ sudo zpool scrub mypool
$ sudo zpool status mypool
  pool: mypool
 state: ONLINE
  scan: scrub in progress since Tue Nov  5 14:32:09 2024
        10.5G scanned at 215M/s, 6.73G issued at 72.0M/s, 10.5G total
        0B repaired, 64% done, 00:02:11 to go
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada2    ONLINE       0     0     0

errors: No known data errors

This command starts a scrub of the mypool zpool. Scrubbing is recommended periodically to ensure data integrity, especially in pools with large amounts of data or in environments where data reliability is critical. The progress of the scrub can be monitored using the zpool status command.
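A running scrub can be stopped with the -s option if it interferes with production I/O:

$ sudo zpool scrub -s mypool

Scrubs are usually scheduled rather than run by hand. As an illustrative example, a root crontab entry that scrubs mypool at 03:00 on the first day of each month might look like this (on FreeBSD, the periodic(8) framework can do the same via daily_scrub_zfs_enable in /etc/periodic.conf):

0 3 1 * * /sbin/zpool scrub mypool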

Dealing with a Failed Device in ZFS

One of the key features of ZFS is its ability to handle device failures gracefully, particularly in configurations that offer redundancy, such as mirrors or RAID-Z. When a device in a ZFS pool fails, the system can continue operating if redundancy is configured, and the faulty device can be replaced to restore the pool’s full functionality. Below is a step-by-step guide to diagnosing and addressing a failed device in ZFS.

Identifying a Failed Device

The first step in dealing with a failed device is to identify the issue. The zpool status command provides detailed information about the health of a zpool and its devices.

For example:

$ sudo zpool status mypool
  pool: mypool
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            ada1    ONLINE       0     0     0
            ada2    FAULTED      0    42     0  too many errors

errors: No known data errors

In this example, the pool mypool is in a DEGRADED state, and the device /dev/ada2 is FAULTED after accumulating too many I/O errors (visible in the WRITE column). The pool is still functional because its mirror partner /dev/ada1 remains ONLINE, but redundancy is lost.

Replacing the Failed Device

Once a device has failed, it needs to be replaced to restore redundancy and ensure data integrity. Follow these steps to replace the failed device.

1. Attach the New Device

First, physically replace the failed device with a new one. This could involve swapping out a disk or connecting a new disk to the system.
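If a failing disk is still attached and responding, it can be taken offline first so that ZFS stops issuing I/O to it before it is physically removed:

$ sudo zpool offline mypool /dev/ada2

This step is unnecessary for a device that has already been FAULTED, since ZFS no longer uses it.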

2. Use the zpool replace Command

After physically replacing the failed device, the zpool replace command can be used to replace the old device with the new one. For example:

$ sudo zpool replace mypool /dev/ada2 /dev/ada3

In this command, /dev/ada2 is the failed device, and /dev/ada3 is the new device that will take its place.
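If the new disk is installed in the same physical location and appears under the same device name as the old one, the new device argument can be omitted:

$ sudo zpool replace mypool /dev/ada2

In this form, ZFS reformats the disk now present at /dev/ada2 and resilvers onto it.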

When the replacement starts, ZFS automatically begins resilvering, which rebuilds the data from the working device(s) in the mirror or RAID-Z configuration onto the new device.

The resilvering process can be monitored using the zpool status command:

$ sudo zpool status mypool
  pool: mypool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will continue
        to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Sep  5 12:45:09 2024
        1.21G scanned at 105M/s, 780M issued at 65M/s, 2.10G total
        0B repaired, 37% done, 00:03:52 to go
config:

        NAME             STATE     READ WRITE CKSUM
        mypool           DEGRADED     0     0     0
          mirror-0       DEGRADED     0     0     0
            ada1         ONLINE       0     0     0
            replacing-0  DEGRADED     0     0     0
              ada2       FAULTED      0    42     0  too many errors
              ada3       ONLINE       0     0     0  (resilvering)

errors: No known data errors

Once the resilvering process is complete, the pool will return to a healthy state, and redundancy will be restored.

3. Verify the Pool's Health

After the resilvering process is complete, it is important to verify that the pool is back to a healthy state by checking the pool status:

$ sudo zpool status mypool
  pool: mypool
 state: ONLINE
  scan: resilvered 2.10G in 00:03:52 with 0 errors on Thu Sep  5 12:49:01 2024
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada3    ONLINE       0     0     0

errors: No known data errors

At this point, the pool has returned to an ONLINE state, and the system is fully functional with redundancy restored.

Dealing with Non-Redundant Pools

If the failed device is part of a non-redundant configuration (for example, a single-disk pool), the pool will enter a FAULTED state and become unavailable. In these cases, data recovery might be difficult unless backups exist. Here’s how the status might look in such a situation:

$ sudo zpool status mypool
  pool: mypool
 state: FAULTED
status: The pool cannot be accessed. All devices are faulted or degraded.
action: Replace the faulted device and restore the pool, if possible.
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      FAULTED     0     0     0
          ada1      FAULTED     0     0     0  too many errors

errors: No known data errors

In this case, there is no redundancy, and the data on the failed disk may be lost. Replacing the device will not restore the pool, and data recovery tools outside of ZFS may be required. This underscores the importance of using redundant configurations like mirrors or RAID-Z and maintaining regular backups.

Clearing Errors After a Transient Failure

In some cases, a device may experience a temporary or transient failure, such as a loose cable or momentary I/O issue. After addressing the root cause of the problem, errors can be cleared and the device re-tested using the zpool clear command:

$ sudo zpool clear mypool /dev/ada2

This command clears the error counts for the specified device and allows ZFS to re-check the device's health.
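After clearing the errors, it is good practice to scrub the pool so that ZFS re-reads the data and confirms the device is behaving correctly:

$ sudo zpool scrub mypool

If errors reappear during or after the scrub, the failure was not transient and the device should be replaced.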

Importing and Exporting Pools

ZFS allows storage pools (zpools) to be exported and imported, providing a way to move them between systems or across different operating environments. When a pool is exported, it is safely detached from the current system, ensuring that its metadata is consistent. The pool can then be imported on the same or another system without data loss.

Exporting a ZFS Storage Pool

Exporting a pool detaches it from the system while leaving the data on the underlying devices untouched. This operation is useful when moving a pool to another system or making hardware changes.

To export a pool, use the zpool export command. For example, to export a pool named mypool:

$ sudo zpool export mypool

After this command, the pool is no longer accessible from the current system. The pool's data and structure are preserved, but ZFS releases the pool and any associated devices, allowing them to be moved to another system.

Note: Ensure that no datasets or applications are actively using the pool before exporting it; zpool export fails if datasets are busy, and forcing an export while applications are writing can result in access errors or lost writes.

Importing a ZFS Storage Pool

Once a pool has been exported, it can be imported back into the same system or a different one. To import a pool, use the zpool import command. First, you can list all available pools that can be imported with:

$ sudo zpool import
   pool: mypool
     id: 1234567890123456789
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        mypool     ONLINE
          mirror-0  ONLINE
            ada1    ONLINE
            ada2    ONLINE

This command lists all pools available for import, including the pool’s name (mypool), pool ID, and the configuration of its devices. You can then import the pool using its name:

$ sudo zpool import mypool
$ sudo zpool status
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0

errors: No known data errors

Alternatively, if there are multiple pools with the same name, you can use the pool ID to import the pool:

$ sudo zpool import 1234567890123456789

After importing, the pool and all datasets within it become available, and the system can access the data as if the pool had always been attached.
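When a pool is imported on a rescue or temporary system, its datasets may try to mount over the host's own filesystems. The -R option imports the pool under an alternate root to avoid such collisions:

$ sudo zpool import -R /mnt mypool

With this option, a dataset that would normally mount at /data is mounted at /mnt/data instead; the altroot setting is temporary and is not written back to the pool.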

Forcing an Import

In some cases a pool was not cleanly exported, or it is still marked as in use because it was last imported on another system. In these situations the import is refused, and the -f option can be used to force it:

$ sudo zpool import -f mypool

Warning: Forcing an import should be used with caution. If the pool is in use on another system, forcing an import may lead to data corruption.

Handling Pool Conflicts During Import

If the system detects a pool with the same name as an existing pool on the current system, ZFS will prevent the import to avoid conflicts. In such cases, you can rename the pool during import to resolve the conflict. For example, to import a pool named mypool as mypool2:

$ sudo zpool import mypool mypool2

This renames the pool during import, allowing it to coexist with the existing pool of the same name.

Example Workflow: Moving a Pool Between Systems

  1. On the source system, export the pool:

    $ sudo zpool export mypool
    
  2. Move the storage devices (e.g., disks) associated with the pool to the new system.

  3. On the destination system, import the pool:

    $ sudo zpool import mypool
    

If the pool was not cleanly exported or the system cannot automatically import it, use the force option:

$ sudo zpool import -f mypool

Using zpool history to Track Pool Operations

The zpool history command is a useful tool that provides an audit trail of all operations performed on a ZFS storage pool. It records all administrative actions, such as pool creation, modifications, and settings changes, along with the corresponding timestamps. This can be helpful for debugging, auditing, or simply understanding the lifecycle of a pool and its datasets.

Viewing the History of a ZFS Pool

To view the history of a specific zpool, use the zpool history command followed by the name of the pool. This will display a list of all commands and operations performed on the pool since its creation.

For example, to view the history of a pool named mypool, run the following command:

$ sudo zpool history mypool
History for 'mypool':
2024-11-05.14:45:32 zpool create mypool mirror /dev/ada1 /dev/ada2
2024-11-05.15:10:45 zpool add mypool /dev/ada3
2024-11-05.15:22:13 zfs create mypool/dataset1
2024-11-05.15:55:01 zfs set compression=on mypool/dataset1

In this example, the output shows the following operations performed on the mypool zpool:

  1. The pool was created as a mirror of two devices (ada1 and ada2).
  2. An additional device (ada3) was added to the pool.
  3. A dataset (mypool/dataset1) was created within the pool.
  4. Compression was enabled on the dataset.

Each entry includes a timestamp to provide a chronological record of events.
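For auditing purposes, the -l option displays the history in long format, which adds the user name, the hostname, and the zone in which each operation was performed:

$ sudo zpool history -l mypool

This makes it possible to attribute each change to a specific administrator on systems where several people manage the pools.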

Viewing All Pool Histories

If you want to see the history of all zpools on your system, simply run the zpool history command without specifying a pool name:

$ sudo zpool history
History for 'mypool':
2024-11-05.14:45:32 zpool create mypool mirror /dev/ada1 /dev/ada2
2024-11-05.15:10:45 zpool add mypool /dev/ada3
2024-11-05.15:22:13 zfs create mypool/dataset1
2024-11-05.15:55:01 zfs set compression=on mypool/dataset1

History for 'anotherpool':
2024-11-06.10:25:18 zpool create anotherpool raidz1 /dev/ada4 /dev/ada5 /dev/ada6
2024-11-06.12:13:39 zfs create anotherpool/dataset2

This command lists the history of all zpools on the system, along with a breakdown of the commands run on each pool.

Including Internal ZFS Events

In addition to user-issued commands, you can also display internal ZFS events by using the -i option. This shows not only administrative commands but also internal system events related to the pool, such as scrubbing, resilvering, or other automated maintenance tasks:

$ sudo zpool history -i mypool
History for 'mypool':
2024-11-05.14:45:32 zpool create mypool mirror /dev/ada1 /dev/ada2
2024-11-05.15:10:45 zpool add mypool /dev/ada3
2024-11-05.15:55:01 zfs set compression=on mypool/dataset1
2024-11-06.09:10:21 scrub initiated
2024-11-06.09:50:42 scrub completed

This example shows that a scrub was automatically initiated and completed, in addition to the user-issued commands.

Filtering the History

You can also filter the history output to show only certain types of operations. This is useful for auditing specific changes or debugging. For example, to search for all dataset creation commands, you could filter the output like this:

$ sudo zpool history | grep 'zfs create'
2024-11-05.15:22:13 zfs create mypool/dataset1
2024-11-06.12:13:39 zfs create anotherpool/dataset2

This filters the output to show only zfs create commands from the history logs.

Use Cases for zpool history

The zpool history command is a valuable tool in several scenarios:

  • Auditing: It allows administrators to review what changes were made to the pool and when. This is especially useful in environments where multiple people have access to ZFS administrative tools.

  • Debugging: If a pool is experiencing issues or if changes led to unexpected behavior, reviewing the history can help track down which operations might have caused the problem.

  • Compliance: For organizations that need to track administrative operations for compliance or security reasons, zpool history provides an easily accessible audit trail.

  • Documentation: When performing complex configurations, zpool history can serve as a record of the steps taken to configure pools, making it easier to replicate the setup on other systems.