ZFS Advanced Topics
This chapter describes ZFS volumes, using ZFS with zones, ZFS alternate root pools, and ZFS rights profiles.
9.1. ZFS Volumes
A ZFS volume is a dataset that represents a block device and can be used like any block device. ZFS volumes are identified as devices in the /dev/zvol/{dsk,rdsk}/path directory.
In the following example, a 5-Gbyte ZFS volume, tank/vol, is created:
# zfs create -V 5gb tank/vol
When you create a volume, a reservation is automatically set to the initial size of the volume. The reservation continues to equal the size of the volume so that unexpected behavior doesn't occur. For example, if the size of the volume shrinks, data corruption might occur. Be careful when changing the size of a volume.
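For example, you can inspect the automatic reservation and grow a volume through the volsize property. The following is a minimal sketch, assuming the tank/vol volume created above; note that on some ZFS versions the reservation is reported through the refreservation property instead:

# zfs get volsize,reservation tank/vol
# zfs set volsize=6g tank/vol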
In addition, if you create a snapshot of a volume that changes in size, you might introduce file system inconsistencies if you attempt to roll back the snapshot or create a clone from the snapshot.
For information about file system properties that can be applied to volumes, see ZFS Native Property Descriptions.
If you are using a system configured to use zones, you cannot create or clone a ZFS volume in a non-global zone. Any attempt to do so will fail. For information about using ZFS volumes in a global zone, see Adding ZFS Volumes to a Non-Global Zone.
9.1.1. Using a ZFS Volume as a Swap or Dump Device
To set up a swap area, create a ZFS volume of a specific size and then enable swap on that device. Do not swap to a file on a ZFS file system. A ZFS swap file configuration is not supported.
In the following example, the 5-Gbyte tank/vol volume is added as a swap device.
# swap -a /dev/zvol/dsk/tank/vol
# swap -l
swapfile                  dev    swaplo  blocks    free
/dev/dsk/c0t0d0s1         32,33      16  1048688   1048688
/dev/zvol/dsk/tank/vol    254,1      16  10485744  10485744
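To make the swap device persist across reboots, you can also add an entry for the volume to /etc/vfstab. A minimal sketch, assuming the tank/vol volume above:

/dev/zvol/dsk/tank/vol  -  -  swap  -  no  -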
Using a ZFS volume as a dump device is not supported. Use the dumpadm command to set up a dump device.
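For example, the following sketch directs the dump device at a dedicated disk slice; the slice name is hypothetical and depends on your configuration:

# dumpadm -d /dev/dsk/c0t0d0s1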
9.1.2. Using a ZFS Volume as a Solaris iSCSI Target
Solaris iSCSI targets and initiators are supported in the Solaris release. In addition, you can easily create a ZFS volume as an iSCSI target by setting the shareiscsi property on the volume. For example:
# zfs create -V 2g tank/volumes/v2
# zfs set shareiscsi=on tank/volumes/v2
# iscsitadm list target
Target: tank/volumes/v2
    iSCSI Name: iqn.1986-03.com.sun:02:984fe301-c412-ccc1-cc80-cf9a72aa062a
    Connections: 0
After the iSCSI target is created, set up the iSCSI initiator. For more information about Solaris iSCSI targets and initiators, see Chapter 14, Configuring Solaris iSCSI Targets and Initiators (Tasks), in System Administration Guide: Devices and File Systems.
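As a brief, hedged illustration, an initiator on another Solaris host might discover the target by using SendTargets discovery along these lines; the discovery address is hypothetical:

# iscsiadm add discovery-address 192.168.0.10:3260
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi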
Solaris iSCSI targets can also be created and managed with the iscsitadm command. If you set the shareiscsi property on a ZFS volume, do not also use the iscsitadm command to create the same target device. Otherwise, you will end up with duplicate target information for the same device.
A ZFS volume as an iSCSI target is managed just like any other ZFS dataset. However, the rename, export, and import operations work a little differently for iSCSI targets.
- When you rename a ZFS volume, the iSCSI target name remains the same. For example:

  # zfs rename tank/volumes/v2 tank/volumes/v1
  # iscsitadm list target
  Target: tank/volumes/v1
      iSCSI Name: iqn.1986-03.com.sun:02:984fe301-c412-ccc1-cc80-cf9a72aa062a
      Connections: 0
- Exporting a pool that contains a shared ZFS volume causes the target to be removed. Importing a pool that contains a shared ZFS volume causes the target to be shared. For example:

  # zpool export tank
  # iscsitadm list target
  # zpool import tank
  # iscsitadm list target
  Target: tank/volumes/v1
      iSCSI Name: iqn.1986-03.com.sun:02:984fe301-c412-ccc1-cc80-cf9a72aa062a
      Connections: 0
All iSCSI target configuration information is stored within the dataset. Like an NFS shared file system, an iSCSI target that is imported on a different system is shared appropriately.
9.2. Using ZFS With Zones
The following sections describe how to use ZFS with zones.
Keep the following points in mind when associating ZFS datasets with zones:
- You can add a ZFS file system or a ZFS clone to a non-global zone with or without delegating administrative control.
- You can add a ZFS volume as a device to non-global zones.
- You cannot associate ZFS snapshots with zones at this time.
- Do not use a ZFS file system for a global zone root path or a non-global zone root path in the Solaris 10 releases. You can use ZFS as a zone root path in the Solaris Express releases, but keep in mind that patching or upgrading these zones is not supported.
In the sections below, a ZFS dataset refers to a file system or clone.
Adding a dataset allows the non-global zone to share space with the global zone, though the zone administrator cannot control properties or create new file systems in the underlying file system hierarchy. This is identical to adding any other type of file system to a zone, and should be used when the primary purpose is solely to share common space.
ZFS also allows datasets to be delegated to a non-global zone, giving complete control over the dataset and all its children to the zone administrator. The zone administrator can create and destroy file systems or clones within that dataset, and modify properties of the datasets. The zone administrator cannot affect datasets that have not been added to the zone, and cannot exceed any top-level quotas set on the exported dataset.
Consider the following interactions when working with ZFS on a system configured to use zones:
-
A ZFS file system that is added to a non-global zone must have its
mountpoint
property set to legacy. -
When a source
zonepath
and the targetzonepath
both reside on ZFS and are in the same pool,zoneadm clone
will now automatically use ZFS clone to clone a zone. Thezoneadm clone
command will take a ZFS snapshot of the sourcezonepath
and set up the targetzonepath
. You cannot use thezfs clone
command to clone a zone. For more information, see Part II, Zones, in System Administration Guide: Virtualization Using the Solaris Operating System.
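As a brief illustration, the clone operation might look like the following sketch; it assumes a new zone, zion2, has already been configured with its zonepath in the same pool as the existing zion zone (zion2 is a hypothetical name):

# zoneadm -z zion2 clone zion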
9.2.1. Adding ZFS File Systems to a Non-Global Zone
You can add a ZFS file system as a generic file system when the goal is solely to share space with the global zone. A ZFS file system that is added to a non-global zone must have its mountpoint property set to legacy.
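For example, the following sketch sets the property before the file system is added to the zone, assuming the tank/zone/zion file system used in the example below:

# zfs set mountpoint=legacy tank/zone/zion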
You can add a ZFS file system to a non-global zone by using the zonecfg command's add fs subcommand.
In the following example, a ZFS file system is added to a non-global zone by a global administrator in the global zone.
# zonecfg -z zion
zonecfg:zion> add fs
zonecfg:zion:fs> set type=zfs
zonecfg:zion:fs> set special=tank/zone/zion
zonecfg:zion:fs> set dir=/export/shared
zonecfg:zion:fs> end
This syntax adds the ZFS file system, tank/zone/zion, to the already configured zion zone, mounted at /export/shared. The mountpoint property of the file system must be set to legacy, and the file system cannot already be mounted in another location. The zone administrator can create and destroy files within the file system. The file system cannot be remounted in a different location, nor can the zone administrator change properties on the file system such as atime, readonly, compression, and so on. The global zone administrator is responsible for setting and controlling properties of the file system.
For more information about the zonecfg command and about configuring resource types with zonecfg, see Part II, Zones, in System Administration Guide: Virtualization Using the Solaris Operating System.
9.2.2. Delegating Datasets to a Non-Global Zone
If the primary goal is to delegate the administration of storage to a zone, then ZFS supports adding datasets to a non-global zone through use of the zonecfg command's add dataset subcommand.
In the following example, a ZFS file system is delegated to a non-global zone by a global administrator in the global zone.
# zonecfg -z zion
zonecfg:zion> add dataset
zonecfg:zion:dataset> set name=tank/zone/zion
zonecfg:zion:dataset> end
Unlike adding a file system, this syntax causes the ZFS file system tank/zone/zion to be visible within the already configured zion zone. The zone administrator can set file system properties, as well as create children. In addition, the zone administrator can take snapshots, create clones, and otherwise control the entire file system hierarchy.
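As a brief illustration, the zone administrator could then manage the delegated dataset from inside the zone along these lines; the child dataset name is hypothetical:

zion# zfs create tank/zone/zion/data
zion# zfs set compression=on tank/zone/zion/data
zion# zfs snapshot tank/zone/zion/data@backup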
For more information about what actions are allowed within zones, see Managing ZFS Properties Within a Zone.
9.2.3. Adding ZFS Volumes to a Non-Global Zone
ZFS volumes cannot be added to a non-global zone by using the zonecfg command's add dataset subcommand. If an attempt to add a ZFS volume is detected, the zone cannot boot. However, volumes can be added to a zone by using the zonecfg command's add device subcommand.
In the following example, a ZFS volume is added to a non-global zone by a global administrator in the global zone:
# zonecfg -z zion
zion: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zion> create
zonecfg:zion> add device
zonecfg:zion:device> set match=/dev/zvol/dsk/tank/vol
zonecfg:zion:device> end
This syntax exports the tank/vol volume to the zone. Note that adding a raw volume to a zone has implicit security risks, even if the volume doesn't correspond to a physical device. In particular, the zone administrator could create malformed file systems that would panic the system when a mount is attempted. For more information about adding devices to zones and the related security risks, see Understanding the zoned Property.
For more information about adding devices to zones, see Part II, Zones, in System Administration Guide: Virtualization Using the Solaris Operating System.
9.2.4. Using ZFS Storage Pools Within a Zone
ZFS storage pools cannot be created or modified within a zone. The delegated administration model centralizes control of physical storage devices within the global zone and control of virtual storage to non-global zones. While a pool-level dataset can be added to a zone, any command that modifies the physical characteristics of the pool, such as creating, adding, or removing devices, is not allowed from within a zone. Even if physical devices are added to a zone by using the zonecfg command's add device subcommand, or if files are used, the zpool command does not allow the creation of any new pools within the zone.
9.2.5. Managing ZFS Properties Within a Zone
After a dataset is added to a zone, the zone administrator can control specific dataset properties. When a dataset is added to a zone, all its ancestors are visible as read-only datasets, while the dataset itself is writable as are all of its children. For example, consider the following configuration:
global# zfs list -Ho name
tank
tank/home
tank/data
tank/data/matrix
tank/data/zion
tank/data/zion/home
If tank/data/zion is added to a zone, each dataset would have the following properties.
| Dataset | Visible | Writable | Immutable Properties |
|---|---|---|---|
| tank | Yes | No | - |
| tank/home | No | - | - |
| tank/data | Yes | No | - |
| tank/data/matrix | No | - | - |
| tank/data/zion | Yes | Yes | sharenfs, zoned, quota, reservation |
| tank/data/zion/home | Yes | Yes | sharenfs, zoned |
Note that every parent of tank/data/zion is visible read-only, all children are writable, and datasets that are not part of the parent hierarchy are not visible at all. The zone administrator cannot change the sharenfs property, because non-global zones cannot act as NFS servers. Neither can the zone administrator change the zoned property, because doing so would expose a security risk as described in the next section. Any other settable property can be changed, except for the quota property, and the dataset itself. This behavior allows the global zone administrator to control the space consumption of all datasets used by the non-global zone. In addition, the sharenfs and mountpoint properties cannot be changed by the global zone administrator once a dataset has been added to a non-global zone.
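For example, the global zone administrator could cap the space consumed by the delegated dataset along these lines; the quota value is hypothetical:

global# zfs set quota=10g tank/data/zion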
9.2.6. Understanding the zoned Property
When a dataset is added to a non-global zone, the dataset must be specially marked so that certain properties are not interpreted within the context of the global zone. After a dataset has been added to a non-global zone under the control of a zone administrator, its contents can no longer be trusted. As with any file system, there might be setuid binaries, symbolic links, or otherwise questionable contents that might adversely affect the security of the global zone. In addition, the mountpoint property cannot be interpreted in the context of the global zone. Otherwise, the zone administrator could affect the global zone's namespace. To address the latter, ZFS uses the zoned property to indicate that a dataset has been delegated to a non-global zone at one point in time.
The zoned property is a boolean value that is automatically turned on when a zone containing a ZFS dataset is first booted. A zone administrator will not need to manually turn on this property. If the zoned property is set, the dataset cannot be mounted or shared in the global zone, and is ignored when the zfs share -a command or the zfs mount -a command is executed. In the following example, tank/zone/zion has been added to a zone, while tank/zone/global has not:
# zfs list -o name,zoned,mountpoint -r tank/zone
NAME                  ZONED   MOUNTPOINT
tank/zone/global      off     /tank/zone/global
tank/zone/zion        on      /tank/zone/zion
# zfs mount
tank/zone/global      /tank/zone/global
tank/zone/zion        /export/zone/zion/root/tank/zone/zion
Note the difference between the mountpoint property and the directory where the tank/zone/zion dataset is currently mounted. The mountpoint property reflects the property as stored on disk, not where the dataset is currently mounted on the system.
When a dataset is removed from a zone or a zone is destroyed, the zoned property is not automatically cleared. This behavior is due to the inherent security risks associated with these tasks. Because an untrusted user has had complete access to the dataset and its children, the mountpoint property might be set to bad values, or setuid binaries might exist on the file systems.
To prevent accidental security risks, the zoned property must be manually cleared by the global administrator if you want to reuse the dataset in any way. Before setting the zoned property to off, make sure that the mountpoint properties for the dataset and all its children are set to reasonable values and that no setuid binaries exist, or turn off the setuid property. After you have verified that no security vulnerabilities are left, the zoned property can be turned off by using the zfs set or zfs inherit commands.

If the zoned property is turned off while a dataset is in use within a zone, the system might behave in unpredictable ways. Only change the property if you are sure the dataset is no longer in use by a non-global zone.
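As a brief illustration, reviewing and then clearing the property from the global zone might look like the following sketch, which assumes the delegated tank/zone/zion dataset from the earlier examples:

global# zfs get -r mountpoint,setuid tank/zone/zion
global# zfs set zoned=off tank/zone/zion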
9.3. Using ZFS Alternate Root Pools
When a pool is created, the pool is intrinsically tied to the host system. The host system maintains knowledge about the pool so that it can detect when the pool is otherwise unavailable. While useful for normal operation, this knowledge can prove a hindrance when booting from alternate media, or creating a pool on removable media. To solve this problem, ZFS provides an alternate root pool feature. An alternate root pool does not persist across system reboots, and all mount points are modified to be relative to the root of the pool.
9.3.1. Creating ZFS Alternate Root Pools
The most common use for creating an alternate root pool is for use with removable media. In these circumstances, users typically want a single file system, and they want it to be mounted wherever they choose on the target system. When an alternate root pool is created by using the -R option, the mount point of the root file system is automatically set to /, which is the equivalent of the alternate root itself.
In the following example, a pool called morpheus is created with /mnt as the alternate root path:
# zpool create -R /mnt morpheus c0t0d0
# zfs list morpheus
NAME       USED   AVAIL  REFER  MOUNTPOINT
morpheus  32.5K   33.5G     8K  /mnt/
Note the single file system, morpheus, whose mount point is the alternate root of the pool, /mnt. The mount point that is stored on disk is /, and the full path to /mnt is interpreted only in the context of the alternate root pool. This file system can then be exported and imported under an arbitrary alternate root pool on a different system.
9.3.2. Importing Alternate Root Pools
Pools can also be imported using an alternate root. This feature allows for recovery situations, where the mount points should not be interpreted in context of the current root, but under some temporary directory where repairs can be performed. This feature also can be used when mounting removable media as described above.
In the following example, a pool called morpheus is imported with /mnt as the alternate root path. This example assumes that morpheus was previously exported.
# zpool import -R /mnt morpheus
# zpool list morpheus
NAME       SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
morpheus  33.8G  68.0K   33.7G   0%  ONLINE  /mnt
# zfs list morpheus
NAME       USED   AVAIL  REFER  MOUNTPOINT
morpheus  32.5K   33.5G     8K  /mnt/morpheus
9.4. ZFS Rights Profiles
If you want to perform ZFS management tasks without using the superuser (root) account, you can assume a role with either of the following profiles to perform ZFS administration tasks:
- ZFS Storage Management – Provides the ability to create, destroy, and manipulate devices within a ZFS storage pool
- ZFS File System Management – Provides the ability to create, destroy, and modify ZFS file systems
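As a brief, hedged sketch, a role could be created with one of these profiles and assigned to a user along these lines; the role name, home directory, and user name are hypothetical:

# roleadd -m -d /export/home/zfsrole -P "ZFS File System Management" zfsrole
# passwd zfsrole
# usermod -R zfsrole alice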
For more information about creating or assigning roles, see System Administration Guide: Security Services.
In addition to using RBAC roles for administering ZFS file systems, you might also consider using ZFS delegated administration for distributed ZFS administration tasks. For more information, see ZFS Delegated Administration.