
Logical Volume Manager Administration

Chapter 2. LVM Components

This chapter describes the components of an LVM logical volume.

2.1. Physical Volumes

The underlying physical storage unit of an LVM logical volume is a block device such as a partition or whole disk. To use the device for an LVM logical volume the device must be initialized as a physical volume (PV). Initializing a block device as a physical volume places a label near the start of the device.
By default, the LVM label is placed in the second 512-byte sector. You can overwrite this default by placing the label on any of the first 4 sectors. This allows LVM volumes to co-exist with other users of these sectors, if necessary.
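For example, a minimal sketch (the device name is hypothetical) that uses the --labelsector option of the pvcreate command to place the label in the third sector (sector 2) rather than the default second sector (sector 1):

  # pvcreate --labelsector 2 /dev/sdb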
An LVM label provides correct identification and device ordering for a physical device, since devices can come up in any order when the system is booted. An LVM label remains persistent across reboots and throughout a cluster.
The LVM label identifies the device as an LVM physical volume. It contains a random unique identifier (the UUID) for the physical volume. It also stores the size of the block device in bytes, and it records where the LVM metadata will be stored on the device.
The LVM metadata contains the configuration details of the LVM volume groups on your system. By default, an identical copy of the metadata is maintained in every metadata area in every physical volume within the volume group. LVM metadata is small and stored as ASCII.
Currently LVM allows you to store 0, 1 or 2 identical copies of its metadata on each physical volume. The default is 1 copy. Once you configure the number of metadata copies on the physical volume, you cannot change that number at a later time. The first copy is stored at the start of the device, shortly after the label. If there is a second copy, it is placed at the end of the device. If you accidentally overwrite the area at the beginning of your disk by writing to a different disk than you intend, a second copy of the metadata at the end of the device will allow you to recover the metadata.
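Because the number of copies cannot be changed later, it must be requested when the physical volume is initialized. A minimal sketch (hypothetical device name) requesting two copies of the metadata:

  # pvcreate --metadatacopies 2 /dev/sdb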
For detailed information about the LVM metadata and changing the metadata parameters, see Appendix D, LVM Volume Group Metadata.

2.1.1. LVM Physical Volume Layout

Figure 2.1, "Physical Volume layout" shows the layout of an LVM physical volume. The LVM label is on the second sector, followed by the metadata area, followed by the usable space on the device.

Note

In the Linux kernel (and throughout this document), sectors are considered to be 512 bytes in size.
Figure 2.1. Physical Volume layout


2.1.2. Multiple Partitions on a Disk

LVM allows you to create physical volumes out of disk partitions. It is generally recommended that you create a single partition that covers the whole disk to label as an LVM physical volume for the following reasons:
  • Administrative convenience
    It is easier to keep track of the hardware in a system if each real disk only appears once. This becomes particularly true if a disk fails. In addition, multiple physical volumes on a single disk may cause a kernel warning about unknown partition types at boot-up.
  • Striping performance
    LVM cannot tell that two physical volumes are on the same physical disk. If you create a striped logical volume when two physical volumes are on the same physical disk, the stripes could be on different partitions on the same disk. This would result in a decrease in performance rather than an increase.
Although it is not recommended, there may be specific circumstances when you will need to divide a disk into separate LVM physical volumes. For example, on a system with few disks it may be necessary to move data around partitions when you are migrating an existing system to LVM volumes. Additionally, if you have a very large disk and want to have more than one volume group for administrative purposes then it is necessary to partition the disk. If you do have a disk with more than one partition and those partitions are in the same volume group, take care to specify which partitions are to be included in a logical volume when creating striped volumes, as shown in the sketch below.
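As a hedged illustration (all device and volume names are hypothetical), you can list physical volumes explicitly at the end of the lvcreate command line to ensure that the two stripes land on different disks:

  # lvcreate -i 2 -I 64 -L 1G -n stripedlv myvg /dev/sda1 /dev/sdb1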

2.2. Volume Groups

Physical volumes are combined into volume groups (VGs). This creates a pool of disk space out of which logical volumes can be allocated.
Within a volume group, the disk space available for allocation is divided into fixed-size units called extents. An extent is the smallest unit of space that can be allocated. Within a physical volume, extents are referred to as physical extents.
A logical volume is allocated into logical extents of the same size as the physical extents. The extent size is thus the same for all logical volumes in the volume group. The volume group maps the logical extents to physical extents.
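For example, a minimal sketch (volume group and device names hypothetical) that creates a volume group with an explicit 4MB extent size; the vgdisplay command then reports the total, allocated, and free physical extents:

  # vgcreate -s 4M vg1 /dev/sda1 /dev/sdb1
  # vgdisplay vg1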

2.3. LVM Logical Volumes

In LVM, a volume group is divided up into logical volumes. There are three types of LVM logical volumes: linear volumes, striped volumes, and mirrored volumes. These are described in the following sections.

2.3.1. Linear Volumes

A linear volume aggregates space from one or more physical volumes into one logical volume. For example, if you have two 60GB disks, you can create a 120GB logical volume. The physical storage is concatenated.
Creating a linear volume assigns a range of physical extents to an area of a logical volume in order. For example, as shown in Figure 2.2, "Extent Mapping", logical extents 1 to 99 could map to one physical volume and logical extents 100 to 198 could map to a second physical volume. From the point of view of the application, there is one device that is 198 extents in size.
Figure 2.2. Extent Mapping


The physical volumes that make up a logical volume do not have to be the same size. Figure 2.3, "Linear Volume with Unequal Physical Volumes" shows volume group VG1 with a physical extent size of 4MB. This volume group includes 2 physical volumes named PV1 and PV2. The physical volumes are divided into 4MB units, since that is the extent size. In this example, PV1 is 100 extents in size (400MB) and PV2 is 200 extents in size (800MB). You can create a linear volume of any size between 1 and 300 extents (4MB to 1200MB). In this example, the linear volume named LV1 is 300 extents in size.
Figure 2.3. Linear Volume with Unequal Physical Volumes


You can configure more than one linear logical volume of whatever size you require from the pool of physical extents. Figure 2.4, "Multiple Logical Volumes" shows the same volume group as in Figure 2.3, "Linear Volume with Unequal Physical Volumes", but in this case two logical volumes have been carved out of the volume group: LV1, which is 250 extents in size (1000MB), and LV2, which is 50 extents in size (200MB).
Figure 2.4. Multiple Logical Volumes
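A minimal sketch of carving out the two volumes shown in Figure 2.4, specifying the sizes in extents with the -l option (the names match the figure; the commands assume the volume group already exists):

  # lvcreate -l 250 -n LV1 VG1
  # lvcreate -l 50 -n LV2 VG1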


2.3.2. Striped Logical Volumes

When you write data to an LVM logical volume, the file system lays the data out across the underlying physical volumes. You can control the way the data is written to the physical volumes by creating a striped logical volume. For large sequential reads and writes, this can improve the efficiency of the data I/O.
Striping enhances performance by writing data to a predetermined number of physical volumes in round-robin fashion. With striping, I/O can be done in parallel. In some situations, this can result in near-linear performance gain for each additional physical volume in the stripe.
The following illustration shows data being striped across three physical volumes. In this figure:
  • the first stripe of data is written to PV1
  • the second stripe of data is written to PV2
  • the third stripe of data is written to PV3
  • the fourth stripe of data is written to PV1
In a striped logical volume, the size of the stripe cannot exceed the size of an extent.
Figure 2.5. Striping Data Across Three PVs
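A hedged sketch of creating a volume striped across three physical volumes as in Figure 2.5, with three stripes (-i 3) and a 64KB stripe size (-I 64); the names are hypothetical:

  # lvcreate -i 3 -I 64 -L 3G -n stripedlv vg0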


Striped logical volumes can be extended by concatenating another set of devices onto the end of the first set. In order to extend a striped logical volume, however, there must be enough free space on the underlying physical volumes that make up the volume group to support the stripe. For example, if you have a two-way stripe that uses up an entire volume group, adding a single physical volume to the volume group will not enable you to extend the stripe. Instead, you must add at least two physical volumes to the volume group. For more information on extending a striped volume, see Section 4.4.14.1, "Extending a Striped Volume".
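For example, assuming a two-way striped volume named vg0/stripedlv (hypothetical names), a sketch of extending it after adding two physical volumes to the volume group:

  # vgextend vg0 /dev/sdc1 /dev/sdd1
  # lvextend -L +2G vg0/stripedlv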

2.3.3. Mirrored Logical Volumes

A mirror maintains identical copies of data on different devices. When data is written to one device, it is written to a second device as well, mirroring the data. This provides protection for device failures. When one leg of a mirror fails, the logical volume becomes a linear volume and can still be accessed.
LVM supports mirrored volumes. When you create a mirrored logical volume, LVM ensures that data written to an underlying physical volume is mirrored onto a separate physical volume. With LVM, you can create mirrored logical volumes with multiple mirrors.
An LVM mirror divides the device being copied into regions that are typically 512KB in size. LVM maintains a small log which it uses to keep track of which regions are in sync with the mirror or mirrors. This log can be kept on disk, which will keep it persistent across reboots, or it can be maintained in memory.
Figure 2.6, "Mirrored Logical Volume" shows a mirrored logical volume with one mirror. In this configuration, the log is maintained on disk.
Figure 2.6. Mirrored Logical Volume
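A minimal sketch of creating the single-mirror configuration shown in Figure 2.6, with the log kept on disk (names hypothetical; -m 1 requests one mirror in addition to the original copy):

  # lvcreate -m 1 --mirrorlog disk -L 10G -n mirrorlv vg0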


For information on creating and modifying mirrors, see Section 4.4.3, "Creating Mirrored Volumes".

2.3.4. Thinly-Provisioned Logical Volumes (Thin Volumes)

As of the Red Hat Enterprise Linux 6.4 release, logical volumes can be thinly provisioned. This allows you to create logical volumes that are larger than the available extents. Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, to be allocated to an arbitrary number of devices when needed by applications. You can then create devices that can be bound to the thin pool for later allocation when an application actually writes to the logical volume. The thin pool can be expanded dynamically when needed for cost-effective allocation of storage space.

Note

Thin volumes are not supported across the nodes in a cluster. The thin pool and all its thin volumes must be exclusively activated on only one cluster node.
By using thin provisioning, a storage administrator can over-commit the physical storage, often avoiding the need to purchase additional storage. For example, if ten users each request a 100GB file system for their application, the storage administrator can create what appears to be a 100GB file system for each user but which is backed by less actual storage that is used only when needed. When using thin provisioning, it is important that the storage administrator monitor the storage pool and add more capacity if it starts to become full.
To make sure that all available space can be used, LVM supports data discard. This allows for re-use of the space that was formerly used by a discarded file or other block range.
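As a hedged sketch of the over-commitment scenario described above (all names and sizes hypothetical), you might create a 100GB thin pool and then thin volumes whose combined virtual size exceeds it:

  # lvcreate -L 100G -T vg001/mythinpool
  # lvcreate -V 1T -T vg001/mythinpool -n thinvolume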
For information on creating thin volumes, refer to Section 4.4.4, "Creating Thinly-Provisioned Logical Volumes".
Thin volumes provide support for a new implementation of copy-on-write (COW) snapshot logical volumes, which allow many virtual devices to share the same data in the thin pool. For information on thin snapshot volumes, refer to Section 2.3.6, "Thinly-Provisioned Snapshot Volumes".

2.3.5. Snapshot Volumes

The LVM snapshot feature provides the ability to create virtual images of a device at a particular instant without causing a service interruption. When a change is made to the original device (the origin) after a snapshot is taken, the snapshot feature makes a copy of the changed data area as it was prior to the change so that it can reconstruct the state of the device.

Note

As of the Red Hat Enterprise Linux 6.4 release, LVM supports thinly-provisioned snapshots. For information on thinly provisioned snapshot volumes, refer to Section 2.3.6, "Thinly-Provisioned Snapshot Volumes".

Note

LVM snapshots are not supported across the nodes in a cluster. You cannot create a snapshot volume in a clustered volume group.
Because a snapshot copies only the data areas that change after the snapshot is created, the snapshot feature requires a minimal amount of storage. For example, with a rarely updated origin, 3-5% of the origin's capacity is sufficient to maintain the snapshot.

Note

Snapshot copies of a file system are virtual copies, not actual media backup for a file system. Snapshots do not provide a substitute for a backup procedure.
The size of the snapshot governs the amount of space set aside for storing the changes to the origin volume. For example, if you made a snapshot and then completely overwrote the origin, the snapshot would have to be at least as big as the origin volume to hold the changes. You need to dimension a snapshot according to the expected level of change. For example, a short-lived snapshot of a read-mostly volume, such as /usr, would need less space than a long-lived snapshot of a volume that sees a greater number of writes, such as /home.
If a snapshot runs full, the snapshot becomes invalid, since it can no longer track changes on the origin volume. You should regularly monitor the size of the snapshot. Snapshots are fully resizeable, however, so if you have the storage capacity you can increase the size of the snapshot volume to prevent it from getting dropped. Conversely, if you find that the snapshot volume is larger than you need, you can reduce the size of the volume to free up space that is needed by other logical volumes.
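For example, a minimal sketch (volume names hypothetical) of creating a snapshot sized at a fraction of its origin and later growing it as it fills:

  # lvcreate --size 500M --snapshot --name homesnap /dev/vg00/home
  # lvextend -L +500M /dev/vg00/homesnap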
When you create a snapshot file system, full read and write access to the origin stays possible. If a chunk on a snapshot is changed, that chunk is marked and never gets copied from the original volume.
There are several uses for the snapshot feature:
  • Most typically, a snapshot is taken when you need to perform a backup on a logical volume without halting the live system that is continuously updating the data.
  • You can execute the fsck command on a snapshot file system to check the file system integrity and determine whether the original file system requires file system repair.
  • Because the snapshot is read/write, you can test applications against production data by taking a snapshot and running tests against the snapshot, leaving the real data untouched.
  • You can create LVM volumes for use with Red Hat virtualization. LVM snapshots can be used to create snapshots of virtual guest images. These snapshots can provide a convenient way to modify existing guests or create new guests with minimal additional storage. For information on creating LVM-based storage pools with Red Hat Virtualization, see the Virtualization Administration Guide.
For information on creating snapshot volumes, see Section 4.4.5, "Creating Snapshot Volumes".
As of the Red Hat Enterprise Linux 6 release, you can use the --merge option of the lvconvert command to merge a snapshot into its origin volume. One use for this feature is to perform system rollback if you have lost data or files or otherwise need to restore your system to a previous state. After you merge the snapshot volume, the resulting logical volume will have the origin volume's name, minor number, and UUID and the merged snapshot is removed. For information on using this option, see Section 4.4.7, "Merging Snapshot Volumes".
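A hedged sketch of such a rollback (names hypothetical); once the merge completes, the merged snapshot is removed and the resulting volume retains the origin's name:

  # lvconvert --merge vg00/homesnap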

2.3.6. Thinly-Provisioned Snapshot Volumes

The Red Hat Enterprise Linux release 6.4 version of LVM provides support for thinly-provisioned snapshot volumes. Thin snapshot volumes allow many virtual devices to be stored on the same data volume. This simplifies administration and allows for the sharing of data between snapshot volumes.
As for all LVM snapshot volumes, as well as all thin volumes, thin snapshot volumes are not supported across the nodes in a cluster. The snapshot volume must be exclusively activated on only one cluster node.
Thin snapshot volumes provide the following benefits:
  • A thin snapshot volume can reduce disk usage when there are multiple snapshots of the same origin volume.
  • If there are multiple snapshots of the same origin, then a write to the origin will cause one COW operation to preserve the data. Increasing the number of snapshots of the origin should yield no major slowdown.
  • Thin snapshot volumes can be used as a logical volume origin for another snapshot. This allows for an arbitrary depth of recursive snapshots (snapshots of snapshots of snapshots...); a command sketch follows this list.
  • A snapshot of a thin logical volume also creates a thin logical volume. This consumes no data space until a COW operation is required, or until the snapshot itself is written.
  • A thin snapshot volume does not need to be activated with its origin, so a user may have only the origin active while there are many inactive snapshot volumes of the origin.
  • When you delete the origin of a thinly-provisioned snapshot volume, each snapshot of that origin volume becomes an independent thinly-provisioned volume. This means that instead of merging a snapshot with its origin volume, you may choose to delete the origin volume and then create a new thinly-provisioned snapshot using that independent volume as the origin volume for the new snapshot.
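A minimal sketch of the recursive case noted above (names hypothetical); omitting a size when snapshotting a thin volume creates the snapshot as a thin volume in the same pool:

  # lvcreate -s --name snap1 vg001/thinvolume
  # lvcreate -s --name snap2 vg001/snap1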
Although there are many advantages to using thin snapshot volumes, there are some use cases for which the older LVM snapshot volume feature may be more appropriate to your needs:
  • You cannot change the chunk size of a thin pool. If the thin pool has a large chunk size (for example, 1MB) and you require a short-lived snapshot for which a chunk size that large is not efficient, you may elect to use the older snapshot feature.
  • You cannot limit the size of a thin snapshot volume; the snapshot will use all of the space in the thin pool, if necessary. This may not be appropriate for your needs.
In general, you should consider the specific requirements of your site when deciding which snapshot format to use.
For information on configuring thin snapshot volumes, refer to Section 4.4.6, "Creating Thinly-Provisioned Snapshot Volumes".

Chapter 3. LVM Administration Overview

This chapter provides an overview of the administrative procedures you use to configure LVM logical volumes. This chapter is intended to provide a general understanding of the steps involved. For specific step-by-step examples of common LVM configuration procedures, see Chapter 5, LVM Configuration Examples.
For descriptions of the CLI commands you can use to perform LVM administration, see Chapter 4, LVM Administration with CLI Commands. Alternately, you can use the LVM GUI, which is described in Chapter 7, LVM Administration with the LVM GUI.

3.1. Creating LVM Volumes in a Cluster

To create logical volumes in a cluster environment, you use the Clustered Logical Volume Manager (CLVM), which is a set of clustering extensions to LVM. These extensions allow a cluster of computers to manage shared storage (for example, on a SAN) using LVM. In order to use CLVM, the High Availability Add-On and Resilient Storage Add-On software, including the clvmd daemon, must be started at boot time, as described in Section 1.4, "The Clustered Logical Volume Manager (CLVM)".
Creating LVM logical volumes in a cluster environment is identical to creating LVM logical volumes on a single node. There is no difference in the LVM commands themselves, or in the LVM GUI interface. In order to enable the LVM volumes you are creating in a cluster, the cluster infrastructure must be running and the cluster must be quorate.
CLVM requires changes to the lvm.conf file for cluster-wide locking. Information on configuring the lvm.conf file to support clustered locking is provided within the lvm.conf file itself. For information about the lvm.conf file, see Appendix B, The LVM Configuration Files.
By default, logical volumes created with CLVM on shared storage are visible to all systems that have access to the shared storage. It is possible to create volume groups in which all of the storage devices are visible to only one node in the cluster. It is also possible to change the status of a volume group from a local volume group to a clustered volume group. For information, see Section 4.3.3, "Creating Volume Groups in a Cluster" and Section 4.3.8, "Changing the Parameters of a Volume Group".
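As a hedged sketch (names hypothetical): with clustered locking enabled in lvm.conf (locking_type = 3 in the global section) on every node, a clustered volume group can be created explicitly with the -c option, or an existing local volume group converted with vgchange:

  # vgcreate -c y clustervg /dev/sdb1
  # vgchange -c y localvg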

Warning

When you create volume groups with CLVM on shared storage, you must ensure that all nodes in the cluster have access to the physical volumes that constitute the volume group. Asymmetric cluster configurations in which some nodes have access to the storage and others do not are not supported.
For information on how to install the High Availability Add-On and set up the cluster infrastructure, see Cluster Administration.
For an example of creating a mirrored logical volume in a cluster, see Section 5.5, "Creating a Mirrored LVM Logical Volume in a Cluster".

3.2. Logical Volume Creation Overview

The following is a summary of the steps to perform to create an LVM logical volume; a command sketch follows the list.
  1. Initialize the partitions you will use for the LVM volume as physical volumes (this labels them).
  2. Create a volume group.
  3. Create a logical volume.
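A minimal sketch of these three steps (device, group, and volume names hypothetical):

  # pvcreate /dev/sdb1 /dev/sdc1
  # vgcreate new_vol_group /dev/sdb1 /dev/sdc1
  # lvcreate -L 2G -n new_logical_volume new_vol_group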
After creating the logical volume you can create and mount the file system. The examples in this document use GFS2 file systems.

Note

Although a GFS2 file system can be implemented in a standalone system or as part of a cluster configuration, for the Red Hat Enterprise Linux 6 release Red Hat does not support the use of GFS2 as a single-node file system. Red Hat will continue to support single-node GFS2 file systems for mounting snapshots of cluster file systems (for example, for backup purposes).
  1. Create a GFS2 file system on the logical volume with the mkfs.gfs2 command.
  2. Create a new mount point with the mkdir command. In a clustered system, create the mount point on all nodes in the cluster.
  3. Mount the file system. You may want to add a line to the fstab file for each node in the system. A sketch of these steps follows.
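A hedged sketch of these steps for a two-node cluster (cluster name, file system name, and mount point are hypothetical; lock_dlm is the clustered locking protocol and -j 2 creates one journal per node):

  # mkfs.gfs2 -p lock_dlm -t mycluster:mygfs2 -j 2 /dev/new_vol_group/new_logical_volume
  # mkdir -p /mnt/gfs2
  # mount /dev/new_vol_group/new_logical_volume /mnt/gfs2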
Alternately, you can create and mount the GFS2 file system with the LVM GUI.
Creating the LVM volume is machine independent, since the storage area for LVM setup information is on the physical volumes and not on the machine where the volume was created. Servers that use the storage have local copies, but can recreate them from what is on the physical volumes. You can attach physical volumes to a different server if the LVM versions are compatible.

3.3. Growing a File System on a Logical Volume

To grow a file system on a logical volume, perform the following steps:
  1. Make a new physical volume.
  2. Extend the volume group that contains the logical volume with the file system you are growing to include the new physical volume.
  3. Extend the logical volume to include the new physical volume.
  4. Grow the file system.
If you have sufficient unallocated space in the volume group, you can use that space to extend the logical volume instead of performing steps 1 and 2.
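A hedged sketch of steps 1 through 4 for a GFS2 file system (names hypothetical; gfs2_grow operates on the mounted file system):

  # pvcreate /dev/sdd1
  # vgextend new_vol_group /dev/sdd1
  # lvextend -L +50G /dev/new_vol_group/new_logical_volume
  # gfs2_grow /mnt/gfs2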

3.4. Logical Volume Backup

Metadata backups and archives are automatically created on every volume group and logical volume configuration change unless disabled in the lvm.conf file. By default, the metadata backup is stored in the /etc/lvm/backup directory and the metadata archives are stored in the /etc/lvm/archive directory. How long the metadata archives stored in the /etc/lvm/archive directory are kept and how many archive files are kept is determined by parameters you can set in the lvm.conf file. A daily system backup should include the contents of the /etc/lvm directory in the backup.
Note that a metadata backup does not back up the user and system data contained in the logical volumes.
You can manually back up the metadata to the /etc/lvm/backup file with the vgcfgbackup command. You can restore metadata with the vgcfgrestore command. The vgcfgbackup and vgcfgrestore commands are described in Section 4.3.13, "Backing Up Volume Group Metadata".
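For example, a minimal sketch (volume group name hypothetical); vgcfgrestore restores the metadata from the most recent backup unless a specific file is given with the -f option:

  # vgcfgbackup vg00
  # vgcfgrestore vg00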

3.5. Logging

All message output passes through a logging module with independent choices of logging levels for:
  • standard output/error
  • syslog
  • log file
  • external log function
The logging levels are set in the /etc/lvm/lvm.conf file, which is described in Appendix B, The LVM Configuration Files.

3.6. The Metadata Daemon (lvmetad)

LVM can optionally use a central metadata cache, implemented through a daemon (lvmetad) and a udev rule. The metadata daemon has two main purposes: It improves performance of LVM commands and it allows udev to automatically activate logical volumes or entire volume groups as they become available to the system.

Note

The lvmetad daemon is not currently supported across the nodes of a cluster, and requires that the locking type be local file-based locking.
To take advantage of the daemon, you must do the following:
  • Start the daemon through the lvm2-lvmetad service. To start the daemon automatically at boot time, use the chkconfig lvm2-lvmetad on command. To start the daemon manually, use the service lvm2-lvmetad start command.
  • Configure LVM to make use of the daemon by setting the global/use_lvmetad variable to 1 in the lvm.conf configuration file. For information on the lvm.conf configuration file, refer to Appendix B, The LVM Configuration Files.
Normally, each LVM command issues a disk scan to find all relevant physical volumes and to read volume group metadata. However, if the metadata daemon is running and enabled, this expensive scan can be skipped. Instead, the lvmetad daemon scans each device only once, when it becomes available, via udev rules. This can save a significant amount of I/O and reduce the time required to complete LVM operations, particularly on systems with many disks.
When a new volume group is made available at runtime (for example, through hotplug or iSCSI), its logical volumes must be activated in order to be used. When the lvmetad daemon is enabled, the activation/auto_activation_volume_list option in the lvm.conf configuration file can be used to configure a list of volume groups and/or logical volumes that should be automatically activated. Without the lvmetad daemon, a manual activation is necessary.
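As a hedged sketch of the relevant lvm.conf settings (the volume group and logical volume names in the activation list are hypothetical):

  global {
      use_lvmetad = 1
  }
  activation {
      auto_activation_volume_list = [ "vg0", "vg1/lvol1" ]
  }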