
Logical Volume Manager Administration

Chapter 5. LVM Configuration Examples

This chapter provides some basic LVM configuration examples.

5.1. Creating an LVM Logical Volume on Three Disks

This example creates an LVM logical volume called new_logical_volume that consists of the disks at /dev/sda1, /dev/sdb1, and /dev/sdc1.

5.1.1. Creating the Physical Volumes

To use disks in a volume group, you label them as LVM physical volumes.

Warning

This command destroys any data on /dev/sda1, /dev/sdb1, and /dev/sdc1.
# pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created

5.1.2. Creating the Volume Group

The following command creates the volume group new_vol_group.
# vgcreate new_vol_group /dev/sda1 /dev/sdb1 /dev/sdc1
  Volume group "new_vol_group" successfully created
You can use the vgs command to display the attributes of the new volume group.
# vgs
  VG            #PV #LV #SN Attr   VSize  VFree
  new_vol_group   3   0   0 wz--n- 51.45G 51.45G

5.1.3. Creating the Logical Volume

The following command creates the logical volume new_logical_volume from the volume group new_vol_group. This example creates a logical volume that uses 2GB of the volume group.
# lvcreate -L2G -n new_logical_volume new_vol_group
  Logical volume "new_logical_volume" created

5.1.4. Creating the File System

The following command creates a GFS2 file system on the logical volume.
# mkfs.gfs2 -plock_nolock -j 1 /dev/new_vol_group/new_logical_volume
This will destroy any data on /dev/new_vol_group/new_logical_volume.
Are you sure you want to proceed? [y/n] y
Device:            /dev/new_vol_group/new_logical_volume
Blocksize:         4096
Filesystem Size:   491460
Journals:          1
Resource Groups:   8
Locking Protocol:  lock_nolock
Lock Table:
Syncing...
All Done
The following commands mount the logical volume and report the file system disk space usage.
# mount /dev/new_vol_group/new_logical_volume /mnt
[root@tng3-1 ~]# df
Filesystem                             1K-blocks  Used Available Use% Mounted on
/dev/new_vol_group/new_logical_volume    1965840    20   1965820   1% /mnt

5.2. Creating a Striped Logical Volume

This example creates an LVM striped logical volume called striped_logical_volume that stripes data across the disks at /dev/sda1, /dev/sdb1, and /dev/sdc1.

5.2.1. Creating the Physical Volumes

Label the disks you will use in the volume groups as LVM physical volumes.

Warning

This command destroys any data on /dev/sda1, /dev/sdb1, and /dev/sdc1.
# pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdc1" successfully created

5.2.2. Creating the Volume Group

The following command creates the volume group volgroup01.
# vgcreate volgroup01 /dev/sda1 /dev/sdb1 /dev/sdc1
  Volume group "volgroup01" successfully created
You can use the vgs command to display the attributes of the new volume group.
# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  volgroup01   3   0   0 wz--n- 51.45G 51.45G

5.2.3. Creating the Logical Volume

The following command creates the striped logical volume striped_logical_volume from the volume group volgroup01. This example creates a logical volume that is 2 gigabytes in size, with three stripes and a stripe size of 4 kilobytes.
# lvcreate -i3 -I4 -L2G -nstriped_logical_volume volgroup01
  Rounding size (512 extents) up to stripe boundary size (513 extents)
  Logical volume "striped_logical_volume" created

5.2.4. Creating the File System

The following command creates a GFS2 file system on the logical volume.
# mkfs.gfs2 -plock_nolock -j 1 /dev/volgroup01/striped_logical_volume
This will destroy any data on /dev/volgroup01/striped_logical_volume.
Are you sure you want to proceed? [y/n] y
Device:            /dev/volgroup01/striped_logical_volume
Blocksize:         4096
Filesystem Size:   492484
Journals:          1
Resource Groups:   8
Locking Protocol:  lock_nolock
Lock Table:
Syncing...
All Done
The following commands mount the logical volume and report the file system disk space usage.
# mount /dev/volgroup01/striped_logical_volume /mnt
[root@tng3-1 ~]# df
Filesystem                              1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00          13902624 1656776  11528232  13% /
/dev/hda1                                  101086   10787     85080  12% /boot
tmpfs                                      127880       0    127880   0% /dev/shm
/dev/volgroup01/striped_logical_volume    1969936      20   1969916   1% /mnt

5.3. Splitting a Volume Group

In this example, an existing volume group consists of three physical volumes. If there is enough unused space on the physical volumes, a new volume group can be created without adding new disks.
In the initial setup, the logical volume mylv is carved from the volume group myvg, which in turn consists of the three physical volumes /dev/sda1, /dev/sdb1, and /dev/sdc1.
After completing this procedure, the volume group myvg will consist of /dev/sda1 and /dev/sdb1. A second volume group, yourvg, will consist of /dev/sdc1.

5.3.1. Determining Free Space

You can use the pvscan command to determine how much free space is currently available in the volume group.
# pvscan
  PV /dev/sda1  VG myvg   lvm2 [17.15 GB / 0    free]
  PV /dev/sdb1  VG myvg   lvm2 [17.15 GB / 12.15 GB free]
  PV /dev/sdc1  VG myvg   lvm2 [17.15 GB / 15.80 GB free]
  Total: 3 [51.45 GB] / in use: 3 [51.45 GB] / in no VG: 0 [0   ]

5.3.2. Moving the Data

You can move all the used physical extents in /dev/sdc1 to /dev/sdb1 with the pvmove command. The pvmove command can take a long time to execute.
# pvmove /dev/sdc1 /dev/sdb1
  /dev/sdc1: Moved: 14.7%
  /dev/sdc1: Moved: 30.3%
  /dev/sdc1: Moved: 45.7%
  /dev/sdc1: Moved: 61.0%
  /dev/sdc1: Moved: 76.6%
  /dev/sdc1: Moved: 92.2%
  /dev/sdc1: Moved: 100.0%
After moving the data, you can see that all of the space on /dev/sdc1 is free.
# pvscan
  PV /dev/sda1   VG myvg   lvm2 [17.15 GB / 0    free]
  PV /dev/sdb1   VG myvg   lvm2 [17.15 GB / 10.80 GB free]
  PV /dev/sdc1   VG myvg   lvm2 [17.15 GB / 17.15 GB free]
  Total: 3 [51.45 GB] / in use: 3 [51.45 GB] / in no VG: 0 [0   ]

5.3.3. Splitting the Volume Group

To create the new volume group yourvg, use the vgsplit command to split the volume group myvg.
Before you can split the volume group, the logical volume must be inactive. If the file system is mounted, you must unmount the file system before deactivating the logical volume.
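For example, if the logical volume were still mounted at /mnt as in the earlier examples, you would unmount it first; a minimal sketch, assuming that mount point:
# umount /mnt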
You can deactivate the logical volumes with the lvchange command or the vgchange command. The following command deactivates the logical volume mylv and then splits the volume group yourvg from the volume group myvg, moving the physical volume /dev/sdc1 into the new volume group yourvg.
# lvchange -a n /dev/myvg/mylv
# vgsplit myvg yourvg /dev/sdc1
  Volume group "yourvg" successfully split from "myvg"
You can use the vgs command to see the attributes of the two volume groups.
# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  myvg     2   1   0 wz--n- 34.30G 10.80G
  yourvg   1   0   0 wz--n- 17.15G 17.15G

5.3.4. Creating the New Logical Volume

After creating the new volume group, you can create the new logical volume yourlv.
# lvcreate -L5G -n yourlv yourvg
  Logical volume "yourlv" created

5.3.5. Making a File System and Mounting the New Logical Volume

You can make a file system on the new logical volume and mount it.
# mkfs.gfs2 -plock_nolock -j 1 /dev/yourvg/yourlv
This will destroy any data on /dev/yourvg/yourlv.
Are you sure you want to proceed? [y/n] y
Device:            /dev/yourvg/yourlv
Blocksize:         4096
Filesystem Size:   1277816
Journals:          1
Resource Groups:   20
Locking Protocol:  lock_nolock
Lock Table:
Syncing...
All Done
[root@tng3-1 ~]# mount /dev/yourvg/yourlv /mnt

5.3.6. Activating and Mounting the Original Logical Volume

Since you had to deactivate the logical volume mylv, you need to activate it again before you can mount it.
# lvchange -a y /dev/myvg/mylv
[root@tng3-1 ~]# mount /dev/myvg/mylv /mnt
[root@tng3-1 ~]# df
Filesystem           1K-blocks  Used Available Use% Mounted on
/dev/yourvg/yourlv    24507776    32  24507744   1% /mnt
/dev/myvg/mylv        24507776    32  24507744   1% /mnt

5.4. Removing a Disk from a Logical Volume

This example shows how you can remove a disk from an existing logical volume, either to replace the disk or to use the disk as part of a different volume. In order to remove a disk, you must first move the extents on the LVM physical volume to a different disk or set of disks.

5.4.1. Moving Extents to Existing Physical Volumes

In this example, the logical volume is distributed across four physical volumes in the volume group myvg.
# pvs -o+pv_used
  PV         VG   Fmt  Attr PSize  PFree  Used
  /dev/sda1  myvg lvm2 a-   17.15G 12.15G  5.00G
  /dev/sdb1  myvg lvm2 a-   17.15G 12.15G  5.00G
  /dev/sdc1  myvg lvm2 a-   17.15G 12.15G  5.00G
  /dev/sdd1  myvg lvm2 a-   17.15G  2.15G 15.00G
We want to move the extents off of /dev/sdb1 so that we can remove it from the volume group.
If there are enough free extents on the other physical volumes in the volume group, you can execute the pvmove command on the device you want to remove with no other options and the extents will be distributed to the other devices.
# pvmove /dev/sdb1
  /dev/sdb1: Moved: 2.0%
  ...
  /dev/sdb1: Moved: 79.2%
  ...
  /dev/sdb1: Moved: 100.0%
After the pvmove command has finished executing, the distribution of extents is as follows:
# pvs -o+pv_used
  PV         VG   Fmt  Attr PSize  PFree  Used
  /dev/sda1  myvg lvm2 a-   17.15G  7.15G 10.00G
  /dev/sdb1  myvg lvm2 a-   17.15G 17.15G      0
  /dev/sdc1  myvg lvm2 a-   17.15G 12.15G  5.00G
  /dev/sdd1  myvg lvm2 a-   17.15G  2.15G 15.00G
Use the vgreduce command to remove the physical volume /dev/sdb1 from the volume group.
# vgreduce myvg /dev/sdb1
  Removed "/dev/sdb1" from volume group "myvg"
[root@tng3-1 ~]# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sda1  myvg lvm2 a-   17.15G  7.15G
  /dev/sdb1       lvm2 --   17.15G 17.15G
  /dev/sdc1  myvg lvm2 a-   17.15G 12.15G
  /dev/sdd1  myvg lvm2 a-   17.15G  2.15G
The disk can now be physically removed or allocated to other users.

5.4.2. Moving Extents to a New Disk

In this example, the logical volume is distributed across three physical volumes in the volume group myvg as follows:
# pvs -o+pv_used
  PV         VG   Fmt  Attr PSize  PFree  Used
  /dev/sda1  myvg lvm2 a-   17.15G  7.15G 10.00G
  /dev/sdb1  myvg lvm2 a-   17.15G 15.15G  2.00G
  /dev/sdc1  myvg lvm2 a-   17.15G 15.15G  2.00G
We want to move the extents of /dev/sdb1 to a new device, /dev/sdd1.

5.4.2.1. Creating the New Physical Volume

Create a new physical volume from /dev/sdd1.
# pvcreate /dev/sdd1
  Physical volume "/dev/sdd1" successfully created

5.4.2.2. Adding the New Physical Volume to the Volume Group

Add /dev/sdd1 to the existing volume group myvg.
# vgextend myvg /dev/sdd1
  Volume group "myvg" successfully extended
[root@tng3-1]# pvs -o+pv_used
  PV         VG   Fmt  Attr PSize  PFree  Used
  /dev/sda1  myvg lvm2 a-   17.15G  7.15G 10.00G
  /dev/sdb1  myvg lvm2 a-   17.15G 15.15G  2.00G
  /dev/sdc1  myvg lvm2 a-   17.15G 15.15G  2.00G
  /dev/sdd1  myvg lvm2 a-   17.15G 17.15G      0

5.4.2.3. Moving the Data

Use the pvmove command to move the data from /dev/sdb1 to /dev/sdd1.
# pvmove /dev/sdb1 /dev/sdd1
  /dev/sdb1: Moved: 10.0%
  ...
  /dev/sdb1: Moved: 79.7%
  ...
  /dev/sdb1: Moved: 100.0%
[root@tng3-1]# pvs -o+pv_used
  PV         VG   Fmt  Attr PSize  PFree  Used
  /dev/sda1  myvg lvm2 a-   17.15G  7.15G 10.00G
  /dev/sdb1  myvg lvm2 a-   17.15G 17.15G      0
  /dev/sdc1  myvg lvm2 a-   17.15G 15.15G  2.00G
  /dev/sdd1  myvg lvm2 a-   17.15G 15.15G  2.00G

5.4.2.4. Removing the Old Physical Volume from the Volume Group

After you have moved the data off /dev/sdb1, you can remove it from the volume group.
# vgreduce myvg /dev/sdb1
  Removed "/dev/sdb1" from volume group "myvg"
You can now reallocate the disk to another volume group or remove the disk from the system.

5.5. Creating a Mirrored LVM Logical Volume in a Cluster

Creating a mirrored LVM logical volume in a cluster requires the same commands and procedures as creating a mirrored LVM logical volume on a single node. However, in order to create a mirrored LVM volume in a cluster the cluster and cluster mirror infrastructure must be running, the cluster must be quorate, and the locking type in the lvm.conf file must be set correctly to enable cluster locking, either directly or by means of the lvmconf command as described in Section 3.1, "Creating LVM Volumes in a Cluster".
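The lvmconf command edits the locking_type setting on each node. For reference, the corresponding excerpt of the configuration file looks like the following; this is a minimal sketch, assuming the default /etc/lvm/lvm.conf path, where locking type 3 selects the built-in clustered locking used by clvmd.
global {
    # 3 = built-in clustered locking (clvmd)
    locking_type = 3
}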
The following procedure creates a mirrored LVM volume in a cluster. First, the procedure verifies that the cluster services are installed and running; it then creates the mirrored volume.
  1. In order to create a mirrored logical volume that is shared by all of the nodes in a cluster, the locking type must be set correctly in the lvm.conf file in every node of the cluster. By default, the locking type is set to local. To change this, execute the following command in each node of the cluster to enable clustered locking:
    # /sbin/lvmconf --enable-cluster
  2. To create a clustered logical volume, the cluster infrastructure must be up and running on every node in the cluster. The following example verifies that the clvmd daemon is running on the node from which it was issued:
    # ps auxw | grep clvmd
    root     17642  0.0  0.1 32164 1072 ?   Ssl  Apr06   0:00 clvmd -T20 -t 90
    The following command shows the local view of the cluster status:
    # cman_tool services
    fence domain
    member count  3
    victim count  0
    victim now    0
    master nodeid 2
    wait state    none
    members       1 2 3

    dlm lockspaces
    name          clvmd
    id            0x4104eefa
    flags         0x00000000
    change        member 3 joined 1 remove 0 failed 0 seq 1,1
    members       1 2 3
  3. Ensure that the cmirror package is installed.
  4. Start the cmirrord service.
    # service cmirrord start
    Starting cmirrord: [  OK  ]
  5. Create the mirror. The first step is creating the physical volumes. The following commands create three physical volumes. Two of the physical volumes will be used for the legs of the mirror, and the third physical volume will contain the mirror log.
    # pvcreate /dev/xvdb1
      Physical volume "/dev/xvdb1" successfully created
    [root@doc-07 ~]# pvcreate /dev/xvdb2
      Physical volume "/dev/xvdb2" successfully created
    [root@doc-07 ~]# pvcreate /dev/xvdc1
      Physical volume "/dev/xvdc1" successfully created
  6. Create the volume group. This example creates a volume group vg001 that consists of the three physical volumes that were created in the previous step.
    # vgcreate vg001 /dev/xvdb1 /dev/xvdb2 /dev/xvdc1
      Clustered volume group "vg001" successfully created
    Note that the output of the vgcreate command indicates that the volume group is clustered. You can verify that a volume group is clustered with the vgs command, which will show the volume group's attributes. If a volume group is clustered, it will show a c attribute.
    # vgs vg001
      VG    #PV #LV #SN Attr   VSize  VFree
      vg001   3   0   0 wz--nc 68.97G 68.97G
  7. Create the mirrored logical volume. This example creates the logical volume mirrorlv from the volume group vg001. This volume has one mirror leg. This example specifies which extents of the physical volume will be used for the logical volume.
    # lvcreate -l 1000 -m1 vg001 -n mirrorlv /dev/xvdb1:1-1000 /dev/xvdb2:1-1000 /dev/xvdc1:0
      Logical volume "mirrorlv" created
    You can use the lvs command to display the progress of the mirror creation. The following example shows that the mirror is 47% synced, then 91% synced, then 100% synced when the mirror is complete.
    # lvs vg001/mirrorlv
      LV       VG    Attr   LSize Origin Snap%  Move Log        Copy%  Convert
      mirrorlv vg001 mwi-a- 3.91G              vg001_mlog  47.00
    [root@doc-07 log]# lvs vg001/mirrorlv
      LV       VG    Attr   LSize Origin Snap%  Move Log        Copy%  Convert
      mirrorlv vg001 mwi-a- 3.91G              vg001_mlog  91.00
    [root@doc-07 ~]# lvs vg001/mirrorlv
      LV       VG    Attr   LSize Origin Snap%  Move Log        Copy%  Convert
      mirrorlv vg001 mwi-a- 3.91G              vg001_mlog 100.00
    The completion of the mirror is noted in the system log:
    May 10 14:52:52 doc-07 [19402]: Monitoring mirror device vg001-mirrorlv for events
    May 10 14:55:00 doc-07 lvm[19402]: vg001-mirrorlv is now in-sync
  8. You can use the lvs command with the -o +devices option to display the configuration of the mirror, including which devices make up the mirror legs. You can see that the logical volume in this example is composed of two linear images and one log.
    # lvs -a -o +devices
      LV                  VG    Attr   LSize Origin Snap% Move Log           Copy%  Convert Devices
      mirrorlv            vg001 mwi-a- 3.91G                  mirrorlv_mlog 100.00          mirrorlv_mimage_0(0),mirrorlv_mimage_1(0)
      [mirrorlv_mimage_0] vg001 iwi-ao 3.91G                                                /dev/xvdb1(1)
      [mirrorlv_mimage_1] vg001 iwi-ao 3.91G                                                /dev/xvdb2(1)
      [mirrorlv_mlog]     vg001 lwi-ao 4.00M                                                /dev/xvdc1(0)
    You can use the seg_pe_ranges option of the lvs command to display the data layout and verify that your layout is properly redundant. The output of this command displays PE ranges in the same format that the lvcreate and lvresize commands take as input.
    # lvs -a -o +seg_pe_ranges --segments
      PE Ranges
      mirrorlv_mimage_0:0-999 mirrorlv_mimage_1:0-999
      /dev/xvdb1:1-1000
      /dev/xvdb2:1-1000
      /dev/xvdc1:0-0

Note

For information on recovering from the failure of one of the legs of an LVM mirrored volume, see Section 6.3, "Recovering from LVM Mirror Failure".

Chapter 6. LVM Troubleshooting

This chapter provides instructions for troubleshooting a variety of LVM issues.

6.1. Troubleshooting Diagnostics

If a command is not working as expected, you can gather diagnostics in the following ways:
  • Use the -v, -vv, -vvv, or -vvvv argument of any command for increasingly verbose levels of output.
  • If the problem is related to logical volume activation, set activation = 1 in the log section of the configuration file (see the excerpt after this list) and run the command with the -vvvv argument. After you have finished examining this output, be sure to reset this parameter to 0 to avoid possible problems with the machine locking during low-memory situations.
  • Run the lvmdump command, which provides an information dump for diagnostic purposes. For information, see the lvmdump(8) man page.
  • Execute the lvs -v, pvs -a or dmsetup info -c command for additional system information.
  • Examine the last backup of the metadata in the /etc/lvm/backup directory and the archived versions in the /etc/lvm/archive directory.
  • Check the current configuration information by running the lvm dumpconfig command.
  • Check the .cache file in the /etc/lvm directory for a record of which devices have physical volumes on them.
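The activation setting mentioned above lives in the log section of the LVM configuration file; a minimal excerpt, assuming the default /etc/lvm/lvm.conf path:
log {
    # Log extra messages during device activation while debugging;
    # reset to 0 when you are finished.
    activation = 1
}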

6.2. Displaying Information on Failed Devices

You can use the -P argument of the lvs or vgs command to display information about a failed volume that would otherwise not appear in the output. This argument permits some operations even though the metadata is not completely consistent internally. For example, if one of the devices that made up the volume group vg failed, the vgs command might show the following output.
# vgs -o +devices
  Volume group "vg" not found
If you specify the -P argument of the vgs command, the volume group is still unusable but you can see more information about the failed device.
# vgs -P -o +devices
  Partial mode. Incomplete volume groups will be activated read-only.
  VG #PV #LV #SN Attr   VSize VFree Devices
  vg   9   2   0 rz-pn- 2.11T 2.07T unknown device(0)
  vg   9   2   0 rz-pn- 2.11T 2.07T unknown device(5120),/dev/sda1(0)
In this example, the failed device caused both a linear and a striped logical volume in the volume group to fail. The lvs command without the -P argument shows the following output.
# lvs -a -o +devices
  Volume group "vg" not found
Using the -P argument shows the logical volumes that have failed.
# lvs -P -a -o +devices
  Partial mode. Incomplete volume groups will be activated read-only.
  LV     VG   Attr   LSize  Origin Snap%  Move Log Copy%  Devices
  linear vg   -wi-a- 20.00G                              unknown device(0)
  stripe vg   -wi-a- 20.00G                              unknown device(5120),/dev/sda1(0)
The following examples show the output of the pvs and lvs commands with the -P argument specified when a leg of a mirrored logical volume has failed.
# vgs -a -o +devices -P
  Partial mode. Incomplete volume groups will be activated read-only.
  VG    #PV #LV #SN Attr   VSize VFree Devices
  corey   4   4   0 rz-pnc 1.58T 1.34T my_mirror_mimage_0(0),my_mirror_mimage_1(0)
  corey   4   4   0 rz-pnc 1.58T 1.34T /dev/sdd1(0)
  corey   4   4   0 rz-pnc 1.58T 1.34T unknown device(0)
  corey   4   4   0 rz-pnc 1.58T 1.34T /dev/sdb1(0)
# lvs -a -o +devices -P
  Partial mode. Incomplete volume groups will be activated read-only.
  LV                   VG    Attr   LSize   Origin Snap%  Move Log             Copy%  Devices
  my_mirror            corey mwi-a- 120.00G              my_mirror_mlog    1.95 my_mirror_mimage_0(0),my_mirror_mimage_1(0)
  [my_mirror_mimage_0] corey iwi-ao 120.00G                                     unknown device(0)
  [my_mirror_mimage_1] corey iwi-ao 120.00G                                     /dev/sdb1(0)
  [my_mirror_mlog]     corey lwi-ao   4.00M                                     /dev/sdd1(0)

6.3. Recovering from LVM Mirror Failure

This section provides an example of recovering from a situation where one leg of an LVM mirrored volume fails because the underlying device for a physical volume goes down and the mirror_log_fault_policy parameter is set to remove, requiring that you manually rebuild the mirror. For information on setting the mirror_log_fault_policy parameter, see the lvm.conf(5) man page.
When a mirror leg fails, LVM converts the mirrored volume into a linear volume, which continues to operate as before but without the mirrored redundancy. At that point, you can add a new disk device to the system to use as a replacement physical device and rebuild the mirror.
The following command creates the physical volumes which will be used for the mirror.
# pvcreate /dev/sd[abcdefgh][12]
  Physical volume "/dev/sda1" successfully created
  Physical volume "/dev/sda2" successfully created
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdb2" successfully created
  Physical volume "/dev/sdc1" successfully created
  Physical volume "/dev/sdc2" successfully created
  Physical volume "/dev/sdd1" successfully created
  Physical volume "/dev/sdd2" successfully created
  Physical volume "/dev/sde1" successfully created
  Physical volume "/dev/sde2" successfully created
  Physical volume "/dev/sdf1" successfully created
  Physical volume "/dev/sdf2" successfully created
  Physical volume "/dev/sdg1" successfully created
  Physical volume "/dev/sdg2" successfully created
  Physical volume "/dev/sdh1" successfully created
  Physical volume "/dev/sdh2" successfully created
The following commands create the volume group vg and the mirrored volume groupfs.
# vgcreate vg /dev/sd[abcdefgh][12]
  Volume group "vg" successfully created
[root@link-08 ~]# lvcreate -L 750M -n groupfs -m 1 vg /dev/sda1 /dev/sdb1 /dev/sdc1
  Rounding up size to full physical extent 752.00 MB
  Logical volume "groupfs" created
You can use the lvs command to verify the layout of the mirrored volume and the underlying devices for the mirror leg and the mirror log. Note that in the first example the mirror is not yet completely synced; you should wait until the Copy% field displays 100.00 before continuing.
# lvs -a -o +devices
  LV                 VG   Attr   LSize   Origin Snap%  Move Log          Copy%  Devices
  groupfs            vg   mwi-a- 752.00M              groupfs_mlog  21.28 groupfs_mimage_0(0),groupfs_mimage_1(0)
  [groupfs_mimage_0] vg   iwi-ao 752.00M                                   /dev/sda1(0)
  [groupfs_mimage_1] vg   iwi-ao 752.00M                                   /dev/sdb1(0)
  [groupfs_mlog]     vg   lwi-ao   4.00M                                   /dev/sdc1(0)
[root@link-08 ~]# lvs -a -o +devices
  LV                 VG   Attr   LSize   Origin Snap%  Move Log          Copy%  Devices
  groupfs            vg   mwi-a- 752.00M              groupfs_mlog 100.00 groupfs_mimage_0(0),groupfs_mimage_1(0)
  [groupfs_mimage_0] vg   iwi-ao 752.00M                                   /dev/sda1(0)
  [groupfs_mimage_1] vg   iwi-ao 752.00M                                   /dev/sdb1(0)
  [groupfs_mlog]     vg   lwi-ao   4.00M                                   /dev/sdc1(0)
In this example, the primary leg of the mirror, /dev/sda1, fails. Any write activity to the mirrored volume causes LVM to detect the failed mirror. When this occurs, LVM converts the mirror into a single linear volume. In this case, to trigger the conversion, we execute a dd command.
# dd if=/dev/zero of=/dev/vg/groupfs count=10
10+0 records in
10+0 records out
You can use the lvs command to verify that the device is now a linear device. Because of the failed disk, I/O errors occur.
# lvs -a -o +devices
  /dev/sda1: read failed after 0 of 2048 at 0: Input/output error
  /dev/sda2: read failed after 0 of 2048 at 0: Input/output error
  LV      VG   Attr   LSize   Origin Snap%  Move Log Copy%  Devices
  groupfs vg   -wi-a- 752.00M                               /dev/sdb1(0)
At this point you should still be able to use the logical volume, but there will be no mirror redundancy.
To rebuild the mirrored volume, you replace the broken drive and recreate the physical volume. If you use the same disk rather than replacing it with a new one, you will see "inconsistent" warnings when you run the pvcreate command. You can prevent that warning from appearing by executing the vgreduce --removemissing command.
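A minimal sketch of that cleanup step, run before recreating the physical volume and assuming the volume group vg used in this example:
# vgreduce --removemissing vg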
# pvcreate /dev/sdi[12]
  Physical volume "/dev/sdi1" successfully created
  Physical volume "/dev/sdi2" successfully created
[root@link-08 ~]# pvscan
  PV /dev/sdb1   VG vg   lvm2 [67.83 GB / 67.10 GB free]
  PV /dev/sdb2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdc1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdc2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdd1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdd2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sde1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sde2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdf1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdf2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdg1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdg2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdh1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdh2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdi1           lvm2 [603.94 GB]
  PV /dev/sdi2           lvm2 [603.94 GB]
  Total: 16 [2.11 TB] / in use: 14 [949.65 GB] / in no VG: 2 [1.18 TB]
Next you extend the original volume group with the new physical volume.
# vgextend vg /dev/sdi[12]
  Volume group "vg" successfully extended
# pvscan
  PV /dev/sdb1   VG vg   lvm2 [67.83 GB / 67.10 GB free]
  PV /dev/sdb2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdc1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdc2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdd1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdd2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sde1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sde2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdf1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdf2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdg1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdg2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdh1   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdh2   VG vg   lvm2 [67.83 GB / 67.83 GB free]
  PV /dev/sdi1   VG vg   lvm2 [603.93 GB / 603.93 GB free]
  PV /dev/sdi2   VG vg   lvm2 [603.93 GB / 603.93 GB free]
  Total: 16 [2.11 TB] / in use: 16 [2.11 TB] / in no VG: 0 [0   ]
Convert the linear volume back to its original mirrored state.
# lvconvert -m 1 /dev/vg/groupfs /dev/sdi1 /dev/sdb1 /dev/sdc1
  Logical volume mirror converted.
You can use the lvs command to verify that the mirror is restored.
# lvs -a -o +devices
  LV                 VG   Attr   LSize   Origin Snap%  Move Log          Copy% Devices
  groupfs            vg   mwi-a- 752.00M              groupfs_mlog 68.62 groupfs_mimage_0(0),groupfs_mimage_1(0)
  [groupfs_mimage_0] vg   iwi-ao 752.00M                                 /dev/sdb1(0)
  [groupfs_mimage_1] vg   iwi-ao 752.00M                                 /dev/sdi1(0)
  [groupfs_mlog]     vg   lwi-ao   4.00M                                 /dev/sdc1(0)

6.4. Recovering Physical Volume Metadata

If the volume group metadata area of a physical volume is accidentally overwritten or otherwise destroyed, you will get an error message indicating that the metadata area is incorrect, or that the system was unable to find a physical volume with a particular UUID. You may be able to recover the data on the physical volume by writing a new metadata area on the physical volume, specifying the same UUID as the lost metadata.

Warning

You should not attempt this procedure with a working LVM logical volume. You will lose your data if you specify the incorrect UUID.
The following example shows the sort of output you may see if the metadata area is missing or corrupted.
# lvs -a -o +devices
  Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'.
  Couldn't find all physical volumes for volume group VG.
  Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'.
  Couldn't find all physical volumes for volume group VG.
  ...
You may be able to find the UUID for the physical volume that was overwritten by looking in the /etc/lvm/archive directory. Look in the file VolumeGroupName_xxxx.vg for the last known valid archived LVM metadata for that volume group.
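For example, you might search the archived files for the UUID reported in the error messages; a minimal sketch, assuming the volume group is named VG as in this example:
# grep -l "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk" /etc/lvm/archive/VG_*.vg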
Alternately, you may find that deactivating the volume and setting the partial (-P) argument will enable you to find the UUID of the missing corrupted physical volume.
# vgchange -an --partial
  Partial mode. Incomplete volume groups will be activated read-only.
  Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'.
  Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'.
  ...
Use the --uuid and --restorefile arguments of the pvcreate command to restore the physical volume. The following example labels the /dev/sdh1 device as a physical volume with the UUID indicated above, FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk. This command restores the physical volume label with the metadata information contained in VG_00050.vg, the most recent good archived metadata for the volume group. The restorefile argument instructs the pvcreate command to make the new physical volume compatible with the old one on the volume group, ensuring that the new metadata will not be placed where the old physical volume contained data (which could happen, for example, if the original pvcreate command had used the command line arguments that control metadata placement, or if the physical volume was originally created using a different version of the software that used different defaults). The pvcreate command overwrites only the LVM metadata areas and does not affect the existing data areas.
# pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk" --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1
  Physical volume "/dev/sdh1" successfully created
You can then use the vgcfgrestore command to restore the volume group's metadata.
# vgcfgrestore VG
  Restored volume group VG
You can now display the logical volumes.
# lvs -a -o +devices
  LV     VG   Attr   LSize   Origin Snap%  Move Log Copy%  Devices
  stripe VG   -wi--- 300.00G                               /dev/sdh1 (0),/dev/sda1(0)
  stripe VG   -wi--- 300.00G                               /dev/sdh1 (34728),/dev/sdb1(0)
The following commands activate the volumes and display the active volumes.
# lvchange -ay /dev/VG/stripe
[root@link-07 backup]# lvs -a -o +devices
  LV     VG   Attr   LSize   Origin Snap%  Move Log Copy%  Devices
  stripe VG   -wi-a- 300.00G                               /dev/sdh1 (0),/dev/sda1(0)
  stripe VG   -wi-a- 300.00G                               /dev/sdh1 (34728),/dev/sdb1(0)
If the on-disk LVM metadata takes up at least as much space as what overrode it, this command can recover the physical volume. If whatever overrode the metadata went past the metadata area, the data on the volume may have been affected. You might be able to use the fsck command to recover that data.

6.5. Replacing a Missing Physical Volume

If a physical volume fails or otherwise needs to be replaced, you can label a new physical volume to replace the one that has been lost in the existing volume group by following the same procedure as you would for recovering physical volume metadata, described in Section 6.4, "Recovering Physical Volume Metadata". You can use the --partial and --verbose arguments of the vgdisplay command to display the UUIDs and sizes of any physical volumes that are no longer present. If you wish to substitute another physical volume of the same size, you can use the pvcreate command with the --restorefile and --uuid arguments to initialize a new device with the same UUID as the missing physical volume. You can then use the vgcfgrestore command to restore the volume group's metadata.
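A minimal sketch of the inspection step described above, assuming the volume group name VG from the previous section:
# vgdisplay --partial --verbose VG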

6.6. Removing Lost Physical Volumes from a Volume Group

If you lose a physical volume, you can activate the remaining physical volumes in the volume group with the --partial argument of the vgchange command. You can remove all the logical volumes that used that physical volume from the volume group with the --removemissing argument of the vgreduce command.
It is recommended that you run the vgreduce command with the --test argument to verify what you will be destroying.
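A minimal sketch of this sequence, assuming a volume group named myvg: first activate what remains of the volume group, then preview the removal with --test, and only then run it for real.
# vgchange -ay --partial myvg
# vgreduce --removemissing --test myvg
# vgreduce --removemissing myvg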
Like most LVM operations, the vgreduce command is reversible in a sense if you immediately use the vgcfgrestore command to restore the volume group metadata to its previous state. For example, if you used the --removemissing argument of the vgreduce command without the --test argument and find you have removed logical volumes you wanted to keep, you can still replace the physical volume and use another vgcfgrestore command to return the volume group to its previous state.

6.7. Insufficient Free Extents for a Logical Volume

You may get the error message "Insufficient free extents" when creating a logical volume even though you think you have enough extents, based on the output of the vgdisplay or vgs commands. This is because these commands round figures to two decimal places to provide human-readable output. To specify an exact size, determine the size of the logical volume by the number of free physical extents rather than by a number of bytes.
The vgdisplay command, by default, includes this line of output that indicates the free physical extents.
# vgdisplay
  --- Volume group ---
  ...
  Free  PE / Size       8780 / 34.30 GB
Alternately, you can use the vg_free_count and vg_extent_count arguments of the vgs command to display the free extents and the total number of extents.
# vgs -o +vg_free_count,vg_extent_count
  VG     #PV #LV #SN Attr   VSize  VFree  Free #Ext
  testvg   2   0   0 wz--n- 34.30G 34.30G 8780 8780
With 8780 free physical extents, you can run the following command, using the lower-case l argument to use extents instead of bytes:
# lvcreate -l8780 -n testlv testvg
This uses all the free extents in the volume group.
# vgs -o +vg_free_count,vg_extent_count
  VG     #PV #LV #SN Attr   VSize  VFree Free #Ext
  testvg   2   1   0 wz--n- 34.30G    0     0 8780
Alternately, you can use the -l argument of the lvcreate command to specify the size of the logical volume as a percentage of the remaining free space in the volume group, as in the sketch below. For information, see Section 4.4.1, "Creating Linear Logical Volumes".
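For example, a minimal sketch that creates a new logical volume using half of the remaining free space in the volume group; the volume name newlv is hypothetical, and the %FREE suffix is accepted by the -l argument:
# lvcreate -l 50%FREE -n newlv testvg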