
Virtualization Administration Guide

Chapter 11. Storage pools

This chapter includes instructions on creating storage pools of assorted types. A storage pool is a quantity of storage set aside by an administrator, often a dedicated storage administrator, for use by virtual machines. Storage pools are often divided into storage volumes either by the storage administrator or the system administrator, and the volumes are assigned to guest virtual machines as block devices.

Example 11.1. NFS storage pool

Suppose a storage administrator responsible for an NFS server creates a share to store guest virtual machines' data. The system administrator defines a pool on the host with the details of the share (nfs.example.com:/path/to/share should be mounted on /vm_data). When the pool is started, libvirt mounts the share on the specified directory, just as if the system administrator had logged in and executed mount nfs.example.com:/path/to/share /vm_data. If the pool is configured to autostart, libvirt ensures that the NFS share is mounted on the specified directory when libvirt is started.
Once the pool is started, the files in the NFS share are reported as volumes, and the storage volumes' paths can be queried using the libvirt APIs. The volumes' paths can then be copied into the section of a guest virtual machine's XML definition that describes the source storage for the guest virtual machine's block devices. With NFS, applications using the libvirt APIs can create and delete volumes in the pool (files within the NFS share) up to the limit of the size of the pool (the maximum storage capacity of the share). Not all pool types support creating and deleting volumes. Stopping the pool negates the start operation, in this case unmounting the NFS share. The data on the share is not modified by the destroy operation, despite the name. See man virsh for more details.
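For illustration, a libvirt pool definition matching this example might look roughly like the following sketch. The pool type for an NFS share is netfs; the pool name vm_data_pool is a hypothetical choice, and the remaining values are taken from the share details above.
    <pool type='netfs'>
      <name>vm_data_pool</name>
      <source>
        <host name='nfs.example.com'/>
        <dir path='/path/to/share'/>
        <format type='nfs'/>
      </source>
      <target>
        <path>/vm_data</path>
      </target>
    </pool>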

Note

Storage pools and volumes are not required for the proper operation of guest virtual machines. Pools and volumes provide a way for libvirt to ensure that a particular piece of storage will be available for a guest virtual machine, but some administrators will prefer to manage their own storage and guest virtual machines will operate properly without any pools or volumes defined. On systems that do not use pools, system administrators must ensure the availability of the guest virtual machines' storage using whatever tools they prefer, for example, adding the NFS share to the host's fstab so that the share is mounted at boot time.
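For example, an /etc/fstab entry for the share described above might look like the following line; the mount options shown are only an assumption and should match site requirements.
    nfs.example.com:/path/to/share  /vm_data  nfs  defaults  0 0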

11.1.  Creating storage pools

11.1.1. Disk-based storage pools

This section covers creating disk based storage devices for guest virtual machines.

Warning

Guests should not be given write access to whole disks or block devices (for example, /dev/sdb). Use partitions (for example, /dev/sdb1) or LVM volumes.
If you pass an entire block device to the guest, the guest will likely partition it or create its own LVM groups on it. The host may then detect these partitions or LVM groups, which can lead to errors.

11.1.1.1. Creating a disk based storage pool using virsh

This procedure creates a new storage pool using a disk device with the virsh command.

Warning

Dedicating a disk to a storage pool will reformat and erase all data presently stored on the disk device! It is strongly recommended to back up the storage device before commencing with the following procedure:
  1. Create a GPT disk label on the disk

    The disk must be relabeled with a GUID Partition Table (GPT) disk label. GPT disk labels allow for creating a large number of partitions (up to 128) on each device, far more than an MS-DOS partition table can store.
    # parted /dev/sdb
    GNU Parted 2.1
    Using /dev/sdb
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) mklabel
    New disk label type? gpt
    (parted) quit
    Information: You may need to update /etc/fstab.
    #
  2. Create the storage pool configuration file

    Create a temporary XML text file containing the storage pool information required for the new device.
    The file must be in the format shown below, and contain the following fields:
    <name>guest_images_disk</name>
    The name parameter determines the name of the storage pool. This example uses the name guest_images_disk.
    <device path='/dev/sdb'/>
    The device parameter with the path attribute specifies the device path of the storage device. This example uses the device /dev/sdb.
    <target> <path>/dev</path></target>
    The file system target parameter with the path sub-parameter determines the location on the host file system where volumes created with this storage pool are attached.
    Using /dev/, as in the example below, means volumes created from this storage pool can be accessed as /dev/sdb1, /dev/sdb2, /dev/sdb3, and so on.
    <format type='gpt'/>
    The format parameter specifies the partition table type. This example uses gpt to match the GPT disk label type created in the previous step.
    Create the XML file for the storage pool device with a text editor.

    Example 11.2. Disk based storage device storage pool

    <pool type='disk'>
      <name>guest_images_disk</name>
      <source>
        <device path='/dev/sdb'/>
        <format type='gpt'/>
      </source>
      <target>
        <path>/dev</path>
      </target>
    </pool>

  3. Attach the device

    Add the storage pool definition using the virsh pool-define command with the XML configuration file created in the previous step.
    # virsh pool-define ~/guest_images_disk.xml
    Pool guest_images_disk defined from /root/guest_images_disk.xml
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images_disk    inactive   no
  4. Start the storage pool

    Start the storage pool with the virsh pool-start command. Verify the pool is started with the virsh pool-list --all command.
    # virsh pool-start guest_images_disk
    Pool guest_images_disk started
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images_disk    active     no
  5. Turn on autostart

    Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
    # virsh pool-autostart guest_images_disk
    Pool guest_images_disk marked as autostarted
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images_disk    active     yes
  6. Verify the storage pool configuration

    Verify the storage pool was created correctly, the sizes are reported correctly, and the state is reported as running.
    # virsh pool-info guest_images_disk
    Name:           guest_images_disk
    UUID:           551a67c8-5f2a-012c-3844-df29b167431c
    State:          running
    Capacity:       465.76 GB
    Allocation:     0.00
    Available:      465.76 GB
    # ls -la /dev/sdb
    brw-rw----. 1 root disk 8, 16 May 30 14:08 /dev/sdb
    # virsh vol-list guest_images_disk
    Name                 Path
    -----------------------------------------
  7. Optional: Remove the temporary configuration file

    Remove the temporary storage pool XML configuration file if it is not needed.
    # rm ~/guest_images_disk.xml
A disk based storage pool is now available.
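Because a disk-based pool allocates its volumes as partitions on the device, new volumes can be created with the virsh vol-create-as command once the pool is running. The following is only a sketch: the volume name and size are example values, and for disk pools the volume name generally has to follow the device's partition naming (sdb1, sdb2, and so on).
    # virsh vol-create-as guest_images_disk sdb1 8G
    Vol sdb1 created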

11.1.1.2. Deleting a storage pool using virsh

The following demonstrates how to delete a storage pool using virsh:
  1. To avoid any issues with other guests using the same pool, it is best to stop the storage pool and release any resources in use by it.
    # virsh pool-destroy guest_images_disk
  2. Remove the storage pool's definition
    # virsh pool-undefine guest_images_disk

11.1.2. Partition-based storage pools

This section covers using a pre-formatted block device, a partition, as a storage pool.
For the following examples, a host has a 500GB hard drive (/dev/sdc) partitioned into one 500GB, ext4 formatted partition (/dev/sdc1). We set up a storage pool for it using the procedure below.
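For reference, one way such a partition could have been prepared is sketched below. The device name, partition boundaries, and file system are example values; adapt them to your hardware, and note that these commands destroy any existing data on the device.
    # parted /dev/sdc mklabel msdos
    # parted /dev/sdc mkpart primary ext4 0% 100%
    # mkfs.ext4 /dev/sdc1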

11.1.2.1. Creating a partition-based storage pool using virt-manager

This procedure creates a new storage pool using a partition of a storage device.

Procedure 11.1. Creating a partition-based storage pool with virt-manager

  1. Open the storage pool settings

    1. In the virt-manager graphical interface, select the host from the main window.
      Open the Edit menu and select Connection Details
      Connection Details

      Figure 11.1. Connection Details


    2. Click on the Storage tab of the Connection Details window.
      Storage tab

      Figure 11.2. Storage tab


  2. Create the new storage pool

    1. Add a new pool (part 1)

      Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
      Choose a Name for the storage pool. This example uses the name guest_images_fs. Change the Type to fs: Pre-Formatted Block Device.
      Storage pool name and type

      Figure 11.3. Storage pool name and type


      Press the Forward button to continue.
    2. Add a new pool (part 2)

      Change the Target Path, Format, and Source Path fields.
      Storage pool path and format

      Figure 11.4. Storage pool path and format


      Target Path
      Enter the location to mount the source device for the storage pool in the Target Path field. If the location does not already exist, virt-manager will create the directory.
      Format
      Select a format from the Format list. The device is formatted with the selected format.
      This example uses the ext4 file system, the default Red Hat Enterprise Linux file system.
      Source Path
      Enter the device in the Source Path field.
      This example uses the /dev/sdc1 device.
      Verify the details and press the Finish button to create the storage pool.
  3. Verify the new storage pool

    The new storage pool appears in the storage list on the left after a few seconds. Verify the size is reported as expected, 458.20 GB Free in this example. Verify the State field reports the new storage pool as Active.
    Select the storage pool. In the Autostart field, click the On Boot checkbox. This will make sure the storage pool starts whenever the libvirtd service starts.
    Storage list confirmation

    Figure 11.5. Storage list confirmation


    The storage pool is now created; close the Connection Details window.

11.1.2.2. Deleting a storage pool using virt-manager

This procedure demonstrates how to delete a storage pool.
  1. To avoid any issues with other guests using the same pool, it is best to stop the storage pool and release any resources in use by it. To do this, select the storage pool you want to stop and click the red X icon at the bottom of the Storage window.
    Stop Icon

    Figure 11.6. Stop Icon


  2. Delete the storage pool by clicking the Trash can icon. This icon is only enabled if you stop the storage pool first.

11.1.2.3. Creating a partition-based storage pool using virsh

This section covers creating a partition-based storage pool with the virsh command.

Warning

Do not use this procedure to assign an entire disk as a storage pool (for example, /dev/sdb). Guests should not be given write access to whole disks or block devices. Only use this method to assign partitions (for example, /dev/sdb1) to storage pools.

Procedure 11.2. Creating pre-formatted block device storage pools using virsh

  1. Create the storage pool definition

    Use the virsh pool-define-as command to create a new storage pool definition. There are three options that must be provided to define a pre-formatted disk as a storage pool:
    Partition name
    The name parameter determines the name of the storage pool. This example uses the name guest_images_fs.
    device
    The device parameter with the path attribute specifies the device path of the storage device. This example uses the partition /dev/sdc1.
    mountpoint
    The mountpoint on the local file system where the formatted device will be mounted. If the mount point directory does not exist, the virsh command can create the directory.
    The directory /guest_images is used in this example.
    # virsh pool-define-as guest_images_fs fs - - /dev/sdc1 - "/guest_images"
    Pool guest_images_fs defined
    The new pool and mount points are now created.
  2. Verify the new pool

    List the present storage pools.
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images_fs      inactive   no
  3. Create the mount point

    Use the virsh pool-build command to create a mount point for a pre-formatted file system storage pool.
    # virsh pool-build guest_images_fs
    Pool guest_images_fs built
    # ls -la /guest_images
    total 8
    drwx------.  2 root root 4096 May 31 19:38 .
    dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images_fs      inactive   no
  4. Start the storage pool

    Use the virsh pool-start command to mount the file system onto the mount point and make the pool available for use.
    # virsh pool-start guest_images_fs
    Pool guest_images_fs started
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images_fs      active     no
  5. Turn on autostart

    By default, a storage pool defined with virsh is not set to start automatically each time libvirtd starts. Turn on automatic start with the virsh pool-autostart command. The storage pool is then started automatically each time libvirtd starts.
    # virsh pool-autostart guest_images_fs
    Pool guest_images_fs marked as autostarted
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images_fs      active     yes
  6. Verify the storage pool

    Verify the storage pool was created correctly, the sizes reported are as expected, and the state is reported as running. Verify there is a "lost+found" directory in the mount point on the file system, indicating the device is mounted.
    # virsh pool-info guest_images_fs
    Name:           guest_images_fs
    UUID:           c7466869-e82a-a66c-2187-dc9d6f0877d0
    State:          running
    Persistent:     yes
    Autostart:      yes
    Capacity:       458.39 GB
    Allocation:     197.91 MB
    Available:      458.20 GB
    # mount | grep /guest_images
    /dev/sdc1 on /guest_images type ext4 (rw)
    # ls -la /guest_images
    total 24
    drwxr-xr-x.  3 root root  4096 May 31 19:47 .
    dr-xr-xr-x. 25 root root  4096 May 31 19:38 ..
    drwx------.  2 root root 16384 May 31 14:18 lost+found
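In addition to virsh pool-info, the pool's underlying XML definition can be inspected with the virsh pool-dumpxml command. The output below is an abridged sketch for this example; the UUID, capacity, and permission elements are omitted, and the reported format type may differ on your system.
    # virsh pool-dumpxml guest_images_fs
    <pool type='fs'>
      <name>guest_images_fs</name>
      <source>
        <device path='/dev/sdc1'/>
        <format type='auto'/>
      </source>
      <target>
        <path>/guest_images</path>
      </target>
    </pool>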

11.1.2.4. Deleting a storage pool using virsh

  1. To avoid any issues with other guests using the same pool, it is best to stop the storage pool and release any resources in use by it.
    # virsh pool-destroy guest_images_disk
  2. Optionally, if you want to remove the directory where the storage pool resides use the following command:
    # virsh pool-delete guest_images_disk
  3. Remove the storage pool's definition
    # virsh pool-undefine guest_images_disk

11.1.3. Directory-based storage pools

This section covers storing guests in a directory on the host.
Directory-based storage pools can be created with virt-manager or the virsh command line tools.

11.1.3.1. Creating a directory-based storage pool with virt-manager

  1. Create the local directory

    1. Optional: Create a new directory for the storage pool

      Create the directory on the host for the storage pool. This example uses a directory named /guest_images.
      # mkdir /guest_images
    2. Set directory ownership

      Change the user and group ownership of the directory. The directory must be owned by the root user.
      # chown root:root /guest_images
    3. Set directory permissions

      Change the file permissions of the directory.
      # chmod 700 /guest_images
    4. Verify the changes

      Verify the permissions were modified. The output shows a correctly configured empty directory.
      # ls -la /guest_images
      total 8
      drwx------.  2 root root 4096 May 28 13:57 .
      dr-xr-xr-x. 26 root root 4096 May 28 13:57 ..
  2. Configure SELinux file contexts

    Configure the correct SELinux context for the new directory. Note that the name of the pool and the directory do not have to match. However, when you shut down the guest virtual machine, libvirt has to set the context back to a default value. The context of the directory determines what this default value is. It is worth explicitly labelling the directory virt_image_t, so that when the guest virtual machine is shut down, the images get labeled 'virt_image_t' and are thus isolated from other processes running on the host.
    # semanage fcontext -a -t virt_image_t '/guest_images(/.*)?'
    # restorecon -R /guest_images
  3. Open the storage pool settings

    1. In the virt-manager graphical interface, select the host from the main window.
      Open the Edit menu and select Connection Details
      Connection details window

      Figure 11.7. Connection details window


    2. Click on the Storage tab of the Connection Details window.
      Storage tab

      Figure 11.8. Storage tab


  4. Create the new storage pool

    1. Add a new pool (part 1)

      Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
      Choose a Name for the storage pool. This example uses the name guest_images. Change the Type to dir: Filesystem Directory.
      Name the storage pool

      Figure 11.9. Name the storage pool


      Press the Forward button to continue.
    2. Add a new pool (part 2)

      Change the Target Path field. For example, /guest_images.
      Verify the details and press the Finish button to create the storage pool.
  5. Verify the new storage pool

    The new storage pool appears in the storage list on the left after a few seconds. Verify the size is reported as expected, 36.41 GB Free in this example. Verify the State field reports the new storage pool as Active.
    Select the storage pool. In the Autostart field, confirm that the On Boot checkbox is checked. This will make sure the storage pool starts whenever the libvirtd service starts.
    Verify the storage pool information

    Figure 11.10. Verify the storage pool information


    The storage pool is now created; close the Connection Details window.

11.1.3.2. Deleting a storage pool using virt-manager

This procedure demonstrates how to delete a storage pool.
  1. To avoid any issues with other guests using the same pool, it is best to stop the storage pool and release any resources in use by it. To do this, select the storage pool you want to stop and click the red X icon at the bottom of the Storage window.
    Stop Icon

    Figure 11.11. Stop Icon


  2. Delete the storage pool by clicking the Trash can icon. This icon is only enabled if you stop the storage pool first.

11.1.3.3. Creating a directory-based storage pool with virsh

  1. Create the storage pool definition

    Use the virsh pool-define-as command to define a new storage pool. There are two options required for creating directory-based storage pools:
    • The name of the storage pool.
      This example uses the name guest_images. All further virsh commands used in this example use this name.
    • The path to a file system directory for storing guest image files. If this directory does not exist, virsh will create it.
      This example uses the /guest_images directory.
    # virsh pool-define-as guest_images dir - - - - "/guest_images"
    Pool guest_images defined
  2. Verify the storage pool is listed

    Verify the storage pool object is created correctly and the state reports it as inactive.
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images         inactive   no
  3. Create the local directory

    Use the virsh pool-build command to build the directory-based storage pool for the directory guest_images (for example), as shown:
    # virsh pool-build guest_images
    Pool guest_images built
    # ls -la /guest_images
    total 8
    drwx------.  2 root root 4096 May 30 02:44 .
    dr-xr-xr-x. 26 root root 4096 May 30 02:44 ..
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images         inactive   no
  4. Start the storage pool

    Use the virsh pool-start command to enable the directory storage pool, thereby allowing volumes of the pool to be used as guest disk images.
    # virsh pool-start guest_images
    Pool guest_images started
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images         active     no
  5. Turn on autostart

    Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
    # virsh pool-autostart guest_images
    Pool guest_images marked as autostarted
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images         active     yes
  6. Verify the storage pool configuration

    Verify the storage pool was created correctly, the size is reported correctly, and the state is reported as running. If you want the pool to be accessible even if the guest is not running, make sure that Persistent is reported as yes. If you want the pool to start automatically when the service starts, make sure that Autostart is reported as yes.
    # virsh pool-info guest_images
    Name:           guest_images
    UUID:           779081bf-7a82-107b-2874-a19a9c51d24c
    State:          running
    Persistent:     yes
    Autostart:      yes
    Capacity:       49.22 GB
    Allocation:     12.80 GB
    Available:      36.41 GB
    # ls -la /guest_images
    total 8
    drwx------.  2 root root 4096 May 30 02:44 .
    dr-xr-xr-x. 26 root root 4096 May 30 02:44 ..
    #
A directory-based storage pool is now available.
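Volumes can now be created in the pool with the virsh vol-create-as command, in the same way as the LVM example later in this chapter. The following sketch creates an 8GB qcow2 image file named volume1 under /guest_images; the volume name and format are example values, and the output shown is approximate.
    # virsh vol-create-as guest_images volume1 8G --format qcow2
    Vol volume1 created
    # virsh vol-list guest_images
    Name                 Path
    -----------------------------------------
    volume1              /guest_images/volume1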

11.1.3.4. Deleting a storage pool using virsh

The following demonstrates how to delete a storage pool using virsh:
  1. To avoid any issues with other guests using the same pool, it is best to stop the storage pool and release any resources in use by it.
    # virsh pool-destroy guest_images_disk
  2. Optionally, if you want to remove the directory where the storage pool resides use the following command:
    # virsh pool-delete guest_images_disk
  3. Remove the storage pool's definition
    # virsh pool-undefine guest_images_disk

11.1.4. LVM-based storage pools

This section covers using LVM volume groups as storage pools.
LVM-based storage pools provide the full flexibility of LVM.

Note

Please refer to the Red Hat Enterprise Linux Storage Administration Guide for more details on LVM.

Warning

LVM-based storage pools require a full disk partition. If activating a new partition/device with these procedures, the partition will be formatted and all data will be erased. If using the host's existing Volume Group (VG) nothing will be erased. It is recommended to back up the storage device before commencing the following procedure.

11.1.4.1. Creating an LVM-based storage pool with virt-manager

LVM-based storage pools can use existing LVM volume groups or create new LVM volume groups on a blank partition.
  1. Optional: Create new partition for LVM volumes

    These steps describe how to create a new partition and LVM volume group on a new hard disk drive.

    Warning

    This procedure will remove all data from the selected storage device.
    1. Create a new partition

      Use the fdisk command to create a new disk partition from the command line. The following example creates a new partition that uses the entire disk on the storage device /dev/sdb.
      # fdisk /dev/sdb
      Command (m for help):
      Press n for a new partition.
    2. Press p for a primary partition.
      Command action
         e   extended
         p   primary partition (1-4)
    3. Choose an available partition number. In this example the first partition is chosen by entering 1.
      Partition number (1-4): 1
    4. Enter the default first cylinder by pressing Enter.
      First cylinder (1-400, default 1):
    5. Select the size of the partition. In this example the entire disk is allocated by pressing Enter.
      Last cylinder or +size or +sizeM or +sizeK (2-400, default 400):
    6. Set the type of partition by pressing t.
      Command (m for help): t
    7. Choose the partition you created in the previous steps. In this example, the partition number is 1.
      Partition number (1-4): 1
    8. Enter 8e for a Linux LVM partition.
      Hex code (type L to list codes): 8e
    9. Write the changes to disk and quit.
      Command (m for help): w
      Command (m for help): q
    10. Create a new LVM volume group

      Create a new LVM volume group with the vgcreate command. This example creates a volume group named guest_images_lvm.
      # vgcreate guest_images_lvm /dev/sdb1
        Physical volume "/dev/sdb1" successfully created
        Volume group "guest_images_lvm" successfully created
    The new LVM volume group, guest_images_lvm, can now be used for an LVM-based storage pool.
  2. Open the storage pool settings

    1. In the virt-manager graphical interface, select the host from the main window.
      Open the Edit menu and select Connection Details
      Connection details

      Figure 11.12. Connection details


    2. Click on the Storage tab.
      Storage tab

      Figure 11.13. Storage tab


  3. Create the new storage pool

    1. Start the Wizard

      Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
      Choose a Name for the storage pool. We use guest_images_lvm for this example. Then change the Type to logical: LVM Volume Group.
      Add LVM storage pool

      Figure 11.14. Add LVM storage pool


      Press the Forward button to continue.
    2. Add a new pool (part 2)

      Fill in the Target Path and Source Path fields, then tick the Build Pool check box.
      • Use the Target Path field either to select an existing LVM volume group or to provide the name for a new volume group. The default format is /dev/storage_pool_name.
        This example uses a new volume group named /dev/guest_images_lvm.
      • The Source Path field is optional if an existing LVM volume group is used in the Target Path.
        For new LVM volume groups, input the location of a storage device in the Source Path field. This example uses a blank partition /dev/sdc.
      • The Build Pool checkbox instructs virt-manager to create a new LVM volume group. If you are using an existing volume group you should not select the Build Pool checkbox.
        This example is using a blank partition to create a new volume group so the Build Pool checkbox must be selected.
      Add target and source

      Figure 11.15. Add target and source


      Verify the details and press the Finish button to format the LVM volume group and create the storage pool.
    3. Confirm the device to be formatted

      A warning message appears.
      Warning message

      Figure 11.16. Warning message


      Press the Yes button to proceed to erase all data on the storage device and create the storage pool.
  4. Verify the new storage pool

    The new storage pool will appear in the list on the left after a few seconds. Verify the details are what you expect, 465.76 GB Free in our example. Also verify the State field reports the new storage pool as Active.
    It is generally a good idea to have the Autostart check box enabled, to ensure the storage pool starts automatically with libvirtd.
    Confirm LVM storage pool details

    Figure 11.17. Confirm LVM storage pool details


    Close the Connection Details window, as the task is now complete.

11.1.4.2. Deleting a storage pool using virt-manager

This procedure demonstrates how to delete a storage pool.
  1. To avoid any issues with other guests using the same pool, it is best to stop the storage pool and release any resources in use by it. To do this, select the storage pool you want to stop and click the red X icon at the bottom of the Storage window.
    Stop Icon

    Figure 11.18. Stop Icon


  2. Delete the storage pool by clicking the Trash can icon. This icon is only enabled if you stop the storage pool first.

11.1.4.3. Creating an LVM-based storage pool with virsh

This section outlines the steps required to create an LVM-based storage pool with the virsh command. It uses the example of a pool named guest_images_lvm from a single drive (/dev/sdc). This is only an example and your settings should be substituted as appropriate.

Procedure 11.3. Creating an LVM-based storage pool with virsh

  1. Define the pool name guest_images_lvm.
    # virsh pool-define-as guest_images_lvm logical - - /dev/sdc libvirt_lvm \
        /dev/libvirt_lvm
    Pool guest_images_lvm defined
  2. Build the pool according to the specified name.
    # virsh pool-build guest_images_lvm
    Pool guest_images_lvm built
  3. Initialize the new pool.
    # virsh pool-start guest_images_lvm
    Pool guest_images_lvm started
  4. Show the volume group information with the vgs command.
    # vgs
      VG            #PV #LV #SN Attr   VSize   VFree
      libvirt_lvm     1   0   0 wz--n- 465.76g 465.76g
  5. Set the pool to start automatically.
    # virsh pool-autostart guest_images_lvm
    Pool guest_images_lvm marked as autostarted
  6. List the available pools with the virsh command.
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    guest_images_lvm     active     yes
  7. The following commands demonstrate the creation of three volumes (volume1, volume2 and volume3) within this pool.
    # virsh vol-create-as guest_images_lvm volume1 8G
    Vol volume1 created
    # virsh vol-create-as guest_images_lvm volume2 8G
    Vol volume2 created
    # virsh vol-create-as guest_images_lvm volume3 8G
    Vol volume3 created
  8. List the available volumes in this pool with the virsh command.
    # virsh vol-list guest_images_lvm
    Name                 Path
    -----------------------------------------
    volume1              /dev/libvirt_lvm/volume1
    volume2              /dev/libvirt_lvm/volume2
    volume3              /dev/libvirt_lvm/volume3
  9. The following two commands (lvscan and lvs) display further information about the newly created volumes.
    # lvscan
      ACTIVE            '/dev/libvirt_lvm/volume1' [8.00 GiB] inherit
      ACTIVE            '/dev/libvirt_lvm/volume2' [8.00 GiB] inherit
      ACTIVE            '/dev/libvirt_lvm/volume3' [8.00 GiB] inherit
    # lvs
      LV       VG            Attr   LSize  Pool Origin Data%  Move Log Copy%  Convert
      volume1  libvirt_lvm   -wi-a- 8.00g
      volume2  libvirt_lvm   -wi-a- 8.00g
      volume3  libvirt_lvm   -wi-a- 8.00g
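The volume paths listed above can be assigned to guest virtual machines as block devices, as described at the start of this chapter. A minimal sketch of the corresponding <disk> element in a guest's XML definition is shown below; the target device name vdb and the virtio bus are example choices.
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/libvirt_lvm/volume1'/>
      <target dev='vdb' bus='virtio'/>
    </disk>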

11.1.4.4. Deleting a storage pool using virsh

The following demonstrates how to delete a storage pool using virsh:
  1. To avoid any issues with other guests using the same pool, it is best to stop the storage pool and release any resources in use by it.
    # virsh pool-destroy guest_images_disk
  2. Optionally, if you want to remove the directory where the storage pool resides use the following command:
    # virsh pool-delete guest_images_disk
  3. Remove the storage pool's definition
    # virsh pool-undefine guest_images_disk

11.1.5. iSCSI-based storage pools

This section covers using iSCSI-based devices to store guests.
iSCSI (Internet Small Computer System Interface) is a network protocol for sharing storage devices. iSCSI connects initiators (storage clients) to targets (storage servers) using SCSI instructions over the IP layer.

11.1.5.1. Configuring a software iSCSI target

The scsi-target-utils package provides a tool for creating software-backed iSCSI targets.

Procedure 11.4. Creating an iSCSI target

  1. Install the required packages

    Install the scsi-target-utils package and all dependencies
    # yum install scsi-target-utils
  2. Start the tgtd service

    The tgtd service hosts SCSI targets and uses the iSCSI protocol to export them to initiators. Start the tgtd service and make it persistent across reboots with the chkconfig command.
    # service tgtd start
    # chkconfig tgtd on
  3. Optional: Create LVM volumes

    LVM volumes are useful for iSCSI backing images. LVM snapshots and resizing can be beneficial for guests. This example creates an LVM image named virtimage1 on a new volume group named virtstore on a RAID5 array for hosting guests with iSCSI.
    1. Create the RAID array

      Creating software RAID5 arrays is covered by the Red Hat Enterprise Linux Deployment Guide.
    2. Create the LVM volume group

      Create a volume group named virtstore with the vgcreate command.
      # vgcreate virtstore /dev/md1
    3. Create a LVM logical volume

      Create a logical volume named virtimage1 on the virtstore volume group with a size of 20GB using the lvcreate command.
      # lvcreate --size 20G -n virtimage1 virtstore
      The new logical volume, virtimage1, is ready to use for iSCSI.
  4. Optional: Create file-based images

    File-based storage is sufficient for testing but is not recommended for production environments or any significant I/O activity. This optional procedure creates a file-based image named virtimage2.img for an iSCSI target.
    1. Create a new directory for the image

      Create a new directory to store the image. The directory must have the correct SELinux contexts.
      # mkdir -p /var/lib/tgtd/virtualization
    2. Create the image file

      Create an image named virtimage2.img with a size of 10GB.
      # dd if=/dev/zero of=/var/lib/tgtd/virtualization/virtimage2.img bs=1M seek=10000 count=0
    3. Configure SELinux file contexts

      Configure the correct SELinux context for the new image and directory.
      # restorecon -R /var/lib/tgtd
      The new file-based image, virtimage2.img, is ready to use for iSCSI.
  5. Create targets

    Targets can be created by adding an XML entry to the /etc/tgt/targets.conf file. The target attribute requires an iSCSI Qualified Name (IQN). The IQN is in the format:
    iqn.yyyy-mm.reversed domain name:optional identifier text
    Where:
    • yyyy-mm represents the year and month the device was started (for example: 2010-05);
    • reversed domain name is the host's domain name in reverse (for example, server1.example.com in an IQN would be com.example.server1); and
    • optional identifier text is any text string, without spaces, that assists the administrator in identifying devices or hardware.
    This example creates iSCSI targets for the two types of images created in the optional steps on server1.example.com with an optional identifier trial. Add the following to the /etc/tgt/targets.conf file.
    <target iqn.2010-05.com.example.server1:trial>
       backing-store /dev/virtstore/virtimage1  #LUN 1
       backing-store /var/lib/tgtd/virtualization/virtimage2.img  #LUN 2
       write-cache off
    </target>
    Ensure that the /etc/tgt/targets.conf file sets the driver type to iSCSI with the default-driver iscsi line (the driver uses iSCSI by default).

    Important

    This example creates a globally accessible target without access control. Refer to the scsi-target-utils documentation for information on implementing secure access.
  6. Restart the tgtd service

    Restart the tgtd service to reload the configuration changes.
    # service tgtd restart
  7. iptables configuration

    Open port 3260 for iSCSI access with iptables.
    # iptables -I INPUT -p tcp -m tcp --dport 3260 -j ACCEPT
    # service iptables save
    # service iptables restart
  8. Verify the new targets

    View the new targets to ensure the setup was successful with the tgt-admin --show command.
    # tgt-admin --show
    Target 1: iqn.2010-05.com.example.server1:trial
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: None
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 20000 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: /dev/virtstore/virtimage1
        LUN: 2
            Type: disk
            SCSI ID: IET     00010002
            SCSI SN: beaf12
            Size: 10000 MB
            Online: Yes
            Removable media: No
            Backing store type: rdwr
            Backing store path: /var/lib/tgtd/virtualization/virtimage2.img
    Account information:
    ACL information:
        ALL

    Warning

    The ACL list is set to all. This allows all systems on the local network to access this device. It is recommended to set host access ACLs for production environments.
  9. Optional: Test discovery

    Test whether the new iSCSI device is discoverable.
    # iscsiadm --mode discovery --type sendtargets --portal server1.example.com
    127.0.0.1:3260,1 iqn.2010-05.com.example.server1:iscsirhel6guest
  10. Optional: Test attaching the device

    Attach the new device (iqn.2010-05.com.example.server1:iscsirhel6guest) to determine whether the device can be attached.
    # iscsiadm -d2 -m node --login
    scsiadm: Max file limits 1024 1024
    Logging in to [iface: default, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260]
    Login to [iface: default, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260] successful.
    Detach the device.
    # iscsiadm -d2 -m node --logout
    scsiadm: Max file limits 1024 1024
    Logging out of session [sid: 2, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260]
    Logout of [sid: 2, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260] successful.
An iSCSI device is now ready to use for virtualization.

11.1.5.2. Adding an iSCSI target to virt-manager

This procedure covers creating a storage pool with an iSCSI target in virt-manager.

Procedure 11.5. Adding an iSCSI device to virt-manager

  1. Open the host storage tab

    Open the Storage tab in the Host Details window.
    1. Open virt-manager.
    2. Select a host from the main virt-manager window. Click Edit menu and select Connection Details.
      Connection details

      Figure 11.19. Connection details


    3. Click on the Storage tab.
      Storage menu

      Figure 11.20. Storage menu


  2. Add a new pool (part 1)

    Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
    Add an iscsi storage pool name and type

    Figure 11.21. Add an iscsi storage pool name and type


    Choose a name for the storage pool, change the Type to iscsi, and press Forward to continue.
  3. Add a new pool (part 2)

    Enter the target path for the device, the host name of the target and the source path (the IQN). The Format option is not available as formatting is handled by the guests. It is not advised to edit the Target Path. The default target path value, /dev/disk/by-path/, adds the drive path to that directory. The target path should be the same on all hosts for migration.
    Enter the hostname or IP address of the iSCSI target. This example uses server1.example.com.
    Enter the source path, for the iSCSI target. This example uses demo-target.
    Check the IQN checkbox to enter the IQN. This example uses iqn.2010-05.com.example.server1:iscsirhel6guest.
    Create an iscsi storage pool

    Figure 11.22. Create an iscsi storage pool


    Press Finish to create the new storage pool.

11.1.5.3. Deleting a storage pool using virt-manager

This procedure demonstrates how to delete a storage pool.
  1. To avoid any issues with other guests using the same pool, it is best to stop the storage pool and release any resources in use by it. To do this, select the storage pool you want to stop and click the red X icon at the bottom of the Storage window.
    Stop Icon

    Figure 11.23. Stop Icon


  2. Delete the storage pool by clicking the Trash can icon. This icon is only enabled if you stop the storage pool first.

11.1.5.4. Creating an iSCSI-based storage pool with virsh

  1. Use pool-define-as to define the pool from the command line

    Storage pool definitions can be created with the virsh command line tool. Creating storage pools with virsh is useful for system administrators who use scripts to create multiple storage pools.
    The virsh pool-define-as command has several parameters which are accepted in the following format:
    virsh pool-define-as name type source-host source-path source-dev source-name target
    The parameters are explained as follows:
    type
    defines this pool as a particular type, iscsi for example
    name
    must be unique and sets the name for the storage pool
    source-host and source-dev
    the hostname of the iSCSI server and the iSCSI IQN, respectively
    source-path and source-name
    these parameters are not required for iSCSI-based pools; use a - character to leave a field blank
    target
    defines the location for mounting the iSCSI device on the host
    The example below creates the same iSCSI-based storage pool as the previous section; a sketch of the resulting pool XML follows this procedure.
    # virsh pool-define-as --name iscsirhel6guest --type iscsi \
         --source-host server1.example.com \
         --source-dev iqn.2010-05.com.example.server1:iscsirhel6guest \
         --target /dev/disk/by-path
    Pool iscsirhel6guest defined
  2. Verify the storage pool is listed

    Verify the storage pool object is created correctly and the state reports as inactive.
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    iscsirhel6guest      inactive   no
  3. Start the storage pool

    Use the virsh pool-start command to enable the storage pool, allowing it to be used for volumes and guests.
    # virsh pool-start iscsirhel6guest
    Pool iscsirhel6guest started
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    iscsirhel6guest      active     no
  4. Turn on autostart

    Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
    # virsh pool-autostart iscsirhel6guest
    Pool iscsirhel6guest marked as autostarted
    Verify that the iscsirhel6guest pool has autostart set:
    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes
    iscsirhel6guest      active     yes
  5. Verify the storage pool configuration

    Verify the storage pool was created correctly, the sizes are reported correctly, and the state is reported as running.
    # virsh pool-info iscsirhel6guest
    Name:           iscsirhel6guest
    UUID:           afcc5367-6770-e151-bcb3-847bc36c5e28
    State:          running
    Persistent:     unknown
    Autostart:      yes
    Capacity:       100.31 GB
    Allocation:     0.00
    Available:      100.31 GB
An iSCSI-based storage pool is now available.
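For reference, the pool created by the pool-define-as command above corresponds roughly to the following abridged XML sketch; the actual definition can be viewed with virsh pool-dumpxml iscsirhel6guest.
    <pool type='iscsi'>
      <name>iscsirhel6guest</name>
      <source>
        <host name='server1.example.com'/>
        <device path='iqn.2010-05.com.example.server1:iscsirhel6guest'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>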

11.1.5.5. Deleting a storage pool using virsh

The following demonstrates how to delete a storage pool using virsh:
  1. To avoid any issues with other guests using the same pool, it is best to stop the storage pool and release any resources in use by it.
    # virsh pool-destroy guest_images_disk
  2. Remove the storage pool's definition
    # virsh pool-undefine guest_images_disk

11.1.6. NFS-based storage pools

This procedure covers creating a storage pool with an NFS mount point in virt-manager.

11.1.6.1. Creating an NFS-based storage pool with virt-manager

  1. Open the host storage tab

    Open the Storage tab in the Host Details window.
    1. Open virt-manager.
    2. Select a host from the main virt-manager window. Click Edit menu and select Connection Details.
      Connection details

      Figure 11.24. Connection details


    3. Click on the Storage tab.
      Storage tab

      Figure 11.25. Storage tab


  2. Create a new pool (part 1)

    Press the + button (the add pool button). The Add a New Storage Pool wizard appears.
    Add an NFS name and type

    Figure 11.26. Add an NFS name and type


    Choose a name for the storage pool and press Forward to continue.
  3. Create a new pool (part 2)

    Enter the target path for the device, the hostname and the NFS share path. Set the Format option to NFS or auto (to detect the type). The target path must be identical on all hosts for migration.
    Enter the hostname or IP address of the NFS server. This example uses server1.example.com.
    Enter the NFS path. This example uses /nfstrial.
    Create an NFS storage pool

    Figure 11.27. Create an NFS storage pool


    Press Finish to create the new storage pool.
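Although this section uses virt-manager, an equivalent NFS-based (netfs) pool can also be defined with virsh. The following is only a sketch using the example values above; the target mount point /var/lib/libvirt/images/nfstrial is an assumed choice.
    # virsh pool-define-as nfstrial netfs --source-host server1.example.com \
         --source-path /nfstrial --target /var/lib/libvirt/images/nfstrial
    # virsh pool-start nfstrial
    # virsh pool-autostart nfstrial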

11.1.6.2. Deleting a storage pool using virt-manager

This procedure demonstrates how to delete a storage pool.
  1. To avoid any issues with other guests using the same pool, it is best to stop the storage pool and release any resources in use by it. To do this, select the storage pool you want to stop and click the red X icon at the bottom of the Storage window.
    Stop Icon

    Figure 11.28. Stop Icon


  2. Delete the storage pool by clicking the Trash can icon. This icon is only enabled if you stop the storage pool first.