
Virtualization Administration Guide

Chapter 12.  Volumes

12.1. Creating volumes

This section shows how to create disk volumes inside a block-based storage pool. In the example below, the virsh vol-create-as command creates a storage volume of a specified size in GB within the guest_images_disk storage pool. The command is repeated once per required volume, so three volumes are created, as shown in the example.
# virsh vol-create-as guest_images_disk volume1 8G
Vol volume1 created

# virsh vol-create-as guest_images_disk volume2 8G
Vol volume2 created

# virsh vol-create-as guest_images_disk volume3 8G
Vol volume3 created

# virsh vol-list guest_images_disk
Name                 Path
-----------------------------------------
volume1              /dev/sdb1
volume2              /dev/sdb2
volume3              /dev/sdb3

# parted -s /dev/sdb print
Model: ATA ST3500418AS (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 2      17.4kB  8590MB  8590MB               primary
 3      8590MB  17.2GB  8590MB               primary
 1      21.5GB  30.1GB  8590MB               primary

12.2. Cloning volumes

The new volume will be allocated from storage in the same storage pool as the volume being cloned. The virsh vol-clone command requires the --pool argument, which specifies the name of the storage pool that contains the volume to be cloned. The rest of the command names the volume to be cloned (volume3) and the new volume (clone1). The virsh vol-list command lists the volumes that are present in the storage pool (guest_images_disk).
# virsh vol-clone --pool guest_images_disk volume3 clone1
Vol clone1 cloned from volume3

# virsh vol-list guest_images_disk
Name                 Path
-----------------------------------------
volume1              /dev/sdb1
volume2              /dev/sdb2
volume3              /dev/sdb3
clone1               /dev/sdb4

# parted -s /dev/sdb print
Model: ATA ST3500418AS (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    File system  Name     Flags
 1      4211MB  12.8GB  8595MB               primary
 2      12.8GB  21.4GB  8595MB               primary
 3      21.4GB  30.0GB  8595MB               primary
 4      30.0GB  38.6GB  8595MB               primary

12.3. Adding storage devices to guests

This section covers adding storage devices to a guest. Additional storage can be added whenever it is needed.

12.3.1. Adding file based storage to a guest

File-based storage is a collection of files that are stored on the host's file system and act as virtualized hard drives for guests. To add file-based storage, perform the following steps:

Procedure 12.1. Adding file-based storage

  1. Create a storage file or use an existing file (such as an ISO file). Note that both of the following commands create a 4GB file which can be used as additional storage for a guest:
    • Pre-allocated files are recommended for file-based storage images. Create a pre-allocated file using the following dd command as shown:
      # dd if=/dev/zero of=/var/lib/libvirt/images/FileName.img bs=1M count=4096
    • Alternatively, create a sparse file instead of a pre-allocated file. Sparse files are created much faster and can be used for testing, but are not recommended for production environments due to data integrity and performance issues.
      # dd if=/dev/zero of=/var/lib/libvirt/images/FileName.img bs=1M seek=4096 count=0
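The practical difference between the two dd invocations can be seen by comparing a file's apparent size with the disk space it actually consumes. A minimal sketch, using small 16 MB files in /tmp purely for illustration (the paths and size are not part of the procedure, and sparse allocation assumes the file system supports it):

```shell
# Pre-allocated: writes 16 MB of zeros, consuming real disk blocks.
dd if=/dev/zero of=/tmp/prealloc.img bs=1M count=16 2>/dev/null

# Sparse: seeks past 16 MB without writing, consuming almost no blocks.
dd if=/dev/zero of=/tmp/sparse.img bs=1M seek=16 count=0 2>/dev/null

# Both files report the same apparent size...
ls -l /tmp/prealloc.img /tmp/sparse.img

# ...but du shows that only the pre-allocated file occupies disk space.
du -k /tmp/prealloc.img /tmp/sparse.img
```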
  2. Create the additional storage by writing a <disk> element in a new file. In this example, this file will be known as NewStorage.xml.
    A <disk> element describes the source of the disk, and a device name for the virtual block device. The device name should be unique across all devices in the guest, and identifies the bus on which the guest will find the virtual block device. The following example defines a virtio block device whose source is a file-based storage container named FileName.img:
    <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='none'/>
       <source file='/var/lib/libvirt/images/FileName.img'/>
       <target dev='vdb'/>
    </disk>
    Device names can also start with "hd" or "sd", identifying respectively an IDE and a SCSI disk. The configuration file can also contain an <address> sub-element that specifies the position on the bus for the new device. In the case of virtio block devices, this should be a PCI address. Omitting the <address> sub-element lets libvirt locate and assign the next available PCI slot.
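As an illustration of the optional <address> sub-element, the virtio disk above could be pinned to a specific PCI slot as in the following sketch; the domain, bus, slot, and function values here are illustrative, not prescribed:

```xml
<disk type='file' device='disk'>
   <driver name='qemu' type='raw' cache='none'/>
   <source file='/var/lib/libvirt/images/FileName.img'/>
   <target dev='vdb' bus='virtio'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
```

If the <address> line is omitted, libvirt assigns the next available PCI slot, as described above.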
  3. Attach the CD-ROM as follows:
    <disk type='file' device='cdrom'>
       <driver name='qemu' type='raw' cache='none'/>
       <source file='/var/lib/libvirt/images/FileName.iso'/>
       <readonly/>
       <target dev='hdc'/>
    </disk>
  4. Add the device defined in NewStorage.xml to your guest (Guest1):
    # virsh attach-device --config Guest1 ~/NewStorage.xml

    Note

    This change will only apply after the guest has been destroyed and restarted. In addition, persistent devices can only be added to a persistent domain, that is, a domain whose configuration has been saved with the virsh define command.
    If the guest is running, and you want the new device to be added temporarily until the guest is destroyed, omit the --config option:
    # virsh attach-device Guest1 ~/NewStorage.xml

    Note

    The virsh command also provides an attach-disk command, which can set a limited number of parameters with a simpler syntax and without the need to create an XML file. The attach-disk command is used in a similar manner to the attach-device command mentioned previously, as shown:
    # virsh attach-disk Guest1 /var/lib/libvirt/images/FileName.iso vdb --cache none
    Note that the virsh attach-disk command also accepts the --config option.
  5. Start the guest machine (if it is currently not running):
    # virsh start Guest1

    Note

    The following steps are Linux guest specific. Other operating systems handle new storage devices in different ways. For other systems, refer to that operating system's documentation.
  6. Partitioning the disk drive

    The guest now has a hard disk device called /dev/vdb. If required, partition this disk drive and format the partitions. If you do not see the device that you added, then it indicates that there is an issue with the disk hotplug in your guest's operating system.
    1. Start fdisk for the new device:
      # fdisk /dev/vdb
      Command (m for help):
    2. Type n for a new partition.
    3. The following appears:
      Command action
         e   extended
         p   primary partition (1-4)
      Type p for a primary partition.
    4. Choose an available partition number. In this example, the first partition is chosen by entering 1.
      Partition number (1-4): 1
    5. Enter the default first cylinder by pressing Enter.
      First cylinder (1-400, default 1):
    6. Select the size of the partition. In this example the entire disk is allocated by pressing Enter.
      Last cylinder or +size or +sizeM or +sizeK (2-400, default 400):
    7. Enter t to configure the partition type.
      Command (m for help): t
    8. Select the partition you created in the previous steps. In this example, the partition number is 1 as there was only one partition created and fdisk automatically selected partition 1.
      Partition number (1-4): 1
    9. Enter 83 for a Linux partition.
      Hex code (type L to list codes): 83
    10. Enter w to write changes and quit.
      Command (m for help): w
    11. Format the new partition with the ext3 file system.
      # mke2fs -j /dev/vdb1
  7. Create a mount directory, and mount the disk in the guest. In this example, the directory is /myfiles.
    # mkdir /myfiles
    # mount /dev/vdb1 /myfiles
    The guest now has an additional virtualized file-based storage device. Note, however, that this storage will not be mounted persistently across reboots unless it is defined in the guest's /etc/fstab file:
    /dev/vdb1 /myfiles ext3 defaults 0 0
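Because virtio device names such as /dev/vdb1 can change as disks are added or removed, the partition can instead be identified by UUID in /etc/fstab. A sketch of such an entry, where the UUID value (as reported by running blkid /dev/vdb1 in the guest) is an illustrative placeholder:

```
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /myfiles  ext3  defaults  0 0
```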

12.3.2. Adding hard drives and other block devices to a guest

System administrators use additional hard drives to provide increased storage space for a guest, or to separate system data from user data.

Procedure 12.2. Adding physical block devices to guests

  1. This procedure describes how to add a hard drive on the host to a guest. It applies to all physical block devices, including CD-ROM, DVD and floppy devices.
    Physically attach the hard disk device to the host. Configure the host if the drive is not accessible by default.
  2. Do one of the following:
    1. Create the additional storage by writing a <disk> element in a new file. In this example, this file will be known as NewStorage.xml. The following example is a configuration file section which contains an additional device-based storage container for the host device /dev/sr0:
      <disk type='block' device='disk'>
        <driver name='qemu' type='raw' cache='none'/>
        <source dev='/dev/sr0'/>
        <target dev='vdc' bus='virtio'/>
      </disk>
    2. Follow the instructions in the previous section to attach the device to the guest. Alternatively, you can use the virsh attach-disk command, as shown:
      # virsh attach-disk Guest1 /dev/sr0 vdc
      Note that the following options are available:
      • The virsh attach-disk command also accepts the --config, --type, and --mode options, as shown:
        # virsh attach-disk Guest1 /dev/sr0 vdc --config --type cdrom --mode readonly
      • Additionally, --type also accepts disk in cases where the device is a hard drive.
  3. The guest now has a new hard disk device called /dev/vdc on Linux (or something similar, depending on what the guest OS chooses) or D: drive (for example) on Windows. You can now initialize the disk from the guest, following the standard procedures for the guest's operating system. Refer to Procedure 12.1, "Adding file-based storage" and Step 6 for an example.

    Warning

    The host should not use filesystem labels to identify file systems in the fstab file, the initrd file or on the kernel command line. Doing so presents a security risk if less privileged users, such as guests, have write access to whole partitions or LVM volumes, because a guest could potentially write a filesystem label belonging to the host onto its own block device storage. Upon reboot of the host, the host could then mistakenly use the guest's disk as a system disk, which would compromise the host system.
    It is preferable to use the UUID of a device to identify it in the fstab file, the initrd file or on the kernel command line. While using UUIDs is still not completely secure on certain file systems, a similar compromise with UUID is significantly less feasible.

    Important

    Guests should not be given write access to whole disks or block devices (for example, /dev/sdb). Guests with access to whole block devices may be able to modify volume labels, which can be used to compromise the host system. Use partitions (for example, /dev/sdb1) or LVM volumes to prevent this issue.

12.3.3. Managing storage controllers in a guest

Starting from Red Hat Enterprise Linux 6.3, SCSI devices are also supported inside guests.
Unlike virtio disks, SCSI devices require the presence of a controller in the guest.
This section details the necessary steps to create a virtual SCSI controller (also known as "Host Bus Adapter", or HBA), and to add SCSI storage to the guest.

Procedure 12.3. Creating a virtual SCSI controller

  1. Display the configuration of the guest (Guest1) and look for a pre-existing SCSI controller:
    # virsh dumpxml Guest1 | grep 'controller.*scsi'
    If a controller is present, the command will output one or more lines similar to the following:
    <controller type='scsi' model='virtio-scsi' index='0'/>
  2. If the previous step did not show a controller, create the description for one in a new file and add it to the virtual machine, using the following steps:
    1. Create the controller by writing a <controller> element in a new file and save the file with an XML extension, for example NewHBA.xml.
      <controller type='scsi' model='virtio-scsi'/>
    2. Associate the device in the NewHBA.xml you just created with your guest:
      # virsh attach-device --config Guest1 ~/NewHBA.xml
      In this example the --config option behaves the same as it does for disks. Refer to Procedure 12.2, "Adding physical block devices to guests" for more information.
  3. Add a new SCSI disk or CD-ROM. The new disk can be added using the methods in Section 12.3.1, "Adding file based storage to a guest" and Section 12.3.2, "Adding hard drives and other block devices to a guest". In order to create a SCSI disk, specify a target device name that starts with sd.
    # virsh attach-disk Guest1 /var/lib/libvirt/images/FileName.iso sdb --cache none
    Depending on the version of the driver in the guest, the new disk may not be detected immediately by a running guest. Follow the steps in the Red Hat Enterprise Linux Storage Administration Guide.

12.4. Deleting and removing volumes

This section shows how to delete a disk volume from a block-based storage pool using the virsh vol-delete command. In this example, the volume is volume1 and the storage pool is guest_images.
# virsh vol-delete --pool guest_images volume1
Vol volume1 deleted

Chapter 13. The Virtual Host Metrics Daemon (vhostmd)

vhostmd (the Virtual Host Metrics Daemon) allows virtual machines to see limited information about the host they are running on.
In the host, a daemon (vhostmd) runs which writes metrics periodically into a disk image. This disk image is exported read-only to guests. Guests can read the disk image to see metrics. Simple synchronization stops guests from seeing out of date or corrupt metrics.
The system administrator chooses which metrics the guests can see, and also which guests get to see the metrics at all.

13.1. Installing vhostmd on the host

The vhostmd package is available from RHN and is located in the Downloads area. It must be installed on each host where guests are required to get host metrics.

13.2. Configuration of vhostmd

After installing the package, but before starting the daemon, it is a good idea to understand exactly what metrics vhostmd will expose to guests, and how this happens.
The metrics are controlled by the file /etc/vhostmd/vhostmd.conf.
There are two parts of particular importance in this XML file. Firstly <update_period>60</update_period> controls how often the metrics are updated (in seconds). Since updating metrics can be an expensive operation, you can reduce the load on the host by increasing this period. Secondly, each <metric>...</metric> section controls what information is exposed by vhostmd. For example:
<metric type="string" context="host">
   <name>HostName</name>
   <action>hostname</action>
</metric>
means that the hostname of the host is exposed to selected guests. To disable particular metrics, you can comment out <metric> sections by putting <!-- ... --> around them. Note that disabling metrics may cause problems for guest software such as SAP that may rely on these metrics being available.
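Following that note, a disabled HostName metric would look like this in /etc/vhostmd/vhostmd.conf:

```xml
<!--
<metric type="string" context="host">
   <name>HostName</name>
   <action>hostname</action>
</metric>
-->
```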
When the daemon (also called vhostmd) is running, it writes the metrics into a temporary file called /dev/shm/vhostmd0. This file contains a small binary header followed by the selected metrics encoded as XML. In practice you can display this file with a tool like less. The file is updated every 60 seconds (or however often <update_period> was set).
The vhostmd(8) man page contains a detailed description of the configuration file, as well as examples of the XML output in /dev/shm/vhostmd0. To read this, do:
# man vhostmd
In addition, there is a README file which covers some of the same information:
less /usr/share/doc/vhostmd-*/README

13.3. Starting and stopping the daemon

The daemon (vhostmd) will not be started automatically. To enable it to be started at boot, run:
# /sbin/chkconfig vhostmd on
To start the daemon running, do:
# /sbin/service vhostmd start
To stop the daemon running, do:
# /sbin/service vhostmd stop
To disable the daemon from being started at boot, do:
# /sbin/chkconfig vhostmd off

13.4. Verifying that vhostmd is working from the host

A short time after the daemon has started, you should see a metrics disk appearing. Do:
# ls -l /dev/shm
# less /dev/shm/vhostmd0
This file has a short binary header, followed by XML. The less program identifies it as binary and asks:
"/dev/shm/vhostmd0" may be a binary file.  See it anyway?
Press the y key to indicate that you wish to view it.
You should see the binary header appearing as garbled characters, followed by the <metrics> XML, and after that, many zero bytes (displayed as ^@^@^@...).

13.5. Configuring guests to see the metrics

Although metrics are written to /dev/shm/vhostmd0, they are not made available to guests by default. The administrator must choose which guests get to see metrics, and must manually change the configuration of selected guests to see metrics.
The guest must be shut down before the disk is attached. (Hot attaching the metrics disk is also possible, but only for a limited number of guest configurations. In particular it is NOT possible to hot-add the metrics disk to guests that do not have virtio / PV drivers installed. See the vhostmd README file for more information).

Important

It is extremely important that the metrics disk is added in readonly mode to all guests. If this is not done, then it would be possible for a guest to modify the metrics and possibly subvert other guests that are reading it.

Procedure 13.1. Configuring KVM guests

  1. Shut down the guest.
  2. Do:
    # virsh edit GuestName
    and add the following section into <devices>:
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/dev/shm/vhostmd0'/>
      <target dev='vdd' bus='virtio'/>
      <readonly/>
    </disk>
  3. Reboot the guest.

Procedure 13.2. Configuring Xen guests

  1. Shut down the guest.
  2. Do:
    # virsh edit GuestName
    and add the following section into <devices>:
    <disk type='file' device='disk'>
      <source file='/dev/shm/vhostmd0'/>
      <target dev='hdd' bus='ide'/>
      <readonly/>
    </disk>
  3. Reboot the guest.

13.6. Using vm-dump-metrics in Red Hat Enterprise Linux guests to verify operation

Optionally, the vm-dump-metrics package from the RHN Downloads area may be installed in Red Hat Enterprise Linux guests. This package provides a simple command line tool (also called vm-dump-metrics) which allows host metrics to be displayed in the guest.
This is useful for verifying correct operation of vhostmd from a guest.
In the guest, run the following command as root:
# vm-dump-metrics
If everything is working, this should print out a long XML document starting with <metrics>.
If this does not work, then verify that the metrics disk has appeared in the guest. It should appear as /dev/vd* (for example, /dev/vdb, /dev/vdd).
On the host, verify that the libvirt configuration changes have been made by using the command:
# virsh dumpxml GuestName
Verify that vhostmd is running on the host and the /dev/shm/vhostmd0 file exists.