
Virtualization Administration Guide

Chapter 14. Managing guests with virsh

virsh is a command line interface tool for managing guests and the hypervisor. The virsh command-line tool is built on the libvirt management API and operates as an alternative to the qemu-kvm command and the graphical virt-manager application. Unprivileged users can run virsh in read-only mode, while users with root access have full administration functionality. The virsh command is ideal for scripting virtualization administration.
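For example, the following commands are a minimal sketch of unprivileged, read-only and scripted use, assuming a local qemu:///system connection and a hypothetical guest named guest1:
$ virsh --connect qemu:///system --readonly list --all
$ virsh --connect qemu:///system domstate guest1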

14.1. virsh command quick reference

The following tables provide a quick reference for all virsh command line options.

Table 14.1. Guest management commands

Command - Description
help - Prints basic help information.
list - Lists all guests.
dumpxml - Outputs the XML configuration file for the guest.
create - Creates a guest from an XML configuration file and starts the new guest.
start - Starts an inactive guest.
destroy - Forces a guest to stop.
define - Creates a guest from an XML configuration file without starting the new guest.
domid - Displays the guest's ID.
domuuid - Displays the guest's UUID.
dominfo - Displays guest information.
domname - Displays the guest's name.
domstate - Displays the state of a guest.
quit - Quits the interactive terminal.
reboot - Reboots a guest.
restore - Restores a previously saved guest stored in a file.
resume - Resumes a paused guest.
save - Saves the present state of a guest to a file.
shutdown - Gracefully shuts down a guest.
suspend - Pauses a guest.
undefine - Removes the configuration (definition) of a guest.
migrate - Migrates a guest to another host.

The following virsh command options manage guest and hypervisor resources:

Table 14.2. Resource management options

Command - Description
setmem - Sets the allocated memory for a guest. Refer to the virsh manpage for more details.
setmaxmem - Sets the maximum memory limit for a guest. Refer to the virsh manpage for more details.
setvcpus - Changes the number of virtual CPUs assigned to a guest. Refer to the virsh manpage for more details.
vcpuinfo - Displays virtual CPU information about a guest.
vcpupin - Controls the virtual CPU affinity of a guest.
domblkstat - Displays block device statistics for a running guest.
domifstat - Displays network interface statistics for a running guest.
attach-device - Attaches a device to a guest, using a device definition in an XML file.
attach-disk - Attaches a new disk device to a guest.
attach-interface - Attaches a new network interface to a guest.
update-device - Updates the configuration of a device attached to a guest, using a device definition in an XML file (for example, to change the disk image in a guest's CD-ROM drive). See Section 14.2, "Attaching and updating a device with virsh" for more details.
detach-device - Detaches a device from a guest; takes the same kind of XML description as the attach-device command.
detach-disk - Detaches a disk device from a guest.
detach-interface - Detaches a network interface from a guest.

The following tables list the virsh commands for creating and managing storage pools and volumes.
For more information on using storage pools with virsh, refer to http://libvirt.org/formatstorage.html

Table 14.3. Storage Pool options

Command - Description
find-storage-pool-sources - Returns the XML definition for all storage pools of a given type that could be found.
find-storage-pool-sources host port - Returns data, as XML, on all storage pools of a given type that could be found. If the host and port are provided, this command can be run remotely.
pool-autostart - Sets the storage pool to start at boot time.
pool-build - Builds a defined pool. This command can format disks and create partitions.
pool-create - Creates and starts a storage pool from the provided XML storage pool definition file.
pool-create-as name - Creates and starts a storage pool from the provided parameters. If the --print-xml parameter is specified, the command prints the XML definition for the storage pool without creating the storage pool.
pool-define - Creates a storage pool from an XML definition file but does not start the new storage pool.
pool-define-as name - Creates, but does not start, a storage pool from the provided parameters. If the --print-xml parameter is specified, the command prints the XML definition for the storage pool without creating the storage pool.
pool-destroy - Stops a storage pool in libvirt. The raw data contained in the storage pool is not changed and the pool can be recovered with the pool-create command.
pool-delete - Destroys the storage resources used by a storage pool. This operation cannot be undone. The storage pool still exists after this command but all data is deleted.
pool-dumpxml - Prints the XML definition for a storage pool.
pool-edit - Opens the XML definition file for a storage pool in the user's default text editor.
pool-info - Returns information about a storage pool.
pool-list - Lists storage pools known to libvirt. By default, pool-list lists active pools. The --inactive parameter lists inactive pools and the --all parameter lists all pools.
pool-undefine - Deletes the definition for an inactive storage pool.
pool-uuid - Returns the UUID of the named pool.
pool-name - Prints a storage pool's name when provided the UUID of a storage pool.
pool-refresh - Refreshes the list of volumes contained in a storage pool.
pool-start - Starts a storage pool that is defined but inactive.
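As an illustration only, the following sequence defines, builds, starts, and autostarts a hypothetical directory-based pool named guest_images backed by /guest_images (the pool name and path are assumptions for this sketch):
# virsh pool-define-as guest_images dir - - - - "/guest_images"
# virsh pool-build guest_images
# virsh pool-start guest_images
# virsh pool-autostart guest_images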

Table 14.4. Volume options

Command - Description
vol-create - Create a volume from an XML file.
vol-create-from - Create a volume using another volume as input.
vol-create-as - Create a volume from a set of arguments.
vol-clone - Clone a volume.
vol-delete - Delete a volume.
vol-wipe - Wipe a volume.
vol-dumpxml - Show volume information in XML.
vol-info - Show storage volume information.
vol-list - List volumes.
vol-pool - Returns the storage pool for a given volume key or path.
vol-path - Returns the volume path for a given volume name or key.
vol-name - Returns the volume name for a given volume key or path.
vol-key - Returns the volume key for a given volume name or path.
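For example, a short sketch that creates an 8 GB volume in the hypothetical guest_images pool and then queries it (the pool and volume names are assumptions):
# virsh vol-create-as guest_images volume1 8G
# virsh vol-info volume1 --pool guest_images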

Table 14.5. Secret options

Command - Description
secret-define - Define or modify a secret from an XML file.
secret-dumpxml - Show secret attributes in XML.
secret-set-value - Set a secret value.
secret-get-value - Output a secret value.
secret-undefine - Undefine a secret.
secret-list - List secrets.

Table 14.6. Network filter options

Command - Description
nwfilter-define - Define or update a network filter from an XML file.
nwfilter-undefine - Undefine a network filter.
nwfilter-dumpxml - Show network filter information in XML.
nwfilter-list - List network filters.
nwfilter-edit - Edit XML configuration for a network filter.

This table contains virsh command options for snapshots:

Table 14.7. Snapshot options

Command - Description
snapshot-create - Create a snapshot.
snapshot-current - Get the current snapshot.
snapshot-delete - Delete a domain snapshot.
snapshot-dumpxml - Dump XML for a domain snapshot.
snapshot-list - List snapshots for a domain.
snapshot-revert - Revert a domain to a snapshot.

This table contains miscellaneous virsh commands:

Table 14.8. Miscellaneous options

Command - Description
version - Displays the version of virsh.
nodeinfo - Outputs information about the hypervisor.

14.2. Attaching and updating a device with virsh

For information on this procedure refer to Section 12.3.1, "Adding file based storage to a guest"

14.3. Connecting to the hypervisor

Connect to a hypervisor session with virsh:
# virsh connect {name}
Where {name} is the machine name (hostname) or the URI of the hypervisor (the output of the virsh uri command). To initiate a read-only connection, append --readonly to the command.
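For example, assuming the standard local KVM hypervisor URI, a read-only session could be opened as follows (shown only as an illustration):
# virsh connect qemu:///system --readonly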

14.4. Creating a virtual machine XML dump (configuration file)

Output a guest's XML configuration file with virsh:
# virsh dumpxml {guest-id, guestname or uuid}
This command outputs the guest's XML configuration file to standard out (stdout). You can save the data by piping the output to a file. An example of piping the output to a file called guest.xml:
# virsh dumpxml GuestID > guest.xml
This file, guest.xml, can be used to recreate the guest (refer to Editing a guest's configuration file). You can edit this XML configuration file to configure additional devices or to deploy additional guests.
An example of virsh dumpxml output:
# virsh dumpxml guest1-rhel6-64
<domain type='kvm'>
  <name>guest1-rhel6-64</name>
  <uuid>b8d7388a-bbf2-db3a-e962-b97ca6e514bd</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='threads'/>
      <source file='/home/guest-images/guest1-rhel6-64.img'/>
      <target dev='vda' bus='virtio'/>
      <shareable/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <interface type='bridge'>
      <mac address='52:54:00:b9:35:a9'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'/>
    <sound model='ich6'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>
Note that the <shareable/> flag is set. This indicates the device is expected to be shared between domains (assuming the hypervisor and OS support this), which means that caching should be deactivated for that device.
Creating a guest from a configuration file
Guests can be created from XML configuration files. You can copy existing XML from previously created guests or use the dumpxml option (refer to Section 14.4, "Creating a virtual machine XML dump (configuration file)"). To create a guest with virsh from an XML file:
# virsh create configuration_file.xml
Editing a guest's configuration file
Instead of using the dumpxml option (refer to Section 14.4, "Creating a virtual machine XML dump (configuration file)"), guests can be edited either while they run or while they are offline. The virsh edit command provides this functionality. For example, to edit the guest named softwaretesting:
# virsh edit softwaretesting
This opens a text editor. The editor used is defined by the $EDITOR shell variable (set to vi by default).

14.4.1. Adding multifunction PCI devices to KVM guests

This section demonstrates how to add multifunction PCI devices to KVM guests.
  1. Run the virsh edit [guestname] command to edit the XML configuration file for the guest.
  2. In the address type tag, add a multifunction='on' entry for function='0x0'.
    This enables the guest to use the multifunction PCI devices.
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel62-1.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </disk>
    For a PCI device with two functions, amend the XML configuration file to include a second device with the same slot number as the first device and a different function number, such as function='0x1'.
    For Example:
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel62-1.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel62-2.img'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </disk>
  3. lspci output from the KVM guest shows:
    $ lspci
    00:05.0 SCSI storage controller: Red Hat, Inc Virtio block device
    00:05.1 SCSI storage controller: Red Hat, Inc Virtio block device

14.5. Suspending, resuming, saving and restoring a guest

Suspending a guest
Suspend a guest with virsh:
# virsh suspend {domain-id, domain-name or domain-uuid}
When a guest is in a suspended state, it consumes system RAM but not processor resources. Disk and network I/O does not occur while the guest is suspended. This operation is immediate and the guest can be restarted with the resume (Resuming a guest) option.
Resuming a guest
Restore a suspended guest with virsh using the resume option:
# virsh resume {domain-id, domain-name or domain-uuid}
This operation is immediate and the guest parameters are preserved for suspend and resume operations.
Save a guest
Save the current state of a guest to a file using the virsh command:
# virsh save {domain-name, domain-id or domain-uuid} filename
This stops the guest you specify and saves the data to a file, which may take some time depending on the amount of memory in use by your guest. You can restore the state of the guest with the restore (Restore a guest) option. Save is similar to pause; instead of just pausing a guest, the present state of the guest is saved.
Restore a guest
Restore a guest previously saved with the virsh save command (Save a guest) using virsh:
# virsh restore filename
This restarts the saved guest, which may take some time. The guest's name and UUID are preserved, but the guest is allocated a new ID.
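For example, a minimal save and restore sketch using a hypothetical guest named guest1 and an arbitrary file name:
# virsh save guest1 guest1.save
# virsh restore guest1.save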

14.6. Shutting down, rebooting and force-shutdown of a guest

Shut down a guest
Shut down a guest using the virsh command:
# virsh shutdown {domain-id, domain-name or domain-uuid}
You can control the behavior of the guest as it shuts down by modifying the on_shutdown parameter in the guest's configuration file.
Rebooting a guest
Reboot a guest using the virsh command:
# virsh reboot {domain-id, domain-name or domain-uuid}
You can control the behavior of the rebooting guest by modifying the on_reboot element in the guest's configuration file.
Forcing a guest to stop
Force a guest to stop with the virsh command:
# virsh destroy {domain-id, domain-name or domain-uuid}
This command does an immediate ungraceful shutdown and stops the specified guest. Using virsh destroy can corrupt guest file systems. Use the destroy option only when the guest is unresponsive.
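For example, a cautious sketch for a hypothetical guest named guest1: attempt a graceful shutdown first, check the guest state, and only force it off if it remains unresponsive:
# virsh shutdown guest1
# virsh list --all
# virsh destroy guest1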

14.7. Retrieving guest information

Getting the domain ID of a guest
To get the domain ID of a guest:
# virsh domid {domain-name or domain-uuid}
Getting the domain name of a guest
To get the domain name of a guest:
# virsh domname {domain-id or domain-uuid}
Getting the UUID of a guest
To get the Universally Unique Identifier (UUID) for a guest:
# virsh domuuid {domain-id or domain-name}
An example of virsh domuuid output:
# virsh domuuid r5b2-mySQL01
4a4c59a7-ee3f-c781-96e4-288f2862f011
Displaying guest Information
Using virsh with the guest's domain ID, domain name or UUID you can display information on the specified guest:
# virsh dominfo {domain-id, domain-name or domain-uuid}
This is an example of virsh dominfo output:
# virsh dominfo vr-rhel6u1-x86_64-kvm
Id:             9
Name:           vr-rhel6u1-x86_64-kvm
UUID:           a03093a1-5da6-a2a2-3baf-a845db2f10b9
OS Type:        hvm
State:          running
CPU(s):         1
CPU time:       21.6s
Max memory:     2097152 kB
Used memory:    1025000 kB
Persistent:     yes
Autostart:      disable
Security model: selinux
Security DOI:   0
Security label: system_u:system_r:svirt_t:s0:c612,c921 (permissive)

14.8. Retrieving node information

Displaying node information
To display information about the node:
# virsh nodeinfo
An example of virsh nodeinfo output:
# virsh nodeinfo
CPU model:           x86_64
CPU(s):              8
CPU frequency:       2895 MHz
CPU socket(s):       2
Core(s) per socket:  2
Thread(s) per core:  2
NUMA cell(s):        1
Memory size:         1046528 kB
This returns basic information about the node, including the model number, number of CPUs, type of CPU, and size of the physical memory. The output corresponds to the virNodeInfo structure. Specifically, the "CPU socket(s)" field indicates the number of CPU sockets per NUMA cell.

14.9. Storage pool information

Editing a storage pool definition
The virsh pool-edit command takes the name or UUID of a storage pool and opens the XML definition file for the storage pool in the user's default text editor.
The virsh pool-edit command is equivalent to running the following commands:
# virsh pool-dumpxml pool > pool.xml
# vim pool.xml
# virsh pool-define pool.xml

Note

The default editor is defined by the $VISUAL or $EDITOR environment variables, and defaults to vi.

14.10. Displaying per-guest information

Displaying the guests
To display the guest list and their current states with virsh:
# virsh list
Other options available include:
the --inactive option to list inactive guests (that is, guests that have been defined but are not currently active), and
the --all option to list all guests. For example:
# virsh list --all
 Id Name                 State
----------------------------------
  0 Domain-0             running
  1 Domain202            paused
  2 Domain010            inactive
  3 Domain9600           crashed
There are seven states that can be visible using this command:
  • Running - The running state refers to guests which are currently active on a CPU.
  • Idle - The idle state indicates that the domain is idle, and may not be running or able to run. This can occur because the domain is waiting on I/O (a traditional wait state) or has gone to sleep because there was nothing else for it to do.
  • Paused - The paused state lists domains that are paused. This occurs if an administrator uses the pause button in virt-manager, xm pause or virsh suspend. When a guest is paused it consumes memory and other resources but it is ineligible for scheduling and CPU resources from the hypervisor.
  • Shutdown - The shutdown state is for guests in the process of shutting down. The guest is sent a shutdown signal and should be in the process of stopping its operations gracefully. This may not work with all guest operating systems; some operating systems do not respond to these signals.
  • Shut off - The shut off state indicates that the domain is not running. This can be caused when a domain completely shuts down or has not been started.
  • Crashed - The crashed state indicates that the domain has crashed and can only occur if the guest has been configured not to restart on crash.
  • Dying - Domains in the dying state are in the process of dying, which is a state where the domain has not completely shut down or crashed.
Displaying virtual CPU information
To display virtual CPU information from a guest with virsh:
# virsh vcpuinfo {domain-id, domain-name or domain-uuid}
An example of virsh vcpuinfo output:
# virsh vcpuinfo r5b2-mySQL01
VCPU:           0
CPU:            0
State:          blocked
CPU time:       0.0s
CPU Affinity:   yy
Configuring virtual CPU affinity
To configure the affinity of virtual CPUs with physical CPUs:
# virsh vcpupin domain-id vcpu cpulist
The domain-id parameter is the guest's ID number or name.
The vcpu parameter specifies which virtual CPU of the guest to pin. This parameter must be provided.
The cpulist parameter is a list of physical CPU identifier numbers separated by commas. The cpulist parameter determines which physical CPUs the VCPUs can run on.
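For example, to pin virtual CPU 0 of a hypothetical guest named guest1 to physical CPUs 0 and 1, and then verify the result (adjust the CPU list to your host topology):
# virsh vcpupin guest1 0 0,1
# virsh vcpuinfo guest1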
Configuring virtual CPU count
To modify the number of CPUs assigned to a guest with virsh:
# virsh setvcpus {domain-name, domain-id or domain-uuid} count
This count value cannot exceed the number of CPUs that were assigned to the guest when it was created.
Configuring memory allocation
To modify a guest's memory allocation with virsh:
# virsh setmem {domain-id or domain-name} count
# virsh setmem vr-rhel6u1-x86_64-kvm --kilobytes 1025000
You must specify the count in kilobytes. The new count value cannot exceed the amount you specified when you created the guest. Values lower than 64 MB are unlikely to work with most guest operating systems. A higher maximum memory value does not affect active guests. If the new value is lower than the available memory, the allocation will shrink, possibly causing the guest to crash.
This command has the following options (a brief usage sketch follows this list):
  • [--domain] <string> - the domain name, ID or UUID
  • [--size] <number> - the new memory size, as a scaled integer (default KiB)
  • --config - takes effect on the next boot
  • --live - controls the memory of the running domain
  • --current - controls the memory on the current domain
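A minimal sketch of how these options combine, using a hypothetical guest named guest1 and arbitrary sizes in KiB:
# virsh setmem guest1 1048576 --live
# virsh setmem guest1 --size 524288 --config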
Configuring memory tuning
The memtune element provides details regarding the memory tunable parameters for the domain. If it is omitted, the OS-provided defaults are used. For QEMU/KVM, the parameters are applied to the QEMU process as a whole. Thus, when counting them, one needs to add up guest RAM, guest video RAM, and some memory overhead of QEMU itself. The last piece is hard to determine, so one needs to guess and try. For each tunable, it is possible to designate which unit the number is in on input, using the same values as for <memory>. For backwards compatibility, output is always in KiB.
Here is an example XML with the memtune options used:
<domain>
  <memtune>
    <hard_limit unit='G'>1</hard_limit>
    <soft_limit unit='M'>128</soft_limit>
    <swap_hard_limit unit='G'>2</swap_hard_limit>
    <min_guarantee unit='bytes'>67108864</min_guarantee>
  </memtune>
  ...
</domain>
memtune has the following options:
  • hard_limit - The optional hard_limit element is the maximum memory the guest can use. The units for this value are kibibytes (i.e. blocks of 1024 bytes)
  • soft_limit - The optional soft_limit element is the memory limit to enforce during memory contention. The units for this value are kibibytes (i.e. blocks of 1024 bytes)
  • swap_hard_limit - The optional swap_hard_limit element is the maximum memory plus swap the guest can use. The units for this value are kibibytes (i.e. blocks of 1024 bytes). This has to be more than the hard_limit value provided.
  • min_guarantee - The optional min_guarantee element is the guaranteed minimum memory allocation for the guest. The units for this value are kibibytes (i.e. blocks of 1024 bytes)
# virsh memtune vr-rhel6u1-x86_64-kvm --hard-limit 512000
# virsh memtune vr-rhel6u1-x86_64-kvm
hard_limit     : 512000 kB
soft_limit     : unlimited
swap_hard_limit: unlimited
The hard_limit is 512000 kB; this is the maximum memory the guest domain can use.
Displaying guest block device information
Use virsh domblkstat to display block device statistics for a running guest.
# virsh domblkstat GuestName block-device
Displaying guest network device information
Use virsh domifstat to display network interface statistics for a running guest.
# virsh domifstat GuestName interface-device 
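For example, assuming a hypothetical guest named guest1 with a virtio disk vda and a network interface vnet0 (device names vary with the guest configuration):
# virsh domblkstat guest1 vda
# virsh domifstat guest1 vnet0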

14.11. Managing virtual networks

This section covers managing virtual networks with the virsh command. To list virtual networks:
# virsh net-list
This command generates output similar to:
# virsh net-list
Name                 State      Autostart
-----------------------------------------
default              active     yes
vnet1                active     yes
vnet2                active     yes
To view network information for a specific virtual network:
# virsh net-dumpxml NetworkName
This displays information about a specified virtual network in XML format:
# virsh net-dumpxml vnet1
<network>
  <name>vnet1</name>
  <uuid>98361b46-1581-acb7-1643-85a412626e70</uuid>
  <forward dev='eth0'/>
  <bridge name='vnet0' stp='on' forwardDelay='0' />
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.128' end='192.168.100.254' />
    </dhcp>
  </ip>
</network>
Other virsh commands used in managing virtual networks are listed below (a brief usage sketch follows this list):
  • virsh net-autostart network-name - Autostarts a network specified as network-name.
  • virsh net-create XMLfile - Generates and starts a new network using an existing XML file.
  • virsh net-define XMLfile - Generates a new network device from an existing XML file without starting it.
  • virsh net-destroy network-name - Destroys a network specified as network-name.
  • virsh net-name networkUUID - Converts a specified networkUUID to a network name.
  • virsh net-uuid network-name - Converts a specified network-name to a network UUID.
  • virsh net-start nameOfInactiveNetwork - Starts an inactive network.
  • virsh net-undefine nameOfInactiveNetwork - Removes the definition of an inactive network.
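The following is a brief sketch of a typical define, start, and autostart sequence, assuming a hypothetical file newnet.xml that defines a network named newnet:
# virsh net-define newnet.xml
# virsh net-start newnet
# virsh net-autostart newnet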

14.12. Migrating guests with virsh

Information on migrating guests with virsh is located in the section entitled Live KVM migration with virsh. Refer to Section 4.4, "Live KVM migration with virsh".

14.13. Guest CPU model configuration

14.13.1. Introduction

Every hypervisor has its own policy for what a guest will see for its CPUs by default. Whereas some hypervisors decide which CPU host features will be available for the guest, QEMU/KVM presents the guest with a generic model named qemu32 or qemu64. Other hypervisors perform more advanced filtering, classifying all physical CPUs into a handful of groups and having one baseline CPU model for each group that is presented to the guest. Such behavior enables the safe migration of guests between hosts, provided they all have physical CPUs that classify into the same group. libvirt does not typically enforce policy itself; rather, it provides the mechanism on which the higher layers define their own desired policy. Understanding how to obtain CPU model information and define a suitable guest CPU model is critical to ensure guest migration is successful between hosts. Note that a hypervisor can only emulate features that it is aware of, and features that were created after the hypervisor was released may not be emulated.

14.13.2. Learning about the host CPU model

The virsh capabilities command displays an XML document describing the capabilities of the hypervisor connection and host. The XML schema displayed has been extended to provide information about the host CPU model. One of the big challenges in describing a CPU model is that every architecture has a different approach to exposing their capabilities. On x86, the capabilities of a modern CPU are exposed via the CPUID instruction. Essentially this comes down to a set of 32-bit integers with each bit given a specific meaning. Fortunately AMD and Intel agree on common semantics for these bits. Other hypervisors expose the notion of CPUID masks directly in their guest configuration format. However, QEMU/KVM supports far more than just the x86 architecture, so CPUID is clearly not suitable as the canonical configuration format. QEMU ended up using a scheme which combines a CPU model name string, with a set of named flags. On x86, the CPU model maps to a baseline CPUID mask, and the flags can be used to then toggle bits in the mask on or off. libvirt decided to follow this lead and uses a combination of a model name and flags. Here is an example of what libvirt reports as the capabilities on a development workstation:
# virsh capabilities
<capabilities>
  <host>
    <uuid>c4a68e53-3f41-6d9e-baaf-d33a181ccfa0</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>core2duo</model>
      <topology sockets='1' cores='4' threads='1'/>
      <feature name='lahf_lm'/>
      <feature name='sse4.1'/>
      <feature name='xtpr'/>
      <feature name='cx16'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
    </cpu>
    ... snip ...
  </host>
</capabilities>
It is not practical to have a database listing all known CPU models, so libvirt has a small list of baseline CPU model names. It chooses the one that shares the greatest number of CPUID bits with the actual host CPU and then lists the remaining bits as named features. Notice that libvirt does not display which features the baseline CPU contains. This might seem like a flaw at first, but as will be explained in this section, it is not actually necessary to know this information.

14.13.3. Determining a compatible CPU model to suit a pool of hosts

Now that it is possible to find out what CPU capabilities a single host has, the next step is to determine what CPU capabilities are best to expose to the guest. If it is known that the guest will never need to be migrated to another host, the host CPU model can be passed straight through unmodified. A virtualized data center may have a set of configurations that can guarantee all servers will have 100% identical CPUs. Again the host CPU model can be passed straight through unmodified. The more common case, though, is where there is variation in CPUs between hosts. In this mixed CPU environment, the lowest common denominator CPU must be determined. This is not entirely straightforward, so libvirt provides an API for exactly this task. If libvirt is provided a list of XML documents, each describing a CPU model for a host, libvirt will internally convert these to CPUID masks, calculate their intersection, and convert the CPUID mask result back into an XML CPU description. Taking the CPU description from a server:
# virsh capabilities
<capabilities>
  <host>
    <uuid>8e8e4e67-9df4-9117-bf29-ffc31f6b6abb</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Westmere</model>
      <vendor>Intel</vendor>
      <topology sockets='2' cores='4' threads='2'/>
      <feature name='rdtscp'/>
      <feature name='pdpe1gb'/>
      <feature name='dca'/>
      <feature name='xtpr'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='monitor'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='vme'/>
    </cpu>
    ... snip ...
  </host>
</capabilities>
A quick check can be made to see whether this CPU description is compatible with the previous workstation CPU description, using the virsh cpu-compare command. To do so, the virsh capabilities > virsh-caps-workstation-full.xml command was executed on the workstation. The file virsh-caps-workstation-full.xml was edited and reduced to just the following content:
<cpu>
  <arch>x86_64</arch>
  <model>core2duo</model>
  <topology sockets='1' cores='4' threads='1'/>
  <feature name='lahf_lm'/>
  <feature name='sse4.1'/>
  <feature name='xtpr'/>
  <feature name='cx16'/>
  <feature name='tm2'/>
  <feature name='est'/>
  <feature name='vmx'/>
  <feature name='ds_cpl'/>
  <feature name='pbe'/>
  <feature name='tm'/>
  <feature name='ht'/>
  <feature name='ss'/>
  <feature name='acpi'/>
  <feature name='ds'/>
</cpu>
The reduced content was stored in a file named virsh-caps-workstation-cpu-only.xml and the virsh cpu-compare command can be executed using this file:
# virsh cpu-compare virsh-caps-workstation-cpu-only.xml
Host CPU is a superset of CPU described in virsh-caps-workstation-cpu-only.xml
As seen in this output, libvirt is correctly reporting that the CPUs are not strictly compatible, because there are several features in the server CPU that are missing in the workstation CPU. To be able to migrate between the workstation and the server, it will be necessary to mask out some features. To determine which ones, libvirt provides an API for this task, exposed via the virsh cpu-baseline command:
# virsh cpu-baseline virsh-cap-weybridge-strictly-cpu-only.xml
<cpu match='exact'>
  <model>Penryn</model>
  <feature policy='require' name='xtpr'/>
  <feature policy='require' name='tm2'/>
  <feature policy='require' name='est'/>
  <feature policy='require' name='vmx'/>
  <feature policy='require' name='ds_cpl'/>
  <feature policy='require' name='monitor'/>
  <feature policy='require' name='pbe'/>
  <feature policy='require' name='tm'/>
  <feature policy='require' name='ht'/>
  <feature policy='require' name='ss'/>
  <feature policy='require' name='acpi'/>
  <feature policy='require' name='ds'/>
  <feature policy='require' name='vme'/>
</cpu>
Similarly, if the two <cpu>...</cpu> elements are put into a single file named both-cpus.xml, the following command would generate the same result:
 # virsh cpu-baseline both-cpus.xml
In this case, libvirt has determined that in order to safely migrate a guest between the workstation and the server, it is necessary to mask out 3 features from the XML description for the server, and 3 features from the XML description for the workstation.

14.13.4. Configuring the guest CPU model

For simple defaults, the guest CPU configuration accepts the same basic XML representation as the host capabilities XML exposes. In other words, the XML from the cpu-baseline virsh command can now be copied directly into the guest XML at the top level under the <domain> element. As the observant reader will have noticed from the previous XML snippet, there are a few extra attributes available when describing a CPU in the guest XML. These can mostly be ignored, but for the curious here is a quick description of what they do. The top level <cpu> element has an attribute called match with possible values of:
  • match='minimum' - the host CPU must have at least the CPU features described in the guest XML. If the host has additional features beyond the guest configuration, these will also be exposed to the guest.
  • match='exact' - the host CPU must have at least the CPU features described in the guest XML. If the host has additional features beyond the guest configuration, these will be masked out from the guest.
  • match='strict' - the host CPU must have exactly the same CPU features described in the guest XML.
The next enhancement is that the <feature> elements can each have an extra 'policy' attribute with possible values of:
  • policy='force' - expose the feature to the guest even if the host does not have it. This is usually only useful in the case of software emulation.
  • policy='require' - expose the feature to the guest and fail if the host does not have it. This is the sensible default.
  • policy='optional' - expose the feature to the guest if the host happens to support it.
  • policy='disable' - if the host has this feature, then hide it from the guest.
  • policy='forbid' - if the host has this feature, then fail and refuse to start the guest.
The 'forbid' policy is for a niche scenario where an incorrectly functioning application will try to use a feature even if it is not in the CPUID mask, and you wish to prevent accidentally running the guest on a host with that feature. The 'optional' policy has special behavior with respect to migration. When the guest is initially started the flag is optional, but when the guest is live migrated, this policy turns into 'require', since you cannot have features disappearing across migration.
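Putting this together, a minimal sketch of a guest <cpu> element is shown below; the model and feature names are illustrative only and should be derived from the virsh cpu-baseline output for your own hosts:
<cpu match='exact'>
  <model>Penryn</model>
  <!-- require hardware virtualization support on the host -->
  <feature policy='require' name='vmx'/>
  <!-- use sse4.1 if the host happens to have it -->
  <feature policy='optional' name='sse4.1'/>
  <!-- hide lahf_lm from the guest even if the host has it -->
  <feature policy='disable' name='lahf_lm'/>
</cpu>
This element is placed at the top level of the guest XML, under the <domain> element, as described above.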

Chapter 15. Managing guests with the Virtual Machine Manager (virt-manager)

This section describes the Virtual Machine Manager (virt-manager) windows, dialog boxes, and various GUI controls.
virt-manager provides a graphical view of hypervisors and guests on your host system and on remote host systems. virt-manager can perform virtualization management tasks, including:
  • defining and creating guests,
  • assigning memory,
  • assigning virtual CPUs,
  • monitoring operational performance,
  • saving and restoring, pausing and resuming, and shutting down and starting guests,
  • links to the textual and graphical consoles, and
  • live and offline migrations.

15.1. Starting virt-manager

To start a virt-manager session, open the Applications menu, then the System Tools menu, and select Virtual Machine Manager (virt-manager).
The virt-manager main window appears.
Starting virt-manager

Figure 15.1. Starting virt-manager


Alternatively, virt-manager can be started remotely using ssh as demonstrated in the following command:
ssh -X host's address
[remotehost]# virt-manager
Using ssh to manage virtual machines and hosts is discussed further in Section 5.1, "Remote management with SSH".

15.2. The Virtual Machine Manager main window

This main window displays all the running guests and resources used by guests. Select a guest by double clicking the guest's name.
Virtual Machine Manager main window

Figure 15.2. Virtual Machine Manager main window


15.3. The virtual hardware details window

The virtual hardware details window displays information about the virtual hardware configured for the guest. Virtual hardware resources can be added, removed and modified in this window. To access the virtual hardware details window, click on the icon in the toolbar.
The virtual hardware details icon

Figure 15.3. The virtual hardware details icon


Clicking the icon displays the virtual hardware details window.
The virtual hardware details window

Figure 15.4. The virtual hardware details window


15.4. Virtual Machine graphical console

This window displays a guest's graphical console. Guests can use several different protocols to export their graphical framebuffers: virt-manager supports VNC and SPICE. If your virtual machine is set to require authentication, the Virtual Machine graphical console prompts you for a password before the display appears.
Graphical console window

Figure 15.5. Graphical console window


Note

VNC is considered insecure by many security experts; however, several changes have been made to enable the secure usage of VNC for virtualization on Red Hat Enterprise Linux. The guest machines only listen to the local host's loopback address (127.0.0.1). This ensures only those with shell privileges on the host can access virt-manager and the virtual machine through VNC. Although virt-manager can be configured to listen on other public network interfaces, and alternative methods can be configured, this is not recommended.
Remote administration can be performed by tunneling over SSH which encrypts the traffic. Although VNC can be configured to access remotely without tunneling over SSH, for security reasons, it is not recommended. To remotely administer the guest follow the instructions in: Chapter 5, Remote management of guests. TLS can provide enterprise level security for managing guest and host systems.
Your local desktop can intercept key combinations (for example, Ctrl+Alt+F1) to prevent them from being sent to the guest machine. You can use the Send key menu option to send these sequences. From the guest machine window, click the Send key menu and select the key sequence to send. In addition, from this menu you can also capture the screen output.
SPICE is an alternative to VNC available for Red Hat Enterprise Linux.

15.5. Adding a remote connection

This procedure covers how to set up a connection to a remote system using virt-manager.
  1. To create a new connection, open the File menu and select the Add Connection... menu item.
  2. The Add Connection wizard appears. Select the hypervisor. For Red Hat Enterprise Linux 6 systems select QEMU/KVM. Select Local for the local system or one of the remote connection options and click Connect. This example uses Remote tunnel over SSH, which works on default installations. For more information on configuring remote connections refer to Chapter 5, Remote management of guests.
    Add Connection

    Figure 15.6. Add Connection


  3. Enter the root password for the selected host when prompted.
A remote host is now connected and appears in the main virt-manager window.
Remote host in the main virt-manager window

Figure 15.7. Remote host in the main virt-manager window


15.6. Displaying guest details

You can use the Virtual Machine Monitor to view activity information for any virtual machines on your system.
To view a virtual system's details:
  1. In the Virtual Machine Manager main window, highlight the virtual machine that you want to view.
    Selecting a virtual machine to display

    Figure 15.8. Selecting a virtual machine to display


  2. From the Virtual Machine Manager Edit menu, select Virtual Machine Details.
    Displaying the virtual machine details

    Figure 15.9. Displaying the virtual machine details


    When the Virtual Machine details window opens, there may be a console displayed. Should this happen, click View and then select Details. The Overview window opens first by default. To go back to this window, select Overview from the navigation pane on the left hand side.
    The Overview view shows a summary of configuration details for the guest.
    Displaying guest details overview

    Figure 15.10. Displaying guest details overview


  3. Select Performance from the navigation pane on the left hand side.
    The Performance view shows a summary of guest performance, including CPU and Memory usage.
    Displaying guest performance details

    Figure 15.11. Displaying guest performance details


  4. Select Processor from the navigation pane on the left hand side. The Processor view allows you to view or change the current processor allocation.
    Processor allocation panel

    Figure 15.12. Processor allocation panel


  5. Select Memory from the navigation pane on the left hand side. The Memory view allows you to view or change the current memory allocation.
    Displaying memory allocation

    Figure 15.13. Displaying memory allocation


  6. Each virtual disk attached to the virtual machine is displayed in the navigation pane. Click on a virtual disk to modify or remove it.
    Displaying disk configuration

    Figure 15.14. Displaying disk configuration


  7. Each virtual network interface attached to the virtual machine is displayed in the navigation pane. Click on a virtual network interface to modify or remove it.
    Displaying network configuration

    Figure 15.15. Displaying network configuration


15.7. Performance monitoring

Performance monitoring preferences can be modified with virt-manager's preferences window.
To configure performance monitoring:
  1. From the Edit menu, select Preferences.
    Modifying guest preferences

    Figure 15.16. Modifying guest preferences


    The Preferences window appears.
  2. From the Stats tab, specify the update interval in seconds and the statistics polling options.
    Configuring performance monitoring

    Figure 15.17. Configuring performance monitoring


15.8. Displaying CPU usage for guests

To view the CPU usage for all guests on your system:
  1. From the View menu, select Graph, then the Guest CPU Usage check box.
    Enabling guest CPU usage statistics graphing

    Figure 15.18. Enabling guest CPU usage statistics graphing


  2. The Virtual Machine Manager shows a graph of CPU usage for all virtual machines on your system.
    Guest CPU usage graph

    Figure 15.19. Guest CPU usage graph


15.9. Displaying CPU usage for hosts

To view the CPU usage for all hosts on your system:
  1. From the View menu, select Graph, then the Host CPU Usage check box.
    Enabling host CPU usage statistics graphing

    Figure 15.20. Enabling host CPU usage statistics graphing


  2. The Virtual Machine Manager shows a graph of host CPU usage on your system.
    Host CPU usage graph

    Figure 15.21. Host CPU usage graph


15.10. Displaying Disk I/O

To view the disk I/O for all virtual machines on your system:
  1. Make sure that the Disk I/O statistics collection is enabled. To do this, from the Edit menu, select Preferences and click the Stats tab.
  2. Select the Disk I/O checkbox.
    Enabling Disk I/O

    Figure 15.22. Enabling Disk I/O


  3. To enable the Disk I/O display, from the View menu, select Graph, then the Disk I/O check box.
    Selecting Disk I/O

    Figure 15.23. Selecting Disk I/O


  4. The Virtual Machine Manager shows a graph of Disk I/O for all virtual machines on your system.
    Displaying Disk I/O

    Figure 15.24. Displaying Disk I/O


15.11. Displaying Network I/O

To view the network I/O for all virtual machines on your system:
  1. Make sure that the Network I/O statistics collection is enabled. To do this, from the Edit menu, select Preferences and click the Stats tab.
  2. Select the Network I/O checkbox.
    Enabling Network I/O

    Figure 15.25. Enabling Network I/O


  3. To display the Network I/O statistics, from the View menu, select Graph, then the Network I/O check box.
    Selecting Network I/O

    Figure 15.26. Selecting Network I/O


  4. The Virtual Machine Manager shows a graph of Network I/O for all virtual machines on your system.
    Displaying Network I/O

    Figure 15.27. Displaying Network I/O

