Virtualization Administration Guide

Chapter 14. Managing guests with virsh

virsh is a command-line interface tool for managing guests and the hypervisor. The virsh tool is built on the libvirt management API and operates as an alternative to the qemu-kvm command and the graphical virt-manager application. The virsh command can be used in read-only mode by unprivileged users, or with full administration functionality by users with root access. virsh is ideal for scripting virtualization administration.
14.1. virsh command quick reference

The following tables provide a quick reference for all virsh command line options.

Table 14.1. Guest management commands

| Command | Description |
|---|---|
| help | Prints basic help information. |
| list | Lists all guests. |
| dumpxml | Outputs the XML configuration file for the guest. |
| create | Creates a guest from an XML configuration file and starts the new guest. |
| start | Starts an inactive guest. |
| destroy | Forces a guest to stop. |
| define | Creates a guest from an XML configuration file without starting the new guest. |
| domid | Displays the guest's ID. |
| domuuid | Displays the guest's UUID. |
| dominfo | Displays guest information. |
| domname | Displays the guest's name. |
| domstate | Displays the state of a guest. |
| quit | Quits the interactive terminal. |
| reboot | Reboots a guest. |
| restore | Restores a previously saved guest stored in a file. |
| resume | Resumes a paused guest. |
| save | Saves the present state of a guest to a file. |
| shutdown | Gracefully shuts down a guest. |
| suspend | Pauses a guest. |
| undefine | Deletes all files associated with a guest. |
| migrate | Migrates a guest to another host. |
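Several of the commands above are typically used together. The following dry-run sketch prints, without executing, the virsh invocations for a define/start/shutdown/undefine lifecycle; the guest name demo and the file demo.xml are hypothetical placeholders, not taken from this guide:

```shell
# Print (do not execute) the virsh commands for a typical guest
# lifecycle. The guest name "demo" and file "demo.xml" are
# hypothetical placeholders.
lifecycle() {
  guest=$1
  echo "virsh define ${guest}.xml"    # register the guest without starting it
  echo "virsh start ${guest}"         # boot the inactive guest
  echo "virsh shutdown ${guest}"      # request a graceful shutdown
  echo "virsh undefine ${guest}"      # remove the guest definition
}
lifecycle demo
```

Removing the echo wrappers would run the commands for real against a libvirt host.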
The following virsh command options manage guest and hypervisor resources:

Table 14.2. Resource management options

| Command | Description |
|---|---|
| setmem | Sets the allocated memory for a guest. Refer to the virsh manpage for more details. |
| setmaxmem | Sets the maximum memory limit for a guest. Refer to the virsh manpage for more details. |
| setvcpus | Changes the number of virtual CPUs assigned to a guest. Refer to the virsh manpage for more details. |
| vcpuinfo | Displays virtual CPU information about a guest. |
| vcpupin | Controls the virtual CPU affinity of a guest. |
| domblkstat | Displays block device statistics for a running guest. |
| domifstat | Displays network interface statistics for a running guest. |
| attach-device | Attaches a device to a guest, using a device definition in an XML file. |
| attach-disk | Attaches a new disk device to a guest. |
| attach-interface | Attaches a new network interface to a guest. |
| update-device | Updates the configuration of a device attached to a guest, using a device definition in an XML file; for example, it can change the disk image attached to a guest's CD-ROM drive. See Section 14.2, "Attaching and updating a device with virsh" for more details. |
| detach-device | Detaches a device from a guest; takes the same kind of XML descriptions as the attach-device command. |
| detach-disk | Detaches a disk device from a guest. |
| detach-interface | Detaches a network interface from a guest. |
The following virsh commands manage and create storage pools and volumes.

Table 14.3. Storage Pool options

| Command | Description |
|---|---|
| find-storage-pool-sources | Returns the XML definition for all storage pools of a given type that could be found. |
| find-storage-pool-sources host port | Returns data on all storage pools of a given type that could be found, as XML. If the host and port are provided, this command can be run remotely. |
| pool-autostart | Sets the storage pool to start at boot time. |
| pool-build | Builds a defined pool. This command can format disks and create partitions. |
| pool-create | Creates and starts a storage pool from the provided XML storage pool definition file. |
| pool-create-as name | Creates and starts a storage pool from the provided parameters. If the --print-xml parameter is specified, the command prints the XML definition for the storage pool without creating the storage pool. |
| pool-define | Creates a storage pool from an XML definition file but does not start the new storage pool. |
| pool-define-as name | Creates, but does not start, a storage pool from the provided parameters. If the --print-xml parameter is specified, the command prints the XML definition for the storage pool without creating the storage pool. |
| pool-destroy | Permanently destroys a storage pool in libvirt. The raw data contained in the storage pool is not changed and can be recovered with the pool-create command. |
| pool-delete | Destroys the storage resources used by a storage pool. This operation cannot be recovered. The storage pool still exists after this command but all data is deleted. |
| pool-dumpxml | Prints the XML definition for a storage pool. |
| pool-edit | Opens the XML definition file for a storage pool in the user's default text editor. |
| pool-info | Returns information about a storage pool. |
| pool-list | Lists storage pools known to libvirt. By default, pool-list lists pools in use by active guests. The --inactive parameter lists inactive pools and the --all parameter lists all pools. |
| pool-undefine | Deletes the definition for an inactive storage pool. |
| pool-uuid | Returns the UUID of the named pool. |
| pool-name | Prints a storage pool's name when provided the UUID of a storage pool. |
| pool-refresh | Refreshes the list of volumes contained in a storage pool. |
| pool-start | Starts a storage pool that is defined but inactive. |
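As a hedged illustration of the XML that a command such as pool-define consumes, here is a minimal directory-backed pool definition; the pool name guest_images and the path /var/lib/libvirt/images are assumptions for the example, not values from this guide:

```xml
<!-- Hypothetical directory-backed storage pool definition -->
<pool type='dir'>
  <name>guest_images</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>
```

Saved as, for example, guest_images.xml, it could be registered with virsh pool-define guest_images.xml and then started with virsh pool-start guest_images.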
Table 14.4. Volume options

| Command | Description |
|---|---|
| vol-create | Create a volume from an XML file. |
| vol-create-from | Create a volume using another volume as input. |
| vol-create-as | Create a volume from a set of arguments. |
| vol-clone | Clone a volume. |
| vol-delete | Delete a volume. |
| vol-wipe | Wipe a volume. |
| vol-dumpxml | Show volume information in XML. |
| vol-info | Show storage volume information. |
| vol-list | List volumes. |
| vol-pool | Returns the storage pool for a given volume key or path. |
| vol-path | Returns the volume path for a given volume name or key. |
| vol-name | Returns the volume name for a given volume key or path. |
| vol-key | Returns the volume key for a given volume name or path. |
Table 14.5. Secret options

| Command | Description |
|---|---|
| secret-define | Define or modify a secret from an XML file. |
| secret-dumpxml | Show secret attributes in XML. |
| secret-set-value | Set a secret value. |
| secret-get-value | Output a secret value. |
| secret-undefine | Undefine a secret. |
| secret-list | List secrets. |
Table 14.6. Network filter options

| Command | Description |
|---|---|
| nwfilter-define | Define or update a network filter from an XML file. |
| nwfilter-undefine | Undefine a network filter. |
| nwfilter-dumpxml | Show network filter information in XML. |
| nwfilter-list | List network filters. |
| nwfilter-edit | Edit XML configuration for a network filter. |
This table contains virsh command options for snapshots:

Table 14.7. Snapshot options

| Command | Description |
|---|---|
| snapshot-create | Create a snapshot. |
| snapshot-current | Get the current snapshot. |
| snapshot-delete | Delete a domain snapshot. |
| snapshot-dumpxml | Dump XML for a domain snapshot. |
| snapshot-list | List snapshots for a domain. |
| snapshot-revert | Revert a domain to a snapshot. |
This table contains miscellaneous virsh commands:

Table 14.8. Miscellaneous options

| Command | Description |
|---|---|
| version | Displays the version of virsh. |
| nodeinfo | Outputs information about the hypervisor. |
14.2. Attaching and updating a device with virsh

14.3. Connecting to the hypervisor

Connect to a hypervisor session with virsh:

# virsh connect {name}

Where {name} is the machine name (hostname) or URL (the output of the virsh uri command) of the hypervisor. To initiate a read-only connection, append the above command with --readonly.

14.4. Creating a virtual machine XML dump (configuration file)

Output a guest's XML configuration file with virsh:

# virsh dumpxml {guest-id, guestname or uuid}

This command outputs the guest's XML configuration file to standard out (stdout). You can save the data by piping the output to a file. An example of piping the output to a file called guest.xml:

# virsh dumpxml GuestID > guest.xml

This file guest.xml can recreate the guest (refer to Editing a guest's configuration file). You can edit this XML configuration file to configure additional devices or to deploy additional guests. An example of virsh dumpxml output:

# virsh dumpxml guest1-rhel6-64
<domain type='kvm'>
  <name>guest1-rhel6-64</name>
  <uuid>b8d7388a-bbf2-db3a-e962-b97ca6e514bd</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='threads'/>
      <source file='/home/guest-images/guest1-rhel6-64.img'/>
      <target dev='vda' bus='virtio'/>
      <shareable/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <interface type='bridge'>
      <mac address='52:54:00:b9:35:a9'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'/>
    <sound model='ich6'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>

Note that the <shareable/> flag is set. This indicates the device is expected to be shared between domains (assuming the hypervisor and OS support this), which means that caching should be deactivated for that device.

# virsh create configuration_file.xml

# virsh edit softwaretesting

This opens a text editor. The default text editor is the $EDITOR shell parameter (set to vi by default).

14.4.1. Adding multifunction PCI devices to KVM guests

This section demonstrates how to add multifunction PCI devices to KVM guests. Run the virsh edit [guestname] command to edit the XML configuration file for the guest. In the address type tag, add a multifunction='on' entry for function='0x0'. This enables the guest to use the multifunction PCI devices.

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/rhel62-1.img'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
</disk>

For a PCI device with two functions, amend the XML configuration file to include a second device with the same slot number as the first device and a different function number, such as function='0x1'.
For example:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/rhel62-1.img'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/rhel62-2.img'/>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
</disk>

lspci output from the KVM guest shows:

$ lspci
00:05.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:05.1 SCSI storage controller: Red Hat, Inc Virtio block device
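The XML produced by virsh dumpxml is plain text, so it can be post-processed with standard shell tools. A hedged sketch, in which the here-document stands in for a trimmed copy of the dump above (in practice you would pipe virsh dumpxml directly):

```shell
# Extract the guest name and UUID from a saved dumpxml file using sed.
# The here-document below is a stand-in for `virsh dumpxml guest1-rhel6-64`.
cat > /tmp/guest.xml <<'EOF'
<domain type='kvm'>
  <name>guest1-rhel6-64</name>
  <uuid>b8d7388a-bbf2-db3a-e962-b97ca6e514bd</uuid>
</domain>
EOF

name=$(sed -n 's|.*<name>\(.*\)</name>.*|\1|p' /tmp/guest.xml)
uuid=$(sed -n 's|.*<uuid>\(.*\)</uuid>.*|\1|p' /tmp/guest.xml)
echo "name=$name uuid=$uuid"
```

For anything beyond quick one-off extraction, a real XML parser (for example xmllint or a scripting language's XML library) is more robust than sed.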
14.5. Suspending, resuming, saving and restoring a guest

# virsh suspend {domain-id, domain-name or domain-uuid}

When a guest is in a suspended state, it consumes system RAM but not processor resources. Disk and network I/O does not occur while the guest is suspended. This operation is immediate and the guest can be restarted with the resume (Resuming a guest) option.

# virsh resume {domain-id, domain-name or domain-uuid}

This operation is immediate and the guest parameters are preserved for suspend and resume operations.

# virsh save {domain-name, domain-id or domain-uuid} filename

This stops the guest you specify and saves the data to a file, which may take some time given the amount of memory in use by your guest. You can restore the state of the guest with the restore (Restoring a guest) option. Save is similar to pause; instead of just pausing a guest, the present state of the guest is saved.

# virsh restore filename

This restarts the saved guest, which may take some time. The guest's name and UUID are preserved, but the guest is allocated a new id.

14.6. Shutting down, rebooting and force-shutdown of a guest

# virsh shutdown {domain-id, domain-name or domain-uuid}

You can control the behavior of the guest shutting down by modifying the on_shutdown parameter in the guest's configuration file.

# virsh reboot {domain-id, domain-name or domain-uuid}

You can control the behavior of the rebooting guest by modifying the on_reboot element in the guest's configuration file.

# virsh destroy {domain-id, domain-name or domain-uuid}

This command does an immediate ungraceful shutdown and stops the specified guest. Using virsh destroy can corrupt guest file systems. Use the destroy option only when the guest is unresponsive.

14.7.
Retrieving guest information

# virsh domid {domain-name or domain-uuid}
# virsh domname {domain-id or domain-uuid}
# virsh domuuid {domain-id or domain-name}

An example of virsh domuuid output:

# virsh domuuid r5b2-mySQL01
4a4c59a7-ee3f-c781-96e4-288f2862f011

# virsh dominfo {domain-id, domain-name or domain-uuid}

This is an example of virsh dominfo output:

# virsh dominfo vr-rhel6u1-x86_64-kvm
Id:             9
Name:           vr-rhel6u1-x86_64-kvm
UUID:           a03093a1-5da6-a2a2-3baf-a845db2f10b9
OS Type:        hvm
State:          running
CPU(s):         1
CPU time:       21.6s
Max memory:     2097152 kB
Used memory:    1025000 kB
Persistent:     yes
Autostart:      disable
Security model: selinux
Security DOI:   0
Security label: system_u:system_r:svirt_t:s0:c612,c921 (permissive)

14.8. Retrieving node information

# virsh nodeinfo

An example of virsh nodeinfo output:

# virsh nodeinfo
CPU model:           x86_64
CPU(s):              8
CPU frequency:       2895 MHz
CPU socket(s):       2
Core(s) per socket:  2
Threads per core:    2
NUMA cell(s):        1
Memory size:         1046528 kB

This returns basic information about the node, including the model number, number of CPUs, type of CPU, and size of the physical memory. The output corresponds to the virNodeInfo structure. Specifically, the "CPU socket(s)" field indicates the number of CPU sockets per NUMA cell.

14.9. Storage pool information

The virsh pool-edit command is equivalent to running the following commands:

# virsh pool-dumpxml pool > pool.xml
# vim pool.xml
# virsh pool-define pool.xml

The default editor is defined by the $VISUAL or $EDITOR environment variables; the default is vi.

14.10. Displaying per-guest information

# virsh list

Other options available include: the --inactive option to list inactive guests (that is, guests that have been defined but are not currently active), and the --all option, which lists all guests.
For example:

# virsh list --all
 Id Name                 State
----------------------------------
  0 Domain-0             running
  1 Domain202            paused
  2 Domain010            inactive
  3 Domain9600           crashed

There are seven states that can be visible using this command:

- Running - The running state refers to guests which are currently active on a CPU.
- Idle - The idle state indicates that the domain is idle, and may not be running or able to run. This can be caused because the domain is waiting on IO (a traditional wait state) or has gone to sleep because there was nothing else for it to do.
- Paused - The paused state lists domains that are paused. This occurs if an administrator uses the pause button in virt-manager, xm pause or virsh suspend. When a guest is paused it consumes memory and other resources but it is ineligible for scheduling and CPU resources from the hypervisor.
- Shutdown - The shutdown state is for guests in the process of shutting down. The guest is sent a shutdown signal and should be in the process of stopping its operations gracefully. This may not work with all guest operating systems; some operating systems do not respond to these signals.
- Shut off - The shut off state indicates that the domain is not running. This can be caused when a domain completely shuts down or has not been started.
- Crashed - The crashed state indicates that the domain has crashed. This can only occur if the guest has been configured not to restart on crash.
- Dying - Domains in the dying state are in the process of dying: the domain has not completely shut down or crashed.
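Because virsh list output is columnar, it is easy to filter in scripts. A hedged sketch that counts guests per state from a captured listing; the here-document stands in for live virsh list --all output:

```shell
# Count guests in each state from saved `virsh list --all` output.
# The sample listing below is a stand-in for a live virsh call.
cat > /tmp/virsh-list.txt <<'EOF'
 Id Name                 State
----------------------------------
  0 Domain-0             running
  1 Domain202            paused
  2 Domain010            inactive
  3 Domain9600           crashed
EOF

# Skip the two header lines, then tally the last column (the state).
awk 'NR > 2 { count[$NF]++ } END { for (s in count) print s, count[s] }' \
    /tmp/virsh-list.txt | sort
```

On a live system the same awk program could be fed directly: virsh list --all | awk 'NR > 2 { count[$NF]++ } …'.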
# virsh vcpuinfo {domain-id, domain-name or domain-uuid}

An example of virsh vcpuinfo output:

# virsh vcpuinfo r5b2-mySQL01
VCPU:         0
CPU:          0
State:        blocked
CPU time:     0.0s
CPU Affinity: yy

# virsh vcpupin domain-id vcpu cpulist

The domain-id parameter is the guest's ID number or name. The vcpu parameter denotes the number of virtualized CPUs allocated to the guest. The vcpu parameter must be provided. The cpulist parameter is a list of physical CPU identifier numbers separated by commas. The cpulist parameter determines which physical CPUs the VCPUs can run on.

# virsh setvcpus {domain-name, domain-id or domain-uuid} count

This count value cannot exceed the number of CPUs that were assigned to the guest when it was created.

# virsh setmem {domain-id or domain-name} count
# virsh setmem vr-rhel6u1-x86_64-kvm --kilobytes 1025000

You must specify the count in kilobytes. The new count value cannot exceed the amount you specified when you created the guest. Values lower than 64 MB are unlikely to work with most guest operating systems. A higher maximum memory value does not affect active guests. If the new value is lower than the available memory, it will shrink, possibly causing the guest to crash. This command has the following options:

[--domain] <string>  domain name, id or uuid
[--size] <number>    new memory size, as scaled integer (default KiB)
--config             takes effect next boot
--live               controls the memory of the running domain
--current            controls the memory on the current domain
Here is an example XML with the memtune options used:

<domain>
  <memtune>
    <hard_limit unit='G'>1</hard_limit>
    <soft_limit unit='M'>128</soft_limit>
    <swap_hard_limit unit='G'>2</swap_hard_limit>
    <min_guarantee unit='bytes'>67108864</min_guarantee>
  </memtune>
  ...
</domain>

memtune has the following options:

- hard_limit - The optional hard_limit element is the maximum memory the guest can use. The units for this value are kibibytes (that is, blocks of 1024 bytes).
- soft_limit - The optional soft_limit element is the memory limit to enforce during memory contention. The units for this value are kibibytes (that is, blocks of 1024 bytes).
- swap_hard_limit - The optional swap_hard_limit element is the maximum memory plus swap the guest can use. The units for this value are kibibytes (that is, blocks of 1024 bytes). This has to be more than the hard_limit value provided.
- min_guarantee - The optional min_guarantee element is the guaranteed minimum memory allocation for the guest. The units for this value are kibibytes (that is, blocks of 1024 bytes).
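The unit attributes in the example above (G, M, bytes) are all normalized to kibibytes internally. A quick shell sanity check of the conversions for the example values:

```shell
# Convert the memtune example values to kibibytes (1 KiB = 1024 bytes).
hard_limit_kib=$((1 * 1024 * 1024))        # 1 GiB
soft_limit_kib=$((128 * 1024))             # 128 MiB
swap_hard_limit_kib=$((2 * 1024 * 1024))   # 2 GiB
min_guarantee_kib=$((67108864 / 1024))     # 67108864 bytes
echo "$hard_limit_kib $soft_limit_kib $swap_hard_limit_kib $min_guarantee_kib"
# -> 1048576 131072 2097152 65536
```

These are the kibibyte figures that tools such as virsh memtune would report for those settings.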
# virsh memtune vr-rhel6u1-x86_64-kvm --hard-limit 512000

# virsh memtune vr-rhel6u1-x86_64-kvm
hard_limit     : 512000 kB
soft_limit     : unlimited
swap_hard_limit: unlimited

hard_limit is 512000 kB; this is the maximum memory the guest domain can use.

# virsh domblkstat GuestName block-device
# virsh domifstat GuestName interface-device

14.11. Managing virtual networks

This section covers managing virtual networks with the virsh command. To list virtual networks:

# virsh net-list

This command generates output similar to:

# virsh net-list
Name                 State      Autostart
-----------------------------------------
default              active     yes
vnet1                active     yes
vnet2                active     yes

To view network information for a specific virtual network:

# virsh net-dumpxml NetworkName

This displays information about a specified virtual network in XML format:

# virsh net-dumpxml vnet1
<network>
  <name>vnet1</name>
  <uuid>98361b46-1581-acb7-1643-85a412626e70</uuid>
  <forward dev='eth0'/>
  <bridge name='vnet0' stp='on' forwardDelay='0' />
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.128' end='192.168.100.254' />
    </dhcp>
  </ip>
</network>

Other virsh commands used in managing virtual networks are:

virsh net-autostart network-name - Autostarts a network specified as network-name.
virsh net-create XMLfile - Generates and starts a new network using an existing XML file.
virsh net-define XMLfile - Generates a new network device from an existing XML file without starting it.
virsh net-destroy network-name - Destroys a network specified as network-name.
virsh net-name networkUUID - Converts a specified networkUUID to a network name.
virsh net-uuid network-name - Converts a specified network-name to a network UUID.
virsh net-start nameOfInactiveNetwork - Starts an inactive network.
virsh net-undefine nameOfInactiveNetwork - Removes the definition of an inactive network.
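As a hedged illustration of the XML that net-define and net-create consume, here is a minimal NAT-style network definition modeled on the net-dumpxml output above; the name vnet3, bridge virbr3, and addresses are assumptions for the example:

```xml
<!-- Hypothetical network definition for virsh net-define -->
<network>
  <name>vnet3</name>
  <forward mode='nat'/>
  <bridge name='virbr3' stp='on' delay='0'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.150.100' end='192.168.150.200'/>
    </dhcp>
  </ip>
</network>
```

After saving this as vnet3.xml, virsh net-define vnet3.xml followed by virsh net-start vnet3 would register and start the network.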
14.12. Migrating guests with virsh

14.13. Guest CPU model configuration

Every hypervisor has its own policy for what a guest will see for its CPUs by default. Whereas some hypervisors decide which host CPU features will be available for the guest, QEMU/KVM presents the guest with a generic model named qemu32 or qemu64. Some hypervisors perform more advanced filtering, classifying all physical CPUs into a handful of groups and having one baseline CPU model for each group that is presented to the guest. Such behavior enables the safe migration of guests between hosts, provided they all have physical CPUs that classify into the same group. libvirt does not typically enforce policy itself; rather, it provides the mechanism on which the higher layers define their own desired policy. Understanding how to obtain CPU model information and define a suitable guest CPU model is critical to ensure guest migration is successful between hosts. Note that a hypervisor can only emulate features that it is aware of; features that were created after the hypervisor was released may not be emulated.

14.13.2. Learning about the host CPU model

The virsh capabilities command displays an XML document describing the capabilities of the hypervisor connection and host. The XML schema displayed has been extended to provide information about the host CPU model. One of the big challenges in describing a CPU model is that every architecture has a different approach to exposing its capabilities. On x86, the capabilities of a modern CPU are exposed via the CPUID instruction. Essentially this comes down to a set of 32-bit integers with each bit given a specific meaning. Fortunately, AMD and Intel agree on common semantics for these bits. Other hypervisors expose the notion of CPUID masks directly in their guest configuration format. However, QEMU/KVM supports far more than just the x86 architecture, so CPUID is clearly not suitable as the canonical configuration format.
QEMU ended up using a scheme which combines a CPU model name string with a set of named flags. On x86, the CPU model maps to a baseline CPUID mask, and the flags can be used to then toggle bits in the mask on or off. libvirt decided to follow this lead and uses a combination of a model name and flags. Here is an example of what libvirt reports as the capabilities on a development workstation:

# virsh capabilities
<capabilities>
  <host>
    <uuid>c4a68e53-3f41-6d9e-baaf-d33a181ccfa0</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>core2duo</model>
      <topology sockets='1' cores='4' threads='1'/>
      <feature name='lahf_lm'/>
      <feature name='sse4.1'/>
      <feature name='xtpr'/>
      <feature name='cx16'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
    </cpu>
    ... snip ...
  </host>
</capabilities>

It is not practical to have a database listing all known CPU models, so libvirt has a small list of baseline CPU model names. It chooses the one that shares the greatest number of CPUID bits with the actual host CPU and then lists the remaining bits as named features. Notice that libvirt does not display which features the baseline CPU contains. This might seem like a flaw at first, but as will be explained in this section, it is not actually necessary to know this information.

14.13.3. Determining a compatible CPU model to suit a pool of hosts

Now that it is possible to find out what CPU capabilities a single host has, the next step is to determine what CPU capabilities are best to expose to the guest. If it is known that the guest will never need to be migrated to another host, the host CPU model can be passed straight through unmodified. A virtualized data center may have a set of configurations that can guarantee all servers will have 100% identical CPUs. Again, the host CPU model can be passed straight through unmodified.
The more common case, though, is where there is variation in CPUs between hosts. In this mixed CPU environment, the lowest common denominator CPU must be determined. This is not entirely straightforward, so libvirt provides an API for exactly this task. If libvirt is provided a list of XML documents, each describing a CPU model for a host, libvirt will internally convert these to CPUID masks, calculate their intersection, and convert the CPUID mask result back into an XML CPU description. Taking the CPU description from a server:

# virsh capabilities
<capabilities>
  <host>
    <uuid>8e8e4e67-9df4-9117-bf29-ffc31f6b6abb</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Westmere</model>
      <vendor>Intel</vendor>
      <topology sockets='2' cores='4' threads='2'/>
      <feature name='rdtscp'/>
      <feature name='pdpe1gb'/>
      <feature name='dca'/>
      <feature name='xtpr'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='monitor'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='vme'/>
    </cpu>
    ... snip ...
  </host>
</capabilities>

A quick check can be made to see whether this CPU description is compatible with the previous workstation CPU description, using the virsh cpu-compare command. To do so, the virsh capabilities > virsh-caps-workstation-full.xml command was executed on the workstation.
The file virsh-caps-workstation-full.xml was edited and reduced to just the following content:

<cpu>
  <arch>x86_64</arch>
  <model>core2duo</model>
  <topology sockets='1' cores='4' threads='1'/>
  <feature name='lahf_lm'/>
  <feature name='sse4.1'/>
  <feature name='xtpr'/>
  <feature name='cx16'/>
  <feature name='tm2'/>
  <feature name='est'/>
  <feature name='vmx'/>
  <feature name='ds_cpl'/>
  <feature name='pbe'/>
  <feature name='tm'/>
  <feature name='ht'/>
  <feature name='ss'/>
  <feature name='acpi'/>
  <feature name='ds'/>
</cpu>

The reduced content was stored in a file named virsh-caps-workstation-cpu-only.xml and the virsh cpu-compare command can be executed using this file:

# virsh cpu-compare virsh-caps-workstation-cpu-only.xml
Host CPU is a superset of CPU described in virsh-caps-workstation-cpu-only.xml

As seen in this output, libvirt is correctly reporting that the CPUs are not strictly identical: several features in the server CPU are missing from the workstation CPU. To be able to migrate between the workstation and the server, it will be necessary to mask out some features. To determine which ones, libvirt provides an API, shown via the virsh cpu-baseline command:

# virsh cpu-baseline virsh-cap-weybridge-strictly-cpu-only.xml
<cpu match='exact'>
  <model>Penryn</model>
  <feature policy='require' name='xtpr'/>
  <feature policy='require' name='tm2'/>
  <feature policy='require' name='est'/>
  <feature policy='require' name='vmx'/>
  <feature policy='require' name='ds_cpl'/>
  <feature policy='require' name='monitor'/>
  <feature policy='require' name='pbe'/>
  <feature policy='require' name='tm'/>
  <feature policy='require' name='ht'/>
  <feature policy='require' name='ss'/>
  <feature policy='require' name='acpi'/>
  <feature policy='require' name='ds'/>
  <feature policy='require' name='vme'/>
</cpu>

Similarly, if the two <cpu>...</cpu> elements are put into a single file named both-cpus.xml, the following command would generate the same result:

# virsh cpu-baseline
both-cpus.xml

In this case, libvirt has determined that in order to safely migrate a guest between the workstation and the server, it is necessary to mask out 3 features from the XML description for the server, and 3 features from the XML description for the workstation.

14.13.4. Configuring the guest CPU model

For simple defaults, the guest CPU configuration accepts the same basic XML representation as the host capabilities XML exposes. In other words, the XML from the virsh cpu-baseline command can now be copied directly into the guest XML at the top level under the <domain> element. As the observant reader will have noticed from the previous XML snippet, there are a few extra attributes available when describing a CPU in the guest XML. These can mostly be ignored, but for the curious, here is a quick description of what they do. The top-level <cpu> element has an attribute called match with possible values of:

- match='minimum' - the host CPU must have at least the CPU features described in the guest XML. If the host has additional features beyond the guest configuration, these will also be exposed to the guest.
- match='exact' - the host CPU must have at least the CPU features described in the guest XML. If the host has additional features beyond the guest configuration, these will be masked out from the guest.
- match='strict' - the host CPU must have exactly the same CPU features described in the guest XML.
The next enhancement is that the <feature> elements can each have an extra 'policy' attribute with possible values of:

- policy='force' - expose the feature to the guest even if the host does not have it. This is usually only useful in the case of software emulation.
- policy='require' - expose the feature to the guest and fail if the host does not have it. This is the sensible default.
- policy='optional' - expose the feature to the guest if it happens to support it.
- policy='disable' - if the host has this feature, then hide it from the guest.
- policy='forbid' - if the host has this feature, then fail and refuse to start the guest.
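Putting the match and policy attributes together, a hypothetical guest <cpu> element might look like the following; the model and feature choices are illustrative only, not taken from a real cpu-baseline result:

```xml
<!-- Hypothetical guest CPU configuration, placed under the <domain> element -->
<cpu match='exact'>
  <model>core2duo</model>
  <!-- fail to start unless the host supports VMX -->
  <feature policy='require' name='vmx'/>
  <!-- hide this feature from the guest even if the host has it -->
  <feature policy='disable' name='lahf_lm'/>
</cpu>
```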
The 'forbid' policy is for a niche scenario where an incorrectly functioning application will try to use a feature even if it is not in the CPUID mask, and you wish to prevent accidentally running the guest on a host with that feature. The 'optional' policy has special behavior with respect to migration. When the guest is initially started the flag is optional, but when the guest is live migrated, this policy turns into 'require', since you cannot have features disappearing across migration.

Chapter 15. Managing guests with the Virtual Machine Manager (virt-manager)

This section describes the Virtual Machine Manager (virt-manager) windows, dialog boxes, and various GUI controls. virt-manager provides a graphical view of hypervisors and guests on your host system and on remote host systems. virt-manager can perform virtualization management tasks, including:
- defining and creating guests,
- assigning memory,
- assigning virtual CPUs,
- monitoring operational performance,
- saving and restoring, pausing and resuming, and shutting down and starting guests,
- links to the textual and graphical consoles, and
- live and offline migrations.
15.1. Starting virt-manager

To start a virt-manager session, open the Applications menu, then the System Tools menu, and select Virtual Machine Manager (virt-manager). The virt-manager main window appears. Alternatively, virt-manager can be started remotely using ssh, as demonstrated in the following command:

ssh -X host's address
[remotehost]# virt-manager

15.2. The Virtual Machine Manager main window

This main window displays all the running guests and resources used by guests. Select a guest by double-clicking the guest's name.

15.3. The virtual hardware details window

The virtual hardware details window displays information about the virtual hardware configured for the guest. Virtual hardware resources can be added, removed and modified in this window. To access the virtual hardware details window, click on the icon in the toolbar. Clicking the icon displays the virtual hardware details window.

15.4. Virtual Machine graphical console

This window displays a guest's graphical console. Guests can use several different protocols to export their graphical framebuffers: virt-manager supports VNC and SPICE. If your virtual machine is set to require authentication, the Virtual Machine graphical console prompts you for a password before the display appears. VNC is considered insecure by many security experts; however, several changes have been made to enable the secure usage of VNC for virtualization on Red Hat Enterprise Linux. The guest machines only listen to the local host's loopback address (127.0.0.1). This ensures only those with shell privileges on the host can access virt-manager and the virtual machine through VNC. Although virt-manager can be configured to listen to other public network interfaces, and alternative methods can be configured, it is not recommended. Remote administration can be performed by tunneling over SSH, which encrypts the traffic. Although VNC can be configured to be accessed remotely without tunneling over SSH, for security reasons it is not recommended.
To remotely administer the guest, follow the instructions in Chapter 5, Remote management of guests. TLS can provide enterprise-level security for managing guest and host systems. Your local desktop can intercept key combinations (for example, Ctrl+Alt+F1) to prevent them from being sent to the guest machine. You can use the Send Key menu option to send these sequences: from the guest machine window, click the Send Key menu and select the key sequence to send. In addition, from this menu you can also capture the screen output. SPICE is an alternative to VNC available for Red Hat Enterprise Linux.
15.5. Adding a remote connection
This procedure covers how to set up a connection to a remote system using virt-manager. To create a new connection, open the File menu and select the Add Connection... menu item. The Add Connection wizard appears. Select the hypervisor; for Red Hat Enterprise Linux 6 systems, select QEMU/KVM. Select Local for the local system, or one of the remote connection options, and click Connect. This example uses Remote tunnel over SSH, which works on default installations. For more information on configuring remote connections, refer to Chapter 5, Remote management of guests.
Enter the root password for the selected host when prompted.
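The Add Connection wizard builds a libvirt connection URI behind the scenes. As a sketch, the same remote connection can also be opened directly from the command line; the host name below is a placeholder for your own.

```shell
# Open virt-manager already connected to a remote KVM host over SSH
# (hypothetical host name; the URI mirrors what the wizard constructs).
virt-manager --connect qemu+ssh://root@host.example.com/system

# Optionally verify the connection first with virsh:
virsh -c qemu+ssh://root@host.example.com/system list --all
```

Both commands prompt for the remote root password (or use SSH keys if configured), just as the wizard does.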
A remote host is now connected and appears in the main virt-manager window.
15.6. Displaying guest details
You can use the Virtual Machine Manager to view activity information for any virtual machines on your system. To view a virtual system's details: In the Virtual Machine Manager main window, highlight the virtual machine that you want to view.
From the Virtual Machine Manager Edit menu, select Virtual Machine Details.
When the Virtual Machine details window opens, a console may be displayed. Should this happen, click View and then select Details. The Overview window opens first by default. To go back to this window, select Overview from the navigation pane on the left-hand side. The Overview view shows a summary of configuration details for the guest.
Select Performance from the navigation pane on the left hand side. The Performance view shows a summary of guest performance, including CPU and Memory usage.
Select Processor from the navigation pane on the left hand side. The Processor view allows you to view or change the current processor allocation.
Select Memory from the navigation pane on the left hand side. The Memory view allows you to view or change the current memory allocation.
Each virtual disk attached to the virtual machine is displayed in the navigation pane. Click on a virtual disk to modify or remove it.
Each virtual network interface attached to the virtual machine is displayed in the navigation pane. Click on a virtual network interface to modify or remove it.
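The details views above have rough command-line equivalents in virsh. The following sketch assumes a guest named guest1; substitute your own guest name, and note that exact output fields vary with the libvirt version.

```shell
# Overview: name, UUID, state, vCPU count, and memory allocation
virsh dominfo guest1

# Processor view: current and maximum virtual CPU counts
virsh vcpucount guest1

# Memory view: memory statistics for a running guest
virsh dommemstat guest1

# Virtual disks attached to the guest (target and source)
virsh domblklist guest1

# Virtual network interfaces attached to the guest
virsh domiflist guest1
```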
15.7. Performance monitoring
Performance monitoring preferences can be modified with virt-manager's Preferences window. To configure performance monitoring: From the Edit menu, select Preferences.
The Preferences window appears. From the Stats tab, specify the update interval in seconds and select the statistics polling options you require.
15.8. Displaying CPU usage for guestsTo view the CPU usage for all guests on your system: From the View menu, select Graph, then the Guest CPU Usage check box.
The Virtual Machine Manager shows a graph of CPU usage for all virtual machines on your system.
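Per-guest CPU figures can also be read from the command line. This is a sketch assuming a guest named guest1; the cpu-stats subcommand depends on your libvirt version.

```shell
# Per-vCPU state, accumulated CPU time, and host CPU affinity
virsh vcpuinfo guest1

# Aggregate CPU time for the guest, if supported by your libvirt
virsh cpu-stats guest1 --total
```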
15.9. Displaying CPU usage for hostsTo view the CPU usage for all hosts on your system: From the View menu, select Graph, then the Host CPU Usage check box.
The Virtual Machine Manager shows a graph of host CPU usage on your system.
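Host-side CPU information is also available through virsh, as a rough CLI counterpart to this graph:

```shell
# Host topology and capacity: CPU model, sockets, cores, total memory
virsh nodeinfo

# Cumulative host CPU time split into user/system/idle/iowait
virsh nodecpustats
```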
15.10. Displaying Disk I/O
To view the disk I/O for all virtual machines on your system: Make sure that Disk I/O statistics collection is enabled. To do this, from the Edit menu, select Preferences and click the Stats tab. Select the Disk I/O check box.
To enable the Disk I/O display, from the View menu, select Graph, then the Disk I/O check box.
The Virtual Machine Manager shows a graph of Disk I/O for all virtual machines on your system.
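The same counters can be queried per device with virsh domblkstat. The sketch below assumes a guest named guest1 and a target device vda; use virsh domblklist to find your guest's actual device names.

```shell
# Read/write request and byte counters for one block device
# of a running guest (guest and device names are placeholders).
virsh domblkstat guest1 vda
```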
15.11. Displaying Network I/O
To view the network I/O for all virtual machines on your system: Make sure that Network I/O statistics collection is enabled. To do this, from the Edit menu, select Preferences and click the Stats tab. Select the Network I/O check box.
To display the Network I/O statistics, from the View menu, select Graph, then the Network I/O check box.
The Virtual Machine Manager shows a graph of Network I/O for all virtual machines on your system.
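Equivalent per-interface counters are available with virsh domifstat. This assumes a guest named guest1 and an interface vnet0; use virsh domiflist to find the interface names for your guest.

```shell
# Receive/transmit byte, packet, error, and drop counters for one
# interface of a running guest (guest and interface names are placeholders).
virsh domifstat guest1 vnet0
```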