
Virtualization Administration Guide

Managing your virtual environment

Edition 1

Laura Novich

Red Hat Engineering Content Services

Legal Notice

Copyright © 2013 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution-Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
All other trademarks are the property of their respective owners.


1801 Varsity Drive
Raleigh, NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701

Table of Contents

Abstract

The Virtualization Administration Guide covers administration of hosts, networking, storage, device and guest management, and troubleshooting.
Preface
1. Document Conventions
1.1. Typographic Conventions
1.2. Pull-quote Conventions
1.3. Notes and Warnings
2. Getting Help and Giving Feedback
2.1. Do You Need Help?
2.2. We Need Feedback!
1. Server best practices
2. Security for virtualization
2.1. Storage security issues
2.2. SELinux and virtualization
2.3. SELinux
2.4. Virtualization firewall information
3. sVirt
3.1. Security and Virtualization
3.2. sVirt labeling
4. KVM live migration
4.1. Live migration requirements
4.2. Live migration and Red Hat Enterprise Linux version compatibility
4.3. Shared storage example: NFS for a simple migration
4.4. Live KVM migration with virsh
4.4.1. Additional tips for migration with virsh
4.4.2. Additional options for the virsh migrate command
4.5. Migrating with virt-manager
5. Remote management of guests
5.1. Remote management with SSH
5.2. Remote management over TLS and SSL
5.3. Transport modes
6. Overcommitting with KVM
6.1. Introduction
6.2. Overcommitting virtualized CPUs
7. KSM
8. Advanced virtualization administration
8.1. Control Groups (cgroups)
8.2. Hugepage support
9. Miscellaneous administration tasks
9.1. Automatically starting guests
9.2. Guest memory allocation
9.3. Using qemu-img
9.4. Verifying virtualization extensions
9.5. Setting KVM processor affinities
9.6. Generating a new unique MAC address
9.7. Improving guest response time
9.8. Disable SMART disk monitoring for guests
9.9. Configuring a VNC Server
9.10. Gracefully shutting down guests
9.11. Virtual machine timer management with libvirt
9.12. Using PMU to monitor guest performance
9.13. Guest virtual machine power management
9.14. QEMU Guest Agent Protocol
9.14.1. guest-sync
9.14.2. guest-sync-delimited
9.15. Setting a limit on device redirection
9.16. Dynamically changing a host or a network bridge that is attached to a virtual NIC
10. Storage concepts
10.1. Storage pools
10.2. Volumes
11. Storage pools
11.1. Creating storage pools
11.1.1. Disk-based storage pools
11.1.2. Partition-based storage pools
11.1.3. Directory-based storage pools
11.1.4. LVM-based storage pools
11.1.5. iSCSI-based storage pools
11.1.6. NFS-based storage pools
12. Volumes
12.1. Creating volumes
12.2. Cloning volumes
12.3. Adding storage devices to guests
12.3.1. Adding file based storage to a guest
12.3.2. Adding hard drives and other block devices to a guest
12.3.3. Managing storage controllers in a guest
12.4. Deleting and removing volumes
13. The Virtual Host Metrics Daemon (vhostmd)
13.1. Installing vhostmd on the host
13.2. Configuration of vhostmd
13.3. Starting and stopping the daemon
13.4. Verifying that vhostmd is working from the host
13.5. Configuring guests to see the metrics
13.6. Using vm-dump-metrics in Red Hat Enterprise Linux guests to verify operation
14. Managing guests with virsh
14.1. virsh command quick reference
14.2. Attaching and updating a device with virsh
14.3. Connecting to the hypervisor
14.4. Creating a virtual machine XML dump (configuration file)
14.4.1. Adding multifunction PCI devices to KVM guests
14.5. Suspending, resuming, saving and restoring a guest
14.6. Shutting down, rebooting and force-shutdown of a guest
14.7. Retrieving guest information
14.8. Retrieving node information
14.9. Storage pool information
14.10. Displaying per-guest information
14.11. Managing virtual networks
14.12. Migrating guests with virsh
14.13. Guest CPU model configuration
14.13.1. Introduction
14.13.2. Learning about the host CPU model
14.13.3. Determining a compatible CPU model to suit a pool of hosts
14.13.4. Configuring the guest CPU model
15. Managing guests with the Virtual Machine Manager (virt-manager)
15.1. Starting virt-manager
15.2. The Virtual Machine Manager main window
15.3. The virtual hardware details window
15.4. Virtual Machine graphical console
15.5. Adding a remote connection
15.6. Displaying guest details
15.7. Performance monitoring
15.8. Displaying CPU usage for guests
15.9. Displaying CPU usage for hosts
15.10. Displaying Disk I/O
15.11. Displaying Network I/O
16. Guest disk access with offline tools
16.1. Introduction
16.2. Terminology
16.3. Installation
16.4. The guestfish shell
16.4.1. Viewing file systems with guestfish
16.4.2. Modifying files with guestfish
16.4.3. Other actions with guestfish
16.4.4. Shell scripting with guestfish
16.4.5. Augeas and libguestfs scripting
16.5. Other commands
16.6. virt-rescue: The rescue shell
16.6.1. Introduction
16.6.2. Running virt-rescue
16.7. virt-df: Monitoring disk usage
16.7.1. Introduction
16.7.2. Running virt-df
16.8. virt-resize: resizing guests offline
16.8.1. Introduction
16.8.2. Expanding a disk image
16.9. virt-inspector: inspecting guests
16.9.1. Introduction
16.9.2. Installation
16.9.3. Running virt-inspector
16.10. virt-win-reg: Reading and editing the Windows Registry
16.10.1. Introduction
16.10.2. Installation
16.10.3. Using virt-win-reg
16.11. Using the API from Programming Languages
16.11.1. Interaction with the API via a C program
16.12. Troubleshooting
16.13. Where to find further documentation
17. Virtual Networking
17.1. Virtual network switches
17.2. Network Address Translation
17.3. Networking protocols
17.3.1. DNS and DHCP
17.3.2. Routed mode
17.3.3. Isolated mode
17.4. The default configuration
17.5. Examples of common scenarios
17.5.1. Routed mode
17.5.2. NAT mode
17.5.3. Isolated mode
17.6. Managing a virtual network
17.7. Creating a virtual network
17.8. Attaching a virtual network to a guest
17.9. Directly attaching to physical interface
17.10. Applying network filtering
17.10.1. Introduction
17.10.2. Filtering chains
17.10.3. Filtering chain priorities
17.10.4. Usage of variables in filters
17.10.5. Automatic IP address detection and DHCP snooping
17.10.6. Reserved Variables
17.10.7. Element and attribute overview
17.10.8. References to other filters
17.10.9. Filter rules
17.10.10. Supported protocols
17.10.11. Advanced Filter Configuration Topics
17.10.12. Limitations
18. qemu-kvm Whitelist
18.1. Introduction
18.2. Basic options
18.3. Disk options
18.4. Display options
18.5. Network options
18.6. Device options
18.7. Linux/Multiboot boot
18.8. Expert options
18.9. Help and information options
18.10. Miscellaneous options
19. Troubleshooting
19.1. Debugging and troubleshooting tools
19.2. kvm_stat
19.3. Troubleshooting with serial consoles
19.4. Virtualization log files
19.5. Loop device errors
19.6. Live Migration Errors
19.7. Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS
19.8. KVM networking performance
19.9. Missing characters on guest console with Japanese keyboard
19.10. Known Windows XP guest issues
A. Additional resources
A.1. Online resources
A.2. Installed documentation
B. Revision History
Index

Preface

1. Document Conventions

This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information.
In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set. The Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later includes the Liberation Fonts set by default.

1.1. Typographic Conventions

Four typographic conventions are used to call attention to specific words and phrases. These conventions, and the circumstances they apply to, are as follows.
Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to highlight keys and key combinations. For example:
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
The above includes a file name, a shell command and a key, all presented in mono-spaced bold and all distinguishable thanks to context.
Key combinations can be distinguished from an individual key by the plus sign that connects each part of a key combination. For example:
Press Enter to execute the command.
Press Ctrl+Alt+F2 to switch to a virtual terminal.
The first example highlights a particular key to press. The second example highlights a key combination: a set of three keys pressed simultaneously.
If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialog box text; labeled buttons; check-box and radio button labels; menu titles and sub-menu titles. For example:
Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a GUI interface, all presented in proportional bold and all distinguishable by context.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh [email protected].
The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.
To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Note the words in bold italics above - username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example:
Publican is a DocBook publishing system.

1.2. Pull-quote Conventions

Terminal output and source code listings are set off visually from the surrounding text.
Output sent to a terminal is set in mono-spaced roman and presented thus:
books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs
Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[]) throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object         ref    = iniCtx.lookup("EchoBean");
      EchoHome       home   = (EchoHome) ref;
      Echo           echo   = home.create();

      System.out.println("Created Echo");
      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}

1.3. Notes and Warnings

Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.

Note

Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier.

Important

Important boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring a box labeled 'Important' will not cause data loss but may cause irritation and frustration.

Warning

Warnings should not be ignored. Ignoring warnings will most likely cause data loss.

2. Getting Help and Giving Feedback

2.1. Do You Need Help?

If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal at http://access.redhat.com. Through the customer portal, you can:
  • search or browse through a knowledgebase of technical support articles about Red Hat products.
  • submit a support case to Red Hat Global Support Services (GSS).
  • access other product documentation.
Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and technology. You can find a list of publicly available mailing lists at https://www.redhat.com/mailman/listinfo. Click on the name of any mailing list to subscribe to that list or to access the list archives.

2.2. We Need Feedback!

If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/ against the product Documentation.
When submitting a bug report, be sure to mention the manual's identifier: doc-Virtualization_Administration_Guide
If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.

Chapter 1. Server best practices

The following tasks and tips can assist you with securing your Red Hat Enterprise Linux host and ensuring its reliability.
  • Run SELinux in enforcing mode. Set SELinux to run in enforcing mode with the setenforce command.
    # setenforce 1
  • Remove or disable any unnecessary services such as AutoFS, NFS, FTP, HTTP, NIS, telnetd, sendmail and so on.
  • Only add the minimum number of user accounts needed for platform management on the server and remove unnecessary user accounts.
  • Avoid running any unessential applications on your host. Running applications on the host may impact virtual machine performance and can affect server stability. Any application which may crash the server will also cause all virtual machines on the server to go down.
  • Use a central location for virtual machine installations and images. Virtual machine images should be stored under /var/lib/libvirt/images/. If you are using a different directory for your virtual machine images make sure you add the directory to your SELinux policy and relabel it before starting the installation. Use of shareable, network storage in a central location is highly recommended.
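If you do use a non-default image directory, the policy addition and relabeling mentioned above can be sketched as follows. This is a command transcript requiring root; the path /virt/images is a placeholder for your own location:

```shell
# Add a persistent SELinux file-context rule for the custom image
# directory (semanage is provided by the policycoreutils-python package).
semanage fcontext -a -t virt_image_t "/virt/images(/.*)?"
# Apply the new context to the directory and everything under it.
restorecon -R -v /virt/images
```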

Chapter 2. Security for virtualization

When deploying virtualization technologies, you must ensure that the host cannot be compromised. The host is a Red Hat Enterprise Linux system that manages the system, devices, memory and networks as well as all virtualized guests. If the host is insecure, all guests in the system are vulnerable. There are several ways to enhance security on systems using virtualization. You or your organization should create a deployment plan that contains the operating specifications and specifies which services are needed on your virtualized guests and host servers, as well as what support is required for these services. Here are a few security issues to consider while developing a deployment plan:
  • Run only necessary services on hosts. The fewer processes and services running on the host, the higher the level of security and performance.
  • Enable SELinux on the hypervisor. Read Section 2.2, "SELinux and virtualization" for more information on using SELinux and virtualization.
  • Use a firewall to restrict traffic to the host. You can set up a firewall with default-reject rules to help secure the host from attacks. It is also important to limit network-facing services.
  • Do not allow normal users to access the host. The host is privileged, and granting access to unprivileged accounts may compromise the level of security.

2.1. Storage security issues

Administrators of virtualized guests can, in certain circumstances, change the partitions the host boots from. To prevent this, administrators should follow these recommendations:
The host should not use disk labels to identify file systems in the fstab file, the initrd file, or on the kernel command line. If less privileged users, especially virtualized guests, have write access to whole partitions or LVM volumes, disk labels can be changed, which could cause the host to mount or boot from the wrong device.
Guests should not be given write access to whole disks or block devices (for example, /dev/sdb). Use partitions (for example, /dev/sdb1) or LVM volumes.
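For example, a single partition rather than a whole disk can be handed to a guest with virsh attach-disk; the guest name guest1 and the vdb target below are placeholders:

```shell
# Attach only /dev/sdb1 (a partition), not /dev/sdb (the whole disk),
# to the hypothetical guest "guest1" as its vdb device.
virsh attach-disk guest1 /dev/sdb1 vdb
```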

2.2. SELinux and virtualization

Security Enhanced Linux was developed by the NSA with assistance from the Linux community to provide stronger security for Linux. SELinux limits an attacker's abilities and works to prevent many common security exploits such as buffer overflow attacks and privilege escalation. It is because of these benefits that all Red Hat Enterprise Linux systems should run with SELinux enabled and in enforcing mode.
Adding LVM based storage with SELinux in enforcing mode
The following section is an example of adding a logical volume to a virtualized guest with SELinux enabled. These instructions also work for hard drive partitions.

Procedure 2.1. Creating and mounting a logical volume on a virtualized guest with SELinux enabled

  1. Create a logical volume. This example creates a 5 gigabyte logical volume named NewVolumeName on the volume group named volumegroup.
    # lvcreate -n NewVolumeName -L 5G volumegroup
  2. Format the NewVolumeName logical volume with a file system that supports extended attributes, such as ext3.
    # mke2fs -j /dev/volumegroup/NewVolumeName
  3. Create a new directory for mounting the new logical volume. This directory can be anywhere on your file system. It is advised not to put it in important system directories (/etc, /var, /sys) or in home directories (/home or /root). This example uses a directory called /virtstorage.
    # mkdir /virtstorage
  4. Mount the logical volume.
    # mount /dev/volumegroup/NewVolumeName /virtstorage
  5. Set the correct SELinux type for the libvirt image location.
    # semanage fcontext -a -t virt_image_t "/virtstorage(/.*)?"
    If the targeted policy is used (targeted is the default policy) the command appends a line to the /etc/selinux/targeted/contexts/files/file_contexts.local file which makes the change persistent. The appended line may resemble this:
    /virtstorage(/.*)? system_u:object_r:virt_image_t:s0
  6. Run the command to change the type of the mount point (/virtstorage) and all files under it to virt_image_t (the restorecon and setfiles commands read the files in /etc/selinux/targeted/contexts/files/).
    # restorecon -R -v /virtstorage

Note

To verify the relabeling, create a new file (using the touch command) on the file system.
# touch /virtstorage/newfile
Verify the file has been relabeled using the following command:
# sudo ls -Z /virtstorage
-rw-------. root root system_u:object_r:virt_image_t:s0 newfile
The output shows that the new file has the correct attribute, virt_image_t.

2.3. SELinux

This section contains topics to consider when using SELinux with your virtualization deployment. When you deploy system changes or add devices, you must update your SELinux policy accordingly. To configure an LVM volume for a guest, you must modify the SELinux context for the respective underlying block device and volume group. Make sure that you have installed the policycoreutils-python package (yum install policycoreutils-python) before running the command.
# semanage fcontext -a -t virt_image_t -f -b /dev/sda2
# restorecon /dev/sda2
KVM and SELinux
The following table shows the SELinux Booleans which affect KVM when launched by libvirt.
KVM SELinux Booleans
SELinux Boolean     Description
virt_use_comm       Allow virt to use serial/parallel communication ports.
virt_use_fusefs     Allow virt to read FUSE files.
virt_use_nfs        Allow virt to manage NFS files.
virt_use_samba      Allow virt to manage CIFS files.
virt_use_sanlock    Allow sanlock to manage virt lib files.
virt_use_sysfs      Allow virt to manage device configuration (PCI).
virt_use_xserver    Allow virtual machines to interact with the X server.
virt_use_usb        Allow virt to use USB devices.
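These Booleans can be inspected with getsebool and toggled with setsebool. As a sketch, to allow guests to use images stored on NFS (run as root):

```shell
# Show the current state of the Boolean.
getsebool virt_use_nfs
# Enable it; -P makes the change persistent across reboots.
setsebool -P virt_use_nfs on
```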

2.4. Virtualization firewall information

Various ports are used for communication between virtualized guests and management utilities.

Note

Any network service on a virtualized guest must have the applicable ports open on the guest to allow external access. If a network service on a guest is firewalled, it will be inaccessible. Always verify the guest's network configuration first.
  • ICMP requests must be accepted. ICMP packets are used for network testing. You cannot ping guests if ICMP packets are blocked.
  • Port 22 should be open for SSH access and the initial installation.
  • Ports 80 or 443 (depending on the security settings on the RHEV Manager) are used by the vdsm-reg service to communicate information about the host.
  • Ports 5634 to 6166 are used for guest console access with the SPICE protocol.
  • Ports 49152 to 49216 are used for migrations with KVM. Migration may use any port in this range depending on the number of concurrent migrations occurring.
  • Enabling IP forwarding (net.ipv4.ip_forward = 1) is also required for shared bridges and the default bridge. Note that installing libvirt enables this variable so it will be enabled when the virtualization packages are installed unless it was manually disabled.

Note

Note that enabling IP forwarding is not required for physical bridge devices. When a guest is connected through a physical bridge, traffic only operates at a level that does not require IP configuration such as IP forwarding.
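As a quick check, the IP forwarding setting can be confirmed and the KVM migration port range opened as follows. The iptables rule is only a sketch and should be adapted to your firewall policy:

```shell
# Verify that IP forwarding is enabled
# (should print "net.ipv4.ip_forward = 1").
sysctl net.ipv4.ip_forward
# Open the KVM migration port range (example rule; adapt as needed).
iptables -I INPUT -p tcp --dport 49152:49216 -j ACCEPT
```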

Chapter 3. sVirt

sVirt is a technology included in Red Hat Enterprise Linux 6 that integrates SELinux and virtualization. sVirt applies Mandatory Access Control (MAC) to improve security when using virtualized guests. The main reasons for integrating these technologies are to improve security and harden the system against bugs in the hypervisor that might be used as an attack vector aimed toward the host or to another virtualized guest.
This chapter describes how sVirt integrates with virtualization technologies in Red Hat Enterprise Linux 6.
Non-virtualized environments
In a non-virtualized environment, hosts are separated from each other physically and each host has a self-contained environment, consisting of services such as a web server, or a DNS server. These services communicate directly to their own user space, host kernel and physical host, offering their services directly to the network. The following image represents a non-virtualized environment:
Virtualized environments
In a virtualized environment, several operating systems can run on a single host kernel and physical host. The following image represents a virtualized environment:

3.1. Security and Virtualization

When services are not virtualized, machines are physically separated. Any exploit is usually contained to the affected machine, with the obvious exception of network attacks. When services are grouped together in a virtualized environment, extra vulnerabilities emerge in the system. If there is a security flaw in the hypervisor that can be exploited by a guest instance, this guest may be able to not only attack the host, but also other guests running on that host. These attacks can extend beyond the guest instance and could expose other guests to attack.
sVirt is an effort to isolate guests and limit their ability to launch further attacks if exploited. This is demonstrated in the following image, where an attack cannot break out of the virtualized guest and extend to another guest instance:
SELinux introduces a pluggable security framework for virtualized instances in its implementation of Mandatory Access Control (MAC). The sVirt framework allows guests and their resources to be uniquely labeled. Once labeled, rules can be applied which can reject access between different guests.

3.2. sVirt labeling

Like other services under the protection of SELinux, sVirt uses process-based mechanisms and restrictions to provide an extra layer of security over guest instances. Under typical use, you should not even notice that sVirt is working in the background. This section describes the labeling features of sVirt.
As shown in the following output, when using sVirt, each virtualized guest process is labeled and runs with a dynamically generated level. Each process is isolated from other VMs with different levels:
# ps -eZ | grep qemu
system_u:system_r:svirt_t:s0:c87,c520 27950 ?  00:00:17 qemu-kvm
The actual disk images are automatically labeled to match the processes, as shown in the following output:
# ls -lZ /var/lib/libvirt/images/*
  system_u:object_r:svirt_image_t:s0:c87,c520   image1
The following table outlines the different labels that can be assigned when using sVirt:

Table 3.1. sVirt labels

SELinux Context                          Type / Description
system_u:system_r:svirt_t:MCS1           Virtualized guest processes. MCS1 is a random MCS field. Approximately 500,000 labels are supported.
system_u:object_r:svirt_image_t:MCS1     Virtualized guest images. Only svirt_t processes with the same MCS fields can read/write these images.
system_u:object_r:svirt_image_t:s0       Virtualized guest shared read/write content. All svirt_t processes can write to svirt_image_t:s0 files.
system_u:object_r:svirt_content_t:s0     Virtualized guest shared read-only content. All svirt_t processes can read these files/devices.
system_u:object_r:virt_content_t:s0      Virtualized guest images. Default label for when an image exists. No svirt_t virtual processes can read files/devices with this label.

It is also possible to perform static labeling when using sVirt. Static labels allow the administrator to select a specific label, including the MCS/MLS field, for a virtualized guest. Administrators who run statically-labeled virtualized guests are responsible for setting the correct label on the image files. The virtualized guest will always be started with that label, and the sVirt system will never modify the label of a statically-labeled virtual machine's content. This allows the sVirt component to run in an MLS environment. You can also run multiple virtualized guests with different sensitivity levels on a system, depending on your requirements.
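Static labeling is configured through the seclabel element in the guest's libvirt XML. A minimal sketch follows; the MCS pair c392,c662 is an arbitrary example value that the administrator would choose:

```xml
<seclabel type='static' model='selinux' relabel='no'>
  <label>system_u:system_r:svirt_t:s0:c392,c662</label>
</seclabel>
```

With relabel='no', sVirt will not modify the labels on the guest's image files; the administrator must set them to match.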

Chapter 4. KVM live migration

This chapter covers migrating guests running on a KVM hypervisor to another KVM host.
Migration describes the process of moving a guest from one host to another. This is possible because guests are running in a virtualized environment instead of directly on the hardware. Migration is useful for:
  • Load balancing - guests can be moved to hosts with lower usage when their host becomes overloaded, or another host is under-utilized.
  • Hardware independence - when hardware devices on the host need to be upgraded, added, or removed, guests can be safely relocated to other hosts. This means that guests do not experience any downtime for hardware improvements.
  • Energy saving - guests can be redistributed to other hosts and host systems powered off to save energy and cut costs in low usage periods.
  • Geographic migration - guests can be moved to another location for lower latency or in serious circumstances.
Migration works by sending the state of the guest's memory and any virtualized devices to a destination host. It is recommended to use shared, networked storage to store the guest images to be migrated. It is also recommended to use libvirt-managed storage pools for shared storage when migrating virtual machines.
Migrations can be performed live or offline.
In a live migration, the guest continues to run on the source host while its memory pages are transferred, in order, to the destination host. During migration, KVM monitors the source for any changes in pages it has already transferred, and begins to transfer these changes when all of the initial pages have been transferred. KVM also estimates transfer speed during migration, so when the remaining amount of data to transfer will take a certain configurable period of time (10ms by default), KVM suspends the original guest, transfers the remaining data, and resumes the guest on the destination host.
A migration that is not performed live suspends the guest, then moves an image of the guest's memory to the destination host. The guest is then resumed on the destination host and the memory the guest used on the source host is freed. The time it takes to complete such a migration depends on network bandwidth and latency. If the network is experiencing heavy use or low bandwidth, the migration will take much longer.
If the original guest modifies pages faster than KVM can transfer them to the destination host, offline migration must be used, as live migration would never complete.
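The live migration described above is typically driven with the virsh migrate command, covered in Section 4.4. A minimal sketch, where guest1 and desthost are placeholders for your guest and destination host:

```shell
# Live-migrate "guest1" to the host "desthost", tunneled over SSH.
virsh migrate --live guest1 qemu+ssh://desthost/system
```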

4.1. Live migration requirements

Migrating guests requires the following:

Migration requirements

  • A guest installed on shared storage using one of the following protocols:
    • Fibre Channel-based LUNs
    • iSCSI
    • FCoE
    • NFS
    • GFS2
    • SCSI RDMA protocols (SCSI RCP): the block export protocol used in Infiniband and 10GbE iWARP adapters
  • The migration platforms and versions should be checked against Table 4.1, "Live Migration Compatibility"
  • Both systems must have the appropriate TCP/IP ports open.
  • A separate system exporting the shared storage medium. Storage should not reside on either of the two hosts being used for migration.
  • Shared storage must mount at the same location on the source and destination systems. The mounted directory names must be identical. Although it is possible to keep the images under different paths, it is not recommended. Note that if you intend to use virt-manager to perform the migration, the path names must be identical. If, however, you intend to use virsh to perform the migration, different network configurations and mount directories can be used with the help of the --xml option or pre-hooks. Even without shared storage, migration can still succeed with the --copy-storage-all option. For more information on pre-hooks, refer to libvirt.org, and for more information on the --xml option, see the virsh manual.
  • When migration is attempted on an existing guest in a public bridge+tap network, the source and destination hosts must be located in the same network. Otherwise, the guest network will not operate after migration.
Make sure that the libvirtd service is enabled (# chkconfig libvirtd on) and running (# service libvirtd start). It is also important to note that the ability to migrate effectively is dependent on the parameter settings in the /etc/libvirt/libvirtd.conf configuration file.

Procedure 4.1. Configuring libvirtd.conf

  1. Open the libvirtd.conf file as root:
    # vim /etc/libvirt/libvirtd.conf
  2. Change the parameters as needed and save the file.
  3. Restart the libvirtd service:
    # service libvirtd restart
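Where the procedure above needs to be scripted, the edit in step 2 can also be made non-interactively. The following sketch operates on a scratch copy of the file so it can be run safely; the parameter value of 50 is an assumed target, and on a real host you would point CONF at /etc/libvirt/libvirtd.conf and run as root:

```shell
#!/bin/sh
# Sketch: raise max_clients and max_workers without opening an editor.
CONF=./libvirtd.conf.example
printf '#max_clients = 20\n#max_workers = 20\n' > "$CONF"

# Uncomment the parameters (if commented) and set both to 50.
sed -i -e 's/^#\?max_clients = .*/max_clients = 50/' \
       -e 's/^#\?max_workers = .*/max_workers = 50/' "$CONF"

grep '^max_' "$CONF"
# On a real host, follow up with: service libvirtd restart
```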

4.2. Live migration and Red Hat Enterprise Linux version compatibility

Live migration is supported as shown in Table 4.1, "Live Migration Compatibility":

Table 4.1. Live Migration Compatibility

Migration Method | Release Type  | Example               | Live Migration Support | Notes
Forward          | Major release | 5.x ⤍ 6.y             | Not supported          |
Forward          | Minor release | 5.x ⤍ 5.y (y>x, x>=4) | Fully supported        | Any issues should be reported
Forward          | Minor release | 6.x ⤍ 6.y (y>x, x>=0) | Fully supported        | Any issues should be reported
Backward         | Major release | 6.x ⤍ 5.y             | Not supported          |
Backward         | Minor release | 5.x ⤍ 5.y (x>y, y>=4) | Supported              | Refer to Troubleshooting problems with migration for known issues
Backward         | Minor release | 6.x ⤍ 6.y (x>y, y>=0) | Supported              | Refer to Troubleshooting problems with migration for known issues

Troubleshooting problems with migration

  • Issues with SPICE - SPICE has an incompatible change when migrating from 6.0 ⤍ 6.1. In such cases, the client may disconnect and then reconnect, causing a temporary loss of audio and video. This is only temporary and all services will resume.
  • Issues with USB - Red Hat Enterprise Linux 6.2 added USB functionality, including migration support. This means that in versions 6.0 and 6.1 (and likewise 6.2+ with -M rhel (oldversion)), USB devices are reset during migration, causing any application using the device to abort. To prevent this from happening, do not migrate while USB devices are in use. This problem was fixed in Red Hat Enterprise Linux 6.4 and should not occur in future versions.
  • Issues with the migration protocol - If backward migration ends with "unknown section error", repeating the migration process can repair the issue as it may be a transient error. If not, please report the problem.
Configuring network storage
Configure shared storage and install a guest on the shared storage.

4.3. Shared storage example: NFS for a simple migration

Important

This example uses NFS to share guest images with other KVM hosts. Although not practical for large installations, it is presented to demonstrate migration techniques only. Do not use this example for migrating or running more than a few guests.
iSCSI storage is a better choice for large deployments. Refer to Section 11.1.5, "iSCSI-based storage pools" for configuration details.
Also note that the instructions provided herein are not meant to replace the detailed instructions found in the Red Hat Enterprise Linux Storage Administration Guide. Refer to that guide for information on configuring NFS, opening ports in iptables, and configuring the firewall.
Make sure that NFS file locking is not used, as it is not supported in KVM.
  1. Export your libvirt image directory

    Migration requires storage to reside on a system that is separate to the migration target systems. On this separate system, export the storage by adding the default image directory to the /etc/exports file:
    /var/lib/libvirt/images *.example.com(rw,no_root_squash,sync)
    Change the hostname parameter as required for your environment.
  2. Start NFS

    1. Install the NFS packages if they are not yet installed:
      # yum install nfs-utils
    2. Make sure that the ports for NFS in iptables (2049, for example) are opened and add NFS to the /etc/hosts.allow file.
    3. Start the NFS service:
      # service nfs start
  3. Mount the shared storage on the destination

    On the migration destination system, mount the /var/lib/libvirt/images directory:
    # mount storage_host:/var/lib/libvirt/images /var/lib/libvirt/images

    Warning

    Whichever directory is chosen for the guest images must be exactly the same on the source and destination hosts. This applies to all types of shared storage. If the directories differ, migration with virt-manager will fail.
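The export and mount steps above can be sketched as a single script. It uses a scratch exports file so it can be run safely; on the real storage host you would edit /etc/exports instead, and the hostname pattern is the example value from step 1:

```shell
#!/bin/sh
# Sketch of the NFS shared-storage setup for migration.
EXPORTS=./exports.example
IMAGE_DIR=/var/lib/libvirt/images
CLIENTS='*.example.com'

# 1. On the storage host, export the image directory.
echo "$IMAGE_DIR $CLIENTS(rw,no_root_squash,sync)" >> "$EXPORTS"

# 2. On the real storage host you would then start NFS:
#      service nfs start
# 3. ...and mount on BOTH migration hosts at the identical path:
#      mount storage_host:$IMAGE_DIR $IMAGE_DIR

cat "$EXPORTS"
```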

4.4. Live KVM migration with virsh

A guest can be migrated to another host with the virsh command. The migrate command accepts parameters in the following format:
# virsh migrate --live GuestName DestinationURL
Note that the --live option may be omitted when live migration is not desired. Additional options are listed in Section 4.4.2, "Additional options for the virsh migrate command".
The GuestName parameter represents the name of the guest which you want to migrate.
The DestinationURL parameter is the connection URL of the destination host. The destination system must run the same version of Red Hat Enterprise Linux, be using the same hypervisor and have libvirt running.

Note

The DestinationURL parameter for normal migration and peer2peer migration has different semantics:
  • normal migration: the DestinationURL is the URL of the target host as seen from the source guest.
  • peer2peer migration: DestinationURL is the URL of the target host as seen from the source host.
Once the command is entered, you will be prompted for the root password of the destination system.

Important

An entry for the destination host, in the /etc/hosts file on the source server is required for migration to succeed. Enter the IP address and hostname for the destination host in this file as shown in the following example, substituting your destination host's IP address and hostname:
10.0.0.20    host2.example.com
Example: live migration with virsh
This example migrates from host1.example.com to host2.example.com. Change the host names for your environment. This example migrates a virtual machine named guest1-rhel6-64.
This example assumes you have fully configured shared storage and meet all the prerequisites (listed here: Migration requirements).
  1. Verify the guest is running

    From the source system, host1.example.com, verify guest1-rhel6-64 is running:
    [root@host1 ~]# virsh list
     Id Name                 State
    ----------------------------------
     10 guest1-rhel6-64      running
  2. Migrate the guest

    Execute the following command to live migrate the guest to the destination, host2.example.com. Append /system to the end of the destination URL to tell libvirt that you need full access.
    # virsh migrate --live guest1-rhel6-64 qemu+ssh://host2.example.com/system
    Once the command is entered you will be prompted for the root password of the destination system.
  3. Wait

    The migration may take some time depending on load and the size of the guest. virsh only reports errors. The guest continues to run on the source host until fully migrated.
  4. Verify the guest has arrived at the destination host

    From the destination system, host2.example.com, verify guest1-rhel6-64 is running:
    [root@host2 ~]# virsh list
     Id Name                 State
    ----------------------------------
     10 guest1-rhel6-64      running
The live migration is now complete.
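The verification in step 4 can be automated by checking the virsh list output for a running entry. In the sketch below, sample output is inlined so the check can be demonstrated without a libvirt host; on a real destination host you would replace the LIST assignment with LIST=$(virsh list):

```shell
#!/bin/sh
# Sketch: confirm the migrated guest is running on the destination.
GUEST=guest1-rhel6-64

# Sample 'virsh list' output (assumed for illustration).
LIST=' Id Name                 State
----------------------------------
 10 guest1-rhel6-64      running'

if echo "$LIST" | grep -q "$GUEST.*running"; then
    echo "migration verified: $GUEST is running"
else
    echo "guest not running on destination" >&2
    exit 1
fi
```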

Note

libvirt supports a variety of networking methods including TLS/SSL, UNIX sockets, SSH, and unencrypted TCP. Refer to Chapter 5, Remote management of guests for more information on using other methods.

Note

Non-running guests cannot be migrated with the virsh migrate command. To migrate a non-running guest, the following script should be used:
virsh dumpxml Guest1 > Guest1.xml
virsh -c qemu+ssh://<target-system-FQDN> define Guest1.xml
virsh undefine Guest1

4.4.1. Additional tips for migration with virsh

It is possible to perform multiple, concurrent live migrations where each migration runs in a separate command shell. However, this should be done with caution and should involve careful calculations, as each migration instance uses one MAX_CLIENT from each side (source and target). With the default setting of 20, 10 instances can run without changing the settings. Should you need to change the settings, refer to Procedure 4.1, "Configuring libvirtd.conf".
  1. Open the libvirtd.conf file as described in Procedure 4.1, "Configuring libvirtd.conf".
  2. Look for the Processing controls section.
    #################################################################
    #
    # Processing controls
    #

    # The maximum number of concurrent client connections to allow
    # over all sockets combined.
    #max_clients = 20

    # The minimum limit sets the number of workers to start up
    # initially. If the number of active clients exceeds this,
    # then more threads are spawned, upto max_workers limit.
    # Typically you'd want max_workers to equal maximum number
    # of clients allowed
    #min_workers = 5
    #max_workers = 20

    # The number of priority workers. If all workers from above
    # pool will stuck, some calls marked as high priority
    # (notably domainDestroy) can be executed in this pool.
    #prio_workers = 5

    # Total global limit on concurrent RPC calls. Should be
    # at least as large as max_workers. Beyond this, RPC requests
    # will be read into memory and queued. This directly impact
    # memory usage, currently each request requires 256 KB of
    # memory. So by default upto 5 MB of memory is used
    #
    # XXX this isn't actually enforced yet, only the per-client
    # limit is used so far
    #max_requests = 20

    # Limit on concurrent requests from a single client
    # connection. To avoid one client monopolizing the server
    # this should be a small fraction of the global max_requests
    # and max_workers parameter
    #max_client_requests = 5

    #################################################################
  3. Change the max_clients and max_workers parameter settings. It is recommended that both parameters be set to the same value. max_clients uses 2 clients per migration (one per side), and max_workers uses 1 worker on the source and 0 workers on the destination during the perform phase, and 1 worker on the destination during the finish phase.

    Important

    The max_clients and max_workers parameter settings apply to all guest connections to the libvirtd service. This means that any user who is using the same guest and performing a migration at the same time is also subject to the limits set in the max_clients and max_workers parameter settings. This is why the maximum value needs to be considered carefully before performing a concurrent live migration.
  4. Save the file and restart the service.

    Note

    There may be cases where a migration connection drops because too many ssh sessions have been started but not yet authenticated. By default, sshd allows only 10 sessions to be in a "pre-authenticated state" at any time. This setting is controlled by the MaxStartups parameter in the sshd configuration file (/etc/ssh/sshd_config), which may require some adjustment. Adjust this parameter with caution, as the limitation is put in place to prevent DoS attacks (and over-use of resources in general); setting this value too high negates its purpose. To change this parameter, edit /etc/ssh/sshd_config, remove the # from the beginning of the MaxStartups line, and change the 10 (default value) to a higher number. Remember to save the file and restart the sshd service. For more information, refer to the sshd_config man page.
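The capacity calculation behind step 3 can be made explicit. This sketch simply restates the guide's arithmetic (2 client connections per migration, one on each side) for the default max_clients value:

```shell
#!/bin/sh
# How many concurrent migrations fit under a given max_clients limit.
max_clients=20      # libvirtd.conf default
per_migration=2     # one client connection on each side

max_concurrent=$((max_clients / per_migration))
echo "max concurrent migrations: $max_concurrent"
```

Inverting the formula, running N concurrent migrations requires max_clients of at least 2*N on both hosts.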

4.4.2. Additional options for the virsh migrate command

In addition to --live, virsh migrate accepts the following options:
  • --direct - used for direct migration
  • --p2p - used for peer-2-peer migration
  • --tunnelled - used for tunnelled migration
  • --persistent - leaves the domain persistent on destination host
  • --undefinesource - undefines the domain on the source host
  • --suspend - leaves the domain paused on the destination host
  • --copy-storage-all - indicates migration with non-shared storage with full disk copy
  • --copy-storage-inc - indicates migration with non-shared storage with incremental copy (same base image shared between source and destination). In both cases the disk images have to exist on the destination host; the --copy-storage-* options only tell libvirt to transfer data from the images on the source host to the images found at the same place on the destination host
  • --change-protection - enforces that no incompatible configuration changes will be made to the domain while the migration is underway; this flag is implicitly enabled when supported by the hypervisor, but can be explicitly used to reject the migration if the hypervisor lacks change protection support.
  • --unsafe - forces the migration to occur, ignoring all safety procedures.
  • --verbose - displays the progress of the migration as it occurs
  • migrateuri - the migration URI, which can usually be omitted
  • --timeout seconds - forces a guest to suspend when the live migration counter exceeds N seconds. It can only be used with a live migration. Once the timeout is initiated, the migration continues on the suspended guest.
  • dname - used for renaming the domain to a new name during migration; this can also usually be omitted
  • --xml file - can be used to supply an alternative XML file for use on the destination, supplying a larger set of changes to any host-specific portions of the domain XML, such as accounting for naming differences between source and destination in accessing underlying storage. This option is usually omitted.
Refer to the virsh man page for more information.
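As an illustration of combining several of the options above (this particular combination is an assumption for demonstration, not taken from the guide), the following sketch assembles a migration that makes the guest persistent on the destination, removes its definition from the source, and suspends it if it runs longer than 120 seconds. The command is printed rather than executed so the sketch can be run without a libvirt host:

```shell
#!/bin/sh
# Build a virsh migrate invocation with several optional flags.
GUEST=guest1-rhel6-64
DEST=qemu+ssh://host2.example.com/system

echo virsh migrate --live --persistent --undefinesource \
     --timeout 120 "$GUEST" "$DEST"
```

Remove the echo to run the command for real on a configured source host.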

4.5. Migrating with virt-manager

This section covers migrating a KVM guest with virt-manager from one host to another.
  1. Open virt-manager

    Open virt-manager. Choose Applications → System Tools → Virtual Machine Manager from the main menu bar to launch virt-manager.
    Virt-Manager main menu

    Figure 4.1. Virt-Manager main menu


  2. Connect to the target host

    Connect to the target host by clicking on the File menu, then click Add Connection.
    Open Add Connection window

    Figure 4.2. Open Add Connection window


  3. Add connection

    The Add Connection window appears.
    Adding a connection to the target host

    Figure 4.3. Adding a connection to the target host


    Enter the following details:
    • Hypervisor: Select QEMU/KVM.
    • Method: Select the connection method.
    • Username: Enter the username for the remote host.
    • Hostname: Enter the hostname for the remote host.
    Click the Connect button. An SSH connection is used in this example, so the specified user's password must be entered in the next step.
    Enter password

    Figure 4.4. Enter password


  4. Migrate guest

    Right-click on the guest to be migrated (guest1-rhel6-64 in this example) and click Migrate.
    Choosing the host to migrate

    Figure 4.5. Choosing the host to migrate


    Select the host you wish to migrate to and click Migrate.
    Migrating the host

    Figure 4.6. Migrating the host


    A progress window will appear.
    Progress window

    Figure 4.7. Progress window


    virt-manager now displays the newly migrated guest.
    Migrated guest status

    Figure 4.8. Migrated guest status


  5. View the storage details for the host

    In the Edit menu, click Connection Details. The Connection Details window appears.
    Click the Storage tab. The iSCSI target details for this host are shown.
    Storage details

    Figure 4.9. Storage details


    This host was defined by the following XML configuration:
    <pool type='iscsi'>
      <name>iscsirhel6guest</name>
      <source>
        <host name='virtlab22.example.com.'/>
        <device path='iqn.2001-05.com.iscsivendor:0-8a0906-fbab74a06-a700000017a4cc89-rhevh'/>
      </source>
      <target>
        <path>/dev/disk/by-path</path>
      </target>
    </pool>