Virtualization Administration Guide

Chapter 5. Remote management of guests

This section explains how to remotely manage your guests using SSH or TLS and SSL. More information on SSH can be found in the Red Hat Enterprise Linux Deployment Guide.

5.1. Remote management with SSH

The ssh package provides an encrypted network protocol that can securely send management functions to remote virtualization servers. The method described uses the libvirt management connection, securely tunneled over an SSH connection, to manage the remote machines. All of the authentication is done using SSH public key cryptography and passwords or passphrases gathered by your local SSH agent. In addition, the VNC console for each guest is tunneled over SSH.
Be aware of the issues with using SSH for remotely managing your virtual machines, including:
  • you require root login access to the remote machine to manage virtual machines,
  • the initial connection setup process may be slow,
  • there is no standard or trivial way to revoke a user's key on all hosts or guests, and
  • ssh does not scale well with larger numbers of remote machines.

Note

Red Hat Enterprise Virtualization enables remote management of large numbers of virtual machines. Refer to the Red Hat Enterprise Virtualization documentation for further details.
The following packages are required for ssh access:
  • openssh
  • openssh-askpass
  • openssh-clients
  • openssh-server
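For example, the packages can be installed on the client and on each managed host with yum (a sketch, assuming the standard Red Hat Enterprise Linux repositories):
# yum install openssh openssh-askpass openssh-clients openssh-server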
Configuring password-less or password-managed SSH access for virt-manager
The following instructions assume you are starting from scratch and do not already have SSH keys set up. If you have SSH keys set up and copied to the other systems you can skip this procedure.

Important

SSH keys are user dependent and may only be used by their owners. A key's owner is the one who generated it. Keys may not be shared.
virt-manager must be run by the user who owns the keys to connect to the remote host. This means that if the remote systems are managed by a non-root user, virt-manager must be run in unprivileged mode. If the remote systems are managed by the local root user, then the SSH keys must be owned and created by root.
You cannot manage the local host as an unprivileged user with virt-manager.
  1. Optional: Changing user

    Change user, if required. This example uses the local root user for remotely managing the other hosts and the local host.
    $ su -
  2. Generating the SSH key pair

    Generate a public key pair on the machine where virt-manager is used. This example uses the default key location, in the ~/.ssh/ directory.
    # ssh-keygen -t rsa
  3. Copying the keys to the remote hosts

    Remote login without a password, or with a passphrase, requires an SSH key to be distributed to the systems being managed. Use the ssh-copy-id command to copy the key to the root user at the system address provided (in the example, root@host2.example.com).
    # ssh-copy-id -i ~/.ssh/id_rsa.pub root@host2.example.com
    root@host2.example.com's password:
    Now try logging into the machine with the ssh root@host2.example.com command and check the .ssh/authorized_keys file to make sure unexpected keys have not been added.
    Repeat for other systems, as required.
  4. Optional: Add the passphrase to the ssh-agent

    The instructions below describe how to add a passphrase to an existing ssh-agent. This command will fail if the ssh-agent is not running. To avoid errors or conflicts, make sure that your SSH parameters are set correctly. Refer to the Red Hat Enterprise Linux Deployment Guide for more information.
    Add the passphrase for the SSH key to the ssh-agent, if required. On the local host, use the following command to add the passphrase (if there was one) to enable password-less login.
    # ssh-add ~/.ssh/id_rsa
    The key's passphrase is now stored by the ssh-agent, enabling password-less login to the systems that hold the public key.
The libvirt daemon (libvirtd)
The libvirt daemon provides an interface for managing virtual machines. You must have the libvirtd daemon installed and running on every remote host that needs managing.
$ ssh root@somehost
# chkconfig libvirtd on
# service libvirtd start
After libvirtd and SSH are configured you should be able to remotely access and manage your virtual machines. You should also be able to access your guests with VNC at this point.
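As a quick hedged check from the management machine, list the remote host's guests over the SSH transport (host2.example.com is a placeholder hostname):
# virsh -c qemu+ssh://root@host2.example.com/system list --all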
Accessing remote hosts with virt-manager
Remote hosts can be managed with the virt-manager GUI tool. SSH keys must belong to the user executing virt-manager for password-less login to work.
  1. Start virt-manager.
  2. Open the File->Add Connection menu.
    Add connection menu

    Figure 5.1. Add connection menu


  3. Use the drop-down menu to select the hypervisor type, click the Connect to remote host check box to open the Connection Method (in this case, Remote tunnel over SSH), enter the desired User name and Hostname, then click Connect.

5.2. Remote management over TLS and SSL

You can manage virtual machines using TLS and SSL. TLS and SSL provide greater scalability but are more complicated than SSH (refer to Section 5.1, "Remote management with SSH"). TLS and SSL is the same technology used by web browsers for secure connections. The libvirt management connection opens a TCP port for incoming connections, which is securely encrypted and authenticated based on x509 certificates. The procedures that follow provide instructions on creating and deploying authentication certificates for TLS and SSL management.

Procedure 5.1. Creating a certificate authority (CA) key for TLS management

  1. Before you begin, confirm that certtool is installed. If not, install the gnutls-utils package, which provides it:
    # yum install gnutls-utils
  2. Generate a private key, using the following command:
    # certtool --generate-privkey > cakey.pem
  3. Once the key is generated, the next step is to create a signature file so the key can be self-signed. To do this, create a file with signature details and name it ca.info. This file should contain the following:
    # vim ca.info
    cn = Name of your organization
    ca
    cert_signing_key
  4. Generate the self-signed key with the following command:
    # certtool --generate-self-signed --load-privkey cakey.pem --template ca.info --outfile cacert.pem
    Once the file is generated, the ca.info file may be deleted using the rm command. The file that results from the generation process is named cacert.pem. This file is the public key (certificate). The loaded file cakey.pem is the private key. This file should not be kept in a shared space. Keep this key private.
  5. Install the cacert.pem Certificate Authority Certificate file on all clients and servers in the /etc/pki/CA/cacert.pem directory to let them know that the certificate issued by your CA can be trusted. To view the contents of this file, run:
    # certtool -i --infile cacert.pem
    This is all that is required to set up your CA. Keep the CA's private key safe as you will need it in order to issue certificates for your clients and servers.

Procedure 5.2. Issuing a server certificate

This procedure demonstrates how to issue a certificate with the X.509 CommonName (CN) field set to the hostname of the server. The CN must match the hostname which clients will use to connect to the server. In this example, clients will connect to the server using the URI qemu://mycommonname/system, so the CN field should be identical, i.e., mycommonname.
  1. Create a private key for the server.
    # certtool --generate-privkey > serverkey.pem
  2. Generate a certificate signed with the CA's private key by first creating a template file called server.info. Make sure that the CN is set to be the same as the server's hostname:
    organization = Name of your organization
    cn = mycommonname
    tls_www_server
    encryption_key
    signing_key
  3. Create the certificate with the following command:
    # certtool --generate-certificate --load-privkey serverkey.pem --load-ca-certificate cacert.pem \
    --load-ca-privkey cakey.pem --template server.info --outfile servercert.pem
  4. This results in two files being generated:
    • serverkey.pem - The server's private key
    • servercert.pem - The server's public key
    Make sure to keep the location of the private key secret. To view the contents of the file, perform the following command:
    # certtool -i --infile servercert.pem
    When opening this file the CN= parameter should be the same as the CN that you set earlier. For example, mycommonname.
  5. Install the two files in the following locations:
    • serverkey.pem - the server's private key. Place this file in the following location: /etc/pki/libvirt/private/serverkey.pem
    • servercert.pem - the server's certificate. Install it in the following location on the server: /etc/pki/libvirt/servercert.pem
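    As a hedged illustration, mirroring the client installation step in the next procedure, the files can be copied into place on the server with:
    # cp serverkey.pem /etc/pki/libvirt/private/serverkey.pem
    # cp servercert.pem /etc/pki/libvirt/servercert.pem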

Procedure 5.3. Issuing a client certificate

  1. For every client (that is, any program linked with libvirt, such as virt-manager), you need to issue a certificate with the X.509 Distinguished Name (DN) set to a suitable name. This needs to be decided on a corporate level.
    For example purposes the following information will be used:
    C=USA,ST=North Carolina,L=Raleigh,O=Red Hat,CN=name_of_client
    This process is quite similar to Procedure 5.2, "Issuing a server certificate", with the following exceptions noted.
  2. Make a private key with the following command:
    # certtool --generate-privkey > clientkey.pem
  3. Generate a certificate signed with the CA's private key by first creating a template file called client.info. The file should contain the following (fields should be customized to reflect your region/location):
    country = USA
    state = North Carolina
    locality = Raleigh
    organization = Red Hat
    cn = client1
    tls_www_client
    encryption_key
    signing_key
  4. Sign the certificate with the following command:
    # certtool --generate-certificate --load-privkey clientkey.pem --load-ca-certificate cacert.pem \
    --load-ca-privkey cakey.pem --template client.info --outfile clientcert.pem
  5. Install the certificates on the client machine:
    # cp clientkey.pem /etc/pki/libvirt/private/clientkey.pem
    # cp clientcert.pem /etc/pki/libvirt/clientcert.pem

5.3. Transport modes

For remote management, libvirt supports the following transport modes:
Transport Layer Security (TLS)
Transport Layer Security TLS 1.0 (SSL 3.1) authenticated and encrypted TCP/IP socket, usually listening on a public port number. To use this you will need to generate client and server certificates. The standard port is 16514.
UNIX sockets
UNIX domain sockets are only accessible on the local machine. Sockets are not encrypted, and use UNIX permissions or SELinux for authentication. The standard socket names are /var/run/libvirt/libvirt-sock and /var/run/libvirt/libvirt-sock-ro (for read-only connections).
SSH
Transported over a Secure Shell protocol (SSH) connection. Requires Netcat (the nc package) installed. The libvirt daemon (libvirtd) must be running on the remote machine. Port 22 must be open for SSH access. You should use some sort of SSH key management (for example, the ssh-agent utility) or you will be prompted for a password.
ext
The ext parameter is used for any external program which can make a connection to the remote machine by means outside the scope of libvirt. This parameter is unsupported.
TCP
Unencrypted TCP/IP socket. Not recommended for production use, this is normally disabled, but an administrator can enable it for testing or use over a trusted network. The default port is 16509.
The default transport, if no other is specified, is TLS.
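The TLS and TCP listeners are configured in the libvirt daemon's own configuration files. The excerpt below is a minimal sketch for Red Hat Enterprise Linux 6; the option names shown are assumptions to be verified against the comments in the files shipped with your libvirt version:
# /etc/libvirt/libvirtd.conf (excerpt)
listen_tls = 1        # TLS listener on port 16514; requires the certificates from Section 5.2
listen_tcp = 1        # unencrypted TCP listener on port 16509; testing or trusted networks only
auth_tcp = "sasl"     # require SASL authentication on the unencrypted transport
# /etc/sysconfig/libvirtd - the daemon must also be started with --listen
LIBVIRTD_ARGS="--listen"
After editing these files, restart the daemon with service libvirtd restart.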
Remote URIs
A Uniform Resource Identifier (URI) is used by virsh and libvirt to connect to a remote host. URIs can also be used with the --connect parameter for the virsh command to execute single commands or migrations on remote hosts.
libvirt URIs take the general form (content in square brackets, "[]", represents optional functions):
driver[+transport]://[username@][hostname][:port]/[path][?extraparameters]
The transport method or the hostname must be provided to target an external location.

Examples of remote management parameters

  • Connect to a remote KVM host named host2, using SSH transport and the SSH username virtuser.
    qemu+ssh://virtuser@host2/
  • Connect to a remote KVM hypervisor on the host named host2 using TLS.
    qemu://host2/
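Either form of URI can also be passed to virsh with the --connect option; for example, a hedged usage sketch listing the guests on host2 over SSH:
# virsh --connect qemu+ssh://virtuser@host2/system list --all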

Testing examples

  • Connect to the local KVM hypervisor with a non-standard UNIX socket. The full path to the UNIX socket is supplied explicitly in this case.
    qemu+unix:///system?socket=/opt/libvirt/run/libvirt/libvirt-sock
  • Connect to the libvirt daemon with an unencrypted TCP/IP connection to the server with the IP address 10.1.1.10 on port 5000. This uses the test driver with default settings.
    test+tcp://10.1.1.10:5000/default
Extra URI parameters
Extra parameters can be appended to remote URIs. Table 5.1, "Extra URI parameters" below covers the recognized parameters. All other parameters are ignored. Note that parameter values must be URI-escaped (that is, a question mark (?) is appended before the first parameter and special characters are converted into the URI format).

Table 5.1. Extra URI parameters

name (all transport modes)
The name passed to the remote virConnectOpen function. The name is normally formed by removing transport, hostname, port number, username and extra parameters from the remote URI, but in certain very complex cases it may be better to supply the name explicitly. Example usage: name=qemu:///system
command (ssh and ext)
The external command. For ext transport this is required. For ssh the default is ssh. The PATH is searched for the command. Example usage: command=/opt/openssh/bin/ssh
socket (unix and ssh)
The path to the UNIX domain socket, which overrides the default. For ssh transport, this is passed to the remote netcat command (see netcat). Example usage: socket=/opt/libvirt/run/libvirt/libvirt-sock
netcat (ssh)
The netcat command can be used to connect to remote systems. The default netcat parameter uses the nc command. For SSH transport, libvirt constructs an SSH command using the form below:
command -p port [-l username] hostname netcat -U socket
The port, username and hostname parameters can be specified as part of the remote URI. The command, netcat and socket come from other extra parameters. Example usage: netcat=/opt/netcat/bin/nc
no_verify (tls)
If set to a non-zero value, this disables client checks of the server's certificate. Note that to disable server checks of the client's certificate or IP address you must change the libvirtd configuration. Example usage: no_verify=1
no_tty (ssh)
If set to a non-zero value, this stops ssh from asking for a password if it cannot log in to the remote machine automatically (for using ssh-agent or similar). Use this when you do not have access to a terminal, for example in graphical programs which use libvirt. Example usage: no_tty=1
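As a hedged usage sketch, an extra parameter is appended after a question mark; quote the URI so the shell does not interpret the ? character:
# virsh -c 'qemu+ssh://root@host2/system?no_tty=1' list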

Chapter 6. Overcommitting with KVM

6.1. Introduction

The KVM hypervisor supports overcommitting CPUs and overcommitting memory. Overcommitting is allocating more virtualized CPUs or memory than there are physical resources on the system. With CPU overcommitting, under-utilized virtualized servers or desktops can run on fewer servers, saving power, cooling, and investment in server hardware.
As most processes do not access 100% of their allocated memory all the time, KVM can use this behavior to its advantage and allocate more memory for guests than the host has physically available, in a process called overcommitting of resources.

Important

Overcommitting is not an ideal solution for all memory issues; the recommended method to deal with memory shortage is to allocate less memory per guest so that the sum of all guests' memory (plus 4GB for the host operating system) is lower than the host's physical memory. If the guests need more memory, then increase the guests' swap. If, however, you decide to overcommit, do so with caution.
Guest virtual machines running on a KVM hypervisor do not have dedicated blocks of physical RAM assigned to them. Instead, each guest virtual machine functions as a host Linux process where the host Linux kernel allocates memory only when requested. In addition the host's memory manager can move the guest's memory between its own physical memory and swap space. This is why overcommitting requires allotting sufficient swap space on the host physical machine to accommodate all guest virtual machines as well as all physical host processes. As a basic rule, the host operating system requires a maximum of 4GB of memory along with a minimum of 4GB of swap space. Refer to Example 6.1, "Memory overcommit example" for more information.
Red Hat Knowledgebase has an article on safely and efficiently determining the size of the swap partition.

Note

The example below is provided as a guide for configuring swap only. The settings listed may not be appropriate for your environment.

Example 6.1. Memory overcommit example

This example demonstrates how to calculate swap space for overcommitting. Although it may appear to be simple in nature, the ramifications of overcommitting should not be ignored. Refer to Important before proceeding.
ExampleServer1 has 32GB of physical RAM. The system is being configured to run 50 guests, each requiring 1GB of virtualized memory. As mentioned above, the host system itself needs a maximum of 4GB (apart from the guests) as well as an additional 4GB as a swap space minimum.
The swap space is calculated as follows:
  • Calculate the amount of memory needed for the sum of all the guests - In this example: (50 guests * 1GB of memory per guest) = 50GB
  • Add the guest memory amount to the amount needed for the host OS and for the host minimum swap space - In this example: 50GB guest memory + 4GB host OS + 4GB minimal swap = 58GB
  • Subtract the amount of physical RAM on the system from this total - In this example: 58GB - 32GB = 26GB
  • The result is the amount of swap space that needs to be allocated - In this example, 26GB; a quick shell check of this arithmetic is shown below
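The same arithmetic can be checked quickly in a shell, using the values from this example:
# echo "Swap needed: $(( 50*1 + 4 + 4 - 32 )) GB"
Swap needed: 26 GB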

Note

Overcommitting does not work with all guests, but has been found to work in a desktop virtualization setup with minimal intensive usage or running several identical guests with KSM. It should be noted that configuring swap and memory overcommit is not a simple plug-in and configure formula, as each environment and setup is different. Proceed with caution before changing these settings and make sure you completely understand your environment and setup before making any changes.
For more information on KSM and overcommitting, refer to Chapter 7, KSM.

6.2. Overcommitting virtualized CPUs

The KVM hypervisor supports overcommitting virtualized CPUs. Virtualized CPUs can be overcommitted as far as load limits of guests allow. Use caution when overcommitting VCPUs as loads near 100% may cause dropped requests or unusable response times.
Virtualized CPUs are overcommitted best when each guest only has a single VCPU. The Linux scheduler is very efficient with this type of load. KVM should safely support guests with loads under 100% at a ratio of five VCPUs per physical core. Overcommitting single VCPU guests is not an issue.
You cannot overcommit symmetric multiprocessing guests with more VCPUs than the physical number of processing cores. For example, a guest with four VCPUs should not be run on a host with a dual core processor. Overcommitting symmetric multiprocessing guests beyond the physical number of processing cores will cause significant performance degradation.
Assigning guest VCPUs up to the number of physical cores is appropriate and works as expected. For example, running guests with four VCPUs on a quad core host. Guests with loads under 100% should function effectively in this setup.
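As a hedged sanity check before assigning VCPUs, compare the number of logical processors the host reports with a guest's VCPU allocation (guest1 is a placeholder name):
# grep -c ^processor /proc/cpuinfo
# virsh vcpucount guest1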

Important

Do not overcommit memory or CPUs in a production environment without extensive testing. Applications which use 100% of memory or processing resources may become unstable in overcommitted environments. Test before deploying.

Chapter 7. KSM

The concept of shared memory is common in modern operating systems. For example, when a program is first started it shares all of its memory with the parent program. When either the child or parent program tries to modify this memory, the kernel allocates a new memory region, copies the original contents and allows the program to modify this new region. This is known as copy on write.
KSM is a new Linux feature which uses this concept in reverse. KSM enables the kernel to examine two or more already running programs and compare their memory. If any memory regions or pages are identical, KSM reduces multiple identical memory pages to a single page. This page is then marked copy on write. If the contents of the page are modified by a guest, a new page is created for that guest.
This is useful for virtualization with KVM. When a guest is started, it only inherits the memory from the parent qemu-kvm process. Once the guest is running, the contents of the guest operating system image can be shared when guests are running the same operating system or applications. KSM identifies and merges only identical pages, so it does not interfere with the guest or impact the security of the host or the guests. KSM allows KVM to request that these identical guest memory regions be shared.
KSM provides enhanced memory speed and utilization. With KSM, common process data is stored in cache or in main memory. This reduces cache misses for the KVM guests which can improve performance for some applications and operating systems. Secondly, sharing memory reduces the overall memory usage of guests which allows for higher densities and greater utilization of resources.
Red Hat Enterprise Linux uses two separate methods for controlling KSM:
  • The ksm service starts and stops the KSM kernel thread.
  • The ksmtuned service controls and tunes the ksm service, dynamically managing same-page merging. The ksmtuned service starts ksm and stops the ksm service if memory sharing is not necessary. The ksmtuned service must be told with the retune parameter to run when new guests are created or destroyed.
Both of these services are controlled with the standard service management tools.
The KSM service
The ksm service is included in the qemu-kvm package. KSM is off by default on Red Hat Enterprise Linux 6. When using Red Hat Enterprise Linux 6 as a KVM host, however, it is likely turned on by the ksm/ksmtuned services.
When the ksm service is not started, KSM shares only 2000 pages. This default is low and provides limited memory saving benefits.
When the ksm service is started, KSM will share up to half of the host system's main memory. Start the ksm service to enable KSM to share more memory.
# service ksm start
Starting ksm:  [  OK  ]
The ksm service can be added to the default startup sequence. Make the ksm service persistent with the chkconfig command.
# chkconfig ksm on
The KSM tuning service
The ksmtuned service does not have any options. The ksmtuned service loops and adjusts ksm. The ksmtuned service is notified by libvirt when a guest is created or destroyed.
# service ksmtuned start
Starting ksmtuned: [  OK  ]
The ksmtuned service can be tuned with the retune parameter. The retune parameter instructs ksmtuned to run tuning functions manually.
The /etc/ksmtuned.conf file is the configuration file for the ksmtuned service. The file output below is the default ksmtuned.conf file.
# Configuration file for ksmtuned.

# How long ksmtuned should sleep between tuning adjustments
# KSM_MONITOR_INTERVAL=60

# Millisecond sleep between ksm scans for 16Gb server.
# Smaller servers sleep more, bigger sleep less.
# KSM_SLEEP_MSEC=10

# KSM_NPAGES_BOOST=300
# KSM_NPAGES_DECAY=-50
# KSM_NPAGES_MIN=64
# KSM_NPAGES_MAX=1250

# KSM_THRES_COEF=20
# KSM_THRES_CONST=2048

# uncomment the following to enable ksmtuned debug information
# LOGFILE=/var/log/ksmtuned
# DEBUG=1
KSM variables and monitoring
KSM stores monitoring data in the /sys/kernel/mm/ksm/ directory. Files in this directory are updated by the kernel and are an accurate record of KSM usage and statistics.
The variables in the list below are also configurable variables in the /etc/ksmtuned.conf file as noted below.

The /sys/kernel/mm/ksm/ files

full_scans
Number of times all mergeable memory areas have been scanned.
pages_shared
Total number of shared pages in use.
pages_sharing
Number of page references currently sharing those pages.
pages_to_scan
Number of pages to scan before ksmd goes to sleep.
pages_unshared
Number of pages that are unique but repeatedly checked for merging.
pages_volatile
Number of volatile pages (pages changing too fast to be merged).
run
Whether the KSM kernel thread is running.
sleep_millisecs
Number of milliseconds ksmd sleeps between scans.
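Each of these files contains a single value and can be read directly. For example, to print all of them at once (a minimal sketch):
# grep . /sys/kernel/mm/ksm/*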
KSM tuning activity is stored in the /var/log/ksmtuned log file if the DEBUG=1 line is added to the /etc/ksmtuned.conf file. The log file location can be changed with the LOGFILE parameter. Changing the log file location is not advised and may require special configuration of SELinux settings.
Deactivating KSM
KSM has a performance overhead which may be too large for certain environments or host systems.
KSM can be deactivated by stopping the ksmtuned and ksm services. Stopping the services deactivates KSM, but the change does not persist after restarting.
# service ksmtuned stop
Stopping ksmtuned: [  OK  ]
# service ksm stop
Stopping ksm:  [  OK  ]
Persistently deactivate KSM with the chkconfig command. To turn off the services, run the following commands:
# chkconfig ksm off
# chkconfig ksmtuned off

Important

Ensure the swap size is sufficient for the committed RAM even with KSM. KSM reduces the RAM usage of identical or similar guests. Overcommitting guests with KSM without sufficient swap space may be possible but is not recommended because guest memory use can result in pages becoming unshared.

Chapter 8. Advanced virtualization administration

This chapter covers advanced administration tools for fine tuning and controlling guests and host system resources.

8.1. Control Groups (cgroups)

Red Hat Enterprise Linux 6 provides a new kernel feature: control groups, which are often referred to as cgroups. Cgroups allow you to allocate resources such as CPU time, system memory, network bandwidth, or combinations of these resources among user-defined groups of tasks (processes) running on a system. You can monitor the cgroups you configure, deny cgroups access to certain resources, and even reconfigure your cgroups dynamically on a running system.
The cgroup functionality is fully supported by libvirt. By default, libvirt puts each guest into a separate control group for various controllers (such as memory, cpu, blkio, device).
When a guest is started, it is already in a cgroup. The only configuration that may be required is the setting of policies on the cgroups. Refer to the Red Hat Enterprise Linux Resource Management Guide for more information on cgroups.
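To inspect the cgroup-backed scheduler tunables that libvirt exposes for a guest, the virsh schedinfo command can be used. A minimal sketch, assuming a running guest with the placeholder name guest1:
# virsh schedinfo guest1
The cpu_shares value reported corresponds to the guest's setting in the cpu controller.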

8.2. Hugepage support

Introduction
x86 CPUs usually address memory in 4kB pages, but they are capable of using larger pages known as hugepages. KVM guests can be deployed with hugepage memory support in order to improve performance by increasing CPU cache hits against the Translation Lookaside Buffer (TLB). Hugepages can significantly increase performance, particularly for large memory and memory-intensive workloads. Red Hat Enterprise Linux 6 is able to more effectively manage large amounts of memory by increasing the page size through the use of hugepages.
By using hugepages for a KVM guest, less memory is used for page tables and TLB (Translation Lookaside Buffer) misses are reduced, thereby significantly increasing performance, especially for memory-intensive situations.
Transparent Hugepage Support is a kernel feature that reduces TLB entries needed for an application. By also allowing all free memory to be used as cache, performance is increased.
Using Transparent Hugepage Support
To use Transparent Hugepage Support, no special configuration in the qemu.conf file is required. Hugepages are used by default if /sys/kernel/mm/redhat_transparent_hugepage/enabled is set to always.
Transparent Hugepage Support does not prevent the use of hugetlbfs. However, when hugetlbfs is not used, KVM will use transparent hugepages instead of the regular 4kB page size.
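To confirm that transparent hugepages are active on a Red Hat Enterprise Linux 6 host, inspect the current setting and usage; a hedged sketch:
# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled
# grep AnonHugePages /proc/meminfo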