Hypervisor Deployment Guide
The complete guide to obtaining, deploying, configuring, and maintaining the Red Hat Enterprise Virtualization Hypervisor.
Edition 4.0

Legal Notice
Copyright © 2012 Red Hat, Inc. The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution-Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law. Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. All other trademarks are the property of their respective owners.
1801 Varsity Drive
Raleigh, NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
Abstract
The Red Hat Enterprise Virtualization Hypervisor is a fully featured virtualization platform for quick, easy deployment and management of virtual machines. The Hypervisor is designed to be managed by Red Hat Enterprise Virtualization Manager. This Hypervisor Deployment Guide documents the steps required to obtain, deploy, configure, and maintain the Red Hat Enterprise Virtualization Hypervisor. The guide also provides step-by-step procedures to connect the Hypervisor with the Red Hat Enterprise Virtualization Manager, and covers advanced options to assist users with the configuration of Hypervisors in a wide variety of environments. Having read this guide you will be able to: create Hypervisor boot media; perform interactive installation of the Hypervisor; perform automated, or unattended, installation of the Hypervisor; configure the Hypervisor; attach the Hypervisor to a Red Hat Enterprise Virtualization Manager installation; and upgrade the Hypervisor as new versions become available.
Installation and configuration of the Red Hat Enterprise Virtualization Manager, other than the attachment of Hypervisors to the manager, is outside the scope of this document. For instruction on installation and configuration of the Red Hat Enterprise Virtualization Manager consult the Red Hat Enterprise Virtualization Installation Guide. This guide is intended for use by those who need to install, configure, and maintain instances of the Red Hat Enterprise Virtualization Hypervisor. A relative level of comfort in the administration of computers that run Linux based operating systems is beneficial but is not strictly required. The Red Hat Enterprise Virtualization documentation suite provides information on installation, development of applications, configuration and usage of the Red Hat Enterprise Virtualization platform and its related products. Red Hat Enterprise Virtualization - Administration Guide describes how to set up, configure and manage Red Hat Enterprise Virtualization. It assumes that you have successfully installed the Red Hat Enterprise Virtualization Manager and hosts. Red Hat Enterprise Virtualization - Evaluation Guide enables prospective customers to evaluate the features of Red Hat Enterprise Virtualization. Use this guide if you have an evaluation license. Red Hat Enterprise Virtualization - Installation Guide describes the installation prerequisites and procedures. Read this if you need to install Red Hat Enterprise Virtualization. The installation of hosts, Manager and storage are covered in this guide. You will need to refer to the Red Hat Enterprise Virtualization Administration Guide to configure the system before you can start using the platform. Red Hat Enterprise Virtualization - Manager Release Notes contain release specific information for Red Hat Enterprise Virtualization Managers. 
Red Hat Enterprise Virtualization - Power User Portal Guide describes how power users can create and manage virtual machines from the Red Hat Enterprise Virtualization User Portal. Red Hat Enterprise Virtualization - Quick Start Guide provides quick and simple instructions for first time users to set up a basic Red Hat Enterprise Virtualization environment. Red Hat Enterprise Virtualization - REST API Guide describes how to use the REST API to set up and manage virtualization tasks. Use this guide if you wish to develop systems which integrate with Red Hat Enterprise Virtualization, using an open and platform independent API. Red Hat Enterprise Virtualization - Technical Reference Guide describes the technical architecture of Red Hat Enterprise Virtualization and its interactions with existing infrastructure. Red Hat Enterprise Virtualization - User Portal Guide describes how users of the Red Hat Enterprise Virtualization system can access and use virtual desktops from the User Portal. Red Hat Enterprise Linux - Hypervisor Deployment Guide describes how to deploy and install the Hypervisor. Read this guide if you need advanced information about installing and deploying Hypervisors. The basic installation of Hypervisor hosts is also described in the Red Hat Enterprise Virtualization Installation Guide. Red Hat Enterprise Linux - V2V Guide describes importing virtual machines from KVM, Xen and VMware ESX/ESX(i) to Red Hat Enterprise Virtualization and KVM managed by libvirt.
This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information. In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set. The Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later includes the Liberation Fonts set by default.

2.1. Typographic Conventions
Four typographic conventions are used to call attention to specific words and phrases. These conventions, and the circumstances they apply to, are as follows.

Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to highlight keys and key combinations. For example: To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
The above includes a file name, a shell command and a key, all presented in mono-spaced bold and all distinguishable thanks to context. Key combinations can be distinguished from an individual key by the plus sign that connects each part of a key combination. For example: Press Enter to execute the command. Press Ctrl+Alt+F2 to switch to a virtual terminal.
The first example highlights a particular key to press. The second example highlights a key combination: a set of three keys pressed simultaneously. If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in mono-spaced bold . For example: File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialog box text; labeled buttons; check-box and radio button labels; menu titles and sub-menu titles. For example: Choose System → Preferences → Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand). To insert a special character into a gedit file, choose Applications → Accessories → Character Map from the main menu bar. Next, choose Search → Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit → Paste from the gedit menu bar.
The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a GUI interface, all presented in proportional bold and all distinguishable by context. Mono-spaced Bold Italic or Proportional Bold Italic
Whether mono-spaced bold or proportional bold, the addition of italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example: To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com. The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home. To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Note the words in bold italics above - username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system. Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example: Publican is a DocBook publishing system.
2.2. Pull-quote Conventions
Terminal output and source code listings are set off visually from the surrounding text. Output sent to a terminal is set in mono-spaced roman and presented thus:

books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs

Source-code listings are also set in mono-spaced roman but add syntax highlighting as follows:

package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[]) throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object ref = iniCtx.lookup("EchoBean");
      EchoHome home = (EchoHome) ref;
      Echo echo = home.create();
      System.out.println("Created Echo");
      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}

Finally, we use three visual styles to draw attention to information that might otherwise be overlooked. Notes are tips, shortcuts or alternative approaches to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier. Important boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring a box labeled 'Important' will not cause data loss but may cause irritation and frustration. Warnings should not be ignored. Ignoring warnings will most likely cause data loss.

If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla: https://bugzilla.redhat.com/ against the product Red Hat Enterprise Linux 6. When submitting a bug report, be sure to provide the following information. If you have a suggestion for improving the documentation, try to be as specific as possible when describing it.
If you have found an error, include the section number and some of the surrounding text so we can find it easily.

Chapter 1. Introduction
The Hypervisor is distributed as a compact image for use on a variety of installation media. It provides a minimal installation of Red Hat Enterprise Linux and includes the packages necessary to communicate with the Red Hat Enterprise Virtualization Manager. The Hypervisor is certified for use with all hardware which has passed Red Hat Enterprise Linux certification except where noted in Chapter 2, Requirements. The Hypervisor uses the Red Hat Enterprise Linux kernel and benefits from the default kernel's extensive testing, device support and flexibility.

Chapter 2. Requirements
This chapter contains all system requirements and limitations which apply to Red Hat Enterprise Virtualization Hypervisors. These requirements are determined based on present hardware and software limits as well as testing and support considerations. System requirements and limitations will vary over time due to ongoing software development and hardware improvements.

2.1. Hypervisor Requirements
Red Hat Enterprise Virtualization Hypervisors have a number of hardware requirements and supported limits.

Table 2.1. Red Hat Enterprise Virtualization Hypervisor Requirements and Supported Limits
CPU: A minimum of 1 physical CPU is required. Red Hat Enterprise Virtualization supports the use of these CPU models in virtualization hosts: AMD Opteron G1, AMD Opteron G2, AMD Opteron G3, Intel Conroe, Intel Penryn, Intel Nehalem, and Intel Westmere.
All CPUs must have support for the Intel® 64 or AMD64 CPU extensions, and must have the AMD-V™ or Intel VT® hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required.
RAM: A minimum of 2 GB of RAM is recommended. The amount of RAM required for each virtual machine varies depending on: guest operating system requirements, guest application requirements, and the memory activity and usage of the virtual machine. Additionally, KVM is able to over-commit physical RAM for virtual machines. It does this by only allocating RAM to virtual machines as required, and shifting underutilized virtual machines into swap. A maximum of 2 TB of RAM is supported.
Storage: The minimum supported internal storage for a Hypervisor is the total of the following. The root partitions require at least 512 MB of storage. The configuration partition requires at least 8 MB of storage. The recommended minimum size of the logging partition is 2048 MB. The data partition requires at least 256 MB of storage; use of a smaller data partition may prevent future upgrades of the Hypervisor from the Red Hat Enterprise Virtualization Manager. By default, all disk space remaining after allocation of swap space is allocated to the data partition. The swap partition requires at least 8 MB of storage. The recommended size of the swap partition varies depending on both the system the Hypervisor is being installed upon and the anticipated level of overcommit for the environment. Overcommit allows the Red Hat Enterprise Virtualization environment to present more RAM to virtual machines than is actually physically present. The default overcommit ratio is 0.5. The recommended size of the swap partition can be determined by multiplying the amount of system RAM by the expected overcommit ratio, and adding: 2 GB of swap space for systems with 4 GB of RAM or less, or 4 GB of swap space for systems with between 4 GB and 16 GB of RAM, or 8 GB of swap space for systems with between 16 GB and 64 GB of RAM, or 16 GB of swap space for systems with between 64 GB and 256 GB of RAM.
Example 2.1. Calculating Swap Partition Size
For a system with 8 GB of RAM, the formula for determining the amount of swap space to allocate is:
(8 GB x 0.5) + 4 GB = 8 GB
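The swap-sizing rule above can be expressed as a short shell function. This is a sketch for illustration only; the function name recommended_swap_gb and its arguments are our own, not part of any Red Hat tool.

```shell
# Illustrative sketch of the swap sizing rule described above.
# Arguments: RAM in GB, and (optionally) the expected overcommit ratio.
recommended_swap_gb() {
    ram_gb=$1
    overcommit=${2:-0.5}   # default overcommit ratio from the text
    # Base swap amount depends on the RAM bracket.
    if [ "$ram_gb" -le 4 ]; then base=2
    elif [ "$ram_gb" -le 16 ]; then base=4
    elif [ "$ram_gb" -le 64 ]; then base=8
    else base=16
    fi
    # swap = (RAM x overcommit) + base; awk handles the fractional ratio
    awk -v r="$ram_gb" -v o="$overcommit" -v b="$base" 'BEGIN { print r * o + b }'
}

recommended_swap_gb 8   # matches Example 2.1: (8 x 0.5) + 4 = 8
```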
Please note that these are the minimum storage requirements for Hypervisor installation. It is recommended to use the default allocations, which use more storage space.

PCI Devices: At least one network controller is required, with a recommended minimum bandwidth of 1 Gbps.
When the Red Hat Enterprise Virtualization Hypervisor boots, the following message may appear:
Virtualization hardware is unavailable. (No virtualization hardware was detected on this system)
This warning indicates that the virtualization extensions are either disabled or not present on your processor. Ensure that the CPU supports the listed extensions and that they are enabled in the system BIOS. To check that the processor has virtualization extensions, and that they are enabled: At the Hypervisor boot screen press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. After the last kernel parameter listed, ensure there is a space and append the rescue parameter. Press Enter to boot into rescue mode. At the prompt which appears, determine whether your processor has the virtualization extensions, and whether they are enabled, by running this command:
# grep -E 'svm|vmx' /proc/cpuinfo
If any output is shown, the processor is hardware virtualization capable. If no output is shown, it is still possible that your processor supports hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. Where you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer. As an additional check, verify that the kvm modules are loaded in the kernel:
# lsmod | grep kvm
If the output includes kvm_intel or kvm_amd, then the kvm hardware virtualization modules are loaded and your system meets the requirements.
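The grep check above can be wrapped in a small helper for scripting. This is a sketch under our own naming; the optional file argument (rather than always reading /proc/cpuinfo) exists purely to make the helper easy to demonstrate and test.

```shell
# Returns success (exit 0) if the given cpuinfo-style file lists the
# AMD (svm) or Intel (vmx) hardware virtualization flags.
has_virt_flags() {
    grep -qE 'svm|vmx' "${1:-/proc/cpuinfo}"
}

if has_virt_flags; then
    echo "hardware virtualization extensions present"
else
    echo "no svm/vmx flags found; check the BIOS settings"
fi
```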
The Red Hat Enterprise Virtualization Hypervisor does not support installation on fakeraid devices. Where a fakeraid device is present it must be reconfigured such that it no longer runs in RAID mode. Access the RAID controller's BIOS and remove all logical drives from it. Change controller mode to be non-RAID. This may be referred to as compatibility or JBOD mode.
Access the manufacturer provided documentation for further information related to the specific device in use.

2.2. Guest Requirements and Support Limits
The following requirements and support limits apply to guests that are run on the Hypervisor:

Table 2.2. Virtualized Hardware
CPU:

RAM: Different guests have different RAM requirements. The amount of RAM required for each guest varies based on the requirements of the guest operating system and the load under which the guest is operating. A number of support limits also apply. A minimum of 512 MB of virtualized RAM per guest is supported; creation of guests with less than 512 MB of virtualized RAM, while possible, is not supported. A maximum of 512 GB of virtualized RAM per 64 bit guest is supported. The supported virtualized RAM maximum for 32 bit virtual machines varies depending on the virtual machine. 32 bit virtual machines operating in standard 32 bit mode have a supported maximum of 4 GB of virtualized RAM per virtual machine; however, note that some virtualized operating systems will only use 2 GB of the supported 4 GB. 32 bit virtual machines operating in PAE (Physical Address Extension) mode have a supported maximum of 64 GB of virtualized RAM per virtual machine; however, not all virtualized operating systems can be configured to use this amount of virtualized RAM.
PCI devices: A maximum of 31 virtualized PCI devices per guest is supported. A number of system devices count against this limit, some of which are mandatory. Mandatory devices which count against the PCI devices limit include the PCI host bridge, ISA bridge, USB bridge, board bridge, graphics card, and the IDE or VirtIO block device.
Storage:
2.3. Supported Virtual Machine Operating Systems
Red Hat Enterprise Virtualization presently supports the virtualization of these guest operating systems: Red Hat Enterprise Linux 3 (32 bit and 64 bit), Red Hat Enterprise Linux 4 (32 bit and 64 bit), Red Hat Enterprise Linux 5 (32 bit and 64 bit), Red Hat Enterprise Linux 6 (32 bit and 64 bit), Windows XP Service Pack 3 and newer (32 bit only), Windows 7 (32 bit and 64 bit), Windows Server 2003 Service Pack 2 and newer (32 bit and 64 bit), Windows Server 2008 (32 bit and 64 bit), and Windows Server 2008 R2 (64 bit only).
Chapter 3. Preparing Red Hat Enterprise Virtualization Hypervisor Installation Media
This chapter covers creating installation media and preparing your systems before installing a Red Hat Enterprise Virtualization Hypervisor, and covers installing Red Hat Enterprise Virtualization Hypervisors on a local storage device. This storage device can be a removable USB storage device, an internal hard disk drive, or a solid state drive. Once the Hypervisor is installed, the system will boot the Hypervisor and all configuration data is preserved on the system.

3.1. Preparation Instructions
The rhev-hypervisor package is needed for installation of Hypervisors; it contains the Hypervisor CD-ROM image. The following procedure installs the rhev-hypervisor package. Entitlements to the Red Hat Enterprise Virtualization Hypervisor (v.6 x86_64) channel must be available on your Red Hat Network account to download the Hypervisor image. The channel's label is rhel-x86_64-server-6-rhevh.

3.1.1. Downloading and Installing the RPM Package
The Red Hat Enterprise Virtualization Hypervisor package contains additional tools for USB and PXE installations as well as the Hypervisor ISO image. You can download and install the Hypervisor either with yum (the recommended approach), or manually. In either case, the Hypervisor ISO image is installed into the /usr/share/rhev-hypervisor/ directory and named rhev-hypervisor.iso. The livecd-iso-to-disk and livecd-iso-to-pxeboot scripts are included in the livecd-tools sub-package. These scripts are installed to the /usr/bin directory. Red Hat Enterprise Linux 6.2 and later allows more than one version of the Hypervisor ISO image to be installed at one time. As such, rhev-hypervisor.iso is a symbolic link to a uniquely-named version of the Hypervisor ISO image, such as /usr/share/rhev-hypervisor/rhevh-6.2-20111006.0.el6.iso.
Different versions of the Hypervisor ISO can be installed alongside each other, allowing administrators to run and maintain a cluster on a previous version of the Hypervisor while upgrading another cluster for testing.

Procedure 3.1. Downloading and installing with yum
Subscribe to the correct channel
Subscribe to the Red Hat Enterprise Virtualization Hypervisor (v.6 x86_64) channel on Red Hat Network:
# rhn-channel --add --channel=rhel-x86_64-server-6-rhevh
To subscribe to a channel via the command line, you must have administrative credentials. Attempting to subscribe to a channel with a normal user account results in the following message:
# rhn-channel --add --channel=rhel-x86_64-server-6-rhevh
Username: xx-xx
Password:
Error communicating with server. The message was:
Error Class Code: 37
Error Class Info: You are not allowed to perform administrative tasks on this system.
Explanation: An error has occurred while processing your request. If this problem persists please enter a bug report at bugzilla.redhat.com. If you choose to submit the bug report, please be sure to include details of what you were trying to do when this error occurred and details on how to reproduce this problem.
Refer to the Red Hat Enterprise Virtualization Installation Guide, available from https://access.redhat.com/knowledge/docs/ if you need further assistance registering with Red Hat Network or subscribing to other channels related to virtualization.
Install the Hypervisor
Install the rhev-hypervisor package:
# yum install rhev-hypervisor
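Since rhev-hypervisor.iso is a symbolic link, you can confirm exactly which versioned image is installed by resolving the link. The helper below is a sketch of our own (the optional path argument exists only to make it easy to demonstrate); it is not part of the rhev-hypervisor package.

```shell
# Resolve the rhev-hypervisor.iso symlink to the versioned ISO it points
# at, e.g. /usr/share/rhev-hypervisor/rhevh-6.2-20111006.0.el6.iso
iso_target() {
    readlink -f "${1:-/usr/share/rhev-hypervisor/rhev-hypervisor.iso}"
}

iso_target
```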
Procedure 3.2. Downloading and installing manually
Install the RPM on a Red Hat Enterprise Linux system. You must log in as the root user and navigate to the location of the downloaded file to perform this step:
# yum localinstall rhev-hypervisor*.rpm

3.1.1.1. BIOS Settings and Boot Process Troubleshooting
Before installing Red Hat Enterprise Virtualization Hypervisors it is necessary to verify the BIOS is correctly configured for the chosen installation method. Many motherboard and PC manufacturers disable different booting methods in the BIOS. Most BIOS chips boot from the following devices in order: the 3.5 inch diskette drive, the CD-ROM or DVD device, then the local hard disk.
Many BIOS chips have disabled one or more of the following boot methods: USB storage devices, CD-ROMs, DVDs, or network boot. To boot from your chosen method, enable the method or device, and set that device as the first boot device in the BIOS. Most, but not all, motherboards support the boot methods described in this chapter. Consult the documentation for your motherboard or system to determine whether it is possible to use a particular boot method. BIOS settings vary between manufacturers, so any specific examples of BIOS settings may be inaccurate for some systems. Due to this inconsistency, it is necessary to review the motherboard or system manufacturer's documentation.

Procedure 3.3. Confirm Hardware Virtualization Support
Verify that your system is capable of running the Red Hat Enterprise Virtualization Hypervisor. Hypervisors require that virtualization extensions are present and enabled in the BIOS before installation proceeds. Boot the Hypervisor from removable media, for example, a USB stick or CD-ROM. Once the Hypervisor boot prompt is displayed, enter the command:
: linux rescue
Once the Hypervisor boots, verify your CPU contains the virtualization extensions with the following command:
# grep -E 'svm|vmx' /proc/cpuinfo
If any output is displayed, the processor has the hardware virtualization extensions. Verify that the KVM modules load by default:
# lsmod | grep kvm
If the output includes kvm_intel or kvm_amd, then the KVM hardware virtualization modules are loaded and the system meets the requirements.
3.2. Modifying the Hypervisor ISO
The edit-node tool allows users to make specific changes to the Hypervisor ISO to adapt the Hypervisor to the requirements of a specific environment. edit-node extracts the file system from a livecd-based ISO and modifies aspects of the image, such as user passwords, SSH keys, and package installation.

edit-node Options
--name=image_name
Specifies the name of the edited LiveISO. .edited.iso is automatically appended to the name specified.
--output=directory
Specifies the directory to which the edited ISO is saved.
--passwd=user,encrypted_password
Defines a password for the specified user. This option accepts MD5-encrypted password values. The --passwd parameter can be specified multiple times to specify multiple users. If no user is specified, the default user is admin.
--sshkey=user,public_key_file
Specifies the public key for the specified user. This option can be specified multiple times to specify keys for multiple users. If no user is specified, the default user is admin.
--install-kmod=package_name
Installs the specified driver update package from a yum repository or a specified .rpm file. Specified .rpm files are valid only if in whitelisted locations (kmod-specific areas).
--repo=repository
Specifies the yum repository to be used in conjunction with the --install-* options. The value specified can be a local directory, a yum repository file (.repo), or a driver disk .iso file.
--nogpgcheck
Skips GPG key verification during yum install.
--print-version
Prints current version information from /etc/system-release.
--print-manifests
Prints a list of manifest files within the ISO.
--print-manifest=manifest
Prints the specified manifest file to stdout.
--get-manifests=manifest
Creates a .tar file of the manifest files within the ISO.
--print-file-manifest
Prints the contents of the rootfs on the ISO to stdout.
--print-rpm-manifest
Prints a list of installed RPMs in the rootfs on the ISO to stdout.
Example 3.1. Modifying the Hypervisor ISO
The following command adds the kmod-qla2xxx package:
# edit-node --install-kmod=kmod-qla2xxx --repo kmod.repo rhev-hypervisor6.iso
The following command adds an SSH public key for the admin user:
# edit-node --sshkey=admin,keyfile.pub rhev-hypervisor6.iso

3.3. Deploying Hypervisors with PXE and tftp
Procedure 3.4. Setting up a PXE and tftp server
Identify the appropriate subnet
Identify the subnet you will need to use by running ifconfig interface. In the following example, we examine the eth0 interface, for which the appropriate subnet is 192.168.1.X:
# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:1E:C9:20:3F:6B
          inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::21e:c9ff:fe20:3f6b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4453951 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3350991 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3216624601 (2.9 GiB)  TX bytes:1188936874 (1.1 GiB)
          Interrupt:21 Memory:febe0000-fec00000
Install DHCP on the server
The client machine needs access to a DHCP server to acquire an IP address at boot. Install DHCP by executing the following command:
# yum install dhcp -y
Configure DHCP on the server
Edit the configuration file
Update the /etc/dhcp/dhcpd.conf configuration file with: the subnet you identified in Step 1; the MAC address of the client machine (the machine that needs to boot via PXE); the server's fixed IP address; and the file to be downloaded by the client machine.
# vi /etc/dhcp/dhcpd.conf
authoritative;
ddns-update-style none;
subnet 192.168.1.0 netmask 255.255.255.0 {
range dynamic-bootp 192.168.1.190 192.168.1.200;
range 192.168.1.201 192.168.1.250;
option domain-name-servers 192.168.1.101; # DHCP Server
#option domain-name "medogz.com";
option routers 192.168.1.101; # DHCP Server
option broadcast-address 192.168.1.255;
default-lease-time 600;
max-lease-time 7200;
}
# HOST - RHEV
host rhev-server1 {
hardware ethernet 00:15:C5:E0:D3:27; # MAC from machine that will boot via PXE
fixed-address 192.168.1.240; # Fixed IP address
filename "pxelinux.0"; # File that will be downloaded by the client
next-server 192.168.1.101; # DHCP Server
}
Set DHCP options
Ensure that the DHCP daemon is using the appropriate interface:
# vi /etc/sysconfig/dhcpd
# Command line options here
DHCPDARGS=eth0
Enable and start DHCP
Enable the DHCP daemon with chkconfig:
# chkconfig dhcpd on
Start the DHCP daemon with service:
# service dhcpd start
Install tftp packages
Install the tftp packages to allow trivial file transfer between the client and server:
# yum install tftp tftp-server -y
Enable and start tftp services
Enable the following services with chkconfig:
# chkconfig tftp on
# chkconfig xinetd on
Start the following service with service:
# service xinetd start
Once these services are installed, enabled, and started, you can begin preparing your installation image for use with PXE.

Procedure 3.5. Installing the Hypervisor with PXE and tftp
Install the Hypervisor
Convert the Hypervisor image for PXE
Create vmlinuz and initrd images with the livecd-iso-to-pxeboot tool. The default location of the Hypervisor ISO is /usr/share/rhev-hypervisor/rhev-hypervisor.iso.
# livecd-iso-to-pxeboot hypervisor.iso
The root=live:/rhev-hypervisor.iso parameter in pxelinux.cfg/default is a default value. If the ISO file you are using has a name other than rhev-hypervisor.iso, it must be passed when calling livecd-iso-to-pxeboot. For example, for the ISO file rhev_hypervisor_6_2.iso, use the following command:
# livecd-iso-to-pxeboot rhev_hypervisor_6_2.iso
This will produce the correct parameter, root=live:/rhev_hypervisor_6_2.iso, in pxelinux.cfg/default. This command returns the following when conversion is complete:
Your pxeboot image is complete.
Copy tftpboot/ subdirectory to /tftpboot or a subdirectory of /tftpboot.
Set up your DHCP, TFTP and PXE server to serve /tftpboot/.../pxeboot.0
Note: The initrd image contains the whole CD ISO and is consequently
very large. You will notice when pxebooting that initrd can take a
long time to download. This is normal behaviour.
Import the converted image to the tftp server
The output of the livecd-iso-to-pxeboot command is a directory called tftpboot that has the following files in it: pxelinux.0
pxelinux.cfg/default
vmlinuz0
initrd0.img
We need to import the vmlinuz and initrd files into our PXE and tftp servers. To do so, we must first determine the directory used by the TFTP service.

Locate the tftp root directory

Examine the /etc/xinetd.d/tftp configuration file to determine the directory used by the TFTP service:

# cat /etc/xinetd.d/tftp | grep -v ^#
service tftp
{
    disable         = no
    socket_type     = dgram
    protocol        = udp
    wait            = yes
    user            = root
    server          = /usr/sbin/in.tftpd
    server_args     = -s /var/lib/tftpboot
    per_source      = 11
    cps             = 100 2
    flags           = IPv4
}

In this case, the directory is /var/lib/tftpboot.

Move the output of livecd-iso-to-pxeboot

Copy the tftpboot directory output by the livecd-iso-to-pxeboot command to the location shown in the tftp configuration file:

# cp -Rpv tftpboot/* /var/lib/tftpboot/
`tftpboot/initrd0.img' -> `/var/lib/tftpboot/initrd0.img'
`tftpboot/pxelinux.0' -> `/var/lib/tftpboot/pxelinux.0'
`tftpboot/pxelinux.cfg' -> `/var/lib/tftpboot/pxelinux.cfg'
`tftpboot/pxelinux.cfg/default' -> `/var/lib/tftpboot/pxelinux.cfg/default'
`tftpboot/vmlinuz0' -> `/var/lib/tftpboot/vmlinuz0'
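After copying, a quick sanity check confirms that all four files reached the tftp root. The following is a minimal sketch; the function name check_pxe_files is illustrative:

```shell
#!/bin/sh
# Verify that the files produced by livecd-iso-to-pxeboot are present
# in the tftp root directory. Prints any missing file and returns
# non-zero if the set is incomplete.
check_pxe_files() {
    root=$1
    missing=0
    for f in pxelinux.0 pxelinux.cfg/default vmlinuz0 initrd0.img; do
        if [ ! -f "$root/$f" ]; then
            echo "missing: $root/$f"
            missing=1
        fi
    done
    return $missing
}

# Example: check the directory named in /etc/xinetd.d/tftp.
if check_pxe_files /var/lib/tftpboot; then
    echo "PXE files in place"
fi
```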
Copy the PXE boot default configuration file

The /var/lib/tftpboot/pxelinux.cfg/default file is a template configuration file containing the settings used by the PXE server to export the Hypervisor image. The default settings are:

DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
KERNEL vmlinuz0
APPEND rootflags=loop initrd=initrd0.img root=live:/rhev-hypervisor.iso rootfstype=auto ro liveimg nomodeset check rootflags=ro crashkernel=512M-2G:64M,2G-:128M elevator=deadline processor.max_cstate=1 install rhgb rd_NO_LUKS rd_NO_MD rd_NO_DM
ONERROR LOCALBOOT 0

Create a copy of the default configuration file, and name the copy with the fixed IP address of the client machine, converted to hexadecimal. Use the following command to determine the correct hexadecimal representation of the IP address, ip_address:

# gethostip -x ip_address

For example, if the IP address of your client is 192.168.1.240, the command and its output will look like this:

# gethostip -x 192.168.1.240
C0A801F0

In this case, we would name the new configuration file C0A801F0 like so:

# cp default C0A801F0

Edit your new PXE boot configuration file

Modify the new configuration file as required for your environment.

Configure the firewall

PXE booted Hypervisors rely on the PXE server passing the MAC address of the PXE interface to the kernel. This is provided via the IPAPPEND 2 parameter. You will also need to allow the relevant traffic through the firewall using iptables:

# iptables -I INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -I INPUT 1 -m mac --mac-source 00:15:C5:E0:D3:27 -j ACCEPT
# service iptables save
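If the gethostip utility (part of the syslinux package) is not available, the hexadecimal file name can be computed with printf alone: each octet of the address becomes one uppercase two-digit hexadecimal value, concatenated left to right. A minimal sketch; the function name ip_to_hex is illustrative:

```shell
#!/bin/sh
# Convert a dotted-quad IP address into the uppercase hexadecimal
# form used for pxelinux.cfg file names.
ip_to_hex() {
    # Split the address on dots and print each octet as %02X;
    # printf reuses the format string for every argument.
    echo "$1" | tr '.' ' ' | xargs printf '%02X'
}

ip_to_hex 192.168.1.240   # prints C0A801F0
echo
```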
3.3.1. Booting a Hypervisor with PXE

For network booting, the network interface card must support PXE booting. To boot a Hypervisor from a PXE server: Enter your system's BIOS. On most systems, the key or combination of keys to press is displayed shortly after the system is powered on. Usually, this key is Delete, F1, or F2. Enable network booting if network booting is disabled. Set the network interface card as the first boot device. The network interface used for the PXE boot installation must be the same interface that will be used to connect to the Manager. Boot the system. If the PXE parameters are configured correctly, an automated installation will begin. Refer to Section 4.2, "Automated Installation" for further details about the kernel parameters.
The Hypervisor is now installed. Change or disable network booting after the Hypervisor is installed to avoid overwriting the installation on each reboot (unless this is desired functionality) and to prevent certain security vulnerabilities. 3.4. Preparing a Hypervisor USB Storage DeviceThe Hypervisor can be installed from USB storage devices and solid state disks. However, the initial boot/install USB device must be a separate device from the installation target. Network booting with PXE and tftp provides the greatest flexibility and scalability. For environments where network restrictions prevent network booting, or for systems without PXE capable network interface cards, installation from local media such as CD-ROM or USB is necessary. Booting from USB storage devices is a useful alternative to booting from CD for systems without CD-ROM drives. Not all systems support booting from a USB storage device. Ensure that your system's BIOS supports booting from USB storage devices before proceeding. 3.4.1. Making a USB Storage Device into a Hypervisor Boot DeviceThis section covers creating USB storage devices that can be used to boot Hypervisors. 3.4.1.1. Using livecd-iso-to-disk to Create USB Install MediaThe livecd-iso-to-disk command installs a Hypervisor onto a USB storage device. The livecd-iso-to-disk command is part of the rhev-hypervisor package. Devices created with this command are able to boot the Hypervisor on systems which support booting via USB. The basic livecd-iso-to-disk command usage follows this structure: # livecd-iso-to-disk image device Where the device parameter is the partition name of the USB storage device to install to, and the image parameter is an ISO image of the Hypervisor. The default Hypervisor image location is /usr/share/rhev-hypervisor/rhev-hypervisor.iso . The livecd-iso-to-disk command requires devices to be formatted with the FAT or EXT3 file system. 
livecd-iso-to-disk uses a FAT or EXT3 formatted partition or block device.
USB storage devices are sometimes formatted without a partition table; in that case, use /dev/sdb, or similar, as the device name for livecd-iso-to-disk. When a USB storage device is formatted with a partition table, use /dev/sdb1, or similar, as the device name. Use the livecd-iso-to-disk command to copy the .iso file to the disk. The --format parameter formats the disk. The --reset-mbr parameter initializes the Master Boot Record (MBR). The example uses a USB storage device named /dev/sdc.

Example 3.2. Use of livecd-iso-to-disk

# livecd-iso-to-disk --format --reset-mbr /usr/share/rhev-hypervisor/rhev-hypervisor.iso /dev/sdc
Verifying image...
/usr/share/rhev-hypervisor/rhev-hypervisor.iso: eccc12a0530b9f22e5ba62b848922309
Fragment sums: 8688f5473e9c176a73f7a37499358557e6c397c9ce2dafb5eca5498fb586
Fragment count: 20
Checking: 100.0%
The media check is complete, the result is: PASS.
It is OK to use this media.
Copying live image to USB stick
Updating boot config file
Installing boot loader
syslinux: only 512-byte sectors are supported
USB stick set up as live image!
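Because --format destroys all data on the target, it is worth confirming that the chosen device is not currently mounted before running the command. A minimal sketch of such a check, assuming a Linux /proc/mounts; the function name is illustrative:

```shell
#!/bin/sh
# Refuse to proceed if the given block device (or one of its
# partitions) currently appears in /proc/mounts.
device_is_unmounted() {
    dev=$1
    if grep -q "^$dev" /proc/mounts; then
        echo "$dev (or a partition on it) is mounted; unmount it first" >&2
        return 1
    fi
    return 0
}

# Example: only write the image if /dev/sdc is not in use.
if device_is_unmounted /dev/sdc; then
    echo "safe to run livecd-iso-to-disk on /dev/sdc"
fi
```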
Red Hat Enterprise Linux 6.3 enables the use of the Unified Extensible Firmware Interface (UEFI) as a Technology Preview. Technology Preview features provide early access to upcoming product features, allowing you to test functionality and provide feedback during feature development. However, these features are not fully supported, may not be functionally complete, and are not intended for production use. Because these features are still under development, Red Hat cannot guarantee their stability. Therefore, you may not be able to upgrade seamlessly from a Technology Preview feature to a subsequent release of that feature. Additionally, if the feature does not meet standards for enterprise viability, Red Hat cannot guarantee that the Technology Preview will be released in a supported manner. Some Technology Preview features may only be available for specific hardware architectures. Using UEFI requires an additional parameter, --efi, with the livecd-iso-to-disk command in order to correctly set up and enable UEFI. The --efi parameter is used like so:

# livecd-iso-to-disk --format --efi image device
# livecd-iso-to-disk --format --efi /usr/share/rhev-hypervisor/rhev-hypervisor.iso /dev/sdc

Note that this Technology Preview is only available in Red Hat Enterprise Linux 6.3. The USB storage device (/dev/sdc) is ready to boot a Hypervisor.

3.4.1.2. Using dd to Create USB Install Media

The dd command can also be used to install a Hypervisor onto a USB storage device. Media created with the command can boot the Hypervisor on systems which support booting via USB. Red Hat Enterprise Linux provides dd as part of the coreutils package. Versions of dd are also available on a wide variety of Linux and Unix operating systems. The basic dd command usage follows this structure:

# dd if=image of=device

Where the device parameter is the device name of the USB storage device to install to, and the image parameter is an ISO image of the Hypervisor.
The default Hypervisor image location is /usr/share/rhev-hypervisor/rhev-hypervisor.iso. The dd command makes no assumptions about the format of the device, as it performs a low-level copy of the raw data in the selected image.

Procedure 3.6. Using dd to Create USB Install Media

Use the dd command to copy the .iso file to the disk. The example uses a USB storage device named /dev/sdc.

Example 3.3. Use of dd

# dd if=/usr/share/rhev-hypervisor/rhev-hypervisor.iso of=/dev/sdc
243712+0 records in
243712+0 records out
124780544 bytes (125 MB) copied, 56.3009 s, 2.2 MB/s
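Because a raw dd copy succeeds only if the first image-sized span of the device is byte-for-byte identical to the ISO, the transfer can be verified afterwards by comparing the two. A minimal sketch, assuming GNU cmp and stat; the function name verify_image_copy is illustrative:

```shell
#!/bin/sh
# Compare the image against the start of the target device.
# The device is usually larger than the image, so only the first
# image-sized number of bytes is compared (cmp -n).
verify_image_copy() {
    image=$1 device=$2
    size=$(stat -c %s "$image")   # image size in bytes
    if cmp -n "$size" "$image" "$device"; then
        echo "copy verified: $image matches $device"
    else
        echo "copy FAILED verification" >&2
        return 1
    fi
}

# Example usage after the dd command above:
# verify_image_copy /usr/share/rhev-hypervisor/rhev-hypervisor.iso /dev/sdc
```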
The dd command will overwrite all data on the device specified for the of parameter. Any existing data on the device will be destroyed. Ensure that the correct device is specified, and that it contains no valuable data, before invoking the dd command. The USB storage device (/dev/sdc) is ready to boot a Hypervisor.

Procedure 3.7. Using dd to Create USB Install Media on Systems Running Windows

As the Administrator user, run the downloaded rhsetup.exe executable. The Red Hat Cygwin installer will display. Follow the prompts to complete a standard installation of Red Hat Cygwin. The Coreutils package within the Base package group provides the dd utility; it is automatically selected for installation.

Copy the rhev-hypervisor.iso file downloaded from Red Hat Network to C:\rhev-hypervisor.iso.

As the Administrator user, run Red Hat Cygwin from the desktop. A terminal window will appear. On the Windows 7 and Windows Server 2008 platforms it is necessary to right-click the Red Hat Cygwin icon and select the Run as Administrator... option to ensure the application runs with the correct permissions.

In the terminal, run cat /proc/partitions to see the drives and partitions currently visible to the system.

Example 3.4. View of Disk Partitions Attached to System

Administrator@test /
$ cat /proc/partitions
major minor  #blocks  name
   8     0  15728640  sda
   8     1    102400  sda1
   8     2  15624192  sda2
Plug the USB storage device which is to be used as the media for the Hypervisor installation into the system. Re-run the cat /proc/partitions command and compare the output to that of the previous run. A new entry will appear which designates the USB storage device.

Example 3.5. View of Disk Partitions Attached to System

Administrator@test /
$ cat /proc/partitions
major minor  #blocks  name
   8     0  15728640  sda
   8     1    102400  sda1
   8     2  15624192  sda2
   8    16    524288  sdb
Use the dd command to copy the rhev-hypervisor.iso file to the disk. The example uses a USB storage device named /dev/sdb. Replace sdb with the correct device name for the USB storage device to be used.

Example 3.6. Use of dd Command Under Red Hat Cygwin

Administrator@test /
$ dd if=/cygdrive/c/rhev-hypervisor.iso of=/dev/sdb & pid=$!
The provided command starts the transfer in the background and saves the process identifier so that it can be used to monitor the progress of the transfer. The dd command will overwrite all data on the device specified for the of parameter. Any existing data on the device will be destroyed. Ensure that the correct device is specified, and that it contains no valuable data, before invoking the dd command. Transfer of the ISO file to the USB storage device with the version of dd included with Red Hat Cygwin can take significantly longer than the equivalent on other platforms. To check the progress of the transfer, send the USR1 signal to the process from the same terminal window in which it was started. This can be achieved by issuing the kill command as follows:

$ kill -USR1 $pid

When the transfer operation completes, the final record counts will be displayed.

Example 3.7. Result of dd Initiated Copy

210944+0 records in
210944+0 records out
108003328 bytes (108 MB) copied, 2035.82 s, 53.1 kB/s
[1]+  Done  dd if=/cygdrive/c/rhev-hypervisor.iso of=/dev/sdb
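Instead of sending USR1 by hand, a small loop can report progress periodically until the background transfer finishes. GNU dd prints its running byte counts when it receives SIGUSR1. A minimal sketch; the function name and the default interval are illustrative:

```shell
#!/bin/sh
# Send USR1 to a background dd every few seconds until it exits,
# so dd prints its running byte counts to the terminal.
watch_dd_progress() {
    pid=$1 interval=${2:-10}
    # kill -0 only tests whether the process still exists.
    while kill -0 "$pid" 2>/dev/null; do
        kill -USR1 "$pid" 2>/dev/null
        sleep "$interval"
    done
    echo "transfer process $pid has finished"
}

# Example usage after starting the copy in the background:
# dd if=/cygdrive/c/rhev-hypervisor.iso of=/dev/sdb & pid=$!
# watch_dd_progress $pid 30
```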
The USB storage device (/dev/sdb) is ready to boot a Hypervisor. 3.4.2. Booting a Hypervisor USB Storage DeviceBooting a Hypervisor from a USB storage device is similar to booting other live USB operating systems. To boot from a USB storage device: Enter the system's BIOS menu. Enable USB booting if this feature is disabled. Set the USB storage device as the first boot device. Shut down the system.
Insert the USB storage device that contains the Hypervisor boot image. Restart the system. The Hypervisor will boot automatically.
3.5. Preparing a Hypervisor from a CD-ROM or DVD

It is possible to install the Hypervisor from a CD-ROM or DVD.

3.5.1. Making a Hypervisor CD-ROM Boot Disk

Burn the Hypervisor image to a CD-ROM with the wodim command. The wodim command is part of the wodim package, which is installed on Red Hat Enterprise Linux by default. Verify that the wodim package is installed on the system.

Example 3.8. Verify Installation of wodim Package

# rpm -q wodim
wodim-1.1.9-11.el6.x86_64
If the command outputs the package version, the package is installed. If it is not listed, install wodim:

# yum install wodim

Insert a blank CD-ROM or DVD into your CD or DVD writer. Record the ISO file to the disc. The wodim command uses the following syntax:

wodim dev=device /iso/file/path/

This example uses the first CD-RW (/dev/cdrw) device available and the default Hypervisor image location, /usr/share/rhev-hypervisor/rhev-hypervisor.iso.

Example 3.9. Use of wodim Command

# wodim dev=/dev/cdrw /usr/share/rhev-hypervisor/rhev-hypervisor.iso
If no errors occurred, the Hypervisor is ready to boot. Errors sometimes occur during the recording process due to flaws on the media itself. If an error occurs, insert another writable disc and repeat the command above. The Hypervisor uses a program (isomd5sum) to verify the integrity of the installation media every time the Hypervisor is booted. If media errors are reported in the boot sequence, the CD-ROM is bad. Follow the procedure above to create a new CD-ROM or DVD. 3.5.2. Booting a Hypervisor CD-ROMTo boot from CD-ROM, insert the Hypervisor CD-ROM and then restart the computer. The Hypervisor will start to boot. If the Hypervisor does not start to boot, your BIOS may not be configured to boot from CD-ROM first, or booting from CD-ROM may be disabled.