Deployment Guide

Chapter 26. The kdump Crash Recovery Service

When the kdump crash dumping mechanism is enabled, the system is booted from the context of another kernel. This second kernel reserves a small amount of memory and its only purpose is to capture the core dump image in case the system crashes. Being able to analyze the core dump significantly helps to determine the exact cause of the system failure, and it is therefore strongly recommended to have this feature enabled. This chapter explains how to configure, test, and use the kdump service in Red Hat Enterprise Linux, and provides a brief overview of how to analyze the resulting core dump using the crash debugging utility.

26.1. Installing the kdump Service

In order to use the kdump service on your system, make sure you have the kexec-tools package installed. To do so, type the following at a shell prompt as root:

yum install kexec-tools
26.2. Configuring the kdump Service

There are three common means of configuring the kdump service: at the first boot, using the Kernel Dump Configuration graphical utility, and doing so manually on the command line.

A limitation in the current implementation of the Intel IOMMU driver can occasionally prevent the kdump service from capturing the core dump image. To use kdump on Intel architectures reliably, it is advised that the IOMMU support is disabled.

26.2.1. Configuring kdump at First Boot

When the system boots for the first time, the firstboot application is launched to guide the user through the initial configuration of the freshly installed system. To configure kdump, navigate to the Kdump section and follow the instructions below.

Unless the system has enough memory, this option will not be available. For information on minimum memory requirements, refer to the Required minimums section of the Red Hat Enterprise Linux Technology capabilities and limits comparison chart. When the kdump crash recovery is enabled, the minimum memory requirements increase by the amount of memory reserved for it. This value is determined by the user, and defaults to 128 MB plus 64 MB for each TB of physical memory (that is, a total of 192 MB for a system with 1 TB of physical memory).

26.2.1.1. Enabling the Service

To allow the kdump daemon to start at boot time, select the Enable kdump? checkbox. This will enable the service for runlevels 2, 3, 4, and 5, and start it for the current session. Similarly, unselecting the checkbox will disable it for all runlevels and stop the service immediately.

26.2.1.2. Configuring the Memory Usage

To configure the amount of memory that is reserved for the kdump kernel, click the up and down arrow buttons next to the Kdump Memory field to increase or decrease the value. Notice that the Usable System Memory field changes accordingly, showing you the remaining memory that will be available to the system.

26.2.2. Using the Kernel Dump Configuration Utility

To start the Kernel Dump Configuration utility, select it from the panel menu, or type system-config-kdump at a shell prompt. You will be presented with a window as shown in Figure 26.1, "Basic Settings".

The utility allows you to configure kdump as well as to enable or disable starting the service at boot time. When you are done, click Apply to save the changes. A system reboot will be requested, and unless you are already authenticated, you will be prompted to enter the superuser password.

Unless the system has enough memory, the utility will not start and you will be presented with an error message. For information on minimum memory requirements, refer to the Required minimums section of the Red Hat Enterprise Linux Technology capabilities and limits comparison chart. When the kdump crash recovery is enabled, the minimum memory requirements increase by the amount of memory reserved for it. This value is determined by the user, and defaults to 128 MB plus 64 MB for each TB of physical memory (that is, a total of 192 MB for a system with 1 TB of physical memory).

26.2.2.1. Enabling the Service

To start the kdump daemon at boot time, click the Enable button on the toolbar. This will enable the service for runlevels 2, 3, 4, and 5, and start it for the current session. Similarly, clicking the Disable button will disable it for all runlevels and stop the service immediately.

26.2.2.2. The Basic Settings Tab

The Basic Settings tab enables you to configure the amount of memory that is reserved for the kdump kernel. To do so, select the Manual kdump memory settings radio button, and click the up and down arrow buttons next to the New kdump Memory field to increase or decrease the value. Notice that the Usable Memory field changes accordingly, showing you the remaining memory that will be available to the system.

26.2.2.3. The Target Settings Tab

The Target Settings tab enables you to specify the target location for the vmcore dump. It can be either stored as a file in a local file system, written directly to a device, or sent over a network using the NFS (Network File System) or SSH (Secure Shell) protocol.

To save the dump to the local file system, select the Local filesystem radio button. Optionally, you can customize the settings by choosing a different partition from the Partition pulldown list, and a target directory from the Path pulldown list.

To write the dump directly to a device, select the Raw device radio button, and choose the desired target device from the pulldown list next to it.

To store the dump to a remote machine, select the Network radio button. To use the NFS protocol, select the NFS radio button, and fill the Server name and Path to directory fields. To use the SSH protocol, select the SSH radio button, and fill the Server name, Path to directory, and User name fields with the remote server address, target directory, and a valid remote user name respectively. Refer to Chapter 12, OpenSSH for information on how to configure an SSH server and how to set up key-based authentication.

Table 26.1. Supported kdump targets

Type | Supported Targets | Unsupported Targets
---|---|---
Raw device | All locally attached raw disks and partitions. | -
Local file system | ext2, ext3, ext4, minix, btrfs and xfs file systems on directly attached disk drives, hardware RAID logical drives, LVM devices, and mdraid arrays. | Any local file system not explicitly listed as supported in this table, including the auto type (automatic file system detection).
Remote directory | Remote directories accessed using the NFS or SSH protocol over IPv4. | Remote directories on the rootfs file system accessed using the NFS protocol.
Remote directory | Remote directories accessed using the iSCSI protocol using iBFT (iSCSI Boot Firmware Table). | Remote directories accessed using the iSCSI protocol over software initiators, unless iBFT is utilized.
Remote directory | Remote directories accessed using the iSCSI protocol over hardware initiators. | Multipath-based storage.
Remote directory | - | Remote directories accessed over IPv6.
Remote directory | - | Remote directories accessed using the SMB/CIFS protocol.
Remote directory | - | Remote directories accessed using the FCoE (Fibre Channel over Ethernet) protocol.
Remote directory | - | Remote directories accessed using wireless network interfaces.
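For reference, each of these target types maps onto a directive in the /etc/kdump.conf configuration file when configuring manually, as described later in Section 26.2.3.2. The fragment below is only an illustration of the directive forms (the device names and hosts are placeholder examples, not values from your system):

```
path /var/crash                    # directory for vmcore on the local file system
#ext4 /dev/sda3                    # dump to a dedicated local partition instead
#raw /dev/sda5                     # or write directly to a raw device
#net my.server.com:/export/tmp     # or send the dump over NFS
#net user@my.server.com            # or send the dump over SSH
```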
26.2.2.4. The Filtering Settings Tab

The Filtering Settings tab enables you to select the filtering level for the vmcore dump. To exclude the zero page, cache page, cache private, user data, or free page from the dump, select the checkbox next to the appropriate label.

26.2.2.5. The Expert Settings Tab

The Expert Settings tab enables you to choose which kernel and initial RAM disk to use, as well as to customize the options that are passed to the kernel and the core collector program. To use a different initial RAM disk, select the Custom initrd radio button, and choose the desired RAM disk from the pulldown list next to it. To capture a different kernel, select the Custom kernel radio button, and choose the desired kernel image from the pulldown list on the right. To adjust the list of options that are passed to the kernel at boot time, edit the content of the Edited text field. Note that you can always revert your changes by clicking the Refresh button.

To choose what action to perform when kdump fails to create a core dump, select an appropriate option from the Default action pulldown list. Available options are mount rootfs and run /sbin/init (the default action), reboot (to reboot the system), shell (to present a user with an interactive shell prompt), halt (to halt the system), and poweroff (to power the system off).

26.2.3. Configuring kdump on the Command Line

26.2.3.1. Configuring the Memory Usage

To configure the amount of memory that is reserved for the kdump kernel, as root, open the /boot/grub/grub.conf file in a text editor and add the crashkernel=<size>M (or crashkernel=auto) parameter to the list of kernel options as shown in Example 26.1, "A sample /boot/grub/grub.conf file".

Unless the system has enough memory, the kdump crash recovery service will not be operational. For information on minimum memory requirements, refer to the Required minimums section of the Red Hat Enterprise Linux Technology capabilities and limits comparison chart.
When kdump is enabled, the minimum memory requirements increase by the amount of memory reserved for it. This value is determined by the user, and when the crashkernel=auto option is used, it defaults to 128 MB plus 64 MB for each TB of physical memory (that is, a total of 192 MB for a system with 1 TB of physical memory). In Red Hat Enterprise Linux 6, crashkernel=auto only reserves memory if the system has 4 GB of physical memory or more.

Example 26.1. A sample /boot/grub/grub.conf file

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/sda3
#          initrd /initrd
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-220.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-220.el6.x86_64 ro root=/dev/sda3 crashkernel=128M
        initrd /initramfs-2.6.32-220.el6.x86_64.img

26.2.3.2. Configuring the Target Type

When a kernel crash is captured, the core dump can be either stored as a file in a local file system, written directly to a device, or sent over a network using the NFS (Network File System) or SSH (Secure Shell) protocol. Only one of these options can be set at a time, and the default is to store the vmcore file in the /var/crash/ directory of the local file system. To change this, as root, open the /etc/kdump.conf configuration file in a text editor and edit the options as described below.

To change the local directory in which the core dump is to be saved, remove the hash sign ("#") from the beginning of the #path /var/crash line, and replace the value with the desired directory path.
Optionally, if you wish to write the file to a different partition, follow the same procedure with the #ext4 /dev/sda3 line as well, and change both the file system type and the device (a device name, a file system label, and UUID are all supported) accordingly. For example:

ext3 /dev/sda4
path /usr/local/cores

To write the dump directly to a device, remove the hash sign ("#") from the beginning of the #raw /dev/sda5 line, and replace the value with the desired device name. For example:

raw /dev/sdb1

To store the dump to a remote machine using the NFS protocol, remove the hash sign ("#") from the beginning of the #net my.server.com:/export/tmp line, and replace the value with a valid hostname and directory path. For example:

net penguin.example.com:/export/cores

To store the dump to a remote machine using the SSH protocol, remove the hash sign ("#") from the beginning of the #net user@my.server.com line, and replace the value with a valid username and hostname. For example:

net john@penguin.example.com

Refer to Chapter 12, OpenSSH for information on how to configure an SSH server and how to set up key-based authentication.

26.2.3.3. Configuring the Core Collector

To reduce the size of the vmcore dump file, kdump allows you to specify an external application (that is, a core collector) to compress the data, and optionally leave out all irrelevant information. Currently, the only fully supported core collector is makedumpfile.

To enable the core collector, as root, open the /etc/kdump.conf configuration file in a text editor, remove the hash sign ("#") from the beginning of the #core_collector makedumpfile -c --message-level 1 -d 31 line, and edit the command line options as described below.

To enable the dump file compression, add the -c parameter. For example:

core_collector makedumpfile -c

To remove certain pages from the dump, add the -d value parameter, where value is a sum of the values of the pages you want to omit as described in Table 26.2, "Supported filtering levels".
For example, to remove both zero and free pages, use the following:

core_collector makedumpfile -d 17 -c

Refer to the manual page for makedumpfile for a complete list of available options.

Table 26.2. Supported filtering levels

Option | Description
---|---
1 | Zero pages
2 | Cache pages
4 | Cache private
8 | User pages
16 | Free pages
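Because the -d value is simply the sum of the levels in Table 26.2, it can be computed mechanically. The short sketch below is illustrative only (it is not part of kexec-tools); it sums the levels for omitting zero, cache, cache private, and free pages, that is, everything except user pages:

```shell
# Sum the filtering levels to omit: zero (1), cache (2),
# cache private (4), and free (16) pages.
sum=0
for level in 1 2 4 16; do
  sum=$((sum + level))
done
echo "core_collector makedumpfile -d $sum -c"
# -> core_collector makedumpfile -d 23 -c
```

The default line in /etc/kdump.conf uses -d 31, which is the sum of all five levels (1+2+4+8+16), omitting every filterable page type.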
26.2.3.4. Changing the Default Action

By default, when kdump fails to create a core dump, the root file system is mounted and /sbin/init is run. To change this behavior, as root, open the /etc/kdump.conf configuration file in a text editor, remove the hash sign ("#") from the beginning of the #default shell line, and replace the value with a desired action as described in Table 26.3, "Supported actions".

Table 26.3. Supported actions

Option | Description
---|---
reboot | Reboot the system, losing the core in the process.
halt | Halt the system.
poweroff | Power off the system.
shell | Run the msh session from within the initramfs, allowing a user to record the core manually.
For example:

default halt

26.2.3.5. Enabling the Service

To start the kdump daemon at boot time, type the following at a shell prompt as root:

chkconfig kdump on
This will enable the service for runlevels 2, 3, 4, and 5. Similarly, typing chkconfig kdump off will disable it for all runlevels. To start the service in the current session, use the following command as root:

service kdump start
26.2.4. Testing the Configuration

The commands below will cause the kernel to crash. Use caution when following these steps, and by no means use them on a production machine.

To test the configuration, reboot the system with kdump enabled, and make sure that the service is running (refer to Section 10.3, "Running Services" for more information on how to run a service in Red Hat Enterprise Linux):

~]# service kdump status
Kdump is operational

Then type the following commands at a shell prompt:

echo 1 > /proc/sys/kernel/sysrq
echo c > /proc/sysrq-trigger
This will force the Linux kernel to crash, and the address-YYYY-MM-DD-HH:MM:SS/vmcore file will be copied to the location you have selected in the configuration (that is, to /var/crash/ by default).

26.3. Analyzing the Core Dump

To determine the cause of the system crash, you can use the crash utility, which provides an interactive prompt very similar to the GNU Debugger (GDB). This utility allows you to interactively analyze a running Linux system as well as a core dump created by netdump, diskdump, xendump, or kdump.

To analyze the vmcore dump file, you must have the crash and kernel-debuginfo packages installed. To do so, type the following commands at a shell prompt as root:

yum install crash
debuginfo-install kernel
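As a quick sanity check before invoking crash, you can construct the path where the debug vmlinux for the currently running kernel is expected (the directory layout shown is the one used by the kernel-debuginfo package; this sketch only prints the path, it does not verify that the file exists):

```shell
# Build the expected debug vmlinux path for the running kernel.
ver=$(uname -r)
vmlinux="/usr/lib/debug/lib/modules/$ver/vmlinux"
echo "$vmlinux"
```

Remember that the vmlinux you pass to crash must match the kernel that crashed, which is not necessarily the one currently running.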
26.3.1. Running the crash Utility

To start the utility, type the command in the following form at a shell prompt:

crash /var/crash/<timestamp>/vmcore /usr/lib/debug/lib/modules/<kernel>/vmlinux
Note that the kernel version should be the same as the one that was captured by kdump. To find out which kernel you are currently running, use the uname -r command.

Example 26.2. Running the crash utility

~]# crash /usr/lib/debug/lib/modules/2.6.32-69.el6.i686/vmlinux \
/var/crash/127.0.0.1-2010-08-25-08:45:02/vmcore

crash 5.0.0-23.el6
Copyright (C) 2002-2010  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb (GDB) 7.0
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "i686-pc-linux-gnu"...

      KERNEL: /usr/lib/debug/lib/modules/2.6.32-69.el6.i686/vmlinux
    DUMPFILE: /var/crash/127.0.0.1-2010-08-25-08:45:02/vmcore  [PARTIAL DUMP]
        CPUS: 4
        DATE: Wed Aug 25 08:44:47 2010
      UPTIME: 00:09:02
LOAD AVERAGE: 0.00, 0.01, 0.00
       TASKS: 140
    NODENAME: hp-dl320g5-02.lab.bos.redhat.com
     RELEASE: 2.6.32-69.el6.i686
     VERSION: #1 SMP Tue Aug 24 10:31:45 EDT 2010
     MACHINE: i686  (2394 Mhz)
      MEMORY: 8 GB
       PANIC: "Oops: 0002 [#1] SMP " (check log for details)
         PID: 5591
     COMMAND: "bash"
        TASK: f196d560  [THREAD_INFO: ef4da000]
         CPU: 2
       STATE: TASK_RUNNING (PANIC)

crash>

26.3.2. Displaying the Message Buffer

To display the kernel message buffer, type the log command at the interactive prompt.

Example 26.3. Displaying the kernel message buffer

crash> log
... several lines omitted ...
EIP: 0060:[<c068124f>] EFLAGS: 00010096 CPU: 2
EIP is at sysrq_handle_crash+0xf/0x20
EAX: 00000063 EBX: 00000063 ECX: c09e1c8c EDX: 00000000
ESI: c0a09ca0 EDI: 00000286 EBP: 00000000 ESP: ef4dbf24
 DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
Process bash (pid: 5591, ti=ef4da000 task=f196d560 task.ti=ef4da000)
Stack: c068146b c0960891 c0968653 00000003 00000000 00000002 efade5c0 c06814d0
<0> fffffffb c068150f b7776000 f2600c40 c0569ec4 ef4dbf9c 00000002 b7776000
<0> efade5c0 00000002 b7776000 c0569e60 c051de50 ef4dbf9c f196d560 ef4dbfb4
Call Trace:
 [<c068146b>] ? __handle_sysrq+0xfb/0x160
 [<c06814d0>] ? write_sysrq_trigger+0x0/0x50
 [<c068150f>] ? write_sysrq_trigger+0x3f/0x50
 [<c0569ec4>] ? proc_reg_write+0x64/0xa0
 [<c0569e60>] ? proc_reg_write+0x0/0xa0
 [<c051de50>] ? vfs_write+0xa0/0x190
 [<c051e8d1>] ? sys_write+0x41/0x70
 [<c0409adc>] ? syscall_call+0x7/0xb
Code: a0 c0 01 0f b6 41 03 19 d2 f7 d2 83 e2 03 83 e0 cf c1 e2 04 09 d0 88 41 03 f3 c3 90 c7 05 c8 1b 9e c0 01 00 00 00 0f ae f8 89 f6 <c6> 05 00 00 00 00 01 c3 89 f6 8d bc 27 00 00 00 00 8d 50 d0 83
EIP: [<c068124f>] sysrq_handle_crash+0xf/0x20 SS:ESP 0068:ef4dbf24
CR2: 0000000000000000

Type help log for more information on the command usage.

26.3.3. Displaying a Backtrace

To display the kernel stack trace, type the bt command at the interactive prompt. You can use bt pid to display the backtrace of the selected process.

Example 26.4. Displaying the kernel stack trace

crash> bt
PID: 5591  TASK: f196d560  CPU: 2  COMMAND: "bash"
 #0 [ef4dbdcc] crash_kexec at c0494922
 #1 [ef4dbe20] oops_end at c080e402
 #2 [ef4dbe34] no_context at c043089d
 #3 [ef4dbe58] bad_area at c0430b26
 #4 [ef4dbe6c] do_page_fault at c080fb9b
 #5 [ef4dbee4] error_code (via page_fault) at c080d809
    EAX: 00000063  EBX: 00000063  ECX: c09e1c8c  EDX: 00000000  EBP: 00000000
    DS:  007b      ESI: c0a09ca0  ES:  007b      EDI: 00000286  GS:  00e0
    CS:  0060      EIP: c068124f  ERR: ffffffff  EFLAGS: 00010096
 #6 [ef4dbf18] sysrq_handle_crash at c068124f
 #7 [ef4dbf24] __handle_sysrq at c0681469
 #8 [ef4dbf48] write_sysrq_trigger at c068150a
 #9 [ef4dbf54] proc_reg_write at c0569ec2
#10 [ef4dbf74] vfs_write at c051de4e
#11 [ef4dbf94] sys_write at c051e8cc
#12 [ef4dbfb0] system_call at c0409ad5
    EAX: ffffffda  EBX: 00000001  ECX: b7776000  EDX: 00000002
    DS:  007b      ESI: 00000002  ES:  007b      EDI: b7776000
    SS:  007b      ESP: bfcb2088  EBP: bfcb20b4  GS:  0033
    CS:  0073      EIP: 00edc416  ERR: 00000004  EFLAGS: 00000246

Type help bt for more information on the command usage.

26.3.4. Displaying a Process Status

To display the status of processes in the system, type the ps command at the interactive prompt. You can use ps pid to display the status of the selected process.

Example 26.5. Displaying the status of processes in the system

crash> ps
   PID    PPID  CPU   TASK    ST  %MEM     VSZ    RSS  COMM
>     0      0   0  c09dc560  RU   0.0       0      0  [swapper]
>     0      0   1  f7072030  RU   0.0       0      0  [swapper]
      0      0   2  f70a3a90  RU   0.0       0      0  [swapper]
>     0      0   3  f70ac560  RU   0.0       0      0  [swapper]
      1      0   1  f705ba90  IN   0.0    2828   1424  init
... several lines omitted ...
   5566      1   1  f2592560  IN   0.0   12876    784  auditd
   5567      1   2  ef427560  IN   0.0   12876    784  auditd
   5587   5132   0  f196d030  IN   0.0   11064   3184  sshd
>  5591   5587   2  f196d560  RU   0.0    5084   1648  bash

Type help ps for more information on the command usage.

26.3.5. Displaying Virtual Memory Information

To display basic virtual memory information, type the vm command at the interactive prompt. You can use vm pid to display information on the selected process.

Example 26.6. Displaying virtual memory information of the current context

crash> vm
PID: 5591  TASK: f196d560  CPU: 2  COMMAND: "bash"
   MM       PGD      RSS    TOTAL_VM
f19b5900  ef9c6000  1648k    5084k
  VMA      START      END    FLAGS  FILE
f1bb0310   242000   260000  8000875  /lib/ld-2.12.so
f26af0b8   260000   261000  8100871  /lib/ld-2.12.so
efbc275c   261000   262000  8100873  /lib/ld-2.12.so
efbc2a18   268000   3ed000  8000075  /lib/libc-2.12.so
efbc23d8   3ed000   3ee000  8000070  /lib/libc-2.12.so
efbc2888   3ee000   3f0000  8100071  /lib/libc-2.12.so
efbc2cd4   3f0000   3f1000  8100073  /lib/libc-2.12.so
efbc243c   3f1000   3f4000   100073
efbc28ec   3f6000   3f9000  8000075  /lib/libdl-2.12.so
efbc2568   3f9000   3fa000  8100071  /lib/libdl-2.12.so
efbc2f2c   3fa000   3fb000  8100073  /lib/libdl-2.12.so
f26af888   7e6000   7fc000  8000075  /lib/libtinfo.so.5.7
f26aff2c   7fc000   7ff000  8100073  /lib/libtinfo.so.5.7
efbc211c   d83000   d8f000  8000075  /lib/libnss_files-2.12.so
efbc2504   d8f000   d90000  8100071  /lib/libnss_files-2.12.so
efbc2950   d90000   d91000  8100073  /lib/libnss_files-2.12.so
f26afe00   edc000   edd000  4040075
f1bb0a18  8047000  8118000  8001875  /bin/bash
f1bb01e4  8118000  811d000  8101873  /bin/bash
f1bb0c70  811d000  8122000   100073
f26afae0  9fd9000  9ffa000   100073
... several lines omitted ...

Type help vm for more information on the command usage.

26.3.6. Displaying Open Files

To display information about open files, type the files command at the interactive prompt. You can use files pid to display files opened by the selected process.

Example 26.7. Displaying information about open files of the current context

crash> files
PID: 5591  TASK: f196d560  CPU: 2  COMMAND: "bash"
ROOT: /    CWD: /root
 FD    FILE     DENTRY    INODE    TYPE  PATH
  0  f734f640  eedc2c6c  eecd6048  CHR   /pts/0
  1  efade5c0  eee14090  f00431d4  REG   /proc/sysrq-trigger
  2  f734f640  eedc2c6c  eecd6048  CHR   /pts/0
 10  f734f640  eedc2c6c  eecd6048  CHR   /pts/0
255  f734f640  eedc2c6c  eecd6048  CHR   /pts/0

Type help files for more information on the command usage.

26.3.7. Exiting the Utility

To exit the interactive prompt and terminate crash, type exit or q.
Example 26.8. Exiting the crash utility

26.4. Additional Resources

26.4.1. Installed Documentation

kdump.conf(5) - a manual page for the /etc/kdump.conf configuration file containing the full documentation of available options.
makedumpfile(8) - a manual page for the makedumpfile core collector.
kexec(8) - a manual page for kexec.
crash(8) - a manual page for the crash utility.
/usr/share/doc/kexec-tools-<version>/kexec-kdump-howto.txt - an overview of the kdump and kexec installation and usage.
Appendix A. Consistent Network Device Naming

Red Hat Enterprise Linux 6 provides consistent network device naming for network interfaces. This feature changes the name of network interfaces on a system in order to make locating and differentiating the interfaces easier.

Traditionally, network interfaces in Linux are enumerated as eth[0123…], but these names do not necessarily correspond to actual labels on the chassis. Modern server platforms with multiple network adapters can encounter non-deterministic and counter-intuitive naming of these interfaces. This affects both network adapters embedded on the motherboard (LAN-on-Motherboard, or LOM) and add-in (single and multiport) adapters.

The new naming convention assigns names to network interfaces based on their physical location, whether embedded or in PCI slots. By converting to this naming convention, system administrators will no longer have to guess at the physical location of a network port, or modify each system to rename them into some consistent order.

This feature, implemented via the biosdevname program, will change the name of all embedded network interfaces, PCI card network interfaces, and virtual function network interfaces from the existing eth[0123…] to the new naming convention as shown in Table A.1, "The new naming convention".

Table A.1. The new naming convention

Device | Old Name | New Name
---|---|---
Embedded network interface (LOM) | eth[0123…] | em[1234…]
PCI card network interface | eth[0123…] | p<slot>p<ethernet port>
Virtual function | eth[0123…] | p<slot>p<ethernet port>_<virtual interface>
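To make the composition of the new names concrete, the snippet below assembles example names from a slot number, port number, and virtual function index (the values are made up for illustration, not read from hardware):

```shell
# Example values only: PCI slot 3, Ethernet port 1, SR-IOV virtual function 0.
slot=3
port=1
vf=0
pci_name="p${slot}p${port}"        # PCI card interface name
vf_name="p${slot}p${port}_${vf}"   # virtual function interface name
echo "$pci_name $vf_name"          # -> p3p1 p3p1_0
```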
System administrators may continue to write rules in /etc/udev/rules.d/70-persistent-net.rules to change the device names to anything desired; those will take precedence over this physical location naming convention.

Consistent network device naming is enabled by default for a set of Dell PowerEdge, C Series, and Precision Workstation systems. For more details regarding the impact on Dell systems, visit https://access.redhat.com/kb/docs/DOC-47318.

Regardless of the type of system, Red Hat Enterprise Linux 6 guests running under Red Hat Enterprise Linux 5 hosts will not have devices renamed, since the virtual machine BIOS does not provide SMBIOS information. Upgrades from Red Hat Enterprise Linux 6.0 to Red Hat Enterprise Linux 6.1 are unaffected, and the old eth[0123…] naming convention will continue to be used.

A.2. System Requirements

The biosdevname program uses information from the system's BIOS, specifically the type 9 (System Slot) and type 41 (Onboard Devices Extended Information) fields contained within the SMBIOS. If the system's BIOS does not have SMBIOS version 2.6 or higher and this data, the new naming convention will not be used. Most older hardware does not support this feature because of a lack of BIOSes with the correct SMBIOS version and field information. For BIOS or SMBIOS version information, contact your hardware vendor.

For this feature to take effect, the biosdevname package must also be installed. The biosdevname package is part of the base package group in Red Hat Enterprise Linux 6; all install options except the Minimal install include this package. It is not installed on upgrades of Red Hat Enterprise Linux 6.0 to Red Hat Enterprise Linux 6.1.

A.3. Enabling and Disabling the Feature

To disable the consistent network device naming on Dell systems that would normally have it on by default, pass the following option on the boot command line, both during and after installation:

biosdevname=0
To enable this feature on other system types that meet the minimum requirements (see Section A.2, "System Requirements"), pass the following option on the boot command line, both during and after installation:

biosdevname=1
Unless the system meets the minimum requirements, this option will be ignored and the system will boot with the traditional network interface name format. If the biosdevname install option is specified, it must remain as a boot option for the lifetime of the system.

A.4. Notes for Administrators

Many system customization files can include network interface names, and thus will require updates if moving a system from the old convention to the new convention. If you use the new naming convention, you will also need to update network interface names in areas such as custom iptables rules, scripts altering irqbalance, and other similar configuration files. Also, enabling this change for installation will require modification to existing kickstart files that use device names via the ksdevice parameter; these kickstart files will need to be updated to use the network device's MAC address or the network device's new name.

Red Hat strongly recommends that you consider this feature to be an install-time choice; enabling or disabling the feature post-install, while technically possible, can be complicated and is not recommended. For those system administrators who wish to do so, on a system that meets the minimum requirements, remove the /etc/udev/rules.d/70-persistent-net.rules file and the HWADDR lines from all /etc/sysconfig/network-scripts/ifcfg-* files. In addition, rename those ifcfg-* files to use this new naming convention. The new names will be in effect after reboot. Remember to update any custom scripts, iptables rules, and service configuration files that might include network interface names.
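The post-install conversion steps described above can be sketched as follows. This dry run operates on a scratch copy under /tmp rather than the live files in /etc/sysconfig/network-scripts, and assumes the embedded interface's new name is em1 (verify the name biosdevname actually assigns before renaming real files):

```shell
# Set up a scratch copy of an old-style ifcfg file to demonstrate the edits.
mkdir -p /tmp/netscripts
printf 'DEVICE=eth0\nHWADDR=00:11:22:33:44:55\nONBOOT=yes\n' \
    > /tmp/netscripts/ifcfg-eth0

# 1. Drop the HWADDR line that pins the old name to a MAC address.
sed -i '/^HWADDR=/d' /tmp/netscripts/ifcfg-eth0
# 2. Point the DEVICE directive at the new name.
sed -i 's/^DEVICE=eth0$/DEVICE=em1/' /tmp/netscripts/ifcfg-eth0
# 3. Rename the file to match the new convention.
mv /tmp/netscripts/ifcfg-eth0 /tmp/netscripts/ifcfg-em1

cat /tmp/netscripts/ifcfg-em1
```

On a real system you would also remove /etc/udev/rules.d/70-persistent-net.rules and repeat the edits for every ifcfg-* file, then reboot for the new names to take effect.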