
Storage Administration Guide

Part I. File Systems

The File Systems section starts with an explanation of file system structure, followed by an explanation of how encrypted file systems work. A technology preview section on Btrfs comes next, before the guide moves on to the various types of file systems: ext3, ext4, Global File System 2, XFS, and NFS. Finally, FS-Cache is explained.
Use the following Table of Contents to explore these File Systems sections.

Table of Contents

2. File System Structure and Maintenance
2.1. Why Share a Common Structure?
2.2. Overview of File System Hierarchy Standard (FHS)
2.2.1. FHS Organization
2.3. Special Red Hat Enterprise Linux File Locations
2.4. The /proc Virtual File System
2.5. Discard unused blocks
3. Encrypted File System
3.1. Mounting a File System as Encrypted
3.2. Additional Information
4. Btrfs
4.1. Btrfs Features
5. The Ext3 File System
5.1. Creating an Ext3 File System
5.2. Converting to an Ext3 File System
5.3. Reverting to an Ext2 File System
6. The Ext4 File System
6.1. Creating an Ext4 File System
6.2. Mounting an Ext4 File System
6.3. Resizing an Ext4 File System
6.4. Other Ext4 File System Utilities
7. Global File System 2
8. The XFS File System
8.1. Creating an XFS File System
8.2. Mounting an XFS File System
8.3. XFS Quota Management
8.4. Increasing the Size of an XFS File System
8.5. Repairing an XFS File System
8.6. Suspending an XFS File System
8.7. Backup and Restoration of XFS File Systems
8.8. Other XFS File System Utilities
9. Network File System (NFS)
9.1. How It Works
9.1.1. Required Services
9.2. pNFS
9.3. NFS Client Configuration
9.3.1. Mounting NFS File Systems using /etc/fstab
9.4. autofs
9.4.1. Improvements in autofs Version 5 over Version 4
9.4.2. autofs Configuration
9.4.3. Overriding or Augmenting Site Configuration Files
9.4.4. Using LDAP to Store Automounter Maps
9.5. Common NFS Mount Options
9.6. Starting and Stopping NFS
9.7. NFS Server Configuration
9.7.1. The /etc/exports Configuration File
9.7.2. The exportfs Command
9.7.3. Running NFS Behind a Firewall
9.7.4. Hostname Formats
9.7.5. NFS over RDMA
9.8. Securing NFS
9.8.1. NFS Security with AUTH_SYS and export controls
9.8.2. NFS security with AUTH_GSS
9.8.3. File Permissions
9.9. NFS and rpcbind
9.9.1. Troubleshooting NFS and rpcbind
9.10. NFS Support for SELinux
9.11. pNFS Support (Block, Object and File)
9.12. NFSv4
9.13. References
10. FS-Cache
10.1. Performance Guarantee
10.2. Setting Up a Cache
10.3. Using the Cache With NFS
10.3.1. Cache Sharing
10.3.2. Cache Limitations With NFS
10.4. Setting Cache Cull Limits
10.5. Statistical Information
10.6. References

Chapter 2. File System Structure and Maintenance

2.1. Why Share a Common Structure?

The file system structure is the most basic level of organization in an operating system. Almost all of the ways an operating system interacts with its users, applications, and security model are dependent on how the operating system organizes files on storage devices. Providing a common file system structure ensures users and programs can access and write files.
File systems break files down into two logical categories:
  • Shareable vs. unsharable files
  • Variable vs. static files
Shareable files can be accessed locally and by remote hosts; unsharable files are only available locally. Variable files, such as documents, can be changed at any time; static files, such as binaries, do not change without an action from the system administrator.
Categorizing files in this manner helps correlate the function of each file with the permissions assigned to the directories which hold them. How the operating system and its users interact with a file determines the directory in which it is placed, whether that directory is mounted with read-only or read/write permissions, and the level of access each user has to that file. The top level of this organization is crucial: access to the underlying directories can be restricted; otherwise, security problems could arise if access rules do not adhere to a rigid structure from the top level down.

2.2. Overview of File System Hierarchy Standard (FHS)

Red Hat Enterprise Linux uses the Filesystem Hierarchy Standard (FHS) file system structure, which defines the names, locations, and permissions for many file types and directories.
The FHS document is the authoritative reference to any FHS-compliant file system, but the standard leaves many areas undefined or extensible. This section is an overview of the standard and a description of the parts of the file system not covered by the standard.
The two most important elements of FHS compliance are:
  • Compatibility with other FHS-compliant systems
  • The ability to mount a /usr/ partition as read-only. This is especially crucial, since /usr/ contains common executables and should not be changed by users. In addition, since /usr/ is mounted as read-only, it should be mountable from the CD-ROM drive or from another machine via a read-only NFS mount.

2.2.1. FHS Organization

The directories and files noted here are a small subset of those specified by the FHS document. Refer to the latest FHS document for the most complete information at http://www.pathname.com/fhs/.

2.2.1.1. Gathering File System Information

The df command reports the system's disk space usage. Its output looks similar to the following:

Example 2.1. Output of the df command

Filesystem                      1K-blocks    Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  11675568 6272120    4810348  57% /
/dev/sda1                          100691    9281      86211  10% /boot
none                               322856       0     322856   0% /dev/shm

By default, df shows the partition size in 1 kilobyte blocks and the amount of used/available disk space in kilobytes. To view the information in megabytes and gigabytes, use the command df -h. The -h argument stands for "human-readable" format. The output for df -h looks similar to the following:

Example 2.2. Output of the df -h command

Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  12G  6.0G  4.6G  57% /
/dev/sda1                        99M  9.1M   85M  10% /boot
none                            316M     0  316M   0% /dev/shm

Note

The mounted partition /dev/shm represents the system's virtual memory file system.
The du command displays the estimated amount of space being used by files in a directory, displaying the disk usage of each subdirectory. The last line in the output of du shows the total disk usage of the directory; to see only the total disk usage of a directory in human-readable format, use du -hs. For more options, refer to man du.
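A short illustration of the du options described above, assuming a /home directory exists (the path is illustrative). To display the usage of each subdirectory in human-readable format, with the grand total on the last line:
# du -h /home
To display only the total usage of the directory:
# du -hs /home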
To view the system's partitions and disk space usage in a graphical format, use the GNOME System Monitor by clicking on Applications > System Tools > System Monitor or using the command gnome-system-monitor. Select the File Systems tab to view the system's partitions. The figure below illustrates the File Systems tab.

Figure 2.1. GNOME System Monitor File Systems tab


2.2.1.2. The /boot/ Directory

The /boot/ directory contains static files required to boot the system, e.g. the Linux kernel. These files are essential for the system to boot properly.

Warning

Do not remove the /boot/ directory. Doing so renders the system unbootable.

2.2.1.3. The /dev/ Directory

The /dev/ directory contains device nodes that represent the following device types:
  • Devices attached to the system
  • Virtual devices provided by the kernel
These device nodes are essential for the system to function properly. The udevd daemon creates and removes device nodes in /dev/ as needed.
Devices in the /dev/ directory and subdirectories are either character (providing only a serial stream of input/output, e.g. mouse or keyboard) or block (accessible randomly, e.g. hard drive, floppy drive). If you have GNOME or KDE installed, some storage devices are automatically detected when connected (e.g. via USB) or inserted (e.g. via CD or DVD drive), and a popup window displaying the contents appears.

Table 2.1. Examples of common files in the /dev directory

File        Description
/dev/hda    The master device on the primary IDE channel.
/dev/hdb    The slave device on the primary IDE channel.
/dev/tty0   The first virtual console.
/dev/tty1   The second virtual console.
/dev/sda    The first device on the primary SCSI or SATA channel.
/dev/lp0    The first parallel port.

2.2.1.4. The /etc/ Directory

The /etc/ directory is reserved for configuration files that are local to the machine. It should contain no binaries; any binaries should be moved to /bin/ or /sbin/.
For example, the /etc/skel/ directory stores "skeleton" user files, which are used to populate a home directory when a user is first created. Applications also store their configuration files in this directory and may reference them when executed. The /etc/exports file controls which file systems to export to remote hosts.

2.2.1.5. The /lib/ Directory

The /lib/ directory should only contain libraries needed to execute the binaries in /bin/ and /sbin/. These shared library images are used to boot the system or execute commands within the root file system.

2.2.1.6. The /media/ Directory

The /media/ directory contains subdirectories used as mount points for removable media such as USB storage media, DVDs, CD-ROMs, and Zip disks.

2.2.1.7. The /mnt/ Directory

The /mnt/ directory is reserved for temporarily mounted file systems, such as NFS file system mounts. For all removable storage media, use the /media/ directory. Automatically detected removable media are mounted in the /media directory.

Note

The /mnt directory must not be used by installation programs.

2.2.1.8. The /opt/ Directory

The /opt/ directory is normally reserved for software and add-on packages that are not part of the default installation. A package that installs to /opt/ creates a directory bearing its name, e.g. /opt/packagename/. In most cases, such packages follow a predictable subdirectory structure; most store their binaries in /opt/packagename/bin/ and their man pages in /opt/packagename/man/.

2.2.1.9. The /proc/ Directory

The /proc/ directory contains special files that either extract information from the kernel or send information to it. Examples of such information include system memory, CPU information, and hardware configuration. For more information about /proc/, refer to Section 2.4, "The /proc Virtual File System".

2.2.1.10. The /sbin/ Directory

The /sbin/ directory stores binaries essential for booting, restoring, recovering, or repairing the system. The binaries in /sbin/ require root privileges to use. In addition, /sbin/ contains binaries used by the system before the /usr/ directory is mounted; any system utilities used after /usr/ is mounted are typically placed in /usr/sbin/.
At a minimum, the following programs should be stored in /sbin/:
  • arp
  • clock
  • halt
  • init
  • fsck.*
  • grub
  • ifconfig
  • mingetty
  • mkfs.*
  • mkswap
  • reboot
  • route
  • shutdown
  • swapoff
  • swapon

2.2.1.11. The /srv/ Directory

The /srv/ directory contains site-specific data served by a Red Hat Enterprise Linux system. This directory gives users the location of data files for a particular service, such as FTP, WWW, or CVS. Data that only pertains to a specific user should go in the /home/ directory.

2.2.1.12. The /sys/ Directory

The /sys/ directory utilizes the new sysfs virtual file system specific to the 2.6 kernel. With the increased support for hot plug hardware devices in the 2.6 kernel, the /sys/ directory contains information similar to that held by /proc/, but displays a hierarchical view of device information specific to hot plug devices.

2.2.1.13. The /usr/ Directory

The /usr/ directory is for files that can be shared across multiple machines. The /usr/ directory is often on its own partition and is mounted read-only. At a minimum, /usr/ should contain the following subdirectories:
  • /usr/bin, used for binaries
  • /usr/etc, used for system-wide configuration files
  • /usr/games
  • /usr/include, used for C header files
  • /usr/kerberos, used for Kerberos-related binaries and files
  • /usr/lib, used for object files and libraries that are not designed to be directly utilized by shell scripts or users
  • /usr/libexec, contains small helper programs called by other programs
  • /usr/sbin, stores system administration binaries that do not belong to /sbin/
  • /usr/share, stores files that are not architecture-specific
  • /usr/src, stores source code
  • /usr/tmp -> /var/tmp
The /usr/ directory should also contain a /local/ subdirectory. As per the FHS, this subdirectory is used by the system administrator when installing software locally, and should be safe from being overwritten during system updates. The /usr/local directory has a structure similar to /usr/, and contains the following subdirectories:
  • /usr/local/bin
  • /usr/local/etc
  • /usr/local/games
  • /usr/local/include
  • /usr/local/lib
  • /usr/local/libexec
  • /usr/local/sbin
  • /usr/local/share
  • /usr/local/src
Red Hat Enterprise Linux's usage of /usr/local/ differs slightly from the FHS. The FHS states that /usr/local/ should be used to store software that should remain safe from system software upgrades. Since the RPM Package Manager can perform software upgrades safely, it is not necessary to protect files by storing them in /usr/local/.
Instead, Red Hat Enterprise Linux uses /usr/local/ for software local to the machine. For instance, if the /usr/ directory is mounted as a read-only NFS share from a remote host, it is still possible to install a package or program under the /usr/local/ directory.

2.2.1.14. The /var/ Directory

Since the FHS requires Linux to mount /usr/ as read-only, any programs that write log files or need spool/ or lock/ directories should write them to the /var/ directory. The FHS states /var/ is for variable data, which includes spool directories/files, logging data, transient/temp files.
Below are some of the directories found within the /var/ directory:
  • /var/account/
  • /var/arpwatch/
  • /var/cache/
  • /var/crash/
  • /var/db/
  • /var/empty/
  • /var/ftp/
  • /var/gdm/
  • /var/kerberos/
  • /var/lib/
  • /var/local/
  • /var/lock/
  • /var/log/
  • /var/mail -> /var/spool/mail/
  • /var/mailman/
  • /var/named/
  • /var/nis/
  • /var/opt/
  • /var/preserve/
  • /var/run/
  • /var/spool/
  • /var/tmp/
  • /var/tux/
  • /var/www/
  • /var/yp/
System log files, such as messages and lastlog, go in the /var/log/ directory. The /var/lib/rpm/ directory contains RPM system databases. Lock files go in the /var/lock/ directory, usually in directories for the program using the file. The /var/spool/ directory has subdirectories that store data files for some programs. These subdirectories include:
  • /var/spool/at/
  • /var/spool/clientmqueue/
  • /var/spool/cron/
  • /var/spool/cups/
  • /var/spool/exim/
  • /var/spool/lpd/
  • /var/spool/mail/
  • /var/spool/mailman/
  • /var/spool/mqueue/
  • /var/spool/news/
  • /var/spool/postfix/
  • /var/spool/repackage/
  • /var/spool/rwho/
  • /var/spool/samba/
  • /var/spool/squid/
  • /var/spool/squirrelmail/
  • /var/spool/up2date/
  • /var/spool/uucp/
  • /var/spool/uucppublic/
  • /var/spool/vbox/

2.3. Special Red Hat Enterprise Linux File Locations

Red Hat Enterprise Linux extends the FHS structure slightly to accommodate special files.
Most files pertaining to RPM are kept in the /var/lib/rpm/ directory. For more information on RPM, refer to man rpm.
The /var/cache/yum/ directory contains files used by the Package Updater, including RPM header information for the system. This location may also be used to temporarily store RPMs downloaded while updating the system. For more information about Red Hat Network, refer to the documentation online at https://rhn.redhat.com/.
Another location specific to Red Hat Enterprise Linux is the /etc/sysconfig/ directory. This directory stores a variety of configuration information. Many scripts that run at boot time use the files in this directory.

2.4. The /proc Virtual File System

Unlike most file systems, /proc contains neither text nor binary files. Instead, it houses virtual files; hence, /proc is normally referred to as a virtual file system. These virtual files are typically zero bytes in size, even if they contain a large amount of information.
The /proc file system is not used for storage per se. Its main purpose is to provide a file-based interface to hardware, memory, running processes, and other system components. You can retrieve real-time information on many system components by viewing the corresponding /proc file. Some of the files within /proc can also be manipulated (by both users and applications) to configure the kernel.
The following /proc files are relevant in managing and monitoring system storage:
/proc/devices
Displays various character and block devices currently configured
/proc/filesystems
Lists all file system types currently supported by the kernel
/proc/mdstat
Contains current information on multiple-disk or RAID configurations on the system, if they exist
/proc/mounts
Lists all mounts currently used by the system
/proc/partitions
Contains partition block allocation information
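For example, two of these virtual files can be inspected directly with cat, which simply prints their current contents:
# cat /proc/filesystems
# cat /proc/partitions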
For more information about the /proc file system, refer to the Red Hat Enterprise Linux 6 Deployment Guide.

2.5. Discard unused blocks

Batch discard and online discard operations are features of mounted file systems that discard blocks which are not in use by the file system. They are useful for both solid-state drives and thinly-provisioned storage.
Batch discard operations are run explicitly by the user with the fstrim command. This command discards all unused blocks in a file system that match the user's criteria. Both operation types are supported for use with the XFS and ext4 file systems in Red Hat Enterprise Linux 6.2 and later as long as the block device underlying the file system supports physical discard operations. Physical discard operations are supported if the value of /sys/block/device/queue/discard_max_bytes is not zero.
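A minimal sketch of a batch discard, assuming a file system mounted at /mnt/data on device sda (both names are illustrative). First confirm that the underlying device supports discard, then run fstrim verbosely:
# cat /sys/block/sda/queue/discard_max_bytes
# fstrim -v /mnt/data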
Online discard operations are specified at mount time with the -o discard option (either in /etc/fstab or as part of the mount command), and run in realtime without user intervention. Online discard operations only discard blocks that are transitioning from used to free. Online discard operations are supported on ext4 file systems in Red Hat Enterprise Linux 6.2 and later, and on XFS file systems in Red Hat Enterprise Linux 6.4 and later.
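An illustrative /etc/fstab entry that enables online discard, assuming an ext4 file system on /dev/sda2 mounted at /mnt/data (device and mount point are assumptions):
/dev/sda2  /mnt/data  ext4  defaults,discard  0 2
The same effect can be achieved at the command line:
# mount -o discard /dev/sda2 /mnt/data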
Red Hat recommends batch discard operations unless the system's workload is such that batch discard is not feasible, or online discard operations are necessary to maintain performance.

Chapter 3.  Encrypted File System

Red Hat Enterprise Linux 6 now supports eCryptfs, a "pseudo-file system" which provides data and filename encryption on a per-file basis. The term "pseudo-file system" refers to the fact that eCryptfs does not have an on-disk format; rather, it is a file system layer that resides on top of an actual file system. The eCryptfs layer provides encryption capabilities.
eCryptfs works like a bind mount, as it intercepts file operations that write to the underlying (i.e. encrypted) file system. The eCryptfs layer adds a header to the metadata of files in the underlying file system. This metadata describes the encryption for that file, and eCryptfs encrypts file data before it is passed to the encrypted file system. Optionally, eCryptfs can also encrypt filenames.
eCryptfs is not an on-disk file system; as such, there is no need to create it via tools such as mkfs. Instead, eCryptfs is initiated by issuing a special mount command. To manage file systems protected by eCryptfs, the ecryptfs-utils package must be installed first.
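On a registered system with access to the appropriate repository, the package can be installed with yum:
# yum install ecryptfs-utils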

3.1. Mounting a File System as Encrypted

The easiest way to encrypt a file system with eCryptfs and mount it is interactively. To start this process, execute the following command:
# mount -t ecryptfs /source /destination
Encrypting a directory hierarchy (i.e. /source) with eCryptfs means mounting it to a mount point encrypted by eCryptfs (i.e. /destination). All file operations to /destination will be passed encrypted to the underlying /source file system. In some cases, however, it may be possible for a file operation to modify /source directly without passing through the eCryptfs layer; this could lead to inconsistencies.
This is why for most environments, Red Hat recommends that both /source and /destination be identical. For example:
# mount -t ecryptfs /home /home
This effectively means encrypting a file system and mounting it on itself. Doing so helps ensure that all file operations to /home pass through the eCryptfs layer.
During the interactive encryption/mount process, mount will allow the following settings to be configured:
  • Encryption key type; openssl, tspi, or passphrase. When choosing passphrase, mount will ask for one.
  • Cipher; aes, blowfish, des3_ede, cast6, or cast5.
  • Key bytesize; 16, 24, or 32.
  • Whether or not plaintext passthrough is enabled.
  • Whether or not filename encryption is enabled.
After the last step of an interactive mount, mount will display all the selections made and perform the mount. This output consists of the command-line option equivalents of each chosen setting. For example, when mounting /home with a key type of passphrase, an aes cipher, and a key bytesize of 16, with both plaintext passthrough and filename encryption disabled, the output would be:
Attempting to mount with the following options:
  ecryptfs_unlink_sigs
  ecryptfs_key_bytes=16
  ecryptfs_cipher=aes
  ecryptfs_sig=c7fed37c0a341e19
Mounted eCryptfs
The options in this display can then be passed directly to the command line to encrypt and mount a file system using the same configuration. To do so, use each option as an argument to the -o option of mount. For example:
# mount -t ecryptfs /home /home -o ecryptfs_unlink_sigs \
  ecryptfs_key_bytes=16 ecryptfs_cipher=aes ecryptfs_sig=c7fed37c0a341e19 [2]

3.2. Additional Information

For more information on eCryptfs and its mount options, refer to man ecryptfs (provided by the ecryptfs-utils package). The following Kernel document (provided by the kernel-doc package) also provides additional information on eCryptfs:
/usr/share/doc/kernel-doc-version/Documentation/filesystems/ecryptfs.txt


[2] This is a single command split into multiple lines, to accommodate printed and PDF versions of this document. All concatenated lines - preceded by the backslash (\) - should be treated as one command, sans backslashes.

Chapter 4. Btrfs

Btrfs is a new local file system under active development. It aims to provide better performance and scalability which will in turn benefit users.

Note

Btrfs is not a production quality file system at this point. With Red Hat Enterprise Linux 6 it is at a tech preview stage and as such is only being built for Intel 64 and AMD64.

4.1. Btrfs Features

Several utilities are built in to Btrfs to provide ease of administration for system administrators.
Built-in System Rollback
File system snapshots, which are quick and easy to create, make it simple to roll a system back to a prior, known-good state if something goes wrong.
Built-in Compression
This makes saving space easier.
Checksum Functionality
This improves error detection.
Specific features include integrated LVM operations, such as:
  • dynamic, online addition or removal of new storage devices
  • internal support for RAID across the component devices
  • the ability to use different RAID levels for meta or user data
  • full checksum functionality for all meta and user data.

Chapter 5. The Ext3 File System

The ext3 file system is essentially an enhanced version of the ext2 file system. These improvements provide the following advantages:
Availability
After an unexpected power failure or system crash (also called an unclean system shutdown), each mounted ext2 file system on the machine must be checked for consistency by the e2fsck program. This is a time-consuming process that can delay system boot time significantly, especially with large volumes containing a large number of files. During this time, any data on the volumes is unreachable.
It is possible to run fsck -n on a live filesystem. However, it will not make any changes and may give misleading results if partially written metadata is encountered.
If LVM is used in the stack, another option is to take an LVM snapshot of the filesystem and run fsck on it instead.
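A hedged sketch of the snapshot approach, assuming a logical volume LogVol00 in volume group VolGroup00 with free space available in the volume group (all names and sizes are illustrative):
# lvcreate --size 1G --snapshot --name fsck_snap /dev/VolGroup00/LogVol00
# fsck -n /dev/VolGroup00/fsck_snap
# lvremove /dev/VolGroup00/fsck_snap
The snapshot presents a frozen, consistent image of the filesystem, so the check does not interfere with the live volume.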
Finally, there is the option to remount the filesystem as read only. All pending metadata updates (and writes) are then forced to the disk prior to the remount. This ensures the filesystem is in a consistent state, provided there is no previous corruption. It is now possible to run fsck -n.
The journaling provided by the ext3 file system means that this sort of file system check is no longer necessary after an unclean system shutdown. The only time a consistency check occurs using ext3 is in certain rare hardware failure cases, such as hard drive failures. The time to recover an ext3 file system after an unclean system shutdown does not depend on the size of the file system or the number of files; rather, it depends on the size of the journal used to maintain consistency. The default journal size takes about a second to recover, depending on the speed of the hardware.

Note

The only journaling mode in ext3 supported by Red Hat is data=ordered (default).
Data Integrity
The ext3 file system prevents loss of data integrity in the event that an unclean system shutdown occurs. The ext3 file system allows you to choose the type and level of protection that your data receives. With regard to the state of the file system, ext3 volumes are configured to keep a high level of data consistency by default.
Speed
Despite writing some data more than once, ext3 has a higher throughput in most cases than ext2 because ext3's journaling optimizes hard drive head motion. You can choose from three journaling modes to optimize speed, but doing so means trade-offs in regards to data integrity if the system was to fail.
Easy Transition
It is easy to migrate from ext2 to ext3 and gain the benefits of a robust journaling file system without reformatting. Refer to Section 5.2, "Converting to an Ext3 File System" for more on how to perform this task.
The Red Hat Enterprise Linux 7 version of ext3 features the following update:
Unified extN driver
Red Hat Enterprise Linux 7 provides a unified extN driver. It does this by disabling the ext2 and ext3 configurations and using ext4.ko for these on-disk formats. With this change, kernel messages will all refer to ext4, regardless of the ext file system used.

5.1. Creating an Ext3 File System

After installation, it is sometimes necessary to create a new ext3 file system. For example, if you add a new disk drive to the system, you may want to partition the drive and use the ext3 file system.
The steps for creating an ext3 file system are as follows:
  1. Format the partition with the ext3 file system using mkfs.
  2. Label the file system using e2label.
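An illustrative run of these two steps, assuming a newly created partition /dev/sdb1 (the device name and the label "data" are assumptions):
# mkfs -t ext3 /dev/sdb1
# e2label /dev/sdb1 data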

5.2. Converting to an Ext3 File System

The tune2fs utility allows you to convert an ext2 file system to ext3.

Note

A default installation of Red Hat Enterprise Linux uses ext4 for all file systems. However, to convert ext2 to ext3, always use the e2fsck utility to check your file system before and after using tune2fs. Before trying to convert ext2 to ext3, back up all your file systems in case any errors occur.
In addition, Red Hat recommends that you should, whenever possible, create a new ext3 file system and migrate your data to it instead of converting from ext2 to ext3.
To convert an ext2 file system to ext3, log in as root and type the following command in a terminal:
tune2fs -j block_device
where block_device contains the ext2 file system you wish to convert.
A valid block device could be one of two types of entries:
  • A mapped device - A logical volume in a volume group, for example, /dev/mapper/VolGroup00-LogVol02.
  • A static device - A traditional storage volume, for example, /dev/sdbX, where sdb is a storage device name and X is the partition number.
Issue the df command to display mounted file systems.
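For example, to convert an ext2 file system residing on the mapped device shown above (the device name is an assumption):
# tune2fs -j /dev/mapper/VolGroup00-LogVol02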

5.3. Reverting to an Ext2 File System

In order to revert to an ext2 file system, use the following procedure.
For simplicity, the sample commands in this section use the following value for the block device:
/dev/mapper/VolGroup00-LogVol02

Procedure 5.1. Revert to ext2

  1. If you wish to revert a partition from ext3 to ext2 for any reason, you must first unmount the partition by logging in as root and typing:
    # umount /dev/mapper/VolGroup00-LogVol02
  2. Next, change the file system type to ext2 by typing the following command:
    # tune2fs -O ^has_journal /dev/mapper/VolGroup00-LogVol02
  3. Check the partition for errors by typing the following command:
    # e2fsck -y /dev/mapper/VolGroup00-LogVol02
  4. Then mount the partition again as ext2 file system by typing:
    # mount -t ext2 /dev/mapper/VolGroup00-LogVol02 /mount/point
    In the above command, replace /mount/point with the mount point of the partition.

    Note

    If a .journal file exists at the root level of the partition, delete it.
You now have an ext2 partition.
If you want to permanently change the partition to ext2, remember to update the /etc/fstab file.

Chapter 6. The Ext4 File System

The ext4 file system is a scalable extension of the ext3 file system, which was the default file system of Red Hat Enterprise Linux 5. Ext4 is the default file system of Red Hat Enterprise Linux 7, and can now support files and file systems larger than 16 terabytes in size (unlike Red Hat Enterprise Linux 6 where it only supported file systems up to 16 terabytes in size). It also supports an unlimited number of sub-directories (the ext3 file system only supports up to 32,000), though once the link count exceeds 65,000 it resets to 1 and is no longer increased.

Note

As with ext3, an ext4 volume must be unmounted in order to perform an fsck. For more information, see Chapter 5, The Ext3 File System.
Main Features
Ext4 uses extents (as opposed to the traditional block mapping scheme used by ext2 and ext3), which improves performance when using large files and reduces metadata overhead for large files. In addition, ext4 also labels unallocated block groups and inode table sections accordingly, which allows them to be skipped during a file system check. This makes for quicker file system checks, which becomes more beneficial as the file system grows in size.
Allocation Features
The ext4 file system features the following allocation schemes:
  • Persistent pre-allocation
  • Delayed allocation
  • Multi-block allocation
  • Stripe-aware allocation
Because of delayed allocation and other performance optimizations, ext4's behavior of writing files to disk is different from ext3. In ext4, a program's writes to the file system are not guaranteed to be on-disk unless the program issues an fsync() call afterwards.
By default, ext3 automatically forces newly created files to disk almost immediately even without fsync(). This behavior hid bugs in programs that did not use fsync() to ensure that written data was on-disk. The ext4 file system, on the other hand, often waits several seconds to write out changes to disk, allowing it to combine and reorder writes for better disk performance than ext3.

Warning

Unlike ext3, the ext4 file system does not force data to disk on transaction commit. As such, it takes longer for buffered writes to be flushed to disk. As with any file system, use data integrity calls such as fsync() to ensure that data is written to permanent storage.
Other Ext4 Features
The ext4 file system also supports the following:
  • Extended attributes (xattr), which allows the system to associate several additional name/value pairs per file.
  • Quota journaling, which avoids the need for lengthy quota consistency checks after a crash.

    Note

    The only supported journaling mode in ext4 is data=ordered (default).
  • Subsecond timestamps

6.1. Creating an Ext4 File System

To create an ext4 file system, use the mkfs.ext4 command. In general, the default options are optimal for most usage scenarios:
# mkfs.ext4 /dev/device
Below is a sample output of this command, which displays the resulting file system geometry and features:

Example 6.1. Output of the mkfs.ext4 command

~]# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
245280 inodes, 979456 blocks
48972 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1006632960
30 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 20 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

For striped block devices (e.g. RAID5 arrays), the stripe geometry can be specified at the time of file system creation. Using proper stripe geometry greatly enhances performance of an ext4 file system.
When creating file systems on lvm or md volumes, mkfs.ext4 chooses an optimal geometry. This may also be true on some hardware RAIDs which export geometry information to the operating system.
To specify stripe geometry, use the -E option of mkfs.ext4 (that is, extended file system options) with the following sub-options:
stride=value
Specifies the RAID chunk size.
stripe-width=value
Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe.
For both sub-options, value must be specified in file system block units. For example, on a RAID device with a 64k chunk size and a 4k-block file system, the stride is 64k / 4k = 16; assuming four data disks, the stripe-width is 16 x 4 = 64. To create a file system with this geometry, use the following command:
# mkfs.ext4 -E stride=16,stripe-width=64 /dev/device
For more information about creating file systems, refer to man mkfs.ext4.

Note

It is possible to use tune2fs to enable some ext4 features on ext3 file systems, and to use the ext4 driver to mount an ext3 file system. These actions, however, are not supported in Red Hat Enterprise Linux 6, as they have not been fully tested. Because of this, Red Hat cannot guarantee consistent performance and predictable behavior for ext3 file systems converted or mounted in this way.

6.2. Mounting an Ext4 File System

An ext4 file system can be mounted with no extra options. For example:
# mount /dev/device /mount/point
The ext4 file system also supports several mount options to influence behavior. For example, the acl parameter enables access control lists, while the user_xattr parameter enables user extended attributes. To enable both options, use their respective parameters with -o, as in:
# mount -o acl,user_xattr /dev/device /mount/point
The tune2fs utility also allows administrators to set default mount options in the file system superblock. For more information on this, refer to man tune2fs.

Write Barriers

By default, ext4 uses write barriers to ensure file system integrity even when power is lost to a device with write caches enabled. For devices without write caches, or with battery-backed write caches, disable barriers using the nobarrier option, as in:
# mount -o nobarrier /dev/device /mount/point
For more information about write barriers, refer to Chapter 21, Write Barriers.

6.3. Resizing an Ext4 File System

Before growing an ext4 file system, ensure that the underlying block device is of an appropriate size to hold the file system later. Use the appropriate resizing methods for the affected block device.
An ext4 file system may be grown while mounted using the resize2fs command:
# resize2fs /dev/device size
The resize2fs command can also decrease the size of an unmounted ext4 file system:
# resize2fs /dev/device size
When resizing an ext4 file system, the resize2fs utility reads the size in units of file system block size, unless a suffix indicating a specific unit is used. The following suffixes indicate specific units:
  • s - 512 byte sectors
  • K - kilobytes
  • M - megabytes
  • G - gigabytes

Note: Size parameter

The size parameter is optional (and often redundant) when expanding. The resize2fs utility automatically expands the file system to fill all available space in the container, usually a logical volume or partition.
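For example, to grow the file system on /dev/sdb1 to 10 gigabytes using the G suffix (device and size are assumptions):
# resize2fs /dev/sdb1 10G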
For more information about resizing an ext4 file system, refer to man resize2fs.

6.4. Other Ext4 File System Utilities

Red Hat Enterprise Linux 6 also features other utilities for managing ext4 file systems:
e2fsck
Used to repair an ext4 file system. This tool checks and repairs an ext4 file system more efficiently than ext3, thanks to updates in the ext4 disk structure.
e2label
Changes the label on an ext4 file system. This tool also works on ext2 and ext3 file systems.
quota
Controls and reports on disk space (blocks) and file (inode) usage by users and groups on an ext4 file system. For more information on using quota, refer to man quota and Section 15.1, "Configuring Disk Quotas".
As demonstrated earlier in Section 6.2, "Mounting an Ext4 File System", the tune2fs utility can also adjust configurable file system parameters for ext2, ext3, and ext4 file systems. In addition, the following tools are also useful in debugging and analyzing ext4 file systems:
debugfs
Debugs ext2, ext3, or ext4 file systems.
e2image
Saves critical ext2, ext3, or ext4 file system metadata to a file.
For more information about these utilities, refer to their respective man pages.

Chapter 7. Global File System 2

The Red Hat Global File System 2 (GFS2) is a native file system that interfaces directly with the Linux kernel file system interface (VFS layer). When implemented as a cluster file system, GFS2 employs distributed metadata and multiple journals.
GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 exabyte file system. However, the current supported maximum size of a GFS2 file system is 100 TB. If your system requires GFS2 file systems larger than 100 TB, contact your Red Hat service representative.
When determining the size of your file system, you should consider your recovery needs. Running the fsck command on a very large file system can take a long time and consume a large amount of memory. Additionally, in the event of a disk or disk-subsystem failure, recovery time is limited by the speed of your backup media.
When configured in a Red Hat Cluster Suite, Red Hat GFS2 nodes can be configured and managed with Red Hat Cluster Suite configuration and management tools. Red Hat GFS2 then provides data sharing among GFS2 nodes in a Red Hat cluster, with a single, consistent view of the file system name space across the GFS2 nodes. This allows processes on different nodes to share GFS2 files in the same way that processes on the same node can share files on a local file system, with no discernible difference. For information about the Red Hat Cluster Suite, refer to Configuring and Managing a Red Hat Cluster.
A GFS2 file system must be built on a logical volume (created with LVM) that is a linear or mirrored volume. Logical volumes created with LVM in a Red Hat Cluster Suite are managed with CLVM (a cluster-wide implementation of LVM), enabled by the CLVM daemon clvmd, and running in a Red Hat Cluster Suite cluster. The daemon makes it possible to use LVM2 to manage logical volumes across a cluster, allowing all nodes in the cluster to share the logical volumes. For information on the Logical Volume Manager, see Logical Volume Manager Administration.
The gfs2.ko kernel module implements the GFS2 file system and is loaded on GFS2 cluster nodes.
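To verify that the module is available and loaded on a node (an illustrative check):
# modprobe gfs2
# lsmod | grep gfs2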

Important

For comprehensive information on the creation and configuration of GFS2 file systems in clustered and non-clustered storage, please refer to the Global File System 2 guide also provided by Red Hat.