
Storage Administration Guide

Chapter 15. Disk Quotas

Disk space can be restricted by implementing disk quotas which alert a system administrator before a user consumes too much disk space or a partition becomes full.
Disk quotas can be configured for individual users as well as user groups. This makes it possible to manage the space allocated for user-specific files (such as email) separately from the space allocated to the projects a user works on (assuming the projects are given their own groups).
In addition, quotas can be set not just to control the number of disk blocks consumed but also to control the number of inodes (data structures that contain information about files in UNIX file systems). Because inodes are used to contain file-related information, this allows control over the number of files that can be created.
The quota RPM must be installed to implement disk quotas.

15.1. Configuring Disk Quotas

To implement disk quotas, use the following steps:
  1. Enable quotas per file system by modifying the /etc/fstab file.
  2. Remount the file system(s).
  3. Create the quota database files and generate the disk usage table.
  4. Assign quota policies.
Each of these steps is discussed in detail in the following sections.

15.1.1. Enabling Quotas

As root, using a text editor, edit the /etc/fstab file.

Example 15.1. Edit /etc/fstab

For example, to use the text editor vim type the following:
# vim /etc/fstab

Add the usrquota and/or grpquota options to the file systems that require quotas:

Example 15.2. Add quotas

/dev/VolGroup00/LogVol00 /         ext3    defaults                    1 1
LABEL=/boot              /boot     ext3    defaults                    1 2
none                     /dev/pts  devpts  gid=5,mode=620              0 0
none                     /dev/shm  tmpfs   defaults                    0 0
none                     /proc     proc    defaults                    0 0
none                     /sys      sysfs   defaults                    0 0
/dev/VolGroup00/LogVol02 /home     ext3    defaults,usrquota,grpquota  1 2
/dev/VolGroup00/LogVol01 swap      swap    defaults                    0 0
. . .
In this example, the /home file system has both user and group quotas enabled.

Note

The following examples assume that a separate /home partition was created during the installation of Red Hat Enterprise Linux. The root (/) partition can be used for setting quota policies in the /etc/fstab file.

15.1.2. Remounting the File Systems

After adding the usrquota and/or grpquota options, remount each file system whose fstab entry has been modified. If the file system is not in use by any process, use one of the following methods:
  • Issue the umount command followed by the mount command to remount the file system. Refer to the man page for both umount and mount for the specific syntax for mounting and unmounting various file system types.
  • Issue the mount -o remount file-system command (where file-system is the name of the file system) to remount the file system. For example, to remount the /home file system, the command to issue is mount -o remount /home.
If the file system is currently in use, the easiest method for remounting the file system is to reboot the system.
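For example, assuming the /home file system is not in use by any process, the unmount-and-mount method in the first bullet above might look like the following (a sketch only; the mount point will differ on your system):
# umount /home
# mount /home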

15.1.3. Creating the Quota Database Files

After each quota-enabled file system is remounted, run the quotacheck command.
The quotacheck command examines quota-enabled file systems and builds a table of the current disk usage per file system. The table is then used to update the operating system's copy of disk usage. In addition, the file system's disk quota files are updated.
To create the quota files (aquota.user and aquota.group) on the file system, use the -c option of the quotacheck command.

Example 15.3. Create quota files

For example, if user and group quotas are enabled for the /home file system, create the files in the /home directory:
# quotacheck -cug /home

The -c option specifies that the quota files should be created for each file system with quotas enabled, the -u option specifies to check for user quotas, and the -g option specifies to check for group quotas.
If neither the -u nor the -g option is specified, only the user quota file is created. If only -g is specified, only the group quota file is created.
After the files are created, run the following command to generate the table of current disk usage per file system with quotas enabled:
# quotacheck -avug
The options used are as follows:
  • -a: Check all quota-enabled, locally-mounted file systems
  • -v: Display verbose status information as the quota check proceeds
  • -u: Check user disk quota information
  • -g: Check group disk quota information
After quotacheck has finished running, the quota files corresponding to the enabled quotas (user and/or group) are populated with data for each quota-enabled locally-mounted file system such as /home.

15.1.4. Assigning Quotas per User

The last step is assigning the disk quotas with the edquota command.
To configure the quota for a user, as root in a shell prompt, execute the command:
# edquota username
Perform this step for each user who needs a quota. For example, if a quota is enabled in /etc/fstab for the /home partition (/dev/VolGroup00/LogVol02 in the example below) and the command edquota testuser is executed, the following is shown in the editor configured as the default for the system:
Disk quotas for user testuser (uid 501):
  Filesystem                blocks  soft  hard  inodes  soft  hard
  /dev/VolGroup00/LogVol02  440436     0     0   37418     0     0

Note

The text editor defined by the EDITOR environment variable is used by edquota. To change the editor, set the EDITOR environment variable in your ~/.bash_profile file to the full path of the editor of your choice.
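For example, to have edquota always open vim, a line such as the following could be added to ~/.bash_profile (the path to the editor is an assumption and may differ on your system):
export EDITOR=/usr/bin/vim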
The first column is the name of the file system that has a quota enabled for it. The second column shows how many blocks the user is currently using. The next two columns are used to set soft and hard block limits for the user on the file system. The inodes column shows how many inodes the user is currently using. The last two columns are used to set the soft and hard inode limits for the user on the file system.
The hard block limit is the absolute maximum amount of disk space that a user or group can use. Once this limit is reached, no further disk space can be used.
The soft block limit defines the maximum amount of disk space that can be used. However, unlike the hard limit, the soft limit can be exceeded for a certain amount of time. That time is known as the grace period. The grace period can be expressed in seconds, minutes, hours, days, weeks, or months.
If any of the values are set to 0, that limit is not set. In the text editor, change the desired limits.

Example 15.4. Change desired limits

For example:
Disk quotas for user testuser (uid 501):
  Filesystem                blocks    soft    hard  inodes  soft  hard
  /dev/VolGroup00/LogVol02  440436  500000  550000   37418     0     0

To verify that the quota for the user has been set, use the command:
# quota username
Disk quotas for user username (uid 501):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
       /dev/sdb    1000*   1000    1000               0       0       0

15.1.5. Assigning Quotas per Group

Quotas can also be assigned on a per-group basis. For example, to set a group quota for the devel group (the group must exist prior to setting the group quota), use the command:
# edquota -g devel
This command displays the existing quota for the group in the text editor:
Disk quotas for group devel (gid 505):
  Filesystem                blocks  soft  hard  inodes  soft  hard
  /dev/VolGroup00/LogVol02  440400     0     0   37418     0     0
Modify the limits, then save the file.
To verify that the group quota has been set, use the command:
# quota -g devel

15.1.6. Setting the Grace Period for Soft Limits

If a given quota has soft limits, you can edit the grace period (i.e. the amount of time a soft limit can be exceeded) with the following command:
# edquota -t
This command works on quotas for inodes or blocks, for either users or groups.
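When invoked, edquota -t typically opens an editor session similar to the following sketch (the exact layout and the 7-day values shown are illustrative):
Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
  Filesystem                Block grace period     Inode grace period
  /dev/VolGroup00/LogVol02               7days                  7days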

Important

While other edquota commands operate on quotas for a particular user or group, the -t option operates on every file system with quotas enabled.

15.2. Managing Disk Quotas

If quotas are implemented, they require some maintenance, mostly in the form of watching to see whether the quotas are exceeded and making sure the quotas are accurate.
Of course, if users repeatedly exceed their quotas or consistently reach their soft limits, a system administrator has a few choices to make depending on what type of users they are and how much disk space impacts their work. The administrator can either help the user determine how to use less disk space or increase the user's disk quota.

15.2.1. Enabling and Disabling

It is possible to disable quotas without setting them to 0. To turn all user and group quotas off, use the following command:
# quotaoff -vaug
If neither the -u nor the -g option is specified, only the user quotas are disabled. If only -g is specified, only group quotas are disabled. The -v switch causes verbose status information to display as the command executes.
To enable quotas again, use the quotaon command with the same options.
For example, to enable user and group quotas for all file systems, use the following command:
# quotaon -vaug
To enable quotas for a specific file system, such as /home, use the following command:
# quotaon -vug /home
If neither the -u nor the -g option is specified, only the user quotas are enabled. If only -g is specified, only group quotas are enabled.

15.2.2. Reporting on Disk Quotas

Creating a disk usage report entails running the repquota utility.

Example 15.5. Output of repquota command

For example, the command repquota /home produces this output:
*** Report for user quotas on device /dev/mapper/VolGroup00-LogVol02
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --      36       0       0              4     0     0
kristin   --     540       0       0            125     0     0
testuser  --  440400  500000  550000          37418     0     0

To view the disk usage report for all (option -a) quota-enabled file systems, use the command:
# repquota -a
While the report is easy to read, a few points should be explained. The -- displayed after each user is a quick way to determine whether the block or inode limits have been exceeded. If either soft limit is exceeded, a + appears in place of the corresponding -; the first - represents the block limit, and the second represents the inode limit.
The grace columns are normally blank. If a soft limit has been exceeded, the column contains a time specification equal to the amount of time remaining on the grace period. If the grace period has expired, none appears in its place.
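For illustration, a user who has exceeded the soft block limit (but not the inode limit) might appear in the report with a + in the first position and a value in the block grace column; the figures below are hypothetical:
testuser  +-  510000  500000  550000  6days   37418     0     0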

15.2.3. Keeping Quotas Accurate

When a file system fails to unmount cleanly (due to a system crash, for example), it is necessary to run quotacheck. However, quotacheck can be run on a regular basis, even if the system has not crashed. Safe methods for periodically running quotacheck include:
Ensuring quotacheck runs on next reboot

Note: Best method for most systems

This method works best for (busy) multiuser systems which are periodically rebooted.
As root, place a shell script into the /etc/cron.daily/ or /etc/cron.weekly/ directory, or schedule one using the crontab -e command, that contains the touch /forcequotacheck command. This creates an empty forcequotacheck file in the root directory, which the system init script looks for at boot time. If it is found, the init script runs quotacheck. Afterward, the init script removes the /forcequotacheck file; thus, scheduling this file to be created periodically with cron ensures that quotacheck is run during the next reboot.
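A minimal sketch of such a script, assuming it is saved as /etc/cron.weekly/force_quotacheck (the file name is arbitrary) and made executable:
#!/bin/sh
# Create the flag file that the init scripts look for at boot time;
# if it exists, quotacheck runs during the next reboot.
touch /forcequotacheck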
For more information about cron, refer to man cron.
Running quotacheck in single user mode
An alternative way to safely run quotacheck is to boot the system into single-user mode to prevent the possibility of data corruption in quota files and run the following commands:
# quotaoff -vaug /file_system
# quotacheck -vaug /file_system
# quotaon -vaug /file_system
Running quotacheck on a running system
If necessary, it is possible to run quotacheck on a machine during a time when no users are logged in, and thus have no open files on the file system being checked. Run the command quotacheck -vaug file_system; this command will fail if quotacheck cannot remount the given file_system as read-only. Note that, following the check, the file system will be remounted read-write.

Warning

Running quotacheck on a live file system mounted read-write is not recommended due to the possibility of quota file corruption.
Refer to man cron for more information about configuring cron.

15.3. References

For more information on disk quotas, refer to the man pages of the following commands:
  • quotacheck
  • edquota
  • repquota
  • quota
  • quotaon
  • quotaoff

Chapter 16. Redundant Array of Independent Disks (RAID)

The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array to accomplish performance or redundancy goals not attainable with one large and expensive drive. This array of drives appears to the computer as a single logical storage unit or drive.

16.1. What is RAID?

RAID allows information to be spread across several disks. RAID uses techniques such as disk striping (RAID Level 0), disk mirroring (RAID Level 1), and disk striping with parity (RAID Level 5) to achieve redundancy, lower latency, increased bandwidth, and maximized ability to recover from hard disk crashes.
RAID distributes data across each drive in the array by breaking it down into consistently-sized chunks (commonly 256K or 512K, although other values are acceptable). Each chunk is then written to a hard drive in the RAID array according to the RAID level employed. When the data is read, the process is reversed, giving the illusion that the multiple drives in the array are actually one large drive.

16.2. Who Should Use RAID?

System Administrators and others who manage large amounts of data would benefit from using RAID technology. Primary reasons to deploy RAID include:
  • Enhances speed
  • Increases storage capacity using a single virtual disk
  • Minimizes data loss from disk failure

16.3. RAID Types

There are three possible RAID approaches: Firmware RAID, Hardware RAID and Software RAID.

Firmware RAID

Firmware RAID (also known as ATARAID) is a type of software RAID where the RAID sets can be configured using a firmware-based menu. The firmware used by this type of RAID also hooks into the BIOS, allowing you to boot from its RAID sets. Different vendors use different on-disk metadata formats to mark the RAID set members. The Intel Matrix RAID is a good example of a firmware RAID system.

Hardware RAID

The hardware-based array manages the RAID subsystem independently from the host. It presents a single disk per RAID array to the host.
A hardware RAID device may be internal or external to the system. Internal devices commonly consist of a specialized controller card that handles the RAID tasks transparently to the operating system. External devices commonly connect to the system via SCSI, Fibre Channel, iSCSI, InfiniBand, or another high-speed network interconnect, and present logical volumes to the system.
RAID controller cards function like a SCSI controller to the operating system, and handle all the actual drive communications. The user plugs the drives into the RAID controller (just like a normal SCSI controller) and then adds them to the RAID controller's configuration. The operating system will not be able to tell the difference.

Software RAID

Software RAID implements the various RAID levels in the kernel disk (block device) code. It offers the cheapest possible solution, as expensive disk controller cards or hot-swap chassis [4] are not required. Software RAID also works with cheaper IDE disks as well as SCSI disks. With today's faster CPUs, Software RAID also generally outperforms Hardware RAID.
The Linux kernel contains a multi-disk (MD) driver that allows the RAID solution to be completely hardware independent. The performance of a software-based array depends on the server CPU performance and load.
Here are some of the key features of the Linux software RAID stack:
  • Multi-threaded design
  • Portability of arrays between Linux machines without reconstruction
  • Backgrounded array reconstruction using idle system resources
  • Hot-swappable drive support
  • Automatic CPU detection to take advantage of certain CPU features such as streaming SIMD support
  • Automatic correction of bad sectors on disks in an array
  • Regular consistency checks of RAID data to ensure the health of the array
  • Proactive monitoring of arrays with email alerts sent to a designated email address on important events
  • Write-intent bitmaps which drastically increase the speed of resync events by allowing the kernel to know precisely which portions of a disk need to be resynced instead of having to resync the entire array
  • Resync checkpointing so that if you reboot your computer during a resync, at startup the resync will pick up where it left off and not start all over again
  • The ability to change parameters of the array after installation. For example, you can grow a 4-disk RAID5 array to a 5-disk RAID5 array when you have a new disk to add. This grow operation is done live and does not require you to reinstall on the new array.

16.4. RAID Levels and Linear Support

RAID supports various configurations, including levels 0, 1, 4, 5, 6, 10, and linear. These RAID types are defined as follows:
Level 0
RAID level 0, often called "striping," is a performance-oriented striped data mapping technique. This means the data being written to the array is broken down into strips and written across the member disks of the array, allowing high I/O performance at low inherent cost but provides no redundancy.
Many RAID level 0 implementations will only stripe the data across the member devices up to the size of the smallest device in the array. This means that if you have multiple devices with slightly different sizes, each device will get treated as though it is the same size as the smallest drive. Therefore, the common storage capacity of a level 0 array is equal to the capacity of the smallest member disk in a Hardware RAID, or the capacity of the smallest member partition in a Software RAID, multiplied by the number of disks or partitions in the array.
Level 1
RAID level 1, or "mirroring," has been used longer than any other form of RAID. Level 1 provides redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1 operates with two or more disks, and provides very good data reliability and improves performance for read-intensive applications but at a relatively high cost. [5]
The storage capacity of the level 1 array is equal to the capacity of the smallest mirrored hard disk in a Hardware RAID or the smallest mirrored partition in a Software RAID. Level 1 redundancy is the highest possible among all RAID types, with the array being able to operate with only a single disk present.
Level 4
Level 4 uses parity [6] concentrated on a single disk drive to protect data. Because the dedicated parity disk represents an inherent bottleneck on all write transactions to the RAID array, level 4 is seldom used without accompanying technologies such as write-back caching, or in specific circumstances where the system administrator is intentionally designing the software RAID device with this bottleneck in mind (such as an array that will have little to no write transactions once the array is populated with data). RAID level 4 is so rarely used that it is not available as an option in Anaconda. However, it could be created manually by the user if truly needed.
The storage capacity of Hardware RAID level 4 is equal to the capacity of the smallest member partition multiplied by the number of partitions minus one. Performance of a RAID level 4 array will always be asymmetrical, meaning reads will outperform writes. This is because writes consume extra CPU and main memory bandwidth when generating parity, and then also consume extra bus bandwidth when writing the actual data to disks because you are writing not only the data, but also the parity. Reads need only read the data and not the parity unless the array is in a degraded state. As a result, reads generate less traffic to the drives and across the busses of the computer for the same amount of data transfer under normal operating conditions.
Level 5
This is the most common type of RAID. By distributing parity across all of an array's member disk drives, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance bottleneck is the parity calculation process itself. With modern CPUs and Software RAID, that is usually not a bottleneck at all since modern CPUs can generate parity very fast. However, if you have a sufficiently large number of member devices in a software RAID5 array such that the combined aggregate data transfer speed across all devices is high enough, then this bottleneck can start to come into play.
As with level 4, level 5 has asymmetrical performance, with reads substantially outperforming writes. The storage capacity of RAID level 5 is calculated the same way as with level 4.
Level 6
This is a common level of RAID when data redundancy and preservation, and not performance, are the paramount concerns, but where the space inefficiency of level 1 is not acceptable. Level 6 uses a complex parity scheme to be able to recover from the loss of any two drives in the array. This complex parity scheme creates a significantly higher CPU burden on software RAID devices and also imposes an increased burden during write transactions. As such, level 6 is considerably more asymmetrical in performance than levels 4 and 5.
The total capacity of a RAID level 6 array is calculated similarly to RAID level 5 and 4, except that you must subtract 2 devices (instead of 1) from the device count for the extra parity storage space.
Level 10
This RAID level attempts to combine the performance advantages of level 0 with the redundancy of level 1. It also helps to alleviate some of the space wasted in level 1 arrays with more than 2 devices. With level 10, it is possible to create a 3-drive array configured to store only 2 copies of each piece of data, which then allows the overall array size to be 1.5 times the size of the smallest devices instead of only equal to the smallest device (like it would be with a 3-device, level 1 array).
The number of options available when creating level 10 arrays (as well as the complexity of selecting the right options for a specific use case) make it impractical to create during installation. It is possible to create one manually using the command line mdadm tool. For details on the options and their respective performance trade-offs, refer to man md.
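As a hedged illustration, a 3-drive level 10 array that keeps 2 copies of each piece of data (as described above) could be created with mdadm using the near layout; the device names below are placeholders:
# mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1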
Linear RAID
Linear RAID is a simple grouping of drives to create a larger virtual drive. In linear RAID, the chunks are allocated sequentially from one member drive, going to the next drive only when the first is completely filled. This grouping provides no performance benefit, as it is unlikely that any I/O operations will be split between member drives. Linear RAID also offers no redundancy and, in fact, decreases reliability; if any one member drive fails, the entire array cannot be used. The capacity is the total of all member disks.

16.5. Linux RAID Subsystems

RAID in Linux is composed of the following subsystems:

Linux Hardware RAID controller drivers

Hardware RAID controllers have no specific RAID subsystem in Linux. Because they use special RAID chipsets, hardware RAID controllers come with their own drivers; these drivers allow the system to detect the RAID sets as regular disks.

mdraid

The mdraid subsystem was designed as a software RAID solution for Linux; it is also the preferred solution for software RAID under Linux. This subsystem uses its own metadata format, generally referred to as native mdraid metadata.
mdraid also supports other metadata formats, known as external metadata. Red Hat Enterprise Linux 6 uses mdraid with external metadata to access ISW / IMSM (Intel firmware RAID) sets. mdraid sets are configured and controlled through the mdadm utility.

dmraid

Device-mapper RAID or dmraid refers to device-mapper kernel code that offers the mechanism to piece disks together into a RAID set. This same kernel code does not provide any RAID configuration mechanism.
dmraid is configured entirely in user-space, making it easy to support various on-disk metadata formats. As such, dmraid is used on a wide variety of firmware RAID implementations. dmraid also supports Intel firmware RAID, although Red Hat Enterprise Linux 6 uses mdraid to access Intel firmware RAID sets.

16.6. RAID Support in the Installer

The Anaconda installer will automatically detect any hardware and firmware RAID sets on a system, making them available for installation. Anaconda also supports software RAID using mdraid, and can recognize existing mdraid sets.
Anaconda provides utilities for creating RAID sets during installation; however, these utilities only allow partitions (as opposed to entire disks) to be members of new sets. To use an entire disk for a set, simply create a partition on it spanning the entire disk, and use that partition as the RAID set member.
When the root file system uses a RAID set, Anaconda will add special kernel command-line options to the bootloader configuration telling the initrd which RAID set(s) to activate before searching for the root file system.
For instructions on configuring RAID during installation, refer to the Red Hat Enterprise Linux 6 Installation Guide.

16.7. Configuring RAID Sets

Most RAID sets are configured during creation, typically through the firmware menu or from the installer. In some cases, you may need to create or modify RAID sets after installing the system, preferably without having to reboot the machine and enter the firmware menu to do so.
Some hardware RAID controllers allow you to configure RAID sets on-the-fly or even define completely new sets after adding extra disks. This requires the use of driver-specific utilities, as there is no standard API for this. Refer to your hardware RAID controller's driver documentation for information on this.

mdadm

The mdadm command-line tool is used to manage software RAID in Linux, i.e. mdraid. For information on the different mdadm modes and options, refer to man mdadm. The man page also contains useful examples for common operations like creating, monitoring, and assembling software RAID arrays.
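For example, a 3-disk software RAID5 set could be created and then inspected as follows (a sketch only; /dev/md0 and the member partitions are placeholders):
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# mdadm --detail /dev/md0
# cat /proc/mdstat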

dmraid

As the name suggests, dmraid is used to manage device-mapper RAID sets. The dmraid tool finds ATARAID devices using multiple metadata format handlers, each supporting various formats. For a complete list of supported formats, run dmraid -l.
As mentioned earlier in Section 16.5, "Linux RAID Subsystems", the dmraid tool cannot configure RAID sets after creation. For more information about using dmraid, refer to man dmraid.
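For example, to report the RAID sets that dmraid discovers and then activate them, the following standard dmraid options can be used (shown here as a sketch):
# dmraid -r
# dmraid -ay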

16.8. Advanced RAID Device Creation

In some cases, you may wish to install the operating system on an array that can't be created after the installation completes. Usually, this means setting up the /boot or root file system arrays on a complex RAID device; in such cases, you may need to use array options that are not supported by Anaconda. To work around this, perform the following procedure:

Procedure 16.1. Advanced RAID device creation

  1. Insert the install disk as you normally would.
  2. During the initial boot up, select Rescue Mode instead of Install or Upgrade. When the system fully boots into Rescue mode, the user will be presented with a command line terminal.
  3. From this terminal, use parted to create RAID partitions on the target hard drives. Then, use mdadm to manually create RAID arrays from those partitions using any and all settings and options available (a hedged sketch of these commands is shown after this procedure). For more information on how to do these, refer to Chapter 12, Partitions, man parted, and man mdadm.
  4. Once the arrays are created, you can optionally create file systems on the arrays as well. Refer to Section 11.2, "Overview of Supported File Systems" for basic technical information on file systems supported by Red Hat Enterprise Linux 6.
  5. Reboot the computer and this time select Install or Upgrade to install as normal. As Anaconda searches the disks in the system, it will find the pre-existing RAID devices.
  6. When asked about how to use the disks in the system, select Custom Layout and click Next. In the device listing, the pre-existing MD RAID devices will be listed.
  7. Select a RAID device, click Edit and configure its mount point and (optionally) the type of file system it should use (if you did not create one earlier) then click Done. Anaconda will perform the install to this pre-existing RAID device, preserving the custom options you selected when you created it in Rescue Mode.
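The following is a hedged sketch of step 3, creating a two-disk RAID1 array from Rescue Mode; the device names, partition boundaries, and RAID level are assumptions chosen for illustration only:
# parted /dev/sda mklabel msdos
# parted /dev/sda mkpart primary 1MiB 100%
# parted /dev/sda set 1 raid on
# parted /dev/sdb mklabel msdos
# parted /dev/sdb mkpart primary 1MiB 100%
# parted /dev/sdb set 1 raid on
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1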

Note

The limited Rescue Mode of the installer does not include man pages. Both the man mdadm and man md pages contain useful information for creating custom RAID arrays, and may be needed throughout the workaround. As such, it can be helpful to either have access to a machine with these man pages present, or to print them out prior to booting into Rescue Mode and creating your custom arrays.


[4] A hot-swap chassis allows you to remove a hard drive without having to power-down your system.
[5] RAID level 1 comes at a high cost because you write the same information to all of the disks in the array. This provides data reliability, but in a much less space-efficient manner than parity-based RAID levels such as level 5. However, this space inefficiency comes with a performance benefit: parity-based RAID levels consume considerably more CPU power in order to generate the parity, while RAID level 1 simply writes the same data more than once to the multiple RAID members with very little CPU overhead. As such, RAID level 1 can outperform the parity-based RAID levels on machines where software RAID is employed and CPU resources on the machine are consistently taxed with operations other than RAID activities.
[6] Parity information is calculated based on the contents of the rest of the member disks in the array. This information can then be used to reconstruct data when one disk in the array fails. The reconstructed data can then be used to satisfy I/O requests to the failed disk before it is replaced and to repopulate the failed disk after it has been replaced.

Chapter 17. Using the mount Command

On Linux, UNIX, and similar operating systems, file systems on different partitions and removable devices (CDs, DVDs, or USB flash drives for example) can be attached to a certain point (the mount point) in the directory tree, and then detached again. To attach or detach a file system, use the mount or umount command respectively. This chapter describes the basic use of these commands, as well as some advanced topics, such as moving a mount point or creating shared subtrees.

17.1. Listing Currently Mounted File Systems

To display all currently attached file systems, run the mount command with no additional arguments:
mount
This command displays the list of known mount points. Each line provides important information about the device name, the file system type, the directory in which it is mounted, and relevant mount options in the following form:
device on directory type type (options)
The findmnt utility, which allows users to list mounted file systems in a tree-like form, is available from Red Hat Enterprise Linux 6.1 onward. To display all currently attached file systems, run the findmnt command with no additional arguments:
findmnt

17.1.1. Specifying the File System Type

By default, the output of the mount command includes various virtual file systems such as sysfs and tmpfs. To display only the devices with a certain file system type, supply the -t option on the command line:
mount -t type
Similarly, to display only the devices with a certain file system type by using the findmnt command, type:
findmnt -t type
For a list of common file system types, refer to Table 17.1, "Common File System Types". For an example usage, see Example 17.1, "Listing Currently Mounted ext4 File Systems".

Example 17.1. Listing Currently Mounted ext4 File Systems

Usually, both / and /boot partitions are formatted to use ext4. To display only the mount points that use this file system, type the following at a shell prompt:
~]$ mount -t ext4
/dev/sda2 on / type ext4 (rw)
/dev/sda1 on /boot type ext4 (rw)
To list such mount points using the findmnt command, type:
~]$ findmnt -t ext4
TARGET SOURCE    FSTYPE  OPTIONS
/      /dev/sda2 ext4    rw,relatime,seclabel,barrier=1,data=ordered
/boot  /dev/sda1 ext4    rw,relatime,seclabel,barrier=1,data=ordered

17.2. Mounting a File System

To attach a certain file system, use the mount command in the following form:
mount [option…] device directory
The device can be identified by a full path to a block device (for example, "/dev/sda3"), a universally unique identifier (UUID; for example, "UUID=34795a28-ca6d-4fd8-a347-73671d0c19cb"), or a volume label (for example, "LABEL=home"). Note that while a file system is mounted, the original content of the directory is not accessible.

Important: Make Sure the Directory is Not in Use

Linux does not prevent a user from mounting a file system to a directory with a file system already attached to it. To determine whether a particular directory serves as a mount point, run the findmnt utility with the directory as its argument and verify the exit code:
findmnt directory; echo $?
If no file system is attached to the directory, the above command returns 1.
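For example, assuming that no file system is currently attached to the /mnt directory:
~]$ findmnt /mnt; echo $?
1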
When the mount command is run without all required information (that is, without the device name, the target directory, or the file system type), it reads the content of the /etc/fstab configuration file to see if the given file system is listed. This file contains a list of device names and the directories in which the selected file systems should be mounted, as well as the file system type and mount options. Because of this, when mounting a file system that is specified in this file, you can use one of the following variants of the command:
mount [option…] directory
mount [option…] device
Note that permissions are required to mount the file systems unless the command is run as root (see Section 17.2.2, "Specifying the Mount Options").

Note: Determining the UUID and Label of a Particular Device

To determine the UUID and-if the device uses it-the label of a particular device, use the blkid command in the following form:
blkid device
For example, to display information about /dev/sda3, type:
~]# blkid /dev/sda3
/dev/sda3: LABEL="home" UUID="34795a28-ca6d-4fd8-a347-73671d0c19cb" TYPE="ext3"

17.2.1. Specifying the File System Type

In most cases, mount detects the file system automatically. However, there are certain file systems, such as NFS (Network File System) or CIFS (Common Internet File System), that are not recognized, and need to be specified manually. To specify the file system type, use the mount command in the following form:
mount -t type device directory
Table 17.1, "Common File System Types" provides a list of common file system types that can be used with the mount command. For a complete list of all available file system types, consult the relevant manual page as referred to in Section 17.4.1, "Manual Page Documentation".

Table 17.1. Common File System Types

Type     Description
ext2     The ext2 file system.
ext3     The ext3 file system.
ext4     The ext4 file system.
iso9660  The ISO 9660 file system. It is commonly used by optical media, typically CDs.
jfs      The JFS file system created by IBM.
nfs      The NFS file system. It is commonly used to access files over the network.
nfs4     The NFSv4 file system. It is commonly used to access files over the network.
ntfs     The NTFS file system. It is commonly used on machines that are running the Windows operating system.
udf      The UDF file system. It is commonly used by optical media, typically DVDs.
vfat     The FAT file system. It is commonly used on machines that are running the Windows operating system, and on certain digital media such as USB flash drives or floppy disks.

Example 17.2. Mounting a USB Flash Drive

Older USB flash drives often use the FAT file system. Assuming that such a drive uses the /dev/sdc1 device and that the /media/flashdisk/ directory exists, mount it to this directory by typing the following at a shell prompt as root:
~]# mount -t vfat /dev/sdc1 /media/flashdisk

17.2.2. Specifying the Mount Options

To specify additional mount options, use the command in the following form:
mount -o options device directory
When supplying multiple options, do not insert a space after a comma, or mount will incorrectly interpret the values following spaces as additional parameters.
Table 17.2, "Common Mount Options" provides a list of common mount options. For a complete list of all available options, consult the relevant manual page as referred to in Section 17.4.1, "Manual Page Documentation".

Table 17.2. Common Mount Options

Option    Description
async     Allows the asynchronous input/output operations on the file system.
auto      Allows the file system to be mounted automatically using the mount -a command.
defaults  Provides an alias for async,auto,dev,exec,nouser,rw,suid.
exec      Allows the execution of binary files on the particular file system.
loop      Mounts an image as a loop device.
noauto    Default behavior disallows the automatic mount of the file system using the mount -a command.
noexec    Disallows the execution of binary files on the particular file system.
nouser    Disallows an ordinary user (that is, other than root) to mount and unmount the file system.
remount   Remounts the file system in case it is already mounted.
ro        Mounts the file system for reading only.
rw        Mounts the file system for both reading and writing.
user      Allows an ordinary user (that is, other than root) to mount and unmount the file system.

See Example 17.3, "Mounting an ISO Image" for an example usage.

Example 17.3. Mounting an ISO Image

An ISO image (or a disk image in general) can be mounted by using the loop device. Assuming that the ISO image of the Fedora 14 installation disc is present in the current working directory and that the /media/cdrom/ directory exists, mount the image to this directory by running the following command as root:
~]# mount -o ro,loop Fedora-14-x86_64-Live-Desktop.iso /media/cdrom
Note that ISO 9660 is by design a read-only file system.

17.2.3. Sharing Mounts

Occasionally, certain system administration tasks require access to the same file system from more than one place in the directory tree (for example, when preparing a chroot environment). This is possible, and Linux allows you to mount the same file system to as many directories as necessary. Additionally, the mount command implements the --bind option that provides a means for duplicating certain mounts. Its usage is as follows:
mount --bind old_directory new_directory
Although this command allows a user to access the file system from both places, it does not apply to file systems that are mounted within the original directory. To include these mounts as well, type:
mount --rbind old_directory new_directory
Additionally, to provide as much flexibility as possible, Red Hat Enterprise Linux 6 implements the functionality known as shared subtrees. This feature allows the use of the following four mount types:
Shared Mount
A shared mount allows the creation of an exact replica of a given mount point. When a mount point is marked as a shared mount, any mount within the original mount point is reflected in it, and vice versa. To change the type of a mount point to a shared mount, type the following at a shell prompt:
mount --make-shared mount_point
Alternatively, to change the mount type for the selected mount point and all mount points under it, type:
mount --make-rshared mount_point

Example 17.4. Creating a Shared Mount Point

There are two places where other file systems are commonly mounted: the /media directory for removable media, and the /mnt directory for temporarily mounted file systems. By using a shared mount, you can make these two directories share the same content. To do so, as root, mark the /media directory as "shared":
~]# mount --bind /media /media
~]# mount --make-shared /media
Then create its duplicate in /mnt by using the following command:
~]# mount --bind /media /mnt
It is now possible to verify that a mount within /media also appears in /mnt. For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, run the following commands:
~]# mount /dev/cdrom /media/cdrom
~]# ls /media/cdrom
EFI  GPL  isolinux  LiveOS
~]# ls /mnt/cdrom
EFI  GPL  isolinux  LiveOS
Similarly, it is possible to verify that any file system mounted in the /mnt directory is reflected in /media. For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, type:
~]# mount /dev/sdc1 /mnt/flashdisk
~]# ls /media/flashdisk
en-US  publican.cfg
~]# ls /mnt/flashdisk
en-US  publican.cfg

Slave Mount
A slave mount allows the creation of a limited duplicate of a given mount point. When a mount point is marked as a slave mount, any mount within the original mount point is reflected in it, but no mount within a slave mount is reflected in its original. To change the type of a mount point to a slave mount, type the following at a shell prompt:
mount --make-slave mount_point
Alternatively, it is possible to change the mount type for the selected mount point and all mount points under it by typing:
mount --make-rslave mount_point

Example 17.5. Creating a Slave Mount Point

This example shows how to get the content of the /media directory to appear in /mnt as well, but without any mounts in the /mnt directory to be reflected in /media. As root, first mark the /media directory as "shared":
~]# mount --bind /media /media
~]# mount --make-shared /media
Then create its duplicate in /mnt, but mark it as "slave":
~]# mount --bind /media /mnt
~]# mount --make-slave /mnt
Now verify that a mount within /media also appears in /mnt. For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, run the following commands:
~]# mount /dev/cdrom /media/cdrom
~]# ls /media/cdrom
EFI  GPL  isolinux  LiveOS
~]# ls /mnt/cdrom
EFI  GPL  isolinux  LiveOS
Also verify that file systems mounted in the /mnt directory are not reflected in /media. For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, type:
~]# mount /dev/sdc1 /mnt/flashdisk
~]# ls /media/flashdisk
~]# ls /mnt/flashdisk
en-US  publican.cfg

Private Mount
A private mount is the default type of mount, and unlike a shared or slave mount, it does not receive or forward any propagation events. To explicitly mark a mount point as a private mount, type the following at a shell prompt:
mount --make-private mount_point
Alternatively, it is possible to change the mount type for the selected mount point and all mount points under it:
mount --make-rprivate mount_point

Example 17.6. Creating a Private Mount Point

Taking into account the scenario in Example 17.4, "Creating a Shared Mount Point", assume that a shared mount point has been previously created by using the following commands as root:
~]# mount --bind /media /media
~]# mount --make-shared /media
~]# mount --bind /media /mnt
To mark the /mnt directory as "private", type:
~]# mount --make-private /mnt
It is now possible to verify that none of the mounts within /media appears in /mnt. For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, run the following commands:
~]# mount /dev/cdrom /media/cdrom
~]# ls /media/cdrom
EFI  GPL  isolinux  LiveOS
~]# ls /mnt/cdrom
~]#
It is also possible to verify that file systems mounted in the /mnt directory are not reflected in /media. For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, type:
~]# mount /dev/sdc1 /mnt/flashdisk
~]# ls /media/flashdisk
~]# ls /mnt/flashdisk
en-US  publican.cfg

Unbindable Mount
In order to prevent a given mount point from being duplicated whatsoever, an unbindable mount is used. To change the type of a mount point to an unbindable mount, type the following at a shell prompt:
mount --make-unbindable mount_point
Alternatively, it is possible to change the mount type for the selected mount point and all mount points under it:
mount --make-runbindable mount_point

Example 17.7. Creating an Unbindable Mount Point

To prevent the /media directory from being shared, as root, type the following at a shell prompt:
~]# mount --bind /media /media
~]# mount --make-unbindable /media
This way, any subsequent attempt to make a duplicate of this mount will fail with an error:
~]# mount --bind /media /mnt
mount: wrong fs type, bad option, bad superblock on /media,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

17.2.4. Moving a Mount Point

To change the directory in which a file system is mounted, use the following command:
mount --move old_directory new_directory

Example 17.8. Moving an Existing NFS Mount Point

An NFS share containing user directories is already mounted in /mnt/userdirs/. As root, move this mount point to /home by using the following command:
~]# mount --move /mnt/userdirs /home
To verify the mount point has been moved, list the content of both directories:
~]# ls /mnt/userdirs
~]# ls /home
jill  joe

17.3. Unmounting a File System

To detach a previously mounted file system, use either of the following variants of the umount command:
umount directory
umount device
Note that unless this is performed while logged in as root, the correct permissions must be available to unmount the file system (see Section 17.2.2, "Specifying the Mount Options"). See Example 17.9, "Unmounting a CD" for an example usage.

Important: Make Sure the Directory is Not in Use

When a file system is in use (for example, when a process is reading a file on this file system, or when it is used by the kernel), running the umount command will fail with an error. To determine which processes are accessing the file system, use the fuser command in the following form:
fuser -m directory
For example, to list the processes that are accessing a file system mounted to the /media/cdrom/ directory, type:
~]$ fuser -m /media/cdrom
/media/cdrom:         1793  2013  2022  2435 10532c 10672c

Example 17.9. Unmounting a CD

To unmount a CD that was previously mounted to the /media/cdrom/ directory, type the following at a shell prompt:
~]$ umount /media/cdrom

17.4. Documentation

The following resources provide in-depth documentation on the subject.

17.4.1. Manual Page Documentation

  • man 8 mount - The manual page for the mount command, providing full documentation on its usage.
  • man 8 umount - The manual page for the umount command, providing full documentation on its usage.
  • man 8 findmnt - The manual page for the findmnt command, providing full documentation on its usage.
  • man 5 fstab - The manual page providing a thorough description of the /etc/fstab file format.

17.4.2. Useful Websites
