
Storage Administration Guide

Chapter 9. Network File System (NFS)

A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network.
This chapter focuses on fundamental NFS concepts and supplemental information.

9.1. How It Works

Currently, there are three versions of NFS. NFS version 2 (NFSv2) is older and is widely supported. NFS version 3 (NFSv3) supports safe asynchronous writes and more robust error handling than NFSv2; it also supports 64-bit file sizes and offsets, allowing clients to access more than 2 GB of file data.
NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer requires an rpcbind service, supports ACLs, and utilizes stateful operations. Red Hat Enterprise Linux 6 supports NFSv2, NFSv3, and NFSv4 clients. When mounting a file system via NFS, Red Hat Enterprise Linux uses NFSv4 by default, if the server supports it.
All versions of NFS can use Transmission Control Protocol (TCP) running over an IP network, with NFSv4 requiring it. NFSv2 and NFSv3 can use the User Datagram Protocol (UDP) running over an IP network to provide a stateless network connection between the client and server.
When using NFSv2 or NFSv3 with UDP, the stateless UDP connection (under normal conditions) has less protocol overhead than TCP. This can translate into better performance on very clean, non-congested networks. However, because UDP is stateless, if the server goes down unexpectedly, UDP clients continue to saturate the network with requests for the server. In addition, when a frame is lost with UDP, the entire RPC request must be retransmitted; with TCP, only the lost frame needs to be resent. For these reasons, TCP is the preferred protocol when connecting to an NFS server.
The mounting and locking protocols have been incorporated into the NFSv4 protocol. The server also listens on the well-known TCP port 2049. As such, NFSv4 does not need to interact with rpcbind [3], lockd, and rpc.statd daemons. The rpc.mountd daemon is still required on the NFS server to set up the exports, but is not involved in any over-the-wire operations.

Note

TCP is the default transport protocol for NFS version 2 and 3 under Red Hat Enterprise Linux. UDP can be used for compatibility purposes as needed, but is not recommended for wide usage. NFSv4 requires TCP.
All the RPC/NFS daemons have a '-p' command line option that can set the port, making firewall configuration easier.
After TCP wrappers grant access to the client, the NFS server refers to the /etc/exports configuration file to determine whether the client is allowed to access any exported file systems. Once verified, all file and directory operations are available to the user.

Important

In order for NFS to work with a default installation of Red Hat Enterprise Linux with a firewall enabled, configure IPTables with the default TCP port 2049. Without proper IPTables configuration, NFS will not function properly.
The NFS initialization script and rpc.nfsd process now allow binding to any specified port during system start up. However, this can be error-prone if the port is unavailable, or if it conflicts with another daemon.

9.1.1. Required Services

Red Hat Enterprise Linux uses a combination of kernel-level support and daemon processes to provide NFS file sharing. All NFS versions rely on Remote Procedure Calls (RPC) between clients and servers. RPC services under Red Hat Enterprise Linux 6 are controlled by the rpcbind service. To share or mount NFS file systems, the following services work together, depending on which version of NFS is implemented:

Note

The portmap service was used to map RPC program numbers to IP address port number combinations in earlier versions of Red Hat Enterprise Linux. This service is now replaced by rpcbind in Red Hat Enterprise Linux 6 to enable IPv6 support. For more information about this change, refer to the following links:
nfs
service nfs start starts the NFS server and the appropriate RPC processes to service requests for shared NFS file systems.
nfslock
service nfslock start activates a mandatory service that starts the appropriate RPC processes which allow NFS clients to lock files on the server.
rpcbind
rpcbind accepts port reservations from local RPC services. These ports are then made available (or advertised) so the corresponding remote RPC services can access them. rpcbind responds to requests for RPC services and sets up connections to the requested RPC service. This is not used with NFSv4.
The following RPC processes facilitate NFS services:
rpc.mountd
This process is used by an NFS server to process MOUNT requests from NFSv2 and NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server, and that the client is allowed to access it. If the mount request is allowed, the rpc.mountd server replies with a Success status and provides the File-Handle for this NFS share back to the NFS client.
rpc.nfsd
rpc.nfsd allows explicit NFS versions and protocols the server advertises to be defined. It works with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. This process corresponds to the nfs service.
lockd
lockd is a kernel thread which runs on both clients and servers. It implements the Network Lock Manager (NLM) protocol, which allows NFSv2 and NFSv3 clients to lock files on the server. It is started automatically whenever the NFS server is run and whenever an NFS file system is mounted.
rpc.statd
This process implements the Network Status Monitor (NSM) RPC protocol, which notifies NFS clients when an NFS server is restarted without being gracefully brought down. rpc.statd is started automatically by the nfslock service, and does not require user configuration. This is not used with NFSv4.
rpc.rquotad
This process provides user quota information for remote users. rpc.rquotad is started automatically by the nfs service and does not require user configuration.
rpc.idmapd
rpc.idmapd provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4 names (which are strings in the form of user@domain) and local UIDs and GIDs. For idmapd to function with NFSv4, the /etc/idmapd.conf file must be configured. This service is required for use with NFSv4, although not when all hosts share the same DNS domain name.
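As a minimal hedged sketch (the domain name is a placeholder for your site), the relevant /etc/idmapd.conf setting is typically the Domain entry in the [General] section:
[General]
Domain = example.com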

9.2. pNFS

Support for Parallel NFS (pNFS) as part of the NFS v4.1 standard is available as of Red Hat Enterprise Linux 6.4. The pNFS architecture improves the scalability of NFS, with possible improvements to performance. That is, when a server implements pNFS as well, a client is able to access data through multiple servers concurrently. It supports three storage protocols or layouts: files, objects, and blocks.
To enable this functionality, use one of the following mount options on mounts from a pNFS-enabled server:
-o minorversion=1
or
-o v4.1
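For example, a full hedged invocation might look like the following (the hostname, export, and mount point are placeholders):
# mount -t nfs -o vers=4,minorversion=1 server1.example.com:/exports/data /mnt/pnfs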
After the server is pNFS-enabled, the nfs_layout_nfsv41_files kernel module is automatically loaded on the first mount. Use the following command to verify the module was loaded:
$ lsmod | grep nfs_layout_nfsv41_files
Another way to verify a successful NFSv4.1 mount is with the mount command. The mount entry in the output should contain minorversion=1.
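For instance, a quick check of this kind (assuming the share is already mounted somewhere on the system) is to filter the mount output for that option:
$ mount | grep minorversion=1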

Important

The protocol allows for three possible pNFS layout types: files, objects, and blocks. However, the Red Hat Enterprise Linux 6.4 client only supports the files layout type, so it uses pNFS only when the server also supports the files layout type.
For more information on pNFS, refer to: http://www.pnfs.com.

9.3. NFS Client Configuration

The mount command mounts NFS shares on the client side. Its format is as follows:
# mount -t nfs -o options server:/remote/export /local/directory
This command uses the following variables:
options
A comma-delimited list of mount options; refer to Section 9.5, "Common NFS Mount Options" for details on valid NFS mount options.
server
The hostname, IP address, or fully qualified domain name of the server exporting the file system you wish to mount
/remote/export
The file system / directory being exported from server, i.e. the directory you wish to mount
/local/directory
The client location where /remote/export should be mounted
The NFS protocol version used in Red Hat Enterprise Linux 6 is identified by the mount options nfsvers or vers. By default, mount will use NFSv4 with mount -t nfs. If the server does not support NFSv4, the client will automatically step down to a version supported by the server. If you use the nfsvers/vers option to pass a particular version not supported by the server, the mount will fail. The file system type nfs4 is also available for legacy reasons; this is equivalent to running mount -t nfs -o nfsvers=4 server:/remote/export /local/directory.
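For example, assuming a server named server1.example.com exports /exports/data (the hostname, export, and mount point are placeholders), the following commands mount it with the default version and with an explicit NFSv3 request, respectively:
# mount -t nfs server1.example.com:/exports/data /mnt/data
# mount -t nfs -o nfsvers=3 server1.example.com:/exports/data /mnt/data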
Refer to man mount for more details.
If an NFS share was mounted manually, the share will not be automatically mounted upon reboot. Red Hat Enterprise Linux offers two methods for mounting remote file systems automatically at boot time: the /etc/fstab file and the autofs service. Refer to Section 9.3.1, "Mounting NFS File Systems using /etc/fstab" and Section 9.4, "autofs" for more information.

9.3.1. Mounting NFS File Systems using /etc/fstab

An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file. The line must state the hostname of the NFS server, the directory on the server being exported, and the directory on the local machine where the NFS share is to be mounted. You must be root to modify the /etc/fstab file.

Example 9.1. Syntax example

The general syntax for the line in /etc/fstab is as follows:
server:/usr/local/pub /pub   nfs defaults 0 0

The mount point /pub must exist on the client machine before this command can be executed. After adding this line to /etc/fstab on the client system, use the command mount /pub, and the mount point /pub is mounted from the server.
The /etc/fstab file is referenced by the netfs service at boot time, so lines referencing NFS shares have the same effect as manually typing the mount command during the boot process.
A valid /etc/fstab entry to mount an NFS export should contain the following information:
server:/remote/export /local/directory nfs options 0 0
The variables server, /remote/export, /local/directory, and options are the same ones used when manually mounting an NFS share. Refer to Section 9.3, "NFS Client Configuration" for a definition of each variable.
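For example, a hedged /etc/fstab entry using placeholder names and a couple of the options from Section 9.5, "Common NFS Mount Options", might look like:
server1.example.com:/exports/data   /mnt/data   nfs   rsize=32768,wsize=32768,intr   0 0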

Note

The mount point /local/directory must exist on the client before /etc/fstab is read. Otherwise, the mount will fail.
For more information about /etc/fstab, refer to man fstab.

9.4. autofs

One drawback to using /etc/fstab is that, regardless of how infrequently a user accesses the NFS mounted file system, the system must dedicate resources to keep the mounted file system in place. This is not a problem with one or two mounts, but when the system is maintaining mounts to many systems at one time, overall system performance can be affected. An alternative to /etc/fstab is to use the kernel-based automount utility. An automounter consists of two components:
  • a kernel module that implements a file system
  • a user-space daemon that performs all of the other functions
The automount utility can mount and unmount NFS file systems automatically (on-demand mounting), therefore saving system resources. It can be used to mount other file systems including AFS, SMBFS, CIFS, and local file systems.

Important: Automounting NFS Shares

The nfs-utils package is now a part of both the 'NFS file server' and the 'Network File System Client' groups. As such it is no longer installed by default with the Base group. Ensure that nfs-utils is installed on the system first before attempting to automount an NFS share. Note that autofs is also part of the 'Network File System Client' group.
autofs uses /etc/auto.master (master map) as its default primary configuration file. This can be changed to use another supported network source and name using the autofs configuration (in /etc/sysconfig/autofs) in conjunction with the Name Service Switch (NSS) mechanism. An instance of the autofs version 4 daemon was run for each mount point configured in the master map and so it could be run manually from the command line for any given mount point. This is not possible with autofs version 5, because it uses a single daemon to manage all configured mount points; as such, all automounts must be configured in the master map. This is in line with the usual requirements of other industry standard automounters. Mount point, hostname, exported directory, and options can all be specified in a set of files (or other supported network sources) rather than configuring them manually for each host.

9.4.1. Improvements in autofs Version 5 over Version 4

autofs version 5 features the following enhancements over version 4:
Direct map support
Direct maps in autofs provide a mechanism to automatically mount file systems at arbitrary points in the file system hierarchy. A direct map is denoted by a mount point of /- in the master map. Entries in a direct map contain an absolute path name as a key (instead of the relative path names used in indirect maps).
Lazy mount and unmount support
Multi-mount map entries describe a hierarchy of mount points under a single key. A good example of this is the -hosts map, commonly used for automounting all exports from a host under /net/host as a multi-mount map entry. When using the -hosts map, an ls of /net/host creates autofs trigger mounts for each export from host; these are then mounted and expired as they are accessed. This can greatly reduce the number of active mounts needed when accessing a server with a large number of exports.
Enhanced LDAP support
The autofs configuration file (/etc/sysconfig/autofs) provides a mechanism to specify the autofs schema that a site implements, thus precluding the need to determine this via trial and error in the application itself. In addition, authenticated binds to the LDAP server are now supported, using most mechanisms supported by the common LDAP server implementations. A new configuration file has been added for this support: /etc/autofs_ldap_auth.conf. The default configuration file is self-documenting, and uses an XML format.
Proper use of the Name Service Switch (nsswitch) configuration.
The Name Service Switch configuration file exists to provide a means of determining from where specific configuration data comes. The reason for this configuration is to allow administrators the flexibility of using the back-end database of choice, while maintaining a uniform software interface to access the data. While the version 4 automounter is becoming increasingly better at handling the NSS configuration, it is still not complete. Autofs version 5, on the other hand, is a complete implementation.
Refer to man nsswitch.conf for more information on the supported syntax of this file. Please note that not all NSS databases are valid map sources and the parser will reject ones that are invalid. Valid sources are files, yp, nis, nisplus, ldap, and hesiod.
Multiple master map entries per autofs mount point
One thing that is frequently used but not yet mentioned is the handling of multiple master map entries for the direct mount point /-. The map keys for each entry are merged and behave as one map.

Example 9.2. Multiple master map entries per autofs mount point

An example is seen in the connectathon test maps for the direct mounts below:
/- /tmp/auto_dcthon
/- /tmp/auto_test3_direct
/- /tmp/auto_test4_direct

 

9.4.2. autofs Configuration

The primary configuration file for the automounter is /etc/auto.master, also referred to as the master map, which may be changed as described in Section 9.4.1, "Improvements in autofs Version 5 over Version 4". The master map lists autofs-controlled mount points on the system, and their corresponding configuration files or network sources known as automount maps. The format of the master map is as follows:
mount-point map-name options
The variables used in this format are:
mount-point
The autofs mount point, e.g. /home.
map-name
The name of a map source which contains a list of mount points, and the file system location from which those mount points should be mounted. The syntax for a map entry is described below.
options
If supplied, these will apply to all entries in the given map provided they don't themselves have options specified. This behavior is different from autofs version 4 where options were cumulative. This has been changed to implement mixed environment compatibility.

Example 9.3. /etc/auto.master file

The following is a sample line from /etc/auto.master file (displayed with cat /etc/auto.master):
/home /etc/auto.misc
The general format of maps is similar to the master map; however, the "options" appear between the mount point and the location instead of at the end of the entry as in the master map:
mount-point   [options]   location

The variables used in this format are:
mount-point
This refers to the autofs mount point. This can be a single directory name for an indirect mount or the full path of the mount point for direct mounts. Each direct and indirect map entry key (mount-point above) may be followed by a space separated list of offset directories (sub directory names each beginning with a "/") making them what is known as a multi-mount entry.
options
Whenever supplied, these are the mount options for the map entries that do not specify their own options.
location
This refers to the file system location such as a local file system path (preceded with the Sun map format escape character ":" for map names beginning with "/"), an NFS file system or other valid file system location.
The following is a sample of contents from a map file (i.e. /etc/auto.misc):
payroll -fstype=nfs personnel:/dev/hda3
sales -fstype=ext3 :/dev/hda4
The first column in a map file indicates the autofs mount point (sales and payroll from the server called personnel). The second column indicates the options for the autofs mount while the third column indicates the source of the mount. Following the above configuration, the autofs mount points will be /home/payroll and /home/sales. The -fstype= option is often omitted and is generally not needed for correct operation.
The automounter will create the directories if they do not exist. If the directories exist before the automounter was started, the automounter will not remove them when it exits. You can start or restart the automount daemon by issuing either of the following two commands:
service autofs start (if the automount daemon has stopped)
service autofs restart
Using the above configuration, if a process requires access to an autofs unmounted directory such as /home/payroll/2006/July.sxc, the automount daemon automatically mounts the directory. If a timeout is specified, the directory will automatically be unmounted if the directory is not accessed for the timeout period.
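As an illustrative sketch (the map name and timeout value are examples only), the expiry timeout can be set per mount point in the master map, or globally with the TIMEOUT variable in /etc/sysconfig/autofs:
/home /etc/auto.home --timeout=300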
You can view the status of the automount daemon by issuing the following command:
#  service autofs status

9.4.3. Overriding or Augmenting Site Configuration Files

It can be useful to override site defaults for a specific mount point on a client system. For example, consider the following conditions:
  • Automounter maps are stored in NIS and the /etc/nsswitch.conf file has the following directive:
    automount: files nis
  • The auto.master file contains the following
    +auto.master
  • The NIS auto.master map file contains the following:
    /home auto.home
  • The NIS auto.home map contains the following:
    beth fileserver.example.com:/export/home/beth
    joe  fileserver.example.com:/export/home/joe
    *    fileserver.example.com:/export/home/&
  • The file map /etc/auto.home does not exist.
Given these conditions, let's assume that the client system needs to override the NIS map auto.home and mount home directories from a different server. In this case, the client will need to use the following /etc/auto.master map:
/home /etc/auto.home
+auto.master
And the /etc/auto.home map contains the entry:
* labserver.example.com:/export/home/&
Because the automounter only processes the first occurrence of a mount point, /home will contain the contents of /etc/auto.home instead of the NIS auto.home map.
Alternatively, if you just want to augment the site-wide auto.home map with a few entries, create a /etc/auto.home file map, and in it put your new entries and at the end, include the NIS auto.home map. Then the /etc/auto.home file map might look similar to:
mydir someserver:/export/mydir
+auto.home
Given the NIS auto.home map listed above, ls /home would now output:
beth joe mydir
This last example works as expected because autofs knows not to include the contents of a file map of the same name as the one it is reading. As such, autofs moves on to the next map source in the nsswitch configuration.

9.4.4. Using LDAP to Store Automounter Maps

LDAP client libraries must be installed on all systems configured to retrieve automounter maps from LDAP. On Red Hat Enterprise Linux, the openldap package should be installed automatically as a dependency of the automounter. To configure LDAP access, modify /etc/openldap/ldap.conf. Ensure that BASE, URI, and schema are set appropriately for your site.
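A minimal hedged sketch of the relevant /etc/openldap/ldap.conf settings (the server name and base DN are placeholders for your site):
BASE   dc=example,dc=com
URI    ldap://ldap.example.com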
The most recently established schema for storing automount maps in LDAP is described by rfc2307bis. To use this schema it is necessary to set it in the autofs configuration (/etc/sysconfig/autofs) by removing the comment characters from the schema definition. For example:

Example 9.4. Setting autofs configuration

DEFAULT_MAP_OBJECT_CLASS="automountMap"
DEFAULT_ENTRY_OBJECT_CLASS="automount"
DEFAULT_MAP_ATTRIBUTE="automountMapName"
DEFAULT_ENTRY_ATTRIBUTE="automountKey"
DEFAULT_VALUE_ATTRIBUTE="automountInformation"

Ensure that these are the only schema entries not commented in the configuration. Note that the automountKey replaces the cn attribute in the rfc2307bis schema. An LDIF of a sample configuration is described below:

Example 9.5. LDIF configuration

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.master))
# requesting: ALL
#

# auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: top
objectClass: automountMap
automountMapName: auto.master

# extended LDIF
#
# LDAPv3
# base <automountMapName=auto.master,dc=example,dc=com> with scope subtree
# filter: (objectclass=automount)
# requesting: ALL
#

# /home, auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: automount
cn: /home
automountKey: /home
automountInformation: auto.home

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.home))
# requesting: ALL
#

# auto.home, example.com
dn: automountMapName=auto.home,dc=example,dc=com
objectClass: automountMap
automountMapName: auto.home

# extended LDIF
#
# LDAPv3
# base <automountMapName=auto.home,dc=example,dc=com> with scope subtree
# filter: (objectclass=automount)
# requesting: ALL
#

# foo, auto.home, example.com
dn: automountKey=foo,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: foo
automountInformation: filer.example.com:/export/foo

# /, auto.home, example.com
dn: automountKey=/,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: /
automountInformation: filer.example.com:/export/&

9.5. Common NFS Mount Options

Beyond mounting a file system via NFS on a remote host, you can also specify other options at mount time to make the mounted share easier to use. These options can be used with manual mount commands, /etc/fstab settings, and autofs.
The following are options commonly used for NFS mounts:
intr
Allows NFS requests to be interrupted if the server goes down or cannot be reached.
lookupcache=mode
Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid arguments for mode are all, none, or pos/positive.
nfsvers=version
Specifies which version of the NFS protocol to use, where version is 2, 3, or 4. This is useful for hosts that run multiple NFS servers. If no version is specified, NFS uses the highest version supported by the kernel and mount command.
The option vers is identical to nfsvers, and is included in this release for compatibility reasons.
noacl
Turns off all ACL processing. This may be needed when interfacing with older versions of Red Hat Enterprise Linux, Red Hat Linux, or Solaris, since the most recent ACL technology is not compatible with older systems.
nolock
Disables file locking. This setting is occasionally required when connecting to older NFS servers.
noexec
Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system containing incompatible binaries.
nosuid
Disables set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program.
port=num
Specifies the numeric value of the NFS server port. If num is 0 (the default), then mount queries the remote host's rpcbind service for the port number to use. If the remote host's NFS daemon is not registered with its rpcbind service, the standard NFS port number of TCP 2049 is used instead.
rsize=num and wsize=num
These settings speed up NFS communication for reads (rsize) and writes (wsize) by setting a larger data block size (num, in bytes), to be transferred at one time. Be careful when changing these values; some older Linux kernels and network cards do not work well with larger block sizes. For NFSv2 or NFSv3, the default value for both parameters is 8192. For NFSv4, the default value for both parameters is 32768.
sec=mode
Specifies the type of security to utilize when authenticating an NFS connection. Its default setting is sec=sys, which uses local UNIX UIDs and GIDs by using AUTH_SYS to authenticate NFS operations.
sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.
sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering.
sec=krb5p uses Kerberos V5 for user authentication, integrity checking, and encrypts NFS traffic to prevent traffic sniffing. This is the most secure setting, but it also involves the most performance overhead.
tcp
Instructs the NFS mount to use the TCP protocol.
udp
Instructs the NFS mount to use the UDP protocol.
For a complete list of options and more detailed information on each one, refer to man mount and man nfs.
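For illustration, a hedged mount combining a few of the options above (the hostname, paths, and transfer sizes are placeholders; measure before tuning):
# mount -t nfs -o rsize=32768,wsize=32768,tcp server1.example.com:/exports/data /mnt/data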

9.6. Starting and Stopping NFS

To run an NFS server, the rpcbind[3] service must be running. To verify that rpcbind is active, use the following command:
 # service rpcbind status

Note

Using the service command to start, stop, or restart a daemon requires root privileges.
If the rpcbind service is running, then the nfs service can be started. To start an NFS server, use the following command as root:
 # service nfs start

Note

nfslock must also be started for both the NFS client and server to function properly. To start NFS locking, use the following command:
 # service nfslock start
If NFS is set to start at boot, ensure that nfslock also starts by running chkconfig --list nfslock. If nfslock is not set to on, you will need to run service nfslock start manually each time the computer starts. To set nfslock to automatically start on boot, use chkconfig nfslock on.
nfslock is only needed for NFSv2 and NFSv3.
To stop the server, use:
 # service nfs stop
The restart option is a shorthand way of stopping and then starting NFS. This is the most efficient way to make configuration changes take effect after editing the configuration file for NFS. To restart the server, as root, type:
 # service nfs restart
The condrestart (conditional restart) option only starts nfs if it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running. To conditionally restart the server, as root, type:
 # service nfs condrestart
To reload the NFS server configuration file without restarting the service, as root, type:
 # service nfs reload

9.7. NFS Server Configuration

There are two ways to configure an NFS server:
  • By manually editing the NFS configuration file, i.e. /etc/exports
  • Through the command line, i.e. through exportfs

9.7.1.  The /etc/exports Configuration File

The /etc/exports file controls which file systems are exported to remote hosts and specifies options. It follows the following syntax rules:
  • Blank lines are ignored.
  • To add a comment, start a line with the hash mark (#).
  • You can wrap long lines with a backslash (\).
  • Each exported file system should be on its own individual line.
  • Any lists of authorized hosts placed after an exported file system must be separated by space characters.
  • Options for each of the hosts must be placed in parentheses directly after the host identifier, without any spaces separating the host and the first parenthesis.
Each entry for an exported file system has the following structure:
export host(options)
The aforementioned structure uses the following variables:
export
The directory being exported
host
The host or network to which the export is being shared
options
The options to be used for host
You can specify multiple hosts, along with specific options for each host. To do so, list them on the same line as a space-delimited list, with each hostname followed by its respective options (in parentheses), as in:
export host1(options1) host2(options2) host3(options3)
For information on different methods for specifying hostnames, refer to Section 9.7.4, "Hostname Formats".
In its simplest form, the /etc/exports file only specifies the exported directory and the hosts permitted to access it, as in the following example:

Example 9.6. The /etc/exports file

/exported/directory bob.example.com
Here, bob.example.com can mount /exported/directory/ from the NFS server. Because no options are specified in this example, NFS will use default settings.

The default settings are:
ro
The exported file system is read-only. Remote hosts cannot change the data shared on the file system. To allow hosts to make changes to the file system (i.e. read/write), specify the rw option.
sync
The NFS server will not reply to requests before changes made by previous requests are written to disk. To enable asynchronous writes instead, specify the option async.
wdelay
The NFS server will delay writing to the disk if it suspects another write request is imminent. This can improve performance as it reduces the number of times the disk must be accessed by separate write commands, thereby reducing write overhead. To disable this, specify the no_wdelay option; note that no_wdelay is only available if the default sync option is also specified.
root_squash
This prevents root users connected remotely (as opposed to locally) from having root privileges; instead, the NFS server will assign them the user ID nfsnobody. This effectively "squashes" the power of the remote root user to the lowest local user, preventing possible unauthorized writes on the remote server. To disable root squashing, specify no_root_squash.
To squash every remote user (including root), use all_squash. To specify the user and group IDs that the NFS server should assign to remote users from a particular host, use the anonuid and anongid options, respectively, as in:
export host(anonuid=uid,anongid=gid)
Here, uid and gid are user ID number and group ID number, respectively. The anonuid and anongid options allow you to create a special user/group account for remote NFS users to share.
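As an illustrative sketch (the path, hosts, and IDs are placeholders), an export that squashes every remote user to a dedicated account with UID and GID 5001 could be written:
/shared/archive *.example.com(ro,all_squash,anonuid=5001,anongid=5001)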

Important

By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. To disable this feature, specify the no_acl option when exporting the file system.
Each default for every exported file system must be explicitly overridden. For example, if the rw option is not specified, then the exported file system is shared as read-only. The following is a sample line from /etc/exports which overrides two default options:
/another/exported/directory 192.168.0.3(rw,async)
In this example 192.168.0.3 can mount /another/exported/directory/ read/write and all writes to disk are asynchronous. For more information on exporting options, refer to man exportfs.
Additionally, other options are available where no default value is specified. These include the ability to disable sub-tree checking, allow access from insecure ports, and allow insecure file locks (necessary for certain early NFS client implementations). Refer to man exports for details on these less-used options.
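For instance (the path and network are placeholders), an export that disables sub-tree checking and permits requests from ports above 1024 might look like:
/exports/scratch 192.168.0.0/24(rw,async,no_subtree_check,insecure)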

Warning

The format of the /etc/exports file is very precise, particularly in regards to use of the space character. Remember to always separate exported file systems from hosts and hosts from one another with a space character. However, there should be no other space characters in the file except on comment lines.
For example, the following two lines do not mean the same thing:
/home bob.example.com(rw)
/home bob.example.com (rw)
The first line allows only users from bob.example.com read/write access to the /home directory. The second line allows users from bob.example.com to mount the directory as read-only (the default), while the rest of the world can mount it read/write.

9.7.2. The exportfs Command

Every file system being exported to remote users via NFS, as well as the access level for those file systems, is listed in the /etc/exports file. When the nfs service starts, the /usr/sbin/exportfs command launches and reads this file, passes control to rpc.mountd (if NFSv2 or NFSv3) for the actual mounting process, then to rpc.nfsd, where the file systems are then made available to remote users.
When issued manually, the /usr/sbin/exportfs command allows the root user to selectively export or unexport directories without restarting the NFS service. When given the proper options, the /usr/sbin/exportfs command writes the exported file systems to /var/lib/nfs/xtab. Since rpc.mountd refers to the xtab file when deciding access privileges to a file system, changes to the list of exported file systems take effect immediately.
The following is a list of commonly-used options available for /usr/sbin/exportfs:
-r
Causes all directories listed in /etc/exports to be exported by constructing a new export list in /var/lib/nfs/xtab. This option effectively refreshes the export list with any changes made to /etc/exports.
-a
Causes all directories to be exported or unexported, depending on what other options are passed to /usr/sbin/exportfs. If no other options are specified, /usr/sbin/exportfs exports all file systems specified in /etc/exports.
-o file-systems
Specifies directories to be exported that are not listed in /etc/exports. Replace file-systems with additional file systems to be exported. These file systems must be formatted in the same way they are specified in /etc/exports. Refer to Section 9.7.1, " The /etc/exports Configuration File" for more information on /etc/exports syntax. This option is often used to test an exported file system before adding it permanently to the list of file systems to be exported.
-i
Ignores /etc/exports; only options given from the command line are used to define exported file systems.
-u
Unexports all shared directories. The command /usr/sbin/exportfs -ua suspends NFS file sharing while keeping all NFS daemons up. To re-enable NFS sharing, use exportfs -r.
-v
Verbose operation, where the file systems being exported or unexported are displayed in greater detail when the exportfs command is executed.
If no options are passed to the exportfs command, it displays a list of currently exported file systems. For more information about the exportfs command, refer to man exportfs.
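As a hedged example combining the -v and -o options above (the hostname and directory are placeholders), a directory can be exported temporarily for testing without editing /etc/exports:
# exportfs -v -o rw,sync bob.example.com:/tmp/testexport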

9.7.2.1. Using exportfs with NFSv4

In Red Hat Enterprise Linux 6, no extra steps are required to configure NFSv4 exports as any filesystems mentioned are automatically available to NFSv2, NFSv3, and NFSv4 clients using the same path. This was not the case in previous versions.
To prevent clients from using NFSv4, turn it off by setting RPCNFSDARGS="-N 4" in /etc/sysconfig/nfs.

9.7.3. Running NFS Behind a Firewall

NFS requires rpcbind, which dynamically assigns ports for RPC services and can cause problems for configuring firewall rules. To allow clients to access NFS shares behind a firewall, edit the /etc/sysconfig/nfs configuration file to control which ports the required RPC services run on.
The /etc/sysconfig/nfs file may not exist by default on all systems. If it does not exist, create it and add the following variables, replacing port with an unused port number (alternatively, if the file exists, un-comment and change the default entries as required):
MOUNTD_PORT=port
Controls which TCP and UDP port mountd (rpc.mountd) uses.
STATD_PORT=port
Controls which TCP and UDP port status (rpc.statd) uses.
LOCKD_TCPPORT=port
Controls which TCP port nlockmgr (lockd) uses.
LOCKD_UDPPORT=port
Controls which UDP port nlockmgr (lockd) uses.
If NFS fails to start, check /var/log/messages. Normally, NFS will fail to start if you specify a port number that is already in use. After editing /etc/sysconfig/nfs, restart the NFS service using service nfs restart. Run the rpcinfo -p command to confirm the changes.
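A minimal sketch of /etc/sysconfig/nfs with static ports assigned (the port numbers below are examples only; choose ports that are unused on your system):
MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769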
To configure a firewall to allow NFS, perform the following steps:

Procedure 9.1. Configure a firewall to allow NFS

  1. Allow TCP and UDP port 2049 for NFS.
  2. Allow TCP and UDP port 111 (rpcbind/sunrpc).
  3. Allow the TCP and UDP port specified with MOUNTD_PORT="port"
  4. Allow the TCP and UDP port specified with STATD_PORT="port"
  5. Allow the TCP port specified with LOCKD_TCPPORT="port"
  6. Allow the UDP port specified with LOCKD_UDPPORT="port"
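As a hedged iptables sketch of steps 1 and 2 (rule placement relative to your existing ruleset is an assumption; add matching rules for the mountd, statd, and lockd ports chosen above):
# iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
# iptables -A INPUT -p udp --dport 2049 -j ACCEPT
# iptables -A INPUT -p tcp --dport 111 -j ACCEPT
# iptables -A INPUT -p udp --dport 111 -j ACCEPT
# service iptables save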

Note

To allow NFSv4.0 callbacks to pass through firewalls, set /proc/sys/fs/nfs/nfs_callback_tcpport and allow the server to connect to that port on the client.
This process is not needed for NFSv4.1 or higher, and the other ports for mountd, statd, and lockd are not required in a pure NFSv4 environment.

9.7.3.1. Discovering NFS exports

There are two ways to discover which file systems an NFS server exports.
First, on any server that supports NFSv2 or NFSv3, use the showmount command:
$ showmount -e myserver
Export list for myserver:
/exports/foo
/exports/bar
Second, on any server that supports NFSv4, mount / and look around.
# mount myserver:/ /mnt/
# cd /mnt/
exports
# ls exports
foo
bar
On servers that support both NFSv4 and either NFSv2 or NFSv3, both methods will work and give the same results.

Note

Older NFS servers (before Red Hat Enterprise Linux 6), depending on how they are configured, may export file systems to NFSv4 clients at different paths. Because these servers do not enable NFSv4 by default, this should not normally be a problem.

9.7.4. Hostname Formats

The host(s) can be in the following forms:
Single machine
A fully-qualified domain name (that can be resolved by the server), hostname (that can be resolved by the server), or an IP address.
Series of machines specified via wildcards
Use the * or ? character to specify a string match. Wildcards are not to be used with IP addresses; however, they may accidentally work if reverse DNS lookups fail. When specifying wildcards in fully qualified domain names, dots (.) are not included in the wildcard. For example, *.example.com includes one.example.com but does not include one.two.example.com.
IP networks
Use a.b.c.d/z, where a.b.c.d is the network and z is the number of bits in the netmask (for example 192.168.0.0/24). Another acceptable format is a.b.c.d/netmask, where a.b.c.d is the network and netmask is the netmask (for example, 192.168.100.8/255.255.255.0).
Netgroups
Use the format @group-name, where group-name is the NIS netgroup name.
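As an illustrative /etc/exports sketch showing each of the formats above (all names, networks, and netgroups are placeholders):
/export/a one.example.com(rw)
/export/b *.example.com(ro)
/export/c 192.168.0.0/24(rw)
/export/d @trusted-hosts(rw)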

9.7.5. NFS over RDMA

To enable the RDMA transport in the Linux kernel NFS server, use the following procedure:

Procedure 9.2. Enable RDMA from server

  1. Ensure the RDMA rpm is installed and the RDMA service is enabled with the following command:
    # yum install rdma; chkconfig --level 2345 rdma on
  2. Ensure the package that provides the nfs-rdma service is installed and the service is enabled with the following command:
    # yum install rdma; chkconfig --level 345 nfs-rdma on
  3. Ensure that the RDMA port is set to the preferred port (default for Red Hat Enterprise Linux 6 is 2050). To do so, edit the /etc/rdma/rdma.conf file to set NFSoRDMA_LOAD=yes and NFSoRDMA_PORT to the desired port.
  4. Set up the exported filesystem as normal for NFS mounts.
On the client side, use the following procedure:

Procedure 9.3. Enable RDMA from client

  1. Ensure the RDMA rpm is installed and the RDMA service is enabled with the following command:
    # yum install rdma; chkconfig --level 2345 rdma on
  2. Mount the NFS exported partition using the RDMA option on the mount call. The port option can optionally be added to the call.
    # mount -t nfs -o rdma,port=port_number server:/remote/export /local/directory

9.8. Securing NFS

NFS is well-suited for sharing entire file systems with a large number of known hosts in a transparent manner. However, with ease-of-use comes a variety of potential security problems. Consider the following sections when exporting NFS file systems on a server or mounting them on a client. Doing so minimizes NFS security risks and better protects data on the server.

9.8.1. NFS Security with AUTH_SYS and export controls

Traditionally, NFS has given two options in order to control access to exported files.
First, the server restricts which hosts are allowed to mount which filesystems either by IP address or by host name.
Second, the server enforces file system permissions for users on NFS clients in the same way it does for local users. Traditionally it does this using AUTH_SYS (also called AUTH_UNIX), which relies on the client to state the UIDs and GIDs of the user. Be aware that this means a malicious or misconfigured client can easily get this wrong and allow a user access to files that they should not have.
To limit the potential risks, administrators often allow read-only access or squash user permissions to a common user and group ID. Unfortunately, these solutions prevent the NFS share from being used in the way it was originally intended.
Additionally, if an attacker gains control of the DNS server used by the system exporting the NFS file system, the system associated with a particular hostname or fully qualified domain name can be pointed to an unauthorized machine. At this point, the unauthorized machine is the system permitted to mount the NFS share, since no username or password information is exchanged to provide additional security for the NFS mount.
Wildcards should be used sparingly when exporting directories via NFS, as it is possible for the scope of the wildcard to encompass more systems than intended.
You can also restrict access to the rpcbind[3] service with TCP wrappers. Creating rules with iptables can also limit access to ports used by rpcbind, rpc.mountd, and rpc.nfsd.
For more information on securing NFS and rpcbind, refer to man iptables.

9.8.2. NFS security with AUTH_GSS

The release of NFSv4 brought a revolution to NFS security by mandating the implementation of RPCSEC_GSS and the Kerberos version 5 GSS-API mechanism. However, RPCSEC_GSS and the Kerberos mechanism are also available for all versions of NFS.
With the RPCSEC_GSS Kerberos mechanism, the server no longer depends on the client to correctly represent which user is accessing the file, as is the case with AUTH_SYS. Instead, it uses cryptography to authenticate users to the server, preventing a malicious client from impersonating a user without having that user's kerberos credentials.

Note

It is assumed that a Kerberos Key Distribution Center (KDC) is installed and configured correctly prior to configuring an NFSv4 server. Kerberos is a network authentication system which allows clients and servers to authenticate to each other through use of symmetric encryption and a trusted third party, the KDC.
To set up RPCSEC_GSS, use the following procedure:

Procedure 9.4. Set up RPCSEC_GSS

  1. Create nfs/client.mydomain@MYREALM and nfs/server.mydomain@MYREALM principals.
  2. Add the corresponding keys to keytabs for the client and server.
  3. On the server side, add sec=krb5,krb5i,krb5p to the export. To continue allowing AUTH_SYS, add sec=sys,krb5,krb5i,krb5p instead.
  4. On the client side, add sec=krb5 (or sec=krb5i, or sec=krb5p depending on the setup) to the mount options.
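As a hedged illustration of step 4 (the hostname, export, and mount point are placeholders), a Kerberos-authenticated client mount could look like:
# mount -t nfs -o sec=krb5 server.mydomain:/export /mnt/secure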
For more information, such as the difference between krb5, krb5i, and krb5p, refer to the exports and nfs man pages or refer to Section 9.5, "Common NFS Mount Options".
For more information on the RPCSEC_GSS framework, including how rpc.svcgssd and rpc.gssd inter-operate, refer to http://www.citi.umich.edu/projects/nfsv4/gssd/.

9.8.2.1. NFS security with NFSv4

NFSv4 includes ACL support based on the Microsoft Windows NT model, not the POSIX model, because of the former's features and wide deployment.
Another important security feature of NFSv4 is the removal of the use of the MOUNT protocol for mounting file systems. This protocol presented possible security holes because of the way that it processed file handles.

9.8.3. File Permissions

Once the NFS file system is mounted read/write by a remote host, the only protection each shared file has is its permissions. If two users that share the same user ID value mount the same NFS file system, they can modify each other's files. Additionally, anyone logged in as root on the client system can use the su - command to access any files via the NFS share.
By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. Red Hat recommends that you keep this feature enabled.
By default, NFS uses root squashing when exporting a file system. This sets the user ID of anyone accessing the NFS share as the root user on their local machine to nobody. Root squashing is controlled by the default option root_squash; for more information about this option, refer to Section 9.7.1, " The /etc/exports Configuration File". If possible, never disable root squashing.
When exporting an NFS share as read-only, consider using the all_squash option. This option makes every user accessing the exported file system take the user ID of the nfsnobody user.

9.9.  NFS and rpcbind

Note

The following section only applies to NFSv2 or NFSv3 implementations that require the rpcbind service for backward compatibility.
The rpcbind[3] utility maps RPC services to the ports on which they listen. RPC processes notify rpcbind when they start, registering the ports they are listening on and the RPC program numbers they expect to serve. The client system then contacts rpcbind on the server with a particular RPC program number. The rpcbind service redirects the client to the proper port number so it can communicate with the requested service.
Because RPC-based services rely on rpcbind to make all connections with incoming client requests, rpcbind must be available before any of these services start.
The rpcbind service uses TCP wrappers for access control, and access control rules for rpcbind affect all RPC-based services. Alternatively, it is possible to specify access control rules for each of the NFS RPC daemons. The man pages for rpc.mountd and rpc.statd contain information regarding the precise syntax for these rules.
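As a hedged sketch of such TCP wrapper rules (the network is a placeholder), access to rpcbind could be denied by default and then allowed for a trusted subnet.
In /etc/hosts.deny:
rpcbind: ALL
In /etc/hosts.allow:
rpcbind: 192.168.0.0/255.255.255.0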

9.9.1.  Troubleshooting NFS and rpcbind

Because rpcbind[3] provides coordination between RPC services and the port numbers used to communicate with them, it is useful to view the status of current RPC services using rpcbind when troubleshooting. The rpcinfo command shows each RPC-based service with port numbers, an RPC program number, a version number, and an IP protocol type (TCP or UDP).
To make sure the proper NFS RPC-based services are enabled for rpcbind, issue the following command:
  # rpcinfo -p

Example 9.7. Output of the rpcinfo -p command

The following is sample output from this command:
program vers proto   port  service
 100021    1   udp  32774  nlockmgr
 100021    3   udp  32774  nlockmgr
 100021    4   udp  32774  nlockmgr
 100021    1   tcp  34437  nlockmgr
 100021    3   tcp  34437  nlockmgr
 100021    4   tcp  34437  nlockmgr
 100011    1   udp    819  rquotad
 100011    2   udp    819  rquotad
 100011    1   tcp    822  rquotad
 100011    2   tcp    822  rquotad
 100003    2   udp   2049  nfs
 100003    3   udp   2049  nfs
 100003    2   tcp   2049  nfs
 100003    3   tcp   2049  nfs
 100005    1   udp    836  mountd
 100005    1   tcp    839  mountd
 100005    2   udp    836  mountd
 100005    2   tcp    839  mountd
 100005    3   udp    836  mountd
 100005    3   tcp    839  mountd

If one of the NFS services does not start up correctly, rpcbind will be unable to map RPC requests from clients for that service to the correct port. In many cases, if NFS is not present in rpcinfo output, restarting NFS causes the service to correctly register with rpcbind and begin working.
For more information and a list of options on rpcinfo, refer to its man page.

9.10.  NFS Support for SELinux

SELinux currently does not work over NFS. There are two proposals for adding support: xattr support, which could be a Technology Preview item for Red Hat Enterprise Linux 6.0, and a much longer-term effort to support the IETF proposal for labeled NFS, which is more likely a Red Hat Enterprise Linux 7 item.
Overview
Labeled NFS is a proposal to revise the NFS protocol to support extended attributes. For Red Hat Enterprise Linux, this is one way to support SELinux for NFS clients. The revision to the protocol has made some progress and a tentative patch set exists, but the protocol is not finalized either in the standards body or in the upstream kernel.
Owner (package maintainer): Ric Wheeler
Detailed description
This would be a compelling addition to Red Hat Enterprise Linux 6. It makes a basic change to the NFS protocol, and that change would allow NFS to be used for SELinux installations.
The challenge with getting this feature into Red Hat Enterprise Linux 6 is threefold: the standard has to be ratified, the upstream Linux server and client code need to be pulled in, and major NFS storage vendors need to update their devices to add support so that mutual customers can use it.
That last item is the sticking point: this could easily be added to Red Hat Enterprise Linux 6 today based on the patch set from the NSA, but only pure Linux NFS server and client combinations could then be supported. If the specification or the upstream code changes radically, Red Hat could be stuck supporting an odd implementation for years.
The intention is to pursue both the upstream and the standards-based work aggressively, since this is a feature worth having, while concurrently pursuing alternatives for users of labeled NFS.
Completion Status
  • 20% completed in Fedora 11
  • 40% completed in Fedora 12; confidence that the remaining work will land by the Fedora 12 beta (~2009-07-28): low
  • 40% completed in Red Hat Enterprise Linux 6; possible inclusion in Fedora 12, at risk for Red Hat Enterprise Linux 6
Other Helpful Information
  • http://selinuxproject.org/page/Labeled_NFS has a good overview of the project.
  • Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=519835

9.11.  pNFS Support (Block, Object and File)

Overview
pNFS (parallel NFS) is a standard NFS feature, part of the NFSv4.1 specification, that vendors such as NetApp, EMC, LSI, and Panasas plan to support. It allows NFS clients to use NFS servers for metadata while using a separate path to other data providers for high-speed I/O over SANs or LANs, without transferring that data through the NFS server.
Owner (package maintainer): Ric Wheeler
QE Owner: Yuguo Zhang
Detailed description
pNFS allows greater NFS scalability for NFS clients that have direct connections to various types of data providers. It replaces proprietary offerings from EMC (HighRoad) and Panasas (its NFS/object storage combination).
The complexity is that there are three varieties of pNFS: object, block, and file. Object is supported by Panasas, block by EMC and LSI, and file by NetApp.
The advantage to Red Hat is the capability to interoperate with the high-performance offerings of high-end NAS (NFS) providers without having to load proprietary NFS client file systems. Red Hat can also implement its own open source pNFS file provider using GFS2 for customers that want a pure Red Hat Enterprise Linux pNFS system.
The challenge is that pNFS is built on top of NFSv4.1, which has been only partially pulled into kernel 2.6.30. More patches are being pushed aggressively by NetApp and Panasas, who are working with Red Hat on the upstream code, but Red Hat Enterprise Linux 6.0 may end up with only technology preview or partial functionality, with full support coming in later 6.x minor releases.
Completion Status
  • 0% completed in Fedora 11
  • 60% completed in Fedora 12; confidence that the remaining work will land by the Fedora 12 beta (~2009-07-28): medium
  • 60% completed in Red Hat Enterprise Linux 6; possible inclusion in Fedora 12, at risk for Red Hat Enterprise Linux 6
Other Helpful Information
  • http://www.pnfs.com/ gives an overview.
  • Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=519836

9.12.  NFSv4

Summary
Change the default NFS protocol to version 4.
Owner: Steve Dickson ([email protected])
Current status
  • Targeted release: Fedora 13
  • Last updated: 2010-01-18
  • Percentage of completion: 100%. With the 2.6.33 Rawhide kernel and the nfs-utils-1.2.1-10 package, the default NFS protocol is now version 4.
Detailed Description
The latest version of the NFS protocol is version 4, which was first introduced in Fedora Core 2 (the first distribution to have such support). The previous default NFS version was version 3, meaning that when a simple NFS mount was done (i.e. mount server:/export /mnt), version 3 was the first protocol version tried.
With this change, version 4 is tried first. If the server does not support version 4, the mount then falls back to version 3.
Benefit to Fedora
One of the major benefits is performance. In version 4, the server maintains state, which means it can communicate with each NFS client. The server can issue delegations (or leases) for files, allowing the v4 client to cache aggressively, which drastically cuts down on network traffic between the client and server. There are a number of other benefits, which are documented upstream.
Scope
There are basically three parts to making this happen:
  1. Change the exports on the server so v3 and v2 exports can seamlessly be used by v4 clients.
  2. Change the mount command to negotiate with the version 4 protocol first and then fall back to version 3 if the server does not support v4 (similar to the fall-back between version 3 and version 2 today).
  3. Introduce an NFS mount configuration file where users can define which protocol version should be negotiated.
How To Test
The usual Connectathon tests will be used, as well as any other available file system tests (such as fsx). The official link is http://www.connectathon.org/nfstests.html. A Git tree used to keep updates is at git://FedoraPeople.org/~steved/cthon04.git; from this tree the runcthon script is generally used to run all the tests simultaneously.
User Experience
This transition should be seamless to users.
Dependencies
The only dependency is on the nfs-utils package.
Contingency Plan
If the code is not ready, version 3 will remain the default.
Documentation
  • http://www.nfsv4.org/
Release Notes
  • Fedora now uses NFS version 4 as the default protocol version.

9.13. References

Administering an NFS server can be a challenge. Many options, including quite a few not mentioned in this chapter, are available for exporting or mounting NFS shares. Consult the following sources for more information.

Installed Documentation

  • man mount - Contains a comprehensive look at mount options for both NFS server and client configurations.
  • man fstab - Gives details for the format of the /etc/fstab file used to mount file systems at boot-time.
  • man nfs - Provides details on NFS-specific file system export and mount options.
  • man exports - Shows common options used in the /etc/exports file when exporting NFS file systems.

Useful Websites

Related Books

  • Managing NFS and NIS by Hal Stern, Mike Eisler, and Ricardo Labiaga; O'Reilly & Associates - Makes an excellent reference guide for the many different NFS export and mount options available.
  • NFS Illustrated by Brent Callaghan; Addison-Wesley Publishing Company - Provides comparisons of NFS to other network file systems and shows, in detail, how NFS communication occurs.


[3] The rpcbind service replaces portmap, which was used in previous versions of Red Hat Enterprise Linux to map RPC program numbers to IP address port number combinations. For more information, refer to Section 9.1.1, "Required Services".