
NFS 4


From Wikipedia:

Version 4 (RFC 3010, December 2000; revised in RFC 3530, April 2003), influenced by AFS and CIFS, includes performance improvements, mandates strong security, and introduces a stateful protocol.[3] Version 4 became the first version developed with the Internet Engineering Task Force (IETF) after Sun Microsystems handed over the development of the NFS protocols.

NFS version 4 minor version 1 (NFSv4.1) was approved by the IESG and received an RFC number in January 2010.[4] The NFSv4.1 specification aims:

To provide protocol support to take advantage of clustered server deployments, including the ability to provide scalable parallel access to files distributed among multiple servers (the pNFS extension).

In 1998, Sun Microsystems and the Internet Society completed an agreement giving the Internet Society control over future versions of NFS, starting with NFS Version 4. The Internet Society is the umbrella body for the Internet Engineering Task Force (IETF). IETF now has a working group chartered to define NFS Version 4. The goals of the working group include:

Better access and performance on the Internet
NFS can be used on the Internet, but it isn't designed to work through firewalls. Even if a firewall isn't in the way, certain aspects of NFS, such as pathname parsing, can be expensive on high-latency links. For example, if you want to look at /a/b/c/d/e on a server, your NFS Version 2 or 3 client will need to make five lookup requests before it can start reading the file. This is hardly noticeable on an Ethernet, but it is very annoying on a modem link.
Mandatory security
Most NFS implementations have a default form of authentication that relies on a trust between the client and server. With more people on the Internet, trust is insufficient. While there are security flavors for NFS that require strong authentication based on cryptography, these flavors aren't universally implemented. To claim conformance to NFS Version 4, implementations will have to offer a common set of security flavors.
Better heterogeneity
NFS has been implemented on a wide array of platforms, including Unix, PCs, Macintoshes, Java, MVS, and web browsers, but many aspects of it are very Unix-centric, which prevents it from being the file-sharing system of choice for non-Unix systems.

For example, the set of attributes that NFS Versions 2 and 3 use is derived completely from Unix without thought about useful attributes that Windows 98, for example, might need. The other side of the problem is that some existing NFS attributes are hard to implement by some non-Unix systems.

Internationalization and localization
This refers to pathname strings, not the contents of files. Technically, filenames in NFS Versions 2 and 3 can only be 7-bit ASCII, which is very limiting. Even using the eighth bit doesn't help users of Asian scripts, which need multibyte encodings.

There are no plans to add explicit internationalization and localization hooks to file content. The NFS protocol's model has always been to treat the content of files as an opaque stream of bytes that the application must interpret, and Version 4 will not vary from that.

There has been talk of adding an optional attribute that describes the MIME type of a file's contents.

Extensibility
After NFS Version 2 was released, it took nine years for the first NFS Version 3 implementations to appear on the market. It will take at least seven years from the time NFS Version 3 was first available for Version 4 implementations to be marketed. The gap between Version 2 and Version 3 was especially painful because of the write performance issue. Had NFS Version 2 included a method for adding procedures, the pain could have been reduced.

At the time this book was written, the NFS Version 4 working group published the initial NFS Version 4 specification in the form of RFC 3010, which you can peruse from IETF's web site at http://www.ietf.org. Several of the participants in the working group have prototype implementations that interoperate with each other. Early versions of the Linux implementation are available from http://www.citi.umich.edu/projects/nfsv4/. Some of the characteristics of NFS Version 4 that are not in Version 3 include:

No sideband protocols
The separate protocols for mounting and locking have been incorporated into the NFS protocol.
Statefulness
NFS Version 4 has an OPEN operation that tells the server the client has opened the file, and a corresponding CLOSE operation. Recall from Section 7.2.2 earlier in this chapter that crash recovery in NFS Versions 2 and 3 is simple because the server retains very little state. Adding such state makes recovery more complicated. When a server crashes, clients have a grace period to reestablish their OPEN state. When a client crashes, because the OPEN state is leased (i.e., it has a time limit that expires if not renewed), a dead client will eventually have its leases time out, allowing the server to delete any state. Another point made in Section 7.2.2 is that the operations in NFS Versions 2 and 3 are idempotent where possible, and the results of nonidempotent operations are cached in a duplicate request cache. For the most part, this is still the case with NFS Version 4; the only exceptions are the OPEN, CLOSE, and locking operations. Operations like RENAME continue to rely on the duplicate request cache, a solution with theoretical holes that has proven quite sufficient in practice. Thus NFS Version 4 retains much of the character of NFS Versions 2 and 3.
Aggressive caching
Because there is an OPEN operation, the client can be much more lazy about writing data to the server. Indeed, for temporary files, the server may never see any data written before the client closes and removes the file.

Security

Aside from lack of multivendor support, the other problem with NFS security flavors is that they become obsolete rather quickly. To mitigate this, IETF specified the RPCSEC_GSS security flavor that NFS and other RPC-based protocols could use to normalize access to different security mechanisms. RPCSEC_GSS accomplishes this using another IETF specification called the Generic Security Services Application Programming Interface (GSS-API). GSS-API is an abstract layer for generating messages that are encrypted or signed in a form that can be sent to a peer on the network for decryption or verification. GSS-API has been specified to work over Kerberos V5, the Simple Public Key Mechanism, and the Low Infrastructure Public Key system (LIPKEY). We will discuss NFS security, RPCSEC_GSS, and Kerberos V5 in more detail in Chapter 12.

The Secure Socket Layer (SSL) and IPSec were considered as candidates to provide NFS security. SSL wasn't feasible because it is confined to connection-oriented protocols like TCP, and NFS and RPC work over both TCP and UDP. IPSec wasn't feasible because, as noted in Section 7.2.7, NFS clients typically don't have a TCP connection per user, whereas it is hard, if not impossible, for an IPSec implementation to authenticate multiple users over a single TCP/IP connection.



Old News ;-)

[Apr 04, 2014] NFSv4

I have started moving this information to https://help.ubuntu.com/community/SettingUpNFSHowTo

NFSv4 exports exist in a single pseudo filesystem, where the real directories are mounted with the --bind option. Here is some additional information regarding this fact.
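
As a sketch of what this looks like (directory names and the client network are assumptions), the server binds the real directory under the export root in /etc/fstab and marks that root with fsid=0 in /etc/exports:

# /etc/fstab on the server: bind the real directory into the export root
/srv/files     /export/files   none   bind   0 0

# /etc/exports: /export is the NFSv4 pseudo-filesystem root
/export        192.168.0.0/24(rw,sync,fsid=0,no_subtree_check)
/export/files  192.168.0.0/24(rw,sync,no_subtree_check)

A client then mounts relative to the pseudo-root, e.g. mount -t nfs4 server:/files /mnt.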

NFSv4 Client

It is not clear what is meant by the UID/GID on the export being generic. This guide does not explicitly state that idmapd must also run on the client side; that is, /etc/default/nfs-common needs the same settings as described in the server section. If idmapd is running, the UID/GID are mapped correctly. Check that rpc.idmapd is running with ps ax | grep rpc.
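
A minimal /etc/idmapd.conf sketch (the Domain value is an assumption; it must be identical on client and server):

[General]
Domain = office.example.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup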

Add the following lines to /etc/rc.local to execute this mount after a short pause, once all devices are loaded.
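
A hypothetical sketch of such an entry (server name, pause length, and mount point are all assumptions):

# /etc/rc.local (hypothetical): wait briefly for the network, then mount
sleep 5
mount -t nfs4 nfs-server.domain:/ /mnt/nfs
exit 0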

NFSv4 and Autofs

Automount (autofs) can be used in combination with NFSv4. Details on the configuration of autofs can be found in the AutofsHowto. The configuration is identical to NFSv2 and NFSv3 except that you have to specify -fstype=nfs4 as an option. Automount supports NFSv4's ability to mount all file systems exported by a server at once. The exports are then treated as a single entity: they are all mounted when you step into one directory on the NFS server's file systems. When auto-mounting each file system separately, the behavior is slightly different; in that case you have to step into each file system to make it show up on the NFS client. A sketch of the corresponding maps follows.
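
Map file names, mount root, and server below are assumptions:

# /etc/auto.master: delegate the /nfs directory to the map below
/nfs  /etc/auto.nfs

# /etc/auto.nfs: mount the server's entire NFSv4 pseudo-filesystem at once
data  -fstype=nfs4  nfs-server.domain:/

With this map, stepping into /nfs/data mounts the server's whole exported tree in one operation.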

Create and distribute credentials

NFSv4 needs machine credentials for the server and for every client that wants to use the NFSv4 security features.

Create the credentials for the NFS server and all NFS clients on the Kerberos KDC, and distribute the extracted keys with scp to their destinations.

You can make sure that only this entry has been created by executing "sudo klist -e -k /etc/krb5.keytab".

Create nfs/ principals

Authenticate as your admin user. You can do this from any machine in your Kerberos domain, as long as kadmind is running; then add principals for your server and client machines. Replace "nfs-server.domain" with the fully qualified domain name of the machine. For example, if your server is called "snoopy" and your domain is "office.example.com", you would add a principal named "nfs/snoopy.office.example.com" for the server. Note: kadmin must be run with -l (locally) on the KDC if there is no kadmind. Please be aware of https://bugs.launchpad.net/ubuntu/+source/heimdal/+bug/309738/

Heimdal

$ kinit kadmin/admin
$ kadmin add -r nfs/nfs-server.domain
$ kadmin add -r nfs/nfs-client.domain

Now add these to the keytab files on your NFS server and client. Log in to your NFS server (as root, because you will need to edit the /etc/krb5.keytab file) and initialize as the Kerberos administrator. If your domain is fully kerberized, logging in as root will automatically give you the right access, in which case you don't need to use kinit anymore.

NFSserver# kinit kadmin/admin
NFSserver# ktutil get nfs/nfs-server.domain

And add it to the client's keytab file:

NFSclient# kinit kadmin/admin
NFSclient# ktutil get nfs/nfs-client.domain

MIT
$ kinit admin/admin
$ kadmin -q "addprinc -randkey nfs/nfs-server.domain"
$ kadmin -q "addprinc -randkey nfs/nfs-client.domain"

Now add these to the keytab files on your NFS server and client. Log in to your NFS server (as root, because you will need to edit the /etc/krb5.keytab file) and initialize as the Kerberos administrator.

NFSserver# kadmin -p admin/admin -q "ktadd nfs/nfs-server.domain"

And add it to the client's keytab file:

NFSclient# kadmin -p admin/admin -q "ktadd nfs/nfs-client.domain"

What's New in NFS Version 4?

By Bikash R. Choudhury

Since its introduction in 1984, the Network File System (NFS) has become a standard for network file sharing, particularly in the UNIX® and Linux® communities. Over the last 20 years, the NFS protocol has slowly evolved to adapt to new requirements and market changes.

For most of that time, NetApp has been working to drive the evolution of NFS to better meet user needs. NetApp Chief Technology Officer Brian Pawlowski coauthored the NFS version 3 specification (currently the most widely used version of NFS) and is cochair of the NFS version 4 (NFSv4) working group. NetApp's Mike Eisler and Dave Noveck are coauthors of the NFS version 4 specification (the latest available version of NFS) and are editors of the NFS version 4.1 specification that is currently under development.

                 NFSv3              NFSv4
Personality      Stateless          Stateful
Semantics        UNIX only          Supports UNIX and Windows
Authentication   Weak (AUTH_SYS)    Strong (Kerberos)
Identification   32-bit UID/GID     String based (user@domain)
Permissions      UNIX based         Windows-like access
Transport        UDP & TCP          TCP
Caching          Ad hoc             Delegations

Table 1) Comparison of NFSv3 and NFSv4.

NFS consists of server software (for instance, the software that runs on NetApp® storage) and client software running on hosts that require access to network storage. Proper operation requires that both sides of the connection, client and server, are mature and correctly implemented. Although NFS version 4 has been shipping from NetApp since Data ONTAP® 6.4, the code base has since undergone many changes and matured substantially; NFSv4 has now reached a point at which we believe it is ready for production use.
Today, client implementations are becoming stable. NetApp has also made some significant changes and enhancements in Data ONTAP 7.3 to support NFSv4. In this article, I'm going to explore three significant features of NFSv4 that have gotten a lot of attention: access control lists (ACLs), mandatory security, and delegations for client-side caching.

While this discussion will largely apply to any NFSv4 implementation, I'll also describe some details that are unique to NetApp and discuss best practices where appropriate.
Columns: AIX (client, server); BSD/OS X (client, server; not supported by Apple); EMC (server); Hummingbird (client); Linux (client, server; RHEL5); NetApp (server); Solaris 10 (client, server).

Delegations                  Yes Yes Yes Yes Yes
Access control lists         Yes Yes Yes Yes Yes
Kerberos V5                  Yes Yes Yes Yes Yes Yes Yes
File streams                 Yes Yes
I18n                         Yes Yes Yes Yes Yes Yes
Global namespace/referrals   Yes Yes Future
Table 2) NFSv4 client and server implementation status.

Access Control Lists

ACLs are one of the most frequently requested features by NetApp customers seeking greater compatibility for Windows clients. NFSv4 ACLs significantly improve NFS security and interoperability with CIFS.

ACLs allow access permissions to be granted or denied on a per-user basis for every file object, providing much finer-grained access control and a greater degree of selectivity than traditional UNIX mode permission bits. NFSv4 ACLs are based on the NT model, but they do not contain owner/group information. NFSv4 ACLs are made up of an array of access control entries (ACEs), which contain information regarding access allowed/denied, permission bits, user name/group name, and flags.
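
On a Linux client, the nfs4-acl-tools package exposes these ACEs directly; a sketch (user, domain, and path are assumptions):

# Add an ACE allowing alice read and write access, then inspect the result
nfs4_setfacl -a A::alice@office.example.com:rw /mnt/nfs/report.txt
nfs4_getfacl /mnt/nfs/report.txt

Here A is an allow ACE, the flags field is empty, and rw grants read and write permission.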

Because NetApp already offers ACL support for CIFS clients, the addition of ACLs in NFSv4 creates some unique considerations. NetApp provides three types of qtrees (UNIX, NTFS, and mixed) for use by different clients. The way NFSv4 ACLs are handled depends on the type of qtree:

UNIX Qtree

NTFS Qtree

Mixed Qtree

Obviously, you should choose the type of qtree you use carefully to get the desired results:

The only other best practice with regard to ACLs is to use no more than 192 ACEs per ACL. You can increase the number of ACEs per ACL to a current maximum of 400, but doing so could present problems should you need to revert to an earlier version of Data ONTAP or if you use SnapMirror® to go to an earlier version.

Mandatory Security

In addition to including ACLs, NFSv4 improves on the security of previous NFS versions by:

NFS security has been bolstered by the addition of security based on the Generic Security Services API (GSS-API), called RPCSEC_GSS [RFC2203]. All versions of NFS are capable of using RPCSEC_GSS; however, a conforming NFS version 4 implementation must implement it. RPCSEC_GSS occupies the same place in the RPC security framework as the commonly used AUTH_SYS flavor, which was the standard method of authentication in previous NFS versions.
RPCSEC_GSS differs from AUTH_SYS and other traditional NFS security mechanisms in two ways:

Figure 1) NFS with AUTH_SYS versus RPCSEC_GSS security.

Both NFSv3 and NFSv4 can use RPCSEC_GSS; however, AUTH_SYS is the default for NFSv3.
The only security mechanism that NetApp or most NFSv4 clients currently provide under RPCSEC_GSS is Kerberos 5. Kerberos is an authentication mechanism that uses symmetric-key encryption. It never sends passwords across the network, in either cleartext or encrypted form; instead it relies on encrypted tickets and session keys to authenticate users before they can use network resources. The Kerberos system uses key distribution centers (KDCs) that contain a centralized database of user names and passwords. NetApp supports two types of KDC: UNIX and Windows Active Directory.
You still have the option of using the weak authentication method of previous NFS versions (AUTH_SYS) should you require it. You can accomplish this by specifying sec=sys on the exportfs command line or in the /etc/exports file. Data ONTAP supports only a maximum of 16 supplemental group IDs in a credential plus 1 primary group ID when AUTH_SYS is used. A maximum of 32 supplemental group IDs is supported with Kerberos.
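
On a Linux server the same choice is expressed with the sec= export option; a sketch (path and network are assumptions):

# /etc/exports: offer Kerberos authentication but still accept AUTH_SYS
/export  192.168.0.0/24(rw,sync,sec=krb5:sys,no_subtree_check)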

Delegations for Client-Side Caching

In NFSv3, clients typically function as if there were contention for the files they have open (even though this is often not the case). Weak cache consistency is maintained through frequent requests from the client to the server to find out whether an open file has been modified by someone else, which can cause unnecessary network traffic and delays in high-latency environments. When a client holds a lock on a file, all write I/O is required to be synchronous, further impacting client-side performance in many situations.
NFSv4 differs from previous versions of NFS by allowing a server to delegate specific actions on a file to a client to enable more aggressive client caching of data and to allow caching of the locking state. A server cedes control of file updates and the locking state to a client via a delegation. This reduces latency by allowing the client to perform various operations and cache data locally. Two types of delegations currently exist: read and write. The server has the ability to call back a delegation from a client should there be contention for a file.
Once a client holds a delegation, it can perform operations on files whose data has been cached locally to avoid network latency and optimize I/O. The more aggressive caching that results from delegations can be a big help in environments with the following characteristics:

Data ONTAP supports both read and write delegations, and you can separately tune the NFSv4 server to enable or disable one or both types of delegations. When delegations are turned on, Data ONTAP will automatically grant a read delegation to a client that opens a file for reading, or a write delegation to a client opening a file for writing.
The options for enabling or disabling read and write delegations have been provided to give you a level of control over the impact of delegations. Ideally, the server determines whether it will provide a delegation to a client. Turning on read delegations in a highly read-intensive environment will be beneficial. Write delegations will also improve performance in engineering build environments in which each user writes to separate build files and there is no contention. However, write delegations may not be helpful when there are multiple writers to the same file.
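
In Data ONTAP these delegation switches are server options; a sketch assuming the nfs.v4 option names of the 7G releases (verify against your documentation):

options nfs.v4.read_delegation on
options nfs.v4.write_delegation off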

Finding Out More

If you're concerned about data security (and who isn't?), new NFSv4 features such as ACL support and mandatory security can significantly reduce the risk of unauthorized data access. Another important design goal for NFSv4 was improved performance on the Internet. Improved client caching behavior through delegations was intended to help satisfy this goal.
In addition to the highly visible changes I've described, NFSv4 includes a number of other significant changes, including:

If you want to learn more about these and other features of NFSv4, TR-3085 (PDF) provides a good general reference.

[Sep 3, 2008] Benchmarking NFSv3 vs. NFSv4 file operation performance

"These tests show no clear performance advantage to moving from NFSv3 to NFSv4."
June 20, 2008 | Linux.com

One issue with migrating to NFSv4 is that all of the filesystems you export have to be located under a single top-level exported directory. This means you have to change your /etc/exports file and also use Linux bind mounts to mount the filesystems you wish to export under your single top-level NFSv4 exported directory. Because the manner in which filesystems are exported in NFSv4 requires fairly large changes to system configuration, many folks might not have upgraded from NFSv3. This administration work is covered in other articles. This article provides performance benchmarks of NFSv3 against NFSv4 so you can get an idea of whether your network filesystem performance will be better after the migration.

I ran these performance tests using an Intel Q6600-based server with 8GB of RAM. The client was an AMD X2 with 2GB of RAM. Both machines were using Intel gigabit PCIe EXPI9300PT NICs, and the network between the two machines had virtually zero traffic on it for the duration of the benchmarks. The NICs provide a very low latency network, as described in a past article. While testing performance for this article I ran each benchmark multiple times to ensure performance was repeatable. The difference in RAM between the two machines changes how Bonnie++ is run by default. On the server, I ran the test using 16GB files, and on the client, 4GB files. Both machines were running 64-bit Fedora 9.

The filesystem exported from the server was an ext3 filesystem created on a RAID-5 over three 500GB hard disks. The exported filesystem was 60GB in size. The stripe_cache_size was 16384, meaning that for a three-disk RAID array, 192MB of RAM was used to cache pages at the RAID level. Default cache sizes for distributions might be in the 3-4MB range for the same RAID array. Using a larger cache directly improves write performance of the RAID. I also ran benchmarks locally on the server without using NFS to get an idea of the theoretical maximum performance NFS could achieve.

Some readers may point out that RAID-5 is not a desirable configuration, and certainly running it on only three disks is not a typical configuration. However, the relative performance of NFSv3 to NFSv4 is our main point of interest. I used a three-disk RAID-5 because it provided a filesystem that could be recreated for the benchmark. Recreating the filesystem removes factors such as file fragmentation that can adversely affect performance.

I tested NFSv3 with and without the async option. The async option allows the NFS server to respond to a write request before it is actually on disk. The NFS protocol normally requires the server to ensure data has been written to storage successfully before replying to the client. Depending on your needs, you might be running mounts with the async option on some filesystems for the performance improvement it offers, though you should be aware of what async implies for data integrity, in particular, potential undetectable data loss if the NFS server crashes.
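
A sketch of the two sides of this option (paths and addresses are assumptions): async on the server export is what permits the early reply, and the mount options match the configuration labels used in the tables below.

# Server /etc/exports: async lets the server acknowledge writes early
/export  192.168.0.0/24(rw,async,no_subtree_check)

# Client mounts corresponding to the benchmark configurations
mount -t nfs  -o noatime,nfsvers=3,async  server:/export  /mnt
mount -t nfs4 -o noatime                  server:/        /mnt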

The table below shows the Bonnie++ input, output, and seek benchmarks for the various NFS version 3 and 4 mounted filesystems, as well as the benchmark run locally on the server. As expected, reading performance is almost identical whether or not you use the async option. You can perform more than five times the number of "seeks" over NFS when using the async option, presumably because the server can avoid actually performing some of them when a subsequent seek is issued before the initial seek has completed. Unfortunately, the block sequential output for NFSv4 is no better than for NFSv3. Without the async option, output was about 50MB/sec, whereas the local filesystem was capable of 91MB/sec. With the async option, sequential block output over the NFS mount came much closer to local disk speeds.

Configuration                        Sequential Output                      Sequential Input          Random
                                   Per Char       Block       Rewrite      Per Char       Block        Seeks
                                  K/sec %CPU   K/sec %CPU   K/sec %CPU   K/sec %CPU   K/sec  %CPU   /sec %CPU
local filesystem                  62340   94   91939   22   45533   19   43046   69   109356   32   239.2   0
NFSv3 noatime,nfsvers=3           50129   86   47700    6   35942    8   52871   96   107516   11    1704   4
NFSv3 noatime,nfsvers=3,async     59287   96   83729   10   48880   12   52824   95   107582   10    9147  30
NFSv4 noatime                     49864   86   49548    5   34046    8   52990   95   108091   10    1649   4
NFSv4 noatime,async               58569   96   85796   10   49146   10   52856   95   108247   11    9135  21

The table below shows the Bonnie++ benchmarks for file creation, read, and deletion. Notice that the async option has a tremendous impact on file creation and deletion.

Configuration                         Sequential Create                  Random Create
                                   Create      Read      Delete      Create      Read      Delete
                                  /sec %CPU  /sec %CPU  /sec %CPU  /sec %CPU  /sec %CPU  /sec %CPU
NFSv3 noatime,nfsvers=3            186    0  6122   10   182    0   186    0   6604  10   181    0
NFSv3 noatime,nfsvers=3,async     3031   10  8789   11  3078    9  2921   11  11271  13  3069    9
NFSv4 noatime                       98    0  6005   13   193    0    93    0   6520  11   192    0
NFSv4 noatime,async               1314    8  7155   13  5350   12  1298    8   7537  12  5060    9

To test more day-to-day performance, I extracted an uncompressed linux-2.6.25.4.tar kernel source tarball and then deleted the extracted sources. The tarball was left uncompressed to ensure that the client's CPU was not slowing down extraction.

Configuration                    Find (m:ss)   Remove (m:ss)
local filesystem                    0:01           0:03
NFSv3 noatime,nfsvers=3             9:44           2:36
NFSv3 noatime,nfsvers=3,async       0:31           0:10
NFSv4 noatime                       9:52           2:27
NFSv4 noatime,async                 0:40           0:08

Wrap up

These tests show no clear performance advantage to moving from NFSv3 to NFSv4.

NFSv4 file creation is actually about half the speed of file creation over NFSv3, but NFSv4 can delete files more quickly than NFSv3. By far the largest speed gains come from the async option, though using it can lead to issues if the NFS server crashes or is rebooted.

Ben Martin has been working on filesystems for more than 10 years. He completed his Ph.D. and now offers consulting services focused on libferris, filesystems, and search solutions.


[Sep 12, 2006] NFSv4 delivers seamless network access, by Frank Pohlmann & Kenneth Hess

...Distributed file system access, therefore, needs rather more than a couple of commands enabling users to mount a directory on a computer networked to theirs. Sun Microsystems faced up to this challenge a number of years ago when it started propagating something called Remote Procedure Calls (RPCs) and NFS.

The basic problem that Sun was trying to solve was how to connect several UNIX computers to form a seamless distributed working environment without having to rewrite UNIX file system semantics and without having to add too many data structures specific to distributed file systems. Naturally, it was impossible for a network of UNIX workstations to appear as one large system: the integrity of each system had to be preserved while still enabling users to work on a directory on a different computer without experiencing unacceptable delays or limitations in their workflow.

To be sure, NFS does more than facilitate access to text files. You can distribute "runnable" applications through NFS, as well. Security procedures serve to shore up the network against the malicious takeovers of executables. But how exactly does this happen?

NFS is RPC

NFS is traditionally defined as an RPC application requiring TCP for the NFS server and either TCP or another network congestion-avoiding protocol for the NFS client. The Internet Engineering Task Force (IETF) has published the Request for Comments (RFC) for RPC as RFC 1831. The other standard vital to the functioning of an NFS implementation describes the data formats NFS uses; it has been published as RFC 1832, the "External Data Representation" (XDR) document.

Other RFCs are relevant to security and the encryption algorithms used to exchange authentication information during NFS sessions, but we focus on the basic mechanisms first. One protocol that concerns us is the Mount protocol, which is described in Appendix 1 of RFC 1813.

This RFC tells you which protocols make NFS work, but it doesn't tell you how NFS works today. You've already learned something important by knowing that NFS protocols have been documented as IETF standards. While the latest NFS release was stuck at version 3, RPCs had not progressed beyond the informational RFC stage and thus were perceived as an interest largely confined to Sun Microsystems' admittedly huge engineering task force and its proprietary UNIX variety. Sun NFS has been around in several versions since 1985 and therefore predates most current file system flavors by several years. Sun Microsystems turned over control of NFS to the IETF in 1998, and most NFS version 4 (NFSv4) activity occurred under the latter's aegis.

So, if you're dealing with RPC and NFS today, you're dealing with a version that reflects the concerns of companies and interest groups outside Sun's influence. Many Sun engineers, however, retain a deep interest in NFS development.

NFS version 3

NFS in its version 3 avatar (NFSv3) was not stateful; NFSv4 is. This fundamental statement is unlikely to raise any hackles today, although the TCP/IP world on which NFS builds has mostly been stateless -- a fact that has helped traffic-analysis and security software companies do quite well for themselves.

NFSv3 had to rely on several subsidiary protocols to seamlessly mount directories on remote computers without becoming too dependent on underlying file system mechanisms, and NFS has not always been successful in this attempt. For example, the Mount protocol supplied the initial file handle, while the Network Lock Manager protocol addressed file locking. Both operations required state, which NFSv3 did not provide. Therefore, you have complex interactions between protocol layers that do not reflect similar data-flow mechanisms. Add the fact that file and directory creation in Microsoft® Windows® works very differently from UNIX, and matters become rather complicated.

NFSv3 had to use several ports to accommodate some of its subsidiary protocols, and you get a rather complex picture of ports and protocol layers and all their attendant security concerns. Today, this model of operation has been abandoned, and all operations that subsidiary protocol implementations previously executed from individual ports are now handled by NFSv4 from a single, well-known port.
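
One practical consequence is that an NFSv4-only server can sit behind a one-line firewall rule; a sketch using iptables (chain and policy details are assumptions):

# Allow NFSv4, which uses the single well-known TCP port 2049
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT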

NFSv3 was also ready for Unicode-enabled file system operation -- an advantage that until the late 1990s had to remain fairly theoretical. In all, it mapped well to UNIX file system semantics and motivated competing distributed file system implementations like AFS and Samba. Not surprisingly, Windows support was poor, but Samba file servers have since addressed file sharing between UNIX and Windows systems.

NFS version 4

NFSv4 is, as we pointed out, stateful. Several radical changes made this behavior possible. The subsidiary protocols, which had to be invoked as user-level processes, have been abandoned; instead, every file-opening operation and quite a few RPC calls are turned into kernel-level file system operations.

All NFS versions defined each unit of work in terms of RPC client and server operations. Each NFSv3 request required a fairly generous number of RPC calls and port-opening calls to yield a result. Version 4 simplifies matters by introducing a so-called compound operation that subsumes a large number of file system object operations. The immediate effect is, of course, that far fewer RPC calls have to traverse the network, even though each call carries substantially more data while accomplishing far more. It is estimated that NFSv3 RPC calls required five times the number of client-server interactions that NFSv4 compound RPC procedures demand.

RPC is not really that important anymore and essentially serves as a wrapper around the operations encapsulated within the NFSv4 stack. This change also makes the protocol stack far less dependent on the underlying file system semantics. But the changes don't mean that the file system operations of other operating systems were neglected: for example, Windows shares require stateful open calls. Statefulness not only helps traffic analysis but, when included in file system semantics, makes file system operations much more traceable. Stateful open calls enable clients to cache file data and state -- something that would otherwise have to happen on the server. In the real world, where Windows clients are ubiquitous, NFS servers that work seamlessly and transparently with Windows shares are worth the time you'll spend customizing your NFS configuration.

Using NFS

NFS setup is generically similar to Samba. On the server side, you define file systems or directories to export, or share; the client side mounts those shared directories. When a remote client mounts an NFS-shared directory, that directory is accessed in the same way as any other local file system. Setting up NFS from the server side is an equally simple process. Minimally, you must create or edit the /etc/exports file and start the NFS daemon. To set up a more secure NFS service, you must also edit /etc/hosts.allow and /etc/hosts.deny. The client side of NFS requires only the mount command. For more information and options, consult the Linux® man pages.
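
A sketch of such TCP-wrapper entries (daemon list and network are assumptions):

# /etc/hosts.allow: accept RPC services only from the local network
portmap mountd statd lockd : 192.168.0.0/255.255.255.0

# /etc/hosts.deny: refuse everyone else
portmap mountd statd lockd : ALL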

The NFS server

Entries in the /etc/exports file have a straightforward format. To share a file system, edit the /etc/exports file and supply a file system (with options) in the general format:

directory (or file system)   client1(option1,option2) client2(option1,option2)

Note that there must be no space between the client name and its option list; a space causes the options to be applied to all hosts rather than to that client.

General options

Several general options are available to help you customize your NFS implementation. They include:

User mapping

Through user mapping in NFS, you can grant pseudo or actual user and group identity to a user working on an NFS volume. The NFS user has the user and group permissions that the mapping allows. Using a generic user and group for NFS volumes provides a layer of security and flexibility without a lot of administrative overhead.

User access is typically "squashed" when using files on an NFS-mounted file system, which means that a user accesses files as an anonymous user who, by default, has read-only permissions to those files. This behavior is especially important for the root user. Cases exist, however, in which you want a user to access files on a remote system as root or some other defined user. NFS allows you to specify a user -- by user identification (UID) number and group identification (GID) number -- to access remote files, and you can disable the normal behavior of squashing.

User mapping options include:

Listing 1 shows examples of /etc/exports entries.

Listing 1. Example /etc/exports entries

/opt/files   192.168.0.*
/opt/files   192.168.0.120
/opt/files   192.168.0.125(rw,all_squash,anonuid=210,anongid=100)
/opt/files   *(ro,insecure,all_squash)

The first entry exports the /opt/files directory to all hosts in the 192.168.0 network. The next entry exports /opt/files to a single host: 192.168.0.120. The third entry specifies host 192.168.0.125 and grants read/write access to the files with user permissions of user id=210 and group id=100. The final entry is for a "public" directory that has read-only access and allows access only under the anonymous account.

The NFS client

A word of caution

After you have used NFS to mount a remote file system, that system will also be part of any total system backup that you perform on the client system. This behavior can have potentially disastrous results if you don't exclude the newly mounted directories from the backup.

To use NFS as a client, the client computer must be running rpc.statd and portmap. You can run a quick ps -ef to check for these two daemons. If they are running (and they should be), you can mount the server's exported directory with the generic command:

mount server:directory  local mount point

Generally speaking, you must be running under root to mount a file system. From a remote computer, you can use the following command (assume that the NFS server has an IP address of 192.168.0.100):

mount 192.168.0.100:/opt/files  /mnt

Your distribution may require you to specify the file system type when mounting a file system. If so, run the command:

mount -t nfs 192.168.0.100:/opt/files /mnt

The remote directory should mount without issue if you've set up the server side correctly. Now, run the cd command to the /mnt directory, then run the ls command to see the files. To make this mount permanent, you must edit the /etc/fstab file and create an entry similar to the following:

192.168.0.100:/opt/files  /mnt  nfs  rw  0  0

Note: Refer to the fstab man page for more information on /etc/fstab entries.
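
For NFSv4, the analogous entry mounts the server's pseudo-root with the nfs4 type (assuming the server exports one):

192.168.0.100:/  /mnt  nfs4  rw  0  0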

Criticism drives improvement

Criticisms leveled at NFS security have been at the root of many improvements in NFSv4. The designers of the new version took positive measures to strengthen the security of NFS client-server interaction. In fact, they decided to include a whole new security model.

To understand the security model, you should familiarize yourself with something called the Generic Security Services application programming interface (GSS-API) version 2, update 1. The GSS-API is fully described in RFC 2743, which, unfortunately, is among the most difficult RFCs to understand.

We know from our experience with NFSv4 that it's not easy to make a network file system independent of the operating system. It's even more difficult to make all areas of security independent of operating systems and network protocols. We must have both, because NFS must be able to handle a fairly generous number of user operations, and it must do so without much reference to the specifics of network-protocol interaction.

Connections between NFS clients and servers are secured through what has been rather superficially called strong RPC security. NFSv4 uses the Open Network Computing Remote Procedure Call (ONCRPC) standard codified in RFC 1831. The security model had to be strengthened, and instead of relying on simple authentication (known as AUTH_SYS), a GSS-API-based security flavor known as RPCSEC_GSS has been defined and implemented as a mandatory part of NFSv4. The most important security mechanisms available under NFSv4 include Kerberos version 5 and LIPKEY.

Given that Kerberos has limitations when used across the Internet, LIPKEY has the pleasant advantage of working like Secure Sockets Layer (SSL), prompting users for their user names and passwords, while avoiding the TCP dependence of SSL -- a dependence that NFSv4 doesn't share. You can set NFS up to negotiate for security flavors if RPCSEC_GSS is not required. Past NFS versions did not have this ability and therefore could not negotiate for the quality of protection, data integrity, the requirement for authentication, or the type of encryption.

NFSv3 had come in for a substantial amount of criticism in the area of security. Given that NFSv3 servers ran on TCP, it was perfectly possible to run NFSv3 networks across the Internet. Unfortunately, it was also necessary to open several ports, which led to several well-publicized security breaches. By making port 2049 mandatory for NFS, it became possible to use NFSv4 across firewalls without having to pay too much attention to what ports other protocols, such as the Mount protocol, were listening to. Therefore, the elimination of the Mount protocol had multiple positive effects:

Is NFS still without peers?

NFSv4 is replacing NFSv3 on most UNIX and Linux systems. As a network file system, NFSv4 has few competitors. The Common Internet File System (CIFS)/Server Message Block (SMB) could be considered a viable competitor, given that it's native to all Windows varieties and (today) to Linux. AFS never made much commercial impact; it emphasized elements of distributed file systems that made data migration and replication easier.

Production-ready Linux versions of NFS have been around since the kernel reached version 2.2, but one of the more common failings of Linux kernel versions was that Linux adopted NFSv3 fairly late. In fact, it took a long time before Linux fully supported NFSv3. When NFSv4 came along, this lack was addressed quickly, and it wasn't just Solaris, AIX, and FreeBSD that enjoyed full NFSv4 support.

NFS is considered a mature technology today, and it has a fairly big advantage: It's secure and usable, and most users find it convenient to use one secure logon to access a network and its facilities, even when files and applications reside on different systems. Although this might look like a disadvantage compared to distributed file systems, which hide system structures from users, don't forget that many applications use files from different operating systems and, therefore, computers. NFS makes it easy to work on different operating systems without having to worry too much about the file system semantics and their performance characteristics.


IBM Redbooks: Securing NFS in AIX: An Introduction to NFS v4 in AIX 5L Version 5.3

NFS Version 4 (NFS V4) is the latest defined client-to-server protocol for NFS. A significant upgrade from NFS V3, it was defined under the IETF framework by many contributors. NFS V4 introduces major changes to the way NFS has been implemented and used before now, including stronger security, wide area network sharing, and broader platform adaptability.

This IBM Redbook is intended to provide a broad understanding of NFS V4 and specific AIX NFS V4 implementation details. It discusses considerations for deployment of NFS V4, with a focus on exploiting the stronger security features of the new protocol.

In the initial implementation of NFS V4 in AIX 5.3, the most important functional differences are related to security. Chapter 3 and parts of the planning and implementation chapters in Part 2 cover this topic in detail.

Recommended Links

Sites

Learning NFSv4 with Fedora Core 2

Red Hat: Setup NFS v4.0 File Server

NetApp Tech OnTap: What's New in NFS Version 4

CITI: Projects: NFS Version 4 Open Source Reference Implementation

NFSv4 new features (Network File System version 4) and NFS on ...

Linux NFS FAQ


