From Wikipedia:
Version 4 (RFC 3010, December 2000; revised in RFC 3530, April 2003), influenced by AFS and CIFS, includes performance improvements, mandates strong security, and introduces a stateful protocol.[3] Version 4 became the first version developed with the Internet Engineering Task Force (IETF) after Sun Microsystems handed over development of the NFS protocols. NFS version 4 minor version 1 (NFSv4.1) was approved by the IESG and received an RFC number in January 2010.[4] The NFSv4.1 specification aims:
To provide protocol support to take advantage of clustered server deployments including the ability to provide scalable parallel access to files distributed among multiple servers (pNFS extension).
In 1998, Sun Microsystems and the Internet Society completed an agreement giving the Internet Society control over future versions of NFS, starting with NFS Version 4. The Internet Society is the umbrella body for the Internet Engineering Task Force (IETF). IETF now has a working group chartered to define NFS Version 4. The goals of the working group include:
For example, the set of attributes that NFS Versions 2 and 3 use is derived completely from Unix without thought about useful attributes that Windows 98, for example, might need. The other side of the problem is that some existing NFS attributes are hard to implement by some non-Unix systems.
There are no plans to add explicit internationalization and localization hooks to file content. The NFS protocol's model has always been to treat the content of files as an opaque stream of bytes that the application must interpret, and Version 4 will not vary from that.
There has been talk of adding an optional attribute that describes the MIME type of contents of the file.
At the time this book was written, the NFS Version 4 working group published the initial NFS Version 4 specification in the form of RFC 3010, which you can peruse from IETF's web site at http://www.ietf.org. Several of the participants in the working group have prototype implementations that interoperate with each other. Early versions of the Linux implementation are available from http://www.citi.umich.edu/projects/nfsv4/. Some of the characteristics of NFS Version 4 that are not in Version 3 include:
Aside from lack of multivendor support, the other problem with NFS security flavors is that they become obsolete rather quickly. To mitigate this, IETF specified the RPCSEC_GSS security flavor that NFS and other RPC-based protocols could use to normalize access to different security mechanisms. RPCSEC_GSS accomplishes this using another IETF specification called the Generic Security Services Application Programming Interface (GSS-API). GSS-API is an abstract layer for generating messages that are encrypted or signed in a form that can be sent to a peer on the network for decryption or verification. GSS-API has been specified to work over Kerberos V5, the Simple Public Key Mechanism, and the Low Infrastructure Public Key system (LIPKEY). We will discuss NFS security, RPCSEC_GSS, and Kerberos V5 in more detail in Chapter 12.
The Secure Socket Layer (SSL) and IPsec were considered as candidates to provide NFS security. SSL wasn't feasible because it was confined to connection-oriented protocols like TCP, whereas NFS and RPC work over both TCP and UDP. IPsec wasn't feasible because, as noted in Section 7.2.7, NFS clients typically don't have a TCP connection per user, and it is hard, if not impossible, for an IPsec implementation to authenticate multiple users over a single TCP/IP connection.
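On Linux, the RPCSEC_GSS mechanisms described above surface as the Kerberos-backed security flavors krb5 (authentication only), krb5i (authentication plus integrity checksums), and krb5p (authentication, integrity, and privacy/encryption). A minimal sketch, assuming a Linux kernel NFS server with a working Kerberos setup; the path, domain, and hostname are placeholders, not taken from the text above:
# /etc/exports on the server: offer the export with any of the three Kerberos flavors
/srv/data *.example.com(rw,sec=krb5:krb5i:krb5p,no_subtree_check)
On the client, request one flavor at mount time:
# mount -t nfs4 -o sec=krb5i server.example.com:/srv/data /mnt/data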
I have started moving this information to https://help.ubuntu.com/community/SettingUpNFSHowTo
NFSv4 exports exist in a single pseudo filesystem, where the real directories are mounted with the --bind option. Here is some additional information regarding this fact.
- Let's say we want to export our users' home directories in /home/users. First we create the export filesystem:
# mkdir /export
# mkdir /export/users
and mount the real users directory with:
# mount --bind /home/users /export/users
To save us from retyping this after every reboot we add the following line to /etc/fstab:
/home/users /export/users none bind 0 0
- In /etc/default/nfs-kernel-server we set:
NEED_SVCGSSD=no # no is default
because we are not activating NFSv4 security this time.
- To export our directories to a local network 192.168.1.0/24,
we add the following two lines to /etc/exports
/export 192.168.1.0/24(rw,fsid=0,no_subtree_check,sync)
/export/users 192.168.1.0/24(rw,nohide,insecure,no_subtree_check,sync)
- Be aware of the following points:
- there is no space between the IP address and the options
- you can list more IP addresses and options; they are space separated as in:
- /export/users 192.168.1.1(rw,no_subtree_check) 192.168.1.2(rw,no_root_squash)
- using the insecure option allows clients such as Mac OS X to connect on ports above 1024. This option is not otherwise "insecure".
- Setting the crossmnt option on the main pseudo mountpoint has the same effect as setting nohide on the sub-exports: it allows the client to map the sub-exports within the pseudo filesystem. These two options are mutually exclusive (see the sketch after this list).
- Note that when locking down which clients can map an export by setting the IP address, you can either specify an address range (as shown above) using a subnet mask, or you can list a single IP address followed by the options. Using a subnet mask for a single client's full IP address is not required; just use something like 192.168.1.123(rw). There are a couple of ways to specify the subnet mask: one style is 255.255.255.0, the other is /24 as shown. Both should work. The subnet mask marks which part of the IP address must be evaluated.
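As mentioned in the list above, crossmnt on the pseudo-root is an alternative to putting nohide on each sub-export. A hypothetical /etc/exports variant illustrating that choice (same placeholder network as above; use either crossmnt or nohide, not both):
/export 192.168.1.0/24(rw,fsid=0,crossmnt,no_subtree_check,sync)
/export/users 192.168.1.0/24(rw,insecure,no_subtree_check,sync)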
- Restart the service
# service nfs-kernel-server restart
NFSv4 Client
- On the client we can mount the complete export tree with one command:
# mount -t nfs4 -o proto=tcp,port=2049 nfs-server:/ /mnt
- We can also mount an exported subtree with:
# mount -t nfs4 -o proto=tcp,port=2049 nfs-server:/users /home/users
- To save us from retyping this after every reboot we add the following
line to /etc/fstab:
nfs-server:/ /mnt nfs4 _netdev,auto 0 0
- where the auto option mounts on startup and the _netdev option can be used by scripts to mount the filesystem when the network is available. Under NFSv3 (type nfs) the _netdev option will tell the system to wait to mount until the network is available. With a type of nfs4 this option is ignored, but can be used with mount -O _netdev in scripts later. Currently Ubuntu Server does not come with the scripts needed to auto-mount nfs4 entries in /etc/fstab after the network is up.
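If you write such a script yourself, mount(8) can restrict -a to fstab entries that carry a given option, so a network-up script could run something like the following. This is only a sketch, assuming the nfs4 fstab entry shown above:
# mount -a -t nfs4 -O _netdev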
- A note on remote NFS paths:
- They don't work the way they did in NFSv3. NFSv4 has a global root directory and all exported directories are children to it. So what would have been nfs-server:/export/users on NFSv3 is nfs-server:/users on NFSv4, because /export is the root directory.
- Note regarding UID/GID permissions on NFSv4 without Kerberos
- They do not work. Can someone please help investigate? Following this guide results in generic UID/GID on the export despite the client and server having the same UID. (In my experience it works, at least with "precise", if uid/gid are equal on both sides. hwehner) Mounting the same share over NFSv3 handles UID/GID correctly. Does this need Kerberos to work fully? According to http://arstechnica.com/civis/viewtopic.php?f=16&t=1128994 and http://opensolaris.org/jive/thread.jspa?threadID=68381 you need to use Kerberos for the mapping to have any effect.
This is a possibly related bug: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=500778
It is not clear what is meant by UID/GID on the export being generic. This guide does not explicitly state that idmapd must also run on the client side, i.e. /etc/default/nfs-common needs the same settings as described in the server section. If idmapd is running, the UID/GID are mapped correctly. Check with ps ax | grep rpc that rpc.idmapd is running.
- If all directory listings show just "nobody" and "nogroup" instead of real user and group names, then you might want to check the Domain parameter set in /etc/idmapd.conf. NFSv4 client and server should be in the same domain. Other operating systems might derive the NFSv4 domain name from the domain name mentioned in /etc/resolv.conf (e.g. Solaris 10).
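For reference, the Domain setting lives in the [General] section of /etc/idmapd.conf and must match on client and server; a minimal sketch with a placeholder domain:
[General]
Domain = example.com
[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup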
- If you have a slow network connection and are not establishing the mount at reboot, you can change the line in /etc/fstab:
nfs-server:/ /mnt nfs4 noauto 0 0
and execute this mount after a short pause once all devices are loaded. Add the following lines to /etc/rc.local:
# sleep 5
# mount /mnt
- In Ubuntu 11.10 or earlier, if you experience problems like this:
Warning: rpc.idmapd appears not to be running.
All uids will be mapped to the nobody uid.
mount: unknown filesystem type 'nfs4'
(all directories and files on the client are owned by uid/gid 4294967294:4294967294), then you need to set in /etc/default/nfs-common:
NEED_IDMAPD=yes
and restart nfs-common:
# service idmapd restart
The "unknown filesystem" error will disappear as well. (Make sure you restart idmapd on both client and server after making changes, otherwise uid/gid problems can persist.)
NFSv4 and Autofs
Automount (or autofs) can be used in combination with NFSv4. Details on the configuration of autofs can be found in the AutofsHowto. The configuration is identical to NFSv2 and NFSv3 except that you have to specify -fstype=nfs4 as option. Automount supports NFSv4's feature to mount all file systems exported by server at once. The exports are then treated as an entity, i.e. they are "all" mounted when you step into "one" directory on the NFS server's file systems. When auto-mounting each file system separately the behavior is slightly different. In that case you would have to step into "each" file system to make it show up on the NFS client.
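A minimal autofs sketch for the NFSv4 case described above; the map names, the /nfs mount point, and the nfs-server hostname are placeholders:
# /etc/auto.master entry: manage mounts under /nfs using the map file /etc/auto.nfs
/nfs /etc/auto.nfs
# /etc/auto.nfs: mount a single export (first entry) or the whole exported tree at once (second entry)
users -fstype=nfs4,rw nfs-server:/users
all -fstype=nfs4,rw nfs-server:/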
Create and distribute credentials
NFSv4 needs machine credentials for the server and for every client that wants to use the NFSv4 security features.
Create the credentials for the nfs-server and all nfs-clients on the Kerberos KDC and distribute the extracted keys with scp to the destination.
You can make sure that only this entry has been created by executing "sudo klist -e -k /etc/krb5.keytab".
Create nfs/ principals
Authenticate as your admin user. You can do this from any machine in your kerberos-domain, as long as your kadmind is running; then add principals for your server and client machines. Replace "nfs-server.domain" with the fully qualified domain name of the machines. For example, if your server is called "snoopy" and your domain is "office.example.com", you would add a principal named "nfs/snoopy.office.example.com" for the server. Note: kadmin must be run with -l (locally) on the KDC if there is no kadmind. Please be aware of https://bugs.launchpad.net/ubuntu/+source/heimdal/+bug/309738/
Heimdal
$ kinit kadmin/admin
$ kadmin add -r nfs/nfs-server.domain
$ kadmin add -r nfs/nfs-client.domain
Now add these to the keytab files on your NFS server and client. Log in to your NFS server (as root, because you will need to edit the /etc/krb5.keytab file) and initialize as Kerberos administrator. If your domain is fully kerberized, logging in as root will automatically give you the right access, in which case you don't need to use "kinit" anymore.
NFSserver# kinit kadmin/admin
NFSserver# ktutil get nfs/nfs-server.domain
And add it to the client's keytab file:
NFSclient# kinit kadmin/admin
NFSclient# ktutil get nfs/nfs-client.domain
MIT
$ kinit admin/admin
$ kadmin -q "addprinc -randkey nfs/nfs-server.domain"
$ kadmin -q "addprinc -randkey nfs/nfs-client.domain"
Now add these to the keytab files on your NFS server and client. Log in to your NFS server (as root, because you will need to edit the /etc/krb5.keytab file) and initialize as Kerberos administrator.
NFSserver# kadmin -p admin/admin -q "ktadd nfs/nfs-server.domain"
And add it to the client's keytab file:
NFSclient# kadmin -p admin/admin -q "ktadd nfs/nfs-client.domain"
By Bikash R. Choudhury
Since its introduction in 1984, the Network File System (NFS) has become a standard for network file sharing, particularly in the UNIX® and Linux® communities. Over the last 20 years, the NFS protocol has slowly evolved to adapt to new requirements and market changes.
For most of that time, NetApp has been working to drive the evolution of NFS to better meet user needs. NetApp Chief Technology Officer Brian Pawlowski coauthored the NFS version 3 specification (currently the most widely used version of NFS) and is cochair of the NFS version 4 (NFSv4) working group. NetApp's Mike Eisler and Dave Noveck are coauthors of the NFS version 4 specification (the latest available version of NFS) and are editors for the NFS version 4.1 specification that is currently under development.
Feature        | NFSv3            | NFSv4
Personality    | Stateless        | Stateful
Semantics      | UNIX only        | Supports UNIX and Windows
Authentication | Weak (AUTH_SYS)  | Strong (Kerberos)
Identification | 32-bit UID/GID   | String based (user@domain)
Permissions    | UNIX based       | Windows-like access
Transport      | UDP & TCP        | TCP
Caching        | Ad-hoc           | Delegations
Table 1) Comparison of NFSv3 and NFSv4.
NFS consists of server software (for instance, the software that runs on NetApp® storage) and client software running on hosts that require access to network storage. Proper operation requires that both sides of the connection, client and server, are mature and correctly implemented. Although NFS version 4 has been shipping from NetApp since Data ONTAP® 6.4, our code base has since undergone a lot of changes and matured substantially, and NFSv4 has now reached a point at which we believe it is ready for production use.
Today, client implementations are becoming stable. NetApp has also made some significant changes and enhancements in Data ONTAP 7.3 to support NFSv4. In this article, I'm going to explore three significant features of NFSv4 that have gotten a lot of attention:
- Access control lists (ACLs) for security and Windows® compatibility
- Mandatory security with Kerberos
- Delegations for client-side caching
While this discussion will largely apply to any NFSv4 implementation, I'll also describe some details that are unique to NetApp and discuss best practices where appropriate.
Table 2) NFSv4 client and server implementation status (columns: AIX client/server, BSD/OS X client/server (not supported by Apple), EMC server, Hummingbird client, Linux client/server (RHEL5), NetApp server, Solaris 10 client/server; rows: Delegations, Access Control Lists, Kerberos V5, File Streams, I18n, Global Namespace/Referrals; most cells "Yes", Global Namespace/Referrals marked "Future" in places).
Access Control Lists
ACLs are one of the most frequently requested features by NetApp customers seeking greater compatibility for Windows clients. NFSv4 ACLs significantly improve NFS security and interoperability with CIFS.
ACLs allow access permissions to be granted or denied on a per-user basis for every file object, providing much finer-grained access control and a greater degree of selectivity than traditional UNIX mode permission bits. NFSv4 ACLs are based on the NT model, but they do not contain owner/group information. NFSv4 ACLs are made up of an array of access control entries (ACEs), which contain information regarding access allowed/denied, permission bits, user name/group name, and flags.
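On a Linux client, NFSv4 ACLs and their ACEs can be inspected and edited with the nfs4-acl-tools utilities; a brief sketch, with a placeholder file and user principal (the tools must be installed and the file must live on an NFSv4 mount):
# show the ACL (the list of ACEs) for a file
nfs4_getfacl /mnt/data/report.txt
# add an Allow ACE granting read data (r) and write data (w) to a user principal
nfs4_setfacl -a "A::joe@example.com:rw" /mnt/data/report.txt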
Because NetApp already offers ACL support for CIFS clients, the addition of ACLs in NFSv4 creates some unique considerations. NetApp provides three types of qtrees (UNIX, NTFS, and mixed) for use by different clients. The way that NFSv4 ACLs are handled depends on the type of qtree:
UNIX Qtree
- NFSv4 ACLs and mode bits effective
- Windows clients cannot set attributes
- UNIX semantics win
NTFS Qtree
- NT ACLs and mode bits effective; UNIX clients cannot set attributes
- NFSv4 ACLs are generated from the mode bits for NFS clients accessing files with an NT ACL
- NT semantics win
Mixed Qtree
- NFSv4 ACLs, NT ACLs, and mode bits effective
- Windows and UNIX clients can set attributes
- NFSv4 ACL is generated from the mode bits for files having NT ACLs
Obviously, you should choose the type of qtree you use carefully to get the desired results:
- NFS access only: UNIX qtree
- Mixed access: mixed qtree
- Mostly CIFS access: NTFS qtree
- CIFS only: NTFS qtree
The only other best practice with regard to ACLs is to use no more than 192 ACEs per ACL. You can increase the number of ACEs per ACL to a current maximum of 400, but doing so could present problems should you need to revert to an earlier version of Data ONTAP or if you use SnapMirror® to go to an earlier version.
Mandatory Security
In addition to including ACLs, NFSv4 improves on the security of previous NFS versions by:
- Mandating the availability of strong RPC security that depends on cryptography
- Negotiating the type of security used between server and client via a secure in-band system
- Using character strings instead of integers to represent user and group identifiers
NFS security has been bolstered by the addition of security based on the Generic Security Services API (GSS-API), called RPCSEC_GSS [RFC 2203]. All versions of NFS are capable of using RPCSEC_GSS; however, a conforming NFS version 4 implementation must implement RPCSEC_GSS. RPCSEC_GSS plays a role analogous to the commonly used AUTH_SYS security that was the standard method of authentication in previous NFS versions.
RPCSEC_GSS differs from AUTH_SYS and other traditional NFS security mechanisms in two ways:
- RPCSEC_GSS does more than authentication. It is capable of performing integrity checksums and encryption of the entire body of the RPC request and response. Hence, RPCSEC_GSS provides security beyond just authentication.
- Because RPCSEC_GSS simply encapsulates the GSS-API messaging tokens-it merely acts as a transport for mechanism-specific tokens for security mechanisms such as Kerberos-adding new security mechanisms (as long as they conform to GSS-API) does not require rewriting significant portions of NFS.
Figure 1) NFS with AUTH_SYS versus RPCSEC_GSS security.
Both NFSv3 and NFSv4 can use RPCSEC_GSS. However, AUTH_SYS is the default for NFSv3.
The only security mechanism that NetApp or most NFSv4 clients currently provide under RPCSEC_GSS is Kerberos 5. Kerberos is an authentication mechanism that uses symmetrical key encryption. It never sends passwords across the network in either cleartext or encrypted form and relies on encrypted tickets and session keys to authenticate users before they can use network resources. The Kerberos system utilizes key distribution centers (KDCs) that contain a centralized database of user names and passwords. NetApp supports two types of KDCs: UNIX and Windows Active Directory.
You still have the option of using the weak authentication method of previous NFS versions (AUTH_SYS) should you require it. You can accomplish this by specifying sec=sys on the exportfs command line or in the /etc/exports file. Data ONTAP only supports a maximum of 16 supplemental group IDs in a credential plus 1 primary group ID when AUTH_SYS is used. A maximum of 32 supplemental group IDs is supported with Kerberos.
Delegations for Client-Side Caching
In NFSv3, clients typically function as if there is contention for the files they have open (even though this is often not the case). Weak cache consistency is maintained through frequent requests from the client to the server to find out whether an open file has been modified by someone else, which can cause unnecessary network traffic and delays in high-latency environments. In situations in which a client holds a lock on a file, all write I/O is required to be synchronous, further impacting client-side performance in many situations.
NFSv4 differs from previous versions of NFS by allowing a server to delegate specific actions on a file to a client to enable more aggressive client caching of data and to allow caching of the locking state. A server cedes control of file updates and the locking state to a client via a delegation. This reduces latency by allowing the client to perform various operations and cache data locally. Two types of delegations currently exist: read and write. The server has the ability to call back a delegation from a client should there be contention for a file.
Once a client holds a delegation, it can perform operations on files whose data has been cached locally to avoid network latency and optimize I/O. The more aggressive caching that results from delegations can be a big help in environments with the following characteristics:
- Frequent opens and closes
- Frequent GETATTRs
- File locking
- Read-only sharing
- High latency
- Fast clients
- Heavily loaded server with many clients
Data ONTAP supports both read and write delegations, and you can separately tune the NFSv4 server to enable or disable one or both types of delegations. When delegations are turned on, Data ONTAP will automatically grant a read delegation to a client that opens a file for reading, or a write delegation to a client opening a file for writing.
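As an illustration only (not taken from the article): on 7-mode Data ONTAP releases this tuning is typically exposed as a pair of NFS options. The option names below are assumptions and should be verified against the documentation for your ONTAP release before use:
options nfs.v4.read_delegation on
options nfs.v4.write_delegation on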
The options for enabling or disabling read and write delegations have been provided to give you a level of control over the impact of delegations. Ideally, the server determines if it will provide a delegation to clients or not. Turning on read delegations in a highly read-intensive environment will be beneficial. Write delegations in engineering build environments in which each user writes to separate build files and there is no contention will also improve performance. However, write delegations may not be helpful in scenarios in which there are multiple writers to the same file.
Finding Out More
If you're concerned about data security (and who isn't?), new NFSv4 features such as ACL support and mandatory security can significantly reduce the risk of unauthorized data access. Another important design goal for NFSv4 was improved performance on the Internet. Improved client caching behavior through delegations was intended to help satisfy this goal.
In addition to the highly visible changes I've described, NFSv4 includes a number of other significant changes, including:
- Elimination of the independent mount daemon and other accessory services that require additional open ports
- Compound requests that group common tasks into a single operation to accelerate response and reduce network traffic
If you want to learn more about these and other features of NFSv4, TR-3085 (PDF) provides a good general reference.
June 20, 2008 | Linux.com
One issue with migrating to NFSv4 is that all of the filesystems you export have to be located under a single top-level exported directory. This means you have to change your /etc/exports file and also use Linux bind mounts to mount the filesystems you wish to export under your single top-level NFSv4 exported directory. Because the manner in which filesystems are exported in NFSv4 requires fairly large changes to system configuration, many folks might not have upgraded from NFSv3. This administration work is covered in other articles. This article provides performance benchmarks of NFSv3 against NFSv4 so you can get an idea of whether your network filesystem performance will be better after the migration.
I ran these performance tests using an Intel Q6600-based server with 8GB of RAM. The client was an AMD X2 with 2GB of RAM. Both machines were using Intel gigabit PCIe EXPI9300PT NICs, and the network between the two machines had virtually zero traffic on it for the duration of the benchmarks. The NICs provide a very low latency network, as described in a past article. While testing performance for this article I ran each benchmark multiple times to ensure performance was repeatable. The difference in RAM between the two machines changes how Bonnie++ is run by default. On the server, I ran the test using 16GB files, and on the client, 4GB files. Both machines were running 64-bit Fedora 9.
The filesystem exported from the server was an ext3 filesystem created on a RAID-5 over three 500GB hard disks. The exported filesystem was 60GB in size. The stripe_cache_size was 16384, meaning that for a three-disk RAID array, 192MB of RAM was used to cache pages at the RAID level. Default cache sizes for distributions might be in the 3-4MB range for the same RAID array. Using a larger cache directly improves write performance of the RAID. I also ran benchmarks locally on the server without using NFS to get an idea of the theoretical maximum performance NFS could achieve.
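For reference, this md RAID parameter is exposed through sysfs; a sketch assuming the array is /dev/md0 (16384 entries x one 4 KiB page x 3 member disks gives roughly the 192MB quoted above):
# cat /sys/block/md0/md/stripe_cache_size
# echo 16384 > /sys/block/md0/md/stripe_cache_size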
Some readers may point out that RAID-5 is not a desirable configuration, and certainly running it on only three disks is not a typical configuration. However, the relative performance of NFSv3 to NFSv4 is our main point of interest. I used a three-disk RAID-5 because it had a filesystem that could be recreated for the benchmark. Recreating the filesystem removes factors such as file fragmentation that can adversely affect performance.
I tested NFSv3 with and without the async option. The async option allows the NFS server to respond to a write request before it is actually on disk. The NFS protocol normally requires the server to ensure data has been written to storage successfully before replying to the client. Depending on your needs, you might be running mounts with the async option on some filesystems for the performance improvement it offers, though you should be aware of what async implies for data integrity, in particular, potential undetectable data loss if the NFS server crashes.
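For illustration, the choice is made per export in /etc/exports; a sketch with placeholder paths and network, contrasting the safe default with the faster but riskier option:
/srv/export 192.168.0.0/24(rw,sync,no_subtree_check)
/srv/scratch 192.168.0.0/24(rw,async,no_subtree_check)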
The table below shows the Bonnie++ input, output, and seek benchmarks for the various NFS version 3 and 4 mounted filesystems as well as the benchmark that was run on the server. As expected, the reading performance is almost identical whether or not you are using the async option. You can perform more than five times the number of "seeks" over NFS when using the async option, presumably because the server can avoid actually performing some of them when a subsequent seek is issued before the initial seek has completed. Unfortunately the block sequential output for NFSv4 is not any better than for NFSv3. Without the async option, output was about 50MB/s, whereas the local filesystem was capable of about 91MB/s. When using the async option, sequential block output came much closer to local disk speeds over the NFS mount.
Configuration                  | Seq. Output Per Char | Seq. Output Block | Seq. Output Rewrite | Seq. Input Per Char | Seq. Input Block | Random Seeks
                               | K/sec   % CPU        | K/sec   % CPU     | K/sec   % CPU       | K/sec   % CPU       | K/sec   % CPU    | /sec   % CPU
local filesystem               | 62340   94           | 91939   22        | 45533   19          | 43046   69          | 109356  32       | 239.2  0
NFSv3 noatime,nfsvers=3        | 50129   86           | 47700   6         | 35942   8           | 52871   96          | 107516  11       | 1704   4
NFSv3 noatime,nfsvers=3,async  | 59287   96           | 83729   10        | 48880   12          | 52824   95          | 107582  10       | 9147   30
NFSv4 noatime                  | 49864   86           | 49548   5         | 34046   8           | 52990   95          | 108091  10       | 1649   4
NFSv4 noatime,async            | 58569   96           | 85796   10        | 49146   10          | 52856   95          | 108247  11       | 9135   21
The table below shows the Bonnie++ benchmarks for file creation, read, and deletion. Notice that the async option has a tremendous impact on file creation and deletion.
Configuration                  | Seq. Create | Seq. Read | Seq. Delete | Rand. Create | Rand. Read | Rand. Delete
                               | /sec % CPU  | /sec % CPU| /sec % CPU  | /sec % CPU   | /sec % CPU | /sec % CPU
NFSv3 noatime,nfsvers=3        | 186   0     | 6122  10  | 182   0     | 186   0      | 6604  10   | 181   0
NFSv3 noatime,nfsvers=3,async  | 3031  10    | 8789  11  | 3078  9     | 2921  11     | 11271 13   | 3069  9
NFSv4 noatime                  | 98    0     | 6005  13  | 193   0     | 93    0      | 6520  11   | 192   0
NFSv4 noatime,async            | 1314  8     | 7155  13  | 5350  12    | 1298  8      | 7537  12   | 5060  9
To test more day-to-day performance I extracted the uncompressed linux-2.6.25.4.tar Linux kernel source tarball and then deleted the extracted sources. Note that the original source tarball was not compressed in order to ensure that the CPU of the client was not slowing down extraction.
Configuration                  | Find (m:ss) | Remove (m:ss)
local filesystem               | 0:01        | 0:03
NFSv3 noatime,nfsvers=3        | 9:44        | 2:36
NFSv3 noatime,nfsvers=3,async  | 0:31        | 0:10
NFSv4 noatime                  | 9:52        | 2:27
NFSv4 noatime,async            | 0:40        | 0:08
Wrap up
These tests show no clear performance advantage to moving from NFSv3 to NFSv4.
NFSv4 file creation is actually about half the speed of file creation over NFSv3, but NFSv4 can delete files quicker than NFSv3. By far the largest speed gains come from running with the async option on, though using this can lead to issues if the NFS server crashes or is rebooted.
Ben Martin has been working on filesystems for more than 10 years. He completed his Ph.D. and now offers consulting services focused on libferris, filesystems, and search solutions.
...Distributed file system access, therefore, needs rather more than a couple of commands enabling users to mount a directory on a computer networked to theirs. Sun Microsystems faced up to this challenge a number of years ago when it started propagating something called Remote Procedure Calls (RPCs) and NFS.
The basic problem that Sun was trying to solve was how to connect several UNIX computers to form a seamless distributed working environment without having to rewrite UNIX file system semantics and without having to add too many data structures specific to distributed file systems. Naturally, it was impossible for a network of UNIX workstations to appear as one large system: the integrity of each system had to be preserved while still enabling users to work on a directory on a different computer without experiencing unacceptable delays or limitations in their workflow.
To be sure, NFS does more than facilitate access to text files. You can distribute "runnable" applications through NFS, as well. Security procedures serve to shore up the network against the malicious takeovers of executables. But how exactly does this happen?
NFS is traditionally defined as an RPC application requiring TCP for the NFS server and either TCP or another network congestion-avoiding protocol for the NFS client. The Internet Engineering Task Force (IETF) has published the Request for Comments (RFC) for RPC as RFC 1831. The other standard vital to the functioning of an NFS implementation describes the data formats that NFS uses; it has been published as RFC 1832, the "External Data Representation" (XDR) document.
Other RFCs are relevant to security and the encryption algorithms used to exchange authentication information during NFS sessions, but we focus on the basic mechanisms first. One protocol that concerns us is the Mount protocol, which is described in Appendix 1 of RFC 1813.
This RFC tells you which protocols make NFS work, but it doesn't tell you how NFS works today. You've already learned something important by knowing that NFS protocols have been documented as IETF standards. While the latest NFS release was stuck at version 3, RPCs had not progressed beyond the informational RFC stage and thus were perceived as an interest largely confined to Sun Microsystems' admittedly huge engineering task force and its proprietary UNIX variety. Sun NFS has been around in several versions since 1985 and, therefore, predates most current file system flavors by several years. Sun Microsystems turned over control of NFS to the IETF in 1998, and most NFS version 4 (NFSv4) activity occurred under the latter's aegis.
So, if you're dealing with RPC and NFS today, you're dealing with a version that reflects the concerns of companies and interest groups outside Sun's influence. Many Sun engineers, however, retain a deep interest in NFS development.
NFS version 3
NFS in its version 3 avatar (NFSv3) was not stateful: NFSv4 is. This fundamental statement is unlikely to raise any hackles today, although the TCP/IP world on which NFS builds has mostly been stateless -- a fact that has helped traffic analysis and security software companies do quite well for themselves.
NFSv3 had to rely on several subsidiary protocols to seamlessly mount directories on remote computers without becoming too dependent on underlying file system mechanisms. NFS has not always been successful in this attempt. To give an example, the Mount protocol handled the initial file handle, while the Network Lock Manager protocol addressed file locking. Both operations required state, which NFSv3 did not provide. Therefore, you have complex interactions between protocol layers that do not reflect similar data-flow mechanisms. Now, if you add the fact that file and directory creation in Microsoft® Windows® works very differently from UNIX, matters become rather complicated.
NFSv3 had to use several ports to accommodate some of its subsidiary protocols, and you get a rather complex picture of ports and protocol layers and all their attendant security concerns. Today, this model of operation has been abandoned, and all operations that subsidiary protocol implementations previously executed from individual ports are now handled by NFSv4 from a single, well-known port.
NFSv3 was also ready for Unicode-enabled file system operation -- an advantage that until the late 1990s had to remain fairly theoretical. In all, it mapped well to UNIX file system semantics and motivated competing distributed file system implementations like AFS and Samba. Not surprisingly, Windows support was poor, but Samba file servers have since addressed file sharing between UNIX and Windows systems.
NFS version 4
NFSv4 is, as we pointed out, stateful. Several radical changes made this behavior possible. We already mentioned that the subsidiary protocols, which had to be called as user-level processes, have been abandoned. Instead, every file-opening operation and quite a few RPC calls are turned into kernel-level file system operations.
All NFS versions defined each unit of work in terms of RPC client and server operations. Each NFSv3 request required a fairly generous number of RPC calls and port-opening calls to yield a result. Version 4 simplifies matters by introducing a so-called compound operation that subsumed a large number of file system object operations. The immediate effect is, of course, that far fewer RPC calls and data have to traverse the network, even though each RPC call carries substantially more data while accomplishing far more. It is estimated that NFSv3 RPC calls required five times the number of client-server interactions that NFSv4 compound RPC procedures demand.
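One rough way to observe this difference on a Linux client is to compare per-version operation counters from nfsstat (part of nfs-utils) before and after running the same workload; a sketch:
# nfsstat -c -3
# nfsstat -c -4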
RPC is not really that important anymore and essentially serves as a wrapper around the number of operations encapsulated within the NFSv4 stack. This change also makes the protocol stack far less dependent on the underlying file system semantics. But the changes don't mean that the file system operations of other operating systems were neglected: For example, Windows shares require stateful open calls. Statefulness not only helps traffic analysis but, when included in file system semantics, makes file system operations much more traceable. Stateful open calls enable clients to cache file data and state -- something that would otherwise have to happen on the server. In the real world, where Windows clients are ubiquitous, NFS servers that work seamlessly and transparently with Windows shares are worth the time you'll spend customizing your NFS configuration.
NFS setup is generically similar to Samba. On the server side, you define file systems or directories to export, or share; the client side mounts those shared directories. When a remote client mounts an NFS-shared directory, that directory is accessed in the same way as any other local file system. Setting up NFS from the server side is an equally simple process. Minimally, you must create or edit the /etc/exports file and start the NFS daemon. To set up a more secure NFS service, you must also edit /etc/hosts.allow and /etc/hosts.deny. The client side of NFS requires only the mount command. For more information and options, consult the Linux® man pages.
Entries in the /etc/exports file have a straightforward format. To share a file system, edit the /etc/exports file and supply a file system (with options) in the general format:
directory (or file system) client1(option1,option2) client2(option1,option2)
General options
Several general options are available to help you customize your NFS implementation. They include:
- secure: This option -- the default -- uses available TCP/IP ports below 1024 for NFS connections. Specifying insecure disables this option.
- rw: This option allows NFS clients read/write access. The default option is read only.
- async: This option may improve performance, but it can also cause data loss if you restart the NFS server without first performing a clean shutdown of the NFS daemon. The default setting is sync.
- no_wdelay: This option turns off the write delay. If you set async, NFS ignores this option.
- nohide: If you mount one directory over another, the old directory is typically hidden or appears empty. Specifying nohide disables this behavior.
- no_subtree_check: This option turns off subtree checking, which performs some security checks that you may not want to bypass. The default option is to have subtree checks enabled.
- no_auth_nlm: This option, also specified as insecure_locks, tells the NFS daemon not to authenticate locking requests. If you're concerned about security, avoid this option. The default option is auth_nlm or secure_locks.
- mp (mountpoint=path): By explicitly declaring this option, NFS requires that the exported directory be mounted.
- fsid=num: This option is typically used in NFS failover scenarios. Refer to the NFS documentation if you want to implement NFS failover.
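A hypothetical /etc/exports line pulling several of these general options together (the path and network are placeholders): read/write access, synchronous writes without write delay, a fixed filesystem id, and clients allowed to connect from unprivileged ports:
/srv/projects 192.168.0.0/24(rw,sync,no_wdelay,fsid=1,insecure,no_subtree_check)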
User mapping
Through user mapping in NFS, you can grant pseudo or actual user and group identity to a user working on an NFS volume. The NFS user has the user and group permissions that the mapping allows. Using a generic user and group for NFS volumes provides a layer of security and flexibility without a lot of administrative overhead.
User access is typically "squashed" when using files on an NFS-mounted file system, which means that a user accesses files as an anonymous user who, by default, has read-only permissions to those files. This behavior is especially important for the root user. Cases exist, however, in which you want a user to access files on a remote system as root or some other defined user. NFS allows you to specify a user -- by user identification (UID) number and group identification (GID) number -- to access remote files, and you can disable the normal behavior of squashing.
User mapping options include:
- root_squash: This option doesn't allow root user access on the mounted NFS volume.
- no_root_squash: This option allows root user access on the mounted NFS volume.
- all_squash: This option, which is useful for a publicly accessible NFS volume, squashes all UIDs and GIDs and only uses the anonymous account. The default setting is no_all_squash.
- anonuid and anongid: These options change the anonymous UIDs and GIDs to specific user and group accounts.
Listing 1 shows examples of /etc/exports entries.
Listing 1. Example /etc/exports entries
/opt/files 192.168.0.*
/opt/files 192.168.0.120
/opt/files 192.168.0.125(rw, all_squash, anonuid=210, anongid=100)
/opt/files *(ro, insecure, all_squash)
The first entry exports the /opt/files directory to all hosts in the 192.168.0 network. The next entry exports /opt/files to a single host: 192.168.0.120. The third entry specifies host 192.168.0.125 and grants read/write access to the files with user permissions of user id=210 and group id=100. The final entry is for a "public" directory that has read-only access and allows access only under the anonymous account.
The NFS client
A word of caution
After you have used NFS to mount a remote file system, that system will also be part of any total system backup that you perform on the client system. This behavior can have potentially disastrous results if you don't exclude the newly mounted directories from the backup.
To use NFS as a client, the client computer must be running rpc.statd and portmap. You can run a quick ps -ef to check for these two daemons. If they are running (and they should be), you can mount the server's exported directory with the generic command:
mount server:directory local_mount_point
Generally speaking, you must be running under root to mount a file system. From a remote computer, you can use the following command (assume that the NFS server has an IP address of 192.168.0.100):
mount 192.168.0.100:/opt/files /mnt
Your distribution may require you to specify the file system type when mounting a file system. If so, run the command:
mount -t nfs 192.168.0.100:/opt/files /mnt
The remote directory should mount without issue if you've set up the server side correctly. Now, run the cd command to the /mnt directory, then run the ls command to see the files. To make this mount permanent, you must edit the /etc/fstab file and create an entry similar to the following:
192.168.0.100:/opt/files /mnt nfs rw 0 0
Note: Refer to the fstab man page for more information on /etc/fstab entries.
Criticism drives improvement
Criticisms leveled at NFS security have been at the root of many improvements in NFSv4. The designers of the new version took positive measures to strengthen the security of NFS client-server interaction. In fact, they decided to include a whole new security model.
To understand the security model, you should familiarize yourself with something called the Generic Security Services application programming interface (GSS-API) version 2, update 1. The GSS-API is fully described in RFC 2743, which, unfortunately, is among the most difficult RFCs to understand.
We know from our experience with NFSv4 that it's not easy to make a network file system operating system independent. But it's even more difficult to make all areas of security independent of both the operating system and the network protocols. We must have both, because NFS must be able to handle a fairly generous number of user operations, and it must do so without much reference to the specifics of network protocol interaction.
Connections between NFS clients and servers are secured through what has been rather superficially called strong RPC security. NFSv4 uses the Open Network Computing Remote Procedure Call (ONCRPC) standard codified in RFC 1831. The security model had to be strengthened, and instead of relying on simple authentication (known as AUTH_SYS), a GSS-API-based security flavor known as RPCSEC_GSS has been defined and implemented as a mandatory part of NFSv4. The most important security mechanisms available under NFSv4 include Kerberos version 5 and LIPKEY.
Given that Kerberos has limitations when used across the Internet, LIPKEY has the pleasant advantage of working like Secure Sockets Layer (SSL), prompting users for their user names and passwords, while avoiding the TCP dependence of SSL -- a dependence that NFSv4 doesn't share. You can set NFS up to negotiate for security flavors if RPCSEC_GSS is not required. Past NFS versions did not have this ability and therefore could not negotiate for the quality of protection, data integrity, the requirement for authentication, or the type of encryption.
NFSv3 had come in for a substantial amount of criticism in the area of security. Given that NFSv3 servers ran on TCP, it was perfectly possible to run NFSv3 networks across the Internet. Unfortunately, it was also necessary to open several ports, which led to several well-publicized security breaches. By making port 2049 mandatory for NFS, it became possible to use NFSv4 across firewalls without having to pay too much attention to what ports other protocols, such as the Mount protocol, were listening to. Therefore, the elimination of the Mount protocol had multiple positive effects:
- Mandatory strong authentication mechanisms: NFSv4 makes strong authentication mechanisms mandatory. Kerberos flavors are fairly common, and the Low Infrastructure Public Key Mechanism (LIPKEY) must be supported as well. NFSv3 never supported much more than UNIX-style standard encryption to authenticate access, something that led to major security problems in large networks.
- Mandatory Microsoft Windows NT-style access control list (ACL) schemes: Although NFSv3 allowed for strong encryption for authentication, it did not push Windows NT-style ACL access schemes. Portable Operating System Interface (POSIX)-style ACLs were sometimes implemented but never widely adopted. NFSv4 makes Windows NT-style ACL schemes mandatory.
- Negotiated authentication styles and mechanisms: NFSv4 makes it possible to negotiate authentication styles and mechanisms. Under NFSv3, it was impossible to do much more than determine manually which encryption styles were used. The system administrator then had to harmonize encryption and security protocols.
Is NFS still without peers?
NFSv4 is replacing NFSv3 on most UNIX and Linux systems. As a network file system, NFSv4 has few competitors. The Common Internet File System (CIFS)/Server Message Block (SMB) could be considered a viable competitor, given that it's native to all Windows varieties and (today) to Linux. AFS never made much commercial impact, and it emphasized elements of distributed file systems that made data migration and replication easier.
Production-ready Linux versions of NFS had been around since the kernel reached version 2.2, but one of the more common failings of Linux kernel versions was the fact that Linux adopted NFSv3 fairly late. In fact, it took a long time before Linux fully supported NFSv3. When NFSv4 came along, this lack was addressed quickly, and it wasn't just Solaris, AIX, and FreeBSD that enjoyed full NFSv4 support.
NFS is considered a mature technology today, and it has a fairly big advantage: It's secure and usable, and most users find it convenient to use one secure logon to access a network and its facilities, even when files and applications reside on different systems. Although this might look like a disadvantage compared to distributed file systems, which hide system structures from users, don't forget that many applications use files from different operating systems and, therefore, computers. NFS makes it easy to work on different operating systems without having to worry too much about the file system semantics and their performance characteristics.
Learn
- The main NFSv4 portal contains links to all technical documents and RFCs for NFSv4.
- Distributed file systems need a bit more effort to understand. Find the information you need at Wikipedia.
- RFC 1832 outlines the XDR standard.
- RFC 1831 outlines the RPC standard.
- RFC 1813, Appendix I: Mount protocol outlines the now-obsolete Mount protocol.
- RFC 2743 outlines the GSS-API.
Get products and technologies
NFS Version 4 (NFS V4) is the latest defined client-to-server protocol for NFS. A significant upgrade from NFS V3, it was defined under the IETF framework by many contributors. NFS V4 introduces major changes to the way NFS has been implemented and used before now, including stronger security, wide area network sharing, and broader platform adaptability.
This IBM Redbook is intended to provide a broad understanding of NFS V4 and specific AIX NFS V4 implementation details. It discusses considerations for deployment of NFS V4, with a focus on exploiting the stronger security features of the new protocol.
In the initial implementation of NFS V4 in AIX 5.3, the most important functional differences are related to security. Chapter 3 and parts of the planning and implementation chapters in Part 2 cover this topic in detail.
Learning NFSv4 with Fedora Core 2
Redhat- Setup NFS v4.0 File Server
NetApp - Tech OnTap - What's New in NFS Version 4-
CITI- Projects- NFS Version 4 Open Source Reference Implementation
NFSv4 new features (Network File System version 4) and NFS on ...