
Network filesystem (NFS)


NFS is a network filesystem originally developed by Sun (version 2, see RFC 1094) and later enhanced by Network Appliance and other companies (versions 3 and 4). It works well for sharing filesystems between multiple clients, but it is slower than some other network filesystems (such as Samba). It is also more fault tolerant than most other network filesystems. NFSv4 uses TCP; earlier versions typically used UDP.

Currently all three versions of NFS are in use, but NFS version 3 is the most popular and the most widely implemented.

NFS is a server-client protocol. The server and client do not have to use the same operating system. The client system just needs to be running an NFS client compatible with the NFS server.

The NFS server exports one or more directories to the client systems, and the client systems mount one or more of the shared directories to local directories called mount points.
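
As a sketch of how this looks on a Linux server and client (all hostnames and paths below are hypothetical), the server lists the directory in /etc/exports and the client attaches it at a mount point:

```
# /etc/exports on the server: export /export/data read-write to one client host
/export/data    client1(rw,sync)
```

On the client, running (as root) mount -t nfs server:/export/data /mnt/data makes /mnt/data the mount point through which users see the server's files.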

It is not necessary for the exported directory to be a separate partition. But if the partition is writable, a client can fill it completely even if only a single directory is exported.

After the share is mounted, all I/O operations are written back to the server, and all clients notice the change as if it occurred on the local filesystem. A manual refresh is not needed because the client accesses the remote filesystem as if it were local. Access is granted or restricted by client IP addresses.

One advantage of NFS is that the client mounts the remote filesystem to a directory, thus allowing users to access it in the same way they access local files. Furthermore, because access is granted by IP address, a username and password are not required.

However, there are security risks to consider because the NFS server knows nothing about the users on the client system. The files from the NFS server retain their file permissions, user ID, and group ID when mounted. If the client uses a different set of user and group IDs, file ownership will change.

For example, if a file is owned by user ID 500 on the NFS server, the file is exported to the clients with that same user ID. If user ID 500 maps to user A on the NFS server but to user B on the remote client, user B will have access to the file on the remote client. Thus, it is crucial that the NFS server and all its clients use the same user database, so that user and group IDs are identical no matter which machine is used to access the files. The administrator can assign identical user and group IDs on systems across the network, but this can be a tedious and time-consuming task if the network has more than a few users.
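
A minimal sketch of checking this consistency from the shell. The user name "alice" and host "nfsserver" in the comments are hypothetical; the demonstration runs against root (UID 0), which exists on every Unix system:

```shell
# Resolve a user name to its numeric UID on this host.
local_uid=$(id -u root)
echo "local UID for root: $local_uid"

# On a real network you would compare against the server, e.g.:
#   remote_uid=$(ssh nfsserver id -u alice)
#   [ "$local_uid" = "$remote_uid" ] || echo "UID mismatch for alice!"
```

Running the same comparison for every shared user quickly reveals mismatched IDs before they cause ownership surprises on NFS mounts.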

As with any network filesystem, when a file or directory is shared from a remote machine via NFS, it appears to be part of your filesystem: reading or writing files is transparent despite the fact that they are delivered to your machine over the network. Everything behaves as it does for a local filesystem, except for some additional delay due to network transmission; on 10Gbit networks this delay can be negligible.

Because of its popularity, NFS was ported to most existing operating systems, including Windows and NetWare. Conversely, Samba, an implementation of the competing Windows file sharing protocol, was ported to and became popular on Unix. Among prominent Windows NFS implementations are Hummingbird Maestro, Cygwin, and SFU 3.5. The need for compatibility with Windows clients drives the development of Unix ACLs. See Solaris ACLs for details.

NFS defines an abstract model of a filesystem. Each OS maps Unix-like attributes of the NFS model to attributes that exist in its native filesystem. For example, Windows has different file attributes than Unix filesystems, but you will see that the owner of the file is mapped to the NFS (Unix) owner role. The Unix-style owner group is also mapped, and so are Unix permissions. A perfect mapping of attributes exists only in Unix-to-Unix mount operations. Still, there can be subtle problems even here. For example, if a root-owned filesystem from one server is exported read-write and is mounted on a non-root-owned directory with read-write access, the owner and group attributes of the directory are overwritten, and the directory will appear to be owned by root and a primary group that exists on the original server.

NFS is based on a client-server model. The server offers filesystems to other systems; this is called exporting or sharing, and the filesystems offered are called "exports."

Filesystems shared through NFS can also be mounted automatically, on demand. A special daemon called automountd catches the cases when a user changes into an NFS directory and, if it is not yet mounted, transparently mounts it. The list of mount points is provided in a configuration file. Essentially, any I/O operation on such a directory notifies the automount daemon, automountd, which then mounts it. After a long period of inactivity the directory is unmounted. In other words, the automountd daemon transparently performs mounting and unmounting of remote directories listed in the autofs configuration file on an as-needed basis.

The NFS implementation relies on a special protocol called Remote Procedure Call (RPC). The RPC portmapper daemon must be running for NFS to be accessible. You can check whether RPC is active by issuing this command at the shell prompt:

rpcinfo -p
Here are the results on a Linux system:
rpcinfo -p
program vers proto   port
 100000    2   tcp    111  portmapper
 100000    2   udp    111  portmapper

Because the NFS service makes the physical location of the filesystem irrelevant to the user, it is possible to give users a directory, or several directories, that will be mounted automatically with all the relevant files regardless of what server or workstation the user is logged in to. This idea was first implemented in Solaris, where you can share your home directory (/export/home/user) on one server (the master) and it will be mounted at /home/user on all other computers configured to have this mount point.

In other words, instead of keeping copies of commonly used files (for example, your personal profile and other dot-files) on every system and synchronizing them with a master copy, NFS enables you to place one copy on a specific system (the master) and have all other systems access it across the network. For small files NFS access is almost indistinguishable from local access. For large files such as ISO images the difference is quite noticeable, and on networks slower than 1Gbit/sec access is too slow.
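
For a persistent client-side mount, an entry can go in /etc/fstab; a sketch with hypothetical server and path names (the rsize/wsize options enlarge the transfer block size, which matters most for the large-file case just described):

```
# /etc/fstab on the client: mount server:/export/iso read-only at /mnt/iso
server:/export/iso   /mnt/iso   nfs   ro,hard,rsize=131072,wsize=131072   0 0
```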

Writable NFS-shared directories should generally constitute a separate partition. This way you can protect the server that shares the directory from malicious users who could fill all the available space with junk by writing large files onto it, which can crash some daemons on the remote server. From this point of view there is some danger in allowing regular users to mount NFS filesystems.

NFS controls who can mount an exported filesystem based on the host making the mount request, not the user that will actually use the filesystem. Hosts must be given explicit rights to mount the exported filesystem. User access control is limited to file and directory permissions. In other words, once a filesystem is exported via NFS, a user with appropriate credentials on any remote host connected to the NFS server can access the shared data. To limit the potential risks, administrators can allow read-only access only, or squash the owner and group of the NFS-mounted files to a common user and group ID. But as with other security-enhancing strategies, such solutions may prevent the NFS share from being used in the way it was originally intended.
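
As an illustration of host-based access control on a Linux server (hostnames and subnets are hypothetical), /etc/exports entries name the hosts allowed to mount each export and the restrictions applied to them:

```
# /etc/exports: per-host and per-subnet access control
/export/data   client1(rw,sync) 192.168.1.0/24(ro)

# read-only export with every user squashed to nobody
/export/pub    192.168.1.0/24(ro,all_squash)
```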

Additionally, if an attacker gains control of the DNS server used by the system exporting the NFS filesystem, the system associated with a particular hostname or fully qualified domain name can be pointed to an unauthorized machine. At this point, the unauthorized machine becomes the system permitted to mount the NFS share, since no username or password information is exchanged to provide additional security for the NFS mount. The same risks hold true for compromised NIS servers, if NIS netgroups are used to allow certain hosts to mount an NFS share. Using IP addresses in /etc/exports makes this kind of attack more difficult.

Wildcards should be used sparingly when exporting NFS shares as the scope of the wildcard may encompass more systems than intended.

Once the NFS filesystem is mounted read-write by a remote host, the only protection each shared file has is its permissions. If two different users share the same user ID value on the remote NFS server and client, they will be able to modify each other's files. Moreover, root on the client system can use the su - command to become any user and as such can access all files mounted via NFS.

The default behavior when exporting a filesystem via NFS is to use root squashing. This maps anyone accessing the NFS share as the root user on their local machine to the server's nobody account. When exporting an NFS share read-only, you can also use the all_squash option, which makes every user accessing the exported filesystem take the user ID of the nobody user.

Exporting files

Before filesystems or directories can be accessed (that is, mounted) by a client through NFS, they must be shared or exported. This is a server-based operation. Once shared, authorized NFS clients can mount the resources. You can group exportable directories under special root directories; this practice is reflected in the Solaris directory name /export/home.
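
On Solaris the export operation is the share command, persisted in /etc/dfs/dfstab; a sketch with hypothetical host and path names:

```
# Share /export/home read-only to two named clients
share -F nfs -o ro=client1:client2 /export/home

# (Re)share everything listed in /etc/dfs/dfstab
shareall
```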

To start the NFS server daemons, or to specify the number of concurrent NFS requests that can be handled by the nfsd daemon, use the /etc/rc3.d/S15nfs.server script on Solaris.

You need several daemons to support NFS activities. These daemons can support both NFS client and NFS server activity, NFS server activity alone, or logging of NFS server activity. There should also be a startup script. For example, on Solaris the following daemons support NFS:

  1. mountd: handles filesystem mount requests from remote systems and provides access control (server)
  2. nfsd: handles client filesystem requests (both client and server)
  3. statd: works with the lockd daemon to provide crash recovery functions for the lock manager (server)
  4. lockd: supports record-locking operations on NFS files
  5. nfslogd: provides filesystem logging; runs only if one or more filesystems are mounted with the log attribute



You can detect most NFS problems from console messages or from certain symptoms that appear on a client system. Some common errors are (see Troubleshooting Solaris NFS Problems for more information):

  1. The rpcbind failure error: incorrect host Internet address, or server overload.
  2. The server not responding error: the network connection or the server is down.
  3. The NFS client fails a reboot error: a client is requesting an NFS mount using an entry in the /etc/vfstab file, specifying a foreground mount from a non-operational NFS server.
  4. The service not responding error: an accessible server is not running the NFS server daemons.
  5. The program not registered error: an accessible server is not running the mountd daemon.
  6. The stale file handle error: the file was moved on the server. To resolve this condition, unmount and mount the resource again on the client.
  7. The unknown host error: the host name of the server is missing from the client's hosts table.
  8. The mount point error: check that the mount point exists on the client.
  9. The no such file error: unknown file name on the server.
  10. No such file or directory: the directory does not exist on the server.
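
For the stale file handle case (item 6), the recovery sequence on the client is a forced unmount followed by a fresh mount; the mount point and server names below are hypothetical, and both commands require root:

```
umount -f /mnt/data
mount server:/export/data /mnt/data
```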

NFS Server Commands

For more information see pages devoted to specific implementations



Old News ;-)

[Jun 09, 2018] How to use autofs to mount NFS shares by Alan Formy-Duval

Jun 05, 2018

NFS shares are commonly mounted at boot time through an entry in the /etc/fstab file. However, there may be times when you prefer to have a remote file system mount only on demand -- for example, to boost performance by reducing network bandwidth usage, or to hide or obfuscate certain directories for security reasons. The package autofs provides this feature. In this article, I'll describe how to get a basic automount configuration up and running.

First, a few assumptions: Assume the NFS server named tree is up and running. Also assume a data directory named ourfiles and two user directories, for Carl and Sarah, are being shared by this server.

A few best practices will make things work a bit better: It is a good idea to use the same user ID for your users on the server and any client workstations where they have an account. Also, your workstations and server should have the same domain name. Checking the relevant configuration files should confirm this.

alan@workstation1:~$ sudo getent passwd carl sarah
[sudo] password for alan:

alan@workstation1:~$ sudo getent hosts localhost workstation1 tree

As you can see, both the client workstation and the NFS server are configured in the hosts file. I'm assuming a basic home or even small office network that might lack proper internal domain name service (i.e., DNS).

Install the packages

You need to install only two packages: nfs-common for NFS client functions, and autofs to provide the automount function.

alan@workstation1:~$ sudo apt-get install nfs-common autofs

You can verify that the autofs files have been placed in the /etc directory:

alan@workstation1:~$ cd /etc; ll auto*
-rw-r--r-- 1 root root 12596 Nov 19 2015 autofs.conf
-rw-r--r-- 1 root root 857 Mar 10 2017 auto.master
-rw-r--r-- 1 root root 708 Jul 6 2017 auto.misc
-rwxr-xr-x 1 root root 1039 Nov 19 2015*
-rwxr-xr-x 1 root root 2191 Nov 19 2015 auto.smb*
alan@workstation1:/etc$

Configure autofs

Now you need to edit several of these files and add the file auto.home . First, add the following two lines to the file auto.master :

/mnt/tree /etc/auto.misc
/home/tree /etc/auto.home

Each line begins with the directory where the NFS shares will be mounted. Go ahead and create those directories:

alan@workstation1:/etc$ sudo mkdir /mnt/tree /home/tree

Second, add the following line to the file auto.misc :

ourfiles        -fstype=nfs     tree:/share/ourfiles

This line instructs autofs to mount the ourfiles share at the location matched in the auto.master file for auto.misc . As shown above, these files will be available in the directory /mnt/tree/ourfiles .

Third, create the file auto.home with the following line:

*               -fstype=nfs     tree:/home/&

This line instructs autofs to mount the users share at the location matched in the auto.master file for auto.home . In this case, Carl and Sarah's files will be available in the directories /home/tree/carl or /home/tree/sarah , respectively. The asterisk (referred to as a wildcard) makes it possible for each user's share to be automatically mounted when they log in. The ampersand also works as a wildcard representing the user's directory on the server side. Their home directory should be mapped accordingly in the passwd file. This doesn't have to be done if you prefer a local home directory; instead, the user could use this as simple remote storage for specific files.

Finally, restart the autofs daemon so it will recognize and load these configuration file changes.

alan@workstation1:/etc$ sudo service autofs restart
Testing autofs

If you change to one of the directories listed in the file auto.master and run the ls command, you won't see anything immediately. For example, change directory (cd) to /mnt/tree . At first, the output of ls won't show anything, but after running cd ourfiles , the ourfiles share directory will be automatically mounted. The cd command will also be executed and you will be placed into the newly mounted directory.

carl@workstation1:~$ cd /mnt/tree
carl@workstation1:/mnt/tree$ ls
carl@workstation1:/mnt/tree$ cd ourfiles

To further confirm that things are working, the mount command will display the details of the mounted share.

carl@workstation1:~$ mount
tree:/mnt/share/ourfiles on /mnt/tree/ourfiles type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=,local_lock=none,addr=

The /home/tree directory will work the same way for Carl and Sarah.

I find it useful to bookmark these directories in my file manager for quicker access.

Managing NFS[tm] Workloads (April 1999) by Richard McDougall, Adrian Cockcroft and Evert Hoogendoorn

Demonstration of the usage and management of NFS.

[Aug 1, 2006] Network Appliance - The Evolution of NFS by Dave Hitz & Andy Watson, Network Appliance, Inc.

1. NFS changes since 1985 2. NFSv3 3. Other Recent Changes and Future Possibilities 4. Conclusion

NFS version 3 (NFSv3) arrived almost exactly ten years after Sun Microsystems originally introduced NFS. This leaves some people wondering: What took so long? Will it be another ten years before NFS gets another fresh coat of paint?

In part, these questions reflect a conflation between NFS-the- protocol and NFS-the-implementation. While the NFS protocol itself remained unchanged until NFSv3, NFS implementations have changed substantially in the past ten years, and they will continue to change in the future even without another protocol revision.

The menu below outlines the evolution of NFS and reflects the sequence of topics discussed in this document.

Many people are surprised that a protocol can change so much without a protocol revision. The NFS specification RFC 1094 defines the exact format of NFS packets transmitted over the network, but it leaves great flexibility in the hardware and software that actually send the packets. In addition, a protocol can have some flexibility designed in from the start.

For instance, NFS implementations have traditionally used UDP for remote procedure calls (RPC), but the RPC specification allows either UDP or TCP. Finally, services such as the automounter can be added to improve NFS without any change at all to the protocol or its implementation.

SRP Open-Source Password Security

The Secure Remote Password protocol is the core technology behind the Stanford SRP Authentication Project. The Project is an Open Source initiative that integrates secure password authentication into new and existing networked applications.

The Project's primary purpose is to improve password security by making strong password authentication technology a standard part of deployed real-world systems. This is accomplished by making this technology an easy-to-use, hassle-free alternative to weak and vulnerable legacy password authentication schemes. SRP makes these objectives possible because it offers a unique combination of password security, user convenience, and freedom from restrictive licenses.

This site serves as the semi-official home of the SRP distribution, which contains secure versions of Telnet and FTP. In addition, it contains links to a number of SRP-related projects, products (both commercial and non-commercial), and research on the Web.

[Jun 2, 2002] CITI Projects NFS Version 4 Open Source Reference Implementation

This work was done as part of the NFS Version 4 Open Source Reference Implementation project. This release includes preliminary support for acl's and for kerberos authentication, and is in the form of a patch against the linux 2.4.18 kernel, with cryptoapi and acl patches already applied.

IBM Redbooks Securing NFS in AIX An Introduction to NFS v4 in AIX 5L Version 5.3

NFS Version 4 (NFS V4) is the latest defined client-to-server protocol for NFS. A significant upgrade from NFS V3, it was defined under the IETF framework by many contributors. NFS V4 introduces major changes to the way NFS has been implemented and used before now, including stronger security, wide area network sharing, and broader platform adaptability.

This IBM Redbook is intended to provide a broad understanding of NFS V4 and specific AIX NFS V4 implementation details. It discusses considerations for deployment of NFS V4, with a focus on exploiting the stronger security features of the new protocol.

In the initial implementation of NFS V4 in AIX 5.3, the most important functional differences are related to security. Chapter 3 and parts of the planning and implementation chapters in Part 2 cover this topic in detail.

SUMMARY NFS Security in Solaris 8

Bhavesh Shah bshah at
Thu Apr 8 18:12:54 EDT 2004

Thanks to all who replied:

Darren Dunham

Anatoliy Lisovskiy


Mark Cain

Special thanks to Darren Dunham, who pointed me in the right direction
toward solving the problem quickly.

Solution was:

Reverse lookup was not set up correctly in DNS. Once that was done, everything
started working with FQDNs.

My Original Question was

Hi Gurus,

I have a NFS Security question:

I am running Solaris 8.

Following is my /etc/dfs/dfstab entry:

/usr/sbin/share -F nfs -o rw=host1:host2,anon=0,ro /dir

even then host1 or host2 can not have write access to /dir

if I try to touch a file to host1 or host2 it gives an obvious  error
:"touch: test cannot create"

but if I replace host1 and host2 with ip addresses everything works

I can ping to host1 and host2 from NFS Server box. Was wondering how
that can happen?

I changed nsswitch.conf to search DNS first and then files, but no luck.

I guess there is some configuration problem on NFS Server.

Any Help will be greatly appreciated.

Thanks in advance


NFS Maestro support for Solaris NFS security modes



Solaris 8 "man" page for nfssec.

It discusses the different security modes in Solaris available for NFS.

If you would like to use the DES (dh) security mode, which involves public/private key authentication, the following describes how to configure it with NFS Maestro.


AUTH_DH protocol - also known as Secure RPC - provides a secure means of communication between two parties across an insecure network.

In general, the communication is between a client (a user logged into one host) and a server.

AUTH_DES uses Data Encryption Standard (DES) credentials for authentication.

To use NIS+ with NFS Maestro AUTH_DH/AUTH_DES

  1. Configure NIS+ in Hummingbird Directory Services
    • In the Directory Service dialog box, select NIS+
    • Click on Change and enter :
      NIS+ Domain:
      Secure RPC password:
      NOTE: User Name dialog box appears only if you are using the user profile.
      Then click OK.
    • In the General tab, you will see ( greyed out): NIS+ Profile Name:
      Domain Name:
      Root Master:
      Netname: [email protected]
      NIS+ Server Query Order: nis+master and nis+replicas ( these can be moved up or down)
      Then click OK.
    • When you launch NFS Network Access the first time, you will get the NIS+ Keylogin window with:
      Username: Administrator
      Please change Administrator username to your pc name, then enter the passwd ( Secure RPC password for the machine).
      If the information is correct, you can then use NFS Network Access to mount to your filesystem.
  2. Next, export your unix resources.
    If you look in the NOTES sections of the man page for share_nfs, you can export your unix resources using sec=dh option to indicate AUTH_DH.

    For example: share -F nfs -o sec=dh /export/home/users/ken

    NOTE: Make sure there is no conflicting information in the exports file, for example:
    share -F nfs /export/home/users
    share -F nfs -o sec=dh /export/home/users/ken

  3. Not all NFS servers support the authentication protocols in NFS Maestro Client.
    Many of the free UNIX operating systems do not support AUTH_DES or RPCSEC_GSS.
    For information about Sun Enterprise Authentication Mechanism, see:

    Please ensure your NFS server supports the desired security protocol before attempting to connect from NFS Maestro.

  4. The AUTH_DES authentication protocol is very dependent on client and server time synchronization.
    The transaction time "window" between client and server is 60 seconds. This means that all NFS transactions must occur within this time differential; otherwise they are silently discarded.
    If the clock on the NFS server is running fast and the clock on the client is running slow, so that they differ by more than 60 seconds, the NFS connection fails.

For information about Network Time Protocol - Version 3, see:

It is recommended that the server use a Network Time Protocol server (bundled with, or available for, most UNIX operating systems) to synchronize all hosts on the local network.

You can use the Hummingbird bundled Network Time applet to synchronize PC time to the server time.
To launch the applet:

  1. Click on START
  2. Highlight Programs
  3. Highlight Hummingbird Connectivity 8.00
  4. Highlight Accessories
  5. Select Network Time from the listing.
  6. Then click on the HOST button to configure the applet and enter the server name.


For the definition, please refer to:;en-us;Q115873

For the workaround, please refer to:

The making of NFSv4


If you use NFS (Network filesystem) to share filesystems between multiple computers, you are probably using NFS Version 2 or 3. However, you should know that NFSv4 is under active development and is available for trial use under a variety of operating systems.

NFSv4 offers numerous advantages over NFSv2 and NFSv3, including dealing with a variety of security issues mentioned in RFC2623 ("NFS Version 2 and Version 3 Security Issues and the NFS Protocol's Use of RPCSEC_GSS and Kerberos V5," ).

Those of you currently running NFS v2 or NFS v3 may also be interested in the other NFS security references mentioned below.

  1. NFS Version 4:
  2. Network Appliance-The NFS Version 4 Protocol (nice introduction):
  3. CITI Projects: NFS Version 4 Open Source Reference Implementation:
  4. Learning NFSv4 With Fedora Core 2:
  5. CERT Advisory CA-1994-15 NFS Vulnerabilities:
  6. NFS Security:
  7. Security and NFS:
  8. Secure NFS and NIS via SSH Tunnel:

Recommended Links

Google matched content

Softpanorama Recommended

Top articles


Network filesystem - Wikipedia, the free encyclopedia

Yahoo! NFS page

NFS - PC Webopaedia Definition and Links

IBM/System Management Guide Communications and Networks - NFS Reference

***** IBM Redbooks Securing NFS in AIX An Introduction to NFS v4 in AIX 5L Version 5.3


The NFS Version 4 Protocol - Pawlowski, Shepler, Beame, Callaghan, Eisler, Noveck, Robinson, Thurlow (ResearchIndex)

Abstract: The Network filesystem (NFS) Version 4 is a new distributed filesystem similar to previous versions of NFS in its straightforward design, simplified error recovery, and independence of transport protocols and operating systems for file access in a heterogeneous network. Unlike earlier versions of NFS, the new protocol integrates file locking, strong security, operation coalescing, and delegation capabilities to enhance client performance for narrow data sharing applications on high-bandwidth...


Network Appliance - NFS Version 3 Design and Implementation Network appliance is definitely an expert in the topic...

by Brian Pawlowski, Chet Juszczak, Peter Staubach, Carl Smith, Diane Lebel, and David Hitz
Presented June 9, 1994: USENIX Summer 1994 - Boston, MA updated (10/2000) by Michael Marchi, Network Appliance, Inc.

This paper describes a new version of the Network filesystem (NFS) that supports access to files larger than 4 GB and increases sequential write throughput seven-fold when compared to unaccelerated NFS Version 2. NFS Version 3 maintains the stateless server design and simple crash recovery of NFS Version 2, and the philosophy of building a distributed file service from cooperating protocols. The authors describe the protocol and its implementation, and provide initial performance measurements. They then describe the implementation effort. Finally, they contrast this work with other distributed filesystems and discuss future revisions of NFS.

Download Full Version:

Sun documentation

Tutorials

NFS Administration Guide

NFS Server Performance and Tuning Guide for Sun Hardware

[AIX] System Management Guide Communications and Networks - Network filesystem Overview

[AIX] Security Guide - Network filesystem (NFS) Security

Chapter 11 NFS

IBM Redbooks IBM eServer Certification Study Guide - pSeries AIX System Administration

Configuring and Using NFS By Dru Lavigne. Dru takes us through the basics of sharing files between UNIX computers. [July 26, 2000]

[02/14/2002] Understanding NFS by Michael Lucas


NFS System Administration Using NFS.

NFS -- course notes

20.2 NFS Protocol

NFS Administration -- from USAIL textbook Integrating Your Machine With the Network See also NFS Overview

Network filesystem Overview -- IBM R6000

LAN Team Tutorial (POC) Notes


See also RFCs and FAQs.

NFS - PC Webopaedia Definition and Links

[AIX] System Management Guide Communications and Networks - NFS Reference

IBM Network filesystem (NFS) - Environmental Checklist Frequently asked questions

The following utilities are available on client machines to help diagnose simple network connection problems:


This filesystem is documented by a number of RFC's: NFS v2, NFS v3, NFS v4, and Web NFS.

NFS Security


Accessing NFS

NFS is a commonly used tool in Unix networks that allows the sharing of files through RPC. Find out how to configure your Mac OS X client to access an NFS share.

Random Findings


[Floyd] S. Floyd, V. Jacobson, "The Synchronization of Periodic Routing Messages," IEEE/ACM Transactions on Networking, 2(2), pp. 122-136, April 1994.
[Gray] C. Gray, D. Cheriton, "Leases: An Efficient Fault- Tolerant Mechanism for Distributed File Cache Consistency," Proceedings of the Twelfth Symposium on Operating Systems Principles, p. 202-210, December 1989.
[ISO10646] "ISO/IEC 10646-1:1993. International Standard -- Information technology -- Universal Multiple-Octet Coded Character Set (UCS) -- Part 1: Architecture and Basic Multilingual Plane."
[Juszczak] Juszczak, Chet, "Improving the Performance and Correctness of an NFS Server," USENIX Conference Proceedings, USENIX Association, Berkeley, CA, June 1990, pages 53-63. Describes reply cache implementation that avoids work in the server by handling duplicate requests. More important, though listed as a side- effect, the reply cache aids in the avoidance of destructive non-idempotent operation re-application -- improving correctness.
[Kazar] Kazar, Michael Leon, "Synchronization and Caching Issues in the Andrew filesystem," USENIX Conference Proceedings, USENIX Association, Berkeley, CA, Dallas Winter 1988, pages 27-36. A description of the cache consistency scheme in AFS. Contrasted with other distributed filesystems.
[Macklem] Macklem, Rick, "Lessons Learned Tuning the 4.3BSD Reno Implementation of the NFS Protocol," Winter USENIX Conference Proceedings, USENIX Association, Berkeley, CA, January 1991. Describes performance work in tuning the 4.3BSD Reno NFS implementation. Describes performance improvement (reduced CPU loading) through elimination of data copies.
[Mogul] Mogul, Jeffrey C., "A Recovery Protocol for Spritely NFS," USENIX filesystem Workshop Proceedings, Ann Arbor, MI, USENIX Association, Berkeley, CA, May 1992. Second paper on Spritely NFS proposes a lease-based scheme for recovering state of consistency protocol.
[Nowicki] Nowicki, Bill, "Transport Issues in the Network filesystem," ACM SIGCOMM newsletter Computer Communication Review, April 1989. A brief description of the basis for the dynamic retransmission work.
[Pawlowski] Pawlowski, Brian, Ron Hixon, Mark Stein, Joseph Tumminaro, "Network Computing in the UNIX and IBM Mainframe Environment," Uniforum `89 Conf. Proc., (1989) Description of an NFS server implementation for IBM's MVS operating system.
[RFC1094] Sun Microsystems, Inc., "NFS: Network filesystem Protocol Specification", RFC 1094, March 1989.
[RFC1345] Simonsen, K., "Character Mnemonics & Character Sets", RFC 1345, June 1992.
[RFC1700] Reynolds, J. and J. Postel, "Assigned Numbers", STD 2, RFC 1700 (obsoleted by RFC 3232), October 1994.
[RFC1813] Callaghan, B., Pawlowski, B. and P. Staubach, "NFS Version 3 Protocol Specification", RFC 1813, June 1995.
[RFC1831] Srinivasan, R., "RPC: Remote Procedure Call Protocol Specification Version 2", RFC 1831 August 1995.
[RFC1832] Srinivasan, R., "XDR: External Data Representation Standard", RFC 1832 August 1995.
[RFC1833] Srinivasan, R., "Binding Protocols for ONC RPC Version 2", RFC 1833 August 1995.
[RFC2025] Adams, C., "The Simple Public-Key GSS-API Mechanism (SPKM)", RFC 2025, October 1996.
[RFC2054] Callaghan, B., "WebNFS Client Specification", RFC 2054, October 1996.
[RFC2055] Callaghan, B., "WebNFS Server Specification", RFC 2055, October 1996.
[RFC2078] Linn, J., "Generic Security Service Application Program Interface, Version 2", RFC 2078 (obsoleted by RFC 2743), January 1997.
[RFC2152] Goldsmith, D., "UTF-7 A Mail-Safe Transformation Format of Unicode", RFC 2152, May 1997.
[RFC2203] Eisler, M., Chiu, A. and L. Ling, "RPCSEC_GSS Protocol Specification", RFC 2203, September 1997.
[RFC2277] Alvestrand, H., "IETF Policy on Character Sets and Languages", BCP 18, RFC 2277, January 1998.
[RFC2279] Yergeau, F., "UTF-8, a transformation format of ISO 10646", RFC 2279 (obsoleted by RFC 3629), January 1998.
[RFC2623] Eisler, M., "NFS Version 2 and Version 3 Security Issues and the NFS Protocol's Use of RPCSEC_GSS and Kerberos V5", RFC 2623, June 1999.
[RFC2624] Shepler, S., "NFS Version 4 Design Considerations", RFC 2624, June 1999.
[RFC2847] Eisler, M., "LIPKEY - A Low Infrastructure Public Key Mechanism Using SPKM", RFC 2847, June 2000.
[Sandberg] Sandberg, R., D. Goldberg, S. Kleiman, D. Walsh, B. Lyon, "Design and Implementation of the Sun Network Filesystem," USENIX Conference Proceedings, USENIX Association, Berkeley, CA, Summer 1985. The basic paper describing the SunOS implementation of the NFS version 2 protocol, and discusses the goals, protocol specification and trade-offs.
[Srinivasan] Srinivasan, V., Jeffrey C. Mogul, "Spritely NFS: Implementation and Performance of Cache Consistency Protocols", WRL Research Report 89/5, Digital Equipment Corporation Western Research Laboratory, 100 Hamilton Ave., Palo Alto, CA, 94301, May 1989. This paper analyzes the effect of applying a Sprite-like consistency protocol applied to standard NFS. The issues of recovery in a stateful environment are covered in [Mogul]
[Unicode1] The Unicode Consortium, "The Unicode Standard, Version 3.0", Addison-Wesley Developers Press, Reading, MA, 2000. ISBN 0-201-61633-5. More information available at:
[Unicode2] "Unsupported Scripts" Unicode, Inc., The Unicode Consortium, P.O. Box 700519, San Jose, CA 95710-0519 USA, September 1999
[XNFS] The Open Group, Protocols for Interworking: XNFS, Version 3W, The Open Group, 1010 El Camino Real Suite 380, Menlo Park, CA 94025, ISBN 1-85912-184-5, February 1998. HTML version available:
Network Management Training Page

NFS (Network filesystem) is a widely used but primitive protocol that allows computers to share files over a network. It has well-known security problems: older versions of NFS relied on the inherently insecure UDP protocol, transactions are not encrypted, and hosts and users cannot be easily authenticated. Below we show a number of measures that can mitigate these security problems.

First, let us clarify how the NFS service operates. An NFS server is the server with a filesystem (or a directory), called the NFS filesystem (or NFS directory), that is exported to an NFS client. The NFS client then has to import (mount) the exported filesystem (directory) before being able to access it. We annotate each measure below with on server, on client, on client & server, or misc, meaning that the step is performed on the NFS server, the NFS client, both the NFS client and server, or elsewhere, respectively.

Require that NFS requests originate from privileged ports (on server)

This can be done by adding the 'secure' option to an entry in /etc/exports, such as: /home nfs-client(secure)
where the directory /home is the filesystem to be exported to the NFS client located at address nfs-client (specify the IP address or domain name of your NFS client).

Export an NFS filesystem in an appropriate permission mode (on server)

Let's say that you only need read-only access to your exported NFS filesystem. Then the filesystem should be exported read-only to prevent unintended, or even intended, modification of its files. This is done by specifying the 'ro' option in /etc/exports: /home nfs-client(ro)

Restrict exporting an NFS filesystem to a certain set of NFS clients (on server)

Specify only a specific set of NFS clients that will be allowed to mount an NFS filesystem. If possible, use numeric IP addresses or fully qualified domain names, instead of aliases.
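Putting the options above together, a restrictive /etc/exports might look like the following sketch (the client name, subnet, and paths are illustrative, not taken from a real configuration):

```
# /etc/exports -- illustrative entries
/home    nfs-client(ro,root_squash,secure)        # one named client, read-only
/data,root_squash,secure)    # a whole trusted subnet
```

After editing /etc/exports, re-export with exportfs -ra (or restart the NFS service) for the changes to take effect.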

Use the 'root_squash' option in /etc/exports on the NFS server if possible (on server)

When this option is used, requests from the user ID 'root' on the NFS client are mapped to the user ID 'nobody' on the NFS server. This prevents the root user on the NFS client from exercising superuser privilege on the NFS server and thus, perhaps, illegally modifying files there. Here is an example: /home nfs-client(root_squash)

Disable suid (set user ID) execution on an NFS filesystem (on client)

Add the 'nosuid' option to the corresponding entry in /etc/fstab (this file determines which NFS filesystems are mounted automatically at startup). This prevents files with the suid bit set on the NFS server, e.g., Trojan horse files, from being executed with elevated privileges on the NFS client, which could otherwise lead to root compromise on the client; the root user on the client might also execute such suid files accidentally. Here is an example of 'nosuid'; an entry in /etc/fstab on the client may contain: nfs-server:/home /mnt/nfs nfs ro,nosuid 0 0

where nfs-server is the IP address or domain name of the NFS server and /home is the directory on the NFS server to be mounted to the client computer at the directory /mnt/nfs. Alternatively, the 'noexec' option can be used to disable any file execution at all. nfs-server:/home /mnt/nfs nfs ro,nosuid,noexec 0 0
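As a quick check that the option actually took effect after mounting, the client's mount table can be parsed. The sketch below is illustrative (the mount point and the sample /proc/mounts line are made up); on a live client you would feed it /proc/mounts itself:

```shell
# has_option: check whether a given mount point carries a given mount option.
# Reads a mount table in /proc/mounts format on stdin
# (fields: device mountpoint fstype options dump pass).
has_option() {   # usage: has_option <mountpoint> <option>
  awk -v mp="$1" -v opt="$2" '
    $2 == mp {
      n = split($4, o, ",")                  # options are comma-separated
      for (i = 1; i <= n; i++) if (o[i] == opt) found = 1
    }
    END { exit !found }'                     # exit status 0 iff option present
}

# Sample line standing in for /proc/mounts on a client that mounted with nosuid:
mounts='nfs-server:/home /mnt/nfs nfs ro,nosuid,noexec 0 0'

if printf '%s\n' "$mounts" | has_option /mnt/nfs nosuid; then
  echo "nosuid is in effect on /mnt/nfs"
fi
```

On a real client, replace the sample with `has_option /mnt/nfs nosuid < /proc/mounts`.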

Install the most recent patches for NFS and portmapper (on client & server)

NFS has appeared in CERT's list of the top ten most commonly reported vulnerabilities and has been actively exploited. This means the NFS server and portmapper on your system must be kept up to date with security patches.

Perform encryption over NFS traffic using SSH (on client & server)

Apart from using Secure Shell (SSH) for secure remote access, we can use it to build a tunnel between an NFS client and server so that NFS traffic is encrypted. The steps below show how to encrypt NFS traffic using SSH.

Here is a simple diagram of how the NFS and SSH services cooperate:

    nfs-client                          nfs-server
    mount --- SSH  ===(encrypted)===  SSHD --- NFS

As the figure shows, when you mount an NFS directory from a client computer, you mount it through SSH. After the mount is done, NFS traffic in both directions is encrypted and therefore secure.

In the figure the NFS server is located at address nfs-server (use either the IP address or domain name of your NFS server instead), and the NFS client is at address nfs-client. Make sure that in both systems you have SSH and NFS related services already installed so you can use them.

Configuration is required on both the NFS server and the NFS client, as described in the two sections below.

NFS server configuration
The following steps are what we have to do on the NFS server.

Export the NFS directory to the server itself

For example, if the NFS directory to be exported is /home, add the following line to /etc/exports, exporting it to the server's own loopback address /home,root_squash)

The reason for exporting the directory /home to the server itself (, instead of to an NFS client's IP address in the ordinary fashion, is that, as the figure above shows, NFS requests reach the server's NFS daemon from SSHD running on the same machine, i.e. from localhost, rather than directly from the client computer. The NFS data is then carried securely to and from the client computer through the tunnel.

Note that the directory is exported with read and write permission (rw). root_squash means that whoever initiates the mount of this directory will not obtain root privilege on the NFS server.

Restart NFS and SSH daemons

On Red Hat 7.2, you can manually restart NFS and SSHD by issuing the following commands: #/sbin/service nfs restart
#/sbin/service sshd restart

If you want them started automatically at boot, on Red Hat 7.2 add the two lines below to the startup file /etc/rc.d/rc.local. /sbin/service nfs start
/sbin/service sshd start

The term nfs in the commands above is a shell script that starts two services, namely NFS and MOUNTD.

NFS client configuration
The three sections below show what we have to do on the NFS client.

Find the ports of NFS and MOUNTD on the NFS server

Let's say you are now on the NFS client computer. To find the NFS and MOUNTD ports on the NFS server, use the command: #rpcinfo -p nfs-server

program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100003 2 tcp 2049 nfs
100003 2 udp 2049 nfs
100021 1 udp 1136 nlockmgr
100021 3 udp 1136 nlockmgr
100021 4 udp 1136 nlockmgr
100011 1 udp 789 rquotad
100011 2 udp 789 rquotad
100011 1 tcp 792 rquotad
100011 2 tcp 792 rquotad
100005 2 udp 2219 mountd
100005 2 tcp 2219 mountd

Note the lines containing nfs and mountd; the port column gives their ports. Here nfs uses port 2049 and mountd uses port 2219.
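Reading the table by hand can also be scripted. The sketch below extracts the TCP ports of nfs and mountd from `rpcinfo -p` output with awk; the sample data reproduces two lines of the listing above, and on a live client you would pipe `rpcinfo -p nfs-server` in instead:

```shell
# get_port: print the TCP port of a given RPC service from `rpcinfo -p` output.
# rpcinfo -p columns: program vers proto port service.
get_port() {   # usage: get_port <service>   (rpcinfo output on stdin)
  awk -v svc="$1" '$5 == svc && $3 == "tcp" { print $4; exit }'
}

# Two lines from the listing above serve as sample input:
sample='100003 2 tcp 2049 nfs
100005 2 tcp 2219 mountd'

nfs_port=$(printf '%s\n' "$sample" | get_port nfs)
mountd_port=$(printf '%s\n' "$sample" | get_port mountd)
echo "nfs=$nfs_port mountd=$mountd_port"   # prints: nfs=2049 mountd=2219
```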

Set up the tunnel using SSH
On the NFS client computer, bind a local SSH-forwarded port to NFS port 2049. #ssh -f -c blowfish -L 7777:nfs-server:2049 -l tony nfs-server /bin/sleep 86400
#tony@nfs-server's password:

-c blowfish means SSH will use the blowfish algorithm for encryption.

-L 7777:nfs-server:2049 means bind local port 7777 (or any other port you want) on the SSH client and forward it to the NFS server at address nfs-server on port 2049.

-l tony nfs-server means log in to the server at address nfs-server (specify either the IP address or domain name of your NFS server) as the user tony.

/bin/sleep 86400 keeps the tunnel open for one day (86,400 seconds) without spawning a shell. You can specify any larger number.

The line #tony@nfs-server's password: is the prompt at which the user tony enters a password to complete authentication.

Also on the NFS client computer, bind another local SSH-forwarded port to MOUNTD port 2219. #ssh -f -c blowfish -L 8888:nfs-server:2219 -l tony nfs-server /bin/sleep 86400
#tony@nfs-server's password:

-L 8888:nfs-server:2219 means bind local port 8888 (or any other free port, but not 7777, which is already in use) and forward it to the NFS server at address nfs-server on port 2219.

Mount the NFS directory through the tunnel
On the NFS client computer, mount the NFS directory /home through the two forwarded ports 7777 and 8888 at a local directory, say /mnt/nfs. #mount -t nfs -o tcp,port=7777,mountport=8888 localhost:/home /mnt/nfs

Normally, the mount command mounts a remote NFS directory (/home) from the remote host's IP address (or domain name) onto a local directory (/mnt/nfs). The reason we mount from localhost instead of nfs-server is that the data, after decryption at the client end of the tunnel (see the figure above), appears on localhost, not on the remote host.

Alternatively, if you want the NFS directory mounted automatically at startup, add the following line to /etc/fstab: localhost:/home /mnt/nfs nfs tcp,rsize=8192,wsize=8192,intr,rw,bg,nosuid,port=7777,mountport=8888,noauto 0 0
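Since the forwarded ports may change between sessions, it can help to assemble the mount invocation in one place. This hypothetical helper merely builds the command string shown above; it does not perform the mount itself:

```shell
# build_mount_cmd: assemble the tunnelled mount invocation from the two
# locally forwarded ports (7777 for nfs and 8888 for mountd in the example above).
build_mount_cmd() {   # usage: build_mount_cmd <nfs_port> <mountd_port> <export> <mountpoint>
  printf 'mount -t nfs -o tcp,port=%s,mountport=%s localhost:%s %s\n' \
    "$1" "$2" "$3" "$4"
}

cmd=$(build_mount_cmd 7777 8888 /home /mnt/nfs)
echo "$cmd"   # prints: mount -t nfs -o tcp,port=7777,mountport=8888 localhost:/home /mnt/nfs
```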

Allow only traffic from authorised NFS clients to the NFS server (on server)

Suppose an NFS server provides only the NFS service and nothing else, so just three ports need to be reachable on it: RPC Portmapper (port 111), NFS (port 2049), and Mountd (here port 2219). We can then filter the traffic that reaches the NFS server. Using the iptables firewall running locally on the NFS server (you must install iptables to use the following commands), allow only traffic from authorised NFS clients. First allow traffic from the authorised subnet to the Portmapper, NFS, and Mountd ports; substitute your own subnet, in CIDR notation, for <authorised-subnet>, and note that portmapper and NFS listen on UDP as well as TCP, so repeat the rules with -p udp.
#iptables -A INPUT -i eth0 -s <authorised-subnet> -p tcp --dport 111 -j ACCEPT
#iptables -A INPUT -i eth0 -s <authorised-subnet> -p tcp --dport 2049 -j ACCEPT
#iptables -A INPUT -i eth0 -s <authorised-subnet> -p tcp --dport 2219 -j ACCEPT

Then deny everything else.
#iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 111 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 2049 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -p tcp --dport 2219 -j DROP
#iptables -A INPUT -i eth0 -s 0/0 -j DROP

Basically, the NFS service is located through the Portmapper service, so if we block Portmapper port 111, clients can no longer discover NFS port 2049 either.

Alternatively, you can use the TCP wrapper to filter access to your portmapper by adding a line of the form
portmapper: <authorised-subnet>
to /etc/hosts.allow (substitute your own subnet for <authorised-subnet>) to allow access to the portmapper only from that subnet.

Also add the line below to /etc/hosts.deny to deny access to all other hosts not specified above. portmapper: ALL

Filter out Internet traffic to the NFS service on the routers and firewalls (misc)

In many organisations whose computers are visible on the Internet, if the NFS service is also visible, we may need to block Internet traffic to ports 111 (Portmapper), 2049 (NFS), and 2219 (Mountd) on the routers or firewalls to prevent unauthorised access to these ports. With iptables set up as your firewall, use the following rules: #iptables -A INPUT -i eth0 -d nfs-server -p tcp --dport 111 -j DROP
#iptables -A INPUT -i eth0 -d nfs-server -p tcp --dport 2049 -j DROP
#iptables -A INPUT -i eth0 -d nfs-server -p tcp --dport 2219 -j DROP

Use the software tool NFSwatch to monitor NFS traffic (misc)

NFSwatch allows you to monitor NFS packets (traffic) flowing between the NFS client and server. It can be downloaded from. One good reason to monitor is that if some malicious activity is under way or has already taken place, the log created by NFSwatch can be used to trace how and where it originated. To monitor NFS packets between nfs-server and nfs-client, use the command: #nfswatch -dst nfs-server -src nfs-client

all hosts                   Wed Aug 28 10:12:40 2002    Elapsed time: 00:03:10
Interval packets:  1098 (network)   818 (to host)   0 (dropped)
Total packets:    23069 (network) 14936 (to host)   0 (dropped)
Monitoring packets from interface lo
                   int  pct  total                       int  pct  total
ND Read              0   0%      0  TCP Packets          461  56%  13678
ND Write             0   0%      0  UDP Packets          353  43%   1051
NFS Read           160  20%    271  ICMP Packets           0   0%      0
NFS Write            1   0%      1  Routing Control        0   0%     36
NFS Mount            0   0%      7  Address Resolution     2   0%     76
YP/NIS/NIS+          0   0%      0  Reverse Addr Resol     0   0%      0
RPC Authorization  166  20%    323  Ethernet/FDDI Bdcst    4   0%    179
Other RPC Packets    5   1%     56  Other Packets          2   0%    131
1 filesystem
File Sys           int  pct  total
tmp(32,17)           0   0%     15

Specify the IP address (or domain name) of the source (-src) and that of the destination (-dst).

By Nawapong Nakjang and Banchong Harangsri





Copyright © 1996-2021 by Softpanorama Society. The site was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site


The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Last modified: December 13, 2020