
NFS performance tuning


The first rule of tuning is to measure your I/O before and after each step. As with any complex system, some recommendations can be counterproductive, and blindly following them is foolish. A good tool for getting a handle on NFS server performance is `nfsstat`. This utility reads the counters in /proc/net/rpc/nfs[d] and displays them in a somewhat readable format. Some of the material referenced here is aimed at tuning Solaris, but it is useful for its description of the nfsstat output format. See also the TCP tuning info.

When troubleshooting NFS connections, and indeed most other network applications, you should first make sure the issue is not a problem on the machine itself, such as high load (which obviously affects the speed at which requests can be processed) or a bad network connection setting (half duplex kills network I/O). Sometimes, especially with Linux, autonegotiation does not work and you need to set the connection speed manually. A simple check with ifconfig reveals the network adapter parameters; uptime and ps will tell you how busy the machine is and which processes are responsible.
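A quick first pass might look like the following (a minimal sketch; eth0 is a placeholder interface name, and ethtool is assumed to be installed):

uptime                                   # load averages: is the box simply overloaded?
ps aux --sort=-%cpu | head               # which processes are eating CPU?
ifconfig eth0                            # interface errors, drops, and basic parameters
ethtool eth0 | grep -E 'Speed|Duplex'    # catch half duplex or a wrongly negotiated speed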

The nfsstat command generates detailed statistics for both the server and client sides of the NFS service. Use the -s command-line option to show server-side statistics and -v to specify the NFS version. For example, a high number in the badcalls column indicates that bad requests are reaching the server, which may mean a client is not functioning correctly and is submitting bad requests, due either to a software problem or to faulty hardware.
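As a rough sketch of the commands involved (note that -v as a version selector is the Solaris form; Linux nfsstat uses -v for verbose output):

nfsstat -s          # server-side statistics; compare badcalls against calls
nfsstat -c          # client-side statistics, run on the NFS client
nfsstat -s -v 3     # Solaris: restrict the report to NFS version 3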

The basic tuning steps are covered in the articles reproduced below.


Old News

Optimizing Your NFS Filesystem by Jeff Layton

ADMIN Magazine
NFS is the most widely used HPC filesystem. It is very easy to set up and performs reasonably well as primary storage for small to medium clusters. You can even use it for larger clusters if your applications do not rely on it for heavy I/O (e.g., when it only serves /home). One of the most common questions about NFS configuration is how to tune it for performance and management and what options are typically used. Tuning for performance is a loaded question, because performance is defined by so many different variables, the most important of which is how you measure performance. However, you can at least identify options for improving NFS performance. In this article, I'll go over some of the major options and illustrate their pluses and minuses.

In addition to tuning for performance, I will present a few useful options for managing and securing NFS. It's not an extensive list by any stretch of the imagination, but the options are typical for NFS. NFS tuning can occur on both server and clients. You can also tune both the NFS client and server TCP stacks. In this article, I've broken the list of tuning options into three groups: (1) NFS performance tuning options, (2) system tuning options, and (3) NFS management/policy options. The options for these three categories are listed below.

1. NFS performance tuning options

  • Synchronous vs asynchronous
  • Number of NFS daemons (nfsd)
  • Block size setting
  • Timeout and retransmission
  • FS-Cache
  • Filesystem-independent mount options

2. System tuning options

  • System memory
  • MTU
  • TCP tuning on the server

3. NFS management/policy options

  • Subtree checking
  • Root squashing

In the sections that follow, these options are presented and discussed.

Synchronous vs Asynchronous

Most people use the synchronous option on the NFS server. For synchronous writes, the server replies to NFS clients only when the data has been written to stable storage. Many people prefer this option because they have little chance of losing data if the NFS server goes down or network connectivity is lost.

Asynchronous mode allows the server to reply to the NFS client as soon as it has processed the I/O request and sent it to the local filesystem; that is, it does not wait for the data to be written to stable storage before responding to the NFS client. This can save time for I/O requests and improve performance. However, if the NFS server crashes before the I/O request gets to disk, you could lose data.

Synchronous or asynchronous mode can be set when the filesystem is mounted on the clients by simply putting sync or async on the mount command line or in the file /etc/fstab for the NFS filesystem. If you want to change the option, you first have to unmount the NFS filesystem, change the option, then remount the filesystem.

The choice between the two modes of operation is up to you. If you have a copy of the data somewhere, you can perhaps run asynchronously for better performance. If you don't have copies or the data cannot be easily or quickly reproduced, then perhaps synchronous mode is the better option. No one can make this determination but you.
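As an illustration of the client-side mount options the article describes (server:/export and /mnt/nfs are placeholder names; note that /etc/exports on the server also accepts sync and async for the export itself):

# /etc/fstab on the NFS client
server:/export   /mnt/nfs   nfs   rw,sync    0 0
# or, accepting the risk of data loss for speed:
# server:/export   /mnt/nfs   nfs   rw,async   0 0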

Number of NFS Daemons

NFS uses threads on the server to handle incoming and outgoing I/O requests. These show up in the process table as nfsd (NFS daemons). Using threads helps NFS scale to handle large numbers of clients and large numbers of I/O requests. By default, NFS only starts with eight nfsd processes (eight threads), which, given that CPUs today have very large core counts, is not really enough.

You can find the number of NFS daemons in two ways. The first is to look at the process table and count the number of NFS processes with

ps aux | grep nfs

The second way is to look at the NFS config file (e.g., /etc/sysconfig/nfs) for an entry that says RPCNFSDCOUNT, which tells you the number of NFS daemons for the server.

If the NFS server has a large number of cores and a fair amount of memory, you can increase RPCNFSDCOUNT. I have seen 256 used on an NFS server with 16 cores and 128GB of memory, and it ran extremely well. Even for home clusters, eight NFS daemons is very small, and you might want to consider increasing the number. (I have 8GB on my NFS server with four cores, and I run with 64 NFS daemons.)

You should also increase RPCNFSDCOUNT when you have a large number of NFS clients performing I/O at the same time. For this situation, you should also increase the amount of memory on the NFS server to a large-ish number, such as 128 or 256GB.

Don't forget that if you change the value of RPCNFSDCOUNT, you will have to restart NFS for the change to take effect.
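For example, on a Red Hat-style system the change might look like this (the file path and service name vary by distribution; Debian-based systems keep RPCNFSDCOUNT in /etc/default/nfs-kernel-server):

# /etc/sysconfig/nfs
RPCNFSDCOUNT=64

# restart NFS so the new thread count takes effect
systemctl restart nfs-server      # or: service nfs restart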

One way to determine whether more NFS threads helps performance is to check the data in /proc/net/rpc/nfsd for the load on the NFS daemons. The output line that starts with th lists the number of threads, and the last 10 numbers are a histogram showing how many seconds the first 10% of threads were busy, the second 10%, and so on.

Ideally, you want the last two numbers to be zero or close to zero, indicating that all of the threads are rarely busy at the same time. If the last two numbers are fairly high, you should add NFS daemons, because the NFS server has become the bottleneck. If the last two, three, or four numbers are zero, some threads are probably never used. Personally, I don't mind that spare capacity if I have enough memory in the system, because the load may grow to a point at which more NFS daemons are needed.
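A quick way to inspect the histogram on the server (the sample line is illustrative only, not real output):

grep ^th /proc/net/rpc/nfsd
# th 8 594 3733.140 83.850 ... 0.000 0.000
#    ^ thread count, followed by the busy-time histogram; watch the last few values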

Block Size Setting

Two NFS client options specify the size of data chunks for writing (wsize) and reading (rsize). If you don't specify the chunk sizes, the defaults are determined by the versions of NFS and the kernel being used. If you have NFS already running and configured, the best way to check the current chunk size is to run the command

cat /proc/mounts

on the NFS client and look for the wsize and rsize values.

Chunk size affects how many remote procedure calls (RPCs), and therefore how many network packets, are created and sent. For example, transmitting 1MB of data with a 32KB chunk size takes 32 separate RPCs, each of which becomes several network packets. Increasing the chunk size to 64KB halves that to 16 RPCs and reduces the packet count accordingly.

The chunk size that works best for your NFS configuration depends primarily on the network configuration and the applications performing I/O. If your applications are doing lots of small I/O, then a large chunk size would not make sense. The opposite is true as well.

If the applications are using a mixture of I/O payload sizes, then selecting a block size might require some experimentation. You can experiment by changing the options on the clients; you have to unmount and remount the filesystem before rerunning the applications. Don't be afraid of testing a range of block sizes, starting with the default and moving into the megabyte range.
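A hypothetical test cycle on the client might look like this (server:/export and /mnt/nfs are placeholders):

umount /mnt/nfs
mount -t nfs -o rw,rsize=65536,wsize=65536 server:/export /mnt/nfs    # try 64KB chunks
grep /mnt/nfs /proc/mounts                                            # verify what was negotiated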

Timeout and Retransmission

Timeout and retransmission options, although they don't affect performance at first glance, are very important for NFS, especially for the clients. The options determine how long the NFS client waits until retransmitting a packet (timeo) and how many times an NFS client attempts to resend the packet (retrans) before restarting the entire process.

The timeo (timeout) option is the amount of time the NFS client waits on the NFS server before retransmitting a packet (no ACK received). The value for timeo is given in tenths of a second, so if timeo is 5, the NFS client waits 0.5 seconds before retransmitting. The default for NFS over UDP is 7 (0.7 seconds); NFS over TCP uses a much larger default. You can adjust the value with the timeo option of the mount command or by setting it in the /etc/fstab file on the NFS client.

The other option, retrans, specifies the number of tries the NFS client will make to retransmit the packet. If the value is 5, the client resends the RPC packet five times, waiting timeo seconds between tries. If, after the last attempt, the NFS server does not respond, you get a message Server not responding. The NFS client then resets the RPC transmission attempts counter and tries again in the same fashion (same timeo and retrans values).

On congested networks, you often see retransmissions of RPC packets. A good way to tell is to run the

nfsstat -r

command and look for the column labeled retrans. If the number is large, the network is likely very congested. If that is the case, you might want to increase the values of timeo and retrans to increase the number of tries and the amount of time between RPC tries. Although taking this action will slow down NFS performance, it might help even out the network traffic so that congestion is reduced. In my experience, getting rid of congestion and dropped packets can result in better, more even performance.
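For example, a more patient client mount might be specified like this in /etc/fstab (placeholder names again; timeo is in tenths of a second, so 30 means 3 seconds):

server:/export   /mnt/nfs   nfs   rw,timeo=30,retrans=5   0 0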


FS-Cache

Another option you might want to consider using to improve NFS client performance is FS-Cache, which caches NFS client requests on a local storage device, such as a hard drive or SSD, helping improve NFS read I/O: Data that resides on the local NFS client means the NFS server does not have to be contacted.

To use NFS caching you have to enable it explicitly by adding the option -o fsc to the mount command or in /etc/fstab:

# mount <nfs-share>:/ </mount/point> -o fsc

Any data access to </mount/point> will go through the NFS cache unless the file is opened for direct I/O or if a write I/O is performed.

The important thing to remember is that FS-Cache only works if the I/O is a read. FS-Cache can't help with a direct I/O (read or write) or an I/O write request. However, there are plenty of cases in which FS-Cache can help. For example, if you have an application that needs to read from a database or file and you are running a large number of copies of the same application, FS-Cache might help, because each node could cache the database or file.
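A sketch of enabling FS-Cache on a client (package and service names vary by distribution and are assumptions here):

yum install cachefilesd            # or: apt-get install cachefilesd
systemctl start cachefilesd
mount -t nfs -o fsc server:/export /mnt/nfs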

Filesystem-Independent Mount Options

The mount command in Linux has a number of options that are independent of the filesystem and might be able to improve performance. Some key options are:

  • noatime – Inode access times are not updated on the filesystem. This can help performance because the access time of the file is not updated every time a file is accessed.
  • nodiratime – The directory inode is not updated on the filesystem when it is accessed. This can help performance in the same way as not updating the file access time.
  • relatime – Inode access times are relative to the modify or change time for the file, so the access time is updated only if the previous atime (access time) was earlier than the modify or change time.

Before using these options, please read the mount man page. You also need to decide whether access time is worth tracking accurately. By not tracking it or not tracking it as accurately, you can increase performance.
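For example, a client mount that skips access-time updates entirely might look like this (placeholder names):

mount -t nfs -o rw,noatime,nodiratime server:/export /mnt/nfs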

System Memory

System tuning options aren't really NFS tuning options, but a system change can result in a change in NFS performance. In general, Linux and its services like memory and will grab as much system memory as possible. This memory is returned if other applications need it, but rather than let it sit idle, the kernel uses it for buffers and caches.

NFS is definitely one of the services, particularly on the server, that will use as much buffer space as possible. With these buffers, NFS can merge I/O requests to improve bandwidth. Therefore, the more physical memory you can add to the NFS server, the more likely performance will improve, particularly if you have lots of NFS clients hitting the server at the same time. The question is: How much memory does your NFS server need?

The answer is not easy to determine because of conflicting goals. Ideally, you should put as much memory in the server as you can afford. But if budgets are a little on the tight side, then you'll have to deal with trade-offs between buying the largest amount of memory for the NFS server or putting the money into other aspects of the system. Could you reduce memory on the NFS server from 512 to 256GB and perhaps buy an extra compute node? Is that worth the trade? The answer is up to you.

As a rule of thumb for production HPC systems, however, I tend to put in no less than 64GB on the NFS server, because memory is less expensive overall. You can always go with less memory, perhaps 16GB, but you might pay a performance penalty. However, if your applications don't do much I/O, then the trade-off might be worthwhile.

If you choose to use asynchronous NFS mode, you will need more memory to take advantage of async, because the NFS server first stores the I/O request in memory, responds to the NFS client, and then retires the I/O by having the filesystem write it to stable storage. Therefore, you need as much memory as possible to get the best performance.

The very last word I want to add about system memory concerns speed and the number of memory channels. To wring out every last bit of performance from your NFS server, you will want the fastest possible memory, while recognizing the trade-off between memory capacity and memory performance. The solution to that trade-off is really up to you, but I like to see how much memory I can get using the fastest DIMMs possible. If the capacity is not large enough, you might want to step down to the next level in memory speed to increase capacity.

For the best possible performance, you also want an NFS server with the maximum number of memory channels to increase the overall memory bandwidth of the server. When populating memory, be sure to put:

  • at least one DIMM in each channel,
  • the same number of DIMMs in each channel, and
  • the same DIMM size and speed in each channel.

Again, this is more likely to be critical if you are using asynchronous mode, but it's a good idea for even synchronous mode.


MTU

Changing the network MTU (maximum transmission unit) is also a good way to affect performance, but it is not an NFS tunable. Rather, it is a network option that you can tune on the system to improve NFS performance. The MTU is the maximum amount of data that can be sent in a single Ethernet frame. The default MTU is typically 1500 (1,500 bytes per frame), but it can be changed fairly easily.

For the greatest effect on NFS performance, you will have to change the MTU on both the NFS server and the NFS clients. You should check both of these systems before changing the value to determine the largest MTU you can use. You also need to check for the largest MTU the network switches and routers between the NFS server and the NFS clients can accommodate (refer to the hardware documentation). Most switches, even non-managed "home" switches, can accommodate an MTU of 9000 (commonly called "jumbo packets").

The MTU size can be very important because it determines packet fragments on the network. If your chunk size is 8KB and the MTU is 1500, it will take six Ethernet frames to transmit the 8KB. If you increase the MTU to 9000 (9,000 bytes), the number of Ethernet frames drops to one.

The most common recommendation for better NFS performance is to set the MTU on both the NFS server and the NFS client to 9000 if the underlying network can accommodate it. A study by Dell a few years back examined the effect of an MTU of 1500 compared with an MTU of 9000. Using Netperf, they found that the bandwidth increased by about 33% when an MTU of 9000 was used.
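A minimal sketch of enabling jumbo frames on one host (eth0 is a placeholder interface name; the change must be made on the server and on every client, must be supported by the switches in between, and is lost on reboot unless added to the distribution's network configuration):

ip link set dev eth0 mtu 9000
ip link show eth0 | grep mtu       # verify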

TCP Tuning on the Server

A great deal can be done to tune the TCP stack for both the NFS client and the NFS server. Many articles around the Internet discuss TCP tuning options for NFS and for network traffic in general. The exact values vary depending on your specific situation. Here, I want to discuss two options for better NFS performance: system input and output queues.

Increasing the size of the input and output queues allows more data to be transferred via NFS. Effectively, you are increasing the size of the buffers that can store data. The more data that can be held in memory, the faster NFS can process it (i.e., more data is queued up). The NFS daemons on the server share the same socket input and output queues, so larger queues give all of the NFS daemons more buffer space for sending and receiving data.

For the input queue, the two values you want to modify are /proc/sys/net/core/rmem_default (the default size of the read queue in bytes) and /proc/sys/net/core/rmem_max (the maximum size of the read queue in bytes). These values are fairly easy to modify:

echo 262144 > /proc/sys/net/core/rmem_default
echo 262144 > /proc/sys/net/core/rmem_max

These commands change the read buffer sizes to 256KiB (base 2), which the NFS daemons share. You can do the same thing for the write buffers that the NFS daemons share:

echo 262144 > /proc/sys/net/core/wmem_default
echo 262144 > /proc/sys/net/core/wmem_max

After changing these values, you need to restart the NFS server for them to take effect. However, if you reboot the system, these values will disappear and the defaults will be used. To make the values survive reboots, you need to enter them in the proper form in the /etc/sysctl.conf file.
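For example, the persistent form might look like this (same 256KiB values as above):

# /etc/sysctl.conf
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144

sysctl -p      # apply without waiting for a reboot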

Just be aware that increasing the buffer sizes doesn't necessarily mean performance will improve. It just means the buffer sizes are larger. You will need to test your applications with various buffer sizes to determine whether increasing buffer size helps performance.

Subtree Checking

Assume the NFS server has exported a directory from the root filesystem (e.g., /usr/local) and that this directory is part of the root disk (i.e., it is not on a separate partition or drive). On a compromised NFS client, an attacker could guess the file handle of a file that is in the same filesystem but outside /usr/local/ (the NFS-exported directory). Now data outside the export on your NFS server has been exposed.

Adding the option subtree_check to the exports on the NFS server checks that the file being accessed is contained within the exported directory. In the case here, it would force the NFS server to check that the requested file was located within /usr/local/. Alternatively, you can specify the option no_subtree_check on the NFS server, and it will not check that the requested file is in the exported directory. Many people have the opinion that subtree_check can have a big effect on performance, but the final determination is up to you. Is performance more important than security for the configuration and your situation?

One way to overcome the need for subtree_check is to put the exported directory on a separate partition or separate drive to prevent a rogue user from guessing a file handle to anything outside of the filesystem. You should partition your drive space and give a specific mountpoint to the directory that is to be exported. For example, if you want to export /usr/local/, it should have its own storage partition (or drive) and be mounted as /usr/local on the NFS server. By doing this, crackers can't guess file handles outside of the specific export.
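As an illustration (the client network 192.168.1.0/24 is a placeholder):

# /etc/exports on the NFS server
/usr/local   192.168.1.0/24(rw,sync,subtree_check)
# or, trading the check for performance:
# /usr/local   192.168.1.0/24(rw,sync,no_subtree_check)
# apply changes with: exportfs -ra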

Root Squashing

By default, user root is “squashed” to the user nobody so that NFS access is compartmentalized. This is important because, if a rogue user boots a system from some sort of media (e.g., a USB stick), that user can be root on that system, change its IP address to impersonate a trusted client, mount the exported filesystem, and copy data from the server. However, if root is squashed to user nobody, then root has only the privileges granted to ordinary users, preventing a compromised system from allowing root to pull data off your server.

On the other hand, if you want root to have access to an NFS-mounted filesystem, you can add the option no_root_squash to the file /etc/exports to allow root access. Just be aware that if someone reboots your system to gain root access, it's possible for them to copy (steal) data.
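For example (the export path and client network are placeholders):

# /etc/exports on the NFS server
/data   192.168.1.0/24(rw,sync,root_squash)        # the default behavior, stated explicitly
# /data   192.168.1.0/24(rw,sync,no_root_squash)   # only for trusted administrative clients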


In this article, I presented various options you could use to improve performance on an NFS filesystem, although depending on your circumstances, they might not help or might even result in reduced performance. Some of these tuning parameters are NFS options, whereas others involve changes to the system that improve performance or are options for managing NFS filesystems. The best way to judge which options are useful is to run tests, particularly with the applications you plan on running.

This blog represents my own viewpoints and not those of my employer, Amazon Web Services.

[PDF] IBM Redbooks | AIX 5L Performance Tools Handbook

Benchmarking NFSv3 vs. NFSv4 file operation performance

One issue with migrating to NFSv4 is that all of the filesystems you export have to be located under a single top-level exported directory. This means you have to change your /etc/exports file and also use Linux bind mounts to mount the filesystems you wish to export under your single top-level NFSv4 exported directory. Because the manner in which filesystems are exported in NFSv4 requires fairly large changes to system configuration, many folks might not have upgraded from NFSv3. This administration work is covered in other articles. This article provides performance benchmarks of NFSv3 against NFSv4 so you can get an idea of whether your network filesystem performance will be better after the migration.
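A hypothetical layout of the kind described above, using bind mounts under a single NFSv4 export root (directory names and the client network are placeholders):

mkdir -p /export/home /export/projects
mount --bind /home /export/home
mount --bind /srv/projects /export/projects

# /etc/exports: fsid=0 marks the NFSv4 pseudo-root
/export            192.168.1.0/24(ro,fsid=0)
/export/home       192.168.1.0/24(rw,nohide)
/export/projects   192.168.1.0/24(rw,nohide)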

I ran these performance tests using an Intel Q6600-based server with 8GB of RAM. The client was an AMD X2 with 2GB of RAM. Both machines were using Intel gigabit PCIe EXPI9300PT NICs, and the network between the two machines had virtually zero traffic on it for the duration of the benchmarks. The NICs provide a very low latency network, as described in a past article. While testing performance for this article I ran each benchmark multiple times to ensure performance was repeatable. The difference in RAM between the two machines changes how Bonnie++ is run by default. On the server, I ran the test using 16GB files, and on the client, 4GB files. Both machines were running 64-bit Fedora 9.

The filesystem exported from the server was an ext3 filesystem created on a RAID-5 over three 500GB hard disks. The exported filesystem was 60GB in size. The stripe_cache_size was 16384, meaning that for a three-disk RAID array, 192MB of RAM was used to cache pages at the RAID level. Default cache sizes for distributions might be in the 3-4MB range for the same RAID array. Using a larger cache directly improves write performance of the RAID. I also ran benchmarks locally on the server without using NFS to get an idea of the theoretical maximum performance NFS could achieve.

Some readers may point out that RAID-5 is not a desirable configuration, and certainly running it on only three disks is not a typical setup. However, the relative performance of NFSv3 to NFSv4 is the main point of interest here. I used a three-disk RAID-5 because it gave me a filesystem that could be recreated for each benchmark run. Recreating the filesystem removes factors such as file fragmentation that can adversely affect performance.

I tested NFSv3 with and without the async option. The async option allows the NFS server to respond to a write request before it is actually on disk. The NFS protocol normally requires the server to ensure data has been written to storage successfully before replying to the client. Depending on your needs, you might be running some filesystems mounted with the async option for the performance improvement it offers, though you should be aware of what async implies for data integrity, in particular the potential for undetectable data loss if the NFS server crashes.

The table below shows the Bonnie++ input, output, and seek benchmarks for the various NFS version 3 and 4 mounted filesystems, as well as the benchmark run locally on the server. As expected, read performance is almost identical whether or not you use the async option. You can perform more than five times the number of "seeks" over NFS when using the async option, presumably because the server can avoid actually performing some of them when a subsequent seek is issued before the initial seek has completed. Unfortunately, the sequential block output for NFSv4 is no better than for NFSv3. Without the async option, output was about 50MB/s, whereas the local filesystem was capable of about 91MB/s. With the async option, sequential block output over the NFS mount came much closer to local disk speeds.

Configuration                    Sequential Output                      Sequential Input           Random
                                 Per Char     Block       Rewrite       Per Char     Block          Seeks
                                 K/sec %CPU   K/sec %CPU  K/sec %CPU    K/sec %CPU   K/sec  %CPU    /sec  %CPU
local filesystem                 62340   94   91939   22  45533   19    43046   69   109356   32    239.2    0
NFSv3 noatime,nfsvers=3          50129   86   47700    6  35942    8    52871   96   107516   11    1704     4
NFSv3 noatime,nfsvers=3,async    59287   96   83729   10  48880   12    52824   95   107582   10    9147    30
NFSv4 noatime                    49864   86   49548    5  34046    8    52990   95   108091   10    1649     4
NFSv4 noatime,async              58569   96   85796   10  49146   10    52856   95   108247   11    9135    21

The table below shows the Bonnie++ benchmarks for file creation, read, and deletion. Notice that the async option has a tremendous impact on file creation and deletion.

Configuration                    Sequential Create                      Random Create
                                 Create       Read        Delete        Create       Read           Delete
                                 /sec  %CPU   /sec  %CPU  /sec  %CPU    /sec  %CPU   /sec   %CPU    /sec  %CPU
NFSv3 noatime,nfsvers=3           186     0   6122    10   182     0     186     0    6604    10     181     0
NFSv3 noatime,nfsvers=3,async    3031    10   8789    11  3078     9    2921    11   11271    13    3069     9
NFSv4 noatime                      98     0   6005    13   193     0      93     0    6520    11     192     0
NFSv4 noatime,async              1314     8   7155    13  5350    12    1298     8    7537    12    5060     9

To test more day-to-day performance, I extracted an uncompressed Linux kernel source tarball and then deleted the extracted sources. Note that the source tarball was left uncompressed to ensure that the client's CPU was not slowing down extraction.

Configuration                    Find (m:ss)   Remove (m:ss)
local filesystem                 0:01          0:03
NFSv3 noatime,nfsvers=3          9:44          2:36
NFSv3 noatime,nfsvers=3,async    0:31          0:10
NFSv4 noatime                    9:52          2:27
NFSv4 noatime,async              0:40          0:08

Wrap up

These tests show no clear performance advantage to moving from NFSv3 to NFSv4.

NFSv4 file creation is actually about half the speed of file creation over NFSv3, but NFSv4 can delete files quicker than NFSv3. By far the largest speed gains come from running with the async option on, though using this can lead to issues if the NFS server crashes or is rebooted.

Ben Martin has been working on filesystems for more than 10 years. He completed his Ph.D. and now offers consulting services focused on libferris, filesystems, and search solutions.


