
rm Command


Remove files (delete/unlink). Use the rmdir command to remove a directory (make sure it is empty first). In many cases it is safer to move files to a special junk directory instead of deleting them: there is no "undelete" command in Unix, and once you delete a file it is gone (unless it is on a backup tape). Unix has no "system" attribute, so rm run as root happily deletes system files and directories; the immutable attribute can be used instead, but it creates side effects. Many Unix sysadmin horror stories are related to the unintended consequences and unanticipated side effects of rm commands. The behavior of rm -r .* is easily forgotten from one encounter to another, especially if a sysadmin works with both Unix and Windows servers.

      rm [options]... file...


Root deletion protection

To remove a file you must have write permission on the directory where it is stored. Sun introduced "rm -rf /" protection in Solaris 10, first released in 2005. Upon executing the command, the system now reports that the removal of / is not allowed.

Shortly after, the same functionality was introduced into FreeBSD version of rm  utility.

GNU rm  refuses to execute rm -rf /  if the --preserve-root  option is given, which has been the default since version 6.4 of GNU Core Utilities was released in 2006.

No such protection exists for critical system directories like /etc, /bin, and so on, but you can imitate it by putting a file named "-i" into such directories, or by using a wrapper for interactive use of the command. See also Safe-rm.
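The "-i" file trap mentioned above can be sketched as follows (the scratch directory and filenames are illustrative). Because the shell glob expands the file literally named "-i" as an argument, rm parses it as the -i flag and prompts before each removal:

```shell
# Plant a file literally named "-i" in a directory you want to protect.
# A careless "rm *" then expands it into an option, and rm prompts
# before each removal instead of deleting silently.
d=$(mktemp -d) && cd "$d"            # scratch directory for the demo
touch ./-i important.conf zz.log

# Preview the expansion before running the real command:
echo rm *          # prints: rm -i important.conf zz.log
```

Note the limits of the trick: it does nothing against rm with explicit paths or against rm -rf dir, and a shell with unusual collation settings could order the expansion differently.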

User proofing

The rm command is pretty primitive and not well designed for interactive use, so in interactive use bad things happen. There are interesting and very educational stories of spectacular blunders in which an admin working as root erased an entire server or some critical directories. Such mishaps become a real horror story if the backup is missing or faulty. Several such stories are quoted below.

Intricate details (like the recursive behavior of rm .*, which is not actually mentioned in the rm man page) are easily forgotten from one encounter to another, especially if a sysadmin works with several flavors of Unix, or with both Unix and Windows servers. Recursive deletion, whether via rm -r or via find -exec rm {} \;, has many pitfalls that can destroy a server quite thoroughly if run without testing.

Some of these pitfalls can be viewed as deficiencies of the rm implementation (it should automatically block deletion of system directories like / and /etc unless the -f flag is specified). Unix lacks system attributes for files, although the sticky bit or the immutable attribute can be used instead. It is wise to use a wrapper for rm. There are several more or less usable approaches to writing such a wrapper:

To see what will actually be executed after shell expansion, preface your rm command with echo. For example, if there is a filename starting with a dash, you will get a very confusing message from rm:

	$ rm *
	rm: unknown option -- -
	usage: rm [-f|-i] [-dPRrvW] file ...
To find out what caused it, prefix the command with echo:

	echo rm *

rm -R will recursively remove directories and their contents. This is a dangerous command: if the number of files is small you should probably use it with the -i option, and if the number of files is large you should always run ls -Rl first to test the set of files that rm -R will unlink. From the ls man page:

       -R, --recursive:        list subdirectories recursively

You can generate the list of files for rm dynamically, piping it through xargs if it is too long. For example:

ls | grep copy          # check which files will be affected

ls | grep copy | xargs rm -f    # remove all the files
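The ls | grep pipeline breaks on filenames containing spaces or other whitespace. A safer variant, assuming GNU find and xargs are available, passes NUL-delimited names instead (the directory and pattern below are illustrative):

```shell
# Demo in a scratch directory; the pattern '*copy*' is illustrative
d=$(mktemp -d) && cd "$d"
touch "report copy.txt" "report.txt" "old copy 2"

# Preview first: print the matching files
find . -maxdepth 1 -name '*copy*' -print

# Delete, passing NUL-terminated names so whitespace in names is safe
find . -maxdepth 1 -name '*copy*' -print0 | xargs -0 rm -f
```

The -print0/-0 pair replaces newline delimiters with NUL bytes, which cannot appear in Unix filenames, so no name can be split or misinterpreted.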

The simplest way to defend yourself against accidentally deleting files is to create an alias or function for rm along the lines of:

alias rm="rm -i"

or use the function

function rm { /bin/rm -i "$@" ; }

Unfortunately, this encourages a tendency to mindlessly pound y and the return key to affirm removals, until you realize that you just confirmed deletion of the one file you needed to keep. Therefore it is better to create a function that first displays the files and then deletes them:

function rm {
   if [ -z "$PS1" ] ; then     # non-interactive shell: behave like plain rm
      /bin/rm "$@"
      return $?
   fi
   ls -l "$@"
   printf 'remove [y/n]? '
   read REPLY                  # read reply (y/n)
   if [ "_$REPLY" = "_y" ]; then
      /bin/rm "$@"
      return $?
   fi
   echo '(rm command cancelled)'
   return 1
}

If this function is included in the startup dot files, it will be found ahead of the system binary /bin/rm in the search path and can break sloppily written batch jobs. That is why it checks that PS1 is non-empty, but this is not a 100% reliable test.
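A somewhat more robust interactivity check looks for the letter i in the special parameter $-, which holds the shell's option flags. A minimal sketch along the same lines as the function above (still a heuristic, but unlike PS1, $- cannot be set by a script):

```shell
# Sketch: rm wrapper that falls back to the real rm in non-interactive shells.
rm () {
    case $- in
        *i*) ;;                          # interactive shell: show files and ask
        *)   /bin/rm "$@"; return ;;     # batch job: behave like plain rm
    esac
    ls -ld -- "$@"
    printf 'remove [y/N]? '
    read -r REPLY
    if [ "$REPLY" = y ]; then
        /bin/rm -- "$@"
    else
        echo '(rm command cancelled)'
    fi
}
```

The -- after /bin/rm also protects against filenames that begin with a dash slipping through as options.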

Undeletable files

The rm command accepts the -- option which will cause it to stop processing flag options from that point forward. This allows the removal of file names that begin with a dash (-).

rm -- -filename

Accidental creation of files starting with a dash is actually a common problem for novice sysadmins. The problem is that if you type rm -foo, the rm command treats the filename as an option.

There are two simple ways to delete such a file. One, shown above, uses -- to mark the end of options; the second uses a relative or absolute pathname:

rm ./-badfile
rm /home/user/-badfile

Here the debugging trick mentioned above, prefacing the rm command with echo, really helps in understanding what happens.

To delete a file with non-printable characters in its name, you can also use the shell wildcard "?" for each bad character:

rm bad?file?name

Another way to delete files that have control characters in their names is to use the -i option and an asterisk, which gives you the option of removing each file in the directory, even the ones that you can't type:

% rm -i *
rm: remove faq.html (y/n)? n
rm: remove foo (y/n)? y

The -i option may also be helpful when you are dealing with files with Unicode characters that appear to be regular letters in your locale, but that don't match patterns or names you use otherwise.

A great way to discover files with control characters in them is to use the -q option to the Unix ls command (some systems also support a useful -b option). You can, for example, alias the ls command:

alias lsq='ls -q '

Files that have control characters in their filenames will then appear with question marks:

ls -q f*
faq.html                fo?o

Using rm with find

You need to be extra careful running rm from the -exec option of the find command. There are many horror stories in which people deleted large parts of their filesystem due to typos or sloppy find parameters. Again, it is prudent to test first using the ls command to see the actual list of files to be deleted. Sometimes it is quite different from what you intended :-).

I often first generate the list with ls, inspect it, and then in a separate step pipe it into rm:

cat rm.lst | xargs -t -n1 rm 
The -t flag causes the xargs command to display each command before running it, so you can see what is happening.
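The same preview-then-delete discipline works with find itself; the scratch directory and the '*.o' pattern below are illustrative:

```shell
# Illustrative setup: a scratch directory with some object files
d=$(mktemp -d)
touch "$d/a.o" "$d/b.o" "$d/main.c"

# 1. Dry run: -print shows exactly which files the predicates match
find "$d" -type f -name '*.o' -print

# 2. Only after inspecting that list, delete. The "+" terminator
#    batches many filenames into few rm invocations, like xargs.
find "$d" -type f -name '*.o' -exec rm {} +
```

Running the identical find expression twice, first with -print and then with -exec rm, guarantees the tested file set and the deleted file set are the same.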


Old News ;-)

[Jun 13, 2010] Unix Blog -- rm command - Argument List too long

Some days back I had problems deleting a large number of files in a folder.

While using the rm command it would say:

$rm -f /home/sriram/tmp/*.txt
-bash: /bin/rm: Argument list too long

This is not a limitation of the rm command, but a kernel limitation on the size of the parameters of the command. Since I was performing shell globbing (selecting all the files with extension .txt), this meant that the size of the command line arguments became bigger with the number of the files.

One solution is to either run the rm command in a loop, deleting each file individually, or to use find with xargs to pass the files to rm. I prefer the find solution, so I changed the rm line inside the script to:

find /home/$u/tmp/ -name '*.txt' -print0 | xargs -0 rm

This does the trick and solves the problem. One final touch was to avoid warnings if there were no actual files to delete, like:

rm: too few arguments

Try `rm --help' for more information.

For this I added the -f parameter to rm (-f, --force: ignore nonexistent files, never prompt). Since this was running in a shell script from cron, the prompt was not needed anyway, so no problem here. The final line I used to replace the rm one was:

find /home/$u/tmp/ -name '*.txt' -print0 | xargs -0 rm -f

You can also do this in the directory:

ls | xargs rm

Alternatively you could have used a one line find:

find /home/$u/tmp/ -name '*.txt' -exec rm {} \; -print

TidBITS: A Mac User's Guide to the Unix Command Line, Part 1

Warning! The command line is not without certain risks. Unlike when you work in the Finder, some tasks you carry out are absolute and cannot be undone. The command I am about to present, rm, is very powerful. It removes files permanently and completely instead of just putting them in the Trash. You can't recover files after deleting them with rm, so use it with great care, and always use the -i option, as explained below, so Terminal asks you to confirm deleting each file.

Your prompt should look something like this, showing that you are still inside the Test directory you created earlier:

   [Walden:~/Test] kirk%

Type the following:

   % rm -i testfile

The rm command removes files and directories, in this case the file testfile. The -i option tells Terminal to run the rm command in interactive mode, asking you to make sure you want to delete the file. Terminal asks:

   remove testfile?

Type y for yes, then press Return or Enter and the file is removed. If you wanted to leave it there, you could just type n for no, or press Return.

We should check to make sure the file is gone:

   % ls

After typing ls, you should just see a prompt. Terminal doesn't tell you that the directory is empty, but it shows what's in the directory: nothing.

Now, move up into your Home folder. Type:

   % cd ..

This is the same cd command that we used earlier to change directories. Here, the command tells the Terminal to go up in the directory hierarchy to the next directory (the .. is a shortcut for the parent directory); in this case, that is your Home directory.

Type ls again to see what's in this directory:

   % ls

You should see something like this:

   Desktop    Library  Music     Public  Test
   Documents  Movies   Pictures  Sites

The Test directory is still there, but using rm, it's easy to delete it by typing:

   % rm -d -i Test

The -d option tells rm to remove directories. When Terminal displays:

   remove Test?

Type y, then press Return or Enter. (If you didn't remove testfile, as explained above, the rm command won't delete the directory because it won't, by default, delete directories that are not empty.)

Make one final check to see if the directory has been deleted.

   % ls
   Desktop    Library  Music     Public
   Documents  Movies   Pictures  Sites

The Answer Gang 62: about the Unix command rm

I have a question about the rm command. Would you please tell me how to remove all files except certain ones, like anything ending with .c?

[Mike] The easiest way (meaning it will work on any Unix systems anywhere), is to move those files to a temporary directory, then delete "everything", then move those files back.

mkdir /tmp/tdir
mv *.c /tmp/tdir
rm *
mv /tmp/tdir/* .
rmdir /tmp/tdir

[Ben] The above would work, but seems rather clunky, as well as needing a lot of typing.

[Mike] Yes, it's not something you'd want to do frequently. However, if you don't know a lot about Unix commands, and are hesitant to write a shell script which deletes a lot of files, it's a good trick to remember.

[Ben] It's true that it is completely portable; the only questionable part of my suggestion immediately below might be the "-1" in the "ls", but all the versions of "ls" with which I'm familiar support the "single column display" function. It would be very easy to adapt.

My preference would be to use something like

rm $(ls -1|grep -v "\.c$")

because the argument given to "grep" can be a regular expression. Given that, you can say things like "delete all files except those that end in 'htm' or 'html'", "delete all except '*.c', '*.h', and '*.asm'", as well as a broad range of other things. If you want to eliminate the error messages given by the directories (rm can't delete them without other switches), as well as making "rm" ask you for confirmation on each file, you could use a "fancier" version -

rm -i $(ls -AF1|grep -v "/$"|grep -v "\.c$")

Note that in the second argument - the only one that should be changed - the "\" in front of the ".c" is essential: it makes the "." a literal period rather than a single-character match. As an example, let's try the above with different options.

In a directory that contains


".c" means "'c' preceded by any character" - NO files would be deleted.

"\.c" means "'c' preceded by a period" - deletes the first 3 files.

"\.c$" means "'c' preceded by a period and followed by the end of the line" - all the files except the last one would be gone.

Here's a script that would do it all in one shot, including showing a list of files to be deleted:

See attached misc/tag/rmx.bash.txt

[Dan] Which works pretty well up to some limit, at which things break down and exit due to $skip being too long.

For a less interactive script which can remove inordinate numbers of files, something containing:

ls -AF1 | grep -v /$ | grep -v $1 | xargs rm

allows "xargs" to collect as many files as it can on a command line, and invoke "rm" repeatedly.

It would be prudent to try the thing out in a directory containing only expendable files with names similar to the intended victims/saved.

[Ben] Possibly a good idea for some systems. I've just tried it on a directory with 1,000 files in it (created just for the purpose) and deleted 990 of them in one shot, then recreated them and deleted only 9 of them. Everything worked fine, but testing is indeed a prudent thing to do.

[Dan] Or with some typists. I've more than once had to resort to backups due to a slip of the fingers (the brain?) with an "rm" expression.

[Ben] <*snort*> Never happened to me. No sir. Uh-uh. <Anxious glance to make sure the weekly backup disk is where it should be>

I just put in that "to be deleted" display for, umm, practice. Yeah.

<LOL> Good point, Dan.

Got that sinking feeling that often follows an overzealous rm? Our system doctor has a prescription.

by Mark Komarinski

There was recently a bit of traffic on the Usenet newsgroups about the need for (or lack of) an undelete command for Linux. If you were to type rm * tmp instead of rm *tmp and such a command were available, you could quickly recover your files.

The main problem with this idea from a filesystem standpoint involves the differences between the way DOS handles its filesystems and the way Linux handles its filesystems.

Let's look at how DOS handles its filesystems. When DOS writes a file to a hard drive (or a floppy drive) it begins by finding the first block that is marked "free" in the File Allocation Table (FAT). Data is written to that block, the next free block is searched for and written to, and so on until the file has been completely written. The problem with this approach is that the file can be in blocks that are scattered all over the drive. This scattering is known as fragmentation and can seriously degrade your filesystem's performance, because now the hard drive has to look all over the place for file fragments. When files are deleted, the space is marked "free" in the FAT and the blocks can be used by another file.

The good thing about this is that, if you delete a file that is out near the end of your drive, the data in those blocks may not be overwritten for months. In this case, it is likely that you will be able to get your data back for a reasonable amount of time afterwards.

Linux (actually, the second extended filesystem that is almost universally used under Linux) is slightly smarter in its approach to fragmentation. It uses several techniques to reduce fragmentation, involving segmenting the filesystem into independently-managed groups, temporarily reserving large chunks of contiguous space for files, and starting the search for new blocks to be added to a file from the current end of the file, rather than from the start of the filesystem. This greatly decreases fragmentation and makes file access much faster. The only case in which significant fragmentation occurs is when large files are written to an almost-full filesystem, because the filesystem is probably left with lots of free spaces too small to tuck files into nicely.

Because of this policy for finding empty blocks for files, when a file is deleted, the (probably large) contiguous space it occupied becomes a likely place for new files to be written. Also, because Linux is a multi-user, multitasking operating system, there is often more file-creating activity going on than under DOS, which means that those empty spaces where files used to be are more likely to be used for new files. "Undeleteability" has been traded off for a very fast filesystem that normally never needs to be defragmented.

The easiest answer to the problem is to put something in the filesystem that says a file was just deleted, but there are four problems with this approach:

  1. You would need to write a new filesystem or modify a current one (i.e. hack the kernel).
  2. How long should a file be marked "deleted"?
  3. What happens when a hard drive is filled with files that are "deleted"?
  4. What kind of performance loss and fragmentation will occur when files have to be written around "deleted" space?

Each of these questions can be answered and worked around. If you want to do it, go right ahead and try--the ext2 filesystem has space reserved to help you. But I have some solutions that require zero lines of C source code.

I have two similar solutions, and your job as a system administrator is to determine which method is best for you. The first method is a user-by-user no-root-needed approach, and the other is a system-wide approach implemented by root for all (or almost all) users.

The user-by-user approach can be done by anyone with shell access and it doesn't require root privileges, only a few changes to your .profile and .login or .bashrc files and a bit of drive space. The idea is that you alias the rm command to move the files to another directory. Then, when you log in the next time, those files that were moved are purged from the filesystem using the real /bin/rm command. Because the files are not actually deleted by the user, they are accessible until the next login. If you're using the bash shell, add this to your .bashrc file:

alias waste='/bin/rm'
# an alias cannot reference $1, so a function is needed to move
# the arguments into the junk directory
rm () { mv "$@" ~/.rm/ ; }

and in your .profile add:

if [ -x ~/.rm ]; then
   /bin/rm -r ~/.rm
   mkdir ~/.rm
   chmod og-r ~/.rm
else
   mkdir ~/.rm
   chmod og-r ~/.rm
fi




The second method is similar to the user-by-user method, but everything is done in /etc/profile and cron entries. The /etc/profile entries do almost the same job as above, and the cron entry removes all the old files every night. The other big change is that deleted files are stored in /tmp before they are removed, so this will not create a problem for users with quotas on their home directories.

The cron daemon (or crond) is a program set up to execute commands at a specified time. These are usually frequently-repeated tasks, such as doing nightly backups or dialing into a SLIP server to get mail every half-hour. Adding an entry requires a bit of work. This is because the user has a crontab file associated with him which lists tasks that the crond program has to perform. To get a list of what crond already knows about, use the crontab -l command, for "list the current cron tasks". To set new cron tasks, you have to use the crontab <file command for "read in cron assignments from this file". As you can see, the best way to add a new cron task is to take the list from crontab -l, edit it to suit your needs, and use crontab <file to submit the modified list. It will look something like this:

~# crontab -l > cron.fil
~# vi cron.fil
To add the necessary cron entry, just type the commands above as root and go to the end of the cron.fil file. Add the following lines:
# Automatically remove files from the
# /tmp/.rm directory that haven't been
# accessed in the last week.
0 0 * * * find /tmp/.rm -atime +7 -exec /bin/rm {} \;
Then type:
~# crontab cron.fil

Of course, you can change -atime +7 to -atime +1 if you want to delete files every day; it depends on how much space you have and how much room you want to give your users.

Now, in your /etc/profile (as root):

if [ -n "$BASH" ] ;
then # we must be running bash
   alias waste='/bin/rm'
   # an alias cannot reference $1, so a function is used to move
   # the arguments into the junk directory
   rm () { mv "$@" /tmp/.rm/"$LOGIN"/ ; }
   undelete () {
     if [ -e /tmp/.rm/"$LOGIN"/"$1" ] ; then
       cp /tmp/.rm/"$LOGIN"/"$1" .
     else
       echo "$1 not available"
     fi
   }
   if [ ! -e /tmp/.rm/"$LOGIN" ] ; then
     mkdir /tmp/.rm/"$LOGIN"
     chmod og-rwx /tmp/.rm/"$LOGIN"
   fi
fi

Once you restart cron and your users log in, your new `undelete' is ready to go for all users running bash. You can construct a similar mechanism for users using csh, tcsh, ksh, zsh, pdksh, or whatever other shells you use. Alternately, if all your users have /usr/bin in their paths ahead of /bin, you can make a shell script called /usr/bin/rm which does essentially the same thing as the alias above, and create an undelete shell script as well. The advantage of doing this is that it is easier to do complete error checking, which is not done here.



These solutions will work for simple use. More demanding users may want a more complete solution, and there are many ways to implement these. If you implement a very elegant solution, consider packaging it for general use, and send me an e-mail message about it so that I can tell everyone about it here.

Unix is a Four Letter Word -- Strange names

There may come a time that you will discover that you have somehow created a file with a strange name that cannot be removed through conventional means. This section contains some unconventional approaches that may aid in removing such files.

Files that begin with a dash can be removed by typing

 rm ./-filename
A couple other ways that may work are
 rm -- -filename
 rm - -filename
Now let's suppose that we have an even nastier filename. One that I ran across this summer was a file with no filename. The solution I used to remove it was to type
 rm -i *
This executes the rm command in interactive mode. I then answered "yes" to the query to remove the nameless file and "no" to all the other queries about the rest of the files.

Another method I could have used would be to obtain the inode number of the nameless file with

ls -i

and then type
 find . -inum number -ok rm '{}' \;
where number is the inode number.

The -ok flag causes a confirmation prompt to be displayed. If you would rather live on the edge and not be bothered with the prompting, you can use -exec in place of -ok.

Suppose you didn't want to remove the file with the funny name, but wanted to rename it so that you could access it more readily. This can be accomplished by following the previous procedure with the following modification to the find command:

 find . -inum number -ok mv '{}' new_filename \;

[Oct 01 2004] Meddling in the Affairs of Wizards

Most people who have spent any time on any version of Unix know that "rm -rf /" is about the worst mistake you can make on any given machine. (For novices, "/" is the root directory, and -r means recursive, so rm keeps deleting files until the entire file system is gone, or at least until something like libc is gone after which the system becomes, as we often joke, a warm brick.)

Well a couple of years ago one Friday afternoon a bunch of us were exchanging horror stories on this subject, when Bryan asked "why don't we fix rm?" So I did.

The code changes were, no surprise, trivial. The hardest part of the whole thing was that one reviewer wanted /usr/xpg4/bin/rm to be changed as well, and that required a visit to our standards guru. He thought the change made sense, but might technically violate the spec, which only allowed rm to treat "." and ".." as special cases for which it could immediately exit with an error. So I submitted a defect report to the appropriate standards committee, thinking it would be a slam dunk.

Well, some of these standards committee members either like making convoluted arguments or just don't see the world the same way I do, as more than one person suggested that the spec was just fine and that "/" was not worthy of special consideration. We tried all sorts of common sense arguments, to no avail. In the end, we had to beat them at their own game, by pointing out that if one attempts to remove "/" recursively, one will ultimately attempt to remove ".." and ".", and that all we are doing is allowing rm to pre-determine this heuristically. Amazingly, they bought that!

Anyway, in the end, we got the spec modified, and Solaris 10 has (since build 36) a version of /usr/bin/rm (/bin is a sym-link to /usr/bin on Solaris) and /usr/xpg4/bin/rm which behaves thus:

[28] /bin/rm -rf /
rm of / is not allowed

Recommended Links



rm (Unix) - Wikipedia, the free encyclopedia


UNIX man pages: rm

Remove (unlink) the FILE(s).

-d, --directory
       unlink FILE, even if it is a non-empty directory (super-user
       only; this works only if your system supports `unlink' for
       nonempty directories)

-f, --force
ignore nonexistent files, never prompt

-i, --interactive
prompt before any removal

--no-preserve-root
       do not treat `/' specially (the default)

--preserve-root
       fail to operate recursively on `/'

-r, -R, --recursive
remove directories and their contents recursively

-v, --verbose
explain what is being done

--help display this help and exit

--version
       output version information and exit

By default, rm does not remove directories. Use the --recursive (-r or
-R) option to remove each listed directory, too, along with all of its
contents.

To remove a file whose name starts with a `-', for example `-foo', use
one of these commands:

rm -- -foo

rm ./-foo

Note that if you use rm to remove a file, it is usually possible to
recover the contents of that file. If you want more assurance that the
contents are truly unrecoverable, consider using shred.
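A minimal example of shred from GNU coreutils (the pass count is illustrative). Note that on journaling or copy-on-write filesystems the overwrite may not reach every old copy of the data:

```shell
# Create a throwaway file, overwrite its blocks, then unlink it
f=$(mktemp)
echo "sensitive data" > "$f"
shred -u -n 3 "$f"      # 3 overwrite passes; -u removes the file afterward
```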


FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.





Copyright © 1996-2016 by Dr. Nikolai Bezroukov. This site was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.


Copyright for the original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.



Last modified: September 12, 2017