
Managing files in RHEL


Extracted from Professor Nikolai Bezroukov's unpublished lecture notes.

Copyright 2010-2018, Dr. Nikolai Bezroukov. This is a fragment of the copyrighted unpublished work. All rights reserved.

Partially derived from:


As an administrator, you need to be able to perform common file management tasks. Among them:

RHEL is a huge OS, far beyond the capability of any human to comprehend in full. Like blind men touching the elephant, you can learn only fragments. The filesystem is vast and deeply nested, with over five thousand directories and close to 50 thousand files in a default installation (a minimal installation has fewer):

[0]test01@ROOT:/usr # locate -S
Database /var/lib/mlocate/mlocate.db:
        5,690 directories
        47,762 files
        2,114,945 bytes in file names
        979,464 bytes used to store database

The number of utilities installed by default is in the thousands. There are over 900 files in /usr/bin and over 500 files in /usr/sbin, which means over a thousand utilities, which often duplicate each other's functionality:

[0]test01@ROOT:/usr # ls /usr/bin | wc -l
[0]test01@ROOT:/usr # ls /usr/sbin | wc -l

For a system administrator the main danger is destroying the system by executing some destructive command with incorrect parameters, like rm -rf /etc. So, at a high level, managing files in RHEL is mainly about how to avoid destroying valuable files and how to recover them if a mishap happened ;-) An important term related to those activities is SNAFU (SNAFU - Wikipedia)
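One cheap habit that prevents many such SNAFUs is to preview what a glob expands to before handing the same pattern to rm. A minimal sketch (file names are invented for the demo):

```shell
#!/bin/bash
# Preview a glob with echo or ls first, then reuse the exact same pattern with rm.
set -e
cd "$(mktemp -d)"                # throwaway sandbox directory
touch app.log app.log.1 app.conf

# 1. Dry run: see exactly which files the pattern matches
echo app.log*                    # prints: app.log app.log.1

# 2. Only after the preview looks right, run rm with the same pattern
rm app.log*

ls                               # only app.conf is left
```

The interactive flag rm -i (or alias rm='rm -i') is another common safety net, though it trains bad habits if you start answering "y" automatically.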

SNAFU is an acronym that is widely used to stand for the sarcastic expression Situation Normal: All Fucked Up. It is a well-known example of military acronym slang, however the original military acronym stood for "Status Nominal: All Fucked Up." It is sometimes bowdlerized to "all fouled up" or similar.[1] It means that the situation is bad, but that this is a normal state of affairs. The acronym is believed to have originated in the United States Marine Corps during World War II.

In modern usage, SNAFU is sometimes used as an interjection. SNAFU also sometimes refers to a bad situation, mistake, or cause of trouble. It is more commonly used in modern vernacular to describe running into an error or problem that is large and unexpected. For example, in 2005, The New York Times published an article titled "Hospital Staff Cutback Blamed for Test Result Snafu".[2]

The systemic danger that constantly faces system administrators is accidentally wiping out a system directory or important user files while moving data around or cleaning a filesystem to get more space. There is a lot of sysadmin folklore about those blunders: Google "sysadmin horror stories" or "sysadmin blunders". For example, one Reddit thread contains over a dozen interesting comments from sysadmins around the world explaining their biggest mistakes and what those mistakes taught them. This is a great source of information, listing human mistakes, some of which were "inspired" by Linux warts. You can read about all kinds of sysadmin mistakes: deleting the wrong folders or files, applying wrong permissions, performing an operation on the wrong server or in the wrong directory, destroying a server by mishandling a RAID array with a failed disk; the list goes on and on.

So "in-and-out" knowledge of the rm and chmod commands, as well as of the -exec option of find and similar commands, is a must.

One little-known way to manage files that helps to avoid SNAFUs is to use Midnight Commander, which is included on the RHEL ISO and is usually installed on RHEL systems in Europe. You can combine the usage of screen with the usage of Midnight Commander. MC has a good editor which is often used as a separate program, as it can be invoked as mcedit. You can also use WinSCP if you access Linux from Windows using Putty. WinSCP can be used as a Putty launcher (it can launch a session from an arbitrary directory) and as a powerful file manager. It also has a Windows-style editor which is far superior to nano.

You also should be aware of the idiosyncrasies of Unix, including such "classic" pitfalls as:

Filesystem hierarchy

Level-two directories in Linux are more or less standardized. They include such classic directory names as /etc, /usr, /var, and /root.

Formally, the layout of the Linux file system is defined in the Filesystem Hierarchy Standard (FHS), and this file system hierarchy is described in man 7 hier. That does not mean that Red Hat cares. This layout changes slightly from one version of RHEL to another. It changed a lot from RHEL 6 to 7, with /bin, /lib, /lib64, and /sbin converted to symbolic links that now point to the corresponding directories in the /usr hierarchy:

lrwxrwxrwx.   1 root root    7 Sep 25  2017 bin -> usr/bin
dr-xr-xr-x.   4 root root 4096 Sep 25  2017 boot
drwxr-xr-x.  18 root root 3120 Sep 24 22:48 dev
drwxr-xr-x.  87 root root 8192 Sep 26 03:49 etc
drwxr-xr-x.   3 root root   22 Sep 24 22:48 home
lrwxrwxrwx.   1 root root    7 Sep 25  2017 lib -> usr/lib
lrwxrwxrwx.   1 root root    9 Sep 25  2017 lib64 -> usr/lib64
drwxr-xr-x.   2 root root    6 Nov  5  2016 media
drwxr-xr-x.   4 root root   35 Sep 24 22:48 mnt
drwxr-xr-x.   3 root root   16 Sep 25  2017 opt
dr-xr-xr-x. 124 root root    0 Sep 24 22:47 proc
dr-xr-x---.   5 root root  201 Sep 25 05:51 root
drwxr-xr-x.  26 root root  860 Sep 26 03:49 run
lrwxrwxrwx.   1 root root    8 Sep 25  2017 sbin -> usr/sbin
drwxr-xr-x.   2 root root    6 Nov  5  2016 srv
dr-xr-xr-x.  13 root root    0 Sep 24 22:48 sys
drwxrwxrwt.   8 root root  108 Sep 28 03:40 tmp
drwxr-xr-x.  13 root root  155 Sep 25  2017 usr
drwxr-xr-x.  20 root root  282 Sep 24 22:48 var

Understanding Mounts

While Linux presents to users a single namespace originating from the root, this namespace can consist of multiple partitions and different filesystems. All files in Unix belong to a single namespace organized as a hierarchy, with the root directory (/) as its starting point. The concept of mounting is about including an additional filesystem into the root filesystem or, by extension, into an already mounted filesystem (although hierarchical mounting is not a good idea). This hierarchy may be distributed over different devices and even computer systems that are mounted into the root directory.

Mounting devices makes it possible to organize the Linux file system in a flexible way. There are several disadvantages to storing all files in just one file system:

It is common to build Linux file systems from more than one "real" disk partition or LVM logical volume, and to mount these partitions onto level-two directories. Mounting a partition onto a certain directory allows using specific mount options that can restrict access. Two directories are commonly mounted on dedicated devices:

It is less common to use separate filesystems for

It is up to the discretion of the administrator to decide which directories get their own partitions. Much depends on the set of applications the server runs and the required level of security of the server.

The mount command gives an overview of all mounted devices, but in RHEL 7 its output contains so much junk that without filtering it is almost useless. To get this information, the /proc/mounts file is read. The latter is a file in the pseudo filesystem mounted on the directory /proc. The content of this filesystem is mapped to memory. /proc/mounts is the area where the kernel keeps information about all current mounts, mapped into a file.

Usually sysadmins use the command df to view mounts on the system. To make the mount command more useful, you can filter its output to display only /dev/ devices, or a specific device about which you want to receive more complete information than is provided by df. For example:

nn@test01:/ # mount | egrep ^/dev/
/dev/sda2 on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
/dev/sdc1 on /mnt/resource type ext4 (rw,relatime,seclabel,data=ordered)

The df -Th command shows available disk space on all mounted devices. It is a more convenient command for getting an overview of current system mounts. The -h option summarizes the output of the command in a human-readable way, and the -T option shows which file system type is used on the different mounts.

nn@test01:/ # df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda2      xfs        30G  1.3G   29G   5% /
devtmpfs       devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs          tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs          tmpfs     3.9G   17M  3.9G   1% /run
tmpfs          tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda1      xfs       497M   62M  436M  13% /boot
/dev/sdc1      ext4       16G   45M   15G   1% /mnt/resource
tmpfs          tmpfs     797M     0  797M   0% /run/user/1000

The output of df is shown in seven columns:

Note that by default the df command reports sizes in kibibytes, which typically is inconvenient. The option -m will display sizes in megabytes, and -h will display them in a human-readable format using KiB, MiB, GiB, TiB, or PiB units.

The findmnt command shows mounts and the relations that exist between the different mounts. Because the output of the mount command is a bit overwhelming, the findmnt command is usually used instead.

nn@test01:/ # findmnt
TARGET                                SOURCE      FSTYPE      OPTIONS
/                                     /dev/sda2   xfs         rw,relatime,seclabel,attr2,inode64,noquota
--/sys                                sysfs       sysfs       rw,nosuid,nodev,noexec,relatime,seclabel
| --/sys/kernel/security              securityfs  securityfs  rw,nosuid,nodev,noexec,relatime
| --/sys/fs/cgroup                    tmpfs       tmpfs       ro,nosuid,nodev,noexec,seclabel,mode=755
| | --/sys/fs/cgroup/systemd          cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-c
| | --/sys/fs/cgroup/net_cls,net_prio cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,net_prio,net_cls
| | --/sys/fs/cgroup/hugetlb          cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,hugetlb
| | --/sys/fs/cgroup/cpu,cpuacct      cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,cpuacct,cpu
| | --/sys/fs/cgroup/blkio            cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,blkio
| | --/sys/fs/cgroup/pids             cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,pids
| | --/sys/fs/cgroup/memory           cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,memory
| | --/sys/fs/cgroup/freezer          cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,freezer
| | --/sys/fs/cgroup/perf_event       cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,perf_event
| | --/sys/fs/cgroup/devices          cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,devices
| | --/sys/fs/cgroup/cpuset           cgroup      cgroup      rw,nosuid,nodev,noexec,relatime,cpuset
| --/sys/fs/pstore                    pstore      pstore      rw,nosuid,nodev,noexec,relatime
| --/sys/fs/selinux                   selinuxfs   selinuxfs   rw,relatime
| --/sys/kernel/debug                 debugfs     debugfs     rw,relatime
| --/sys/kernel/config                configfs    configfs    rw,relatime
--/proc                               proc        proc        rw,nosuid,nodev,noexec,relatime
| --/proc/sys/fs/binfmt_misc          systemd-1   autofs      rw,relatime,fd=31,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
|   --/proc/sys/fs/binfmt_misc        binfmt_misc binfmt_misc rw,relatime
--/dev                                devtmpfs    devtmpfs    rw,nosuid,seclabel,size=4069076k,nr_inodes=1017269,mode=755
| --/dev/shm                          tmpfs       tmpfs       rw,nosuid,nodev,seclabel
| --/dev/pts                          devpts      devpts      rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000
| --/dev/hugepages                    hugetlbfs   hugetlbfs   rw,relatime,seclabel
| --/dev/mqueue                       mqueue      mqueue      rw,relatime,seclabel
--/run                                tmpfs       tmpfs       rw,nosuid,nodev,seclabel,mode=755
| --/run/user/1000                    tmpfs       tmpfs       rw,nosuid,nodev,relatime,seclabel,size=815932k,mode=700,uid=1000,gid=1000
--/boot                               /dev/sda1   xfs         rw,relatime,seclabel,attr2,inode64,noquota
--/mnt/resource                       /dev/sdc1   ext4        rw,relatime,seclabel,data=ordered

The concepts of a "file" and a "directory"

One of the most revolutionary ideas of Unix was "everything is a file" and the creation of a hierarchical filesystem, which differed from the filesystems created by IBM for OS/360. For example, directories are special types of files that serve as containers for other files.

When you are at the command prompt, you are located in some directory called the current directory. You can display it with the command pwd. At the command-line interface, the current directory may be either the top-level root (/) directory or a subdirectory. BTW, pwd is an abbreviation of "print working directory".

There is also a shell shorthand, the tilde (~), that is substituted with the path to your home directory. For example, the home directory of the root user is /root.

For example

[root@test01 ~]# ls *.cfg
anaconda-ks.cfg original-ks.cfg

The file anaconda-ks.cfg is an important file in the /root directory. It is created by the Red Hat installer (called Anaconda) and contains configuration information which can be used to redo the installation via Kickstart. In a way, this file can be viewed as a high-level system backup file.

Absolute vs relative path

A path is the sequence of directories in the hierarchical filesystem that you need to traverse to access a file. An absolute path is the sequence from the root directory, and a relative path is the sequence from the current directory. To find out the pathname of the current directory, you can use the command pwd.

In other words, an absolute path describes the complete directory structure in terms of the top-level directory, root (/). A relative path is based on the current directory. Relative paths do not start with a slash.
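A minimal illustration of the difference (the directory names are invented for the demo):

```shell
#!/bin/bash
# Reaching the same directory via a relative and an absolute path.
set -e
base=$(mktemp -d)
mkdir -p "$base/home/joeuser/docs"

cd "$base/home/joeuser"
cd docs                          # relative: resolved against the current directory
pwd                              # .../home/joeuser/docs

cd /                             # from here the same "cd docs" would fail
cd "$base/home/joeuser/docs"     # absolute: works from anywhere
pwd
```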

When you're running an important or potentially destructive command, absolute paths are safer. With a relative path, a command issued from the wrong current directory may lead to unintended consequences. But relative paths also have their pluses. For example, say you're in the top-level root directory, and you have backed up the /home directory using its path relative to the root (/). If you happen to be in the /home directory when restoring that backup, the files for user joeuser, for example, would be restored to the /home/home/joeuser directory, from which you can select which files are actually needed for recovery.

But with destructive commands such as rm, a relative path is more dangerous. Outside of trivial cases, when you delete a file you just created, you are better off specifying the full path: type the pwd command first and copy its output to the command line with the mouse. That also reminds you where you are, and in cases when you are tired or under pressure, that might not be the place you expect. Copying the directory from pwd also gives you a few seconds during which you may realize that you are on the wrong server ;-)

But the difference between absolute and relative paths is not absolute :-). For example, if the /home directory was backed up using the absolute path and the tar utility, it can also be restored using a relative path.

Also, restoring the wrong backup using absolute paths destroys more recent files if they exist in the target directories. With absolute paths you can be sure that files will be restored to the original directories, so, if you are careless, you can overwrite your system directories or important user data with old content ;-)

PATH variable

In Linux, when you specify a command, you do not need to supply its absolute path if it resides in one of the system directories. Linux searches for the command in a set of system (and user) directories which are listed in the variable PATH. For example, when you type the command ls, it is searched for along the PATH and found in both /bin (a symlink) and /usr/bin; which copy runs depends on which of those directories is listed first. In other words, any shell, such as bash, automatically searches through the directories listed in a user's PATH for the command that the user just typed at the command line.

The first directory in which a command is found is the one from which it is executed. To determine the PATH for the current user account, run the echo $PATH command. You should see a series of directories in the output, delimited by colons. By default in RHEL 7, two user directories are included in the user's path. Assuming that the user name is alterego, they would be: /home/alterego/.local/bin:/home/alterego/bin. For example:

[alterego@test01 ~]$ echo $PATH

The default PATH is determined globally by current settings in the /etc/profile file or by scripts in the /etc/profile.d directory. You might notice differences between the PATH as configured for User ID (UID) 0 and all other users. UID 0 corresponds to the root administrative user.

The value of the PATH variable can be modified via the files .bash_profile and .bashrc and can be individualized. The difference between the PATH for a regular user and the one for root has narrowed in RHEL 7 -- now the path for a regular user includes the "sbin" directories /usr/local/sbin and /usr/sbin.

The order of directories in PATH matters because the directories are searched in order. For example, the system-config-keyboard command is available from both the /usr/bin and /usr/sbin directories.
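The first-match-wins rule is easy to verify with a small experiment (the hello command and directory names here are invented for the demo):

```shell
#!/bin/bash
# Two scripts with the same name in two directories; PATH order picks the winner.
set -e
base=$(mktemp -d)
mkdir -p "$base/first" "$base/second"

printf '#!/bin/sh\necho from-first\n'  > "$base/first/hello"
printf '#!/bin/sh\necho from-second\n' > "$base/second/hello"
chmod +x "$base/first/hello" "$base/second/hello"

PATH="$base/first:$base/second:$PATH"
hello                # prints: from-first -- the earlier PATH entry wins

type -a hello        # lists every match, in PATH search order
```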

TIP: it makes sense to customize the PATH variable for each user via ~/.bash_profile or ~/.profile. The most flexible way of doing this is provided by the Environment Modules package.
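A typical ~/.bash_profile fragment might look like this (a sketch; whether to prepend or append personal directories is a local policy decision):

```shell
# Prepend personal bin directories so private wrappers shadow system commands;
# put $PATH first instead if system commands should always win.
export PATH="$HOME/.local/bin:$HOME/bin:$PATH"
```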

Useful cd shortcuts; pushd and dirs as cd alternatives

Directory hierarchies on modern servers are very deep and include six or seven nested levels. In this environment the good old cd command, designed 50 years ago, is a completely inadequate tool. You can somewhat compensate for the deficiencies of cd using aliases. But generally you are better off learning how to use the pushd and dirs commands.

But things are not completely bad, as cd provides several valuable shortcuts you should know and use:

See also the pushd and dirs commands.
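The pushd/popd pair maintains a stack of directories, and dirs displays it; a quick sketch (directory names invented for the demo):

```shell
#!/bin/bash
# pushd saves the current directory on a stack before changing to a new one;
# popd returns to the most recently saved directory. These are bash builtins.
set -e
base=$(mktemp -d)
mkdir -p "$base/etc" "$base/var/log"

cd "$base"
pushd etc > /dev/null            # remember $base, go to etc
pushd "$base/var/log" > /dev/null
dirs                             # prints the stack, current directory first

popd > /dev/null                 # back to $base/etc
pwd
popd > /dev/null                 # back to $base
pwd
```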

I would recommend mc or WinSCP as better navigational tools. Both provide a list of favorite locations and "fast traversing" using just the first letters of directories (or files). Both typically are allowed in corporate environments.

Basic regular expressions (DOS-style regular expressions, or wildcards)

That was one of the biggest innovations advanced by Unix. Two other, more elaborate standards followed: extended regular expressions (used in such utilities as grep and awk) and Perl regular expressions, implemented in some (unfortunately only some) Linux utilities such as GNU grep. Perl regular expressions are the most powerful and internally consistent of the three.

POSIX, or "Portable Operating System Interface for uniX", is a collection of standards that define some of the functionality that a (UNIX) operating system should support. One of these standards defines two flavors of regular expressions: basic (BRE) and extended (ERE). The command grep implements BRE by default (and ERE with the -E option); awk implements ERE; sed implements BRE (GNU sed adds ERE with -E). The Bash shell itself uses a different, simpler notation: globs, or wildcards.

The wildcard notation used in the shell is among the oldest matching notations still in use today: pattern matching first appeared in the editor ed more than 50 years ago, and a simplified form migrated to the shell as wildcards. Shell wildcards use just a half-dozen metacharacters, so they are relatively easy to learn and use (Wildcards):

? (question mark)
this can represent any single character. If you specify something at the command line like "hd?", GNU/Linux will look for hda, hdb, hdc and any other name consisting of "hd" plus one more character.
* (asterisk)
this can represent any number of characters (including zero; in other words, zero or more characters). If you specify "cd*", it would match "cda", "cdrom", "cdrecord" and anything that starts with "cd", including "cd" itself. "m*l" could be mill, mull, ml, and anything that starts with an m and ends with an l.
[ ] (square brackets)
specifies a range. If you specify m[aou]m it will match mam, mum, and mom; if you specify m[a-d]m it will match anything that starts and ends with m and has any character a to d in between. For example, [abc] matches "a", "b", or "c", and [a-z] specifies a range which matches any lowercase letter from "a" to "z". These forms can be mixed: [abcx-z] matches "a", "b", "c", "x", "y", or "z", as does [a-cx-z].
{ } (curly brackets)
terms are separated by commas and each term must be the name of something or a wildcard. The construct expands to everything that matches either the wildcard(s) or the exact name(s) (an "or" relationship, one or the other). Strictly speaking, this is brace expansion rather than matching: the shell generates each alternative whether or not a corresponding file exists.

For example, this would be valid:

cp {*.doc,*.pdf} ~

This will copy anything ending in .doc or .pdf to the user's home directory. Note that spaces are not allowed after the commas (or anywhere else inside the braces).

[! ] (negated square brackets)
This construct is similar to the [ ] construct, except rather than matching the characters listed inside the brackets, it matches any character, as long as it is not listed between the [ and ]. That means that a complementary class of characters is constructed and used. For example, rm myfile[!9] will remove myfile1, myfile2 and so on, but won't remove myfile9.
\ (backslash)
is used as an "escape" character, i.e. to protect a subsequent special character. Thus, "\\" searches for a backslash. Note you may need to use quotation marks and backslash(es).

For more information on these wildcard patterns, refer to the manual page by typing:

man 7 glob
Long ago, in UNIX V6, there was a program /etc/glob that would expand wildcard patterns. Soon afterward this became a shell built-in.

These days there is also a library routine glob(3) that will perform this function for a user program.

The rules are as follows (POSIX.2, 3.13).

Wildcard matching

A string is a wildcard pattern if it contains one of the characters '?', '*' or '['. Globbing is the operation that expands a wildcard pattern into the list of pathnames matching the pattern. Matching is defined by:

A '?' (not between brackets) matches any single character.

A '*' (not between brackets) matches any string, including the empty string.

Character classes

An expression "[...]" where the first character after the leading '[' is not an '!' matches a single character, namely any of the characters enclosed by the brackets. The string enclosed by the brackets cannot be empty; therefore ']' can be allowed between the brackets, provided that it is the first character. (Thus, "[][!]" matches the three characters '[', ']' and '!'.)


There is one special convention: two characters separated by '-' denote a range. (Thus, "[A-Fa-f0-9]" is equivalent to "[ABCDEFabcdef0123456789]".) One may include '-' in its literal meaning by making it the first or last character between the brackets. (Thus, "[]-]" matches just the two characters ']' and '-', and "[--0]" matches the three characters '-', '.', '0', since '/' cannot be matched.)


An expression "[!...]" matches a single character, namely any character that is not matched by the expression obtained by removing the first '!' from it. (Thus, "[!]a-]" matches any single character except ']', 'a' and '-'.)

One can remove the special meaning of '?', '*' and '[' by preceding them with a backslash, or, in case this is part of a shell command line, by enclosing them in quotes. Between brackets these characters stand for themselves. Thus, "[[?*\]" matches the four characters '[', '?', '*' and '\'.


Globbing is applied to each of the components of a pathname separately. A '/' in a pathname cannot be matched by a '?' or '*' wildcard, or by a range like "[.-0]". A range cannot contain an explicit '/' character; this would lead to a syntax error.

If a filename starts with a '.', this character must be matched explicitly. (Thus, rm * will not remove .profile, and tar c * will not archive all your files; tar c . is better.)

Empty lists

The nice and simple rule given above: "expand a wildcard pattern into the list of matching pathnames" was the original UNIX definition. It allowed one to have patterns that expand into an empty list, as in
xv -wait 0 *.gif *.jpg
where perhaps no *.gif files are present (and this is not an error). However, POSIX requires that a wildcard pattern is left unchanged when it is syntactically incorrect, or the list of matching pathnames is empty. With bash one can force the classical behavior using this command:

shopt -s nullglob

(Similar problems occur elsewhere. E.g., where old scripts have

rm `find . -name "*~"`
new scripts require
rm -f nosuchfile `find . -name "*~"`
to avoid error messages from rm called with an empty argument list.)
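The difference is easy to see in an empty directory:

```shell
#!/bin/bash
# POSIX default: an unmatched pattern is passed through literally.
# With bash's nullglob option, an unmatched pattern expands to nothing.
cd "$(mktemp -d)"                # guaranteed to contain no *.gif files

echo *.gif                       # prints the literal string: *.gif

shopt -s nullglob
echo *.gif                       # prints an empty line

shopt -u nullglob                # restore the default behavior
```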

Regular expressions

Note that wildcard patterns are not regular expressions, although they are a bit similar. First of all, they match filenames, rather than text, and secondly, the conventions are not the same: for example, in a regular expression '*' means zero or more copies of the preceding thing.

Now that regular expressions have bracket expressions where the negation is indicated by a '^', POSIX has declared the effect of a wildcard pattern "[^...]" to be undefined.

Character classes and internationalization

Of course ranges were originally meant to be ASCII ranges, so that "[ -%]" stands for "[ !"#$%]" and "[a-z]" stands for "any lowercase letter". Some UNIX implementations generalized this so that a range X-Y stands for the set of characters with code between the codes for X and for Y. However, this requires the user to know the character coding in use on the local system, and moreover, is not convenient if the collating sequence for the local alphabet differs from the ordering of the character codes. Therefore, POSIX extended the bracket notation greatly, both for wildcard patterns and for regular expressions. In the above we saw three types of items that can occur in a bracket expression: namely (i) the negation, (ii) explicit single characters, and (iii) ranges. POSIX specifies ranges in an internationally more useful way and adds three more types:

(iii) Ranges X-Y comprise all characters that fall between X and Y (inclusive) in the current collating sequence as defined by the LC_COLLATE category in the current locale.

(iv) Named character classes, like

[:alnum:]  [:alpha:]  [:blank:]  [:cntrl:]
[:digit:]  [:graph:]  [:lower:]  [:print:]
[:punct:]  [:space:]  [:upper:]  [:xdigit:]
so that one can say "[[:lower:]]" instead of "[a-z]", and have things work in Denmark, too, where there are three letters past 'z' in the alphabet. These character classes are defined by the LC_CTYPE category in the current locale.

(v) Collating symbols, like "[.ch.]" or "[.a-acute.]", where the string between "[." and ".]" is a collating element defined for the current locale. Note that this may be a multicharacter element.

(vi) Equivalence class expressions, like "[=a=]", where the string between "[=" and "=]" is any collating element from its equivalence class, as defined for the current locale. For example, "[[=a=]]" might be equivalent to "[aáàäâ]", that is, to "[a[.a-acute.][.a-grave.][.a-umlaut.][.a-circumflex.]]".

Wildcards ("*", ".", and "?") provide a convenient way to specify sets of file names, similar to the way in which DOS wildcard characters are used; for example, all files with a particular extension, like *.conf. In addition, a DOS-style wildcard expression lets you easily specify files without extensions.

DOS wildcard expression   Equivalent extended regular expression   Description
*                         .*                                       Zero or more of any character
?                         [^\.]                                    Any one character except the period (.)
.                         \.                                       Literal period character
*.                        [^\.]+\.?                                Does not contain a period, but can end with one

Specifying the range of characters

Let's say we wanted to find the presence of a digit between 1 and 8. We could use a range like [12345678], but there is a shortcut to make things a bit easier: [1-8]. You can use several such ranges in a row, for example [12][90][9012][0-9].
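The same bracket syntax works in shell globbing; for example (file names invented for the demo):

```shell
#!/bin/bash
# Ranges and explicit character lists inside [ ] in shell globbing.
set -e
cd "$(mktemp -d)"
touch mam mbm mom

echo m[a-d]m        # prints: mam mbm   (mom is outside the a-d range)
echo m[aou]m        # prints: mam mom   (explicit list of middle characters)
```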

Negating characters in range

Sometimes we may want to find the presence of a character which is not in a given range of characters. In a regular expression we can do this by placing a caret ( ^ ) at the beginning of the range.

The following regular expression searches for the character t, followed by a character which is not e or o, followed by the character d:

t[^eo]d

Escaping Metacharacters

Sometimes we may actually want to search for one of the characters which is a metacharacter. To do this we use a feature called escaping. By placing the backslash ( \ ) in front of a metacharacter we can remove its special meaning. (In some regular expression dialects, escaping is also used to give a special meaning to characters which normally don't have one, but more on that in the intermediate section of this tutorial.) Let's say we wanted to find instances of the word 'this' which are the last word of a sentence: the escaped pattern this\. matches exactly that, while the unescaped this. would also match 'this' followed by any other character.
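With grep, the effect of escaping the period looks like this (the sample file is invented for the demo):

```shell
#!/bin/bash
# "." in a regex matches any character; "\." matches only a literal period.
set -e
cd "$(mktemp -d)"
printf 'do this.\ndo this now\n' > sample.txt

grep 'this.'  sample.txt     # matches both lines: "." also matches the space
grep 'this\.' sample.txt     # matches only the line ending in a real period
```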

Directory Creation and Deletion

The mkdir and rmdir commands are used to create and delete directories. The mkdir command is usually used with the option -p, which creates the full path to the new directory if some parent directories do not yet exist.

The ways these commands are used depend on the already-discussed concepts of absolute and relative paths. For example, the following command creates the Test subdirectory in the current directory (if you're currently in the /home/joeuser directory):

mkdir Test

the full path would be /home/joeuser/Test. The equivalent command which uses the absolute path, and as such is a safer option, is:

mkdir -p /home/joeuser/Test

Alternatively, the following command creates the Test directory in joeuser's home directory if you are logged in as joeuser (not root):

mkdir -p ~/Test

If you are root, it will create the directory /root/Test, as /root is the home directory of the user root.

You can use multiple arguments to create a series of directories:

mkdir -p ~/Test ~/Baseline ~/Backup

Conversely, the rmdir command deletes a directory only if it is empty. If you're cleaning up after the previous mkdir commands, the -p switch is useful there as well: it deletes the named directory and then its parent directories, as long as all of them are otherwise empty.

If desired, you can use the following command to create a series of directories by specifying multiple arguments.

NOTE: rmdir command deletes a directory only if it’s empty. If you’re cleaning up after the previous mkdir commands, the -p switch can be used with this command as well.
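The -p semantics of both commands can be seen in a quick sketch (/tmp/demo is an arbitrary scratch location chosen for this example):

```shell
# Create a three-level path in one command; intermediate directories
# are created as needed.
cd /tmp
mkdir -p demo/projects/2018

# Plain rmdir removes only the innermost (empty) directory:
rmdir demo/projects/2018

# Recreate the chain, then remove all three levels at once: -p removes
# each parent as well, as long as it becomes empty.
mkdir -p demo/projects/2018
rmdir -p demo/projects/2018

ls -d /tmp/demo 2>/dev/null || echo "chain removed"   # prints: chain removed
```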

Creating files using touch

The simplest way to create a new empty file is with the touch command. For example, the touch mybackup command creates an empty file named mybackup in the current directory.

The touch command is also used to change the time of the last modification of a file. The command is called touch because, when it's used on an existing file, it changes the modification time even though it makes no changes to the file.

Note the timestamps listed in the output of ls -l before and after running touch, and compare them with the current date and time returned by date: after you run the touch command, the timestamp of the file (for example /etc/passwd) is updated to the current date and time.

In scripts, touch is often combined with rm to replace an existing file with a new, empty one.

NOTE: Appending output with >> does not need an existing file, eliminating the need to remember whether a file exists.
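Both behaviors are easy to verify (the file names below are arbitrary):

```shell
cd /tmp
rm -f empty_file.txt log.txt    # start from a clean slate

# touch creates an empty file if it does not already exist
touch empty_file.txt

# >> also creates the file if it is missing, so no prior touch is needed
echo "first line"  >> log.txt
echo "second line" >> log.txt

wc -c < empty_file.txt          # prints: 0
wc -l < log.txt                 # prints: 2
```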

Identifying the type of files with file command

The built-in type command identifies whether a command is a shell built-in, and where the command is located if it is an external Linux command.

To test files other than commands, the Linux file command performs a series of tests to determine the type of a file. First, file determines whether the file is a regular file or is empty. If the file is regular, file consults the /usr/share/magic file, checking the first few bytes of the file in an attempt to determine what the file contains. If the file is an ASCII text file, it performs a check of common words to try to determine the language of the text.

$ file empty_file.txt
empty_file.txt: empty
$ file orders.txt
orders.txt: ASCII text

file also works with programs. If a file is a Bash script, file identifies it as a shell script:

$ file myscript.sh
myscript.sh: Bourne-Again shell script text
$ file /usr/bin/test
/usr/bin/test: ELF 32-bit LSB executable, Intel 80386, version 1,
dynamically linked (uses shared libs), stripped

For script programming, file's -b (brief) switch hides the name of the file and returns only the assessment of the file.

$ file -b orders.txt
ASCII text

Other useful switches include -f (file) to read filenames from a specific file. The -i switch returns the description as MIME type suitable for Web programming. With the -z (compressed) switch, file attempts to determine the type of files stored inside a compressed file. The -L switch follows symbolic links.

$ file -b -i orders.txt
text/plain, ASCII

Managing timestamps, touch command

The touch command changes the timestamps of a file; if the file does not exist, it creates a zero-length file with the specified name and the current timestamp. You can also change the timestamp from the current time to a specified time, or make it identical to the timestamp of another file.

By default, the touch command updates the access and modification times of each file specified as an argument (you can mix files and directories in the list of supplied arguments).


touch [-acm] [-r ref_file | -t time] file...

Important options are as follows:

  1. -a, --time=atime, --time=access, or --time=use Changes the access time only
  2. -c, or --no-create Does not create files that don't exist
  3. -m, --time=mtime, or --time=modify Changes the modification time only
  4. -r file or --reference=file Uses the times from file instead of the current time
  5. -t time Uses the specified time instead of the current time, which allows resetting timestamps. The format is [[CC]YY]MMDDhhmm[.SS]. For example, to reset the time to Jan 10, 2013 14:25 you can use:
    touch -t 201301101425 usr_restore.diff
  6. -d STRING, --date=STRING Parses STRING and uses it instead of the current time. Similar to -t, but allows a more flexible date format (GNU touch only)

If you do not specify a value for the Time variable, the touch command uses the current time. If you specify a file that does not exist, the touch command creates the file unless you specify the -c flag. Creation of zero length files is  probably the most common usage of the command.  Another common use of touch is to create a large number of files quickly for testing scripts that read and process filenames. The touch command may also be used to assist in backup operations by manipulating access or modification times of files.
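For example, bash brace expansion combined with touch creates a batch of empty test files in one command (the directory and file names here are arbitrary):

```shell
# Create a scratch directory and populate it with 100 empty files
# named file001.txt ... file100.txt using brace expansion.
mkdir -p /tmp/touch_test
cd /tmp/touch_test
touch file{001..100}.txt

ls | wc -l      # prints: 100
```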

Few administrators know that the touch command can also be used for selective modification of the access and modification times, using the -a and -m options. If neither -a nor -m is specified, touch updates both the modification and access times.


The time used can be specified not only explicitly (with the -t time option), but also by reference to an existing file (with -r ref_file).

For example:

touch -r usr_restore.lst usr_restore.diff

A user with write access to a file, but who is not the owner of the file or a super-user, can change the modification and access times of that file only to the current time. Attempts to set a specific time with touch  will result in an error. There is also a settime  utility which is equivalent to touch  -c  [date_time] file.

The return code from the touch command is the number of files for which the times could not be successfully modified (including files that did not exist and were not created).

For example:

# touch foo bar blatz

If either foo, bar, or blatz does not exist, touch tries to create the file. touch cannot change files that the current user does not own or for which the user does not have write permissions.

Listing Files and Directories

While working with files and directories, it is useful if you can show the contents of the current directory. For this purpose, you can use the ls command. If used without arguments, ls shows the contents of the current directory. Some common arguments make working with ls easier.

Tip: A hidden file on Linux is a file that has a name that starts with a dot. Try the following: touch .hidden. Next, type ls. You will not see it. Then type ls -a. You’ll see it.

NOTE: When using ls and ls -l, you will see that files are color-coded. The different colors used for different file types make it easier to distinguish between kinds of files. If the colors are unreadable on your terminal, create your own aliases l and ll in your .bash_profile; it takes just a minute to do. You can also "escape from the alias" by typing \ls, which bypasses the colorized alias when the colors are mangled.
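Such aliases can look like this in your .bash_profile (the names l and ll are the ones suggested above; the exact option sets are one reasonable choice):

```shell
# Plain, color-free listings under short names:
alias l='ls -F'     # short listing; appends / to directories, * to executables
alias ll='ls -l'    # long listing

# When the distribution's colorized alias mangles the output, a leading
# backslash bypasses the alias for a single invocation:
\ls /etc > /dev/null && echo "plain ls ran fine"
```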

Copying Files

To organize files on your server, you will often copy files. The cp command helps you do so. The behaviour of the Unix cp differs from the behaviour of the Windows copy: cp does not preserve timestamps by default (although there is now a tendency to use the rsync command instead). When working with GNU cp, it makes sense to create an alias (for example c) that specifies the option --preserve=all (preserve all attributes) and use it instead of plain cp for copying files.

Copying a single file is not difficult:

cp --preserve=all /path/to/file /path/to/destination
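A sketch of the alias idea (the name c is the one suggested above; GNU cp only), plus a quick check that --preserve=all really keeps the timestamp:

```shell
alias c='cp --preserve=all'     # candidate entry for ~/.bashrc

# Demonstrate timestamp preservation with an old, explicit timestamp:
cd /tmp
touch -t 201301101425 cp_src.txt
cp --preserve=all cp_src.txt cp_dst.txt     # keeps the 2013 timestamp
cp cp_src.txt cp_plain.txt                  # plain cp: mtime becomes "now"

stat -c '%y %n' cp_src.txt cp_dst.txt       # both show the 2013-01-10 timestamp
```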

Troubles start when you want to copy whole directories or use wildcards.

To do so, use the option -R, which stands for recursive. (The option -R exists in ls, chmod, chown, and several other important commands as well.) To copy the directory /etc and everything in it to the directory /tmp, you can use the command

cp -Rp  /etc /tmp 

It is better to use the option -a instead of -Rp, though. Option -a is equivalent to the combination (where the d is actually largely redundant):

-dR --preserve=all

Option -d means that symbolic links are copied as links rather than followed, so symlinked directories are not traversed (this is the desired behavior). Option --preserve=all preserves all file attributes:

--preserve[=ATTR_LIST] -- preserve the specified attributes (default: mode,ownership,timestamps), if possible additional attributes: context, links, xattr, all

With this new knowledge we can simplify the command to :

cp -a /etc /tmp 

NOTE: Be very careful if you use the recursive option and try to include a wildcard starting with a dot in the set of files you are copying or moving. Remember the classic .* pitfall: the pattern also matches the parent directory (..).
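You can see the pitfall safely by letting the shell expand the pattern with echo instead of rm (the scratch directory name is arbitrary; the behavior shown is the bash default):

```shell
mkdir -p /tmp/dotglob_demo
cd /tmp/dotglob_demo
touch .hidden visible

# In bash the pattern .* matches "." and ".." as well as .hidden, so a
# recursive command on it would climb into the parent directory:
echo .*
echo *          # matches only the non-hidden file: visible
```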

To copy the file /etc/hosts to the directory /tmp, for instance, use cp -p /etc/hosts /tmp. This results in the file hosts being written to /tmp.

You can define your own alias, for example pcp, to save typing.

With the cp command, you can also copy an entire subdirectory, with its contents and everything beneath it.

While using the cp command, permissions and other properties of the files are to be considered. Without extra options, you risk permissions not being copied. If you want to make sure that you keep the current permissions, use the -a option, which has cp work in archive mode. This option ensures that permissions and all other file properties will be kept while copying. So, to copy an exact state of your home directory and everything within it to the /tmp directory, use cp -a ~ /tmp.

A special case when working with cp is hidden files. By default, hidden files are not matched by the * wildcard, so they are easily missed when copying. There are three ways to copy hidden files as well: copy the directory itself rather than its contents (cp -a srcdir /tmp), use the srcdir/. notation, which includes hidden files (cp -a srcdir/. /tmp), or enable matching of dotfiles in bash with shopt -s dotglob before using a wildcard.

Here is the full list of options from the man page:

Mandatory arguments to long options are mandatory for short options too.
-a, --archive
same as -dR --preserve=all
--backup[=CONTROL]
make a backup of each existing destination file
-b
like --backup but does not accept an argument
--copy-contents
copy contents of special files when recursive
-d
same as --no-dereference --preserve=links
-f, --force
if an existing destination file cannot be opened, remove it and try again (redundant if the -n option is used)
-i, --interactive
prompt before overwrite (overrides a previous -n option)
-H
follow command-line symbolic links in SOURCE
-l, --link
link files instead of copying
-L, --dereference
always follow symbolic links in SOURCE
-n, --no-clobber
do not overwrite an existing file (overrides a previous -i option)
-P, --no-dereference
never follow symbolic links in SOURCE
-p
same as --preserve=mode,ownership,timestamps
--preserve[=ATTR_LIST]
preserve the specified attributes (default: mode,ownership,timestamps), if possible additional attributes: context, links, xattr, all
-c
same as --preserve=context
--no-preserve=ATTR_LIST
don't preserve the specified attributes
--parents
use full source file name under DIRECTORY
-R, -r, --recursive
copy directories recursively
--reflink[=WHEN]
control clone/CoW copies
--remove-destination
remove each existing destination file before attempting to open it (contrast with --force)
--sparse=WHEN
control creation of sparse files
--strip-trailing-slashes
remove any trailing slashes from each SOURCE argument
-s, --symbolic-link
make symbolic links instead of copying
-S, --suffix=SUFFIX
override the usual backup suffix
-t, --target-directory=DIRECTORY
copy all SOURCE arguments into DIRECTORY
-T, --no-target-directory
treat DEST as a normal file
-u, --update
copy only when the SOURCE file is newer than the destination file or when the destination file is missing
-v, --verbose
explain what is being done
-x, --one-file-system
stay on this file system
-Z, --context=CONTEXT
set security context of copy to CONTEXT
--help
display this help and exit
--version
output version information and exit

Moving Files

To move files, you use the mv command, which adds and deletes directory entries as needed. You can also move files to a /tmp/Trash folder (or another folder of your choice) instead of using rm, to imitate Windows behaviour and avoid accidental file loss. To do this, define an alias such as del or erase.

This command removes the file from its current location and puts it in the new location. You can also use it to rename a file (which, in fact, is nothing else than deleting directory entry for the file and creating a new one). Let’s take a look at some examples:

mv myfile /tmp Moves the file myfile from the current directory to /tmp.

mkdir somefiles; mv somefiles /tmp This first creates a directory with the name somefiles and then moves this directory to /tmp. Notice that this also works if the directory contains files.

mv myfile mynewfile Renames the file myfile to a new file with the name mynewfile.
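The Trash-folder idea mentioned above can be sketched as follows (the folder /tmp/Trash and the alias name del are the suggestions from the text; a real setup would also expire old files):

```shell
mkdir -p /tmp/Trash
alias del='mv -t /tmp/Trash'    # GNU mv: -t puts all arguments into the target dir

# "Delete" a file by moving it; it can still be recovered from /tmp/Trash:
touch /tmp/doomed.txt
mv -t /tmp/Trash /tmp/doomed.txt
ls /tmp/Trash                   # prints: doomed.txt
```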

Deleting Files

The most dangerous operation in file administration is file deletion. That means you need to know the rm man page by heart and be able to recall every option even if you are woken up in the middle of the night. The Linux rm command is like a very sharp blade: you can do a lot with it, but you can cut yourself if you are not careful. The problem is that blunders occur very infrequently, but can be completely devastating.

If it happens on important production server it can cost you your job.

With the option -r, rm deletes whole directories specified as arguments. That is where it becomes a really dangerous instrument.

To delete files and directories, you use the rm command. When used on a single file, that single file is removed. Problems start when you use wildcards and/or the recursive option. Rule number one is not to type such commands directly on the command line: type them in an editor, verify them, verify the set of files covered by your wildcard, and only then transfer the result to the command line via a cut-and-paste operation.

The problem with typing an rm command directly is that you can make automatic, slipped-finger, or fat-finger mistakes that you will regret. Some seasoned sysadmins recommend typing the options and arguments first, verifying them, and only then typing rm in front of them. But composing the command in an editor in a separate window is a better idea.

Note: Many commands have an option that specifies recursive behavior. For some commands this is the option -R, for others it is -r. That is confusing, but it is just the way it is -- Unix is a 50-year-old system, after all, and warts accumulate with age.

On RHEL 7, the rm command is redefined via alias

alias rm='/bin/rm -i'

which prompts for confirmation. If you do not like that, you can use the \rm invocation, which bypasses the alias. Make sure that you know what you are doing in this case, as an important layer of protection is removed. If you are doing extensive cleanup, you usually need to create and verify a backup first.
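For a single command you can bypass the alias in several equivalent ways; again, be sure a backup exists before any mass deletion:

```shell
touch /tmp/bypass_demo.txt
\rm /tmp/bypass_demo.txt            # backslash suppresses alias expansion

touch /tmp/bypass_demo.txt
command rm /tmp/bypass_demo.txt     # "command" skips aliases and shell functions

touch /tmp/bypass_demo.txt
/bin/rm /tmp/bypass_demo.txt        # a full path never matches an alias
```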

The command rm has relatively few options; the prompting options -i and -I are especially important.

Linux uses the GNU implementation, which as of April 2018 has the following options (note the relatively new option -I):

-f, --force
ignore nonexistent files, never prompt
-i
prompt before every removal
-I
prompt once before removing more than three files, or when removing recursively. Less intrusive than -i, while still giving protection against most mistakes
--interactive[=WHEN]
prompt according to WHEN: never, once (-I), or always (-i). Without WHEN, prompt always
--one-file-system
when removing a hierarchy recursively, skip any directory that is on a file system different from that of the corresponding command line argument
--no-preserve-root
do not treat '/' specially
--preserve-root
do not remove '/' (default)
-r, -R, --recursive
remove directories and their contents recursively
-v, --verbose
explain what is being done
--help
display this help and exit
--version
output version information and exit

Wiping out useful data by using rm with wrong parameters or options is probably the most common blunder committed by both novice and seasoned sysadmins. It happens rarely, but the results are often devastating. This danger cannot be completely avoided, but in addition to keeping an up-to-date backup, there are several steps you can take to reduce the chances of major damage to the data:

  1. Do a backup before any dangerous operation. That is Rule No. 1.
  2. You can make the /usr directory write-protected by using a separate filesystem for it (/usr/local should be symlinked to /opt, where it belongs). Other filesystems can be made write-protected as appropriate.
  3. Try to use the option -I (instead of -i). While the -i option (which is, alas, aliased as the default in Red Hat and other distributions) is so annoying that most users disable it, there is a more modern and more useful option -I, which is highly recommended. Unfortunately, very few sysadmins know of its existence. It prompts once before removing more than three files, or when removing recursively, and is less intrusive than -i while still giving protection against most mistakes (rm(1)):

    If the -I or --interactive=once option is given, and there are more than three files or the -r, -R, or --recursive are given, then rm prompts the user for whether to proceed with the entire operation. If the response is not affirmative, the entire command is aborted.

  4. Block the operation for system level-2 directories such as /etc or /usr (for / itself this simple measure is actually now implemented in the GNU rm used in Linux) by using a wrapper such as safe-rm. Aliasing rm to a wrapper is safe because aliases work only in interactive sessions. In addition, you can block the deletion of directories that are on your list of favorite directories, if you have one. For example, if you use a level-2 directory /Fav with links to often-used directories, you can check whether the directory you are trying to delete is present among those links.
  5. Create a list of files and directories that should never be erased and use it in a wrapper. Instead of creating a list, you can mark such files in some way that distinguishes them from the rest: you can use extended attributes, or file timestamps (for example, any file whose creation time has 00 in the seconds field should never be deleted). If you operate a server with SELinux in enforcing mode, you can use SELinux for file protection.
  6. Rename the directory to a special mnemonic name such as DELETEIT before applying the rm -r command, and then delete it using a special alias such as dirdel (this way you will never mistype the name of the directory). Many devastating blunders with rm occur when you think that you are operating on a backup copy of the system or of an important user/application directory, but in reality are operating on the "real thing", or when you accidentally type a "/" in front of the name of a common system directory such as /etc or /var.
  7. Write the command in an editor first. Do not improvise with such commands on the command line. A weaker version of this rule is to write the options and arguments first and add rm in front only after verification. Sysadmins automatically type the names of the most common system directories with a slash in front instead of the proper relative name (rm -r /etc instead of rm -r etc) because they are conditioned to type them this way by long practice; the sequence of keystrokes is "etched" into the brain and comes out almost semi-consciously. If you first rename the directory to something else, for example DELETEIT, such an error cannot occur. A script can also check whether another directory with this name exists and warn you. Alternatively, you can simply move the directory to a /Trash folder which has, say, a 90-day expiration period for its files.
  8. Use a protective script as an envelope around rm that takes into account the most dangerous situations typical of your environment. You can write a script such as rmm or saferm which, among other preventive checks, introduces a delay between hitting Enter and the start of the operation, and lists the affected files, or at least their number and the first five of them.
  9. Verify the list of files you are deleting by first running find or ls on the wildcard you created. When deleting a large number of files, first generate the list of files, verify it, and only then apply the rm command to that list.
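A minimal sketch of such a saferm wrapper, combining the protected-list, listing, and delay ideas above (the function name, the protected list, and the checks are illustrative, not a finished tool):

```shell
saferm () {
    # Refuse a short list of critical system directories outright.
    local protected=" / /etc /usr /var /bin /sbin /boot /home "
    local arg
    for arg in "$@"; do
        [ "$arg" = "/" ] && { echo "saferm: refusing to remove '/'" >&2; return 1; }
        case "$protected" in
            *" ${arg%/} "*) echo "saferm: refusing to remove '$arg'" >&2; return 1 ;;
        esac
    done
    # Show the first five targets, then pause so a wrong Enter can be interrupted.
    echo "About to remove:"
    ls -ld -- "$@" | head -5
    sleep 1
    /bin/rm -r -- "$@"
}

saferm /etc || true         # prints: saferm: refusing to remove '/etc'
mkdir -p /tmp/scratch_dir
saferm /tmp/scratch_dir     # listed, then removed
```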

Classic blunders with RM command

There are several classic blunders committed using the rm command :

  1. Running an rm -R command without first testing it with ls. You should always use an ls -Rl command to test complex rm -R commands (-R, --recursive means process subdirectories recursively).
  2. Mistyping a complex path or file name in an rm command. It is always better to use a copy-and-paste operation for the directories and files used in an rm command, as it helps to avoid various typos. If you are doing a recursive deletion, it is useful to do a "dry run" using find or ls to see the affected files.

    Here are two examples of such typos:

    rm -rf /tmp/foo/bar/ *

    instead of

    rm -rf /tmp/foo/bar/*


    rm -r /etc

    instead of

    rm -r etc

    This is actually an interesting type of error: /etc is typed so often that it becomes ingrained in a sysadmin's muscle memory and can be typed subconsciously instead of etc.

  3. Using the .* wildcard, which essentially makes rm climb upward, as it matches ".." (the parent directory).
  4. Accidentally hitting Enter before finishing typing the line containing the rm command.

These cases can be viewed as shortcomings of the rm implementation: it should automatically block deletion of system directories like /, /etc, and so on (as well as any system file) unless the -f flag is specified. Also, Unix does not have a "system" attribute for files, although the sticky bit on files, along with ownership by sys instead of root, can be used instead.

Basic File Operations using Midnight Commander

The Midnight Commander provides all of the basic UNIX file system operations including copy, delete, move, rename, link, symbolic link, chown, view, and edit. One of the nice features of mc is that it defaults to asking for confirmation before taking destructive actions such as file deletion or overwriting. This feature alone can save you a good deal of agony.

The Midnight Commander comes with a very functional built-in file viewer and text/hex editor. To view or edit a file, hit the F3 or F4 key respectively. Of particular note is the fact that mc provides formatting support for several common file formats, including manual pages, HTML, and mail files. That is, rather than showing the raw file, mc will format the file according to the file format.

For example, to view a manual page (even a gzipped page!) simply select the file and hit F3. Here is how the basic functionality is described in the MC Tutorial by Jane Trembath:

Check that mc is installed in your distribution - it is no longer installed by default in some distros, such as openSUSE, but easy to do with a package manager. See Advanced Operations on installing from source. Start Midnight Commander by typing mc in a terminal window. The main section will be the two directory panels, with a dropdown menu line above, a command line below, and below that a list of the present functions of the F (function) keys. Above the command line is a Hint line that shows random tips.

Your mc may also open the F2 file operation menu on start-up. If it irritates you (as it does me), untick 'Auto menus' when configuring Midnight Commander. To quit mc, use the F10 key.

The Basics: Navigation in the Directory Panels

Generally, you want to display different directories either side, so you can action files between them. Navigate around mc with the keyboard:

To change up into a parent directory, arrow up to the top line and press Enter on /.. (the usual parent directory notation). To change down into a subdirectory, arrow down, then press Enter. See Configuration on how to enable 'lynx-like motion': without needing to scroll to the top, the left arrow will change you directly into the parent directory.

Keyboard Shortcuts

The 'F' (function) keys are widely used in mc for file operations. Read the bar at the bottom for their current function, which may differ according to the context, eg. browsing a directory, using the file viewer, or the editor.

In normal browsing mode:
  1. F1 - help. More readable than the 2000-line man page, although difficult to browse.
  2. F2 - user menu ( offers option to gzip files, etc.)
  3. F3 - view (handy to check the contents of an rpm or tgz file, or read contents of files)
  4. F4 - edit with internal editor, mcedit
  5. F5 - copy
  6. F6 - rename or move
  7. F7 - create a directory
  8. F8 - delete
  9. F9 - pull-down - accesses the menu bar at the top.
  10. F10 - quit. Closes mc, as well as mcedit and any unwanted open menu.

If you don't have F keys (or they do not work in your terminal emulator), use Esc - number sequence (1-0) instead.

F10 key in Gnome Terminal: opens the main terminal File menu instead, so click quit with mouse.

Other basic keyboard usages:

All shortcuts are noted in the menus. In mc's keyboard shortcut notation, 'C-x i' would mean press Ctrl and x simultaneously, release both then press i. M refers to the Alt key. A few common shortcuts:



Copyright © 1996-2018 by Dr. Nikolai Bezroukov. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original material copyrights belong to their respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

Last modified: October 28, 2018