Using redirection and pipes


Extracted from Professor Nikolai Bezroukov's unpublished lecture notes.

Copyright 2010-2018, Dr. Nikolai Bezroukov. This is a fragment of the copyrighted unpublished work. All rights reserved.

 


Introduction

Redirections and pipes are two closely related innovations that helped to propel Unix to the role it plays today.

By default, when a command is executed it shows its results on the screen of the computer you are working on. The computer monitor serves as the destination for the so-called standard output, also referred to as STDOUT. The shell also has default destinations for error messages (STDERR) and for input (STDIN).

So if you run a command, it expects input from the keyboard, and it normally sends its output to the monitor of your computer without making a distinction between normal output and errors. Some commands, however, are started in the background rather than from the current terminal session, so they have no monitor or console session to send their output to, and they do not listen to the keyboard for their standard input. That is where redirection comes in handy.

It is important to understand that when messages aren't redirected in your program, the output goes through a special file called standard output. By default, standard output represents the screen, so everything sent to it is shown there.

Programs started from the command line have no idea what they are reading from or writing to. They just read from file descriptor 0 if they want to read from standard input, and they write to file descriptor number 1 to display output and to file descriptor 2 if they have error messages to be output. By default, these are connected to the keyboard and the screen.


In I/O redirection, files can be used to replace the default STDIN, STDOUT, and STDERR. You can also redirect to device files. A device file on Linux is a file that is used to access specific hardware. Your hard disk, for instance, can be referred to as /dev/sda, the console of your server is known as /dev/console or /dev/tty1, and if you want to discard a command's output, you can redirect it to /dev/null. To access most device files you need to be root.

If you use redirection symbols such as <, >, and |, the shell connects the file descriptors to the files you specified or to other commands.
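A minimal sketch of what this looks like in practice (the file names here are arbitrary):

sort < names.txt > sorted.txt 2> sort_errors.txt    # fd 0 from names.txt, fd 1 to sorted.txt, fd 2 to sort_errors.txt
grep root /etc/passwd /no/such/file 2> /dev/null    # the error message about the missing file is discarded via /dev/null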

There are two major mechanisms that increase the flexibility of Unix utilities: redirection and pipes.

Redirection basics

By default Unix/Linux assumes that all output goes to STDOUT, which is assigned to the user screen/console, called /dev/tty. You can divert messages directed to standard output, for example from commands like echo, to files or to other commands. Bash refers to this as redirection.

Before the shell executes a command, it scans the command line for redirection characters. These special symbols instruct the shell to redirect input and output accordingly. Redirection characters can appear anywhere in a simple command or can precede or follow a command. They are not passed on to the invoked command as parameters.

The most popular is the > operator, which redirects STDOUT to a file. The redirection operator is followed by the name of the file the messages should be written to. For example, to write a message with a timestamp to a file named /tmp/my.log, you use

timestamp=`date`
echo "The processing started at $timestamp" > /tmp/my.log

Try to execute

echo "Hello world" > /dev/tty

You will see that it typed on the your screen exactly the say way as if you executed the command

echo "Hello to myself"

Because those two command are actually identical.

Bash uses the symbol &1 to refer to standard output, and you can explicitly redirect messages to it. You can also redirect the output of a whole script to a file:

bash myscript.sh > mylisting.txt
This is the same as
bash myscript.sh 1> mylisting.txt

In this case any echo statement will write its output not to the screen, but to the file you've redirected the output to, in this case the file mylisting.txt

But you can also redirect each echo statement in your script individually. Let's see another set of examples:

echo "Don't forget to backup your data" > /dev/tty      # send explicitly to the screen
echo "Don't forget to backup your data"                 # sent to screen via standard output stream
echo "Don't forget to backup your data >&1              # same as the last one
echo "Don't forget to backup your data >/dev/stdout     # same as the last one
echo "Don't forget to backup your data" > warning.txt   # redirect to a file in the current directory

Using standard output is a way to send all the output from a script and any commands in it to a new destination.

A script doesn't usually need to know where the messages are going: There’s always the possibility they were redirected. However, when errors occur and when warning messages are printed to the user, you don't want these messages to get redirected along with everything else.

Linux defines a second file especially for messages intended for the user, called standard error. This file represents the destination for all error messages. Because standard error, like standard output, is a file, it can likewise be redirected. The symbol for standard error is &2; /dev/stderr can also be used instead of &2. The default destination for standard error, like standard output, is the screen. For example,

echo "$SCRIPT:SLINENO: No files available for processing" >&2

This command appears to work the same as an echo without the >&2 redirection, but there is an important difference: it displays the error message on the screen no matter where standard output has been redirected.

Redirections of the standard error stream are very similar, but they begin with the number 2. For example

bash myscript.sh 2> myscript_errors.txt

You can merge the standard output and standard error streams with 2>&1. This redirects stderr to wherever stdout currently points, so to capture both streams in a file the file redirection must come first:

bash myscript.sh > myscript_errors.txt 2>&1
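The order of the redirections matters. A minimal sketch of the difference (the file names are arbitrary):

bash myscript.sh > all_output.txt 2>&1    # stdout goes to the file first, then stderr is sent to the same place
bash myscript.sh 2>&1 > only_stdout.txt   # stderr is attached to the old stdout (the terminal) before stdout is moved to the file
bash myscript.sh 2>&1 | less              # both streams are fed into the pager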

There are several classic types of redirection. Among them: 

  1. < Sourcefile -- read input from the specified file
  2. > Targetfile -- write output to the specified file
  3. >> Targetfile -- append output to the specified file
  4. <> File -- open the specified file for both reading and writing (see the sketch after this list)
  5. <<[-] Marker -- "here document": treat the following lines of the script as input, up to a line containing only Marker

Source and target can be expressions. In this case bash performs command and parameter substitution before using the result. File name substitution occurs only if the pattern matches a single file.
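A minimal sketch of the rarely used <> form from the list above, which opens a file for both reading and writing on one descriptor (the descriptor number and file name are arbitrary):

exec 3<>/tmp/scratch      # open /tmp/scratch for reading and writing on descriptor 3
echo "first line" >&3     # write to the file through the descriptor
exec 3>&-                 # close the descriptor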

The < operator

The Unix command cat is actually short for "concatenate," i.e., link together. It accepts multiple filename arguments and copies them to the standard output. But let's pretend, for the moment, that cat and other utilities don't accept filename arguments and accept only standard input. The Unix shell lets you redirect standard input so that it comes from a file. The notation command < filename does the same job as piping the file through cat, with less overhead. In other words

cat /var/log/messages | more

and

more <  /var/log/messages

are equivalent, but the latter is slightly faster and more efficiently uses CPU and memory as it does not create a separate process. 

Another example: the utility wc (word count) can count the number of lines in its input when given the option -l. That means that you can count the number of lines in /etc/passwd, which corresponds to the number of accounts on your system, using the command:

wc  -l <  /etc/passwd

Again, wc -l counts the lines of its input, in this case the number of lines in /etc/passwd. The same trick works for dot files: printing the line count of your .bashrc from your .bash_profile script can be a useful reminder that you recently modified your environment, or, God forbid, that your .bashrc file disappeared without a trace :-)
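A minimal sketch of such a reminder, assuming you put it in your ~/.bash_profile:

# in ~/.bash_profile: report at login how long ~/.bashrc currently is
echo "Your .bashrc currently has $(wc -l < ~/.bashrc) lines"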

The > operator

The > operator always overwrites the named file. If a series of messages are redirected from the echo command to the same file, only the last message appears:

echo "The processing started at" `date` > /tmp/myprogram/log
,,, ,,, ,,, 
echo  "There were no errors. Normal exist of the program" `date ` > /tmp/myprogram/log

To add messages to a file without overwriting the earlier ones, Bash has an append operator, >>. This operator redirects messages to the end of a file.

echo "The processing started at"  > /tmp/myprogram/log
... ... ... 
echo "There were no errors. Normal exist of the program" >> /tmp/myprogram/log

The << operator ("here file")

This operator imitates reading from a file inside the script by putting several lines of text directly into the script. The operator <<MARKER treats the lines following it in the script as if they were typed from the keyboard, until it reaches a line starting with the word MARKER in the first position. In other words, the lines which are treated as an input file are terminated by a special line containing a delimiter that you define yourself.

The data in the << list is known as a here file (or a here document) because, historically, the word HERE was often used in Bourne shell scripts as the marker of the end of the input lines.

For example, in the following snippet the delimiter word used is "EOF":

cat > /tmp/example <<EOF
this is a test demonstrating how you can 
write several lines of text into 
a file
EOF

If you use >> instead of > you can append lines to an existing file without using any editor. This is how sysadmins typically create small files during installation of the operating system, for example /etc/resolv.conf:

cat >>/etc/resolv.conf <<EOF
search datacenter.mycompany.com headquarters.mycompany.com
nameserver 10.100.20.5
nameserver 10.100.20.6
EOF

In this example bash treats the lines between <<EOF and the closing EOF marker as if they were being typed from the keyboard and writes them to the file named after the redirection operator (/tmp/example in the first example, /etc/resolv.conf in the second).

NOTE: the closing marker must appear alone at the very beginning of a line, with no leading spaces; otherwise bash will not recognize the end of the here document.

Again, the name EOF is arbitrary. You can choose, for example, LINES_END instead. The only important thing is that no line of your text starts with that same word.

cat >>/etc/resolv.conf <<LINES_END
search datacenter.mycompany.com headquarters.mycompany.com
nameserver 10.100.20.5
nameserver 10.100.20.6
LINES_END

There should be no marker at the beginning of any line of the included text; that's why using all caps for the marker makes sense in this case.
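Bash also accepts the <<- form mentioned in the operator list above: it strips leading tab characters from the included lines and from the closing marker, so a here document can be indented inside a script. A minimal sketch, assuming the indentation consists of tabs, not spaces:

if [ ! -f /tmp/example ]; then
	cat > /tmp/example <<-EOF
	these lines are indented with tabs
	the <<- form removes the leading tabs before writing them
	EOF
fi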

Operator <<< ("here string")

Modern versions of bash also have a "here string" redirection operator, <<<, which redirects a literal string or the value of a variable to the command's standard input.

cat > /tmp/example <<<  "this is another example of piping info into the file" 
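The string can also be the value of a variable. For example, to inspect $PATH without creating a temporary file:

wc -c <<< "$PATH"                    # count the characters in the value of PATH (plus the newline <<< appends)
tr ':' '\n' <<< "$PATH" | wc -l      # count the directories listed in PATH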

The following exercise summarizes what we have covered so far.

Exercise: Using I/O Redirection

  1. Open a shell as user user and type cd without any arguments. This ensures that the home directory of this user is the current directory while working on this exercise. Type pwd  to verify this.
  2. Type ls.  You’ll see the results onscreen.
  3. Type ls > /dev/null. This redirects the STDOUT to the null device, with the result that you will not see it.
  4. Type ls /root/nofile  > /dev/null. This command shows a “no such file or directory” message onscreen. You see the message because it is not STDOUT, but an error message that is written to STDERR.
  5. Type ls /root/nofile 2> /dev/null. Now you will not see the error message anymore.
  6. Type ls /root/bin  /root/etc  2> /dev/null. This command shows the contents of the /root/bin directory while suppressing the error message about the nonexistent /root/etc directory.
  7. Type ls /root/bin  /root/etc  2> /dev/null > output. In this command, you still send the error messages to /dev/null while sending STDOUT to a file with the name output that will be created in your home directory.
  8. Type cat output  to show the contents of this file.
  9. Type echo hello > output. This overwrites the contents of the output file.
  10. Type ls >> output. This appends the result of the ls  command to the output file.
  11. Type ls -R /. This shows a long list of files and folders scrolling over your computer monitor. (You may want to type Ctrl+C  to stop [or wait a long time])
  12. Type ls -R | less. This shows the same result, but in the pager less, where you can scroll up and down using the arrow keys on your keyboard.
  13. Type q  to close less. This will also end the ls  program.
  14. Type ls > /dev/tty1. This gives an error message because you are executing the command as an ordinary user (unless you were logged in to tty1). Only the user root has permission to write to device files directly.

Pipes as cascading redirection

A Linux administrator needs to know how to use pipes well, because pipes are used to construct simple sysadmin scripts on a daily basis. Pipes were one of the most significant innovations brought to the OS area by Unix. By combining multiple commands using pipes, you can create a kind of super command that makes almost anything possible. A pipe catches the output of one command and passes it as input to a second command, and so on.

Many text utilities are used as stages in multistage pipes. (Utilities which accept standard input, process it, and write the result to standard output are called filters.)

Pipeline programming involves a special style of componentization that allows you to break a problem into a number of small steps, each of which can then be performed by a simple program. We will call this type of componentization pipethink: wherever possible, the programmer relies on a preexisting collection of useful "stages" implemented by Unix filters. A quote from David Korn catches the essence of pipethink -- "reuse of a set of components rather than building monolithic applications".

This process is called piping and shell uses the vertical bar (or pipe) operator | to specify it:

who | wc -l # count the number of users

Any number of commands can be strung together with vertical bar symbols. A group of such commands is called a pipeline. This is actually a language that system administrators keep learning throughout their career. The level of mastery of this language directly correlates with the qualification of a sysadmin. See also Pipes -- powerful and elegant programming paradigm

Pipes are often used for processing logs -- for example, checking whether certain types of errors occurred, or selecting a fragment for further analysis. For example, to select lines 101 to 200 you can use a two-stage pipe:

cat /var/log/messages | head -200 | tail -100
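The same range can be selected in a single stage with awk (compare the line-number one-liners in the AWK section below):

awk 'NR==101, NR==200' /var/log/messages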

If one command in a pipeline terminates prematurely, for example because you interrupted it with Ctrl-C, the commands still writing into the pipe may fail with the message "Broken pipe" on the screen.

 If a user runs the command ls, for instance, the output of the command is shown onscreen. If the user uses ls | less, the commands ls  and less  are started in parallel. The standard output of the ls  command is connected to the standard input of less. Everything that ls writes to the standard output will become available for read from standard input in less. The result is that the output of ls  is shown in a pager, where the user can browse up and down through the results easily.

This arrangement is called coroutines. If the result is not what you expect, you can break the pipe at any stage, redirect the output to a file, and check whether it is correct. After that you can feed the file into the first stage of the remaining part of the pipe and debug it stage by stage using the same method.

This way less can serve as an interactive front end to any utility that does not have such capabilities.

You can use the filter pv (pipe viewer) to watch data passing through a pipe stage on different datasets. pv displays progress information on the console (bytes transferred and throughput, or a line count with the -l option) for the data flowing through it.
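A minimal sketch, assuming the pv package is installed (the file names are arbitrary):

grep 'error' /var/log/messages | pv -l | sort | uniq -c > /tmp/error_summary    # pv -l shows how many lines have passed through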

You can also split the pipe into two parts: the part that is working properly and the part that is not. Write the output of the last correctly working stage into a file in /tmp and analyze it visually with less and other tools. This way you can find out what, if anything, is wrong with the output of the problematic stage. After that you can add the next stage and repeat the procedure, and so on, until the whole pipe is debugged.

You can use tee on any stage of the pipe to divert output to a file.
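tee passes its input through unchanged while also writing a copy to the named file, so you can inspect an intermediate stage later without breaking the pipe (the file name and the field printed here are arbitrary):

grep 'error' /var/log/messages | tee /tmp/stage1.out | awk '{print $5}' | sort | uniq -c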

Here is how Stephen G. Kochan and Patrick Wood explain this concept in their book Shell Programming in Unix, Linux and OS X, Fourth Edition:

Pipes

As you will recall, the file users that was created previously contains a list of all the users currently logged in to the system. Because you know that there will be one line in the file for each user logged in to the system, you can easily determine the number of login sessions by counting the number of lines in the users file:

$ who > users
$ wc -l < users
      5
$

This output indicates that currently five users are logged in or that there are five login sessions, the difference being that users, particularly administrators, often log in more than once. Now you have a command sequence you can use whenever you want to know how many users are logged in.

Another approach to determine the number of logged-in users bypasses the intermediate file. As referenced earlier, Unix lets you “connect” two commands together. This connection is known as a pipe, and it enables you to take the output from one command and feed it directly into the input of another. A pipe is denoted by the character |, which is placed between the two commands. To create a pipe between the who and wc -l commands, you type who | wc -l:

$ who | wc -l
      5
$

Pipeline process: who | wc -l

When a pipe is established between two commands, the standard output from the first command is connected directly to the standard input of the second command. You know that the who command writes its list of logged-in users to standard output. Furthermore, you know that if no filename argument is specified to the wc command, it takes its input from standard input. Therefore, the list of logged-in users that is output from the who command automatically becomes the input to the wc command. Note that you never see the output of the who command at the terminal because it is piped directly into the wc command. This is depicted in Figure 1.13.

... ... ...

A pipe can be made between any two programs, provided that the first program writes its output to standard output, and the second program reads its input from standard input.

As another example, suppose you wanted to count the number of files contained in your directory. Knowledge of the fact that the ls command displays one line of output per file enables you to use the same type of approach as before:

$ ls | wc -l
      10
$

The output indicates that the current directory contains 10 files.

It is also possible to create a more complicated pipeline that consists of more than two programs, with the output of one program feeding into the input of the next. As you become a more sophisticated command line user, you’ll find many situations where pipelines can be tremendously powerful.

Filters

The term filter is often used in Unix terminology to refer to any program that can take input from standard input, perform some operation on that input, and write the results to standard output. More succinctly, a filter is any program that can be used to modify the output of other programs in a pipeline. So in the pipeline in the previous example, wc is considered a filter. ls is not because it does not read its input from standard input. As other examples, cat and sort are filters, whereas who, date, cd, pwd, echo, rm, mv, and cp are not.

 

Using AWK in pipes

Previously it was AWK that was used in this role (and it is still widely used in scripts, as it is a standalone utility that does not change). You can search for "AWK one-liners" for examples. Simple AWK programs enclosed in single quotes can be passed as a parameter. For example, you can specify the field separator using the option -F and extract the fields you want, much like cut:

awk -F ':' '{ print $1 | "sort" }' /etc/passwd

This pipes the output of the print statement into sort inside AWK itself. It is equivalent to the shell pipe

awk -F ':' '{ print $1 }' /etc/passwd | sort

Either form prints a sorted list of the login names of all users from /etc/passwd.

Generally AWK is more flexible than the cut utility and is often used instead:

Here are more examples from HANDY ONE-LINE SCRIPTS FOR AWK  by Eric Pement :

 # print the first 2 fields, in opposite order, of every line
 awk '{print $2, $1}' file

 # switch the first 2 fields of every line
 awk '{temp = $1; $1 = $2; $2 = temp; print}' file

 # print every line, deleting the second field of that line
 awk '{ $2 = ""; print }'

If an input file or output file is not specified, AWK expects input from stdin and sends output to stdout, so it fits naturally into pipes.
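For example, used as a filter in a pipe (the 80% threshold is an arbitrary choice):

df -k | awk '$5+0 > 80 {print $6, $5}'    # list filesystems that are more than 80 percent full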

AWK was a pioneer in introducing regular expressions to Unix (see AWK Regular Expressions). But the pattern matching capabilities of AWK are not limited to regular expressions. Patterns can be

  1. regular expressions enclosed by slashes, e.g.: /[a-z]+/
  2. relational expressions, e.g.: $3!=$4
  3. pattern-matching expressions, e.g.: $1 !~ /string/
  4. or any combination of these (This  example selects lines where the two characters starting in fifth column are xx and the third field matches nasty, plus lines beginning with The, plus lines ending with mean, plus lines in which the fourth field is greater than two.):
    (substr($0,5,2)=="xx" && $3 ~ /nasty/ ) || /^The/ || /mean$/ || $4>2

Here are some examples (HANDY ONE-LINE SCRIPTS FOR AWK  by Eric Pement ):

 # substitute (find and replace) "foo" with "bar" on each line
 awk '{sub(/foo/,"bar")}; 1'           # replace only 1st instance
 gawk '{$0=gensub(/foo/,"bar",4)}; 1'  # replace only 4th instance
 awk '{gsub(/foo/,"bar")}; 1'          # replace ALL instances in a line
 
 # substitute "foo" with "bar" ONLY for lines which contain "baz"
 awk '/baz/{gsub(/foo/, "bar")}; 1'

 # substitute "foo" with "bar" EXCEPT for lines which contain "baz"
 awk '!/baz/{gsub(/foo/, "bar")}; 1'

 # change "scarlet" or "ruby" or "puce" to "red"
 awk '{gsub(/scarlet|ruby|puce/, "red")}; 1'
More examples:
SELECTIVE PRINTING OF CERTAIN LINES:

 # print first 10 lines of file (emulates behavior of "head")
 awk 'NR < 11'

 # print first line of file (emulates "head -1")
 awk 'NR>1{exit};1'

  # print the last 2 lines of a file (emulates "tail -2")
 awk '{y=x "\n" $0; x=$0};END{print y}'

 # print the last line of a file (emulates "tail -1")
 awk 'END{print}'

 # print only lines which match regular expression (emulates "grep")
 awk '/regex/'

 # print only lines which do NOT match regex (emulates "grep -v")
 awk '!/regex/'

 # print any line where field #5 is equal to "abc123"
 awk '$5 == "abc123"'

 # print only those lines where field #5 is NOT equal to "abc123"
 # This will also print lines which have less than 5 fields.
 awk '$5 != "abc123"'
 awk '!($5 == "abc123")'

 # matching a field against a regular expression
 awk '$7  ~ /^[a-f]/'    # print line if field #7 matches regex
 awk '$7 !~ /^[a-f]/'    # print line if field #7 does NOT match regex

 # print the line immediately before a regex, but not the line
 # containing the regex
 awk '/regex/{print x};{x=$0}'
 awk '/regex/{print (NR==1 ? "match on line 1" : x)};{x=$0}'

 # print the line immediately after a regex, but not the line
 # containing the regex
 awk '/regex/{getline;print}'

 # grep for AAA and BBB and CCC (in any order on the same line)
 awk '/AAA/ && /BBB/ && /CCC/'

 # grep for AAA and BBB and CCC (in that order)
 awk '/AAA.*BBB.*CCC/'

 # print only lines of 65 characters or longer
 awk 'length > 64'

 # print only lines of less than 65 characters
 awk 'length < 65'

 # print section of file from regular expression to end of file
 awk '/regex/,0'
 awk '/regex/,EOF'

 # print section of file based on line numbers (lines 8-12, inclusive)
 awk 'NR==8,NR==12'

 # print line number 52
 awk 'NR==52'
 awk 'NR==52 {print;exit}'          # more efficient on large files

 # print section of file between two regular expressions (inclusive)
 awk '/Iowa/,/Montana/'             # case sensitive

SELECTIVE DELETION OF CERTAIN LINES:

 # delete ALL blank lines from a file (same as "grep '.' ")
 awk NF
 awk '/./'

 # remove duplicate, consecutive lines (emulates "uniq")
 awk 'a !~ $0; {a=$0}'

 # remove duplicate, nonconsecutive lines
 awk '!a[$0]++'                     # most concise script
 awk '!($0 in a){a[$0];print}'      # most efficient script

 

Using Perl in pipes

The pipe command lets you take the output from one command and use it as input to a custom script, written in any scripting language that you know. Sysadmins typically use Perl for this purpose, but fashion changes.

Many sysadmins know Perl, and it is a powerful and flexible language for writing your own pipe stages. Python is now gradually displacing Perl from this role (and it is possible that in 20 years you will be searching for Python one-liners ;-), much like Perl largely displaced AWK in the 1990s. But Perl is still widely used, and it is closer to the shell than Python, so it has a "home field advantage". I personally prefer Perl to Python, as many interesting "one liners" are available for this scripting language. Just search for them using Bing or Google; they can be reused.

The minimal subset of Perl that is used in pipes can be explained in just a few paragraphs.
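A minimal summary of the command line switches that matter for pipe one-liners:

#   -e 'code'   run the code given on the command line
#   -n          wrap the code in a loop that reads each input line into $_ (no automatic printing)
#   -p          like -n, but print $_ after the code runs for each line
#   -a          autosplit each input line into the array @F (used together with -n or -p)
#   -F pattern  set the field separator used by -a
#   -l          strip the line ending on input and add it back when printing
perl -pe 's/foo/bar/g'        # edit a stream, much like sed
perl -lane 'print $F[0]'      # print the first field of every line, much like cut or awk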

Let's discuss several useful examples:

Here is a classic one-liner that replaces every match of the regular expression given as the first part of the substitution operator with the string given as the second part:

cat /etc/hosts | perl -pe 's/oldserver/newserver/g' > /tmp/newhosts

Perl's autosplit mode (described below) is also handy in pipes. For example, to print the device name (the first field) of each /dev entry in the df output:

df -k | perl -lane 'print $F[0] if m{^/dev/}'
/dev/hda5
/dev/hda8
/dev/hda6
/dev/hdb5
/dev/hdb1
/dev/hda7
/dev/hda1
  1. Delete several lines from the beginning of a file. For example, the one-liner below deletes the first 10 lines of /etc/hosts, editing the file in place and keeping a backup copy in /etc/hosts.bak:
    perl -i.bak -ne 'print unless 1 .. 10' /etc/hosts
  2. Emulation of the Unix cut utility in Perl. The option -a (autosplit mode) converts each line into the array @F. The pattern used for splitting is whitespace (spaces or tabs), and unlike cut it handles runs of consecutive spaces or tabs mixed with spaces. Here's a simple one-line script that prints the first word of every line but skips any line beginning with a #, because it's a comment line.
    perl -nae 'next if /^#/; print "$F[0]\n"'

    Here is an example of how to print the first and the last fields:

    perl -lane 'print "$F[0]:$F[-1]"'

