
Single square bracket conditionals (test conditionals)



Single bracket conditionals are the oldest type of shell conditional; they are an invocation of the test command in disguise.

Every UNIX command returns an integer code to its parent. This return code is called the exit status. 0 is usually the "OK" exit status, while anything else (1 to 255) usually denotes an error.
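The status of the last command is available in the special parameter $?. A minimal illustration using grep, which exits 0 on a match and 1 when nothing matches:

```shell
# grep -q exits 0 when the pattern matches, 1 when it does not
echo "hello" | grep -q hello
echo "exit status: $?"     # exit status: 0

echo "hello" | grep -q goodbye
echo "exit status: $?"     # exit status: 1
```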

The simplest form (without the elif and else parts, a.k.a. clauses) executes the statements only if the exit code is true (in the shell, unlike Perl, this means an exit code equal to zero). For example:

if cd /fake; then 
   echo "cd returned OK"
fi
If you add an else clause, you get the ability to execute one set of statements if a condition is true or another set of statements if the condition is false. If the status is 0, the condition evaluates to true; if it is anything else, the condition is considered false. The same is true for each condition attached to an elif statement (if any).

You can use as many elif (a contraction of "else if") clauses as you wish; they introduce more conditions, and thus more choices for which set of statements to execute. If you use one or more elifs, you can think of the else clause as the "if all else fails" part.
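A small sketch of such a chain, testing a path in several ways (the path /etc is just an illustration):

```shell
path=/etc
if [ -f "$path" ]; then
    echo "$path is a regular file"
elif [ -d "$path" ]; then
    echo "$path is a directory"
else
    echo "$path is neither a regular file nor a directory"
fi
# prints: /etc is a directory
```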

This enables us to write code of the form:

if command ran successfully
then
    normal processing
else
    error processing
fi
For example:
if cd /fake; then echo "cd returned OK"; fi

test: Obsolete construct for Testing Files and Strings

Historically, conditional expressions in Unix shells were introduced via the test command. For instance, test can check whether a file is writable before your script tries to write to it. It can treat the string in a shell variable as a number and do comparisons ("Is that number less than 1000?"). You can combine tests, too ("If the file exists and it's readable and the message number is more than 500..."). Some versions of test have more tests than others.

The test command returns a zero status if the test was true and a nonzero status otherwise, so people usually use test with if, while, or until. Here's a way your program could check whether the user has a readable file named .profile in the home directory:

if test -r $HOME/.profile
then
    echo "$myname: You already have a .profile file and it's readable"
else
    echo "$myname: you do not have a .profile file. Copying ..."
    cp /etc/skel/.profile $HOME/.profile
    exit 1
fi

The test command also lets you test for something that isn't true. Add an exclamation point (!) before the condition you're testing. For example, the following test is true if the .profile file is not readable:

if test ! -r $HOME/.profile 
... ... ...

The hack that was implemented is to link test to a file named [. Yes, that's a left bracket. It was a pretty interesting hack: you can use it interchangeably with the test command, with one exception: there has to be a matching right bracket (]) at the end of the test. The second example above could be rewritten this way:

if [ ! -r $HOME/.profile ]
then
    echo "$myname: Can't read your '.profile'.  You need to create one and make it readable." 1>&2
    exit 1
fi

Be sure to leave space between the brackets and other text. There are a couple of other common gotchas caused by empty arguments, because the shell performs macro (parameter) expansion before syntax analysis.
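A minimal sketch of the empty-argument gotcha, using a deliberately unset variable:

```shell
unset var
# Unquoted: $var expands to nothing, so test sees only '=' and "" -- an error
[ $var = "" ] 2>/dev/null || echo "unquoted comparison failed"
# Quoted: "$var" expands to an empty string, which is a valid argument
[ "$var" = "" ] && echo "quoted comparison works"
```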

The test command


General syntax

This command allows you to do various tests; it sets its exit code to 0 (TRUE) when the test succeeds and 1 (FALSE) when it fails.



Using this exit code, it's possible to let Bash react to the result of such a test, here by using the command in an if-statement:

# test if /etc/passwd exists

if test -e /etc/passwd; then
  echo "Alright man..." >&2
else
  echo "Yuck! Where is it??" >&2
  exit 1
fi

The syntax of the test command is relatively easy. Usually it's the command name "test" followed by a test type (here "-e" for "file exists") followed by test-type-specific values (here the filename to check, "/etc/passwd").

There's a second standardized command that does exactly the same: the command "[" - the only difference is that it's called "[" and the last argument to the command must be a "]": it forms "[ <EXPRESSION> ]"

Let's rewrite the above example to use it:

# test if /etc/passwd exists

if [ -e /etc/passwd ]; then
  echo "Alright man..." >&2
else
  echo "Yuck! Where is it??" >&2
  exit 1
fi
One might think now that "[" and "]" belong to the syntax of Bash's if-clause: no, they don't! "[" is a simple, ordinary command, still!

Another thing you have to remember is that if the test command wants one parameter for a test, you have to give it one parameter. Let's check for some of your music files:


mymusic="/data/music/Van Halen/Van Halen - Right Now.mp3"

if [ -e "$mymusic" ]; then
  echo "Let's rock" >&2
else
  echo "No music today, sorry..." >&2
  exit 1
fi
As you surely noticed, the filename contains spaces. Since we call a normal, ordinary command ("test" or "["), the shell will word-split the expansion of the variable mymusic: you need to quote it if you don't want the test command to complain about too many arguments for this test type! If you didn't understand that, please read the article about words...

Please also note that the file-tests want one filename to test. Don't give a glob (filename-wildcards) as it can expand to many filenames ⇒ too many arguments!
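A sketch of the glob problem and a safe loop alternative (run in a scratch directory so the result is deterministic; the *.txt pattern is illustrative):

```shell
cd "$(mktemp -d)" || exit 1
touch a.txt b.txt
# Wrong: *.txt expands to two names, so the test misbehaves
# (too many arguments, or a silently false result)
[ -f *.txt ] 2>/dev/null || echo "glob test failed"
# Better: test one concrete filename at a time
for f in *.txt; do
    [ -f "$f" ] && echo "$f is a regular file"
done
```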

Another common mistake is to provide too few arguments:

[ "$mystring"!="test" ]
This provides exactly one test-argument to the command. With one argument, test defaults to the -n test: it checks whether the provided string is empty (FALSE) or not (TRUE). Due to the lack of spaces to separate the arguments, the shown command always evaluates TRUE!
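The corrected version needs spaces around !=, so that test receives three separate arguments:

```shell
mystring="test"
# Wrong: a single non-empty argument, so the test is always TRUE
if [ "$mystring"!="test" ]; then echo "always reached"; fi
# Right: three arguments -- string, operator, string
if [ "$mystring" != "test" ]; then echo "strings differ"; else echo "strings equal"; fi
```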

Well, I addressed several basic rules, now let's see what the test-command can do for you. The Bash test-types can be split into several sections: file tests, string tests, arithmetic tests, misc tests. Below, the tests marked with :!: are non-standard tests (i.e. not in SUS/POSIX/etc..).

File tests

This section probably holds the most tests, I'll list them in some logical order. Since Bash 4.1, all tests related to permissions respect ACLs, if the underlying filesystem/OS supports them.
Operator syntax Description
-a <FILE> True if <FILE> exists. :!: (not recommended, may collide with -a for AND, see below)
-e <FILE> True if <FILE> exists.
-f <FILE> True, if <FILE> exists and is a regular file. (compare with -s)
-d <FILE> True, if <FILE> exists and is a directory.
-c <FILE> True, if <FILE> exists and is a character special file.
-b <FILE> True, if <FILE> exists and is a block special file.
-p <FILE> True, if <FILE> exists and is a named pipe (FIFO).
-S <FILE> True, if <FILE> exists and is a socket file.
-L <FILE> True, if <FILE> exists and is a symbolic link.
-h <FILE> True, if <FILE> exists and is a symbolic link.
-g <FILE> True, if <FILE> exists and has sgid bit set.
-u <FILE> True, if <FILE> exists and has suid bit set.
-r <FILE> True, if <FILE> exists and is readable.
-w <FILE> True, if <FILE> exists and is writable.
-x <FILE> True, if <FILE> exists and is executable.
-s <FILE> True, if <FILE> exists and has size bigger than 0 (not empty).
-t <fd> True, if file descriptor <fd> is open and refers to a terminal.
<FILE1> -nt <FILE2> True, if <FILE1> is newer than <FILE2> (mtime). :!:
<FILE1> -ot <FILE2> True, if <FILE1> is older than <FILE2> (mtime). :!:
<FILE1> -ef <FILE2> True, if <FILE1> is a hardlink to <FILE2>. :!:
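A short sketch exercising a few of these tests on a temporary file:

```shell
f=$(mktemp)                      # create an empty temporary file
[ -e "$f" ] && echo "exists"
[ -f "$f" ] && echo "regular file"
[ -s "$f" ] || echo "empty"
echo "data" > "$f"
[ -s "$f" ] && echo "now non-empty"
rm -f "$f"
```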

String tests

Operator syntax Description
-z <STRING> True, if <STRING> is empty.
-n <STRING> True, if <STRING> is not empty (this is the default operation).
<STRING1> = <STRING2> True, if the strings are equal.
<STRING1> != <STRING2> True, if the strings are not equal.
<STRING1> < <STRING2> True if <STRING1> sorts before <STRING2> lexicographically (pure ASCII, not current locale!). Remember to escape! Use \<
<STRING1> > <STRING2> True if <STRING1> sorts after <STRING2> lexicographically (pure ASCII, not current locale!). Remember to escape! Use \>
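A quick sketch of the string tests; note that < must be escaped, or the shell treats it as a redirection (the strings here are illustrative):

```shell
a="apple"
b="banana"
[ -n "$a" ] && echo "a is non-empty"
[ "$a" = "$b" ] || echo "a and b differ"
# escape < so the shell does not interpret it as a redirection
[ "$a" \< "$b" ] && echo "apple sorts before banana"
```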

Arithmetic tests

Operator syntax Description
<INTEGER1> -eq <INTEGER2> True, if the integers are equal.
<INTEGER1> -ne <INTEGER2> True, if the integers are NOT equal.
<INTEGER1> -le <INTEGER2> True, if the first integer is less than or equal to the second one.
<INTEGER1> -ge <INTEGER2> True, if the first integer is greater than or equal to the second one.
<INTEGER1> -lt <INTEGER2> True, if the first integer is less than second one.
<INTEGER1> -gt <INTEGER2> True, if the first integer is greater than second one.
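A small sketch of the arithmetic tests; note that -eq compares integer values while = compares strings:

```shell
count=7
[ "$count" -gt 5 ] && echo "count is greater than 5"
[ "$count" -le 10 ] && echo "count is at most 10"
# -eq compares numbers, = compares strings:
[ 07 -eq 7 ] && echo "07 and 7 are numerically equal"
[ 07 = 7 ] || echo "but they are different strings"
```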

Misc syntax

Operator syntax Description
<TEST1> -a <TEST2> True, if <TEST1> and <TEST2> are true (AND). Note that -a also may be used as a file test (see above)
<TEST1> -o <TEST2> True, if either <TEST1> or <TEST2> is true (OR).
! <TEST> True, if <TEST> is false (NOT).
( <TEST> ) Group a test (for precedence). Attention: In normal shell-usage, the "(" and ")" must be escaped; use "\(" and "\)"!
-o <OPTION_NAME> True, if the shell option <OPTION_NAME> is set.
-v <VARIABLENAME> True if the variable <VARIABLENAME> has been set. Use var[n] for array elements.
-R <VARIABLENAME> True if the variable <VARIABLENAME> has been set and is a nameref variable (since 4.3-alpha)
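A sketch combining NOT and grouping (the string operands are illustrative):

```shell
# NOT: ! passed as an argument to test
[ ! -z "nonempty" ] && echo "string is not empty"
# Grouping: parentheses must be escaped so the shell does not interpret them
if [ \( -n "a" -o -z "x" \) -a -n "b" ]; then
    echo "grouped expression is true"
fi
```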

Number of Arguments Rules

The test builtin, especially hidden under its [ name, may seem simple, but it in fact causes a lot of trouble sometimes. One of the difficulties is that the behaviour of test depends not only on its arguments but also on the number of its arguments.

Here are the rules, taken from the manual (Note: This is for the command test; for [ the number of arguments is calculated without the final ], for example [ ] follows the "zero arguments" rule):

  0 arguments: the expression is false.
  1 argument: the expression is true if, and only if, the argument is not null.
  2 arguments: if the first argument is !, the expression is true if, and only if, the second argument is null. If the first argument is a unary operator, the expression is true if the unary test is true. Otherwise the expression is false.
  3 arguments: if the second argument is a binary operator, the result is the binary test of the first and third arguments. If the first argument is !, the result is the negation of the two-argument test of the remaining arguments. If the first argument is ( and the third is ), the result is the one-argument test of the second argument. Otherwise the expression is false.
  4 arguments: if the first argument is !, the result is the negation of the three-argument test of the remaining arguments. Otherwise the expression is parsed and evaluated according to precedence.
  5 or more arguments: the expression is parsed and evaluated according to precedence using the rules above.

These rules may seem complex, but it's not so bad in practice. Knowing them might help you to explain some of the "inexplicable" behaviours you might encounter:

if [ -n $var ]; then echo "var is not empty"; fi

This code prints "var is not empty", even though -n is supposed to be true only if its argument is not empty - why?

Here, as $var is not quoted, word splitting occurs and $var expands to nothing at all (Bash removes it from the command's argument list!). So the test is in fact [ -n ] and falls under the "one argument" rule: the only argument is "-n", which is not null, and so the test returns true. The solution, as usual, is to quote the parameter expansion: [ -n "$var" ], so that the test always has 2 arguments, even if the second one is the null string.

These rules also explain why, for instance, -a and -o can have several meanings.

AND and OR

The Preferred Way

The way often recommended to logically connect several tests with AND and OR is to use several single test commands and to combine them with the shell && and || list control operators.

See this:

if [ -n "$var" ] && [ -e "$var" ]; then
   echo "\$var is not null and a file named $var exists!"
fi

The return status of AND and OR lists is the exit status of the last command executed in the list.
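A small sketch of both list operators, using a scratch directory so the result is deterministic:

```shell
d=$(mktemp -d)
# AND list: the second test runs only if the first succeeded
[ -d "$d" ] && [ -w "$d" ] && echo "directory exists and is writable"
# OR list: the fallback runs only if the test failed
[ -d "$d/missing" ] || echo "fallback: subdirectory is missing"
```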

The other way: -a and -o

The logical operators AND and OR for the test-command itself are -a and -o, thus:
if [ -n "$var" -a -e "$var" ] ; then
   echo "\$var is not null and a file named $var exists"
fi

They are not && or ||:

$ if [ -n "/tmp" && -d "/tmp"]; then echo true; fi # DOES NOT WORK
bash: [: missing `]'

You might find the error message confusing: [ does not find the required final ], because, as seen above, && is used to write a list of commands. The if statement actually sees two commands:

  1. [ -n "/tmp"
  2. -d "/tmp" ]

…which both must fail.

Why you should avoid using -a and -o

If portability is a concern

POSIX®/SUSv3 does not specify the behaviour of test in cases where there are more than 4 arguments. If you write a script that might not be executed by Bash, the behaviour might be different! 1)

If you want the short-circuit behaviour

Let's say, we want to check the following two things (AND):
  1. if a string is null (empty)
  2. if a command produced an output

Let's see:

if [ -z "false" -a -z "$(echo I am executed >&2)" ] ; then ... 
⇒ The arguments are all expanded before test runs, thus the echo-command is executed.

if [ -z "false" ] && [ -z "$(echo I am not executed >&2)" ]; then... 

⇒ Due to the nature of the && list operator, the second test-command runs only if the first test-command returns true, our echo-command is not executed.

Note: In my opinion, -a and -o are also less readable [pgas]

Precedence and Parentheses

Take care if you convert your scripts from using -a and -o to the list way (&& and ||): in a command list, && and || have equal precedence and are evaluated left to right, while for test, -a (AND) has higher precedence than -o (OR).

That means you can get different results, depending on the manner of use:

$ if [ "true" ] || [ -e /does/not/exist ] && [ -e /does/not/exist ]; then echo true; else echo false; fi
false

$ if [ "true" -o -e /does/not/exist -a -e /does/not/exist ]; then  echo true; else echo false;fi
true
As a result you have to think about it a little or add precedence control (parentheses).

For && and ||, parentheses mean (shell-wise) grouping the commands, and since ( … ) introduces a subshell, we will use { … } instead:

$ if  [ "true" ] || { [ -e /does/not/exist ]  && [ -e /does/not/exist ] ;} ; then echo true; else echo false; fi

For the test command, the precedence parentheses are, as well, ( ), but you need to escape or quote them so that the shell doesn't try to interpret them:

$ if [ \( "true" -o -e /does/not/exist \) -a -e /does/not/exist ]; then  echo true; else echo false; fi

# equivalent, but less readable IMHO:
$ if [ '(' "true" -o -e /does/not/exist ')' -a -e /does/not/exist ]; then  echo true; else echo false; fi


NOT

As with AND and OR, there are two ways to negate a test: using the shell keyword ! in front of the test command, or passing ! as an argument to test.

Here ! negates the exit status of the command test, which is 0 (true), and the else part is executed:

if ! [ -d '/tmp' ]; then echo "/tmp doesn't exist"; else echo "/tmp exists"; fi

Here the test command itself exits with status 1 (false) and the else is also executed:

if [ ! -d '/tmp' ]; then echo "/tmp doesn't exist"; else echo "/tmp exists"; fi

Unlike AND and OR, both methods of negation behave identically, at least when doing one single test.

Pitfalls summarized

In this section you will get all the mentioned (and maybe more) possible pitfalls and problems in a summary.


Here's a copy of a mail from the bug-bash list. A user asks a question about using the test command in Bash, describing a problem you may already have had yourself:
Subject: -d option not working. . .?
Date: Tue, 11 Sep 2007 21:51:59 -0400
To: [email protected]

Hi All,

I've got a script that I'm trying to set up, but it keeps telling me  
that  "[-d command not found".  Can someone please explain what is  
wrong with this?:


for i in $*
do
	if  [-d $i]
	then
		echo "$i is a directory! Yay!"
	else
		echo "$i is not a directory!"
	fi
done


See the problem regarding the used test-command (the other potential problems are not of interest here)?

[-d $i]
He simply didn't know that test or [ is a normal, simple command. Well, here's the answer he got. I quote it here because it's a well-written text that addresses most of the common issues with the "classic" test command:
From: Bob Proulx (EMAIL PROTECTED)
Subject: Re: -d option not working. . .?
Date: Wed, 12 Sep 2007 10:32:35 -0600
To: [email protected]


The shell is first and foremost a way to launch other commands.  The
syntax is simply "if" followed by a command-list, (e.g. if /some/foo;
or even if cmd1; cmd2; cmd3; then).  Plus the '( ... )' syntax is
already taken by the use of starting a subshell.

As I recall in the original shell language the file test operator was
not built-in.  It was provided by the standalone '/bin/test' command.
The result was effectively this:

  if /bin/test -d somedir

Although the full path /bin/test was never used.  I showed it that way
here for emphasis that following the 'if' statement is a command list.
Normally it would simply have been:

  if test -d somedir

Of course that is fine and for the best portability that style is
still the recommended way today to use the test command.  But many
people find that it looks different from other programming languages.
To make the test operator (note I mention the test operator and not
the shell language, this is a localized change not affecting the
language as a whole) look more like other programming languages the
'test' program was coded to ignore the last argument if it was a ']'.
Then a copy of the test program could be used as the '[' program.

  ...modify /bin/test to ignore ']' as last argument...
  cp /bin/test /bin/[

This allows:

  if [ -d somedir ]

Doesn't that look more normal?  People liked it and it caught on.  It
was so popular that both 'test' and '[' are now shell built-ins.  They
don't launch an external '/bin/test' program anymore.  But they *used*
to launch external programs.  Therefore argument parsing is the same
as if they still did launch an external program.  This affects
argument parsing.

  if test -f *.txt
  test: too many arguments

Oops.  I have twenty .txt files and so test got one -f followed by the
first file followed by the remaining files.  (e.g. test -f 1.txt 2.txt
3.txt 4.txt)

  if test -d $file
  test: argument expected

Oops.  I meant to set file.

  if test -d $file

If variables such as that are not set then they will be expanded by
the shell before passing them to the (possibly external) command and
disappear entirely.  This is why test arguments should always be quoted.

  if test -d "$file"
  if [ -d "$file" ]

Actually today test is defined such that if only one argument is given,
as in this case "test FOO", then test returns true if the argument is
non-zero in text length.  Because "-d" is of non-zero length, "test -d"
is true.  The number of arguments affects how test parses the args.
This avoids a case where, depending upon the data, it may look like a test

  if test "$DATA"         # true, $DATA is non-zero length

  if test "$DATA"         # false, $DATA is zero length

But the problem case is: how should test handle an argument that looks
like an operator?  This used to generate errors, but now, because it is
only one argument, it is defined to be the same as test -n $DATA.

  if test "$DATA"         # true, $DATA is non-zero length
  if test -d              # true, same as previous case.

Because test and [ are possibly external commands all of the parts of
them are chosen to avoid shell metacharacters.  The Fortran operator
naming was well known at the time (e.g. .gt., .eq., etc.) and was
pressed into service for the shell test operator too.  Coming from
Fortran, using -gt, -eq, etc. looked very normal.

Incorrect use generating unlikely to be intended results:

  if test 5 > 2    # true, "5" is non-zero length, creates file named "2"

Intended use:

  if test 5 -gt 2  # true (and no shell meta characters needing quoting)

Then much later, sometime in the mid 1980's, the Korn sh decided to
improve upon this situation.  A new test operator was introduced.
This one was always a shell built-in and therefore could act upon the
shell arguments directly.  This is '[[' which is a shell keyword.
(Keyword, metacharacters, builtins, all are different.)  Because the
shell processes [[ internally all arguments are known and do not need
to be quoted.

  if [[ -d $file ]]  # okay
  if [[ 5 > 2 ]]     # okay

I am sure that I am remembering a detail wrong but hopefully this is
useful as a gentle introduction and interesting anyway.


I hope this text protects you a bit from stepping from one pitfall into the next.

I find it very interesting and informative, that's why I quoted it here. Many thanks, Bob, also for the permission to copy the text here!

Code examples


Some code snippets follow, using different ways of reacting to the test result.

Listing directories

Using a for-loop to iterate through all entries of a directory, if an entry is a directory ([ -d "$fn" ]), print its name:

for fn in *; do
  [ -d "$fn" ] && echo "$fn"
done


1) <rant>Of course, one can wonder what the use is of including the parentheses in the specification without defining the behaviour with more than 4 arguments, or how useful the examples with 7 or 9 arguments attached to the specification are.</rant>


PC Pete, 2012/10/06 02:06

Thanks for an interesting and helpful explanation of the sources and requirements of the test operators. Even after 20 years, I'm still learning!

What I'd like to know is how to avoid one of the most common pitfalls of the file and directory tests (-f and -d in particular). This is the strange behaviour when you test a hidden file, or a file starting with '.' (not just a file that isn't readable by the permissions applied to it).

In this case, although the file can be listed and passed as an argument to both test types ("if [[ -d" and "if test -d"), both tests fail when passed a 'dot' file.

So far, all the workarounds I've seen are quite cumbersome, and detract from "nice" shell scripting. Can you help with an example, or explain why these tests apparently 'fail' the way they do, and what we can do to get them to work with all files?


Jan Schampera, 2012/10/06 10:30

Hi Pete,

can you explain more what you mean by failing with dot-files? A dot-file is "hidden" by a convention to not display it (ls, file explorers, …), not technically hidden (i.e. there is no "hidden" flag).

For the permissions thing, it's relatively easy to explain. Just give me an example of what's unclear.

Pete, 2012/10/06 16:16

Thanks, Jan, I appreciate the help!

OK, here's the actual example of the problem I'm seeing.

I have a directory in my home directory. In that subdirectory is only one item, a directory called ".git", which contains a number of files and folders, which I want to search (without using find). This search is part of a general search, but for some reason it never seemed to search the .git folder!

I wrote a little recursive-function script that cd's into each directory, then for each file in the directory, I use the following exact code (minus some printing functions that only get executed outside the folder test):


shopt -s dotglob


function RecurseDirs {
    for f in "$@"; do
        if [[ -e "$PWD/$FilePattern" ]]; then
            ## go do stuff to let me know we found something... e.g. FoundFile
            :
        fi
        echo "Test : checking if $f is a directory..."
        if [[ -d "${f}" ]]; then
            echo Looking in "$f"...
            cd "${f}"
            RecurseDirs $(ls -A1 ".")
            cd ..
        fi
    done
}


RecurseDirs $(ls -A1 "$StartPath")

I set $StartPath to, say, '.' to begin with, and kick the script off in my home directory.

It works as expected for all the folders it finds - but when it gets to the folder containing the .git folder, although the first echo command echoes the dot folder (so it's not being hidden by the ls options or anything, that's why I set the dotglob shell option), the -d test always fails for that folder, and the second echo command never executes, even though it's an actual folder, is readable, and so on.

This happens for any ".folder" - except, if I test by using -d '.foldername' on the command line, it works!

I'm sure this is something really silly I'm misunderstanding, but I'll be darned if I can figure it out. Any ideas, or suggestions?

I thought it might be the use of the "." as the parameter to ls in the function call… but removing it had no effect on this issue, and I want to be able to extend the code and use that as another parameter later on. That's as much as I could figure out might be causing the issue.

BTW, I also get very strange errors in some folders with this script, such as "ls : option 'A' is invalid", I'm unsure if they are related, but I can't find any information in any of the shell docs about these error messages or the dot folder problem. Most frustrating… But one thing at a time!

Any help is very much appreciated! It's driving me nuts, it's a good thing not many of the files I want to find are beneath .folders!


Copyright © 1996-2021 by Softpanorama Society. The site was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original material copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.


Last modified: March, 12, 2019