Softpanorama
May the source be with you, but remember the KISS principle ;-)


BASH Debugging


Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
Brian Kernighan

See Debugging Links for general information about debugging.

Bash is the only version of the shell that has a built-in debugger. That made it the winner of the shell competition, despite the fact that ksh93 is a more solid and better-designed shell. See Bash Debugger.

One of the major innovations in bash 3.0 was built-in debugger support:

New variables to support the bash debugger:  BASH_ARGC, BASH_ARGV,     BASH_SOURCE, BASH_LINENO, BASH_SUBSHELL, BASH_EXECUTION_STRING,     BASH_COMMAND

i.  FUNCNAME has been changed to support the debugger: it's now an array   variable.

j.  for, case, select, arithmetic commands now keep line number information  for the debugger.

k.  There is a new `RETURN' trap executed when a function or sourced script  returns (not inherited by child processes; inherited by command substitution  if function tracing is enabled and the debugger is active).

l.  New invocation option:  --debugger.  Enables debugging and turns on new `extdebug' shell option.

m.  New `functrace' and `errtrace' options to `set -o' cause DEBUG and ERR  traps, respectively, to be inherited by shell functions.  Equivalent to `set -T' and `set -E' respectively.  The `functrace' option also controls   whether or not the DEBUG trap is inherited by sourced scripts.

n.  The DEBUG trap is run before binding the variable and running the action  list in a `for' command, binding the selection variable and running the query in a `select' command, and before attempting a match in a `case'  command.

o.  New `--enable-debugger' option to `configure' to compile in the debugger  support code.

p.  `declare -F' now prints out extra line number and source file information if the `extdebug' option is set.

q.  If `extdebug' is enabled, a non-zero return value from a DEBUG trap causes the next command to be skipped, and a return value of 2 while in a function or sourced script forces a `return'.

r.  New `caller' builtin to provide a call stack for the bash debugger.

s.  The DEBUG trap is run just before the first command in a function body is  executed, for the debugger.

t.  `for', `select', and `case' command heads are printed when `set -x' is  enabled.
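The stack-related variables above can be inspected directly from a script; a minimal sketch (the function names are illustrative):

```shell
#!/bin/bash
# FUNCNAME, BASH_SOURCE and BASH_LINENO are parallel arrays describing
# the call stack; index 0 is the currently executing function.
trace_frame() {
    echo "in ${FUNCNAME[0]}, called from ${FUNCNAME[1]} at line ${BASH_LINENO[0]}"
    caller 0   # prints the same frame as "<line> <function> <file>"
}
main_job() {
    trace_frame
}
main_job
```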
 

In addition to the debugger, you have three "legacy" options that bash inherited from ksh. They are -n (syntax checking), -v (verbose output) and, especially, -x (execution tracing):

 -n option

The -n option, short for noexec (as in no execution), tells the shell not to run the commands; it only performs the shell's normal syntax check. Because the shell doesn't execute your commands, -n gives you a safe way to test scripts that may contain syntax errors.

The following example shows how to use the -n option.

Let us consider a shell script named debug_quotes.sh (note the missing closing quote on the first echo):
#!/bin/bash
echo "USER=$USER
echo "HOME=$HOME"
echo "OSNAME=$OSNAME"

Now run the script with the -n option:
$ sh -n debug_quotes.sh
debug_quotes: 8: debug_quotes: Syntax error: Unterminated quoted string

As the above output shows, there is a syntax error: a closing double quote is missing.

 -v option

The -v option tells the shell to run in verbose mode. In practice, this means that the shell echoes each command before executing it. This is very useful because it can often help you find errors.

Let us create a shell script named "listusers.sh" with the contents below:
linuxtechi@localhost:~$ cat listusers.sh

#!/bin/bash

cut -d : -f1,5,7 /etc/passwd | grep -v sbin | grep sh | sort > /tmp/users.txt
awk -F':' ' { printf ( "%-12s %-40s\n", $1, $2 ) } ' /tmp/users.txt

#Clean up the temporary file.
/bin/rm -f /tmp/users.txt

Now execute the script with -v option.
linuxtechi@localhost:~$ sh -v listusers.sh

#!/bin/bash

cut -d : -f1,5,7 /etc/passwd | grep -v sbin | grep sh | sort > /tmp/users.txt
awk -F':' ' { printf ( "%-12s %-40s\n", $1, $2 ) } ' /tmp/users.txt
guest-k9ghtA Guest,,,
guest-kqEkQ8 Guest,,,
guest-llnzfx Guest,,,
pradeep pradeep,,,
mail admin Mail Admin,,,

#Clean up the temporary file.
/bin/rm -f /tmp/users.txt

linuxtechi@localhost:~$

In the above output, the script's output gets mixed with the commands of the script. Still, with the -v option you at least get a better view of what the shell is doing as it runs your script.

Combining the -n & -v Options

We can combine the command-line options (-n and -v). This makes a good combination, because we can check the script's syntax while seeing each line as the shell reads it (with -n, no commands are actually executed).

Let us reuse the script "debug_quotes.sh":
linuxtechi@localhost:~$ sh -nv debug_quotes.sh

#!/bin/bash
#shows an error.

echo "USER=$USER
echo "HOME=$HOME"
echo "OSNAME=$OSNAME"

debug_quotes: 8: debug_quotes: Syntax error: Unterminated quoted string

linuxtechi@localhost:~$

 -x option

The -x option, short for xtrace or execution trace, tells the shell to echo each command after performing the substitution steps. Thus, we can see the actual values of variables and commands. Often, this option alone will help to diagnose a problem.

In most cases, the -x option provides the most useful information about a script, but it can lead to a lot of output. The following example shows this option in action.
linuxtechi@localhost:~$ sh -x listusers.sh

+ cut -d : -f1,5,7 /etc/passwd
+ grep -v sbin
+ grep sh
+ sort
+ awk -F: ' { printf ( "%-12s %-40s\n", $1, $2 ) } ' /tmp/users.txt
guest-k9ghtA Guest,,,
guest-kqEkQ8 Guest,,,
guest-llnzfx Guest,,,
pradeep pradeep,,,
mail admin Mail Admin,,,
+ /bin/rm -f /tmp/users.txt

linuxtechi@localhost:~$

In the above output, the shell inserts a + sign (the expanded PS4 prompt) in front of each command.

 

The key to intelligent debugging is incorporating debugging code into your script, controlled by a special variable (for example, $debug) that lets you switch it on and off and select the verbosity level of the output.
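A minimal sketch of such a switch; the $debug variable name and the level convention are choices made here, not bash features:

```shell
#!/bin/bash
# Debug messages go to stderr, and only when $debug is high enough.
debug=${debug:-0}        # 0 = silent, 1 = basic, 2 = verbose

dbg() {                  # usage: dbg LEVEL MESSAGE...
    local level=$1; shift
    if [ "$debug" -ge "$level" ]; then
        echo "DEBUG$level: $*" >&2
    fi
}

dbg 1 "starting run"
dbg 2 "PATH=$PATH"       # shown only when debug >= 2
```

Run the script as `debug=2 ./myscript.sh` to turn on full verbosity without editing the script.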

Debugging sections of a script using the set options

To debug a shell script, it's not necessary to debug the entire script every time. Sometimes, debugging only part of the script is more useful and saves time. We can achieve partial debugging in a shell script using the set builtin command:

set -x  (Start debugging from here)
set +x  (End debugging here)

We can use set -x and set +x inside a shell script at multiple places, depending upon the need. When the script is executed, the commands between them are printed along with their output.

Consider the following shell script as an example:

#!/bin/bash
# Filename: eval.sh
# Description: Evaluating arithmetic expression

a=23
b=6
expr $a + $b
expr $a - $b
expr $a * $b

Executing this script gives the following output:

$ sh eval.sh
29
17
expr: syntax error

We get the syntax error from the expression that is most likely the third one, that is, expr $a * $b.

To debug, we will use set -x before and set +x after expr $a * $b.

Another script partial_debugging.sh with partial debugging is as follows:

#!/bin/bash
# Filename: partial_debugging.sh
# Description: Debugging part of script of eval.sh

a=23
b=6
expr $a + $b

expr $a - $b

set -x
expr $a * $b
set +x

The following output is obtained after executing the partial_debugging.sh script:

$  sh partial_debugging.sh
29
17
+ expr 23 eval.sh partial_debugging.sh 6
expr: syntax error
+ set +x

From the preceding output, we can see that expr $a * $b is executed as expr 23 eval.sh partial_debugging.sh 6. This means that, instead of doing multiplication, bash expands the unquoted * to the names of the files in the current directory. So, we need to escape the * to keep it from being expanded, that is, expr $a \* $b.

The following script eval_modified.sh is a modified form of the eval.sh script:

#!/bin/bash
# Filename: eval_modified.sh
# Description: Evaluating arithmetic expression

a=23
b=6
expr $a + $b
expr $a - $b
expr $a \* $b

Now, the output of running eval_modified.sh will be as follows:

$  sh eval_modified.sh 
29
17
138

The script runs perfectly now without any errors.
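An alternative that avoids the quoting problem entirely is bash's built-in arithmetic expansion, which needs no external expr command and never glob-expands *; a sketch with the same values:

```shell
#!/bin/bash
a=23
b=6
echo $(( a + b ))   # 29
echo $(( a - b ))   # 17
echo $(( a * b ))   # 138 -- no escaping needed inside $(( ))
```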

You can also use the bashdb debugger for even better debugging of the shell script. The source code and documentation for bashdb can be found at http://bashdb.sourceforge.net/.


Old News ;-)

[Jul 13, 2017] Use of set -x command inside scripts

frodo from middle ea (602941)

Wednesday March 10, @02:06PM (#8523604)

My 2-cent tip for budding shell script authors.

If the script is not working as you want, put a

set -x

on the first line and

set +x

on the last line.

You will see the exact execution path and variable expansion. Very neat for debugging.

[Dec 07, 2015] Debugging Shell Scripts in Linux by Pradeep Kumar

February 23, 2015

In most programming languages a debugger tool is available. A debugger is a tool that can run a program or script, enabling you to examine its internals as it runs. In shell scripting we don't have a dedicated debugger tool, but with the help of command-line options (-n, -v and -x) we can do the debugging.


[Jun 12, 2010] Writing Robust Bash Shell Scripts

Many people hack together shell scripts quickly to do simple tasks, but these soon take on a life of their own. Unfortunately shell scripts are full of subtle effects which result in scripts failing in unusual ways. It's possible to write scripts which minimize these problems. In this article, I explain several techniques for writing robust bash scripts.

Use set -u

How often have you written a script that broke because a variable wasn't set? I know I have, many times.

chroot=$1
...
rm -rf $chroot/usr/share/doc 

If you ran the script above and accidentally forgot to give a parameter, you would have just deleted all of your system documentation rather than making a smaller chroot. So what can you do about it? Fortunately bash provides you with set -u, which will exit your script if you try to use an uninitialised variable. You can also use the slightly more readable set -o nounset.
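A self-contained sketch of the effect; the variable name is arbitrary, and the subshell keeps the deliberate failure from aborting the demo itself:

```shell
#!/bin/bash
# Under set -u (nounset), expanding an unset variable is a fatal error.
if (set -u; : "$some_unset_variable") 2>/dev/null; then
    echo "expansion succeeded"
else
    echo "nounset caught the unset variable"
fi
```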

david% bash /tmp/shrink-chroot.sh
$chroot=
david% bash -u /tmp/shrink-chroot.sh
/tmp/shrink-chroot.sh: line 3: $1: unbound variable
david% 

Use set -e

Every script you write should include set -e at the top. This tells bash that it should exit the script if any statement returns a non-true return value. The benefit of using -e is that it prevents errors snowballing into serious issues when they could have been caught earlier. Again, for readability you may want to use set -o errexit.

Using -e gives you error checking for free. If you forget to check something, bash will do it for you. Unfortunately it means you can't check $?, because bash will never reach the checking code if the return value isn't zero. There are other constructs you could use:

command
if [ "$?" -ne 0 ]; then echo "command failed"; exit 1; fi 

could be replaced with

command || { echo "command failed"; exit 1; } 

or

if ! command; then echo "command failed"; exit 1; fi 

What if you have a command that returns non-zero or you are not interested in its return value? You can use command || true, or if you have a longer section of code, you can turn off the error checking, but I recommend you use this sparingly.

set +e
command1
command2
set -e 

On a slightly related note, by default bash takes the error status of the last item in a pipeline, which may not be what you want. For example, false | true is considered to have succeeded. If you would like the pipeline to fail on any failing stage, use set -o pipefail.
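A quick sketch of the difference:

```shell
#!/bin/bash
false | true
echo "default: exit status $?"        # 0 -- status of the last command only

set -o pipefail
false | true
echo "with pipefail: exit status $?"  # 1 -- a failing stage makes the pipeline fail
```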

Program defensively - expect the unexpected

Your script should take into account the unexpected, such as files missing or directories not being created. There are several things you can do to prevent errors in these situations. For example, when you create a directory, if the parent directory doesn't exist, mkdir will return an error. If you add the -p option, mkdir will create all the parent directories before creating the requested directory. Another example is rm. If you ask rm to delete a non-existent file, it will complain and your script will terminate. (You are using -e, right?) You can fix this by using -f, which silently continues if the file didn't exist.
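Both defensive idioms in one sketch (the paths are illustrative scratch locations):

```shell
#!/bin/bash
base=/tmp/defensive-demo.$$
mkdir -p "$base/a/b/c"       # creates every missing parent; no error if it exists
mkdir -p "$base/a/b/c"       # second call is a harmless no-op
rm -f "$base/no-such-file"   # no error even though the file doesn't exist
rm -rf "$base"               # clean up the demo directory
```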

Be prepared for spaces in filenames

Someone will always use spaces in filenames or command line arguments and you should keep this in mind when writing shell scripts. In particular you should use quotes around variables.

if [ $filename = "foo" ]; 

will fail if $filename contains a space. This can be fixed by using:

if [ "$filename" = "foo" ]; 

When using $@ variable, you should always quote it or any arguments containing a space will be expanded in to separate words.

david% foo() { for i in $@; do echo $i; done }; foo bar "baz quux"
bar
baz
quux
david% foo() { for i in "$@"; do echo $i; done }; foo bar "baz quux"
bar
baz quux 

I cannot think of a single place where you shouldn't use "$@" over $@, so when in doubt, use quotes.

If you use find and xargs together, you should use -print0 to separate filenames with a null character rather than new lines. You then need to use -0 with xargs.

david% touch "foo bar"
david% find | xargs ls
ls: ./foo: No such file or directory
ls: bar: No such file or directory
david% find -print0 | xargs -0 ls
./foo bar 

Setting traps

Often you write scripts which fail and leave the filesystem in an inconsistent state; things like lock files, temporary files or you've updated one file and there is an error updating the next file. It would be nice if you could fix these problems, either by deleting the lock files or by rolling back to a known good state when your script suffers a problem. Fortunately bash provides a way to run a command or function when it receives a unix signal using the trap command.

trap command signal [signal ...]

There are many signals you can trap (you can get a list of them by running kill -l), but for cleaning up after problems there are only 3 we are interested in: INT, TERM and EXIT. You can also reset traps back to their default by using - as the command.

Signal Description
INT Interrupt - This signal is sent when someone kills the script by pressing ctrl-c.
TERM Terminate - this signal is sent when someone sends the TERM signal using the kill command.
EXIT Exit - this is a pseudo-signal and is triggered when your script exits, either through reaching the end of the script, an exit command or by a command failing when using set -e.
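The EXIT pseudo-signal alone already covers the common case of cleaning up a temporary file no matter how the script terminates; a minimal sketch:

```shell
#!/bin/bash
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT   # fires on normal exit, set -e failure, or ctrl-c
echo "scratch data" > "$tmpfile"
# ... work with "$tmpfile" ...
# no explicit rm needed: the trap runs when the script exits
```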

Usually, when you write something using a lock file you would use something like:

if [ ! -e $lockfile ]; then
   touch $lockfile
   critical-section
   rm $lockfile
else
   echo "critical-section is already running"
fi

What happens if someone kills your script while critical-section is running? The lockfile will be left there and your script won't run again until it's been deleted. The fix is to use:

if [ ! -e $lockfile ]; then
   trap "rm -f $lockfile; exit" INT TERM EXIT
   touch $lockfile
   critical-section
   rm $lockfile
   trap - INT TERM EXIT
else
   echo "critical-section is already running"
fi

Now when you kill the script it will delete the lock file too. Notice that we explicitly exit from the script at the end of trap command, otherwise the script will resume from the point that the signal was received.

Race conditions

It's worth pointing out that there is a slight race condition in the above lock example between the time we test for the lockfile and the time we create it. A possible solution to this is to use IO redirection and bash's noclobber mode, which won't redirect to an existing file. We can use something similar to:

if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; 
then
   trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT

   critical-section
   
   rm -f "$lockfile"
   trap - INT TERM EXIT
else
   echo "Failed to acquire lockfile: $lockfile." 
   echo "Held by $(cat $lockfile)"
fi 

A slightly more complicated problem is where you need to update a bunch of files and need the script to fail gracefully if there is a problem in the middle of the update. You want to be certain that something either happened correctly or that it appears as though it didn't happen at all. Say you had a script to add users.

add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R

There could be problems if you ran out of diskspace or someone killed the process. In this case you'd want the user to not exist and all their files to be removed.

rollback() {
   del_from_passwd $user
   if [ -e /home/$user ]; then
      rm -rf /home/$user
   fi
   exit
}

trap rollback INT TERM EXIT
add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R
trap - INT TERM EXIT

We needed to remove the trap at the end or the rollback function would have been called as we exited, undoing all the script's hard work.

Be atomic

Sometimes you need to update a bunch of files in a directory at once; say you need to rewrite URLs from one host to another on your website. You might write:

for file in $(find /var/www -type f -name "*.html"); do
   perl -pi -e 's/www.example.net/www.example.com/' $file
done

Now if there is a problem with the script you could have half the site referring to www.example.com and the rest referring to www.example.net. You could fix this using a backup and a trap, but you also have the problem that the site will be inconsistent during the upgrade too.

The solution to this is to make the changes an (almost) atomic operation. To do this make a copy of the data, make the changes in the copy, move the original out of the way and then move the copy back into place. You need to make sure that both the old and the new directories are moved to locations that are on the same partition so you can take advantage of the property of most unix filesystems that moving directories is very fast, as they only have to update the inode for that directory.

cp -a /var/www /var/www-tmp
for file in $(find /var/www-tmp -type f -name "*.html"); do
   perl -pi -e 's/www.example.net/www.example.com/' $file
done
mv /var/www /var/www-old
mv /var/www-tmp /var/www 

This means that if there is a problem with the update, the live system is not affected. Also the time where it is affected is reduced to the time between the two mvs, which should be very minimal, as the filesystem just has to change two entries in the inodes rather than copying all the data around.

The disadvantage of this technique is that you need twice as much disk space, and any process that keeps files open for a long time will still have the old files open rather than the new ones, so you would have to restart those processes if this is the case. In our example this isn't a problem, as apache opens the files on every request. You can check for processes that still have old files open by using lsof. An advantage is that you now have a backup of the data as it was before your changes, in case you need to revert.
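The same rename trick works for a single file (the paths here are illustrative): write the new contents to a temporary file on the same filesystem, then mv it into place; rename() is atomic within one filesystem, so readers see either the old file or the new one, never a half-written version:

```shell
#!/bin/bash
target=/tmp/atomic-demo.conf
tmp=$(mktemp /tmp/atomic-demo.XXXXXX)    # same filesystem as the target
printf 'host=www.example.com\n' > "$tmp"
mv "$tmp" "$target"                      # atomic replacement of the target
rm -f "$target"                          # clean up the demo file
```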

The Various bash Prompts by Juliet Kemp

PS4 is the prompt shown when you set debug mode on a shell script using set -x at the top of the script. This echoes each line of the script to STDOUT before executing it. The default is + (the first character is repeated to indicate nesting depth). More usefully, you can set it to display the line number, with:
export PS4='$LINENO+ '
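For example (the traced statements are arbitrary; note the single quotes, so that LINENO expands at trace time rather than at assignment time):

```shell
#!/bin/bash
export PS4='+${LINENO}: '
set -x
x=1             # traced on stderr with its line number, e.g. "+4: x=1"
y=$((x + 1))
set +x
echo "y=$y"
```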

VIM can be used as bash IDE: Linux.com Turn Vim into a bash IDE

By Joe 'Zonker' Brockmeier on June 11, 2007 (9:01:00 PM)

By itself, Vim is one of the best editors for shell scripting. With a little tweaking, however, you can turn Vim into a full-fledged IDE for writing scripts.

You could do it yourself, or you can just install Fritz Mehner's Bash Support plugin.

To install Bash Support, download the zip archive, copy it to your ~/.vim directory, and unzip the archive. You'll also want to edit your ~/.vimrc to include a few personal details; open the file and add these three lines:
let g:BASH_AuthorName   = 'Your Name'
let g:BASH_Email        = 'my@email.com'
let g:BASH_Company      = 'Company Name'

These variables will be used to fill in some headers for your projects, as we'll see below.

The Bash Support plugin works in the Vim GUI (gVim) and text mode Vim. It's a little easier to use in the GUI, and Bash Support doesn't implement most of its menu functions in Vim's text mode, so you might want to stick with gVim when scripting.

When Bash Support is installed, gVim will include a new menu, appropriately titled Bash. This puts all of the Bash Support functions right at your fingertips (or mouse button, if you prefer). Let's walk through some of the features, and see how Bash Support can make Bash scripting a breeze.

Header and comments

If you believe in using extensive comments in your scripts, and I hope you do, you'll really enjoy using Bash Support. Bash Support provides a number of functions that make it easy to add comments to your bash scripts and programs automatically, or with just a mouse click or a few keystrokes.

When you start a non-trivial script that will be used and maintained by others, it's a good idea to include a header with basic information -- the name of the script, usage, description, notes, author information, copyright, and any other info that might be useful to the next person who has to maintain the script. Bash Support makes it a breeze to provide this information. Go to Bash -> Comments -> File Header, and gVim will insert a header like this in your script:

#!/bin/bash
#===============================================================================
#
#          FILE:  test.sh
#
#         USAGE:  ./test.sh
#
#   DESCRIPTION:
#
#       OPTIONS:  ---
#  REQUIREMENTS:  ---
#          BUGS:  ---
#         NOTES:  ---
#        AUTHOR:  Joe Brockmeier, jzb@zonker.net
#       COMPANY:  Dissociated Press
#       VERSION:  1.0
#       CREATED:  05/25/2007 10:31:01 PM MDT
#      REVISION:  ---
#===============================================================================

You'll need to fill in some of the information, but Bash Support grabs the author, company name, and email address from your ~/.vimrc, and fills in the file name and created date automatically. To make life even easier, if you start Vim or gVim with a new file that ends with an .sh extension, it will insert the header automatically.

As you're writing your script, you might want to add comment blocks for your functions as well. To do this, go to Bash -> Comment -> Function Description to insert a block of text like this:

#===  FUNCTION  ================================================================
#          NAME:
#   DESCRIPTION:
#    PARAMETERS:
#       RETURNS:
#===============================================================================

Just fill in the relevant information and carry on coding.

The Comment menu allows you to insert other types of comments, insert the current date and time, and turn selected code into a comment, and vice versa.

Statements and snippets

Let's say you want to add an if-else statement to your script. You could type out the statement, or you could just use Bash Support's handy selection of pre-made statements. Go to Bash -> Statements and you'll see a long list of pre-made statements that you can just plug in and fill in the blanks. For instance, if you want to add a while statement, you can go to Bash -> Statements -> while, and you'll get the following:

while _; do
done

The cursor will be positioned where the underscore (_) is above. All you need to do is add the test statement and the actual code you want to run in the while statement. Sure, it'd be nice if Bash Support could do all that too, but there's only so far an IDE can help you.

However, you can help yourself. When you do a lot of bash scripting, you might have functions or code snippets that you reuse in new scripts. Bash Support allows you to add your snippets and functions by highlighting the code you want to save, then going to Bash -> Statements -> write code snippet. When you want to grab a piece of prewritten code, go to Bash -> Statements -> read code snippet. Bash Support ships with a few included code fragments.

Another way to add snippets to the statement collection is to just place a text file with the snippet under the ~/.vim/bash-support/codesnippets directory.

Running and debugging scripts

Once you have a script ready to go, and it's testing and debugging time. You could exit Vim, make the script executable, run it and see if it has any bugs, and then go back to Vim to edit it, but that's tedious. Bash Support lets you stay in Vim while doing your testing.

When you're ready to make the script executable, just choose Bash -> Run -> make script executable. To save and run the script, press Ctrl-F9, or go to Bash -> Run -> save + run script.

Bash Support also lets you call the bash debugger (bashdb) directly from within Vim. On Ubuntu, it's not installed by default, but that's easily remedied with apt-get install bashdb. Once it's installed, you can debug the script you're working on with F9 or Bash -> Run -> start debugger.

If you want a "hard copy" -- a PostScript printout -- of your script, you can generate one by going to Bash -> Run -> hardcopy to FILENAME.ps. This is where Bash Support comes in handy for any type of file, not just bash scripts. You can use this function within any file to generate a PostScript printout.

Bash Support has several other functions to help run and test scripts from within Vim. One useful feature is syntax checking, which you can access with Alt-F9. If you have no syntax errors, you'll get a quick OK. If there are problems, you'll see a small window at the bottom of the Vim screen with a list of syntax errors. From that window you can highlight the error and press Enter, and you'll be taken to the line with the error.

Put away the reference book...

Don't you hate it when you need to include a regular expression or a test in a script, but can't quite remember the syntax? That's no problem when you're using Bash Support, because you have Regex and Tests menus with all you'll need. For example, if you need to verify that a file exists and is owned by the correct user ID (UID), go to Bash -> Tests -> file exists and is owned by the effective UID. Bash Support will insert the appropriate test ([ -O _]) with your cursor in the spot where you have to fill in the file name.

To build regular expressions quickly, go to the Bash menu, select Regex, then pick the appropriate expression from the list. It's fairly useful when you can't remember exactly how to express "zero or one" or other regular expressions.

Bash Support also includes menus for environment variables, bash builtins, shell options, and a lot more.

Hotkey support

Vim users can access many of Bash Support's features using hotkeys. While not as simple as clicking the menu, the hotkeys do follow a logical scheme that makes them easy to remember. For example, all of the comment functions are accessed with \c, so if you want to insert a file header, you use \ch; if you want a date inserted, type \cd; and for a line end comment, use \cl.

Statements can be accessed with \a. Use \ac for a case statement, \aie for an "if then else" statement, \af for a "for in..." statement, and so on. Note that the online docs are incorrect here, and indicate that statements begin with \s, but Bash Support ships with a PDF reference card (under .vim/bash-support/doc/bash-hot-keys.pdf) that gets it right.

Run commands are accessed with \r. For example, to save the file and run a script, use \rr; to make a script executable, use \re; and to start the debugger, type \rd. I won't try to detail all of the shortcuts, but you can pull up a reference using :help bashsupport-usage-vim when in Vim, or use the PDF. The full Bash Support reference is available within Vim by running :help bashsupport, or you can read it online.

Of course, we've covered only a small part of Bash Support's functionality. The next time you need to whip up a shell script, try it using Vim with Bash Support. This plugin makes scripting in bash a lot easier.


Bash is the only shell that has a debugger. The current version of the Bash Debugger is 3.1-0.09.

The Bash Debugger (bashdb) is a debugger for Bash scripts. The debugger command interface is modeled on the gdb command interface. Front-ends supporting bashdb include GNU-Emacs and ddd. In the past, the project has also been used to gather experimental features, such as a timestamped history file (now in Bash versions after 3.0).

This release contains bugfixes accumulated over the year and works on bash version 3.2 as well as version 3.1.

Bash debugging tips

There are quite a few things you can do with bash to help debug your scripts containing the rulesets. The first step in finding a bug is usually to locate the line on which the problem appears. This can be done in two different ways: either by using the bash -x flag, or by inserting echo statements to narrow down where the problem happens. Ideally, you would add echo statements like the following at regular intervals in the code:

  ...
  echo "Debugging message 1."
  ...
  echo "Debugging message 2."
  ...      

In my case, I generally use fairly throwaway messages; what matters is that each one contains something unique, so that I can find it with a simple grep or search in the script file. Now, if the error message shows up after "Debugging message 1." but before "Debugging message 2.", then we know that the erroneous line of code is somewhere between the two debugging messages. Bash has the peculiar (though not necessarily bad) habit of continuing to execute commands even if an earlier command failed. In netfilter, this can cause some very interesting problems. The idea of simply using echo statements to find the errors is extremely simple, but it is also very effective, since you can narrow the whole problem down to a single line of code and see directly what is wrong.

The second way to find the above problem is to use the -x flag to bash, as mentioned before. This can be a bit of a problem if your script is large and your console buffer isn't big enough. The -x flag does something quite simple: it tells the script to echo every single line of code it executes to the standard output of the shell (generally your console). What you do is change the normal first line of the script from this:

#!/bin/bash

Into the line below:

#!/bin/bash -x

As you will see, this changes your output from perhaps a couple of lines to copious amounts of data. The trace shows you every single command line that is executed, with the values of all variables filled in, so you don't have to guess what the code is doing. Simply put, each line that gets executed is echoed to your screen as well. One helpful detail is that every line bash outputs is prefixed by a + sign, which makes it a little easier to tell error and warning messages apart from the script's own trace output, rather than everything appearing as one big mess.
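One refinement worth knowing, beyond what the tutorial covers: bash builds the + prefix of each traced line from the standard PS4 variable, so you can put the source line number right into the trace. A small sketch (the hostname value is just sample data):

```shell
#!/bin/bash
#  PS4 is the prefix bash prints before each traced line (default "+ ").
#  Including $LINENO makes the -x output show where each command lives.
export PS4='+(line ${LINENO}): '
set -x
host=sto-as-101
echo "building rules for $host"
```

With this in place, a hung or failing command in a long ruleset script is immediately attributable to a line number instead of requiring you to count + lines by hand.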

The -x option is also very useful for debugging a couple of other rather common problems that you may run into with more complex rulesets. The first is finding out exactly what happens inside what you thought was a simple loop, such as a for, if, or while statement. Let's look at an example.

  #!/bin/bash 
  iptables="/sbin/iptables"
  $iptables -N output_int_iface
  cat /etc/configs/machines | while read host; do
    $iptables -N output-$host
    $iptables -A output_int_iface -p tcp -d $host -j output-$host

    cat /etc/configs/${host}/ports | while read row2; do
      $iptables -A output-$host -p tcp --dport $row2 -d $host -j ACCEPT
    done
  done

This set of rules may look simple enough, but we continue to run into a problem with it. We get the following error messages that we know come from the above code by using the simple echo debugging method.

work3:~# ./test.sh
Bad argument `output-'
Try `iptables -h' or 'iptables --help' for more information.
cat: /etc/configs//ports: No such file or directory

So we turn on the -x option and look at the output, which is shown below. As you can see, something very strange is going on: there are a couple of commands where the $host and $row2 variables are replaced by nothing. Looking closer, we see that only the last iteration of the loop causes the trouble. Either we have made a programming error, or there is something strange about the data. In this case it is a simple data error: the file contains an extra linebreak at the end, which causes the loop to iterate one last time when it shouldn't. Simply remove the trailing linebreak and the problem is solved. This may not be a very elegant solution, but for private work it should be enough. More robustly, you could add code that checks that there is actually some data in the $host and $row2 variables.

work3:~# ./test.sh
+ iptables=/sbin/iptables
+ /sbin/iptables -N output_int_iface
+ cat /etc/configs/machines
+ read host
+ /sbin/iptables -N output-sto-as-101
+ /sbin/iptables -A output_int_iface -p tcp -d sto-as-101 -j output-sto-as-101
+ cat /etc/configs/sto-as-101/ports
+ read row2
+ /sbin/iptables -A output-sto-as-101 -p tcp --dport 21 -d sto-as-101 -j ACCEPT
+ read row2
+ /sbin/iptables -A output-sto-as-101 -p tcp --dport 22 -d sto-as-101 -j ACCEPT
+ read row2
+ /sbin/iptables -A output-sto-as-101 -p tcp --dport 23 -d sto-as-101 -j ACCEPT
+ read row2
+ read host
+ /sbin/iptables -N output-sto-as-102
+ /sbin/iptables -A output_int_iface -p tcp -d sto-as-102 -j output-sto-as-102
+ cat /etc/configs/sto-as-102/ports
+ read row2
+ /sbin/iptables -A output-sto-as-102 -p tcp --dport 21 -d sto-as-102 -j ACCEPT
+ read row2
+ /sbin/iptables -A output-sto-as-102 -p tcp --dport 22 -d sto-as-102 -j ACCEPT
+ read row2
+ /sbin/iptables -A output-sto-as-102 -p tcp --dport 23 -d sto-as-102 -j ACCEPT
+ read row2
+ read host
+ /sbin/iptables -N output-sto-as-103
+ /sbin/iptables -A output_int_iface -p tcp -d sto-as-103 -j output-sto-as-103
+ cat /etc/configs/sto-as-103/ports
+ read row2
+ /sbin/iptables -A output-sto-as-103 -p tcp --dport 21 -d sto-as-103 -j ACCEPT
+ read row2
+ /sbin/iptables -A output-sto-as-103 -p tcp --dport 22 -d sto-as-103 -j ACCEPT
+ read row2
+ /sbin/iptables -A output-sto-as-103 -p tcp --dport 23 -d sto-as-103 -j ACCEPT
+ read row2
+ read host
+ /sbin/iptables -N output-
+ /sbin/iptables -A output_int_iface -p tcp -d -j output-
Bad argument `output-'
Try `iptables -h' or 'iptables --help' for more information.
+ cat /etc/configs//ports
cat: /etc/configs//ports: No such file or directory
+ read row2
+ read host
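One way to implement that last suggestion, guarding against empty $host and $row2 values, is to skip a loop iteration as soon as read returns an empty line; a trailing linebreak then produces no extra iteration. A sketch, keeping the paths and chain names from the example above (they are assumed to exist on the firewall host):

```shell
#!/bin/bash
iptables="/sbin/iptables"
$iptables -N output_int_iface
cat /etc/configs/machines | while read host; do
  if [ -z "$host" ]; then   # empty line (e.g. a trailing linebreak): skip it
    continue
  fi
  $iptables -N output-$host
  $iptables -A output_int_iface -p tcp -d $host -j output-$host
  # ... inner loop over ports as before, with the same guard on $row2 ...
done
```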

The third and final problem that can be partially solved with the -x option occurs when you execute the firewall script via SSH and the console hangs in the middle of the run: the console won't come back, and you can't connect via SSH again. In 99.9% of cases this means there is some kind of problem inside the script with a couple of the rules. By turning on the -x option you will, hopefully at least, see exactly at which line the script locks up. There are a couple of circumstances where this is not true, unfortunately. For example, the script may set up a rule that blocks incoming traffic, but since the ssh/telnet server sends its echo first as outgoing traffic, netfilter will remember the connection and hence allow the incoming reply anyway, if an earlier rule handles connection states.

As you can see, debugging a ruleset to its full extent can become quite complex in the end, but it is not impossible at all. You may also have noticed, if you have worked remotely on your firewalls via SSH, that the firewall may hang when you load a bad ruleset. There is one more thing that can be done to save the day in these circumstances: cron. For example, say you are working on a firewall 50 kilometers away; you add some rules, delete some others, and then delete and insert the new updated ruleset. The firewall locks dead, and you can't reach it. The only way of fixing this is to go to the firewall's physical location and fix the problem from there, unless you have taken precautions, that is!
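The precaution being hinted at is a cron entry that periodically restores a known-good ruleset while you experiment: if the new rules lock you out, the firewall rescues itself within a few minutes, and once the new rules are confirmed to work, you remove the entry. A sketch of such a crontab line (the script path is hypothetical):

```shell
# /etc/crontab safety net while editing firewall rules remotely.
# Remove this line once the new ruleset is confirmed to work.
# /etc/firewall/known-good-rules.sh is a hypothetical script that
# reloads the last ruleset you could still log in under.
*/5 * * * * root /etc/firewall/known-good-rules.sh
```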

Chapter 29. Debugging

The Bash shell contains no debugger, nor even any debugging-specific commands or constructs. [1] Syntax errors or outright typos in the script generate cryptic error messages that are often of no help in debugging a non-functional script.

Example 29-1. A buggy script
#!/bin/bash
# ex74.sh

# This is a buggy script.
# Where, oh where is the error?

a=37

if [$a -gt 27 ]
then
  echo $a
fi  

exit 0

Output from script:

./ex74.sh: [37: command not found
What's wrong with the above script (hint: after the if)?


Example 29-2. Missing keyword

#!/bin/bash
# missing-keyword.sh: What error message will this generate?

for a in 1 2 3
do
  echo "$a"
# done     # Required keyword 'done' commented out in line 7.

exit 0  

Output from script:

missing-keyword.sh: line 10: syntax error: unexpected end of file
	
Note that the error message does not necessarily reference the line in which the error occurs, but the line where the Bash interpreter finally becomes aware of the error.

Error messages may disregard comment lines in a script when reporting the line number of a syntax error.

What if the script executes, but does not work as expected? This is the all too familiar logic error.

Example 29-3. test24, another buggy script
#!/bin/bash

#  This script is supposed to delete all filenames in current directory
#+ containing embedded spaces.
#  It doesn't work.
#  Why not?


badname=`ls | grep ' '`

# Try this:
# echo "$badname"

rm "$badname"

exit 0

Try to find out what's wrong with Example 29-3 by uncommenting the echo "$badname" line. Echo statements are useful for seeing whether what you expect is actually what you get.

In this particular case, rm "$badname" will not give the desired results because $badname should not be quoted. Placing it in quotes ensures that rm has only one argument (it will match only one filename). A partial fix is to remove the quotes from $badname and to reset $IFS to contain only a newline, IFS=$'\n'. However, there are simpler ways of going about it.

# Correct methods of deleting filenames containing spaces.
rm *\ *
rm *" "*
rm *' '*
# Thank you. S.C.
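For completeness, the partial fix mentioned above would look roughly like the following sketch. It still fails on filenames that themselves contain newlines, which is one reason the glob patterns above are preferable:

```shell
#!/bin/bash
#  Partial fix for test24: with $IFS set to a newline only,
#+ spaces inside a filename no longer split it into separate words.
IFS=$'\n'
badname=`ls | grep ' '`
rm $badname     # intentionally unquoted: one argument per line
```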

Summarizing the symptoms of a buggy script,

  1. It bombs with a "syntax error" message, or
  2. It runs, but does not work as expected (logic error).
  3. It runs, works as expected, but has nasty side effects (logic bomb).

Tools for debugging non-working scripts include

  1. echo statements at critical points in the script to trace the variables, and otherwise give a snapshot of what is going on.
    Tip Even better is an echo that echoes only when debug is on.
    ### debecho (debug-echo), by Stefano Falsetto ###
    ### Will echo passed parameters only if DEBUG is set to a value. ###
    debecho () {
      if [ ! -z "$DEBUG" ]; then
         echo "$1" >&2
         #         ^^^ to stderr
      fi
    }
    
    DEBUG=on
    Whatever=whatnot
    debecho $Whatever   # whatnot
    
    DEBUG=
    Whatever=notwhat
    debecho $Whatever   # (Will not echo.)

  2. using the tee filter to check processes or data flows at critical points.
  3. setting option flags -n -v -x

    sh -n scriptname checks for syntax errors without actually running the script. This is the equivalent of inserting set -n or set -o noexec into the script. Note that certain types of syntax errors can slip past this check.

    sh -v scriptname echoes each command before executing it. This is the equivalent of inserting set -v or set -o verbose in the script.

    The -n and -v flags work well together. sh -nv scriptname gives a verbose syntax check.

sh -x scriptname echoes the result of each command, but in an abbreviated manner. This is the equivalent of inserting set -x or set -o xtrace in the script.

    Inserting set -u or set -o nounset in the script runs it, but gives an unbound variable error message at each attempt to use an undeclared variable.

  4. Using an "assert" function to test a variable or condition at critical points in a script. (This is an idea borrowed from C.)

    Example 29-4. Testing a condition with an "assert"

    #!/bin/bash
    # assert.sh
    
    assert ()                 #  If condition false,
    {                         #+ exit from script with error message.
      E_PARAM_ERR=98
      E_ASSERT_FAILED=99
    
    
      if [ -z "$2" ]          # Not enough parameters passed.
      then
        return $E_PARAM_ERR   # No damage done.
      fi
    
      lineno=$2
    
      if [ ! $1 ] 
      then
        echo "Assertion failed:  \"$1\""
        echo "File \"$0\", line $lineno"
        exit $E_ASSERT_FAILED
      # else
      #   return
      #   and continue executing script.
      fi  
    }    
    
    
    a=5
    b=4
    condition="$a -lt $b"     # Error message and exit from script.
                              #  Try setting "condition" to something else,
                              #+ and see what happens.
    
    assert "$condition" $LINENO
    # The remainder of the script executes only if the "assert" does not fail.
    
    
    # Some commands.
    # ...
    echo "This statement echoes only if the \"assert\" does not fail."
    # ...
    # Some more commands.
    
    exit 0
  5. Using the $LINENO variable and the caller builtin.
  6. trapping at exit.

    The exit command in a script triggers a signal 0, terminating the process, that is, the script itself. [2] It is often useful to trap the exit, forcing a "printout" of variables, for example. The trap must be the first command in the script.

Trapping signals
trap
Specifies an action on receipt of a signal; also useful for debugging.
Note A signal is simply a message sent to a process, either by the kernel or another process, telling it to take some specified action (usually to terminate). For example, hitting Control-C sends a user interrupt, an INT signal, to a running program.
trap '' 2
# Ignore interrupt 2 (Control-C), with no action specified. 

trap 'echo "Control-C disabled."' 2
# Message when Control-C pressed.

Example 29-5. Trapping at exit
#!/bin/bash
# Hunting variables with a trap.

trap 'echo Variable Listing --- a = $a  b = $b' EXIT
#  EXIT is the name of the signal generated upon exit from a script.
#
#  The command specified by the "trap" doesn't execute until
#+ the appropriate signal is sent.

echo "This prints before the \"trap\" --"
echo "even though the script sees the \"trap\" first."
echo

a=39

b=36

exit 0
#  Note that commenting out the 'exit' command makes no difference,
#+ since the script exits in any case after running out of commands.
Example 29-6. Cleaning up after Control-C
#!/bin/bash
# logon.sh: A quick 'n dirty script to check whether you are on-line yet.

umask 177  # Make sure temp files are not world readable.


TRUE=1
LOGFILE=/var/log/messages
#  Note that $LOGFILE must be readable
#+ (as root, chmod 644 /var/log/messages).
TEMPFILE=temp.$$
#  Create a "unique" temp file name, using process id of the script.
#     Using 'mktemp' is an alternative.
#     For example:
#     TEMPFILE=`mktemp temp.XXXXXX`
KEYWORD=address
#  At logon, the line "remote IP address xxx.xxx.xxx.xxx"
#                      appended to /var/log/messages.
ONLINE=22
USER_INTERRUPT=13
CHECK_LINES=100
#  How many lines in log file to check.

trap 'rm -f $TEMPFILE; exit $USER_INTERRUPT' TERM INT
#  Cleans up the temp file if script interrupted by control-c.

echo

while [ $TRUE ]  #Endless loop.
do
  tail -$CHECK_LINES $LOGFILE> $TEMPFILE
  #  Saves last 100 lines of system log file as temp file.
  #  Necessary, since newer kernels generate many log messages at log on.
  search=`grep $KEYWORD $TEMPFILE`
  #  Checks for presence of the "IP address" phrase,
  #+ indicating a successful logon.

  if [ ! -z "$search" ] #  Quotes necessary because of possible spaces.
  then
     echo "On-line"
     rm -f $TEMPFILE    #  Clean up temp file.
     exit $ONLINE
  else
     echo -n "."        #  The -n option to echo suppresses newline,
                        #+ so you get continuous rows of dots.
  fi

  sleep 1  
done  


#  Note: if you change the KEYWORD variable to "Exit",
#+ this script can be used while on-line
#+ to check for an unexpected logoff.

# Exercise: Change the script, per the above note,
#           and prettify it.

exit 0


# Nick Drage suggests an alternate method:

while true
  do ifconfig ppp0 | grep UP 1> /dev/null && echo "connected" && exit 0
  echo -n "."   # Prints dots (.....) until connected.
  sleep 2
done

# Problem: Hitting Control-C to terminate this process may be insufficient.
#+         (Dots may keep on echoing.)
# Exercise: Fix this.



# Stephane Chazelas has yet another alternative:

CHECK_INTERVAL=1

while ! tail -1 "$LOGFILE" | grep -q "$KEYWORD"
do echo -n .
   sleep $CHECK_INTERVAL
done
echo "On-line"

# Exercise: Discuss the relative strengths and weaknesses
#           of each of these various approaches.
Note The DEBUG argument to trap causes a specified action to execute after every command in a script. This permits tracing variables, for example.

Example 29-7. Tracing a variable

#!/bin/bash

trap 'echo "VARIABLE-TRACE> \$variable = \"$variable\""' DEBUG
# Echoes the value of $variable after every command.

variable=29

echo "Just initialized \"\$variable\" to $variable."

let "variable *= 3"
echo "Just multiplied \"\$variable\" by 3."

exit $?

#  The "trap 'command1 . . . command2 . . .' DEBUG" construct is
#+ more appropriate in the context of a complex script,
#+ where placing multiple "echo $variable" statements might be
#+ clumsy and time-consuming.

# Thanks, Stephane Chazelas for the pointer.


Output of script:

VARIABLE-TRACE> $variable = ""
VARIABLE-TRACE> $variable = "29"
Just initialized "$variable" to 29.
VARIABLE-TRACE> $variable = "29"
VARIABLE-TRACE> $variable = "87"
Just multiplied "$variable" by 3.
VARIABLE-TRACE> $variable = "87"

Of course, the trap command has other uses aside from debugging.

Example 29-8. Running multiple processes (on an SMP box)
#!/bin/bash
# parent.sh
# Running multiple processes on an SMP box.
# Author: Tedman Eng

#  This is the first of two scripts,
#+ both of which must be present in the current working directory.




LIMIT=$1         # Total number of process to start
NUMPROC=4        # Number of concurrent threads (forks?)
PROCID=1         # Starting Process ID
echo "My PID is $$"

function start_thread() {
        if [ $PROCID -le $LIMIT ] ; then
                ./child.sh $PROCID&
                let "PROCID++"
        else
           echo "Limit reached."
           wait
           exit
        fi
}

while [ "$NUMPROC" -gt 0 ]; do
        start_thread;
        let "NUMPROC--"
done


while true
do

trap "start_thread" SIGRTMIN

done

exit 0



# ======== Second script follows ========


#!/bin/bash
# child.sh
# Running multiple processes on an SMP box.
# This script is called by parent.sh.
# Author: Tedman Eng

temp=$RANDOM
index=$1
shift
let "temp %= 5"
let "temp += 4"
echo "Starting $index  Time:$temp" "$@"
sleep ${temp}
echo "Ending $index"
kill -s SIGRTMIN $PPID

exit 0


# ======================= SCRIPT AUTHOR'S NOTES ======================= #
#  It's not completely bug free.
#  I ran it with limit = 500 and after the first few hundred iterations,
#+ one of the concurrent threads disappeared!
#  Not sure if this is collisions from trap signals or something else.
#  Once the trap is received, there's a brief moment while executing the
#+ trap handler but before the next trap is set.  During this time, it may
#+ be possible to miss a trap signal, thus miss spawning a child process.

#  No doubt someone may spot the bug and will be writing 
#+ . . . in the future.



# ===================================================================== #



# ----------------------------------------------------------------------#



#################################################################
# The following is the original script written by Vernia Damiano.
# Unfortunately, it doesn't work properly.
#################################################################

#!/bin/bash

#  Must call script with at least one integer parameter
#+ (number of concurrent processes).
#  All other parameters are passed through to the processes started.


INDICE=8        # Total number of process to start
TEMPO=5         # Maximum sleep time per process
E_BADARGS=65    # No arg(s) passed to script.

if [ $# -eq 0 ] # Check for at least one argument passed to script.
then
  echo "Usage: `basename $0` number_of_processes [passed params]"
  exit $E_BADARGS
fi

NUMPROC=$1              # Number of concurrent process
shift
PARAMETRI=( "$@" )      # Parameters of each process

function avvia() {
         local temp
         local index
         temp=$RANDOM
         index=$1
         shift
         let "temp %= $TEMPO"
         let "temp += 1"
         echo "Starting $index Time:$temp" "$@"
         sleep ${temp}
         echo "Ending $index"
         kill -s SIGRTMIN $$
}

function parti() {
         if [ $INDICE -gt 0 ] ; then
              avvia $INDICE "${PARAMETRI[@]}" &
                let "INDICE--"
         else
                trap : SIGRTMIN
         fi
}

trap parti SIGRTMIN

while [ "$NUMPROC" -gt 0 ]; do
         parti;
         let "NUMPROC--"
done

wait
trap - SIGRTMIN

exit $?

: <<SCRIPT_AUTHOR_COMMENTS
I had the need to run a program, with specified options, on a number of
different files, using a SMP machine. So I thought [I'd] keep running
a specified number of processes and start a new one each time . . . one
of these terminates.

The "wait" instruction does not help, since it waits for a given process
or *all* process started in background. So I wrote [this] bash script
that can do the job, using the "trap" instruction.
  --Vernia Damiano
SCRIPT_AUTHOR_COMMENTS
Note trap '' SIGNAL (two adjacent apostrophes) disables SIGNAL for the remainder of the script. trap SIGNAL restores the functioning of SIGNAL once more. This is useful to protect a critical portion of a script from an undesirable interrupt.

	trap '' 2  # Signal 2 is Control-C, now disabled.
	command
	command
	command
	trap 2     # Reenables Control-C
	

Version 3 of Bash adds the following special variables for use by the debugger.

  1. $BASH_ARGC
  2. $BASH_ARGV
  3. $BASH_COMMAND
  4. $BASH_EXECUTION_STRING
  5. $BASH_LINENO
  6. $BASH_SOURCE
  7. $BASH_SUBSHELL

[1] Rocky Bernstein's Bash debugger partially makes up for this lack.
[2] By convention, signal 0 is assigned to exit.

NEWS CONTENTS

Old News ;-)

BashPitfalls - Greg's Wiki

This page shows common errors that Bash programmers make. The following examples are all flawed in some way:

  1. for i in `ls *.mp3`
  2. cp $file $target
  3. [ $foo = "bar" ]
  4. [ "$foo" = bar && "$bar" = foo ]
  5. [[ $foo > 7 ]]
  6. grep foo bar | while read line; do ((count++)); done
  7. if [grep foo myfile]
  8. if [bar="$foo"]
  9. cat file | sed s/foo/bar/ > file
  10. echo $foo
  11. $foo=bar
  12. foo = bar
  13. echo <<EOF
  14. su -c 'some command'
  15. cd /foo; bar
  16. [ bar == "$foo" ]

1. for i in `ls *.mp3`

One of the most common mistakes BASH programmers make is to write a loop like this:

This breaks when the user has a file with a space in its name. Why? Because the output of the ls *.mp3 command substitution undergoes word splitting. Assuming we have a file named 01 - Don't Eat the Yellow Snow.mp3 in the current directory, the for loop will iterate over each word in the resulting file name (namely: "01", "-", "Don't", "Eat", and so on).

You can't double-quote the substitution either:

This causes the entire output of the ls command to be treated as a single word, and instead of iterating over each file name in the output list, the loop will only execute once, with i taking on a value which is the concatenation of all the file names (with spaces between them).

In addition to this, the use of ls is just plain unnecessary. It's an external command, which simply isn't needed to do the job. So, what's the right way to do it?

Let Bash expand the list of filenames for you. The expansion will not be subject to word splitting. Each filename that's matched by the *.mp3 pattern will be treated as a separate word, and the loop will iterate once per file name.
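The code block itself did not survive in this copy, so here is a reconstructed sketch of the correct loop (some_command stands for whatever you want to run per file):

```shell
for i in *.mp3; do
  some_command "$i"     # note the double quotes around "$i"
done
```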

For more details on this question, please see Bash FAQ #20.

The astute reader will notice the double quotes in the second line. This leads to our second common pitfall.

2. cp $file $target

What's wrong with the command shown above? Well, nothing, if you happen to know in advance that $file and $target have no white space in them.

But if you don't know that in advance, or if you're paranoid, or if you're just trying to develop good habits, then you should quote your variable references to avoid having them undergo word splitting.

Without the double quotes, you'll get a command like cp 01 - Don't Eat the Yellow Snow.mp3 /mnt/usb and then you'll get errors like cp: cannot stat `01': No such file or directory. With the double quotes, all's well, unless "$file" happens to start with a -, in which case cp thinks you're trying to feed it command line options. This isn't really a shell problem, but it often occurs with shell variables.

One solution is to insert -- between cp and its arguments. That tells it to stop scanning for options, and all is well:
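That is, roughly:

```shell
cp -- "$file" "$target"   # everything after -- is a filename, never an option
```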

(There may be some incredibly ancient systems in existence, in which the -- trick doesn't work. For those, read on....)

Another is to ensure that your filenames always begin with a directory (including . for the current directory, if appropriate). For example, if we're in some sort of loop:
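A sketch of that approach (the destination directory here is hypothetical):

```shell
target=/mnt/usb              # hypothetical destination directory
for i in ./*.mp3; do         # every match begins with ./
  cp "$i" "$target"
done
```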

In this case, even if we have a file whose name begins with -, the glob will ensure that the variable always contains something like ./-foo.mp3, which is perfectly safe as far as cp is concerned.

3. [ $foo = "bar" ]

This is very similar to the first part of the previous pitfall, but I repeat it because it's so important. In the example above, the quotes are in the wrong place. You do not need to quote a string literal in bash. But you should quote your variables if you aren't sure whether they could contain white space.

This breaks for two reasons:

A more correct way to write this would be:

But this still breaks if $foo begins with a -.

In bash, the [[ keyword, which embraces and extends the old test command (also known as [), can be used to solve the problem:
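A reconstructed sketch of the [[ version:

```shell
foo=bar
if [[ $foo = bar ]]; then   # no word splitting inside [[ ]]
  echo "matched"
fi
```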

You don't need to quote variable references within [[ ]] because they don't undergo word splitting, and even blank variables will be handled correctly. On the other hand, quoting them won't hurt anything either.

You may have seen code like this:
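That is, the classic x-prefix idiom:

```shell
foo=bar
if [ x"$foo" = xbar ]; then   # the x prefix keeps [ from seeing a leading -
  echo "matched"
fi
```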

The x"$foo" hack is required for code that must run on ancient shells which lack [[, because if $foo begins with a -, then the [ command may become confused. But you'll get really tired of having to explain that to everyone else.

If the right hand side is a constant, you could just do it this way:

[ doesn't care whether the token on the right hand side of the = begins with a -. It just uses it literally.

4. [ "$foo" = bar && "$bar" = foo ]

You can't use && inside the old test (or [) command. The Bash parser sees && outside of [[ ]] or (( )) and breaks your command into two commands, before and after the &&. Use one of these instead:
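Reconstructed sketches of the two working forms (sample values supplied for illustration):

```shell
foo=bar
bar=foo

# Two separate [ commands joined by the shell's own &&:
if [ bar = "$foo" ] && [ foo = "$bar" ]; then
  echo "both matched"
fi

# Or the bash [[ keyword, which does understand &&:
if [[ $foo = bar && $bar = foo ]]; then
  echo "both matched"
fi
```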

(Note that we reversed the constant and the variable inside [ for the reasons discussed in the previous pitfall.)

5. [[ $foo > 7 ]]

The [[ ]] operator is not used for an ArithmeticExpression. It's used for strings only. If you want to do a numeric comparison using > or <, you must use (( )) instead:
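A sketch of the arithmetic form (sample value supplied):

```shell
foo=8
if (( foo > 7 )); then        # numeric comparison inside (( ))
  echo "foo is greater than 7"
fi
```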

If you use the > operator inside [[ ]], it's treated as a string comparison, not an integer comparison. This may work sometimes, but it will fail when you least expect it. If you use > inside [ ], it's even worse: it's an output redirection. You'll get a file named 7 in your directory, and the test will succeed as long as $foo is not empty.

If you're developing for a BourneShell instead of bash, this is the historically correct version:
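That is, the test/-gt form (reconstructed sketch):

```shell
foo=8
if [ "$foo" -gt 7 ]; then     # -gt: numeric greater-than in the old test command
  echo "foo is greater than 7"
fi
```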

Note that the test ... -gt command will fail in interesting ways if $foo is not an integer. Therefore, there's not much point in quoting it properly -- if it's got white space, or is empty, or is anything other than an integer, we're probably going to crash anyway. You'll need to sanitize your input aggressively.

6. grep foo bar | while read line; do ((count++)); done

The code above looks OK at first glance, doesn't it? Sure, it's just a poor implementation of grep -c, but it's intended as a simplistic example. So why doesn't it work? The variable count will be unchanged after the loop terminates, much to the surprise of Bash developers everywhere.

The reason this code does not work as expected is because each command in a pipeline is executed in a separate subshell. The changes to the count variable within the loop's subshell aren't reflected within the parent shell (the script in which the code occurs).

For solutions to this, please see Bash FAQ #24.

7. if [grep foo myfile]

Many people are confused by the common practice of putting the [ command after an if. They see this and convince themselves that the [ is part of the if statement's syntax, just like parentheses are used in C's if statement.

However, that is not the case! [ is a command, not a syntax marker for the if statement. It's equivalent to the test command, except for the requirement that the final argument must be a ].

The syntax of the if statement is as follows:

There may be zero or more optional elif sections, and one optional else section. Note: there is no [ in the syntax!

Once again, [ is a command. It takes arguments, and it produces an exit code. It may produce error messages. It does not, however, produce any standard output.

The if statement evaluates the first set of COMMANDS given to it (everything up to then, where then appears as the first word of a new command). The exit code of the last command in that set determines whether the if statement executes the COMMANDS in the then section, or moves on.

If you want to make a decision based on the output of a grep command, you do not need to enclose it in parentheses, brackets, backticks, or any other syntax mark-up! Just use grep as the COMMANDS after the if, like this:
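A reconstructed sketch, assuming a file named myfile as in the pitfall's title:

```shell
if grep foo myfile > /dev/null; then    # discard the matching lines themselves
  echo "myfile contains foo"
fi
```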

Note that we discard the standard output of the grep (which would normally include the matching line, if any), because we don't want to see it -- we just want to know whether it's there. If the grep matches a line from myfile, then the exit code will be 0 (true), and the then clause will be executed. Otherwise, if there is no matching line, the grep should return a non-zero exit code.

8. if [bar="$foo"]

As with the previous example, [ is a command. Just like with any other command, Bash expects the command to be followed by a space, then the first argument, then another space, etc. You can't just run things all together without putting the spaces in! Here is the correct way:
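That is, with whitespace between every argument (sample value supplied):

```shell
foo=bar
if [ bar = "$foo" ]; then   # bar, =, "$foo", and ] are four separate arguments
  echo "matched"
fi
```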

Each of bar, =, "$foo" (after substitution, but without word splitting) and ] is a separate argument to the [ command. There must be whitespace between each pair of arguments, so the shell knows where each argument begins and ends.

9. cat file | sed s/foo/bar/ > file

You cannot read from a file and write to it in the same pipeline. Depending on what your pipeline does, the file may be clobbered (to 0 bytes, or possibly to a number of bytes equal to the size of your operating system's pipeline buffer), or it may grow until it fills the available disk space, or reaches your operating system's file size limitation, or your quota, etc.

If you want to make a change to a file, other than appending to the end of it, there must be a temporary file created at some point. For example, the following is completely portable:
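For instance, the classic temp-file-and-rename idiom (a sketch using a hypothetical myfile):

```shell
printf 'foo was here\n' > myfile    # sample input (hypothetical)

# Write the transformed output to a temp file, then rename it over the original.
sed 's/foo/bar/g' myfile > tmpfile && mv tmpfile myfile
```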

The following will only work on GNU sed 4.x:
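A sketch of the GNU sed 4.x form, again on a hypothetical myfile:

```shell
printf 'foo was here\n' > myfile    # sample input (hypothetical)

sed -i 's/foo/bar/g' myfile         # -i edits "in place" (GNU sed 4.x)
```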

Note that this also creates a temporary file, and does the same sort of renaming trickery -- it just handles it transparently.

And the following equivalent command requires perl 5.x (which is probably more widely available than GNU sed 4.x):

For more details, please see Bash FAQ #21.

10. echo $foo

This relatively innocent-looking command causes massive confusion. Because the $foo isn't quoted, it will not only be subject to word splitting, but also file globbing. This misleads Bash programmers into thinking their variables contain the wrong values, when in fact the variables are OK -- it's just the echo that's messing up their view of what's happening.

Suppose the variable holds a message containing a glob character, such as *.zip. The unquoted echo splits the message into words, and any globs are expanded against the files in the current directory. What will your users think when they see the mangled message?

To demonstrate:
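(A sketch; the message text and the .zip file names are hypothetical, created here so the glob has something to match.)

```shell
# A message that happens to contain a glob character (hypothetical):
msg="Please enter a file name of the form *.zip"

touch freenode.zip lugradio.zip   # files for the glob to match (hypothetical)

echo $msg      # unquoted: word-split, and *.zip expands to the .zip files above
echo "$msg"    # quoted: prints the message exactly as stored
```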

11. $foo=bar

No, you don't assign a variable by putting a $ in front of the variable name. This isn't perl.

12. foo = bar

No, you can't put spaces around the = when assigning to a variable. This isn't C. When you write foo = bar the shell splits it into three words. The first word, foo, is taken as the command name. The second and third become the arguments to that command.

Likewise, the following are also wrong:
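A sketch of the wrong variants and the correct form:

```shell
# Wrong (each is parsed as a command invocation, not an assignment):
#   foo = bar     # runs the command foo with arguments = and bar
#   foo =bar      # runs the command foo with the argument =bar
#   foo= bar      # runs the command bar with foo empty in its environment

# Correct: no whitespace anywhere around the = sign.
foo=bar
echo "$foo"
```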

13. echo <<EOF

A here document is a useful tool for embedding large blocks of textual data in a script. It causes a redirection of the lines of text in the script to the standard input of a command. Unfortunately, echo is not a command which reads from stdin.
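Use a command that does read stdin, such as cat:

```shell
# cat reads standard input, so it can print a here document.
cat <<EOF
Hello, world.
This block of text is fed to cat's standard input.
EOF
```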

14. su -c 'some command'

This syntax is almost correct. The problem is, su takes a -c argument, but it's not the one you want. You want to pass -c 'some command' to a shell, which means you need a username before the -c.

su assumes a username of root when you omit one, but this falls on its face when you want to pass a command to the shell afterward. You must supply the username in this case.
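A sketch of the difference, shown as comments since su needs a password and root privileges to actually run:

```shell
# Wrong: su consumes the -c itself; with no username given, the
# command is not passed to a shell the way you intended.
#   su -c 'some command'

# Correct: name the user explicitly, then pass the command.
#   su root -c 'some command'
```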

15. cd /foo; bar

If you don't check for errors from the cd command, you might end up executing bar in the wrong place. This could be a major disaster, if for example bar happens to be rm *.

You must always check for errors from a cd command. The simplest way to do that is:
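A sketch: && runs the second command only when cd reports success (using /tmp, with echo standing in for bar):

```shell
# The command after && runs only if the cd succeeded.
cd /tmp && echo "now working in $PWD"

# With a bad directory, the second command is skipped entirely.
cd /nonexistent-dir 2>/dev/null && echo "never printed" || echo "cd failed; nothing was run there"
```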

If there's more than just one command after the cd, you might prefer this:
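A sketch with an early exit (again using /tmp as a stand-in for /foo):

```shell
cd /tmp || exit 1          # abort the script immediately if the cd fails

echo "first command, safely in $PWD"
echo "second command, still in $PWD"
```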

cd will report the failure to change directories, with a stderr message such as "bash: cd: /foo: No such file or directory". If you want to add your own message in stdout, however, you could use command grouping:
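A sketch of the grouped form, with /tmp standing in for /foo; note the space after "{" and the ";" before "}":

```shell
cd /tmp || { echo "could not cd to /tmp, giving up"; exit 1; }

echo "safe to run commands in $PWD"
```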

Note there's a required space between "{" and "echo".

Some people also like to enable set -e to make their scripts abort on any command that returns non-zero, but this can be rather tricky to use correctly (since many common commands may return a non-zero for a warning condition, which you may not want to treat as fatal).

By the way, if you're changing directories a lot in a Bash script, be sure to read the Bash manual page on pushd, popd, and dirs. Perhaps all that code you wrote to manage cd's and pwd's is completely unnecessary.

Speaking of which, compare this:
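(A sketch of the manual-cd version; the sandbox directories and the echo standing in for real work are hypothetical.)

```shell
# Set up a disposable sandbox for the demonstration.
mkdir -p demo/a demo/b
cd demo

# Manual version: every iteration must remember to cd back up.
for dir in */; do
    cd "$dir"
    echo "working in $PWD"       # stand-in for the real commands
    cd ..
done
```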

With this:
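(A sketch of the subshell version, with the same kind of hypothetical sandbox.)

```shell
# Set up a disposable sandbox for the demonstration.
mkdir -p demo2/a demo2/b
cd demo2

# Subshell version: the cd happens inside ( ), so each iteration
# starts from the original directory automatically.
for dir in */; do
    ( cd "$dir" && echo "working in $PWD" )    # stand-in for the real commands
done

echo "back in $PWD without any manual cd"
```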

Forcing a subshell here causes the cd to occur only in the subshell; for the next iteration of the loop, we're back to our normal location, regardless of whether the cd succeeded or failed. We don't have to change back manually. In fact, the penultimate example isn't even valid -- if one of the whatever commands fails, we might not cd back where we need to be. To correct it without using the subshell, we'd have to arrange to execute some sort of cd "$ORIGINAL_DIR" command within each loop iteration. It would be frightfully messy.

The subshell version is much simpler and cleaner.

16. [ bar == "$foo" ]

The == operator is not valid for the [ command. Use = instead, or use the [[ keyword instead.
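A quick sketch of both spellings (the [[ line assumes the script runs under bash):

```shell
foo=bar

# POSIX [ uses a single = for string comparison.
[ bar = "$foo" ] && echo "equal, per ["

# The bash [[ keyword accepts == (and does not word-split $foo).
[[ bar == "$foo" ]] && echo "equal, per [["
```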

[Oct 31, 2007] freshmeat.net Project details for Bash Debugger

Version 3.1-0.09
The Bash Debugger (bashdb) is a debugger for Bash scripts. The debugger command interface is modeled on the gdb command interface. Front-ends supporting bashdb include GNU-Emacs and ddd. In the past, the project has been used as a springboard for other experimental features such as a timestamped history file (now in Bash versions after 3.0).

Release focus: Minor bugfixes

Changes:
This release contains bugfixes accumulated over the year and works on bash version 3.2 as well as version 3.1.

[Oct 5, 2007] Turn Vim into a bash IDE By Joe 'Zonker' Brockmeier

June 11, 2007 | Linux.com
By itself, Vim is one of the best editors for shell scripting. With a little tweaking, however, you can turn Vim into a full-fledged IDE for writing scripts. You could do it yourself, or you can just install Fritz Mehner's Bash Support plugin.

To install Bash Support, download the zip archive, copy it to your ~/.vim directory, and unzip the archive. You'll also want to edit your ~/.vimrc to include a few personal details; open the file and add these three lines:

let g:BASH_AuthorName   = 'Your Name'
let g:BASH_Email        = 'my@email.com'
let g:BASH_Company      = 'Company Name'

These variables will be used to fill in some headers for your projects, as we'll see below.

The Bash Support plugin works in the Vim GUI (gVim) and text mode Vim. It's a little easier to use in the GUI, and Bash Support doesn't implement most of its menu functions in Vim's text mode, so you might want to stick with gVim when scripting.

When Bash Support is installed, gVim will include a new menu, appropriately titled Bash. This puts all of the Bash Support functions right at your fingertips (or mouse button, if you prefer). Let's walk through some of the features, and see how Bash Support can make Bash scripting a breeze.

Header and comments

If you believe in using extensive comments in your scripts -- and I hope you do -- you'll really enjoy using Bash Support. Bash Support provides a number of functions that make it easy to add comments to your bash scripts and programs automatically, or with just a mouse click or a few keystrokes.

When you start a non-trivial script that will be used and maintained by others, it's a good idea to include a header with basic information -- the name of the script, usage, description, notes, author information, copyright, and any other info that might be useful to the next person who has to maintain the script. Bash Support makes it a breeze to provide this information. Go to Bash -> Comments -> File Header, and gVim will insert a header like this in your script:

#!/bin/bash
#===============================================================================
#
#          FILE:  test.sh
#
#         USAGE:  ./test.sh
#
#   DESCRIPTION:
#
#       OPTIONS:  ---
#  REQUIREMENTS:  ---
#          BUGS:  ---
#         NOTES:  ---
#        AUTHOR:  Joe Brockmeier, jzb@zonker.net
#       COMPANY:  Dissociated Press
#       VERSION:  1.0
#       CREATED:  05/25/2007 10:31:01 PM MDT
#      REVISION:  ---
#===============================================================================

You'll need to fill in some of the information, but Bash Support grabs the author, company name, and email address from your ~/.vimrc, and fills in the file name and created date automatically. To make life even easier, if you start Vim or gVim with a new file that ends with an .sh extension, it will insert the header automatically.

As you're writing your script, you might want to add comment blocks for your functions as well. To do this, go to Bash -> Comment -> Function Description to insert a block of text like this:

#===  FUNCTION  ================================================================
#          NAME:
#   DESCRIPTION:
#    PARAMETERS:
#       RETURNS:
#===============================================================================

Just fill in the relevant information and carry on coding.

The Comment menu allows you to insert other types of comments, insert the current date and time, and turn selected code into a comment (and vice versa).

Statements and snippets

Let's say you want to add an if-else statement to your script. You could type out the statement, or you could just use Bash Support's handy selection of pre-made statements. Go to Bash -> Statements and you'll see a long list of pre-made statements that you can just plug in and fill in the blanks. For instance, if you want to add a while statement, you can go to Bash -> Statements -> while, and you'll get the following:

while _; do
done

The cursor will be positioned where the underscore (_) is above. All you need to do is add the test statement and the actual code you want to run in the while statement. Sure, it'd be nice if Bash Support could do all that too, but there's only so far an IDE can help you.

However, you can help yourself. When you do a lot of bash scripting, you might have functions or code snippets that you reuse in new scripts. Bash Support allows you to add your snippets and functions by highlighting the code you want to save, then going to Bash -> Statements -> write code snippet. When you want to grab a piece of prewritten code, go to Bash -> Statements -> read code snippet. Bash Support ships with a few included code fragments.

Another way to add snippets to the statement collection is to just place a text file with the snippet under the ~/.vim/bash-support/codesnippets directory.

Running and debugging scripts

Once you have a script ready to go, it's testing and debugging time. You could exit Vim, make the script executable, run it to see whether it has any bugs, and then go back to Vim to edit it, but that's tedious. Bash Support lets you stay in Vim while doing your testing.

When you're ready to make the script executable, just choose Bash -> Run -> make script executable. To save and run the script, press Ctrl-F9, or go to Bash -> Run -> save + run script.

Bash Support also lets you call the bash debugger (bashdb) directly from within Vim. On Ubuntu, it's not installed by default, but that's easily remedied with apt-get install bashdb. Once it's installed, you can debug the script you're working on with F9 or Bash -> Run -> start debugger.

If you want a "hard copy" -- a PostScript printout -- of your script, you can generate one by going to Bash -> Run -> hardcopy to FILENAME.ps. This is where Bash Support comes in handy for any type of file, not just bash scripts. You can use this function within any file to generate a PostScript printout.

Bash Support has several other functions to help run and test scripts from within Vim. One useful feature is syntax checking, which you can access with Alt-F9. If you have no syntax errors, you'll get a quick OK. If there are problems, you'll see a small window at the bottom of the Vim screen with a list of syntax errors. From that window you can highlight the error and press Enter, and you'll be taken to the line with the error.

Put away the reference book...

Don't you hate it when you need to include a regular expression or a test in a script, but can't quite remember the syntax? That's no problem when you're using Bash Support, because you have Regex and Tests menus with all you'll need. For example, if you need to verify that a file exists and is owned by the correct user ID (UID), go to Bash -> Tests -> file exists and is owned by the effective UID. Bash Support will insert the appropriate test ([ -O _]) with your cursor in the spot where you have to fill in the file name.

To build regular expressions quickly, go to the Bash menu, select Regex, then pick the appropriate expression from the list. It's fairly useful when you can't remember exactly how to express "zero or one" or other regular expressions.

Bash Support also includes menus for environment variables, bash builtins, shell options, and a lot more.

Hotkey support

Vim users can access many of Bash Support's features using hotkeys. While not as simple as clicking the menu, the hotkeys do follow a logical scheme that makes them easy to remember. For example, all of the comment functions are accessed with \c, so if you want to insert a file header, you use \ch; if you want a date inserted, type \cd; and for a line end comment, use \cl.

Statements can be accessed with \a. Use \ac for a case statement, \aie for an "if then else" statement, \af for a "for in..." statement, and so on. Note that the online docs are incorrect here, and indicate that statements begin with \s, but Bash Support ships with a PDF reference card (under .vim/bash-support/doc/bash-hot-keys.pdf) that gets it right.

Run commands are accessed with \r. For example, to save the file and run a script, use \rr; to make a script executable, use \re; and to start the debugger, type \rd. I won't try to detail all of the shortcuts, but you can pull up a reference using :help bashsupport-usage-vim when in Vim, or use the PDF. The full Bash Support reference is available within Vim by running :help bashsupport, or you can read it online.

Of course, we've covered only a small part of Bash Support's functionality. The next time you need to whip up a shell script, try it using Vim with Bash Support. This plugin makes scripting in bash a lot easier.

Debugging

The Bash shell contains no debugger, nor even any debugging-specific commands or constructs. Syntax errors or outright typos in the script generate cryptic error messages that are often of no help in debugging a non-functional script.


Example 3-98. test23, a buggy script

   1 #!/bin/bash
   2 
   3 a=37
   4 
   5 if [$a -gt 27 ]
   6 then
   7   echo $a
   8 fi  
   9 
  10 exit 0

Output from script:

 ./test23: [37: command not found

What's wrong with the above script (hint: after the if)?

What if the script executes, but does not work as expected? This is the all too familiar logic error.


Example 3-99. test24, another buggy script

   1 #!/bin/bash
   2 
   3 # This is supposed to delete all filenames
   4 # containing embedded spaces in current directory,
   5 # but doesn't.  Why not?
   6 
   7 
   8 badname=`ls | grep ' '`
   9 
  10 # echo "$badname"
  11 
  12 rm "$badname"
  13 
  14 exit 0

To find out what's wrong with Example 3-99, uncomment the echo "$badname" line. Echo statements are useful for seeing whether what you expect is actually what you get.

Summarizing the symptoms of a buggy script,

  1. It bombs with a "syntax error" message, or
  2. It runs, but does not work as expected (logic error)
  3. It runs, works as expected, but has nasty side effects (logic bomb).

Tools for debugging non-working scripts include

  1. echo statements at critical points in the script to trace the variables, and otherwise give a snapshot of what is going on.
  2. using the tee filter to check processes or data flows at critical points.
  3. setting option flags -n -v -x

    sh -n scriptname checks for syntax errors without actually running the script. This is the equivalent of inserting set -n or set -o noexec into the script. Note that certain types of syntax errors can slip past this check.

    sh -v scriptname echoes each command before executing it. This is the equivalent of inserting set -v or set -o verbose in the script.

    sh -x scriptname echoes the result of each command, but in an abbreviated manner. This is the equivalent of inserting set -x or set -o xtrace in the script.

    Inserting set -u or set -o nounset in the script runs it, but gives an unbound variable error message at each attempt to use an undeclared variable.

  4. trapping at exit

    The exit command in a script actually sends a signal 0, terminating the process, that is, the script itself. It is often useful to trap the exit, forcing a "printout" of variables, for example. The trap must be the first command in the script.

Specifies an action on receipt of a signal; also useful for debugging.

A signal is simply a message sent to a process, either by the kernel or another process, telling it to take some specified action (usually to terminate). For example, hitting Control-C sends a user interrupt, an INT signal, to a running program.

   1 trap '' 2 # Ignore interrupts: the empty action disables signal 2
   2 trap 'echo "Control-C disabled."' 2

Example 3-100. Trapping at exit

   1 #!/bin/bash
   2 
   3 trap 'echo Variable Listing --- a = $a  b = $b' EXIT
   4 # EXIT is the name of the signal generated upon exit from a script.
   5 
   6 a=39
   7 
   8 b=36
   9 
  10 exit 0
  11 # Note that commenting out the 'exit' command makes no difference,
  12 # since the script exits anyhow after running out of commands.

Example 3-101. Cleaning up after Control-C

   1 #!/bin/bash
   2 
   3 # logon.sh
   4 # A quick 'n dirty script to check whether you are on-line yet.
   5 
   6 
   7 TRUE=1
   8 LOGFILE=/var/log/messages
   9 # Note that $LOGFILE must be readable (chmod 644 /var/log/messages).
  10 TEMPFILE=temp.$$
  11 # Create a "unique" temp file name, using process id of the script.
  12 KEYWORD=address
  13 # At logon, the line "remote IP address xxx.xxx.xxx.xxx" appended to /var/log/messages.
  14 ONLINE=22
  15 USER_INTERRUPT=13
  16 
  17 trap 'rm -f $TEMPFILE; exit $USER_INTERRUPT' TERM INT
  18 # Cleans up the temp file if script interrupted by control-c.
  19 
  20 echo
  21 
  22 while [ $TRUE ]  #Endless loop.
  23 do
  24   tail -1 $LOGFILE> $TEMPFILE
  25   # Saves last line of system log file as temp file.
  26   search=`grep $KEYWORD $TEMPFILE`
  27   # Checks for presence of the "IP address" phrase,
  28   # indicating a successful logon.
  29 
  30   if [ ! -z "$search" ] # Quotes necessary because of possible spaces.
  31   then
  32      echo "On-line"
  33      rm -f $TEMPFILE  # Clean up temp file.
  34      exit $ONLINE
  35   else
  36      echo -n "." # -n option to echo suppresses newline,
  37                  # so you get continuous rows of dots.
  38   fi
  39 
  40   sleep 1  
  41 done  
  42 
  43 
  44 # Note: if you change the KEYWORD variable to "Exit",
  45 # this script can be used while on-line to check for an unexpected logoff.
  46 
  47 # Exercise: Change the script, as per the above note,
  48 #           and prettify it.
  49 
  50 exit 0

trap '' SIGNAL (two adjacent apostrophes) disables SIGNAL for the remainder of the script. trap SIGNAL restores the functioning of SIGNAL once more. This is useful to protect a critical portion of a script from an undesirable interrupt.

   1 	trap '' 2  # Signal 2 is Control-C, now disabled.
   2 	command
   3 	command
   4 	command
   5 	trap 2     # Reenables Control-C
   6 	

Recommended Papers

Debugging Bash scripts

2.3.1. Debugging on the entire script

When things don't go according to plan, you need to determine what exactly causes the script to fail. Bash provides extensive debugging features. The most common is to start up the subshell with the -x option, which will run the entire script in debug mode. Traces of each command plus its arguments are printed to standard output after the commands have been expanded but before they are executed.

This is the commented-script1.sh script run in debug mode. Note again that the added comments are not visible in the output of the script.

willy:~/scripts> bash -x script1.sh
+ clear

+ echo 'The script starts now.'
The script starts now.
+ echo 'Hi, willy!'
Hi, willy!
+ echo

+ echo 'I will now fetch you a list of connected users:'
I will now fetch you a list of connected users:
+ echo

+ w
  4:50pm  up 18 days,  6:49,  4 users,  load average: 0.58, 0.62, 0.40
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT
root     tty2     -                Sat 2pm  5:36m  0.24s  0.05s  -bash
willy	 :0       -                Sat 2pm   ?     0.00s   ?     -
willy	 pts/3    -                Sat 2pm 43:13  36.82s 36.82s  BitchX willy ir
willy    pts/2    -                Sat 2pm 43:13   0.13s  0.06s  /usr/bin/screen
+ echo

+ echo 'I'\''m setting two variables now.'
I'm setting two variables now.
+ COLOUR=black
+ VALUE=9
+ echo 'This is a string: '
This is a string:
+ echo 'And this is a number: '
And this is a number:
+ echo

+ echo 'I'\''m giving you back your prompt now.'
I'm giving you back your prompt now.
+ echo

2.3.2. Debugging on part(s) of the script

Using the set Bash built-in, you can run the portions of the script you are sure are free of faults in normal mode, and display debugging information only for the troublesome zones. Say we are not sure what the w command will do in the example commented-script1.sh; then we could enclose it in the script like this:

set -x			# activate debugging from here
w
set +x			# stop debugging from here

Output then looks like this:

willy: ~/scripts> script1.sh
The script starts now.
Hi, willy!

I will now fetch you a list of connected users:

+ w
  5:00pm  up 18 days,  7:00,  4 users,  load average: 0.79, 0.39, 0.33
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT
root     tty2     -                Sat 2pm  5:47m  0.24s  0.05s  -bash
willy    :0       -                Sat 2pm   ?     0.00s   ?     -
willy    pts/3    -                Sat 2pm 54:02  36.88s 36.88s  BitchX willyke
willy    pts/2    -                Sat 2pm 54:02   0.13s  0.06s  /usr/bin/screen
+ set +x

I'm setting two variables now.
This is a string:
And this is a number:

I'm giving you back your prompt now.

willy: ~/scripts>

You can switch debugging mode on and off as many times as you want within the same script.

The table below gives an overview of other useful Bash options:

Table 2-1. Overview of set debugging options
Short notation   Long notation     Result
set -f           set -o noglob     Disable file name generation using metacharacters (globbing).
set -v           set -o verbose    Print shell input lines as they are read.
set -x           set -o xtrace     Print command traces before executing each command.

The dash is used to activate a shell option and a plus to deactivate it. Don't let this confuse you!

In the example below, we demonstrate these options on the command line:

willy:~/scripts> set -v

willy:~/scripts> ls
ls 
commented-scripts.sh	script1.sh

willy:~/scripts> set +v
set +v

willy:~/scripts> ls *
commented-scripts.sh    script1.sh

willy:~/scripts> set -f

willy:~/scripts> ls *
ls: *: No such file or directory

willy:~/scripts> touch *

willy:~/scripts> ls
*   commented-scripts.sh    script1.sh

willy:~/scripts> rm *

willy:~/scripts> ls
commented-scripts.sh    script1.sh

Alternatively, these modes can be specified in the script itself, by adding the desired options to the first line shell declaration. Options can be combined, as is usually the case with UNIX commands:

#!/bin/bash -xv

Once you have found the buggy part of your script, you can add echo statements before each command you are unsure of, so that you will see exactly where and why things don't work. In the example commented-script1.sh script, it could be done like this, still assuming that the displaying of users gives us problems:

echo "debug message: now attempting to start w command"; w

In more advanced scripts, the echo can be inserted to display the content of variables at different stages in the script, so that flaws can be detected:

echo "Variable VARNAME is now set to $VARNAME."
 

Example of debugging a shell script

To print commands and their arguments as they are executed:

   cat example
   #!/bin/sh
   TEST1=result1
   TEST2=result2
   if [ $TEST1 = "result2" ]
   then
     echo $TEST1
   fi
   if [ $TEST1 = "result1" ]
   then
     echo $TEST1
   fi
   if [ $test3 = "whosit" ]
   then
     echo fail here cos it's wrong
   fi

This is a script called example which has an error in it; the variable $test3 is not set, so the third and last [ test command will fail.

Running the script produces:

   example
   result1
   [: argument expected

The script fails and to see where the error occurred you would use the -x option like this:

   sh -x example
   TEST1=result1
   TEST2=result2
   + [ result1 = result2 ]
   + [ result1 = result1 ]
   + echo result1
   result1
   + [ = whosit ]
   example: [: argument expected

The error occurs in the command [ = whosit ] which is wrong as the variable $test3 has not been set. You can now see where to fix it.

Debugging shell scripts

To see where a script produces an error use the command:

   sh -x script argument

The -x option to the sh command tells it to print commands and their arguments as they are executed.

You can then see what stage of the script has been reached when an error occurs.

11.4.4 Debugging Programs

At times you may need to debug a program to find and correct errors. Two options to the sh command (listed below) can help you debug a program:  

sh -v shellprogramname
prints the shell input lines as they are read by the system
sh -x shellprogramname
prints commands and their arguments as they are executed

To try these two options, create a shell program that has an error in it. For example, create a file called bug that contains the following list of commands:

 

$ cat bug<CR>
today=`date`
echo enter person
read person
mail $1
$person
When you log off come into my office please.
$today.
MLH
$


Notice that today equals the output of the date command, which must be enclosed in grave accents for command substitution to occur.

The mail message sent to Tom at login tommy ($1) should look like the following screen:

$ mail<CR>
From mlh Mon Apr 10  11:36  CST  1989
Tom
When you log off come into my office please.
Mon Apr 10  11:36:32  CST  1989
MLH
?
.


When you execute bug, you have to press the BREAK or DELETE key to end the program.

 

To debug this program, try executing bug using sh -v. This will print the lines of the file as they are read by the system, as shown below:

 

$ sh -v bug tommy<CR>
today=`date`
echo enter person
enter person
read person
tom
mail $1


Notice that the output stops on the mail command, since there is a problem with mail. You must use a here document to redirect input into mail.

Before you fix the bug program, try executing it with sh -x, which prints the commands and their arguments as they are read by the system.

 

$ sh -x bug tommy<CR>
+date
today=Mon Apr 10  11:07:23 CST 1989
+ echo enter person
enter person
+ read person
tom
+ mail tom
$


Once again, the program stops at the mail command. Notice that the substitutions for the variables have been made and are displayed.

The corrected bug program is as follows:

$ cat bug<CR>
today=`date`
echo enter person
read person
mail $1 <<!
$person
When you log off come into my office please.
$today
MLH
!
$

The tee command is a helpful command for debugging pipelines. While simply passing its standard input to its standard output, it also saves a copy of its input into the file whose name is given as an argument.

 

The general format of the tee command is:

 

command1 | tee saverfile | command2<CR>

saverfile is the file that saves the output of command1 for you to study.

 

For example, suppose you want to check on the output of the grep command in the following command line:

 

who | grep $1 | cut -c1-9<CR>

You can use tee to copy the output of grep into a file called check, without disturbing the rest of the pipeline.

 

who | grep $1 | tee check | cut -c1-9<CR>

The file check contains a copy of the grep output, as shown in the following screen:

 

$ who | grep mlhmo | tee check | cut -c1-9<CR>
mlhmo
$ cat check<CR>
mlhmo   tty61  Apr  10  11:30
$

Advanced Bash-Scripting Guide Chapter 30. Debugging


The Bash shell contains no debugger, nor even any debugging-specific commands or constructs. [1] Syntax errors or outright typos in the script generate cryptic error messages that are often of no help in debugging a non-functional script.

Example 30-1. A buggy script
#!/bin/bash
# ex74.sh

# This is a buggy script.

a=37

if [$a -gt 27 ]
then
  echo $a
fi 

exit 0

Output from script:

./ex74.sh: [37: command not found

What's wrong with the above script (hint: after the if)?

 

Example 30-2. Missing keyword
#!/bin/bash
# missing-keyword.sh: What error message will this generate?

for a in 1 2 3
do
  echo "$a"
# done     # Required keyword 'done' commented out in line 7.

exit 0   

Output from script:

missing-keyword.sh: line 10: syntax error: unexpected end of file
	

Note that the error message does not necessarily reference the line in which the error occurs, but the line where the Bash interpreter finally becomes aware of the error.

 

Error messages may disregard comment lines in a script when reporting the line number of a syntax error.

What if the script executes, but does not work as expected? This is the all too familiar logic error.

Example 30-3. test24, another buggy script
#!/bin/bash

#  This is supposed to delete all filenames in current directory
#+ containing embedded spaces.
#  It doesn't work.  Why not?


badname=`ls | grep ' '`

# echo "$badname"

rm "$badname"

exit 0

Try to find out what's wrong with Example 30-3 by uncommenting the echo "$badname" line. Echo statements are useful for seeing whether what you expect is actually what you get.

In this particular case, rm "$badname" will not give the desired results because $badname should not be quoted. Placing it in quotes ensures that rm has only one argument (it will match only one filename). A partial fix is to remove the quotes from $badname and to reset $IFS to contain only a newline, IFS=$'\n'. However, there are simpler ways of going about it.

# Correct methods of deleting filenames containing spaces.
rm *\ *
rm *" "*
rm *' '*
# Thank you. S.C.

Summarizing the symptoms of a buggy script,

  1. It bombs with a "syntax error" message, or
  2. It runs, but does not work as expected (logic error).
  3. It runs, works as expected, but has nasty side effects (logic bomb).

Tools for debugging non-working scripts include

  1. echo statements at critical points in the script to trace the variables, and otherwise give a snapshot of what is going on.
  2. using the tee filter to check processes or data flows at critical points.
  3. setting option flags -n -v -x

    sh -n scriptname checks for syntax errors without actually running the script. This is the equivalent of inserting set -n or set -o noexec into the script. Note that certain types of syntax errors can slip past this check.

    sh -v scriptname echoes each command before executing it. This is the equivalent of inserting set -v or set -o verbose in the script.

    The -n and -v flags work well together. sh -nv scriptname gives a verbose syntax check.

    sh -x scriptname echoes the result of each command, but in an abbreviated manner. This is the equivalent of inserting set -x or set -o xtrace in the script.

    Inserting set -u or set -o nounset in the script runs it, but gives an unbound variable error message at each attempt to use an undeclared variable.

  4. Using an "assert" function to test a variable or condition at critical points in a script. (This is an idea borrowed from C.)

    Example 30-4. Testing a condition with an "assert"

    #!/bin/bash
    # assert.sh
    
    assert ()                 #  If condition false,
    {                         #+ exit from script with error message.
      E_PARAM_ERR=98
      E_ASSERT_FAILED=99
    
    
      if [ -z "$2" ]          # Not enough parameters passed.
      then
        return $E_PARAM_ERR   # No damage done.
      fi
    
      lineno=$2
    
      if [ ! $1 ]
      then
        echo "Assertion failed:  \"$1\""
        echo "File \"$0\", line $lineno"
        exit $E_ASSERT_FAILED
      # else
      #   return
      #   and continue executing script.
      fi 
    }   
    
    
    a=5
    b=4
    condition="$a -lt $b"     # Error message and exit from script.
                              #  Try setting "condition" to something else,
                              #+ and see what happens.
    
    assert "$condition" $LINENO
    # The remainder of the script executes only if the "assert" does not fail.
    
    
    # Some commands.
    # ...
    echo "This statement echoes only if the \"assert\" does not fail."
    # ...
    # Some more commands.
    
    exit 0
  5. Trapping at exit.

    The exit command in a script triggers a signal 0, terminating the process, that is, the script itself. [2] It is often useful to trap the exit, forcing a "printout" of variables, for example. The trap must be the first command in the script.

Trapping signals
trap
Specifies an action on receipt of a signal; also useful for debugging.  
A signal is simply a message sent to a process, either by the kernel or by another process, telling it to take some specified action (usually to terminate). For example, hitting Control-C sends a user interrupt, an INT signal, to a running program.
trap '' 2
# Ignore interrupt 2 (Control-C), with no action specified.

trap 'echo "Control-C disabled."' 2
# Message when Control-C pressed.

 

Example 30-5. Trapping at exit
#!/bin/bash

trap 'echo Variable Listing --- a = $a  b = $b' EXIT
# EXIT is the name of the signal generated upon exit from a script.

a=39

b=36

exit 0
#  Note that commenting out the 'exit' command makes no difference,
#+ since the script exits in any case after running out of commands.
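A closely related idiom (a sketch, not part of the original example) combines an EXIT trap with mktemp(1) so that a temp file is removed no matter how the script terminates:

```shell
#!/bin/bash
# cleanup-demo.sh -- sketch: guaranteed temp-file removal via an EXIT trap.
# Assumes mktemp(1) is available (it is on Linux and the BSDs).

TEMPFILE=$(mktemp) || exit 1
trap 'rm -f "$TEMPFILE"' EXIT   # Fires on normal exit, 'exit', or a fatal error.

echo "scratch data" > "$TEMPFILE"
echo "Temp file in use: $TEMPFILE"

# No explicit rm needed: the EXIT trap removes $TEMPFILE however we leave.
exit 0
```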
Example 30-6. Cleaning up after Control-C
#!/bin/bash
# logon.sh: A quick 'n dirty script to check whether you are on-line yet.


TRUE=1
LOGFILE=/var/log/messages
# Note that $LOGFILE must be readable (chmod 644 /var/log/messages).
TEMPFILE=temp.$$
# Create a "unique" temp file name, using process id of the script.
KEYWORD=address
# At logon, the line "remote IP address xxx.xxx.xxx.xxx"
#                     appended to /var/log/messages.
ONLINE=22
USER_INTERRUPT=13
CHECK_LINES=100
# How many lines in log file to check.

trap 'rm -f $TEMPFILE; exit $USER_INTERRUPT' TERM INT
# Cleans up the temp file if script interrupted by control-c.

echo

while [ $TRUE ]  # Endless loop.
do
  tail -$CHECK_LINES $LOGFILE > $TEMPFILE
  # Saves last $CHECK_LINES lines of the system log file as a temp file.
  # Necessary, since newer kernels generate many log messages at log on.
  search=`grep $KEYWORD $TEMPFILE`
  # Checks for presence of the "IP address" phrase,
  # indicating a successful logon.

  if [ ! -z "$search" ] # Quotes necessary because of possible spaces.
  then
     echo "On-line"
     rm -f $TEMPFILE    # Clean up temp file.
     exit $ONLINE
  else
     echo -n "."        # -n option to echo suppresses newline,
                        # so you get continuous rows of dots.
  fi

  sleep 1 
done 


# Note: if you change the KEYWORD variable to "Exit",
# this script can be used while on-line to check for an unexpected logoff.

# Exercise: Change the script, as per the above note,
#           and prettify it.

exit 0


# Nick Drage suggests an alternate method:

while true
  do ifconfig ppp0 | grep UP 1> /dev/null && echo "connected" && exit 0
  echo -n "."   # Prints dots (.....) until connected.
  sleep 2
done

# Problem: Hitting Control-C to terminate this process may be insufficient.
#          (Dots may keep on echoing.)
# Exercise: Fix this.



# Stephane Chazelas has yet another alternative:

CHECK_INTERVAL=1

while ! tail -1 "$LOGFILE" | grep -q "$KEYWORD"
do echo -n .
   sleep $CHECK_INTERVAL
done
echo "On-line"

# Exercise: Discuss the strengths and weaknesses
#           of each of these various approaches.
 
The DEBUG argument to trap causes a specified action to execute before every command in a script (in bash versions prior to 3.0, after every command). This permits tracing variables, for example.

Example 30-7. Tracing a variable

#!/bin/bash

trap 'echo "VARIABLE-TRACE> \$variable = \"$variable\""' DEBUG
# Echoes the value of $variable after every command.

variable=29

echo "Just initialized \"\$variable\" to $variable."

let "variable *= 3"
echo "Just multiplied \"\$variable\" by 3."

# The "trap 'commands' DEBUG" construct would be more useful
# in the context of a complex script,
# where placing multiple "echo $variable" statements might be
# clumsy and time-consuming.

# Thanks, Stephane Chazelas for the pointer.

exit 0
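The DEBUG trap combines naturally with the debugger-support variables listed at the top of this page. The sketch below (illustrative; the filename trace-demo.sh is hypothetical) uses $BASH_COMMAND and $LINENO to build a minimal tracer:

```shell
#!/bin/bash
# trace-demo.sh -- sketch: a poor man's tracer built from the DEBUG trap.
# $BASH_COMMAND holds the command about to run; $LINENO gives its line.

trap 'echo "TRACE line $LINENO: $BASH_COMMAND" >&2' DEBUG

total=0
for n in 1 2 3
do
  (( total += n ))
done
echo "total=$total"       # Trace goes to stderr; normal output to stdout.
```

Sending the trace to stderr keeps it separate from the script's real output, so the two can be redirected independently.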

 

 
trap '' SIGNAL (two adjacent apostrophes) disables SIGNAL for the remainder of the script. trap SIGNAL (or the POSIX form, trap - SIGNAL) restores the default handling of SIGNAL once more. This is useful to protect a critical portion of a script from an undesirable interrupt.

 

	trap '' 2  # Signal 2 is Control-C, now disabled.
	command
	command
	command
	trap 2     # Reenables Control-C
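A runnable version of that schematic might look like the following sketch, with a sleep standing in for the real critical work:

```shell
#!/bin/bash
# critical-demo.sh -- sketch: shield a critical section from Control-C.

trap '' INT                 # Ignore SIGINT (Control-C)...
echo "Critical section: interrupts ignored."
sleep 1                     # ...while this stand-in for real work runs.
trap - INT                  # Restore default SIGINT handling.
echo "Interrupts re-enabled."
```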
	

Notes

[1] Rocky Bernstein's Bash debugger partially makes up for this lack.
[2] By convention, signal 0 is assigned to exit.

Recommended Links


Please visit Heiner Steven's SHELLdorado, the best shell scripting site on the Internet

Advanced Bash-Scripting Guide

Linux.com Turn Vim into a bash IDE

Debugging

5 Beginner Linux Setup Ideas For Cron Jobs & Shell Scripts (Sep 05, 2014)

Debugging your shell scripts (Nov 21, 2013)

Reference

Examples shipped with bash 3.2 and newer
Path Description X-ref
./bashdb Deprecated sample implementation of a bash debugger.  
./complete Shell completion code.  
./functions Example functions.  
./functions/array-stuff Various array functions (ashift, array_sort, reverse).  
./functions/array-to-string Convert an array to a string.  
./functions/autoload An almost ksh-compatible 'autoload' (no lazy load). ksh
./functions/autoload.v2 An almost ksh-compatible 'autoload' (no lazy load). ksh
./functions/autoload.v3 A more ksh-compatible 'autoload' (with lazy load). ksh
./functions/basename A replacement for basename(1). basename
./functions/basename2 Fast basename(1) and dirname(1) functions for bash/sh. basename, dirname
./functions/coproc.bash Start, control, and end co-processes.  
./functions/coshell.bash Control shell co-processes (see coprocess.bash).  
./functions/coshell.README README for coshell and coproc.  
./functions/csh-compat A C-shell compatibility package. csh
./functions/dirfuncs Directory manipulation functions from the book The Korn Shell.  
./functions/dirname A replacement for dirname(1). dirname
./functions/emptydir Find out if a directory is empty.  
./functions/exitstat Display the exit status of processes.  
./functions/external Like command, but forces the use of external command.  
./functions/fact Recursive factorial function.  
./functions/fstty Front-end to sync TERM changes to both stty(1) and readline 'bind'. stty.bash
./functions/func Print out definitions for functions named by arguments.  
./functions/gethtml Get a web page from a remote server (wget(1) in bash).  
./functions/getoptx.bash getopt function that parses long-named options.  
./functions/inetaddr Internet address conversion (inet2hex and hex2inet).  
./functions/inpath Return zero if the argument is in the path and executable. inpath
./functions/isnum.bash Test user input on numeric or character value.  
./functions/isnum2 Test user input on numeric values, with floating point.  
./functions/isvalidip Test user input for valid IP addresses.  
./functions/jdate.bash Julian date conversion.  
./functions/jj.bash Look for running jobs.  
./functions/keep Try to keep some programs in the foreground and running.  
./functions/ksh-cd ksh-like cd: cd [-LP] [dir[change]]. ksh
./functions/ksh-compat-test ksh-like arithmetic test replacements. ksh
./functions/kshenv Functions and aliases to provide the beginnings of a ksh environment for bash ksh
./functions/login Replace the login and newgrp built-ins in old Bourne shells.  
./functions/lowercase Rename files to lowercase. rename lower
./functions/manpage Find and print a manpage. fman
./functions/mhfold Print MH folders, useful only because folders(1) doesn't print mod date/times.  
./functions/notify.bash Notify when jobs change status.  
./functions/pathfuncs Path related functions (no_path, add_path, pre-path, del_path). path
./functions/README README  
./functions/recurse Recursive directory traverser.  
./functions/repeat2 A clone of the C shell built-in repeat. repeat, csh
./functions/repeat3 A clone of the C shell built-in repeat. repeat, csh
./functions/seq Generate a sequence from m to n;m defaults to 1.  
./functions/seq2 Generate a sequence from m to n;m defaults to 1.  
./functions/shcat Readline-based pager. cat, readline pager
./functions/shcat2 Readline-based pagers. cat, readline pager
./functions/sort-pos-params Sort the positional parameters.  
./functions/substr A function to emulate the ancient ksh built-in. ksh
./functions/substr2 A function to emulate the ancient ksh built-in. ksh
./functions/term A shell function to set the terminal type interactively or not.  
./functions/whatis An implementation of the 10th Edition Unix sh built-in whatis(1) command.  
./functions/whence An almost ksh-compatible whence(1) command.  
./functions/which An emulation of which(1) as it appears in FreeBSD.  
./functions/xalias.bash Convert csh alias commands to bash functions. csh, aliasconv
./functions/xfind.bash A find(1) clone.  
./loadables/ Example loadable replacements.  
./loadables/basename.c Return nondirectory portion of pathname. basename
./loadables/cat.c cat(1) replacement with no options—the way cat was intended. cat, readline pager
./loadables/cut.c cut(1) replacement.  
./loadables/dirname.c Return directory portion of pathname. dirname
./loadables/finfo.c Print file info.  
./loadables/getconf.c POSIX.2 getconf utility.
./loadables/getconf.h Replacement definitions for ones the system doesn't provide.  
./loadables/head.c Copy first part of files.  
./loadables/hello.c Obligatory "Hello World" / sample loadable.  
./loadables/id.c POSIX.2 user identity.  
./loadables/ln.c Make links.  
./loadables/logname.c Print login name of current user.  
./loadables/Makefile.in Simple makefile for the sample loadable built-ins.  
./loadables/mkdir.c Make directories.  
./loadables/necho.c echo without options or argument interpretation.  
./loadables/pathchk.c Check pathnames for validity and portability.  
./loadables/print.c Loadable ksh-93 style print built-in.  
./loadables/printenv.c Minimal built-in clone of BSD printenv(1).  
./loadables/push.c Anyone remember TOPS-20?  
./loadables/README README  
./loadables/realpath.c Canonicalize pathnames, resolving symlinks.  
./loadables/rmdir.c Remove directory.  
./loadables/sleep.c Sleep for fractions of a second.  
./loadables/strftime.c Loadable built-in interface to strftime(3).  
./loadables/sync.c Sync the disks by forcing pending filesystem writes to complete.  
./loadables/tee.c Duplicate standard input.  
./loadables/template.c Example template for loadable built-in.  
./loadables/truefalse.c True and false built-ins.  
./loadables/tty.c Return terminal name.  
./loadables/uname.c Print system information.  
./loadables/unlink.c Remove a directory entry.  
./loadables/whoami.c Print out username of current user.  
./loadables/perl/ Illustrates how to build a Perl interpreter into bash.  
./misc Miscellaneous  
./misc/aliasconv.bash Convert csh aliases to bash aliases and functions. csh, xalias
./misc/aliasconv.sh Convert csh aliases to bash aliases and functions. csh, xalias
./misc/cshtobash Convert csh aliases, environment variables, and variables to bash equivalents. csh, xalias
./misc/README README  
./misc/suncmd.termcap SunView TERMCAP string.  
./obashdb Modified version of the Korn Shell debugger from Bill Rosenblatt's Learning the Korn Shell.
./scripts.noah Noah Friedman's collection of scripts (updated to bash v2 syntax by Chet Ramey).  
./scripts.noah/aref.bash Pseudo-arrays and substring indexing examples.  
./scripts.noah/bash.sub.bash Library functions used by require.bash.  
./scripts.noah/bash_version.bash A function to slice up $BASH_VERSION.  
./scripts.noah/meta.bash Enable and disable eight-bit readline input.  
./scripts.noah/mktmp.bash Make a temporary file with a unique name.  
./scripts.noah/number.bash A fun hack to translate numerals into English.  
./scripts.noah/PERMISSION Permissions to use the scripts in this directory.  
./scripts.noah/prompt.bash A way to set PS1 to some predefined strings.  
./scripts.noah/README README  
./scripts.noah/remap_keys.bash A front end to bind to redo readline bindings. readline
./scripts.noah/require.bash Lisp-like require/provide library functions for bash.  
./scripts.noah/send_mail. Replacement SMTP client written in bash.  
./scripts.noah/shcat.bash bash replacement for cat(1). cat
./scripts.noah/source.bash Replacement for source that uses current directory.  
./scripts.noah/string.bash The string(3) functions at the shell level.  
./scripts.noah/stty.bash Front-end to stty(1) that changes readline bindings too. fstty
./scripts.noah/y_or_n_p.bash Prompt for a yes/no/quit answer. ask
./scripts.v2 John DuBois' ksh script collection (converted to bash v2 syntax by Chet Ramey).  
./scripts.v2/arc2tarz Convert an arc archive to a compressed tar archive.  
./scripts.v2/bashrand Random number generator with upper and lower bounds and optional seed. random
./scripts.v2/cal2day.bash Convert a day number to a name.  
./scripts.v2/cdhist.bash cd replacement with a directory stack added.  
./scripts.v2/corename Tell what produced a core file.  
./scripts.v2/fman Fast man(1) replacement. manpage
./scripts.v2/frcp Copy files using ftp(1) but with rcp-type command-line syntax.  
./scripts.v2/lowercase Change filenames to lowercase. rename lower
./scripts.v2/ncp A nicer front end for cp(1) (has -i, etc.).  
./scripts.v2/newext Change the extension of a group of files. rename
./scripts.v2/nmv A nicer front end for mv(1) (has -i, etc.). rename
./scripts.v2/pages Print specified pages from files.  
./scripts.v2/PERMISSION Permissions to use the scripts in this directory.  
./scripts.v2/pf A pager front end that handles compressed files.  
./scripts.v2/pmtop Poor man's top(1) for SunOS 4.x and BSD/OS.  
./scripts.v2/README README  
./scripts.v2/ren Rename files by changing parts of filenames that match a pattern. rename
./scripts.v2/rename Change the names of files that match a pattern. rename
./scripts.v2/repeat Execute a command multiple times. repeat
./scripts.v2/shprof Line profiler for bash scripts.  
./scripts.v2/untar Unarchive a (possibly compressed) tarfile into a directory.  
./scripts.v2/uudec Carefully uudecode(1) multiple files.  
./scripts.v2/uuenc uuencode(1) multiple files.  
./scripts.v2/vtree Print a visual display of a directory tree. tree
./scripts.v2/where Show where commands that match a pattern are.  
./scripts Example scripts.  
./scripts/adventure.sh Text adventure game in bash!  
./scripts/bcsh.sh Bourne shell's C shell emulator. csh
./scripts/cat.sh Readline-based pager. cat, readline pager
./scripts/center Center a group of lines.  
./scripts/dd-ex.sh Line editor using only /bin/sh, /bin/dd, and /bin/rm.  
./scripts/fixfiles.bash Recurse a tree and fix files containing various bad characters.  
./scripts/hanoi.bash The inevitable Towers of Hanoi in bash.  
./scripts/inpath Search $PATH for a file the same name as $1; return TRUE if found. inpath
./scripts/krand.bash Produces a random number within integer limits. random
./scripts/line-input.bash Line input routine for GNU Bourne Again Shell plus terminal-control primitives.  
./scripts/nohup.bash bash version of nohup command.  
./scripts/precedence Test relative precedences for && and || operators.  
./scripts/randomcard.bash Print a random card from a card deck. random
./scripts/README README  
./scripts/scrollbar Display scrolling text.  
./scripts/scrollbar2 Display scrolling text.  
./scripts/self-repro A self-reproducing script (careful!).  
./scripts/showperm.bash Convert ls(1) symbolic permissions into octal mode.  
./scripts/shprompt Display a prompt and get an answer satisfying certain criteria. ask
./scripts/spin.bash Display a spinning wheel to show progress.  
./scripts/timeout Give rsh(1) a shorter timeout.  
./scripts/vtree2 Display a tree printout of the directory with disk use in 1k blocks. tree
./scripts/vtree3 Display a graphical tree printout of dir. tree
./scripts/vtree3a Display a graphical tree printout of dir. tree
./scripts/websrv.sh A web server in bash!  
./scripts/xterm_title Print the contents of the xterm title bar.  
./scripts/zprintf Emulate printf (obsolete since printf is now a bash built-in).  
./startup-files Example startup files.  
./startup-files/Bash_aliases Some useful aliases (written by Fox).  
./startup-files/Bash_profile Sample startup file for bash login shells (written by Fox).  
./startup-files/bash-profile Sample startup file for bash login shells (written by Ramey).  
./startup-files/bashrc Sample Bourne Again Shell init file (written by Ramey).  
./startup-files/Bashrc.bfox Sample Bourne Again Shell init file (written by Fox).  
./startup-files/README README  
./startup-files/apple Example startup files for Mac OS X.  
./startup-files/apple/aliases Sample aliases for Mac OS X.  
./startup-files/apple/bash.defaults Sample User preferences file.  
./startup-files/apple/environment Sample Bourne Again Shell environment file.  
./startup-files/apple/login Sample login wrapper.  
./startup-files/apple/logout Sample logout wrapper.  
./startup-files/apple/rc Sample Bourne Again Shell config file.  
./startup-files/apple/README README


Etc

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.


Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.


Last modified: July 13, 2017