Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. Brian Kernighan |
See Debugging Links for general information about debugging.

Bash has an interactive debugger, which exists as a separately maintained package rather than being merged into the bash distribution. There are also several "legacy" options that allow tracing the execution of scripts and basic monitoring of selected variables.

Bash is the only shell that has a usable debugger, although it is an add-on package rather than a standard option. That means that, in general, on a development machine you should use the version of bash that is supported by the bash debugger, not the latest and greatest. See Bash Debugger.
The source code and documentation for bashdb can be found at http://bashdb.sourceforge.net/.
One of the major innovations in bash 3.0 was built-in debugger support:
- New variables to support the bash debugger: BASH_ARGC, BASH_ARGV, BASH_SOURCE, BASH_LINENO, BASH_SUBSHELL, BASH_EXECUTION_STRING, BASH_COMMAND.
- FUNCNAME has been changed to support the debugger: it's now an array variable.
- for, case, select, and arithmetic commands now keep line number information for the debugger.
- There is a new `RETURN' trap executed when a function or sourced script returns (not inherited by child processes; inherited by command substitution if function tracing is enabled and the debugger is active).
- New invocation option: --debugger. Enables debugging and turns on the new `extdebug' shell option.
- New `functrace' and `errtrace' options to `set -o' cause DEBUG and ERR traps, respectively, to be inherited by shell functions. Equivalent to `set -T' and `set -E' respectively. The `functrace' option also controls whether or not the DEBUG trap is inherited by sourced scripts.
- The DEBUG trap is run before binding the variable and running the action list in a `for' command, binding the selection variable and running the query in a `select' command, and before attempting a match in a `case' command.
- New `--enable-debugger' option to `configure' to compile in the debugger support code.
- `declare -F' now prints out extra line number and source file information if the `extdebug' option is set.
- If `extdebug' is enabled, a non-zero return value from a DEBUG trap causes the next command to be skipped, and a return value of 2 while in a function or sourced script forces a `return'.
- New `caller' builtin to provide a call stack for the bash debugger.
- The DEBUG trap is run just before the first command in a function body is executed, for the debugger.
- `for', `select', and `case' command heads are printed when `set -x' is enabled.
In addition to the debugger, you have three "legacy" options that bash inherited from ksh. They are -n (syntax checking), -v (verbose output) and, especially, -x (execution tracing):
-n option
The -n option, short for noexec (as in no execution), tells the shell not to run the commands. Instead, the shell just checks them for syntax errors. Because nothing is executed, this gives you a safe way to test a script that may contain syntax errors.
The following example shows how to use the -n option. Let us consider a shell script named debug_quotes.sh:

#!/bin/bash
echo "USER=$USER
echo "HOME=$HOME"
echo "OSNAME=$OSNAME"

Now run the script with the -n option:

$ sh -n debug_quotes
debug_quotes: 8: debug_quotes: Syntax error: Unterminated quoted string

As the above output shows, there is a syntax error: a closing double quote is missing.
-v option
The -v option tells the shell to run in verbose mode. In practice, this means that the shell will echo each command prior to executing it. This is very useful in that it can often help you find errors.
Let us create a shell script named "listusers.sh" with the contents below:

linuxtechi@localhost:~$ cat listusers.sh
#!/bin/bash
cut -d : -f1,5,7 /etc/passwd | grep -v sbin | grep sh | sort > /tmp/users.txt
awk -F':' ' { printf ( "%-12s %-40s\n", $1, $2 ) } ' /tmp/users.txt
# Clean up the temporary file.
/bin/rm -f /tmp/users.txt

Now execute the script with the -v option:

linuxtechi@localhost:~$ sh -v listusers.sh
#!/bin/bash
cut -d : -f1,5,7 /etc/passwd | grep -v sbin | grep sh | sort > /tmp/users.txt
awk -F':' ' { printf ( "%-12s %-40s\n", $1, $2 ) } ' /tmp/users.txt
guest-k9ghtA Guest,,,
guest-kqEkQ8 Guest,,,
guest-llnzfx Guest,,,
pradeep      pradeep,,,
mail admin   Mail Admin,,,
# Clean up the temporary file.
/bin/rm -f /tmp/users.txt
linuxtechi@localhost:~$
In the above output, the script's output gets mixed with the script's commands. Still, with the -v option you at least get a better view of what the shell is doing as it runs your script.
Combining the -n & -v Options
We can combine the command-line options -n and -v. This is a good combination because we can check the syntax of a script while seeing each of its lines echoed back.
Let us consider the previously used script debug_quotes.sh:

linuxtechi@localhost:~$ sh -nv debug_quotes.sh
#!/bin/bash
# shows an error.
echo "USER=$USER
echo "HOME=$HOME"
echo "OSNAME=$OSNAME"
debug_quotes: 8: debug_quotes: Syntax error: Unterminated quoted string
linuxtechi@localhost:~$
-x option
The -x option, short for xtrace or execution trace, tells the shell to echo each command after performing the substitution steps. Thus, we can see the values of variables and the commands actually run. Often, this option alone will help to diagnose a problem.

In most cases, the -x option provides the most useful information about a script, though it can lead to a lot of output. The following example shows this option in action.
linuxtechi@localhost:~$ sh -x listusers.sh
+ cut -d : -f1,5,7 /etc/passwd
+ grep -v sbin
+ sort
+ grep sh
+ awk -F: { printf ( "%-12s %-40s\n", $1, $2 ) } /tmp/users.txt
guest-k9ghtA Guest,,,
guest-kqEkQ8 Guest,,,
guest-llnzfx Guest,,,
pradeep      pradeep,,,
mail admin   Mail Admin,,,
+ /bin/rm -f /tmp/users.txt
linuxtechi@localhost:~$

In the above output, the shell inserted a + sign in front of each command.
The key to intelligent debugging is incorporating debugging code into your script and having a special variable (for example, $debug) that controls it, allowing you to switch it on and off and to select various levels of verbosity of output.
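As a sketch of this idea (the variable name debug, the helper dbg, and the level numbering are just one possible convention, not a bash feature):

```shell
#!/bin/bash
# Sketch: level-controlled debug output driven by a $debug variable.
# debug=0 silent, debug=1 messages, debug=2 messages plus xtrace.
debug=${debug:-0}

dbg() {                         # dbg <level> <message...>
    local level=$1; shift
    [ "$debug" -ge "$level" ] && echo "DEBUG: $*" >&2
    return 0
}

[ "$debug" -ge 2 ] && set -x    # full trace only at the highest level

dbg 1 "starting run"
total=0
for n in 1 2 3; do
    dbg 2 "adding $n"
    total=$((total + n))
done
dbg 1 "total=$total"
echo "$total"
```

Running it as debug=1 ./script prints the level-1 messages on stderr while normal output (here, 6) stays on stdout; debug=2 adds a full xtrace.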
To debug a shell script, it's not necessary to debug the entire script all the time. Sometimes debugging a part of the script is more useful and time-saving. We can achieve partial debugging in a shell script using the set built-in command:

set -x    (start debugging from here)
set +x    (end debugging here)

We can use set -x and set +x inside a shell script at multiple places, depending upon need. When the script is executed, the commands between them are printed along with their output.
Consider the following shell script as an example:
#!/bin/bash
# Filename: eval.sh
# Description: Evaluating arithmetic expression
a=23
b=6
expr $a + $b
expr $a - $b
expr $a * $b
Executing this script gives the following output:
$ sh eval.sh
29
17
expr: syntax error
We get the syntax error from the third expression, expr $a * $b. To debug it, we will put set -x before and set +x after expr $a * $b.
Another script, partial_debugging.sh, with partial debugging is as follows:
#!/bin/bash
# Filename: partial_debugging.sh
# Description: Debugging part of script of eval.sh
a=23
b=6
expr $a + $b
expr $a - $b
set -x
expr $a * $b
set +x
The following output is obtained after executing the partial_debugging.sh script:

$ sh partial_debugging.sh
29
17
+ expr 23 eval.sh partial_debugging.sh 6
expr: syntax error
+ set +x
From the preceding output, we can see that expr $a * $b is executed as expr 23 eval.sh partial_debugging.sh 6. Instead of performing multiplication, bash expands * to the names of the files in the current directory. So, we need to escape the character * to prevent its glob expansion, that is, expr $a \* $b.
The following script, eval_modified.sh, is a modified form of the eval.sh script:
#!/bin/bash
# Filename: eval_modified.sh
# Description: Evaluating arithmetic expression
a=23
b=6
expr $a + $b
expr $a - $b
expr $a \* $b
Now, the output of running eval_modified.sh will be as follows:

$ sh eval_modified.sh
29
17
138
The script runs perfectly now without any errors.
The -o errexit option terminates the shell script if a command returns a non-zero exit code. There are exceptions: commands whose status is explicitly tested, such as the conditions of if commands and loops, must be allowed to return a non-zero status, so they do not trigger the exit. Note also that errexit does not terminate the script if an error occurs in a subshell. Use this option only on the simplest scripts, without any other error handling.
The -o nounset option terminates the script with an error if an unset (or nonexistent) variable is referenced. This option reports misspelled variable names. nounset does not guarantee that all spelling mistakes will be identified, because which references are actually tested depends on the control flow.
#!/bin/bash
#
# A simple script to list files
shopt -o -s nounset
declare -i TOTAL=0
let "TOTAL=TTOAL+1"            # not caught
printf "%s\n" "$TOTAL"
if [ $TTOAL -eq 0 ] ; then     # caught
    printf "TOTAL is %s\n" "$TOTAL"
fi
The -o xtrace option displays each command before it's executed. The command has all substitutions and expansions performed.
declare -i TOTAL=0
if [ $TOTAL -eq 0 ] ; then
    printf "%s\n" "$TOTAL is zero"
fi
The first 11 lines are the commands executed by the profile scripts of the Linux distribution. The number of plus signs indicates how deeply the scripts are nested. The last four lines are the script fragment after Bash has performed all substitutions and expansions. Notice that the compound commands (like the if command itself) are left out.
#!/bin/bash
#
# xtrace_test.sh: A simple script to test the xtrace feature
shopt -o -s nounset
shopt -o -s xtrace
declare -i RESULT
declare -i TOTAL=3
while (( $TOTAL >= 0 )) ; do
    let "RESULT += TOTAL"
    printf "%d\n" "$RESULT"
    let "TOTAL--"
done
xtrace shows the line-by-line progress of the script.
You can change the trace plus sign prompt by assigning a new prompt to the PS4 variable. Setting the prompt to include the variable LINENO will display the current line in the script or shell function. In a script, LINENO displays the current line number of the script, starting with 1 for the first line. When used with shell functions at the shell prompt, LINENO counts from the first line of the function.
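For instance (a throwaway script, assuming nothing beyond stock bash):

```shell
#!/bin/bash
# Sketch: embed the current line number in the xtrace prompt via PS4.
PS4='+${LINENO}: '
set -o xtrace
a=1
b=$((a + 1))
echo "$b"
```

The trace on standard error then carries line numbers, e.g. +5: a=1 for the assignment on line 5, while normal output still goes to stdout.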
The built-in trap command can be used to execute debugging commands after each line has been processed by Bash. Usually debug traps are combined with a trace to provide additional information not listed in the trace.
When debug trapping is combined with a trace, the debug trap itself is listed by the trace when it executes. This makes a printf in the trap rather redundant, because the command is displayed, with all variable substitutions completed, before the printf executes. You can use the null command (:) to display variables without having to execute a real shell command.
#!/bin/bash
#
# debug_demo.sh : an example of a debug trap
trap ': CNT is now $CNT' DEBUG
declare -i CNT=0
while [ $CNT -lt 3 ] ; do
    CNT=CNT+1
done
When it runs with tracing, the value of CNT is displayed after every line.
$ bash -x debug_demo.sh
+ trap ': CNT is now $CNT' DEBUG
+ declare -i CNT=0
++ : CNT is now 0
+ '[' 0 -lt 3 ']'
++ : CNT is now 0
+ CNT=CNT+1
++ : CNT is now 1
+ '[' 1 -lt 3 ']'
++ : CNT is now 1
+ CNT=CNT+1
++ : CNT is now 2
+ '[' 2 -lt 3 ']'
++ : CNT is now 2
+ CNT=CNT+1
++ : CNT is now 3
+ '[' 3 -lt 3 ']'
++ : CNT is now 3
The output of a command can be saved to a file with the tee command. The name symbolizes a pipe that splits into two at a T connection: A copy of the output is stored in a file without redirecting the original standard output. To capture both standard output and standard error, redirect standard error to standard output before piping the results to tee.
$ bash buggy_script.sh 2>&1 | tee results.txt
The tee --append (-a) switch adds the output to the end of an existing file. The --ignore-interrupts (-i) switch keeps tee running even if it's interrupted by a Linux signal.
This technique doesn't copy what is typed on standard input. To get a complete recording of a script's run, Linux has a script command. When a shell script is running under script, a file named typescript is created in the current directory. The typescript file is a text file that records a list of everything that appears in the shell session.
You can stop the recording process with the exit command.
$ script
Script started, file is typescript
$ bash buggy_script.sh
...
$ exit
exit
Script done, file is typescript
To test cron scripts without installing them under cron, use the watch command. watch periodically re-runs a command and displays the results on the screen. By default, watch runs the command every two seconds, but you can specify a different number of seconds with the --interval= (or -n) switch. You can filter the results so that only differences are shown (--differences or -d) or so that all the differences so far are shown (--differences=cumulative).
There are two commands available for timing a program or script. The results of the built-in time are formatted according to the value of the TIMEFORMAT variable. The layout of TIMEFORMAT is similar to the date command's formatting string in that it uses a set of % format codes.
%%-- A literal %.
%[precision][l]R-- The real time; the elapsed time in seconds.
%[precision][l]U-- The number of CPU seconds spent in user mode.
%[precision][l]S-- The number of CPU seconds spent in system mode.
%P-- The CPU percentage, computed as (%U + %S) / %R.
The precision indicates the number of decimal positions to show, with a default of 3. The character l (long) prints the value divided into minutes and seconds. If there is no TIMEFORMAT variable, Bash uses \nreal\t%3lR\nuser\t%3lU\nsys\t%3lS.
$ unset TIMEFORMAT
$ time ls > /dev/null

real    0m0.018s
user    0m0.010s
sys     0m0.010s
$ declare -x TIMEFORMAT="%P"
$ time ls > /dev/null
75.34
$ declare -x TIMEFORMAT="The real time is %lR"
$ time ls > /dev/null
The real time is 0m0.023s
Notice the times can vary slightly between the runs because other programs running on the computer affect them. To get the most accurate time, test a script several times and take the lowest value.
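That advice can be sketched as a small loop (the command being timed, ls /tmp, is only a placeholder):

```shell
#!/bin/bash
# Sketch: time a command several times and keep the lowest real time.
# TIMEFORMAT='%3R' makes the time keyword print bare seconds.
TIMEFORMAT='%3R'
best=
for run in 1 2 3; do
    # The time keyword reports on stderr; the brace group captures it.
    t=$( { time ls /tmp > /dev/null; } 2>&1 )
    # Keep the smaller of the two values; awk handles the float compare.
    best=$(awk -v a="$t" -v b="${best:-$t}" 'BEGIN { print (a < b) ? a : b }')
done
echo "best real time: ${best}s"
```

The output is the lowest of the three measured wall-clock times, which is the figure least distorted by other programs running on the machine.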
Linux also has a separate executable time command. This variation cannot time a pipeline, but it displays additional statistics. To use it, use the command command to bypass the Bash time.
$ command time myprog
3.09user 0.95system 0:05.84elapsed 69%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (4786major+4235minor)pagefaults 0swaps
Like Bash time, Linux time can format the results. The format can be stored in a variable called TIME (not TIMEFORMAT) or it can be explicitly indicated with the --format (-f) switch.
%%-- A literal %.
%E-- The real time; the elapsed time in the hours:minutes:seconds format.
%e-- The real time; the elapsed time in seconds.
%S-- The system time in CPU seconds.
%U-- The user time in CPU seconds.
%P-- The percentage of the CPU used by the program.
%M-- The maximum size in memory of the program in kilobytes.
%t-- The average resident set size of the program in kilobytes.
%D-- The average size of the unshared data area.
%p-- The average size of the unshared stack in kilobytes.
%X-- The average size of the shared text area.
%Z-- The size of system pages, in bytes.
%F-- The number of major page faults.
%R-- The number of minor page faults (where a page was previously loaded and is still cached by Linux).
%W-- The number of times the process was swapped.
%c-- The number of time-slice context switches.
%w-- The number of voluntary context switches.
%I-- The number of file system inputs.
%O-- The number of file system outputs.
%r-- The number of socket messages received.
%s-- The number of socket messages sent.
%k-- The number of signals received.
%C-- The command line.
%x-- The exit status.
Statistics not relevant to your hardware are shown as zero.
$ command time grep ken /etc/aliases
Command exited with non-zero status 1
0.00user 0.00system 0:00.02elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (142major+19minor)pagefaults 0swaps
$ command time --format "%P" grep ken /etc/aliases
Command exited with non-zero status 1
0%
$ command time --format "Major faults = %F" grep ken /etc/aliases
Command exited with non-zero status 1
Major faults = 141
The --portability (-p) switch forces time to adhere to the POSIX standard, the same as Bash time -p, turning off many of the extended features.
$ command time --portability grep ken /etc/aliases
Command exited with non-zero status 1
real 0.00
user 0.00
sys 0.00

The results can be redirected to a file with --output (-o), or appended to a file with --append (-a). The --verbose (-v) option gives a detailed explanation of each statistic.
Adapted from BashGuide-Practices - Greg's Wiki
Very often you will find yourself clueless as to why your script isn't acting the way you want it to. Resolving this problem is always just a matter of common sense and debugging techniques.
If your script still doesn't seem to agree with you, maybe your perception of the way things work is wrong. Try going back to the manual (or this guide) to re-evaluate whether commands do exactly what you think they do, or the syntax is what you think it is. Very often people misunderstand what for does, how Word Splitting works, or how often they should use quotes.
Keep the tips and good practice guidelines in this guide in mind as well. They often help you avoid bugs and problems with scripts.
I mentioned this in the Scripts section of this guide too, but it's worth repeating here. First of all, make sure your script's header is actually #! /bin/bash. If it is missing, or if it's something like #! /bin/sh, then you deserve the problems you're having: it means you're probably not even using Bash to run your code, and that will obviously cause issues. Also, make sure you have no Carriage Return characters at the ends of your lines. These are left behind by scripts written in Microsoft Windows(tm) editors. You can get rid of them fairly easily like this:
$ tr -d '\r' < myscript > tmp && mv tmp myscript
The BashFAQ and BashPitfalls pages explain common misconceptions and issues encountered by other Bash scripters. It's very likely that your problem will be described there in some shape or form.
To be able to find your problem in there, you'll obviously need to have Diagnosed it properly. You'll need to know what you're looking for.
There are people in the IRC #bash channel almost 24/7. This channel resides on the freenode IRC network. To reach us, you need an IRC client. Connect it to irc.freenode.net, and /join #bash.
Make sure that you know what your real problem is and have stepped through it on paper, so you can explain it well. We don't like having to guess at things. Start by explaining what you're trying to do with your script.
Either way, please have a look at this page before entering #bash: XyProblem.
Unless you know what exactly the problem is, you most likely won't come up with a solution anytime soon. So make sure you understand what exactly goes wrong. Evaluate the symptoms and/or error messages.
Try to formulate the problem as a sentence. This will also be vital if you're going to ask other people for help with your problem. You don't want them to have to go through your whole script or run it so that they understand what's going on. No; you need to make the problem perfectly clear to yourself and to anybody trying to help you. This requirement stands until the day the human race invents means of telepathy.
If staring at your code doesn't give you a divine inspiration, the next thing you should do is try to minimize your codebase to isolate the problem. Don't worry about preserving the functionality of your script. The only thing you want to preserve is the logic of the code block that seems buggy.
Often, the best way to do this is to copy your script to a new file and start deleting everything that seems irrelevant from it. Alternatively, you can make a new script that does something similar in the same code fashion, and keep adding structure until you duplicate the problem.
As soon as you delete something that makes the problem go away (or add something that makes it appear), you'll have found where the problem lies. Even if you haven't precisely pinpointed the issue, at least you're not staring at a massive script anymore, but hopefully at a stub of no more than 3-7 lines.
For example, if you have a script that lets you browse images in your image folder by date, and for some reason you can't manage to iterate over your images in the folder properly, it suffices to reduce the script to this part:
for image in $(ls -R "$imgFolder"); do
    echo "$image"
done
Your actual script will be far more complex, and the inside of the for loop will also be far longer. But the essence of the problem is this code. Once you've reduced your problem to this, it may be easier to see the problem you're facing. Your echo spits out parts of image names; it looks like all whitespace is replaced by newlines. That must be because echo is run once for each whitespace-terminated chunk, not once for every image name (as a result, image names that contain whitespace appear split open in the output). With this reduced code, it's easier to see that the cause is actually your for statement, which splits up the output of ls into words. That's because ls is UNPARSABLE in a bugless manner (do not ever use ls in scripts, unless you want to show its output to a user).
We can't use a recursive glob (unless we're in bash 4), so we have to use find to retrieve the filenames. One fix would be:
find "$imgFolder" -print0 | while IFS= read -r -d '' image; do
    echo "$image"
done
Now that you've fixed the problem in this tiny example, it's easy to merge it back into the original script.
If you have a complicated mess of scripts, you might find it helpful to change PS4 before setting -x:
export PS4='+$BASH_SOURCE:$LINENO:$FUNCNAME: '
If the script goes too fast for you, you can enable code-stepping. The following code uses the DEBUG trap to inform the user about what command is about to be executed and to wait for confirmation before doing so. Put this code in your script, at the location where you wish to begin stepping:
trap '(read -p "[$BASH_SOURCE:$LINENO] $BASH_COMMAND?")' DEBUG
Jun 10, 2021 | www.redhat.com
Exit status

In Bash scripting, $? prints the exit status. If it returns zero, there is no error. If it is non-zero, you can conclude that the earlier task had some issue. A basic example is as follows:

$ cat myscript.sh
#!/bin/bash
mkdir learning
echo $?

If you run the above script once, it will print 0 because the directory does not exist, so the script creates it. Naturally, you will get a non-zero value if you run the script a second time, as seen below:

$ sh myscript.sh
mkdir: cannot create directory 'learning': File exists
1
It is always recommended to enable the debug mode by adding the -x option to your shell script as below:

$ cat test3.sh
#!/bin/bash
set -x
echo "hello World"
mkdiir testing

$ ./test3.sh
+ echo 'hello World'
hello World
+ mkdiir testing
./test3.sh: line 4: mkdiir: command not found

You can write a debug function as below, which you can call at any time, using the example below:
$ cat debug.sh
#!/bin/bash
_DEBUG="on"
function DEBUG() {
    [ "$_DEBUG" == "on" ] && $@
}
DEBUG echo 'Testing Debugging'
DEBUG set -x
a=2
b=3
c=$(( $a + $b ))
DEBUG set +x
echo "$a + $b = $c"

Which prints:

$ ./debug.sh
Testing Debugging
+ a=2
+ b=3
+ c=5
+ DEBUG set +x
+ '[' on == on ']'
+ set +x
2 + 3 = 5

Standard error redirection

You can redirect all the system errors to a custom file using standard error, which is denoted by the number 2. Execute it in normal Bash commands, as demonstrated below:

$ mkdir users 2> errors.txt
$ cat errors.txt
mkdir: cannot create directory 'users': File exists

Most of the time, it is difficult to find the exact line number in scripts. To print the line number with the error, use the PS4 option (supported with Bash 4.1 or later). Example below:
$ cat test3.sh
#!/bin/bash
PS4='$LINENO: '
set -x
echo "hello World"
mkdiir testing

You can easily see the line number while reading the errors:

$ ./test3.sh
5: echo 'hello World'
hello World
6: mkdiir testing
./test3.sh: line 6: mkdiir: command not found
Jul 04, 2020 | zwischenzugs.com
Managing Variables
Variables are a core part of most serious bash scripts (and even one-liners!), so managing them is another important way to reduce the possibility of your script breaking.
Change your script to add the 'set' line immediately after the first line and see what happens:
#!/bin/bash
set -o nounset
A="some value"
echo "${A}"
echo "${B}"

I always set nounset on my scripts as a habit. It can catch many problems before they become serious.

Tracing Variables

If you are working with a particularly complex script, then you can get to the point where you are unsure what happened to a variable.
Try running this script and see what happens:
#!/bin/bash
set -o nounset
declare A="some value"
function a {
    echo "${BASH_SOURCE}>A A=${A} LINENO:${1}"
}
trap "a $LINENO" DEBUG
B=value
echo "${A}"
A="another value"
echo "${A}"
echo "${B}"

There's a problem with this code. The output is slightly wrong. Can you work out what is going on? If so, try and fix it.
You may need to refer to the bash man page, and make sure you understand quoting in bash properly.
It's quite a tricky one to fix 'properly', so if you can't fix it, or work out what's wrong with it, then ask me directly and I will help.
Profiling Bash Scripts

Returning to the xtrace (or set -x) flag, we can exploit its use of the PS4 variable to implement the profiling of a script:

#!/bin/bash
set -o nounset
set -o xtrace
declare A="some value"
PS4='$(date "+%s%N => ")'
B=
echo "${A}"
A="another value"
echo "${A}"
echo "${B}"
ls
pwd
curl -q bbc.co.uk

From this you should be able to tell what PS4 does. Have a play with it, and read up on and experiment with the other PS variables to get familiar with what they do.

NOTE: If you are on a Mac, then you might only get second-level granularity on the date!
Linting with Shellcheck

Finally, here is a very useful tip for understanding bash more deeply and improving any bash scripts you come across.
Shellcheck is a website and a package available on most platforms that gives you advice to help fix and improve your shell scripts. Very often, its advice has prompted me to research more deeply and understand bash better.
Here is some example output from a script I found on my laptop:
$ shellcheck shrinkpdf.sh

In shrinkpdf.sh line 44:
    -dColorImageResolution=$3 \
    ^-- SC2086: Double quote to prevent globbing and word splitting.

In shrinkpdf.sh line 46:
    -dGrayImageResolution=$3 \
    ^-- SC2086: Double quote to prevent globbing and word splitting.

In shrinkpdf.sh line 48:
    -dMonoImageResolution=$3 \
    ^-- SC2086: Double quote to prevent globbing and word splitting.

In shrinkpdf.sh line 57:
    if [ ! -f "$1" -o ! -f "$2" ]; then
    ^-- SC2166: Prefer [ p ] || [ q ] as [ p -o q ] is not well defined.

In shrinkpdf.sh line 60:
    ISIZE="$(echo $(wc -c "$1") | cut -f1 -d\ )"
    ^-- SC2046: Quote this to prevent word splitting.
    ^-- SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.

In shrinkpdf.sh line 61:
    OSIZE="$(echo $(wc -c "$2") | cut -f1 -d\ )"
    ^-- SC2046: Quote this to prevent word splitting.
    ^-- SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.

The most common reminders are regarding potential quoting issues, but you can see other useful tips in the above output, such as preferred arguments to the test construct, and advice on "useless" echos.

Exercise

1) Find a large bash script on a social coding site such as GitHub, and run shellcheck over it. Contribute back any improvements you find.
Nov 24, 2008 | www.linux.com
Author: Ben Martin
The Bash Debugger Project (bashdb) lets you set breakpoints, inspect variables, perform a backtrace, and step through a bash script line by line. In other words, it provides the features you expect in a C/C++ debugger to anyone programming a bash script.

To see if your standard bash executable has bashdb support, execute the command shown below; if you are not taken to a bashdb prompt then you'll have to install bashdb yourself.
$ bash --debugger -c "set|grep -i dbg"
...
bashdb

The Ubuntu Intrepid repository contains a package for bashdb, but there is no special bashdb package in the openSUSE 11 or Fedora 9 repositories. I built from source using version 4.0-0.1 of bashdb on a 64-bit Fedora 9 machine, using the normal ./configure; make; sudo make install commands.

You can start the Bash Debugger using the bash --debugger foo.sh syntax or the bashdb foo.sh command. The former method is recommended except in cases where I/O redirection might cause issues, and it's what I used. You can also use bashdb through ddd or from an Emacs buffer.

The syntax for many of the commands in bashdb mimics that of gdb, the GNU debugger. You can step into functions, use next to execute the next line without stepping into any functions, generate a backtrace with bt, exit bashdb with quit or Ctrl-D, and examine a variable with print $foo. Aside from the prefixing of the variable with $ at the end of the last sentence, there are some other minor differences that you'll notice. For instance, pressing Enter on a blank line in bashdb executes the previous step or next command instead of whatever the previous command was.

The print command forces you to prefix shell variables with the dollar sign ($foo). A slightly shorter way of inspecting variables and functions is to use the x foo command, which uses declare to print variables and functions.

Both bashdb and your script run inside the same bash shell. Because bash lacks some namespace properties, bashdb will include some functions and symbols into the global namespace which your script can get at. bashdb prefixes its symbols with _Dbg_, so you should avoid that prefix in your scripts to avoid potential clashes. bashdb also uses some environment variables; it uses the DBG_ prefix for its own, and relies on some standard bash ones that begin with BASH_.

To illustrate the use of bashdb, I'll work on the small bash script below, which expects a numeric argument
n and calculates the nth Fibonacci number.

#!/bin/bash
version="0.01";

fibonacci() {
    n=${1:?If you want the nth fibonacci number, you must supply n as the first parameter.}
    if [ $n -le 1 ]; then
        echo $n
    else
        l=`fibonacci $((n-1))`
        r=`fibonacci $((n-2))`
        echo $((l + r))
    fi
}

for i in `seq 1 10`
do
    result=$(fibonacci $i)
    echo "i=$i result=$result"
done
$ bash --debugger ./fibonacci.sh ... (/home/ben/testing/bashdb/fibonacci.sh:3): 3: version="0.01"; bashdb bt ->0 in file `./fibonacci.sh' at line 3 ##1 main() called from file `./fibonacci.sh' at line 0 bashdb next (/home/ben/testing/bashdb/fibonacci.sh:16): 16: for i in `seq 1 10` bashdb list 16:==>for i in `seq 1 10` 17: do 18: result=$(fibonacci $i) 19: echo "i=$i result=$result" 20: done bashdb next (/home/ben/testing/bashdb/fibonacci.sh:18): 18: result=$(fibonacci $i) bashdb (/home/ben/testing/bashdb/fibonacci.sh:19): 19: echo "i=$i result=$result" bashdb x i result declare -- i="1" declare -- result="" bashdb print $i $result 1 bashdb break fibonacci Breakpoint 1 set in file /home/ben/testing/bashdb/fibonacci.sh, line 5. bashdb continue Breakpoint 1 hit (1 times). (/home/ben/testing/bashdb/fibonacci.sh:5): 5: fibonacci() { bashdb next (/home/ben/testing/bashdb/fibonacci.sh:6): 6: n=${1:?If you want the nth fibonacci number, you must supply n as the first parameter.} bashdb next (/home/ben/testing/bashdb/fibonacci.sh:7): 7: if [ $n -le 1 ]; then bashdb x n declare -- n="2" bashdb quitbt
) shows that the script begins at line 3, which is where the version variable is written. Thenext
andlist
commands then progress to the next line of the script a few times and show the context of the current execution line. After one of thenext
commands I press Enter to executenext
again. I invoke theexamine
command through the single letter shortcutx
. Notice that the variables are printed out usingdeclare
as opposed to their display on the next line usingfibonacci
function andcontinue
the execution of the shell script. Thefibonacci
function is called and I move to thenext
line a few times and inspect a variable.Notice that the number in the bashdb prompt toward the end of the above example is enclosed in parentheses. Each set of parentheses indicates that you have entered a subshell. In this example this is due to being inside a shell function.
In the example below I use a watchpoint to see if and where the result variable changes. Notice the initial next command: I found that if I didn't issue that next, my watch would fail to work. As you can see, after I issue c to continue execution, execution is stopped whenever the result variable is about to change, and the old and new values are displayed.

(/home/ben/testing/bashdb/fibonacci.sh:3):
3:      version="0.01";
bashdb<0> next
(/home/ben/testing/bashdb/fibonacci.sh:16):
16:     for i in `seq 1 10`
bashdb<1> watch result
0: ($result)==0 arith: 0
bashdb<2> c
Watchpoint 0: $result changed:
  old value: ''
  new value: '1'
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"
bashdb<3> c
i=1 result=1
i=2 result=1
Watchpoint 0: $result changed:
  old value: '1'
  new value: '2'
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"

To get around the strange initial next requirement, I used the watche command in the session below, which lets you stop whenever an expression becomes true. In this case I'm not overly interested in the first few Fibonacci numbers, so I set a watch to have execution stop when the result is greater than 4. You can also use a watche command without a condition; for example, watche result would stop execution whenever the result variable changed.

$ bash --debugger ./fibonacci.sh
(/home/ben/testing/bashdb/fibonacci.sh:3):
3:      version="0.01";
bashdb<0> watche result > 4
0: (result > 4)==0 arith: 1
bashdb<1> continue
i=1 result=1
i=2 result=1
i=3 result=2
i=4 result=3
Watchpoint 0: result > 4 changed:
  old value: '0'
  new value: '1'
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"

When a shell script goes wrong, many folks use the time-tested method of incrementally adding echo or printf statements to look for invalid values or code paths that are never reached. With bashdb, you can save yourself time by just adding a few watches on variables or setting a few breakpoints.
Sep 05, 2019 | linuxconfig.org
... ... ... How to use other Bash options

The Bash options for debugging are turned off by default, but once they are turned on by using the set command, they stay on until explicitly turned off. If you are not sure which options are enabled, you can examine the $- variable to see the current state of all the options:

$ echo $-
himBHs
$ set -xv && echo $-
himvxBHs

There is another useful switch we can use to help us find variables referenced without having any value set. This is the -u switch, and just like -x and -v it can also be used on the command line, as we see in the following example:

[Screenshot: setting the -u option at the command line]

We mistakenly assigned a value of 7 to the variable called "level" and then tried to echo a variable named "score", which simply resulted in printing nothing at all to the screen. Absolutely no debug information was given. Setting our -u switch allows us to see a specific error message, "score: unbound variable", that indicates exactly what went wrong.

We can use those options in short Bash scripts to give us debug information to identify problems that do not otherwise trigger feedback from the Bash interpreter. Let's walk through a couple of examples.
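The unbound-variable behaviour just described can be reproduced from any prompt. Here is a minimal sketch (the variable name score is only an illustration); both checks run in child shells so the options don't stick to your interactive session:

```shell
# An unset variable expands to nothing by default, but aborts the
# shell when -u (nounset) is in effect. Run both cases in child
# shells so this shell's options are left untouched.
unset score

bash -c 'echo "score is: $score"'      # silently prints an empty value
default_status=$?

bash -uc 'echo "score is: $score"'     # complains: score: unbound variable
nounset_status=$?

echo "without -u: $default_status, with -u: $nounset_status"
```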
#!/bin/bash
read -p "Path to be added: " $path

if [ "$path" = "/home/mike/bin" ]; then
        echo $path >> $PATH
        echo "new path: $PATH"
else
        echo "did not modify PATH"
fi

[Screenshot: using the -x option when running your Bash script]

In the example above we run the addpath script normally and it simply does not modify our PATH. It does not give us any indication of why, or clues to mistakes made. Running it again using the -x option clearly shows us that the left side of our comparison is an empty string. $path is an empty string because we accidentally put a dollar sign in front of "path" in our read statement. Sometimes we look right at a mistake like this and it doesn't look wrong until we get a clue and think, "Why is $path evaluated to an empty string?"

Looking at this next example, we also get no indication of an error from the interpreter. We only get one value printed per line instead of two. This is not an error that will halt execution of the script, so we're left to simply wonder without being given any clues. Using the -u switch, we immediately get a notification that our variable j is not bound to a value. So these are real time savers when we make mistakes that do not result in actual errors from the Bash interpreter's point of view.

#!/bin/bash
for i in 1 2 3
do
        echo $i $j
done

[Screenshot: using the -u option when running your script from the command line]

Now surely you are thinking that sounds fine, but we seldom need help debugging mistakes made in one-liners at the command line or in short scripts like these. We typically struggle with debugging when we deal with longer and more complicated scripts, and we rarely need to set these options and leave them set while we run multiple scripts. Setting -xv options and then running a more complex script will often add confusion by doubling or tripling the amount of output generated.
#!/bin/bash -x

This will set the -x option for the entire file, or until it is unset during the script execution, allowing you to simply run the script by typing the filename instead of passing it to Bash as a parameter. A long script or one that has a lot of output will still become unwieldy using this technique, however, so let's look at a more specific way to use options.
For a more targeted approach, surround only the suspicious blocks of code with the options you want. This approach is great for scripts that generate menus or detailed output, and it is accomplished by using the set keyword with plus or minus once again.
#!/bin/bash
read -p "Path to be added: " $path

set -xv
if [ "$path" = "/home/mike/bin" ]; then
        echo $path >> $PATH
        echo "new path: $PATH"
else
        echo "did not modify PATH"
fi
set +xv

[Screenshot: wrapping options around a block of code in your script]

We surrounded only the blocks of code we suspect in order to reduce the output, making our task easier in the process. Notice we turn on our options only for the code block containing our if-then-else statement, then turn off the option(s) at the end of the suspect block. We can turn these options on and off multiple times in a single script if we can't narrow down the suspicious areas, or if we want to evaluate the state of variables at various points as we progress through the script. There is no need to turn off an option if we want it to continue for the remainder of the script execution.
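The wrap-the-suspect-block pattern can be verified from the command line. A minimal sketch (the variable names are arbitrary): run a child script that traces only one assignment, and collect the xtrace output, which goes to stderr:

```shell
# Only the commands between `set -x` and `set +x` appear in the trace.
trace=$(bash 2>&1 >/dev/null <<'EOF'
before=1
set -x
suspect=2
set +x
after=3
EOF
)
echo "$trace"
```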
For completeness' sake we should also mention that there are debuggers written by third parties that will allow us to step through the code execution line by line. You might want to investigate these tools, but most people find that they are not actually needed.
As seasoned programmers will suggest, if your code is too complex to isolate suspicious blocks with these options then the real problem is that the code should be refactored. Overly complex code means bugs can be difficult to detect and maintenance can be time consuming and costly.
One final thing to mention regarding Bash debugging options is that a file globbing option also exists and is set with -f. Setting this option turns off globbing (expansion of wildcards to generate file names) while it is enabled. This -f option can be used as a switch at the command line with bash, after the shebang in a file or, as in this example, to surround a block of code:

#!/bin/bash
echo "ignore fileglobbing option turned off"
ls *
echo "ignore file globbing option set"
set -f
ls *
set +f

[Screenshot: using the -f option to turn off file globbing]

How to use trap to help debug

There are more involved techniques worth considering if your scripts are complicated, including using an assert function as mentioned earlier. One such method to keep in mind is the use of trap. Shell scripts allow us to trap signals and do something at that point.
A simple but useful example you can use in your Bash scripts is to trap on EXIT:

#!/bin/bash
trap 'echo score is $score, status is $status' EXIT

if [ -z "$1" ]; then
        status="default"
else
        status="$1"
fi

score=0
if [ "${USER}" = 'superman' ]; then
        score=99
elif [ $# -gt 1 ]; then
        score="$2"
fi

[Screenshot: using trap EXIT to help debug your script]
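The same EXIT-trap idea can be reduced to a few lines you can paste anywhere; the child script below never calls exit, yet the trap still fires when execution falls off the end (the variable names are only illustrative):

```shell
# The trap's echo runs when the child script reaches its end,
# with no explicit `exit` statement anywhere.
output=$(bash <<'EOF'
trap 'echo "score is $score, status is $status"' EXIT
status="default"
score=99
EOF
)
echo "$output"
```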
As you can see, just dumping the current values of variables to the screen can be useful to show where your logic is failing. The EXIT signal obviously does not need an explicit exit statement to be generated; in this case the echo statement is executed when the end of the script is reached.

Another useful trap to use with Bash scripts is
DEBUG. This happens after every statement, so it can be used as a brute-force way to show the values of variables at each step in the script execution.

#!/bin/bash
trap 'echo "line ${LINENO}: score is $score"' DEBUG

score=0

if [ "${USER}" = "mike" ]; then
        let "score += 1"
fi

let "score += 1"

if [ "$1" = "7" ]; then
        score=7
fi
exit 0

[Screenshot: using trap DEBUG to help debug your script]

Conclusion

When you notice your Bash script not behaving as expected and the reason is not clear to you, consider what information would be useful to help you identify the cause, then use the most comfortable tools available to help you pinpoint the issue. The xtrace option -x is easy to use and probably the most useful of the options presented here, so consider trying it out next time you're faced with a script that's not doing what you thought it would.
Aug 27, 2019 | bash.cyberciti.biz
BASH_LINENO

An array variable whose members are the line numbers in source files corresponding to each member of FUNCNAME. ${BASH_LINENO[$i]} is the line number in the source file where ${FUNCNAME[$i]} was called. The corresponding source file name is ${BASH_SOURCE[$i]}. Use LINENO to obtain the current line number.
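A short, self-contained illustration of these variables (the function names here are invented for the sketch):

```shell
# report_caller looks one frame up the call stack: BASH_LINENO[0]
# holds the line number its caller used, FUNCNAME[1] names that caller.
report_caller() {
    echo "called from line ${BASH_LINENO[0]} of ${FUNCNAME[1]}"
}

outer() {
    report_caller
}

report=$(outer)
echo "$report"
```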
Aug 27, 2019 | stackoverflow.com
How to show line number when executing bash script
dspjm ,Jul 23, 2013 at 7:31
I have a test script which has a lot of commands and will generate lots of output. I use set -x or set -v and set -e, so the script stops when an error occurs. However, it's still rather difficult for me to locate which line the execution stopped on in order to locate the problem. Is there a method which can output the line number of the script before each line is executed? Or output the line number before the command output generated by set -x? Or any method which can deal with my script line location problem would be a great help. Thanks.

Suvarna Pattayil, Jul 28, 2017 at 17:25
You mention that you're already using -x. The variable PS4 holds the prompt printed before each command line is echoed when the -x option is set; it defaults to : followed by a space.

You can change PS4 to emit LINENO (the line number in the script or shell function currently executing).

For example, if your script reads:

$ cat script
foo=10
echo ${foo}
echo $((2 + 2))

Executing it thus would print line numbers:

$ PS4='Line ${LINENO}: ' bash -x script
Line 1: foo=10
Line 2: echo 10
10
Line 3: echo 4
4

http://wiki.bash-hackers.org/scripting/debuggingtips gives the ultimate PS4 that would output everything you will possibly need for tracing:

export PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
In Bash, $LINENO contains the line number of the currently executing script.

If you need to know the line number where the function was called, try $BASH_LINENO. Note that this variable is an array.

For example:

#!/bin/bash

function log() {
    echo "LINENO: ${LINENO}"
    echo "BASH_LINENO: ${BASH_LINENO[*]}"
}

function foo() {
    log "$@"
}

foo "$@"

See here for details of Bash variables.
Eliran Malka ,Apr 25, 2017 at 10:14
Simple (but powerful) solution: place echo statements around the code you think causes the problem and move the echo line by line until the message no longer appears on screen, because the script has stopped due to an error before reaching it.

Even more powerful solution: install bashdb, the bash debugger, and debug the script line by line.

kklepper, Apr 2, 2018 at 22:44
Workaround for shells without LINENO

In a fairly sophisticated script I wouldn't like to see all line numbers; rather I would like to be in control of the output.
Define a function:

echo_line_no () {
    grep -n "$1" $0 | sed "s/echo_line_no//"
    # grep the line(s) containing input $1 with line numbers
    # replace the function name with nothing
} # echo_line_no

Use it with quotes like:

echo_line_no "this is a simple comment with a line number"

Output is:

16 "this is a simple comment with a line number"

if the number of this line in the source file is 16.
This basically answers the question "How to show line number when executing bash script" for users of ash or other shells without LINENO.

Anything more to add?
Sure. Why do you need this? How do you work with this? What can you do with this? Is this simple approach really sufficient or useful? Why do you want to tinker with this at all?
Want to know more? Read reflections on debugging
Dec 09, 2017 | stackoverflow.com
That line defines what program will execute the given script. For sh, normally that line should start with the #! characters, as so:

#!/bin/sh -e

The -e flag's long name is errexit, causing the script to immediately exit on the first error.
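The effect is easy to see in a child shell; this sketch runs a three-line script in which the failing false command cuts execution short:

```shell
# Under errexit the second echo is never reached: `false` returns
# non-zero and the shell exits immediately.
output=$(bash <<'EOF'
set -e
echo "before the failure"
false
echo "after the failure"
EOF
)
errexit_status=$?
echo "$output"
echo "child exit status: $errexit_status"
```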
Oct 25, 2017 | linuxconfig.org
Trap syntax is very simple and easy to understand: first we must call the trap builtin, followed by the action(s) to be executed, then we must specify the signal(s) we want to react to:

trap [-lp] [[arg] sigspec]

Let's see what the possible trap options are for.

When used with the -l flag, the trap command will just display a list of signals associated with their numbers. It's the same output you can obtain running the kill -l command:

$ trap -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

It's really important to note that it's possible to react only to signals which allow the script to respond: the SIGKILL and SIGSTOP signals cannot be caught, blocked or ignored.

Apart from signals, traps can also react to some pseudo-signals such as EXIT, ERR or DEBUG, but we will see them in detail later. For now just remember that a signal can be specified either by its number or by its name, even without the SIG prefix.

About the
-p option now. This option makes sense only when a command is not provided (otherwise it will produce an error). When trap is used with it, a list of the previously set traps will be displayed. If the signal name or number is specified, only the trap set for that specific signal will be displayed; otherwise no distinctions will be made, and all the traps will be displayed:

$ trap 'echo "SIGINT caught!"' SIGINT

We set a trap to catch the SIGINT signal: it will just display the "SIGINT caught" message onscreen when the given signal is received by the shell. If we now use trap with the -p option, it will display the trap we just defined:

$ trap -p
trap -- 'echo "SIGINT caught!"' SIGINT

By the way, the trap is now "active", so if we send a SIGINT signal, either using the kill command or with the CTRL-c shortcut, the associated command in the trap will be executed (^C is just printed because of the key combination):

^CSIGINT caught!

Trap in action

We now will write a simple script to show trap in action; here it is:

#!/usr/bin/env bash
#
# A simple script to demonstrate how trap works
#
set -e
set -u
set -o pipefail

trap 'echo "signal caught, cleaning..."; rm -i linux_tarball.tar.xz' SIGINT SIGTERM

echo "Downloading tarball..."
wget -O linux_tarball.tar.xz https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.13.5.tar.xz &> /dev/null

The above script just tries to download the latest linux kernel tarball into the directory it is launched from, using wget
. During the task, if the SIGINT or SIGTERM signals are received (notice how you can specify more than one signal on the same line), the partially downloaded file will be deleted.

In this case the commands are actually two: the first is the echo, which prints the message onscreen, and the second is the actual rm command (we provided the -i option to it, so it will ask for user confirmation before removing), and they are separated by a semicolon. Instead of specifying commands this way, you can also call functions: this would give you more re-usability. Notice that if you don't provide any command, the signal(s) will just be ignored!

This is the output of the script above when it receives a SIGINT signal:
$ ./fetchlinux.sh
Downloading tarball...
^Csignal caught, cleaning...
rm: remove regular file 'linux_tarball.tar.xz'?

A very important thing to remember is that when a script is terminated by a signal, like above, its exit status will be the result of 128 + the signal number. As you can see, the script above, being terminated by a SIGINT, has an exit status of 130:

$ echo $?
130

Lastly, you can disable a trap just by calling trap followed by the - sign, followed by the signal(s) name or number:

trap - SIGINT SIGTERM

The signals will take back the value they had upon entrance to the shell.

Pseudo-signals

As already mentioned above, trap can be set not only for signals which allow the script to respond but also for what we can call "pseudo-signals". They are not technically signals, but they correspond to certain situations that can be specified:

EXIT
When EXIT is specified in a trap, the command of the trap will be executed on exit from the shell.

ERR
This will cause the argument of the trap to be executed when a command returns a non-zero exit status, with some exceptions (the same as those of the shell errexit option): the command must not be part of a while or until loop; it must not be part of an if construct, nor part of a && or || list, and its value must not be inverted by using the ! operator.

DEBUG
This will cause the argument of the trap to be executed before every simple command, for, case or select command, and before the first command in shell functions.

RETURN
The argument of the trap is executed after a function or a script sourced by using source or the . command.
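The "128 + the signal number" rule from earlier in this article can be confirmed without any downloads: terminate a sleeping child with SIGTERM (signal 15) and inspect the status that wait reports:

```shell
# A process killed by SIGTERM should report 128 + 15 = 143.
sleep 30 &
child=$!
kill -TERM "$child"
wait "$child"
term_status=$?
echo "exit status after SIGTERM: $term_status"
```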
Jul 25, 2017 | wiki.bash-hackers.org
Script execution

Your perfect Bash script executes with syntax errors

If you write Bash scripts with Bash-specific syntax and features, run them with Bash, and run them with Bash in native mode.
Wrong:

- no shebang
  - the interpreter used depends on the OS implementation and current shell
  - can be run by calling bash with the script name as an argument, e.g. bash myscript
- #!/bin/sh shebang
  - depends on what /bin/sh actually is; for a Bash it means compatibility mode, not native mode

See also:
Your script named "test" doesn't execute

Give it another name. The executable test already exists. In Bash it's a builtin; with other shells, it might be an executable file. Either way, it's a bad name choice!

Workaround: you can call it using the pathname:

/home/user/bin/test

Globbing

Brace expansion is not globbing

The following command line is not related to globbing (filename expansion):

# YOU EXPECT
# -i1.vob -i2.vob -i3.vob ...
echo -i{*.vob,}
# YOU GET
# -i*.vob -i

Why? The brace expansion is simple text substitution. All possible text formed by the prefix, the postfix and the braces themselves is generated. In the example, these are only two: -i*.vob and -i. The filename expansion happens after that, so there is a chance that -i*.vob is expanded to a filename - if you have files like -ihello.vob. But it definitely doesn't do what you expected.

Please see:
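You can watch the two-step behaviour directly; in a scratch directory with no .vob files the generated pattern finds nothing to match and is left as-is (the directory is created just for this sketch):

```shell
# Brace expansion produces -i*.vob and -i; with no matching files,
# pathname expansion then leaves -i*.vob untouched.
scratch=$(mktemp -d)
cd "$scratch"
expanded=$(echo -i{*.vob,})
cd - >/dev/null
rmdir "$scratch"
echo "$expanded"
```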
Test-command

- if [ $foo ] (the unquoted expansion breaks when $foo is empty or contains spaces; quote it: if [ "$foo" ])
- if [-d $dir] (the [ command needs a space after it: if [ -d "$dir" ])
Please see:
Variables

Setting variables

The Dollar-Sign

There is no $ (dollar-sign) when you reference the name of a variable! Bash is not PHP!

# THIS IS WRONG!
$myvar="Hello world!"

A variable name preceded by a dollar-sign always means that the variable gets expanded. In the example above, it might expand to nothing (because it wasn't set), effectively resulting in:

="Hello world!"

which definitely is wrong!

When you need the name of a variable, you write only the name, for example:
- (as shown above) to set variables: picture=/usr/share/images/foo.png
- to name variables to be used by the read builtin command: read picture
- to name variables to be unset: unset picture

When you need the content of a variable, you prefix its name with a dollar-sign, like:

- echo "The used picture is: $picture"

Whitespace

Putting spaces on either or both sides of the equal-sign (=) when assigning a value to a variable will fail.

# INCORRECT 1
example = Hello

# INCORRECT 2
example= Hello

# INCORRECT 3
example =Hello

The only valid form is no spaces between the variable name and assigned value:
# CORRECT 1
example=Hello

# CORRECT 2
example=" Hello"

Expanding (using) variables

A typical beginner's trap is quoting.

As noted above, when you want to expand a variable, i.e. "get the content", the variable name needs to be prefixed with a dollar-sign. But, since Bash knows various ways to quote and does word-splitting, the result isn't always the same.

Let's define an example variable containing text with spaces:

example="Hello world"

Used form    result       number of words
$example     Hello world  2
"$example"   Hello world  1
\$example    $example     1
'$example'   $example     1

If you use parameter expansion, you must use the name (PATH) of the referenced variables/parameters, i.e. not ($PATH):

# WRONG!
echo "The first character of PATH is ${$PATH:0:1}"

# CORRECT
echo "The first character of PATH is ${PATH:0:1}"

Note that if you are using variables in arithmetic expressions, then the bare name is allowed:

((a=$a+7))     # Add 7 to a
((a = a + 7))  # Add 7 to a. Identical to the previous command.
((a += 7))     # Add 7 to a. Identical to the previous command.
a=$((a+7))     # POSIX-compatible version of previous code.

Please see:
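The word counts in the quoting table above can be verified with a tiny helper function (count_words is an invented name for this sketch):

```shell
# Each call reports how many arguments survived expansion,
# word-splitting and quote removal.
count_words() { echo $#; }

example="Hello world"
unquoted=$(count_words $example)    # split into two words
quoted=$(count_words "$example")    # kept as one word
escaped=$(count_words \$example)    # the literal string $example
echo "unquoted=$unquoted quoted=$quoted escaped=$escaped"
```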
Exporting

Exporting a variable means giving newly created (child-)processes a copy of that variable. It does not copy a variable created in a child process back to the parent process. The following example does not work, since the variable hello is set in a child process (the process you execute to start that script, ./script.sh):

$ cat script.sh
export hello=world

$ ./script.sh
$ echo $hello
$

Exporting is one-way. The direction is parent process to child process, not the reverse. The above example will work when you don't execute the script, but include ("source") it:

$ source ./script.sh
$ echo $hello
world
$

In this case, the export command is of no use.

Please see:
Exit codes

Reacting to exit codes

If you just want to react to an exit code, regardless of its specific value, you don't need to use $? in a test command like this:

grep ^root: /etc/passwd >/dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "root was not found - check the pub at the corner"
fi

This can be simplified to:

if ! grep ^root: /etc/passwd >/dev/null 2>&1; then
    echo "root was not found - check the pub at the corner"
fi

Or, simpler yet:

grep ^root: /etc/passwd >/dev/null 2>&1 || echo "root was not found - check the pub at the corner"

If you need the specific value of $?, there's no other choice. But if you need only a "true/false" exit indication, there's no need for $?.

See also:
Output vs. Return Value

It's important to remember the different ways to run a child command, and whether you want the output, the return value, or neither.

When you want to run a command (or a pipeline) and save (or print) the output, whether as a string or an array, you use Bash's $(command) syntax:

$(ls -l /tmp)
newvariable=$(printf "foo")

When you want to use the return value of a command, just use the command, or add ( ) to run a command or pipeline in a subshell:

if grep someuser /etc/passwd ; then
    # do something
fi

if ( w | grep someuser | grep sqlplus ) ; then
    # someuser is logged in and running sqlplus
fi

Make sure you're using the form you intended:

# WRONG!
if $(grep ERROR /var/log/messages) ; then
    # send alerts
fi
Wednesday March 10, @02:06PM (#8523604)
My 2 cent tip for budding shell script authors:

If the script is not working as you want, put set -x on the first line and set +x on the last line. You will see the exact execution path and variable expansion, very neat for debugging.
February 23, 2015
In most programming languages a debugger tool is available for debugging. A debugger is a tool that can run a program or script, enabling you to examine the internals of the script or program as it runs. Shell scripting has no dedicated debugger tool, but with the help of command line options (-n, -v and -x) we can do the debugging.
-n option

The -n option, short for noexec (as in no execution), tells the shell not to run the commands. Instead, the shell just checks for syntax errors. With the -n option, the shell doesn't execute your commands, so you have a safe way to test whether your scripts contain syntax errors.
The follow example shows how to use -n option.
Let us consider a shell script with a name debug_quotes.sh
#!/bin/bash
echo "USER=$USER
echo "HOME=$HOME"
echo "OSNAME=$OSNAME"

Now run the script with the -n option:
$ sh -n debug_quotes
debug_quotes: 8: debug_quotes: Syntax error: Unterminated quoted string

As the above output shows, there is a syntax error: double quotes are missing.
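The same check can be scripted end-to-end. This sketch writes a throwaway copy of the broken script to a temporary file and asks the shell to syntax-check it without executing anything:

```shell
# bash -n parses the file but runs nothing; the unterminated quote
# is reported on stderr and a non-zero status is returned.
bad=$(mktemp)
cat > "$bad" <<'EOF'
#!/bin/bash
echo "USER=$USER
echo "HOME=$HOME"
echo "OSNAME=$OSNAME"
EOF

errmsg=$(bash -n "$bad" 2>&1)
noexec_status=$?
rm -f "$bad"

echo "syntax check status: $noexec_status"
echo "$errmsg"
```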
-v option
The -v option tells the shell to run in verbose mode. In practice, this means that the shell will echo each command prior to executing it. This is very useful in that it can often help to find errors.
Let us create a shell script with the name "listusers.sh" with below contents
linuxtechi@localhost:~$ cat listusers.sh
#!/bin/bash
cut -d : -f1,5,7 /etc/passwd | grep -v sbin | grep sh | sort > /tmp/users.txt
awk -F':' ' { printf ( "%-12s %-40s\n", $1, $2 ) } ' /tmp/users.txt

# Clean up the temporary file.
/bin/rm -f /tmp/users.txt

Now execute the script with the -v option.
linuxtechi@localhost:~$ sh -v listusers.sh
#!/bin/bash
cut -d : -f1,5,7 /etc/passwd | grep -v sbin | grep sh | sort > /tmp/users.txt
awk -F':' ' { printf ( "%-12s %-40s\n", $1, $2 ) } ' /tmp/users.txt
guest-k9ghtA Guest,,,
guest-kqEkQ8 Guest,,,
guest-llnzfx Guest,,,
pradeep pradeep,,,
mail admin   Mail Admin,,,
# Clean up the temporary file.
/bin/rm -f /tmp/users.txt
linuxtechi@localhost:~$
In the above output, the script's output gets mixed with the script's commands. However, with the -v option, at least you get a better view of what the shell is doing as it runs your script.
Combining the -n & -v Options
We can combine the command line options ( -n & -v ). This makes a good combination because we can check the syntax of a script while seeing the script output.
Let us consider a previously used script "debug_quotes.sh"
linuxtechi@localhost:~$ sh -nv debug_quotes.sh
#!/bin/bash
# shows an error.
echo "USER=$USER
echo "HOME=$HOME"
echo "OSNAME=$OSNAME"
debug_quotes: 8: debug_quotes: Syntax error: Unterminated quoted string
linuxtechi@localhost:~$
Tracing Script Execution ( -x option )
The -x option, short for xtrace or execution trace, tells the shell to echo each command after performing the substitution steps. Thus, we can see the values of variables and commands.
In most cases, the -x option provides the most useful information about a script, but it can lead to a lot of output. The following example show this option in action.
linuxtechi@localhost:~$ sh -x listusers.sh
+ cut -d : -f1,5,7 /etc/passwd
+ grep -v sbin
+ sort
+ grep sh
+ awk -F: ' { printf ( "%-12s %-40s\n", $1, $2 ) } ' /tmp/users.txt
guest-k9ghtA Guest,,,
guest-kqEkQ8 Guest,,,
guest-llnzfx Guest,,,
pradeep pradeep,,,
mail admin Mail Admin,,,
+ /bin/rm -f /tmp/users.txt
linuxtechi@localhost:~$
In the above output, the shell inserted a + sign in front of the commands.
Many people hack together shell scripts quickly to do simple tasks, but these soon take on a life of their own. Unfortunately shell scripts are full of subtle effects which result in scripts failing in unusual ways. It's possible to write scripts which minimize these problems. In this article, I explain several techniques for writing robust bash scripts.
Use set -u
How often have you written a script that broke because a variable wasn't set? I know I have, many times.
chroot=$1
...
rm -rf $chroot/usr/share/doc

If you ran the script above and accidentally forgot to give a parameter, you would have just deleted all of your system documentation rather than making a smaller chroot. So what can you do about it? Fortunately bash provides you with set -u, which will exit your script if you try to use an uninitialised variable. You can also use the slightly more readable set -o nounset.
david% bash /tmp/shrink-chroot.sh
$chroot=
david% bash -u /tmp/shrink-chroot.sh
/tmp/shrink-chroot.sh: line 3: $1: unbound variable
david%

Use set -e
Every script you write should include set -e at the top. This tells bash that it should exit the script if any statement returns a non-true return value. The benefit of using -e is that it prevents errors snowballing into serious issues when they could have been caught earlier. Again, for readability you may want to use set -o errexit.
Using -e gives you error checking for free. If you forget to check something, bash will do it for you. Unfortunately it means you can't check $?, as bash will never get to the checking code if it isn't zero. There are other constructs you could use:
command
if [ "$?" -ne 0 ]; then echo "command failed"; exit 1; fi

could be replaced with
command || { echo "command failed"; exit 1; }

or
if ! command; then echo "command failed"; exit 1; fi

What if you have a command that returns non-zero, or you are not interested in its return value? You can use command || true, or if you have a longer section of code, you can turn off the error checking, but I recommend you use this sparingly.
set +e
command1
command2
set -e

On a slightly related note, by default bash takes the error status of the last item in a pipeline, which may not be what you want. For example, false | true will be considered to have succeeded. If you would like this to fail, then you can use set -o pipefail to make it fail.
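A quick sketch of the difference pipefail makes:

```shell
#!/bin/bash
# Without pipefail, a pipeline reports the status of its last command;
# with pipefail, any failing stage makes the whole pipeline fail.
false | true
st=$?
echo "default:  false | true exits $st"     # 0

set -o pipefail
false | true || st=$?
echo "pipefail: false | true exits $st"     # 1
set +o pipefail
```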
Program defensively - expect the unexpected
Your script should take into account the unexpected, like files missing or directories not being created. There are several things you can do to prevent errors in these situations. For example, when you create a directory, if the parent directory doesn't exist, mkdir will return an error. If you add a -p option then mkdir will create all the parent directories before creating the requested directory. Another example is rm. If you ask rm to delete a non-existent file, it will complain and your script will terminate. (You are using -e, right?) You can fix this by using -f, which will silently continue if the file didn't exist.
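For instance (a sketch using a throwaway directory from mktemp):

```shell
#!/bin/bash
set -e                          # any unguarded failure would abort the script
work=$(mktemp -d)

mkdir -p "$work/a/b/c"          # creates missing parents, never complains
rm -f "$work/no-such-file"      # succeeds silently; plain rm would abort us

echo "still running"
rm -rf "$work"
```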
Be prepared for spaces in filenames
Someone will always use spaces in filenames or command line arguments and you should keep this in mind when writing shell scripts. In particular you should use quotes around variables.
if [ $filename = "foo" ];

will fail if $filename contains a space. This can be fixed by using:
if [ "$filename" = "foo" ];

When using the $@ variable, you should always quote it, or any arguments containing a space will be expanded into separate words.
david% foo() { for i in $@; do echo $i; done }; foo bar "baz quux"
bar
baz
quux
david% foo() { for i in "$@"; do echo $i; done }; foo bar "baz quux"
bar
baz quux

I can not think of a single place where you shouldn't use "$@" over $@, so when in doubt, use quotes.
If you use find and xargs together, you should use -print0 to separate filenames with a null character rather than new lines. You then need to use -0 with xargs.
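An alternative to xargs, sketched here with a throwaway directory, is to consume the null-delimited list in a while read loop (this uses bash-specific read -d '' and process substitution):

```shell
#!/bin/bash
# Iterate over find's null-delimited output without word splitting.
dir=$(mktemp -d)
touch "$dir/foo bar" "$dir/baz"

count=0
while IFS= read -r -d '' f; do
    count=$((count + 1))        # "$f" is a complete filename, spaces intact
done < <(find "$dir" -type f -print0)

echo "files seen: $count"
rm -rf "$dir"
```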
david% touch "foo bar"
david% find | xargs ls
ls: ./foo: No such file or directory
ls: bar: No such file or directory
david% find -print0 | xargs -0 ls
./foo bar

Setting traps
Often you write scripts which fail and leave the filesystem in an inconsistent state; things like lock files, temporary files or you've updated one file and there is an error updating the next file. It would be nice if you could fix these problems, either by deleting the lock files or by rolling back to a known good state when your script suffers a problem. Fortunately bash provides a way to run a command or function when it receives a unix signal using the trap command.
trap command signal [signal ...]

There are many signals you can trap (you can get a list of them by running kill -l), but for cleaning up after problems there are only 3 we are interested in: INT, TERM and EXIT. You can also reset traps back to their default by using - as the command.
INT - Interrupt. This signal is sent when someone kills the script by pressing ctrl-c.
TERM - Terminate. This signal is sent when someone sends the TERM signal using the kill command.
EXIT - Exit. This is a pseudo-signal, triggered when your script exits, either through reaching the end of the script, an exit command, or by a command failing when using set -e.

Usually, when you write something using a lock file you would use something like:
if [ ! -e $lockfile ]; then
    touch $lockfile
    critical-section
    rm $lockfile
else
    echo "critical-section is already running"
fi

What happens if someone kills your script while critical-section is running? The lockfile will be left there and your script won't run again until it's been deleted. The fix is to use:
if [ ! -e $lockfile ]; then
    trap "rm -f $lockfile; exit" INT TERM EXIT
    touch $lockfile
    critical-section
    rm $lockfile
    trap - INT TERM EXIT
else
    echo "critical-section is already running"
fi

Now when you kill the script it will delete the lock file too. Notice that we explicitly exit from the script at the end of the trap command, otherwise the script will resume from the point that the signal was received.
Race conditions
It's worth pointing out that there is a slight race condition in the above lock example between the time we test for the lockfile and the time we create it. A possible solution to this is to use IO redirection and bash's noclobber mode, which won't redirect to an existing file. We can use something similar to:
if ( set -o noclobber; echo "$$" > "$lockfile" ) 2> /dev/null; then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
    critical-section
    rm -f "$lockfile"
    trap - INT TERM EXIT
else
    echo "Failed to acquire lockfile: $lockfile."
    echo "Held by $(cat $lockfile)"
fi

A slightly more complicated problem is where you need to update a bunch of files and need the script to fail gracefully if there is a problem in the middle of the update. You want to be certain that something either happened correctly or that it appears as though it didn't happen at all. Say you had a script to add users.
add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R

There could be problems if you ran out of disk space or someone killed the process. In this case you'd want the user to not exist and all their files to be removed.
rollback() {
    del_from_passwd $user
    if [ -e /home/$user ]; then
        rm -rf /home/$user
    fi
    exit
}

trap rollback INT TERM EXIT
add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R
trap - INT TERM EXIT

We needed to remove the trap at the end, or the rollback function would have been called as we exited, undoing all the script's hard work.
Be atomic
Sometimes you need to update a bunch of files in a directory at once, say you need to rewrite URLs from one host to another on your website. You might write:
for file in $(find /var/www -type f -name "*.html"); do
    perl -pi -e 's/www.example.net/www.example.com/' $file
done

Now if there is a problem with the script, you could have half the site referring to www.example.com and the rest referring to www.example.net. You could fix this using a backup and a trap, but you also have the problem that the site will be inconsistent during the upgrade too.
The solution to this is to make the changes an (almost) atomic operation. To do this make a copy of the data, make the changes in the copy, move the original out of the way and then move the copy back into place. You need to make sure that both the old and the new directories are moved to locations that are on the same partition so you can take advantage of the property of most unix filesystems that moving directories is very fast, as they only have to update the inode for that directory.
cp -a /var/www /var/www-tmp
for file in $(find /var/www-tmp -type f -name "*.html"); do
    perl -pi -e 's/www.example.net/www.example.com/' $file
done
mv /var/www /var/www-old
mv /var/www-tmp /var/www

This means that if there is a problem with the update, the live system is not affected. Also, the time during which it is affected is reduced to the time between the two mvs, which should be minimal, as the filesystem just has to change two entries in the inodes rather than copying all the data around.
The disadvantage of this technique is that you need to use twice as much disk space, and that any process that keeps files open for a long time will still have the old files open and not the new ones, so you would have to restart those processes if this is the case. In our example this isn't a problem, as apache opens the files on every request. You can check for processes that still have the old files open by using lsof. An advantage is that you now have a backup from before you made your changes, in case you need to revert.
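The same pattern scales down to a single file: write the new version to a temporary file in the same directory (hence the same filesystem), then mv it over the original, since a rename is atomic within one filesystem. A sketch with hypothetical file contents:

```shell
#!/bin/bash
set -e
target=$(mktemp)                      # stand-in for the real file
echo "visit www.example.net" > "$target"

tmp=$(mktemp "${target}.XXXXXX")      # same directory => same filesystem
sed 's/www\.example\.net/www.example.com/' "$target" > "$tmp"
mv "$tmp" "$target"                   # readers see either old or new file

cat "$target"
rm -f "$target"
```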
VIM can be used as bash IDE: Linux.com Turn Vim into a bash IDE
By Joe 'Zonker' Brockmeier on June 11, 2007 (9:01:00 PM)

By itself, Vim is one of the best editors for shell scripting. With a little tweaking, however, you can turn Vim into a full-fledged IDE for writing scripts.
You could do it yourself, or you can just install Fritz Mehner's Bash Support plugin.
To install Bash Support, download the zip archive, copy it to your ~/.vim directory, and unzip the archive. You'll also want to edit your ~/.vimrc to include a few personal details; open the file and add these three lines:

let g:BASH_AuthorName = 'Your Name'
let g:BASH_Email = '[email protected]'
let g:BASH_Company = 'Company Name'

These variables will be used to fill in some headers for your projects, as we'll see below.
The Bash Support plugin works in the Vim GUI (gVim) and text mode Vim. It's a little easier to use in the GUI, and Bash Support doesn't implement most of its menu functions in Vim's text mode, so you might want to stick with gVim when scripting.
When Bash Support is installed, gVim will include a new menu, appropriately titled Bash. This puts all of the Bash Support functions right at your fingertips (or mouse button, if you prefer). Let's walk through some of the features, and see how Bash Support can make Bash scripting a breeze.
Header and comments
If you believe in using extensive comments in your scripts, and I hope you do, you'll really enjoy using Bash Support. Bash Support provides a number of functions that make it easy to add comments to your bash scripts and programs automatically, or with just a mouse click or a few keystrokes.
When you start a non-trivial script that will be used and maintained by others, it's a good idea to include a header with basic information -- the name of the script, usage, description, notes, author information, copyright, and any other info that might be useful to the next person who has to maintain the script. Bash Support makes it a breeze to provide this information. Go to Bash -> Comments -> File Header, and gVim will insert a header like this in your script:
#!/bin/bash
#===============================================================================
#
#          FILE:  test.sh
#
#         USAGE:  ./test.sh
#
#   DESCRIPTION:
#
#       OPTIONS:  ---
#  REQUIREMENTS:  ---
#          BUGS:  ---
#         NOTES:  ---
#        AUTHOR:  Joe Brockmeier, [email protected]
#       COMPANY:  Dissociated Press
#       VERSION:  1.0
#       CREATED:  05/25/2007 10:31:01 PM MDT
#      REVISION:  ---
#===============================================================================

You'll need to fill in some of the information, but Bash Support grabs the author, company name, and email address from your ~/.vimrc, and fills in the file name and created date automatically. To make life even easier, if you start Vim or gVim with a new file that ends with an .sh extension, it will insert the header automatically.
As you're writing your script, you might want to add comment blocks for your functions as well. To do this, go to Bash -> Comment -> Function Description to insert a block of text like this:
#===  FUNCTION  ================================================================
#          NAME:
#   DESCRIPTION:
#    PARAMETERS:
#       RETURNS:
#===============================================================================

Just fill in the relevant information and carry on coding.
The Comment menu allows you to insert other types of comments, insert the current date and time, and turn selected code into a comment, and vice versa.
Statements and snippets
Let's say you want to add an if-else statement to your script. You could type out the statement, or you could just use Bash Support's handy selection of pre-made statements. Go to Bash -> Statements and you'll see a long list of pre-made statements that you can just plug in and fill in the blanks. For instance, if you want to add a while statement, you can go to Bash -> Statements -> while, and you'll get the following:
while _; do
done

The cursor will be positioned where the underscore (_) is above. All you need to do is add the test statement and the actual code you want to run in the while statement. Sure, it'd be nice if Bash Support could do all that too, but there's only so far an IDE can help you.
However, you can help yourself. When you do a lot of bash scripting, you might have functions or code snippets that you reuse in new scripts. Bash Support allows you to add your snippets and functions by highlighting the code you want to save, then going to Bash -> Statements -> write code snippet. When you want to grab a piece of prewritten code, go to Bash -> Statements -> read code snippet. Bash Support ships with a few included code fragments.
Another way to add snippets to the statement collection is to just place a text file with the snippet under the ~/.vim/bash-support/codesnippets directory.
Running and debugging scripts
Once you have a script ready to go, it's testing and debugging time. You could exit Vim, make the script executable, run it and see if it has any bugs, and then go back to Vim to edit it, but that's tedious. Bash Support lets you stay in Vim while doing your testing.
When you're ready to make the script executable, just choose Bash -> Run -> make script executable. To save and run the script, press Ctrl-F9, or go to Bash -> Run -> save + run script.

Bash Support also lets you call the bash debugger (bashdb) directly from within Vim. On Ubuntu, it's not installed by default, but that's easily remedied with apt-get install bashdb. Once it's installed, you can debug the script you're working on with F9 or Bash -> Run -> start debugger.

If you want a "hard copy" -- a PostScript printout -- of your script, you can generate one by going to Bash -> Run -> hardcopy to FILENAME.ps. This is where Bash Support comes in handy for any type of file, not just bash scripts. You can use this function within any file to generate a PostScript printout.
Bash Support has several other functions to help run and test scripts from within Vim. One useful feature is syntax checking, which you can access with Alt-F9. If you have no syntax errors, you'll get a quick OK. If there are problems, you'll see a small window at the bottom of the Vim screen with a list of syntax errors. From that window you can highlight the error and press Enter, and you'll be taken to the line with the error.

Put away the reference book...
Don't you hate it when you need to include a regular expression or a test in a script, but can't quite remember the syntax? That's no problem when you're using Bash Support, because you have Regex and Tests menus with all you'll need. For example, if you need to verify that a file exists and is owned by the correct user ID (UID), go to Bash -> Tests -> file exists and is owned by the effective UID. Bash Support will insert the appropriate test ([ -O _ ]) with your cursor in the spot where you have to fill in the file name.

To build regular expressions quickly, go to the Bash menu, select Regex, then pick the appropriate expression from the list. It's fairly useful when you can't quite remember how to write a particular expression.
Hotkey support
Vim users can access many of Bash Support's features using hotkeys. While not as simple as clicking the menu, the hotkeys do follow a logical scheme that makes them easy to remember. For example, all of the comment functions are accessed with \c, so if you want to insert a file header, you use \ch; if you want a date inserted, type \cd; and for a line end comment, use \cl.

Statements can be accessed with \a. Use \ac for a case statement, \aie for an "if then else" statement, \af for a "for in..." statement, and so on. Note that the online docs are incorrect here, and indicate that statements begin with \s, but Bash Support ships with a PDF reference card (under .vim/bash-support/doc/bash-hot-keys.pdf) that gets it right.

Run commands are accessed with \r. For example, to save the file and run a script, use \rr; to make a script executable, use \re; and to start the debugger, type \rd. I won't try to detail all of the shortcuts, but you can pull up a reference using :help bashsupport-usage-vim when in Vim, or use the PDF. The full Bash Support reference is available within Vim by running :help bashsupport, or you can read it online.

Of course, we've covered only a small part of Bash Support's functionality. The next time you need to whip up a shell script, try it using Vim with Bash Support. This plugin makes scripting in bash a lot easier.
The Bash Debugger (bashdb) is a debugger for Bash scripts. The debugger command interface is modeled on the gdb command interface. Front-ends supporting bashdb include GNU Emacs and ddd. In the past, the project has been used as a springboard for other experimental features, such as a timestamped history file (now in Bash versions after 3.0). This release contains bugfixes accumulated over the year and works on bash version 3.2 as well as version 3.1.
There are quite a few things that can be done with bash to help debug your scripts containing the rulesets. One of the first problems in finding a bug is knowing on which line the problem appears. This can be solved in two different ways: either using the bash -x flag, or by simply entering some echo statements to find the place where the problem happens. Ideally, you would add echo statements like the following at regular intervals in the code:
...
echo "Debugging message 1."
...
echo "Debugging message 2."
...

In my case, I generally use pretty much worthless messages, as long as they have something in them that is unique so I can find the error message by a simple grep or search in the script file. Now, if the error message shows up after the "Debugging message 1." message, but before "Debugging message 2.", then we know that the erroneous line of code is somewhere in between the two debugging messages. As you can understand, bash has the not really bad, but at least peculiar, idea of continuing to execute commands even if there is an error in one of the commands before. In netfilter, this can cause some very interesting problems for you. The above idea of simply using echo statements to find the errors is extremely simple, but it is at the same time very nice since you can narrow the whole problem down to a single line of code and see what the problem is directly.
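One way to keep such messages easy to find again, sketched here with an arbitrary DBG tag, is to give them a common grep-able prefix:

```shell
#!/bin/bash
# Tagged checkpoints: unique enough to locate with grep, both in the
# script's output and in the script source itself.
echo "DBG 1: before building chains"
:   # ... real commands would go here ...
echo "DBG 2: after building chains"
```

Running the script as ./script.sh | grep '^DBG' then shows only the checkpoints.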
The second possibility for finding the above problem is to use the -x option to bash, as we spoke of before. This can of course be a little bit of a problem, especially if your script is large and your console buffer isn't large enough. What the -x option means is quite simple: it tells the script to echo every single line of code in the script to the standard output of the shell (generally your console). What you do is change the normal start line of the script from this:
#!/bin/bash

Into the line below:
#!/bin/bash -x

As you will see, this changes your output from perhaps a couple of lines to copious amounts of data on the output. The code shows you every single command line that is executed, with the values of all the variables et cetera, so that you don't have to try and figure out exactly what the code is doing. Simply put, each line that gets executed is output to your screen as well. One thing that is nice to see is that all of the lines that bash outputs are prefixed by a + sign. This makes it a little bit easier to discern error or warning messages from the actual script, rather than just one big mesh of output.
The -x option is also very interesting for debugging a couple of other rather common problems that you may run into with a slightly more complex ruleset. The first is finding out exactly what happens in what you thought was a simple loop, such as a for, if or while statement. For example, let's look at the following script.
#!/bin/bash
iptables="/sbin/iptables"
$iptables -N output_int_iface
cat /etc/configs/machines | while read host; do
  $iptables -N output-$host
  $iptables -A output_int_iface -p tcp -d $host -j output-$host
  cat /etc/configs/${host}/ports | while read row2; do
    $iptables -A output-$host -p tcp --dport $row2 -d $host -j ACCEPT
  done
done

This set of rules may look simple enough, but we continue to run into a problem with it. We get the following error messages, which we know come from the above code, by using the simple echo debugging method.
work3:~# ./test.sh
Bad argument `output-'
Try `iptables -h' or 'iptables --help' for more information.
cat: /etc/configs//ports: No such file or directory

So we turn on the -x option to bash and look at the output. The output is shown below, and as you can see there is something very weird going on in it. There are a couple of commands where the $host and $row2 variables are replaced by nothing. Looking closer, we see that it is only the last iteration of the loop that causes the trouble. Either we have made a programming error, or there is something strange with the data. In this case, it is a simple error with the data, which contains an extra linebreak at the end of the file. This causes the loop to iterate one last time, which it shouldn't. Simply remove the trailing linebreak of the file, and the problem is solved. This may not be a very elegant solution, but for private work it should be enough. Otherwise, you could add code that checks that there is actually some data in the $host and $row2 variables.
work3:~# ./test.sh
+ iptables=/sbin/iptables
+ /sbin/iptables -N output_int_iface
+ cat /etc/configs/machines
+ read host
+ /sbin/iptables -N output-sto-as-101
+ /sbin/iptables -A output_int_iface -p tcp -d sto-as-101 -j output-sto-as-101
+ cat /etc/configs/sto-as-101/ports
+ read row2
+ /sbin/iptables -A output-sto-as-101 -p tcp --dport 21 -d sto-as-101 -j ACCEPT
+ read row2
+ /sbin/iptables -A output-sto-as-101 -p tcp --dport 22 -d sto-as-101 -j ACCEPT
+ read row2
+ /sbin/iptables -A output-sto-as-101 -p tcp --dport 23 -d sto-as-101 -j ACCEPT
+ read row2
+ read host
+ /sbin/iptables -N output-sto-as-102
+ /sbin/iptables -A output_int_iface -p tcp -d sto-as-102 -j output-sto-as-102
+ cat /etc/configs/sto-as-102/ports
+ read row2
+ /sbin/iptables -A output-sto-as-102 -p tcp --dport 21 -d sto-as-102 -j ACCEPT
+ read row2
+ /sbin/iptables -A output-sto-as-102 -p tcp --dport 22 -d sto-as-102 -j ACCEPT
+ read row2
+ /sbin/iptables -A output-sto-as-102 -p tcp --dport 23 -d sto-as-102 -j ACCEPT
+ read row2
+ read host
+ /sbin/iptables -N output-sto-as-103
+ /sbin/iptables -A output_int_iface -p tcp -d sto-as-103 -j output-sto-as-103
+ cat /etc/configs/sto-as-103/ports
+ read row2
+ /sbin/iptables -A output-sto-as-103 -p tcp --dport 21 -d sto-as-103 -j ACCEPT
+ read row2
+ /sbin/iptables -A output-sto-as-103 -p tcp --dport 22 -d sto-as-103 -j ACCEPT
+ read row2
+ /sbin/iptables -A output-sto-as-103 -p tcp --dport 23 -d sto-as-103 -j ACCEPT
+ read row2
+ read host
+ /sbin/iptables -N output-
+ /sbin/iptables -A output_int_iface -p tcp -d -j output-
Bad argument `output-'
Try `iptables -h' or 'iptables --help' for more information.
+ cat /etc/configs//ports
cat: /etc/configs//ports: No such file or directory
+ read row2
+ read host

The third and final problem you run into that can be partially solved with the help of the -x option is if you are executing the firewall script via SSH, and the console hangs in the middle of executing the script, and the console simply won't come back, nor are you able to connect via SSH again. In 99.9% of the cases, this means there is some kind of problem inside the script with a couple of the rules. By turning on the -x option, you will see exactly at which line the script locks dead, hopefully at least. There are a couple of circumstances where this is not true, unfortunately. For example, what if the script sets up a rule that blocks incoming traffic, but since the ssh/telnet server sends the echo first as outgoing traffic, netfilter will remember the connection, and hence allow the incoming traffic anyways if you have a rule above that handles connection states.
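The sanity check on $host suggested above can be sketched like this (with a throwaway data file standing in for /etc/configs/machines, and echo standing in for the iptables calls):

```shell
#!/bin/bash
# Skip empty lines so a trailing newline in the data file can never
# produce an empty $host, and hence a chain named just "output-".
conf=$(mktemp)
printf 'sto-as-101\nsto-as-102\n\n' > "$conf"   # note the blank last line

while read host; do
    if [ -z "$host" ]; then
        continue                  # guard: ignore empty iterations
    fi
    echo "would configure output-$host"
done < "$conf"

rm -f "$conf"
```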
As you can see, it can become quite complex to debug your ruleset to its full extent in the end. However, it is not impossible at all. You may also have noticed, if you have worked remotely on your firewalls via SSH, for example, that the firewall may hang when you load bad rulesets. There is one more thing that can be done to save the day in these circumstances. Cron is an excellent way of saving your day. For example, say you are working on a firewall 50 kilometers away, you add some rules, delete some others, and then delete and insert the new updated ruleset. The firewall locks dead, and you can't reach it. The only way of fixing this is to go to the firewall's physical location and fix the problem from there, unless you have taken precautions that is!
The Bash shell contains no debugger, nor even any debugging-specific commands or constructs. [1] Syntax errors or outright typos in the script generate cryptic error messages that are often of no help in debugging a non-functional script.
Example 29-1. A buggy script
#!/bin/bash
# ex74.sh

# This is a buggy script.
# Where, oh where is the error?

a=37

if [$a -gt 27 ]
then
  echo $a
fi

exit 0

Output from script:

./ex74.sh: [37: command not found

What's wrong with the above script (hint: after the if)?
Example 29-2. Missing keyword
#!/bin/bash
# missing-keyword.sh: What error message will this generate?

for a in 1 2 3
do
  echo "$a"
# done     # Required keyword 'done' commented out in line 7.

exit 0

Output from script:

missing-keyword.sh: line 10: syntax error: unexpected end of file

Note that the error message does not necessarily reference the line in which the error occurs, but the line where the Bash interpreter finally becomes aware of the error. Error messages may disregard comment lines in a script when reporting the line number of a syntax error.
What if the script executes, but does not work as expected? This is the all too familiar logic error.
Example 29-3. test24, another buggy script
#!/bin/bash
#  This script is supposed to delete all filenames in current directory
#+ containing embedded spaces.
#  It doesn't work.
#  Why not?

badname=`ls | grep ' '`

# Try this:
# echo "$badname"

rm "$badname"

exit 0

Try to find out what's wrong with Example 29-3 by uncommenting the echo "$badname" line. Echo statements are useful for seeing whether what you expect is actually what you get.
In this particular case, rm "$badname" will not give the desired results because $badname should not be quoted. Placing it in quotes ensures that rm has only one argument (it will match only one filename). A partial fix is to remove the quotes from $badname and to reset $IFS to contain only a newline, IFS=$'\n'. However, there are simpler ways of going about it.
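The partial fix mentioned above can be sketched as follows (in a throwaway directory):

```shell
#!/bin/bash
# With IFS reduced to a newline, the unquoted $badname expands to one
# argument per line, i.e. one whole filename per argument.
dir=$(mktemp -d); cd "$dir"
touch "foo bar" "baz quux" plain

badname=$(ls | grep ' ')
IFS=$'\n'
rm $badname            # unquoted on purpose
IFS=$' \t\n'           # restore the default

ls                     # only "plain" is left
cd /; rm -rf "$dir"
```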
# Correct methods of deleting filenames containing spaces.
rm *\ *
rm *" "*
rm *' '*
# Thank you. S.C.

Summarizing the symptoms of a buggy script:
- It bombs with a "syntax error" message, or
- It runs, but does not work as expected (logic error).
- It runs, works as expected, but has nasty side effects (logic bomb).
Tools for debugging non-working scripts include
- echo statements at critical points in the script to trace the variables, and otherwise give a snapshot of what is going on.
Even better is an echo that echoes only when debug is on.
### debecho (debug-echo), by Stefano Falsetto ###
### Will echo passed parameters only if DEBUG is set to a value. ###
debecho () {
  if [ ! -z "$DEBUG" ]; then
     echo "$1" >&2
     #         ^^^ to stderr
  fi
}

DEBUG=on
Whatever=whatnot
debecho $Whatever   # whatnot

DEBUG=
Whatever=notwhat
debecho $Whatever   # (Will not echo.)

- using the tee filter to check processes or data flows at critical points.
- setting option flags -n -v -x
sh -n scriptname checks for syntax errors without actually running the script. This is the equivalent of inserting set -n or set -o noexec into the script. Note that certain types of syntax errors can slip past this check.
sh -v scriptname echoes each command before executing it. This is the equivalent of inserting set -v or set -o verbose in the script.
The -n and -v flags work well together. sh -nv scriptname gives a verbose syntax check.
sh -x scriptname echoes the result of each command, but in an abbreviated manner. This is the equivalent of inserting set -x or set -o xtrace in the script.
Inserting set -u or set -o nounset in the script runs it, but gives an unbound variable error message at each attempt to use an undeclared variable.
- Using an "assert" function to test a variable or condition at critical points in a script. (This is an idea borrowed from C.)
Example 29-4. Testing a condition with an "assert"
#!/bin/bash
# assert.sh

assert ()                 #  If condition false,
{                         #+ exit from script with error message.
  E_PARAM_ERR=98
  E_ASSERT_FAILED=99

  if [ -z "$2" ]          # Not enough parameters passed.
  then
    return $E_PARAM_ERR   # No damage done.
  fi

  lineno=$2

  if [ ! $1 ]
  then
    echo "Assertion failed:  \"$1\""
    echo "File \"$0\", line $lineno"
    exit $E_ASSERT_FAILED
  # else
  #   return
  #   and continue executing script.
  fi
}

a=5
b=4
condition="$a -lt $b"     #  Error message and exit from script.
                          #  Try setting "condition" to something else,
                          #+ and see what happens.

assert "$condition" $LINENO
# The remainder of the script executes only if the "assert" does not fail.

# Some commands.
# ...
echo "This statement echoes only if the \"assert\" does not fail."
# ...
# Some more commands.

exit 0

- Using the $LINENO variable and the caller builtin.
- trapping at exit.
The exit command in a script triggers a signal 0, terminating the process, that is, the script itself. [2] It is often useful to trap the exit, forcing a "printout" of variables, for example. The trap must be the first command in the script.
Example 29-5. Trapping at exit
- trap
- Specifies an action on receipt of a signal; also useful for debugging.
A signal is simply a message sent to a process, either by the kernel or another process, telling it to take some specified action (usually to terminate). For example, hitting a Control-C, sends a user interrupt, an INT signal, to a running program.
trap '' 2
# Ignore interrupt 2 (Control-C), with no action specified.

trap 'echo "Control-C disabled."' 2
# Message when Control-C pressed.

#!/bin/bash
# Hunting variables with a trap.

trap 'echo Variable Listing --- a = $a b = $b' EXIT
#  EXIT is the name of the signal generated upon exit from a script.
#
#  The command specified by the "trap" doesn't execute until
#+ the appropriate signal is sent.

echo "This prints before the \"trap\" --"
echo "even though the script sees the \"trap\" first."
echo

a=39
b=36

exit 0
#  Note that commenting out the 'exit' command makes no difference,
#+ since the script exits in any case after running out of commands.

Example 29-6. Cleaning up after Control-C
#!/bin/bash
# logon.sh: A quick 'n dirty script to check whether you are on-line yet.

umask 177  # Make sure temp files are not world readable.

TRUE=1
LOGFILE=/var/log/messages
#  Note that $LOGFILE must be readable
#+ (as root, chmod 644 /var/log/messages).
TEMPFILE=temp.$$
#  Create a "unique" temp file name, using process id of the script.
#  Using 'mktemp' is an alternative.
#  For example:
#  TEMPFILE=`mktemp temp.XXXXXX`
KEYWORD=address
#  At logon, the line "remote IP address xxx.xxx.xxx.xxx"
#+ appended to /var/log/messages.
ONLINE=22
USER_INTERRUPT=13
CHECK_LINES=100
#  How many lines in log file to check.

trap 'rm -f $TEMPFILE; exit $USER_INTERRUPT' TERM INT
#  Cleans up the temp file if script interrupted by control-c.

echo

while [ $TRUE ]  # Endless loop.
do
  tail -$CHECK_LINES $LOGFILE> $TEMPFILE
  #  Saves last 100 lines of system log file as temp file.
  #  Necessary, since newer kernels generate many log messages at log on.
  search=`grep $KEYWORD $TEMPFILE`
  #  Checks for presence of the "IP address" phrase,
  #+ indicating a successful logon.

  if [ ! -z "$search" ] #  Quotes necessary because of possible spaces.
  then
     echo "On-line"
     rm -f $TEMPFILE    #  Clean up temp file.
     exit $ONLINE
  else
     echo -n "."        #  The -n option to echo suppresses newline,
                        #+ so you get continuous rows of dots.
  fi

  sleep 1
done

#  Note: if you change the KEYWORD variable to "Exit",
#+ this script can be used while on-line
#+ to check for an unexpected logoff.

#  Exercise: Change the script, per the above note,
#            and prettify it.

exit 0


# Nick Drage suggests an alternate method:

while true
  do ifconfig ppp0 | grep UP 1> /dev/null && echo "connected" && exit 0
  echo -n "."   # Prints dots (.....) until connected.
  sleep 2
done

#  Problem: Hitting Control-C to terminate this process may be insufficient.
#+ (Dots may keep on echoing.)
#  Exercise: Fix this.


# Stephane Chazelas has yet another alternative:

CHECK_INTERVAL=1

while ! tail -1 "$LOGFILE" | grep -q "$KEYWORD"
do echo -n .
   sleep $CHECK_INTERVAL
done
echo "On-line"

#  Exercise: Discuss the relative strengths and weaknesses
#            of each of these various approaches.
The DEBUG argument to trap causes a specified action to execute after every command in a script. This permits tracing variables, for example. Example 29-7. Tracing a variable
#!/bin/bash

trap 'echo "VARIABLE-TRACE> \$variable = \"$variable\""' DEBUG
# Echoes the value of $variable after every command.

variable=29
echo "Just initialized \"\$variable\" to $variable."

let "variable *= 3"
echo "Just multiplied \"\$variable\" by 3."

exit $?

#  The "trap 'command1 . . . command2 . . .' DEBUG" construct is
#+ more appropriate in the context of a complex script,
#+ where placing multiple "echo $variable" statements might be
#+ clumsy and time-consuming.

# Thanks, Stephane Chazelas for the pointer.

Output of script:

VARIABLE-TRACE> $variable = ""
VARIABLE-TRACE> $variable = "29"
Just initialized "$variable" to 29.
VARIABLE-TRACE> $variable = "29"
VARIABLE-TRACE> $variable = "87"
Just multiplied "$variable" by 3.
VARIABLE-TRACE> $variable = "87"

Of course, the trap command has other uses aside from debugging.
Example 29-8. Running multiple processes (on an SMP box)
#!/bin/bash
# parent.sh
# Running multiple processes on an SMP box.
# Author: Tedman Eng

#  This is the first of two scripts,
#+ both of which must be present in the current working directory.

LIMIT=$1         # Total number of processes to start
NUMPROC=4        # Number of concurrent threads (forks?)
PROCID=1         # Starting Process ID
echo "My PID is $$"

function start_thread() {
        if [ $PROCID -le $LIMIT ] ; then
                ./child.sh $PROCID&
                let "PROCID++"
        else
           echo "Limit reached."
           wait
           exit
        fi
}

while [ "$NUMPROC" -gt 0 ]; do
        start_thread;
        let "NUMPROC--"
done

while true
do
        trap "start_thread" SIGRTMIN
done

exit 0

# ======== Second script follows ========

#!/bin/bash
# child.sh
# Running multiple processes on an SMP box.
# This script is called by parent.sh.
# Author: Tedman Eng

temp=$RANDOM
index=$1
shift
let "temp %= 5"
let "temp += 4"
echo "Starting $index  Time:$temp" "$@"
sleep ${temp}
echo "Ending $index"
kill -s SIGRTMIN $PPID

exit 0

# ======================= SCRIPT AUTHOR'S NOTES ======================= #
#  It's not completely bug free.
#  I ran it with limit = 500 and after the first few hundred iterations,
#+ one of the concurrent threads disappeared!
#  Not sure if this is collisions from trap signals or something else.
#  Once the trap is received, there's a brief moment while executing the
#+ trap handler but before the next trap is set.  During this time, it may
#+ be possible to miss a trap signal, thus miss spawning a child process.
#  No doubt someone may spot the bug and will be writing
#+ . . . in the future.
# ===================================================================== #

# ----------------------------------------------------------------------#

#################################################################
# The following is the original script written by Vernia Damiano.
# Unfortunately, it doesn't work properly.
#################################################################

#!/bin/bash

#  Must call script with at least one integer parameter
#+ (number of concurrent processes).
#  All other parameters are passed through to the processes started.

INDICE=8                # Total number of processes to start
TEMPO=5                 # Maximum sleep time per process
E_BADARGS=65            # No arg(s) passed to script.

if [ $# -eq 0 ]  # Check for at least one argument passed to script.
then
  echo "Usage: `basename $0` number_of_processes [passed params]"
  exit $E_BADARGS
fi

NUMPROC=$1              # Number of concurrent processes
shift
PARAMETRI=( "$@" )      # Parameters of each process

function avvia() {
        local temp
        local index
        temp=$RANDOM
        index=$1
        shift
        let "temp %= $TEMPO"
        let "temp += 1"
        echo "Starting $index Time:$temp" "$@"
        sleep ${temp}
        echo "Ending $index"
        kill -s SIGRTMIN $$
}

function parti() {
        if [ $INDICE -gt 0 ] ; then
                avvia $INDICE "${PARAMETRI[@]}" &
                let "INDICE--"
        else
                trap : SIGRTMIN
        fi
}

trap parti SIGRTMIN

while [ "$NUMPROC" -gt 0 ]; do
        parti;
        let "NUMPROC--"
done

wait
trap - SIGRTMIN

exit $?

: <<SCRIPT_AUTHOR_COMMENTS
I had the need to run a program, with specified options, on a number
of different files, using a SMP machine. So I thought [I'd] keep a
specified number of processes running and start a new one each time
. . . one of these terminates.

The "wait" instruction does not help, since it waits for a given
process or *all* processes started in background. So I wrote [this]
bash script that can do the job, using the "trap" instruction.
  --Vernia Damiano
SCRIPT_AUTHOR_COMMENTS
trap '' SIGNAL (two adjacent apostrophes) disables SIGNAL for the remainder of the script. trap SIGNAL restores the functioning of SIGNAL once more. This is useful to protect a critical portion of a script from an undesirable interrupt.
trap '' 2
# Signal 2 is Control-C, now disabled.

command
command
command

trap 2
# Reenables Control-C
Version 3 of Bash adds the following special variables for use by the debugger.
- $BASH_ARGC
- $BASH_ARGV
- $BASH_COMMAND
- $BASH_EXECUTION_STRING
- $BASH_LINENO
- $BASH_SOURCE
- $BASH_SUBSHELL
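A one-line DEBUG trap is enough to see some of these variables in action. The following is a minimal sketch (assuming bash 3.0 or later) that reports the source file, line number, and command about to run:

```shell
#!/bin/bash
#  Minimal sketch (assumes bash 3.0+): a DEBUG trap that uses the
#+ debugger-support variables to announce each command before it runs.
trap 'echo "TRACE> ${BASH_SOURCE[0]}:$LINENO: $BASH_COMMAND" >&2' DEBUG

a=1
a=$((a + 1))

trap - DEBUG        # Stop tracing.
echo "a = $a"
```

Each TRACE> line goes to stderr, so normal script output stays clean.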
[1] Rocky Bernstein's Bash debugger partially makes up for this lack.

[2] By convention, signal 0 is assigned to exit.
This page shows common errors that Bash programmers make. The following examples are all flawed in some way:
- for i in `ls *.mp3`
- cp $file $target
- [ $foo = "bar" ]
- [ "$foo" = bar && "$bar" = foo ]
- [[ $foo > 7 ]]
- grep foo bar | while read line; do ((count++)); done
- if [grep foo myfile]
- if [bar="$foo"]
- cat file | sed s/foo/bar/ > file
- echo $foo
- $foo=bar
- foo = bar
- echo <<EOF
- su -c 'some command'
- cd /foo; bar
- [ bar == "$foo" ]
1. for i in `ls *.mp3`
One of the most common mistakes BASH programmers make is to write a loop like this:
for i in `ls *.mp3`; do     # Wrong!
    some command $i         # Wrong!
done

This breaks when the user has a file with a space in its name. Why? Because the output of the ls *.mp3 command substitution undergoes word splitting. Assuming we have a file named 01 - Don't Eat the Yellow Snow.mp3 in the current directory, the for loop will iterate over each word in the resulting file name (namely: "01", "-", "Don't", "Eat", and so on).
You can't double-quote the substitution either:
for i in "`ls *.mp3`"; do   # Wrong!
    ...

This causes the entire output of the ls command to be treated as a single word, and instead of iterating over each file name in the output list, the loop will only execute once, with i taking on a value which is the concatenation of all the file names (with spaces between them).
In addition to this, the use of ls is just plain unnecessary. It's an external command, which simply isn't needed to do the job. So, what's the right way to do it?
for i in *.mp3; do          # Better! But...
    some command "$i"       # ... see Pitfall #2 for more info.
done

Let Bash expand the list of filenames for you. The expansion will not be subject to word splitting. Each filename that's matched by the *.mp3 pattern will be treated as a separate word, and the loop will iterate once per file name.
For more details on this question, please see Bash FAQ #20.
The astute reader will notice the double quotes in the second line. This leads to our second common pitfall.
2. cp $file $target
What's wrong with the command shown above? Well, nothing, if you happen to know in advance that $file and $target have no white space in them.
But if you don't know that in advance, or if you're paranoid, or if you're just trying to develop good habits, then you should quote your variable references to avoid having them undergo word splitting.
cp "$file" "$target"

Without the double quotes, you'll get a command like cp 01 - Don't Eat the Yellow Snow.mp3 /mnt/usb and then you'll get errors like cp: cannot stat `01': No such file or directory. With the double quotes, all's well, unless "$file" happens to start with a -, in which case cp thinks you're trying to feed it command line options. This isn't really a shell problem, but it often occurs with shell variables.
One solution is to insert -- between cp and its arguments. That tells it to stop scanning for options, and all is well:
cp -- "$file" "$target"

(There may be some incredibly ancient systems in existence, in which the -- trick doesn't work. For those, read on....)
Another is to ensure that your filenames always begin with a directory (including . for the current directory, if appropriate). For example, if we're in some sort of loop:
for i in ./*.mp3; do
    cp "$i" /target
    ...

In this case, even if we have a file whose name begins with -, the glob will ensure that the variable always contains something like ./-foo.mp3, which is perfectly safe as far as cp is concerned.
3. [ $foo = "bar" ]
This is very similar to the first part of the previous pitfall, but I repeat it because it's so important. In the example above, the quotes are in the wrong place. You do not need to quote a string literal in bash. But you should quote your variables if you aren't sure whether they could contain white space.
This breaks for two reasons:
- If a variable referenced in [ does not exist, or is blank, then the [ command would see the line:
  [ $foo = "bar" ]

... as:

  [ = "bar" ]

... and throw the error unary operator expected. (The = operator is binary, not unary, so the [ command is rather shocked to see it there.)
- If the variable contains internal whitespace, then it's split into separate words, before the [ command sees it. Thus:
  [ multiple words here = "bar" ]

While that may look OK to you, it's a syntax error as far as [ is concerned.
A more correct way to write this would be:
[ "$foo" = bar ]    # Pretty close!

But this still breaks if $foo begins with a -.
In bash, the [[ keyword, which embraces and extends the old test command (also known as [), can be used to solve the problem:
[[ $foo = bar ]]    # Right!

You don't need to quote variable references within [[ ]] because they don't undergo word splitting, and even blank variables will be handled correctly. On the other hand, quoting them won't hurt anything either.
You may have seen code like this:
[ x"$foo" = xbar ]  # Also right!

The x"$foo" hack is required for code that must run on ancient shells which lack [[, because if $foo begins with a -, then the [ command may become confused. But you'll get really tired of having to explain that to everyone else.
If the right hand side is a constant, you could just do it this way:
[ bar = "$foo" ]    # Also right!

[ doesn't care whether the token on the right hand side of the = begins with a -. It just uses it literally.
4. [ "$foo" = bar && "$bar" = foo ]
You can't use && inside the old test (or [) command. The Bash parser sees && outside of [[ ]] or (( )) and breaks your command into two commands, before and after the &&. Use one of these instead:
[ bar = "$foo" -a foo = "$bar" ]        # Right!
[ bar = "$foo" ] && [ foo = "$bar" ]    # Also right!
[[ $foo = bar && $bar = foo ]]          # Also right!

(Note that we reversed the constant and the variable inside [ for the reasons discussed in the previous pitfall.)
5. [[ $foo > 7 ]]
The [[ ]] operator is not used for an ArithmeticExpression. It's used for strings only. If you want to do a numeric comparison using > or <, you must use (( )) instead:
((foo > 7))     # Right!

If you use the > operator inside [[ ]], it's treated as a string comparison, not an integer comparison. This may work sometimes, but it will fail when you least expect it. If you use > inside [ ], it's even worse: it's an output redirection. You'll get a file named 7 in your directory, and the test will succeed as long as $foo is not empty.
If you're developing for a BourneShell instead of bash, this is the historically correct version:
[ $foo -gt 7 ]  # Also right!

Note that the test ... -gt command will fail in interesting ways if $foo is not an integer. Therefore, there's not much point in quoting it properly -- if it's got white space, or is empty, or is anything other than an integer, we're probably going to crash anyway. You'll need to sanitize your input aggressively.
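Since the numeric test operators choke on non-integer input, a common defensive pattern is to validate the value with a case glob first. A minimal sketch (the variable and threshold are illustrative):

```shell
#  Sketch: verify that $foo is all digits before handing it to -gt,
#+ since test's numeric operators fail on non-integer input.
foo=42                                   # hypothetical input value
case $foo in
    ''|*[!0-9]*)                         # empty, or contains a non-digit
        result="not a number" ;;
    *)
        if [ "$foo" -gt 7 ]; then result="big"; else result="small"; fi ;;
esac
echo "$result"       # big
```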
6. grep foo bar | while read line; do ((count++)); done
The code above looks OK at first glance, doesn't it? Sure, it's just a poor implementation of grep -c, but it's intended as a simplistic example. So why doesn't it work? The variable count will be unchanged after the loop terminates, much to the surprise of Bash developers everywhere.
The reason this code does not work as expected is because each command in a pipeline is executed in a separate subshell. The changes to the count variable within the loop's subshell aren't reflected within the parent shell (the script in which the code occurs).
For solutions to this, please see Bash FAQ #24.
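One widely used fix, in the spirit of Bash FAQ #24, is to get rid of the pipe so the while loop runs in the current shell. In this sketch, grep writes to a temporary file that feeds the loop by redirection (the file names are illustrative stand-ins for "bar"):

```shell
#  Sketch: redirect the loop's input instead of piping into it,
#+ so count is incremented in the current shell and survives the loop.
#  /tmp/bar.$$ stands in for the "bar" input file.
printf 'foo one\nbaz\nfoo two\n' > /tmp/bar.$$
grep foo /tmp/bar.$$ > /tmp/matches.$$

count=0
while read -r line; do
    count=$((count + 1))
done < /tmp/matches.$$          # redirection, not a pipe

rm -f /tmp/bar.$$ /tmp/matches.$$
echo "count = $count"           # 2 -- the increments were not lost
```

In bash you can also skip the temporary file with process substitution: done < <(grep foo bar).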
7. if [grep foo myfile]
Many people are confused by the common practice of putting the [ command after an if. They see this and convince themselves that the [ is part of the if statement's syntax, just like parentheses are used in C's if statement.
However, that is not the case! [ is a command, not a syntax marker for the if statement. It's equivalent to the test command, except for the requirement that the final argument must be a ].
The syntax of the if statement is as follows:
if COMMANDS
then
  COMMANDS
elif COMMANDS   # optional
then
  COMMANDS
else            # optional
  COMMANDS
fi

There may be zero or more optional elif sections, and one optional else section. Note: there is no [ in the syntax!
Once again, [ is a command. It takes arguments, and it produces an exit code. It may produce error messages. It does not, however, produce any standard output.
The if statement evaluates the first set of COMMANDS that are given to it (up until then, as the first word of a new command). The exit code of the last command from that set determines whether the if statement will execute the COMMANDS that are in the then section, or move on.
If you want to make a decision based on the output of a grep command, you do not need to enclose it in parentheses, brackets, backticks, or any other syntax mark-up! Just use grep as the COMMANDS after the if, like this:
if grep foo myfile >/dev/null; then
  ...
fi

Note that we discard the standard output of the grep (which would normally include the matching line, if any), because we don't want to see it -- we just want to know whether it's there. If the grep matches a line from myfile, then the exit code will be 0 (true), and the then clause will be executed. Otherwise, if there is no matching line, the grep should return a non-zero exit code.
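Where available, grep's -q option expresses the same idea more directly: it suppresses output and exits successfully at the first match, so no redirection is needed. A small sketch (the temp file stands in for "myfile"):

```shell
#  Sketch using grep -q (quiet mode): exit status only, no output.
#  /tmp/myfile.$$ stands in for the "myfile" of the example above.
printf 'foo bar\n' > /tmp/myfile.$$

if grep -q foo /tmp/myfile.$$; then
    result="found"
else
    result="missing"
fi

rm -f /tmp/myfile.$$
echo "$result"       # found
```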
8. if [bar="$foo"]
As with the previous example, [ is a command. Just like with any other command, Bash expects the command to be followed by a space, then the first argument, then another space, etc. You can't just run things all together without putting the spaces in! Here is the correct way:
if [ bar = "$foo" ]

Each of bar, =, "$foo" (after substitution, but without word splitting) and ] is a separate argument to the [ command. There must be whitespace between each pair of arguments, so the shell knows where each argument begins and ends.
9. cat file | sed s/foo/bar/ > file
You cannot read from a file and write to it in the same pipeline. Depending on what your pipeline does, the file may be clobbered (to 0 bytes, or possibly to a number of bytes equal to the size of your operating system's pipeline buffer), or it may grow until it fills the available disk space, or reaches your operating system's file size limitation, or your quota, etc.
If you want to make a change to a file, other than appending to the end of it, there must be a temporary file created at some point. For example, the following is completely portable:
sed 's/foo/bar/g' file > tmpfile && mv tmpfile file

The following will only work on GNU sed 4.x:
sed -i 's/foo/bar/g' file(s)

Note that this also creates a temporary file, and does the same sort of renaming trickery -- it just handles it transparently.
And the following equivalent command requires perl 5.x (which is probably more widely available than GNU sed 4.x):
perl -pi -e 's/foo/bar/g' file(s)

For more details, please see Bash FAQ #21.
10. echo $foo
This relatively innocent-looking command causes massive confusion. Because the $foo isn't quoted, it will not only be subject to word splitting, but also file globbing. This misleads Bash programmers into thinking their variables contain the wrong values, when in fact the variables are OK -- it's just the echo that's messing up their view of what's happening.
MSG="Please enter a file name of the form *.zip"
echo $MSG

This message is split into words and any globs are expanded, such as the *.zip. What will your users think when they see this message:
Please enter a file name of the form freenfss.zip lw35nfss.zip

To demonstrate:
VAR=*.zip       # VAR contains an asterisk, a period, and the word "zip"
echo "$VAR"     # writes *.zip
echo $VAR       # writes the list of files which end with .zip

11. $foo=bar
No, you don't assign a variable by putting a $ in front of the variable name. This isn't perl.
12. foo = bar
No, you can't put spaces around the = when assigning to a variable. This isn't C. When you write foo = bar the shell splits it into three words. The first word, foo, is taken as the command name. The second and third become the arguments to that command.
Likewise, the following are also wrong:
foo= bar    # WRONG!
foo =bar    # WRONG!
$foo = bar; # COMPLETELY WRONG!

foo=bar     # Right.

13. echo <<EOF
A here document is a useful tool for embedding large blocks of textual data in a script. It causes a redirection of the lines of text in the script to the standard input of a command. Unfortunately, echo is not a command which reads from stdin.
# This is wrong:
echo <<EOF
Hello world
EOF

# This is right:
cat <<EOF
Hello world
EOF

14. su -c 'some command'
This syntax is almost correct. The problem is, su takes a -c argument, but it's not the one you want. You want to pass -c 'some command' to a shell, which means you need a username before the -c.
su root -c 'some command'   # Now it's right.

su assumes a username of root when you omit one, but this falls on its face when you want to pass a command to the shell afterward. You must supply the username in this case.
15. cd /foo; bar
If you don't check for errors from the cd command, you might end up executing bar in the wrong place. This could be a major disaster, if for example bar happens to be rm *.
You must always check for errors from a cd command. The simplest way to do that is:
cd /foo && bar

If there's more than one command after the cd, you might prefer this:
cd /foo || exit 1
bar
baz
bat ...     # Lots of commands.

cd will report the failure to change directories, with a stderr message such as "bash: cd: /foo: No such file or directory". If you want to add your own message on stdout, however, you could use command grouping:
cd /net || { echo "Can't read /net.  Make sure you've logged in to the Samba network, and try again."; exit 1; }
do_stuff
more_stuff

Note there's a required space between "{" and "echo".
Some people also like to enable set -e to make their scripts abort on any command that returns non-zero, but this can be rather tricky to use correctly (since many common commands may return a non-zero for a warning condition, which you may not want to treat as fatal).
By the way, if you're changing directories a lot in a Bash script, be sure to read the Bash manual page on pushd, popd, and dirs. Perhaps all that code you wrote to manage cd's and pwd's is completely unnecessary.
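As a quick sketch of what pushd and popd buy you (with /tmp standing in for any working directory), the directory stack remembers where you came from, so there is nothing to save or restore by hand:

```shell
#  Sketch: pushd/popd keep a directory stack, so returning to the
#+ starting directory needs no manual bookkeeping.
#  /tmp is just an example destination.
start_dir=$PWD
pushd /tmp > /dev/null      # push current dir onto the stack, cd to /tmp
echo "working in $PWD"
popd > /dev/null            # pop the stack: back where we started
echo "back in $PWD"
```

dirs prints the current stack if you lose track of where you've been.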
Speaking of which, compare this:
find ... -type d | while read subdir; do
  cd "$subdir" && whatever && ... && cd -
done

With this:
find ... -type d | while read subdir; do
  (cd "$subdir" && whatever && ...)
done

Forcing a subshell here causes the cd to occur only in the subshell; for the next iteration of the loop, we're back to our normal location, regardless of whether the cd succeeded or failed. We don't have to change back manually. In fact, the first version isn't even valid -- if one of the whatever commands fails, we might not cd back where we need to be. To correct it without using the subshell, we'd have to arrange to execute some sort of cd "$ORIGINAL_DIR" command within each loop iteration. It would be frightfully messy.
The subshell version is much simpler and cleaner.
16. [ bar == "$foo" ]
The == operator is not valid for the [ command. Use = instead, or use the [[ keyword instead.
[ bar = "$foo" ] && echo yes
[[ bar == $foo ]] && echo yes
The Bash Debugger (bashdb) is a debugger for Bash scripts. The debugger command interface is modeled on the gdb command interface. Front-ends supporting bashdb include GNU Emacs and ddd. In the past, the project has been used as a springboard for other experimental features, such as a timestamped history file (now in Bash versions after 3.0).

Release focus: Minor bugfixes
Changes:
This release contains bugfixes accumulated over the year and works on bash version 3.2 as well as version 3.1.
June 11, 2007 | Linux.com
By itself, Vim is one of the best editors for shell scripting. With a little tweaking, however, you can turn Vim into a full-fledged IDE for writing scripts. You could do it yourself, or you can just install Fritz Mehner's Bash Support plugin.

To install Bash Support, download the zip archive, copy it to your ~/.vim directory, and unzip the archive. You'll also want to edit your ~/.vimrc to include a few personal details; open the file and add these three lines:
let g:BASH_AuthorName   = 'Your Name'
let g:BASH_Email        = '[email protected]'
let g:BASH_Company      = 'Company Name'

These variables will be used to fill in some headers for your projects, as we'll see below.
The Bash Support plugin works in the Vim GUI (gVim) and text mode Vim. It's a little easier to use in the GUI, and Bash Support doesn't implement most of its menu functions in Vim's text mode, so you might want to stick with gVim when scripting.
When Bash Support is installed, gVim will include a new menu, appropriately titled Bash. This puts all of the Bash Support functions right at your fingertips (or mouse button, if you prefer). Let's walk through some of the features, and see how Bash Support can make Bash scripting a breeze.
Header and comments
If you believe in using extensive comments in your scripts, and I hope you do, you'll really enjoy using Bash Support. Bash Support provides a number of functions that make it easy to add comments to your bash scripts and programs automatically, or with just a mouse click or a few keystrokes.
When you start a non-trivial script that will be used and maintained by others, it's a good idea to include a header with basic information -- the name of the script, usage, description, notes, author information, copyright, and any other info that might be useful to the next person who has to maintain the script. Bash Support makes it a breeze to provide this information. Go to Bash -> Comments -> File Header, and gVim will insert a header like this in your script:
#!/bin/bash
#===============================================================================
#
#          FILE:  test.sh
#
#         USAGE:  ./test.sh
#
#   DESCRIPTION:
#
#       OPTIONS:  ---
#  REQUIREMENTS:  ---
#          BUGS:  ---
#         NOTES:  ---
#        AUTHOR:  Joe Brockmeier, [email protected]
#       COMPANY:  Dissociated Press
#       VERSION:  1.0
#       CREATED:  05/25/2007 10:31:01 PM MDT
#      REVISION:  ---
#===============================================================================

You'll need to fill in some of the information, but Bash Support grabs the author, company name, and email address from your ~/.vimrc, and fills in the file name and created date automatically. To make life even easier, if you start Vim or gVim with a new file that ends with an .sh extension, it will insert the header automatically.
As you're writing your script, you might want to add comment blocks for your functions as well. To do this, go to Bash -> Comment -> Function Description to insert a block of text like this:
#===  FUNCTION  ================================================================
#          NAME:
#   DESCRIPTION:
#    PARAMETERS:
#       RETURNS:
#===============================================================================

Just fill in the relevant information and carry on coding.
The Comment menu allows you to insert other types of comments, insert the current date and time, and turn selected code into a comment, and vice versa.
Statements and snippets
Let's say you want to add an if-else statement to your script. You could type out the statement, or you could just use Bash Support's handy selection of pre-made statements. Go to Bash -> Statements and you'll see a long list of pre-made statements that you can just plug in and fill in the blanks. For instance, if you want to add a while statement, you can go to Bash -> Statements -> while, and you'll get the following:
while _; do
done

The cursor will be positioned where the underscore (_) is above. All you need to do is add the test statement and the actual code you want to run in the while statement. Sure, it'd be nice if Bash Support could do all that too, but there's only so far an IDE can help you.
However, you can help yourself. When you do a lot of bash scripting, you might have functions or code snippets that you reuse in new scripts. Bash Support allows you to add your snippets and functions by highlighting the code you want to save, then going to Bash -> Statements -> write code snippet. When you want to grab a piece of prewritten code, go to Bash -> Statements -> read code snippet. Bash Support ships with a few included code fragments.
Another way to add snippets to the statement collection is to just place a text file with the snippet under the ~/.vim/bash-support/codesnippets directory.
Running and debugging scripts
Once you have a script ready to go, it's testing and debugging time. You could exit Vim, make the script executable, run it and see if it has any bugs, and then go back to Vim to edit it, but that's tedious. Bash Support lets you stay in Vim while doing your testing.
When you're ready to make the script executable, just choose Bash -> Run -> make script executable. To save and run the script, press Ctrl-F9, or go to Bash -> Run -> save + run script.

Bash Support also lets you call the bash debugger (bashdb) directly from within Vim. On Ubuntu, it's not installed by default, but that's easily remedied with apt-get install bashdb. Once it's installed, you can debug the script you're working on with F9 or Bash -> Run -> start debugger.

If you want a "hard copy" -- a PostScript printout -- of your script, you can generate one by going to Bash -> Run -> hardcopy to FILENAME.ps. This is where Bash Support comes in handy for any type of file, not just bash scripts. You can use this function within any file to generate a PostScript printout.
Bash Support has several other functions to help run and test scripts from within Vim. One useful feature is syntax checking, which you can access with Alt-F9. If you have no syntax errors, you'll get a quick OK. If there are problems, you'll see a small window at the bottom of the Vim screen with a list of syntax errors. From that window you can highlight the error and press Enter, and you'll be taken to the line with the error.

Put away the reference book...
Don't you hate it when you need to include a regular expression or a test in a script, but can't quite remember the syntax? That's no problem when you're using Bash Support, because you have Regex and Tests menus with all you'll need. For example, if you need to verify that a file exists and is owned by the correct user ID (UID), go to Bash -> Tests -> file exists and is owned by the effective UID. Bash Support will insert the appropriate test ([ -O _ ]) with your cursor in the spot where you have to fill in the file name.

To build regular expressions quickly, go to the Bash menu, select Regex, then pick the appropriate expression from the list. It's fairly useful when you can't remember exactly how to express "zero or one" or other regular expressions.
Bash Support also includes menus for environment variables, bash builtins, shell options, and a lot more.
Hotkey support
Vim users can access many of Bash Support's features using hotkeys. While not as simple as clicking the menu, the hotkeys do follow a logical scheme that makes them easy to remember. For example, all of the comment functions are accessed with \c, so if you want to insert a file header, you use \ch; if you want a date inserted, type \cd; and for a line end comment, use \cl.

Statements can be accessed with \a. Use \ac for a case statement, \aie for an "if then else" statement, \af for a "for in..." statement, and so on. Note that the online docs are incorrect here, and indicate that statements begin with \s, but Bash Support ships with a PDF reference card (under .vim/bash-support/doc/bash-hot-keys.pdf) that gets it right.

Run commands are accessed with \r. For example, to save the file and run a script, use \rr; to make a script executable, use \re; and to start the debugger, type \rd. I won't try to detail all of the shortcuts, but you can pull up a reference using :help bashsupport-usage-vim when in Vim, or use the PDF. The full Bash Support reference is available within Vim by running :help bashsupport, or you can read it online.

Of course, we've covered only a small part of Bash Support's functionality. The next time you need to whip up a shell script, try it using Vim with Bash Support. This plugin makes scripting in bash a lot easier.
The Bash shell contains no built-in debugger, nor even any debugging-specific commands or constructs. Syntax errors or outright typos in the script generate cryptic error messages that are often of no help in debugging a non-functional script.
Example 3-98. test23, a buggy script
#!/bin/bash

a=37

if [$a -gt 27 ]
then
  echo $a
fi

exit 0
Output from script:
./test23: [37: command not found

What's wrong with the above script (hint: after the if)?
What if the script executes, but does not work as expected? This is the all too familiar logic error.
Example 3-99. test24, another buggy script
#!/bin/bash

#  This is supposed to delete all filenames
#+ containing embedded spaces in current directory,
#+ but doesn't.  Why not?

badname=`ls | grep ' '`

# echo "$badname"

rm "$badname"

exit 0
To find out what's wrong with Example 3-99, uncomment the echo "$badname" line. Echo statements are useful for seeing whether what you expect is actually what you get.
Summarizing the symptoms of a buggy script,
- It bombs with a syntax error message, or
- It runs, but does not work as expected (logic error)
- It runs, works as expected, but has nasty side effects (logic bomb).
Tools for debugging non-working scripts include
- echo statements at critical points in the script to trace the variables, and otherwise give a snapshot of what is going on.
- using the tee filter to check processes or data flows at critical points.
- setting option flags -n -v -x
sh -n scriptname checks for syntax errors without actually running the script. This is the equivalent of inserting set -n or set -o noexec into the script. Note that certain types of syntax errors can slip past this check.
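The caveat about errors slipping past the -n check is easy to demonstrate: the test23 bug above (the missing space after [) is a runtime error, not a parse error, so bash -n reports nothing. A sketch (the script name is a throwaway temp file):

```shell
#  Sketch: bash -n only parses the script.  The missing space after "["
#+ in test23 is a runtime "command not found", not a syntax error,
#+ so the noexec check passes it without complaint.
cat > /tmp/buggy.$$ <<'EOF'
#!/bin/bash
a=37
if [$a -gt 27 ]
then
  echo $a
fi
EOF

bash -n /tmp/buggy.$$
status=$?               # 0: no syntax errors detected
rm -f /tmp/buggy.$$
echo "syntax check exit status: $status"
```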
sh -v scriptname echoes each command before executing it. This is the equivalent of inserting set -v or set -o verbose in the script.
sh -x scriptname echoes the result of each command, but in an abbreviated manner. This is the equivalent of inserting set -x or set -o xtrace in the script.
Inserting set -u or set -o nounset in the script runs it, but gives an unbound variable error message at each attempt to use an undeclared variable.
- trapping at exit
The exit command in a script actually sends a signal 0, terminating the process, that is, the script itself. It is often useful to trap the exit, forcing a "printout" of variables, for example. The trap must be the first command in the script.
A signal is simply a message sent to a process, either by the kernel or another process, telling it to take some specified action (usually to terminate). For example, hitting a Control-C, sends a user interrupt, an INT signal, to a running program.
trap 2                                # Ignore interrupts (no action specified).
trap 'echo "Control-C disabled."' 2
Example 3-100. Trapping at exit
#!/bin/bash

trap 'echo Variable Listing --- a = $a b = $b' EXIT
# EXIT is the name of the signal generated upon exit from a script.

a=39
b=36

exit 0
# Note that commenting out the 'exit' command makes no difference,
# since the script exits anyhow after running out of commands.

Example 3-101. Cleaning up after Control-C
#!/bin/bash
# logon.sh
# A quick 'n dirty script to check whether you are on-line yet.

TRUE=1
LOGFILE=/var/log/messages
# Note that $LOGFILE must be readable (chmod 644 /var/log/messages).
TEMPFILE=temp.$$
# Create a "unique" temp file name, using process id of the script.
KEYWORD=address
# At logon, the line "remote IP address xxx.xxx.xxx.xxx" is appended to /var/log/messages.
ONLINE=22
USER_INTERRUPT=13

trap 'rm -f $TEMPFILE; exit $USER_INTERRUPT' TERM INT
# Cleans up the temp file if the script is interrupted by Control-C.

echo

while [ $TRUE ]  # Endless loop.
do
  tail -1 $LOGFILE > $TEMPFILE
  # Saves last line of system log file as temp file.
  search=`grep $KEYWORD $TEMPFILE`
  # Checks for presence of the "IP address" phrase,
  # indicating a successful logon.

  if [ ! -z "$search" ]  # Quotes necessary because of possible spaces.
  then
    echo "On-line"
    rm -f $TEMPFILE      # Clean up temp file.
    exit $ONLINE
  else
    echo -n "."          # -n option to echo suppresses newline,
                         # so you get continuous rows of dots.
  fi

  sleep 1
done

# Note: if you change the KEYWORD variable to "Exit",
# this script can be used while on-line to check for an unexpected logoff.

# Exercise: Change the script, as per the above note,
# and prettify it.

exit 0

trap '' SIGNAL (two adjacent apostrophes) disables SIGNAL for the remainder of the script. trap SIGNAL restores the functioning of SIGNAL once more. This is useful to protect a critical portion of a script from an undesirable interrupt.
trap '' 2  # Signal 2 is Control-C, now disabled.
command
command
command
trap 2     # Reenables Control-C.
2.3.1. Debugging on the entire script
When things don't go according to plan, you need to determine what exactly causes the script to fail. Bash provides extensive debugging features. The most common is to start up the subshell with the -x option, which will run the entire script in debug mode. Traces of each command plus its arguments are printed to standard output after the commands have been expanded but before they are executed.
This is the commented-script1.sh script run in debug mode. Note again that the added comments are not visible in the output of the script.
willy:~/scripts> bash -x script1.sh
+ clear
+ echo 'The script starts now.'
The script starts now.
+ echo 'Hi, willy!'
Hi, willy!
+ echo
+ echo 'I will now fetch you a list of connected users:'
I will now fetch you a list of connected users:
+ echo
+ w
 4:50pm  up 18 days,  6:49,  4 users,  load average: 0.58, 0.62, 0.40
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT
root     tty2     -                Sat 2pm  5:36m  0.24s  0.05s  -bash
willy    :0       -                Sat 2pm   ?     0.00s   ?     -
willy    pts/3    -                Sat 2pm 43:13  36.82s 36.82s  BitchX willy ir
willy    pts/2    -                Sat 2pm 43:13   0.13s  0.06s  /usr/bin/screen
+ echo
+ echo 'I'\''m setting two variables now.'
I'm setting two variables now.
+ COLOUR=black
+ VALUE=9
+ echo 'This is a string: '
This is a string:
+ echo 'And this is a number: '
And this is a number:
+ echo
+ echo 'I'\''m giving you back your prompt now.'
I'm giving you back your prompt now.
+ echo

2.3.2. Debugging on part(s) of the script
Using the set Bash built-in, you can run the portions of the script you trust in normal mode, and display debugging information only for troublesome zones. Say we are not sure what the w command will do in the example commented-script1.sh; then we could enclose it in the script like this:
set -x  # Activate debugging from here
w
set +x  # Stop debugging from here

Output then looks like this:
willy:~/scripts> script1.sh
The script starts now.
Hi, willy!
I will now fetch you a list of connected users:
+ w
 5:00pm  up 18 days,  7:00,  4 users,  load average: 0.79, 0.39, 0.33
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU  WHAT
root     tty2     -                Sat 2pm  5:47m  0.24s  0.05s  -bash
willy    :0       -                Sat 2pm   ?     0.00s   ?     -
willy    pts/3    -                Sat 2pm 54:02  36.88s 36.88s  BitchX willyke
willy    pts/2    -                Sat 2pm 54:02   0.13s  0.06s  /usr/bin/screen
+ set +x
I'm setting two variables now.
This is a string:
And this is a number:
I'm giving you back your prompt now.
willy:~/scripts>

You can switch debugging mode on and off as many times as you want within the same script.
The table below gives an overview of other useful Bash options:
Table 2-1. Overview of set debugging options
Short notation    Long notation     Result
set -f            set -o noglob     Disable file name generation using metacharacters (globbing).
set -v            set -o verbose    Print shell input lines as they are read.
set -x            set -o xtrace     Print command traces before executing each command.

A dash is used to activate a shell option and a plus to deactivate it. Don't let this confuse you!
In the example below, we demonstrate these options on the command line:
willy:~/scripts> set -v
willy:~/scripts> ls
ls
commented-scripts.sh    script1.sh
willy:~/scripts> set +v
set +v
willy:~/scripts> ls *
commented-scripts.sh    script1.sh
willy:~/scripts> set -f
willy:~/scripts> ls *
ls: *: No such file or directory
willy:~/scripts> touch *
willy:~/scripts> ls *
commented-scripts.sh    script1.sh
willy:~/scripts> rm *
willy:~/scripts> ls
commented-scripts.sh    script1.sh

Alternatively, these modes can be specified in the script itself, by adding the desired options to the shell declaration on the first line. Options can be combined, as is usually the case with UNIX commands:
#!/bin/bash -xv
Once you have found the buggy part of your script, you can add echo statements before each command you are unsure of, so that you will see exactly where and why things don't work. In the example commented-script1.sh script, it could be done like this, still assuming that the displaying of users gives us problems:
echo "debug message: now attempting to start w command"; w

In more advanced scripts, echo can be inserted to display the content of variables at different stages in the script, so that flaws can be detected:
echo "Variable VARNAME is now set to $VARNAME."

Example of debugging a shell script
To print commands and their arguments as they are executed:
cat example
#!/bin/sh
TEST1=result1
TEST2=result2
if [ $TEST1 = "result2" ]
then
  echo $TEST1
fi
if [ $TEST1 = "result1" ]
then
  echo $TEST1
fi
if [ $test3 = "whosit" ]
then
  echo fail here cos it's wrong
fi

This is a script called example which has an error in it: the variable $test3 is never set, so the third and last [ test command fails.
Running the script produces:
example
result1
[: argument expected

The script fails; to see where the error occurred, use the -x option like this:
sh -x example
TEST1=result1
TEST2=result2
+ [ result1 = result2 ]
+ [ result1 = result1 ]
+ echo result1
result1
+ [ = whosit ]
example: [: argument expected

The error occurs in the command [ = whosit ], which is wrong because the variable $test3 has not been set. You can now see where to fix it.
Debugging shell scripts
To see where a script produces an error use the command:
sh -x script argument

The -x option to the sh command tells it to print commands and their arguments as they are executed.
You can then see what stage of the script has been reached when an error occurs.
11.4.4 Debugging Programs
At times you may need to debug a program to find and correct errors. Two options to the sh command (listed below) can help you debug a program:

sh -v shellprogramname  -- prints the shell input lines as they are read by the system
sh -x shellprogramname  -- prints commands and their arguments as they are executed

To try these two options, create a shell program that has an error in it. For example, create a file called bug that contains the following list of commands:
$ cat bug<CR>
today=`date`
echo enter person
read person
mail $1
$person
When you log off come into my office please.
$today.
MLH
$
Notice that today equals the output of the date command, which must be enclosed in grave accents for command substitution to occur.

The mail message sent to Tom (the login tommy, passed in as $1) should look like the following screen:

$ mail<CR>
From mlh Mon Apr 10 11:36 CST 1989
Tom
When you log off come into my office please.
Mon Apr 10 11:36:32 CST 1989
MLH
?
.
To execute bug, you have to press the BREAK or DELETE key to end the program.
To debug this program, try executing bug using sh -v. This will print the lines of the file as they are read by the system, as shown below:

$ sh -v bug tommy<CR>
today=`date`
echo enter person
enter person
read person
tom
mail $1
Notice that the output stops on the mail command.

Before you fix the bug program, try executing it with sh -x, which prints the commands and their arguments as they are read by the system.

$ sh -x bug tommy<CR>
+ date
today=Mon Apr 10 11:07:23 CST 1989
+ echo enter person
enter person
+ read person
tom
+ mail tom
$
Once again, the program stops at the mail command. The corrected bug program is as follows:

$ cat bug<CR>
today=`date`
echo enter person
read person
mail $1 <<!
$person
When you log off come into my office please.
$today
MLH
!
$

The tee command is a helpful command for debugging pipelines. While simply passing its standard input to its standard output, it also saves a copy of its input into the file whose name is given as an argument.

The general format of the tee command is:

command1 | tee saverfile | command2<CR>

saverfile is the file that saves the output of command1 for you to study.
For example, suppose you want to check on the output of the grep command in the following command line:

who | grep $1 | cut -c1-9<CR>

You can use tee to copy the output of grep into a file called check, without disturbing the rest of the pipeline.
who | grep $1 | tee check | cut -c1-9<CR>

The file check contains a copy of the grep output, as shown in the following screen:

$ who | grep mlhmo | tee check | cut -c1-9<CR>
mlhmo
$ cat check<CR>
mlhmo     tty61        Apr 10 11:30
$

Advanced Bash-Scripting Guide Chapter 30. Debugging
The Bash shell contains no debugger, nor even any debugging-specific commands or constructs. [1] Syntax errors or outright typos in the script generate cryptic error messages that are often of no help in debugging a non-functional script.

Example 30-1. A buggy script
#!/bin/bash
# ex74.sh

# This is a buggy script.

a=37

if [$a -gt 27 ]
then
  echo $a
fi

exit 0

Output from script:
./ex74.sh: [37: command not found

What's wrong with the above script (hint: after the if)?
Example 30-2. Missing keyword
#!/bin/bash
# missing-keyword.sh: What error message will this generate?

for a in 1 2 3
do
  echo "$a"
# done     # Required keyword 'done' commented out in line 7.

exit 0

Output from script:
missing-keyword.sh: line 10: syntax error: unexpected end of file

Note that the error message does not necessarily reference the line in which the error occurs, but the line where the Bash interpreter finally becomes aware of the error.
Error messages may disregard comment lines in a script when reporting the line number of a syntax error.
What if the script executes, but does not work as expected? This is the all too familiar logic error.
Example 30-3. test24, another buggy script
#!/bin/bash

# This is supposed to delete all filenames in current directory
#+ containing embedded spaces.
# It doesn't work. Why not?

badname=`ls | grep ' '`

# echo "$badname"

rm "$badname"

exit 0

Try to find out what's wrong with Example 30-3 by uncommenting the echo "$badname" line. Echo statements are useful for seeing whether what you expect is actually what you get.
In this particular case, rm "$badname" will not give the desired results because $badname should not be quoted. Placing it in quotes ensures that rm has only one argument (and it will therefore match only one filename). A partial fix is to remove the quotes from $badname and to reset $IFS to contain only a newline, IFS=$'\n'. However, there are simpler ways of going about it.
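The partial fix described above can be sketched as follows; the scratch directory (/tmp/ifs-demo) and the file names are made up for the demonstration:

```shell
#!/bin/bash
# Sketch of the unquoted-expansion-plus-IFS fix described above.
# /tmp/ifs-demo and the file names below are hypothetical.
mkdir -p /tmp/ifs-demo && cd /tmp/ifs-demo
touch "two words" "three word file" plain

badname=`ls | grep ' '`

IFS=$'\n'        # Split unquoted expansions on newlines only,
                 #+ so embedded spaces survive word splitting.
rm -- $badname   # Deliberately unquoted: one argument per line.
unset IFS

ls               # Only 'plain' should remain.
```

Note that filenames containing glob characters would still undergo pathname expansion after the split, which is one reason the simpler glob-based methods below are preferable.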
# Correct methods of deleting filenames containing spaces.
rm *\ *
rm *" "*
rm *' '*
# Thank you. S.C.

Summarizing the symptoms of a buggy script,
- It bombs with a "syntax error" message, or
- It runs, but does not work as expected (logic error).
- It runs, works as expected, but has nasty side effects (logic bomb).
Tools for debugging non-working scripts include
- trapping signals
- echo statements at critical points in the script to trace the variables, and otherwise give a snapshot of what is going on.
- using the tee filter to check processes or data flows at critical points.
- setting option flags -n -v -x
sh -n scriptname checks for syntax errors without actually running the script. This is the equivalent of inserting set -n or set -o noexec into the script. Note that certain types of syntax errors can slip past this check.
sh -v scriptname echoes each command before executing it. This is the equivalent of inserting set -v or set -o verbose in the script.
The -n and -v flags work well together. sh -nv scriptname gives a verbose syntax check.
sh -x scriptname echoes the result of each command, but in an abbreviated manner. This is the equivalent of inserting set -x or set -o xtrace in the script.
Inserting set -u or set -o nounset in the script runs it, but gives an unbound variable error message at each attempt to use an undeclared variable.
- Using an "assert" function to test a variable or condition at critical points in a script. (This is an idea borrowed from C.)
Example 30-4. Testing a condition with an "assert"
#!/bin/bash
# assert.sh

assert ()                 # If condition false,
{                         #+ exit from script with error message.
  E_PARAM_ERR=98
  E_ASSERT_FAILED=99

  if [ -z "$2" ]          # Not enough parameters passed.
  then
    return $E_PARAM_ERR   # No damage done.
  fi

  lineno=$2

  if [ ! $1 ]
  then
    echo "Assertion failed:  \"$1\""
    echo "File \"$0\", line $lineno"
    exit $E_ASSERT_FAILED
  # else
  #   return
  #   and continue executing script.
  fi
}

a=5
b=4
condition="$a -lt $b"     # Error message and exit from script.
                          # Try setting "condition" to something else,
                          #+ and see what happens.

assert "$condition" $LINENO
# The remainder of the script executes only if the "assert" does not fail.

# Some commands.
# ...
echo "This statement echoes only if the \"assert\" does not fail."
# ...
# Some more commands.

exit 0

- trapping at exit.
The exit command in a script triggers a signal 0, terminating the process, that is, the script itself. [2] It is often useful to trap the exit, forcing a "printout" of variables, for example. The trap must be the first command in the script.
Example 30-5. Trapping at exit
- trap
- Specifies an action on receipt of a signal; also useful for debugging.
A signal is simply a message sent to a process, either by the kernel or another process, telling it to take some specified action (usually to terminate). For example, hitting a Control-C, sends a user interrupt, an INT signal, to a running program.
trap '' 2
# Ignore interrupt 2 (Control-C), with no action specified.

trap 'echo "Control-C disabled."' 2
# Message when Control-C pressed.
Example 30-6. Cleaning up after Control-C
#!/bin/bash

trap 'echo Variable Listing --- a = $a  b = $b' EXIT
# EXIT is the name of the signal generated upon exit from a script.

a=39
b=36

exit 0
# Note that commenting out the 'exit' command makes no difference,
#+ since the script exits in any case after running out of commands.
#!/bin/bash
# logon.sh: A quick 'n dirty script to check whether you are on-line yet.

TRUE=1
LOGFILE=/var/log/messages
# Note that $LOGFILE must be readable (chmod 644 /var/log/messages).
TEMPFILE=temp.$$
# Create a "unique" temp file name, using process id of the script.
KEYWORD=address
# At logon, the line "remote IP address xxx.xxx.xxx.xxx"
# appended to /var/log/messages.
ONLINE=22
USER_INTERRUPT=13

CHECK_LINES=100
# How many lines in log file to check.

trap 'rm -f $TEMPFILE; exit $USER_INTERRUPT' TERM INT
# Cleans up the temp file if script interrupted by control-c.

echo

while [ $TRUE ]  # Endless loop.
do
  tail -$CHECK_LINES $LOGFILE > $TEMPFILE
  # Saves last 100 lines of system log file as temp file.
  # Necessary, since newer kernels generate many log messages at log on.
  search=`grep $KEYWORD $TEMPFILE`
  # Checks for presence of the "IP address" phrase,
  # indicating a successful logon.

  if [ ! -z "$search" ]  # Quotes necessary because of possible spaces.
  then
    echo "On-line"
    rm -f $TEMPFILE      # Clean up temp file.
    exit $ONLINE
  else
    echo -n "."          # -n option to echo suppresses newline,
                         # so you get continuous rows of dots.
  fi

  sleep 1
done

# Note: if you change the KEYWORD variable to "Exit",
# this script can be used while on-line to check for an unexpected logoff.

# Exercise: Change the script, as per the above note,
# and prettify it.

exit 0

# Nick Drage suggests an alternate method:

while true
do
  ifconfig ppp0 | grep UP 1> /dev/null && echo "connected" && exit 0
  echo -n "."  # Prints dots (.....) until connected.
  sleep 2
done

# Problem: Hitting Control-C to terminate this process may be insufficient.
# (Dots may keep on echoing.)
# Exercise: Fix this.

# Stephane Chazelas has yet another alternative:

CHECK_INTERVAL=1

while ! tail -1 "$LOGFILE" | grep -q "$KEYWORD"
do
  echo -n .
  sleep $CHECK_INTERVAL
done
echo "On-line"

# Exercise: Discuss the strengths and weaknesses
# of each of these various approaches.
The DEBUG argument to trap causes a specified action to execute after every command in a script. This permits tracing variables, for example.

Example 30-7. Tracing a variable
#!/bin/bash

trap 'echo "VARIABLE-TRACE> \$variable = \"$variable\""' DEBUG
# Echoes the value of $variable after every command.

variable=29

echo "Just initialized \"\$variable\" to $variable."

let "variable *= 3"
echo "Just multiplied \"\$variable\" by 3."

# The "trap 'commands' DEBUG" construct would be more useful
# in the context of a complex script,
# where placing multiple "echo $variable" statements might be
# clumsy and time-consuming.

# Thanks, Stephane Chazelas for the pointer.

exit 0
trap '' SIGNAL (two adjacent apostrophes) disables SIGNAL for the remainder of the script. trap SIGNAL restores the functioning of SIGNAL once more. This is useful to protect a critical portion of a script from an undesirable interrupt.
trap '' 2  # Signal 2 is Control-C, now disabled.
command
command
command
trap 2     # Reenables Control-C.

Notes
[1] Rocky Bernstein's Bash debugger partially makes up for this lack.
[2] By convention, signal 0 is assigned to exit.

Recommended Links
Softpanorama Recommended
Top articles
[Jul 04, 2020] Learn Bash Debugging Techniques the Hard Way by Ian Miell Published on Jul 04, 2020 | zwischenzugs.com
[Jul 25, 2017] Beginner Mistakes Published on Jul 25, 2017 | wiki.bash-hackers.org
Sites
Please visit Heiner Steven's SHELLdorado, the best shell scripting site on the Internet
- The URL of the preferred download site for the pdf version of the ABS Guide is:
http://personal.riverusers.com/~thegrendel/abs-guide.pdf
This version is specially book-formatted for duplex printing and is usually more up-to-date than the version you can download from the LDP site. Note that it's a 2.6 MB download.
- http://personal.riverusers.com/~thegrendel/abs-guide-5.2.tar.bz2 The "bleeding edge" branch is the 'tar/BZ2' link. It will generally run anywhere from a few days to a couple of months ahead of the main release on the LDP site.
Linux.com Turn Vim into a bash IDE
5 Beginner Linux Setup Ideas For Cron Jobs & Shell Scripts (Sep 05, 2014)
Debugging your shell scripts (Nov 21, 2013)
Reference
Examples shipped with bash 3.2 and newer
Path Description X-ref ./bashdb Deprecated sample implementation of a bash debugger. ./complete Shell completion code. ./functions Example functions. ./functions/array-stuff Various array functions (ashift, array_sort, reverse). ./functions/array-to-string Convert an array to a string. ./functions/autoload An almost ksh-compatible 'autoload' (no lazy load). ksh ./functions/autoload.v2 An almost ksh-compatible 'autoload' (no lazy load). ksh ./functions/autoload.v3 A more ksh-compatible 'autoload' (with lazy load). ksh ./functions/basename A replacement for basename(1). basename ./functions/basename2 Fast basename(1) and dirname(1) functions for bash/sh. basename, dirname ./functions/coproc.bash Start, control, and end co-processes. ./functions/coshell.bash Control shell co-processes (see coprocess.bash). ./functions/coshell.README README for coshell and coproc. ./functions/csh-compat A C-shell compatibility package. csh ./functions/dirfuncs Directory manipulation functions from the book The Korn Shell. ./functions/dirname A replacement for dirname(1). dirname ./functions/emptydir Find out if a directory is empty. ./functions/exitstat Display the exit status of processes. ./functions/external Like command, but forces the use of external command. ./functions/fact Recursive factorial function. ./functions/fstty Front-end to sync TERM changes to both stty(1) and readline 'bind'. stty.bash ./functions/func Print out definitions for functions named by arguments. ./functions/gethtml Get a web page from a remote server (wget(1) in bash). ./functions/getoptx.bash getopt function that parses long-named options. ./functions/inetaddr Internet address conversion (inet2hex and hex2inet). ./functions/inpath Return zero if the argument is in the path and executable. inpath ./functions/isnum.bash Test user input on numeric or character value. ./functions/isnum2 Test user input on numeric values, with floating point. ./functions/isvalidip Test user input for valid IP addresses. 
./functions/jdate.bash Julian date conversion. ./functions/jj.bash Look for running jobs. ./functions/keep Try to keep some programs in the foreground and running. ./functions/ksh-cd ksh-like cd: cd [-LP] [dir[change]]. ksh ./functions/ksh-compat-test ksh-like arithmetic test replacements. ksh ./functions/kshenv Functions and aliases to provide the beginnings of a ksh environment for bash ksh ./functions/login Replace the login and newgrp built-ins in old Bourne shells. ./functions/lowercase Rename files to lowercase. rename lower ./functions/manpage Find and print a manpage. fman ./functions/mhfold Print MH folders, useful only because folders(1) doesn't print mod date/times. ./functions/notify.bash Notify when jobs change status. ./functions/pathfuncs Path related functions (no_path, add_path, pre-path, del_path). path ./functions/README README ./functions/recurse Recursive directory traverser. ./functions/repeat2 A clone of the C shell built-in repeat. repeat, csh ./functions/repeat3 A clone of the C shell built-in repeat. repeat, csh ./functions/seq Generate a sequence from m to n;m defaults to 1. ./functions/seq2 Generate a sequence from m to n;m defaults to 1. ./functions/shcat Readline-based pager. cat, readline pager ./functions/shcat2 Readline-based pagers. cat, readline pager ./functions/sort-pos-params Sort the positional parameters. ./functions/substr A function to emulate the ancient ksh built-in. ksh ./functions/substr2 A function to emulate the ancient ksh built-in. ksh ./functions/term A shell function to set the terminal type interactively or not. ./functions/whatis An implementation of the 10th Edition Unix sh built-in whatis(1) command. ./functions/whence An almost ksh-compatible whence(1) command. ./functions/which An emulation of which(1) as it appears in FreeBSD. ./functions/xalias.bash Convert csh alias commands to bash functions. csh, aliasconv ./functions/xfind.bash A find(1) clone. ./loadables/ Example loadable replacements. 
./loadables/basename.c Return nondirectory portion of pathname. basename ./loadables/cat.c cat(1) replacement with no options--the way cat was intended. cat, readline pager ./loadables/cut.c cut(1) replacement. ./loadables/dirname.c Return directory portion of pathname. dirname ./loadables/finfo.c Print file info. ./loadables/getconf.c POSIX.2 getconf utility. ./loadables/getconf.h Replacement definitions for ones the system doesn't provide. ./loadables/head.c Copy first part of files. ./loadables/hello.c Obligatory "Hello World" / sample loadable. ./loadables/id.c POSIX.2 user identity. ./loadables/ln.c Make links. ./loadables/logname.c Print login name of current user. ./loadables/Makefile.in Simple makefile for the sample loadable built-ins. ./loadables/mkdir.c Make directories. ./loadables/necho.c echo without options or argument interpretation. ./loadables/pathchk.c Check pathnames for validity and portability. ./loadables/print.c Loadable ksh-93 style print built-in. ./loadables/printenv.c Minimal built-in clone of BSD printenv(1). ./loadables/push.c Anyone remember TOPS-20? ./loadables/README README ./loadables/realpath.c Canonicalize pathnames, resolving symlinks. ./loadables/rmdir.c Remove directory. ./loadables/sleep.c Sleep for fractions of a second. ./loadables/strftime.c Loadable built-in interface to strftime(3). ./loadables/sync.c Sync the disks by forcing pending filesystem writes to complete. ./loadables/tee.c Duplicate standard input. ./loadables/template.c Example template for loadable built-in. ./loadables/truefalse.c True and false built-ins. ./loadables/tty.c Return terminal name. ./loadables/uname.c Print system information. ./loadables/unlink.c Remove a directory entry. ./loadables/whoami.c Print out username of current user. ./loadables/perl/ Illustrates how to build a Perl interpreter into bash. ./misc Miscellaneous ./misc/aliasconv.bash Convert csh aliases to bash aliases and functions. 
csh, xalias ./misc/aliasconv.sh Convert csh aliases to bash aliases and functions. csh, xalias ./misc/cshtobash Convert csh aliases, environment variables, and variables to bash equivalents. csh, xalias ./misc/README README ./misc/suncmd.termcap SunView TERMCAP string. ./obashdb Modified version of the Korn Shell debugger from Bill Rosenblatt's Learning the Korn Shell. ./scripts.noah Noah Friedman's collection of scripts (updated to bash v2 syntax by Chet Ramey). ./scripts.noah/aref.bash Pseudo-arrays and substring indexing examples. ./scripts.noah/bash.sub.bash Library functions used by require.bash. ./scripts.noah/bash_version. bash A function to slice up $BASH_VERSION. ./scripts.noah/meta.bash Enable and disable eight-bit readline input. ./scripts.noah/mktmp.bash Make a temporary file with a unique name. ./scripts.noah/number.bash A fun hack to translate numerals into English. ./scripts.noah/PERMISSION Permissions to use the scripts in this directory. ./scripts.noah/prompt.bash A way to set PS1 to some predefined strings. ./scripts.noah/README README ./scripts.noah/remap_keys.bash A front end to bind to redo readline bindings. readline ./scripts.noah/require.bash Lisp-like require/provide library functions for bash. ./scripts.noah/send_mail. Replacement SMTP client written in bash. ./scripts.noah/shcat.bash bash replacement for cat(1). cat ./scripts.noah/source.bash Replacement for source that uses current directory. ./scripts.noah/string.bash The string(3) functions at the shell level. ./scripts.noah/stty.bash Front-end to stty(1) that changes readline bindings too. fstty ./scripts.noah/y_or_n_p.bash Prompt for a yes/no/quit answer. ask ./scripts.v2 John DuBois' ksh script collection (converted to bash v2 syntax by Chet Ramey). ./scripts.v2/arc2tarz Convert an arc archive to a compressed tar archive. ./scripts.v2/bashrand Random number generator with upper and lower bounds and optional seed. random ./scripts.v2/cal2day.bash Convert a day number to a name. 
./scripts.v2/cdhist.bash cd replacement with a directory stack added. ./scripts.v2/corename Tell what produced a core file. ./scripts.v2/fman Fast man(1) replacement. manpage ./scripts.v2/frcp Copy files using ftp(1) but with rcp-type command-line syntax. ./scripts.v2/lowercase Change filenames to lowercase. rename lower ./scripts.v2/ncp A nicer front end for cp(1) (has -i, etc).. ./scripts.v2/newext Change the extension of a group of files. rename ./scripts.v2/nmv A nicer front end for mv(1) (has -i, etc).. rename ./scripts.v2/pages Print specified pages from files. ./scripts.v2/PERMISSION Permissions to use the scripts in this directory. ./scripts.v2/pf A pager front end that handles compressed files. ./scripts.v2/pmtop Poor man's top(1) for SunOS 4.x and BSD/OS. ./scripts.v2/README README ./scripts.v2/ren Rename files by changing parts of filenames that match a pattern. rename ./scripts.v2/rename Change the names of files that match a pattern. rename ./scripts.v2/repeat Execute a command multiple times. repeat ./scripts.v2/shprof Line profiler for bash scripts. ./scripts.v2/untar Unarchive a (possibly compressed) tarfile into a directory. ./scripts.v2/uudec Carefully uudecode(1) multiple files. ./scripts.v2/uuenc uuencode(1) multiple files. ./scripts.v2/vtree Print a visual display of a directory tree. tree ./scripts.v2/where Show where commands that match a pattern are. ./scripts Example scripts. ./scripts/adventure.sh Text adventure game in bash! ./scripts/bcsh.sh Bourne shell's C shell emulator. csh ./scripts/cat.sh Readline-based pager. cat, readline pager ./scripts/center Center a group of lines. ./scripts/dd-ex.sh Line editor using only /bin/sh, /bin/dd, and /bin/rm. ./scripts/fixfiles.bash Recurse a tree and fix files containing various bad characters. ./scripts/hanoi.bash The inevitable Towers of Hanoi in bash. ./scripts/inpath Search $PATH for a file the same name as $1; return TRUE if found. 
inpath ./scripts/krand.bash Produces a random number within integer limits. random ./scripts/line-input.bash Line input routine for GNU Bourne Again Shell plus terminal-control primitives. ./scripts/nohup.bash bash version of nohup command. ./scripts/precedence Test relative precedences for && and || operators. ./scripts/randomcard.bash Print a random card from a card deck. random ./scripts/README README ./scripts/scrollbar Display scrolling text. ./scripts/scrollbar2 Display scrolling text. ./scripts/self-repro A self-reproducing script (careful!). ./scripts/showperm.bash Convert ls(1) symbolic permissions into octal mode. ./scripts/shprompt Display a prompt and get an answer satisfying certain criteria. ask ./scripts/spin.bash Display a spinning wheel to show progress. ./scripts/timeout Give rsh(1) a shorter timeout. ./scripts/vtree2 Display a tree printout of the direcotry with disk use in 1k blocks. tree ./scripts/vtree3 Display a graphical tree printout of dir. tree ./scripts/vtree3a Display a graphical tree printout of dir. tree ./scripts/websrv.sh A web server in bash! ./scripts/xterm_title Print the contents of the xterm title bar. ./scripts/zprintf Emulate printf (obsolete since printf is now a bash built-in). ./startup-files Example startup files. ./startup-files/Bash_aliases Some useful aliases (written by Fox). ./startup-files/Bash_profile Sample startup file for bash login shells (written by Fox). ./startup-files/bash-profile Sample startup file for bash login shells (written by Ramey). ./startup-files/bashrc Sample Bourne Again Shell init file (written by Ramey). ./startup-files/Bashrc.bfox Sample Bourne Again Shell init file (written by Fox). ./startup-files/README README ./startup-files/apple Example startup files for Mac OS X. ./startup-files/apple/aliases Sample aliases for Mac OS X. ./startup-files/apple/bash.defaults Sample User preferences file. ./startup-files/apple/environment Sample Bourne Again Shell environment file. 
./startup-files/apple/login Sample login wrapper. ./startup-files/apple/logout Sample logout wrapper. ./startup-files/apple/rc Sample Bourne Again Shell config file. ./startup-files/apple/README README
Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is a compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright of the original materials belongs to their respective owners; quotes are made for educational purposes only in compliance with the fair use doctrine.
Last modified: July 05, 2020