
Care and Feeding of Functions in Shell


Functions in the shell are really scripts that run in the context of the current shell instance: no secondary shell is forked to run a function; it runs within the current shell.

Functions provide a lot more flexibility than aliases. Here are the two simplest functions possible: a "do nothing" function and a "Hello world" function:

function quit {
    exit
}
function hello {
    print "Hello world !"
}
hello
quit
       

Declaring a function is just a matter of writing function my_func { my_code }. Functions can be declared in arbitrary order, but they need to be declared before they are invoked in the script.

Calling a function is just like calling another program: you write its name and (optionally) arguments separated by spaces.

NOTE: you can't enclose arguments in round parentheses -- that would be a syntax error. Nor can you separate arguments with commas, as you might be inclined to do after using C or Perl.

The best way is to create a set of useful shell functions in a separate file. Then you can source the file with the dot (.) command or in your startup scripts. You can also just enter a function at the command line.

To create a function from the command line, you would do something like this:

$ psgrep() {
> ps -ef | grep $1
> }

This is a pretty simple function, and could be implemented as an alias as well. Let's try to solve the problem of displaying large files: awk can be used to find any matching files that are larger than 100K bytes:

largesize() {
   # awk's comparison operator is >, not "gt"; $9 is the filename field in ls -la output
   ls -la | awk '{ if ($5 > 100000) print $9 }'
}

As in almost any programming language, you can use functions to group pieces of code in a more logical way or practice the divine art of recursion.

Bash shell's function feature is pretty primitive, just a slightly enhanced version of a similar facility in the System V Bourne shell. You can create a collection of functions that help you in your work and store them in your .profile file.

Functions are faster than subshell invocation: when you invoke a function, it is already in memory. Modern computers have large RAM, so there is no need to worry about the amount of space a typical function takes up.

The other advantage of functions is that they can and should be used for organizing shell scripts into modular "chunks" of related functions that are easier to develop and maintain. To define a function, you can use the following form (there is also a second, C-style form of function definition that we will not discuss here):

function function_name {
    shell commands
}

You can also delete a function definition with the command unset -f function_name.

Just as the list of aliases is produced by typing the command alias, you can find out what functions are defined in your login session by typing functions. The shell prints not just the names but the definitions of all functions, in alphabetical order by function name. Since this may result in long output, you might want to pipe the output through more or redirect it to a file and view it with the view command.

Note: functions is actually an alias for typeset -f.

There are two important differences between functions and scripts. First, functions do not run in separate processes, as scripts do when you invoke them by name; the "semantics" of running a function are more like those of your .profile when you log in, or of any script invoked with the "dot" command. Second, if a function has the same name as a script or executable program, the function takes precedence.
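The first difference is easy to demonstrate: a function can change a variable in the current shell, while a separately invoked script cannot, because the script runs in a child process. A minimal sketch (using sh -c to stand in for invoking a script by name):

```shell
#!/bin/sh
MYVAR="original"

# 1) A function runs in the current shell, so its assignment persists.
setvar_fn() { MYVAR="set by function"; }
setvar_fn
echo "after function: $MYVAR"

# 2) A script invoked by name runs in a child process, so its
#    assignment is lost when that process exits.
sh -c 'MYVAR="set by script"'
echo "after script:   $MYVAR"
```

Both echo lines report "set by function": the child process's assignment never reaches the parent shell.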


This is a good time to show the order of precedence for the various sources of commands. When you type a command to the shell, it looks in the following places until it finds a match:

  1. Aliases
  2. Keywords such as function and several others, like if and for
  3. Functions
  4. Built-ins like cd and whence

    Note: this means that you can override built-ins with functions
     

  5. Scripts and executable programs, for which the shell searches in the directories listed in the PATH environment variable
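Because functions are found before built-ins, you can wrap a built-in with a function of the same name. Inside the wrapper, the command built-in bypasses function lookup, so you can still reach the original. A sketch:

```shell
#!/bin/sh
# Functions take precedence over built-ins, so this definition
# shadows the cd builtin for the current shell.
cd() {
    # 'command cd' skips function lookup and runs the real builtin;
    # without it, the function would call itself forever.
    command cd "$@" && echo "now in: $PWD"
}

cd /tmp
```

Calling cd /tmp now changes directory and prints "now in: /tmp".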

If you need to know the exact source of a command, you can use the whence built-in command. If you type whence -v command_name, you also get information about how a particular command is implemented, for example:

$ whence -v cd
cd is a shell builtin
$ whence -v function
function is a keyword
$ whence -v man
man is /usr/bin/man
$ whence -v ll
ll is an alias for ls -l

Return and exit

The statement return N causes the surrounding function to exit with exit status N. N is actually optional; if omitted, the return status is that of the last command executed.

In scripts that finish without a return statement (i.e., every one we have seen so far), the return value is the exit status (success or failure) of the last executed statement.

exit is similar to return, except that the statement exit N exits the entire script, no matter how deeply you are nested in functions.

For example, if we need a function to implement an enhanced cd, we can write something like:

function cd {
    command cd "$@"    # 'command' bypasses function lookup, avoiding infinite recursion
    rs=$?
    print "$OLDPWD -> $PWD"
    return $rs
}

Here we first save the exit status of cd in the variable rs and then return it as the return value of the function.

# Capture value returned by last command
ret=$?

return $ret

return [n], exit [n]
Return from a function with the given value, or exit the whole script with the given value.
Without a return, the function returns when it reaches the end, and the value is the exit status of the last command it ran.

Example:

die()
{
   # Print an error message and exit with given status
   # call as: die status "message" ["message" ...]
   exitstat=$1; shift
   for i in "$@"; do
      print -R "$i"
   done
   exit $exitstat
}
 
[ -w "$file" ] || \
  die 1 "$file not writable" "check permissions"

Example:

logme()
{
   # Print or not depending on global "$verbosity"
   # Change the verbosity with a single variable.
   # Arg. 1 is the level for this message.
   level=$1; shift
   if [[ $level -le $verbosity ]]; then
      print -R $*
   fi
}

verbosity=2
logme 1 This message will appear
logme 3 This only appears if verbosity is 3 or higher

Common Errors

Two common errors with declaring and using functions are omitting the parentheses when declaring a function, and typing the parentheses when invoking it.

The following example illustrates the first type of error:

lsl { ls -l ; } 

Here, the parentheses are missing after lsl. This is an invalid function definition and will result in an error message similar to the following:

sh: syntax error: '}' unexpected

The following command illustrates the second type of error:

$ lsl()

Here, the function lsl is executed along with the parentheses, (). This will not work because the shell interprets it as a redefinition of the function with the name lsl. Usually such an invocation results in a prompt similar to the following:

>

This is a prompt produced by the shell when it expects you to provide more input. The input it expects is the body of the function lsl.

Arguments

NOTE: within a function, the positional parameters $1, $2, ..., $@ and $# refer to the function's own arguments, not the script's.

For example:

name_error() 
{
echo " $@ contains errors, it must contain only letters"
}

Here is an example of a function that takes arbitrary number of parameters and prints some information about them:

function invocation_inform {
   print "The first argument is:" $1 
   print "List of arguments is:" $@ 
   print "Number of arguments is:" $#
}

The shift command shifts the positional parameters to the left by a given number of positions; parameters shifted past position 1 are discarded. A plain shift performs the following assignments:

1=$2
2=$3
...

for every argument, regardless of how many there are. If you supply a numeric argument to shift, it will shift the arguments that many times over; for example, shift 3 has this effect:

1=$4
2=$5
...

This leads immediately to some code that handles a single option (call it -o ) and arbitrarily many arguments:

if [[ $1 = -o ]]; then
   # process the -o option
   shift
fi
# normal processing of arguments...   

After the if construct, $1, $2, etc., are set to the correct arguments.
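For anything beyond a single option, the getopts built-in is more robust than a hand-rolled shift loop: it handles option arguments and stops at the first non-option. A sketch (the option letters -o and -f here are illustrative, not from the text above):

```shell
#!/bin/sh
# parse_args: handle -o (a flag) and -f FILE (an option that takes
# an argument), then report whatever non-option arguments remain.
parse_args() {
    oflag=0 file=""
    OPTIND=1                      # reset so the function can be called repeatedly
    while getopts "of:" opt; do
        case $opt in
            o) oflag=1 ;;
            f) file=$OPTARG ;;
            *) echo "usage: parse_args [-o] [-f file] args..." >&2; return 2 ;;
        esac
    done
    shift $((OPTIND - 1))         # discard the parsed options
    echo "oflag=$oflag file=$file remaining=$*"
}

parse_args -o -f out.txt alpha beta
```

The call above prints "oflag=1 file=out.txt remaining=alpha beta": after the shift, $1 and $2 are the remaining non-option arguments, just as in the if/shift fragment above.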



Returning STDIO

Messages to stdout may be captured with command substitution (`myfunction`), which provides another way for a function to return information to the calling script. Beware of side effects (and reduced reusability) in functions that perform I/O.
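A quick sketch of this style: the function writes its result to stdout and the caller captures it. Note that inside command substitution the function runs in a subshell, so any variables it sets are lost to the caller.

```shell
#!/bin/sh
# to_upper: write its arguments, upcased, to stdout.
to_upper() {
    echo "$*" | tr '[:lower:]' '[:upper:]'
}

result=$(to_upper hello world)   # command substitution captures stdout
echo "$result"                   # prints: HELLO WORLD
```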

Example of invocations


arg1_echo test

all_arg_echo  test 1 test 2

Checking the loaded functions

The set command displays all the loaded functions available to the shell:

$ set
USER=dave
findit=()
{
if [ $# -lt 1 ]; then
  echo "usage :findit file";
  return 1;
fi;
find / -name $1 -print
}
...

You can use  unset command to remove functions:

unset -f function_name

The idea of  .functions file

Traditionally the .bash_profile file contained aliases, but now they are often separated out and grouped into a dedicated file, for example .aliases. For functions it might also be beneficial to use a separate file called, for example, .functions.

Let's consider a very artificial (bad) example of creating a function that will call find for each argument, so that we can find several different files with one command (a useless exercise). The function will now look like this:

$ pg .functions 

#!/bin/sh
findit()
{
# findit
if [ $# -lt 1 ]; then
  echo "usage :findit file"
  return 1
fi
for member
do
  find / -name $member -print
done 
} 

Now source the file again:

. ./.functions 

Use the set command to see that it is indeed loaded; you will notice that the shell has correctly interpreted the for loop to take in all parameters.

$ set
findit=()
{
if [ $# -lt 1 ]; then
  echo "usage :`basename $0` file";
  return 1;
fi;
for loop in "$@";
do
  find / -name $loop -print;
done
} 
... 

Now execute the changed function, supplying a couple of files to find:

$ findit LPSO.doc passwd
/usr/local/accounts/LPSO.doc
/etc/passwd
... 

By default all variables, except for the special variables associated with function arguments, have global scope. In ksh, bash, and zsh, variables with local scope can be declared using the typeset command. The typeset command is discussed later in this chapter. This command is not supported in the Bourne shell, so it is not possible to have programmer-defined local variables in scripts that rely strictly on the Bourne shell.

Local Variables

Local variables are defined using typeset command (in bash you usually use declare instead):

typeset var1[=val1] ... varN[=valN]

Here, var1 ... varN are variable names and val1 ... valN are values to assign to the variables. The values are optional as the following example illustrates:

typeset fruit1 fruit2=banana
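A short sketch of what local scope buys you (run under ksh or bash; in bash, local or declare inside a function behaves the same way as typeset):

```shell
# typeset inside a function makes the variable local, so the
# assignment dies with the function and the global is untouched.
fruit=apple

set_local() {
    typeset fruit=banana    # local copy, visible only here
    echo "inside:  $fruit"
}

set_local
echo "outside: $fruit"      # still apple
```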

Using whence command

If you need to know the exact source of a command, there is an option to the whence built-in command. whence by itself prints the pathname of a command if the command is a script or executable program, but it only parrots the command's name back if it is anything else. If you type whence -v commandname, you get more complete information, such as:

$ whence -v cd
cd is a shell builtin
$ whence -v function
function is a keyword
$ whence -v man
man is /usr/bin/man
$ whence -v ll
ll is an alias for ls -l

We will refer mainly to scripts throughout the remainder of this book, but unless we note otherwise, you should assume that whatever we say applies equally to functions.

Positional Parameters

A shell assignment statement has the form varname=value, e.g.:

$ greeting="hello world!"
$ print "$greeting"

Some environment variables are predefined by the shell when you log in. Several built-in variables are vital to shell programming, such as HOME, HOSTNAME, PWD, OLDPWD, etc.

There is also a special class on built-in variables called positional parameters. These hold the command-line arguments to scripts when they are invoked. Positional parameters have names 1, 2, 3, etc., meaning that their values are denoted by $1, $2, $3, etc. There is also a positional parameter 0, whose value is the name of the script (i.e., the command typed in to invoke it).

Two special variables contain all of the positional parameters (except positional parameter 0): * and @. The difference between them is subtle but important, and it's apparent only when they are used within double quotes: "$*" expands to a single word, with the parameters joined by the first character of IFS (normally a space), while "$@" expands to separate words, preserving the original argument boundaries.

The variable $# holds the number of positional parameters (as a character string). All of these variables are "read-only," meaning that you can't assign new values to them within scripts.

For example, assume that you create a file testme that contains the following simple shell script:

print "testme: $@"
print "$0: Arg 1 is '$1' and Arg 2 is '$2'"
print "$# arguments"

Then if you type testme bash 3.2, you will see the following output:

testme: bash 3.2
testme: Arg 1 is 'bash' and Arg 2 is '3.2'
2 arguments

In this case, $3, $4, etc., are all unset, which means that the shell will substitute the empty (or null) string for them (unless the option nounset is turned on).

Shell functions use positional parameters and special variables like * and # in exactly the same way as shell scripts do. If you wanted to define testme as a function, you could put the following in your .profile or environment file:

function testme {
    print "testme: $*"
    print "$0: $1 and $2"
    print "$# arguments"
}

You will get the same result if you type  testme bash 3.2

Typically, several shell functions are defined within a single shell script. Therefore each function will need to handle its own arguments, which in turn means that each function needs to keep track of positional parameters separately. Sure enough, each function has its own copies of these variables (even though functions don't run in their own subshells, as scripts do); we say that such variables are local to the function.

However, other variables defined within functions are not local by default (they are global), meaning that their values are known throughout the entire shell script.

Note: typeset  can be used for making variables local to functions.

Let's take a closer look at "$@" and "$*". These variables are two of the shell's greatest idiosyncrasies, so we'll discuss some of the most common sources of confusion.
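The difference shows up as soon as an argument contains a space. A small demonstration (set establishes the positional parameters, and the helper function simply counts what it receives):

```shell
#!/bin/sh
# count_args: report how many arguments it actually received.
count_args() { echo $#; }

set -- "one two" three      # two positional parameters, one containing a space

count_args "$@"   # "$@" keeps argument boundaries:        prints 2
count_args "$*"   # "$*" joins everything into one word:   prints 1
count_args $*     # unquoted: re-split on whitespace:      prints 3
```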

Note on Variable Syntax

Before we show the many things you can do with shell variables, we have to make a confession: the syntax of $varname for taking the value of a variable is not quite accurate. Actually, it's the simple form of the more general syntax, which is ${varname}.

Why two syntaxes? For one thing, the more general syntax is necessary if your code refers to more than nine positional parameters: you must use ${10} for the tenth instead of $10.

The braces are also needed when the variable name is followed by a character that could be taken as part of the name:

PS1="${LOGNAME}_ "

we would get the desired underscore after the value of LOGNAME; without the braces, the shell would look for a variable named LOGNAME_. It is safe to omit the curly braces ({}) if the variable name is followed by a character that isn't a letter, digit, or underscore.



Old News ;-)

[Jul 08, 2020] Exit Codes With Special Meanings

Jul 08, 2020 | www.tldp.org

Appendix E. Exit Codes With Special Meanings

Table E-1. Reserved Exit Codes

  1      Catchall for general errors (e.g. let "var1 = 1/0") -- miscellaneous
         errors, such as "divide by zero" and other impermissible operations
  2      Misuse of shell builtins, according to Bash documentation (e.g.
         empty_function() {}) -- missing keyword or command, or permission
         problem (also the diff return code on a failed binary file comparison)
  126    Command invoked cannot execute (e.g. /dev/null) -- permission problem
         or command is not an executable
  127    "command not found" (e.g. illegal_command) -- possible problem with
         $PATH or a typo
  128    Invalid argument to exit (e.g. exit 3.14159) -- exit takes only
         integer args in the range 0 - 255 (see first footnote)
  128+n  Fatal error signal "n" (e.g. kill -9 $PPID of script) -- $? returns
         137 (128 + 9)
  130    Script terminated by Control-C -- Control-C is fatal error signal 2
         (130 = 128 + 2, see above)
  255*   Exit status out of range (e.g. exit -1) -- exit takes only integer
         args in the range 0 - 255

According to the above table, exit codes 1 - 2, 126 - 165, and 255 [1] have special meanings, and should therefore be avoided for user-specified exit parameters. Ending a script with exit 127 would certainly cause confusion when troubleshooting (is the error code a "command not found" or a user-defined one?). However, many scripts use an exit 1 as a general bailout-upon-error. Since exit code 1 signifies so many possible errors, it is not particularly useful in debugging.

There has been an attempt to systematize exit status numbers (see /usr/include/sysexits.h ), but this is intended for C and C++ programmers. A similar standard for scripting might be appropriate. The author of this document proposes restricting user-defined exit codes to the range 64 - 113 (in addition to 0 , for success), to conform with the C/C++ standard. This would allot 50 valid codes, and make troubleshooting scripts more straightforward. [2] All user-defined exit codes in the accompanying examples to this document conform to this standard, except where overriding circumstances exist, as in Example 9-2 .

Issuing a $? from the command-line after a shell script exits gives results consistent with the table above only from the Bash or sh prompt. Running the C-shell or tcsh may give different values in some cases.
Notes
[1] Out of range exit values can result in unexpected exit codes. An exit value greater than 255 returns an exit code modulo 256 . For example, exit 3809 gives an exit code of 225 (3809 % 256 = 225).
[2] An update of /usr/include/sysexits.h allocates previously unused exit codes from 64 - 78 . It may be anticipated that the range of unallotted exit codes will be further restricted in the future. The author of this document will not do fixups on the scripting examples to conform to the changing standard. This should not cause any problems, since there is no overlap or conflict in usage of exit codes between compiled C/C++ binaries and shell scripts.
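The wrap-around described in note [1] is easy to check from the command line:

```shell
# An exit value above 255 wraps modulo 256.
sh -c 'exit 3809'
echo $?     # 3809 % 256 = 225

sh -c 'exit 300'
echo $?     # 300 % 256 = 44
```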

[Jul 08, 2020] Exit Codes

From bash manual: The exit status of an executed command is the value returned by the waitpid system call or equivalent function. Exit statuses fall between 0 and 255, though, as explained below, the shell may use values above 125 specially. Exit statuses from shell builtins and compound commands are also limited to this range. Under certain circumstances, the shell will use special values to indicate specific failure modes.
For the shell’s purposes, a command which exits with a zero exit status has succeeded. A non-zero exit status indicates failure. This seemingly counter-intuitive scheme is used so there is one well-defined way to indicate success and a variety of ways to indicate various failure modes. When a command terminates on a fatal signal whose number is N, Bash uses the value 128+N as the exit status.
If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not executable, the return status is 126.
If a command fails because of an error during expansion or redirection, the exit status is greater than zero.
The exit status is used by the Bash conditional commands (see Conditional Constructs) and some of the list constructs (see Lists).
All of the Bash builtins return an exit status of zero if they succeed and a non-zero status on failure, so they may be used by the conditional and list constructs. All builtins return an exit status of 2 to indicate incorrect usage, generally invalid options or missing arguments.
Jul 08, 2020 | zwischenzugs.com

Not everyone knows that every time you run a shell command in bash, an 'exit code' is returned to bash.

Generally, if a command 'succeeds' you get an exit code of 0. If it doesn't succeed, you get a non-zero code.

1 is a 'general error', and other codes can give you more information (e.g. which signal killed the process). 255 is the upper limit of the range.

grep joeuser /etc/passwd # in case of success returns 0, otherwise 1

or

grep not_there /dev/null
echo $?

$? is a special bash variable that's set to the exit code of each command after it runs.

Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match or not return 0 ?

[Jul 08, 2020] Returning Values from Bash Functions by Mitch Frazier

Sep 11, 2009 | www.linuxjournal.com

Bash functions, unlike functions in most programming languages, do not allow you to return a value to the caller. When a bash function ends, its return value is its status: zero for success, non-zero for failure. To return values, you can set a global variable with the result, or use command substitution, or you can pass in the name of a variable to use as the result variable. The examples below describe these different mechanisms.

Although bash has a return statement, the only thing you can specify with it is the function's status, which is a numeric value like the value specified in an exit statement. The status value is stored in the $? variable. If a function does not contain a return statement, its status is set based on the status of the last statement executed in the function. To actually return arbitrary values to the caller you must use other mechanisms.

The simplest way to return a value from a bash function is to just set a global variable to the result. Since all variables in bash are global by default this is easy:

function myfunc()
{
    myresult='some value'
}

myfunc
echo $myresult

The code above sets the global variable myresult to the function result. Reasonably simple, but as we all know, using global variables, particularly in large programs, can lead to difficult to find bugs.

A better approach is to use local variables in your functions. The problem then becomes how do you get the result to the caller. One mechanism is to use command substitution:

function myfunc()
{
    local  myresult='some value'
    echo "$myresult"
}

result=$(myfunc)   # or result=`myfunc`
echo $result

Here the result is output to the stdout and the caller uses command substitution to capture the value in a variable. The variable can then be used as needed.

The other way to return a value is to write your function so that it accepts a variable name as part of its command line and then set that variable to the result of the function:

function myfunc()
{
    local  __resultvar=$1
    local  myresult='some value'
    eval $__resultvar="'$myresult'"
}

myfunc result
echo $result

Since we have the name of the variable to set stored in a variable, we can't set the variable directly, we have to use eval to actually do the setting. The eval statement basically tells bash to interpret the line twice, the first interpretation above results in the string result='some value' which is then interpreted once more and ends up setting the caller's variable.

When you store the name of the variable passed on the command line, make sure you store it in a local variable with a name that won't be (unlikely to be) used by the caller (which is why I used __resultvar rather than just resultvar ). If you don't, and the caller happens to choose the same name for their result variable as you use for storing the name, the result variable will not get set. For example, the following does not work:

function myfunc()
{
    local  result=$1
    local  myresult='some value'
    eval $result="'$myresult'"
}

myfunc result
echo $result

The reason it doesn't work is because when eval does the second interpretation and evaluates result='some value' , result is now a local variable in the function, and so it gets set rather than setting the caller's result variable.

For more flexibility, you may want to write your functions so that they combine both result variables and command substitution:

function myfunc()
{
    local  __resultvar=$1
    local  myresult='some value'
    if [[ "$__resultvar" ]]; then
        eval $__resultvar="'$myresult'"
    else
        echo "$myresult"
    fi
}

myfunc result
echo $result
result2=$(myfunc)
echo $result2

Here, if no variable name is passed to the function, the value is output to the standard output.

Mitch Frazier is an embedded systems programmer at Emerson Electric Co. Mitch has been a contributor to and a friend of Linux Journal since the early 2000s.


David Krmpotic (6 years ago, edited):

This is the best way: http://stackoverflow.com/a/... return by reference:

function pass_back_a_string() {
eval "$1='foo bar rab oof'"
}

return_var=''
pass_back_a_string return_var
echo $return_var

lxw, replying to David Krmpotic (6 years ago):

I agree. After reading this passage, the same idea with yours occurred to me.

phil (6 years ago):

Since this page is a top hit on google:

The only real issue I see with returning via echo is that forking the process means no longer allowing it access to set 'global' variables. They are still global in the sense that you can retrieve them and set them within the new forked process, but as soon as that process is done, you will not see any of those changes.

e.g.
#!/bin/bash

myGlobal="very global"

call1() {
myGlobal="not so global"
echo "${myGlobal}"
}

tmp=$(call1) # keep in mind '$()' starts a new process

echo "${tmp}" # prints "not so global"
echo "${myGlobal}" # prints "very global"

lxw (6 years ago):

Hello everyone,

In the 3rd method, I don't think the local variable __resultvar is necessary to use. Any problems with the following code?

function myfunc()
{
local myresult='some value'
eval "$1"="'$myresult'"
}

myfunc result
echo $result

code_monk (6 years ago, edited):

i would caution against returning integers with "return $int". My code was working fine until it came across a -2 (negative two), and treated it as if it were 254, which tells me that bash functions return 8-bit unsigned ints that are not protected from overflow

Emil Vikström, replying to code_monk (5 years ago):

A function behaves as any other Bash command, and indeed POSIX processes. That is, they can write to stdout, read from stdin and have a return code. The return code is, as you have already noticed, a value between 0 and 255. By convention 0 means success while any other return code means failure.

This is also why Bash "if" statements treat 0 as success and non-zero as failure (most other programming languages do the opposite).
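The wrap-around code_monk describes can be reproduced directly (a sketch; ksh and bash behave the same way here):

```shell
# A function's return status is an 8-bit value: anything outside
# 0-255 wraps modulo 256, which is how -2 showed up as 254.
wrap() {
    return 300
}

wrap
echo $?    # prints 44 (300 % 256 = 44)
```

If a function needs to hand back a real integer, including negatives, use stdout and command substitution instead of the return status.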

[Jul 02, 2020] Import functions and variables into Bash with the source command by Seth Kenlon

Jun 12, 2020 | opensource.com
Source is like a Python import or a Java include. Learn it to expand your Bash prowess.

When you log into a Linux shell, you inherit a specific working environment. An environment , in the context of a shell, means that there are certain variables already set for you, which ensures your commands work as intended. For instance, the PATH environment variable defines where your shell looks for commands. Without it, nearly everything you try to do in Bash would fail with a command not found error. Your environment, while mostly invisible to you as you go about your everyday tasks, is vitally important.

There are many ways to affect your shell environment. You can make modifications in configuration files, such as ~/.bashrc and ~/.profile , you can run services at startup, and you can create your own custom commands or script your own Bash functions .

Add to your environment with source

Bash (along with some other shells) has a built-in command called source . And here's where it can get confusing: source performs the same function as the command . (yes, that's but a single dot), and it's not the same source as the Tcl command (which may come up on your screen if you type man source ). The built-in source command isn't in your PATH at all, in fact. It's a command that comes included as a part of Bash, and to get further information about it, you can type help source .

The . command is POSIX -compliant. The source command is not defined by POSIX but is interchangeable with the . command.

According to Bash help, the source command executes a file in your current shell. The clause "in your current shell" is significant, because it means it doesn't launch a sub-shell; therefore, whatever you execute with source happens within and affects your current environment.

Before exploring how source can affect your environment, try source on a test file to ensure that it executes code as expected. First, create a simple Bash script and save it as a file called hello.sh :

#!/usr/bin/env bash
echo "hello world"

Using source , you can run this script even without setting the executable bit:

$ source hello.sh
hello world

You can also use the built-in . command for the same results:

$ . hello.sh
hello world

The source and . commands successfully execute the contents of the test file.

Set variables and import functions

You can use source to "import" a file into your shell environment, just as you might use the include keyword in C or C++ to reference a library or the import keyword in Python to bring in a module. This is one of the most common uses for source , and it's a common default inclusion in .bashrc files to source a file called .bash_aliases so that any custom aliases you define get imported into your environment when you log in.

Here's an example of importing a Bash function. First, create a function in a file called myfunctions . This prints your public IP address and your local IP address:

function myip () {
    curl http://icanhazip.com

    ip addr | grep "inet" | \
    cut -d "/" -f 1 | \
    grep -v "127\.0" | \
    grep -v "::1" | \
    awk '{$1=$1};1'
}

Import the function into your shell:

$ source myfunctions

Test your new function:

$ myip
93.184.216.34
inet 192.168.0.23
inet6 fbd4:e85f:49c:2121:ce12:ef79:0e77:59d1
inet 10.8.42.38

Search for source

When you use source in Bash, it searches your current directory for the file you reference. This doesn't happen in all shells, so check your documentation if you're not using Bash.

If Bash can't find the file to execute, it searches your PATH instead. Again, this isn't the default for all shells, so check your documentation if you're not using Bash.

These are both nice convenience features in Bash. This behavior is surprisingly powerful because it allows you to store common functions in a centralized location on your drive and then treat your environment like an integrated development environment (IDE). You don't have to worry about where your functions are stored, because you know they're in your local equivalent of /usr/include , so no matter where you are when you source them, Bash finds them.

For instance, you could create a directory called ~/.local/include as a storage area for common functions and then put this block of code into your .bashrc file:

for i in $HOME/.local/include/*; do
   source $i
done

This "imports" any file containing custom functions in ~/.local/include into your shell environment.

Bash is the only shell that searches both the current directory and your PATH when you use either the source or the . command.

Using source for open source

Using source or . to execute files can be a convenient way to affect your environment while keeping your alterations modular. The next time you're thinking of copying and pasting big blocks of code into your .bashrc file, consider placing related functions or groups of aliases into dedicated files, and then use source to ingest them.


[Sep 10, 2019] Handling positional parameters

Jul 25, 2017 | wiki.bash-hackers.org

Intro

The day will come when you want to give arguments to your scripts. These arguments are known as positional parameters . Some relevant special parameters are described below:

Parameter(s) Description
$0 the first positional parameter, equivalent to argv[0] in C, see the first argument
$FUNCNAME the function name ( attention : inside a function, $0 is still the $0 of the shell, not the function name)
$1 ... $9 the argument list elements from 1 to 9
${10} ... ${N} the argument list elements beyond 9 (note the parameter expansion syntax!)
$* all positional parameters except $0 , see mass usage
$@ all positional parameters except $0 , see mass usage
$# the number of arguments, not counting $0

These positional parameters reflect exactly what was given to the script when it was called.

Option-switch parsing (e.g. -h for displaying help) is not performed at this point.

See also the dictionary entry for "parameter" .

The first argument

The very first argument you can access is referenced as $0 . It is usually set to the script's name exactly as called, and it's set on shell initialization:

Testscript - it just echos $0 :

#!/bin/bash
echo "$0"
You see, $0 is always set to the name the script is called with ( > is the prompt):
> ./testscript 
./testscript
> /usr/bin/testscript
/usr/bin/testscript

However, this isn't true for login shells:

> echo "$0"
-bash

In other words, $0 is not a positional parameter, it's a special parameter independent of the positional parameter list. It can be set to anything. In the ideal case it's the pathname of the script, but since this gets set on invocation, the invoking program can easily influence it (the login program does that for login shells, by prefixing a dash, for example).
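A quick way to see this influence, assuming bash is available: with bash -c, the first operand after the command string is assigned to $0, not to $1:

```shell
# bash -c assigns its first operand to $0; the rest become $1, $2, ...
bash -c 'echo "$0"' customname arg1
# customname
```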

Inside a function, $0 still behaves as described above. To get the function name, use $FUNCNAME .

Shifting

The builtin command shift is used to change the positional parameter values:

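As a quick sketch of shift (the argument values here are arbitrary):

```shell
set -- one two three four five   # pretend these are the script arguments

shift          # discards $1; "two" is now $1
echo "$1 $#"   # two 4

shift 2        # discards two more; "four" is now $1
echo "$1 $#"   # four 2
```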
The command can take a number as argument: the number of positions to shift. For example, shift 4 shifts $5 to $1 .

Using them

Enough theory, you want to access your script-arguments. Well, here we go.

One by one

One way is to access specific parameters:

#!/bin/bash
echo "Total number of arguments: $#"
echo "Argument 1: $1"
echo "Argument 2: $2"
echo "Argument 3: $3"
echo "Argument 4: $4"
echo "Argument 5: $5"

While useful in another situation, this way lacks flexibility. The maximum number of arguments is a fixed value - which is a bad idea if you write a script that takes many filenames as arguments.

⇒ forget that one.

Loops

There are several ways to loop through the positional parameters.


You can code a C-style for-loop using $# as the end value. On every iteration, the shift -command is used to shift the argument list:

numargs=$#
for ((i=1 ; i <= numargs ; i++))
do
    echo "$1"
    shift
done

Not very stylish, but usable. The numargs variable is used to store the initial value of $# because the shift command will change it as the script runs.


Another way to iterate one argument at a time is the for loop without a given wordlist. The loop uses the positional parameters as a wordlist:

for arg
do
    echo "$arg"
done
Advantage: The positional parameters will be preserved

The next method is similar to the first example (the for loop), but it doesn't test for reaching $# . It shifts and checks if $1 still expands to something, using the test command :

while [ "$1" ]
do
    echo "$1"
    shift
done

Looks nice, but has the disadvantage of stopping when $1 is empty (null-string). Let's modify it to run as long as $1 is defined (but may be null), using parameter expansion for an alternate value :

while [ "${1+defined}" ]; do
  echo "$1"
  shift
done
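A quick illustration of the difference between the two loops; the empty second argument is contrived:

```shell
set -- a "" b
while [ "$1" ]; do
  echo "got: $1"
  shift
done
# got: a   (stops at the empty string)

set -- a "" b
while [ "${1+defined}" ]; do
  echo "got: <$1>"
  shift
done
# got: <a>
# got: <>
# got: <b>
```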

Getopts

There is a small tutorial dedicated to ''getopts'' ( under construction ).

Mass usage

All Positional Parameters

Sometimes it's necessary to just "relay" or "pass" given arguments to another program. It's very inefficient to do that in one of these loops, as you will most likely destroy integrity (spaces!).

The shell developers created $* and $@ for this purpose.

As overview:

Syntax Effective result
$* $1 $2 $3 ... ${N}
$@ $1 $2 $3 ... ${N}
"$*" "$1c$2c$3c...c${N}"
"$@" "$1" "$2" "$3" ... "${N}"

Without being quoted (double quotes), both have the same effect: All positional parameters from $1 to the last one used are expanded without any special handling.

When the $* special parameter is double quoted, it expands to the equivalent of: "$1c$2c$3c$4c...c$N" , where 'c' is the first character of IFS .

But when the $@ special parameter is used inside double quotes, it expands to the equivalent of

"$1" "$2" "$3" "$4" .. "$N"

which reflects all positional parameters as they were set initially and passed to the script or function. If you want to re-use your positional parameters to call another program (for example in a wrapper-script), then this is the choice for you, use double quoted "$@" .
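A small demo of the contrast, using a custom IFS so the join character is visible (the function name demo is just for illustration):

```shell
demo() {
  local IFS='-'
  echo "joined: $*"          # "$*" joins the args with the first char of IFS
  printf 'word: %s\n' "$@"   # "$@" keeps each argument intact
}

demo "a b" c
# joined: a b-c
# word: a b
# word: c
```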

Well, let's just say: You almost always want a quoted "$@" !

Range Of Positional Parameters

Another way to mass expand the positional parameters is similar to what is possible for a range of characters using substring expansion on normal parameters and the mass expansion range of arrays .

${@:START:COUNT}

${*:START:COUNT}

"${@:START:COUNT}"

"${*:START:COUNT}"

The rules for using @ or * and quoting are the same as above. This will expand COUNT number of positional parameters beginning at START . COUNT can be omitted ( ${@:START} ), in which case, all positional parameters beginning at START are expanded.

If START is negative, the positional parameters are numbered in reverse starting with the last one.

COUNT may not be negative, i.e. the element count may not be decremented.

Example: START at the last positional parameter:

echo "${@: -1}"

Attention : As of Bash 4, a START of 0 includes the special parameter $0 , i.e. the shell name or whatever $0 is set to, when the positional parameters are in use. A START of 1 begins at $1 . In Bash 3 and older, both 0 and 1 began at $1 .

Setting Positional Parameters

Setting positional parameters with command line arguments is not the only way to set them. The builtin command set may be used to "artificially" change the positional parameters from inside the script or function:

set "This is" my new "set of" positional parameters

# RESULTS IN
# $1: This is
# $2: my
# $3: new
# $4: set of
# $5: positional
# $6: parameters

It's wise to signal "end of options" when setting positional parameters this way. If not, the dashes might be interpreted as an option switch by set itself:

# both ways work, but behave differently. See the article about the set command!
set -- ...
set - ...
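A minimal sketch of why the end-of-options marker matters (the values are made up):

```shell
# Without --, set would interpret the leading -v as one of its own switches.
set -- -v --dry-run file.txt
echo "$1"   # -v
echo "$#"   # 3
```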

Alternatively, this will also preserve any verbose (-v) or tracing (-x) flags, which may otherwise be reset by set :

set -$- ...

FIXME continue

Production examples

Using a while loop

To make your program accept options as standard command syntax:

COMMAND [options] <params> # Like 'cat -A file.txt'

See simple option parsing code below. It's not that flexible. It doesn't auto-interpret combined options (-fu USER) but it works and is a good rudimentary way to parse your arguments.

#!/bin/sh
# Keeping options in alphabetical order makes it easy to add more.

while :
do
    case "$1" in
      -f | --file)
          file="$2"   # You may want to check validity of $2
          shift 2
          ;;
      -h | --help)
          display_help  # Call your function
          # no shifting needed here, we're done.
          exit 0
          ;;
      -u | --user)
          username="$2" # You may want to check validity of $2
          shift 2
          ;;
      -v | --verbose)
          #  It's better to assign a string, than a number like "verbose=1"
          #  because if you're debugging the script with "bash -x" code like this:
          #
          #    if [ "$verbose" ] ...
          #
          #  You will see:
          #
          #    if [ "verbose" ] ...
          #
          #  Instead of cryptic
          #
          #    if [ "1" ] ...
          #
          verbose="verbose"
          shift
          ;;
      --) # End of all options
          shift
          break
          ;;
      -*)
          echo "Error: Unknown option: $1" >&2
          exit 1
          ;;
      *)  # No more options
          break
          ;;
    esac
done

# End of file

Filter unwanted options with a wrapper script

This simple wrapper enables filtering unwanted options (here: -a and --all for ls ) out of the command line. It reads the positional parameters and builds a filtered array from them, then calls ls with the new option set. It also respects -- as "end of options" for ls and doesn't change anything after it:

#!/bin/bash

# simple ls(1) wrapper that doesn't allow the -a option

options=()  # the buffer array for the parameters
eoo=0       # end of options reached

while [[ $1 ]]
do
    if ! ((eoo)); then
        case "$1" in
          -a)
              shift
              ;;
          --all)
              shift
              ;;
          -[^-]*a*|-a?*)
              options+=("${1//a}")
              shift
              ;;
          --)
              eoo=1
              options+=("$1")
              shift
              ;;
          *)
              options+=("$1")
              shift
              ;;
        esac
    else
        options+=("$1")

        # Another (worse) way of doing the same thing:
        # options=("${options[@]}" "$1")
        shift
    fi
done

/bin/ls "${options[@]}"

Using getopts

There is a small tutorial dedicated to ''getopts'' ( under construction ).

See also

Discussion

2010/04/14 14:20

> The shell-developers invented $* and $@ for this purpose.
> Without being quoted (double-quoted), both have the same effect: All positional parameters from $1 to the last used one are expanded, separated by the first character of IFS (represented by "c" here, but usually a space):
> $1c$2c$3c$4c........$N

Without double quotes, $* and $@ expand the positional parameters separated by only a space, not by IFS.

#!/bin/bash

export IFS='-'

echo -e $*
echo -e $@
$./test "This is" 2 3
This is 2 3
This is 2 3

(Edited: Inserted code tags)

2010/04/14 17:12 Thank you very much for this finding. I know how $* works, thus I can't understand why I described it that wrong. I guess it was in some late night session.

Thanks again.

2011/02/18 16:11

#!/bin/bash

OLDIFS="$IFS"
IFS='-'   # export IFS='-'

# echo -e $*
# echo -e $@
# should be:
echo -e "$*"
echo -e "$@"

IFS="$OLDIFS"

2011/02/18 16:14 #should be echo -e "$*"
Dave Carlton , 2010/05/18 15:23 I would suggest using a different prompt, as the $ is confusing to newbies. Otherwise, an excellent treatise on the use of positional parameters.
2010/05/24 10:48 Thanks for the suggestion, I use "> " here now, and I'll change it in whatever text I edit in future (whole wiki). Let's see if "> " is okay.
2012/04/20 10:32 Here's yet another non-getopts way.

http://bsdpants.blogspot.de/2007/02/option-ize-your-shell-scripts.html

2012/07/16 14:48 Hi there!

What if I use "$@" in subsequent function calls, but arguments are strings?

I mean, having:

#!/bin/bash
echo "$@"
echo n: $#

If you use it

mypc$ script arg1 arg2 "asd asd" arg4
arg1 arg2 asd asd arg4
n: 4

But having

#!/bin/bash
myfunc()
{
  echo "$@"
  echo n: $#
}
ech "$@"
echo n: $#
myfunc "$@"

you get:

mypc$ myscript arg1 arg2 "asd asd" arg4
arg1 arg2 asd asd arg4
4
arg1 arg2 asd asd arg4
5

As you can see, there is no way for the function to know that a parameter is a quoted string and not a space-separated list of arguments.

Any idea of how to solve it? I've tested calling functions and doing expansion in almost all ways with no results.

2012/08/12 09:11 I don't know why it fails for you. It should work if you use "$@" , of course.

See the example; I used your second script with:

$ ./args1 a b c "d e" f
a b c d e f
n: 5
a b c d e f
n: 5
2015/06/10 10:00 Thanks a lot for this tutorial. Especially the first example is very helpful.

[Aug 29, 2019] Parsing bash script options with getopts by Kevin Sookocheff

Mar 30, 2018 | sookocheff.com

Parsing bash script options with getopts

Posted on January 4, 2015 | 5 minutes | Kevin Sookocheff

A common task in shell scripting is to parse command line arguments to your script. Bash provides the getopts built-in function to do just that. This tutorial explains how to use the getopts built-in function to parse arguments and options to a bash script.

The getopts function takes three parameters. The first is a specification of which options are valid, listed as a sequence of letters. For example, the string 'ht' signifies that the options -h and -t are valid.

The second argument to getopts is a variable that will be populated with the option or argument to be processed next. In the following loop, opt will hold the value of the current option that has been parsed by getopts .

while getopts ":ht" opt; do
  case ${opt} in
    h ) # process option h
      ;;
    t ) # process option t
      ;;
    \? ) echo "Usage: cmd [-h] [-t]"
      ;;
  esac
done

This example shows a few additional features of getopts . First, if an invalid option is provided, the option variable is assigned the value ? . You can catch this case and provide an appropriate usage message to the user. Second, this behaviour is only true when you prepend the list of valid options with : to disable the default error handling of invalid options. It is recommended to always disable the default error handling in your scripts.

The third argument to getopts is the list of arguments and options to be processed. When not provided, this defaults to the arguments and options provided to the application ( $@ ). You can provide this third argument to use getopts to parse any list of arguments and options you provide.
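For example, getopts can walk a list you supply yourself (the option letter and values here are made up):

```shell
OPTIND=1   # reset in case getopts has already run in this shell
# Parse the explicit list "-n Alice -n Bob" instead of "$@"
while getopts ":n:" opt -n Alice -n Bob; do
  case $opt in
    n) echo "name: $OPTARG" ;;
  esac
done
# name: Alice
# name: Bob
```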

Shifting processed options

The variable OPTIND holds the index of the next argument to be processed by getopts . It is common practice to call the shift command at the end of your processing loop to remove options that have already been handled from $@ .

shift $((OPTIND -1))
Parsing options with arguments

Options that themselves have arguments are signified with a : . The argument to an option is placed in the variable OPTARG . In the following example, the option t takes an argument. When the argument is provided, we copy its value to the variable target . If no argument is provided getopts will set opt to : . We can recognize this error condition by catching the : case and printing an appropriate error message.

while getopts ":t:" opt; do
  case ${opt} in
    t )
      target=$OPTARG
      ;;
    \? )
      echo "Invalid option: $OPTARG" 1>&2
      ;;
    : )
      echo "Invalid option: $OPTARG requires an argument" 1>&2
      ;;
  esac
done
shift $((OPTIND -1))
An extended example – parsing nested arguments and options

Let's walk through an extended example of processing a command that takes options, has a sub-command, and whose sub-command takes an additional option that has an argument. This is a mouthful so let's break it down using an example. Let's say we are writing our own version of the pip command . In this version you can call pip with the -h option to display a help message.

> pip -h
Usage:
    pip -h                      Display this help message.
    pip install                 Install a Python package.

We can use getopts to parse the -h option with the following while loop. In it we catch invalid options with \? and shift all arguments that have been processed with shift $((OPTIND -1)) .

while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install                 Install a Python package."
      exit 0
      ;;
    \? )
      echo "Invalid Option: -$OPTARG" 1>&2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))

Now let's add the sub-command install to our script. install takes as an argument the Python package to install.

> pip install urllib3

install also takes an option, -t . -t takes as an argument the location to install the package to relative to the current directory.

> pip install urllib3 -t ./src/lib

To process this line we must find the sub-command to execute. This value is the first argument to our script.

subcommand=$1
shift # Remove the sub-command name from the argument list

Now we can process the sub-command install . In our example, the option -t is actually an option that follows the package argument so we begin by removing install from the argument list and processing the remainder of the line.

case "$subcommand" in
  install)
    package=$1
    shift # Remove the package name from the argument list
    ;;
esac

After shifting the argument list we can process the remaining arguments as if they are of the form package -t src/lib . The -t option takes an argument itself. This argument will be stored in the variable OPTARG and we save it to the variable target for further work.

case "$subcommand" in
  install)
    package=$1
    shift # Remove the package name from the argument list

  while getopts ":t:" opt; do
    case ${opt} in
      t )
        target=$OPTARG
        ;;
      \? )
        echo "Invalid Option: -$OPTARG" 1>&2
        exit 1
        ;;
      : )
        echo "Invalid Option: -$OPTARG requires an argument" 1>&2
        exit 1
        ;;
    esac
  done
  shift $((OPTIND -1))
  ;;
esac

Putting this all together, we end up with the following script that parses arguments to our version of pip and its sub-command install .

package=""  # Default to empty package
target=""  # Default to empty target

# Parse options to the `pip` command
while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install <package>       Install <package>."
      exit 0
      ;;
   \? )
     echo "Invalid Option: -$OPTARG" 1>&2
     exit 1
     ;;
  esac
done
shift $((OPTIND -1))

subcommand=$1; shift  # Remove the sub-command name from the argument list
case "$subcommand" in
  # Parse options to the install sub command
  install)
    package=$1; shift  # Remove the package name from the argument list

    # Process package options
    while getopts ":t:" opt; do
      case ${opt} in
        t )
          target=$OPTARG
          ;;
        \? )
          echo "Invalid Option: -$OPTARG" 1>&2
          exit 1
          ;;
        : )
          echo "Invalid Option: -$OPTARG requires an argument" 1>&2
          exit 1
          ;;
      esac
    done
    shift $((OPTIND -1))
    ;;
esac

After processing the above sequence of commands, the variable package will hold the package to install and the variable target will hold the target to install the package to. You can use this as a template for processing any set of arguments and options to your scripts.


[Aug 29, 2019] How do I parse command line arguments in Bash - Stack Overflow

Jul 10, 2017 | stackoverflow.com

Livven, Jul 10, 2017 at 8:11

Update: It's been more than 5 years since I started this answer. Thank you for LOTS of great edits/comments/suggestions. In order to save maintenance time, I've modified the code block to be 100% copy-paste ready. Please do not post comments like "What if you changed X to Y". Instead, copy-paste the code block, see the output, make the change, rerun the script, and comment "I changed X to Y and " I don't have time to test your ideas and tell you if they work.
Method #1: Using bash without getopt[s]

Two common ways to pass key-value-pair arguments are:

Bash Space-Separated (e.g., --option argument ) (without getopt[s])

Usage demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

cat >/tmp/demo-space-separated.sh <<'EOF'
#!/bin/bash

POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"

case $key in
    -e|--extension)
    EXTENSION="$2"
    shift # past argument
    shift # past value
    ;;
    -s|--searchpath)
    SEARCHPATH="$2"
    shift # past argument
    shift # past value
    ;;
    -l|--lib)
    LIBPATH="$2"
    shift # past argument
    shift # past value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument
    ;;
    *)    # unknown option
    POSITIONAL+=("$1") # save it in an array for later
    shift # past argument
    ;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters

echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi
EOF

chmod +x /tmp/demo-space-separated.sh

/tmp/demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

output from copy-pasting the block above:

FILE EXTENSION  = conf
SEARCH PATH     = /etc
LIBRARY PATH    = /usr/lib
DEFAULT         =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34    example.com
Bash Equals-Separated (e.g., --option=argument ) (without getopt[s])

Usage demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

cat >/tmp/demo-equals-separated.sh <<'EOF'
#!/bin/bash

for i in "$@"
do
case $i in
    -e=*|--extension=*)
    EXTENSION="${i#*=}"
    shift # past argument=value
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    shift # past argument=value
    ;;
    -l=*|--lib=*)
    LIBPATH="${i#*=}"
    shift # past argument=value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument with no value
    ;;
    *)
          # unknown option
    ;;
esac
done
echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 $1
fi
EOF

chmod +x /tmp/demo-equals-separated.sh

/tmp/demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

output from copy-pasting the block above:

FILE EXTENSION  = conf
SEARCH PATH     = /etc
LIBRARY PATH    = /usr/lib
DEFAULT         =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34    example.com

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` , which calls a needless subprocess, or `echo "$i" | sed 's/[^=]*=//'` , which calls two needless subprocesses.
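A one-line illustration of that expansion:

```shell
i="--extension=conf"
echo "${i#*=}"   # conf — removes the shortest prefix matching '*='
```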

Method #2: Using bash with getopt[s]

from: http://mywiki.wooledge.org/BashFAQ/035#getopts

getopt(1) limitations (older, relatively-recent getopt versions):

  1. it cannot handle arguments that are empty strings
  2. it cannot handle arguments with embedded whitespace

More recent getopt versions don't have these limitations.

Additionally, the POSIX shell (and others) offer getopts which doesn't have these limitations. I've included a simplistic getopts example.

Usage demo-getopts.sh -vf /etc/hosts foo bar

cat >/tmp/demo-getopts.sh <<'EOF'
#!/bin/sh

# A POSIX variable
OPTIND=1         # Reset in case getopts has been used previously in the shell.

# Initialize our own variables:
output_file=""
verbose=0

while getopts "h?vf:" opt; do
    case "$opt" in
    h|\?)
        show_help
        exit 0
        ;;
    v)  verbose=1
        ;;
    f)  output_file=$OPTARG
        ;;
    esac
done

shift $((OPTIND-1))

[ "${1:-}" = "--" ] && shift

echo "verbose=$verbose, output_file='$output_file', Leftovers: $@"
EOF

chmod +x /tmp/demo-getopts.sh

/tmp/demo-getopts.sh -vf /etc/hosts foo bar

output from copy-pasting the block above:

verbose=1, output_file='/etc/hosts', Leftovers: foo bar

The advantages of getopts are:

  1. It's more portable, and will work in other shells like dash .
  2. It can handle multiple single options like -vf filename in the typical Unix way, automatically.

The disadvantage of getopts is that it can only handle short options ( -h , not --help ) without additional code.

There is a getopts tutorial which explains what all of the syntax and variables mean. In bash, there is also help getopts , which might be informative.

johncip ,Jul 23, 2018 at 15:15

No answer mentions enhanced getopt . And the top-voted answer is misleading: it either ignores -vfd style short options (requested by the OP) or options after positional arguments (also requested by the OP); and it ignores parsing errors. Instead:

The following calls

myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile
myscript -v -f -d -o/fizz/someOtherFile -- ./foo/bar/someFile
myscript --verbose --force --debug ./foo/bar/someFile -o/fizz/someOtherFile
myscript --output=/fizz/someOtherFile ./foo/bar/someFile -vfd
myscript ./foo/bar/someFile -df -v --output /fizz/someOtherFile

all return

verbose: y, force: y, debug: y, in: ./foo/bar/someFile, out: /fizz/someOtherFile

with the following myscript

#!/bin/bash
# saner programming env: these switches turn some bugs into errors
set -o errexit -o pipefail -o noclobber -o nounset

# -allow a command to fail with !'s side effect on errexit
# -use return value from ${PIPESTATUS[0]}, because ! hosed $?
! getopt --test > /dev/null 
if [[ ${PIPESTATUS[0]} -ne 4 ]]; then
    echo '`getopt --test` failed in this environment.'
    exit 1
fi

OPTIONS=dfo:v
LONGOPTS=debug,force,output:,verbose

# -regarding ! and PIPESTATUS see above
# -temporarily store output to be able to check for errors
# -activate quoting/enhanced mode (e.g. by writing out "--options")
# -pass arguments only via   -- "$@"   to separate them correctly
! PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTS --name "$0" -- "$@")
if [[ ${PIPESTATUS[0]} -ne 0 ]]; then
    # e.g. return value is 1
    #  then getopt has complained about wrong arguments to stdout
    exit 2
fi
# read getopt's output this way to handle the quoting right:
eval set -- "$PARSED"

d=n f=n v=n outFile=-
# now enjoy the options in order and nicely split until we see --
while true; do
    case "$1" in
        -d|--debug)
            d=y
            shift
            ;;
        -f|--force)
            f=y
            shift
            ;;
        -v|--verbose)
            v=y
            shift
            ;;
        -o|--output)
            outFile="$2"
            shift 2
            ;;
        --)
            shift
            break
            ;;
        *)
            echo "Programming error"
            exit 3
            ;;
    esac
done

# handle non-option arguments
if [[ $# -ne 1 ]]; then
    echo "$0: A single input file is required."
    exit 4
fi

echo "verbose: $v, force: $f, debug: $d, in: $1, out: $outFile"

1 enhanced getopt is available on most "bash-systems", including Cygwin; on OS X try brew install gnu-getopt or sudo port install getopt
2 the POSIX exec() conventions have no reliable way to pass binary NULL in command line arguments; those bytes prematurely end the argument
3 first version released in 1997 or before (I only tracked it back to 1997)

Tobias Kienzler ,Mar 19, 2016 at 15:23

from : digitalpeer.com with minor modifications

Usage myscript.sh -p=my_prefix -s=dirname -l=libname

#!/bin/bash
for i in "$@"
do
case $i in
    -p=*|--prefix=*)
    PREFIX="${i#*=}"

    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    ;;
    -l=*|--lib=*)
    DIR="${i#*=}"
    ;;
    --default)
    DEFAULT=YES
    ;;
    *)
            # unknown option
    ;;
esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.

Robert Siemer ,Jun 1, 2018 at 1:57

getopt() / getopts() is a good option. Stolen from here :

The simple use of "getopt" is shown in this mini-script:

#!/bin/bash
echo "Before getopt"
for i
do
  echo $i
done
args=`getopt abc:d $*`
set -- $args
echo "After getopt"
for i
do
  echo "-->$i"
done

What we have said is that any of -a, -b, -c or -d will be allowed, but that -c is followed by an argument (the "c:" says that).

If we call this "g" and try it out:

bash-2.05a$ ./g -abc foo
Before getopt
-abc
foo
After getopt
-->-a
-->-b
-->-c
-->foo
-->--

We start with two arguments, and "getopt" breaks apart the options and puts each in its own argument. It also added "--".

hfossli ,Jan 31 at 20:05

More succinct way

script.sh

#!/bin/bash

while [[ "$#" -gt 0 ]]; do case $1 in
  -d|--deploy) deploy="$2"; shift;;
  -u|--uglify) uglify=1;;
  *) echo "Unknown parameter passed: $1"; exit 1;;
esac; shift; done

echo "Should deploy? $deploy"
echo "Should uglify? $uglify"

Usage:

./script.sh -d dev -u

# OR:

./script.sh --deploy dev --uglify

bronson ,Apr 27 at 23:22

At the risk of adding another example to ignore, here's my scheme.

Hope it's useful to someone.

while [ "$#" -gt 0 ]; do
  case "$1" in
    -n) name="$2"; shift 2;;
    -p) pidfile="$2"; shift 2;;
    -l) logfile="$2"; shift 2;;

    --name=*) name="${1#*=}"; shift 1;;
    --pidfile=*) pidfile="${1#*=}"; shift 1;;
    --logfile=*) logfile="${1#*=}"; shift 1;;
    --name|--pidfile|--logfile) echo "$1 requires an argument" >&2; exit 1;;

    -*) echo "unknown option: $1" >&2; exit 1;;
    *) handle_argument "$1"; shift 1;;
  esac
done

Robert Siemer ,Jun 6, 2016 at 19:28

I'm about 4 years late to this question, but want to give back. I used the earlier answers as a starting point to tidy up my old adhoc param parsing. I then refactored out the following template code. It handles both long and short params, using = or space separated arguments, as well as multiple short params grouped together. Finally it re-inserts any non-param arguments back into the $1,$2.. variables. I hope it's useful.
#!/usr/bin/env bash

# NOTICE: Uncomment if your script depends on bashisms.
#if [ -z "$BASH_VERSION" ]; then bash $0 $@ ; exit $? ; fi

echo "Before"
for i ; do echo - $i ; done


# Code template for parsing command line parameters using only portable shell
# code, while handling both long and short params, handling '-f file' and
# '-f=file' style param data and also capturing non-parameters to be inserted
# back into the shell positional parameters.

while [ -n "$1" ]; do
        # Copy so we can modify it (can't modify $1)
        OPT="$1"
        # Detect argument termination
        if [ x"$OPT" = x"--" ]; then
                shift
                for OPT ; do
                        REMAINS="$REMAINS \"$OPT\""
                done
                break
        fi
        # Parse current opt
        while [ x"$OPT" != x"-" ] ; do
                case "$OPT" in
                        # Handle --flag=value opts like this
                        -c=* | --config=* )
                                CONFIGFILE="${OPT#*=}"
                                shift
                                ;;
                        # and --flag value opts like this
                        -c* | --config )
                                CONFIGFILE="$2"
                                shift
                                ;;
                        -f* | --force )
                                FORCE=true
                                ;;
                        -r* | --retry )
                                RETRY=true
                                ;;
                        # Anything unknown is recorded for later
                        * )
                                REMAINS="$REMAINS \"$OPT\""
                                break
                                ;;
                esac
                # Check for multiple short options
                # NOTICE: be sure to update this pattern to match valid options
                NEXTOPT="${OPT#-[cfr]}" # try removing single short opt
                if [ x"$OPT" != x"$NEXTOPT" ] ; then
                        OPT="-$NEXTOPT"  # multiple short opts, keep going
                else
                        break  # long form, exit inner loop
                fi
        done
        # Done with that param. move to next
        shift
done
# Set the non-parameters back into the positional parameters ($1 $2 ..)
eval set -- $REMAINS


echo -e "After: \n configfile='$CONFIGFILE' \n force='$FORCE' \n retry='$RETRY' \n remains='$REMAINS'"
for i ; do echo - $i ; done


I have found writing portable argument parsing in scripts so frustrating that I have written Argbash - a FOSS code generator that can generate the argument-parsing code for your script, plus it has some nice features:

https://argbash.io

[Aug 29, 2019] shell - An example of how to use getopts in bash - Stack Overflow

The key thing to understand is that getopts just parses the options. You still need to shift them away as a separate operation:
shift $((OPTIND-1))
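As a minimal, self-contained sketch of that pattern (the -v flag here is hypothetical):

```shell
#!/bin/sh
# Parse options with getopts, then shift the parsed options away
# so that "$@" holds only the remaining operands.
verbose=0
while getopts "v" opt; do
    case "$opt" in
        v) verbose=1 ;;
        *) echo "usage: $0 [-v] args..." >&2; exit 1 ;;
    esac
done
shift $((OPTIND-1))
echo "verbose=$verbose remaining: $*"
```

Running it as ./script.sh -v a b prints verbose=1 remaining: a b.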
May 10, 2013 | stackoverflow.com


chepner ,May 10, 2013 at 13:42

I want to call myscript file in this way:
$ ./myscript -s 45 -p any_string

or

$ ./myscript -h >>> should display help
$ ./myscript    >>> should display help

My requirements are:

I tried so far this code:

#!/bin/bash
while getopts "h:s:" arg; do
  case $arg in
    h)
      echo "usage" 
      ;;
    s)
      strength=$OPTARG
      echo $strength
      ;;
  esac
done

But with that code I get errors. How can I do it with Bash and getopt?


#!/bin/bash

usage() { echo "Usage: $0 [-s <45|90>] [-p <string>]" 1>&2; exit 1; }

while getopts ":s:p:" o; do
    case "${o}" in
        s)
            s=${OPTARG}
            ((s == 45 || s == 90)) || usage
            ;;
        p)
            p=${OPTARG}
            ;;
        *)
            usage
            ;;
    esac
done
shift $((OPTIND-1))

if [ -z "${s}" ] || [ -z "${p}" ]; then
    usage
fi

echo "s = ${s}"
echo "p = ${p}"

Example runs:

$ ./myscript.sh
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -h
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s "" -p ""
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s 10 -p foo
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s 45 -p foo
s = 45
p = foo

$ ./myscript.sh -s 90 -p bar
s = 90
p = bar

[Aug 26, 2019] Linux and Unix exit code tutorial with examples by George Ornbo

Aug 07, 2016 | shapeshed.com
Tutorial on using exit codes from Linux or UNIX commands. Examples of how to get the exit code of a command, how to set the exit code and how to suppress exit codes.

Estimated reading time: 3 minutes

Table of contents

UNIX exit code

What is an exit code in the UNIX or Linux shell?

An exit code, sometimes known as a return code, is the code returned to a parent process by an executable. On POSIX systems the standard exit code is 0 for success and any number from 1 to 255 for anything else.

Exit codes can be interpreted by scripts to adapt in the event of success or failure. If a script does not set an exit code explicitly, its exit code is that of the last command it ran.

How to get the exit code of a command

To get the exit code of a command type echo $? at the command prompt. In the following example a file is printed to the terminal using the cat command.

cat file.txt
hello world
echo $?
0

The command was successful. The file exists and there are no errors in reading the file or writing it to the terminal. The exit code is therefore 0 .

In the following example the file does not exist.

cat doesnotexist.txt
cat: doesnotexist.txt: No such file or directory
echo $?
1

The exit code is 1 as the operation was not successful.

How to use exit codes in scripts

To use exit codes in scripts an if statement can be used to see if an operation was successful.

#!/bin/bash

cat file.txt 

if [ $? -eq 0 ]
then
  echo "The script ran ok"
  exit 0
else
  echo "The script failed" >&2
  exit 1
fi

If the command was successful the exit code will be 0 and 'The script ran ok' will be printed to the terminal; otherwise the exit code will be 1 and 'The script failed' will be printed to standard error.

How to set an exit code

To set an exit code in a script use exit 0 where 0 is the number you want to return. In the following example a shell script exits with a 1 . This file is saved as exit.sh .

#!/bin/bash

exit 1

Executing this script shows that the exit code is correctly set.

bash exit.sh
echo $?
1
What exit code should I use?

The Linux Documentation Project has a list of reserved codes that also offers advice on what code to use for specific scenarios. These are the standard error codes in Linux or UNIX.

How to suppress exit statuses

Sometimes there may be a requirement to suppress an exit status. It may be that a command is being run within another script and that anything other than a 0 status is undesirable.

In the following example a file is printed to the terminal using cat . This file does not exist so will cause an exit status of 1 .

To suppress the error message any output to standard error is sent to /dev/null using 2>/dev/null .

If the cat command fails an OR operation can be used to provide a fallback - cat file.txt || exit 0 . In this case an exit code of 0 is returned even if there is an error.

Combining both the suppression of error output and the OR operation the following script returns a status code of 0 with no output even though the file does not exist.

#!/bin/bash

cat 'doesnotexist.txt' 2>/dev/null || exit 0
Further reading

[Aug 26, 2019] Exit Codes - Shell Scripting Tutorial

Aug 26, 2019 | www.shellscript.sh

Exit codes are a number between 0 and 255, which is returned by any Unix command when it returns control to its parent process.
Other numbers can be used, but these are treated modulo 256, so exit -10 is equivalent to exit 246 , and exit 257 is equivalent to exit 1 .
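The modulo-256 behaviour is easy to check from the command line (bash shown; some minimal shells reject negative arguments to exit):

```shell
# bash reduces exit values modulo 256:
bash -c 'exit 257'; echo $?   # 1
bash -c 'exit -10'; echo $?   # 246
```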

These can be used within a shell script to change the flow of execution depending on the success or failure of commands executed. This was briefly introduced in Variables - Part II . Here we shall look in more detail at the available interpretations of exit codes.

Success is traditionally represented with exit 0 ; failure is normally indicated with a non-zero exit-code. This value can indicate different reasons for failure.
For example, GNU grep returns 0 on success, 1 if no matches were found, and 2 for other errors (syntax errors, non-existent input files, etc).
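A quick demonstration of grep's three exit statuses:

```shell
printf 'hello\n' | grep -q hello; echo $?       # 0: match found
printf 'hello\n' | grep -q bye;   echo $?       # 1: no match
grep hello /no/such/file 2>/dev/null; echo $?   # 2: error (missing file)
```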

We shall look at three different methods for checking error status, and discuss the pros and cons of each approach.

Firstly, the simple approach:


#!/bin/sh
# First attempt at checking return codes
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
if [ "$?" -ne "0" ]; then
  echo "Sorry, cannot find user ${1} in /etc/passwd"
  exit 1
fi
NAME=`grep "^${1}:" /etc/passwd|cut -d":" -f5`
HOMEDIR=`grep "^${1}:" /etc/passwd|cut -d":" -f6`

echo "USERNAME: $USERNAME"
echo "NAME: $NAME"
echo "HOMEDIR: $HOMEDIR"

This script works fine if you supply a valid username in /etc/passwd . However, if you enter an invalid username, it does not do what you might at first expect - it keeps running, and just shows:
USERNAME: 
NAME: 
HOMEDIR:
Why is this? As mentioned, the $? variable is set to the return code of the last executed command . In this case, that is cut . cut had no problems which it feels like reporting - as far as I can tell from testing it, and reading the documentation, cut returns zero whatever happens! It was fed an empty string, and did its job - returned the first field of its input, which just happened to be the empty string.

So what do we do? If we have an error here, grep will report it, not cut . Therefore, we have to test grep 's return code, not cut 's.


#!/bin/sh
# Second attempt at checking return codes
grep "^${1}:" /etc/passwd > /dev/null 2>&1
if [ "$?" -ne "0" ]; then
  echo "Sorry, cannot find user ${1} in /etc/passwd"
  exit 1
fi
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
NAME=`grep "^${1}:" /etc/passwd|cut -d":" -f5`
HOMEDIR=`grep "^${1}:" /etc/passwd|cut -d":" -f6`

echo "USERNAME: $USERNAME"
echo "NAME: $NAME"
echo "HOMEDIR: $HOMEDIR"

This fixes the problem for us, though at the expense of slightly longer code.
That is the basic way which textbooks might show you, but it is far from being all there is to know about error-checking in shell scripts. This method may not be the most suitable to your particular command-sequence, or may be unmaintainable. Below, we shall investigate two alternative approaches.

As a second approach, we can tidy this somewhat by putting the test into a separate function, instead of littering the code with lots of 4-line tests:


#!/bin/sh
# A Tidier approach

check_errs()
{
  # Function. Parameter 1 is the return code
  # Para. 2 is text to display on failure.
  if [ "${1}" -ne "0" ]; then
    echo "ERROR # ${1} : ${2}"
    # as a bonus, make our script exit with the right error code.
    exit ${1}
  fi
}

### main script starts here ###

grep "^${1}:" /etc/passwd > /dev/null 2>&1
check_errs $? "User ${1} not found in /etc/passwd"
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
check_errs $? "Cut returned an error"
echo "USERNAME: $USERNAME"
check_errs $? "echo returned an error - very strange!"

This allows us to test for errors 3 times, with customised error messages, without having to write 3 individual tests. By writing the test routine once, we can call it as many times as we wish, creating a more intelligent script, at very little expense to the programmer. Perl programmers will recognise this as being similar to the die command in Perl.

As a third approach, we shall look at a simpler and cruder method. I tend to use this for building Linux kernels - simple automations which, if they go well, should just get on with it, but when things go wrong, tend to require the operator to do something intelligent (ie, that which a script cannot do!):


#!/bin/sh
cd /usr/src/linux && \
make dep && make bzImage && make modules && make modules_install && \
cp arch/i386/boot/bzImage /boot/my-new-kernel && cp System.map /boot && \
echo "Your new kernel awaits, m'lord."
This script runs through the various tasks involved in building a Linux kernel (which can take quite a while), and uses the && operator to check for success. To do this with if would involve:
#!/bin/sh
cd /usr/src/linux
if [ "$?" -eq "0" ]; then
  make dep 
    if [ "$?" -eq "0" ]; then
      make bzImage 
      if [ "$?" -eq "0" ]; then
        make modules 
        if [ "$?" -eq "0" ]; then
          make modules_install
          if [ "$?" -eq "0" ]; then
            cp arch/i386/boot/bzImage /boot/my-new-kernel
            if [ "$?" -eq "0" ]; then
              cp System.map /boot/
              if [ "$?" -eq "0" ]; then
                echo "Your new kernel awaits, m'lord."
              fi
            fi
          fi
        fi
      fi
    fi
  fi
fi

... which I, personally, find pretty difficult to follow.


The && and || operators are the shell's equivalent of AND and OR tests. These can be thrown together as above, or:


#!/bin/sh
cp /foo /bar && echo Success || echo Failed

This code will either echo

Success

or

Failed

depending on whether or not the cp command was successful. Look carefully at this; the construct is

command && command-to-execute-on-success || command-to-execute-on-failure

Only one command can be in each part. This method is handy for simple success / fail scenarios, but if you want to check on the status of the echo commands themselves, it is easy to quickly become confused about which && and || applies to which command. It is also very difficult to maintain. Therefore this construct is only recommended for simple sequencing of commands.

In earlier versions, I had suggested that you can use a subshell to execute multiple commands depending on whether the cp command succeeded or failed:

cp /foo /bar && ( echo Success ; echo Success part II; ) || ( echo Failed ; echo Failed part II )

But in fact, Marcel found that this does not work properly. The syntax for a subshell is:

( command1 ; command2; command3 )

The return code of the subshell is the return code of the final command ( command3 in this example). That return code will affect the overall command. So the output of this script:

cp /foo /bar && ( echo Success ; echo Success part II; /bin/false ) || ( echo Failed ; echo Failed part II )

It runs the Success part (because cp succeeded), and then, because /bin/false returns failure, it also executes the Failure part:

Success
Success part II
Failed
Failed part II

So if you need to execute multiple commands as a result of the status of some other condition, it is better (and much clearer) to use the standard if , then , else syntax.
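A sketch of the earlier cp example rewritten that way (using the same hypothetical /foo and /bar paths):

```shell
#!/bin/sh
# Multiple commands on success or failure, without the &&/|| ambiguity:
if cp /foo /bar 2>/dev/null; then
    echo "Success"
    echo "Success part II"
else
    echo "Failed"
    echo "Failed part II"
fi
```

Unlike the subshell version, only one branch can ever run, no matter what the echo commands themselves return.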

[Jun 18, 2019] Introduction to Bash Shell Parameter Expansions

Jun 18, 2019 | linuxconfig.org

Before proceeding further, let me give you one tip. In the example above the shell tried to expand a non-existent variable, producing a blank result. This can be very dangerous, especially when working with path names; therefore, when writing scripts, it's always recommended to use the nounset option, which causes the shell to exit with an error whenever a non-existent variable is referenced:

$ set -o nounset
$ echo "You are reading this article on $site_!"
bash: site_: unbound variable
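Note that under nounset a default-value expansion is still safe, so a script can opt in to strictness and still tolerate deliberately optional variables:

```shell
#!/bin/bash
set -o nounset
# A bare $site_ would abort the script here, but the :- form
# supplies a fallback instead of triggering the unbound-variable error.
echo "site is ${site_:-unknown}"
```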
Working with indirection

The use of the ${!parameter} syntax adds a level of indirection to our parameter expansion. What does it mean? The parameter which the shell will try to expand is not parameter ; instead it will try to use the value of parameter as the name of the variable to be expanded. Let's explain this with an example. We all know the HOME variable expands to the path of the user's home directory, right?

$ echo "${HOME}"
/home/egdoc

Very well, if now we assign the string "HOME", to another variable, and use this type of expansion, we obtain:

$ variable_to_inspect="HOME"
$ echo "${!variable_to_inspect}"
/home/egdoc

As you can see in the example above, instead of obtaining "HOME" as a result, as would have happened if we performed a simple expansion, the shell used the value of variable_to_inspect as the name of the variable to expand; that's why we talk about a level of indirection.
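One practical use of indirection is iterating over variables by name, for example when dumping a set of environment variables (the variable names here are just common examples):

```shell
#!/bin/bash
# Print several variables by name using ${!var} indirection.
for var in HOME SHELL; do
    echo "$var -> ${!var}"
done
```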

Case modification expansion

This parameter expansion syntax lets us change the case of the alphabetic characters inside the string resulting from the expansion of the parameter. Say we have a variable called name ; to capitalize the text returned by the expansion of the variable we would use the ${parameter^} syntax:

$ name="egidio"
$ echo "${name^}"
Egidio

What if we want to uppercase the entire string, instead of just capitalizing it? Easy: we use the ${parameter^^} syntax:

$ echo "${name^^}"
EGIDIO

Similarly, to lowercase the first character of a string, we use the ${parameter,} expansion syntax:

$ name="EGIDIO"
$ echo "${name,}"
eGIDIO

To lowercase the entire string, instead, we use the ${parameter,,} syntax:

$ name="EGIDIO"
$ echo "${name,,}"
egidio

In all cases a pattern to match a single character can also be provided. When the pattern is provided the operation is applied only to the parts of the original string that match it:

$ name="EGIDIO"
$ echo "${name,,[DIO]}"
EGidio

In the example above we enclose the characters in square brackets: this causes any one of them to be matched as a pattern.

When using the expansions we explained in this paragraph and the parameter is an array subscripted by @ or * , the operation is applied to all the elements contained in it:

$ my_array=(one two three)
$ echo "${my_array[@]^^}"
ONE TWO THREE

When the index of a specific element in the array is referenced, instead, the operation is applied only to it:

$ my_array=(one two three)
$ echo "${my_array[2]^^}"
THREE
Substring removal

The next syntax we will examine allows us to remove a pattern from the beginning or from the end of the string resulting from the expansion of a parameter.

Remove matching pattern from the beginning of the string

The next syntax we will examine, ${parameter#pattern} , allows us to remove a pattern from the beginning of the string resulting from the parameter expansion:

$ name="Egidio"
$ echo "${name#Egi}"
dio

A similar result can be obtained by using the "${parameter##pattern}" syntax, but with one important difference: contrary to the one we used in the example above, which removes the shortest matching pattern from the beginning of the string, it removes the longest one. The difference is clearly visible when using the * character in the pattern :

$ name="Egidio Docile"
$ echo "${name#*i}"
dio Docile

In the example above we used * as part of the pattern that should be removed from the string resulting from the expansion of the name variable. This wildcard matches any character, so the pattern itself translates to "the 'i' character and everything before it". As we already said, when we use the ${parameter#pattern} syntax, the shortest matching pattern is removed, in this case it is "Egi". Let's see what happens when we use the "${parameter##pattern}" syntax instead:

$ name="Egidio Docile"
$ echo "${name##*i}"
le

This time the longest matching pattern is removed ("Egidio Doci"): the longest possible match includes the third 'i' and everything before it. The result of the expansion is just "le".
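The classic practical use of this pair is extracting a file name from a path (the path below is made up for illustration):

```shell
#!/bin/bash
path="/usr/local/bin/backup.sh"
echo "${path#*/}"    # usr/local/bin/backup.sh (shortest match: just the leading /)
echo "${path##*/}"   # backup.sh (longest match: works like basename)
```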

Remove matching pattern from the end of the string

The syntaxes we saw above remove the shortest or longest matching pattern from the beginning of the string. If we want the pattern to be removed from the end of the string, instead, we must use the ${parameter%pattern} or ${parameter%%pattern} expansions, to remove, respectively, the shortest and longest match from the end of the string:

$ name="Egidio Docile"
$ echo "${name%i*}"
Egidio Doc

In this example the pattern we provided roughly translates to "the 'i' character and everything after it, starting from the end of the string". The shortest match is "ile", so what is returned is "Egidio Doc". If we try the same example but use the syntax which removes the longest match we obtain:

$ name="Egidio Docile"
$ echo "${name%%i*}"
Eg

In this case, once the longest match is removed, what is returned is "Eg".
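The suffix-removal pair is most often used on file extensions (the file name below is made up for illustration):

```shell
#!/bin/bash
file="archive.tar.gz"
echo "${file%.*}"    # archive.tar (shortest suffix removed)
echo "${file%%.*}"   # archive (longest suffix removed)
```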

In all the expansions we saw above, if parameter is an array and it is subscripted with * or @ , the removal of the matching pattern is applied to all its elements:

$ my_array=(one two three)
$ echo "${my_array[@]#*o}"
ne three

Search and replace pattern

We used the previous syntax to remove a matching pattern from the beginning or from the end of the string resulting from the expansion of a parameter. What if we want to replace pattern with something else? We can use the ${parameter/pattern/string} or ${parameter//pattern/string} syntax. The former replaces only the first occurrence of the pattern, the latter all the occurrences:

$ phrase="yellow is the sun and yellow is the lemon"
$ echo "${phrase/yellow/red}"
red is the sun and yellow is the lemon

The parameter (phrase) is expanded, and the longest match of the pattern (yellow) is matched against it. The match is then replaced by the provided string (red). As you can observe only the first occurrence is replaced, so the lemon remains yellow! If we want to change all the occurrences of the pattern, we must prefix it with the / character:

$ phrase="yellow is the sun and yellow is the lemon"
$ echo "${phrase//yellow/red}"
red is the sun and red is the lemon

This time all the occurrences of "yellow" have been replaced by "red". As you can see the pattern is matched wherever it is found in the string resulting from the expansion of parameter . If we want to specify that it must be matched only at the beginning or at the end of the string, we must prefix it respectively with the # or % character.
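Using the same phrase, the anchored forms look like this:

```shell
#!/bin/bash
phrase="yellow is the sun and yellow is the lemon"
echo "${phrase/#yellow/red}"   # red is the sun and yellow is the lemon
echo "${phrase/%lemon/lime}"   # yellow is the sun and yellow is the lime
```

The # anchor only replaces a match at the very start of the string, and % only at the very end; a "yellow" in the middle is left alone by both.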

Just like in the previous cases, if parameter is an array subscripted by either * or @ , the substitution happens in each one of its elements:

$ my_array=(one two three)
$ echo "${my_array[@]/o/u}"
une twu three
Substring expansion

The ${parameter:offset} and ${parameter:offset:length} expansions let us expand only a part of the parameter, returning a substring starting at the specified offset and length characters long. If the length is not specified the expansion proceeds until the end of the original string. This type of expansion is called substring expansion :

$ name="Egidio Docile"
$ echo "${name:3}"
dio Docile

In the example above we provided just the offset , without specifying the length , therefore the result of the expansion was the substring obtained by starting at the character specified by the offset (3).

If we specify a length, the substring will start at offset and will be length characters long:

$ echo "${name:3:3}"
dio

If the offset is negative, it is calculated from the end of the string. In this case an additional space must be added after : otherwise the shell will consider it as another type of expansion identified by :- which is used to provide a default value if the parameter to be expanded doesn't exist (we talked about it in the article about managing the expansion of empty or unset bash variables ):

$ echo "${name: -6}"
Docile

If the provided length is negative, instead of being interpreted as the total number of characters the resulting string should be long, it is considered as an offset to be calculated from the end of the string. The result of the expansion will therefore be a substring starting at offset and ending at length characters from the end of the original string:

$ echo "${name:7:-3}"
Doc

When using this expansion and parameter is an indexed array subscripted by * or @ , the offset is relative to the indexes of the array elements. For example:

$ my_array=(one two three)
$ echo "${my_array[@]:0:2}"
one two
$ echo "${my_array[@]: -2}"
two three

[Jul 04, 2018] How do I parse command line arguments in Bash

Notable quotes:
"... enhanced getopt ..."
Jul 04, 2018 | stackoverflow.com

Lawrence Johnston ,Oct 10, 2008 at 16:57

Say, I have a script that gets called with this line:
./myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile

or this one:

./myscript -v -f -d -o /fizz/someOtherFile ./foo/bar/someFile

What's the accepted way of parsing this such that in each case (or some combination of the two) $v , $f , and $d will all be set to true and $outFile will be equal to /fizz/someOtherFile ?

Inanc Gumus ,Apr 15, 2016 at 19:11

See my very easy and no-dependency answer here: stackoverflow.com/a/33826763/115363 – Inanc Gumus Apr 15 '16 at 19:11

dezza ,Aug 2, 2016 at 2:13

For zsh-users there's a great builtin called zparseopts which can do: zparseopts -D -E -M -- d=debug -debug=d And have both -d and --debug in the $debug array echo $+debug[1] will return 0 or 1 if one of those are used. Ref: zsh.org/mla/users/2011/msg00350.html – dezza Aug 2 '16 at 2:13

Bruno Bronosky ,Jan 7, 2013 at 20:01

Preferred Method: Using straight bash without getopt[s]

I originally answered the question as the OP asked. This Q/A is getting a lot of attention, so I should also offer the non-magic way to do this. I'm going to expand upon guneysus's answer to fix the nasty sed and include Tobias Kienzler's suggestion .

Two of the most common ways to pass key value pair arguments are:

Straight Bash Space Separated

Usage ./myscript.sh -e conf -s /etc -l /usr/lib /etc/hosts

#!/bin/bash

POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"

case $key in
    -e|--extension)
    EXTENSION="$2"
    shift # past argument
    shift # past value
    ;;
    -s|--searchpath)
    SEARCHPATH="$2"
    shift # past argument
    shift # past value
    ;;
    -l|--lib)
    LIBPATH="$2"
    shift # past argument
    shift # past value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument
    ;;
    *)    # unknown option
    POSITIONAL+=("$1") # save it in an array for later
    shift # past argument
    ;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters

echo FILE EXTENSION  = "${EXTENSION}"
echo SEARCH PATH     = "${SEARCHPATH}"
echo LIBRARY PATH    = "${LIBPATH}"
echo DEFAULT         = "${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi
Straight Bash Equals Separated

Usage ./myscript.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

#!/bin/bash

for i in "$@"
do
case $i in
    -e=*|--extension=*)
    EXTENSION="${i#*=}"
    shift # past argument=value
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    shift # past argument=value
    ;;
    -l=*|--lib=*)
    LIBPATH="${i#*=}"
    shift # past argument=value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument with no value
    ;;
    *)
          # unknown option
    ;;
esac
done
echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 $1
fi

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.

Using getopt[s]

from: http://mywiki.wooledge.org/BashFAQ/035#getopts

Never use getopt(1). getopt cannot handle empty arguments strings, or arguments with embedded whitespace. Please forget that it ever existed.

The POSIX shell (and others) offer getopts which is safe to use instead. Here is a simplistic getopts example:

#!/bin/sh

# A POSIX variable
OPTIND=1         # Reset in case getopts has been used previously in the shell.

# Initialize our own variables:
output_file=""
verbose=0

while getopts "h?vf:" opt; do
    case "$opt" in
    h|\?)
        show_help
        exit 0
        ;;
    v)  verbose=1
        ;;
    f)  output_file=$OPTARG
        ;;
    esac
done

shift $((OPTIND-1))

[ "${1:-}" = "--" ] && shift

echo "verbose=$verbose, output_file='$output_file', Leftovers: $@"

# End of file

The advantages of getopts are:

  1. It's portable, and will work in e.g. dash.
  2. It can handle things like -vf filename in the expected Unix way, automatically.

The disadvantage of getopts is that it can only handle short options ( -h , not --help ) without trickery.

There is a getopts tutorial which explains what all of the syntax and variables mean. In bash, there is also help getopts , which might be informative.
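One widely used form of that trickery, sketched here with hypothetical -v/--verbose flags, is to declare "-" as an option that takes an argument: getopts then delivers "--word" as option "-" with OPTARG set to "word", which an inner case statement can dispatch on:

```shell
#!/bin/bash
# Long options with getopts via the "-:" trick (bash).
verbose=0
while getopts "v-:" opt; do
    case "$opt" in
        v) verbose=1 ;;
        -)  # --something arrived; the text after "--" is in OPTARG
            case "$OPTARG" in
                verbose) verbose=1 ;;
                *) echo "unknown option --$OPTARG" >&2; exit 1 ;;
            esac
            ;;
        *) exit 1 ;;
    esac
done
shift $((OPTIND-1))
echo "verbose=$verbose"
```

This keeps getopts' normal handling of grouped short options, at the cost of doing the --name=value splitting yourself if you need it.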

Livven ,Jun 6, 2013 at 21:19

Is this really true? According to Wikipedia there's a newer GNU enhanced version of getopt which includes all the functionality of getopts and then some. man getopt on Ubuntu 13.04 outputs getopt - parse command options (enhanced) as the name, so I presume this enhanced version is standard now. – Livven Jun 6 '13 at 21:19

szablica ,Jul 17, 2013 at 15:23

That something is a certain way on your system is a very weak premise to base asumptions of "being standard" on. – szablica Jul 17 '13 at 15:23

Stephane Chazelas ,Aug 20, 2014 at 19:55

@Livven, that getopt is not a GNU utility, it's part of util-linux . – Stephane Chazelas Aug 20 '14 at 19:55

Nicolas Mongrain-Lacombe ,Jun 19, 2016 at 21:22

If you use -gt 0 , remove your shift after the esac , augment all the shift by 1 and add this case: *) break;; you can handle non optional arguments. Ex: pastebin.com/6DJ57HTc – Nicolas Mongrain-Lacombe Jun 19 '16 at 21:22

kolydart ,Jul 10, 2017 at 8:11

You do not echo --default . In the first example, I notice that if --default is the last argument, it is not processed (considered as non-opt), unless while [[ $# -gt 1 ]] is changed to while [[ $# -gt 0 ]] – kolydart Jul 10 '17 at 8:11

Robert Siemer ,Apr 20, 2015 at 17:47

No answer mentions enhanced getopt . And the top-voted answer is misleading: It ignores -vfd style short options (requested by the OP), options after positional arguments (also requested by the OP) and it ignores parsing-errors. Instead:

The following calls

myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile
myscript -v -f -d -o/fizz/someOtherFile -- ./foo/bar/someFile
myscript --verbose --force --debug ./foo/bar/someFile -o/fizz/someOtherFile
myscript --output=/fizz/someOtherFile ./foo/bar/someFile -vfd
myscript ./foo/bar/someFile -df -v --output /fizz/someOtherFile

all return

verbose: y, force: y, debug: y, in: ./foo/bar/someFile, out: /fizz/someOtherFile

with the following myscript

#!/bin/bash

getopt --test > /dev/null
if [[ $? -ne 4 ]]; then
    echo "I'm sorry, 'getopt --test' failed in this environment."
    exit 1
fi

OPTIONS=dfo:v
LONGOPTIONS=debug,force,output:,verbose

# -temporarily store output to be able to check for errors
# -e.g. use "--options" parameter by name to activate quoting/enhanced mode
# -pass arguments only via   -- "$@"   to separate them correctly
PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTIONS --name "$0" -- "$@")
if [[ $? -ne 0 ]]; then
    # e.g. $? == 1
    #  then getopt has complained about wrong arguments to stdout
    exit 2
fi
# read getopt's output this way to handle the quoting right:
eval set -- "$PARSED"

# now enjoy the options in order and nicely split until we see --
while true; do
    case "$1" in
        -d|--debug)
            d=y
            shift
            ;;
        -f|--force)
            f=y
            shift
            ;;
        -v|--verbose)
            v=y
            shift
            ;;
        -o|--output)
            outFile="$2"
            shift 2
            ;;
        --)
            shift
            break
            ;;
        *)
            echo "Programming error"
            exit 3
            ;;
    esac
done

# handle non-option arguments
if [[ $# -ne 1 ]]; then
    echo "$0: A single input file is required."
    exit 4
fi

echo "verbose: $v, force: $f, debug: $d, in: $1, out: $outFile"

1 enhanced getopt is available on most "bash-systems", including Cygwin; on OS X try brew install gnu-getopt
2 the POSIX exec() conventions have no reliable way to pass binary NULL in command line arguments; those bytes prematurely end the argument
3 first version released in 1997 or before (I only tracked it back to 1997)

johncip ,Jan 12, 2017 at 2:00

Thanks for this. Just confirmed from the feature table at en.wikipedia.org/wiki/Getopts , if you need support for long options, and you're not on Solaris, getopt is the way to go. – johncip Jan 12 '17 at 2:00

Kaushal Modi ,Apr 27, 2017 at 14:02

I believe that the only caveat with getopt is that it cannot be used conveniently in wrapper scripts where one might have few options specific to the wrapper script, and then pass the non-wrapper-script options to the wrapped executable, intact. Let's say I have a grep wrapper called mygrep and I have an option --foo specific to mygrep , then I cannot do mygrep --foo -A 2 , and have the -A 2 passed automatically to grep ; I need to do mygrep --foo -- -A 2 . Here is my implementation on top of your solution. – Kaushal Modi Apr 27 '17 at 14:02

bobpaul ,Mar 20 at 16:45

Alex, I agree and there's really no way around that since we need to know the actual return value of getopt --test . I'm a big fan of "Unofficial Bash Strict mode", (which includes set -e ), and I just put the check for getopt ABOVE set -euo pipefail and IFS=$'\n\t' in my script. – bobpaul Mar 20 at 16:45
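The ordering bobpaul describes can be sketched like this (a minimal sketch assuming util-linux getopt; the probe deliberately exits with status 4 on success, which `set -e` would otherwise treat as fatal, so the probe must come first):

```shell
#!/bin/bash
# Probe for enhanced getopt BEFORE enabling strict mode.
getopt --test > /dev/null 2>&1
if [ $? -ne 4 ]; then
    echo "enhanced getopt is not available" >&2
    getopt_ok=no
else
    getopt_ok=yes
fi

set -euo pipefail      # "unofficial strict mode", enabled only after the probe
IFS=$'\n\t'
echo "getopt_ok=$getopt_ok"
```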

Robert Siemer ,Mar 21 at 9:10

@bobpaul Oh, there is a way around that. And I'll edit my answer soon to reflect my collections regarding this issue ( set -e )... – Robert Siemer Mar 21 at 9:10

Robert Siemer ,Mar 21 at 9:16

@bobpaul Your statement about util-linux is wrong and misleading as well: the package is marked "essential" on Ubuntu/Debian. As such, it is always installed. – Which distros are you talking about (where you say it needs to be installed on purpose)? – Robert Siemer Mar 21 at 9:16

guneysus ,Nov 13, 2012 at 10:31

From digitalpeer.com, with minor modifications.

Usage: myscript.sh -p=my_prefix -s=dirname -l=libname

#!/bin/bash
for i in "$@"
do
case $i in
    -p=*|--prefix=*)
    PREFIX="${i#*=}"

    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    ;;
    -l=*|--lib=*)
    DIR="${i#*=}"
    ;;
    --default)
    DEFAULT=YES
    ;;
    *)
            # unknown option
    ;;
esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}

To better understand ${i#*=}, search for "Substring Removal" in this guide. It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"`, which calls a needless subprocess, or `echo "$i" | sed 's/[^=]*=//'`, which calls two needless subprocesses.
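The parameter expansions at work can be tried in isolation: `#` removes the shortest matching prefix, `%%` the longest matching suffix, with no subprocess at all:

```shell
# "#*=" strips the shortest prefix up to and including the first '=';
# "%%=*" strips the longest suffix from the first '=' onward.
i="--prefix=/usr/local"
value=${i#*=}     # the option's value
flag=${i%%=*}     # the option's name
echo "$flag = $value"
```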

Tobias Kienzler ,Nov 12, 2013 at 12:48

Neat! Though this won't work for space-separated arguments à la mount -t tempfs ... . One can probably fix this via something like while [ $# -ge 1 ]; do param=$1; shift; case $param in; -p) prefix=$1; shift;; etc – Tobias Kienzler Nov 12 '13 at 12:48
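Tobias's one-liner, unrolled into runnable form (a minimal illustration with no error handling; `set --` simulates the command line):

```shell
# Simulate: myscript -p my_prefix -s dirname
set -- -p my_prefix -s dirname

while [ $# -ge 1 ]; do
    param=$1; shift
    case $param in
        -p) prefix=$1; shift ;;
        -s) searchpath=$1; shift ;;
    esac
done
echo "prefix=$prefix searchpath=$searchpath"
```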

Robert Siemer ,Mar 19, 2016 at 15:23

This can't handle -vfd style combined short options. – Robert Siemer Mar 19 '16 at 15:23

bekur ,Dec 19, 2017 at 23:27

link is broken! – bekur Dec 19 '17 at 23:27

Matt J ,Oct 10, 2008 at 17:03

getopt / getopts is a good option. Stolen from here:

The simple use of "getopt" is shown in this mini-script:

#!/bin/bash
echo "Before getopt"
for i
do
  echo $i
done
args=`getopt abc:d $*`
set -- $args
echo "After getopt"
for i
do
  echo "-->$i"
done

What we have said is that any of -a, -b, -c or -d will be allowed, but that -c is followed by an argument (the "c:" says that).

If we call this "g" and try it out:

bash-2.05a$ ./g -abc foo
Before getopt
-abc
foo
After getopt
-->-a
-->-b
-->-c
-->foo
-->--

We start with two arguments, and "getopt" breaks apart the options and puts each in its own argument. It also added "--".

Robert Siemer ,Apr 16, 2016 at 14:37

Using $* is broken usage of getopt . (It hoses arguments with spaces.) See my answer for proper usage. – Robert Siemer Apr 16 '16 at 14:37
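The breakage from unquoted $* is easy to demonstrate in isolation: it re-splits arguments on whitespace, while "$@" preserves each argument intact:

```shell
#!/bin/bash
# Count how many arguments actually arrive.
count_args() { echo $#; }

set -- "one arg" "two"
echo "unquoted \$*:   $(count_args $*) words"      # "one arg" is split apart
echo "quoted \"\$@\": $(count_args "$@") words"    # boundaries preserved
```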

SDsolar ,Aug 10, 2017 at 14:07

Why would you want to make it more complicated? – SDsolar Aug 10 '17 at 14:07

thebunnyrules ,Jun 1 at 1:57

@Matt J, the first part of the script (for i) would be able to handle arguments with spaces in them if you use "$i" instead of $i. The getopts does not seem to be able to handle arguments with spaces. What would be the advantage of using getopt over the for i loop? – thebunnyrules Jun 1 at 1:57

bronson ,Jul 15, 2015 at 23:43

At the risk of adding another example to ignore, here's my scheme.

Hope it's useful to someone.

while [ "$#" -gt 0 ]; do
  case "$1" in
    -n) name="$2"; shift 2;;
    -p) pidfile="$2"; shift 2;;
    -l) logfile="$2"; shift 2;;

    --name=*) name="${1#*=}"; shift 1;;
    --pidfile=*) pidfile="${1#*=}"; shift 1;;
    --logfile=*) logfile="${1#*=}"; shift 1;;
    --name|--pidfile|--logfile) echo "$1 requires an argument" >&2; exit 1;;

    -*) echo "unknown option: $1" >&2; exit 1;;
    *) handle_argument "$1"; shift 1;;
  esac
done

rhombidodecahedron ,Sep 11, 2015 at 8:40

What is the "handle_argument" function? – rhombidodecahedron Sep 11 '15 at 8:40

bronson ,Oct 8, 2015 at 20:41

Sorry for the delay. In my script, the handle_argument function receives all the non-option arguments. You can replace that line with whatever you'd like, maybe *) die "unrecognized argument: $1" or collect the args into a variable *) args+="$1"; shift 1;; . – bronson Oct 8 '15 at 20:41
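The "collect the args" variant bronson mentions is safer as a bash array than as string concatenation, so arguments with spaces survive. A sketch, with `set --` simulating the command line and only `-n` handled:

```shell
#!/bin/bash
args=()                               # collected non-option arguments
set -- -n myname "first arg" second   # simulated command line

while [ "$#" -gt 0 ]; do
    case "$1" in
        -n) name="$2"; shift 2;;
        -*) echo "unknown option: $1" >&2; exit 1;;
        *)  args+=("$1"); shift;;
    esac
done
echo "name=$name, ${#args[@]} positional args"
```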

Guilherme Garnier ,Apr 13 at 16:10

Amazing! I've tested a couple of answers, but this is the only one that worked for all cases, including many positional parameters (both before and after flags) – Guilherme Garnier Apr 13 at 16:10

Shane Day ,Jul 1, 2014 at 1:20

I'm about 4 years late to this question, but want to give back. I used the earlier answers as a starting point to tidy up my old ad-hoc param parsing. I then refactored out the following template code. It handles both long and short params, using = or space-separated arguments, as well as multiple short params grouped together. Finally it re-inserts any non-param arguments back into the $1, $2... variables. I hope it's useful.
#!/usr/bin/env bash

# NOTICE: Uncomment if your script depends on bashisms.
#if [ -z "$BASH_VERSION" ]; then bash "$0" "$@"; exit $?; fi

echo "Before"
for i ; do echo - $i ; done


# Code template for parsing command line parameters using only portable shell
# code, while handling both long and short params, handling '-f file' and
# '-f=file' style param data and also capturing non-parameters to be inserted
# back into the shell positional parameters.

while [ -n "$1" ]; do
        # Copy so we can modify it (can't modify $1)
        OPT="$1"
        # Detect argument termination
        if [ x"$OPT" = x"--" ]; then
                shift
                for OPT ; do
                        REMAINS="$REMAINS \"$OPT\""
                done
                break
        fi
        # Parse current opt
        while [ x"$OPT" != x"-" ] ; do
                case "$OPT" in
                        # Handle --flag=value opts like this
                        -c=* | --config=* )
                                CONFIGFILE="${OPT#*=}"
                                shift
                                ;;
                        # and --flag value opts like this
                        -c* | --config )
                                CONFIGFILE="$2"
                                shift
                                ;;
                        -f* | --force )
                                FORCE=true
                                ;;
                        -r* | --retry )
                                RETRY=true
                                ;;
                        # Anything unknown is recorded for later
                        * )
                                REMAINS="$REMAINS \"$OPT\""
                                break
                                ;;
                esac
                # Check for multiple short options
                # NOTICE: be sure to update this pattern to match valid options
                NEXTOPT="${OPT#-[cfr]}" # try removing single short opt
                if [ x"$OPT" != x"$NEXTOPT" ] ; then
                        OPT="-$NEXTOPT"  # multiple short opts, keep going
                else
                        break  # long form, exit inner loop
                fi
        done
        # Done with that param. move to next
        shift
done
# Set the non-parameters back into the positional parameters ($1 $2 ..)
eval set -- $REMAINS


echo -e "After: \n configfile='$CONFIGFILE' \n force='$FORCE' \n retry='$RETRY' \n remains='$REMAINS'"
for i ; do echo - $i ; done

Robert Siemer ,Dec 6, 2015 at 13:47

This code can't handle options with arguments like this: -c1 . And the use of = to separate short options from their arguments is unusual... – Robert Siemer Dec 6 '15 at 13:47

sfnd ,Jun 6, 2016 at 19:28

I ran into two problems with this useful chunk of code: 1) the "shift" in the case of "-c=foo" ends up eating the next parameter; and 2) 'c' should not be included in the "[cfr]" pattern for combinable short options. – sfnd Jun 6 '16 at 19:28

Inanc Gumus ,Nov 20, 2015 at 12:28

More succinct way

script.sh

#!/bin/bash

while [[ "$#" > 0 ]]; do case $1 in
  -d|--deploy) deploy="$2"; shift;;
  -u|--uglify) uglify=1;;
  *) echo "Unknown parameter passed: $1"; exit 1;;
esac; shift; done

echo "Should deploy? $deploy"
echo "Should uglify? $uglify"

Usage:

./script.sh -d dev -u

# OR:

./script.sh --deploy dev --uglify

hfossli ,Apr 7 at 20:58

This is what I am doing. Have to while [[ "$#" > 1 ]] if I want to support ending the line with a boolean flag ./script.sh --debug dev --uglify fast --verbose . Example: gist.github.com/hfossli/4368aa5a577742c3c9f9266ed214aa58hfossli Apr 7 at 20:58

hfossli ,Apr 7 at 21:09

I sent an edit request. I just tested this and it works perfectly. – hfossli Apr 7 at 21:09

hfossli ,Apr 7 at 21:10

Wow! Simple and clean! This is how I'm using this: gist.github.com/hfossli/4368aa5a577742c3c9f9266ed214aa58hfossli Apr 7 at 21:10

Ponyboy47 ,Sep 8, 2016 at 18:59

My answer is largely based on the answer by Bruno Bronosky , but I sort of mashed his two pure bash implementations into one that I use pretty frequently.
# As long as there is at least one more argument, keep looping
while [[ $# -gt 0 ]]; do
    key="$1"
    case "$key" in
        # This is a flag type option. Will catch either -f or --foo
        -f|--foo)
        FOO=1
        ;;
        # Also a flag type option. Will catch either -b or --bar
        -b|--bar)
        BAR=1
        ;;
        # This is an arg value type option. Will catch -o value or --output-file value
        -o|--output-file)
        shift # past the key and to the value
        OUTPUTFILE="$1"
        ;;
        # This is an arg=value type option. Will catch -o=value or --output-file=value
        -o=*|--output-file=*)
        # No need to shift here since the value is part of the same string
        OUTPUTFILE="${key#*=}"
        ;;
        *)
        # Do whatever you want with extra options
        echo "Unknown option '$key'"
        ;;
    esac
    # Shift after checking all the cases to get the next option
    shift
done

This allows you to have both space-separated options/values and equals-sign-defined values.

So you could run your script using:

./myscript --foo -b -o /fizz/file.txt

as well as:

./myscript -f --bar -o=/fizz/file.txt

and both should have the same end result.

PROS:

CONS:

These are the only pros/cons I can think of off the top of my head

bubla ,Jul 10, 2016 at 22:40

I have found the matter to write portable parsing in scripts so frustrating that I have written Argbash - a FOSS code generator that can generate the arguments-parsing code for your script plus it has some nice features:

https://argbash.io

RichVel ,Aug 18, 2016 at 5:34

Thanks for writing argbash, I just used it and found it works well. I mostly went for argbash because it's a code generator supporting the older bash 3.x found on OS X 10.11 El Capitan. The only downside is that the code-generator approach means quite a lot of code in your main script, compared to calling a module. – RichVel Aug 18 '16 at 5:34

bubla ,Aug 23, 2016 at 20:40

You can actually use Argbash in a way that it produces tailor-made parsing library just for you that you can have included in your script or you can have it in a separate file and just source it. I have added an example to demonstrate that and I have made it more explicit in the documentation, too. – bubla Aug 23 '16 at 20:40

RichVel ,Aug 24, 2016 at 5:47

Good to know. That example is interesting but still not really clear - maybe you can change name of the generated script to 'parse_lib.sh' or similar and show where the main script calls it (like in the wrapping script section which is more complex use case). – RichVel Aug 24 '16 at 5:47

bubla ,Dec 2, 2016 at 20:12

The issues were addressed in recent version of argbash: Documentation has been improved, a quickstart argbash-init script has been introduced and you can even use argbash online at argbash.io/generatebubla Dec 2 '16 at 20:12

Alek ,Mar 1, 2012 at 15:15

I think this one is simple enough to use:
#!/bin/bash
#

readopt='getopts $opts opt;rc=$?;[ $rc$opt == 0? ]&&exit 1;[ $rc == 0 ]||{ shift $[OPTIND-1];false; }'

opts=vfdo:

# Enumerating options
while eval $readopt
do
    echo OPT:$opt ${OPTARG+OPTARG:$OPTARG}
done

# Enumerating arguments
for arg
do
    echo ARG:$arg
done

Invocation example:

./myscript -v -do /fizz/someOtherFile -f ./foo/bar/someFile
OPT:v 
OPT:d 
OPT:o OPTARG:/fizz/someOtherFile
OPT:f 
ARG:./foo/bar/someFile

erm3nda ,May 20, 2015 at 22:50

I read all of these and this one is my preferred one. I don't like to use -a=1 as argc style. I prefer to put the main option first, -options , and later the special ones with single spacing, -o option . I'm looking for the simplest-vs-best way to read argvs. – erm3nda May 20 '15 at 22:50

erm3nda ,May 20, 2015 at 23:25

It's working really well, but if you pass an argument to an option not declared to take one, all the following options are taken as arguments. You can check this line ./myscript -v -d fail -o /fizz/someOtherFile -f ./foo/bar/someFile with your own script. The -d option is not declared as d: – erm3nda May 20 '15 at 23:25

unsynchronized ,Jun 9, 2014 at 13:46

Expanding on the excellent answer by @guneysus, here is a tweak that lets the user use whichever syntax they prefer, e.g.
command -x=myfilename.ext --another_switch

vs

command -x myfilename.ext --another_switch

That is to say the equals can be replaced with whitespace.

This "fuzzy interpretation" might not be to your liking, but if you are making scripts that are interchangeable with other utilities (as is the case with mine, which must work with ffmpeg), the flexibility is useful.

STD_IN=0

prefix=""
key=""
value=""
for keyValue in "$@"
do
  case "${prefix}${keyValue}" in
    -i=*|--input_filename=*)  key="-i";     value="${keyValue#*=}";; 
    -ss=*|--seek_from=*)      key="-ss";    value="${keyValue#*=}";;
    -t=*|--play_seconds=*)    key="-t";     value="${keyValue#*=}";;
    -|--stdin)                key="-";      value=1;;
    *)                                      value=$keyValue;;
  esac
  case $key in
    -i) MOVIE=$(resolveMovie "${value}");  prefix=""; key="";;
    -ss) SEEK_FROM="${value}";          prefix=""; key="";;
    -t)  PLAY_SECONDS="${value}";           prefix=""; key="";;
    -)   STD_IN=${value};                   prefix=""; key="";; 
    *)   prefix="${keyValue}=";;
  esac
done

vangorra ,Feb 12, 2015 at 21:50

getopts works great if #1 you have it installed and #2 you intend to run it on the same platform. OSX and Linux (for example) behave differently in this respect.

Here is a (non getopts) solution that supports equals, non-equals, and boolean flags. For example you could run your script in this way:

./script --arg1=value1 --arg2 value2 --shouldClean

# parse the arguments.
COUNTER=0
ARGS=("$@")
while [ $COUNTER -lt $# ]
do
    arg=${ARGS[$COUNTER]}
    let COUNTER=COUNTER+1
    nextArg=${ARGS[$COUNTER]}

    if [[ $skipNext -eq 1 ]]; then
        echo "Skipping"
        skipNext=0
        continue
    fi

    argKey=""
    argVal=""
    if [[ "$arg" =~ ^\- ]]; then
        # if the format is: -key=value
        if [[ "$arg" =~ \= ]]; then
            argVal=$(echo "$arg" | cut -d'=' -f2)
            argKey=$(echo "$arg" | cut -d'=' -f1)
            skipNext=0

        # if the format is: -key value
        elif [[ ! "$nextArg" =~ ^\- ]]; then
            argKey="$arg"
            argVal="$nextArg"
            skipNext=1

        # if the format is: -key (a boolean flag)
        elif [[ "$nextArg" =~ ^\- ]] || [[ -z "$nextArg" ]]; then
            argKey="$arg"
            argVal=""
            skipNext=0
        fi
    # if the format has not flag, just a value.
    else
        argKey=""
        argVal="$arg"
        skipNext=0
    fi

    case "$argKey" in 
        --source-scmurl)
            SOURCE_URL="$argVal"
        ;;
        --dest-scmurl)
            DEST_URL="$argVal"
        ;;
        --version-num)
            VERSION_NUM="$argVal"
        ;;
        -c|--clean)
            CLEAN_BEFORE_START="1"
        ;;
        -h|--help|-help|--h)
            showUsage
            exit
        ;;
    esac
done

akostadinov ,Jul 19, 2013 at 7:50

This is how I do in a function to avoid breaking getopts run at the same time somewhere higher in stack:
function waitForWeb () {
   local OPTIND=1 OPTARG OPTION
   local host=localhost port=8080 proto=http
   while getopts "h:p:r:" OPTION; do
      case "$OPTION" in
      h)
         host="$OPTARG"
         ;;
      p)
         port="$OPTARG"
         ;;
      r)
         proto="$OPTARG"
         ;;
      esac
   done
...
}

Renato Silva ,Jul 4, 2016 at 16:47

EasyOptions does not require any parsing:
## Options:
##   --verbose, -v  Verbose mode
##   --output=FILE  Output filename

source easyoptions || exit

if test -n "${verbose}"; then
    echo "output file is ${output}"
    echo "${arguments[@]}"
fi

Oleksii Chekulaiev ,Jul 1, 2016 at 20:56

I give you the function parse_params, which will parse params:
  1. Without polluting the global scope.
  2. It effortlessly returns ready-to-use variables, so that you can build further logic on them.
  3. The number of dashes before params does not matter ( --all equals -all equals all=all )

The script below is a copy-paste working demonstration. See show_use function to understand how to use parse_params .

Limitations:

  1. Does not support space delimited params ( -d 1 )
  2. Param names will lose dashes so --any-param and -anyparam are equivalent
  3. eval $(parse_params "$@") must be used inside bash function (it will not work in the global scope)

#!/bin/bash

# Universal Bash parameter parsing
# Parse equal sign separated params into named local variables
# Standalone named parameter value will equal its param name (--force creates variable $force=="force")
# Parses multi-valued named params into an array (--path=path1 --path=path2 creates ${path[*]} array)
# Parses un-named params into ${ARGV[*]} array
# Additionally puts all named params into ${ARGN[*]} array
# Additionally puts all standalone "option" params into ${ARGO[*]} array
# @author Oleksii Chekulaiev
# @version v1.3 (May-14-2018)
parse_params ()
{
    local existing_named
    local ARGV=() # un-named params
    local ARGN=() # named params
    local ARGO=() # options (--params)
    echo "local ARGV=(); local ARGN=(); local ARGO=();"
    while [[ "$1" != "" ]]; do
        # Escape asterisk to prevent bash asterisk expansion
        _escaped=${1/\*/\'\"*\"\'}
        # If equals delimited named parameter
        if [[ "$1" =~ ^..*=..* ]]; then
            # Add to named parameters array
            echo "ARGN+=('$_escaped');"
            # key is part before first =
            local _key=$(echo "$1" | cut -d = -f 1)
            # val is everything after key and = (protect from param==value error)
            local _val="${1/$_key=}"
            # remove dashes from key name
            _key=${_key//\-}
            # search for existing parameter name
            if (echo "$existing_named" | grep "\b$_key\b" >/dev/null); then
                # if name already exists then it's a multi-value named parameter
                # re-declare it as an array if needed
                if ! (declare -p _key 2> /dev/null | grep -q 'declare \-a'); then
                    echo "$_key=(\"\$$_key\");"
                fi
                # append new value
                echo "$_key+=('$_val');"
            else
                # single-value named parameter
                echo "local $_key=\"$_val\";"
                existing_named=" $_key"
            fi
        # If standalone named parameter
        elif [[ "$1" =~ ^\-. ]]; then
            # Add to options array
            echo "ARGO+=('$_escaped');"
            # remove dashes
            local _key=${1//\-}
            echo "local $_key=\"$_key\";"
        # non-named parameter
        else
            # Escape asterisk to prevent bash asterisk expansion
            _escaped=${1/\*/\'\"*\"\'}
            echo "ARGV+=('$_escaped');"
        fi
        shift
    done
}

#--------------------------- DEMO OF THE USAGE -------------------------------

show_use ()
{
    eval $(parse_params "$@")
    # --
    echo "${ARGV[0]}" # print first unnamed param
    echo "${ARGV[1]}" # print second unnamed param
    echo "${ARGN[0]}" # print first named param
    echo "${ARGO[0]}" # print first option param (--force)
    echo "$anyparam"  # print --anyparam value
    echo "$k"         # print k=5 value
    echo "${multivalue[0]}" # print first value of multi-value
    echo "${multivalue[1]}" # print second value of multi-value
    [[ "$force" == "force" ]] && echo "\$force is set so let the force be with you"
}

show_use "param 1" --anyparam="my value" param2 k=5 --force --multi-value=test1 --multi-value=test2

Oleksii Chekulaiev ,Sep 28, 2016 at 12:55

To use the demo to parse params that come into your bash script you just do show_use "$@"Oleksii Chekulaiev Sep 28 '16 at 12:55

Oleksii Chekulaiev ,Sep 28, 2016 at 12:58

Basically I found out that github.com/renatosilva/easyoptions does the same in the same way but is a bit more massive than this function. – Oleksii Chekulaiev Sep 28 '16 at 12:58

galmok ,Jun 24, 2015 at 10:54

I'd like to offer my version of option parsing, which allows for the following:
-s p1
--stage p1
-w somefolder
--workfolder somefolder
-sw p1 somefolder
-e=hello

Also allows for this (could be unwanted):

-s--workfolder p1 somefolder
-se=hello p1
-swe=hello p1 somefolder

You have to decide before use if = is to be used on an option or not. This is to keep the code clean(ish).

while [[ $# > 0 ]]
do
    key="$1"
    while [[ ${key+x} ]]
    do
        case $key in
            -s*|--stage)
                STAGE="$2"
                shift # option has parameter
                ;;
            -w*|--workfolder)
                workfolder="$2"
                shift # option has parameter
                ;;
            -e=*)
                EXAMPLE="${key#*=}"
                break # option has been fully handled
                ;;
            *)
                # unknown option
                echo Unknown option: $key #1>&2
                exit 10 # either this: my preferred way to handle unknown options
                break # or this: do this to signal the option has been handled (if exit isn't used)
                ;;
        esac
        # prepare for next option in this key, if any
        [[ "$key" = -? || "$key" == --* ]] && unset key || key="${key/#-?/-}"
    done
    shift # option(s) fully processed, proceed to next input argument
done

Luca Davanzo ,Nov 14, 2016 at 17:56

what's the meaning for "+x" on ${key+x} ? – Luca Davanzo Nov 14 '16 at 17:56

galmok ,Nov 15, 2016 at 9:10

It is a test to see if 'key' is present or not. Further down I unset key and this breaks the inner while loop. – galmok Nov 15 '16 at 9:10
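The ${key+x} idiom can be tried in isolation: it expands to "x" when the variable is set (even to the empty string) and to nothing when it is unset, making it a reliable "is this variable set?" probe:

```shell
key=""                                  # set, but empty
[ "${key+x}" = "x" ] && echo "key is set (though empty)"
unset key
[ "${key+x}" = "x" ] || echo "key is unset"
```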

Mark Fox ,Apr 27, 2015 at 2:42

Mixing positional and flag-based arguments

--param=arg (equals delimited)

Freely mixing flags between positional arguments:

./script.sh dumbo 127.0.0.1 --environment=production -q -d
./script.sh dumbo --environment=production 127.0.0.1 --quiet -d

can be accomplished with a fairly concise approach:

# process flags
pointer=1
while [[ $pointer -le $# ]]; do
   param=${!pointer}
   if [[ $param != "-"* ]]; then ((pointer++)) # not a parameter flag so advance pointer
   else
      case $param in
         # parameter-flags with arguments
         -e=*|--environment=*) environment="${param#*=}";;
                  --another=*) another="${param#*=}";;

         # binary flags
         -q|--quiet) quiet=true;;
                 -d) debug=true;;
      esac

      # splice out pointer frame from positional list
      [[ $pointer -gt 1 ]] \
         && set -- ${@:1:((pointer - 1))} ${@:((pointer + 1)):$#} \
         || set -- ${@:((pointer + 1)):$#};
   fi
done

# positional remain
node_name=$1
ip_address=$2

--param arg (space delimited)

It's usually clearer not to mix --flag=value and --flag value styles.

./script.sh dumbo 127.0.0.1 --environment production -q -d

This is a little dicey to read, but is still valid

./script.sh dumbo --environment production 127.0.0.1 --quiet -d

Source

# process flags
pointer=1
while [[ $pointer -le $# ]]; do
   if [[ ${!pointer} != "-"* ]]; then ((pointer++)) # not a parameter flag so advance pointer
   else
      param=${!pointer}
      ((pointer_plus = pointer + 1))
      slice_len=1

      case $param in
         # parameter-flags with arguments
         -e|--environment) environment=${!pointer_plus}; ((slice_len++));;
                --another) another=${!pointer_plus}; ((slice_len++));;

         # binary flags
         -q|--quiet) quiet=true;;
                 -d) debug=true;;
      esac

      # splice out pointer frame from positional list
      [[ $pointer -gt 1 ]] \
         && set -- ${@:1:((pointer - 1))} ${@:((pointer + $slice_len)):$#} \
         || set -- ${@:((pointer + $slice_len)):$#};
   fi
done

# positional remain
node_name=$1
ip_address=$2

schily ,Oct 19, 2015 at 13:59

Note that getopt(1) was a short-lived mistake from AT&T.

getopt was created in 1984 but already buried in 1986 because it was not really usable.

A proof that getopt is very outdated is that the getopt(1) man page still mentions "$*" instead of "$@" , which was added to the Bourne Shell in 1986 together with the getopts(1) shell builtin in order to deal with arguments containing spaces.

BTW: if you are interested in parsing long options in shell scripts, it may be of interest to know that the getopt(3) implementation from libc (Solaris) and ksh93 both added a uniform long option implementation that supports long options as aliases for short options. This causes ksh93 and the Bourne Shell to implement a uniform interface for long options via getopts .

An example for long options taken from the Bourne Shell man page:

getopts "f:(file)(input-file)o:(output-file)" OPTX "$@"

shows how long option aliases may be used in both Bourne Shell and ksh93.

See the man page of a recent Bourne Shell:

http://schillix.sourceforge.net/man/man1/bosh.1.html

and the man page for getopt(3) from OpenSolaris:

http://schillix.sourceforge.net/man/man3c/getopt.3c.html

and last, the getopt(1) man page to verify the outdated $*:

http://schillix.sourceforge.net/man/man1/getopt.1.html

Volodymyr M. Lisivka ,Jul 9, 2013 at 16:51

Use module "arguments" from bash-modules

Example:

#!/bin/bash
. import.sh log arguments

NAME="world"

parse_arguments "-n|--name)NAME;S" -- "$@" || {
  error "Cannot parse command line."
  exit 1
}

info "Hello, $NAME!"

Mike Q ,Jun 14, 2014 at 18:01

This might also be useful to know: you can set a default value and, if someone provides input, override the default with that value.

myscript.sh -f ./serverlist.txt or just ./myscript.sh (and it takes defaults)

    #!/bin/bash
    # --- set the value, if there is inputs, override the defaults.

    HOME_FOLDER="${HOME}/owned_id_checker"
    SERVER_FILE_LIST="${HOME_FOLDER}/server_list.txt"

    while [[ $# > 1 ]]
    do
    key="$1"
    shift

    case $key in
        -i|--inputlist)
        SERVER_FILE_LIST="$1"
        shift
        ;;
    esac
    done


    echo "SERVER LIST   = ${SERVER_FILE_LIST}"

phk ,Oct 17, 2015 at 21:17

Another solution without getopt[s], POSIX, old Unix style

Similar to the solution Bruno Bronosky posted, here is one without the use of getopt(s).

The main differentiating feature of my solution is that it allows options to be concatenated, just like tar -xzf foo.tar.gz is equal to tar -x -z -f foo.tar.gz . And just as in tar , ps etc., the leading hyphen is optional for a block of short options (but this can be changed easily). Long options are supported as well (but when a block starts with one, two leading hyphens are required).

Code with example options
#!/bin/sh

echo
echo "POSIX-compliant getopt(s)-free old-style-supporting option parser from phk@[se.unix]"
echo

print_usage() {
  echo "Usage:

  $0 {a|b|c} [ARG...]

Options:

  --aaa-0-args
  -a
    Option without arguments.

  --bbb-1-args ARG
  -b ARG
    Option with one argument.

  --ccc-2-args ARG1 ARG2
  -c ARG1 ARG2
    Option with two arguments.

" >&2
}

if [ $# -le 0 ]; then
  print_usage
  exit 1
fi

opt=
while :; do

  if [ $# -le 0 ]; then

    # no parameters remaining -> end option parsing
    break

  elif [ ! "$opt" ]; then

    # we are at the beginning of a fresh block
    # remove optional leading hyphen and strip trailing whitespaces
    opt=$(echo "$1" | sed 's/^-\?\([a-zA-Z0-9\?-]*\)/\1/')

  fi

  # get the first character -> check whether long option
  first_chr=$(echo "$opt" | awk '{print substr($1, 1, 1)}')
  [ "$first_chr" = - ] && long_option=T || long_option=F

  # note to write the options here with a leading hyphen less
  # also do not forget to end short options with a star
  case $opt in

    -)

      # end of options
      shift
      break
      ;;

    a*|-aaa-0-args)

      echo "Option AAA activated!"
      ;;

    b*|-bbb-1-args)

      if [ "$2" ]; then
        echo "Option BBB with argument '$2' activated!"
        shift
      else
        echo "BBB parameters incomplete!" >&2
        print_usage
        exit 1
      fi
      ;;

    c*|-ccc-2-args)

      if [ "$2" ] && [ "$3" ]; then
        echo "Option CCC with arguments '$2' and '$3' activated!"
        shift 2
      else
        echo "CCC parameters incomplete!" >&2
        print_usage
        exit 1
      fi
      ;;

    h*|\?*|-help)

      print_usage
      exit 0
      ;;

    *)

      if [ "$long_option" = T ]; then
        opt=$(echo "$opt" | awk '{print substr($1, 2)}')
      else
        opt=$first_chr
      fi
      printf 'Error: Unknown option: "%s"\n' "$opt" >&2
      print_usage
      exit 1
      ;;

  esac

  if [ "$long_option" = T ]; then

    # if we had a long option then we are going to get a new block next
    shift
    opt=

  else

    # if we had a short option then just move to the next character
    opt=$(echo "$opt" | awk '{print substr($1, 2)}')

    # if block is now empty then shift to the next one
    [ "$opt" ] || shift

  fi

done

echo "Doing something..."

exit 0

For the example usage please see the examples further below.

Position of options with arguments

For what it's worth, options with arguments don't need to come last (only long options do). So while in tar (at least in some implementations) the f option needs to be last because the file name follows it ( tar xzf bar.tar.gz works but tar xfz bar.tar.gz does not), this is not the case here (see the later examples).

Multiple options with arguments

As another bonus, the option arguments are consumed in the order of the options that require them. Just look at the output of my script with the command line abc X Y Z (or -abc X Y Z ):

Option AAA activated!
Option BBB with argument 'X' activated!
Option CCC with arguments 'Y' and 'Z' activated!

Long options concatenated as well

You can also have long options inside an option block, provided they occur last in the block. So the following command lines are all equivalent (including the order in which the options and their arguments are processed):

All of these lead to:

Option CCC with arguments 'Z' and 'Y' activated!
Option BBB with argument 'X' activated!
Option AAA activated!
Doing something...
Not in this solution

Optional arguments

Options with optional arguments should be possible with a bit of work, e.g. by looking ahead to see whether there is a block without a hyphen; the user would then need to put a hyphen in front of every block that follows a block with an optional argument. Maybe this is too complicated to communicate to the user, so it's better to just require a leading hyphen altogether in this case.

Things get even more complicated with multiple possible arguments. I would advise against making options try to be smart by guessing whether an argument is meant for them (e.g. an option that takes a number as an optional argument), because this might break in the future.

I personally favor additional options instead of optional arguments.

Option arguments introduced with an equal sign

Just like optional arguments, I am not a fan of this (by the way, is there a thread for discussing the pros/cons of different parameter styles?), but if you want it you could probably implement it yourself, as done at http://mywiki.wooledge.org/BashFAQ/035#Manual_loop with a --long-with-arg=?* case pattern and then stripping the equal sign. (Incidentally, that is the site that says option concatenation is possible with some effort but "left [it] as an exercise for the reader", which made me take them at their word -- but I started from scratch.)
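A minimal sketch of the equals-sign style, in the spirit of that BashFAQ manual loop (the option name --file and the demonstration arguments are hypothetical):

```shell
# hypothetical handling of --file=NAME in a manual parsing loop
set -- --file=report.txt --other   # demonstration arguments

file=
while [ $# -gt 0 ]; do
  case $1 in
    --file=?*)
      file=${1#*=}   # strip everything up to and including the '='
      ;;
    --file|--file=)
      echo "Error: --file requires a non-empty argument" >&2
      exit 1
      ;;
  esac
  shift
done
echo "file=$file"
```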

Other notes

POSIX-compliant; works even on ancient BusyBox setups I had to deal with (where e.g. cut , head and getopts were missing).

Noah ,Aug 29, 2016 at 3:44

Solution that preserves unhandled arguments. Demos Included.

Here is my solution. It is VERY flexible and unlike others, shouldn't require external packages and handles leftover arguments cleanly.

Usage is: ./myscript -flag flagvariable -otherflag flagvar2

All you have to do is edit the validflags line. It prepends a hyphen to each flag name and searches all arguments for it. The argument following a matching flag is then assigned to a variable named after the flag, e.g.

./myscript -flag flagvariable -otherflag flagvar2
echo $flag $otherflag
flagvariable flagvar2

The main code (short version, verbose with examples further down, also a version with erroring out):

#!/usr/bin/env bash
#shebang.io
validflags="rate time number"
count=1
for arg in $@
do
    match=0
    argval=$1
    for flag in $validflags
    do
        sflag="-"$flag
        if [ "$argval" == "$sflag" ]
        then
            declare $flag=$2
            match=1
        fi
    done
    if [ "$match" == "1" ]
    then
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done
#Cleanup then restore the leftovers
shift $#
set -- $leftovers

The verbose version with built in echo demos:

#!/usr/bin/env bash
#shebang.io
rate=30
time=30
number=30
echo "all args
$@"
validflags="rate time number"
count=1
for arg in $@
do
    match=0
    argval=$1
#   argval=$(echo $@ | cut -d ' ' -f$count)
    for flag in $validflags
    do
        sflag="-"$flag
        if [ "$argval" == "$sflag" ]
        then
            declare $flag=$2
            match=1
        fi
    done
    if [ "$match" == "1" ]
    then
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done

#Cleanup then restore the leftovers
echo "pre final clear args:
$@"
shift $#
echo "post final clear args:
$@"
set -- $leftovers
echo "all post set args:
$@"
echo arg1: $1 arg2: $2

echo leftovers: $leftovers
echo rate $rate time $time number $number

The final version errors out if an invalid -argument is passed:

#!/usr/bin/env bash
#shebang.io
rate=30
time=30
number=30
validflags="rate time number"
count=1
for arg in $@
do
    argval=$1
    match=0
    if [ "${argval:0:1}" == "-" ]
    then
        for flag in $validflags
        do
            sflag="-"$flag
            if [ "$argval" == "$sflag" ]
            then
                declare $flag=$2
                match=1
            fi
        done
        if [ "$match" == "0" ]
        then
            echo "Bad argument: $argval"
            exit 1
        fi
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done
#Cleanup then restore the leftovers
shift $#
set -- $leftovers
echo rate $rate time $time number $number
echo leftovers: $leftovers

Pros: What it does, it handles very well. It preserves unused arguments, which a lot of the other solutions here don't. It also allows variables to be set without being defined by hand in the script, and allows prepopulation of variables if no corresponding argument is given (see the verbose example).

Cons: It can't parse a single complex argument string, e.g. -xcvf would be processed as a single argument. You could fairly easily write additional code into mine that adds this functionality, though.
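As the answer suggests, one way to add that would be to expand combined short flags into separate arguments before the main loop runs. A hedged sketch (the demonstration arguments are invented):

```shell
# split combined short options like -xcvf into -x -c -v -f before parsing
set -- -xcvf somefile --long-opt   # demonstration arguments

expanded=()
for arg in "$@"; do
  case $arg in
    --*) expanded+=("$arg") ;;            # long option: keep as-is
    -?*) rest=${arg#-}                    # combined or single short option
         while [ -n "$rest" ]; do
           expanded+=("-${rest:0:1}")
           rest=${rest:1}
         done ;;
    *)   expanded+=("$arg") ;;            # non-option argument
  esac
done
set -- "${expanded[@]}"
echo "$@"
```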

Daniel Bigham ,Aug 8, 2016 at 12:42

The top answer to this question seemed a bit buggy when I tried it -- here's my solution, which I've found to be more robust:

boolean_arg=""
arg_with_value=""
arg_num=0

while [[ $# -gt 0 ]]
do
key="$1"
case $key in
    -b|--boolean-arg)
    boolean_arg=true
    shift
    ;;
    -a|--arg-with-value)
    arg_with_value="$2"
    shift
    shift
    ;;
    -*)
    echo "Unknown option: $1"
    exit 1
    ;;
    *)
    arg_num=$(( $arg_num + 1 ))
    case $arg_num in
        1)
        first_normal_arg="$1"
        shift
        ;;
        2)
        second_normal_arg="$1"
        shift
        ;;
        *)
        bad_args=TRUE
    esac
    ;;
esac
done

# Handy to have this here when adding arguments to
# see if they're working. Just edit the '0' to be '1'.
if [[ 0 == 1 ]]; then
    echo "first_normal_arg: $first_normal_arg"
    echo "second_normal_arg: $second_normal_arg"
    echo "boolean_arg: $boolean_arg"
    echo "arg_with_value: $arg_with_value"
    exit 0
fi

if [[ $bad_args == TRUE || $arg_num -lt 2 ]]; then
    echo "Usage: $(basename "$0") <first-normal-arg> <second-normal-arg> [--boolean-arg] [--arg-with-value VALUE]"
    exit 1
fi

phyatt ,Sep 7, 2016 at 18:25

This example shows how to use getopt , eval , a heredoc, and shift to handle short and long parameters, with and without required values that follow. The case statement is concise and easy to follow.
#!/usr/bin/env bash

# usage function
function usage()
{
   cat << HEREDOC

   Usage: $progname [--num NUM] [--time TIME_STR] [--verbose] [--dry-run]

   optional arguments:
     -h, --help           show this help message and exit
     -n, --num NUM        pass in a number
     -t, --time TIME_STR  pass in a time string
     -v, --verbose        increase the verbosity of the bash script
     --dry-run            do a dry run, don't change any files

HEREDOC
}  

# initialize variables
progname=$(basename "$0")
verbose=0
dryrun=0
num_str=
time_str=

# use getopt and store the output into $OPTS
# note the use of -o for the short options, --long for the long name options
# and a : for any option that takes a parameter
OPTS=$(getopt -o "hn:t:v" --long "help,num:,time:,verbose,dry-run" -n "$progname" -- "$@")
if [ $? != 0 ] ; then echo "Error in command line arguments." >&2 ; usage; exit 1 ; fi
eval set -- "$OPTS"

while true; do
  # uncomment the next line to see how shift is working
  # echo "\$1:\"$1\" \$2:\"$2\""
  case "$1" in
    -h | --help ) usage; exit; ;;
    -n | --num ) num_str="$2"; shift 2 ;;
    -t | --time ) time_str="$2"; shift 2 ;;
    --dry-run ) dryrun=1; shift ;;
    -v | --verbose ) verbose=$((verbose + 1)); shift ;;
    -- ) shift; break ;;
    * ) break ;;
  esac
done

if (( $verbose > 0 )); then

   # print out all the parameters we read in
   cat <<-EOM
   num=$num_str
   time=$time_str
   verbose=$verbose
   dryrun=$dryrun
EOM
fi

# The rest of your script below

The most significant lines of the script above are these:

OPTS=$(getopt -o "hn:t:v" --long "help,num:,time:,verbose,dry-run" -n "$progname" -- "$@")
if [ $? != 0 ] ; then echo "Error in command line arguments." >&2 ; exit 1 ; fi
eval set -- "$OPTS"

while true; do
  case "$1" in
    -h | --help ) usage; exit; ;;
    -n | --num ) num_str="$2"; shift 2 ;;
    -t | --time ) time_str="$2"; shift 2 ;;
    --dry-run ) dryrun=1; shift ;;
    -v | --verbose ) verbose=$((verbose + 1)); shift ;;
    -- ) shift; break ;;
    * ) break ;;
  esac
done

Short, to the point, readable, and handles just about everything (IMHO).

Hope that helps someone.

Emeric Verschuur ,Feb 20, 2017 at 21:30

I have written a bash helper for writing nice bash tools.

project home: https://gitlab.mbedsys.org/mbedsys/bashopts

example:

#!/bin/bash -ei

# load the library
. bashopts.sh

# Enable backtrace display on error
trap 'bashopts_exit_handle' ERR

# Initialize the library
bashopts_setup -n "$0" -d "This is myapp tool description displayed on help message" -s "$HOME/.config/myapprc"

# Declare the options
bashopts_declare -n first_name -l first -o f -d "First name" -t string -i -s -r
bashopts_declare -n last_name -l last -o l -d "Last name" -t string -i -s -r
bashopts_declare -n display_name -l display-name -t string -d "Display name" -e "\$first_name \$last_name"
bashopts_declare -n age -l number -d "Age" -t number
bashopts_declare -n email_list -t string -m add -l email -d "Email address"

# Parse arguments
bashopts_parse_args "$@"

# Process argument
bashopts_process_args

will give help:

NAME:
    ./example.sh - This is myapp tool description displayed on help message

USAGE:
    [options and commands] [-- [extra args]]

OPTIONS:
    -h,--help                          Display this help
    -n,--non-interactive true          Non interactive mode - [$bashopts_non_interactive] (type:boolean, default:false)
    -f,--first "John"                  First name - [$first_name] (type:string, default:"")
    -l,--last "Smith"                  Last name - [$last_name] (type:string, default:"")
    --display-name "John Smith"        Display name - [$display_name] (type:string, default:"$first_name $last_name")
    --number 0                         Age - [$age] (type:number, default:0)
    --email                            Email address - [$email_list] (type:string, default:"")

enjoy :)

Josh Wulf ,Jun 24, 2017 at 18:07

I get this on Mac OS X:

lib/bashopts.sh: line 138: declare: -A: invalid option
declare: usage: declare [-afFirtx] [-p] [name[=value] ...]
Error in lib/bashopts.sh:138. 'declare -x -A bashopts_optprop_name' exited with status 2
Call tree:
 1: lib/controller.sh:4 source(...)
Exiting with status 1

– Josh Wulf Jun 24 '17 at 18:07

Josh Wulf ,Jun 24, 2017 at 18:17

You need Bash version 4 to use this. On Mac, the default version is 3. You can use Homebrew to install Bash 4. – Josh Wulf Jun 24 '17 at 18:17

a_z ,Mar 15, 2017 at 13:24

Here is my approach - using regexp.

script:

#!/usr/bin/env sh

help_menu() {
  echo "Usage:

  ${0##*/} [-h][-l FILENAME][-d]

Options:

  -h, --help
    display this help and exit

  -l, --logfile=FILENAME
    filename

  -d, --debug
    enable debug
  "
}

parse_options() {
  case $opt in
    h|help)
      help_menu
      exit
     ;;
    l|logfile)
      logfile=${attr}
      ;;
    d|debug)
      debug=true
      ;;
    *)
      echo "Unknown option: ${opt}\nRun ${0##*/} -h for help.">&2
      exit 1
  esac
}
options=$@

until [ "$options" = "" ]; do
  if [[ $options =~ (^ *(--([a-zA-Z0-9-]+)|-([a-zA-Z0-9-]+))(( |=)(([\_\.\?\/\\a-zA-Z0-9]?[ -]?[\_\.\?a-zA-Z0-9]+)+))?(.*)|(.+)) ]]; then
    if [[ ${BASH_REMATCH[3]} ]]; then # long option: --option[=attribute] or --option [attribute]
      opt=${BASH_REMATCH[3]}
      attr=${BASH_REMATCH[7]}
      options=${BASH_REMATCH[9]}
    elif [[ ${BASH_REMATCH[4]} ]]; then # for block options -qwert[=][attribute] or single short option -a[=][attribute]
      pile=${BASH_REMATCH[4]}
      while (( ${#pile} > 1 )); do
        opt=${pile:0:1}
        attr=""
        pile=${pile/${pile:0:1}/}
        parse_options
      done
      opt=$pile
      attr=${BASH_REMATCH[7]}
      options=${BASH_REMATCH[9]}
    else # leftovers that don't match
      opt=${BASH_REMATCH[10]}
      options=""
    fi
    parse_options
  fi
done

mauron85 ,Jun 21, 2017 at 6:03

Like this one. Maybe just add -e param to echo with new line. – mauron85 Jun 21 '17 at 6:03

John ,Oct 10, 2017 at 22:49

Assume we create a shell script named test_args.sh as follows:
#!/bin/bash
until [ $# -eq 0 ]
do
  name=${1:1}; shift;
  if [[ -z "$1" || $1 == -* ]] ; then eval "export $name=true"; else eval "export $name=$1"; shift; fi  
done
echo "year=$year month=$month day=$day flag=$flag"

After we run the following command:

bash test_args.sh  -year 2017 -flag  -month 12 -day 22

The output would be:

year=2017 month=12 day=22 flag=true

Will Barnwell ,Oct 10, 2017 at 23:57

This takes the same approach as Noah's answer , but has less safety checks / safeguards. This allows us to write arbitrary arguments into the script's environment and I'm pretty sure your use of eval here may allow command injection. – Will Barnwell Oct 10 '17 at 23:57
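To illustrate the commenter's point, a somewhat safer variant of the same idea might validate the option name and assign with declare rather than eval (still loose, purely a sketch; the demonstration arguments stand in for the real "$@"):

```shell
#!/bin/bash
# sketch: same loose "-name value" parsing, but with the option name
# validated and assigned via declare rather than eval
set -- -year 2017 -flag -month 12 -day 22   # stand-in for the real "$@"

while [ $# -gt 0 ]; do
  name=${1#-}
  case $name in
    ''|*[!a-zA-Z_]*) echo "Bad option name: $1" >&2; exit 1 ;;
  esac
  if [ $# -ge 2 ] && [ "${2#-}" = "$2" ]; then
    declare "$name=$2"; shift   # a value follows the option
  else
    declare "$name=true"        # bare flag
  fi
  shift
done
echo "year=$year month=$month day=$day flag=$flag"
```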

Masadow ,Oct 6, 2015 at 8:53

Here is my improved version of Bruno Bronosky's answer, using arrays.

It lets you mix parameter positions and gives you a parameter array that preserves the order, without the options:

#!/bin/bash

echo $@

PARAMS=()
SOFT=0
SKIP=()
for i in "$@"
do
case $i in
    -n=*|--skip=*)
    SKIP+=("${i#*=}")
    ;;
    -s|--soft)
    SOFT=1
    ;;
    *)
        # unknown option
        PARAMS+=("$i")
    ;;
esac
done
echo "SKIP            = ${SKIP[@]}"
echo "SOFT            = $SOFT"
echo "Parameters:"
echo "${PARAMS[@]}"

Will output for example:

$ ./test.sh parameter -s somefile --skip=.c --skip=.obj
parameter -s somefile --skip=.c --skip=.obj
SKIP            = .c .obj
SOFT            = 1
Parameters:
parameter somefile

Jason S ,Dec 3, 2017 at 1:01

You use shift on the known arguments and not on the unknown ones so your remaining $@ will be all but the first two arguments (in the order they are passed in), which could lead to some mistakes if you try to use $@ later. You don't need the shift for the = parameters, since you're not handling spaces and you're getting the value with the substring removal #*= – Jason S Dec 3 '17 at 1:01

Masadow ,Dec 5, 2017 at 9:17

You're right, in fact, since I build a PARAMS variable, I don't need to use shift at all – Masadow Dec 5 '17 at 9:17

[Nov 01, 2017] Functions by Tom Ryder

Nov 01, 2017 | sanctum.geek.nz

A more flexible method for defining custom commands for an interactive shell (or within a script) is to use a shell function. We could declare our ll function in a Bash startup file as a function instead of an alias like so:

# Shortcut to call ls(1) with the -l flag
ll() {
    command ls -l "$@"
}

Note the use of the command builtin here to specify that the ll function should invoke the program named ls , and not any function named ls . This is particularly important when writing a function wrapper around a command, to stop an infinite loop where the function calls itself indefinitely:

# Always add -q to invocations of gdb(1)
gdb() {
    command gdb -q "$@"
}

In both examples, note also the use of the "$@" expansion, to add to the final command line any arguments given to the function. We wrap it in double quotes to stop spaces and other shell metacharacters in the arguments causing problems. This means that the ll command will work correctly if you were to pass it further options and/or one or more directories as arguments:

$ ll -a
$ ll ~/.config

Shell functions declared in this way are specified by POSIX for Bourne-style shells, so they should work in your shell of choice, including Bash, dash , Korn shell, and Zsh. They can also be used within scripts, allowing you to abstract away multiple instances of similar commands to improve the clarity of your script, in much the same way the basics of functions work in general-purpose programming languages.
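As a small illustration of that kind of abstraction within a script, repeated boilerplate can be pulled into a single function (the warn name and the messages are invented for the example):

```shell
# collect repeated error-reporting boilerplate into one function
warn() {
    printf 'warning: %s\n' "$*" >&2
}

warn "config file missing, using defaults"
warn "cache directory not writable"
```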

Functions are a good and portable way to approach adding features to your interactive shell; written carefully, they even allow you to port features you might like from other shells into your shell of choice. I'm fond of taking commands I like from Korn shell or Zsh and implementing them in Bash or POSIX shell functions, such as Zsh's vared or its two-argument cd features.

If you end up writing a lot of shell functions, you should consider putting them into separate configuration subfiles to keep your shell's primary startup file from becoming unmanageably large.

Examples from the author

You can take a look at some of the shell functions I have defined here that are useful to me in general shell usage; a lot of these amount to implementing convenience features that I wish my shell had, especially for quick directory navigation, or adding options to commands:

Variables in shell functions

You can manipulate variables within shell functions, too:

# Print the filename of a path, stripping off its leading path and
# extension
fn() {
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

This works fine, but the catch is that after the function is done, the value for name will still be defined in the shell, and will overwrite whatever was in there previously:

$ printf '%s\n' "$name"
foobar
$ fn /home/you/Task_List.doc
Task_List
$ printf '%s\n' "$name"
Task_List

This may be desirable if you actually want the function to change some aspect of your current shell session, such as managing variables or changing the working directory. If you don't want that, you will probably want to find some means of avoiding name collisions in your variables.

If your function is only for use with a shell that provides the local (Bash) or typeset (Ksh) features, you can declare the variable as local to the function to remove its global scope, to prevent this happening:

# Bash-like
fn() {
    local name
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

# Ksh-like
# Note different syntax for first line
function fn {
    typeset name
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

If you're using a shell that lacks these features, or you want to aim for POSIX compatibility, things are a little trickier, since local function variables aren't specified by the standard. One option is to use a subshell , so that the variables are only defined for the duration of the function:

# POSIX; note we're using plain parentheses rather than curly brackets, for
# a subshell
fn() (
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
)

# POSIX; alternative approach using command substitution:
fn() {
    printf '%s\n' "$(
        name=$1
        name=${name##*/}
        name=${name%.*}
        printf %s "$name"
    )"
}

This subshell method also allows you to change directory with cd within a function without changing the working directory of the user's interactive shell, or to change shell options with set or Bash options with shopt only temporarily for the purposes of the function.
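The cd point is worth a quick demonstration. Here the directory change is confined to the function's subshell, so the caller's working directory is untouched (the function name and path are throwaway examples):

```shell
# count directory entries without changing the caller's working directory
count_entries() (
    cd "$1" || exit 1   # affects only this subshell
    set -- *            # load the entries into the positional parameters
    printf '%s\n' "$#"
)

# throwaway demonstration directory with two files
rm -rf /tmp/fn_subshell_demo
mkdir -p /tmp/fn_subshell_demo
touch /tmp/fn_subshell_demo/a /tmp/fn_subshell_demo/b
count_entries /tmp/fn_subshell_demo   # prints 2
```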

Another method to deal with variables is to manipulate the positional parameters directly ( $1 , $2 ) with set , since they are local to the function call too:

# POSIX; using positional parameters
fn() {
    set -- "${1##*/}"
    set -- "${1%.*}"
    printf '%s\n' "$1"
}

These methods work well, and can sometimes even be combined, but they're awkward to write, and harder to read than the modern shell versions. If you only need your functions to work with your modern shell, I recommend just using local or typeset . The Bash Guide on Greg's Wiki has a very thorough breakdown of functions in Bash, if you want to read about this and other aspects of functions in more detail.

Keeping functions for later

As you get comfortable with defining and using functions during an interactive session, you might define them in ad-hoc ways on the command line for calling in a loop or some other similar circumstance, just to solve a task in that moment.

As an example, I recently made an ad-hoc function called monit to run a set of commands for its hostname argument that together established different types of monitoring system checks, using an existing script called nmfs :

$ monit() { nmfs "$1" Ping Y ; nmfs "$1" HTTP Y ; nmfs "$1" SNMP Y ; }
$ for host in webhost{1..10} ; do
> monit "$host"
> done

After that task was done, I realized I was likely to use the monit command interactively again, so I decided to keep it. Shell functions only last as long as the current shell, so if you want to make them permanent, you need to store their definitions somewhere in your startup files. If you're using Bash, and you're content to just add things to the end of your ~/.bashrc file, you could just do something like this:

$ declare -f monit >> ~/.bashrc

That would append the existing definition of monit in parseable form to your ~/.bashrc file, and the monit function would then be loaded and available to you for future interactive sessions. Later on, I ended up converting monit into a shell script, as its use wasn't limited to just an interactive shell.

If you want a more robust approach to keeping functions like this for Bash permanently, I wrote a tool called Bashkeep , which allows you to quickly store functions and variables defined in your current shell into separate and appropriately-named files, including viewing and managing the list of names conveniently:

$ keep monit
$ keep
monit
$ ls ~/.bashkeep.d
monit.bash
$ keep -d monit

[Oct 31, 2017] Testing exit values in Bash by Tom Ryder

Oct 28, 2013 | sanctum.geek.nz

In Bash scripting (and shell scripting in general), we often want to check the exit value of a command to decide an action to take after it completes, likely for the purpose of error handling. For example, to determine whether a particular regular expression regex was present somewhere in a file named options , we might apply grep(1) with its POSIX -q option to suppress output and just use the exit value:

grep -q regex options

An approach sometimes taken is then to test the exit value with the $? parameter, using if to check if it's non-zero, which is not very elegant and a bit hard to read:

# Bad practice
grep -q regex options
if (($? > 0)); then
    printf '%s\n' 'myscript: Pattern not found!' >&2
    exit 1
fi

Because the if construct by design tests the exit value of commands , it's better to test the command directly , making the expansion of $? unnecessary:

# Better
if grep -q regex options; then
    # Do nothing
    :
else
    printf '%s\n' 'myscript: Pattern not found!' >&2
    exit 1
fi

We can also precede the command to be tested with ! to negate the test, which saves us from having to use else at all:

# Best
if ! grep -q regex options; then
    printf '%s\n' 'myscript: Pattern not found!' >&2
    exit 1
fi

An alternative syntax is to use && and || to perform if and else tests with grouped commands between braces, but these tend to be harder to read:

# Alternative
grep -q regex options || {
    printf '%s\n' 'myscript: Pattern not found!' >&2
    exit 1
}

With this syntax, the two commands in the block are only executed if the grep(1) call exits with a non-zero status. We can apply && instead to execute commands if it does exit with zero.
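Continuing the grep(1) example, the && form looks like this (the sample options file is created inline so the snippet is self-contained):

```shell
# create a sample file so the running example is self-contained
printf '%s\n' 'a line containing regex somewhere' > options

# the grouped block runs only if grep(1) exits with zero status
grep -q regex options && {
    printf '%s\n' 'myscript: Pattern found!'
}
```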

That syntax can be convenient for quickly short-circuiting failures in scripts, for example due to nonexistent commands, particularly if the command being tested already outputs its own error message. This therefore cuts the script off if the given command fails, likely due to ffmpeg(1) being unavailable on the system:

hash ffmpeg || exit 1

Note that the braces for a grouped command are not needed here, as there's only one command to be run in case of failure, the exit call.

Calls to cd are another good use case here, as running a script in the wrong directory if a call to cd fails could have really nasty effects:

cd wherever || exit 1

In general, you'll probably only want to test $? when you have specific non-zero error conditions to catch. For example, if we were using the --max-delete option for rsync(1) , we could check a call's return value to see whether rsync(1) hit the threshold for deleted file count and write a message to a logfile appropriately:

rsync --archive --delete --max-delete=5 source destination
if (($? == 25)); then
    printf '%s\n' 'Deletion limit was reached' >"$logfile"
fi

It may be tempting to use the errexit feature in the hopes of stopping a script as soon as it encounters any error, but there are some problems with its usage that make it a bit error-prone. It's generally more straightforward to simply write your own error handling using the methods above.

For a really thorough breakdown of dealing with conditionals in Bash, take a look at the relevant chapter of the Bash Guide .

[Jul 25, 2017] Handling positional parameters

Notable quotes:
"... under construction ..."
"... under construction ..."
Jul 25, 2017 | wiki.bash-hackers.org

Intro

The day will come when you want to give arguments to your scripts. These arguments are known as positional parameters . Some relevant special parameters are described below:

Parameter(s)     Description
$0               the first positional parameter, equivalent to argv[0] in C, see the first argument
$FUNCNAME        the function name ( attention : inside a function, $0 is still the $0 of the shell, not the function name)
$1 ... $9        the argument list elements from 1 to 9
${10} ... ${N}   the argument list elements beyond 9 (note the parameter expansion syntax!)
$*               all positional parameters except $0 , see mass usage
$@               all positional parameters except $0 , see mass usage
$#               the number of arguments, not counting $0

These positional parameters reflect exactly what was given to the script when it was called.

Option-switch parsing (e.g. -h for displaying help) is not performed at this point.

See also the dictionary entry for "parameter" .

The first argument

The very first argument you can access is referenced as $0 . It is usually set to the script's name exactly as called, and it's set on shell initialization:

Testscript - it just echos $0 :


#!/bin/bash

echo "$0"

You see, $0 is always set to the name the script is called with ( > is the prompt ):

> ./testscript 

./testscript


> /usr/bin/testscript

/usr/bin/testscript

However, this isn't true for login shells:


> echo "$0"

-bash

In other terms, $0 is not a positional parameter, it's a special parameter independent from the positional parameter list. It can be set to anything. In the ideal case it's the pathname of the script, but since this gets set on invocation, the invoking program can easily influence it (the login program does that for login shells, by prefixing a dash, for example).

Inside a function, $0 still behaves as described above. To get the function name, use $FUNCNAME .

Shifting

The builtin command shift is used to change the positional parameter values:

The command can take a number as an argument: the number of positions to shift, e.g. shift 4 shifts $5 to $1 .

Using them

Enough theory; you want to access your script arguments. Well, here we go.

One by one

One way is to access specific parameters:


#!/bin/bash

echo "Total number of arguments: $#"

echo "Argument 1: $1"

echo "Argument 2: $2"

echo "Argument 3: $3"

echo "Argument 4: $4"

echo "Argument 5: $5"

While useful in some situations, this way lacks flexibility. The maximum number of arguments is a fixed value, which is a bad idea if you write a script that takes many filenames as arguments.

⇒ forget that one

Loops

There are several ways to loop through the positional parameters.


You can code a C-style for-loop using $# as the end value. On every iteration, the shift -command is used to shift the argument list:


numargs=$#

for ((i=1 ; i <= numargs ; i++))

do

    echo "$1"

    shift

done

Not very stylish, but usable. The numargs variable is used to store the initial value of $# because the shift command will change it as the script runs.


Another way to iterate one argument at a time is the for loop without a given wordlist. The loop uses the positional parameters as a wordlist:


for arg

do

    echo "$arg"

done

Advantage: The positional parameters will be preserved

The next method is similar to the first example (the for loop), but it doesn't test for reaching $# . It shifts and checks if $1 still expands to something, using the test command :


while [ "$1" ]

do

    echo "$1"

    shift

done

Looks nice, but has the disadvantage of stopping when $1 is empty (null-string). Let's modify it to run as long as $1 is defined (but may be null), using parameter expansion for an alternate value :


while [ "${1+defined}" ]; do

  echo "$1"

  shift

done

Getopts

There is a small tutorial dedicated to ''getopts'' ( under construction ).

Mass usage

All positional parameters

Sometimes it's necessary to just "relay" or "pass" given arguments to another program. It's very inefficient to do that in one of these loops, as you will most likely destroy the integrity of the arguments (spaces!).

The shell developers created $* and $@ for this purpose.

As overview:

Syntax   Effective result
$*       $1 $2 $3 ... ${N}
$@       $1 $2 $3 ... ${N}
"$*"     "$1c$2c$3c...c${N}"
"$@"     "$1" "$2" "$3" ... "${N}"

Without being quoted (double quotes), both have the same effect: All positional parameters from $1 to the last one used are expanded without any special handling.

When the $* special parameter is double quoted, it expands to the equivalent of "$1c$2c$3c$4c...c$N" , where 'c' is the first character of IFS .

But when the $@ special parameter is used inside double quotes, it expands to the equivalent of:

"$1" "$2" "$3" "$4" .. "$N"

which reflects all positional parameters as they were set initially and passed to the script or function. If you want to re-use your positional parameters to call another program (for example in a wrapper-script), then this is the choice for you, use double quoted "$@" .
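The difference between the two quoted forms is easy to demonstrate; here IFS is set to a comma so the join character used by "$*" becomes visible:

```shell
set -- "one arg" two three

IFS=,
printf '[%s]\n' "$*"   # one word:    [one arg,two,three]
printf '[%s]\n' "$@"   # three words, one per line: [one arg] [two] [three]
unset IFS
```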

Well, let's just say: you almost always want a quoted "$@" !

Range of positional parameters

Another way to mass-expand the positional parameters is similar to what is possible for a range of characters using substring expansion on normal parameters and the mass expansion range of arrays .

${@:START:COUNT}

${*:START:COUNT}

"${@:START:COUNT}"

"${*:START:COUNT}"

The rules for using @ or * and quoting are the same as above. This will expand COUNT number of positional parameters beginning at START . COUNT can be omitted ( ${@:START} ), in which case, all positional parameters beginning at START are expanded.

If START is negative, the positional parameters are numbered in reverse starting with the last one.

COUNT may not be negative, i.e. the element count may not be decremented.

Example: START at the last positional parameter:


echo "${@: -1}"
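A quick demonstration of the START and COUNT behaviour (demonstration parameters set inline):

```shell
set -- alpha beta gamma delta

printf '%s\n' "${@:2:2}"   # expands to $2 and $3: beta, then gamma
printf '%s\n' "${@: -1}"   # the last positional parameter: delta
```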

Attention : As of Bash 4, a START of 0 includes the special parameter $0 , i.e. the shell name or whatever $0 is set to, when the positional parameters are in use. A START of 1 begins at $1 . In Bash 3 and older, both 0 and 1 began at $1 .

Setting positional parameters

Setting positional parameters with command line arguments is not the only way to set them. The builtin command set may be used to "artificially" change the positional parameters from inside the script or function:


set "This is" my new "set of" positional parameters



# RESULTS IN

# $1: This is

# $2: my

# $3: new

# $4: set of

# $5: positional

# $6: parameters

It's wise to signal "end of options" when setting positional parameters this way. If not, the dashes might be interpreted as an option switch by set itself:


# both ways work, but behave differently. See the article about the set command!

set -- ...

set - ...

Alternatively, this will also preserve any verbose (-v) or tracing (-x) flags, which may otherwise be reset by set :


set -$- ...

Production examples

Using a while loop

To make your program accept options using standard command syntax:

COMMAND [options] <params> # Like 'cat -A file.txt'

See the simple option parsing code below. It's not that flexible. It doesn't auto-interpret combined options (-fu USER), but it works and is a good rudimentary way to parse your arguments.


#!/bin/sh
# Keeping options in alphabetical order makes it easy to add more.

while :
do
    case "$1" in
      -f | --file)
          file="$2"   # You may want to check validity of $2
          shift 2
          ;;
      -h | --help)
          display_help  # Call your function
          # no shifting needed here, we're done.
          exit 0
          ;;
      -u | --user)
          username="$2" # You may want to check validity of $2
          shift 2
          ;;
      -v | --verbose)
          #  It's better to assign a string than a number like "verbose=1",
          #  because if you're debugging the script with "bash -x", code like:
          #
          #    if [ "$verbose" ] ...
          #
          #  will show up as:
          #
          #    if [ "verbose" ] ...
          #
          #  instead of the cryptic:
          #
          #    if [ "1" ] ...
          #
          verbose="verbose"
          shift
          ;;
      --) # End of all options
          shift
          break
          ;;
      -*)
          echo "Error: Unknown option: $1" >&2
          exit 1
          ;;
      *)  # No more options
          break
          ;;
    esac
done

# End of file
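The loop above can be condensed into a function for quick experiments (parse_args and the variable names here are illustrative, not part of the original script):

```shell
#!/bin/sh
# Condensed sketch of the option loop above, wrapped in a function.
parse_args() {
    file= username= verbose= rest=
    while :
    do
        case "$1" in
          -f | --file)    file="$2"; shift 2 ;;
          -u | --user)    username="$2"; shift 2 ;;
          -v | --verbose) verbose="verbose"; shift ;;
          --)             shift; break ;;        # end of all options
          -*)             echo "Error: Unknown option: $1" >&2; return 1 ;;
          *)              break ;;               # no more options
        esac
    done
    rest="$*"    # whatever is left after the options
}

parse_args -f config.txt --user alice -v -- leftover args
echo "file=$file user=$username verbose=$verbose rest=$rest"
```

Running it prints file=config.txt user=alice verbose=verbose rest=leftover args.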

Filter unwanted options with a wrapper script

This simple wrapper filters unwanted options (here: -a and --all for ls) out of the command line. It reads the positional parameters and builds a filtered array from them, then calls ls with the new option set. It also respects -- as "end of options" for ls and doesn't change anything after it:


#!/bin/bash

# simple ls(1) wrapper that doesn't allow the -a option

options=()  # the buffer array for the parameters
eoo=0       # end of options reached

while [[ $1 ]]
do
    if ! ((eoo)); then
        case "$1" in
          -a)
              shift
              ;;
          --all)
              shift
              ;;
          -[^-]*a*|-a?*)
              options+=("${1//a}")
              shift
              ;;
          --)
              eoo=1
              options+=("$1")
              shift
              ;;
          *)
              options+=("$1")
              shift
              ;;
        esac
    else
        options+=("$1")
        # Another (worse) way of doing the same thing:
        # options=("${options[@]}" "$1")
        shift
    fi
done

/bin/ls "${options[@]}"

Using getopts

There is a small tutorial dedicated to getopts (under construction).
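Until that tutorial is ready, here is a minimal sketch of the builtin (the option letters -f/-v and all variable names are invented for illustration; getopts handles clustered short options like -vf FILE by itself, but knows nothing about GNU-style long options):

```shell
#!/bin/bash
file= verbose=

set -- -v -f notes.txt leftover   # simulate command line arguments

while getopts ":f:v" opt; do
    case "$opt" in
        f)  file="$OPTARG" ;;
        v)  verbose="verbose" ;;
        \?) echo "Unknown option: -$OPTARG" >&2; exit 1 ;;
        :)  echo "Option -$OPTARG requires an argument" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))   # drop the parsed options; "$@" now holds the operands

echo "file=$file verbose=$verbose operands=$*"
```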

Discussion

2010/04/14 14:20
The shell developers invented $* and $@ for this purpose. When double-quoted, "$*" expands all positional parameters from $1 to the last one used into a single word, separated by the first character of IFS (represented by "c" here, but usually a space):
$1c$2c$3c$4c...$N

Without double quotes, $* and $@ expand the positional parameters separated by plain spaces, not by IFS:


#!/bin/bash

export IFS='-'

echo -e $*
echo -e $@

$ ./test "This is" 2 3
This is 2 3
This is 2 3

2011/02/18 16:11

#!/bin/bash
OLDIFS="$IFS"
IFS='-'   # export IFS='-'
# echo -e $*
# echo -e $@
# should be:
echo -e "$*"
echo -e "$@"
IFS="$OLDIFS"

2011/02/18 16:14
# should be: echo -e "$*"

2012/04/20 10:32 Here's yet another non-getopts way.

http://bsdpants.blogspot.de/2007/02/option-ize-your-shell-scripts.html

2012/07/16 14:48 Hi there!

What if I use "$@" in subsequent function calls, but arguments are strings?

I mean, having:


#!/bin/bash

echo "$@"

echo n: $#

If you use it


mypc$ script arg1 arg2 "asd asd" arg4

arg1 arg2 asd asd arg4

n: 4

But having


#!/bin/bash

myfunc()

{

  echo "$@"

  echo n: $#

}

echo "$@"

echo n: $#

myfunc "$@"

you get:


mypc$ myscript arg1 arg2 "asd asd" arg4

arg1 arg2 asd asd arg4

n: 4

arg1 arg2 asd asd arg4

n: 5

As you can see, there seems to be no way to let the function know that a parameter is a single string and not a space-separated list of arguments.

Any idea how to solve it? I've tried calling functions and doing the expansion in almost every way, with no results.

2012/08/12 09:11 I don't know why it fails for you. It should work if you use "$@", of course.

Here is an example run of your second script:


$ ./args1 a b c "d e" f

a b c d e f

n: 5

a b c d e f

n: 5

[Feb 1, 2007] Functions and aliases in bash

# Append to .bashrc or call it from there.
# Save some typing at the command line :)

# longlist a directory, by page
# lo [directoryname]
lo () {
      if [ -d "$1" ] ; then
         ls -al "$1" | less
      else
         ls -al "$(pwd)" | less
      fi
}
# Same as above but recursive
lro () {
      if [ -d "$1" ] ; then
         ls -alR "$1" | less
      else
         ls -alR "$(pwd)" | less
      fi
}
export -f lo lro

[Apr 1, 2006] Submitted Article/ Converting a ksh Function to a ksh Script by William R. Seppeler

BigAdmin

Here is a simple way to create a script that will behave both as an executable script and as a ksh function. Being an executable script means the script can be run from any shell. Being a ksh function means the script can be optimized to run faster if launched from a ksh shell. This is an attempt to get the best of both worlds.

Procedure

Start by writing a ksh function. A ksh function is just like a ksh script except the script code is enclosed within a function name { script } construct.

Take the following example:

# Example script

function fun {
  print "pid=$$ cmd=$0 args=$*" opts="$-"
}

Save the text in a file. You'll notice nothing happens if you try to execute the code as a script:

ksh ./example

In order to use a function, the file must first be sourced. Sourcing the file will create the function definition in the current shell. After the function has been sourced, it can then be executed when you call it by name:

. ./example
fun

To make the function execute as a script, the function must be called within the file. Add the final line, the call to fun, to the example:

# Example script

function fun {
  print "pid=$$ cmd=$0 args=$*" opts="$-"
}

fun $*

Now you have a file that executes like a ksh script and sources like a ksh function. One caveat is that the file now executes while it is being sourced.

There are advantages and disadvantages to how the code is executed. If the file was executed as a script, the system spawns a child ksh process, loads the function definition, and then executes the function. If the file was sourced, no child process is created, the function definition is loaded into the current shell process, and the function is then executed.

Sourcing the file will make it run faster because no extra processes are created; however, loading a function occupies environment memory space. Functions can also manipulate environment variables, whereas a script only gets a copy to work with. In programming terms, a function can use call-by-reference parameters via shell variables; a shell script is always call-by-value via arguments.

Advanced Information

When working with functions, it's advantageous to use ksh autoloading. Autoloading eliminates the need to source a file before executing the function. This is accomplished by saving the file with the same name as the function. In the above example, save the example as the file name "fun". Then set the FPATH environment variable to the directory where the file fun is. Now, all that needs to be done is type "fun" on the command line to execute the function.

Notice the double output the first time fun is called. This is because the first time the function is called, the file must be sourced, and in sourcing the file, the function gets called. What we need is to call the function only when the file is executed as a script, but skip calling it when the file is sourced. To accomplish this, notice the output of the script when executing it as opposed to sourcing it. When the file is sourced, $0 is always -ksh. Also, note the difference in opts when the script is sourced. Test $0 to determine whether the function should be called. Also, make the file a self-executing script; after all, no one likes having to type "ksh" before running every ksh script.

[[ "${0##*/}" == "fun" ]] &&  fun $*

Now the file is a self-executing script as well as a self-sourcing function (when used with ksh autoloading). What becomes more interesting is that since the file can be an autoload function as well as a stand-alone script, it could be placed in a single directory and have both PATH and FPATH point to it.

# ${HOME}/.profile

FPATH=${HOME}/bin
PATH=${FPATH}:${PATH}

In this setup, fun will always be called as a function unless it's explicitly called as ${HOME}/bin/fun.

Considerations

Even though the file can be executed as a function or a script, there are minor differences in behavior between the two. When the file is sourced as a function, all local environment variables will be visible to the script. If the file is executed as a script, only exported environment variables will be visible. Also, when sourced, a function can modify all environment variables. When the file is executed, all visible environment variables are only copies. We may want to make special allowances depending on how the file is called. Take the following example.

#!/bin/ksh

# Add arg2 to the contents of arg1

function addTo {
  eval $1=$(($1 + $2))
}

if [[ "${0##*/}" == "addTo" ]]; then
  addTo $*
  eval print \$$1
fi

The script is called by naming an environment variable and a quantity to add to that variable. When sourced, the script will directly modify the environment variable with the new value. However, when executed as a script, the environment variable cannot be modified, so the result must be output instead. Here is a sample run of both situations.

# called as a function
var=5
addTo var 3
print $var

# called as a script
var=5
export var
var=$(./addTo var 3)
print $var

Note the extra steps needed when executing this example as a script. The var must be exported prior to running the script or else it won't be visible. Also, because a script can't manipulate the current environment, you must capture the new result.

Extra function-ality

It's possible to package several functions into a single file. This is nice for distribution as you only need to maintain a single file. In order to maintain autoloading functionality, all that needs to be done is create a link for each function named in the file.

#!/bin/ksh

function addTo {
  eval $1=$(($1 + $2))
}

function multiplyBy {
  eval $1=$(($1 * $2))
}

if [[ "${0##*/}" == "addTo" ]] \
|| [[ "${0##*/}" == "multiplyBy" ]]; then
  ${0##*/} $*
  eval print \$$1
fi

if [[ ! -f "${0%/*}/addTo" ]] \
|| [[ ! -f "${0%/*}/multiplyBy" ]]; then
  ln "${0}" "${0%/*}/addTo"
  ln "${0}" "${0%/*}/multiplyBy"
  chmod u+rx "${0}"
fi

Notice the extra code at the bottom. This text could be saved in a file named myDist. The first time the file is sourced or executed, the appropriate links and file permissions will be put in place, thus creating a single distribution for multiple functions. Couple that with making the file a script executable and you end up with a single distribution of multiple scripts. It's like a shar file, but nothing actually gets unpacked.

The only downside to this distribution tactic is that BigAdmin will only credit you for each file submission, not based on the actual number of executable programs...

Time to Run

Try some of the sample code in this document. Get comfortable with the usage of each snippet to understand the differences and limitations. In general, it's safest to always distribute a script, but it's nice to have a function when speed is a consideration. Do some timing tests.

export var=8
time ./addTo var 5
time addTo var 5

If this code were part of an inner-loop calculation of a larger script, that speed difference could be significant.

This document aims to provide the best of both worlds. You can have a script and retain function speed for when it's needed. I hope you have enjoyed this document and its content. Thanks to Sun and BigAdmin for the hosting and support to make contributions like this possible.

Recommended Links


Sites

Shell Programming Functions Using Functions InformIT

IBM Knowledge Center - Korn shell functions

v10, i03 Creating Global Functions with the Korn Shell

Linux tip Bash parameters and parameter expansions

Marco's Bash Functions Library - Summary [Gna!]

This package is an attempt to make GNU bash a viable solution for medium-sized scripts. A problem with bash is that it doesn't provide encapsulation of any sort, besides the ability to define functions. This problem is partly solved by writing subscripts and invoking them from the main script, but this is not always the best solution.

A set of modules implementing common operations and a script template are provided by this package and the author has used them with success in implementing non-small scripts.

The philosophy of MBFL is to do as much of the work as possible without external commands. For example, string manipulation is done using the special variable substitution provided by bash, with no use of utilities like sed, grep and ed.

The library is best used if your script is developed on the template provided in the package (examples/template.sh). This is because MBFL makes some choices that reduce the application-dependent part of the script to the smallest dimension; if you follow another schema, MBFL modules may be inadequate. This is especially true for the options parsing module.

The best way to use the library is to include the library file libmbfl.sh at runtime; this is possible by installing MBFL on the system and using this code in the scripts:

mbfl_INTERACTIVE='no'
source "${MBFL_LIBRARY:=`mbfl-config`}"

after the service variables have been declared (Service Variables for details). This code will read the full pathname of the library from the environment variable MBFL_LIBRARY; if this variable is not set: the script mbfl-config is invoked with no arguments to acquire the pathname of the library. mbfl-config is installed in the bin directory with the library.

Another solution is to include the library directly in the script; this is easy if we preprocess our scripts with GNU m4:

m4_changequote([[, ]])
m4_include(libmbfl.sh)

is all we need to do. We can preprocess the script with:

$ m4 --prefix-builtins --include=/path/to/library \
         script.sh.m4 >script.sh

easy to do in a Makefile; we can take MBFL's own Makefile as an example of this method.

It is also interesting to process the script with the following rule:

M4      = ...
M4FLAGS = --prefix-builtins --include=/path/to/library

%.sh: %.sh.m4
        $(M4) $(M4FLAGS) $(<) | \
        grep --invert-match -e '^#' -e '^$$' | \
        sed -e "s/^ \\+//" >$(@)

this will remove all the comments and blank lines, decreasing the size of the script significantly if one makes use of verbose comments; note that this will also wipe out the #!/bin/bash first line.

Usually we want the script to begin with #!/bin/bash followed by a comment describing the license terms.

Bash by example, Part 3

Encoding and decoding strings

The purpose of this module is to let an external process invoke a bash script with "damn" command line arguments: strings including blanks or strange characters that may trigger quoting rules.

This problem can arise when using scripting languages with some sort of eval command.

The solution is to encode the argument strings in hexadecimal or octal format, so that all the problematic characters are converted to "good" ones. Then the bash script can convert them back.

mbfl_decode_hex string Function
Decodes a hex string and outputs it on stdout.
mbfl_decode_oct string Function
Decodes an octal string and outputs it on stdout.

Example:

mbfl_decode_hex 414243
-> ABC
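The same round trip can be sketched in plain bash (encode_hex and decode_hex are hypothetical helpers written for illustration, not MBFL's actual implementation):

```shell
#!/bin/bash
# Hex-encode a string, one byte per two hex digits.
encode_hex() {
    local s=$1 out= i
    for ((i = 0; i < ${#s}; i++)); do
        printf -v out '%s%02X' "$out" "'${s:i:1}"   # printf's 'c form yields the character code
    done
    printf '%s\n' "$out"
}

# Decode a hex string back into the original characters.
decode_hex() {
    local s=$1 out= byte i
    for ((i = 0; i < ${#s}; i += 2)); do
        printf -v byte "\\x${s:i:2}"
        out+=$byte
    done
    printf '%s\n' "$out"
}

decode_hex 414243   # -> ABC
encode_hex 'A B'    # -> 412042
```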

Manipulating files and pathnames


Node:File Names, Next:, Up:File

File names


Node:File Name Parts, Next:, Up:File Names

Splitting a file name into its components

mbfl_file_extension pathname Function
Extracts the extension from a file name. Searches the last dot (.) character in the argument string and echoes to stdout the range of characters from the dot to the end, not including the dot. If a slash (/) character is found first, echoes to stdout the empty string.

mbfl_file_dirname pathname Function
Extracts the directory part from a fully qualified file name. Searches the last slash character in the input string and echoes to stdout the range of characters from the first to the slash, not including the slash.

If no slash is found: echoes a single dot (the current directory).

If the input string begins with / or // with no slash characters after the first ones, the string echoed to stdout is a single slash.

mbfl_file_rootname pathname Function
Extracts the root portion of a file name. Searches the last dot character in the argument string and echoes to stdout the range of characters from the beginning to the dot, not including the dot.

If a slash character is found first, or no dot is found, or the dot is the first character, echoes to stdout the empty string.

mbfl_file_tail pathname Function
Extracts the file portion from a fully qualified file name. Searches for the last slash character in the input string and echoes to stdout the range of characters from the slash to the end, not including the slash. If no slash is found: echoes the whole string.

mbfl_file_split pathname Function
Separates a file name into its components. One or more contiguous occurrences of the slash character are used as separator. The components are stored in an array named SPLITPATH, that may be declared local in the scope of the caller; the base index is zero. The number of elements in the array is stored in a variable named SPLITCOUNT. Returns true.
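On the happy path, plain bash parameter expansion performs similar splits (these one-liners do not reproduce MBFL's edge-case handling, e.g. for names without slashes or dots in directory components):

```shell
#!/bin/bash
p=/usr/local/archive.tar.gz

echo "${p##*/}"   # tail:      archive.tar.gz
echo "${p%/*}"    # dirname:   /usr/local
echo "${p##*.}"   # extension: gz
echo "${p%.*}"    # rootname:  /usr/local/archive.tar
```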


Node:File Name Path, Next:, Previous:File Name Parts, Up:File Names

Handling relative pathnames

mbfl_file_normalise pathname ?prefix? Function
Normalises a file name: removes all occurrences of the . and .. components.

If pathname is relative (according to mbfl_file_is_absolute) and prefix is not present or it is the empty string: the current process working directory is prepended to pathname.

If prefix is present and non empty, and pathname is relative (according to mbfl_file_is_absolute): prefix is prepended to pathname and normalised, too.

Echoes to stdout the normalised file name; returns true.

mbfl_file_is_absolute pathname Function
Returns true if the first character in pathname is a slash (/); else returns false.
mbfl_file_is_absolute_dirname pathname Function
Returns true if pathname is a directory according to mbfl_file_is_directory and an absolute pathname according to mbfl_file_is_absolute.
mbfl_file_is_absolute_filename pathname Function
Returns true if pathname is a file according to mbfl_file_is_file and an absolute pathname according to mbfl_file_is_absolute.



Node:File Name System, Previous:File Name Path, Up:File Names

Finding pathnames on the system

mbfl_file_find_tmpdir ?PATHNAME? Function
Finds a value for a temporary directory. If PATHNAME is not null, is a directory and is writable, it is accepted; else the value /tmp/$USER, where USER is the environment variable, is tried; finally, the value /tmp is tried. When a value is accepted, it's echoed to stdout. Returns true if a value is found, false otherwise.
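The described fallback chain can be sketched as follows (find_tmpdir is a hypothetical stand-in, not MBFL's actual code):

```shell
#!/bin/bash
# Try the given pathname, then /tmp/$USER, then /tmp; print the first
# candidate that is a writable directory.
find_tmpdir() {
    local candidate
    for candidate in "$1" "/tmp/$USER" /tmp; do
        if [ -n "$candidate" ] && [ -d "$candidate" ] && [ -w "$candidate" ]; then
            printf '%s\n' "$candidate"
            return 0
        fi
    done
    return 1
}

find_tmpdir "$TMPDIR"   # falls back to /tmp/$USER or /tmp if TMPDIR is unusable
```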


Node:File Commands, Next:, Previous:File Names, Up:File

File Commands


Node:File Commands Listing, Next:, Up:File Commands

Retrieving information

mbfl_file_enable_listing Function
Declares to the program module the commands required to retrieve information about files and directories (Program Declaring). The programs are: ls.

mbfl_file_get_owner pathname Function
Prints the owner of the file.

mbfl_file_get_group pathname Function
Prints the group of the file.

mbfl_file_get_size pathname Function
Prints the size of the file.

mbfl_file_normalise_link pathname Function
Makes use of the readlink command to normalise the pathname of a symbolic link. Echoes to stdout the normalised pathname.

The command line of readlink is:

readlink -fn $pathname


Node:File Commands Mkdir, Next:, Previous:File Commands Listing, Up:File Commands

Creating directories

mbfl_file_enable_make_directory Function
Declares to the program module the commands required to create directories (Program Declaring). The programs are: mkdir.

mbfl_file_make_directory pathname ?permissions? Function
Creates a directory named pathname; all the nonexistent parent directories are created, too. If permissions is present: it is the specification of directory permissions in octal mode.


Node:File Commands Copy, Next:, Previous:File Commands Mkdir, Up:File Commands

Copying files

mbfl_file_enable_copy Function
Declares to the program module the commands required to copy files and directories (Program Declaring). The programs are: cp.

mbfl_file_copy source target ?...? Function
Copies the source, a file, to target, a file pathname. Additional arguments are handed to the command unchanged.

If source does not exist, or if it is not a file, an error is generated and the return value is 1. No test is done upon target.

mbfl_file_copy_recursively source target ?...? Function
Copies the source, a directory, to target, a directory pathname. Additional arguments are handed to the command unchanged. This function is like mbfl_file_copy, but it adds --recursive to the command line of cp.

If source does not exist, or if it is not a directory, an error is generated and the return value is 1. No test is done upon target.


Node:File Commands Removing, Next:, Previous:File Commands Copy, Up:File Commands

Removing files and directories

File removal is forced: the --force option to rm is always used. It is the responsibility of the caller to validate the operation before invoking these functions.

Some functions test the existence of the pathname before attempting to remove it: this is done only if test execution is disabled; if test execution is enabled the command line is echoed to stderr to make it easier to debug scripts.

mbfl_file_enable_remove Function
Declares to the program module the commands required to remove files and directories (Program Declaring). The programs are: rm and rmdir.

mbfl_file_remove pathname Function
Removes pathname, no matter if it is a file or directory. If it is a directory: descends the sublevels removing all of them. If an error occurs returns 1.

mbfl_file_remove_file pathname Function
Removes the file selected by pathname. If the file does not exist or it is not a file or an error occurs: returns 1.

mbfl_file_remove_directory pathname Function
Removes the directory selected by pathname. If the directory does not exist or an error occurs: returns 1.

Manipulating tar archives

Remember that when we execute a script with the --test option, the external commands are not executed: the command line is echoed to stdout instead. It is recommended to use this mode to fine-tune the command line options required by tar.

mbfl_file_enable_tar Function
Declares to the program module the tar command (Program Declaring).

mbfl_tar_exec ?...? Function
Executes tar with whatever arguments are used. Returns the return code of tar.

mbfl_tar_create_to_stdout directory ?...? Function
Creates an archive and sends it to stdout. The root of the archive is directory. Files are selected with the . pattern. tar flags may be appended to the invocation of this function. In case of error returns 1.

mbfl_tar_extract_from_stdin directory ?...? Function
Reads an archive from stdin and extracts it under directory. tar flags may be appended to the invocation of this function. In case of error returns 1.

mbfl_tar_extract_from_file directory archive ?...? Function
Reads an archive from a file and extracts it under directory. tar flags may be appended to the invocation of this function. In case of error returns 1.

mbfl_tar_create_to_file directory archive ?...? Function
Creates an archive named archive holding the contents of directory. Before creating the archive, the process changes the current directory to directory and selects the files with the . pattern. tar flags may be appended to the invocation of this function. In case of error returns 1.

mbfl_tar_archive_directory_to_file directory archive ?...? Function
Like mbfl_tar_create_to_file but archives all the contents of directory, including the directory itself (not its parents).

mbfl_tar_list archive ?...? Function
Prints to stdout the list of files in archive. tar flags may be appended to the invocation of this function. In case of error returns 1.


Node:File Testing, Next:, Previous:File Commands, Up:File

Testing file existence and the like

mbfl_file_is_file filename Function
Returns true if filename is not the empty string and is a file.

mbfl_file_is_readable filename Function
Returns true if filename is not the empty string, is a file and is readable.

mbfl_file_is_writable filename Function
Returns true if filename is not the empty string, is a file and is writable.

mbfl_file_is_directory directory Function
Returns true if directory is not the empty string and is a directory.

mbfl_file_directory_is_readable directory Function
Returns true if directory is not the empty string, is a directory and is readable.

mbfl_file_directory_is_writable directory Function
Returns true if directory is not the empty string, is a directory and is writable.
mbfl_file_is_symlink pathname Function
Returns true if pathname is not the empty string and is a symbolic link.


Node:File Misc, Previous:File Testing, Up:File

Miscellaneous commands

mbfl_cd dirname ?...? Function
Changes directory to dirname. Optional flags to cd may be appended.


Node:Getopts, Next:, Previous:File, Up:Top

Parsing command line options

The getopt module defines a set of procedures to be used to process command line arguments with the following format:

-a
brief option a with no value;
-a123
brief option a with value 123;
--bianco
long option bianco with no value;
--color=bianco
long option color with value bianco.

Requires the message module (Message for details).


Node:Getopts Arguments, Next:, Up:Getopts

Arguments

The module contains, at the root level, a block of code like the following:

ARGC=0
declare -a ARGV ARGV1

for ((ARGC1=0; $# > 0; ++ARGC1)); do
    ARGV1[$ARGC1]="$1"
    shift
done

this block is executed when the script is evaluated. Its purpose is to store command line arguments in the global array ARGV1 and the number of command line arguments in the global variable ARGC1.

The global array ARGV and the global variable ARGC are predefined and should be used by the mbfl_getopts functions to store non-option command line arguments.

Example:

$ script --gulp wo --gasp=123 wa

if the script makes use of the library, the strings wo and wa will go into ARGV and ARGC will be set to 2. The option arguments are processed and some action is performed to register them.

We can access the non-option arguments with the following code:

for ((i=0; $i < $ARGC; ++i)); do
    # do something with ${ARGV[$i]}
done


Node:Getopts Usage, Next:, Previous:Getopts Arguments, Up:Getopts

Using the module

To use this module we have to declare a set of script options; we declare a new script option with the function mbfl_declare_option. Option declaration should be done at the beginning of the script, before doing anything else; for example, right after the MBFL library code.

In the main block of the script: options are parsed by invoking mbfl_getopts_parse: this function will update a global variable and invoke a script function for each option on the command line.

Examples

Example of option declaration:

mbfl_declare_option ALPHA no a alpha noarg "enable alpha option"

this code declares an option with no argument and properties:

If the option is used: the function script_option_update_alpha is invoked (if it exists) with no arguments, after the variable script_option_ALPHA has been set to yes. Valid option usages are:

$ script.sh -a
$ script.sh --alpha

Another example:

mbfl_declare_option BETA 123 b beta witharg "select beta value"

this code declares an option with argument and properties:

If the option is used: the function script_option_update_beta is invoked (if it exists) with no arguments, after the variable script_option_BETA has been set to the selected value. Valid option usages are:

$ script.sh -b456
$ script.sh --beta=456


Node:Getopts Options, Next:, Previous:Getopts Usage, Up:Getopts

Predefined options

A set of predefined options is recognised by the library and not handed to the user defined functions.

--encoded-args
Signals to the library that the non-option arguments and the option values are encoded in hexadecimal strings. Encoding is useful to avoid quoting problems when invoking a script from another one.

If this option is used: the values are decoded by mbfl_getopts_parse before storing them in the ARGV array and before being stored in the option's specific global variables.

-v
--verbose
Turns on verbose messages. The function mbfl_option_verbose returns true (Message, for details).
--silent
Turns off verbose messages. The function mbfl_option_verbose returns false.
--verbose-program
If used, the --verbose option is added to the command line of external programs that support it. The function mbfl_option_verbose_program returns true or false depending on the state of this option.
--show-program
Prints the command line of executed external programs.
--debug
Turns on debugging messages (Message, for details).
--test
Turns on test execution (Program Testing, for details).
--null
Signals to the script that it has to use the null character to separate values, instead of the common newline. The global variable mbfl_option_NULL is set to yes.
-f
--force
Signals to the script that it does not have to query the user before doing dangerous operations, like overwriting files. The global variable mbfl_option_INTERACTIVE is set to no.
-i
--interactive
Signals to the script that it does have to query the user before doing dangerous operations, like overwriting files. The global variable mbfl_option_INTERACTIVE is set to yes.
--validate-programs
Validates the existence of all the programs needed by the script; then exits. The exit code is zero if all the programs were found, one otherwise.
--version
Prints to the standard output of the script the contents of the global variable mbfl_message_VERSION, then exits with code zero. The variable makes use of the service variables (Service Variables, for details).
--version-only
Prints to the standard output of the script the contents of the global variable script_VERSION, then exits with code zero.
--license
Prints to the standard output of the script the contents of one of the global variables mbfl_message_LICENSE_*, then exits with code zero. The variable makes use of the service variables (Service Variables, for details).
-h
--help
--usage
Prints to the standard output of the script: the contents of the global variable script_USAGE; a newline; the string options:; a newline; an automatically generated string describing the options declared with mbfl_declare_option; a string describing the MBFL default options. Then exits with code zero.

The following options may be used to set, unset and query the state of the predefined options.

mbfl_option_encoded_args Function
mbfl_set_option_encoded_args Function
mbfl_unset_option_encoded_args Function
Query/sets/unsets the encoded arguments option.

mbfl_option_verbose Function
mbfl_set_option_verbose Function
mbfl_unset_option_verbose Function
Query/sets/unsets the verbose messages option.

mbfl_option_verbose_program Function
mbfl_set_option_verbose_program Function
mbfl_unset_option_verbose_program Function
Query/sets/unsets verbose execution for external programs.

This option, of course, is supported only for programs that are known by MBFL (like rm): if a program is executed with mbfl_program_exec, it is the responsibility of the caller to use the option.

mbfl_option_show_program Function
mbfl_set_option_show_program Function
mbfl_unset_option_show_program Function
Prints the command line of executed external program. This does not disable program execution, it just prints the command line before executing it.

mbfl_option_test Function
mbfl_set_option_test Function
mbfl_unset_option_test Function
Query/sets/unsets the test execution option.

mbfl_option_debug Function
mbfl_set_option_debug Function
mbfl_unset_option_debug Function
Query/sets/unsets the debug messages option.

mbfl_option_null Function
mbfl_set_option_null Function
mbfl_unset_option_null Function
Query/sets/unsets the null list separator option.

mbfl_option_interactive Function
mbfl_set_option_interactive Function
mbfl_unset_option_interactive Function
Query/sets/unsets the interactive excution option.


Node:Getopts Interface, Next:, Previous:Getopts Options, Up:Getopts

Interface functions

mbfl_declare_option keyword default brief long hasarg description Function
Declares a new option. Arguments description follows.
keyword
A string identifying the option; internally it is used to build a function name and a variable name. It is safer to limit this string to the letters in the range a-z and underscores.
default
The default value for the option. For an option with argument it can be anything; for an option with no argument: it must be yes or no.
brief
The brief option selector: a single character. It is safer to choose a single letter (lower or upper case) in the ASCII standard.
long
The long option selector: a string. It is safer to choose a sequence of letters in the ASCII standard, separated by underscores or dashes.
hasarg
Either witharg or noarg: declares if the option requires an argument or not.
description
A one-line string describing the option briefly.
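
As a sketch of how such a declaration might look (the mbfl_declare_option below is a trivial stand-in that just echoes its arguments, so the example runs by itself; the option names are hypothetical):

```shell
#!/bin/bash
# Stand-in for the real MBFL function, only so this example is self-contained;
# it echoes the declaration it receives instead of registering it.
function mbfl_declare_option () {
    printf 'option: keyword=%s default=%s brief=%s long=%s hasarg=%s desc=%s\n' \
        "$1" "$2" "$3" "$4" "$5" "$6"
}

# Hypothetical option: -o/--output, takes an argument, defaults to "-".
mbfl_declare_option OUTPUT - o output witharg "selects the output file"
# Hypothetical flag: -f/--force, no argument, off by default.
mbfl_declare_option FORCE no f force noarg "overwrites existing files"
```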

mbfl_getopts_parse Function
Parses a set of command line options. The options are handed to user defined functions. The global array ARGV1 and the global variable ARGC1 are supposed to hold the command line arguments and the number of command line arguments. Non-option arguments are left in the global array ARGV, the global variable ARGC holds the number of elements in ARGV.

mbfl_getopts_islong string varname Function
Verifies if a string is a long option without argument. string is the string to validate, varname is the optional name of a variable that's set to the option name, without the leading dashes.

Returns with code zero if the string is a long option without argument, else returns with code one.

An option must be of the form --option, only characters in the ranges A-Z, a-z, 0-9 and the characters - and _ are allowed in the option name.
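
The validation just described can be sketched in plain bash as follows (a stand-alone approximation, not MBFL's actual code; the function and variable names are made up):

```shell
#!/bin/bash
# Accept --name where name uses only A-Z, a-z, 0-9, - and _; optionally
# store the option name (without the leading dashes) in a named variable.
function is_long_option () {
    local string=$1 varname=$2
    if [[ $string =~ ^--([A-Za-z0-9_-]+)$ ]]; then
        test -n "$varname" && printf -v "$varname" '%s' "${BASH_REMATCH[1]}"
        return 0
    fi
    return 1
}

is_long_option --verbose name && echo "long option: $name"
is_long_option -v || echo "-v is not a long option"
```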

mbfl_getopts_islong_with string optname varname Function
Verifies if a string is a long option with argument. Arguments:
string
the string to validate;
optname
optional name of a variable that's set to the option name, without the leading dashes;
varname
optional name of a variable that's set to the option value.

Returns with code zero if the string is a long option with argument, else returns with code one.

An option must be of the form --option=value, only characters in the ranges A-Z, a-z, 0-9 and the characters - and _ are allowed in the option name.

If the argument is not an option with value, the variable names are ignored.

mbfl_getopts_isbrief string varname Function
Verifies if a string is a brief option without argument. Arguments: string is the string to validate, varname optional name of a variable that's set to the option name, without the leading dash.

Returns with code zero if the argument is a brief option without argument, else returns with code one.

A brief option must be of the form -a, only characters in the ranges A-Z, a-z, 0-9 are allowed as option letters.

mbfl_getopts_isbrief_with string optname valname Function
Verifies if a string is a brief option with argument. Arguments:
string
the string to validate;
optname
optional name of a variable that's set to the option name, without the leading dashes;
valname
optional name of a variable that's set to the option value.

Returns with code zero if the argument is a brief option with argument, else returns with code one.

A brief option must be of the form -aV (a is the option, V is the value), only characters in the ranges A-Z, a-z, 0-9 are allowed as option letters.

mbfl_wrong_num_args required present Function
Validates the number of arguments. required is the required number of arguments, present is the given number of arguments on the command line. If the number of arguments is different from the required one: prints an error message and returns with code one; else returns with code zero.
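
A minimal sketch of this check and of how a function might use it (stand-in code, not MBFL's implementation; copy_file is a hypothetical caller):

```shell
#!/bin/bash
# Compare the required and actual argument counts; complain on mismatch.
function wrong_num_args () {
    local required=$1 present=$2
    if test "$present" -ne "$required"; then
        echo "error: required $required arguments, got $present" >&2
        return 1
    fi
    return 0
}

function copy_file () {
    wrong_num_args 2 $# || return 1
    echo "would copy '$1' to '$2'"
}

copy_file a.txt b.txt                                   # passes the check
copy_file a.txt || echo "rejected: wrong number of arguments"
```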

mbfl_argv_from_stdin Function
If the ARGC global variable is set to zero: fills the global variable ARGV with lines from stdin. If the global variable mbfl_option_NULL is set to yes: lines are read using the null character as terminator, else they are read using the standard newline as terminator.

This function may block waiting for input.

mbfl_argv_all_files Function
Checks that all the arguments in ARGV are names of existing files. Returns with code zero if there are no errors, else prints an error message and returns with code one.


Node:Getopts Values, Previous:Getopts Interface, Up:Getopts

Querying Options

Some features and behaviours of the library are configured by the return values of the following set of functions. All of these functions are defined by the Getopts module, but they can be redefined by the script.

mbfl_option_encoded_args Function
Returns true if the option --encoded-args was used on the command line.

mbfl_option_verbose Function
Returns true if the option --verbose was used on the command line after all the occurrences of --silent. Returns false if the option --silent was used on the command line after all the occurrences of --verbose.

mbfl_option_test Function
Returns true if the option --test was used on the command line.
mbfl_option_debug Function
Returns true if the option --debug was used on the command line.
mbfl_option_null Function
Returns true if the option --null was used on the command line.
mbfl_option_interactive Function
Returns true if the option --interactive was used on the command line after all the occurrences of --force. Returns false if the option --force was used on the command line after all the occurrences of --interactive.

Printing messages to the console

This module allows one to print messages on an output channel. Various forms of message are supported.

All the function names are prefixed with mbfl_message_. All the messages will have the forms:

<progname>: <message>
<progname>: [error|warning]: <message>

The following global variables are declared:

mbfl_message_PROGNAME
must be initialised with the name of the script that'll be displayed at the beginning of each message;
mbfl_message_VERBOSE
yes if verbose messages should be displayed, else no;

mbfl_message_set_program PROGNAME Function
Sets the script official name to put at the beginning of messages.

mbfl_message_set_channel channel Function
Selects the channel to be used to output messages.

mbfl_message_string string Function
Outputs a message to the selected channel. Echoes a string composed of: the content of the mbfl_message_PROGNAME global variable; a colon; a space; the provided message.

A newline character is NOT appended to the message. Escape characters are allowed in the message.

mbfl_message_verbose string Function
Outputs a message to the selected channel, but only if the evaluation of the function/alias mbfl_option_verbose returns true.

Echoes a string composed of: the content of the mbfl_message_PROGNAME global variable; a colon; a space; the provided message.

A newline character is NOT appended to the message. Escape characters are allowed in the message.

mbfl_message_verbose_end string Function
Outputs a message to the selected channel, but only if the evaluation of the function/alias mbfl_option_verbose returns true.

Echoes the string. A newline character is NOT appended to the message. Escape characters are allowed in the message.

mbfl_message_debug string Function
Outputs a message to the selected channel, but only if the evaluation of the function/alias mbfl_option_debug returns true.

Echoes a string composed of: the content of the mbfl_message_PROGNAME global variable; a colon; a space; the provided message.

A newline character is NOT appended to the message. Escape characters are allowed in the message.

mbfl_message_warning string Function
Outputs a warning message to the selected channel. Echoes a string composed of: the content of the mbfl_message_PROGNAME global variable; a colon; a space; the string warning; a colon; a space; the provided message.

A newline character IS appended to the message. Escape characters are allowed in the message.

mbfl_message_error string Function
Outputs an error message to the selected channel. Echoes a string composed of: the content of the mbfl_message_PROGNAME global variable; a colon; a space; the string error; a colon; a space; the provided message.

A newline character IS appended to the message. Escape characters are allowed in the message.
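
The warning and error formats described above can be reproduced with a couple of stub functions (a self-contained sketch, not MBFL's code; only the mbfl_message_PROGNAME variable plays the same role as in the library):

```shell
#!/bin/bash
mbfl_message_PROGNAME=myscript      # hypothetical script name

function message_warning () {
    printf '%s: warning: %s\n' "$mbfl_message_PROGNAME" "$1" >&2
}
function message_error () {
    printf '%s: error: %s\n' "$mbfl_message_PROGNAME" "$1" >&2
}

message_warning "file already exists"   # myscript: warning: file already exists
message_error   "cannot open file"      # myscript: error: cannot open file
```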


Node:Program, Next:, Previous:Message, Up:Top

Using external programs

This module declares a set of global variables all prefixed with mbfl_program_. We have to look at the module's code to see which ones are declared.


Node:Program Testing, Next:, Up:Program

Testing a script and running programs

MBFL allows a script to execute a "dry run", that is: do not perform any operation on the system, just print messages describing what would happen if the script were executed with the selected options. In the MBFL model this implies that no external program is executed.

When this feature is turned on: mbfl_program_exec does not execute the program, instead it prints the command line on standard error and returns true.

mbfl_set_option_test Function
Enables the script test option. After this a script should not do anything on the system, just print messages describing the operations. This function is invoked when the predefined option --test is used on the command line.

mbfl_unset_option_test Function
Disables the script test option. After this a script should perform normal operations.

mbfl_option_test Function
Returns true if test execution is enabled, else returns false.


Node:Program Checking, Next:, Previous:Program Testing, Up:Program

Checking programs existence

The simplest way to test the availability of a program is to look for it just before it is used. The following function should be used at the beginning of a function that makes use of external programs.

mbfl_program_check program ?program ...? Function
Checks the availability of programs. All the pathnames on the command line are checked: if one is not executable an error message is printed on stderr. Returns false if a program can't be found, true otherwise.

mbfl_program_find program Function
A wrapper for:
type -ap program

that looks for a program in the current search path: prints the full pathname of the program found, or prints an empty string if nothing is found.
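
Since the lookup is built on the type builtin, it can be tried directly in plain bash (the find_program wrapper name here is made up for the example):

```shell
#!/bin/bash
# type -ap prints every matching executable found in PATH; the first line
# is the one that would actually be executed.
function find_program () {
    type -ap "$1" | head -n 1
}

path=$(find_program ls)          # e.g. /bin/ls (path depends on the system)
test -n "$path" && echo "found: $path"
```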


Node:Program Executing, Next:, Previous:Program Checking, Up:Program

Executing a program

mbfl_program_exec arg ... Function
Evaluates a command line.

If the function mbfl_option_test returns true: instead of evaluation, the command line is sent to stderr.

If the function mbfl_option_show_program returns true: the command line is sent to stderr, then it is executed.


Node:Program Declaring, Previous:Program Executing, Up:Program

Declaring the intention to use a program

To make a script model simpler, we assume that the unavailability of a program at the time of its execution is a fatal error. So if we need to execute a program and the executable is not there, the script must be aborted on the spot.

Functions are available to test the availability of a program, so we can try to locate an alternative or terminate the process under the script's control. On a system where executables may vanish from one moment to the next, no matter how we test a program's existence, there's always the possibility that the program is not "there" when we invoke it.

If we just use mbfl_program_exec to invoke an external program, the function will try and fail if the executable is unavailable: the return code will be false.

The vanishing of a program is a rare event: if it is there when we look for it, it will probably still be there a few moments later when we invoke it. For this reason, MBFL proposes a set of functions with which we can declare the intention of a script to use a set of programs; a command line option is predefined to let the user test the availability of all the declared programs before invoking the script.

mbfl_declare_program program Function
Registers program as the name of a program required by the script. The return value is always zero.

mbfl_program_validate_declared Function
Validates the existence of all the declared programs. The return value is zero if all the programs are found, one otherwise.

This function is invoked by mbfl_getopts_parse when the --validate-programs option is used on the command line.

It is a good idea to invoke this function at the beginning of a script, just before starting to do stuff, example:

mbfl_program_validate_declared || mbfl_exit_program_not_found

If verbose messages are enabled: a brief summary is echoed to stderr; from the command line the option --verbose must be used before --validate-programs.

mbfl_program_found program Function
Prints the pathname of the previously declared program. Returns zero if the program was found, otherwise prints an error message and exits the script by invoking mbfl_exit_program_not_found.

This function should be used to retrieve the pathname of the program to be used as first argument to mbfl_program_exec.

mbfl_exit_program_not_found Function
Terminates the script with exit code 20. This function may be redefined by a script to make use of a different exit code; it may even be redefined to execute arbitrary code and then exit.
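
The declare/validate workflow described in this node can be sketched stand-alone like this (minimal stand-ins with hypothetical names, not MBFL's implementation):

```shell
#!/bin/bash
# Registry of program names the script intends to use.
declare -a DECLARED_PROGRAMS=()

function declare_program () { DECLARED_PROGRAMS+=("$1"); }

# Walk the registry; fail on the first program not found in PATH.
function validate_declared () {
    local program
    for program in "${DECLARED_PROGRAMS[@]}"; do
        if ! type -p "$program" >/dev/null; then
            echo "program not found: $program" >&2
            return 1
        fi
    done
    return 0
}

declare_program ls
declare_program grep
validate_declared && echo "all declared programs found"
```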


Node:Signal, Next:, Previous:Program, Up:Top

Catching signals

MBFL provides an interface to the trap builtin that allows the execution of more than one function when a signal is received; this may sound useless, but it is not.

mbfl_signal_map_signame_to_signum sigspec Function
Converts sigspec to the corresponding signal number, then prints the number.

mbfl_signal_attach sigspec handler Function
Appends handler to the list of functions that are executed whenever sigspec is received.
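
The mechanism behind such an interface can be sketched in plain bash: keep a list of handler names and trap a dispatcher that invokes them all in registration order (all names below are hypothetical, not MBFL internals):

```shell
#!/bin/bash
declare -a USR1_HANDLERS=()

function attach_usr1 () {
    USR1_HANDLERS+=("$1")
    trap invoke_usr1_handlers USR1
}
function invoke_usr1_handlers () {
    local handler
    for handler in "${USR1_HANDLERS[@]}"; do "$handler"; done
}

function first_handler  () { echo "first handler";  }
function second_handler () { echo "second handler"; }

attach_usr1 first_handler
attach_usr1 second_handler
kill -USR1 $$       # both handlers run, in registration order
```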

mbfl_signal_invoke_handlers signum Function
Invokes all the handlers registered for signum. This function is not meant to be used during normal script execution, but it may be useful to debug a script.


Node:String, Next:, Previous:Signal, Up:Top

Manipulating strings


Node:String Quote, Next:, Up:String

Quoted characters

mbfl_string_is_quoted_char string position Function
Returns true if the character at position in string is quoted; else returns false. A character is considered quoted if it is preceded by an odd number of backslashes (\). position is a zero-based index.

mbfl_string_is_equal_unquoted_char string position char Function
Returns true if the character at position in string is equal to char and is not quoted (according to mbfl_string_is_quoted_char); else returns false. position is a zero-based index.

mbfl_string_quote string Function
Prints string with quoted characters. All the occurrences of the backslash character, \, are substituted with a quoted backslash, \\. Returns true.


Node:String Inspection, Next:, Previous:String Quote, Up:String

Inspecting a string

mbfl_string_index string index Function
Selects a character from a string. Echoes to stdout the selected character. If the index is out of range: the empty string is echoed to stdout.

mbfl_string_first string char ?begin? Function
Searches characters in a string. Arguments: string, the target string; char, the character to look for; begin, optional, the index of the character in the target string from which the search begins (defaults to zero).

Prints an integer representing the index of the first occurrence of char in string. If the character is not found: nothing is sent to stdout.

mbfl_string_last string char ?begin? Function
Searches characters in a string starting from the end. Arguments: string, the target string; char, the character to look for; begin, optional, the index of the character in the target string from which the search begins (defaults to zero).

Prints an integer representing the index of the last occurrence of char in string. If the character is not found: nothing is sent to stdout.

mbfl_string_range string begin end Function
Extracts a range of characters from a string. Arguments: string, the source string; begin, the index of the first character in the range; end, optional, the index of the character one past the last in the range (that character is not extracted); it defaults to the end of the string. Echoes to stdout the selected range of characters.

mbfl_string_equal_substring string position pattern Function
Returns true if the substring starting at position in string is equal to pattern; else returns false. If position plus the length of pattern is greater than the length of string: the return value is false, always.


Node:String Splitting, Next:, Previous:String Inspection, Up:String

Splitting a string

mbfl_string_chars string Function
Splits a string into characters. Fills an array named SPLITFIELD with the characters from the string; the number of elements in the array is stored in a variable named SPLITCOUNT. Both SPLITFIELD and SPLITCOUNT may be declared local in the scope of the caller.

The difference between this function and using: ${STRING:$i:1}, is that this function detects backslash characters, \, and treats them as part of the following character. So, for example, the sequence \n is treated as a single char.

Example of usage for mbfl_string_chars:

string="abcde\nfghilm"
mbfl_string_chars "${string}"
# Now:
# $SPLITCOUNT = 12 (the sequence \n counts as a single char)
#  a = "${SPLITFIELD[0]}"
#  b = "${SPLITFIELD[1]}"
#  c = "${SPLITFIELD[2]}"
#  d = "${SPLITFIELD[3]}"
#  e = "${SPLITFIELD[4]}"
#  \n = "${SPLITFIELD[5]}"
#  f = "${SPLITFIELD[6]}"
#  g = "${SPLITFIELD[7]}"
#  h = "${SPLITFIELD[8]}"
#  i = "${SPLITFIELD[9]}"
#  l = "${SPLITFIELD[10]}"
#  m = "${SPLITFIELD[11]}"

mbfl_string_split string separator Function
Splits string into fields using separator. Fills an array named SPLITFIELD with the fields from the string; the number of elements in the array is stored in a variable named SPLITCOUNT. Both SPLITFIELD and SPLITCOUNT may be declared local in the scope of the caller.
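
A stand-alone sketch of such a split, valid for single-character separators only (this is an approximation using IFS word splitting, not MBFL's implementation):

```shell
#!/bin/bash
function string_split () {
    local string=$1 separator=$2
    local IFS=$separator
    # Unquoted expansion word-splits on the separator; note that glob
    # characters in the fields would also be expanded in this sketch.
    SPLITFIELD=($string)
    SPLITCOUNT=${#SPLITFIELD[@]}
}

string_split "usr:local:bin" :
echo "$SPLITCOUNT"            # 3
echo "${SPLITFIELD[1]}"       # local
```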


Node:String Case, Next:, Previous:String Splitting, Up:String

Converting between upper and lower case

mbfl_string_toupper string Function
Outputs string with all the occurrences of lower case ASCII characters (no accents) turned into upper case.

mbfl_string_tolower string Function
Outputs string with all the occurrences of upper case ASCII characters (no accents) turned into lower case.


Node:String Class, Next:, Previous:String Case, Up:String

Matching a string with a class

mbfl-string-is-alpha-char char Function
Returns true if char is in one of the ranges: a-z, A-Z.

mbfl-string-is-digit-char char Function
Returns true if char is in one of the ranges: 0-9.

mbfl-string-is-alnum-char char Function
Returns true if mbfl-string-is-alpha-char || mbfl-string-is-digit-char returns true when acting on char.

mbfl-string-is-noblank-char char Function
Returns true if char is none of the characters: space, \n, \r, \f, \t. char is meant to be the unquoted version of the non-blank characters: the one obtained with:
$'char'

mbfl-string-is-name-char char Function
Returns true if mbfl-string-is-alnum-char returns true when acting upon char or char is an underscore, _.

mbfl-string-is-alpha string Function
mbfl-string-is-digit string Function
mbfl-string-is-alnum string Function
mbfl-string-is-noblank string Function
mbfl-string-is-name string Function
Return true if the associated char function returns true for each character in string. As an additional constraint: mbfl-string-is-name returns false if mbfl-string-is-digit returns true when acting upon the first character of string.


Node:String Misc, Previous:String Class, Up:String

Miscellaneous functions

mbfl_string_replace string pattern ?subst? Function
Replaces all the occurrences of pattern in string with subst; prints the result. If not used, subst defaults to the empty string.

mbfl_sprintf varname format ... Function
Makes use of printf to format the string format with the additional arguments, then stores the result in varname: if this name is local in the scope of the caller, this has the effect of filling the variable in that scope.

mbfl_string_skip string varname char Function
Skips all the characters in a string equal to char. varname is the name of a variable in the scope of the caller: its value is the offset of the first character to test in string. The offset is incremented until a char different from char is found, then the value of varname is updated to the position of that char. If the character at the initial offset is already different from char, the variable is left untouched. Returns true.
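
A minimal sketch of this skipping behaviour (stand-in code with made-up names, not MBFL's implementation):

```shell
#!/bin/bash
function string_skip () {
    local string=$1 varname=$2 char=$3
    local i=${!varname}
    # Advance past every occurrence of char, then write the offset back.
    while test "${string:$i:1}" = "$char"; do i=$((i + 1)); done
    printf -v "$varname" '%s' "$i"
}

offset=0
string_skip "   indented line" offset " "
echo "$offset"        # 3: the index of the first non-space character
```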


Node:Dialog, Next:, Previous:String, Up:Top

Interacting with the user

mbfl_dialog_yes_or_no string ?progname? Function
Prints the question string on the standard output and waits for the user to type yes or no in the standard input. Returns true if the user has typed yes, false if the user has typed no.

The optional parameter progname is used as prefix for the prompt; if not given: defaults to the value of script_PROGNAME (Service Variables for details).

mbfl_dialog_ask_password prompt Function
Prints prompt followed by a colon and a space, then reads a password from the terminal. Prints the password.


Node:Variables, Next:, Previous:Dialog, Up:Top

Manipulating variables


Node:Variables Arrays, Next:, Up:Variables

Manipulating arrays

mbfl_variable_find_in_array element Function
Searches the array mbfl_FIELDS for a value equal to element. If it is found: prints the index and returns true; else prints nothing and returns false.

mbfl_FIELDS must be filled with elements having subsequent indexes starting at zero.

mbfl_variable_element_is_in_array element Function
A wrapper for mbfl_variable_find_in_array that does not print anything.


Node:Variables Colon, Previous:Variables Arrays, Up:Variables

Manipulating colon variables

mbfl_variable_colon_variable_to_array varname Function
Reads varname's value, a colon separated list of strings, and stores each string in the array mbfl_FIELDS, starting with a base index of zero.

mbfl_variable_array_to_colon_variable varname Function
Stores each value in the array mbfl_FIELDS in varname as a colon separated list of strings.

mbfl_variable_colon_variable_drop_duplicate varname Function
Reads varname's value, a colon separated list of strings, and removes duplicates.
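
A stand-alone sketch of the colon-variable round trip, using a PATH-like value (the function bodies are stand-ins, not MBFL's code; only mbfl_FIELDS plays the same role as in the library):

```shell
#!/bin/bash
function colon_variable_to_array () {
    local IFS=:
    mbfl_FIELDS=(${!1})                  # split the named variable on ":"
}
function array_to_colon_variable () {
    local IFS=:
    printf -v "$1" '%s' "${mbfl_FIELDS[*]}"   # join with ":" (first IFS char)
}

MYPATH="/bin:/usr/bin:/usr/local/bin"
colon_variable_to_array MYPATH
echo "${mbfl_FIELDS[2]}"     # /usr/local/bin
array_to_colon_variable COPY
echo "$COPY"                 # /bin:/usr/bin:/usr/local/bin
```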


Node:Main, Next:, Previous:Variables, Up:Top

Main function

MBFL declares a function to drive the execution of the script; its purpose is to make use of the other modules to reduce the size of scripts depending on MBFL. All the code blocks in the script, with the exception of global variables declaration, should be enclosed in functions.

mbfl_main Function
Must be the last line of code in the script. Does the following.
  1. Registers the value of the variable script_PROGNAME in the message module using the function mbfl_message_set_progname.
  2. If it exists: invokes the function script_before_parsing_options.
  3. Parses command line options with mbfl_getopts_parse.
  4. If it exists: invokes the function script_after_parsing_options.
  5. Invokes the function whose name is stored in the global variable mbfl_main_SCRIPT_FUNCTION, if it exists, with no arguments; if its return value is non-zero: exits the script with the same code. The default value is main.
  6. Exits the script with the return code of the action function or zero.

mbfl_invoke_script_function funcname Function
If funcname is the name of an existing function: it is invoked with no arguments; the return value is the one of the function. The existence test is performed with:
test "$(type -t FUNCNAME)" = function

mbfl_main_set_main funcname Function
Selects the main function storing funcname into mbfl_main_SCRIPT_FUNCTION.


Node:Testing, Next:, Previous:Main, Up:Top

Building test suites

MBFL comes with a little library of functions that may be used to build test suites; it aims at building tests for bash functions, commands and scripts.

The ideas at the base of this library are taken from the tcltest package distributed with the Tcl core; this package had contributions from the following people/entities: Sun Microsystems, Inc.; Scriptics Corporation; Ajuba Solutions; Don Porter, NIST; probably many many others.

The library tries to do as much as possible using functions and aliases, not variables; this is an attempt to let the user redefine functions to his taste.


Node:Testing Intro, Next:, Up:Testing

A way to organise a test suite

A useful way to organise a test suite is to split it into a set of files: one for each module to be tested.

The file mbfltest.sh must be sourced at the beginning of each test file.

The function dotest should be invoked at the end of each module in the test suite; each module should define functions starting with the same prefix. A module should be stored in a file, and should look like the following:

# mymodule.test --

source mbfltest.sh
source module.sh

function module-featureA-1.1 () { ... }
function module-featureA-1.2 () { ... }
function module-featureA-2.1 () { ... }
function module-featureB-1.1 () { ... }
function module-featureB-1.2 () { ... }

dotest module-

### end of file

The file should be executed with:

$ bash mymodule.test

To test just "feature A":

$ TESTMATCH=module-featureA bash mymodule.test

Remember that the source builtin will look for files in the directories selected by the PATH environment variable, so we may want to do:

$ PATH="path/to/modules:${PATH}" \
TESTMATCH=module-featureA bash mymodule.test

It is better to put such stuff in a Makefile, with GNU make:

top_srcdir      = ...
builddir        = ...
BASHPROG        = bash
MODULES         = moduleA moduleB

testdir         = $(top_srcdir)/tests
test_FILES      = $(foreach f, $(MODULES), $(testdir)/$(f).test)
test_TARGETS    = test-modules

test_ENV        = PATH=$(builddir):$(testdir):$(PATH) TESTMATCH=$(TESTMATCH)
test_CMD        = $(test_ENV) $(BASHPROG)

.PHONY: test-modules

test-modules:
ifneq ($(strip $(test_FILES)),)
        @$(foreach f, $(test_FILES), $(test_CMD) $(f);)
endif


Node:Testing Config, Next:, Previous:Testing Intro, Up:Testing

Configuring the package

dotest-set-verbose Function
dotest-unset-verbose Function
Set or unset verbose execution. If verbose mode is on: some commands output messages on stderr describing what is going on. Examples: files and directories creation/removal.

dotest-option-verbose Function
Returns true if verbose mode is on, false otherwise.

dotest-set-test Function
dotest-unset-test Function
Set or unset test execution. If test mode is on: external commands (like rm and mkdir) are not executed, the command line is sent to stderr. Test mode is meant to be used to debug the test library functions.

dotest-option-test Function
Returns true if test mode is on, false otherwise.

dotest-set-report-start Function
dotest-unset-report-start Function
Set or unset printing a message upon starting a function.

dotest-option-report-start Function
Returns true if start function reporting is on; otherwise returns false.

dotest-set-report-success Function
dotest-unset-report-success Function
Set or unset printing a message when a function execution succeeds. Failed tests always cause a message to be printed.

dotest-option-report-success Function
Returns true if success function reporting is on; otherwise returns false.


Node:Testing Running, Next:, Previous:Testing Config, Up:Testing

Running test functions

dotest pattern Function
Runs all the functions matching pattern. Usually pattern is the first part of the name of the functions to be executed; the function names are selected with the following code:
compgen -A function "$pattern"

There's no constraint on function names, but they must be one-word names.

Before running a test function: the current process working directory is saved, and it is restored after the execution is terminated.

The return value of the test functions is used as result of the test: true, the test succeeded; false, the test failed. Remembering that the return value of a function is the return value of its last executed command, the functions dotest-equal and dotest-output, and of course the test command, may be used to return the correct value.

Messages are printed before and after the execution of each function, according to the mode selected with: dotest-set-report-success, dotest-set-report-start, ... (Testing Config for details).

The following environment variables may configure the behaviour of dotest.

TESTMATCH
Overrides the value selected with pattern.
TESTSTART
If yes: it is equivalent to invoking dotest-set-report-start; if no: it is equivalent to invoking dotest-unset-report-start.
TESTSUCCESS
If yes: it is equivalent to invoking dotest-set-report-success; if no: it is equivalent to invoking dotest-unset-report-success.


Node:Testing Compare, Next:, Previous:Testing Running, Up:Testing

Validating results by comparing

dotest-equal expected got Function
Compares the two parameters and returns true if they are equal; returns false otherwise. In the latter case prints a message showing the expected value and the wrong one. Must be used as last command in a function, so that its return value is equal to that of the function.

Example:

function my-func () {
    echo $(($1 + $2))
}
function mytest-1.1 () {
    dotest-equal 5 `my-func 2 3`
}
dotest mytest-

another example:

function my-func () {
    echo $(($1 + $2))
}
function mytest-1.1 () {
    dotest-equal 5 `my-func 2 3` && \
      dotest-equal 5 `my-func 1 4` && \
      dotest-equal 5 `my-func 3 2`
}
dotest mytest-


Node:Testing Output, Next:, Previous:Testing Compare, Up:Testing

Validating results by output

dotest-output ?string? Function
Reads all the available lines from stdin accumulating them into a local variable, separated by \n; then compares the input with string, or the empty string if string is not present, and returns true if they are equal, false otherwise.

Example of test for a function that echoes its three parameters:

function my-lib-function () {
    echo $1 $2 $3
}
function mytest-1.1 () {
    my-lib-function a b c | dotest-output a b c
}
dotest mytest

Example of test for a function that is supposed to print nothing:

function my-lib-function () {
    test "$1" != "$2" && echo error
}
function mytest-1.1 () {
    my-lib-function a a | dotest-output
}
dotest mytest

Validating input

Here is a small script that asks for a first name then a second name:

 
$ pg func2 

#!/bin/sh
# func2
echo -n "What is your first name :"
read F_NAME
echo -n "What is your surname :"
read S_NAME 

The task is to make sure that the characters entered in both variables contain letters only. To do this without functions would duplicate a lot of code. Using a function cuts this duplication down. To test for characters only, we can use awk. Here's the function to test if we only get upper or lower case characters.
 

char_name()
{
# char_name
# to call: char_name string
# assign the argument across to new variable
_LETTERS_ONLY=$1
# use awk to test for characters only !
_LETTERS_ONLY=`echo $1|awk '{if($0~/[^a-z A-Z]/) print "1"}'`
if [ "$_LETTERS_ONLY" != "" ]
then
# oops errors
return 1
else
# contains only chars
return 0
fi
}


We first assign the $1 variable to a more meaningful name. Awk is then used to test if the whole record passed contains only characters. The output of this command, which is 1 for non-letters and null for OK, is held in the variable _LETTERS_ONLY.

A test on the variable is then carried out. If it holds any value then it's an error, but if it holds no value then it's OK. A return code is then executed based on this test. Using the return code enables the script to look cleaner when the test is done on the function on the calling part of the script.

To test the outcome of the function we can use this format of the if statement if we wanted:

if char_name $F_NAME; then
  echo "OK"
else
  echo "ERRORS"
fi 
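The same letters-only test can also be written without awk, using the shell's own case statement and its pattern matching. A minimal sketch (the function name letters_only is mine, not from the script above):

```shell
letters_only()
# letters_only
# to call: letters_only string
# return 0 if the argument contains only letters, 1 otherwise
{
case "$1" in
  "") return 1 ;;            # an empty string is not valid
  *[!a-zA-Z]*) return 1 ;;   # some non-letter character is present
  *) return 0 ;;
esac
}
```

This avoids spawning echo and awk for every check, which matters inside a tight input loop.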

If there is an error we can create another function to echo the error out to the screen:

 
name_error()
# name_error
# display an error message
{
echo " $@ contains errors, it must contain only letters"
}

The function name_error is used to echo out any error, whichever variable holds the invalid entry. Using the special variable $@ allows all arguments to be echoed; in this case it's the value of either F_NAME or S_NAME. Here's what the finished script now looks like, using the functions:

 
$ pg func2 

#!/bin/sh
char_name()
# char_name
# to call: char_name string
# check if $1 does indeed contain only characters a-z,A-Z
{
# assign the argument across to new variable
_LETTERS_ONLY=$1
_LETTERS_ONLY=`echo $1|awk '{if($0~/[^a-zA-Z]/) print "1"}'`
if [ "$_LETTERS_ONLY" != "" ]
then
  # oops errors
  return 1
else
  # contains only chars
  return 0
fi
}

name_error()
# display an error message
{
echo " $@ contains errors, it must contain only letters"
}

while :
do
  echo -n "What is your first name :"
  read F_NAME
  if char_name $F_NAME
  then
    # all ok breakout
    break
  else
    name_error $F_NAME
  fi
done

while :
do
  echo -n "What is your surname :"
  read S_NAME
  if char_name $S_NAME
  then
    # all ok breakout
    break
  else
    name_error $S_NAME
  fi
done

Notice there is a while loop for each of the inputs; this makes sure we keep prompting until a correct value is entered, then we break out of the loop. Of course, in a working script an option would be given for the user to quit this cycle, proper cursor controls would be used, and zero-length fields would be checked for.

Here's what the output looks like when the script is run:

 
$ func2
What is your first name :Davi2d
 Davi2d contains errors, it must contain only letters
What is your first name :David
What is your surname :Tansley1
Tansley1 contains errors, it must contain only letters
What is your surname :Tansley 

Reading a single character

When navigating menus, one of the most frustrating tasks is having to keep hitting the return key after every selection, or when a 'press any key to continue' prompt appears. The dd command can help here, by letting us read a keystroke without waiting for return.

The dd command is used mostly for conversions and interrogating problems with data on tapes or normal tape archiving tasks, but it can also be used to create fixed length files. Here a 1-megabyte file is created with the filename myfile.

 
dd if=/dev/zero of=myfile count=512 bs=2048

The dd command can interpret what is coming in from your keyboard and can be used to accept a set number of characters; in this case we want only one. dd also needs to chop off the newline, the control character that gets attached when the user hits return, and will then send out just that one character. Before any of this can happen, the terminal must first be put into raw mode using the stty command. We save the settings before dd is invoked and restore them after dd has finished.

Here's the function:

 
read_a_char()
# read_a_char
{
# save the current terminal settings
SAVEDSTTY=`stty -g`
# put the terminal into cbreak mode
stty cbreak
# read and output exactly one character
dd if=/dev/tty bs=1 count=1 2> /dev/null
# restore the terminal settings
stty -cbreak
stty $SAVEDSTTY
}

To call the function and return the character typed in, use command substitution. Here's an example.

 
echo -n "Hit Any Key To Continue"
character=`read_a_char`
echo " In case you are wondering you pressed $character"
 

Testing for the presence of a directory

Testing for the presence of directories is a fairly common task when copying files around. This function will test the filename passed to the function to see if it is a directory. Because we are using the return command with a succeed or failure value, the if statement becomes the most obvious choice in testing the result.

Here's the function.

 
isdir()
{
# is_it_a_directory

if [ $# -lt 1 ]; then
  echo "isdir needs an argument"
  return 1
fi
# is it a directory ?
_DIRECTORY_NAME=$1
if [ ! -d $_DIRECTORY_NAME ]; then
  # no it is not
  return 1
else
  # yes it is
  return 0
fi
}
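Because isdir returns 0 or 1, it slots straight into an if, just like test itself. A quick sketch (restating the function compactly so the example is self-contained; the root directory / always exists):

```shell
# compact restatement of the isdir function above
isdir()
{
[ $# -ge 1 ] || { echo "isdir needs an argument"; return 1; }
[ -d "$1" ]
}

if isdir /; then
  echo "/ is a directory"
fi
```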

Getting information from a login ID

When you are on a big system, and you want to contact one of the users who is logged in, don't you just hate it when you have forgotten the person's full name? Many a time I have seen users locking up a process, but their user ID means nothing to me, so I have to grep the passwd file to get their full name. Then I can get on with the nice part where I can ring them up to give the user a telling off.

Here's a function that can save you from grepping the /etc/passwd file to see the user's full name.

On my system the user's full name is kept in field 5 of the passwd file; yours might be different, so you will have to change the field number to suit your passwd file.

The function is passed one user ID or several, and it simply greps the passwd file.

Here's the function:

 
whois()
# whois
# to call: whois userid
{
# check we have the right params
if [ $# -lt 1 ]; then
  echo "whois : need user id's please"
  return 1
fi

for loop in "$@"
do
  _USER_NAME=`grep "^$loop:" /etc/passwd | awk -F: '{print $5}'`
  if [ "$_USER_NAME" = "" ]; then
    echo "whois: Sorry cannot find $loop"
  else
    echo "$loop is $_USER_NAME"
  fi
done
}

The whois function can be called like this:
 

$ whois dave peter superman
dave is David Tansley - admin accts
peter is Peter Stromer - customer services
whois: Sorry cannot find superman
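One caveat: grep $loop can match the user ID anywhere on the passwd line, so a search for dave would also match a GECOS field that happens to contain dave. An exact match on the login field with awk is safer. A sketch against a sample passwd-format file (the file path and its contents are made up for the demo):

```shell
# build a small passwd-format sample for the demonstration
cat > /tmp/demo_passwd <<'EOF'
dave:x:100:100:David Tansley - admin accts:/home/dave:/bin/sh
peter:x:101:101:Peter Stromer - customer services:/home/peter:/bin/sh
EOF
# exact match on field 1 (the login), print field 5 (the full name)
awk -F: -v u=dave '$1 == u {print $5}' /tmp/demo_passwd
# prints: David Tansley - admin accts
```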


Line numbering a text file

When you are in vi you can number your lines which is great for debugging, but if you want to print out some files with line numbers then you have to use the command nl. Here is a function that does what nl does best – numbering the lines in a file. The original file is not overwritten.

number_file()
# number_file
# to call: number_file filename
{
_FILENAME=$1
# check we have the right params
if [ $# -ne 1 ]; then
echo "number_file: I need a filename to number"
return 1
fi

loop=1
while read LINE
do
echo "$loop: $LINE"
loop=`expr $loop + 1`
done < $_FILENAME
}
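A quick run shows the expected numbering; the function is restated compactly below so the sketch is self-contained:

```shell
# compact restatement of number_file from above
number_file()
{
loop=1
while read LINE
do
  echo "$loop: $LINE"
  loop=`expr $loop + 1`
done < "$1"
}

printf 'alpha\nbeta\n' > /tmp/nf_demo
number_file /tmp/nf_demo
# prints:
# 1: alpha
# 2: beta
```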

String to upper case

You may need to convert text from lower to upper case sometimes, for example to create directories in a filesystem with upper case only, or to input data into a field you are validating that requires the text to be in upper case.

Here is a function that will do it for you. No points for guessing it's tr.

 
str_to_upper ()
# str_to_upper
# to call: str_to_upper $1
{
_STR=$1
# check we have the right params
if [ $# -ne 1 ]; then
  echo "number_file: I need a string to convert please"
  return 1
fi
echo $@ |tr '[a-z]' '[A-Z]'
}

The variable UPPER holds the newly returned upper case string. Notice again the use of the special parameter $@ to pass all the arguments. The function str_to_upper can be called in two ways. You can either supply the string in a script like this:

 
UPPER=`str_to_upper "documents.live"`
echo $UPPER 

or supply an argument to the function instead of a string, like this:

UPPER=`str_to_upper $1`
echo $UPPER 

Both of these examples use substitution to get the returned function results.

is_upper

The function str_to_upper does a case conversion, but sometimes you only need to know if a string is upper case before continuing with some processing, perhaps to write a field of text to a file. The is_upper function does just that. Using an if statement in the script will determine if the string passed is indeed upper case.

Here is the function.

 
is_upper()
# is_upper
# to call: is_upper $1
{
# check we have the right params
if [ $# -ne 1 ]; then
  echo "is_upper: I need a string to test OK"
  return 1
fi
# use awk to check we have only upper case
_IS_UPPER=`echo $1|awk '{if($0~/[^A-Z]/) print "1"}'`
if [ "$_IS_UPPER" != "" ]
then
  # no, they are not all upper case
  return 1
else
  # yes all upper case
  return 0
fi
} 

To call the function is_upper simply send it a string argument. Here's how it could be called.
 

echo -n "Enter the filename :"
read FILENAME
if is_upper $FILENAME; then
echo "Great it's upper case"
# let's create a file maybe ??
else
echo "Sorry it's not upper case"
# shall we convert it anyway using str_to_upper ???
fi


To test if a string is indeed lower case, just replace the existing awk statement with this one inside the function is_upper and call it is_lower.

_IS_LOWER=`echo $1|awk '{if($0~/[^a-z]/) print "1"}'` 

String to lower case

Now I've done it. Because I have shown you the str_to_upper, I'd better show you its sister function str_to_lower. No guesses here please on how this one works.

str_to_lower ()
# str_to_lower
# to call: str_to_lower $1
{
# check we have the right params
if [ $# -ne 1 ]; then
  echo "str_to_lower: I need a string to convert please"
  return 1
fi
echo $@ |tr '[A-Z]' '[a-z]'
} 

The variable LOWER holds the newly returned lower case string. Notice again the use of the special parameter $@ to pass all the arguments. The function str_to_lower can be called in two ways. You can either supply the string in a script like this:

 

LOWER=`str_to_lower "documents.live"`
echo $LOWER 

or supply an argument to the function instead of a string, like this:
 

LOWER=`str_to_lower $1`
echo $LOWER


Length of string

Validating input into a field is a common task in scripts. Validating can mean many things, whether it's numeric, character only, formats, or the length of the field.

Suppose you had a script where the user enters data into a name field via an interactive screen. You will want to check that the field contains only a certain number of characters, say 20 for a person's name. It's easy for the user to input up to 50 characters into a field. This is what this next function will check. You pass the function two parameters, the actual string and the maximum length the string should be.

Here's the function:

 
check_length()
# check_length
# to call: check_length string max_length_of_string
{
_STR=$1
_MAX=$2
# check we have the right params
if [ $# -ne 2 ]; then
  echo "check_length: I need a string and max length the string should be"
  return 1
fi
# check the length of the string
_LENGTH=`echo $_STR |awk '{print length($0)}'`
if [ "$_LENGTH" -gt "$_MAX" ]; then
  # length of string is too big
  return 1
else
  # string is ok in length
  return 0
fi
} 

You could call the function check_length like this:

 
$ pg test_name 

#!/bin/sh
# test_name
while :
do
  echo -n "Enter your FIRST name :"
  read NAME
  if check_length $NAME 10
  then
    break
    # do nothing fall through condition all is ok
  else
    echo "The name field is too long 10 characters max"
  fi
done 

The loop will continue until the data input into the variable NAME is no longer than the maximum permitted, which in this case is ten characters; the break command then lets it drop out of the loop.

Using the above piece of code this is how the output could look.

 
$ test_name
Enter your FIRST name :Pertererrrrrrrrrrrrrrr
The name field is too long 10 characters max
Enter your FIRST name :Peter 
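As a side note, ${#var} gives the string length directly in POSIX shells, so the awk call can be dropped. A minimal sketch of the same check (check_len is a made-up name for this variant):

```shell
check_len()
# check_len: like check_length above, but using ${#...} for the length
# to call: check_len string max_length_of_string
{
[ $# -ne 2 ] && { echo "check_len: I need a string and a max length"; return 1; }
# succeed when the string is no longer than the maximum
[ ${#1} -le $2 ]
}
```

For example, check_len Peter 10 succeeds, while a twenty-character name fails.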

You could use the wc command to get the length of the string, but beware: wc -c also counts the newline that echo appends, so the figure it reports is one more than the number of characters in the string. There is a further glitch when taking input from the keyboard: trailing spaces typed after the name can distort the count, whereas the awk length() approach used above reports the string length exactly.

Here's an example of the wc glitch (or maybe it's a feature):

 
echo -n "name :"
read NAME
echo $NAME | wc -c 

Running the above script segment, typing two spaces after the name (the trailing spaces are not visible here):
 

name :Peter
6
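The extra character in that count is the newline that echo appends, which wc -c dutifully counts; printf with no newline shows the true length (a sketch):

```shell
NAME=Peter
echo "$NAME" | wc -c          # 6: five letters plus the trailing newline
printf '%s' "$NAME" | wc -c   # 5: the string alone
```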


chop

The chop function chops off characters from the beginning of a string. The function chop is passed a string; you specify how many characters to chop off the string starting from the first character. Suppose you had the string MYDOCUMENT.DOC and you wanted the MYDOCUMENT part chopped, so that the function returned only .DOC. You would pass the following to the chop function:

 
MYDOCUMENT.DOC 10 

Here's the function chop:

 

chop()
# chop
# to call: chop string how_many_chars_to_chop
{
_STR=$1
_CHOP=$2
# check we have the right params
if [ $# -ne 2 ]; then
  echo "chop: I need a string and how many characters to chop"
  return 1
fi
# check the length of the string first
# we can't chop more than what's in the string !!
_LENGTH=`echo $_STR |awk '{print length($0)}'`
if [ "$_LENGTH" -lt "$_CHOP" ]; then
  echo "chop: you have asked to chop more characters than there are in the string"
  return 1
fi
# awk's substr is 1-based, so to chop N characters we start the
# substring at position N+1
_CHOP=`expr $_CHOP + 1`
echo $_STR |awk '{print substr($1,'$_CHOP')}'
}


The newly chopped string is held in the variable CHOPPED. To call the function chop, you could use:

 

CHOPPED=`chop "Honeysuckle" 5`
echo $CHOPPED
suckle


or you could call this way:

 

echo -n "Enter the Filename :"
read FILENAME
CHOPPED=`chop $FILENAME 1`
# the first character would be chopped off !
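The same chop can also be done with cut, which takes the starting column directly. A sketch (chop_cut is a made-up name for this variant):

```shell
chop_cut()
# chop_cut: print $1 with its first $2 characters removed
{
echo "$1" | cut -c`expr $2 + 1`-
}

chop_cut Honeysuckle 5   # prints: suckle
```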
Months

When generating reports or creating screen displays, it is sometimes convenient to the programmer to have a quick way of displaying the full month. This function, called months, will accept the month number or month abbreviation and then return the full month.

For example, passing 3 or 03 will return March. Here's the function.

 

months()
{
# months
_MONTH=$1
# check we have the right params
if [ $# -ne 1 ]; then
  echo "months: I need a number 1 to 12 "
  return 1
fi

case $_MONTH in
1|01|Jan)_FULL="January" ;;
2|02|Feb)_FULL="February" ;;
3|03|Mar)_FULL="March";;
4|04|Apr)_FULL="April";;
5|05|May)_FULL="May";;
6|06|Jun)_FULL="June";;
7|07|Jul)_FULL="July";;
8|08|Aug)_FULL="August";;
9|09|Sep|Sept)_FULL="September";;
10|Oct)_FULL="October";;
11|Nov)_FULL="November";;
12|Dec)_FULL="December";;
*) echo "months: Unknown month"
  return 1
  ;;
esac
echo $_FULL
} 

To call the function months you can use either of the following methods.

 

months 04

 

The above method will display the month April; or from a script:

 

MY_MONTH=`months 06`
echo "Generating the Report for Month End $MY_MONTH"
...

which would output the month June.

Calling functions inside a script

To use a function in a script, create the function, and make sure it is above the code that calls it. Here's a script that uses a couple of functions. We have seen the script before; it tests to see if a directory exists.

 

$ pg direc_check
#!/bin/sh
# function file
is_it_a_directory()
{
# is_it_a_directory
# to call: is_it_a_directory directory_name
_DIRECTORY_NAME=$1
if [ $# -lt 1 ]; then
echo "is_it_a_directory: I need a directory name to check"
return 1
fi
# is it a directory ?
if [ ! -d $_DIRECTORY_NAME ]; then
return 1
else
return 0
fi
}
#--------------------------------------------------
error_msg()
{
# error_msg
# beeps; display message; beeps again!
echo -e "\007"
echo $@
echo -e "\007"
return 0
}

### END OF FUNCTIONS

echo -n "enter destination directory :"
read DIREC
if is_it_a_directory $DIREC
then :
else
error_msg "$DIREC does not exist...creating it now"
mkdir $DIREC > /dev/null 2>&1
if [ $? != 0 ]
then
error_msg "Could not create directory:: check it out!"
exit 1
else :
fi
fi # not a directory
echo "extracting files..."

In the above script two functions are declared at the top of the script and called from the main part of the script. All functions should go at the top of the script before any of the main scripting blocks begin. Notice the error message statement; the function error_msg is used, and all arguments passed to the function error_msg are just echoed out with a couple of bleeps.

Calling functions from a function file

We have already seen how to call functions from the command line; these types of functions are generally used for system reporting utilities.

Let's use the above functions again, but this time put them in a function file. We will call it functions.sh, the .sh suffix meaning shell script.

 

$ pg functions.sh
#!/bin/sh
# functions.sh
# main script functions
is_it_a_directory()
{
# is_it_a_directory
# to call: is_it_a_directory directory_name
#
if [ $# -lt 1 ]; then
echo "is_it_a_directory: I need a directory name to check"
return 1
fi
# is it a directory ?
DIRECTORY_NAME=$1
if [ ! -d $DIRECTORY_NAME ]; then
return 1
else
return 0
fi
}

#---------------------------------------------

error_msg()
{
echo -e "\007"
echo $@
echo -e "\007"
return 0
}

Now let's create the script that will use the functions in the file functions.sh. Notice the functions file is sourced with the command format:

 

. /<path to file> 

A subshell will not be created using this method; all functions stay in the current shell.
 

$ pg direc_check
#!/bin/sh
# direc_check
# source the function file functions.sh
# that's a <dot><space><forward slash>
. /home/dave/bin/functions.sh

# now we can use the function(s)

echo -n "enter destination directory :"
read DIREC
if is_it_a_directory $DIREC
then :
else
error_msg "$DIREC does not exist...creating it now"
mkdir $DIREC > /dev/null 2>&1
if [ $? != 0 ]
then
error_msg "Could not create directory:: check it out!"
exit 1
else :
fi
fi # not a directory
echo "extracting files..."

When we run the above script we get the same output as if we had the function inside our script:

 

$ direc_check
enter destination directory :AUDIT
AUDIT does not exist...creating it now
extracting files... 
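The difference sourcing makes can be sketched in a few lines: a function defined in a sourced file stays available in the current shell, because no subshell is created (the file name and path here are made up for the demo):

```shell
# write a tiny function file for the demonstration
cat > /tmp/demo_funcs.sh <<'EOF'
greet()
{
echo "hello from a sourced function"
}
EOF

# source it: no subshell is created, so greet is now defined here
. /tmp/demo_funcs.sh
greet   # prints: hello from a sourced function
```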

Sourcing files is not only for functions

A sourced file does not have to contain only functions – it can hold the global variables that make up a configuration file.

Suppose you had a couple of backup scripts that archived different parts of a system. It would be a good idea to share one common configuration file. All you need to do is to create your variables inside a file then when one of the backup scripts kicks off it can load these variables in to see if the user wants to change any of the defaults before the archive actually begins. It may be the case that you want the archive to go to a different media.

Of course this approach can be used by any scripts that share a common configuration to carry out a process. Here's an example. The following configuration contains default environments that are shared by a few backup scripts I use.

Here's the file.

 

$ pg backfunc 

#!/bin/sh
# name: backfunc
# config file that holds the defaults for the archive systems
_CODE="comet"
_FULLBACKUP="yes"
_LOGFILE="/logs/backup/"
_DEVICE="/dev/rmt/0n"
_INFORM="yes"
_PRINT_STATS="yes" 

The descriptions are clear. The first variable _CODE holds a code word. To be able to view and thus change the values, the user must first enter a code that matches the value of _CODE, which is "comet".

Here's the script that prompts for a password then displays the default configuration:

 
$ pg readfunc 

#!/bin/sh
# readfunc

if [ -r backfunc ]; then
  # source the file
  . ./backfunc
else
  echo "`basename $0`: cannot locate backfunc file"
  exit 1
fi

echo -n "Enter the code name :"
# does the code entered match the code from backfunc file ???
if [ "${CODE}" != "${_CODE}" ]; then
  echo "Wrong code...exiting..will use defaults"
  exit 1
fi

echo " The environment config file reports"
echo "Full Backup Required            : $_FULLBACKUP"
echo "The Logfile Is                  : $_LOGFILE"
echo "The Device To Backup To is      : $_DEVICE"
echo "You Are To Be Informed by Mail  : $_INFORM"
echo "A Statistic Report To Be Printed: $_PRINT_STATS" 

When the script is run, you are prompted for the code. If the code matches, you can view the defaults. A fully working script would then let the user change the defaults.

 
$ readfunc
Enter the code name :comet

The environment config file reports
Full Backup Required            : yes
The Logfile Is                  : /logs/backup/
The Device To Backup To is      : /dev/rmt/0n
You Are To Be Informed by Mail  : yes
A Statistic Report To Be Printed: yes 
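A useful refinement when sourcing a configuration file is to give every setting a fallback with ${var:-default}, so the script still behaves sensibly if the file omits an entry. A sketch (the variables mirror backfunc above, but the file path is made up for the demo):

```shell
# a config file that defines only some of the settings
cat > /tmp/demo_backfunc <<'EOF'
_LOGFILE="/logs/backup/"
EOF

. /tmp/demo_backfunc
# anything the config file left unset falls back to a default
_DEVICE=${_DEVICE:-/dev/rmt/0n}
echo "logfile: $_LOGFILE device: $_DEVICE"
```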

Using functions will greatly reduce the time you spend scripting. Creating usable and reusable functions makes good sense; it also makes your main scripts shorter and easier to maintain.

When you have got a set of functions you like, put them in a functions file, then other scripts can use the functions as well.

