
Unix Shells

Ksh93 and Bash Shells for Unix System Administrators

News
Unix shell history
Best Shell Books
Recommended Links
Papers, ebooks, tutorials
Unix Tools
Regular Expressions
Reference
Bourne Shell and portability
Bash
ksh93
AWK
Bash Built-in Variables
Readline and inputrc
IFS variable
Command completion
Debugging
Bash history and bang commands
Comparison operators
Arithmetic Expressions in BASH
String Operations in Shell
Bash Control Structures
if statements
Loops in Bash
Usage of pipes with loops in shell
Case statement
Bash select statement
Input and Output Redirection
Advanced filesystem navigation
Shell Prompts Customization
Subshells
Brace Expansion
Examples of .bashrc files
History substitution
Vi editing mode
Pushd and popd
Pretty Printing
Functions
Unix shell vi mode
The Unix Hater's Handbook
Sysadmin Horror Stories
bash Tips and Tricks
Tips
Humor
Etc
"The lyfe so short, the craft so long to lerne,''

Chaucer c. 1340–1400
(borrowed from the bash FAQ)

This collection of links is aimed at students (initially it was provided as reference material for my university shell programming course) and is designed to emphasize the usage of advanced shell constructs and pipes in shell programming (mainly in the context of ksh93 and bash 3.2+, which have good support for those constructs). An introductory paper, Slightly Skeptical View on Shell, discusses the shell as a scripting language and as one of the earliest examples of very high level languages. The page might also be useful for system administrators, who constitute a considerable percentage of shell users and the lion's share of shell programmers.

This page is the main page of a set of sub-pages devoted to the shell that are collectively known as Shellorama; the most important of them are listed above.

I strongly recommend getting a so-called orthodox file manager (OFM). This tool can immensely simplify Unix filesystem navigation and file operations (Midnight Commander, while deficient in handling the command line, can be tried first, as it is an active project and it provides ftp and sftp virtual filesystems on remote hosts).

Actually, filesystem navigation in the shell is an area of great concern, as there are several serious problems with the current tools for Unix filesystem navigation. I would say that usage of the cd command (the most common method) is conceptually broken and deprives people of a full understanding of the Unix filesystem; I doubt that it can be fixed within the shell paradigm (the C-shell made an attempt to compensate for this deficiency by introducing history and the popd/pushd/dirs troika, but this proved to be neither necessary nor sufficient to compensate for the problems with in-depth understanding of the classical Unix hierarchical filesystem inherent in purely command-line navigation ;-). Paradoxically, sysadmins who use OFMs usually have a much better understanding of the power and flexibility of the Unix filesystem than people who use the command line. All in all, usage of OFMs in system administration represents the Eastern European school of administration, and it might be a better way to administer systems than the typical "North American way".

The second indispensable tool for shell programmer is Expect. This is a very flexible application that can be used for automation of interactive sessions as well as automation of testing of applications.

People who know the shell and awk and/or Perl well are usually considered to be advanced Unix system administrators (which is another way of saying that system administrators who do not know the shell/awk/Perl troika well are essentially various flavors of entry-level system administrators, no matter how many years of experience they have). I would argue that no system administrator can consider himself a senior Unix system administrator without in-depth knowledge of both one of the OFMs and Expect.


An OFM tends to educate the user about the Unix filesystem in a subtle but definitely psychologically superior way. Widespread use of OFMs in Europe, especially in Germany and Eastern Europe, tends to produce specialists with substantially greater skills in handling Unix (and Windows) filesystems than users who only have experience with more primitive command-line navigation tools.

And yes, cd navigation is conceptually broken. This is not a bizarre opinion of the author, this is a fact: when you do not even suspect that a particular part of the tree exists, something is conceptually broken. People using the command line know only fragments of the filesystem structure, like the blind men who each know only one part of the elephant. A current Unix filesystem, with, say, 13K directories for a regular Solaris installation, is just unsuitable for the "cd way of navigation"; 1K directories was probably OK, but when there are over 10K directories you need something else. Here quantity turns into quality. That's my point.
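
For a rough count on your own system (a one-liner sketch; the exact number varies widely between installations):

find / -xdev -type d 2>/dev/null | wc -l    # count directories on the root filesystem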

The page provides rather long quotes from web pages, as web pages are a notoriously unreliable medium and can disappear without a trace. That makes this page somewhat difficult to browse, but it's not designed for browsing; it's designed as supplementary material for the university shell course and for self-education.

Note: A highly recommended shell site is SHELLdorado by Heiner Steven. This is a really excellent site with a good coding practice section, some interesting example scripts, and tips and tricks.

A complementary page with Best Shell Books Reviews is also available. Although the best book selection is to a certain extent individual, the selection of a bad book is not, so this page might at least help you to avoid the most common bad books (often the books recommended by a particular university are either weak or boring or both; Unix Shell by Example is one such example ;-). Still, the shell literature is substantial (over a hundred books), and that means that you can find a suitable textbook. Please be aware, though, that few authors of shell programming books have the broad understanding of Unix necessary for writing a comprehensive shell book.

IMHO the first edition of O'Reilly's Learning the Korn Shell is probably one of the best and contains a nice set of examples (the second edition is more up to date but generally weaker). The first edition also has the advantage of being available in HTML form (O'Reilly Unix CD). It does not cover ksh93, but it presents ksh in a unique way that no other book does. Some useful examples can also be found in the UNIX Power Tools book (see the Archive of all shell scripts (684 KB); the book is available in HTML in one of the O'Reilly CD bookshelf collections).

Still, one needs to understand that Unix shells are pretty archaic languages which were designed with compatibility with dinosaur shells in mind (and the Bourne shell is a dinosaur shell by any definition). Even such strong designers as David Korn were hampered by compatibility problems from the very beginning (in a way it is amazing how much ingenuity they demonstrated in enhancing the Bourne shell; I am really amazed how David Korn managed to extend the Bourne shell into something much more usable and much closer to a "normal" scripting language. In this sense ksh93 stands as a real pinnacle of shell compatibility and a testament to the art of shell language extension).

That means that outside of interactive usage and small one-page scripts they have generally outlived their usefulness. That's why for more or less complex tasks Perl is usually used (and should be used) instead of shells. While shells have continued to improve since the original C-shell and Korn shell, the shell syntax is frozen in space and time and now looks completely archaic. There are a large number of problems with this syntax, as it does not cleanly separate lexical analysis from syntax analysis. Bash 3.2 actually made some progress in overcoming the most archaic features of old shells, but it still has its own share of warts (for example, the last stage of a pipe runs in a subshell rather than at the same level as the script encompassing the pipe).
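
A minimal illustration of that particular wart (this is standard bash behavior; ksh93 runs the last pipeline stage in the current shell):

count=0
printf 'a\nb\nc\n' | while read -r line; do
    count=$((count + 1))     # runs in a subshell in bash
done
echo "$count"                # prints 0 in bash, 3 in ksh93
shopt -s lastpipe            # bash >= 4.2 workaround: run the last stage in the
                             # current shell (effective when job control is off,
                             # i.e. in scripts)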

Some syntax features in the shell are idiosyncratic, as Steve Bourne played with Algol 68 before starting work on the shell. In a way, he proved to be the most influential bad language designer, the designer who has had the most lasting influence on the Unix environment (which does not exonerate the subsequent designers, who probably could have taken a more aggressive stance on the elimination of the initial shell design blunders by marking them as "legacy").

For example, there is very little logic in how different types of blocks are delimited in shell scripts. Conditional statements end with the (broken) classic Algol 68 reversed-keyword syntax: 'if condition; then echo yes; else echo no; fi', but loops are structured like a perverted version of PL/1 (do; ... done;), and individual case branches end with ';;'. Functions have C-style bracketing "{", "}". M. D. McIlroy, as Steve Bourne's manager, should be ashamed. After all, at that time the level of compiler construction knowledge was quite sufficient to avoid such blunders (David Gries' book was published in 1971), and Bell Labs staff were not a bunch of enthusiasts ;-).
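
The inconsistency is easy to see when the forms are lined up side by side:

if [ -f /etc/passwd ]; then echo yes; else echo no; fi   # reversed keyword: fi
for f in /tmp/*; do echo "$f"; done                      # PL/1-style do ... done
case "$1" in start) echo go ;; stop) echo halt ;; esac   # branches end with ;;
greet() { echo hello; }                                  # C-style braces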

Also, the original Bourne shell was almost a pure macro language. It performed variable substitution, tokenization and other operations one line at a time without understanding the underlying syntax. This results in many unexpected side effects. Consider a simple command:
rm $file
If the variable $file accidentally contains a space, it will be treated as two separate arguments to the rm command, with possibly nasty side effects. To fix this, the user has to make sure every use of a variable is enclosed in quotes, like in rm "$file".
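
A quick demonstration of the hazard, using throwaway file names:

file="old file.txt"
touch old file.txt "old file.txt"
rm $file       # word splitting: removes the two files "old" and "file.txt"
rm "$file"     # quoted: removes the single file "old file.txt"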

Variable assignments in the Bourne shell are whitespace sensitive: 'foo=bar' is an assignment, but 'foo = bar' is not. It is an invocation of the command foo with "=" and "bar" as two arguments. This is another strange idiosyncrasy.
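
To see it in action:

foo=bar        # assignment
echo "$foo"    # bar
foo = bar      # runs the command 'foo' with arguments '=' and 'bar'
               # (typically fails with "foo: command not found")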

There is also an overlap between aliases and functions. Aliases are positional macros that are recognized only as the first word of a command, as in the classic alias ll='ls -l'. Because of this, aliases have several limitations: they are not expanded when they are not the first word of a command, their arguments can only be appended at the end of the alias body, and in bash they are not expanded at all in non-interactive shells unless expand_aliases is set.
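
For example (in an interactive shell; as noted, alias expansion is off by default in scripts):

alias ll='ls -l'
ll /etc        # expanded: the alias is the first word of the command
echo ll        # not expanded: prints the literal string "ll"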

Functions are not positional and can in most cases emulate alias functionality:
ll() { ls -l "$@"; }
The curly brackets are a sort of pseudo-command, so omitting the semicolon in the example above results in a syntax error. As there is no clean separation between lexical analysis and syntax analysis, removing the whitespace between the opening bracket and 'ls' will also result in a syntax error.

Since the use of variables as commands is allowed, it is impossible to reliably check the syntax of a script, as substitution can accidentally produce a keyword, as in this example that I found in a paper about fish (not that I like or recommend fish):

if true; then if [ $RANDOM -lt 1024 ]; then END=fi; else END=true; fi; $END
Both bash and zsh try to determine whether the command in the current buffer is finished when the user presses the return key, but because of issues like this they will sometimes fail.

Dr. Nikolai Bezroukov




Old News ;-)


[Nov 17, 2018] hh command man page

The tool was later renamed to hstr.
Notable quotes:
"... Favorite and frequently used commands can be bookmarked ..."
Nov 17, 2018 | www.mankier.com

hh -- easily view, navigate, sort and use your command history with shell history suggest box.

Synopsis

hh [option] [arg1] [arg2]...
hstr [option] [arg1] [arg2]...

Description

hh uses shell history to provide suggest-box-like functionality for commands used in the past. By default it parses the .bash_history file, which is filtered as you type a command substring. Commands are not just filtered, but also ordered by a ranking algorithm that considers the number of occurrences, length and timestamp. Favorite and frequently used commands can be bookmarked. In addition, hh allows removal of commands from history - for instance commands with a typo or with sensitive content.

Options
-h --help
Show help
-n --non-interactive
Print filtered history on standard output and exit
-f --favorites
Show favorites view immediately
-s --show-configuration
Show configuration that can be added to ~/.bashrc
-b --show-blacklist
Show blacklist of commands to be filtered out before history processing
-V --version
Show version information
Keys
pattern
Type to filter shell history.
Ctrl-e
Toggle regular expression and substring search.
Ctrl-t
Toggle case sensitive search.
Ctrl-/ , Ctrl-7
Rotate view of history as provided by Bash, ranked history ordered by the number of occurrences/length/timestamp, and favorites.
Ctrl-f
Add currently selected command to favorites.
Ctrl-l
Make search pattern lowercase or uppercase.
Ctrl-r , UP arrow, DOWN arrow, Ctrl-n , Ctrl-p
Navigate in the history list.
TAB , RIGHT arrow
Choose the currently selected item for completion and let the user edit it on the command prompt.
LEFT arrow
Choose the currently selected item for completion and let the user edit it in an editor (fix command).
ENTER
Choose the currently selected item for completion and execute it.
DEL
Remove the currently selected item from the shell history.
BACKSPACE , Ctrl-h
Delete the last pattern character.
Ctrl-u , Ctrl-w
Delete pattern and search again.
Ctrl-x
Write changes to shell history and exit.
Ctrl-g
Exit with empty prompt.
Environment Variables

hh defines the following environment variables:

HH_CONFIG
Configuration options:

hicolor
Get more colors with this option (default is monochromatic).

monochromatic
Ensure black and white view.

prompt-bottom
Show prompt at the bottom of the screen (default is prompt at the top).

regexp
Filter command history using regular expressions (substring match is default)

substring
Filter command history using substring.

keywords
Filter command history using keywords - an item matches if it contains all of the pattern's keywords, in any order.

casesensitive
Make history filtering case sensitive (it's case insensitive by default).

rawhistory
Show normal history as a default view (metric-based view is shown otherwise).

favorites
Show favorites as a default view (metric-based view is shown otherwise).

duplicates
Show duplicates in rawhistory (duplicates are discarded by default).

blacklist
Load list of commands to skip when processing history from ~/.hh_blacklist (built-in blacklist used otherwise).

big-keys-skip
Skip big history entries i.e. very long lines (default).

big-keys-floor
Use different sorting slot for big keys when building metrics-based view (big keys are skipped by default).

big-keys-exit
Exit (fail) on presence of a big key in history (big keys are skipped by default).

warning
Show warning.

debug
Show debug information.

Example:
export HH_CONFIG=hicolor,regexp,rawhistory

HH_PROMPT
Change prompt string which is user@host$ by default.

Example:
export HH_PROMPT="$ "

Files
~/.hh_favorites
Bookmarked favorite commands.
~/.hh_blacklist
Command blacklist.
Bash Configuration

Optionally add the following lines to ~/.bashrc:

export HH_CONFIG=hicolor         # get more colors
shopt -s histappend              # append new history items to .bash_history
export HISTCONTROL=ignorespace   # leading space hides commands from history
export HISTFILESIZE=10000        # increase history file size (default is 500)
export HISTSIZE=${HISTFILESIZE}  # increase history size (default is 500)
export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"
# if this is interactive shell, then bind hh to Ctrl-r (for Vi mode check doc)
if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hh -- \C-j"'; fi

The prompt command ensures synchronization of the history between BASH memory and history file.

ZSH Configuration

Optionally add the following lines to ~/.zshrc:

export HISTFILE=~/.zsh_history   # ensure history file visibility
export HH_CONFIG=hicolor         # get more colors
bindkey -s "\C-r" "\eqhh\n"  # bind hh to Ctrl-r (for Vi mode check doc, experiment with --)
Examples
hh git
Start `hh` and show only history items containing 'git'.
hh --non-interactive git
Print history items containing 'git' to standard output and exit.
hh --show-configuration >> ~/.bashrc
Append default hh configuration to your Bash profile.
hh --show-blacklist
Show blacklist configured for history processing.
Author

Written by Martin Dvorak <martin.dvorak@mindforger.com>

Bugs

Report bugs to https://github.com/dvorka/hstr/issues

See Also

history(1), bash(1), zsh(1)

Referenced By

The man page hstr(1) is an alias of hh(1).

[Nov 08, 2018] How to split one string into multiple variables in bash shell? [duplicate]

Nov 08, 2018 | stackoverflow.com

Rob I , May 9, 2012 at 19:22

For your second question, see @mkb's comment to my answer below - that's definitely the way to go! – Rob I May 9 '12 at 19:22

Dennis Williamson , Jul 4, 2012 at 16:14

See my edited answer for one way to read individual characters into an array. – Dennis Williamson Jul 4 '12 at 16:14

Nick Weedon , Dec 31, 2015 at 11:04

Here is the same thing in a more concise form: var1=$(cut -f1 -d- <<<$STR) – Nick Weedon Dec 31 '15 at 11:04

Rob I , May 9, 2012 at 17:00

If your solution doesn't have to be general, i.e. only needs to work for strings like your example, you could do:
var1=$(echo $STR | cut -f1 -d-)
var2=$(echo $STR | cut -f2 -d-)

I chose cut here because you could simply extend the code for a few more variables...

crunchybutternut , May 9, 2012 at 17:40

Can you look at my post again and see if you have a solution for the followup question? thanks! – crunchybutternut May 9 '12 at 17:40

mkb , May 9, 2012 at 17:59

You can use cut to cut characters too! cut -c1 for example. – mkb May 9 '12 at 17:59

FSp , Nov 27, 2012 at 10:26

Although this is very simple to read and write, it is a very slow solution because it forces you to read the same data ($STR) twice ... if you care about your script's performance, the @anubhava solution is much better – FSp Nov 27 '12 at 10:26

tripleee , Jan 25, 2016 at 6:47

Apart from being an ugly last-resort solution, this has a bug: You should absolutely use double quotes in echo "$STR" unless you specifically want the shell to expand any wildcards in the string as a side effect. See also stackoverflow.com/questions/10067266/ – tripleee Jan 25 '16 at 6:47

Rob I , Feb 10, 2016 at 13:57

You're right about double quotes of course, though I did point out this solution wasn't general. However I think your assessment is a bit unfair - for some people this solution may be more readable (and hence extensible etc) than some others, and doesn't completely rely on arcane bash feature that wouldn't translate to other shells. I suspect that's why my solution, though less elegant, continues to get votes periodically... – Rob I Feb 10 '16 at 13:57

Dennis Williamson , May 10, 2012 at 3:14

read with IFS are perfect for this:
$ IFS=- read var1 var2 <<< ABCDE-123456
$ echo "$var1"
ABCDE
$ echo "$var2"
123456

Edit:

Here is how you can read each individual character into array elements:

$ read -a foo <<<"$(echo "ABCDE-123456" | sed 's/./& /g')"

Dump the array:

$ declare -p foo
declare -a foo='([0]="A" [1]="B" [2]="C" [3]="D" [4]="E" [5]="-" [6]="1" [7]="2" [8]="3" [9]="4" [10]="5" [11]="6")'

If there are spaces in the string:

$ IFS=$'\v' read -a foo <<<"$(echo "ABCDE 123456" | sed 's/./&\v/g')"
$ declare -p foo
declare -a foo='([0]="A" [1]="B" [2]="C" [3]="D" [4]="E" [5]=" " [6]="1" [7]="2" [8]="3" [9]="4" [10]="5" [11]="6")'

insecure , Apr 30, 2014 at 7:51

Great, the elegant bash-only way, without unnecessary forks. – insecure Apr 30 '14 at 7:51

Martin Serrano , Jan 11 at 4:34

this solution also has the benefit that if delimiter is not present, the var2 will be empty – Martin Serrano Jan 11 at 4:34

mkb , May 9, 2012 at 17:02

If you know it's going to be just two fields, you can skip the extra subprocesses like this:
var1=${STR%-*}
var2=${STR#*-}

What does this do? ${STR%-*} deletes the shortest substring of $STR that matches the pattern -* starting from the end of the string. ${STR#*-} does the same, but with the *- pattern and starting from the beginning of the string. They each have counterparts %% and ## which find the longest anchored pattern match. If anyone has a helpful mnemonic to remember which does which, let me know! I always have to try both to remember.
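
A quick way to see all four operators at once on a three-field string:

STR="a-b-c"
echo "${STR%-*}"     # a-b  (shortest match trimmed from the end)
echo "${STR%%-*}"    # a    (longest match trimmed from the end)
echo "${STR#*-}"     # b-c  (shortest match trimmed from the front)
echo "${STR##*-}"    # c    (longest match trimmed from the front)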

Jens , Jan 30, 2015 at 15:17

Plus 1 For knowing your POSIX shell features, avoiding expensive forks and pipes, and the absence of bashisms. – Jens Jan 30 '15 at 15:17

Steven Lu , May 1, 2015 at 20:19

Dunno about "absence of bashisms" considering that this is already moderately cryptic .... if your delimiter is a newline instead of a hyphen, then it becomes even more cryptic. On the other hand, it works with newlines , so there's that. – Steven Lu May 1 '15 at 20:19

mkb , Mar 9, 2016 at 17:30

@KErlandsson: done – mkb Mar 9 '16 at 17:30

mombip , Aug 9, 2016 at 15:58

I've finally found documentation for it: Shell-Parameter-Expansion – mombip Aug 9 '16 at 15:58

DS. , Jan 13, 2017 at 19:56

Mnemonic: "#" is to the left of "%" on a standard keyboard, so "#" removes a prefix (on the left), and "%" removes a suffix (on the right). – DS. Jan 13 '17 at 19:56

tripleee , May 9, 2012 at 17:57

Sounds like a job for set with a custom IFS .
IFS=-
set $STR
var1=$1
var2=$2

(You will want to do this in a function with a local IFS so you don't mess up other parts of your script where you require IFS to be what you expect.)
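
A minimal sketch of that containment (the helper name split_pair is illustrative, not from the original answer):

split_pair() {
    local IFS=-          # the IFS change is local to the function
    set -- $1            # word splitting happens on the unquoted expansion
    var1=$1
    var2=$2
}
split_pair "ABCDE-123456"
echo "$var1" "$var2"     # ABCDE 123456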

Rob I , May 9, 2012 at 19:20

Nice - I knew about $IFS but hadn't seen how it could be used. – Rob I May 9 '12 at 19:20

Sigg3.net , Jun 19, 2013 at 8:08

I used triplee's example and it worked exactly as advertised! Just change last two lines to myvar1=`echo $1` && myvar2=`echo $2` if you need to store them throughout a script with several "thrown" variables. – Sigg3.net Jun 19 '13 at 8:08

tripleee , Jun 19, 2013 at 13:25

No, don't use a useless echo in backticks . – tripleee Jun 19 '13 at 13:25

Daniel Andersson , Mar 27, 2015 at 6:46

This is a really sweet solution if we need to write something that is not Bash specific. To handle IFS troubles, one can add OLDIFS=$IFS at the beginning before overwriting it, and then add IFS=$OLDIFS just after the set line. – Daniel Andersson Mar 27 '15 at 6:46

tripleee , Mar 27, 2015 at 6:58

FWIW the link above is broken. I was lazy and careless. The canonical location still works; iki.fi/era/unix/award.html#echo – tripleee Mar 27 '15 at 6:58

anubhava , May 9, 2012 at 17:09

Using bash regex capabilities:
re="^([^-]+)-(.*)$"
[[ "ABCDE-123456" =~ $re ]] && var1="${BASH_REMATCH[1]}" && var2="${BASH_REMATCH[2]}"
echo $var1
echo $var2

OUTPUT

ABCDE
123456

Cometsong , Oct 21, 2016 at 13:29

Love pre-defining the re for later use(s)! – Cometsong Oct 21 '16 at 13:29

Archibald , Nov 12, 2012 at 11:03

string="ABCDE-123456"
IFS=- # use "local IFS=-" inside the function
set $string
echo $1 # >>> ABCDE
echo $2 # >>> 123456

tripleee , Mar 27, 2015 at 7:02

Hmmm, isn't this just a restatement of my answer ? – tripleee Mar 27 '15 at 7:02

Archibald , Sep 18, 2015 at 12:36

Actually yes. I just clarified it a bit. – Archibald Sep 18 '15 at 12:36

[Nov 08, 2018] How to split a string in shell and get the last field

Nov 08, 2018 | stackoverflow.com

cd1 , Jul 1, 2010 at 23:29

Suppose I have the string 1:2:3:4:5 and I want to get its last field ( 5 in this case). How do I do that using Bash? I tried cut , but I don't know how to specify the last field with -f .

Stephen , Jul 2, 2010 at 0:05

You can use string operators :
$ foo=1:2:3:4:5
$ echo ${foo##*:}
5

This trims everything from the front until a ':', greedily.

${foo  <-- from variable foo
  ##   <-- greedy front trim
  *    <-- matches anything
  :    <-- until the last ':'
 }

eckes , Jan 23, 2013 at 15:23

While this is working for the given problem, the answer of William below ( stackoverflow.com/a/3163857/520162 ) also returns 5 if the string is 1:2:3:4:5: (while using the string operators yields an empty result). This is especially handy when parsing paths that could contain (or not) a finishing / character. – eckes Jan 23 '13 at 15:23

Dobz , Jun 25, 2014 at 11:44

How would you then do the opposite of this? to echo out '1:2:3:4:'? – Dobz Jun 25 '14 at 11:44

Mihai Danila , Jul 9, 2014 at 14:07

And how does one keep the part before the last separator? Apparently by using ${foo%:*} . # - from beginning; % - from end. # , % - shortest match; ## , %% - longest match. – Mihai Danila Jul 9 '14 at 14:07

Putnik , Feb 11, 2016 at 22:33

If i want to get the last element from path, how should I use it? echo ${pwd##*/} does not work. – Putnik Feb 11 '16 at 22:33

Stan Strum , Dec 17, 2017 at 4:22

@Putnik that command sees pwd as a variable. Try dir=$(pwd); echo ${dir##*/} . Works for me! – Stan Strum Dec 17 '17 at 4:22

a3nm , Feb 3, 2012 at 8:39

Another way is to reverse before and after cut :
$ echo ab:cd:ef | rev | cut -d: -f1 | rev
ef

This makes it very easy to get the last but one field, or any range of fields numbered from the end.
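
For example, the last-but-one field:

$ echo 1:2:3:4:5 | rev | cut -d: -f2 | rev
4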

Dannid , Jan 14, 2013 at 20:50

This answer is nice because it uses 'cut', which the author is (presumably) already familiar. Plus, I like this answer because I am using 'cut' and had this exact question, hence finding this thread via search. – Dannid Jan 14 '13 at 20:50

funroll , Aug 12, 2013 at 19:51

Some cut-and-paste fodder for people using spaces as delimiters: echo "1 2 3 4" | rev | cut -d " " -f1 | rev – funroll Aug 12 '13 at 19:51

EdgeCaseBerg , Sep 8, 2013 at 5:01

the rev | cut -d -f1 | rev is so clever! Thanks! Helped me a bunch (my use case was rev | cut -d ' ' -f 2- | rev ) – EdgeCaseBerg Sep 8 '13 at 5:01

Anarcho-Chossid , Sep 16, 2015 at 15:54

Wow. Beautiful and dark magic. – Anarcho-Chossid Sep 16 '15 at 15:54

shearn89 , Aug 17, 2017 at 9:27

I always forget about rev , was just what I needed! cut -b20- | rev | cut -b10- | rev – shearn89 Aug 17 '17 at 9:27

William Pursell , Jul 2, 2010 at 7:09

It's difficult to get the last field using cut, but here's (one set of) solutions in awk and perl
$ echo 1:2:3:4:5 | awk -F: '{print $NF}'
5
$ echo 1:2:3:4:5 | perl -F: -wane 'print $F[-1]'
5

eckes , Jan 23, 2013 at 15:20

great advantage of this solution over the accepted answer: it also matches paths that contain or do not contain a finishing / character: /a/b/c/d and /a/b/c/d/ yield the same result ( d ) when processing pwd | awk -F/ '{print $NF}' . The accepted answer results in an empty result in the case of /a/b/c/d/ – eckes Jan 23 '13 at 15:20

stamster , May 21 at 11:52

@eckes In case of AWK solution, on GNU bash, version 4.3.48(1)-release that's not true, as it matters whenever you have trailing slash or not. Simply put AWK will use / as delimiter, and if your path is /my/path/dir/ it will use value after last delimiter, which is simply an empty string. So it's best to avoid trailing slash if you need to do such a thing like I do. – stamster May 21 at 11:52

Nicholas M T Elliott , Jul 1, 2010 at 23:39

Assuming fairly simple usage (no escaping of the delimiter, for example), you can use grep:
$ echo "1:2:3:4:5" | grep -oE "[^:]+$"
5

Breakdown - find all the characters not the delimiter ([^:]) at the end of the line ($). -o only prints the matching part.

Dennis Williamson , Jul 2, 2010 at 0:05

One way:
var1="1:2:3:4:5"
var2=${var1##*:}

Another, using an array:

var1="1:2:3:4:5"
saveIFS=$IFS
IFS=":"
var2=($var1)
IFS=$saveIFS
var2=${var2[@]: -1}

Yet another with an array:

var1="1:2:3:4:5"
saveIFS=$IFS
IFS=":"
var2=($var1)
IFS=$saveIFS
count=${#var2[@]}
var2=${var2[$count-1]}

Using Bash (version >= 3.2) regular expressions:

var1="1:2:3:4:5"
[[ $var1 =~ :([^:]*)$ ]]
var2=${BASH_REMATCH[1]}

liuyang1 , Mar 24, 2015 at 6:02

Thanks so much for array style, as I need this feature, but not have cut, awk these utils. – liuyang1 Mar 24 '15 at 6:02

user3133260 , Dec 24, 2013 at 19:04

$ echo "a b c d e" | tr ' ' '\n' | tail -1
e

Simply translate the delimiter into a newline and choose the last entry with tail -1 .

Yajo , Jul 30, 2014 at 10:13

It will fail if the last item contains a \n , but for most cases is the most readable solution. – Yajo Jul 30 '14 at 10:13

Rafael , Nov 10, 2016 at 10:09

Using sed :
$ echo '1:2:3:4:5' | sed 's/.*://' # => 5

$ echo '' | sed 's/.*://' # => (empty)

$ echo ':' | sed 's/.*://' # => (empty)
$ echo ':b' | sed 's/.*://' # => b
$ echo '::c' | sed 's/.*://' # => c

$ echo 'a' | sed 's/.*://' # => a
$ echo 'a:' | sed 's/.*://' # => (empty)
$ echo 'a:b' | sed 's/.*://' # => b
$ echo 'a::c' | sed 's/.*://' # => c

Ab Irato , Nov 13, 2013 at 16:10

If your last field is a single character, you could do this:
a="1:2:3:4:5"

echo ${a: -1}
echo ${a:(-1)}

Check string manipulation in bash .

gniourf_gniourf , Nov 13, 2013 at 16:15

This doesn't work: it gives the last character of a , not the last field . – gniourf_gniourf Nov 13 '13 at 16:15

Ab Irato , Nov 25, 2013 at 13:25

True, that's the idea, if you know the length of the last field it's good. If not you have to use something else... – Ab Irato Nov 25 '13 at 13:25

sphakka , Jan 25, 2016 at 16:24

Interesting, I didn't know of these particular Bash string manipulations. It also resembles to Python's string/array slicing . – sphakka Jan 25 '16 at 16:24

ghostdog74 , Jul 2, 2010 at 1:16

Using Bash.
$ var1="1:2:3:4:0"
$ IFS=":"
$ set -- $var1
$ eval echo  \$${#}
0

Sopalajo de Arrierez , Dec 24, 2014 at 5:04

I would buy some details about this method, please :-) . – Sopalajo de Arrierez Dec 24 '14 at 5:04

Rafa , Apr 27, 2017 at 22:10

Could have used echo ${!#} instead of eval echo \$${#} . – Rafa Apr 27 '17 at 22:10

Crytis , Dec 7, 2016 at 6:51

echo "a:b:c:d:e"|xargs -d : -n1|tail -1

First use xargs to split the string using ":"; -n1 means each line has only one part. Then print the last part.

BDL , Dec 7, 2016 at 13:47

Although this might solve the problem, one should always add an explanation to it. – BDL Dec 7 '16 at 13:47

Crytis , Jun 7, 2017 at 9:13

already added.. – Crytis Jun 7 '17 at 9:13

021 , Apr 26, 2016 at 11:33

There are many good answers here, but still I want to share this one using basename :
 basename $(echo "a:b:c:d:e" | tr ':' '/')

However it will fail if there are already some '/' in your string . If slash / is your delimiter then you just have to (and should) use basename.

It's not the best answer but it just shows how you can be creative using bash commands.

Nahid Akbar , Jun 22, 2012 at 2:55

for x in `echo $str | tr ";" "\n"`; do echo $x; done

chepner , Jun 22, 2012 at 12:58

This runs into problems if there is whitespace in any of the fields. Also, it does not directly address the question of retrieving the last field. – chepner Jun 22 '12 at 12:58

Christoph Böddeker , Feb 19 at 15:50

For those that comfortable with Python, https://github.com/Russell91/pythonpy is a nice choice to solve this problem.
$ echo "a:b:c:d:e" | py -x 'x.split(":")[-1]'

From the pythonpy help: -x treat each row of stdin as x .

With that tool, it is easy to write python code that gets applied to the input.

baz , Nov 24, 2017 at 19:27

a solution using the read builtin
IFS=':' read -a field <<< "1:2:3:4:5"
echo ${field[4]}

[Nov 08, 2018] How do I split a string on a delimiter in Bash?

Notable quotes:
"... Bash shell script split array ..."
"... associative array ..."
"... pattern substitution ..."
"... Debian GNU/Linux ..."
Nov 08, 2018 | stackoverflow.com

stefanB , May 28, 2009 at 2:03

I have this string stored in a variable:
IN="bla@some.com;john@home.com"

Now I would like to split the strings by ; delimiter so that I have:

ADDR1="bla@some.com"
ADDR2="john@home.com"

I don't necessarily need the ADDR1 and ADDR2 variables. If they are elements of an array that's even better.


After suggestions from the answers below, I ended up with the following which is what I was after:

#!/usr/bin/env bash

IN="bla@some.com;john@home.com"

mails=$(echo $IN | tr ";" "\n")

for addr in $mails
do
    echo "> [$addr]"
done

Output:

> [bla@some.com]
> [john@home.com]

There was a solution involving setting Internal_field_separator (IFS) to ; . I am not sure what happened with that answer, how do you reset IFS back to default?

RE: IFS solution, I tried this and it works, I keep the old IFS and then restore it:

IN="bla@some.com;john@home.com"

OIFS=$IFS
IFS=';'
mails2=$IN
for x in $mails2
do
    echo "> [$x]"
done

IFS=$OIFS

BTW, when I tried

mails2=($IN)

I only got the first string when printing it in loop, without brackets around $IN it works.

Brooks Moses , May 1, 2012 at 1:26

With regards to your "Edit2": You can simply "unset IFS" and it will return to the default state. There's no need to save and restore it explicitly unless you have some reason to expect that it's already been set to a non-default value. Moreover, if you're doing this inside a function (and, if you aren't, why not?), you can set IFS as a local variable and it will return to its previous value once you exit the function. – Brooks Moses May 1 '12 at 1:26

dubiousjim , May 31, 2012 at 5:21

@BrooksMoses: (a) +1 for using local IFS=... where possible; (b) -1 for unset IFS , this doesn't exactly reset IFS to its default value, though I believe an unset IFS behaves the same as the default value of IFS ($' \t\n'), however it seems bad practice to be assuming blindly that your code will never be invoked with IFS set to a custom value; (c) another idea is to invoke a subshell: (IFS=$custom; ...) when the subshell exits IFS will return to whatever it was originally. – dubiousjim May 31 '12 at 5:21

nicooga , Mar 7, 2016 at 15:32

I just want to have a quick look at the paths to decide where to throw an executable, so I resorted to run ruby -e "puts ENV.fetch('PATH').split(':')" . If you want to stay pure bash won't help but using any scripting language that has a built-in split is easier. – nicooga Mar 7 '16 at 15:32

Jeff , Apr 22 at 17:51

This is kind of a drive-by comment, but since the OP used email addresses as the example, has anyone bothered to answer it in a way that is fully RFC 5322 compliant, namely that any quoted string can appear before the @ which means you're going to need regular expressions or some other kind of parser instead of naive use of IFS or other simplistic splitter functions. – Jeff Apr 22 at 17:51

user2037659 , Apr 26 at 20:15

for x in $(IFS=';';echo $IN); do echo "> [$x]"; done – user2037659 Apr 26 at 20:15

Johannes Schaub - litb , May 28, 2009 at 2:23

You can set the internal field separator (IFS) variable, and then let it parse into an array. When this happens in a command, the assignment to IFS only applies to that single command's environment (that of read). It then parses the input according to the IFS variable value into an array, which we can then iterate over.
IFS=';' read -ra ADDR <<< "$IN"
for i in "${ADDR[@]}"; do
    # process "$i"
done

It will parse one line of items separated by ; , pushing them into an array. To process the whole of $IN, one line of ;-separated input at a time:

 while IFS=';' read -ra ADDR; do
      for i in "${ADDR[@]}"; do
          # process "$i"
      done
 done <<< "$IN"

Chris Lutz , May 28, 2009 at 2:25

This is probably the best way. How long will IFS persist in its current value, can it mess up my code by being set when it shouldn't be, and how can I reset it when I'm done with it? – Chris Lutz May 28 '09 at 2:25

Johannes Schaub - litb , May 28, 2009 at 3:04

now after the fix applied, only within the duration of the read command :) – Johannes Schaub - litb May 28 '09 at 3:04

lhunath , May 28, 2009 at 6:14

You can read everything at once without using a while loop: read -r -d '' -a addr <<< "$in" # The -d '' is key here, it tells read not to stop at the first newline (which is the default -d) but to continue until EOF or a NULL byte (which only occur in binary data). – lhunath May 28 '09 at 6:14

Charles Duffy , Jul 6, 2013 at 14:39

@LucaBorrione Setting IFS on the same line as the read with no semicolon or other separator, as opposed to in a separate command, scopes it to that command -- so it's always "restored"; you don't need to do anything manually. – Charles Duffy Jul 6 '13 at 14:39

chepner , Oct 2, 2014 at 3:50

@imagineerThis There is a bug involving herestrings and local changes to IFS that requires $IN to be quoted. The bug is fixed in bash 4.3. – chepner Oct 2 '14 at 3:50

palindrom , Mar 10, 2011 at 9:00

Taken from Bash shell script split array :
IN="bla@some.com;john@home.com"
arrIN=(${IN//;/ })

Explanation:

This construction replaces all occurrences of ';' (the initial // means global replace) in the string IN with ' ' (a single space), then interprets the space-delimited string as an array (that's what the surrounding parentheses do).

The syntax used inside of the curly braces to replace each ';' character with a ' ' character is called Parameter Expansion .

There are some common gotchas:

  1. If the original string has spaces, you will need to use IFS :
    • IFS=':'; arrIN=($IN); unset IFS;
  2. If the original string has spaces and the delimiter is a new line, you can set IFS with:
    • IFS=$'\n'; arrIN=($IN); unset IFS;

Oz123 , Mar 21, 2011 at 18:50

I just want to add: this is the simplest of all, you can access array elements with ${arrIN[1]} (starting from zeros of course) – Oz123 Mar 21 '11 at 18:50

KomodoDave , Jan 5, 2012 at 15:13

Found it: the technique of modifying a variable within a ${} is known as 'parameter expansion'. – KomodoDave Jan 5 '12 at 15:13

qbolec , Feb 25, 2013 at 9:12

Does it work when the original string contains spaces? – qbolec Feb 25 '13 at 9:12

Ethan , Apr 12, 2013 at 22:47

No, I don't think this works when there are also spaces present... it's converting the ',' to ' ' and then building a space-separated array. – Ethan Apr 12 '13 at 22:47

Charles Duffy , Jul 6, 2013 at 14:39

This is a bad approach for other reasons: For instance, if your string contains ;*; , then the * will be expanded to a list of filenames in the current directory. -1 – Charles Duffy Jul 6 '13 at 14:39

Chris Lutz , May 28, 2009 at 2:09

If you don't mind processing them immediately, I like to do this:
for i in $(echo $IN | tr ";" "\n")
do
  # process
done

You could use this kind of loop to initialize an array, but there's probably an easier way to do it. Hope this helps, though.

Chris Lutz , May 28, 2009 at 2:42

You should have kept the IFS answer. It taught me something I didn't know, and it definitely made an array, whereas this just makes a cheap substitute. – Chris Lutz May 28 '09 at 2:42

Johannes Schaub - litb , May 28, 2009 at 2:59

I see. Yeah i find doing these silly experiments, i'm going to learn new things each time i'm trying to answer things. I've edited stuff based on #bash IRC feedback and undeleted :) – Johannes Schaub - litb May 28 '09 at 2:59

lhunath , May 28, 2009 at 6:12

-1, you're obviously not aware of wordsplitting, because it's introducing two bugs in your code. one is when you don't quote $IN and the other is when you pretend a newline is the only delimiter used in wordsplitting. You are iterating over every WORD in IN, not every line, and DEFINATELY not every element delimited by a semicolon, though it may appear to have the side-effect of looking like it works. – lhunath May 28 '09 at 6:12

Johannes Schaub - litb , May 28, 2009 at 17:00

You could change it to echo "$IN" | tr ';' '\n' | while read -r ADDY; do # process "$ADDY"; done to make him lucky, i think :) Note that this will fork, and you can't change outer variables from within the loop (that's why i used the <<< "$IN" syntax) then – Johannes Schaub - litb May 28 '09 at 17:00

mklement0 , Apr 24, 2013 at 14:13

To summarize the debate in the comments: Caveats for general use : the shell applies word splitting and expansions to the string, which may be undesired; just try it with. IN="bla@some.com;john@home.com;*;broken apart" . In short: this approach will break, if your tokens contain embedded spaces and/or chars. such as * that happen to make a token match filenames in the current folder. – mklement0 Apr 24 '13 at 14:13

F. Hauri , Apr 13, 2013 at 14:20

Compatible answer

For this SO question, there are already a lot of different ways to do this in bash. But bash has many special features, so-called bashisms, that work well but won't work in any other shell.

In particular, arrays, associative arrays, and pattern substitution are pure bashisms and may not work under other shells.

On my Debian GNU/Linux, there is a standard shell called dash, but I know many people who like to use ksh.

Finally, in very small environments, there is a special tool called busybox with its own shell interpreter (ash).

Requested string

The string sample in SO question is:

IN="bla@some.com;john@home.com"

As this could be useful with whitespaces and as whitespaces could modify the result of the routine, I prefer to use this sample string:

 IN="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
Split string based on delimiter in bash (version >=4.2)

Under pure bash, we may use arrays and IFS :

var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
oIFS="$IFS"
IFS=";"
declare -a fields=($var)
IFS="$oIFS"
unset oIFS

IFS=\; read -a fields <<<"$var"

Using this syntax under recent bash doesn't change $IFS for the current session, but only for the current command:

set | grep ^IFS=
IFS=$' \t\n'

Now the string var is split and stored into an array (named fields ):

set | grep ^fields=\\\|^var=
fields=([0]="bla@some.com" [1]="john@home.com" [2]="Full Name <fulnam@other.org>")
var='bla@some.com;john@home.com;Full Name <fulnam@other.org>'

We could request for variable content with declare -p :

declare -p var fields
declare -- var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
declare -a fields=([0]="bla@some.com" [1]="john@home.com" [2]="Full Name <fulnam@other.org>")

read is the quickest way to do the split, because there are no forks and no external resources called.

From there, you could use the syntax you already know for processing each field:

for x in "${fields[@]}";do
    echo "> [$x]"
    done
> [bla@some.com]
> [john@home.com]
> [Full Name <fulnam@other.org>]

or drop each field after processing (I like this shifting approach):

while [ "$fields" ] ;do
    echo "> [$fields]"
    fields=("${fields[@]:1}")
    done
> [bla@some.com]
> [john@home.com]
> [Full Name <fulnam@other.org>]

or even for simple printout (shorter syntax):

printf "> [%s]\n" "${fields[@]}"
> [bla@some.com]
> [john@home.com]
> [Full Name <fulnam@other.org>]
Split string based on delimiter in shell

But if you would write something usable under many shells, you have to not use bashisms .

There is a syntax, used in many shells, for splitting a string across first or last occurrence of a substring:

${var#*SubStr}  # will drop begin of string up to first occur of `SubStr`
${var##*SubStr} # will drop begin of string up to last occur of `SubStr`
${var%SubStr*}  # will drop part of string from last occur of `SubStr` to the end
${var%%SubStr*} # will drop part of string from first occur of `SubStr` to the end

(The absence of this syntax elsewhere is the main reason for publishing my answer ;)

As pointed out by Score_Under :

# and % delete the shortest possible matching string, and

## and %% delete the longest possible.

This little sample script works well under bash, dash, ksh, and busybox, and was tested under Mac OS's bash too:

var="bla@some.com;john@home.com;Full Name <fulnam@other.org>"
while [ "$var" ] ;do
    iter=${var%%;*}
    echo "> [$iter]"
    [ "$var" = "$iter" ] && \
        var='' || \
        var="${var#*;}"
  done
> [bla@some.com]
> [john@home.com]
> [Full Name <fulnam@other.org>]

Have fun!

Score_Under , Apr 28, 2015 at 16:58

The # , ## , % , and %% substitutions have what is IMO an easier explanation to remember (for how much they delete): # and % delete the shortest possible matching string, and ## and %% delete the longest possible. – Score_Under Apr 28 '15 at 16:58

sorontar , Oct 26, 2016 at 4:36

The IFS=\; read -a fields <<<"$var" fails on newlines and add a trailing newline. The other solution removes a trailing empty field. – sorontar Oct 26 '16 at 4:36

Eric Chen , Aug 30, 2017 at 17:50

The shell delimiter is the most elegant answer, period. – Eric Chen Aug 30 '17 at 17:50

sancho.s , Oct 4 at 3:42

Could the last alternative be used with a list of field separators set somewhere else? For instance, I mean to use this as a shell script, and pass a list of field separators as a positional parameter. – sancho.s Oct 4 at 3:42

F. Hauri , Oct 4 at 7:47

Yes, in a loop: for sep in "#" "ł" "@" ; do ... var="${var#*$sep}" ... – F. Hauri Oct 4 at 7:47

DougW , Apr 27, 2015 at 18:20

I've seen a couple of answers referencing the cut command, but they've all been deleted. It's a little odd that nobody has elaborated on that, because I think it's one of the more useful commands for doing this type of thing, especially for parsing delimited log files.

In the case of splitting this specific example into a bash script array, tr is probably more efficient, but cut can be used, and is more effective if you want to pull specific fields from the middle.

Example:

$ echo "bla@some.com;john@home.com" | cut -d ";" -f 1
bla@some.com
$ echo "bla@some.com;john@home.com" | cut -d ";" -f 2
john@home.com

You can obviously put that into a loop, and iterate the -f parameter to pull each field independently.

This gets more useful when you have a delimited log file with rows like this:

2015-04-27|12345|some action|an attribute|meta data

cut is very handy to be able to cat this file and select a particular field for further processing.

MisterMiyagi , Nov 2, 2016 at 8:42

Kudos for using cut , it's the right tool for the job! Much cleaner than any of those shell hacks. – MisterMiyagi Nov 2 '16 at 8:42

uli42 , Sep 14, 2017 at 8:30

This approach will only work if you know the number of elements in advance; you'd need to program some more logic around it. It also runs an external tool for every element. – uli42 Sep 14 '17 at 8:30

Louis Loudog Trottier , May 10 at 4:20

Exactly what I was looking for, trying to avoid empty strings in a csv. Now I can point to the exact 'column' value as well. Works with IFS already used in a loop. Better than expected for my situation. – Louis Loudog Trottier May 10 at 4:20

, May 28, 2009 at 10:31

How about this approach:
IN="bla@some.com;john@home.com" 
set -- "$IN" 
IFS=";"; declare -a Array=($*) 
echo "${Array[@]}" 
echo "${Array[0]}" 
echo "${Array[1]}"

Source

Yzmir Ramirez , Sep 5, 2011 at 1:06

+1 ... but I wouldn't name the variable "Array" ... pet peev I guess. Good solution. – Yzmir Ramirez Sep 5 '11 at 1:06

ata , Nov 3, 2011 at 22:33

+1 ... but the "set" and declare -a are unnecessary. You could as well have used just IFS=";" && Array=($IN) – ata Nov 3 '11 at 22:33

Luca Borrione , Sep 3, 2012 at 9:26

+1 Only a side note: shouldn't it be recommendable to keep the old IFS and then restore it? (as shown by stefanB in his edit3) people landing here (sometimes just copying and pasting a solution) might not think about this – Luca Borrione Sep 3 '12 at 9:26

Charles Duffy , Jul 6, 2013 at 14:44

-1: First, @ata is right that most of the commands in this do nothing. Second, it uses word-splitting to form the array, and doesn't do anything to inhibit glob-expansion when doing so (so if you have glob characters in any of the array elements, those elements are replaced with matching filenames). – Charles Duffy Jul 6 '13 at 14:44

John_West , Jan 8, 2016 at 12:29

Suggest to use $'...' : IN=$'bla@some.com;john@home.com;bet <d@\ns* kl.com>' . Then echo "${Array[2]}" will print a string with newline. set -- "$IN" is also neccessary in this case. Yes, to prevent glob expansion, the solution should include set -f . – John_West Jan 8 '16 at 12:29

Steven Lizarazo , Aug 11, 2016 at 20:45

This worked for me:
string="1;2"
echo $string | cut -d';' -f1 # output is 1
echo $string | cut -d';' -f2 # output is 2

Pardeep Sharma , Oct 10, 2017 at 7:29

this is short and sweet :) – Pardeep Sharma Oct 10 '17 at 7:29

space earth , Oct 17, 2017 at 7:23

Thanks...Helped a lot – space earth Oct 17 '17 at 7:23

mojjj , Jan 8 at 8:57

cut works only with a single char as delimiter. – mojjj Jan 8 at 8:57

lothar , May 28, 2009 at 2:12

echo "bla@some.com;john@home.com" | sed -e 's/;/\n/g'
bla@some.com
john@home.com

Luca Borrione , Sep 3, 2012 at 10:08

-1 what if the string contains spaces? for example IN="this is first line; this is second line" arrIN=( $( echo "$IN" | sed -e 's/;/\n/g' ) ) will produce an array of 8 elements in this case (an element for each word space separated), rather than 2 (an element for each line semi colon separated) – Luca Borrione Sep 3 '12 at 10:08

lothar , Sep 3, 2012 at 17:33

@Luca No the sed script creates exactly two lines. What creates the multiple entries for you is when you put it into a bash array (which splits on white space by default) – lothar Sep 3 '12 at 17:33

Luca Borrione , Sep 4, 2012 at 7:09

That's exactly the point: the OP needs to store entries into an array to loop over it, as you can see in his edits. I think your (good) answer missed to mention to use arrIN=( $( echo "$IN" | sed -e 's/;/\n/g' ) ) to achieve that, and to advice to change IFS to IFS=$'\n' for those who land here in the future and needs to split a string containing spaces. (and to restore it back afterwards). :) – Luca Borrione Sep 4 '12 at 7:09

lothar , Sep 4, 2012 at 16:55

@Luca Good point. However the array assignment was not in the initial question when I wrote up that answer. – lothar Sep 4 '12 at 16:55

Ashok , Sep 8, 2012 at 5:01

This also works:
IN="bla@some.com;john@home.com"
echo ADD1=`echo $IN | cut -d \; -f 1`
echo ADD2=`echo $IN | cut -d \; -f 2`

Be careful, this solution is not always correct. In case you pass "bla@some.com" only, it will assign it to both ADD1 and ADD2.

fersarr , Mar 3, 2016 at 17:17

You can use -s to avoid the mentioned problem: superuser.com/questions/896800/ "-f, --fields=LIST select only these fields; also print any line that contains no delimiter character, unless the -s option is specified" – fersarr Mar 3 '16 at 17:17

Tony , Jan 14, 2013 at 6:33

I think AWK is the best and most efficient command to resolve your problem. AWK is included by default in almost every Linux distribution.
echo "bla@some.com;john@home.com" | awk -F';' '{print $1,$2}'

will give

bla@some.com john@home.com

Of course you can store each email address by redefining the awk print field.

Jaro , Jan 7, 2014 at 21:30

Or even simpler: echo "bla@some.com;john@home.com" | awk 'BEGIN{RS=";"} {print}' – Jaro Jan 7 '14 at 21:30

Aquarelle , May 6, 2014 at 21:58

@Jaro This worked perfectly for me when I had a string with commas and needed to reformat it into lines. Thanks. – Aquarelle May 6 '14 at 21:58

Eduardo Lucio , Aug 5, 2015 at 12:59

It worked in this scenario -> "echo "$SPLIT_0" | awk -F' inode=' '{print $1}'"! I had problems when trying to use strings (" inode=") instead of characters (";"). $1, $2, $3, $4 are set as positions in an array! If there is a way of setting an array... better! Thanks! – Eduardo Lucio Aug 5 '15 at 12:59

Tony , Aug 6, 2015 at 2:42

@EduardoLucio, what I'm thinking about is maybe you can first replace your delimiter inode= into ; for example by sed -i 's/inode\=/\;/g' your_file_to_process , then define -F';' when apply awk , hope that can help you. – Tony Aug 6 '15 at 2:42

nickjb , Jul 5, 2011 at 13:41

A different take on Darron's answer , this is how I do it:
IN="bla@some.com;john@home.com"
read ADDR1 ADDR2 <<<$(IFS=";"; echo $IN)

ColinM , Sep 10, 2011 at 0:31

This doesn't work. – ColinM Sep 10 '11 at 0:31

nickjb , Oct 6, 2011 at 15:33

I think it does! Run the commands above and then "echo $ADDR1 ... $ADDR2" and i get "bla@some.com ... john@home.com" output – nickjb Oct 6 '11 at 15:33

Nick , Oct 28, 2011 at 14:36

This worked REALLY well for me... I used it to itterate over an array of strings which contained comma separated DB,SERVER,PORT data to use mysqldump. – Nick Oct 28 '11 at 14:36

dubiousjim , May 31, 2012 at 5:28

Diagnosis: the IFS=";" assignment exists only in the $(...; echo $IN) subshell; this is why some readers (including me) initially think it won't work. I assumed that all of $IN was getting slurped up by ADDR1. But nickjb is correct; it does work. The reason is that echo $IN command parses its arguments using the current value of $IFS, but then echoes them to stdout using a space delimiter, regardless of the setting of $IFS. So the net effect is as though one had called read ADDR1 ADDR2 <<< "bla@some.com john@home.com" (note the input is space-separated not ;-separated). – dubiousjim May 31 '12 at 5:28

sorontar , Oct 26, 2016 at 4:43

This fails on spaces and newlines, and also expand wildcards * in the echo $IN with an unquoted variable expansion. – sorontar Oct 26 '16 at 4:43

gniourf_gniourf , Jun 26, 2014 at 9:11

In Bash, a bullet proof way, that will work even if your variable contains newlines:
IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")

Look:

$ in=$'one;two three;*;there is\na newline\nin this field'
$ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
$ declare -p array
declare -a array='([0]="one" [1]="two three" [2]="*" [3]="there is
a newline
in this field")'

The trick for this to work is to use the -d option of read (delimiter) with an empty delimiter, so that read is forced to read everything it's fed. And we feed read with exactly the content of the variable in , with no trailing newline, thanks to printf . Note that we're also putting the delimiter in printf to ensure that the string passed to read has a trailing delimiter. Without it, read would trim potential trailing empty fields:

$ in='one;two;three;'    # there's an empty field
$ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
$ declare -p array
declare -a array='([0]="one" [1]="two" [2]="three" [3]="")'

the trailing empty field is preserved.


Update for Bash≥4.4

Since Bash 4.4, the builtin mapfile (aka readarray ) supports the -d option to specify a delimiter. Hence another canonical way is:

mapfile -d ';' -t array < <(printf '%s;' "$in")

John_West , Jan 8, 2016 at 12:10

I found it as the rare solution on that list that works correctly with \n , spaces and * simultaneously. Also, no loops; array variable is accessible in the shell after execution (contrary to the highest upvoted answer). Note, in=$'...' , it does not work with double quotes. I think, it needs more upvotes. – John_West Jan 8 '16 at 12:10

Darron , Sep 13, 2010 at 20:10

How about this one liner, if you're not using arrays:
IFS=';' read ADDR1 ADDR2 <<<$IN

dubiousjim , May 31, 2012 at 5:36

Consider using read -r ... to ensure that, for example, the two characters "\t" in the input end up as the same two characters in your variables (instead of a single tab char). – dubiousjim May 31 '12 at 5:36
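
A tiny illustration of the difference:

$ read a <<< 'x\ty'; echo "$a"       # xty   (backslash consumed as escape)
$ read -r a <<< 'x\ty'; echo "$a"    # x\ty  (raw mode: backslash preserved)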

Luca Borrione , Sep 3, 2012 at 10:07

-1 This is not working here (ubuntu 12.04). Adding echo "ADDR1 $ADDR1"\n echo "ADDR2 $ADDR2" to your snippet will output ADDR1 bla@some.com john@home.com\nADDR2 (\n is newline) – Luca Borrione Sep 3 '12 at 10:07

chepner , Sep 19, 2015 at 13:59

This is probably due to a bug involving IFS and here strings that was fixed in bash 4.3. Quoting $IN should fix it. (In theory, $IN is not subject to word splitting or globbing after it expands, meaning the quotes should be unnecessary. Even in 4.3, though, there's at least one bug remaining--reported and scheduled to be fixed--so quoting remains a good idea.) – chepner Sep 19 '15 at 13:59

sorontar , Oct 26, 2016 at 4:55

This breaks if $in contain newlines even if $IN is quoted. And adds a trailing newline. – sorontar Oct 26 '16 at 4:55

kenorb , Sep 11, 2015 at 20:54

Here is a clean 3-liner:
in="foo@bar;bizz@buzz;fizz@buzz;buzz@woof"
IFS=';' list=($in)
for item in "${list[@]}"; do echo $item; done

where IFS delimit words based on the separator and () is used to create an array . Then [@] is used to return each item as a separate word.

If you've any code after that, you also need to restore $IFS , e.g. unset IFS .

sorontar , Oct 26, 2016 at 5:03

The use of $in unquoted allows wildcards to be expanded. – sorontar Oct 26 '16 at 5:03

user2720864 , Sep 24 at 13:46

+ for the unset command – user2720864 Sep 24 at 13:46

Emilien Brigand , Aug 1, 2016 at 13:15

Without setting the IFS

If you just have one colon, you can do this:

a="foo:bar"
b=${a%:*}
c=${a##*:}

you will get:

b = foo
c = bar
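
With more than one colon the single and double forms differ; a small illustration of shortest versus longest match (standard bash parameter expansion, illustrative values):

a="foo:bar:baz"
echo "${a%:*}"     # foo:bar   (% removes the shortest matching suffix)
echo "${a%%:*}"    # foo       (%% removes the longest matching suffix)
echo "${a#*:}"     # bar:baz   (# removes the shortest matching prefix)
echo "${a##*:}"    # baz       (## removes the longest matching prefix)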

Victor Choy , Sep 16, 2015 at 3:34

There is a simple and smart way like this:
echo "add:sfff" | xargs -d: -i  echo {}

But you must use GNU xargs; BSD xargs doesn't support -d delim. If you use an Apple Mac like me, you can install GNU xargs:

brew install findutils

then

echo "add:sfff" | gxargs -d: -i  echo {}

Halle Knast , May 24, 2017 at 8:42

The following Bash/zsh function splits its first argument on the delimiter given by the second argument:
split() {
    local string="$1"
    local delimiter="$2"
    if [ -n "$string" ]; then
        local part
        while read -d "$delimiter" part; do
            echo "$part"          # quoted, so fields keep spaces and globs intact
        done <<< "$string"
        echo "$part"              # print the final field (read returns non-zero on it but still fills $part)
    fi
}

For instance, the command

$ split 'a;b;c' ';'

yields

a
b
c

This output may, for instance, be piped to other commands. Example:

$ split 'a;b;c' ';' | cat -n
1   a
2   b
3   c

Compared to the other solutions given, this one has the following advantages:

If desired, the function may be put into a script as follows:

#!/usr/bin/env bash

split() {
    # ...
}

split "$@"

sandeepkunkunuru , Oct 23, 2017 at 16:10

Works and is neatly modularized. – sandeepkunkunuru Oct 23 '17 at 16:10

Prospero , Sep 25, 2011 at 1:09

This is the simplest way to do it.
spo='one;two;three'
OIFS=$IFS
IFS=';'
spo_array=($spo)
IFS=$OIFS
echo ${spo_array[*]}

rashok , Oct 25, 2016 at 12:41

IN="bla@some.com;john@home.com"
IFS=';'
read -a IN_arr <<< "${IN}"
for entry in "${IN_arr[@]}"
do
    echo $entry
done

Output

bla@some.com
john@home.com

System : Ubuntu 12.04.1

codeforester , Jan 2, 2017 at 5:37

IFS is not being set in the specific context of read here, and hence it can upset the rest of the code, if any. – codeforester Jan 2 '17 at 5:37

shuaihanhungry , Jan 20 at 15:54

You can apply awk in many situations:
echo "bla@some.com;john@home.com"|awk -F';' '{printf "%s\n%s\n", $1, $2}'

You can also use this:

echo "bla@some.com;john@home.com"|awk -F';' '{print $1,$2}' OFS="\n"

ghost , Apr 24, 2013 at 13:13

If there are no spaces, why not this?
IN="bla@some.com;john@home.com"
arr=(`echo $IN | tr ';' ' '`)

echo ${arr[0]}
echo ${arr[1]}

eukras , Oct 22, 2012 at 7:10

There are some cool answers here (errator esp.), but for something analogous to split in other languages -- which is what I took the original question to mean -- I settled on this:
IN="bla@some.com;john@home.com"
declare -a a="(${IN/;/ })";

Now ${a[0]} , ${a[1]} , etc, are as you would expect. Use ${#a[*]} for number of terms. Or to iterate, of course:

for i in ${a[*]}; do echo $i; done

IMPORTANT NOTE:

This works in cases where there are no spaces to worry about, which solved my problem, but may not solve yours. Go with the $IFS solution(s) in that case.

olibre , Oct 7, 2013 at 13:33

Does not work when IN contains more than two e-mail addresses. Please refer to the same idea (but fixed) in palindrom's answer. – olibre Oct 7 '13 at 13:33

sorontar , Oct 26, 2016 at 5:14

Better use ${IN//;/ } (double slash) to make it also work with more than two values. Beware that any wildcard ( *?[ ) will be expanded. And a trailing empty field will be discarded. – sorontar Oct 26 '16 at 5:14

jeberle , Apr 30, 2013 at 3:10

Use the set built-in to load up the $@ array:
IN="bla@some.com;john@home.com"
IFS=';'; set $IN; IFS=$' \t\n'

Then, let the party begin:

echo $#
for a; do echo $a; done
ADDR1=$1 ADDR2=$2

sorontar , Oct 26, 2016 at 5:17

Better use set -- $IN to avoid some issues with "$IN" starting with dash. Still, the unquoted expansion of $IN will expand wildcards ( *?[ ). – sorontar Oct 26 '16 at 5:17
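
Putting those two fixes together, a safer variant of this answer might look like this (a sketch, not from the original answer):

set -f                    # disable globbing so * ? [ ] survive the expansion
IFS=';'
set -- $IN                # '--' protects against $IN beginning with a dash
IFS=$' \t\n'; set +f      # restore the defaults
echo $#                   # number of fields
printf '%s\n' "$@"        # one field per line, spaces preserved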

NevilleDNZ , Sep 2, 2013 at 6:30

Two bourne-ish alternatives where neither require bash arrays:

Case 1 : Keep it nice and simple: Use a NewLine as the Record-Separator... eg.

IN="bla@some.com
john@home.com"

while read i; do
  # process "$i" ... eg.
    echo "[email:$i]"
done <<< "$IN"

Note: in this first case no sub-process is forked to assist with list manipulation.

Idea: Maybe it is worth using NL extensively internally , and only converting to a different RS when generating the final result externally .

Case 2 : Using a ";" as a record separator... eg.

NL="
" IRS=";" ORS=";"

conv_IRS() {
  exec tr "$1" "$NL"
}

conv_ORS() {
  exec tr "$NL" "$1"
}

IN="bla@some.com;john@home.com"
IN="$(conv_IRS ";" <<< "$IN")"

while read i; do
  # process "$i" ... eg.
    echo -n "[email:$i]$ORS"
done <<< "$IN"

In both cases a sub-list can be composed within the loop, and it persists after the loop has completed. This is useful when manipulating lists in memory instead of storing them in files. {p.s. keep calm and carry on B-) }

fedorqui , Jan 8, 2015 at 10:21

Apart from the fantastic answers that were already provided, if it is just a matter of printing out the data you may consider using awk :
awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"

This sets the field separator to ; , so that it can loop through the fields with a for loop and print accordingly.

Test
$ IN="bla@some.com;john@home.com"
$ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"
> [bla@some.com]
> [john@home.com]

With another input:

$ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "a;b;c   d;e_;f"
> [a]
> [b]
> [c   d]
> [e_]
> [f]

18446744073709551615 , Feb 20, 2015 at 10:49

In Android shell, most of the proposed methods just do not work:
$ IFS=':' read -ra ADDR <<<"$PATH"                             
/system/bin/sh: can't create temporary file /sqlite_stmt_journals/mksh.EbNoR10629: No such file or directory

What does work is:

$ for i in ${PATH//:/ }; do echo $i; done
/sbin
/vendor/bin
/system/sbin
/system/bin
/system/xbin

where // means global replacement.
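
As a reminder of the difference (plain bash string replacement, illustrative values):

s="a:b:c"
echo "${s/:/ }"      # a b:c   (a single slash replaces only the first ':')
echo "${s//:/ }"     # a b c   (a double slash replaces every ':')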

sorontar , Oct 26, 2016 at 5:08

Fails if any part of $PATH contains spaces (or newlines). Also expands wildcards (asterisk *, question mark ? and brackets [ ]). – sorontar Oct 26 '16 at 5:08

Eduardo Lucio , Apr 4, 2016 at 19:54

Okay guys!

Here's my answer!

DELIMITER_VAL='='

read -d '' F_ABOUT_DISTRO_R <<"EOF"
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.4 LTS"
NAME="Ubuntu"
VERSION="14.04.4 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.4 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
EOF

SPLIT_NOW=$(awk -F$DELIMITER_VAL '{for(i=1;i<=NF;i++){printf "%s\n", $i}}' <<<"${F_ABOUT_DISTRO_R}")
while read -r line; do
   SPLIT+=("$line")
done <<< "$SPLIT_NOW"
for i in "${SPLIT[@]}"; do
    echo "$i"
done

Why is this approach "the best" for me?

For two reasons:

  1. You do not need to escape the delimiter;
  2. You will not have problem with blank spaces . The value will be properly separated in the array!

[]'s

gniourf_gniourf , Jan 30, 2017 at 8:26

FYI, /etc/os-release and /etc/lsb-release are meant to be sourced, and not parsed. So your method is really wrong. Moreover, you're not quite answering the question about splitting a string on a delimiter. – gniourf_gniourf Jan 30 '17 at 8:26

Michael Hale , Jun 14, 2012 at 17:38

A one-liner to split a string separated by ';' into an array is:
IN="bla@some.com;john@home.com"
ADDRS=( $(IFS=";" echo "$IN") )
echo ${ADDRS[0]}
echo ${ADDRS[1]}

This only sets IFS in a subshell, so you don't have to worry about saving and restoring its value.

Luca Borrione , Sep 3, 2012 at 10:04

-1 This doesn't work here (Ubuntu 12.04). It prints only the first echo with the whole $IN value in it, while the second is empty. You can see it if you put echo "0: "${ADDRS[0]}\n echo "1: "${ADDRS[1]} ; the output is 0: bla@some.com;john@home.com\n 1: (\n is a newline) – Luca Borrione Sep 3 '12 at 10:04

Luca Borrione , Sep 3, 2012 at 10:05

Please refer to nickjb's answer at stackoverflow.com/a/6583589/1032370 for a working alternative to this idea. – Luca Borrione Sep 3 '12 at 10:05

Score_Under , Apr 28, 2015 at 17:09

-1, 1. IFS isn't being set in that subshell (it's being passed to the environment of "echo", which is a builtin, so nothing is happening anyway). 2. $IN is quoted so it isn't subject to IFS splitting. 3. The command substitution's output is split by whitespace, but this may corrupt the original data. – Score_Under Apr 28 '15 at 17:09

ajaaskel , Oct 10, 2014 at 11:33

IN='bla@some.com;john@home.com;Charlie Brown <cbrown@acme.com;!"#$%&/()[]{}*? are no problem;simple is beautiful :-)'
set -f
oldifs="$IFS"
IFS=';'; arrayIN=($IN)
IFS="$oldifs"
for i in "${arrayIN[@]}"; do
echo "$i"
done
set +f

Output:

bla@some.com
john@home.com
Charlie Brown <cbrown@acme.com
!"#$%&/()[]{}*? are no problem
simple is beautiful :-)

Explanation: A simple assignment using parentheses () converts a semicolon-separated list into an array, provided you have the correct IFS while doing so. A standard for loop handles the individual items in that array as usual. Notice that the list given for the IN variable must be "hard" quoted, that is, with single quotes.

IFS must be saved and restored, since Bash does not treat an assignment the same way as a command. An alternative workaround is to wrap the assignment inside a function and call that function with a modified IFS. In that case separate saving/restoring of IFS is not needed. Thanks to "Bize" for pointing that out.
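
A minimal sketch of that function-wrapping variant (the function name is illustrative; in Bash, a var=value prefix on a function call is visible inside the function for the duration of the call):

split_in() { arrayIN=($1); }   # unquoted $1 is split using the caller-supplied IFS
set -f                         # globbing still has to be disabled separately
IFS=';' split_in "$IN"         # IFS is ';' only while split_in runs
set +f
declare -p arrayIN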

gniourf_gniourf , Feb 20, 2015 at 16:45

!"#$%&/()[]{}*? are no problem well... not quite: []*? are glob characters. So what about creating this directory and file: `mkdir '!"#$%&'; touch '!"#$%&/()[]{} got you hahahaha - are no problem' and running your command? simple may be beautiful, but when it's broken, it's broken. – gniourf_gniourf Feb 20 '15 at 16:45

ajaaskel , Feb 25, 2015 at 7:20

@gniourf_gniourf The string is stored in a variable. Please see the original question. – ajaaskel Feb 25 '15 at 7:20

gniourf_gniourf , Feb 25, 2015 at 7:26

@ajaaskel you didn't fully understand my comment. Go in a scratch directory and issue these commands: mkdir '!"#$%&'; touch '!"#$%&/()[]{} got you hahahaha - are no problem' . They will only create a directory and a file, with weird looking names, I must admit. Then run your commands with the exact IN you gave: IN='bla@some.com;john@home.com;Charlie Brown <cbrown@acme.com;!"#$%&/()[]{}*? are no problem;simple is beautiful :-)' . You'll see that you won't get the output you expect. Because you're using a method subject to pathname expansions to split your string. – gniourf_gniourf Feb 25 '15 at 7:26

gniourf_gniourf , Feb 25, 2015 at 7:29

This is to demonstrate that the characters * , ? , [...] and even, if extglob is set, !(...) , @(...) , ?(...) , +(...) are problems with this method! – gniourf_gniourf Feb 25 '15 at 7:29

ajaaskel , Feb 26, 2015 at 15:26

@gniourf_gniourf Thanks for detailed comments on globbing. I adjusted the code to have globbing off. My point was however just to show that rather simple assignment can do the splitting job. – ajaaskel Feb 26 '15 at 15:26

> , Dec 19, 2013 at 21:39

Maybe not the most elegant solution, but works with * and spaces:
IN="bla@so me.com;*;john@home.com"
for i in `delims=${IN//[^;]}; seq 1 $((${#delims} + 1))`
do
   echo "> [`echo $IN | cut -d';' -f$i`]"
done

Outputs

> [bla@so me.com]
> [*]
> [john@home.com]

Other example (delimiters at beginning and end):

IN=";bla@so me.com;*;john@home.com;"
> []
> [bla@so me.com]
> [*]
> [john@home.com]
> []

Basically it removes every character other than ;, making delims e.g. ;;;. Then it loops from 1 to the number of delimiters plus one, as counted by ${#delims}. The final step is to safely get the $i-th part using cut.

[Oct 17, 2018] How to use arrays in bash script - LinuxConfig.org

Oct 17, 2018 | linuxconfig.org

Create indexed arrays on the fly

We can create indexed arrays with a more concise syntax, by simply assigning them some values:

$ my_array=(foo bar)

In this case we assigned multiple items at once to the array, but we can also insert one value at a time, specifying its index:

$ my_array[0]=foo

Array operations

Once an array is created, we can perform some useful operations on it, like displaying its keys and values or modifying it by appending or removing elements.

Print the values of an array

To display all the values of an array we can use the following shell expansion syntax:

${my_array[@]}

Or even:

${my_array[*]}

Both syntaxes let us access all the values of the array and produce the same results, unless the expansion is quoted. In that case a difference arises: in the first case, when using @, the expansion will result in one word for each element of the array. This becomes immediately clear when performing a for loop. As an example, imagine we have an array with two elements, "foo" and "bar":
$ my_array=(foo bar)
Performing a for loop on it will produce the following result:
$ for i in "${my_array[@]}"; do echo "$i"; done
foo
bar
When using *, and the variable is quoted, a single "result" will be produced instead, containing all the elements of the array:
$ for i in "${my_array[*]}"; do echo "$i"; done
foo bar



Print the keys of an array

It's even possible to retrieve and print the keys used in an indexed or associative array, instead of their respective values. The syntax is almost identical, but relies on the use of the ! operator:
$ my_array=(foo bar baz)
$ for index in "${!my_array[@]}"; do echo "$index"; done
0
1
2
The same is valid for associative arrays:
$ declare -A my_array
$ my_array=([foo]=bar [baz]=foobar)
$ for key in "${!my_array[@]}"; do echo "$key"; done
baz
foo
As you can see, since the latter is an associative array, we can't count on the retrieved values being returned in the same order in which they were declared.

Getting the size of an array

We can retrieve the size of an array (the number of elements contained in it) by using a specific shell expansion:
$ my_array=(foo bar baz)
$ echo "the array contains ${#my_array[@]} elements"
the array contains 3 elements
We have created an array which contains three elements, "foo", "bar" and "baz"; then, by using the syntax above, which differs from the one we saw before only by the # character before the array name, we retrieved the number of elements in the array instead of its content.

Adding elements to an array

As we saw, we can add elements to an indexed or associative array by specifying, respectively, their index or associative key. In the case of indexed arrays, we can also simply add an element by appending to the end of the array, using the += operator:
$ my_array=(foo bar)
$ my_array+=(baz)
If we now print the content of the array we see that the element has been added successfully:
$ echo "${my_array[@]}"
foo bar baz
Multiple elements can be added at a time:
$ my_array=(foo bar)
$ my_array+=(baz foobar)
$ echo "${my_array[@]}"
foo bar baz foobar
To add elements to an associative array, we must also specify their associated keys:
$ declare -A my_array

# Add single element
$ my_array[foo]="bar"

# Add multiple elements at a time
$ my_array+=([baz]=foobar [foobarbaz]=baz)



Deleting an element from the array

To delete an element from the array we need to know its index, or its key in the case of an associative array, and use the unset command. Let's see an example:
$ my_array=(foo bar baz)
$ unset my_array[1]
$ echo ${my_array[@]}
foo baz
We have created a simple array containing three elements, "foo", "bar" and "baz"; then we deleted "bar" from it by running unset and referencing the index of "bar" in the array: in this case we know it was 1, since bash arrays start at 0. If we check the indexes of the array, we can now see that 1 is missing:
$ echo ${!my_array[@]}
0 2
The same is valid for associative arrays:
$ declare -A my_array
$ my_array+=([foo]=bar [baz]=foobar)
$ unset my_array[foo]
$ echo ${my_array[@]}
foobar
In the example above, the value referenced by the "foo" key has been deleted, leaving only "foobar" in the array.

Deleting an entire array is even simpler: we just pass the array name as an argument to the unset command, without specifying any index or key:

$ unset my_array
$ echo ${!my_array[@]}

After executing unset against the entire array, when trying to print its content an empty result is returned: the array doesn't exist anymore.

Conclusions

In this tutorial we saw the difference between indexed and associative arrays in bash, how to initialize them and how to perform fundamental operations, like displaying their keys and values and appending or removing items. Finally we saw how to unset them completely. Bash syntax can sometimes be pretty weird, but using arrays in scripts can be really useful. When a script starts to become more complex than expected, my advice is, however, to switch to a more capable scripting language such as Python.

[Oct 13, 2018] replace cd in bash to (silent) pushd · GitHub

Oct 13, 2018 | gist.github.com

mbadran / gist:130469, created Jun 16, 2009
alias cd="pushd $@ > /dev/null"
bobbydavid commented Sep 19, 2012
One annoyance with this alias is that simply typing "cd" will twiddle the directory stack instead of bringing you to your home directory.
dideler commented Mar 9, 2013
@bobbydavid makes a good point. This would be better as a function.
function cd {                                                                   
    if (("$#" > 0)); then
        pushd "$@" > /dev/null
    else
        cd $HOME
    fi
}

By the way, I found this gist by googling "silence pushd".
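
One subtlety in the snippets above: cd $HOME inside a function named cd calls the function itself again (harmlessly, one level deep, since the second call has an argument). Using the builtin keyword sidesteps the recursion entirely; a sketch:

function cd {
    if (("$#" > 0)); then
        pushd "$@" > /dev/null
    else
        builtin cd "$HOME"     # bypass the function and call the real cd
    fi
}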

ghost commented May 30, 2013
Don't you miss something?
function cd {
    if (("$#" > 0)); then
        if [ "$1" == "-" ]; then
            popd > /dev/null
        else
            pushd "$@" > /dev/null
        fi
    else
        cd $HOME
    fi
}

You can always mimic the "cd -" functionality by using pushd alone.
Btw, I also found this gist by googling "silent pushd" ;)

cra commented Jul 1, 2014
And thanks to your last comment, I found this gist by googling "silent cd -" :)
keltroth commented Jun 25, 2015
With bash completion activated I can't get rid of this error:
"bash: pushd: cd: No such file or directory"...

Any clue?

keltroth commented Jun 25, 2015
Got it! One has to add:
complete -d cd

after making the alias.

My complete code here:

function _cd {
    if (("$#" > 0)); then
        if [ "$1" == "-" ]; then
            popd > /dev/null
        else
            pushd "$@" > /dev/null
        fi
    else
        cd $HOME
    fi
}

alias cd=_cd
complete -d cd
jan-warchol commented Nov 29, 2015
I wanted to be able to go back by a given number of history items by typing cd -n , and I came up with this:
function _cd {
    # typing just `_cd` will take you $HOME ;)
    if [ "$1" == "" ]; then
        pushd "$HOME" > /dev/null

    # use `_cd -` to visit previous directory
    elif [ "$1" == "-" ]; then
        pushd $OLDPWD > /dev/null

    # use `_cd -n` to go n directories back in history
    elif [[ "$1" =~ ^-[0-9]+$ ]]; then
        for i in `seq 1 ${1/-/}`; do
            popd > /dev/null
        done

    # use `_cd -- <path>` if your path begins with a dash
    elif [ "$1" == "--" ]; then
        shift
        pushd -- "$@" > /dev/null

    # basic case: move to a dir and add it to history
    else
        pushd "$@" > /dev/null
    fi
}

# replace standard `cd` with enhanced version, ensure tab-completion works
alias cd=_cd
complete -d cd

I think you may find this interesting.

3v1n0 commented Oct 25, 2017
Another improvement over @jan-warchol's version: make cd - alternate between pushd $OLDPWD and popd, depending on what was called before.

This avoids filling your directory stack with elements when you repeatedly do cd -; cd - for as long as you want. The same could be applied when using this alias for $OLDPWD too, but in that case you might actually want the repetition there, so I didn't touch it.

Also added cd -l as an alias for dirs -v, and cd -g X to go to the Xth directory in your history (without popping; that's possible too of course, but it's something more of an addition in this case).

# Replace cd with pushd https://gist.github.com/mbadran/130469
function push_cd() {
  # typing just `push_cd` will take you $HOME ;)
  if [ -z "$1" ]; then
    push_cd "$HOME"

  # use `push_cd -` to visit previous directory
  elif [ "$1" == "-" ]; then
    if [ "$(dirs -p | wc -l)" -gt 1 ]; then
      current_dir="$PWD"
      popd > /dev/null
      pushd -n $current_dir > /dev/null
    elif [ -n "$OLDPWD" ]; then
      push_cd $OLDPWD
    fi

  # use `push_cd -l` or `push_cd -s` to print current stack of folders
  elif [ "$1" == "-l" ] || [ "$1" == "-s" ]; then
    dirs -v

  # use `push_cd -g N` to go to the Nth directory in history (pushing)
  elif [ "$1" == "-g" ] && [[ "$2" =~ ^[0-9]+$ ]]; then
    indexed_path=$(dirs -p | sed -n $(($2+1))p)
    push_cd $indexed_path

  # use `push_cd +N` to go to the Nth directory in history (pushing)
  elif [[ "$1" =~ ^+[0-9]+$ ]]; then
    push_cd -g ${1/+/}

  # use `push_cd -N` to go n directories back in history
  elif [[ "$1" =~ ^-[0-9]+$ ]]; then
    for i in `seq 1 ${1/-/}`; do
      popd > /dev/null
    done

  # use `push_cd -- <path>` if your path begins with a dash
  elif [ "$1" == "--" ]; then
    shift
    pushd -- "$@" > /dev/null

    # basic case: move to a dir and add it to history
  else
    pushd "$@" > /dev/null

    if [ "$1" == "." ] || [ "$1" == "$PWD" ]; then
      popd -n > /dev/null
    fi
  fi

  if [ -n "$CD_SHOW_STACK" ]; then
    dirs -v
  fi
}

# replace standard `cd` with enhanced version, ensure tab-completion works
alias cd=push_cd
complete -d cd

[Oct 10, 2018] Bash History Display Date And Time For Each Command - nixCraft

Oct 10, 2018 | www.cyberciti.biz
  1. Abhijeet Vaidya says: March 11, 2010 at 11:41 am End single quote is missing.
    Correct command is:
    echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile 
  2. izaak says: March 12, 2010 at 11:06 am I would also add
    $ echo 'export HISTSIZE=10000' >> ~/.bash_profile

    It's really useful, I think.

  3. Dariusz says: March 12, 2010 at 2:31 pm you can add it to /etc/profile so it is available to all users. I also add:

    # Make sure all terminals save history
    shopt -s histappend histreedit histverify
    shopt -s no_empty_cmd_completion # bash>=2.04 only

    # Whenever displaying the prompt, write the previous line to disk:
    PROMPT_COMMAND='history -a'

    #Use GREP color features by default: This will highlight the matched words / regexes
    export GREP_OPTIONS='--color=auto'
    export GREP_COLOR='1;37;41'

  4. Babar Haq says: March 15, 2010 at 6:25 am Good tip. We have multiple users connecting as root using ssh and running different commands. Is there a way to log the IP that command was run from?
    Thanks in advance.
    1. Anthony says: August 21, 2014 at 9:01 pm Just for anyone who might still find this thread (like I did today):

      export HISTTIMEFORMAT="%F %T : $(echo $SSH_CONNECTION | cut -d\ -f1) : "

      will give you the time format, plus the IP address culled from the ssh_connection environment variable (thanks for pointing that out, Cadrian, I never knew about that before), all right there in your history output.

      You could even add in $(whoami)@ to get the user as well, if you like (although if everyone's logging in with the root account that's not helpful).

  5. cadrian says: March 16, 2010 at 5:55 pm Yup, you can export one of this

    env | grep SSH
    SSH_CLIENT=192.168.78.22 42387 22
    SSH_TTY=/dev/pts/0
    SSH_CONNECTION=192.168.78.22 42387 192.168.36.76 22

    As their bash history filename

    set |grep -i hist
    HISTCONTROL=ignoreboth
    HISTFILE=/home/cadrian/.bash_history
    HISTFILESIZE=1000000000
    HISTSIZE=10000000

    So in your profile you can do something like HISTFILE=/root/.bash_history_$(echo $SSH_CONNECTION| cut -d\ -f1)

  6. TSI says: March 21, 2010 at 10:29 am bash 4 can syslog every command, but afaik you have to recompile it (check file config-top.h). See the news file of bash: http://tiswww.case.edu/php/chet/bash/NEWS
    If you want to safely export history of your luser, you can ssl-syslog them to a central syslog server.
  7. Dinesh Jadhav says: November 12, 2010 at 11:00 am This is good command, It helps me a lot.
  8. Indie says: September 19, 2011 at 11:41 am You only need to use
    export HISTTIMEFORMAT='%F %T '

    in your .bash_profile

  9. lalit jain says: October 3, 2011 at 9:58 am -- show history with date & time

    # HISTTIMEFORMAT='%c '
    #history

  10. Sohail says: January 13, 2012 at 7:05 am Hi
    Nice trick but unfortunately, the commands which were executed in the past few days also are carrying the current day's (today's) timestamp.

    Please advice.

    Regards

    1. Raymond says: March 15, 2012 at 9:05 am Hi Sohail,

      Yes indeed that will be the behavior of the system since you have just enabled on that day the HISTTIMEFORMAT feature. In other words, the system can't recall or record the commands which were inputted prior to enabling this feature. Hope this answers your concern.

      Thanks!

      1. Raymond says: March 15, 2012 at 9:08 am Hi Sohail,

        Yes, that will be the behavior of the system since you have just enabled on that day the HISTTIMEFORMAT feature. In other words, the system can't recall or record the commands which were inputted prior enabling of this feature, thus it will just reflect on the printed output (upon execution of "history") the current day and time. Hope this answers your concern.

        Thanks!

  11. Sohail says: February 24, 2012 at 6:45 am Hi

    The command only lists the current date (Today) even for those commands which were executed on earlier days.

    Any solutions ?

    Regards

  12. nitiratna nikalje says: August 24, 2012 at 5:24 pm hi vivek.do u know any openings for freshers in linux field? I m doing rhce course from rajiv banergy. My samba,nfs-nis,dhcp,telnet,ftp,http,ssh,squid,cron,quota and system administration is over.iptables ,sendmail and dns is remaining.

    -9029917299(Nitiratna)

  13. JMathew says: August 26, 2012 at 10:51 pm Hi,

    Is there anyway to log username also along with the Command Which we typed

    Thanks in Advance

  14. suresh says: May 22, 2013 at 1:42 pm How can I get the full command along with date and path, as we get in the history command?
  15. rajesh says: December 6, 2013 at 5:56 am Thanks it worked..
  16. Krishan says: February 7, 2014 at 6:18 am The command is not working properly. It is displaying today's date and time for all the commands, whereas I ran some commands three days before.

    How come it is displaying today's date?

  17. PR says: April 29, 2014 at 5:18 pm Hi..

    I want to collect the history of a particular user every day and send it by email. I wrote the script below.
    For collecting everyday history with time, shall I edit the .profile file of that user?
    echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile
    Script:

    #!/bin/bash
    #This script sends email of particular user
    history >/tmp/history
    if [ -s /tmp/history ]
    then
           mailx -s "history 29042014"  </tmp/history
               fi
    rm /tmp/history
    #END OF THE SCRIPT
    

    Can anyone suggest a better way to collect a particular user's history every day?

  18. lefty.crupps says: October 24, 2014 at 7:10 pm Love it, but using the ISO date format is always recommended (YYYY-MM-DD), just as every other sorted group goes from largest sorting (Year) to smallest sorting (day)
    https://en.wikipedia.org/wiki/ISO_8601#Calendar_dates

    In that case, mine looks like this:
    echo 'export HISTTIMEFORMAT="%YY-%m-%d/ %T "' >> ~/.bashrc

    Thanks for the tip!

    1. lefty.crupps says: October 24, 2014 at 7:11 pm please delete post 33, my command is messed up.
  19. lefty.crupps says: October 24, 2014 at 7:11 pm Love it, but using the ISO date format is always recommended (YYYY-MM-DD), just as every other sorted group goes from largest sorting (Year) to smallest sorting (day)
    https://en.wikipedia.org/wiki/ISO_8601#Calendar_dates

    In that case, mine looks like this:
    echo 'export HISTTIMEFORMAT="%Y-%m-%d %T "' >> ~/.bashrc

    Thanks for the tip!

  20. Vanathu says: October 30, 2014 at 1:01 am It shows only the current date for all the command history.
    1. lefty.crupps says: October 30, 2014 at 2:08 am it's marking all of your current history with today's date. Try checking again in a few days.
  21. tinu says: October 14, 2015 at 3:30 pm Hi All,

    I Have enabled my history with the command given :
    echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile

    i need to know how i can add the ip's also , from which the commands are fired to the system.

[Jul 04, 2018] How do I parse command line arguments in Bash

Notable quotes:
"... enhanced getopt ..."
Jul 04, 2018 | stackoverflow.com

Lawrence Johnston ,Oct 10, 2008 at 16:57

Say, I have a script that gets called with this line:
./myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile

or this one:

./myscript -v -f -d -o /fizz/someOtherFile ./foo/bar/someFile

What's the accepted way of parsing this such that in each case (or some combination of the two) $v , $f , and $d will all be set to true and $outFile will be equal to /fizz/someOtherFile ?

Inanc Gumus ,Apr 15, 2016 at 19:11

See my very easy and no-dependency answer here: stackoverflow.com/a/33826763/115363 – Inanc Gumus Apr 15 '16 at 19:11

dezza ,Aug 2, 2016 at 2:13

For zsh-users there's a great builtin called zparseopts which can do: zparseopts -D -E -M -- d=debug -debug=d and have both -d and --debug in the $debug array; echo $+debug[1] will return 0 or 1 if one of those is used. Ref: zsh.org/mla/users/2011/msg00350.html – dezza Aug 2 '16 at 2:13

Bruno Bronosky ,Jan 7, 2013 at 20:01

Preferred Method: Using straight bash without getopt[s]

I originally answered the question as the OP asked. This Q/A is getting a lot of attention, so I should also offer the non-magic way to do this. I'm going to expand upon guneysus's answer to fix the nasty sed and include Tobias Kienzler's suggestion .

Two of the most common ways to pass key value pair arguments are:

Straight Bash Space Separated

Usage ./myscript.sh -e conf -s /etc -l /usr/lib /etc/hosts

#!/bin/bash

POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"

case $key in
    -e|--extension)
    EXTENSION="$2"
    shift # past argument
    shift # past value
    ;;
    -s|--searchpath)
    SEARCHPATH="$2"
    shift # past argument
    shift # past value
    ;;
    -l|--lib)
    LIBPATH="$2"
    shift # past argument
    shift # past value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument
    ;;
    *)    # unknown option
    POSITIONAL+=("$1") # save it in an array for later
    shift # past argument
    ;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters

echo FILE EXTENSION  = "${EXTENSION}"
echo SEARCH PATH     = "${SEARCHPATH}"
echo LIBRARY PATH    = "${LIBPATH}"
echo DEFAULT         = "${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi
Straight Bash Equals Separated

Usage ./myscript.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

#!/bin/bash

for i in "$@"
do
case $i in
    -e=*|--extension=*)
    EXTENSION="${i#*=}"
    shift # past argument=value
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    shift # past argument=value
    ;;
    -l=*|--lib=*)
    LIBPATH="${i#*=}"
    shift # past argument=value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument with no value
    ;;
    *)
          # unknown option
    ;;
esac
done
echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 $1
fi

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.
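
For instance (illustrative value):

i="--searchpath=/etc"
echo "${i#*=}"    # prints /etc: everything up to and including the first '=' is removed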

Using getopt[s]

from: http://mywiki.wooledge.org/BashFAQ/035#getopts

Never use getopt(1). getopt cannot handle empty argument strings, or arguments with embedded whitespace. Please forget that it ever existed.

The POSIX shell (and others) offer getopts which is safe to use instead. Here is a simplistic getopts example:

#!/bin/sh

# A POSIX variable
OPTIND=1         # Reset in case getopts has been used previously in the shell.

# Initialize our own variables:
output_file=""
verbose=0

while getopts "h?vf:" opt; do
    case "$opt" in
    h|\?)
        show_help
        exit 0
        ;;
    v)  verbose=1
        ;;
    f)  output_file=$OPTARG
        ;;
    esac
done

shift $((OPTIND-1))

[ "${1:-}" = "--" ] && shift

echo "verbose=$verbose, output_file='$output_file', Leftovers: $@"

# End of file

The advantages of getopts are:

  1. It's portable, and will work in e.g. dash.
  2. It can handle things like -vf filename in the expected Unix way, automatically.

The disadvantage of getopts is that it can only handle short options ( -h , not --help ) without trickery.

There is a getopts tutorial which explains what all of the syntax and variables mean. In bash, there is also help getopts , which might be informative.
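
For completeness, the "trickery" usually means declaring - as an option that takes an argument, so that --foo reaches the case statement as option - with OPTARG set to foo. A minimal sketch of that idea (not part of the quoted FAQ; option names are illustrative):

#!/bin/bash
verbose=0
output_file=""
while getopts "vf:-:" opt; do
    case "$opt" in
        v)  verbose=1 ;;
        f)  output_file=$OPTARG ;;
        -)  case "$OPTARG" in          # long option: getopts saw '-' plus an argument
                verbose) verbose=1 ;;
                file=*)  output_file=${OPTARG#*=} ;;
                *) echo "unknown long option --$OPTARG" >&2; exit 1 ;;
            esac ;;
    esac
done
shift $((OPTIND-1))
echo "verbose=$verbose, output_file='$output_file', leftovers: $@"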

Livven ,Jun 6, 2013 at 21:19

Is this really true? According to Wikipedia there's a newer GNU enhanced version of getopt which includes all the functionality of getopts and then some. man getopt on Ubuntu 13.04 outputs getopt - parse command options (enhanced) as the name, so I presume this enhanced version is standard now. – Livven Jun 6 '13 at 21:19

szablica ,Jul 17, 2013 at 15:23

That something is a certain way on your system is a very weak premise to base asumptions of "being standard" on. – szablica Jul 17 '13 at 15:23

Stephane Chazelas ,Aug 20, 2014 at 19:55

@Livven, that getopt is not a GNU utility, it's part of util-linux . – Stephane Chazelas Aug 20 '14 at 19:55

Nicolas Mongrain-Lacombe ,Jun 19, 2016 at 21:22

If you use -gt 0, remove your shift after the esac, augment all the shifts by 1 and add this case: *) break;; so you can handle non-optional arguments. Ex: pastebin.com/6DJ57HTc – Nicolas Mongrain-Lacombe Jun 19 '16 at 21:22

kolydart ,Jul 10, 2017 at 8:11

You do not echo --default. In the first example, I notice that if --default is the last argument, it is not processed (considered as non-opt), unless while [[ $# -gt 1 ]] is set as while [[ $# -gt 0 ]]. – kolydart Jul 10 '17 at 8:11

Robert Siemer ,Apr 20, 2015 at 17:47

No answer mentions enhanced getopt. And the top-voted answer is misleading: it ignores -vfd style short options (requested by the OP), options after positional arguments (also requested by the OP), and it ignores parsing errors. Instead:

The following calls

myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile
myscript -v -f -d -o/fizz/someOtherFile -- ./foo/bar/someFile
myscript --verbose --force --debug ./foo/bar/someFile -o/fizz/someOtherFile
myscript --output=/fizz/someOtherFile ./foo/bar/someFile -vfd
myscript ./foo/bar/someFile -df -v --output /fizz/someOtherFile

all return

verbose: y, force: y, debug: y, in: ./foo/bar/someFile, out: /fizz/someOtherFile

with the following myscript

#!/bin/bash

getopt --test > /dev/null
if [[ $? -ne 4 ]]; then
    echo "I'm sorry, `getopt --test` failed in this environment."
    exit 1
fi

OPTIONS=dfo:v
LONGOPTIONS=debug,force,output:,verbose

# -temporarily store output to be able to check for errors
# -e.g. use "--options" parameter by name to activate quoting/enhanced mode
# -pass arguments only via   -- "$@"   to separate them correctly
PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTIONS --name "$0" -- "$@")
if [[ $? -ne 0 ]]; then
    # e.g. $? == 1
    #  then getopt has complained about wrong arguments to stdout
    exit 2
fi
# read getopt's output this way to handle the quoting right:
eval set -- "$PARSED"

# now enjoy the options in order and nicely split until we see --
while true; do
    case "$1" in
        -d|--debug)
            d=y
            shift
            ;;
        -f|--force)
            f=y
            shift
            ;;
        -v|--verbose)
            v=y
            shift
            ;;
        -o|--output)
            outFile="$2"
            shift 2
            ;;
        --)
            shift
            break
            ;;
        *)
            echo "Programming error"
            exit 3
            ;;
    esac
done

# handle non-option arguments
if [[ $# -ne 1 ]]; then
    echo "$0: A single input file is required."
    exit 4
fi

echo "verbose: $v, force: $f, debug: $d, in: $1, out: $outFile"

1 enhanced getopt is available on most "bash-systems", including Cygwin; on OS X try brew install gnu-getopt
2 the POSIX exec() conventions have no reliable way to pass binary NULL in command line arguments; those bytes prematurely end the argument
3 first version released in 1997 or before (I only tracked it back to 1997)

johncip ,Jan 12, 2017 at 2:00

Thanks for this. Just confirmed from the feature table at en.wikipedia.org/wiki/Getopts , if you need support for long options, and you're not on Solaris, getopt is the way to go. – johncip Jan 12 '17 at 2:00

Kaushal Modi ,Apr 27, 2017 at 14:02

I believe that the only caveat with getopt is that it cannot be used conveniently in wrapper scripts where one might have few options specific to the wrapper script, and then pass the non-wrapper-script options to the wrapped executable, intact. Let's say I have a grep wrapper called mygrep and I have an option --foo specific to mygrep , then I cannot do mygrep --foo -A 2 , and have the -A 2 passed automatically to grep ; I need to do mygrep --foo -- -A 2 . Here is my implementation on top of your solution. – Kaushal Modi Apr 27 '17 at 14:02

bobpaul ,Mar 20 at 16:45

Alex, I agree and there's really no way around that since we need to know the actual return value of getopt --test . I'm a big fan of "Unofficial Bash Strict mode", (which includes set -e ), and I just put the check for getopt ABOVE set -euo pipefail and IFS=$'\n\t' in my script. – bobpaul Mar 20 at 16:45

Robert Siemer ,Mar 21 at 9:10

@bobpaul Oh, there is a way around that. And I'll edit my answer soon to reflect my collections regarding this issue ( set -e )... – Robert Siemer Mar 21 at 9:10

Robert Siemer ,Mar 21 at 9:16

@bobpaul Your statement about util-linux is wrong and misleading as well: the package is marked "essential" on Ubuntu/Debian. As such, it is always installed. – Which distros are you talking about (where you say it needs to be installed on purpose)? – Robert Siemer Mar 21 at 9:16

guneysus ,Nov 13, 2012 at 10:31

from : digitalpeer.com with minor modifications

Usage myscript.sh -p=my_prefix -s=dirname -l=libname

#!/bin/bash
for i in "$@"
do
case $i in
    -p=*|--prefix=*)
    PREFIX="${i#*=}"

    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    ;;
    -l=*|--lib=*)
    DIR="${i#*=}"
    ;;
    --default)
    DEFAULT=YES
    ;;
    *)
            # unknown option
    ;;
esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.

Tobias Kienzler ,Nov 12, 2013 at 12:48

Neat! Though this won't work for space-separated arguments à la mount -t tempfs ... . One can probably fix this via something like while [ $# -ge 1 ]; do param=$1; shift; case $param in; -p) prefix=$1; shift;; etc – Tobias Kienzler Nov 12 '13 at 12:48

Robert Siemer ,Mar 19, 2016 at 15:23

This can't handle -vfd style combined short options. – Robert Siemer Mar 19 '16 at 15:23

bekur ,Dec 19, 2017 at 23:27

link is broken! – bekur Dec 19 '17 at 23:27

Matt J ,Oct 10, 2008 at 17:03

getopt() / getopts() is a good option. Stolen from here :

The simple use of "getopt" is shown in this mini-script:

#!/bin/bash
echo "Before getopt"
for i
do
  echo $i
done
args=`getopt abc:d $*`
set -- $args
echo "After getopt"
for i
do
  echo "-->$i"
done

What we have said is that any of -a, -b, -c or -d will be allowed, but that -c is followed by an argument (the "c:" says that).

If we call this "g" and try it out:

bash-2.05a$ ./g -abc foo
Before getopt
-abc
foo
After getopt
-->-a
-->-b
-->-c
-->foo
-->--

We start with two arguments, and "getopt" breaks apart the options and puts each in its own argument. It also added "--".

Robert Siemer ,Apr 16, 2016 at 14:37

Using $* is broken usage of getopt . (It hoses arguments with spaces.) See my answer for proper usage. – Robert Siemer Apr 16 '16 at 14:37

SDsolar ,Aug 10, 2017 at 14:07

Why would you want to make it more complicated? – SDsolar Aug 10 '17 at 14:07

thebunnyrules ,Jun 1 at 1:57

@Matt J, the first part of the script (for i) would be able to handle arguments with spaces in them if you use "$i" instead of $i. The getopts does not seem to be able to handle arguments with spaces. What would be the advantage of using getopt over the for i loop? – thebunnyrules Jun 1 at 1:57

bronson ,Jul 15, 2015 at 23:43

At the risk of adding another example to ignore, here's my scheme.

Hope it's useful to someone.

while [ "$#" -gt 0 ]; do
  case "$1" in
    -n) name="$2"; shift 2;;
    -p) pidfile="$2"; shift 2;;
    -l) logfile="$2"; shift 2;;

    --name=*) name="${1#*=}"; shift 1;;
    --pidfile=*) pidfile="${1#*=}"; shift 1;;
    --logfile=*) logfile="${1#*=}"; shift 1;;
    --name|--pidfile|--logfile) echo "$1 requires an argument" >&2; exit 1;;

    -*) echo "unknown option: $1" >&2; exit 1;;
    *) handle_argument "$1"; shift 1;;
  esac
done

rhombidodecahedron ,Sep 11, 2015 at 8:40

What is the "handle_argument" function? – rhombidodecahedron Sep 11 '15 at 8:40

bronson ,Oct 8, 2015 at 20:41

Sorry for the delay. In my script, the handle_argument function receives all the non-option arguments. You can replace that line with whatever you'd like, maybe *) die "unrecognized argument: $1" or collect the args into a variable *) args+="$1"; shift 1;; . – bronson Oct 8 '15 at 20:41

Guilherme Garnier ,Apr 13 at 16:10

Amazing! I've tested a couple of answers, but this is the only one that worked for all cases, including many positional parameters (both before and after flags) – Guilherme Garnier Apr 13 at 16:10

Shane Day ,Jul 1, 2014 at 1:20

I'm about 4 years late to this question, but want to give back. I used the earlier answers as a starting point to tidy up my old adhoc param parsing. I then refactored out the following template code. It handles both long and short params, using = or space separated arguments, as well as multiple short params grouped together. Finally it re-inserts any non-param arguments back into the $1,$2.. variables. I hope it's useful.
#!/usr/bin/env bash

# NOTICE: Uncomment if your script depends on bashisms.
#if [ -z "$BASH_VERSION" ]; then bash $0 $@ ; exit $? ; fi

echo "Before"
for i ; do echo - $i ; done


# Code template for parsing command line parameters using only portable shell
# code, while handling both long and short params, handling '-f file' and
# '-f=file' style param data and also capturing non-parameters to be inserted
# back into the shell positional parameters.

while [ -n "$1" ]; do
        # Copy so we can modify it (can't modify $1)
        OPT="$1"
        # Detect argument termination
        if [ x"$OPT" = x"--" ]; then
                shift
                for OPT ; do
                        REMAINS="$REMAINS \"$OPT\""
                done
                break
        fi
        # Parse current opt
        while [ x"$OPT" != x"-" ] ; do
                case "$OPT" in
                        # Handle --flag=value opts like this
                        -c=* | --config=* )
                                CONFIGFILE="${OPT#*=}"
                                shift
                                ;;
                        # and --flag value opts like this
                        -c* | --config )
                                CONFIGFILE="$2"
                                shift
                                ;;
                        -f* | --force )
                                FORCE=true
                                ;;
                        -r* | --retry )
                                RETRY=true
                                ;;
                        # Anything unknown is recorded for later
                        * )
                                REMAINS="$REMAINS \"$OPT\""
                                break
                                ;;
                esac
                # Check for multiple short options
                # NOTICE: be sure to update this pattern to match valid options
                NEXTOPT="${OPT#-[cfr]}" # try removing single short opt
                if [ x"$OPT" != x"$NEXTOPT" ] ; then
                        OPT="-$NEXTOPT"  # multiple short opts, keep going
                else
                        break  # long form, exit inner loop
                fi
        done
        # Done with that param. move to next
        shift
done
# Set the non-parameters back into the positional parameters ($1 $2 ..)
eval set -- $REMAINS


echo -e "After: \n configfile='$CONFIGFILE' \n force='$FORCE' \n retry='$RETRY' \n remains='$REMAINS'"
for i ; do echo - $i ; done

Robert Siemer ,Dec 6, 2015 at 13:47

This code can't handle options with arguments like this: -c1 . And the use of = to separate short options from their arguments is unusual... – Robert Siemer Dec 6 '15 at 13:47

sfnd ,Jun 6, 2016 at 19:28

I ran into two problems with this useful chunk of code: 1) the "shift" in the case of "-c=foo" ends up eating the next parameter; and 2) 'c' should not be included in the "[cfr]" pattern for combinable short options. – sfnd Jun 6 '16 at 19:28

Inanc Gumus ,Nov 20, 2015 at 12:28

More succinct way

script.sh

#!/bin/bash

while [[ "$#" > 0 ]]; do case $1 in
  -d|--deploy) deploy="$2"; shift;;
  -u|--uglify) uglify=1;;
  *) echo "Unknown parameter passed: $1"; exit 1;;
esac; shift; done

echo "Should deploy? $deploy"
echo "Should uglify? $uglify"

Usage:

./script.sh -d dev -u

# OR:

./script.sh --deploy dev --uglify

hfossli ,Apr 7 at 20:58

This is what I am doing. I have to use while [[ "$#" > 1 ]] if I want to support ending the line with a boolean flag ./script.sh --debug dev --uglify fast --verbose . Example: gist.github.com/hfossli/4368aa5a577742c3c9f9266ed214aa58 – hfossli Apr 7 at 20:58

hfossli ,Apr 7 at 21:09

I sent an edit request. I just tested this and it works perfectly. – hfossli Apr 7 at 21:09

hfossli ,Apr 7 at 21:10

Wow! Simple and clean! This is how I'm using this: gist.github.com/hfossli/4368aa5a577742c3c9f9266ed214aa58 – hfossli Apr 7 at 21:10

Ponyboy47 ,Sep 8, 2016 at 18:59

My answer is largely based on the answer by Bruno Bronosky , but I sort of mashed his two pure bash implementations into one that I use pretty frequently.
# As long as there is at least one more argument, keep looping
while [[ $# -gt 0 ]]; do
    key="$1"
    case "$key" in
        # This is a flag type option. Will catch either -f or --foo
        -f|--foo)
        FOO=1
        ;;
        # Also a flag type option. Will catch either -b or --bar
        -b|--bar)
        BAR=1
        ;;
        # This is an arg value type option. Will catch -o value or --output-file value
        -o|--output-file)
        shift # past the key and to the value
        OUTPUTFILE="$1"
        ;;
        # This is an arg=value type option. Will catch -o=value or --output-file=value
        -o=*|--output-file=*)
        # No need to shift here since the value is part of the same string
        OUTPUTFILE="${key#*=}"
        ;;
        *)
        # Do whatever you want with extra options
        echo "Unknown option '$key'"
        ;;
    esac
    # Shift after checking all the cases to get the next option
    shift
done

This allows you to have both space separated options/values, as well as equal defined values.

So you could run your script using:

./myscript --foo -b -o /fizz/file.txt

as well as:

./myscript -f --bar -o=/fizz/file.txt

and both should have the same end result.

PROS:

CONS:

These are the only pros/cons I can think of off the top of my head

bubla ,Jul 10, 2016 at 22:40

I have found the business of writing portable argument parsing in scripts so frustrating that I have written Argbash - a FOSS code generator that can generate the argument-parsing code for your script, plus it has some nice features:

https://argbash.io

RichVel ,Aug 18, 2016 at 5:34

Thanks for writing argbash, I just used it and found it works well. I mostly went for argbash because it's a code generator supporting the older bash 3.x found on OS X 10.11 El Capitan. The only downside is that the code-generator approach means quite a lot of code in your main script, compared to calling a module. – RichVel Aug 18 '16 at 5:34

bubla ,Aug 23, 2016 at 20:40

You can actually use Argbash in such a way that it produces a tailor-made parsing library just for you, which you can include in your script or keep in a separate file and just source. I have added an example to demonstrate that, and I have made it more explicit in the documentation, too. – bubla Aug 23 '16 at 20:40

RichVel ,Aug 24, 2016 at 5:47

Good to know. That example is interesting but still not really clear - maybe you can change name of the generated script to 'parse_lib.sh' or similar and show where the main script calls it (like in the wrapping script section which is more complex use case). – RichVel Aug 24 '16 at 5:47

bubla ,Dec 2, 2016 at 20:12

The issues were addressed in a recent version of argbash: documentation has been improved, a quickstart argbash-init script has been introduced, and you can even use argbash online at argbash.io/generate – bubla Dec 2 '16 at 20:12

Alek ,Mar 1, 2012 at 15:15

I think this one is simple enough to use:
#!/bin/bash
#

readopt='getopts $opts opt;rc=$?;[ $rc$opt == 0? ]&&exit 1;[ $rc == 0 ]||{ shift $[OPTIND-1];false; }'

opts=vfdo:

# Enumerating options
while eval $readopt
do
    echo OPT:$opt ${OPTARG+OPTARG:$OPTARG}
done

# Enumerating arguments
for arg
do
    echo ARG:$arg
done

Invocation example:

./myscript -v -do /fizz/someOtherFile -f ./foo/bar/someFile
OPT:v 
OPT:d 
OPT:o OPTARG:/fizz/someOtherFile
OPT:f 
ARG:./foo/bar/someFile

erm3nda ,May 20, 2015 at 22:50

I read them all and this one is my preferred one. I don't like to use -a=1 as the argument style. I prefer to put the main options first and then the special ones with single spacing, -o option . I'm looking for the simplest-vs-best way to read argvs. – erm3nda May 20 '15 at 22:50

erm3nda ,May 20, 2015 at 23:25

It's working really well but if you pass an argument to a non a: option all the following options would be taken as arguments. You can check this line ./myscript -v -d fail -o /fizz/someOtherFile -f ./foo/bar/someFile with your own script. -d option is not set as d: – erm3nda May 20 '15 at 23:25

unsynchronized ,Jun 9, 2014 at 13:46

Expanding on the excellent answer by @guneysus, here is a tweak that lets user use whichever syntax they prefer, eg
command -x=myfilename.ext --another_switch

vs

command -x myfilename.ext --another_switch

That is to say the equals can be replaced with whitespace.

This "fuzzy interpretation" might not be to your liking, but if you are making scripts that are interchangeable with other utilities (as is the case with mine, which must work with ffmpeg), the flexibility is useful.

STD_IN=0

prefix=""
key=""
value=""
for keyValue in "$@"
do
  case "${prefix}${keyValue}" in
    -i=*|--input_filename=*)  key="-i";     value="${keyValue#*=}";; 
    -ss=*|--seek_from=*)      key="-ss";    value="${keyValue#*=}";;
    -t=*|--play_seconds=*)    key="-t";     value="${keyValue#*=}";;
    -|--stdin)                key="-";      value=1;;
    *)                                      value=$keyValue;;
  esac
  case $key in
    -i) MOVIE=$(resolveMovie "${value}");  prefix=""; key="";;
    -ss) SEEK_FROM="${value}";          prefix=""; key="";;
    -t)  PLAY_SECONDS="${value}";           prefix=""; key="";;
    -)   STD_IN=${value};                   prefix=""; key="";; 
    *)   prefix="${keyValue}=";;
  esac
done

vangorra ,Feb 12, 2015 at 21:50

getopts works great if #1 you have it installed and #2 you intend to run it on the same platform. OSX and Linux (for example) behave differently in this respect.

Here is a (non getopts) solution that supports equals, non-equals, and boolean flags. For example you could run your script in this way:

./script --arg1=value1 --arg2 value2 --shouldClean

# parse the arguments.
COUNTER=0
ARGS=("$@")
while [ $COUNTER -lt $# ]
do
    arg=${ARGS[$COUNTER]}
    let COUNTER=COUNTER+1
    nextArg=${ARGS[$COUNTER]}

    if [[ $skipNext -eq 1 ]]; then
        echo "Skipping"
        skipNext=0
        continue
    fi

    argKey=""
    argVal=""
    if [[ "$arg" =~ ^\- ]]; then
        # if the format is: -key=value
        if [[ "$arg" =~ \= ]]; then
            argVal=$(echo "$arg" | cut -d'=' -f2)
            argKey=$(echo "$arg" | cut -d'=' -f1)
            skipNext=0

        # if the format is: -key value
        elif [[ ! "$nextArg" =~ ^\- ]]; then
            argKey="$arg"
            argVal="$nextArg"
            skipNext=1

        # if the format is: -key (a boolean flag)
        elif [[ "$nextArg" =~ ^\- ]] || [[ -z "$nextArg" ]]; then
            argKey="$arg"
            argVal=""
            skipNext=0
        fi
    # if the format has not flag, just a value.
    else
        argKey=""
        argVal="$arg"
        skipNext=0
    fi

    case "$argKey" in 
        --source-scmurl)
            SOURCE_URL="$argVal"
        ;;
        --dest-scmurl)
            DEST_URL="$argVal"
        ;;
        --version-num)
            VERSION_NUM="$argVal"
        ;;
        -c|--clean)
            CLEAN_BEFORE_START="1"
        ;;
        -h|--help|-help|--h)
            showUsage
            exit
        ;;
    esac
done

akostadinov ,Jul 19, 2013 at 7:50

This is how I do it in a function, to avoid breaking a getopts run at the same time somewhere higher in the stack:
function waitForWeb () {
   local OPTIND=1 OPTARG OPTION
   local host=localhost port=8080 proto=http
   while getopts "h:p:r:" OPTION; do
      case "$OPTION" in
      h)
         host="$OPTARG"
         ;;
      p)
         port="$OPTARG"
         ;;
      r)
         proto="$OPTARG"
         ;;
      esac
   done
...
}

Renato Silva ,Jul 4, 2016 at 16:47

EasyOptions does not require any parsing:
## Options:
##   --verbose, -v  Verbose mode
##   --output=FILE  Output filename

source easyoptions || exit

if test -n "${verbose}"; then
    echo "output file is ${output}"
    echo "${arguments[@]}"
fi
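Assuming EasyOptions' convention of mapping each long option to a like-named variable ( --output=FILE to ${output} ) and collecting operands in ${arguments[@]} , as the snippet above suggests, a hypothetical run would look like:

$ ./myscript --verbose --output=out.txt foo bar
output file is out.txt
foo bar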

Oleksii Chekulaiev ,Jul 1, 2016 at 20:56

I give you The Function parse_params that will parse params:
  1. Without polluting the global scope.
  2. It effortlessly returns ready-to-use variables so that you can build further logic on them.
  3. The number of dashes before params does not matter ( --all equals -all equals all=all )

The script below is a copy-paste working demonstration. See the show_use function to understand how to use parse_params .

Limitations:

  1. Does not support space-delimited params ( -d 1 )
  2. Param names lose their dashes, so --any-param and -anyparam are equivalent
  3. eval $(parse_params "$@") must be used inside a bash function (it will not work in the global scope)

#!/bin/bash

# Universal Bash parameter parsing
# Parse equal sign separated params into named local variables
# Standalone named parameter value will equal its param name (--force creates variable $force=="force")
# Parses multi-valued named params into an array (--path=path1 --path=path2 creates ${path[*]} array)
# Parses un-named params into ${ARGV[*]} array
# Additionally puts all named params into ${ARGN[*]} array
# Additionally puts all standalone "option" params into ${ARGO[*]} array
# @author Oleksii Chekulaiev
# @version v1.3 (May-14-2018)
parse_params ()
{
    local existing_named
    local ARGV=() # un-named params
    local ARGN=() # named params
    local ARGO=() # options (--params)
    echo "local ARGV=(); local ARGN=(); local ARGO=();"
    while [[ "$1" != "" ]]; do
        # Escape asterisk to prevent bash asterisk expansion
        _escaped=${1/\*/\'\"*\"\'}
        # If equals delimited named parameter
        if [[ "$1" =~ ^..*=..* ]]; then
            # Add to named parameters array
            echo "ARGN+=('$_escaped');"
            # key is part before first =
            local _key=$(echo "$1" | cut -d = -f 1)
            # val is everything after key and = (protect from param==value error)
            local _val="${1/$_key=}"
            # remove dashes from key name
            _key=${_key//\-}
            # search for existing parameter name
            if (echo "$existing_named" | grep "\b$_key\b" >/dev/null); then
                # if name already exists then it's a multi-value named parameter
                # re-declare it as an array if needed
                if ! (declare -p _key 2> /dev/null | grep -q 'declare \-a'); then
                    echo "$_key=(\"\$$_key\");"
                fi
                # append new value
                echo "$_key+=('$_val');"
            else
                # single-value named parameter
                echo "local $_key=\"$_val\";"
                existing_named=" $_key"
            fi
        # If standalone named parameter
        elif [[ "$1" =~ ^\-. ]]; then
            # Add to options array
            echo "ARGO+=('$_escaped');"
            # remove dashes
            local _key=${1//\-}
            echo "local $_key=\"$_key\";"
        # non-named parameter
        else
            # Escape asterisk to prevent bash asterisk expansion
            _escaped=${1/\*/\'\"*\"\'}
            echo "ARGV+=('$_escaped');"
        fi
        shift
    done
}

#--------------------------- DEMO OF THE USAGE -------------------------------

show_use ()
{
    eval $(parse_params "$@")
    # --
    echo "${ARGV[0]}" # print first unnamed param
    echo "${ARGV[1]}" # print second unnamed param
    echo "${ARGN[0]}" # print first named param
    echo "${ARG0[0]}" # print first option param (--force)
    echo "$anyparam"  # print --anyparam value
    echo "$k"         # print k=5 value
    echo "${multivalue[0]}" # print first value of multi-value
    echo "${multivalue[1]}" # print second value of multi-value
    [[ "$force" == "force" ]] && echo "\$force is set so let the force be with you"
}

show_use "param 1" --anyparam="my value" param2 k=5 --force --multi-value=test1 --multi-value=test2

Oleksii Chekulaiev ,Sep 28, 2016 at 12:55

To use the demo to parse params that come into your bash script, you just do show_use "$@" – Oleksii Chekulaiev Sep 28 '16 at 12:55

Oleksii Chekulaiev ,Sep 28, 2016 at 12:58

Basically I found out that github.com/renatosilva/easyoptions does the same in the same way but is a bit more massive than this function. – Oleksii Chekulaiev Sep 28 '16 at 12:58

galmok ,Jun 24, 2015 at 10:54

I'd like to offer my version of option parsing, that allows for the following:
-s p1
--stage p1
-w somefolder
--workfolder somefolder
-sw p1 somefolder
-e=hello

Also allows for this (could be unwanted):

-s--workfolder p1 somefolder
-se=hello p1
-swe=hello p1 somefolder

You have to decide before use if = is to be used on an option or not. This is to keep the code clean(ish).

while [[ $# > 0 ]]
do
    key="$1"
    while [[ ${key+x} ]]
    do
        case $key in
            -s*|--stage)
                STAGE="$2"
                shift # option has parameter
                ;;
            -w*|--workfolder)
                workfolder="$2"
                shift # option has parameter
                ;;
            -e=*)
                EXAMPLE="${key#*=}"
                break # option has been fully handled
                ;;
            *)
                # unknown option
                echo Unknown option: $key #1>&2
                exit 10 # either this: my preferred way to handle unknown options
                break # or this: do this to signal the option has been handled (if exit isn't used)
                ;;
        esac
        # prepare for next option in this key, if any
        [[ "$key" = -? || "$key" == --* ]] && unset key || key="${key/#-?/-}"
    done
    shift # option(s) fully processed, proceed to next input argument
done

Luca Davanzo ,Nov 14, 2016 at 17:56

what's the meaning of "+x" in ${key+x} ? – Luca Davanzo Nov 14 '16 at 17:56

galmok ,Nov 15, 2016 at 9:10

It is a test to see if 'key' is present or not. Further down I unset key and this breaks the inner while loop. – galmok Nov 15 '16 at 9:10
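A quick illustration of the ${var+x} set-vs-unset test (my own example, not from the answer): the expansion yields "x" whenever the variable is set, even to the empty string, and yields nothing when it is unset.

unset key
[[ ${key+x} ]] && echo "key is set" || echo "key is unset"   # -> key is unset
key=""
[[ ${key+x} ]] && echo "key is set" || echo "key is unset"   # -> key is set (empty still counts as set)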

Mark Fox ,Apr 27, 2015 at 2:42

Mixing positional and flag-based arguments

--param=arg (equals delimited)

Freely mixing flags between positional arguments:

./script.sh dumbo 127.0.0.1 --environment=production -q -d
./script.sh dumbo --environment=production 127.0.0.1 --quiet -d

can be accomplished with a fairly concise approach:

# process flags
pointer=1
while [[ $pointer -le $# ]]; do
   param=${!pointer}
   if [[ $param != "-"* ]]; then ((pointer++)) # not a parameter flag so advance pointer
   else
      case $param in
         # parameter-flags with arguments
         -e=*|--environment=*) environment="${param#*=}";;
                  --another=*) another="${param#*=}";;

         # binary flags
         -q|--quiet) quiet=true;;
                 -d) debug=true;;
      esac

      # splice out pointer frame from positional list
      [[ $pointer -gt 1 ]] \
         && set -- ${@:1:((pointer - 1))} ${@:((pointer + 1)):$#} \
         || set -- ${@:((pointer + 1)):$#};
   fi
done

# positional remain
node_name=$1
ip_address=$2

--param arg (space delimited)

It's usually clearer not to mix --flag=value and --flag value styles.

./script.sh dumbo 127.0.0.1 --environment production -q -d

This is a little dicey to read, but is still valid

./script.sh dumbo --environment production 127.0.0.1 --quiet -d

Source

# process flags
pointer=1
while [[ $pointer -le $# ]]; do
   if [[ ${!pointer} != "-"* ]]; then ((pointer++)) # not a parameter flag so advance pointer
   else
      param=${!pointer}
      ((pointer_plus = pointer + 1))
      slice_len=1

      case $param in
         # parameter-flags with arguments
         -e|--environment) environment=${!pointer_plus}; ((slice_len++));;
                --another) another=${!pointer_plus}; ((slice_len++));;

         # binary flags
         -q|--quiet) quiet=true;;
                 -d) debug=true;;
      esac

      # splice out pointer frame from positional list
      [[ $pointer -gt 1 ]] \
         && set -- ${@:1:((pointer - 1))} ${@:((pointer + $slice_len)):$#} \
         || set -- ${@:((pointer + $slice_len)):$#};
   fi
done

# positional remain
node_name=$1
ip_address=$2

schily ,Oct 19, 2015 at 13:59

Note that getopt(1) was a short-lived mistake from AT&T.

getopt was created in 1984 but already buried in 1986 because it was not really usable.

A proof that getopt is very outdated is that the getopt(1) man page still mentions "$*" instead of "$@" , which was added to the Bourne Shell in 1986 together with the getopts(1) shell builtin in order to deal with arguments that contain spaces.
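The difference matters as soon as arguments contain spaces; a minimal illustration:

set -- "one two" three
for a in "$*"; do printf '[%s]\n' "$a"; done   # one word:  [one two three]
for a in "$@"; do printf '[%s]\n' "$a"; done   # two words: [one two] [three]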

BTW: if you are interested in parsing long options in shell scripts, it may be of interest to know that the getopt(3) implementation from libc (Solaris) and ksh93 both added a uniform long option implementation that supports long options as aliases for short options. This causes ksh93 and the Bourne Shell to implement a uniform interface for long options via getopts .

An example for long options taken from the Bourne Shell man page:

getopts "f:(file)(input-file)o:(output-file)" OPTX "$@"

shows how long option aliases may be used in both Bourne Shell and ksh93.

See the man page of a recent Bourne Shell:

http://schillix.sourceforge.net/man/man1/bosh.1.html

and the man page for getopt(3) from OpenSolaris:

http://schillix.sourceforge.net/man/man3c/getopt.3c.html

and last, the getopt(1) man page to verify the outdated $*:

http://schillix.sourceforge.net/man/man1/getopt.1.html

Volodymyr M. Lisivka ,Jul 9, 2013 at 16:51

Use module "arguments" from bash-modules

Example:

#!/bin/bash
. import.sh log arguments

NAME="world"

parse_arguments "-n|--name)NAME;S" -- "$@" || {
  error "Cannot parse command line."
  exit 1
}

info "Hello, $NAME!"

Mike Q ,Jun 14, 2014 at 18:01

It may also be useful to know that you can set a default value and, if the user provides input, override the default with that value.

myscript.sh -f ./serverlist.txt or just ./myscript.sh (and it takes defaults)

    #!/bin/bash
    # --- set the value, if there is inputs, override the defaults.

    HOME_FOLDER="${HOME}/owned_id_checker"
    SERVER_FILE_LIST="${HOME_FOLDER}/server_list.txt"

    while [[ $# > 1 ]]
    do
    key="$1"
    shift

    case $key in
        -i|--inputlist)
        SERVER_FILE_LIST="$1"
        shift
        ;;
    esac
    done


    echo "SERVER LIST   = ${SERVER_FILE_LIST}"

phk ,Oct 17, 2015 at 21:17

Another solution without getopt[s], POSIX, old Unix style

Similar to the solution Bruno Bronosky posted, here is one that does not use getopt(s) .

The main differentiating feature of my solution is that it allows options to be concatenated together, just like tar -xzf foo.tar.gz is equal to tar -x -z -f foo.tar.gz . And just like in tar , ps etc., the leading hyphen is optional for a block of short options (but this can be changed easily). Long options are supported as well (but when a block starts with one, then two leading hyphens are required).

Code with example options
#!/bin/sh

echo
echo "POSIX-compliant getopt(s)-free old-style-supporting option parser from phk@[se.unix]"
echo

print_usage() {
  echo "Usage:

  $0 {a|b|c} [ARG...]

Options:

  --aaa-0-args
  -a
    Option without arguments.

  --bbb-1-args ARG
  -b ARG
    Option with one argument.

  --ccc-2-args ARG1 ARG2
  -c ARG1 ARG2
    Option with two arguments.

" >&2
}

if [ $# -le 0 ]; then
  print_usage
  exit 1
fi

opt=
while :; do

  if [ $# -le 0 ]; then

    # no parameters remaining -> end option parsing
    break

  elif [ ! "$opt" ]; then

    # we are at the beginning of a fresh block
    # remove optional leading hyphen and strip trailing whitespaces
    opt=$(echo "$1" | sed 's/^-\?\([a-zA-Z0-9\?-]*\)/\1/')

  fi

  # get the first character -> check whether long option
  first_chr=$(echo "$opt" | awk '{print substr($1, 1, 1)}')
  [ "$first_chr" = - ] && long_option=T || long_option=F

  # note: write the options here with one leading hyphen less
  # also do not forget to end short options with a star
  case $opt in

    -)

      # end of options
      shift
      break
      ;;

    a*|-aaa-0-args)

      echo "Option AAA activated!"
      ;;

    b*|-bbb-1-args)

      if [ "$2" ]; then
        echo "Option BBB with argument '$2' activated!"
        shift
      else
        echo "BBB parameters incomplete!" >&2
        print_usage
        exit 1
      fi
      ;;

    c*|-ccc-2-args)

      if [ "$2" ] && [ "$3" ]; then
        echo "Option CCC with arguments '$2' and '$3' activated!"
        shift 2
      else
        echo "CCC parameters incomplete!" >&2
        print_usage
        exit 1
      fi
      ;;

    h*|\?*|-help)

      print_usage
      exit 0
      ;;

    *)

      if [ "$long_option" = T ]; then
        opt=$(echo "$opt" | awk '{print substr($1, 2)}')
      else
        opt=$first_chr
      fi
      printf 'Error: Unknown option: "%s"\n' "$opt" >&2
      print_usage
      exit 1
      ;;

  esac

  if [ "$long_option" = T ]; then

    # if we had a long option then we are going to get a new block next
    shift
    opt=

  else

    # if we had a short option then just move to the next character
    opt=$(echo "$opt" | awk '{print substr($1, 2)}')

    # if block is now empty then shift to the next one
    [ "$opt" ] || shift

  fi

done

echo "Doing something..."

exit 0

For the example usage please see the examples further below.

Position of options with arguments

For what it's worth, options with arguments don't need to be last (only long options do). So while e.g. in tar (at least in some implementations) the f option needs to be last because the file name follows ( tar xzf bar.tar.gz works but tar xfz bar.tar.gz does not), this is not the case here (see the later examples).

Multiple options with arguments

As another bonus, option arguments are consumed in the order the options appear, by the options that require them. Just look at the output of my script here with the command line abc X Y Z (or -abc X Y Z ):

Option AAA activated!
Option BBB with argument 'X' activated!
Option CCC with arguments 'Y' and 'Z' activated!

Long options concatenated as well

Also you can have long options in an option block, given that they occur last in the block. So the following command lines are all equivalent (including the order in which the options and their arguments are being processed):

All of these lead to:

Option CCC with arguments 'Z' and 'Y' activated!
Option BBB with argument 'X' activated!
Option AAA activated!
Doing something...
Not in this solution

Optional arguments

Options with optional arguments should be possible with a bit of work, e.g. by looking forward to see whether there is a block without a hyphen; the user would then need to put a hyphen in front of every block that follows a block containing a parameter with an optional argument. Maybe this is too complicated to communicate to the user, so it is better to just require a leading hyphen altogether in this case.

Things get even more complicated with multiple possible parameters. I would advise against making options try to be smart by determining whether an argument might be meant for them or not (e.g. an option that just takes a number as an optional argument), because this might break in the future.

I personally favor additional options instead of optional arguments.

Option arguments introduced with an equal sign

Just like with optional arguments, I am not a fan of this (BTW, is there a thread for discussing the pros/cons of different parameter styles?), but if you want this you could probably implement it yourself, just as done at http://mywiki.wooledge.org/BashFAQ/035#Manual_loop with a --long-with-arg=?* case statement and then stripping the equal sign. (This is, BTW, the site that says that making parameter concatenation possible takes some effort but "left [it] as an exercise for the reader", which made me take them at their word, but I started from scratch.)
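A minimal sketch of that --long-with-arg=?* approach (my own illustration, with a hypothetical option name; it is not part of the script above):

case $1 in
  --long-with-arg=?*)
    arg=${1#*=}    # strip everything up to and including the first "="
    echo "Option --long-with-arg with argument '$arg'"
    ;;
esac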

Other notes

POSIX-compliant; works even on the ancient Busybox setups I had to deal with (where e.g. cut , head and getopts were missing).

Noah ,Aug 29, 2016 at 3:44

Solution that preserves unhandled arguments. Demos Included.

Here is my solution. It is VERY flexible and, unlike others, doesn't require external packages and handles leftover arguments cleanly.

Usage is: ./myscript -flag flagvariable -otherflag flagvar2

All you have to do is edit the validflags line. It prepends a hyphen and searches all arguments. It then assigns the next argument as the flag's value, e.g.

./myscript -flag flagvariable -otherflag flagvar2
echo $flag $otherflag
flagvariable flagvar2

The main code (short version, verbose with examples further down, also a version with erroring out):

#!/usr/bin/env bash
#shebang.io
validflags="rate time number"
count=1
for arg in $@
do
    match=0
    argval=$1
    for flag in $validflags
    do
        sflag="-"$flag
        if [ "$argval" == "$sflag" ]
        then
            declare $flag=$2
            match=1
        fi
    done
        if [ "$match" == "1" ]
    then
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done
#Cleanup then restore the leftovers
shift $#
set -- $leftovers

The verbose version with built in echo demos:

#!/usr/bin/env bash
#shebang.io
rate=30
time=30
number=30
echo "all args
$@"
validflags="rate time number"
count=1
for arg in $@
do
    match=0
    argval=$1
#   argval=$(echo $@ | cut -d ' ' -f$count)
    for flag in $validflags
    do
            sflag="-"$flag
        if [ "$argval" == "$sflag" ]
        then
            declare $flag=$2
            match=1
        fi
    done
        if [ "$match" == "1" ]
    then
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done

#Cleanup then restore the leftovers
echo "pre final clear args:
$@"
shift $#
echo "post final clear args:
$@"
set -- $leftovers
echo "all post set args:
$@"
echo arg1: $1 arg2: $2

echo leftovers: $leftovers
echo rate $rate time $time number $number

Final one, this one errors out if an invalid -argument is passed through.

#!/usr/bin/env bash
#shebang.io
rate=30
time=30
number=30
validflags="rate time number"
count=1
for arg in $@
do
    argval=$1
    match=0
        if [ "${argval:0:1}" == "-" ]
    then
        for flag in $validflags
        do
                sflag="-"$flag
            if [ "$argval" == "$sflag" ]
            then
                declare $flag=$2
                match=1
            fi
        done
        if [ "$match" == "0" ]
        then
            echo "Bad argument: $argval"
            exit 1
        fi
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done
#Cleanup then restore the leftovers
shift $#
set -- $leftovers
echo rate $rate time $time number $number
echo leftovers: $leftovers

Pros: What it does, it handles very well. It preserves unused arguments, which a lot of the other solutions here don't. It also allows variables to be called without being defined by hand in the script. It also allows prepopulation of variables if no corresponding argument is given. (See the verbose example.)

Cons: Can't parse a single complex arg string, e.g. -xcvf would be processed as a single argument. You could fairly easily write additional code into mine that adds this functionality, though.

Daniel Bigham ,Aug 8, 2016 at 12:42

The top answer to this question seemed a bit buggy when I tried it -- here's my solution which I've found to be more robust:
boolean_arg=""
arg_with_value=""

while [[ $# -gt 0 ]]
do
key="$1"
case $key in
    -b|--boolean-arg)
    boolean_arg=true
    shift
    ;;
    -a|--arg-with-value)
    arg_with_value="$2"
    shift
    shift
    ;;
    -*)
    echo "Unknown option: $1"
    exit 1
    ;;
    *)
    arg_num=$(( $arg_num + 1 ))
    case $arg_num in
        1)
        first_normal_arg="$1"
        shift
        ;;
        2)
        second_normal_arg="$1"
        shift
        ;;
        *)
        bad_args=TRUE
    esac
    ;;
esac
done

# Handy to have this here when adding arguments to
# see if they're working. Just edit the '0' to be '1'.
if [[ 0 == 1 ]]; then
    echo "first_normal_arg: $first_normal_arg"
    echo "second_normal_arg: $second_normal_arg"
    echo "boolean_arg: $boolean_arg"
    echo "arg_with_value: $arg_with_value"
    exit 0
fi

if [[ $bad_args == TRUE || $arg_num -lt 2 ]]; then
    echo "Usage: $(basename "$0") <first-normal-arg> <second-normal-arg> [--boolean-arg] [--arg-with-value VALUE]"
    exit 1
fi

phyatt ,Sep 7, 2016 at 18:25

This example shows how to use getopt , eval , HEREDOC and shift to handle short and long parameters, with and without required values that follow. Also, the switch/case statement is concise and easy to follow.
#!/usr/bin/env bash

# usage function
function usage()
{
   cat << HEREDOC

   Usage: $progname [--num NUM] [--time TIME_STR] [--verbose] [--dry-run]

   optional arguments:
     -h, --help           show this help message and exit
     -n, --num NUM        pass in a number
     -t, --time TIME_STR  pass in a time string
     -v, --verbose        increase the verbosity of the bash script
     --dry-run            do a dry run, don't change any files

HEREDOC
}  

# initialize variables
progname=$(basename $0)
verbose=0
dryrun=0
num_str=
time_str=

# use getopt and store the output into $OPTS
# note the use of -o for the short options, --long for the long name options
# and a : for any option that takes a parameter
OPTS=$(getopt -o "hn:t:v" --long "help,num:,time:,verbose,dry-run" -n "$progname" -- "$@")
if [ $? != 0 ] ; then echo "Error in command line arguments." >&2 ; usage; exit 1 ; fi
eval set -- "$OPTS"

while true; do
  # uncomment the next line to see how shift is working
  # echo "\$1:\"$1\" \$2:\"$2\""
  case "$1" in
    -h | --help ) usage; exit; ;;
    -n | --num ) num_str="$2"; shift 2 ;;
    -t | --time ) time_str="$2"; shift 2 ;;
    --dry-run ) dryrun=1; shift ;;
    -v | --verbose ) verbose=$((verbose + 1)); shift ;;
    -- ) shift; break ;;
    * ) break ;;
  esac
done

if (( $verbose > 0 )); then

   # print out all the parameters we read in
   cat <<-EOM
   num=$num_str
   time=$time_str
   verbose=$verbose
   dryrun=$dryrun
EOM
fi

# The rest of your script below

The most significant lines of the script above are these:

OPTS=$(getopt -o "hn:t:v" --long "help,num:,time:,verbose,dry-run" -n "$progname" -- "$@")
if [ $? != 0 ] ; then echo "Error in command line arguments." >&2 ; exit 1 ; fi
eval set -- "$OPTS"

while true; do
  case "$1" in
    -h | --help ) usage; exit; ;;
    -n | --num ) num_str="$2"; shift 2 ;;
    -t | --time ) time_str="$2"; shift 2 ;;
    --dry-run ) dryrun=1; shift ;;
    -v | --verbose ) verbose=$((verbose + 1)); shift ;;
    -- ) shift; break ;;
    * ) break ;;
  esac
done

Short, to the point, readable, and handles just about everything (IMHO).

Hope that helps someone.

Emeric Verschuur ,Feb 20, 2017 at 21:30

I have written a bash helper for writing nice bash tools

project home: https://gitlab.mbedsys.org/mbedsys/bashopts

example:

#!/bin/bash -ei

# load the library
. bashopts.sh

# Enable backtrace display on error
trap 'bashopts_exit_handle' ERR

# Initialize the library
bashopts_setup -n "$0" -d "This is myapp tool description displayed on help message" -s "$HOME/.config/myapprc"

# Declare the options
bashopts_declare -n first_name -l first -o f -d "First name" -t string -i -s -r
bashopts_declare -n last_name -l last -o l -d "Last name" -t string -i -s -r
bashopts_declare -n display_name -l display-name -t string -d "Display name" -e "\$first_name \$last_name"
bashopts_declare -n age -l number -d "Age" -t number
bashopts_declare -n email_list -t string -m add -l email -d "Email address"

# Parse arguments
bashopts_parse_args "$@"

# Process argument
bashopts_process_args

will give help:

NAME:
    ./example.sh - This is myapp tool description displayed on help message

USAGE:
    [options and commands] [-- [extra args]]

OPTIONS:
    -h,--help                          Display this help
    -n,--non-interactive true          Non interactive mode - [$bashopts_non_interactive] (type:boolean, default:false)
    -f,--first "John"                  First name - [$first_name] (type:string, default:"")
    -l,--last "Smith"                  Last name - [$last_name] (type:string, default:"")
    --display-name "John Smith"        Display name - [$display_name] (type:string, default:"$first_name $last_name")
    --number 0                         Age - [$age] (type:number, default:0)
    --email                            Email address - [$email_list] (type:string, default:"")

enjoy :)

Josh Wulf ,Jun 24, 2017 at 18:07

I get this on Mac OS X:

lib/bashopts.sh: line 138: declare: -A: invalid option
declare: usage: declare [-afFirtx] [-p] [name[=value] ...]
Error in lib/bashopts.sh:138. 'declare -x -A bashopts_optprop_name' exited with status 2
Call tree:
 1: lib/controller.sh:4 source(...)
Exiting with status 1

– Josh Wulf Jun 24 '17 at 18:07

Josh Wulf ,Jun 24, 2017 at 18:17

You need Bash version 4 to use this. On Mac, the default version is 3. You can use Homebrew to install bash 4. – Josh Wulf Jun 24 '17 at 18:17

a_z ,Mar 15, 2017 at 13:24

Here is my approach - using regexp.

script:

#!/usr/bin/env sh

help_menu() {
  echo "Usage:

  ${0##*/} [-h][-l FILENAME][-d]

Options:

  -h, --help
    display this help and exit

  -l, --logfile=FILENAME
    filename

  -d, --debug
    enable debug
  "
}

parse_options() {
  case $opt in
    h|help)
      help_menu
      exit
     ;;
    l|logfile)
      logfile=${attr}
      ;;
    d|debug)
      debug=true
      ;;
    *)
      echo "Unknown option: ${opt}\nRun ${0##*/} -h for help.">&2
      exit 1
  esac
}
options=$@

until [ "$options" = "" ]; do
  if [[ $options =~ (^ *(--([a-zA-Z0-9-]+)|-([a-zA-Z0-9-]+))(( |=)(([\_\.\?\/\\a-zA-Z0-9]?[ -]?[\_\.\?a-zA-Z0-9]+)+))?(.*)|(.+)) ]]; then
    if [[ ${BASH_REMATCH[3]} ]]; then # for --option[=attribute] or --option [attribute]
      opt=${BASH_REMATCH[3]}
      attr=${BASH_REMATCH[7]}
      options=${BASH_REMATCH[9]}
    elif [[ ${BASH_REMATCH[4]} ]]; then # for block options -qwert[=][attribute] or single short option -a[=][attribute]
      pile=${BASH_REMATCH[4]}
      while (( ${#pile} > 1 )); do
        opt=${pile:0:1}
        attr=""
        pile=${pile/${pile:0:1}/}
        parse_options
      done
      opt=$pile
      attr=${BASH_REMATCH[7]}
      options=${BASH_REMATCH[9]}
    else # leftovers that don't match
      opt=${BASH_REMATCH[10]}
      options=""
    fi
    parse_options
  fi
done

mauron85 ,Jun 21, 2017 at 6:03

Like this one. Maybe just add the -e param to echo so the newline works. – mauron85 Jun 21 '17 at 6:03

John ,Oct 10, 2017 at 22:49

Assume we create a shell script named test_args.sh as follows (note that it uses [[ ]] and pattern matching, which are bashisms, so it needs bash rather than plain sh):
#!/bin/bash
until [ $# -eq 0 ]
do
  name=${1:1}; shift;
  if [[ -z "$1" || $1 == -* ]] ; then eval "export $name=true"; else eval "export $name=$1"; shift; fi  
done
echo "year=$year month=$month day=$day flag=$flag"

After we run the following command:

bash test_args.sh -year 2017 -flag -month 12 -day 22

The output would be:

year=2017 month=12 day=22 flag=true

Will Barnwell ,Oct 10, 2017 at 23:57

This takes the same approach as Noah's answer , but has less safety checks / safeguards. This allows us to write arbitrary arguments into the script's environment and I'm pretty sure your use of eval here may allow command injection. – Will Barnwell Oct 10 '17 at 23:57
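To make the risk concrete, here is a minimal sketch (my own; the malicious input is hypothetical) of how the eval can be exploited, and a safer assignment using printf -v , which writes the value literally instead of re-parsing it:

# Dangerous: with test_args.sh above, this input executes the command
# substitution, because eval re-parses the value:
#   bash test_args.sh -year '$(touch /tmp/pwned)'

# Safer: printf -v assigns a literal string to the named variable.
name=year
value='$(touch /tmp/pwned)'
printf -v "$name" '%s' "$value"
echo "$year"    # prints the literal text $(touch /tmp/pwned); nothing is executed
# follow with: export "$name"   # if the variable must be exported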

Masadow ,Oct 6, 2015 at 8:53

Here is my improved version of Bruno Bronosky's answer, using variable arrays.

It lets you mix positional parameters and options freely, and gives you a parameter array that preserves the order, with the options removed:

#!/bin/bash

echo $@

PARAMS=()
SOFT=0
SKIP=()
for i in "$@"
do
case $i in
    -n=*|--skip=*)
    SKIP+=("${i#*=}")
    ;;
    -s|--soft)
    SOFT=1
    ;;
    *)
        # unknown option
        PARAMS+=("$i")
    ;;
esac
done
echo "SKIP            = ${SKIP[@]}"
echo "SOFT            = $SOFT"
    echo "Parameters:"
    echo ${PARAMS[@]}

Will output for example:

$ ./test.sh parameter -s somefile --skip=.c --skip=.obj
parameter -s somefile --skip=.c --skip=.obj
SKIP            = .c .obj
SOFT            = 1
Parameters:
parameter somefile

Jason S ,Dec 3, 2017 at 1:01

You use shift on the known arguments and not on the unknown ones, so your remaining $@ will be all but the first two arguments (in the order they are passed in), which could lead to some mistakes if you try to use $@ later. You don't need the shift for the = parameters, since you're not handling spaces and you're getting the value with the substring removal #*= – Jason S Dec 3 '17 at 1:01

Masadow ,Dec 5, 2017 at 9:17

You're right, in fact, since I build a PARAMS variable, I don't need to use shift at all – Masadow Dec 5 '17 at 9:17

[Jun 09, 2018] How to use the history command in Linux Opensource.com

Jun 09, 2018 | opensource.com

Changing an executed command

history also allows you to rerun a command with different syntax. For example, if I want to change my previous command history | grep dnf to history | grep ssh , I can execute the following at the prompt:

$ ^dnf^ssh^

history will rerun the command, but replace dnf with ssh , and execute it.
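The caret substitution replaces only the first occurrence on the line; bash's history-expansion modifiers can also do a global replace:

$ ^dnf^ssh^          # rerun the previous command, replacing the first "dnf" with "ssh"
$ !!:gs/dnf/ssh/     # the equivalent bang form; "gs" replaces every occurrence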

Removing history

There may come a time that you want to remove some or all the commands in your history file. If you want to delete a particular command, enter history -d <line number> . To clear the entire contents of the history file, execute history -c .
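For example (note that history -c only clears the in-memory list; history -w writes that list back to the history file if you want the change persisted):

$ history -d 1234    # delete history entry number 1234
$ history -c         # clear the in-memory history list
$ history -w         # write the (now empty) list back to the history file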

The history is stored in a file that you can modify as well. Bash shell users will find it in their home directory as .bash_history .

Next steps

There are a number of other things that you can do with history :

For more information about the history command and other interesting things you can do with it, take a look at the GNU Bash Manual .

[Jun 01, 2018] Introduction to Bash arrays by Robert Aboukhalil

Jun 01, 2018 | opensource.com

... ... ...

Looping through arrays

Although in the examples above we used integer indices in our arrays, let's consider two occasions when that won't be the case: First, if we wanted the $i -th element of the array, where $i is a variable containing the index of interest, we can retrieve that element using: echo ${allThreads[$i]} . Second, to output all the elements of an array, we replace the numeric index with the @ symbol (you can think of @ as standing for all ): echo ${allThreads[@]} .
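Concretely, with the allThreads array used throughout the article:

allThreads=(1 2 4 8 16 32 64 128)
i=3
echo ${allThreads[$i]}   # -> 8
echo ${allThreads[@]}    # -> 1 2 4 8 16 32 64 128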

Looping through array elements

With that in mind, let's loop through $allThreads and launch the pipeline for each value of --threads :

for t in ${allThreads[@]}; do
    ./pipeline --threads $t
done

Looping through array indices

Next, let's consider a slightly different approach. Rather than looping over array elements , we can loop over array indices :

for i in ${!allThreads[@]}; do
    ./pipeline --threads ${allThreads[$i]}
done

Let's break that down: As we saw above, ${allThreads[@]} represents all the elements in our array. Adding an exclamation mark to make it ${!allThreads[@]} will return the list of all array indices (in our case 0 to 7). In other words, the for loop is looping through all indices $i and reading the $i -th element from $allThreads to set the value of the --threads parameter.

This is much harsher on the eyes, so you may be wondering why I bother introducing it in the first place. That's because there are times when you need to know both the index and the value within a loop; e.g., if you want to ignore the first element of an array, using indices saves you from creating an additional variable that you then increment inside the loop.

Populating arrays

So far, we've been able to launch the pipeline for each --threads of interest. Now, let's assume the output to our pipeline is the runtime in seconds. We would like to capture that output at each iteration and save it in another array so we can do various manipulations with it at the end.

Some useful syntax

But before diving into the code, we need to introduce some more syntax. First, we need to be able to retrieve the output of a Bash command. To do so, use the following syntax: output=$( ./my_script.sh ) , which will store the output of our commands into the variable $output .

The second bit of syntax we need is how to append the value we just retrieved to an array. The syntax to do that will look familiar:

myArray+=( "newElement1" "newElement2" )
The parameter sweep

Putting everything together, here is our script for launching our parameter sweep:

allThreads=(1 2 4 8 16 32 64 128)
allRuntimes=()
for t in ${allThreads[@]}; do
    runtime=$(./pipeline --threads $t)
    allRuntimes+=($runtime)
done

And voilà!

What else you got?

In this article, we covered the scenario of using arrays for parameter sweeps. But I promise there are more reasons to use Bash arrays -- here are two more examples.

Log alerting

In this scenario, your app is divided into modules, each with its own log file. We can write a cron job script to email the right person when there are signs of trouble in certain modules:

# List of logs and who should be notified of issues
logPaths=("api.log" "auth.log" "jenkins.log" "data.log")
logEmails=("jay@email" "emma@email" "jon@email" "sophia@email")

# Look for signs of trouble in each log
for i in ${!logPaths[@]}; do
    log=${logPaths[$i]}
    stakeholder=${logEmails[$i]}
    numErrors=$(tail -n 100 "$log" | grep "ERROR" | wc -l)

    # Warn stakeholders if recently saw > 5 errors
    if [[ "$numErrors" -gt 5 ]]; then
        emailRecipient="$stakeholder"
        emailSubject="WARNING: ${log} showing unusual levels of errors"
        emailBody="${numErrors} errors found in log ${log}"
        echo "$emailBody" | mailx -s "$emailSubject" "$emailRecipient"
    fi
done

API queries

Say you want to generate some analytics about which users comment the most on your Medium posts. Since we don't have direct database access, SQL is out of the question, but we can use APIs!

To avoid getting into a long discussion about API authentication and tokens, we'll instead use JSONPlaceholder , a public-facing API testing service, as our endpoint. Once we query each post and retrieve the emails of everyone who commented, we can append those emails to our results array:

endpoint="https://jsonplaceholder.typicode.com/comments"
allEmails=()

# Query first 10 posts
for postId in {1..10}; do
    # Make API call to fetch emails of this post's commenters
    response=$(curl "${endpoint}?postId=${postId}")

    # Use jq to parse the JSON response into an array
    allEmails+=($(jq '.[].email' <<< "$response"))
done

Note here that I'm using the jq tool to parse JSON from the command line. The syntax of jq is beyond the scope of this article, but I highly recommend you look into it.

As you might imagine, there are countless other scenarios in which using Bash arrays can help, and I hope the examples outlined in this article have given you some food for thought. If you have other examples to share from your own work, please leave a comment below.

But wait, there's more!

Since we covered quite a bit of array syntax in this article, here's a summary of what we covered, along with some more advanced tricks we did not cover:

Syntax              Result
arr=()              Create an empty array
arr=(1 2 3)         Initialize array
${arr[2]}           Retrieve third element
${arr[@]}           Retrieve all elements
${!arr[@]}          Retrieve array indices
${#arr[@]}          Calculate array size
arr[0]=3            Overwrite 1st element
arr+=(4)            Append value(s)
str=$(ls)           Save ls output as a string
arr=( $(ls) )       Save ls output as an array of files
${arr[@]:s:n}       Retrieve n elements starting at index s
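For instance, the slice syntax from the last row:

arr=(a b c d e f)
echo ${arr[@]:2:3}   # -> c d e  (three elements starting at index 2)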
One last thought

As we've discovered, Bash arrays sure have strange syntax, but I hope this article convinced you that they are extremely powerful. Once you get the hang of the syntax, you'll find yourself using Bash arrays quite often.

... ... ...

Robert Aboukhalil is a Bioinformatics Software Engineer. In his work, he develops cloud applications for the analysis and interactive visualization of genomics data. Robert holds a Ph.D. in Bioinformatics from Cold Spring Harbor Laboratory and a B.Eng. in Computer Engineering from McGill.

[May 28, 2018] Useful Linux Command Line Bash Shortcuts You Should Know

May 28, 2018 | www.tecmint.com

In this article, we will share a number of Bash command-line shortcuts useful for any Linux user. These shortcuts allow you to quickly perform certain activities on the command line, such as accessing and running previously executed commands, opening an editor, editing/deleting/changing text, moving the cursor, and controlling processes.

Although this article will mostly benefit Linux beginners finding their way around command-line basics, those with intermediate skills and advanced users might also find it practically helpful. We will group the bash keyboard shortcuts into categories as follows.

Launch an Editor

Open a terminal and press Ctrl+X followed by Ctrl+E to open an editor ( nano in this case) with an empty buffer. Bash will try to launch the editor defined by the $EDITOR environment variable.

[Image: Nano Editor]

Controlling The Screen

These shortcuts are used to control terminal screen output:

Move Cursor on The Command Line

The next shortcuts are used for moving the cursor within the command-line:

Search Through Bash History

The following shortcuts are used for searching for commands in the bash history:

Delete Text on the Command Line

The following shortcuts are used for deleting text on the command line:

Transpose Text or Change Case on the Command Line

These shortcuts will transpose or change the case of letters or words on the command line:

Working With Processes in Linux

The following shortcuts help you to control running Linux processes.

Learn more about: All You Need To Know About Processes in Linux [Comprehensive Guide]

Bash Bang (!) Commands

In the final part of this article, we will explain some useful ! (bang) operations:

For more information, see the bash man page:

$ man bash

That's all for now! In this article, we shared some common and useful Bash command-line shortcuts and operations. Use the comment form below to make any additions or ask questions.

[Apr 30, 2018] New Book Describes Bluffing Programmers in Silicon Valley

Notable quotes:
"... Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley ..."
"... Older generations called this kind of fraud "fake it 'til you make it." ..."
"... Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring ..."
"... It's not a "kids these days" sort of issue, it's *always* been the case that shameless, baseless self-promotion wins out over sincere skill without the self-promotion, because the people who control the money generally understand boasting more than they understand the technology. ..."
"... In the bad old days we had a hell of a lot of ridiculous restriction We must somehow made our programs to run successfully inside a RAM that was 48KB in size (yes, 48KB, not 48MB or 48GB), on a CPU with a clock speed of 1.023 MHz ..."
"... So what are the uses for that? I am curious what things people have put these to use for. ..."
"... Also, Oracle, SAP, IBM... I would never buy from them, nor use their products. I have used plenty of IBM products and they suck big time. They make software development 100 times harder than it could be. ..."
"... I have a theory that 10% of people are good at what they do. It doesn't really matter what they do, they will still be good at it, because of their nature. These are the people who invent new things, who fix things that others didn't even see as broken and who automate routine tasks or simply question and erase tasks that are not necessary. ..."
"... 10% are just causing damage. I'm not talking about terrorists and criminals. ..."
"... Programming is statistically a dead-end job. Why should anyone hone a dead-end skill that you won't be able to use for long? For whatever reason, the industry doesn't want old programmers. ..."
Apr 30, 2018 | news.slashdot.org

Long-time Slashdot reader Martin S. pointed us to an excerpt from the new book Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley by Portland-based investigative reporter Corey Pein.

The author shares what he realized at a job recruitment fair seeking "Java Legends, Python Badasses, Hadoop Heroes," and other gratingly childish classifications describing various programming specialities.

" I wasn't the only one bluffing my way through the tech scene. Everyone was doing it, even the much-sought-after engineering talent.

I was struck by how many developers were, like myself, not really programmers , but rather this, that and the other. A great number of tech ninjas were not exactly black belts when it came to the actual onerous work of computer programming. So many of the complex, discrete tasks involved in the creation of a website or an app had been automated that it was no longer necessary to possess knowledge of software mechanics. The coder's work was rarely a craft. The apps ran on an assembly line, built with "open-source", off-the-shelf components. The most important computer commands for the ninja to master were copy and paste...

[M]any programmers who had "made it" in Silicon Valley were scrambling to promote themselves from coder to "founder". There wasn't necessarily more money to be had running a startup, and the increase in status was marginal unless one's startup attracted major investment and the right kind of press coverage. It's because the programmers knew that their own ladder to prosperity was on fire and disintegrating fast. They knew that well-paid programming jobs would also soon turn to smoke and ash, as the proliferation of learn-to-code courses around the world lowered the market value of their skills, and as advances in artificial intelligence allowed for computers to take over more of the mundane work of producing software. The programmers also knew that the fastest way to win that promotion to founder was to find some new domain that hadn't yet been automated. Every tech industry campaign designed to spur investment in the Next Big Thing -- at that time, it was the "sharing economy" -- concealed a larger programme for the transformation of society, always in a direction that favoured the investor and executive classes.

"I wasn't just changing careers and jumping on the 'learn to code' bandwagon," he writes at one point. "I was being steadily indoctrinated in a specious ideology."


Anonymous Coward , Saturday April 28, 2018 @11:40PM ( #56522045 )

older generations already had a term for this ( Score: 5 , Interesting)

Older generations called this kind of fraud "fake it 'til you make it."

raymorris ( 2726007 ) , Sunday April 29, 2018 @02:05AM ( #56522343 ) Journal
The people who are smarter won't ( Score: 5 , Informative)

> The people can do both are smart enough to build their own company and compete with you.

Been there, done that. Learned a few lessons. Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring, managing people, corporate strategy, staying up on the competition, figuring out tax changes each year and getting taxes filed six times each year, the various state and local requirements, legal changes, contract hassles, etc, while hoping the company makes money this month so they can take a paycheck and pay their rent.

I learned that I'm good at creating software systems and I enjoy it. I don't enjoy all-nighters, partners being dickheads trying to pull out of a contract, or any of a thousand other things related to running a start-up business. I really enjoy a consistent, six-figure compensation package too.

brian.stinar ( 1104135 ) writes:
Re: ( Score: 2 )

* getting taxes filed eighteen times a year.

I pay monthly gross receipts tax (12), quarterly withholdings (4) and a corporate (1) and individual (1) returns. The gross receipts can vary based on the state, so I can see how six times a year would be the minimum.

Cederic ( 9623 ) writes:
Re: ( Score: 2 )

Fuck no. Cost of full automation: $4m. Cost of manual entry: $0. Opportunity cost of manual entry: $800/year.

At worse, pay for an accountant, if you can get one that cheaply. Bear in mind talking to them incurs most of that opportunity cost anyway.

serviscope_minor ( 664417 ) writes:
Re: ( Score: 2 )

Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring

There's nothing wrong with not wanting to run your own business, it's not for most people, and even if it was, the numbers don't add up. But putting the scare quotes in like that makes it sound like you have a huge chip on your shoulder. Those things are just as essential to the business as your work, and without them you wouldn't have the steady 9:30-4:30 with a good paycheck.

raymorris ( 2726007 ) writes:
Important, and dumb. ( Score: 3 , Informative)

Of course they are important. I wouldn't have done those things if they weren't important!

I frequently have friends say things like "I love baking. I can't get enough of baking. I'm going to open a bakery.". I ask them "do you love dealing with taxes, every month? Do you love contract law? Employment law? Marketing? Accounting?" If you LOVE baking, the smart thing to do is to spend your time baking. Running a start-up business, you're not going to do much baking.

If you love marketing, employment law, taxes

raymorris ( 2726007 ) writes:
Four tips for a better job. Who has more? ( Score: 3 )

I can tell you a few things that have worked for me. I'll go in chronological order rather than priority order.

Make friends in the industry you want to be in. Referrals are a major way people get jobs.

Look at the job listings for jobs you'd like to have and see which skills a lot of companies want, but you're missing. For me that's Java. A lot companies list Java skills and I'm not particularly good with Java. Then consider learning the skills you lack, the ones a lot of job postings are looking for.

Certifi

goose-incarnated ( 1145029 ) , Sunday April 29, 2018 @02:34PM ( #56524475 ) Journal
Re: older generations already had a term for this ( Score: 5 , Insightful)
You don't understand the point of an ORM do you? I'd suggest reading why they exist

They exist because programmers value code design more than data design. ORMs are the poster-child for square-peg-round-hole solutions, which is why all ORMs choose one of three different ways of squashing hierarchical data into a relational form, all of which are crappy.

If the devs of the system (the ones choosing to use an ORM) had any competence at all they'd design their database first because in any application that uses a database the database is the most important bit, not the OO-ness or Functional-ness of the design.

Over the last few decades I've seen programs in a system come and go; a component here gets rewritten, a component there gets rewritten, but you know what? They all have to work with the same damn data.

You can more easily switch out your code for new code with a new design in a new language than you can change the database structure. So explain to me why you think the database should be mangled to fit your OO code rather than mangling your OO code to fit the database?

cheekyboy ( 598084 ) writes:
im sick of reinventors and new frameworks ( Score: 3 )

Stick to the one thing for 10-15 years. Often all this new shit doesn't do jack different to the old shit; it's not faster, it's not better. Every dick wants to be famous, so he makes another damn library/tool with his own fancy name and feature, instead of enhancing an existing product.

gbjbaanb ( 229885 ) writes:
Re: ( Score: 2 )

amen to that.

Or kids who can't hack the main stuff, suddenly discover the cool new, and then they can pretend they're "learning" it, and when the going gets tough (as it always does) they can declare the tech to be pants and move to another.

hence we had so many people on the bandwagon for functional programming, then dumped it for ruby on rails, then dumped that for Node.js, not sure what they're on at currently, probably back to asp.net.

Greyfox ( 87712 ) writes:
Re: ( Score: 2 )

How much code do you have to reuse before you're not really programming anymore? When I started in this business, it was reasonably possible that you could end up on a project that didn't particularly have much (or any) of an operating system. They taught you assembly language and the process by which the system boots up, but I think if I were to ask most of the programmers where I work, they wouldn't be able to explain how all that works...

djinn6 ( 1868030 ) writes:
Re: ( Score: 2 )
It really feels like if you know what you're doing it should be possible to build a team of actually good programmers and put everyone else out of business by actually meeting your deliverables, but no one has yet. I wonder why that is.

You mean Amazon, Google, Facebook and the like? People may not always like what they do, but they manage to get things done and make plenty of money in the process. The problem for a lot of other businesses is not having a way to identify and promote actually good programmers. In your example, you could've spent 10 minutes fixing their query and saved them days of headache, but how much recognition will you actually get? Where is your motivation to help them?

Junta ( 36770 ) writes:
Re: ( Score: 2 )

It's not a "kids these days" sort of issue, it's *always* been the case that shameless, baseless self-promotion wins out over sincere skill without the self-promotion, because the people who control the money generally understand boasting more than they understand the technology. Yes it can happen that baseless boasts can be called out over time by a large enough mass of feedback from competent peers, but it takes a *lot* to overcome the tendency for them to have faith in the boasts.

It does correlate stron

cheekyboy ( 598084 ) writes:
Re: ( Score: 2 )

And all these modern coders forget old lessons, and make shit stuff, just look at instagram windows app, what a load of garbage shit, that us old fuckers could code in 2-3 weeks.

Instagram - your app sucks, cookie cutter coders suck, no refinement, coolness. Just cheap ass shit, with limited usefulness.

Just like most of commercial software that's new - quick shit.

Oh and its obvious if your an Indian faking it, you haven't worked in 100 companies at the age of 29.

Junta ( 36770 ) writes:
Re: ( Score: 2 )

Here's another problem, if faced with a skilled team that says "this will take 6 months to do right" and a more naive team that says "oh, we can slap that together in a month", management goes with the latter. Then the security compromises occur, then the application fails due to pulling in an unvetted dependency update live into production. When the project grows to handling thousands instead of dozens of users and it starts mysteriously folding over and the dev team is at a loss, well the choice has be

molarmass192 ( 608071 ) , Sunday April 29, 2018 @02:15AM ( #56522359 ) Homepage Journal
Re:older generations already had a term for this ( Score: 5 , Interesting)

These restrictions are a large part of what makes Arduino programming "fun". If you don't plan out your memory usage, you're gonna run out of it. I cringe when I see 8MB web pages of bloated "throw in everything including the kitchen sink and the neighbor's car". Unfortunately, the careful and cautious way is dying in favor of throwing 3rd-party code at it until it does something. Of course, I don't have time to review it, but I'm sure everybody else has peer reviewed it for flaws and exploits line by line.

AmiMoJo ( 196126 ) writes: < mojo@@@world3...net > on Sunday April 29, 2018 @05:15AM ( #56522597 ) Homepage Journal
Re:older generations already had a term for this ( Score: 4 , Informative)
Unfortunately, the careful and cautious way is dying in favor of throwing 3rd-party code at it until it does something.

Of course. What is the business case for making it efficient? Those massive frameworks are cached by the browser and run on the client's system, so cost you nothing and save you time to market. Efficient costs money with no real benefit to the business.

If we want to fix this, we need to make bloat have an associated cost somehow.

locketine ( 1101453 ) writes:
Re: older generations already had a term for this ( Score: 2 )

My company is dealing with the result of this mentality right now. We released the web app to the customer without performance testing and doing several majorly inefficient things to meet deadlines. Once real load was put on the application by users with non-ideal hardware and browsers, the app was infuriatingly slow. Suddenly our standard sub-40 hour workweek became a 50+ hour workweek for months while we fixed all the inefficient code and design issues.

So, while you're right that getting to market and opt

serviscope_minor ( 664417 ) writes:
Re: ( Score: 2 )

In the bad old days we had a hell of a lot of ridiculous restrictions. We had to somehow make our programs run successfully inside RAM that was 48KB in size (yes, 48KB, not 48MB or 48GB), on a CPU with a clock speed of 1.023 MHz

We still have them. In fact some of the systems I've programmed have been more resource limited than the gloriously spacious 32KiB memory of the BBC Model B. Take the PIC12F or 10F series: a glorious 64 bytes of RAM, a max clock speed of 16MHz, but it's not unusual to run it at 32kHz.

serviscope_minor ( 664417 ) writes:
Re: ( Score: 2 )

So what are the uses for that? I am curious what things people have put these to use for.

It's hard to determine because people don't advertise use of them at all. However, I know that my electric toothbrush uses an Epson 4-bit MCU of some description. It's got a status LED, a basic NiMH battery charger and a PWM controller for an H-bridge. Braun sell a *lot* of electric toothbrushes. Any gadget that's smarter than a simple switch will probably have some sort of basic MCU in it. Alarm system components, sensor

tlhIngan ( 30335 ) writes:
Re: ( Score: 3 , Insightful)
b) No computer ever ran at 1.023 MHz. It was either a nice multiple of 1 MHz or maybe a multiple of 3.579545 MHz (i.e. using the TV output circuit's color clock crystal to drive the CPU).

Well, it could be used to drive the TV output circuit, OR, it was used because it's a stupidly cheap high speed crystal. You have to remember except for a few frequencies, most crystals would have to be specially cut for the desired frequency. This occurs even today, where most oscillators are either 32.768kHz (real time clock

Anonymous Coward writes:
Re: ( Score: 2 , Interesting)

Yeah, nice talk. You could have stopped after the first sentence. The other AC is referring to the Commodore C64 [wikipedia.org]. The frequency has nothing to do with crystal availability but with the simple fact that everything in the C64 is synced to the TV. One clock cycle equals 8 pixels. The graphics chip and the CPU take turns accessing the RAM. The different frequencies dictated by the TV standards are the reason why the CPU in the NTSC version of the C64 runs at 1.023MHz and the PAL version at 0.985MHz.

Wraithlyn ( 133796 ) writes:
Re: ( Score: 2 )

LOL what exactly is so special about 16K RAM? https://yourlogicalfallacyis.c... [yourlogicalfallacyis.com]

I cut my teeth on a VIC20 (5K RAM), then later a C64 (which ran at 1.023MHz...)

Anonymous Coward writes:
Re: ( Score: 2 , Interesting)

Commodore 64 for the win. I worked for a company that made detection devices for the railroad, things like monitoring axle temperatures and reading the rail car ID tags. The original devices were made using Commodore 64 boards, running software written by an employee at the one railroad company working with them.

The company then hired some electrical engineers to design custom boards using the 68000 chips and I was hired as the only programmer. Had to rewrite all of the code which was fine...

wierd_w ( 1375923 ) , Saturday April 28, 2018 @11:58PM ( #56522075 )
... A job fair can easily test this competency. ( Score: 4 , Interesting)

Many of these languages have an interactive interpreter. I know for a fact that Python does.

So, since job-fairs are an all day thing, and setup is already a thing for them -- set up a booth with like 4 computers at it, and an admin station. The 4 terminals have an interactive session with the interpreter of choice. Every 20min or so, have a challenge for "Solve this problem" (needs to be easy and already solved in general. Programmers hate being pimped without pay. They don't mind tests of skill, but hate being pimped. Something like "sort this array, while picking out all the prime numbers" or something.) and see who steps up. The ones that step up have confidence they can solve the problem, and you can quickly see who can do the work and who can't.

The ones that solve it, and solve it to your satisfaction, you offer a nice gig to.

ShanghaiBill ( 739463 ) , Sunday April 29, 2018 @01:50AM ( #56522321 )
Re:... A job fair can easily test this competency. ( Score: 5 , Informative)
Then you get someone good at sorting arrays while picking out prime numbers, but potentially not much else.

The point of the test is not to identify the perfect candidate, but to filter out the clearly incompetent. If you can't sort an array and write a function to identify a prime number, I certainly would not hire you. Passing the test doesn't get you a job, but it may get you an interview ... where there will be other tests.

wierd_w ( 1375923 ) writes:
Re: ( Score: 2 )

BINGO!

(I am not even a professional programmer, but I can totally perform such a trivially easy task. The example tests basic understanding of loop construction, function construction, variable use, efficient sorting, and error correction-- especially with mixed type arrays. All of these are things any programmer SHOULD know how to do, without being overly complicated, or clearly a disguised occupational problem trying to get a free solution. Like I said, programmers hate being pimped, and will be turned off

wierd_w ( 1375923 ) , Sunday April 29, 2018 @04:02AM ( #56522443 )
Re: ... A job fair can easily test this competency ( Score: 5 , Insightful)

Again, the quality applicant and the code monkey both have something the fakers do not-- Actual comprehension of what a program is, and how to create one.

As Bill points out, this is not the final exam. This is the "Oh, I see you do actually know how to program-- show me more" portion of the process. This is the part that HR drones are not capable of performing, due to Dunning-Kruger. Those that are actually, REALLY competent will do more than just satisfy the requirements of the challenge; they will provide actually working solutions to the challenge that properly validate their input, and return proper error states if the input is invalid, etc-- You can learn a LOT about a potential hire by observing their work. *THAT* is what this is really about. The triviality of the problem is a necessity, because you ***DON'T*** try to get free solutions out of people.

I realize that may be difficult for you to comprehend, but you *DON'T* do that. The job fair is to let people know that you have a position available, and try to curry interest in people to apply. A successful pre-screening is confidence building, and helps the potential hire to feel that your company is actually interested in actually hiring somebody, and not just fucking off in the booth, to cover for "failing to find somebody" and then "Getting yet another H1B". It gives them a chance to show you what they can do. That is what it is for, and what it does. It also excludes the fakers that this article is about-- The ones that can talk a good talk, but could not program a simple boolean check condition if their life depended on it.

If it were not for the time constraints of a job fair (usually only 2 days, and in that time you need to try and pre-screen as many as possible), I would suggest a tiered challenge, with progressively harder challenges, where you hand out resumes to the ones that make it to the top 3 brackets, but that is not the way the world works.

luis_a_espinal ( 1810296 ) writes:
Re: ( Score: 2 )
This in my opinion is really a waste of time. Challenges like this have to be so simple that they can be done walking up to a booth, and so are not likely to filter the "all talk" types any better than a few interview questions could (asked in person, so the candidate can't just google the answer).

Tougher, more involved stuff isn't good either: it gives a huge advantage to the full-time job hunter, while the guy or gal who already has a 9-5 and a family that wants to see them has no time for games. We have been struggling with hiring where I work (I do a lot of the interviews) and these are the conclusions we have reached.

You would be surprised at the number of people with impeccable-looking resumes failing at something as simple as the FizzBuzz test [codinghorror.com]

PaulRivers10 ( 4110595 ) writes:
Re: ... A job fair can easily test this competenc ( Score: 2 )

The only thing fizzbuzz tests is "have you done fizzbuzz before?" It's a short question filled with every petty trick the author could think to throw in there. If you haven't seen the tricks, they trip you up for no reason related to your actual coding skills. Once you have seen them they're trivial, and again unrelated to real work. Fizzbuzz is best passed by someone aiming to game the interview system. It passes people gaming it and trips up people who spent their time doing real work on the job.

Hognoxious ( 631665 ) writes:
Re: ( Score: 2 )
they trip you up for no reason related to your actual coding skills.

Bullshit!

luis_a_espinal ( 1810296 ) , Sunday April 29, 2018 @07:49AM ( #56522861 ) Homepage
filter the lame code monkeys ( Score: 4 , Informative)
Lame monkey tests select for lame monkeys.

A good programmer first and foremost has a clean mind. Experience suggests puzzle geeks, who excel at contrived tests, are usually sloppy thinkers.

No. Good programmers can trivially knock out any of these so-called lame monkey tests. It's lame code monkeys who can't do it. And I've seen their work. Many night shifts and weekends I've burned trying to fix their shit because they couldn't actually do any of the things behind what you call "lame monkey tests", like:

  • pulling expensive invariant calculations out of loops
  • using for loops to scan a fucking table to pull rows or calculate an aggregate when they could let the database do what it does best with a simple SQL statement
  • systems crashing under actual load because their shitty code was never stress tested (but it worked on my dev box!)
  • again with databases, having to redo their schemas because they were fattened up so much with columns like VALUE1, VALUE2, ... VALUE20 (normalize, you assholes!)
  • chatty remote APIs - because these code monkeys cannot think about the need for bulk operations in increasingly distributed systems
  • storing dates in unsortable strings because the idiots do not know most modern programming languages have a date data type

Oh and the most important, off-by-one looping errors. I see this all the time, the type of thing a good programmer can spot quickly because he or she can do the so-called "lame monkey tests" that involve arrays and sorting.

I've seen the type: "I don't need to do this shit because I have business knowledge and I code for business and IT not google", and then they go and code and fuck it up... and then the rest of us have to go clean up their shit at 1AM or on weekends.

If you work as an hourly paid contractor cleaning that crap, it can be quite lucrative. But sooner or later it truly sucks the energy out of your soul.

So yeah, we need more lame monkey tests ... to filter the lame code monkeys.

ShanghaiBill ( 739463 ) writes:
Re: ( Score: 3 )
Someone could Google the problem with the phone then step up and solve the challenge.

If, given a spec, someone can consistently cobble together working code by Googling, then I would love to hire them. That is the most productive way to get things done.

There is nothing wrong with using external references. When I am coding, I have three windows open: an editor, a testing window, and a browser with a Stackoverflow tab open.

Junta ( 36770 ) writes:
Re: ( Score: 2 )

Yeah, when we do tech interviews, we ask questions that we are certain they won't be able to answer, but want to see how they would think about the problem and what questions they ask to get more data and that they don't just fold up and say "well that's not the sort of problem I'd be thinking of" The examples aren't made up or anything, they are generally selection of real problems that were incredibly difficult that our company had faced before, that one may not think at first glance such a position would

bobstreo ( 1320787 ) writes:
Nothing worse ( Score: 2 )

than spending weeks interviewing "good" candidates for an opening, selecting a couple and hiring them as contractors, then finding out they are less than unqualified to do the job they were hired for.

I've seen it a few times, Java "experts", Microsoft "experts" with years of experience on their resumes, but completely useless in coding, deployment or anything other than buying stuff from the break room vending machines.

That being said, I've also seen projects costing hundreds of thousands of dollars, with y

Anonymous Coward , Sunday April 29, 2018 @12:34AM ( #56522157 )
Re:Nothing worse ( Score: 4 , Insightful)

The moment you said "contractors", you lost any sane developer. Keep swimming, it's not a fish.

Anonymous Coward writes:
Re: ( Score: 2 , Informative)

I agree with this. I consider myself to be a good programmer and I would never go into contractor game. I also wonder, how does it take you weeks to interview someone and you still can't figure out if the person can't code? I could probably see that in 15 minutes in a pair coding session.

Also, Oracle, SAP, IBM... I would never buy from them, nor use their products. I have used plenty of IBM products and they suck big time. They make software development 100 times harder than it could be. Their technical supp

Lanthanide ( 4982283 ) writes:
Re: ( Score: 2 )

It's weeks to interview multiple different candidates before deciding on 1 or 2 of them. Not weeks per person.

Anonymous Coward writes:
Re: ( Score: 3 , Insightful)
That being said, I've also seen projects costing hundreds of thousands of dollars, with years of delays from companies like Oracle, Sun, SAP, and many other "vendors"

Software development is a hard thing to do well, despite the general thinking of technology becoming cheaper over time, and like health care the quality of the goods and services received can sometimes be difficult to ascertain. However, people who don't respect developers and the problems we solve are very often the same ones who continually frustrate themselves by trying to cheap out, hiring outsourced contractors, and then tearing their hair out when sub par results are delivered, if anything is even del

pauljlucas ( 529435 ) writes:
Re: ( Score: 2 )

As part of your interview process, don't you have candidates code a solution to a problem on a whiteboard? I've interviewed lots of "good" candidates (on paper) too, but they crashed and burned when challenged with a coding exercise. As a result, we didn't make them job offers.

VeryFluffyBunny ( 5037285 ) writes:
I do the opposite ( Score: 2 )

I'm not a great coder but good enough to get done what clients want done. If I'm not sure or don't think I can do it, I tell them. I think they appreciate the honesty. I don't work in a tech-hub, startups or anything like that so I'm not under the same expectations and pressures that others may be.

Tony Isaac ( 1301187 ) writes:
Bigger building blocks ( Score: 2 )

OK, so yes, I know plenty of programmers who do fake it. But stitching together components isn't "fake" programming.

Back in the day, we had to write our own code to loop through an XML file, looking for nuggets. Now, we just use an XML serializer. Back then, we had to write our own routines to send TCP/IP messages back and forth. Now we just use a library.

I love it! I hated having to make my own bricks before I could build a house. Now, I can get down to the business of writing the functionality I want, ins

Anonymous Coward writes:
Re: ( Score: 2 , Insightful)

But, I suspect you could write the component if you had to. That makes you a very different user of that component than someone who just knows it as a magic black box.

Because of this, you understand the component better and have real knowledge of its strengths and limitations. People blindly using components with only a cursory idea of their internal operation often cause major performance problems. They rarely recognize when it is time to write their own to overcome a limitation (or even that it is possibl

Tony Isaac ( 1301187 ) writes:
Re: ( Score: 2 )

You're right on all counts. A person who knows how the innards work is better than someone who doesn't, all else being equal. Still, today's world is so specialized that no one can possibly learn it all. I've never built a processor, as you have, but I still have been able to build a DNA matching algorithm for a major DNA lab.

I would argue that anyone who can skillfully use off-the-shelf components can also learn how to build components, if they are required to.

thesupraman ( 179040 ) writes:
Ummm. ( Score: 2 )

1, 'Back in the Day' there was no XML; XML was not very long ago.
2, it's a parser; a serialiser is pretty much the opposite (unless this week's fashion has redefined that.. anything is possible).
3, 'Back then' we didn't have TCP stacks...

But, actually I agree with you. I can only assume the author thinks there are lots of fake plumbers because they don't cast their own toilet bowls from raw clay, and use pre-built fittings and pipes! That car mechanics start from raw steel scrap and a file.. And that you need

Tony Isaac ( 1301187 ) writes:
Re: ( Score: 2 )

For the record, XML was invented in 1997, you know, in the last century! https://en.wikipedia.org/wiki/... [wikipedia.org]
And we had a WinSock library in 1992. https://en.wikipedia.org/wiki/... [wikipedia.org]

Yes, I agree with you on the "middle ground." My reaction was to the author's point that "not knowing how to build the components" was the same as being a "fake programmer."

Tony Isaac ( 1301187 ) , Sunday April 29, 2018 @01:46AM ( #56522313 ) Homepage
Re:Bigger building blocks ( Score: 5 , Interesting)

If I'm a plumber, and I don't know anything about the engineering behind the construction of PVC pipe, I can still be a good plumber. If I'm an electrician, and I don't understand the role of a blast furnace in the making of the metal components, I can still be a good electrician.

The analogy fits. If I'm a programmer, and I don't know how to make an LZW compression library, I can still be a good programmer. It's a matter of layers. These days, we specialize. You've got your low-level programmers that make the components, the high level programmers that put together the components, the graphics guys who do HTML/CSS, and the SQL programmers that just know about databases. Every person has their specialty. It's no longer necessary to be a low-level programmer, or jack-of-all-trades, to be "good."

If I don't know the layout of the IP header, I can still write quality networking software, and if I know XSLT, I can still do cool stuff with XML, even if I don't know how to write a good parser.

frank_adrian314159 ( 469671 ) writes:
Re: ( Score: 3 )

I was with you until you said " I can still do cool stuff with XML".

Tony Isaac ( 1301187 ) writes:
Re: ( Score: 2 )

LOL yeah I know it's all JSON now. I've been around long enough to see these fads come and go. Frankly, I don't see a whole lot of advantage of JSON over XML. It's not even that much more compact, about 10% or so. But the point is that the author laments the "bad old days" when you had to create all your own building blocks, and you didn't have a team of specialists. I for one don't want to go back to those days!

careysub ( 976506 ) writes:
Re: ( Score: 3 )

The main advantage of JSON is that it is consistent. XML has attributes and embedded optional stuff within tags. That was derived from the original SGML ancestor, where it was thought to be a convenience for the human authors who were supposed to be making the mark-up manually. Programmatically it is a PITA.

Cederic ( 9623 ) writes:
Re: ( Score: 3 )

I got shit for decrying XML back when it was the trendy thing. I've had people apologise to me months later because they've realized I was right, even though at the time they did their best to fuck over my career because XML was the new big thing and I wasn't fully on board.

XML has its strengths and its place, but fuck me it taught me how little some people really fucking understand shit.

Anonymous Coward writes:
Silicon Valley is Only Part of the Tech Business ( Score: 2 , Informative)

And a rather small part at that, albeit a very visible and vocal one full of the proverbial prima donas. However, much of the rest of the tech business, or at least the people working in it, are not like that. It's small groups of developers working in other industries that would not typically be considered technology. There are software developers working for insurance companies, banks, hedge funds, oil and gas exploration or extraction firms, national defense and many hundreds and thousands of other small

phantomfive ( 622387 ) writes:
bonfire of fakers ( Score: 2 )

This is the reason I wish programming didn't pay so much....the field is better when it's mostly populated by people who enjoy programming.

Njovich ( 553857 ) , Sunday April 29, 2018 @05:35AM ( #56522641 )
Learn to code courses ( Score: 5 , Insightful)
They knew that well-paid programming jobs would also soon turn to smoke and ash, as the proliferation of learn-to-code courses around the world lowered the market value of their skills, and as advances in artificial intelligence allowed for computers to take over more of the mundane work of producing software.

Kind of hard to take this article seriously after it says gibberish like this. I would say most good programmers know that neither learn-to-code courses nor AI are going to make a dent in their income any time soon.

AndyKron ( 937105 ) writes:
Me? No ( Score: 2 )

As a non-programmer Arduino and libraries are my friends

Escogido ( 884359 ) , Sunday April 29, 2018 @06:59AM ( #56522777 )
in the silly cone valley ( Score: 5 , Interesting)

There is a huge shortage of decent programmers. I have personally witnessed more than one phone "interview" that went like "have you done this? what about this? do you know what this is? um, can you start Monday?" (120K-ish salary range)

Partly because there are way more people who got their stupid ideas funded than good coders willing to stain their resume with that. Partly because if you are funded and cannot do all the required coding solo, here's your conundrum:

  • top level hackers can afford to be really picky, so on one hand it's hard to get them interested, and if you could get that, they often want some ownership of the project. the plus side is that they are happy to work for lots of equity if they have faith in the idea, but that can be a huge "if".
  • "good but not exceptional" senior engineers aren't usually going to be super happy, as they often have spouses and children and mortgages, so they'd favor job security over exciting ideas and startup lottery.
  • that leaves you with fresh-out-of-college folks, which are really really a mixed bunch. some are actually already senior level of understanding without the experience, some are absolutely useless, with varying degrees in between, and there's no easy way to tell which is which early.

so the not-so-scrupulous folks realized what's going on, and launched multiple coding boot camp programmes, to essentially trick both the students into believing they can become a coder in a month or two, and the prospective employers into believing that said students are useful. so far it's been working, to a degree, in part because in such companies the coding skill evaluation process is broken. but one can only hide their lack of value-add for so long, even if they do manage to bluff their way into a job.

quonset ( 4839537 ) , Sunday April 29, 2018 @07:20AM ( #56522817 )
Duh! ( Score: 4 , Insightful)

All one had to do was look at the lousy state of software and web sites today to see this is true. It's quite obvious little to no thought is given to how to make something work such that one doesn't have to jump through hoops.

I have many times said the most perfect word processing program ever developed was WordPerfect 5.1 for DOS. One's productivity was astonishing. It just worked.

Now we have the bloated behemoth Word which does its utmost to get in the way of you doing your work. The only way to get it to function is to turn large portions of its "features" off, and even then it still insists on doing something other than what you told it to do.

Then we have the abomination of Windows 10, which is nothing but Clippy on 10X steroids. It is patently obvious the people who program this steaming pile have never heard of simplicity. Who in their right mind would think having to "search" for something is more efficient than going directly to it? I would ask whether these people wander around stores "searching" for what they're looking for, but then I realize that's how their entire life is run. They search for everything online rather than going directly to the source. It's no wonder they complain about not having time to do things. They're always searching.

Web sites are another area where these people have no clue what they're doing. Anything that might be useful is hidden behind dropdown menus, flyouts, popup bubbles and intricately designed mazes of clicks needed to get to where you want to go. When someone clicks on a line of products, they shouldn't be harassed about which part of the product line they want to look at. Give them the information and let the user go where they want.

This rant could go on, but this article explains clearly why we have regressed when it comes to software and web design. Instead of making things simple and easy to use, using the one or two brain cells they have, programmers and web designers let the software do what it wants without considering, should it be done like this?

swb ( 14022 ) , Sunday April 29, 2018 @07:48AM ( #56522857 )
Tech industry churn ( Score: 3 )

The tech industry has a ton of churn -- there's some technological advancement, but there's an awful lot of new products turned out simply to keep customers buying new licenses and paying for upgrades.

This relentless and mostly phony newness means a lot of people have little experience with current products. People fake because they have no choice. The good ones understand the general technologies and problems they're meant to solve and can generally get up to speed quickly, while the bad ones are good at faking it but don't really know what they're doing. Telling the difference from the outside is impossible.

Sales people make it worse, promoting people as "experts" in specific products or implementations because the people have experience with a related product and "they're all the same". This burns out the people with good adaptation skills.

DaMattster ( 977781 ) , Sunday April 29, 2018 @08:39AM ( #56522979 )
Interesting ( Score: 3 )

From the summary, it sounds like a lot of programmers and software engineers are trying to develop the next big thing so that they can literally beg for money from the elite class and one day, hopefully, become a member of it. It's sad how utterly the middle class has been decimated in the United States, such that some of us are willing to beg for scraps from the wealthy. I used to work in IT but I've aged out and am now back in school to learn automotive technology so that I can do something other than being a security guard. Currently, the only work I have been able to find has been in the unglamorous security field.

I am learning some really good new skills in the automotive program that I am in but I hate this one class called "Professionalism in the Shop." I can summarize the entire class in one succinct phrase, "Learn how to appeal to, and communicate with, Mr. Doctor, Mr. Lawyer, or Mr. Wealthy-man." Basically, the class says that we are supposed to kiss their ass so they keep coming back to the Audi, BMW, Mercedes, Volvo, or Cadillac dealership. It feels a lot like begging for money on behalf of my employer (of which very little of it I will see) and nothing like professionalism. Professionalism is doing the job right the first time, not jerking the customer off. Professionalism is not begging for a 5 star review for a few measly extra bucks but doing absolute top quality work. I guess the upshot is that this class will be the easiest 4.0 that I've ever seen.

There is something fundamentally wrong when the wealthy elite have basically demanded that we beg them for every little scrap. I can understand the importance of polite and professional interaction but this prevalent expectation that we bend over backwards for them crosses a line with me. I still suck it up because I have to but it chafes my ass to basically validate the wealthy man.

ElitistWhiner ( 79961 ) writes:
Natural talent... ( Score: 2 )

In the 70's I worked with two people who had a natural talent for computer science algorithms vs. coding syntax. In the 90's while at COLUMBIA I worked with only a couple of true computer scientists out of 30 students. I've met 1 genius who programmed, spoke 13 languages, ex-CIA, wrote SWIFT and spoke fluent assembly complete with animated characters.

According to the Bluff Book, everyone else without natural talent fakes it. In the undiluted definition of computer science, genetics roulette and intellectual d

fahrbot-bot ( 874524 ) writes:
Other book sells better and is more interesting ( Score: 2 )
New Book Describes 'Bluffing' Programmers in Silicon Valley

It's not as interesting as the one about "fluffing" [urbandictionary.com] programmers.

Anonymous Coward writes:
Re: ( Score: 3 , Funny)

Ah yes, the good old 80:20 rule, except it's recursive for programmers.

80% are shit, so you fire them. Soon you realize that 80% of the remaining 20% are also shit, so you fire them too. Eventually you realize that 80% of the 4% remaining after sacking the 80% of the 20% are also shit, so you fire them!

...

The cycle repeats until there's just one programmer left: the person telling the joke.

---

tl;dr: All programmers suck. Just ask them to review their own code from more than 3 years ago: they'll tell you that

luis_a_espinal ( 1810296 ) writes:
Re: ( Score: 3 )
Who gives a fuck about lines? If someone gave me JavaScript, and someone gave me minified JavaScript, which one would I want to maintain?

I don't care about your line savings, less isn't always better.

Because the world of programming is not centered on JavaScript, and reduction of lines is not the same as minification. If the first thing that came to your mind was minified JavaScript when you saw this conversation, you are certainly not the type of programmer I would want to inherit code from.

See, there's a lot of shit out there that is overtly redundant and unnecessarily complex. This is specially true when copy-n-paste code monkeys are left to their own devices for whom code formatting seems

Anonymous Coward , Sunday April 29, 2018 @01:17AM ( #56522241 )
Re:Most "Professional programmers" are useless. ( Score: 4 , Interesting)

I have a theory that 10% of people are good at what they do. It doesn't really matter what they do, they will still be good at it, because of their nature. These are the people who invent new things, who fix things that others didn't even see as broken and who automate routine tasks or simply question and erase tasks that are not necessary. If you have a software team that contains 5 of these, you can easily beat a team of 100 average people, not only in cost but also in schedule, quality and features. In theory they are worth 20 times more than average employees, but in practice they are usually paid the same amount of money, with few exceptions.

80% of people are the average. They can follow instructions and they can get the work done, but they don't see that something is broken and needs fixing if it works the way it has always worked. While it might seem so, these people are not worthless. There are a lot of tasks that these people are happily doing which the 10% don't want to do. E.g. simple maintenance work, implementing simple features, automating test cases etc. But if you let the top 10% lead the project, you most likely won't need that many of these people. Most of the work done by these people is caused by themselves, by writing bad software due to the lack of a good leader.

10% are just causing damage. I'm not talking about terrorists and criminals. I have seen software developers who have tried (their best?), but still end up causing just damage to the code that someone else needs to fix, costing much more than their own wasted time. You really must use code reviews if you don't know your team members, to find these people early.

Anonymous Coward , Sunday April 29, 2018 @01:40AM ( #56522299 )
Re:Most "Professional programmers" are useless. ( Score: 5 , Funny)
to find these people early

and promote them to management where they belong.

raymorris ( 2726007 ) , Sunday April 29, 2018 @01:51AM ( #56522329 ) Journal
Seems about right. Constantly learning, studying ( Score: 5 , Insightful)

That seems about right to me.

I have a lot of weaknesses. My people skills suck, I'm scrawny, I'm arrogant. I'm also generally known as a really good programmer and people ask me how/why I'm so much better at my job than everyone else in the room. (There are a lot of things I'm not good at, but I'm good at my job, so say everyone I've worked with.)

I think one major difference is that I'm always studying, intentionally working to improve, every day. I've been doing that for twenty years.

I've worked with people who have "20 years of experience"; they've done the same job, in the same way, for 20 years. Their first month on the job they read the first half of "Databases for Dummies" and that's what they've been doing for 20 years. They never read the second half, and use Oracle database 18.0 exactly the same way they used Oracle Database 2.0 - and it was wrong 20 years ago too. So it's not just experience, it's 20 years of learning, getting better, every day. That's 7,305 days of improvement.

gbjbaanb ( 229885 ) writes:
Re: ( Score: 2 )

I think I can guarantee that they are a lot better at their jobs than you think, and that you are a lot worse at your job than you think too.

m00sh ( 2538182 ) writes:
Re: ( Score: 2 )
That seems about right to me.

I have a lot of weaknesses. My people skills suck, I'm scrawny, I'm arrogant. I'm also generally known as a really good programmer and people ask me how/why I'm so much better at my job than everyone else in the room. (There are a lot of things I'm not good at, but I'm good at my job, so say everyone I've worked with.)

I think one major difference is that I'm always studying, intentionally working to improve, every day. I've been doing that for twenty years.

I've worked with people who have "20 years of experience"; they've done the same job, in the same way, for 20 years. Their first month on the job they read the first half of "Databases for Dummies" and that's what they've been doing for 20 years. They never read the second half, and use Oracle database 18.0 exactly the same way they used Oracle Database 2.0 - and it was wrong 20 years ago too. So it's not just experience, it's 20 years of learning, getting better, every day. That's 7,305 days of improvement.

If you take this attitude towards other people, people will not ask you for help. At the same time, you'll also not be able to ask for their help.

You're not interviewing your peers. They are already in your team. You should be working together.

I've seen superstar programmers suck the life out of project by over-complicating things and not working together with others.

raymorris ( 2726007 ) writes:
Which part? Learning makes you better? ( Score: 2 )

You quoted a lot. Is there one part in particular you have in mind? The thesis of my post is of course "constant learning, on purpose, makes you better".

> you take this attitude towards other people, people will not ask your for help. At the same time, you'll be also be not able to ask for their help.

Are you saying that trying to learn means you can't ask for help, or was there something more specific? For me, trying to learn means asking.

Trying to learn, I've had the opportunity to ask for help from peop

phantomfive ( 622387 ) writes:
Re: ( Score: 2 )

The difference between a smart programmer who succeeds and a stupid programmer who drops out is that the smart programmer doesn't give up.

complete loony ( 663508 ) writes:
Re: ( Score: 2 )

In other words;

What is often mistaken for 20 years' experience is just 1 year's experience repeated 20 times.

serviscope_minor ( 664417 ) writes:
Re: ( Score: 2 )

10% are just causing damage. I'm not talking about terrorists and criminals.

Terrorists and criminals have nothing on those guys. I know a guy who is one of those. Worse, he's both motivated and enthusiastic. He also likes to offer help and advice to other people who don't know the systems well.

asifyoucare ( 302582 ) , Sunday April 29, 2018 @08:49AM ( #56522999 )
Re:Most "Professional programmers" are useless. ( Score: 5 , Insightful)

Good point. To quote Kurt von Hammerstein-Equord:

"I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and diligent -- their place is the General Staff. The next lot are stupid and lazy -- they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the intellectual clarity and the composure necessary for difficult decisions. One must beware of anyone who is stupid and diligent -- he must not be entrusted with any responsibility because he will always cause only mischief."

gweihir ( 88907 ) writes:
Re: ( Score: 2 )

Oops. Good thing I never did anything military. I am definitely in the "clever and lazy" class.

apoc.famine ( 621563 ) writes:
Re: ( Score: 2 )

I was just thinking the same thing. One of my passions in life is coming up with clever ways to do less work while getting more accomplished.

Software_Dev_GL ( 5377065 ) writes:
Re: ( Score: 2 )

It's called the Pareto Distribution [wikipedia.org]. The number of competent people (people doing most of the work) in any given organization goes like the square root of the number of employees.

gweihir ( 88907 ) writes:
Re: ( Score: 2 )

Matches my observations. 10-15% are smart, can think independently, can verify claims by others and can identify and use rules in whatever they do. They are not fooled by things "everybody knows" and see standard approaches as first approximations that, of course, need to be verified to work. They do not trust anything blindly, but can identify whether something actually works well, and build up a toolbox of such things.

The problem is that in coding, you do not have a "(mass) production step", and that is the

geoskd ( 321194 ) writes:
Re: ( Score: 2 )

In basic concept I agree with your theory; it fits my own anecdotal experience well, but I find that your numbers are off. The top bracket is actually closer to 20%. The reason it seems so low is that a large portion of the highly competent people are running one-programmer shows, so they have no co-workers to appreciate their knowledge and skill. The places they work do a very good job of keeping them well paid and happy (assuming they don't own the company outright), so they rarely if ever switch jobs.

The

Tablizer ( 95088 ) , Sunday April 29, 2018 @01:54AM ( #56522331 ) Journal
Re:Most "Professional programmers" are useless. ( Score: 4 , Interesting)
at least 70, probably 80, maybe even 90 percent of professional programmers should just fuck off and do something else as they are useless at programming.

Programming is statistically a dead-end job. Why should anyone hone a dead-end skill that you won't be able to use for long? For whatever reason, the industry doesn't want old programmers.

Otherwise, I'd suggest longer training and education before they enter the industry. But that just narrows an already narrow window of use.

Cesare Ferrari ( 667973 ) writes:
Re: ( Score: 2 )

Well, it does rather depend on which industry you work in - I've managed to find interesting programming jobs for 25 years, and there's no end in sight of interesting projects and new avenues to explore. However, this isn't for everyone, and if you have good personal skills then moving from programming into some technical management role is a very worthwhile route, and I know plenty of people who have found very interesting work in that direction.

gweihir ( 88907 ) writes:
Re: ( Score: 3 , Insightful)

I think that is a misinterpretation of the facts. Old(er) coders who are incompetent are just much more obvious, and usually are also limited to technologies that have gotten old as well. Hence the 90% of old coders who actually can't hack it, and never really could, get sacked at some point and cannot find a new job with their limited and outdated skills. The 10% who are good at it do not need to worry, though. Who worries there is their employers, when these people approach retirement age.

gweihir ( 88907 ) writes:
Re: ( Score: 2 )

My experience as an IT Security Consultant (I also do some coding, but only at full rates) confirms that. Most are basically helpless and many have negative productivity, because people with a clue need to clean up after them. "Learn to code"? We have far too many coders already.

tomhath ( 637240 ) writes:
Re: ( Score: 2 )

You can't bluff your way through writing software, but many, many people have bluffed their way into a job and then tried to learn it from the people who are already there. In a marginally functional organization those incompetents are let go pretty quickly, but sometimes they stick around for months or years.

Apparently the author of this book is one of those, probably hired and fired several times before deciding to go back to his liberal arts roots and write a book.

DaMattster ( 977781 ) writes:
Re: ( Score: 2 )

There are some mechanics that bluff their way through an automotive repair. It's the same damn thing

gweihir ( 88907 ) writes:
Re: ( Score: 2 )

I think you can and this is by far not the first piece describing that. Here is a classic: https://blog.codinghorror.com/... [codinghorror.com]
Yet these people somehow manage to actually have "experience" because they worked in a role they are completely unqualified to fill.

phantomfive ( 622387 ) writes:
Re: ( Score: 2 )
Fiddling with JavaScript libraries to get a fancy dancy interface that makes PHB's happy is a sought-after skill, for good or bad. Now that we rely more on half-ass libraries, much of "programming" is fiddling with dark-grey boxes until they work good enough.

This drives me crazy, but I'm consoled somewhat by the fact that it will all be thrown out in five years anyway.

[Apr 26, 2018] How to create a Bash completion script

Notable quotes:
"... now, tomorrow, never ..."
Apr 26, 2018 | opensource.com

Bash completion is a functionality through which Bash helps users type their commands more quickly and easily. It does this by presenting possible options when users press the Tab key while typing a command.

$ git <tab><tab>
git                 git-receive-pack    git-upload-archive
gitk                git-shell           git-upload-pack
$ git-s<tab>
$ git-shell

How it works

The completion script is code that uses the builtin Bash command complete to define which completion suggestions can be displayed for a given executable. The nature of the completion options varies, from simple static ones to highly sophisticated.

Why bother?

This functionality helps users by saving them typing when a name can be auto-completed, by showing them which continuations of their command are available, and by preventing the errors that come with mistyped names.
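If you are curious which completion specifications are already registered in your current shell, the complete builtin itself can show them (a quick illustration of my own; the output will differ per system):

$ complete -p git     # print the compspec registered for git, if any
$ complete -p | wc -l # count how many compspecs the current shell has loaded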

Hands-on

Here's what we will do in this tutorial:

  • create a dummy executable script called dothis
  • create a Bash completion script for it and register it, refining the completions step by step

We will first create a dummy executable script called dothis. All it does is execute the command from the user's history whose number is passed to it as an argument. For example, the following command will simply execute the ls -a command, given that it exists in history with number 235:

dothis 235

Then we will create a Bash completion script that will display commands along with their number from the user's history, and we will "bind" it to the dothis executable.

$ dothis <tab><tab>
215 ls
216 ls -la
217 cd ~
218 man history
219 git status
220 history | cut -c 8-

You can see a gif demonstrating the functionality at this tutorial's code repository on GitHub .

Let the show begin.

Creating the executable script

Create a file named dothis in your working directory and add the following code:

if [ -z "$1" ] ; then
echo "No command number passed"
exit 2
fi

exists =$ ( fc -l -1000 | grep ^ $1 -- 2 >/ dev / null )

if [ -n " $exists " ] ; then
fc -s -- "$1"
else
echo "Command with number $1 was not found in recent history"
exit 2
fi

Notes:

  • fc -l -1000 lists the last 1000 commands from history, each prefixed with its number
  • grep ^$1 checks whether a command with the number passed as the first argument exists in that list
  • fc -s -- "$1" re-executes the command with the given history number

Make the script executable with:

chmod +x ./dothis

We will execute this script many times in this tutorial, so I suggest you place it in a folder that is included in your path so that we can access it from anywhere by typing dothis .

I installed it in my home bin folder using:

install ./dothis ~/bin/dothis

You can do the same given that you have a ~/bin folder and it is included in your PATH variable.
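If ~/bin is not already part of your PATH, a minimal sketch of the setup looks like this (adjust the paths to your own environment):

mkdir -p ~/bin
install ./dothis ~/bin/dothis
export PATH="$HOME/bin:$PATH"   # add this line to ~/.bashrc to make it permanent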

Check to see if it's working:

dothis

You should see this:

$ dothis
No command number passed

Done.

Creating the completion script

Create a file named dothis-completion.bash . From now on, we will refer to this file with the term completion script .

Once we add some code to it, we will source it to allow the completion to take effect. We must source this file every single time we change something in it .

Later in this tutorial, we will discuss our options for registering this script whenever a Bash shell opens.

Static completion

Suppose that the dothis program supported a list of commands, for example:

  • now
  • tomorrow
  • never

Let's use the complete command to register this list for completion. To use the proper terminology, we say we use the complete command to define a completion specification ( compspec ) for our program.

Add this to the completion script.

#!/usr/bin/env bash
complete -W "now tomorrow never" dothis

Here's what we specified with the complete command above: the -W flag provides a wordlist (now, tomorrow, never) whose words will be used as the completions, and dothis is the name of the command the compspec applies to.

Source the file:

source ./dothis-completion.bash

Now try pressing Tab twice in the command line, as shown below:

$ dothis <tab><tab>
never now tomorrow

Try again after typing n:

$ dothis n<tab><tab>
never now

Magic! The completion options are automatically filtered to match only those starting with n.

Note: The options are not displayed in the order that we defined them in the word list; they are automatically sorted.

There are many other options to be used instead of the -W that we used in this section. Most produce completions in a fixed manner, meaning that we don't intervene dynamically to filter their output.

For example, if we want to have directory names as completion words for the dothis program, we would change the complete command to the following:

complete -A directory dothis

Pressing Tab after the dothis program would get us a list of the directories in the current directory from which we execute the script:

$ dothis <tab><tab>
dir1/ dir2/ dir3/

Find the complete list of the available flags in the Bash Reference Manual.
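As a taste of those flags, here are a few fixed-manner alternatives that could be bound to dothis instead (hypothetical variations, not part of this tutorial's flow):

complete -A file dothis   # complete with filenames
complete -A user dothis   # complete with system user names
complete -c dothis        # complete with names of executable commands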

Dynamic completion

We will be producing the completions of the dothis executable with the following logic: whenever the user requests completion, we will display the last 50 commands of their history, numbered, filtered against whatever has already been typed.

Let's start by defining a function that will execute each time the user requests completion on a dothis command. Change the completion script to this:

#!/usr/bin/env bash
_dothis_completions()
{
  COMPREPLY+=("now")
  COMPREPLY+=("tomorrow")
  COMPREPLY+=("never")
}

complete -F _dothis_completions dothis

Note the following:

  • we used the -F flag to register the function _dothis_completions as the completion function for the dothis command
  • each time completion is requested for dothis, Bash calls that function, and whatever it places in the COMPREPLY array is what gets displayed as suggestions

Now source the script and go for completion:

$ dothis <tab><tab>
never now tomorrow

Perfect. We produce the same completions as in the previous section with the word list. Or not? Try this:

$ dothis nev<tab><tab>
never now tomorrow

As you can see, even though we type nev and then request for completion, the available options are always the same and nothing gets completed automatically. Why is this happening?

Enter compgen : a builtin command that generates completions supporting most of the options of the complete command (ex. -W for word list, -d for directories) and filtering them based on what the user has already typed.

Don't worry if you feel confused; everything will become clear later.

Type the following in the console to better understand what compgen does:

$ compgen -W "now tomorrow never"
now
tomorrow
never
$ compgen -W "now tomorrow never" n
now
never
$ compgen -W "now tomorrow never" t
tomorrow

So now we can use it, but we need to find a way to know what has been typed after the dothis command. We already have the way: the Bash completion facilities provide Bash variables related to the completion taking place. Here are the more important ones:

  • COMP_WORDS: an array of all the words typed on the current command line
  • COMP_CWORD: the index of the word the cursor is on
  • COMP_LINE: the current command line

To access the word just after the dothis word, we can use the value of COMP_WORDS[1]
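If you want to inspect these variables with your own eyes, a throwaway debugging compspec (a sketch of mine, not part of the article's code) can log them to a file while you press Tab:

_debug_completions()
{
  # append the completion context to a log so it can be inspected from another terminal
  printf 'COMP_WORDS=%s | COMP_CWORD=%s | COMP_LINE=%s\n' \
    "${COMP_WORDS[*]}" "$COMP_CWORD" "$COMP_LINE" >> /tmp/completion.log
}
complete -F _debug_completions dothis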

Change the completion script again:

#!/usr/bin/env bash
_dothis_completions()
{
  COMPREPLY=($(compgen -W "now tomorrow never" "${COMP_WORDS[1]}"))
}

complete -F _dothis_completions dothis

Source, and there you are:

$ dothis <tab><tab>
never now tomorrow
$ dothis n<tab><tab>
never now

Now, instead of the words now, tomorrow, never , we would like to see actual numbers from the command history.

The fc -l command followed by a negative number -n displays the last n commands. So we will use:

fc -l -50

which lists the last 50 executed commands along with their numbers. The only manipulation we need to do is replace tabs with spaces to display them properly from the completion mechanism. sed to the rescue.
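To see the raw material, you can run the pipeline by itself; fc -l separates the number from the command with a tab, which sed then strips (sample output, your history numbers and commands will differ):

$ fc -l -3 | sed 's/\t//'
665 cat ~/.bashrc
666 clear
667 install ./dothis ~/bin/dothis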

Change the completion script as follows:

#!/usr/bin/env bash
_dothis_completions()
{
  COMPREPLY=($(compgen -W "$(fc -l -50 | sed 's/\t//')" -- "${COMP_WORDS[1]}"))
}

complete -F _dothis_completions dothis

Source and test in the console:

$ dothis <tab><tab>
632 source dothis-completion.bash    649 source dothis-completion.bash    666 cat ~/.bash_profile
633 clear                            650 clear                            667 cat ~/.bashrc
634 source dothis-completion.bash    651 source dothis-completion.bash    668 clear
635 source dothis-completion.bash    652 source dothis-completion.bash    669 install ./dothis ~/bin/dothis
636 clear                            653 source dothis-completion.bash    670 dothis
637 source dothis-completion.bash    654 clear                            671 dothis 6546545646
638 clear                            655 dothis 654                       672 clear
639 source dothis-completion.bash    656 dothis 631                       673 dothis
640 source dothis-completion.bash    657 dothis 150                       674 dothis 651
641 source dothis-completion.bash    658 dothis                           675 source dothis-completion.bash
642 clear                            659 clear                            676 dothis 651
643 dothis 623 ls -la                660 dothis                           677 dothis 659
644 clear                            661 install ./dothis ~/bin/dothis    678 clear
645 source dothis-completion.bash    662 dothis                           679 dothis 665
646 clear                            663 install ./dothis ~/bin/dothis    680 clear
647 source dothis-completion.bash    664 dothis                           681 clear
648 clear                            665 cat ~/.bashrc

Not bad.

We do have a problem, though. Try typing a number as you see it in your completion list and then press the Tab key again.

$ dothis 623 <tab>
$ dothis 623 ls 623 ls -la
...
$ dothis 623 ls 623 ls 623 ls 623 ls 623 ls -la

This is happening because in our completion script, we used the ${COMP_WORDS[1]} to always check the first typed word after the dothis command (the number 623 in the above snippet). Hence the completion continues to suggest the same completion again and again when the Tab key is pressed.

To fix this, we will not allow any kind of completion to take place if at least one argument has already been typed. We will add a condition in our function that checks the size of the aforementioned COMP_WORDS array.

#!/usr/bin/env bash
_dothis_completions()
{
  if [ "${#COMP_WORDS[@]}" != "2" ]; then
    return
  fi

  COMPREPLY=($(compgen -W "$(fc -l -50 | sed 's/\t//')" -- "${COMP_WORDS[1]}"))
}

complete -F _dothis_completions dothis

Source and retry.

$ dothis 623 <tab>
$ dothis 623 ls -la <tab> # SUCCESS: nothing happens here

There is another thing we don't like, though. We do want to display the numbers along with the corresponding commands to help users decide which one is desired, but when there is only one completion suggestion and it gets automatically picked by the completion mechanism, we shouldn't append the command literal too.

In other words, our dothis executable accepts only a number, and we haven't added any functionality to check or expect other arguments. When our completion function gives only one result, we should trim the command literal and respond only with the command number.

To accomplish this, we will keep the response of the compgen command in an array variable, and if its size is 1, we will trim the one and only element to keep just the number. Otherwise, we'll leave the array as is.

Change the completion script to this:

#!/usr/bin/env bash
_dothis_completions()
{
  if [ "${#COMP_WORDS[@]}" != "2" ]; then
    return
  fi

  # keep the suggestions in a local variable
  local suggestions=($(compgen -W "$(fc -l -50 | sed 's/\t/ /')" -- "${COMP_WORDS[1]}"))

  if [ "${#suggestions[@]}" == "1" ]; then
    # if there's only one match, we remove the command literal
    # to proceed with the automatic completion of the number
    local number=$(echo ${suggestions[0]/%\ */})
    COMPREPLY=("$number")
  else
    # more than one suggestion resolved,
    # respond with the suggestions intact
    COMPREPLY=("${suggestions[@]}")
  fi
}

complete -F _dothis_completions dothis

Registering the completion script

If you want to enable the completion just for you on your machine, all you have to do is add a line in your .bashrc file sourcing the script:

source <path-to-your-script>/dothis-completion.bash

If you want to enable the completion for all users, you can just copy the script under /etc/bash_completion.d/ and it will automatically be loaded by Bash.
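For example (the target path assumes a distribution that ships the bash-completion package):

sudo cp dothis-completion.bash /etc/bash_completion.d/dothis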

Fine-tuning the completion script

Here are some extra steps for better results:

Displaying each entry in a new line

In the Bash completion script I was working on, I too had to present suggestions consisting of two parts. I wanted to display the first part in the default color and the second part in gray to distinguish it as help text. In this tutorial's example, it would be nice to present the numbers in the default color and the command literal in a less fancy one.

Unfortunately, this is not possible, at least for now, because the completions are displayed as plain text and color directives are not processed (for example: \e[34mBlue ).

What we can do to improve the user experience (or not) is to display each entry in a new line. This solution is not that obvious since we can't just append a new line character in each COMPREPLY entry. We will follow a rather hackish method and pad suggestion literals to a width that fills the terminal.

Enter printf. If you want to display each suggestion on its own line, change the completion script to the following:

#!/usr/bin/env bash
_dothis_completions()
{
  if [ "${#COMP_WORDS[@]}" != "2" ]; then
    return
  fi

  local IFS=$'\n'
  local suggestions=($(compgen -W "$(fc -l -50 | sed 's/\t//')" -- "${COMP_WORDS[1]}"))

  if [ "${#suggestions[@]}" == "1" ]; then
    local number="${suggestions[0]/%\ */}"
    COMPREPLY=("$number")
  else
    for i in "${!suggestions[@]}"; do
      suggestions[$i]="$(printf '%*s' "-$COLUMNS" "${suggestions[$i]}")"
    done

    COMPREPLY=("${suggestions[@]}")
  fi
}

complete -F _dothis_completions dothis

Source and test:

dothis <tab><tab>
...
499 source dothis-completion.bash
500 clear
...
503 dothis 500

Customizable behavior

In our case, we hard-coded the completion to display the last 50 commands. This is not a good practice; we should first respect what each user might prefer. If he/she hasn't expressed any preference, we should default to 50.

To accomplish that, we will check if an environment variable DOTHIS_COMPLETION_COMMANDS_NUMBER has been set.

Change the completion script one last time:

#!/usr/bin/env bash
_dothis_completions()
{
  if [ "${#COMP_WORDS[@]}" != "2" ]; then
    return
  fi

  local commands_number=${DOTHIS_COMPLETION_COMMANDS_NUMBER:-50}
  local IFS=$'\n'
  local suggestions=($(compgen -W "$(fc -l -$commands_number | sed 's/\t//')" -- "${COMP_WORDS[1]}"))

  if [ "${#suggestions[@]}" == "1" ]; then
    local number="${suggestions[0]/%\ */}"
    COMPREPLY=("$number")
  else
    for i in "${!suggestions[@]}"; do
      suggestions[$i]="$(printf '%*s' "-$COLUMNS" "${suggestions[$i]}")"
    done

    COMPREPLY=("${suggestions[@]}")
  fi
}

complete -F _dothis_completions dothis

Source and test:

$ export DOTHIS_COMPLETION_COMMANDS_NUMBER=5
$ dothis <tab><tab>
505 clear
506 source ./dothis-completion.bash
507 dothis clear
508 clear
509 export DOTHIS_COMPLETION_COMMANDS_NUMBER=5

Code and comments

You can find the code of this tutorial on GitHub .

For feedback, comments, typos, etc., please open an issue in the repository.

Lazarus Lazaridis - I am an open source enthusiast and I like helping developers with tutorials and tools. I usually code in Ruby, especially when it's on Rails, but I also speak Java, Go, bash & C#. I have studied CS at Athens University of Economics and Business and I live in Athens, Greece. My nickname is iridakos and I publish tech-related posts on my personal blog iridakos.com.

[Apr 26, 2018] Bash Range How to iterate over sequences generated on the shell Linux Hint by Fahmida Yesmin

Notable quotes:
"... When only upper limit is used then the number will start from 1 and increment by one in each step. ..."
Apr 26, 2018 | linuxhint.com

You can iterate over a sequence of numbers in bash in two ways: by using the seq command, or by specifying a range in a for loop. By default, seq starts from one, increments by one at each step, and prints each number on its own line up to the upper limit. If the sequence starts from the upper limit, it decrements by one at each step. Normally all numbers are interpreted as floating point, but if the sequence starts from an integer then a list of decimal integers is printed. If the seq command executes successfully it returns 0; otherwise it returns a non-zero number. You can also iterate over a sequence of numbers by using a for loop with a range. Both the seq command and the for loop with a range are shown in this tutorial with examples.

The options of seq command:

The examples below use the following seq options: -w (pad numbers with leading zeros so that all have equal width), -s (use the given string as the separator between numbers) and -f (print numbers using a printf-style format string).

Examples of seq command:

You can apply the seq command in three ways: with only an upper limit; with a lower and an upper limit; or with a lower limit, an increment (or decrement) step, and an upper limit. Different uses of the seq command with options are shown in the following examples.

Example-1: seq command without option

When only the upper limit is used, the numbers start from 1 and increment by one at each step. The following command will print the numbers from 1 to 4.

$ seq 4

When two values are used with the seq command, the first value is used as the starting number and the second as the ending number. The following command will print the numbers from 7 to 15.

$ seq 7 15

When three values are used with the seq command, the second value is used as the increment (or decrement) applied at each step. For the following command, the starting number is 10, the ending number is 1, and each step decrements by 2.

$ seq 10 -2 1
Example-2: seq with -w option

The following command will print the numbers from 1 to 10, padding the single-digit numbers 1 to 9 with a leading zero so that all have equal width.

$ seq -w 01 10
Example-3: seq with -s option

The following command uses "-" as the separator between the sequence numbers:

$ seq -s - 8
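
With GNU seq, this joins the numbers with the separator, producing:

1-2-3-4-5-6-7-8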

Example-4: seq with -f option

The following command will print 10 date-like values with the sequence number starting from 1. Here the "%g" conversion is used to combine the sequence number with another string value.

$ seq -f "%g/04/2018" 10

The following command generates a sequence of floating-point numbers using "%f". Here the numbers start from 3 and increment by 0.8 at each step, and the last number is less than or equal to 6.

$ seq -f "%f" 3 0.8 6

Example-5: Write the sequence in a file

If you want to save the sequence of numbers to a file without printing it to the console, you can use the following commands. The first command writes the numbers to a file named "seq.txt". The numbers are generated from 5 to 20, incrementing by 10 at each step. The second command is used to view the content of the "seq.txt" file.

seq 5 10 20 > seq.txt
cat seq.txt

Example-6: Using seq in for loop

Suppose you want to create files named fn1 to fn10 using a for loop with seq. Create a file named "sq1.bash" and add the following code. The for loop will iterate 10 times using the seq command and create 10 files named fn1, fn2, fn3 ... fn10.

#!/bin/bash
for i in $(seq 10); do touch "fn$i"; done

Run the following commands to execute the bash file and check whether the files were created.

bash sq1.bash
ls

Examples of for loop with range:

Example-7: For loop with range

The alternative to the seq command is a range. You can use a range in a for loop to generate a sequence of numbers just like seq. Write the following code in a bash file named "sq2.bash". The loop will iterate 5 times and print the square of each number at each step.

#!/bin/bash
for n in {1..5}; do
    ((result=n*n))
    echo "$n square = $result"
done

Run the command to execute the script.

bash sq2.bash

Example-8: For loop with range and increment value

By default, the number increments by one at each step of a range, just like seq. You can also change the increment value of a range. Write the following code in a bash file named "sq3.bash". The for loop in the script will iterate 5 times, incrementing by 2 at each step, and print all odd numbers between 1 and 10.

#!/bin/bash
echo "all odd numbers from 1 to 10 are"
for i in {1..10..2}; do echo $i; done

Run the command to execute the script.

bash sq3.bash

If you want to work with sequences of numbers, you can use any of the options shown in this tutorial. After completing it, you will be able to use the seq command and for loops with ranges more efficiently in your bash scripts.

[Mar 19, 2018] Gogo - Create Shortcuts to Long and Complicated Paths in Linux

I do not see any bright ideas here. Using aliases is pretty much equivalent to this.
Mar 19, 2018 | www.tecmint.com

For example, if you have a directory ~/Documents/Phone-Backup/Linux-Docs/Ubuntu/ , using gogo , you can create an alias (a shortcut name), for instance Ubuntu , to access it without typing the whole path anymore. No matter your current working directory, you can move into ~/Documents/Phone-Backup/Linux-Docs/Ubuntu/ by simply using the alias Ubuntu .

Read Also : bd – Quickly Go Back to a Parent Directory Instead of Typing "cd ../../.." Redundantly

In addition, it also allows you to create aliases for connecting directly into directories on remote Linux servers.

How to Install Gogo in Linux Systems

To install Gogo , first clone the gogo repository from Github and then copy the gogo.py to any directory in your PATH environmental variable (if you already have the ~/bin/ directory, you can place it here, otherwise create it).

$ git clone https://github.com/mgoral/gogo.git
$ cd gogo/
$ mkdir -p ~/bin        #run this if you do not have ~/bin directory
$ cp gogo.py ~/bin/

... ... ...

To start using gogo , you need to log out and log back in. Gogo stores its configuration in the ~/.config/gogo/gogo.conf file (which should be auto-created if it doesn't exist) and has the following syntax.
# Comments are lines that start from '#' character.
default = ~/something
alias = /desired/path
alias2 = /desired/path with space
alias3 = "/this/also/works"
zażółć = "unicode/is/also/supported/zażółć gęślą jaźń"

If you run gogo without any arguments, it will go to the directory specified in default ; this alias is always available, even if it's not in the configuration file, and points to the $HOME directory.

To display the current aliases, use the -l switch:
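
$ gogo -l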

[Jan 14, 2018] Linux Filesystem Events with inotify by Charles Fisher

Notable quotes:
"... Lukas Jelinek is the author of the incron package that allows users to specify tables of inotify events that are executed by the master incrond process. Despite the reference to "cron", the package does not schedule events at regular intervals -- it is a tool for filesystem events, and the cron reference is slightly misleading. ..."
"... The incron package is available from EPEL ..."
Jan 08, 2018 | www.linuxjournal.com

Triggering scripts with incron and systemd.

It is, at times, important to know when things change in the Linux OS. The uses to which systems are placed often include high-priority data that must be processed as soon as it is seen. The conventional method of finding and processing new file data is to poll for it, usually with cron. This is inefficient, and it can tax performance unreasonably if too many polling events are forked too often.

Linux has an efficient method for alerting user-space processes to changes impacting files of interest. The inotify Linux system calls were first discussed here in Linux Journal in a 2005 article by Robert Love who primarily addressed the behavior of the new features from the perspective of C.

However, there also are stable shell-level utilities and new classes of monitoring dæmons for registering filesystem watches and reporting events. Linux installations using systemd also can access basic inotify functionality with path units. The inotify interface does have limitations -- it can't monitor remote, network-mounted filesystems (that is, NFS); it does not report the userid involved in the event; it does not work with /proc or other pseudo-filesystems; and mmap() operations do not trigger it, among other concerns. Even with these limitations, it is a tremendously useful feature.

This article completes the work begun by Love and gives everyone who can write a Bourne shell script or set a crontab the ability to react to filesystem changes.

The inotifywait Utility

Working under Oracle Linux 7 (or similar versions of Red Hat/CentOS/Scientific Linux), the inotify shell tools are not installed by default, but you can load them with yum:

 # yum install inotify-tools
Loaded plugins: langpacks, ulninfo
ol7_UEKR4                                      | 1.2 kB   00:00
ol7_latest                                     | 1.4 kB   00:00
Resolving Dependencies
--> Running transaction check
---> Package inotify-tools.x86_64 0:3.14-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================
Package         Arch       Version        Repository     Size
==============================================================
Installing:
inotify-tools   x86_64     3.14-8.el7     ol7_latest     50 k

Transaction Summary
==============================================================
Install  1 Package

Total download size: 50 k
Installed size: 111 k
Is this ok [y/d/N]: y
Downloading packages:
inotify-tools-3.14-8.el7.x86_64.rpm               |  50 kB   00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : inotify-tools-3.14-8.el7.x86_64                 1/1
  Verifying  : inotify-tools-3.14-8.el7.x86_64                 1/1

Installed:
  inotify-tools.x86_64 0:3.14-8.el7

Complete!

The package will include two utilities (inotifywait and inotifywatch), documentation and a number of libraries. The inotifywait program is of primary interest.

Some derivatives of Red Hat 7 may not include inotify in their base repositories. If you find it missing, you can obtain it from Fedora's EPEL repository , either by downloading the inotify RPM for manual installation or adding the EPEL repository to yum.

Any user on the system who can launch a shell may register watches -- no special privileges are required to use the interface. This example watches the /tmp directory:

$ inotifywait -m /tmp
Setting up watches.
Watches established.

If another session on the system performs a few operations on the files in /tmp:

$ touch /tmp/hello
$ cp /etc/passwd /tmp
$ rm /tmp/passwd
$ touch /tmp/goodbye
$ rm /tmp/hello /tmp/goodbye

those changes are immediately visible to the user running inotifywait:

/tmp/ CREATE hello
/tmp/ OPEN hello
/tmp/ ATTRIB hello
/tmp/ CLOSE_WRITE,CLOSE hello
/tmp/ CREATE passwd
/tmp/ OPEN passwd
/tmp/ MODIFY passwd
/tmp/ CLOSE_WRITE,CLOSE passwd
/tmp/ DELETE passwd
/tmp/ CREATE goodbye
/tmp/ OPEN goodbye
/tmp/ ATTRIB goodbye
/tmp/ CLOSE_WRITE,CLOSE goodbye
/tmp/ DELETE hello
/tmp/ DELETE goodbye

A few relevant sections of the manual page explain what is happening:

$ man inotifywait | col -b | sed -n '/diagnostic/,/helpful/p'
  inotifywait will output diagnostic information on standard error and
  event information on standard output. The event output can be config-
  ured, but by default it consists of lines of the following form:

  watched_filename EVENT_NAMES event_filename


  watched_filename
    is the name of the file on which the event occurred. If the
    file is a directory, a trailing slash is output.

  EVENT_NAMES
    are the names of the inotify events which occurred, separated by
    commas.

  event_filename
    is output only when the event occurred on a directory, and in
    this case the name of the file within the directory which caused
    this event is output.

    By default, any special characters in filenames are not escaped
    in any way. This can make the output of inotifywait difficult
    to parse in awk scripts or similar. The --csv and --format
    options will be helpful in this case.

It also is possible to filter the output by registering particular events of interest with the -e option, the list of which is shown here:

access create move_self
attrib delete moved_to
close_write delete_self moved_from
close_nowrite modify open
close move unmount
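
For example, to report only file creations and deletions under /tmp, the events of interest can be named explicitly:

$ inotifywait -m -e create -e delete /tmp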

A common application is testing for the arrival of new files. Since inotify must be given the name of an existing filesystem object to watch, the directory containing the new files is provided. A trigger of interest is also easy to provide -- new files should be complete and ready for processing when the close_write trigger fires. Below is an example script to watch for these events:

#!/bin/sh
unset IFS                                 # default of space, tab and nl
                                          # Wait for filesystem events
inotifywait -m -e close_write \
   /tmp /var/tmp /home/oracle/arch-orcl/ |
while read dir op file
do [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] &&
      echo "Import job should start on $file ($dir $op)."

   [[ "${dir}" == '/var/tmp/' && "${file}" == CLOSE_WEEK*.txt ]] &&
      echo Weekly backup is ready.

   [[ "${dir}" == '/home/oracle/arch-orcl/' && "${file}" == *.ARC ]] &&
      su - oracle -c 'ORACLE_SID=orcl ~oracle/bin/log_shipper' &

   [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break

   ((step+=1))
done

echo We processed $step events.

There are a few problems with the script as presented -- of all the available shells on Linux, only ksh93 (that is, the AT&T Korn shell) will report the "step" variable correctly at the end of the script. All the other shells will report this variable as null.

The reason for this behavior can be found in a brief explanation on the manual page for Bash: "Each command in a pipeline is executed as a separate process (i.e., in a subshell)." The MirBSD clone of the Korn shell has a slightly longer explanation:

# man mksh | col -b | sed -n '/The parts/,/do so/p'
  The parts of a pipeline, like below, are executed in subshells. Thus,
  variable assignments inside them fail. Use co-processes instead.

  foo | bar | read baz          # will not change $baz
  foo | bar |& read -p baz      # will, however, do so

And, the pdksh documentation in Oracle Linux 5 (from which MirBSD mksh emerged) has several more mentions of the subject:

General features of at&t ksh88 that are not (yet) in pdksh:
  - the last command of a pipeline is not run in the parent shell
  - `echo foo | read bar; echo $bar' prints foo in at&t ksh, nothing
    in pdksh (ie, the read is done in a separate process in pdksh).
  - in pdksh, if the last command of a pipeline is a shell builtin, it
    is not executed in the parent shell, so "echo a b | read foo bar"
    does not set foo and bar in the parent shell (at&t ksh will).
    This may get fixed in the future, but it may take a while.

$ man pdksh | col -b | sed -n '/BTW, the/,/aware/p'
  BTW, the most frequently reported bug is
    echo hi | read a; echo $a   # Does not print hi
  I'm aware of this and there is no need to report it.

This behavior is easy enough to demonstrate -- running the script above with the default bash shell and providing a sequence of example events:

$ cp /etc/passwd /tmp/newdata.txt
$ cp /etc/group /var/tmp/CLOSE_WEEK20170407.txt
$ cp /etc/passwd /tmp/SHUT

gives the following script output:

# ./inotify.sh
Setting up watches.
Watches established.
Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE).
Weekly backup is ready.
We processed events.

Examining the process list while the script is running, you'll also see two shells, one forked for the control structure:

$ function pps { typeset a IFS=\| ; ps ax | while read a
do case $a in *$1*|+([!0-9])) echo $a;; esac; done }


$ pps inot
  PID TTY      STAT   TIME COMMAND
 3394 pts/1    S+     0:00 /bin/sh ./inotify.sh
 3395 pts/1    S+     0:00 inotifywait -m -e close_write /tmp /var/tmp
 3396 pts/1    S+     0:00 /bin/sh ./inotify.sh

As it was manipulated in a subshell, the "step" variable above was null when control flow reached the echo. Switching this from #!/bin/sh to #!/bin/ksh93 will correct the problem, and only one shell process will be seen:

# ./inotify.ksh93
Setting up watches.
Watches established.
Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE).
Weekly backup is ready.
We processed 2 events.


$ pps inot
  PID TTY      STAT   TIME COMMAND
 3583 pts/1    S+     0:00 /bin/ksh93 ./inotify.sh
 3584 pts/1    S+     0:00 inotifywait -m -e close_write /tmp /var/tmp

Although ksh93 behaves properly and in general handles scripts far more gracefully than all of the other Linux shells, it is rather large:

$ ll /bin/[bkm]+([aksh93]) /etc/alternatives/ksh
-rwxr-xr-x. 1 root root  960456 Dec  6 11:11 /bin/bash
lrwxrwxrwx. 1 root root      21 Apr  3 21:01 /bin/ksh ->
                                               /etc/alternatives/ksh
-rwxr-xr-x. 1 root root 1518944 Aug 31  2016 /bin/ksh93
-rwxr-xr-x. 1 root root  296208 May  3  2014 /bin/mksh
lrwxrwxrwx. 1 root root      10 Apr  3 21:01 /etc/alternatives/ksh ->
                                                    /bin/ksh93

The mksh binary is the smallest of the Bourne implementations above (some of these shells may be missing on your system, but you can install them with yum). For a long-term monitoring process, mksh is likely the best choice for reducing both processing and memory footprint, and, assuming that a coprocess is used, it does not launch multiple copies of itself when idle. Converting the script to use a Korn coprocess that is friendly to mksh is not difficult:

#!/bin/mksh
unset IFS                              # default of space, tab and nl
                                       # Wait for filesystem events
inotifywait -m -e close_write \
   /tmp/ /var/tmp/ /home/oracle/arch-orcl/ \
   2</dev/null |&                      # Launch as Korn coprocess

while read -p dir op file              # Read from Korn coprocess
do [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] &&
      print "Import job should start on $file ($dir $op)."

   [[ "${dir}" == '/var/tmp/' && "${file}" == CLOSE_WEEK*.txt ]] &&
      print Weekly backup is ready.

   [[ "${dir}" == '/home/oracle/arch-orcl/' && "${file}" == *.ARC ]] &&
      su - oracle -c 'ORACLE_SID=orcl ~oracle/bin/log_shipper' &

   [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break

   ((step+=1))
done

echo We processed $step events.

Note that the Korn and Bolsky reference on the Korn shell outlines the following requirements in a program operating as a coprocess:

Caution: The co-process must:

- Send each output message to standard output.
- Have a newline at the end of each message.
- Flush its standard output whenever it writes a message.

An fflush(NULL) is found in the main processing loop of the inotifywait source, and these requirements appear to be met.

The mksh version of the script is the most reasonable compromise for efficient use and correct behavior, and I have explained it at some length here to save readers trouble and frustration -- it is important to avoid control structures executing in subshells in most of the Bourne family. Hopefully, all of these ersatz shells will someday fix this basic flaw and implement the Korn behavior correctly.
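
As a side note beyond the original text: Bash 4.2 and later can approximate the Korn behavior with the lastpipe shell option, which runs the final command of a pipeline in the current shell whenever job control is inactive (as it is in scripts). A minimal sketch:

#!/bin/bash
shopt -s lastpipe                 # bash 4.2+; effective when job control is off
step=0
printf '%s\n' a b c | while read -r line; do
    ((step+=1))
done
echo "We processed $step events."   # prints 3 rather than an empty value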

A Practical Application -- Oracle Log Shipping

Oracle databases that are configured for hot backups produce a stream of "archived redo log files" that are used for database recovery. These are the most critical backup files that are produced in an Oracle database.

These files are numbered sequentially and are written to a log directory configured by the DBA. An inotifywatch can trigger activities to compress, encrypt and/or distribute the archived logs to backup and disaster recovery servers for safekeeping. You can configure Oracle RMAN to do most of these functions, but the OS tools are more capable, flexible and simpler to use.

There are a number of important design parameters for a script handling archived logs:

Given these design parameters, this is an implementation:

# cat ~oracle/archutils/process_logs

#!/bin/ksh93

set -euo pipefail
IFS=$'\n\t'  # http://redsymbol.net/articles/unofficial-bash-strict-mode/

(
 flock -n 9 || exit 1          # Critical section-allow only one process.

 ARCHDIR=~oracle/arch-${ORACLE_SID}

 APREFIX=${ORACLE_SID}_1_

 ASUFFIX=.ARC

 CURLOG=$(<~oracle/.curlog-$ORACLE_SID)

 File="${ARCHDIR}/${APREFIX}${CURLOG}${ASUFFIX}"

 [[ ! -f "$File" ]] && exit

 while [[ -f "$File" ]]
 do ((NEXTCURLOG=CURLOG+1))

    NextFile="${ARCHDIR}/${APREFIX}${NEXTCURLOG}${ASUFFIX}"

    [[ ! -f "$NextFile" ]] && sleep 60  # Ensure ARCH has finished

    nice /usr/local/bin/lzip -9q "$File"

    until scp "${File}.lz" "yourcompany.com:~oracle/arch-$ORACLE_SID"
    do sleep 5
    done

    CURLOG=$NEXTCURLOG

    File="$NextFile"
 done

 echo $CURLOG > ~oracle/.curlog-$ORACLE_SID

) 9>~oracle/.processing_logs-$ORACLE_SID

The above script can be executed manually for testing even while the inotify handler is running, as the flock protects it.

A standby server, or a DataGuard server in primitive standby mode, can apply the archived logs at regular intervals. The script below forces a 12-hour delay in log application for the recovery of dropped or damaged objects, so inotify cannot be easily used in this case -- cron is a more reasonable approach for delayed file processing, and a run every 20 minutes will keep the standby at the desired recovery point:

# cat ~oracle/archutils/delay-lock.sh

#!/bin/ksh93

(
 flock -n 9 || exit 1              # Critical section-only one process.

 WINDOW=43200                      # 12 hours

 LOG_DEST=~oracle/arch-$ORACLE_SID

 OLDLOG_DEST=$LOG_DEST-applied

 function fage { print $(( $(date +%s) - $(stat -c %Y "$1") ))
  } # File age in seconds - Requires GNU extended date & stat

 cd $LOG_DEST

 of=$(ls -t | tail -1)             # Oldest file in directory

 [[ -z "$of" || $(fage "$of") -lt $WINDOW ]] && exit

 for x in $(ls -rt)                    # Order by ascending file mtime
 do if [[ $(fage "$x") -ge $WINDOW ]]
    then y=$(basename $x .lz)          # lzip compression is optional

         [[ "$y" != "$x" ]] && /usr/local/bin/lzip -dkq "$x"

         $ORACLE_HOME/bin/sqlplus '/ as sysdba' > /dev/null 2>&1 <<-EOF
                recover standby database;
                $LOG_DEST/$y
                cancel
                quit
                EOF

         [[ "$y" != "$x" ]] && rm "$y"

         mv "$x" $OLDLOG_DEST
    fi
              

 done
) 9> ~oracle/.recovering-$ORACLE_SID

I've covered these specific examples here because they introduce tools to control concurrency, which is a common issue when using inotify, and they advance a few features that increase reliability and minimize storage requirements. Hopefully enthusiastic readers will introduce many improvements to these approaches.

The incron System

Lukas Jelinek is the author of the incron package that allows users to specify tables of inotify events that are executed by the master incrond process. Despite the reference to "cron", the package does not schedule events at regular intervals -- it is a tool for filesystem events, and the cron reference is slightly misleading.

The incron package is available from EPEL . If you have installed the repository, you can load it with yum:

# yum install incron
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package incron.x86_64 0:0.5.10-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=================================================================
 Package       Arch       Version           Repository    Size
=================================================================
Installing:
 incron        x86_64     0.5.10-8.el7      epel          92 k

Transaction Summary
==================================================================
Install  1 Package

Total download size: 92 k
Installed size: 249 k
Is this ok [y/d/N]: y
Downloading packages:
incron-0.5.10-8.el7.x86_64.rpm                      |  92 kB   00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : incron-0.5.10-8.el7.x86_64                          1/1
  Verifying  : incron-0.5.10-8.el7.x86_64                          1/1

Installed:
  incron.x86_64 0:0.5.10-8.el7

Complete!

On a systemd distribution with the appropriate service units, you can start and enable incron at boot with the following commands:

# systemctl start incrond
# systemctl enable incrond
Created symlink from
   /etc/systemd/system/multi-user.target.wants/incrond.service
to /usr/lib/systemd/system/incrond.service.

In the default configuration, any user can establish incron schedules. The incrontab format uses three fields:

<path> <mask> <command>

Below is an example entry that was set with the -e option:

$ incrontab -e        #vi session follows

$ incrontab -l
/tmp/ IN_ALL_EVENTS /home/luser/myincron.sh $@ $% $#
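
In the entry above, $@ expands to the watched filesystem path, $% to the event flags in textual form, and $# to the name of the file that triggered the event; these wildcards are documented in incrontab(5).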

You can record a simple script and mark it with execute permission:

$ cat myincron.sh
#!/bin/sh

echo -e "path: $1 op: $2 \t file: $3" >> ~/op

$ chmod 755 myincron.sh

Then, if you repeat the original /tmp file manipulations at the start of this article, the script will record the following output:

$ cat ~/op

path: /tmp/ op: IN_ATTRIB        file: hello
path: /tmp/ op: IN_CREATE        file: hello
path: /tmp/ op: IN_OPEN          file: hello
path: /tmp/ op: IN_CLOSE_WRITE   file: hello
path: /tmp/ op: IN_OPEN          file: passwd
path: /tmp/ op: IN_CLOSE_WRITE   file: passwd
path: /tmp/ op: IN_MODIFY        file: passwd
path: /tmp/ op: IN_CREATE        file: passwd
path: /tmp/ op: IN_DELETE        file: passwd
path: /tmp/ op: IN_CREATE        file: goodbye
path: /tmp/ op: IN_ATTRIB        file: goodbye
path: /tmp/ op: IN_OPEN          file: goodbye
path: /tmp/ op: IN_CLOSE_WRITE   file: goodbye
path: /tmp/ op: IN_DELETE        file: hello
path: /tmp/ op: IN_DELETE        file: goodbye

While the IN_CLOSE_WRITE event on a directory object is usually of greatest interest, most of the standard inotify events are available within incron, which also offers several unique amalgams:

$ man 5 incrontab | col -b | sed -n '/EVENT SYMBOLS/,/child process/p'

EVENT SYMBOLS

These basic event mask symbols are defined:

IN_ACCESS          File was accessed (read) (*)
IN_ATTRIB          Metadata changed (permissions, timestamps, extended
                   attributes, etc.) (*)
IN_CLOSE_WRITE     File opened for writing was closed (*)
IN_CLOSE_NOWRITE   File not opened for writing was closed (*)
IN_CREATE          File/directory created in watched directory (*)
IN_DELETE          File/directory deleted from watched directory (*)
IN_DELETE_SELF     Watched file/directory was itself deleted
IN_MODIFY          File was modified (*)
IN_MOVE_SELF       Watched file/directory was itself moved
IN_MOVED_FROM      File moved out of watched directory (*)
IN_MOVED_TO        File moved into watched directory (*)
IN_OPEN            File was opened (*)

When monitoring a directory, the events marked with an asterisk (*)
above can occur for files in the directory, in which case the name
field in the returned event data identifies the name of the file within
the directory.

The IN_ALL_EVENTS symbol is defined as a bit mask of all of the above
events. Two additional convenience symbols are IN_MOVE, which is a com-
bination of IN_MOVED_FROM and IN_MOVED_TO, and IN_CLOSE, which combines
IN_CLOSE_WRITE and IN_CLOSE_NOWRITE.

The following further symbols can be specified in the mask:

IN_DONT_FOLLOW     Don't dereference pathname if it is a symbolic link
IN_ONESHOT         Monitor pathname for only one event
IN_ONLYDIR         Only watch pathname if it is a directory

Additionally, there is a symbol which doesn't appear in the inotify sym-
bol set. It is IN_NO_LOOP. This symbol disables monitoring events until
the current one is completely handled (until its child process exits).

The incron system likely presents the most comprehensive interface to inotify of all the tools researched and listed here. Additional configuration options can be set in /etc/incron.conf to tweak incron's behavior for those that require a non-standard configuration.

Path Units under systemd

When your Linux installation is running systemd as PID 1, limited inotify functionality is available through "path units" as is discussed in a lighthearted article by Paul Brown at OCS-Mag .

The relevant manual page has useful information on the subject:

$ man systemd.path | col -b | sed -n '/Internally,/,/systems./p'

Internally, path units use the inotify(7) API to monitor file systems.
Due to that, it suffers by the same limitations as inotify, and for
example cannot be used to monitor files or directories changed by other
machines on remote NFS file systems.

Note that when a systemd path unit spawns a shell script, the $HOME and tilde ( ~ ) operator for the owner's home directory may not be defined. Using the tilde operator to reference another user's home directory (for example, ~nobody/) does work, even when applied to the self-same user running the script. The Oracle script above was explicit and did not reference ~ without specifying the target user, so I'm using it as an example here.

Using inotify triggers with systemd path units requires two files. The first file specifies the filesystem location of interest:

$ cat /etc/systemd/system/oralog.path

[Unit]
Description=Oracle Archivelog Monitoring
Documentation=http://docs.yourserver.com

[Path]
PathChanged=/home/oracle/arch-orcl/

[Install]
WantedBy=multi-user.target

The PathChanged parameter above roughly corresponds to the close-write event used in my previous direct inotify calls. The full collection of inotify events is not (currently) supported by systemd -- it is limited to PathExists , PathChanged and PathModified , which are described in man systemd.path .

The second file is a service unit describing a program to be executed. It must have the same name, but a different extension, as the path unit:

$ cat /etc/systemd/system/oralog.service

[Unit]
Description=Oracle Archivelog Monitoring
Documentation=http://docs.yourserver.com

[Service]
Type=oneshot
Environment=ORACLE_SID=orcl
ExecStart=/bin/sh -c '/root/process_logs >> /tmp/plog.txt 2>&1'

The oneshot parameter above alerts systemd that the program that it forks is expected to exit and should not be respawned automatically -- the restarts are limited to triggers from the path unit. The above service configuration will provide the best options for logging -- divert them to /dev/null if they are not needed.

Use systemctl start on the path unit to begin monitoring -- a common error is using it on the service unit, which will directly run the handler only once. Enable the path unit if the monitoring should survive a reboot.
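
For the example units above, that would be:

# systemctl start oralog.path
# systemctl enable oralog.path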

Although this limited functionality may be enough for some casual uses of inotify, it is a shame that the full functionality of inotifywait and incron are not represented here. Perhaps it will come in time.

Conclusion

Although the inotify tools are powerful, they do have limitations. To repeat them, inotify cannot monitor remote (NFS) filesystems; it cannot report the userid involved in a triggering event; it does not work with /proc or other pseudo-filesystems; mmap() operations do not trigger it; and the inotify queue can overflow resulting in lost events, among other concerns.

Even with these weaknesses, the efficiency of inotify is superior to most other approaches for immediate notifications of filesystem activity. It also is quite flexible, and although the close-write directory trigger should suffice for most usage, it has ample tools for covering special use cases.

In any event, it is productive to replace polling activity with inotify watches, and system administrators should be liberal in educating the user community that the classic crontab is not an appropriate place to check for new files. Recalcitrant users should be confined to Ultrix on a VAX until they develop sufficient appreciation for modern tools and approaches, which should result in more efficient Linux systems and happier administrators.

Sidenote: Archiving /etc/passwd

Tracking changes to the password file involves many different types of inotify triggering events. The vipw utility commonly will make changes to a temporary file, then clobber the original with it. This can be seen when the inode number changes:

# ll -i /etc/passwd
199720973 -rw-r--r-- 1 root root 3928 Jul  7 12:24 /etc/passwd

# vipw
[ make changes ]
You are using shadow passwords on this system.
Would you like to edit /etc/shadow now [y/n]? n

# ll -i /etc/passwd
203784208 -rw-r--r-- 1 root root 3956 Jul  7 12:24 /etc/passwd

The destruction and replacement of /etc/passwd even occurs with setuid binaries called by unprivileged users:

$ ll -i /etc/passwd
203784196 -rw-r--r-- 1 root root 3928 Jun 29 14:55 /etc/passwd

$ chsh
Changing shell for fishecj.
Password:
New shell [/bin/bash]: /bin/csh
Shell changed.

$ ll -i /etc/passwd
199720970 -rw-r--r-- 1 root root 3927 Jul  7 12:23 /etc/passwd

For this reason, all inotify triggering events should be considered when tracking this file. If there is concern with an inotify queue overflow (in which events are lost), then the OPEN , ACCESS and CLOSE_NOWRITE,CLOSE triggers likely can be immediately ignored.

All other inotify events on /etc/passwd might run the following script to version the changes into an RCS archive and mail them to an administrator:

#!/bin/sh

# This script tracks changes to the /etc/passwd file from inotify.
# Uses RCS for archiving. Watch for UID zero.

PWMAILS=Charlie.Root@openbsd.org

TPDIR=~/track_passwd

cd $TPDIR

if diff -q /etc/passwd $TPDIR/passwd
then exit                                         # they are the same
else sleep 5                                      # let passwd settle
     diff /etc/passwd $TPDIR/passwd 2>&1 |        # they are DIFFERENT
     mail -s "/etc/passwd changes $(hostname -s)" "$PWMAILS"
     cp -f /etc/passwd $TPDIR                     # copy for checkin

#    "SCCS, the source motel! Programs check in and never check out!"
#     -- Ken Thompson

     rcs -q -l passwd                            # lock the archive
     ci -q -m_ passwd                            # check in new ver
     co -q passwd                                # drop the new copy
fi > /dev/null 2>&1

Here is an example email from the script for the above chsh operation:

-----Original Message-----
From: root [mailto:root@myhost.com]
Sent: Thursday, July 06, 2017 2:35 PM
To: Fisher, Charles J. <Charles.Fisher@myhost.com>;
Subject: /etc/passwd changes myhost

57c57
< fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/bash
---
> fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/csh

Further processing on the third column of /etc/passwd might detect UID zero (a root user) or other important user classes for emergency action. This might include a rollback of the file from RCS to /etc and/or SMS messages to security contacts.

Charles Fisher has an electrical engineering degree from the University of Iowa and works as a systems and database administrator for a Fortune 500 mining and manufacturing corporation.

[Dec 09, 2017] linux - What does the line '#!/bin/sh -e' do

Dec 09, 2017 | stackoverflow.com


That line defines what program will execute the given script. For sh, that line should normally start with the #! characters, like so:
#!/bin/sh -e

The -e flag's long name is errexit ; it causes the script to exit immediately on the first error.
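
A minimal illustration of the effect:

#!/bin/sh -e
false                  # exits non-zero, so the script stops here
echo "never reached"   # this line is not printed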

[Dec 02, 2017] BASH Shell How To Redirect stderr To stdout ( redirect stderr to a File )

Dec 02, 2017 | www.cyberciti.biz

Q. How do I redirect stderr to stdout? How do I redirect stderr to a file?

A. Bash and other modern shells provide an I/O redirection facility. There are three standard files (standard streams) open by default:

[a] stdin – used to get input (keyboard), i.e. data going into a program.

[b] stdout – used to write information (screen).

[c] stderr – used to write error messages (screen).

Understanding I/O streams numbers

The Unix / Linux standard I/O streams with numbers:

Handle Name Description
0 stdin Standard input
1 stdout Standard output
2 stderr Standard error
Redirecting the standard error stream to a file

The following will redirect program error messages to a file called error.log:
$ program-name 2> error.log
$ command1 2> error.log

Redirecting the standard error (stderr) and stdout to file

Use the following syntax:
$ command-name &>file
OR
$ command > file-name 2>&1
Another useful example:
# find /usr/home -name .profile 2>&1 | more

Redirect stderr to stdout

Use the command as follows:
$ command-name 2>&1
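
One point worth adding beyond the original Q&A: the order of redirections matters, because they are processed from left to right:

$ command > file 2>&1    # stdout to file first, then stderr duplicated onto it: both land in file
$ command 2>&1 > file    # stderr duplicated onto the old stdout (the terminal), then stdout to file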

[Nov 18, 2017] Largest FREE Microsoft eBook Giveaway! I'm Giving Away MILLIONS of FREE Microsoft eBooks again, including Windows 10, Office

Nov 18, 2017 | msdn.microsoft.com

Before we get to this year's list of FREE eBooks, a few answers to common questions I receive during my FREE EBOOK GIVEAWAY:

  1. How many can you download?
    • ANSWER: As many as you want! This is a FREE eBook giveaway, so please download as many as interest you.
  2. Wow, there are a LOT listed here. Is there a way to download all of them at once?
    • ANSWER: Yes, please see the note below on how to do this.
  3. Can I share a link to your post to let others know about this giveaway?
    • ANSWER: Yes, please do share the good news with anyone you feel could benefit from this.
  4. I know you said they are "Free," but what's the catch?
    • ANSWER: There is no catch. They really are FREE. Consider it a "Thank you" for being a reader of my blog and a customer or partner of Microsoft.
  5. Ok, so if they are free and you're encouraging us to share this with others, can I post a link to your post here on sites like Reddit, FatWallet, and other deal share sites to let them know, or is that asking too much?
    • ANSWER: Please do. In fact, I would encourage you to share a link to this post on any deal site you feel their users could benefit from the FREE eBooks and resources included below. Again, I WANT to give away MILLIONS of FREE eBooks!
  6. Are these "time-bombed" versions of the eBooks that stop working after a certain amount of time or reads?
    • ANSWER: No, these are the full resources for you to use.

Ok, ready for some FREE eBooks? Below is the collection I am posting this year (which includes a ton of new eBooks & resources, as well as some of the favorites from previous years):

... ... ...

PowerShell Microsoft Dynamics GP 2015 R2 PowerShell Users Guide PDF
PowerShell PowerShell Integrated Scripting Environment 3.0 PDF
PowerShell Simplify Group Policy administration with Windows PowerShell PDF
PowerShell Windows PowerShell 3.0 Examples PDF
PowerShell Windows PowerShell 3.0 Language Quick Reference PDF
PowerShell WINDOWS POWERSHELL 4.0 LANGUAGE QUICK REFERENCE PDF
PowerShell Windows PowerShell 4.0 Language Reference Examples PDF
PowerShell Windows PowerShell Command Builder User's Guide PDF
PowerShell Windows PowerShell Desired State Configuration Quick Reference PDF
PowerShell WINDOWS POWERSHELL INTEGRATED SCRIPTING ENVIRONMENT 4.0 PDF
PowerShell Windows PowerShell Web Access PDF
PowerShell WMI in PowerShell 3.0 PDF
PowerShell WMI in Windows PowerShell 4.0 PDF

[Nov 01, 2017] Functions by Tom Ryder

Nov 01, 2017 | sanctum.geek.nz

A more flexible method for defining custom commands for an interactive shell (or within a script) is to use a shell function. We could declare our ll function in a Bash startup file as a function instead of an alias like so:

# Shortcut to call ls(1) with the -l flag
ll() {
    command ls -l "$@"
}

Note the use of the command builtin here to specify that the ll function should invoke the program named ls , and not any function named ls . This is particularly important when writing a function wrapper around a command, to stop an infinite loop where the function calls itself indefinitely:

# Always add -q to invocations of gdb(1)
gdb() {
    command gdb -q "$@"
}

In both examples, note also the use of the "$@" expansion, to add to the final command line any arguments given to the function. We wrap it in double quotes to stop spaces and other shell metacharacters in the arguments causing problems. This means that the ll command will work correctly if you were to pass it further options and/or one or more directories as arguments:

$ ll -a
$ ll ~/.config

Shell functions declared in this way are specified by POSIX for Bourne-style shells, so they should work in your shell of choice, including Bash, dash , Korn shell, and Zsh. They can also be used within scripts, allowing you to abstract away multiple instances of similar commands to improve the clarity of your script, in much the same way the basics of functions work in general-purpose programming languages.

Functions are a good and portable way to approach adding features to your interactive shell; written carefully, they even allow you to port features you might like from other shells into your shell of choice. I'm fond of taking commands I like from Korn shell or Zsh and implementing them in Bash or POSIX shell functions, such as Zsh's vared or its two-argument cd features.

If you end up writing a lot of shell functions, you should consider putting them into separate configuration subfiles to keep your shell's primary startup file from becoming unmanageably large.

Examples from the author

You can take a look at some of the shell functions I have defined here that are useful to me in general shell usage; a lot of these amount to implementing convenience features that I wish my shell had, especially for quick directory navigation, or adding options to commands:

Other examples

Variables in shell functions

You can manipulate variables within shell functions, too:

# Print the filename of a path, stripping off its leading path and
# extension
fn() {
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

This works fine, but the catch is that after the function is done, the value for name will still be defined in the shell, and will overwrite whatever was in there previously:

$ printf '%s\n' "$name"
foobar
$ fn /home/you/Task_List.doc
Task_List
$ printf '%s\n' "$name"
Task_List

This may be desirable if you actually want the function to change some aspect of your current shell session, such as managing variables or changing the working directory. If you don't want that, you will probably want to find some means of avoiding name collisions in your variables.

If your function is only for use with a shell that provides the local (Bash) or typeset (Ksh) features, you can declare the variable as local to the function to remove its global scope, to prevent this happening:

# Bash-like
fn() {
    local name
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

# Ksh-like
# Note different syntax for first line
function fn {
    typeset name
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

If you're using a shell that lacks these features, or you want to aim for POSIX compatibility, things are a little trickier, since local function variables aren't specified by the standard. One option is to use a subshell , so that the variables are only defined for the duration of the function:

# POSIX; note we're using plain parentheses rather than curly brackets, for
# a subshell
fn() (
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
)

# POSIX; alternative approach using command substitution:
fn() {
    printf '%s\n' "$(
        name=$1
        name=${name##*/}
        name=${name%.*}
        printf %s "$name"
    )"
}

This subshell method also allows you to change directory with cd within a function without changing the working directory of the user's interactive shell, or to change shell options with set or Bash options with shopt only temporarily for the purposes of the function.

Another method to deal with variables is to manipulate the positional parameters directly ( $1 , $2 ) with set , since they are local to the function call too:

# POSIX; using positional parameters
fn() {
    set -- "${1##*/}"
    set -- "${1%.*}"
    printf '%s\n' "$1"
}

These methods work well, and can sometimes even be combined, but they're awkward to write, and harder to read than the modern shell versions. If you only need your functions to work with your modern shell, I recommend just using local or typeset . The Bash Guide on Greg's Wiki has a very thorough breakdown of functions in Bash, if you want to read about this and other aspects of functions in more detail.
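
With any of the local-variable or subshell versions above, repeating the earlier experiment shows that the calling shell's name variable is left untouched:

$ name=foobar
$ fn /home/you/Task_List.doc
Task_List
$ printf '%s\n' "$name"
foobar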

Keeping functions for later

As you get comfortable with defining and using functions during an interactive session, you might define them in ad-hoc ways on the command line for calling in a loop or some other similar circumstance, just to solve a task in that moment.

As an example, I recently made an ad-hoc function called monit to run a set of commands for its hostname argument that together established different types of monitoring system checks, using an existing script called nmfs :

$ monit() { nmfs "$1" Ping Y ; nmfs "$1" HTTP Y ; nmfs "$1" SNMP Y ; }
$ for host in webhost{1..10} ; do
> monit "$host"
> done

After that task was done, I realized I was likely to use the monit command interactively again, so I decided to keep it. Shell functions only last as long as the current shell, so if you want to make them permanent, you need to store their definitions somewhere in your startup files. If you're using Bash, and you're content to just add things to the end of your ~/.bashrc file, you could just do something like this:

$ declare -f monit >> ~/.bashrc

That would append the existing definition of monit in parseable form to your ~/.bashrc file, and the monit function would then be loaded and available to you for future interactive sessions. Later on, I ended up converting monit into a shell script, as its use wasn't limited to just an interactive shell.

If you want a more robust approach to keeping functions like this for Bash permanently, I wrote a tool called Bashkeep , which allows you to quickly store functions and variables defined in your current shell into separate and appropriately-named files, including viewing and managing the list of names conveniently:

$ keep monit
$ keep
monit
$ ls ~/.bashkeep.d
monit.bash
$ keep -d monit

[Oct 31, 2017] Bash process substitution by Tom Ryder

Notable quotes:
"... Thanks to Reddit user Rhomboid for pointing out an incorrect assertion about this syntax necessarily abstracting ..."
"... calls, which I've since removed. ..."
February 27, 2012 sanctum.geek.nz

For tools like diff that work with multiple files as parameters, it can be useful to work with not just files on the filesystem, but also potentially with the output of arbitrary commands. Say, for example, you wanted to compare the output of ps and ps -e with diff -u . An obvious way to do this is to write files to compare the output:

$ ps > ps.out
$ ps -e > pse.out
$ diff -u ps.out pse.out

This works just fine, but Bash provides a shortcut in the form of process substitution , allowing you to treat the standard output of commands as files. This is done with the <() and >() operators. In our case, we want to direct the standard output of two commands into place as files:

$ diff -u <(ps) <(ps -e)

This is functionally equivalent, except it's a little tidier because it doesn't leave files lying around. This is also very handy for elegantly comparing files across servers, using ssh :

$ diff -u .bashrc <(ssh remote cat .bashrc)

Conversely, you can also use the >() operator to direct from a filename context to the standard input of a command. This is handy for setting up in-place filters for things like logs. In the following example, I'm making a call to rsync , specifying that it should make a log of its actions in log.txt , but filter it through grep -vF .tmp first to remove anything matching the fixed string .tmp :

$ rsync -arv --log-file=>(grep -vF .tmp >log.txt) src/ host::dst/

Combined with tee this syntax is a way of simulating multiple filters for a stdout stream, transforming output from a command in as many ways as you see fit:

$ ps -ef | tee >(awk '$1=="tom"' >toms-procs.txt) \
               >(awk '$1=="root"' >roots-procs.txt) \
               >(awk '$1!="httpd"' >not-apache-procs.txt) \
               >(awk 'NR>1{print $1}' >pids-only.txt)

In general, the idea is that wherever on the command line you could specify a file to be read from or written to, you can instead use this syntax to make an implicit named pipe for the text stream.
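
Under the hood, on systems where Bash implements this via /dev/fd (such as Linux), each substitution expands to a file descriptor path, which you can see directly (the descriptor number will vary):

$ echo <(true)
/dev/fd/63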

Thanks to Reddit user Rhomboid for pointing out an incorrect assertion about this syntax necessarily abstracting mkfifo calls, which I've since removed.

[Oct 31, 2017] Temporary files by Tom Ryder

Mar 05, 2012 | sanctum.geek.nz

With judicious use of tricks like pipes, redirects, and process substitution in modern shells, it's very often possible to avoid using temporary files, doing everything inline and keeping them quite neat. However when manipulating a lot of data into various formats you do find yourself occasionally needing a temporary file, just to hold data temporarily.

A common way to deal with this is to create a temporary file in your home directory, with some arbitrary name, something like test or working :

$ ps -ef >~/test

If you want to save the information indefinitely for later use, this makes sense, although it would be better to give it a slightly more instructive name than just test .

If you really only needed the data temporarily, however, you're much better to use the temporary files directory. This is usually /tmp , but for good practice's sake it's better to check the value of TMPDIR first, and only use /tmp as a default:

$ ps -ef >"${TMPDIR:-/tmp}"/test

This is getting better, but there is still a significant problem: there's no built-in check that the test file doesn't already exist, perhaps being used by some other user or program, particularly another running instance of the same script.

To that end, we have the mktemp program, which creates an empty temporary file in the appropriate directory for you without overwriting anything, and prints the filename it created. This allows you to use the file inline in both shell scripts and one-liners, and is much safer than specifying hardcoded paths:

$ mktemp
/tmp/tmp.yezXn0evDf
$ procsfile=$(mktemp)
$ printf '%s\n' "$procsfile"
/tmp/tmp.9rBjzWYaSU
$ ps -ef >"$procsfile"
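
A related pattern worth noting (a small extension of the example above): have the shell remove the temporary file automatically when the script exits, using trap:

#!/bin/sh
procsfile=$(mktemp) || exit 1
trap 'rm -f "$procsfile"' EXIT   # clean up the temporary file on any exit
ps -ef >"$procsfile"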

If you're going to create several such files for related purposes, you could also create a directory in which to put them using the -d option:

$ procsdir=$(mktemp -d)
$ printf '%s\n' "$procsdir"
/tmp/tmp.HMAhM2RBSO

On GNU/Linux systems, files of a sufficient age in TMPDIR are cleared on boot (controlled in /etc/default/rcS on Debian-derived systems, /etc/cron.daily/tmpwatch on Red Hat ones), making /tmp useful as a general scratchpad as well as for a kind of relatively reliable inter-process communication without cluttering up users' home directories.

In some cases, there may be additional advantages in using /tmp for its designed purpose as some administrators choose to mount it as a tmpfs filesystem, so it operates in RAM and works very quickly. It's also common practice to set the noexec flag on the mount to prevent malicious users from executing any code they manage to find or save in the directory.

[Oct 31, 2017] High-speed Bash by Tom Ryder

Notable quotes:
"... One of my favourite technical presentations I've read online has been Hal Pomeranz's Unix Command-Line Kung Fu , a catalogue of shortcuts and efficient methods of doing very clever things with the Bash shell. None of these are grand arcane secrets, but they're things that are often forgotten in the course of daily admin work, when you find yourself typing something you needn't, or pressing up repeatedly to find something you wrote for which you could simply search your command history. ..."
Jan 24, 2012 | sanctum.geek.nz

One of my favourite technical presentations I've read online has been Hal Pomeranz's Unix Command-Line Kung Fu , a catalogue of shortcuts and efficient methods of doing very clever things with the Bash shell. None of these are grand arcane secrets, but they're things that are often forgotten in the course of daily admin work, when you find yourself typing something you needn't, or pressing up repeatedly to find something you wrote for which you could simply search your command history.

I highly recommend reading the whole thing, as I think even the most experienced shell users will find there are useful tidbits in there that would make their lives easier and their time with the shell more productive, beyond simpler things like tab completion.

Here, I'll recap two of the things I thought were the most simple and useful items in the presentation for general shell usage, and see if I can add a little value to them with reference to the Bash manual.

History with Ctrl+R

For many shell users, finding a command in history means either pressing the up arrow key repeatedly, or perhaps piping a history call through grep . It turns out there's a much nicer way to do this, using Bash's built-in history searching functionality; if you press Ctrl+R and start typing a search pattern, the most recent command matching that pattern will automatically be inserted on your current line, at which point you can adapt it as you need, or simply press Enter to run it again. You can keep pressing Ctrl+R to move further back in your history to the next-most recent match. On my shell, if I search through my history for git , I can pull up what I typed for a previous commit:

(reverse-i-search)`git': git commit -am "Pulled up-to-date colors."

This functionality isn't actually exclusive to Bash; you can establish a history search function in quite a few tools that use GNU Readline, including the MySQL client command line.

You can search forward through history in the same way with Ctrl+S, but it's likely you'll have to fix up a couple of terminal annoyances first.

Additionally, if like me you're a Vim user and you don't really like having to reach for the arrow keys, or if you're on a terminal where those keys are broken for whatever reason, you can browse back and forth within your command history with Ctrl+P (previous) and Ctrl+N (next). These are just a few of the Emacs-style shortcuts that GNU Readline provides; check here for a more complete list .

Repeating commands with !!

The last command you ran in Bash can be abbreviated on the next line with two exclamation marks:

$ echo "Testing."
Testing.
$ !!
Testing.

You can use this to simply repeat a command over and over again, although for that you really should be using watch , but more interestingly it turns out this is very handy for building complex pipes in stages. Suppose you were building a pipeline to digest some data generated from a program like netstat , perhaps to determine the top 10 IP addresses that are holding open the most connections to a server. You might be able to build a pipeline like this:

# netstat -ant
# !! | awk '{print $5}'
# !! | sort
# !! | uniq -c
# !! | sort -rn
# !! | sed 10q

Similarly, you can repeat the last argument from the previous command line using !$ , which is useful if you're doing a set of operations on one file, such as checking it out via RCS, editing it, and checking it back in:

$ co -l file.txt
$ vim !$
$ ci -u !$

Or if you happen to want to work on a set of arguments, you can repeat all of the arguments from the previous command using !* :

$ touch a.txt b.txt c.txt
$ rm !*

When you remember to use these three together, they can save you a lot of typing, and will really increase your accuracy, because you won't be at risk of mistyping any of the commands or arguments. Naturally, however, it pays to be careful what you're running through rm!

[Oct 31, 2017] Learning the content of /bin and /usr/bin by Tom Ryder

Mar 16, 2012 | sanctum.geek.nz

When you have some spare time, something instructive you can do to fill gaps in your Unix knowledge and get a better idea of the programs installed on your system and what they can do is a simple whatis call, run over all the executable files in your /bin and /usr/bin directories.

This will give you a one-line summary of each file's function, if one is available from its man page.

tom@conan:/bin$ whatis *
bash (1) - GNU Bourne-Again SHell
bunzip2 (1) - a block-sorting file compressor, v1.0.4
busybox (1) - The Swiss Army Knife of Embedded Linux
bzcat (1) - decompresses files to stdout
...

tom@conan:/usr/bin$ whatis *
[ (1)                - check file types and compare values
2to3 (1)             - Python2 to Python3 converter
2to3-2.7 (1)         - Python2 to Python3 converter
411toppm (1)         - convert Sony Mavica .411 image to ppm
...

It also works on many of the files in other directories, such as /etc :

tom@conan:/etc$ whatis *
acpi (1)             - Shows battery status and other ACPI information
adduser.conf (5)     - configuration file for adduser(8) and addgroup(8)
adjtime (3)          - correct the time to synchronize the system clock
aliases (5)          - Postfix local alias database format
...

Because packages often install more than one binary and you're only in the habit of using one or two of them, this process can tell you about programs on your system that you may have missed, particularly standard tools that solve common problems. As an example, I first learned about watch this way, having used a clunky solution with for loops with sleep calls to do the same thing many times before.
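
You can also invert the idea to hunt for executables that have no manual page at all; here is a rough sketch, assuming your whatis exits non-zero when it finds nothing (as man-db's implementation does):

for cmd in /bin/* /usr/bin/*; do
    whatis "$(basename "$cmd")" >/dev/null 2>&1 || printf '%s\n' "$cmd"
done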

[Oct 31, 2017] Testing exit values in Bash by Tom Ryder

Oct 28, 2013 | sanctum.geek.nz

In Bash scripting (and shell scripting in general), we often want to check the exit value of a command to decide an action to take after it completes, likely for the purpose of error handling. For example, to determine whether a particular regular expression regex was present somewhere in a file options , we might apply grep(1) with its POSIX -q option to suppress output and just use the exit value:

grep -q regex options

An approach sometimes taken is then to test the exit value with the $? parameter, using if to check if it's non-zero, which is not very elegant and a bit hard to read:

# Bad practice
grep -q regex options
if (($? > 0)); then
    printf '%s\n' 'myscript: Pattern not found!' >&2
    exit 1
fi

Because the if construct by design tests the exit value of commands , it's better to test the command directly , making the expansion of $? unnecessary:

# Better
if grep -q regex options; then
    # Do nothing
    :
else
    printf '%s\n' 'myscript: Pattern not found!' >&2
    exit 1
fi

We can also precede the command to be tested with ! to negate the test, which saves us from having to use an else branch:

# Best
if ! grep -q regex options; then
    printf '%s\n' 'myscript: Pattern not found!' >&2
    exit 1
fi

An alternative syntax is to use && and || to perform if and else tests with grouped commands between braces, but these tend to be harder to read:

# Alternative
grep -q regex options || {
    printf '%s\n' 'myscript: Pattern not found!' >&2
    exit 1
}

With this syntax, the two commands in the block are only executed if the grep(1) call exits with a non-zero status. We can apply && instead to execute commands if it does exit with zero.

That syntax can be convenient for quickly short-circuiting failures in scripts, for example due to nonexistent commands, particularly if the command being tested already outputs its own error message. The following, for instance, cuts the script off if the given command fails, most likely because ffmpeg(1) is unavailable on the system:

hash ffmpeg || exit 1

Note that the braces for a grouped command are not needed here, as there's only one command to be run in case of failure, the exit call.

Calls to cd are another good use case here, as running a script in the wrong directory if a call to cd fails could have really nasty effects:

cd wherever || exit 1

In general, you'll probably only want to test $? when you have specific non-zero error conditions to catch. For example, if we were using the --max-delete option for rsync(1) , we could check a call's return value to see whether rsync(1) hit the threshold for deleted file count and write a message to a logfile appropriately:

rsync --archive --delete --max-delete=5 source destination
if (($? == 25)); then
    printf '%s\n' 'Deletion limit was reached' >"$logfile"
fi

It may be tempting to use the errexit feature in the hopes of stopping a script as soon as it encounters any error, but there are some problems with its usage that make it a bit error-prone. It's generally more straightforward to simply write your own error handling using the methods above.
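
A minimal sketch of one such pitfall, assuming Bash: errexit is suspended while the condition of an if statement is being evaluated, so a failure inside a function called there passes silently:

#!/usr/bin/env bash
set -e
check() {
    false    # fails, but errexit is suspended inside an 'if' condition
    echo 'check: still running after the failure'
}
if check; then
    echo 'check reported success'
fi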

For a really thorough breakdown of dealing with conditionals in Bash, take a look at the relevant chapter of the Bash Guide .

[Oct 31, 2017] Shell config subfiles by Tom Ryder

Notable quotes:
"... Note that we unset the config variable after we're done, otherwise it'll be in the namespace of our shell where we don't need it. You may also wish to check for the existence of the ~/.bashrc.d directory, check there's at least one matching file inside it, or check that the file is readable before attempting to source it, depending on your preference. ..."
"... Thanks to commenter oylenshpeegul for correcting the syntax of the loops. ..."
Jan 30, 2015 | sanctum.geek.nz

Large shell startup scripts ( .bashrc , .profile ) over about fifty lines or so with a lot of options, aliases, custom functions, and similar tweaks can get cumbersome to manage over time, and if you keep your dotfiles under version control it's not terribly helpful to see large sets of commits just editing the one file when it could be more instructive if broken up into files by section.

Given that shell configuration is just shell code, we can apply the source builtin (or the . builtin for POSIX sh ) to load several files at the end of a .bashrc , for example:

source ~/.bashrc.options
source ~/.bashrc.aliases
source ~/.bashrc.functions

This is a better approach, but it still binds us into using those filenames; we still have to edit the ~/.bashrc file if we want to rename them, or remove them, or add new ones.

Fortunately, UNIX-like systems have a common convention for this, the .d directory suffix, in which sections of configuration can be stored to be read by a main configuration file dynamically. In our case, we can create a new directory ~/.bashrc.d :

$ ls ~/.bashrc.d
options.bash
aliases.bash
functions.bash

With a slightly more advanced snippet at the end of ~/.bashrc , we can then load every file with the suffix .bash in this directory:

# Load any supplementary scripts
for config in "$HOME"/.bashrc.d/*.bash ; do
    source "$config"
done
unset -v config

Note that we unset the config variable after we're done, otherwise it'll be in the namespace of our shell where we don't need it. You may also wish to check for the existence of the ~/.bashrc.d directory, check there's at least one matching file inside it, or check that the file is readable before attempting to source it, depending on your preference.
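
A more defensive variant along those lines might look like this (a sketch, assuming Bash and the same ~/.bashrc.d layout):

# Load any supplementary scripts, checking directory and readability first
if [[ -d $HOME/.bashrc.d ]]; then
    for config in "$HOME"/.bashrc.d/*.bash; do
        [[ -r $config ]] && source "$config"
    done
    unset -v config
fi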

The same method can be applied with .profile to load all scripts with the suffix .sh in ~/.profile.d , if we want to write in POSIX sh , with some slightly different syntax:

# Load any supplementary scripts
for config in "$HOME"/.profile.d/*.sh ; do
    . "$config"
done
unset -v config

Another advantage of this method is that if you have your dotfiles under version control, you can arrange to add extra snippets on a per-machine basis unversioned, without having to update your .bashrc file.

Here's my implementation of the above method, for both .bashrc and .profile :

Thanks to commenter oylenshpeegul for correcting the syntax of the loops.

[Oct 31, 2017] Better Bash history by Tom Ryder

Feb 21, 2012 | sanctum.geek.nz

By default, the Bash shell keeps the history of your most recent session in the .bash_history file, and the commands you've issued in your current session are also available with a history call. These defaults are useful for keeping track of what you've been up to in the shell on any given machine, but with disks much larger and faster than they were when Bash was designed, a little tweaking in your .bashrc file can record history more permanently, consistently, and usefully.

Append history instead of rewriting it

You should start by setting the histappend option, which will mean that when you close a session, your history will be appended to the .bash_history file rather than overwriting what's in there.

shopt -s histappend
Allow a larger history file

The default maximum number of commands saved into the .bash_history file is a rather meager 500. If you want to keep history further back than a few weeks or so, you may as well bump this up by explicitly setting $HISTSIZE to a much larger number in your .bashrc . We can do the same thing with the $HISTFILESIZE variable.

HISTFILESIZE=1000000
HISTSIZE=1000000

The man page for Bash says that HISTFILESIZE can be unset to stop truncation entirely, but unfortunately this doesn't work in .bashrc files due to the order in which variables are set; it's therefore more straightforward to simply set it to a very large number.

If you're on a machine with resource constraints, it might be a good idea to occasionally archive old .bash_history files to speed up login and reduce memory footprint.

Don't store specific lines

You can prevent commands that start with a space from going into history by setting $HISTCONTROL to ignorespace . You can also ignore duplicate commands, for example repeated du calls to watch a file grow, by adding ignoredups . There's a shorthand to set both in ignoreboth .

HISTCONTROL=ignoreboth

You might also want to remove the use of certain commands from your history, whether for privacy or readability reasons. This can be done with the $HISTIGNORE variable. It's common to use this to exclude ls calls, job control builtins like bg and fg , and calls to history itself:

HISTIGNORE='ls:bg:fg:history'
Record timestamps

If you set $HISTTIMEFORMAT to something useful, Bash will record the timestamp of each command in its history. In this variable you can specify the format in which you want this timestamp displayed when viewed with history . I find the full date and time to be useful, because it can be sorted easily and works well with tools like cut and awk .

HISTTIMEFORMAT='%F %T '
Use one command per line

To make your .bash_history file a little easier to parse, you can force commands that you entered on more than one line to be adjusted to fit on only one with the cmdhist option:

shopt -s cmdhist
Store history immediately

By default, Bash only records a session to the .bash_history file on disk when the session terminates. This means that if you crash or your session terminates improperly, you lose the history up to that point. You can fix this by recording each line of history as you issue it, through the $PROMPT_COMMAND variable:

PROMPT_COMMAND='history -a'
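
Pulling these settings together, the relevant block of a .bashrc might look like the following (the values are the ones discussed above; adjust to taste):

# History tweaks, as discussed above
shopt -s histappend cmdhist
HISTFILESIZE=1000000
HISTSIZE=1000000
HISTCONTROL=ignoreboth
HISTIGNORE='ls:bg:fg:history'
HISTTIMEFORMAT='%F %T '
PROMPT_COMMAND='history -a'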

[Oct 31, 2017] Bash history expansion by Tom Ryder

Notable quotes:
"... Thanks to commenter Mihai Maruseac for pointing out a bug in the examples. ..."
Aug 16, 2012 | sanctum.geek.nz

Setting the Bash option histexpand allows some convenient typing shortcuts using Bash history expansion . The option can be set with either of these:

$ set -H
$ set -o histexpand

It's likely that this option is already set for all interactive shells, as it's on by default. The manual, man bash , describes these features as follows:

-H  Enable ! style history substitution. This option is on
    by default when the shell is interactive.

You may have come across this before, perhaps to your annoyance, in the following error message that comes up whenever ! is used in a double-quoted string, or without being escaped with a backslash:

$ echo "Hi, this is Tom!"
bash: !": event not found

If you don't want the feature and thereby make ! into a normal character, it can be disabled with either of these:

$ set +H
$ set +o histexpand

History expansion is actually a very old feature of shells, having been available in csh before Bash usage became common.

This article is a good followup to Better Bash history , which among other things explains how to include dates and times in history output, as these examples do.

Basic history expansion

Perhaps the best known and most useful of these expansions is using !! to refer to the previous command. This allows repeating commands quickly, perhaps to monitor the progress of a long process, such as disk space being freed while deleting a large file:

$ rm big_file &
[1] 23608
$ du -sh .
3.9G    .
$ !!
du -sh .
3.3G    .

It can also be useful to specify the full filesystem path to programs that aren't in your $PATH :

$ hdparm
-bash: hdparm: command not found
$ /sbin/!!
/sbin/hdparm

In each case, note that the command itself is printed as expanded, and then run to print the output on the following line.

History by absolute index

However, !! is actually a specific example of a more general form of history expansion. For example, you can supply the history item number of a specific command to repeat it, after looking it up with history :

$ history | grep expand
 3951  2012-08-16 15:58:53  set -o histexpand
$ !3951
set -o histexpand

You needn't enter the !3951 on a line by itself; it can be included as any part of the command, for example to add a prefix like sudo :

$ sudo !3850

If you include the escape sequence \! as part of your Bash prompt , you can include the current command number in the prompt before the command, making repeating commands by index a lot easier as long as they're still visible on the screen; one possible layout is sketched below.
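
For example (the layout here is purely illustrative; \! expands to the history number the command being entered will receive):

PS1='[\!] \u@\h:\w\$ '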

History by relative index

It's also possible to refer to commands relative to the current command. To substitute the second-to-last command, we can type !-2 . For example, to check whether truncating a file with ed worked correctly:

$ wc -l bigfile.txt
267 bigfile.txt
$ printf '%s\n' '11,$d' w | ed -s bigfile.txt
$ !-2
wc -l bigfile.txt
10 bigfile.txt

This works further back into history, with !-3 , !-4 , and so on.

Expanding for historical arguments

In each of the above cases, we're substituting for the whole command line. There are also ways to get specific tokens, or words , from the command if we want that. To get the first argument of a particular command in the history, use the !^ token:

$ touch a.txt b.txt c.txt
$ ls !^
ls a.txt
a.txt

To get the last argument, use !$ :

$ touch a.txt b.txt c.txt
$ ls !$
ls c.txt
c.txt

To get all arguments (but not the command itself), use !* :

$ touch a.txt b.txt c.txt
$ ls !*
ls a.txt b.txt c.txt
a.txt  b.txt  c.txt

This last one is particularly handy when performing several operations on a group of files; we could run du and wc over them to get their size and character count, and then perhaps decide to delete them based on the output:

$ du a.txt b.txt c.txt
4164    a.txt
5184    b.txt
8356    c.txt
$ wc !*
wc a.txt b.txt c.txt
16689    94038  4250112 a.txt
20749   117100  5294592 b.txt
33190   188557  8539136 c.txt
70628   399695 18083840 total
$ rm !*
rm a.txt b.txt c.txt

These work not just for the preceding command in history, but also absolute and relative command numbers:

$ history 3
 3989  2012-08-16 16:30:59  wc -l b.txt
 3990  2012-08-16 16:31:05  du -sh c.txt
 3991  2012-08-16 16:31:12  history 3
$ echo !3989^
echo -l
-l
$ echo !3990$
echo c.txt
c.txt
$ echo !-1*
echo c.txt
c.txt

More generally, you can use the syntax !n:w to refer to any specific argument in a history item by number. In this case, the first word, usually a command or builtin, is word 0 :

$ history | grep bash
 4073  2012-08-16 20:24:53  man bash
$ !4073:0
man
What manual page do you want?
$ !4073:1
bash

You can even select ranges of words by separating their indices with a hyphen:

$ history | grep apt-get
 3663  2012-08-15 17:01:30  sudo apt-get install gnome
$ !3663:0-1 purge !3663:3
sudo apt-get purge gnome

You can include ^ and $ as start and endpoints for these ranges, too. 3* is a shorthand for 3-$ , meaning "all arguments from the third to the last."

Expanding history by string

You can also refer to a previous command in the history that starts with a specific string with the syntax !string :

$ !echo
echo c.txt
c.txt
$ !history
history 3
 4011  2012-08-16 16:38:28  rm a.txt b.txt c.txt
 4012  2012-08-16 16:42:48  echo c.txt
 4013  2012-08-16 16:42:51  history 3

If you want to match any part of the command line, not just the start, you can use !?string? :

$ !?bash?
man bash

Be careful when using these, if you use them at all. By default it will run the most recent command matching the string immediately , with no prompting, so it might be a problem if it doesn't match the command you expect.

Checking history expansions before running

If you're paranoid about this, Bash allows you to audit the command as expanded before you enter it, with the histverify option:

$ shopt -s histverify
$ !rm
$ rm a.txt b.txt c.txt

This option works for any history expansion, and may be a good choice for more cautious administrators. It's a good thing to add to one's .bashrc if so.

If you don't need this set all the time, but you do have reservations at some point about running a history command, you can arrange to print the command without running it by adding a :p suffix:

$ !rm:p
rm important-file

In this instance, the command was expanded, but thankfully not actually run.

Substituting strings in history expansions

To get really in-depth, you can also perform substitutions on arbitrary commands from the history with !!:gs/pattern/replacement/ . This is getting pretty baroque even for Bash, but it's possible you may find it useful at some point:

$ !!:gs/txt/mp3/
rm a.mp3 b.mp3 c.mp3

If you only want to replace the first occurrence, you can omit the g :

$ !!:s/txt/mp3/
rm a.mp3 b.txt c.txt
Stripping leading directories or trailing files

If you want to chop a filename off a long argument to work with the directory, you can do this by adding an :h suffix, kind of like a dirname call in Perl:

$ du -sh /home/tom/work/doc.txt
$ cd !$:h
cd /home/tom/work

To do the opposite, like a basename call in Perl, use :t :

$ ls /home/tom/work/doc.txt
$ document=!$:t
document=doc.txt
Stripping extensions or base names

A bit more esoteric, but still possibly useful; to strip a file's extension, use :r :

$ vi /home/tom/work/doc.txt
$ stripext=!$:r
stripext=/home/tom/work/doc

To do the opposite, to get only the extension, use :e :

$ vi /home/tom/work/doc.txt
$ extonly=!$:e
extonly=.txt
Quoting history

If you're performing substitution not to execute a command or fragment but to use it as a string, it's likely you'll want to quote it. For example, if you've just found through experiment and trial and error an ideal ffmpeg command line to accomplish some task, you might want to save it for later use by writing it to a script:

$ ffmpeg -f alsa -ac 2 -i hw:0,0 -f x11grab -r 30 -s 1600x900 \
> -i :0.0+1600,0 -acodec pcm_s16le -vcodec libx264 -preset ultrafast \
> -crf 0 -threads 0 "$(date +%Y%m%d%H%M%S)".mkv

To make sure all the escaping is done correctly, you can write the command into the file with the :q modifier:

$ echo '#!/usr/bin/env bash' >ffmpeg.sh
$ echo !ffmpeg:q >>ffmpeg.sh

In this case, this will prevent Bash from executing the command expansion "$(date ... )" , instead writing it literally to the file as desired. If you build a lot of complex commands interactively that you later write to scripts once completed, this feature is really helpful and saves a lot of cutting and pasting.

Thanks to commenter Mihai Maruseac for pointing out a bug in the examples.

[Oct 31, 2017] Prompt directory shortening by Tom Ryder

Notable quotes:
"... If you're using Bash version 4.0 or above ( bash --version ), you can save a bit of terminal space by setting the PROMPT_DIRTRIM variable for the shell. This limits the length of the tail end of the \w and \W expansions to that number of path elements: ..."
Nov 07, 2014 | sanctum.geek.nz

The common default of some variant of \h:\w\$ for a Bash prompt PS1 string includes the \w escape character, so that the user's current working directory appears in the prompt, but with $HOME shortened to a tilde:

tom@sanctum:~$
tom@sanctum:~/Documents$
tom@sanctum:/usr/local/nagios$

This is normally very helpful, particularly if you leave your shell for a time and forget where you are, though of course you can always call the pwd shell builtin. However it can get annoying for very deep directory hierarchies, particularly if you're using a smaller terminal window:

tom@sanctum:/chroot/apache/usr/local/perl/app-library/lib/App/Library/Class:~$

If you're using Bash version 4.0 or above ( bash --version ), you can save a bit of terminal space by setting the PROMPT_DIRTRIM variable for the shell. This limits the length of the tail end of the \w and \W expansions to that number of path elements:

tom@sanctum:/chroot/apache/usr/local/app-library/lib/App/Library/Class$ PROMPT_DIRTRIM=3
tom@sanctum:.../App/Library/Class$

This is a good thing to include in your ~/.bashrc file if you often find yourself deep in directory trees where the upper end of the hierarchy isn't of immediate interest to you. You can remove the effect again by unsetting the variable:

tom@sanctum:.../App/Library/Class$ unset PROMPT_DIRTRIM
tom@sanctum:/chroot/apache/usr/local/app-library/lib/App/Library/Class$

[Oct 25, 2017] How to modify scripts behavior on signals using bash traps - LinuxConfig.org

Oct 25, 2017 | linuxconfig.org

Trap syntax is very simple and easy to understand: first we must call the trap builtin, followed by the action(s) to be executed, then we must specify the signal(s) we want to react to:

trap [-lp] [[arg] sigspec]
Let's see what the possible trap options are for.

When used with the -l flag, the trap command will just display a list of signals associated with their numbers. It's the same output you can obtain running the kill -l command:

$ trap -l
1) SIGHUP        2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
6) SIGABRT       7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX
It's really important to note that you can react only to signals that allow the script to respond: the SIGKILL and SIGSTOP signals cannot be caught, blocked or ignored.

Apart from signals, traps can also react to some pseudo-signals such as EXIT, ERR or DEBUG, but we will see them in detail later. For now just remember that a signal can be specified either by its number or by its name, even without the SIG prefix.
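
For instance, these three lines all set the same trap (a trivial illustration of the equivalent forms):

trap 'echo "interrupted"' 2        # by number
trap 'echo "interrupted"' INT      # by name, without the SIG prefix
trap 'echo "interrupted"' SIGINT   # by full name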

Now for the -p option. This option makes sense only when a command is not provided (otherwise it will produce an error). When trap is used with it, a list of the previously set traps will be displayed. If the signal name or number is specified, only the trap set for that specific signal will be displayed; otherwise no distinctions will be made, and all the traps will be displayed:

$ trap 'echo "SIGINT caught!"' SIGINT
We set a trap to catch the SIGINT signal: it will just display the "SIGINT caught!" message onscreen when the given signal is received by the shell. If we now use trap with the -p option, it will display the trap we just defined:
$ trap -p
trap -- 'echo "SIGINT caught!"' SIGINT
By the way, the trap is now "active", so if we send a SIGINT signal, either using the kill command, or with the CTRL-c shortcut, the associated command in the trap will be executed (^C is just printed because of the key combination):
^CSIGINT caught!
Trap in action

We will now write a simple script to show trap in action; here it is:
#!/usr/bin/env bash
#
# A simple script to demonstrate how trap works
#
set -e
set -u
set -o pipefail

trap 'echo "signal caught, cleaning..."; rm -i linux_tarball.tar.xz' SIGINT SIGTERM

echo "Downloading tarball..."
wget -O linux_tarball.tar.xz https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.13.5.tar.xz &> /dev/null
The above script just tries to download the latest Linux kernel tarball, using wget , into the directory from which it is launched. During the task, if the SIGINT or SIGTERM signals are received (notice how you can specify more than one signal on the same line), the partially downloaded file will be deleted.

In this case there are actually two commands, separated by a semicolon: the first is the echo , which prints the message onscreen, and the second is the actual rm command (we provided the -i option to it, so it will ask for user confirmation before removing). Instead of specifying commands this way, you can also call a function, which gives you more re-usability; a sketch of that follows below. Notice that if you don't provide any command, the signal(s) will just be ignored!
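
A minimal sketch of the same cleanup written as a function (the function name is illustrative):

cleanup() {
    echo "signal caught, cleaning..."
    rm -i linux_tarball.tar.xz
}
trap cleanup SIGINT SIGTERM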

This is the output of the script above when it receives a SIGINT signal:

$ ./fetchlinux.sh
Downloading tarball...
^Csignal caught, cleaning...
rm: remove regular file 'linux_tarball.tar.xz'?
A very important thing to remember is that when a script is terminated by a signal, like above, its exit status will be 128 + the signal number . As you can see, the script above, being terminated by a SIGINT (signal number 2), has an exit status of 130 :
$ echo $?
130
Lastly, you can disable a trap just by calling trap followed by the - sign, followed by the signal(s) name or number:
trap - SIGINT SIGTERM
The signals will take back the value they had upon entrance to the shell.

Pseudo-signals

As already mentioned above, trap can be set not only for signals that allow the script to respond, but also for what we can call "pseudo-signals". They are not technically signals, but correspond to certain situations that can be specified:

  • EXIT -- when EXIT is specified in a trap, the command of the trap will be executed on exit from the shell.
  • ERR -- this will cause the argument of the trap to be executed when a command returns a non-zero exit status, with some exceptions (the same as for the shell errexit option): the command must not be part of a while or until loop; it must not be part of an if construct, nor part of a && or || list; and its value must not be inverted by using the ! operator.
  • DEBUG -- this will cause the argument of the trap to be executed before every simple command, for , case or select command, and before the first command in shell functions.
  • RETURN -- the argument of the trap is executed after a function, or a script sourced by using source or the . command, finishes executing.
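
As a minimal sketch of the EXIT pseudo-signal in use (the temporary-file handling is just an example):

#!/usr/bin/env bash
# The EXIT trap fires on any exit from the script, normal or not,
# which makes it a natural place for cleanup code.
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT
echo "working in $tmpfile"
# ... work with "$tmpfile" here; the trap removes it on exit ...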

[Oct 20, 2017] Simple logical operators in Bash - Stack Overflow

Notable quotes:
"... Backquotes ( ` ` ) are old-style form of command substitution, with some differences: in this form, backslash retains its literal meaning except when followed by $ , ` , or \ , and the first backquote not preceded by a backslash terminates the command substitution; whereas in the $( ) form, all characters between the parentheses make up the command, none are treated specially. ..."
"... Double square brackets delimit a Conditional Expression. And, I find the following to be a good reading on the subject: "(IBM) Demystify test, [, [[, ((, and if-then-else" ..."
Oct 20, 2017 | stackoverflow.com

Amit , Jun 7, 2011 at 19:18

I have a couple of variables and I want to check the following condition (written out in words, then my failed attempt at bash scripting):
if varA EQUALS 1 AND ( varB EQUALS "t1" OR varB EQUALS "t2" ) then 

do something

done.

And in my failed attempt, I came up with:

if (($varA == 1)) && ( (($varB == "t1")) || (($varC == "t2")) ); 
  then
    scale=0.05
  fi

Best answer Gilles

What you've written actually almost works (it would work if all the variables were numbers), but it's not an idiomatic way at all.

This is the idiomatic way to write your test in bash:

if [[ $varA = 1 && ($varB = "t1" || $varC = "t2") ]]; then

If you need portability to other shells, this would be the way (note the additional quoting and the separate sets of brackets around each individual test):

if [ "$varA" = 1 ] && { [ "$varB" = "t1" ] || [ "$varC" = "t2" ]; }; then

Will Sheppard , Jun 19, 2014 at 11:07

It's better to use == to differentiate the comparison from assigning a variable (which is also = ) – Will Sheppard Jun 19 '14 at 11:07

Cbhihe , Apr 3, 2016 at 8:05

+1 @WillSheppard for yr reminder of proper style. Gilles, don't you need a semicolon after yr closing curly bracket and before "then" ? I always thought if , then , else and fi could not be on the same line... As in:

if [ "$varA" = 1 ] && { [ "$varB" = "t1" ] || [ "$varC" = "t2" ]; }; then

– Cbhihe Apr 3 '16 at 8:05

Rockallite , Jan 19 at 2:41

Backquotes ( ` ` ) are old-style form of command substitution, with some differences: in this form, backslash retains its literal meaning except when followed by $ , ` , or \ , and the first backquote not preceded by a backslash terminates the command substitution; whereas in the $( ) form, all characters between the parentheses make up the command, none are treated specially.

– Rockallite Jan 19 at 2:41

Peter A. Schneider , Aug 28 at 13:16

You could emphasize that single brackets have completely different semantics inside and outside of double brackets. (Because you start with explicitly pointing out the subshell semantics but then only as an aside mention the grouping semantics as part of conditional expressions. Was confusing to me for a second when I looked at your idiomatic example.) – Peter A. Schneider Aug 28 at 13:16

matchew , Jun 7, 2011 at 19:29

very close
if (( $varA == 1 )) && [[ $varB == 't1' || $varC == 't2' ]]; 
  then 
    scale=0.05
  fi

should work.

breaking it down

(( $varA == 1 ))

is an integer comparison where as

$varB == 't1'

is a string comparison. otherwise, I am just grouping the comparisons correctly.

Double square brackets delimit a Conditional Expression. And, I find the following to be a good reading on the subject: "(IBM) Demystify test, [, [[, ((, and if-then-else"

Peter A. Schneider , Aug 28 at 13:21

Just to be sure: The quoting in 't1' is unnecessary, right? Because as opposed to arithmetic instructions in double parentheses, where t1 would be a variable, t1 in a conditional expression in double brackets is just a literal string.

I.e., [[ $varB == 't1' ]] is exactly the same as [[ $varB == t1 ]] , right? – Peter A. Schneider Aug 28 at 13:21

[Oct 20, 2017] shell script - OR in `expr match`

Notable quotes:
"... ...and if you weren't targeting a known/fixed operating system, using case rather than a regex match is very much the better practice, since the accepted answer depends on behavior POSIX doesn't define. ..."
"... Regular expression syntax, including the use of backquoting, is different for different tools. Always look it up. ..."
Oct 20, 2017 | unix.stackexchange.com

OR in `expr match`

stracktracer , Dec 14, 2015 at 13:54

I'm confused as to why this does not match:

expr match Unauthenticated123 '^(Unauthenticated|Authenticated).*'

it outputs 0.

Charles Duffy , Dec 14, 2015 at 18:22

As an aside, if you were using bash for this, the preferred alternative would be the =~ operator in [[ ]] , ie. [[ Unauthenticated123 =~ ^(Unauthenticated|Authenticated) ]]Charles Duffy Dec 14 '15 at 18:22

Charles Duffy , Dec 14, 2015 at 18:25

...and if you weren't targeting a known/fixed operating system, using case rather than a regex match is very much the better practice, since the accepted answer depends on behavior POSIX doesn't define. Charles Duffy Dec 14 '15 at 18:25

Gilles , Dec 14, 2015 at 23:43

See Why does my regular expression work in X but not in Y?Gilles Dec 14 '15 at 23:43

Lambert , Dec 14, 2015 at 14:04

Your command should be:
expr match Unauthenticated123 'Unauthenticated\|Authenticated'

If you want the number of characters matched.

To have the part of the string (Unauthenticated) returned use:

expr match Unauthenticated123 '\(Unauthenticated\|Authenticated\)'

From info coreutils 'expr invocation' :

`STRING : REGEX' Perform pattern matching. The arguments are converted to strings and the second is considered to be a (basic, a la GNU `grep') regular expression, with a `^' implicitly prepended. The first argument is then matched against this regular expression.

 If the match succeeds and REGEX uses `\(' and `\)', the `:'
 expression returns the part of STRING that matched the
 subexpression; otherwise, it returns the number of characters
 matched.

 If the match fails, the `:' operator returns the null string if
 `\(' and `\)' are used in REGEX, otherwise 0.

 Only the first `\( ... \)' pair is relevant to the return value;
 additional pairs are meaningful only for grouping the regular
 expression operators.

 In the regular expression, `\+', `\?', and `\|' are operators
 which respectively match one or more, zero or one, or separate
 alternatives.  SunOS and other `expr''s treat these as regular
 characters.  (POSIX allows either behavior.)  *Note Regular
 Expression Library: (regex)Top, for details of regular expression
 syntax.  Some examples are in *note Examples of expr::.

stracktracer , Dec 14, 2015 at 14:18

Thanks, escaping the | worked. Weird, normally I'd expect that if I wanted to match the literal |... – stracktracer Dec 14 '15 at 14:18

reinierpost , Dec 14, 2015 at 15:34

Regular expression syntax, including the use of backquoting, is different for different tools. Always look it up.reinierpost Dec 14 '15 at 15:34

Stéphane Chazelas , Dec 14, 2015 at 14:49

Note that both match and \| are GNU extensions (and the behaviour for : (the match standard equivalent) when the pattern starts with ^ varies with implementations). Standardly, you'd do:
expr " $string" : " Authenticated" '|' " $string" : " Unauthenticated"

The leading space is to avoid problems with values of $string that start with - or are expr operators, but that means it adds one to the number of characters being matched.

With GNU expr , you'd write it:

expr + "$string" : 'Authenticated\|Unauthenticated'

The + forces $string to be taken as a string even if it happens to be a expr operator. expr regular expressions are basic regular expressions which don't have an alternation operator (and where | is not special). The GNU implementation has it as \| though as an extension.

If all you want is to check whether $string starts with Authenticated or Unauthenticated , you'd better use:

case $string in
  (Authenticated* | Unauthenticated*) do-something
esac

netmonk , Dec 14, 2015 at 14:06

$ expr match "Unauthenticated123" '^\(Unauthenticated\|Authenticated\).*' you have to escape with \ the parenthesis and the pipe.

mikeserv , Dec 14, 2015 at 14:18

and the ^ may not mean what some would think depending on the expr . it is implied anyway. – mikeserv Dec 14 '15 at 14:18

Stéphane Chazelas , Dec 14, 2015 at 14:34

@mikeserv, match and \| are GNU extensions anyway. This Q&A seems to be about GNU expr anyway (where ^ is guaranteed to mean match at the beginning of the string ). – Stéphane Chazelas Dec 14 '15 at 14:34

mikeserv , Dec 14, 2015 at 14:49

@StéphaneChazelas - i didn't know they were strictly GNU. i think i remember them being explicitly officially unspecified - but i don't use expr too often anyway and didn't know that. thank you. – mikeserv Dec 14 '15 at 14:49

Random832 , Dec 14, 2015 at 16:13

It's not "strictly GNU" - it's present in a number of historical implementations (even System V had it, undocumented, though it didn't have the others like substr/length/index), which is why it's explicitly unspecified. I can't find anything about \| being an extension. – Random832 Dec 14 '15 at 16:13

[Oct 19, 2017] Bash One-Liners bashoneliners.com

Oct 19, 2017 | www.bashoneliners.com
Kill a process running on port 8080
 $ lsof -i :8080 | awk 'NR > 1 {print $2}' | xargs --no-run-if-empty kill

-- by Janos on Sept. 1, 2017, 8:31 p.m.

Make a new folder and cd into it.
 $ mkcd(){ NAME=$1; mkdir -p "$NAME"; cd "$NAME"; }

-- by PrasannaNatarajan on Aug. 3, 2017, 6:49 a.m.

Go up to a particular folder
 $ alias ph='cd ${PWD%/public_html*}/public_html'

-- by Jab2870 on July 18, 2017, 6:07 p.m.

Explanation

I work on a lot of websites and often need to go up to the public_html folder.

This command creates an alias so that however many folders deep I am, I will be taken up to the correct folder.

alias ph='....' : This creates a shortcut so that when command ph is typed, the part between the quotes is executed

cd ... : This changes directory to the directory specified

PWD : This is a global bash variable that contains the current directory

${...%/public_html*} : This removes /public_html and anything after it from the specified string

Finally, /public_html at the end is appended onto the string.

So, to sum up, when ph is run, we ask bash to change the directory to the current working directory with anything after public_html removed.

Open another terminal at current location
 $ $TERMINAL & disown

-- by Jab2870 on July 18, 2017, 3:04 p.m.

Explanation

Opens another terminal window at the current location.

Use Case

I often cd into a directory and decide it would be useful to open another terminal in the same folder, maybe for an editor or something. Previously, I would open the terminal and repeat the cd command.

I have aliased this command to open so I just type open and I get a new terminal already in my desired folder.

The & disown part of the command stops the new terminal from being dependent on the first, meaning that you can still use the first, and if you close the first, the second will remain open.

Limitations

It relies on you having the $TERMINAL environment variable set. If you don't have this set, you could easily change it to something like the following:

gnome-terminal & disown or konsole & disown

Preserve your fingers from cd ..; cd ..; cd..; cd..;
 $ up(){ DEEP=$1; for i in $(seq 1 ${DEEP:-"1"}); do cd ../; done; }

-- by alireza6677 on June 28, 2017, 5:40 p.m.

Generate a sequence of numbers
 $ echo {01..10}

-- by Elkku on March 1, 2015, 12:04 a.m.

Explanation

This example will print:

01 02 03 04 05 06 07 08 09 10

While the original one-liner is indeed IMHO the canonical way to loop over numbers, the brace expansion syntax of Bash 4.x has some kick-ass features such as correct padding of the number with leading zeros.

Limitations

The zero-padding feature works only in Bash >=4.


Related one-liners
Generate a sequence of numbers
 $ for ((i=1; i<=10; ++i)); do echo $i; done

-- by Janos on Nov. 4, 2014, 12:29 p.m.

Explanation

This is similar to seq , but portable. seq does not exist in all systems and is no longer recommended. Other variations to emulate various uses of seq :

# seq 1 2 10
for ((i=1; i<=10; i+=2)); do echo $i; done

# seq -w 5 10
for ((i=5; i<=10; ++i)); do printf '%02d\n' $i; done
Find recent logs that contain the string "Exception"
 $ find . -name '*.log' -mtime -2 -exec grep -Hc Exception {} \; | grep -v :0$

-- by Janos on July 19, 2014, 7:53 a.m.

Explanation

The find :

  • -name '*.log' -- match files ending with .log
  • -mtime -2 -- match files modified within the last 2 days
  • -exec CMD ARGS \; -- for each file found, execute command, where {} in ARGS will be replaced with the file's path

The grep :

  • -c is to print the count of the matches instead of the matches themselves
  • -H is to print the name of the file, as grep normally won't print it when there is only one filename argument
  • The output lines will be in the format path:count . Files that didn't match "Exception" will still be printed, with 0 as count
  • The second grep filters the output of the first, excluding lines that end with :0 (= the files that didn't contain matches)

Extra tips:

  • Change "Exception" to the typical relevant failure indicator of your application
  • Add -i for grep to make the search case insensitive
  • To make the find match strictly only files, add -type f
  • Schedule this as a periodic job, and pipe the output to a mailer, for example | mailx -s 'error counts' yourmail@example.com
Remove offending key from known_hosts file with one swift move
 $ sed -i 18d .ssh/known_hosts

-- by EvaggelosBalaskas on Jan. 16, 2013, 2:29 p.m.

Explanation

Using sed to remove a specific line.

The -i parameter is to edit the file in-place.

Limitations

This works as posted in GNU sed . In BSD sed , the -i flag requires a parameter to use as the suffix of a backup file. You can set it to empty to not use a backup file:
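
For example, on BSD sed (the empty string suppresses the backup file):

$ sed -i '' 18d .ssh/known_hosts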

[Oct 17, 2017] Converting string to lower case in Bash - Stack Overflow

Feb 15, 2010 | stackoverflow.com

assassin , Feb 15, 2010 at 7:02

Is there a way in bash to convert a string into a lower case string?

For example, if I have:

a="Hi all"

I want to convert it to:

"hi all"

ghostdog74 , Feb 15, 2010 at 7:43

There are various ways:

tr
$ echo "$a" | tr '[:upper:]' '[:lower:]'
hi all
AWK
$ echo "$a" | awk '{print tolower($0)}'
hi all
Bash 4.0
$ echo "${a,,}"
hi all
Perl
$ echo "$a" | perl -ne 'print lc'
hi all
Bash
lc(){
    case "$1" in
        [A-Z])
        n=$(printf "%d" "'$1")
        n=$((n+32))
        printf \\$(printf "%o" "$n")
        ;;
        *)
        printf "%s" "$1"
        ;;
    esac
}
word="I Love Bash"
for((i=0;i<${#word};i++))
do
    ch="${word:$i:1}"
    lc "$ch"
done

jangosteve , Jan 14, 2012 at 21:58

Am I missing something, or does your last example (in Bash) actually do something completely different? It works for "ABX", but if you instead make word="Hi All" like the other examples, it returns ha , not hi all . It only works for the capitalized letters and skips the already-lowercased letters. – jangosteve Jan 14 '12 at 21:58

Richard Hansen , Feb 3, 2012 at 18:55

Note that only the tr and awk examples are specified in the POSIX standard. – Richard Hansen Feb 3 '12 at 18:55

Richard Hansen , Feb 3, 2012 at 18:58

tr '[:upper:]' '[:lower:]' will use the current locale to determine uppercase/lowercase equivalents, so it'll work with locales that use letters with diacritical marks. – Richard Hansen Feb 3 '12 at 18:58

Adam Parkin , Sep 25, 2012 at 18:01

How does one get the output into a new variable? Ie say I want the lowercased string into a new variable? – Adam Parkin Sep 25 '12 at 18:01

Tino , Nov 14, 2012 at 15:39

@Adam: b="$(echo $a | tr '[A-Z]' '[a-z]')" – Tino Nov 14 '12 at 15:39

Dennis Williamson , Feb 15, 2010 at 10:31

In Bash 4:

To lowercase

$ string="A FEW WORDS"
$ echo "${string,}"
a FEW WORDS
$ echo "${string,,}"
a few words
$ echo "${string,,[AEIUO]}"
a FeW WoRDS

$ string="A Few Words"
$ declare -l string
$ string=$string; echo "$string"
a few words

To uppercase

$ string="a few words"
$ echo "${string^}"
A few words
$ echo "${string^^}"
A FEW WORDS
$ echo "${string^^[aeiou]}"
A fEw wOrds

$ string="A Few Words"
$ declare -u string
$ string=$string; echo "$string"
A FEW WORDS

Toggle (undocumented, but optionally configurable at compile time)

$ string="A Few Words"
$ echo "${string~~}"
a fEW wORDS
$ string="A FEW WORDS"
$ echo "${string~}"
a FEW WORDS
$ string="a few words"
$ echo "${string~}"
A few words

Capitalize (undocumented, but optionally configurable at compile time)

$ string="a few words"
$ declare -c string
$ string=$string
$ echo "$string"
A few words

Title case:

$ string="a few words"
$ string=($string)
$ string="${string[@]^}"
$ echo "$string"
A Few Words

$ declare -c string
$ string=(a few words)
$ echo "${string[@]}"
A Few Words

$ string="a FeW WOrdS"
$ string=${string,,}
$ string=${string~}
$ echo "$string"

To turn off a declare attribute, use + . For example, declare +c string . This affects subsequent assignments and not the current value.

The declare options change the attribute of the variable, but not the contents. The reassignments in my examples update the contents to show the changes.

Edit:

Added "toggle first character by word" ( ${var~} ) as suggested by ghostdog74

Edit: Corrected tilde behavior to match Bash 4.3.

ghostdog74 , Feb 15, 2010 at 10:52

there's also ${string~} – ghostdog74 Feb 15 '10 at 10:52

Hubert Kario , Jul 12, 2012 at 16:48

Quite bizzare, "^^" and ",," operators don't work on non-ASCII characters but "~~" does... So string="łσdź"; echo ${string~~} will return "ŁΣDŹ", but echo ${string^^} returns "łσDź". Even in LC_ALL=pl_PL.utf-8 . That's using bash 4.2.24. – Hubert Kario Jul 12 '12 at 16:48

Dennis Williamson , Jul 12, 2012 at 18:20

@HubertKario: That's weird. It's the same for me in Bash 4.0.33 with the same string in en_US.UTF-8 . It's a bug and I've reported it. – Dennis Williamson Jul 12 '12 at 18:20

Dennis Williamson , Jul 13, 2012 at 0:44

@HubertKario: Try echo "$string" | tr '[:lower:]' '[:upper:]' . It will probably exhibit the same failure. So the problem is at least partly not Bash's. – Dennis Williamson Jul 13 '12 at 0:44

Dennis Williamson , Jul 14, 2012 at 14:27

@HubertKario: The Bash maintainer has acknowledged the bug and stated that it will be fixed in the next release. – Dennis Williamson Jul 14 '12 at 14:27

shuvalov , Feb 15, 2010 at 7:13

echo "Hi All" | tr "[:upper:]" "[:lower:]"

Richard Hansen , Feb 3, 2012 at 19:00

+1 for not assuming english – Richard Hansen Feb 3 '12 at 19:00

Hubert Kario , Jul 12, 2012 at 16:56

@RichardHansen: tr doesn't work for me for non-ACII characters. I do have correct locale set and locale files generated. Have any idea what could I be doing wrong? – Hubert Kario Jul 12 '12 at 16:56

wasatchwizard , Oct 23, 2014 at 16:42

FYI: This worked on Windows/Msys. Some of the other suggestions did not. – wasatchwizard Oct 23 '14 at 16:42

Ignacio Vazquez-Abrams , Feb 15, 2010 at 7:03

tr :
a="$(tr [A-Z] [a-z] <<< "$a")"
AWK :
{ print tolower($0) }
sed :
y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/

Sandeepan Nath , Feb 2, 2011 at 11:12

+1 a="$(tr [A-Z] [a-z] <<< "$a")" looks easiest to me. I am still a beginner... – Sandeepan Nath Feb 2 '11 at 11:12

Haravikk , Oct 19, 2013 at 12:54

I strongly recommend the sed solution; I've been working in an environment that for some reason doesn't have tr but I've yet to find a system without sed , plus a lot of the time I want to do this I've just done something else in sed anyway so can chain the commands together into a single (long) statement. – Haravikk Oct 19 '13 at 12:54

Dennis , Nov 6, 2013 at 19:49

The bracket expressions should be quoted. In tr [A-Z] [a-z] A , the shell may perform filename expansion if there are filenames consisting of a single letter or nullgob is set. tr "[A-Z]" "[a-z]" A will behave properly. – Dennis Nov 6 '13 at 19:49

Haravikk , Jun 15, 2014 at 10:51

@CamiloMartin it's a BusyBox system where I'm having that problem, specifically Synology NASes, but I've encountered it on a few other systems too. I've been doing a lot of cross-platform shell scripting lately, and with the requirement that nothing extra be installed it makes things very tricky! However I've yet to encounter a system without sed – Haravikk Jun 15 '14 at 10:51

fuz , Jan 31, 2016 at 14:54

Note that tr [A-Z] [a-z] is incorrect in almost all locales. for example, in the en-US locale, A-Z is actually the interval AaBbCcDdEeFfGgHh...XxYyZ . – fuz Jan 31 '16 at 14:54

nettux443 , May 14, 2014 at 9:36

I know this is an oldish post but I made this answer for another site so I thought I'd post it up here:

UPPER -> lower : use python:

b=`echo "print '$a'.lower()" | python`

Or Ruby:

b=`echo "print '$a'.downcase" | ruby`

Or Perl (probably my favorite):

b=`perl -e "print lc('$a');"`

Or PHP:

b=`php -r "print strtolower('$a');"`

Or Awk:

b=`echo "$a" | awk '{ print tolower($1) }'`

Or Sed:

b=`echo "$a" | sed 's/./\L&/g'`

Or Bash 4:

b=${a,,}

Or NodeJS if you have it (and are a bit nuts...):

b=`echo "console.log('$a'.toLowerCase());" | node`

You could also use dd (but I wouldn't!):

b=`echo "$a" | dd  conv=lcase 2> /dev/null`

lower -> UPPER

use python:

b=`echo "print '$a'.upper()" | python`

Or Ruby:

b=`echo "print '$a'.upcase" | ruby`

Or Perl (probably my favorite):

b=`perl -e "print uc('$a');"`

Or PHP:

b=`php -r "print strtoupper('$a');"`

Or Awk:

b=`echo "$a" | awk '{ print toupper($1) }'`

Or Sed:

b=`echo "$a" | sed 's/./\U&/g'`

Or Bash 4:

b=${a^^}

Or NodeJS if you have it (and are a bit nuts...):

b=`echo "console.log('$a'.toUpperCase());" | node`

You could also use dd (but I wouldn't!):

b=`echo "$a" | dd  conv=ucase 2> /dev/null`

Also when you say 'shell' I'm assuming you mean bash but if you can use zsh it's as easy as

b=$a:l

for lower case and

b=$a:u

for upper case.

JESii , May 28, 2015 at 21:42

Neither the sed command nor the bash command worked for me. – JESii May 28 '15 at 21:42

nettux443 , Nov 20, 2015 at 14:33

@JESii both work for me upper -> lower and lower-> upper. I'm using sed 4.2.2 and Bash 4.3.42(1) on 64bit Debian Stretch. – nettux443 Nov 20 '15 at 14:33

JESii , Nov 21, 2015 at 17:34

Hi, @nettux443... I just tried the bash operation again and it still fails for me with the error message "bad substitution". I'm on OSX using homebrew's bash: GNU bash, version 4.3.42(1)-release (x86_64-apple-darwin14.5.0) – JESii Nov 21 '15 at 17:34

tripleee , Jan 16, 2016 at 11:45

Do not use! All of the examples which generate a script are extremely brittle; if the value of a contains a single quote, you have not only broken behavior, but a serious security problem. – tripleee Jan 16 '16 at 11:45

Scott Smedley , Jan 27, 2011 at 5:37

In zsh:
echo $a:u

Gotta love zsh!

Scott Smedley , Jan 27, 2011 at 5:39

or $a:l for lower case conversion – Scott Smedley Jan 27 '11 at 5:39

biocyberman , Jul 24, 2015 at 23:26

Add one more case: echo ${(C)a} #Upcase the first char only – biocyberman Jul 24 '15 at 23:26

devnull , Sep 26, 2013 at 15:45

Using GNU sed :
sed 's/.*/\L&/'

Example:

$ foo="Some STRIng";
$ foo=$(echo "$foo" | sed 's/.*/\L&/')
$ echo "$foo"
some string

technosaurus , Jan 21, 2012 at 10:27

For a standard shell (without bashisms) using only builtins:
uppers=ABCDEFGHIJKLMNOPQRSTUVWXYZ
lowers=abcdefghijklmnopqrstuvwxyz

lc(){ #usage: lc "SOME STRING" -> "some string"
    i=0
    while ([ $i -lt ${#1} ]) do
        CUR=${1:$i:1}
        case $uppers in
            *$CUR*)CUR=${uppers%$CUR*};OUTPUT="${OUTPUT}${lowers:${#CUR}:1}";;
            *)OUTPUT="${OUTPUT}$CUR";;
        esac
        i=$((i+1))
    done
    echo "${OUTPUT}"
}

And for upper case:

uc(){ #usage: uc "some string" -> "SOME STRING"
    i=0
    while ([ $i -lt ${#1} ]) do
        CUR=${1:$i:1}
        case $lowers in
            *$CUR*)CUR=${lowers%$CUR*};OUTPUT="${OUTPUT}${uppers:${#CUR}:1}";;
            *)OUTPUT="${OUTPUT}$CUR";;
        esac
        i=$((i+1))
    done
    echo "${OUTPUT}"
}

Dereckson , Nov 23, 2014 at 19:52

I wonder if you didn't let some bashism in this script, as it's not portable on FreeBSD sh: ${1:$...}: Bad substitution – Dereckson Nov 23 '14 at 19:52

tripleee , Apr 14, 2015 at 7:09

Indeed; substrings with ${var:1:1} are a Bashism. – tripleee Apr 14 '15 at 7:09

Derek Shaw , Jan 24, 2011 at 13:53

Regular expression

I would like to take credit for the command I wish to share, but the truth is I obtained it for my own use from http://commandlinefu.com . If you cd to any directory within your own home folder, it will change all files and folders there to lower case recursively; please use with caution. It is a brilliant command line fix and especially useful for those multitudes of albums you have stored on your drive.

find . -depth -exec rename 's/(.*)\/([^\/]*)/$1\/\L$2/' {} \;

You can specify a directory in place of the dot(.) after the find which denotes current directory or full path.

I hope this solution proves useful. The one thing this command does not do is replace spaces with underscores - oh well, another time perhaps.

Wadih M. , Nov 29, 2011 at 1:31

thanks for commandlinefu.com – Wadih M. Nov 29 '11 at 1:31

John Rix , Jun 26, 2013 at 15:58

This didn't work for me for whatever reason, though it looks fine. I did get this to work as an alternative though: find . -exec /bin/bash -c 'mv {} `tr [A-Z] [a-z] <<< {}`' \; – John Rix Jun 26 '13 at 15:58

Tino , Dec 11, 2015 at 16:27

This needs prename from perl : dpkg -S "$(readlink -e /usr/bin/rename)" gives perl: /usr/bin/prename – Tino Dec 11 '15 at 16:27

c4f4t0r , Aug 21, 2013 at 10:21

In bash 4 you can use typeset

Example:

A="HELLO WORLD"
typeset -l A=$A

community wiki, Jan 16, 2016 at 12:26

Pre Bash 4.0

Bash Lower the Case of a string and assign to variable

VARIABLE=$(echo "$VARIABLE" | tr '[:upper:]' '[:lower:]') 

echo "$VARIABLE"

Tino , Dec 11, 2015 at 16:23

No need for echo and pipes: use $(tr '[:upper:]' '[:lower:]' <<<"$VARIABLE") – Tino Dec 11 '15 at 16:23

tripleee , Jan 16, 2016 at 12:28

@Tino The here string is also not portable back to really old versions of Bash; I believe it was introduced in v3. – tripleee Jan 16 '16 at 12:28

Tino , Jan 17, 2016 at 14:28

@tripleee You are right, it was introduced in bash-2.05b - however that's the oldest bash I was able to find on my systems – Tino Jan 17 '16 at 14:28

Bikesh M Annur , Mar 23 at 6:48

You can try this
s="Hello World!" 

echo $s  # Hello World!

a=${s,,}
echo $a  # hello world!

b=${s^^}
echo $b  # HELLO WORLD!

ref : http://wiki.workassis.com/shell-script-convert-text-to-lowercase-and-uppercase/

Orwellophile , Mar 24, 2013 at 13:43

For Bash versions earlier than 4.0, this version should be fastest (as it doesn't fork/exec any commands):
function string.monolithic.tolower
{
   local __word=$1
   local __len=${#__word}
   local __char
   local __octal
   local __decimal
   local __result

   for (( i=0; i<__len; i++ ))
   do
      __char=${__word:$i:1}
      case "$__char" in
         [A-Z] )
            printf -v __decimal '%d' "'$__char"
            printf -v __octal '%03o' $(( $__decimal ^ 0x20 ))
            printf -v __char \\$__octal
            ;;
      esac
      __result+="$__char"
   done
   REPLY="$__result"
}

technosaurus's answer had potential too, although it did run properly for me.

Stephen M. Harris , Mar 22, 2013 at 22:42

If using v4, this is baked-in . If not, here is a simple, widely applicable solution. Other answers (and comments) on this thread were quite helpful in creating the code below.
# Like echo, but converts to lowercase
echolcase () {
    tr [:upper:] [:lower:] <<< "${*}"
}

# Takes one arg by reference (var name) and makes it lowercase
lcase () { 
    eval "${1}"=\'$(echo ${!1//\'/"'\''"} | tr [:upper:] [:lower:] )\'
}

Notes:

JaredTS486 , Dec 23, 2015 at 17:37

In spite of how old this question is, and similar to this answer by technosaurus : I had a hard time finding a solution that was portable across most platforms (that I use) as well as older versions of bash. I have also been frustrated with arrays, functions and use of prints, echos and temporary files to retrieve trivial variables. This works very well for me so far, so I thought I would share. My main testing environments are:
  1. GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
  2. GNU bash, version 3.2.57(1)-release (sparc-sun-solaris2.10)
lcs="abcdefghijklmnopqrstuvwxyz"
ucs="ABCDEFGHIJKLMNOPQRSTUVWXYZ"
input="Change Me To All Capitals"
for (( i=0; i<"${#input}"; i++ )) ; do :
    for (( j=0; j<"${#lcs}"; j++ )) ; do :
        if [[ "${input:$i:1}" == "${lcs:$j:1}" ]] ; then
            input="${input/${input:$i:1}/${ucs:$j:1}}" 
        fi
    done
done

A simple C-style for loop iterates through the strings. For the line below: if you have not seen anything like this before, this is where I learned it. In this case the line checks whether the char ${input:$i:1} (lower case) exists in input and, if so, replaces it with the given char ${ucs:$j:1} (upper case) and stores the result back into input.

input="${input/${input:$i:1}/${ucs:$j:1}}"

Gus Neves , May 16 at 10:04

Many answers here use external programs, which is not really using Bash .

If you know you will have Bash 4 available, you should really just use the ${VAR,,} notation (it is easy and cool). For Bash before 4 (my Mac still uses Bash 3.2, for example), I used the corrected version of @ghostdog74 's answer to create a more portable version.

One you can call as lowercase 'my STRING' to get a lowercase version. I read comments about setting the result to a var, but that is not really portable in Bash , since we can't return strings. Printing it is the best solution, and it is easy to capture with something like var="$(lowercase $str)" .

How this works

The way this works is by getting the ASCII integer representation of each char with printf, then adding 32 for upper-to-lower or subtracting 32 for lower-to-upper, and then using printf again to convert the number back to a char. From 'A' to 'a' there is a difference of 32.

Using printf to explain:

$ printf "%d\n" "'a"
97
$ printf "%d\n" "'A"
65

97 - 65 = 32

And this is the working version with examples.
Please note the comments in the code, as they explain a lot of stuff:

#!/bin/bash

# lowerupper.sh

# Prints the lowercase version of a char
lowercaseChar(){
    case "$1" in
        [A-Z])
            n=$(printf "%d" "'$1")
            n=$((n+32))
            printf \\$(printf "%o" "$n")
            ;;
        *)
            printf "%s" "$1"
            ;;
    esac
}

# Prints the lowercase version of a sequence of strings
lowercase() {
    word="$@"
    for((i=0;i<${#word};i++)); do
        ch="${word:$i:1}"
        lowercaseChar "$ch"
    done
}

# Prints the uppercase version of a char
uppercaseChar(){
    case "$1" in
        [a-z])
            n=$(printf "%d" "'$1")
            n=$((n-32))
            printf \\$(printf "%o" "$n")
            ;;
        *)
            printf "%s" "$1"
            ;;
    esac
}

# Prints the uppercase version of a sequence of strings
uppercase() {
    word="$@"
    for((i=0;i<${#word};i++)); do
        ch="${word:$i:1}"
        uppercaseChar "$ch"
    done
}

# The functions will not add a new line, so use echo or
# append it if you want a new line after printing

# Printing stuff directly
lowercase "I AM the Walrus!"$'\n'
uppercase "I AM the Walrus!"$'\n'

echo "----------"

# Printing a var
str="A StRing WITH mixed sTUFF!"
lowercase "$str"$'\n'
uppercase "$str"$'\n'

echo "----------"

# Not quoting the var should also work, 
# since we use "$@" inside the functions
lowercase $str$'\n'
uppercase $str$'\n'

echo "----------"

# Assigning to a var
myLowerVar="$(lowercase $str)"
myUpperVar="$(uppercase $str)"
echo "myLowerVar: $myLowerVar"
echo "myUpperVar: $myUpperVar"

echo "----------"

# You can even do stuff like
if [[ 'option 2' = "$(lowercase 'OPTION 2')" ]]; then
    echo "Fine! All the same!"
else
    echo "Ops! Not the same!"
fi

exit 0

And the results after running this:

$ ./lowerupper.sh 
i am the walrus!
I AM THE WALRUS!
----------
a string with mixed stuff!
A STRING WITH MIXED STUFF!
----------
a string with mixed stuff!
A STRING WITH MIXED STUFF!
----------
myLowerVar: a string with mixed stuff!
myUpperVar: A STRING WITH MIXED STUFF!
----------
Fine! All the same!

This only works for ASCII characters, though.

For me it is fine, since I know I will only pass ASCII chars to it.
I am using this for some case-insensitive CLI options, for example.

nitinr708 , Jul 8, 2016 at 9:20

To store the transformed string in a variable, the following worked for me (converting $SOURCE_NAME to $TARGET_NAME):
TARGET_NAME="`echo $SOURCE_NAME | tr '[:upper:]' '[:lower:]'`"

[Oct 16, 2017] Indenting Here-Documents - bash Cookbook

Oct 16, 2017 | www.safaribooksonline.com

Indenting Here-Documents

Problem

The here-document is great, but it's messing up your shell script's formatting. You want to be able to indent for readability.

Solution

Use <<- and then you can use tab characters (only!) at the beginning of lines to indent this portion of your shell script.

   $ cat myscript.sh
        ...
             grep $1 <<-'EOF'
                lots of data
                can go here
                it's indented with tabs
                to match the script's indenting
                but the leading tabs are
                discarded when read
                EOF
            ls
        ...
        $
Discussion

The hyphen just after the << is enough to tell bash to ignore the leading tab characters. This is for tab characters only and not arbitrary white space. This is especially important with the EOF or any other marker designation. If you have spaces there, it will not recognize the EOF as your ending marker, and the "here" data will continue through to the end of the file (swallowing the rest of your script). Therefore, you may want to always left-justify the EOF (or other marker) just to be safe, and let the formatting go on this one line.

[Oct 16, 2017] Indenting bourne shell here documents

Oct 16, 2017 | prefetch.net

The Bourne shell provides here documents to allow a block of data to be passed to a process through STDIN. The typical format for a here document is something similar to this:

command <<ARBITRARY_TAG
data to pass 1
data to pass 2
ARBITRARY_TAG

This will send the data between the ARBITRARY_TAG statements to the standard input of the process. In order for this to work, you need to make sure that the data is not indented. If you indent it for readability, you will get a syntax error similar to the following:

./test: line 12: syntax error: unexpected end of file

To allow your here documents to be indented, you can append a "-" to the redirection operator (making it <<-), like so:

if [ "${STRING}" = "SOMETHING" ]
then
        somecommand <<-EOF
        this is a string1
        this is a string2
        this is a string3
        EOF
fi

You will need to use tabs to indent the data, but that is a small price to pay for added readability. Nice!

[Oct 09, 2017] TMOUT - Auto Logout Linux Shell When There Isn't Any Activity by Aaron Kili

Oct 07, 2017 | www.tecmint.com
... ... ..

To enable automatic user logout, we will be using the TMOUT shell variable, which terminates a user's login shell in case there is no activity for a given number of seconds that you can specify.

To enable this globally (system-wide for all users), set the above variable in the /etc/profile shell initialization file.
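
A minimal sketch of what that looks like (the 300-second timeout and the file name are illustrative):

# /etc/profile.d/autologout.sh
TMOUT=300        # seconds of inactivity before an interactive shell exits
readonly TMOUT   # optional: prevent users from unsetting it
export TMOUT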

[Sep 27, 2017] Arithmetic Evaluation

Sep 27, 2017 | mywiki.wooledge.org

Bash has several different ways to say we want to do arithmetic instead of string operations. Let's look at them one by one.

The first way is the let command:

$ unset a; a=4+5
$ echo $a
4+5
$ let a=4+5
$ echo $a
9

You may use spaces, parentheses and so forth, if you quote the expression:

$ let a='(5+2)*3'

For a full list of operators available, see help let or the manual.

Next, the actual arithmetic evaluation compound command syntax:

$ ((a=(5+2)*3))

This is equivalent to let , but we can also use it as a command , for example in an if statement:

$ if (($a == 21)); then echo 'Blackjack!'; fi

Operators such as == , < , > and so on cause a comparison to be performed, inside an arithmetic evaluation. If the comparison is "true" (for example, 10 > 2 is true in arithmetic -- but not in strings!) then the compound command exits with status 0. If the comparison is false, it exits with status 1. This makes it suitable for testing things in a script.
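
For example, the exit status directly reflects the comparison:

$ (( 10 > 2 )); echo $?
0
$ (( 10 < 2 )); echo $?
1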

Although not a compound command, an arithmetic substitution (or arithmetic expression ) syntax is also available:

$ echo "There are $(($rows * $columns)) cells"

Inside $((...)) is an arithmetic context , just like with ((...)) , meaning we do arithmetic (multiplying things) instead of string manipulations (concatenating $rows , space, asterisk, space, $columns ). $((...)) is also portable to the POSIX shell, while ((...)) is not.

Readers who are familiar with the C programming language might wish to know that ((...)) has many C-like features. Among them are the ternary operator:

$ ((abs = (a >= 0) ? a : -a))

and the use of an integer value as a truth value:

$ if ((flag)); then echo "uh oh, our flag is up"; fi

Note that we used variables inside ((...)) without prefixing them with $ -signs. This is a special syntactic shortcut that Bash allows inside arithmetic evaluations and arithmetic expressions.

There is one final thing we must mention about ((flag)) . Because the inside of ((...)) is C-like, a variable (or expression) that evaluates to zero will be considered false for the purposes of the arithmetic evaluation. Then, because the evaluation is false, it will exit with a status of 1. Likewise, if the expression inside ((...)) is non-zero , it will be considered true ; and since the evaluation is true, it will exit with status 0. This is potentially very confusing, even to experts, so you should take some time to think about this. Nevertheless, when things are used the way they're intended, it makes sense in the end:

$ flag=0      # no error
$ while read line; do
>   if [[ $line = *err* ]]; then flag=1; fi
> done < inputfile
$ if ((flag)); then echo "oh no"; fi

[Sep 27, 2017] Integer ASCII value to character in BASH using printf

Sep 27, 2017 | stackoverflow.com

user14070 , asked May 20 '09 at 21:07

Character to value works:
$ printf "%d\n" \'A
65
$

I have two questions, the first one is most important: how do I do the reverse conversion (an integer ASCII value such as 65 back into the character A), and where is the leading-quote syntax ( \'A ) documented?

broaden , answered Nov 18 '09 at 10:10

One line
printf "\x$(printf %x 65)"

Two lines

set $(printf %x 65)
printf "\x$1"

Here is one if you do not mind using awk

awk 'BEGIN{printf "%c", 65}'

mouviciel , answered May 20 '09 at 21:12

For this kind of conversion, I use perl:
perl -e 'printf "%c\n", 65;'

user2350426 , answered Sep 22 '15 at 23:16

This works (with the value in octal):
$ printf '%b' '\101'
A

even for (some: don't go over 7) sequences:

$ printf '%b' '\'{101..107}
ABCDEFG

A general construct that allows (decimal) values in any range is:

$ printf '%b' $(printf '\\%03o' {65..122})
ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz

Or you could use the hex values of the characters:

$ printf '%b' $(printf '\\x%x' {65..122})
ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz

You also could get the character back with xxd (use hexadecimal values):

$ echo "41" | xxd -p -r
A

That is, one action is the reverse of the other:

$ printf "%x" "'A" | xxd -p -r
A

And also works with several hex values at once:

$ echo "41 42 43 44 45 46 47 48 49 4a" | xxd -p -r
ABCDEFGHIJ

or sequences (printf is used here to get hex values):

$ printf '%x' {65..90} | xxd -r -p 
ABCDEFGHIJKLMNOPQRSTUVWXYZ

Or even use awk:

$ echo 65 | awk '{printf("%c",$1)}'
A

even for sequences:

$ seq 65 90 | awk '{printf("%c",$1)}'
ABCDEFGHIJKLMNOPQRSTUVWXYZ

David Hu , answered Dec 1 '11 at 9:43

For your second question, it seems the leading-quote syntax ( \'A ) is specific to printf :

If the leading character is a single-quote or double-quote, the value shall be the numeric value in the underlying codeset of the character following the single-quote or double-quote.

From http://pubs.opengroup.org/onlinepubs/009695399/utilities/printf.html

Naaff , answered May 20 '09 at 21:21

One option is to directly input the character you're interested in using hex or octal notation:
printf "\x41\n"
printf "\101\n"

MagicMercury86 , answered Feb 21 '12 at 22:49

If you want to save the ASCII value of the character (I did this in BASH and it worked):

char="A"

testing=$( printf "%d" "'${char}" )

echo $testing

output: 65

chand , answered Nov 20 '14 at 10:05

Here's yet another way to convert 65 into A (via octal):
help printf  # in Bash
man bash | less -Ip '^[[:blank:]]*printf'

printf "%d\n" '"A'
printf "%d\n" "'A"

printf '%b\n' "$(printf '\%03o' 65)"

To search in man bash for \' use (though futile in this case):

man bash | less -Ip "\\\'"  # press <n> to go through the matches


If you convert 65 to hexadecimal it's 0x41 :

$ echo -e "\x41"
A

[Sep 27, 2017] linux - How to convert DOS-Windows newline (CRLF) to Unix newline in a Bash script

Notable quotes:
"... Technically '1' is your program, b/c awk requires one when given option. ..."

Koran Molovik , asked Apr 10 '10 at 15:03

How can I programmatically (i.e., not using vi ) convert DOS/Windows newlines to Unix?

The dos2unix and unix2dos commands are not available on certain systems. How can I emulate these with commands like sed / awk / tr ?

Jonathan Leffler , answered Apr 10 '10 at 15:13

You can use tr to convert from DOS to Unix; however, you can only do this safely if CR appears in your file only as the first byte of a CRLF byte pair. This is usually the case. You then use:
tr -d '\015' <DOS-file >UNIX-file

Note that the name DOS-file is different from the name UNIX-file ; if you try to use the same name twice, you will end up with no data in the file.
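
If you do want the converted data to end up under the original name, a sketch of the usual workaround (the temporary file name is illustrative):

tr -d '\015' < DOS-file > DOS-file.tmp && mv DOS-file.tmp DOS-file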

You can't do it the other way round (with standard 'tr').

If you know how to enter carriage return into a script ( control-V , control-M to enter control-M), then:

sed 's/^M$//'     # DOS to Unix
sed 's/$/^M/'     # Unix to DOS

where the '^M' is the control-M character. You can also use the bash ANSI-C Quoting mechanism to specify the carriage return:

sed $'s/\r$//'     # DOS to Unix
sed $'s/$/\r/'     # Unix to DOS

However, if you're going to have to do this very often (more than once, roughly speaking), it is far more sensible to install the conversion programs (e.g. dos2unix and unix2dos , or perhaps dtou and utod ) and use them.

ghostdog74 , answered Apr 10 '10 at 15:21

tr -d "\r" < file

take a look here for examples using sed :

# IN UNIX ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format.
sed 's/.$//'               # assumes that all lines end with CR/LF
sed 's/^M$//'              # in bash/tcsh, press Ctrl-V then Ctrl-M
sed 's/\x0D$//'            # works on ssed, gsed 3.02.80 or higher

# IN UNIX ENVIRONMENT: convert Unix newlines (LF) to DOS format.
sed "s/$/`echo -e \\\r`/"            # command line under ksh
sed 's/$'"/`echo \\\r`/"             # command line under bash
sed "s/$/`echo \\\r`/"               # command line under zsh
sed 's/$/\r/'                        # gsed 3.02.80 or higher

Use sed -i for in-place conversion e.g. sed -i 's/..../' file .

Steven Penny , answered Apr 30 '14 at 10:02

Doing this with POSIX is tricky:

To remove carriage returns:

ex -bsc '%!awk "{sub(/\r/,\"\")}1"' -cx file

To add carriage returns:

ex -bsc '%!awk "{sub(/$/,\"\r\")}1"' -cx file

Norman Ramsey , answered Apr 10 '10 at 22:32

This problem can be solved with standard tools, but there are sufficiently many traps for the unwary that I recommend you install the flip command, which was written over 20 years ago by Rahul Dhesi, the author of zoo . It does an excellent job converting file formats while, for example, avoiding the inadvertent destruction of binary files, which is a little too easy if you just race around altering every CRLF you see...

Gordon Davisson , answered Apr 10 '10 at 17:50

The solutions posted so far only deal with part of the problem, converting DOS/Windows' CRLF into Unix's LF; the part they're missing is that DOS uses CRLF as a line separator , while Unix uses LF as a line terminator . The difference is that a DOS file (usually) won't have anything after the last line in the file, while Unix will. To do the conversion properly, you need to add that final LF (unless the file is zero-length, i.e. has no lines in it at all). My favorite incantation for this (with a little added logic to handle Mac-style CR-separated files, and not molest files that're already in unix format) is a bit of perl:
perl -pe 'if ( s/\r\n?/\n/g ) { $f=1 }; if ( $f || ! $m ) { s/([^\n])\z/$1\n/ }; $m=1' PCfile.txt

Note that this sends the Unixified version of the file to stdout. If you want to replace the file with a Unixified version, add perl's -i flag.

codaddict , answered Apr 10 '10 at 15:09

Using AWK you can do:
awk '{ sub("\r$", ""); print }' dos.txt > unix.txt

Using Perl you can do:

perl -pe 's/\r$//' < dos.txt > unix.txt

anatoly techtonik , answered Oct 31 '13 at 9:40

If you don't have access to dos2unix , but can read this page, then you can copy/paste dos2unix.py from here.
#!/usr/bin/env python
"""\
convert dos linefeeds (crlf) to unix (lf)
usage: dos2unix.py <input> <output>
"""
import sys

if len(sys.argv[1:]) != 2:
  sys.exit(__doc__)

content = ''
outsize = 0
with open(sys.argv[1], 'rb') as infile:
  content = infile.read()
with open(sys.argv[2], 'wb') as output:
  for line in content.splitlines():
    outsize += len(line) + 1
    output.write(line + b'\n')  # bytes literal, so the binary-mode write also works under Python 3

print("Done. Saved %s bytes." % (len(content)-outsize))

Cross-posted from superuser .

nawK , answered Sep 4 '14 at 0:16

An even simpler awk solution w/o a program:
awk -v ORS='\r\n' '1' unix.txt > dos.txt

Technically '1' is your program, b/c awk requires one when given option.

UPDATE : After revisiting this page for the first time in a long time I realized that no one has yet posted an internal solution, so here is one:

while IFS= read -r line;
do printf '%s\n' "${line%$'\r'}";
done < dos.txt > unix.txt

Santosh , answered Mar 12 '15 at 22:36

This worked for me
tr "\r" "\n" < sampledata.csv > sampledata2.csv

ThorSummoner , answered Jul 30 '15 at 17:38

Super duper easy with PCRE;

As a script, or replace $@ with your files.

#!/usr/bin/env bash
perl -pi -e 's/\r\n/\n/g' -- $@

This will overwrite your files in place!

I recommend only doing this with a backup (version control or otherwise)

Ashley Raiteri , answered May 19 '14 at 23:25

For Mac OS X, if you have Homebrew installed ( http://brew.sh/ ):
brew install dos2unix

for csv in *.csv; do dos2unix -c mac ${csv}; done;

Make sure you have made copies of the files, as this command will modify the files in place. The -c mac option makes the switch to be compatible with osx.

lzc , answered May 31 '16 at 17:15

TIMTOWTDI!
perl -pe 's/\r\n/\n/; s/([^\n])\z/$1\n/ if eof' PCfile.txt

Based on @GordonDavisson

One must consider the possibility of [noeol] ...

kazmer , answered Nov 6 '16 at 23:30

You can use awk. Set the record separator ( RS ) to a regexp that matches all possible newline character, or characters. And set the output record separator ( ORS ) to the unix-style newline character.
awk 'BEGIN{RS="\r|\n|\r\n|\n\r";ORS="\n"}{print}' windows_or_macos.txt > unix.txt

user829755 , answered Jul 21 at 9:21

Interestingly, in my Git Bash on Windows, sed "" did the trick already:
$ echo -e "abc\r" >tst.txt
$ file tst.txt
tst.txt: ASCII text, with CRLF line terminators
$ sed -i "" tst.txt
$ file tst.txt
tst.txt: ASCII text

My guess is that sed ignores them when reading lines from input and always writes unix line endings on output.

Gannet , answered Jan 24 at 8:38

As an extension to Jonathan Leffler's Unix to DOS solution, to safely convert to DOS when you're unsure of the file's current line endings:
sed '/^M$/! s/$/^M/'

This checks that the line does not already end in CRLF before converting to CRLF.

vmsnomad , answered Jun 23 at 18:37

I just had to ponder that same question (on the Windows side, but it is equally applicable to Linux). Surprisingly, nobody mentioned a highly automated way of doing CRLF<->LF conversion for text files using the good old zip -ll option (Info-ZIP):
zip -ll textfiles-lf.zip files-with-crlf-eol.*
unzip textfiles-lf.zip

NOTE: this would create a zip file preserving the original file names but converting the line endings to LF. Then unzip would extract the files as zip'ed, that is with their original names (but with LF-endings), thus prompting to overwrite the local original files if any.

Relevant excerpt from the zip --help :

zip --help
...
-l   convert LF to CR LF (-ll CR LF to LF)

I tried sed 's/^M$//' file.txt on OSX as well as several other methods ( http://www.thingy-ma-jig.co.uk/blog/25-11-2010/fixing-dos-line-endings or http://hintsforums.macworld.com/archive/index.php/t-125.html ). None worked; the file remained unchanged (btw, Ctrl-V Enter was needed to reproduce ^M). In the end I used TextWrangler. It's not strictly command line, but it works and it doesn't complain.

[Sep 01, 2017] linux - Looping through the content of a file in Bash - Stack Overflow

Notable quotes:
"... done <<< "$(...)" ..."
Sep 01, 2017 | stackoverflow.com

Peter Mortensen , asked Oct 5 '09 at 17:52

How do I iterate through each line of a text file with Bash ?

With this script

echo "Start!"
for p in (peptides.txt)
do
    echo "${p}"
done

I get this output on the screen:

Start!
./runPep.sh: line 3: syntax error near unexpected token `('
./runPep.sh: line 3: `for p in (peptides.txt)'

(Later I want to do something more complicated with $p than just output to the screen.)


The environment variable SHELL is (from env):

SHELL=/bin/bash

/bin/bash --version output:

GNU bash, version 3.1.17(1)-release (x86_64-suse-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.

cat /proc/version output:

Linux version 2.6.18.2-34-default (geeko@buildhost) (gcc version 4.1.2 20061115 (prerelease) (SUSE Linux)) #1 SMP Mon Nov 27 11:46:27 UTC 2006

The file peptides.txt contains:

RKEKNVQ
IPKKLLQK
QYFHQLEKMNVK
IPKKLLQK
GDLSTALEVAIDCYEK
QYFHQLEKMNVKIPENIYR
RKEKNVQ
VLAKHGKLQDAIN
ILGFMK
LEDVALQILL

Bruno De Fraine , answered Oct 5 '09 at 18:00

One way to do it is:
while read p; do
  echo $p
done <peptides.txt

Exceptionally, if the loop body may read from standard input , you can open the file using a different file descriptor:

while read -u 10 p; do
  ...
done 10<peptides.txt

Here, 10 is just an arbitrary number (different from 0, 1, 2).

Warren Young , answered Oct 5 '09 at 17:54

cat peptides.txt | while read line
do
   # do something with $line here
done

Stan Graves , answered Oct 5 '09 at 18:18

Option 1a: While loop: Single line at a time: Input redirection
#!/bin/bash
filename='peptides.txt'
echo Start
while read p; do 
    echo $p
done < $filename

Option 1b: While loop: Single line at a time:
Open the file, read from a file descriptor (in this case file descriptor #4).

#!/bin/bash
filename='peptides.txt'
exec 4<$filename
echo Start
while read -u4 p ; do
    echo $p
done

Option 2: For loop: Read file into single variable and parse.
This syntax will parse "lines" based on any white space between the tokens. This still works because the given input file lines are single-word tokens. If there were more than one token per line, then this method would not work as well. Also, reading the full file into a single variable is not a good strategy for large files.

#!/bin/bash
filename='peptides.txt'
filelines=`cat $filename`
echo Start
for line in $filelines ; do
    echo $line
done

mightypile , answered Oct 4 '13 at 13:30

This is no better than other answers, but is one more way to get the job done in a file without spaces (see comments). I find that I often need one-liners to dig through lists in text files without the extra step of using separate script files.
for word in $(cat peptides.txt); do echo $word; done

This format allows me to put it all in one command-line. Change the "echo $word" portion to whatever you want and you can issue multiple commands separated by semicolons. The following example uses the file's contents as arguments into two other scripts you may have written.

for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done

Or if you intend to use this like a stream editor (learn sed) you can dump the output to another file as follows.

for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done > outfile.txt

I've used these as written above because I have used text files where I've created them with one word per line. (See comments) If you have spaces that you don't want splitting your words/lines, it gets a little uglier, but the same command still works as follows:

OLDIFS=$IFS; IFS=$'\n'; for line in $(cat peptides.txt); do cmd_a.sh $line; cmd_b.py $line; done > outfile.txt; IFS=$OLDIFS

This just tells the shell to split on newlines only, not spaces, then returns the environment back to what it was previously. At this point, you may want to consider putting it all into a shell script rather than squeezing it all into a single line, though.

Best of luck!

Jahid , answered Jun 9 '15 at 15:09

Use a while loop, like this:
while IFS= read -r line; do
   echo "$line"
done <file

Notes:

  1. If you don't set the IFS properly, you will lose indentation.
  2. You should almost always use the -r option with read.
  3. Don't read lines with for

codeforester , answered Jan 14 at 3:30

A few more things not covered by other answers:

Reading from a delimited file
# ':' is the delimiter here, and there are three fields on each line in the file
# IFS set below is restricted to the context of `read`, it doesn't affect any other code
while IFS=: read -r field1 field2 field3; do
  # process the fields
  # if the line has less than three fields, the missing fields will be set to an empty string
  # if the line has more than three fields, `field3` will get all the values, including the third field plus the delimiter(s)
done < input.txt
Reading from more than one file at a time
while read -u 3 -r line1 && read -u 4 -r line2; do
  # process the lines
  # note that the loop will end when we reach EOF on either of the files, because of the `&&`
done 3< input1.txt 4< input2.txt
Reading a whole file into an array (Bash version 4+)
readarray -t my_array < my_file

or

mapfile -t my_array < my_file

And then

for line in "${my_array[@]}"; do
  # process the lines
done

Anjul Sharma , answered Mar 8 '16 at 16:10

If you don't want your read to be broken by newline character, use -
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
    echo "$line"
done < "$1"

Then run the script with file name as parameter.

Sine , answered Nov 14 '13 at 14:23

#!/bin/bash
#
# Change the file name from "test" to desired input file 
# (The comments in bash are prefixed with #'s)
for x in $(cat test.txt)
do
    echo $x
done

dawg , answered Feb 3 '16 at 19:15

Suppose you have this file:
$ cat /tmp/test.txt
Line 1
    Line 2 has leading space
Line 3 followed by blank line

Line 5 (follows a blank line) and has trailing space    
Line 6 has no ending CR

There are four elements that will alter the meaning of the file output read by many Bash solutions:

  1. The blank line 4;
  2. Leading or trailing spaces on two lines;
  3. Maintaining the meaning of individual lines (i.e., each line is a record);
  4. Line 6 is not terminated with a newline.

If you want to read the text file line by line, including blank lines and final lines without a terminating newline, you must use a while loop and you must have an alternate test for the final line.

Here are the methods that may change the file (in comparison to what cat returns):

1) Lose the last line and leading and trailing spaces:

$ while read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'

(If you do while IFS= read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt instead, you preserve the leading and trailing spaces but still lose the last line if it is not terminated with a newline)

2) Using command substitution with cat reads the entire file in one gulp and loses the meaning of individual lines:

$ for p in "$(cat /tmp/test.txt)"; do printf "%s\n" "'$p'"; done
'Line 1
    Line 2 has leading space
Line 3 followed by blank line

Line 5 (follows a blank line) and has trailing space    
Line 6 has no ending CR'

(If you remove the " from $(cat /tmp/test.txt) you read the file word by word rather than one gulp. Also probably not what is intended...)


The most robust and simplest way to read a file line-by-line and preserve all spacing is:

$ while IFS= read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'    Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space    '
'Line 6 has no ending CR'

If you want to strip leading and trailing spaces, remove the IFS= part:

$ while read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'
'Line 6 has no ending CR'

(A text file without a terminating \n , while fairly common, is considered broken under POSIX. If you can count on the trailing \n you do not need || [[ -n $line ]] in the while loop.)

More at the BASH FAQ


Here is my real-life example of how to loop over the lines of another program's output, check for substrings, drop the double quotes from a variable, and use that variable outside of the loop. I guess quite a few people ask these questions sooner or later.
##Parse FPS from first video stream, drop quotes from fps variable
## streams.stream.0.codec_type="video"
## streams.stream.0.r_frame_rate="24000/1001"
## streams.stream.0.avg_frame_rate="24000/1001"
FPS=unknown
while read -r line; do
  if [[ $FPS == "unknown" ]] && [[ $line == *".codec_type=\"video\""* ]]; then
    echo ParseFPS $line
    FPS=parse
  fi
  if [[ $FPS == "parse" ]] && [[ $line == *".r_frame_rate="* ]]; then
    echo ParseFPS $line
    FPS=${line##*=}
    FPS="${FPS%\"}"
    FPS="${FPS#\"}"
  fi
done <<< "$(ffprobe -v quiet -print_format flat -show_format -show_streams -i "$input")"
if [ "$FPS" == "unknown" ] || [ "$FPS" == "parse" ]; then 
  echo ParseFPS Unknown frame rate
fi
echo Found $FPS

Declaring the variable outside of the loop, setting its value inside it, and using it after the loop requires the done <<< "$(...)" syntax, which runs the loop in the current shell. The application needs to be run within the context of the current console. The quotes around the command substitution preserve the newlines of the output stream.

The loop matches for substrings, then reads the name=value pair, splits off the right-side part after the last = character, drops the first quote and drops the last quote, and we have a clean value to be used elsewhere.

[Aug 29, 2017] How to view the `.bash_history` file via command line

Aug 29, 2017 | askubuntu.com

If you actually need the output of the .bash_history file, replace history with cat ~/.bash_history in all of the commands below.

If you actually want the commands without numbers in front, use this command instead of history :

history | cut -d' ' -f 4-
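
A few more illustrative variations on the same theme (assuming the default history file location):

history 20                    # last 20 numbered entries
tail -n 20 ~/.bash_history    # last 20 raw lines, without numbers
history | grep ssh            # search the history for a pattern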

[Aug 17, 2017] Linux Shell Scripting Tutorial - A Beginner's handbook

Table of Contents

Chapter 1: Quick Introduction to Linux
What Linux is?
Who developed the Linux?
How to get Linux?
How to Install Linux
Where I can use Linux?
What Kernel Is?
What is Linux Shell?
How to use Shell
What is Shell Script ?
Why to Write Shell Script ?
More on Shell...
Chapter 2: Getting started with Shell Programming
How to write shell script
Variables in shell
How to define User defined variables (UDV)
Rules for Naming variable name (Both UDV and System Variable)
How to print or access value of UDV (User defined variables)
echo Command
Shell Arithmetic
More about Quotes
Exit Status
The read Statement
Wild cards (Filename Shorthand or meta Characters)
More commands on one command line
Command Line Processing
Why Command Line arguments required
Redirection of Standard output/input i.e. Input - Output redirection
Pipes
Filter
What is Processes
Why Process required
Linux Command(s) Related with Process
Chapter 3: Shells (bash) structured Language Constructs
Decision making in shell script ( i.e. if command)
test command or [ expr ]
if...else...fi
Nested ifs
Multilevel if-then-else
Loops in Shell Scripts
for loop
Nested for loop
while loop
The case Statement
How to de-bug the shell script?
Chapter 4: Advanced Shell Scripting Commands
/dev/null - to send unwanted output of program
Local and Global Shell variable (export command)
Conditional execution i.e. && and ||
I/O Redirection and file descriptors
Functions
User Interface and dialog utility-Part I
User Interface and dialog utility-Part II
Message Box (msgbox) using dialog utility
Confirmation Box (yesno box) using dialog utility
Input (inputbox) using dialog utility
User Interface using dialog Utility - Putting it all together
trap command
The shift Command
getopts command
Chapter 5: Essential Utilities for Power User
Preparing for Quick Tour of essential utilities
Selecting portion of a file using cut utility
Putting lines together using paste utility
The join utility
Translating range of characters using tr utility
Data manipulation using awk utility
sed utility - Editing file without using editor
Removing duplicate lines from text database file using uniq utility
Finding matching pattern using grep utility
Chapter 6: Learning expressions with ex
Getting started with ex
Printing text on-screen
Deleting lines
Copying lines
Searching the words
Find and Replace (Substituting regular expression)
Replacing word with confirmation from user
Finding words
Using range of characters in regular expressions
Using & as Special replacement character
Converting lowercase character to uppercase
Chapter 7: awk Revisited
Getting Starting with awk
Predefined variables of awk
Doing arithmetic with awk
User Defined variables in awk
Use of printf statement
Use of Format Specification Code
if condition in awk
Loops in awk
Real life examples in awk
awk miscellaneous
sed - Quick Introduction
Redirecting the output of sed command
How to write sed scripts?
More examples of sed
Chapter 8: Examples of Shell Scripts
Logic Development:
Shell script to print given numbers sum of all digit
Shell script to print contains of file from given line number to next given number of lines
Shell script to say Good morning/Afternoon/Evening as you log in to system
Shell script to find whether entered year is Leap or not
Sort the given five number in ascending order (use of array)
Command line (args) handling:
Adding 2 nos. supplied as command line args
Calculating average of given numbers on command line args
Finding out biggest number from given three nos supplied as command line args
Shell script to implement getopts statement.
Basic math Calculator (case statement)
Loops using while & for loop:
Print nos. as 5,4,3,2,1 using while loop
Printing the patterns using for loop.
Arithmetic in shell scripting:
Performing real number calculation in shell script
Converting decimal number to hexadecimal number
Calculating factorial of given number
File handling:
Shell script to determine whether given file exist or not.
Screen handling/echo command with escape sequence code:
Shell script to print "Hello World" message, in Bold, Blink effect, and in different colors like red, brown etc.
Background process implementation:
Digital clock using shell script
User interface and Functions in shell script:
Shell script to implements menu based system.
System Administration:
Getting more information about your working environment through shell script
Shell script to gather useful system information such as CPU, disks, Ram and your environment etc.
Shell script to add DNS Entry to BIND Database with default Nameservers, Mail Servers (MX) and host
Integrating awk script with shell script:
Script to convert file names from UPPERCASE to lowercase file names or vice versa.
Chapter 9: Other Resources
Appendix - A : Linux File Server Tutorial (LFST) version b0.1 Rev. 2
Appendix - B : Linux Command Reference (LCR)
About the author
About this Document

[Jul 29, 2017] linux - Directory bookmarking for bash - Stack Overflow

Notable quotes:
"... May you wan't to change this alias to something which fits your needs ..."
Jul 29, 2017 | stackoverflow.com

getmizanur , asked Sep 10 '11 at 20:35

Is there any directory bookmarking utility for bash to allow moving around faster on the command line?

UPDATE

Thanks, guys, for the feedback; however, I created my own simple shell script (feel free to modify/expand it)

function cdb() {
    USAGE="Usage: cdb [-c|-g|-d|-l] [bookmark]" ;
    if  [ ! -e ~/.cd_bookmarks ] ; then
        mkdir ~/.cd_bookmarks
    fi

    case $1 in
        # create bookmark
        -c) shift
            if [ ! -f ~/.cd_bookmarks/$1 ] ; then
                echo "cd `pwd`" > ~/.cd_bookmarks/"$1" ;
            else
                echo "Try again! Looks like there is already a bookmark '$1'"
            fi
            ;;
        # goto bookmark
        -g) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then 
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # delete bookmark
        -d) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then 
                rm ~/.cd_bookmarks/"$1" ;
            else
                echo "Oops, forgot to specify the bookmark" ;
            fi    
            ;;
        # list bookmarks
        -l) shift
            ls -l ~/.cd_bookmarks/ ;
            ;;
         *) echo "$USAGE" ;
            ;;
    esac
}

INSTALL

1./ create a file ~/.cdb and copy the above script into it.

2./ in your ~/.bashrc add the following

if [ -f ~/.cdb ]; then
    source ~/.cdb
fi

3./ restart your bash session

USAGE

1./ to create a bookmark

$cd my_project
$cdb -c project1

2./ to goto a bookmark

$cdb -g project1

3./ to list bookmarks

$cdb -l

4./ to delete a bookmark

$cdb -d project1

5./ where are all my bookmarks stored?

$cd ~/.cd_bookmarks

Fredrik Pihl , answered Sep 10 '11 at 20:47

Also, have a look at CDPATH

A colon-separated list of search paths available to the cd command, similar in function to the $PATH variable for binaries. The $CDPATH variable may be set in the local ~/.bashrc file.

bash$ cd bash-doc
bash: cd: bash-doc: No such file or directory

bash$ CDPATH=/usr/share/doc
bash$ cd bash-doc
/usr/share/doc/bash-doc

bash$ echo $PWD
/usr/share/doc/bash-doc

and

cd -

It's the command-line equivalent of the back button (takes you to the previous directory you were in).

ajreal , answered Sep 10 '11 at 20:41

In bash script/command,
you can use pushd and popd

pushd

Save and then change the current directory. With no arguments, pushd exchanges the top two directories.

Usage

cd /abc
pushd /xxx    <-- push /abc onto the directory stack and cd to /xxx
pushd /zzz
pushd +1      <-- cd /xxx

popd removes the top entry from the directory stack and changes to the directory that is now on top (the reverse operation)
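
Continuing the example above, a quick sketch:

dirs -v     # show the directory stack with indices
popd        # drop the top entry and cd to the new top
popd +1     # remove the entry at index 1 without changing directory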

fgm , answered Sep 11 '11 at 8:28

bookmarks.sh provides a bookmark management system for Bash version 4.0+. It can also use a Midnight Commander hotlist.

Dmitry Frank , answered Jun 16 '15 at 10:22

Thanks for sharing your solution, and I'd like to share mine as well, which I find more useful than anything else I've come across before.

The engine is a great, universal tool: fzf, the command-line fuzzy finder by Junegunn.

It primarily allows you to "fuzzy-find" files in a number of ways, but it also allows you to feed arbitrary text data to it and filter this data. So, the shortcuts idea is simple: all we need is to maintain a file with paths (which are shortcuts), and fuzzy-filter this file. Here's how it looks: we type the cdg command (from "cd global", if you like), get a list of our bookmarks, pick the needed one in just a few keystrokes, and press Enter. The working directory is changed to the picked item.

It is extremely fast and convenient: usually I just type 3-4 letters of the needed item, and all others are already filtered out. Additionally, of course we can move through list with arrow keys or with vim-like keybindings Ctrl+j / Ctrl+k .

Article with details: Fuzzy shortcuts for your shell .

It is possible to use it for GUI applications as well (via xterm): I use that for my GUI file manager Double Commander . I have plans to write an article about this use case, too.
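
A minimal sketch of the idea described above (it assumes fzf is installed; the bookmark file ~/.cdg_paths and the function name cdg are illustrative):

# keep one absolute path per line in ~/.cdg_paths, then define:
cdg() {
    local dest
    dest=$(fzf < ~/.cdg_paths) || return   # fuzzy-pick a bookmarked path
    cd "$dest" || return
}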

return42 , answered Feb 6 '15 at 11:56

Inspired by the question and answers here, I added the lines below to my ~/.bashrc file.

With this you have a favdirs command (function) to manage your favorites and an autocompletion function to select an item from these favorites.

# ---------
# Favorites
# ---------

__favdirs_storage=~/.favdirs
__favdirs=( "$HOME" )

containsElement () {
    local e
    for e in "${@:2}"; do [[ "$e" == "$1" ]] && return 0; done
    return 1
}

function favdirs() {

    local cur
    local IFS
    local GLOBIGNORE

    case $1 in
        list)
            echo "favorite folders ..."
            printf -- ' - %s\n' "${__favdirs[@]}"
            ;;
        load)
            if [[ ! -e $__favdirs_storage ]] ; then
                favdirs save
            fi
            # mapfile requires bash 4 / my OS-X bash vers. is 3.2.53 (from 2007 !!?!).
            # mapfile -t __favdirs < $__favdirs_storage
            IFS=$'\r\n' GLOBIGNORE='*' __favdirs=($(< $__favdirs_storage))
            ;;
        save)
            printf -- '%s\n' "${__favdirs[@]}" > $__favdirs_storage
            ;;
        add)
            cur=${2-$(pwd)}
            favdirs load
            if containsElement "$cur" "${__favdirs[@]}" ; then
                echo "'$cur' allready exists in favorites"
            else
                __favdirs+=( "$cur" )
                favdirs save
                echo "'$cur' added to favorites"
            fi
            ;;
        del)
            cur=${2-$(pwd)}
            favdirs load
            local i=0
            for fav in ${__favdirs[@]}; do
                if [ "$fav" = "$cur" ]; then
                    echo "delete '$cur' from favorites"
                    unset __favdirs[$i]
                    favdirs save
                    break
                fi
                let i++
            done
            ;;
        *)
            echo "Manage favorite folders."
            echo ""
            echo "usage: favdirs [ list | load | save | add | del ]"
            echo ""
            echo "  list : list favorite folders"
            echo "  load : load favorite folders from $__favdirs_storage"
            echo "  save : save favorite directories to $__favdirs_storage"
            echo "  add  : add directory to favorites [default pwd $(pwd)]."
            echo "  del  : delete directory from favorites [default pwd $(pwd)]."
    esac
} && favdirs load

function __favdirs_compl_command() {
    COMPREPLY=( $( compgen -W "list load save add del" -- ${COMP_WORDS[COMP_CWORD]}))
} && complete -o default -F __favdirs_compl_command favdirs

function __favdirs_compl() {
    local IFS=$'\n'
    COMPREPLY=( $( compgen -W "${__favdirs[*]}" -- ${COMP_WORDS[COMP_CWORD]}))
}

alias _cd='cd'
complete -F __favdirs_compl _cd

Within the last two lines, an alias to change the current directory (with autocompletion) is created. With this alias ( _cd ) you are able to change to one of your favorite directories. You may want to change this alias to something which fits your needs.

With the function favdirs you can manage your favorites (see usage).

$ favdirs 
Manage favorite folders.

usage: favdirs [ list | load | save | add | del ]

  list : list favorite folders
  load : load favorite folders from ~/.favdirs
  save : save favorite directories to ~/.favdirs
  add  : add directory to favorites [default pwd /tmp ].
  del  : delete directory from favorites [default pwd /tmp ].

Zied , answered Mar 12 '14 at 9:53

Yes, there is DirB: Directory Bookmarks for Bash, well explained in this Linux Journal article

An example from the article:

% cd ~/Desktop
% s d       # save(bookmark) ~/Desktop as d
% cd /tmp   # go somewhere
% pwd
/tmp
% g d       # go to the desktop
% pwd
/home/Desktop

Al Conrad , answered Sep 4 '15 at 16:10

@getmizanur I used your cdb script. I enhanced it slightly by adding bookmarks tab completion. Here's my version of your cdb script.
_cdb()
{
    local _script_commands=$(ls -1 ~/.cd_bookmarks/)
    local cur=${COMP_WORDS[COMP_CWORD]}

    COMPREPLY=( $(compgen -W "${_script_commands}" -- $cur) )
}
complete -F _cdb cdb


function cdb() {

    local USAGE="Usage: cdb [-h|-c|-d|-g|-l|-s] [bookmark]\n
    \t[-h or no args] - prints usage help\n
    \t[-c bookmark] - create bookmark\n
    \t[-d bookmark] - delete bookmark\n
    \t[-g bookmark] - goto bookmark\n
    \t[-l] - list bookmarks\n
    \t[-s bookmark] - show bookmark location\n
    \t[bookmark] - same as [-g bookmark]\n
    Press tab for bookmark completion.\n"        

    if  [ ! -e ~/.cd_bookmarks ] ; then
        mkdir ~/.cd_bookmarks
    fi

    case $1 in
        # create bookmark
        -c) shift
            if [ ! -f ~/.cd_bookmarks/$1 ] ; then
                echo "cd `pwd`" > ~/.cd_bookmarks/"$1"
                complete -F _cdb cdb
            else
                echo "Try again! Looks like there is already a bookmark '$1'"
            fi
            ;;
        # goto bookmark
        -g) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # show bookmark
        -s) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                cat ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
        # delete bookmark
        -d) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                rm ~/.cd_bookmarks/"$1" ;
            else
                echo "Oops, forgot to specify the bookmark" ;
            fi
            ;;
        # list bookmarks
        -l) shift
            ls -1 ~/.cd_bookmarks/ ;
            ;;
        -h) echo -e $USAGE ;
            ;;
        # goto bookmark by default
        *)
            if [ -z "$1" ] ; then
                echo -e $USAGE
            elif [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist." ;
            fi
            ;;
    esac
}

tobimensch , answered Jun 5 '16 at 21:31

Yes, one that I have written, that is called anc.

https://github.com/tobimensch/anc

Anc stands for anchor, but anc's anchors are really just bookmarks.

It's designed for ease of use and there're multiple ways of navigating, either by giving a text pattern, using numbers, interactively, by going back, or using [TAB] completion.

I'm actively working on it and open to input on how to make it better.

Allow me to paste the examples from anc's github page here:

# make the current directory the default anchor:
$ anc s

# go to /etc, then /, then /usr/local and then back to the default anchor:
$ cd /etc; cd ..; cd usr/local; anc

# go back to /usr/local :
$ anc b

# add another anchor:
$ anc a $HOME/test

# view the list of anchors (the default one has the asterisk):
$ anc l
(0) /path/to/first/anchor *
(1) /home/usr/test

# jump to the anchor we just added:
# by using its anchor number
$ anc 1
# or by jumping to the last anchor in the list
$ anc -1

# add multiple anchors:
$ anc a $HOME/projects/first $HOME/projects/second $HOME/documents/first

# use text matching to jump to $HOME/projects/first
$ anc pro fir

# use text matching to jump to $HOME/documents/first
$ anc doc fir

# add anchor and jump to it using an absolute path
$ anc /etc
# is the same as
$ anc a /etc; anc -1

# add anchor and jump to it using a relative path
$ anc ./X11 #note that "./" is required for relative paths
# is the same as
$ anc a X11; anc -1

# using wildcards you can add many anchors at once
$ anc a $HOME/projects/*

# use shell completion to see a list of matching anchors
# and select the one you want to jump to directly
$ anc pro[TAB]

Cảnh Toàn Nguyễn , answered Feb 20 at 5:41

Bashmarks is an amazingly simple and intuitive utility. In short, after installation, the usage is:
s <bookmark_name> - Saves the current directory as "bookmark_name"
g <bookmark_name> - Goes (cd) to the directory associated with "bookmark_name"
p <bookmark_name> - Prints the directory associated with "bookmark_name"
d <bookmark_name> - Deletes the bookmark
l                 - Lists all available bookmarks


For short-term shortcuts, I have the following in my respective init script (sorry, I can't find the source right now and didn't bother then):
function b() {
    alias $1="cd `pwd -P`"
}

Usage:

In any directory that you want to bookmark type

b THEDIR # <THEDIR> being the name of your 'bookmark'

It will create an alias to cd (back) to here.

To return to a 'bookmarked' dir type

THEDIR

It will run the stored alias and cd back there.

Caution: Use only if you understand that this might override existing shell aliases and what that means.

[Jul 29, 2017] If processes inherit the parents environment, why do we need export?

Notable quotes:
"... "Processes inherit their environment from their parent (the process which started them)." ..."
"... in the environment ..."
Jul 29, 2017 | unix.stackexchange.com
Amelio Vazquez-Reina asked May 19 '14

I read here that the purpose of export in a shell is to make the variable available to sub-processes started from the shell.

However, I have also read here and here that "Processes inherit their environment from their parent (the process which started them)."

If this is the case, why do we need export ? What am I missing?

Are shell variables not part of the environment by default? What is the difference?

Your assumption is that all shell variables are in the environment . This is incorrect. The export command is what defines a name to be in the environment at all. Thus:

a=1
b=2
export b

results in the current shell knowing that $a expands to 1 and $b to 2, but subprocesses will not know anything about a because it is not part of the environment (even in the current shell).


Alternatives to export :

  1. name=val command # Assignment before command exports that name to the command.
  2. declare/local -x name # Exports name, particularly useful in shell functions when you want to avoid exposing the name to outside scope.
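
A quick sketch of both alternatives (the names FOO and f are illustrative):

FOO=bar printenv FOO                      # 1. FOO is in printenv's environment only
f() { local -x FOO=bar; printenv FOO; }   # 2. exported, but scoped to the function
f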
====

There's a difference between shell variables and environment variables. If you define a shell variable without export ing it, it is not added to the processes environment and thus not inherited to its children.

Using export you tell the shell to add the shell variable to the environment. You can test this using printenv (which just prints its environment to stdout; since it is a child process, you see the effect of exporting variables):

#!/bin/sh
MYVAR="my cool variable"
echo "Without export:"
printenv | grep MYVAR
echo "With export:"
export MYVAR 
printenv | grep MYVAR
A variable, once exported, is part of the environment. PATH is exported in the shell itself, while custom variables can be exported as needed.

... ... ..

[Jul 29, 2017] Why does subshell not inherit exported variable (PS1)?

Jul 29, 2017 | superuser.com
I am using startx to start the graphical environment. I have a very simple .xinitrc which I will add things to as I set up the environment, but for now it is as follows:

catwm &   # Just a basic window manager, for testing.

xterm

The reason I background the WM and foreground the terminal, and not the other way around as is often done, is that I would like to be able to come back to the virtual text console after typing exit in xterm . This appears to work as described.

The problem is that the PS1 variable that currently is set to my preference in /etc/profile.d/user.sh (which is sourced from /etc/profile supplied by distro), does not appear to propagate to the environment of the xterm mentioned above. The relevant process tree is as follows:


\_ bash
    \_ xinit /home/user/.xinitrc -- /etc/X11/xinit/xserverrc -auth /tmp/serverauth.ggJna3I0vx
        \_ /usr/bin/X -nolisten tcp -auth /tmp/serverauth.ggJna3I0vx vt1
        \_ sh /home/user/.xinitrc
            \_ /home/user/catwm
            \_ xterm
                \_ bash
The shell started by xterm appears to be interactive, the shell executing .xinitrc however is not. I am ok with both, the assumptions about interactivity seem to be perfectly valid, but now I have a non-interactive shell that spawns an interactive shell indirectly, and the interactive shell has no chance to automatically inherit the prompt, because the prompt was unset or otherwise made unavailable higher up the process tree.

How do I go about getting my prompt back? bash environment-variables sh

amn , asked Oct 21 '13 at 9:51

Commands env and export list only variables which are exported. $PS1 is usually not exported. Try echo $PS1 in your shell to see actual value of $PS1 .

Non-interactive shells usually do not have $PS1 . Non-interactive bash explicitly unsets $PS1 . You can check whether bash is interactive with echo $- . If the output contains i then it is interactive. You can explicitly start an interactive shell by using the option on the command line: bash -i . A shell started with -c is not interactive.

The /etc/profile script is read for a login shell. You can start the shell as a login shell by: bash -l .

With bash shell the scripts /etc/bash.bashrc and ~/.bashrc are usually used to set $PS1 . Those scripts are sourced when interactive non-login shell is started. It is your case in the xterm .

See Setting the PS? Strings Permanently

pabouk , answered Oct 21 '13 at 11:19
I am specifically avoiding to set PS1 in .bashrc or /etc/bash.bashrc (which is executed as well), to retain POSIX shell compatibility. These do not set or unset PS1 . PS1 is set in /etc/profile.d/user.sh , which is sourced by /etc/profile . Indeed, this file is only executed for login shells, however I do export PS1 from /etc/profile.d/user.sh exactly because I want propagation of my preferred value down the process tree. So it shouldn't matter which subshells are login and/or interactive ones then, should it? – amn Oct 21 '13 at 11:32
It seems that bash removes the PS1 variable. What exactly do you want to achieve by "POSIX shell compatibility"? Do you want to be able to replace bash by a different POSIX-compliant shell and retain the same functionality? Based on my tests bash removes PS1 when it is started as non-interactive. I think of two simple solutions: 1. start the shell as a login shell with the -l option (attention for actions in the startup scripts which should be started only at login) 2. start the intermediate shells as interactive with the -i option. – pabouk Oct 21 '13 at 12:00
I try to follow interfaces and specifications, not implementations - hence POSIX compatibility. That's important (to me). I already have one login shell - the one started by /usr/bin/login . I understand that a non-interactive shell doesn't need prompt, but unsetting a variable is too much - I need the prompt in an interactive shell (spawned and used by xterm ) later on. What am I doing wrong? I guess most people set their prompt in .bashrc which is sourced by bash anyway, and so the prompt survives. I try to avoid .bashrc however. – amn Oct 22 '13 at 12:12
@amn: I have added various possible solutions to the reply. – pabouk Oct 22 '13 at 16:46
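
A sketch of the simple workarounds discussed above (illustrative; adapt to your own .xinitrc):

xterm -e bash -l   # run the xterm shell as a login shell so /etc/profile is read
# ...or set PS1 in ~/.bashrc or /etc/bash.bashrc, which interactive non-login shells do read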

[Jul 29, 2017] Bash subshell mystery

Notable quotes:
"... The subshell created using parentheses does not ..."
Jul 29, 2017 | stackoverflow.com

user3718463 , asked Sep 27 '14 at 21:41

The Learning Bash book mentions that a subshell will inherit only environment variables and file descriptors, etc., and that it will not inherit variables that are not exported:
$ var=15
$ (echo $var)
15
$ ./file # this file includes the same command: echo $var

$

As I understand it, the shell will create a subshell in both cases, for () and for ./file; but why does the subshell in the () case see the var variable even though it is not exported, while in the ./file case it does not?

...

I tried to use strace to figure out how this happens and, surprisingly, I found that bash uses the same arguments for the clone system call. This means that both forked processes, in the () case and in the ./file case, should have the same process address space as the parent. So why is the variable visible to the subshell in the () case, but not in the ./file case, although the same arguments are passed to the clone system call?

Alfe , answered Sep 27 '14 at 23:16

The subshell created using parentheses does not use an execve() call for the new process, the calling of the script does. At this point the variables from the parent shell are handled differently: The execve() passes a deliberate set of variables (the script-calling case) while not calling execve() (the parentheses case) leaves the complete set of variables intact.

Your probing using strace should have shown exactly that difference; if you did not see it, I can only assume that you made one of several possible mistakes. I will just strip down what I did to show the difference, then you can decide for yourself where your error was.
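A hedged sketch of such a probe, assuming ./file is the script from the question:

strace -f -e trace=clone,execve \
    bash -c 'var=15; (echo $var); ./file' 2>&1 | grep -E 'clone|execve'

# Expected pattern: a clone() with no following execve() for the ( ... )
# subshell, and a clone() followed by execve("./file", ...) for the script.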

... ... ...

Nicolas Albert , answered Sep 27 '14 at 21:43

You have to export your var for child processes:

var=15
export var
./file        # this file includes the same command: echo $var
15

Once exported, the variable is passed to all child processes at launch time (not at export time), so

var=15
export var
./file        # prints 15

is the same as

export var
var=15
./file        # prints 15

is the same as

export var=15
./file        # prints 15

Export can be cancelled using unset . Sample: unset var .

user3718463 , answered Sep 27 '14 at 23:11

The solution to this mystery is that subshells inherit everything from the parent shell, including all shell variables, because they are simply created with fork (or clone ) and so start with a copy of the parent shell's memory. That is why this works:
$ var=15
$ (echo $var)
15

But in the ./file case, the fork is followed by an exec (execve) system call, which wipes out all of the parent's shell variables; only the exported environment variables survive. You can verify this with strace using -f to monitor the child process, and you will find the call to execve.

[Jul 29, 2017] How To Read and Set Environmental and Shell Variables on a Linux VPS

Mar 03, 2014 | www.digitalocean.com
Introduction

When interacting with your server through a shell session, there are many pieces of information that your shell compiles to determine its behavior and access to resources. Some of these settings are contained within configuration settings and others are determined by user input.

One way that the shell keeps track of all of these settings and details is through an area it maintains called the environment . The environment is an area that the shell builds every time that it starts a session that contains variables that define system properties.

In this guide, we will discuss how to interact with the environment and read or set environmental and shell variables interactively and through configuration files. We will be using an Ubuntu 12.04 VPS as an example, but these details should be relevant on any Linux system.

How the Environment and Environmental Variables Work

Every time a shell session spawns, a process takes place to gather and compile information that should be available to the shell process and its child processes. It obtains the data for these settings from a variety of different files and settings on the system.

Basically the environment provides a medium through which the shell process can get or set settings and, in turn, pass these on to its child processes.

The environment is implemented as strings that represent key-value pairs. If multiple values are passed, they are typically separated by colon (:) characters. Each pair will generally look something like this:

KEY=value1:value2:...

If the value contains significant white-space, quotations are used:

KEY="value with spaces"

The keys in these scenarios are variables. They can be one of two types, environmental variables or shell variables.

Environmental variables are variables that are defined for the current shell and are inherited by any child shells or processes. Environmental variables are used to pass information into processes that are spawned from the shell.

Shell variables are variables that are contained exclusively within the shell in which they were set or defined. They are often used to keep track of ephemeral data, like the current working directory.

By convention, these types of variables are usually defined using all capital letters. This helps users distinguish environmental variables within other contexts.

Printing Shell and Environmental Variables

Each shell session keeps track of its own shell and environmental variables. We can access these in a few different ways.

We can see a list of all of our environmental variables by using the env or printenv commands. In their default state, they should function exactly the same:

printenv


SHELL=/bin/bash
TERM=xterm
USER=demouser
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca:...
MAIL=/var/mail/demouser
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
PWD=/home/demouser
LANG=en_US.UTF-8
SHLVL=1
HOME=/home/demouser
LOGNAME=demouser
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/printenv

This is fairly typical of the output of both printenv and env . The difference between the two commands is only apparent in their more specific functionality. For instance, with printenv , you can request the values of individual variables:

printenv SHELL


/bin/bash

On the other hand, env lets you modify the environment that programs run in by passing a set of variable definitions into a command like this:

env VAR1="blahblah" command_to_run command_options

Since, as we learned above, child processes typically inherit the environmental variables of the parent process, this gives you the opportunity to override values or add additional variables for the child.
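A small sketch of what that looks like in practice (the file name words.txt is hypothetical):

env LANG=C sort words.txt        # run sort with the C locale, leaving the session's LANG untouched
env -i PATH=/usr/bin printenv    # start from an empty environment (-i) and pass only PATH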

As you can see from the output of our printenv command, there are quite a few environmental variables set up through our system files and processes without our input.

These show the environmental variables, but how do we see shell variables?

The set command can be used for this. If we type set without any additional parameters, we will get a list of all shell variables, environmental variables, local variables, and shell functions:

set


BASH=/bin/bash
BASHOPTS=checkwinsize:cmdhist:expand_aliases:extglob:extquote:force_fignore:histappend:interactive_comments:login_shell:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
. . .

This is usually a huge list. You probably want to pipe it into a pager program to deal with the amount of output easily:

set | less

The amount of additional information that we receive back is a bit overwhelming. We probably do not need to know all of the bash functions that are defined, for instance.

We can clean up the output by specifying that set should operate in POSIX mode, which won't print the shell functions. We can execute this in a sub-shell so that it does not change our current environment:

(set -o posix; set)

This will list all of the environmental and shell variables that are defined.

We can attempt to compare this output with the output of the env or printenv commands to try to get a list of only shell variables, but this will be imperfect due to the different ways that these commands output information:

comm -23 <(set -o posix; set | sort) <(env | sort)

This will likely still include a few environmental variables, due to the fact that the set command outputs quoted values, while the printenv and env commands do not quote the values of strings.

This should still give you a good idea of the environmental and shell variables that are set in your session.

These variables are used for all sorts of things. They provide an alternative way of setting persistent values for the session between processes, without writing changes to a file.

Common Environmental and Shell Variables

Some environmental and shell variables are very useful and are referenced fairly often.

Here are some common environmental variables that you will come across (several of them appear in the printenv output above): SHELL , TERM , USER , PWD , PATH , LANG , and HOME .

In addition to these environmental variables, some shell variables that you'll often see are: BASHOPTS , BASH_VERSION , HISTFILESIZE , HISTSIZE , IFS , PS1 , and UID .

Setting Shell and Environmental Variables

To better understand the difference between shell and environmental variables, and to introduce the syntax for setting these variables, we will do a small demonstration.

Creating Shell Variables

We will begin by defining a shell variable within our current session. This is easy to accomplish; we only need to specify a name and a value. We'll adhere to the convention of keeping all caps for the variable name, and set it to a simple string.

TEST_VAR='Hello World!'

Here, we've used quotations since the value of our variable contains a space. Furthermore, we've used single quotes because the exclamation point is a special character in the bash shell that normally expands to the bash history if it is not escaped or put into single quotes.
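A quick illustration of the difference (in an interactive shell with history expansion enabled; the exact error text may vary):

TEST_VAR='Hello World!'    # single quotes: the ! is taken literally
TEST_VAR="Hello World!"    # double quotes: may fail with: bash: !": event not found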

We now have a shell variable. This variable is available in our current session, but will not be passed down to child processes.

We can see this by grepping for our new variable within the set output:

set | grep TEST_VAR


TEST_VAR='Hello World!'

We can verify that this is not an environmental variable by trying the same thing with printenv :

printenv | grep TEST_VAR

No output should be returned.

Let's take this as an opportunity to demonstrate a way of accessing the value of any shell or environmental variable.

echo $TEST_VAR


Hello World!

As you can see, you reference the value of a variable by preceding its name with a $ sign. The shell takes this to mean that it should substitute the value of the variable when it comes across this.

So now we have a shell variable. It shouldn't be passed on to any child processes. We can spawn a new bash shell from within our current one to demonstrate:

bash
echo $TEST_VAR

If we type bash to spawn a child shell, and then try to access the contents of the variable, nothing will be returned. This is what we expected.

Get back to our original shell by typing exit :

exit

Creating Environmental Variables

Now, let's turn our shell variable into an environmental variable. We can do this by exporting the variable. The command to do so is appropriately named:

export TEST_VAR

This will change our variable into an environmental variable. We can check this by checking our environmental listing again:

printenv | grep TEST_VAR


TEST_VAR=Hello World!

This time, our variable shows up. Let's try our experiment with our child shell again:

bash
echo $TEST_VAR


Hello World!

Great! Our child shell has received the variable set by its parent. Before we exit this child shell, let's try to export another variable. We can set environmental variables in a single step like this:

export NEW_VAR="Testing export"

Test that it's exported as an environmental variable:

printenv | grep NEW_VAR


NEW_VAR=Testing export

Now, let's exit back into our original shell:

exit

Let's see if our new variable is available:

echo $NEW_VAR

Nothing is returned.

This is because environmental variables are only passed to child processes. There isn't a built-in way of setting environmental variables of the parent shell. This is good in most cases and prevents programs from affecting the operating environment from which they were called.

The NEW_VAR variable was set as an environmental variable in our child shell. This variable would be available to itself and any of its child shells and processes. When we exited back into our main shell, that environment was destroyed.
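A small sketch of the standard workaround: since a child cannot modify its parent's environment, you source a file in the current shell instead of executing it (the file name setvars.sh and the variable DEPLOY_ENV are hypothetical):

printf 'export DEPLOY_ENV=staging\n' > setvars.sh

bash setvars.sh               # runs in a child process; the variable dies with it
echo "${DEPLOY_ENV:-unset}"   # prints: unset

. ./setvars.sh                # sourced: runs in the current shell
echo "${DEPLOY_ENV:-unset}"   # prints: staging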

Demoting and Unsetting Variables

We still have our TEST_VAR variable defined as an environmental variable. We can change it back into a shell variable by typing:

export -n TEST_VAR

It is no longer an environmental variable:

printenv | grep TEST_VAR

However, it is still a shell variable:

set | grep TEST_VAR


TEST_VAR='Hello World!'

If we want to completely unset a variable, either shell or environmental, we can do so with the unset command:

unset TEST_VAR

We can verify that it is no longer set:

echo $TEST_VAR

Nothing is returned because the variable has been unset.

Setting Environmental Variables at Login

We've already mentioned that many programs use environmental variables to decide the specifics of how to operate. We do not want to have to set important variables up every time we start a new shell session, and we have already seen how many variables are already set upon login, so how do we make and define variables automatically?

This is actually a more complex problem than it initially seems, due to the numerous configuration files that the bash shell reads depending on how it is started.

The Difference between Login, Non-Login, Interactive, and Non-Interactive Shell Sessions

The bash shell reads different configuration files depending on how the session is started.

One distinction between different sessions is whether the shell is being spawned as a "login" or "non-login" session.

A login shell is a shell session that begins by authenticating the user. If you are signing into a terminal session or through SSH and authenticate, your shell session will be set as a "login" shell.

If you start a new shell session from within your authenticated session, as we did by calling the bash command from the terminal, a non-login shell session is started. You were not asked for your authentication details when you started your child shell.

Another distinction that can be made is whether a shell session is interactive, or non-interactive.

An interactive shell session is a shell session that is attached to a terminal. A non-interactive shell session is one that is not attached to a terminal.

So each shell session is classified as either login or non-login and interactive or non-interactive.

A normal session that begins with SSH is usually an interactive login shell. A script run from the command line is usually run in a non-interactive, non-login shell. A terminal session can be any combination of these two properties.

Whether a shell session is classified as a login or non-login shell has implications on which files are read to initialize the shell session.

A session started as a login session will read configuration details from the /etc/profile file first. It will then look for the first login shell configuration file in the user's home directory to get user-specific configuration details.

It reads the first file that it can find out of ~/.bash_profile , ~/.bash_login , and ~/.profile and does not read any further files.

In contrast, a session defined as a non-login shell will read /etc/bash.bashrc and then the user-specific ~/.bashrc file to build its environment.

Non-interactive shells read the environmental variable called BASH_ENV and source the file it specifies to define the new environment.
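A minimal sketch of how BASH_ENV comes into play (the file names here are hypothetical):

cat > ~/.bash_env <<'EOF'
export DEPLOY_ENV=staging
EOF

export BASH_ENV=~/.bash_env
bash myscript.sh     # non-interactive bash sources ~/.bash_env before running the script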

Implementing Environmental Variables

As you can see, there are a variety of different files that we would usually need to look at for placing our settings.

This provides a lot of flexibility that can help in specific situations where we want certain settings in a login shell, and other settings in a non-login shell. However, most of the time we will want the same settings in both situations.

Fortunately, most Linux distributions configure the login configuration files to source the non-login configuration files. This means that you can define environmental variables that you want in both inside the non-login configuration files. They will then be read in both scenarios.

We will usually be setting user-specific environmental variables, and we usually will want our settings to be available in both login and non-login shells. This means that the place to define these variables is in the ~/.bashrc file.

Open this file now:

nano ~/.bashrc

This will most likely contain quite a bit of data already. Most of the definitions here are for setting bash options, which are unrelated to environmental variables. You can set environmental variables just like you would from the command line:

export VARNAME=value

We can then save and close the file. The next time you start a shell session, your environmental variable declaration will be read and passed on to the shell environment. You can force your current session to read the file now by typing:

source ~/.bashrc

If you need to set system-wide variables, you may want to think about adding them to /etc/profile , /etc/bash.bashrc , or /etc/environment .

Conclusion

Environmental and shell variables are always present in your shell sessions and can be very useful. They are an interesting way for a parent process to set configuration details for its children, and are a way of setting options outside of files.

This has many advantages in specific situations. For instance, some deployment mechanisms rely on environmental variables to configure authentication information. This is useful because it does not require keeping these in files that may be seen by outside parties.

There are plenty of other, more mundane, but more common scenarios where you will need to read or alter the environment of your system. These tools and techniques should give you a good foundation for making these changes and using them correctly.

By Justin Ellingwood

[Jul 29, 2017] shell - Whats the difference between .bashrc, .bash_profile, and .environment - Stack Overflow

Notable quotes:
"... "The following paragraphs describe how bash executes its startup files." ..."
Jul 29, 2017 | stackoverflow.com


Adam Rosenfield , asked Jan 6 '09 at 3:58

I've used a number of different *nix-based systems of the years, and it seems like every flavor of Bash I use has a different algorithm for deciding which startup scripts to run. For the purposes of tasks like setting up environment variables and aliases and printing startup messages (e.g. MOTDs), which startup script is the appropriate place to do these?

What's the difference between putting things in .bashrc , .bash_profile , and .environment ? I've also seen other files such as .login , .bash_login , and .profile ; are these ever relevant? What are the differences in which ones get run when logging in physically, logging in remotely via ssh, and opening a new terminal window? Are there any significant differences across platforms (including Mac OS X (and its Terminal.app) and Cygwin Bash)?

Cos , answered Jan 6 '09 at 4:18

The main difference with shell config files is that some are only read by "login" shells (e.g. when you log in from another host, or log in at the text console of a local Unix machine). These are the ones called, say, .login or .profile or .zlogin (depending on which shell you're using).

Then you have config files that are read by "interactive" shells (as in, ones connected to a terminal, or a pseudo-terminal in the case of, say, a terminal emulator running under a windowing system). These are the ones with names like .bashrc , .tcshrc , .zshrc , etc.

bash complicates this in that .bashrc is only read by a shell that's both interactive and non-login , so you'll find most people end up telling their .bash_profile to also read .bashrc with something like

[[ -r ~/.bashrc ]] && . ~/.bashrc

Other shells behave differently - eg with zsh , .zshrc is always read for an interactive shell, whether it's a login one or not.

The manual page for bash explains the circumstances under which each file is read. Yes, behaviour is generally consistent between machines.

.profile is simply the login script filename originally used by /bin/sh . bash , being generally backwards-compatible with /bin/sh , will read .profile if one exists.

Johannes Schaub - litb , answered Jan 6 '09 at 15:21

That's simple. It's explained in man bash :
... ... ... 

Login shells are the ones you log in with (so they are not started when merely starting up xterm, for example). There are other ways to log in, for example using an X display manager; those have their own ways to read and export environment variables at login time.

Also read the INVOCATION chapter in the manual. It says "The following paragraphs describe how bash executes its startup files." ; I think that's spot-on :) It explains what an "interactive" shell is too.

Bash does not know about .environment . I suspect that's a file of your distribution, to set environment variables independent of the shell that you drive.

Jonathan Leffler , answered Jan 6 '09 at 4:13

Classically, ~/.profile is used by Bourne Shell, and is probably supported by Bash as a legacy measure. Again, ~/.login and ~/.cshrc were used by C Shell - I'm not sure that Bash uses them at all.

The ~/.bash_profile would be used once, at login. The ~/.bashrc script is read every time a shell is started. This is analogous to ~/.cshrc for C Shell.

One consequence is that stuff in ~/.bashrc should be as lightweight (minimal) as possible to reduce the overhead when starting a non-login shell.

I believe the ~/.environment file is a compatibility file for Korn Shell.

Filip Ekberg , answered Jan 6 '09 at 4:03

I found information about .bashrc and .bash_profile here to sum it up:

.bash_profile is executed when you login. Stuff you put in there might be your PATH and other important environment variables.

.bashrc is used for non-login shells. I'm not sure what that means. I know that RedHat executes it every time you start another shell ( su to this user or simply calling bash again). You might want to put aliases in there, but again I am not sure what that means. I simply ignore it myself.

.profile is the equivalent of .bash_profile for the root. I think the name is changed to let other shells (csh, sh, tcsh) use it as well. (you don't need one as a user)

There is also .bash_logout which executes at, yeah, good guess... logout. You might want to stop daemons or even do a little housekeeping. You can also add "clear" there if you want to clear the screen when you log out.

Also there is a complete follow up on each of the configurations files here

These are probably even distro-dependent; not all distros choose to have each configuration file, and some have even more. But when they have the same name, they usually include the same content.

Rose Perrone , answered Feb 27 '12 at 0:22

According to Josh Staiger , Mac OS X's Terminal.app actually runs a login shell rather than a non-login shell by default for each new terminal window, calling .bash_profile instead of .bashrc.

He recommends:

Most of the time you don't want to maintain two separate config files for login and non-login shells: when you set a PATH, you want it to apply to both. You can fix this by sourcing .bashrc from your .bash_profile file, then putting PATH and common settings in .bashrc .

To do this, add the following lines to .bash_profile:


if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi

Now when you login to your machine from a console .bashrc will be called.

PolyThinker , answered Jan 6 '09 at 4:06

A good place to look is the man page of bash. Here's an online version. Look for the "INVOCATION" section.

seismick , answered May 21 '12 at 10:42

I have used Debian-family distros which appear to execute .profile , but not .bash_profile , whereas RHEL derivatives execute .bash_profile before .profile .

It seems to be a mess when you have to set up environment variables to work in any Linux OS.

[Jul 29, 2017] Preserve bash history in multiple terminal windows - Unix Linux Stack Exchange

Jul 29, 2017 | unix.stackexchange.com

Oli , asked Aug 26 '10 at 13:04

I consistently have more than one terminal open. Anywhere from two to ten, doing various bits and bobs. Now let's say I restart and open up another set of terminals. Some remember certain things, some forget.

I want a history that:

Anything I can do to make bash work more like that?

Pablo R. , answered Aug 26 '10 at 14:37

# Avoid duplicates
export HISTCONTROL=ignoredups:erasedups  
# When the shell exits, append to the history file instead of overwriting it
shopt -s histappend

# After each command, append to the history file and reread it
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"

kch , answered Sep 19 '08 at 17:49

So, this is all my history-related .bashrc thing:
export HISTCONTROL=ignoredups:erasedups  # no duplicate entries
export HISTSIZE=100000                   # big big history
export HISTFILESIZE=100000               # big big history
shopt -s histappend                      # append to history, don't overwrite it

# Save and reload the history after each command finishes
export PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND"

Tested with bash 3.2.17 on Mac OS X 10.5, bash 4.1.7 on 10.6.

lesmana , answered Jun 16 '10 at 16:11

Here is my attempt at Bash session history sharing. This will enable history sharing between bash sessions in a way that the history counter does not get mixed up and history expansion like !number will work (with some constraints).

Using Bash version 4.1.5 under Ubuntu 10.04 LTS (Lucid Lynx).

HISTSIZE=9000
HISTFILESIZE=$HISTSIZE
HISTCONTROL=ignorespace:ignoredups

_bash_history_sync() {
  builtin history -a         #1
  HISTFILESIZE=$HISTSIZE     #2
  builtin history -c         #3
  builtin history -r         #4
}

history() {                  #5
  _bash_history_sync
  builtin history "$@"
}

PROMPT_COMMAND=_bash_history_sync
Explanation:
  1. Append the just entered line to the $HISTFILE (default is .bash_history ). This will cause $HISTFILE to grow by one line.
  2. Setting the special variable $HISTFILESIZE to some value will cause Bash to truncate $HISTFILE to be no longer than $HISTFILESIZE lines by removing the oldest entries.
  3. Clear the history of the running session. This will reduce the history counter by the amount of $HISTSIZE .
  4. Read the contents of $HISTFILE and insert them into the current running session history. This will raise the history counter by the number of lines in $HISTFILE . Note that the line count of $HISTFILE is not necessarily $HISTFILESIZE .
  5. The history() function overrides the builtin history to make sure that the history is synchronised before it is displayed. This is necessary for the history expansion by number (more about this later).
More explanation: About the constraints of the history expansion:

When using history expansion by number, you should always look up the number immediately before using it. That means no bash prompt display between looking up the number and using it. That usually means no enter and no ctrl+c.

Generally, once you have more than one Bash session, there is no guarantee whatsoever that a history expansion by number will retain its value between two Bash prompt displays. Because when PROMPT_COMMAND is executed the history from all other Bash sessions are integrated in the history of the current session. If any other bash session has a new command then the history numbers of the current session will be different.

I find this constraint reasonable. I have to look the number up every time anyway because I can't remember arbitrary history numbers.

Usually I use the history expansion by number like this

$ history | grep something #note number
$ !number

I recommend using the following Bash options.

## reedit a history substitution line if it failed
shopt -s histreedit
## edit a recalled history line before executing
shopt -s histverify
Strange bugs:

Running the history command piped to anything will result in that command being listed in the history twice. For example:

$ history | head
$ history | tail
$ history | grep foo
$ history | true
$ history | false

All will be listed in the history twice. I have no idea why.

Ideas for improvements:

Maciej Piechotka , answered Aug 26 '10 at 13:20

I'm not aware of any way using bash . But it's one of the most popular features of zsh .
Personally I prefer zsh over bash so I recommend trying it.

Here's the part of my .zshrc that deals with history:

SAVEHIST=10000 # Number of entries
HISTSIZE=10000
HISTFILE=~/.zsh/history # File
setopt APPEND_HISTORY # Don't erase history
setopt EXTENDED_HISTORY # Add additional data to history like timestamp
setopt INC_APPEND_HISTORY # Add immediately
setopt HIST_FIND_NO_DUPS # Don't show duplicates in search
setopt HIST_IGNORE_SPACE # Don't preserve spaces. You may want to turn it off
setopt NO_HIST_BEEP # Don't beep
setopt SHARE_HISTORY # Share history between session/terminals

Chris Down , answered Nov 25 '11 at 15:46

To do this, you'll need to add two lines to your ~/.bashrc :
shopt -s histappend
PROMPT_COMMAND="history -a;history -c;history -r;$PROMPT_COMMAND"

From man bash :

If the histappend shell option is enabled (see the description of shopt under SHELL BUILTIN COMMANDS below), the lines are appended to the history file, otherwise the history file is over-written.

Schof , answered Sep 19 '08 at 19:38

You can edit your BASH prompt to run the "history -a" and "history -r" that Muerr suggested:
savePS1=$PS1

(in case you mess something up, which is almost guaranteed)

PS1=$savePS1`history -a;history -r`

(Note that these are back-ticks; they'll run history -a and history -r on every prompt. Since they don't output any text, your prompt will be unchanged.)

Once you've got your PS1 variable set up the way you want, set it permanently in your ~/.bashrc file.

If you want to go back to your original prompt while testing, do:

PS1=$savePS1

I've done basic testing on this to ensure that it sort of works, but can't speak to any side-effects from running history -a;history -r on every prompt.

pts , answered Mar 25 '11 at 17:40

If you need a bash or zsh history synchronizing solution which also solves the problem below, then see it at http://ptspts.blogspot.com/2011/03/how-to-automatically-synchronize-shell.html

The problem is the following: I have two shell windows A and B. In shell window A, I run sleep 9999 , and (without waiting for the sleep to finish) in shell window B, I want to be able to see sleep 9999 in the bash history.

The reason why most other solutions here won't solve this problem is that they write their history changes to the history file using PROMPT_COMMAND or PS1 , both of which execute too late, only after the sleep 9999 command has finished.

jtimberman , answered Sep 19 '08 at 17:38

You can use history -a to append the current session's history to the histfile, then use history -r on the other terminals to read the histfile.
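A minimal sketch of that two-terminal workflow:

# In terminal A, after running a few commands:
history -a      # append this session's new lines to $HISTFILE

# In terminal B:
history -r      # read $HISTFILE into this session's in-memory history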

jmanning2k , answered Aug 26 '10 at 13:59

I can offer a fix for that last one: make sure the env variable HISTCONTROL does not specify "ignorespace" (or "ignoreboth").

But I feel your pain with multiple concurrent sessions. It simply isn't handled well in bash.

Toby , answered Nov 20 '14 at 14:53

Here's an alternative that I use. It's cumbersome but it addresses the issue that @axel_c mentioned where sometimes you may want to have a separate history instance in each terminal (one for make, one for monitoring, one for vim, etc).

I keep a separate appended history file that I constantly update. I have the following mapped to a hotkey:

history | grep -v history >> ~/master_history.txt

This appends all history from the current terminal to a file called master_history.txt in your home dir.

I also have a separate hotkey to search through the master history file:

cat /home/toby/master_history.txt | grep -i

I use cat | grep because it leaves the cursor at the end to enter my regex. A less ugly way to do this would be to add a couple of scripts to your path to accomplish these tasks, but hotkeys work for my purposes. I also periodically will pull history down from other hosts I've worked on and append that history to my master_history.txt file.

It's always nice to be able to quickly search and find that tricky regex you used or that weird perl one-liner you came up with 7 months ago.

Yarek T , answered Jul 23 '15 at 9:05

Right, so this finally annoyed me enough to find a decent solution:
# Write history after each command
_bash_history_append() {
    builtin history -a
}
PROMPT_COMMAND="_bash_history_append; $PROMPT_COMMAND"

What this does is a sort of amalgamation of what was said in this thread, except that I don't understand why you would reload the global history after every command. I very rarely care about what happens in other terminals, but I always run a series of commands, say in one terminal:

make
ls -lh target/*.foo
scp target/artifact.foo vm:~/

(Simplified example)

And in another:

pv ~/test.data | nc vm:5000 >> output
less output
mv output output.backup1

No way I'd want the command to be shared

rouble , answered Apr 15 at 17:43

Here is my enhancement to @lesmana's answer . The main difference is that concurrent windows don't share history. This means you can keep working in your windows, without having context from other windows getting loaded into your current windows.

If you explicitly type 'history', OR if you open a new window then you get the history from all previous windows.

Also, I use this strategy to archive every command ever typed on my machine.

# Consistent and forever bash history
HISTSIZE=100000
HISTFILESIZE=$HISTSIZE
HISTCONTROL=ignorespace:ignoredups

_bash_history_sync() {
  builtin history -a         #1
  HISTFILESIZE=$HISTSIZE     #2
}

_bash_history_sync_and_reload() {
  builtin history -a         #1
  HISTFILESIZE=$HISTSIZE     #2
  builtin history -c         #3
  builtin history -r         #4
}

history() {                  #5
  _bash_history_sync_and_reload
  builtin history "$@"
}

export HISTTIMEFORMAT="%y/%m/%d %H:%M:%S   "
PROMPT_COMMAND='history 1 >> ${HOME}/.bash_eternal_history'
PROMPT_COMMAND="_bash_history_sync;$PROMPT_COMMAND"

simotek , answered Jun 1 '14 at 6:02

I have written a script for setting a history file per session or task. It's based on the following:
        # write existing history to the old file
        history -a

        # set new historyfile
        export HISTFILE="$1"
        export HISET=$1

        # touch the new file to make sure it exists
        touch $HISTFILE
        # load new history file
        history -r $HISTFILE

It doesn't necessarily save every history command, but it saves the ones that I care about, and it's easier to retrieve them than going through every command. My version also lists all history files and provides the ability to search through them all.

Full source: https://github.com/simotek/scripts-config/blob/master/hiset.sh

Litch , answered Aug 11 '15 at 0:15

I chose to put history in a file-per-tty, as multiple people can be working on the same server - separating each session's commands makes it easier to audit.
# Convert /dev/nnn/X or /dev/nnnX to "nnnX"
HISTSUFFIX=`tty | sed 's/\///g;s/^dev//g'`
# History file is now .bash_history_pts0
HISTFILE=".bash_history_$HISTSUFFIX"
HISTTIMEFORMAT="%y-%m-%d %H:%M:%S "
HISTCONTROL=ignoredups:ignorespace
shopt -s histappend
HISTSIZE=1000
HISTFILESIZE=5000

History now looks like:

user@host:~# test 123
user@host:~# test 5451
user@host:~# history
1  15-08-11 10:09:58 test 123
2  15-08-11 10:10:00 test 5451
3  15-08-11 10:10:02 history

With the files looking like:

user@host:~# ls -la .bash*
-rw------- 1 root root  4275 Aug 11 09:42 .bash_history_pts0
-rw------- 1 root root    75 Aug 11 09:49 .bash_history_pts1
-rw-r--r-- 1 root root  3120 Aug 11 10:09 .bashrc

fstang , answered Sep 10 '16 at 19:30

Here I will point out one problem with
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"

and

PROMPT_COMMAND="$PROMPT_COMMAND;history -a; history -n"

If you run source ~/.bashrc, the $PROMPT_COMMAND will be like

"history -a; history -c; history -r history -a; history -c; history -r"

and

"history -a; history -n history -a; history -n"

This repetition occurs each time you run 'source ~/.bashrc'. You can check PROMPT_COMMAND after each time you run 'source ~/.bashrc' by running 'echo $PROMPT_COMMAND'.

You can see that some commands are apparently broken: "history -n history -a" . But the good news is that it still works, because the other parts still form a valid command sequence (it just involves some extra cost from executing some commands repeatedly, and it is not clean).
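One hedged way to avoid the duplication is to make the assignment idempotent, so repeated sourcing of ~/.bashrc is harmless (a sketch, not from the original answer):

case "$PROMPT_COMMAND" in
    *"history -a"*) ;;    # already installed; don't add it again
    *) PROMPT_COMMAND="history -a; history -c; history -r${PROMPT_COMMAND:+; $PROMPT_COMMAND}" ;;
esac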

Personally I use the following simple version:

shopt -s histappend
PROMPT_COMMAND="history -a; history -c; history -r"

which has most of the functionality with none of the issues mentioned above.

Another point to make is that there is really nothing magic here: PROMPT_COMMAND is just a plain bash variable. The commands in it get executed just before bash displays its prompt (the $ sign). For example, if your PROMPT_COMMAND is "echo 123" and you run "ls" in your terminal, the effect is like running "ls; echo 123" .

$ PROMPT_COMMAND="echo 123"

output (Just like running 'PROMPT_COMMAND="echo 123"; $PROMPT_COMMAND'):

123

Run the following:

$ echo 3

output:

3
123

"history -a" is used to write the history commands in memory to ~/.bash_history

"history -c" is used to clear the history commands in memory

"history -r" is used to read history commands from ~/.bash_history to memory

See history command explanation here: http://ss64.com/bash/history.html

PS: As other users have pointed out, export is unnecessary. See: using export in .bashrc

Hopping Bunny , answered May 13 '15 at 4:48

Here is the snippet from my .bashrc and short explanations wherever needed:
# The following line ensures that history logs screen commands as well
shopt -s histappend

# This line makes the history file to be rewritten and reread at each bash prompt
PROMPT_COMMAND="$PROMPT_COMMAND;history -a; history -n"
# Have lots of history
HISTSIZE=100000         # remember the last 100000 commands
HISTFILESIZE=100000     # start truncating commands after 100000 lines
HISTCONTROL=ignoreboth  # ignoreboth is shorthand for ignorespace and     ignoredups

The HISTFILESIZE and HISTSIZE are personal preferences and you can change them as per your tastes.

Mulki , answered Jul 24 at 20:49

This works for ZSH
##############################################################################
# History Configuration for ZSH
##############################################################################
HISTSIZE=10000               #How many lines of history to keep in memory
HISTFILE=~/.zsh_history     #Where to save history to disk
SAVEHIST=10000               #Number of history entries to save to disk
#HISTDUP=erase               #Erase duplicates in the history file
setopt    appendhistory     #Append history to the history file (no overwriting)
setopt    sharehistory      #Share history across terminals
setopt    incappendhistory  #Immediately append to the history file, not just when a term is killed

[Jul 29, 2017] shell - How does this bash code detect an interactive session - Stack Overflow

Notable quotes:
"... ', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or ' ..."
Jul 29, 2017 | stackoverflow.com

user1284631 , asked Jun 5 '13 at 8:44

Following some issues with scp (it did not like the presence of the bash bind command in my .bashrc file, apparently), I followed the advice of a clever guy on the Internet (I just cannot find that post right now) who put this at the top of his .bashrc file:
[[ ${-#*i} != ${-} ]] || return

in order to make sure that the bash initialization is NOT executed unless in an interactive session.

Now, that works. However, I am not able to figure how it works. Could you enlighten me?

According to this answer , the $- is the current options set for the shell and I know that the ${} is the so-called "substring" syntax for expanding variables.

However, I do not understand the ${-#*i} part. And why $-#*i is not the same as ${-#*i} .

blue , answered Jun 5 '13 at 8:49

${parameter#word}

${parameter##word}

The word is expanded to produce a pattern just as in filename expansion. If the pattern matches the beginning of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the '#' case) or the longest matching pattern (the '##' case) deleted.

If parameter is '@' or '*', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or '*', the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list.

Source: http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html

So basically what happens in ${-#*i} is that *i is expanded, and if it matches the beginning of the value of $- , then the result of the whole expansion is $- with the shortest matching pattern between *i and $- deleted.

Example

VAR "baioasd" 
echo ${VAR#*i};

outputs oasd .

In your case

If the shell is interactive, $- will contain the letter 'i', so when you strip the pattern *i from $- you get a string that differs from the original $- ( [[ ${-#*i} != ${-} ]] yields true). If the shell is not interactive, $- does not contain the letter 'i', the pattern *i does not match anything in $- , [[ ${-#*i} != $- ]] yields false, and the return statement is executed.
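For comparison, two arguably clearer ways to express the same test (a sketch; the second form is bash-specific):

case $- in
    *i*) echo interactive ;;
    *)   echo non-interactive ;;
esac

[[ $- == *i* ]] && echo interactive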

perreal , answered Jun 5 '13 at 8:53

See this :

To determine within a startup script whether or not Bash is running interactively, test the value of the '-' special parameter. It contains i when the shell is interactive

Your substitution removes the string up to, and including the i and tests if the substituted version is equal to the original string. They will be different if there is i in the ${-} .

[Jul 28, 2017] bash - About .bash_profile, .bashrc, and where should alias be written in - Stack Overflow

Jul 28, 2017 | stackoverflow.com

Community May 23 at 12:17

Possible Duplicate: What's the difference between .bashrc, .bash_profile, and .environment?

It seems that if I use

alias ls='ls -F'

inside of .bashrc on Mac OS X, then the newly created shell will not have that alias. I need to type bash again and that alias will be in effect.

And if I log into Linux on the hosting company, the .bashrc file has a comment line that says:

For non-login shell

and the .bash_profile file has a comment that says

for login shell

So where should aliases be written in? How come we separate the login shell and non-login shell?

Some webpage say use .bash_aliases , but it doesn't work on Mac OS X, it seems.

Maggyero edited Apr 25 '16 at 16:24

The reason you separate the login and non-login shell is because the .bashrc file is reloaded every time you start a new copy of Bash.

The .profile file is loaded only when you either log in or use the appropriate flag to tell Bash to act as a login shell.

Personally,

Oh, and the reason you need to type bash again to get the new alias is that Bash loads your .bashrc file when it starts but it doesn't reload it unless you tell it to. You can reload the .bashrc file (and not need a second shell) by typing


source
~/.
bashrc

which loads the .bashrc file as if you had typed the commands directly to Bash.

lhunath answered May 24 '09 at 6:22

Check out http://mywiki.wooledge.org/DotFiles for an excellent resource on the topic aside from man bash .

Summary:

Adam Rosenfield May 24 '09 at 2:46
From the bash manpage:

When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile , if that file exists. After reading that file, it looks for ~/.bash_profile , ~/.bash_login , and ~/.profile , in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.

When a login shell exits, bash reads and executes commands from the file ~/.bash_logout , if it exists.

When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc , if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc .

Thus, if you want to get the same behavior for both login shells and interactive non-login shells, you should put all of your commands in either .bashrc or .bash_profile , and then have the other file source the first one.


.bash_profile is loaded for a "login shell". I am not sure what that would be on OS X, but on Linux that is either X11 or a virtual terminal.

.bashrc is loaded every time you run Bash. That is where you should put stuff you want loaded whenever you open a new Terminal.app window.

I personally put everything in .bashrc so that I don't have to restart the application for changes to take effect.

[Jul 26, 2017] I feel stupid declare not found in bash scripting

A single space can make a huge difference in bash :-)
www.linuxquestions.org

Mohtek

I feel stupid: declare not found in bash scripting? I was anxious to get my feet wet, and I'm only up to my toes before I'm stuck...this seems very very easy but I'm not sure what I've done wrong. Below is the script and its output. What the heck am I missing?

______________________________________________________
#!/bin/bash
declare -a PROD[0]="computers" PROD[1]="HomeAutomation"
printf "${ PROD[*]}"
_______________________________________________________

products.sh: 6: declare: not found
products.sh: 8: Syntax error: Bad substitution

wjevans_7d1@yahoo.co

I ran what you posted (but at the command line, not in a script, though that should make no significant difference), and got this:

Code:

-bash: ${ PROD[*]}: bad substitution

In other words, I couldn't reproduce your first problem, the "declare: not found" error. Try the declare command by itself, on the command line.

And I got rid of the "bad substitution" problem when I removed the space which is between the ${ and the PROD on the printf line.

Hope this helps.

blackhole54

The previous poster identified your second problem.

As far as your first problem goes ... I am not a bash guru although I have written a number of bash scripts. So far I have found no need for declare statements. I suspect that you might not need it either. But if you do want to use it, the following does work:

Code:
#!/bin/bash

declare -a PROD
PROD[0]="computers"
PROD[1]="HomeAutomation"
printf "${PROD[*]}\n"

EDIT: My original post was based on an older version of bash. When I tried the declare statement you posted I got an error message, but one that was different from yours. I just tried it on a newer version of bash, and your declare statement worked fine. So it might depend on the version of bash you are running. What I posted above runs fine on both versions.
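One additional observation, offered as an assumption rather than a diagnosis: the error format products.sh: 6: declare: not found is typical of dash (a plain POSIX sh), not of bash, which suggests the script was invoked as sh products.sh instead of through its #!/bin/bash shebang. Running it explicitly with bash is a quick way to test that theory:

bash products.sh    # declare is a bash builtin, so bash itself should not report "not found"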

[Jul 26, 2017] Associative array declaration gotcha

Jul 26, 2017 | unix.stackexchange.com

bash silently does function return on (re-)declare of global associative read-only array - Unix & Linux Stack Exchange

Ron Burk :

Obviously cut out of a much more complex script that was more meaningful:

#!/bin/bash

function InitializeConfig(){
    declare -r -g -A SHCFG_INIT=( [a]=b )
    declare -r -g -A SHCFG_INIT=( [c]=d )
    echo "This statement never gets executed"
}

set -o xtrace

InitializeConfig
echo "Back from function"
The output looks like this:
ronburk@ubuntu:~/ubucfg$ bash bug.sh
+ InitializeConfig
+ SHCFG_INIT=([a]=b)
+ declare -r -g -A SHCFG_INIT
+ SHCFG_INIT=([c]=d)
+ echo 'Back from function'
Back from function
Bash seems to silently execute a function return upon the second declare statement. Starting to think this really is a new bug, but happy to learn otherwise.

Other details:

Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS:  -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gn$
uname output: Linux ubuntu 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Lin$
Machine Type: x86_64-pc-linux-gnu

Bash Version: 4.3
Patch Level: 11
Release Status: release
bash array readonly

By gum, you're right! Then I get readonly warning on second declare, which is reasonable, and the function completes. The xtrace output is also interesting; implies declare without single quotes is really treated as two steps. Ready to become superstitious about always single-quoting the argument to declare . Hard to see how popping the function stack can be anything but a bug, though. – Ron Burk Jun 14 '15 at 23:58

Weird. Doesn't happen in bash 4.2.53(1). – choroba Jun 14 '15 at 7:22
I can reproduce this problem with bash version 4.3.11 (Ubuntu 14.04.1 LTS). It works fine with bash 4.2.8 (Ubuntu 11.04). – Cyrus Jun 14 '15 at 7:34
Maybe related: unix.stackexchange.com/q/56815/116972 I can get expected result with declare -r -g -A 'SHCFG_INIT=( [a]=b )' . – yaegashi Jun 14 '15 at 23:22

I found this thread on bug-bash@gnu.org related to test -v on an associative array. In short, bash implicitly did test -v SHCFG_INIT[0] in your script. I'm not sure whether this behavior was introduced in 4.3.

You might want to use declare -p to workaround this...

if ! declare -p SHCFG_INIT >/dev/null 2>&1; then
    echo "looks like SHCFG_INIT not defined"
fi
====
Well, rats. I think your answer is correct, but also reveals I'm really asking two separate questions when I thought they were probably the same issue. Since the title better reflects what turns out to be the "other" question, I'll leave this up for a while and see if anybody knows what's up with the mysterious implicit function return... Thanks! – Ron Burk Jun 14 '15 at 17:01
Edited question to focus on the remaining issue. Thanks again for the answer on the "-v" issue with associative arrays. – Ron Burk Jun 14 '15 at 17:55
Accepting this answer. Complete answer is here plus your comments above plus (IMHO) there's a bug in this version of bash (can't see how there can be any excuse for popping the function stack without warning). Thanks for your excellent research on this! – Ron Burk Jun 21 '15 at 19:31
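Pulling together the workarounds mentioned in the answer and the comments above (a sketch):

# 1. Quote the whole assignment so declare parses it as a single argument:
declare -r -g -A 'SHCFG_INIT=( [a]=b )'

# 2. Or test whether the array is already defined before (re-)declaring it:
if ! declare -p SHCFG_INIT >/dev/null 2>&1; then
    declare -r -g -A SHCFG_INIT=( [a]=b )
fi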

[Jul 26, 2017] Typing variables: declare or typeset

Jul 26, 2017 | www.tldp.org

The declare or typeset builtins , which are exact synonyms, permit modifying the properties of variables. This is a very weak form of the typing [1] available in certain programming languages. The declare command is specific to version 2 or later of Bash. The typeset command also works in ksh scripts.

declare/typeset options
-r readonly
( declare -r var1 works the same as readonly var1 )

This is the rough equivalent of the C const type qualifier. An attempt to change the value of a readonly variable fails with an error message.

declare -r var1=1
echo "var1 = $var1"   # var1 = 1

(( var1++ ))          # x.sh: line 4: var1: readonly variable
-i integer
declare -i number
# The script will treat subsequent occurrences of "number" as an integer.             

number=3
echo "Number = $number"     # Number = 3

number=three
echo "Number = $number"     # Number = 0
# Tries to evaluate the string "three" as an integer.

Certain arithmetic operations are permitted for declared integer variables without the need for expr or let .

n=6/3
echo "n = $n"       # n = 6/3

declare -i n
n=6/3
echo "n = $n"       # n = 2
-a array
declare -a indices

The variable indices will be treated as an array .

-f function(s)
declare -f

A declare -f line with no arguments in a script causes a listing of all the functions previously defined in that script.

declare -f function_name

A declare -f function_name in a script lists just the function named.

-x export
declare -x var3

This declares a variable as available for exporting outside the environment of the script itself.

-x var=$value
declare -x var3=373

The declare command permits assigning a value to a variable in the same statement as setting its properties.

Example 9-10. Using declare to type variables
#!/bin/bash

func1 ()
{
  echo This is a function.
}

declare -f        # Lists the function above.

echo

declare -i var1   # var1 is an integer.
var1=2367
echo "var1 declared as $var1"
var1=var1+1       # Integer declaration eliminates the need for 'let'.
echo "var1 incremented by 1 is $var1."
# Attempt to change variable declared as integer.
echo "Attempting to change var1 to floating point value, 2367.1."
var1=2367.1       # Results in error message, with no change to variable.
echo "var1 is still $var1"

echo

declare -r var2=13.36         # 'declare' permits setting a variable property
                              #+ and simultaneously assigning it a value.
echo "var2 declared as $var2" # Attempt to change readonly variable.
var2=13.37                    # Generates error message, and exit from script.

echo "var2 is still $var2"    # This line will not execute.

exit 0                        # Script will not exit here.
Caution: using the declare builtin restricts the scope of a variable.
foo ()
{
FOO="bar"
}

bar ()
{
foo
echo $FOO
}

bar   # Prints bar.

However . . .

foo (){
declare FOO="bar"
}

bar ()
{
foo
echo $FOO
}

bar  # Prints nothing.


# Thank you, Michael Iatrou, for pointing this out.
9.2.1. Another use for declare

The declare command can be helpful in identifying variables, environmental or otherwise. This can be especially useful with arrays .

bash$ declare | grep HOME
HOME=/home/bozo

bash$ zzy=68

bash$ declare | grep zzy
zzy=68

bash$ Colors=([0]="purple" [1]="reddish-orange" [2]="light green")

bash$ echo ${Colors[@]}
purple reddish-orange light green

bash$ declare | grep Colors
Colors=([0]="purple" [1]="reddish-orange" [2]="light green")

Notes
[1] In this context, typing a variable means to classify it and restrict its properties. For example, a variable declared or typed as an integer is no longer available for string operations .
declare -i intvar

intvar=23
echo "$intvar"   # 23
intvar=stringval
echo "$intvar"   # 0

[Jul 25, 2017] Beginner Mistakes

Jul 25, 2017 | wiki.bash-hackers.org

Script execution

Your perfect Bash script executes with syntax errors. If you write Bash scripts with Bash-specific syntax and features, run them with Bash , and run them with Bash in native mode .

Wrong

See also:

Your script named "test" doesn't execute Give it another name. The executable test already exists.

In Bash it's a builtin. With other shells, it might be an executable file. Either way, it's bad name choice!

Workaround: You can call it using the pathname:

/home/user/bin/test

Globbing

Brace expansion is not globbing. The following command line is not related to globbing (filename expansion):

# YOU EXPECT
# -i1.vob -i2.vob -i3.vob ....

echo -i{*.vob,}

# YOU GET
# -i*.vob -i
Why? The brace expansion is simple text substitution. All possible text formed by the prefix, the postfix and the braces themselves are generated. In the example, these are only two: -i*.vob and -i . The filename expansion happens after that, so there is a chance that -i*.vob is expanded to a filename - if you have files like -ihello.vob . But it definitely doesn't do what you expected.
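If the goal really was one -i option per .vob file, a loop over the glob does what the brace expansion cannot (a sketch):

for f in *.vob; do
    printf ' -i%s' "$f"
done
printf '\n'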

Please see:

Test-command

Please see:

Variables: setting variables

The dollar-sign: there is no $ (dollar-sign) when you reference the name of a variable! Bash is not PHP!
# THIS IS WRONG!
$myvar="Hello world!"

A variable name preceded with a dollar-sign always means that the variable gets expanded . In the example above, it might expand to nothing (because it wasn't set), effectively resulting in

="Hello world!"
which definitely is wrong !

When you need the name of a variable, you write only the name, for example:

picture=/usr/share/images/foo.png

When you need the content of a variable, you prefix its name with a dollar-sign, like:

echo "$picture"

Whitespace: putting spaces on either or both sides of the equal-sign ( = ) when assigning a value to a variable will fail.
# INCORRECT 1
example = Hello

# INCORRECT 2
example= Hello

# INCORRECT 3
example =Hello

The only valid form is no spaces between the variable name and assigned value

# CORRECT 1
example=Hello

# CORRECT 2
example=" Hello"

Expanding (using) variables A typical beginner's trap is quoting.

As noted above, when you want to expand a variable i.e. "get the content", the variable name needs to be prefixed with a dollar-sign. But, since Bash knows various ways to quote and does word-splitting, the result isn't always the same.

Let's define an example variable containing text with spaces:

example="Hello world"
Used form     result         number of words
$example      Hello world    2
"$example"    Hello world    1
\$example     $example       1
'$example'    $example       1
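A small helper makes the word counts from the table directly observable:

example="Hello world"
count() { echo $#; }    # prints the number of arguments it receives
count $example          # 2 - the unquoted expansion is word-split
count "$example"        # 1 - double quotes keep the value as one word
count '$example'        # 1 - a literal, unexpanded string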

If you use parameter expansion, you must use the name ( PATH ) of the referenced variable/parameter, i.e. not its expansion ( $PATH ):

# WRONG!
echo "The first character of PATH is ${$PATH:0:1}"

# CORRECT
echo "The first character of PATH is ${PATH:0:1}"

Note that if you are using variables in arithmetic expressions , then the bare name is allowed:

((a=$a+7))         # Add 7 to a
((a = a + 7))      # Add 7 to a.  Identical to the previous command.
((a += 7))         # Add 7 to a.  Identical to the previous command.

a=$((a+7))         # POSIX-compatible version of previous code.


Exporting: Exporting a variable means giving newly created (child-)processes a copy of that variable; it does not copy a variable created in a child process back to the parent process. The following example does not work, since the variable hello is set in a child process (the process you execute to start that script, ./script.sh ):
$ cat script.sh
export hello=world

$ ./script.sh
$ echo $hello
$

Exporting is one-way. The direction is parent process to child process, not the reverse. The above example will work, when you don't execute the script, but include ("source") it:

$ source ./script.sh
$ echo $hello
world
$
In this case, the export command is of no use.


Exit codes Reacting to exit codes If you just want to react to an exit code, regardless of its specific value, you don't need to use $? in a test command like this:
grep ^root: /etc/passwd >/dev/null 2>&1

if [ $? -ne 0 ]; then
  echo "root was not found - check the pub at the corner"
fi

This can be simplified to:

if ! grep ^root: /etc/passwd >/dev/null 2>&1; then
  echo "root was not found - check the pub at the corner"
fi

Or, simpler yet:

grep ^root: /etc/passwd >/dev/null 2>&1 ||
  echo "root was not found - check the pub at the corner"

If you need the specific value of $? , there's no other choice. But if you need only a "true/false" exit indication, there's no need for $? .


Output vs. Return Value It's important to remember the different ways to run a child command, and whether you want the output, the return value, or neither.

When you want to run a command (or a pipeline) and save (or print) the output , whether as a string or an array, you use Bash's $(command) syntax:

output=$(ls -l /tmp)
newvariable=$(printf "foo")

When you want to use the return value of a command, just use the command, or add ( ) to run a command or pipeline in a subshell:

if grep someuser /etc/passwd ; then
    # do something
fi

if ( w | grep someuser | grep sqlplus ) ; then
    # someuser is logged in and running sqlplus
fi

Make sure you're using the form you intended:

# WRONG!
if $(grep ERROR /var/log/messages) ; then
    # send alerts
fi
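For contrast, a minimal corrected version might test grep's exit status directly (the -q option suppresses grep's output):

# CORRECT
if grep -q ERROR /var/log/messages; then
    echo "errors found - send alerts"
fi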

[Jul 25, 2017] Arrays in bash 4.x

Jul 25, 2017 | wiki.bash-hackers.org

Purpose An array is a parameter that holds mappings from keys to values. Arrays are used to store a collection of parameters into a parameter. Arrays (in any programming language) are a useful and common composite data structure, and one of the most important scripting features in Bash and other shells.

Here is an abstract representation of an array named NAMES . The indexes go from 0 to 3.

NAMES
 0: Peter
 1: Anna
 2: Greg
 3: Jan

Instead of using 4 separate variables, multiple related variables are grouped together into elements of the array, accessible by their key. If you want the second name, ask for index 1 of the array NAMES. Indexing: Bash supports two different types of ksh-like one-dimensional arrays. Multidimensional arrays are not implemented.

Syntax Referencing To accommodate referring to array variables and their individual elements, Bash extends the parameter naming scheme with a subscript suffix. Any valid ordinary scalar parameter name is also a valid array name: [[:alpha:]_][[:alnum:]_]* . The parameter name may be followed by an optional subscript enclosed in square brackets to refer to a member of the array.

The overall syntax is arrname[subscript] - where for indexed arrays, subscript is any valid arithmetic expression, and for associative arrays, any nonempty string. Subscripts are first processed for parameter and arithmetic expansions, and command and process substitutions. When used within parameter expansions or as an argument to the unset builtin, the special subscripts * and @ are also accepted which act upon arrays analogously to the way the @ and * special parameters act upon the positional parameters. In parsing the subscript, bash ignores any text that follows the closing bracket up to the end of the parameter name.

With few exceptions, names of this form may be used anywhere ordinary parameter names are valid, such as within arithmetic expressions , parameter expansions , and as arguments to builtins that accept parameter names. An array is a Bash parameter that has been given the -a (for indexed) or -A (for associative) attributes . However, any regular (non-special or positional) parameter may be validly referenced using a subscript, because in most contexts, referring to the zeroth element of an array is synonymous with referring to the array name without a subscript.

# "x" is an ordinary non-array parameter.
$ x=hi; printf '%s ' "$x" "${x[0]}"; echo "${_[0]}"
hi hi hi

The only exceptions to this rule are in a few cases where the array variable's name refers to the array as a whole. This is the case for the unset builtin (see destruction ) and when declaring an array without assigning any values (see declaration ). Declaration The following explicitly give variables array attributes, making them arrays:

Syntax Description
ARRAY=() Declares an indexed array ARRAY and initializes it to be empty. This can also be used to empty an existing array.
ARRAY[0]= Generally sets the first element of an indexed array. If no array ARRAY existed before, it is created.
declare -a ARRAY Declares an indexed array ARRAY . An existing array is not initialized.
declare -A ARRAY Declares an associative array ARRAY . This is the one and only way to create associative arrays.
Storing values: Storing values in arrays is just as simple as storing values in normal variables.
Syntax Description
ARRAY[N]=VALUE Sets the element N of the indexed array ARRAY to VALUE . N can be any valid arithmetic expression
ARRAY[STRING]=VALUE Sets the element indexed by STRING of the associative array ARRAY .
ARRAY=VALUE As above. If no index is given, as a default the zeroth element is set to VALUE . Careful, this is even true of associative arrays - there is no error if no key is specified, and the value is assigned to string index "0".
ARRAY=(E1 E2 ) Compound array assignment - sets the whole array ARRAY to the given list of elements indexed sequentially starting at zero. The array is unset before assignment unless the += operator is used. When the list is empty ( ARRAY=() ), the array will be set to an empty array. This method obviously does not use explicit indexes. An associative array can not be set like that! Clearing an associative array using ARRAY=() works.
ARRAY=([X]=E1 [Y]=E2 ) Compound assignment for indexed arrays with index-value pairs declared individually (here for example X and Y ). X and Y are arithmetic expressions. This syntax can be combined with the above - elements declared without an explicitly specified index are assigned sequentially starting at either the last element with an explicit index, or zero.
ARRAY=([S1]=E1 [S2]=E2 ) Individual mass-setting for associative arrays . The named indexes (here: S1 and S2 ) are strings.
ARRAY+=(E1 E2 ) Append to ARRAY.

As of now, arrays can't be exported. Getting values: for the details of parameter expansion applied to arrays, see the article about parameter expansion and check the notes about arrays there.

Syntax Description
${ARRAY[N]} Expands to the value of the index N in the indexed array ARRAY . If N is a negative number, it's treated as the offset from the maximum assigned index (can't be used for assignment)
${ARRAY[S]} Expands to the value of the index S in the associative array ARRAY .
"${ARRAY[@]}"
${ARRAY[@]}
"${ARRAY[*]}"
${ARRAY[*]}
Similar to mass-expanding positional parameters , this expands to all elements. If unquoted, both subscripts * and @ expand to the same result, if quoted, @ expands to all elements individually quoted, * expands to all elements quoted as a whole.
"${ARRAY[@]:N:M}"
${ARRAY[@]:N:M}
"${ARRAY[*]:N:M}"
${ARRAY[*]:N:M}
Similar to what this syntax does for the characters of a single string when doing substring expansion , this expands to M elements starting with element N . This way you can mass-expand individual indexes. The rules for quoting and the subscripts * and @ are the same as above for the other mass-expansions.

For clarification: When you use the subscripts @ or * for mass-expanding, then the behaviour is exactly what it is for $@ and $* when mass-expanding the positional parameters . You should read this article to understand what's going on. Metadata

Syntax Description
${#ARRAY[N]} Expands to the length of an individual array member at index N ( stringlength )
${#ARRAY[STRING]} Expands to the length of an individual associative array member at index STRING ( stringlength )
${#ARRAY[@]}
${#ARRAY[*]}
Expands to the number of elements in ARRAY
${!ARRAY[@]}
${!ARRAY[*]}
Expands to the indexes in ARRAY since BASH 3.0
Destruction The unset builtin command is used to destroy (unset) arrays or individual elements of arrays.
Syntax Description
unset -v ARRAY
unset -v ARRAY[@]
unset -v ARRAY[*]
Destroys a complete array
unset -v ARRAY[N] Destroys the array element at index N
unset -v ARRAY[STRING] Destroys the array element of the associative array at index STRING

It is best to explicitly specify -v when unsetting variables with unset.

An unquoted subscript like ARRAY[N] on the command line can cause pathname expansion to occur due to the presence of glob characters.

Example: You are in a directory with a file named x1 , and you want to destroy an array element x[1] , with

unset x[1]
then pathname expansion will expand to the filename x1 and break your processing!

Even worse, if nullglob is set, your array/index will disappear.

To avoid this, always quote the array name and index:

unset -v 'x[1]'

This applies generally to all commands which take variable names as arguments. Single quotes are preferred.

Usage Numerical Index Numerical indexed arrays are easy to understand and easy to use. The Purpose and Indexing chapters above more or less explain all the needed background theory.

Now, some examples and comments for you.

Let's say we have an array sentence which is initialized as follows:

sentence=(Be liberal in what you accept, and conservative in what you send)

Since no special code is there to prevent word splitting (no quotes), every word there will be assigned to an individual array element. When you count the words you see, you should get 12. Now let's see if Bash has the same opinion:

$ echo ${#sentence[@]}
12

Yes, 12. Fine. You can take this number to walk through the array. Just subtract 1 from the number of elements, and start your walk at 0 (zero)

((n_elements=${#sentence[@]}, max_index=n_elements - 1))

for ((i = 0; i <= max_index; i++)); do
  echo "Element $i: '${sentence[i]}'"
done

You always have to remember this, as newbies sometimes trip over it: numerical array indexing begins at 0 (zero). A walk that does not rely on counting elements is sketched below.
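A more robust walk iterates over the indexes that are actually assigned, via ${!array[@]}; a minimal sketch (it also copes with the "sparse arrays" discussed next):

sentence=(Be liberal in what you accept, and conservative in what you send)
unset 'sentence[3]'                 # make the array sparse on purpose

for i in "${!sentence[@]}"; do      # expands to the indexes actually assigned
    echo "Element $i: '${sentence[i]}'"
done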

The method above, walking through an array by just knowing its number of elements, only works for arrays where all elements are set, of course. If one element in the middle is removed, then the calculation is nonsense, because the number of elements doesn't correspond to the highest used index anymore (we call them "sparse arrays"). Associative (Bash 4): Associative arrays (or hash tables) are not much more complicated than numerical indexed arrays. The numerical index value (in Bash a number starting at zero) is simply replaced with an arbitrary string:

# declare -A, introduced with Bash 4 to declare an associative array
declare -A sentence

sentence[Begin]='Be liberal in what'
sentence[Middle]='you accept, and conservative'
sentence[End]='in what you send'
sentence['Very end']=...

Beware: don't rely on the fact that the elements are ordered in memory like they were declared, it could look like this:

# output from 'set' command
sentence=([End]="in what you send" [Middle]="you accept, and conservative " [Begin]="Be liberal in what " ["Very end"]="...")
This effectively means, you can get the data back with "${sentence[@]}" , of course (just like with numerical indexing), but you can't rely on a specific order. If you want to store ordered data, or re-order data, go with numerical indexes. For associative arrays, you usually query known index values:
for element in Begin Middle End "Very end"; do
    printf "%s" "${sentence[$element]}"
done
printf "\n"

A nice code example: Checking for duplicate files using an associative array indexed with the SHA sum of the files:

# Thanks to Tramp in #bash for the idea and the code

unset flist; declare -A flist;
while read -r sum fname; do 
    if [[ ${flist[$sum]} ]]; then
        printf 'rm -- "%s" # Same as >%s<\n' "$fname" "${flist[$sum]}" 
    else
        flist[$sum]="$fname"
    fi
done <  <(find . -type f -exec sha256sum {} +)  >rmdups

Integer arrays Any type attributes applied to an array apply to all elements of the array. If the integer attribute is set for either indexed or associative arrays, then values are considered as arithmetic for both compound and ordinary assignment, and the += operator is modified in the same way as for ordinary integer variables.

 ~ $ ( declare -ia 'a=(2+4 [2]=2+2 [a[2]]="a[2]")' 'a+=(42 [a[4]]+=3)'; declare -p a )
declare -ai a='([0]="6" [2]="4" [4]="7" [5]="42")'

a[0] is assigned the result of 2+4. a[2] gets the result of 2+2. The last index in the first assignment is the result of a[2], which has already been assigned as 4; the element a[4] is therefore set, and its value, the arithmetic result of "a[2]", is also 4.

This shows that even though any existing arrays named a in the current scope have already been unset by using = instead of += to the compound assignment, arithmetic variables within keys can self-reference any elements already assigned within the same compound-assignment. With integer arrays this also applies to expressions to the right of the = . (See evaluation order , the right side of an arithmetic assignment is typically evaluated first in Bash.)

The second compound assignment argument to declare uses += , so it appends after the last element of the existing array rather than deleting it and creating a new array, so a[5] gets 42 .

Lastly, the element whose index is the value of a[4] ( 4 ), gets 3 added to its existing value, making a[4] == 7 . Note that having the integer attribute set this time causes += to add, rather than append a string, as it would for a non-integer array.

The single quotes force the assignments to be evaluated in the environment of declare . This is important because attributes are only applied to the assignment after assignment arguments are processed. Without them the += compound assignment would have been invalid, and strings would have been inserted into the integer array without evaluating the arithmetic. A special-case of this is shown in the next section.

(Assignment processed in the environment of declare is similar to eval, but there are differences.) Todo: discuss this in detail.

Indirection Arrays can be expanded indirectly using the indirect parameter expansion syntax. Parameters whose values are of the form: name[index] , name[@] , or name[*] when expanded indirectly produce the expected results. This is mainly useful for passing arrays (especially multiple arrays) by name to a function.

This example is an "isSubset"-like predicate which returns true if all key-value pairs of the array given as the first argument to isSubset correspond to a key-value of the array given as the second argument. It demonstrates both indirect array expansion and indirect key-passing without eval using the aforementioned special compound assignment expansion.

isSubset() {
    local -a 'xkeys=("${!'"$1"'[@]}")' 'ykeys=("${!'"$2"'[@]}")'
    set -- "${@/%/[key]}"

    (( ${#xkeys[@]} <= ${#ykeys[@]} )) || return 1

    local key
    for key in "${xkeys[@]}"; do
        [[ ${!2+_} && ${!1} == ${!2} ]] || return 1
    done
}

main() {
    # "a" is a subset of "b"
    local -a 'a=({0..5})' 'b=({0..10})'
    isSubset a b
    echo $? # true

    # "a" contains a key not in "b"
    local -a 'a=([5]=5 {6..11})' 'b=({0..10})'
    isSubset a b
    echo $? # false

    # "a" contains an element whose value != the corresponding member of "b"
    local -a 'a=([5]=5 6 8 9 10)' 'b=({0..10})'
    isSubset a b
    echo $? # false
}

main

This script is one way of implementing a crude multidimensional associative array by storing array definitions in an array and referencing them through indirection. The script takes two keys and dynamically calls a function whose name is resolved from the array.

callFuncs() {
    # Set up indirect references as positional parameters to minimize local name collisions.
    set -- "${@:1:3}" ${2+'a["$1"]' "$1"'["$2"]'}

    # The only way to test for set but null parameters is unfortunately to test each individually.
    local x
    for x; do
        [[ $x ]] || return 0
    done

    local -A a=(
        [foo]='([r]=f [s]=g [t]=h)'
        [bar]='([u]=i [v]=j [w]=k)'
        [baz]='([x]=l [y]=m [z]=n)'
        ) ${4+${a["$1"]+"${1}=${!3}"}} # For example, if "$1" is "bar" then define a new array: bar=([u]=i [v]=j [w]=k)

    ${4+${a["$1"]+"${!4-:}"}} # Now just lookup the new array. for inputs: "bar" "v", the function named "j" will be called, which prints "j" to stdout.
}

main() {
    # Define functions named {f..n} which just print their own names.
    local fun='() { echo "$FUNCNAME"; }' x

    for x in {f..n}; do
        eval "${x}${fun}"
    done

    callFuncs "$@"
}

main "$@"

Bugs and Portability Considerations

Bugs Evaluation order Here are some of the nasty details of array assignment evaluation order. You can use this testcase code to generate these results.
Each testcase prints evaluation order for indexed array assignment
contexts. Each context is tested for expansions (represented by digits) and
arithmetic (letters), ordered from left to right within the expression. The
output corresponds to the way evaluation is re-ordered for each shell:

a[ $1 a ]=${b[ $2 b ]:=${c[ $3 c ]}}               No attributes
a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]}                  typeset -ia a
a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]}                  typeset -ia b
a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]}                  typeset -ia a b
(( a[ $1 a ] = b[ $2 b ] ${c[ $3 c ]} ))           No attributes
(( a[ $1 a ] = ${b[ $2 b ]:=c[ $3 c ]} ))          typeset -ia b
a+=( [ $1 a ]=${b[ $2 b ]:=${c[ $3 c ]}} [ $4 d ]=$(( $5 e )) ) typeset -a a
a+=( [ $1 a ]=${b[ $2 b ]:=c[ $3 c ]} [ $4 d ]=${5}e ) typeset -ia a

bash: 4.2.42(1)-release
2 b 3 c 2 b 1 a
2 b 3 2 b 1 a c
2 b 3 2 b c 1 a
2 b 3 2 b c 1 a c
1 2 3 c b a
1 2 b 3 2 b c c a
1 2 b 3 c 2 b 4 5 e a d
1 2 b 3 2 b 4 5 a c d e

ksh93: Version AJM 93v- 2013-02-22
1 2 b b a
1 2 b b a
1 2 b b a
1 2 b b a
1 2 3 c b a
1 2 b b a
1 2 b b a 4 5 e d
1 2 b b a 4 5 d e

mksh: @(#)MIRBSD KSH R44 2013/02/24
2 b 3 c 1 a
2 b 3 1 a c
2 b 3 c 1 a
2 b 3 c 1 a
1 2 3 c a b
1 2 b 3 c a
1 2 b 3 c 4 5 e a d
1 2 b 3 4 5 a c d e

zsh: 5.0.2
2 b 3 c 2 b 1 a
2 b 3 2 b 1 a c
2 b 1 a
2 b 1 a
1 2 3 c b a
1 2 b a
1 2 b 3 c 2 b 4 5 e
1 2 b 3 2 b 4 5

See also

[Jul 25, 2017] Handling positional parameters

Notable quotes:
"... under construction ..."
"... under construction ..."
Jul 25, 2017 | wiki.bash-hackers.org

Intro The day will come when you want to give arguments to your scripts. These arguments are known as positional parameters . Some relevant special parameters are described below:

Parameter(s) Description
$0 the first positional parameter, equivalent to argv[0] in C, see the first argument
$FUNCNAME the function name ( attention : inside a function, $0 is still the $0 of the shell, not the function name)
$1 ... $9 the argument list elements from 1 to 9
${10} ... ${N} the argument list elements beyond 9 (note the parameter expansion syntax!)
$* all positional parameters except $0 , see mass usage
$@ all positional parameters except $0 , see mass usage
$# the number of arguments, not counting $0

These positional parameters reflect exactly what was given to the script when it was called.

Option-switch parsing (e.g. -h for displaying help) is not performed at this point.

See also the dictionary entry for "parameter" . The first argument The very first argument you can access is referenced as $0 . It is usually set to the script's name exactly as called, and it's set on shell initialization:

Testscript - it just echoes $0 :


#!/bin/bash

echo "$0"

You see, $0 is always set to the name the script is called with ( > is the prompt ):

> ./testscript 

./testscript


> /usr/bin/testscript

/usr/bin/testscript

However, this isn't true for login shells:


> echo "$0"

-bash

In other terms, $0 is not a positional parameter, it's a special parameter independent from the positional parameter list. It can be set to anything. In the ideal case it's the pathname of the script, but since this gets set on invocation, the invoking program can easily influence it (the login program does that for login shells, by prefixing a dash, for example).

Inside a function, $0 still behaves as described above. To get the function name, use $FUNCNAME. Shifting: The builtin command shift is used to change the positional parameter values: the first parameter is discarded, $2 becomes $1, $3 becomes $2, and so on, and $# is decremented.

The command can take a number as argument: the number of positions to shift, e.g. shift 4 shifts $5 to $1. Using them: Enough theory, you want to access your script-arguments. Well, here we go. One by one: One way is to access specific parameters:


#!/bin/bash

echo "Total number of arguments: $#"

echo "Argument 1: $1"

echo "Argument 2: $2"

echo "Argument 3: $3"

echo "Argument 4: $4"

echo "Argument 5: $5"

While useful in another situation, this way lacks flexibility. The maximum number of arguments is a fixed value - which is a bad idea if you write a script that takes many filenames as arguments.

⇒ forget that one. Loops: There are several ways to loop through the positional parameters.


You can code a C-style for-loop using $# as the end value. On every iteration, the shift -command is used to shift the argument list:


numargs=$#

for ((i=1 ; i <= numargs ; i++))

do

    echo "$1"

    shift

done

Not very stylish, but usable. The numargs variable is used to store the initial value of $# because the shift command will change it as the script runs.


Another way to iterate one argument at a time is the for loop without a given wordlist. The loop uses the positional parameters as a wordlist:


for arg

do

    echo "$arg"

done

Advantage: The positional parameters will be preserved

The next method is similar to the first example (the for loop), but it doesn't test for reaching $# . It shifts and checks if $1 still expands to something, using the test command :


while [ "$1" ]

do

    echo "$1"

    shift

done

Looks nice, but has the disadvantage of stopping when $1 is empty (null-string). Let's modify it to run as long as $1 is defined (but may be null), using parameter expansion for an alternate value :


while [ "${1+defined}" ]; do

  echo "$1"

  shift

done

Getopts: There is a small tutorial dedicated to ''getopts'' (under construction); a short sketch appears in the "Using getopts" section below. Mass usage: All Positional Parameters: Sometimes it's necessary to just "relay" or "pass" given arguments to another program. It's very inefficient to do that in one of these loops, as you will most likely destroy integrity (spaces!).

The shell developers created $* and $@ for this purpose.

As overview:

Syntax Effective result
$*        $1 $2 $3 ... ${N}
$@        $1 $2 $3 ... ${N}
"$*"      "$1c$2c$3c...c${N}"
"$@"      "$1" "$2" "$3" ... "${N}"

Without being quoted (double quotes), both have the same effect: All positional parameters from $1 to the last one used are expanded without any special handling.

When the $* special parameter is double quoted, it expands to the equivalent of: "$1c$2c$3c$4c ..$N" , where 'c' is the first character of IFS .

But when the $@ special parameter is used inside double quotes, it expands to the equivalent of

"$1" "$2" "$3" "$4" .. "$N"

which reflects all positional parameters as they were set initially and passed to the script or function. If you want to re-use your positional parameters to call another program (for example in a wrapper-script), then this is the choice for you, use double quoted "$@" .

Well, let's just say: You almost always want a quoted "$@" ! Range Of Positional Parameters Another way to mass expand the positional parameters is similar to what is possible for a range of characters using substring expansion on normal parameters and the mass expansion range of arrays .

${@:START:COUNT}

${*:START:COUNT}

"${@:START:COUNT}"

"${*:START:COUNT}"

The rules for using @ or * and quoting are the same as above. This will expand COUNT number of positional parameters beginning at START . COUNT can be omitted ( ${@:START} ), in which case, all positional parameters beginning at START are expanded.

If START is negative, the positional parameters are numbered in reverse starting with the last one.

COUNT may not be negative, i.e. the element count may not be decremented.
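A quick illustration of these rules (the values are arbitrary):

set -- one two three four five
echo "${@:2:2}"   # two three
echo "${@:4}"     # four five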

Example: START at the last positional parameter:


echo "${@: -1}"

Attention: As of Bash 4, a START of 0 includes the special parameter $0 , i.e. the shell name or whatever $0 is set to, when the positional parameters are in use. A START of 1 begins at $1 . In Bash 3 and older, both 0 and 1 began at $1 . Setting Positional Parameters: Setting positional parameters with command line arguments is not the only way to set them. The builtin command set may be used to "artificially" change the positional parameters from inside the script or function:


set "This is" my new "set of" positional parameters



# RESULTS IN

# $1: This is

# $2: my

# $3: new

# $4: set of

# $5: positional

# $6: parameters

It's wise to signal "end of options" when setting positional parameters this way. If not, the dashes might be interpreted as an option switch by set itself:


# both ways work, but behave differently. See the article about the set command!

set -- ...

set - ...

Alternately this will also preserve any verbose (-v) or tracing (-x) flags, which may otherwise be reset by set


set -$- ...

Production examples Using a while loop To make your program accept options as standard command syntax:

COMMAND [options] <params> # Like 'cat -A file.txt'

See simple option parsing code below. It's not that flexible. It doesn't auto-interpret combined options (-fu USER) but it works and is a good rudimentary way to parse your arguments.


#!/bin/sh

# Keeping options in alphabetical order makes it easy to add more.



while :

do

    case "$1" in

      -f | --file)

          file="$2"   # You may want to check validity of $2

          shift 2

          ;;

      -h | --help)

          display_help  # Call your function

          # no shifting needed here, we're done.

          exit 0

          ;;

      -u | --user)

          username="$2" # You may want to check validity of $2

          shift 2

          ;;

      -v | --verbose)

          #  It's better to assign a string, than a number like "verbose=1"

          #  because if you're debugging the script with "bash -x" code like this:

          #

          #    if [ "$verbose" ] ...

          #

          #  You will see:

          #

          #    if [ "verbose" ] ...

          #

          #  Instead of cryptic

          #

          #    if [ "1" ] ...

          #

          verbose="verbose"

          shift

          ;;

      --) # End of all options

          shift

          break
          ;;

      -*)

          echo "Error: Unknown option: $1" >&2

          exit 1

          ;;

      *)  # No more options

          break

          ;;

    esac

done



# End of file

Filter unwanted options with a wrapper script: This simple wrapper enables filtering unwanted options (here: -a and --all for ls ) out of the command line. It reads the positional parameters and builds a filtered array consisting of them, then calls ls with the new option set. It also respects the -- as "end of options" for ls and doesn't change anything after it:


#!/bin/bash



# simple ls(1) wrapper that doesn't allow the -a option



options=()  # the buffer array for the parameters

eoo=0       # end of options reached



while [[ $1 ]]

do

    if ! ((eoo)); then

        case "$1" in

          -a)

              shift

              ;;

          --all)

              shift

              ;;

          -[^-]*a*|-a?*)

              options+=("${1//a}")

              shift

              ;;

          --)

              eoo=1

              options+=("$1")

              shift

              ;;

          *)

              options+=("$1")

              shift

              ;;

        esac

    else

        options+=("$1")



        # Another (worse) way of doing the same thing:

        # options=("${options[@]}" "$1")

        shift

    fi

done



/bin/ls "${options[@]}"

Using getopts: There is a small tutorial dedicated to ''getopts'' (under construction); a minimal sketch follows.
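As a hedged sketch only: the option names mirror the while-loop example above, and display_help is assumed to be defined elsewhere. Note that the getopts builtin handles short options only, so the long variants (--file, --user) still need the manual approach:

#!/bin/bash
# parse -f FILE, -u USER, -v and -h with the getopts builtin
while getopts ":f:u:vh" opt; do
    case "$opt" in
        f)  file=$OPTARG ;;        # you may want to check validity of $OPTARG
        u)  username=$OPTARG ;;
        v)  verbose="verbose" ;;
        h)  display_help; exit 0 ;;
        :)  echo "Error: -$OPTARG requires an argument" >&2; exit 1 ;;
        \?) echo "Error: Unknown option: -$OPTARG" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))    # "$@" now holds only the non-option arguments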

Discussion 2010/04/14 14:20
The shell-developers invented $* and $@ for this purpose.
Without being quoted (double-quoted), both have the same effect: All positional parameters from $1 to the last used one are expanded, separated by the first character of IFS (represented by "c" here, but usually a space):
$1c$2c$3c$4c........$N

Without double quotes, $* and $@ are expanding the positional parameters separated by only space, not by IFS.


#!/bin/bash



export IFS='-'



echo -e $*

echo -e $@


$./test "This is" 2 3

This is 2 3

This is 2 3

2011/02/18 16:11

#!/bin/bash

OLDIFS="$IFS"
IFS='-'        # instead of: export IFS='-'

echo -e "$*"   # quoted, so the first character of IFS is actually used
echo -e "$@"

IFS="$OLDIFS"

2011/02/18 16:14 #should be echo -e "$*"

2012/04/20 10:32 Here's yet another non-getopts way.

http://bsdpants.blogspot.de/2007/02/option-ize-your-shell-scripts.html

2012/07/16 14:48 Hi there!

What if I use "$@" in subsequent function calls, but arguments are strings?

I mean, having:


#!/bin/bash

echo "$@"

echo n: $#

If you use it


mypc$ script arg1 arg2 "asd asd" arg4

arg1 arg2 asd asd arg4

n: 4

But having


#!/bin/bash

myfunc()

{

  echo "$@"

  echo n: $#

}

echo "$@"

echo n: $#

myfunc "$@"

you get:


mypc$ myscrpt arg1 arg2 "asd asd" arg4

arg1 arg2 asd asd arg4

4

arg1 arg2 asd asd arg4

5

As you can see, there is no way for the function to know that a parameter was a single string and not a space-separated list of arguments.

Any idea how to solve it? I've tested calling functions and doing expansion in almost all ways, with no results.

2012/08/12 09:11 I don't know why it fails for you. It should work if you use "$@" , of course.

See the example I used your second script with:


$ ./args1 a b c "d e" f

a b c d e f

n: 5

a b c d e f

n: 5

[Jul 25, 2017] Bash function for 'cd' aliases

Jul 25, 2017 | artofsoftware.org

Sep 2, 2011

Posted by craig in Tools


Tags

bash , CDPATH

Once upon a time I was playing with Windows Power Shell (WPSH) and discovered a very useful function for changing to commonly visited directories. The function, called "go", which was written by Peter Provost , grew on me as I used WPSH, so much so that I decided to implement it in bash after my WPSH experiments ended.

The problem is simple. Users of command line interfaces tend to visit the same directories repeatedly over the course of their work, and having a way to get to these oft-visited places without a lot of typing is nice.

The solution entails maintaining a map of key-value pairs, where each key is an alias to a value, which is itself a commonly visited directory. The "go" function will, when given a string input, look that string up in the map, and if the key is found, move to the directory indicated by the value.

The map itself is just a specially formatted text file with one key-value entry per line, while each entry is separated into key-value components by the first encountered colon, with the left side being interpreted as the entry's key and the right side as its value.

Keys are typically short easily typed strings, while values can be arbitrary path names, and even contain references to environment variables. The effect of this is that "go" can respond dynamically to the environment.

Finally, the "go" function finds the map file by referring to an environment variable called "GO_FILE", which should have as its value the full path to the map.

Before I ran into this idea I had maintained a number of shell aliases, (i.e. alias dwork='cd $WORK_DIR'), to achieve a similar end, but every time I wanted to add a new location I was forced to edit my .bashrc file. Then I would subsequently have to re-source it or enter the alias again on the command line. Since I typically keep multiple shells open this is just a pain, and so I didn't add new aliases very often. With this method, a new entry in the "go file" is immediately available to all open shells without any extra finagling.

This functionality is related to CDPATH, but they are not replacements for one another. Indeed CDPATH is the more appropriate solution when you want to be able to "cd" to all or most of the sub-directories of some parent. On the other hand, "go" works very well for getting to a single directory easily. For example you might not want "/usr/local" in your CDPATH and still want an abbreviated way of getting to "/usr/local/share".

The code for the go function, as well as some brief documentation follows.

##############################################
# GO
#
# Inspired by some Windows Power Shell code
# from Peter Provost (peterprovost.org)
#
# Here are some examples entries:
# work:${WORK_DIR}
# source:${SOURCE_DIR}
# dev:/c/dev
# object:${USER_OBJECT_DIR}
# debug:${USER_OBJECT_DIR}/debug
###############################################
export GO_FILE=~/.go_locations
function go
{
   if [ -z "$GO_FILE" ]
   then
      echo "The variable GO_FILE is not set."
      return
   fi

   if [ ! -e "$GO_FILE" ]
   then
      echo "The 'go file': '$GO_FILE' does not exist."
      return
   fi

   dest=""
   oldIFS=${IFS}
   IFS=$'\n'
   for entry in `cat ${GO_FILE}`
   do
      if [ "$1" = ${entry%%:*} ]
      then
         #echo $entry
         dest=${entry##*:}
         break
      fi
   done

   if [ -n "$dest" ]
   then
      # Expand variables in the go file.
      #echo $dest
      cd `eval echo $dest`
   else
      echo "Invalid location, valid locations are:"
      cat $GO_FILE
   fi
   export IFS=${oldIFS}
}

[Jul 25, 2017] Local variables

Notable quotes:
"... completely local and separate ..."
Jul 25, 2017 | wiki.bash-hackers.org

local to a function:

myfunc() {
    local var=VALUE

    # alternative, only when used INSIDE a function
    declare var=VALUE

    ...
}

The local keyword (or declaring a variable using the declare command) tags a variable to be treated completely local and separate inside the function where it was declared:

foo=external

printvalue() {
    local foo=internal

    echo $foo
}

# this will print "external"
echo $foo

# this will print "internal"
printvalue

# this will print - again - "external"
echo $foo

[Jul 25, 2017] Environment variables

Notable quotes:
"... environment variables ..."
"... including the environment variables ..."
Jul 25, 2017 | wiki.bash-hackers.org

The environment space is not directly related to the topic about scope, but it's worth mentioning.

Every UNIX® process has a so-called environment . Other items, in addition to variables, are saved there, the so-called environment variables . When a child process is created (in Bash e.g. by simply executing another program, say ls to list files), the whole environment including the environment variables is copied to the new process. Reading that from the other side means: Only variables that are part of the environment are available in the child process.

A variable can be tagged to be part of the environment using the export command:

# create a new variable and set it:
# -> This is a normal shell variable, not an environment variable!
myvariable="Hello world."

# make the variable visible to all child processes:
# -> Make it an environment variable: "export" it
export myvariable

Remember that the exported variable is a copy . There is no provision to "copy it back to the parent." See the article about Bash in the process tree !


1) under specific circumstances, also by the shell itself

[Jul 25, 2017] Block commenting

Jul 25, 2017 | wiki.bash-hackers.org

Block commenting can be done with : (colon) and input redirection. The : does nothing; it's a pseudo command, so it does not care about standard input. In the following code example, you want to test mail and logging, but not dump the database or execute a shutdown:

#!/bin/bash
# Write info mails, do some tasks and bring down the system in a safe way
echo "System halt requested"
mail -s "System halt" netadmin@example.com
logger -t SYSHALT "System halt requested"

##### The following "code block" is effectively ignored
: <<"SOMEWORD"
/etc/init.d/mydatabase clean_stop
mydatabase_dump /var/db/db1 /mnt/fsrv0/backups/db1
logger -t SYSHALT "System halt: pre-shutdown actions done, now shutting down the system"
shutdown -h NOW
SOMEWORD
##### The ignored codeblock ends here
What happened? The : pseudo command was given some input by redirection (a here-document) - the pseudo command didn't care about it, effectively, the entire block was ignored.

The here-document-tag was quoted here to avoid substitutions in the "commented" text! Check redirection with here-documents for more

[Jul 25, 2017] Doing specific tasks: concepts, methods, ideas

Notable quotes:
"... under construction! ..."
Jul 25, 2017 | wiki.bash-hackers.org

[Jul 25, 2017] Bash 4 - a rough overview

Jul 25, 2017 | wiki.bash-hackers.org


Besides many bugfixes since Bash 3.2, Bash 4 will bring some interesting new features for shell users and scripters. See also Bash changes for a small general overview with more details.

Not all of the changes and news are included here, just the biggest or most interesting ones. The changes to completion, and the readline component are not covered. Though, if you're familiar with these parts of Bash (and Bash 4), feel free to write a chapter here.

The complete list of fixes and changes is in the CHANGES or NEWS file of your Bash 4 distribution.

The current available stable version is the 4.2 release (February 13, 2011). New or changed commands and keywords: The new "coproc" keyword: Bash 4 introduces the concept of coprocesses, a well known feature of other shells. The basic concept is simple: It will start any command in the background and set up an array that is populated with accessible files that represent the filedescriptors of the started process.

In other words: It lets you start a process in background and communicate with its input and output data streams.
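A minimal sketch of the idea, using bc as a convenient line-oriented partner process:

coproc BC { bc; }            # fds land in ${BC[0]} (read) and ${BC[1]} (write)
echo "3 * 7" >&"${BC[1]}"    # write to the coprocess's standard input
read -r result <&"${BC[0]}"  # read its answer from its standard output
echo "$result"               # prints: 21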

See The coproc keyword. The new "mapfile" builtin: The mapfile builtin is able to map the lines of a file directly into an array. This avoids having to fill an array yourself using a loop. It enables you to define the range of lines to read, and optionally call a callback, for example to display a progress bar.
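For instance, assuming any readable text file (here /etc/passwd):

mapfile -t lines < /etc/passwd     # one array element per line, -t strips the newlines
echo "read ${#lines[@]} lines; first entry: ${lines[0]}"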

See: The mapfile builtin command Changes to the "case" keyword The case construct understands two new action list terminators:

The ;& terminator causes execution to continue with the next action list (rather than terminate the case construct).

The ;;& terminator causes the case construct to test the next given pattern instead of terminating the whole execution.
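A small sketch; with animal=dog, all three branches fire:

animal=dog
case "$animal" in
    dog)  echo "four legs" ;&        # ;& falls through into the next action list
    crow) echo "makes a sound" ;;&   # ;;& resumes testing the remaining patterns
    d*)   echo "name starts with d" ;;
esac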

See The case statement Changes to the "declare" builtin The -p option now prints all attributes and values of declared variables (or functions, when used with -f ). The output is fully re-usable as input.

The new option -l declares a variable in a way that the content is converted to lowercase on assignment. For uppercase, the same applies to -u . The option -c causes the content to be capitalized before assignment.

declare -A declares associative arrays (see below). Changes to the "read" builtin The read builtin command has some interesting new features.

The -t option to specify a timeout value has been slightly tuned. It now accepts fractional values and the special value 0 (zero). When -t 0 is specified, read immediately returns with an exit status indicating if there's data waiting or not. However, when a timeout is given, and the read builtin times out, any partial data received up to the timeout is stored in the given variable, rather than lost. When a timeout is hit, read exits with a code greater than 128.

A new option, -i , was introduced to be able to preload the input buffer with some text (when Readline is used, with -e ). The user is able to change the text, or press return to accept it.

See The read builtin command Changes to the "help" builtin The builtin itself didn't change much, but the data displayed is more structured now. The help texts are in a better format, much easier to read.

There are two new options: -d displays the summary of a help text, -m displays a manpage-like format. Changes to the "ulimit" builtin: Besides the use of the 512-byte blocksize everywhere in POSIX mode, ulimit supports two new limits: -b for max socket buffer size and -T for max number of threads. Expansions: Brace Expansion: The brace expansion was tuned to provide expansion results with leading zeros when requesting a row of numbers.

See Brace expansion Parameter Expansion Methods to modify the case on expansion time have been added.

On expansion time you can modify the syntax by adding operators to the parameter name.

See Case modification on parameter expansion Substring expansion When using substring expansion on the positional parameters, a starting index of 0 now causes $0 to be prepended to the list (if the positional parameters are used). Before, this expansion started with $1:

# this should display $0 on Bash v4, $1 on Bash v3
echo ${@:0:1}

Globbing There's a new shell option globstar . When enabled, Bash will perform recursive globbing on ** – this means it matches all directories and files from the current position in the filesystem, rather than only the current level.

The new shell option dirspell enables spelling corrections on directory names during globbing.
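For example:

shopt -s globstar
printf '%s\n' **/*.sh    # every .sh file below the current directory, at any depth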

See Pathname expansion (globbing) Associative Arrays Besides the classic method of integer indexed arrays, Bash 4 supports associative arrays.

An associative array is an array indexed by an arbitrary string, something like

declare -A ASSOC

ASSOC[First]="first element"
ASSOC[Hello]="second element"
ASSOC[Peter Pan]="A weird guy"

See Arrays Redirection There is a new &>> redirection operator, which appends the standard output and standard error to the named file. This is the same as the good old >>FILE 2>&1 notation.

The parser now understands |& as a synonym for 2>&1 | , which redirects the standard error for a command through a pipe.
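Side by side (build.log is an arbitrary file name):

make &>> build.log       # append stdout and stderr, same as: make >> build.log 2>&1
make |& tee build.log    # pipe stdout and stderr, same as: make 2>&1 | tee build.log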

See Redirection Interesting new shell variables

Variable Description
BASHPID contains the PID of the current shell (this is different than what $$ does!)
PROMPT_DIRTRIM specifies the max. level of unshortened pathname elements in the prompt
FUNCNEST control the maximum number of shell function recursions

See Special parameters and shell variables Interesting new Shell Options The mentioned shell options are off by default unless otherwise mentioned.

Option Description
checkjobs check for and report any running jobs at shell exit
compat* set compatibility modes for older shell versions (influences regular expression matching in [[ ... ]] )
dirspell enables spelling corrections on directory names during globbing
globstar enables recursive globbing with **
lastpipe (4.2) to execute the last command in a pipeline in the current environment

See List of shell options Misc

[Jul 25, 2017] Keeping persistent history in bash

Jul 25, 2017 | eli.thegreenplace.net

June 11, 2013 at 19:27 Tags Linux , Software & Tools

Update (Jan 26, 2016): I posted a short update about my usage of persistent history.

For someone spending most of his time in front of a Linux terminal, history is very important. But traditional bash history has a number of limitations, especially when multiple terminals are involved (I sometimes have dozens open). Also it's not very good at preserving just the history you're interested in across reboots.

There are many approaches to improve the situation; here I want to discuss one I've been using very successfully in the past few months - a simple "persistent history" that keeps track of history across terminal instances, saving it into a dot-file in my home directory ( ~/.persistent_history ). All commands, from all terminal instances, are saved there, forever. I found this tremendously useful in my work - it saves me time almost every day.

Why does it go into a separate history and not the main one which is accessible by all the existing history manipulation tools? Because IMHO the latter is still worthwhile to be kept separate for the simple need of bringing up recent commands in a single terminal, without mixing up commands from other terminals. While the terminal is open, I want to press "Up" and get the previous command, even if I've executed a 1000 other commands in other terminal instances in the meantime.

Persistent history is very easy to set up. Here's the relevant portion of my ~/.bashrc :

log_bash_persistent_history()
{
  [[
    $(history 1) =~ ^\ *[0-9]+\ +([^\ ]+\ [^\ ]+)\ +(.*)$
  ]]
  local date_part="${BASH_REMATCH[1]}"
  local command_part="${BASH_REMATCH[2]}"
  if [ "$command_part" != "$PERSISTENT_HISTORY_LAST" ]
  then
    echo $date_part "|" "$command_part" >> ~/.persistent_history
    export PERSISTENT_HISTORY_LAST="$command_part"
  fi
}

# Stuff to do on PROMPT_COMMAND
run_on_prompt_command()
{
    log_bash_persistent_history
}

PROMPT_COMMAND="run_on_prompt_command"

The format of the history file created by this is:

2013-06-09 17:48:11 | cat ~/.persistent_history
2013-06-09 17:49:17 | vi /home/eliben/.bashrc
2013-06-09 17:49:23 | ls

Note that an environment variable is used to avoid useless duplication (i.e. if I run ls twenty times in a row, it will only be recorded once).

OK, so we have ~/.persistent_history , how do we use it? First, I should say that it's not used very often, which kind of connects to the point I made earlier about separating it from the much higher-use regular command history. Sometimes I just look into the file with vi or tail , but mostly this alias does the trick for me:

alias phgrep='cat ~/.persistent_history|grep --color'

The alias name mirrors another alias I've been using for ages:

alias hgrep='history|grep --color'

Another tool for managing persistent history is a trimmer. I said earlier this file keeps the history "forever", which is a scary word - what if it grows too large? Well, first of all - worry not. At work my history file grew to about 2 MB after 3 months of heavy usage, and 2 MB is pretty small these days. Appending to the end of a file is very, very quick (I'm pretty sure it's a constant-time operation) so the size doesn't matter much. But trimming is easy:

tail -20000 ~/.persistent_history | tee ~/.persistent_history

Trims to the last 20000 lines. This should be sufficient for at least a couple of months of history, and your workflow should not really rely on more than that :-)
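One caveat worth noting: in the pipe above, tee truncates the file while tail may still be reading it, so the two can race. A temporary-file variant avoids that (the .tmp name is arbitrary):

tail -20000 ~/.persistent_history > ~/.persistent_history.tmp && \
    mv ~/.persistent_history.tmp ~/.persistent_history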

Finally, what's the use of having a tool like this without employing it to collect some useless statistics. Here's a histogram of the 15 most common commands I've used on my home machine's terminal over the past 3 months:

ls        : 865
vi        : 863
hg        : 741
cd        : 512
ll        : 289
pss       : 245
hst       : 200
python    : 168
make      : 167
git       : 148
time      : 94
python3   : 88
./python  : 88
hpu       : 82
cat       : 80

Some explanation: hst is an alias for hg st . hpu is an alias for hg pull -u . pss is my awesome pss tool , and is the reason why you don't see any calls to grep and find in the list. The proportion of Mercurial vs. git commands is likely to change in the very near future.

[Jul 24, 2017] Bash history handling with multiple terminals

Add history -a to your PROMPT_COMMAND to preserve history from multiple terminals. This is a very neat trick!!!

Bash history handling with multiple terminals

The bash session that is saved is the one for the terminal that is closed the latest. If you want to save the commands for every session, you could use the trick explained here.

export PROMPT_COMMAND='history -a'

To quote the manpage: "If set, the value is executed as a command prior to issuing each primary prompt."

So every time my command has finished, it appends the unwritten history item to ~/.bash

ATTENTION: If you use multiple shell sessions and do not use this trick, you need to write the history manually to preserve it, using the command history -a
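To go further and make commands from other terminals visible in the current session too, a common variant appends, clears, and re-reads the history file on every prompt; a sketch:

# append new lines, clear the in-memory list, re-read the (merged) file
export PROMPT_COMMAND='history -a; history -c; history -r'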


[Jul 16, 2017] Bash prompt tips and tricks

Jul 07, 2017 | opensource.com

Anyone who has started a terminal in Linux is familiar with the default Bash prompt:


[user@host ~]$

But did you know is that this is completely customizable and can contain some very useful information? Here are a few hidden treasures you can use to customize your Bash prompt.

How is the Bash prompt set?

The Bash prompt is set by the environment variable PS1 (Prompt String 1), which is used for interactive shell prompts. There is also a PS2 variable, which is used when more input is required to complete a Bash command.

[dneary@dhcp-41-137 ~]$ export PS1="[Linux Rulez]$ "
[Linux Rulez]$ export PS2="... "
[Linux Rulez]$ if true; then
... echo "Success!"
... fi
Success!

Where is the value of PS1 set?

PS1 is a regular environment variable.

The system default value is set in /etc/bashrc . On my system, the default prompt is set with this line:


[ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@\h \W]\\$ "

This tests whether the value of PS1 is \s-\v$ (the system default value), and if it is, it sets PS1 to the value [\u@\h \W]\\$ .

If you want to see a custom prompt, however, you should not be editing /etc/bashrc . You should instead add it to .bashrc in your Home directory.

What do \u, \h, \W, \s, and \v mean?

In the PROMPTING section of man bash , you can find a description of all the special characters in PS1 and PS2 . The following are the default options:

\u: the username of the current user
\h: the hostname, up to the first '.'
\W: the basename of the current working directory ($HOME is abbreviated with a tilde)
\s: the name of the shell
\v: the version of Bash

What other special strings can I use in the prompts?

There are a number of special strings that can be useful.

There are many other special characters; you can see the full list in the PROMPTING section of the Bash man page .

Multi-line prompts

If you use longer prompts (say if you include \H or \w or a full date-time ), you may want to break things over two lines. Here is an example of a multi-line prompt, with the date, time, and current working directory on one line, and username@hostname on the second line:


PS1
=
"\D{%c} \w
\n
[\u@\H]$ "

Are there any other interesting things I can do?

One thing people occasionally do is create colorful prompts. While I find them annoying and distracting, you may like them. For example, to change the date-time above to display in red text, the directory in cyan, and your username on a yellow background, you could try this:

PS1 = "\[\e[31m\]\D{%c}\[\e[0m\]
\[\e[36m\]\w\[\e[0m\] \n [\[\e[1;43m\]\u\[\e[0m\]@\H]$ "

To dissect this: \e[31m switches the text color to red, \e[36m to cyan, and \e[1;43m to bold text on a yellow background; \e[0m resets all attributes back to normal; and each escape sequence is wrapped in \[ ... \] so Bash knows those characters are non-printing and can calculate the prompt length correctly.

You can find more colors and tips in the Bash prompt HOWTO . You can even make text inverted or blinking! Why on earth anyone would want to do this, I don't know. But you can!

What are your favorite Bash prompt customizations? And which ones have you seen that drive you crazy? Let me know in the comments. Ben Cotton on 07 Jul 2017 Permalink I really like the Bash-Beautify setup by Chris Albrecht:
https://github.com/KeyboardCowboy/Bash-Beautify/blob/master/.bash_beautify

When you're in a version-controlled directory, it includes the VCS information (e.g. the git branch and status), which is really handy if you do development. Victorhck on 07 Jul 2017 Permalink An easy drag and drop interface to build your own .bashrc/PS1 configuration

http://bashrcgenerator.com/

've phun!


[Jul 16, 2017] A Collection Of Useful BASH Scripts For Heavy Commandline Users - OSTechNix

Notable quotes:
"... Provides cheat-sheets for various Linux commands ..."
Jul 16, 2017 | www.ostechnix.com
Today, I have stumbled upon a collection of useful BASH scripts for heavy commandline users. These scripts, known as Bash-Snippets , might be quite helpful for those who live in the Terminal all day. Want to check the weather of the place where you live? This script will do that for you. Wondering what the stock prices are? You can run the script that displays the current details of a stock. Feeling bored? You can watch some YouTube videos. All from the commandline. You don't need to install any heavyweight, memory-hungry GUI applications.

Bash-Snippets provides the following 12 useful tools:

  1. currency – Currency converter.
  2. stocks – Provides certain Stock details.
  3. weather – Displays weather details of your place.
  4. crypt – Encrypt and decrypt files.
  5. movies – Search and display a movie details.
  6. taste – Recommendation engine that provides three similar items like the supplied item (The items can be books, music, artists, movies, and games etc).
  7. short – URL Shortener
  8. geo – Provides the details of wan, lan, router, dns, mac, and ip.
  9. cheat – Provides cheat-sheets for various Linux commands .
  10. ytview – Watch YouTube from Terminal.
  11. cloudup – A tool to backup your GitHub repositories to bitbucket.
  12. qrify – Turns the given string into a qr code.
Bash-Snippets – A Collection Of Useful BASH Scripts For Heavy Commandline Users Installation

You can install these scripts on any OS that supports BASH.

First, clone the GIT repository using command:

git clone https://github.com/alexanderepstein/Bash-Snippets

Sample output would be:

Cloning into 'Bash-Snippets'...
remote: Counting objects: 1103, done.
remote: Compressing objects: 100% (45/45), done.
remote: Total 1103 (delta 40), reused 55 (delta 23), pack-reused 1029
Receiving objects: 100% (1103/1103), 1.92 MiB | 564.00 KiB/s, done.
Resolving deltas: 100% (722/722), done.

Go to the cloned directory:

cd Bash-Snippets/

Git checkout to the latest stable release:

git checkout v1.11.0

Finally, install the Bash-Snippets using command:

sudo ./install.sh

This will ask you which scripts to install. Just type Y and press ENTER key to install the respective script. If you don't want to install a particular script, type N and hit ENTER.

[Jul 16, 2017] Classifier: organize files into folders of Xls, Docs, .png, .jpeg, video, music, PDFs, images, ISO, etc.

Jul 16, 2017 | github.com
Most download folders are pretty sloppy compared with other directories, because downloaded files keep piling up there and we can't delete them blindly without losing important files. Creating a bunch of folders and moving the appropriate files into them manually is not practical either.

So what can be done? It is better to organize the files with the help of Classifier; after that, unnecessary files are easy to delete. The Classifier app is written in Python.

How do you organize a directory? Simply navigate to the directory whose files you want to organize/classify and run the classifier command; it takes a few minutes or more depending on how many files the directory holds.

Make a note: there is no undo option if you want to go back, so think it over before running classifier in a directory (a defensive wrapper is sketched below). Also, it won't move folders.
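Since there is no undo, one cautious approach is to snapshot the directory before letting classifier rearrange it. A minimal bash sketch (the function name and backup naming scheme are mine, not part of the tool):

# classify_safely: keep a timestamped copy, then classify in place
classify_safely () {
    local dir=${1:?usage: classify_safely DIR}
    cp -a "$dir" "${dir}.bak.$(date +%Y%m%d%H%M%S)"  # restorable copy
    ( cd "$dir" && classifier )                      # run in a subshell
}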

Install Classifier in Linux through pip

pip is the recommended tool for installing Python packages on Linux. Use the pip command instead of the distribution's package manager to get the latest build.

For Debian based systems.

$ sudo apt-get install python-pip

For RHEL/CentOS based systems.

$ sudo yum install python-pip

For Fedora

$ sudo dnf install python-pip

For openSUSE

$ sudo zypper install python-pip

For Arch Linux based systems

$ sudo pacman -S python-pip

Finally run the pip tool to install Classifier on Linux.

$ sudo pip install classifier
Organize pattern files into specific folders

First I will go with the default option, which organizes files into specific folders by type. It creates a bunch of directories based on the file types found and moves the files into them.

Here is how my directory looks before running the classifier command:

$ pwd
/home/magi/classifier

$ ls -lh
total 139M
-rw-r--r-- 1 magi magi 4.5M Mar 21 21:21 Aaluma_Doluma.mp3
-rw-r--r-- 1 magi magi  26K Mar 21 21:12 battery-monitor_0.4-xenial_all.deb
-rw-r--r-- 1 magi magi  24K Mar 21 21:12 buku-command-line-bookmark-manager-linux.png
-rw-r--r-- 1 magi magi    0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi   25 Mar 21 21:13 core.py
-rw-r--r-- 1 magi magi 101K Mar 21 21:12 drawing.svg
-rw-r--r-- 1 magi magi  86M Mar 21 21:12 go1.8.linux-amd64.tar.gz
-rw-r--r-- 1 magi magi   28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi   27 Mar 21 21:13 index.php
-rw-r--r-- 1 magi magi  48M Apr 30  2016 Kabali Tamil Movie _ Official Teaser _ Rajinikanth _ Radhika Apte _ Pa Ranjith-9mdJV5-eias.webm
-rw-r--r-- 1 magi magi   28 Mar 21 21:12 magi1.txt
-rw-r--r-- 1 magi magi   66 Mar 21 21:12 ppa.py
-rw-r--r-- 1 magi magi 1.1K Mar 21 21:12 Release.html
-rw-r--r-- 1 magi magi  45K Mar 21 21:12 v0.4.zip

Navigate to the directory whose files you want to organize, then run the classifier command without any options:

$ classifier
Scanning Files
Done!

Here is how the directory looks after running the classifier command:

$ ls -lh
total 44K
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Archives
-rw-r--r-- 1 magi magi    0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi   25 Mar 21 21:13 core.py
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 DEBPackages
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Documents
-rw-r--r-- 1 magi magi   28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi   27 Mar 21 21:13 index.php
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Music
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Pictures
-rw-r--r-- 1 magi magi   66 Mar 21 21:12 ppa.py
-rw-r--r-- 1 magi magi 1.1K Mar 21 21:12 Release.html
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Videos

Make a note: this organizes only general-category files such as docs, audio, video, pictures, archives, etc., and won't organize .py, .html, .php, and the like.

Classify specific file types into a specific folder

To classify specific file types into a specific folder, add -st (the file types) and -sf (the folder name) to the classifier command.

For a concrete example, I'm going to move the .py, .html, and .php files into a Development folder. Here is the exact command:

$ classifier -st .py .html .php -sf "Development" 
Scanning Files
Done!

If the folder doesn't exist, classifier creates a new one and organizes the files into it. As the following output shows, it created the Development directory and moved all the matching files into it.

$ ls -lh
total 28K
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Archives
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 DEBPackages
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:51 Development
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Documents
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Music
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Pictures
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Videos

For clarity, here are the contents of the Development folder:

$ ls -lh Development/
total 12K
-rw-r--r-- 1 magi magi  0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi 25 Mar 21 21:13 core.py
-rw-r--r-- 1 magi magi 28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi 27 Mar 21 21:13 index.php
-rw-r--r-- 1 magi magi  0 Mar 21 21:43 ppa.py
-rw-r--r-- 1 magi magi  0 Mar 21 21:43 Release.html

To organize files by date, add the -dt option; it organizes the current directory's files based on their dates:

$ classifier -dt

To save the organized files in a different location, add -d (the source directory) and -o (the destination directory) to the classifier command:

$  classifier -d /home/magi/organizer -o /home/magi/2g

[Jul 06, 2017] Linux tip Bash test and comparison functions

Jul 06, 2017 | www.ibm.com

Demystify test, [, [[, ((, and if-then-else

Ian Shields
Published on February 20, 2007
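As a capsule of what the article demystifies: test (and its synonym [) is an ordinary command, while [[ and (( are shell keywords with their own parsing rules. A short bash illustration:

x=5 name="two words"
if [ "$x" -lt 10 ]; then echo "[: a regular command; quote your variables"; fi
if [[ $name == two* ]]; then echo "[[: no word splitting, glob patterns work"; fi
if (( x * 2 == 10 )); then echo "((: C-like arithmetic evaluation"; fi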

[Jul 05, 2017] Linux tip: Bash parameters and parameter expansions by Ian Shields

Definitely a gifted author!
www.ibm.com

Do you sometimes wonder how to use parameters with your scripts, and how to pass them to internal functions or other scripts? Do you need to do simple validity tests on parameters or options, or perform simple extraction and replacement operations on the parameter strings? This tip helps you with parameter use and the various parameter expansions available in the bash shell.
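A compact sketch of a few of the expansions the tip covers (the comments show what bash prints):

file=/var/log/syslog.1
echo "${file##*/}"            # longest prefix strip  -> syslog.1
echo "${file%.*}"             # shortest suffix strip -> /var/log/syslog
echo "${file/log/LOG}"        # first-match replace   -> /var/LOG/syslog.1
echo "${#file}"               # length of the value   -> 17
echo "${undefined:-default}"  # fallback when unset   -> default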

[Jun 28, 2017] PBS Pro Tutorial by Krishna Arutwar

What is PBS Pro?

Portable Batch System (PBS) is software used in cluster computing to schedule jobs across multiple nodes. PBS was started as a contract project by NASA. PBS is available in three different versions, as below: 1) Torque: Terascale Open-source Resource and QUEue Manager (Torque)
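For orientation, a PBS/Torque job is just a shell script whose header lines carry #PBS directives that the scheduler reads at submission time. A minimal sketch (the job name, resource numbers, and time limit are illustrative, not prescribed by the tutorial):

#!/bin/bash
#PBS -N hello_job           # job name shown in qstat
#PBS -l nodes=1:ppn=4       # one node, four processors per node (Torque syntax)
#PBS -l walltime=00:10:00   # hard ten-minute run-time limit
#PBS -j oe                  # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"         # jobs start in $HOME; return to the submit directory
echo "Running on $(hostname)"

Submit the script with qsub and monitor it with qstat.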