Usage of pipes with loops in shell


Korn shell and its derivatives, such as ksh93, bash, and zsh, have a useful feature: the ability to pipe a file into a loop. This is effectively an implementation of simple coroutines. Note that ksh also has a more general coroutine facility (see Learning the Korn Shell by Bill Rosenblatt and Arnold Robbins).

In bash this capability is limited, because bash does not run the last stage of the pipe in the current process... Bash developers probably think they can reinvent the wheel, but created a square one. Googling for "bash pipe subprocesses order" shows the extent of the problem, but I could not find the bash developers' official position on it. Recent versions of bash do provide an option to force the ksh behavior (the lastpipe shell option).
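
A minimal sketch of that option, assuming bash 4.2 or later (which introduced the lastpipe shell option; it takes effect in scripts, where job control is off):

#!/bin/bash
shopt -s lastpipe    # run the last stage of a pipeline in the current shell, as ksh does
n=0
printf '%s\n' one two three | while read -r word; do
    n=$((n+1))
done
echo "read $n words"   # prints "read 3 words"; without lastpipe the loop runs in a subshell and n stays 0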

Let's assume that we need to find all files that contain the string "19%", which is typical of format strings used in printing commands, such as "19%2d":

cd /usr/bin
ls | while read file
do
    echo "$file"
    strings "$file" | grep '19%'
done

Here we use the ls command to generate the list of file names, and this list is piped into the loop. Inside the loop we echo the file name and then run strings piped to grep, looking for suspicious format strings.

Another example comes from O'Reilly's Learning the Korn Shell (first edition). Here we pipe awk output into the loop. This is a function that, given a pathname as its argument, prints its equivalent in tilde notation if possible:

function tildize {
    if [[ $1 = $HOME* ]]; then
        print "\~/${1#$HOME}"
        return 0
    fi
    awk -F: '{print $1, $6}' /etc/passwd |
        while read user homedir; do
            if [[ $homedir != / && $1 = ${homedir}?(/*) ]]; then
                print "\~$user/${1#$homedir}"
                return 0
            fi
        done
    print "$1"
    return 1
}
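
A hypothetical invocation sketch (the paths are made up, and the exact output depends on your $HOME and the contents of /etc/passwd):

tildize /home/billr/mem/long.file.name   # prints a tilde form such as ~billr/mem/long.file.name
tildize /usr/lib/sendmail                # no home directory matches: prints the path unchanged, returns 1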

A loop can also serve as a source of input for a pipe. For example:

{ while read line'?adc> '; do
      print "$(alg2rpn $line)"
  done 
} | dc

As an example, assume that you want to go through all files in a directory and, if they are readable to you, convert the filenames to contain lowercase letters only. We can do it in slightly different ways.

There are two major ways to accomplish this, plus a related example from HPC job scripts:

  1. The first, more traditional, variant calls tr inside the for loop:
    #!/bin/ksh
    for x in * 
    do
      [ -r "$x" ] && echo "$x" | tr 'A-Z' 'a-z'
    done
    
  2. The second, more elegant variant uses a pipe to feed tr from the loop:
    #!/bin/ksh
    for x in * 
    do
      [ -r "$x" ] && echo "$x"
    done | tr 'A-Z' 'a-z'
  3. Usage in submission scripts for SGE and other HPC schedulers. Here is one example where we generate a ./machines file for MPI from the SGE variable $PE_HOSTFILE (a dry-run sketch follows the snippet):
    # get machine from $PE_HOSTFILE
    
    cat /dev/null > ./machines
    
    cat $PE_HOSTFILE | while read line; do
        host=`echo $line | cut -d" " -f1`
        cores=`echo $line | cut -d" " -f2`
    
        while (( $cores > 0 )) ; do
            echo $host >> machines
            let cores--
        done
    done
    
    ## done with $PE_HOSTFILE
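
    Outside of a real SGE job you can dry-run the snippet above with a fake hostfile (the host names and the exact field layout below are illustrative; SGE writes roughly "hostname slots queue processor-range" per line):

    PE_HOSTFILE=./fake_hostfile
    printf '%s\n' "node01 4 all.q@node01 UNDEFINED" "node02 2 all.q@node02 UNDEFINED" > $PE_HOSTFILE
    # after running the loop above, ./machines should list node01 four times and node02 twice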

Monitoring the progress of data through a pipeline

There is also a useful terminal-based tool for monitoring the progress of data through a pipeline called pv - pipe viewer. It can be inserted into any normal pipeline between two processes to give a visual indication of how quickly data is passing through, how long it has taken, how near to completion it is, and an estimate of how long it will be until completion. It is available for all major Linux distributions, and a precompiled Solaris binary is also available.
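
A couple of hedged usage sketches (the file names are made up; pv's default display is used):

# watch throughput and ETA while compressing a large file
pv big_dump.sql | gzip > big_dump.sql.gz

# drop pv between two stages of an existing pipeline
dd if=/dev/zero bs=1M count=100 | pv | wc -c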



Old News ;-)

[Nov 02, 2018] Working with data streams on the Linux command line by David Both, an excerpt from his book The Linux Philosophy for SysAdmins, And Everyone Who Wants To Be One. It is well worth $32.

Notable quotes:
"This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."
Oct 30, 2018 | opensource.com
Author's note: Much of the content in this article is excerpted, with some significant edits to fit the Opensource.com article format, from Chapter 3: Data Streams, of my new book, The Linux Philosophy for SysAdmins.

Everything in Linux revolves around streams of data -- particularly text streams. Data streams are the raw materials upon which the GNU Utilities, the Linux core utilities, and many other command-line tools perform their work.

As its name implies, a data stream is a stream of data -- especially text data -- being passed from one file, device, or program to another using STDIO. This chapter introduces the use of pipes to connect streams of data from one utility program to another using STDIO. You will learn that the function of these programs is to transform the data in some manner. You will also learn about the use of redirection to redirect the data to a file.


I use the term "transform" in conjunction with these programs because the primary task of each is to transform the incoming data from STDIO in a specific way as intended by the sysadmin and to send the transformed data to STDOUT for possible use by another transformer program or redirection to a file.

The standard term, "filters," implies something with which I don't agree. By definition, a filter is a device or a tool that removes something, just as an air filter removes airborne contaminants so that the internal combustion engine of your automobile does not grind itself to death on those particulates. In my high school and college chemistry classes, filter paper was used to remove particulates from a liquid. The air filter in my home HVAC system removes particulates that I don't want to breathe.

Although they do sometimes filter out unwanted data from a stream, I much prefer the term "transformers" because these utilities do so much more. They can add data to a stream, modify the data in some amazing ways, sort it, rearrange the data in each line, perform operations based on the contents of the data stream, and so much more. Feel free to use whichever term you prefer, but I prefer transformers. I expect that I am alone in this.

Data streams can be manipulated by inserting transformers into the stream using pipes. Each transformer program is used by the sysadmin to perform some operation on the data in the stream, thus changing its contents in some manner. Redirection can then be used at the end of the pipeline to direct the data stream to a file. As mentioned, that file could be an actual data file on the hard drive, or a device file such as a drive partition, a printer, a terminal, a pseudo-terminal, or any other device connected to a computer.

The ability to manipulate these data streams using these small yet powerful transformer programs is central to the power of the Linux command-line interface. Many of the core utilities are transformer programs and use STDIO.

In the Unix and Linux worlds, a stream is a flow of text data that originates at some source; the stream may flow to one or more programs that transform it in some way, and then it may be stored in a file or displayed in a terminal session. As a sysadmin, your job is intimately associated with manipulating the creation and flow of these data streams. In this post, we will explore data streams -- what they are, how to create them, and a little bit about how to use them.

Text streams -- a universal interface

The use of Standard Input/Output (STDIO) for program input and output is a key foundation of the Linux way of doing things. STDIO was first developed for Unix and has found its way into most other operating systems since then, including DOS, Windows, and Linux.

" This is the Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."

-- Doug McIlroy, Basics of the Unix Philosophy

STDIO

STDIO was developed by Ken Thompson as a part of the infrastructure required to implement pipes on early versions of Unix. Programs that implement STDIO use standardized file handles for input and output rather than files that are stored on a disk or other recording media. STDIO is best described as a buffered data stream, and its primary function is to stream data from the output of one program, file, or device to the input of another program, file, or device.

There are three STDIO data streams, each of which is automatically opened as a file at the startup of a program -- well, those programs that use STDIO. Each STDIO data stream is associated with a file handle, which is just a set of metadata that describes the attributes of the file. File handles 0, 1, and 2 are explicitly defined by convention and long practice as STDIN, STDOUT, and STDERR, respectively.

STDIN, File handle 0, is standard input, which is usually input from the keyboard. STDIN can be redirected from any file, including device files, instead of the keyboard. It is not common to need to redirect STDIN, but it can be done.

STDOUT, File handle 1, is standard output, which sends the data stream to the display by default. It is common to redirect STDOUT to a file or to pipe it to another program for further processing.

STDERR, File handle 2. The data stream for STDERR is also usually sent to the display.

If STDOUT is redirected to a file, STDERR continues to be displayed on the screen. This ensures that when the data stream itself is not displayed on the terminal, STDERR is, thus ensuring that the user will see any errors resulting from execution of the program. STDERR can also be redirected to the same file, or passed on to the next transformer program in a pipeline.
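
A few short redirection sketches illustrating this behavior (the file names are arbitrary):

# STDOUT goes to a file; error messages still appear on the terminal
grep root /etc/passwd /no/such/file > out.txt

# STDERR gets its own file
grep root /etc/passwd /no/such/file > out.txt 2> err.txt

# STDERR merged into STDOUT so both travel down the pipeline
grep root /etc/passwd /no/such/file 2>&1 | less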

STDIO is implemented as a C library, stdio.h, which can be included in the source code of programs so that it can be compiled into the resulting executable.

Simple streams

You can perform the following experiments safely in the /tmp directory of your Linux host. As the root user, make /tmp the PWD, create a test directory, and then make the new directory the PWD.

# cd /tmp ; mkdir test ; cd test

Enter and run the following command line program to create some files with content on the drive. We use the dmesg command simply to provide data for the files to contain. The contents don't matter as much as just the fact that each file has some content.

# for I in 0 1 2 3 4 5 6 7 8 9 ; do dmesg > file$I.txt ; done

Verify that there are now at least 10 files in /tmp/test with the names file0.txt through file9.txt.

# ll
total 1320
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file0.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file1.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file2.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file3.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file4.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file5.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file6.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file7.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file8.txt
-rw-r--r-- 1 root root 131402 Oct 17 15:50 file9.txt

We have generated data streams using the dmesg command, which was redirected to a series of files. Most of the core utilities use STDIO as their output stream and those that generate data streams, rather than acting to transform the data stream in some way, can be used to create the data streams that we will use for our experiments. Data streams can be as short as one line or even a single character, and as long as needed.

Exploring the hard drive

It is now time to do a little exploring. In this experiment, we will look at some of the filesystem structures.

Let's start with something simple. You should be at least somewhat familiar with the dd command. Officially known as "disk dump," many sysadmins call it "disk destroyer" for good reason. Many of us have inadvertently destroyed the contents of an entire hard drive or partition using the dd command. That is why we will hang out in the /tmp/test directory to perform some of these experiments.

Despite its reputation, dd can be quite useful in exploring various types of storage media, hard drives, and partitions. We will also use it as a tool to explore other aspects of Linux.

Log into a terminal session as root if you are not already. We first need to determine the device special file for your hard drive using the lsblk command.

[root@studentvm1 test]# lsblk -i
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 60G 0 disk
|-sda1 8:1 0 1G 0 part /boot
`-sda2 8:2 0 59G 0 part
|-fedora_studentvm1-pool00_tmeta 253:0 0 4M 0 lvm
| `-fedora_studentvm1-pool00-tpool 253:2 0 2G 0 lvm
| |-fedora_studentvm1-root 253:3 0 2G 0 lvm /
| `-fedora_studentvm1-pool00 253:6 0 2G 0 lvm
|-fedora_studentvm1-pool00_tdata 253:1 0 2G 0 lvm
| `-fedora_studentvm1-pool00-tpool 253:2 0 2G 0 lvm
| |-fedora_studentvm1-root 253:3 0 2G 0 lvm /
| `-fedora_studentvm1-pool00 253:6 0 2G 0 lvm
|-fedora_studentvm1-swap 253:4 0 10G 0 lvm [SWAP]
|-fedora_studentvm1-usr 253:5 0 15G 0 lvm /usr
|-fedora_studentvm1-home 253:7 0 2G 0 lvm /home
|-fedora_studentvm1-var 253:8 0 10G 0 lvm /var
`-fedora_studentvm1-tmp 253:9 0 5G 0 lvm /tmp
sr0 11:0 1 1024M 0 rom

We can see from this that there is only one hard drive on this host, that the device special file associated with it is /dev/sda , and that it has two partitions. The /dev/sda1 partition is the boot partition, and the /dev/sda2 partition contains a volume group on which the rest of the host's logical volumes have been created.

As root in the terminal session, use the dd command to view the boot record of the hard drive, assuming it is assigned to the /dev/sda device. The bs= argument is not what you might think; it simply specifies the block size, and the count= argument specifies the number of blocks to dump to STDIO. The if= argument specifies the source of the data stream, in this case, the /dev/sda device. Notice that we are not looking at the first block of the partition, we are looking at the very first block of the hard drive.

[root@studentvm1 test]# dd if=/dev/sda bs=512 count=1
�c�#�м���؎���|�#�#���!#��8#u
��#���u��#�#�#�|���t#�L#�#�|���#�����?t��pt#���y|1��؎м ��d|<�t#��R�|1��D#@�D��D#�##f�#\|f�f�#`|f�\
�D#p�B�#r�p�#�K`#�#��1��������#a`���#f��u#����f1�f�TCPAf�#f�#a�&Z|�#}�#�.}�4�3}�.�#��GRUB GeomHard DiskRead Error
�#��#�<u��ܻޮ�###��� ������ �_U�1+0 records in
1+0 records out
512 bytes copied, 4.3856e-05 s, 11.7 MB/s

This prints the text of the boot record, which is the first block on the disk -- any disk. In this case, there is information about the filesystem and, although it is unreadable because it is stored in binary format, the partition table. If this were a bootable device, stage 1 of GRUB or some other boot loader would be located in this sector. The last three lines contain data about the number of records and bytes processed.

Starting with the beginning of /dev/sda1 , let's look at a few blocks of data at a time to find what we want. The command is similar to the previous one, except that we have specified a few more blocks of data to view. You may have to specify fewer blocks if your terminal is not large enough to display all of the data at one time, or you can pipe the data through the less utility and use that to page through the data -- either way works. Remember, we are doing all of this as root user because non-root users do not have the required permissions.

Enter the same command as you did in the previous experiment, but increase the block count to be displayed to 100, as shown below, in order to show more data.

[root@studentvm1 test]# dd if=/dev/sda1 bs=512 count=100
##33��#:�##�� :o�[:o�[#��S�###�q[#
#<�#{5OZh�GJ͞#t�Ұ##boot/bootysimage/booC�dp��G'�*)�#A�##@
#�q[
�## ## ###�#���To=###<#8���#'#�###�#�����#�' �����#Xi �#��` qT���
<���
� r���� ]�#�#�##�##�##�#�##�##�##�#�##�##�#��#�#�##�#�##�##�#��#�#����# � �# �# �#

�#
�#
�#

�#
�#
�#

�#
�#
�#100+0 records in
100+0 records out
51200 bytes (51 kB, 50 KiB) copied, 0.00117615 s, 43.5 MB/s

Now try this command. I won't reproduce the entire data stream here because it would take up huge amounts of space. Use Ctrl-C to break out and stop the stream of data.

[root@studentvm1 test]# dd if=/dev/sda

This command produces a stream of data that is the complete content of the hard drive, /dev/sda , including the boot record, the partition table, and all of the partitions and their content. This data could be redirected to a file for use as a complete backup from which a bare metal recovery can be performed. It could also be sent directly to another hard drive to clone the first. But do not perform this particular experiment.

[root@studentvm1 test]# dd if=/dev/sda of=/dev/sdx

You can see that the dd command can be very useful for exploring the structures of various types of filesystems, locating data on a defective storage device, and much more. It also produces a stream of data on which we can use the transformer utilities in order to modify or view.

The real point here is that dd , like so many Linux commands, produces a stream of data as its output. That data stream can be searched and manipulated in many ways using other tools. It can even be used for ghost-like backups or disk duplication.

Randomness

It turns out that randomness is a desirable thing in computers -- who knew? There are a number of reasons that sysadmins might want to generate a stream of random data. A stream of random data is sometimes useful to overwrite the contents of a complete partition, such as /dev/sda1 , or even the entire hard drive, as in /dev/sda .

Perform this experiment as a non-root user. Enter this command to print an unending stream of random data to STDIO.

[student@studentvm1 ~]$ cat /dev/urandom

Use Ctrl-C to break out and stop the stream of data. You may need to use Ctrl-C multiple times.

Random data is also used as the input seed to programs that generate random passwords and random data and numbers for use in scientific and statistical calculations. I will cover randomness and other interesting data sources in a bit more detail in Chapter 24: Everything is a file.
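
As a small illustration of that idea (a rough sketch, not a production password generator), /dev/urandom can be filtered down to printable characters:

# keep only letters and digits from the random stream and stop after 16 characters
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16; echo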

Pipe dreams

Pipes are critical to our ability to do the amazing things on the command line, so much so that I think it is important to recognize that they were invented by Douglas McIlroy during the early days of Unix (thanks, Doug!). The Princeton University website has a fragment of an interview with McIlroy in which he discusses the creation of the pipe and the beginnings of the Unix philosophy.

Notice the use of pipes in the simple command-line program shown next, which lists each logged-in user a single time, no matter how many logins they have active. Perform this experiment as the student user. Enter the command shown below:

[student@studentvm1 ~]$ w | tail -n +3 | awk '{print $1}' | sort | uniq
root
student
[student@studentvm1 ~]$

The results from this command are two lines of data showing that the users root and student are both logged in. It does not show how many times each user is logged in. Your results will almost certainly differ from mine.

Pipes -- represented by the vertical bar ( | ) -- are the syntactical glue, the operator, that connects these command-line utilities together. Pipes allow the Standard Output from one command to be "piped," i.e., streamed from Standard Output of one command to the Standard Input of the next command.

The |& operator can be used to pipe the STDERR along with STDOUT to STDIN of the next command. This is not always desirable, but it does offer flexibility in the ability to record the STDERR data stream for the purposes of problem determination.
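
A short sketch of |& in action (it is shorthand for 2>&1 | and is available in bash 4 and later):

# permission errors on STDERR and normal results on STDOUT both end up in the log
find /etc -name '*.conf' |& tee find.log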

A string of programs connected with pipes is called a pipeline, and the programs that use STDIO are referred to officially as filters, but I prefer the term "transformers."

Think about how this program would have to work if we could not pipe the data stream from one command to the next. The first command would perform its task on the data and then the output from that command would need to be saved in a file. The next command would have to read the stream of data from the intermediate file and perform its modification of the data stream, sending its own output to a new, temporary data file. The third command would have to take its data from the second temporary data file and perform its own manipulation of the data stream and then store the resulting data stream in yet another temporary file. At each step, the data file names would have to be transferred from one command to the next in some way.

I cannot even stand to think about that because it is so complex. Remember: Simplicity rocks!

Building pipelines

When I am doing something new, solving a new problem, I usually do not just type in a complete Bash command pipeline from scratch off the top of my head. I usually start with just one or two commands in the pipeline and build from there by adding more commands to further process the data stream. This allows me to view the state of the data stream after each of the commands in the pipeline and make corrections as they are needed.

It is possible to build up very complex pipelines that can transform the data stream using many different utilities that work with STDIO.

Redirection

Redirection is the capability to redirect the STDOUT data stream of a program to a file instead of to the default target of the display. The "greater than" ( > ) character, aka "gt", is the syntactical symbol for redirection of STDOUT.

Redirecting the STDOUT of a command can be used to create a file containing the results from that command.

[student@studentvm1 ~]$ df -h > diskusage.txt

There is no output to the terminal from this command unless there is an error. This is because the STDOUT data stream is redirected to the file and STDERR is still directed to the STDOUT device, which is the display. You can view the contents of the file you just created using this next command:

[student@studentvm1 test]# cat diskusage.txt
Filesystem Size Used Avail Use% Mounted on
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 1.2M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/mapper/fedora_studentvm1-root 2.0G 50M 1.8G 3% /
/dev/mapper/fedora_studentvm1-usr 15G 4.5G 9.5G 33% /usr
/dev/mapper/fedora_studentvm1-var 9.8G 1.1G 8.2G 12% /var
/dev/mapper/fedora_studentvm1-tmp 4.9G 21M 4.6G 1% /tmp
/dev/mapper/fedora_studentvm1-home 2.0G 7.2M 1.8G 1% /home
/dev/sda1 976M 221M 689M 25% /boot
tmpfs 395M 0 395M 0% /run/user/0
tmpfs 395M 12K 395M 1% /run/user/1000

When using the > symbol to redirect the data stream, the specified file is created if it does not already exist. If it does exist, the contents are overwritten by the data stream from the command. You can use double greater-than symbols, >>, to append the new data stream to any existing content in the file.

[student@studentvm1 ~]$ df -h >> diskusage.txt

You can use cat and/or less to view the diskusage.txt file in order to verify that the new data was appended to the end of the file.

The < (less than) symbol redirects data to the STDIN of the program. You might want to use this method to input data from a file to STDIN of a command that does not take a filename as an argument but that does use STDIN. Although input sources can be redirected to STDIN, such as a file that is used as input to grep, it is generally not necessary as grep also takes a filename as an argument to specify the input source. Most other commands also take a filename as an argument for their input source.
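
A small sketch: tr is one of the classic commands that reads only from STDIN, so input redirection is the natural way to feed it a file (diskusage.txt is the file created above):

# uppercase the report; tr does not accept a filename argument
tr 'a-z' 'A-Z' < diskusage.txt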

Just grep'ing around

The grep command is used to select lines that match a specified pattern from a stream of data. grep is one of the most commonly used transformer utilities and can be used in some very creative and interesting ways. The grep command is one of the few that can correctly be called a filter because it does filter out all the lines of the data stream that you do not want; it leaves only the lines that you do want in the remaining data stream.

If the PWD is not the /tmp/test directory, make it so. Let's first create a stream of random data to store in a file. In this case, we want somewhat less random data, limited to printable characters. A good password generator program can do this. The following program (you may have to install pwgen if it is not already installed) creates a file that contains 50,000 passwords that are 80 characters long, using every printable character. Try it without redirecting to the random.txt file first to see what that looks like, and then do it once redirecting the output data stream to the file.

$ pwgen -sy 80 50000 > random.txt

Considering that there are so many passwords, it is very likely that some character strings in them are the same. First, cat the random.txt file, then use the grep command to locate some short, randomly selected strings from the last ten passwords on the screen. I saw the word "see" in one of those ten passwords, so my command looked like this: grep see random.txt , and you can try that, but you should also pick some strings of your own to check. Short strings of two to four characters work best.

$ grep see random.txt
R=p)'s/~0}wr~2(OqaL.S7DNyxlmO69`"12u]h@rp[D2%3}1b87+>Vk,;4a0hX]d7see;1%9|wMp6Yl.
bSM_mt_hPy|YZ1<TY/Hu5{g#mQ<u_(@8B5Vt?w%i-&C>NU@[;zV2-see)>(BSK~n5mmb9~h)yx{a&$_e
cjR1QWZwEgl48[3i-(^x9D=v)seeYT2R#M:>wDh?Tn$]HZU7}j!7bIiIr^cI.DI)W0D"'vZU@.Kxd1E1
z=tXcjVv^G\nW`,y=bED]d|7%s6iYT^a^Bvsee:v\UmWT02|P|nq%A*;+Ng[$S%*s)-ls"dUfo|0P5+n

Summary

It is the use of pipes and redirection that allows many of the amazing and powerful tasks that can be performed with data streams on the Linux command line. It is pipes that transport STDIO data streams from one program or file to another. The ability to pipe streams of data through one or more transformer programs supports powerful and flexible manipulation of data in those streams.

Each of the programs in the pipelines demonstrated in the experiments is small, and each does one thing well. They are also transformers; that is, they take Standard Input, process it in some way, and then send the result to Standard Output. Implementation of these programs as transformers to send processed data streams from their own Standard Output to the Standard Input of the other programs is complementary to, and necessary for, the implementation of pipes as a Linux tool.

STDIO is nothing more than streams of data. This data can be almost anything from the output of a command to list the files in a directory, or an unending stream of data from a special device like /dev/urandom , or even a stream that contains all of the raw data from a hard drive or a partition.

Any device on a Linux computer can be treated like a data stream. You can use ordinary tools like dd and cat to dump data from a device into a STDIO data stream that can be processed using other ordinary Linux tools.


David Both is a Linux and Open Source advocate who resides in Raleigh, North Carolina. He has been in the IT industry for over forty years and taught OS/2 for IBM where he worked for over 20 years. While at IBM, he wrote the first training course for the original IBM PC in 1981. He has taught RHCE classes for Red Hat and has worked at MCI Worldcom, Cisco, and the State of North Carolina. He has been working with Linux and Open Source Software for almost 20 years. David has written articles for...

[Dec 06, 2015] Bash For Loop Examples

A very nice tutorial by Vivek Gite (created October 31, 2008, last updated June 24, 2015). His mistake is putting the new form of the for loop too far down in the tutorial. It should be emphasized, not hidden.
June 24, 2015 | cyberciti.biz

... ... ...

Bash v4.0+ has inbuilt support for setting up a step value using {START..END..INCREMENT} syntax:

#!/bin/bash
echo "Bash version ${BASH_VERSION}..."
for i in {0..10..2}
  do
     echo "Welcome $i times"
 done

Sample outputs:

Bash version 4.0.33(0)-release...
Welcome 0 times
Welcome 2 times
Welcome 4 times
Welcome 6 times
Welcome 8 times
Welcome 10 times

... ... ...

Three-expression bash for loops syntax

This type of for loop shares a common heritage with the C programming language. It is characterized by a three-parameter loop control expression consisting of an initializer (EXP1), a loop test or condition (EXP2), and a counting expression (EXP3).

for (( EXP1; EXP2; EXP3 ))
do
	command1
	command2
	command3
done

A representative three-expression example in bash is as follows:

#!/bin/bash
for (( c=1; c<=5; c++ ))
do
   echo "Welcome $c times"
done
... ... ...

Jadu Saikia, November 2, 2008, 3:37 pm

Nice one. All the examples are explained well, thanks Vivek.

seq 1 2 20
output can also be produced using jot

jot - 1 20 2

The infinite loops as everyone knows have the following alternatives.

while(true)
or
while :

//Jadu

Andi Reinbrech, November 18, 2010, 7:42 pm
I know this is an ancient thread, but thought this trick might be helpful to someone:

For the above example with all the cuts, simply do

set `echo $line`

This will split line into positional parameters and you can after the set simply say

F1=$1; F2=$2; F3=$3

I used this a lot many years ago on solaris with "set `date`", it neatly splits the whole date string into variables and saves lots of messy cutting :-)

… no, you can't change the FS, if it's not space, you can't use this method

Peko, July 16, 2009, 6:11 pm
Hi Vivek,
Thanks for this a useful topic.

IMNSHO, there may be something to modify here
=======================
Latest bash version 3.0+ has inbuilt support for setting up a step value:

#!/bin/bash
for i in {1..5}
=======================
1) The increment feature seems to belong to the version 4 of bash.
Reference: http://bash-hackers.org/wiki/doku.php/syntax/expansion/brace
Accordingly, my bash v3.2 does not include this feature.

BTW, where did you read that it was 3.0+ ?
(I ask because you may know some good website of interest on the subject).

2) The syntax is {from..to..step} where from, to, step are 3 integers.
You code is missing the increment.

Note that GNU Bash documentation may be bugged at this time,
because on GNU Bash manual, you will find the syntax {x..y[incr]}
which may be a typo. (missing the second ".." between y and increment).

see http://www.gnu.org/software/bash/manual/bashref.html#Brace-Expansion

The Bash Hackers page
again, see http://bash-hackers.org/wiki/doku.php/syntax/expansion/brace
seeems to be more accurate,
but who knows ? Anyway, at least one of them may be right… ;-)

Keep on the good work of your own,
Thanks a million.

- Peko

Michal Kaut July 22, 2009, 6:12 am
Hello,

is there a simple way to control the number formatting? I use several computers, some of which have non-US settings with comma as a decimal point. This means that
for x in $(seq 0 0.1 1) gives 0 0.1 0.2 … 1 on some machines and 0 0,1 0,2 … 1 on others.
Is there a way to force the first variant, regardless of the language settings? Can I, for example, set the keyboard to US inside the script? Or perhaps some alternative to $x that would convert commas to points?
(I am sending these as parameters to another code and it won't accept numbers with commas…)

The best thing I could think of is adding x=`echo $x | sed s/,/./` as a first line inside the loop, but there should be a better solution? (Interestingly, the sed command does not seem to be upset by me rewriting its variable.)

Thanks,
Michal

Peko July 22, 2009, 7:27 am

To Michal Kaut:

Hi Michal,

Such output format is configured through LOCALE settings.

I tried :

export LC_CTYPE="en_EN.UTF-8"; seq 0 0.1 1

and it works as desired.

You just have to find the exact value for LC_CTYPE that fits to your systems and your needs.

Peko

Peko July 22, 2009, 2:29 pm

To Michal Kaus [2]

Ooops – ;-)
Instead of LC_CTYPE,
LC_NUMERIC should be more appropriate
(Although LC_CTYPE is actually yielding to the same result – I tested both)

By the way, Vivek has already documented the matter : http://www.cyberciti.biz/tips/linux-find-supportable-character-sets.html

Philippe Petrinko October 30, 2009, 8:35 am

To Vivek:
Regarding your last example, that is, running a loop through arguments given to the script on the command line, there is a simpler way of doing this:
# instead of:
# FILES="$@"
# for f in $FILES

# use the following syntax
for arg
do
# whatever you need here – try : echo "$arg"
done

Of course, you can use any variable name, not only "arg".

Philippe Petrinko November 11, 2009, 11:25 am

To tdurden:

Why wouldn't you use

1) either a [for] loop
for old in * ; do mv ${old} ${old}.new; done

2) Either the [rename] command ?
excerpt from "man rename":

RENAME(1) Perl Programmers Reference Guide RENAME(1)

NAME
rename – renames multiple files

SYNOPSIS
rename [ -v ] [ -n ] [ -f ] perlexpr [ files ]

DESCRIPTION
"rename" renames the filenames supplied according to the rule specified
as the first argument. The perlexpr argument is a Perl expression
which is expected to modify the $_ string in Perl for at least some of
the filenames specified. If a given filename is not modified by the
expression, it will not be renamed. If no filenames are given on the
command line, filenames will be read via standard input.

For example, to rename all files matching "*.bak" to strip the
extension, you might say

rename 's/\.bak$//' *.bak

To translate uppercase names to lower, you'd use

rename 'y/A-Z/a-z/' *

- Philippe

Philippe Petrinko November 11, 2009, 9:27 pm

If you set the shell option extglob, Bash understands some more powerful patterns. Here, a pattern-list is one or more patterns, separated by the pipe symbol (|).

?() Matches zero or one occurrence of the given patterns
*() Matches zero or more occurrences of the given patterns
+() Matches one or more occurrences of the given patterns
@() Matches one of the given patterns
!() Matches anything except one of the given patterns

source: http://www.bash-hackers.org/wiki/doku.php/syntax/pattern
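
A minimal sketch of extglob in use (the patterns are arbitrary examples):

shopt -s extglob
ls -d !(*.txt)      # everything in the current directory except *.txt files
ls -d +([0-9])      # names consisting of one or more digits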

Philippe Petrinko November 12, 2009, 3:44 pm

To Sean:
Right, the sharper a knife is, the easier it can cut your fingers…

I mean: there are side effects to the use of file globbing (like in [ for f in * ] ) when the globbing expression matches nothing: the globbing expression is not substituted.

Then you might want to consider using [ nullglob ] shell extension,
to prevent this.
see: http://www.bash-hackers.org/wiki/doku.php/syntax/expansion/globs#customization

Devil hides in detail ;-)

Dominic January 14, 2010, 10:04 am

There is an interesting difference between the exit value for two different for looping structures (hope this comes out right):
for (( c=1; c<=2; c++ )) do echo -n "inside (( )) loop c is $c, "; done; echo "done (( )) loop c is $c"
for c in {1..2}; do echo -n "inside { } loop c is $c, "; done; echo "done { } loop c is $c"

You see that the first structure does a final increment of c, the second does not. The first is more useful IMO because if you have a conditional break in the for loop, then you can subsequently test the value of $c to see if the for loop was broken or not; with the second structure you can't know whether the loop was broken on the last iteration or continued to completion.

Dominic January 14, 2010, 10:09 am

sorry, my previous post would have been clearer if I had shown the output of my code snippet, which is:
inside (( )) loop c is 1, inside (( )) loop c is 2, done (( )) loop c is 3
inside { } loop c is 1, inside { } loop c is 2, done { } loop c is 2

Philippe Petrinko March 9, 2010, 2:34 pm

@Dmitry

And, again, as stated many times up there, using [seq] is counter productive, because it requires a call to an external program, when you should Keep It Short and Simple, using only bash internals functions:


for ((c=1; c<21; c+=2)); do echo "Welcome $c times" ; done

(and I wonder why Vivek is sticking to that old solution which should be presented only for historical reasons when there was no way of using bash internals.
By the way, this historical recall should be placed only at topic end, and not on top of the topic, which makes newbies sticking to the not-up-to-date technique ;-) )

Sean March 9, 2010, 11:15 pm

I have a comment to add about using the builtin for (( … )) syntax. I would agree the builtin method is cleaner, but from what I've noticed with other builtin functionality, I had to check the speed advantage for myself. I wrote the following files:

builtin_count.sh:

#!/bin/bash
for ((i=1;i<=1000000;i++))
do
echo "Output $i"
done

seq_count.sh:

#!/bin/bash
for i in $(seq 1 1000000)
do
echo "Output $i"
done

And here were the results that I got:
time ./builtin_count.sh
real 0m22.122s
user 0m18.329s
sys 0m3.166s

time ./seq_count.sh
real 0m19.590s
user 0m15.326s
sys 0m2.503s

The performance increase isn't too significant, especially when you are probably going to be doing something a little more interesting inside of the for loop, but it does show that builtin commands are not necessarily faster.

Andi Reinbrech November 18, 2010, 8:35 pm

The reason why the external seq is faster, is because it is executed only once, and returns a huge splurb of space separated integers which need no further processing, apart from the for loop advancing to the next one for the variable substitution.

The internal loop is a nice and clean/readable construct, but it has a lot of overhead. The check expression is re-evaluated on every iteration, and a variable on the interpreter's heap gets incremented, possibly checked for overflow etc. etc.

Note that the check expression cannot be simplified or internally optimised by the interpreter because the value may change inside the loop's body (yes, there are cases where you'd want to do this, however rare and stupid they may seem), hence the variables are volatile and get re-evaluted.

I.e. bottom line, the internal one has more overhead; the "seq" version is equivalent to either having 1000000 integers inside the script (hard coded), or reading them once from a text file with 1000000 integers with a cat. Point being that it gets executed only once and becomes static.

OK, blah blah fishpaste, past my bed time :-)

Cheers,
Andi

Anthony Thyssen June 4, 2010, 6:53 am

The {1..10} syntax is pretty useful as you can use a variable with it!

limit=10
echo {1..${limit}}
{1..10}

You need to eval it to get it to work!

limit=10
eval "echo {1..${limit}}"
1 2 3 4 5 6 7 8 9 10

'seq' is not available on ALL systems (MacOSX for example)
and BASH is not available on all systems either.

You are better off either using the old while-expr method for maximum compatibility!

   limit=10; n=1;
   while [ $n -le $limit ]; do
     echo $n;
     n=`expr $n + 1`;
   done

Alternatively, use a seq() function replacement…

 # seq_count 10
seq_count() {
  i=1; while [ $i -le $1 ]; do echo $i; i=`expr $i + 1`; done
}
# simple_seq 1 2 10
simple_seq() {
  i=$1; while [ $i -le $3 ]; do echo $i; i=`expr $i + $2`; done
}
seq_integer() {
    if [ "X$1" = "X-f" ]
    then format="$2"; shift; shift
    else format="%d"
    fi
    case $# in
    1) i=1 inc=1 end=$1 ;;
    2) i=$1 inc=1 end=$2 ;;
    *) i=$1 inc=$2 end=$3 ;;
    esac
    while [ $i -le $end ]; do
      printf "$format\n" $i;
      i=`expr $i + $inc`;
    done
  }

Edited: by Admin – added code tags.

TheBonsai June 4, 2010, 9:57 am

The Bash C-style for loop was taken from KSH93, thus I guess it's at least portable towards Korn and Z.

The seq-function above could use i=$((i + inc)), if only POSIX matters. expr is obsolete for those things, even in POSIX.

Philippe Petrinko June 4, 2010, 10:15 am

Right Bonsai,
( http://www.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html#tag_02_06_04 )

But FOR C-style does not seem to be POSIXLY-correct…

Read on-line reference issue 6/2004,
Top is here, http://www.opengroup.org/onlinepubs/009695399/mindex.html

and the Shell and Utilities volume (XCU) T.OC. is here
http://www.opengroup.org/onlinepubs/009695399/utilities/toc.html
doc is:
http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap01.html

and FOR command:
http://www.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html#tag_02_09_04_03

Anthony Thyssen June 6, 2010, 7:18 am

TheBonsai wrote…. "The seq-function above could use i=$((i + inc)), if only POSIX matters. expr is obsolete for those things, even in POSIX."

I am not certain it is in Posix. It was NOT part of the original Bourne Shell, and on some machines, I deal with Bourne Shell. Not Ksh, Bash, or anything else.

Bourne Shell syntax works everywhere! But as 'expr' is a builtin in more modern shells, then it is not a big loss or slow down.

This is especially important if writing a replacement command, such as for "seq" where you want your "just-paste-it-in" function to work as widely as possible.

I have been shell programming pretty well all the time since 1988, so I know what I am talking about! Believe me.

MacOSX has in this regard been the worse, and a very big backward step in UNIX compatibility. 2 year after it came out, its shell still did not even understand most of the normal 'test' functions. A major pain to write shells scripts that need to also work on this system.

TheBonsai June 6, 2010, 12:35 pm

Yea, the question was if it's POSIX, not if it's 100% portable (which is a difference). The POSIX base more or less is a subset of the Korn features (88, 93), pure Bourne is something "else", I know. Real portability, which means a program can go wherever UNIX went, only in C ;)

Philippe Petrinko November 22, 2010, 8:23 am

And if you want to get rid of double-quotes, use:

one-liner code:
while read; do record=${REPLY}; echo ${record}|while read -d ","; do field="${REPLY#\"}"; field="${field%\"}"; echo ${field}; done; done<data

script code, added of some text to better see record and field breakdown:

#!/bin/bash
while read
do
echo "New record"
record=${REPLY}
echo ${record}|while read -d ,
do
field="${REPLY#\"}"
field="${field%\"}"
echo "Field is :${field}:"
done
done<data

Does it work with your data?

- PP

Philippe Petrinko November 22, 2010, 9:01 am

Of course, all the above code was assuming that your CSV file is named "data".

If you want to use anyname with the script, replace:

done<data

With:

done

And then use your script file (named for instance "myScript") with standard input redirection:

myScript < anyFileNameYouWant

Enjoy!

Philippe Petrinko November 22, 2010, 11:28 am

Well, no, there is a bug: the last field of each record is not read – it needs more work, and maybe an IFS modification! After all, that's what IFS was built for… :O)

Anthony Thyssen November 22, 2010, 11:31 pm

Another bug is that the inner loop is a pipeline, so you can't assign variables for use later in the script, but you can use '<<<' to break the pipeline and avoid the echo.

But this does not help when you have commas within the quotes! Which is why you needed quotes in the first place.

In any case it is a little off topic. Perhaps a new thread for reading CSV files in shell should be created.

Philippe Petrinko November 24, 2010, 6:29 pm

Anthony,
Would you try this one-liner script on your CSV file?

This one-liner assumes that CSV file named [data] has __every__ field double-quoted.


while read; do r="${REPLY#\"}";echo "${r//\",\"/\"}"|while read -d \";do echo "Field is :${REPLY}:";done;done<data

Here is the same code, but for a script file, not a one-liner tweak.


#!/bin/bash
# script csv01.sh
#
# 1) Usage
# This script reads from standard input
# any CSV with double-quoted data fields
# and breaks down each field on standard output
#
# 2) Within each record (line), _every_ field MUST:
# - Be surrounded by double quotes,
# - and be separated from preceeding field by a comma
# (not the first field of course, no comma before the first field)
#
while read
do
echo "New record" # this is not mandatory-just for explanation
#
#
# store REPLY and remove opening double quote
record="${REPLY#\"}"
#
#
# replace every "," by a single double quote
record=${record//\",\"/\"}
#
#
echo ${record}|while read -d \"
do
# store REPLY into variable "field"
field="${REPLY}"
#
#
echo "Field is :${field}:" # just for explanation
done
done

This script, named here [csv01.sh], must be used like so:

csv01.sh < my-csv-file-with-doublequotes

Philippe Petrinko November 24, 2010, 6:35 pm

@Anthony,

By the way, using [REPLY] in the outer loop _and_ the inner loop is not a bug.
As long as you know what you do, this is not problem, you just have to store [REPLY] value conveniently, as this script shows.

TheBonsai March 8, 2011, 6:26 am
for ((i=1; i<=20; i++)); do printf "%02d\n" "$i"; done

nixCraft March 8, 2011, 6:37 am

+1 for printf due to portability, but you can use bashy .. syntax too

for i in {01..20}; do echo "$i"; done

TheBonsai March 8, 2011, 6:48 am

Well, it isn't portable per se, it makes it portable to pre-4 Bash versions.

I think a more or less "portable" (in terms of POSIX, at least) code would be

i=0
while [ "$((i >= 20))" -eq 0 ]; do
  printf "%02d\n" "$i"
  i=$((i+1))
done

Philip Ratzsch April 20, 2011, 5:53 am

I didn't see this in the article or any of the comments so I thought I'd share. While this is a contrived example, I find that nesting two groups can help squeeze a two-liner (once for each range) into a one-liner:

for num in {{1..10},{15..20}};do echo $num;done

Great reference article!

Philippe Petrinko April 20, 2011, 8:23 am

@Philip
Nice thing to think of, using brace nesting, thanks for sharing.

Philippe Petrinko May 6, 2011, 10:13 am

Hello Sanya,

That would be because brace expansion does not support variables. I have to check this.
Anyway, Keep It Short and Simple: (KISS) here is a simple solution I already gave above:

xstart=1;xend=10;xstep=1
for (( x = $xstart; x <= $xend; x += $xstep)); do echo $x;done

Actually, the (( )) arithmetic syntax allows you to omit the $, so, as said before, you could also write:

xstart=1;xend=10;xstep=1
for (( x = xstart; x <= xend; x += xstep)); do echo $x;done

Philippe Petrinko May 6, 2011, 10:48 am

Sanya,

Actually brace expansion happens __before__ $ parameter expansion, so you cannot use it this way.

Nevertheless, you could overcome this this way:

max=10; for i in $(eval echo {1..$max}); do echo $i; done

Sanya May 6, 2011, 11:42 am

Hello, Philippe

Thanks for your suggestions
You basically confirmed my findings, that bash constructions are not as simple as zsh ones.
But since I don't care about POSIX compliance, and want to keep my scripts "readable" for less experienced people, I would prefer to stick to zsh where my simple for-loop works

Cheers, Sanya

Philippe Petrinko May 6, 2011, 12:07 pm

Sanya,

First, you got it wrong: solutions I gave are not related to POSIX, I just pointed out that POSIX allows not to use $ in for (( )), which is just a little bit more readable – sort of.

Second, why do you see this less readable than your [zsh] [for loop]?

for (( x = start; x <= end; x += step)) do
echo "Loop number ${x}"
done

It is clear that it is a loop, loop increments and limits are clear.

IMNSHO, if anyone cannot read this right, he should not be allowed to code. :-D

BFN

Anthony Thyssen May 8, 2011, 11:30 pm

If you are going to do… $(eval echo {1..$max});
You may as well use "seq" or one of the many other forms.
See all the other comments on doing for loops.

Tom P May 19, 2011, 12:16 pm

I am trying to use the variable I set in the for line to set another variable with a different extension. Couldn't get this to work and couldn't find it anywhere on the web… Can someone help?

Example:

FILE_TOKEN=`cat /tmp/All_Tokens.txt`
for token in $FILE_TOKEN
do
A1_$token=`grep $A1_token /file/path/file.txt | cut -d ":" -f2`

My goal is to take the values from the All_Tokens file and set a new variable with A1_ in front of it… This tells me that A1_ is not a command…


Using Bash To Feed Command Output To A While Loop Without Using Pipes!

The Linux and Unix Menagerie

But here's a really neat trick for getting this to work in bash 2.x. If you change your program to be structured like so:

while read line
do
    echo $line
done < <(ls -1d *)
Your outcome will result in success!! You've got the command output and you didn't have to use a pipe to feed it to the while loop!

NOTE: The two most important things to remember about doing this are that:

1. The space between the first < and second < is mandatory! Although, it should be noted that, between the two <'s, you can have as many spaces as you want. You can even use a tab between the two <'s, they just can't be directly connected.

2. The command, from which you want to use output as fodder for the while loop, needs to be run in a subshell (generally placed between parentheses, just like the ones surrounding this sentence) and the left parenthesis must immediately follow the second <, with "no" space in between!

We've already looked at what happens if you ignore rule number 1 and use << instead of < <. If you ignore rule number 2, you'll get:

./program: line 4: syntax error near unexpected token `<'
./program: line 4: `done < < (ls -1d *)'

And here's the "even better part" - In bash 3.x, you don't have to worry about all that spacing anymore, as they've added a new feature which does the same thing (or is it really just an old feature dressed up to make it seem fabulous? ;) In bash 3.x, you can use the triple-< operator. Actually, I believe the <<< syntax is referred to as a "here string," but that's purely academic. They could call it "fudge," as long as it works ;)

So, in bash 3.x, you could write a while loop that takes input from a command without using a pipe like so:

while read line
do
 echo hi $line
done <<< `ls -1d *`
NOTE: The space between the <<< and your backticked (or otherwise extrapolated) command output is not necessary and you can have as much space as the shell can stand between those two parts of the "here string." Of course, the three <'s need to be all clumped together with no space in between them.


bash Cookbook

pipeline of commands

We discuss in the book (in the chapter on common mistakes) the fact that a pipeline of commands runs those commands in subshells. The result (or dilemma) is that what happens in those subshells (e.g. counting something) is lost to the parent shell script unless the parent captures output from the pipeline, but that isn't always easy or desirable.

The bash man page describes a feature of bash called "Process Substitution" that lets you substitute the output of a pipeline of commands (actually a list of commands) using <(list) as the syntax.

But notice how the feature is described:

The process list is run with its input or output connected to a FIFO or some file in /dev/fd. The name of this file is passed as an argument to the current command as the result of the expansion.

The <(...) is going to be replaced with the name of a fifo. So if you wrote:

wc <(some commands)
the result would be:
wc fifo

that is, the fifo filename is passed to the command. That's fine for commands like wc that can accept a filename. But what about a builtin like while?

It turns out that you can add the redirect from the fifo, but the space between the two less-than signs is crucial to distinguish it from "<<", the "here document" syntax.

So you can write:

  while read a b c
  do
    ...
  done < <(pipeline of commands)

Internal Commands and Builtins

Piping output to a read, using echo to set variables will fail.

Yet, piping the output of cat seems to work.

cat file1 file2 |
while read line
do
echo $line
done

However, as Bjön Eriksson shows:

Example 14-8. Problems reading from a pipe
#!/bin/sh
# readpipe.sh
# This example contributed by Bjon Eriksson.

last="(null)"
cat $0 |
while read line
do
    echo "{$line}"
    last=$line
done
printf "\nAll done, last:$last\n"

exit 0  # End of code.
        # (Partial) output of script follows.
        # The 'echo' supplies extra brackets.

#############################################

./readpipe.sh 

{#!/bin/sh}
{last="(null)"}
{cat $0 |}
{while read line}
{do}
{echo "{$line}"}
{last=$line}
{done}
{printf "nAll done, last:$lastn"}


All done, last:(null)

The variable (last) is set within the subshell but unset outside.

The gendiff script, usually found in /usr/bin on many Linux distros, pipes the output of find to a while read construct.

find $1 \( -name "*$2" -o -name ".*$2" \) -print |
while read f; do
. . .
It is possible to paste text into the input field of a read. See Example A-39.

ksh vs bash setting variable in piped loops are lost - Ubuntu Forums

ksh vs bash: setting variable in piped loops are lost
I'm a long time unix (not linux) programmer, mainly shell scripts.
Unix does not know about bash, but rather ksh or sh.

So I often build stuff like this:

Code:

n=0
du | sort -n | while read size dir
do
  if [ "$size" -gt 100000 ]
  then
    n=$((n+1))
  fi
done
echo "Found $n too big files"
the question is not how to reformulate this differently using awk or perl.
The question is:

in ksh this script returns the correct value
in bash it always returns 0

This is because in ksh the last command in the pipe runs in the current process, whereas in bash every command in the pipe runs in a subshell. Result: the modified variables are lost.

I'm still puzzled that this is the case and that nobody really cares about it.
Maybe I'm missing the point and there is some simple environment variable or other setting to change to enable ksh-compatible behavior.
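
One bash-side fix for exactly this script, echoing the bash Cookbook excerpt above, is process substitution: it keeps the while loop in the current shell, so the counter survives (a sketch; bash, ksh93 and zsh all support the syntax). The lastpipe option shown earlier on this page is the other alternative.

n=0
while read size dir
do
  if [ "$size" -gt 100000 ]
  then
    n=$((n+1))
  fi
done < <(du | sort -n)
echo "Found $n too big files"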



