
Input and Output Redirection


Introduction

During a normal day a sysadmin often writes several bash or Perl scripts, so-called throwaway scripts. They usually perform a specific task related to the problem the sysadmin is currently trying to solve, such as collecting some information. Often, when you face a problem, you want to extract the information relevant to it from a log file, but the log file is "dirty" and needs to be filtered of junk before it becomes usable.

The main tools in such circumstances are a subclass of Unix utilities called filters, connected together via two mechanisms available in Unix -- redirection and pipes -- which allow the output of one command to be processed as the input of another.

In some specific roles, such as web server administrator, extracting relevant information from web server and proxy logs can approach a full-time job.

Standard files

Unix has three standard files:

  • standard input (stdin, file descriptor 0), read by default from the keyboard;
  • standard output (stdout, file descriptor 1), written by default to the screen;
  • standard error (stderr, file descriptor 2), also written by default to the screen.

Of course, that's mostly by convention. There's nothing stopping you from writing your error information to standard output if you wish. You can even close the three file handles totally and open your own files for I/O.
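
As an illustration, here is a minimal sketch (the log file path is hypothetical) of opening your own file descriptor with exec, writing to it, and closing it:

exec 3> /tmp/audit.log       # open file descriptor 3 for writing
echo "script started" >&3    # write through FD 3 instead of stdout
exec 3>&-                    # close FD 3 when done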

Redirection and pipes: two powerful mechanisms in shell programming

There are two major mechanisms that increase the flexibility of Unix utilities:

  • redirection, which connects a command's standard files to files of your choosing;
  • pipes, which connect the standard output of one command to the standard input of another.

Before shell executes a command, it scans the command line for redirection characters. These special symbols instruct the shell to redirect input and output. Redirection characters can appear anywhere in a simple command or can precede or follow a command. They are not passed on to the invoked command.
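
For example, because the shell removes the redirection before invoking the command, all of the following are equivalent:

echo "test" > out.txt
echo > out.txt "test"
> out.txt echo "test"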

Redirection of standard files

By default Unix/Linux assumes that all output goes to STDOUT, which is assigned to the user screen/console, called /dev/tty. You can divert messages directed to standard output, for example from commands like echo, to files or other commands. Bash refers to this as redirection.

The most popular is the > operator, which redirects STDOUT to a file. The redirection operator is followed by the name of the file the messages should be written to. For example, to write a timestamped message to a file named /tmp/my.log, you use:

timestamp=`date`
echo "The processing started at $timestamp" > /tmp/my.log

Try to execute

echo "Hello world" > /dev/tty

You will see that it is typed on your screen exactly the same way as if you executed the command

echo "Hello world"

because those two commands are actually identical.

It is important to understand that when messages aren't redirected in your program, the output goes through a special file called standard output. By default, standard output represents the screen: everything sent through standard output is displayed there. Bash uses the symbol &1 to refer to standard output, and you can explicitly redirect messages to it. You can redirect to a file the output of the whole script:

bash myscript.sh > mylisting.txt
This is the same as
bash myscript.sh 1> mylisting.txt

In this case any echo statement will write the information not to the screen, but to the file you've redirected the output to -- here, the file mylisting.txt.

But you can also redirect each echo statement in your script individually. Let's see another set of examples:

echo "Don't forget to backup your data" > /dev/tty      # send explicitly to the screen
echo "Don't forget to backup your data"                 # sent to screen via standard output
echo "Don't forget to backup your data >&1              # same as the last one
echo "Don't forget to backup your data >/dev/stdout     # same as the last one
echo "Don't forget to backup your data" > warning.txt   # sent to a file in the current directory

Using standard output is a way to send all the output from a script and any commands in it to a new destination.

A script doesn't usually need to know where the messages are going: There’s always the possibility they were redirected. However, when errors occur and when warning messages are printed to the user, you don't want these messages to get redirected along with everything else.

Linux defines a second file especially for messages intended for the user called standard error. This file represents the destination for all error messages. Because standard error, like standard output, is a file, standard error can likewise be redirected. The symbol for standard error is &2. /dev/stderr can also be used. The default destination, like standard output, is the screen. For example,

echo "$SCRIPT:SLINENO: No files available for processing" >&2

This command appears to work the same as an echo without the >&2 redirection, but there is an important difference: it displays the error message on the screen no matter where standard output has been previously redirected.
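
A quick way to see the difference (a minimal sketch): redirect the standard output of a command group to /dev/null and note that the message sent to &2 still reaches the terminal:

{ echo "to stdout"; echo "to stderr" >&2; } > /dev/null    # only "to stderr" appears on the screen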

 The redirection symbols for standard error are the same as standard output except they begin with the number 2. For example

bash myscript.sh 2> myscript_errors.txt

There are several classic types of redirection:

  • > redirects standard output to a file, overwriting it;
  • >> redirects standard output to a file, appending to it;
  • < takes standard input from a file;
  • 2> redirects standard error to a file;
  • << reads standard input from a here document embedded in the script.

Source and target can be expressions. In this case bash performs command and parameter substitution before using the parameter. File name substitution occurs only if the pattern matches a single file.

The Unix command cat is actually short for "catenate," i.e., to link together. It accepts multiple filename arguments and copies them to the standard output. But let's pretend, for the moment, that cat and other utilities don't accept filename arguments and accept only standard input. The Unix shell lets you redirect standard input so that it comes from a file: the notation command < filename does the same job as cat filename | command, with less overhead.

The > operator always overwrites the named file. If a series of printf messages are redirected to the same file, only the last message appears.

To add messages to a file without overwriting the earlier ones, Bash has an append operator, >>. This operator redirects messages to the end of a file.

echo "The processing started at $timestamp" > /tmp/my.log
... ... ... 
echo "There were no errors. Normal exist of the program" >>  /tmp/my.log

In the same way, input can be redirected to a command from a file. The input redirection symbol is <. For example, the utility wc (word count) can count the lines in a file when given the option -l. That means you can count the number of lines in a file using the command:

wc  -l <  $HOME/.bashrc

Again, wc -l counts the lines of the file -- in this case, the number of lines in your .bashrc. Printing this information from your .bash_profile script might be a useful reminder that can alert you to the fact that you recently modified your environment, or, God forbid, that your .bashrc file disappeared without a trace :-)
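
A minimal sketch of such a reminder (the exact wording and check are, of course, up to you) that could go into ~/.bash_profile:

# fragment for ~/.bash_profile: report the size of .bashrc at login
if [ -f "$HOME/.bashrc" ]; then
    echo ".bashrc has $(wc -l < "$HOME/.bashrc") lines"
else
    echo "WARNING: $HOME/.bashrc is missing!"
fi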

There is also a possibility to imitate reading from a file inside the script by putting several lines directly into the script. The operator <<MARKER treats the lines following it in a script as if they were typed from the keyboard, until it reaches a line consisting of just the word MARKER. In other words, the lines treated as an input file are terminated by a special line using a delimiter you define yourself. In the following example the delimiter word used is "EOF":

cat > /tmp/example <<EOF
this is a test demonstrating how you can 
write several lines of text into 
a file
EOF

If you use >> instead of  >  you can add lines to a file without using any editor:

cat >>/etc/resolv.conf <<EOF
search datacenter.mycompany.com headquarters.mycompany.com
nameserver 10.100.20.5
nameserver 10.100.20.6
EOF

In this example bash treats the lines between <<EOF and the closing EOF marker as if they were being typed from the keyboard, and writes them to the file specified after the redirection operator (/tmp/example in the first example above). There should be no spaces between << and the EOF marker. Again, the name EOF is arbitrary; you can choose, for example, LINES_END instead. The only important thing is that no line of your text should start with the same word:

cat >>/etc/resolv.conf <<LINES_END
search datacenter.mycompany.com headquaters.mycompany.com
nameserver 10.100.20.5
nameserver 10.100.20.6
LINES_END

There should be no marker at the beginning of any line of the included text; that's why using all caps for it makes sense.

The data in the << list is known as a here file (or a here document) because the word HERE was often used in Bourne shell scripts as the marker of the end of the input lines.

Bash has another here-file redirection operator, <<<, which redirects a variable or a literal string.

cat > /tmp/example <<<  "this is another example of piping info into the file" 

A summary of each redirection operator is given below in the section "Some additional details for each redirection operator".

Pipes as cascading redirection

Instead of files, the results of a command can be redirected as input to another command. This process is called piping and uses the vertical bar (or pipe) operator |.

who | wc -l # count the number of users

Any number of commands can be strung together with vertical bar symbols. A group of such commands is called a pipeline.
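
For example, a four-stage pipeline that reports the most common login shells (assuming a standard /etc/passwd layout):

cut -d: -f7 /etc/passwd | sort | uniq -c | sort -rn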

If one command in a pipeline ends prematurely, for example because you interrupted it with Ctrl-C, Bash displays the message "Broken pipe" on the screen.

Bash and the process tree [Bash Hackers Wiki]

Pipes are a very powerful tool. You can connect the output of one process to the input of another process. We won't delve into piping at this point; we just want to see how it looks in the process tree. Again, we execute some commands -- this time, ls and grep:

$ ls | grep myfile

It results in a tree like this:

                   +-- ls
xterm ----- bash --|
                   +-- grep

Note once again: ls can't influence the grep environment, grep can't influence the ls environment, and neither grep nor ls can influence the bash environment.

How is that related to shell programming?!?

Well, imagine some Bash code that reads data from a pipe. For example, the internal command read, which reads data from stdin and puts it into a variable. We run it in a loop here to count input lines:

counter=0

cat /etc/passwd | while read; do ((counter++)); done
echo "Lines: $counter"

What? It's 0? Yes! The number of lines might not be 0, but the variable $counter still is 0. Why? Remember the diagram from above? Rewriting it a bit, we have:

                   +-- cat /etc/passwd
xterm ----- bash --|
                   +-- bash (while read; do ((counter++)); done)

See the relationship? The forked Bash process will count the lines like a charm. It will also set the variable counter as directed. But when everything ends, this extra process is terminated, and your "counter" variable is gone. You see a 0 because in the main shell it was 0, and it wasn't changed by the child process!

So, how do we count the lines? Easy: Avoid the subshell. The details don't matter, the important thing is the shell that sets the counter must be the "main shell". For example:

counter=0

while read; do ((counter++)); done </etc/passwd
echo "Lines: $counter"

It's nearly self-explanatory. The while loop runs in the current shell, the counter is incremented in the current shell -- everything vital happens in the current shell. The read command also sets the variable REPLY (the default if nothing is given), though we don't use it here.
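
If the data really does come from a command rather than a file, bash process substitution lets you keep the loop in the current shell (a minimal sketch; see also Process Substitution in Shell):

counter=0
while read; do ((counter++)); done < <(cat /etc/passwd)
echo "Lines: $counter"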

Bash creates subshells or subprocesses on various actions it performs:

  • when it executes an external command;
  • for every command in a pipeline;
  • for command substitution;
  • for commands grouped in parentheses;
  • for commands sent to the background with &.

As shown above, Bash will create subprocesses every time it executes commands. That's nothing new.

But if your command is a subprocess that sets variables you want to use in your main script, that won't work.

For exactly this purpose, there's the source command (also: the dot . command). Source doesn't execute the script in a subprocess; it reads and executes the other script's code in the current shell:

source ./myvariables.sh
# equivalent to:
. ./myvariables.sh
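
A minimal sketch, assuming a hypothetical myvariables.sh that does nothing but assign variables:

# myvariables.sh (hypothetical) contains, say:
#   BACKUP_DIR=/var/backups
#   RETENTION_DAYS=30

source ./myvariables.sh
echo "Keeping $RETENTION_DAYS days of backups in $BACKUP_DIR"   # the variables are visible here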

Explicit subshell

If you group commands by enclosing them in parentheses, these commands are run inside a subshell:

(echo PASSWD follows; cat /etc/passwd; echo GROUP follows; cat /etc/group) >output.txt
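
One consequence worth remembering: a cd inside parentheses does not affect the parent shell. A quick demonstration:

pwd                # e.g. /home/user
( cd /tmp; pwd )   # prints /tmp -- but only inside the subshell
pwd                # still /home/user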

Command substitution

With command substitution you reuse the output of another command as part of your command line, for example to set a variable. The other command is run in a subshell:

number_of_users=$(cat /etc/passwd | wc -l)

Note that, in this example, a second subshell was created by using a pipe in the command substitution:

                                            +-- cat /etc/passwd
xterm ----- bash ----- bash (cmd. subst.) --|
                                            +-- wc -l
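
The extra subshell (and the unneeded cat) can be avoided by letting wc read the file directly:

number_of_users=$(wc -l < /etc/passwd)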

See also Pipes -- powerful and elegant programming paradigm

Some additional details for each redirection operator

< Sourcefile

Uses the file specified by the Sourcefile as standard input (file descriptor 0).

> Targetfile and the noclobber option

Uses the file specified by the target as standard output (file descriptor 1). If the file does not exist, the shell creates it. If the file exists and the noclobber option is on, an error results; otherwise, the file is truncated to zero length.

Note: When multiple shells have the noclobber option set and they redirect output to the same file, there could be a race condition, which might result in more than one of these shell processes writing to the file. The shell does not detect or prevent such race conditions.

>| Targetfile

Same as the > command, except that this redirection statement overrides the noclobber option.

2>/dev/null  # redirect stderr to /dev/null
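
A short demonstration of noclobber and the >| override (a sketch; data.txt is an arbitrary name):

set -o noclobber
echo "first" > data.txt     # creates the file
echo "second" > data.txt    # error: bash refuses to overwrite data.txt
echo "second" >| data.txt   # succeeds: >| overrides noclobber
set +o noclobber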

>> Targetfile

Uses the file specified by target as standard output. If the file currently exists, the shell appends the output to it (by first seeking to the end of the file). If the file does not exist, the shell creates it.

<> Stream

Opens the file specified by the parameter for reading and writing as standard input.
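
A minimal sketch of <> in action, opening a scratch file (hypothetical path) on descriptor 3 for both reading and writing:

exec 3<> /tmp/scratch      # FD 3 is now open for reading and writing
echo "state: running" >&3  # write through the read/write descriptor
exec 3>&-                  # close it when finished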

<<[-] EndMarker (Here documents)

Here documents are often used with the cat command. In this case the shell reads each line of input until it locates a line containing only the value of the parameter (which serves as the end marker) or an end-of-file character. This part of the script is called a here document. For example:

tr a-z A-Z << EOF
jan feb mar
apr may jun
jul aug sep
oct nov dec
EOF

The string EOF was used as the delimiting identifier. It specified the start and end of the here document. The redirect and the delimiting identifier do not need to be separated by a space: <<EOF and << EOF both work.

The part of the script delimited by the end marker is converted into a file that becomes the standard input. If all or part of the end marker parameter is quoted, no interpretation is performed on the body of the here document: no expansion of variables, no arithmetic expressions, no backtick command substitution, nothing.

In other words, the here document is treated as a file that begins after the next newline character and continues until there is a line containing only the end_marker, with no trailing blank characters. Then the next here document, if any, starts (there can be several). The format of a here document is as follows:

[n]<<end_marker
   here document
end_marker

There are two types of end marker:

  • an unquoted marker (e.g. <<EOF), in which case variable, arithmetic, and command substitutions are performed on the body;
  • a fully or partially quoted marker (e.g. <<'EOF'), in which case the body is taken literally, with no expansion of any kind.

If a hyphen (-) is appended to << (i.e., <<-), the shell strips all leading tabs from the lines of the document and from the line containing the end marker.
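
This keeps here documents readable inside indented code. Note that the indentation in the sketch below must consist of real tabs, not spaces:

if true; then
	cat <<-EOF
	Indented with tabs, which <<- strips on output.
	EOF
fi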

<<< -- Here string

A here string (available in Bash, ksh, and zsh) consists of <<< and effects input redirection from a word or a string literal.

In this case you do not need an end marker -- the end of the string signifies the end of the here string. Here is an explanation of the concept from Wikipedia:

A single word need not be quoted:

tr a-z A-Z <<< one

yields:

ONE

In case of a string with spaces, it must be quoted:

tr a-z A-Z <<< 'one two three'

yields:

ONE TWO THREE

This could also be written as:

 FOO='one two three'
 tr a-z A-Z <<< $FOO

Multiline strings are acceptable:

 tr a-z A-Z <<< 'one
 two three'

yields:

ONE
TWO THREE

Note that leading and trailing newlines, if present, are included:

 tr a-z A-Z <<< '
 one
 two three'

yields:

ONE
TWO THREE

The key difference from here documents is that in here documents, the delimiters are on separate lines (the leading and trailing newlines are stripped), and the terminating delimiter can be specified.

Note that here string behavior can also be accomplished (reversing the order) via piping and the echo command, as in:

 echo 'one two three' | tr a-z A-Z

Echo is less efficient and less elegant than the here string, but it is still widely used.
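
Here strings are also handy for feeding a variable to the read builtin; a common idiom (sketch, with made-up data) for splitting a string without a pipe or a temporary file:

line="alice:x:1000"
IFS=: read user pass uid <<< "$line"
echo "$user has uid $uid"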

Combination of input and output redirectors

Input and output redirectors can be combined. For example: the cp command is normally used to copy files; if for some reason it didn't exist or was broken, you could use cat in this way:

cat  < file1 >  file2

This would be similar to cp file1 file2.

It is also possible to redirect the output of a command into the standard input of another command instead of a file. The construct that does this is called the pipe, notated as |. A command line that includes two or more commands connected with pipes is called a pipeline.

Pipes also can be combined with redirectors. To see a sorted listing of the file fred a screen at a time, type sort < fred | more. To print it instead of viewing it on your terminal, type sort < fred | lp.

Here's a more complicated example. The file /etc/passwd stores information about users' accounts on a UNIX system. Each line in the file contains a user's login name, encrypted password (nowadays usually just a placeholder), user ID number, group ID, home directory, login shell, and other info. The first field of each line is the login name; fields are separated by colons (:).

To get a sorted listing of all users on the system, type:

cut -d: -f1 < /etc/passwd | sort 

(Actually, you can omit the <, since cut accepts input filename arguments.) The cut command extracts the first field (-f1), where fields are separated by colons (-d:), from the input.

If you want to send the list to the file /root/users in addition to your screen, you can extend the pipeline with tee like this:

 cut -d: -f1 < /etc/passwd | sort | tee /root/users

Using cat for creating small documents and adding lines to system files

Cat can also be used for creating small documents or adding lines to system files:

cat > message <<EOF
Hello Jim, 
I will come to work later today as I have a doctor appointment
EOF
You can also add lines to system files such as /etc/fstab (see fstab - Wikipedia)
cat >> /etc/fstab <<EOF
# Removable media
/dev/cdrom      /mnt/cdrom      udf,iso9660  noauto,owner,ro                                     0 0
EOF   

Summary

Now you should see how I/O redirectors and pipelines support the UNIX building-block philosophy. The notation is extremely terse and powerful. Just as important, the pipe concept eliminates the need for messy temporary files to store the output of commands before it is fed into other commands.

After sufficient practice, you will find yourself routinely typing in powerful command pipelines that do in one line what it would take several commands (and temporary files) in other operating systems to accomplish.
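
To make the contrast concrete, here is the same task -- counting repeated sshd log lines -- done with temporary files and as a single pipeline (the log path is illustrative):

# with temporary files:
grep sshd /var/log/messages > /tmp/s1
sort /tmp/s1 > /tmp/s2
uniq -c /tmp/s2
rm /tmp/s1 /tmp/s2

# as a single pipeline:
grep sshd /var/log/messages | sort | uniq -c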

<&Digit Duplicates standard input from the file descriptor specified by the Digit parameter
>& Digit Duplicates standard output in the file descriptor specified by the Digit parameter
<&- Closes standard input
>&- Closes standard output
<&p Moves input from the co-process to standard input
>&p Moves standard output to the co-process

If one of these redirection options is preceded by a digit, then the file descriptor number referred to is specified by the digit (instead of the default 0 or 1). In the following example, the shell opens file descriptor 2 for writing as a duplicate of file descriptor 1:

... 2>&1

The order in which redirections are specified is significant. The shell evaluates each redirection in terms of the (FileDescriptor, File) association at the time of evaluation. For example, in the statement:

... 1>File 2>&1

the file descriptor 1 is associated with the file specified by the File parameter. The shell associates file descriptor 2 with the file associated with file descriptor 1 (File). If the order of redirections were reversed, file descriptor 2 would be associated with the terminal (assuming file descriptor 1 had previously been) and file descriptor 1 would be associated with the file specified by the File parameter.
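
A concrete illustration of why the order matters (nosuchfile is assumed not to exist):

ls nosuchfile 1>out.txt 2>&1   # the error message ends up in out.txt
ls nosuchfile 2>&1 1>out.txt   # the error message stays on the terminal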

If a command is followed by an ampersand (&) and job control is not active, the default standard input for the command is the empty file /dev/null. Otherwise, the environment for the execution of a command contains the file descriptors of the invoking shell as modified by input and output specifications.

For more information about redirection, see Input and output redirection.



NEWS CONTENTS

Old News ;-)



[Jul 07, 2020] More stupid Bash tricks- Variables, find, file descriptors, and remote operations - Enable Sysadmin

Notable quotes:
"... No such file or directory ..."
Jul 07, 2020 | www.redhat.com

Reference file descriptors

In the Bash shell, file descriptors (FDs) are important in managing the input and output of commands. Many people have issues understanding file descriptors correctly. Each process has three default file descriptors, namely:

Code Meaning Location Description
0 Standard input /dev/stdin Keyboard, file, or some stream
1 Standard output /dev/stdout Monitor, terminal, display
2 Standard error /dev/stderr Non-zero exit codes are usually >FD2, display

Now that you know what the default FDs do, let's see them in action. I start by creating a directory named foo , which contains file1 .

$> ls foo/ bar/
ls: cannot access 'bar/': No such file or directory
foo/:
file1

The output No such file or directory goes to Standard Error (stderr) and is also displayed on the screen. I will run the same command, but this time use 2> to omit stderr:

$> ls foo/ bar/ 2>/dev/null
foo/:
file1

It is possible to send the output of ls to Standard Output (stdout) and to a file simultaneously, and ignore stderr. For example:

$> { ls foo bar | tee -a ls_out_file ;} 2>/dev/null
foo:
file1

Then:

$> cat ls_out_file
foo:
file1

The following command sends stdout to a file and stderr to /dev/null so that the error won't display on the screen:

$> ls foo/ bar/ >to_stdout 2>/dev/null
$> cat to_stdout
foo/:
file1

The following command sends stdout and stderr to the same file:

$> ls foo/ bar/ >mixed_output 2>&1
$> cat mixed_output
ls: cannot access 'bar/': No such file or directory
foo/:
file1

This is what happened in the last example, where stdout and stderr were redirected to the same file:

    ls foo/ bar/ >mixed_output 2>&1
             |          |
             |          Redirect stderr to where stdout is sent
             |                                                        
             stdout is sent to mixed_output

Another short trick (> Bash 4.4) to send both stdout and stderr to the same file uses the ampersand sign. For example:

$> ls foo/ bar/ &>mixed_output

Here is a more complex redirection:

exec 3>&1 >write_to_file; echo "Hello World"; exec 1>&3 3>&-

This is what occurs:

  • exec 3>&1 Copy stdout to file descriptor 3
  • > write_to_file Make FD 1 write to the file
  • echo "Hello World" Goes to the file, because FD 1 now points to it
  • exec 1>&3 Copy FD 3 back to 1 (swap)
  • 3>&- Close file descriptor 3 (we don't need it anymore)

Often it is handy to group commands, and then send their Standard Error to a single file. For example:

$> { ls non_existing_dir; non_existing_command; echo "Hello world"; } 2> to_stderr
Hello world

As you can see, only "Hello world" is printed on the screen, but the error output of the failed commands is written to the to_stderr file.

[Jul 06, 2020] BASH Shell Redirect stderr To stdout ( redirect stderr to a File ) by Vivek Gite

Jun 06, 2020 | www.cyberciti.biz

... ... ...

Redirecting the standard error stream to a file

The following will redirect program error message to a file called error.log:
$ program-name 2> error.log
$ command1 2> error.log

For example, use the grep command for recursive search in the $HOME directory and redirect all errors (stderr) to a file named grep-errors.txt as follows:
$ grep -R 'MASTER' $HOME 2> /tmp/grep-errors.txt
$ cat /tmp/grep-errors.txt

Sample outputs:

grep: /home/vivek/.config/google-chrome/SingletonSocket: No such device or address
grep: /home/vivek/.config/google-chrome/SingletonCookie: No such file or directory
grep: /home/vivek/.config/google-chrome/SingletonLock: No such file or directory
grep: /home/vivek/.byobu/.ssh-agent: No such device or address
Redirecting the standard error (stderr) and stdout to file

Use the following syntax:
$ command-name &>file
We can also use the following syntax:
$ command > file-name 2>&1
We can write both stderr and stdout to two different files too. Let us try out our previous grep command example:
$ grep -R 'MASTER' $HOME 2> /tmp/grep-errors.txt 1> /tmp/grep-outputs.txt
$ cat /tmp/grep-outputs.txt

Redirecting stderr to stdout to a file or another command

Here is another useful example where both stderr and stdout sent to the more command instead of a file:
# find /usr/home -name .profile 2>&1 | more

Redirect stderr to stdout

Use the command as follows:
$ command-name 2>&1
$ command-name > file.txt 2>&1
## bash only ##
$ command2 &> filename
$ sudo find / -type f -iname ".env" &> /tmp/search.txt

Redirection is processed from left to right. Hence, order matters. For example:
command-name 2>&1 > file.txt ## wrong ##
command-name > file.txt 2>&1 ## correct ##

How to redirect stderr to stdout in Bash script

A sample shell script used to update VM when created in the AWS/Linode server:

#!/usr/bin/env bash
# Author - nixCraft under GPL v2.x+
# Debian/Ubuntu Linux script for EC2 automation on first boot
# ------------------------------------------------------------
# My log file - Save stdout to $LOGFILE
LOGFILE="/root/logs.txt"
 
# My error file - Save stderr to $ERRFILE
ERRFILE="/root/errors.txt"
 
# Start it 
printf "Starting update process ... \n" 1>"${LOGFILE}"
 
# All errors should go to error file 
apt-get -y update 2>"${ERRFILE}"
apt-get -y upgrade 2>>"${ERRFILE}"
printf "Rebooting cloudserver ... \n" 1>>"${LOGFILE}"
shutdown -r now 2>>"${ERRFILE}"

Our last example uses the exec command and FDs along with trap and custom bash functions:

#!/bin/bash
# Send both stdout/stderr to a /root/aws-ec2-debian.log file
# Works with Ubuntu Linux too.
# Use exec for FD and trap it using the trap
# See bash man page for more info
# Author:  nixCraft under GPL v2.x+
# ---------------------------------------------
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>/root/aws-ec2-debian.log 2>&1
 
# log message
log(){
        local m="$@"
        echo ""
        echo "*** ${m} ***"
        echo ""
}
 
log "$(date) @ $(hostname)"
## Install stuff ##
log "Updating up all packages"
export DEBIAN_FRONTEND=noninteractive
apt-get -y clean
apt-get -y update
apt-get -y upgrade
apt-get -y --purge autoremove
 
## Update sshd config ##
log "Configuring sshd_config"
sed -i'.BAK' -e 's/PermitRootLogin yes/PermitRootLogin no/g' -e 's/#PasswordAuthentication yes/PasswordAuthentication no/g'  /etc/ssh/sshd_config
 
## Hide process from other users ##
log "Update /proc/fstab to hide process from each other"
echo 'proc    /proc    proc    defaults,nosuid,nodev,noexec,relatime,hidepid=2     0     0' >> /etc/fstab
 
## Install LXD and stuff ##
log "Installing LXD/wireguard/vnstat and other packages on this box"
apt-get -y install lxd wireguard vnstat expect mariadb-server 
 
log "Configuring mysql with mysql_secure_installation"
SECURE_MYSQL_EXEC=$(expect -c "
set timeout 10
spawn mysql_secure_installation
expect \"Enter current password for root (enter for none):\"
send \"$MYSQL\r\"
expect \"Change the root password?\"
send \"n\r\"
expect \"Remove anonymous users?\"
send \"y\r\"
expect \"Disallow root login remotely?\"
send \"y\r\"
expect \"Remove test database and access to it?\"
send \"y\r\"
expect \"Reload privilege tables now?\"
send \"y\r\"
expect eof
")
 
# log to file #
echo "   $SECURE_MYSQL_EXEC   "
# We no longer need expect 
apt-get -y remove expect
 
# Reboot the EC2 VM
log "END: Rebooting requested @ $(date) by $(hostname)"
reboot
WANT BOTH STDERR AND STDOUT TO THE TERMINAL AND A LOG FILE TOO?

Try the tee command as follows:
command1 2>&1 | tee filename
Here is how to use it inside a shell script too:

#!/usr/bin/env bash
{
   command1
   command2 | do_something
} 2>&1 | tee /tmp/outputs.log
Conclusion

In this quick tutorial, you learned about three file descriptors, stdin, stdout, and stderr. We can use these Bash descriptors to redirect stdout/stderr to a file or vice versa. See bash man page here :

Operator Description Example
command > filename Redirect stdout to file "filename" date > output.txt
command >> filename Redirect and append stdout to file "filename" ls -l >> dirs.txt
command 2> filename Redirect stderr to file "filename" du -ch /snaps/ 2> space.txt
command 2>> filename Redirect and append stderr to file "filename" awk '{ print $4}' input.txt 2>> data.txt
command &> filename (or command > filename 2>&1) Redirect both stdout and stderr to file "filename" grep -R foo /etc/ &> out.txt
command &>> filename (or command >> filename 2>&1) Redirect and append both stdout and stderr to file "filename" whois domain &>> log.txt

Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting.

  1. Matt Kukowski says: January 29, 2014 at 6:33 pm

    In pre-bash4 days you HAD to do it this way:

    cat file > file.txt 2>&1

    now with bash 4 and greater versions you can still do it the old way but

    cat file &> file.txt

    The above is bash4+. Some OLD distros may use pre-bash4, but I think they are all long gone by now. Just something to keep in mind.

  2. iamfrankenstein says: June 12, 2014 at 8:35 pm

    I really love: " command 2>&1 | tee logfile.txt "

    because tee logs everything and prints to stdout. So you still get to see everything! You can even combine sudo to downgrade to a log user account and add a dated subject and store it in a default log directory :)

[Jul 26, 2019] What Is /dev/null in Linux by Alexandru Andrei

Images removed...
Jul 23, 2019 | www.maketecheasier.com
... ... ...

In technical terms, "/dev/null" is a virtual device file. As far as programs are concerned, these are treated just like real files. Utilities can request data from this kind of source, and the operating system feeds them data. But, instead of reading from disk, the operating system generates this data dynamically. An example of such a file is "/dev/zero."

In this case, however, you will write to a device file. Whatever you write to "/dev/null" is discarded, forgotten, thrown into the void. To understand why this is useful, you must first have a basic understanding of standard output and standard error in Linux or *nix type operating systems.


stdout and stderr

A command-line utility can generate two types of output. Standard output is sent to stdout. Errors are sent to stderr.

By default, stdout and stderr are associated with your terminal window (or console). This means that anything sent to stdout and stderr is normally displayed on your screen. But through shell redirections, you can change this behavior. For example, you can redirect stdout to a file. This way, instead of displaying output on the screen, it will be saved to a file for you to read later – or you can redirect stdout to a physical device, say, a digital LED or LCD display.

A full article about pipes and redirections is available if you want to learn more.

  • With 2> you redirect standard error messages. Example: 2>/dev/null or 2>/home/user/error.log .
  • With 1> you redirect standard output.
  • With &> you redirect both standard error and standard output.


Use /dev/null to Get Rid of Output You Don't Need

Since there are two types of output, standard output and standard error, the first use case is to filter out one type or the other. It's easier to understand through a practical example. Let's say you're looking for a string in "/sys" to find files that refer to power settings.

grep -r power /sys/

There will be a lot of files that a regular, non-root user cannot read. This will result in many "Permission denied" errors.

These clutter the output and make it harder to spot the results that you're looking for. Since "Permission denied" errors are part of stderr, you can redirect them to "/dev/null."

grep -r power /sys/ 2>/dev/null

As you can see, this is much easier to read.

In other cases, it might be useful to do the reverse: filter out standard output so you can only see errors.

ping google.com 1>/dev/null

Without redirection, ping displays its normal output when it can reach the destination machine. In the second command, nothing is displayed while the network is online, but as soon as it gets disconnected, only error messages are displayed.

You can redirect both stdout and stderr to two different locations.

ping google.com 1>/dev/null 2>error.log

In this case, stdout messages won't be displayed at all, and error messages will be saved to the "error.log" file.

Redirect All Output to /dev/null

Sometimes it's useful to get rid of all output. There are two ways to do this.

grep -r power /sys/ >/dev/null 2>&1

The string >/dev/null means "send stdout to /dev/null," and the second part, 2>&1 , means send stderr to stdout. In this case you have to refer to stdout as "&1" instead of simply "1." Writing "2>1" would just redirect stdout to a file named "1."

What's important to note here is that the order is important. If you reverse the redirect parameters like this:

grep -r power /sys/ 2>&1 >/dev/null

it won't work as intended. That's because as soon as 2>&1 is interpreted, stderr is sent to stdout and displayed on screen. Next, stdout is suppressed when sent to "/dev/null." The final result is that you will see errors on the screen instead of suppressing all output. If you can't remember the correct order, there's a simpler redirect that is much easier to type:

grep -r power /sys/ &>/dev/null

In this case, &>/dev/null is equivalent to saying "redirect both stdout and stderr to this location."

Other Examples Where It Can Be Useful to Redirect to /dev/null

Say you want to see how fast your disk can read sequential data. The test is not extremely accurate but accurate enough. You can use dd for this, but dd either outputs to stdout or can be instructed to write to a file. With of=/dev/null you can tell dd to write to this virtual file. You don't even have to use shell redirections here. if= specifies the location of the input file to be read; of= specifies the name of the output file, where to write.

dd if=debian-disk.qcow2 of=/dev/null status=progress bs=1M iflag=direct

In some scenarios, you may want to see how fast you can download from a server. But you don't want to write to your disk unnecessarily. Simply enough, don't write to a regular file, write to "/dev/null."

wget -O /dev/null http://ftp.halifax.rwth-aachen.de/ubuntu-releases/18.04/ubuntu-18.04.2-desktop-amd64.iso
Conclusion

Hopefully, the examples in this article can inspire you to find your own creative ways to use "/dev/null."


[Oct 31, 2017] Bash process substitution by Tom Ryder

Notable quotes:
"... Thanks to Reddit user Rhomboid for pointing out an incorrect assertion about this syntax necessarily abstracting ..."
"... calls, which I've since removed. ..."
February 27, 2012 sanctum.geek.nz

For tools like diff that work with multiple files as parameters, it can be useful to work with not just files on the filesystem, but also potentially with the output of arbitrary commands. Say, for example, you wanted to compare the output of ps and ps -e with diff -u . An obvious way to do this is to write files to compare the output:

$ ps > ps.out
$ ps -e > pse.out
$ diff -u ps.out pse.out

This works just fine, but Bash provides a shortcut in the form of process substitution , allowing you to treat the standard output of commands as files. This is done with the <() and >() operators. In our case, we want to direct the standard output of two commands into place as files:

$ diff -u <(ps) <(ps -e)

This is functionally equivalent, except it's a little tidier because it doesn't leave files lying around. This is also very handy for elegantly comparing files across servers, using ssh :

$ diff -u .bashrc <(ssh remote cat .bashrc)

Conversely, you can also use the >() operator to direct from a filename context to the standard input of a command. This is handy for setting up in-place filters for things like logs. In the following example, I'm making a call to rsync , specifying that it should make a log of its actions in log.txt , but filter it through grep -vF .tmp first to remove anything matching the fixed string .tmp :

$ rsync -arv --log-file=>(grep -vF .tmp >log.txt) src/ host::dst/

Combined with tee this syntax is a way of simulating multiple filters for a stdout stream, transforming output from a command in as many ways as you see fit:

$ ps -ef | tee >(awk '$1=="tom"' >toms-procs.txt) \
               >(awk '$1=="root"' >roots-procs.txt) \
               >(awk '$1!="httpd"' >not-apache-procs.txt) \
               >(awk 'NR>1{print $1}' >pids-only.txt)

In general, the idea is that wherever on the command line you could specify a file to be read from or written to, you can instead use this syntax to make an implicit named pipe for the text stream.
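
Another classic use (a sketch, with hypothetical file names) is comparing two unsorted files with comm, which requires sorted input:

comm -12 <(sort a.txt) <(sort b.txt)   # lines common to both files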

Thanks to Reddit user Rhomboid for pointing out an incorrect assertion about this syntax necessarily abstracting mkfifo calls, which I've since removed.

[Nov 28, 2014] Standard Input and Output Redirection

sc.tamu.edu
The Bourne shell uses a different format for redirection which includes numbers. The numbers refer to the file descriptor numbers (0 standard input, 1 standard output, 2 standard error). For example, 2> redirects file descriptor 2, or standard error. &n is the syntax for redirecting to a specific open file. For example 2>&1 redirects 2 (standard error) to 1 (standard output); if 1 has been redirected to a file, 2 goes there too. Other file descriptor numbers are assigned sequentially to other open files, or can be explicitly referenced in the shell scripts. Some of the forms of redirection for the Bourne shell family are:
Character Action
> Redirect standard output
2> Redirect standard error
2>&1 Redirect standard error to standard output
< Redirect standard input
| Pipe standard output to another command
>> Append to standard output
2>&1| Pipe standard output and standard error to another command

Note that < and > assume standard input and output, respectively, as the default, so the numbers 0 and 1 can be left off. The form of a command with standard input and output redirection is:

$ command -[options] [arguments] < input_file > output_file

[Apr 09, 2010] Improve Your Unix Logging with Advanced I/O Redirection By Charlie Schluting

March 16, 2010 | http://www.enterprisenetworkingplanet.com/

Beyond the basic shell I/O redirection tools, there are a few interesting tricks to learn. One trick, swapping stderr and stdout, is highly useful in many situations. Command lengths tend to grow out of control and become difficult to understand, but taking the time to break them apart into each component reveals a simple explanation.

Briefly, let's make sure everyone is on the same page. In Unix I/O, there are three file handles that always exist: stdin (0), stdout (1), and stderr (2). Standard error is the troublesome one, as we often want all command output sent to a file. Running ./command > logfile will send only stdout to the file, and stderr output will be sent to the terminal. To combat this, we can redirect the stderr stream into stdout so that both stream into the file: ./command >logfile 2>&1. This works wonderfully, but often severely limits our options.

Take the following, which is all one line (backslashes mean continue on the next line):

(((/root/backup.sh | tee -a /var/log/backuplog) \ 
   3>&1 1>&2 2>&3 | tee -a /var/log/backuplog) \ 
   3>&1 1>&2 2>&3) >/dev/null

This trick involves three nested subshells, and all kinds of weird file handle manipulation. The above command will log both stderr and stdout into the /var/log/backuplog file, and also emit any errors to the terminal as stderr should. This is the only way to accomplish those requirements, aside from creating your own FIFO pipes with mkfifo, or perhaps using bash's process substitution.

The reason this command gets so crazy is because 'tee' doesn't understand stderr. The tee command will take whatever it receives on stdin and write it to stdout and the specified file. If you approach this the normal way, that is to say combine stderr into stdout (2>&1), before sending it to tee, you've just lost the ability to write only stderr to the console. Both streams are combined, with no way to break them apart again.

Why is this useful? When the above command is run from cron, it will log everything to the file and if anything goes wrong, stderr will go to the console, which gets e-mailed to an administrator. If desired, you could also log stdout and stderr to two distinct files.

Starting with the basic command, we're simply writing the stdout output to a log file, and back to stdout again.

backup.sh | tee -a /var/log/backuplog

Any errors going to stderr are preserved, because we haven't redirected anything yet. Unfortunately, we've lost control of any stderr output that came from backup.sh: after running this command, stderr simply gets written to the console.

To test this, simply write a script that outputs to both stderr and stdout:

#!/bin/sh
echo "test stdout" >&1
echo "test stderr" >&2

Now, to resolve the issue with stderr being written to the console before we've had a chance to play with it, we use a subshell:

(test.sh | tee -a ./log)

We now have our two distinct file descriptors to work with again. A subshell will capture stdout and stderr, and those file handles will both be available for the next process. If we just stopped there, we could now write those two items to distinct log files, like so:

(test.sh | tee -a ./log) 1>out.log 2> err.log

We're already writing stdout to the backuplog file, so this probably isn't useful in this situation. The next step in this process is to swap stdout and stderr. Remember, 'tee' can only operate on stdout, so in order to have it write stderr to the log, we have to swap the two. When done carefully, we don't permanently combine the two streams with no way to get them back apart.

(test.sh | tee -a ./log) 3>&1 1>&2 2>&3

This works by creating a new handle, 3, and mapping it to stdout. Then stdout gets mapped to stderr, and stderr becomes stdout in the last portion. The third file handle essentially "saves" stdout for use later, so 2>&3 maps stderr to the original stdout.

At this point, we have test.sh writing "test stdout" into the log file, and then the stdout and stderr file handles are swapped.

Now, we can write stderr into the log file:

((test.sh | tee -a ./log) 3>&1 1>&2 2>&3 | tee -a ./log)

This is the same thing we did before, including the subshell. This is where the 2nd level of subshell comes from; we want to retain both stdout and stderr so we can swap them back to normal again.

At this point, you could also modify the above to write to two different log files, one for standard output and one for standard error. Now let's return our file handles to the proper state by using that swap trick again:

(((test.sh | tee -a ./log) 3>&1 1>&2 2>&3 | tee -a ./log) \
3>&1 1>&2 2>&3)


We're now back to where we started, except both stdout and stderr have been written to the log file! You can proceed as normal, and redirect the IO however you wish. In the original command above, we added >/dev/null to the end, so that only standard output would be output. This is great for cron scripts where you only want e-mail if something goes wrong, but you also wish to have all output logged for debugging purposes. There is one problem, however, with this approach. If both types of output are being written quickly, things can arrive in the log file out of order. There is no solution when using this technique, so be aware that it may happen.

[Aug 11, 2009] Input-Output redirection made simple in Linux

January 07, 2006 | All about Linux
Linux follows the philosophy that everything is a file. Monitor, mouse, printer... you name it, and it is classified as a file in Linux. Each of these pieces of hardware has a unique file descriptor associated with it. This nomenclature has its own advantages, the main one being that you can use all the common command line tools you have in Linux to send, receive or manipulate data with these devices.

For example, my mouse has the file descriptor '/dev/input/mice' associated with it (yours may be different).

So if I want to see the output of the mouse on my screen, I just enter the command :
cat /dev/input/mice
... and then move the mouse to get characters on the terminal. Try it out yourselves.

Note: In some cases, running the above command will scramble your terminal display. In such an outcome, you can type the command :

reset
... to get it corrected.

Linux provides each program that is run on it access to three important files. They are standard input, standard output and standard error. And each of these special files (standard input, output and error) have got the file descriptors 0, 1 and 2 respectively. In the previous example, the utility 'cat' uses standard output which by default is the screen or the console to display the output.

  • Standard Input - 0
  • Standard Output - 1
  • Standard Error - 2
Redirecting output to other files

You can easily redirect input / output to any file other than the default one. This is achieved in Linux using input and output redirection symbols. These symbols are as follows:

> - Output redirection
< - Input redirection 
Using a combination of these symbols and the standard file descriptors you can achieve complex redirection tasks quite easily.

Output Redirection

Suppose, I want to redirect the output of 'ls' to a text file instead of the console. This I achieve using the output redirection symbol as follows:
$ ls -l myfile.txt > test.txt
When you execute the above command, the output is redirected to a file by name test.txt. If the file 'test.txt' does not exist, then it is automatically created and the output of the command 'ls -l' is written to it. This is assuming that there is a file called myfile.txt existing in my current directory.

Now let's see what happens when we execute the same command after deleting the file myfile.txt.

$ rm myfile.txt
$ ls -l myfile.txt > test.txt
ls: myfile.txt: No such file or directory -- ERROR
What happens is that 'ls' does not find the file named myfile.txt and displays an error on the console or terminal. Now here is the fun part. You can also redirect the error generated above to another file instead of displaying on the console by using a combination of error file descriptor and output file redirection symbol as follows:
$ ls -l myfile.txt 2> test.txt
The thing to note in the above command is '2>' which can be read as - redirect the error (2) to the file test.txt.

Two open xterms can be used to practice output redirection.

I can give one practical purpose for this error redirection which I use on a regular basis. When I am searching for a file in the whole hard disk as a normal user, I get a lot of errors such as :

find: /file/path: Permission denied
In such situations I use the error redirection to weed out these error messages as follows:
# find / -iname \* 2> /dev/null
Now all the error messages are redirected to /dev/null device and I get only the actual find results on the screen.

Note: /dev/null is a special kind of file in that its size is always zero. Whatever you write to that file will just disappear. The opposite of this file is /dev/zero, which acts as an infinite source. For example, you can use /dev/zero to create a file of any size - when creating a swap file, for instance.
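
For example, a sketch of creating a 1 GB swap file from /dev/zero (the size and path are illustrative):

dd if=/dev/zero of=/swapfile bs=1M count=1024
mkswap /swapfile
swapon /swapfile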

If you have a line printer connected to your Linux machine - let's say its device file is /dev/lp0 - then you can send output to the printer using output redirection. For example, to print the contents of a text file, I do the following:

$ cat testfile.txt > /dev/lp0
Input Redirection

You use input redirection using the less-than symbol and it is usually used with a program which accepts user input from the keyboard. A legendary use of input redirection that I have come across is mailing the contents of a text file to another user.

$ mail ravi < mail_contents.txt

I say legendary because now with the advances in GUI, and also availability of good email clients, this method is seldom used.

Suppose you want to find the exact number of lines, number of words and characters respectively in a text file and at the same time you want to write it to another file. This is achieved using a combination of input and output redirection symbols as follows:

$ wc < my_text_file.txt > output_file.txt
What happens above is the contents of the file my_text_file.txt are passed to the command 'wc' whose output is in turn redirected to the file output_file.txt .

Appending data to a file

You can also use the >> symbol instead of > to append data to a file. For example,

$ cat - >> test.txt
... will append what ever you write to the file test.txt.

20 comments:

Anonymous said...
A very good article. Enjoyed it.
I may also add that you can do something like :
ls -l xxx.txt 2>&1 >another_file
which will redirect both output and any errors to another_file.
Henry
1/07/2006 10:40:00 AM
Anonymous said...
Excellent. The first time I have ever really understood redirection!
1/07/2006 10:42:00 PM
bogomipz said...
You mean:
ls -l xxx.txt 2>&1 another_file

What you said will redirect stderr to a file named 1 and try to run another_file as a command. I prefer this simpler syntax to redirect stdout and stderr to a file:

ls -l xxx.txt &> another_file

1/08/2006 02:26:00 AM
Anonymous said...
Maybe the first example:

$ cat /dev/input/mice

should be changed by:

# cat /dev/input/mice

if for example I had a vulnerability in a web application and someone could just read one of those files from www-data user, and with them, the attacker could read all what I do with my mouse or my keyboard, I would be really scared.

Fortunately, just root can do that

1/08/2006 05:57:00 AM
Anonymous said...
Aren't you glad that GNU provides all of these core utilities? Welcome to the GNU userland.
1/08/2006 08:50:00 AM
Anonymous said...
@ anonymous 5:57 AM

$ cat /dev/input/mice

should be changed by:

# cat /dev/input/mice

I think the author has written $ cat... instead of # cat ... for a reason. He is just conveying that it is better to play it safe and run those commands as a normal user and not as root.

A nice article. Cleared a lot of my doubts on this topic.

1/08/2006 09:48:00 AM
Anonymous said...
>I think the author has written $ cat... instead of # cat ... for a reason. He is just conveying that it is better to play it safe and run those commands as a normal user and not as root.

In a sane environment you can't do that. Normal users only will have permissions to write to this device, not to read from it.

1/08/2006 03:23:00 PM
Anonymous said...
If "everything is a file", then where are the semantics of file operations defined? I'm especially interested in the semantics of 'ioctl'.
1/08/2006 03:26:00 PM
Anonymous said...
great
this shows, how powerful gnu/linux/unix is
1/08/2006 03:34:00 PM
Anonymous said...
@ anonymous (3:26 PM)

Semantics is defined at some appropriate driver level.

For example, do a 'ls -l' on the files in the /dev directory, these are special devices with major/minor number pairs which maps to a device driver in kernel space, hence the semantics is left to that driver.

The same can be applied to ordinary files and directories, The VFS layer performs the mapping to the appropriate file system driver for those.

The character far left of an 'ls -l' on a file tells the file type (e.g. '-', 'd', 'b' or 'c' to mention a few)

1/08/2006 03:48:00 PM
Anonymous said...
I think the author has written $ cat... instead of # cat ... for a reason. He is just conveying that it is better to play it safe and run those commands as a normal user and not as root.

Yeah, run everything with the less privileges you can... but reading /dev/input/mice is something that just root can do it! run those commands as a normal user and you will get a "Permission denied".

In a sane environment you can't do that. Normal users only will have permissions to write to this device, not to read from it.

In a sane environment you also can't write to the device.

Those files represent devices.

Would you imagine that *any user* of the system would have permissions over input devices? I'm thinking of the "postfix" user, the "www-data" user, and so on.

You would be in almost the same situation as if you were just in front of the computer.

1/10/2006 04:11:00 AM
Anonymous said...
This should have been labeled 'Simple Input/Output Redirection at the command-line.' A more useful article would have been how to redirect stdin, stdout, stderr within a shell script using 'exec'. With a little info about tee thrown in for good measure.

/djs

1/10/2006 05:23:00 AM
Anonymous said...
wc < my_text_file.txt > output_file.txt

can also be written as

wc my_text_file.txt | tee output_file.txt

1/12/2006 03:40:00 PM
Jason Thompson said...
Great article. I've always wanted to know how to redirect error messages. I'm kicking myself now since it is so easy.
7/11/2006 10:53:00 PM
Anonymous said...
I appreciate the examples given. They are very clear and easy to understand. It's a great article indeed.
9/15/2006 10:00:00 AM
Anonymous said...
Nice article, but let's dig deeper.
For example, suppose you would like to change your password.
You type:

$ passwd

That will ask for passwords.
Now how do you realize a redirection in this case, if you want to input these passwords from a file?

10/13/2006 01:58:00 PM
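The short answer is that passwd deliberately reads the new password from the controlling terminal, not from stdin, so plain ' < ' redirection does not work for it. On Linux systems the usual workaround is the chpasswd utility from shadow-utils, which does read user:password pairs from stdin (a sketch, assuming root privileges; the user name is made up):

# echo 'alice:NewSecret123' | chpasswd

Red Hat systems also offer 'passwd --stdin', and genuinely terminal-bound programs can be scripted with expect.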
Anonymous said...
Very good,
Thank you!
4/13/2007 12:35:00 PM
Cabellos JL said...
Good ... but I would like to redirect the output of the command "dsh":
dsh -a ps -u $USER > file
It does not work!!!!
How can I do this?
Thank you!
Cabellos JL
9/21/2007 09:59:00 AM
Anonymous said...
dsh -a ps -u $USER > file

Probably because 'dsh' is a shell that accepts a command as an argument. In this case it thinks you're passing it

ps -u $USER > file

in order to redirect the output of that command to the file you'd need to do:

dsh -a "ps -u $USER" > file

4/29/2008 02:00:00 AM
sudharsh said...
Well, to ensure that the terminal is not scrambled, you could use hexdump:

$ hexdump /dev/input/mice

prints a viewable dump; od would work as well.

KSH - Redirection and Pipes

An uncommon program to use for this example is the "fuser" program under Solaris. It gives you a long listing of what processes are using a particular file. For example:

$ fuser /bin/sh
/bin/sh:    13067tm   21262tm
If you wanted to see just the processes using that file, you might initially groan and wonder how best to parse it with awk or something. However, fuser actually splits up the data for you already. It puts the stuff you may not care about on stderr, and the meaty 'data' on stdout. So if you throw away stderr, with the '2>' special redirect, you get
$ fuser /bin/sh  2>/dev/null
    13067   21262
which is then trivially usable.

Unfortunately, not all programs are that straightforward :-) However, it is good to be aware of these things, and also of status returns. The 'grep' command actually returns a status based on whether it found a line. The status of the last command is stored in the '$?' variable. So if all you care about is, "is 'biggles' in /etc/hosts?" you can do the following:

grep biggles /etc/hosts >/dev/null
if [[ $? -eq 0 ]] ; then
	echo YES
else
	echo NO
fi
As usual, there are lots of other ways to accomplish this task, even using the same 'grep' command. However, this method has the advantage that it does not waste OS cycles with a temp file, nor does it waste memory with a potentially very long variable.
(If you were looking for something that could potentially match hundreds of lines, then var=`grep something /file/name` could get very long)
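Since 'if' tests the exit status of a command directly, you can also drop '$?' altogether; and POSIX grep has a '-q' flag that suppresses output, making the /dev/null redirect unnecessary (on older systems whose grep lacks -q, the redirect shown above does the same job). A minimal equivalent sketch:

if grep -q biggles /etc/hosts ; then
	echo YES
else
	echo NO
fi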

Inline redirection

You have seen redirection TO a file. But you can also redirect input FROM a file. For programs that can take data on stdin, this is useful. The 'wc' command can take a filename as an argument, or use stdin. So all of the following are roughly equivalent in result, although internally different things happen:
wc -l /etc/hosts
wc -l < /etc/hosts
cat /etc/hosts | wc -l

Additionally, if there are some fixed lines you want to use, and you do not want to bother making a temporary file, you can pretend part of your script is a separate file! This is done with the special '<<' redirect operator.

command << EOF
means, "run 'command', but make its stdin come from this file right here, until you see the string 'EOF'"

EOF is the traditional string. But you can actually use any unique string you want. Additionally, you can use variable expansion in this section!

DATE=`date`
HOST=`uname -n`
mailx -s 'long warning' root << EOF
Something went horribly wrong with system $HOST
at $DATE
EOF
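The flip side is also worth knowing: if you quote the delimiter, variable expansion inside the here-document is suppressed and the text passes through literally. This is standard shell behavior, handy when embedding awk programs or other scripts:

cat <<'EOF'
$HOST and $DATE are NOT expanded here
EOF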

Pipes

In case you missed it before, pipes take the output of one command and feed it to the input of another command. You can actually string these together, as seen here:
grep hostspec /etc/hosts | awk '{print $1}' | grep '^10\.1\.' | wc -l
(Note that fgrep would not work for the third stage: it treats every character of the pattern, including '^', as literal text rather than as a regular expression.) This is a fairly easy way to find which entries in /etc/hosts both match a particular pattern in their name, AND fall in a particular IP address range.

The "disadvantage" to this, is that it is very wasteful. Whenever you use more than one pipe at a time, you should wonder if there is a better way to do it. And indeed for this case, there most certainly IS a better way:

grep '^10\.1\..*hostspec' /etc/hosts | wc -l
There is actually a way to do this with a single awk command. But this is not a lesson on how to use AWK!
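For the curious, a single-awk version might look like the sketch below ('hostspec' and the 10.1. prefix are just the placeholders used above); awk counts the matching lines itself, so neither grep nor wc is needed:

awk '/^10\.1\./ && /hostspec/ { n++ } END { print n+0 }' /etc/hosts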

Combining pipes and redirection

An interesting example of pipes with stdin/err and redirection is the "tar" command. If you use "tar cvf file.tar dirname", it will create a tar file, and print out all the names of the files in dirname it is putting in the tarfile. It is also possible to take the same 'tar' data, and dump it to stdout. This is useful if you want to compress at the same time you are archiving:
tar cf - dirname | compress > file.tar.Z
But it is important to note that pipes by default only take the data on stdout! So it is possible to get an interactive view of the process, by using
tar cvf - dirname | compress > file.tar.Z
stdout has been redirected to the pipe, but stderr is still being displayed to your terminal, so you will get a file-by-file progress report. Or of course, you could redirect it somewhere else, with
tar cvf - dirname 2>/tmp/tarfile.list | compress > file.tar.Z 

Indirect redirection

Additionally, there is a special type of pipes+redirection, known as process substitution. This only works on systems with /dev/fd/X support. You can automatically generate a "fake" file as the result of a command that does not normally generate a file. The names of the fake files will be /dev/fd/{somenumberhere}.

Here's an example that doesn't do anything useful:

wc -l <(echo one line) <(echo another line)
wc will report that it saw two files, "/dev/fd/4" and "/dev/fd/5", and that each "file" had one line. From its own perspective, wc was called simply as
wc -l /dev/fd/4 /dev/fd/5

There are two useful components to this:

  1. You can handle MULTIPLE commands' output at once
  2. It's a quick-n-dirty way to create a pipeline out of a command that "requires" a filename (as long as it only processes its input in a single continuous stream).
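A more practical sketch of the same mechanism, using the common idiom of comparing the sorted contents of two files without any temporary files (the file names are arbitrary):

diff <(sort file1) <(sort file2)

diff believes it is comparing two ordinary files, but each "file" is really the output of a sort command arriving through /dev/fd.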

Solaris Advanced System Administrator's Guide, Second Edition: Writing Shell Scripts

Standard In, Standard Out, and Standard Error

When writing shell scripts, you can control input/output redirection. Input redirection is the ability to force a command to read any necessary input from a file instead of from the keyboard. Output redirection is the ability to send the output from a command into a file or pipe instead of to the screen.

Each process created by a shell script begins with three file descriptors associated with it, as shown in Figure 16-1.

These file descriptors—standard input, standard output, and standard error—determine where input to the process comes from, and where the output and error messages are sent.

Standard input (STDIN) is always file descriptor 0. Standard input is the place where the shell looks for its input data. Usually data for standard input comes from the keyboard. You can specify standard input to come from another source using input/output redirection.

Standard output (STDOUT) is always file descriptor 1. Standard output is the place where the results of program execution are sent by default. Usually, the results of program execution are displayed on the terminal screen. You can redirect standard output to a file, or suppress it completely by redirecting it to /dev/null.

Standard error (STDERR) is always file descriptor 2. Standard error is the place where error messages are sent as they are generated during command processing. Usually, error messages are displayed on the terminal screen. You can redirect standard error to a file, or suppress it completely by redirecting it to /dev/null.

You can use the file descriptor numbers 0 (standard input), 1 (standard output), and 2 (standard error) together with the redirection metacharacters to control input and output in the Bourne and Korn shells. Table 16-7 shows the common ways you can redirect file descriptors.

Table 16-7 Bourne and Korn Shell Redirection

Description                                  Command
Take STDIN from file                         < file, or 0< file
Redirect STDOUT to file                      > file, or 1> file
Redirect STDERR to file                      2> file
Append STDOUT to end of file                 >> file
Redirect STDERR to STDOUT                    2>&1
Pipe stdout of cmd1 as stdin to cmd2         cmd1 | cmd2
Use file as both STDIN and STDOUT            <> file
Close STDIN                                  <&-
Close STDOUT                                 >&-
Close STDERR                                 2>&-

When redirecting STDIN and STDOUT in the Bourne and Korn shells, you can omit the file descriptors 0 and 1 from the redirection symbols. You must always use the file descriptor 2 with the redirection symbol.
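A quick sketch pulling the table together (all file names are arbitrary):

prog < input.txt > output.log 2> error.log

This reads stdin from input.txt, writes results to output.log, and captures error messages in error.log; writing it as 'prog 0< input.txt 1> output.log 2> error.log' would mean exactly the same thing.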

The 0 and 1 file descriptors are implied, and not used explicitly for the C shell, as shown in Table 16-8. The C shell representation for standard error (2) is an ampersand (&). STDERR can only be redirected when redirecting STDOUT.

Table 16-8 C Shell Redirection Metacharacters

Description                                  Command
Redirect STDOUT to file                      > file
Take input from file                         < file
Append STDOUT to end of file                 >> file
Redirect STDOUT and STDERR to file           >& file
Append STDOUT and STDERR to file             >>& file
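For example, in csh (a sketch assuming a csh-family shell):

ls /etc /bogus >& all_output

sends both the listing and the 'No such file or directory' complaint to all_output. Since there is no bare stderr metacharacter, redirecting stderr alone requires a subshell trick such as '(command > stdout_file) >& stderr_file'.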

Tips For Linux - Input-Output Redirection in Unix

codecoffee.com

Redirection is one of Unix's strongest points. Ramnick explains the concept in this article: input redirection, output redirection, and many simple, useful ways to put redirection to good use.

Introduction

For those of you who have no idea what redirection means, let me explain it in a few words. Whenever you run a program you get some output at the shell prompt. In case you don't want that output to appear in the shell window, you can redirect it elsewhere: you can make the output go into a file, or go directly to the printer, or make it disappear :)

This is known as redirection. Not only can the output of programs be redirected, you can also redirect the input for programs. I shall be explaining all this in detail in this article. Let's begin...

File Descriptors

One important thing you have to know to understand redirection is file descriptors. In Unix every open file has a number associated with it, called the file descriptor. And in Unix everything is a file: from the devices connected to your machine to the normal text files storing information, all of these are treated as files by the operating system.

Similarly, even the screen on which your programs display their output is a file to Unix, and it has a file descriptor associated with it. So when a program executes, it sends its output to a certain file descriptor, and since that particular file descriptor happens to point to the screen, the output gets displayed on the screen. Had it been the file descriptor of the printer, the output would have been printed by the printer. (There are of course other factors at play, but I guess you get the idea: everything is a file, and you send whatever you want to particular file descriptors.)

Whenever any program is executed (i.e. when the user types a command) the program has 3 important files to work with: standard input, standard output, and standard error. These are 3 files that are always open when a program runs. You could consider them to be inherently present for all programs (for the techies: when a child process is forked from a parent process, these 3 files are made available to the child process). For the rest, just remember that you always have these 3 files whenever you type any command at the prompt. As explained before, a file descriptor is associated with each of these files:

File Descriptor    Points to
0                  Standard input (generally the keyboard)
1                  Standard output (generally the display/screen)
2                  Standard error output (generally the display/screen)

You can redirect any of these files to other files. In short, if you redirect 1 (standard output) to the printer, your program's output would start getting printed instead of being displayed on the screen.

What is the standard input? That would be your keyboard. Most of the time, since you enter commands with your keyboard, you can consider 0 to be your keyboard. Since you get the output of your command on the screen, 1 is the screen (display), and since errors are also shown on the screen, 2 is the screen as well.

For those of you who like to think ahead of what is being discussed, you must have already realized that you can now avoid all those irritating, irrelevant error messages you often get while executing some programs: just redirect the standard error (2) to some file and avoid seeing the error messages on the screen!

Output Redirection

The most common use of redirection is to redirect the output (that normally goes to the terminal) from a command to a file instead. This is known as output redirection. It is generally used when a command produces a lot of output that scrolls past very rapidly; you can capture all the output in a file and then transfer that file elsewhere or mail it to someone.

The way to redirect the output is by using the ' > ' operator in the shell command you enter, as shown below. The ' > ' symbol is known as the output redirection operator. Any command that outputs its results to the screen can have its output sent to a file instead.

$ ls > listing

The ' ls ' command would normally give you a directory listing. Since you have the ' > ' operator after the ' ls ' command, redirection would take place. What follows the ' > ' tells Unix where to redirect the output. In our case it would create a file named ' listing ' and write the directory listing in that file. You could view this file using any text editor or by using the cat command.

Note: If the file mentioned already exists, it is overwritten. So care should be taken to enter a proper name. In case you want to append to an existing file, then instead of the ' > ' operator you should use the ' >> ' operator. This would append to the file if it already exists, else it would create a new file by that name and then add the output to that newly created file.
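As a safety net against accidental overwriting, most POSIX shells offer the 'noclobber' option: with it set, ' > ' refuses to overwrite an existing file, and the ' >| ' operator forces the write when you really mean it. A short sketch:

$ set -o noclobber
$ ls > listing       # fails if 'listing' already exists
$ ls >| listing      # overrides noclobber and overwrites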

Input Redirection

Input redirection is not as popular as output redirection, since most of the time you expect input to be typed at the keyboard. But when used effectively, input redirection can be of great use. Its general use is when you already have a file ready and would like to run some command on that file.

You can use Input Redirection by typing the ' < ' operator. An excellent example of Input Redirection has been shown below.

$ mail cousin < my_typed_letter

The above command would start the mail program with contents of the file named ' my_typed_letter ' as the input since the Input Redirection operator was used.

Note: You can't use input redirection with every program/command; only commands that read from standard input can take their input from a file this way. Similarly, output redirection is useful only when the program sends its output to the terminal. If you are redirecting the output of a program that runs under X, it would be of no use to you.
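A tiny illustration with a pure filter (reusing the file name from the mail example): tr takes no file operands at all, so input redirection is the only way to feed it a file:

$ tr 'a-z' 'A-Z' < my_typed_letter

This prints an upper-case version of the letter to the screen; adding ' > shouting_letter ' would capture it in a file instead.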

Error Redirection

This is a very popular feature that many Unix users are happy to learn. If you have worked with Unix for some time, you must have realized that many commands you type produce a stream of error messages you are not really interested in. For example, whenever I search for a file, I always get a lot of 'permission denied' error messages. There may be ways to fix those things, but the simplest way is to redirect the error messages elsewhere so they don't bother me; in my case I know the errors I get while searching for files are of no use to me.

Here is a way to redirect the error messages

$ myprogram 2>errorsfile

This above command would execute a program named ' myprogram ' and whatever errors are generated while executing that program would all be added to a file named ' errorsfile ' rather than be displayed on the screen. Remember that 2 is the error output file descriptor. Thus ' 2> ' means redirect the error output.

$ myprogram 2>>all_errors_till_now

The above command is useful if you have been saving all the error messages for some later use: this time the error messages are appended to the file rather than creating a new file.

You might realize that in the above case I redirected the error messages to a file even though they don't interest me, which means I would have to delete that file every time I run the command, or else such junk files would pile up all over. An excellent way around this is shown below:

$ find / -name 's*.jpg' 2>/dev/null
(The pattern is quoted so that the shell itself doesn't expand the wildcard before find sees it.)

What's /dev/null? It's something like a black hole: whatever is sent to ' /dev/null ' never returns, and nobody knows where it goes. It simply disappears. Isn't that fantastic? So remember: whenever you want to get rid of something you don't want, just send it to /dev/null.

Isn't Unix wonderful!
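Combining error and output redirection in one command is often the most useful pattern of all (a small sketch; the file names are arbitrary):

$ find / -name 's*.jpg' > found.txt 2>/dev/null

The useful matches end up in found.txt while the 'permission denied' noise vanishes into /dev/null.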

Different ways to use Redirection Operators

Suppose you want to create a text file quickly

$ cat > filename
This is some text that I want in this file
^D

That's it!! Once you type the ' cat ' command, use the Redirection operator and add a name for a file. Then start typing your line. And finally press Ctrl+D. You will have a file named ' filename ' in the same directory.

Suppose you want to add a single line to an existing file.

$ echo "this is a new line" >> exsisting_file

That would add the new line to the file named ' existing_file ' . Remember to use ' >> ' instead of ' > ' else you would overwrite the file.

Suppose you wanted to join 2 files

$ cat file2 >> file1

Wow!! That's a much neater way than opening a text editor and copying and pasting. The contents of ' file2 ' would be added to ' file1 '.

Suppose you want to join a couple of files

$ cat file1 file2 > file3

This would add the contents of ' file1 ' and ' file2 ' and then write these contents into a new file named ' file3 ' .


Redirection works with many commands besides ordinary ones such as ' cat ' or ' ls '. For example, if you are programming in any language, you can redirect the compiler's output messages to a file and view them later on. There are lots of commands where you can use redirection; the more you use Unix, the more you will come across.

About the Author - Ramnick G currently works for Realtech Systems based in Brazil. He has been passionate about Linux since the early '90s and has been developing on Linux machines for the last couple of years. When he finds some free time, he prefers to spend it listening to Yanni.

Tuesday Tiny Techie Tips by Jeff Youngstrom

15 April 1997


Redirect stderr to a file
$ ls 2> file
This redirects just stderr output (associated with fd2) to the file. stdout is unchanged.
Redirect both stdout and stderr to a file
$ ls > file 2>&1
First the "> file" indicates that stdout should be sent to the file, then the "2>&1" indicates that stderr (fd2) should be sent to the same place as stdout (fd1).

To append to the file, only the stdout redirection must change since stderr is just hitching a lift on whatever stdout is doing.

$ ls >> file 2>&1
Redirect stdout to one file and stderr to another
$ ls > file 2> file2
Pipe one process' stdout and stderr to another's stdin
$ ls 2>&1 | wc
Here we combine stderr onto the stdout stream, then use "|" to pipe the result to the next process.
Combinations
$ sed 's/^#//' < file 2> sederr | \
	wc -l 2> wcerr | \
	awk '{print $NF}' > final 2> awkerr
Here I'm saving the error output from each command in the pipeline to a separate file ("sederr", "wcerr", "awkerr"), but letting stdout go straight through the pipe into the file "final". Input to sed(1) at the beginning of the pipe is redirected from the file "file".


Tuesday Tiny Techie Tips are all © Copyright 1996-1997 by Jeff Youngstrom.

Recommended Links


Sites

Linux I-O Redirection

Bourne Shell Scripting-Redirection - Wikibooks, collection of open-content textbooks

Input and output redirection in the Korn shell or POSIX shell

tee (Unix) - Wikipedia, the free encyclopedia





