The primary purpose of the nohup command is to have a process continue executing if you get logged off the system because of a terminal line hang-up. It allows you to start a job and then log off the system: the job continues running even though a hang-up signal was sent to it. The nohup command provides immunity to the hang-up and quit signals; hence its name, "no hang-up".
You usually use the nohup command to start long jobs and then log off the system. Use it for jobs you want to execute and be certain are not terminated when you log off the system or when a quit signal is received.
When a shell exits, each child process receives a SIGHUP signal, which causes the process to exit unless a signal handler is installed to deal with SIGHUP. When a command is invoked with the nohup(1) utility, the signal disposition for SIGHUP is set to ignored, allowing the process to continue executing when the shell exits.
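On Linux this change of disposition can be observed directly in /proc. A minimal sketch (Linux-specific; SigIgn is a hex bitmask whose lowest bit corresponds to SIGHUP, signal 1, and sleep stands in for a real job):

```shell
# Start a plain background job and a nohup'ed one, then compare
# the SigIgn mask in /proc/<pid>/status (Linux-specific).
sleep 300 &
plain=$!
nohup sleep 300 >/dev/null 2>&1 &
protected=$!
sleep 0.5    # give nohup a moment to set the disposition and exec sleep

mask_plain=$(awk '/^SigIgn/ {print $2}' "/proc/$plain/status")
mask_nohup=$(awk '/^SigIgn/ {print $2}' "/proc/$protected/status")

# Bit 0 of the hex mask is SIGHUP (signal 1): clear normally, set under nohup
echo "plain SIGHUP-ignored: $(( 16#$mask_plain & 1 ))"
echo "nohup SIGHUP-ignored: $(( 16#$mask_nohup & 1 ))"

kill "$plain" "$protected"    # clean up the demonstration jobs
```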
The general format of the nohup command is:
nohup command [ options arguments ]
The standard output and standard error of nohup and your command are redirected to the file nohup.out. If nohup cannot create nohup.out, then it tries to write to $HOME/nohup.out. You can redirect the output by specifying the output file. For example,
nohup ~bezroun/uptime_monitor.sh >> ~/var/log/uptime.log &
If you want the nohup command to apply to several commands, you must write a shell script containing the desired commands. For example,
while read -r file
do
    sort -o "$file" "$file"
done < sortdata
This shell script sorts the files whose names are listed in the file sortdata. If we call this script multisort, then to run it in the background without it terminating when you log off the system, you type
nohup multisort &
You must be careful when applying nohup to several commands separated by ; on the command line. For example:
nohup command1; command2
will not work as expected: it applies nohup to the first command but not to the second.
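One way to protect the whole sequence is to run it through a single shell invocation, so that nohup covers everything inside the quotes. A small sketch (the echo commands stand in for real work):

```shell
# nohup applies only to the first command of "cmd1; cmd2".
# Wrapping the sequence in sh -c makes nohup cover all of it:
nohup sh -c 'echo first step; echo second step' > both.log 2>&1 &
wait $!
cat both.log    # both steps ran under nohup and logged to the same file
```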
You also cannot pass a subshell as nohup's argument:
nohup ( command1 )
is a shell syntax error, because nohup expects a command name, not shell syntax.
The Linux version of nohup does not support the -p option, which applies nohup to an already running process. But Solaris and AIX have this important capability and can make a running process ignore all hang-up signals. For example:
nohup -p 1234
On Linux there is a workaround if you are using the bash shell: the built-in command disown.
~ $ echo $SHELL
/bin/bash
~ $ type disown
disown is a shell builtin
~ $ help disown
disown: disown [-h] [-ar] [jobspec ...]
    By default, removes each JOBSPEC argument from the table of active jobs.
    If the -h option is given, the job is not removed from the table, but is
    marked so that SIGHUP is not sent to the job if the shell receives a
    SIGHUP. The -a option, when JOBSPEC is not supplied, means to remove all
    jobs from the job table; the -r option means to remove only running jobs.
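A minimal sketch of this workaround (sleep stands in for a real long-running job):

```shell
# Start a job, then tell bash not to forward SIGHUP to it on exit.
sleep 300 &
pid=$!
disown -h %1          # job stays in the table but is shielded from SIGHUP
if kill -0 "$pid" 2>/dev/null; then alive=yes; else alive=no; fi
echo "disowned job $pid alive: $alive"
kill "$pid"           # clean up for this demonstration
```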
$ nohup find / -print &
After you enter this command, the following is displayed:
670
$ Sending output to nohup.out
The number displayed is the process ID of the background process started by & (ampersand). The message Sending output to nohup.out informs you that the output from the find / -print command is in the nohup.out file. You can log off after you see these messages, even if the find command is still running.
$ nohup find / -print >filenames &
This example runs the find / -print command and stores its output in a file named filenames. Now only the process ID and prompt are displayed:
Wait before logging off, because nohup takes a moment to start the specified command. If you log off too quickly, the command may not run at all. Once the command starts, logging off does not affect it.
If you place a pipeline such as
neqn math1 | nroff > fmath1
in a file named nnfmath1, you can run the nohup command for all of the commands in the nnfmath1 file with the command:
nohup sh nnfmath1
If nnfmath1 is an executable file, you can run it in the background with:
nohup nnfmath1 &
To run the file under the Korn shell, enter:
nohup ksh nnfmath1
Nov 25, 2020 | stackoverflow.com
nohup only writes to nohup.out if the output would otherwise go to the terminal. If you redirect the output of the command somewhere else - including /dev/null - that's where it goes instead.
nohup command >/dev/null 2>&1   # doesn't create nohup.out
If you're using nohup, that probably means you want to run the command in the background by putting another & on the end of the whole thing:
nohup command >/dev/null 2>&1 &   # runs in background, still doesn't create nohup.out
On Linux, running a job with nohup automatically closes its input as well. On other systems, notably BSD and macOS, that is not the case, so when running in the background, you might want to close input manually. While closing input has no effect on the creation or not of nohup.out, it avoids another problem: if a background process tries to read anything from standard input, it will pause, waiting for you to bring it back to the foreground and type something. So the extra-safe version looks like this:
nohup command </dev/null >/dev/null 2>&1 &   # completely detached from terminal
Note, however, that this does not prevent the command from accessing the terminal directly, nor does it remove it from your shell's process group. If you want to do the latter, and you are running bash, ksh, or zsh, you can do so by running disown with no argument as the next command. That will mean the background process is no longer associated with a shell "job" and will not have any signals forwarded to it from the shell. (Note the distinction: a disowned process gets no signals forwarded to it automatically by its parent shell - but without nohup, it will still exit upon receiving a HUP signal sent via other means, such as a manual kill. A nohup'ed process ignores any and all HUP signals, no matter how they are sent.)
In Unixy systems, every source of input or target of output has a number associated with it called a "file descriptor", or "fd" for short. Every running program ("process") has its own set of these, and when a new process starts up it has three of them already open: "standard input", which is fd 0, is open for the process to read from, while "standard output" (fd 1) and "standard error" (fd 2) are open for it to write to. If you just run a command in a terminal window, then by default, anything you type goes to its standard input, while both its standard output and standard error get sent to that window.
But you can ask the shell to change where any or all of those file descriptors point before launching the command; that's what the redirection (<, >) and pipe (|) operators do.
The pipe is the simplest of these: command1 | command2 arranges for the standard output of command1 to feed directly into the standard input of command2. This is a very handy arrangement that has led to a particular design pattern in UNIX tools (and explains the existence of standard error, which allows a program to send messages to the user even though its output is going into the next program in the pipeline). But you can only pipe standard output to standard input; you can't send any other file descriptors to a pipe without some juggling.
The redirection operators are friendlier in that they let you specify which file descriptor to redirect. So 0<infile reads standard input from the file named infile, while 2>>logfile appends standard error to the end of the file named logfile. If you don't specify a number, then input redirection defaults to fd 0 (< is the same as 0<), while output redirection defaults to fd 1 (> is the same as 1>).
Also, you can combine file descriptors together: 2>&1 means "send standard error wherever standard output is going". That means that you get a single stream of output that includes both standard out and standard error intermixed with no way to separate them anymore, but it also means that you can include standard error in a pipe.
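The difference is easy to see with a small sketch (wc -l simply counts the lines that made it through the pipe):

```shell
# Without 2>&1 only stdout enters the pipe; stderr bypasses it.
without=$( { echo out; echo err >&2; } 2>/dev/null | wc -l )
# With 2>&1 stderr is merged into stdout before the pipe, so both lines arrive.
with=$( { echo out; echo err >&2; } 2>&1 | wc -l )
echo "without merge: $without line(s); with merge: $with line(s)"
```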
So the sequence >/dev/null 2>&1 means "send standard output to /dev/null" (which is a special device that just throws away whatever you write to it) "and then send standard error to wherever standard output is going" (which we just made sure was /dev/null). Basically, "throw away whatever this command writes to either file descriptor".
When nohup detects that neither its standard error nor output is attached to a terminal, it doesn't bother to create nohup.out, but assumes that the output is already redirected where the user wants it to go.
The /dev/null device works for input, too; if you run a command with </dev/null, then any attempt by that command to read from standard input will instantly encounter end-of-file. Note that the merge syntax won't have the same effect here; it only works to point a file descriptor to another one that's open in the same direction (input or output). The shell will let you do >/dev/null <&1, but that winds up creating a process with an input file descriptor open on an output stream, so instead of just hitting end-of-file, any read attempt will trigger a fatal "invalid file descriptor" error.
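A quick way to see the immediate end-of-file behaviour (a sketch; read returns a non-zero status on EOF):

```shell
# Reading from /dev/null yields end-of-file immediately, so the
# else branch runs and nothing ever blocks waiting for input.
if read -r line < /dev/null; then
    result="got input"
else
    result="end of file"
fi
echo "$result"
```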
Q: How do I put an already running process under nohup?
I have a process that is already running for a long time and don't want to end it.
How do I put it under nohup (i.e. how do I cause it to continue running even if I close the terminal?)
ctrl+z to stop (pause) the program and get back to the shell
bg to run it in the background
disown -h [job-spec] where [job-spec] is the job number (like %1 for the first running job; find about your number with the jobs command) so that the job isn't killed when the terminal closes
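The same sequence can be simulated non-interactively, as a sketch (in a real terminal you would press Ctrl+Z instead of sending SIGTSTP, and sleep stands in for your long-running program):

```shell
set -m                    # enable job control so bg and %1 work in a script
sleep 300 &               # stands in for the program you forgot to nohup
pid=$!
kill -TSTP "$pid"         # what Ctrl+Z would send: suspend it
sleep 0.2
bg %1                     # resume it in the background
disown -h %1              # shield it from SIGHUP when the shell exits
sleep 0.2
state=$(ps -o state= -p "$pid" | tr -d ' ')
echo "job state after bg: $state"     # S = sleeping, i.e. running in background
kill "$pid"               # clean up for this demonstration
```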
As the question was how to "put it under nohup", disown -h is perhaps the more exact answer: "make disown behave more like nohup (i.e. the jobs will stay in your current shell's process tree until you exit your shell). This allows you to see all the jobs that this shell started." (from quantprinciple.com/invest/index.php/docs/tipsandtricks/unix/) - Jan-Philip Gehrcke Mar 17 '11 at 13:46
How do I recover the job later? I can see it running using ps -e. Paulo Casaretto Jan 11 '12 at 16:28
You can't see the output of a job after a disown; disown makes a process a daemon, which means standard input/output are redirected to /dev/null. So, if you plan to disown a job, it's better to start it with logging into a file, e.g. my_job_command | tee my_job.log - rustyx Jun 14 '12 at 21:06
is it possible somehow to do something like 'my_job_command | tee my_job.log' after the command is already running? arod Nov 21 '12 at 12:53
The command to separate a running job from the shell (i.e. to make it behave as if started with nohup) is disown, a basic shell builtin.
From bash-manpage (man bash):
disown [-ar] [-h] [jobspec ...]
Without options, each jobspec is removed from the table of active jobs. If the -h option is given, each jobspec is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP. If no jobspec is present, and neither the -a nor the -r option is supplied, the current job is used. If no jobspec is supplied, the -a option means to remove or mark all jobs; the -r option without a jobspec argument restricts operation to running jobs. The return value is 0 unless a jobspec does not specify a valid job.
That means that a simple
disown -a
will remove all jobs from the job table and make them immune to SIGHUP.
These are good answers above; I just wanted to add a clarification: you can't disown a pid or process, you disown a job, and that is an important distinction. A job is a notion of a process that is attached to a shell. Therefore, you have to throw the job into the background (not suspend it) and then disown it.
Issue:
% jobs
[1]  running     java
[2]  suspended   vi
% disown %1
See http://www.quantprinciple.com/invest/index.php/docs/tipsandtricks/unix/jobcontrol/ for a more detailed discussion of Unix Job Control.
Suppose for some reason Ctrl+Z is also not working. Go to another terminal, find the process id (using ps) and run:
kill -20 PID
kill -18 PID
kill -20 sends SIGTSTP, which suspends the process, and kill -18 sends SIGCONT, which resumes it in the background. (These numbers apply to x86 Linux; signal numbers vary between architectures, so kill -TSTP and kill -CONT are more portable.) So now, closing both your terminals won't stop your process.
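A sketch of the same idea using signal names rather than numbers (sleep stands in for the real process):

```shell
sleep 300 &
pid=$!
kill -TSTP "$pid"     # SIGTSTP (20 on x86 Linux): suspend the process
sleep 0.2
st1=$(ps -o state= -p "$pid" | tr -d ' ')
echo "after TSTP: $st1"    # T = stopped
kill -CONT "$pid"     # SIGCONT (18 on x86 Linux): resume it in the background
sleep 0.2
st2=$(ps -o state= -p "$pid" | tr -d ' ')
echo "after CONT: $st2"    # S = sleeping, i.e. running again
kill "$pid"           # clean up for this demonstration
```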
Unfortunately disown is specific to bash (and a few other shells such as ksh and zsh) and is not available in all shells.
Certain flavours of Unix (e.g. AIX and Solaris) have an option on the nohup command itself which can be applied to a running process:
nohup -p pid
By using nohup -p <PID> we can no-hangup an already started process by its process id. But the above does not work in a Linux environment. Is there any workaround or equivalent command on Linux to no-hangup an already running process?
Screen is available for Solaris as well. I never use nohup; though the man page is there on my own system, I see no available options except version and help. In order to use nohup in Linux, you have to run it with the command:
nohup [command] [options]
So in your case, screen is just as powerful or more powerful the way I see it.
I'm doing some test-runs of long-running data migration scripts, over SSH. Let's say I start running a script around 4 PM; now, 6 PM rolls around, and I'm cursing myself for not doing this all in screen.
Is there any way to "retroactively" nohup a process, or do I need to leave my computer online all night? If it's not possible to attach screen to/nohup a process I've already started, then why? Something to do with how parent/child processes interact? (I won't accept a "no" answer that doesn't at least address the question of 'why' -- sorry ;) )
If you're using Bash, you can run "disown -h job":
disown [-ar] [-h] [jobspec ...]
Without options, each jobspec is removed from the table of active jobs. If the -h option is given, the job is not removed from the table, but is marked so that SIGHUP is not sent to the job if the shell receives a SIGHUP. If jobspec is not present, and neither the -a nor -r option is supplied, the current job is used. If no jobspec is supplied, the -a option means to remove or mark all jobs; the -r option without a jobspec argument restricts operation to running jobs.
To steal a process from one tty to your current tty, you may want to try this hack:
It needs some reformatting in order to compile to current Linux/glibc versions, but still works.
Cryopid is a further development from the author of grab.c; it freezes a process to a file, which you then run (inside screen) to resume the process.
David Pashley: If you can live with not being able to interact with the process and you don't object to loading random kernel modules, you could do worse than to look at Snoop. Alternatively, there are a couple of other projects. Here is one called injcode, which can mostly do what you want to do.
I've not used it however, so I cannot comment on whether it works or not.
Daniel Lawson: I recently saw a link to neercs, which is a screen-like utility built using libcaca, a colour ASCII-art library. Amongst other features, it boasts the ability to grab an existing process and re-parent it inside your neercs (screen) session.
I've not used it however, so I cannot comment on whether it works or not.
Jul 09, 2004 | Adam Leventhal's Weblog
I always thought it was cool, but I was surprised by the amount of interest expressed for my recent post on nohup -p. There was even a comment asking how nohup manages the trick of redirecting the output of a running process. I'll describe in some detail how nohup -p works.
First, a little background material: Eric Schrock recently had a nice post about the history of the /proc file system; nohup makes use of Solaris's /proc and the agent LWP in particular which Eric also described in detail. All of the /proc and agent LWP tricks I describe are documented in the proc(4) man page.
Historically, nohup invoked a process with SIGHUP and SIGQUIT masked and the output directed to a file called nohup.out. When you run a command inside a terminal there can be two problems: all the output is just recorded to that terminal, and if the terminal goes away the command will receive a SIGHUP, killing it by default. You use nohup to both capture the output in a file and protect the process against the terminal being killed (e.g. if your telnet connection drops).
To "nohup" a running process we need both to mask SIGHUP and SIGQUIT and to redirect the output to the file nohup.out. The agent LWP makes this possible. First we create the agent LWP and have it execute the sigaction(2) system call to mask off SIGHUP and SIGQUIT. Next we need to redirect any output intended for the controlling terminal to the file nohup.out. This is easy in principle: we find all file descriptors open to the controlling terminal, have the agent LWP close them, and then reopen them to the file nohup.out. The problem is that other LWPs (threads) in the process might be using (e.g. with the read(2) or write(2) system calls) those file descriptors, and the close(2) will actually block until those operations have completed. When the agent LWP is present in a process, none of the other LWPs can run, so none of the outstanding operations on those file descriptors can complete, and the process would deadlock. Note that we can work ourselves out of the deadlock by removing the agent LWP, but we still have a problem.
The solution is this: with all LWPs in the process stopped, we identify all the file descriptors that we'll need to close and reopen, and then abort (using the PRSABORT flag listed in the proc(4) man page) those system calls. Once all outstanding operations have been aborted (or successfully completed) we know that there won't be any possibility of deadlocking the process. The agent LWP executes the open(2) system call to open the nohup.out file and then has the victim process dup2(3C) that file descriptor over the ones open to the process's controlling terminal (implicitly closing them). Actually, dup2(3C) is a library call, so we have the agent LWP execute a fcntl(2) system call with the F_DUP2FD command.
Whew. Complicated to be sure, but at the end of it all, our precious process is protected against SIGHUP and SIGQUIT and through our arduous labors, output once intended for the terminal is now safely kept in a file. If this made sense or was even useful, I'd love to hear it...
How does nohup work? I have used nohup(1) for years to startup processes, and to ensure they keep running when my shell exits. When a shell exits, each child process will receive a SIGHUP signal, which causes the process to exit if a signal handler is not installed to deal with the SIGHUP signal. When a command is invoked with the nohup(1) utility, the signal disposition for SIGHUP is set to ignored, allowing the process to continue executing when the shell exits. We can see this with the Solaris "psig" command:
$ ssh oscar &
958
$ psig 958 | grep HUP
HUP     default
The psig utility indicates that the SIGHUP disposition is set to the default, which will cause the process to terminate when we exit the shell. When the same command is invoked with the nohup utility, we can see that the signal disposition for SIGHUP is set to ignored:
$ nohup ssh oscar &
967
$ psig 967 | grep HUP
HUP     ignored
Solaris is an amazing Operating system, and allows the signal dispositions of running processes (and process groups!!) to be set on the fly. This is accomplished with nohup's "-p" and "-g" options:
$ ssh -p 443 oscar &
1081
$ psig 1081 | grep HUP
HUP     default
$ nohup -p 1081
Sending output to nohup.out
$ psig 1081 | grep HUP
HUP     ignored
While this isn't the best example, hopefully you get the point. Sessions, process groups, process group leaders and controlling terminals are really neat concepts, and are explained on pages 677-700 of Solaris Systems Programming (ISBN: 0201750392). This is an INCREDIBLE book, and sits next to my lazy boy for easy reference.
nohup - Wikipedia, the free encyclopedia
pSeries and AIX Information Center
The Last but not Least Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand ~Archibald Putt. Ph.D
Copyright © 1996-2020 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.
Last modified: March, 12, 2019