pkill - ... signal processes based on name and other attributes
-u, --euid euid,...
Only match processes whose effective user ID is listed.
Either the numerical or symbolical value may be used.
-U, --uid uid,...
Only match processes whose real user ID is listed. Either the
numerical or symbolical value may be used.
-u, --user
Kill only processes the specified user owns. Command names
are optional.
Any utility that finds processes in a Linux/Solaris-style /proc (procfs) will use the
full list of processes (doing a readdir of /proc): it will iterate over the numeric
subdirectories of /proc and check every process found for a match.
To get the list of users, use getpwent (it returns one user per call).
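On Linux you can see what that scan looks like from the shell (a minimal sketch; /proc layout as on Linux):

```shell
pids=$(ls /proc | grep -E '^[0-9]+$')   # every numeric directory under /proc is a PID
name=$(cat "/proc/$$/comm")             # each PID directory exposes the process name in "comm"
echo "$name"
```

This is essentially what pgrep/pkill do before matching names or UIDs against each entry.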
skill (procps & procps-ng)
and killall (psmisc)
both use the getpwnam library call
to parse the argument of the -u option, so only a username will be parsed.
pkill (procps & procps-ng)
uses both atol and getpwnam to parse the -u / -U argument, so both
numeric and textual user specifiers are allowed.
pkill is not obsolete. It may be unportable outside Linux, but the question was about Linux
specifically. – Lars Wirzenius
Aug 4 '11 at 10:11
The pkill command allows you to kill processes simply by specifying a name. For instance, if you want
to kill all open terminal programs (any process whose name matches "term") you can type the following:
pkill term
You can return a count of the number of processes killed by supplying the -c switch as follows:
pkill -c <programname>
The output will simply be the number of processes killed.
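A safe way to try -c is to match a throwaway process instead of a real program (a sketch; it starts and kills its own sleep):

```shell
sleep 300 &                   # a disposable process we can safely match
count=$(pkill -c -x sleep)    # -c prints how many processes were signalled; -x = exact name match
echo "$count"
```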
To kill all the processes for a particular user run the following command:
pkill -u <userid>
To find the effective user ID for a user, use the id command as follows:
id -u <username>
For example:
id -u gary
You can also kill all the processes for a particular user using the real user ID as follows:
pkill -U <userid>
The real user ID is the ID of the user running the process. In most cases it will be the same as
the effective user ID, but if the process runs with elevated privileges (for example via a setuid
binary) then the real user ID of the person running the command and the effective user ID will differ.
To find the real user ID, use the following command:
id -ru <username>
You can also kill all the processes in a particular group by using the following commands:
pkill -g <processgroupid>
pkill -G <realgroupid>
The process group ID identifies the process group (job) that a process belongs to, whereas the real
group ID is the group of the user who actually ran the command.
These may differ if the command was run using elevated privileges.
To find the effective group ID for a user, run the following id command:
id -g
To find the real group ID, use the following id command:
id -rg
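For reference, here are all four lookups together; on a normal unprivileged shell the real and effective IDs usually coincide:

```shell
euid=$(id -u)    # effective user ID
ruid=$(id -ru)   # real user ID
egid=$(id -g)    # effective group ID
rgid=$(id -rg)   # real group ID
echo "$euid $ruid $egid $rgid"
```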
You can limit the number of processes pkill actually kills. For instance, killing all of a user's
processes is probably not what you want to do, but you can kill their newest process by running the
following command:
pkill -n <programname>
Alternatively, to kill the oldest matching process, run the following command:
pkill -o <programname>
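A quick sanity check of -n with two throwaway sleep processes (a sketch; the one-second gap just makes the start times distinguishable):

```shell
sleep 300 & old=$!
sleep 1                           # space out the start times
sleep 300 & new=$!
pkill -n -x sleep                 # signal only the newest process named exactly "sleep"
wait "$new" 2>/dev/null || true   # reap it so the liveness checks below are reliable
new_alive=0
if kill -0 "$new" 2>/dev/null; then new_alive=1; fi
old_alive=0
if kill -0 "$old" 2>/dev/null; then old_alive=1; fi
kill "$old" 2>/dev/null || true   # clean up the survivor
echo "old=$old_alive new=$new_alive"
```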
Imagine two users are running Firefox and you just want to kill one particular user's Firefox
instance; you can run the following command:
pkill -u <uid> firefox
You can kill all processes which have a specific parent ID. To do so run the following command:
pkill -P <parentprocessID>
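For example, -P can be pointed at the current shell to kill only its own children (a disposable sleep again):

```shell
sleep 300 &
child=$!
pkill -P $$ -x sleep              # only "sleep" processes whose parent is this shell
wait "$child" 2>/dev/null || true
gone=1
if kill -0 "$child" 2>/dev/null; then gone=0; fi
echo "$gone"
```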
You can also kill all processes with a specific session ID by running the following command:
pkill -s <sessionID>
Finally, you can also kill all processes attached to a particular terminal by running the following
command:
pkill -t <terminal>
If you want to kill a lot of processes, you can open a file in an editor
such as nano and enter each process name on a separate line. After saving the file, you can run
a command that reads the file and kills each process listed within it.
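The command itself is missing from this copy; one common way to do it (an assumption, not necessarily what the author had in mind) is to feed the file to pkill via xargs:

```shell
printf '%s\n' sleep > procs.txt     # one process name per line
sleep 300 &                         # a disposable process to match
victim=$!
xargs -r -n1 pkill -x < procs.txt   # run "pkill -x NAME" once per listed name
wait "$victim" 2>/dev/null || true
rm -f procs.txt
```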
I use Tilda (drop-down terminal) on Ubuntu as my "command central" - pretty much the way
others might use GNOME Do, Quicksilver or Launchy.
However, I'm struggling with how to completely detach a process (e.g. Firefox) from the
terminal it's been launched from -- i.e. prevent that such a (non-)child process
is terminated when closing the originating terminal, or
"pollutes" the originating terminal via STDOUT/STDERR.
For example, in order to start Vim in a "proper" terminal window, I have tried a simple
script like the following:
exec gnome-terminal -e "vim $@" &> /dev/null &
However, that still causes pollution (also, passing a file name doesn't seem to work).
First of all: once you've started a process, you can background it by first stopping it (hit
Ctrl-Z) and then typing bg to let it resume in the
background. It's now a "job", and its stdout/stderr/stdin
are still connected to your terminal.
You can start a process as backgrounded immediately by appending a "&" to the end of
it:
firefox &
To run it in the background silenced, use this:
firefox </dev/null &>/dev/null &
Some additional info:
nohup is a program you can use to run your application with such that its
stdout/stderr can be sent to a file instead and such that closing the parent script won't
SIGHUP the child. However, you need to have had the foresight to have used it before you
started the application. Because of the way nohup works, you can't just apply
it to a running process .
disown is a bash builtin that removes a shell job from the shell's job list.
What this basically means is that you can't use fg , bg on it
anymore, but more importantly, when you close your shell it won't hang or send a
SIGHUP to that child anymore. Unlike nohup , disown is
used after the process has been launched and backgrounded.
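Putting nohup and a background start together for a process you are launching now (a sketch; the log path is arbitrary):

```shell
nohup sleep 300 >/tmp/nohup-demo.log 2>&1 &   # immune to SIGHUP, output to a file
pid=$!
disown                                        # drop the job from the shell's job table
alive=0
if kill -0 "$pid" 2>/dev/null; then alive=1; fi
kill "$pid" 2>/dev/null || true               # clean up the demo process
rm -f /tmp/nohup-demo.log
```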
What you can't do, is change the stdout/stderr/stdin of a process after having
launched it. At least not from the shell. If you launch your process and tell it that its
stdout is your terminal (which is what you do by default), then that process is configured to
output to your terminal. Your shell has no business with the processes' FD setup, that's
purely something the process itself manages. The process itself can decide whether to close
its stdout/stderr/stdin or not, but you can't use your shell to force it to do so.
To manage a background process' output, you have plenty of options from scripts, "nohup"
probably being the first to come to mind. But for interactive processes you start but forgot
to silence ( firefox < /dev/null &>/dev/null & ) you can't do
much, really.
I recommend you get GNU screen . With screen you can just close your running
shell when the process' output becomes a bother and open a new one ( ^Ac ).
Oh, and by the way, don't use " $@ " where you're using it.
$@ means, $1 , $2 , $3 ..., which
would turn your command into:
gnome-terminal -e "vim $1" "$2" "$3" ...
That's probably not what you want because -e only takes one argument. Use $1
to show that your script can only handle one argument.
It's really difficult to get multiple arguments working properly in the scenario that you
gave (with gnome-terminal -e) because -e takes only one
argument, which is a shell command string. You'd have to encode your arguments into one; the
best and most robust, but rather kludgy, way is to shell-quote each argument into a single string.
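The code block is missing from this copy; the usual trick the answer is alluding to (a hedged reconstruction using bash's printf %q) looks like this:

```shell
set -- "my file.txt" notes.txt     # example arguments, one containing a space
cmd="vim $(printf '%q ' "$@")"     # %q shell-quotes each argument (bash-specific)
echo "$cmd"
# gnome-terminal -e "$cmd"         # not run here; needs a GUI session
```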
Reading these answers, I was under the initial impression that issuing nohup
<command> & would be sufficient. Running zsh in gnome-terminal, I found that
nohup <command> & did not prevent my shell from killing child
processes on exit. Although nohup is useful, especially with non-interactive
shells, it only guarantees this behavior if the child process does not reset its handler for
the SIGHUP signal.
In my case, nohup should have prevented hangup signals from reaching the
application, but the child application (VMWare Player in this case) was resetting its
SIGHUP handler. As a result when the terminal emulator exits, it could still
kill your subprocesses. This can only be resolved, to my knowledge, by ensuring that the
process is removed from the shell's jobs table. If nohup is overridden with a
shell builtin, as is sometimes the case, this may be sufficient, however, in the event that
it is not...
disown is a shell builtin in bash , zsh , and
ksh93 ,
<command> &
disown
or
<command> & disown
if you prefer one-liners. This has the generally desirable effect of removing the
subprocess from the jobs table. This allows you to exit the terminal emulator without
accidentally signaling the child process at all. No matter what the SIGHUP
handler looks like, this should not kill your child process.
After the disown, the process is still a child of your terminal emulator (play with
pstree if you want to watch this in action), but after the terminal emulator
exits, you should see it attached to the init process. In other words, everything is as it
should be, and as you presumably want it to be.
What to do if your shell does not support disown ? I'd strongly advocate
switching to one that does, but in the absence of that option, you have a few choices.
screen and tmux can solve this problem, but they are much
heavier weight solutions, and I dislike having to run them for such a simple task. They are
much more suitable for situations in which you want to maintain a tty, typically on a
remote machine.
For many users, it may be desirable to see if your shell supports a capability like
zsh's setopt nohup . This can be used to specify that SIGHUP
should not be sent to the jobs in the jobs table when the shell exits. You can either apply
this just before exiting the shell, or add it to shell configuration like
~/.zshrc if you always want it on.
Find a way to edit the jobs table. I couldn't find a way to do this in
tcsh or csh , which is somewhat disturbing.
Write a small C program to fork off and exec() . This is a very poor
solution, but the source should only consist of a couple dozen lines. You can then pass
commands as commandline arguments to the C program, and thus avoid a process specific entry
in the jobs table.
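On Linux there is also a shell-level alternative to writing that C program: setsid(1) starts the command in a new session, so it never appears in the shell's job table at all (a sketch with a disposable sleep):

```shell
dur=$((200 + 71))             # computed so this script's own text never contains "sleep 271"
setsid sleep "$dur" </dev/null >/dev/null 2>&1 &   # new session, no controlling terminal
sleep 0.2                     # give setsid a moment to fork and exec
pid=$(pgrep -f "sleep $dur" | head -n1)
detached=0
if [ -n "$pid" ]; then detached=1; kill "$pid" 2>/dev/null || true; fi
```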
I've been using number 2 for a very long time, but number 3 works just as well. Also,
disown has a 'nohup' flag of '-h', can disown all processes with '-a', and can disown all
running processes with '-ar'.
Silencing is accomplished by '$COMMAND &>/dev/null'.
in tcsh (and maybe in other shells as well), you can use parentheses to detach the process.
Compare this:
> jobs # shows nothing
> firefox &
> jobs
[1] + Running firefox
To this:
> jobs # shows nothing
> (firefox &)
> jobs # still shows nothing
>
This removes firefox from the jobs listing, but it is still tied to the terminal; if you
logged in to this node via 'ssh', trying to log out will still hang the ssh process.
To dissociate a command from the tty/shell, run it through a sub-shell, e.g.:
(command) &
When the shell exits and the terminal is closed, the process is still alive.
Have a look at reptyr ,
which does exactly that. The github page has all the information.
reptyr - A tool for "re-ptying" programs.
reptyr is a utility for taking an existing running program and attaching it to a new
terminal. Started a long-running process over ssh, but have to leave and don't want to
interrupt it? Just start a screen, use reptyr to grab it, and then kill the ssh session
and head on home.
USAGE
reptyr PID
"reptyr PID" will grab the process with id PID and attach it to your current
terminal.
After attaching, the process will take input from and write output to the new
terminal, including ^C and ^Z. (Unfortunately, if you background it, you will still have
to run "bg" or "fg" in the old terminal. This is likely impossible to fix in a reasonable
way without patching your shell.)
EDIT : As Stephane Gimenez said, it's not that simple. It's only allowing you to print to a
different terminal.
You can try to write to this process using /proc. Its stdin should be located at
/proc/PID/fd/0, so a simple:
echo "hello" > /proc/PID/fd/0
should do it. I have not tried it, but it should work, as long as this process still has a
valid stdin file descriptor. You can check it with ls -l on /proc/PID/fd/.
if it's a link to /dev/null => it's closed
if it's a link to /dev/pts/X or a socket => it's open
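You can try the same check on your own shell:

```shell
target=$(readlink "/proc/$$/fd/0")   # what this shell's stdin points at
echo "$target"                       # a pts, a pipe, a regular file, or /dev/null
```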
See nohup for more
details about how to keep processes running.
Just ending the command line with & will not completely detach the process,
it will just run it in the background. (With zsh you can use &!
to actually detach it, otherwise you have do disown it later).
When a process runs in the background, it won't receive input from its controlling
terminal anymore. But you can send it back into the foreground with fg and then
it will read input again.
Otherwise, it's not possible to externally change its filedescriptors (including stdin) or
to reattach a lost controlling terminal unless you use debugging tools (see Ansgar's answer , or have a
look at the retty command).
For a few days now I've been successfully running the new Minecraft Bedrock Edition dedicated
server on my Ubuntu 18.04 LTS home server. Because it should be available 24/7 and
automatically start up after boot, I created a systemd service for a detached tmux session:
Everything works as expected but there's one tiny thing that keeps bugging me:
How can I prevent tmux from terminating its whole session when I press
Ctrl+C? I just want to terminate the Minecraft server process itself instead of
the whole tmux session. When starting the server from the command line in a manually
created tmux session this does work (the session stays alive), but not when the session was
brought up by systemd.
When starting the server from the command line in a manually created tmux session this
does work (session stays alive) but not when the session was brought up by systemd
.
The difference between these situations is actually unrelated to systemd. In one case,
you're starting the server from a shell within the tmux session, and when the server
terminates, control returns to the shell. In the other case, you're starting the server
directly within the tmux session, and when it terminates there's no shell to return to, so
the tmux session also dies.
tmux has an option to keep the session alive after the process inside it dies (look for
remain-on-exit in the manpage), but that's probably not what you want: you want
to be able to return to an interactive shell, to restart the server, investigate why it died,
or perform maintenance tasks, for example. So it's probably better to change your command to
this:
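The replacement command is missing from this copy; based on the explanation that follows, it is presumably of this shape (the server path and session name here are assumptions):

```shell
# run the server, then replace the wrapper shell with an interactive one
tmux new-session -d -s minecraft '/opt/minecraft/bedrock_server; exec bash'
```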
That is, first run the server, and then, after it terminates, replace the process (the
shell which tmux implicitly spawns to run the command, but which will then exit) with
another, interactive shell. (For some other ways to get an interactive shell after the
command exits, see e.g. this question -- but note that the
<(echo commands) syntax suggested in the top answer is not available in
systemd unit files.)
How do I find out which running processes are associated with each open port? How do I find out what process has TCP port 111
or UDP port 7000 open under Linux?
You can use the following programs to find out about port numbers and their associated processes:
netstat a command-line tool that displays network connections, routing tables, and a number of network interface statistics.
fuser a command line tool to identify processes using files or sockets.
lsof a command line tool to list open files under Linux / UNIX to report a list of all open files and the processes that
opened them.
/proc/$pid/ file system: under Linux, /proc includes a directory for each running process (including kernel processes) at
/proc/PID, containing information about that process, notably including the name of the process that opened the port.
You must run above command(s) as the root user.
netstat example
Type the following command: # netstat -tulpn
Sample outputs:
OR try the following ps command: # ps -eo pid,user,group,args,etime,lstart | grep '[3]813'
Sample outputs:
3813 vivek vivek transmission 02:44:05 Fri Oct 29 10:58:40 2010
Another option is /proc/$PID/environ, enter: # cat /proc/3813/environ
OR # grep --color -w -a USER /proc/3813/environ
Sample outputs (note colour option): Fig.01: grep output
Now, you get more information about pid # 1607 or 1616 and so on: # ps aux | grep '[1]616'
Sample outputs: www-data 1616 0.0 0.0 35816 3880 ? S 10:20 0:00 /usr/sbin/apache2 -k start
I recommend the following command to grab info about pid # 1616: # ps -eo pid,user,group,args,etime,lstart | grep '[1]616'
Sample outputs:
/usr/sbin/apache2 -k start : The command name and its args
03:16:22 : Elapsed time since the process was started, in the form [[dd-]hh:]mm:ss.
Fri Oct 29 10:20:17 2010 : Time the command started.
Help: I Discover an Open Port Which I Don't Recognize At All
The file /etc/services is used to map port numbers and protocols to service names. Try matching port numbers: $ grep port /etc/services
$ grep 443 /etc/services
Sample outputs:
https 443/tcp # http protocol over TLS/SSL
https 443/udp
Check For rootkit
I strongly recommend that you find out which processes are really running, especially on servers connected to high-speed Internet
access. You can look for a rootkit, which is a program designed to take fundamental control (in Linux / UNIX terms "root" access, in
Windows terms "Administrator" access) of a computer system, without authorization from the system's owners and legitimate managers.
See how to detect
/ check for rootkits under Linux.
Keep an Eye On Your Bandwidth Graphs
Usually, rooted servers are used to send a large number of spam or malware or DoS style attacks on other computers.
See also:
See the following man pages for more information: $ man ps
$ man grep
$ man lsof
$ man netstat
$ man fuser
How can I find which process is constantly writing to disk?
I like my workstation to be close to silent and I just built a new system (P8B75-M + Core i5 3450S -- the 'S' because it has
a lower max TDP) with quiet fans etc. and installed Debian Wheezy 64-bit on it.
And something is getting on my nerves: I can hear some kind of pattern, as if the hard disk were writing or seeking something
(tick...tick...tick...trrrrrr, rinse and repeat every second or so).
I had a similar issue in the past (many, many years ago) and it turned out it was some CUPS log or something, and
I simply redirected that one (unimportant) log to a (real) RAM disk.
But here I'm not sure.
I tried the following:
ls -lR /var/log > /tmp/a.tmp && sleep 5 && ls -lR /var/log > /tmp/b.tmp && diff /tmp/?.tmp
but nothing is changing there.
Now the strange thing is that I also hear the pattern when the prompt asking me to enter my LVM decryption passphrase is showing.
Could it be something in the kernel/system I just installed or do I have a faulty harddisk?
hdparm -tT /dev/sda reports a correct HD speed (130 MB/s non-cached, SATA 6 Gb/s) and I've already installed and compiled
big sources (Emacs) without issue, so I don't think the system is bad.
Are you sure it's a hard drive making that noise, and not something else? (Check the fans, including PSU fan. Had very strange
clicking noises once when a very thin cable was too close to a fan and would sometimes very slightly touch the blades and bounce
for a few "clicks"...) Mat
Jul 27 '12 at 6:03
@Mat: I'll take the hard drive outside of the case (the connectors should be long enough) to be sure and I'll report back ; )
Cedric Martin
Jul 27 '12 at 7:02
Make sure your disk filesystems are mounted relatime or noatime. File reads can cause writes to inodes to record the access
time. camh
Jul 27 '12 at 9:48
Thanks for that tip. I didn't know about iotop. On Debian I did an apt-cache search iotop to find out that I had
to apt-get install iotop. Very cool command!
Cedric Martin
Aug 2 '12 at 15:56
I use iotop -o -b -d 10, which every 10 seconds prints a list of processes that read/wrote to disk and the amount of IO
bandwidth used. ndemou
Jun 20 '16 at 15:32
You can enable IO debugging via echo 1 > /proc/sys/vm/block_dump and then watch the debugging messages in /var/log/syslog
. This has the advantage of obtaining some type of log file with past activities whereas iotop only shows the current
activity.
It is absolutely crazy to leave syslogging enabled while block_dump is active. Logging causes disk activity, which causes logging,
which causes disk activity, etc. Better to stop syslog before enabling this (and use dmesg to read the messages).
dan3
Jul 15 '13 at 8:32
You are absolutely right, although the effect isn't as dramatic as you describe it. If you just want to have a short peek at the
disk activity there is no need to stop the syslog daemon.
scai
Jul 16 '13 at 6:32
I've tried it about 2 years ago and it brought my machine to a halt. One of these days when I have nothing important running I'll
try it again :) dan3
Jul 16 '13 at 7:22
I tried it, nothing really happened. Especially because of file system buffering. A write to syslog doesn't immediately trigger
a write to disk. scai
Jul 16 '13 at 10:50
auditctl -S sync -S fsync -S fdatasync -a exit,always
Watch the logs in /var/log/audit/audit.log . Be careful not to do this if the audit logs themselves are flushed!
Check in /etc/auditd.conf that the flush option is set to none .
If files are being flushed often, a likely culprit is the system logs. For example, if you log failed incoming connection attempts
and someone is probing your machine, that will generate a lot of entries; this can cause a disk to emit machine gun-style noises.
With the basic log daemon sysklogd, check /etc/syslog.conf: if a log file name is not preceded by -,
then that log is flushed to disk after each write.
It might be your drives automatically spinning down, lots of consumer-grade drives do that these days. Unfortunately on even a
lightly loaded system, this results in the drives constantly spinning down and then spinning up again, especially if you're running
hddtemp or similar to monitor the drive temperature (most drives stupidly don't let you query the SMART temperature value without
spinning up the drive - cretinous!).
I disable idle-spindown on all my drives with the following bit of shell code. You could put it in an /etc/rc.boot script,
or in /etc/rc.local or similar.
for disk in /dev/sd? ; do
    /sbin/hdparm -q -S 0 "$disk"
done
that you can't query SMART readings without spinning up the drive leaves me speechless :-/ Now obviously the "spinning down" issue
can become quite complicated. Regarding disabling the spinning down: wouldn't that in itself cause the HD to wear out faster?
I mean: it's never ever "resting" as long as the system is on then?
Cedric Martin
Aug 2 '12 at 16:03
IIRC you can query some SMART values without causing the drive to spin up, but temperature isn't one of them on any of the drives
i've tested (incl models from WD, Seagate, Samsung, Hitachi). Which is, of course, crazy because concern over temperature is one
of the reasons for idling a drive. re: wear: AIUI 1. constant velocity is less wearing than changing speed. 2. the drives have
to park the heads in a safe area and a drive is only rated to do that so many times (IIRC up to a few hundred thousand - easily
exceeded if the drive is idling and spinning up every few seconds)
cas
Aug 2 '12 at 21:42
It's a long debate regarding whether it's better to leave drives running or to spin them down. Personally I believe it's best
to leave them running - I turn my computer off at night and when I go out but other than that I never spin my drives down. Some
people prefer to spin them down, say, at night if they're leaving the computer on or if the computer's idle for a long time, and
in such cases the advantage of spinning them down for a few hours versus leaving them running is debatable. What's never good
though is when the hard drive repeatedly spins down and up again in a short period of time.
Micheal Johnson
Mar 12 '16 at 20:48
Note also that spinning the drive down after it's been idle for a few hours is a bit silly, because if it's been idle for
a few hours then it's likely to be used again within an hour. In that case, it would seem better to spin the drive down promptly
if it's idle (like, within 10 minutes), but it's also possible for the drive to be idle for a few minutes when someone is using
the computer and is likely to need the drive again soon.
Micheal Johnson
Mar 12 '16 at 20:51
I just found that SMART was causing an external USB disk to spin up again and again on my Raspberry Pi. Although SMART is
generally a good thing, I decided to disable it again, and since then it seems that the unwanted disk activity has stopped.
Using lsof (or some variant of its parameters) I can determine which process is bound to a particular port. This is useful, say, if
I'm trying to start something that wants to bind to 8080 and something else is already using that port, but I don't know what.
Is there an easy way to do this without using lsof? I spend time working on many systems and lsof is often not installed.
netstat -lnp will list the pid and process name next to each listening port. This will work under Linux, but not
all others (like AIX.) Add -t if you want TCP only.
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:24800 0.0.0.0:* LISTEN 27899/synergys
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 3361/python
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 2264/mysqld
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 22964/apache2
tcp 0 0 192.168.99.1:53 0.0.0.0:* LISTEN 3389/named
tcp 0 0 192.168.88.1:53 0.0.0.0:* LISTEN 3389/named
etc.
Cool, thanks. Looks like that that works under RHEL, but not under Solaris (as you indicated). Anybody know if there's something
similar for Solaris? user5721
Mar 14 '11 at 21:01
Thanks for this! Is there a way, however, to just display what process listen on the socket (instead of using rmsock which attempt
to remove it) ? Olivier Dulac
Sep 18 '13 at 4:05
@vitor-braga: Ah thx! I thought it was trying, but it just said which process holds it when it couldn't remove it. Apparently it doesn't
even try to remove it when a process holds it. That's cool! Thx!
Olivier Dulac
Sep 26 '13 at 16:00
Another tool available on Linux is ss . From the ss man page on Fedora:
NAME
ss - another utility to investigate sockets
SYNOPSIS
ss [options] [ FILTER ]
DESCRIPTION
ss is used to dump socket statistics. It allows showing information
similar to netstat. It can display more TCP and state informations
than other tools.
Example output below - the final column shows the process binding:
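The example output did not survive in this copy, but the typical invocation for listening TCP sockets with the owning process is:

```shell
# -l listening, -t TCP, -n numeric addresses, -p owning process (may need root for others' sockets)
out=$(ss -ltnp 2>/dev/null || ss -ltn)
printf '%s\n' "$out" | head -n 3
```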
I was once faced with trying to determine what process was behind a particular port (this time it was 8000). I tried a variety
of lsof and netstat, but then took a chance and tried hitting the port via a browser (i.e.
http://hostname:8000/ ). Lo and behold, a splash screen greeted me, and it
became obvious what the process was (for the record, it was Splunk ).
One more thought: "ps -e -o pid,args" (YMMV) may sometimes show the port number in the arguments list. Grep is your friend!
In the same vein, you could telnet hostname 8000 and see if the server prints a banner. However, that's mostly useful
when the server is running on a machine where you don't have shell access, and then finding the process ID isn't relevant.
Gilles
May 8 '11 at 14:45
How can I find which process is constantly writing to disk?
I like my workstation to be close to silent and I just build a new system (P8B75-M + Core i5 3450s -- the 's' because it has
a lower max TDP) with quiet fans etc. and installed Debian Wheezy 64-bit on it.
And something is getting on my nerve: I can hear some kind of pattern like if the hard disk was writing or seeking someting
( tick...tick...tick...trrrrrr rinse and repeat every second or so).
In the past I had a similar issue in the past (many, many years ago) and it turned out it was some CUPS log or something and
I simply redirected that one (not important) logging to a (real) RAM disk.
But here I'm not sure.
I tried the following:
ls -lR /var/log > /tmp/a.tmp && sleep 5 && ls -lR /var/log > /tmp/b.tmp && diff /tmp/?.tmp
but nothing is changing there.
Now the strange thing is that I also hear the pattern when the prompt asking me to enter my LVM decryption passphrase is showing.
Could it be something in the kernel/system I just installed or do I have a faulty harddisk?
hdparm -tT /dev/sda report a correct HD speed (130 GB/s non-cached, sata 6GB) and I've already installed and compiled
from big sources (Emacs) without issue so I don't think the system is bad.
Are you sure it's a hard drive making that noise, and not something else? (Check the fans, including PSU fan. Had very strange
clicking noises once when a very thin cable was too close to a fan and would sometimes very slightly touch the blades and bounce
for a few "clicks"...) Mat
Jul 27 '12 at 6:03
@Mat: I'll take the hard drive outside of the case (the connectors should be long enough) to be sure and I'll report back ; )
Cedric Martin
Jul 27 '12 at 7:02
Make sure your disk filesystems are mounted relatime or noatime. File reads can be causing writes to inodes to record the access
time. camh
Jul 27 '12 at 9:48
thanks for that tip. I didn't know about iotop . On Debian I did an apt-cache search iotop to find out that I had
to apt-get iotop . Very cool command!
Cedric Martin
Aug 2 '12 at 15:56
I use iotop -o -b -d 10 which every 10secs prints a list of processes that read/wrote to disk and the amount of IO
bandwidth used. ndemou
Jun 20 '16 at 15:32
You can enable IO debugging via echo 1 > /proc/sys/vm/block_dump and then watch the debugging messages in /var/log/syslog
. This has the advantage of obtaining some type of log file with past activities whereas iotop only shows the current
activity.
It is absolutely crazy to leave sysloging enabled when block_dump is active. Logging causes disk activity, which causes logging,
which causes disk activity etc. Better stop syslog before enabling this (and use dmesg to read the messages)
dan3
Jul 15 '13 at 8:32
You are absolutely right, although the effect isn't as dramatic as you describe it. If you just want to have a short peek at the
disk activity there is no need to stop the syslog daemon.
scai
Jul 16 '13 at 6:32
I've tried it about 2 years ago and it brought my machine to a halt. One of these days when I have nothing important running I'll
try it again :) dan3
Jul 16 '13 at 7:22
I tried it, nothing really happened. Especially because of file system buffering. A write to syslog doesn't immediately trigger
a write to disk. scai
Jul 16 '13 at 10:50
auditctl -S sync -S fsync -S fdatasync -a exit,always
Watch the logs in /var/log/audit/audit.log . Be careful not to do this if the audit logs themselves are flushed!
Check in /etc/auditd.conf that the flush option is set to none .
If files are being flushed often, a likely culprit is the system logs. For example, if you log failed incoming connection attempts
and someone is probing your machine, that will generate a lot of entries; this can cause a disk to emit machine gun-style noises.
With the basic log daemon sysklogd, check /etc/syslog.conf : if a log file name is not be preceded by -
, then that log is flushed to disk after each write.
It might be your drives automatically spinning down, lots of consumer-grade drives do that these days. Unfortunately on even a
lightly loaded system, this results in the drives constantly spinning down and then spinning up again, especially if you're running
hddtemp or similar to monitor the drive temperature (most drives stupidly don't let you query the SMART temperature value without
spinning up the drive - cretinous!).
I disable idle-spindown on all my drives with the following bit of shell code. you could put it in an /etc/rc.boot script,
or in /etc/rc.local or similar.
for disk in /dev/sd? ; do
/sbin/hdparm -q -S 0 "/dev/$disk"
done
that you can't query SMART readings without spinning up the drive leaves me speechless :-/ Now obviously the "spinning down" issue
can become quite complicated. Regarding disabling the spinning down: wouldn't that in itself cause the HD to wear out faster?
I mean: it's never ever "resting" as long as the system is on then?
Cedric Martin
Aug 2 '12 at 16:03
IIRC you can query some SMART values without causing the drive to spin up, but temperature isn't one of them on any of the drives
I've tested (incl. models from WD, Seagate, Samsung, Hitachi). Which is, of course, crazy because concern over temperature is one
of the reasons for idling a drive. re: wear: AIUI 1. constant velocity is less wearing than changing speed. 2. the drives have
to park the heads in a safe area and a drive is only rated to do that so many times (IIRC up to a few hundred thousand - easily
exceeded if the drive is idling and spinning up every few seconds)
cas
Aug 2 '12 at 21:42
It's a long debate regarding whether it's better to leave drives running or to spin them down. Personally I believe it's best
to leave them running - I turn my computer off at night and when I go out but other than that I never spin my drives down. Some
people prefer to spin them down, say, at night if they're leaving the computer on or if the computer's idle for a long time, and
in such cases the advantage of spinning them down for a few hours versus leaving them running is debatable. What's never good
though is when the hard drive repeatedly spins down and up again in a short period of time.
Micheal Johnson
Mar 12 '16 at 20:48
Note also that spinning the drive down after it's been idle for a few hours is a bit silly, because if it's been idle for
a few hours then it's likely to be used again within an hour. In that case, it would seem better to spin the drive down promptly
if it's idle (like, within 10 minutes), but it's also possible for the drive to be idle for a few minutes when someone is using
the computer and is likely to need the drive again soon.
Micheal Johnson
Mar 12 '16 at 20:51
I just found that SMART was causing an external USB disk to spin up again and again on my Raspberry Pi. Although SMART is
generally a good thing, I decided to disable it again, and since then the unwanted disk activity seems to have stopped.
If you pass -1 as the process ID argument to either the
kill shell command or the
kill C function , then the signal is sent to all the processes it can reach, which
in practice means all the processes of the user running the kill command or syscall.
pkill - ... signal processes based on name and other attributes
-u, --euid euid,...
Only match processes whose effective user ID is listed.
Either the numerical or symbolical value may be used.
-U, --uid uid,...
Only match processes whose real user ID is listed. Either the
numerical or symbolical value may be used.
-u, --user
Kill only processes the specified user owns. Command names
are optional.
Any utility that finds processes on a Linux/Solaris-style /proc (procfs) works from the
full list of processes (a readdir of /proc ): it iterates over the
numeric subdirectories of /proc and checks every process found for a
match.
To get the list of users, use getpwent
(it returns one user per call).
The skill (procps & procps-ng)
and killall (psmisc)
tools both use the getpwnam library call
to parse the argument of the -u option, so only a username is accepted.
pkill (procps & procps-ng)
uses both atol and getpwnam to parse its -u / -U argument, allowing
both numeric and textual user specifiers.
pkill is not obsolete. It may be unportable outside Linux, but the question was about Linux
specifically. – Lars Wirzenius
Aug 4 '11 at 10:11