Pdsh is a parallel remote shell client. Basic usage is via the familiar syntax
pdsh [OPTION]... COMMAND
where COMMAND is the remote command to run.
Pdsh is a threaded application that uses a sliding window (or fanout) of threads to conserve resources on the initiating host and allow some connections to time out while all other connections continue.
Output from each host is displayed as it is received and is prefixed with the name of the host and a ':' character, unless the -N option is used.
pdsh -av 'grep . /proc/sys/kernel/ostype'
ehype78: Linux
ehype79: Linux
ehype76: Linux
ehype85: Linux
ehype77: Linux
ehype84: Linux
...
When pdsh receives a Ctrl-C (SIGINT), it will list the status of all current threads
[Ctrl-C]
pdsh@hype137: interrupt (one more within 1 sec to abort)
pdsh@hype137:  (^Z within 1 sec to cancel pending threads)
pdsh@hype137: hype0: connecting
pdsh@hype137: hype1: command in progress
pdsh@hype137: hype2: command in progress
pdsh@hype137: hype3: connecting
pdsh@hype137: hype4: connecting
...
Another Ctrl-C within one second will cause pdsh to abort immediately, while a Ctrl-Z within a second will cancel all pending threads, allowing threads that are connecting and "in progress" to complete normally.
[Ctrl-C Ctrl-Z]
pdsh@hype137: interrupt (one more within 1 sec to abort)
pdsh@hype137:  (^Z within 1 sec to cancel pending threads)
pdsh@hype137: hype0: connecting
pdsh@hype137: hype1: command in progress
pdsh@hype137: hype2: command in progress
pdsh@hype137: hype3: connecting
pdsh@hype137: hype4: connecting
pdsh@hype137: hype5: connecting
pdsh@hype137: hype6: connecting
pdsh@hype137: Canceled 8 pending threads.
At a minimum pdsh requires a list of remote hosts to target and a remote command. The standard options used to set the target hosts in pdsh are -w and -x, which set and exclude hosts respectively.
As noted in the sections above, pdsh accepts lists of hosts in the general form prefix[n-m,l-k,...], where n < m and l < k, etc., as an alternative to explicit lists of hosts. This form should not be confused with regular expression character classes (also denoted by []). For example, foo[19] does not represent an expression matching foo1 or foo9, but rather represents the degenerate hostlist foo19. The hostlist syntax is meant only as a convenience on clusters with a "prefixNNN" naming convention, and specification of ranges should not be considered necessary -- thus foo1,foo9 could be specified as such, or by the hostlist foo[1,9].
Some examples of usage follow:
Run command on foo01,foo02,...,foo05
pdsh -w foo[01-05] command
Run command on foo7,foo9,foo10
pdsh -w foo[7,9-10] command
Run command on foo0,foo4,foo5
pdsh -w foo[0-5] -x foo[1-3] command
A suffix on the hostname is also supported:
Run command on foo0-eth0,foo1-eth0,foo2-eth0,foo3-eth0
pdsh -w foo[0-3]-eth0 command
Note: Most shells will interpret brackets ([ and ]) for pattern matching. So it is necessary to enclose ranged lists within quotes:
pdsh -w "foo[01-05]" command
The -w option is used to set and/or filter the list of target hosts, and is used as
pdsh -w TARGETS ...
where TARGETS is a comma-separated list of one or more of the following:
host[0-10] -> host0,host1,host2,...,host10
host[0-2,10] -> host0,host1,host2,host10
See the HOSTLIST expressions page for details on the HOSTLIST format.
Read hosts from /tmp/hosts:
pdsh -w ^/tmp/hosts ...
Also works for multiple files:
pdsh -w ^/tmp/hosts,^/tmp/morehosts ...
Select only hosts ending in a 0 via regex:
pdsh -w host[0-20],/0$/ ...
Run on all hosts (-a) except host0:
pdsh -a -w -host0 ...
Exclude all hosts ending in 0:
pdsh -a -w -/0$/ ...
Exclude hosts in file /tmp/hosts:
pdsh -a -w -^/tmp/hosts ...
Additionally, a list of hosts preceded by "user@" specifies a remote username other than the default for those hosts, and a list of hosts preceded by "rcmd_type:" specifies an alternate rcmd connect type for the following hosts. If used together, the rcmd type must be specified first; e.g., ssh:user1@host0 would use ssh to connect to host0 as user1.
Run with user `foo' on hosts h0,h1,h2, and user `bar' on hosts h3,h5:
pdsh -w foo@h[0-2],bar@h[3,5] ...
Use ssh and user "u1" for hosts h[0-2]:
pdsh -w ssh:u1@h[0-2] ...
Note: If using the genders module, the rcmd_type for groups of hosts can be encoded in the genders file using the special pdsh_rcmd_type attribute.
The -x option is used to exclude specific hosts from the target node list and is used simply as
pdsh -x HOSTS ...
This option may be used with other node selection options such as -a and -g (when available). Arguments to -x may also be preceded by the filename ('^') and regex ('/') characters as described above. As with -w, the -x option also operates on hostlists.
Exclude hosts ending in 0:
pdsh -a -x /0$/ ...
Exclude hosts in file /tmp/hosts:
pdsh -a -x ^/tmp/hosts ...
Run on hosts node1-node100, excluding node50:
pdsh -w node[1-100] -x node50 ...
As an alternative to "-w ^file", and for backwards compatibility with DSH, a file containing a list of hosts to target may also be specified in the WCOLL environment variable, in which case pdsh behaves just as if it were called as
pdsh -w ^$WCOLL ...
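The WCOLL mechanism can be sketched as follows (the hosts file path, host names, and command are illustrative):

```shell
# Point WCOLL at a hosts file (one hostname per line); pdsh then targets
# those hosts without needing -w on every invocation.
printf '%s\n' node1 node2 > /tmp/hosts
export WCOLL=/tmp/hosts

pdsh uptime                  # behaves exactly as if called as:
pdsh -w ^/tmp/hosts uptime
```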
Additionally, there are many other pdsh modules that provide options for targeting remote hosts. These are documented in the Miscellaneous Modules page; examples include -a to target all hosts in a machines, genders, or dshgroups file, -g to target groups of hosts with genders, dshgroups, and netgroups, and -j to target hosts in SLURM or Torque/PBS jobs.
As described earlier, pdsh uses modules to implement and extend its core functionality. There are two basic kinds of modules used in pdsh -- "rcmd" modules, which implement the remote connection method pdsh uses to run commands, and "misc" modules, which implement various other pdsh functionality, such as node list generation and filtering.
The current list of loaded modules is printed in the pdsh -V output:
pdsh -V
pdsh-2.23 (+debug)
rcmd modules: ssh,rsh,mrsh,exec (default: mrsh)
misc modules: slurm,dshgroup,nodeupdown (*conflicting: genders)
[* To force-load a conflicting module, use the -M <name> option]
Note that some modules may be listed as conflicting with others. This is because these modules may provide additional command line options to pdsh, and if those options conflict, only one of the conflicting modules may be active at a time.
Detailed information about available modules may be viewed via the -L option:
> pdsh -L
8 modules loaded:

Module: misc/dshgroup
Author: Mark Grondona <firstname.lastname@example.org>
Descr: Read list of targets from dsh-style "group" files
Active: yes
Options:
-g groupname        target hosts in dsh group "groupname"
-X groupname        exclude hosts in dsh group "groupname"

Module: rcmd/exec
Author: Mark Grondona <email@example.com>
Descr: arbitrary command rcmd connect method
Active: yes

Module: misc/genders
Author: Jim Garlick <firstname.lastname@example.org>
Descr: target nodes using libgenders and genders attributes
Active: no
Options:
-g query,...        target nodes using genders query
-X query,...        exclude nodes using genders query
-F file             use alternate genders file `file'
-i                  request alternate or canonical hostnames if applicable
-a                  target all nodes except those with "pdsh_all_skip" attribute
-A                  target all nodes listed in genders database
...
This output shows the module name, author, a description, any options provided by the module, and whether the module is currently "active" or not.
The -M option may be used to force-load a list of modules before all others, ensuring that they will be active if there is a module conflict. In this way, for example, the genders module could be made active and the dshgroup module deactivated for one run of pdsh. This option may also be set via the PDSH_MISC_MODULES environment variable.
- -h
- Output a usage message and quit. A list of available rcmd modules will also be printed at the end of the usage message. The available options for pdsh may change based on which modules are loaded or passed to the -M option.
- -S
- Return the largest of the remote command return values.
- -b
- Batch mode. Disables the Ctrl-C status feature so that a single Ctrl-C kills pdsh.
- -l USER
- Run remote commands as user USER. The remote username may also be specified using the USER@TARGETS syntax with the -w option.
- -t SECONDS
- Set a connect timeout in seconds. Default is 10.
- -u SECONDS
- Set a remote command timeout. The default is unlimited.
- -f FANOUT
- Set the maximum number of simultaneous remote commands to FANOUT. The default is 32, but can be overridden at build time.
- -N
- Disable the hostname: prefix on lines of pdsh output.
- -V
- Output pdsh version information, along with a list of currently loaded modules, and exit.
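The sliding-window fanout controlled by -f can be mimicked locally with xargs -P, which caps the number of concurrent child processes the same way pdsh caps concurrent remote connections. This is only a local analogy; the host names are placeholders:

```shell
# Run at most 2 "connections" at a time, analogous to pdsh -f 2.
# xargs -P2 keeps a sliding window of 2 child processes, starting a new
# one as soon as a previous one finishes.
printf '%s\n' host1 host2 host3 host4 | xargs -n1 -P2 -I{} echo "connect {}"
```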
The cluster comes with a simple parallel shell named pdsh. The pdsh shell is handy for running commands across the cluster. See the man page, which describes the capabilities of pdsh in detail. One of the useful features is the capability of specifying all or a subset of the cluster.
pdsh -a <command> targets the <command> to all nodes of the cluster, including the master.
pdsh -a -x node00 <command> targets the <command> to all nodes of the cluster except the master.
pdsh -w node[01-08] <command> targets the <command> to the 8 nodes of the cluster named node01, node02, ..., node08.
Another utility that is useful for formatting the output of pdsh is dshbak. Here we will show some handy uses of pdsh.
- Show the current date and time on all nodes of the cluster.
pdsh -a date
- Show the current load and system uptime for all nodes of the cluster.
pdsh -a uptime
- Show all processes with the substring mpd in their name on the cluster.
pdsh -a ps augx | grep mpd
- Cleanup MPI files and sockets from all nodes in the system. This can be handy in removing leftover files from an earlier program or system crash.
pdsh -a mpdcleanup
- Remove all instances of pvm temporary files from the cluster. This can be handy in removing leftover files from an earlier program or system crash.
pdsh -a /bin/rm -f /tmp/pvm*
- The utility dshbak formats the output from pdsh by consolidating the output from each node. The option -c shows identical output from different nodes just once.
pdsh -a ls -l /etc/ntp | dshbak -c
Here is a sample output:

[amit@onyx amit]$ pdsh -a ls -l /etc/ntp | dshbak -c
----------------
ws[01-16]
----------------
total 16
-rw-r--r-- 1 root root 8 Jun 4 11:53 drift
-rw------- 1 root root 266 Jun 4 11:53 keys
-rw-r--r-- 1 root root 13 Jun 4 11:53 ntpservers
-rw-r--r-- 1 root root 13 Jun 4 11:53 step-tickers
----------------
ws00
----------------
total 16
-rw-r--r-- 1 ntp ntp 8 Sep 5 21:51 drift
-rw------- 1 ntp ntp 266 Feb 13 2003 keys
-rw-r--r-- 1 root root 58 Oct 3 2003 ntpservers
-rw-r--r-- 1 ntp ntp 23 Oct 3 2003 step-tickers
----------------
ws[17-32]
----------------
total 16
-rw-r--r-- 1 root root 8 May 27 13:31 drift
-rw------- 1 root root 266 May 27 13:31 keys
-rw-r--r-- 1 root root 13 May 27 13:31 ntpservers
-rw-r--r-- 1 root root 13 May 27 13:31 step-tickers
[amit@onyx amit]$
May 24, 2012 | radfest.wordpress.com
Ever have a multitude of hosts you need to run a command (or series of commands) on? We all know that for-loop outputs are super fun to parse through when you need to do this, but why not do it better with a tool like pdsh.
A respected ex-colleague of mine made a great suggestion to start using pdsh instead of for loops and other creative makeshift parallel shell processing. The majority of my notes in this blog post are from him. If he'll allow me to, I'll give him a shout out and cite his Google+ profile for anyone interested.
Pdsh is a parallel remote shell client available from SourceForge. If you are using the rpmforge CentOS repos, you can pick it up there as well, but it may not be the most bleeding-edge package available.
Pdsh lives on sourceforge, but the code is on google:
- Set up your environment:
- export PDSH_SSH_ARGS_APPEND="-o ConnectTimeout=5 -o CheckHostIP=no -o StrictHostKeyChecking=no" (Add this to your .bashrc to save time.)
Obviously if you have Puppet Enterprise fully integrated within your environment, you can take advantage of powerful tools such as mcollective. If you do not, pdsh is a great alternative.
- Create your target list in a text file, one hostname per line (in the examples below, this file is called "host-list").
- It would probably be a good idea to use "tee" to capture output.
- "man tee" if you need more information on tee.
- Run a test first to make sure your pdsh command works the way you think it will before potentially doing anything destructive:
- sudo pdsh -R ssh -w ^host-list "hostname" | tee output-test-1
- Change your test run to do what you really want it to after a successful test. e.g.:
- sudo pdsh -R ssh -w ^host-list "/usr/bin/mycmd args" | tee output-mycmd-run-1
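Putting the steps above together, a minimal sketch (the host names, output file names, and the final command are placeholders):

```shell
# 1. Target list: one hostname per line, with no trailing blank line.
printf '%s\n' host1 host2 > host-list

# 2. Dry run with a harmless command, capturing output with tee.
sudo pdsh -R ssh -w ^host-list 'hostname' | tee output-test-1

# 3. Once the dry run looks right, swap in the real command.
sudo pdsh -R ssh -w ^host-list '/usr/bin/mycmd args' | tee output-mycmd-run-1
```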
Linux PRO Issue 166/2014
Building and Installing pdsh
Building and installing pdsh is really simple if you've built code using GNU's autoconfigure before. The steps are quite easy:
./configure --with-ssh --without-rsh
make
make install
This puts the binaries into /usr/local/, which is fine for testing purposes. For production work, I would put it in /opt or something like that; just be sure it's in your path.
You might notice that I used the --without-rsh option in the configure command. By default, pdsh uses rsh, which is not really secure, so I chose to exclude it from the configuration. In the output in Listing 1, you can see the pdsh rcmd modules (rcmd is the remote command used by pdsh). Notice that the "available rcmd modules" at the end of the output lists only ssh and exec. If I didn't exclude rsh, it would be listed here, too, and it would be the default. To override rsh and make ssh the default, you just add the following line to your .bashrc file:
export PDSH_RCMD_TYPE=ssh

Listing 1: rcmd Modules
[laytonjb@home4 ~]$ pdsh -v
pdsh: invalid option -- 'v'
Usage: pdsh [-options] command ...
-S                return largest of remote command return values
-h                output usage menu and quit
-V                output version information and quit
-q                list the option settings and quit
-b                disable ^C status feature (batch mode)
-d                enable extra debug information from ^C status
-l user           execute remote commands as user
-t seconds        set connect timeout (default is 10 sec)
-u seconds        set command timeout (no default)
-f n              use fanout of n nodes
-w host,host,...  set target node list on command line
-x host,host,...  set node exclusion list on command line
-R name           set rcmd module to name
-M name,...       select one or more misc modules to initialize first
-N                disable hostname: labels on output lines
-L                list info on all loaded modules and exit
available rcmd modules: ssh,exec (default: ssh)
Be sure to "source" your .bashrc file (e.g., source .bashrc) to set the environment variable. You can also log out and log back in. If, for some reason, you see the following when you try running pdsh,
$ pdsh -w 192.168.1.250 ls -s
pdsh@home4: 192.168.1.250: rcmd: socket: Permission denied
then you have built it with rsh. You can either rebuild pdsh without rsh, or you can set the environment variable in your .bashrc file, or you can do both.
First pdsh Commands
To begin, I'll try to get the kernel version of a node by using its IP address:
$ pdsh -w 192.168.1.250 uname -r
192.168.1.250: 2.6.32-431.11.2.el6.x86_64
The -w option means I am specifying the node(s) that will run the command. In this case, I specified the IP address of the node (192.168.1.250). After the list of nodes, I add the command I want to run, which is uname -r in this case. Notice that pdsh starts the output line by identifying the node name.
If you need to mix rcmd modules in a single command, you can specify which module to use in the command line,
$ pdsh -w ssh:email@example.com uname -r
192.168.1.250: 2.6.32-431.11.2.el6.x86_64
by putting the rcmd module before the node name. In this case, I used ssh and typical ssh syntax.
A very common way of using pdsh is to set the environment variable WCOLL to point to the file that contains the list of hosts you want to use in the pdsh command. For example, I created a subdirectory PDSH where I created a file hosts that lists the hosts I want to use:
[laytonjb@home4 ~]$ mkdir PDSH
[laytonjb@home4 ~]$ cd PDSH
[laytonjb@home4 PDSH]$ vi hosts
[laytonjb@home4 PDSH]$ more hosts
192.168.1.4
192.168.1.250
I'm only using two nodes: 192.168.1.4 and 192.168.1.250. The first is my test system (like a cluster head node), and the second is my test compute node. You can put hosts in the file as you would on the command line, separated by commas. Be sure not to put a blank line at the end of the file, because pdsh will try to connect to it. You can set the WCOLL environment variable in your .bashrc file.
As before, you can source your .bashrc file, or you can log out and log back in.
I won't list all the several other ways to specify a list of nodes, because the pdsh website discusses virtually all of them; however, some of the methods are pretty handy. The simplest way to specify the nodes is on the command line with the -w option:
$ pdsh -w 192.168.1.4,192.168.1.250 uname -r
192.168.1.4: 2.6.32-431.17.1.el6.x86_64
192.168.1.250: 2.6.32-431.11.2.el6.x86_64
In this case, I specified the node names separated by commas. You can also use a range of hosts as follows:
pdsh -w host[1-11]
pdsh -w host[1-4,8-11]
In the first case, pdsh expands the host range to host1, host2, host3, ..., host11. In the second case, it expands the hosts similarly (host1, host2, host3, host4, host8, host9, host10, host11). You can go to the pdsh website for more information on hostlist expressions.
Another option is to have pdsh read the hosts from a file other than the one to which WCOLL points. The command shown in Listing 2 tells pdsh to take the hostnames from the file /tmp/hosts, which is listed after ^ (with no space between the "^" and the filename). You can also use several host files.
Listing 2: Read Hosts from File
[laytonjb@home4 ~]$ pdsh -w ^/tmp/hosts uptime
192.168.1.4: 15:51:39 up 8:35, 12 users, load average: 0.64, 0.38, 0.20
192.168.1.250: 15:47:53 up 2 min, 0 users, load average: 0.10, 0.10, 0.04
[laytonjb@home4 ~]$ more /tmp/hosts
192.168.1.4
192.168.1.250

$ more /tmp/hosts
192.168.1.4
$ more /tmp/hosts2
192.168.1.250
$ pdsh -w ^/tmp/hosts,^/tmp/hosts2 uname -r
192.168.1.4: 2.6.32-431.17.1.el6.x86_64
192.168.1.250: 2.6.32-431.11.2.el6.x86_64
or you can exclude hosts from a list:
$ pdsh -w -192.168.1.250 uname -r
192.168.1.4: 2.6.32-431.17.1.el6.x86_64
In this case, -w -192.168.1.250 excluded node 192.168.1.250 from the list and only output the information for 192.168.1.4. You can also exclude nodes using a node file:
$ pdsh -w -^/tmp/hosts2 uname -r
192.168.1.4: 2.6.32-431.17.1.el6.x86_64
In this case, /tmp/hosts2 contains 192.168.1.250, which isn't included in the output. Using the -x option with a hostname,
$ pdsh -x 192.168.1.4 uname -r
192.168.1.250: 2.6.32-431.11.2.el6.x86_64
$ pdsh -x ^/tmp/hosts uname -r
192.168.1.250: 2.6.32-431.11.2.el6.x86_64
$ more /tmp/hosts
192.168.1.4
or a list of hostnames to be excluded from the command to run also works.
More Useful pdsh Commands
Now I can shift into second gear and try some fancier pdsh tricks. First, I want to run a more complicated command on all of the nodes (Listing 3). Notice that I put the entire command in quotes. This means the entire command is run on each node, including the first (cat /proc/cpuinfo) and second (grep bogomips) parts.
Listing 3: Quotation Marks 1
[laytonjb@home4 ~]$ pdsh 'cat /proc/cpuinfo | grep bogomips'
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.250: bogomips : 5624.23
192.168.1.250: bogomips : 5624.23
192.168.1.250: bogomips : 5624.23
192.168.1.250: bogomips : 5624.23
In the output, the node precedes the command results, so you can tell what output is associated with which node. Notice that the BogoMips values are different on the two nodes, which is perfectly understandable because the systems are different. The first node has eight cores (four cores and four Hyper-Thread cores), and the second node has four cores.
You can use this command across a homogeneous cluster to make sure all the nodes are reporting back the same BogoMips value. If the cluster is truly homogeneous, this value should be the same. If it's not, then I would take the offending node out of production and check it.
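One way to automate that homogeneity check is to count distinct BogoMips values in the pdsh output; anything other than one distinct value flags an odd node. The pipeline below runs on canned sample lines standing in for real pdsh output (the node names and values are illustrative):

```shell
# Sample output standing in for: pdsh 'cat /proc/cpuinfo | grep bogomips'
sample='node1: bogomips : 6997.39
node2: bogomips : 6997.39
node3: bogomips : 5624.23'

# Extract the last field (the value) and count distinct values;
# a truly homogeneous cluster should print 1.
printf '%s\n' "$sample" | awk '{print $NF}' | sort -u | wc -l
```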
A slightly different command, shown in Listing 4, runs the first part contained in quotes, cat /proc/cpuinfo, on each node and the second part of the command, grep bogomips, on the node on which you issue the pdsh command.
Listing 4: Quotation Marks 2
[laytonjb@home4 ~]$ pdsh 'cat /proc/cpuinfo' | grep bogomips
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.4: bogomips : 6997.39
192.168.1.250: bogomips : 5624.23
192.168.1.250: bogomips : 5624.23
192.168.1.250: bogomips : 5624.23
192.168.1.250: bogomips : 5624.23
The point here is that you need to be careful on the command line. In this example, the differences are trivial, but other commands could have differences that might be difficult to notice.
One very important thing to note is that pdsh does not guarantee a return of output in any particular order. If you have a list of 20 nodes, the output does not necessarily start with node 1 and increase incrementally to node 20. For example, in Listing 5, I run vmstat on each node and get three lines of output from each node.
Listing 5: Order of Output
[laytonjb@home4 ~]$ pdsh vmstat 1 2
192.168.1.4: procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
192.168.1.4: r b swpd free buff cache si so bi bo in cs us sy id wa st
192.168.1.4: 1 0 0 30198704 286340 751652 0 0 2 3 48 66 1 0 98 0 0
192.168.1.250: procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
192.168.1.250: r b swpd free buff cache si so bi bo in cs us sy id wa st
192.168.1.250: 0 0 0 7248836 25632 79268 0 0 14 2 22 21 0 0 99 0 0
192.168.1.4: 1 0 0 30198100 286340 751668 0 0 0 0 412 735 1 0 99 0 0
192.168.1.250: 0 0 0 7249076 25632 79284 0 0 0 0 90 39 0 0 100 0 0
At first, it looks like the results from the first node are output first, but then the second node creeps in with its results. You need to expect that the output from a command that returns more than one line per node could be mixed. My best advice is to grab the output, put it into an editor, and rearrange the lines, remembering that the lines for any specific node are in the correct order.
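Rather than hand-editing, the interleaved output can be regrouped mechanically. dshbak is the standard tool for this; a stable sort on the hostname prefix achieves a similar grouping while preserving each host's own line order. The sample lines below stand in for real pdsh output:

```shell
# Interleaved output as it might arrive from pdsh:
mixed='192.168.1.4: r b swpd ...
192.168.1.250: r b swpd ...
192.168.1.4: 1 0 0 ...
192.168.1.250: 0 0 0 ...'

# Stable sort (-s) on the "host:" field groups lines per host while
# keeping each host's lines in arrival order.
printf '%s\n' "$mixed" | sort -t: -k1,1 -s
```

In a live pipeline this would be: pdsh vmstat 1 2 | sort -t: -k1,1 -s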
... ... ...
Jeff Layton has been in the HPC business for almost 25 years (starting when he was 4 years old). He can be found lounging around at a nearby Frys enjoying the coffee and waiting for sales.
How to enable pdsh in IBM Platform Cluster Manager 3.2
pdsh is a parallel shell cluster tool which was included in earlier versions of IBM Platform Cluster Manager, but is not distributed with version 3.2.
Follow the steps below to install pdsh in your IBM Platform PCM/HPC 3.2 cluster. If you encounter a problem with the steps below, you can open a service request with IBM Support. For pdsh usage issues, please refer to the pdsh man page or online documentation.
To install and setup pdsh, follow these steps:
The only prerequisite is that your cluster management node must have access to the internet in order to reach the EPEL software repository.
Red Hat Enterprise Linux Server release 6.2 (Santiago)
- Setup EPEL yum repository on your RHEL 6.2 Installer node
1.1 Grab a URL for epel-release package
You should confirm that your PCM installer is running RHEL 6.2 by checking the redhat-release file.
# cat /etc/redhat-release
Once confirmed, get the URL from this page:
1.2 Download the epel-release package on the PCM installer node
# wget http://fedora-epel.mirror.lstn.net/6/i386/epel-release-6-8.noarch.rpm
1.3 Install the epel-release package
# rpm -ivh epel-release-6-8.noarch.rpm
1.4 Enable the base EPEL repository
Modify the /etc/yum.repos.d/epel.repo file and make sure enabled=1 is set for the "epel" repository. Do not enable the "epel-debuginfo" and "epel-source" repositories.
1.5 Confirm that the EPEL repository is available via yum
# yum repolist
- Install PDSH
2.1 Use yum to install pdsh package
# yum -y install pdsh
# yum install pdsh-rcmd-rsh.x86_64
2.2 Confirm that pdsh is installed
# which pdsh
- Configure PDSH
3.1 Create machines file for pdsh
# mkdir /etc/pdsh
# touch /etc/pdsh/machines
# genconfig hostspdsh > /etc/pdsh/machines
3.2 Configure user environment for PDSH
Open the /etc/bashrc file and add the following lines at the end:
# setup pdsh for cluster users
export PDSH_RCMD_TYPE='ssh'
export WCOLL='/etc/pdsh/machines'
- Use PDSH
Now, pdsh is set up for cluster users similar to previous versions of IBM Platform HPC. To use pdsh, simply run the 'pdsh' command:
[user@master ~]$ pdsh
compute000-eth0: 14:45:21 up 24 min, 1 user, load average: 0.00, 0.08, 0.27
Aug 4, 2013 | linuxtoolkit.blogspot.in
4. Configure user environment for PDSH
# vim /etc/profile.d
Edit the following:
# setup pdsh for cluster users
export PDSH_RCMD_TYPE='ssh'
export WCOLL='/etc/pdsh/machines'
5. Put the host names of the Compute Nodes
# vim /etc/pdsh/machines/
node1
node2
node3
.......
6. Make sure the nodes have exchanged SSH keys. For more information, see Auto SSH Login without Password.
7. Do Install Step 1 to Step 3 on ALL the client nodes.
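For step 6, the usual approach is ssh-keygen plus ssh-copy-id run from the management node. This is a generic sketch, not PCM-specific tooling; the node names and remote user are illustrative:

```shell
# Create a passphrase-less key pair once (skip if one already exists),
# then push the public key to each compute node so pdsh's ssh module
# can log in without prompting.
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N '' -f "$HOME/.ssh/id_rsa"
for h in node1 node2 node3; do
    ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "root@$h"
done
```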
B. USING PDSH
Run the command (pdsh [options]... command)
1. To target all the nodes found in /etc/pdsh/machines. This assumes the files are transferred already; do note that a parallel copy utility (pdcp) comes with pdsh.
# pdsh -a "rpm -Uvh /root/htop-1.0.2-1.el6.rf.x86_64.rpm"
2. To target specific nodes, you may want to consider using the -w option:
# pdsh -w host1,host2 "rpm -Uvh /root/htop-1.0.2-1.el6.rf.x86_64.rpm"

References
- Install and setup pdsh on IBM Platform Cluster Manager
- PDSH Project Site
- PDSH Download Site (Sourceforge)
IBM Install and setup pdsh on IBM Platform Cluster Manager - United States
pdsh(1) - Linux man page
Ubuntu Manpage pdsh - issue commands to groups of hosts in parallel
Shell Games - Linux Magazine, by Jeff Layton
- DSH: http://www.netfort.gr.jp/~dancer/software/dsh.html.en
- PyDSH: http://pydsh.sourceforge.net/
- PPSS: http://code.google.com/p/ppss/
- PSSH: http://code.google.com/p/parallel-ssh/
- pdsh: http://sourceforge.net/projects/pdsh
- PuSSH: http://pussh.sourceforge.net/
- sshpt: http://code.google.com/p/sshpt/
- mqsh: https://github.com/aia/mqsh
- Using pdsh: https://code.google.com/p/pdsh/wiki/UsingPDSH
- Hostlist expressions: https://code.google.com/p/pdsh/wiki/HostListExpressions
- Processor and memory metrics: http://www.admin-magazine.com/HPC/Articles/Processor-and-Memory-Metrics
- Process, network, and disk-metrics: http://www.admin-magazine.com/HPC/Articles/Process-Network-and-Disk-Metrics
- pdsh modules: https://code.google.com/p/pdsh/wiki/MiscModules
Parallel Shell http://cs.boisestate.edu
pdsh - Parallel Distributed Shell - Google Project Hosting
Linux at Livermore Pdsh
Parallel Distributed Shell SourceForge.net
Novell openSUSE 10.3 pdsh
Linux Toolkits Installing pdsh to issue commands to a group of nodes in parallel in CentOS
Using the Parallel Shell Utility (pdsh) - Building Hadoop Clusters
Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.
Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
Last modified: September 12, 2017