
Testing the OpenMPI environment on SGE



There are two relatively frequent problems with running MPI jobs under SGE: the ulimit (locked memory) problem with InfiniBand, and the related "MPI startup(): RLIMIT_MEMLOCK too small" error.

To run an OpenMPI job under SGE you need to meet at least the following preconditions:

  1. The SGE parallel environment should be configured correctly. For MPI you should set "job_is_first_task FALSE" and "control_slaves TRUE".
  2. All nodes should be accessible via passwordless ssh login from the head node.
  3. OpenMPI should be compiled with the "--with-sge" command line switch. SGE support is not enabled by default: you need to request it explicitly by passing "--with-sge" to configure. If you forget to do this, the errors you get will contain no hint of this root cause; OpenMPI typically just complains about its inability to start a daemon on a slave node. In the simplest case "--with-sge" can be the only parameter. For example:
    ./configure --with-sge
  4. Intel compiler libraries and OpenMPI libraries should be made available on all computational nodes.
  5. The environment should be correct and identical on all computational nodes, including such variables as PATH and LD_LIBRARY_PATH (this can be achieved by modifying the .bashrc file of the particular application account, for example vasp). Forgetting to provide the necessary values can lead to errors in loading the required libraries, but the error message often has no visible relation to the root cause, so check this prerequisite especially attentively.
  6. You should create a correct SGE submit script. Please note that the submit script sets up the environment only on the first node, on which it runs; it does not create the environment on the other computational nodes.
  7. The ulimit for locked memory should be set correctly on all computational nodes.
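The most error-prone of these preconditions can be sanity-checked from the shell. Below is a minimal sketch (each check degrades gracefully if a tool is absent, so it can be run as-is on any node):

```shell
#!/bin/bash
# Sanity checks for the preconditions above. Run on the head node and,
# ideally, on each computational node as well.

# Precondition 3: was Open MPI built with SGE support? If so, ompi_info
# lists "gridengine" MCA components.
if command -v ompi_info >/dev/null 2>&1; then
    ompi_info | grep -i gridengine \
        || echo "WARNING: no gridengine components -- rebuild with --with-sge"
else
    echo "NOTE: ompi_info not in PATH -- check precondition 5 (PATH)"
fi

# Precondition 5: the variables that most often break library loading.
echo "PATH=$PATH"
echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-<unset>}"

# Precondition 7: locked-memory limit; InfiniBand needs this large,
# ideally "unlimited".
memlock=$(ulimit -l)
echo "memlock limit: $memlock"
[ "$memlock" = "unlimited" ] || echo "WARNING: memlock is not unlimited"
```

Running the same script over passwordless ssh to each node also exercises precondition 2 and verifies that the remote environment matches the local one.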

An example of setting up a parallel environment (PE) under SGE (often called "orte"), which tells SGE how to run MPI codes using Open MPI libraries, is provided in SGE Parallel Environment
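For orientation, the admin-side qconf commands involved can be sketched as follows. The PE name "orte" and queue "all.q" are illustrative, SGE manager privileges are required, and the snippet is guarded so that it only prints a note on hosts without SGE:

```shell
# Sketch of the admin-side setup for a tight-integration PE.
if command -v qconf >/dev/null 2>&1; then
    qconf -sp orte                                 # show the PE definition
    qconf -mattr queue pe_list "make orte" all.q   # attach the PE to all.q
    qconf -sq all.q | grep pe_list                 # verify the queue lists it
else
    echo "qconf not found -- not an SGE host"
fi
```

In the PE definition itself, "control_slaves TRUE" and "job_is_first_task FALSE" are the settings that matter for tight Open MPI integration.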

SGE support is not enabled by default. For Open MPI v1.3 and later, you need to explicitly request the SGE support during compilation with the "--with-sge" command line switch to the Open MPI configure script.

For example:

./configure --with-sge

SGE will wrap your job and start the child processes for you; you just need to build your code and create a submit script that embeds SGE queue options as well as shell commands for setting up the environment and invoking mpirun.

Regarding the basics of MPI parallel programming, please check out online tutorials such as the one from Lawrence Livermore National Laboratory (LLNL).

To determine whether the SGE parallel job is successfully launched to the remote nodes, you can pass in the MCA parameter "--mca plm_base_verbose 1" to mpirun.

Various SGE documentation, with pointers to more, is available at the Son of GridEngine site, and configuration instructions can be found at the Son of GridEngine configuration how-to site.

Submit Script

A typical submit script would look like this:

#$ -cwd
##$ -j y
#$ -S /bin/bash
#$ -M [email protected]
#$ -pe orte 8
#$ -o output/hello.out
#$ -e output/hello.err
# Use modules to setup the runtime environment
. /etc/profile
module load compilers/intel/11.1
module load mpi/openmpi/1.4.2/intel 
# Execute the run
mpirun -np $NSLOTS ./hello

Note that this script will need to be modified to match your environment and your cluster. In particular, check the module names, the name of the parallel environment (here "orte") and its slot count, and the notification email address.

SGE Options

Here is a description of the options embedded in the script header:

option           explanation
-cwd             Run in the current working directory
-j y             Stdout and stderr in the same output file
-S /bin/bash     Use the bash shell
-M username@...  Mail notifications to this email address
-pe orte 8       Use the parallel environment named "orte" for 8 MPI processes
-o filename      Output file for stdout
-e filename      File for stderr
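The presence of these header options can be checked mechanically before submission. A minimal sketch (the file name hello.sub and the cut-down script body are illustrative):

```shell
#!/bin/bash
# Write a cut-down copy of the submit script and verify that the key
# "#$" directives are present before handing it to qsub.
cat > hello.sub <<'EOF'
#$ -cwd
#$ -S /bin/bash
#$ -pe orte 8
mpirun -np $NSLOTS ./hello
EOF

for directive in '-cwd' '-S /bin/bash' '-pe'; do
    if grep -q "^#\$ $directive" hello.sub; then
        echo "found: $directive"
    else
        echo "MISSING: $directive"
    fi
done
```

The same grep loop can be pointed at any real submit script; a MISSING line means qsub would silently run the job without that option.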


To submit the job to SGE, we use the "qsub" command:

parrott@hpc mpi> qsub
Your job 6 ("") has been submitted
and we can immediately see that the job has been queued:

parrott@hpc mpi> qstat -u "*"

job-ID  prior   name       user      state submit/start at     queue     slots ja-task-ID
-----------------------------------------------------------------------------------------
      6 0.00000 parrott    qw    04/24/2008 12:20:22

Once queued, it will start running, and its state changes from "qw" to "r":

parrott@hpc mpi> qstat

job-ID  prior   name       user      state submit/start at     queue                     slots ja-task-ID
---------------------------------------------------------------------------------------------------------
      6 0.55500 parrott   r     04/24/2008 12:20:26 all.q@compute-1-1.local    8

And when complete, it will leave the queue. The output files should appear in the current directory, in this case named ".o6" and ".po6", since the job ID was "6".

You can see the status of the available compute nodes using "qhost":

parrott@hpc mpi> qhost

HOSTNAME                ARCH        NCPU  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
-------------------------------------------------------------------------------
global                  -               -     -       -       -       -       -
compute-1-0             lx26-amd64      8  5.10   15.7G    4.2G  996.2M     0.0
compute-1-1             lx26-amd64      8  0.02   15.7G  192.2M  996.2M     0.0
...
compute-1-8             lx26-amd64      8  8.05   15.7G    9.3G  996.2M     0.0
compute-1-9             lx26-amd64      8  8.11   15.7G    9.5G  996.2M     0.0

Old News ;-)

OpenMPI Usage Instructions -- Penn State

Running OpenMPI Executables Through PBS

MPI executables need to be launched with the command mpirun. The mpirun command takes care of starting up all of the parallel processes in the MPI job. The following is an example PBS script for running an OpenMPI job across four processors. Note that the proper OpenMPI version needs to be loaded before running the job.

#PBS -l nodes=4:ppn=1
#PBS -l walltime=2:00:00
#PBS -j oe


module load openmpi/gnu
mpirun ./hellompi
More information on the module command and PBS can be found in their respective User Guides.

OpenMPI - UABgrid Documentation

Open MPI is a Message Passing Interface (MPI) library project combining technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI). It is used by many TOP500 supercomputers including Roadrunner, which was the world's fastest supercomputer from June 2008 to November 2009, and K computer, the fastest supercomputer since June 2011. The Open MPI Project is an open source MPI-2 implementation that is developed and maintained by a consortium of academic, research, and industry partners.

Project website:

Load SGE module

The following Modules files should be loaded for this package:

For GNU:

module load openmpi/openmpi-gnu

For Intel:

module load openmpi/openmpi-intel

Parallel Environment

Use the openmpi parallel environment in your job script (example for a 4 slot job)

#$ -pe openmpi 4

Submit Script

To enable verbose Grid Engine logging for OpenMPI, add the following to the mpirun command in the job script: --mca pls_gridengine_verbose 1. For example:

#$ -S /bin/bash
#$ -cwd
#$ -N j_openmpi_hello
#$ -pe openmpi 4
#$ -l h_rt=00:20:00,s_rt=0:18:00
#$ -j y
#$ -M [email protected]
#$ -m eas
# Load the appropriate module files
. /etc/profile.d/
module load openmpi/openmpi-gnu

#$ -V

mpirun --mca pls_gridengine_verbose 1 -np $NSLOTS hello_world_gnu_openmpi

openmpi - Submitting Open MPI jobs to SGE - Stack Overflow

I've installed openmpi, not in /usr/... but in /commun/data/packages/openmpi/; it was compiled with --with-sge.

I've added a new PE in SGE as described in:

# /commun/data/packages/openmpi/bin/ompi_info | grep gridengine
MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.3)

# qconf -sq all.q | grep pe_
pe_list               make orte

Without SGE, the program runs without any problem, using several processors.

/commun/data/packages/openmpi/bin/orterun -np 20 ./a.out args

Now I want to submit my program to SGE

In the Open MPI FAQ, I read:

# Allocate a SGE interactive job with 4 slots
# from a parallel environment (PE) named 'orte'
shell$ qsh -pe orte 4

but my output is:

qsh -pe orte 4
Your job 84550 ("INTERACTIVE") has been submitted
waiting for interactive job to be scheduled ...
Could not start interactive job.

I've also tried the mpirun command embedded in a script:

$ cat
/commun/data/packages/openmpi/bin/mpirun  \
    /path/to/a.out args

but it fails

$ cat
error: executing task of job 84552 failed: execution daemon on host "node02" didn't accept task
A daemon (pid 18327) died unexpectedly with status 1 while attempting
to launch so we are aborting.

There may be more information reported by the environment (see above).

This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
error: executing task of job 84552 failed: execution daemon on host "node01" didn't accept task
mpirun noticed that the job aborted, but has no info as to the process
that caused that situation.

How can I fix this?

answer in the openmpi mailing list:

=== answer ===


In my case setting "job_is_first_task FALSE" and "control_slaves TRUE" solved the problem.
# qconf -mp mpi1 

pe_name            mpi1
slots              9
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE
urgency_slots      min
accounting_summary FALSE

Copyright © 1996-2021 by Softpanorama Society. was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

Last modified: March, 12, 2019