
PBS User Guide


At the user level, PBS Pro and Torque are quite similar, and submission scripts are usually compatible. Former SGE users can also adapt quickly to either scheduler.

To check the status of the scheduler itself, as well as of jobs and queues, use the qstat commands described in the reference section below.

To submit a job to the queues, you need to create a command file (also called a script).   

PBS job attributes can be set in two ways: as command-line arguments to qsub, or as pseudo-comments (#PBS directives) in the submission script. The most common are -l and -m (an example showing both styles follows this list):

-l comma-separated list of required resources (e.g. nodes=2,cput=00:10:00). This attribute defines the resources that are required by the job and establishes a limit on the amount of resources that can be consumed. If it is not set for a generally available resource, the PBS scheduler uses default values set by the system administrator. For example
 
#PBS -l mem=32gb,nodes=4:ppn=1,walltime=1:00:00
requests 32 gigabytes of memory, four nodes with one core each, and 1 hour of wall clock time (wall clock time is the time from when the job starts to when it completes).

-N name  Declares a name for the job

-o [hostname:]pathname Defines the path to be used for the standard output (STDOUT) stream of the batch job.

-e [hostname:]pathname Defines the path to be used for the standard error (STDERR) stream of the batch job.

-p integer between -1024 and +1023. Defines the priority of the job. Higher values correspond to higher priorities.

-q queue -- defines the queue that the job will use
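
As a minimal sketch, the same attributes can be given either way (the job name myjob, queue name workq, and script name job.pbs are illustrative placeholders):

# attributes on the command line:
qsub -N myjob -q workq -l nodes=1:ppn=2,walltime=2:00:00 job.pbs

# or, equivalently, as pseudo-comments at the top of job.pbs:
#PBS -N myjob
#PBS -q workq
#PBS -l nodes=1:ppn=2,walltime=2:00:00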

Here we will use the command file called test.pbs, which you can submit using the command:

qsub test.pbs

Here is the content of the file:

#!/bin/bash
#PBS -l mem=1gb,nodes=1:ppn=1,walltime=1:00:00
#PBS -m abe -M your-email-address
module add pathscale
cd work-directory
./my_script

PBS arguments specify wall time, number of cores, memory, and more. These directives start with #PBS.

Units to specify memory include kb (kilobytes), mb (megabytes) and gb (gigabytes). The amount must be an integer.

The nodes resource list item (i.e. the node configuration) declares the node requirements for a job. It is a string of individual node specifications separated by plus signs (+). For example, 3+2:fast requests 3 plain nodes and 2 "fast" nodes. A node specification is generally one of the following: the name of a particular node in the cluster, a node with a particular set of properties (e.g. fast or compute), or a number of nodes.

The "nodes=1:ppn=1" requests one core. This means you are requesting one node and one core per node. The ppn notation is historic and means processors per node; it dates from a time when every processor had one core.

The line #PBS -m abe is optional. It specifies notification options. You can use any combination of a, b, and e. The meanings are as follows:

a  mail is sent when the job is aborted by the batch system.
b  mail is sent when the job begins execution.
e  mail is sent when the job terminates.

The third part of the command file consists of the commands that you want to run: typically the module to load, then the actual command to execute your code.

Notes:

1. PBS starts execution of the script in your home directory. You need to add a cd command to the script if you want it to run in a different directory.

2. Be sure the last line of the pbs file ends with a newline (i.e. the file has a blank line at the end).

Requesting Specific Nodes

Some software is licensed only on specific nodes. To request the nodes on which it is licensed:

#PBS -l mem=1gb,nodes=1:ppn=1:aimall,walltime=1:00:00

You may also request the type of node.  If you don't request a type of node, your job will run on any available core.  To see the list of nodes and their types, type

  pbsnodes -a

The "properties" column shows the types of nodes that you can request. To specify 4 HPintel8 nodes using 8 cores per node with 32 GB of total memory:

#PBS -l mem=32gb,nodes=4:ppn=8:HPintel8,walltime=1:00:00

Environment variables

PBS_O_WORKDIR contains the directory you were in when you executed the qsub command.
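
Its typical use, as in the scripts later in this guide, is at the top of a submission script to return to the directory from which the job was submitted:

cd $PBS_O_WORKDIR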

Most common PBS commands

qsub: submits PBS job

For more details see HPC/PBS_and_derivatives/Reference/qsub.shtml

typical usage:

qsub [-A account] script

To submit a batch job to the specified queue using a script:

qsub -q queue_name job_script 

When queue_name is omitted, the job is routed to the default queue.
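
For instance, to charge a job to a project account and route it to a specific queue (the account name myproject and queue name workq are placeholders):

qsub -A myproject -q workq test.pbs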

To submit an interactive PBS job:

qsub -I -q queue_name -l resource_list 

No job_script should be included when submitting an interactive PBS job.

The resource_list typically specifies the number of nodes, CPUs, amount of memory and wall time needed for this job. The following example shows a request for a single Pleiades Ivy Bridge node and a wall time of 2 hours.

% qsub -I -l select=1:ncpus=20:model=ivy,walltime=2:00:00

See man pbs_resources for more information on what resources can be specified.

Note: If -l resource_list is omitted, the default resources for the specified queue are used. When queue_name is omitted, the job is routed to the default queue.

qdel: deletes PBS job

typical usage:

qdel job_identifier

qstat: shows the status of PBS jobs and queues

qstat -f job_identifier
qstat -Q queue_name
qstat -q queue_name
qstat -fQ queue_name 

Each option uses a different format to display all of the queues available, and their constraints and status. The queue_name is optional.

To display job status, use the following options (a combined example follows this list):

-a
Display all jobs in any status (running, queued, held)
-r
Display all running or suspended jobs
-n
Display the execution hosts of the running jobs
-i
Display all queued, held or waiting jobs
-u username
Display jobs that belong to the specified user
-s
Display any comment added by the administrator or scheduler. This option is typically used to find clues of why a job has not started running.
-f job_id
Display detailed information about a specific job
-xf job_id
-xu user_id
Display status information for finished jobs (within the past 7 days). This option is only available in the newer version of PBS, which has been installed on Pleiades and Endeavour.
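
These options can be combined. For example, to list your running jobs together with their execution hosts (the username alice is a placeholder):

qstat -rn -u alice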

pbsnodes: shows the list and detailed information for all or selected nodes

typical usage:

pbsnodes -a

To list the count of unique nodes along with the properties of those nodes:

pbsnodes -a | grep properties | sort | uniq -c

Other commands

mybalance -h: shows the number of computational core hours used and how many core hours have been allocated to the project

typical usage:

mybalance -h

myqueue: shows all running and queued jobs for a user

typical usage:

myqueue

myquota: show disk quotas for all projects in which the user is a member

typical usage:

myquota

showstart: shows the approximate starting time of the job

typical usage:

showstart job_identifier

checkjob: shows the current status of the job

typical  usage:

checkjob job_identifier

checknode: shows the status of the node

typical usage:

checknode node_identifier

 

pbsdsh: distributes tasks to nodes under PBS

typical usage:

pbsdsh executable [args]

This command copies inputfile into the /tmp directory of each node:

pbsdsh cp inputfile /tmp
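
A minimal sketch of using pbsdsh inside a submission script to stage a file onto node-local /tmp before a run and clean up afterwards (the file name and resource requests are illustrative; by default pbsdsh typically runs the command once per allocated core):

#!/bin/bash
#PBS -l nodes=2:ppn=4,walltime=0:30:00
cd $PBS_O_WORKDIR
# copy the input from the shared submission directory to node-local /tmp
pbsdsh cp $PBS_O_WORKDIR/inputfile /tmp
# ... run the computation here ...
# remove the staged copies when done
pbsdsh rm -f /tmp/inputfile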

 



Old News ;-)

Torque User Guide

How to use the queues

The Scorpio Linux Cluster is using Torque as the queuing system. Torque uses the same commands and has the same structure as PBS, another common system. Along with Torque, we use the Moab job scheduler.

To use torque, first load the torque module (module load torque).

To submit a job to the queues, you need to create a command file (also called a script). Here we will use the command file called test.pbs.

To submit your job:

qsub test.pbs

Here is a basic command file:

#!/bin/bash
#PBS -l mem=1gb,nodes=1:ppn=1,walltime=1:00:00
#PBS -m abe -M your-email-address

module add pathscale
cd work-directory

./my_script

Note: Be sure there is a blank line at the end of your pbs file

This file consists of three parts.

1. The shell to use. In this case

#!/bin/bash

means to use bash. You can also use tcsh or sh.

2. PBS arguments to specify wall-time, number of cores, memory and more. These commands start with #PBS. The command

#PBS -l mem=1gb,nodes=1:ppn=1,walltime=1:00:00

requests 1 gigabyte of memory, 1 core, and 1 hour of wall clock time (wall clock time is the time from when the job starts to when it completes).

Units to specify memory include kb (kilobytes), mb (megabytes) and gb (gigabytes). The amount must be an integer.

The "nodes=1:ppn=1" requests one core. This means you are requesting one node and one core per node. The ppn notation is historic and means processors per node; it dates from a time when every processor had one core. See the section on Parallel Jobs for instructions on how to request multiple cores.

The line #PBS -m abe is not required. It specifies notification options.
You can use any combination of a, b, and e. The meanings are as follows:

a mail is sent when the job is aborted by the batch system.
b mail is sent when the job begins execution.
e mail is sent when the job terminates.

3. The third part of the command file consists of commands that you want to use. Typically, the module to load, then the actual command to execute the code you want to run.

Notes:

1. PBS starts from your home directory. It is usually good to add a cd command that takes you to the directory where your files are.

2. Be sure the last line of the pbs file ends with a newline (i.e. the file has a blank line at the end).

Requesting Specific Nodes

Some software, such as AIMAll, is licensed only on specific nodes. To request the nodes on which AIMAll is licensed:

#PBS -l mem=1gb,nodes=1:ppn=1:aimall,walltime=1:00:00

You may also request the type of node. If you don't request a type of node, your job will run on any available core. To see the type of nodes, type

pbshosts

The "properties" column shows the types of nodes that you can request

To specify 4 HPintel8 nodes using 8 cores per node with 32 GB of total memory:

#PBS -l mem=32gb,nodes=4:ppn=8:HPintel8,walltime=1:00:00

Parallel Jobs

To request more than one core, you can request nodes and cores per node (recommended), or just request the total number of cores. A complete example script follows the notes below.

a) To request 4 cores, on a single node:

#!/bin/tcsh
#PBS -l pmem=4gb,nodes=1:ppn=4,walltime=1:00:00
#PBS -m abe -M your-email-address

b) To request 8 cores, on two nodes:

#!/bin/tcsh
#PBS -l pmem=1gb,nodes=2:ppn=4,walltime=1:00:00
#PBS -m abe -M your-email-address

Notes:

  1. In this example, we used pmem, not mem. This requests memory per process; you can also use mem, which is the total memory requested.
  2. It is best to specify either intel8 or HPintel8 when running multiprocessor jobs on multiple nodes. This will give you nodes with the same processor speed.
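
Putting the pieces together, here is a sketch of a complete two-node MPI job script (the program name is a placeholder, and the exact mpirun flags vary between MPI implementations and sites):

#!/bin/bash
#PBS -l pmem=1gb,nodes=2:ppn=4,walltime=1:00:00
#PBS -m abe -M your-email-address
cd $PBS_O_WORKDIR
# $PBS_NODEFILE lists one line per allocated core; 2 nodes x 4 ppn = 8 processes
mpirun -np 8 -machinefile $PBS_NODEFILE ./my_mpi_prog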

Optional Job Control Parameters

The following parameters are optional. If you don't specify them, torque will use default names.

a) To name your job, add:

#PBS -N your-choice-of-job-name

b) To specify the name of the output file (where the standard output is sent), add:

#PBS -o output-file-name

This will send the output to output-file-name in the directory from which the job was submitted. If you want the output file to go to another directory, use the complete path to the file.

#PBS -o complete-path/output-file-name

c) To specify the name of the error file, use -e. As with the output file, you can use just a file name or give a complete path:

#PBS -e error-file-name

Environment Variables

You can use the environment variable PBS_JOBID in your script. For example

#PBS -o output.$PBS_JOBID

will send the output to a file named output.Job-Number.nas-1-0.local.

Other useful commands

To delete a job:

qdel job-number

To see the status of the queues, use the Moab command

showq

To check the status of your job, use the Moab command

checkjob -v job-number

To see the number of cores per node, and the number of free cores,

pbshosts

or to see information about only nodes with free cores

pbshosts -f

Note: This command is not part of the Torque distribution.

The Torque job scheduler

This guide describes basic job submission and monitoring for Torque, along with some more advanced topics.


Preparing a submission script

A submission script is a shell script that describes the processing to be carried out and requests the computing resources (nodes, cores, wall time) needed to run the job.

Suppose we want to run a molecular dynamics MPI application called foo that uses 32 MPI processes, runs for up to 100 hours, and notifies us by email when it starts, ends or aborts.

Assuming the number of cores available on each cluster node is 16, a total of 2 nodes are required to map one MPI process to each physical core. Supposing no input needs to be specified, the following submission script runs the application in a single job:

#!/bin/bash

# set the number of nodes and processes per node
#PBS -l nodes=2:ppn=16

# set max wallclock time
#PBS -l walltime=100:00:00

# set name of job
#PBS -N protein123

# mail alert at start, end and abort of execution
#PBS -m bea

# send mail to this address
#PBS -M [email protected]

# use submission environment
#PBS -V

# start job from the directory it was submitted
cd $PBS_O_WORKDIR

# define MPI host details (ARC specific script)
. enable_arcus_mpi.sh

# run through the mpirun launcher
mpirun $MPI_HOSTS foo

The script starts with #!/bin/bash (also called a shebang), which makes the submission script also a Linux bash script.

The script continues with a series of lines starting with #. For bash scripts these are all comments and are ignored. For Torque, the lines starting with #PBS are directives requesting job scheduling resources. (NB: it's very important that you put all the directives at the top of a script, before any other commands; any #PBS directive coming after a bash script command is ignored!)

The final part of a script is normal Linux bash scripting and describes the set of operations to follow as part of the job. In this case, this involves running the MPI-based application foo through the MPI utility mpirun.

The resource request #PBS -l nodes=n:ppn=m is the most important directive in a submission script. The first part (nodes=n) is imperative and determines how many compute nodes the scheduler allocates to the job. The second part (ppn=m) is used by the scheduler to prepare the environment for an MPI parallel run with m processes on each compute node (e.g. writing a hostfile for the job, pointed to by $PBS_NODEFILE). However, it is up to the user and the submission script to use the environment generated from ppn adequately. In the example above, this is done by first sourcing enable_arcus_mpi.sh (which uses $PBS_NODEFILE to prepare the variable $MPI_HOSTS) and then running the application through mpirun.

A note of caution concerns threaded single-process applications (e.g. Gaussian and Matlab). They cannot run on more than a single compute node; allocating more (e.g. #PBS -l nodes=2) will end up with the first node being used and the rest idle. Moreover, since ppn has no automatic effect on such runs, the only relevant resource scheduling request for single-process applications remains #PBS -l nodes=1. This gives a job user-exclusive access to a single compute node, allowing the application to use all available cores and physical memory on the node; these vary from system to system, see the table below.

Arcus:   16 cores, 64 GB RAM per node
Caribou: 16 cores, 128 GB RAM per node

Examples of Torque submission scripts are given here for some of the more popular applications.


PBS job submission directives

Directives are job specific requirements given to the job scheduler.

The most important directives are those that request resources. The most common are the wallclock time limit (the maximum time the job is allowed to run) and the number of processors required to run the job. For example, to run an MPI job with 32 processes for up to 100 hours on a cluster with 16 cores per compute node, the PBS directives are

#PBS -l walltime=100:00:00
#PBS -l nodes=2:ppn=16

A job submitted with these requests runs for 100 hours at most; after this limit expires, the job is terminated regardless of whether the processing finished or not. Normally, the wallclock time should be conservative, allowing the job to finish normally (and terminate) before the limit is reached.

Also, the job is allocated two compute nodes (nodes=2) and each node is scheduled to run 16 MPI processes (ppn=16). (ppn is an abbreviation of Processes Per Node.) It is the task of the user to instruct mpirun to use this allocation appropriately, i.e. to start 32 processes which are mapped to the 32 cores available for the job. More information on how to run MPI applications can be found in this guide.


Submitting jobs with the command qsub

Supposing you already have a submission script ready (call it submit.sh), the job is submitted to the execution queue with the command qsub submit.sh. The queueing system prints a number (the job id) almost immediately and returns control to the Linux prompt. At this point the job is already in the submission queue.

Once you have submitted the job it will sit in a pending queue for some time (how long depends on the demands of your job and the demand on the service). You can monitor the progress of the job using the command qstat.

Once the job has run you will see files with names like "job.e1234" and "job.o1234", either in your home directory or in the directory you submitted the job from (depending on how your job submission script is written). The ".o" file contains the "standard output", which is essentially what the application you ran would normally have printed onto the screen. The ".e" file contains any error messages issued by the application; on a correct execution without errors, this file can be empty.

Read all the options for qsub on the Linux manual using the command man qsub.


Monitoring jobs with the command qstat

qstat is the main command for monitoring the state of systems, groups of jobs or individual jobs. The simple qstat command gives a list of jobs which looks something like this:

Job id            Name             User              Time Use S Queue
----------------  ---------------- ----------------  -------- - -----
1121.headnode1    jobName1         bob               15:45:05 R priorityq       
1152.headnode1    jobName2         mary              12:40:56 R workq       
1226.headnode1    jobName3         steve                    0 Q workq

The first column gives the job ID, the second the name of the job (specified by the user in the submission script) and the third the owner of the job. The fourth column gives the elapsed time for each particular job. The fifth column is the status of the job (R=running, Q=waiting, E=exiting, H=held, S=suspended). The last column is the queue for the job (a job scheduler can manage different queues serving different purposes).

qstat has a number of other useful options; read about all of them in the Linux manual using the command man qstat.


Deleting jobs with the command qdel

Use the qdel command to delete a job, e.g. qdel 1121 to delete the job with id 1121. A user can delete their own jobs at any time, whether the job is pending (waiting in the queue) or running. A user cannot delete the jobs of another user. Normally, there is a (small) delay between the execution of the qdel command and the time when the job is dequeued and killed. Occasionally a job may not delete properly, in which case the ARC support team can delete it.


Environment variables

At the time a job is launched into execution, Torque defines multiple environment variables, which can be used from within the submission script to define the correct workflow of the job. The most useful of these environment variables are the following:

PBS_O_WORKDIR is typically used at the beginning of a script to go to the directory where the qsub command was issued, which is frequently also the directory containing the input data for the job, etc. The typical use is

cd $PBS_O_WORKDIR

inside a submission script.

PBS_NODEFILE is typically used to define the environment for the parallel run, for mpirun in particular. Normally, this usage is hidden from users inside a script (e.g. enable_arcus_mpi.sh), which defines the environment for the user.

PBS_JOBID is useful to tag job specific files and directories, typically output files or run directories. For instance, the submission script line

myApp > $PBS_JOBID.out

runs the application myApp and redirects the standard output to a file whose name is given by the job id. (NB: the job id is a number assigned by Torque and differs from the character string name given to the job in the submission script by the user.)

TMPDIR is the name of a scratch disk directory unique to the job. The scratch disk space typically has faster access than the disk space where the user home and data areas reside and benefits applications that have a sustained and large amount of I/O. Such a job normally involves copying the input files to the scratch space, running the application on scratch and copying the results to the submission directory. This usage is discussed in a separate section.
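
A minimal sketch of that pattern, using the placeholder application myApp and file name input.dat:

# stage in, run on scratch, stage out
cd $TMPDIR
cp $PBS_O_WORKDIR/input.dat .
myApp < input.dat > output.dat
cp output.dat $PBS_O_WORKDIR/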


Array jobs

Arrays are a feature of Torque that allows users to submit a series of jobs using a single submission command and a single submission script. A typical use is batch processing a large number of very similar jobs with similar input and output, e.g. a parameter sweep study.

A job array is a single job with a list of sub-jobs. To submit an array job, use the -t flag to describe a range of sub-job indices. For example

qsub -t 1-100 script.sh

submits a job array whose sub-jobs are indexed from 1 to 100. Also,

qsub -t 100-200 script.sh

submits a job array whose sub-jobs are indexed from 100 to 200. Furthermore,

qsub -t 100,200,300 script.sh

submits a job array whose sub-jobs indices are 100, 200 and 300.

The typical submission script for a job array uses the index of each sub-job to define the task specific to that sub-job, e.g. the name of the input file or of the output directory. The sub-job index is given by the PBS variable PBS_ARRAYID. To illustrate its use, consider an application myApp that processes files named input_*.dat (taken as input), with * ranging from 1 to 100. This processing is described in a single submission script called submit.sh, which contains the following line:

myApp < input_$PBS_ARRAYID.dat > output_$PBS_ARRAYID.dat

A job array is submitted using this script with the command qsub -t 1-100 submit.sh. When a sub-job is executed, the file names in the line above are expanded using the sub-job index, with the result that each sub-job processes a unique input file and outputs the result to a unique output file.
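
As a sketch, the complete submit.sh might look like this (the job name and resource requests are illustrative):

#!/bin/bash
#PBS -N sweep
#PBS -l nodes=1:ppn=1,walltime=1:00:00
cd $PBS_O_WORKDIR
# PBS_ARRAYID selects the input and output file for this sub-job
myApp < input_$PBS_ARRAYID.dat > output_$PBS_ARRAYID.dat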

Once submitted, all the array sub-jobs in the queue can be monitored using the extra -t flag to qstat.


Using scratch disk space

At present, the use of scratch space (pointed to by the variable TMPDIR) does not offer any performance advantage over the disk space pointed to by $DATA. Users are therefore advised to avoid using the scratch space on the ARC resources. We have plans for an infrastructure upgrade in which performant storage can be used as fast-access scratch disk space from within jobs.


Jobs with conditional execution

It is possible to start a job on the condition that another one completes beforehand; this may be necessary for instance if the input to one job is generated by another job. Job dependency is defined using the -W flag.

To illustrate with an example, suppose you need to start a job using the script second_job.sh after another job finished successfully. Assume the first job is started using script first_job.sh and the command to start the first job

qsub first_job.sh

returns the job ID 7777. Then, the command to start the second job is

qsub -W depend=after:7777 second_job.sh

This job dependency can be further automated (possibly to be included in a bash script) using environment variables:

JOB_ID_1=`qsub first_job.sh`
JOB_ID_2=`qsub -W depend=after:$JOB_ID_1 second_job.sh`

Furthermore, the conditional execution above can be changed so that the execution of the second job starts on the condition that the execution of the first was successful. This is achieved replacing after with afterok, e.g.

JOB_ID_2=`qsub -W depend=afterok:$JOB_ID_1 second_job.sh`

Conditional submission (as well as conditional submission after successful execution) is also possible with job arrays. This is useful, for example, to submit a "synchronization" job (script sync_job.sh) after the successful execution of an entire array of jobs (defined by array_job.sh). The conditional execution uses afterokarray instead of afterok:

JOB_ARRAY_ID=`qsub -t 2-6 array_job.sh`
JOB_SYNC_ID=`qsub -W depend=afterokarray:$JOB_ARRAY_ID sync_job.sh`

Submitting Jobs using TORQUE (Yale Center for Research Computing)

TORQUE is an open source batch queuing system that is very similar to PBS. Most PBS commands will work without any change. TORQUE is maintained by Adaptive Computing.

Choosing a Queue

PBS (Portable Batch System) is used as a way to manage jobs that are submitted to the cluster. The utilities you'll need are installed in /opt/torque/bin/. The PBS system can be used to submit jobs to the cluster from the appropriate login node to a specified queue. Queues, their limits and purposes are listed on the individual cluster pages.

Interactive Jobs

Interactive jobs can be used for testing and troubleshooting code. By requesting an interactive job, you will get a shell on the requested nodes to yourself.

$ qsub -q <queue> -I

This will assign a free node to you, and put you within a shell on that node. You can run any number of commands within that shell. To free the allocated node, exit from the shell. The environment variable $PBS_NODEFILE is set to the name of a file containing the names of the node(s) you were allocated.
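
A minimal interactive session might look like this (the queue name general is a placeholder):

$ qsub -q general -I -l nodes=1:ppn=4,walltime=1:00:00
# ... a shell opens on the allocated node ...
$ cat $PBS_NODEFILE   # the node(s) allocated to you
$ exit                # frees the node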

When using an interactive shell under PBS, your job is vulnerable to being killed if you lose your network connection. We recommend that all long-duration interactive PBS shells be run under screen. If you do use screen, please be sure to keep track of your allocations and free those no longer needed!

Submitting a Batch Job

To submit a job via TORQUE, you first write a simple shell script that wraps your job called a "submission script". A submission script is comprised of two parts, the "directives" that tell the scheduler how to setup the computational resources for your job and the actual "script" portion, which are the commands you want executed during your job.

The directives are comprised of "#PBS" followed by a Torque flag. The most commonly used flags (which should be in just about every submission script) include:

-N job_name Custom job name
-q queue_name Queue
-l nodes=<N>:ppn=8 Total number of nodes (N)
-l walltime=<HH:MM:SS> Walltime of the job in Hours:Minutes:seconds
-l mem=<M>gb Memory requested for job. A node on Omegas has 35GB of usable memory, so M=35*N

Special use flags include:

-m abe -M <email address> Sends you job status reports when your job starts (b), aborts (a), and/or finishes (e)
-o file_name Specify file name for standard output from job
-e file_name Specify file name for standard error from job

Additional flags can be found in the official qsub documentation.

Here is an example script.pbs that runs a sequential job:

#PBS -q general
#PBS -N my_job
#PBS -l nodes=1:ppn=8,mem=35gb
#PBS -l walltime=12:00:00
#PBS -m abe -M [email protected]

cd $PBS_O_WORKDIR
./myprog arg1 arg2 arg3 ...

This script runs myprog on a node chosen by TORQUE from the general queue, after changing directory to where the user did the submission (the default behavior is to run in the home directory). In this case, the job will be run on a fully allocated node (the node will not be shared with any other users, so, for example, your program will have access to all available memory). You can put any number of PBS directives in the script, followed by commands to be run.

To actually submit the job, do:

 $ qsub script.pbs

We recommend that all jobs run on a queue be configured to send email notifications, so that you will know if they are aborted. To do this, use the -m and -M flags:

-m abe -M [email protected]

A Yale email address must be used; non-Yale emails will silently fail.

You can specify all flags either in the script or on the command line, with the command line taking precedence. For example, the previous script could be submitted to a "general" queue, without change, by doing:

 $ qsub -q general script.pbs
Monitoring Job Status

To check on the status of your jobs, do:

 $ qstat -u YourNetID

To kill a running job, do:

$ qdel <job_id>

To check on the status of a queue (to see how many nodes are free, for example), do:

$ qstat -Q -f <queue>

Output will normally be buffered in an obscure location and then returned to you after the job completes in files called scriptname.ojobid and scriptname.ejobid for standard output and standard error, respectively. It is generally a good idea to explicitly redirect standard output and standard error so you can track the progress of the run by glancing at these files:

$ ./myprog arg1 arg2 arg3 ... > myprog.out 2> myprog.err
Passing Arguments to a Torque Script

There is no way to pass command line arguments to a Torque script, surprising as that seems. However, there is another way to accomplish this using environment variables. The -v flag to qsub causes the specified environment variable to be set in the script's environment. Here is an example script we will save as "env.pbs":

#PBS -v SLEEPTIME
echo "SLEEPTIME IS $SLEEPTIME"
sleep $SLEEPTIME

To run it, first set the env variable in your shell:

$ export SLEEPTIME=10

Then submit the script using qsub:

$ qsub env.pbs
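
The -v flag also accepts name=value pairs directly, so the variable can be set on the qsub command line without a separate export step:

$ qsub -v SLEEPTIME=10 env.pbs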
Additional Tips

For more documentation on TORQUE, see Adaptive Computing's official documentation.

