License tokens processing and limitation of the number of concurrent jobs


The following demonstrates how easy it is to set up such a customization (limiting the number of concurrently running jobs of a special kind, for example jobs that consume license tokens) with the SGE resource quota set feature.

The first thing to do is to create a resource counter that tracks how many of these jobs are being executed. Using an SGE complex attribute, one can define:

# qconf -sc

#name             shortcut   type       relop requestable consumable default  urgency
#------------------------------------------------------------------------------------
concurjob         ccj        INT        <=    FORCED      YES        0        0

...

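For reference, a minimal sketch of how such an entry is added (an assumption about standard qconf usage, not shown in the original): qconf -mc opens the complex list in $EDITOR, where the new line is appended.

# qconf -mc
...
concurjob         ccj        INT        <=    FORCED      YES        0        0
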
Now, all these special jobs should be executed in a special queue called "archive". The archive queue is configured so that every such job must request the special resource counter at submission time.

# qconf -sq archive

qname                 archive
...
complex_values        concurjob=1
...

As shown above, only one job will be scheduled to the archive queue instance per machine.

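If you prefer not to edit the queue in an interactive editor, the same complex_values entry can be attached non-interactively; a sketch, assuming the stock qconf -aattr syntax:

# qconf -aattr queue complex_values concurjob=1 archive
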
Now it's time to control the total number of such jobs globally. This can be done very easily with a resource quota set (RQS). The following command creates such an RQS rule.

# qconf -arqs
{
   name         limit_concur_jobs
   description  NONE
   enabled      TRUE
   limit        to concurjob=10
}

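As an aside, if you script your cluster setup, the same rule can be loaded from a file instead of being typed into the interactive editor; a sketch, assuming the stock file-based qconf variant (the file name is arbitrary):

# cat /tmp/limit_concur_jobs.rqs
{
   name         limit_concur_jobs
   description  NONE
   enabled      TRUE
   limit        to concurjob=10
}
# qconf -Arqs /tmp/limit_concur_jobs.rqs
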
The name, enabled, and limit entries are the ones modified on the default template. This completes the customization, which limits the total number of special jobs running concurrently across the entire SGE cluster.

Now, when you submit a special job to the archive queue, you must use the "-l concurjob=1" resource request, which in turn is used to track how many of these special jobs are running.

The following shows an example.

For demonstration purposes, the archive queue is modified to accommodate two jobs per queue instance, and the total number of allowed concurrent jobs is set to 1.

# qconf -sq archive |egrep 'host|archive|concur'
qname                 archive
hostlist              @allhosts
complex_values        concurjob=2

# qconf -srqs
{
   name         limit_concur_jobs
   description  NONE
   enabled      TRUE
   limit        to concurjob=1
}

# qsub -b y -o /dev/null -j y -l concurjob=1 sleep 3600
Your job 53 ("sleep") has been submitted
# qsub -b y -o /dev/null -j y -l concurjob=1 sleep 3600
Your job 54 ("sleep") has been submitted

# qstat -f
queuename                      qtype resv/used/tot. load_avg arch          states
---------------------------------------------------------------------------------
archive@s4u-80a-bur02          BIP   0/2/10         0.02     sol-sparc64  
     53 0.55500 sleep      root         r     10/24/2008 15:05:59     1       

############################################################################
 - PENDING JOBS - PENDING JOBS - PENDING JOBS - PENDING JOBS - PENDING JOBS
############################################################################
     54 0.00000 sleep      root         qw    10/24/2008 15:05:57     1     

# qstat -j 54
...
scheduling info:            cannot run because it exceeds limit "/////" in rule "limit_concur_jobs/1"


As observed here, job 54 is waiting and will be scheduled once the resource becomes available again.

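Two related commands are worth keeping in mind here (a sketch, assuming stock SGE tooling): qquota reports the current consumption against RQS rules, and qconf -mrqs edits an existing rule in place, for example to restore the limit of 10 used earlier.

# qquota -l concurjob
# qconf -mrqs limit_concur_jobs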

Old News ;-)

Max job per user in a given queue

rabdomant <rago <at> physik.uni-wuppertal.de> | 2009-12-03 12:55 GMT

Dear all,
I'm trying to set in SGE a limit on the number of jobs a user can run in a given queue.
I've searched through the mailing list but found no clue on how to solve my problem.
Could anyone help me?
A.


Re: Max job per user in a given queue
templedf <dan.templeton <at> sun.com> | 2009-12-03 13:51 GMT

The answer is resource quota sets.  You want a quota set like:

{
  name   xlimit
  description   Limit User X
  enabled   TRUE
  limit users X queues all.q to slots=5
}

Daniel

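A note on this recipe, based on sge_resource_quota(5): without braces, a scope such as "users X" counts all matching users against one shared total, while curly braces expand the rule so that the limit applies to each member individually. A per-user variant of the rule above (the rule name here is hypothetical) would be:

{
  name   per_user_slot_limit
  description   "limit each user to 5 slots in all.q"
  enabled   TRUE
  limit   users {*} queues all.q to slots=5
}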

RE: Re: Max job per user in a given queue
rabdomant <rago <at> physik.uni-wuppertal.de> | 2009-12-03 13:54 GMT

Hello Daniel,
thank you for the fast answer.
Unfortunately, this is not exactly what I'm looking for. In fact, if I understand it right, your rule limits the number of occupied slots, while I want to limit the number of jobs.

In particular, to me a job is an OpenMPI instance that runs on n slots simultaneously.

To give an example, I want to allow some users to run only one job at a time, without any interest in the number of slots needed for their jobs.
Thanks
A.


RE: Re: Max job per user in a given queue
rabdomant <rago <at> physik.uni-wuppertal.de> | 2009-12-03 14:41 GMT

What I've tried to do is to set up a consumable quantity called concurrent_jobs, something like:
{
   name     max_per_user_in_silver_parallel
   description  "max number job per user in a given queue"
   enabled      TRUE
   limit        users {*} queues silver_parallel to concurjob_sp=1
}

and define the value concurjob=1 for the queue.
But this is not working, for several reasons.
If you want, I can describe this test in more detail.
A.


Re: Max job per user in a given queue
reuti <reuti <at> staff.uni-marburg.de> | 2009-12-03 18:31 GMT

Hi,

Defining a complex is indeed necessary; I usually suggest using:

$ qconf -sc
...
jobs                jb         INT        <=    YES         JOB        1        1000

(Note the JOB entry in the "consumable" column; it is only available in newer versions of SGE.) Then you have to supply this to the complete cluster:

$ qconf -se global
...
complex_values        jobs=999999
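
Reuti's message is truncated at this point. Based on the jobs consumable he defines above and on the fact that the recipe worked for the original poster, the missing piece is presumably an RQS rule that counts this consumable per user; a plausible reconstruction (the rule name is hypothetical):

{
   name         max_jobs_per_user
   description  "one running job per user in silver_parallel"
   enabled      TRUE
   limit        users {*} queues silver_parallel to jobs=1
}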

RE: Re: Max job per user in a given queue
rabdomant <rago <at> physik.uni-wuppertal.de> | 2009-12-04 21:54 GMT

Hi Reuti
Thanks a lot, it worked!!
A.


Re: Max job per user in a given queue
craffi <dag <at> sonsorol.org> | 2009-12-03 14:22 GMT

Hello,

This link should help:

http://wikis.sun.com/display/GridEngine/Managing+Resource+Quotas

A resource quota rule should be what you try first.

Something like

{
   name max_u_per_queue
   limit users {*} queues all.q to slots=5
}

-Chris


Re: Max job per user in a given queue
craffi <dag <at> sonsorol.org> | 2009-12-03 14:24 GMT

oops. Late to this thread. Sorry about that.

dag

amazon ec2 - SGE Auto configured consumable resource - Server Fault

I am using a tool called starcluster (http://star.mit.edu/cluster) to boot up an SGE-configured cluster in the Amazon cloud. The problem is that it doesn't seem to be configured with any pre-set consumable resources, except for slots, which I don't seem to be able to request directly with qsub -l slots=X. Each time I boot up a cluster I may ask for a different type of EC2 node, so the fact that this slot resource is preconfigured is really nice. I can request a certain number of slots using a preconfigured parallel environment, but the problem is that it was set up for MPI, so requesting slots through that parallel environment sometimes grants the job slots spread across several compute nodes.

Is there a way to either 1) make a parallel environment that takes advantage of the existing preconfigured HOST=X slots settings that starcluster sets up, so that you request slots on a single node, or 2) use some kind of resource that SGE is automatically aware of? Running qhost makes me think that even though NCPU and MEMTOT are not defined anywhere I can see, SGE is somehow aware of those resources. Are there settings that would make those resources requestable without explicitly defining how much of each is available?

Thanks for your time!

qhost output:

HOSTNAME                ARCH         NCPU  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
-------------------------------------------------------------------------------
global                  -               -     -       -       -       -       -
master                  linux-x64       2  0.01    7.3G  167.4M     0.0     0.0
node001                 linux-x64       2  0.01    7.3G  139.6M     0.0     0.0

qconf -mc output:

#name               shortcut   type        relop requestable consumable default  urgency 
#----------------------------------------------------------------------------------------
arch                a          RESTRING    ==    YES         NO         NONE     0
calendar            c          RESTRING    ==    YES         NO         NONE     0
cpu                 cpu        DOUBLE      >=    YES         NO         0        0
display_win_gui     dwg        BOOL        ==    YES         NO         0        0
h_core              h_core     MEMORY      <=    YES         NO         0        0
h_cpu               h_cpu      TIME        <=    YES         NO         0:0:0    0
h_data              h_data     MEMORY      <=    YES         NO         0        0
h_fsize             h_fsize    MEMORY      <=    YES         NO         0        0
h_rss               h_rss      MEMORY      <=    YES         NO         0        0
h_rt                h_rt       TIME        <=    YES         NO         0:0:0    0
h_stack             h_stack    MEMORY      <=    YES         NO         0        0
h_vmem              h_vmem     MEMORY      <=    YES         NO         0        0
hostname            h          HOST        ==    YES         NO         NONE     0
load_avg            la         DOUBLE      >=    NO          NO         0        0
load_long           ll         DOUBLE      >=    NO          NO         0        0
load_medium         lm         DOUBLE      >=    NO          NO         0        0
load_short          ls         DOUBLE      >=    NO          NO         0        0
m_core              core       INT         <=    YES         NO         0        0
m_socket            socket     INT         <=    YES         NO         0        0
m_topology          topo       RESTRING    ==    YES         NO         NONE     0
m_topology_inuse    utopo      RESTRING    ==    YES         NO         NONE     0
mem_free            mf         MEMORY      <=    YES         NO         0        0
mem_total           mt         MEMORY      <=    YES         NO         0        0
mem_used            mu         MEMORY      >=    YES         NO         0        0
min_cpu_interval    mci        TIME        <=    NO          NO         0:0:0    0
np_load_avg         nla        DOUBLE      >=    NO          NO         0        0
np_load_long        nll        DOUBLE      >=    NO          NO         0        0
np_load_medium      nlm        DOUBLE      >=    NO          NO         0        0
np_load_short       nls        DOUBLE      >=    NO          NO         0        0
num_proc            p          INT         ==    YES         NO         0        0
qname               q          RESTRING    ==    YES         NO         NONE     0
rerun               re         BOOL        ==    NO          NO         0        0
s_core              s_core     MEMORY      <=    YES         NO         0        0
s_cpu               s_cpu      TIME        <=    YES         NO         0:0:0    0
s_data              s_data     MEMORY      <=    YES         NO         0        0
s_fsize             s_fsize    MEMORY      <=    YES         NO         0        0
s_rss               s_rss      MEMORY      <=    YES         NO         0        0
s_rt                s_rt       TIME        <=    YES         NO         0:0:0    0
s_stack             s_stack    MEMORY      <=    YES         NO         0        0
s_vmem              s_vmem     MEMORY      <=    YES         NO         0        0
seq_no              seq        INT         ==    NO          NO         0        0
slots               s          INT         <=    YES         YES        1        1000
swap_free           sf         MEMORY      <=    YES         NO         0        0
swap_rate           sr         MEMORY      >=    YES         NO         0        0
swap_rsvd           srsv       MEMORY      >=    YES         NO         0        0

qconf -me master output (one of the nodes as an example):

hostname              master
load_scaling          NONE
complex_values        NONE
user_lists            NONE
xuser_lists           NONE
projects              NONE
xprojects             NONE
usage_scaling         NONE
report_variables      NONE

qconf -msconf output:

algorithm                         default
schedule_interval                 0:0:15
maxujobs                          0
queue_sort_method                 load
job_load_adjustments              np_load_avg=0.50
load_adjustment_decay_time        0:7:30
load_formula                      np_load_avg
schedd_job_info                   false
flush_submit_sec                  0
flush_finish_sec                  0
params                            none
reprioritize_interval             0:0:0
halftime                          168
usage_weight_list                 cpu=1.000000,mem=0.000000,io=0.000000
compensation_factor               5.000000
weight_user                       0.250000
weight_project                    0.250000
weight_department                 0.250000
weight_job                        0.250000
weight_tickets_functional         0
weight_tickets_share              0
share_override_tickets            TRUE
share_functional_shares           TRUE
max_functional_jobs_to_schedule   200
report_pjob_tickets               TRUE
max_pending_tasks_per_job         50
halflife_decay_list               none
policy_hierarchy                  OFS
weight_ticket                     0.010000
weight_waiting_time               0.000000
weight_deadline                   3600000.000000
weight_urgency                    0.100000
weight_priority                   1.000000
max_reservation                   0
default_duration                  INFINITY

qconf -mq all.q output:

qname                 all.q
hostlist              @allhosts
seq_no                0
load_thresholds       np_load_avg=1.75
suspend_thresholds    NONE
nsuspend              1
suspend_interval      00:05:00
priority              0
min_cpu_interval      00:05:00
processors            UNDEFINED
qtype                 BATCH INTERACTIVE
ckpt_list             NONE
pe_list               make orte
rerun                 FALSE
slots                 1,[master=2],[node001=2]
tmpdir                /tmp
shell                 /bin/bash
prolog                NONE
epilog                NONE
shell_start_mode      posix_compliant
starter_method        NONE
suspend_method        NONE
resume_method         NONE
terminate_method      NONE
notify                00:00:60
owner_list            NONE
user_lists            NONE
xuser_lists           NONE
subordinate_list      NONE
complex_values        NONE
projects              NONE
xprojects             NONE
calendar              NONE
initial_state         default
s_rt                  INFINITY
h_rt                  INFINITY
s_cpu                 INFINITY
h_cpu                 INFINITY
s_fsize               INFINITY
h_fsize               INFINITY
s_data                INFINITY
h_data                INFINITY
s_stack               INFINITY
h_stack               INFINITY
s_core                INFINITY
h_core                INFINITY
s_rss                 INFINITY

The solution I found is to make a new parallel environment that uses the $pe_slots allocation rule (see man sge_pe). I set the number of slots available to that parallel environment to the maximum, since $pe_slots limits slot usage to a single node anyway. Since starcluster sets up the slots at cluster boot-up time, this seems to do the trick nicely. You also need to add the new parallel environment to the queue. So, to make this dead simple:

qconf -ap by_node

and here are the contents after I edited the file:

pe_name            by_node
slots              9999999
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $pe_slots
control_slaves     TRUE
job_is_first_task  TRUE
urgency_slots      min
accounting_summary FALSE

Also modify the queue (called all.q by starcluster) to add this new parallel environment to the list.

qconf -mq all.q

and change this line:

pe_list               make orte

to this:

pe_list               make orte by_node

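As a side note (not part of the original answer, and assuming stock qconf behavior), the same change can be scripted by appending to the list attribute instead of editing it interactively:

qconf -aattr queue pe_list by_node all.q
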
I was concerned that jobs spawned from a given job would be limited to a single node, but this doesn't seem to be the case. I have a cluster with two nodes, each with two slots.

I made a test file that looks like this:

#!/bin/bash

qsub -b y -pe by_node 2 -cwd sleep 100

sleep 100

and executed it like this:

qsub -V -pe by_node 2 test.sh

After a little while, qstat shows both jobs running on different nodes:

job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID
-----------------------------------------------------------------------------------------------------------------
     25 0.55500 test       root         r     10/17/2012 21:42:57 all.q@master                       2      
     26 0.55500 sleep      root         r     10/17/2012 21:43:12 all.q@node001                      2  

I also tested submitting 3 jobs at once requesting the same number of slots on a single node, and only two ran at a time, one per node. So this seems to be properly set up!

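One more editorial aside on part (2) of the question, grounded only in the qconf -mc listing above: num_proc is marked requestable there, so a host with a given CPU count can be selected directly. Note that its relop is "==", so the request matches hosts with exactly that many CPUs:

qsub -b y -l num_proc=2 sleep 60
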
clustering.gridengine.users - RE: SGE - host configuration question - msg#00384

Hi Oded,

On Thu, 28 Jun 2007, Oded Haim-Langford wrote:


Hello,

I encountered an example of what I think I need in the Sun N1 Grid Engine 6.1 Administration Guide, p. 155.

I configured the following dynamic resource quota to allow the use of one slot per CPU on all the hosts in the system. According to the manual, it should allow users to use a slot per CPU on all hosts defined in @allhosts.

$> qconf -srqs
{
   name         align_slot_num_to_cpu_num
   description  "set number of slots per machine to be the number of cpus"
   enabled      TRUE
   limit        hosts {@allhosts} to slots=$num_proc
}

Where the host list @allhosts contains all the hosts in the system, some with num_proc=1, others with num_proc=2.

Then I configured @allhosts to be the single hostlist for the queue all.q. I expected all.q to reflect the number of CPUs in the system as its number of slots; well, that didn't happen.

I'm curious why this didn't work. How did you verify? Did you just look at the slots amount reported by qstat -f? If so, qstat's slot output played a trick on you: the slot limit in the queue configuration is independent of the per-host slot limit defined in 'align_slot_num_to_cpu_num', and both limits apply.

The number of slots configured in all.q was the number of slots every host in @allhosts got. Am I missing something here? (I really hope so.) Is it possible to dynamically bind the number of queue slots to the total number of CPUs of its hosts, without configuring it manually per host and per queue?

I wish to work with virtual host groups, e.g. to set up a simulation queue with a hostlist @simulation_hosts that contains @linux_32_2cpu + @linux_32_4cpu + ... So setting a default slot number for the simulation queue will not do it, and setting it manually for every host in the queue seems redundant and unnecessary.

Dynamic queue slots do work with resource quotas, as explained above. If you find the mismatch in the qstat output unsatisfying, you can also assign slot amounts on a per-hostgroup basis in your simulation queue's queue_conf(5), like this:

slots 1,[@linux_32_2cpu=2],[@linux_32_4cpu=4]

That way the slot amount is not actually assigned dynamically, but the qstat slot output is consistent, and the host groups are something you have to maintain anyway.
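
A sketch of how this per-hostgroup slots setting could be applied non-interactively (the queue name "simulation" is assumed from the discussion, and the brackets must be quoted from the shell):

qconf -rattr queue slots "1,[@linux_32_2cpu=2],[@linux_32_4cpu=4]" simulation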


I would really appreciate any help you can extend in this matter.

IMO both solutions are reasonable.

Regards,
Andreas


