
InfiniBand


Adapted from InfiniBand - Wikipedia



InfiniBand is the de facto standard interconnect in HPC. It provides high-bandwidth, low-latency communication over high-speed serial connections. The technology was originally developed by a consortium including Microsoft, IBM, Intel, Hewlett-Packard, Compaq Computer, Dell Computer, and Sun Microsystems as a replacement for the PCI standard for peripheral I/O. For various reasons, InfiniBand has not lived up to the original expectations of the consortium, and both Microsoft and Intel appear to have backed away from the technology.

InfiniBand is based on high-speed, switched serial links that can be combined in parallel to increase bandwidth. The InfiniBand architecture supports a switched fabric for the traffic, in addition to the channel-based host interconnects. A single "1x" InfiniBand link is rated at 2.5 Gbit/s, but interface and switching hardware is available in 10 Gbit/s (4x) and 30 Gbit/s (12x) full-duplex port configurations (20 Gbit/s and 60 Gbit/s total bandwidth, respectively).

InfiniBand copper cables are limited to 17 m in length, while fiber cables are allowed up to 17 km. The InfiniBand specification covers the physical, electrical, and software elements of the architecture. An interesting feature of the InfiniBand hardware and software architecture is that it allows RDMA between the interface cards and a user process's address space, so that operating system kernels can avoid data copying.

The InfiniBand architecture is based on a reliable transport implemented on the interface card. This allows the kernel's TCP/IP stack to be bypassed by means of the Socket Direct Protocol (SDP), which interfaces directly between an application using the socket API and the InfiniBand hardware, providing a TCP/IP-compatible transport. With this architecture, network transport tasks that would normally be performed in the software TCP/IP stack are offloaded to the InfiniBand interface card, saving CPU overhead.

The InfiniBand standard allows simultaneous transport of multiple high-level protocols through the switching fabric. Additional information on the Linux implementation of InfiniBand is available at http://sourceforge.net/projects/infiniband.

Speed

InfiniBand is based on a switched fabric architecture of serial point-to-point links. Like Fibre Channel, PCI Express, Serial ATA, and many other modern interconnects, InfiniBand offers point-to-point bidirectional serial links intended for the connection of processors with high-speed peripherals such as disks. On top of the point to point capabilities, InfiniBand also offers multicast operations. It supports several signaling rates and, as with PCI Express, links can be bonded together for additional throughput.

An InfiniBand link is a serial link operating at one of five data rates: single data rate (SDR), double data rate (DDR), quad data rate (QDR), fourteen data rate (FDR), and enhanced data rate (EDR).

The SDR connection's signaling rate is 2.5 gigabits per second (Gbit/s) in each direction per connection.

For SDR, DDR and QDR, links use 8b/10b encoding - every 10 bits sent carry 8 bits of data - making the effective data transmission rate four-fifths of the raw rate. Thus single, double, and quad data rates carry 2, 4, or 8 Gbit/s of useful data per lane, respectively. For FDR-10, FDR and EDR, links use 64b/66b encoding - every 66 bits sent carry 64 bits of data. (Neither of these calculations takes into account the additional physical layer overhead requirements for comma characters or protocol requirements such as StartOfFrame and EndOfFrame.)

Implementers can aggregate links in units of 4 or 12 lanes, called 4X or 12X. A 12X QDR link therefore carries 120 Gbit/s raw, or 96 Gbit/s of useful data. As of 2009, most systems used a 4X aggregate, implying 10 Gbit/s (SDR), 20 Gbit/s (DDR), or 40 Gbit/s (QDR) connections. Larger systems with 12X links are typically used for cluster and supercomputer interconnects and for inter-switch connections.
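
To make the encoding arithmetic above concrete, here is a minimal shell sketch (using bc) that reproduces the figures quoted in this section; the raw lane rates and the 8b/10b factor come from the text above, nothing else is assumed:

echo "2.5 * 4 * 8 / 10" | bc -l    # 4X SDR:  8 Gbit/s useful (10 Gbit/s raw)
echo "5 * 4 * 8 / 10" | bc -l      # 4X DDR: 16 Gbit/s useful (20 Gbit/s raw)
echo "10 * 4 * 8 / 10" | bc -l     # 4X QDR: 32 Gbit/s useful (40 Gbit/s raw)
echo "10 * 12 * 8 / 10" | bc -l    # 12X QDR: 96 Gbit/s useful (120 Gbit/s raw)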

The InfiniBand roadmap also has "HDR" (High Data Rate), due in 2014, and "NDR" (Next Data Rate), due "some time later", but as of June 2010 these data rates were not yet tied to specific speeds.[2]

Low Latency

Single data rate switch chips have a latency of 200 nanoseconds, DDR switch chips 140 nanoseconds, and QDR switch chips 100 nanoseconds. The end-to-end MPI latency ranges from 1.07 microseconds (Mellanox ConnectX QDR HCAs) to 1.29 microseconds (QLogic InfiniPath HCAs) to 2.6 microseconds (Mellanox InfiniHost III DDR HCAs).

As of 2009 various InfiniBand host channel adapters (HCA) exist in the market, each with different latency and bandwidth characteristics. InfiniBand also provides RDMA capabilities for low CPU overhead. The latency for RDMA operations is less than 1 microsecond (Mellanox[3] ConnectX HCAs).
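
In practice, end-to-end latency is usually measured rather than quoted from data sheets. A minimal sketch using the perftest utilities shipped with OFED (the host name is a placeholder, and default options are assumed):

ib_send_lat                  # run on the server node; waits for a client
ib_send_lat server-node      # run on the client node; "server-node" is a placeholder hostname
# ib_read_lat and ib_write_lat measure RDMA read/write latency in the same way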

Topology

InfiniBand uses a switched fabric topology, as opposed to a hierarchical switched network like traditional Ethernet architectures. All transmissions begin or end at a "channel adapter." Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or quality of service (QoS).
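
Once a subnet manager (for example opensm) is running somewhere on the fabric, the topology can be inspected with the diagnostic tools from the infiniband-diags package; a quick sketch, assuming a standard OFED or distribution install:

ibstat             # local HCA: port state, link width/rate, GUIDs
ibhosts            # all host channel adapters seen on the fabric
ibswitches         # all switches seen on the fabric
ibnetdiscover      # full topology dump: nodes and the links between them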

InfiniBand transmits data in packets of up to 4 KB that are taken together to form a message. A message can be:

a remote direct memory access (RDMA) read from, or write to, a remote node
a channel send or receive
a transaction-based operation (that can be reversed)
a multicast transmission
an atomic operation

Adoption

InfiniBand has been adopted in enterprise datacenters (for example, Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster), in the financial sector, in cloud computing (an InfiniBand-based system won Best of VMworld for Cloud Computing), and more. InfiniBand has mostly been used for high-performance computing (HPC) cluster applications. A number of the TOP500 supercomputers have used InfiniBand, including the former[4] reigning fastest supercomputer, the IBM Roadrunner.

SGI, LSI, DDN, NetApp, Oracle, Nimbus Data, and Rorke Data, among others, have also released storage products utilizing InfiniBand "target adapters". These products compete with architectures such as Fibre Channel, SCSI, and other more traditional connectivity methods. Such target-adapter-based disks can become part of the fabric of a given network, in a fashion similar to DEC VMS clustering. The advantage of this configuration is lower latency and higher availability to nodes on the network (because of the fabric nature of the network). In 2009, the Oak Ridge National Laboratory Spider storage system used this type of InfiniBand-attached storage to deliver over 240 gigabytes per second of bandwidth.[citation needed]

Military applications such as UAVs, UUVs, and electronic warfare are taking this technology into the rugged application space to enhance capabilities. InfiniBand is used in high-performance embedded computing systems such as radar, sonar, and SIGINT applications. Companies such as GE Intelligent Platforms[5] and Mercury Computer Systems produce military-grade single-board computers that are InfiniBand capable.[6]

Cables

Early InfiniBand used copper CX4 cable for SDR and DDR rates with 4x ports; the same cable is also commonly used to connect SAS (Serial Attached SCSI) HBAs to external (SAS) disk arrays. With SAS, this is known as the SFF-8470 connector and is referred to as an "InfiniBand-style" connector. For 12x ports, SFF-8470 12x is used.[7]

The latest connectors used for 4x ports at speeds up to QDR and FDR are QSFP (Quad SFP); the cables can be copper or fiber, depending on the length required.

For 12x ports, the CXP[8] (SFF-8642) can be used up to QDR speed.

Programming

InfiniBand has no standard programming API within the specification. The standard only lists a set of "verbs" - functions that must exist. The syntax of these functions is left to the vendors. The de facto standard has been the verbs syntax developed by the OpenFabrics Alliance, which has been adopted by most of the InfiniBand vendors for GNU/Linux, FreeBSD, and Microsoft Windows. The InfiniBand software stack developed by the OpenFabrics Alliance is released as the "OpenFabrics Enterprise Distribution" (OFED) under a choice of two licenses, GPLv2 or BSD, for Linux and FreeBSD, and as "WinOF" under a BSD license for Windows.
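
A quick way to exercise the verbs layer without writing any code is to use the utilities and sample programs shipped with libibverbs/OFED; a sketch, where the device name mlx4_0 and the host name are assumptions to be replaced with your own:

ibv_devices                             # list HCAs visible through the verbs interface
ibv_devinfo -d mlx4_0                   # port state, link width and firmware of one HCA
ibv_rc_pingpong -d mlx4_0               # classic verbs ping-pong example, server side
ibv_rc_pingpong -d mlx4_0 server-node   # client side; "server-node" is a placeholder hostname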

Overview of InfiniBand protocols

Upper-level protocols such as IP over InfiniBand (IPoIB), Socket Direct Protocol (SDP), SCSI RDMA Protocol (SRP), iSCSI Extensions for RDMA (iSER), and others enable standard data networking, storage, and file system applications to operate over InfiniBand. Except for IPoIB, which provides a simple encapsulation of TCP/IP data streams over InfiniBand, the other upper-level protocols transparently enable higher bandwidth, lower latency, lower CPU utilization, and end-to-end service using the field-proven RDMA and hardware-based transport technologies available with InfiniBand. Configuring GPFS to exploit InfiniBand also keeps the network design simple, because InfiniBand integrates the network and the storage (each server uses a single adapter carrying different protocols), as in the following examples:

IP over InfiniBand (IPoIB)

IPoIB running over high-bandwidth InfiniBand adapters can provide an instant performance boost to any IP-based application. IPoIB supports tunneling of IP packets over InfiniBand hardware. This method of enabling IP applications over InfiniBand is effective for management, configuration, setup, and other control-plane traffic where bandwidth and latency are not critical. Because the application continues to run over the standard TCP/IP networking stack, it is completely unaware of the underlying I/O hardware.
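
A minimal IPoIB configuration sketch, assuming the driver exposes the first port as ib0 and using a made-up address range (substitute values that fit your fabric):

modprobe ib_ipoib                        # usually loaded automatically by the OFED init scripts
ip addr add 192.168.100.1/24 dev ib0     # assign an IP address to the IPoIB interface
ip link set ib0 up
cat /sys/class/net/ib0/mode              # "datagram" or "connected"; connected mode allows a larger MTU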

Socket Direct Protocol (SDP)

For applications that use TCP sockets, SDP delivers a significant performance boost while reducing CPU utilization and application latency. The SDP driver provides a high-performance interface for standard socket applications: it gains performance by bypassing the software TCP/IP stack, implementing zero copy and asynchronous I/O, and transferring data using efficient RDMA and hardware-based transport mechanisms.
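
With older OFED releases, an unmodified socket application could typically be switched to SDP by preloading the SDP library; a hedged sketch (library name and availability depend on the OFED version, SDP has been dropped from newer releases, and my_tcp_app is a placeholder):

LD_PRELOAD=libsdp.so ./my_tcp_app        # redirect socket calls to SDP at run time
# /etc/libsdp.conf controls which connections are redirected to SDP and which stay on TCP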

SCSI RDMA Protocol (SRP)

SRP was defined by the ANSI T10 committee to provide block storage capabilities for the InfiniBand architecture. SRP is a protocol that uses the InfiniBand reliable connection and RDMA capabilities to provide a high-performance transport for the SCSI protocol. SRP is very similar to Fibre Channel Protocol (FCP), which carries SCSI over Fibre Channel. This allows one host driver to use storage target devices from various storage hardware vendors.
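
On the initiator (host) side, a manual SRP login with the OFED tools looks roughly like the sketch below; the HCA/port in the sysfs path (srp-mlx4_0-1) is an assumption, and the target string is whatever ibsrpdm prints on your fabric:

modprobe ib_srp                          # load the SRP initiator module
ibsrpdm -c                               # print discovered SRP targets as target-login strings
echo "id_ext=...,ioc_guid=...,dgid=...,pkey=ffff,service_id=..." > /sys/class/infiniband_srp/srp-mlx4_0-1/add_target
# the exported LUNs then show up as ordinary SCSI disks; srp_daemon can automate the login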

Remote Direct Memory Access (RDMA)

RDMA allows servers on the InfiniBand fabric to access the memory of another server directly. Examples are a GPFS cluster or a database server cluster. GPFS has the verbsPorts and verbsRdma configuration options for its InfiniBand RDMA function, and a database server cluster can add an RDMA agent to its core functionality, which allows two database instances running on different nodes to communicate directly with each other, bypassing all kernel-level communication operations and thus reducing the number of times data is copied from persistent storage into the RAM of the cluster nodes.
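
For GPFS specifically, enabling RDMA over InfiniBand comes down to two configuration options and a daemon restart; a sketch, where the HCA/port name mlx4_0/1 is an assumption:

mmchconfig verbsRdma=enable
mmchconfig verbsPorts="mlx4_0/1"         # <HCA device>/<port number>
mmshutdown -a && mmstartup -a            # the GPFS daemons must be restarted to pick up the change
mmlsconfig | grep -i verbs               # verify the settings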

Old News ;-)

[Jul 24, 2013] Gentoo Cluster: Gamess Installation with MVAPICH2 and PBS

August 1st, 2011 | infiniband Ins & Outs

Gamess is an electronic structure calculation package. Its installation is easy if you just want to use the "sockets" communication mode: just emerge it as you regularly do, then use "rungms" to submit your job. The default rungms is fine for running the serial code; for parallel computation you still need to tune the script slightly. But since our cluster has InfiniBand installed, it is better to go with the "mpi" communication mode. It took me quite some time to figure out how to install it correctly and make it run with mpiexec.hydra alone or with OpenPBS (Torque). Here is how I did it.

Related software packages:

1. gamess-20101001.3 (download it beforehand from its developer's website)
2. mvapich2-1.7rc1 (previous versions should be okay; I installed it under /usr/local/)
3. OFED-1.5.3.2 (userspace libraries for InfiniBand; see my previous post. Only updated kernel modules were installed; userspace libraries should be the same as in OFED-1.5.3.1)
4. torque-2.4.14 (OpenPBS)

Steps

1. Update the gamess-20101001.3.ebuild with this one and manifest it.
2. Unmask the mpi USE flag for gamess in /usr/portage/profiles/base/package.use.mask.
3. Add sci-chemistry/gamess mpi to /etc/portage/package.use; then emerge -av gamess.
4. Update rungms with this one;
5. Create a new script pbsgms as this one;
6. Add kernel.shmmax=XXXXX to /etc/sysctl.conf, where XXXXX is a large enough integer for shared memory (the default value of 32 MB is too small for DDI). Run /sbin/sysctl -w kernel.shmmax=XXXXX to update the setting on the fly.
Added on Sept. 9, 2011: it seems that kernel.shmall=XXXXX should be modified as well. Please bear in mind that the unit for kernel.shmall is pages while kernel.shmmax is in bytes, and a page is usually 4096 bytes (use getconf PAGE_SIZE to verify). See the sketch after this list.

7. Environment setting. Create a file /etc/env.d/99gamess

GMS_TARGET=mpi
GMS_SCR=/tmp/gamess
GMS_HOSTS=~/.hosts
GMS_MPI_KICK=hydra
GMS_MPI_PATH=/usr/local/bin

Then update your profile.
8. Create a hostfile, ~/.hosts

node1
node2
...

This file is only needed when invoking rungms directly.

9. Test your installation: copy a test job input file, exam20.inp, from under /usr/share/gamess/tests/; then submit the job using pbsgms exam20 (you will be prompted for the other settings), or using rungms exam20 00 4.
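
A minimal sketch of the shared-memory settings from step 6, assuming you want to allow 16 GiB of shared memory (the figures are illustrative; size them to your nodes' RAM):

# /etc/sysctl.conf
kernel.shmmax = 17179869184              # bytes: 16 * 1024^3
kernel.shmall = 4194304                  # pages: 17179869184 / 4096 (getconf PAGE_SIZE)

Apply the file without rebooting with /sbin/sysctl -p.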

Explanations

1. Two changes were made to the ebuild file.
(a) The installation suggestions given in the Gamess documentation are not enough. More libraries than just mpich need to be passed to lked, the linker program for Gamess.
(b) MPI environment constants need to be exported to the installation program, compddi, through a temporary file, install.info.
2. Many changes were made to the script rungms. I could not remember all of them. Some are as follows.
(a) For parallel computation, the scratch files will be put under /tmp on each node by default.
(b) The script works together with pbsgms.
(c) System-wide settings for Gamess can be put under /etc/env.d.
(d) A host file is needed if not using PBS. By default it should be at ~/.hosts; if it is not found, the job runs on the local host only.
3. The script pbsgms is based on the sge-pbs script shipped with the Gamess installation package. I have made it work with Torque; numerous changes were made.

InfiniBand SSDs etc - StorageSearch.com

OCZ's SAS SSDs in InfiniBand benchmark configuration

Editor:- June 12, 2013 - Mellanox today announced details of a benchmark demonstration it ran this week showing its FDR 56Gb/s InfiniBand running on Windows Server 2012 in a system that uses OCZ's Talos 2R SSDs (2.5" SAS SSDs) working with LSI's Fast Path I/O acceleration software and RAID controllers - achieving over 10GB/s throughput to a remote file system while consuming less than 5% CPU overhead.

Mellanox FDR InfiniBand pushes PCI-Express 3.0 to the limits • The Register

ISC 2012: If you want to try to choke a PCI-Express 3.0 peripheral slot, you have to bring a fire hose. And that is precisely what InfiniBand and Ethernet switch and adapter card maker Mellanox Technologies has done with a new Connect-IB server adapter.

Mellanox was on hand at the International Supercomputing Conference (ISC) in Hamburg, Germany this week, showing off its latest 56Gb/sec FDR InfiniBand wares and boasting of the uptake of InfiniBand technology in the Top 500 rankings of supercomputers and its general uptake in database clusters, data analytics, clustered storage arrays, and other segments of the systems racket.

Mellanox is the dominant supplier now that QLogic has sold off its InfiniBand biz to Intel, and it is milking the fact that it has FDR switches and adapters in the field when QLogic is still at 40Gb/sec QDR InfiniBand. (QLogic, if it had not been eaten by Intel, would counter that it can get the same or better performance from its QDR gear than Mellanox delivers with its FDR gear.) These are good days for Mellanox, which ate rival Voltaire to get into the Ethernet racket and which is enjoying the benefits of the rise of high-speed clusters.

At least until Intel comes back at Mellanox in a big way, pursuing all of its own OEM partners with the Xeon-QLogic-Fulcrum-Cray Aries quadruple whammy. Intel did not buy QLogic, Fulcrum Microsystems, and the Cray supercomputer interconnect business to sit on these assets, like some kind of knickknacks sitting on shelf.

Intel is going to try to become a supplier of supercomputing interconnects that do all kinds of things and that hook into its Xeon processors and chipsets tightly and seamlessly, and that will eventually make it very tough for Mellanox.

But not so at ISC this year. As El Reg previously reported, for the first time in the history of the Top 500 rankings of supercomputers, InfiniBand has edged out Ethernet, with 208 machines using InfiniBand and 207 using Ethernet. Drilling down into the data a bit, there were 195 machines that used Gigabit Ethernet switches and adapters to link server nodes together, and another 12 that used 10 Gigabit Ethernet.

There are still 78 machines on the list that use earlier InfiniBand gear, but there are 110 machines using QDR InfiniBand, and 20 machines that use FDR InfiniBand. There are a few hybrid interconnects as well on the list that mix InfiniBand with some other network.

The remainder are a mix of custom interconnects like the Cray "SeaStar" XT and "Gemini" XE routers, the Silicon Graphics NUMAlink, IBM's BlueGene/Q, Fujitsu's "Tofu," and a few others. Gigabit Ethernet is by far the most popular of any single speed or type, of course, but it is dramatic how InfiniBand has really blunted the uptake of 10GE networks at the top end of supercomputer clusters. The idea seems to be that if you are going to spend money on anything faster than Gigabit Ethernet, then you might as well skip 10GE or even 40GE and get the benefits of QDR or FDR InfiniBand.

This is certainly what Mellanox is hoping customers do, and that is why it is bragging about a new server adapter card called Connect-IB that can push two full-speed FDR ports.

[Image: the Connect-IB dual-port InfiniBand FDR adapter card]

This new Connect-IB card, which is sampling now, will be available for both PCI-Express 3.0 and PCI-Express 2.0 slots, and eats an x16 slot. Up until now, network adapter cards have generally used x8 slots, with half as many lanes of traffic and therefore a lot less theoretical and realized bandwidth available to let the network chat up the servers. By moving to servers that support PCI-Express 3.0 slots, you can put two FDR ports on each adapter using an x16 slot and still run them at up to 100Gb/sec aggregate across the two ports.

If your server is using older PCI-Express 2.0 slots – and at this point, that means anything that is not using an Intel Xeon E5-2400, E5-2600, E5-4600, or E3-1200 v2 processor since no other server processor maker is supporting PCI-Express 3.0 yet – then there is an x16 Connect-IB card that has one port that you can try to push all the way up to 56Gb/sec speeds.

These new cards have a single microsecond MPI ping latency and support Remote Direct Memory Access (RDMA), which is one of the core technologies that gives InfiniBand its performance edge over Ethernet and which allows for servers to reach across the network directly into each other's main memory without going through that pesky operating system stack. Mellanox says the new two-port Connect-IB card can push 130 million messages per second – four times that of its competitor. (That presumably means you, QLogic, er, Intel.)

There is also a single and dual-port option on the Connect-IB cards that slide into x8 slots. It is not clear how much data these x8 slots can really push, and until they are tested in the field, Mellanox is probably not even sure.

In theory, an x8 slot running at PCI-Express 3.0 speeds should be able to do 8GB/sec (that's bytes, not bits) of bandwidth in both directions, for a total of 16GB/sec of total bandwidth across that x8 link. This should not saturate the x8 link.

What is certain is that an x8 slot running at PCI-Express 2.0 speeds could not really handle FDR InfiniBand, with only 64Gb/sec of bandwidth (8GB/sec) each way available. That was getting too close to the ceiling.

Now, the ConnectX chips on the Mellanox adapters as well as the SwitchX ASICs at the heart of its switches swing both ways, Ethernet and InfiniBand, so don't jump to the wrong conclusion and think Mellanox doesn't love Ethernet.

The company was peddling its 40GE adapters and switches, which support RDMA over Converged Ethernet (RoCE) and which give many of the benefits of InfiniBand to customers who don't want to build mixed InfiniBand-Ethernet networks. (Or, perhaps more precisely, they want Mellanox to do it inside of the switch and inside of the adapter cards and mask the transformation from the network.) Mellanox says that it is showing up to an 80 per cent application performance boost using its 40GE end-to-end compared to 10GE networks on clusters.

In addition, Mellanox also announced that the latest FDR InfiniBand adapters will also support Nvidia's GPUDirect protocol, which is a kind of RDMA for the Tesla GPU coprocessors that allowed GPUs inside of a single machine to access each other's memory without going through the CPU and OS stack to do it.

With the current Tesla K10 and future Tesla K20 GPU coprocessors, GPUDirect will allow for coprocessors anywhere in a cluster to access the memory of any other coprocessor, fulfilling Nvidia's dream of not really needing the CPU for much at all. This GPUDirect support will be fully enabled in Mellanox FDR adapters. ®

