
Intrusion Prevention and Detection Roadmap for 2005

Copyright 2004-2005, Dr. Nikolai Bezroukov. This is a copyrighted unpublished manuscript. All rights reserved.

Version 1.4


Note: An earlier version of this paper was published in Softpanorama Bulletin Vol 16, No. 04 (December, 2004)

Executive Summary and Proposed IDS/IPS Roadmap

  1. Large companies should move away from the goal of intrusion detection and toward policy-based monitoring of selected activities.  Additional investment should be strategically aligned with technologies designed to develop and enforce policies that prevent intrusions from occurring in the first place.  The latter approach has proved more efficient and cost-effective, especially when policy monitoring is augmented by policy enforcement.  For example, connections initiated from the server segment to the DHCP (PC and workstation) segment are instantly suspect.  The types of connections from one workstation to another are also very limited, and should be studied and enumerated. Only legitimate connections should be allowed. 
  2. Intrusion Detection and Prevention architecture should be balanced using equal or similar investments in three major types of IDS:
  3. Fundamentally, any large company should move away from generic signature-based IDS sensors and switch to using existing network IDS sensors as specialized monitoring devices with the explicit and limited function of monitoring honeypots.  While some companies might try to preserve their existing IDS infrastructure for political reasons, they can benefit from retooling it for the narrower function of monitoring honeypots and, if manpower permits, specific protocols and segments. Generic network IDS functions should probably migrate into the firewall. 
  4. For servers that can be integrated into a monitoring framework, the emphasis in the IDS area should be on host-based monitoring (at least via integrity checkers) and, first of all, on log analysis. With solid system and application monitoring and log analysis we can see real problems with the system and may be in a better position to uncover real security events, not an endless stream of false positives.  In the near future, IDS will take a back-seat role, and server-based monitoring and log analysis should come to the forefront in intelligent organizations. Of course, many IT departments are not that intelligent, and IT top brass prefer to "beat the drum and march with the banner" because they do not understand the real situation and are detached from real IT problems (especially if they were recruited from bean counters or supply chain specialists), but this is another problem.  What is still needed is host-based and kernel-level enforcement to make sure policies cannot be tampered with; pieces of this technology are visible in AIX 5.3 and Solaris 10, as well as Windows Server 2003, but in general it is probably a couple of years away from maturity. 
  5. Malware advances push intrusion detection deeper into the host-based domain, essentially making most of the functionality of the old generic network IDS sensors obsolete, with the remainder being integrated into "perimeter" and "site" firewalls. IPS from major vendors should be avoided, as they represent not a new technology but a sugar-coating of existing IDS technology and an attempt to add firewall functions to the existing framework.  IDS and IPS use essentially the same detection techniques. Both are plagued by false positives, but for IPS the problem is ten times more acute, and the first principle here is "do no harm".  When you are looking at a network of dozens of important hosts, including e-commerce hosts, the last thing you want is shooting first and asking questions later. We judge that this superficial, "cosmetic-style" adaptation of existing products in order to lure naive customers is doomed to fail.  Marcus Ranum, chief security officer at Tenable Security, long ago noted:

     “They are the same thing. IPS is just a signature IDS with firewall block rules that sits inline. Big whoop. The ‘convergence’—if there is any—is between firewalls and IDS.”
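The policy-monitoring approach in item 1 above can be sketched in a few lines: enumerate the legitimate flows between segments and flag everything else. The segment names, address prefixes, and policy table below are illustrative assumptions, not a description of any specific product.

```python
# Sketch: enforce an enumerated connection policy instead of trying to
# detect intrusions by signature. All names and prefixes are toy values.

def segment_of(ip: str) -> str:
    """Classify an address into a network segment (toy mapping)."""
    if ip.startswith("10.1."):
        return "servers"
    if ip.startswith("10.2."):
        return "workstations"
    return "other"

# Enumerated legitimate flows: (source segment, destination segment).
ALLOWED = {
    ("workstations", "servers"),   # clients talking to servers: normal
    ("servers", "servers"),        # server-to-server traffic: normal
}

def check_flow(src_ip: str, dst_ip: str) -> bool:
    """Return True if the flow matches the enumerated policy."""
    return (segment_of(src_ip), segment_of(dst_ip)) in ALLOWED

# A server opening a connection into the workstation segment is
# instantly suspect under this policy:
print(check_flow("10.2.0.7", "10.1.0.3"))  # workstation -> server: True
print(check_flow("10.1.0.3", "10.2.0.7"))  # server -> workstation: False
```

The point of the sketch is the default-deny stance: anything not explicitly enumerated is suspect, which inverts the signature-IDS assumption that traffic is good until proven bad.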

The current configuration of critical DMZ systems is characterized by the absence of central logging, as well as the absence of systematic, intelligent log monitoring. Currently we implicitly have to trust everyone who has the root password. The best that can be done is to introduce two-factor authentication (and that is the minimum requirement for any large corporation for admins with root access). The second step is to direct all root sessions through a special "multiplexer" server, the only host from which root access to the servers is permitted.  You can also disable root access completely, relying on sudo, but this is a mixed blessing. 
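As a sketch of the "multiplexer" idea: newer OpenSSH versions (4.4 and later, which introduced Match blocks) can restrict direct root logins to a single jump host. The address 10.0.0.5 below is a placeholder assumption, not a recommended value.

```
# /etc/ssh/sshd_config fragment (illustrative sketch; 10.0.0.5 stands in
# for the root-session "multiplexer" server)
PermitRootLogin no                     # no direct root logins by default

Match Address 10.0.0.5
    PermitRootLogin without-password   # key-based root access, only from the multiplexer
```

Combined with central logging of the multiplexer's sessions, this gives one choke point where all root activity can be recorded and audited.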

Investment in custom and intelligent log analysis software should be massive; otherwise this will not work. There are several significant problems that make intelligent log analysis (even using some adaptive technologies) an expensive undertaking:

Log-based analyzers as a central part of IDS troika

Log analysis can be considered as the most fundamental part of an intrusion detection troika. Log files are important building blocks of any host-based IDS because they form an audit trail, making it easier to track down intermittent problems or attacks.

Log analysis is also closely related to monitoring system performance and can be integrated into the Tivoli monitoring system. The latter is a highly complex enterprise-level monitoring framework that contains special custom components, called adapters, which can be integrated into the framework. Standard Tivoli log-file adapters, however, do not provide advanced logic capabilities.  A customer needs to write a set of custom scripts on the monitored system; they can be written in Korn shell, Perl, or another scripting language. The custom script Resource Model checks the standard output from the custom script, so the custom script must print its result to standard output.
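A minimal sketch of such a custom script, in the spirit described above: all detection logic lives in the script, and the result is printed to standard output for the Resource Model to parse. The log lines, the pattern, and the `FAILED_LOGINS=` output format are illustrative assumptions, not a Tivoli specification.

```python
# Sketch of a custom monitoring script whose one-line stdout result a
# resource model could consume. Sample data stands in for a real log.

import re

FAIL_PAT = re.compile(r"Failed password|authentication failure")

def count_failed_logins(lines):
    """Count log lines that look like failed authentication attempts."""
    return sum(1 for line in lines if FAIL_PAT.search(line))

sample = [
    "sshd[123]: Failed password for root from 10.2.0.9",
    "sshd[124]: Accepted publickey for admin from 10.2.0.4",
    "sshd[125]: pam_unix: authentication failure; user=guest",
]

# The resource model reads this single line from the script's stdout:
print("FAILED_LOGINS=%d" % count_failed_logins(sample))
```

In a real deployment the script would read the tail of the actual log file since the last run; keeping the output to one machine-parsable line is what makes the stdout contract workable.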

In the ideal case, using log files one may be able to piece together enough information to discover the source of a break-in, and the scope of the damage involved, much more efficiently.

Currently most large companies do not monitor logs on a day-to-day basis. Logs are examined only occasionally, usually in the course of after-the-fact reactive problem solving. And that is not accidental. There are several problems with log analysis, and first of all its complexity. For a large company, such logs as proxy logs can be really big. In many cases they are also dirty: they contain a lot of noise records due to misconfigured services, configuration errors, etc.  All in all, creating a log-monitoring infrastructure requires substantial investment. 
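The "dirty logs" problem above is usually attacked by maintaining an explicit list of known-noise patterns and dropping them before analysis. The patterns below (chatty DHCP traffic, syslog repeat markers, a known ntpd misconfiguration) are illustrative assumptions; a real noise list is site-specific and grows over time.

```python
# Sketch: filter known noise out of a large log stream before analysis,
# so that the remaining lines are worth a human's attention.

import re

NOISE = [
    re.compile(r"DHCPREQUEST"),                  # chatty but benign
    re.compile(r"last message repeated \d+ times"),
    re.compile(r"ntpd.*no servers reachable"),   # known misconfiguration
]

def denoise(lines):
    """Drop lines matching any known-noise pattern; keep the rest."""
    return [l for l in lines if not any(p.search(l) for p in NOISE)]

log = [
    "dhcpd: DHCPREQUEST for 10.2.0.9",
    "sshd: Failed password for root from 10.2.0.9",
    "syslogd: last message repeated 42 times",
]
print(denoise(log))  # only the sshd line survives
```

The weakness, as the text notes, is that this only pays off once configurations are standardized enough that "noise" means the same thing across servers.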

Cleaning logs of noise and standardizing configurations to make them more useful for event analysis represents a significant project and requires a qualitatively better level of configuration and maintenance of the operating systems and applications in question, as well as more structured policies regulating sysadmin activity on the DMZ.

One of the most important parts of log analysis is the existence of clear, current and unambiguous policies. The following additional policies can make log analysis a little bit simpler and the logs themselves more consistent: 

The success of a log analysis IDS is highly dependent on the quality and the level of enforcement of those policies.

Host-based integrity checkers

Integrity checkers are very useful for finding Trojan programs and backdoors such as rootkits. Theoretically they are also useful for maintenance, but in reality this goal is pretty difficult to achieve. Perl-written (or Python-written) integrity checkers are more flexible and thus have an edge over C-written tools like Tripwire.

The most popular integrity checker for Unix, Tripwire, never outgrew its origins as a student project. Our experience suggests that the free version of Tripwire can realistically be used only in a limited way, for static servers like appliances.  The commercial version is a little more flexible, but not by much. Moreover, the introduction of a central console for all Tripwire instances created an additional security risk. 

The fairy tales that Tripwire can detect or prevent host intrusions as a standalone application are not credible. Theoretically it can, but the tool itself is so inflexible that without a good log analyzer it largely defeats its purpose.  On Linux, in most cases you might have more success with RPM-based checking than with Tripwire.
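RPM-based checking means `rpm -Va`, which compares every installed file against the RPM database and prints a flag field per discrepancy ('S' = size changed, '5' = digest changed, 'T' = mtime changed). A sketch of interpreting that output follows; the sample lines are illustrative, and the 9-character flag field width assumes a reasonably modern rpm (older versions used 8).

```python
# Sketch: treat `rpm -Va` output as a lightweight integrity check by
# flagging files whose size ('S') or digest ('5') changed.

def changed_files(rpm_va_output):
    """Return paths whose contents differ from what the RPM db records."""
    suspects = []
    for line in rpm_va_output.splitlines():
        if len(line) < 10:
            continue
        flags, path = line[:9], line.split()[-1]
        if "5" in flags or "S" in flags:
            suspects.append(path)
    return suspects

sample = (
    "S.5....T.  c /etc/ssh/sshd_config\n"  # config file, contents changed
    ".......T.    /usr/bin/ls\n"           # only mtime changed: ignore
    "S.5....T.    /bin/login\n"            # binary changed: suspicious
)
print(changed_files(sample))
```

The limitation is the mirror image of Tripwire's strength: the RPM database itself is writable by root, so this catches sloppy tampering, not a careful attacker.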

Still, it makes sense to install Tripwire on critical servers: with all its faults, it is still the most credible commercial product. At the same time, open source products should be investigated in the future, as Perl-based integrity checkers are more flexible and powerful than C-based products like Tripwire.
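The flexibility argument is easy to see from how little code the core of a scriptable integrity checker needs. Below is a minimal sketch in Python (one of the scripting languages the text names): hash a list of critical files into a baseline, then compare later runs against it. The file list, storage, and reporting are all left open precisely because a scripted tool can adapt them per server; a real tool must also protect the baseline itself from tampering.

```python
# Minimal sketch of a scriptable integrity checker: baseline + compare.

import hashlib, os, tempfile

def snapshot(paths):
    """Map each path to the SHA-256 digest of its contents."""
    result = {}
    for path in paths:
        with open(path, "rb") as f:
            result[path] = hashlib.sha256(f.read()).hexdigest()
    return result

def compare(baseline, current):
    """Report baseline files that changed or disappeared."""
    return sorted(p for p in baseline if current.get(p) != baseline[p])

# Tiny demonstration with a throwaway file:
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"original"); tmp.close()
base = snapshot([tmp.name])
with open(tmp.name, "wb") as f:
    f.write(b"tampered")
print(compare(base, snapshot([tmp.name])))  # the modified file is reported
os.unlink(tmp.name)
```

Everything Tripwire makes hard (custom report formats, per-directory rules, feeding results into a log analyzer) is a few extra lines here, which is the sense in which scripted checkers "have an edge".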

We should avoid creating an all-encompassing rulebase for each server: this is a proven road to nowhere. Older versions of Tripwire were strictly file-oriented, and the problem of listing all the files and directories quickly made the ruleset unmaintainable. Newer versions (commercial version 4.0 and later) permit specification of all files in a directory (better late than never ;-)

Still, the best policy in using Tripwire is to limit yourself to a few critical system files (for example, those targeted by rootkits), plus several critical configuration files.  Actually, control of configuration files is more important, and here Tripwire, while weak, can at least provide some return on investment. If you are thinking about using Tripwire for tracking changes, please think again. This is possible by writing custom scripts, but there are better tools for the same purpose.  One problem is that if you do not compare with the baseline, you compare with a set of attributes. Also, if you control both a directory and a file in that directory, then for each change Tripwire will complain twice. The free version of Tripwire has a -loosedir option which prevents Tripwire from complaining about directory modification time updates and can filter out some of this noise; in the commercial version it became a configuration option.

Network IDS

Security advances push intrusion detection deeper into the host-based domain, essentially making most of the functionality of the old generic network IDS sensors obsolete, with the remainder being integrated with firewalls. Drowning in bloated signature databases and alerts that are of little or no value in locating attacks, security specialists are fed up with signature-based IDS systems.

At least one research company proved to be brave enough to declare that "the king is naked."  A Gartner Inc. report [Gartner2003] called intrusion-detection systems a failed technology that isn't cost-effective. As the Gartner report correctly stated, IDS is dead:

Gartner Group, the well-known analyst firm, caused something of a stir recently with its pronouncement that Intrusion Detection Systems (IDS) and their Intrusion Prevention Systems (IPS) offspring were a market failure -- and in fact will be obsolete by the middle of the decade.

The Stamford, Conn.-based firm declared that IDS and IPS don't deliver the extra layer of security that was promised, and that many IDS implementations have been ineffective.

Gartner clearly has picked up on a massive source of end-user industry pain. IDS have long been derided as difficult to manage, creating many false positives and negatives, which is one of the reasons that security event management solutions evolved -- to make IDS both more manageable and more effective.

Some parts of IDS technology will definitely survive.  Moreover, Gartner's prediction to a certain extent contradicts previous buying trends of organizations. According to the Computer Security Institute - FBI annual Computer Crime and Security Survey, only 43% of organizations bought intrusion-detection systems in 1998. That percentage climbed steadily every year to reach 73% in 2002. Nonetheless, Stiennon considers that investments in intrusion-detection systems have already stalled because of all their shortcomings.

Gartner suggests that "deep packet inspection" will move into firewalls in the coming years. A more realistic strategy is retooling IDS sensors to monitor appliances that cannot be easily integrated into the existing monitoring framework (the Tivoli framework, in the case of a large company). But what is actually dead is the sales pitch that IDS can protect the company from intrusions. It never did that in the first place and from the beginning served largely as an insurance policy.

Despite a real threat of network exploits and a shrinking time gap between vulnerabilities and exploits, signature-matching IDS has become obsolete.  Here is one relevant quote:

Intrusion detection systems are dead, a panel of analysts told the RSA Conference on Monday. The question remains what should replace them, and whether the newly fashionable "intrusion prevention systems" are more than just a change of buzzword.

"IDS is dead," said Vic Wheatman of Gartner Group. "People bought it, installed it and turned it down when they had too many alerts."

Analyst Mike Rasmussen of Giga agreed: "75 percent of IDS installations were failures," he said, blaming a failure to allocate enough resources to weed out the false positives, where the IDS issues a false alarm. But intrusion prevention--where systems are designed to respond automatically to prevent an attack having any effect -- is not necessarily the panacea it is made out to be, he warned: "In many cases, it's the old vendors abusing the term."

Large companies should retool existing IDS sensors as specialized network monitors for appliances that cannot be integrated into the Tivoli framework and as generic traffic analyzers (NikSun sensors). A large company should not count on IDS dying as Gartner predicted in a controversial report last year. Instead, effort is needed to integrate the IDS component into the larger Tivoli framework, which should be oriented more toward policy enforcement than intrusion detection and should primarily use host monitoring and log analysis as more reliable and cost-effective technologies.

In the near term, this relegates currently installed IDS to forensics and after-the-fact inspections. But in five years or so, new security technologies could cause the demise of signature-based IDS altogether.

Fundamental Problems with generic IDS sensors

Among the problems associated with IDSs we can mention the following:

A fog of false alerts: the "false positives" problem proved to be fundamental and cannot be resolved by tuning or other cosmetic means.

Generic IDS sensors have proved to be prone to streams of false alerts.  The essence of the problem stems from IDS over-reliance on signatures. As AV vendors know perfectly well, the signature-based approach is mostly reactive. But that is not suitable for the stated goals of network IDS, so IDS vendors are under constant pressure to make signatures more generic in order to catch modified variants of known threats. Unfortunately, this dramatically increases the rate of false positives and, in the case of IPS, would cause legitimate traffic to be blocked. That is not acceptable in a production environment like a large company has.

That's why many Wall Street companies are now all too happy to rid themselves of their signature-based systems altogether [Bradley2004]:

"Every time we got a report off an IDS, it was pulse-raising. There'd be two $100,000-a-year Cisco Certified Network Engineers plowing through event logs trying to figure out what's going on," says Chris Van Waters, senior director of IT for QuadraMed, a Westin, Va., healthcare technology company with 1,000 employees. "Meanwhile, we've still got the network degraded, traffic's going through the roof, and we don't know where it's coming from."

The problem with IDS and IPS systems is that they assume everything is good until proven bad. Policy monitoring defines what is acceptable, treats anything outside of that as bad, and as such is a more realistic strategy.

In fact, the management and performance drawbacks of IDS proved to be so notorious that a Gartner Information Security Hype Cycle report published in June 2003 declared the category a market failure [Gartner2003]. Instead, Gartner recommended that organizations hold off investing in IDS and shift resources to vulnerability scanning, server hardening, and newer deep-packet-inspection firewalls, which are more adept than standard firewalls at detecting and stopping application-level attacks.

Trying to survive, some network IDS vendors have started working to eliminate false positives. They resort to various heuristics to determine whether an attack is relevant and are trying to sell "enhanced" technologies like anomaly detection, heuristic traffic analysis, application-level protocol reconstruction and analysis, etc.  For example, NFR now sells an operating system fingerprinting module, which uses a proprietary sniffer to determine what applications are running on the network and tunes the signature database accordingly.

Another heuristic is to build a baseline of common traffic patterns at the device level and, for each device, correlate only anomalies against that baseline. While being scanned every second of every day does not mean much, it might be useful for customers to see the type and content of packets that fall outside the baseline, the number of such packets per hour, and the corresponding port distributions. If those abnormal packets look, for example, like a specific attack on an HTTP port of a commerce server, then there is a higher chance that they are relevant and deserve some action by company personnel. 
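The per-device baseline heuristic can be sketched simply: keep an hourly packet count per port, and surface only ports that depart sharply from their baseline. The threshold factor, the absolute floor, and the counts below are illustrative assumptions; a real implementation would also track packet content, not just volume.

```python
# Sketch: flag only traffic that departs from a per-device baseline.

def anomalies(baseline, observed, factor=3.0, floor=100):
    """Return ports whose observed hourly count exceeds the baseline by
    `factor`, ignoring ports below an absolute packet-count floor."""
    out = []
    for port, count in observed.items():
        base = baseline.get(port, 0)
        if count >= floor and count > factor * max(base, 1):
            out.append(port)
    return sorted(out)

baseline = {80: 50000, 443: 30000, 22: 200}          # a quiet week's profile
observed = {80: 52000, 443: 29000, 22: 9000, 1433: 4000}
print(anomalies(baseline, observed))  # ports 22 and 1433 stand out
```

Note what the heuristic buys: the busy-but-normal web ports generate nothing, while a burst on SSH and traffic to a port never seen before both surface, which is exactly the "outside the baseline" signal described above.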

But the market has already turned negative toward anything connected with the word IDS, because people are tired of the care and feeding of traditional, signature-based IDS implementations and see them as having a negative price/return ratio.  Everybody agrees that it takes an inordinate amount of time to get meaningful IDS data from those systems, hence the investment in IDS software does not pay off.  Investment in event correlation with the hosts might help to distill it into a more manageable volume, but this is a very expensive path. 

Cost of monitoring false positives

The cost of monitoring false positives can be substantial, both in dollar terms and in its demoralizing effect (the "crying wolf" problem), as aptly summarized by an anonymous security specialist at an electric utility:

Our IDS was a mess, alerting us on absolutely everything. In fact, I can’t even remember a single legitimate alert. We never had the time or manpower to monitor it all.

Network and security analysts now agree that false alerts are a fundamental problem that cannot be avoided with generic IDS sensors. It can take up to 10 hours to investigate one false positive. In a diverse set of enterprise applications, a stream of false positives from IDS sensors essentially represents a denial-of-service attack on the security resources of the corporation.  The IDS architecture proved appropriate only for detecting a very narrow band of attacks on selected hosts and is too low-level to detect application-level exploits.

All in all, IDS-based monitoring has proved very costly to companies.  According to Gartner, a big company's annual IDS costs are around a hundred thousand dollars.  That does not include the cost of on-site personnel involved in analyzing (or, more correctly, distracted by) all those alerts.

IPS is not a successor to IDS

IPS should be considered not so much a technology innovation as IDS vendors' attempt to escape a financial hole, and responsibility for pushing semi-useless systems, by selling a newer and better mousetrap. In essence it represents an attempt to reduce reliance on signatures and avoid the famous flood of false positives that made IDS a dirty word in security. IPS sits inline at the network perimeter, scanning incoming traffic for signs of malicious code. Unlike IDS, it can drop suspect traffic automatically or alert network security staff, who will handle it manually. But IPS also falls short of its promises.  I am very skeptical about IPS vendors' claims that IPS will ultimately replace IDS altogether. It is the same fundamentally flawed approach, repackaged to close the most gaping holes and sold to unsuspecting or outright naive customers.  Some projections are too optimistic:

Infonetics projects a jump from $132.3 million to $425.5 million in sales for inline IDS between 2004 and 2007. Gartner, too, sees IPS sales surpassing IDS sales by the end of 2005.

Generally, it is difficult to keep instigating fear on a sustainable basis. That means that the resources for signature updates and testing can shrink and their quality deteriorate. And that will create additional problems for enterprises that are slow to move to policy checking.

In a dream network intrusion-prevention environment, you would have some device monitoring all of your traffic and detecting or even stopping the bad guys. But that was an illusion, and it is now clear that these systems will never be capable of doing this.  IDS might get slightly better with time, but the idea is so compromised that enterprises would be better off just moving on. IPS is just IDS on steroids: in addition to the old flood of false positives, you now have a real chance of blocking the wrong traffic and thus damaging your business.  It also makes it possible to create attacks that use the IPS as a zombie, feeding it a set of carefully forged packets in order to cut communication with important hosts/networks. 

The Technology Hype Cycle and Dawn of Network Monitoring Mania

Security technologies remain a priority for many enterprises. Evaluating the hype and the reality is important for prudent investments and critical for properly protecting the enterprise at a reasonable cost. Like other Internet technologies, security technologies typically develop in five stages:

  1. Slow growth
  2. Exponential growth
  3. Super hype or bubble period
  4. Bubble bust
  5. Realistic assessment and usage of what was sound in it, if any

Gartner calls this the "Hype Cycle" and defines it slightly differently:

A Hype Cycle is a graphic representation of the maturity, adoption and business application of specific technologies.

Since 1995, Gartner has used Hype Cycles to characterize the over-enthusiasm or "hype" and subsequent disappointment that typically happens with the introduction of new technologies (see Understanding Gartner's Hype Cycles for an introduction to the Hype Cycle concepts). Hype Cycles also show how and when technologies move beyond the hype, offer practical benefits and become widely accepted.

  1. "Technology Trigger" The first phase of a Hype Cycle is the "technology trigger" or breakthrough, product launch or other event that generates significant press and interest.
  2. "Peak of Inflated Expectations" In the next phase, a frenzy of publicity typically generates over-enthusiasm and unrealistic expectations. There may be some successful applications of a technology, but there are typically more failures.
  3. "Trough of Disillusionment" Technologies enter the "trough of disillusionment" because they fail to meet expectations and quickly become unfashionable. Consequently, the press usually abandons the topic and the technology.
  4. "Slope of Enlightenment" Although the press may have stopped covering the technology, some businesses continue through the "slope of enlightenment" and experiment to understand the benefits and practical application of the technology.
  5. "Plateau of Productivity" A technology reaches the "plateau of productivity" as the benefits of it become widely demonstrated and accepted. The technology becomes increasingly stable and evolves in second and third generations. The final height of the plateau varies according to whether the technology is broadly applicable or benefits only a niche market.

It is reasonable to assume that IDS technology has entered stage 4 now. At the height of the hype, IDS developers were heroes leading us to a bright, secure future. Not anymore. After the Gartner report, multiple critical papers littered popular network and computer magazines. The real problem is mainly architectural: most packet-sniffing solutions, whether IDS or IPS, do not have access to the full context required to make a sound judgment about the level of threat. Most of this context is host-based.  That is why a network IDS generally has no idea whether an attack is relevant, and the volume of events it produces tends to hide the dangerous attacks in low-risk and false-positive noise.

Frustrated IT departments are trying to work around network IDS shortcomings by correlating IDS alerts with other security and vulnerability information. But that is easier said than done and requires a fairly open IDS solution, free of a proprietary signature database or limitations on log access on the sensor (which effectively excludes managed IDS solutions).  Currently this is better done by writing your own log analysis middleware in scripting languages and gradually integrating it into an enterprise monitoring framework like Tivoli.
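The core of such home-grown correlation middleware is small: promote an IDS alert only when the targeted host's own logs show related activity close in time. The record formats, the 300-second window, and the sample data below are illustrative assumptions.

```python
# Sketch: confirm network IDS alerts against host-side log events.

def correlate(alerts, host_events, window=300):
    """Pair each alert (time, ip, signature) with a host event
    (time, ip, message) for the same IP within `window` seconds."""
    confirmed = []
    for t_alert, ip, sig in alerts:
        for t_host, host_ip, msg in host_events:
            if host_ip == ip and abs(t_host - t_alert) <= window:
                confirmed.append((ip, sig, msg))
                break  # one corroborating event is enough
    return confirmed

alerts = [(1000, "10.1.0.3", "SSH brute force"),
          (2000, "10.1.0.9", "Port scan")]
host_events = [(1100, "10.1.0.3", "sshd: Failed password for root")]
print(correlate(alerts, host_events))  # only the corroborated alert survives
```

The uncorroborated port-scan alert is simply dropped, which is the point: the host-side context the network sensor lacks is what separates a relevant attack from noise.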

Where IDS has completely failed is in distinguishing between the important and the trivial: it has cried wolf so many times that most security departments have simply stopped reacting to alerts, relegating them to the level of background noise. Even if an IDS detects a real attack, the whole idea is so compromised that nobody will care.  So the first change you'll see in intrusion management this year is the addition of network recording to IDS sensors, which at least can help to recreate the events after the fact. 

The average IDS system now has several thousand signatures. Most of them are simple "grep-style" string-matching rules over packet headers and payloads. Such a primitive approach has two major problems:

That means that few companies can allocate staff to manage network IDS intelligently.  In most cases they are just "circulating air," providing an illusion of security instead of real security.
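A toy illustration of why "grep-style" string signatures misfire: matching a string in any payload, with no protocol or host context, triggers on benign traffic just as readily as on an attack. The two signatures below are illustrative, not taken from any real ruleset.

```python
# Sketch: context-free string signatures fire on anything containing
# the pattern, attack or not.

SIGNATURES = {
    "cmd.exe": "IIS traversal attempt",
    "/etc/passwd": "passwd file grab",
}

def match(payload: str):
    """Return the names of every signature found in the payload."""
    return [name for pat, name in SIGNATURES.items() if pat in payload]

# A real attack and a harmless email discussing it both trigger:
print(match("GET /scripts/..%c0%af../winnt/system32/cmd.exe"))
print(match("Subject: patch your server, attackers request cmd.exe"))
```

Both lines raise the same alert, which is exactly the false-positive mechanism discussed throughout this section: the sensor sees bytes, not intent.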

Also, in a desperate attempt to preserve their shrinking profits, IDS vendors pollute the signature database with completely unrelated stuff like virus and worm detection (which is unrelated to network IDS and represents a higher-level protocol threat). IDS companies jumped into worm/virus detection simply because it is almost the only useful thing they can show to customers. It also allows them to make constant updates, speculate on fear, and simplifies the justification of their annual maintenance fees.


[Kim&Spafford1994] [PDF] Gene H. Kim, Eugene H. Spafford. Experiences with Tripwire: Using Integrity Checkers for Intrusion ... Purdue Technical Report CSD-TR-94-012.

[Saddi1993] Allan Saddi. Yet Another File Integrity Checker. URL:

[Gartner2003] Richard Stiennon. "Hype Cycle for Information Security, 2003." Gartner Research Group Report, 2003.

[Hulme2003] V. Hulme. Gartner: Intrusion Detection On The Way Out. InformationWeek, June 13, 2003.

[Bradley2004] Tony Bradley. The Line Between IDS & IPS Solutions Continues To Be Blurred. Processor Editorial, November 5, 2004, Vol. 26, Issue 45, page 11 in print issue. URL:

[Radcliff2004] Deborah Radcliff. The evolution of IDS. Network World, 11/08/04, pages 44-46. Last accessed November 15, 2004.

[Kendall2003] Sandy Kendall. Is Intrusion Detection a Dead-End Technology? CSO Talk Back. URL:

[Bekker2003] Scott Bekker. Gartner: Intrusion Detection Systems a Bust. ENT News, June 11, 2003. URL:

[Hollows2003] Phil Hollows. IDS is Dead -- Long Live IDS. June 27, 2003. URL:

[Franklin&Wiens2003] Curtis Franklin Jr., Jordan Wiens. "Are your Web apps secure?" Infoworld, February 06, 2004. URL:

[Ferrel2003] Keith Ferrell. Intrusion Detection: Bright Future or Dead End? TechWeb, June 18, 2003. URL:

[Schulze2003] Jan Schulze. No Unauthorized Access. SAP INFO, 18.08.2003. URL:




Created May 1, 2004; Last modified: March 12, 2019