Version 2.1 with updates of August 24, 2015
The security department in a large corporation is often staffed with people who would be useless in any department, and who become really harmful in this role, which is new to them. One typical case is the enthusiastic "know-nothing" who, after the move to the security department, almost instantly turns into a script kiddie, ready to test on the internal corporate infrastructure (aka production servers) the latest and greatest exploits downloaded from the Internet, seeing such activity as a sacred duty inherent in the newly minted role of "security specialist".
So let's discuss how to protect ourselves from those "institutionalized" hackers, who are often more clueless, and no less ambitious, than the typical teenage script kiddie (Wikipedia):
In a Carnegie Mellon report prepared for the U.S. Department of Defense in 2005, script kiddies are defined as
"The more immature but unfortunately often just as dangerous exploiter of security lapses on the Internet. The typical script kiddy uses existing and frequently well known and easy-to-find techniques and programs or scripts to search for and exploit weaknesses in other computers on the Internet ó often randomly and with little regard or perhaps even understanding of the potentially harmful consequences.
In this case you need to think about some defenses. Enabling a firewall on an internal server is probably overkill and might get you into hot water faster than you can realize the consequences of such a conversion. But enabling TCP wrappers can help cut off oxygen to overzealous "know-nothings", who are typically able to operate only from a dozen or so IP addresses (those addresses are visible in logs; your friends in the networking department can also help ;-).
The simplest way is to include in the deny file those protocols that caused problems in previous scan attempts. For example, the ftpd daemon sometimes crashes when subjected to the sequence of packets representing an exploit. The HP-UX version of wu-ftpd is probably one of the weakest in this respect. Including in /etc/hosts.deny the selected hosts from which exploits are run can help. One thing that is necessary is to ensure that the particular daemon is compiled with TCP wrappers support (which is the case for vsftpd and wu-ftpd), or runs via xinetd on Linux or Solaris and via TCP wrappers on other Unixes.
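As a sketch, the TCP wrappers configuration for this might look as follows (the daemon names and addresses are placeholders, not from any real deployment; use the IP addresses you actually see in your logs):

```
# /etc/hosts.allow -- checked first; keep your own workstations working
sshd:   192.168.1.0/255.255.255.0
vsftpd: LOCAL

# /etc/hosts.deny -- block the subnet the "scanners" operate from
vsftpd: 10.10.5.0/255.255.255.0
sshd:   10.10.5.17, 10.10.5.18
```

Whether a given daemon honors these files can be checked with `ldd /usr/sbin/sshd | grep libwrap` (a dynamically linked libwrap means TCP wrappers support is compiled in).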
Including SSH and telnet might also help, as it blocks the possibility of exploiting some unpatched servers. The latest fashion in large corporations is to make a big show of any remotely exploitable bug, with spreadsheets going to senior management. This creates a substantial and completely idiotic load on rank-and-file system administrators, as the typical corporate architecture has holes an elephant could walk through, and any particular exploit changes nothing in this picture. In this case, restricting SSH access to a handful of useful servers or local subnets can save you some time, not only with the currently "hot" exploit, but with future ones too.
In this case, even after inserting a new entry into /etc/passwd or something similar, it is impossible to log in to the server from outside a small subset of internet addresses. This is good security practice in any case.
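For SSH specifically, the same restriction can also be expressed in sshd_config itself; a minimal sketch (the subnet below is a placeholder, not a recommendation):

```
# /etc/ssh/sshd_config (fragment)
# Refuse password logins by default; allow them only from the admin subnet.
PasswordAuthentication no
Match Address 192.168.10.0/24
    PasswordAuthentication yes
```

After any change, validate the file with `sshd -t` and keep an existing session open while testing, so that a typo does not lock you out of the server.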
See entries collected in the "Old News" section for additional hints.
Another useful measure is creating a baseline of /etc and several other vital directories (/root and the cron files). A simple script comparing the current status with the baseline is useful not only against those jerks, but can also provide important information about the actions of your co-workers, who sometimes make changes and forget to notify the other people who administer the server. And that means you.
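A minimal sketch of such a comparison script, built on sha256sum checksums (the paths, directory list, and cron schedule in the comments are assumptions, not a standard):

```shell
#!/bin/sh
# Baseline integrity check: record checksums once, then diff against them.
# BASELINE is the file holding the recorded checksums (set it before use).
# Note: this simple version does not handle filenames containing whitespace.

baseline_init() {   # usage: baseline_init DIR... -- record the current state
    find "$@" -type f 2>/dev/null | sort | xargs -r sha256sum > "$BASELINE"
}

baseline_check() {  # usage: baseline_check DIR... -- nonzero exit on any drift
    find "$@" -type f 2>/dev/null | sort | xargs -r sha256sum | diff "$BASELINE" -
}

# Typical use from cron (paths and address are placeholders):
#   BASELINE=/var/local/etc.sha256
#   baseline_check /etc /root /var/spool/cron > /tmp/drift.txt ||
#       mail -s "baseline drift on $(hostname)" admin@example.com < /tmp/drift.txt
```

The diff output shows exactly which files were added, removed, or modified, which is usually enough to decide whether you are looking at a colleague's undocumented change or something worse.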
And last but not least, new, unique entries in the output of the last command should be mailed to you within an hour. The same goes for new, unique entries in /var/log/messages. Servers in a corporate environment mostly perform dull, repetitive tasks, and one week's worth of logs serves as an excellent baseline of what to expect from the server. Any lines that are substantially different should generate a report, which in the simplest case can be mailed, or collected via scp on a special server with a web server to view them. Or they can be converted into a pseudo-mailbox, with the ability to view those reports via a simple webmail interface.
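One way to sketch the "new, unique entries" part is a small filter that keeps a state file of everything already seen and prints only the lines that are new; the state-file paths and the mail commands in the comments are assumptions:

```shell
#!/bin/sh
# Print only the lines from stdin that were never seen before;
# remember everything seen so far in the state file given as $1.

report_new_lines() {
    state=$1
    now=$(mktemp)
    touch "$state"
    sort -u > "$now"                     # current snapshot, sorted for comm(1)
    comm -23 "$now" "$state"             # lines present now but absent from state
    sort -u "$now" "$state" -o "$state"  # fold the snapshot into the state
    rm -f "$now"
}

# Hourly from cron (state paths and address are placeholders):
#   last -F | report_new_lines /var/local/last.seen | mail -s "new logins" admin@example.com
#   grep sshd /var/log/messages | report_new_lines /var/local/msg.seen | mail -s "new sshd lines" admin@example.com
```

Because the filter only ever emits lines it has not recorded before, a server doing its usual dull, repetitive work generates no mail at all after the first week or so.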
Now each Unix/Linux server has its own built-in firewall, which with careful configuration can protect the server from many "unpatched" remote exploits. But the security department, like any bureaucratic organization, has its own dynamics, and after some spectacular remote exploit, or a revelation about the activities of three-letter agencies, some wise head in that department comes up with the initiative of installing additional firewalls.
To object to such an initiative means taking responsibility for a possible breach, so usually the corporate brass approves it. At this point the real fun starts, as typically the originator of the idea has no clue about the vulnerable aspects of the corporate architecture, is guided only by the principle "the more the better", and improvises rules in ad hoc fashion. The key problem with such an approach is that as soon as the complexity of the rulesets rises above the IQ of their creators, they become a source of denial-of-service attacks on the corporate infrastructure.
Moreover, the people who put forward such initiatives typically never ask themselves whether there are alternative ways to log in to the server. And on a modern server there is such a way (actually an extremely vulnerable one): using ILO/DRAC, or another built-in specialized computer designed for this purpose. Without putting those connections on a specially protected network segment, and without regular firmware updates (I suspect exploits for such firmware are a priority for three-letter agencies), this path essentially represents an open backdoor into the server OS that bypasses all those "multiplexer"-controlled logins.
And this tendency toward arbitrary, caprice-based compartmentalization using badly configured firewalls has other aspects that we will discuss in a separate paper. One of them is that it typically raises the complexity of the infrastructure far above the IQ of the staff, and the set of rules soon becomes unmanageable. Nobody fully understands it, and as such it represents a weak point in the security infrastructure, not a strong one. Unless periodically pruned, such a ruleset often makes even trivial activities, such as downloading patches, extremely cumbersome and subject to periodic "security-inspired" denial-of-service attacks.
Actually, all this additional firewall infrastructure soon magically turns into a giant denial-of-service attack on users and system administrators. An attack on a scale that hackers can only envy, with hidden losses in productivity far above the cost of typical script-kiddie games ;-).
Another typical "fashion item" is install special server (we will call in login multiplexor) to block direct user logins and files transfer to the servers in some datacenter or lab. Fro now on everybody should use "multiplexer" for initial login. Only from it you can connect and retrieve data from the any individual server with "multiplexed" territory.
The problem with this approach is that if two-factor authentication is used, there is already a central server (the server that controls the tokens) that collects all the data about user logins. Why do we need another one? If security folks do not have the IQ to use the data already available, what exactly does a login multiplexer change in this situation?
And if the multiplexer does not use two-factor authentication with some type of token, it is a perfect point for harvesting all corporate passwords. So it is in itself just a huge vulnerability, which should be avoided at all costs.
That suggests that while this idea has its merits, correct implementation is not trivial and, as with everything in corporate security, requires a deep understanding of the architecture, understanding that is by definition lacking in security departments.
This move is especially tricky (and critically affects the productivity of researchers) in labs. And the first question here is: "What are we protecting?" One cynical observation is that, due to waves of downsizing, staff may have so little loyalty that if somebody wants the particular information we are trying to protect, he or she can buy it really cheaply, without the trouble of breaking into the lab's computers. Moreover, breaking into the cloud-based email accounts of key researchers (emails which, BTW, are replicated on their cell phones), in the best NSA style, is also a cheaper approach.
A naively put-in-place "login multiplexer" severely restricts the "free movement of data" which is important for research as a discipline and, as such, has a huge negative effect on researchers.
Also, while the multiplexer provides all the login data for connections made to it, it does not automatically provide any means to analyze that data; this part must either be written in-house or acquired elsewhere. If this is not done, and the activity of users is not analyzed to minimize negative effects, the multiplexer soon becomes a pretty useless additional roadblock for users. Just another ritual piece of infrastructure. And nobody has the courage to shout "The king is naked".
Linux is a complex OS with a theoretically unlimited number of exploits, both in the kernel and in the major filesystems. So patching a single exploit logically does not improve security much, as we never know how many "zero-day" exploits exist "in the wild" and are in the hands of hackers and three-letter agencies.
At the same time, paranoia is artificially whipped up by all those security companies, which understand that it represents a chance to sell their (often useless or harmful) wares to gullible folk in large corporations (aka milk cows).
Typically, the conditions for applicability and the exact nature of the exploit are not revealed, so it is just proclaimed to be another "Big Bad Exploit", and everybody rushes to patch it in a pretty crazy frenzy, as if their life depended on it. This forgets the proverb that mushrooms usually grow in packs, and that exploits which are "really exploitable" (see, for example, Top 5 Worst Software Exploits of 2014 -- [Added May 2, 2015, NNB]) often point to architectural flaws in the network architecture of the corporation (the OpenSSL and OpenSSH vulnerabilities and, especially, the Cryptolocker Trojan are quite interesting examples here). It goes without saying that architectural flaws can't be fixed by patching.
So patching the latest remotely exploitable vulnerability becomes much like a voodoo ritual, by which the shaman tries to sway evil demons to stay away from the corporation. Questions about what aspects of the current datacenter architecture make this vulnerability more (or less) dangerous, and how to make the architecture more "exploit resistant", are never asked.
All efforts are spent on patching the one current "fashionable" exploit that got great (and typically incompetent) MSM coverage, often without even investigating whether it can be used in the particular environment or not. All activity is driven by a spreadsheet with the list of "vulnerable" servers. Achieving a zero count is all that matters. After this noble goal is achieved, all activity stops until the next "fashionable" exploit.
"We are going to spend a large amount of money to produce a more complicated artifact, and it is not easy to quantify what we are buying for all the money and effort"
Bob Blakey, principal analyst with
Bullshitting is not exactly lying, and bullshit remains bullshit whether it's true or false. The difference lies in the bullshitter's complete disregard for whether what he's saying corresponds to facts in the physical world: he does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.
Harry G. Frankfurt
Marketing-based security has changed expectations about the products pushed by security companies in a very bad way. Like the MSM in foreign policy coverage, vendors manufacture a particular "artificial reality", painting an illusion, and then show how their products make that imaginary reality more secure.
We should be aware that "snake oil" sellers recently moved into security. And considerable financial success. Of course, IT industry as a whole using "dirty" marketing tricks to sell their wares, but here situation is really outstanding when a company (ISS) which produces nothing but crap at the end was bought 1.3 billion dollars by IBM in 2006. I think IBM brass would be better off spending those money in best former TYCO CEO Dennis Kozlowski style with wild orgies somewhere in Cypress or other Greek island :-). At least this way money would be really wasted in style ;-).
Let's talk about one area, the one that I used to know really well: network intrusion detection systems (NIDS).
For some reason unknown to me, the whole industry became pretty rotten, selling mostly hype and FUD. Still, I need to admit that FUD sells well. The total size of the world market for network IDS is probably several hundred million dollars, and this market niche is occupied by a lot of snake-oil salesmen:
Synergy Research Group reported that worldwide network security market spending continued to run over $1 billion in the fourth quarter of 2005, across all segments -- hybrid solutions (firewall/VPN, appliances, and hybrid software solutions), Intrusion Detection/Prevention Systems (IDS/IPS), and SSL VPN.
IDS/IPS sales increased seven percent for the quarter and were up 30 percent over 2004. Read article here.
Most money spent on IDS could be spent with a much greater return on investment on ESM software, as well as on improving the rules in existing firewalls, increasing the quality of log analysis, and host-based integrity checking.
That means that the network IDS area is a natural one in which open source software is more competitive than any commercial software. Simplifying, we can even state that the acquisition of a commercial IDS by an organization can be a sign of weak or incompetent management (although reality is more complex, and sometimes such an acquisition is just a reaction to pressures from outside IT, such as compliance-related pressures; moreover, some implementations were done under a "loss leader" mentality, under the motto "let those jerks who want it have this sucker").
Actually, an organization that spends money on NIDS without first creating a solid foundation by deploying ESM commits what is called "innocent fraud" ;-). It does not matter what traffic you detect if you do not understand what exactly is happening on your servers and workstations, and view your traffic as an unstructured stream, a pond out of which the IDS magically fishes alerts. In reality, most of the time the IDS is crying wolf, and the few useful alerts are buried in the noise. Also, the "real time" that is the selling point of IDS does not really matter: most organizations have no ability to react promptly to alerts, even if we assume there are (very rare) cases when a NIDS picks up useful signal instead of noise. A good introduction to NIDS can be found in NIST Draft Special Publication 800-94, Guide to Intrusion Detection and Prevention (IDP) Systems (Adobe PDF (2,333 KB), Zipped PDF (1,844 KB)).
A typical network IDS (NIDS) uses network card(s) in promiscuous mode, sniffing all packets on each network segment the sensor is connected to. Installations usually consist of several sensors and a central console to aggregate and analyze data. NIDS can be classified into several types:
On the other side, there are possibilities for adding NIDS functionality to regular firewalls, and this is a promising path for developing NIDS. If you think about it, a NIDS can be considered a passive device that listens to traffic and analyzes only those packets that passed the firewall's filtering rules. That means that 90% of the processing required for NIDS has already been done at the firewall level. It is in this narrow sense that Gartner's 2003 statement, that NIDS are irrelevant and we should switch to IPS, might be true.
At the same time, like local firewalls, they represent a danger to the networking stack of the computer they supposedly protect.
The second important classification of NIDS is the placement:
Organizations rarely have the resources to investigate every "security" event. Instead, they must attempt to identify and address the top issues using the tools they've been given. This is practically impossible if an IDS is listening to a large traffic stream with many different types of servers and protocols. In this case security personnel, if any, are forced to practice triage: tackle the highest-impact problems first and move on from there. Eventually this is replaced with an even simpler approach: ignore them all ;-). Of course, much depends on how well the signatures are tuned to the particular network infrastructure. Therefore, another classification can be based on the type of signatures used:
Even in the case when you limit traffic to a specific segment of the internal network (for example, local sites in a national or international corporation, which is probably the best NIDS deployment strategy), the effectiveness of network IDS is low, though definitely above zero. It can be marginally useful in this restricted environment. Moreover, it might have value for network troubleshooting, especially if also configured to act as a blackbox recorder for traffic; the latter can easily be done using tcpdump as the first stage, with Perl scripts postprocessing the results, say, every quarter of an hour. Please note that all those talks about real-time detection are 99% pure security FUD. Nothing can be done in most large organizations in less than an hour ;-)
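A sketch of such a blackbox recorder (the interface name, paths, and 15-minute period are assumptions): tcpdump's -G option rotates the capture file on a timer, and the postprocessing stage can be as simple as an awk filter counting the top source hosts in the decoded output.

```shell
#!/bin/sh
# Stage 1: rotate a new capture file every 900 seconds (run as root):
#   tcpdump -i eth0 -n -s 0 -G 900 -w '/var/log/pcap/trace-%Y%m%d-%H%M.pcap'

# Stage 2: summarize a finished capture; expects the text output of
# "tcpdump -n -r file.pcap" on stdin and prints the busiest source hosts.
top_talkers() {
    awk '$2 == "IP" { split($3, a, "."); cnt[a[1]"."a[2]"."a[3]"."a[4]]++ }
         END { for (s in cnt) print cnt[s], s }' | sort -rn | head
}
```

Run quarter-hourly from cron against the most recently closed capture file, the summary gives you both a traffic baseline and a cheap scan detector: a host that suddenly tops the list is worth a look.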
In order to preserve their business (and revenue stream), IDS vendors started to hype intrusion prevention systems (IPS) as the next generation of IDS. But IPS is a very questionable idea that mixes the role of a firewall with the role of an IDS sensor. It's not surprising that it backfired many times on early (and/or too enthusiastic) adopters (beta addicts).
It is very symptomatic, and proves the point about "innocent fraud", that intrusion prevention is usually advertised on the basis of its ability to detect mail viruses, network worms, and spyware. For any specialist it is evident that mail viruses should actually be detected on the mail gateway, and it is benign idiotism to try to detect them at the packet-filter level. Still, idiotism might be the key to commercial success, and most IDS vendors pay a lot of attention to the rules or signatures that provide positive PR, which automatically drives them into the virus/worm detection wonderland. There are two very important points here:
Maybe things will eventually improve, but right now I do not see how a commercial IDS can justify the return on investment, and NIDS looks like a perfect area for open source solutions. In this sense, please consider this page a pretty naive (missing the organizational dynamics and power-grab issues in large organizations) attempt to counter "innocent fraud", to borrow the catchphrase used by the famous economist John Kenneth Galbraith in the title of his last book, "The Economics of Innocent Fraud".
An important criterion for NIDS is also the level of programmability:
It's rather counterproductive to place NIDS in segments with heavy network traffic. Mirroring a port on the switch works in simple cases, but in complex cases with multiple virtual LANs it will not work, as usually only one port can be mirrored. Mirroring also increases the load on the switch. Taps are an additional component and are somewhat risky on high-traffic segments unless they are designed to pass all traffic through in case of failure. Logically, network IDS belongs in the firewall, and some commercial firewalls have rudimentary IDS functionality. Also, a personal firewall with a NIDS component might be even more attractive for most consumers, as it provides some insight into what is happening. Such products can also be useful for troubleshooting. Their major market is small business, and probably people connected by DSL or cable who fear that their home computers may be invaded by crackers.
The problem is that the useful signal about probes and actual intrusions is usually buried under mountains of data, and a wrong signal may drive you in the wrong direction. A typical way to cope with the information overload from a network IDS is to rely more on aggregation of data (for example, detect scans, not single probes) and on "anomaly detection" (imitate a firewall detector, or use statistical criteria for traffic aggregation).
Misuse detection is more costly and more problematic than the anomaly-detection approach, with the notable exception of honeypots. It might be beneficial to use hybrid tools that combine honeypots and NIDS. Just as a sophisticated home security system might comprise both external cameras and sensors and internal monitoring equipment, to watch for suspicious activity both outside and within the house, so should an intrusion detection system.
You may not know it, but a surprisingly large number of IDS vendors have license provisions that can prohibit you from communicating information about the quality and usability of their security software. Some vendors have used these provisions to file or threaten lawsuits to silence users who criticized software quality in places such as web sites, Usenet newsgroups, user-group bulletin boards, and the technical-support boards maintained by the software vendors themselves. Here open source has a definite advantage: it may not be the best, but at least it is open, has reasonable quality (for example, Snort is very competitive with the most popular commercial solutions), or at least it is the cheapest alternative among several equally bad choices ;-).
IDS are often (and wrongly) considered to be the key component of enterprise-level security. Often that is "achieved" by buying fashionable but mainly useless outsourced IDS services. Generally, this idea has a questionable value proposition because of the level of false positives, and because problems with the internal infrastructure (often stupid misconfigurations at the web-server level, inability to apply patches in a timely manner, etc.) far outweigh any IDS-provided capabilities. If you are buying an IDS, a good starting point is to ask the vendor to show what attacks they recently detected, and to negotiate a one- to six-month trial before you pay the money ("try before you buy").
The problem of false positives in IDS is a very important one that is rarely discussed on a sound technological level. I don't think there is a 'best' IDS. But here are some considerations:
You have probably gotten the idea at this point: the IQ of the network/security administrators, and the ability to adapt the solution to the organization, is of primary importance in the IDS area, more important than in, say, virus protection (where precooked signature sets rule, despite being a huge overkill).
All in all, the architecture and the level of customization of the rulebase are more important than the capabilities of the NIDS.
From: Anonymous
Reverse tunneling is very, very useful, but only in quite specific cases. Those cases are usually the result of extreme malice and/or incompetence of network staff.
The only difficult part here is to determine what is the most common attribute of most IT/networking departments: malice or incompetence. My last IT people certainly had both.
May 27, 2018 | linux.slashdot.org [Recommended]
jfdavis668 (1414919), Sunday May 27, 2018 @11:09AM (#56682996), Re: So (Score: 5, Interesting):
Traceroute is disabled on every network I work with to prevent intruders from determining the network structure. Real pain in the neck, but one of those things we face to secure systems.

Hylandr (813770), Sunday May 27, 2018 @05:57PM (#56685274), Re: (Score: 2, Insightful):
What is the point? If an intruder is already there, couldn't they just upload their own binary?

gweihir (88907), Sunday May 27, 2018 @12:19PM (#56683422), Re: So (Score: 5, Interesting):
They can easily, and oftentimes will compile their own tools, versions of Apache, etc. At best it slows down incident response and resolution, while doing nothing to prevent discovery of their networks. If you only use VLANs to segregate your architecture, you're boned.

bferrell (253291), Sunday May 27, 2018 @12:20PM (#56683430), Re: So (Score: 5, Interesting):
Also really stupid. A competent attacker (and only those manage to get into your network, right?) is not even slowed down by things like this.

fluffernutter (1411889), Re: So (Score: 4, Interesting):
Except it DOESN'T secure anything; it simply renders things a little more obscure... Since when is obscurity security?

DamnOregonian (963763), Sunday May 27, 2018 @04:37PM (#56684878), Re: (Score: 3):
Doing something to make things more difficult for a hacker is better than doing nothing to make things more difficult for a hacker. Unless you're lazy, as many of these things should be done as possible.

mSparks43 (757109), Re: So (Score: 5, Insightful):
Things like this don't slow down "hackers" with even a modicum of network knowledge inside of a functioning network. What they do slow down is your ability to troubleshoot network problems. Breaking into a network is a slow process. Slow and precise. Trying to fix problems is a fast, reactionary process. Who do you really think you're hurting? Yet another example of how ignorant opinions can become common sense.

ruir (2709173), Re: So (Score: 2):
Pretty much my reaction. Like WTF? OTOH, Red Hat flavors all still on glibc2 are starting to become a regular p.i.t.a., so the chances of this actually becoming a thing to be concerned about seem very low. Kinda like GDPR, same kind of groupthink that anyone actually cares or concerns themselves with policy these days.

DamnOregonian (963763), Sunday May 27, 2018 @04:32PM (#56684858), Re: (Score: 3):
Disabling all ICMP is not feasible, as you will be disabling MTU negotiation and destination-unreachable messages. You are essentially breaking the TCP/IP protocol. And if you want the protocol working OK, then people can do traceroute via HTTP messages or ICMP echo and reply. Or they can do a reverse traceroute, at least up to the border edge of your firewall, via an external site.

DamnOregonian (963763), Re: So (Score: 4, Insightful):
You have no fucking idea what you're talking about. I run a multi-regional network with over 130 peers. Nobody "disables ICMP". IP breaks without it. Some folks, generally the dimmer of us, will disable echo responses or TTL expiration notices, thinking it is somehow secure (and they are very fucking wrong), but nobody blocks all ICMP, except for very, very dim-witted humans, and only on endpoint nodes.

DamnOregonian (963763), Re: (Score: 3):
That's hilarious... I am *the guy* who runs the network. I am our senior network engineer. Every line in every router -- mine. You have no idea what you're talking about, at any level. "Disabled ICMP" -- that statement alone requires such ignorance to make that I'm not sure why I'm even replying to an ignorant ass.

DamnOregonian (963763), Re: (Score: 3):
Nonsense. I conceded that morons may actually go through the work to totally break their PMTUD and IP error signaling channels and make their nodes "invisible". I understand "networking" at a level I'm pretty sure you only have a foggy understanding of. I write applications that require layer-2 packet building all the way up to layer-4. In short, he's a moron. I have reason to suspect you might be, too.

nyet (19118), Re: (Score: 3):
A CDS is MAC. Turning off ICMP toward people who aren't allowed to access your node/network is understandable. They can't get anything else, so why bother supporting the IP control channel? CDS does *not* say turn off ICMP globally. I deal with CDS, SSAE16 SOC 2, and PCI compliance daily. If your CDS solution only operates with a layer-4 ACL, it's a pretty simple model, or You're Doing It Wrong (TM).

kevmeister (979231), Sunday May 27, 2018 @05:47PM (#56685234), Re: So (Score: 4, Insightful):
> I'm not a network person
IOW, nothing you say about networking should be taken seriously.

Hylandr (813770), Re: (Score: 3):
No, TCP/IP is not working fine. It's broken and is costing you performance and $$$. But it is not evident, because TCP/IP is very good at dealing with broken networks, like yours. The problem is that doing this requires things like packet fragmentation, which greatly increases router CPU load and reduces the maximum PPS of your network, as well as resulting in dropped packets requiring re-transmission; it may also result in window collapse followed by slow-start. Though rapid recovery mitigates much of this, it's still not free. It's another example of security by stupidity, which seldom provides security but always adds cost.

Zaelath (2588189), Sunday May 27, 2018 @07:51PM (#56685758), Re: So (Score: 4, Informative):
As a server engineer, I am experiencing this with our network team right now. Do you have some reading with which I might be able to further educate myself? I would like to be able to prove to the directors why disabling ICMP on the network may be the cause of our issues.

Bing Tsher E (943915), Sunday May 27, 2018 @01:22PM (#56683792), Journal:
A brief read suggests this is a good resource: https://john.albin.net/essenti... [albin.net]

Anonymous Coward, Re: Denying ICMP echo @ server/workstation level (Score: 5, Insightful):
Linux has one of the few IP stacks that isn't derived from the BSD stack, which in the industry is considered the reference design. Instead, for Linux, a new stack with its own bugs and peculiarities was cobbled up. Reference designs are a good thing to promote interoperability. As far as TCP/IP is concerned, Linux is the biggest and ugliest stepchild. A theme that fits well into this whole discussion topic, actually.
Google matched content
The last but not least: Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. ~ Archibald Putt, Ph.D.
Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author's free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright of the original materials belongs to their respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.
This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...
|You can use PayPal to make a contribution, supporting development of this site and speeding up access. In case softpanorama.org is down, you can use the mirror at softpanorama.info|
The statements, views, and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP, or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.
Last modified: January, 11, 2019