
Skeptical View on Unix Security

by Dr. Nikolai Bezroukov

Version 1.1

Never underestimate the power of human stupidity


Computer Security is an anthropomorphic deity of a new messianic high-demand cult. It is a synonym of goodness, happiness and light; a mystic force which provides a beautiful eternal harmony of all things computable. The main recruitment base of the cult is system administrators.

A secure server is a cosmic harbinger of charismatic power; an exorcistic poltergeist that preserves mental health, cures headache, allergy, alcoholism, depression, and deters aging. It is a nirvana for both young and old system administrators; an enviable paragon of all imaginable idealistic virtues; an apocalyptic voice that answers the question: "What is truth?".

Finally, a secure computer network is the bright hope of all mankind, a glimpse of things to come with the help of the Homeland Security Agency, and an inscrutable enigma that may well decide whether this nation, or any other nation, conceived in Liberty, can long endure. In the USA this notion plays a role similar to that of the second coming of Christ in some high-demand cults.

The main problem with Unix security is that it is very similar to office cleaning services. It is a dull and unrewarding task that needs constant attention to detail, strongly depends on the general enterprise architecture (over which security analysts usually have no control), and often involves fighting against your own management. These feelings were aptly summed up by the comment of Dominique Brezinski: "My life is miserable and pathetic, and I want to get out of security soon."

Although the remark was mostly intended as a self-deprecating jest, it reflects well the level of frustration that security specialists and Unix system administrators have to live with. Despite all the declarations about the importance of security and the work being done by vendors, the same old problems never seem to get solved.

Paradoxically, one of the recurring issues is the endemic lack of security expertise in corporate computer security units themselves. More often than not they attract people who are useless or even harmful in production or development environments. For such people that is the last chance to survive, and their misguided zeal might be one of the most important security dangers in a modern large enterprise environment ;-). This is even more true for Unix sysadmins, as high-level security expertise requires an architectural view of the system, and many sysadmins do not possess good architectural knowledge of Unix -- the knowledge that makes it possible to see side effects and hidden undesirable interactions of various security measures. Such sysadmins often quickly degrade into "administrative fascists" (see Know your Unix System Administrator).

To be able to secure a Unix environment, a good security specialist should first of all be an excellent system administrator. In today's market the value of an excellent sysadmin is more than twice that of an equivalent computer security specialist -- maybe more, as such people are very rare. Moreover, a Unix sysadmin has a more meaningful and more rewarding job and does not waste his expertise on what is essentially a highly paid cleaning job.

Another huge problem is that security is a field that attracts energetic know-nothings. See An observation about corporate security departments. In many organizations it also serves as a dumping ground for those who proved to be useless in their assigned function, but who for some reason are difficult or impossible to get rid of. I know a case where a technically clueless female sociopath (a former executive secretary) was exiled into security because the organization was afraid that she would exploit her gender in a lawsuit if she were terminated (and this idea of using gender as a bulletproof vest is much more common among sociopaths than people realize). So there is always a danger that security in a particular organization is just window dressing performed by people more concerned with their career and survival -- people who are unable and unwilling to understand the real challenges facing the organization and its IT systems.

The Achilles' heel of today's Unix security is the inability of the key players responsible for enterprise security policy to see an architectural view of the system and to integrate the various existing security mechanisms into a meaningful whole. Vendors are also much to blame, as the fragmentation of Unix and Linux means that each flavor needs its own security policy.

As this is difficult, instead of tuning security policy to the multiple flavors of Unix/Linux existing in the enterprise environment, the most common solution is to buy some additional component that (typically via security snake-oil salesmen) is presented as the security panacea, and stick with it. Often the value of such a system is negative, and the system itself is as close to a scam as one can get.

ISS was a classic example of such a security vendor before it was bought by IBM. I don't know whether anything changed after the IBM acquisition, but I do know that IBM brass paid a billion dollars for this junk; it would have been much better for them to spend that money on some nice Greek island with beautiful girls and a lot of booze in the best Kozlowski manner, but that is a different story -- the story of the degradation of IBM's leadership. Maybe things will eventually improve, but right now I do not see how a commercial IDS can justify the return on investment, and NIDS looks like a perfect area for open source solutions. There is also an urgent need to counter "innocent security fraud", to borrow the catchphrase used by the famous economist John Kenneth Galbraith in the title of his last book, The Economics of Innocent Fraud.

It's also a problem of the "enemy within" -- many admins can learn what not to do when they're in charge of a system only by trial and error. As Rick Furniss puts it: "More systems have been wiped out by admins than any hacker could do in a lifetime."

Three Laws of Computer Security

Due to the importance of sysadmin qualifications, the author formulated three laws of computer security:

  1. In the long run the level of security of any large enterprise Unix environment cannot be significantly different from the average level of qualification of the system administrators responsible for this environment.
  2. If there is a large discrepancy between the level of qualification of the system administrators and the level of computer security of the system or network, the main trend is toward restoring equilibrium.
  3. In a large corporate environment, incompetent people implementing security solutions are a bigger problem than most OS security weaknesses, because users tend to react to actions that decrease the user-friendliness of the system with actions that tend to restore it. Real computer security skills presuppose not only the knowledge of what should be done, but the knowledge of where to stop in order not to cause excessive backlash. The latter skill presupposes an understanding of the architecture of the environment and is completely lacking in wannabe security specialists. If incompetents happen to be in charge of security, one should expect them to implement the security measures most destructive for corporate IT, dictated by current fashion and driven by excessive zeal and the desire to survive -- measures that backfire and, through the counteractions they provoke, create security holes bigger than the ones they are trying to patch.

Also, most attacks are much more dangerous from inside than from outside. Unix systems, with their large number of built-in services, scripting languages, and interpreters such as Perl and PHP (the PHP interpreter and programming flaws in PHP scripts are a major source of security holes), are particularly vulnerable because of their complexity and the multiple attack vectors they offer. Extra services are also a problem unless you go to the pain of closing most of them. Even OpenSSH, designed for security, was for a while the source of backdoors at Internet ISPs. That does not mean that Unix systems cannot be made as secure as less capable systems like the AS/400, but out of the box they are definitely less secure.

The first rule of security is KISS. Achieving simplicity of environment is one of the most solid approaches to achieving high security, an approach that too often is completely ignored because it lessens flexibility and complicates administration of the system. That is understandable, but it converts a Unix server into an appliance.

Also, too much zeal in security is a self-defeating exercise. That is the case with the installation of some supposedly useful tools that in reality only add complexity and thus lessen the general level of security. If security tools raise the level of complexity without providing comparable benefits, you need to have the common sense to dump them without regrets. Of course, it is very difficult not to follow hype and fashion. Few IT managers are able to answer vendor or audit remarks like "Oh! You don't have an intrusion detection system yet!" with something like "No, it does not make sense in this particular environment. Also, your solution is crap" ;-).

In any case, the first step should be to delete and disable everything unnecessary, not to install something additional. I am convinced that a major improvement in the security of your system can be achieved via maximum simplification and hardening of your OS. If one understands what software packages are really necessary for a particular system and removes everything that is not needed, many vulnerabilities will no longer be applicable to such a system even without patches. And you can spend more time plugging holes that really matter instead of installing every patch available. Stripping your system to the bones is much more important than running a bunch of fancy security tools ;-).

Stripping is Unix flavor dependent. Solaris has especially good tutorials to help you with this task and has the tremendous advantage of not being Linux (security via obscurity, as most exploits are designed and tested for Linux). Stripping the system down and the absence of extra networking services is the key -- this is the good old KISS principle that one needs to apply to any security task: the more secure you want the system to be, the fewer services it should run. Access to key computers should be limited -- really limited. Rephrasing an old KGB (or was it Stalin's?) saying, "If there is a man, there is a problem; if there is no man, there is no problem", we can say: "if there is a service, there is a problem; if there is no service, there is no problem" ;-).
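The "delete everything unnecessary" step can be made systematic instead of ad hoc: keep a short allowlist of packages the server actually needs and diff the installed set against it. The sketch below uses canned package lists (in real life the installed list would come from something like rpm -qa or dpkg -l); all names are illustrative.

```shell
# Hypothetical stripping audit: print installed packages that are NOT
# on the minimal allowlist -- i.e. candidates for removal.
installed=$(mktemp); required=$(mktemp)
# in real life: rpm -qa --qf '%{NAME}\n' | sort > "$installed"
sort > "$installed" <<'EOF'
httpd
openssh-server
sendmail
telnet-server
EOF
# the allowlist: what this server is actually for
sort > "$required" <<'EOF'
httpd
openssh-server
EOF
# comm -23 prints lines unique to the first file: installed but not required
comm -23 "$installed" "$required"
rm -f "$installed" "$required"
```

Run weekly from cron, the same diff also flags packages that quietly appeared since the last audit.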

There is a SANS classification of the most common Unix security vulnerabilities. While you should not take it on faith, it is still an interesting and useful read, and it suggests that installation of patches for key software packages on the server is another often overlooked mundane security task.

To take just one example: In July 1998, a bug report was published about Microsoft's Internet Information Server. No one knows how long--if at all--it was used to attack systems before then. Microsoft issued a patch fixing the vulnerability shortly after it was published. In July 1999, Microsoft issued a second warning of the vulnerability and the need to install the patch. Even so, in January 2000 the vulnerability was used to steal credit card numbers from several high-profile Web sites.

In any case, selecting only the necessary minimum of security tools is important, because not every security tool is an asset; the wrong tools can create a false sense of security and as such are in essence an additional vulnerability. Generally I recommend avoiding complex and closed tools -- they can actually degrade the level of your security. And in most cases 80% of their positive effect, if any, can be achieved using simpler and/or open source tools.

The first rule of security is KISS. Avoid complex and closed security tools -- they can actually degrade the level of your security

Simplicity has another important aspect: understanding of your system is the key. If the first rule of high security is "simplicity, simplicity and again simplicity", then the second rule is "understanding, understanding and again understanding". And in reality even a simplified system is often too complex to understand. IMHO any aspiring security administrator should first and foremost train himself in the flavor of Unix he wants to protect (see my system administration page). Nothing can replace a deep understanding of the system and protocols that you need to protect. There is a huge difference between those who would never confuse RFC 821 with RFC 822 and the rest of the administrators, who aren't necessarily even sure what "RFC" stands for.

Nothing can replace deep understanding of the systems and protocols
that you need to protect.

As Greg Knauss emotionally put it in his pessimistic essay RBL Without a Clue:

Anyone using the Internet today faces one irrefutable fact: Sturgeon's Law is for optimists. People who believe that a mere ninety percent of everything is crap are living in a sunny unreality only tenuously connected with a world where packets are delivered via IP. Consider, for instance, the collective state of the Net's Sendmail administrators.

With the increased availability of high-speed, always-on Internet connections and the fact that Linux has become so easy to install that certain species of bacteria are now being hired by MIS departments, what was once the domain of rigorously trained, highly specialized professionals has devolved into the Dark Land of the Monkeys. Over the course of the past year, millions of new machines have appeared on the Net, each and every one of them with a misconfigured port 25.

At least that's the case for the servers I've installed. I wouldn't know a properly configured SMTP host if it bit me in the ass - which, as far as I know, a properly configured SMTP host is capable of doing.

Nevertheless, in the Dark Land of the Monkeys, I am King, and the small fact that I administer a Sendmail installation does not in any way require that I actually be a qualified Sendmail administrator. I keep waiting to be dragged off by the Competence Police, but they're apparently no better at their job than I am at mine. Giving me the ability to put a mail server on the Internet merely because I can compile the software and cobble together a configuration file is the moral equivalent of letting the baby drive the steamroller because "he looks so cute."

I believe that if somebody did not read The WWW Security FAQ before installing a Web server connected to the Internet and then complains about problems with this Web server, he or she should either be shot on the spot to prevent further suffering, or forced to RTFM ASAP (better late than never). Or, if the previous two solutions are for some reason impractical, he or she should attend a couple of $2000 Web security seminars, preferably in the most unattractive location possible ;-).

I see scripting languages (Perl, shell and Tcl) as important tools that can help you both understand and maintain the system. As such they are very important security tools, not just system administration tools, and mastery of scripting languages is a very important asset for any security professional. I think that the use of scripting languages is the key to flexible and powerful security analyzers and other tools based on the open source approach. Perl is a very high-level language, and scripts can be made modular and short enough to understand what they are doing and how to adapt them to your needs. Generic prepackaged closed-source commercial solutions that one cannot modify oneself often give only an illusion of security, or create so much noise that the useful signal can easily be overlooked.

Scripting languages are important security tools. Security tools written in scripting languages are more flexible and adaptable than tools written in compiled languages

As Bruce Schneier noted, "Security engineering involves making sure things do not fail in the presence of an intelligent and malicious adversary who forces faults at precisely the worst time and in precisely the worst way." Configuration errors are not always evident and can be introduced as a side effect of other changes. That means you need to probe your system on a regular basis against common misconfigurations and exploits. Some organizations limit their security efforts to a periodic scan using a network scanner, which is by definition a limited and pretty simplistic product. This, in my view, is the wrong approach, although it is better than nothing. I think that using internal scanners for known vulnerabilities is even more important.

Detecting a security breach can generally be viewed as a pattern recognition problem. Thus, we need to develop an understanding of a baseline of normal operating conditions (snapshots of the file system, network services, logon activity, normal CPU load, disk utilization, etc.). It is important to get a sense of what log files normally look like. Forget about very skilled attackers, who may leave little evidence of their presence; only a full system audit can help you detect such subtle system variations later.

Just assume that most attackers are on the same level as an average system administrator: they have learned some tricks, but have no clue about other aspects of the system. That means that a good baseline represents one of the most powerful intrusion detection tools, and since it also has high value in any complex troubleshooting, efforts to create and maintain a set of such baselines can improve not only security, but also the life of regular system administrators. In any case, having a snapshot of a system in a pristine/working state can yield valuable information down the road. Such a snapshot does not require a full backup; a simple CD is OK. That is why it makes perfect security sense to collect snapshots on, say, a weekly basis and to write them to CDs, one CD per server.
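A weekly snapshot does not need fancy tooling; a file metadata listing that you can diff against the pristine state already goes a long way. The sketch below uses temporary demo directories in place of real system paths, and relies on GNU find's -printf option:

```shell
# Sketch of a weekly metadata snapshot: record path, size and
# permissions of every file, then diff later snapshots against the
# pristine baseline. Demo directories stand in for real paths.
demo=$(mktemp -d); snaps=$(mktemp -d)
touch "$demo/a.conf"
# week 1: the pristine state (in practice: burn $snaps to a CD,
# one CD per server)
find "$demo" -type f -printf '%P %s %m\n' | sort > "$snaps/week1.txt"
# ... between snapshots an unexpected file appears ...
touch "$demo/b.conf"
find "$demo" -type f -printf '%P %s %m\n' | sort > "$snaps/week2.txt"
# the diff shows exactly what changed since the pristine state
diff "$snaps/week1.txt" "$snaps/week2.txt" || true
rm -rf "$demo" "$snaps"
```

Because the baseline lives on read-only media, an intruder cannot quietly adjust it to hide his changes.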

If you use Tripwire with a very simple set of rules and do not try to convert it into an IDS, it might help to detect some interesting events. But this is a very rare skill. Most Tripwire installations overreach and as a result are not only useless but really harmful, as there is too much information and the useful signal is masked by noise. Actually, a simple MD5 generation utility is as useful as Tripwire for all practical purposes. You can write a simple script that creates hashes of the important files, which can periodically be compared with a later snapshot to find any file system alteration. Such a script is definitely more useful than Tripwire in most practical cases.
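Such a script really is a few lines. The sketch below is a minimal stand-in for Tripwire: record an MD5 baseline while the system is in a known-good state, then compare from cron; file names and paths are illustrative demo data.

```shell
# Minimal Tripwire replacement: an MD5 baseline plus a periodic check.
demo=$(mktemp -d); baseline=$(mktemp)
echo "root:x:0:0" > "$demo/passwd"
# 1. record the baseline in a known-good state
#    (store it on a CD or a write-protected USB stick)
( cd "$demo" && md5sum passwd ) > "$baseline"
# 2. simulated tampering
echo "evil:x:0:0" >> "$demo/passwd"
# 3. the cron-driven check: any mismatch means the file was altered
( cd "$demo" && md5sum -c --quiet "$baseline" ) 2>/dev/null \
    || echo "ALERT: baseline mismatch"
rm -rf "$demo" "$baseline"
```

Keeping the file list short (a few dozen binaries and config files that matter) is exactly what prevents the signal from drowning in noise.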

Now let's discuss logs. With all the legitimate concerns about log tampering, it is important to understand that tampering does not always occur in a typical intrusion, and the danger of it is probably overblown. Also, remote syslog (which is extremely easy to implement) provides a simple but effective defense against tampering with local log files, as the log server can be hardened to a minimum of services.

A separate machine for logs, with a properly configured firewall and/or without a duplex connection to the host that produces the logs, is another simple measure to be considered. This measure should probably be implemented before buying any commercial intrusion detection packages.
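To show how little effort remote syslog really takes: with classic syslogd it is one line in the configuration file of each monitored host (the loghost name below is illustrative):

```
# /etc/syslog.conf on the monitored host:
# forward every facility and priority to the hardened loghost
# (classic syslogd syntax; delivery is over UDP port 514), so a copy
# of each message leaves the machine before an intruder can edit the
# local files under /var/log
*.*     @loghost.example.com
```

After changing the file, send syslogd a HUP signal to reload it. The loghost itself should run almost nothing besides syslogd.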

That means that logs are a very important source of relevant information, and a workable log analyzer is probably more valuable than a dozen IDS sensors installed in your DMZ ;-). Even a simple Perl script that runs over your log files, periodically extracts a useful subset of messages from the noise and, say, simply mails it to your mailbox, is a very powerful security tool that, unlike some expensive commercial offerings, can really help to pinpoint unusual activity.
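A toy version of such a log-sifting script fits in a few lines of shell; the sample log lines and the patterns below are invented for illustration, and would be tuned to what "normal" looks like on your own systems:

```shell
# Extract the log lines worth a human's attention from the noise.
log=$(mktemp)
cat > "$log" <<'EOF'
Oct  1 10:00:01 host sshd[123]: Accepted publickey for alice
Oct  1 10:00:05 host sshd[124]: Failed password for root from 10.0.0.99
Oct  1 10:00:09 host cron[125]: (root) CMD (run-parts /etc/cron.hourly)
EOF
# keep only the interesting patterns; in a real cron job the output
# would be piped into something like:  mail -s "daily log digest" admin
grep -E 'Failed password|Invalid user|refused connect' "$log"
rm -f "$log"
```

The value is in iterating on the pattern list: every week you either explain a new message or add it to the filter.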

Again, the level of qualification of the security analyst/system administrator is probably the most important component of the security of the system. Nobody can apply all patches all the time, so sooner or later your system will be exposed for a considerable amount of time due to a bug in one of its components. That means that the real problem is how to structure the system so that it is not so dependent on constant updating, and how to put an intruder in a difficult position even if he manages to exploit one of the holes that were not patched on a particular server.

Also, keeping up with the list of holes and attacks on several different flavors of Unix is a really unimpressive job to have. Here I would like to note that although security through obscurity is a bad idea, it can lengthen the period of time between patches. In general it is a valuable layer in a general security architecture. Sometimes it makes perfect sense to have obscure layers; for example, few people like to hack AIX (just try to find exploits for AIX on a typical hacker site), and hacking HP-UX is so disgusting that it can be done only for serious money ;-)

Linux is a much more fashionable target -- really, the most hackable Unix in existence. Just moving to FreeBSD or Solaris on Intel stops many exploits cold, without additional security tools. And OpenBSD is really the most secure Intel-based Unix; while not as advanced as Solaris 10, it can still be used for running critical Internet infrastructure services, for example DNS.

Just do not rely on such obscurity layers as a complete defense. It is just a layer that needs to be supplemented by others, but it is effective enough to stop weakly motivated intruders -- a lot of intruders hunt for well-understood flavors of Unix that run on cheap Intel hardware, and in most cases that means Linux. Just using the UltraSPARC or PowerPC architecture protects you in this case.

Honeypots are an important tool for making the intruder feel vulnerable and uncomfortable. For example, scans should be detected, and some typical vulnerable services mentioned in recent exploits can probably be made to appear enabled, with a decoy running on the relevant port. Of course, alerts should be generated intelligently so as not to produce a (possibly intended) e-mail denial of service attack ;-). If not overdone, a set of honeypots can be a useful layer, as it can help to lure the intruder down a wrong path. And the mere presence of honeypots changes the rules of the game: the intruder no longer has the advantage of selecting what to attack and needs to think twice about whether the hole he just found is real or not.

If you invest in expensive tools without creating a honeypot, you are actually helping an intruder: as long as there are holes, he can still get in. If the intruder is unsure whether a particular hole is real or not and needs to guess, he is strategically in a much more difficult position, and if he guesses wrong the game can be over for him pretty soon.

Write-protected and/or self-healing (for example, using a CD baseline or read-only mirrors on other disks or servers) file systems are another effective avenue that can put the intruder in a more uncomfortable position. For example, it is easy to make the /usr partition write-protected in Solaris (recent versions of JASS will do that for you, but you probably need to link /usr/local to /opt/sfw, as /usr/local is the most frequently changing part of the /usr tree). In this case, even if the intruder manages to break into the computer, he cannot use probably 80% of his tricks and, what is more important, his level of anxiety will probably be much higher, so he will be much more prone to mistakes. For example, all those stupid tricks with rootkits will no longer work as intended, or at least will need a lot more sophistication to implement. If you run a Web site with a CD or USB-based baseline, then any unauthorized change in static content will disappear the first time the altered file is detected, which makes any tampering with your Web site much more difficult. A write-protected USB stick is a really powerful security tool :-). If most of the content of the Web site is static, it makes sense to store a copy of it in a separate read-only partition and periodically rsync the disk version against it. This is a useful variation of the hardware-based write protection method mentioned above that can help to protect your images and include files.
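The rsync-based self-healing step can be sketched in a few lines. Temporary directories below stand in for the real read-only master and the live document root; a cron job would run the rsync line, forcing the live tree back to the master copy.

```shell
# Self-healing static content: force the live tree back to the
# read-only master copy.
master=$(mktemp -d); live=$(mktemp -d)
echo "<h1>home</h1>" > "$master/index.html"
cp "$master/index.html" "$live/"
echo "defaced"  > "$live/index.html"   # simulated defacement
echo "backdoor" > "$live/shell.php"    # simulated planted file
# --checksum catches edits even when size/mtime were faked;
# --delete removes anything the intruder added
rsync -a --delete --checksum "$master/" "$live/"
cat "$live/index.html"
ls "$live"
rm -rf "$master" "$live"
```

After the rsync, the defaced index.html is restored and the planted file is gone; the tampering window shrinks to one cron interval.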

Again, the general recommendation is to have a CD/USB mirror of critical parts of the filesystem and to automatically check the integrity of key files via cron jobs. One simple technique is to write a script that uses the rpm -Vp (verify package) command to check a few key binary files (e.g. fileutils) against the Red Hat ISO -- an easy and sure way of detecting rootkits. A CD/USB-based Tripwire database can provide a primitive database of the static files on your system. Having a secure baseline of vital files along with this database allows restoration "no questions asked".
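To make the rpm-based check concrete: rpm -V prints one line per file that deviates from the package database, and the first column is a flag string in which "5" means the digest changed. The sample lines below are canned (a live run of rpm -V depends on the machine), but the awk filter is exactly what a cron script would pipe real rpm -V output through:

```shell
# Filter rpm -V output for checksum changes only -- the flag worth
# an alert; timestamp-only changes (T) are usually harmless.
# In a real cron job:  rpm -V coreutils | awk '...'
printf 'S.5....T.  /bin/ls\n.......T.  /etc/motd\n' \
  | awk '$1 ~ /5/ { print "ALERT: checksum changed:", $NF }'
```

Using $NF for the path keeps the filter working for config files too, where rpm inserts an extra "c" marker column before the file name.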

The next most obvious step is to have cryptographic checksums for all executables and to check them when executables are loaded, to make tampering more difficult and less rewarding. Paradoxically, Microsoft was the first to implement signing of executables (see my antiviral page), and Linux and other Unixes still lag behind in this respect, although in the Microsoft world this new and important layer of protection is rarely used outside ActiveX components. Signing of executables does not affect the convenience of using the system, but prevents a lot of trouble (for example, rootkit-based approaches become much less usable).

In many cases POP3 or IMAP can be used instead of Sendmail, and because the connection here is initiated by your host, if properly configured it is probably a more secure solution for getting mail than any Sendmail-type mail server. But most mail security problems (especially spamming) are quite different, and here a layer based on an up-to-date version of Sendmail (or a more secure mailer such as Postfix) can make filtering easier. Usually, without content checking it is really difficult to block spammers. For more information see RFC 2505 and the Softpanorama Antispam Page.

The collection of links used in the paper is very Spartan and far from complete; it covers very few relevant approaches to improving Unix security. Please use the Softpanorama Solaris hardening page as a starting point for additional research. Generally, Linux/Unix can be made as secure as you want, but there is no free lunch, and there are some user inconveniences usually associated with high security.


Softpanorama Laws of Computer Security

Architectural Issues of Intrusion Detection Infrastructure in Large Enterprises

Host-based IDS

Slightly Skeptical View on NIDS and Network-level Intrusion Prevention

Closing the Window of Exposure by Bruce Schneier

... In this paper I argue that the historical security model of threat avoidance is flawed, and that it should be abandoned in favor of a more businesslike risk management model. Traditional security products--largely preventive in nature--embody the threat avoidance paradigm: either they successfully repel attackers, or they fail. The unfortunate reality is that every security product ever sold has, on occasion, failed.

A security solution based on risk management encompasses several strategies. First, some risk is accepted as a cost of doing business. Second, some risk is reduced through technical and/or procedural means. And third, some risk is transferred, through contracts or insurance.

Most people concentrate on the second approach and attempt to solve the risk through the purchase of security equipment. However, technical risk reduction cannot be achieved this way; newly discovered attacks, proliferation of attack tools, and flaws in the products themselves all result in a private network becoming vulnerable at random (and increasingly frequent) intervals for random amounts of time.

... ... ...

Ask any network administrator what he needs security for, and he'll describe the threats: Web site defacements, corruption and loss of data due to network penetrations, denial of service attacks, viruses, loss of good name and reputation. The list seems endless, and an endless slew of press articles prove that the threats are real.

Most computer security is sold as a prophylactic: encryption prevents eavesdropping, firewalls prevent unauthorized network access, PKI prevents impersonations. To the world at large, this is a strange marketing strategy. A door lock is never sold with the slogan: "This lock prevents burglaries." But computer-security products are sold that way all the time.

There exists no computer-security product -- or even a suite of products -- that acts as magical security dust, imbuing a network with the property of "secure." Security products are risk management tools, some more effective than others, that reduce the risk of financial loss due to network attacks. These tools should be deployed when the savings due to risk reduction are worth the investment in the tool. Otherwise, it is cheaper to accept or insure the risk than it is to deploy the tool.

For example, it makes no sense to purchase a $10,000 safe to secure $1000 diamond. Even if you could buy a $500 safe, a $300 insurance policy would be a smarter purchase. But if you could buy a $100 safe and a $100 insurance policy that requires the safe, that would be the most cost-effective solution of all.

... ... ...

A company's computer network could be likened to a building, and the windows and doors to the Internet access points. Continuing this analogy, strong door and window locks could help keep out intruders, and office-door locks and locked filing cabinets could help prevent "insider" attacks. Of course these preventive security measures are not enough, and a well-protected building also has alarms: alarms on the doors and windows, and maybe motion sensors and pressure plates in critical areas inside.

The Internet is much more complicated than a building, and constantly changing. Every day there are new vulnerabilities discovered, new attack tools written, and new legitimate services offered. Whenever a new way to attack a house--or a network--is discovered, there exists a window of exposure until that attack method is prevented.

Rarely is a totally new technique for picking door locks invented, one that renders existing lock technology obsolete. Imagine for a moment it has. At the point of invention, there exists a window of exposure for all buildings that have these sorts of locks. As long as no one knows about the lockpicking technique, the window is small. As criminals learn about the technique, the window grows in size. If the technique is published and every criminal learns about it, the window is very wide. At this point, there is nothing anyone can do about the problem; the locks are vulnerable. Only after a lock manufacturer designs and markets a lock that is resistant to this technique can people start to install the new locks. The window closes slowly but, since some buildings will never get these new locks, never completely.

This is what happens daily on the Internet. And because the Internet is much more dynamic and unstable than a building, the repercussions are much worse. Someone discovers a new attack methodology that renders some networks vulnerable to attack. The exposure grows as more people learn about this vulnerability. Sometimes the window of exposure grows very slowly: there are attacks that are known by a few academics and no one else. Sometimes the window grows very quickly: some hacker writes an exploit that takes advantage of the vulnerability and distributes it free on the Internet. Sometimes the software vendor patches the vulnerable software quickly, and sometimes the vendor takes months or years. And some network administrators install patches quickly and religiously, while others never do.

To take just one example: In July 1998, a bug report was published about Microsoft's Internet Information Server. No one knows how long--if at all--it was used to attack systems before then. Microsoft issued a patch fixing the vulnerability shortly after it was published. In July 1999, Microsoft issued a second warning of the vulnerability and the need to install the patch. Even so, in January 2000 the vulnerability was used to steal credit card numbers from several high-profile Web sites.

Stronger Passwords Aren't, by Peter Tippett

A thought-provoking article. The author argues that, in the real world, an eight-character mixed alphanumeric password is no more secure than a simple four-character one, since all that matters is whether an intruder can guess it within five attempts; if the shadow file is stolen, the game is usually over anyway. He correctly notes that the more restrictive a password policy is, the more help-desk calls about forgotten passwords it generates. He does not mention, however, that there are a couple of good schemes, such as the AOL scheme (pairs of simple 4-5 letter words, like viwa-kiev), or that such a policy can be enforced automatically by the passwd utility (for example, npasswd).

While most IT/security professionals assume that plaintext data is vulnerable to eavesdropping over the public Internet, the risk of such an exploit is actually quite low. The cost and effort to maintain an infrastructure that supports Internet encryption probably outweighs any possible gain. In other words, when it comes to sniffing over the public Internet, SSL is on the wrong side of the cost/benefit equation.

This month, I'll focus on another security "necessity" that, in reality, has a minimal impact on risk reduction: strong passwords. Most of us are intimately familiar with the recipe for a "strong" password: it's seven or eight characters in length, uses mixed alphanumeric characters (or maybe even upper and lower case letters or Alt-key characters), and is changed every 60 days or so. The reason we're told to adopt such a password policy is to prevent crackers from using tools to easily determine an end-user's password, which could then be used to gain access to a corporate network.

Sounds simple enough, but unfortunately this type of password policy is a red herring. For all intents and purposes, a "strong" password is really no more secure than a "good enough" one.

Typically, a password file stores neither plaintext nor "encrypted" passwords. Rather, passwords are usually hashed with SHA-1 or MD5 and stored with the corresponding user IDs. Hashes are truly one-way functions. In other words, you could hash the entire Bible and represent it as a short, fixed-length string of gibberish (16 bytes for MD5, 20 bytes for SHA-1). There's no way to use those few bytes of data to get the Bible back.
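The one-way, fixed-length property is easy to demonstrate with Python's standard hashlib module: inputs of wildly different sizes collapse to digests of the same length, from which the input cannot be recovered.

```python
import hashlib

# Two inputs of very different sizes: the digest length stays fixed.
short_digest = hashlib.md5(b"helloworld").digest()
long_digest = hashlib.md5(b"x" * 10_000_000).digest()  # ~10 MB of input

print(len(short_digest), len(long_digest))        # 16 bytes each for MD5
print(len(hashlib.sha1(b"helloworld").digest()))  # 20 bytes for SHA-1
```

Whatever goes in, only the digest comes out; recovering the original input requires guessing it.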

The reason we're told to use strong passwords boils down to this: Someone might steal the password file--or sniff the wire and capture the user ID/password hash pairs during logon--and run a password-cracking tool on it. Such tools come in many sizes and shapes; the most popular include Crack, L0phtCrack and John the Ripper. Since it's impossible to decrypt a hash back to a password, these programs first guess a password--say, "helloworld." The program then hashes "helloworld" and compares the hash to one of the hashed entries in the password file. If it matches, then that password hash represents the password "helloworld." If the hash doesn't match, the program takes another guess.

Depending on the utility, a password cracker will try all the words in a dictionary, all the names in a phone book and so on--and for good measure, throw in a few numbers and special characters to each of the words it guesses. Some password crackers guess words found in foreign languages (or even the fictional Klingon language). If any of the guessed words match any of the passwords in the password file, game over.
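The guess-hash-compare loop described above can be sketched in a few lines of Python. The word list and "stolen" hash here are made up for illustration; real tools such as John the Ripper add mangling rules, phone-book names, and foreign-language dictionaries on top of this same core loop.

```python
import hashlib

def dictionary_attack(stolen_hash, wordlist):
    """Hash each candidate word and compare; return the match, if any."""
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == stolen_hash:
            return word
    return None  # not in the dictionary -- a cracker would fall back to brute force

# A "stolen" hash and a tiny stand-in for a real cracking dictionary.
stolen = hashlib.md5(b"helloworld").hexdigest()
words = ["password", "letmein", "helloworld", "qwerty"]

print(dictionary_attack(stolen, words))  # -> helloworld
```

Note that the attacker never "decrypts" anything: the password falls out simply because someone chose a guessable word.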

By using random alphanumeric characters in lengthy strings, strong passwords supposedly thwart these so-called dictionary attacks. But there are at least three problems with this assumption.

1. Strong password policies only work for very small groups of people. In larger companies, they fail miserably. Suppose you have the aforementioned strong password policy in your 1,000-user organization. On average, only about half of the users will actually use a password that satisfies your policy. Let's say your company constantly reminds your employees of the policy, and compliance increases to 80 percent. Maybe you use special software that won't allow users to have "bad" passwords. It's rare that such software can be deployed on all devices that use passwords for authentication, but for the sake of argument, let's say it gets you to 90 percent compliance.

Great, right? Sorry. Even if 900 out of 1,000 employees use strong passwords, a password cracker can still easily guess the remaining 100 user ID/password pairs. Is 100 better than 500? No, because either way, an attacker can log in. When it comes to strong passwords, anything less than 100 percent compliance is weak.

2. With modern processing power, even strong passwords are no match for current password crackers. The combination of desktop Pentium III processors and good hash dictionaries and algorithms means that, even if 100 percent of these 1,000 users had passwords that meet the policy, a password cracker will still win. Why? Because after it finishes its dictionary attack, it can conduct a brute-force attack. While some user ID/password pairs may take days or weeks to crack, approximately 150, or 15 percent, can be brute-forced in a few hours. It's only a matter of time.

3. Strong passwords are incredibly expensive. Organizations spend a lot of money supporting strong passwords. The second or third highest cost to help desks is related to resetting forgotten passwords. The stronger the password, the harder it is to remember. The harder it is to remember, the more help desk calls. Many companies have full-time employees dedicated to nothing more than password resets.

What Should You Do?

So, we're left with an unwieldy password policy that, among other things, requires expensive training, expends lots of valuable help desk time and results in lost end-user productivity. Not very logical, is it?

What's the answer? Many recommend augmenting passwords with another authentication factor, such as biometrics, smart cards, security tokens or digital certificates. But each of these solutions is expensive to deploy and maintain, especially for distributed organizations with heterogeneous platforms.

For most organizations, we should recognize that 95 percent of our users could use simple (but not basic) passwords--good enough to keep a person (not a password cracker) from guessing it within five attempts while sitting at a keyboard. I'm talking about four or five characters, no names or initials, changed perhaps once a year. Practically speaking, this type of password is equivalent to our current strong passwords. The benefit of these passwords is that they're much easier and cheaper to maintain. Fewer calls to the help desk, fewer password resets, less of a productivity hit--all at no measurable security degradation. Under this scenario, we could reserve the super-strong passwords for the 5 percent of system administrators who wield a lot of control over many accounts or devices.
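The "five attempts at a keyboard" threat model is cheap to enforce in software. A minimal lockout counter might look like the sketch below; the class and limit are illustrative, not any particular system's API (real systems would also persist the counter and expire the lock).

```python
MAX_ATTEMPTS = 5

class Account:
    """Toy account that locks itself after five consecutive bad guesses."""

    def __init__(self, password):
        self._password = password
        self._failures = 0

    def login(self, guess):
        if self._failures >= MAX_ATTEMPTS:
            raise RuntimeError("account locked -- contact the help desk")
        if guess == self._password:
            self._failures = 0  # success resets the counter
            return True
        self._failures += 1
        return False

acct = Account("viwa-kiev")  # a simple AOL-style paired password
for guess in ["password", "letmein", "qwerty", "admin", "12345"]:
    acct.login(guess)        # five wrong guesses...
# ...and the sixth attempt raises, even if the guess were correct
```

With a lockout in place, the online attacker gets five guesses, not billions, which is exactly why a four- or five-character password can be "good enough" against a person at a keyboard.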

Everyone should make the password files mighty hard to steal. You should also introduce measures to mitigate sniffing, such as network segmentation and automated desktop inventory scans for sniffers and other tools. For the truly paranoid, you could encrypt all network traffic with IPSec on every desktop and server.

If the promised land is robust authentication, you can't get there with passwords alone, no matter how "strong" they are. If you want to cut costs and solve problems, think clearly about the vulnerability, threat and cost of each risk, as well as the costs of the purported mitigation. Then find a way to make mitigation cheaper with more of a security impact. 






Copyright © 1996-2021 by Softpanorama Society. The site was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belongs to the respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.


Last modified: March, 12, 2019