Draft version 0.4
Note: An earlier version was published in the Softpanorama bulletin, Vol 18, No.1, 2006
Not too much zeal.
Charles Maurice de Talleyrand
advice to young diplomats
The latest Holy Grail of enterprise IT is server consolidation. In an attempt to lower the cost of IT infrastructure, many companies are looking at server virtualization. The idea is to consolidate small, lightly loaded servers onto fewer, larger and more heavily loaded physical servers. This brings a whole new set of complications and risks: there is no free lunch, and you pay for additional complexity, usually with stability. Like junk bonds, such an investment, if overdone, can lead to losses instead of gains. But in moderation this trend, which can be described as converting the IT environment into a set of virtual machines, can be quite beneficial and opens some additional, unforeseen avenues of savings. Other things being equal, virtualization belongs to OS vendors, and using virtualization provided by the OS vendor is safer than a third-party virtualization solution. That means that, from a 1,000-foot view, Microsoft Windows Server 2008 might eventually be preferable to VMware for virtualization of Windows servers.
Also, if done intelligently and without too much zeal, virtualization can probably cut the number of servers in a typical datacenter by 30-50%, which also brings modest maintenance savings as well as electricity and air-conditioning savings. Low-end servers are extremely inefficient from the point of view of electricity consumption and add considerably to air-conditioning costs, as their power supplies are less efficient. So replacing two low-end servers with one larger server running two virtual partitions is a very promising avenue of datacenter server consolidation. IBM is a leader in this area, and its Power servers come preconfigured to run several paravirtualized instances of AIX or other OSes.
Blades can also be considered an alternative to low-end servers and, as we will discuss later, blades are a pretty attractive alternative to virtualization of low-end servers.
Saving on hardware, which motivates many virtualization efforts, is a questionable idea, as low-end servers represent the most competitive segment of the server market, with profit margins squeezed to a minimum; margins are generally much larger on mid-range and high-end servers. In other words, margins on mid-range and high-end servers work against virtualization. Still, with recent Intel quad- and dual-core CPUs some savings might eventually be squeezed out, as a fully configured server with two four-core CPUs and, say, 32GB of RAM costs less than two servers with one four-core CPU and 16GB of RAM each.
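As a back-of-the-envelope illustration of that arithmetic, here is a minimal sketch; both prices are hypothetical, chosen only to mirror the proportions described above:

```python
# Back-of-the-envelope check of the consolidation arithmetic above.
# Both prices are hypothetical, chosen only to mirror the proportions
# described in the text.

small_server = 4_000   # one quad-core CPU, 16GB RAM (assumed price)
big_server = 7_000     # two quad-core CPUs, 32GB RAM (assumed price)

standalone = 2 * small_server   # two small boxes, one workload each
consolidated = big_server       # one larger box running two guests

saving = standalone - consolidated
print(f"Two small servers:       ${standalone:,}")
print(f"One consolidated server: ${consolidated:,}")
print(f"Hardware saving:         ${saving:,} ({saving / standalone:.0%})")
```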
At the same time, heavy reliance on virtualized servers for production applications, as well as the task of managing and provisioning them, are fairly new areas in the "brave new" virtualized IT world; both need a higher level of skills than "business as usual" plus special software solutions, and both add to costs. Also, when virtualization is expensive, as is the case with VMware, cost benefits can be realized only with oversubscription.
Virtualization increases the importance of Tivoli and other ESM applications. It also dramatically influences configuration management, capacity management, provisioning, patch management, backups, and software licensing. It inherently stimulates adoption of open source software, especially scripting-based solutions. It also opens a lot of new possibilities for saving system administration time and electricity, and makes possible some extremely impressive (albeit not yet fully practical) feats like dynamic migration of a virtual instance from one (more loaded) physical server to another (less loaded). This is often called the "factory" approach to datacenter restructuring.
We can distinguish the following five different types of virtualization:
The first is hardware domain-based virtualization, which is used only on high-end servers. A domain can essentially be described as "blades with common memory and I/O devices". Those "blades on steroids" are probably the closest one can get to more power from a single server without the sacrifices in CPU, memory access and I/O speed that are typical for all other virtualization solutions. Of course, there is no free lunch and you need to pay for such luxury. Sun is the most prominent vendor of such servers (mainframe-class servers like the E15K are all hardware domain-based).
Access to the memory of other domains is slower than to local memory, so those systems are close to NUMA machines.
By heavy-weight virtualization we mean full hardware virtualization as exemplified by VMware. CPU vendors now pay close attention to this type of virtualization: new CPUs are usually "virtualization-friendly" and contain instructions and hardware capabilities that make heavy-weight virtualization more efficient. Intel's latest CPUs, now dominant in the server space, are a classic example of this trend. IBM P5/P6 and Sun UltraSparc T1/T2 are examples among RISC CPUs.
VMware is the most popular representative of this approach, and recently it was greatly helped by Intel and AMD, who incorporated virtualization extensions into their CPUs. VMware officially supports a lot of different types of guests: it can run Linux (Red Hat and Suse), Solaris and Windows as virtual instances on one physical server. As such, it is the most versatile solution in this category. 32-bit Suse can be run in paravirtualized mode on VMware.
The industry consensus is that VMware's solution is overpriced. Hogwash like the following VMware statement:
Horschman countered the 'high pricing' claim saying "Virtualization customers should focus on cost per VM more than upfront license costs when choosing a hypervisor. VMware Infrastructure's exclusive ability to overcommit memory gives it an advantage in cost per VM the others can't match." And he adds, "Our rivals are simply trying to compensate for limitations in their products with realistic pricing."
should be ignored. All those attempts to run dozens of instances on a server with multiple cores (and in mid-2011 you could get an 80-core server for less than $60K, so it is affordable to many organizations) are more a result of the incompetence of typical IT brass than of progress in virtualization technology. No matter how much memory you can share (and overcommitment is just a new term for what IBM VM has done since 1972), you cannot bypass the limitation of a single channel from CPU to memory, unless this is a NUMA server. The more guests are running, the more this channel is stressed, so running dozens of instances is possible mainly in situations when they are doing nothing or close to nothing. That happens (web servers are a typical example), but even for web servers paravirtualization and zones are better solutions.
Even assuming the same efficiency as multiple standalone 1U servers, VMware is not cost efficient unless you can squeeze more than four guests per server. The following table demonstrates that cost efficiency at fewer than four guests per physical server is simply non-existent. You need at least eight guests to achieve the same cost efficiency as four Xen servers running two guests each (Red Hat and Novell do not charge for additional guests on the same physical server); a sketch reconstructing the table's arithmetic follows the notes below.
All costs are in thousands of dollars.

| Configuration | Server cost | Physical servers | Guests | SAN cards (QLogic) | SAN storage | Server maintenance (annual) | VM license | VM maintenance (annual) | OS maintenance (annual) | Five-year TCO | Annualized cost per guest or server | Cost efficiency of one guest vs. one 1U server (annualized) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VMware: 2 guests | 7 | 1 | 2 | 0.00 | 0.00 | 0.42 | 5 | 1.4 | 0.35 | 25.02 | 12.51 | -3.24 |
| VMware: 4 guests | 10 | 1 | 4 | 4.00 | 3.00 | 0.42 | 5 | 1.4 | 0.35 | 38.52 | 9.63 | -0.36 |
| VMware: 8 guests | 20 | 1 | 8 | 4.00 | 6.00 | 0.42 | 5 | 1.4 | 0.35 | 58.52 | 7.32 | 3.13 |
| Xen: 2 guests | 7 | 1 | 2 | 0.00 | 0.00 | 0.42 | 0 | 0 | 0.35 | 13.02 | 6.51 | 2.76 |
| Xen: 4 guests | 10 | 1 | 4 | 4.00 | 3.00 | 0.42 | 0 | 1.3 | 0.35 | 33.02 | 8.26 | 1.02 |
| Physical: two 1U servers | 5 | 2 | 0 | 0.00 | 0.00 | 0.42 | 0 | 0 | 0.35 | 18.54 | 9.27 | 0.00 |
| Physical: four 1U servers | 5 | 4 | 0 | 0.00 | 0.00 | 0.42 | 0 | 0 | 0.35 | 37.08 | 9.27 | 0.00 |
Notes:
1. Even assuming the same efficiency, there are no cost savings when running 4 or fewer guests per VMware server.
2. The cost of blades is slightly higher than that of 1U servers due to the cost of the enclosure, but can be assumed equal for simplicity.
3. We assume that in the case of two instances no SAN is needed or used (internal drives are used for each guest).
4. We assume that in the case of 4 guests or more, SAN cards and SAN storage are used.
5. We assume that in the case of 4 or more Xen guests, Oracle VM (a commercial Xen distribution) is used, which has maintenance fees.
6. For simplicity, the cost of SAN storage is assumed to be a fixed $3K per 1TB per 5 years (this includes SAN unit amortization, maintenance and switches, but excludes the SAN cards in the server itself).
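For what it's worth, the table's totals can be reproduced with the following sketch. Two of its assumptions are reverse-engineered rather than stated in the table: OS maintenance appears to be charged per guest (VM maintenance per host), and hardware maintenance appears to be counted six times (year one bundled with the purchase plus five renewal years).

```python
# Reconstruction of the five-year TCO arithmetic in the table above.
# All figures are in thousands of dollars. Two reverse-engineered
# assumptions: OS maintenance is charged per guest (VM maintenance per
# host), and hardware maintenance is counted six times (year one bundled
# with the purchase plus five renewal years).

def tco(server, guests, san_cards, san_storage,
        server_maint, vm_license, vm_maint, os_maint):
    one_time = server + san_cards + san_storage + vm_license
    return one_time + 6 * server_maint + 5 * (vm_maint + os_maint * guests)

print(round(tco(7, 2, 0, 0, 0.42, 5, 1.4, 0.35), 2))    # VMware, 2 guests -> 25.02
print(round(tco(20, 8, 4, 6, 0.42, 5, 1.4, 0.35), 2))   # VMware, 8 guests -> 58.52
print(round(tco(7, 2, 0, 0, 0.42, 0, 0, 0.35), 2))      # Xen, 2 guests    -> 13.02
print(round(2 * tco(5, 1, 0, 0, 0.42, 0, 0, 0.35), 2))  # two 1U servers   -> 18.54
```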
Performance under high load is not impressive either, as is to be expected for a non-paravirtualized hypervisor. Here is a more realistic assessment from the rival Xen camp:
Simon Crosby, CTO of the Virtualization and Management Division at Citrix Systems, writes on his blog: "The bottom line: VMware's 'ROI analysis' offers neither an ROI comparison nor any analysis. But it does offer valuable insight into the mindset of a company that will fight tooth and nail to maintain VI3 sales at the expense of a properly thought through solution that meets end user requirements. The very fact that the VMware EULA still forbids Citrix or Microsoft or anyone in the Xen community from publishing performance comparisons against ESX is further testimony to VMware's deepest fear, that customers will become smarter about their choices, and begin to really question ROI."
Sun calls heavy-weight virtual partitions "logical domains" (LDoms) and until recently preferred hardware-based domains to logical domains. But this stance is changing with the introduction of LDoms on Sun's T1000 and T2000 Sun Fire servers. The first is a low-end server and as such a questionable platform for heavy-weight virtualization; the second is actually something in between a low-end and a middle-weight server and can run at least two, maybe three, virtual partitions with substantial simultaneous loads. Customers that use Solaris 10 11/06 need servers shipped since January 2007, or a firmware update on older boxes, in order to get LDoms functionality. On the differences from IBM LPARs, see Rolf M. Dietze's blog. Among other things, he came to the following conclusions:
Sun’s LDoms supply a virtual terminal server, so you have consoles for the partitions. I guess this comes out of the UNIX history: you don’t like flying without any sight or instruments at high speed through caves, do you? So you need a console for a partition! A T2000 with LDoms seems to support this; at IBM you need to buy an HMC (a Linux PC with HMC software).
With Crossbow, virtual networking comes to Solaris. LDoms seem to give all the advantages of logical partitioning that IBM's have, but hopefully a bit faster and with clearly less power consumption.
Sun offers a far more open licensing, of course, and you do not need a Windows PC to administer the machine (iSeries OS/400 is administered from such a thing).
A T2000 is fast, has up to 8 cores (32 thread-CPUs) and 16GB RAM, and has a good price for those that do not really need the pure power and are more interested in partitioning.
Solaris zones have some restrictions, e.g. no NFS server in zones. That is where LDoms come in. That’s why I want to actually compare LDoms and LPARs.
It looks like it is becoming cold out there for IBM boxes…
The main advantage of heavy-weight virtualization is almost complete isolation of instances. Paravirtualization and blades achieve a similar level of isolation, so this advantage is not exclusive.
The disadvantages stem from the fact that CPUs, memory and I/O are all shared, so you will never get the same speed under high workloads as with several standalone servers, each with the corresponding fraction of CPUs and memory and the same set of applications. Especially problematic is the sharing of memory, as it may well become a bottleneck before the CPU does. Each virtual OS instance loads pages independently of the others and competes for memory bandwidth. If, for example, two virtual instances are simultaneously active and loading modules or data from disk, each can probably enjoy only about 2/3 of the memory bandwidth of a standalone system (accesses to memory are randomly spread in time, so the sum should be greater than 100%). In other words, you lose approximately 1/3 of memory bandwidth by jumping on the virtualization bandwagon. That is why heavy-weight virtualization behaves badly on memory-intensive applications. Users of IBM Power5 servers and AIX 5.3 (probably the best and most widely used commercial heavy-weight virtualization platform) know that all too well.
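A toy model makes the contention argument concrete. All numbers here are invented for illustration, not measured:

```python
# A toy model of the memory-bandwidth contention argument above: N guests
# compete for one memory channel. Once the channel saturates, the per-guest
# share degrades roughly as 1/N. All numbers are invented for illustration.

CHANNEL_GBPS = 10.0   # assumed single-channel memory bandwidth

def per_guest_bandwidth(active_guests, demand_gbps):
    """Effective bandwidth each guest sees when all are active at once."""
    if active_guests * demand_gbps <= CHANNEL_GBPS:
        return demand_gbps                 # channel not saturated
    return CHANNEL_GBPS / active_guests    # fair split of a saturated channel

for n in (1, 2, 3, 4):
    got = per_guest_bandwidth(n, demand_gbps=7.0)
    print(f"{n} active guest(s): {got:.1f} GB/s each "
          f"({got / 7.0:.0%} of standalone)")
```

With these assumed numbers, two simultaneously active guests each see about 70% of standalone bandwidth, in line with the rough 2/3 estimate above.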
There can be a lot of synergy if you run identical OSes in two or more instances: some pages can be loaded only once while used in both virtual instances. But I think you lose some stack overflow protection this way, as your pages are shared by different instances.
As memory speed and the memory channel are bottlenecks, adding CPUs (or cores) at some point becomes just a waste of money. The amount of resources used for intercommunication increases dramatically with the growth of the number of CPUs. VMware server farms based on the largest servers like the HP DL980 (up to eight 10-core CPUs with two threads per core) tend to suffer from this effect. The presence of a full, non-modified version of an OS for each partition introduces a significant drag on resources (both memory- and CPU-wise). I/O load can be diminished by using a SAN for each virtual instance's OS and multiple cards in the server. Still, in some deep sense heavy-weight partitioning is inefficient and will always waste a significant part of server resources.
This approach is important for running legacy applications, which is the area where this type of virtualization shines.
Para-virtualization is a variant of native virtualization where the hypervisor emulates only part of the hardware and provides a special API that requires OS modifications. The most popular representative of this approach is Xen:
With Xen virtualization, a thin software layer known as the Xen hypervisor is inserted between the server’s hardware and the operating system. This provides an abstraction layer that allows each physical server to run one or more “virtual servers,” effectively decoupling the operating system and its applications from the underlying physical server.
Therefore only OS versions specially modified for Xen can run in virtual mode. Work on Xen has been supported by UK EPSRC grant GR/S01894, Intel Research, HP Labs and Microsoft Research (yes, despite naive Linux zealots' whining, Microsoft did contribute code to Linux ;-). Other things being equal, it provides higher speed and less overhead than native virtualization. NetBSD was the first to implement Xen. Currently the key platform for Xen is Linux, with Novell supporting it in the production version of Suse. Red Hat does not support it in RHEL 4 but is expected to support it in RHEL 5 sometime in 2007. Sun Solaris 10 for x86 also has Xen support (currently in beta, with a production version due in early 2007).
Xen is now sold commercially by IBM; Sun will have a Xen-compatible version of Solaris in mid-2007 (SPARC has a separate implementation that will be released in late 2007, and the two implementations are expected to merge in the future).
The main advantage of Xen is that it supports live relocation. It is also a more cost-effective solution than VMware, which is definitely overpriced.
The main problem is that, as with any para-virtualization solution, the OS needs to be modified to be aware of the environment it is running in and to pass control to the hypervisor when executing privileged instructions. Therefore it is not suitable for running legacy OSes.
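The following toy sketch (plain Python, not real hypervisor code) illustrates the difference: a para-virtualized guest calls the hypervisor's API (a "hypercall") directly, instead of executing a privileged instruction and relying on the hypervisor to trap and emulate it, and that call-site change is exactly the modification a legacy OS cannot receive.

```python
# Conceptual sketch (toy Python, not real hypervisor code) of why
# para-virtualization requires guest OS changes.

class Hypervisor:
    def hypercall(self, op, *args):
        # Validate and perform the privileged operation on the guest's behalf.
        print(f"hypervisor: executing {op}{args} for guest")

class UnmodifiedGuest:
    """Legacy OS: issues privileged instructions as if on bare metal.
    Under full virtualization these fault and must be trapped and emulated,
    which is where the extra overhead comes from."""
    def set_page_table(self, addr):
        raise PermissionError(f"privileged instruction at {hex(addr)} trapped")

class ParavirtualizedGuest:
    """Modified OS: knows it is virtualized and calls the hypervisor directly."""
    def __init__(self, hv):
        self.hv = hv
    def set_page_table(self, addr):
        self.hv.hypercall("set_page_table", hex(addr))

ParavirtualizedGuest(Hypervisor()).set_page_table(0xCAFE)
try:
    UnmodifiedGuest().set_page_table(0xCAFE)
except PermissionError as e:
    print("full-virtualization path:", e)
```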
Para-virtualization improves speed in comparison with heavy-weight virtualization, but does little beyond that. It is unclear how much faster a para-virtualized OS instance is in comparison with heavy-weight virtualization on "virtualization-friendly" CPUs. The Xen page claims that:
Xen offers near-native performance for virtual servers with up to 10 times less overhead than proprietary offerings, and benchmarked overhead of well under 5% in most cases compared to 35% or higher overhead rates for other virtualization technologies.
It's unclear whether this difference was measured on old Intel CPUs or on the new 5xxx series that supports virtualization extensions.
I would like to stress again that the level of OS modification is very basic, and duplicated functions like virtual memory management are not factored out. Therefore all the redundant processing typical of heavy-weight virtualization is present in the para-virtualization environment as well.
Note: Xen 2.0 had initial support for para-virtualization, meaning that guest OSes had to be modified to run on top of the hypervisor. Xen 3.0 and above support both para-virtualization and full (heavy-weight) virtualization, leveraging the hardware support built into Intel VT-x and AMD Pacifica processors. According to the XenSource Products - Xen 3.0 page:
With the 3.0 release, Xen extends its feature leadership with functionality required to virtualize the servers found in today’s enterprise data centers. New features include:
- Support for up to 32-way SMP guest
- Intel® VT-x and AMD Pacifica hardware virtualization support
- PAE support for 32 bit servers with over 4 GB memory
- x86/64 support for both AMD64 and EM64T
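On Linux hosts you can check whether the CPU advertises those extensions by looking for the "vmx" (Intel VT-x) or "svm" (AMD Pacifica/AMD-V) flags in /proc/cpuinfo; a small sketch:

```python
# Check /proc/cpuinfo (Linux only) for the hardware virtualization
# extensions mentioned above: "vmx" = Intel VT-x, "svm" = AMD Pacifica.
# Without them, Xen 3.0 can still run para-virtualized (modified) guests,
# but not unmodified ones.

def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V (Pacifica)"
                return None

if __name__ == "__main__":
    support = hw_virt_support()
    print(support or "No hardware virtualization extensions found; "
                     "Xen can still run para-virtualized guests.")
```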
Light-weight (OS-level) virtualization was pioneered in FreeBSD (jails) and was further developed by Sun, which introduced it in Solaris 10 as the concept of zones. There are various experimental add-ons of this type for Linux, but none has gained prominence.
In Solaris 10 11/06 (released at the end of November 2006), admins are able to clone a zone as well as relocate it to another box through a feature called attach/detach. It is also now possible to run Linux applications in zones on x86 servers (branded zones). The key advantage is that you have a single OS instance, so the price you pay in the case of heavy-weight virtualization is waived. That means that light-weight virtualization is the most efficient resource-wise. It also has great security value. Memory can still become a bottleneck, as all memory accesses are channeled via a single controller.
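As a rough sketch of that clone and attach/detach workflow, driven from Python; zone names and paths are hypothetical, the commands follow Solaris 10 11/06 zonecfg(1M)/zoneadm(1M), and this must run as root on a Solaris host:

```python
# Sketch of zone cloning and relocation on Solaris 10 11/06.
# Zone names and paths are hypothetical; run as root on Solaris.

import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Clone an existing zone instead of installing a new one from scratch.
run("zonecfg -z webzone2 'create -t webzone1'")           # copy the configuration
run("zonecfg -z webzone2 'set zonepath=/zones/webzone2'") # give it its own path
run("zoneadm -z webzone2 clone webzone1")                 # clone the installed files

# Relocate a zone to another box with detach/attach.
run("zoneadm -z webzone1 detach")
# ... move /zones/webzone1 to the target host (e.g. over the SAN) ...
run("zoneadm -z webzone1 attach")                         # run this on the target host
```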
IBM's "lightweight" product would be "Workload manager" for AIX which is an older (2001 ???)and less elegant technology then BSD Jails and Solaris zones:
Current UNIX offerings for partitioning and workload management have clear architectural differences. Partitioning creates isolation between multiple applications running on a single server, hosting multiple instances of the operating system. Workload management supplies effective management of multiple, diverse workloads to efficiently share a single copy of the operating system and a common pool of resources
IBM's lightweight virtualization operates under a different paradigm, with the closest thing to a zone being a "class". The system administrator (root) can delegate the administration of the subclasses of each superclass to a superclass administrator (a non-root user). Unlike zones, classes can be nested:
The central concept of WLM is the class. A class is a collection of processes (jobs) that has a single set of resource limits applied to it. WLM assigns processes to the various classes and controls the allocation of system resources among the different classes. For this purpose, WLM uses class assignment rules and per-class resource shares and limits set by the system administrator. The resource entitlements and limits are enforced at the class level. This is a way of defining classes of service and regulating the resource utilization of each class of applications to prevent applications with very different resource utilization patterns from interfering with each other when they are sharing a single server.
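A toy illustration of such share-based allocation; class names and share values are invented for the example:

```python
# Toy illustration of WLM's share-based allocation: each class receives
# CPU in proportion to its shares, and (unlike zones) classes nest, with
# subclasses splitting their superclass's slice. Names and share values
# here are invented for the example.

def allocate(total, shares):
    """Split `total` CPU percent among classes in proportion to shares."""
    s = sum(shares.values())
    return {name: total * share / s for name, share in shares.items()}

# Top-level (superclass) allocation of 100% of the CPU
super_alloc = allocate(100, {"db": 60, "web": 30, "batch": 10})

# The "db" superclass administrator subdivides its slice among subclasses
db_alloc = allocate(super_alloc["db"], {"oltp": 3, "reporting": 1})

print(super_alloc)  # {'db': 60.0, 'web': 30.0, 'batch': 10.0}
print(db_alloc)     # {'oltp': 45.0, 'reporting': 15.0}
```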
Blade servers are an increasingly important part of enterprise datacenters, with consistent double-digit growth easily outpacing the overall server market. IDC estimated that 500,000 blade servers were sold in 2005, or 7% of the total market, with customers spending $2.1 billion.
While blades are not virtualization in a purely technical sense, a rack of blades (a bladesystem) possesses additional management capabilities that are not present in standalone 1U servers, and modern versions usually have a shared I/O channel to NAS. They can thus be viewed as a "hardware factorization" approach to server construction, which is not that different from virtualization. The first shot in this direction is the new generation of bladesystems like the IBM BladeCenter H (which has offered I/O virtualization since February 2006) and the HP BladeSystem c-Class. The latter offers better server management and virtualization and saves up to 30% power in comparison with rack-mounted 1U servers with identical CPU and memory configurations. Sun also offers blades but is a minor player in this area. Its rather interesting and innovative Sun Blade 8000 Modular System targets a higher end than the usual blade servers. Here is how CNET described the key idea behind the server in the article Sun defends big blade server 'Size matters':
Sun co-founder Andy Bechtolsheim, the company's top x86 server designer and a respected computer engineer, shed light on his technical reasoning for the move.
"It's not that our blade is too large. It's that the others are too small," he said.
Today's dual-core processors will be followed by models with four, eight and 16 cores, Bechtolsheim said. "There are two megatrends in servers: miniaturization and multicore--quad-core, octo-core, hexadeci-core. You definitely want bigger blades with more memory and more input-output."
When blade server leaders IBM and HP introduced their second-generation blade chassis earlier this year, both chose larger products. IBM's grew 3.5 inches taller, while HP's grew 7 inches taller. But opinions vary on whether Bechtolsheim's prediction of even larger systems will come true.
"You're going to have bigger chassis," said IDC analyst John Humphries, because blade server applications are expanding from lower-end tasks such as e-mail to higher-end tasks such as databases. On the more cautious side is Illuminata analyst Gordon Haff, who said that with IBM and HP just at the beginning of a new blade chassis generation, "I don't see them rushing to add additional chassis any time soon."
Business reasons as well as technology reasons led Sun to re-enter the blade server arena with big blades rather than more conventional smaller models that sell in higher volumes, said the Santa Clara, Calif.-based company's top server executive, John Fowler. "We believe there is a market for a high-end capabilities. And sometimes you go to where the competition isn't," Fowler said.
As a result of such factorization, more and more functions move into the blade enclosure. Power consumption improves dramatically, as blades typically use low-power CPUs, and all blades share the same power supplies, which in the case of a full or nearly full enclosure lets the power supplies work at much greater efficiency (twice or more that of a typical server). That cuts air-conditioning costs too. Also, newer blades monitor air flow and adjust fans accordingly. As a result, the energy bill can be half that of the same number of 1U servers.
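A back-of-the-envelope check of that claim; all wattage, cooling and tariff figures are assumptions for illustration, not vendor data:

```python
# Back-of-the-envelope check of the blade power claim above. All wattage,
# cooling and tariff figures are assumptions for illustration only.

SERVERS = 14                 # workloads to host
W_1U = 350                   # assumed draw of one 1U server, watts
W_BLADE = 180                # assumed per-blade draw (shared PSUs, low-power CPUs)
COOLING_FACTOR = 1.5         # ~0.5 W of cooling per 1 W of IT load
KWH_PRICE = 0.10             # assumed electricity price, $/kWh
HOURS = 24 * 365

def annual_cost(watts):
    kwh = watts * COOLING_FACTOR * HOURS / 1000
    return kwh * KWH_PRICE

rack = annual_cost(SERVERS * W_1U)
blades = annual_cost(SERVERS * W_BLADE)
print(f"1U servers: ${rack:,.0f}/yr, blades: ${blades:,.0f}/yr "
      f"({1 - blades / rack:.0%} saving)")
```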
Blades generally solve the problem of the lack of CPU power typical for most types of virtualization except domain-based, and with the current price of memory they also address the memory latency problem. Think of them as predefined partitions with a fixed amount of CPU and memory. Dynamic swap of images between blades is possible. Some I/O can be local and, with high-speed solid state drives, very reliable and fast. That permits separating OS-related I/O from application-related I/O.
Among major vendors:
There is no free lunch, and virtualization is not a panacea. It increases the complexity of the environment and puts severe stress on the single server that hosts multiple virtual machine instances. Failure of this server leads to failure of all instances.
Therefore the natural habitat of virtualization is development, test and staging servers, as well as almost idle servers that serve various enterprise consoles and similar low-CPU-intensity applications (web servers and e-commerce servers).
At the same time, virtualization opens new capabilities, and sometimes it makes sense to run a single virtual machine instance on a server to get such advantages as on-the-fly relocation of instances, virtual image manipulation capabilities, etc. With technologies like Xen, which claims less than 5% overhead, that approach becomes feasible. "Binary servers" -- servers that host just two applications -- also look very promising.
Migration of rack-mounted servers to blade servers is the safest approach to server consolidation. Managers without experience of working in a partitioned environment shouldn't underestimate what their administrators need to learn and the set of new problems that virtualization creates. One good piece of advice is: "Make sure you put the training dollars in."
There are also other problems. A lot of software vendors won't certify applications as compatible with a virtual environment, for example as VMware compatible. In such cases, running the application in a virtual environment means that you assume the risks and cannot count on vendor tech support to resolve your issues.
All in all, it makes sense to proceed slowly, testing the water before jumping in. Those that have adopted virtualization have, on average, only about 20% of their environment virtualized, according to IDC. VMware's pricing structure is a little bit ridiculous and nullifies hardware savings, if any; their maintenance costs are even worse. That means that alternative solutions like Xen 3 or Microsoft should be considered on the Intel side, and IBM and Sun on the Unix side. As vendor consolidation is ahead, if you don't have a clear benefit from virtualization today, you can wait or limit yourself to "sure bets" like development, testing and staging servers. The next version of Windows Server will put serious pressure on VMware in a year or so. Xen is also making progress, with IBM support behind it. With those competitive pressures, VMware could become significantly less expensive in the future.
VMs are also touted as a solution to the computer security problem. It's pretty obvious that they can improve security. After all, if you're running your browser on one VM and your mailer on another, a security failure in one shouldn't affect the other. If one virtual machine is compromised, you can just discard it and create a fresh image. There is some merit to that argument, and in many situations it's a good configuration to use. But at the same time, the transient nature of virtual machines introduces new security and compliance challenges not addressed by traditional systems management processes and tools. For example, virtual images are more portable, and the possibility of stealing a whole OS image and running it on a different VM is very real. New security risks inherent in virtualized environments need to be understood and mitigated.
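One simple mitigation along those lines is to keep a fingerprint inventory of VM images so that tampered or unauthorized copies can be detected later; a minimal sketch (the image directory and file extension are hypothetical):

```python
# Minimal sketch: fingerprint inventory of VM images, so tampered or
# unauthorized copies can be detected later. The image directory and file
# extension are hypothetical. Requires Python 3.8+.

import hashlib
import json
import pathlib

def fingerprint(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def inventory(image_dir="/var/lib/vm-images"):
    return {p.name: fingerprint(p)
            for p in sorted(pathlib.Path(image_dir).glob("*.img"))}

if __name__ == "__main__":
    print(json.dumps(inventory(), indent=2))
```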
Virtualization in Xen 3.0 Linux Journal
CERIAS Weblogs » Using Virtual Machines to Defend Against Security and Trust Failures
VM Rootkits: The Next Big Threat?
Meditations on a virtually secure world
Do virtual machines weaken security? - 29 Mar 2006 - IT Week
DISA VIRTUAL MACHINE SECURITY TECHNICAL IMPLEMENTATION GUIDE Version 2
HP.com - HP BladeSystem c7000 Enclosure - Overview & Features
Sun defends big blade server 'Size matters' CNET News.com