
Softpanorama Bulletin
Vol 12, No.02 (April, 2000)



Bad Linux Advocacy FAQ

 "Avoid hyperbole and unsubstantiated claims at all costs.
It's unprofessional and will result in unproductive discussions."
Linux Advocacy Guidelines

 

Version 0.60

Note: those who are extra-sensitive to any criticism of open source/free software ideologies and practice might do well to avoid reading this article. Maintained by Dr. Nikolai Bezroukov.

This BLA FAQ is partially based on e-mails received by the author after the publication of two papers in First Monday.

 


Contents

Why was the term "bad Linux advocacy" introduced?
Is "Vulgar Marxism" a legitimate scientific term?
Can you give an example of bad Linux advocacy thinking?
Can you give the principles of good Linux advocacy?
Why do you object to the current Linux Gold Rush?
Are key open source developers volunteer developers?
Why the success of Linux is mainly a manifestation of the Unix Renaissance that would have happened with or without Linux


Why was the term "bad Linux advocacy" introduced?

In cases of major discrepancy
 it is always reality that's got it wrong.

From RFC1118

A: I consider Open Source to be an important part of the international programming fraternity, an institution organized as a virtual scientific academy that has discovered many talented developers in various countries, especially in Europe and Spanish-speaking countries. It's great how active this fraternity is! This "pro bono" (Latin for "for the common good") development is not unique to software: most professional codes of ethics encourage participants to donate some of their time "pro bono". I think that the healthy part of the Open Source movement is in reality a "pro bono" movement that has already produced a long-lasting impact on software and is especially important to education, developing countries, cash-strapped startups, etc. It's an important part of the Unix Renaissance, the most important democratic movement in software development in the XX century, started by the Berkeley BSD project (TCP/IP, BIND, sendmail, vi, to name a few things) and the GNU project, begun by Richard Stallman at MIT (gcc, gdb, Emacs, etc.). One very important benefit of Linux is that, along with FreeBSD and OpenBSD, it is a free and open alternative to any proprietary operating system, and due to the GNU license it most probably will stay that way.

At the same time the movement is still in its early stages (and not its last days, as some predict), and it suffers from some "childish diseases". One of them is bad advocacy. The term "bad Linux advocacy", or Raymondism, was introduced in my first First Monday paper to differentiate credible OSS advocacy from the popular brand of naive advocacy bordering on blind Linux chauvinism ("Linux uber alles"). The main problem with Raymondism is that with the loss of credibility comes a betrayal of the trust of the intelligent readership.

What ESR and Co failed to realize is that the people who develop and use Solaris, Novell and Microsoft products are also professionals, and many of them are of a caliber far superior to the author of low- to middle-range open source products like Emacs editor macros, a mail utility, and the like ;-). For any intelligent professional an open demonstration of arrogance naturally creates a strong negative reaction, a backlash that is damaging to the movement's credibility and future.

Before I get flamed for this, please understand that a holy war, a "Linux uber alles" of sorts, is a self-defeating strategy. I hope that there is a healthy "silent majority" of the open source community (that's why I am actually writing this FAQ) who are just writing code as best they can, and/or submitting patches and bug reports. But that does not mean we can simply ignore the ranting and raving of the zealots: the public tends to define the open source community in terms of its most outspoken members (ESR and Co), which in this particular case means zealots...

 


 

As Jono Bacon put it in the UK Linux Group article The Good, the Bad and the Proprietary:

I mean, let's take a reality check at this early point in this discussion; Linux is software - a man made tool that serves a purpose, and we need to remember that Linux is only software, and not some godly means in life where we must cast down all those who oppose. The particular Linux users that I direct this comment to are what I would call "those users who like to express their opinion in forceful manner"; in other words, those people who get very hostile to anything that isn't Linux.

Microsoft is usually a direct target when it comes to shoving some negative energy in the right direction. While I think that Microsoft does have its flaws, everything has two sides, and Microsoft has consistently developed well designed, easy to use software that lets novices get some work done.

The same problems exist with primitive anti-Microsoft rhetoric like ESR's (see the Slashdot discussion ESR responds to Ed Muth for more details):

After months of silence out of Redmond, the themes of Microsoft's coming FUD campaign against Linux are beginning to emerge like a zombie army from the fetid mists of Redmond. And who should that black-armored, axe-wielding figure riding point be but our old friend Ed "Sheriff of Nottingham" Muth, apparently recovered from leading with his chin last time around and ready for another go at Linus and his Merry Men of Sherwood.

Even Linus Torvalds proved not to be immune to this disease. Some of his technical judgments are very suspect. It's enough to read several of his interviews attentively to understand that he started making predictions and evaluating things about which, due to the specifics of his career, he actually has very little real knowledge; the best he can do is make an educated guess. As Charles Hannan, a developer of an alternative operating system, was quoted in an Ottawa Citizen article: "All in all that IPO money did to some Linux developers was make them incredibly arrogant."

Overhyping open source doesn't actually help to create a larger user base or to sustain the development of the community. We should suspect any OSS advocacy that includes the following features:

  1. Gross oversimplifications like "open source software is good, closed source is bad", "Linux has better quality than closed source UNIXes or Free/Open BSD", or "contributors to open source projects are plentiful and contributions always have high quality". Bob Young's example of a car in which you are not allowed to "look under the hood" (as if most customers know or want to know how to fine-tune fuel injection, or are able to diagnose various malfunctions and/or install additional equipment like, say, a turbocharger) is a more subtle example of the same category.
  2. Claiming that open source software is the most economically efficient paradigm for producing software and is much better than any alternative method. This is called economism or Vulgar Marxism; see Is "Vulgar Marxism" a legitimate scientific term? below. Bad Linux advocacy considers commercial software developers inferior to free/open source developers. It also has a fundamentalist attitude about the necessity of redistributing source code. I agree that it's a nice feature and it really makes a difference in many cases (especially in education, poor countries, cash-strapped startups, etc.), but still the absence of source code should not be a cause of moral indignation, as Bertrand Meyer convincingly demonstrated in his essay.
  3. Emphasizing volunteer development and concealing the facts about the true economic origin of many popular open source software products, including Linux. In reality a considerable part of it is not "donated" but "taxpayer-funded" (university-funded) or "commercially funded" (current versions of Linux). Even Linus Torvalds cannot be called a volunteer after probably just the first two years of kernel development: after that his "hobby" was financed by the University of Helsinki (which allowed Linus to do development on his university job), then Transmeta picked up the bills. Later the IPO stock gold rush remunerated him quite nicely, probably at a level very few leading commercial developers enjoy. I would say that Linus Torvalds probably belongs to the first dozen of the most highly paid developers in the Unix world. Without commercial developers and the support of development by commercial distributors, Linux in its present form would be impossible. Most significant open source products are now developed by paid developers (the staff of Linux companies) and in this respect are not that different from commercial products. It's just a new commercial software development paradigm that can be called complexity-level based commercialization. There is nothing bad about it; we just need to understand the real picture. Actually the FSF from the beginning used paid developers to develop software. That means that CatB's claim that Linus Torvalds is a volunteer developer contradicts Linus Torvalds' biography. On the other hand, commercial companies contributed a lot more to Linux than Raymondism would like to accept: the role of DEC in the development of Linux is ignored in CatB and similar essays.
  4. A holier-than-thou attitude and disrespect for other developers, including attacks against commercial software developers, especially Microsoft. These two communities are actually interdependent. First of all, one needs to understand that the development of all major open source products is currently organized on a commercial basis. Instigation of hatred toward members of the commercial community is unproductive and unethical. Often open source products are re-implementations of commercial products (Linux is a very good example here, but Ghostscript, GIMP and Samba can probably be mentioned too). Borrowing from the design of a commercial product requires respect for and acknowledgement of the original product. This attitude is definitely lacking in phrases like "I invented Linux" (the most generous claim possible would be "re-implemented a Unix kernel using Minix and FSF tools"). This is as close to the infamous Microsoft phrase "We invented Windows" as one can get. See also the Linus Torvalds cult of personality issue below. Another example of the holier-than-thou attitude is ESR's anti-Microsoft rhetoric, like his discussion of Microsoft in the Halloween documents (see for example Halloween V) in the "Sheriff of Nottingham" passage quoted above.


    Yes, of course, Microsoft is far from being a saint, but such rhetoric is simply ridiculous. Even the usual marketing suspects rarely sink so low. Jeff Lewis, in his interesting paper The Cathedral and the Bizarre, discussed another trick that ESR often uses (the "Windows 2000 63K bugs" trick):

    Raymond points out that Windows 2000, which reportedly shipped with 63,000 bugs, shows that OpenSource works because under Brooks's "Law", the number of bugs is roughly proportional to the square of the number of programmers. Since Linux has 40,000 developers working on it - there should be 2 billion bugs in Linux. The flaw is that he perceives Linux as a collection of small projects, but sees Win2K as a single monolithic application - much as he seems to see MacOS. In reality, Win2K and MacOS aren't monolithic. They are composed of many smaller parts which are handled by smaller teams. Much like Linux.

    As for comparing bug counts - at least Microsoft has a bug count. If Raymond had bothered to check the number, he'd have found that a rather large proportion of the 63,000 bugs are cosmetic - and none were considered 'showstoppers'. We don't even have a way to determine the real bug count for Linux since there's no central repository for this sort of information.
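    To spell out the arithmetic in Lewis's rebuttal (my back-of-the-envelope sketch, not Lewis's): if defects really grew with the square of team size, one monolithic team of n = 40,000 developers would produce on the order of

        n^2 = 40,000^2 = 1.6 x 10^9 (about 2 billion) bugs,

    while the same n developers split into k independent subsystem teams of n/k people each would produce only about

        k * (n/k)^2 = n^2 / k.

    With, say, k = 1,000 team-sized parts, the quadratic penalty drops by three orders of magnitude -- which is precisely Lewis's point: both Win2K and Linux are collections of small-team parts, so the scare figure applies to neither.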

    Raymondism also seems to assume that all OSes are targeted at the same market segment. This is a questionable assumption. Developer resources are not infinite, and explicit or implicit priorities lead to the particular strong and weak points of a particular OS. Unix in general was designed as a developer OS, and naturally most developers and power users really like it and prefer it to others. It is also a very good server OS. That does not exclude the possibility of its use by other market segments, but the level of success achievable in each of them is questionable. For example, the Mac is popular among graphic artists, musicians and users without special computer training. One can say that it is a specialized OS for those market segments, and that's why the question of consistency of the user interface is so well addressed in that OS: it's a top priority for those segments. See the Slashdot discussion of The Cathedral and the Bizarre for more information. It just reinforces the idea that there are different markets, different kinds of software and different kinds of users. No surprise that OSS fits some niches and doesn't fit others.

  5. Explicit and/or implicit attacks on the FSF. As Bertrand Meyer has shown, the FSF has its own problems in the advocacy area, but we need to remember that the FSF did the fundamental job of creating the GCC compiler, which is the cornerstone of the whole movement, not to mention other important tools like gdb, Emacs, etc. In any case ESR and his closed Open Source Initiative are a pygmy in comparison with the FSF's past and present role and achievements. Paradoxically, in many important aspects Raymondism is more radical than the FSF. For example, the FSF never claimed that free software development is a superior model for software development compared to commercial development. Nor did they ever claim that everybody should use open source software only -- for them it's a personal preference that you can take or reject. In no way did the FSF contribute to those crazy Linux IPOs that led to "enthusiasm-led investments" from the most active people in the community -- investments that are now at risk due to the unclear commercial perspectives of the Linux-based companies. Actually the FSF and the Free Software community in general have no problem with commercial software developers and selling software; in fact, the FSF generates revenue through the sale of software. Since January 1998 Eric Raymond has successfully promoted "open source" as a distinct and slightly anti-Stallman movement. See for example his interview with Smart Reseller, Straight From The Source, where Eric was called a Godfather of Linux ;-). Note how skillfully an anti-FSF stance was injected -- the GPL essentially permits commercial use and is the core reason for Linux's popularity, while the open source license is a Johnny-come-lately and as of this writing has no important products to claim:

    SR: Some of our readers may be confused by the "open source" movement you represent, which is significantly different from Richard Stallman's (a.k.a. RMS, founder of the Free Software Foundation and the GNU Project) "free software" statements. Open source is not the same thing as Stallman's "free software," right?

    Raymond: The distinction between the open-source movement and what RMS is doing is that we push utility arguments while he publishes moralistic ones. RMS's basic stance is that intellectual property is evil and, therefore, sources must be open. Ours is that we want what gives the best engineering results, and that's open source.

    It seems that ESR developed a pop star syndrome at some point and decided that he could safely attack the FSF in order to promote his own Open Source Initiative. This "pop star syndrome" can probably also explain why he felt the need to go public about his new wealth. This was a very bad move from a PR perspective and just shows how arrogant ESR became; he not only managed to discredit himself both as a person and as an evangelist by attacking the FSF, but also alienated those Linux developers who failed to get in on this short-lived get-money-fast-and-run Linux Gold Rush (I suspect the latter category encompasses most developers outside the USA). There's nothing wrong with making money, even big money, but Linux doesn't benefit from crazy IPOs based on hype and manipulation by investment brokers. I suspect that the "open source rich" became rich at the expense of naive believers in the OSS phenomenon, not of the day traders.

  6. Attempts to contribute to Linus Torvalds' "cult of personality". Often this is done via blatant exaggerations in the best style of the North Korean press, but sometimes more subtle ways are used too -- for example, calling him a "true pragmatist" and contrasting him with an "idealist" RMS (who, by the way, is the principal author of the GPL -- the most pragmatic thing the movement has and actually the cornerstone of the business use of open source). A typical example is the passage from Evan Leibovitch's essay In the middle lies sanity quoted and discussed below (see Can you give an example of bad Linux advocacy thinking?); see cult_of_personality for additional quotes.

  7. Claiming that open source software has intrinsically higher quality than closed source, commercially developed software. This statement is an article of faith among some open source advocates, but until I see an objective, empirical study that substantiates it, it shouldn't be stated as fact. Actually there are badly designed, insecure and quite popular open source products (Sendmail might be one example). Some open source products may use algorithms that are no longer on the cutting edge of technology, and their development might be slow, yet they still play the role of the de-facto standard in the open source world (compare, for example, the speed of development and the level of interface refinement of gzip and rar). The issue of the quality of algorithms is often ignored, but IMHO the algorithms used are often far more important than other issues and make the difference between bad and good software.
  8. Blah-blah-blah about world domination. Linux domination would be a bad thing. We need to respect BeOS, Inferno, VMware, VM/Linux (a derivative of the older VM/CMS -- IBM's two-layer approach to OS design in which a simpler OS (Linux) runs on top of a complex, proprietary virtual machine monitor that hides a lot of complexity from the upper level and provides a virtual cluster or network, and as such is different from plain-vanilla Linux on a single machine) and the other free Unixes (FreeBSD/OpenBSD/NetBSD). Pluralism in OSes is as important as in other spheres of life, and one of the greatest achievements of Linux is that it helps to overcome Microsoft's dominance in the PC world. Microsoft's dominance has already badly influenced people, and there should be no new "Mongol oppression". It's really important that people choose the right tool for the job (the best tool for the job, if you have enough money). That will not always be Linux, or even Unix. No single OS can do everything well. Actually Linux on the desktop is far from being a paradise and probably never will be, due to its server-centric architecture. It's pretty attractive to power users mainly because they actually need, and can productively use, a server as a desktop. But you need to want to be your own sysadmin ;-).

    The other side of this "world domination" drive is the attempt to represent the Linux kernel as the best available Unix kernel implementation, which strengthens the impression of the Linux movement as a high-tech cult. The kernel is pretty good and I like and use it, but in many respects it's not the best and never will be.
  9. Overrating open source security. The problem is not finding people, but finding quality people to audit software, and that's much more difficult than ESR assumes. On April 14th, 2000, reports began to appear of an apparently deliberate back door in Microsoft FrontPage services. The reports specified that the back-door password was "Netscape engineers are weenies!". ESR fell over himself: after his Halloween success this was the news item he had been waiting for! But here the result was quite the opposite -- a real fiasco. In his note Designed for Uncertainty, Matt Michie wrote:

    Eric Raymond wrote an article where he stated, "It's pretty clear. Anybody who trusts their security to closed-source software is begging to have a back door slipped on to their system -- with or without the knowledge of the people who shipped the code and theoretically stand behind it. ... Apache has never had an exploit like this, and never will. Nor will Linux...".

    Of course the next day, after some background and fact checking, it was revealed that the Microsoft back-door wasn't as bad as was originally reported. Further, ten days later a security firm found what could be considered a back door in Red Hat Linux. Ironically, the bug was in a piece of web software. The security advisory states, "The GUI portion of Piranha may allow any remote attacker to execute commands on the server. This may lead to remote compromise of the server, as well as exposure or defacement of the website."

    Wait a minute. Doesn't Red Hat "theoretically" stand behind the code they ship? How could this back door have been inserted into Open Source code? Didn't Mr. Raymond say that this couldn't happen to Linux? What do all the pundits who were railing against Microsoft's security holes have to say about this? Is there a double standard when it comes to reporting Microsoft? In this situation, the Linux press, such as Slashdot, are looking more like a sick imitation of what ZDNet used to be. Why is it "evil" when Microsoft FUDs Linux, but "advocacy" when Linux sites FUD Microsoft?

    Is it too much to expect unbiased reporting in the media?

    But the problem is deeper. Here is the opinion of John Viega, a Senior Research Associate in the Software Security Group at Reliable Software Technologies, an Adjunct Professor of Computer Science at the Virginia Polytechnic Institute, the author of Mailman, the open source GNU Mailing List Manager, and of ITS4, a tool for finding security vulnerabilities in C and C++ code. He has authored over 30 technical publications in the areas of software security and testing, and is responsible for finding several well-publicized security vulnerabilities in major network and e-commerce products, including a recent break in Netscape's security. In his recent paper The Myth of Open Source Security he wrote:

    ...Even if you get the right kind of people doing the right kinds of things, you may have problems that you never hear about. Security problems are often incredibly subtle, and may span large parts of a source tree. It is not uncommon to have two or three features spread throughout a program, none of which constitutes a security problem alone, but which can be used together to perform a security breach. For example, two buffer overflows recently found in Kerberos version 5 could only be exploited when used in conjunction with each other.

    As a result, doing security reviews of source code tends to be complex and boring, since you generally have to look at a lot of code, and understand it pretty well. Even many experts don't like to do these kinds of reviews.

    And even the experts can miss things. Consider the case of the popular open source FTP server wu-ftpd. In the past two years, several very subtle buffer overflow problems have been found in the code. Almost all of these problems had been in the code for years, despite the fact that the program had been examined many times by both hackers and security auditors. If any of them had discovered the problems, they didn't announce it publicly. In fact, the wu-ftpd has been used as a case study for vulnerability detection techniques that never identified these problems as definite flaws. One tool was able to identify one of the problems as potentially exploitable, but researchers examined the code thoroughly for a couple of days, and came to the conclusion that there was no way that the problem identified by their tool could actually be exploited. Over a year later, they learned that they were wrong, when an expert audit finally did turn up the problem.

    In code with any reasonable complexity, it can be very difficult to find bugs. The wu-ftpd is less than 8000 lines of code long, but it was easy for several bugs to remain hidden in that small space over long periods of time.

    To compound the problem, even when people know about security holes, they may not get fixed, at least not right away. Even when identified, the security problems in Mailman took many months to fix, because security was not the core development team's most immediate concern. In fact, the team believes one problem still persists in the code, but only in a configuration that we suspect doesn't get used.

    An army in my belly

    The single most pernicious problem in computer security today is the buffer overflow. While the availability of source code has clearly reduced the number of buffer overflow problems in open source programs, according to several sources, including CERT, buffer overflows still account for at least a quarter of all security advisories, year after year.

    Open source proponents sometimes claim that the "many eyeballs" phenomenon prevents Trojan horses from being introduced in open source software. The speed with which the TCP wrappers Trojan was discovered in early 1999 is sometimes cited as supporting evidence. This too can lull the open source movement into a false sense of security, however, since the TCP wrappers Trojan is not a good example of a truly stealthy Trojan horse: the code was glaringly out of place and obviously put there for malicious purposes only. It was as if the original Trojan horse had been wheeled into Troy with a sign attached that said, "I've got an army in my belly!"

    ...Currently, however, the benefits open source provides in terms of security are vastly overrated, because there isn't as much high-quality auditing as people believe, and because many security problems are much more difficult to find than people realize. Open source programs which appeal to a limited audience are particularly at risk, because of the smaller number of eyeballs looking at the code. But all open source software is vulnerable, and the open source movement can only benefit by paying more attention to security.
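Viega's emphasis on buffer overflows is worth illustrating with code. Here is a minimal sketch of my own (it is not from Viega's paper) of the classic unchecked-copy bug behind so many advisories, next to the bounded variant an auditor would look for:

    #include <stdio.h>
    #include <string.h>

    /* Classic overflow: strcpy() copies until it hits a NUL byte and never
       checks the destination size, so any input longer than 63 characters
       overruns 'buf' and can clobber adjacent stack data, including the
       saved return address. */
    void vulnerable(const char *input) {
        char buf[64];
        strcpy(buf, input);               /* no bounds check -- the bug */
        printf("%s\n", buf);
    }

    /* Bounded variant: copy at most sizeof(buf)-1 bytes and guarantee
       NUL termination. */
    void safer(const char *input) {
        char buf[64];
        strncpy(buf, input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';
        printf("%s\n", buf);
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            safer(argv[1]);               /* swap in vulnerable() to see the bug */
        return 0;
    }

Of course, Viega's deeper point is that real overflows, like those in wu-ftpd and Kerberos, are rarely this obvious: they hide in interactions between several features, which is exactly why casual "many eyeballs" review fails to find them.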

That's why in my first paper I raised a heretical question: "Is the global free software/open source movement suffering from a special type of bad advocacy?". For those who read the paper it's clear that my answer is yes. Bad Linux advocacy for me is the name of a Linux-based open source fundamentalism -- the dominant type of bad advocacy, which adopts simplistic and badly thought-out arguments in the spirit of an obscure cult (see also Lysenkoism). I've often wondered why it's so difficult to avoid hyperbole when discussing Linux. After all, it's just one of several free Unix kernels, and technically not even the best one. Actually, dramatic overstatement is not confined to those most closely associated with Linux evangelism (sometimes called the Slashdot crowd); it has also spread to those who are developing and implementing Linux. In his DaveNet piece The Sixth Sense, Dave Winer aptly stated:

What does open source have in common with Java?

Both are former panaceas. Like object oriented programming in the late 80s, if adopted, they would supposedly lead to magic synergies known only to the promoters, and breathlessly replayed by reporters looking for an easy story to repeat over and over and over.

Java was (and is) Pascal reincarnated. A virtual machine that ran on real machines. Good idea. Been done before. Open source is a tradition that dates back at least thirty years, if not longer. If you learned how to program in the 70s your teacher was quite possibly the source code for Unix (that's how I learned).

Fodder for the hype machine, these "trends" make some people rich, and take the focus off what's really happening, which is still the Web. Then they fade out, to be replaced by the next vacuous trend, and in the meantime, most developers work hard, outside the spotlight, to make their users happy. (That includes open source developers, btw.)

The thing that's truly offensive about these panaceas is that they are so exclusive and disrespectful of other developers. Until Sun embraced SOAP, the only Sun-endorsed way to communicate with Java apps was to convert your whole program to Java. The Java evangelists would cheerfully and seriously tell you to do this. Same with open source. Unless you shipped all your source on their terms the wall was insurmountable. These are outages of the first order.

When will we learn the lesson, that predates even the Internet, that all outages in software are eventually routed around. Try to control and you lose your place. The time-of-control is shortening all the time. IBM had a 20 year run. DEC was the leader for 10 years. Microsoft, for four, at most. Java had an even more brief run and it was over before Java could actually do anything that anyone wanted to do. If you're a student of technology history, the long-shot bet of being dominant looks worse and worse. Even if you manage to attain dominance, briefly, who wants to be the trend of last year?

Thankfully the open source rage is on its last legs. If you're honest and made a bet on open source, and want to get help from the press and investors, here's some open source (free) advice. Play it down.

Actually, open source software can be better than closed source and it can be worse; it's just a different class of software with its own strengths and weaknesses, and to claim that it is superior by definition is kind of naive. I think that RMS is right that the main advantage here is freedom, not quality or other real or perceived attributes per se. Actual benefits depend on the personality of the developer: only talented people can produce top-quality software, be it open or proprietary. And quality documentation of the algorithms used is often as important as (or more important than) the availability of the source code. I will often prefer a simpler and probably not that well written open source program to a closed source one, but I would prefer a closed source program with carefully documented algorithms and a clean interface to a large undocumented open source program. For non-trivial programs, algorithms are of crucial importance.

I would not go as far as to claim that extremist open source advocacy is a new type of religious fundamentalism, although Raymondism does share some features with a high-demand cult. Religious fundamentalism is extraordinarily dangerous; open source fundamentalism, at best, is annoying. It's just an example of what happens when a movement based on particular principles is repackaged and sold to companies as a new and effective means to turn a profit.

It's interesting that RMS is now typically criticized by ESR for being past his time, etc. There's a predictable pattern to many social movements of this sort: they begin with prophetic characters with a radical moral-ethical agenda (RMS), and then gradually become co-opted and assimilated to serve corporations and the market by opportunists like ESR. But in reality the FSF's "free as a principle" approach is close to the idea of "pro bono" and might be the most democratic approach.


Is "Vulgar Marxism" a legitimate scientific term ?

Yes, it is. In my response to the letter by Paolo Pumilia to First Monday I tried to answer this question in the following way:

I would like to reiterate that ESR's views on the economic superiority of open source are close to Vulgar Marxism with its economic determinism. Contrary to your impression, "Vulgar Marxism" is a legitimate scientific term. As Professor Robert M. Young stated in his work "Marxism and the history of science" [see R.C. Olby, G.N. Cantor, J.R.R. Christie and M.J.S. Hodge (editors), Companion to the History of Modern Science (1996), pp. 77-86]:

"The defining feature of Marxist approaches to the history of science is that the history of scientific ideas, of research priorities, of concepts of nature and of the parameters of discoveries are all rooted in historical forces which are, in the last instance, socio-economic. There are variations in how literally this is taken and various Marxist-inspired and Marxist-related positions define the interrelations among science and other historical forces more or less loosely. There is a continuum of positions. The most orthodox provides one-to-one correlations between the socio-economic base and the intellectual superstructure. This is referred to as economism or vulgar Marxism."

Claims that open source is a new development paradigm that is superior to all other paradigms of software development are pretty close to Vulgar Marxism.


Can you give an example of bad Linux advocacy thinking?

Some of them are discussed in my first and second papers. It's not limited to ESR. For example, here is a passage by Evan Leibovitch from his essay In the middle lies sanity:

The GNU Project was around long before Linux. But it's not a stretch to suggest that GNU was a relatively obscure phenomenon before Linux brought its benefits front and center to the computing mainstream. Stallman's belief that it is better to have poor-quality free software than high-quality proprietary stuff might have forever kept the GNU world view as a niche had Torvalds' pragmatism not brought it out.

Here both the facts ("But it's not a stretch to suggest that GNU was a relatively obscure phenomenon before Linux" -- IMHO it is a stretch, and the FSF was pretty well known long before Linux) and Stallman's views were deliberately twisted (I fully support the FSF idea that in many cases even a poorly written program with source is more useful than a much more polished closed source program; moreover, FSF products were mostly first-class software). There is also a clear attempt to diminish the importance of the FSF. Essentially he is saying that the FSF was narrow-minded and Linus Torvalds was a kind of "true pragmatist" (i.e. a "liberator of the oppressed masses"). But the fact that companies release software under the GPL implies that they see a business rationale behind the GPL. Now please tell me who in this case is the real pragmatist ;-)

Actually almost any major Eric Raymond paper can serve as a good example, because each contains a lot of oversimplifications, overgeneralizations, explicit and implicit attacks on the FSF, attempts to contribute to Linus Torvalds' "personality cult" and/or plain-vanilla fairy tales (that's why I called this phenomenon Raymondism in my first paper), but any of his interviews or commentaries about Microsoft are an especially good starting point. Here is one recent sample, called "Microsoft -- Designed for Insecurity", that clearly demonstrates that ESR did not research the problem, check the facts, or understand the issue he was writing about. It's just plain-vanilla pontification about the merits of open source, and I especially like the final phrase "Cockroaches breed in the dark. Crackers thrive on code secrecy. It's time to let the sunlight in." :-). Here is the story:

News services all over the world reported today (14 April 2000) that Microsoft programmers had inserted a security-compromising back door in their FrontPage web server software. Thousands of websites worldwide may be affected. Representative coverage of this story can be found at CNET.

Amidst all the nervousness about yet another Windows security hole, and not a little amusement at the passphrase the Microsoft programmers chose to activate the back door ("Netscape engineers are weenies!") there is one major implication of this story that is going unreported.

This back door seems to have been present since at least 1996. That's four years -- *four years* -- that nobody but the pranksters who wrote it has known about that back door. Except, of course, for any of the unknown crackers and vandals who might have found it out years ago. All the world's crackers certainly know about it now after the worldwide media coverage.

Webmasters all over the world are going to be pulling all-nighters and tearing their hair out over this one. That is, webmasters who are unlucky enough to work for bosses who bought Microsoft. At the over 60% of sites running the open-source Apache webserver, webmasters will be kicking back and smiling -- because they know that Apache will *never* have a back door like this one.

Never may sound like a pretty strong claim. But it's true. Because back doors (unlike some other kinds of security bugs) tend to stand out like a sore thumb in source code. They're hard to conceal, easy to spot and disable -- *if you have access to the source code*.

It's the fact that the compromised Microsoft DLL was distributed in opaque binary form that made it possible for the good guys to miss this back door for four long years. In the Apache world, every one of the tens of thousands of webmasters who uses it has access to the Apache source code. Many of them actually look at code difference reports when a new release comes out, as a routine precaution against bugs of all kinds.

Under all that scrutiny, a back door would be unlikely to escape detection for even four *days*. Anybody competent enough to try inserting a back door in Apache knows this in their bones. So it would be pointless to try, and won't be tried.

What's the wider lesson here?

It's pretty clear. Anybody who trusts their security to closed-source software is begging to have a back door slipped on to their system -- with or without the knowledge of the people who shipped the code and theoretically stand behind it. Microsoft HQ is doubtless sincere when it says this back door wasn't authorized. Not that that sincerity will be any help at all to the people who will have to clean up the mess. Nor will it compensate their bosses for what could be millions of dollars in expenses and business losses.

If you don't have any way to know what's in the bits of your software, you're at its mercy. You can't know its vulnerabilities. You can't know what *other people might know about it that you don't*. You're disarmed against your enemies.

Does this mean every single webmaster, every single software consumer, has to know the source code of the programs they use to feel secure? Of course not. But open source nevertheless changes the power equilibrium of security in ways that favor the defense -- it means back doors and bugs have a short, inglorious lifetime, because it means the guys in white hats can *see* them. And even if not every white hat is looking, potential black hats know that plenty of them will be. That changes and restricts the black hats' options.

Apache has never had an exploit like this, and never will. Nor will Linux, or the BIND library, or Perl, or any of the other open-source core software of the global Internet. Open-source software, subject to constant peer review, evolves and gets more secure over time. But as more crackers seek and find the better-hidden flaws in opaque binaries, closed-source software gets *less* secure over time.

Who knows what back doors may be lurking right now in other Windows software, only to be publicly acknowledged four years in the future? Who *can* know? And who in their right mind would be willing to risk their personal privacy or the operation of their business on the gamble that this is the *last* back door in Windows?

The truth is this: in an environment of escalating computer-security threats, closed source software is not just expensive and failure-prone -- it's *irresponsible*. Anyone relying on it is just asking, *begging* to be cracked. If theory didn't tell us that, the steadily rising rate of Windows cracks and exploits over the last eighteen months would.

Cockroaches breed in the dark. Crackers thrive on code secrecy. It's time to let the sunlight in.
--
Eric S. Raymond

He retracted this article later, but still insists: "The bottom line is very simple: Closed source can't be trusted, because you can't see what it's doing." I used to work in this area, and things are much less simple for me than for Eric Raymond. Anyway, here is his retraction:

The status of the "back door" I discussed in "Microsoft: Designed For Insecurity" is now uncertain. Since the problem was reported on 14 April by BugTraq and the Wall Street Journal, one of the people involved in discovering it has retracted his report. There is now dispute over whether this problem was due to a genuine back door or a server misconfiguration.

The general point of "Designed For Insecurity", though, is independent of this particular incident. As if to illustrate this, there is yet another back door report from 13 April that may affect hundreds of e-commerce sites. See

http://www.internetnews.com/ec-news/article/0,2171,4_340591,00.html

The key quote in this story is this one from Kasey Johns, webmaster of one of the affected sites:

"I want the right to look at the code, make modifications, and not be locked into whatever ghosts the author has hiding in there," said Johns.

The security and trust problems that come with that kind of lock-in are the real point here, not the details of any particular exploit or the name of the vendor attached to it.

The bottom line is very simple: Closed source can't be trusted, because you can't see what it's doing.


Can you give the principles of good Linux advocacy?

There are several papers of varying quality devoted to this issue. Generally, one should avoid the practices of used car salesmen like the plague. The Linux Advocacy Guidelines look pretty reasonable to me. They contain an extremely useful canon of conduct: "Avoid hyperbole and unsubstantiated claims at all costs. It's unprofessional and will result in unproductive discussions." IMHO the canons are still extremely current.

In his interesting paper on Linux advocacy, Joe Barr details the reasons he's no Linux advocate, and the reasons he's not into advocacy in general. Among the principles that he proposed, I would agree with the following:

  1. Don't promise too much
  2. Participate in the free source process
  3. Offer to help someone start using Linux
  4. Consider the other person's viewpoint
  5. Donate previous distributions to others
  6. Be accurate in what you say

... What really sets them apart is that they stand up in front of people as representatives of Linux, or the Linux community. What they say and do in that role, they are saying and doing on behalf of Linux. That's advocacy as defined in my book -- or, for that matter, in the dictionary, which describes lawyers representing, or advocating for, their clients.

...I do advocate Linux as the right choice as an operating system more and more often these days. In the bad old days, I did the same with OS/2. Further, I have quite often taken Microsoft to task for its shoddy products and shoddier business practices.

...Secondly, it matters because, when you become an advocate, you lose the right of free speech. As an advocate you begin to speak from a script that you may or may not have had a hand in writing. You begin to lose your own voice, your own style, your own delivery. Everyone you're speaking for is watching and listening to make sure that what you are saying agrees with their take on the party line. Without apology I will say simply that I am not a political animal. Never have been, never will be.


Why do you object to the current Linux Gold Rush? After all, it helps to produce more open source software...

"I'm very idealistic.
I want to make the world a better place
 for me to live in."

From an old Peanuts cartoon

I do not actually object. But the sad truth is that a lot of rank-and-file developers might lose money as share prices go down below the level at which they bought them. The fluctuations of Linux companies' stock valuations just remind me that the US technology stock market can be considered a new type of casino. And playing the Linux card is a very dangerous game. I'll be the first to admit that proprietary companies (with Microsoft and Oracle as the leaders) have abused their end of the spectrum. But Linux companies, instead of solving that problem, actually created a second one: it's rather difficult to generate enough revenue off of open source software in this new commercial space to justify the valuations at which they were sold at IPO. Moreover, if you overshoot and hire people faster than you can increase revenue, you can't survive in the long run. Recently Corel announced plans to lay off 20% of its workforce. Linuxcare has postponed its IPO and is looking for additional financing to stay afloat. TurboLinux, once a bright star in the Linux heavens, has declared bankruptcy.

With multimillion-dollar losses each quarter (for example, RHAT's net loss for the first quarter of fiscal 2001 was $14.9 million), IPO capital will not last forever: a decade of losses on that scale amounts to roughly 40 quarters x $14.9 million, or about $600 million. Probably only Red Hat and Caldera have enough money to survive that long (in addition to IPO money, Caldera has several hundred million dollars of Microsoft money due to the successful DR-DOS lawsuit). In his Linux Journal column Open Source is Dead: I Read it on the Net, Jason Compton wrote:

...Dreamers who think open source will lead directly to a global gift economy by 2003 or some nonsense like that aside, the challenge for "commercial" open source has been and will continue to be finding a sustainable support and services revenue stream to make the books balance at the end of the year. If you take a little peek over at LinuxCare and their recent financial and management troubles, you will see that this sort of business plan takes careful planning, execution, and luck."

...Because the truth is that open source doesn't cure cancer, doesn't lead to a global gift economy, and doesn't produce perfect software on the first, second, or even fifty-seventh try. Hell, I could put together a laundry list right now of glaring flaws and shortcomings in Linux that I blame squarely on open source development and developers. But that's another column.

I tried to warn that, as in science, any serious OSS development involves certain risks and requires not only talent but also a certain courage to stay away from the marketplace, so as not to permit the marketplace to enslave you. Commercialization of open source is a mixed blessing, and it does attract the wrong people to the movement. With the gold rush, the type of people the movement requires for survival (devoted academic researchers, aspiring students in computer science departments, and volunteer programmers from industry all over the globe, who slowly and persistently implement their new vision, polish programs important for the community, or provide an alternative to an existing but expensive tool) find themselves on the sidelines. They cannot compete with full-timers at OSS startups and feel pressure either to move into them, to accept secondary roles, or to quit.

There is another important danger that commercial development represents -- the danger of featurism. Now that many of the Linux distributors have gone public, their first job is to make money for their stockholders, and they have the money to make open source products as complex as, or even more complex than, competing commercial products. The simplest way to differentiate themselves is to have software that can compete feature-wise with the best closed source products, including Microsoft's (in a really Zen way turning, in the process, into open source bloatware). The next logical step is to develop products that take advantage of a particular distribution (there are already several packages developed specifically for Red Hat that are supported only on Red Hat, although they might run on other distributions too). If every distribution can run it, how can they keep/attract users and generate a profit?

In the same way, the independence of the Linux press, especially its commercialized part, is questionable. In order to survive they need to get advertising dollars whatever it costs, and that means they are essentially a highly biased "party press":

Corporate press and self-censorship. (Score:4, Interesting)
by dominion ([email protected]) on Wednesday June 07, @02:18PM EDT (#44)
http://www.tao.ca/~dominion

    When I was in Washington D.C. for the A16 actions against the IMF/World Bank, some activists from England and I had a conversation with a journalist from some corporate newspaper (I forget which). The argument was over self-censorship, the fact that most journalists know where their bread is buttered and are never censored because they know to never write an article which merits censorship.

    Now, the journalist insisted that he had full freedom of the press, and could write an article on anything he wanted without getting fired (He did admit that it's very possible that the editors wouldn't print it).

    I told him to challenge the assumption that he had complete freedom under the totalitarian structure of his workplace. I asked him to dig up a story on the parent company or a majority stockholder of his newspaper. Something really incriminating, which is easy since so many large corporations are involved in criminal activity.

    If he got the article printed, then I would concede the argument to him, but if it got censored, or if he felt repercussions for challenging the authority of his workplace, then I win.

    His response? Something to the effect of "Well, I don't need to test my boundaries, because I already know that I have no boundaries."

    Thus, we have self-censorship.

Michael Chisari
[email protected]
-- www.infoshop.org -- www.spunk.org -- www.radio4all.org

What bothers me is the fact that this self-censorship dramatically increases with the consolidation of Linux companies, which has produced strange bedfellows like Slashdot and VA Linux. As another Slashdot reader put it (comment #156):

All of us would be well-served by keeping at least a healthy suspicion of VA. The sad part about the whole Linux movement is, that when you get down to it, VA and companies like them don't care about you. They don't have to care. They're going to do everything within their power to screw their way into the black and keep the board of directors happy, just like any other publicly held business.

After the VA Linux acquisition, the editorial independence of Slashdot is highly suspect. Why would a company such as VA spend $900 million on a company that lost $3 million last year? Call me a pessimist, but as far as I've seen, one of the reasons a company spends such a huge amount of money is that it expects to profit from the implicit advertising and strong marketing of the VA brand provided by Slashdot. Actually, this is very consistent with VA's previous acquisition of the Linux.com domain in 1999. In this case VA tried to leverage its close relationship with the Linux community as the key point of differentiation from similar hardware vendors, and that strategy paid off handsomely during the IPO. Without strong marketing of the VA brand as the "leading open source company", VA Linux could find itself in trouble when much larger players like Compaq (with a 25% share of total Linux server shipments) and IBM (with a 10% share) preinstall Linux on their servers, have much better research, and are more diversified than VA. As one Slashdot correspondent described this model:

Selling this open source model by appealing to logic appears to be on shaky ground. How about a slick marketing campaign instead, with lots of slo-mo, leather, and hypnotic music.

Actually, there is pressure on Linux software vendors to become hardware vendors. In the Linux market, where it's axiomatic that software can be downloaded for free, small vendors are finding that they must package software as a hardware appliance to have a viable commercial product. For example, that's the strategy at Workfire.com, whose Web-accelerating cache algorithms are packaged in hardware, with Linux as an embedded operating system. That means that Linux-related products might concentrate in appliance and server niches, where profits from hardware might compensate for the cost of developing open source. Some predict that the number of players in the Linux software area will decrease significantly a couple of years from now. That means that the job security of those who are employed by Linux software companies is not that assured.

The second part of the problem is that Make Money Fast open source IPOs brought quite another type of people into the movement -- people who can distort reality without any problem if that serves their needs. In some aspects the situation starts to look like a commercially oriented sect that can distort information for the sake of enlarging the movement; this tendency has actually increased with the commercialization of Linux media like Slashdot.org. The "Me use Linux too IPO open sore Linus open ebiz ASP solutions" satire aptly described the crazy atmosphere of the second half of 1999 -- the time of the Red Hat and VA Linux IPOs:

The final destruction of what used to be a charming little OS scene arrived today, Monday, December 13, 1999.

Linuxtoday is spewing forth "me-use-linux-too-IPO-open-sore-Linus-open-ebiz-ASP-solutions" press releases from every backwater, buzzless Joe Q. Corp with a hotmail account...

OS figureheads are being courted for interviews with a veracity that is usually reserved only for pathological child molesters and internet CEOs...

Forty thousand "Embedded Internet eSolution Firewall Privacy Biz Remote" solutions are being deeply discounted to the five people who care enough to add one more yeahd00ditssecure.pl script to their boxes...

2-bit players are buying half-bit companies without a dime to their names just to get at the word Linux in their press releases...


Are key open source developers volunteer developers?

Key people in Linux development are paid developers. Actually, this was the reason why Linux became the most popular GNU-based kernel -- distributors hired most of the open source developers and converted them into paid developers. Some developers became cofounders of Linux companies, some highly paid employees of these companies. In reality, Linux companies' staff listings look like a roster of open source developers. That means that a considerable part of open source software is not "donated" but "taxpayer-funded" (university-funded) or "commercially funded" (current versions of Linux).

Even Linus Torvalds cannot be called a volunteer after probably just the first two years of kernel development: after that his "hobby" was financed by the University of Helsinki (which allowed Linus to do development on his university job), then Transmeta picked up the bills. Later the IPO stock gold rush remunerated him quite nicely, probably at a level few leading commercial developers enjoy. I would say that Linus Torvalds probably belongs to the first dozen of the most highly paid developers in the Unix world. Without commercial developers and the support of development by commercial distributors, Linux in its present form would be impossible. Most significant open source products are now developed by paid developers (the staff of Linux companies) and in this respect are not that different from commercial products. It's just a new commercial software development paradigm that can be called complexity-level based commercialization. Actually the FSF from the beginning used paid developers to develop software, and Linus Torvalds after the initial two years or so became a paid (by the University of Helsinki) developer. That means that CatB's claim that Linus Torvalds is a volunteer developer contradicts his biography. For example, it looks like the decision to include SMP in the kernel was commercially dictated, including what, in the paper Lawyers, Guns and Money -- KDE and Corel: Working in the Real World by Dennis E. Powell, was called "valuable considerations":

The dominant view is that this is generally a good thing.

And it can be. But it can also become a bad thing, and unless some lines are drawn, it's a sure bet that it will.

... ...

Even uglier, one could imagine--and this example is entirely hypothetical--a situation where the top people in a development effort accepted what lawyers call "valuable considerations" in exchange for looking the other way as a project got steered in a way that benefits a particular company. Nothing illegal about it.

Less sinister but troublesome nonetheless is the simple culture clash when open source meets commercial software. Corporations play it close to the vest, open source does anything but. The place where they meet can be awkward and frictional--it's hard to imagine them being otherwise. The potential for abuse is real as well. Indeed, Jon "Maddog" Hall of Linux International has done and is doing much to educate corporations about a crucial reality: they don't own the system and they cannot mandate changes in it. He has helped many companies understand the differences between closed, proprietary software and open source.

... ...

The leading voice of open source is Eric S. Raymond, again one of the few in the Linux community recognized just by his initials. ESR is watching more and more corporations become involved; his philosophy embraces the idea that closed development is old and creaky, while open source development is young and robust. A company will do it the new way or die:

"I'm not worried. What I see is corporations realizing that if they want our results, they have to buy into our process--and if they don't, they'll be eaten by a competitor who does."

... ... ...

I don't know of any lawyers who would be eager to try to defend the GPL if offense came to judge and jury. Commercial software companies have over the last decade tended to give the best parking spaces not to programmers but to lawyers. It would take some heavy-duty mobilization to take on a big software company and not die of attrition before a dispute ever came to trial. I think there will be offenses, as surely as there will be another VBX macro virus.

It's frightening, the idea that Linux and its applications could fall victim to its own success. But wherever there's money, big money as there is now in Linux, there is someone very clever who will try to take it away.

Corporations and open-source development are for the most part diametrically opposed as to organization and goals. Neither is bad; indeed, neither is better and both can be abused. But when the two come together, it's oil and vinegar--it needs a little shaking up before it can go onto the salad. And these are indeed the salad days of Linux.

It's not difficult at all, though, to imagine the companies that make commercial software for other platforms looking upon Linux as some kind of odd little brother. And, with a smile, saying, "I'm very idealistic. I want to make the world a better place for me to live in."

The paper also contains a very nice quote from an old Peanuts cartoon that catches the essence of the problem: "I'm very idealistic. I want to make the world a better place for me to live in."


Why the success of Linux is mainly a manifestation of the Unix Renaissance that would have happened with or without Linux

Actually, what is new about Linux from the technical standpoint? Not much. It's just another attempt to refine a Posix-compatible kernel, and technically it's difficult to call it the best free Posix-compatible kernel. The main difference from other kernels is that from the beginning it was oriented toward the most widely available and cheap hardware (the 386sx) and was distributed under the GNU license. I would agree that the Unix kernel is an extremely interesting operating system kernel that remains viable despite numerous attempts to create new and better kernels. And that Linux is a viable implementation of Unix in a Microsoft way, if you wish. My point is that Microsoft does not innovate on purpose, and the apparent counterexamples (VBX-COM, OLE, etc.) are accidental. It just brings other people's innovations into the mainstream. The Microsoft paradigm is to be a "close follower", not an "innovator". The same is basically true about Linux. See the Slashdot discussion Systems Research Is Dead, devoted to Rob Pike's paper Systems Software Research is Irrelevant.  Therefore for me Linux kernel development is not a new and revolutionary development model, but just one of the most vivid demonstrations of the Unix Renaissance. I see it more as a logical continuation of the famous GNU project of the FSF -- the project with strong connections to MIT. I am convinced that this connection was crucial to the success of the GNU project, just as the connection to the University of Helsinki immensely helped the Linux project during its early, most difficult stages. Universities are really, really nice places to work if you want to advance the state of the art in technology.


Why competition with Microsoft can be unhealthy for the OSS movement ?

It depends on the dose. Like commercial involvement, it's a mixed blessing, and I will argue below that it might increase the "cult of personality" danger.

As I noted in one of my papers, the competition with Microsoft, as well as the anti-Microsoft stance, serves as an important organizing force for the movement, but at the same time there are dangers. First of all, competing with Microsoft in all areas (including feature-wise competition) is a self-defeating strategy -- you need to dramatically increase both the size of the products and the size of the community. But the pool of talented software developers is very limited, and a large developer community inhibits innovation to the extent that a given project may stagnate. As Jamie Zawinski put it:

"There exist counterexamples to this, but in general, great things are accomplished by small groups of people who are driven, who have unity of purpose. The more people involved, the slower and stupider their union is."

Moreover, loss of focus leads to loss of architectural integrity. Linux for Intel is one thing. Linux for ten different CPUs is quite another, much more complex project. We need to fight "for simplicity", not "against Microsoft".  Open source in general, and Linux in particular, has a lot of advantages (as I noted above, especially in education, in developing countries and for startups) that need to be developed, but it's not necessary for Linux in general, and Linux applications in particular, to match the best commercial offerings "feature by feature" (and thus match them in complexity too), or to be faster than commercial offerings. In many respects it's just a different animal, and the value of simplicity often matches or exceeds the value of featurism. That means that an open source version of MS Word is not necessarily a good thing. While all in all still very good, the latest versions of MS Word are designed for the average user. Why not use TeX or a good XML editor if you are a power user ?  Actually MS Word has at least one serious limitation for power users -- there is no ability to view the markup of the document in raw form, as in WordPerfect. Also, its macro capabilities are clumsy and not very well integrated...

But there is one more problem with speed and open source development. I feel that if speed is really important, then authoritarian methods have distinct advantages over democratic methods. "Speed kills," and the first victim is democracy (as Frederick Brooks first noted in his analysis of OS/360 development). A purely authoritarian style will lead to the creation of a project elite and a strict hierarchy (a "benevolent dictatorship," as it is sometimes called in Linux circles). Linus Torvalds' love affair with the Linux kernel is now in its tenth year, and he has firmly positioned himself as an irreplaceable coordinator of kernel development. How long can he run the race, and what happens if he is run over by a truck ?

I think that any attempt to speed up an existing OSS project beyond certain limits could lead to unforeseen consequences, including authoritarian changes in the project's social structure that are dangerous in the long run. Thus head-to-head competition with Microsoft across the whole spectrum of available hardware (that means both in the server and the desktop space, as promoted by Raymondism) is a dangerous threat to the open source movement as a whole. In order to compete with an authoritarian organization like Microsoft, the speed of delivery becomes a matter of survival, and that means the creation of an authoritarian organization of one's own.


Why is simplicity so important for the OSS movement?

Small is beautiful

Without simplicity the value of open source is much less. My main conviction is that the open source movement first of all was, and is, about simplicity -- about an attempt to avoid the overload connected with what Harvard Business School professor Clayton M. Christensen called the technology mudslide: a perpetual cycle of extending an existing architecture in pursuit of the latest techno fashion, which Microsoft represents so well. But this is no longer the case, and the complexity of the leading OSS products is now pretty close to the complexity of their commercial counterparts. Take SMP, for example -- Linux is mostly about the desktop, and with current CPU speeds is SMP really necessary or beneficial in the desktop environment, and does it justify the corresponding complications in the kernel?

Will an open source MS Word clone solve the problem connected with MS Word's dominance?  Or will it just repeat the same cycle of growth until it is unable to run in, say, 16M? BTW, early Linux kernels ran in less than 2M, and it was possible to use Linux on a PC with just 2M of memory (a palmtop by today's standards ;-), although 4M was preferable.  This vicious cycle, in which "Coping with the relentless onslaught of technology change was akin to trying to climb a mudslide raging down a hill. You have to scramble with everything you've got to stay on top of it. And if you ever once stop to catch your breath, you get buried," is pretty disturbing.


Initially OSS was a movement at least partially connected with seeking "the other way" -- a simpler answer to the complexity of closed software and to the lack of flexibility and reliability that comes with it. And as such it was a constructive way to avoid information overload. The philosophy that "small is beautiful" -- that smaller, simpler tools with source that the owner can adapt may be better and more productive than complex and untouchable commercial monsters -- is now lost. The current mood (the mood of "world domination") is "I want absolute parity no matter what, even if you (the developer) have to die for it."

The greatest idea of OSS software seems to be that a tool should be simple enough to be understandable and modifiable by a power user. Moreover, for this part of the user population such a modifiable simple tool is often more flexible and powerful than tools that are much more complex. Unless a power user is able to learn how to modify the tool to meet his or her needs, the difference between open and closed tools exists mainly in price. That's why I think that Perl-based tools are superior to similar tools written in lower-level languages like C -- they are more easily modifiable. And this is a great idea -- a real alternative to the Microsoft vision of huge all-singing, all-dancing products that meet any foreseeable requirement (that most users will never have). Actually, I think the fact that the major scripting languages are open sourced has a very deep meaning indeed -- it helps to achieve the core idea of the movement: simple and powerful tools. BTW, from this point of view fetchmail (a small product that could easily be written by one programmer, yet paradoxically was used in CatB as an example of an OSS project similar to the Linux kernel -- a really complex project) uses the wrong implementation language. It definitely needs to be rewritten in a different language -- a scripting language would be a much better implementation language for such a tool (see the sketch below).
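
To make the point concrete, here is a minimal sketch -- mine, not fetchmail's actual design; the server name, user and password are hypothetical placeholders, and it assumes the Net::POP3 module shipped with the standard libnet distribution -- of how small the core of a mail retrieval tool becomes in a scripting language:

    #!/usr/bin/perl
    # Minimal POP3 retrieval sketch -- a hypothetical illustration,
    # not a fetchmail replacement. The server, user and password
    # below are placeholders.
    use strict;
    use warnings;
    use Net::POP3;

    my $pop = Net::POP3->new('pop.example.com', Timeout => 60)
        or die "cannot connect to POP3 server\n";

    # login() returns the message count ("0E0" for an empty mailbox)
    defined(my $count = $pop->login('joe', 'secret'))
        or die "POP3 authentication failed\n";

    for my $msgnum (1 .. $count) {
        my $lines = $pop->get($msgnum);   # reference to an array of lines
        print @$lines;                    # just dump each message to stdout
        # $pop->delete($msgnum);          # uncomment to empty the mailbox
    }
    $pop->quit;

A tool of this size is something a power user can extend with local delivery, filtering or logging in an evening -- exactly the kind of adaptability argued for above.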

What I am afraid of is that in the current race to match commercial competitors this advantage of open source may be completely lost. In my understanding OSS benefits are not scalable -- like the sound barrier, there is a complexity barrier beyond which open source tools become closer to proprietary commercial tools than one might think. Many OSS products are now produced by salaried programmers and are naturally free from the limitations of volunteer development and its underlying requirement of architectural integrity and simplicity.  They are becoming as hairy and complex as the commercial products they compete with. In order to meet the (often conflicting) demands of the commercial marketplace they are moving farther and farther along the road called "One Microsoft Way".  And along this road they lose both simplicity and conceptual integrity, the principal ingredients of open source if we assume that users need to be able to read the source.  In a way, open source development may be doomed by its own success. As Davor Cubranic recently noted in his paper Open-Source Software Development:

Ted Lewis in his "Binary Critic" column in IEEE Computer [10] voiced a rare criticism aimed at the open-source development process itself (in particular the "bazaar" approach) and its ability to cope with its own success. Specifically, Lewis thinks that as open-source projects' popularity grows and they begin multiplying, the pool of talented programmers who devote their free time to such projects will become increasingly scarce. Furthermore, Lewis claims, as those projects mature, they will also grow in features and complexity, eventually overwhelming the resources of their handful of core developers and the capability of their typically ad-hoc organizational structure to cope with those stresses. This sentiment is also echoed by Microsoft: "The biggest roadblock for open source software projects is dealing with exponential growth and management costs as a project is scaled up in terms of innovation and size" [19].

In his osOpinion paper Simplify!, Monty Manley wrote:

I was an English major in college, and this probably influences how I approach programming: I view source code as I do certain kinds of technical literature -- not just as *instructions*, but as a thing in and of itself. Well-written source-code can be beautiful if you know how to read it; it can be a marvel of clarity or of devilish subtlety. Sometimes the code scans wonderfully, reading almost like a digital-age poem.

Many years ago I read a very battered tenth-generation photocopy of "Lions Commentary on Unix" (also called the Lion Book), and I remember thinking that I had passed some kind of event horizon. Looking at the original Unix sources (with John Lions' wonderful commentary), I came to realize that good computer programs are not those that simply *work*; good computer programs are also elegant, efficient, and (when they need to be) devious and clever. Ken Thompson, Brian Kernighan, and Dennis Ritchie had discovered something profound: small things could be both beautiful and amazingly powerful.

Begin with the C language itself: it consists of perhaps 50 keywords, and has a fairly simple syntax. It is not a complicated language to learn, although it can be used in amazingly complicated ways. But rather than try to invent a language for beginners (like BASIC), or a teaching language (as Niklaus Wirth did with Pascal), they invented a language for *programmers*. That is, C was written to get things done on digital computers. It was portable. In a time when most pundits thought operating systems had to be coded in machine-language, Thompson, Kernighan, and Ritchie built an operating system almost entirely out of platform-neutral C code. About 95% of the original Unix kernel was -- and is -- straight C. The original Unix kernel was composed of about 10,000 lines of code. (John Lions wryly noted that later versions of Unix fixed this problem.)

... ... ...

But why is this a good thing? Why is it desirable?

Not everyone would agree that it *is* desirable. A certain company in Redmond, for example, subscribes to the belief that more and bigger is better. It is estimated that Windows 2000 contains between thirty and fifty million lines of code. By contrast, a basic Linux installation (kernel, shell, and essential libraries and tools) takes about five million lines of code, more or less. Why is one approach "better" than the other?

The answer depends on how you feel about software stability and robustness. For better or worse, most software users have decided to value features over stability -- we would rather have something colorful and featureful that crashes occasionally rather than something more quotidian that crashes not at all. Every line of code added to a piece of software increases the chances that something will go wrong: not just programming bugs like a misplaced minus sign or a rounding error, but more obscure problems. Everyone who has used Windows is familiar with "DLL hell" -- incompatible libraries that cause software not to work correctly. Each library by itself works fine, but they cannot reliably work together. Multiply this by several hundred libraries and you begin to see the scope of the problem. (And lest we forget, Linux has its own library dependency problems; we all remember the libc5/glibc conversion nightmare of a couple of years ago.)

It may be that software, like the biosphere itself, has a natural tendency to complexity. Certainly history lends credibility to this point of view -- almost no program ever gets *smaller* as time goes by. Features are added, capabilities are enhanced, and the code grows and grows. Programmers have neither the time nor the tools to exhaustively test these programs completely, and so they always ship with unfound bugs...

...The hidden evil here is the "ease of use" fallacy. Programmers assume that end-users need lots of hand-holding and visual cues in order to be productive with their software, and oftentimes this is true. But the *reason* for this is that software is too complex; so it is a circular problem. For every feature added to a piece of software, another wizard, help file, or speed-key combination must be added to access that feature. This feature must interact with all *other* features, sometimes in unexpected and contrary ways. These so-called "easy to use" programs require five-pound tomes to explain their workings. Users can sometimes spend years learning all the features of a given program, and become terrified when faced with the prospect of having to learn something else. Users will put up with a great deal -- crashes, anomalous behavior, bad implementations -- rather than switch due to the learning curve involved. And yet, if asked, they will usually wish for a more stable system.


Why are you stressing the value of algorithms, architectural integrity and quality documentation of internals in open source?

I do not believe that open source is a panacea. Moreover, I think that a poorly documented large open source program is closer to proprietary software than one might think. And I prefer a well-written closed source product that implements a brilliant algorithm to an open source one that implements a mediocre algorithm. The Mongol horde approach advocated by ESR in CatB does not work. In software, too, many cooks spoil the broth.

One needs to understand that, after all, algorithms are a much more valuable and permanent thing than any source code. And contrary to the claims of some OSS movement poster boys, reimplementation of the Unix kernel does not advance computer science. It does lower costs and makes Unix more suitable for education, but that's it. IMHO the contribution of Donald Knuth is several orders of magnitude more important than all the contributions of the three main OSS figures (Stallman, Torvalds and Wall) taken together.

At some risk of oversimplification, I would say that the idea of Minix, from which Linux 1.x was derived, was really brilliant -- a small operating system with carefully documented internals running on the most widely available hardware (the 286 at the time). Linux started as a "Minix on steroids" for the 386, but it has now moved to a level of complexity close to that of commercial operating systems, while being much less documented than the original Minix. That means that the Linux 2.x kernels, including 2.4, can be considered yet more large operating system kernels that very few people fully understand. But the presence of the kernel source code in the distribution makes simplification possible and, IMHO, highly desirable. After all, for students the top priority is not contributing to the most fashionable OSS programming project -- the top priority should be to learn real computer science. Here again I would like to stress that algorithms are actually more important than source code. That's why I think Linux mini-distributions are so important. And I would applaud a book with the title "How to Simplify the Linux Kernel and Linux OS".

Again, let's put it straight. My favorite role model for the open source movement is Donald Knuth. Not Richard Stallman with his great idea of free software for all, and not Linus Torvalds, who in a difficult and exhausting race managed to be the first to implement a Posix-compatible kernel under the GNU license but never managed to write a single coherent paper about its internals.

Although I have great respect for both, I believe that the contribution of Donald Knuth -- the author of the classic three-volume series The Art of Computer Programming, the creator of the brilliant open source typesetting systems TeX and Metafont, and the author of the five volumes of Computers & Typesetting -- is more important on the absolute scale, if such a scale exists at all. What is interesting is that this great computer scientist believes that programming can be an aesthetic experience, much like composing poetry or music. This is very close to my own views on programming.

That does not mean that I think free software is unimportant. On the contrary, I think that as part of the Unix Renaissance OSS proved to be the most important democratic movement in software development in the XX century. Actually, there are great programmers working on OSS, which makes the analogy with the Renaissance stronger. It is extremely valuable for education, for developing countries, for startups and for underprivileged minorities. And free redistribution is really a very important and very democratic feature of open source.  I believe in the power of social cooperation, and in this respect my views are similar to those expressed in For the people, by the people. I just oppose the naive, bordering on blind chauvinism, vision of open source software as a total replacement for commercial software no matter what.

Returning to one of my main points, I would like to stress again that excessive complexity (a natural consequence of the attempt to compete with commercial developers, because for commercial developers complexity is a commercial advantage) can be harmful both for OSS projects and for OSS developers: with complexity, the ability to read the source code becomes pretty limited. From this point of view Bad Linux advocacy, as an influential tendency within the OSS movement, violates the famous KISS principle in order to make OSS commercially viable. Actually it wants to make OSS commercially viable at any human or technical cost. I view all this Linux IPO gold rush rather skeptically, with ESR as one of several newly minted multi-millionaires (please read his manifesto after the VA Linux IPO, aptly called Open Source Rich Opens Mouth by Wired).  I do not think that the end justifies the means.  I am afraid that along with further popularization of the idea of OSS and additional financial support for selected (commercially viable) OSS projects, Bad Linux advocacy might bring substantial negative consequences for the movement.  Some OSS developers may become prisoners of a new "gilded cage," and a large part of the academic-style freedom that they enjoyed may disappear.  When you take money you had better think about the explicit or implicit conditions of the transaction. The other problem is that after a certain level of complexity the difference between open source and closed source becomes much smaller than one might expect. I touched on this question in my second paper.

Here is an interesting supporting argument from the "Interests of Amicus Curiae" brief submitted to the Microsoft trial:

Second, a licensing remedy for Windows, which might involve publishing the source code, mandatory licensing, or providing open-source versions, would also be relatively easy to accomplish. In principle, these options would allow other companies to compete directly with Microsoft in the various platform markets. In practice, however, because the source code is long, complex, and difficult to adapt, rivals are not likely to be able to compete effectively, even with a license to the source code. At a minimum, outsiders will need to have extensive access to Microsoft's programmers and middle managers. Mandating that Microsoft's employees cooperate in helping its competitors produce a competitive product seems a tall order for any Court to write and enforce.

One of the problems of the commercial software industry is the pace of development, or as it is often called, "time to market". This pace rarely allows time to adequately polish the architecture of a new product; it presses for adding features indiscriminately, without spending the time to understand and thoughtfully develop said features. Although talented developers may manage to do an excellent job even in these difficult circumstances, largely due to these pressures some commercial software products are created and enhanced without due attention to the underlying architecture. One of the important advantages of open source is that development can be slower, with more attention to architectural issues and to correcting the architectural problems discovered. If Linux kernel 2.2 is any indication, the dominance of commercial distributors can damage or eliminate this advantage. This danger needs to be discussed, not masked by propagandist phrases like those in CatB:

Perhaps this should have been obvious (it's long been proverbial that "Necessity is the mother of invention") but too often software developers spend their days grinding away for pay at programs they neither need nor love. But not in the Linux world - which may explain why the average quality of software originated in the Linux community is so high.

Actually, in some cases it's not high enough, and the rush to release new versions can be one explanation of this problem. Despite the tremendous marketing momentum, among free and open sourced Unix kernels the Linux kernel is still not considered the best by some of the most influential members of the Unix community (Ken Thompson is one such member, Keith Bostic another). The gap is narrowing, but Keith Bostic, for example, who definitely understands the Unix kernel at an expert level, considers Solaris the best, with FreeBSD a close second. And as for the love, the question arises how long any single developer can work on a single program without starting to hate it. I would like to point out again that Linus Torvalds has already had a ten-year love affair with the kernel ;-).


What is the OSS nobility and why do you stress the stratification of the Linux community ?

The OSS nobility is the group of people who became rich during the first, most profitable wave of the Linux gold rush (both Torvalds and ESR belong to this select group).

And as is the case with any nobility, the interests of this group can differ from the interests of the rank-and-file members of the community. As large stockholders they are really interested in the commercial success of Linux, and that became the top priority of this group whether they want it or not. If that required adding SMP to the kernel, so be it.  Although Linus Torvalds initially had objections to the idea (he considered Linux more viable as a desktop), he could no longer withstand the pressure, and now we have SMP. Moreover, SMP became a point of differentiation between Free/OpenBSD and Linux.

I see several other problems in this stratification of the movement.  For those who do not belong to the OSS nobility, with its fat salaries or stock options or both, the dilemma is this: either you exhaust yourself in the long race trying to satisfy the (often unrealistic) demands of a commercial marketplace in which others (not you) are competing (and to a certain extent using you as a slave), or you might be viewed as a traitor to the movement. Money tends to divide people, and from that point of view I would like to classify OSS developers into five partially overlapping groups:

History teaches us that the first thing that happens in a setting where everyone is treated equally but some are more equal than others is a brain drain. Russia saw it happen, as has every other large-scale foray into collectivism.  Can this happen with OSS developers ?


Why the postulate about the high quality of OSS products in comparison with commercial ones is partially a myth ?

Bad Linux advocacy postulates the higher quality of open source. Proprietary software is considered a sub-optimal solution (or at least usually sub-optimal). Here is a relevant ESR quote (quoted in the Slashdot story Thus Spake Stallman, with reference to Salon, Sept-Oct 1998):

"Either open source is a net win for both producers and consumers on pure self-interest grounds or it is not. If it is, you cannot lose; if it is not, you cannot (and *should* not) win."

Actually I discussed this problem in my second paper. Here I would like to add just one interesting example from the Linux Kernel Mailing List:

Re: Update: Subtle data corruption of TCP streams

From: Wietse Venema ([email protected])
Date: Sat Mar 25 2000 - 00:20:12 EST

Next message: Jeff Dike: "Re: VM modules in kernel?"

Seems that routers don't do the packet rewriting that I'm observing here. It's the domain of bandwith management systems. There are at least four players. I got reactions from several.

Disabling TCP options in the Linux kernel does help, especially when you're talking to non-Linux systems.

        Wietse

Austin Schutz:
> [bugtraq removed from cc:]
>
> I've noticed this problem on my network. I'm behind an Ascend pipeline 50. I had thought that this
> might be the root of the problem. Supposedly there's something similar reported as a problem with
> older ascend software.

>
> Any idea what the other people experiencing the problem are using as gateway routers?
> Austin
> On Fri, Mar 24, 2000 at 04:26:13PM -0800, Blu3Viper wrote:
> > How about taking this to linux-kernel for discussion since it appears to be
> > a development kernel bug possibly?
> >
> > -d
> >
> > On Fri, 24 Mar 2000, Wietse Venema wrote:
> >
> > > Date: Fri, 24 Mar 2000 10:38:36 -0500
> > > From: Wietse Venema <[email protected]>
> > > To: [email protected]
> > > Subject: Update: Subtle data corruption of TCP streams
> > >
> > > Apparently, once instance of this data corruption problem is caused
> > > by an unnamed bandwidth management system. It runs as a bridge,
> > > and does not show up in traceroute etc. output. We were able to
> > > estimate its location (at 5 ms round-trip time from one endpoint)
> > > by analyzing packet arrival times.
> > >
> > > Until now, this TCP data corruption problem has been observed only
> > > when one of the connection endpoints runs a recent LINUX version.
> > > Sightings have been reported by sites in Germany and in France.
> > >
> > > Only recent LINUX versions request the use of timestamp options
> > > that cause the tell-tale patterns of "01 01 08 0a" in TCP packets,
> > > and that end up being regurgitated as ^A^A^H^J data.
> > >
> > > I have updated the analysis at ftp://ftp.porcupine.org/pub/debugging/
> > >
> > > Wietse
> > >
> > > Wietse Venema:
> > > > This note is about a subtle data corruption problem with TCP data
> > > > streams that may bite people as more and more (LINUX) systems are
> > > > sending network traffic with TCP-level options turned on.
> > > >
> > > > Last week, several Postfix users reported mail delivery failures
> > > > because sequences of control characters (for example, ^A^A^H) were
> > > > being inserted into their SMTP connections, resulting in SMTP
> > > > protocol errors and non-delivery of email.
> > > >
> > > > These data corruption problems are not host specific: they are
> > > > observed with both Linux and BSD/OS systems, and with mail sent to
> > > > and/or received from systems running Postfix, Sendmail and qmail.
> > > >
> > > > Over the weekend of March 18, 2000, a few people left tcpdump
> > > > running on their machines, in order to record some of these corrupted
> > > > SMTP sessions. This note is based on an analysis of that data.

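The tell-tale "01 01 08 0a" byte sequence mentioned in the thread makes this corruption easy to hunt for in captured data. The following diagnostic sketch is mine, not part of the thread -- a few lines of Perl that scan a saved SMTP session or delivered message for the regurgitated TCP option bytes:

    #!/usr/bin/perl
    # Hypothetical diagnostic sketch: scan captured data for the
    # "\x01\x01\x08\x0a" bytes -- the TCP NOP,NOP,TIMESTAMP option
    # prefix that, per the report above, reappears in the data
    # stream as ^A^A^H^J.
    use strict;
    use warnings;

    local $/;                     # slurp the whole input
    my $data = <>;
    my $pos = 0;
    while (($pos = index($data, "\x01\x01\x08\x0a", $pos)) >= 0) {
        printf "tell-tale TCP option bytes at offset %d\n", $pos;
        $pos++;
    }

As for the workaround Venema mentions, on Linux kernels of that era "disabling TCP options" amounted to turning off RFC 1323 timestamps, for example via echo 0 > /proc/sys/net/ipv4/tcp_timestamps (at the cost of losing timestamp-based round-trip measurement).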
There is also one more paradox here. According to ESR, open source is a superior engineering model compared to closed source, and hence open source software should be more reliable and stable than most closed source software. Paradoxically, that also means that open source software should need less support than closed source software. But here comes a logical consequence that is painful for ESR: since support is the main thing a pure open source company can sell, open source companies will never be as attractive to investors -- closed source companies will always have larger profit margins ;-).  So by ESR's own logic there is little reason to believe that any Linux company or other OSS pure-play will ever become a very dominant company, or even survive in the long run.

But things are not that bad :-). The quality of open source products varies greatly, much like the quality of closed source products:

OpenBSD owns (Score:3, Funny)
by niekze on Thursday May 18, @08:19AM EDT (#59)
OpenBSD:

Three years without a remote hole in the default install!
Two years without a localhost hole in the default install!

RedHat:

Three weeks without a remote hole in the default install!
Two weeks without a localhost hole in the default install!

Thats all im going to say.

Actually, the quality of some parts of the Linux kernel, for example the process scheduler, is relatively lame. They are by no means the best available solutions for a Unix kernel. Free/OpenBSD still have better engineered kernels. As another Slashdot reader put it:

"I use OpenBSD not because I necessarily like or agree with everything Theo has done that may be controversial over the years. I use OpenBSD because, all things considered, it's a damn good OS. The developers work hard with a primary goal of producing the best code, not just code-that-works-and-supports-latest-doohickey."

Probably the same reasoning can be applied to Sendmail (which as a software product is a mess) vs. Postfix. And the list can be continued indefinitely.

Some Linux companies are trying to find ways to improve the quality of open source applications.  For example, regexps.com proposed to restore a commercial model of paying for close access to developers and early releases in Quality Costs Money: regexps.com Pioneers R&D Strategy for Open Source:

The commercial success of Linux is not surprising. Linux is good stuff. But is it ready for the end-user application market?

A lot of Linux projects, though they produce tantalizing applications, miss the boat on producing quality products: documentation is shoddy or missing, testing is inadequate, developers work without feedback from users outside of the community of hackers, release engineering is uneven, and the applications have bugs, bugs, bugs.

Proprietary software companies, like Microsoft, spend many millions on testing and user studies and use that spending to steer the R&D process. That feedback from market to developers, more than any other single factor, earns customers for the Microsofts of the world.

At the same time, the Linux community, and the Free Software community in general, produces a lot of software with very little tangible compensation. Yet every major computer manufacturer now promotes Linux. Several companies with large revenues sell Linux distributions.

What's wrong with this picture?

regexps.com thinks it has the solution: a market-driven development process in which paying customers, those who sell to end-users, become subscribers to R&D efforts. Subscribers pay for an intimate relationship with development teams: helping to prioritize development with feedback from the demands of the market; calling developers attention to where better QA is needed; using funding incentives to organize separate teams of developers into a more coherent whole.

We already discussed this issue, and to summarize I would like to reiterate that quality is not automatic: it depends on the personality of the leader, the underlying algorithms, architectural integrity and the quality of the code (including the right implementation language).  A minimalist approach is usually more secure. That's why Linux will never be as secure as OpenBSD. See also RedHatIsNotLinux.Org

Maturity is also important for software tools because of their complexity. It's impossible to write a perfect OS or a perfect compiler in six months, no matter what. And what is most important: just as there is no replacement for displacement, there is no replacement for talent in software engineering.  Open source is not a panacea; it can help, but you still need talent, the right tools and a lot of time. As for development tools, the quality claim is really a myth: a recent study revealed severe dissatisfaction among OSS developers with the tools currently available for the Linux platform. Nine of the eleven 'tools' categories received satisfaction ratings under 50%, including debuggers, profilers, modeling tools, error detection tools, GUI frameworks, testing tools and code management tools:

Monday, March 27, 2000 - Evans Marketing Services released what is being referred to as the most comprehensive research study of Linux developers ever. Compiled throughout March, the study consists of more than 300 in-depth interviews with Linux developers on a variety of topics ranging from what applications they are working in, to which languages and distributions they are using. One of the most significant findings of the study revealed severe dissatisfaction among developers with the tools currently available for the Linux platform. Nine of the eleven 'tools' categories received satisfaction ratings under 50%, including debuggers, profilers, modeling tools, error detection tools, GUI Frameworks, testing tools and code management tools. When asked, 87% of developers didn't care if the tools were proprietary or open source. This means that the opportunity is there for a vendor to step up and fill the niche in any or all of those categories. For more information on the study, visit www.evansmarketing.com

Another interesting fact is that "When asked, 87% of developers didn't care if the tools were proprietary or open source." Of course one can attack the validity of the study, but I believe that life is too short and a developer needs to use the best tools available. For example, very few developers have time to modify the internals of their editor to their liking; at most they write macros. Most are using it as a closed source product even if the source is available. So much for the idealistic CatB blah-blah-blah on the subject. Last but not least, CatB mythology cannot change the stubborn fact that probably 99% of open source projects never have more than one maintainer. More often than not the latter is the same person as the initial developer.

Actually this is a question of choice. I believe that RMS was right when he said:

The Open Source Movement seems to think of proprietary software as a suboptimal solution (at least, usually suboptimal). For the Free Software Movement, proprietary software is the problem, and free software is the solution. Free software is often very powerful and reliable, and I'm glad that adds to its appeal; but I would choose a bare-bones unreliable free program rather than a featureful and reliable proprietary program that doesn't respect my freedom.

But it's very important to understand (RMS does not point this out here, although I think he understands it too) that, paradoxically, a bare-bones open source program can be more productive than a complex open source program or a super-complex commercial solution. But again, source code is not enough. In his article Be An Engineer, Not An Artist, Monty Manley wrote:

We can refine software quality into the following points:

  1. Efficiency. All other things being equal, a smaller program is better than a big one because it achieves the same result in a smaller space. Efficient programs use fewer resources and run faster. ("Bloat" is a curse in the developer community, and for good reason.)
  2. Design. A good program exhibits a structure and flow, even when they are very complicated. Too many programmers just sit down and start banging out code without designing the program first; this leads to a program that is obfuscated, inflexible, and hard to hand off to other developers.
  3. Modularity. Good programs can be extended and enhanced without requiring a major rewrite. (Modularity is hard to achieve without a good design -- see previous point.)
  4. Robustness. Good programs anticipate the unexpected and gracefully handle problems. (It's worth noting that a lack of QA is something that devils Linux badly.)
  5. Documentation. Good programs are *documented*, both internally as comments and externally as developer/user documentation. No program can be considered finished until the documentation is completed. And yet the vast majority of Open Source projects suffer from incomplete, inaccurate, or missing documentation.

There is a lot to be said for art, but there is still more to be said for *craft*. The hard fact is that the important parts of writing a good program are the boring parts -- the "sexy" stuff is usually less than 30% of the project. Unfortunately, Open Source projects tend to stall when the "sexy" stuff is done. The developers lose interest and move on to other "sexy" stuff. If you consider the high-profile projects (Mozilla, AbiWord, Samba, etc.), you discover that about 10% of the developers do 90% of the work -- in other words, all those legions of programmers do essentially nothing for the project.

I used to be amazed that there was so much duplication in the Linux software space. How many FTP clients or newsreaders does one platform need? How many XClocks? CD players? Window managers? And the hell of it is, few of these programs are actual *refinements*; they are simply the same thing done a different way. This tendency also underscores the relative immaturity of many Linux developers: they have a hard time thinking up new concepts, and so keep on reimplementing old ones.

I'd like to see Open Source software submitted to some kind of formal code-review and auditing process. Not only would it improve the quality of the code, it would reduce the endless security exploits that have deviled Unix code for three decades. OpenBSD should be a model here -- that OS has not had a remote root exploit in more than two years! However, I doubt this will happen; there is too much ego and testosterone floating around the Open Source community to make such a thing work.

I believe in Open Source, but the promise of "better software" isn't materializing, despite all the noise to the contrary. The *process* won't do it; the *developers* have to do it. We need to be engineers first and artists second.

Unix was designed as a development OS. That's why historically it was extremely popular at universities. And because Linux belongs to the Unix tradition, or school of thought if you wish, we need to understand that most open source software was written by developers who will use their own products, which means open source software is written by developers for developers, or at least for system administrators. Developers typically love to configure everything via a configuration file, love to do everything from the keyboard without repeating the same task twice, etc. Not all categories of users share this love ;-).


Script Kiddies and Open Source or Vanity Fair Rulez

The results of the discussion of the previous question can be logically extended to the security area. That leads us to the logical conclusion that OSS products cannot claim superior security. Generally speaking, among the three free Unixes (OpenBSD, FreeBSD, and Linux), it's Linux that is the most insecure flavor, and it will probably stay that way. Any organization that is using it, or plans to use it, should be aware of the risks and consider OpenBSD among other alternatives.  But that's not what we want to discuss. We will discuss here the dangerous trend of extending open source concepts to the publishing of exploits. Here we can see some really interesting side effects of opening code. As one Slashdot reader put it:

The myth of many eyes
by Jon Erikson ([email protected]) on Thursday July 27, @11:51AM EDT (#15)

It's about time somebody stood up to the legions of open source zealots and told them that their cherished view of "many eyes makes bugs shallow" is little more than McCarthy-like jingoism rather than a solid foundation for security.

I'm not saying that obscurity is good for security either mind you, but the fact is that when you have the source code to a product at hand, it becomes a hell of a lot easier to find exploits with a debugger and a bit of patience than it would be with a raw binary. And thanks to the "efforts" of system administrators who would rather spend their time playing Quake than downloading the latest patches and bug-fixes these exploits put thousands of sites that rely on open source software at risk.

The many eyes mantra only applies when many eyes are actually looking at the code. In most cases there are about two people (the programmers) who actually look through the code to fix it, and everyone else is hackers looking for their latest backdoor penetration.

This is an area in which there is so much FUD, from both sides, that a reasoned debate is next to impossible. Until the zealots stop and think, security is going to be something that is argued about rather than realised.

Although I would not go so far as to claim that producers of security tools are a special kind of weapons dealer, I would agree with Marcus Ranum (see also the discussion on Slashdot) that script kiddies are dangerous, not merely annoying, and that "vanity fair rulez" -- many vulnerabilities that are being disclosed are researched for the sole purpose of disclosing them:

"We are creating hordes and hordes of script kiddies," Ranum said. "They are like cockroaches. There are so many script kiddies attacking our networks that it's hard to find the real serious attackers" because of all the chaotic noise.

"A lot of the " he said. "Someone who releases a harmful program through a press release has a different agenda than to help you."

Over the next few years, society's tolerance of hackers will lessen once hacking is regarded as "non-ideological terrorism," he said. As home users increasingly find themselves the target of hackers, there will be less and less patience with break-ins.

"In the next five years, we are going to move to a counterterrorism model," he said. "It will turn into a witch hunt, unless we stop the script kiddies today."

Ranum's message to the creators of tools: "Why don't you do something useful."

As Elias Levy noted in his paper "Is Open Source really more secure than closed?":

If Open Source were the panacea some think it is, then every security hole described, fixed and announced to the public would come from people analyzing the source code for security vulnerabilities, such as the folks at OpenBSD,  the Linux Auditing Project, or the developers or users of the application.

But there have been plenty of security vulnerabilities in Open Source Software that were discovered, not by peer review, but by black hats. Some security holes aren't discovered by the good guys until an attacker's tools are found on a compromised site, network traffic captured during an intrusion turns up signs of the exploit, or knowledge of the bug finally bubbles up from the underground.

Why is this? When the security company Trusted Information Systems (TIS) began making the source code of their Gauntlet firewall available to their customers many years ago, they believed that their clients would check for themselves how secure the product was. What they found instead was that very few people outside of TIS ever sent in feedback, bug reports or vulnerabilities. Nobody, it seems, is reading the source.

The fact is, most open source users run the software, but don't personally read the code. They just assume that someone else will do the auditing for them, and too often, it's the bad guys.

Old versions of the Sendmail mail transport agent implemented a DEBUG SMTP command that allowed the connecting user to specify a set of commands instead of an email address to receive the message. This was one of the vulnerabilities exploited by the notorious Morris Internet worm.

Sendmail is one of the oldest examples of open source software, yet this vulnerability, and many others, lay unfixed a long time. For years Sendmail was plagued by security problems, because this monolithic programs was very large, complicated, and little understood but for a few.

Vulnerabilities can be a lot more subtle than the Sendmail DEBUG command. How many people really understand the ins and outs of a kernel based NFS server? Are we sure its not leaking file handles in some instances? Ssh 1.2.27 is over seventy-one thousand lines of code (client and server). Are we sure a subtle flaw does not weakening its key strength to only 40-bits?

...While some of the binaries are cryptographically signed to verify the identity of the packager, they make no other guarantees. Until the day comes when a trusted distributor of binary open source software can issue a strong cryptographic guarantee that a particular binary is the result of a given source, any security expectations one may have about the source can't be transferred to the binary.

 

But things are not that bad :-). I would like to stress again that the quality of open source products varies greatly, much like the quality of closed source products -- see the OpenBSD vs. Red Hat comparison quoted earlier.


Why the postulate about the high level of quality feedback is partially a myth ?

I think that, other things being equal, OSS products have some edge because the level of conflict between commercial developers and volunteer developers is lower in the case of GPL products. There are other advantages, including the trivial one: the availability of both the source and the tools to build it, free of charge, which lowers the barrier to entry and attracts talented developers from less wealthy countries. But the CatB reasoning is either primitive or plain wrong. There is no understanding of the complexity of the situation.  Quality is not an automatic consequence of opening the code, or of adopting the open source approach from the beginning.  There are many badly written open source products.

Unless they are engaged in the "vanity fair" game of finding security holes described above, most people hate reading other people's code. Moreover, even if people are reviewing the code, that doesn't mean they're qualified to do so. In the scientific world, peer review works because reviewers are pre-selected and, as a group, generally possess a comparable or higher technical caliber on the subject matter. The quality of technical publications is maintained by attracting the best reviewers to magazines and publishing houses. There is no comparable process in open source. As Andrew Leonard put it in the Salon Free Software Project article BSD Unix: Power to the people, from the code:

"Most people are bad programmers," says Joy. "The honest truth is that having a lot of people staring at the code does not find the really nasty bugs. The really nasty bugs are found by a couple of really smart people who just kill themselves. Most people looking at the code won't see anything ... You can't have thousands of people contributing and achieve a high standard."

Here are also some relevant critical postings from the Slashdot discussion:

Yet again the open source model fails to deliver
by Anonymous Coward on Tuesday May 09, @08:41AM EST (#19)

Well, what can I say? I'd say I've told you so, but I didn't, but anyway it's certainly not unexpected that the open source model so praised by /. and other similar communities has failed to deliver on its quite amazingly grandiose claims.

When people praise open source, they point to the legions of developers, the "many eyes" to find bugs, and the ability to quickly and efficiently incorporate user feedback and requests. But the truth seems to be a "little" different from this Garden of Eden coding paradise.

Most open source projects are really only the work of one or two people, who occasionally can be bothered to actually write a few lines of code and then release it as the next version (hence version numbers like 0.99beta-pre3). There is none of the dedication which occurs in a closed source software house, and certainly none of the encouragement or rewards, unless you count the mythical "kudos" which are often heard of, but never seen.

And as for the bugs issue, well, how many people ever read the source code to something they download? Not very many from what I can see. And when you've got projects as convoluted and bloated as the Linux kernel, how many people can actually find the bug in the first place? Linus and a few of his cronies is about all I'd say. So this argument seems to be little more than hand-waving.

User feedback ties in with both the previous two points - it all depends on whether the coder(s) actually bother with doing anything often, and whether bugs and requests can actually be fixed.

Two high profile open source projects - Mozilla and the HURD - have both been pretty much vapourware for the last God-knows how long, and if you ignore the extremely buggy versions of Mozilla which we've seen, they appear to be so for the foreseeable future.

Closed source may not be the best thing in the world, but at least it delivers what its customers want and on time.

 

"Troll"rating was unfair / Open source in general
by Master of Kode Fu ([email protected]) on Tuesday May 09, @09:38AM EST (#93)
I don't think that a "troll" rating for the previous post is completely fair. Whether open source is better than closed source is a matter of opinion, and the poster is simply voicing his/hers.

The points made in the comments "Most open source projects are really only the work of one or two people" and "how many people ever read the source code to something they download?" are well made. I can't say for certain, because I haven't carried out a poll or seen any statistics. What I know from the literature (such as CatB), experience and observation is that most open source projects start out as an itch that a programmer decides to scratch. The majority of people who find out about the project don't do anything except download the binary, a few download the code, a fraction of them comments and a fraction of that fraction hand back some modifications. Only a small portion of the much-vaunted "many eyes that make bugs shallow" are actually being harnessed. That's not a bad thing -- the closed source model exposes the software to even fewer eyes. It's just that there are many obstacles to fixing or modifying someone else's code: how the original programmer wrote it, availability of documentation, availability of the programmer to answer questions, whether it's written in a language you know, the complexity of the program's problem domain, how good a programmer you are, and the big one -- how much time you have.

The primary advantage provided by open source to a program's developer is the potential peer review and fine-grained testing that freely handing out your source gives you. There's a large body of evidence that code review is one of the best ways for incresing software quality. However, as Fred Brooks says, there's no silver bullet for the problems of software development, and peer review is only one step in making good software. Without good planning and process, all you have is a bickering committee. And we know why they don't make statues, painting or memorials of committees, right?

There's a very good article by Steve McConnell (author of Code Complete, Rapid Development, Software Project Survival Guide and After the Gold Rush) covering some of the issues around open source software. He points out that if open source wants to really reach its potential, the following needs to be done:

1. Create a central clearinghouse for the open-source methodology so it can be fully captured and evolved.

2. Kick its addiction to Code and Fix.

3. Focus on eliminating upstream defects earlier.

4. Collect and publish data to support its claims about the effectiveness of the open-source development approach.

In theory, the more people review a program, the more likely they are to find bugs. Reality is different: the Mongol horde approach does not work. A single well-trained reviewer who understands the subject area will be more effective than a hundred people who have only recently learned how to program.
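A toy calculation makes the point (the numbers below are my own illustrative assumptions, not measured data). If we model each reviewer as independently finding a given subtle bug with some fixed probability, a single expert easily beats a horde of novices:

    # Toy model: probability that at least one reviewer finds a subtle bug.
    # Assumes reviewers work independently; the per-reviewer probabilities
    # are invented for illustration only.
    def p_found(per_reviewer_prob, n_reviewers):
        return 1 - (1 - per_reviewer_prob) ** n_reviewers

    print(p_found(0.90, 1))     # one domain expert:       0.90
    print(p_found(0.005, 100))  # a hundred novices: about 0.39

Under these assumptions the hundred-strong horde still finds the bug less than half as often as the lone expert.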

The grim reality of OSS projects is that you may have users, but getting quality feedback, reviews, or help with development is not that simple; it is actually more difficult than for closed source commercial products, where you can pay money to attract people with the necessary qualifications. The CatB claim about a huge volume of high quality voluntary feedback is just another myth. Like rare metals, talent is a very rare commodity, and it naturally concentrates on development; high quality feedback is impossible without talent. The idea that a talented person would thoroughly analyze mundane code for free, without any strong incentives, looks like stretching the truth. Anybody who has written a substantial program knows that most of the code is just mundane: error recovery, input checking, the GUI (if any), and so on, with the code directly related to the functionality of the program often amounting to 10% or less.
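As a crude illustration of that ratio (a contrived sketch, not drawn from any real project), consider a trivial utility that doubles the numbers in a file: nearly every line is argument checking, I/O, and error recovery, while the actual functionality is a single line.

    import sys

    # Contrived example: double every number in a file.
    def main():
        if len(sys.argv) != 2:                      # input checking
            sys.stderr.write("usage: double.py FILE\n")
            return 1
        try:                                        # error recovery
            with open(sys.argv[1]) as f:
                lines = f.readlines()
        except OSError as e:
            sys.stderr.write("cannot read %s: %s\n" % (sys.argv[1], e))
            return 1
        for line in lines:
            line = line.strip()
            if not line:                            # more input checking
                continue
            try:
                value = int(line)
            except ValueError:                      # more error recovery
                sys.stderr.write("skipping non-numeric line: %r\n" % line)
                continue
            print(value * 2)                        # the core functionality
        return 0

    if __name__ == "__main__":
        sys.exit(main())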

But that is only one side of the coin. The other side is that for innovative projects the level of feedback cannot be high, because not many people can understand the innovation. I am convinced that if we are talking about innovation, not imitation, the author needs to be ready to pursue his dream on his own, regardless of the support or rejection of the crowd. With very few exceptions, the applause of the crowd for really innovative products usually comes too late ;-). This is the case in mathematics, and it is the case in programming, because programs can be considered a special class of applied mathematical theories. Please remember that Linux kernel development can in no way be considered an innovative activity; it was just an attempt to replicate the functionality of existing POSIX kernels.


Why the idea of support as the main source of revenue for open source is partially a myth?

Open source is most successful for products that are of direct interest to a powerful group that can be called system administrators. Actually, you do not need to be a system administrator to belong to this group; all you need is the corresponding technical level. Products for this particular group (Apache is a very nice example here) will probably never generate substantial support revenues, due to the nature of the group. As one Slashdot correspondent put it:

Let me ask you this, do you think that there is a market for selling Apache support? Selling support for Open Source Software doesn't work as an exclusive business model because unlike proprietary software most people who have jobs working with OSS have a clue and they have the source or at least have access to people who have the source. This means that unlike IBM and MSFT who can rest assured that if there is a problem with their software their users will have to call their expensive support lines, users of OSS can simply fix the code or ask on a newsgroup and get a faster and sometimes better response from some enterprising hacker than from some tech support flunkie. I see selling Linux support being like selling toys online, anyone can do it but the market will only support one or two major players while the rest will flounder and die.


What are the consequences of considering Open source a special kind of academic research?

First of all, any scientific community is very fragile and can be distracted, impoverished, or otherwise damaged by commercial companies, especially if a lot of commercial sharks are cruising around. As I already said, as in the case of science, any serious OSS development involves certain risks and requires not only talent but also a certain courage: the courage to distance yourself from the marketplace so that it does not enslave you. Money often serves to divide people, and if some get huge money while others get none, people get frustrated more easily. Student Linus Torvalds and millionaire Linus Torvalds are two different developers, with different priorities and different abilities to attract people, and this huge difference exists whether we want it or not.


Is recognition the main driving force?

Yes, I think so, but not at the beginning of the project.

One thing that has been neglected in the recent debate is that probably the most common reason for writing open source software has nothing - or very little - to do with recognition. Most free software got started because the author needed it to solve a problem, needed to write something for a class, or wanted an excuse to learn a particular language, protocol, or toolkit. The decision to release the software is then taken after the fact, with the realization that the program could be useful for others as well.

My understanding of the motivation of open source developers is that they start writing software to solve their own problems, and then release it on the web on a "pro bono" basis so that others might also benefit from it.

As people have brought up, other motivations arise later. Sure, a very common one is the desire to earn the respect of your peers, but other motivations can be just as strong, among them the desire to practice or improve your skills, and the fun of accomplishing something cool that you have set your mind to. Usually motivation is some complex combination of these motives and others, depending on the person and the project. It is important to understand that peer-respect-related motivation is neither the only one nor universal; some people do not care about getting the respect of their peers.

But writing is one thing, and maintaining is quite another, including providing functionality beyond what one needs from the program oneself. The program becomes the author's "child": not only fun, but also a burden, and here peer recognition is a very important factor that should not be overlooked. See also the discussion of the Slashdot Editorial: Fame Ego Oversimplification!


