First of all, additional CPUs are a waste for plain web server operation with static HTML pages. Even with two Fast Ethernet lines the gain is a modest one of less than twenty percent; CPU performance is evidently not the decisive factor for these tasks. The graphs indicate that the Primergy server never had to work at full capacity. Linux's comparatively poor results when tested with two network boards show that Mindcraft's results are quite realistic: NT and IIS are clearly superior to their free competitors if you play by their rules.
To clarify once again: these results correspond to a load of more than 1000 requests per second. For comparison: at peak times, the Heise server handles about 100 requests/s. Moreover, we are talking about purely static pages which, on top of that, were already present in the system's main memory. Our tests showed that Mindcraft's result cannot be transferred to situations with mainly dynamic content - the common case on nearly every sophisticated web site.
In SMP mode, Linux still exhibited clear weaknesses. Kernel developers freely admit that scalability problems remain in SMP mode when the major part of the load arrives in kernel mode. However, when user-mode tasks are involved as well, as is the case with CGI scripts, Linux can also benefit from additional processors. These SMP problems are currently the target of massive development efforts.
In the web server areas most relevant for practical use, Linux and Apache are already at least a nose ahead. If the pages do not come straight from the system's main memory, the situation even reverses in favour of Linux and Apache: here, the open-source movement's flagship products leave their commercial competitors from Redmond far behind.
More relevant for practical use is Mindcraft's criticism that tuning tips for Linux and Apache are difficult to come by. True enough, professional Linux support structures are still being built up. However, the Apache and Linux developers were extremely helpful: while Microsoft needed more than a week to come up with the ListenBackLog hint, our questions regarding Linux and Apache problems were answered with helpful hints and tips within hours.
Emails to the respective mailing lists even resulted in special kernel patches which significantly increased performance. We have, on the other hand, never heard of an NT support contract supplying NT kernels specially designed for customer problems. (ju)
Q: Who are the members of the TPC?
A: While the majority of TPC members are computer system vendors, the TPC also has several database software vendors. In addition, the TPC membership includes market research firms, system integrators, and end-user organizations. There are approximately 40 members worldwide.
Q: What are the benefits of being a member?
A: Timely access to detailed competitive data. TPC membership is a who's who of computing. With a TPC membership, you have access to all benchmark test data (full disclosure reports). These reports provide all the performance and detailed pricing information on your competitors' new systems. If you want to stay ahead of your competition, you had better know what they're up to.
Marketing leverage. TPC benchmarks create a level playing field where your company can compete with all the major players in the industry. The TPC encourages everyone to publicize TPC results, and a good TPC result can dramatically improve your competitive stance. But it is very difficult to get the best TPC result if you haven't read the latest full disclosure reports from your competitors and are not aware of the latest TPC technical rulings. While most of the TPC's information is available to the public, your company can hardly be an effective competitor standing on the sidelines.
The tests were designed and run by industry analysis company Doculabs. Doculabs' @Bench benchmark measures how well application servers can handle the demands of a full-size electronic commerce application (in this case, an online bookstore with a 12.5-million-row back-end database).
To carry out the tests, Doculabs used a mix of Sun and Compaq Computer Corp. servers, along with 120 client PCs, and used Client/Server Solutions Inc.'s Benchmark Factory 97 benchmark control software (available at www.benchmarkfactory.com).
(For more details on the @Bench benchmark and test hardware, see PC Week's April 5 report on the first half of the tests.) Doculabs is also producing an exhaustive report describing its results, which will be available from the Chicago company by the end of this month.
The eight tested products were Apple Computer Inc.'s WebObjects 4.0, Bluestone Software Inc.'s Sapphire/Web 5.1, Haht Software Inc.'s Hahtsite 4.0, Microsoft's Windows NT Enterprise Server 4.0, Progress Software's Apptivity 3.0, Sybase's Sybase EAS (Enterprise Application Server) 3.0, and the Sun/Netscape alliance's NetDynamics 5.0 and Netscape Application Server 2.1 application servers. IBM was also going to participate with its WebSphere application server but later pulled out, saying a new version of WebSphere was in the works.
The tests were done at PC Week's Foster City, Calif., lab in exchange for first access to the results. Doculabs charged each vendor a flat fee of $35,000 to defray server hardware costs; PC Week received no payment and didn't take part in this test in any way except to provide the facility.
...Overall, we were surprised by how fast all the application servers were. On the Sun testbed, speeds ranged from about 400 to about 1,400 pages per second (with Apptivity, Sybase EAS, NetDynamics and Netscape Application Server the top performers), and on the Compaq testbed (which no one else used), Microsoft's performance pushed almost 3,500 pages per second--with 93 percent of these pages dynamically generated.
These performance figures certainly indicate big differences among the products, but even a throughput of 400 pages per second is enough to saturate the Internet connections of most businesses hosting their own e-commerce applications. As a draft of the Doculabs report states, "Clearly, most production environments do not have the infrastructure to support even this 'modest' performance."
The average Web page in @Bench was between 2.5KB and 3KB, so a 400-page-per-second throughput rate requires at least a 7.8M-bps connection (or about six T-1 lines) before the application server will be a bottleneck...
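As a quick sanity check of that claim, here is a minimal back-of-the-envelope sketch in Python (not part of @Bench; it assumes 1 KB = 1024 bytes, a T-1 at roughly 1.544 Mbit/s, and ignores protocol overhead, so it lands slightly above the article's 7.8M-bps figure):

    # Hypothetical helper: bandwidth needed to sustain a given page rate.
    def required_mbps(pages_per_sec, avg_page_kb):
        bits_per_page = avg_page_kb * 1024 * 8      # payload only, no TCP/IP or HTTP overhead
        return pages_per_sec * bits_per_page / 1e6  # megabits per second

    for kb in (2.5, 3.0):
        mbps = required_mbps(400, kb)
        print(f"400 pages/s at {kb} KB/page -> {mbps:.1f} Mbit/s (~{mbps / 1.544:.1f} T-1 lines)")

At 2.5KB to 3KB per page this works out to roughly 8 to 10 Mbit/s, on the order of five to six T-1 lines, which is consistent with the article's estimate.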
All products showed virtually linear scalability, but there were differences in fault tolerance (see table, below). Hahtsite and WebObjects each lost some users' shopping cart information during the network failure tests. In addition, only Hahtsite, NetDynamics, Netscape Application Server and Sapphire/Web were able to bring failed servers automatically back online. Doculabs officials said Microsoft's server could also have done this had the company run its application server on a separate system from the Web server.
The recent tests done at ZDLabs turned up some interesting results. They were of course presented under the banner of "Windows NT is faster than Linux", which was, strictly speaking, true. Now, it doesn't really matter what the testing software was, or what the testing hardware was. I don't really care, for the moment at least, how honest the test was. I expect that it was at least somewhat honest since some Red Hat people were on the scene. I'm interested in how we interpret the results.
Now, there is the face value of the results, that is, that Windows NT is faster than Linux, thus better, and hence that in any given situation NT is better to use than Linux. There's also the option, of course, to actually look at what the tests found. What are some of the actual facts that the tests came up with? Here are some important ones that I found (pretty color graphs aside):
- With 4 CPUs and 1 Gig of RAM, NT & IIS achieved 4,166 http requests per second.
- With 4 CPUs and 1 Gig of RAM, Linux & Apache achieved 1,842 http requests per second.
- With 1 CPU and 256 MB RAM, NT & IIS achieved 1,863 http requests per second.
- With 1 CPU and 256 MB RAM, Linux & Apache achieved 1,314 http requests per second.
- note: when I refer to a single CPU Linux box or a 4 CPU Linux box, I mean the box that ZD used (i.e. whatever processor speed etc. it had).
Linux looks pretty slow, doesn't it? Who would use it for any real application? Well, let's examine this situation a bit more than just comparatively. First off, let's just look at an approximation of the situation that this represents:
- 1,842 hits/sec * 3600 sec/hour * 24 hours/day = 159,148,800 hits/day.
- 1,314 hits/sec * 3600 sec/hour * 24 hours/day = 113,529,600 hits/day.
So Linux/Apache should be able to handle your site on a 4 CPU, 1 Gig RAM box if you get 159 million hits per day or less. If you get only a measly 113 million hits/day, then a single CPU box with 256 meg of RAM should be able to host your site. Of course, this only works if your access is 100% even, which is extremely unrealistic. Let's assume that your busy times get ten times more hits per second than your average hits/second. That means that a single CPU Linux box with 256 meg of RAM should work for you if you get about 11 million hits every day. Heck, let's be more conservative. Let's say that your busy times get 100 times more hits/second than your average hits/second. That means that if you get 1.1 million hits per day or less, that same box will serve your site just fine.
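A minimal Python sketch of the arithmetic above (purely illustrative): it recomputes the sustained hits/day from the ZD hits/sec figures and then the daily volume a box could still absorb if peak load runs 10x or 100x the average.

    SECONDS_PER_DAY = 3600 * 24

    for label, hits_per_sec in (("4-CPU Linux/Apache", 1842), ("1-CPU Linux/Apache", 1314)):
        per_day = hits_per_sec * SECONDS_PER_DAY
        print(f"{label}: {per_day:,} hits/day sustained")
        for peak_factor in (10, 100):
            # daily volume that keeps peak-time load within the measured hits/sec
            print(f"  with {peak_factor}x peaks: ~{per_day // peak_factor:,} hits/day")

For the single-CPU box this reproduces the ~11 million and ~1.1 million hits/day figures used in the text.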
OK, there's that way of looking at it, but it's not really a good way. It's a very coarse approximation of access patterns and what a site needs. Let's try another way of looking at this.
Let's do some simple calculations to see what sort of bandwidth these numbers imply. Bandwidth is a better and more constant way of determining who these numbers apply to than guessed-at hit ratios.
The ZDNet page said that the files served were of "varying sizes", so we'll have to make some assumptions about the average size of the files being served. Since over 1000 files were served per second in all of the tests, it's pretty safe to work by averages. Some numbers:
- 1,842 hits/sec * 1 kilobyte/hit * 8192 bits/kilobyte = 15089664 bits/sec = 15 MBits/sec.
- 1,842 hits/sec * 2 kilobytes/hit * 8192 bits/kilobyte = 30179328 bits/sec = 30 MBits/sec.
- 1,842 hits/sec * 5 kilobytes/hit * 8192 bits/kilobyte = 75448320 bits/sec = 75 MBits/sec.
- 1,842 hits/sec * 10 kilobytes/hit * 8192 bits/kilobyte = 150896640 bits/sec = 150 MBits/sec.
- 1,842 hits/sec * 25 kilobytes/hit * 8192 bits/kilobyte = 377241600 bits/sec = 377 MBits/sec.
- 1,314 hits/sec * 1 kilobyte/hit * 8192 bits/kilobyte = 10764288 bits/sec = 10 MBits/sec.
- 1,314 hits/sec * 2 kilobytes/hit * 8192 bits/kilobyte = 21528576 bits/sec = 21 MBits/sec.
- 1,314 hits/sec * 5 kilobytes/hit * 8192 bits/kilobyte = 53821440 bits/sec = 53 MBits/sec.
- 1,314 hits/sec * 10 kilobytes/hit * 8192 bits/kilobyte = 107642880 bits/sec = 107 MBits/sec.
- 1,314 hits/sec * 25 kilobytes/hit * 8192 bits/kilobyte = 269107200 bits/sec = 269 MBits/sec.
Just as a reference, a T1 line is worth approximately 1.5 MBits/sec, these numbers don't include TCP/IP & HTTP overhead, and this document is approximately 12k.
Now, what does this tell us? Well, that if you are serving up 1,314 pages per second where the average page is only 1 kilobyte, you'll need about 7 T1 lines (roughly 10 MBits/sec) or the equivalent before the computer becomes the limiting factor. What site on earth is going to get a sustained >1000 hits per second for 1 kilobyte files? Certainly not one with any graphics in it. Let's assume that you're running a site with graphics and that your average file is 5 kilobytes - not too conservative or too liberal. This means that if you're serving up 1,314 of them a second, you'll need 53 MBits of bandwidth. And there are no peak issues here: you can't peak beyond your bandwidth.
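Here is a small Python sketch (illustrative only) that reproduces the hits/sec-to-bandwidth figures above, using the same 8192 bits per kilobyte convention and, like the text, ignoring TCP/IP and HTTP overhead:

    BITS_PER_KB = 8192  # 1 KB = 1024 bytes = 8192 bits, as in the list above

    def mbits_per_sec(hits_per_sec, avg_file_kb):
        return hits_per_sec * avg_file_kb * BITS_PER_KB / 1e6

    for hits in (1842, 1314):
        for kb in (1, 2, 5, 10, 25):
            print(f"{hits} hits/s at {kb} KB/file -> {mbits_per_sec(hits, kb):.0f} Mbit/s")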
Let's go at it another way, this time starting with our available bandwidth (a small script reproducing these conversions follows the list). Note: these numbers don't include TCP/IP or HTTP overhead.
- 1 T1 Line * 1.5 MBits/T1 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/kilobyte = 184 hits/sec.
- 1 T1 Line * 1.5 MBits/T1 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/2 kilobytes = 92 hits/sec.
- 1 T1 Line * 1.5 MBits/T1 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/5 kilobytes = 37 hits/sec.
- 1 T1 Line * 1.5 MBits/T1 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/10 kilobytes = 19 hits/sec.
- 1 T1 Line * 1.5 MBits/T1 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/25 kilobytes = 8 hits/sec.
- 5 T1 Line * 1.5 MBits/T1 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/kilobyte = 916 hits/sec.
- 5 T1 Line * 1.5 MBits/T1 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/2 kilobytes = 458 hits/sec.
- 5 T1 Line * 1.5 MBits/T1 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/5 kilobytes = 183 hits/sec.
- 5 T1 Line * 1.5 MBits/T1 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/10 kilobytes = 92 hits/sec.
- 5 T1 Line * 1.5 MBits/T1 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/25 kilobytes = 36 hits/sec.
- 1 T3 Line * 45 MBits/T3 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/kilobyte = 5,494 hits/sec.
- 1 T3 Line * 45 MBits/T3 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/2 kilobytes = 2747 hits/sec.
- 1 T3 Line * 45 MBits/T3 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/5 kilobytes = 1099 hits/sec.
- 1 T3 Line * 45 MBits/T3 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/10 kilobytes = 550 hits/sec.
- 1 T3 Line * 45 MBits/T3 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/25 kilobytes = 220 hits/sec.
- 1 OC3 Line * 155 MBits/OC3 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/kilobyte = 18,921 hits/sec.
- 1 OC3 Line * 155 MBits/OC3 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/2 kilobytes = 9461 hits/sec.
- 1 OC3 Line * 155 MBits/OC3 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/5 kilobytes = 3785 hits/sec.
- 1 OC3 Line * 155 MBits/OC3 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/10 kilobytes = 1,893 hits/sec.
- 1 OC3 Line * 155 MBits/OC3 * 1,000,000 bits/MBit * 1 kilobyte/8192 bits * 1 hit/25 kilobytes = 757 hits/sec.
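The sketch below (again illustrative Python, same conventions as the list: 8192 bits per kilobyte, nominal line rates, no protocol overhead) reproduces these bandwidth-to-hits/sec conversions; minor rounding differences aside, it matches the figures above.

    BITS_PER_KB = 8192
    LINES_BPS = {"T1": 1.5e6, "5 x T1": 5 * 1.5e6, "T3": 45e6, "OC3": 155e6}  # nominal bits/sec

    def max_hits_per_sec(line_bps, avg_file_kb):
        # hits/sec at which the line, not the server, becomes the bottleneck
        return line_bps / (avg_file_kb * BITS_PER_KB)

    for name, bps in LINES_BPS.items():
        for kb in (1, 2, 5, 10, 25):
            print(f"{name}, {kb} KB/file: {max_hits_per_sec(bps, kb):,.0f} hits/s max")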
I am assuming that the tests that ZD made were meant to mean something, so I won't entertain the idea that they used an average file size of less than 1K. Given that, it is clear that the numbers that ZD's tests produced are only significant when you have the equivalent bandwidth of over 6 T1 lines. Let's be clear about this: if you have only 5 T1 lines or less, a single CPU Linux box with 256 MB RAM will wait on your Internet connection and not be able to serve up to its full potential. Let me re-emphasize this: ZD's tests prove that a single CPU Linux box with 256 MB RAM running Apache will run faster than your Internet connection! Put another way, if your site runs on 5 T1 lines or less, a single CPU Linux box with 256 MB RAM will more than fulfill your needs with CPU cycles left over.
That was just if the ZD numbers were valid for files of only 1K in size. Let's assume instead that you either (a) have pages with more than about a screen of text or (b) have black-and-white pictures, so that your average file size is 5K, and that ZD's tests accurately reflect this condition. Given this, ZD's tests would indicate that a single CPU Linux box with only 256 MB RAM running Apache would be constantly waiting on your T3 line. In other words, a single CPU Linux box with 256 MB RAM will serve your needs with room to grow if your site is served by a T3 line or less.
More recent criticism points out that the benchmark has little to do with any sort of real-world situation. Doug Ledford of Red Hat was quoted widely as saying "The tests do not accurately represent how and what our customers are using Red Hat for." Penguin Computing put out a strongly worded press release arguing the irrelevance of the benchmark. See also Chris Lansdown's article on the sort of network connectivity it would take to actually sustain the number of hits per second tested in these benchmarks. A separate set of tests documented in this c't article shows that, under more "realistic" conditions, Linux performs much better.
All that is true - the connection between the benchmarks and reality is weak at best. But complaints along those lines just sound like sour grapes at this point. They make Linux look bad, and are not worth the trouble.
A few problems with Linux have been found as a result of these benchmarks. There is a bottleneck in the networking code that appears to be the cause of the plateau in Apache's performance, for example. Work is already well underway to fix those problems. See Dan Kegel's page for a detailed discussion of what is happening in this area.
And that, really, is the best result out of these benchmarks. There is no deep design problem within Linux that causes performance problems in these conditions. There are, instead, specific implementation problems that have been found, and will soon be fixed. It may not be long before Linux starts winning these benchmarks. The end result will be to show how quickly Linux can adapt and deal with problems. In the long run, these benchmarks will probably look like a good thing for Linux, from both the technical and public relations point of view.
In an April 27, 1999 article entitled "Will Mindcraft II Be Better?" Linux Today presented a one-sided report clearly designed to destroy Mindcraft's credibility and to falsely make our reports look wrong. I want to set the record straight with this rebuttal, so I'll point out what's right and wrong with the Linux Today article. Unfortunately, it takes more words to right a wrong than it does to make someone look wrong, so please bear with me.
What's Right
Dave Whitinger and Dwight Johnson had several points right in their article:
Mindcraft did the tests stated in the article under contract with Microsoft in a Microsoft lab.
Many have tried to imply that something is wrong with Mindcraft's tests because they were done in a Microsoft lab. You should know that Mindcraft verified the clients were set up as we documented in our report and that Mindcraft, not Microsoft, loaded the server software and tuned it as documented in our report. In essence, we took over the lab we were using and verified it was set up fairly.
Mindcraft did conduct a second test with support from Linus Torvalds, Alan Cox, Jeremy Allison, Dean Gaudet, and David Miller. Andrew Tridgell provided only one piece of input before he left on vacation. Mindcraft received excellent support from these leading members of the Linux community. I thank them for their help and very much appreciate it.
Jeremy Allison was correct that I made the initial contact at the suggestion of a journalist, Lee Gomes from the Wall Street Journal.
Jeremy was right that we were under an NDA and, as stated above, the tests were run at a Microsoft lab.
What was not mentioned in the article was the excellent support Red Hat provided for our second test. Doug Ledford, from Red Hat, answered my questions on the phone, always called back when I left messages, and participated in the email correspondence with the above named Linux experts.
What's Wrong
Unfortunately, Mr. Whitinger and Mr. Johnson did not even attempt to contact Mindcraft to get information from us. It seems as though they wanted to write a one-sided story from the beginning. The following points will give you the other side of their story.
Linus is quoted as saying ".... that nobody in the Linux community is really working on the Mindcraft test per se, because Mindcraft hasn't allowed them access to the test site." It's clear from the emails we exchanged that the Linux experts did make suggestions on tunes for Linux, Apache, and Samba. They also provided a kernel patch that was not readily available. We applied all tunes they suggested and the kernel patch. Here are some of the things that happened:
Red Hat provided version 1.0 of the MegaRAID driver during our tests and we used it, even though it meant retesting.
We sent out our Apache and Samba configuration files for review and received approval of them before we tested. (We actually got better performance in Apache when we made some changes to the approved configuration file on our own).
Whenever we got poor performance we sent a description of how the system was set up and the performance numbers we were measuring. The Linux experts and Red Hat told us what to check, offered tuning changes, and provided patches to try. We had several rounds of messages in which Mindcraft answered the questions they posed.
According to the article, Linus complained about the opaqueness of our test. This is a strange complaint since he and all of the Linux experts knew the exact configuration of the system we were testing and knew the benchmarks we were running. The NetBench and WebBench benchmarks are readily available on the Web for free and are probably some of the best documented benchmarks available. We withheld no technical details from him or the other Linux experts.
Jeremy Allison directly contradicts Linus later in the article when he says "...I can confirm that we have reproduced Mindcraft's NT server numbers here in our lab." Clearly, Jeremy was tracking what we were doing and had the lab to verify our results.
The article says that all emails to the Linux experts came from a Microsoft address. That's wrong. On April 16, 17, 18, and 19 I sent emails to them from Mindcraft's office on a Mindcraft IP address. Emails sent during the second test were sent from a Microsoft IP number.
Mr. Whitinger and Mr. Johnson are wrong about the email alias of "will" belonging to me. It belongs to a person who is not a Mindcraft employee. He is someone who posted to a newsgroup about Linux on the system we were going to use for testing. He wanted to remain as anonymous as possible because he didn't want to get a ton of flaming email (based on the email Mindcraft has received, his expectation was, if anything, an underestimate). I see no need to reveal who he is now, both because doing so would make his worst nightmare come true and because he had nothing to do with our test.
Jeremy did give me excellent support both on the phone and via email. I applied all of his suggestions. If he gave me all of the tuning parameters he used for the February 1, 1999 PC Week article showing Samba performance on a VA Research system, they should have been applicable to the system I was using. That certainly is true for systems as similar as those two when running Windows NT Server.
The Crux of The Matter
The whole controversy over Mindcraft's benchmark report is about three things: we showed that Windows NT Server was faster than Linux on an enterprise-class server, Apache did not outperform IIS, and we didn't get the same performance measurements for Samba that Jeremy got in the PC Week article or his lab. Let's look at these issues.
Comparing the performance of a resource-constrained desktop PC with an enterprise-class server is like saying a go-kart beat a grand prix race car on a go-kart race course.
Smart Reseller reported a head-to-head test of Linux and Windows NT Server in a January 25, 1999 article; they tested performance on a resource-constrained 266 MHz desktop PC. One cannot reasonably extrapolate the performance of a resource-constrained desktop PC to an unconstrained, enterprise-class server with four 400 MHz Xeon processors.
In a February 1, 1999 article, PC Week tested the file server performance of Linux and Samba on an enterprise-class system. They did not compare it to Windows NT Server on the same system. Jeremy Allison helped with these tests comparing the Linux 2.2 kernel with the Linux 2.0 kernel. I'll show you below what he thinks about Windows NT Server on an enterprise-class server.
If you doubt our published Apache performance, Dean Gaudet, who wrote the Apache Performance Notes and who supported our testing, gives some insights in a recent newsgroup posting. In response to a request for tuning Apache for Web benchmarks, Dean wrote, " Unless by tuning you mean 'replace apache with something that's actually fast' ;)
"Really, with the current multiprocess apache I've never really been able to see more than a handful of percentage improvement from all the tweaks. It really is a case of needing a different server architecture to reach the loads folks want to see in benchmarks."
In other words, Apache cannot achieve the performance that companies want to see in benchmarks. That's probably why none of the Unix benchmark results reported at SPEC use Apache.
Jeremy Allison believes, according to the Linux Today article, that if we do another benchmark with his help, "...this doesn't mean Linux will neccessarily [sic] win, (it doesn't when serving Win95 clients here in my lab, although it does when serving NT clients)..." In other words, in a fair test we should find Windows NT Server outperforming Linux and Samba on the same system. That's what we found.
Jeremy's statement in the Linux Today article that "It is a shame that they [Mindcraft] cannot reproduce the PC Week Linux numbers ..." shows a lack of understanding of the NetBench benchmark. If he looked at the NetBench documentation, he would find a very significant reason why Mindcraft's measured Samba performance was lower:
We used 133 MHz Pentium clients while Jeremy and PC Week used faster clients, although we don't know how much faster because neither documented that. We believe that PC Week uses clients running with at least a 266 MHz Pentium II CPU. Because they use clients that are at least twice as fast and because so much of the NetBench measurements are affected by the clients, this can account for most of the difference in the reported measurements.
In addition, the following testbed and server differences add to the measured performance variances:
Mindcraft used a server with 400 MHz Xeon processors while PC Week used one with 450 MHz Xeon processors. Jeremy did not disclose what speed processor he was using.
Mindcraft used a server with a MegaRAID controller with a beta driver (which was the latest version available at the time of the test) for our first test while the PC Week server used an eXtremeRAID controller with a fully released driver. The MegaRAID driver was single threaded while the eXtremeRAID driver was multi-threaded.
Mindcraft used Windows 9x clients while Jeremy and PC Week used Windows NT clients. According to Jeremy, he gets faster performance with Windows NT clients than with Windows 9x clients.
Given these differences in the testbeds and servers, is it any wonder we got lower performance than Jeremy and PC Week did?
If you scale up our numbers to account for their speed advantage, we get essentially the same results.
The only reason to use Windows NT clients is to give Linux and Samba an advantage, if you believe Jeremy's claim. In the real world, there are many more Windows 9x clients connected to file servers than Windows NT clients. So benchmarks that use Windows NT clients are unrealistic and should be viewed as benchmark-special configurations.
The fact that Jeremy did not publish the details of the testbed he used and the tunes he applied to Linux and Samba is a violation of the NetBench license. If he had published the tunes he used, we would have tried them. What's the big secret?
Jeremy states in the article "The essense of scientific testing is *repeatability* of the experiment..." I concur with his assertion. But a scientific test would use the same test apparatus setup and the same initial conditions. Jeremy's unscientific test did not use the same testbed or even one with client computers of the same speed we used. We reported enough information in our report so that someone could do a scientific test to determine the accuracy of our findings. Jeremy did not.
Given the warning in the NetBench documentation against comparing results from different testbeds, it is Jeremy and Linus who are being unscientific in their trashing of Mindcraft's results. Mindcraft never compared its NetBench results to those produced on a different testbed.
Some Background on Mindcraft
Mindcraft has been in business for over 14 years doing various kinds of testing. For example, from May 1, 1991 through September 30, 1998 Mindcraft was accredited as a POSIX Testing Laboratory by the National Voluntary Laboratory Accreditation Program (NVLAP), part of the National Institute of Standards and Technology (NIST). During that time, Mindcraft did more POSIX FIPS certifications than all other POSIX labs combined. All of those tests were paid for by the client seeking certification. NIST saw no conflict of interest in our being paid by the company seeking certification, and NIST reviewed and validated each test result we submitted. We apply the same honesty to our performance testing that we do to our conformance testing. To do otherwise would be foolish and would put us out of business quickly.
Some may ask why we decided not to renew our NVLAP accreditation. The reason is simple: NIST stopped its POSIX FIPS certification program on December 31, 1997. That program was picked up by the IEEE, and on November 7, 1997 the IEEE announced that it recognized Mindcraft as an Accredited POSIX Testing Laboratory. We are still IEEE accredited and are still certifying systems for POSIX FIPS conformance.
We've received many emails and there have been many postings in newsgroups accusing us of lying in our report about Linux and Windows NT Server because Microsoft paid for the tests. Nothing could be further from the truth. No Mindcraft client, including Microsoft, has ever asked us to deliver a report that lied or misrepresented the results of a test. On the contrary, all of our clients ask us to get the best performance for their product and for their competitor's products. They want to know where they really stand. If a client ever asked us to rig a test, to lie about test results, or to misrepresent test results, we would decline to do the work.
A few of the emails we've received asked us why the company that sponsored a comparative benchmark always came out on top. The answer is simple. When that was not the case our client exercised a clause in the contract that allowed them to refuse us the right to publish the results. We've had several such cases.
Mindcraft works much like a CPA hired by a company to audit its books. We give an independent, impartial assessment based on our testing. Like a CPA we're paid by our client. NVLAP approved test labs that measure everything from asbestos to the accuracy of scales are paid by their clients. It is a common practice for test labs to be paid by their clients.
What's Fair
Considering the defamatory misrepresentations and bias in the Linux Today article written by Mr. Whitinger and Mr. Johnson, we believe that Linux Today should take the following actions in fairness to Mindcraft and its readers:
Remove the article from its Web site and put an apology in its place. If you do not do that, at least provide a link to this rebuttal at the top of the article so that your readers can get both sides of the story.
Disclose who Mr. Whitinger and Mr. Johnson work for. Were they paid by someone with a vested interest in seeing Linux outperform Windows NT Server?
Disclose who owns Linux Today and whether it gets advertising revenue from companies that have a vested interest in seeing Linux outperform Windows NT Server.
Provide fair coverage from an unbiased reporter of Mindcraft's Open Benchmark of Windows NT Server and Linux. For this benchmark, we have invited Linus Torvalds, Jeremy Allison, Red Hat, and all of the other Linux experts we were in contact with to tune Linux, Apache, and Samba and to witness all tests. We have also invited Microsoft to tune Windows NT and to witness the tests. Mindcraft will participate in this benchmark at its own expense.
References to NetBench Documentation
The NetBench document entitled Understanding and Using NetBench 5.01 states on page 24, "You can only compare results if you used the same testbed each time you ran that test suite [emphasis added]."
Understanding and Using NetBench 5.01 clearly gives another reason why the performance measurements Mindcraft reported are so different than the ones Jeremy and PC Week found. Look what's stated on page 236, "Client-side caching occurs when the client is able to place some or all of the test workspace into its local RAM, which it then uses as a file cache. When the client caches these test files, the client can satisfy locally requests that normally require a network access. Because a client's RAM can handle a request many times faster than it takes that same request to traverse the LAN, the client's throughput scores show a definite rise over scores when no client-side caching occurs. In fact, the client's throughput numbers with client-side caching can increase to levels that are two to three times faster than is possible given the physical speed of the particular network [emphasis added]."
[May 16, 1999] Linux Today: Response to Microsoft: PC Week benchmarks reveal Mindcraft failings
[May 16, 1999] Linux Today: The Mindcraft Focus: Where, oh where, has my freedom of speech gone?