Nikolai Bezroukov. Portraits of Open Source Pioneers
Contents: Andrew Binstock interview (Apr 25, 2008); Advogato.org interview (Jan 25, 2000); "Rewriting the Bible in 0's and 1's"; Innovations interviews Donald Knuth on The Art of Computer Programming (Addison-Wesley, Fall/Winter 1996); Amazon.com interview by Tom Mace; 1993 Donald Knuth interview with Computer Literacy; "Geek Chic" questionnaire filled in by Donald Knuth.
Here is my collection of Donald Knuth interviews. Not all of them are of equal quality but each provides some interesting tidbits about his personality and views.
Knuth thinks that one of the skills you need to be a computer scientist is the ability to work with multiple levels of abstraction simultaneously; a working knowledge of machine language fits naturally into that picture. When you're working at one level, you try to ignore the details of what's happening at the lower levels. But when you're debugging a program and you get some mysterious error message, machine language is your best (and only) friend: the failure could be in any of the levels below you, and at the end of the day you need to understand the lowest level to find the bug.
The most impressive items in this collection are the two long interviews reproduced below: Richard Morris's 2013 conversation and Andrew Binstock's 2008 interview for InformIT.
He takes this error business very seriously. Engraved in the entryway to his home are the words of Danish poet Piet Hein:
The road to wisdom?
Well, it's plain
and simple to express:
Err
and err
and err again
but less
and less
and less.
11 June 2013
Fifty years after starting The Art of Computer Programming (TAOCP), Don Knuth is still working hard at the project. He has almost completed the first five volumes. The work is considered among the "hundred or so books that shaped a century of science". Richard Morris asks him how things are going and finds out more about his many achievements.
In the nearly fifty years since beginning The Art of Computer Programming, the book that has defined computer programming almost as much as it has defined him, Donald Knuth has received awards including the Turing Award (1974), the National Medal of Science (1979), and the Kyoto Prize (1996). He is an extraordinary man. As well as inventing literate programming and writing the most important textbook on programming algorithms, he is famous for designing and programming one of the most widely-used digital typesetting systems ever, even designing the fonts that went with it. He also pioneered the use of open-source software.
Knuth is a man of engaging charm and enthusiasms who combines a knowledge of history, music, art and mathematics with a unique insight into the art of computer programming.
Don Knuth has always viewed writing The Art of Computer Programming as the most important project of his life.
Knuth's full-time writing schedule means that he is 'pretty much a hermit… concentrating intensively and uninterruptedly on one subject at a time, rather than swapping a number of topics in and out of his head. I'm unable to schedule appointments with visitors, travel to conferences or accept speaking engagements, or undertake any new responsibilities of any kind.'
The irony is that computer science nearly lost Knuth because of his love of music (his house is built around a two-storey pipe organ that he designed himself); he says he intends to return to music once he has completed the expected seven volumes of The Art of Computer Programming.
- Don, the last time we spoke we touched on complexity, and you said that simplicity was the only answer to this. Why is it important to find simple solutions to software problems?
- It is important to find simple solutions instead of stopping as soon as a first solution is found.
I suppose people usually stop early because most problems do not have a simple solution; thus they figure there's no reason to spend any time looking further. If we start with the assumption that a simple solution does exist, we're much more likely to find one.
- You have an essay about developing TeX where you talk about going over to a pure, destructive QA personality and trying your hardest to break code. Do you think most developers are good at that?
- You are right that it takes a certain kind of mentality to create test cases that will detect subtle bugs. For example, I know that I'm not sneaky enough to ever become an expert on cryptography or computer security.
On the other hand I have been reasonably successful at designing torture tests for software that I've written, mostly by
(1) imagining myself as the enemy of the system, rather than as its friend;
(2) thinking of inputs that are legal but bizarre and unlikely ever to be useful;
(3) embedding an incredibly complicated construction into another that's even less scrutable.
Some parts of my test programs for TeX and METAFONT required many hours of thought before I could convince myself that the program did the right thing. But in the process I discovered bugs that I'm pretty sure wouldn't have been found by any other method that I've ever heard about.
Even better results will presumably be obtained if several different people independently create the torture tests. I can imagine that test creation would be a satisfying career.
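Knuth's three heuristics above can be sketched in code. The bracket-matcher and the torture inputs below are hypothetical illustrations of the approach, not tests from TeX or METAFONT:

```python
# A minimal sketch of Knuth-style "torture testing": attack a simple
# bracket-matcher with inputs that are legal but bizarre and deeply nested.
# The function and the test cases are illustrative only.

def max_depth(s: str) -> int:
    """Return the deepest brace nesting in s; raise ValueError on mismatch."""
    depth, deepest = 0, 0
    for ch in s:
        if ch == '{':
            depth += 1
            deepest = max(deepest, depth)
        elif ch == '}':
            depth -= 1
            if depth < 0:
                raise ValueError("unmatched '}'")
    if depth != 0:
        raise ValueError("unmatched '{'")
    return deepest

# (1) imagine yourself as the enemy: empty input, lone closer
assert max_depth("") == 0
# (2) legal but bizarre, unlikely ever to be useful: 10,000 nesting levels
n = 10_000
assert max_depth("{" * n + "}" * n) == n
# (3) embed a complicated construction inside another one, many times over
weird = "{a{b}{c{d{e}}}}" * 500
assert max_depth(weird) == 4
```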
I guess I do tend to use systems in unexpected ways, and I get some satisfaction when this reveals a flaw. For example, I remember having fun while visiting my sister and playing with a `shoot-em-up' game that my nephew showed me.
Since I'm kind of a pacifist, I tried shooting bullets at the wall, instead of trying to kill the hidden attackers. Sure enough, I could spell my name on that wall, by making bullet holes in an appropriate pattern. But later when I came back to that same position, after wandering around further in the game, my name was gone! I think that was a bug in the software.
Long ago when I was a teenager, probably in 1952 or 1953, I saw my first "computer game", a demonstration of tic-tac-toe at a museum in Chicago where visitors were challenged to beat a machine.
The exhibit was designed by AT&T, and I think it was controlled by relay switches rather than by a real general-purpose computer.
'Hmm. No Angry Birds here!'
(Don Knuth with an IBM 650 in 1958)
I knew that I could never win if I made normal responses; so I intentionally made the stupidest possible moves. The machine got to a position where it had two ways to beat me by completing three in a row. I decided not to block either possibility. In response, the machine made both of its winning moves ... and this was clearly in violation of the rules. So I had won a moral victory.
Thus, it appears that an encouragement to "think outside the box" helps for designing test cases, just as it does in many other situations.
- If you were to start over and design TeX today, would the advances in computing or your understanding change the design in dramatic ways or would it turn out mostly the same?
- I'm not sure if anybody can still write such a program today, without paying a fortune to license patented ideas. But suppose we ignore such problems; then I would keep the system basically as it is now.
The only serious mistake that I regret making with respect to TeX is that I used binary arithmetic internally but decimal arithmetic in the user interface; I should have done all computations in decimal.
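The mismatch Knuth regrets is easy to demonstrate. TeX represents lengths internally as integer multiples of 2^-16 pt ("scaled points"), so a decimal dimension typed by the user must be rounded to binary fixed point. The helper names below are illustrative, not TeX's actual routines:

```python
# TeX stores lengths as integer "scaled points" (1 pt = 2**16 sp), so a
# decimal dimension typed by the user is rounded to the nearest sp.
# This sketch shows the binary/decimal mismatch Knuth mentions.

SP_PER_PT = 2 ** 16  # 65536 scaled points per printer's point

def pt_to_sp(pt: float) -> int:
    # round a user-level decimal value to internal binary fixed point
    return round(pt * SP_PER_PT)

def sp_to_pt(sp: int) -> float:
    return sp / SP_PER_PT

sp = pt_to_sp(0.1)    # 6554 sp, since 0.1 pt is not representable exactly
back = sp_to_pt(sp)   # 0.100006103515625 pt, not the 0.1 the user typed
print(sp, back)
```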
- What would you suggest to someone who wants to be a better software designer? Should they decide whether they want to make money or to do science? Do you think we need better incentives to make better software?
- I think existing incentives are working fine, except of course that I wish more people would discover the advantages of literate programming.
- Is there any truth in the rumour that Chuck Moore had an influence on your thinking about algorithms, or that you had any influence on Chuck?
- We haven't intersected at all, as far as I know.
- What does your process look like when you're doing exploratory programming and readying it for publication?
- I write about five programs each week, on average, and I continue to enjoy making them "literate." I often tear up the first draft or two, after I see how things are going, because algorithms tend to be full of surprises. I often make close studies of code written by the leading experts in whatever kind of application I'm currently studying; but I also try to rethink everything from scratch. I make many mistakes, but afterwards I try to help my readers to avoid them.
- How do you define the idea of designing a programming language? Is it a tool to express ideas or a tool to express goals?
- I think of a programming language as a tool to convert a programmer's mental images into precise operations that a machine can perform. The main idea is to match the user's intuition as well as possible. There are many kinds of users, and many kinds of application areas, so we need many kinds of languages.
I discuss language design in detail in Chapter 11 of my recent book Selected Papers on Computer Languages. I was asked in the 1960s to write about this topic, and the best way I could figure out to explain principles of good design was to violate them all and to present the design of BL/I, "Bad Language One." But I hesitated for more than 40 years before publishing anything about BL/I, because I was afraid people might implement it and start to use it. Finally, when preparing that book, I decided to take a chance.
- I realise this is an enormous question, but what is the link between the design of a language and the design of software written with that language?
- The software writers who have particular ways of thinking about algorithms need a language that matches their thoughts so that they can efficiently transform those thoughts into working code.
Different thought processes result in different structures.
- What are the major problems in the computer industry today?
- D'uh. I'm just an academic who writes about programming; I never have understood industry or economics.
- Do you prefer freedom or order? Do you prefer one way to do a single thing, or a thousand ways to reach the same goal?
- (a) Freedom AND order.
- (b) I guess I prefer maybe three ways, having different characteristics, together with the knowledge of how to convert each of them into the other two.
- In what ways do you think you have influenced the computer industry as a technologist and what lessons should people learn from your experience?
- I can't say what influence I've had, but I can summarize what I've been trying to accomplish during my career: I've attempted to organize many of the best ideas that have been discovered related to programming, and to explain them effectively, because I believe computer science is a beautiful body of knowledge.
I'm glad that this knowledge has proved to be very useful, although I would actually want to organize it and explain it even if the only motivation were intellectual curiosity and a desire to explore fascinating patterns.
Apr 25, 2008 | informit.com
Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.
Andrew Binstock: You are one of the fathers of the open-source revolution, even if you aren't widely heralded as such. You previously have stated that you released TeX as open source because of the problem of proprietary implementations at the time, and to invite corrections to the code-both of which are key drivers for open-source projects today. Have you been surprised by the success of open source since that time?
Donald Knuth: The success of open source code is perhaps the only thing in the computer field that hasn't surprised me during the past several decades. But it still hasn't reached its full potential; I believe that open-source programs will begin to be completely dominant as the economy moves more and more from products towards services, and as more and more volunteers arise to improve the code.
For example, open-source code can produce thousands of binaries, tuned perfectly to the configurations of individual users, whereas commercial software usually will exist in only a few versions. A generic binary executable file must include things like inefficient "sync" instructions that are totally inappropriate for many installations; such wastage goes away when the source code is highly configurable. This should be a huge win for open source.
Yet I think that a few programs, such as Adobe Photoshop, will always be superior to competitors like the Gimp-for some reason, I really don't know why! I'm quite willing to pay good money for really good software, if I believe that it has been produced by the best programmers.
Remember, though, that my opinion on economic questions is highly suspect, since I'm just an educator and scientist. I understand almost nothing about the marketplace.
Andrew: A story states that you once entered a programming contest at Stanford (I believe) and you submitted the winning entry, which worked correctly after a single compilation. Is this story true? In that vein, today's developers frequently build programs writing small code increments followed by immediate compilation and the creation and running of unit tests. What are your thoughts on this approach to software development?
Donald: The story you heard is typical of legends that are based on only a small kernel of truth. Here's what actually happened: John McCarthy decided in 1971 to have a Memorial Day Programming Race. All of the contestants except me worked at his AI Lab up in the hills above Stanford, using the WAITS time-sharing system; I was down on the main campus, where the only computer available to me was a mainframe for which I had to punch cards and submit them for processing in batch mode. I used Wirth's ALGOL W system (the predecessor of Pascal). My program didn't work the first time, but fortunately I could use Ed Satterthwaite's excellent offline debugging system for ALGOL W, so I needed only two runs. Meanwhile, the folks using WAITS couldn't get enough machine cycles because their machine was so overloaded. (I think that the second-place finisher, using that "modern" approach, came in about an hour after I had submitted the winning entry with old-fangled methods.) It wasn't a fair contest.
As to your real question, the idea of immediate compilation and "unit tests" appeals to me only rarely, when I'm feeling my way in a totally unknown environment and need feedback about what works and what doesn't. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up."
Andrew: One of the emerging problems for developers, especially client-side developers, is changing their thinking to write programs in terms of threads. This concern, driven by the advent of inexpensive multicore PCs, surely will require that many algorithms be recast for multithreading, or at least to be thread-safe. So far, much of the work you've published for Volume 4 of The Art of Computer Programming (TAOCP) doesn't seem to touch on this dimension. Do you expect to enter into problems of concurrency and parallel programming in upcoming work, especially since it would seem to be a natural fit with the combinatorial topics you're currently working on?
Donald: The field of combinatorial algorithms is so vast that I'll be lucky to pack its sequential aspects into three or four physical volumes, and I don't think the sequential methods are ever going to be unimportant. Conversely, the half-life of parallel techniques is very short, because hardware changes rapidly and each new machine needs a somewhat different approach. So I decided long ago to stick to what I know best. Other people understand parallel machines much better than I do; programmers should listen to them, not me, for guidance on how to deal with simultaneity.
Andrew: Vendors of multicore processors have expressed frustration at the difficulty of moving developers to this model. As a former professor, what thoughts do you have on this transition and how to make it happen? Is it a question of proper tools, such as better native support for concurrency in languages, or of execution frameworks? Or are there other solutions?
Donald: I don't want to duck your question entirely. I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they're trying to pass the blame for the future demise of Moore's Law to the software writers by giving us machines that work faster only on a few key benchmarks! I won't be surprised at all if the whole multithreading idea turns out to be a flop, worse than the "Itanium" approach that was supposed to be so terrific, until it turned out that the wished-for compilers were basically impossible to write.
Let me put it this way: During the past 50 years, I've written well over a thousand programs, many of which have substantial size. I can't think of even five of those programs that would have been enhanced noticeably by parallelism or multithreading. Surely, for example, multiple processors are no help to TeX.
How many programmers do you know who are enthusiastic about these promised machines of the future? I hear almost nothing but grief from software people, although the hardware folks in our department assure me that I'm wrong.
I know that important applications for parallelism exist-rendering graphics, breaking codes, scanning images, simulating physical and biological processes, etc. But all these applications require dedicated code and special-purpose techniques, which will need to be changed substantially every few years.
Even if I knew enough about such methods to write about them in TAOCP, my time would be largely wasted, because soon there would be little reason for anybody to read those parts. (Similarly, when I prepare the third edition of Volume 3 I plan to rip out much of the material about how to sort on magnetic tapes. That stuff was once one of the hottest topics in the whole software field, but now it largely wastes paper when the book is printed.)
The machine I use today has dual processors. I get to use them both only when I'm running two independent jobs at the same time; that's nice, but it happens only a few minutes every week. If I had four processors, or eight, or more, I still wouldn't be any better off, considering the kind of work I do-even though I'm using my computer almost every day during most of the day. So why should I be so happy about the future that hardware vendors promise? They think a magic bullet will come along to make multicores speed up my kind of work; I think it's a pipe dream. (No-that's the wrong metaphor! "Pipelines" actually work for me, but threads don't. Maybe the word I want is "bubble.")
From the opposite point of view, I do grant that web browsing probably will get better with multicores. I've been talking about my technical work, however, not recreation. I also admit that I haven't got many bright ideas about what I wish hardware designers would provide instead of multicores, now that they've begun to hit a wall with respect to sequential computation. (But my MMIX design contains several ideas that would substantially improve the current performance of the kinds of programs that concern me most-at the cost of incompatibility with legacy x86 programs.)
Andrew: One of the few projects of yours that hasn't been embraced by a widespread community is literate programming. What are your thoughts about why literate programming didn't catch on? And is there anything you'd have done differently in retrospect regarding literate programming?
Donald: Literate programming is a very personal thing. I think it's terrific, but that might well be because I'm a very strange person. It has tens of thousands of fans, but not millions.
In my experience, software created with literate programming has turned out to be significantly better than software developed in more traditional ways. Yet ordinary software is usually okay-I'd give it a grade of C (or maybe C++), but not F; hence, the traditional methods stay with us. Since they're understood by a vast community of programmers, most people have no big incentive to change, just as I'm not motivated to learn Esperanto even though it might be preferable to English and German and French and Russian (if everybody switched).
Jon Bentley probably hit the nail on the head when he once was asked why literate programming hasn't taken the whole world by storm. He observed that a small percentage of the world's population is good at programming, and a small percentage is good at writing; apparently I am asking everybody to be in both subsets.
Yet to me, literate programming is certainly the most important thing that came out of the TeX project. Not only has it enabled me to write and maintain programs faster and more reliably than ever before, and been one of my greatest sources of joy since the 1980s-it has actually been indispensable at times. Some of my major programs, such as the MMIX meta-simulator, could not have been written with any other methodology that I've ever heard of. The complexity was simply too daunting for my limited brain to handle; without literate programming, the whole enterprise would have flopped miserably.
If people do discover nice ways to use the newfangled multithreaded machines, I would expect the discovery to come from people who routinely use literate programming. Literate programming is what you need to rise above the ordinary level of achievement. But I don't believe in forcing ideas on anybody. If literate programming isn't your style, please forget it and do what you like. If nobody likes it but me, let it die.
On a positive note, I've been pleased to discover that the conventions of CWEB are already standard equipment within preinstalled software such as Makefiles, when I get off-the-shelf Linux these days.
Andrew: In Fascicle 1 of Volume 1, you reintroduced the MMIX computer, which is the 64-bit upgrade to the venerable MIX machine comp-sci students have come to know over many years. You previously described MMIX in great detail in MMIXware. I've read portions of both books, but can't tell whether the Fascicle updates or changes anything that appeared in MMIXware, or whether it's a pure synopsis. Could you clarify?
Donald: Volume 1 Fascicle 1 is a programmer's introduction, which includes instructive exercises and such things. The MMIXware book is a detailed reference manual, somewhat terse and dry, plus a bunch of literate programs that describe prototype software for people to build upon. Both books define the same computer (once the errata to MMIXware are incorporated from my website). For most readers of TAOCP, the first fascicle contains everything about MMIX that they'll ever need or want to know.
I should point out, however, that MMIX isn't a single machine; it's an architecture with almost unlimited varieties of implementations, depending on different choices of functional units, different pipeline configurations, different approaches to multiple-instruction-issue, different ways to do branch prediction, different cache sizes, different strategies for cache replacement, different bus speeds, etc. Some instructions and/or registers can be emulated with software on "cheaper" versions of the hardware. And so on. It's a test bed, all simulatable with my meta-simulator, even though advanced versions would be impossible to build effectively until another five years go by (and then we could ask for even further advances just by advancing the meta-simulator specs another notch).
Suppose you want to know if five separate multiplier units and/or three-way instruction issuing would speed up a given MMIX program. Or maybe the instruction and/or data cache could be made larger or smaller or more associative. Just fire up the meta-simulator and see what happens.
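The meta-simulator's actual configuration language isn't reproduced here; the toy direct-mapped cache simulator below merely sketches the kind of experiment Knuth describes, replaying one address trace against two hypothetical cache geometries and comparing hit rates:

```python
# Replay an address trace against two cache geometries and compare hit
# rates. This is a generic illustration, not the MMIX meta-simulator.

def hit_rate(trace, lines, line_bytes=64):
    """Simulate a direct-mapped cache; return the fraction of hits."""
    tags = [None] * lines
    hits = 0
    for addr in trace:
        block = addr // line_bytes       # which cache line the address needs
        index = block % lines            # where a direct-mapped cache puts it
        if tags[index] == block:
            hits += 1
        else:
            tags[index] = block          # miss: evict whatever was there
    return hits / len(trace)

# a 128-byte-strided walk over a 64 KiB array, repeated twice
trace = [i * 128 for i in range(512)] * 2
small = hit_rate(trace, lines=256)   # 16 KiB cache: every access conflicts
large = hit_rate(trace, lines=2048)  # 128 KiB cache: second pass all hits
print(small, large)                  # prints: 0.0 0.5
```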
Andrew: As I suspect you don't use unit testing with MMIXAL, could you step me through how you go about making sure that your code works correctly under a wide variety of conditions and inputs? If you have a specific work routine around verification, could you describe it?
Donald: Most examples of machine language code in TAOCP appear in Volumes 1-3; by the time we get to Volume 4, such low-level detail is largely unnecessary and we can work safely at a higher level of abstraction. Thus, I've needed to write only a dozen or so MMIX programs while preparing the opening parts of Volume 4, and they're all pretty much toy programs-nothing substantial. For little things like that, I just use informal verification methods, based on the theory that I've written up for the book, together with the MMIXAL assembler and MMIX simulator that are readily available on the Net (and described in full detail in the MMIXware book).
That simulator includes debugging features like the ones I found so useful in Ed Satterthwaite's system for ALGOL W, mentioned earlier. I always feel quite confident after checking a program with those tools.
Andrew: Despite its formulation many years ago, TeX is still thriving, primarily as the foundation for LaTeX. While TeX has been effectively frozen at your request, are there features that you would want to change or add to it, if you had the time and bandwidth? If so, what are the major items you would add or change?
Donald: I believe changes to TeX would cause much more harm than good. Other people who want other features are creating their own systems, and I've always encouraged further development-except that nobody should give their program the same name as mine. I want to take permanent responsibility for TeX and Metafont, and for all the nitty-gritty things that affect existing documents that rely on my work, such as the precise dimensions of characters in the Computer Modern fonts.
Andrew: One of the little-discussed aspects of software development is how to do design work on software in a completely new domain. You were faced with this issue when you undertook TeX: No prior art was available to you as source code, and it was a domain in which you weren't an expert. How did you approach the design, and how long did it take before you were comfortable entering into the coding portion?
Donald: That's another good question! I've discussed the answer in great detail in Chapter 10 of my book Literate Programming, together with Chapters 1 and 2 of my book Digital Typography. I think that anybody who is really interested in this topic will enjoy reading those chapters. (See also Digital Typography Chapters 24 and 25 for the complete first and second drafts of my initial design of TeX in 1977.)
Andrew: The books on TeX and the program itself show a clear concern for limiting memory usage-an important problem for systems of that era. Today, the concern for memory usage in programs has more to do with cache sizes. As someone who has designed a processor in software, the issues of cache-aware and cache-oblivious algorithms surely must have crossed your radar screen. Is the role of processor caches on algorithm design something that you expect to cover, even if indirectly, in your upcoming work?
Donald: I mentioned earlier that MMIX provides a test bed for many varieties of cache. And it's a software-implemented machine, so we can perform experiments that will be repeatable even a hundred years from now. Certainly the next editions of Volumes 1-3 will discuss the behavior of various basic algorithms with respect to different cache parameters.
In Volume 4 so far, I count about a dozen references to cache memory and cache-friendly approaches (not to mention a "memo cache," which is a different but related idea in software).
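A "memo cache" is a software cache of previously computed results (memoization), as distinct from a hardware memory cache. A minimal sketch, using names chosen only for illustration:

```python
# Memoization: remember results of earlier calls so that repeated
# subproblems are computed only once instead of exponentially many times.
from functools import lru_cache

@lru_cache(maxsize=None)
def binomial(n: int, k: int) -> int:
    """Binomial coefficient via Pascal's recurrence, with a memo cache."""
    if k == 0 or k == n:
        return 1
    return binomial(n - 1, k - 1) + binomial(n - 1, k)

print(binomial(40, 20))  # prints 137846528820; instant with the memo cache
```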
Andrew: What set of tools do you use today for writing TAOCP? Do you use TeX? LaTeX? CWEB? Word processor? And what do you use for the coding?
Donald: My general working style is to write everything first with pencil and paper, sitting beside a big wastebasket. Then I use Emacs to enter the text into my machine, using the conventions of TeX. I use tex, dvips, and gv to see the results, which appear on my screen almost instantaneously these days. I check my math with Mathematica.
I program every algorithm that's discussed (so that I can thoroughly understand it) using CWEB, which works splendidly with the GDB debugger. I make the illustrations with MetaPost (or, in rare cases, on a Mac with Adobe Photoshop or Illustrator). I have some homemade tools, like my own spell-checker for TeX and CWEB within Emacs. I designed my own bitmap font for use with Emacs, because I hate the way the ASCII apostrophe and the left open quote have morphed into independent symbols that no longer match each other visually. I have special Emacs modes to help me classify all the tens of thousands of papers and notes in my files, and special Emacs keyboard shortcuts that make bookwriting a little bit like playing an organ. I prefer rxvt to xterm for terminal input. Since last December, I've been using a file backup system called backupfs, which meets my need beautifully to archive the daily state of every file.
According to the current directories on my machine, I've written 68 different CWEB programs so far this year. There were about 100 in 2007, 90 in 2006, 100 in 2005, 90 in 2004, etc. Furthermore, CWEB has an extremely convenient "change file" mechanism, with which I can rapidly create multiple versions and variations on a theme; so far in 2008 I've made 73 variations on those 68 themes. (Some of the variations are quite short, only a few bytes; others are 5KB or more. Some of the CWEB programs are quite substantial, like the 55-page BDD package that I completed in January.) Thus, you can see how important literate programming is in my life.
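For readers unfamiliar with CWEB's change files: a change file patches a master .w source textually, each chunk giving lines to match (@x), their replacements (@y), and a terminator (@z). The file names and the patched line below are hypothetical:

```
Hypothetical change file prog.ch, applied with: ctangle prog.w prog.ch
(text between chunks, like this line, is ignored)
@x
#define BUFFER_SIZE 1024
@y
#define BUFFER_SIZE 65536 /* larger buffer for this variant */
@z
```

Because the master file is never edited, dozens of variations can share one source, which is how the 73 variations on 68 themes mentioned above are managed.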
I currently use Ubuntu Linux, on a standalone laptop-it has no Internet connection. I occasionally carry flash memory drives between this machine and the Macs that I use for network surfing and graphics; but I trust my family jewels only to Linux. Incidentally, with Linux I much prefer the keyboard focus that I can get with classic FVWM to the GNOME and KDE environments that other people seem to like better. To each his own.
Andrew: You state in the preface of Fascicle 0 of Volume 4 of TAOCP that Volume 4 surely will comprise three volumes and possibly more. It's clear from the text that you're really enjoying writing on this topic. Given that, what is your confidence in the note posted on the TAOCP website that Volume 5 will see light of day by 2015?
Donald: If you check the Wayback Machine for previous incarnations of that web page, you will see that the number 2015 has not been constant.
You're certainly correct that I'm having a ball writing up this material, because I keep running into fascinating facts that simply can't be left out-even though more than half of my notes don't make the final cut.
Precise time estimates are impossible, because I can't tell until getting deep into each section how much of the stuff in my files is going to be really fundamental and how much of it is going to be irrelevant to my book or too advanced. A lot of the recent literature is academic one-upmanship of limited interest to me; authors these days often introduce arcane methods that outperform the simpler techniques only when the problem size exceeds the number of protons in the universe. Such algorithms could never be important in a real computer application. I read hundreds of such papers to see if they might contain nuggets for programmers, but most of them wind up getting short shrift.
From a scheduling standpoint, all I know at present is that I must someday digest a huge amount of material that I've been collecting and filing for 45 years. I gain important time by working in batch mode: I don't read a paper in depth until I can deal with dozens of others on the same topic during the same week. When I finally am ready to read what has been collected about a topic, I might find out that I can zoom ahead because most of it is eminently forgettable for my purposes. On the other hand, I might discover that it's fundamental and deserves weeks of study; then I'd have to edit my website and push that number 2015 closer to infinity.
Andrew: In late 2006, you were diagnosed with prostate cancer. How is your health today?
Donald: Naturally, the cancer will be a serious concern. I have superb doctors. At the moment I feel as healthy as ever, modulo being 70 years old. Words flow freely as I write TAOCP and as I write the literate programs that precede drafts of TAOCP. I wake up in the morning with ideas that please me, and some of those ideas actually please me also later in the day when I've entered them into my computer.
On the other hand, I willingly put myself in God's hands with respect to how much more I'll be able to do before cancer or heart disease or senility or whatever strikes. If I should unexpectedly die tomorrow, I'll have no reason to complain, because my life has been incredibly blessed. Conversely, as long as I'm able to write about computer science, I intend to do my best to organize and expound upon the tens of thousands of technical papers that I've collected and made notes on since 1962.
Andrew: On your website, you mention that the Peoples Archive recently made a series of videos in which you reflect on your past life. In segment 93, "Advice to Young People," you advise that people shouldn't do something simply because it's trendy. As we know all too well, software development is as subject to fads as any other discipline. Can you give some examples that are currently in vogue, which developers shouldn't adopt simply because they're currently popular or because that's the way they're currently done? Would you care to identify important examples of this outside of software development?
Donald: Hmm. That question is almost contradictory, because I'm basically advising young people to listen to themselves rather than to others, and I'm one of the others. Almost every biography of every person whom you would like to emulate will say that he or she did many things against the "conventional wisdom" of the day.
Still, I hate to duck your questions even though I also hate to offend other people's sensibilities-given that software methodology has always been akin to religion. With the caveat that there's no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, let me just say that almost everything I've ever heard associated with the term "extreme programming" sounds like exactly the wrong way to go...with one exception. The exception is the idea of working in teams and reading each other's code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me.
I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you're totally convinced that reusable code is wonderful, I probably won't be able to sway you anyway, but you'll never convince me that reusable code isn't mostly a menace.
Here's a question that you may well have meant to ask: Why is the new book called Volume 4 Fascicle 0, instead of Volume 4 Fascicle 1? The answer is that computer programmers will understand that I wasn't ready to begin writing Volume 4 of TAOCP at its true beginning point, because we know that the initialization of a program can't be written until the program itself takes shape. So I started in 2005 with Volume 4 Fascicle 2, after which came Fascicles 3 and 4. (Think of Star Wars, which began with Episode 4.)
 My colleague Kunle Olukotun points out that, if the usage of TeX became a major bottleneck so that people had a dozen processors and really needed to speed up their typesetting terrifically, a super-parallel version of TeX could be developed that uses "speculation" to typeset a dozen chapters at once: Each chapter could be typeset under the assumption that the previous chapters don't do anything strange to mess up the default logic. If that assumption fails, we can fall back on the normal method of doing a chapter at a time; but in the majority of cases, when only normal typesetting was being invoked, the processing would indeed go 12 times faster. Users who cared about speed could adapt their behavior and use TeX in a disciplined way.
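The speculation pattern Olukotun describes can be illustrated with a toy sketch (the names `render` and `typeset_speculative` are hypothetical, and this models only the idea, not TeX itself): typeset every chapter in parallel under a default assumption about global state, keep each result whose assumption turned out to hold, and recompute only where it failed.

```python
from concurrent.futures import ThreadPoolExecutor

def render(text, config):
    """Stand-in for typesetting one chapter under a given configuration."""
    return f"[{config}] {text.upper()}"

def typeset_speculative(chapters, default="plain"):
    """Speculatively typeset every chapter in parallel, assuming no earlier
    chapter changed the global configuration; keep each speculative result
    whose assumption held, and re-render the rest with the true state.
    Each chapter is a (text, new_config_or_None) pair."""
    with ThreadPoolExecutor() as pool:
        guesses = list(pool.map(lambda ch: render(ch[0], default), chapters))
    out, config = [], default
    for guess, (text, new_cfg) in zip(guesses, chapters):
        # The speculation was valid only if the state is still the default here.
        out.append(guess if config == default else render(text, config))
        if new_cfg is not None:   # this chapter mutates the global state
            config = new_cfg
    return out
```

When no chapter touches the configuration, every speculative result is kept and the work is fully parallel; a chapter that does change state forces only its successors to be redone, which is the fallback case in Knuth's description.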
Andrew Binstock is the principal analyst at Pacific Data Works. He is a columnist for SD Times and senior contributing editor for InfoWorld magazine. His blog can be found at: http://binstock.blogspot.com.
Advogato: The first questions that I have are about free software. TeX was one of the first big projects that was released as free software and had a major impact. These days, of course, it's a big deal. But I think when TeX came out it was just something you did, right?
Prof. Knuth: I saw that the whole business of typesetting was being held back by proprietary interests, and I didn't need any claim to fame. I had already been successful with my books and so I didn't have to stake it all on anything. So it didn't matter to me whether or not I got anything financial out of it.
But how did you address those problems with the fonts that got contributed to TeX?
Prof. Knuth: In my case, I hired research associates and they put their fonts out into the open. Or else, other people learned it and they did it for the love of it. Some of the excellent fonts came about because they were for Armenian and Ethiopian and so on, where there wasn't that much money. It was either them taking time and making the fonts or else their favorite language would be forever backwards, so I made tools by which they could do this. But in every case, the people who did it weren't relying on this for their income.
If we had somebody who would commission fonts and pay the font designer, the font designer wouldn't be upset at all about having it open, as long as the font designer gets some support.
And you did some of that.
Yeah. In fact, I worked with some of the absolute best type designers, and they were thrilled by the idea that they could teach what they knew to students and have it published and everything. They weren't interested in closed stuff. They were interested in controlling the quality, in making sure somebody wasn't going to spoil it, but we could assure them of that.
One of the things that struck me when I was reading "Digital Typography" is the intensive study that you did, especially in the area of math typesetting. When I was writing papers, using math formulas in TeX, I just typed in the commands and out came the math and it looked pretty good to me. It shouldn't have been surprising, but it definitely struck me how much attention you paid to the best mathematics typesetting of past centuries.
I do strongly think that when people start throwing computers at something, they think it's a whole new ballgame, so why should they study the past? I think that is a terrible mistake. But also, I love to read historical source materials, so I couldn't resist. I had a good excuse to study these things, and the more I looked at it, the more interesting it was. I don't think responsible computer scientists should be unaware of the hundreds of years of history that went before us. So for me it was just natural to approach it that way.
There's a fairly major controversy with TrueType right now, in that there are a number of patents that are now owned by Apple. It's kind of interesting to me that that is the case even though it's for the most part derivative work of what was in Metafont.
I've been very unhappy with the way patents are handled. But the more I look at it, the more I decide that it's a waste of time. I mean, my life is too short to fight with that, so I've just been staying away. But I know that the ideas for rendering... The main thing is that TrueType uses only quadratic splines, and that Type1 fonts use cubic splines, which allow you to get by with a lot fewer points where you have to specify things.
The quadratic has the great advantage that there's a real cheap way to render them. You can make hardware to draw a quadratic spline lickety-split. It's all Greek mathematics, the conic sections. You can describe a quadratic spline by a quadratic equation f(x, y) = 0 such that the value of f(x, y) is positive on one side of the curve and negative on the other side. And then you can just follow along pixel by pixel, and as x changes by one or y changes by one, you can see which way to move to draw the curve in the optimal way. And the mathematics is really simple for a quadratic. The corresponding thing for a cubic is six times as complicated, and it has extra very strange effects in it because cubic curves can have cusps in them that are hidden. They can have places where the function will be plus on both sides of the cubic, instead of plus on one side and minus on the other.
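The sign-test stepping Knuth describes can be sketched for the simplest conic, a circle, where f(x, y) = x² + y² − r² is positive outside the curve and negative inside (the general quadratic spline works the same way; this function name and interface are illustrative, not from any real rasterizer):

```python
def trace_conic_octant(r):
    """Trace one octant of the circle x^2 + y^2 = r^2 using only the sign
    of f(x, y) = x^2 + y^2 - r^2 at candidate midpoints -- the cheap
    pixel-by-pixel stepping that makes quadratics easy to render."""
    points = []
    x, y = 0, r
    while x <= y:
        points.append((x, y))
        # Evaluate f at the midpoint between the two candidate next pixels.
        f_mid = (x + 1) ** 2 + (y - 0.5) ** 2 - r ** 2
        x += 1
        if f_mid > 0:      # midpoint lies outside the curve: step down too
            y -= 1
    return points
```

Each step needs only an increment and a sign test, which is why the hardware case is so cheap; for a cubic, the analogous decision logic is far more complicated, and the hidden cusps Knuth mentions break the clean plus-on-one-side, minus-on-the-other invariant this loop relies on.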
As it is, if you look at Principles of Programming Languages, there's all this type theory in there with beautiful Greek letters, inference rules and so on. The visual aesthetic seems to have been inspired by the easy availability that TeX provides.
Certainly, TeX was partly influenced by the needs of computer science, which pushed beyond what mathematicians had needed. Computer scientists needed to see mathematical structure, but they also needed to see the kind of structure you have in programs. So we had to push the envelope in the ACM journals in the '60s: to publish computer science papers, the typesetters still working by hand were being forced to work harder by the computer scientists than by the mathematicians of that time. It's part of the need for a way to see the structure in the stuff you're working with. Since I'm writing books about computers, naturally I made TeX so that it could do that.
It's clear that TeX was inspired by the visual aesthetic of mathematics.
I'm just saying: to what extent mathematics and computer science today is being influenced by the visual aesthetic of TeX?
Well, I don't know. I think the fact that TeX was open-ended means that people can choose their notation themselves. They don't have to adopt somebody else's macros particularly. That was the bind that we were in before. We would have to first write our paper, then explain it to the printer, and the printer would maybe get it right. But now we're less hesitant to use a notation that's not traditional, but that we think is appropriate, because we know that we don't have to go through a noisy channel in the middle.
One of the other accomplishments of TeX that I continue to be impressed with is the consistent rendering, the idea that you have a source file and that on virtually any implementation of TeX, you'll get the same results.
That's what I insisted on the most. I didn't want to get paid, but I didn't want it to change.
Speaking of literate programming, the question that I have for you is: now that people are moving everything to the Web, including programming, do you think that's another chance for literate programming to become popular?
There's a lot of ferment in this direction. I still haven't gotten the dvipdf program installed on my Linux machine; I guess I've got to try it. But a guy has now worked it out so that a literate program can automatically come out in Acrobat format, so that you can use all the features of the Acrobat reader to click on stuff and move around in the documentation.
Cross-referencing and so on?
Yeah, find a variable where it's declared, and all other uses of the variable. It's certainly a natural thing for hypertext, and this guy in Brazil has worked out a nice system. Still, the thing that's holding it back is probably that some programmers just don't like to document their stuff.
That's certainly a problem we've had in free software, that if there's a documentation file and then a code file, that the two often get out of sync.
Things are slowly changing, but most of the horror stories that you hear about that stuff are because of the illiterate nature of the programs.
So you think that literate programming is a way to make that better?
It's so much better than the alternative, but I think Jon Bentley explained it to me best: not a big percentage of the world's population is really good at programming, and not a big percentage is really good at documenting, and here you need to be doing both. [laughs]
So, I think that in my experiments with Stanford students I found that more than half of the students that I worked with really hit it off well with literate programming. Maybe Stanford students aren't average. [laugh]
Computing's philosopher king argues for elegance in programming -- and a Pulitzer Prize for the best-written program.

Donald Ervin Knuth is trying to explain what has delayed work on Volume 4 of his magnum opus. "I've never been a good estimator of how long things are going to take," he says.
Coming from someone who's been writing one book on and off for the past quarter-century, this seems a bit of an understatement. But consider that most of Knuth's work has been devoted to just that -- figuring out how much time things like computer programs take -- and the statement takes on new (and slightly disingenuous) meanings.
"I'm getting toward being able to take up Volume 4 full time," Knuth says. "I'm writing little snippets. I wrote a sentence just the other day."
"Volume 4," of course, refers to the long-awaited next installment of Knuth's masterwork, "The Art of Computer Programming." Less a set of instruction manuals than a kind of analytic philosophy of programming, the books -- which first appeared in the 1960s -- lay out principles both broad and specific to guide computer programmers toward greater efficiency. So comprehensive are the texts that the Jargon File of hacker slang offers a definition of the word "Knuth": "Mythically, the reference that answers all questions about data structures or algorithms," and goes on to recommend a safe response to any question for which you don't have a ready answer: "I think you can find that in Knuth."
Time was when such a comment would have the curious programmer dusting off "Fundamental Algorithms" and "Sorting and Searching" (Volumes 1 and 3 of "Knuth"), which were required reading in computer science courses for decades. But modern keyboard jocks no longer worry about things like saving 11 microseconds in each iteration of a binary tree search (if they even know what a binary tree is). Instead, they spend their time assembling prefab software components and designing graphical user interfaces to wow clients. Some "write" whole systems having never even seen a line of code. To them, Knuth, now professor emeritus of the art of computer programming at Stanford University, is irrelevant, abstruse and bothersome because he illustrates concepts in machine code, the lowest-level programming language and the hardest to read.
If his attention to the minutiae of programming has earned the annoyance of a younger generation of programmers, though, Knuth remains the éminence grise of algorithm analysis, and one of the leading thinkers on programming in general.
"I think of him as sort of a godfather," says software engineer Ellen Ullman, author of "Close to the Machine: Technophilia and its Discontents." "It would be very difficult these days to take a job and approach programming in that sort of algorithm and design sense, [but] it's a solace to think that there are places where people think deeply about algorithms in a general and abstract way and have notions of elegance and beauty."
Of course, other computer scientists have made contributions to the field that are every bit as substantial (most notably Edsger Dijkstra, Tony Hoare and Niklaus Wirth). But Knuth's work brings to life the complex mathematical underpinnings of the discipline, and deals with the logistics of programming on all levels, from the conceptual design of solutions to the most intimate details of the machine. The fundamental elements of any computer program are, perhaps not surprisingly, time and space. (In programming terms, time describes the speed with which a program accomplishes its task, while space refers to the amount of memory a program requires both to store itself -- i.e. the length of the code -- and to compute and store its results.) But Knuth is concerned not only with bytes and microseconds, but with a concept that has come to be known in coding circles as "elegance," and that applies to programming at any level.
Elegance takes in such factors as readability, modular coding techniques and the ease with which a program can be adapted to other functions or expanded to perform additional tasks. (Knuth's broader ideas about documentation and structured programming are laid out in his 1992 book, "Literate Programming.") Though rarely mentioned, "sloppy coding" often costs companies a great deal in terms of time and money; programmers brought in to update the code of consultants gone by must spend hours or days deciphering a poorly documented program, or hunting down bugs that might have been caught easily had the initial programmer simply been a bit more conscientious in the practice of his craft.
Ullman points out that "the practice of programming has moved very far away from the notion that the professional programmer considers algorithms in a deep way. Of course," she adds, "it would be impossible if every bit of code had to go through that kind of deeply professional process. On the other hand, the code that we have would be better. There's no doubt in my mind that it would be better and more long-lasting code."
Ullman, however, admits she hasn't revisited Knuth's work in many years. Many people are put off, even on a first reading, by the "mythical" computer with which Knuth illustrates his concepts. MIX, "the world's first polyunsaturated computer," was designed by Knuth as a kind of ideal machine along the lines popular in the 1960s. (Knuth is now updating MIX to MMIX, a reduced instruction-set computing machine that more closely mimics computers in use today.) "The Art of Computer Programming" is filled with examples in MIX, Knuth's fictional machine code and assembly language. In today's world of natural-language compilers, pseudo-code and "click-and-drag" programming tools, though, learning a new assembly language is as attractive to most students of computer science as a visit to the dentist.
But programmers ignore "the very pulse of the machine" (a Wordsworth quotation found in Volume 1) at their peril. As Lyle Ramshaw, a former graduate student of Knuth's, points out, "Don claims that one of the skills that you need to be a computer scientist is the ability to work with multiple levels of abstraction simultaneously. When you're working at one level, you try and ignore the details of what's happening at the lower levels. But when you're debugging a computer program and you get some mysterious error message, it could be a failure in any of the levels below you, so you can't afford to be too compartmentalized."
"MIX was incredibly popular in the early '70s," Knuth says. "Right now there are a lot of comments on Amazon.com saying how it was my terrible mistake, and how am I ever going to recover from it? Well, some of those comments are right, but some of them are dead wrong. The people who say I shouldn't have machine language and just go into high-level languages, they're the ones I think are wrong."
In fact, without machine code, it would be impossible for Knuth to even attempt the low-level analyses (like the time spent executing each instruction in a computer program) that are the backbone of his work. The same BASIC program, for instance, may run at different speeds and use different amounts of memory on different types of machine. In addition, such languages tend to go in and out of vogue faster than a Madonna single. If Knuth based his books on Java, C++, VisualBASIC or SNOBOL (remember SNOBOL?), they'd be obsolete in a matter of months.
And as Knuth points out, "People who are more than casually interested in computers should have at least some idea of what the underlying hardware is like. Otherwise the programs they write will be pretty weird."
If Volume 4 has been a long time coming, Knuth has not been idle. Since finishing Volume 3 in 1973, he has written several academic works on computer science and mathematics, composed a novel (it took him one week), developed revolutionary and widely used desktop typography and font design systems (in which all of his books are now handsomely typeset) and engaged in a study of Chapter 3, Verse 16, of every single book of the Bible, which he published in 1991. ("It's different from any other book, and that means it was either very necessary or never should have been written," Knuth says.) He has also revised his earlier books, incorporating thousands of improvements, "including all the letters from people saying they had found errors." Knuth offers a reward of "at least" $2.56 (a "hexadecimal dollar") to anyone who points out a previously unsighted mistake in one of his books. To date, he has written more than $10,000 worth of such checks. "But I'm not sure how much of it has actually been cashed," he notes.
Unlike most books, Volume 4 will not appear all at once. Instead, 128-page fascicles will be released more or less as Knuth finishes writing them. Though not scheduled for publication until 2000 or later, the fascicles are sure to begin circulating informally before then. (A fascicle describing the MMIX machine is already available on the Internet.) Knuth, now 61, hopes to finish the book around 2003 -- though "that's probably slipped by a year or two," he admits. It could be a decade or two into the next millennium before he completes the set.
Peter Gordon, Knuth's editor at Addison-Wesley, sounds wistful when asked about a due date for Volume 4 (which will actually be published as three "sub-volumes"). "From my 20 years' experience in computer-science publishing, the most frequently asked question by far is, 'Where is Volume 4?'" he says. "Nobody has to say more than that, who the author is, what book they're talking about. Just 'Volume 4.'"
Speaking with Knuth, one gets the impression of a man hard pressed to keep up with his mind's high-speed output of ideas. His writing career dates back to 1957, when, as a 19-year-old freshman at the Case Institute of Technology, he earned $25 for the publication of his "Potrzebie System of Weights and Measures" in Mad magazine. His style has remained delightfully literate, featuring sly plays on the jargon of computing, as well as some that are not so sly: Chapter 2 of "Fundamental Algorithms" opens with "Hamlet's" first-act resolution "Yea, from the table of my memory I'll wipe away all trivial fond records."
Besides demonstrating the techniques of clear, efficient coding, Knuth has sought to bring a deeper sense of aesthetics to the discipline. "You try to consider that the program is an essay, a work of literature," he says. "I'm hoping someday that the Pulitzer Prize committee will agree." Prizes would be handed out for "best-written program," he says, only half-joking. Knuth himself has already collected numerous awards, including the National Medal of Science from then-President Jimmy Carter and Japan's prestigious Kyoto Prize.
And though it may take him another quarter century to complete his magnum opus, Knuth is already dreaming up projects to come -- including the computer-aided composition of an orchestral piece based on the Book of Revelation.
At Stanford, Knuth no longer teaches, though he occasionally lectures on whatever happens to interest him at the moment. (This fall will find him at the Massachusetts Institute of Technology, talking about "Things a Computer Scientist Rarely Talks About.") When he has the time, Knuth reads four-hand piano music with friends on his Bösendorfer grand, or plays the 16-rank organ that stands across from it in the music room of the Palo Alto, Calif., home he shares with his wife, Jill. Otherwise, his days are spent sifting through scientific journals, research papers and pages upon pages of notes for his next books. "I'm obsessively detail-oriented," Knuth says. An example: During the 10 years or more in which Knuth was occupied designing the TeX typesetting system and revising Volumes 1 through 3, he accumulated a 270-inch stack of such correspondence. Twenty-two and a half feet of heady research may be daunting, but more revealing is the fact that Knuth actually measured it.
Though the world of programming may have little time these days for Knuth's rigorous analytical style and painstaking attention to low-level detail, his work remains an indispensable contribution to the body of knowledge that is computer science. He will perhaps one day be remembered as programming's Dr. Johnson, but the label would do him a disservice, for Knuth's ideas of elegance can be applied to more disciplines than simply the digital realm. Knuth hesitates at this suggestion, then demurs: "Everyday life is like programming, I guess," he says. "If you love something you can put beauty into it."
Now intent on completing his scriptures, the 61-year-old Knuth (ka-NOOTH) leads what he calls a hermit-like existence (with his wife) in the hills surrounding the university, having taken early retirement from teaching. He has unplugged his personal e-mail account, posting a Web page (www-cs-faculty.stanford.edu/~knuth/) to keep the software multitudes at bay by answering frequently asked questions such as, "When is Volume 4 coming out?"
About once a month during the academic year, Knuth comes down from the heights to a basement lecture room in the Gates Computer Science Building at Stanford to deliver one of his "Computer Musings" lectures, usually about some aspect of his current work on The Art of Computer Programming. These talks draw computer science students, visiting professors, software engineers from nearby companies and an occasional CEO. On a balmy day earlier this year, the topic and listeners are different. To celebrate the publication of the third volume of his collected papers, Digital Typography, the associates of the Stanford University Libraries have invited an audience of fans of the printed word to hear Knuth talk about creating the TeX system for scientific and mathematical publication. Wearing a black T-shirt over a long-sleeve black shirt, his bald pate glistening in the overhead lights, he appears suitably monkish before about 70 acolytes and colleagues.
Hesitatingly, his words fighting his innate Lutheran modesty, he begins: "My main life's work and the reason that I started this whole project is to write a series of books called The Art of Computer Programming -- for which I hope to live another 20 years and finish the project I began in 1962. Unfortunately, computer programming has grown over the years and so I've had to write a little more than I thought when I sketched it out." The faithful laugh knowingly.
Knuth relates his detour into digital typography during the 1970s. This was a time of enormous change in the typesetting industry, as computer systems replaced the hot type that had been used since the day of Gutenberg. Computer typography was less expensive, but also less esthetically pleasing -- especially for complex mathematical notation. Recalls Knuth: "As printing technology changed, the more important commercial activities were treated first and mathematicians came last. So our books and journals started to look very bad. I couldn't stand to write books that weren't going to look good."
Knuth took it upon himself to write every line of code for software that yielded beautiful typography. He drew the name of his typesetting program from the Greek word for art -- the letters are tau epsilon chi (it rhymes with "blecch"). Says Knuth: "Well over 90 percent of all books on mathematics and physics" are typeset with TeX and with its companion software, Metafont, a tool Knuth developed to design pleasing type fonts.
He is quick to acknowledge the contribution of the type designers, punch cutters, typographers, book historians and scholars he gathered at Stanford while developing TeX. Some are in the audience. He tells them: "TeX is what we now call open-system software -- anybody around the world can use it free of charge. Because of this, we had thousands of people around the world helping us find all the mistakes. I think it's probably the most reliable computer program of its size ever."
Anyone who doubts this claim by the decidedly unboastful Knuth can find confirmation from Guy Steele, one of TeX's first users and now a distinguished engineer at Sun Microsystems. TeX, says Steele, was one of the first large programs whose source code was published openly. Steele says Knuth's publication of the TeX code in a book, along with full comments, made it so that "anyone could understand how it works and offer bug fixes." With academe's top scientists and mathematicians as beta-testers, an extraordinary quality control team helped perfect TeX. (The TeX development effort was a model for today's open-source software movement, which has given the world Linux -- an operating system that is beginning to compete with Microsoft Windows.)
Perfectibility is a major preoccupation of Knuth's. The only e-mail address Knuth maintains gathers reports of errata from readers of his books, offering $2.56 for each previously unreported error. (The amount is an inside joke: 256 equals 2 to the 8th power -- the number of values a byte can represent.) Knuth's reward checks are among computerdom's most prized trophies; few are actually cashed.
He takes this error business very seriously. Engraved in the entryway to his home are the words of Danish poet Piet Hein:
The road to wisdom?
Well, it's plain
and simple to express:
Err
and err
and err again
but less
and less
and less.
In a variation on this theme of perfectibility, Knuth's contribution to computer science theory in the pages of The Art of Computer Programming has been his rigorous analysis of algorithms. Using methods in his books, the running time of a program's operations can be translated into equations and analyzed to determine whether the underlying algorithm is optimal. Improving a program then becomes a question of finding algorithms with the most desirable attributes. Not that theoretical proofs can replace actually running software on a computer. In an often-cited remark he mentions on his Web page, he once warned a colleague: "Beware of the above code; I have only proved it correct, not tried it."
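A minimal example of the flavor of such analysis (this toy is ours, not from TAOCP; linear search is simply the easiest algorithm to count this way): instrument an algorithm to count the operations it performs, then check the count against a closed-form equation.

```python
def linear_search(xs, target):
    """Return (index, comparison_count) -- instrumented to count the key
    comparisons, the quantity that this style of analysis turns into equations."""
    comps = 0
    for i, x in enumerate(xs):
        comps += 1
        if x == target:
            return i, comps
    return None, comps

# For a successful search among n distinct keys, each equally likely,
# the average number of comparisons is (n + 1) / 2.
n = 100
xs = list(range(n))
avg = sum(linear_search(xs, t)[1] for t in xs) / n
```

Matching the measured average against the derived formula is exactly the two-sided discipline the "proved it correct, not tried it" quip is about: the equation predicts, the run confirms.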
In Knuth's Stanford talk, perfectibility was again a theme. He followed the pages in his volume on Digital Typography beyond its introductory chapters to the longest section in the book, which attacks a crucial problem in typography. He calls his listeners' attention to "one of the main technical tricks in the TeX system: the question of how to break up paragraphs so that the lines are approximately equal and good."
Poor spacing between words and ugly choices for line breaks had been among the major computer typography gaffes that launched Knuth on his TeX crusade. Odd word chasms, ladders of hyphens, and orphaned bits of text resulted from the rigid algorithms used to program line breaks without regard for visual elegance. Knuth's solution: have the computer use trial-and-error methods to test how each paragraph of text can best be broken up. Instead of "greedy" algorithms packing in the most words on a line -- standard in computer typography before and after TeX -- Knuth's computation-intensive method evaluates beauty.
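The contrast can be sketched with a toy dynamic program (character-width counting only; real TeX weighs glue, demerits, and hyphenation, so this is just the skeleton of the idea): instead of greedily stuffing each line, minimize the total squared leftover space across the whole paragraph.

```python
def break_lines(words, width):
    """Break words into lines of at most `width` characters, minimizing the
    sum of squared leftover spaces over all lines but the last -- a toy
    whole-paragraph optimizer in the spirit of TeX's line breaker."""
    n = len(words)
    INF = float("inf")

    def slack(i, j):
        """Leftover space if words[i:j] share one line (None if they don't fit)."""
        s = width - (sum(len(w) for w in words[i:j]) + (j - i - 1))
        return s if s >= 0 else None

    best = [0.0] + [INF] * n   # best[j]: minimal cost to set words[:j]
    brk = [0] * (n + 1)        # brk[j]: where the last line of words[:j] starts
    for j in range(1, n + 1):
        for i in range(j):
            s = slack(i, j)
            if s is None:
                continue
            cost = best[i] + (0 if j == n else s * s)  # last line is free
            if cost < best[j]:
                best[j], brk[j] = cost, i
    lines, j = [], n
    while j > 0:               # walk the chosen break points backwards
        i = brk[j]
        lines.append(" ".join(words[i:j]))
        j = i
    return lines[::-1]
```

At width 6, a greedy filler sets "aaa bb cc ddddd" as "aaa bb" / "cc" / "ddddd", leaving an ugly four-space gap on the middle line; the optimizer accepts a small gap on the first line to get "aaa" / "bb cc" / "ddddd" instead, spending extra computation to buy evenness.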
Knuth seems born to the task of promoting beauty on the printed page - via computational methods. "I had a love of books from the beginning," he tells his audience. "In my mother's collection, we found the first alphabet book I had. I had taken the letters and counted all the serifs." He is proud of his early literacy, telling a writer that he was the youngest member of the Book Worm Club at the Milwaukee Public Library. His interest in typographic reproduction also came early in life. One of his earliest memories of pre-desktop publishing was helping his father, Ervin, with the mimeograph stencils for printing the church newsletter in the basement. Like his father's newsletter, TeX was meant to be a homebrew project, on a manageable scale. "The original intent was that it would be for me and my secretary," he tells TR in an interview in his home's second-floor study. Leaning back in the black lounge chair, Knuth acknowledges that the long journey into TeX was intended to be a quick side trip: "I was going to finish it in a year."
Events took a different turn. In 1978, Sun Microsystems' Guy Steele, then an MIT grad student visiting Stanford, translated TeX for use on MIT's mainframe computer. Suddenly, Knuth recalls, "I had 10 users, then 100. Each time it went through different levels of error. In between the 1,000th and 10,000th user, I tore up the code and started over." Knuth says he realized then that TeX wasn't just a digression; it was itself part of the vision. "I saw that this fulfilled a need in the world and so I better do it right."
A key turning point in the spread of TeX was a lecture Knuth gave before the American Mathematical Society (AMS). Barbara Beeton, a staff specialist in composition systems for AMS and a longtime official of the Portland, Ore.-based TeX Users Group, remembers the occasion: "He was invited to deliver the Josiah Willard Gibbs Lecture. Albert Einstein and John von Neumann had been among the previous speakers. Knuth talked about his new typesetting system for the first time in public." Knuth was preaching to the choir; the assembled mathematicians were familiar with how printing quality had declined. Adds Beeton: "TeX was the first composition system meant to be used by the author of an article or book" as opposed to a publishing house. Soon after, AMS became the first institutional user of TeX, employing Knuth's system to publish all of its documents and journals.
As word spread and more users took advantage of his free software (written for an academic mainframe computer but soon made available for PCs), Knuth found himself studying the history of printing to find solutions for narrow applications. Often as not, his research proved fruitless and he would have to come up with his own answer. For ceremonial invitations, he created new fonts; for musical typesetting he solved difficult alignment problems. "I had so many users," he recalls. "From wedding invitations and programs for the local symphonic orchestra to computer programs."
For nearly nine years, Knuth's foray into typography occupied him full time, pulling him away from work on the programming book that he considered his true calling. "I had to think of the endgame," he says. "How could I responsibly finish TeX and say: This is not going to change anymore? I had to work out a four-year strategy to extricate myself" and return to The Art of Computer Programming.
Knuth's solution: with the release of TeX version 3.0 in 1990, he declared his work complete. Disciples will have to maintain the system. Knuth says he will limit his work to repairing the rare bugs brought to his attention; with each fix he appends one more digit to the version number, so that it converges to pi (the current version is 3.14159).
One result of Knuth's decision to stop making major changes to TeX is that the TeX file format has remained unchanged. "It's the only software where you can take the file for your paper from 1985 and not have to convert it to print out the same way today," notes David Fuchs, a senior researcher with Liberate Technologies (formerly Network Computer Inc.), who was a grad student at Stanford during the development of TeX. Fuchs estimates that there are 1 million TeX users worldwide; many employ special-purpose packages built around the TeX "kernel," such as LaTeX (a command-oriented macro language) and TeXDoc (optimized for software documentation).
"On the downside, TeX is limited in its appeal because it's not WYSIWYG," Fuchs admits, employing the acronym for "what you see is what you get," the standard term for text-processing software that displays formatting on screen as it will appear on the printed page. Rather than offering real-time onscreen interactivity, TeX requires a markup language typed into a document and interpreted by the computer; you see what you get only after it is in print. Despite its unintuitive user interface, TeX has developed a dedicated core of production professionals who will accept no substitute. "Why would anyone want anything else?" asks Paul Anagnostopoulos, a Carlisle, Mass.-based publishers' consultant and author of TeX-based software for book composition. "A lot of people don't care about WYSIWYG."
Opus in Progress
Spending nine years instead of one to create TeX is the same kind of epic miscalculation that led Knuth to the monumental scale of The Art of Computer Programming. After earning his undergraduate degree at Case Institute (now Case Western Reserve), he was studying for his PhD and teaching at the California Institute of Technology in 1962 when he was contracted by textbook publisher Addison-Wesley to write a volume on computer compilers. (Compilers are special programs that convert the text typed in by programmers into instructions in a computer's native binary language.)
In his book-lined study, Knuth recounts the history of the project. From 1962 to 1966, he wrote the first draft in pencil. The manuscript was 3,000 pages long. "I was thinking it was one volume of maybe 600 pages. I just figured type in books was smaller than my handwriting. Then I typed up chapter one and by itself it was 450 pages. I sent it to the publisher and they said: Don, do you have any idea how long your book will be?"
Faced with such an unwieldy manuscript, many publishers would have dumped the project. Instead, Addison-Wesley worked out a publication schedule for what could eventually stretch out to seven volumes. Volume 4 is supposed to be ready in 2004 and Volume 5 by 2009. Then Knuth may finish Volumes 6 and 7-if what he has to say on his chosen topics is still instructive. Peter Gordon, publishing partner at Addison-Wesley and Donald Knuth's editor for the last 20 years, explains that the success of the first three volumes of The Art of Computer Programming has allowed the publisher to build its entire computer science line around Knuth's work. "Don has his own life plan and his own sense of timing," he notes. "He's such a creative and gifted author, the best any editor can do is stay out of his way and let him follow his plan."
It helps that the book continues to draw praise from other seers in the digital realm. In his syndicated newspaper column, Bill Gates once responded to a reader: "If you think you're a really good programmer, or if you want to challenge your knowledge, read The Art of Computer Programming, by Donald Knuth." Gates described his own encounter with the book: "It took incredible discipline, and several months, for me to read it. I studied 20 pages, put it away for a week, and came back for another 20 pages. If somebody is so brash that they think they know everything, Knuth will help them understand that the world is deep and complicated. If you can read the whole thing, send me a resume."
What sustains Knuth through his epic project is his fundamental love of the subject. "People who work on the analysis of algorithms have double happiness," he says, sounding Yoda-like. "You are happy when you solve a problem and again when people smile using your solution in their software."
Before he can rest in the promised land, Knuth faces one last mountain. He must redesign the generalized computer used in his book for programming examples and exercises from a 50-year-old von Neumann-style machine with inefficient commands to a more modern RISC (reduced instruction set computer) system permitting faster operation. (Intel processors in most PCs are of the older variety; PowerPC chips in recent Macintosh models are RISC.) "I'm trying to design it so it's 10 years ahead of its time," says Knuth. "I've studied all the machines we have now and tried to take their nicest features and put them all together." This super RISC machine, which he calls MMIX, is essentially a teaching concept. But he says he "would love to see it built. I'm spending a lot of time documenting it so someone could build it. The design will be in the public domain." In the midst of his "Computer Musings" series of introductory talks on MMIX, Knuth is mere months away from completing this phase of his work.
And then? "I start charging away on Volume 4 at top speed. I can write about one publishable page a day," he says. On another book he once wrote at a rate of two pages a day "but that was too much. I couldn't be a good husband and a good father and that was not good. So, I'm just promising 250 pages a year for Volume 4." For devotees of Knuth's software Bible, Bill Gates included, those pages can't come soon enough.
Donald Knuth is updating all three volumes of his definitive series, The Art of Computer Programming, one of the most well-known works in computer science. Innovations interviewed him to find out more about how this came about.
Innovations: What do you see as the most important developments in programming since starting The Art of Computer Programming (TAOCP)?
Knuth: The most important developments were surely the ideas of structured programming (1970s) and literate programming (1980s). But I'm a fan of all developments, not just the most important ones; and, of course, we now know a huge number of new techniques, especially with respect to Volume 4 [forthcoming]. When I began writing TAOCP in 1962, almost none of the ideas now in Volume 4 had been discovered; almost nobody would even have thought of writing a book about Combinatorial Algorithms. But during the 1970s, more than half of all papers on Computer Science were about that subject.
Innovations: Where are those developments reflected in these new editions?
Knuth: I've gone over every page and updated material, when I think the subject has "converged" to a form that people will find important not only today but also 50 or 100 years from now. Such changes appear throughout the books, most notably in the chapter on random numbers. On the other hand, many topics in Volumes 1, 2, and 3 are still evolving rapidly. In such cases, I have not made a major update; I've simply added a little icon to the page, meaning "sorry, still under construction"! I will do a final update to those books after I finish Volumes 4 and 5; otherwise I'd have to rewrite them again, and I would never finish. It's more important for me to get Volume 4 done than to keep Volumes 1, 2, and 3 strictly up to the minute.
The new editions have hundreds of new exercises and answers to exercises that I know will always be instructive; I've been noting these things in my own copies of the books since the 1970s, and I'm making them public now.
Innovations: Why revise Volumes 1, 2, & 3 before publishing Volume 4?
Knuth: Because they haven't been revised for a long time and I have a megabyte of updates that I'm sure people will want to know about. Silvio Levy has made it possible for me to do this without taking much time away from Volume 4, because he's doing the hard work of converting the old books to TeX and merging everything together. Another friend, Jeff Oldham, is putting all the illustrations into METAPOST form, so that they will be improved too.
And there's another significant reason: By going through Volumes 1, 2, and 3 in this way, I'm able to be sure that Volume 4 matches them well, in spite of the fact that I took 13 years off to work on TeX and METAFONT and Concrete Mathematics and some other books that had to be written in the 80s.
Innovations: Do you still see this as a seven volume set?
Knuth: Volume 4 will be split into three subvolumes: 4A, 4B, 4C. I have always regarded the subject matter of Volumes 1-5 as the "basic core" of computer methods for sequential machines. These volumes deal with the algorithms that are used for hundreds of different applications in all branches of computer science. Therefore, after I finish Volumes 1-5, I plan to put out a single-volume "reader's digest" version that summarizes their highlights.
By contrast, I've always viewed Volumes 6 and 7 as specialized offshoots of the inner core. Volume 6, on the theory of context-free languages, and Volume 7, on the writing of compilers, deal with very important areas but they are not as central as the algorithms I'm dealing with in Volumes 1-5.
When I've finished writing the core volumes-and please notice that there will be seven of them, since Volume 4 will actually split into Volumes 4A, 4B, and 4C-I will of course go next to Volumes 6 and 7, provided that they still need to be written. I've been saving a lot of good stuff for those books, and my files are full of things that I look forward to including in them some day. But that will be 15 or 20 years from now. If I discover that most of what I want to say has already been said by somebody else, then I'll declare my series finished and I'll happily declare my main life's work to be finished. Then I'll go and write the music that I've been dreaming about all these years.
Innovations: Can you tell us about the process by which Volume 4 will eventually be published?
Knuth: I'll publish so-called fascicles, about 128 pages each, about twice a year. These will be "beta-test" versions of the eventual book; they will represent my best shot, but I'm sure that readers will be able to help me make many improvements in the final edition. The subject is so vast that I cannot hope to get everything right on my first try. Charles Dickens did a similar thing with his novels: He published fascicles containing Chapters 1 and 2 before he had any idea how the stories were going to end. That way he could get the best reader feedback.
I view my role as trying to be a spokesman for many people who are developing computer science; I try to present their discoveries in a uniform way that a programmer-on-the-street who cannot read advanced scientific jargon will be able to understand. I've spent 35 years gathering a database of materials and notes about these topics, and I think my point of view (although biased) will be helpful to many readers; that's why I'm hoping to have readers participate and have adopted a fascicle-preview strategy.
Innovations: What inspired you to start this project?
Knuth: There was no reliable guide to the literature in 1962. I was the only person I knew who had read most of the journals yet had not discovered very many things myself; and I liked to write. Thus I thought I could give a more balanced and unbiased account than the people who had made the most important discoveries. Of course, after I got going, I discovered a few things of my own, so by now I'm as biased as anybody. But you asked about what was the inspiration in 1962. And the answer is: There was a huge need for a book like The Art of Computer Programming, but everybody who was capable of writing it was unfortunately likely to give a terribly slanted account!
Innovations: What do you see as the biggest challenge facing programmers today?
Knuth: The hardest thing is to go to sleep at night, when there are so many urgent things needing to be done. A huge gap exists between what we know is possible with today's machines and what we have so far been able to finish.
Innovations: Who have been the biggest influences on your computing career?
Knuth: Of course I have been tremendously influenced by giants in the field, such as Dijkstra, Flajolet, Karp, Schönhage, Tarjan, and Yao, as well as by great mathematicians like de Bruijn. But Computer Science, like all sciences, grows chiefly by thousands of little steps rather than by a few giant steps. Therefore I am convinced that the Great Edifice of Computer Science is built primarily from the important foundation stones contributed by thousands of people who will probably never be members of the National Academy of Sciences. It has been my great pleasure to learn from them and to try to put their wonderful discoveries into a coherent framework. Some great computer scientists never write papers; I learn about their work either in conversation or by reading their programs. If only a few "big influences" had been behind my books, I would have finished writing them many years ago.
Innovations: What do you think about the whole language war with C++, Java, etc.?
Knuth: So what else is new? There have been such battles ever since I learned to program as a college freshman in 1957. Languages come and go much faster than I can write books. That's why I chose to explain algorithms in English, not in the language of the moment. Readers learn a lot by converting from English to their favorite language; The Art of Computer Programming emphasizes things that are independent of languages. No matter what programming language is hot, you need good ideas to express in those languages. If you want your algorithms to be prepackaged, fine, but then my books aren't written for you.
Actually I'm extremely glad to see the continuing development of languages, not only because programming languages are getting better and better in important ways, but also because such work soaks up a lot of people's energy-therefore computer scientists don't write papers that I would otherwise have to read, and I can get my books finished a lot sooner.
Innovations: Other than working on the new editions of The Art of Computer Programming, what takes up your time these days?
Knuth: I happily swim, play keyboard instruments, and accept prizes.
Innovations: What was your first reaction to the news of being selected a recipient of the Kyoto Prize?
Knuth: This was a wonderful climax for my career, although I still think I'm able to do better and better work every year. It reminds me that some day I'll begin to "go downhill," so I'd better get Volume 4 done soon.
- What is your programming language of choice? CWEB
- What is your favorite operating system? LINUX
- Do you have a hero or role model? MANY
- What is your favorite kind of music? CHAMBER
- What is your favorite news group? NONE
- What is your favorite web page (besides geekchic!)? LIBRARIES
- What sports do you enjoy? BASEBALL
- What kind of car do you drive? VOLVO
- What hobbies do you enjoy outside of work? PIPE ORGAN
- What is your favorite book (or author)? AGATHA CHRISTIE
- What is your favorite movie? SILVER STREAK
- What sort of clothing do you usually wear to work? T-SHIRT WITH THEME OF THE DAY
- What is your favorite food? CHOCOLATE
The following "philosophical warm-up" is taken from a recent book by Donald E. Knuth. Knuth, or " DEK" as he is often called on the net, wrote the original WEB (for the Pascal language), TeX and Metafont (both written in WEB), and is still actively supporting CWEB.
Why I Must Write Readable Programs
"Computer programs are fun to write, and well-written computer programs are fun to read. One of life's greatest pleasures can be the composition of a computer program that you know will be a pleasure for other people to read, and for yourself to read.
Computer programs can also do useful work. One of life's greatest sources of satisfaction is the knowledge that something you have created is contributing to the progress or welfare of society.
Some people even get paid for writing computer programs! Programming can therefore be triply rewarding---on aesthetic, humanitarian, and economic grounds.
Of course I don't mean to imply that programming is easy. Easy things are often amusing and relaxing, but their value soon fades. Greater pleasure, deeper satisfaction, and higher wages are associated with genuine accomplishments, with the successful fulfillment of a challenging task.
I have spent a good deal of my life trying to help make computer programming as rewarding as possible, in all three senses. At first, I thought programming was primarily analogous to musical composition---to the creation of intricate patterns, which are meant to be performed. But lately I have come to realize that a far better analogy is available: Programming is best regarded as the process of creating works of literature, which are meant to be read.
Literature of the program genre is performable by machines, but that is not its main purpose. The computer programs that are truly beautiful, useful, and profitable must be readable by people. So we ought to address them to people, not to machines. All of the major problems associated with computer programming---issues of reliability, portability, learnability, maintainability, and efficiency---are ameliorated when programs and their dialogs with users become more literate.
Literate programming is still a fairly new concept, still in its infancy, still undergoing much-needed experimentation. Many people have contributed important ideas and independent approaches to the creation of systems that improve on WEB in various ways. In this book I describe the techniques that have worked best for me, but I have never imagined that any of my ideas would be the "last word" in any sense. I am sure that people with differing tastes will prefer a system that differs from my own initial attempts.
The time is now ripe for second-generation systems that integrate literate programming into complete programming environments. "
-- Donald E. Knuth, "Literate Programming", 1992.
Amazon.com: Donald Knuth: A life's work in the art of programming
Amazon.com interviews legendary computer scientist Donald Knuth as the first volume of his classic work, The Art of Computer Programming, appears in a major new edition.
Tom Mace, Amazon.com: Your four-volume series, The Art of Computer Programming, has been a classic of computer science literature for 25 years. We'd like to congratulate you on the publication of The Art of Computer Programming: Volume 1, Fundamental Algorithms, the first volume in a complete new edition of this work. What's new in this release? What differences will programmers find between this edition and the copies they've thumbed and studied for years?
Donald Knuth: You won't find dramatic differences--the revisions were more a matter of refinement than radical revision. But I'd say that the book is better in every way. After 25 years of not having changed the old edition, I've tried to give the new one the perspective of maturity while keeping some of my more youthful exuberance. If you compare random pages from the old edition to the new, you'll probably find two to three dozen things on each page that make the new one nicer. You'll also find hundreds of additional exercises and you'll discover that the answers to the exercises are better.
Amazon.com: What will someone new to your work find in this revised copy and how can it change their approach to programming?
Knuth: This is a book for those who take programming seriously, the one person in 50 who has this strange way of thinking that makes a programmer. Volume 1 is the introduction to the whole series. It sets the scene, defines algorithms, gives the basis of data structures, and explains the basic mathematical techniques and paradigms that you use in the rest of the work. Then I present a hypothetical computer, MIX, whose capabilities are used to ground the whole thing in reality. I also bring in the history of basic programming techniques and the approach to representing data--how we take things from the outside world and deal with them inside the computer. There are many ways to organize data in the machine, and all the basic methods for this are covered with lots of examples. My book also gives the little tricks that people have invented over the years, which I've tried to present in a way that's as jargon-free as possible. It's not easy to read my book, but it's a lot easier than many others written by specialists. If I describe something in mathematics, for example, I try not to use terminology that only math graduate students can understand. Most of the literature is impenetrable to someone who hasn't had advanced training, so I've tried to boil down the concepts to where they're as simple as they can be.
Amazon.com: Can you talk about your working methods and how you approached the revision?
Knuth: I've been accumulating corrections and emendations in my own personal copies of the books for 25 years, and people have written to me and said, "Don, do you know that there's a typo on page such and such?" I always knew about these mistakes, and I wasn't happy to see thousands of copies printed every year having these mistakes in them. But I also knew that correcting them was a lot of work, as there are many, many cross-references. And my biggest project was to work on the volumes that haven't yet been finished. So, my original plan was simply to make an errata list for volumes 1, 2, and 3 that I could put up on the Web.

I created a big database of corrections--there were quite a lot, about 200 pages for each volume--and posted them. When I showed this list of changes to a meeting of the TeX user group [TeX is a computer typesetting system developed by Mr. Knuth. Ed.], one of the guys in the audience volunteered for the hard work of putting the revisions in electronic form. He wound up creating many megabytes of data from which to generate the book. All I needed to do was double-check the corrections. All in all, several volunteers spent a couple of years of their lives doing the detail work and getting the revisions ready.

In January of this year, I received volumes 1, 2, and 3 in electronic form, and used them to generate 2,000 laser-printed pages incorporating my hundreds of pages of errata, which looked something like The Art of Computer Programming. When a book exists as a computer file, you have a different feeling about it because you know it's something that you can easily improve. This is my life's work after all--I've spent 35 years on it--and I saw many, many places where I could make it better. So I spent the last seven months making this book into something special. Of course, I'm not unbiased, but in my humble opinion, I've gotten close to something that I can be really proud of.
It's a much better book than I would have dared to attempt with the old method of correcting galleys by hand.
Amazon.com: How is your work coming on the remaining volumes?
Knuth: At the beginning of the year, I thought that I'd have the first three ready by now, but as we speak, I'm getting well along with the significant changes to Volume 2, Seminumerical Algorithms, and it should only be another three months for Volume 3, Sorting and Searching.
Amazon.com: Has the programmer's art remained fundamentally unchanged over the past 25 years?
Knuth: It's changed in several ways but the basic things like subroutines and techniques of representing data in certain ways will last. I'm trying to capture things now that will be important 50 years from now as they are today. I want to distill them out, explain them as well as possible, and give people something that is permanent.
Amazon.com: What do you see as the most interesting advance in programming since you published the first edition?
Knuth: It's what I call literate programming, a technique for writing, documenting, and maintaining programs using a high-level language combined with a written language like English. This is discussed in my book Literate Programming.
Amazon.com: Have there been any big wrong turns in computing in the last 25 years? Do you regret any of the directions that it has taken as a lost opportunity?
Knuth: The thing that I regret most is that people are trying to patent algorithms. This means that it is becoming impossible to write software unless you're a big company. This climate is going to stifle progress. If such attitudes had existed back then, I would never have been able to write this book.
Amazon.com: Thank you very much for your time.
May 20, 2014 | InformIT
To celebrate the publication of the eBooks of The Art of Computer Programming (TAOCP), we asked several computer scientists, contemporaries, colleagues, and well-wishers to pose one question each to author Donald E. Knuth. Here are his answers.
Check informit.com/knuth throughout 2014 to purchase Vol 3-4A eBooks as they become available. If you want email notifications, send an email to email@example.com.
1. Jon Bentley, researcher: What a treat! The last time I had an opportunity like this was at the end of your data structures class at Stanford in June, 1974. On the final day, you opened the floor so that we could ask any question on any topic, barring only politics and religion. I still vividly remember one question that was asked on that day: "Among all the programs you've written, of which one are you most proud?"
Your answer (as I approximately recall it, four decades later) described a compiler that you wrote for a minicomputer with 1024 available bytes of memory. Your first draft was 1029 bytes long, but you eventually had it up and running and debugged at 1023 bytes. You said that you were particularly proud of cramming so much functionality into so little memory.
My query today is a slight variant on that venerable question. Of all the programs that you've written, what are some of which you are most proud, and why?
Don Knuth: I'd like to ask you the same! But that's something like asking parents to name their favorite children.
Of course I'm proud of TeX and METAFONT, because they seem to have helped to change the world, and because they led to many friendships. Furthermore they've made these eBooks possible: I'm enormously happy that the work I did more than 30 years ago has miraculously survived many changes of technology, and that the 3,000 pages of TAOCP now look so great on a little tablet, even after zooming.
While I was preparing for Volume 4 of TAOCP in the 90s, I wrote several dozen short routines using what you and I know as "literate programming." Those little essays have been packaged into The Stanford GraphBase (1994), and I still enjoy using and modifying them. My favorite is the implementation of Tarjan's beautiful algorithm for strong components, which appears on pages 512–519 of that book.
I have to admit some pride also in the implementation of IEEE floating-point arithmetic that appears in my book MMIXware (1999), as well as that book's metasimulator for MMIX, in which I explain many principles of advanced pipelined computers from the ground up.
Literate programming continues to be one of the greatest joys of my life. In fact, I find myself writing roughly two programs per week, on average, both large and small, as I draft new material for the next volumes of TAOCP.
2. Dave Walden, TeX Users Group: Might you publish the original 3,000-page version of TAOCP (before the decision to change it into seven volumes), as a historical artifact of your view of the state of the art of algorithms and their analysis circa 1965? I think lots of people would like to see this.
Don Knuth: Scholars can look at the handwritten pages that led to Volumes 1–3 by going to the Stanford Archives, and all of the remaining pages will be deposited there eventually. I see little value in making those drafts more generally available, although some of the material about baseball that I decided not to use is pretty cool. Archives from the real pioneers of computer science, who wrote in the 40s and 50s, should be published first.
I do try to retain the youthful style of the original, in the pages that I write today, except where my first draft was embarrassingly naïve or corny. I've also learned when to say "that" instead of "which," thanks in part to Guy Steele's tutelage.
3. Charles Leiserson, MIT: TAOCP shows a great love for computer science, and in particular, for algorithms and discrete mathematics. But love is not always easy. When writing this series, when did you find yourself reaching deepest into your emotional reservoir to overcome a difficult challenge to your vision?
Don Knuth: Again, Charles, I'd like to ask you exactly the same question!
For me, I guess, the hardest thing has always been to figure out what to cut. And I obviously haven't been very successful at that, in spite of much rewriting.
The most difficult technical challenge was to write the metasimulator for MMIX. I needed to do that behind the scenes, in order to shape what actually appears in the books, and it was surely the toughest programming task that I've ever faced. Without the methodology of literate programming, I don't think I could have finished that job successfully.
Many of the "starred" mathematical sections also stretched me pretty far. Overall, however, after working on TAOCP for more than fifty years, I can't think of any aspect of the activity where the effort of writing wasn't amply repaid by what I learned while doing it.
4. Dennis Shasha, NYU: How does a beautiful algorithm compare to a beautiful theorem? In other words, what would be your criteria of beauty for each?
Don Knuth: Beauty has many aspects, of course, and is in the eye of the beholder. Some theorems and algorithms are beautiful to me because they have many different applications; some because they do powerful things with severely limited resources; some because they involve aesthetically pleasing patterns; some because they have a poetic purity of concept.
For example, I mentioned Tarjan's algorithm for strong components. The data structures that he devised for this problem fit together in an amazingly beautiful way, so that the quantities you need to look at while exploring a directed graph are always magically at your fingertips. And his algorithm also does topological sorting as a byproduct.
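To give a flavor of the algorithm being praised here, a minimal sketch of Tarjan's strong-components method in Python (an illustrative reimplementation, not the Stanford GraphBase code; the dict-of-adjacency-lists graph encoding is my assumption):

```python
def strong_components(graph):
    """Tarjan's algorithm: one depth-first search over `graph`,
    a dict mapping each vertex to a list of its successors."""
    index = {}          # discovery order of each vertex
    lowlink = {}        # smallest index reachable from the vertex's DFS subtree
    stack, on_stack = [], set()
    components = []
    counter = 0

    def dfs(v):
        nonlocal counter
        index[v] = lowlink[v] = counter
        counter += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                dfs(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:          # v is the root of a strong component
            component = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.append(w)
                if w == v:
                    break
            components.append(component)

    for v in list(graph):
        if v not in index:
            dfs(v)
    return components
```

The components come out in reverse topological order of the condensation, so reversing the returned list yields the topological sorting mentioned as a byproduct.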
It's even possible sometimes to prove a beautiful theorem by exhibiting a beautiful algorithm. Look, for instance, at Theorem 5.1.4D and/or Corollary 7H in TAOCP.
5. Mark Taub, Pearson: Does the emergence of "apps" (small, single-function, networked programs) as the dominant programming paradigm today impact your plans in any way for future material in TAOCP?
Don Knuth: People who write apps use the ideas and paradigms that are already present in the first volumes. And apps make use of ever-growing program libraries, which are intimately related to TAOCP. Users of those libraries ought to know something about what goes on inside.
Future volumes will probably be even more "app-likable," because I've been collecting tons of fascinating games and puzzles that illustrate programming techniques in especially instructive and appealing ways.
6. Radia Perlman, Intel: (1) What is not in the books that you wish you'd included? (2) If you'd been born 200 years ago, what kind of career might you imagine you'd have had?
Don Knuth: (1) Essentially everything that I want to include is either already in the existing volumes or planned for the future ones. Volume 4B will begin with a few dozen pages that introduce certain newfangled mathematical techniques, which I didn't know about when I wrote the corresponding parts of Volume 1. (Those pages are now viewable from my website in beta-test form, under the name "mathematical preliminaries redux.") I plan to issue similar gap-filling "fascicles" when future volumes need to refer to recently invented material that ultimately belongs in Volume 3, say.
(2) Hey, what a fascinating question! I don't think anybody else has ever asked me that before.
If I'd been born in 1814, the truth is that I would almost certainly have had a very limited education, coupled with hardly any access to knowledge. My own male ancestors from that era were all employed as laborers, on farms that they didn't own, in what is now called northern Germany.
But I suppose you have a different question in mind. What if I had been one of the few people with a chance to get an advanced education, and who also had some flexibility to choose a career?
All my life I've wanted to be a teacher. In fact, when I was in first grade, I wanted to teach first grade; in second grade, I wanted to teach second; and so on. I ended up as a college teacher. Thus I suppose that I'd have been a teacher, if possible.
To continue this speculation, I have to explain about being a geek. Fred Gruenberger told me long ago that about 2% of all college students, in his experience, really resonated with computers in the way that he and I did. That number stuck in my mind, and over the years I was repeatedly able to confirm his empirical observations. For instance, I learned in 1977 that the University of Illinois had 11,000 grad students, of whom 220 were CS majors!
Thus I came to believe that a small percentage of the world's population has somehow acquired a peculiar way of thinking, which I happen to share, and that such people happened to discover each other's existence after computer science had acquired its name.
For simplicity, let me say that people like me are "geeks," and that geeks comprise about 2% of the world's population. I know of no explanation for the rapid rise of academic computer science departments (which went from zero to one at virtually every college and university between 1965 and 1975) except that they provided a long-needed home where geeks could work together. Similarly, I know of no good explanation for the failure of many unsuccessful software projects that I've witnessed over the years, except for the hypothesis that they were not entrusted to geeks.
So who were the geeks of the early 19th century? Beginning a little earlier than 1814, I'd maybe like to start with Abel (1802); but he's been pretty much claimed by the mathematicians. Jacobi (1804), Hamilton (1805), Kirkman (1806), De Morgan (1806), Liouville (1809), Kummer (1810), and China's Li Shanlan (1811) are next; I'm listing "mathematicians" whose writings speak rather directly to the geek in me. Then we get precisely to your time period, with Catalan (1814) and Sylvester (1814), Boole (1815), Weierstraß (1815), and Borchardt (1817). I would have enjoyed the company of all these people, and with luck I might have done similar things.
By the way, the first person in history whom I'd classify as "100% geek" was Alan Turing. Many of his predecessors had strong symptoms of our disease, but he was totally infected.
7. Tony Gaddis, author: Do you remember a specific moment when you discovered the joy of programming, and decided to make it your life's work?
Don Knuth: During the summer of 1957, between my freshman and sophomore years at Case Tech in Cleveland, I was allowed to spend all night with an IBM 650, and I was totally hooked.
But there was no question of viewing that as a "life's work," because I knew of nobody with such a career. Indeed, as mentioned above, my life's work was to be a teacher. I did write a compiler manual in 1958, which by chance was actually used as the textbook for one of my classes in 1959(!). Still, programming was for me primarily a hobby at first, after which it became a way to support myself while in grad school.
I saw no connection between computer programming and my intended career as a math professor until I met Bob Floyd late in 1962. I didn't foresee that computer science would ever be an academic discipline until I met George Forsythe in 1964.
8. Robert Sedgewick, Princeton: Don, I remember some years ago that you took the position that you weren't trying to reach everyone with your books, knowing that they would be particularly beneficial to people with a certain interest and aptitude who enjoy programming and exploring its relationship to mathematics. But lately I've been wondering about your current thoughts on this issue. It took a long time for society to realize the benefits of teaching everyone to read; now the question before us is whether everyone should learn to program. What do you think?
Don Knuth: I suppose all college professors think that their subject ought to be taught to everybody in the world. In this regard I can't help quoting from a wonderful paper that John Hammersley wrote in 1968:

Just for the fun of getting his reactions, I asked an eminent scholar of English Literature what educational benefits might lie in the study of goliardic verse, Erse curses, and runic erotica. 'A working background of goliardic verse would be more than helpful to anyone hoping to have some modest facility in his own mother tongue', he declared; and with that he warmed to his subject and to the poverties of unlettered science, so that it was some minutes before I could steer him back to the Erse curses, about which he seemed a good deal less enthusiastic. 'Really', he said, 'that sort of thing isn't my subject at all. Of course, I applaud breadth of vocabulary; and you never know when some seemingly useless piece of knowledge may not turn out to be of cardinal practical importance. I could certainly envisage a situation in which they might come in very handy indeed'. 'And runic erotica?' 'Not extant'. (Was it only my fancy that heard a note of faint regret in his reply?) Certainly the higher flights of scholarship can add savour; but does the man-in-the-street have the time and the pertinacity and the intellectual digestion for them?
Programming, of course, is not just an ordinary subject. It is intrinsically empowering, and applicable to many different kinds of knowledge. And I also know that you've been having enormous successes, at Princeton and online, teaching advanced concepts of programming to students from every discipline.
But your question asks about everybody. I still think many years will have to go by before I would recommend that my own highly intelligent wife, son, and daughter should learn to program, much less that everybody else I know should do so.
Nick Trefethen told me a few years back that he had just visited his son's high school in Oxford, which is one of the best anywhere, and learned that not a single student knew how to program! Britain is now beginning to change that, indeed at a more rapid pace than in America. Yet such a revolution almost surely needs to take place over a generation or more. Where are the teachers going to come from?
My own experience is with the subset of college students who are sufficiently interested in programming that they expect it to become an integral part of their life. TAOCP is essentially for specialists. I've primarily been writing it for geeks, not for a general audience, because somebody has to write books that aren't for dummies. (By a "dummy" I mean a smart non-geek. That's a much larger market, and very important; but it's not my target audience, and general education is not my forte.)
On the other hand, believe it or not, I try to explain everything in my books by imagining a non-specialist reader. My goal is to be jargon-free whenever possible; I especially try to avoid terms from higher mathematics that tend to frighten the programmer-on-the-street. Whenever possible I try to translate results from the theoretical literature into a language that high-school students could understand.
I know that my books still aren't terribly easy to fathom, even for geeks. But I could have made them much, much harder.
9. Barbara Steele: What was the conversion process, and what tools did you use, to convert your print books to eBooks?
Don Knuth: I knew that these volumes would not work especially well as eBooks unless they were converted by experts. Fortunately I received some prize money in 2011, which could be used to pay for professional help. Therefore I was able to achieve the kind of quality that I envisioned, without delaying my work on future volumes, by letting the staff at Mathematical Sciences Publishers in Berkeley (MSP) handle all of the difficult stuff.
My principal goal was to make the books easily searchable, and that's a much more challenging problem than it seems, if you want to do it right. Secondarily, I wanted to let readers easily click on the number of any exercise or equation or illustration or table or algorithm, etc., and to jump to that item; also to jump readily between an exercise and its answer.
The people at MSP wrote special software that converts my source text into suitable input to other software that creates pdf files. I don't know the details, except that they use "change files" analogous to those used in WEB and CWEB. I've checked the results pretty carefully, and I couldn't be more pleased. Moreover, they've designed things so that it won't be hard for me to make changes next year, as readers discover bugs in the present editions.
(My style of writing tends to maximize the number of opportunities to make mistakes, hence I would be fooling myself if I thought that the books were now perfect. Therefore it has always been important to keep future errata in mind. The production staff at Addison-Wesley has been consistently wonderful in the way they allow me to correct about fifty pages every year in each volume.)
10. Silvio Levy, MSP: Could you comment on the differences between the print, pdf, ePUB, etc., editions of TAOCP? What would you say is gained or lost with each?
Don Knuth: The printed versions weigh a lot more, but they don't need battery power or a tether to electricity. They are always there; I don't have to turn them on, and I can have them all open at once.
I can scribble in the margins (and elsewhere) of the print versions, and I can highlight text in different colors. Ten years from now I expect analogous features will be commonly available for eBooks.
I'm used to flipping pages and finding my way around a regular book, much more so than in an eBook; but my grandchildren might have the opposite reaction.
The great advantage of an eBook is the reader's ability to search exhaustively. What fun it is to look for all occurrences of a random word like 'game', or for a random word fragment like 'gam' or 'ame', and find lots of cool material that I don't recall having written. The search feature on these books works even better than I had a right to hope for.
The index in a printed book has the advantage of being more focused. But that index also appears in the eBook, and in the eBook you can even click in the index to get to the cited pages.
Today's eBook readers are often inconvenient for setting bookmarks and going back to where you were a couple of minutes ago, especially after you click on an Internet link and then want to go back to reading. But that software will surely improve, and so will today's electronic devices.
In the future I look forward to curated eBooks that have additional notes by experts, and possibly even graffiti in the style of Concrete Mathematics, somewhat analogous to the "director's comments" and other extras found on the DVDs for films. One could select different subsets of these comments when reading.
11. Peter Gordon, Addison-Wesley (retired): If the full range of today's eBook features and functionalities had been available when TAOCP was first published, would you have written those volumes very differently?
Don Knuth: Well, I don't think I would have gotten very far at all. I would have had to think about doing everything in color, and with interactive figures, tables, equations, and exercises. A single person cannot use the "full range" of features that eBooks potentially have.
But by limiting myself to what can be presented well in black-and-white type, on printed pages of a fixed size, I was fortunately able to complete 3,000 pages over a period of 50 years.
12. Udi Manber, Google: The early volumes of TAOCP established computer programming as computer science. They introduced the necessary rigor. This was at the time when computers were used mostly for numerical applications. Today, most applications are related to people: social interaction, search, entertainment, and so on. Rigor is rarely used in the development of these applications. Speed is not always the most important factor, and "correctness" is rarely even defined. Do you have any advice on how to develop a new computer science that can introduce rigor to these new applications?
Don Knuth: The numerical computations that were somewhat central when computer science was born are by no means gone; they continue to grow, year by year. Of course, they now represent a much smaller piece of the pie, but I don't believe in concentrating too much on the big pieces.
My work on METAFONT introduced me to applications where "correctness" cannot be defined. How do I know, for example, that my program for the letter A produces a correct image? I never will; and I've learned to live with that uncertainty. On the other hand, when I implemented the routines that interpret METAFONT specifications and draw the associated bitmaps, there was plenty of room for rigor. The algorithms that go into font rendering are among the most interesting I've ever seen.
As a user of products from Google and Adobe and other corporations, I know that a tremendous amount of rigor goes into the manipulation of map data, transportation data, pixel data, linguistic data, metadata, and so on. Furthermore, much of that processing is done with distributed and decentralized algorithms that require more rigor than anybody ever thought of in the 60s.
So I can't say that rigor has disappeared from the computer science scene. I do wish, however, that Google's and Adobe's and Apple's programmers would learn rigorously how to keep their systems from crashing my home computers, when I'm not using Linux.
In general I agree with you that there's no decrease in the need for rigor, rather an increase in the number of kinds of rigor that are important. The fact that correctness can't be defined on the "bottom line" should not lull people into thinking that there aren't intermediate levels within every nontrivial system where correctness is crucial. Robustness and quality are compromised by every weak link.
On the other hand, I certainly don't think that everything should be mathematized, nor that everything that involves computers is properly a subdiscipline of computer science. Many parts of important software systems do not require the special talents of geeks; quite the contrary. Ideally, many disciplines collaborate, because a wide variety of orthogonal skill sets is a principal reason why life is such a joy. Vive la différence.
Indeed, I myself follow the path of rigor only partway: Rarely do I ever give a formal proof that any of my programs are correct, once I've constructed an informal proof that convinces me. I have no real interest, for example, in defining exactly what it would mean for TeX to be correct, or for verifying formally that my implementation of that 550-page program is free of bugs. I know that anomalous results are possible when users try to specify pages that are a mile wide, or constants that involve a trillion zeros, etc. I've taken care to avoid catastrophic crashes, but I don't check every addition operation for possible overflow.
There's even a fundamental gap in the foundations of my main mathematical specialty, the analysis of algorithms. Consider, for example, a computer program that sorts a list of numbers into order. Thanks to the work of Floyd, Hoare, and others, we have formal definitions of semantics, and tools by which we can verify that sorting is indeed always achieved. My job is to go beyond correctness, to an analysis of such things as the program's running time: I write down a recurrence, say, which is supposed to represent the average number of comparisons made by that program on random input data. I'm 100% sure that my recurrence correctly describes the program's performance, and all of my colleagues agree with me that the recurrence is "obviously" valid. Yet I have no formal tools by which I can prove that my recurrence is right. I don't really understand my reasoning processes at all! My student Lyle Ramshaw began to create suitable foundations in his thesis (1979), but the problem seems inherently difficult. Nevertheless, I don't lose any sleep over this situation.
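To make the kind of recurrence described here concrete (quicksort is my choice of illustration; the interview names no particular sorting program), the average number of comparisons C(n) made by basic quicksort on a random permutation of distinct keys satisfies C(n) = n - 1 + (2/n)(C(0) + ... + C(n-1)). Even without formal foundations for the claim that the recurrence describes the program, one can check it exhaustively for small n:

```python
from fractions import Fraction
from itertools import permutations

def c_recurrence(n):
    """Average comparison count from the recurrence
    C(m) = m - 1 + (2/m) * (C(0) + ... + C(m-1)),  with C(0) = C(1) = 0."""
    C = [Fraction(0)] * (n + 1)
    for m in range(2, n + 1):
        C[m] = Fraction(m - 1) + Fraction(2, m) * sum(C[:m])
    return C[n]

def quicksort_comparisons(a):
    """Comparisons made by basic quicksort with the first element as pivot."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

def c_exhaustive(n):
    """Exact average of the comparison count over all n! input orders."""
    perms = list(permutations(range(n)))
    return Fraction(sum(quicksort_comparisons(list(p)) for p in perms), len(perms))
```

For small n the recurrence and the brute-force average over all n! permutations agree exactly; that agreement is empirical confirmation of the "obviously valid" recurrence, not a formal proof of it.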
13. Al Aho, Columbia: We all know that the Turing Machine is a universal model for sequential computation.
But let's consider reactive distributed systems that maintain an ongoing interaction with their environment: systems like the Internet, cloud computing, or even the human brain. Is there a universal model of computation for these kinds of systems?
Don Knuth: I'm not strong on logic, so TAOCP treads lightly on this sort of thing. The TAOCP model of computation, discussed on pages 4–8 of Volume 1, considers "reactive processes," a.k.a. "computational methods," which correspond to single processors. I've long planned to discuss recursive coroutines and other cooperative processes in Chapter 8, after I finish Chapter 7. The beautiful model of context-free parsing via semiautonomous agents, in Floyd's great survey paper of 1964, has strongly influenced my thinking in this regard.
I'd like to see extensions of the set-theoretic model of computation at the beginning of Volume 1 to the things you mention. They might well shed light on the subject.
But fully distributed processes are well beyond the scope of my books and my own ability to comprehend them. For a long time I've thought that an understanding of the way ant colonies are able to perform incredibly organized tasks might well be the key to an understanding of human cognition. Yet the ants that invade my house continually baffle me.
14. Guy Steele, Oracle Labs: Don, you and I are both interested in program analysis: What can one know about an algorithm without actually executing it? Type theory and Hoare logic are two formalisms for that sort of reasoning, and you have made great contributions to using mathematical tools to analyze the execution time of algorithms. What do you think are interesting currently open problems in program analysis?
Don Knuth: Guy, I'm sure you aren't really against the idea of program execution. You and I both like to know things about programs and to execute them. Often the execution contradicts our supposed knowledge.
The quest for better ways to verify programs is one of the famous grand challenges of computer science. And as I said to Udi, I'm particularly rooting for better techniques that will avoid crashes.
Just now I'm writing the part of Volume 4B that discusses algorithms for satisfiability, a problem of great industrial importance. Almost nothing is known about why the heuristics in modern solvers work as well as they do, or why they fail when they do. Most of the techniques that have turned out to be important were originally introduced for the wrong reasons!
If I had my druthers, I wish people like you would put a lot of effort into a problem of which I've only recently become aware: The programmers of today's multithreaded machines need new kinds of tools that will make linked data structures much more cache-friendly. One can in many cases start up auxiliary parallel threads whose sole purpose is to anticipate the memory accesses that the main computational threads will soon be needing, and to preload such data into the cache. However, the task of setting this up is much too daunting, at present, for an ordinary programmer like me.
15. Robert Tarjan, Princeton: What do you see as the most promising directions for future work in algorithm design and analysis? What interesting and important open problems do you see?
Don Knuth: My current draft about satisfiability already mentions 25 research problems, most of which are not yet well known to the theory community. Hence many of them might well be answered before Volume 4B is ready. Open problems pop up everywhere and often. But your question is, of course, really intended to be much more general.
In general I'm looking for more focus on algorithms that work fast with respect to problems whose size, n, is feasible. Most of today's literature is devoted to algorithms that are asymptotically great, but they are helpful only when n exceeds the size of the universe.
In one sense such literature makes my life easier, because I don't have to discuss those methods in TAOCP. I'm emphatically not against pure research, which significantly sharpens our abilities to deal with practical problems and which is interesting in its own right. So I sometimes play asymptotic games. But I sure wouldn't mind seeing a lot more algorithms that I could also use.
For instance, I've been reading about algorithms that decide whether or not a given graph G belongs to a certain class. Is G, say, chordal? You and others discovered some great algorithms for the chordality and minimum fillin problems, early on, and an enormous number of extremely ingenious procedures have subsequently been developed for characterizing the graphs of other classes. But I've been surprised to discover that very few of these newer algorithms have actually been implemented. They exist only on paper, and often with details only sketched.
Two years ago I needed an algorithm to decide whether G is a so-called comparability graph, and was disappointed by what had been published. I believe that all of the supposedly "most efficient" algorithms for that problem are too complicated to be trustworthy, even if I had a year to implement one of them.
Thus I think the present state of research in algorithm design misunderstands the true nature of efficiency. The literature exhibits a dangerous trend in contemporary views of what deserves to be published.
Another issue, when we come down to earth, is the efficiency of algorithms on real computers. As part of the Stanford GraphBase project I implemented four algorithms to compute minimum spanning trees of graphs, one of which was the very pretty method that you developed with Cheriton and Karp. Although I was expecting your method to be the winner, because it examines much of the data only half as often as the others, it actually came out two to three times worse than Kruskal's venerable method. Part of the reason was poor cache interaction, but the main cause was a large constant factor hidden by O notation.
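Kruskal's venerable method is indeed simple to state: examine the edges in order of increasing weight, keeping each edge whose endpoints lie in different components, with a union-find structure tracking the components. A minimal sketch (my illustration, not the GraphBase implementation; union-by-rank is omitted for brevity):

```python
def kruskal(n, edges):
    """Minimum spanning forest of an n-vertex graph.
    edges: list of (weight, u, v) tuples with vertices numbered 0..n-1."""
    parent = list(range(n))

    def find(x):                        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):       # examine edges in order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # different components: keep the edge
            parent[ru] = rv
            tree.append((w, u, v))
    return tree
```

Nearly all the work is one sort plus a tight loop of array operations, which helps explain the small constant factor (and friendly cache behavior) behind its strong showing in the comparison described above.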
16. Frank Ruskey, University of Victoria: Could you comment on the importance of working on unimportant problems? My sense is that computer science research, funding, and academic hiring is becoming more and more focused on short-term problems that have at their heart an economic motivation. Do you agree with this assessment, is it a bad trend, and do you see a way to mitigate it?
Similarly, could you comment on the demise of the individual researcher? So many papers that I see published these days have multiple authors. Five-author papers are routine. But when I dig into the details it seems that often only one or two have contributed the fresh ideas; the others are there because they are supervisors, or financial contributors, or whatever. I'm pretty sure that Euler didn't publish any papers with five co-authors. What is the reason for this trend, how does it interfere with trying to establish a history of ideas, and what can be done to reverse it?
Don Knuth: I was afraid somebody was going to ask a question related to economics. I've never understood anything about that subject. I don't know why people spend money to buy things. I'm willing to believe that some economists have enough wisdom to keep the world running some of the time, but their reasons are beyond me.
I just write books. I try to tell stories that seem to be important, at least for geeks. I've never bothered to think about marketing, or about what might sell, except when my publishers ask me to answer questions as I'm doing now!
Three years ago I published Selected Papers on Fun and Games, a 750-page book that is entirely devoted to unimportant problems. In many ways the fact that I was able to live during a time in the history of the world when such a book could be written has given me even more satisfaction than I get when seeing the currently healthy state of TAOCP.
I've reached an age where I can fairly be described as a "grumpy old man," and perhaps that is why I strongly share your concern for the alarming trends that you bring up. I'm profoundly upset when people rate the quality of my work by measuring the extent to which it affects Wall Street.
Everybody seems to understand that astronomers do astronomy because astronomy is interesting. Why don't they understand that I do computer science because computer science is interesting? And that I'd do it regardless of whether or not it made money for anybody? The reason is probably that not everybody is a geek.
Regarding joint authorship, you are surely right about Euler in the 18th century. In fact I can't think of any two-author papers in mathematics, until Hardy and Littlewood began working together at the beginning of the 20th century.
In my own case, two of my earliest papers were joint because the other authors did the theory and I wrote computer programs to validate it. Two other papers were related to the ALGOL language, and done together with ACM committees. In a number of others, written while I was at Caltech, I did the theory and my student co-authors wrote computer programs to validate it. There was one paper with Mike Garey, Ron Graham, and David Johnson, in which they did the theory and my role was to explain what they did. You and I wrote a joint paper in 2004, related to recursive coroutines, in which we shared equally.
The phenomenon of hyperauthorship still hasn't infected computer science as much as it has hit physics and biology, where I've read that Thomson-Reuters indexed more than 200 papers having 1,000 authors or more, in a single recent year! When I cite a paper in TAOCP, I like to mention all of the authors, and to give their full names in the index. That policy will become impossible if CS publication practices follow in the footsteps of those fields.
Collaborative work is exhilarating, and it's wonderful when new results are obtained that wouldn't have been discovered by individuals working alone. But as you say, authors should be authors, not hangers-on.
You mention the history of ideas. To me the method of discovery tends to be more important than the identification of the discoverers. Still, credit should be given where credit is due; conversely, credit shouldn't be given where credit isn't due.
I suppose the multiple-author anomalies are largely due to poor policies related to financial rewards. Unenlightened administrators seem to base salaries and promotions on publication counts.
What can we do? As I say, I'm incompetent to deal with economics. I've gone through life refusing to go along with a crowd, and bucking trends with which I disagree. I've often declined to have my name added to a paper. But I suppose I've had a sheltered existence; young people may be forced to bow to peer pressure.
17. Andrew Binstock, Dr. Dobb's: At the ACM Turing Centennial in 2012, you stated that you were becoming convinced that P = NP. Would you be kind enough to explain your current thinking on this question, how you came to it, and whether this growing conviction came as a surprise to you?
Don Knuth: As you say, I've come to believe that P = NP, namely that there does exist an integer M and an algorithm that will solve every n-bit problem belonging to the class NP in n^M elementary steps.
Some of my reasoning is admittedly naïve: It's hard to believe that P ≠ NP and that so many brilliant people have failed to discover why. On the other hand if you imagine a number M that's finite but incredibly large, like say the number 10↑↑↑↑3 discussed in my paper on "coping with finiteness", then there's a humongous number of possible algorithms that do n^M bitwise or addition or shift operations on n given bits, and it's really hard to believe that all of those algorithms fail.
My main point, however, is that I don't believe that the equality P = NP will turn out to be helpful even if it is proved, because such a proof will almost surely be nonconstructive. Although I think M probably exists, I also think human beings will never know such a value. I even suspect that nobody will even know an upper bound on M.
Mathematics is full of examples where something is proved to exist, yet the proof tells us nothing about how to find it. Knowledge of the mere existence of an algorithm is completely different from the knowledge of an actual algorithm.
For example, RSA cryptography relies on the fact that one party knows the factors of a number, but the other party knows only that factors exist. Another example is that the game of N × N Hex has a winning strategy for the first player, for all N. John Nash found a beautiful and extremely simple proof of this theorem in 1952. But Wikipedia tells me that such a strategy is still unknown when N = 9, despite many attempts. I can't believe anyone will ever know it when N is 100.
More to the point, Robertson and Seymour have proved a famous theorem in graph theory: Any class 𝒫 of graphs that is closed under taking minors has a finite number of minor-minimal graphs. (A minor of a graph is any graph obtainable by deleting vertices, deleting edges, or shrinking edges to a point. A minor-minimal graph H for 𝒫 is a graph whose smaller minors all belong to 𝒫 although H itself doesn't.) Therefore there exists a polynomial-time algorithm to decide whether or not a given graph G belongs to 𝒫: The algorithm checks that G doesn't contain any of 𝒫's minor-minimal graphs as a minor.
But we don't know what that algorithm is, except for a few special classes 𝒫, because the set of minor-minimal graphs is often unknown. The algorithm exists, but it's not known to be discoverable in finite time.
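For one of the few classes where the obstruction set is known, the guaranteed membership test can be made concrete. By Wagner's theorem, the planar graphs have exactly two minor-minimal obstructions, K5 and K3,3, so testing planarity reduces to two minor checks. The sketch below (my own illustration, not from the interview) uses a brute-force, exponential-time minor test that only works for tiny graphs; real minor testing uses the polynomial-time Robertson-Seymour machinery.

```python
from itertools import product

def is_minor(h_edges, h_n, g_edges, g_n):
    """Brute-force test: is H (h_n vertices, h_edges) a minor of G?
    Tries every assignment of G's vertices to an H-vertex (or to -1,
    meaning 'unused'), checking that each branch set is nonempty and
    connected, and that every H-edge is realized between branch sets."""
    g_adj = {v: set() for v in range(g_n)}
    for a, b in g_edges:
        g_adj[a].add(b); g_adj[b].add(a)

    def connected(vs):
        vs = set(vs)
        if not vs:
            return False          # every branch set must be nonempty
        start = next(iter(vs))
        seen, stack = {start}, [start]
        while stack:
            v = stack.pop()
            for w in (g_adj[v] & vs) - seen:
                seen.add(w); stack.append(w)
        return seen == vs

    for assign in product(range(-1, h_n), repeat=g_n):
        branch = {u: [v for v in range(g_n) if assign[v] == u]
                  for u in range(h_n)}
        if not all(connected(branch[u]) for u in range(h_n)):
            continue
        if all(any(b in g_adj[a] for a in branch[x] for b in branch[y])
               for x, y in h_edges):
            return True
    return False

# Wagner's theorem: G is planar iff it has no K5 minor and no K3,3 minor.
K5 = [(i, j) for i in range(5) for j in range(i + 1, 5)]
K33 = [(i, j + 3) for i in range(3) for j in range(3)]

def planar_by_minors(g_edges, g_n):
    return (not is_minor(K5, 5, g_edges, g_n)
            and not is_minor(K33, 6, g_edges, g_n))
```

The point of the theorem is that a finite obstruction set like {K5, K3,3} exists for *every* minor-closed class, even when nobody knows what the obstructions are.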
This consequence of Robertson and Seymour's theorem definitely surprised me, when I learned about it while reading a paper by Lovász. And it tipped the balance, in my mind, toward the hypothesis that P = NP.
The moral is that people should distinguish between known (or knowable) polynomial-time algorithms and arbitrary polynomial-time algorithms. People might never be able to implement a polynomial-time-worst-case algorithm for satisfiability, even though P happens to equal NP.
18. Jeffrey O. Shallit, University of Waterloo: Decision methods, automated theorem-proving, and proof assistants have been successful in a number of different areas: the Wilf-Zeilberger method for combinatorial identities and the Robbins conjecture, to name two. What do you think theorem discovery and proof will look like in 100 years? Rather like today, or much more automated?
Don Knuth: Besides economics, I was also afraid that somebody would ask me about the future, because I'm a notoriously bad prophet. I'll take a shot at your question anyway.
Assuming 100 years of sustainable civilization, I'm fairly sure that a large percentage of theorems (maybe even 38.1966%) will be discovered with computer aid, and that a nontrivial percentage (maybe 0.7297%) will have computer-verified proofs that cannot be understood by mortals.
In my Ph.D. thesis (1963), I looked at computer-generated examples of small finite projective planes, and used that data to construct infinitely many planes of a kind never before known. Ten years later, I discovered the so-called Knuth-Morris-Pratt algorithm by studying the way one of Steve Cook's automata was able to recognize concatenated palindromes in linear time. Such investigations are fun.
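The Knuth-Morris-Pratt algorithm that came out of that study rests on a single precomputation, the prefix (or "failure") function, which records, for each position in the pattern, the length of the longest proper prefix that is also a suffix there. A standard sketch (my illustration, not Knuth's original presentation):

```python
def prefix_function(pattern):
    """pi[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it -- the KMP 'failure' links that let
    the matcher skip ahead without re-reading text characters."""
    pi = [0] * len(pattern)
    k = 0                                   # length of current match
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = pi[k - 1]                   # fall back to shorter border
        if pattern[i] == pattern[k]:
            k += 1
        pi[i] = k
    return pi

print(prefix_function("ababaca"))  # -> [0, 0, 1, 2, 3, 0, 1]
```

Because k only grows by one per character and each fallback shrinks it, the whole computation (and the matching phase built on it) runs in linear time, which is exactly the property Cook's automaton exhibited.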
A few months ago, however, I tried unsuccessfully to do a similar thing. I had a 5,000-step mechanically discovered proof that the edges of a smallish flower snark graph cannot be 3-colored, and I wanted to psych out how the machine had come up with it. Although I gave up after a couple of days, I do think it would be possible to devise new tools for the study of computer proofs in order to identify the "aha moments" therein.
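The underlying fact is easy to state: snarks are 3-regular graphs whose edges cannot be properly 3-colored. Knuth's instance was a flower snark; as a smaller stand-in (my choice, not his), the Petersen graph is the smallest snark, and a short backtracking search confirms that no proper 3-edge-coloring exists, though the search trace explains "why" no better than the 5,000-step machine proof did:

```python
def three_edge_colorable(edges, n):
    """Backtracking search for a proper 3-edge-coloring:
    edges sharing a vertex must receive distinct colors."""
    incident = {v: [] for v in range(n)}
    for i, (a, b) in enumerate(edges):
        incident[a].append(i); incident[b].append(i)
    color = [None] * len(edges)

    def ok(i, c):
        a, b = edges[i]
        return all(color[j] != c for v in (a, b) for j in incident[v]
                   if color[j] is not None)

    def solve(i):
        if i == len(edges):
            return True
        for c in range(3):
            if ok(i, c):
                color[i] = c
                if solve(i + 1):
                    return True
                color[i] = None
        return False

    return solve(0)

petersen = ([(i, (i + 1) % 5) for i in range(5)]            # outer 5-cycle
            + [(i + 5, (i + 2) % 5 + 5) for i in range(5)]  # inner pentagram
            + [(i, i + 5) for i in range(5)])               # spokes
print(three_edge_colorable(petersen, 10))  # -> False
```

A SAT solver encodes the same constraints as clauses; the resolution refutation it emits is the kind of multi-thousand-step proof Knuth was trying to psych out.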
In February of this year I noticed that the calculation of an Erdős discrepancy constant (made famous by Tim Gowers' Polymath project, in which many mathematicians collaborated via the Internet) makes an instructive benchmark for satisfiability-testing algorithms. My first attempt to compute it needed 49 hours of computer time. Two weeks later I'd cut that down to less than 2 hours, but there still were 20 million steps in the proof. I see no way at present for human beings to understand more than the first few thousand of those steps.
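For a sense of what is being computed: the discrepancy of a ±1 sequence x_1, …, x_n is the largest value of |x_d + x_2d + … + x_kd| over all strides d and lengths k. The frontier cases need SAT solvers, but the smallest case, discrepancy bound 1, is tiny enough to settle by brute force (a toy illustration of my own, not Knuth's benchmark):

```python
from itertools import product

def discrepancy(x):
    """Max |x_d + x_2d + ... + x_kd| over all strides d and
    prefix lengths k, for a tuple x of +1/-1 values (1-indexed)."""
    n = len(x)
    worst = 0
    for d in range(1, n + 1):
        s = 0
        for i in range(d, n + 1, d):   # positions d, 2d, 3d, ...
            s += x[i - 1]
            worst = max(worst, abs(s))
    return worst

def longest_with_discrepancy_at_most(c, limit):
    """Longest n <= limit for which some +-1 sequence of length n
    has discrepancy <= c (exhaustive search, feasible only for tiny n)."""
    best = 0
    for n in range(1, limit + 1):
        if any(discrepancy(x) <= c for x in product((1, -1), repeat=n)):
            best = n
    return best

print(longest_with_discrepancy_at_most(1, 12))  # -> 11
```

Every ±1 sequence of length 12 already has discrepancy at least 2. The analogous question one bound higher is where the 20-million-step SAT proofs live: exhaustive enumeration is hopeless there, and the machine-generated refutation is all we have.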
19. Scott Aaronson, MIT: Would you recommend to other scientists to abandon the use of email, as you have done?
Don Knuth: My own situation is unusual, because I do my best work when I'm not interrupted. I eat, sleep, and write content, more-or-less as a recluse who spends considerable time reading archives and other people's code. As I say on my home page, most people need to keep on top of things, but my role is to get to the bottom of things.
So I don't recommend a no-email policy to people who thrive on communication. And I actually take advantage of others in this respect (either shamelessly or shamefully, I'm not sure which), by pestering them with random questions, even though I don't want anybody to pester me-except about the one topic that I happen to be zooming in on at any particular time.
I do welcome email that reports bugs in TAOCP, because I always try to correct them as soon as possible.
Other unsolicited messages go to the bit bucket in the sky, otherwise known as /dev/null.
20. J. H. Quick, blogger: Why is this multi-interview called "twenty questions," when only 19 questions were asked?
Don Knuth: I'm stumped. No, wait: Radia asked two.
Incidentally, the eVolumes of TAOCP contain some 4,500 questions, and almost as many answers.
Compiled by Nikolai Bezroukov
Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author's free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright for original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.
Last modified: September 12, 2017