Softpanorama

May the source be with you, but remember the KISS principle ;-)

Programming Languages Usage and Design Problems


As Donald Knuth noted (Don Knuth and the Art of Computer Programming The Interview):

I think of a programming language as a tool to convert a programmer's mental images into precise operations that a machine can perform. The main idea is to match the user's intuition as well as possible. There are many kinds of users, and many kinds of application areas, so we need many kinds of languages.
 
Ordinarily technology changes fast. But programming languages are different: programming languages are not just technology, but what programmers think in.

They're half technology and half religion. And so the median language, meaning whatever language the median programmer uses, moves as slow as an iceberg.

Paul Graham: Beating the Averages

Libraries are more important than the language.

Donald Knuth


Introduction

A fruitful way to think about language development is to consider it to be a special type of theory building. Peter Naur suggested that programming in general is a theory-building activity in his 1985 paper "Programming as Theory Building". But the idea is especially applicable to compilers and interpreters. What Peter Naur failed to understand was that the design of programming languages has religious overtones and sometimes represents an activity which is pretty close to the process of creating a new, obscure cult ;-). Clueless academics publishing junk papers at obscure conferences are the high priests of the church of programming languages. Some, like Niklaus Wirth and Edsger W. Dijkstra, (temporarily) reached a status close to that of (false) prophets :-).

On a deep conceptual level, building a new language is a human way of solving complex problems. That means that compiler construction is probably the most underappreciated paradigm for programming large systems, much more so than the greatly oversold object-oriented programming; OO benefits are greatly overstated. For users, programming languages distinctly have religious aspects, so decisions about what language to use are often far from rational and are mainly cultural. Indoctrination at the university plays a very important role; recently universities were instrumental in making Java a new Cobol.

The second important observation about programming languages is that the language per se is just a tiny part of what can be called the language programming environment. The latter includes libraries, IDEs, books, the level of adoption at universities, popular and important applications written in the language, the level of support, the key players that support the language on major platforms such as Windows and Linux, and other similar things. A mediocre language with a good programming environment can give a run for the money to languages superior in design that come "naked". This is the story behind the success of Java. A critical application is also very important, and this is the story of the success of PHP, which is nothing but a bastardized derivative of Perl (with most of the interesting Perl features removed ;-) adapted to the creation of dynamic web sites using the so-called LAMP stack.

Progress in programming languages has been very uneven and has contained several setbacks. Currently this progress is mainly limited to the development of so-called scripting languages. The traditional high-level language field has been stagnant for many decades.

At the same time there are some mysterious, unanswered questions about the factors that help a language to succeed or fail. Among them:

Those are difficult questions to answer without some way of classifying languages into different categories. Several such classifications exist. First of all, as with natural languages, the number of people who speak a given language is a tremendous force that can overcome any real or perceived deficiencies of the language. In programming languages, as in natural languages, nothing succeeds like success.

Complexity Curse

The history of programming languages raises interesting general questions about the limit of complexity of programming languages. There is strong historical evidence that a language with a simpler, or even simplistic, core (Basic, Pascal) has better chances to acquire a high level of popularity. The underlying fact here is probably that most programmers are at best mediocre, and such programmers tend, on an intuitive level, to avoid more complex, richer languages and prefer, say, Pascal to PL/1 and PHP to Perl. Or at least avoid them at a particular phase of language development (C++ is not a simpler language than PL/1, but it was widely adopted because of the progress of hardware, the availability of compilers and, not least, because it was associated with OO exactly at the time OO became a mainstream fashion). Complex non-orthogonal languages can succeed only as a result of a long period of language development (which usually adds complexity -- just compare Fortran IV with Fortran 90, or PHP 3 with PHP 5) from a smaller core. The banner of some fashionable new trend extending an existing popular language to this new "paradigm" is also a possibility (OO programming in the case of C++, which is a superset of C).

Historically, few complex languages were successful (PL/1, Ada, Perl, C++), but even when they were successful, their success typically was temporary rather than permanent (PL/1, Ada, Perl). As Professor Wilkes noted (iee90):

Things move slowly in the computer language field but, over a sufficiently long period of time, it is possible to discern trends. In the 1970s, there was a vogue among system programmers for BCPL, a typeless language. This has now run its course, and system programmers appreciate some typing support. At the same time, they like a language with low level features that enable them to do things their way, rather than the compiler’s way, when they want to.

They continue to have a strong preference for a lean language. At present they tend to favor C in its various versions. For applications in which flexibility is important, Lisp may be said to have gained strength as a popular programming language.

Further progress is necessary in the direction of achieving modularity. No language has so far emerged which exploits objects in a fully satisfactory manner, although C++ goes a long way. ADA was progressive in this respect, but unfortunately it is in the process of collapsing under its own great weight.

ADA is an example of what can happen when an official attempt is made to orchestrate technical advances. After the experience with PL/1 and ALGOL 68, it should have been clear that the future did not lie with massively large languages.

I would direct the reader’s attention to Modula-3, a modest attempt to build on the appeal and success of Pascal and Modula-2 [12].

The complexity of the compiler/interpreter also matters, as it affects portability: this is one thing that probably doomed PL/1 (and later Ada), although these days a new language typically comes with an open source compiler (or, in the case of scripting languages, an interpreter), so this is less of a problem.

Here is an interesting take on language design from the preface to The D programming language book:

Programming language design seeks power in simplicity and, when successful, begets beauty.

Choosing the trade-offs among contradictory requirements is a difficult task that requires good taste from the language designer as much as mastery of theoretical principles and of practical implementation matters. Programming language design is software-engineering-complete.

D is a language that attempts to consistently do the right thing within the constraints it chose: system-level access to computing resources, high performance, and syntactic similarity with C-derived languages. In trying to do the right thing, D sometimes stays with tradition and does what other languages do, and other times it breaks tradition with a fresh, innovative solution. On occasion that meant revisiting the very constraints that D ostensibly embraced. For example, large program fragments or indeed entire programs can be written in a well-defined memory-safe subset of D, which entails giving away a small amount of system-level access for a large gain in program debuggability.

You may be interested in D if the following values are important to you:

The role of fashion

At the initial, most difficult stage of language development, the language should solve an important problem that was inadequately solved by currently popular languages. But at the same time the language has few chances to succeed unless it fits perfectly into the current software fashion. This "fashion factor" is probably as important as several other factors combined, with the exception of the "language sponsor" factor.

As in women's dress, fashion rules in language design. And with time this trend has become more and more pronounced. A new language should simultaneously represent the current fashionable trend. For example, OO programming was a visiting card into the world of "big, successful languages" since probably the early '90s (C++, Java, Python). Before that, "structured programming" and "verification" (Pascal, Modula) played a similar role.

Programming environment and the role of "powerful sponsor" in language success

PL/1, Java, C#, Ada are languages that had powerful sponsors. Pascal, Basic, Forth are examples of the languages that had no such sponsor during the initial period of development.  C and C++ are somewhere in between.

But any language now needs a "programming environment", which consists of a set of libraries, a debugger and other tools (a make tool, a linker, a pretty-printer, etc.). The set of "standard" libraries and the debugger are probably the two most important elements. They cost a lot of time (or money) to develop, and here the role of a powerful sponsor is difficult to overestimate.

While this is not a necessary condition for becoming popular, it really helps: other things being equal, the weight of the sponsor of the language does matter. For example Java, being a weak, inconsistent language (C-- with garbage collection and OO), was pushed down programmers' throats on the strength of marketing and the huge amount of money spent on creating the Java programming environment. The same was partially true for C# and Python. That's why Python, despite its "non-Unix" origin, is a more viable scripting language now than, say, Perl (which is better integrated with Unix and has support for pointers and regular expressions that is pretty innovative for scripting languages), or Ruby (which has had support for coroutines from day one, not as a "bolted on" feature like in Python). As in political campaigns, negative advertising also matters. For example Perl suffered greatly from blackmail comparing programs in it with "white noise", and then from the withdrawal of O'Reilly from the role of sponsor of the language (although it continues to milk the Perl book publishing franchise ;-).

People proved to be pretty gullible, and in this sense language marketing is not that different from the marketing of women's clothing :-)

Language level and success

One very important classification of programming languages is based on the so-called level of the language. Essentially, once there is at least one language that is successful on a given level, the success of other languages on the same level becomes more problematic. The best chances for success belong to languages that have an even slightly higher level than their successful predecessors.

The level of a language can informally be described as the number of statements (or, more correctly, the number of lexical units (tokens)) needed to write a solution to a particular problem in one language versus another. This way we can distinguish several levels of programming languages:
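
As a rough, hedged illustration of what a difference in "level" looks like in practice (the token counts are approximate, and the awk comparison lives only in the comment):

/* Sum the numbers given on standard input, one per line. This C version
 * needs roughly fifty lexical tokens; the equivalent awk one-liner,
 * '{ s += $1 } END { print s }', needs about a dozen. That gap is what is
 * meant by awk being a "higher level" language for this class of problem. */
#include <stdio.h>

int main(void)
{
    double x, sum = 0.0;
    while (scanf("%lf", &x) == 1)
        sum += x;
    printf("%g\n", sum);
    return 0;
}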

 "Nanny languages" vs "Sharp razor" languages

Some people distinguish between "nanny languages" and "sharp razor" languages. The latter do not attempt to protect the user from his errors, while the former usually go too far... The right compromise is extremely difficult to find.

For example, I consider the explicit availability of pointers to be an important feature of a language that greatly increases its expressive power and far outweighs the risk of errors in the hands of unskilled practitioners. In other words, attempts to make the language "safer" often misfire.
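
As a minimal sketch of the kind of expressive power meant here (the types and names are illustrative, not from any particular codebase): reversing a singly linked list in place takes three pointer assignments per node, with no allocation and no copying.

#include <stddef.h>

struct node {
    int value;
    struct node *next;
};

/* Reverse the list headed by 'head' in place and return the new head. */
struct node *reverse(struct node *head)
{
    struct node *prev = NULL;
    while (head != NULL) {
        struct node *next = head->next;  /* remember the rest of the list */
        head->next = prev;               /* re-point the current node backwards */
        prev = head;
        head = next;
    }
    return prev;
}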

Expressive style of the languages

Another useful typology is based on the expressive style of the language:

Those categories are not pure and somewhat overlap. For example, it is possible to program in an object-oriented style in C, or even in assembler. Some scripting languages like Perl have built-in regular expression engines that are a part of the language, so they have a functional component despite being procedural. Some relatively low-level languages (Algol-style languages) implement garbage collection; a good example is Java. There are also scripting languages that compile into a common language framework designed for high-level languages: for example, IronPython compiles into .NET.
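
To back the claim that an object-oriented style is possible in plain C, here is a minimal sketch (the struct and function names are purely illustrative): an "object" is a struct carrying a function pointer that plays the role of a virtual method, and dispatch is done by hand.

#include <stdio.h>

struct shape {
    double (*area)(const struct shape *self);   /* "virtual method" */
    double width, height;
};

static double rect_area(const struct shape *self)
{
    return self->width * self->height;
}

int main(void)
{
    struct shape r = { rect_area, 3.0, 4.0 };
    printf("area = %.1f\n", r.area(&r));         /* dynamic dispatch by hand */
    return 0;
}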

Weak correlation between quality of design and popularity

The popularity of programming languages is not strongly connected to their quality. Some languages that look like a collection of language designer's blunders (PHP, Java) became quite popular. Java especially became a new Cobol, and PHP dominates dynamic Web site construction. The dominant technology for such Web sites is often called LAMP, which means Linux - Apache - MySQL - PHP. Being a highly simplified but badly constructed subset of Perl, a kind of new Basic for dynamic Web site construction, PHP provides a most depressing experience. I was unpleasantly surprised when I learned that the Wikipedia engine was rewritten in PHP from Perl some time ago, but this illustrates the trend quite well.

So language design quality has little to do with a language's success in the marketplace. Simpler languages have wider appeal, as the success of PHP (which at the beginning came at the expense of Perl) suggests. In addition, much depends on whether the language has a powerful sponsor, as was the case with Java (Sun and IBM) as well as Python (Google).

Progress in programming languages has been very uneven and has contained several setbacks, like Java. Currently this progress is usually associated with scripting languages. The history of programming languages raises interesting general questions about the "laws" of programming language design. First let's reproduce several notable quotes:

  1. Knuth law of optimization: "Premature optimization is the root of all evil (or at least most of it) in programming." - Donald Knuth
  2. "Greenspun's Tenth Rule of Programming: any sufficiently complicated C or Fortran program contains an ad hoc informally-specified bug-ridden slow implementation of half of Common Lisp." - Phil Greenspun
  3. "The key to performance is elegance, not battalions of special cases." - Jon Bentley and Doug McIlroy
  4. "Some may say Ruby is a bad rip-off of Lisp or Smalltalk, and I admit that. But it is nicer to ordinary people." - Matz, LL2
  5. Most papers in computer science describe how their author learned what someone else already knew. - Peter Landin
  6. "The only way to learn a new programming language is by writing programs in it." - Kernighan and Ritchie
  7. "If I had a nickel for every time I've written "for (i = 0; i < N; i++)" in C, I'd be a millionaire." - Mike Vanier
  8. "Language designers are not intellectuals. They're not as interested in thinking as you might hope. They just want to get a language done and start using it." - Dave Moon
  9. "Don't worry about what anybody else is going to do. The best way to predict the future is to invent it." - Alan Kay
  10. "Programs must be written for people to read, and only incidentally for machines to execute." - Abelson & Sussman, SICP, preface to the first edition

Please note that it is one thing to read a language manual and appreciate how good the concepts are, and quite another to bet your project on a new, unproven language without good debuggers, manuals and, most importantly, libraries. The debugger is very important, but standard libraries are crucial: they represent the factor that makes or breaks new languages.

In this sense languages are much like cars. For many people a car is the thing they use to get to work and to the shopping mall, and they are not very interested in whether the engine is inline or V-type, or whether the transmission uses fuzzy logic. What they care about is safety, reliability, mileage, insurance and the size of the trunk. In this sense "worse is better" is very true. I already mentioned the importance of the debugger. The other important criterion is the quality and availability of libraries. Actually, libraries make up 80% of the usability of a language; moreover, in a sense libraries are more important than the language...

The popular belief that scripting is an "unsafe" or "second rate" or "prototype" solution is completely wrong. If a project has died, it does not matter what the implementation language was; so for any successful project with a tough schedule a scripting language (especially in a dual scripting language + C combination, for example Tcl + C) is an optimal blend for a large class of tasks. Such an approach helps to separate architectural decisions from implementation details much better than any OO model does.
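
Here is a minimal, hedged sketch of what such a dual-language combination looks like in practice, assuming a Tcl development package is installed (link with -ltcl); the command name "add" and the embedded script are purely illustrative. The performance-critical primitive lives in C, while the control flow stays in the script.

#include <tcl.h>
#include <stdio.h>

/* A C primitive exposed to the script as the command "add". */
static int AddCmd(ClientData cd, Tcl_Interp *interp,
                  int objc, Tcl_Obj *const objv[])
{
    int a, b;
    if (objc != 3) {
        Tcl_WrongNumArgs(interp, 1, objv, "a b");
        return TCL_ERROR;
    }
    if (Tcl_GetIntFromObj(interp, objv[1], &a) != TCL_OK ||
        Tcl_GetIntFromObj(interp, objv[2], &b) != TCL_OK)
        return TCL_ERROR;
    Tcl_SetObjResult(interp, Tcl_NewIntObj(a + b));
    return TCL_OK;
}

int main(void)
{
    Tcl_Interp *interp = Tcl_CreateInterp();
    Tcl_CreateObjCommand(interp, "add", AddCmd, NULL, NULL);
    /* The "architecture" -- wiring and control flow -- stays in the script. */
    if (Tcl_Eval(interp, "puts [add 2 3]") != TCL_OK)
        fprintf(stderr, "%s\n", Tcl_GetStringResult(interp));
    Tcl_DeleteInterp(interp);
    return 0;
}

The same shape works with Lua, Python or Guile as the embedded language; the point is the separation of roles, not the particular interpreter.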

Moreover, even for tasks that handle a fair amount of computation and data (computationally intensive tasks), such languages as Python and Perl are often (but not always!) competitive with C++, C# and, especially, Java.



Programming Language Development Timeline

Here is the timeline of programming languages, modified from Byte (for the original see BYTE.com, September 1995 / 20th Anniversary).

Forties

ca. 1946, 1949

Fifties

1951, 1952, 1957, 1958, 1959

Sixties

1960, 1962, 1963, 1964, 1965, 1966, 1967, 1969

Seventies

1970, 1972, 1974, 1975, 1976, 1977, 1978, 1979

Eighties

1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1989

Nineties

1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997

2006, 2007, 2011

Avoiding the C-style languages' design blunder of "easy" mistyping of "=" instead of "=="

One of the most famous C design blunders was the small lexical difference between assignment and comparison (remember that Algol used := for assignment), caused by the design decision to make the language more compact (terminals at that time were not very reliable, and the number of symbols typed mattered greatly). In C, assignment is allowed inside an if statement, and no attempt was made to make the language more failsafe by avoiding the possibility of mixing up "=" and "==". In C syntax, if (a = b) assigns the contents of b to a and executes the following code when b is non-zero. It is easy to mix things up and write if (a = b) instead of if (a == b), which is a pretty nasty bug. You can often reverse the sequence and put the constant first, as in

if ( 1==i ) ...
as
if ( 1=i ) ...
does not make any sense, so such a blunder will be detected at the syntax level.
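
A small, self-contained illustration of the blunder and of the usual mitigations; with warnings enabled (for example gcc or clang with -Wall) the first if below typically draws a "suggest parentheses around assignment used as truth value" warning.

#include <stdio.h>

int main(void)
{
    int a = 0, b = 5;

    if (a = b)          /* bug: assigns b to a, then tests the result */
        printf("taken whenever b is non-zero (a is now %d)\n", a);

    if (b == 5)         /* intended comparison */
        printf("b is 5\n");

    if (5 == b)         /* "constant first" style: 5 = b would not compile */
        printf("b is still 5\n");

    return 0;
}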

Dealing with unbalanced "{" and "}" problem in C-style languages

One of the nasty problems with C, C++, Java, Perl and other C-style languages is that missing brackets are pretty difficult to find. One effective solution was first implemented in PL/1: the calculation of nesting levels (in the compiler listing) and the ability to close multiple blocks in a single end statement (PL/1 did not use the brackets {}; they were introduced in C).

In C one can use pseudo-comments that signify nesting level zero and check those points with a special program or by writing an editor macro.
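
A minimal sketch of such a "special program" (the marker name LEVEL0 is arbitrary, and the sketch ignores braces inside strings, character constants and comments):

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    FILE *f = (argc > 1) ? fopen(argv[1], "r") : stdin;
    char line[4096];
    int depth = 0, lineno = 0;

    if (f == NULL) { perror("fopen"); return 1; }
    while (fgets(line, sizeof line, f) != NULL) {
        lineno++;
        for (const char *p = line; *p; p++) {
            if (*p == '{') depth++;
            else if (*p == '}') depth--;
        }
        /* A line carrying the LEVEL0 marker is supposed to sit at nesting level zero. */
        if (strstr(line, "LEVEL0") != NULL && depth != 0)
            printf("line %d: expected nesting 0, found %d\n", lineno, depth);
        if (depth < 0) {
            printf("line %d: unmatched '}'\n", lineno);
            depth = 0;  /* resynchronize and keep scanning */
        }
    }
    if (depth != 0)
        printf("end of file: %d unclosed '{'\n", depth);
    if (f != stdin) fclose(f);
    return 0;
}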

Many editors have the ability to jump from an opening bracket to its closing bracket and vice versa. This is also useful, but it is a less efficient way to solve the problem.

Problem of unclosed literal

Specifying the maximum length of literals is an effective way of catching a missing quote. This was implemented in PL/1 compilers. You can also have an option to limit a literal to a single line. In general, multi-line literals should have distinct lexical markers (like the "here document" construct in shell). Some languages, like Perl, provide the opportunity to use the concatenation operator for splitting literals into multiple lines, which are "merged" at compile time; but there is no limit on the number of lines a string literal can occupy, so this does not help much.

If such a limit can be communicated via a pragma statement at compile time for a particular fragment of text, this is an effective way to avoid the problem. Usually only a few places in a program use multiline literals, if any.

Editors that use syntax coloring help to detect the unclosed literal problem, but there are cases when they are useless.

Commenting out blocks  of code

This is best done not with comments but with a preprocessor, in the languages that have one (PL/1, C, etc.).
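
A minimal, self-contained example of the C preprocessor idiom; unlike wrapping the block in /* ... */ comments, #if 0 ... #endif nests correctly around code that itself contains comments.

#include <stdio.h>

int main(void)
{
    printf("always runs\n");

#if 0
    /* This whole block, comments included, is removed by the preprocessor.
       Flip the 0 to 1 to re-enable it. */
    printf("disabled diagnostic output\n");
#endif

    return 0;
}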

The "dangling else" problem

Having both an if-else and an if statement leads to some possibility of confusion when one of the clauses of a selection statement is itself a selection statement. For example, the code

if (level >= good)
   if (level == excellent)
      cout << "excellent" << endl;
else
   cout << "bad" << endl;

is intended to process a three-state situation in which something can be bad, good or (as a special case of good) excellent; it is supposed to print an appropriate description for the excellent and bad cases, and print nothing for the good case. The indentation of the code reflects these expectations. Unfortunately, the code does not do this. Instead, it prints excellent for the excellent case, bad for the good case, and nothing for the bad case.

The problem is deciding which if matches the else in this expression. The basic rule is

an else matches the nearest previous unmatched if

There are ways to avoid the dangling else problem. In fact, you can avoid it completely by always using brackets around the clauses of an if or if-else statement, even if they only enclose a single statement. So a good strategy for the notation of if-else statements is: always use { brace brackets } around the clauses of an if-else or if statement.

(This strategy also helps if you need to cut and paste more code into one of the clauses: if a clause consists of only one statement, without enclosing brace brackets, and you add another statement to it, then you also need to add the brace brackets. Having the brace brackets there already makes the job easier.)
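
Restating the earlier example with braces around every clause, the else now binds to the outer if, as the indentation of the original version intended:

if (level >= good) {
   if (level == excellent) {
      cout << "excellent" << endl;
   }
} else {
   cout << "bad" << endl;
}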

 



Old News ;-)

[Jun 30,2005] Art and Computer Programming by John Littler

Knuth's view holds; Stallman's view does not make any sense other than in the context of his cult :-). See also the Slashdot discussion Is Programming Art?
ONLamp.com

Art and hand-waving are two things that a lot of people consider to go very well together. Art and computer programming, less so. Donald Knuth put them together when he named his wonderful multivolume set on algorithms The Art of Computer Programming, but Knuth chose a craft-oriented definition of art (PDF) in order to do so.

... ... ...

Someone I didn't attempt to contact but whose words live on is Albert Einstein. Here are a couple of relevant quotes:

[W]e do science when we reconstruct in the language of logic what we have seen and experienced. We do art when we communicate through forms whose connections are not accessible to the conscious mind yet we intuitively recognise them as something meaningful.

Also:

After a certain level of technological skill is achieved, science and art tend to coalesce in aesthetic plasticity and form. The greater scientists are artists as well.[1]

This is a lofty place to start. Here's Fred Brooks with a more direct look at the subject:

The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures.[2]

He doesn't say it's art, but it sure sounds a lot like it.

In that vein, Andy Hunt from the Pragmatic Programmers says:

It is absolutely an art. No question about it. Check out this quote from the Marines:

An even greater part of the conduct of war falls under the realm of art, which is the employment of creative or intuitive skills. Art includes the creative, situational application of scientific knowledge through judgment and experience, and so the art of war subsumes the science of war. The art of war requires the intuitive ability to grasp the essence of a unique military situation and the creative ability to devise a practical solution.

Sounds like a similar situation to software development to me.

There are other similarities between programming and artists, see my essay at Art In Programming (PDF).

I could go on for hours about the topic...

Guido van Rossum, the creator of Python, has stronger alliances to Knuth's definition:

I'm with Knuth's definition (or use) of the word art.

To me, it relates strongly to creativity, which is very important for my line of work.

If there was no art in it, it wouldn't be any fun, and then I wouldn't still be doing it after 30 years.

Bjarne Stroustrup, the creator of C++, is also more like Knuth in refining his definition of art:

When done right, art and craft blends seamlessly. That's the view of several schools of design, though of course not the view of people into "art as provocation".

Define "craft"; define "art". The crafts and arts that I appreciate blend seamlessly into each other so that there is no dilemma.

So far, these views are very top-down. What happens when you change the viewpoint? Paul Graham, programmer and author of Hackers and Painters, responded that he'd written quite a bit on the subject and to feel free to grab something. This was my choice:

I've found that the best sources of ideas are not the other fields that have the word "computer" in their names, but the other fields inhabited by makers. Painting has been a much richer source of ideas than the theory of computation.

For example, I was taught in college that one ought to figure out a program completely on paper before even going near a computer. I found that I did not program this way. I found that I liked to program sitting in front of a computer, not a piece of paper. Worse still, instead of patiently writing out a complete program and assuring myself it was correct, I tended to just spew out code that was hopelessly broken, and gradually beat it into shape. Debugging, I was taught, was a kind of final pass where you caught typos and oversights. The way I worked, it seemed like programming consisted of debugging.

For a long time I felt bad about this, just as I once felt bad that I didn't hold my pencil the way they taught me to in elementary school. If I had only looked over at the other makers, the painters or the architects, I would have realized that there was a name for what I was doing: sketching. As far as I can tell, the way they taught me to program in college was all wrong. You should figure out programs as you're writing them, just as writers and painters and architects do.[3]

Paul goes on to talk about the implications for software design and the joys of dynamic typing, which allows you to stay looser later.

Now, we're right down to the code. This is what Richard Stallman, founder of the GNU Project and the Free Software Foundation, has to say (throwing in a geek joke for good measure):

I would describe programming as a craft, which is a kind of art, but not a fine art. Craft means making useful objects with perhaps decorative touches. Fine art means making things purely for their beauty.

Programming in general is not fine art, but some entries in the obfuscated C contest may qualify. I saw one that could be read as a story in English or as a C program. For the English reading one had to ignore punctuation--for instance, the name Charlotte might appear as char *lotte.

(Once I was eating in Legal Sea Food and ordered arctic char. When it arrived, I looked for a signature, saw none, and complained to my friends, "This is an unsigned char. I wanted a signed char!" I would have complained to the waiter if I had thought he'd get the joke.)

... ... ...

Constraints and Art

The existence of so many restraints in the actual practice of code writing makes it tempting to dismiss programming as art, but when you think about it, people who create recognized art have constraints too. Writers, painters, and so on all have their code--writers must be comprehensible in some sort of way in their chosen language. Musicians have tools of expression in scales, harmonies, and timbres. Painters might seem to be free of this, but cultural rules exist, as they do for the other categories. An artist can break rules in an inspired way and receive the highest praise for it--but sometimes only after they've been dead for a long time.

Program syntax and logic might seem to be more restrictive than these rules, which is why it is more inspiring to think as Fred Brooks did--in the heart of the machine.

Perhaps it's more useful to look at the process. If there are ways in which the concept of art could be useful, then maybe we'll find them there.

If we broadly take the process as consisting of idea, design, and implementation, it's clear that even if we don't accept that implementation is art, there is plenty of scope in the first two stages, and there's certainly scope in the combination. Thinking about it a little more also highlights the reductio ad absurdum of looking at any art in this way, where sculpture becomes the mere act of chiseling stone or painting is the application of paint to a surface.

Looking at the process immediately focuses on the different situations of the lone hacker or small team as opposed to large corporate teams, who in some cases send specification documents to people they don't even know in other countries. The latter groups hope that they've specified things in such detail that they need to know nothing about the code writers other than the fact that they can deliver.

The process for the lone hacker or small team might be almost unrecognizable as a process to an outsider--a process like that described by Paul Graham, where writing the code itself alters and shapes an idea and its design. The design stage is implicit and ongoing. If there is art in idea and design, then this is kneaded through the dough of the project like a special magic ingredient--the seamless combination that Bjarne Stroustrup mentioned. In less mystical terms, the process from beginning to end has strong degrees of integrity.

The situation with larger project groups is more difficult. More people means more time constraints on communication, just because the sums are bigger. There is an immediate tendency for the existence of more rules and a concomitant tendency for thinking inside the box. You can't actually order people to be creative and brilliant. You can only make the environment where it's more likely and hope for the best. Xerox PARC and Bell Labs are two good examples of that.

The real question is how to be inspired for the small team, and additionally, how not to stop inspiration for the larger team. This is a question of personal development. Creative thinking requires knowledge outside of the usual and ordinary, and the freedom and imagination to roam.

Why It Matters

What's the prize? What's the point? At the micro level, it's an idea (which might not be a Wow idea) with a brilliant execution. At the macro level, it's a Wow idea (getting away from analogues, getting away from clones--something entirely new) brilliantly executed.

I realize now that I should have also asked my responders, if they were sympathetic to the idea of programming as art, to nominate some examples. I'll do that myself. Maybe you'd like to nominate some more? I think of the early computer game Elite, made by a team of two, which extended the whole idea of games both graphically and in game play. There are the first spreadsheets VisiCalc and Lotus 1-2-3 for the elegance of the first concept even if you didn't want to use one. Even though I don't use it anymore, the C language is artistic for the elegance of its basic building blocks, which can be assembled to do almost anything.

Anyway, go make some art. Why not?!

References

John Littler is chief gopher for Mstation.org.

Art and Computer Programming/Discussion

ONLamp.com


  • James Gosling on Java

Java is a horrible language, but people are better than institutions :-)

    Slashdot

    Page 2 and scripting languages (Score:5, Interesting)
    by MarkEst1973 (769601) on Thursday June 30, @09:59PM (#12956728) The entire second page of the article talks about scripting languages, specifically Javascript (in browsers) and Groovy.

    1. Kudos to the Groovy [codehaus.org] authors. They've even garnered James Gosling's attention. If you write Java code and consider yourself even a little bit of a forward thinker, look up Groovy. It's a very important JSR (JSR-241 specifically).

    2. He talks about Javascript solely from the point of view of the browser. Yes, I agree that Javascript is predominantly implemented in a browser, but its reach can be felt everywhere. Javascript == ActionScript (Flash scripting language). Javascript == CFScript (ColdFusion scripting language). Javascript object notation == Python object notation.

    But what about Javascript and Rhino's [mozilla.org] inclusion in Java 6 [sun.com]? I've been using Rhino as a server side language for a while now because Struts is way too verbose for my taste. I just want a thin glue layer between the web interface and my java components. I'm sick and tired of endless xml configuration (that means you, too, EJB!). A Rhino script on the server (with embedded Request, Response, Application, and Session objects) is the perfect glue that does not need xml configuration. (See also Groovy's Groovlets for a thin glue layer).

    3. Javascript has been called Lisp in C's clothing. Javascript (via Rhino) will be included in Java 6. I also read that Java 6 will allow access to the parse trees created by the javac compiler (same link as Java 6 above).

    Java is now Lisp? Paul Graham writes about 9 features [paulgraham.com] that made Lisp unique when it debuted in the 50s. Access to the parse trees is one of the most advanced features of Lisp. He argues that when a language has all 9 features (and Java today is at about #5), you've not created a new language but a dialect of Lisp.

    I am a Very Big Fan of dynamic languages that can flex like a pretzel to fit my problem domain. Is Java evolving to be that pretzel?

    [Jun 28, 2017] PBS Pro Tutorial by Krishna Arutwar

    www.nakedcapitalism.com
    What is PBS Pro?

    Portable Batch System (PBS) is software used in cluster computing to schedule jobs on multiple nodes. PBS was started as a contract project by NASA. PBS is available in three different versions, as below:

    1) Torque: Terascale Open-source Resource and QUEue Manager (Torque) is developed from OpenPBS. It is developed and maintained by Adaptive Computing Enterprises. It is used as a distributed resource manager and can perform well when integrated with the Maui cluster scheduler to improve performance.

    2) PBS Professional (PBS Pro): the commercial version of PBS, offered by Altair Engineering.

    3) OpenPBS: the open source version released in 1998, developed by NASA. It is not actively developed.

    In this article we are going to concentrate on a tutorial for PBS Pro; it is similar to some extent to Torque.

    PBS contains three basic units: the server, MoM (the execution host), and the scheduler.

    1. Server: It is the heart of PBS, with an executable named "pbs_server". It uses the IP network to communicate with the MoMs. The PBS server creates batch jobs and modifies jobs requested from different MoMs. It keeps track of all resources available and assigned in the PBS complex across the different MoMs. It also monitors the PBS license for jobs; if your license expires it will throw an error.
    2. Scheduler: The PBS scheduler uses various algorithms to decide when a job should get executed, and on which node or vnode, using the details of resources available from the server. Its executable is "pbs_sched".
    3. MoM: MoM is the mother of all execution jobs, with the executable "pbs_mom". When a MoM gets a job from the server it actually executes that job on the host. Each node must have a MoM running in order to participate in execution.

    Installation and Setting up of environment (cluster with multiple nodes)

    Extract the compressed PBS Pro software and go to the path of the extracted folder; it contains an "INSTALL" file. Make that file executable; you may use a command like "chmod +x ./INSTALL". As shown in the image below, run this executable. It will ask for the "execution directory", where you want to store the executables (such as qsub, pbsnodes, qdel etc.) used for different PBS operations, and the "home directory", which contains different configuration files. Keep both as default for simplicity. There are three kinds of installation available, as shown in the figure:

    1) Server node: The PBS server, scheduler, MoM and commands are installed on this node. The PBS server will keep track of all execution MoMs present in the cluster and will schedule jobs on these execution nodes. As MoM and commands are also installed on the server node, it can be used to submit and execute jobs.

    2) Execution node: This type installs MoM and commands. These nodes are added as available nodes for execution in a cluster. They are also allowed to submit jobs to the server, with specific permission granted by the server, as we are going to see below. They are not involved in scheduling. This kind of installation asks for the PBS server which is used to submit jobs, get the status of jobs, etc.

    3) Client node: These are nodes which are only allowed to submit a PBS job to the server, with specific permission granted by the server, and to see the status of jobs. They are not involved in execution or scheduling.

    Creating vnode in PBS Pro:

    We can create multiple vnodes in a single node, each containing some part of the resources of the node. We can execute jobs on these vnodes with the specified allocated resources. We can create vnodes using the qmgr command, which is the command-line interface to the PBS server. We can use the command given below to create vnodes using qmgr.

    Qmgr:
    create node Vnode1,Vnode2 resources_available.ncpus=8, resources_available.mem=10gb, 
    resources_available.ngpus=1, sharing=default_excl 
    The command above will create two vnodes named Vnode1 and Vnode2, each with 8 CPU cores, 10 GB of memory and 1 GPU, with the sharing mode default_excl, which means the vnode can execute exclusively only one job at a time, independent of the number of resources free. The sharing mode can instead be default_shared, which means any number of jobs can run on that vnode until all resources are busy. All the attributes which can be used with vnode creation are described in the PBS Pro reference guide.

    You can also create a file in the /var/spool/PBS/mom_priv/config.d/ folder, with any name you want (I prefer hostname-vnode), following the sample given below. PBS will read all files in this folder, even temporary files ending with (~), and replace the configuration for the same vnode, so delete unnecessary files to get a proper configuration of the vnodes.

    e.g.

    $configversion 2
    hostname: resources_available.ncpus=0
    hostname: resources_available.mem=0
    hostname: resources_available.ngpus=0
    hostname[0]: resources_available.ncpus=8
    hostname[0]: resources_available.mem=16gb
    hostname[0]: resources_available.ngpus=1
    hostname[0]: sharing=default_excl
    hostname[1]: resources_available.ncpus=8
    hostname[1]: resources_available.mem=16gb
    hostname[1]: resources_available.ngpus=1
    hostname[1]: sharing=default_excl
    hostname[2]: resources_available.ncpus=8
    hostname[2]: resources_available.mem=16gb
    hostname[2]: resources_available.ngpus=1
    hostname[2]: sharing=default_excl
    hostname[3]: resources_available.ncpus=8
    hostname[3]: resources_available.mem=16gb
    hostname[3]: resources_available.ngpus=1
    hostname[3]: sharing=default_excl
    Here in this example we assigned 0 to the resources available of the default node configuration, because by default PBS will detect and allocate all available resources to the default node, with the sharing attribute set to default_shared.

    This causes a problem, as all jobs will by default get scheduled on that default vnode, because its sharing type is default_shared. If you want to schedule jobs on your customized vnodes, you should set the resources available on the default vnode to 0. This configuration is applied every time you restart the PBS server.

    PBS get status:

    get status of Jobs:

    qstat will give details about jobs, their states, etc.

    useful options:

    To print detail about all jobs which are running or in hold state: qstat -a

    To print detail about subjobs in JobArray which are running or in hold state: qstat -ta

    get status of PBS nodes and vnodes:

    The "pbsnodes -a" command will provide a list of all nodes present in the PBS complex, with their resources available, assigned, status, etc.

    To get details of all nodes and vnodes you created, use the "pbsnodes -av" command.

    You can also specify node or vnode name to get detail information of that specific node or vnode.

    e.g.

    pbsnodes wolverine (here wolverine is the hostname of a node in the PBS complex, mapped to an IP address in the /etc/hosts file)

    Job submission (qsub):

    Jobs are submitted to the PBS server. The server maintains a queue of jobs; by default all jobs are submitted to the default queue, named "workq". You may create multiple queues by using the "qmgr" command, which is the administrator interface mainly used to create, delete and modify queues and vnodes. The PBS server decides which job is to be scheduled on which node or vnode, based on the scheduling policy and the privileges set by the user. To schedule jobs the server continuously pings all MoMs in the PBS complex to get the details of resources available and assigned. PBS assigns a unique job identifier to each and every job, called the JobID. For job submission PBS uses the "qsub" command, with the syntax shown below:

    qsub script

    Here script may be a shell (sh, csh, tcsh, ksh, bash) script. PBS by default uses /bin/sh. You may refer to the simple script given below:

    #!/bin/sh
    echo "This is PBS job"

    When PBS completes execution of a job, it stores errors in a file named JobName.e{JobID}, e.g. Job1.e1492, and output in a file named JobName.o{JobID}, e.g. Job1.o1492.

    By default it stores these files in the current working directory (which can be seen with the pwd command). You can change this location by giving a path with the -o option.

    You may specify the job name with the -N option while submitting the job:

    qsub -N firstJob ./test.sh

    If you don't specify a job name, it stores the files by replacing JobName with the script name, e.g. qsub ./test.sh will store the results in files test.sh.e1493 and test.sh.o1493 in the current working directory.

    OR

    qsub -N firstJob -o /home/user1/ ./test.sh will store the results in files firstJob.e1493 and firstJob.o1493 in the /home/user1/ directory.

    If a submitted job terminates abnormally (errors in the job are not abnormal; those errors get stored in the JobName.e{JobID} file), its error and output files are stored in the /var/spool/PBS/undelivered/ folder.

    Useful Options:

    Select resources:

    qsub -l select="chunks":ncpus=3:ngpus=1:mem=2gb script

    e.g.

    qsub -l select=2:ncpus=3:ngpus=1:mem=2gb script

    This job selects 2 chunks with 3 CPUs, 1 GPU and 2 GB of memory each, which means it will select 6 CPUs, 2 GPUs and 4 GB of RAM in total.

    qsub -l nodes=megamind:ncpus=3 /home/titan/PBS/input/in.sh

    This job will select one node, specified by its hostname.

    To select multiple nodes you may use the command given below:

    qsub -l nodes=megamind+titan:ncpus=3 /home/titan/PBS/input/in.sh
    Submit multiple jobs with same script (JobArray):

    qsub -J 1-20 script

    Submit dependent jobs:

    In some cases you may require a job which should run after the successful or unsuccessful completion of some specified jobs; for that PBS provides options such as

    qsub -W depend=afterok:316.megamind /home/titan/PBS/input/in.sh
    
    

    The specified job will start only after successful completion of the job with job ID "316.megamind". Like afterok, PBS has other options such as beforeok, beforenotok and afternotok. You may find all these details in the man page of qsub.

    Submit Job with priority :

    There are two ways in which we can set the priority of jobs which are going to execute.

    1) Using single queue with different jobs with different priority:

    To change the sequence of jobs queued in an execution queue, open the "$PBS_HOME/sched_priv/sched_config" file; normally $PBS_HOME is in the "/var/spool/PBS/" folder. Open this file and uncomment the line below if present, otherwise add it.

    job_sort_key : "job_priority HIGH"

    After saving this file you will need to restart the pbs_sched daemon on the head node; you may use the command below.

    service pbs restart

    After completing this task you have to submit the job with the -p option to specify the priority of the job within the queue. This value may range from -1024 to 1023, where -1024 is the lowest priority and 1023 is the highest priority in the queue.

    e.g.

    qsub -p 100 ./X.sh
    
    qsub -p 101 ./Y.sh
    
    
    qsub -p 102 ./Z.sh 
    In this case PBS will execute the jobs as explained in the diagram given below.

    [Diagram: multiple jobs with different priorities in one queue]

    2) Using different queues with specified priorities: we are going to discuss this point in the PBS Queue section.

    [Diagram: jobs distributed across three queues with different priorities]

    In this example all jobs in queue 2 will complete first, then queue 3, then queue 1, since the priority of queue 2 > queue 3 > queue 1. Because of this, the job execution flow is as shown below:

    J4 => J5 => J6 => J7 => J8 => J9 => J1 => J2 => J3

    PBS Queue:

    PBS Pro can manage multiple queues as per the user's requirements. By default every job is queued in "workq" for execution. There are two types of queue available: execution and routing queues. Jobs in an execution queue are used by the PBS server for execution. Jobs in a routing queue cannot be executed; they can be redirected to an execution queue or another routing queue by using the qmove command. By default the queue "workq" is an execution queue. The sequence of jobs in a queue may be changed using the priority defined at job submission, as specified above in the job submission section.

    Useful qmgr commands:

    First type qmgr, which is the manager interface of PBS Pro.

    To create queue:

    
    Qmgr:
     create queue test2
    
    

    To set type of queue you created:

    
    Qmgr:
     set queue test2 queue_type=execution
    
    

    OR

    
    Qmgr:
     set queue test2 queue_type=route
    
    

    To enable queue:

    
    Qmgr:
     set queue test2 enabled=True
    
    

    To set priority of queue:

    
    Qmgr:
     set queue test2 priority=50
    
    

    Jobs in a queue with higher priority will get preference. After completion of all jobs in the queue with higher priority, jobs in the lower priority queue are scheduled. There is a high probability of job starvation in queues with lower priority.

    To start queue:

    
    Qmgr:
     set queue test2 started = True
    
    

    To activate all queues (present at a particular node):

    
    Qmgr:
     active queue @default
    
    

    To set a queue for specified users: you need to set the acl_user_enable attribute to true, which tells PBS to only allow users present in the acl_users list to submit jobs.

    
     Qmgr:
     set queue test2 acl_user_enable=True
    
    

    To set users permitted (to submit job in a queue):

    
    Qmgr:
     set queue test2 acl_users="user1@..,user2@..,user3@.."

    (In place of .. you have to specify the hostname of a compute node in the PBS complex. A user name without a hostname will allow users with that name to submit jobs from all nodes permitted to submit jobs in the PBS complex.)

    To delete queues we created:

    
    Qmgr:
     delete queue test2
    
    

    To see details of all queue status:

    qstat -Q
    
    
    

    You may specify specific queue name: qstat -Q test2

    To see full details of all queue: qstat -Q -f

    You may specify specific queue name: qstat -Q -f test2

    [May 08, 2017] Betteridge's law of headlines

    Apr 27, 2017 | en.wikipedia.org
    Betteridge's law of headlines is one name for an adage that states: "Any headline that ends in a question mark can be answered by the word no." It is named after Ian Betteridge, a British technology journalist, [1] [2] although the principle is much older. As with similar "laws" (e.g., Murphy's law), it is intended as a humorous adage rather than always being literally true. [3] [4]

    The maxim has been cited by other names since as early as 1991, when a published compilation of Murphy's Law variants called it " Davis's law ", [5] a name that also crops up online, without any explanation of who Davis was. [6] [7] It has also been called just the " journalistic principle ", [8] and in 2007 was referred to in commentary as "an old truism among journalists". [9]

    Ian Betteridge's name became associated with the concept after he discussed it in a February 2009 article, which examined a previous TechCrunch article that carried the headline "Did Last.fm Just Hand Over User Listening Data To the RIAA ?": [10]

    This story is a great demonstration of my maxim that any headline which ends in a question mark can be answered by the word "no." The reason why journalists use that style of headline is that they know the story is probably bullshit, and don't actually have the sources and facts to back it up, but still want to run it. [1]

    A similar observation was made by British newspaper editor Andrew Marr in his 2004 book My Trade , among Marr's suggestions for how a reader should interpret newspaper articles:

    If the headline asks a question, try answering 'no'. Is This the True Face of Britain's Young? (Sensible reader: No.) Have We Found the Cure for AIDS? (No; or you wouldn't have put the question mark in.) Does This Map Provide the Key for Peace? (Probably not.) A headline with a question mark at the end means, in the vast majority of cases, that the story is tendentious or over-sold. It is often a scare story, or an attempt to elevate some run-of-the-mill piece of reporting into a national controversy and, preferably, a national panic. To a busy journalist hunting for real information a question mark means 'don't bother reading this bit'. [11]

    Outside journalism

    In the field of particle physics , the concept is known as Hinchliffe's Rule , [12] [13] after physicist Ian Hinchliffe , [14] who stated that if a research paper's title is in the form of a yes–no question, the answer to that question will be "no". [14] The adage was humorously led into a Liar's paradox by a pseudonymous 1988 paper which bore the title "Is Hinchliffe's Rule True?" [13] [14]

    However, at least one article found that the "law" does not apply in research literature. [15]


    [Nov 08, 2015] 2013 Keynote: Dan Quinlan: C++ Use in High Performance Computing Within DOE: Past and Future

    At 31 min there is an interesting slide that gives some information about the scale of systems at DOE. The current system has 18,700 nodes. The new system will have 50K to 500K nodes, 32 cores per node (power consumption is ~15 MW, equal to the power consumption of a small city). The cost is around $200M.
    Jun 09, 2013 | YouTube

    watch-v=zZGYfM1iM7c

    [Nov 08, 2015] The Anti-Java Professor and the Jobless Programmers

    Nick Geoghegan

    James Maguire's article raises some interesting questions as to why teaching Java to first year CS / IT students is a bad idea. The article mentions both Ada and Pascal – neither of which really "took off" outside of the States, with the former being used mainly by contractors of the US Dept. of Defense.

    This is my own, personal, extension to the article – which I agree with – and why first year students should be taught C in first year. I'm biased though, I learned C as my first language and extensively use C or C++ in projects.

    Java is a very high-level language that has interesting features which make things easier for programmers. The two main things I like about Java are its libraries (although libraries exist for C / C++) and memory management.

    Libraries

    Libraries are fantastic. They offer an API and abstract a metric fuck tonne of work that a programmer doesn't care about. I don't care how the library works inside, just that I have a way of putting in input and getting expected output (see my post on abstraction). I've extensively used libraries, even this week, for audio codec decoding. Libraries mean not reinventing the wheel and reusing code (something students are discouraged from doing, as it's plagiarism, yet in the real world you are rewarded). Again, starting with C means that you appreciate the libraries more.
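
    A minimal C sketch of the idea (qsort() is just one illustrative library call, not something from the original post): the caller supplies only what the documented interface asks for – the array, the element size and a comparison function – and never sees how libc sorts internally.

        #include <stdio.h>
        #include <stdlib.h>

        /* comparator required by the qsort() interface */
        static int cmp_int(const void *a, const void *b)
        {
            int x = *(const int *)a;
            int y = *(const int *)b;
            return (x > y) - (x < y);
        }

        int main(void)
        {
            int data[] = { 42, 7, 19, 3, 23 };
            size_t n = sizeof data / sizeof data[0];

            qsort(data, n, sizeof data[0], cmp_int);   /* the library does the work */

            for (size_t i = 0; i < n; i++)
                printf("%d ", data[i]);
            printf("\n");
            return 0;
        }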

    Memory Management

    Managing your program's memory manually is a pain in the hole. We all know this after spending countless hours finding memory leaks in our programs. Java's inbuilt memory management is great – it saves me from having to do it. However, if I had learned Java first, I would assume (for a short amount of time) that all languages managed memory for you, or that all languages were shite compared to Java because they don't manage memory for you. Going from a "lesser" language like C to Java makes you appreciate the memory manager.
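
    As a rough, illustrative C sketch of what "managing memory manually" means in practice (the duplicate() helper is a made-up example, not from the post): the caller owns the block returned by malloc() and must remember the matching free(); forgetting that single call is the classic leak that a garbage-collected language such as Java takes off your hands.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* returns a heap-allocated copy of s; the caller owns it */
        static char *duplicate(const char *s)
        {
            char *copy = malloc(strlen(s) + 1);
            if (copy != NULL)
                strcpy(copy, s);
            return copy;
        }

        int main(void)
        {
            char *msg = duplicate("hello");
            if (msg != NULL) {
                printf("%s\n", msg);
                free(msg);            /* omit this line and the memory leaks */
            }
            return 0;
        }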

    What's so great about C?

    In the context of a first language to teach students, C is perfect.

    Java is a complex language that will spoil a first-year student. However, as noted, CS / IT courses need to keep student retention rates high. As an example, my first-year class was about 60 people; final year was 8. There are ways to keep students, possibly with other, easier languages in the second semester of first year – so that students don't hate the subject when choosing next year's subjects after exams.

    Conversely, I could say that you should teach Java in first year and move on to more difficult languages like C or assembler (which should be taught side by side, in my mind) later down the line – keeping retention high in the initial years, and drilling down with each successive semester to more systems-level programming.

    There's a time and place for Java, which I believe is third year or final year. This will keep Java fresh in the students mind while they are going job hunting after leaving the bosom of academia. This will give them a good head start, as most companies are Java houses in Ireland.

    [Nov 08, 2015] Abstraction

    nickgeoghegan.net

    A few things can confuse programming students, or new people to programming. One of these is abstraction.

    Wikipedia says:

    In computer science, abstraction is the process by which data and programs are defined with a representation similar to its meaning (semantics), while hiding away the implementation details. Abstraction tries to reduce and factor out details so that the programmer can focus on a few concepts at a time. A system can have several abstraction layers whereby different meanings and amounts of detail are exposed to the programmer. For example, low-level abstraction layers expose details of the hardware where the program is run, while high-level layers deal with the business logic of the program.

    That might be a bit too wordy for some people, and not at all clear. Here's my analogy of abstraction.

    Abstraction is like a car

    A car has a few features that make it unique.

    If someone can drive a manual transmission car, they can drive any manual transmission car. Automatic drivers, sadly, cannot drive a manual transmission car without "relearning" it. That is an aside; we'll assume that all cars are manual transmission cars – as is the case for most cars in Ireland.

    Since I can drive my car, which is a Mitsubishi Pajero, that means that I can drive your car – a Honda Civic, Toyota Yaris, Volkswagen Passat.

    All I need to know, in order to drive a car – any car – is how to use the brakes, accelerator, steering wheel, clutch and transmission. Since I already know this in my car, I can abstract away your car and its controls.

    I do not need to know the inner workings of your car in order to drive it, just the controls. I don't need to know how exactly the brakes work in your car, only that they work. I don't need to know that your car has a turbocharger, only that when I push the accelerator, the car moves. I also don't need to know the exact revs at which I should gear up or down (although knowing that would be better for the engine!)

    Virtually all controls are the same. Standardization means that the clutch, brake and accelerator are all in the same place, regardless of the car. This means that I do not need to relearn how a car works. To me, a car is just a car, and is interchangeable with any other car.

    Abstraction means not caring

    As a programmer, or someone using a third-party API (for example), abstraction means not caring how the inner workings of some function operate – the linked-list data structure, the variable names inside the function, the sorting algorithm used, etc. – just that I have a standard (preferably unchanging) interface to do whatever I need to do.

    Abstraction can be thought of as a black box: you put input in and get output out. That shouldn't always be the case, but it often is. We need abstraction so that, as programmers, we can concentrate on other aspects of the program – this is the cornerstone of large-scale, multi-developer software projects.
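
    To put the black-box idea into code, here is a minimal C sketch (the Stack type and its functions are hypothetical names chosen for illustration): main() uses only the declared interface, and the array-based implementation underneath could be swapped for a linked list without main() changing at all.

        #include <stdio.h>
        #include <stdlib.h>

        /* --- the interface the caller sees (the "black box") --- */
        typedef struct stack Stack;                 /* opaque to users */
        Stack *stack_new(void);
        void   stack_push(Stack *s, int value);
        int    stack_pop(Stack *s);
        void   stack_free(Stack *s);

        /* --- one possible hidden implementation --- */
        struct stack { int items[64]; int top; };

        Stack *stack_new(void)             { return calloc(1, sizeof(Stack)); }
        void   stack_push(Stack *s, int v) { s->items[s->top++] = v; }
        int    stack_pop(Stack *s)         { return s->items[--s->top]; }
        void   stack_free(Stack *s)        { free(s); }

        /* --- a user who doesn't care how it works --- */
        int main(void)
        {
            Stack *s = stack_new();
            stack_push(s, 1);
            stack_push(s, 2);
            printf("%d\n", stack_pop(s));   /* prints 2, however it is stored */
            stack_free(s);
            return 0;
        }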

    [Oct 18, 2013] Tom Clancy, Best-Selling Master of Military Thrillers, Dies at 66

    Fully applicable to programming...
    NYTimes.com

    “I tell them you learn to write the same way you learn to play golf,” he once said. “You do it, and keep doing it until you get it right. A lot of people think something mystical happens to you, that maybe the muse kisses you on the ear. But writing isn’t divinely inspired — it’s hard work.”

    [Jan 14, 2013] Learn Basic Programming So You Aren't At the Mercy of Programmers

    I like the rephrased line: "You need to learn to program. Because if you don't, you're always going to be at the mercy of some asshole programmer."
    January 13, 2013 | developers.slashdot.org

    An anonymous reader writes "Derek Sivers, creator of online indie music store CD Baby, has a post about why he thinks basic programming is a useful skill for everybody.

    He quotes a line from a musician he took guitar lessons from as a kid: "You need to learn to sing. Because if you don't, you're always going to be at the mercy of some a****** singer." Sivers recommends translating that to other areas of life.

    He says, 'The most common thing I hear from aspiring entrepreneurs is, "I have this idea for an app or site. But I'm not technical, so I need to find someone who can make it for me." I point them to my advice about how to hire a programmer, but as most of the good ones are already booked solid, it's a pretty helpless position to be in. If you heard someone say, "I have this idea for a song. But I'm not musical, so I need to find someone who will write, perform, and record it for me." — you'd probably advise them to just take some time to sit down with a guitar or piano and learn enough to turn their ideas into reality.

    And so comes my advice: Yes, learn some programming basics. Just some HTML, CSS, and JavaScript should be enough to start. ... You don't need to become an expert, just know the basics, so you're not helpless.'"

    BrokenHalo (565198):

    Well, no reason why it should. Just about anyone should be able to write some form of pseudocode, however incomplete, for whatever task they want to accomplish with or without the assistance of a computer.

    That said, when I first started working with computers back in the '70s, programmers mostly didn't have access to the actual computer hardware, so if the chunk of code was large, we simply wrote out our FORTRAN, Assembly or COBOL programs on a cellulose-fibre "paper" substance called a Coding Sheet with a graphite-filled wooden stick known as a pencil. These were then transcribed on to mag tape by a platoon of very pretty but otherwise non-human keypunch ops who were universally capable of typing at a rate of 6.02 x 10^23 words per minute. (If the program or patch happened to be small or trivial, we used one of those metal card-punch contraptions with an 029 keypad, thus allowing the office door to slam with nothing to restrain it.)

    This leisurely approach led to a very different and IMHO more creative attitude to coding, and it was probably no coincidence that many programmers back then were pipe-smokers.

    Anonymous Coward:

    "I have an idea for an app" is exactly what riles up programmers. Ideas are a dime a dozen. If you, the "nontechnical person", do your job right, then you'll find a competent and cooperative programmer.

    If, on the other hand, and this is much too common, you expect the programmer to do your work (requirements engineering, reading your mind for what you want, correcting your conceptual mistakes, graphics design, business planning to get the scale right, etc.) on top of the actual programming in return for a one-time payment while you expect to sell "your" startup for millions, then you'll get asshole programmers - and you deserve them.

    Anonymous Coward:

    A programmer's job is to implement a specification. People who "have an idea for an app" only want to pay a programmer (I'm being generous here, often they don't even want to pay a programmer, see the article), but expect to get a business analyst, graphics artist, software architect, marketer, programmer and system administrator rolled into one, so that they don't have to give away too much of the money they expect to earn with their creative idea.

    Someone who thinks you can learn a little programming to avoid being at the mercy of programmers isn't looking for a partner, isn't willing to share with a partner and doesn't deserve the input from a partner.

    aaarrrgggh (9205):

    I'm an engineer. I want to remodel my home. I come up with ideas, document them, and give them to an architect to build into a complete design that conveys scope to the general contractor and trades. Me being educated about the process helps me to manage scope and hopefully get the product I want in the most efficient manner possible, while also taking advantage of the expertise of others. A prima donna architect that only wants to create something they find to be beautiful might not solve my problems.

    Programming is no different. If I convey something in pseudo code or user interface, I would expect a skilled programmer to be able to provide a critical evaluation of my idea and guide me into the best direction. I might not be able to break down the functions for security the right way, but I would at least be highlighting the need for security as an example.

    Moraelin (679338)

    I'm not sure that learning some superficial idea of a language is going to help. And I'll give you a couple of reasons why:

    1. Dunning-Kruger. The people with the least knowledge on the domain are those who overrate their knowledge the most.

      Now I really wish to believe that some management or marketing guy is willing to sink 10,000 hours into becoming good at programming, and have a good idea of exactly what he's asking for. I really do. But we both know that even if he does a decent amount overtime, that's about 3 years of doing NOTHING BUT programming, i.e., he'd have to not do his real job at all any more. Or more like 15 years if he does some two-hours a day of hobby-style programming in the afternoon. And he probably won't even do that.

      What is actually going to happen, if it happens at all, is that he'll plod through it up to the first peak of his own sense of how much he knows, i.e., the Dunning-Kruger sweet spot: the point where he thinks he knows it all, except, you know, maybe some minor esoteric stuff that doesn't matter anyway. But it is actually the point where he doesn't know jack.

    2. And from my experience, those are the worst problem bosses. The kind which is an illustration of Russell's, "The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt." The kind who is cock-sure that he probably is better at programming than you anyway, he just, you know, doesn't have the time to actually do it. (Read: to actually get experience.)

      That's the kind who's just moved from a paranoid suspicion that your making a fuss about the 32414'th change request is taking advantage of him, to the kind who "knows" that you're just an unreasonable asshole. After all, he has no problem making changes to the 1000 line JSP or PHP page he did for practice (half of which is just HTML mixed in with the business code.) If he wants to add a button to that one, hey, his editor even lets him drag and drop it in 5 seconds. Why, he can even change it from displaying a fictive list of widgets to a fictive list of employees. So your wanting to redo a part of the design to accommodate his request to change the whole functionality of a 1,000,000 line program (which is actually quite small) must be some kind of attempt to shaft him.

      It's the kind who thinks that if he did a simple example program in Visual Fox Pro, a single-user "database", placed the database files on a file server, and then accessed them from another workstation, that makes him qualified to decide he doesn't need MySQL or Oracle for his enterprise system, he can just demand to have it done in Visual Fox Pro. In fact, he "knows" it can be done that way. No, really, this is an actual example that happened to me. Verbatim. I'm not making it up.

    3. Well, it doesn't work in other domains either, so I don't see why programming would be any different. People can have a superficial understanding of how a map editor for Skyrim works, and it won't prevent them from coming up with some unreasonable idea, like that someone should make him every outfit from [insert Anime series] – and not just do it for free, but credit him, because, hey, he had the idea. No, seriously, just about every other idiot thinks that the reason someone hasn't done a total conversion from Skyrim to Star Wars is that they didn't have the precious idea.

      Basically it's Dunning-Kruger all over again.

    I think more than understanding programming, what people need is understanding that ideas are a dime a dozen. What matters is the execution.

    What they need to understand is that, no, you're probably not the next Edison or Ford or Steve Jobs or whatever. There are probably a thousand other guys who had the same idea, some may have even tried it, and there might actually be a reason why you never heard of it being actually finished. And even those are remembered for actually having the management skills to make those ideas work, not just for having an idea.

    Ford didn't make it just for having the idea of a cheap car, nor for being a mechanic himself. Why it worked was that he managed to sort things out: hiring and holding onto good subordinates, reducing the turnover that previously had some departments literally hiring 300 people a year to fill 100 positions, and so on. It's the execution that mattered, not just having an idea.

    Once they get disabused of the notion that all that matters is that their brain farted out a vague idea, I think it will go a long way towards less frustration both for them and their employees.

    RabidReindeer (2625839):

    short version: "A little knowledge is a dangerous thing."

    People who think they know what the job entails start out saying "It's Easy! All You Have To Do Is..." and the whole thing swiftly descends into Hell.

    Ideas are just a multiplier of execution

    2009-07-28

    It's so funny when I hear people being so protective of ideas. (People who want me to sign an NDA to tell me the simplest idea.)

    To me, ideas are worth nothing unless executed. They are just a multiplier. Execution is worth millions.

    Explanation:

    AWFUL IDEA = -1
    WEAK IDEA = 1
    SO-SO IDEA = 5
    GOOD IDEA = 10
    GREAT IDEA = 15
    BRILLIANT IDEA = 20
    -------- ---------
    NO EXECUTION = $1
    WEAK EXECUTION = $1000
    SO-SO EXECUTION = $10,000
    GOOD EXECUTION = $100,000
    GREAT EXECUTION = $1,000,000
    BRILLIANT EXECUTION = $10,000,000

    To make a business, you need to multiply the two.

    The most brilliant idea, with no execution, is worth $20.

    The most brilliant idea takes great execution to be worth $20,000,000.

    That's why I don't want to hear people's ideas.

    I'm not interested until I see their execution.

    (This post originally appeared on my O'Reilly blog on August 16, 2005. I'm re-posting it here since their site is getting filled with ads.)

    [Oct 14, 2011] Dennis Ritchie, 70, Dies, Programming Trailblazer - by Steve Lohr

    October 13, 2011 | NYTimes.com
    Dennis M. Ritchie, who helped shape the modern digital era by creating software tools that power things as diverse as search engines like Google and smartphones, was found dead on Wednesday at his home in Berkeley Heights, N.J. He was 70.

    Mr. Ritchie, who lived alone, was in frail health in recent years after treatment for prostate cancer and heart disease, said his brother Bill.

    In the late 1960s and early ’70s, working at Bell Labs, Mr. Ritchie made a pair of lasting contributions to computer science. He was the principal designer of the C programming language and co-developer of the Unix operating system, working closely with Ken Thompson, his longtime Bell Labs collaborator.

    The C programming language, a shorthand of words, numbers and punctuation, is still widely used today, and successors like C++ and Java build on the ideas, rules and grammar that Mr. Ritchie designed. The Unix operating system has similarly had a rich and enduring impact. Its free, open-source variant, Linux, powers many of the world’s data centers, like those at Google and Amazon, and its technology serves as the foundation of operating systems, like Apple’s iOS, in consumer computing devices.

    “The tools that Dennis built — and their direct descendants — run pretty much everything today,” said Brian Kernighan, a computer scientist at Princeton University who worked with Mr. Ritchie at Bell Labs.

    Those tools were more than inventive bundles of computer code. The C language and Unix reflected a point of view, a different philosophy of computing than what had come before. In the late ’60s and early ’70s, minicomputers were moving into companies and universities — smaller and at a fraction of the price of hulking mainframes.

    Minicomputers represented a step in the democratization of computing, and Unix and C were designed to open up computing to more people and collaborative working styles. Mr. Ritchie, Mr. Thompson and their Bell Labs colleagues were making not merely software but, as Mr. Ritchie once put it, “a system around which fellowship can form.”

    C was designed for systems programmers who wanted to get the fastest performance from operating systems, compilers and other programs. “C is not a big language — it’s clean, simple, elegant,” Mr. Kernighan said. “It lets you get close to the machine, without getting tied up in the machine.”

    Such higher-level languages had earlier been intended mainly to let people without a lot of programming skill write programs that could run on mainframes. Fortran was for scientists and engineers, while Cobol was for business managers.

    C, like Unix, was designed mainly to let the growing ranks of professional programmers work more productively. And it steadily gained popularity. With Mr. Kernighan, Mr. Ritchie wrote a classic text, “The C Programming Language,” also known as “K. & R.” after the authors’ initials, whose two editions, in 1978 and 1988, have sold millions of copies and been translated into 25 languages.

    Dennis MacAlistair Ritchie was born on Sept. 9, 1941, in Bronxville, N.Y. His father, Alistair, was an engineer at Bell Labs, and his mother, Jean McGee Ritchie, was a homemaker. When he was a child, the family moved to Summit, N.J., where Mr. Ritchie grew up and attended high school. He then went to Harvard, where he majored in applied mathematics.

    While a graduate student at Harvard, Mr. Ritchie worked at the computer center at the Massachusetts Institute of Technology, and became more interested in computing than math. He was recruited by the Sandia National Laboratories, which conducted weapons research and testing. “But it was nearly 1968,” Mr. Ritchie recalled in an interview in 2001, “and somehow making A-bombs for the government didn’t seem in tune with the times.”

    Mr. Ritchie joined Bell Labs in 1967, and soon began his fruitful collaboration with Mr. Thompson on both Unix and the C programming language. The pair represented the two different strands of the nascent discipline of computer science. Mr. Ritchie came to computing from math, while Mr. Thompson came from electrical engineering.

    “We were very complementary,” said Mr. Thompson, who is now an engineer at Google. “Sometimes personalities clash, and sometimes they meld. It was just good with Dennis.”

    Besides his brother Bill, of Alexandria, Va., Mr. Ritchie is survived by another brother, John, of Newton, Mass., and a sister, Lynn Ritchie of Hexham, England.

    Mr. Ritchie traveled widely and read voraciously, but friends and family members say his main passion was his work. He remained at Bell Labs, working on various research projects, until he retired in 2007.

    Colleagues who worked with Mr. Ritchie were struck by his code — meticulous, clean and concise. His writing, according to Mr. Kernighan, was similar. “There was a remarkable precision to his writing,” Mr. Kernighan said, “no extra words, elegant and spare, much like his code.”

    [Apr 24, 2011] A Short Guide To Lifestyle Design (LSD) The 7 Core Skills Of The Cyberpunk Survivalist

    February 28, 2011 | Sublime Oblivion

    Disagree that a person can become a competent computer programmer in under a year. Well, maybe the exceptional genius… For most people, it takes a minimum of 3 years to master the skills required to be a decent coder.

    It’s not just about learning Java (which I do agree is a good computer language to start with); there are certain prerequisites. Fortunately, not a lot of math is required, high-school algebra is sufficient, plus a grasp of “functions” (because programmers usually have to write a lot of functions). On the other hand, boolean logic is absolutely required, and that’s more than just knowing the difference between logical AND and logical OR (or XOR). Also, if one gets into databases (my specialty, actually), then one also needs to master the mathematics of set theory.

    And a real programmer also needs to be able to write (and understand) a recursion algorithm. For example, every time I have interviewed a potential coder, I have asked them, “Are you familiar with the ‘Towers of Hanoi’ algorithm?” If they don’t know what that is, they still have a chance to impress me if they can describe a B-tree navigation algorithm. That’s first- or second-year computer science stuff. If they can’t recurse a directory tree (in whatever programming language they choose), then they aren’t a real programmer. God knows there are plenty of fakes in the business. Sorry for the rant. Having to deal with “pretend programmers” (rookies who think they’re programmers because they know how to update their Facebook page) is one of my pet peeves… Grrrrrrrr!
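
    Since the comment leans on it, here is a minimal C version of the classic "Towers of Hanoi" recursion (a standard textbook formulation, shown only as a sketch of the kind of answer the interviewer is asking for):

        #include <stdio.h>

        /* move n disks from peg 'from' to peg 'to', using 'via' as the spare peg */
        static void hanoi(int n, char from, char to, char via)
        {
            if (n == 0)
                return;
            hanoi(n - 1, from, via, to);    /* move n-1 disks out of the way  */
            printf("move disk %d: %c -> %c\n", n, from, to);
            hanoi(n - 1, via, to, from);    /* put them back on top of disk n */
        }

        int main(void)
        {
            hanoi(3, 'A', 'C', 'B');        /* 2^3 - 1 = 7 moves */
            return 0;
        }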

    [Nov 30, 2010] Professor Sir Maurice Wilkes

    Telegraph

    The computer, known as EDSAC (Electronic Delay Storage Automatic Calculator), was a huge contraption that took up a room in what was the University’s old Mathematical Library. It contained 3,000 vacuum valves arranged on 12 racks and used tubes filled with mercury for memory. Despite its impressive size, it could only carry out 650 operations per second.

    Before the development of EDSAC, digital computers, such as the American Moore School’s ENIAC (Electronic Numerical Integrator and Computer), were only capable of dealing with one particular type of problem. To solve a different kind of problem, thousands of switches had to be reset and miles of cable re-routed. Reprogramming took days.

    In 1946, a paper by the Hungarian-born scientist John von Neumann and others suggested that the future lay in developing computers with memory which could not only store data, but also sets of instructions, or programs. Users would then be able to change programs, written in binary number format, without rewiring the whole machine. The challenge was taken up by three groups of scientists — one at the University of Manchester, an American team led by JW Mauchly and JP Eckert, and the Cambridge team led by Wilkes.

    Eckert and Mauchly had been working on developing a stored-program computer for two years before Wilkes became involved at Cambridge. While the University of Manchester machine, known as “Baby”, was the first to store data and program, it was Wilkes who became the first to build an operational machine based on von Neumann’s ideas (which form the basis for modern computers) to deliver a service.

    Wilkes chose to adopt mercury delay lines suggested by Eckert to serve as an internal memory store. In such a delay line, an electrical signal is converted into a sound wave travelling through a long tube of mercury at a speed of 1,450 metres per second. The signal can be transmitted back and forth along the tube, several of which were combined to form the machine’s memory. This memory meant the computer could store both data and program. The main program was loaded by paper tape, but once loaded this was executed from memory, making the machine the first of its kind.

    After two years of development, on May 6 1949 Wilkes’s EDSAC “rather suddenly” burst into life, computing a table of square numbers. From early 1950 it offered a regular computing service to the members of Cambridge University, the first of its kind in the world, with Wilkes and his group developing programs and compiling a program library. The world’s first scientific paper to be published using computer calculations — a paper on genetics by RA Fisher – was completed with the help of EDSAC.

    Wilkes was probably the first computer programmer to spot the coming significance of program testing: “In 1949 as soon as we started programming”, he recalled in his memoirs, “we found to our surprise that it wasn’t as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realised that a large part of my life from then on was going to be spent in finding mistakes in my own programs.”

    In 1951 Wilkes (with David J Wheeler and Stanley Gill) published the world’s first textbook on computer programming, Preparation of Programs for an Electronic Digital Computer. Two years later he established the world’s first course in Computer Science at Cambridge.

    EDSAC remained in operation until 1958, but the future lay not in delay lines but in magnetic storage and, when it came to the end of its life, the machine was cannibalised and scrapped, its old program tapes used as streamers at Cambridge children’s parties.

    Wilkes, though, remained at the forefront of computing technology and made several other breakthroughs. In 1958 he built EDSAC’s replacement, EDSAC II, which not only incorporated magnetic storage but was the first computer in the world to have a micro-programmed control unit. In 1965 he published the first paper on cache memories, followed later by a book on time-sharing.

    In 1974 he developed the “Cambridge Ring”, a digital communication system linking computers together. The network was originally designed to avoid the expense of having a printer at every computer, but the technology was soon developed commercially by others.

    When EDSAC was built, Wilkes sought to allay public fears by describing the stored-program computer as “a calculating machine operated by a moron who cannot think, but can be trusted to do what he is told”. In 1964, however, predicting the world in “1984”, he drew a more Orwellian picture: “How would you feel,” he wrote, “if you had exceeded the speed limit on a deserted road in the dead of night, and a few days later received a demand for a fine that had been automatically printed by a computer coupled to a radar system and vehicle identification device? It might not be a demand at all, but simply a statement that your bank account had been debited automatically.”

    Maurice Vincent Wilkes was born at Dudley, Worcestershire, on June 26 1913. His father was a switchboard operator for the Earl of Dudley whose extensive estate in south Staffordshire had its own private telephone network; he encouraged his son’s interest in electronics and at King Edward VI’s Grammar School, Stourbridge, Maurice built his own radio transmitter and was allowed to operate it from home.

    Encouraged by his headmaster, a Cambridge-educated mathematician, Wilkes went up to St John’s College, Cambridge to read Mathematics, but he studied electronics in his spare time in the University Library and attended lectures at the Engineering Department. After obtaining an amateur radio licence he constructed radio equipment in his vacations with which to make contact, via the ionosphere, with radio “hams” around the world.

    Wilkes took a First in Mathematics and stayed on at Cambridge to do a PhD on the propagation of radio waves in the ionosphere. This led to an interest in tidal motion in the atmosphere and to the publication of his first book Oscillations of the Earth’s Atmosphere (1949). In 1937 he was appointed university demonstrator at the new Mathematical Laboratory (later renamed the Computer Laboratory) housed in part of the old Anatomy School.

    When war broke out, Wilkes left Cambridge to work with R Watson-Watt and JD Cockcroft on the development of radar. Later he became involved in designing aircraft, missile and U-boat radio tracking systems.

    In 1945 Wilkes was released from war work to take up the directorship of the Cambridge Mathematical Laboratory and given the task of constructing a computer service for the University.

    The following year he attended a course on “Theory and Techniques for Design of Electronic Digital Computers” at the Moore School of Electrical Engineering at the University of Pennsylvania, the home of the ENIAC. The visit inspired Wilkes to try to build a stored-program computer and on his return to Cambridge, he immediately began work on EDSAC.

    Wilkes was appointed Professor of Computing Technology in 1965, a post he held until his retirement in 1980. Under his guidance the Cambridge University Computer Laboratory became one of the country’s leading research centres. He also played an important role as an adviser to British computer companies and was instrumental in founding the British Computer Society, serving as its first president from 1957 to 1960.

    After his retirement, Wilkes spent six years as a consultant to Digital Equipment in Massachusetts, and was Adjunct Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology from 1981 to 1985. Later he returned to Cambridge as a consultant researcher with a research laboratory funded variously by Olivetti, Oracle and AT&T, continuing to work until well into his 90s.

    Maurice Wilkes was elected a fellow of the Royal Society in 1956, a Foreign Honorary Member of the American Academy of Arts and Sciences in 1974, a Fellow of the Royal Academy of Engineering in 1976 and a Foreign Associate of the American National Academy of Engineering in 1977. He was knighted in 2000.

    Among other prizes he received the ACM Turing Award in 1967; the Faraday Medal of the Institute of Electrical Engineers in 1981; and the Harry Goode Memorial Award of the American Federation for Information Processing Societies in 1968.

    In 1985 he provided a lively account of his work in Memoirs of a Computer Pioneer.

    Maurice Wilkes married, in 1947, Nina Twyman. They had a son and two daughters.

    Computer Laboratory Maurice V. Wilkes

    [Apr 25, 2008] Interview with Donald Knuth, by Donald E. Knuth and Andrew Binstock

    Apr 25, 2008

    Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.

    Andrew Binstock: You are one of the fathers of the open-source revolution, even if you aren’t widely heralded as such. You previously have stated that you released TeX as open source because of the problem of proprietary implementations at the time, and to invite corrections to the code—both of which are key drivers for open-source projects today. Have you been surprised by the success of open source since that time?

    Donald Knuth: The success of open source code is perhaps the only thing in the computer field that hasn’t surprised me during the past several decades. But it still hasn’t reached its full potential; I believe that open-source programs will begin to be completely dominant as the economy moves more and more from products towards services, and as more and more volunteers arise to improve the code.

    For example, open-source code can produce thousands of binaries, tuned perfectly to the configurations of individual users, whereas commercial software usually will exist in only a few versions. A generic binary executable file must include things like inefficient "sync" instructions that are totally inappropriate for many installations; such wastage goes away when the source code is highly configurable. This should be a huge win for open source.

    Yet I think that a few programs, such as Adobe Photoshop, will always be superior to competitors like the Gimp—for some reason, I really don’t know why! I’m quite willing to pay good money for really good software, if I believe that it has been produced by the best programmers.

    Remember, though, that my opinion on economic questions is highly suspect, since I’m just an educator and scientist. I understand almost nothing about the marketplace.

    Andrew: A story states that you once entered a programming contest at Stanford (I believe) and you submitted the winning entry, which worked correctly after a single compilation. Is this story true? In that vein, today’s developers frequently build programs writing small code increments followed by immediate compilation and the creation and running of unit tests. What are your thoughts on this approach to software development?

    Donald: The story you heard is typical of legends that are based on only a small kernel of truth. Here’s what actually happened: John McCarthy decided in 1971 to have a Memorial Day Programming Race. All of the contestants except me worked at his AI Lab up in the hills above Stanford, using the WAITS time-sharing system; I was down on the main campus, where the only computer available to me was a mainframe for which I had to punch cards and submit them for processing in batch mode. I used Wirth’s ALGOL W system (the predecessor of Pascal). My program didn’t work the first time, but fortunately I could use Ed Satterthwaite’s excellent offline debugging system for ALGOL W, so I needed only two runs. Meanwhile, the folks using WAITS couldn’t get enough machine cycles because their machine was so overloaded. (I think that the second-place finisher, using that "modern" approach, came in about an hour after I had submitted the winning entry with old-fangled methods.) It wasn’t a fair contest.

    As to your real question, the idea of immediate compilation and "unit tests" appeals to me only rarely, when I’m feeling my way in a totally unknown environment and need feedback about what works and what doesn’t. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up."

    Andrew: One of the emerging problems for developers, especially client-side developers, is changing their thinking to write programs in terms of threads. This concern, driven by the advent of inexpensive multicore PCs, surely will require that many algorithms be recast for multithreading, or at least to be thread-safe. So far, much of the work you’ve published for Volume 4 of The Art of Computer Programming (TAOCP) doesn’t seem to touch on this dimension. Do you expect to enter into problems of concurrency and parallel programming in upcoming work, especially since it would seem to be a natural fit with the combinatorial topics you’re currently working on?

    Donald: The field of combinatorial algorithms is so vast that I’ll be lucky to pack its sequential aspects into three or four physical volumes, and I don’t think the sequential methods are ever going to be unimportant. Conversely, the half-life of parallel techniques is very short, because hardware changes rapidly and each new machine needs a somewhat different approach. So I decided long ago to stick to what I know best. Other people understand parallel machines much better than I do; programmers should listen to them, not me, for guidance on how to deal with simultaneity.

    Andrew: Vendors of multicore processors have expressed frustration at the difficulty of moving developers to this model. As a former professor, what thoughts do you have on this transition and how to make it happen? Is it a question of proper tools, such as better native support for concurrency in languages, or of execution frameworks? Or are there other solutions?

    Donald: I don’t want to duck your question entirely. I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they’re trying to pass the blame for the future demise of Moore’s Law to the software writers by giving us machines that work faster only on a few key benchmarks! I won’t be surprised at all if the whole multithreading idea turns out to be a flop, worse than the "Itanium" approach that was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write.

    Let me put it this way: During the past 50 years, I’ve written well over a thousand programs, many of which have substantial size. I can’t think of even five of those programs that would have been enhanced noticeably by parallelism or multithreading. Surely, for example, multiple processors are no help to TeX.[1]

    How many programmers do you know who are enthusiastic about these promised machines of the future? I hear almost nothing but grief from software people, although the hardware folks in our department assure me that I’m wrong.

    I know that important applications for parallelism exist—rendering graphics, breaking codes, scanning images, simulating physical and biological processes, etc. But all these applications require dedicated code and special-purpose techniques, which will need to be changed substantially every few years.

    Even if I knew enough about such methods to write about them in TAOCP, my time would be largely wasted, because soon there would be little reason for anybody to read those parts. (Similarly, when I prepare the third edition of Volume 3 I plan to rip out much of the material about how to sort on magnetic tapes. That stuff was once one of the hottest topics in the whole software field, but now it largely wastes paper when the book is printed.)

    The machine I use today has dual processors. I get to use them both only when I’m running two independent jobs at the same time; that’s nice, but it happens only a few minutes every week. If I had four processors, or eight, or more, I still wouldn’t be any better off, considering the kind of work I do—even though I’m using my computer almost every day during most of the day. So why should I be so happy about the future that hardware vendors promise? They think a magic bullet will come along to make multicores speed up my kind of work; I think it’s a pipe dream. (No—that’s the wrong metaphor! "Pipelines" actually work for me, but threads don’t. Maybe the word I want is "bubble.")

    From the opposite point of view, I do grant that web browsing probably will get better with multicores. I’ve been talking about my technical work, however, not recreation. I also admit that I haven’t got many bright ideas about what I wish hardware designers would provide instead of multicores, now that they’ve begun to hit a wall with respect to sequential computation. (But my MMIX design contains several ideas that would substantially improve the current performance of the kinds of programs that concern me most—at the cost of incompatibility with legacy x86 programs.)

    Andrew: One of the few projects of yours that hasn’t been embraced by a widespread community is literate programming. What are your thoughts about why literate programming didn’t catch on? And is there anything you’d have done differently in retrospect regarding literate programming?

    Donald: Literate programming is a very personal thing. I think it’s terrific, but that might well be because I’m a very strange person. It has tens of thousands of fans, but not millions.

    In my experience, software created with literate programming has turned out to be significantly better than software developed in more traditional ways. Yet ordinary software is usually okay—I’d give it a grade of C (or maybe C++), but not F; hence, the traditional methods stay with us. Since they’re understood by a vast community of programmers, most people have no big incentive to change, just as I’m not motivated to learn Esperanto even though it might be preferable to English and German and French and Russian (if everybody switched).

    Jon Bentley probably hit the nail on the head when he once was asked why literate programming hasn’t taken the whole world by storm. He observed that a small percentage of the world’s population is good at programming, and a small percentage is good at writing; apparently I am asking everybody to be in both subsets.

    Yet to me, literate programming is certainly the most important thing that came out of the TeX project. Not only has it enabled me to write and maintain programs faster and more reliably than ever before, and been one of my greatest sources of joy since the 1980s—it has actually been indispensable at times. Some of my major programs, such as the MMIX meta-simulator, could not have been written with any other methodology that I’ve ever heard of. The complexity was simply too daunting for my limited brain to handle; without literate programming, the whole enterprise would have flopped miserably.

    If people do discover nice ways to use the newfangled multithreaded machines, I would expect the discovery to come from people who routinely use literate programming. Literate programming is what you need to rise above the ordinary level of achievement. But I don’t believe in forcing ideas on anybody. If literate programming isn’t your style, please forget it and do what you like. If nobody likes it but me, let it die.

    On a positive note, I’ve been pleased to discover that the conventions of CWEB are already standard equipment within preinstalled software such as Makefiles, when I get off-the-shelf Linux these days.

    Andrew: In Fascicle 1 of Volume 1, you reintroduced the MMIX computer, which is the 64-bit upgrade to the venerable MIX machine comp-sci students have come to know over many years. You previously described MMIX in great detail in MMIXware. I’ve read portions of both books, but can’t tell whether the Fascicle updates or changes anything that appeared in MMIXware, or whether it’s a pure synopsis. Could you clarify?

    Donald: Volume 1 Fascicle 1 is a programmer’s introduction, which includes instructive exercises and such things. The MMIXware book is a detailed reference manual, somewhat terse and dry, plus a bunch of literate programs that describe prototype software for people to build upon. Both books define the same computer (once the errata to MMIXware are incorporated from my website). For most readers of TAOCP, the first fascicle contains everything about MMIX that they’ll ever need or want to know.

    I should point out, however, that MMIX isn’t a single machine; it’s an architecture with almost unlimited varieties of implementations, depending on different choices of functional units, different pipeline configurations, different approaches to multiple-instruction-issue, different ways to do branch prediction, different cache sizes, different strategies for cache replacement, different bus speeds, etc. Some instructions and/or registers can be emulated with software on "cheaper" versions of the hardware. And so on. It’s a test bed, all simulatable with my meta-simulator, even though advanced versions would be impossible to build effectively until another five years go by (and then we could ask for even further advances just by advancing the meta-simulator specs another notch).

    Suppose you want to know if five separate multiplier units and/or three-way instruction issuing would speed up a given MMIX program. Or maybe the instruction and/or data cache could be made larger or smaller or more associative. Just fire up the meta-simulator and see what happens.

    Andrew: As I suspect you don’t use unit testing with MMIXAL, could you step me through how you go about making sure that your code works correctly under a wide variety of conditions and inputs? If you have a specific work routine around verification, could you describe it?

    Donald: Most examples of machine language code in TAOCP appear in Volumes 1-3; by the time we get to Volume 4, such low-level detail is largely unnecessary and we can work safely at a higher level of abstraction. Thus, I’ve needed to write only a dozen or so MMIX programs while preparing the opening parts of Volume 4, and they’re all pretty much toy programs—nothing substantial. For little things like that, I just use informal verification methods, based on the theory that I’ve written up for the book, together with the MMIXAL assembler and MMIX simulator that are readily available on the Net (and described in full detail in the MMIXware book).

    That simulator includes debugging features like the ones I found so useful in Ed Satterthwaite’s system for ALGOL W, mentioned earlier. I always feel quite confident after checking a program with those tools.

    Andrew: Despite its formulation many years ago, TeX is still thriving, primarily as the foundation for LaTeX. While TeX has been effectively frozen at your request, are there features that you would want to change or add to it, if you had the time and bandwidth? If so, what are the major items you add/change?

    Donald: I believe changes to TeX would cause much more harm than good. Other people who want other features are creating their own systems, and I’ve always encouraged further development—except that nobody should give their program the same name as mine. I want to take permanent responsibility for TeX and Metafont, and for all the nitty-gritty things that affect existing documents that rely on my work, such as the precise dimensions of characters in the Computer Modern fonts.

    Andrew: One of the little-discussed aspects of software development is how to do design work on software in a completely new domain. You were faced with this issue when you undertook TeX: No prior art was available to you as source code, and it was a domain in which you weren’t an expert. How did you approach the design, and how long did it take before you were comfortable entering into the coding portion?

    Donald: That’s another good question! I’ve discussed the answer in great detail in Chapter 10 of my book Literate Programming, together with Chapters 1 and 2 of my book Digital Typography. I think that anybody who is really interested in this topic will enjoy reading those chapters. (See also Digital Typography Chapters 24 and 25 for the complete first and second drafts of my initial design of TeX in 1977.)

    Andrew: The books on TeX and the program itself show a clear concern for limiting memory usage—an important problem for systems of that era. Today, the concern for memory usage in programs has more to do with cache sizes. As someone who has designed a processor in software, the issues of cache-aware and cache-oblivious algorithms surely must have crossed your radar screen. Is the role of processor caches on algorithm design something that you expect to cover, even if indirectly, in your upcoming work?

    Donald: I mentioned earlier that MMIX provides a test bed for many varieties of cache. And it’s a software-implemented machine, so we can perform experiments that will be repeatable even a hundred years from now. Certainly the next editions of Volumes 1-3 will discuss the behavior of various basic algorithms with respect to different cache parameters.

    In Volume 4 so far, I count about a dozen references to cache memory and cache-friendly approaches (not to mention a "memo cache," which is a different but related idea in software).

    Andrew: What set of tools do you use today for writing TAOCP? Do you use TeX? LaTeX? CWEB? Word processor? And what do you use for the coding?

    Donald: My general working style is to write everything first with pencil and paper, sitting beside a big wastebasket. Then I use Emacs to enter the text into my machine, using the conventions of TeX. I use tex, dvips, and gv to see the results, which appear on my screen almost instantaneously these days. I check my math with Mathematica.

    I program every algorithm that’s discussed (so that I can thoroughly understand it) using CWEB, which works splendidly with the GDB debugger. I make the illustrations with MetaPost (or, in rare cases, on a Mac with Adobe Photoshop or Illustrator). I have some homemade tools, like my own spell-checker for TeX and CWEB within Emacs. I designed my own bitmap font for use with Emacs, because I hate the way the ASCII apostrophe and the left open quote have morphed into independent symbols that no longer match each other visually. I have special Emacs modes to help me classify all the tens of thousands of papers and notes in my files, and special Emacs keyboard shortcuts that make bookwriting a little bit like playing an organ. I prefer rxvt to xterm for terminal input. Since last December, I’ve been using a file backup system called backupfs, which meets my need beautifully to archive the daily state of every file.

    According to the current directories on my machine, I’ve written 68 different CWEB programs so far this year. There were about 100 in 2007, 90 in 2006, 100 in 2005, 90 in 2004, etc. Furthermore, CWEB has an extremely convenient "change file" mechanism, with which I can rapidly create multiple versions and variations on a theme; so far in 2008 I’ve made 73 variations on those 68 themes. (Some of the variations are quite short, only a few bytes; others are 5KB or more. Some of the CWEB programs are quite substantial, like the 55-page BDD package that I completed in January.) Thus, you can see how important literate programming is in my life.

    I currently use Ubuntu Linux, on a standalone laptop—it has no Internet connection. I occasionally carry flash memory drives between this machine and the Macs that I use for network surfing and graphics; but I trust my family jewels only to Linux. Incidentally, with Linux I much prefer the keyboard focus that I can get with classic FVWM to the GNOME and KDE environments that other people seem to like better. To each his own.

    Andrew: You state in the preface of Fascicle 0 of Volume 4 of TAOCP that Volume 4 surely will comprise three volumes and possibly more. It’s clear from the text that you’re really enjoying writing on this topic. Given that, what is your confidence in the note posted on the TAOCP website that Volume 5 will see light of day by 2015?

    Donald: If you check the Wayback Machine for previous incarnations of that web page, you will see that the number 2015 has not been constant.

    You’re certainly correct that I’m having a ball writing up this material, because I keep running into fascinating facts that simply can’t be left out—even though more than half of my notes don’t make the final cut.

    Precise time estimates are impossible, because I can’t tell until getting deep into each section how much of the stuff in my files is going to be really fundamental and how much of it is going to be irrelevant to my book or too advanced. A lot of the recent literature is academic one-upmanship of limited interest to me; authors these days often introduce arcane methods that outperform the simpler techniques only when the problem size exceeds the number of protons in the universe. Such algorithms could never be important in a real computer application. I read hundreds of such papers to see if they might contain nuggets for programmers, but most of them wind up getting short shrift.

    From a scheduling standpoint, all I know at present is that I must someday digest a huge amount of material that I’ve been collecting and filing for 45 years. I gain important time by working in batch mode: I don’t read a paper in depth until I can deal with dozens of others on the same topic during the same week. When I finally am ready to read what has been collected about a topic, I might find out that I can zoom ahead because most of it is eminently forgettable for my purposes. On the other hand, I might discover that it’s fundamental and deserves weeks of study; then I’d have to edit my website and push that number 2015 closer to infinity.

    Andrew: In late 2006, you were diagnosed with prostate cancer. How is your health today?

    Donald: Naturally, the cancer will be a serious concern. I have superb doctors. At the moment I feel as healthy as ever, modulo being 70 years old. Words flow freely as I write TAOCP and as I write the literate programs that precede drafts of TAOCP. I wake up in the morning with ideas that please me, and some of those ideas actually please me also later in the day when I’ve entered them into my computer.

    On the other hand, I willingly put myself in God’s hands with respect to how much more I’ll be able to do before cancer or heart disease or senility or whatever strikes. If I should unexpectedly die tomorrow, I’ll have no reason to complain, because my life has been incredibly blessed. Conversely, as long as I’m able to write about computer science, I intend to do my best to organize and expound upon the tens of thousands of technical papers that I’ve collected and made notes on since 1962.

    Andrew: On your website, you mention that the Peoples Archive recently made a series of videos in which you reflect on your past life. In segment 93, "Advice to Young People," you advise that people shouldn’t do something simply because it’s trendy. As we know all too well, software development is as subject to fads as any other discipline. Can you give some examples that are currently in vogue, which developers shouldn’t adopt simply because they’re currently popular or because that’s the way they’re currently done? Would you care to identify important examples of this outside of software development?

    Donald: Hmm. That question is almost contradictory, because I’m basically advising young people to listen to themselves rather than to others, and I’m one of the others. Almost every biography of every person whom you would like to emulate will say that he or she did many things against the "conventional wisdom" of the day.

    Still, I hate to duck your questions even though I also hate to offend other people’s sensibilities—given that software methodology has always been akin to religion. With the caveat that there’s no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, let me just say that almost everything I’ve ever heard associated with the term "extreme programming" sounds like exactly the wrong way to go...with one exception. The exception is the idea of working in teams and reading each other’s code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me.

    I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace.

    Here’s a question that you may well have meant to ask: Why is the new book called Volume 4 Fascicle 0, instead of Volume 4 Fascicle 1? The answer is that computer programmers will understand that I wasn’t ready to begin writing Volume 4 of TAOCP at its true beginning point, because we know that the initialization of a program can’t be written until the program itself takes shape. So I started in 2005 with Volume 4 Fascicle 2, after which came Fascicles 3 and 4. (Think of Star Wars, which began with Episode 4.)

    Finally I was psyched up to write the early parts, but I soon realized that the introductory sections needed to include much more stuff than would fit into a single fascicle. Therefore, remembering Dijkstra’s dictum that counting should begin at 0, I decided to launch Volume 4 with Fascicle 0. Look for Volume 4 Fascicle 1 later this year.

    References

    [1] My colleague Kunle Olukotun points out that, if the usage of TeX became a major bottleneck so that people had a dozen processors and really needed to speed up their typesetting terrifically, a super-parallel version of TeX could be developed that uses "speculation" to typeset a dozen chapters at once: Each chapter could be typeset under the assumption that the previous chapters don’t do anything strange to mess up the default logic. If that assumption fails, we can fall back on the normal method of doing a chapter at a time; but in the majority of cases, when only normal typesetting was being invoked, the processing would indeed go 12 times faster. Users who cared about speed could adapt their behavior and use TeX in a disciplined way.
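    A minimal sketch of the speculate-then-fall-back pattern Olukotun describes, written in C with POSIX threads. The chunk structure, the per-chunk work, and the "strangeness" test are invented placeholders, not anything taken from TeX:

        /* Speculative pass: process every chunk in parallel under an optimistic
         * assumption; if any chunk reports that the assumption was violated,
         * redo the work with the normal one-at-a-time pass. */
        #include <pthread.h>
        #include <stdbool.h>
        #include <stdio.h>

        #define NCHUNKS 12

        typedef struct {
            int  id;
            bool assumption_ok;   /* false if this chunk did something "strange" */
        } chunk_t;

        static chunk_t chunks[NCHUNKS];

        static void process_chunk(chunk_t *c) {
            /* placeholder for the real per-chapter work */
            c->assumption_ok = (c->id != 7);   /* pretend chunk 7 breaks the assumption */
        }

        static void *worker(void *arg) {
            process_chunk((chunk_t *)arg);
            return NULL;
        }

        int main(void) {
            pthread_t tid[NCHUNKS];

            for (int i = 0; i < NCHUNKS; i++) {          /* speculative, parallel pass */
                chunks[i].id = i;
                pthread_create(&tid[i], NULL, worker, &chunks[i]);
            }
            for (int i = 0; i < NCHUNKS; i++)
                pthread_join(tid[i], NULL);

            bool all_ok = true;                          /* validate the speculation */
            for (int i = 0; i < NCHUNKS; i++)
                if (!chunks[i].assumption_ok) all_ok = false;

            if (!all_ok) {
                printf("speculation failed; falling back to the sequential pass\n");
                for (int i = 0; i < NCHUNKS; i++)
                    process_chunk(&chunks[i]);           /* order-dependent fallback */
            } else {
                printf("speculative pass succeeded for all %d chunks\n", NCHUNKS);
            }
            return 0;
        }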

    Andrew Binstock is the principal analyst at Pacific Data Works. He is a columnist for SD Times and senior contributing editor for InfoWorld magazine. His blog can be found at: http://binstock.blogspot.com.

    [Feb 21, 2008] Project details for Bare Bones interpreter

    freshmeat.net

    BareBones is an interpreter for the "Bare Bones" programming language defined in Chapter 11 of "Computer Science: An Overview", 9th Edition, by J. Glenn Brookshear.

    Release focus: Minor feature enhancements

    Changes:
    Identifiers were made case-insensitive. A summary of the language was added to the README file.

    Author:
    Eric Smith [contact developer]

    Bill Joy Quotes

    [Jan 2008] Computer Science Education Where Are the Software Engineers of Tomorrow

    STSC CrossTalk

    Computer Science Education: Where Are the Software Engineers of Tomorrow?

    Dr. Robert B.K. Dewar, AdaCore Inc.
    Dr. Edmond Schonberg, AdaCore Inc.

    It is our view that Computer Science (CS) education is neglecting basic skills, in particular in the areas of programming and formal methods. We consider that the general adoption of Java as a first programming language is in part responsible for this decline. We examine briefly the set of programming skills that should be part of every software professional’s repertoire.


    It is all about programming! Over the last few years we have noticed worrisome trends in CS education. The following represents a summary of those trends:

    1. Mathematics requirements in CS programs are shrinking.
    2. The development of programming skills in several languages is giving way to cookbook approaches using large libraries and special-purpose packages.
    3. The resulting set of skills is insufficient for today’s software industry (in particular for safety and security purposes) and, unfortunately, matches well what the outsourcing industry can offer. We are training easily replaceable professionals.

    These trends are visible in the latest curriculum recommendations from the Association for Computing Machinery (ACM). Curriculum 2005 does not mention mathematical prerequisites at all, and it mentions only one course in the theory of programming languages [1].

    We have seen these developments from both sides: As faculty members at New York University for decades, we have regretted the introduction of Java as a first language of instruction for most computer science majors. We have seen how it has weakened the formation of our students, as reflected in their performance in systems and architecture courses. As founders of a company that specializes in Ada programming tools for mission-critical systems, we find it harder to recruit qualified applicants who have the right foundational skills. We want to advocate a more rigorous formation, in which formal methods are introduced early on, and programming languages play a central role in CS education.

    Formal Methods and Software Construction

    Formal techniques for proving the correctness of programs were an extremely active subject of research 20 years ago. However, the methods (and the hardware) of the time prevented these techniques from becoming widespread, and as a result they are more or less ignored by most CS programs. This is unfortunate because the techniques have evolved to the point that they can be used in large-scale systems and can contribute substantially to the reliability of these systems. A case in point is the use of SPARK in the re-engineering of the ground-based air traffic control system in the United Kingdom (see a description of iFACTS – Interim Future Area Control Tools Support, at <www.nats.co.uk/article/90>). SPARK is a subset of Ada augmented with assertions that allow the designer to prove important properties of a program: termination, absence of run-time exceptions, finite memory usage, etc. [2]. It is obvious that this kind of design and analysis methodology (dubbed Correctness by Construction) will add substantially to the reliability of a system whose design has involved SPARK from the beginning. However, PRAXIS, the company that developed SPARK and which is designing iFACTS, finds it hard to recruit people with the required mathematical competence (and this is true even in the United Kingdom, where formal methods are more widely taught and used than in the United States).

    Another formal approach to which CS students need exposure is model checking and linear temporal logic for the design of concurrent systems. For a modern discussion of the topic, which is central to mission-critical software, see [3].

    Another area of computer science which we find neglected is the study of floating-point computations. At New York University, a course in numerical methods and floating-point computing used to be required, but this requirement was dropped many years ago, and now very few students take this course. The topic is vital to all scientific and engineering software and is semantically delicate. One would imagine that it would be a required part of all courses in scientific computing, but these often take MatLab to be the universal programming tool and ignore the topic altogether.
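    A tiny illustration (mine, not the authors') of why floating point is semantically delicate: the nearest doubles to 0.1 and 0.2 do not sum to the nearest double to 0.3, so a naive equality test fails.

        #include <stdio.h>

        int main(void) {
            double a = 0.1 + 0.2;
            printf("%.17g\n", a);        /* prints 0.30000000000000004 */
            printf("%d\n", a == 0.3);    /* prints 0 -- the comparison fails */
            return 0;
        }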

    The Pitfalls of Java as a First Programming Language

    Because of its popularity in the context of Web applications and the ease with which beginners can produce graphical programs, Java has become the most widely used language in introductory programming courses. We consider this to be a misguided attempt to make programming more fun, perhaps in reaction to the drop in CS enrollments that followed the dot-com bust. What we observed at New York University is that the Java programming courses did not prepare our students for the first course in systems, much less for more advanced ones. Students found it hard to write programs that did not have a graphic interface, had no feeling for the relationship between the source program and what the hardware would actually do, and (most damaging) did not understand the semantics of pointers at all, which made the use of C in systems programming very challenging.

    Let us propose the following principle: The irresistible beauty of programming consists in the reduction of complex formal processes to a very small set of primitive operations. Java, instead of exposing this beauty, encourages the programmer to approach problem-solving like a plumber in a hardware store: by rummaging through a multitude of drawers (i.e. packages) we will end up finding some gadget (i.e. class) that does roughly what we want. How it does it is not interesting! The result is a student who knows how to put a simple program together, but does not know how to program. A further pitfall of the early use of Java libraries and frameworks is that it is impossible for the student to develop a sense of the run-time cost of what is written because it is extremely hard to know what any method call will eventually execute. A lucid analysis of the problem is presented in [4].

    We are seeing some backlash to this approach. For example, Bjarne Stroustrup reports from Texas A & M University that the industry is showing increasing unhappiness with the results of this approach. Specifically, he notes the following:

    I have had a lot of complaints about that [the use of Java as a first programming language] from industry, specifically from AT&T, IBM, Intel, Bloomberg, NI, Microsoft, Lockheed-Martin, and more. [5]

    He noted in a private discussion on this topic, reporting the following:

    It [Texas A&M] did [teach Java as the first language]. Then I started teaching C++ to the electrical engineers and when the EE students started to out-program the CS students, the CS department switched to C++. [5]

    It will be interesting to see how many departments follow this trend. At AdaCore, we are certainly aware of many universities that have adopted Ada as a first language because of similar concerns.

    A Real Programmer Can Write in Any Language (C, Java, Lisp, Ada)

    Software professionals of a certain age will remember the slogan of old-timers from two generations ago when structured programming became the rage: Real programmers can write Fortran in any language. The slogan is a reminder of how thinking habits of programmers are influenced by the first language they learn and how hard it is to shake these habits if you do all your programming in a single language. Conversely, we want to say that a competent programmer is comfortable with a number of different languages and that the programmer must be able to use the mental tools favored by one of them, even when programming in another. For example, the user of an imperative language such as Ada or C++ must be able to write in a functional style, acquired through practice with Lisp and ML (see Note 1), when manipulating recursive structures. This is one indication of the importance of learning in-depth a number of different programming languages. What follows summarizes what we think are the critical contributions that well-established languages make to the mental tool-set of real programmers. For example, a real programmer should be able to program inheritance and dynamic dispatching in C, information hiding in Lisp, tree manipulation libraries in Ada, and garbage collection in anything but Java. The study of a wide variety of languages is, thus, indispensable to the well-rounded programmer.
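    To make the first of those examples concrete, here is a rough sketch (mine, not the authors') of inheritance and dynamic dispatching coded by hand in C with a struct of function pointers; the Shape and Circle types are invented for the illustration.

        #include <stdio.h>

        typedef struct Shape Shape;

        struct Shape {
            double (*area)(const Shape *self);   /* the "virtual" method */
        };

        typedef struct {
            Shape  base;          /* "inheritance": a Circle starts with a Shape */
            double radius;
        } Circle;

        static double circle_area(const Shape *self) {
            const Circle *c = (const Circle *)self;   /* downcast, as a C++ compiler would */
            return 3.14159265358979 * c->radius * c->radius;
        }

        int main(void) {
            Circle c = { { circle_area }, 2.0 };
            Shape *s = (Shape *)&c;                   /* use the object through its base "class" */
            printf("area = %f\n", s->area(s));        /* dynamic dispatch through the pointer */
            return 0;
        }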

    Why C Matters

    C is the low-level language that everyone must know. It can be seen as a portable assembly language, and as such it exposes the underlying machine and forces the student to understand clearly the relationship between software and hardware. Performance analysis is more straightforward, because the cost of every software statement is clear. Finally, compilers (GCC for example) make it easy to examine the generated assembly code, which is an excellent tool for understanding machine language and architecture.
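    For instance, compiling the small function below with "gcc -O2 -S sum.c" and reading the generated sum.s shows exactly what the hardware will execute for the loop; the function itself is just an illustrative example of mine, not something from the article.

        #include <stddef.h>

        /* Sum n longs; the emitted assembly makes the cost of the loop visible. */
        long sum(const long *a, size_t n) {
            long total = 0;
            for (size_t i = 0; i < n; i++)
                total += a[i];
            return total;
        }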

    Why C++ Matters

    C++ brings to C the fundamental concepts of modern software engineering: encapsulation with classes and namespaces, information hiding through protected and private data and operations, programming by extension through virtual methods and derived classes, etc. C++ also pushes storage management as far as it can go without full-blown garbage collection, with constructors and destructors.

    Why Lisp Matters

    Every programmer must be comfortable with functional programming and with the important notion of referential transparency. Even though most programmers find imperative programming more intuitive, they must recognize that in many contexts that a functional, stateless style is clear, natural, easy to understand, and efficient to boot.

    An additional benefit of the practice of Lisp is that the program is written in what amounts to abstract syntax, namely the internal representation that most compilers use between parsing and code generation. Knowing Lisp is thus an excellent preparation for any software work that involves language processing.

    Finally, Lisp (at least in its lean Scheme incarnation) is amenable to a very compact self-definition. Seeing a complete Lisp interpreter written in Lisp is an intellectual revelation that all computer scientists should experience.

    Why Java Matters

    Despite our comments on Java as a first or only language, we think that Java has an important role to play in CS instruction. We will mention only two aspects of the language that must be part of the real programmer’s skill set:

    1. An understanding of concurrent programming (for which threads provide a basic low-level model).
    2. Reflection, namely the understanding that a program can be instrumented to examine its own state and to determine its own behavior in a dynamically changing environment.

    Why Ada Matters

    Ada is the language of software engineering par excellence. Even when it is not the language of instruction in programming courses, it is the language chosen to teach courses in software engineering. This is because the notions of strong typing, encapsulation, information hiding, concurrency, generic programming, inheritance, and so on, are embodied in specific features of the language. From our experience and that of our customers, we can say that a real programmer writes Ada in any language. For example, an Ada programmer accustomed to Ada’s package model, which strongly separates specification from implementation, will tend to write C in a style where well-commented header files act in somewhat the same way as package specs in Ada. The programmer will include bounds checking and consistency checks when passing mutable structures between subprograms to mimic the strong-typing checks that Ada mandates [6]. She will organize concurrent programs into tasks and protected objects, with well-defined synchronization and communication mechanisms.
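    A hedged sketch of the defensive "Ada in C" style described above: a commented header plays the role of a package spec, and the body checks the bounds and invariants that Ada would enforce automatically. The buffer type and buffer_put function are invented for the illustration.

        /* ---- buffer.h: the "package spec" --------------------------------- */
        #include <assert.h>
        #include <stddef.h>

        typedef struct {
            size_t  capacity;   /* invariant: length <= capacity             */
            size_t  length;
            int    *data;       /* invariant: non-NULL whenever capacity > 0 */
        } buffer_t;

        /* Append one element; returns 0 on success, -1 if the buffer is full. */
        int buffer_put(buffer_t *buf, int value);

        /* ---- buffer.c: the "package body" --------------------------------- */
        int buffer_put(buffer_t *buf, int value) {
            assert(buf != NULL);                        /* parameter check      */
            assert(buf->data != NULL || buf->capacity == 0);
            assert(buf->length <= buf->capacity);       /* structural invariant */

            if (buf->length == buf->capacity)
                return -1;                              /* would overflow: refuse */
            buf->data[buf->length++] = value;
            return 0;
        }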

    The concurrency features of Ada are particularly important in our age of multi-core architectures. We find it surprising that these architectures should be presented as a novel challenge to software design when Ada had well-designed mechanisms for writing safe, concurrent software 30 years ago.

    Programming Languages Are Not the Whole Story

    A well-rounded CS curriculum will include an advanced course in programming languages that covers a wide variety of languages, chosen to broaden the understanding of the programming process, rather than to build a résumé in perceived hot languages. We are somewhat dismayed to see the popularity of scripting languages in introductory programming courses. Such languages (Javascript, PHP, Atlas) are indeed popular tools of today for Web applications. Such languages have all the pedagogical defects that we ascribe to Java and provide no opportunity to learn algorithms and performance analysis. Their absence of strong typing leads to a trial-and-error programming style and prevents students from acquiring the discipline of separating design of interfaces from specifications.

    However, teaching the right languages alone is not enough. Students need to be exposed to the tools to construct large-scale reliable programs, as we discussed at the start of this article. Topics of relevance are studying formal specification methods and formal proof methodologies, as well as gaining an understanding of how high-reliability code is certified in the real world. When you step into a plane, you are putting your life in the hands of software which had better be totally reliable. As a computer scientist, you should have some knowledge of how this level of reliability is achieved. In this day and age, the fear of terrorist cyber attacks has given a new urgency to the building of software that is not only bug free, but is also immune from malicious attack. Such high-security software relies even more extensively on formal methodologies, and our students need to be prepared for this new world.

    References
    1. Joint Taskforce for Computing Curricula. “Computing Curricula 2005: The Overview Report.” ACM/AIS/IEEE, 2005 <www.acm.org/education/curric_vols/CC2005-March06Final.pdf>.
    2. Barnes, John. High Integrity Ada: The Spark Approach. Addison-Wesley, 2003.
    3. Ben-Ari, M. Principles of Concurrent and Distributed Programming. 2nd ed. Addison-Wesley, 2006.
    4. Mitchell, Nick, Gary Sevitsky, and Harini Srinivasan. “The Diary of a Datum: An Approach to Analyzing Runtime Complexity in Framework-Based Applications.” Workshop on Library-Centric Software Design, Object-Oriented Programming, Systems, Languages, and Applications, San Diego, CA, 2005.
    5. Stroustrup, Bjarne. Private communication. Aug. 2007.
    6. Holzmann, Gerard J. “The Power of Ten – Rules for Developing Safety Critical Code.” IEEE Computer, June 2006: 93-95.
    Note
    1. Several programming language and system names have evolved from acronyms whose formal spellings are no longer considered applicable to the current names for which they are readily known. ML, Lisp, GCC, PHP, and SPARK fall under this category.

    Who Killed the Software Engineer (Hint It Happened in College)

    One of the article’s main points (one that was misunderstood, Dewar tells me) is that the adoption of Java as a first programming language in college courses has led to this decline. Not exactly. Yes, Dewar believes that Java’s graphic libraries allow students to cobble together software without understanding the underlying source code.

    But the problem with CS programs goes far beyond their focus on Java, he says.

    “A lot of it is, ‘Let’s make this all more fun.’ You know, ‘Math is not fun, let’s reduce math requirements. Algorithms are not fun, let’s get rid of them. Ewww – graphic libraries, they’re fun. Let’s have people mess with libraries. And [forget] all this business about ‘command line’ – we’ll have people use nice visual interfaces where they can point and click and do fancy graphic stuff and have fun."

    Dewar says his email in-box is crammed full of positive responses to his article, from students as well as employers. Many readers have thanked him for speaking up about a situation they believe needs addressing, he says.

    One email was from an IT staffer who is working with a junior programmer. The older worker suggested that the young engineer check the call stack to see about a problem, but unfortunately, “he’d never heard of a call stack.”

    Comment on Professor Dewar's views on today's CS programs

    Mama, Don’t Let Your Babies Grow Up to be Cowboys (or Computer Programmers)

    At fault, in Dewar’s view, are universities that are desperate to make up for lower enrollment in CS programs – even if that means gutting the programs.

    It’s widely acknowledged that enrollments in computer science programs have declined. The chief causes: the dotcom crash made a CS career seem scary, and the never-ending headlines about outsourcing make it seem even scarier. Once seen as a reliable meal ticket, CS is now viewed by some concerned parents with an anxiety usually reserved for Sociology or Philosophy degrees. Why waste your time?

    College administrators are understandably alarmed by smaller student head counts. “Universities tend to be in the raw numbers mode,” Dewar says. “‘Oh my God, the number of computer science majors has dropped by a factor of two, how are we going to reverse that?’”

    They’ve responded, he claims, by dumbing down programs, hoping to make them more accessible and popular. Aspects of curriculum that are too demanding, or perceived as tedious, are downplayed in favor of simplified material that attracts a larger enrollment. This effort is counterproductive, Dewar says.

    “To me, raw numbers are not necessarily the first concern. The first concern is that people get a good education.”

    These students who have been spoon-fed easy material aren’t prepared to compete globally. Dewar, who also co-owns a software company and so deals with clients and programmers internationally, says, “We see French engineers much better trained than American engineers,” coming out of school.

    [Mar 2, 2007] Microsoft rolls out tutorial site for new programmers

    Microsoft has unveiled a new Web site offering lessons to new programmers on building applications using the tools in Visual Studio 2005.

    [Sep 30, 2006] Dreamsongs Essays Downloads Triggers & Practice: How Extremes in Writing Relate to Creativity and Learning [pdf]

    I presented this keynote at XP/Agile Universe 2002 in Chicago, Illinois. The thrust of the talk is that it is possible to teach creative activities through an MFA process and to get better by practicing, but computer science and software engineering education on one hand and software practices on the other do not begin to match up to the discipline the arts demonstrate. Get to work.

    [Sep 30, 2006] Google Code - Summer of Code - Summer of Code

    Welcome to the Summer of Code 2006 site. We are no longer accepting applications from students or mentoring organizations. Students can view previously submitted applications and respond to mentor comments via the student home page. Accepted student projects will be announced on code.google.com/soc/ on May 23, 2006. You can talk to us in the Summer-Discuss-2006 group or via IRC in #summer-discuss on SlashNET.

    If you're feeling nostalgic, you can still access the Summer of Code 2005 site.

    Participating Mentoring Organizations

    AbiSource (ideas)

    Adium (ideas)

    Apache Software Foundation (ideas)

    Ardour (ideas)

    ArgoUML (ideas)

    BBC Research (ideas)

    Beagle (ideas)

    Blender (ideas)

    Boost (ideas)

    Bricolage (ideas)

    ClamAV (ideas)

    Cockos Incorporated (ideas)

    Codehaus (ideas)

    Common Unix Printing System (ideas)

    Creative Commons (ideas)

    Crystal Space (ideas)

    CUWiN Wireless Project (ideas)

    Daisy CMS (ideas)

    Debian (ideas)

    Detached Solutions (ideas)

    Django (Lawrence Journal-World) (ideas)

    Dojo (ideas)

    Drupal (ideas)

    Eclipse (ideas)

    Etherboot Project (ideas)

    FFmpeg (ideas)

    FreeBSD Project (ideas)

    Gaim (ideas)

    Gallery (ideas)

    GCC (ideas)

    Gentoo (ideas)

    GIMP (ideas)

    GNOME (ideas)

    Google (ideas)

    Handhelds.org (ideas)

    Haskell.org (ideas)

    Horde (ideas)

    ICU (ideas)

    Inkscape (ideas)

    Internet Archive (ideas)

    Internet2 (ideas)

    Irssi (ideas)

    Jabber Software Foundation (ideas)

    Joomla! (ideas)

    JXTA (ideas)

    KDE (ideas)

    Lanka Software Foundation (LSF) (ideas)

    LispNYC (ideas)

    LiveJournal (ideas)

    Mars Space Flight Facility (ideas)

    MoinMoin (ideas)

    Monotone (ideas)

    Moodle (ideas)

    MythTV (ideas)

    NetBSD (ideas)

    Nmap Security Scanner (ideas)

    OGRE (ideas)

    OhioLINK (ideas)

    One Laptop Per Child (ideas)

    Open Security Foundation (OSVDB) (ideas)

    Open Source Applications Foundation (ideas)

    Open Source Cluster Application Resources (OSCAR) (ideas)

    Open Source Development Labs (OSDL) (ideas)

    OpenOffice.org (ideas)

    OpenSolaris (ideas)

    openSUSE (ideas)

    Oregon State University Open Source Lab (OSL) (ideas)

    PHP (ideas)

    PlanetMath (ideas)

    Plone Foundation (ideas)

    Portland State University (ideas)

    PostgreSQL Project (ideas)

    Project Looking Glass (ideas)

    Python Software Foundation (ideas)

    ReactOS (ideas)

    Refractions Research (ideas)

    Ruby Central, Inc. (ideas)

    Samba (ideas)

    SCons (ideas)

    Subversion (ideas)

    The Fedora Project (ideas)

    The Free Earth Foundation (ideas)

    The Free Network Project (ideas)

    The Free Software Initiative of Japan (ideas)

    The GNU Project (ideas)

    The LLVM Compiler Infrastructure (ideas)

    The Mono Project (ideas)

    The Mozilla Foundation (ideas)

    The Perl Foundation (ideas)

    The Shmoo Group (ideas)

    The University of Texas at Austin: RTF New Media Initiative (ideas)

    The Wine Project (ideas)

    Ubuntu & Bazaar (ideas)

    University of Michigan Aerospace Engineering & Space Science Departments

    Wikimedia Foundation (ideas)

    WinLibre (ideas)

    wxWidgets (ideas)

    XenSource (ideas)

    Xiph.org (ideas)

    XMMS2 (ideas)

    Xorg (ideas)

    XWiki (ideas)

    Questions?

    Please peruse our Student FAQ, Mentor FAQ

    [May. 12, 2003] What I Hate About Your Programming Language

    the article is pretty weak, but the discussion after it contains some interesting points
    ONLamp.com

    Ideal language: Delphi w/ Clarion influence
    2003-05-16 12:27:29 anonymous [Reply]

    Sadly, Delphi/Kylix (Object Pascal) is often overlooked. Perl, Ruby, etc. are all fine for scripts, but in most cases a compiled program is a better way to go. Delphi lets you program procedurally like C, or with objects like C++, only the union is much more natural. It prevents you from making many stupid mistakes, while allowing you 99.9% of the power C has. It borrows some syntax from perhaps better languages (Oberon, Modula, etc.), but has a much bigger and more useful standard library. (Unofficially, anyway...)

    It has never let me down... FOXPRO (VFP)
    2003-05-15 06:41:48 anonymous [Reply]

    VFP is great. It has its own easy-to-deploy runtime. You can compile to .exe. Its IDE is excellent. It is complete with the front-end user interface, middleware code, and its own multi-user-safe, high-performance desktop database engine. BUT: M$ (aka the Borg) assimilated back in the early 90's what was then a cross-platform development tool. Now M$'s vision of cross-platform for VFP is multiple versions of Windows. Plus, M$ cannot make a lot of end-user money on a product whose runtime is free.
    Bej - Philadelphia.

    [May 12, 2003] What I Hate About Your Programming Language

    ONLamp.com

    These are my preferences, based on the kind of work I've done and continue to do, the order in which I learned the languages, and just plain personal taste. In the spirit of generating new ideas, learning new techniques, and maybe understanding why things are done the way they're done, it's worth considering the different ways to do them.

    The Pragmatic Programmers suggest learning a new language every year. This has already paid off for me. The more different languages I learn, the more I understand about programming in general. It's a lot easier to solve problems if you have a toolbox full of good tools.

    ... ... ...

    Every language is sacred in the eyes of its zealots, but there's bound to be someone out there for whom the language just doesn't feel right. In the open source world, we're fortunate to be able to pick and choose from several high-quality and free and open languages to find what fits our minds the best.

    Professional Programmers

    ...was this article really about programming in general, or a hyping of open source software? Open source programmers (I'm thinking of Python, Ruby, etc.) are really no better than, say for example, C++ programmers or Java programmers.

    Just because they use open source software solutions and technologies does not mean they have any more of a grasp on programming concepts and the tricks of the trade than those using proprietary solutions.

    I consider myself to be more a teacher of programming (I am just better at that), but I don't think that someone who has been programming for years or uses open source solutions is any more qualified a programmer than I am.

    A grain of salt, posted 11 Jun 2002 by tk (Journeyer)

    Though many free software programmers exhibit high quality in their work, I'll hesitate before concluding that a good way to nurture good coders is to throw them into the midst of the free community. It may well be that many people go into free software because they are already competent enough and want to contribute.

    That said, I'm not sure either what's the best way to groom people into truly professional coders.

    <off-topic>
    An excellent (IMO) book which introduces assembly languages to complete beginners is "Peter Norton's Assembly Language Book for the IBM PC", by Peter Norton and John Socha.
    </off-topic>

    Kids These Days..., posted 12 Jun 2002 by goingware (Master)

    I've written some stuff on this topic. Here's a sampler:

    Study Fundamentals Not Tools, APIs or OSes.

    Also see the last two sections, the ones entitled "The Value of Constant Factor Optimization" and "Old School Programming" in Musings on Good C++ Style as well as the conclusion of Pointers, References and Values.

    I think everyone should learn at least two architectures of assembly code (RISC and CISC), no matter what language they're programming in.

    Also read University of California at Davis Professor Norman Matloff's testimony to Congress: Debunking the Myth of a Desperate Software Labor Shortage.

    It happens that I have a very long resume. The reason I make it so long is that I depend on potential clients finding it via the search engines for a large portion of my business. If I just wanted to help someone understand my employability it could be considerably shorter. But in an effort to make my resume show up in a lot of searches for skills, I mention every skill keyword that I can legitimately claim to have experience in somewhere in the resume, sometimes several times. The resume is designed to appeal to buzzword hunters.

    But it annoys me, I shouldn't have to do that. So my resume has an editorial statement in it, aimed squarely at the HR managers you complain about:

    I strive to achieve quality, correctness, performance and maintainability in the products I write.

    I believe a sound understanding and application of software engineering principles is more valuable than knowledge of APIs or toolsets. In particular, this makes one flexible enough to handle any sort of programming task.

    It helps if you don't deal with headhunters or contract brokers. They're much worse than most HR managers for only attempting to place people that match a buzzword search in a database rather than understanding someone's real talent. Read my policy on recruiters and contract agencies.

    It's generally easier to get smaller companies to take real depth seriously than the larger companies. One reason for this is that they are too small to employ HR managers, so the person you're talking to is likely to be another engineer. My first jobs writing retail Macintosh software, Smalltalk, and Java were gotten at small companies where the person I contacted first at the company was an engineer.

    If you're looking for permanent employment, many companies post their openings on their own web pages. I give some tips on locating these job postings via search engines on this page.

    If you're a consultant like me, and you're fed up with the body shops, may I suggest you read my article Market Yourself - Tips for High-Tech Consultants.

    I've been consulting full-time for over four years, and I've only taken one contract through a broker. I've actually bent my own rules and tried to find other work through the body shops, but they have been useless to me. I've had far better luck finding work on my own, through the web, and through referrals from friends and former coworkers.

    elj.com - A Web Site dedicated to exposing an eclectic mix of elegant programming technologies

    Programming Language Critiques

    The first incarnation of this page was started by John W.F. McClain at MIT. He took it with him when he moved to Loral, but was unable to update and maintain it there, so I offered to take it over.

    In John's original page, he said:

    Computer programmers create new languages all the time (often without even realizing it.) My hope is this collection of critiques will help raise the general quality of computer language design.

    The Future of Programming

    DDJ

    Predicting the future is easier said than done, and yet, we persist in trying to do it. As futile as it may seem to forecast the future of programming, if we're going to try, it's helpful to recognize certain fundamental characteristics of programming and programmers. We know, for example, that programming is hard. We know that the industry is driven by the desire to make programming easier. And we know, as Perl creator Larry Wall has often observed, that programmers are lazy, impatient, and excessively proud.

    This first condition formed the basis of Frederick Brooks's classic text on software engineering, The Mythical Man Month (Addison-Wesley, 1995; ISBN 0201835959) first published in 1975, where he wrote:

    As we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement in productivity, in reliability, in simplicity.

    Brooks's prediction was dire and, unfortunately, accurate. There was no silver bullet, and as far as we can tell, there never will be. However, programming is undoubtedly easier today than it was in the past, and the latter two principles of programming and programmers explain why. Programming became easier because the software industry was motivated to make it so, and because lazy and impatient programmers wouldn't accept anything less. And there is no reason to believe that this will change in the future.

    FORTRANSIT -- the 650 Processor that made FORTRAN

    The FORTRANSIT story is covered in the Annals of the History of Computing [4, 5], but an additional and more informal slant doesn't hurt.

    The historical development of Fortran in Fortran 90 for the Fortran 77 Programmer by Bo Einarsson and Yurij Shokin.

    The following simple program, which uses many different and usual concepts in programming, is based on "The early development of Programming Languages" by Donald E. Knuth and Luis Trabb Pardo, published in "A History of Computing in the Twentieth Century" edited by N. Metropolis, J. Howlett and Gian-Carlo Rota, Academic Press, New York, 1980, pp. 197-273. They gave an example in Algol 60 and translated it into some very old languages such as Zuse's Plankalkül, Goldstine's Flow diagrams, Mauchly's Short Code, Burks' Intermediate PL, Rutishauser's Klammerausdrücke, Bohm's Formules, Hopper's A-2, Laning and Zierler's Algebraic interpreter, Backus' FORTRAN 0 and Brooker's AUTOCODE.

    Klammerausdrücke is a German expression; we keep it in both the Russian and English versions. A direct English translation is "bracket expression". FORTRAN 0 was not really called FORTRAN 0; it is just the very first version of Fortran.

    The program is given here in Pascal, C and five variants of Fortran. The purpose of this is to show how Fortran has developed from a cryptic, almost machine-dependent language into a modern structured high-level programming language.

    The final example shows the program in the new programming language F.
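    For readers who do not follow the link, the example program in question is the one now commonly called the TPK algorithm; a C rendering of it (my own sketch, not Einarsson and Shokin's text) looks roughly like this:

        /* Read eleven numbers, then report f(t) = sqrt(|t|) + 5*t^3 for each of
         * them in reverse order, flagging results that exceed 400. */
        #include <math.h>
        #include <stdio.h>

        static double f(double t) {
            return sqrt(fabs(t)) + 5.0 * t * t * t;
        }

        int main(void) {
            double a[11];

            for (int i = 0; i < 11; i++)
                if (scanf("%lf", &a[i]) != 1)
                    return 1;                      /* not enough input */

            for (int i = 10; i >= 0; i--) {        /* reverse order, as in the original */
                double y = f(a[i]);
                if (y > 400.0)
                    printf("%d TOO LARGE\n", i);
                else
                    printf("%d %.6g\n", i, y);
            }
            return 0;
        }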

    Slashdot NASA Releases Classic Software To Public Domain

    xpccx writes in with a bit from NewsBytes, "NASA turned 43 this month and marked the occasion by releasing more than 200 of its scientific and engineering applications for public use. The modular Fortran programs can be modified, compiled and run on most Linux platforms." The software can be found at OpenChannelSoftware.com. At long last I am ready to prepare my own space mission. I wonder if a whiskey barrel is gonna be air tight after I launch it/me into space with a trebuchet. (It's this sort of unconventional thinking that should get me my job at NASA. Or at least get me put to sleep).

    [Sept 8, 2001] Lisp as an Alternative to Java

    Introduction

    In a recent study [1], Prechelt compared the relative performance of Java and C++ in terms of execution time and memory utilization. Unlike many benchmark studies, Prechelt compared multiple implementations of the same task by multiple programmers in order to control for the effects of differences in programmer skill. Prechelt concluded that, "as of JDK 1.2, Java programs are typically much slower than programs written in C or C++. They also consume much more memory."

    We have repeated Prechelt's study using Lisp as the implementation language. Our results show that Lisp's performance is comparable to or better than C++ in terms of execution speed, with significantly lower variability which translates into reduced project risk. Furthermore, development time is significantly lower and less variable than either C++ or Java. Memory consumption is comparable to Java. Lisp thus presents a viable alternative to Java for dynamic applications where performance is important.

    Conclusions

    Lisp is often considered an esoteric AI language. Our results suggest that it might be worthwhile to revisit this view. Lisp provides nearly all of the advantages that make Java attractive, including automatic memory management, dynamic object-oriented programming, and portability. Our results suggest that Lisp is superior to Java and comparable to C++ in terms of runtime, and superior to both in terms of programming effort, and variability of results. This last item is particularly significant as it translates directly into reduced risk for software development.

    Slashdot Lisp as an Alternative to Java

    There is more data available for other languages.. (Score:4, Interesting)
    by crealf on Saturday September 08, @07:53AM (#2266890)
    (User #414283 Info)

    The article about Lisp is a follow-up of an article by Lutz Prechelt in CACM99 (a draft [ira.uka.de] is available on his page along with other articles).

    However there is more data now, as Prechelt himself widened the study and published in 2000 "An empirical comparison of C, C++, Java, Perl, Python, Rexx, and Tcl" [ira.uka.de] (a detailed technical report is here [ira.uka.de]).

    If you look, from the developer point of view, Python and Perl work times are similar to those of Lisp, along with program sizes.
    Of course, from the speed point of view, in the test, none of the scripting languages could compete with Lisp.

    Anyway, some articles by Prechelt [ira.uka.de] are interesting too (as are many other research papers; found via citeseer [nec.com], for instance).

    Smalltalk a better alternative to Java (Score:1, Interesting)
    by Anonymous Coward on Saturday September 08, @08:33AM (#2266985)

    In my opinion Smalltalk makes a much better alternative to Java.

    Smalltalk has all the trappings--a very rich set of base classes, byte-coded, garbage collected, etc.

    There are many Smalltalks out there...Smalltalk/X is quite good, and even has a Smalltalk-to-C compiler to boot. It's not totally free, but pretty cheap (and I believe for non-commercial use everything works but the S-to-C compiler).

    Squeak is an even better place to start...it is highly portable (more so than Java), very extensible (thanks to VM plugins) and has a very active community that includes Alan Kay, the man who INVENTED the term "object-oriented programming". Squeak has a just-in-time compiler (JITTER), support for multiple front-ends, and can be tied to any kind of external libraries and DLLs. It's not GPL'd, but it is free under an old Apple license (I believe the only issue is with the fonts...they are still Apple fonts). It's already been ported to every platform I've ever seen, including the iPaq (both WinCE and Linux). It runs even on STOCK iPaqs (i.e. 32 MB) without any expansion...Java, from what I understand, still has big problems just running on the iPaq, not to mention unexpanded iPaqs.

    And of course, we can't forget about old GNU Smalltalk, which is still seeing development.

    Smalltalk is quite easy to learn--you can just pick up the old "Smalltalk-80: The Language" (Goldberg) and work right from there. Squeak already has two really good books that have just come into print (go to Amazon and search for Mark Guzdial).

    (this is not meant as a language flame...I'm just throwing this out on the table, since we're discussing alternatives to Java. Scheme/LISP is a cool idea as well, but I think Smalltalk deserves some mention.)

    I've written 2 Lisp and 4 Java books (Score:3, Informative)
    by MarkWatson on Saturday September 08, @09:56AM (#2267225)
    (User #189759 Info)

    First, great topic!

    I have written 2 Lisp books for Springer-Verlag and 4 Java books, so you bet that I have an opinion on my two favorite languages.

    First, given free choice, I would use Common LISP for most of my development work. Common LISP has a huge library and is a very stable language. Although I prefer Xanalys LispWorks, there are also good free Common LISP systems.

    Java is also a great language, mainly because of the awesome class libraries and the J2EE framework (I am biased here because I am just finishing up writing a J2EE book).

    Peter Norvig once made a great comment on Java and Lisp (roughly quoting him): Java is only half as good as Lisp for AI but that is good enough.

    Anyway, I find that both Java and Common LISP are very efficient environments to code in. I only use Java for my work because that is what my customers want.

    BTW, I have a new free web book on Java and AI on my web site - help yourself!

    Best regards,

    Mark

    -- www.markwatson.com -- Open Source and Content

    Why Java succeeded, LISP can't make headway now (Score:5, Informative)
    by joneshenry on Saturday September 08, @10:44AM (#2267438)
    (User #9497 Info)

    Java was never marketed as the ultimate fast language to do searching or to manipulate large data structures. What Java was marketed as was a language that was good enough for programming paradigms popular at the time, such as object orientation and automatic garbage collection, while providing the most comprehensive APIs under the control of one entity who would continue to push the extension of those APIs.

    In this LinuxWorld interview [linuxworld.com] look what Stroustrup is hoping to someday have in the C++ standard for libraries. It's a joke, almost all of those features are already in Java. As Stroustrup says, a standard GUI framework is not "politically feasible".

    Now go listen to what Linus Torvalds is saying [ddj.com] about what he finds to be the most exciting thing to happen to Linux the past year. Hint, it's not the completion of the kernel 2.4.x, it's KDE. The foundation of KDE's success is the triumph of Qt as the de facto standard that a large community has embraced to build an entire reimplementation of end user applications.

    To fill the void of a standard GUI framework for C++, Microsoft has dictated a set of de facto standards for Windows, and Trolltech has successfully pushed Qt as the de facto standard for Linux.

    I claim that as a whole the programming community doesn't care whether a standard is de jure or de facto, but they do care that SOME standard exists. When it comes to talking people into making the investment of time and money to learn a platform on which to base their careers, a multitude of incompatible choices is NOT the way to market.

    I find talking about LISP as one language compared to Java to be a complete joke. Whose LISP? Scheme? Whose version of Scheme, GNU's Guile? Is the Elisp in Emacs the most widely distributed implementation of LISP? Can Emacs be rewritten using Guile? What is the GUI framework for all of LISP? Anyone come up with a set of LISP APIs that are the equivalent of J2EE or Jini?

    I find it extremely disheartening that the same people who can grasp the argument that the value of networks lies in the communication people can do are incapable of applying the same reasoning to programming languages. Is it that hard to read Odlyzko [umn.edu] and not see that people just want to do the same thing with programming languages--talk among themselves. The modern paradigm for software where the money is being made is getting things to work with each other. Dinosaur languages that wait around for decades while slow bureaucratic committees create nonsolutions are going to get stomped by faster moving mammals such as Java pushed by single-decision vendors. And so are fragmented languages with a multitude of incompatible and incomplete implementations such as LISP.

    Some hopefully useful points (Score:2, Informative)
    by dlakelan (qynxryna@lnu-spam-bb.pbz) on Saturday September 08, @02:20PM (#2268461)
    (User #43245 Info | http://www.endpointcomputing.com)

    First off, one of the best spokespersons for Lisp is Paul Graham, author of "On Lisp" and "ANSI Common Lisp". His web site is Here [paulgraham.com].

    Reading through his articles [paulgraham.com] will give you a better sense of what lisp is about. One that I'd like to see people comment on is: java's cover [paulgraham.com] ... It resonates with my experience as well. Also This response [paulgraham.com] to his java's cover article succinctly makes a good point that covers most of the bickering found here...

    I personally think that the argument that Lisp is not widely known, and therefore not enough programmers exist to support corporate projects is bogus. The fact that you can hire someone who claims to know C++ does NOT in any way shape or form mean that you can hire someone who will solve your C++ programming problem! See my own web site [endpointcomputing.com] for more on that.

    I personally believe that if you have a large C++ program you're working on and need to hire a new person or a replacement who already claims to know C++, the start-up cost for that person is the same as if you have a Lisp program doing the same thing, and need to hire someone AND train them to use Lisp. Why? The training more than pays for itself because it gives the new person a formal introduction to your project, and Lisp is a more productive system than C++ for most tasks. Furthermore, it's quite likely that the person who claims to know C++ doesn't know it as well as you would like, and therefore the fact that you haven't formally trained them on your project is a cost you aren't considering.

    One of the points that the original article by the fellow at NASA makes is that Lisp turned out to have a very low standard deviation of run-time and development time. What this basically says is that the lisp programs were more consistent. This is a very good thing as anyone who has ever had deadlines knows.

    Yes, the JVM version used in this study is old, but let's face it: that would affect the average, but wouldn't affect the standard deviation much. Java programs are more likely to be slow, as are C++ programs!

    The point about lisp being a memory hog that a few people have made here is invalid as well. The NASA article states:

    Memory consumption for Lisp was significantly higher than for C/C++ and roughly comparable to Java. However, this result is somewhat misleading for two reasons. First, Lisp and Java both do internal memory management using garbage collection, so it is often the case that the Lisp and Java runtimes will allocate memory from the operating system that is not actually being used by the application program.

    People here have interpreted this to mean that the system is a memory hog anyway. In fact many lisp systems reserve a large chunk of their address space, which makes it look like a large amount of memory is in use. However the operating system has really just reserved it, not allocated it. When you touch one of the pages it does get allocated. So it LOOKS like you're using a LOT of memory, but in fact because of the VM system, you are NOT using very much memory at all.
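    A small C demonstration of the reserve-versus-allocate distinction being described (my own, not from the thread, and Linux-specific in the mmap flags): a gigabyte of address space is reserved, but resident memory only grows for the pages that are actually touched. Running it and comparing VSZ against RSS in ps makes the gap visible.

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void) {
            size_t reserve = (size_t)1 << 30;   /* reserve 1 GiB of address space */
            char *region = mmap(NULL, reserve, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
            if (region == MAP_FAILED) { perror("mmap"); return 1; }

            /* Touch only the first 4 MiB: resident memory grows by ~4 MiB,
             * even though tools report a 1 GiB mapping. */
            memset(region, 0, (size_t)4 << 20);

            printf("reserved %zu bytes, touched %zu bytes -- compare VSZ and RSS in ps\n",
                   reserve, (size_t)4 << 20);
            getchar();   /* pause so the process can be inspected */
            munmap(region, reserve);
            return 0;
        }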

    The biggest reasons people don't use Lisp are they either don't understand Lisp, or have been forced by clients or supervisors to use something else.

    Interesting, but flawed? (Score:5, Insightful)
    by tkrotchko on Saturday September 08, @07:41AM (#2266864)
    (User #124118 Info | http://www.toad.net/~tomk)

    It's interesting to see the results of a short study, even though the author admits to the flaw in his methodology (primarily that the subjects were self-chosen). Still, I don't think that's a fatal flaw, and I think his results do have some validity.

    However, I think the author misses a more important issue: development involving a single programmer for a relatively small task isn't the point for most organizations. Maintainability and a large pool of potential developers (for example) are a significant factor in deciding what language to use. LISP is a fabulous language, but try to find 10 programmers at a reasonable price in the next 2 weeks. Good luck.

    Also, while initial development time is important, typically testing/debug cycles are the costly part of implementation, so that's what should weigh on your mind as the area where the most gains can be made. Further, large projects are collaborative efforts, so the objects and libraries available for a particular language play a role in how quickly you can produce quality code.

    As an aside, it would've been interesting to see the same development done with an experienced Visual Basic programmer. My guess is he/she would have the lowest development cycle, and yet it wouldn't be my first choice for a large scale development project (although at the risk of being flamed, it's not a bad language for just banging out a quick set of tools for my own use).

    Some of the things I believe are more important when thinking about a programming language:

    1) Amenable to use by team of programmers
    2) Viability over a period of time (5-10 years).
    3) Large developer base
    4) Cross platform - not because I think cross-platform is a good thing by itself; rather, I think it's important to avoid being locked in to a single hardware or operating system vendor.
    5) Mature IDE, debugging tools, and compilers.
    6) Wide applicability

    Computer languages tend to develop in response to specific needs, and most programmers will probably end up learning 5-10 languages over the course of their career. It would be helpful to have a discussion of the appropriate roles for certain computer languages, since I'm not sure any computer language is better than any other.

    Perhaps not quite as illuminating as it appears (Score:1)
    by ascholl (ascholl-at-max(dot)cs(dot)kzoo(dot)edu) on Saturday September 08, @07:53AM (#2266888)
    (User #225398 Info)

    The study does show an advantage of lisp over java/c/c++ -- but only for small problems which depend heavily on the types of tasks lisp was designed for. The author recognizes the second problem ("It might be because the benchmark task involved search and managing a complex linked data structure, two jobs for which Lisp happens to be specifically designed and particularly well suited.") but doesn't even mention the first.
    While I haven't seen the example programs, I suspect that the reason the Java versions performed poorly time-wise was probably directly related to object instantiation. Instantiating an object is a pretty expensive task in Java; typical 'by the book' methods would involve instantiating new numbers for every collection of digits, word, digit/character set representation, etc. The performance cut due to instantiation can be minimized dramatically by re-using program-wide collections of commonly used objects, but the effect would only be seen on large inputs. Since the example input was much smaller than the actual test case, it seems likely that the programmers may have neglected to include this functionality.
    Hypothesizing about implementation aside, the larger question is one of problem scope. If you're going to claim that language A is better than language B, you probably aren't concerned about tiny (albeit non-trivial) problems like the example. Now, I don't know whether this is true, but it seems possible that a large project implemented in Java or C/C++ might be built quicker, be easier to maintain, and be less fragile than its equivalent in Lisp. It may even perform better. It's not fair to assume blindly that the advantages of Lisp seen in this study will scale up. I'm not claiming that they don't ... but still. If we're choosing a language for a task, this should be a primary consideration.

    Why Language Advocacy is Bad

    Here is another relevant view, which explains that advocacy of a particular language may have little in common with the desire to innovate. Most people simply hate to be wrong after they have made their (important and time-consuming) choice ;-)
    Slashdot

    Nobody wants to be obsolete (Score:2, Interesting)
    by e4 on Thursday December 14, @12:27PM EST (#102)
    (User #102617 Info) http://www.razorlist.com

    I think one of the biggest reasons for language advocacy (/OS advocacy/DB advocacy/etc.) is that we have a vested interest in "our" language succeeding. Each of us has worked hard to learn the subtleties and intricacies of [language X], and if something else comes along that's better, we're suddenly newbies again. That hard-won expertise doesn't carry much weight if [language Y] makes it easy for "any idiot" to accomplish and/or understand what took you a week to figure out.

    We start trying to come up with reasons why it's not really better: It doesn't give you enough control; it's not as efficient; it has fewer options...

    PC vs. Mac. BSD vs. Mac. Mainframe vs. client-server. Command line vs. GUI. How many people were a little saddened to see MS-DOS fading into the mist, not because it was a great tool, but because they knew how to use it?

    A language advocate needs [language X] to succeed, to be dominant, to be the best, because he has more status and more useful knowledge that way.

    Bottom line, it's an ego thing.

    [Sep 02, 2000] Programming Languages of Choice

    Languages are very interesting things. They can either tie you up, or set you free. But no programming language can be everything to everyone, despite the fact that sometimes it looks like one does.
    What is it that you like about programming languages? What is it that you hate? What did you start on? What do you find yourself coding with most often today? Has your choice of programming languages affected other choices in software? (I.e. Lisp hackers tend to gravitate toward emacs, whereas others go to vi)

    It is quite interesting to me how much influence programming languages have on the way programmers think about how they do things. One example from one perspective is this: if you didn't know that most UNIXen were implemented in C, would you be able to tell? If so, why or why not? What are the different properties that UNIX has that make it pretty obvious that it wasn't written by somebody programming in a functional language, or in an object-oriented language (or style)?

    ... ... ...

    One of the responses

    My favorite language is Chez Scheme for two reasons: syntactic abstraction and control abstraction.

    Syntactic abstraction is macros. As opposed to other implementations of Scheme, Chez Scheme in my opinion has the best story on macros, and its macro system is among the most powerful I have seen.

    Control abstraction is the power to add new control operations to your language. For example, backtracking and coroutines. More esoterically, monads in direct-style code. Control abstraction boils down to first-class continuations (call/cc). With the single exception of SML/NJ, no other language I know of has call/cc.

    I know I will be using Scheme for years to come, and my company will also continue to use it in its systems. We code a lot in C++ and Delphi, but the Real Hard Stuff(tm) is done in Scheme because macros and continuations are big hammers. Despite Scheme being over 20 years old and despite demonstrated, efficient implementations of these "advanced" language concepts, I don't see new language designs adopting these features from Scheme. I hope this changes.
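    An editorial note for readers who have never met call/cc: the nearest thing mainstream C offers is setjmp/longjmp, which captures only a one-shot, upward escape out of nested calls, whereas Scheme continuations are first-class values that can be stored and re-invoked later. The sketch below (our illustration, not the poster's code) shows that escape-only flavor of the idea: a nested computation bails out through a previously captured control point.

        /* A rough analogue only: setjmp/longjmp gives a one-shot, upward
         * escape, a pale shadow of Scheme's first-class, re-invokable
         * continuations (call/cc), but it hints at capturing "the rest
         * of the computation" and jumping back to it. */
        #include <setjmp.h>
        #include <stdio.h>

        static jmp_buf escape;

        /* Multiply the elements; bail out through the captured control
         * point as soon as a zero makes the answer obvious. */
        static int product(const int *xs, int n)
        {
            int acc = 1;
            for (int i = 0; i < n; i++) {
                if (xs[i] == 0)
                    longjmp(escape, 1);   /* abandon the loop and all pending work */
                acc *= xs[i];
            }
            return acc;
        }

        int main(void)
        {
            int data[] = { 3, 7, 0, 5 };
            if (setjmp(escape) == 0)              /* 0 on the initial call */
                printf("product = %d\n", product(data, 4));
            else                                  /* non-zero when longjmp returns here */
                printf("hit a zero, escaped early\n");
            return 0;
        }

    Backtracking and coroutines, which the poster mentions, need the full re-invokable form that setjmp/longjmp cannot express; that is precisely the gap first-class continuations fill.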

    [Jul 29, 2000] Slashdot Are Buffer Overflow Sploits Intel's Fault -- interesting discussion about problems with C

    [Sep 1, 1999] **** Programmers Heaven - Where programmers go! -- great collection of files and links by Tore Nestenius

    [August 2, 1999] Turbo Vision Salvador Eduardo Tropea (SET) - June 11th 1999, 05:33 EST

    Turbo Vision provides a very nice user interface (comparable with the well-known GUIs) but only for console applications. This UNIX port is based on Borland's version 2.0 with fixes and was made to create RHIDE (a nice IDE for gcc and other GNU compilers). The library supports /dev/vcsa devices for speed and ncurses so it can run from telnet and xterm. This port, in contrast to Sigala's port, doesn't have "100% compatibility with the original library" as a goal; instead, we modified a lot of code in favor of security (especially against buffer overflows). The port is also available for the original platform (DOS).

    Download: http://www.geocities.com/SiliconValley/Vista/6552/rhtvision-1.0.6.src.tar.gz http://www.geocities.com/SiliconValley/Vista/6552/tvision.html

    [June 11, 1999] Undergraduate Courses About Programming Languages

    [June 11, 1999] Graduate Courses About Programming Languages

    Programming Languages: Design and Implementation (Third edition)

    Several people have made material available related to the book Programming Languages: Design and Implementation (Third edition) by Terrence W. Pratt and Marvin Zelkowitz (Prentice-Hall, 1995).

    Recommended Links



    Classic Papers

    Donald Knuth's Turing Award Lecture: Computer Programming as an Art (PDF)

    The Rise of "Worse is Better"

    I and just about every designer of Common Lisp and CLOS has had extreme exposure to the MIT/Stanford style of design. The essence of this style can be captured by the phrase "the right thing." To such a designer it is important to get all of the following characteristics right: simplicity, correctness, consistency, and completeness.

    I believe most people would agree that these are good characteristics. I will call the use of this philosophy of design the "MIT approach." Common Lisp (with CLOS) and Scheme represent the MIT approach to design and implementation.

    The worse-is-better philosophy is only slightly different: it values the same four characteristics, but it ranks simplicity of implementation above everything else, ahead of correctness, consistency, and completeness.

    Early Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the "New Jersey approach." I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach.

    However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach.

    Worse Is Better, by Richard P. Gabriel

    The concept known as “worse is better” holds that in software making (and perhaps in other arenas as well) it is better to start with a minimal creation and grow it as needed. Christopher Alexander might call this “piecemeal growth.” This is the story of the evolution of that concept.

    From 1984 until 1994 I had a Lisp company called “Lucid, Inc.” In 1989 it was clear that the Lisp business was not going well, partly because the AI companies were floundering and partly because those AI companies were starting to blame Lisp and its implementations for the failures of AI. One day in Spring 1989, I was sitting out on the Lucid porch with some of the hackers, and someone asked me why I thought people believed C and Unix were better than Lisp. I jokingly answered, “because, well, worse is better.” We laughed over it for a while as I tried to make up an argument for why something clearly lousy could be good.

    A few months later, in Summer 1989, a small Lisp conference called EuroPAL (European Conference on the Practical Applications of Lisp) invited me to give a keynote, probably since Lucid was the premier Lisp company. I agreed, and while casting about for what to talk about, I gravitated toward a detailed explanation of the worse-is-better ideas we joked about as applied to Lisp. At Lucid we knew a lot about how we would do Lisp over to survive business realities as we saw them, and so the result was called “Lisp: Good News, Bad News, How to Win Big.” [html] (slightly abridged version) [pdf] (has more details about the Treeshaker and delivery of Lisp applications).

    I gave the talk in March, 1990 at Cambridge University. I had never been to Cambridge (nor to Oxford), and I was quite nervous about speaking at Newton’s school. There were about 500-800 people in the auditorium, and before my talk they played the Notting Hillbillies over the sound system - I had never heard the group before, and indeed, the album was not yet released in the US. The music seemed appropriate because I had decided to use a very colloquial American-style of writing in the talk, and the Notting Hillbillies played a style of music heavily influenced by traditional American music, though they were a British band. I gave my talk with some fear since the room was standing room only, and at the end, there was a long silence. The first person to speak up was Gerry Sussman, who largely ridiculed the talk, followed by Carl Hewitt who was similarly none too kind. I spent 30 minutes trying to justify my speech to a crowd in no way inclined to have heard such criticism - perhaps they were hoping for a cheerleader-type speech.

    I survived, of course, and made my way home to California. Back then, the Internet was just starting up, so it was reasonable to expect not too many people would hear about the talk and its disastrous reception. However, the press was at the talk and wrote about it extensively in the UK. Headlines in computer rags proclaimed “Lisp Dead, Gabriel States.” In one, there was a picture of Bruce Springsteen with the caption, “New Jersey Style,” referring to the humorous name I gave to the worse-is-better approach to design. Nevertheless, I hid the talk away and soon was convinced nothing would come of it.

    About a year later we hired a young kid from Pittsburgh named Jamie Zawinski. He was not much more than 20 years old and came highly recommended by Scott Fahlman. We called him “The Kid.” He was a lot of fun to have around: not a bad hacker and definitely in a demographic we didn’t have much of at Lucid. He wanted to find out about the people at the company, particularly me since I had been the one to take a risk on him, including moving him to the West Coast. His way of finding out was to look through my computer directories - none of them were protected. He found the EuroPAL paper, and found the part about worse is better. He connected these ideas to those of Richard Stallman, whom I knew fairly well since I had been a spokesman for the League for Programming Freedom for a number of years. JWZ excerpted the worse-is-better sections and sent them to his friends at CMU, who sent them to their friends at Bell Labs, who sent them to their friends everywhere.

    Soon I was receiving 10 or so e-mails a day requesting the paper. Departments from several large companies requested permission to use the piece as part of their thought processes for their software strategies for the 1990s. The companies I remember were DEC, HP, and IBM. In June 1991, AI Expert magazine republished the piece to gain a larger readership in the US.

    However, despite the apparent enthusiasm by the rest of the world, I was uneasy about the concept of worse is better, and especially with my association with it. In the early 1990s, I was writing a lot of essays and columns for magazines and journals, so much so that I was using a pseudonym for some of that work: Nickieben Bourbaki. The original idea for the name was that my staff at Lucid would help with the writing, and the single pseudonym would represent the collective, much as the French mathematicians in the 1930s used “Nicolas Bourbaki” as their collective name while rewriting the foundations of mathematics in their image. However, no one but I wrote anything under that name.

    In the Winter of 1991-1992 I wrote an essay called “Worse Is Better Is Worse” under the name “Nickieben Bourbaki.” This piece attacked worse is better. In it, the fiction was created that Nickieben was a childhood friend and colleague of Richard P. Gabriel, and as a friend and for Richard’s own good, Nickieben was correcting Richard’s beliefs.

    In the Autumn of 1992, the Journal of Object-Oriented Programming (JOOP) published a “rebuttal” editorial I wrote to “Worse Is Better Is Worse” called “Is Worse Really Better?” The folks at Lucid were starting to get a little worried because I would bring them review drafts of papers arguing (as me) for worse is better, and later I would bring them rebuttals (as Nickieben) against myself. One fellow was seriously nervous that I might have a mental disease.

    In the middle of the 1990s I was working as a management consultant (more or less), and I became interested in why worse is better really could work, so I was reading books on economics and biology to understand how evolution happened in economic systems. Most of what I learned was captured in a presentation I would give back then, typically as a keynote, called “Models of Software Acceptance: How Winners Win,” and in a chapter called “Money Through Innovation Reconsidered,” in my book of essays, “Patterns of Software: Tales from the Software Community.”

    You might think that by the year 2000 I would have settled what I think of worse is better - after over a decade of thinking and speaking about it, through periods of clarity and periods of muck, and through periods of multi-mindedness on the issues. But, at OOPSLA 2000, I was scheduled to be on a panel entitled “Back to the Future: Is Worse (Still) Better?” And in preparation for this panel, the organizer, Martine Devos, asked me to write a position paper, which I did, called “Back to the Future: Is Worse (Still) Better?” In this short paper, I came out against worse is better. But a month or so later, I wrote a second one, called “Back to the Future: Worse (Still) is Better!” which was in favor of it. I still can’t decide. Martine combined the two papers into the single position paper for the panel, and during the panel itself, run as a fishbowl, participants routinely shifted from the pro-worse-is-better side of the table to the anti-side. I sat in the audience, having lost my voice giving my Mob Software talk that morning, during which I said, “risk-taking and a willingness to open one’s eyes to new possibilities and a rejection of worse-is-better make an environment where excellence is possible. Xenia invites the duende, which is battled daily because there is the possibility of failure in an aesthetic rather than merely a technical sense.”

    Decide for yourselves.


    Education



    Etc



    Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

    The site uses AdSense, so you should be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

    Copyright for original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.

    FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

    This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links because it develops like a living tree...

    You can use PayPal to make a contribution to support the development of this site and to speed up access. In case softpanorama.org is down, you can use softpanorama.info instead.

    Disclaimer:

    The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP, or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

    Last modified: June 28, 2017