
Programming Languages Usage and Design Problems


As Donald Knuth noted (in "Don Knuth and the Art of Computer Programming: The Interview"):

I think of a programming language as a tool to convert a programmer's mental images into precise operations that a machine can perform. The main idea is to match the user's intuition as well as possible. There are many kinds of users, and many kinds of application areas, so we need many kinds of languages.
 
Ordinarily technology changes fast. But programming languages are different: programming languages are not just technology, but what programmers think in.

They're half technology and half religion. And so the median language, meaning whatever language the median programmer uses, moves as slow as an iceberg.

Paul Graham: Beating the Averages

Libraries are more important than the language.

Donald Knuth


Introduction

A fruitful way to think about language development is to consider it a special type of theory building. Peter Naur suggested that programming in general is a theory-building activity in his 1985 paper "Programming as Theory Building". But the idea is especially applicable to compilers and interpreters. What Peter Naur failed to understand was that the design of programming languages has religious overtones and sometimes represents an activity that is pretty close to the process of creating a new, obscure cult ;-). Clueless academics publishing junk papers at obscure conferences are the high priests of the church of programming languages. Some, like Niklaus Wirth and Edsger W. Dijkstra, (temporarily) reached a status close to that of (false) prophets :-).

On a deep conceptual level, building a new language is a human way of solving complex problems. That means that compiler construction is probably the most underappreciated paradigm for programming large systems, much more so than the greatly oversold object-oriented programming, whose benefits are greatly overstated. For users, programming languages distinctly have religious aspects, so decisions about which language to use are often far from rational and are mainly cultural. Indoctrination at the university plays a very important role; universities were recently instrumental in making Java the new Cobol.

The second important observation about programming languages is that the language per se is just a tiny part of what can be called the language programming environment. The latter includes libraries, IDEs, books, the level of adoption at universities, popular and important applications written in the language, the level of support, and the key players that back the language on major platforms such as Windows and Linux. A mediocre language with a good programming environment can give languages of superior design, but "naked", a run for their money. That is the story behind the success of Java. A critical application is also very important, and that is the story of the success of PHP, which is nothing but a bastardized derivative of Perl (with most of the interesting Perl features removed ;-) adapted to the creation of dynamic web sites using the so-called LAMP stack.

Progress in programming languages has been very uneven and contains several setbacks. Currently this progress is mainly limited to the development of so-called scripting languages. The field of traditional high-level languages has been stagnant for decades.

At the same time there are some mysterious, unanswered questions about the factors that help a language to succeed or fail. Among them:

Those are difficult questions to answer without some way of classifying languages into different categories. Several such classifications exist. First of all, as with natural languages, the number of people who "speak" a given language is a tremendous force that can overcome any real or perceived deficiencies of the language. In programming languages, as in natural languages, nothing succeeds like success.

Complexity Curse

The history of programming languages raises interesting general questions about the limits of complexity of programming languages. There is strong historical evidence that a language with a simpler, or even simplistic, core (Basic, Pascal) has better chances to acquire a high level of popularity. The underlying fact here is probably that most programmers are at best mediocre, and such programmers tend, on an intuitive level, to avoid more complex, richer languages, preferring, say, Pascal to PL/1 and PHP to Perl. Or they at least avoid them at a particular phase of language development (C++ is not a simpler language than PL/1, but it was widely adopted because of the progress of hardware, the availability of compilers and, not least, because it was associated with OO exactly at the time OO became a mainstream fashion). Complex non-orthogonal languages can succeed only as a result of a long period of language development from a smaller core (which usually adds complexity -- just compare Fortran IV with Fortran 90, or PHP 3 with PHP 5). Carrying the banner of some fashionable new trend by extending an existing popular language to the new "paradigm" is also a possibility (OO programming in the case of C++, which is a superset of C).

Historically, few complex languages were successful (PL/1, Ada, Perl, C++), and even when they were successful, their success was typically temporary rather than permanent (PL/1, Ada, Perl). As Professor Wilkes noted (iee90):

Things move slowly in the computer language field but, over a sufficiently long period of time, it is possible to discern trends. In the 1970s, there was a vogue among system programmers for BCPL, a typeless language. This has now run its course, and system programmers appreciate some typing support. At the same time, they like a language with low level features that enable them to do things their way, rather than the compiler’s way, when they want to.

They continue to have a strong preference for a lean language. At present they tend to favor C in its various versions. For applications in which flexibility is important, Lisp may be said to have gained strength as a popular programming language.

Further progress is necessary in the direction of achieving modularity. No language has so far emerged which exploits objects in a fully satisfactory manner, although C++ goes a long way. ADA was progressive in this respect, but unfortunately it is in the process of collapsing under its own great weight.

ADA is an example of what can happen when an official attempt is made to orchestrate technical advances. After the experience with PL/1 and ALGOL 68, it should have been clear that the future did not lie with massively large languages.

I would direct the reader’s attention to Modula-3, a modest attempt to build on the appeal and success of Pascal and Modula-2 [12].

The complexity of the compiler/interpreter also matters, as it affects portability: this is one thing that probably doomed PL/1 (and later Ada). These days, though, a new language typically comes with an open source compiler (or, in the case of scripting languages, an interpreter), so this is less of a problem.

Here is an interesting take on language design from the preface to The D Programming Language:

Programming language design seeks power in simplicity and, when successful, begets beauty.

Choosing the trade-offs among contradictory requirements is a difficult task that requires good taste from the language designer as much as mastery of theoretical principles and of practical implementation matters. Programming language design is software-engineering-complete.

D is a language that attempts to consistently do the right thing within the constraints it chose: system-level access to computing resources, high performance, and syntactic similarity with C-derived languages. In trying to do the right thing, D sometimes stays with tradition and does what other languages do, and other times it breaks tradition with a fresh, innovative solution. On occasion that meant revisiting the very constraints that D ostensibly embraced. For example, large program fragments or indeed entire programs can be written in a well-defined memory-safe subset of D, which entails giving away a small amount of system-level access for a large gain in program debuggability.

You may be interested in D if the following values are important to you:

The role of fashion

At the initial, most difficult stage of language development, the language should solve an important problem that is inadequately solved by currently popular languages. But at the same time the language has few chances to succeed unless it fits perfectly into the current software fashion. This "fashion factor" is probably as important as several other factors combined, with the exception of the "language sponsor" factor.

As in women's dress, fashion rules in language design, and with time this trend has become more and more pronounced. A new language should represent the current fashionable trend. For example, OO programming was the visiting card into the world of "big, successful languages" since probably the early '90s (C++, Java, Python). Before that, "structured programming" and "verification" (Pascal, Modula) played a similar role.

Programming environment and the role of "powerful sponsor" in language success

PL/1, Java, C#, Ada are languages that had powerful sponsors. Pascal, Basic, Forth are examples of the languages that had no such sponsor during the initial period of development.  C and C++ are somewhere in between.

But any language now needs a "programming environment", which consists of a set of libraries, a debugger and other tools (make tool, linker, pretty-printer, etc.). The set of "standard" libraries and the debugger are probably the two most important elements. They cost a lot of time (or money) to develop, and here the role of a powerful sponsor is difficult to overestimate.

While this is not a necessary condition for becoming popular, it really helps: other things being equal, the weight of the language's sponsor does matter. For example Java, being a weak, inconsistent language (C-- with garbage collection and OO), was pushed down developers' throats on the strength of marketing and the huge amount of money spent on creating the Java programming environment. The same was partially true for C# and Python. That's why Python, despite its "non-Unix" origin, is a more viable scripting language now than, say, Perl (which is better integrated with Unix and has support for pointers and regular expressions that is pretty innovative for scripting languages) or Ruby (which had support for coroutines from day one, not as a "bolted on" feature as in Python). As in political campaigns, negative advertising also matters. For example, Perl suffered greatly from the blackmail of comparing programs written in it to "white noise", and then from O'Reilly's withdrawal from the role of sponsor of the language (although it continues to milk its Perl book publishing franchise ;-)

People have proved to be pretty gullible, and in this sense language marketing is not that different from the marketing of women's clothing :-)

Language level and success

One very important classification of programming languages is based on the so-called level of the language. Essentially, once at least one language is successful at a given level, the success of other languages at the same level becomes more problematic. The best chances belong to languages whose level is at least slightly higher than that of their successful predecessors.

The level of a language can informally be described as the number of statements (or, more correctly, the number of lexical units (tokens)) needed to write a solution to a particular problem in one language versus another. This way we can distinguish several levels of programming languages:

 "Nanny languages" vs "Sharp razor" languages

Some people distinguish between "nanny languages" and "sharp razor" languages. The latter do not attempt to protect the user from his errors, while the former usually go too far... The right compromise is extremely difficult to find.

For example, I consider the explicit availability of pointers an important feature that greatly increases the expressive power of a language and far outweighs the risk of errors in the hands of unskilled practitioners. In other words, attempts to make a language "safer" often misfire. The sketch below illustrates the kind of expressive power in question.
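A minimal C sketch, with hypothetical types and names, of what explicit pointers buy you: deleting a node from a singly linked list through a pointer-to-pointer, with no special case for the head of the list.

#include <stdlib.h>

struct node { int value; struct node *next; };

/* Remove the first node holding the given value, if any. */
void remove_value(struct node **head, int value)
{
    struct node **pp = head;
    while (*pp != NULL && (*pp)->value != value)
        pp = &(*pp)->next;          /* walk the links, not the nodes */
    if (*pp != NULL) {
        struct node *dead = *pp;
        *pp = dead->next;           /* unlink; works for the head too */
        free(dead);
    }
}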

Expressive style of the languages

Another useful typology is based on the expressive style of the language:

Those categories are not pure and overlap somewhat. For example, it's possible to program in an object-oriented style in C, or even in assembler (see the sketch below). Some scripting languages like Perl have built-in regular expression engines that are part of the language, so they have a functional component despite being procedural. Some relatively low-level (Algol-style) languages implement garbage collection; a good example is Java. There are also scripting languages that compile into a common language framework designed for high-level languages; for example, IronPython compiles into .NET.
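A minimal sketch (the struct and all names are hypothetical) of object-oriented style in plain C: a "class" is a struct, and function pointers stored in it play the role of methods.

#include <stdio.h>

struct shape {
    double (*area)(const struct shape *self);   /* a "virtual method" */
    double width, height;
};

static double rect_area(const struct shape *self)
{
    return self->width * self->height;
}

int main(void)
{
    struct shape r = { rect_area, 3.0, 4.0 };
    printf("area = %.1f\n", r.area(&r));   /* dispatch through the pointer */
    return 0;
}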

Weak correlation between quality of design and popularity

The popularity of programming languages is not strongly connected to their quality. Some languages that look like a collection of language designer blunders (PHP, Java) became quite popular. Java essentially became the new Cobol, and PHP dominates the construction of dynamic Web sites. The dominant technology for such sites is often called LAMP, which stands for Linux - Apache - MySQL - PHP. Being a highly simplified but badly constructed subset of Perl, a kind of new Basic for dynamic Web site construction, PHP provides a most depressing experience. I was unpleasantly surprised to learn that the Wikipedia engine was rewritten from Perl to PHP some time ago, but it illustrates the trend well.

So language design quality has little to do with a language's success in the marketplace. Simpler languages have wider appeal, as the success of PHP (which initially came at the expense of Perl) suggests. In addition, much depends on whether the language has a powerful sponsor, as was the case with Java (Sun and IBM) as well as Python (Google).

Progress in programming languages has been very uneven and contains several setbacks, such as Java. Currently this progress is usually associated with scripting languages. The history of programming languages raises interesting general questions about the "laws" of programming language design. First let's reproduce several notable quotes:

  1. Knuth law of optimization: "Premature optimization is the root of all evil (or at least most of it) in programming." - Donald Knuth
  2. "Greenspun's Tenth Rule of Programming: any sufficiently complicated C or Fortran program contains an ad hoc informally-specified bug-ridden slow implementation of half of Common Lisp." - Phil Greenspun
  3. "The key to performance is elegance, not battalions of special cases." - Jon Bentley and Doug McIlroy
  4. "Some may say Ruby is a bad rip-off of Lisp or Smalltalk, and I admit that. But it is nicer to ordinary people." - Matz, LL2
  5. "Most papers in computer science describe how their author learned what someone else already knew." - Peter Landin
  6. "The only way to learn a new programming language is by writing programs in it." - Kernighan and Ritchie
  7. "If I had a nickel for every time I've written "for (i = 0; i < N; i++)" in C, I'd be a millionaire." - Mike Vanier
  8. "Language designers are not intellectuals. They're not as interested in thinking as you might hope. They just want to get a language done and start using it." - Dave Moon
  9. "Don't worry about what anybody else is going to do. The best way to predict the future is to invent it." - Alan Kay
  10. "Programs must be written for people to read, and only incidentally for machines to execute." - Abelson & Sussman, SICP, preface to the first edition

Please note that it is one thing to read a language manual and appreciate how good the concepts are, and quite another to bet your project on a new, unproven language without good debuggers, manuals and, most importantly, libraries. The debugger is very important, but standard libraries are crucial: they are the factor that makes or breaks a new language.

In this sense languages are much like cars. For many people a car is the thing they use to get to work and to the shopping mall, and they are not very interested in whether the engine is inline or V-type or whether the transmission uses fuzzy logic. What they care about is safety, reliability, mileage, insurance and the size of the trunk. In this sense "worse is better" is very true. I have already mentioned the importance of the debugger. The other important criterion is the quality and availability of libraries. Libraries actually account for perhaps 80% of the usability of a language; in a sense, libraries are more important than the language itself...

The popular belief that scripting is an "unsafe", "second rate" or "prototype only" solution is completely wrong. If a project dies, it does not matter what the implementation language was; and for a successful project on a tough schedule, a scripting language (especially in a dual scripting-language-plus-C combination, for example Tcl+C) is an optimal blend for a large class of tasks. Such an approach helps to separate architectural decisions from implementation details much better than any OO model does; a minimal sketch of the dual-language approach follows.
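A minimal sketch, assuming the Tcl headers and library are available (the command name "square" and the script are hypothetical): C supplies a fast primitive, and the Tcl script supplies the architecture around it.

#include <tcl.h>
#include <stdio.h>

/* A C primitive exposed to scripts as the command "square". */
static int SquareCmd(ClientData cd, Tcl_Interp *interp,
                     int objc, Tcl_Obj *const objv[])
{
    int n;
    if (objc != 2) {
        Tcl_WrongNumArgs(interp, 1, objv, "number");
        return TCL_ERROR;
    }
    if (Tcl_GetIntFromObj(interp, objv[1], &n) != TCL_OK)
        return TCL_ERROR;
    Tcl_SetObjResult(interp, Tcl_NewIntObj(n * n));
    return TCL_OK;
}

int main(void)
{
    Tcl_Interp *interp = Tcl_CreateInterp();
    Tcl_CreateObjCommand(interp, "square", SquareCmd, NULL, NULL);
    /* The "policy" layer is a script; the hot spot stays in C. */
    if (Tcl_Eval(interp, "puts [square 7]") != TCL_OK)
        fprintf(stderr, "%s\n", Tcl_GetStringResult(interp));
    Tcl_DeleteInterp(interp);
    return 0;
}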

Moreover, even for tasks that handle a fair amount of computation and data (computationally intensive tasks), languages such as Python and Perl are often (but not always!) competitive with C++, C# and, especially, Java.


Programming Language Development Timeline

Here is a timeline of programming languages, modified from BYTE (for the original see BYTE.com, September 1995 / 20th Anniversary).

[Timeline entries, ca. 1946 through 2011, decade by decade.]
There are several interesting "language-induced" errors -- errors that a particular programming language facilitates rather than helps to avoid. They have been studied most for C-style languages. Funnily enough, PL/1 (from which C was derived) was a better-designed language than the much simpler C in several of these categories.

Avoiding the C-style design blunder of easily mistyping "=" for "=="

One of the most famous C design blunders is the small lexical difference between assignment and comparison (remember that Algol used := for assignment), caused by the design decision to make the language more compact (terminals at that time were not very reliable, and the number of symbols typed mattered greatly). In C, assignment is allowed inside an if statement, and no attempt was made to make the language more failsafe by preventing "=" and "==" from being mixed up. In C syntax the statement

if (alpha = beta) ... 

assigns the contents of the variable beta to the variable alpha and executes the code in the then branch if beta is nonzero.

It is easy to mix things up and write if (alpha = beta) instead of if (alpha == beta), which is a pretty nasty and remarkably common C-induced bug. When you are comparing a variable to a constant, you can often reverse the operands and put the constant first, as in

if ( 1==i ) ...
because the mistyped form

if ( 1=i ) ...

does not make any sense. With this style such a blunder is detected at the syntax level.
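A minimal sketch pulling both points together (the variable names are hypothetical; note that modern compilers flag the first form when warnings are enabled, e.g. gcc -Wall):

#include <stdio.h>

int main(void)
{
    int alpha = 0, beta = 5, i = 1;

    /* Bug: assigns beta to alpha; the branch runs whenever beta is
       nonzero.  gcc and clang warn about this under -Wall. */
    if (alpha = beta)
        printf("taken: alpha is now %d\n", alpha);

    /* Constant first: the typo "1 = i" would be a compile-time error. */
    if (1 == i)
        printf("i is one\n");

    return 0;
}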

Dealing with the unbalanced "{" and "}" problem in C-style languages

Another nasty problem with C, C++, Java, Perl and other C-style languages is that missing curly brackets are pretty difficult to find. They can also be inserted incorrectly, producing an even more nasty logical error. One effective solution, first implemented in PL/1, was based on computing the nesting level (shown in the compiler listing) and on the ability to close multiple blocks with a single end statement (PL/1 did not use the brackets {}; they were introduced in C).

In C one can use pseudo-comments that mark points where the nesting level should be zero, and check those points with a special program or an editor macro, as sketched below.
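A hypothetical sketch of such a marker convention: each marker asserts that brace nesting is back to zero at that point, and a small checker (counting "{" and "}" up to each marker) can verify it.

void handler(int code)
{
    if (code > 0) {
        /* ... */
    }
}
/*@nest:0*/   /* nesting must be zero here */

void cleanup(void)
{
    /* ... */
}
/*@nest:0*/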

Many editors can jump from any opening bracket to its closing bracket and vice versa. This is also useful, but it is a less efficient way to solve the problem.

Problem of unclosed literal

Specifying a maximum length for literals is an effective way of catching a missing quote. This idea was first implemented in debugging PL/1 compilers. You can also have an option to limit a literal to a single line. In general, multi-line literals should have distinct lexical markers (like the "here document" construct in the shell). Some languages like Perl let you use the concatenation operator to split literals across multiple lines, which are then merged at compile time. But if there is no limit on the number of lines a string literal can occupy, a bug can slip in: an unmatched quote can be closed by another unmatched quote in a nearby literal, "commenting out" part of the code in between. So this does not help much.

A limit on the length of literals can be communicated via a pragma statement at compile time for a particular fragment of the text. This is an effective way to avoid the problem, since usually only a few places in a program use multiline literals, if any.

Editors that use syntax coloring help to detect the unclosed literal problem, but there are cases in which they are useless.
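C offers the single-line discipline directly: adjacent string literals are concatenated at compile time, so every literal can open and close on the same line, and an unclosed quote is caught immediately by the compiler. A minimal sketch:

#include <stdio.h>

int main(void)
{
    /* Each piece is a complete single-line literal; the compiler
       concatenates them into one string. */
    const char *msg =
        "This long message is split across several source lines, "
        "but every quote is matched on its own line.\n";

    printf("%s", msg);
    return 0;
}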

Commenting out blocks of code

This is best done not with comments, but with the preprocessor, if the language has one (PL/1, C, etc.), as the sketch below shows.
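In C, for example, conditional compilation disables a block more robustly than comment delimiters, because /* */ comments do not nest, while a #if 0 region can safely contain complete comments and even other #if blocks. A minimal sketch:

#include <stdio.h>

int main(void)
{
    printf("active code\n");

#if 0   /* disabled block */
    printf("temporarily disabled\n");
    /* a complete comment inside the disabled block is no problem */
#endif

    return 0;
}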

The "dangling else" problem

Having both an if-else and an if statement leads to possible confusion when one of the clauses of a selection statement is itself a selection statement. For example, the C++ code

if (level >= good)
   if (level == excellent)
      cout << "excellent" << endl;
else
   cout << "bad" << endl;

is intended to process a three-state situation in which something can be bad, good or (as a special case of good) excellent; it is supposed to print an appropriate description for the excellent and bad cases, and print nothing for the good case. The indentation of the code reflects these expectations. Unfortunately, the code does not do this. Instead, it prints excellent for the excellent case, bad for the good case, and nothing for the bad case.

The problem is deciding which if matches the else in this expression. The basic rule is

an else matches the nearest previous unmatched if

You can avoid the dangling else problem completely by always using braces around the clauses of an if or if-else statement, even when they enclose only a single statement; the corrected fragment below shows the result. This strategy also helps if you need to cut and paste more code into one of the clauses: if a clause consists of only one statement without enclosing braces and you add another statement to it, you must also add the braces. Having the braces there already makes the job easier.
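The same fragment with braces added everywhere; the else now unambiguously pairs with the outer if, matching the indentation and the intended three-state logic:

if (level >= good) {
   if (level == excellent) {
      cout << "excellent" << endl;
   }
} else {
   cout << "bad" << endl;
}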

 



Old News ;-)

[Nov 29, 2017] Take This GUI and Shove It

Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI.
Notable quotes:
"... Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI. ..."
"... What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection, of course) to other computers. ..."
"... AIX's SMIT did this, or rather it wrote the commands that it executed to achieve what you asked it to do. This meant that you could learn: look at what it did and find out about which CLI commands to run. You could also take them, build them into a script, copy elsewhere, ... I liked SMIT. ..."
"... Cisco's GUI stuff doesn't really generate any scripts, but the commands it creates are the same things you'd type into a CLI. And the resulting configuration is just as human-readable (barring any weird naming conventions) as one built using the CLI. I've actually learned an awful lot about the Cisco CLI by using their GUI. ..."
"... Microsoft's more recent tools are also doing this. Exchange 2007 and newer, for example, are really completely driven by the PowerShell CLI. The GUI generates commands and just feeds them into PowerShell for you. So you can again issue your commands through the GUI, and learn how you could have done it in PowerShell instead. ..."
"... Moreover, the GUI authors seem to have a penchant to find new names for existing CLI concepts. Even worse, those names are usually inappropriate vagueries quickly cobbled together in an off-the-cuff afterthought, and do not actually tell you where the doodad resides in the menu system. With a CLI, the name of the command or feature set is its location. ..."
"... I have a cheap router with only a web gui. I wrote a two line bash script that simply POSTs the right requests to URL. Simply put, HTTP interfaces, especially if they implement the right response codes, are actually very nice to script. ..."
Slashdot

Deep End's Paul Venezia speaks out against the overemphasis on GUIs in today's admin tools, saying that GUIs are fine and necessary in many cases, but only after a complete CLI is in place, and that they cannot interfere with the use of the CLI, only complement it. Otherwise, the GUI simply makes easy things easy and hard things much harder. He writes, 'If you have to make significant, identical changes to a bunch of Linux servers, is it easier to log into them one-by-one and run through a GUI or text-menu tool, or write a quick shell script that hits each box and either makes the changes or simply pulls down a few new config files and restarts some services? And it's not just about conservation of effort - it's also about accuracy. If you write a script, you're certain that the changes made will be identical on each box. If you're doing them all by hand, you aren't.'"

alain94040 (785132)

Here is a Link to the print version of the article [infoworld.com] (that conveniently fits on 1 page instead of 3).

Providing a great GUI for complex routers or Linux admin is hard. Of course there has to be a CLI, that's how pros get the job done. But a great GUI is one that teaches a new user to eventually graduate to using CLI.

A bad GUI with no CLI is the worst of both worlds, the author of the article got that right. The 80/20 rule applies: 80% of the work is common to everyone, and should be offered with a GUI. And the 20% that is custom to each sysadmin, well use the CLI.

maxwell demon:

What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection, of course) to other computers.

0123456 (636235) writes:

What would be nice is if the GUI could automatically create a shell script doing the change.

While it's not quite the same thing, our GUI-based home router has an option to download the config as a text file so you can automatically reconfigure it from that file if it has to be reset to defaults. You could presumably use sed to change IP addresses, etc, and copy it to a different router. Of course it runs Linux.

Alain Williams:

AIX's SMIT did this, or rather it wrote the commands that it executed to achieve what you asked it to do. This meant that you could learn: look at what it did and find out about which CLI commands to run. You could also take them, build them into a script, copy elsewhere, ... I liked SMIT.

Ephemeriis:

What would be nice is if the GUI could automatically create a shell script doing the change. That way you could (a) learn about how to do it per CLI by looking at the generated shell script, and (b) apply the generated shell script (after proper inspection, of course) to other computers.

Cisco's GUI stuff doesn't really generate any scripts, but the commands it creates are the same things you'd type into a CLI. And the resulting configuration is just as human-readable (barring any weird naming conventions) as one built using the CLI. I've actually learned an awful lot about the Cisco CLI by using their GUI.

We've just started working with Aruba hardware. Installed a mobility controller last week. They've got a GUI that does something similar. It's all a pretty web-based front-end, but it again generates CLI commands and a human-readable configuration. I'm still very new to the platform, but I'm already learning about their CLI through the GUI. And getting work done that I wouldn't be able to if I had to look up the CLI commands for everything.

Microsoft's more recent tools are also doing this. Exchange 2007 and newer, for example, are really completely driven by the PowerShell CLI. The GUI generates commands and just feeds them into PowerShell for you. So you can again issue your commands through the GUI, and learn how you could have done it in PowerShell instead.

Anpheus:

Just about every Microsoft tool newer than 2007 does this. Virtual machine manager, SQL Server has done it for ages, I think almost all the system center tools do, etc.

It's a huge improvement.

PoV:

All good admins document their work (don't they? DON'T THEY?). With a CLI or a script that's easy: it comes down to "log in as user X, change to directory Y, run script Z with arguments A B and C - the output should look like D". Try that when all you have is a GLUI (like a GUI, but you get stuck): open this window, select that option, drag a slider, check these boxes, click Yes, three times. The output might look a little like this blurry screen shot and the only record of a successful execution is a window that disappears as soon as the application ends.

I suppose the Linux community should be grateful that windows made the fundemental systems design error of making everything graphic. Without that basic failure, Linux might never have even got the toe-hold it has now.

skids:

I think this is a stronger point than the OP: GUIs do not lead to good documentation. In fact, GUIs pretty much are limited to procedural documentation like the example you gave.

The best they can do as far as actual documentation, where the precise effect of all the widgets is explained, is a screenshot with little quote bubbles pointing to each doodad. That's a ridiculous way to document.

This is as opposed to a command reference which can organize, usually in a pretty sensible fashion, exact descriptions of what each command does.

Moreover, the GUI authors seem to have a penchant to find new names for existing CLI concepts. Even worse, those names are usually inappropriate vagueries quickly cobbled together in an off-the-cuff afterthought, and do not actually tell you where the doodad resides in the menu system. With a CLI, the name of the command or feature set is its location.

Not that even good command references are mandatory by today's pathetic standards. Even the big boys like Cisco have shown major degradation in the quality of their documentation during the last decade.

pedantic bore:

I think the author might not fully understand who most admins are. They're people who couldn't write a shell script if their lives depended on it, because they've never had to. GUI-dependent users become GUI-dependent admins.

As a percentage of computer users, people who can actually navigate a CLI are an ever-diminishing group.

arth1: /etc/resolv.conf

/etc/init.d/NetworkManager stop
chkconfig NetworkManager off
chkconfig network on
vi /etc/sysconfig/network
vi /etc/sysconfig/network-scripts/eth0

At least they named it NetworkManager, so experienced admins could recognize it as a culprit. Anything named in CamelCase is almost invariably written by new school programmers who don't grok the Unix toolbox concept and write applications instead of tools, and the bloated drivel is usually best avoided.

Darkness404 (1287218) writes: on Monday October 04, @07:21PM (#33789446)

There are more and more small businesses (5, 10 or so employees) realizing that they can get things done easier if they had a server. Because the business can't really afford to hire a sysadmin or a full-time tech person, its generally the employee who "knows computers" (you know, the person who has to help the boss check his e-mail every day, etc.) and since they don't have the knowledge of a skilled *Nix admin, a GUI makes their administration a lot easier.

So with the increasing use of servers among non-admins, it only makes sense for a growth in GUI-based solutions.

Svartalf (2997) writes: Ah... But the thing is... You don't NEED the GUI with recent Linux systems- you do with Windows.

oatworm (969674) writes: on Monday October 04, @07:38PM (#33789624) Homepage

Bingo. Realistically, if you're a company with less than a 100 employees (read: most companies), you're only going to have a handful of servers in house and they're each going to be dedicated to particular roles. You're not going to have 100 clustered fileservers - instead, you're going to have one or maybe two. You're not going to have a dozen e-mail servers - instead, you're going to have one or two. Consequently, the office admin's focus isn't going to be scalability; it just won't matter to the admin if they can script, say, creating a mailbox for 100 new users instead of just one. Instead, said office admin is going to be more focused on finding ways to do semi-unusual things (e.g. "create a VPN between this office and our new branch office", "promote this new server as a domain controller", "install SQL", etc.) that they might do, oh, once a year.

The trouble with Linux, and I'm speaking as someone who's used YaST in precisely this context, is that you have to make a choice - do you let the GUI manage it or do you CLI it? If you try to do both, there will be inconsistencies because the grammar of the config files is too ambiguous; consequently, the GUI config file parser will probably just overwrite whatever manual changes it thinks is "invalid", whether it really is or not. If you let the GUI manage it, you better hope the GUI has the flexibility necessary to meet your needs. If, for example, YaST doesn't understand named Apache virtual hosts, well, good luck figuring out where it's hiding all of the various config files that it was sensibly spreading out in multiple locations for you, and don't you dare use YaST to manage Apache again or it'll delete your Apache-legal but YaST-"invalid" directive.

The only solution I really see is for manual config file support with optional XML (or some other machine-friendly but still human-readable format) linkages. For example, if you want to hand-edit your resolv.conf, that's fine, but if the GUI is going to take over, it'll toss a directive on line 1 that says "#import resolv.conf.xml" and immediately overrides (but does not overwrite) everything following that. Then, if you still want to use the GUI but need to hand-edit something, you can edit the XML file using the appropriate syntax and know that your change will be reflected on the GUI.

That's my take. Your mileage, of course, may vary.

icebraining (1313345) writes: on Monday October 04, @07:24PM (#33789494) Homepage

I have a cheap router with only a web gui. I wrote a two line bash script that simply POSTs the right requests to URL. Simply put, HTTP interfaces, especially if they implement the right response codes, are actually very nice to script.

devent (1627873) writes:

Why Windows servers have a GUI is beyond me anyway. The servers are running 99,99% of the time without a monitor and normally you just login per ssh to a console if you need to administer them. But they are consuming the extra RAM, the extra CPU cycles and the extra security threats. I don't now, but can you de-install the GUI from a Windows server? Or better, do you have an option for no-GUI installation? Just saw the minimum hardware requirements. 512 MB RAM and 32 GB or greater disk space. My server runs

sirsnork (530512) writes: on Monday October 04, @07:43PM (#33789672)

it's called a "core" install in Server 2008 and up, and if you do that, there is no going back, you can't ever add the GUI back.

What this means is you can run a small subset of MS services that don't need GUI interaction. With R2 that subset grew somwhat as they added the ability to install .Net too, which mean't you could run IIS in a useful manner (arguably the strongest reason to want to do this in the first place).

Still it's a one way trip and you better be damn sure what services need to run on that box for the lifetime of that box or you're looking at a reinstall. Most windows admins will still tell you the risk isn't worth it.

Simple things like network configuration without a GUI in windows is tedious, and, at least last time i looked, you lost the ability to trunk network poers because the NIC manufactuers all assumed you had a GUI to configure your NICs

prichardson (603676) writes: on Monday October 04, @07:27PM (#33789520) Journal

This is also a problem with Max OS X Server. Apple builds their services from open source products and adds a GUI for configuration to make it all clickable and easy to set up. However, many options that can be set on the command line can't be set in the GUI. Even worse, making CLI changes to services can break the GUI entirely.

The hardware and software are both super stable and run really smoothly, so once everything gets set up, it's awesome. Still, it's hard for a guy who would rather make changes on the CLI to get used to.

MrEricSir (398214) writes:

Just because you're used to a CLI doesn't make it better. Why would I want to read a bunch of documentation, mess with command line options, then read whole block of text to see what it did? I'd much rather sit back in my chair, click something, and then see if it worked. Don't make me read a bunch of man pages just to do a simple task. In essence, the question here is whether it's okay for the user to be lazy and use a GUI, or whether the programmer should be too lazy to develop a GUI.

ak_hepcat (468765) writes: <leif@MENCKENdenali.net minus author> on Monday October 04, @07:38PM (#33789626) Homepage Journal

Probably because it's also about the ease of troubleshooting issues.

How do you troubleshoot something with a GUI after you've misconfigured? How do you troubleshoot a programming error (bug) in the GUI -> device communication? How do you scale to tens, hundreds, or thousands of devices with a GUI?

CLI makes all this easier and more manageable.

arth1 (260657) writes:

Why would I want to read a bunch of documentation, mess with command line options, then read whole block of text to see what it did? I'd much rather sit back in my chair, click something, and then see if it worked. Don't make me read a bunch of man pages just to do a simple task. Because then you'll be stuck at doing simple tasks, and will never be able to do more advanced tasks. Without hiring a team to write an app for you instead of doing it yourself in two minutes, that is. The time you spend reading man

fandingo (1541045) writes: on Monday October 04, @07:54PM (#33789778)

I don't think you really understand systems administration. 'Users,' or in this case admins, don't typically do stuff once. Furthermore, they need to know what he did and how to do it again (i.e. new server or whatever) or just remember what he did. One-off stuff isn't common and is a sign of poor administration (i.e. tracking changes and following processes).

What I'm trying to get at is that admins shouldn't do anything without reading the manual. As a Windows/Linux admin, I tend to find Linux easier to properly administer because I either already know how to perform an operation or I have to read the manual (manpage) and learn a decent amount about the operation (i.e. more than click here/use this flag).

Don't get me wrong, GUIs can make unknown operations significantly easier, but they often lead to poor process management. To document processes, screenshots are typically needed. They can be done well, but I find that GUI documentation (created by admins, not vendor docs) tend to be of very low quality. They are also vulnerable to 'upgrades' where vendors change the interface design. CLI programs typically have more stable interfaces, but maybe that's just because they have been around longer...

maotx (765127) writes: <maotx@NoSPAM.yahoo.com> on Monday October 04, @07:42PM (#33789666)

That's one thing Microsoft did right with Exchange 2007. They built it entirely around their new powershell CLI and then built a GUI for it. The GUI is limited in compared to what you can do with the CLI, but you can get most things done. The CLI becomes extremely handy for batch jobs and exporting statistics to csv files. I'd say it's really up there with BASH in terms of scripting, data manipulation, and integration (not just Exchange but WMI, SQL, etc.)

They tried to do similar with Windows 2008 and their Core [petri.co.il] feature, but they still have to load a GUI to present a prompt...

Charles Dodgeson (248492) writes: <jeffrey@goldmark.org> on Monday October 04, @08:51PM (#33790206) Homepage Journal

Probably Debian would have been OK, but I was finding admin of most Linux distros a pain for exactly these reasons. I couldn't find a layer where I could do everything that I needed to do without worrying about one thing stepping on another. No doubt there are ways that I could manage a Linux system without running into different layers of management tools stepping on each other, but it was a struggle.

There were other reasons as well (although there is a lot that I miss about Linux), but I think that this was one of the leading reasons.

(NB: I realize that this is flamebait (I've got karma to burn), but that isn't my intention here.)

[Nov 28, 2017] Sometimes the Old Ways Are Best by Brian Kernighan

Notable quotes:
"... Sometimes the old ways are best, and they're certainly worth knowing well ..."
Nov 01, 2008 | IEEE Software, pp.18-19

As I write this column, I'm in the middle of two summer projects; with luck, they'll both be finished by the time you read it.

... ... ...

There has surely been much progress in tools over the 25 years that IEEE Software has been around, and I wouldn't want to go back in time.

But the tools I use today are mostly the same old ones-grep, diff, sort, awk, and friends. This might well mean that I'm a dinosaur stuck in the past.

On the other hand, when it comes to doing simple things quickly, I can often have the job done while experts are still waiting for their IDE to start up. Sometimes the old ways are best, and they're certainly worth knowing well

[Nov 28, 2017] Rees Re OO

Notable quotes:
"... The conventional Simula 67-like pattern of class and instance will get you {1,3,7,9}, and I think many people take this as a definition of OO. ..."
"... Because OO is a moving target, OO zealots will choose some subset of this menu by whim and then use it to try to convince you that you are a loser. ..."
"... In such a pack-programming world, the language is a constitution or set of by-laws, and the interpreter/compiler/QA dept. acts in part as a rule checker/enforcer/police force. Co-programmers want to know: If I work with your code, will this help me or hurt me? Correctness is undecidable (and generally unenforceable), so managers go with whatever rule set (static type system, language restrictions, "lint" program, etc.) shows up at the door when the project starts. ..."
Nov 04, 2017 | www.paulgraham.com

(Jonathan Rees had a really interesting response to Why Arc isn't Especially Object-Oriented, which he has allowed me to reproduce here.)

Here is an a la carte menu of features or properties that are related to these terms; I have heard OO defined to be many different subsets of this list.

  1. Encapsulation - the ability to syntactically hide the implementation of a type. E.g. in C or Pascal you always know whether something is a struct or an array, but in CLU and Java you can hide the difference.
  2. Protection - the inability of the client of a type to detect its implementation. This guarantees that a behavior-preserving change to an implementation will not break its clients, and also makes sure that things like passwords don't leak out.
  3. Ad hoc polymorphism - functions and data structures with parameters that can take on values of many different types.
  4. Parametric polymorphism - functions and data structures that parameterize over arbitrary values (e.g. list of anything). ML and Lisp both have this. Java doesn't quite because of its non-Object types.
  5. Everything is an object - all values are objects. True in Smalltalk (?) but not in Java (because of int and friends).
  6. All you can do is send a message (AYCDISAM) = Actors model - there is no direct manipulation of objects, only communication with (or invocation of) them. The presence of fields in Java violates this.
  7. Specification inheritance = subtyping - there are distinct types known to the language with the property that a value of one type is as good as a value of another for the purposes of type correctness. (E.g. Java interface inheritance.)
  8. Implementation inheritance/reuse - having written one pile of code, a similar pile (e.g. a superset) can be generated in a controlled manner, i.e. the code doesn't have to be copied and edited. A limited and peculiar kind of abstraction. (E.g. Java class inheritance.)
  9. Sum-of-product-of-function pattern - objects are (in effect) restricted to be functions that take as first argument a distinguished method key argument that is drawn from a finite set of simple names.

So OO is not a well defined concept. Some people (eg. Abelson and Sussman?) say Lisp is OO, by which they mean {3,4,5,7} (with the proviso that all types are in the programmers' heads). Java is supposed to be OO because of {1,2,3,7,8,9}. E is supposed to be more OO than Java because it has {1,2,3,4,5,7,9} and almost has 6; 8 (subclassing) is seen as antagonistic to E's goals and not necessary for OO.

The conventional Simula 67-like pattern of class and instance will get you {1,3,7,9}, and I think many people take this as a definition of OO.

Because OO is a moving target, OO zealots will choose some subset of this menu by whim and then use it to try to convince you that you are a loser.

Perhaps part of the confusion - and you say this in a different way in your little memo - is that the C/C++ folks see OO as a liberation from a world that has nothing resembling a first-class functions, while Lisp folks see OO as a prison since it limits their use of functions/objects to the style of (9.). In that case, the only way OO can be defended is in the same manner as any other game or discipline -- by arguing that by giving something up (e.g. the freedom to throw eggs at your neighbor's house) you gain something that you want (assurance that your neighbor won't put you in jail).

This is related to Lisp being oriented to the solitary hacker and discipline-imposing languages being oriented to social packs, another point you mention. In a pack you want to restrict everyone else's freedom as much as possible to reduce their ability to interfere with and take advantage of you, and the only way to do that is by either becoming chief (dangerous and unlikely) or by submitting to the same rules that they do. If you submit to rules, you then want the rules to be liberal so that you have a chance of doing most of what you want to do, but not so liberal that others nail you.

In such a pack-programming world, the language is a constitution or set of by-laws, and the interpreter/compiler/QA dept. acts in part as a rule checker/enforcer/police force. Co-programmers want to know: If I work with your code, will this help me or hurt me? Correctness is undecidable (and generally unenforceable), so managers go with whatever rule set (static type system, language restrictions, "lint" program, etc.) shows up at the door when the project starts.

I recently contributed to a discussion of anti-OO on the e-lang list. My main anti-OO message (actually it only attacks points 5/6) was http://www.eros-os.org/pipermail/e-lang/2001-October/005852.html . The followups are interesting but I don't think they're all threaded properly.

(Here are the pet definitions of terms used above:

Complete Exchange


[Nov 27, 2017] Stop Writing Classes

Notable quotes:
"... If there's something I've noticed in my career that is that there are always some guys that desperately want to look "smart" and they reflect that in their code. ..."
Nov 27, 2017 | www.youtube.com

Tom coAdjoint , 1 year ago

My god I wish the engineers at my work understood this

kobac , 2 years ago

If there's something I've noticed in my career that is that there are always some guys that desperately want to look "smart" and they reflect that in their code.

If there's something else that I've noticed in my career, it's that their code is the hardest to maintain and for some reason they want the rest of the team to depend on them since they are the only "enough smart" to understand that code and change it. No need to say that these guys are not part of my team. Your code should be direct, simple and readable. End of story.


[Nov 01, 2017] Compiling in $HOME by Tom Ryder

Sep 04, 2012 | sanctum.geek.nz

If you don't have root access on a particular GNU/Linux system that you use, or if you don't want to install anything to the system directories and potentially interfere with others' work on the machine, one option is to build your favourite tools in your $HOME directory.

This can be useful if there's some particular piece of software that you really need for whatever reason, particularly on legacy systems that you share with other users or developers. The process can include not just applications, but libraries as well; you can link against a mix of your own libraries and the system's libraries as you need.

Preparation

In most cases this is actually quite a straightforward process, as long as you're allowed to use the system's compiler and any relevant build tools such as autoconf . If the ./configure script for your application allows a --prefix option, this is generally a good sign; you can normally test this with --help :

$ mkdir src
$ cd src
$ wget -q http://fooapp.example.com/fooapp-1.2.3.tar.gz
$ tar -xf fooapp-1.2.3.tar.gz
$ cd fooapp-1.2.3
$ pwd
/home/tom/src/fooapp-1.2.3
$ ./configure --help | grep -- --prefix
  --prefix=PREFIX    install architecture-independent files in PREFIX

Don't do this if the security policy on your shared machine explicitly disallows compiling programs! However, it's generally quite safe as you never need root privileges at any stage of the process.

Naturally, this is not a one-size-fits-all process; the build process will vary for different applications, but it's a workable general approach to the task.

Installing

Configure the application or library with the usual call to ./configure , but use your home directory for the prefix:

$ ./configure --prefix=$HOME

If you want to include headers or link against libraries in your home directory, it may be appropriate to add definitions for CFLAGS and LDFLAGS to refer to those directories:

$ CFLAGS="-I$HOME/include" \
> LDFLAGS="-L$HOME/lib" \
> ./configure --prefix=$HOME

Some configure scripts instead allow you to specify the path to particular libraries. Again, you can generally check this with --help .

$ ./configure --prefix=$HOME --with-foolib=$HOME/lib

You should then be able to install the application with the usual make and make install , needing root privileges for neither:

$ make
$ make install

If successful, this process will install files into directories like $HOME/bin and $HOME/lib . You can then try to call the application by its full path:

$ $HOME/bin/fooapp -v
fooapp v1.2.3
Environment setup

To make this work smoothly, it's best to add to a couple of environment variables, probably in your .bashrc file, so that you can use the home-built application transparently.

First of all, if you linked the application against libraries also in your home directory, it will be necessary to add the library directory to LD_LIBRARY_PATH , so that the correct libraries are found and loaded at runtime:

$ /home/tom/bin/fooapp -v
/home/tom/bin/fooapp: error while loading shared libraries: libfoo.so: cannot open shared...
Could not load library foolib
$ export LD_LIBRARY_PATH=$HOME/lib
$ /home/tom/bin/fooapp -v
fooapp v1.2.3

An obvious one is adding the $HOME/bin directory to your $PATH so that you can call the application without typing its path:

$ fooapp -v
-bash: fooapp: command not found
$ export PATH="$HOME/bin:$PATH"
$ fooapp -v
fooapp v1.2.3

Similarly, defining MANPATH so that calls to man will read the manual for your build of the application first is worthwhile. You may find that $MANPATH is empty by default, so you will need to append other manual locations to it. An easy way to do this is by appending the output of the manpath utility:

$ man -k fooapp
$ manpath
/usr/local/man:/usr/local/share/man:/usr/share/man
$ export MANPATH="$HOME/share/man:$(manpath)"
$ man -k fooapp
fooapp (1) - Fooapp, the programmer's foo apper

This done, you should be able to use your private build of the software comfortably, and all without ever needing to reach for root .
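Putting the three variables together, the relevant .bashrc additions might look like the following; this is a minimal sketch assuming the $HOME/bin , $HOME/lib , and $HOME/share/man layout used above:

# Prefer home-built applications, libraries, and manuals where present
export PATH="$HOME/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export MANPATH="$HOME/share/man:$(manpath)"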

Caveats

This tends to work best for userspace tools like editors or other interactive command-line apps; it even works for shells. However, this is not a typical use case for most applications, which expect to be packaged or compiled into /usr/local , so there are no guarantees it will work exactly as expected. I have found that Vim and Tmux work very well like this, even with Tmux linked against a home-compiled instance of libevent , on which it depends.

In particular, if any part of the install process requires root privileges, such as making a setuid binary, then things are likely not to work as expected.

[Oct 31, 2017] Unix as IDE: Debugging by Tom Ryder

Notable quotes:
"... Thanks to user samwyse for the .SUFFIXES suggestion in the comments. ..."
Feb 14, 2012 | sanctum.geek.nz

When unexpected behaviour is noticed in a program, GNU/Linux provides a wide variety of command-line tools for diagnosing problems. The use of gdb , the GNU debugger, and related tools like the lesser-known Perl debugger, will be familiar to those using IDEs to set breakpoints in their code and to examine program state as it runs. Other tools of interest are available however to observe in more detail how a program is interacting with a system and using its resources.

Debugging with gdb

You can use gdb in a very similar fashion to the built-in debuggers in modern IDEs like Eclipse and Visual Studio.

If you are debugging a program that you've just compiled, it makes sense to compile it with its debugging symbols added to the binary, which you can do with a gcc call containing the -g option. If you're having problems with some code, it helps to also use -Wall to show any warnings you may have otherwise missed:

$ gcc -g -Wall example.c -o example

The classic way to use gdb is as the shell for a running program compiled in C or C++, to allow you to inspect the program's state as it proceeds towards its crash.

$ gdb example
...
Reading symbols from /home/tom/example...done.
(gdb)

At the (gdb) prompt, you can type run to start the program, and it may provide you with more detailed information about the causes of errors such as segmentation faults, including the source file and line number at which the problem occurred. If you're able to compile the code with debugging symbols as above and inspect its running state like this, it makes figuring out the cause of a particular bug a lot easier.

(gdb) run
Starting program: /home/tom/gdb/example 

Program received signal SIGSEGV, Segmentation fault.
0x000000000040072e in main () at example.c:43
43     printf("%d\n", *segfault);
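For reference, the kind of code that produces a crash like the one above is a null pointer dereference. A minimal hypothetical example.c (not the article's actual source, which isn't shown) might contain:

#include <stdio.h>

int main(void)
{
    int *segfault = NULL;      /* pointer deliberately left pointing nowhere */
    printf("%d\n", *segfault); /* dereferencing it raises SIGSEGV */
    return 0;
}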

After an error terminates the program within the (gdb) shell, you can type backtrace to see what the calling function was, which can include the specific parameters passed that may have something to do with what caused the crash.

(gdb) backtrace
#0  0x000000000040072e in main () at example.c:43

You can set breakpoints for gdb using the break command to halt the program's run if it reaches a matching line number or function call:

(gdb) break 42
Breakpoint 1 at 0x400722: file example.c, line 42.
(gdb) break malloc
Breakpoint 1 at 0x4004c0
(gdb) run
Starting program: /home/tom/gdb/example 

Breakpoint 1, 0x00007ffff7df2310 in malloc () from /lib64/ld-linux-x86-64.so.2

Thereafter it's helpful to step through successive lines of code using step . You can repeat this, like any gdb command, by pressing Enter repeatedly to step through lines one at a time:

(gdb) step
Single stepping until exit from function _start,
which has no line number information.
0x00007ffff7a74db0 in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6

You can even attach gdb to a process that is already running, by finding the process ID and passing it to gdb :

$ pgrep example
1524
$ gdb -p 1524

This can be useful for redirecting streams of output for a task that is taking an unexpectedly long time to run.

Debugging with valgrind

The much newer valgrind can be used as a debugging tool in a similar way. There are many different checks and debugging methods this program can run, but one of the most useful is its Memcheck tool, which can be used to detect common memory errors like buffer overflow:

$ valgrind --leak-check=yes ./example
==29557== Memcheck, a memory error detector
==29557== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==29557== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==29557== Command: ./example
==29557== 
==29557== Invalid read of size 1
==29557==    at 0x40072E: main (example.c:43)
==29557==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
==29557== 
...

The gdb and valgrind tools can be used together for a very thorough survey of a program's run. Zed Shaw's Learn C the Hard Way includes a really good introduction for elementary use of valgrind with a deliberately broken program.

Tracing system and library calls with ltrace

The strace and ltrace tools are designed to allow watching system calls and library calls respectively for running programs, and logging them to the screen or, more usefully, to files.

You can run ltrace and have it run the program you want to monitor in this way for you by simply providing it as the sole parameter. It will then give you a listing of all the system and library calls it makes until it exits.

$ ltrace ./example
__libc_start_main(0x4006ad, 1, 0x7fff9d7e5838, 0x400770, 0x400760 
srand(4, 0x7fff9d7e5838, 0x7fff9d7e5848, 0, 0x7ff3aebde320) = 0
malloc(24)                                                  = 0x01070010
rand(0, 0x1070020, 0, 0x1070000, 0x7ff3aebdee60)            = 0x754e7ddd
malloc(24)                                                  = 0x01070030
rand(0x7ff3aebdee60, 24, 0, 0x1070020, 0x7ff3aebdeec8)      = 0x11265233
malloc(24)                                                  = 0x01070050
rand(0x7ff3aebdee60, 24, 0, 0x1070040, 0x7ff3aebdeec8)      = 0x18799942
malloc(24)                                                  = 0x01070070
rand(0x7ff3aebdee60, 24, 0, 0x1070060, 0x7ff3aebdeec8)      = 0x214a541e
malloc(24)                                                  = 0x01070090
rand(0x7ff3aebdee60, 24, 0, 0x1070080, 0x7ff3aebdeec8)      = 0x1b6d90f3
malloc(24)                                                  = 0x010700b0
rand(0x7ff3aebdee60, 24, 0, 0x10700a0, 0x7ff3aebdeec8)      = 0x2e19c419
malloc(24)                                                  = 0x010700d0
rand(0x7ff3aebdee60, 24, 0, 0x10700c0, 0x7ff3aebdeec8)      = 0x35bc1a99
malloc(24)                                                  = 0x010700f0
rand(0x7ff3aebdee60, 24, 0, 0x10700e0, 0x7ff3aebdeec8)      = 0x53b8d61b
malloc(24)                                                  = 0x01070110
rand(0x7ff3aebdee60, 24, 0, 0x1070100, 0x7ff3aebdeec8)      = 0x18e0f924
malloc(24)                                                  = 0x01070130
rand(0x7ff3aebdee60, 24, 0, 0x1070120, 0x7ff3aebdeec8)      = 0x27a51979
--- SIGSEGV (Segmentation fault) ---
+++ killed by SIGSEGV +++

You can also attach it to a process that's already running:

$ pgrep example
5138
$ ltrace -p 5138

Generally, there's quite a bit more than a couple of screenfuls of text generated by this, so it's helpful to use the -o option to specify an output file to which to log the calls:

$ ltrace -o example.ltrace ./example

You can then view this trace in a text editor like Vim, which includes syntax highlighting for ltrace output:

Vim session with ltrace output

I've found ltrace very useful for debugging problems where I suspect improper linking may be at fault, or the absence of some needed resource in a chroot environment, since among its output it shows you its search for libraries at dynamic linking time and opening configuration files in /etc , and the use of devices like /dev/random or /dev/zero .
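The same approach works on the system-call side with strace . For example, to log only the file-opening and reading calls to a file for later study (a sketch using standard strace options; on newer systems open may appear as openat ):

$ strace -e trace=open,openat,read -o example.strace ./example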

Tracking open files with lsof

If you want to view what devices, files, or streams a running process has open, you can do that with lsof :

$ pgrep example
5051
$ lsof -p 5051

For example, the first few lines of the apache2 process running on my home server are:

# lsof -p 30779
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
apache2 30779 root  cwd    DIR    8,1     4096       2 /
apache2 30779 root  rtd    DIR    8,1     4096       2 /
apache2 30779 root  txt    REG    8,1   485384  990111 /usr/lib/apache2/mpm-prefork/apache2
apache2 30779 root  DEL    REG    8,1          1087891 /lib/x86_64-linux-gnu/libgcc_s.so.1
apache2 30779 root  mem    REG    8,1    35216 1079715 /usr/lib/php5/20090626/pdo_mysql.so
...

Interestingly, another way to list the open files for a process is to check the corresponding entry for the process in the dynamic /proc directory:

# ls -l /proc/30779/fd

This can be very useful in confusing situations with file locks, or identifying whether a process is holding open files that it needn't.

Viewing memory allocation with pmap

As a final debugging tip, you can view the memory allocations for a particular process with pmap :

# pmap 30779 
30779:   /usr/sbin/apache2 -k start
00007fdb3883e000     84K r-x--  /lib/x86_64-linux-gnu/libgcc_s.so.1 (deleted)
00007fdb38853000   2048K -----  /lib/x86_64-linux-gnu/libgcc_s.so.1 (deleted)
00007fdb38a53000      4K rw---  /lib/x86_64-linux-gnu/libgcc_s.so.1 (deleted)
00007fdb38a54000      4K -----    [ anon ]
00007fdb38a55000   8192K rw---    [ anon ]
00007fdb392e5000     28K r-x--  /usr/lib/php5/20090626/pdo_mysql.so
00007fdb392ec000   2048K -----  /usr/lib/php5/20090626/pdo_mysql.so
00007fdb394ec000      4K r----  /usr/lib/php5/20090626/pdo_mysql.so
00007fdb394ed000      4K rw---  /usr/lib/php5/20090626/pdo_mysql.so
...
total           152520K

This will show you what libraries a running process is using, including those in shared memory. The total given at the bottom is a little misleading, as for loaded shared libraries the running process is not necessarily the only one using the memory; determining "actual" memory usage for a given process is a little more in-depth than it might seem with shared libraries added to the picture.

Unix as IDE: Building by Tom Ryder

Feb 13, 2012 | sanctum.geek.nz

Because compiling projects can be such a complicated and repetitive process, a good IDE provides a means to abstract, simplify, and even automate software builds. Unix and its descendants accomplish this process with a Makefile , a prescribed recipe in a standard format for generating executable files from source and object files, taking account of changes to only rebuild what's necessary to prevent costly recompilation.

One interesting thing to note about make is that while it's generally used for compiled software build automation and has many shortcuts to that effect, it can actually effectively be used for any situation in which it's required to generate one set of files from another. One possible use is to generate web-friendly optimised graphics from source files for deployment for a website; another use is for generating static HTML pages from code, rather than generating pages on the fly. It's on the basis of this more flexible understanding of software "building" that modern takes on the tool like Ruby's rake have become popular, automating the general tasks for producing and installing code and files of all kinds.

Anatomy of a Makefile

The general pattern of a Makefile is a list of variables and a list of targets , and the sources and/or objects used to provide them. Targets may not necessarily be linked binaries; they could also constitute actions to perform using the generated files, such as install to instate built files into the system, and clean to remove built files from the source tree.

It's this flexibility of targets that enables make to automate any sort of task relevant to assembling a production build of software; not just the typical parsing, preprocessing, compiling proper and linking steps performed by the compiler, but also running tests ( make test ), compiling documentation source files into one or more appropriate formats, or automating deployment of code into production systems, for example, uploading to a website via a git push or similar content-tracking method.

An example Makefile for a simple software project might look something like the below:

all: example

example: main.o example.o library.o
    gcc main.o example.o library.o -o example

main.o: main.c
    gcc -c main.c -o main.o

example.o: example.c
    gcc -c example.c -o example.o

library.o: library.c
    gcc -c library.c -o library.o

clean:
    rm *.o example

install: example
    cp example /usr/bin

The above isn't an optimal Makefile for this project, but it provides a means to build and install a linked binary simply by typing make . Each target definition contains a list of the dependencies required for the command that follows; this means that the definitions can appear in any order, and the call to make will call the relevant commands in the appropriate order.

Much of the above is needlessly verbose or repetitive; for example, if an object file is built directly from a single C file of the same name, then we don't need to include the target at all, and make will sort things out for us. Similarly, it would make sense to put some of the more repeated calls into variables so that we would not have to change them individually if our choice of compiler or flags changed. A more concise version might look like the following:

CC = gcc
OBJECTS = main.o example.o library.o
BINARY = example

all: example

example: $(OBJECTS)
    $(CC) $(OBJECTS) -o $(BINARY)

clean:
    rm -f $(BINARY) $(OBJECTS)

install: example
    cp $(BINARY) /usr/bin
More general uses of make

In the interests of automation, however, it's instructive to think of this a bit more generally than just code compilation and linking. An example could be for a simple web project involving deploying PHP to a live webserver. This is not normally a task people associate with the use of make , but the principles are the same; with the source in place and ready to go, we have certain targets to meet for the build.

PHP files don't require compilation, of course, but web assets often do. An example that will be familiar to web developers is the generation of scaled and optimised raster images from vector source files, for deployment to the web. You keep and version your original source file, and when it comes time to deploy, you generate a web-friendly version of it.

Let's assume for this particular project that there's a set of four icons used throughout the site, sized to 64 by 64 pixels. We have the source files to hand in SVG vector format, safely tucked away in version control, and now need to generate the smaller bitmaps for the site, ready for deployment. We could therefore define a target icons , set the dependencies, and type out the commands to perform. This is where command line tools in Unix really begin to shine in use with Makefile syntax:

icons: create.png read.png update.png delete.png

create.png: create.svg
    convert create.svg create.raw.png && \
    pngcrush create.raw.png create.png

read.png: read.svg
    convert read.svg read.raw.png && \
    pngcrush read.raw.png read.png

update.png: update.svg
    convert update.svg update.raw.png && \
    pngcrush update.raw.png update.png

delete.png: delete.svg
    convert delete.svg delete.raw.png && \
    pngcrush delete.raw.png delete.png

With the above done, typing make icons will go through each of the source icon files, convert them from SVG to PNG using ImageMagick's convert , and optimise them with pngcrush , to produce images ready for upload.

A similar approach can be used for generating help files in various forms, for example, generating HTML files from Markdown source:

docs: README.html credits.html

README.html: README.md
    markdown README.md > README.html

credits.html: credits.md
    markdown credits.md > credits.html

And perhaps finally deploying a website with git push web , but only after the icons are rasterized and the documents converted:

deploy: icons docs
    git push web

For a more compact and abstract formula for turning a file of one suffix into another, you can use the .SUFFIXES pragma to define these using special symbols. The code for converting icons could look like this; in this case, $< refers to the source file, $* to the filename with no extension, and $@ to the target.

icons: create.png read.png update.png delete.png

.SUFFIXES: .svg .png

.svg.png:
    convert $< $*.raw.png && \
    pngcrush $*.raw.png $@
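If you know your build will always use GNU make, the same transformation is often written more directly as a pattern rule, which needs no .SUFFIXES declaration. This is a GNU-specific alternative to the portable form above, using the same automatic variables:

icons: create.png read.png update.png delete.png

%.png: %.svg
    convert $< $*.raw.png && \
    pngcrush $*.raw.png $@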
Tools for building a Makefile

A variety of tools exist in the GNU Autotools toolchain for the construction of configure scripts and make files for larger software projects at a higher level, in particular autoconf and automake . The use of these tools allows generating configure scripts and make files covering very large source bases, reducing the necessity of building otherwise extensive makefiles manually, and automating steps taken to ensure the source remains compatible and compilable on a variety of operating systems.

Covering this complex process would be a series of posts in its own right, and is out of scope of this survey.

Thanks to user samwyse for the .SUFFIXES suggestion in the comments.

Unix as IDE: Compiling by Tom Ryder

Feb 12, 2012 | sanctum.geek.nz

There are a lot of tools available for compiling and interpreting code on the Unix platform, and they tend to be used in different ways. However, conceptually many of the steps are the same. Here I'll discuss compiling C code with gcc from the GNU Compiler Collection, and briefly the use of perl as an example of an interpreter.

GCC

GCC is a very mature GPL-licensed collection of compilers, perhaps best-known for working with C and C++ programs. Its free software license and near ubiquity on free Unix-like systems like GNU/Linux and BSD has made it enduringly popular for these purposes, though more modern alternatives are available in compilers using the LLVM infrastructure, such as Clang .

The frontend binaries for GNU Compiler Collection are best thought of less as a set of complete compilers in their own right, and more as drivers for a set of discrete programming tools, performing parsing, compiling, and linking, among other steps. This means that while you can use GCC with a relatively simple command line to compile straight from C sources to a working binary, you can also inspect in more detail the steps it takes along the way and tweak it accordingly.

I won't be discussing the use of make files here, though you'll almost certainly be wanting them for any C project of more than one file; that will be discussed in the next article on build automation tools.

Compiling and assembling object code

You can compile object code from a C source file like so:

$ gcc -c example.c -o example.o

Assuming it's a valid C program, this will generate an unlinked binary object file called example.o in the current directory, or tell you the reasons it can't. You can inspect its assembler contents with the objdump tool:

$ objdump -D example.o

Alternatively, you can get gcc to output the appropriate assembly code for the object directly with the -S parameter:

$ gcc -c -S example.c -o example.s

This kind of assembly output can be particularly instructive, or at least interesting, when printed inline with the source code itself, which you can do with:

$ gcc -c -g -Wa,-a,-ad example.c > example.lst
Preprocessor

The C preprocessor cpp is generally used to include header files and define macros, among other things. It's a normal part of gcc compilation, but you can view the C code it generates by invoking cpp directly:

$ cpp example.c

This will print out the complete code as it would be compiled, with includes and relevant macros applied.
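For a quick demonstration with a hypothetical file macro.c , the -P option suppresses the linemarker comments so only the expanded code remains:

$ cat macro.c
#define GREETING "Hello, world."
const char *greeting = GREETING;
$ cpp -P macro.c
const char *greeting = "Hello, world.";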

Linking objects

One or more objects can be linked into appropriate binaries like so:

$ gcc example.o -o example

In this example, GCC is not doing much more than abstracting a call to ld , the GNU linker. The command produces an executable binary called example .

Compiling, assembling, and linking

All of the above can be done in one step with:

$ gcc example.c -o example

This is a little simpler, but compiling objects independently turns out to have some practical performance benefits in not recompiling code unnecessarily, which I'll discuss in the next article.

Including and linking

Directories containing header files your code needs can be added to the compiler's include search path with the -I parameter:

$ gcc -I/usr/include/somelib example.c -o example

Similarly, if the code needs to be dynamically linked against a compiled system library available in common locations like /lib or /usr/lib , such as ncurses , that can be included with the -l parameter:

$ gcc -lncurses example.c -o example

If you have a lot of necessary inclusions and links in your compilation process, it makes sense to put this into environment variables:

$ export CFLAGS=-I/usr/include/somelib
$ export CLIBS=-lncurses
$ gcc $CFLAGS $CLIBS example.c -o example

This very common step is another thing that a Makefile is designed to abstract away for you.

Compilation plan

To inspect in more detail what gcc is doing with any call, you can add the -v switch to prompt it to print its compilation plan on the standard error stream:

$ gcc -v -c example.c -o example.o

If you don't want it to actually generate object files or linked binaries, it's sometimes tidier to use -### instead:

$ gcc -### -c example.c -o example.o

This is mostly instructive to see what steps the gcc binary is abstracting away for you, but in specific cases it can be useful to identify steps the compiler is taking that you may not necessarily want it to.

More verbose error checking

You can add the -Wall and/or -pedantic options to the gcc call to prompt it to warn you about things that may not necessarily be errors, but could be:

$ gcc -Wall -pedantic -c example.c -o example.o

This is good for including in your Makefile or in your makeprg definition in Vim, as it works well with the quickfix window discussed in the previous article and will enable you to write more readable, compatible, and less error-prone code as it warns you more extensively about errors.

Profiling compilation time

You can pass the flag -time to gcc to generate output showing how long each step is taking:

$ gcc -time -c example.c -o example.o
Optimisation

You can pass generic optimisation options to gcc to make it attempt to build more efficient object files and linked binaries, at the expense of compilation time. I find -O2 is usually a happy medium for code going into production:

$ gcc -O2 example.c -o example

Like any other Bash command, all of this can be called from within Vim by:

:!gcc % -o example
Interpreters

The approach to interpreted code on Unix-like systems is very different. In these examples I'll use Perl, but most of these principles will be applicable to interpreted Python or Ruby code, for example.

Inline

You can pass a string of Perl code directly to the interpreter in any one of the following ways, in this case printing the single line "Hello, world." to the screen, with a linebreak following. The first is perhaps the tidiest and most standard way to work with Perl; the second uses a here-string, and the third a classic Unix shell pipe.

$ perl -e 'print "Hello world.\n";'
$ perl <<<'print "Hello world.\n";'
$ echo 'print "Hello world.\n";' | perl

Of course, it's more typical to keep the code in a file, which can be run directly:

$ perl hello.pl

In either case, you can check the syntax of the code without actually running it with the -c switch:

$ perl -c hello.pl

But to use the script as a logical binary , so you can invoke it directly without knowing or caring what the script is, you can add a special first line to the file called the "shebang" that does some magic to specify the interpreter through which the file should be run.

#!/usr/bin/env perl
print "Hello, world.\n";

The script then needs to be made executable with a chmod call. It's also good practice to rename it to remove the extension, since it is now taking the shape of a logical binary:

$ mv hello{.pl,}
$ chmod +x hello

And can thereafter be invoked directly, as if it were a compiled binary:

$ ./hello

This works so transparently that many of the common utilities on modern GNU/Linux systems, such as the adduser frontend to useradd , are actually Perl or even Python scripts.

In the next post, I'll describe the use of make for defining and automating building projects in a manner comparable to IDEs, with a nod to newer takes on the same idea with Ruby's rake .

Unix as IDE: Editing by Tom Ryder

Feb 11, 2012 | sanctum.geek.nz

The text editor is the core tool for any programmer, which is why choice of editor evokes such tongue-in-cheek zealotry in debate among programmers. Unix is the operating system most strongly linked with two enduring favourites, Emacs and Vi, and their modern versions in GNU Emacs and Vim, two editors with very different editing philosophies but comparable power.

Being a Vim heretic myself, here I'll discuss the indispensable features of Vim for programming, and in particular the use of shell tools called from within Vim to complement the editor's built-in functionality. Some of the principles discussed here will be applicable to those using Emacs as well, but probably not for underpowered editors like Nano.

This will be a very general survey, as Vim's toolset for programmers is enormous , and it'll still end up being quite long. I'll focus on the essentials and the things I feel are most helpful, and try to provide links to articles with a more comprehensive treatment of the topic. Don't forget that Vim's :help has surprised many people new to the editor with its high quality and usefulness.

Filetype detection

Vim has built-in settings to adjust its behaviour, in particular its syntax highlighting, based on the filetype being loaded, which it detects automatically and generally gets right. In particular, this allows you to set an indenting style conformant with the way a particular language is usually written. This should be one of the first things in your .vimrc file.

if has("autocmd")
  filetype on
  filetype indent on
  filetype plugin on
endif
Syntax highlighting

Even if you're only working with a 16-color terminal, include the following in your .vimrc if it isn't there already:

syntax on

The colorschemes with a default 16-color terminal are not pretty largely by necessity, but they do the job, and for most languages syntax definition files are available that work very well. There's a tremendous array of colorschemes available, and it's not hard to tweak them to suit or even to write your own. Using a 256-color terminal or gVim will give you more options. Good syntax highlighting files will show you definite syntax errors with a glaring red background.

Line numbering

To turn line numbers on if you use them a lot in your traditional IDE:

set number

You might like to try this as well, if you have at least Vim 7.3 and are keen to try numbering lines relative to the current line rather than absolutely:

set relativenumber
Tags files

Vim works very well with the output from the ctags utility. This allows you to search quickly for all uses of a particular identifier throughout the project, or to navigate straight to the declaration of a variable from one of its uses, regardless of whether it's in the same file. For large C projects in multiple files this can save huge amounts of otherwise wasted time, and is probably Vim's best answer to similar features in mainstream IDEs.

You can run :!ctags -R on the root directory of projects in many popular languages to generate a tags file filled with definitions and locations for identifiers throughout your project. Once a tags file for your project is available, you can search for uses of an appropriate tag throughout the project like so:

:tag someClass

The commands :tn and :tp will allow you to iterate through successive uses of the tag elsewhere in the project. The built-in tags functionality for this already covers most of the bases you'll probably need, but for features such as a tag list window, you could try installing the very popular Taglist plugin . Tim Pope's Unimpaired plugin also contains a couple of useful relevant mappings.

Calling external programs

Until 2017, there were three major methods of calling external programs during a Vim session: :! commands, reading in command output with :r! , and filtering buffer text through a command with ! , all covered in the sections below.

Since 2017, Vim 8.x includes a :terminal command to bring up a terminal emulator buffer in a window. This seems to work better than previous plugin-based attempts at doing this, such as Conque . For the moment I still strongly recommend using one of the older methods, all of which also work in other vi -type editors.
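If you'd like to try it anyway, on a Vim 8 build with the +terminal feature it's invoked simply as:

:terminal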

Lint programs and syntax checkers

Checking syntax or compiling with an external program call (e.g. perl -c , gcc ) is one of the calls that's good to make from within the editor using :! commands. If you were editing a Perl file, you could run this like so:

:!perl -c %

/home/tom/project/test.pl syntax OK

Press Enter or type command to continue

The % symbol is shorthand for the file loaded in the current buffer. Running this prints the output of the command, if any, below the command line. If you wanted to call this check often, you could perhaps map it as a command, or even a key combination in your .vimrc file. In this case, we define a command :PerlLint which can be called from normal mode with \l :

command PerlLint !perl -c %
nnoremap <leader>l :PerlLint<CR>

For a lot of languages there's an even better way to do this, though, which allows us to capitalise on Vim's built-in quickfix window. We can do this by setting an appropriate makeprg for the filetype, in this case including a module that provides us with output that Vim can use for its quickfix list, and a definition for its two formats:

:set makeprg=perl\ -c\ -MVi::QuickFix\ %
:set errorformat+=%m\ at\ %f\ line\ %l\.
:set errorformat+=%m\ at\ %f\ line\ %l

You may need to install this module first via CPAN, or the Debian package libvi-quickfix-perl . This done, you can type :make after saving the file to check its syntax, and if errors are found, you can open the quickfix window with :copen to inspect the errors, and use :cn and :cp to jump to them within the buffer.

Vim quickfix working on a Perl file

This also works for output from gcc , and pretty much any other compiler syntax checker that you might want to use that includes filenames, line numbers, and error strings in its error output. It's even possible to do this with web-focused languages like PHP , and for tools like JSLint for JavaScript . There's also an excellent plugin named Syntastic that does something similar.

Reading output from other commands

You can use :r! to call commands and paste their output directly into the buffer with which you're working. For example, to pull a quick directory listing for the current folder into the buffer, you could type:

:r!ls

This doesn't just work for commands, of course; you can simply read in other files this way with just :r , like public keys or your own custom boilerplate:

:r ~/.ssh/id_rsa.pub
:r ~/dev/perl/boilerplate/copyright.pl
Filtering output through other commands

You can extend this to actually filter text in the buffer through external commands, perhaps selected by a range or visual mode, and replace it with the command's output. While Vim's visual block mode is great for working with columnar data, it's very often helpful to bust out tools like column , cut , sort , or awk .

For example, you could sort the entire file in reverse by the second column by typing:

:%!sort -k2,2r

You could print only the third column of some selected text where the line matches the pattern /vim/ with:

:'<,'>!awk '/vim/ {print $3}'

You could arrange keywords from lines 1 to 10 in nicely formatted columns like:

:1,10!column -t

Really any kind of text filter or command can be manipulated like this in Vim, a simple interoperability feature that expands what the editor can do by an order of magnitude. It effectively makes the Vim buffer into a text stream, which is a language that all of these classic tools speak.

There is a lot more detail on this in my "Shell from Vi" post.

Built-in alternatives

It's worth noting that for really common operations like sorting and searching, Vim has built-in methods in :sort and :grep , which can be helpful if you're stuck using Vim on Windows, but don't have nearly the adaptability of shell calls.

Diffing

Vim has a diffing mode, vimdiff , which allows you to not only view the differences between different versions of a file, but also to resolve conflicts via a three-way merge and to replace differences to and fro with commands like :diffput and :diffget for ranges of text. You can call vimdiff from the command line directly with at least two files to compare like so:

$ vimdiff file-v1.c file-v2.c
Vim diffing a .vimrc file

Version control

You can call version control methods directly from within Vim, which is probably all you need most of the time. It's useful to remember here that % is always a shortcut for the buffer's current file:

:!svn status
:!svn add %
:!git commit -a

Recently a clear winner for Git functionality with Vim has come up with Tim Pope's Fugitive , which I highly recommend to anyone doing Git development with Vim. There'll be a more comprehensive treatment of version control's basis and history in Unix in Part 7 of this series.

The difference

Part of the reason Vim is thought of as a toy or relic by a lot of programmers used to GUI-based IDEs is that it's seen as just a tool for editing files on servers, rather than as a very capable editing component of the shell in its own right. Because its built-in features compose so well with external tools on Unix-friendly systems, it becomes a text-editing powerhouse that sometimes surprises even experienced users.

[Oct 31, 2017] Understanding Shared Libraries in Linux by Aaron Kili

Oct 30, 2017 | www.tecmint.com
In programming, a library is an assortment of pre-compiled pieces of code that can be reused in a program. Libraries simplify life for programmers by providing reusable functions, routines, classes, data structures and so on (written by another programmer), which they can use in their programs.

For instance, if you are building an application that needs to perform math operations, you don't have to create a new math function for that, you can simply use existing functions in libraries for that programming language.

Examples of libraries in Linux include libc (the standard C library) or glibc (GNU version of the standard C library), libcurl (multiprotocol file transfer library), libcrypt (library used for encryption, hashing, and encoding in C) and many more.

Linux supports two classes of libraries, namely:

- Static libraries: bound to a program statically at compile time.
- Dynamic or shared libraries: loaded when a program is launched and loaded into memory, with binding occurring at run time.

Dynamic or shared libraries can further be categorized into:

- Dynamically linked libraries: the program is linked with the shared library, and the dynamic loader loads the library (if it's not already in memory) upon execution.
- Dynamically loaded libraries: the program takes full control by calling functions within the library at any point during execution.

Shared Library Naming Conventions

Shared libraries are named in two ways: the library name (a.k.a. soname ) and a "filename" (absolute path to the file which stores the library code).

For example, the soname for libc is libc.so.6 : lib is the prefix, c is a descriptive name, so means shared object, and 6 is the version. Its filename is /lib64/libc.so.6 . Note that the soname is actually a symbolic link to the filename.
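You can see this on a typical system by listing the file; the output below is illustrative, as the exact link target varies with the distribution and glibc version:

$ ls -l /lib64/libc.so.6
lrwxrwxrwx. 1 root root 12 Jan  1 00:00 /lib64/libc.so.6 -> libc-2.17.so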

Locating Shared Libraries in Linux

Shared libraries are loaded by the ld.so (or ld.so.x ) and ld-linux.so (or ld-linux.so.x ) programs, where x is the version. In Linux, /lib/ld-linux.so.x searches for and loads all shared libraries used by a program.

A program can call a library using its library name or filename, and a library path stores directories where libraries can be found in the filesystem. By default, libraries are located in /usr/local/lib , /usr/local/lib64 , /usr/lib and /usr/lib64 ; system startup libraries are in /lib and /lib64 . Programmers can, however, install libraries in custom locations.

The library path can be defined in /etc/ld.so.conf file which you can edit with a command line editor.

# vi /etc/ld.so.conf

The line(s) in this file instruct ldconfig to read the configuration files under /etc/ld.so.conf.d . This way, package maintainers or programmers can add their custom library directories to the search list.

If you look into the /etc/ld.so.conf.d directory, you'll see .conf files for some common packages (kernel, mysql and postgresql in this case):

# ls /etc/ld.so.conf.d
kernel-2.6.32-358.18.1.el6.x86_64.conf  kernel-2.6.32-696.1.1.el6.x86_64.conf  mariadb-x86_64.conf
kernel-2.6.32-642.6.2.el6.x86_64.conf   kernel-2.6.32-696.6.3.el6.x86_64.conf  postgresql-pgdg-libs.conf

If you take a look at mariadb-x86_64.conf , you will see an absolute path to the package's libraries.

# cat mariadb-x86_64.conf
/usr/lib64/mysql

The method above sets the library path permanently. To set it temporarily, use the LD_LIBRARY_PATH environment variable on the command line. If you want to make the change permanent, add this line to the shell initialization file /etc/profile (global) or ~/.profile (user specific):

# export LD_LIBRARY_PATH=/path/to/library/file
Managing Shared Libraries in Linux

Let us now look at how to deal with shared libraries. To get a list of all shared library dependencies for a binary file, you can use the ldd utility . The output of ldd is in the form:

library name =>  filename (some hexadecimal value)
OR
filename (some hexadecimal value)  # this is shown when the library name can't be read

This command shows all shared library dependencies for the ls command .

# ldd /usr/bin/ls
OR
# ldd /bin/ls
Sample Output
   linux-vdso.so.1 =>  (0x00007ffebf9c2000)
libselinux.so.1 => /lib64/libselinux.so.1 (0x0000003b71e00000)
librt.so.1 => /lib64/librt.so.1 (0x0000003b71600000)
libcap.so.2 => /lib64/libcap.so.2 (0x0000003b76a00000)
libacl.so.1 => /lib64/libacl.so.1 (0x0000003b75e00000)
libc.so.6 => /lib64/libc.so.6 (0x0000003b70600000)
libdl.so.2 => /lib64/libdl.so.2 (0x0000003b70a00000)
/lib64/ld-linux-x86-64.so.2 (0x0000561abfc09000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003b70e00000)
libattr.so.1 => /lib64/libattr.so.1 (0x0000003b75600000)

Because shared libraries can exist in many different directories, searching through all of them every time a program is launched would be greatly inefficient; this is one of the potential disadvantages of dynamic libraries. Therefore a caching mechanism is employed, performed by the program ldconfig .

By default, ldconfig reads the content of /etc/ld.so.conf , creates the appropriate symbolic links in the dynamic link directories, and then writes a cache to /etc/ld.so.cache which is then easily used by other programs.
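You can inspect what is currently cached with ldconfig 's standard -p option; the output below is illustrative:

# ldconfig -p | grep libc.so
        libc.so.6 (libc6,x86-64) => /lib64/libc.so.6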

This is especially important when you have just installed new shared libraries, created your own, or created new library directories. You need to run the ldconfig command for the changes to take effect.

# ldconfig
OR
# ldconfig -V   #shows files and directories it works with

After creating your shared library, you need to install it. You can move it into one of the standard directories mentioned above and run the ldconfig command.

Alternatively, run the following command to create symbolic links from the soname to the filename:

# ldconfig -n /path/to/your/shared/libraries

To get started with creating your own libraries, check out this guide from The Linux Documentation Project (TLDP) .

That's all for now! In this article, we gave you an introduction to libraries, explained shared libraries and how to manage them in Linux. If you have any queries or additional ideas to share, use the comment form below.

[Oct 28, 2017] Shared libraries with GCC on Linux

Oct 28, 2017 | www.cprogramming.com
Most larger software projects will contain several components, some of which you may find use for later on in some other project, or that you just want to separate out for organizational purposes. When you have a reusable or logically distinct set of functions, it is helpful to build a library from it so that you don't have to copy the source code into your current project and recompile it all the time, and so you can keep different modules of your program disjoint and change one without affecting others. Once it's been written and tested, you can safely reuse it over and over again, saving the time and hassle of building it into your project every time.

Building static libraries is fairly simple, and since we rarely get questions on them, I won't cover them. I'll stick with shared libraries, which seem to be more confusing for most people.

Before we get started, it might help to get a quick rundown of everything that happens from source code to running program:

  1. C Preprocessor: This stage processes all the preprocessor directives . Basically, any line that starts with a #, such as #define and #include.
  2. Compilation Proper: Once the source file has been preprocessed, the result is then compiled. Since many people refer to the entire build process as compilation, this stage is often referred to as "compilation proper." This stage turns a .c file into an .o (object) file.
  3. Linking: Here is where all of the object files and any libraries are linked together to make your final program. Note that for static libraries, the actual library is placed in your final program, while for shared libraries, only a reference to the library is placed inside. Now you have a complete program that is ready to run. You launch it from the shell, and the program is handed off to the loader.
  4. Loading: This stage happens when your program starts up. Your program is scanned for references to shared libraries. Any references found are resolved and the libraries are mapped into your program.

Steps 3 and 4 are where the magic (and confusion) happens with shared libraries.
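The excerpt stops short of showing the commands themselves, so here is a brief sketch of building and using a shared library with GCC on Linux, assuming hypothetical files foo.c and main.c :

$ gcc -c -fPIC foo.c -o foo.o        # compile as position-independent code
$ gcc -shared -o libfoo.so foo.o     # link the object into a shared library
$ gcc main.c -L. -lfoo -o main       # link the program against libfoo.so
$ LD_LIBRARY_PATH=. ./main           # tell the loader where to find it at run time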

[Oct 26, 2017] Amazon.com Customer reviews Extreme Programming Explained Embrace Change

Oct 26, 2017 | www.amazon.com
2.0 out of 5 stars

By Mohammad B. Abdulfatah on February 10, 2003

Programming Malpractice Explained: Justifying Chaos

To fairly review this book, one must distinguish between the methodology it presents and the actual presentation. As to the presentation, the author attempts to win the reader over with emotional persuasion and pep talk rather than with facts and hard evidence. Stories of childhood and comradeship don't classify as convincing facts to me. A single case study, the C3 project, is often referred to, but with no specific information (do note that the project was cancelled by the client after staying in development for far too long).
As to the method itself, it basically boils down to four core practices:
1. Always have a customer available on site.
2. Unit test before you code.
3. Program in pairs.
4. Forfeit detailed design in favor of incremental, daily releases and refactoring.
If you do the above, and you have excellent staff on your hands, then the book promises that you'll reap the benefits of faster development, less overtime, and happier customers. Of course, the book fails to point out that if your staff is all highly qualified people, then the project is likely to succeed no matter what methodology you use. I'm sure that anyone who has worked in the software industry for some time has noticed the sad state that most computer professionals are in nowadays.
However, assuming that you have all the topnotch developers that you desire, the outlined methodology is almost impossible to apply in real world scenarios. Having a customer always available on site would mean that the customer in question is probably a small, expendable fish in his organization and is unlikely to have any useful knowledge of its business practices. Unit testing code before it is written means that one would have to have a mental picture of what one is going to write before writing it, which is difficult without upfront design. And maintaining such tests as the code changes would be a nightmare. Programming in pairs all the time would assume that your topnotch developers are also sociable creatures, which is rarely the case, and even if they were, no one would be able to justify the practice in terms of productivity. I won't discuss why I think that abandoning upfront design is a bad practice; the whole idea is too ridiculous to debate.
Both book and methodology will attract fledgling developers with their promise of hacking as an acceptable software practice and a development universe revolving around the programmer. It's a cult, not a methodology, where the followers shall find salvation and 40-hour working weeks. Experience is a great teacher, but only a fool would learn from it alone. Listen to what the opponents have to say before embracing change, and don't forget to take the proverbial grain of salt.
Two stars out of five for the presentation for being courageous and attempting to defy the standard practices of the industry. Two stars for the methodology itself, because it underlines several common sense practices that are very useful once practiced without the extremity.

By wiredweird HALL OF FAME TOP 1000 REVIEWER on May 24, 2004
eXtreme buzzwording

Maybe it's an interesting idea, but it's just not ready for prime time.
Parts of Kent's recommended practice - including aggressive testing and short integration cycle - make a lot of sense. I've shared the same beliefs for years, but it was good to see them clarified and codified. I really have changed some of my practice after reading this and books like this.
I have two broad kinds of problem with this dogma, though. First is the near-abolition of documentation. I can't defend 2000 page specs for typical kinds of development. On the other hand, declaring that the test suite is the spec doesn't do it for me either. The test suite is code, written for machine interpretation. Much too often, it is not written for human interpretation. Based on the way I see most code written, it would be a nightmare to reverse engineer the human meaning out of any non-trivial test code. Some systematic way of ensuring human intelligibility in the code, traceable to specific "stories" (because "requirements" are part of the bad old way), would give me a lot more confidence in the approach.
The second is the dictatorial social engineering that eXtremity mandates. I've actually tried the pair programming - what a disaster. The less said the better, except that my experience did not actually destroy any professional relationships. I've also worked with people who felt that their slightest whim was adequate reason to interfere with my work. That's what Beck institutionalizes by saying that any request made of me by anyone on the team must be granted. It puts me completely at the mercy of anyone walking by. The requisite bullpen physical environment doesn't work for me either. I find that the visual and auditory distraction make intense concentration impossible.
I find revival tent spirit of the eXtremists very off-putting. If something works, it works for reasons, not as a matter of faith. I find much too much eXhortation to believe, to go ahead and leap in, so that I will eXperience the wonderfulness for myself. Isn't that what the evangelist on the subway platform keeps saying? Beck does acknowledge unbelievers like me, but requires their exile in order to maintain the group-think of the X-cult.
Beck's last chapters note a number of exceptions and special cases where eXtremism may not work - actually, most of the projects I've ever encountered.
There certainly is good in the eXtreme practice. I look to future authors to tease that good out from the positively destructive threads that I see interwoven.

By A customer on May 2, 2004
A work of fiction

The book presents extreme programming. It is divided into three parts:
(1) The problem
(2) The solution
(3) Implementing XP.
The problem, as presented by the author, is that requirements change but current methodologies are not agile enough to cope with this. This results in the customer being unhappy. The solution is to embrace change and to allow the requirements to be changed. This is done by choosing the simplest solution, releasing frequently, and refactoring with the security of unit tests.
The basic assumption which underscores the approach is that the cost of change is not exponential but reaches a flat asymptote. If this is not the case, allowing change late in the project would be disastrous. The author does not provide data to back his point of view. On the other hand there is a lot of data against a constant cost of change (see for example discussion of cost in Code Complete). The lack of reasonable argumentation is an irremediable flaw in the book. Without some supportive data it is impossible to believe the basic assumption, nor the rest of the book. This is all the more important since the only project that the author refers to was cancelled before full completion.
Many other parts of the book are unconvincing. The author presents several XP practices. Some of them are very useful. For example unit tests are a good practice. They are however better treated elsewhere (e.g., Code Complete chapter on unit test). On the other hand some practices seem overkill. Pair programming is one of them. I have tried it and found it useful to generate ideas while prototyping. For writing production code, I find that a quiet environment is by far the best (see Peopleware for supportive data). Again the author does not provide any data to support his point.
This book suggests an approach aiming at changing software engineering practices. However the lack of supportive data makes it a work of fiction.
I would suggest reading Code Complete for code level advice or Rapid Development for management level advice.

By A customer on November 14, 2002
Not Software Engineering.

Any engineering discipline is based on solid reasoning and logic, not on blind faith. Unfortunately, most of this book attempts to convince you that Extreme Programming is better based on the author's experiences. A lot of the principles are counter-intuitive, and the author exhorts you to just try them out and get enlightened. I'm sorry, but these kinds of things belong in infomercials, not in s/w engineering.
The part about "code is the documentation" is the scariest part. It's true that keeping the documentation up to date is tough on any software project, but to do away with dcoumentation is the most ridiculous thing I have heard. It's like telling people to cut of their noses to avoid colds.
Yes we are always in search of a better software process. Let me tell you that this book won't lead you there.

By Philip K. Ronzone on November 24, 2000
The "gossip magazine diet plans" style of programming.

This book reminds me of the "gossip magazine diet plans", you know, the vinegar and honey diet, or the fat-burner 2000 pill diet etc. Occasionally, people actually lose weight on those diets, but, only because they've managed to eat less or exercise more. The diet plans themselves are worthless. XP is the same - it may sometimes help people program better, but only because they are (unintentionally) doing something different. People look at things like XP because, like dieters, they see a need for change. Overall, the book is a decently written "fad diet", with ideas that are just as worthless.

By A customer on August 11, 2003
Hackers! Salvation is nigh!!

It's interesting to see the phenomenon of Extreme Programming happening in the dawn of the 21st century. I suppose historians can explain such a reaction as a truly conservative movement. Of course, serious software engineering practice is hard. Heck, documentation is a pain in the neck. And what programmer wouldn't love to have divine inspiration just before starting to write the latest web application and so enlightened by the Almighty, write the whole thing in one go, as if by magic? No design, no documentation, you and me as a pair, and the customer too. Sounds like a hacker's dream with "Imagine" as the soundtrack (sorry, John).
The Software Engineering struggle is over 50 years old and it's only logical to expect some resistance, from time to time. In the XP case, the resistance comes in one of its worst forms: evangelism. A fundamentalist cult, with very little substance, no proof of any kind, but then again if you don't have faith you won't be granted the gift of the mystic revelation. It's Gnosticism for Geeks.
Take it with a pinch of salt... well, maybe a sack of salt. If you can see through the B.S. that sells millions of dollars in books, consultancy fees, lectures, etc., you will recognise some common-sense ideas that are better explained, explored and detailed elsewhere.

By Ian K. VINE VOICE on February 27, 2015
Long have I hated this book

Kent is an excellent writer. He does an excellent job of presenting an approach to software development that is misguided for anything but user interface code. The argument that user interface code must be gotten into the hands of users to get feedback is used to suggest that complex system code should not be "designed up front". This is simply wrong. For example, if you are going to deploy an application in the Amazon Cloud that you want to scale, you had better have some idea of how this is going to happen. Simply waiting until your application falls over and fails is not an acceptable approach.

One of the things I despise the most about the software development culture is the mindless adoption of fads. Extreme programming has been adopted by some organizations like a religious dogma.

Engineering large software systems is one of the most difficult things that humans do. There are no silver bullets and there are no dogmatic solutions that will make the difficult simple.

By Anil Philip on March 24, 2005
not found - the silver bullet

Maybe I'm too cynical because I never got to work for the successful, whiz-kid companies; maybe this book wasn't written for me!

This book reminds me of Jacobsen's "Use Cases" book of the 1990s. 'Use Cases' was all the rage, but after several years we slowly learned the truth: Use Cases do not deal with the architecture, a necessary and good foundation for any piece of software.

Similarly, this book seems to be spotlighting Testing and taking it to extremes.

'the test plan is the design doc'

Not true. The design doc encapsulates wisdom and insight; a picture that accurately describes the interactions of the lower-level software components is worth a thousand lines of code-reading.

Also present is an evangelistic fervor that reminds me of the rah-rah eighties' bestseller, "In Search Of Excellence" by Peters and Waterman. (Many people have since noted that most of the companies spotlighted in that book were bankrupt twenty-five years later.)

- in a room full of people with a bully supervisor (as I experienced in my last job at a major telco) innovation or good work is largely absent.

- deploy daily - are you kidding?

To run through the hundreds of test cases in a large application takes several hours, if not days. Not all testing can be automated.

- I have found the principle of "baby steps", one of the principles in the book, most useful in my career - it is the basis for prototyping iteratively. However, I heard it described in 1997 at a pep talk at MCI that the VP of our department gave to us. So I don't know who stole it from whom!

Lastly, I noted that the term 'XP' was used throughout the book, and the back cover has a blurb from an M$ architect. Was it simply coincidence that Windows shares the same name for its XP release? I wondered if M$ had sponsored part of the book as good advertising for Windows XP! :)

[Oct 13, 2017] 1.3. Compatibility of Red Hat Developer Toolset 6.1

Oct 13, 2017 | access.redhat.com

Figure 1.1, "Red Hat Developer Toolset 6.1 Compatibility Matrix", illustrates the support for binaries built with Red Hat Developer Toolset on a certain version of Red Hat Enterprise Linux when those binaries are run on various other versions of this system. For ABI compatibility information, see Section 2.2.4, "C++ Compatibility".

Figure 1.1. Red Hat Developer Toolset 6.1 Compatibility Matrix

[Oct 13, 2017] What gcc versions are available in Red Hat Enterprise Linux

Red Hat Developer Toolset
Notable quotes:
"... The scl ("Software Collections") tool is provided to make use of the tool versions from the Developer Toolset easy while minimizing the potential for confusion with the regular RHEL tools. ..."
"... Red Hat provides support to Red Hat Developer Tool Set for all Red Hat customers with an active Red Hat Enterprise Linux Developer subscription. ..."
"... You will need an active Red Hat Enterprise Linux Developer subscription to gain access to Red Hat Developer Tool set. ..."
Oct 13, 2017 | access.redhat.com

Red Hat provides another option via the Red Hat Developer Toolset.

With the developer toolset, developers can choose to take advantage of the latest versions of the GNU developer tool chain, packaged for easy installation on Red Hat Enterprise Linux. This version of the GNU development tool chain is an alternative to the toolchain offered as part of each Red Hat Enterprise Linux release. Of course, developers can continue to use the version of the toolchain provided in Red Hat Enterprise Linux.

The developer toolset gives software developers the ability to develop and compile an application once to run on multiple versions of Red Hat Enterprise Linux (such as Red Hat Enterprise Linux 5 and 6). Compatible with all supported versions of Red Hat Enterprise Linux, the developer toolset is available for users who develop applications for Red Hat Enterprise Linux 5 and 6. Please see the release notes for support of specific minor releases.

Unlike the compatibility and preview gcc packages provided with RHEL itself, the developer toolset packages put their content under a /opt/rh path. The scl ("Software Collections") tool is provided to make use of the tool versions from the Developer Toolset easy while minimizing the potential for confusion with the regular RHEL tools.
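For instance, a minimal session might look like the sketch below (assuming devtoolset-6 is the collection you have installed; substitute whichever version you actually use):

scl enable devtoolset-6 'gcc --version'   # run a single command with the toolset gcc
scl enable devtoolset-6 bash              # or start a subshell with the toolset first on PATH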

Red Hat provides support for Red Hat Developer Toolset to all Red Hat customers with an active Red Hat Enterprise Linux Developer subscription.

You will need an active Red Hat Enterprise Linux Developer subscription to gain access to Red Hat Developer Toolset.

For further information on Red Hat Developer Toolset, refer to the relevant release documentation:

https://access.redhat.com/site/documentation/en-US/Red_Hat_Developer_Toolset/ .

For further information on Red Hat Enterprise Linux Developer subscription, you may reference the following links:
* Red Hat Discussion
* Red Hat Developer Toolset Support Policy

[Oct 13, 2017] Building GCC from source

Oct 13, 2017 | unix.stackexchange.com


I've built newer gcc versions for RHEL 6 several times now (from 4.7.x to 5.3.1).

The process is fairly easy thanks to Red Hat's Jakub Jelinek, whose Fedora gcc builds can be found on koji.

Simply grab the latest src rpm for whichever version you require (e.g. 5.3.1).

Basically you would start by determining the build requirements by issuing rpm -qpR on the src rpm and looking for any version requirements:

rpm -qpR gcc-5.3.1-4.fc23.src.rpm | grep -E '= [[:digit:]]'
binutils >= 2.24
doxygen >= 1.7.1
elfutils-devel >= 0.147
elfutils-libelf-devel >= 0.147
gcc-gnat >= 3.1
glibc-devel >= 2.4.90-13
gmp-devel >= 4.1.2-8
isl = 0.14
isl-devel = 0.14
libgnat >= 3.1
libmpc-devel >= 0.8.1
mpfr-devel >= 2.2.1
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1
systemtap-sdt-devel >= 1.3

Now comes the tedious part - any package whose required version is higher than what yum provides for your distro needs to be downloaded from koji, and you recursively repeat the process until all dependency requirements are met.
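As a sketch of that step (the NVRs below are hypothetical -- match them to whatever versions rpm -qpR reported), the koji CLI can fetch a build directly:

koji download-build --arch=x86_64 isl-0.14-4.fc23   # fetches the isl and isl-devel rpms from that build
yum localinstall isl-0.14-4.fc23.x86_64.rpm isl-devel-0.14-4.fc23.x86_64.rpm

The koji CLI itself is packaged in the Fedora/EPEL repos if it is not already installed.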

I cheat, btw.

I usually repackage the rpm to contain a correct build tree, using the GNU facility for correctly placed and named in-tree prerequisites: gmp/mpc/mpfr/isl (cloog is no longer required) are downloaded and untarred into the correct paths, and the new (bloated) tar is rebuilt into a new src rpm (with minor changes to the spec file) with no dependency on their packaged (rpm) versions. Since I know of no one using Ada, I simply remove the portions pertaining to gnat from the specfile, further simplifying the build process, leaving me with just binutils to worry about.
Gcc can actually build with older binutils, so if you're in a hurry, further edit the specfile to require the binutils version already present on your system. This will result in a slightly crippled gcc, but it will mostly perform well enough.
On the whole this approach works quite well.

UPDATE 1

The simplest method for opening a src rpm is probably to yum install it and access everything under ~/rpmbuild, but I prefer:

mkdir gcc-5.3.1-4.fc23
cd gcc-5.3.1-4.fc23
rpm2cpio ../gcc-5.3.1-4.fc23.src.rpm | cpio -id
tar xf gcc-5.3.1-20160212.tar.bz2
cd gcc-5.3.1-20160212
contrib/download_prerequisites
cd ..
tar caf gcc-5.3.1-20160212.tar.bz2 gcc-5.3.1-20160212
rm -rf gcc-5.3.1-20160212
# remove gnat
sed -i '/%global build_ada 1/ s/1/0/' gcc.spec
sed -i '/%if !%{build_ada}/,/%endif/ s/^/#/' gcc.spec
# remove gmp/mpfr/mpc dependencies
sed -i '/BuildRequires: gmp-devel >= 4.1.2-8, mpfr-devel >= 2.2.1, libmpc-devel >= 0.8.1/ s/.*//' gcc.spec
# remove isl dependency
sed -i '/BuildRequires: isl = %{isl_version}/,/Requires: isl-devel = %{isl_version}/ s/^/#/' gcc.spec
# Either build binutils as I do, or lower requirements
sed -i '/Requires: binutils/ s/2.24/2.20/' gcc.spec
# Make sure you don't break on gcc-java
sed -i '/gcc-java/ s/^/#/' gcc.spec

You also have the choice to set a prefix so this rpm will install side-by-side with the distro rpm without breaking it (but that requires changing the package name, and some modifications to internal package names). I usually add an environment module as part of the rpm (so I add a new dependency), which lets me load and unload this gcc as required (similar to how software collections work).
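In use, that module-based workflow might look like the following sketch (the module name and version are hypothetical, and the environment-modules package is assumed to be installed):

module load gcc/5.3.1      # put the custom gcc first on PATH
gcc --version              # now reports the custom 5.3.1 build
module unload gcc/5.3.1    # drop back to the distro gcc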

Finally, create the rpmbuild tree, place the files where they should go, and build:

yum install rpmdevtools rpm-build
rpmdev-setuptree
cp * ~/rpmbuild/SOURCES/
mv ~/rpmbuild/{SOURCES,SPECS}/gcc.spec
rpmbuild -ba ~/rpmbuild/SPECS/gcc.spec
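If the build succeeds, the binary rpms land under ~/rpmbuild/RPMS; a plausible last step (exact package names depend on your spec file edits) is:

yum localinstall ~/rpmbuild/RPMS/x86_64/gcc-*.rpm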

UPDATE 2

Normally one should not use a "server" OS for development - that's why you have Fedora, which already comes with the latest gcc. I have some particular requirements, but you should really consider using the right tool for the task - RHEL/CentOS to run production apps, Fedora to develop those apps, etc.

[Oct 13, 2017] devtoolset-3-gcc-4.9.1-10.el6.x86_64.rpm

This is a RHEL-supported package, similar to one available from academic Linux distributions.
Oct 13, 2017 | access.redhat.com


Build Host: x86-027.build.eng.bos.redhat.com
Build Date: 2014-09-22 12:43:02 UTC
Group: Development/Languages
License: GPLv3+ and GPLv3+ with exceptions and GPLv2+ with exceptions and LGPLv2+ and BSD

Available From (Product (Variant, Version, Architecture) -- Repo Label):
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6.7 x86_64 -- rhel-server-rhscl-6-eus-rpms
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6.6 x86_64 -- rhel-server-rhscl-6-eus-rpms
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6.5 x86_64 -- rhel-server-rhscl-6-eus-rpms
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6.4 x86_64 -- rhel-server-rhscl-6-eus-rpms
Red Hat Software Collections (for RHEL Server) 1 for RHEL 6 x86_64 -- rhel-server-rhscl-6-rpms
Red Hat Software Collections (for RHEL Workstation) 1 for RHEL 6 x86_64 -- rhel-workstation-rhscl-6-rpms
Red Hat Software Collections (for RHEL Server) from RHUI 1 for RHEL 6 x86_64 -- rhel-server-rhscl-6-rhui-rpms

[Oct 13, 2017] Installing GCC 4.8.2 on Red Hat Enterprise linux 6.5

Oct 13, 2017 | stackoverflow.com

suny6 , answered Jan 29 '16 at 21:53

The official way to have gcc 4.8.2 on RHEL 6 is to install Red Hat Developer Toolset (yum install devtoolset-2), and in order to have it you need one of the qualifying Red Hat developer subscriptions.

You can check whether you have any of these subscriptions by running:

subscription-manager list --available

and

subscription-manager list --consumed

If you don't have any of these subscriptions, "yum install devtoolset-2" won't succeed. However, luckily CERN provides a "back door" for their SLC6 which can also be used on RHEL 6. Run the three lines below as root, and you should be able to have it:

wget -O /etc/yum.repos.d/slc6-devtoolset.repo http://linuxsoft.cern.ch/cern/devtoolset/slc6-devtoolset.repo

wget -O /etc/pki/rpm-gpg/RPM-GPG-KEY-cern http://ftp.scientificlinux.org/linux/scientific/5x/x86_64/RPM-GPG-KEYs/RPM-GPG-KEY-cern

yum install devtoolset-2

Once it's done completely, you should have the new development package in /opt/rh/devtoolset-2/root/.
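Once installed, the collection is enabled per shell rather than system-wide; a quick check might look like this sketch (assuming the standard software-collections layout):

scl enable devtoolset-2 bash       # start a subshell with the toolset active
gcc --version                      # should now report the 4.8.x gcc from /opt/rh/devtoolset-2/root/usr/bin
# alternatively, source the collection's enable script in the current shell:
source /opt/rh/devtoolset-2/enable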

answered Oct 29 '14 at 21:53

For some reason the mpc/mpfr/gmp packages aren't being downloaded. Just look in your gcc source directory; it should have created symlinks to those packages:
gcc/4.9.1/install$ ls -ad gmp mpc mpfr
gmp  mpc  mpfr

If those don't show up then simply download them from the gcc site: ftp://gcc.gnu.org/pub/gcc/infrastructure/

Then untar and symlink/rename them so you have the directories like above.

Then when you ./configure and make, gcc's makefile will automatically build them for you.
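A sketch of that manual route (the tarball version below is illustrative -- pick whichever versions the infrastructure directory lists for your gcc release):

cd gcc-4.9.1
wget ftp://gcc.gnu.org/pub/gcc/infrastructure/gmp-4.3.2.tar.bz2
tar xf gmp-4.3.2.tar.bz2 && ln -s gmp-4.3.2 gmp
# repeat for mpfr and mpc; gcc's makefiles then build them in-tree during make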

[Oct 08, 2017] Disbelieving the 'many eyes' myth (Opensource.com)

Notable quotes:
"... This article originally appeared on Alice, Eve, and Bob – a security blog and is republished with permission. ..."
Oct 08, 2017 | opensource.com

Review by many eyes does not always prevent buggy code. There is a view that because open source software is subject to review by many eyes, all the bugs will be ironed out of it. This is a myth. (06 Oct 2017, Mike Bursell, Red Hat)

Writing code is hard. Writing secure code is harder -- much harder. And before you get there, you need to think about design and architecture. When you're writing code to implement security functionality, it's often based on architectures and designs that have been pored over and examined in detail. They may even reflect standards that have gone through worldwide review processes and are generally considered perfect and unbreakable. *

However good those designs and architectures are, though, there's something about putting things into actual software that's, well, special. With the exception of software proven to be mathematically correct, ** being able to write software that accurately implements the functionality you're trying to realize is somewhere between a science and an art. This is no surprise to anyone who's actually written any software, tried to debug software, or divine software's correctness by stepping through it; however, it's not the key point of this article.

Nobody *** actually believes that the software that comes out of this process is going to be perfect, but everybody agrees that software should be made as close to perfect and bug-free as possible. This is why code review is a core principle of software development. And luckily -- in my view, at least -- much of the code that we use in our day-to-day lives is open source, which means that anybody can look at it, and it's available for tens or hundreds of thousands of eyes to review.

And herein lies the problem: There is a view that because open source software is subject to review by many eyes, all the bugs will be ironed out of it. This is a myth. A dangerous myth. The problems with this view are at least twofold. The first is the "if you build it, they will come" fallacy. I remember when there was a list of all the websites in the world, and if you added your website to that list, people would visit it. **** In the same way, the number of open source projects was (maybe) once so small that there was a good chance that people might look at and review your code. Those days are past -- long past. Second, for many areas of security functionality -- crypto primitives implementation is a good example -- the number of suitably qualified eyes is low.

Don't think that I am in any way suggesting that the problem is any less in proprietary code: quite the opposite. Not only are the designs and architectures in proprietary software often hidden from review, but you have fewer eyes available to look at the code, and the dangers of hierarchical pressure and groupthink are dramatically increased. "Proprietary code is more secure" is less myth, more fake news. I completely understand why companies like to keep their security software secret, and I'm afraid that the "it's to protect our intellectual property" line is too often a platitude they tell themselves when really, it's just unsafe to release it. So for me, it's open source all the way when we're looking at security software.

So, what can we do? Well, companies and other organizations that care about security functionality can -- and have, I believe, a responsibility to -- expend resources on checking and reviewing the code that implements that functionality. Alongside that, the open source community can -- and does -- find ways to support critical projects and improve the amount of review that goes into that code. ***** And we should encourage academic organizations to train students in the black art of security software writing and review, not to mention highlighting the importance of open source software.

We can do better -- and we are doing better. Because what we need to realize is that the reason the "many eyes hypothesis" is a myth is not that many eyes won't improve code -- they will -- but that we don't have enough expert eyes looking. Yet.


* Yeah, really: "perfect and unbreakable." Let's just pretend that's true for the purposes of this discussion.

** and that still relies on the design and architecture to actually do what you want -- or think you want -- of course, so good luck.

*** Nobody who's actually written more than about five lines of code (or more than six characters of Perl).

**** I added one. They came. It was like some sort of magic.

***** See, for instance, the Linux Foundation 's Core Infrastructure Initiative .

This article originally appeared on Alice, Eve, and Bob – a security blog and is republished with permission.

[Oct 03, 2017] Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper labor near the top of their goals and as a business model.

Notable quotes:
"... That's Silicon Valley's dirty secret. Most tech workers in Palo Alto make about as much as the high school teachers who teach their kids. And these are the top coders in the country! ..."
"... I don't see why more Americans would want to be coders. These companies want to drive down wages for workers here and then also ship jobs offshore... ..."
"... Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper labor near the top of their goals and as a business model. ..."
"... There are quite a few highly qualified American software engineers who lose their jobs to foreign engineers who will work for much lower salaries and benefits. This is a major ingredient of the libertarian virus that has engulfed and contaminating the Valley, going hand to hand with assembling products in China by slave labor ..."
"... If you want a high tech executive to suffer a stroke, mention the words "labor unions". ..."
"... India isn't being hired for the quality, they're being hired for cheap labor. ..."
"... Enough people have had their hands burnt by now with shit companies like TCS (Tata) that they are starting to look closer to home again... ..."
"... Globalisation is the reason, and trying to force wages up in one country simply moves the jobs elsewhere. The only way I can think of to limit this happening is to keep the company and coders working at the cutting edge of technology. ..."
"... I'd be much more impressed if I saw that the hordes of young male engineers here in SF expressing a semblance of basic common sense, basic self awareness and basic life skills. I'd say 91.3% are oblivious, idiotic children. ..."
"... Not maybe. Too late. American corporations objective is to low ball wages here in US. In India they spoon feed these pupils with affordable cutting edge IT training for next to nothing ruppees. These pupils then exaggerate their CVs and ship them out en mass to the western world to dominate the IT industry. I've seen it with my own eyes in action. Those in charge will anything/everything to maintain their grip on power. No brag. Just fact. ..."
Oct 02, 2017 | profile.theguardian.com
Terryl Dorian , 21 Sep 2017 13:26
That's Silicon Valley's dirty secret. Most tech workers in Palo Alto make about as much as the high school teachers who teach their kids. And these are the top coders in the country!
Ray D Wright -> RogTheDodge , , 21 Sep 2017 14:52
I don't see why more Americans would want to be coders. These companies want to drive down wages for workers here and then also ship jobs offshore...
Richard Livingstone -> KatieL , , 21 Sep 2017 14:50
+++1 to all of that.

Automated coding just pushes the level of coding further up the development food chain, rather than getting rid of it. It is the wrong approach for current tech. AI that is smart enough to model new problems and create its own descriptive and runnable language - hopefully after my lifetime, but coming sometime.

Arne Babenhauserheide -> Evelita , , 21 Sep 2017 14:48
What coding does not teach is how to improve our non-code infrastructure and how to keep it running (that's the stuff which actually moves things). Code can optimize stuff, but it needs actual actuators to affect reality.

Sometimes these actuators are actual people walking on top of a roof while fixing it.

WyntonK , 21 Sep 2017 14:47
Silicon Valley companies have placed lowering wages and flooding the labor market with cheaper labor near the top of their goals and as a business model.

There are quite a few highly qualified American software engineers who lose their jobs to foreign engineers who will work for much lower salaries and benefits. This is a major ingredient of the libertarian virus that has engulfed and is contaminating the Valley, going hand in hand with assembling products in China using slave labor.

If you want a high tech executive to suffer a stroke, mention the words "labor unions".

TheEgg -> UncommonTruthiness , , 21 Sep 2017 14:43

The ship has sailed on this activity as a career.

Nope. Married to a highly-technical skillset, you can still make big bucks. I say this as someone involved in this kind of thing academically and our Masters grads have to beat the banks and fintech companies away with dog shits on sticks. You're right that you can teach anyone to potter around and throw up a webpage but at the prohibitively difficult maths-y end of the scale, someone suitably qualified will never want for a job.

Mike_Dexter -> Evelita , , 21 Sep 2017 14:43
In a similar vein, if you accept the argument that it does drive down wages, wouldn't the culprit actually be the multitudes of online and offline courses and tutorials available to an existing workforce?
Terryl Dorian -> CountDooku , , 21 Sep 2017 14:42
Funny you should pick medicine, law, engineering... 3 fields that are *not* taught in high school. The writer is simply adding "coding" to your list. So it seems you agree with his "garbage" argument after all.
anticapitalist -> RogTheDodge , , 21 Sep 2017 14:42
Key word is "good". Teaching everyone is just going to increase the pool of programmers code I need to fix. India isn't being hired for the quality, they're being hired for cheap labor. As for women sure I wouldn't mind more women around but why does no one say their needs to be more equality in garbage collection or plumbing? (And yes plumbers are a high paid professional).

In the end I don't care what the person is, I just want to hire and work with the best and not someone I have to correct their work because they were hired by quota. If women only graduate at 15% why should IT contain more than that? And let's be a bit honest with the facts, of those 15% how many spend their high school years staying up all night hacking? Very few. Now the few that did are some of the better developers I work with but that pool isn't going to increase by forcing every child to program... just like sports aren't better by making everyone take gym class.

WithoutPurpose , 21 Sep 2017 14:42
I ran a development team for 10 years and I never had any trouble hiring programmers - we just had to pay them enough. Every job would have at least 10 good applicants.

Two years ago I decided to scale back a bit and go into programming (I can code real-time low latency financial apps in 4 languages) and I had four interviews in six months with stupidly low salaries. I'm lucky in that I can bounce between tech and the business side so I got a decent job out of tech.

My entirely anecdotal conclusion is that there is no shortage of good programmers just a shortage of companies willing to pay them.

oddbubble -> Tori Turner , , 21 Sep 2017 14:41
I've worn many hats so far. I started out as a sysadmin, then I moved on to web development, then back end, and now I'm doing test automation because I am on almost the same money for half the effort.
peter nelson -> raffine , , 21 Sep 2017 14:38
But the concepts won't. Good programming requires the ability to break down a task, organise the steps in performing it, identify parts of the process that are common or repetitive so they can be bundled together, handed-off or delegated, etc.

These concepts can be applied to any programming language, and indeed to many non-software activities.

Oliver Jones -> Trumbledon , , 21 Sep 2017 14:37
In the city maybe with a financial background, the exception.
anticapitalist -> Ethan Hawkins , 21 Sep 2017 14:32
Well, to his point, sort of... either everything will go PHP or all those entry-level PHP developers will be on the street. A good Java or C developer is hard to come by. And to the others: being a developer, especially a good one, is nothing like reading and writing. The industry is already saturated with poor coders just doing it for a paycheck.
peter nelson -> Tori Turner , 21 Sep 2017 14:31
I'm just going to say this once: not everyone with a computer science degree is a coder.

And vice versa. I'm retiring from a 40-year career as a software engineer. Some of the best software engineers I ever met did not have CS degrees.

KatieL -> Mishal Almohaimeed , 21 Sep 2017 14:30
"already developing automated coding scripts. "

Pretty much the entire history of the software industry since FORAST was developed for the ORDVAC has been about desperately trying to make software development in some way possible without driving everyone bonkers.

The gulf between FORAST and today's IDE-written, type-inferring high level languages, compilers, abstracted run-time environments, hypervisors, multi-computer architectures and general tech-world flavour-of-2017-ness is truly immense[1].

And yet software is still fucking hard to write. There's no sign it's getting easier despite all that work.

Automated coding was promised as the solution in the 1980s as well. In fact, somewhere in my archives, I've got paper journals which include adverts for automated systems that would make programmers completely redundant by writing all your database code for you. These days, we'd think of those tools as automated ORM generators, and they don't fix the problem; they just make a new one -- ORM impedance mismatch -- which needs more engineering on top to fix...

The tools don't change the need for the humans, they just change what's possible for the humans to do.

[1] FORAST executed in about 20,000 bytes of memory without even an OS. The compile artifacts for the map-reduce system I built today are an astonishing hundred million bytes... and don't include the necessary mapreduce environment, management interface, node operating system and distributed filesystem...

raffine , 21 Sep 2017 14:29
Whatever they are taught today will be obsolete tomorrow.
yannick95 -> savingUK , , 21 Sep 2017 14:27
"There are already top quality coders in China and India"

AHAHAHAHAHAHAHAHAHAHAHA *rolls on the floor laughing* Yes........ 1%... and 99% incredibly bad, incompetent, untalented ones that cost 50% of a good developer but produce only 5% in comparison. And I'm speaking from a LOT of practical experience across more than a dozen corporations all over the world which have been outsourcing to India... all have been disasters for the companies (but good for the execs who pocketed big bonuses and left the company before the disaster blew up in their faces).

Wiretrip -> mcharts , , 21 Sep 2017 14:25
Enough people have had their hands burnt by now with shit companies like TCS (Tata) that they are starting to look closer to home again...
TomRoche , 21 Sep 2017 14:11

Tech executives have pursued [the goal of suppressing workers' compensation] in a variety of ways. One is collusion – companies conspiring to prevent their employees from earning more by switching jobs. The prevalence of this practice in Silicon Valley triggered a justice department antitrust complaint in 2010, along with a class action suit that culminated in a $415m settlement.

Folks interested in the story of the Techtopus (less drily presented than in the links in this article) should check out Mark Ames' reporting, especially this overview article and this focus on the egregious Steve Jobs (whose canonization by the US corporate-funded media is just one more indictment of their moral bankruptcy).

Another, more sophisticated method is importing large numbers of skilled guest workers from other countries through the H-1B visa program. These workers earn less than their American counterparts, and possess little bargaining power because they must remain employed to keep their status.

Folks interested in H-1B and US technical visas more generally should head to Norm Matloff's summary page, and then to his blog on the subject.

Olympus68 , 21 Sep 2017 13:49

I have watched as schools run by trade unions have done the opposite for 5 decades. By limiting the number of graduates, they were able to help maintain living wages and benefits. This has been stopped in my area due to the pressure of owner-run "trade associations".

During that same time period I have witnessed trade associations controlled by company owners, while publicising their support of the average employee, invest enormous amounts of membership fees in creating alliances with public institutions. Their goal has been that of flooding the labor market and thus keeping wages low. A double hit for the average worker because membership fees were paid by employees as well as those in control.

And so it goes....

savingUK , 21 Sep 2017 13:38
Coding jobs are just as susceptible to being moved to lower-cost areas of the world as hardware jobs already have been. It's already happening. There are already top quality coders in China and India. There is a much larger pool to choose from and they are just as good as their western counterparts and work harder for much less money.

Globalisation is the reason, and trying to force wages up in one country simply moves the jobs elsewhere. The only way I can think of to limit this happening is to keep the company and coders working at the cutting edge of technology.

whitehawk66 , 21 Sep 2017 15:18

I'd be much more impressed if I saw the hordes of young male engineers here in SF expressing a semblance of basic common sense, basic self-awareness and basic life skills. I'd say 91.3% are oblivious, idiotic children.

They would definitely not survive the zombie apocalypse.

P.S. not every kid wants or needs to have their soul sucked out of them sitting in front of a screen full of code for some idiotic service that some other douchbro thinks is the next iteration of sliced bread.

UncommonTruthiness , 21 Sep 2017 14:10
The demonization of Silicon Valley is clearly the next place to put all blame. Look what "they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get a rope!

I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San Jose transform into a concrete jungle. There used to be quite a bit of semiconductor equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings have the same name : AVAILABLE. Most equipment and device manufacturing has moved to Asia.

Programming started with binary, then machine code (hexadecimal or octal) and moved to assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC, PL-1, COBOL, PASCAL, C (and all its "+'s") followed, making programming easier for the less talented. Now the script-based languages (HTML, JAVA, etc.) are even higher level and accessible to nearly all. Programming has become a commodity and will be priced like milk, wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a career.

William Fitch III , 21 Sep 2017 13:52
Hi: As I have said many times before, there is no shortage of people who fully understand the problem and can see all the connections.

However, they all fall on their faces when it comes to the solution. To cut to the chase, Concentrated Wealth needs to go, permanently. Of course the challenge is how to best accomplish this.....

.....Bill

MostlyHarmlessD , , 21 Sep 2017 13:16

Damn engineers and their black and white world view, if they weren't so inept they would've unionized instead of being trampled again and again in the name of capitalism.
mcharts -> Aldous0rwell , , 21 Sep 2017 13:07
Not maybe. Too late. American corporations' objective is to low-ball wages here in the US. In India they spoon-feed these pupils with affordable cutting-edge IT training for next-to-nothing rupees. These pupils then exaggerate their CVs and ship out en masse to the western world to dominate the IT industry. I've seen it with my own eyes in action. Those in charge will do anything and everything to maintain their grip on power. No brag. Just fact.

Woe to our children and grandchildren.

Where's Bernie Sanders when we need him.

[Oct 03, 2017] The dream of coding automation remains elusive... Very elusive...

Oct 03, 2017 | discussion.theguardian.com

Richard Livingstone -> Mishal Almohaimeed , 21 Sep 2017 14:46

Wrong again, that approach has been tried since the 80s and will keep failing only because software development is still more akin to a technical craft than an engineering discipline. The number of elements required to assemble a working non trivial system is way beyond scriptable.
freeandfair -> Taylor Dotson , 21 Sep 2017 14:26
> That's some crystal ball you have there. English teachers will need to know how to code? Same with plumbers? Same with janitors, CEOs, and anyone working in the service industry?

You don't believe there will be robots to do plumbing and cleaning? The cleaner's job will be to program robots to do what they need.
CEOs? Absolutely.

English teachers? Both of my kids have school laptops and everything is being done on the computers. The teachers use software and create websites and what not. Yes, even English teachers.

Not knowing / understanding how to code will be the same as not knowing how to use Word/ Excel. I am assuming there are people who don't, but I don't know any above the age of 6.

Wiretrip -> Mishal Almohaimeed , 21 Sep 2017 14:20
We've had 'automated coding scripts' for years for small tasks. However, anyone who says they're going to obviate programmers, analysts and designers doesn't understand the software development process.
Ethan Hawkins -> David McCaul , 21 Sep 2017 13:22
Even if expert systems (an 80's concept, BTW) could code, we'd still have a huge need for managers. The hard part of software isn't even the coding. It's determining the requirements and working with clients. It will require general intelligence to do 90% of what we do right now. The 10% we could automate right now, mostly gets in the way. I agree it will change, but it's going to take another 20-30 years to really happen.
Mishal Almohaimeed -> PolydentateBrigand , , 21 Sep 2017 13:17
Wrong, software companies are already developing automated coding scripts. You'll get a bunch of door-to-door knife salespeople once the dust settles - that's what you'll get.
freeandfair -> rgilyead , , 21 Sep 2017 14:22
> In 20 years time AI will be doing the coding

Possible, but you still have to understand how AI operates and what it can and cannot do.

[Oct 03, 2017] Coding and carpentry are not so distant, are they ?

Thw user "imipak" views are pretty common misconceptions. They are all wrong.
Notable quotes:
"... I was about to take offence on behalf of programmers, but then I realized that would be snobbish and insulting to carpenters too. Many people can code, but only a few can code well, and fewer still become the masters of the profession. Many people can learn carpentry, but few become joiners, and fewer still become cabinetmakers. ..."
"... Many people can write, but few become journalists, and fewer still become real authors. ..."
Oct 03, 2017 | discussion.theguardian.com

imipak, 21 Sep 2017 15:13

Coding has little or nothing to do with Silicon Valley. They may or may not have ulterior motives, but ultimately they are nothing in the scheme of things.

I disagree with teaching coding as a discrete subject. I think it should be combined with home economics and woodworking because 90% of these subjects consist of transferable skills that exist in all of them. Only a tiny residual is actually topic-specific.

In the case of coding, the residual consists of drawing skills and typing skills. Programming language skills? Irrelevant. You should choose the tools to fit the problem. Neither of these needs a computer. You should only ever approach the computer at the very end, after you've designed and written the program.

Is cooking so very different? Do you decide on the ingredients before or after you start? Do you go shopping half-way through cooking an omelette?

With woodwork, do you measure first or cut first? Do you have a plan or do you randomly assemble bits until it does something useful?

Real coding, taught correctly, is barely taught at all. You teach the transferable skills. ONCE. You then apply those skills in each area in which they apply.

What other transferable skills apply? Top-down design, bottom-up implementation. The correct methodology in all forms of engineering. Proper testing strategies, also common across all forms of engineering. However, since these tests are against logic, they're a test of reasoning. A good thing to have in the sciences and philosophy.

Technical writing is the art of explaining things to idiots. Whether you're designing a board game, explaining what you like about a house, writing a travelogue or just seeing if your wild ideas hold water, you need to be able to put those ideas down on paper in a way that exposes all the inconsistencies and errors. It doesn't take much to clean it up to be readable by humans. But once it is cleaned up, it'll remain free of errors.

So I would teach a foundation course that teaches top-down reasoning, bottom-up design, flowcharts, critical path analysis and symbolic logic. Probably aimed at age 7. But I'd not do so wholly in the abstract. I'd have it thoroughly mixed in with one field, probably cooking as most kids do that and it lacks stigma at that age.

I'd then build courses on various crafts and engineering subjects on top of that, building further hierarchies where possible. Eliminate duplication and severely reduce the fictions we call disciplines.

oldzealand, 21 Sep 2017 14:58
I used to employ 200 computer scientists in my business and now teach children so I'm apparently as guilty as hell. To be compared with a carpenter is, however, a true compliment, if you mean those that create elegant, aesthetically-pleasing, functional, adaptable and long-lasting bespoke furniture, because our crafts of problem-solving using limited resources in confined environments to create working, life-improving artifacts both exemplify great human ingenuity in action. Capitalism or no.
peter nelson, 21 Sep 2017 14:29
"But coding is not magic. It is a technical skill, akin to carpentry."

But some people do it much better than others. Just like journalism. This article is complete nonsense, as I discuss in another comment. The author might want to consider a career in carpentry.

Fanastril, 21 Sep 2017 14:13
"But coding is not magic. It is a technical skill, akin to carpentry."

It is a way of thinking. Perhaps carpentry is too, but the arrogance of the above statement shows a soul who is done thinking.

NDReader, 21 Sep 2017 14:12
"But coding is not magic. It is a technical skill, akin to carpentry."

I was about to take offence on behalf of programmers, but then I realized that would be snobbish and insulting to carpenters too. Many people can code, but only a few can code well, and fewer still become the masters of the profession. Many people can learn carpentry, but few become joiners, and fewer still become cabinetmakers.

Many people can write, but few become journalists, and fewer still become real authors.

MostlyHarmlessD, 21 Sep 2017 13:08
A carpenter!? Good to know that engineers are still thought of as jumped up tradesmen.

[Oct 02, 2017] Programming vs coding

This idiotic US term "coder" is complete baloney.
Notable quotes:
"... You can learn to code, but that doesn't mean you'll be good at it. There will be a few who excel but most will not. This isn't a reflection on them but rather the reality of the situation. In any given area some will do poorly, more will do fairly, and a few will excel. The same applies in any field. ..."
"... Oh no, there's loads of people who say they're coders, who have on their CV that they're coders, that have been paid to be coders. Loads of them. Amazingly, about 9 out of 10 of them, experienced coders all, spent ages doing it, not a problem to do it, definitely a coder, not a problem being "hands on"... can't actually write working code when we actually ask them to. ..."
"... I feel for your brother, and I've experienced the exact same BS "test" that you're describing. However, when I said "rudimentary coding exam", I wasn't talking about classic fiz-buz questions, Fibonacci problems, whiteboard tests, or anything of the sort. We simply ask people to write a small amount of code that will solve a simple real world problem. Something that they would be asked to do if they got hired. We let them take a long time to do it. We let them use Google to look things up if they need. You would be shocked how many "qualified applicants" can't do it. ..."
"... "...coding is not magic. It is a technical skill, akin to carpentry. " I think that is a severe underestimation of the level of expertise required to conceptualise and deliver robust and maintainable code. The complexity of integrating software is more equivalent to constructing an entire building with components of different materials. If you think teaching coding is enough to enable software design and delivery then good luck. ..."
"... Being able to write code and being able to program are two very different skills. In language terms its the difference between being able to read and write (say) English and being able to write literature; obviously you need a grasp of the language to write literature but just knowing the language is not the same as being able to assemble and marshal thought into a coherent pattern prior to setting it down. ..."
"... What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra. ..."
"... Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it. ..."
"... A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job. Secondarily, while I agree that one day our field might be replaced by automation, there's a level of creativity involved with good software engineering that makes your carpenter comparison a bit flawed. ..."
Oct 02, 2017 | profile.theguardian.com
Wiretrip -> Mark Mauvais , 21 Sep 2017 14:23
Yes, 'engineers' (and particularly mathematicians) write appalling code.
Trumbledon , 21 Sep 2017 14:23
A good developer can easily earn £600-800 per day, which suggests to me that they are in high demand, and society needs more of them.
Wiretrip -> KatieL , 21 Sep 2017 14:22
Agreed, to many people 'coding' consists of copying other people's JavaScript snippets from StackOverflow... I tire of the many frauds in the business...
stratplaya , 21 Sep 2017 14:21
You can learn to code, but that doesn't mean you'll be good at it. There will be a few who excel but most will not. This isn't a reflection on them but rather the reality of the situation. In any given area some will do poorly, more will do fairly, and a few will excel. The same applies in any field.
peter nelson -> UncommonTruthiness , 21 Sep 2017 14:21

The ship has sailed on this activity as a career.

Oh, rubbish. I'm in the process of retiring from my job as an Android software designer so I'm tasked with hiring a replacement for my organisation. It pays extremely well, the work is interesting, and the company is successful and serves an important worldwide industry.

Still, finding highly-qualified people is hard and they get snatched up in mid-interview because the demand is high. Not only that but at these pay scales, we can pretty much expect the Guardian will do yet another article about the unconscionable gap between what rich, privileged techies like software engineers make and everyone else.

Really, we're damned if we do and damned if we don't. If tech workers are well-paid we're castigated for gentrifying neighbourhoods and living large, and yet anything that threatens to lower what we're paid produces conspiracy-theory articles like this one.

Fanastril -> Taylor Dotson , 21 Sep 2017 14:17
I learned to cook in school. Was there a shortage of cooks? No. Did I become a professional cook? No. But I sure as hell would not have missed the skills I learned for the world, and I use them every day.
KatieL -> Taylor Dotson , 21 Sep 2017 14:13
Oh no, there's loads of people who say they're coders, who have on their CV that they're coders, that have been paid to be coders. Loads of them. Amazingly, about 9 out of 10 of them, experienced coders all, spent ages doing it, not a problem to do it, definitely a coder, not a problem being "hands on"... can't actually write working code when we actually ask them to.
youngsteveo -> Taylor Dotson , 21 Sep 2017 14:12
I feel for your brother, and I've experienced the exact same BS "test" that you're describing. However, when I said "rudimentary coding exam", I wasn't talking about classic fiz-buz questions, Fibonacci problems, whiteboard tests, or anything of the sort. We simply ask people to write a small amount of code that will solve a simple real world problem. Something that they would be asked to do if they got hired. We let them take a long time to do it. We let them use Google to look things up if they need. You would be shocked how many "qualified applicants" can't do it.
Fanastril -> Taylor Dotson , 21 Sep 2017 14:11
It is not zero-sum: If you teach something empowering, like programming, motivating is a lot easier, and they will learn more.

KatieL -> Taylor Dotson , 21 Sep 2017 14:10
"intelligence, creativity, diligence, communication ability, or anything else that a job"

None of those are any use if, when asked to turn your intelligent, creative, diligent, communicated idea into some software, you perform as well as most candidates do at simple coding assessments... and write stuff that doesn't work.

peter nelson , 21 Sep 2017 14:09

At its root, the campaign for code education isn't about giving the next generation a shot at earning the salary of a Facebook engineer. It's about ensuring those salaries no longer exist, by creating a source of cheap labor for the tech industry.

Of course the writer does not offer the slightest shred of evidence to support the idea that this is the actual goal of these programs. So it appears that the tinfoil-hat conspiracy brigade on the Guardian is operating not only below the line, but above it, too.

The fact is that few of these students will ever become software engineers (which, incidentally, is my profession) but programming skills are essential in many professions for writing little scripts to automate various tasks, or to just understand 21st century technology.

kcrane , 21 Sep 2017 14:07
Sadly this is another article by a partial journalist who knows nothing about the software industry, but hopes to subvert what he has read somewhere to support a position he had already assumed. As others have said, understanding coding has already become akin to being able to use a pencil. It is a basic requirement for many higher-level roles.

But knowing which end of a pencil to put on the paper (the equivalent of the level of coding taught in schools) isn't the same as being an artist. Moreover, anyone who knows the field recognises that top coders are gifted; they embody genius. There are coding Caravaggios out there, but few have the experience to know that. No amount of teaching will produce high-level coders from average humans; there is an intangible something needed, as there is in music and art, to elevate the merely good to genius.

All to say, however many are taught the basics, it won't push down the value of the most talented coders, and so won't reduce the costs of the technology industry in any meaningful way as it is an industry, like art, that relies on the few not the many.

DebuggingLife , 21 Sep 2017 14:06
Not all of those children will want to become programmers, but at least the barrier to entry - for more to at least experience it - will be lower.

Teaching music only to the children whose parents can afford music tuition means that society misses out on the potential for some incredibly gifted musicians to shine through.

Moreover, learning to code really means learning how to wrangle with the practical application of abstract concepts, algorithms, numerical skills, logic, reasoning, etc., which are all transferrable skills, some of which are not in the scope of other classes, certainly not practically.
Like music, sport, literature etc., programming a computer, a website, a device or a smartphone is an endeavour that can be truly rewarding as merely a pastime, and similarly is limited only by one's imagination.

rgilyead , 21 Sep 2017 14:01
"...coding is not magic. It is a technical skill, akin to carpentry. " I think that is a severe underestimation of the level of expertise required to conceptualise and deliver robust and maintainable code. The complexity of integrating software is more equivalent to constructing an entire building with components of different materials. If you think teaching coding is enough to enable software design and delivery then good luck.
Taylor Dotson -> cwblackwell , 21 Sep 2017 14:00
Yeah, but mania over coding skills inevitably pushes other skills out of the curriculum (or deemphasizes them). Education is zero-sum in that there's only so much time and energy to devote to it. Hence, you need more than vague appeals to "enhancement," especially given the risks pointed out by the author.
Taylor Dotson -> PolydentateBrigand , 21 Sep 2017 13:57
"Talented coders will start new tech businesses and create more jobs."

That could be argued for any skill set, including those found in the humanities and social sciences likely to be pushed out by the mania over coding ability. Education is zero-sum: Time spent on one subject is time that invariably can't be spent learning something else.

Taylor Dotson -> WumpieJr , 21 Sep 2017 13:49
"If they can't literally fix everything let's just get rid of them, right?"

That's a strawman. His point is rooted in the recognition that we only have so much time, energy, and money to invest in solutions. Ones that feel good but may not do anything distract us from the deeper structural issues in our economy. The problem with thinking "education" will fix everything is that it leaves the status quo unquestioned.

martinusher , 21 Sep 2017 13:31
Being able to write code and being able to program are two very different skills. In language terms it's the difference between being able to read and write (say) English and being able to write literature; obviously you need a grasp of the language to write literature but just knowing the language is not the same as being able to assemble and marshal thought into a coherent pattern prior to setting it down.

To confuse things further, there are various levels of skill that all look the same to the untutored eye. Suppose you wished to bridge a waterway. If that waterway was a narrow ditch then you could just throw a plank across. As the distance to be spanned got larger and larger, eventually you'd have to abandon intuition for engineering and experience. Exactly the same issues happen with software, but they're less tangible; anyone can build a small program, but a complex system requires a lot of other knowledge (in my field, that's engineering knowledge -- coding is almost an afterthought).

It's a good idea to teach young people to code, but I wouldn't raise their expectations of huge salaries too much. For children, educating them in wider, more general fields and abstract activities such as music will pay off huge dividends, far more than just teaching them whatever the fashionable language du jour is. (...which should be Logo, but it's too subtle and abstract; it doesn't look "real world" enough!)

freeandfair , 21 Sep 2017 13:30
I don't see this as an issue. Sure, there could be ulterior motives there, but anyone who wants to still be employed in 20 years has to know how to code. It is not that everyone will be a coder, but their jobs will either include part-time coding or will require an understanding of software and what it can and cannot do. AI is going to be everywhere.
WumpieJr , 21 Sep 2017 13:23
What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra.

But it isn't just about coding for Tarnoff. He seems to hold education in contempt generally. "The far-fetched premise of neoliberal school reform is that education can mend our disintegrating social fabric." If they can't literally fix everything let's just get rid of them, right?

Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it.

youngsteveo , 21 Sep 2017 13:16
I'm not going to argue that the goal of mass education isn't to drive down wages, but the idea that the skills gap is a myth doesn't hold water in my experience. I'm a software engineer and manager at a company that pays well over the national average, with great benefits, and it is downright difficult to find a qualified applicant who can pass a rudimentary coding exam.

A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job. Secondarily, while I agree that one day our field might be replaced by automation, there's a level of creativity involved with good software engineering that makes your carpenter comparison a bit flawed.

[Oct 02, 2017] Does programming provide a new path to the middle class? Probably no longer, unless you are really talented. In the latter case it is not that different from any other field, but the pressure from H-1B makes it harder for programmers. The neoliberal USA has a real problem with social mobility

Notable quotes:
"... I do think it's peculiar that Silicon Valley requires so many H1B visas... 'we can't find the talent here' is the main excuse ..."
"... This is interesting. Indeed, I do think there is excess supply of software programmers. ..."
"... Well, it is either that or the kids themselves who have to pay for it and they are even less prepared to do so. Ideally, college education should be tax payer paid but this is not the case in the US. And the employer ideally should pay for the job related training, but again, it is not the case in the US. ..."
"... Plenty of people care about the arts but people can't survive on what the arts pay. That was pretty much the case all through human history. ..."
"... I was laid off at your age in the depths of the recent recession and I got a job. ..."
"... The great thing about software , as opposed to many other jobs, is that it can be done at home which you're laid off. Write mobile (IOS or Android) apps or work on open source projects and get stuff up on github. I've been to many job interviews with my apps loaded on mobile devices so I could show them what I've done. ..."
"... Schools really can't win. Don't teach coding, and you're raising a generation of button-pushers. Teach it, and you're pandering to employers looking for cheap labour. Unions in London objected to children being taught carpentry in the twenties and thirties, so it had to be renamed "manual instruction" to get round it. Denying children useful skills is indefensible. ..."
Oct 02, 2017 | discussion.theguardian.com
swelle , 21 Sep 2017 17:36
I do think it's peculiar that Silicon Valley requires so many H1B visas... 'we can't find the talent here' is the main excuse, though many 'older' (read: over 40) native-born tech workers will tell you there's plenty of talent here already, but even with the immigration hassles, H1B workers will be cheaper overall...

Julian Williams , 21 Sep 2017 18:06

This is interesting. Indeed, I do think there is excess supply of software programmers. There is only a modest number of decent jobs, say as an algorithms developer in finance, general architecture of complex systems or to some extent in systems security. However, these jobs are usually occupied and the incumbents are not likely to move on quickly. Road blocks are also put up by creating sub networks of engineers who ensure that some knowledge is not ubiquitous.

Most very high paying jobs in the technology sector are in the same standard upper management roles as in every other industry.

Still, the ability to write a computer program is an enabler; knowing how it works means you have the ability to imagine something and make it real. To me it is a bit like language: some people can use language to make more money than others, but it is still important to have a basic level of understanding.

FabBlondie -> peter nelson , 21 Sep 2017 17:42
And yet I know a lot of people that has happened to. Better to replace a $125K a year programmer with one who will do the same, or even less, job for $50K.

JMColwill , 21 Sep 2017 18:17

This could backfire if the programmers don't find the work or pay to match their expectations... Programmers, after all, tend to make very good hackers if their minds are turned to it.

freeandfair -> FabBlondie , 21 Sep 2017 18:23

> While I like your idea of what designing a computer program involves, in my nearly 40 years experience as a programmer I have rarely seen this done.

Well, I am a software architect and what he says sounds correct for a certain type of applications. Maybe you do a different type of programming.

peter nelson -> FabBlondie , 21 Sep 2017 18:23

While I like your idea of what designing a computer program involves, in my nearly 40 years experience as a programmer I have rarely seen this done.

How else can you do it?

Java is popular because it's a very versatile language. On this list it's the most popular general-purpose programming language (above it, JavaScript is just a scripting language, and HTML/CSS aren't even programming languages): https://fossbytes.com/most-used-popular-programming-languages/ ... Below it you have to go down to C# at 20% to reach another general-purpose language, and even that's a Microsoft house language.

The "correct" choice of programming language is also based on how many people in the shop know it, so they can maintain code written in it by someone else.

freeandfair -> FabBlondie , 21 Sep 2017 18:22
> job-specific training is completely different. What a joke to persuade public school districts to pick up the tab on job training.

Well, it is either that or the kids themselves who have to pay for it and they are even less prepared to do so. Ideally, college education should be tax payer paid but this is not the case in the US. And the employer ideally should pay for the job related training, but again, it is not the case in the US.

freeandfair -> mlzarathustra , 21 Sep 2017 18:20
> The bigger problem is that nobody cares about the arts, and as expensive as education is, nobody wants to carry around a debt on a skill that won't bring in the bucks

Plenty of people care about the arts but people can't survive on what the arts pay. That was pretty much the case all through human history.

theindyisbetter -> Game Cabbage , 21 Sep 2017 18:18
No. The amount of work is not a fixed sum. That's the lump of labour fallacy. We are not tied to the land.
ConBrio , 21 Sep 2017 18:10
Since newspapers are consolidating and cutting jobs, gotta clamp down on colleges offering BA degrees, particularly in English Literature and journalism.

And then... and...then...and...

LMichelle -> chillisauce , 21 Sep 2017 18:03
This article focuses on the US schools, but I can imagine it's the same in the UK. I don't think these courses are going to be about creating great programmers capable of new innovations as much as having a work force that can be their own IT Help Desk.

They'll learn just enough in these classes to do that.

Then most companies will be hiring for other jobs, but want to make sure you have the IT skills to serve as your own "help desk" (although they will get no salary for their IT work).

edmundberk -> FabBlondie , 21 Sep 2017 17:57
I find that quite remarkable - 40 years ago you must have been using assembler and with hardly any memory to work with. If you blitzed through that without applying the thought processes described, well...I'm surprised.
James Dey , 21 Sep 2017 17:55
Funny. Every day in the Brexit articles, I read that increasing the supply of workers has negligible effect on wages.
peter nelson -> peterainbow , 21 Sep 2017 17:54
I was laid off at your age in the depths of the recent recession and I got a job. As I said in another posting, it usually comes down to fresh skills and good personal references who will vouch for your work-habits and how well you get on with other members of your team.

The great thing about software, as opposed to many other jobs, is that it can be done at home while you're laid off. Write mobile (iOS or Android) apps or work on open source projects and get stuff up on GitHub. I've been to many job interviews with my apps loaded on mobile devices so I could show them what I've done.

Game Cabbage -> theindyisbetter , 21 Sep 2017 17:52
The situation has a direct comparison to today. It has nothing to do with land. There was a certain amount of profit making work and not enough labour to satisfy demand. There is currently a certain amount of profit making work and in many situations (especially unskilled low paid work) too much labour.
edmundberk , 21 Sep 2017 17:52
So, is teaching people English or arithmetic all about reducing wages for the literate and numerate?

Or is this the most obtuse argument yet for avoiding what everyone in tech knows - even more blatantly than in many other industries, wages are curtailed by offshoring; and in the US, by having offshoring centres on US soil.

chillisauce , 21 Sep 2017 17:48
Well, speaking as someone who spends a lot of time trying to find really good programmers... frankly there aren't that many about. We take most of ours from Eastern Europe and SE Asia, which is quite expensive, given the relocation costs to the UK. But worth it.

So, yes, if more British kids learnt about coding, it might help a bit. But not much; the real problem is that few kids want to study IT in the first place, and that the tuition standards in most UK universities are quite low, even if they get there.

Baobab73 , 21 Sep 2017 17:48
True......
peter nelson -> rebel7 , 21 Sep 2017 17:47
There was recently a programme/podcast on ABC/RN about the HUGE shortage in Australia of techies with specialized security skills.
peter nelson -> jigen , 21 Sep 2017 17:46
Robots, or AI, are already making us more productive. I can write programs today in an afternoon that would have taken me a week a decade or two ago.

I can create a class and the IDE will take care of all the accessors and dependencies, enforce our style-guide compliance, stub in the documentation, even most test cases, etc., and all I have to write is the very specific stuff required by my application - the other 90% is generated for me. Same with UI/UX - it stubs in relevant event handlers, bindings, dependencies, etc.

Programmers are a zillion times more productive than in the past, yet the demand keeps growing because so much more stuff in our lives has processors and code. Your car has dozens of processors running lots of software; your TV, your home appliances, your watch, etc.

Quaestor , 21 Sep 2017 17:43

Schools really can't win. Don't teach coding, and you're raising a generation of button-pushers. Teach it, and you're pandering to employers looking for cheap labour. Unions in London objected to children being taught carpentry in the twenties and thirties, so it had to be renamed "manual instruction" to get round it. Denying children useful skills is indefensible.

jamesupton , 21 Sep 2017 17:42
Getting children to learn how to write code, as part of core education, will be the first step to the long overdue revolution. The rest of us will still have to stick to burning buildings down and stringing up the aristocracy.
cjenk415 -> LMichelle , 21 Sep 2017 17:40
did you misread? it seemed like he was emphasizing that learning to code, like learning art (and sports and languages), will help them develop skills that benefit them in whatever profession they choose.
FabBlondie -> peter nelson , 21 Sep 2017 17:40
While I like your idea of what designing a computer program involves, in my nearly 40 years' experience as a programmer I have rarely seen this done. And, FWIW, IMHO choosing the tool (programming language) might reasonably be expected to follow designing a solution; in practice this rarely happens. No, these days it's Java all the way, from day one.
theindyisbetter -> Game Cabbage , 21 Sep 2017 17:40
There was a fixed supply of land and a reduced supply of labour to work the land.

Nothing like the situation in a modern economy.

LMichelle , 21 Sep 2017 17:39
I'd advise parents that the classes they need to make sure their kids excel in are acting/drama. There is no better way to get that promotion or increase your pay in the job market than being a skilled actor. It's a fake-it-till-you-make-it deal.
theindyisbetter , 21 Sep 2017 17:36
What a ludicrous argument.

Let's not teach maths or science or literacy either - then anyone with those skills will earn more.

SheriffFatman -> Game Cabbage , 21 Sep 2017 17:36

After the Black Death in the middle ages there was a huge under supply of labour. It produced a consistent rise in wages and conditions

It also produced wage-control legislation (which admittedly failed to work).

peter nelson -> peterainbow , 21 Sep 2017 17:32
if there were truly a shortage i wouldn't be unemployed

I've heard that before but when I've dug deeper I've usually found someone who either let their skills go stale, or who had some work issues.

LMichelle -> loveyy , 21 Sep 2017 17:26
Really? You think they are going to emphasize things like the importance of privacy and consumer rights?
loveyy , 21 Sep 2017 17:25
This really has to be one of the silliest articles I read here in a very long time.
People, let your children learn to code. Even more, educate yourselves and start to code just for the fun of it - look at it like a game.
The more people know how to code, the more likely they are to understand how stuff works. If you were ever frustrated by how impossible it seems to shop on certain websites, learn to code and you will be frustrated no more. You will understand the intent behind the process.
Even more, you will understand the inherent limitations and what is the meaning of safety. You will be able to better protect yourself in a real time connected world.

Learning to code won't turn your kid into a programmer, just like ballet or piano classes won't mean they'll ever choose art as their livelihood. So let the children learn to code and learn along with them

Game Cabbage , 21 Sep 2017 17:24
Tipping power to employers in any profession by oversupply of labour is not a good thing. Bit of a macabre example here but...After the Black Death in the middle ages there was a huge under supply of labour. It produced a consistent rise in wages and conditions and economic development for hundreds of years after this. Not suggesting a massive depopulation. But you can achieve the same effects by altering the power balance. With decades of Neoliberalism, the employers side of the power see-saw is sitting firmly in the mud and is producing very undesired results for the vast majority of people.
Zuffle -> peterainbow , 21 Sep 2017 17:23
Perhaps you're just not very good. I've been a developer for 20 years and I've never had more than 1 week of unemployment.
Kevin P Brown -> peterainbow , 21 Sep 2017 17:20
" at 55 finding it impossible to get a job"

I am 59, and it is not just the age aspect, it is the money aspect. They know you have experience and expectations, yet they believe hiring someone half the age and half the price, times two, will replace your knowledge. I have been contracting in IT for 30 years, and now it is obvious it is over. Experience at some point no longer mitigates age. I think I am at that point now.

TheLane82 , 21 Sep 2017 17:20
Completely true! What needs to happen instead is to teach the real valuable subjects.

Gender studies. Islamic studies. Black studies. All important issues that need to be addressed.

peter nelson -> mlzarathustra , 21 Sep 2017 17:06
Dear, dear, I know, I know, young people today . . . just not as good as we were. Everything is just going down the loo . . . Just have a nice cuppa camomile (or chamomile if you're a Yank) and try to relax ... " hey you kids, get offa my lawn !"
FabBlondie , 21 Sep 2017 17:06
There are good reasons to teach coding. Too many of today's computer users are amazingly unaware of the technology that allows them to send and receive emails, use their smart phones, and use websites. Few understand the basic issues involved in computer security, especially as it relates to their personal privacy. Hopefully some introductory computer classes could begin to remedy this, and the younger the students the better.

Security problems are not strictly a matter of coding.

Security issues persist in tech. Clearly that is not a function of the size of the workforce. I propose that it is a function of poor management and design skills. These are not taught in any programming class I ever took. I learned these on the job and in an MBA program, and because I was determined.

Don't confuse basic workforce training with an effective application of tech to authentic needs.

How can the "disruption" so prized in today's Big Tech do anything but aggravate our social problems? Tech's disruption begins with a blatant ignorance of and disregard for causes, and believes to its bones that a high tech app will truly solve a problem it cannot even describe.

Kool Aid anyone?

peterainbow -> brady , 21 Sep 2017 17:05
indeed, that idea has been around as long as COBOL, and in practice has just made things worse. The fact that many people outside of software engineering don't seem to realise is that the coding itself is a relatively small part of the job
FabBlondie -> imipak , 21 Sep 2017 17:04
Hurrah.
peterainbow -> rebel7 , 21 Sep 2017 17:04
so how many female and old software engineers are there who are unable to get a job, i'm one of them at 55 finding it impossible to get a job and unlike many 'developers' i know what i'm doing
peterainbow , 21 Sep 2017 17:02
meanwhile the age and sex discrimination in IT goes on, if there were truly a shortage i wouldn't be unemployed
Jared Hall -> peter nelson , 21 Sep 2017 17:01
Training more people for an occupation will result in more people becoming qualified to perform that occupation, regardless of the fact that many will perform poorly at it. A CS degree is no guarantee of competency, but it is one of the best indicators of general qualification we have at the moment. If you can provide a better metric for analyzing the underlying qualifications of the labor force, I'd love to hear it.

Regarding your anecdote: while interesting, it is poor evidence when compared to the aggregate statistical data analyzed in the EPI study.

peter nelson -> FabBlondie , 21 Sep 2017 17:00

Job-specific training is completely different.

Good grief. It's not job-specific training. You sound like someone who knows nothing about computer programming.

Designing a computer program requires analysing the task; breaking it down into its components, prioritising them and identifying interdependencies, and figuring out which parts of it can be broken out and done separately. Expressing all this in some programming language like Java, C, or C++ is quite secondary.

So once you learn to organise a task properly you can apply it to anything - remodeling a house, planning a vacation, repairing a car, starting a business, or administering a (non-software) project at work.

[Oct 02, 2017] Evaluation of potential job candidates for a programming job should include evaluation of their previous projects and code written

Notable quotes:
"... Thank you. The kids that spend high school researching independently and spend their nights hacking just for the love of it and getting a job without college are some of the most competent I've ever worked with. Passionless college grads that just want a paycheck are some of the worst. ..."
"... how about how new labor tried to sign away IT access in England to India in exchange for banking access there, how about the huge loopholes in bringing in cheap IT workers from elsewhere in the world, not conspiracies, but facts ..."
"... And I've never recommended hiring anyone right out of school who could not point me to a project they did on their own, i.e., not just grades and test scores. I'd like to see an IOS or Android app, or a open-source component, or utility or program of theirs on GitHub, or something like that. ..."
"... most of what software designers do is not coding. It requires domain knowledge and that's where the "smart" IDEs and AI coding wizards fall down. It will be a long time before we get where you describe. ..."
Oct 02, 2017 | discussion.theguardian.com

peter nelson -> c mm , 21 Sep 2017 19:49

Instant feedback is one of the things I really like about programming, but it's also the thing that some people can't handle. As I'm developing a program, all day long the compiler is telling me about build errors or warnings, or when I go to execute it, it crashes or produces unexpected output, etc. Software engineers are bombarded all day with negative feedback and little failures. You have to be thick-skinned for this work.
peter nelson -> peterainbow , 21 Sep 2017 19:42
How is it shallow and lazy? I'm hiring for the real world so I want to see some real world accomplishments. If the candidate is fresh out of university they can't point to work projects in industry because they don't have any. But they CAN point to stuff they've done on their own. That shows both motivation and the ability to finish something. Why do you object to it?
anticapitalist -> peter nelson , 21 Sep 2017 14:47
Thank you. The kids that spend high school researching independently and spend their nights hacking just for the love of it and getting a job without college are some of the most competent I've ever worked with. Passionless college grads that just want a paycheck are some of the worst.
John Kendall , 21 Sep 2017 19:42
There is a big difference between "coding" and programming. Coding for a smart phone app is a matter of calling functions that are built into the device. For example, there are functions for the GPS or for creating buttons or for simulating motion in a game. These are what we used to call subroutines. The difference is that whereas we had to write our own subroutines, now they are just preprogrammed functions. How those functions are written is of little or no importance to today's coders.

Nor are they able to program on that level. Real programming requires not only a knowledge of programming languages, but also a knowledge of the underlying algorithms that make up actual programs. I suspect that "coding" classes operate on a quite superficial level.

Game Cabbage -> theindyisbetter , 21 Sep 2017 19:40
It's not about the amount of work or the amount of labor. It's about the comparative availability of both and how that affects the balance of power, and that in turn affects the overall quality of life for the 'majority' of people.
c mm -> Ed209 , 21 Sep 2017 19:39
Most of this is not true. Peter Nelson gets it right by talking about breaking steps down and thinking rationally. The reason you can't just teach the theory, however, is that humans learn much better with feedback. Think about trying to learn how to build a fast car, but you never get in and test its speed. That would be silly. Programming languages take the system of logic that has been developed for centuries and gives instant feedback on the results. It's a language of rationality.
peter nelson -> peterainbow , 21 Sep 2017 19:37
This article is about the US. The tech industry in the EU is entirely different, and basically moribund. Where is the EU's Microsoft, Apple, Google, Amazon, Oracle, Intel, Facebook, etc, etc? The opportunities for exciting interesting work, plus the time and schedule pressures that force companies to overlook stuff like age because they need a particular skill Right Now, don't exist in the EU. I've done very well as a software engineer in my 60's in the US; I cannot imagine that would be the case in the EU.
peterainbow -> peter nelson , 21 Sep 2017 19:37
sorry but that's just not true, i doubt you are really programming still, or quasi programmer but really a manager who like to keep their hand in, you certainly aren't busy as you've been posting all over this cif. also why would you try and hire someone with such disparate skillsets, makes no sense at all

oh and you'd be correct that i do have workplace issues, ie i have a disability and i also suffer from depression, but that shouldn't bar me from employment and again regarding my skills going stale, that again contradicts your statement that it's about planning/analysis/algorithms etc that you said above ( which to some extent i agree with )

c mm -> peterainbow , 21 Sep 2017 19:36
Not at all, it's really egalitarian. If I want to hire someone to paint my portrait, the best way to know if they're any good is to see their previous work. If they've never painted a portrait before then I may want to go with the girl who has
c mm -> ragingbull , 21 Sep 2017 19:34
There is definitely not an excess. Just look at the projected jobs for computer science on the Bureau of Labor Statistics site.
c mm -> perble conk , 21 Sep 2017 19:32
Right? It's ridiculous. "Hey, there's this industry you can train for that is super valuable to society and pays really well!"
Then Ben Tarnoff, "Don't do it! If you do you'll drive down wages for everyone else in the industry. Build your fire starting and rock breaking skills instead."
peterainbow -> peter nelson , 21 Sep 2017 19:29
how about how New Labour tried to sign away IT access in England to India in exchange for banking access there, how about the huge loopholes in bringing in cheap IT workers from elsewhere in the world; not conspiracies, but facts
peter nelson -> eirsatz , 21 Sep 2017 19:25
I think the difference between gifted and not is motivation. But I agree it's not innate. The kid who stayed up all night in high school hacking into the school server to fake his coding class grade is probably more gifted than the one who spent 4 years in college getting a BS in CS because someone told him he could get a job when he got out.

I've done some hiring in my life and I always ask them to tell me about stuff they did on their own.

peter nelson -> TheBananaBender , 21 Sep 2017 19:20

Most coding jobs are bug fixing.

The only bugs I have to fix are the ones I make.

peter nelson -> Ed209 , 21 Sep 2017 19:19
As several people have pointed out, writing a computer program requires analyzing and breaking down a task into steps, identifying interdependencies, prioritizing the order, figuring out what parts can be organized into separate tasks that be done separately, etc.

These are completely independent of the language - I've been programming for 40 years in everything from FORTRAN to APL to C to C# to Java and it's all the same. Not only that but they transcend programming - they apply to planning a vacation, remodeling a house, or fixing a car.

peter nelson -> ragingbull , 21 Sep 2017 19:14
Neither coding nor having a bachelor's degree in computer science makes you a suitable job candidate. I've done a lot of recruiting and interviews in my life, and right now I'm trying to hire someone. And I've never recommended hiring anyone right out of school who could not point me to a project they did on their own, i.e., not just grades and test scores. I'd like to see an iOS or Android app, or an open-source component, or a utility or program of theirs on GitHub, or something like that.

That's the thing that distinguishes software from many other fields - you can do something real and significant on your own. If you haven't managed to do so in 4 years of college you're not a good candidate.

peter nelson -> nickGregor , 21 Sep 2017 19:07
Within the next year coding will be old news and you will simply be able to describe things in ur native language in such a way that the machine will be able to execute any set of instructions you give it.

In a sense that's already true, as i noted elsewhere. 90% of the code in my projects (Java and C# in their respective IDEs) is machine generated. I do relatively little "coding". But the flaw in your idea is this: most of what software designers do is not coding. It requires domain knowledge and that's where the "smart" IDEs and AI coding wizards fall down. It will be a long time before we get where you describe.

Ricardo111 -> martinusher , 21 Sep 2017 19:03
Completely agree. At the highest levels there is more work that goes into managing complexity and making sure nothing is missed than in making the wheels turn and the beepers beep.
ragingbull , 21 Sep 2017 19:02
Hang on... if the current excess of computer science grads is not driving down wages, why would training more kids to code make any difference?
Ricardo111 -> youngsteveo , 21 Sep 2017 18:59
I've actually interviewed people for very senior technical positions in Investment Banks who had all the fancy talk in the world and yet failed at some very basic "write me a piece of code that does X" tests.

The next hurdle is people who have learned how to deal with certain situations and yet don't really understand how it works, so they are unable to figure it out if you change the problem parameters.

That said, the average coder is only slightly beyond this point. The ones who can take into account maintainability and flexibility for future enhancements when developing are already a minority, and those who can understand the why of software development process steps, design software system architectures or do a proper Technical Analysis are very rare.

eirsatz -> Ricardo111 , 21 Sep 2017 18:57
Hubris. It's easy to mistake efficiency born of experience as innate talent. The difference between a 'gifted coder' and a 'non gifted junior coder' is much more likely to be 10 or 15 years sitting at a computer, less if there are good managers and mentors involved.
Ed209 , 21 Sep 2017 18:57
Politicians love the idea of teaching children to 'code', because it sounds so modern, and nobody could possibly object... could they? Unfortunately it simply shows up their utter ignorance of technical matters, because there isn't a language called 'coding'. Computer programming languages have changed enormously over the years, and continue to evolve. If you learn the wrong language you'll be about as welcome in the IT industry as a lamp-lighter or a comptometer operator.

The pace of change in technology can render skills and qualifications obsolete in a matter of a few years, and only the very best IT employers will bother to retrain their staff - it's much cheaper to dump them. (Most IT posts are outsourced through agencies anyway - those that haven't been off-shored.)

peter nelson -> YEverKnot , 21 Sep 2017 18:54
And this isn't even a good conspiracy theory; it's a bad one. He offers no evidence that there's an actual plan or conspiracy to do this. I'm looking for an account of where the advocates of coding education met to plot this in some castle in Europe or maybe a secret document like "The Protocols of the Elders of Google", or some such.
TheBananaBender , 21 Sep 2017 18:52
Most jobs in IT are shit - desktop support, operations droids. Most coding jobs are bug fixing.
Ricardo111 -> Wiretrip , 21 Sep 2017 18:49
Tool Users Vs Tool Makers. The really good coders actually get why certain things work as they do and can adjust them for different conditions. The mass produced coders are basically code copiers and code gluing specialists.
peter nelson -> AmyInNH , 21 Sep 2017 18:49
People who get Masters and PhD's in computer science are not usually "coders" or software engineers - they're usually involved in obscure, esoteric research for which there really is very little demand. So it doesn't surprise me that they're unemployed. But if someone has a Bachelor's in CS and they're unemployed I would have to wonder what they spent their time at university doing.

The thing about software that distinguishes it from lots of other fields is that you can make something real and significant on your own. I would expect any recent CS major I hire to be able to show me an app or an open-source component or something similar that they made themselves, and not just test scores and grades. If they could not, then I wouldn't even think about hiring them.

Ricardo111 , 21 Sep 2017 18:44
Fortunately for those of us who are actually good at coding, the difference in productivity between a gifted coder and a non-gifted junior developer is something like 100-fold. Knowing how to code and actually being efficient at creating software programs and systems are about as far apart as knowing how to write and actually being able to write a bestselling exciting Crime trilogy.
peter nelson -> jamesupton , 21 Sep 2017 18:36

The rest of us will still have to stick to burning buildings down and stringing up the aristocracy.

If you know how to write software you can get a robot to do those things.

peter nelson -> Julian Williams , 21 Sep 2017 18:34
I do think there is excess supply of software programmers. There is only a modest number of decent jobs, say as an algorithms developer in finance, general architecture of complex systems or to some extent in systems security.

This article is about coding; most of those jobs require very little of that.

Most very high paying jobs in the technology sector are in the same standard upper management roles as in every other industry.

How do you define "high paying"? Everyone I know (and I know a lot because I've been a sw engineer for 40 years) who is working fulltime as a software engineer is making a high-middle-class salary, and can easily afford a home, travel on holiday, investments, etc.

YEverKnot , 21 Sep 2017 18:32

Tech's push to teach coding isn't about kids' success – it's about cutting wages

Nowt like a good conspiracy theory.
freeandfair -> WithoutPurpose , 21 Sep 2017 18:31
What is a stupidly low salary? 100K?
freeandfair -> AmyInNH , 21 Sep 2017 18:30
> Already there. I take it you skipped right past the employment prospects for US STEM grads - 50% chance of finding STEM work.

That just means 50% of them are no good and need to develop their skills further or try something else.
Not everyone with a STEM degree from some 3rd-rate college is capable of doing complex IT or STEM work.

peter nelson -> edmundberk , 21 Sep 2017 18:30

So, is teaching people English or arithmetic all about reducing wages for the literate and numerate?

Yes. Haven't you noticed how wage growth has flattened? That's because some do-gooders thought it would be a fine idea to educate the peasants. There was a time when only the well-to-do knew how to read and write, and that's why the well-to-do were well-to-do. Education is evil. Stop educating people and then those of us who know how to read and write can charge them for reading and writing letters and email. Better yet, we can have Chinese and Indians do it for us and we just charge a transaction fee.

AmyInNH -> peter nelson , 21 Sep 2017 18:27
Massive amounts of public use cars, it doesn't mean millions need schooling in auto mechanics. Same for software coding. We aren't even using those who have Bachelors, Masters and PhDs in CS.
carlospapafritas , 21 Sep 2017 18:27
"..importing large numbers of skilled guest workers from other countries through the H1-B visa program..."

"skilled" is good. H1B has long ( appx 17 years) been abused and turned into trafficking scheme. One can buy H1B in India. Powerful ethnic networks wheeling & dealing in US & EU selling IT jobs to essentially migrants.

The real IT wages haven't been stagnant but steadily falling from the 90s. It's easy to see why. $82K/year IT wage was about average in the 90s. Comparing the prices of housing (& pretty much everything else) between now gives you the idea.

freeandfair -> whitehawk66 , 21 Sep 2017 18:27
> not every kid wants or needs to have their soul sucked out of them sitting in front of a screen full of code for some idiotic service that some other douchbro thinks is the next iteration of sliced bread

Taking a couple of years of programming is not enough to do this as a job, don't worry.
But learning to code is like learning maths: it helps to develop logical thinking, which will benefit you in every area of your life.

James Dey , 21 Sep 2017 18:25
We should stop teaching our kids to be journalists, then your wage might go up.
peter nelson -> AmyInNH , 21 Sep 2017 18:23
What does this even mean?

[Oct 02, 2017] Programming is a culturally important skill

Notable quotes:
"... A lot of basic entry level jobs require a good level of Excel skills. ..."
"... Programming is a cultural skill; master it, or even understand it on a simple level, and you understand how the 21st century works, on the machinery level. To bereave the children of this crucial insight is to close off a door to their future. ..."
"... What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra. ..."
"... Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it. ..."
"... We've seen this kind of tactic for some time now. Silicon Valley is turning into a series of micromanaged sweatshops (that's what "agile" is truly all about) with little room for genuine creativity, or even understanding of what that actually means. I've seen how impossible it is to explain to upper level management how crappy cheap developers actually diminish productivity and value. All they see is that the requisition is filled for less money. ..."
"... Libertarianism posits that everyone should be free to sell their labour or negotiate their own arrangements without the state interfering. So if cheaper foreign labour really was undercutting American labout the Libertarians would be thrilled. ..."
"... Not producing enough to fill vacancies or not producing enough to keep wages at Google's preferred rate? Seeing as research shows there is no lack of qualified developers, the latter option seems more likely. ..."
"... We're already using Asia as a source of cheap labor for the tech industry. Why do we need to create cheap labor in the US? ..."
Oct 02, 2017 | discussion.theguardian.com
David McCaul -> IanMcLzzz , 21 Sep 2017 13:03
There are very few professional scribes nowadays; a good level of reading & writing is simply a default even for the lowest paid jobs. A lot of basic entry level jobs require a good level of Excel skills. Several years from now, basic coding will be necessary to manipulate basic tools for entry level jobs, especially as increasingly a lot of real code will be generated by expert systems supervised by a tiny number of supervisors. Coding jobs will go the same way that trucking jobs will go when driverless vehicles are perfected.

anticapitalist, 21 Sep 2017 14:25

Offer the class but not mandatory. Just like I could never succeed playing football others will not succeed at coding. The last thing the industry needs is more bad developers showing up for a paycheck.

Fanastril , 21 Sep 2017 14:08

Programming is a cultural skill; master it, or even understand it on a simple level, and you understand how the 21st century works, on the machinery level. To deprive the children of this crucial insight is to close off a door to their future. What's next, keep them off math? Because, you know...
Taylor Dotson -> freeandfair , 21 Sep 2017 13:59
That's some crystal ball you have there. English teachers will need to know how to code? Same with plumbers? Same with janitors, CEOs, and anyone working in the service industry?
PolydentateBrigand , 21 Sep 2017 12:59
The economy isn't a zero-sum game. Developing a more skilled workforce that can create more value will lead to economic growth and improvement in the general standard of living. Talented coders will start new tech businesses and create more jobs.

WumpieJr , 21 Sep 2017 13:23

What a dumpster argument. I am not a programmer or even close, but a basic understanding of coding has been important to my professional life. Coding isn't just about writing software. Understanding how algorithms work, even simple ones, is a general skill on par with algebra.

But it isn't just about coding for Tarnoff. He seems to hold education in contempt generally. "The far-fetched premise of neoliberal school reform is that education can mend our disintegrating social fabric." If they can't literally fix everything, let's just get rid of them, right?

Never mind that a good education is clearly one of the most important things you can do for a person to improve their quality of life wherever they live in the world. It's "neoliberal," so we better hate it.

mlzarathustra , 21 Sep 2017 16:52
I agree with the basic point. We've seen this kind of tactic for some time now. Silicon Valley is turning into a series of micromanaged sweatshops (that's what "agile" is truly all about) with little room for genuine creativity, or even understanding of what that actually means. I've seen how impossible it is to explain to upper level management how crappy cheap developers actually diminish productivity and value. All they see is that the requisition is filled for less money.

The bigger problem is that nobody cares about the arts, and as expensive as education is, nobody wants to carry around debt for a skill that won't bring in the bucks. And smartphone-obsessed millennials have too short an attention span to fathom how empty their lives are, devoid of aesthetic depth as they are.

I can't draw a definite link, but I think algorithm fails, which are based on fanatical reliance on programmed routines as the solution to everything, are rooted in the shortage of education and cultivation in the arts.

Economics is a social science, and all this is merely a reflection of shared cultural values. The problem is, people think it's math (it's not) and therefore set in stone.

AmyInNH -> peter nelson , 21 Sep 2017 16:51
Geeze, it'd be nice if you'd make an effort.
https://rucore.libraries.rutgers.edu/rutgers-lib/45960/PDF/1/
https://rucore.libraries.rutgers.edu/rutgers-lib/46156/
https://rucore.libraries.rutgers.edu/rutgers-lib/46207/
peter nelson -> WyntonK , 21 Sep 2017 16:45
Libertarianism posits that everyone should be free to sell their labour or negotiate their own arrangements without the state interfering. So if cheaper foreign labour really was undercutting American labour, the Libertarians would be thrilled.

But it's not. I'm in my 60's and retiring but I've been a software engineer all my life. I've worked for many different companies, and in different industries and I've never had any trouble competing with cheap imported workers. The people I've seen fall behind were ones who did not keep their skills fresh. When I was laid off in 2009 in my mid-50's I made sure my mobile-app skills were bleeding edge (in those days ANYTHING having to do with mobile was bleeding edge) and I used to go to job interviews with mobile devices to showcase what I could do. That way they could see for themselves and not have to rely on just a CV.

The older guys who fell behind did so because their skills and toolsets had become obsolete.

Now I'm trying to hire a replacement to write Android code for use in industrial production and struggling to find someone with enough experience. So where is this oversupply I keep hearing about?

Jared Hall -> RogTheDodge , 21 Sep 2017 16:42
Not producing enough to fill vacancies or not producing enough to keep wages at Google's preferred rate? Seeing as research shows there is no lack of qualified developers, the latter option seems more likely.
JayThomas , 21 Sep 2017 16:39

It's about ensuring those salaries no longer exist, by creating a source of cheap labor for the tech industry.

We're already using Asia as a source of cheap labor for the tech industry. Why do we need to create cheap labor in the US? That just seems inefficient.

FabBlondie -> RogTheDodge , 21 Sep 2017 16:39
There was never any need to give our jobs to foreigners. That is, if you are comparing the production of domestic vs. foreign workers. The sole need was, and is, to increase profits.
peter nelson -> AmyInNH , 21 Sep 2017 16:34
Link?
FabBlondie , 21 Sep 2017 16:34
Schools MAY be able to fix big social problems, but only if they teach a well-rounded curriculum that includes classical history and the humanities. Job-specific training is completely different. What a joke to persuade public school districts to pick up the tab on job training. The existing social problems were not caused by a lack of programmers, and cannot be solved by Big Tech.

I agree with the author that computer programming skills are not that limited in availability. Big Tech solved the problem of the well-paid professional some years ago by letting them go (these were mostly workers in their 50s) and replacing them with H1-B visa holders from India, who work for a fraction of what their experienced American counterparts earned.

It is all about profits. Big Tech is no different than any other "industry."

peter nelson -> Jared Hall , 21 Sep 2017 16:31
Supply of apples does not affect the demand for oranges. Teaching coding in high school does not necessarily alter the supply of software engineers. I studied Chinese History and geology at University but my doing so has had no effect on the job prospects of people doing those things for a living.
johnontheleft -> Taylor Dotson , 21 Sep 2017 16:30
You would be surprised just how much a little coding knowledge has transformed my ability to do my job (a job that is not directly related to IT at all).
peter nelson -> Jared Hall , 21 Sep 2017 16:29
Because teaching coding does not affect the supply of actual engineers. I've been a professional software engineer for 40 years and coding is only a small fraction of what I do.
peter nelson -> Jared Hall , 21 Sep 2017 16:28
You and the linked article don't know what you're talking about. A CS degree does not equate to a productive engineer.

A few years ago I was on the recruiting and interviewing committee to try to hire some software engineers for a scientific instrument my company was making. The entire team had about 60 people (hw, sw, mech engineers) but we needed 2 or 3 sw engineers with math and signal-processing expertise. The project was held up for SIX months because we could not find the people we needed. It would have taken a lot longer than that to train someone up to our needs. Eventually we brought in some Chinese engineers which cost us MORE than what we would have paid for an American engineer when you factor in the agency and visa paperwork.

Modern software engineers are not just generic interchangeable parts - 21st century technology often requires specialised scientific, mathematical, production or business domain-specific knowledge, and those people are hard to find.

freeluna -> freeluna , 21 Sep 2017 16:18
...also, this article is alarmist and I disagree with it. Dear Author, Phphphphtttt! Sincerely, freeluna
AmyInNH , 21 Sep 2017 16:16
Regimentation of the many, for benefit of the few.
AmyInNH -> Whatitsaysonthetin , 21 Sep 2017 16:15
Visa jobs are part of trade agreements. To be very specific, US gov (and EU) trade Western jobs for market access in the East.
http://www.marketwatch.com/story/in-india-british-leader-theresa-may-preaches-free-trade-2016-11-07
There is no shortage. This is selling off the West's middle class.
Take a look at remittances in wikipedia and you'll get a good idea just how much it costs the US and EU economies, for sake of record profits to Western industry.
jigen , 21 Sep 2017 16:13
And thanks to the author for not using the adjective "elegant" in describing coding.
freeluna , 21 Sep 2017 16:13
I see advantages in teaching kids to code, and for kids to make arduino and other CPU powered things. I don't see a lot of interest in science and tech coming from kids in school. There are too many distractions from social media and game platforms, and not much interest in developing tools for future tech and science.
jigen , 21 Sep 2017 16:13
Let the robots do the coding. Sorted.
FluffyDog -> rgilyead , 21 Sep 2017 16:13
Although coding per se is a technical skill it isn't designing or integrating systems. It is only a small, although essential, part of the whole software engineering process. Learning to code just gets you up the first steps of a high ladder that you need to climb a fair way if you intend to use your skills to earn a decent living.
rebel7 , 21 Sep 2017 16:11
BS.

Friend of mine in the SV tech industry reports that they are about 100,000 programmers short in just the internet security field.

Y'all are trying to create a problem where there isn't one. Maybe we shouldn't teach them how to read either. They might want to work somewhere besides the grill at McDonalds.

AmyInNH -> WyntonK , 21 Sep 2017 16:11
To which they will respond, offshore.
AmyInNH -> MrFumoFumo , 21 Sep 2017 16:10
They're not looking for good, they're looking for cheap + visa indentured. Non-citizens.
nickGregor , 21 Sep 2017 16:09
Within the next year coding will be old news and you will simply be able to describe things in ur native language in such a way that the machine will be able to execute any set of instructions you give it. Coding is going to change from its purely abstract form that is not utilized at peak; if you can describe what you envision in an effective, concise manner you could become a very good coder very quickly. Competence will be determined entirely by imagination, and the barriers to entry will be all but extinct
AmyInNH -> unclestinky , 21 Sep 2017 16:09
Already there. I take it you skipped right past the employment prospects for US STEM grads - 50% chance of finding STEM work.
AmyInNH -> User10006 , 21 Sep 2017 16:06
Apparently a whole lot of people are just making it up, eh?
http://www.motherjones.com/politics/2017/09/inside-the-growing-guest-worker-program-trapping-indian-students-in-virtual-servitude/
From today,
http://www.computerworld.com/article/2915904/it-outsourcing/fury-rises-at-disney-over-use-of-foreign-workers.html
All the way back to 1995,
https://www.youtube.com/watch?v=vW8r3LoI8M4&feature=youtu.be
JCA1507 -> whitehawk66 , 21 Sep 2017 16:04
Bravo
JCA1507 -> DirDigIns , 21 Sep 2017 16:01
Total... utter... no other way... huge... will only get worse... everyone... (not a very nuanced commentary is it).

I'm glad pieces like this are mounting, it is relevant that we counter the mix of messianism and opportunism of Silicon Valley propaganda with convincing arguments.

RogTheDodge -> WithoutPurpose , 21 Sep 2017 16:01
That's not my experience.
AmyInNH -> TTauriStellarbody , 21 Sep 2017 16:01
It's a stall tactic by Silicon Valley: "See, we're trying to resolve the [non-existent] shortage."
AmyInNH -> WyntonK , 21 Sep 2017 16:00
They aren't immigrants. They're visa indentured foreign workers. Why does that matter? It's part of the cheap+indentured hiring criteria. If it were only cheap, they'd be lowballing offers to citizen and US new grads.
RogTheDodge -> Jared Hall , 21 Sep 2017 15:59
No. Because they're the ones wanting them and realizing the US education system is not producing enough
RogTheDodge -> Jared Hall , 21 Sep 2017 15:58
Except the demand is increasing massively.
RogTheDodge -> WyntonK , 21 Sep 2017 15:57
That's why we are trying to educate American coders - so we don't need to give our jobs to foreigners.
AmyInNH , 21 Sep 2017 15:56
Correct premises,
- proletarianize programmers
- many qualified graduates simply can't find jobs.
Invalid conclusion:
- The problem is there aren't enough good jobs to be trained for.

That conclusion only makes sense if you skip right past ...
" importing large numbers of skilled guest workers from other countries through the H1-B visa program. These workers earn less than their American counterparts, and possess little bargaining power because they must remain employed to keep their status"

Hiring Americans doesn't "hurt" their record profits. It's incessant greed and collusion with our corrupt congress.

Oldvinyl , 21 Sep 2017 15:51
This column was really annoying. I taught my students how to program when I was given a free hand to create the computer studies curriculum for a new school I joined. (Not in the UK, thank Dog). 7th graders began with studying the history and uses of computers and communications tech. My 8th grade learned about computer logic (AND, OR, NOT, etc.) and moved on to QuickBASIC in the second part of the year. My 9th graders learned about databases and SQL and how to use HTML to make their own Web sites. Last year I received a phone call from the father of one student thanking me for creating the course; his son had just received a job offer and now works in San Francisco for Google.
I am so glad I taught them "coding" (UGH) as the writer puts it, rather than arty-farty subjects not worth a damn in the jobs market.
WyntonK -> DirDigIns , 21 Sep 2017 15:47
I live and work in Silicon Valley and you have no idea what you are talking about. There's no shortage of coders at all. Terrific coders are let go because of their age and the availability of much cheaper foreign coders(no, I am not opposed to immigration).
Sean May , 21 Sep 2017 15:43
Looks like you pissed off a ton of people who can't write code and are none too happy with you pointing out the reason they're slinging insurance for Geico.

I think you're quite right that coding skills will eventually enter the mainstream and slowly bring down the cost of hiring programmers.

The fact is that even if you don't get paid to be a programmer you can absolutely benefit from having some coding skills.

There may however be some kind of major coding revolution with the advent of quantum computing. The way code is written now could become obsolete.

Jared Hall -> User10006 , 21 Sep 2017 15:43
Why is it a fantasy? Does supply and demand not apply to IT labor pools?
Jared Hall -> ninianpark , 21 Sep 2017 15:42
Why is it a load of crap? If you increase the supply of something with no corresponding increase in demand, the price will decrease.
pictonic , 21 Sep 2017 15:40
A well-argued article that hits the nail on the head. Amongst any group of coders, very few are truly productive, and they are self starters; training is really needed to do the admin.
Jared Hall -> DirDigIns , 21 Sep 2017 15:39
There is not a huge skills shortage. That is why the author linked this EPI report analyzing the data to prove exactly that. This may not be what people want to believe, but it is certainly what the numbers indicate. There is no skills gap.

http://www.epi.org/files/2013/bp359-guestworkers-high-skill-labor-market-analysis.pdf

Axel Seaton -> Jaberwocky , 21 Sep 2017 15:34
Yeah, but the money is crap
DirDigIns -> IanMcLzzz , 21 Sep 2017 15:32
Perfect response for the absolute crap that the article is pushing.
DirDigIns , 21 Sep 2017 15:30
Total and utter crap, no other way to put it.

There is a huge skills shortage in key tech areas that will only get worse if we don't educate and train the young effectively.

Everyone wants youth to have good skills for the knowledge economy and the ability to earn a good salary and build up life chances for UK youth.

So we get this verbal diarrhoea of an article. Defies belief.

Whatitsaysonthetin -> Evelita , 21 Sep 2017 15:27
Yes. China and India are indeed training youth in coding skills. In order that they take jobs in the USA and UK! It's been going on for 20 years and has resulted in many experienced IT staff struggling to get work at all and, even if they can, to suffer stagnating wages.
WmBoot , 21 Sep 2017 15:23
Wow. Congratulations to the author for provoking such a torrent of vitriol! Job well done.
TTauriStellarbody , 21 Sep 2017 15:22
Has anyone's job been at risk from a 16-year-old who can cobble together a couple of lines of JavaScript since the dot-com bubble?

Good luck trying to teach a big enough pool of US school kids regular expressions let alone the kind of test driven continuous delivery that is the norm in the industry now.

freeandfair -> youngsteveo , 21 Sep 2017 13:27
> A lot of resumes come across my desk that look qualified on paper, but that's not the same thing as being able to do the job

I have exactly the same experience. There is undeniably a skill gap. It takes about a year for a skilled professional to adjust and learn enough to become productive; it takes about 3-5 years for a college grad.

It is nothing new. But the issue is, as the college grad gets trained, another company steals him or her. And also keep in mind, all this time you are doing your job and training the new employee as time permits. Many companies in the US cut the non-profit departments (such as IT) to the bone; we cannot afford to lose a person and then train another replacement for 3-5 years.

The solution? Hire a skilled person. But that means nobody is training college grads, and in 10-20 years we are looking at a skill shortage to the point where the only option is bringing in foreign labor.

American cut-throat companies that care only about the bottom line cannibalized themselves.

farabundovive -> Ethan Hawkins , 21 Sep 2017 15:10

Heh. You are not a coder, I take it. :) Going to be a few decades before even the easiest coding jobs vanish.

Given how shit most coders of my acquaintance have been - especially in matters of work ethic, logic, matching s/w to user requirements and willingness to test and correct their gormless output - most future coding work will probably be in the area of disaster recovery. Sorry, since the poor snowflakes can't face the sad facts, we have to call it "business continuation" these days, don't we?
UncommonTruthiness , 21 Sep 2017 14:10
The demonization of Silicon Valley is clearly the next place to put all blame. Look what "they" did to us: computers, smart phones, HD television, world-wide internet, on and on. Get a rope!

I moved there in 1978 and watched the orchards and trailer parks on North 1st St. of San Jose transform into a concrete jungle. There used to be quite a bit of semiconductor equipment and device manufacturing in SV during the 80s and 90s. Now quite a few buildings have the same name : AVAILABLE. Most equipment and device manufacturing has moved to Asia.

Programming started with binary, then machine code (hexadecimal or octal) and moved to assembler as a compiled and linked structure. More compiled languages like FORTRAN, BASIC, PL-1, COBOL, PASCAL, C (and all its "+"s) followed, making programming easier for the less talented. Now the script-based languages (HTML, Java, etc.) are even higher level and accessible to nearly all. Programming has become a commodity and will be priced like milk, wheat, corn, non-unionized workers and the like. The ship has sailed on this activity as a career.

[Sep 24, 2017] Do Strongly Typed Languages Reduce Bugs?

Sep 24, 2017 | developers.slashdot.org

(acolyer.org) Posted by EditorDavid on Saturday September 23, 2017 @05:19PM from the dynamic-discussions dept.

"Static vs dynamic typing is always one of those topics that attracts passionately held positions," writes the Morning Paper, reporting on an "encouraging" study that attempted to empirically evaluate the efficacy of statically-typed systems on mature, real-world code bases. The study was conducted by Christian Bird at Microsoft's "Research in Software Engineering" group with two researchers from University College London. Long-time Slashdot reader phantomfive writes: This study looked at bugs found in open-source JavaScript code. Looking through the commit history, they enumerated the bugs that would have been caught if a more strongly typed language (like TypeScript) had been used. They found that a strongly typed language would have reduced bugs by 15%. Does this make you want to avoid Python?

[Jun 28, 2017] PBS Pro Tutorial by Krishna Arutwar

www.nakedcapitalism.com
What is PBS Pro?

Portable Batch System (PBS) is software used in cluster computing to schedule jobs on multiple nodes. PBS started as a contract project by NASA. It is available in three different versions:

1) Torque: Terascale Open-source Resource and QUEue Manager (Torque) is developed from OpenPBS. It is developed and maintained by Adaptive Computing Enterprises. It is used as a distributed resource manager and performs well when integrated with the Maui cluster scheduler.

2) PBS Professional (PBS Pro): the commercial version of PBS, offered by Altair Engineering.

3) OpenPBS: the open-source version, released in 1998 and developed by NASA. It is not actively developed.

In this article we are going to concentrate on a tutorial of PBS Pro, which is similar in many respects to Torque.

PBS consists of three basic units: the server, the scheduler, and MoM (the execution host).

  1. Server: the heart of PBS, with an executable named "pbs_server". It uses the IP network to communicate with the MoMs. The PBS server creates batch jobs and modifies jobs requested from different MoMs. It keeps track of all resources available and assigned in the PBS complex across the MoMs. It also monitors the PBS license for jobs; if your license expires it will throw an error.
  2. Scheduler: the PBS scheduler uses various algorithms to decide when a job should be executed, and on which node or vnode, using the details of available resources supplied by the server. Its executable is "pbs_sched".
  3. MoM: MoM is the mother of all execution jobs, with the executable "pbs_mom". When MoM gets a job from the server, it actually executes that job on the host. Each node must have MoM running in order to participate in execution.
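A quick sanity check is to verify which of these daemons are running on a given host; a server node should show all three executables named above, while an execution node shows only pbs_mom. A minimal sketch:

# list any PBS daemons running on this host
ps -e | grep -E 'pbs_(server|sched|mom)'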

Installation and Setting up of environment (cluster with multiple nodes)

Extract the compressed PBS Pro software and go to the extracted folder; it contains an "INSTALL" file. Make that file executable, e.g. with "chmod +x ./INSTALL", and run it. It will ask for the "execution directory", where you want to store the executables (such as qsub, pbsnodes, qdel, etc.) used for the different PBS operations, and the "home directory", which contains the different configuration files. Keep both as default for simplicity. There are three kinds of installation available:

1) Server node: PBS server, scheduler, MoM and the commands are installed on this node. The PBS server keeps track of all execution MoMs present in the cluster and schedules jobs on those execution nodes. As MoM and the commands are also installed on the server node, it can be used to submit and execute jobs.

2) Execution node: this type installs MoM and the commands. These nodes are added as available execution nodes in the cluster. They are also allowed to submit jobs to the server, given specific permission by the server, as we are going to see below. They are not involved in scheduling. This kind of installation asks for the PBS server hostname, which is used to submit jobs, get the status of jobs, etc.

3) Client node: these nodes are only allowed to submit PBS jobs to the server, with specific permission from the server, and to see the status of jobs. They are not involved in execution or scheduling.
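Condensed into commands, the installation steps above look roughly like this (the archive name PBSPro.tgz is an illustrative assumption; use the file actually shipped by Altair):

# unpack the distribution and run the interactive installer
tar -xzf PBSPro.tgz
cd PBSPro
chmod +x ./INSTALL
sudo ./INSTALL    # prompts for execution directory, home directory,
                  # and installation type (server / execution / client)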

Creating vnode in PBS Pro:

We can create multiple vnodes on a single node, each containing some part of the node's resources. We can execute jobs on these vnodes with the specified allocated resources. We create vnodes using the qmgr command, which is the command-line interface to the PBS server. We can use the command given below to create vnodes with qmgr.

Qmgr: create node Vnode1,Vnode2 resources_available.ncpus=8, resources_available.mem=10gb, resources_available.ngpus=1, sharing=default_excl
The command above creates two vnodes named Vnode1 and Vnode2, each with 8 CPU cores, 10gb of memory and 1 GPU, with sharing mode default_excl, which means the vnode can execute exclusively only one job at a time, independent of the number of resources free. The sharing mode can instead be default_shared, which means any number of jobs can run on that vnode until all resources are busy. All the attributes that can be used during vnode creation are described in the PBS Pro reference guide.
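qmgr can also be driven non-interactively with its -c option, which is convenient in setup scripts. A minimal sketch of the same kind of vnode creation:

# create a vnode from the shell, without entering the Qmgr prompt
qmgr -c "create node Vnode1 resources_available.ncpus=8, resources_available.mem=10gb, resources_available.ngpus=1, sharing=default_excl"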

You can also create vnodes by placing a file in the "/var/spool/PBS/mom_priv/config.d/" folder, with any name you want (I prefer <hostname>-vnode), following the sample given below. MoM reads every file in this folder, even editor backup files ending in "~", and later files override the configuration of the same vnode, so delete unnecessary files to get the proper vnode configuration.

e.g.

$configversion 2
hostname: resources_available.ncpus=0
hostname: resources_available.mem=0
hostname: resources_available.ngpus=0
hostname[0]: resources_available.ncpus=8
hostname[0]: resources_available.mem=16gb
hostname[0]: resources_available.ngpus=1
hostname[0]: sharing=default_excl
hostname[1]: resources_available.ncpus=8
hostname[1]: resources_available.mem=16gb
hostname[1]: resources_available.ngpus=1
hostname[1]: sharing=default_excl
hostname[2]: resources_available.ncpus=8
hostname[2]: resources_available.mem=16gb
hostname[2]: resources_available.ngpus=1
hostname[2]: sharing=default_excl
hostname[3]: resources_available.ncpus=8
hostname[3]: resources_available.mem=16gb
hostname[3]: resources_available.ngpus=1
hostname[3]: sharing=default_excl
Here in this example we assigned 0 to the resources available on the default (natural) vnode, because by default MoM detects and allocates all available resources to the default vnode, with the sharing attribute set to default_shared.

That causes a problem: all jobs get scheduled on that default vnode, because its sharing type is default_shared. So if you want jobs scheduled on your customized vnodes, you should set the resources available on the default vnode to 0. Restart the PBS services after changing the configuration so that the new vnode definitions take effect.

PBS status:

Get status of jobs:

qstat gives details about jobs, their states, etc.

Useful options:

To print details of all jobs which are running or in the hold state: qstat -a

To print details of subjobs in a JobArray which are running or in the hold state: qstat -ta

Get status of PBS nodes and vnodes:

The "pbsnodes -a" command lists all nodes present in the PBS complex, with their resources available, assigned, status, etc.

To get details of all nodes and the vnodes you created, use the "pbsnodes -av" command.

You can also specify a node or vnode name to get detailed information about that specific node or vnode.

e.g.

pbsnodes wolverine (here wolverine is the hostname of a node in the PBS complex, mapped to an IP address in the /etc/hosts file)

Job submission (qsub):

Jobs are submitted to the PBS server, which maintains queues of jobs; by default all jobs are submitted to the default queue, named "workq". You may create multiple queues using the "qmgr" command, the administrator interface mainly used to create, delete and modify queues and vnodes. The PBS server decides which job is scheduled on which node or vnode, based on the scheduling policy and the privileges set by the user. To schedule jobs, the server continuously pings all MoMs in the PBS complex to get details of the resources available and assigned. PBS assigns a unique job identifier, called the JobID, to each job. For job submission PBS uses the "qsub" command, with the syntax:

qsub script

Here script may be a shell (sh, csh, tcsh, ksh, bash) script; PBS by default uses /bin/sh. You may refer to the simple script given below:

#!/bin/sh
echo "This is PBS job"

When PBS completes execution of a job, it stores any errors in a file named JobName.e{JobID}, e.g. Job1.e1492, and the output in a file named JobName.o{JobID}, e.g. Job1.o1492.

By default it stores these files in the current working directory (which can be seen with the pwd command). You can change the output location by giving a path with the -o option (and the error location with the -e option).

You may specify the job name with the -N option while submitting the job:

qsub -N firstJob ./test.sh

If you don't specify a job name, PBS replaces JobName with the script name. E.g. qsub ./test.sh will store the results in the files test.sh.e1493 and test.sh.o1493 in the current working directory.

OR

qsub -N firstJob -o /home/user1/ ./test.sh will name the files firstJob.e1493 and firstJob.o1493, with the output file stored in the /home/user1/ directory.

If a submitted job terminates abnormally (errors in the job are not abnormal; those get stored in the JobName.e{JobID} file), PBS stores its error and output files in the "/var/spool/PBS/undelivered/" folder.

Useful Options:

Select resources:

qsub -l select=2:ncpus=3:ngpus=1:mem=2gb script

This job selects 2 chunks, each with 3 cpus, 1 gpu and 2gb of memory, which means it selects 6 cpus, 2 gpus and 4gb of RAM in total.

qsub -l nodes=megamind:ncpus=3 /home/titan/PBS/input/in.sh

This job will select one node, specified by hostname.

To select multiple nodes you may use the command given below:

qsub -l nodes=megamind+titan:ncpus=3 /home/titan/PBS/input/in.sh

Submit multiple jobs with the same script (JobArray):

qsub -J 1-20 script
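Within a job array, each subjob sees its own index in the PBS_ARRAY_INDEX environment variable, which is the usual way to point one script at many inputs. A minimal sketch (the program name and input file naming are illustrative assumptions):

#!/bin/sh
#PBS -J 1-20
# each subjob processes one input file: input.1 ... input.20
./process_data input.$PBS_ARRAY_INDEX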

Submit dependent jobs:

In some cases you may need a job to run only after the successful or unsuccessful completion of some specified jobs. For that, PBS provides options such as:

qsub -W depend=afterok:316.megamind /home/titan/PBS/input/in.sh

This job will start only after the successful completion of the job with JobID "316.megamind". Besides afterok, PBS has other options such as afternotok, beforeok and beforenotok. You may find all the details in the man page of qsub.
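Because qsub prints the JobID of the job it submits, a dependency chain can be scripted by capturing that output (script names are illustrative):

# run second.sh only if first.sh completes successfully
jobid=$(qsub first.sh)
qsub -W depend=afterok:$jobid second.sh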

Submit a job with priority:

There are two ways to set the priority of jobs to be executed.

1) Using a single queue, with different priorities on different jobs:

To change the sequence of jobs queued in an execution queue, open the "$PBS_HOME/sched_priv/sched_config" file; normally $PBS_HOME is "/var/spool/PBS/". Uncomment the line below if present; otherwise add it.

job_sort_key : "job_priority HIGH"

After saving this file you will need to restart the pbs_sched daemon on the head node; you may use the command below:

service pbs restart

After completing this task, submit jobs with the -p option to specify the priority of the job within the queue. The value may range from -1024 to 1023, where -1024 is the lowest priority and 1023 is the highest priority in the queue.

e.g.

qsub -p 100 ./X.sh

qsub -p 101 ./Y.sh


qsub -p 102 ./Z.sh 
In this case PBS will execute the jobs in order of priority: Z.sh (priority 102) first, then Y.sh (101), then X.sh (100).

[Diagram: multiple jobs in one queue]

2) Using different queues with specified priorities: we are going to discuss this point in the PBS Queue section.

[Diagram: jobs distributed across three queues with different priorities]

In that example, all jobs in queue 2 will complete first, then queue 3, then queue 1, since the priority of queue 2 > queue 3 > queue 1. Because of this, the job execution flow is as shown below:

J4 => J5 => J6 => J7 => J8 => J9 => J1 => J2 => J3

PBS Queue:

PBS Pro can manage multiple queues as per the users' requirements. By default every job is queued in "workq" for execution. Two types of queue are available: execution and routing queues. Jobs in an execution queue are taken by the PBS server for execution. Jobs in a routing queue cannot be executed; they can be redirected to an execution queue or to another routing queue using the qmove command. By default the queue "workq" is an execution queue. The sequence of jobs in a queue may be changed by the priority defined at job submission, as specified above in the job submission section.

Useful qmgr commands:

First type qmgr, which starts the manager interface of PBS Pro.

To create queue:


Qmgr: create queue test2

To set type of queue you created:


Qmgr: set queue test2 queue_type=execution

OR


Qmgr: set queue test2 queue_type=route

To enable queue:


Qmgr: set queue test2 enabled=True

To set priority of queue:


Qmgr: set queue test2 priority=50

Jobs in the queue with higher priority get first preference. After completion of all jobs in the higher-priority queue, jobs in lower-priority queues are scheduled. Note that there is a high probability of job starvation in queues with lower priority.

To start queue:


Qmgr: set queue test2 started=True

To make all queues (at the default server) active, so that subsequent qmgr commands apply to all of them:


Qmgr: active queue @default

To set a queue for specified users: you need to set the acl_user_enable attribute to true, which tells PBS to allow only users present in the acl_users list to submit jobs.


Qmgr: set queue test2 acl_user_enable=True

To set the users permitted to submit jobs to a queue:


Qmgr: set queue test2 acl_users="user1@..,user2@..,user3@.."

(in place of .. you have to specify the hostname of a compute node in the PBS complex. A user name alone, without a hostname, allows users with the same name to submit jobs from all nodes permitted to submit jobs in the PBS complex).

To delete queues we created:


Qmgr: delete queue test2

To see the status of all queues:

qstat -Q


You may specify a specific queue name: qstat -Q test2

To see full details of all queues: qstat -Q -f

You may specify a specific queue name: qstat -Q -f test2
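Putting the commands above together, a typical setup session for an extra high-priority queue might look like this (the queue name and priority value are illustrative):

# create, configure and activate a high-priority execution queue
qmgr -c "create queue fast"
qmgr -c "set queue fast queue_type=execution"
qmgr -c "set queue fast priority=150"
qmgr -c "set queue fast enabled=True"
qmgr -c "set queue fast started=True"

# submit a job to the new queue
qsub -q fast ./test.sh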

[May 08, 2017] Betteridge's law of headlines

Apr 27, 2017 | en.wikipedia.org
Betteridge's law of headlines is one name for an adage that states: "Any headline that ends in a question mark can be answered by the word no." It is named after Ian Betteridge, a British technology journalist, [1] [2] although the principle is much older. As with similar "laws" (e.g., Murphy's law), it is intended as a humorous adage rather than always being literally true. [3] [4]

The maxim has been cited by other names since as early as 1991, when a published compilation of Murphy's Law variants called it " Davis's law ", [5] a name that also crops up online, without any explanation of who Davis was. [6] [7] It has also been called just the " journalistic principle ", [8] and in 2007 was referred to in commentary as "an old truism among journalists". [9]

Ian Betteridge's name became associated with the concept after he discussed it in a February 2009 article, which examined a previous TechCrunch article that carried the headline "Did Last.fm Just Hand Over User Listening Data To the RIAA ?": [10]

This story is a great demonstration of my maxim that any headline which ends in a question mark can be answered by the word "no." The reason why journalists use that style of headline is that they know the story is probably bullshit, and don't actually have the sources and facts to back it up, but still want to run it. [1]

A similar observation was made by British newspaper editor Andrew Marr in his 2004 book My Trade , among Marr's suggestions for how a reader should interpret newspaper articles:

If the headline asks a question, try answering 'no'. Is This the True Face of Britain's Young? (Sensible reader: No.) Have We Found the Cure for AIDS? (No; or you wouldn't have put the question mark in.) Does This Map Provide the Key for Peace? (Probably not.) A headline with a question mark at the end means, in the vast majority of cases, that the story is tendentious or over-sold. It is often a scare story, or an attempt to elevate some run-of-the-mill piece of reporting into a national controversy and, preferably, a national panic. To a busy journalist hunting for real information a question mark means 'don't bother reading this bit'. [11]

Outside journalism

In the field of particle physics , the concept is known as Hinchliffe's Rule , [12] [13] after physicist Ian Hinchliffe , [14] who stated that if a research paper's title is in the form of a yes–no question, the answer to that question will be "no". [14] The adage was humorously led into a Liar's paradox by a pseudonymous 1988 paper which bore the title "Is Hinchliffe's Rule True?" [13] [14]

However, at least one article found that the "law" does not apply in research literature. [15]


[Nov 08, 2015] 2013 Keynote: Dan Quinlan: C++ Use in High Performance Computing Within DOE: Past and Future

YouTube.com: At 31 min there is an interesting slide that gives some information about the scale of systems in DOE. The current system has 18,700 nodes; the new system will have 50K to 500K nodes, 32 cores per node (power consumption is ~15 MW, equal to the power consumption of a small city). The cost is around $200M.
Jun 09, 2013 | YouTube

https://www.youtube.com/watch?v=zZGYfM1iM7c

[Nov 08, 2015] The Anti-Java Professor and the Jobless Programmers

Nick Geoghegan

James Maguire's article raises some interesting questions as to why teaching Java to first year CS / IT students is a bad idea. The article mentions both Ada and Pascal – neither of which really "took off" outside of the States, with the former being used mainly by contractors of the US Dept. of Defense.

This is my own personal extension to the article – which I agree with – on why students should be taught C in first year. I'm biased though: I learned C as my first language and extensively use C or C++ in projects.

Java is a very high level language that has interesting features that make things easier for programmers. The two main points that I like about Java are libraries (although libraries exist for C / C++) and memory management.

Libraries

Libraries are fantastic. They offer an API and abstract a metric fuck tonne of work that a programmer doesn't care about. I don't care how the library works inside, just that I have a way of putting in input and getting expected output (see my post on abstraction). I've extensively used libraries, even this week, for audio codec decoding. Libraries mean not reinventing the wheel and reusing code (something students are discouraged from doing, as it's plagiarism, yet in the real world you are rewarded). Again, starting with C means that you appreciate the libraries more.

Memory Management

Managing your program's memory manually is a pain in the hole. We all know this after spending countless hours finding memory leaks in our programs. Java's inbuilt memory management is great – it saves me from having to do it. However, if I had learned Java first, I would assume (for a short amount of time) that all languages managed memory for you, or that all languages were shite compared to Java because they don't. Going from a "lesser" language like C to Java makes you appreciate the memory manager.

What's so great about C?

In the context of a first language to teach students, C is perfect.

Java is a complex language that will spoil a first year student. However, as noted, CS / IT courses need to keep student retention rates high. As an example, my first year class was about 60 people; final year was 8. There are ways to keep students, possibly with other, easier languages in the second semester of first year – so that students don't hate the subject when choosing the next year's subjects after exams.

Conversely, I could say that you should teach Java in first year and expand on more difficult languages like C or assembler (which should be taught side by side, in my mind) later down the line – keeping retention high in the initial years, and drilling down with each successive semester to more systems level programming.

There's a time and place for Java, which I believe is third year or final year. This will keep Java fresh in the students mind while they are going job hunting after leaving the bosom of academia. This will give them a good head start, as most companies are Java houses in Ireland.

[Nov 08, 2015] Abstraction

nickgeoghegan.net

Filed in Programming No Comments

A few things can confuse programming students, or new people to programming. One of these is abstraction.

Wikipedia says:

In computer science, abstraction is the process by which data and programs are defined with a representation similar to its meaning (semantics), while hiding away the implementation details. Abstraction tries to reduce and factor out details so that the programmer can focus on a few concepts at a time. A system can have several abstraction layers whereby different meanings and amounts of detail are exposed to the programmer. For example, low-level abstraction layers expose details of the hardware where the program is run, while high-level layers deal with the business logic of the program.

That might be a bit too wordy for some people, and not at all clear. Here's my analogy of abstraction.

Abstraction is like a car

A car has a few features that makes it unique.

If someone can drive a manual transmission car, they can drive any manual transmission car. Automatic drivers, sadly, cannot drive a manual transmission car without "relearning" the car. That is an aside; we'll assume that all cars are manual transmission cars – as is the case in Ireland for most cars.

Since I can drive my car, which is a Mitsubishi Pajero, that means that I can drive your car – a Honda Civic, Toyota Yaris, Volkswagen Passat.

All I need to know, in order to drive a car – any car – is how to use the brakes, accelerator, steering wheel, clutch and transmission. Since I already know this in my car, I can abstract away your car and its controls.

I do not need to know the inner workings of your car in order to drive it, just the controls. I don't need to know exactly how the brakes work in your car, only that they work. I don't need to know that your car has a turbo charger, only that when I push the accelerator, the car moves. I also don't need to know the exact revs at which I should gear up or gear down (although that would be better on the engine!)

Virtually all controls are the same. Standardization means that the clutch, brake and accelerator are all in the same place, regardless of the car. This means that I do not need to relearn how a car works. To me, a car is just a car, and is interchangeable with any other car.

Abstraction means not caring

As a programmer, or someone using a third party API (for example), abstraction means not caring how the inner workings of some function work – the linked list data structure, the variable names inside the function, the sorting algorithm used, etc. – just that I have a standard (preferably unchanging) interface to do whatever I need to do.

Abstraction can be thought of as a black box: for input, you get output. That shouldn't be the whole story, but often is. We need abstraction so that, as programmers, we can concentrate on other aspects of the program – this is the cornerstone of large scale, multi-developer software projects.
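To put the black-box idea in code: the shell function below exposes a fixed contract (numbers in, sorted numbers out) while hiding the implementation, which is free to change. The function name and its delegation to sort(1) are illustrative:

# Callers rely only on the contract: lines of numbers in, sorted lines out.
# The body could be swapped for a different tool or algorithm without
# any caller having to change.
sort_numbers() {
    sort -n
}

printf '3\n1\n2\n' | sort_numbers    # prints 1, 2, 3 on separate lines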

[Oct 18, 2013] Tom Clancy, Best-Selling Master of Military Thrillers, Dies at 66

Fully applicable to programming...
NYTimes.com

"I tell them you learn to write the same way you learn to play golf," he once said. "You do it, and keep doing it until you get it right. A lot of people think something mystical happens to you, that maybe the muse kisses you on the ear. But writing isn't divinely inspired - it's hard work."

[Jan 14, 2013] Learn Basic Programming So You Aren't At the Mercy of Programmers

I like the rephrased line: "You need to learn to program. Because if you don't, you're always going to be at the mercy of some asshole programmer."
January 13, 2013 | developers.slashdot.org

An anonymous reader writes "Derek Sivers, creator of online indie music store CD Baby, has a post about why he thinks basic programming is a useful skill for everybody.

He quotes a line from a musician he took guitar lessons from as a kid: "You need to learn to sing. Because if you don't, you're always going to be at the mercy of some a****** singer." Sivers recommends translating that to other areas of life.

He says, 'The most common thing I hear from aspiring entrepreneurs is, "I have this idea for an app or site. But I'm not technical, so I need to find someone who can make it for me." I point them to my advice about how to hire a programmer, but as most of the good ones are already booked solid, it's a pretty helpless position to be in. If you heard someone say, "I have this idea for a song. But I'm not musical, so I need to find someone who will write, perform, and record it for me." - you'd probably advise them to just take some time to sit down with a guitar or piano and learn enough to turn their ideas into reality.

And so comes my advice: Yes, learn some programming basics. Just some HTML, CSS, and JavaScript should be enough to start. ... You don't need to become an expert, just know the basics, so you're not helpless.'"

BrokenHalo (565198):

Well, no reason why it should. Just about anyone should be able to write some form of pseudocode, however incomplete, for whatever task they want to accomplish with or without the assistance of a computer.

That said, when I first started working with computers back in the '70s, programmers mostly didn't have access to the actual computer hardware, so if the chunk of code was large, we simply wrote out our FORTRAN, Assembly or COBOL programs on a cellulose-fibre "paper" substance called a Coding Sheet with a graphite-filled wooden stick known as a pencil. These were then transcribed on to mag tape by a platoon of very pretty but otherwise non-human keypunch ops who were universally capable of typing at a rate of 6.02 x 10^23 words per minute. (If the program or patch happened to be small or trivial, we used one of those metal card-punch contraptions with an 029 keypad, thus allowing the office door to slam with nothing to restrain it.)

This leisurely approach led to a very different and IMHO more creative attitude to coding, and it was probably no coincidence that many programmers back then were pipe-smokers.

Anonymous Coward:

"I have an idea for an app" is exactly what riles up programmers. Ideas are a dime a dozen. If you, the "nontechnical person", do your job right, then you'll find a competent and cooperative programmer.

If, on the other hand, and this is much too common, you expect the programmer to do your work (requirements engineering, reading your mind for what you want, correcting your conceptual mistakes, graphics design, business planning to get the scale right, etc.) on top of the actual programming, in return for a one-time payment, while you expect to sell "your" startup for millions, then you'll get asshole programmers - and you deserve them.

Anonymous Coward:

A programmer's job is to implement a specification. People who "have an idea for an app" only want to pay a programmer (I'm being generous here, often they don't even want to pay a programmer, see the article), but expect to get a business analyst, graphics artist, software architect, marketer, programmer and system administrator rolled into one, so that they don't have to give away too much of the money they expect to earn with their creative idea.

Someone who thinks you can learn a little programming to avoid being at the mercy of programmers isn't looking for a partner, isn't willing to share with a partner and doesn't deserve the input from a partner.

aaarrrgggh (9205):

I'm an engineer. I want to remodel my home. I come up with ideas, document them, and give them to an architect to build into a complete design that conveys scope to the general contractor and trades. Me being educated about the process helps me to manage scope and hopefully get the product I want in the most efficient manner possible, while also taking advantage of the expertise of others. A prima donna architect that only wants to create something they find to be beautiful might not solve my problems.

Programming is no different. If I convey something in pseudo code or user interface, I would expect a skilled programmer to be able to provide a critical evaluation of my idea and guide me into the best direction. I might not be able to break down the functions for security the right way, but I would at least be highlighting the need for security as an example.

Moraelin (679338)

I'm not sure that learning some superficial idea of a language is going to help. And I'll give you a couple of reasons why:

  1. Dunning-Kruger. The people with the least knowledge on the domain are those who overrate their knowledge the most.

    Now I really wish to believe that some management or marketing guy is willing to sink 10,000 hours into becoming good at programming, and have a good idea of exactly what he's asking for. I really do. But we both know that even if he does a decent amount overtime, that's about 3 years of doing NOTHING BUT programming, i.e., he'd have to not do his real job at all any more. Or more like 15 years if he does some two-hours a day of hobby-style programming in the afternoon. And he probably won't even do that.

    What is actually going to happen, if anything, is that he'll plod through it up to the first peak of his own sense of how much he knows, i.e., the Dunning-Kruger sweet spot: the point where he thinks he knows it all, except, you know, maybe some minor esoteric stuff that doesn't matter anyway. But which is actually the point where he doesn't know jack.

  2. And from my experience, those are the worst problem bosses. The kind which is an illustration of Russell's, "The trouble with the world is that the stupid are cocksure and the intelligent are full of doubt." The kind who is cock-sure that he probably is better at programming than you anyway, he just, you know, doesn't have the time to actually do it. (Read: to actually get experience.)

    That's the kind who's just moved from just a paranoid suspicion that your making a fuss about the 32414'th change request is taking advantage of him, to the kind who "knows" that you're just an unreasonable asshole. After all, he has no problem making changes to the 1000 line JSP or PHP page he did for practice (half of which being just HTML mixed in with the business code.) If he wants to add a button to that one, hey, his editor even lets him drag and drop it in 5 seconds. Why, he can even change it from displaying a fictive list of widgets to a fictive list of employees. So your wanting to redo a part of the design to accommodate his request to change the whole functionality of a 1,000,000 line program (which is actually quite small) must be some kind of trying to shaft him.

    It's the kind who thinks that if he did a simple example program in Visual Fox Pro, a single-user "database", placed the database files on a file server, and then accessed them from another workstation, that makes him qualified to decide he doesn't need MySQL or Oracle for his enterprise system, he can just demand to have it done in Visual Fox Pro. In fact, he "knows" it can be done that way. No, really, this is an actual example that happened to me. Verbatim. I'm not making it up.

  3. Well, it doesn't work in other domains either, so I don't see why programming would be any different. People can have a superficial understanding of how a map editor for Skyrim works, and it won't prevent them from coming up with some unreasonable idea, like that someone should make him every outfit from [insert Anime series], and not just do it for free, but credit him, because, hey, he had the idea. No, seriously, just about every other idiot thinks that the reason someone hasn't done a total conversion of Skyrim to Star Wars is that they didn't have the precious idea.

    Basically it's Dunning-Kruger all over again.

I think more than understanding programming, what people need is understanding that ideas are a dime a dozen. What matters is the execution.

What they need to understand is that, no, you're probably not the next Edison or Ford or Steve Jobs or whatever. There are probably a thousand other guys who had the same idea, some may have even tried it, and there might actually be a reason why you never heard of it being actually finished. And even those are remembered for actually having the management skills to make those ideas work, not just for having an idea.

Ford didn't just make it for having the idea of making a cheap car, nor for being a mechanic himself. Why it worked was that he managed to sort things out: hiring and holding onto some good subordinates, reducing the turnover that previously had some departments literally hiring 300 people a year to fill 100 positions, etc. It's the execution that mattered, not just having an idea.

Once they get disabused of the idea all that matters is that their brain farted a vague idea, I think it will go a longer way towards less frustration both for them and their employees.

RabidReindeer (2625839):

short version: "A little knowledge is a dangerous thing."

People who think they know what the job entails start out saying "It's Easy! All You Have To Do Is..." and the whole thing swiftly descends into Hell.

Ideas are just a multiplier of execution

2009-07-28

It's so funny when I hear people being so protective of ideas. (People who want me to sign an NDA to tell me the simplest idea.)

To me, ideas are worth nothing unless executed. They are just a multiplier. Execution is worth millions.

Explanation:

AWFUL IDEA = -1
WEAK IDEA = 1
SO-SO IDEA = 5
GOOD IDEA = 10
GREAT IDEA = 15
BRILLIANT IDEA = 20
-------- ---------
NO EXECUTION = $1
WEAK EXECUTION = $1000
SO-SO EXECUTION = $10,000
GOOD EXECUTION = $100,000
GREAT EXECUTION = $1,000,000
BRILLIANT EXECUTION = $10,000,000

To make a business, you need to multiply the two.

The most brilliant idea, with no execution, is worth $20.

The most brilliant idea takes great execution to be worth $20,000,000.

That's why I don't want to hear people's ideas.

I'm not interested until I see their execution.

(This post originally appeared on my O'Reilly blog on August 16, 2005. I'm re-posting it here since their site is getting filled with ads.)

[Oct 14, 2011] Dennis Ritchie, 70, Dies, Programming Trailblazer - by Steve Lohr

October 13, 2011 | NYTimes.com
Dennis M. Ritchie, who helped shape the modern digital era by creating software tools that power things as diverse as search engines like Google and smartphones, was found dead on Wednesday at his home in Berkeley Heights, N.J. He was 70.

Mr. Ritchie, who lived alone, was in frail health in recent years after treatment for prostate cancer and heart disease, said his brother Bill.

In the late 1960s and early '70s, working at Bell Labs, Mr. Ritchie made a pair of lasting contributions to computer science. He was the principal designer of the C programming language and co-developer of the Unix operating system, working closely with Ken Thompson, his longtime Bell Labs collaborator.

The C programming language, a shorthand of words, numbers and punctuation, is still widely used today, and successors like C++ and Java build on the ideas, rules and grammar that Mr. Ritchie designed. The Unix operating system has similarly had a rich and enduring impact. Its free, open-source variant, Linux, powers many of the world's data centers, like those at Google and Amazon, and its technology serves as the foundation of operating systems, like Apple's iOS, in consumer computing devices.

"The tools that Dennis built - and their direct descendants - run pretty much everything today," said Brian Kernighan, a computer scientist at Princeton University who worked with Mr. Ritchie at Bell Labs.

Those tools were more than inventive bundles of computer code. The C language and Unix reflected a point of view, a different philosophy of computing than what had come before. In the late '60s and early '70s, minicomputers were moving into companies and universities - smaller and at a fraction of the price of hulking mainframes.

Minicomputers represented a step in the democratization of computing, and Unix and C were designed to open up computing to more people and collaborative working styles. Mr. Ritchie, Mr. Thompson and their Bell Labs colleagues were making not merely software but, as Mr. Ritchie once put it, "a system around which fellowship can form."

C was designed for systems programmers who wanted to get the fastest performance from operating systems, compilers and other programs. "C is not a big language - it's clean, simple, elegant," Mr. Kernighan said. "It lets you get close to the machine, without getting tied up in the machine."

Such higher-level languages had earlier been intended mainly to let people without a lot of programming skill write programs that could run on mainframes. Fortran was for scientists and engineers, while Cobol was for business managers.

C, like Unix, was designed mainly to let the growing ranks of professional programmers work more productively. And it steadily gained popularity. With Mr. Kernighan, Mr. Ritchie wrote a classic text, "The C Programming Language," also known as "K. & R." after the authors' initials, whose two editions, in 1978 and 1988, have sold millions of copies and been translated into 25 languages.

Dennis MacAlistair Ritchie was born on Sept. 9, 1941, in Bronxville, N.Y. His father, Alistair, was an engineer at Bell Labs, and his mother, Jean McGee Ritchie, was a homemaker. When he was a child, the family moved to Summit, N.J., where Mr. Ritchie grew up and attended high school. He then went to Harvard, where he majored in applied mathematics.

While a graduate student at Harvard, Mr. Ritchie worked at the computer center at the Massachusetts Institute of Technology, and became more interested in computing than math. He was recruited by the Sandia National Laboratories, which conducted weapons research and testing. "But it was nearly 1968," Mr. Ritchie recalled in an interview in 2001, "and somehow making A-bombs for the government didn't seem in tune with the times."

Mr. Ritchie joined Bell Labs in 1967, and soon began his fruitful collaboration with Mr. Thompson on both Unix and the C programming language. The pair represented the two different strands of the nascent discipline of computer science. Mr. Ritchie came to computing from math, while Mr. Thompson came from electrical engineering.

"We were very complementary," said Mr. Thompson, who is now an engineer at Google. "Sometimes personalities clash, and sometimes they meld. It was just good with Dennis."

Besides his brother Bill, of Alexandria, Va., Mr. Ritchie is survived by another brother, John, of Newton, Mass., and a sister, Lynn Ritchie of Hexham, England.

Mr. Ritchie traveled widely and read voraciously, but friends and family members say his main passion was his work. He remained at Bell Labs, working on various research projects, until he retired in 2007.

Colleagues who worked with Mr. Ritchie were struck by his code - meticulous, clean and concise. His writing, according to Mr. Kernighan, was similar. "There was a remarkable precision to his writing," Mr. Kernighan said, "no extra words, elegant and spare, much like his code."

[Apr 24, 2011] A Short Guide To Lifestyle Design (LSD) The 7 Core Skills Of The Cyberpunk Survivalist

February 28, 2011 | Sublime Oblivion

Disagree that a person can become a competent computer programmer in under a year. Well, maybe the exceptional genius… For most people, it takes a minimum of 3 years to master the skills required to be a decent coder.

It's not just about learning Java (which I do agree is a good computer language to start with), there are certain prerequisites. Fortunately, not a lot of math is required, high-school algebra is sufficient, plus a grasp of "functions" (because programmers usually have to write a lot of functions). On the other hand, boolean logic is absolutely required, and that's more than just knowing the difference between logical AND and logcial OR (or XOR). Also, if one gets into databases (my specialty, actually), then one also needs to master the mathematics of set theory.

And a real programmer also needs to be able to write (and understand) a recursive algorithm. For example, every time I have interviewed a potential coder, I have asked them, "Are you familiar with the 'Towers of Hanoi' algorithm?" If they don't know what that is, they still have a chance to impress me if they can describe a B-tree navigation algorithm. That's first- or second-year computer science stuff. If they can't recurse a directory tree (using whatever programming language of their choice), then they aren't a real programmer. God knows there are plenty of fakes in the business. Sorry for the rant. Having to deal with "pretend programmers" (rookies who think they're programmers because they know how to update their Facebook page) is one of my pet peeves… Grrrrrrrr!
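For reference, the "Towers of Hanoi" solution alluded to above is a short recursion; a minimal sketch in shell:

#!/bin/bash
# Move n disks from peg $2 to peg $3, using peg $4 as scratch space.
hanoi() {
    local n=$1 from=$2 to=$3 via=$4
    (( n == 0 )) && return
    hanoi $((n - 1)) "$from" "$via" "$to"    # clear the way
    echo "move disk $n from $from to $to"
    hanoi $((n - 1)) "$via" "$to" "$from"    # stack the rest on top
}

hanoi 3 A C B    # solves the 3-disk puzzle in 7 moves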

[Nov 30, 2010] Professor Sir Maurice Wilkes

Telegraph

The computer, known as EDSAC (Electronic Delay Storage Automatic Calculator), was a huge contraption that took up a room in what was the University's old Mathematical Library. It contained 3,000 vacuum valves arranged on 12 racks and used tubes filled with mercury for memory. Despite its impressive size, it could only carry out 650 operations per second.

Before the development of EDSAC, digital computers, such as the American Moore School's ENIAC (Electronic Numeral Integrator and Computer), were only capable of dealing with one particular type of problem. To solve a different kind of problem, thousands of switches had to be reset and miles of cable re-routed. Reprogramming took days.

In 1946, a paper by the Hungarian-born scientist John von Neumann and others suggested that the future lay in developing computers with memory which could not only store data, but also sets of instructions, or programs. Users would then be able to change programs, written in binary number format, without rewiring the whole machine. The challenge was taken up by three groups of scientists - one at the University of Manchester, an American team led by JW Mauchly and JP Eckert, and the Cambridge team led by Wilkes.

Eckert and Mauchly had been working on developing a stored-program computer for two years before Wilkes became involved at Cambridge. While the University of Manchester machine, known as "Baby", was the first to store data and program, it was Wilkes who became the first to build an operational machine based on von Neumann's ideas (which form the basis for modern computers) to deliver a service.

Wilkes chose to adopt mercury delay lines suggested by Eckert to serve as an internal memory store. In such a delay line, an electrical signal is converted into a sound wave travelling through a long tube of mercury at a speed of 1,450 metres per second. The signal can be transmitted back and forth along the tube, several of which were combined to form the machine's memory. This memory meant the computer could store both data and program. The main program was loaded by paper tape, but once loaded this was executed from memory, making the machine the first of its kind.

After two years of development, on May 6 1949 Wilkes's EDSAC "rather suddenly" burst into life, computing a table of square numbers. From early 1950 it offered a regular computing service to the members of Cambridge University, the first of its kind in the world, with Wilkes and his group developing programs and compiling a program library. The world's first scientific paper to be published using computer calculations - a paper on genetics by RA Fisher – was completed with the help of EDSAC.

Wilkes was probably the first computer programmer to spot the coming significance of program testing: "In 1949 as soon as we started programming", he recalled in his memoirs, "we found to our surprise that it wasn't as easy to get programs right as we had thought. Debugging had to be discovered. I can remember the exact instant when I realised that a large part of my life from then on was going to be spent in finding mistakes in my own programs."

In 1951 Wilkes (with David J Wheeler and Stanley Gill) published the world's first textbook on computer programming, Preparation of Programs for an Electronic Digital Computer. Two years later he established the world's first course in Computer Science at Cambridge.

EDSAC remained in operation until 1958, but the future lay not in delay lines but in magnetic storage and, when it came to the end of its life, the machine was cannibalised and scrapped, its old program tapes used as streamers at Cambridge children's parties.

Wilkes, though, remained at the forefront of computing technology and made several other breakthroughs. In 1958 he built EDSAC's replacement, EDSAC II, which not only incorporated magnetic storage but was the first computer in the world to have a micro-programmed control unit. In 1965 he published the first paper on cache memories, followed later by a book on time-sharing.

In 1974 he developed the "Cambridge Ring", a digital communication system linking computers together. The network was originally designed to avoid the expense of having a printer at every computer, but the technology was soon developed commercially by others.

When EDSAC was built, Wilkes sought to allay public fears by describing the stored-program computer as "a calculating machine operated by a moron who cannot think, but can be trusted to do what he is told". In 1964, however, predicting the world in "1984", he drew a more Orwellian picture: "How would you feel," he wrote, "if you had exceeded the speed limit on a deserted road in the dead of night, and a few days later received a demand for a fine that had been automatically printed by a computer coupled to a radar system and vehicle identification device? It might not be a demand at all, but simply a statement that your bank account had been debited automatically."

Maurice Vincent Wilkes was born at Dudley, Worcestershire, on June 26 1913. His father was a switchboard operator for the Earl of Dudley whose extensive estate in south Staffordshire had its own private telephone network; he encouraged his son's interest in electronics and at King Edward VI's Grammar School, Stourbridge, Maurice built his own radio transmitter and was allowed to operate it from home.

Encouraged by his headmaster, a Cambridge-educated mathematician, Wilkes went up to St John's College, Cambridge to read Mathematics, but he studied electronics in his spare time in the University Library and attended lectures at the Engineering Department. After obtaining an amateur radio licence he constructed radio equipment in his vacations with which to make contact, via the ionosphere, with radio "hams" around the world.

Wilkes took a First in Mathematics and stayed on at Cambridge to do a PhD on the propagation of radio waves in the ionosphere. This led to an interest in tidal motion in the atmosphere and to the publication of his first book Oscillations of the Earth's Atmosphere (1949). In 1937 he was appointed university demonstrator at the new Mathematical Laboratory (later renamed the Computer Laboratory) housed in part of the old Anatomy School.

When war broke out, Wilkes left Cambridge to work with R Watson-Watt and JD Cockroft on the development of radar. Later he became involved in designing aircraft, missile and U-boat radio tracking systems.

In 1945 Wilkes was released from war work to take up the directorship of the Cambridge Mathematical Laboratory and given the task of constructing a computer service for the University.

The following year he attended a course on "Theory and Techniques for Design of Electronic Digital Computers" at the Moore School of Electrical Engineering at the University of Pennsylvania, the home of the ENIAC. The visit inspired Wilkes to try to build a stored-program computer and on his return to Cambridge, he immediately began work on EDSAC.

Wilkes was appointed Professor of Computing Technology in 1965, a post he held until his retirement in 1980. Under his guidance the Cambridge University Computer Laboratory became one of the country's leading research centres. He also played an important role as an adviser to British computer companies and was instrumental in founding the British Computer Society, serving as its first president from 1957 to 1960.

After his retirement, Wilkes spent six years as a consultant to Digital Equipment in Massachusetts, and was Adjunct Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology from 1981 to 1985. Later he returned to Cambridge as a consultant researcher with a research laboratory funded variously by Olivetti, Oracle and AT&T, continuing to work until well into his 90s.

Maurice Wilkes was elected a fellow of the Royal Society in 1956, a Foreign Honorary Member of the American Academy of Arts and Sciences in 1974, a Fellow of the Royal Academy of Engineering in 1976 and a Foreign Associate of the American National Academy of Engineering in 1977. He was knighted in 2000.

Among other prizes he received the ACM Turing Award in 1967; the Faraday Medal of the Institute of Electrical Engineers in 1981; and the Harry Goode Memorial Award of the American Federation for Information Processing Societies in 1968.

In 1985 he provided a lively account of his work in Memoirs of a Computer Pioneer.

Maurice Wilkes married, in 1947, Nina Twyman. They had a son and two daughters.


[Apr 25, 2008] Interview with Donald Knuth By Donald E. Knuth, Andrew Binstock

Apr 25, 2008

Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.

Andrew Binstock: You are one of the fathers of the open-source revolution, even if you aren't widely heralded as such. You previously have stated that you released TeX as open source because of the problem of proprietary implementations at the time, and to invite corrections to the code-both of which are key drivers for open-source projects today. Have you been surprised by the success of open source since that time?

Donald Knuth: The success of open source code is perhaps the only thing in the computer field that hasn't surprised me during the past several decades. But it still hasn't reached its full potential; I believe that open-source programs will begin to be completely dominant as the economy moves more and more from products towards services, and as more and more volunteers arise to improve the code.

For example, open-source code can produce thousands of binaries, tuned perfectly to the configurations of individual users, whereas commercial software usually will exist in only a few versions. A generic binary executable file must include things like inefficient "sync" instructions that are totally inappropriate for many installations; such wastage goes away when the source code is highly configurable. This should be a huge win for open source.

Yet I think that a few programs, such as Adobe Photoshop, will always be superior to competitors like the Gimp-for some reason, I really don't know why! I'm quite willing to pay good money for really good software, if I believe that it has been produced by the best programmers.

Remember, though, that my opinion on economic questions is highly suspect, since I'm just an educator and scientist. I understand almost nothing about the marketplace.

Andrew: A story states that you once entered a programming contest at Stanford (I believe) and you submitted the winning entry, which worked correctly after a single compilation. Is this story true? In that vein, today's developers frequently build programs writing small code increments followed by immediate compilation and the creation and running of unit tests. What are your thoughts on this approach to software development?

Donald: The story you heard is typical of legends that are based on only a small kernel of truth. Here's what actually happened: John McCarthy decided in 1971 to have a Memorial Day Programming Race. All of the contestants except me worked at his AI Lab up in the hills above Stanford, using the WAITS time-sharing system; I was down on the main campus, where the only computer available to me was a mainframe for which I had to punch cards and submit them for processing in batch mode. I used Wirth's ALGOL W system (the predecessor of Pascal). My program didn't work the first time, but fortunately I could use Ed Satterthwaite's excellent offline debugging system for ALGOL W, so I needed only two runs. Meanwhile, the folks using WAITS couldn't get enough machine cycles because their machine was so overloaded. (I think that the second-place finisher, using that "modern" approach, came in about an hour after I had submitted the winning entry with old-fangled methods.) It wasn't a fair contest.

As to your real question, the idea of immediate compilation and "unit tests" appeals to me only rarely, when I'm feeling my way in a totally unknown environment and need feedback about what works and what doesn't. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up."

Andrew: One of the emerging problems for developers, especially client-side developers, is changing their thinking to write programs in terms of threads. This concern, driven by the advent of inexpensive multicore PCs, surely will require that many algorithms be recast for multithreading, or at least to be thread-safe. So far, much of the work you've published for Volume 4 of The Art of Computer Programming (TAOCP) doesn't seem to touch on this dimension. Do you expect to enter into problems of concurrency and parallel programming in upcoming work, especially since it would seem to be a natural fit with the combinatorial topics you're currently working on?

Donald: The field of combinatorial algorithms is so vast that I'll be lucky to pack its sequential aspects into three or four physical volumes, and I don't think the sequential methods are ever going to be unimportant. Conversely, the half-life of parallel techniques is very short, because hardware changes rapidly and each new machine needs a somewhat different approach. So I decided long ago to stick to what I know best. Other people understand parallel machines much better than I do; programmers should listen to them, not me, for guidance on how to deal with simultaneity.

Andrew: Vendors of multicore processors have expressed frustration at the difficulty of moving developers to this model. As a former professor, what thoughts do you have on this transition and how to make it happen? Is it a question of proper tools, such as better native support for concurrency in languages, or of execution frameworks? Or are there other solutions?

Donald: I don't want to duck your question entirely. I might as well flame a bit about my personal unhappiness with the current trend toward multicore architecture. To me, it looks more or less like the hardware designers have run out of ideas, and that they're trying to pass the blame for the future demise of Moore's Law to the software writers by giving us machines that work faster only on a few key benchmarks! I won't be surprised at all if the whole multithreading idea turns out to be a flop, worse than the "Titanium" approach that was supposed to be so terrific-until it turned out that the wished-for compilers were basically impossible to write.

Let me put it this way: During the past 50 years, I've written well over a thousand programs, many of which have substantial size. I can't think of even five of those programs that would have been enhanced noticeably by parallelism or multithreading. Surely, for example, multiple processors are no help to TeX.[1]

How many programmers do you know who are enthusiastic about these promised machines of the future? I hear almost nothing but grief from software people, although the hardware folks in our department assure me that I'm wrong.

I know that important applications for parallelism exist-rendering graphics, breaking codes, scanning images, simulating physical and biological processes, etc. But all these applications require dedicated code and special-purpose techniques, which will need to be changed substantially every few years.

Even if I knew enough about such methods to write about them in TAOCP, my time would be largely wasted, because soon there would be little reason for anybody to read those parts. (Similarly, when I prepare the third edition of Volume 3 I plan to rip out much of the material about how to sort on magnetic tapes. That stuff was once one of the hottest topics in the whole software field, but now it largely wastes paper when the book is printed.)

The machine I use today has dual processors. I get to use them both only when I'm running two independent jobs at the same time; that's nice, but it happens only a few minutes every week. If I had four processors, or eight, or more, I still wouldn't be any better off, considering the kind of work I do-even though I'm using my computer almost every day during most of the day. So why should I be so happy about the future that hardware vendors promise? They think a magic bullet will come along to make multicores speed up my kind of work; I think it's a pipe dream. (No-that's the wrong metaphor! "Pipelines" actually work for me, but threads don't. Maybe the word I want is "bubble.")

From the opposite point of view, I do grant that web browsing probably will get better with multicores. I've been talking about my technical work, however, not recreation. I also admit that I haven't got many bright ideas about what I wish hardware designers would provide instead of multicores, now that they've begun to hit a wall with respect to sequential computation. (But my MMIX design contains several ideas that would substantially improve the current performance of the kinds of programs that concern me most-at the cost of incompatibility with legacy x86 programs.)

Andrew: One of the few projects of yours that hasn't been embraced by a widespread community is literate programming. What are your thoughts about why literate programming didn't catch on? And is there anything you'd have done differently in retrospect regarding literate programming?

Donald: Literate programming is a very personal thing. I think it's terrific, but that might well be because I'm a very strange person. It has tens of thousands of fans, but not millions.

In my experience, software created with literate programming has turned out to be significantly better than software developed in more traditional ways. Yet ordinary software is usually okay-I'd give it a grade of C (or maybe C++), but not F; hence, the traditional methods stay with us. Since they're understood by a vast community of programmers, most people have no big incentive to change, just as I'm not motivated to learn Esperanto even though it might be preferable to English and German and French and Russian (if everybody switched).

Jon Bentley probably hit the nail on the head when he once was asked why literate programming hasn't taken the whole world by storm. He observed that a small percentage of the world's population is good at programming, and a small percentage is good at writing; apparently I am asking everybody to be in both subsets.

Yet to me, literate programming is certainly the most important thing that came out of the TeX project. Not only has it enabled me to write and maintain programs faster and more reliably than ever before, and been one of my greatest sources of joy since the 1980s-it has actually been indispensable at times. Some of my major programs, such as the MMIX meta-simulator, could not have been written with any other methodology that I've ever heard of. The complexity was simply too daunting for my limited brain to handle; without literate programming, the whole enterprise would have flopped miserably.

If people do discover nice ways to use the newfangled multithreaded machines, I would expect the discovery to come from people who routinely use literate programming. Literate programming is what you need to rise above the ordinary level of achievement. But I don't believe in forcing ideas on anybody. If literate programming isn't your style, please forget it and do what you like. If nobody likes it but me, let it die.

On a positive note, I've been pleased to discover that the conventions of CWEB are already standard equipment within preinstalled software such as Makefiles, when I get off-the-shelf Linux these days.

Andrew: In Fascicle 1 of Volume 1, you reintroduced the MMIX computer, which is the 64-bit upgrade to the venerable MIX machine comp-sci students have come to know over many years. You previously described MMIX in great detail in MMIXware. I've read portions of both books, but can't tell whether the Fascicle updates or changes anything that appeared in MMIXware, or whether it's a pure synopsis. Could you clarify?

Donald: Volume 1 Fascicle 1 is a programmer's introduction, which includes instructive exercises and such things. The MMIXware book is a detailed reference manual, somewhat terse and dry, plus a bunch of literate programs that describe prototype software for people to build upon. Both books define the same computer (once the errata to MMIXware are incorporated from my website). For most readers of TAOCP, the first fascicle contains everything about MMIX that they'll ever need or want to know.

I should point out, however, that MMIX isn't a single machine; it's an architecture with almost unlimited varieties of implementations, depending on different choices of functional units, different pipeline configurations, different approaches to multiple-instruction-issue, different ways to do branch prediction, different cache sizes, different strategies for cache replacement, different bus speeds, etc. Some instructions and/or registers can be emulated with software on "cheaper" versions of the hardware. And so on. It's a test bed, all simulatable with my meta-simulator, even though advanced versions would be impossible to build effectively until another five years go by (and then we could ask for even further advances just by advancing the meta-simulator specs another notch).

Suppose you want to know if five separate multiplier units and/or three-way instruction issuing would speed up a given MMIX program. Or maybe the instruction and/or data cache could be made larger or smaller or more associative. Just fire up the meta-simulator and see what happens.

Andrew: As I suspect you don't use unit testing with MMIXAL, could you step me through how you go about making sure that your code works correctly under a wide variety of conditions and inputs? If you have a specific work routine around verification, could you describe it?

Donald: Most examples of machine language code in TAOCP appear in Volumes 1-3; by the time we get to Volume 4, such low-level detail is largely unnecessary and we can work safely at a higher level of abstraction. Thus, I've needed to write only a dozen or so MMIX programs while preparing the opening parts of Volume 4, and they're all pretty much toy programs-nothing substantial. For little things like that, I just use informal verification methods, based on the theory that I've written up for the book, together with the MMIXAL assembler and MMIX simulator that are readily available on the Net (and described in full detail in the MMIXware book).

That simulator includes debugging features like the ones I found so useful in Ed Satterthwaite's system for ALGOL W, mentioned earlier. I always feel quite confident after checking a program with those tools.

Andrew: Despite its formulation many years ago, TeX is still thriving, primarily as the foundation for LaTeX. While TeX has been effectively frozen at your request, are there features that you would want to change or add to it, if you had the time and bandwidth? If so, what are the major items you would add or change?

Donald: I believe changes to TeX would cause much more harm than good. Other people who want other features are creating their own systems, and I've always encouraged further development-except that nobody should give their program the same name as mine. I want to take permanent responsibility for TeX and Metafont, and for all the nitty-gritty things that affect existing documents that rely on my work, such as the precise dimensions of characters in the Computer Modern fonts.

Andrew: One of the little-discussed aspects of software development is how to do design work on software in a completely new domain. You were faced with this issue when you undertook TeX: No prior art was available to you as source code, and it was a domain in which you weren't an expert. How did you approach the design, and how long did it take before you were comfortable entering into the coding portion?

Donald: That's another good question! I've discussed the answer in great detail in Chapter 10 of my book Literate Programming, together with Chapters 1 and 2 of my book Digital Typography. I think that anybody who is really interested in this topic will enjoy reading those chapters. (See also Digital Typography Chapters 24 and 25 for the complete first and second drafts of my initial design of TeX in 1977.)

Andrew: The books on TeX and the program itself show a clear concern for limiting memory usage--an important problem for systems of that era. Today, the concern for memory usage in programs has more to do with cache sizes. As someone who has designed a processor in software, you must surely have had cache-aware and cache-oblivious algorithms cross your radar screen. Is the role of processor caches in algorithm design something that you expect to cover, even if indirectly, in your upcoming work?

Donald: I mentioned earlier that MMIX provides a test bed for many varieties of cache. And it's a software-implemented machine, so we can perform experiments that will be repeatable even a hundred years from now. Certainly the next editions of Volumes 1-3 will discuss the behavior of various basic algorithms with respect to different cache parameters.

In Volume 4 so far, I count about a dozen references to cache memory and cache-friendly approaches (not to mention a "memo cache," which is a different but related idea in software).
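
A concrete illustration of why cache parameters belong in algorithm analysis (our toy example, not Knuth's): the two loops below perform exactly the same additions, yet on typical hardware the traversal order alone changes the running time several-fold, because one order streams through memory while the other strides across it.

    /* cachedemo.c -- a classic cache-effect demonstration.
       Both loops perform exactly N*N additions; only the memory access
       pattern differs, and on real hardware that difference dominates. */
    #include <stdio.h>
    #include <time.h>

    #define N 2048

    static double a[N][N];

    int main(void)
    {
        double s = 0;
        clock_t t;

        t = clock();
        for (int i = 0; i < N; i++)      /* row order: sequential, cache-friendly */
            for (int j = 0; j < N; j++)
                s += a[i][j];
        printf("row order:    %.3fs\n", (double)(clock() - t) / CLOCKS_PER_SEC);

        t = clock();
        for (int j = 0; j < N; j++)      /* column order: strides N*8 bytes per access */
            for (int i = 0; i < N; i++)
                s += a[i][j];
        printf("column order: %.3fs\n", (double)(clock() - t) / CLOCKS_PER_SEC);

        return s != 0;                   /* use s so the loops are not optimized away */
    }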

Andrew: What set of tools do you use today for writing TAOCP? Do you use TeX? LaTeX? CWEB? Word processor? And what do you use for the coding?

Donald: My general working style is to write everything first with pencil and paper, sitting beside a big wastebasket. Then I use Emacs to enter the text into my machine, using the conventions of TeX. I use tex, dvips, and gv to see the results, which appear on my screen almost instantaneously these days. I check my math with Mathematica.

I program every algorithm that's discussed (so that I can thoroughly understand it) using CWEB, which works splendidly with the GDB debugger. I make the illustrations with MetaPost (or, in rare cases, on a Mac with Adobe Photoshop or Illustrator). I have some homemade tools, like my own spell-checker for TeX and CWEB within Emacs. I designed my own bitmap font for use with Emacs, because I hate the way the ASCII apostrophe and the left open quote have morphed into independent symbols that no longer match each other visually. I have special Emacs modes to help me classify all the tens of thousands of papers and notes in my files, and special Emacs keyboard shortcuts that make bookwriting a little bit like playing an organ. I prefer rxvt to xterm for terminal input. Since last December, I've been using a file backup system called backupfs, which meets my need beautifully to archive the daily state of every file.

According to the current directories on my machine, I've written 68 different CWEB programs so far this year. There were about 100 in 2007, 90 in 2006, 100 in 2005, 90 in 2004, etc. Furthermore, CWEB has an extremely convenient "change file" mechanism, with which I can rapidly create multiple versions and variations on a theme; so far in 2008 I've made 73 variations on those 68 themes. (Some of the variations are quite short, only a few bytes; others are 5KB or more. Some of the CWEB programs are quite substantial, like the 55-page BDD package that I completed in January.) Thus, you can see how important literate programming is in my life.
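
For readers who haven't seen the mechanism: a CWEB change file is a list of find-and-replace patches that CTANGLE and CWEAVE apply while reading the master source, so the master file itself is never edited. A minimal sketch, with invented file names and contents (only the @x/@y/@z delimiters are the actual convention):

    Enlarge the buffer for the batch experiments.
    @x
    #define BUFSIZE 1024
    @y
    #define BUFSIZE 65536
    @z

Running something like "ctangle foo.w foo-batch.ch" then produces the C source with the patch applied, which is how dozens of variations can share one master program.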

I currently use Ubuntu Linux, on a standalone laptop-it has no Internet connection. I occasionally carry flash memory drives between this machine and the Macs that I use for network surfing and graphics; but I trust my family jewels only to Linux. Incidentally, with Linux I much prefer the keyboard focus that I can get with classic FVWM to the GNOME and KDE environments that other people seem to like better. To each his own.

Andrew: You state in the preface of Fascicle 0 of Volume 4 of TAOCP that Volume 4 surely will comprise three volumes and possibly more. It's clear from the text that you're really enjoying writing on this topic. Given that, what is your confidence in the note posted on the TAOCP website that Volume 5 will see the light of day by 2015?

Donald: If you check the Wayback Machine for previous incarnations of that web page, you will see that the number 2015 has not been constant.

You're certainly correct that I'm having a ball writing up this material, because I keep running into fascinating facts that simply can't be left out-even though more than half of my notes don't make the final cut.

Precise time estimates are impossible, because I can't tell until getting deep into each section how much of the stuff in my files is going to be really fundamental and how much of it is going to be irrelevant to my book or too advanced. A lot of the recent literature is academic one-upmanship of limited interest to me; authors these days often introduce arcane methods that outperform the simpler techniques only when the problem size exceeds the number of protons in the universe. Such algorithms could never be important in a real computer application. I read hundreds of such papers to see if they might contain nuggets for programmers, but most of them wind up getting short shrift.

From a scheduling standpoint, all I know at present is that I must someday digest a huge amount of material that I've been collecting and filing for 45 years. I gain important time by working in batch mode: I don't read a paper in depth until I can deal with dozens of others on the same topic during the same week. When I finally am ready to read what has been collected about a topic, I might find out that I can zoom ahead because most of it is eminently forgettable for my purposes. On the other hand, I might discover that it's fundamental and deserves weeks of study; then I'd have to edit my website and push that number 2015 closer to infinity.

Andrew: In late 2006, you were diagnosed with prostate cancer. How is your health today?

Donald: Naturally, the cancer is a serious concern. I have superb doctors. At the moment I feel as healthy as ever, modulo being 70 years old. Words flow freely as I write TAOCP and as I write the literate programs that precede drafts of TAOCP. I wake up in the morning with ideas that please me, and some of those ideas actually please me also later in the day when I've entered them into my computer.

On the other hand, I willingly put myself in God's hands with respect to how much more I'll be able to do before cancer or heart disease or senility or whatever strikes. If I should unexpectedly die tomorrow, I'll have no reason to complain, because my life has been incredibly blessed. Conversely, as long as I'm able to write about computer science, I intend to do my best to organize and expound upon the tens of thousands of technical papers that I've collected and made notes on since 1962.

Andrew: On your website, you mention that the Peoples Archive recently made a series of videos in which you reflect on your past life. In segment 93, "Advice to Young People," you advise that people shouldn't do something simply because it's trendy. As we know all too well, software development is as subject to fads as any other discipline. Can you give some examples that are currently in vogue, which developers shouldn't adopt simply because they're currently popular or because that's the way they're currently done? Would you care to identify important examples of this outside of software development?

Donald: Hmm. That question is almost contradictory, because I'm basically advising young people to listen to themselves rather than to others, and I'm one of the others. Almost every biography of every person whom you would like to emulate will say that he or she did many things against the "conventional wisdom" of the day.

Still, I hate to duck your questions even though I also hate to offend other people's sensibilities-given that software methodology has always been akin to religion. With the caveat that there's no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, let me just say that almost everything I've ever heard associated with the term "extreme programming" sounds like exactly the wrong way to go...with one exception. The exception is the idea of working in teams and reading each other's code. That idea is crucial, and it might even mask out all the terrible aspects of extreme programming that alarm me.

I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you're totally convinced that reusable code is wonderful, I probably won't be able to sway you anyway, but you'll never convince me that reusable code isn't mostly a menace.

Here's a question that you may well have meant to ask: Why is the new book called Volume 4 Fascicle 0, instead of Volume 4 Fascicle 1? The answer is that computer programmers will understand that I wasn't ready to begin writing Volume 4 of TAOCP at its true beginning point, because we know that the initialization of a program can't be written until the program itself takes shape. So I started in 2005 with Volume 4 Fascicle 2, after which came Fascicles 3 and 4. (Think of Star Wars, which began with Episode 4.)

Finally I was psyched up to write the early parts, but I soon realized that the introductory sections needed to include much more stuff than would fit into a single fascicle. Therefore, remembering Dijkstra's dictum that counting should begin at 0, I decided to launch Volume 4 with Fascicle 0. Look for Volume 4 Fascicle 1 later this year.

References

[1] My colleague Kunle Olukotun points out that, if the usage of TeX became a major bottleneck so that people had a dozen processors and really needed to speed up their typesetting terrifically, a super-parallel version of TeX could be developed that uses "speculation" to typeset a dozen chapters at once: Each chapter could be typeset under the assumption that the previous chapters don't do anything strange to mess up the default logic. If that assumption fails, we can fall back on the normal method of doing a chapter at a time; but in the majority of cases, when only normal typesetting was being invoked, the processing would indeed go 12 times faster. Users who cared about speed could adapt their behavior and use TeX in a disciplined way.
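
The fallback scheme Olukotun describes is easy to model in code. Below is a hypothetical C/pthreads sketch (every name is invented; typeset() merely stands in for processing one chapter): all chapters are processed in parallel under the default-state assumption, and a cheap sequential pass keeps each speculative result whose assumption held, redoing only the rest.

    /* speculate.c -- toy model of speculative "typesetting"
       (compile with: gcc speculate.c -lpthread).  All names are invented. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define CHAPTERS 12

    /* Process one chapter given the state left by its predecessors; report
       whether this chapter does something strange that changes the state. */
    static int typeset(int chapter, int state_in, bool *changes_state)
    {
        *changes_state = (chapter == 7);      /* pretend chapter 7 is unusual */
        return 1000 * chapter + state_in;     /* stand-in for "typeset output" */
    }

    static int  spec_result[CHAPTERS];
    static bool spec_changes[CHAPTERS];

    static void *worker(void *arg)            /* speculate: assume default state 0 */
    {
        int i = *(int *)arg;
        spec_result[i] = typeset(i, 0, &spec_changes[i]);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[CHAPTERS];
        int id[CHAPTERS];

        for (int i = 0; i < CHAPTERS; i++) {
            id[i] = i;
            pthread_create(&tid[i], NULL, worker, &id[i]);
        }
        for (int i = 0; i < CHAPTERS; i++)
            pthread_join(tid[i], NULL);

        /* Sequential commit pass: keep each speculative result whose
           assumption (default state on entry) held; redo the rest. */
        int state = 0, redone = 0;
        for (int i = 0; i < CHAPTERS; i++) {
            bool changes;
            int result;
            if (state == 0) {                          /* assumption held */
                result  = spec_result[i];
                changes = spec_changes[i];
            } else {
                result = typeset(i, state, &changes);  /* fall back serially */
                redone++;
            }
            if (changes) state = 1;
            printf("chapter %2d -> %d\n", i, result);
        }
        printf("%d of %d chapters had to be redone\n", redone, CHAPTERS);
        return 0;
    }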

Andrew Binstock is the principal analyst at Pacific Data Works. He is a columnist for SD Times and senior contributing editor for InfoWorld magazine. His blog can be found at: http://binstock.blogspot.com.

[Feb 21, 2008] Project details for Bare Bones interpreter

freshmeat.net

BareBones is an interpreter for the "Bare Bones" programming language defined in Chapter 11 of "Computer Science: An Overview", 9th Edition, by J. Glenn Brookshear.

Release focus: Minor feature enhancements

Changes:
Identifiers were made case-insensitive. A summary of the language was added to the README file.

Author:
Eric Smith [contact developer]

Bill Joy Quotes

[Jan 1, 2008] Computer Science Education: Where Are the Software Engineers of Tomorrow?

STSC CrossTalk

Computer Science Education: Where Are the Software Engineers of Tomorrow?

Dr. Robert B.K. Dewar, AdaCore Inc.
Dr. Edmond Schonberg, AdaCore Inc.

It is our view that Computer Science (CS) education is neglecting basic skills, in particular in the areas of programming and formal methods. We consider that the general adoption of Java as a first programming language is in part responsible for this decline. We examine briefly the set of programming skills that should be part of every software professional's repertoire.


It is all about programming! Over the last few years we have noticed worrisome trends in CS education. The following represents a summary of those trends:

  1. Mathematics requirements in CS programs are shrinking.
  2. The development of programming skills in several languages is giving way to cookbook approaches using large libraries and special-purpose packages.
  3. The resulting set of skills is insufficient for today's software industry (in particular for safety and security purposes) and, unfortunately, matches well what the outsourcing industry can offer. We are training easily replaceable professionals.

These trends are visible in the latest curriculum recommendations from the Association for Computing Machinery (ACM). Curriculum 2005 does not mention mathematical prerequisites at all, and it mentions only one course in the theory of programming languages [1].

We have seen these developments from both sides: As faculty members at New York University for decades, we have regretted the introduction of Java as a first language of instruction for most computer science majors. We have seen how this choice has weakened the formation of our students, as reflected in their performance in systems and architecture courses. As founders of a company that specializes in Ada programming tools for mission-critical systems, we find it harder to recruit qualified applicants who have the right foundational skills. We want to advocate a more rigorous formation, in which formal methods are introduced early on, and programming languages play a central role in CS education.

Formal Methods and Software Construction

Formal techniques for proving the correctness of programs were an extremely active subject of research 20 years ago. However, the methods (and the hardware) of the time prevented these techniques from becoming widespread, and as a result they are more or less ignored by most CS programs. This is unfortunate because the techniques have evolved to the point that they can be used in large-scale systems and can contribute substantially to the reliability of these systems. A case in point is the use of SPARK in the re-engineering of the ground-based air traffic control system in the United Kingdom (see a description of iFACTS – Interim Future Area Control Tools Support, at <www.nats.co.uk/article/90>). SPARK is a subset of Ada augmented with assertions that allow the designer to prove important properties of a program: termination, absence of run-time exceptions, finite memory usage, etc. [2]. It is obvious that this kind of design and analysis methodology (dubbed Correctness by Construction) will add substantially to the reliability of a system whose design has involved SPARK from the beginning. However, PRAXIS, the company that developed SPARK and which is designing iFACTS, finds it hard to recruit people with the required mathematical competence (and this is the case even in the United Kingdom, where formal methods are more widely taught and used than in the United States).

Another formal approach to which CS students need exposure is model checking and linear temporal logic for the design of concurrent systems. For a modern discussion of the topic, which is central to mission-critical software, see [3].

Another area of computer science which we find neglected is the study of floating-point computations. At New York University, a course in numerical methods and floating-point computing used to be required, but this requirement was dropped many years ago, and now very few students take this course. The topic is vital to all scientific and engineering software and is semantically delicate. One would imagine that it would be a required part of all courses in scientific computing, but these often take MATLAB to be the universal programming tool and ignore the topic altogether.

The Pitfalls of Java as a First Programming Language

Because of its popularity in the context of Web applications and the ease with which beginners can produce graphical programs, Java has become the most widely used language in introductory programming courses. We consider this to be a misguided attempt to make programming more fun, perhaps in reaction to the drop in CS enrollments that followed the dot-com bust. What we observed at New York University is that the Java programming courses did not prepare our students for the first course in systems, much less for more advanced ones. Students found it hard to write programs that did not have a graphic interface, had no feeling for the relationship between the source program and what the hardware would actually do, and (most damaging) did not understand the semantics of pointers at all, which made the use of C in systems programming very challenging.

Let us propose the following principle: The irresistible beauty of programming consists in the reduction of complex formal processes to a very small set of primitive operations. Java, instead of exposing this beauty, encourages the programmer to approach problem-solving like a plumber in a hardware store: by rummaging through a multitude of drawers (i.e. packages) we will end up finding some gadget (i.e. class) that does roughly what we want. How it does it is not interesting! The result is a student who knows how to put a simple program together, but does not know how to program. A further pitfall of the early use of Java libraries and frameworks is that it is impossible for the student to develop a sense of the run-time cost of what is written because it is extremely hard to know what any method call will eventually execute. A lucid analysis of the problem is presented in [4].

We are seeing some backlash against this approach. For example, Bjarne Stroustrup reports from Texas A&M University that the industry is showing increasing unhappiness with the results of this approach. Specifically, he notes the following:

I have had a lot of complaints about that [the use of Java as a first programming language] from industry, specifically from AT&T, IBM, Intel, Bloomberg, NI, Microsoft, Lockheed-Martin, and more. [5]

He noted the following in a private discussion on this topic:

It [Texas A&M] did [teach Java as the first language]. Then I started teaching C++ to the electrical engineers and when the EE students started to out-program the CS students, the CS department switched to C++. [5]

It will be interesting to see how many departments follow this trend. At AdaCore, we are certainly aware of many universities that have adopted Ada as a first language because of similar concerns.

A Real Programmer Can Write in Any Language (C, Java, Lisp, Ada)

Software professionals of a certain age will remember the slogan of old-timers from two generations ago when structured programming became the rage: Real programmers can write Fortran in any language. The slogan is a reminder of how thinking habits of programmers are influenced by the first language they learn and how hard it is to shake these habits if you do all your programming in a single language. Conversely, we want to say that a competent programmer is comfortable with a number of different languages and that the programmer must be able to use the mental tools favored by one of them, even when programming in another. For example, the user of an imperative language such as Ada or C++ must be able to write in a functional style, acquired through practice with Lisp and ML1, when manipulating recursive structures. This is one indication of the importance of learning in-depth a number of different programming languages. What follows summarizes what we think are the critical contributions that well-established languages make to the mental tool-set of real programmers. For example, a real programmer should be able to program inheritance and dynamic dispatching in C, information hiding in Lisp, tree manipulation libraries in Ada, and garbage collection in anything but Java. The study of a wide variety of languages is, thus, indispensable to the well-rounded programmer.
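
As a footnote to that list: here is a minimal sketch (our illustration, with invented type names) of how inheritance and dynamic dispatching are typically programmed in plain C, using a struct of function pointers as a hand-rolled vtable:

    #include <stdio.h>

    /* The "virtual method table": one function pointer per overridable operation. */
    struct shape;
    struct shape_vtbl {
        double (*area)(const struct shape *self);
        const char *name;
    };

    /* The "base class" carries a pointer to its vtable. */
    struct shape {
        const struct shape_vtbl *vtbl;
    };

    /* A "derived class" embeds the base as its first member. */
    struct circle {
        struct shape base;
        double radius;
    };

    static double circle_area(const struct shape *self)
    {
        const struct circle *c = (const struct circle *)self;  /* downcast */
        return 3.14159265358979 * c->radius * c->radius;
    }

    static const struct shape_vtbl circle_vtbl = { circle_area, "circle" };

    int main(void)
    {
        struct circle c = { { &circle_vtbl }, 2.0 };
        struct shape *s = &c.base;           /* "upcast" to the base type */
        /* Dynamic dispatch: this call site never learns the concrete type. */
        printf("%s area = %f\n", s->vtbl->name, s->vtbl->area(s));
        return 0;
    }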

Why C Matters

C is the low-level language that everyone must know. It can be seen as a portable assembly language, and as such it exposes the underlying machine and forces the student to understand clearly the relationship between software and hardware. Performance analysis is more straightforward, because the cost of every software statement is clear. Finally, compilers (GCC for example) make it easy to examine the generated assembly code, which is an excellent tool for understanding machine language and architecture.
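
That last point is easy to try for yourself. GCC's -S option stops after code generation and writes the assembly to a .s file, and -fverbose-asm annotates the listing so it is easier to map back to the source. A tiny example (the file name is ours):

    /* sum.c -- compile with:  gcc -O2 -S -fverbose-asm sum.c
       and then read sum.s to see exactly what the machine is asked to do. */
    long sum(const long *a, long n)
    {
        long total = 0;
        for (long i = 0; i < n; i++)   /* watch what the optimizer does here */
            total += a[i];
        return total;
    }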

Why C++ Matters

C++ brings to C the fundamental concepts of modern software engineering: encapsulation with classes and namespaces, information hiding through protected and private data and operations, programming by extension through virtual methods and derived classes, etc. C++ also pushes storage management as far as it can go without full-blown garbage collection, with constructors and destructors.

Why Lisp Matters

Every programmer must be comfortable with functional programming and with the important notion of referential transparency. Even though most programmers find imperative programming more intuitive, they must recognize that in many contexts a functional, stateless style is clear, natural, easy to understand, and efficient to boot.

An additional benefit of the practice of Lisp is that the program is written in what amounts to abstract syntax, namely the internal representation that most compilers use between parsing and code generation. Knowing Lisp is thus an excellent preparation for any software work that involves language processing.
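
The "abstract syntax" point can be made concrete in any language. In the C sketch below (ours, not the authors'), we build by hand the tree a parser would produce for (1 + 2) * 3 and then evaluate it; a Lisp programmer writes that same tree directly as the S-expression (* (+ 1 2) 3):

    #include <stdio.h>
    #include <stdlib.h>

    /* Abstract syntax for arithmetic: a node is a literal or a binary operator. */
    typedef struct node {
        char op;                   /* '+' or '*' for operators, 0 for a literal */
        double value;              /* used only when op == 0 */
        struct node *left, *right;
    } node;

    static node *leaf(double v)
    {
        node *n = calloc(1, sizeof *n);
        n->value = v;
        return n;
    }

    static node *binop(char op, node *l, node *r)
    {
        node *n = calloc(1, sizeof *n);
        n->op = op; n->left = l; n->right = r;
        return n;
    }

    static double eval(const node *n)   /* the core of every interpreter */
    {
        if (n->op == 0) return n->value;
        double l = eval(n->left), r = eval(n->right);
        return n->op == '+' ? l + r : l * r;
    }

    int main(void)
    {
        /* The tree for (1 + 2) * 3 -- in Lisp, simply (* (+ 1 2) 3). */
        node *tree = binop('*', binop('+', leaf(1), leaf(2)), leaf(3));
        printf("%g\n", eval(tree));     /* prints 9 */
        return 0;
    }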

Finally, Lisp (at least in its lean Scheme incarnation) is amenable to a very compact self-definition. Seeing a complete Lisp interpreter written in Lisp is an intellectual revelation that all computer scientists should experience.

Why Java Matters

Despite our comments on Java as a first or only language, we think that Java has an important role to play in CS instruction. We will mention only two aspects of the language that must be part of the real programmer's skill set:

  1. An understanding of concurrent programming (for which threads provide a basic low-level model).
  2. Reflection, namely the understanding that a program can be instrumented to examine its own state and to determine its own behavior in a dynamically changing environment.

Why Ada Matters

Ada is the language of software engineering par excellence. Even when it is not the language of instruction in programming courses, it is the language chosen to teach courses in software engineering. This is because the notions of strong typing, encapsulation, information hiding, concurrency, generic programming, inheritance, and so on, are embodied in specific features of the language. From our experience and that of our customers, we can say that a real programmer writes Ada in any language. For example, an Ada programmer accustomed to Ada's package model, which strongly separates specification from implementation, will tend to write C in a style where well-commented header files act in somewhat the same way as package specs in Ada. The programmer will include bounds checking and consistency checks when passing mutable structures between subprograms to mimic the strong-typing checks that Ada mandates [6]. She will organize concurrent programs into tasks and protected objects, with well-defined synchronization and communication mechanisms.
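
As a hedged sketch of that style (our illustration, not the authors' code): a C header written the way an Ada programmer writes a package spec, with the invariant stated up front and every operation checking its bounds instead of trusting the caller.

    /* stack.h -- plays the role of an Ada package spec: the representation is
       visible to the compiler, but clients are expected to use only the
       checked operations below.  Invariant: 0 <= top <= STACK_MAX. */
    #ifndef STACK_H
    #define STACK_H
    #include <stdbool.h>

    enum { STACK_MAX = 100 };

    typedef struct {
        int items[STACK_MAX];
        int top;
    } stack;

    void stack_init(stack *s);
    bool stack_push(stack *s, int v);    /* returns false instead of overflowing */
    bool stack_pop(stack *s, int *out);  /* returns false instead of underflowing */

    #endif

    /* stack.c -- the "package body": every operation revalidates the invariant,
       mimicking the checks Ada performs automatically. */
    #include <assert.h>
    #include "stack.h"

    void stack_init(stack *s) { s->top = 0; }

    bool stack_push(stack *s, int v)
    {
        assert(s && s->top >= 0 && s->top <= STACK_MAX);
        if (s->top == STACK_MAX) return false;    /* explicit bounds check */
        s->items[s->top++] = v;
        return true;
    }

    bool stack_pop(stack *s, int *out)
    {
        assert(s && s->top >= 0 && s->top <= STACK_MAX);
        if (s->top == 0) return false;
        *out = s->items[--s->top];
        return true;
    }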

The concurrency features of Ada are particularly important in our age of multi-core architectures. We find it surprising that these architectures should be presented as a novel challenge to software design when Ada had well-designed mechanisms for writing safe, concurrent software 30 years ago.

Programming Languages Are Not the Whole Story

A well-rounded CS curriculum will include an advanced course in programming languages that covers a wide variety of languages, chosen to broaden the understanding of the programming process, rather than to build a résumé in perceived hot languages. We are somewhat dismayed to see the popularity of scripting languages in introductory programming courses. Such languages (Javascript, PHP, Atlas) are indeed popular tools for today's Web applications, but they have all the pedagogical defects that we ascribe to Java and provide no opportunity to learn algorithms and performance analysis. Their absence of strong typing leads to a trial-and-error programming style and prevents students from acquiring the discipline of separating the design of interfaces from specifications.

However, teaching the right languages alone is not enough. Students need to be exposed to the tools used to construct large-scale reliable programs, as we discussed at the start of this article. Relevant topics include formal specification methods and formal proof methodologies, as well as an understanding of how high-reliability code is certified in the real world. When you step into a plane, you are putting your life in the hands of software which had better be totally reliable. As a computer scientist, you should have some knowledge of how this level of reliability is achieved. In this day and age, the fear of terrorist cyber attacks has given a new urgency to the building of software that is not only bug free, but is also immune from malicious attack. Such high-security software relies even more extensively on formal methodologies, and our students need to be prepared for this new world.

References
  1. Joint Taskforce for Computing Curricula. "Computing Curricula 2005: The Overview Report." ACM/AIS/IEEE, 2005 <www.acm.org/education/curric_vols/CC2005-March06Final.pdf>.
  2. Barnes, John. High Integrity Ada: The Spark Approach. Addison-Wesley, 2003.
  3. Ben-Ari, M. Principles of Concurrent and Distributed Programming. 2nd ed. Addison-Wesley, 2006.
  4. Mitchell, Nick, Gary Sevitsky, and Harini Srinivasan. "The Diary of a Datum: An Approach to Analyzing Runtime Complexity in Framework-Based Applications." Workshop on Library-Centric Software Design, Object-Oriented Programming, Systems, Languages, and Applications, San Diego, CA, 2005.
  5. Stroustrup, Bjarne. Private communication. Aug. 2007.
  6. Holzmann, Gerard J. "The Power of Ten – Rules for Developing Safety Critical Code." IEEE Computer June 2006: 93-95.
Note
  1. Several programming language and system names have evolved from acronyms whose formal spellings are no longer considered applicable to the current names for which they are readily known. ML, Lisp, GCC, PHP, and SPARK fall under this category.

Who Killed the Software Engineer? (Hint: It Happened in College)

One of the article's main points (one that was misunderstood, Dewar tells me) is that the adoption of Java as a first programming language in college courses has led to this decline. Not exactly. Yes, Dewar believes that Java's graphic libraries allow students to cobble together software without understanding the underlying source code.

But the problem with CS programs goes far beyond their focus on Java, he says.

"A lot of it is, 'Let's make this all more fun.' You know, 'Math is not fun, let's reduce math requirements. Algorithms are not fun, let's get rid of them. Ewww – graphic libraries, they're fun. Let's have people mess with libraries. And [forget] all this business about 'command line' – we'll have people use nice visual interfaces where they can point and click and do fancy graphic stuff and have fun."

Dewar says his email in-box is crammed full of positive responses to his article, from students as well as employers. Many readers have thanked him for speaking up about a situation they believe needs addressing, he says.

One email was from an IT staffer who is working with a junior programmer. The older worker suggested that the young engineer check the call stack to see about a problem, but unfortunately, "he'd never heard of a call stack."

Mama, Don't Let Your Babies Grow Up to be Cowboys (or Computer Programmers)

At fault, in Dewar's view, are universities that are desperate to make up for lower enrollment in CS programs – even if that means gutting the programs.

It's widely acknowledged that enrollments in computer science programs have declined. The chief causes: the dotcom crash made a CS career seem scary, and the never-ending headlines about outsourcing make it seem even scarier. Once seen as a reliable meal ticket, a CS degree is now viewed by some concerned parents with an anxiety usually reserved for Sociology or Philosophy degrees. Why waste your time?

College administrators are understandably alarmed by smaller student head counts. "Universities tend to be in the raw numbers mode," Dewar says. "'Oh my God, the number of computer science majors has dropped by a factor of two, how are we going to reverse that?'"

They've responded, he claims, by dumbing down programs, hoping to make them more accessible and popular. Aspects of curriculum that are too demanding, or perceived as tedious, are downplayed in favor of simplified material that attracts a larger enrollment. This effort is counterproductive, Dewar says.

"To me, raw numbers are not necessarily the first concern. The first concern is that people get a good education."

These students who have been spoon-fed easy material aren't prepared to compete globally. Dewar, who also co-owns a software company and so deals with clients and programmers internationally, says that, coming out of school, "we see French engineers much better trained than American engineers."

[Mar 2, 2007] Microsoft rolls out tutorial site for new programmers

Microsoft has unveiled a new Web site offering lessons to new programmers on building applications using the tools in Visual Studio 2005.

[Sep 30, 2006] Dreamsongs: Triggers & Practice: How Extremes in Writing Relate to Creativity and Learning [pdf]

I presented this keynote at XP/Agile Universe 2002 in Chicago, Illinois. The thrust of the talk is that it is possible to teach creative activities through an MFA process and to get better by practicing, but computer science and software engineering education on one hand and software practices on the other do not begin to match up to the discipline the arts demonstrate. Get to work.

[Sep 30, 2006] Google Code - Summer of Code

Welcome to the Summer of Code 2006 site. We are no longer accepting applications from students or mentoring organizations. Students can view previously submitted applications and respond to mentor comments via the student home page. Accepted student projects will be announced on code.google.com/soc/ on May 23, 2006. You can talk to us in the Summer-Discuss-2006 group or via IRC in #summer-discuss on SlashNET.

If you're feeling nostalgic, you can still access the Summer of Code 2005 site.

Participating Mentoring Organizations

AbiSource (ideas)

Adium (ideas)

Apache Software Foundation (ideas)

Ardour (ideas)

ArgoUML (ideas)

BBC Research (ideas)

Beagle (ideas)

Blender (ideas)

Boost (ideas)

Bricolage (ideas)

ClamAV (ideas)

Cockos Incorporated (ideas)

Codehaus (ideas)

Common Unix Printing System (ideas)

Creative Commons (ideas)

Crystal Space (ideas)

CUWiN Wireless Project (ideas)

Daisy CMS (ideas)

Debian (ideas)

Detached Solutions (ideas)

Django (Lawrence Journal-World) (ideas)

Dojo (ideas)

Drupal (ideas)

Eclipse (ideas)

Etherboot Project (ideas)

FFmpeg (ideas)

FreeBSD Project (ideas)

Gaim (ideas)

Gallery (ideas)

GCC (ideas)

Gentoo (ideas)

GIMP (ideas)

GNOME (ideas)

Google (ideas)

Handhelds.org (ideas)

Haskell.org (ideas)

Horde (ideas)

ICU (ideas)

Inkscape (ideas)

Internet Archive (ideas)

Internet2 (ideas)

Irssi (ideas)

Jabber Software Foundation (ideas)

Joomla! (ideas)

JXTA (ideas)

KDE (ideas)

Lanka Software Foundation (LSF) (ideas)

LispNYC (ideas)

LiveJournal (ideas)

Mars Space Flight Facility (ideas)

MoinMoin (ideas)

Monotone (ideas)

Moodle (ideas)

MythTV (ideas)

NetBSD (ideas)

Nmap Security Scanner (ideas)

OGRE (ideas)

OhioLINK (ideas)

One Laptop Per Child (ideas)

Open Security Foundation (OSVDB) (ideas)

Open Source Applications Foundation (ideas)

Open Source Cluster Application Resources (OSCAR) (ideas)

Open Source Development Labs (OSDL) (ideas)

OpenOffice.org (ideas)

OpenSolaris (ideas)

openSUSE (ideas)

Oregon State University Open Source Lab (OSL) (ideas)

PHP (ideas)

PlanetMath (ideas)

Plone Foundation (ideas)

Portland State University (ideas)

PostgreSQL Project (ideas)

Project Looking Glass (ideas)

Python Software Foundation (ideas)

ReactOS (ideas)

Refractions Research (ideas)

Ruby Central, Inc. (ideas)

Samba (ideas)

SCons (ideas)

Subversion (ideas)

The Fedora Project (ideas)

The Free Earth Foundation (ideas)

The Free Network Project (ideas)

The Free Software Initiative of Japan (ideas)

The GNU Project (ideas)

The LLVM Compiler Infrastructure (ideas)

The Mono Project (ideas)

The Mozilla Foundation (ideas)

The Perl Foundation (ideas)

The Shmoo Group (ideas)

The University of Texas at Austin: RTF New Media Initiative (ideas)

The Wine Project (ideas)

Ubuntu & Bazaar (ideas)

University of Michigan Aerospace Engineering & Space Science Departments

Wikimedia Foundation (ideas)

WinLibre (ideas)

wxWidgets (ideas)

XenSource (ideas)

Xiph.org (ideas)

XMMS2 (ideas)

Xorg (ideas)

XWiki (ideas)

Questions?

Please peruse our Student FAQ and Mentor FAQ.

[Jun 30, 2005] Art and Computer Programming by John Littler

Knuth's view holds; Stallman's view does not make much sense other than in the context of his cult :-). See also the Slashdot discussion Is Programming Art.
ONLamp.com

Art and hand-waving are two things that a lot of people consider to go very well together. Art and computer programming, less so. Donald Knuth put them together when he named his wonderful multivolume set on algorithms The Art of Computer Programming, but Knuth chose a craft-oriented definition of art (PDF) in order to do so.

... ... ...

Someone I didn't attempt to contact but whose words live on is Albert Einstein. Here are a couple of relevant quotes:

[W]e do science when we reconstruct in the language of logic what we have seen and experienced. We do art when we communicate through forms whose connections are not accessible to the conscious mind yet we intuitively recognise them as something meaningful.

Also:

After a certain level of technological skill is achieved, science and art tend to coalesce in aesthetic plasticity and form. The greater scientists are artists as well.[1]

This is a lofty place to start. Here's Fred Brooks with a more direct look at the subject:

The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures.[2]

He doesn't say it's art, but it sure sounds a lot like it.

In that vein, Andy Hunt from the Pragmatic Programmers says:

It is absolutely an art. No question about it. Check out this quote from the Marines:

An even greater part of the conduct of war falls under the realm of art, which is the employment of creative or intuitive skills. Art includes the creative, situational application of scientific knowledge through judgment and experience, and so the art of war subsumes the science of war. The art of war requires the intuitive ability to grasp the essence of a unique military situation and the creative ability to devise a practical solution.

Sounds like a similar situation to software development to me.

There are other similarities between programmers and artists; see my essay at Art In Programming (PDF).

I could go on for hours about the topic...

Guido van Rossum, the creator of Python, aligns more closely with Knuth's definition:

I'm with Knuth's definition (or use) of the word art.

To me, it relates strongly to creativity, which is very important for my line of work.

If there was no art in it, it wouldn't be any fun, and then I wouldn't still be doing it after 30 years.

Bjarne Stroustrup, the creator of C++, is also more like Knuth in refining his definition of art:

When done right, art and craft blends seamlessly. That's the view of several schools of design, though of course not the view of people into "art as provocation".

Define "craft"; define "art". The crafts and arts that I appreciate blend seamlessly into each other so that there is no dilemma.

So far, these views are very top-down. What happens when you change the viewpoint? Paul Graham, programmer and author of Hackers and Painters, responded that he'd written quite a bit on the subject and to feel free to grab something. This was my choice:

I've found that the best sources of ideas are not the other fields that have the word "computer" in their names, but the other fields inhabited by makers. Painting has been a much richer source of ideas than the theory of computation.

For example, I was taught in college that one ought to figure out a program completely on paper before even going near a computer. I found that I did not program this way. I found that I liked to program sitting in front of a computer, not a piece of paper. Worse still, instead of patiently writing out a complete program and assuring myself it was correct, I tended to just spew out code that was hopelessly broken, and gradually beat it into shape. Debugging, I was taught, was a kind of final pass where you caught typos and oversights. The way I worked, it seemed like programming consisted of debugging.

For a long time I felt bad about this, just as I once felt bad that I didn't hold my pencil the way they taught me to in elementary school. If I had only looked over at the other makers, the painters or the architects, I would have realized that there was a name for what I was doing: sketching. As far as I can tell, the way they taught me to program in college was all wrong. You should figure out programs as you're writing them, just as writers and painters and architects do.[3]

Paul goes on to talk about the implications for software design and the joys of dynamic typing, which allows you to stay looser later.

Now, we're right down to the code. This is what Richard Stallman, founder of the GNU Project and the Free Software Foundation, has to say (throwing in a geek joke for good measure):

I would describe programming as a craft, which is a kind of art, but not a fine art. Craft means making useful objects with perhaps decorative touches. Fine art means making things purely for their beauty.

Programming in general is not fine art, but some entries in the obfuscated C contest may qualify. I saw one that could be read as a story in English or as a C program. For the English reading one had to ignore punctuation--for instance, the name Charlotte might appear as char *lotte.

(Once I was eating in Legal Sea Food and ordered arctic char. When it arrived, I looked for a signature, saw none, and complained to my friends, "This is an unsigned char. I wanted a signed char!" I would have complained to the waiter if I had thought he'd get the joke.)

... ... ...

Constraints and Art

The existence of so many constraints in the actual practice of code writing makes it tempting to dismiss programming as art, but when you think about it, people who create recognized art have constraints too. Writers, painters, and so on all have their code--writers must be comprehensible in some sort of way in their chosen language. Musicians have tools of expression in scales, harmonies, and timbres. Painters might seem to be free of this, but cultural rules exist, as they do for the other categories. An artist can break rules in an inspired way and receive the highest praise for it--but sometimes only after they've been dead for a long time.

Program syntax and logic might seem to be more restrictive than these rules, which is why it is more inspiring to think as Fred Brooks did--in the heart of the machine.

Perhaps it's more useful to look at the process. If there are ways in which the concept of art could be useful, then maybe we'll find them there.

If we broadly take the process as consisting of idea, design, and implementation, it's clear that even if we don't accept that implementation is art, there is plenty of scope in the first two stages, and there's certainly scope in the combination. Thinking about it a little more also highlights the reductio ad absurdum of looking at any art in this way, where sculpture becomes the mere act of chiseling stone or painting is the application of paint to a surface.

Looking at the process immediately focuses on the different situations of the lone hacker or small team as opposed to large corporate teams, who in some cases send specification documents to people they don't even know in other countries. The latter groups hope that they've specified things in such detail that they need to know nothing about the code writers other than the fact that they can deliver.

The process for the lone hacker or small team might be almost unrecognizable as a process to an outsider--a process like that described by Paul Graham, where writing the code itself alters and shapes an idea and its design. The design stage is implicit and ongoing. If there is art in idea and design, then this is kneaded through the dough of the project like a special magic ingredient--the seamless combination that Bjarne Stroustrup mentioned. In less mystical terms, the process from beginning to end has strong degrees of integrity.

The situation with larger project groups is more difficult. More people means more time constraints on communication, just because the sums are bigger. There is an immediate tendency for the existence of more rules and a concomitant tendency for thinking inside the box. You can't actually order people to be creative and brilliant. You can only make the environment where it's more likely and hope for the best. Xerox PARC and Bell Labs are two good examples of that.

The real question is how to be inspired for the small team, and additionally, how not to stop inspiration for the larger team. This is a question of personal development. Creative thinking requires knowledge outside of the usual and ordinary, and the freedom and imagination to roam.

Why It Matters

What's the prize? What's the point? At the micro level, it's an idea (which might not be a Wow idea) with a brilliant execution. At the macro level, it's a Wow idea (getting away from analogues, getting away from clones--something entirely new) brilliantly executed.

I realize now that I should have also asked my responders, if they were sympathetic to the idea of programming as art, to nominate some examples. I'll do that myself. Maybe you'd like to nominate some more? I think of the early computer game Elite, made by a team of two, which extended the whole idea of games both graphically and in game play. There are the first spreadsheets VisiCalc and Lotus 1-2-3 for the elegance of the first concept even if you didn't want to use one. Even though I don't use it anymore, the C language is artistic for the elegance of its basic building blocks, which can be assembled to do almost anything.

Anyway, go make some art. Why not?!

John Littler is chief gopher for Mstation.org.

Art and Computer Programming/Discussion

ONLamp.com

James Gosling on Java

Java is a horrible language, but people are better than institutions :-)

Slashdot

Page 2 and scripting languages (Score:5, Interesting)
by MarkEst1973 (769601) on Thursday June 30, @09:59PM (#12956728)

The entire second page of the article talks about scripting languages, specifically Javascript (in browsers) and Groovy.

1. Kudos to the Groovy [codehaus.org] authors. They've even garnered James Gosling's attention. If you write Java code and consider yourself even a little bit of a forward thinker, look up Groovy. It's a very important JSR (JSR-241, specifically).

2. He talks about Javascript solely from the point of view of the browser. Yes, I agree that Javascript is predominantly implemented in a browser, but its reach can be felt everywhere. Javascript == ActionScript (Flash scripting language). Javascript == CFScript (ColdFusion scripting language). Javascript object notation == Python object notation.

But what about Javascript and Rhino's [mozilla.org] inclusion in Java 6 [sun.com]? I've been using Rhino as a server-side language for a while now because Struts is way too verbose for my taste. I just want a thin glue layer between the web interface and my Java components. I'm sick and tired of endless XML configuration (that means you, too, EJB!). A Rhino script on the server (with embedded Request, Response, Application, and Session objects) is the perfect glue that does not need XML configuration. (See also Groovy's Groovlets for a thin glue layer.)

3. Javascript has been called Lisp in C's clothing. Javascript (via Rhino) will be included in Java 6. I also read that Java 6 will allow access to the parse trees created by the javac compiler (same link as Java 6 above).

Java is now Lisp? Paul Graham writes about 9 features [paulgraham.com] that made Lisp unique when it debuted in the 50s. Access to the parse trees is one of the most advanced features of Lisp. He argues that when a language has all 9 features (and Java today is at about #5), you've not created a new language but a dialect of Lisp.

I am a Very Big Fan of dynamic languages that can flex like a pretzel to fit my problem domain. Is Java evolving to be that pretzel?

[May 12, 2003] What I Hate About Your Programming Language

The article is pretty weak, but the discussion after it contains some interesting points.
ONLamp.com

The Pragmatic Programmers suggest learning a new language every year. This has already paid off for me. The more different languages I learn, the more I understand about programming in general. It's a lot easier to solve problems if you have a toolbox full of good tools.

Ideal language: Delphi w/ Clarion influence
2003-05-16 12:27:29 anonymous

Sadly, Delphi/Kylix (Object Pascal) is often overlooked. Perl, Ruby, etc. are all fine for scripts, but in most cases a compiled program is a better way to go. Delphi lets you program procedurally like C, or with objects like C++, only the union is much more natural. It prevents you from making many stupid mistakes, while allowing you 99.9% of the power C has. It borrows some syntax from perhaps better languages (Oberon, Modula, etc.), but has a much bigger and more useful standard library. (Unofficially, anyway...)

It has never let me down... FOXPRO (VFP)
2003-05-15 06:41:48 anonymous

VFP is great. It has its own easy-to-deploy runtime. You can compile to .exe. Its IDE is excellent. It is complete with the front-end user interface, middleware code, and its own multi-user-safe, high-performance desktop database engine. BUT: M$ (aka the Borg) assimilated back in the early 90's what was then a cross-platform development tool. Now the M$ vision of cross-platform for VFP is multiple versions of Windows. Plus, M$ cannot make a lot of end-user money on a product whose runtime is free.

Bej - Philadelphia.

    And what of C#?

    2003-05-15 03:05:28 anonymous
    I've found that C# grows on me faster than any other language I've used. At first I was very disappointed, saying it was just 9% better than Java. I was dismissive of the funny ways they use the new and override keywords until I understood they had addressed an important set of problems.

    Having used it a while, I'd say it's very nice. Perhaps the best single advantage that C# has over Java, however, is that when it burst onto the public scene, it was much more complete than Java was for the first several years. Including libraries and documentation. It is of course completely unfair that the C# designers had years to use and study Delphi and Java and C++ before committing to a design for C#. So what!

    The single best thing about C# may be that it works just as the documentation says it does. This alone is worth the price of admission (which is steep).

    anonymous

    You need to look at REXX
    2003-05-14 07:25:47

    Some great points on languages, but REXX beats them all in so many of the points you raise.

    bob hamilton

    > You need to look at REXX

    2003-05-14 10:59:36 anonymous

    I liked some parts of AREXX -- on the Amiga -- mostly the idea of the standard interprocess communication scripting. However, I always had problems with the syntax -- figuring out what was actually being passed, or being processed. It was weird. (I think in C.)

    I eventually did figure out how to do useful things -- my favorite is a script that controls 3D image rendering in Lightwave, uses an external image processing program to apply motion blur and watermarks, then loads the results into the Toaster frame buffer, and talks to a comm program that controls a SVHS single frame editing deck to write the frame out.

    All possible because these programs, which didn't know anything about each other, all supported an ARexx port.

    I wish the same thing existed on Linux. Perl scripts and system() calls are not the same thing as interprocess communication. And don't get me started about that fu-"scripting" that gimp has.

    More from the article itself:

    These are my preferences, based on the kind of work I've done and continue to do, the order in which I learned the languages, and just plain personal taste. In the spirit of generating new ideas, learning new techniques, and maybe understanding why things are done the way they're done, it's worth considering the different ways to do them.

    ... ... ...

    Every language is sacred in the eyes of its zealots, but there's bound to be someone out there for whom the language just doesn't feel right. In the open source world, we're fortunate to be able to pick and choose from several high-quality and free and open languages to find what fits our minds the best.

    Professional Programmers

    ...was this article really about programming in general, or a hyping of open source software? Open source programmers (I'm thinking of Python, Ruby, etc.) are really no better than, say, C++ programmers or Java programmers.

    Just because they use open source software solutions and technologies does not mean they have any more of a grasp on programming concepts and the tricks of the trade than those using proprietary solutions.

    I consider myself to be more a teacher of programming (I am just better at that), but I don't think that someone who has been programming for years or uses open source solutions is any more qualified a programmer than I am.

    A grain of salt, posted 11 Jun 2002 by tk (Journeyer)

    Though many free software programmers exhibit high quality in their work, I'll hesitate before concluding that a good way to nurture good coders is to throw them into the midst of the free community. It may well be that many people go into free software because they are already competent enough and want to contribute.

    That said, I'm not sure either what's the best way to groom people into truly professional coders.

    <off-topic>
    An excellent (IMO) book which introduces assembly languages to complete beginners is "Peter Norton's Assembly Language Book for the IBM PC", by Peter Norton and John Socha.
    </off-topic>

    Kids These Days..., posted 12 Jun 2002 by goingware (Master)

    I've written some stuff on this topic. Here's a sampler:

    Study Fundamentals Not Tools, APIs or OSes.

    Also see the last two sections, the ones entitled "The Value of Constant Factor Optimization" and "Old School Programming" in Musings on Good C++ Style as well as the conclusion of Pointers, References and Values.

    I think everyone should learn at least two architectures of assembly code (RISC and CISC), no matter what language they're programming in.

    Also read University of California at Davis Professor Norman Matloff's testimony to Congress: Debunking the Myth of a Desperate Software Labor Shortage.

    It happens that I have a very long resume. The reason I make it so long is that I depend on potential clients finding it via the search engines for a large portion of my business. If I just wanted to help someone understand my employability it could be considerably shorter. But in an effort to make my resume show up in a lot of searches for skills, I mention every skill keyword that I can legitimately claim to have experience in somewhere in the resume, sometimes several times. The resume is designed to appeal to buzzword hunters.

    But it annoys me; I shouldn't have to do that. So my resume has an editorial statement in it, aimed squarely at the HR managers you complain about:

    I strive to achieve quality, correctness, performance and maintainability in the products I write.

    I believe a sound understanding and application of software engineering principles is more valuable than knowledge of APIs or toolsets. In particular, this makes one flexible enough to handle any sort of programming task.

    It helps if you don't deal with headhunters or contract brokers. They're much worse than most HR managers for only attempting to place people that match a buzzword search in a database rather than understanding someone's real talent. Read my policy on recruiters and contract agencies.

    It's generally easier to get smaller companies to take real depth seriously than larger companies. One reason for this is that they are too small to employ HR managers, so the person you're talking to is likely to be another engineer. My first jobs writing retail Macintosh software, Smalltalk, and Java all came from small companies where the first person I contacted at the company was an engineer.

    If you're looking for permanent employment, many companies post their openings on their own web pages. I give some tips on locating these job postings via search engines on this page.

    If you're a consultant like me, and you're fed up with the body shops, may I suggest you read my article Market Yourself - Tips for High-Tech Consultants.

    I've been consulting full-time for over four years, and I've only taken one contract through a broker. I've actually bent my own rules and tried to find other work through the body shops, but they have been useless to me. I've had far better luck finding work on my own, through the web, and through referrals from friends and former coworkers.

    elj.com - A Web Site dedicated to exposing an eclectic mix of elegant programming technologies

    Programming Language Critiques

    The first incarnation of this page was started by John W.F. McClain at MIT. He took it with him when he moved to Loral, but was unable to update and maintain it there, so I offered to take it over.

    In John's original page, he said:

    Computer programmers create new languages all the time (often without even realizing it.) My hope is this collection of critiques will help raise the general quality of computer language design.

    The Future of Programming

    DDJ

    Predicting the future is easier said than done, and yet, we persist in trying to do it. As futile as it may seem to forecast the future of programming, if we're going to try, it's helpful to recognize certain fundamental characteristics of programming and programmers. We know, for example, that programming is hard. We know that the industry is driven by the desire to make programming easier. And we know, as Perl creator Larry Wall has often observed, that programmers are lazy, impatient, and excessively proud.

    This first condition formed the basis of Frederick Brooks's classic text on software engineering, The Mythical Man-Month (Addison-Wesley, 1995; ISBN 0201835959), first published in 1975, where he wrote:

    As we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement in productivity, in reliability, in simplicity.

    Brooks's prediction was dire and, unfortunately, accurate. There was no silver bullet, and as far as we can tell, there never will be. However, programming is undoubtedly easier today than it was in the past, and the latter two principles of programming and programmers explain why. Programming became easier because the software industry was motivated to make it so, and because lazy and impatient programmers wouldn't accept anything less. And there is no reason to believe that this will change in the future.

    FORTRANSIT -- the 650 Processor that made FORTRAN

    The FORTRANSIT story is covered in the Annals of the History of Computing [4, 5], but an additional and more informal slant doesn't hurt.

    The historical development of Fortran, from Fortran 90 for the Fortran 77 Programmer by Bo Einarsson and Yurij Shokin.

    The following simple program, which uses many common programming concepts, is based on "The Early Development of Programming Languages" by Donald E. Knuth and Luis Trabb Pardo, published in "A History of Computing in the Twentieth Century", edited by N. Metropolis, J. Howlett and Gian-Carlo Rota, Academic Press, New York, 1980, pp. 197-273. They gave an example in Algol 60 and translated it into some very old languages such as Zuse's Plankalkül, Goldstine's Flow diagrams, Mauchly's Short Code, Burks' Intermediate PL, Rutishauser's Klammerausdrücke, Bohm's Formules, Hopper's A-2, Laning and Zierler's Algebraic interpreter, Backus' FORTRAN 0 and Brooker's AUTOCODE.

    Klammerausdrücke is a German term; we keep the German expression in the Russian and English versions as well. A direct English translation is "bracket expression". FORTRAN 0 was not really called FORTRAN 0; it is just the very first version of Fortran.

    The program is given here in Pascal, C and five variants of Fortran. The purpose is to show how Fortran has developed from a cryptic, almost machine-dependent language into a modern structured high-level programming language.

    The final example shows the program in the new programming language F.
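
    The program text itself is not reproduced in this excerpt. For orientation: the Knuth-Trabb Pardo example is the so-called TPK algorithm -- read eleven numbers, then, in reverse order, apply f(t) = sqrt(|t|) + 5t^3 and print either the result or an overflow marker. A free, unofficial rendering in Java (only the reverse loop and the 400 threshold come from the published algorithm; everything else is a translation choice):

        import java.util.Scanner;

        // A free Java rendering of the Knuth-Trabb Pardo "TPK" program:
        // read 11 numbers, process them in reverse order with
        // f(t) = sqrt(|t|) + 5*t^3, and flag results larger than 400.
        public class TPK {
            static double f(double t) {
                return Math.sqrt(Math.abs(t)) + 5.0 * t * t * t;
            }

            public static void main(String[] args) {
                double[] a = new double[11];
                Scanner in = new Scanner(System.in);
                for (int i = 0; i < 11; i++) {
                    a[i] = in.nextDouble();
                }
                for (int i = 10; i >= 0; i--) { // reverse order, as in the original
                    double y = f(a[i]);
                    if (y > 400.0) {
                        System.out.println(i + " TOO LARGE");
                    } else {
                        System.out.println(i + " " + y);
                    }
                }
            }
        }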

    Slashdot NASA Releases Classic Software To Public Domain

    xpccx writes in with a bit from NewsBytes, "NASA turned 43 this month and marked the occasion by releasing more than 200 of its scientific and engineering applications for public use. The modular Fortran programs can be modified, compiled and run on most Linux platforms." The software can be found at OpenChannelSoftware.com. At long last I am ready to prepare my own space mission. I wonder if a whiskey barrel is gonna be air tight after I launch it/me into space with a trebuchet. (It's this sort of unconventional thinking that should get me my job at NASA. Or at least get me put to sleep).

    [Sept 8, 2001] Lisp as an Alternative to Java

    Introduction

    In a recent study [1], Prechelt compared the relative performance of Java and C++ in terms of execution time and memory utilization. Unlike many benchmark studies, Prechelt compared multiple implementations of the same task by multiple programmers in order to control for the effects of differences in programmer skill. Prechelt concluded that, "as of JDK 1.2, Java programs are typically much slower than programs written in C or C++. They also consume much more memory."

    We have repeated Prechelt's study using Lisp as the implementation language. Our results show that Lisp's performance is comparable to or better than C++ in terms of execution speed, with significantly lower variability which translates into reduced project risk. Furthermore, development time is significantly lower and less variable than either C++ or Java. Memory consumption is comparable to Java. Lisp thus presents a viable alternative to Java for dynamic applications where performance is important.

    Conclusions

    Lisp is often considered an esoteric AI language. Our results suggest that it might be worthwhile to revisit this view. Lisp provides nearly all of the advantages that make Java attractive, including automatic memory management, dynamic object-oriented programming, and portability. Our results suggest that Lisp is superior to Java and comparable to C++ in terms of runtime, and superior to both in terms of programming effort, and variability of results. This last item is particularly significant as it translates directly into reduced risk for software development.

    Slashdot Lisp as an Alternative to Java

    There is more data available for other languages... (Score:4, Interesting)
    by crealf on Saturday September 08, @07:53AM (#2266890)
    (User #414283 Info)

    The article about Lisp is a follow-up to an article by Lutz Prechelt in CACM '99 (a draft [ira.uka.de] is available on his page along with other articles).

    However, there is more data now, as Prechelt himself widened the study and published in 2000 An empirical comparison of C, C++, Java, Perl, Python, Rexx, and Tcl [ira.uka.de] (a detailed technical report is here [ira.uka.de]).

    If you look from the developer's point of view, Python and Perl work times are similar to those of Lisp, along with program sizes.
    Of course, from the speed point of view, none of the scripting languages in the test could compete with Lisp.

    Anyway, some articles by Prechelt [ira.uka.de] are interesting too (as are many other research papers; found via CiteSeer [nec.com], for instance)

    Smalltalk a better alternative to Java (Score:1, Interesting)
    by Anonymous Coward on Saturday September 08, @08:33AM (#2266985)

    In my opinion Smalltalk makes a much better alternative to Java.

    Smalltalk has all the trappings--a very rich set of base classes, byte-coded, garbage collected, etc.

    There are many Smalltalks out there...Smalltalk/X is quite good, and even has a Smalltalk-to-C compiler to boot. It's not totally free, but pretty cheap (and I believe for non-commercial use everything works but the S-to-C compiler).

    Squeak is an even better place to start... it is highly portable (more so than Java), very extensible (thanks to VM plugins) and has a very active community that includes Alan Kay, the man who INVENTED the term "object-oriented programming". Squeak has a just-in-time compiler (JITTER), support for multiple front-ends, and can be tied to any kind of external libraries and DLLs. It's not GPL'd, but it is free under an old Apple license (I believe the only issue is with the fonts... they are still Apple fonts). It's already been ported to every platform I've ever seen, including the iPaq (both WinCE and Linux). It even runs on STOCK iPaqs (i.e. 32 MB) without any expansion... Java, from what I understand, still has big problems just running on the iPaq, not to mention unexpanded iPaqs.

    And of course, we can't forget about old GNU Smalltalk, which is still seeing development.

    Smalltalk is quite easy to learn--you can just pick up the old "Smalltalk-80: The Language" (Goldberg) and work right from there. Squeak already has two really good books that have just come into print (go to Amazon and search for Mark Guzdial).

    (this is not meant as a language flame...I'm just throwing this out on the table, since we're discussing alternatives to Java. Scheme/LISP is a cool idea as well, but I think Smalltalk deserves some mention.)

    I've written 2 Lisp and 4 Java books (Score:3, Informative)
    by MarkWatson on Saturday September 08, @09:56AM (#2267225)
    (User #189759 Info)

    First, great topic!

    I have written 2 Lisp books for Springer-Verlag and 4 Java books, so you bet that I have an opinion on my two favorite languages.

    First, given free choice, I would use Common LISP for most of my development work. Common LISP has a huge library and is a very stable language. Although I prefer Xanalys LispWorks, there are also good free Common LISP systems.

    Java is also a great language, mainly because of the awesome class libraries and the J2EE framework (I am biased here because I am just finishing up writing a J2EE book).

    Peter Norvig once made a great comment on Java and Lisp (roughly quoting him): Java is only half as good as Lisp for AI but that is good enough.

    Anyway, I find that both Java and Common LISP are very efficient environments to code in. I only use Java for my work because that is what my customers want.

    BTW, I have a new free web book on Java and AI on my web site - help yourself!

    Best regards,

    Mark

    -- www.markwatson.com -- Open Source and Content

    Why Java succeeded, LISP can't make headway now (Score:5, Informative)
    by joneshenry on Saturday September 08, @10:44AM (#2267438)
    (User #9497 Info)

    Java was never marketed as the ultimate fast language for searching or for manipulating large data structures. What Java was marketed as was a language that was good enough for the programming paradigms popular at the time, such as object orientation and automatic garbage collection, while providing the most comprehensive APIs under the control of one entity who would continue to push the extension of those APIs.

    In this LinuxWorld interview [linuxworld.com], look at what Stroustrup is hoping to someday have in the C++ standard for libraries. It's a joke; almost all of those features are already in Java. As Stroustrup says, a standard GUI framework is not "politically feasible".

    Now go listen to what Linus Torvalds is saying [ddj.com] about what he finds to be the most exciting thing to happen to Linux in the past year. Hint: it's not the completion of the 2.4.x kernel; it's KDE. The foundation of KDE's success is the triumph of Qt as the de facto standard that a large community has embraced to build an entire reimplementation of end-user applications.

    To fill the void of a standard GUI framework for C++, Microsoft has dictated a set of de facto standards for Windows, and Trolltech has successfully pushed Qt as the de facto standard for Linux.

    I claim that as a whole the programming community doesn't care whether a standard is de jure or de facto, but they do care that SOME standard exists. When it comes to talking people into making the investment of time and money to learn a platform on which to base their careers, a multitude of incompatible choices is NOT the way to market.

    I find talking about LISP as one language compared to Java to be a complete joke. Whose LISP? Scheme? Whose version of Scheme, GNU's Guile? Is the Elisp in Emacs the most widely distributed implementation of LISP? Can Emacs be rewritten using Guile? What is the GUI framework for all of LISP? Anyone come up with a set of LISP APIs that are the equivalent of J2EE or Jini?

    I find it extremely disheartening that the same people who can grasp the argument that the value of networks lies in the communication that people can do over them are incapable of applying the same reasoning to programming languages. Is it that hard to read Odlyzko [umn.edu] and not see that people just want to do the same thing with programming languages -- talk among themselves? The modern paradigm for software, where the money is being made, is getting things to work with each other. Dinosaur languages that wait around for decades while slow bureaucratic committees create nonsolutions are going to get stomped by faster-moving mammals such as Java, pushed by single-decision vendors. And so are fragmented languages with a multitude of incompatible and incomplete implementations, such as LISP.

    Some hopefully useful points (Score:2, Informative)
    by dlakelan (qynxryna@lnu-spam-bb.pbz) on Saturday September 08, @02:20PM (#2268461)
    (User #43245 Info | http://www.endpointcomputing.com)

    First off, one of the best spokespersons for Lisp is Paul Graham, author of "On Lisp" and "ANSI Common Lisp". His web site is Here [paulgraham.com].

    Reading through his articles [paulgraham.com] will give you a better sense of what lisp is about. One that I'd like to see people comment on is: java's cover [paulgraham.com] ... It resonates with my experience as well. Also This response [paulgraham.com] to his java's cover article succinctly makes a good point that covers most of the bickering found here...

    I personally think that the argument that Lisp is not widely known, and that therefore not enough programmers exist to support corporate projects, is bogus. The fact that you can hire someone who claims to know C++ does NOT in any way, shape or form mean that you can hire someone who will solve your C++ programming problem! See my own web site [endpointcomputing.com] for more on that.

    I personally believe that if you have a large C++ program you're working on and need to hire a new person or a replacement who already claims to know C++, the start-up cost for that person is the same as if you have a Lisp program doing the same thing and need to hire someone AND train them to use Lisp. Why? The training more than pays for itself, because it gives the new person a formal introduction to your project, and Lisp is a more productive system than C++ for most tasks. Furthermore, it's quite likely that the person who claims to know C++ doesn't know it as well as you would like, and so the fact that you haven't formally trained them on your project is a cost you aren't considering.

    One of the points that the original article by the fellow at NASA makes is that Lisp turned out to have a very low standard deviation of run-time and development time. What this basically says is that the lisp programs were more consistent. This is a very good thing as anyone who has ever had deadlines knows.

    Yes, the JVM version used in this study is old, but let's face it: that would affect the average, but it wouldn't affect the standard deviation much. Java programs are more likely to be slow, as are C++ programs!

    The point about lisp being a memory hog that a few people have made here is invalid as well. The NASA article states:

    Memory consumption for Lisp was significantly higher than for C/C++ and roughly comparable to Java. However, this result is somewhat misleading for two reasons. First, Lisp and Java both do internal memory management using garbage collection, so it is often the case that the Lisp and Java runtimes will allocate memory from the operating system that is not actually being used by the application program.

    People here have interpreted this to mean that the system is a memory hog anyway. In fact many lisp systems reserve a large chunk of their address space, which makes it look like a large amount of memory is in use. However the operating system has really just reserved it, not allocated it. When you touch one of the pages it does get allocated. So it LOOKS like you're using a LOT of memory, but in fact because of the VM system, you are NOT using very much memory at all.

    The biggest reasons people don't use Lisp are they either don't understand Lisp, or have been forced by clients or supervisors to use something else.
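
    The reserved-versus-used distinction drawn above is easy to observe from inside any garbage-collected runtime. In Java, for example, the heap the JVM has claimed from the operating system is reported separately from what the program actually uses. A small illustrative probe (not part of the study; all names here are invented for the example):

        // Illustrative probe: the heap a GC'd runtime has claimed from the OS
        // (totalMemory) is not the same as what the program actually uses
        // (totalMemory - freeMemory), nor the ceiling it may grow to (maxMemory).
        public class HeapProbe {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                long max = rt.maxMemory();               // upper bound the JVM may grow to
                long committed = rt.totalMemory();       // heap currently claimed from the OS
                long used = committed - rt.freeMemory(); // what the program really uses
                System.out.printf("max=%dM committed=%dM used=%dM%n",
                        max >> 20, committed >> 20, used >> 20);
            }
        }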

    Interesting, but flawed? (Score:5, Insightful)
    by tkrotchko on Saturday September 08, @07:41AM (#2266864)
    (User #124118 Info | http://www.toad.net/~tomk)

    It's interesting to see the results of a short study, even though the author admits to the flaw in his methodology (primarily that the subjects were self-chosen). Still, I don't think that's a fatal flaw, and I think his results do have some validity.

    However, I think the author misses a more important issue: development involving a single programmer on a relatively small task isn't the point for most organizations. Maintainability and a large pool of potential developers (for example) are significant factors in deciding what language to use. LISP is a fabulous language, but try to find 10 LISP programmers at a reasonable price in the next 2 weeks. Good luck.

    Also, while initial development time is important, the testing/debug cycles are typically the costly part of implementation, so that's the area where the most gains can be made. Further, large projects are collaborative efforts, so the objects and libraries available for a particular language play a role in how quickly you can produce quality code.

    As an aside, it would've been interesting to see the same development done by an experienced Visual Basic programmer. My guess is he/she would have had the shortest development cycle, and yet it wouldn't be my first choice for a large-scale development project (although, at the risk of being flamed, it's not a bad language for just banging out a quick set of tools for my own use).

    Some of the things I believe are more important when thinking about a programming language:

    1) Amenable to use by a team of programmers
    2) Viability over a period of time (5-10 years)
    3) Large developer base
    4) Cross-platform -- not because I think cross-platform is a good thing by itself; rather, I think it's important to avoid being locked in to a single hardware or operating system vendor
    5) Mature IDE, debugging tools, and compilers
    6) Wide applicability

    Computer languages tend to develop in response to specific needs, and most programmers will probably end up learning 5-10 languages over the course of their careers. It would be helpful to have a discussion of the appropriate roles for particular computer languages, since I'm not sure any computer language is better than any other.

    Perhaps not quite as illuminating as it appears (Score:1)
    by ascholl (ascholl-at-max(dot)cs(dot)kzoo(dot)edu) on Saturday September 08, @07:53AM (#2266888)
    (User #225398 Info)

    The study does show an advantage of Lisp over Java/C/C++ -- but only for small problems which depend heavily on the types of tasks Lisp was designed for. The author recognizes the second problem ("It might be because the benchmark task involved search and managing a complex linked data structure, two jobs for which Lisp happens to be specifically designed and particularly well suited.") but doesn't even mention the first.
    While I haven't seen the example programs, I suspect that the reason the Java versions performed poorly time-wise was probably directly related to object instantiation. Instantiating an object is a pretty expensive task in Java; typical 'by the book' methods would involve instantiating new numbers for every collection of digits, word, digit/character-set representation, etc. The performance cost due to instantiation can be minimized dramatically by re-using program-wide collections of commonly used objects, but the effect would only be seen on large inputs. Since the example input was much smaller than the actual test case, it seems likely that the programmers may have neglected to include this functionality.
    Hypothesizing about implementation aside, the larger question is one of problem scope. If you're going to claim that language A is better than language B, you probably aren't concerned about tiny (albeit non-trivial) problems like the example. Now, I don't know whether this is true, but it seems possible that a large project implemented in Java or C/C++ might be built more quickly, be easier to maintain, and be less fragile than its equivalent in Lisp. It may even perform better. It's not fair to assume blindly that the advantages of Lisp seen in this study will scale up. I'm not claiming that they don't ... but still. If we're choosing a language for a task, this should be a primary consideration.
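
    A hypothetical sketch of the object-reuse technique mentioned in the comment above (the benchmark code itself is not shown, so the class and names here are invented for illustration): keep one program-wide instance of each commonly used value instead of instantiating a fresh object on every lookup.

        // Hypothetical illustration of re-using program-wide objects instead of
        // instantiating new ones on a hot path.
        public class DigitCache {
            // One shared object per decimal digit, created once at class load.
            private static final Integer[] DIGITS = new Integer[10];
            static {
                for (int i = 0; i < 10; i++) {
                    DIGITS[i] = Integer.valueOf(i);
                }
            }

            // Returns the cached object; no allocation on the hot path.
            static Integer digit(char c) {
                return DIGITS[c - '0'];
            }

            public static void main(String[] args) {
                long sum = 0;
                for (int i = 0; i < 1000000; i++) {
                    // Re-uses the same ten objects instead of allocating each time.
                    sum += digit((char) ('0' + (i % 10)));
                }
                System.out.println(sum); // 4500000
            }
        }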

    Why Language Advocacy is Bad

    Here is another relevant view, which explains that advocacy of a particular language may have little in common with the desire to innovate. Most people simply hate to be wrong after they have made their (important and time-consuming) choice ;-)
    Slashdot

    Nobody wants to be obsolete (Score:2, Interesting)
    by e4 on Thursday December 14, @12:27PM EST (#102)
    (User #102617 Info) http://www.razorlist.com

    I think one of the biggest reasons for language advocacy (/OS advocacy/DB advocacy/etc.) is that we have a vested interest in "our" language succeeding. Each of us has worked hard to learn the subtleties and intricacies of [language X], and if something else comes along that's better, we're suddenly newbies again. That hard-won expertise doesn't carry much weight if [language Y] makes it easy for "any idiot" to accomplish and/or understand what took you a week to figure out.

    We start trying to come up with reasons why it's not really better: It doesn't give you enough control; it's not as efficient; it has fewer options...

    PC vs. Mac. BSD vs. Mac. Mainframe vs. client-server. Command line vs. GUI. How many people were a little saddened to see MS-DOS fading into the mist, not because it was a great tool, but because they knew how to use it?

    A language advocate needs [language X] to succeed, to be dominant, to be the best, because he has more status and more useful knowledge that way.

    Bottom line, it's an ego thing.

    [Sep 02, 2000] Programming Languages of Choice

    Languages are very interesting things. They can either tie you up or set you free. But no programming language can be everything to everyone, despite the fact that sometimes it looks like one does.
    What is it that you like about programming languages? What is it that you hate? What did you start on? What do you find yourself coding with most often today? Has your choice of programming languages affected other choices in software? (E.g., Lisp hackers tend to gravitate toward Emacs, whereas others go to vi.)

    The amount of influence that programming languages have on the way programmers think about how they do things is quite interesting to me. One example, from one perspective, is this: if you didn't know that most UNIXen were implemented in C, would you be able to tell? If so, why or why not? What properties does UNIX have that make it pretty obvious that it wasn't written by somebody programming in a functional language, or in an object-oriented language (or style)?

    ... ... ...

    One of the responses:

    My favorite language is Chez Scheme for two reasons: syntactic abstraction and control abstraction.

    Syntactic abstraction is macros. As opposed to other implementations of Scheme, Chez Scheme in my opinion has the best story on macros, and its macro system is among the most powerful I have seen.

    Control abstraction is the power to add new control operations to your language. For example, backtracking and coroutines. More esoterically, monads in direct-style code. Control abstraction boils down to first-class continuations (call/cc). With the single exception of SML/NJ, no other language I know of has call/cc.

    I know I will be using Scheme for years to come, and my company will also continue to use it in its systems. We code a lot in C++ and Delphi, but the Real Hard Stuff(tm) is done in Scheme, because macros and continuations are big hammers. Despite Scheme being over 20 years old, and despite demonstrated, efficient implementations of these "advanced" language concepts, I don't see new language designs adopting these features from Scheme. I hope this changes.

    [Jul 29, 2000] Slashdot Are Buffer Overflow Sploits Intel's Fault -- interesting discussion about problems with C

    [Sep 1, 1999] Programmers Heaven - Where programmers go! -- great collection of files and links by Tore Nestenius

    [Aug 2, 1999] Turbo Vision Salvador Eduardo Tropea (SET) - June 11th 1999, 05:33 EST

    Turbo Vision provides a very nice user interface (comparable to the very well known GUIs), but only for console applications. This UNIX port is based on Borland's version 2.0 with fixes, and was made to create RHIDE (a nice IDE for gcc and other GNU compilers). The library supports /dev/vcsa devices for speed-up, and ncurses to run from telnet and xterm. This port, in contrast to Sigala's port, doesn't have "100% compatibility with the original library" as a goal; instead we modified a lot of code in favor of security (especially against buffer overflows). The port is also available for the original platform (DOS).

    Download: http://www.geocities.com/SiliconValley/Vista/6552/rhtvision-1.0.6.src.tar.gz http://www.geocities.com/SiliconValley/Vista/6552/tvision.html

    [June 11, 1999] Undergraduate Courses About Programming Languages

    [June 11, 1999] Graduate Courses About Programming Languages

    Programming Languages: Design and Implementation (Third edition)

    The following have made material available related to the book Programming Languages: Design and Implementation (Third edition) by Terrence W. Pratt and Marvin Zelkowitz (Prentice-Hall, 1995).



    Classic Papers

    Donald Knuth's Turing Award Lecture: Computer Programming as an Art (PDF)

    The Rise of ``Worse is Better''

    I and just about every designer of Common Lisp and CLOS has had extreme exposure to the MIT/Stanford style of design. The essence of this style can be captured by the phrase ``the right thing.'' To such a designer it is important to get all of the following characteristics right:

    Simplicity -- the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
    Correctness -- the design must be correct in all observable aspects. Incorrectness is simply not allowed.
    Consistency -- the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
    Completeness -- the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.

    I believe most people would agree that these are good characteristics. I will call the use of this philosophy of design the ``MIT approach.'' Common Lisp (with CLOS) and Scheme represent the MIT approach to design and implementation.

    The worse-is-better philosophy is only slightly different:

    Simplicity -- the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
    Correctness -- the design must be correct in all observable aspects. It is slightly better to be simple than correct.
    Consistency -- the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases.
    Completeness -- the design must cover as many important situations as is practical. Completeness can be sacrificed in favor of any other quality.

    Early Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the ``New Jersey approach.'' I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach.

    However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach.

    Worse Is Better by Richard P. Gabriel

    The concept known as "worse is better" holds that in software making (and perhaps in other arenas as well) it is better to start with a minimal creation and grow it as needed. Christopher Alexander might call this "piecemeal growth." This is the story of the evolution of that concept.

    From 1984 until 1994 I had a Lisp company called "Lucid, Inc." In 1989 it was clear that the Lisp business was not going well, partly because the AI companies were floundering and partly because those AI companies were starting to blame Lisp and its implementations for the failures of AI. One day in Spring 1989, I was sitting out on the Lucid porch with some of the hackers, and someone asked me why I thought people believed C and Unix were better than Lisp. I jokingly answered, "because, well, worse is better." We laughed over it for a while as I tried to make up an argument for why something clearly lousy could be good.

    A few months later, in Summer 1989, a small Lisp conference called EuroPAL (European Conference on the Practical Applications of Lisp) invited me to give a keynote, probably since Lucid was the premier Lisp company. I agreed, and while casting about for what to talk about, I gravitated toward a detailed explanation of the worse-is-better ideas we joked about as applied to Lisp. At Lucid we knew a lot about how we would do Lisp over to survive business realities as we saw them, and so the result was called "Lisp: Good News, Bad News, How to Win Big." [html] (slightly abridged version) [pdf] (has more details about the Treeshaker and delivery of Lisp applications).

    I gave the talk in March, 1990 at Cambridge University. I had never been to Cambridge (nor to Oxford), and I was quite nervous about speaking at Newton's school. There were about 500-800 people in the auditorium, and before my talk they played the Notting Hillbillies over the sound system - I had never heard the group before, and indeed, the album was not yet released in the US. The music seemed appropriate because I had decided to use a very colloquial American style of writing in the talk, and the Notting Hillbillies played a style of music heavily influenced by traditional American music, though they were a British band. I gave my talk with some fear since the room was standing room only, and at the end, there was a long silence. The first person to speak up was Gerry Sussman, who largely ridiculed the talk, followed by Carl Hewitt who was similarly none too kind. I spent 30 minutes trying to justify my speech to a crowd in no way inclined to have heard such criticism - perhaps they were hoping for a cheerleader-type speech.

    I survived, of course, and made my way home to California. Back then, the Internet was just starting up, so it was reasonable to expect not too many people would hear about the talk and its disastrous reception. However, the press was at the talk and wrote about it extensively in the UK. Headlines in computer rags proclaimed "Lisp Dead, Gabriel States." In one, there was a picture of Bruce Springsteen with the caption, "New Jersey Style," referring to the humorous name I gave to the worse-is-better approach to design. Nevertheless, I hid the talk away and soon was convinced nothing would come of it.

    About a year later we hired a young kid from Pittsburgh named Jamie Zawinski. He was not much more than 20 years old and came highly recommended by Scott Fahlman. We called him "The Kid." He was a lot of fun to have around: not a bad hacker and definitely in a demographic we didn't have much of at Lucid. He wanted to find out about the people at the company, particularly me since I had been the one to take a risk on him, including moving him to the West Coast. His way of finding out was to look through my computer directories - none of them were protected. He found the EuroPAL paper, and found the part about worse is better. He connected these ideas to those of Richard Stallman, whom I knew fairly well since I had been a spokesman for the League for Programming Freedom for a number of years. JWZ excerpted the worse-is-better sections and sent them to his friends at CMU, who sent them to their friends at Bell Labs, who sent them to their friends everywhere.

    Soon I was receiving 10 or so e-mails a day requesting the paper. Departments from several large companies requested permission to use the piece as part of their thought processes for their software strategies for the 1990s. The companies I remember were DEC, HP, and IBM. In June 1991, AI Expert magazine republished the piece to gain a larger readership in the US.

    However, despite the apparent enthusiasm by the rest of the world, I was uneasy about the concept of worse is better, and especially with my association with it. In the early 1990s, I was writing a lot of essays and columns for magazines and journals, so much so that I was using a pseudonym for some of that work: Nickieben Bourbaki. The original idea for the name was that my staff at Lucid would help with the writing, and the single pseudonym would represent the collective, much as the French mathematicians in the 1930s used "Nicolas Bourbaki" as their collective name while rewriting the foundations of mathematics in their image. However, no one but I wrote anything under that name.

    In the Winter of 1991-1992 I wrote an essay called "Worse Is Better Is Worse" under the name "Nickieben Bourbaki." This piece attacked worse is better. In it, the fiction was created that Nickieben was a childhood friend and colleague of Richard P. Gabriel, and as a friend and for Richard's own good, Nickieben was correcting Richard's beliefs.

    In the Autumn of 1992, the Journal of Object-Oriented Programming (JOOP) published a "rebuttal" editorial I wrote to "Worse Is Better Is Worse" called "Is Worse Really Better?" The folks at Lucid were starting to get a little worried because I would bring them review drafts of papers arguing (as me) for worse is better, and later I would bring them rebuttals (as Nickieben) against myself. One fellow was seriously nervous that I might have a mental disease.

    In the middle of the 1990s I was working as a management consultant (more or less), and I became interested in why worse is better really could work, so I was reading books on economics and biology to understand how evolution happened in economic systems. Most of what I learned was captured in a presentation I would give back then, typically as a keynote, called "Models of Software Acceptance: How Winners Win," and in a chapter called "Money Through Innovation Reconsidered," in my book of essays, "Patterns of Software: Tales from the Software Community."

    You might think that by the year 2000 I would have settled what I think of worse is better - after over a decade of thinking and speaking about it, through periods of clarity and periods of muck, and through periods of multi-mindedness on the issues. But, at OOPSLA 2000, I was scheduled to be on a panel entitled "Back to the Future: Is Worse (Still) Better?" And in preparation for this panel, the organizer, Martine Devos, asked me to write a position paper, which I did, called "Back to the Future: Is Worse (Still) Better?" In this short paper, I came out against worse is better. But a month or so later, I wrote a second one, called "Back to the Future: Worse (Still) is Better!" which was in favor of it. I still can't decide. Martine combined the two papers into the single position paper for the panel, and during the panel itself, run as a fishbowl, participants routinely shifted from the pro-worse-is-better side of the table to the anti-side. I sat in the audience, having lost my voice giving my Mob Software talk that morning, during which I said, "risk-taking and a willingness to open one's eyes to new possibilities and a rejection of worse-is-better make an environment where excellence is possible. Xenia invites the duende, which is battled daily because there is the possibility of failure in an aesthetic rather than merely a technical sense."

    Decide for yourselves.







    Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.

    The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

    Original materials' copyrights belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

    FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

    This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

    You can use PayPal to make a contribution, supporting the development of this site and speeding up access. In case softpanorama.org is down, you can use the mirror at softpanorama.info.

    Disclaimer:

    The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

    Last modified: December 01, 2017