
Programming Languages Usage and Design Problems


As Donald Knuth noted (Don Knuth and the Art of Computer Programming The Interview):

I think of a programming language as a tool to convert a programmer's mental images into precise operations that a machine can perform. The main idea is to match the user's intuition as well as possible. There are many kinds of users, and many kinds of application areas, so we need many kinds of languages.
 
Ordinarily technology changes fast. But programming languages are different: programming languages are not just technology, but what programmers think in.

They're half technology and half religion. And so the median language, meaning whatever language the median programmer uses, moves as slow as an iceberg.

Paul Graham: Beating the Averages

Libraries are more important than the language.

Donald Knuth


Introduction

A fruitful way to think about language development is to consider it to be a special type of theory building. Peter Naur suggested that programming in general is a theory-building activity in his 1985 paper "Programming as Theory Building". But the idea is especially applicable to compilers and interpreters. What Peter Naur failed to understand was that the design of programming languages has religious overtones and sometimes represents an activity that is pretty close to the process of creating a new, obscure cult ;-). Clueless academics publishing junk papers at obscure conferences are the high priests of the church of programming languages. Some, like Niklaus Wirth and Edsger W. Dijkstra, (temporarily) reached a status close to that of (false) prophets :-).

On a deep conceptual level, building a new language is a human way of solving complex problems. That means that compiler construction is probably the most underappreciated paradigm for programming large systems, much more so than the greatly oversold object-oriented programming. OO benefits are greatly overstated. For users, programming languages distinctly have religious aspects, so decisions about which language to use are often far from rational and are mainly cultural. Indoctrination at the university plays a very important role. Recently universities were instrumental in making Java a new Cobol.

The second important observation about programming languages is that the language per se is just a tiny part of what can be called the language programming environment. The latter includes libraries, IDEs, books, the level of adoption at universities, popular and important applications written in the language, the level of support and key players that support the language on major platforms such as Windows and Linux, and other similar things. A mediocre language with a good programming environment can give a run for the money to a language of superior design that comes naked. This is the story behind the success of Java. A critical application is also very important, and this is the story of the success of PHP, which is nothing but a bastardized derivative of Perl (with most of the interesting Perl features removed ;-) adapted to the creation of dynamic web sites using the so-called LAMP stack.

Progress in programming languages has been very uneven and contains several setbacks. Currently this progress is mainly limited to the development of so-called scripting languages. The field of traditional high-level languages has been stagnant for many decades.

At the same time there are some mysterious, unanswered questions about the factors that help a language to succeed or fail. Among them:

Those are difficult questions to answer without some way of classifying languages into different categories. Several such classifications exist. First of all, as with natural languages, the number of people who speak a given language is a tremendous force that can overcome any real or perceived deficiencies of the language. In programming languages, as in natural languages, nothing succeeds like success.

Complexity Curse

The history of programming languages raises interesting general questions about the limit of complexity of programming languages. There is strong historical evidence that a language with a simpler, or even simplistic, core (Basic, Pascal) has better chances to acquire a high level of popularity. The underlying fact here is probably that most programmers are at best mediocre, and such programmers tend, on an intuitive level, to avoid more complex, richer languages and prefer, say, Pascal to PL/1 and PHP to Perl. Or at least avoid them during a particular phase of language development (C++ is not a simpler language than PL/1, but was widely adopted because of the progress of hardware, the availability of compilers and, not least, because it was associated with OO exactly at the time OO became a mainstream fashion). Complex non-orthogonal languages can succeed only as a result of a long period of language development from a smaller core (which usually adds complexity -- just compare Fortran IV with Fortran 90, or PHP 3 with PHP 5). Carrying the banner of some fashionable new trend by extending an existing popular language to the new "paradigm" is also a possibility (OO programming in the case of C++, which is a superset of C).

Historically, few complex languages were successful (PL/1, Ada, Perl, C++), and even when they were successful, their success typically was temporary rather than permanent (PL/1, Ada, Perl). As Professor Wilkes noted (iee90):

Things move slowly in the computer language field but, over a sufficiently long period of time, it is possible to discern trends. In the 1970s, there was a vogue among system programmers for BCPL, a typeless language. This has now run its course, and system programmers appreciate some typing support. At the same time, they like a language with low level features that enable them to do things their way, rather than the compiler’s way, when they want to.

They continue to have a strong preference for a lean language. At present they tend to favor C in its various versions. For applications in which flexibility is important, Lisp may be said to have gained strength as a popular programming language.

Further progress is necessary in the direction of achieving modularity. No language has so far emerged which exploits objects in a fully satisfactory manner, although C++ goes a long way. ADA was progressive in this respect, but unfortunately it is in the process of collapsing under its own great weight.

ADA is an example of what can happen when an official attempt is made to orchestrate technical advances. After the experience with PL/1 and ALGOL 68, it should have been clear that the future did not lie with massively large languages.

I would direct the reader’s attention to Modula-3, a modest attempt to build on the appeal and success of Pascal and Modula-2 [12].

The complexity of the compiler/interpreter also matters, as it affects portability: this is one thing that probably doomed PL/1 (and later Ada), although these days a new language typically comes with an open-source compiler (or, in the case of scripting languages, an interpreter), so this is less of a problem.

Here is an interesting take on language design from the preface to The D programming language book:

Programming language design seeks power in simplicity and, when successful, begets beauty.

Choosing the trade-offs among contradictory requirements is a difficult task that requires good taste from the language designer as much as mastery of theoretical principles and of practical implementation matters. Programming language design is software-engineering-complete.

D is a language that attempts to consistently do the right thing within the constraints it chose: system-level access to computing resources, high performance, and syntactic similarity with C-derived languages. In trying to do the right thing, D sometimes stays with tradition and does what other languages do, and other times it breaks tradition with a fresh, innovative solution. On occasion that meant revisiting the very constraints that D ostensibly embraced. For example, large program fragments or indeed entire programs can be written in a well-defined memory-safe subset of D, which entails giving away a small amount of system-level access for a large gain in program debuggability.

You may be interested in D if the following values are important to you:

The role of fashion

At the initial, most difficult stage of language development the language should solve an important problem that is inadequately solved by the currently popular languages. But at the same time the language has few chances to succeed unless it fits perfectly into the current software fashion. This "fashion factor" is probably as important as several other factors combined, with the exclusion of the "language sponsor" factor.

As in women's dress, fashion rules in language design. And with time this trend has become more and more pronounced. A new language should simultaneously represent the current fashionable trend. For example, OO programming was the visiting card into the world of "big, successful languages" since probably the early 1990s (C++, Java, Python). Before that, "structured programming" and "verification" (Pascal, Modula) played a similar role.

Programming environment and the role of "powerful sponsor" in language success

PL/1, Java, C#, Ada are languages that had powerful sponsors. Pascal, Basic, Forth are examples of the languages that had no such sponsor during the initial period of development.  C and C++ are somewhere in between.

But any language now needs a "programming environment", which consists of a set of libraries, a debugger and other tools (make tool, linker, pretty-printer, etc.). The set of "standard" libraries and the debugger are probably the two most important elements. They cost a lot of time (or money) to develop, and here the role of a powerful sponsor is difficult to overestimate.

While this is not a necessary condition for becoming popular, it really helps: other things being equal, the weight of the sponsor of the language does matter. For example Java, being a weak, inconsistent language (C-- with garbage collection and OO), was pushed down people's throats on the strength of marketing and the huge amount of money spent on creating the Java programming environment. The same was partially true for C# and Python. That's why Python, despite its "non-Unix" origin, is a more viable scripting language now than, say, Perl (which is better integrated with Unix and has, for a scripting language, pretty innovative support of pointers and regular expressions), or Ruby (which has had support of coroutines from day one, not as a "bolted-on" feature as in Python). As in political campaigns, negative advertising also matters. For example Perl suffered greatly from the blackmail of comparing programs written in it with "white noise", and then from the withdrawal of O'Reilly from the role of sponsor of the language (although it continues to milk the Perl book publishing franchise ;-).

People have proved to be pretty gullible, and in this sense language marketing is not that different from the marketing of women's clothing :-)

Language level and success

One very important classification of programming languages is based on the so-called level of the language. Essentially, after there is at least one language that is successful on a given level, the success of other languages on the same level becomes more problematic. The best chances for success belong to languages whose level is at least slightly higher than that of their successful predecessors.

The level of a language can informally be described as the number of statements (or, more correctly, the number of lexical units (tokens)) needed to write a solution of a particular problem in one language versus another. This way we can distinguish several levels of programming languages:

 "Nanny languages" vs "Sharp razor" languages

Some people distinguish between "nanny languages" and "sharp razor" languages. The latter do not attempt to protect the user from his errors, while the former usually go too far... The right compromise is extremely difficult to find.

For example, I consider the explicit availability of pointers an important feature of a language that greatly increases its expressive power and far outweighs the risk of errors in the hands of unskilled practitioners. In other words, attempts to make the language "safer" often misfire.

Expressive style of the languages

Another useful typology is based on the expressive style of the language:

Those categories are not pure and somewhat overlap. For example, it is possible to program in an object-oriented style in C, or even in assembler (see the sketch below). Some scripting languages like Perl have built-in regular expression engines that are part of the language, so they have a functional component despite being procedural. Some relatively low-level languages (Algol-style languages) implement garbage collection. A good example is Java. There are scripting languages that compile to a common language runtime designed for high-level languages. For example, IronPython compiles to .NET.
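As an illustration of the first point, here is a minimal sketch (with hypothetical names) of object-oriented style in plain C: the "class" is a struct, and the "methods" are function pointers stored in it, giving a crude form of virtual dispatch.

    #include <stdio.h>

    /* A minimal "class": data plus a method slot collapsed into one struct. */
    struct shape {
        double (*area)(const struct shape *self);   /* the "virtual method" */
        double width, height;
    };

    static double rect_area(const struct shape *self) {
        return self->width * self->height;
    }

    static double tri_area(const struct shape *self) {
        return 0.5 * self->width * self->height;
    }

    int main(void) {
        struct shape r = { rect_area, 3.0, 4.0 };
        struct shape t = { tri_area,  3.0, 4.0 };
        /* Dispatch through the function pointer, as an OO compiler would. */
        printf("%.1f %.1f\n", r.area(&r), t.area(&t));
        return 0;
    }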

Weak correlation between quality of design and popularity

The popularity of a programming language is not strongly connected to its quality. Some languages that look like a collection of language designer blunders (PHP, Java) became quite popular. Java in particular became a new Cobol, and PHP dominates the construction of dynamic Web sites. The dominant technology for such Web sites is often called LAMP, which means Linux, Apache, MySQL, PHP. Being a highly simplified but badly constructed subset of Perl, a kind of new Basic for dynamic Web site construction, PHP provides a rather depressing experience. I was unpleasantly surprised when I learned that the Wikipedia engine had been rewritten from Perl to PHP some time ago, but this illustrates the trend well.

So language design quality has little to do with the language's success in the marketplace. Simpler languages have a wider appeal, as the success of PHP (which at the beginning came at the expense of Perl) suggests. In addition, much depends on whether the language has a powerful sponsor, as was the case with Java (Sun and IBM) as well as Python (Google).

Progress in programming languages has been very uneven and contains several setbacks, like Java. Currently this progress is usually associated with scripting languages. The history of programming languages raises interesting general questions about the "laws" of programming language design. First let's reproduce several notable quotes:

  1. Knuth law of optimization: "Premature optimization is the root of all evil (or at least most of it) in programming." - Donald Knuth
  2. "Greenspun's Tenth Rule of Programming: any sufficiently complicated C or Fortran program contains an ad hoc informally-specified bug-ridden slow implementation of half of Common Lisp." - Phil Greenspun
  3. "The key to performance is elegance, not battalions of special cases." - Jon Bentley and Doug McIlroy
  4. "Some may say Ruby is a bad rip-off of Lisp or Smalltalk, and I admit that. But it is nicer to ordinary people." - Matz, LL2
  5. Most papers in computer science describe how their author learned what someone else already knew. - Peter Landin
  6. "The only way to learn a new programming language is by writing programs in it." - Kernighan and Ritchie
  7. "If I had a nickel for every time I've written "for (i = 0; i < N; i++)" in C, I'd be a millionaire." - Mike Vanier
  8. "Language designers are not intellectuals. They're not as interested in thinking as you might hope. They just want to get a language done and start using it." - Dave Moon
  9. "Don't worry about what anybody else is going to do. The best way to predict the future is to invent it." - Alan Kay
  10. "Programs must be written for people to read, and only incidentally for machines to execute." - Abelson & Sussman, SICP, preface to the first edition

Please note that it is one thing to read a language manual and appreciate how good the concepts are, and another to bet your project on a new, unproven language without good debuggers, manuals and, very importantly, libraries. The debugger is very important, but standard libraries are crucial: they are the factor that makes or breaks a new language.

In this sense languages are much like cars. For many people a car is the thing they use to get to work and to the shopping mall, and they are not very interested in whether the engine is inline or V-type, or whether the transmission uses fuzzy logic. What they care about is safety, reliability, mileage, insurance and the size of the trunk. In this sense "worse is better" is very true. I already mentioned the importance of the debugger. The other important criterion is the quality and availability of libraries. Actually, libraries make up 80% of the usability of a language; moreover, in a sense libraries are more important than the language...

The popular belief that scripting is an "unsafe" or "second rate" or "prototype" solution is completely wrong. If a project dies, it does not matter what the implementation language was; and for a successful project with a tough schedule a scripting language (especially in a dual scripting-language-plus-C combination, for example Tcl+C) is an optimal blend for a large class of tasks. Such an approach helps to separate architectural decisions from implementation details much better than any OO model does.
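A minimal sketch of that dual-language idea, assuming a Tcl development package is installed (the calls are from the Tcl C library; check your Tcl version's documentation for exact build details, and the command name "square" is just an example): the performance-critical primitive lives in C, while the control logic stays in the script.

    /* Build roughly as: cc host.c -ltcl -o host  (library name varies by system) */
    #include <tcl.h>

    /* A C primitive exposed to Tcl scripts as the command "square". */
    static int SquareCmd(ClientData cd, Tcl_Interp *interp,
                         int objc, Tcl_Obj *const objv[]) {
        int n;
        if (objc != 2 || Tcl_GetIntFromObj(interp, objv[1], &n) != TCL_OK)
            return TCL_ERROR;
        Tcl_SetObjResult(interp, Tcl_NewIntObj(n * n));
        return TCL_OK;
    }

    int main(void) {
        Tcl_Interp *interp = Tcl_CreateInterp();
        Tcl_CreateObjCommand(interp, "square", SquareCmd, NULL, NULL);
        /* The "architecture" is in the script; the hot spot is in C. */
        Tcl_Eval(interp, "puts [square 7]");
        Tcl_DeleteInterp(interp);
        return 0;
    }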

Moreover even for tasks that handle a fair amount of computations and data (computationally intensive tasks) such languages as Python and Perl are often (but not always !) competitive with C++, C# and, especially, Java.


The history of programming languages raises interesting general questions about the limit of complexity of programming languages. There is strong historical evidence that languages with a simpler, or even simplistic, core have better chances to acquire a high level of popularity. The underlying fact here is probably that most programmers are at best mediocre, and such programmers tend, on an intuitive level, to avoid more complex, richer languages like, say, PL/1 and Perl. Or at least avoid them during a particular phase of language development (C++ is not a simpler language than PL/1, but was widely adopted because OO became a fashion). Complex non-orthogonal languages can succeed only as a result of a long period of language development from a smaller core, or under the banner of some fashionable new trend (OO programming in the case of C++).

Programming Language Development Timeline

Here is the timeline of programming languages, adapted from Byte (for the original see BYTE.com, September 1995 / 20th Anniversary).

Forties

ca. 1946


1949

Fifties


1951


1952

1957


1958


1959

Sixties


1960


1962


1963


1964


1965


1966


1967



1969

Seventies


1970


1972


 

1974


1975


1976

 


1977


1978


1979

Eighties


1980


1981


1982


1983


1984


1985


1986


1987


1988


1989

Nineties


1990


1991


1992


1993


1994


1995


1996


1997


2006

2007 

2011

There are several interesting "language-induced" errors -- errors that a particular programming language facilitates rather than helps to avoid. They are most studied for C-style languages. Funny enough, PL/1 (from which C was derived) was a better designed language than the much simpler C in several of those categories.

Avoiding the C-style languages' design blunder of making it easy to mistype "=" instead of "=="

One of the most famous C design blunders is the small lexical difference between assignment and comparison (remember that Algol used := for assignment), caused by the design decision to make the language more compact (terminals at that time were not very reliable, and the number of symbols typed mattered greatly). In C an assignment is allowed inside an if statement, and no attempt was made to make the language more failsafe by preventing the mix-up of "=" and "==". In C syntax the statement

if (alpha = beta) ... 

assigns the contents of the variable beta to the variable alpha and executes the code in the then-branch if beta is non-zero.

It is easy to mix things up and write if (alpha = beta) instead of if (alpha == beta), which is a pretty nasty, and remarkably persistent, C-induced bug. In case you are comparing a constant to a variable, you can often reverse the operands and put the constant first, as in

if ( 1==i ) ...

because the mistyped form

if ( 1=i ) ...

does not make any sense. In this case such a blunder will be detected at the syntax level.
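Most modern C compilers also help here: with warnings enabled (for example gcc or clang with -Wall) an assignment used as a condition draws a "suggest parentheses" warning unless it is wrapped in an extra pair of parentheses. A small self-contained illustration of both defenses:

    #include <stdio.h>

    int main(void) {
        int i = 0;

        /* if (i = 1) ...  compiles and silently assigns; gcc -Wall warns
           "suggest parentheses around assignment used as truth value".  */

        if (1 == i)          /* constant first: mistyping "1 = i" is a syntax error */
            printf("i is one\n");
        else
            printf("i is %d\n", i);
        return 0;
    }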

Dealing with the unbalanced "{" and "}" problem in C-style languages

Another nasty problem with C, C++, Java, Perl and other C-style languages is that missing curly brackets are pretty difficult to find. They can also be inserted in the wrong place, ending up with an even nastier logical error. One effective solution was first implemented in PL/1: it was based on showing the level of nesting in the compiler listing and on the ability to close multiple blocks with a single end statement (PL/1 did not use the brackets {}; they were introduced in C).

In C one can use pseudo-comments that mark points where the nesting level should be zero, and check those points with a special program or an editor macro.
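A minimal sketch of that convention (the marker text /*L0*/ is arbitrary): every line carrying the marker asserts that all braces opened so far have been closed, and a short script or editor macro can verify the running brace count at each marker.

    #include <stdio.h>

    void report(int n) {
        if (n > 0) {
            printf("positive\n");
        } else {
            printf("non-positive\n");
        }
    } /*L0*/   /* brace nesting must be back to zero here */

    int main(void) {
        report(1);
        return 0;
    } /*L0*/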

Many editors can jump to the closing bracket for any given opening bracket and vice versa. This is also useful, but it is a less efficient way to solve the problem.

Problem of unclosed literal

Specifying a maximum length for literals is an effective way of catching a missing quote. This idea was first implemented in debugging PL/1 compilers. You can also have an option to limit a literal to a single line. In general, multi-line literals should have distinct lexical markers (like the "here document" construct in shell). Some languages like Perl provide the opportunity to use the concatenation operator for splitting literals across multiple lines, which are "merged" at compile time. But if there is no limit on the number of lines a string literal can occupy, a bug can slip in in which an unmatched quote is closed by another unmatched quote in a nearby literal, "commenting out" part of the code in between. So this alone does not help much.
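C and C++ offer the same facility through adjacent string literal concatenation: each piece opens and closes its quotes on one line, so a missing quote is reported on the line where it occurs. A minimal illustration:

    #include <stdio.h>

    /* Adjacent literals are merged at compile time into one string,
       but every quote is balanced on its own line.                  */
    static const char usage[] =
        "Usage: frob [options] file...\n"
        "  -v   verbose output\n"
        "  -q   quiet mode\n";

    int main(void) {
        fputs(usage, stdout);
        return 0;
    }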

A limit on the length of literals can be communicated via a pragma statement for a particular fragment of the program. This is an effective way to avoid the problem. Usually only a few places in a program use multi-line literals, if any.

Editors that use syntax coloring help to detect the unclosed-literal problem, but there are cases when they are useless.

Commenting out blocks  of code

This is best done not with comments but with the preprocessor, if the language has one (PL/1, C, etc.).
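A minimal C illustration: an #if 0 / #endif pair disables a block even when that block already contains comments, which /* ... */ commenting cannot do because such comments do not nest.

    #include <stdio.h>

    int main(void) {
        printf("always compiled\n");
    #if 0   /* the whole block below is skipped by the preprocessor */
        printf("disabled code\n");   /* an existing comment causes no trouble here */
    #endif
        return 0;
    }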

The "dangling else" problem

Having both an if-else and an if statement leads to possibilities for confusion when one of the clauses of a selection statement is itself a selection statement. For example, the C code

if (level >= good)
   if (level == excellent)
      printf("excellent\n");
else
   printf("bad\n");

is intended to process a three-state situation in which something can be bad, good or (as a special case of good) excellent; it is supposed to print an appropriate description for the excellent and bad cases, and print nothing for the good case. The indentation of the code reflects these expectations. Unfortunately, the code does not do this. Instead, it prints excellent for the excellent case, bad for the good case, and nothing for the bad case.

The problem is deciding which if matches the else in this expression. The basic rule is

an else matches the nearest previous unmatched if

There are two ways to avoid the dangling else problem:

In fact, you can avoid the dangling else problem completely by always using braces around the clauses of an if or if-else statement, even if they enclose only a single statement. So a good strategy for writing if-else statements is: always use { brace brackets } around the clauses of an if-else or if statement.
(This strategy also helps if you need to cut-and-paste more code into one of the clauses: if a clause consists of only one statement, without enclosing brace brackets, and you add another statement to it, then you also need to add the brace brackets. Having the brace brackets there already makes the job easier.)
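Applying that rule to the earlier example (rewritten here as a complete program, with hypothetical level constants) makes the intended pairing explicit and produces the behavior the indentation promised:

    #include <stdio.h>

    enum { bad = 0, good = 1, excellent = 2 };

    void describe(int level) {
        if (level >= good) {
            if (level == excellent) {
                printf("excellent\n");
            }
        } else {
            printf("bad\n");
        }
    }

    int main(void) {
        describe(excellent);   /* prints "excellent" */
        describe(good);        /* prints nothing     */
        describe(bad);         /* prints "bad"       */
        return 0;
    }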

 


NEWS CONTENTS

Old News ;-)

[Nov 11, 2019] C, Python, Go, and the Generalized Greenspun Law

Dec 18, 2017 | esr.ibiblio.org

Posted on 2017-12-18 by esr

In recent discussion on this blog of the GCC repository transition and reposurgeon, I observed "If I'd been restricted to C, forget it – reposurgeon wouldn't have happened at all"

I should be more specific about this, since I think the underlying problem is general to a great deal more that the implementation of reposurgeon. It ties back to a lot of recent discussion here of C, Python, Go, and the transition to a post-C world that I think I see happening in systems programming.

(This post perhaps best viewed as a continuation of my three-part series: The long goodbye to C , The big break in computer languages , and Language engineering for great justice .)

I shall start by urging that you must take me seriously when I speak of C's limitations. I've been programming in C for 35 years. Some of my oldest C code is still in wide production use. Speaking from that experience, I say there are some things only a damn fool tries to do in C, or in any other language without automatic memory management (AMM, for the rest of this article).

This is another angle on Greenspun's Law: "Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp." Anyone who's been in the trenches long enough gets that Greenspun's real point is not about C or Fortran or Common Lisp. His maxim could be generalized in a Henry-Spencer-does-Santyana style as this:

"At any sufficient scale, those who do not have automatic memory management in their language are condemned to reinvent it, poorly."

In other words, there's a complexity threshold above which lack of AMM becomes intolerable. Lack of it either makes expressive programming in your application domain impossible or sends your defect rate skyrocketing, or both. Usually both.

When you hit that point in a language like C (or C++), your way out is usually to write an ad-hoc layer or a bunch of semi-disconnected little facilities that implement parts of an AMM layer, poorly. Hello, Greenspun's Law!

It's not particularly the line count of your source code driving this, but rather the complexity of the data structures it uses internally; I'll call this its "greenspunity". Large programs that process data in simple, linear, straight-through ways may evade needing an ad-hoc AMM layer. Smaller ones with gnarlier data management (higher greenspunity) won't. Anything that has to do – for example – graph theory is doomed to need one (why, hello, there, reposurgeon!)

There's a trap waiting here. As the greenspunity rises, you are likely to find that more and more of your effort and defect chasing is related to the AMM layer, and proportionally less goes to the application logic. Redoubling your effort, you increasingly miss your aim.

Even when you're merely at the edge of this trap, your defect rates will be dominated by issues like double-free errors and malloc leaks. This is commonly the case in C/C++ programs of even low greenspunity.

Sometimes you really have no alternative but to be stuck with an ad-hoc AMM layer. Usually you get pinned to this situation because real AMM would impose latency costs you can't afford. The major case of this is operating-system kernels. I could say a lot more about the costs and contortions this forces you to assume, and perhaps I will in a future post, but it's out of scope for this one.

On the other hand, reposurgeon is representative of a very large class of "systems" programs that don't have these tight latency constraints. Before I get to back to the implications of not being latency constrained, one last thing – the most important thing – about escalating AMM-layer complexity.

At high enough levels of greenspunity, the effort required to build and maintain your ad-hoc AMM layer becomes a black hole. You can't actually make any progress on the application domain at all – when you try it's like being nibbled to death by ducks.

Now consider this prospectively, from the point of view of someone like me who has architect skill. A lot of that skill is being pretty good at visualizing the data flows and structures – and thus estimating the greenspunity – implied by a problem domain. Before you've written any code, that is.

If you see the world that way, possible projects will be divided into "Yes, can be done in a language without AMM." versus "Nope. Nope. Nope. Not a damn fool, it's a black hole, ain't nohow going there without AMM."

This is why I said that if I were restricted to C, reposurgeon would never have happened at all. I wasn't being hyperbolic – that evaluation comes from a cool and exact sense of how far reposurgeon's problem domain floats above the greenspunity level where an ad-hoc AMM layer becomes a black hole. I shudder just thinking about it.

Of course, where that black-hole level of ad-hoc AMM complexity is varies by programmer. But, though software is sometimes written by people who are exceptionally good at managing that kind of hair, it then generally has to be maintained by people who are less so

The really smart people in my audience have already figured out that this is why Ken Thompson, the co-designer of C, put AMM in Go, in spite of the latency issues.

Ken understands something large and simple. Software expands, not just in line count but in greenspunity, to meet hardware capacity and user demand. In languages like C and C++ we are approaching a point of singularity at which typical – not just worst-case – greenspunity is so high that the ad-hoc AMM becomes a black hole, or at best a trap nigh-indistinguishable from one.

Thus, Go. It didn't have to be Go; I'm not actually being a partisan for that language here. It could have been (say) Ocaml, or any of half a dozen other languages I can think of. The point is the combination of AMM with compiled-code speed is ceasing to be a luxury option; increasingly it will be baseline for getting most kinds of systems work done at all.

Sociologically, this implies an interesting split. Historically the boundary between systems work under hard latency constraints and systems work without it has been blurry and permeable. People on both sides of it coded in C and skillsets were similar. People like me who mostly do out-of-kernel systems work but have code in several different kernels were, if not common, at least not odd outliers.

Increasingly, I think, this will cease being true. Out-of-kernel work will move to Go, or languages in its class. C – or non-AMM languages intended as C successors, like Rust – will keep kernels and real-time firmware, at least for the foreseeable future. Skillsets will diverge.

It'll be a more fragmented systems-programming world. Oh well; one does what one must, and the tide of rising software complexity is not about to be turned.

144 thoughts on "C, Python, Go, and the Generalized Greenspun Law"

  1. Vote -1 Vote +1 David Collier-Brown on 2017-12-18 at 17:38:05 said: Andrew Forber quasily-accidentally created a similar truth: any sufficiently complex program using overlays will eventually contain an implementation of virtual memory. Reply ↓
    • Vote -1 Vote +1 esr on 2017-12-18 at 17:40:45 said: >Andrew Forber quasily-accidentally created a similar truth: any sufficiently complex program using overlays will eventually contain an implementation of virtual memory.

      Oh, neat. I think that's a closer approximation to the most general statement than Greenspun's, actually. Reply ↓

      • Vote -1 Vote +1 Alex K. on 2017-12-20 at 09:50:37 said: For today, maybe -- but the first time I had Greenspun's Tenth quoted at me was in the late '90s. [I know this was around/just before the first C++ standard, maybe contrasting it to this new upstart Java thing?] This was definitely during the era where big computers still did your serious work, and pretty much all of it was in either C, COBOL, or FORTRAN. [Yeah, yeah, I know– COBOL is all caps for being an acronym, while Fortran ain't–but since I'm talking about an earlier epoch of computing, I'm going to use the conventions of that era.]

        Now the Object-Oriented paradigm has really mitigated this to an enormous degree, but I seem to recall at that time the argument was that multimethod dispatch (a benefit so great you happily accept the flaw of memory management) was the Killer Feature of LISP.

        Given the way the other advantage I would have given Lisp over the past two decades–anonymous functions [lambdas] and treating them as first-class values–are creeping into a more mainstream usage, I think automated memory management is the last visible "Lispy" feature people will associate with Greenspun. [What, are you now visualizing lisp macros? Perish the thought–anytime I see a foot cannon that big, I stop calling it a feature ] Reply ↓

  2. Vote -1 Vote +1 Mycroft Jones on 2017-12-18 at 17:41:04 said: After looking at the Linear Lisp paper, I think that is where Lutz Mueller got One Reference Only memory management from. For automatic memory management, I'm a big fan of ORO. Not sure how to apply it to a statically typed language though. Wish it was available for Go. ORO is extremely predictable and repeatable, not stuttery. Reply ↓
    • Vote -1 Vote +1 lliamander on 2017-12-18 at 19:28:04 said: > Not sure how to apply it to a statically typed language though.

      Clean is probably what you would be looking for: https://en.wikipedia.org/wiki/Clean_(programming_language) Reply ↓

    • Vote -1 Vote +1 Jeff Read on 2017-12-19 at 00:38:57 said: If Lutz was inspired by Linear Lisp, he didn't cite it. Actually ORO is more like region-based memory allocation with a single region: values which leave the current scope are copied which can be slow if you're passing large lists or vectors around.

      Linear Lisp is something quite a bit different, and allows for arbitrary data structures with arbitrarily deep linking within, so long as there are no cycles in the data structures. You can even pass references into and out of functions if you like; what you can't do is alias them. As for statically typed programming languages well, there are linear type systems , which as lliamander mentioned are implemented in Clean.

      Newlisp in general is smack in the middle between Rust and Urbit in terms of cultishness of its community, and that scares me right off it. That and it doesn't really bring anything to the table that couldn't be had by "old" lisps (and Lutz frequently doubles down on mistakes in the design that had been discovered and corrected decades ago by "old" Lisp implementers). Reply ↓

  3. Vote -1 Vote +1 Gary E. Miller on 2017-12-18 at 18:02:10 said: For a long time I've been holding out hope for a 'standard' garbage collector library for C. But not gonna hold my breath. One probable reason Ken Thompson had to invent Go is to go around the tremendous difficulty in getting new stuff into C. Reply ↓
    • Vote -1 Vote +1 esr on 2017-12-18 at 18:40:53 said: >For a long time I've been holding out hope for a 'standard' garbage collector library for C. But not gonna hold my breath.

      Yeah, good idea not to. People as smart/skilled as you and me have been poking at this problem since the 1980s and it's pretty easy to show that you can't do better than Boehm–Demers–Weiser, which has limitations that make it impractical. Sigh Reply ↓

      • Vote -1 Vote +1 John Cowan on 2018-04-15 at 00:11:56 said: What's impractical about it? I replaced the native GC in the standard implementation of the Joy interpreter with BDW, and it worked very well. Reply ↓
        • Vote -1 Vote +1 esr on 2018-04-15 at 08:30:12 said: >What's impractical about it? I replaced the native GC in the standard implementation of the Joy interpreter with BDW, and it worked very well.

          GCing data on the stack is a crapshoot. Pointers can get mistaken for data and vice-versa. Reply ↓

    • Vote -1 Vote +1 Konstantin Khomoutov on 2017-12-20 at 06:30:05 said: I think it's not about C. Let me cite a little bit from "The Go Programming Language" (A.Donovan, B. Kernigan) --
      in the section about Go influences, it states:

      "Rob Pike and others began to experiment with CSP implementations as actual languages. The first was called Squeak which provided a language with statically created channels. This was followed by Newsqueak, which offered C-like statement and expression syntax and Pascal-like type notation. It was a purely functional language with garbage collection, again aimed at managing keyboard, mouse, and window events. Channels became first-class values, dynamically created and storable in variables.

      The Plan 9 operating system carried these ideas forward in a language called Alef. Alef tried to make Newsqueak a viable system programming language, but its omission of garbage collection made concurrency too painful."

      So my takeaway was that AMM was key to get proper concurrency.
      Before Go, I dabbled with Erlang (which I enjoy, too), and I'd say there the AMM is also a key to have concurrency made easy.

      (Update: the ellipsises I put into the citation were eaten by the engine and won't appear when I tried to re-edit my comment; sorry.) Reply ↓

  4. Vote -1 Vote +1 tz on 2017-12-18 at 18:29:20 said: I think this is the key insight.
    There are programs with zero MM.
    There are programs with orderly MM, e.g. unzip does mallocs and frees in a stacklike formation, Malloc a,b,c, free c,b,a. (as of 1.1.4). This is laminar, not chaotic flow.

    Then there is the complex, nonlinear, turbulent flow, chaos. You can't do that in basic C, you need AMM. But it is easier in a language that includes it (and does it well).

    Virtual Memory is related to AMM – too often the memory leaks were hidden (think of your O(n**2) for small values of n) – small leaks that weren't visible under ordinary circumstances.

    Still, you aren't going to get AMM on the current Arduino variants. At least not easily.

    That is where the line is, how much resources. Because you require a medium to large OS, or the equivalent resources to do AMM.

    Yet this is similar to using FPGAs, or GPUs for blockchain coin mining instead of the CPU. Sometimes you have to go big. Your Cooper Mini might be great most of the time, but sometimes you need a Diesel big pickup. I think a Mini would fit in the bed of my F250.

    As tasks get bigger they need bigger machines. Reply ↓

  5. Vote -1 Vote +1 Zygo on 2017-12-18 at 18:31:34 said: > Of course, where that black-hole level of ad-hoc AMM complexity is varies by programmer.

    I was about to say something about writing an AMM layer before breakfast on the way to writing backtracking parallel graph-searchers at lunchtime, but I guess you covered that. Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-18 at 18:34:59 said: >I was about to say something about writing an AMM layer before breakfast on the way to writing backtracking parallel graph-searchers at lunchtime, but I guess you covered that.

      Well, yeah. I have days like that occasionally, but it would be unwise to plan a project based on the assumption that I will. And deeply foolish to assume that J. Random Programmer will. Reply ↓

  6. Vote -1 Vote +1 tz on 2017-12-18 at 18:32:37 said: C displaced assembler because it had the speed and flexibility while being portable.

    Go, or something like it will displace C where they can get just the right features into the standard library including AMM/GC.

    Maybe we need Garbage Collecting C. GCC?

    One problem is you can't do the pointer aliasing if you have a GC (unless you also do some auxillary bits which would be hard to maintain). void x = y; might be decodable but there are deeper and more complex things a compiler can't detect. If the compiler gets it wrong, you get a memory leak, or have to constrain the language to prevent things which manipulate pointers when that is required or clearer. Reply ↓

    • Vote -1 Vote +1 Zygo on 2017-12-18 at 20:52:40 said: C++11 shared_ptr does handle the aliasing case. Each pointer object has two fields, one for the thing being pointed to, and one for the thing's containing object (or its associated GC metadata). A pointer alias assignment alters the former during the assignment and copies the latter verbatim. The syntax is (as far as a C programmer knows, after a few typedefs) identical to C.

      The trouble with applying that idea to C is that the standard pointers don't have space or time for the second field, and heap management isn't standardized at all (free() is provided, but programs are not required to use it or any other function exclusively for this purpose). Change either of those two things and the resulting language becomes very different from C. Reply ↓

  7. Vote -1 Vote +1 IGnatius T Foobar on 2017-12-18 at 18:39:28 said: Eric, I love you, you're a pepper, but you have a bad habit of painting a portrait of J. Random Hacker that is actually a portrait of Eric S. Raymond. The world is getting along with C just fine. 95% of the use cases you describe for needing garbage collection are eliminated with the simple addition of a string class which nearly everyone has in their toolkit. Reply ↓
    • Vote -1 Vote +1 esr on 2017-12-18 at 18:55:46 said: >The world is getting along with C just fine. 95% of the use cases you describe for needing garbage collection are eliminated with the simple addition of a string class which nearly everyone has in their toolkit.

      Even if you're right, the escalation of complexity means that what I'm facing now, J. Random Hacker will face in a couple of years. Yes, not everybody writes reposurgeon but a string class won't suffice for much longer even if it does today. Reply ↓

      • Vote -1 Vote +1 tz on 2017-12-18 at 19:27:12 said: Here's another key.

        I once had a sign:

        I don't solve complex problems.
        I simplify complex problems and solve them.

        Complexity does escalate, or at least in the sense that we could cross oceans a few centuries ago, and can go to the planets and beyond today.

        We shouldn't use a rocket ship to get groceries from the local market.

        J Random H-1B will face some easily decomposed apparently complex problem and write a pile of spaghetti.

        The true nature of a hacker is not so much in being able to handle the most deep and complex situations, but in being able to recognize which situations are truly complex and in preference working hard to simplify and reduce complexity in preference to writing something to handle the complexity. Dealing with a slain dragon's corpse is easier than one that is live, annoyed, and immolating anything within a few hundred yards. Some are capable of handling the latter. The wise knight prefers to reduce the problem to the former. Reply ↓

        • Vote -1 Vote +1 William O. B'Livion on 2017-12-20 at 02:02:40 said: > J Random H-1B will face some easily decomposed
          > apparently complex problem and write a pile of spaghetti.

          J Random H-1B will do it with Informatica and Java. Reply ↓

  8. Vote -1 Vote +1 tz on 2017-12-18 at 18:42:33 said: I will add one last "perils of java school" comment.

    One of the epic fails of C++ is it being sold as C but where anyone could program because of all the safetys. Instead it created bloatware and the very memory leaks because the lesser programmers didn't KNOW (grok, understand) what they were doing. It was all "automatic".

    This is the opportunity and danger of AMM/GC. It is a tool, and one with hot areas and sharp edges. Wendy (formerly Walter) Carlos had a law that said "Whatever parameter you can control, you must control". Having a really good AMM/GC requires you to respect what it can and cannot do. OK, form a huge – into VM – linked list. Won't it just handle everything? NO!. You have to think reference counts, at least in the back of your mind. It simplifys the problem but doesn't eliminate it. It turns the black hole into a pulsar, but you still can be hit.

    Many will gloss over and either superficially learn (but can't apply) or ignore the "how to use automatic memory management" in their CS course. Like they didn't bother with pointers, recursion, or multithreading subtleties. Reply ↓

  9. Vote -1 Vote +1 lliamander on 2017-12-18 at 19:36:35 said: I would say that there is a parallel between concurrency models and memory management approaches. Beyond a certain level of complexity, it's simply infeasible for J. Random Hacker to implement a locks-based solution just as it is infeasible for Mr. Hacker to write a solution with manual memory management.

    My worry is that by allowing the unsafe sharing of mutable state between goroutines, Go will never be able to achieve the per-process (i.e. language-level process, not OS-level) GC that would allow for really low latencies necessary for a AMM language to move closer into the kernel space. But certainly insofar as many "systems" level applications don't require extremely low latencies, Go will probably viable solution going forward. Reply ↓

  10. Vote -1 Vote +1 Jeff Read on 2017-12-18 at 20:14:18 said: Putting aside the hard deadlines found in real-time systems programming, it has been empirically determined that a GC'd program requires five times as much memory as the equivalent program with explicit memory management. Applications which are both CPU- and RAM-intensive, where you need to have your performance cake and eat it in as little memory as possible, are thus severely constrained in terms of viable languages they could be implemented in. And by "severely constrained" I mean you get your choice of C++ or Rust. (C, Pascal, and Ada are on the table, but none offer quite the same metaprogramming flexibility as those two.)

    I think your problems with reposturgeon stem from the fact that you're just running up against the hard upper bound on the vector sum of CPU and RAM efficiency that a dynamic language like Python (even sped up with PyPy) can feasibly deliver on a hardware configuration you can order from Amazon. For applications like that, you need to forgo GC entirely and rely on smart pointers, automatic reference counting, value semantics, and RAII. Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-18 at 20:27:20 said: > For applications like that, you need to forgo GC entirely and rely on smart pointers, automatic reference counting, value semantics, and RAII.

      How many times do I have to repeat "reposurgeon would never have been written under that constraint" before somebody who claims LISP experience gets it? Reply ↓

      • Vote -1 Vote +1 Jeff Read on 2017-12-18 at 20:48:24 said: You mentioned that reposurgeon wouldn't have been written under the constraints of C. But C++ is not C, and has an entirely different set of constraints. In practice, it's not thst far off from Lisp, especially if you avail yourself of those wonderful features in C++1x. C++ programmers talk about "zero-cost abstractions" for a reason .

        Semantically, programming in a GC'd language and programming in a language that uses smart pointers and RAII are very similar: you create the objects you need, and they are automatically disposed of when no longer needed. But instead of delegating to a GC which cleans them up whenever, both you and the compiler have compile-time knowledge of when those cleanups will take place, allowing you finer-grained control over how memory -- or any other resource -- is used.

        Oh, that's another thing: GC only has something to say about memory -- not file handles, sockets, or any other resource. In C++, with appropriate types value semantics can be made to apply to those too and they will immediately be destructed after their last use. There is no special with construct in C++; you simply construct the objects you need and they're destructed when they go out of scope.

        This is how the big boys do systems programming. Again, Go has barely displaced C++ at all inside Google despite being intended for just that purpose. Their entire critical path in search is still C++ code. And it always will be until Rust gains traction.

        As for my Lisp experience, I know enough to know that Lisp has utterly failed and this is one of the major reasons why. It's not even a decent AI language, because the scruffies won, AI is basically large-scale statistics, and most practitioners these days use C++. Reply ↓

        • Vote -1 Vote +1 esr on 2017-12-18 at 20:54:08 said: >C++ is not C, and has an entirely different set of constraints. In practice, it's not thst far off from Lisp,

          Oh, bullshit. I think you're just trolling, now.

          I've been a C++ programmer and know better than this.

          But don't argue with me. Argue with Ken Thompson, who designed Go because he knows better than this. Reply ↓

          • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 06:02:03 said: Modern C++ is a long way from C++ when it was first standardized in 1998. You should *never* be manually managing memory in modern C++. You want a dynamically sized array? Use std::vector. You want an adhoc graph? Use std::shared_ptr and std::weak_ptr.
            Any code I see which uses new or delete, malloc or free will fail code review.
            Destructors and the RAII idiom mean that this covers *any* resource, not just memory.
            See the C++ Core Guidelines on resource and memory management: http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-resource Reply ↓
            • Vote -1 Vote +1 esr on 2017-12-19 at 07:53:58 said: >Modern C++ is a long way from C++ when it was first standardized in 1998.

              That's correct. Modern C++ is a disaster area of compounded complexity and fragile kludges piled on in a failed attempt to fix leaky abstractions. 1998 C++ had the leaky-abstractions problem, but at least it was drastically simpler. Clue: complexification when you don't even fix the problems is bad .

              My experience dates from 2009 and included Boost – I was a senior dev on Battle For Wesnoth. Don't try to tell me I don't know what "modern C++" is like. Reply ↓

              • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 08:17:58 said: > My experience dates from 2009 and included Boost – I was a senior dev on Battle For Wesnoth. Don't try to tell me I don't know what "modern C++" is like.

                C++ in 2009 with boost was C++ from 1998 with a few extra libraries. I mean that quite literally -- the standard was unchanged apart from minor fixes in 2003.

                C++ has changed a lot since then. There have been 3 standards issued, in 2011, 2014, and just now in 2017. Between them, there is a huge list of changes to the language and the standard library, and these are readily available -- both clang and gcc have kept up-to-date with the changes, and even MSVC isn't far behind. Even more changes are coming with C++20.

                So, with all due respect, C++ from 2009 is not "modern C++", though there certainly were parts of boost that were leaning that way.

                If you are interested, browse the wikipedia entries: https://en.wikipedia.org/wiki/C%2B%2B11 https://en.wikipedia.org/wiki/C%2B%2B14 and https://en.wikipedia.org/wiki/C%2B%2B17 along with articles like https://blog.smartbear.com/development/the-biggest-changes-in-c11-and-why-you-should-care/ http://www.drdobbs.com/cpp/the-c14-standard-what-you-need-to-know/240169034 and https://isocpp.org/files/papers/p0636r0.html Reply ↓

                • Vote -1 Vote +1 esr on 2017-12-19 at 08:37:11 said: >So, with all due respect, C++ from 2009 is not "modern C++", though there certainly were parts of boost that were leaning that way.

                  But the foundational abstractions are still leaky. So when you tell me "it's all better now", I don't believe you. I just plain do not.

                  I've been hearing this soothing song ever since around 1989. "Trust us, it's all fixed." Then I look at the "fixes" and they're horrifying monstrosities like templates – all the dangers of preprocessor macros and a whole new class of Turing-complete nightmares, too! In thirty years I'm certain I'll be hearing that C++2047 solves all the problems this time for sure , and I won't believe a word of it then, either. Reply ↓

                  • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 08:45:34 said: > But the foundational abstractions are still leaky.

                    If you would elaborate on this, I would be grateful. What are the problematic leaky abstractions you are concerned about? Reply ↓

                    • Vote -1 Vote +1 esr on 2017-12-19 at 09:26:24 said: >If you would elaborate on this, I would be grateful. What are the problematic leaky abstractions you are concerned about?

                      Are array accesses bounds-checked? Don't yammer about iterators; what happens if I say foo[3] and foo is dimension 2? Never mind, I know the answer.

                      Are bare, untyped pointers still in the language? Never mind, I know the answer.

                      Can I get a core dump from code that the compiler has statically checked and contains no casts? Never mind, I know the answer.

                      Yes, C has these problems too. But it doesn't pretend not to, and in C I'm never afflicted by masochistic cultists denying that they're problems.

                    • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 09:54:51 said: Thank you for the list of concerns.

                      > Are array accesses bounds-checked? Don't yammer about iterators; what happens if I say foo[3] and foo is dimension 2? Never mind, I know the answer.

                      You are right, bare arrays are not bounds-checked, but std::array provides an at() member function, so arr.at(3) will throw if the array is too small.

                      Also, ranged-for loops can avoid the need for explicit indexing lots of the time anyway.

                      > Are bare, untyped pointers still in the language? Never mind, I know the answer.

                      Yes, void* is still in the language. You need to cast it to use it, which is something that is easy to spot in a code review.

                      > Can I get a core dump from code that the compiler has statically checked and contains no casts? Never mind, I know the answer.

                      Probably. Is it possible to write code in any language that dies horribly in an unintended fashion?

                      > Yes, C has these problems too. But it doesn't pretend not to, and in C I'm never afflicted by masochistic cultists denying that they're problems.

                      Did I say C++ was perfect? This blog post was about the problems inherent in the lack of automatic memory management in C and C++, and thus why you wouldn't have written reposurgeon if that's all you had. My point is that it is easy to write C++ in a way that doesn't suffer from those problems.

                    • Vote -1 Vote +1 esr on 2017-12-19 at 10:10:11 said: > My point is that it is easy to write C++ in a way that doesn't suffer from those problems.

                      No, it is not. The error statistics of large C++ programs refute you.

                      My personal experience on Battle for Wesnoth refutes you.

                      The persistent self-deception I hear from C++ advocates on this score does nothing to endear the language to me.

                    • Vote -1 Vote +1 Ian Bruene on 2017-12-19 at 11:05:22 said: So what I am hearing in this is: "Use these new standards built on top of the language, and make sure every single one of your dependencies holds to them just as religiously as you are. And if anyone fails at any point in the chain you are doomed."

                      Cool.

                  • Vote -1 Vote +1 Casey Barker on 2017-12-19 at 11:12:16 said: Using Go has been a revelation, so I mostly agree with Eric here. My only objection is to equating C++03/Boost with "modern" C++. I used both heavily, and given a green field, I would consider C++14 for some of these thorny designs that I'd never have used C++03/Boost for. It's a qualitatively different experience. Just browse a copy of Scott Meyers' _Effective Modern C++_ for a few minutes, and I think you'll at least understand why C++14 users object to the comparison. Modern C++ enables better designs.

                    Alas, C++ is a multi-layered tool chest. If you stick to the top two shelves, you can build large-scale, complex designs with pretty good safety and nigh unmatched performance. Everything below the third shelf has rusty tools with exposed wires and no blade guards, and on large-scale projects, it's impossible to keep J. Random Programmer from reaching for those tools.

                    So no, if they keep adding features, C++ 2047 won't materially improve this situation. But there is a contingent (including Meyers) pushing for the *removal* of features. I think that's the only way C++ will stay relevant in the long-term.
                    http://scottmeyers.blogspot.com/2015/11/breaking-all-eggs-in-c.html Reply ↓

                  • Vote -1 Vote +1 Zygo on 2017-12-19 at 11:52:17 said: My personal experience is that C++11 code (in particular, code that uses closures, deleted methods, auto (a feature you yourself recommended for C with different syntax), and the automatic memory and resource management classes) has fewer defects per developer-year than the equivalent C++03-and-earlier code.

                    This is especially so if you turn on compiler flags that disable the legacy features (e.g. -Werror=old-style-cast), and treat any legacy C or C++03 code like foreign language code that needs to be buried under a FFI to make it safe to use.
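
                     As a concrete sketch of the features listed above (closures, deleted methods, auto, and the automatic resource-management classes), under the assumption of a C++11 compiler; the Connection type and the exact flags are illustrative only, not a prescription.

                     // Compile with e.g.: g++ -std=c++11 -Wall -Werror=old-style-cast example.cpp
                     #include <algorithm>
                     #include <memory>
                     #include <vector>

                     struct Connection {
                         Connection() = default;
                         Connection(const Connection&) = delete;            // deleted method: no accidental copies
                         Connection& operator=(const Connection&) = delete;
                     };

                     int main() {
                         std::unique_ptr<Connection> conn(new Connection);  // freed automatically at end of scope

                         std::vector<int> sizes{1, 5, 3, 12, 7};
                         int threshold = 4;

                         // Closure capturing 'threshold'; 'auto' deduces the count's type.
                         auto big = std::count_if(sizes.begin(), sizes.end(),
                                                  [threshold](int s) { return s > threshold; });

                         return static_cast<int>(big);                      // new-style cast only
                     }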

                    Qualitatively, the defects that do occur are easier to debug in C++11 vs C++03. There are fewer opportunities for the compiler to interpolate in surprising ways because the automatic rules are tighter, the library has better utility classes that make overloads and premature optimization less necessary, the core language has features that make templates less necessary, and it's now possible to explicitly select or rule out invalid candidates for automatic code generation.

                    I can design in Lisp, but write C++11 without much effort of mental translation. Contrast with C++03, where people usually just write all the Lispy bits in some completely separate language, or create shambling horrors like Boost to try to bandaid over the missing limbs (boost::lambda, anyone? Oh, look, since C++11 they've doubled down on something called boost::phoenix).

                    Does C++11 solve all the problems? Absolutely not, that would break compatibility. But C++11 is noticeably better than its predecessors. I would say the defect rates are now comparable to Perl with a bunch of custom C modules (i.e. exact defect rate depends on how much you wrote in each language). Reply ↓

                  • Vote -1 Vote +1 NHO on 2017-12-19 at 11:55:11 said: C++ happily turned into a complexity meta-tarpit with "everything that could be implemented in the STL with templates should be, instead of in the core language". And instead of deprecating/removing features, it just leaves them there. Reply ↓
                • Vote -1 Vote +1 Michael on 2017-12-19 at 08:59:41 said: For the curious, can you point to a C++ tutorial/intro that shows how to do it the right way ? Reply ↓
              • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 08:26:13 said: > That's correct. Modern C++ is a disaster area of compounded complexity and fragile kludges piled on in a failed attempt to fix leaky abstractions. 1998 C++ had the leaky-abstractions problem, but at least it was drastically simpler. Clue: complexification when you don't even fix the problems is bad.

                I agree that there is a lot of complexity in C++. That doesn't mean you have to use all of it. Yes, it makes maintaining legacy code harder, because the older code might use dangerous or complex parts, but for new code we can avoid the danger, and just stick to the simple, safe parts.

                The complexity isn't all bad, though. Part of the complexity arises by providing the ability to express more complex things in the language. This can then be used to provide something simple to the user.

                Take std::variant as an example. This is a new facility from C++17 that provides a type-safe discriminated variant. If you have a variant that could hold an int or a string and you store an int in it, then attempting to access it as a string will cause an exception rather than a silent error. The code that *implements* std::variant is complex. The code that uses it is simple. Reply ↓
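
                A minimal C++17 sketch of that behaviour (the values are invented for illustration): std::get throws std::bad_variant_access when the variant currently holds a different alternative.

                #include <iostream>
                #include <string>
                #include <variant>

                int main() {
                    std::variant<int, std::string> v = 42;               // currently holds an int

                    std::cout << std::get<int>(v) << '\n';               // fine: prints 42

                    try {
                        std::cout << std::get<std::string>(v) << '\n';   // wrong alternative...
                    } catch (const std::bad_variant_access&) {
                        std::cout << "not a string right now\n";         // ...fails loudly, not silently
                    }
                }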

          • Vote -1 Vote +1 Jeff Read on 2017-12-20 at 09:07:06 said: I won't argue with you. C++ is error-prone (albeit less so than C) and horrid to work in. But for certain classes of algorithmically complex, CPU- and RAM-intensive problems it is literally the only viable choice. And it looks like performing surgery on GCC-scale repos falls into that class of problem.

            I'm not even saying it was a bad idea to initially write reposurgeon in Python. Python and even Ruby are great languages to write prototypes or even small-scale production versions of things because of how rapidly they may be changed while you're hammering out the details. But scale comes around to bite you in the ass sooner than most people think and when it does, your choice of language hobbles you in a way that can't be compensated for by throwing more silicon at the problem. And it's in that niche where C++ and Rust dominate, absolutely uncontested. Reply ↓

          • Vote -1 Vote +1 jim on 2017-12-22 at 06:41:27 said: If you found Rust hard going, you are not a C++ programmer who knows better than this.

            You were writing C in C++. Reply ↓

      • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 06:15:12 said: > How many times do I have to repeat "reposurgeon would never have been
        > written under that constraint" before somebody who claims LISP
        > experience gets it?

        That speaks to your lack of experience with modern C++, rather than an inherent limitation. *You* might not have written reposurgeon under that constraint, because *you* don't feel comfortable that you wouldn't have ended up with a black-hole of AMM. That does not mean that others wouldn't have or couldn't have, and that their code would necessarily be an unmaintainable black hole.

        In well-written modern C++, memory management errors are a solved problem. You can just write code, and know that the compiler and library will take care of cleaning up for you, just like with a GC-based system, but with the added benefit that it's deterministic, and can handle non-memory resources such as file handles and sockets too. Reply ↓
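
        A small sketch of the deterministic-cleanup claim, assuming nothing beyond the standard library (the file name is made up): the fstream closes its handle when it leaves scope, and a unique_ptr with a custom deleter can do the same for a C-style resource.

        #include <cstdio>
        #include <fstream>
        #include <memory>

        // Deleter for a C-style FILE*, so unique_ptr can manage a non-memory resource.
        struct FileCloser {
            void operator()(std::FILE* f) const { if (f) std::fclose(f); }
        };

        int main() {
            {
                std::ofstream log("example.log");   // acquires a file handle
                log << "hello\n";
            }                                       // handle closed here, deterministically

            std::unique_ptr<std::FILE, FileCloser> fp(std::fopen("example.log", "r"));
            if (fp) {
                char buf[16];
                std::fgets(buf, sizeof buf, fp.get());
            }
            return 0;
        }                                           // fclose runs automatically, even if an exception unwinds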

        • Vote -1 Vote +1 esr on 2017-12-19 at 07:59:30 said: >In well-written modern C++, memory management errors are a solved problem

          In well-written assembler memory management errors are a solved problem. I hate this idiotic cant repetition about how if you're just good enough for the language it won't hurt you – it sweeps the actual problem under the rug while pretending to virtue. Reply ↓

          • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 08:08:53 said: > I hate this idiotic repetition about how if you're just good enough for the language it won't hurt you – it sweeps the actual problem under the rug while pretending to virtue.

            It's not about being "just good enough". It's about *not* using the dangerous parts. If you never use manual memory management, then you can't forget to free, for example, and automatic memory management is *easy* to use. std::string is a darn sight easier to use than the C string functions, for example, and std::vector is a darn sight easier to use than dynamic arrays with new. In both cases, the runtime manages the memory, and it is *easier* to use than the dangerous version.
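
            To make that comparison concrete, a side-by-side sketch (function names invented): the C version has to get the size arithmetic and the free() right by hand, while the std::string / std::vector version cleans up on its own.

            #include <cstdlib>
            #include <cstring>
            #include <string>
            #include <vector>

            // C style: manual sizing and a free() the caller must not forget.
            char* c_concat(const char* a, const char* b) {
                char* out = static_cast<char*>(std::malloc(std::strlen(a) + std::strlen(b) + 1));
                std::strcpy(out, a);
                std::strcat(out, b);
                return out;                          // caller owns this; leak if free() is missed
            }

            // C++ style: the library manages the memory.
            std::string cpp_concat(const std::string& a, const std::string& b) {
                return a + b;                        // nothing for the caller to clean up
            }

            int main() {
                char* s = c_concat("foo", "bar");
                std::free(s);                        // forget this line and you have a leak

                std::string t = cpp_concat("foo", "bar");

                std::vector<int> v{1, 2, 3};         // easier than new int[n] ... delete[]
                v.push_back(4);

                return static_cast<int>(t.size() + v.size());
            }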

            Every language has "dangerous" features that allow you to cause problems. Well-written programs in a given language don't use the dangerous features when there are equivalent ones without the problems. The same is true with C++.

            The fact that historically there are areas where C++ didn't provide a good solution, and thus there are programs that don't use the modern solution, and experience the consequential problems is not an inherent problem with the language, but it does make it harder to educate people. Reply ↓

            • Vote -1 Vote +1 John D. Bell on 2017-12-19 at 10:48:09 said: > It's about *not* using the dangerous parts. Every language has "dangerous" features that allow you to cause problems. Well-written programs in a given language don't use the dangerous features when there are equivalent ones without the problems.

              Why not use a language that doesn't have "'dangerous' features"?

              NOTES: [1] I am not saying that Go is necessarily that language – I am not even saying that any existing language is necessarily that language.
              [2] /me is being someplace between naive and trolling here. Reply ↓

              • Vote -1 Vote +1 esr on 2017-12-19 at 11:10:15 said: >Why not use a language that doesn't have "'dangerous' features"?

                Historically, it was because hardware was weak and expensive – you couldn't afford the overhead imposed by those languages. Now it's because the culture of software engineering has bad habits formed in those days and reflexively flinches from using higher-overhead safe languages, though it should not. Reply ↓

              • Vote -1 Vote +1 Paul R on 2017-12-19 at 12:30:42 said: Runtime efficiency still matters. That and the ability to innovate are the reasons I think C++ is in such wide use.

                To be provocative, I think there are two types of programmer, the ones who watch Eric Niebler on Ranges https://www.youtube.com/watch?v=mFUXNMfaciE&t=4230s and think 'Wow, I want to find out more!' and the rest. The rest can have Go and Rust

                D of course is the baby elephant in the room, worth much more attention than it gets. Reply ↓

                • Vote -1 Vote +1 Michael on 2017-12-19 at 12:53:33 said: Runtime efficiency still matters. That and the ability to innovate are the reasons I think C++ is in such wide use.

                  Because you can't get runtime efficiency in any other language?

                  Because you can't innovate in any other language? Reply ↓

                  • Vote -1 Vote +1 Paul R on 2017-12-19 at 13:50:56 said: Obviously not.

                    Our three main advantages, runtime efficiency, innovation opportunity, building on a base of millions of lines of code that run the internet and an international standard.

                    Our four main advantages

                    More seriously, C++ enabled the STL, the STL transforms the approach of its users, with much increased reliability and readability, but no loss of performance. And at the same time your old code still runs. Now that is old stuff, and STL2 is on the way. Evolution.

                    That's five. Damn Reply ↓

                  • Vote -1 Vote +1 Zygo on 2017-12-19 at 14:14:42 said: > Because you can't innovate in any other language?

                    That claim sounded odd to me too. C++ looks like the place that well-proven features of younger languages go to die and become fossilized. The standardization process would seem to require it. Reply ↓

                    • Vote -1 Vote +1 Paul R on 2017-12-20 at 06:27:47 said: Such as?

                      My thought was the language is flexible enough to enable new stuff, and has sufficient weight behind it to get that new stuff actually used.

                      Generic programming being a prime example.

                    • Vote -1 Vote +1 Michael on 2017-12-20 at 08:19:41 said: My thought was the language is flexible enough to enable new stuff, and has sufficient weight behind it to get that new stuff actually used.

                      Are you sure it's that, or is it more the fact that the standards committee has forever had a me-too kitchen-sink no-feature-left-behind obsession?

                      (Makes me wonder if it doesn't share some DNA with the featuritis that has been Microsoft's calling card for so long – they grew up together.)

                    • Vote -1 Vote +1 Paul R on 2017-12-20 at 11:13:20 said: No, because people come to the standards committee with ideas, and you cannot have too many libraries. You don't pay for what you don't use. Prime directive C++.
                    • Vote -1 Vote +1 Michael on 2017-12-20 at 11:35:06 said: and you cannot have too many libraries. You don't pay for what you don't use.

                      And this, I suspect, is the primary weakness in your perspective.

                      Is the defect rate of C++ code better or worse because of that?

                    • Vote -1 Vote +1 Paul R on 2017-12-20 at 15:49:29 said: The rate is obviously lower because I've written less code and library code only survives if it is sound. Are you suggesting that reusing code is a bad idea? Or that an indeterminate number of reimplementations of the same functionality is a good thing?

                      You're not on the most productive path to effective criticism of C++ here.

                    • Vote -1 Vote +1 Michael on 2017-12-20 at 17:40:45 said: The rate is obviously lower because I've written less code

                      Please reconsider that statement in light of how defect rates are measured.

                      Are you suggesting..

                      Arguing strawmen and words you put in someone's mouth is not the most productive path to effective defense of C++.

                      But thank you for the discussion.

                    • Vote -1 Vote +1 Paul R on 2017-12-20 at 18:46:53 said: This column is too narrow to have a decent discussion. WordPress should rewrite in C++ or I should dig out my Latin dictionary.

                      Seriously, extending the reach of libraries that become standardised is hard to criticise, extending the reach of the core language is.

                      It used to be a thing that C didn't have built in functionality for I/O (for example) rather it was supplied by libraries written in C interfacing to a lower level system interface. This principle seems to have been thrown out of the window for Go and the others. I'm not sure that's a long term win. YMMV.

                      But use what you like or what your cannot talk your employer out of using, or what you can get a job using. As long as it's not Rust.

            • Vote -1 Vote +1 Zygo on 2017-12-19 at 12:24:25 said: > Well-written programs in a given language don't use the dangerous features

              Some languages have dangerous features that are disabled by default and must be explicitly enabled prior to use. C++ should become one of those languages.

              I am very fond of the 'override' keyword in C++11, which allows me to say "I think this virtual method overrides something, and don't compile the code if I'm wrong about that." Making that assertion incorrectly was a huge source of C++ errors for me back in the days when I still used C++ virtual methods instead of lambdas. C++11 solved that problem two completely different ways: one informs me when I make a mistake, and the other makes it impossible to be wrong.

              Arguably, one should be able to annotate any C++ block and say "there shall be no manipulation of bare pointers here" or "all array access shall be bounds-checked here" or even "...and that's the default for the entire compilation unit." GCC can already emit warnings for these without human help in some cases. Reply ↓
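
              A minimal sketch of the override point above (types invented for illustration): with the keyword, a signature mismatch becomes a compile-time error instead of a silently un-overridden virtual.

              #include <iostream>

              struct Base {
                  virtual void frob(int) { std::cout << "Base::frob\n"; }
                  virtual ~Base() = default;
              };

              struct Derived : Base {
                  // A typo in the signature (long instead of int) would silently declare a
                  // brand-new virtual without 'override'; with it, the compiler rejects it:
                  // void frob(long) override { ... }   // error: does not override

                  void frob(int) override { std::cout << "Derived::frob\n"; }   // checked, and correct
              };

              int main() {
                  Derived d;
                  Base& b = d;
                  b.frob(42);   // prints "Derived::frob"
              }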

      • Vote -1 Vote +1 Kevin S. Van Horn on 2017-12-20 at 12:20:14 said: Is this a good summary of your objections to C++ smart pointers as a solution to AMM?

        1. Circular references. C++ has smart pointer classes that work when your data structures are acyclic, but it doesn't have a good solution for circular references. I'm guessing that reposurgeon's graphs are almost never DAGs.

        2. Subversion of AMM. Bare news and deletes are still available, so some later maintenance programmer could still introduce memory leaks. You could forbid the use of bare new and delete in your project, and write a check-in hook to look for violations of the policy, but that's one more complication to worry about, and it would be difficult to impossible to implement reliably due to macros and the general difficulty of parsing C++.

        3. Memory corruption. It's too easy to overrun the end of arrays, treat a pointer to a single object as an array pointer, or otherwise corrupt memory. Reply ↓
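
        For objection 1, a small sketch of how plain reference counting fails on a cycle (the Node type is invented): the two objects below keep each other alive, so neither destructor ever runs.

        #include <iostream>
        #include <memory>

        struct Node {
            std::shared_ptr<Node> other;     // strong (owning) reference
            ~Node() { std::cout << "Node destroyed\n"; }
        };

        int main() {
            auto a = std::make_shared<Node>();
            auto b = std::make_shared<Node>();
            a->other = b;
            b->other = a;                    // cycle: each keeps the other's count above zero
        }                                    // neither destructor runs: the pair is leaked

        Breaking one side of the link with std::weak_ptr restores correct cleanup, which is where the weak-pointer discussion further down this thread comes in.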

        • Vote -1 Vote +1 esr on 2017-12-20 at 15:51:55 said: >Is this a good summary of your objections to C++ smart pointers as a solution to AMM?

          That is at least a large subset of my objections, and probably the most important ones. Reply ↓

          • Vote -1 Vote +1 jim on 2017-12-22 at 07:15:20 said: It is uncommon to find a cyclic graph that cannot be rendered acyclic by weak pointers.

            C++17 cheerfully breaks backward compatibility by removing some dangerous idioms, refusing to compile code that should never have been written. Reply ↓

        • Vote -1 Vote +1 guest on 2017-12-20 at 19:12:01 said: > Circular references. C++ has smart pointer classes that work when your data structures are acyclic, but it doesn't have a good solution for circular references. I'm guessing that reposurgeon's graphs are almost never DAGs.

          General graphs with possibly-cyclical references are precisely the workload GC was created to deal with optimally, so ESR is right in a sense that reposurgeon _requires_ a GC-capable language to work. In most other programs, you'd still want to make sure that the extent of the resources that are under GC control is properly contained (which a Rust-like language would help a lot with) but it's possible that even this is not quite worthwhile for reposurgeon. Still, I'd want to make sure that my program is optimized in _other_ possible ways, especially wrt. using memory bandwidth efficiently – and Go looks like it doesn't really allow that. Reply ↓

          • Vote -1 Vote +1 esr on 2017-12-20 at 20:12:49 said: >Still, I'd want to make sure that my program is optimized in _other_ possible ways, especially wrt. using memory bandwidth efficiently – and Go looks like it doesn't really allow that.

            Er, there's any language that does allow it? Reply ↓

            • Vote -1 Vote +1 Jeff Read on 2017-12-27 at 20:58:43 said: Yes -- ahem -- C++. That's why it's pretty much the only language taken seriously by game developers. Reply ↓
        • Vote -1 Vote +1 Zygo on 2017-12-21 at 12:56:20 said: > I'm guessing that reposurgeon's graphs are almost never DAGs

          Why would reposurgeon's graphs not be DAGs? Some exotic case that comes up with e.g. CVS imports that never arises in a SVN->Git conversion (admittedly the only case I've really looked deeply at)?

          Git repos, at least, are cannot-be-cyclic-without-astronomical-effort graphs (assuming no significant advances in SHA1 cracking and no grafts–and even then, all you have to do is detect the cycle and error out). I don't know how a generic revision history data structure could contain a cycle anywhere even if I wanted to force one in somehow. Reply ↓

          • Vote -1 Vote +1 esr on 2017-12-21 at 15:13:18 said: >Why would reposurgeon's graphs not be DAGs?

            The repo graph is, but a lot of the structures have reference loops for fast lookup. For example, a blob instance has a pointer back to the containing repo, as well as being part of the repo through a pointer chain that goes from the repo object to a list of commits to a blob.

            Without those loops, navigation in the repo structure would get very expensive. Reply ↓

            • Vote -1 Vote +1 guest on 2017-12-21 at 15:22:32 said: Aren't these inherently "weak" pointers though? In that they don't imply ownership/live data, whereas the "true" DAG references do? In that case, and assuming you can be sufficiently sure that only DAGs will happen, refcounting (ideally using something like Rust) would very likely be the most efficient choice. No need for a fully-general GC here. Reply ↓
              • Vote -1 Vote +1 esr on 2017-12-21 at 15:34:40 said: >Aren't these inherently "weak" pointers though? In that they don't imply ownership/live data

                I think they do. Unless you're using "ownership" in some sense I don't understand. Reply ↓

                • Vote -1 Vote +1 jim on 2017-12-22 at 07:31:39 said: A weak pointer does not own the object it points to. A shared pointer does.

                  When there are zero shared pointers pointing to an object, it gets freed, regardless of how many weak pointers are pointing to it.

                  Shared pointers and unique pointers own, weak pointers do not own. Reply ↓

            • Vote -1 Vote +1 jim on 2017-12-22 at 07:23:35 said: In C++11, one would implement a pointer back to the owning object as a weak pointer. Reply ↓
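
              A sketch of that suggestion, loosely modelled on the repo/blob example discussed above; the type names are invented for illustration and are not reposurgeon's actual classes. The repo owns its blobs through shared_ptr, each blob points back through weak_ptr, and there is no ownership cycle.

              #include <memory>
              #include <vector>

              struct Repo;                                     // forward declaration

              struct Blob {
                  std::weak_ptr<Repo> owner;                   // back-pointer: does not keep the repo alive
              };

              struct Repo : std::enable_shared_from_this<Repo> {
                  std::vector<std::shared_ptr<Blob>> blobs;    // owning pointers: the repo keeps blobs alive

                  std::shared_ptr<Blob> add_blob() {
                      auto b = std::make_shared<Blob>();
                      b->owner = shared_from_this();           // weak, so no reference cycle
                      blobs.push_back(b);
                      return b;
                  }
              };

              int main() {
                  auto repo = std::make_shared<Repo>();
                  auto blob = repo->add_blob();

                  if (auto r = blob->owner.lock()) {           // cheap upward navigation when needed
                      // use *r here
                  }
              }                                                // repo and blobs are all freed normally
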
      • Vote -1 Vote +1 jim on 2017-12-23 at 00:40:36 said:

        > How many times do I have to repeat "reposurgeon would never have been written under that constraint" before somebody who claims LISP experience gets it?

        Maybe it is true, but since you do not understand, or particularly wish to understand, Rust scoping, ownership, and zero-cost abstractions, or C++ weak pointers, we hear you say that you would never have written reposurgeon under that constraint.

        Which, since no one else is writing reposurgeon, is an argument of sorts, but not one that those who do understand weak pointers and Rust scopes find all that convincing.

        I am inclined to think that those who write C++98 (which is the gcc default) could not write reposurgeon under that constraint, but those who write C++11 could write reposurgeon under that constraint, and except for some rather unintelligible, complicated, and twisted class constructors invoking and enforcing the C++11 automatic memory management system, it would look very similar to your existing python code. Reply ↓

        • Vote -1 Vote +1 esr on 2017-12-23 at 02:49:13 said: >since you do not understand, or particularly wish to understand, Rust scoping, ownership, and zero cost abstractions, or C++ weak pointers

          Thank you, I understand those concepts quite well. I simply prefer to apply them in languages not made of barbed wire and landmines. Reply ↓

          • Vote -1 Vote +1 guest on 2017-12-23 at 07:11:48 said: I'm sure that you understand the _gist_ of all of these notions quite accurately, and this alone is of course quite impressive for any developer – but this is not quite the same as being comprehensively aware of their subtler implications. For instance, both James and I have suggested to you that backpointers implemented as an optimization of an overall DAG structure should be considered "weak" pointers, which can work well alongside reference counting.

            For that matter, I'm sure that Rustlang developers share your aversion to "barbed wire and landmines" in a programming language. You've criticized Rust before (not without some justification!) for having half-baked async-IO facilities, but I would think that reposurgeon does not depend significantly on async-IO. Reply ↓

            • Vote -1 Vote +1 esr on 2017-12-23 at 08:14:25 said: >For instance, both James and I have suggested to you that backpointers implemented as an optimization of an overall DAG structure should be considered "weak" pointers, which can work well alongside reference counting.

              Yes, I got that between the time I wrote my first reply and JAD brought it up. I've used Python weakrefs in similar situations. I would have seemed less dense if I'd had more sleep at the time.

              >For that matter, I'm sure that Rustlang developers share your aversion to "barbed wire and landmines" in a programming language.

              That acidulousness was mainly aimed at C++. Rust, if it implements its theory correctly (a point on which I am willing to be optimistic) doesn't have C++'s fatal structural flaws. It has problems of its own which I won't rehash as I've already anatomized them in detail. Reply ↓

    • Vote -1 Vote +1 Garrett on 2017-12-21 at 11:16:25 said: There's also development cost. I suspect that using e.g. Python drastically reduces the cost of developing the code. And since most repositories are small enough that Eric hasn't noticed accidental O(n**2) or O(n**3) algorithms until recently, it's pretty obvious that execution time just plainly doesn't matter. Migration is going to involve a temporary interruption to service and is going to be performed roughly once per repo. The amount of time involved in just stopping the e.g. SVN service and bringing up the e.g. Git hosting service is likely to be longer than the conversion time for the median conversion operation.

      So in these cases, most users don't care about the run-time, and outside of a handful of examples, wouldn't brush up against the CPU or memory limitations of a whitebox PC.

      This is in contrast to some other cases in which I've worked such as file-serving (where latency is measured in microseconds and is actually counted), or large data processing (where wasting resources reduces the total amount of stuff everybody can do). Reply ↓

  11. Vote -1 Vote +1 David Collier-Brown on 2017-12-18 at 20:20:59 said: Hmmn, I wonder if the virtual memory of Linux (and Unix, and Multics) is really the OS equivalent of the automatic memory management of application programs? One works in pages, admittedly, not bytes or groups of bytes, but one could argue that the sub-page stuff is just expensive anti-internal-fragmentation plumbing

    –dave
    [In polite Canajan, "I wonder" is the equivalent of saying "Hey everybody, look at this" in the US. And yes, that's also the redneck's famous last words.] Reply ↓

  12. Vote -1 Vote +1 John Moore on 2017-12-18 at 22:20:21 said: In my experience, with most of my C systems programming in protocol stacks and transaction processing infrastructure, the MM problem has been one of code, not data structure complexity. The memory is often allocated by code which first encounters the need, and it is then passed on through layers and at some point, encounters code which determines the memory is no longer needed. All of this creates an implicit contract that he who is handed a pointer to something (say, a buffer) becomes responsible for disposing of it. But, there may be many places where that is needed – most of them in exception handling.

    That creates many, many opportunities for someone to simply forget to release it. Also, when the code is handed off to someone unfamiliar, they may not even know about the contract. Crises (or bad habits) lead to failures to document this stuff (or to create variable names or clear conventions that suggest one should look for the contract).

    I've also done a bunch of stuff in Java, both applications level (such as a very complex Android app with concurrency) and some infrastructural stuff that wasn't as performance constrained. Of course, none of this was hard real-time although it usually at least needed to provide response within human limits, which GC sometimes caused trouble with. But, the GC was worth it, as it substantially reduced bugs which showed up only at runtime, and it simplified things.

    On the side, I write hard real time stuff on tiny, RAM constrained embedded systems – PIC18F series stuff (with the most horrible machine model imaginable for such a simple little beast). In that world, there is no malloc used, and shouldn't be. It's compile time created buffers and structures for the most part. Fortunately, the applications don't require advanced dynamic structures (like symbol tables) where you need memory allocation. In that world, AMM isn't an issue. Reply ↓

    • Vote -1 Vote +1 Michael on 2017-12-18 at 22:47:26 said: PIC18F series stuff (with the most horrible machine model imaginable for such a simple little beast)
      LOL. Glad I'm not the only one who thought that. Most of my work was on the 16F – after I found out what it took to do a simple table lookup, I was ready for a stiff drink. Reply ↓
    • Vote -1 Vote +1 esr on 2017-12-18 at 23:45:03 said: >In my experience, with most of my C systems programming in protocol stacks and transaction processing infrastructure, the MM problem has been one of code, not data structure complexity.

      I believe you. I think I gravitate to problems with data-structure complexity because, well, that's just the way my brain works.

      But it's also true that I have never forgotten one of the earliest lessons I learned from Lisp. When you can turn code complexity into data structure complexity, that's usually a win. Or to put it slightly differently, dumb code munching smart data beats smart code munching dumb data. It's easier to debug and reason about. Reply ↓

      • Vote -1 Vote +1 Jeremy on 2017-12-19 at 01:36:47 said: Perhaps it's because my coding experience has mostly been short python scripts of varying degrees of quick-and-dirtiness, but I'm having trouble grokking the difference between smart code/dumb data vs dumb code/smart data. How does one tell the difference?

        Now, as I type this, my intuition says it's more than just the scary mess of nested if statements being in the class definition for your data types, as opposed to the function definitions which munch on those data types; a scary mess of nested if statements is probably the former. On the latter, though, I'm coming up blank.

        Perhaps a better question than my one above: what codebases would you recommend for study which would be good examples of the latter (besides reposurgeon)? Reply ↓

        • Vote -1 Vote +1 jsn on 2017-12-19 at 02:35:48 said: I've always expressed it as "smart data + dumb logic = win".

          You almost said my favorite canned example: a big conditional block vs. a lookup table. The LUT can replace all the conditional logic with structured data and shorter (simpler, less bug-prone, faster, easier to read) unconditional logic that merely does the lookup. Concretely in Python, imagine a long list of "if this, assign that" replaced by a lookup into a dictionary. It's still all "code", but the amount of program logic is reduced.

          So I would answer your first question by saying look for places where data structures are used. Then guesstimate how complex some logic would have to be to replace that data. If that complexity would outstrip that of the data itself, then you have a "smart data" situation. Reply ↓
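
          The same idea sketched in C++ (to stay with one language for the examples in this thread; the commenter's Python dictionary corresponds roughly to std::unordered_map here, and the status codes are invented): a chain of "if this, assign that" collapses into one table plus one lookup.

          #include <string>
          #include <unordered_map>

          // "Smart code, dumb data": branching logic spread through the code.
          int error_code_branchy(const std::string& status) {
              if (status == "ok")        return 0;
              else if (status == "warn") return 1;
              else if (status == "fail") return 2;
              else                       return -1;
          }

          // "Dumb code, smart data": the knowledge lives in one table.
          int error_code_table(const std::string& status) {
              static const std::unordered_map<std::string, int> table = {
                  {"ok", 0}, {"warn", 1}, {"fail", 2},
              };
              auto it = table.find(status);
              return it != table.end() ? it->second : -1;
          }

          int main() {
              return error_code_branchy("warn") + error_code_table("fail");
          }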

          • Vote -1 Vote +1 Emanuel Rylke on 2017-12-19 at 04:07:58 said: To expand on this, it can even be worthwhile to use complex code to generate that dumb lookup table. This is so because the code generating the lookup table runs before, and therefore separately from, the code using the LUT. This means that both can be considered in isolation more often, bringing the combined complexity closer to m+n than m*n. Reply ↓
          • Vote -1 Vote +1 TheDividualist on 2017-12-19 at 05:39:39 said: Admittedly I have an SQL hammer and think everything is a nail, but why wouldn't *every* program include a database, like the SQLite that even comes bundled with Python distros, no sweat, and put that lookup table into it, not in a dictionary inside the code?

            Of course the more you go in this direction the more problems you will have with unit testing, in case you want to do such a thing. Generally we SQL-hammer guys don't do that much, because in theory any function can read any part of the database, making the whole database the potential "inputs" for every function.

            That is pretty lousy design, but I think good design patterns for separation of concerns and unit testability are not yet really known for database-driven software. For example, model-view-controller claims to be one, but actually fails, as these can and should call each other. So you have in the "customer" model or controller a function to check if the customer has unpaid invoices, and decide to call it from the "sales order" controller or model to ensure such customers get no new orders registered. In the same "sales order" controller you also check the "product" model or controller that it is not a discontinued product, check the "user" model or controller that they have the proper rights for this operation, check the "state" controller that you are even offering this product in that state, and so on for a gazillion other things; so if you wanted to automatically unit test that "register a new sales order" function you would have a potential "input" space of half the database. And all that with good separation-of-concerns MVC patterns. So I think no one has really figured this out yet? Reply ↓

          • Vote -1 Vote +1 guest on 2017-12-20 at 19:21:13 said: There's a reason not to do this if you can help it – dispatching through a non-constant LUT is way slower than running easily-predicted conditionals. Like, an order of magnitude slower, or even worse. Reply ↓
        • Vote -1 Vote +1 esr on 2017-12-19 at 07:45:38 said: >Perhaps a better question than my one above: what codebases would you recommend for study which would be good examples of the latter (besides reposurgeon)?

          I do not have an instant answer, sorry. I'll hand that question to my backbrain and hope an answer pops up. Reply ↓

      • Vote -1 Vote +1 Jon Brase on 2017-12-20 at 00:54:15 said: When you can turn code complexity into data structure complexity, that's usually a win. Or to put it slightly differently, dumb code munching smart data beats smart code munching dumb data. It's easier to debug and reason about.

        Doesn't "dumb code munching smart data" really reduce to "dumb code implementing a virtual machine that runs a different sort of dumb code to munch dumb data"? Reply ↓

        • Vote -1 Vote +1 jim on 2017-12-22 at 20:25:07 said: "Smart Data" is effectively a domain specific language.

          A domain specific language is easier to reason about within its proper domain, because it lowers the difference between the problem and the representation of the problem. Reply ↓

  13. Vote -1 Vote +1 wisd0me on 2017-12-19 at 02:35:10 said: I wonder why you talked so much about inventing an AMM layer, but said nothing about the GC which is available for the C language. Why do you need to invent an AMM layer in the first place, instead of just using the GC?
    For example, Bigloo Scheme and the GNU Objective-C runtime have used it successfully, among many others. Reply ↓
  14. Vote -1 Vote +1 Walter Bright on 2017-12-19 at 04:53:13 said: Maybe D, with its support for mixed GC / manual memory allocation is the right path after all! Reply ↓
  15. Vote -1 Vote +1 Jeremy Bowers on 2017-12-19 at 10:40:24 said: Rust seems like a good fit for the cases where you need the low latency (and other speed considerations) and can't afford the automation. Firefox finally got to benefit from that in the Quantum release, and there's more coming. I wouldn't dream of writing a browser engine in Go, let alone a highly-concurrent one. When you're willing to spend on that sort of quality, Rust is a good tool to get there.

    But the very characteristics necessary to be good in that space will prevent it from becoming the "default language" the way C was for so long. As much fun as it would be to fantasize about teaching Rust as a first language, I think that's crazy talk for anything but maybe MIT. (And I'm not claiming it's a good idea even then; just saying that's the level of student it would take for it to even be possible.) Dunno if Go will become that "default language" but it's taking a decent run at it; most of the other contenders I can think of at the moment have the short-term-strength-yet-long-term-weakness of being tied to a strong platform already. (I keep hearing about how Swift is going to be usable off of Apple platforms real soon now, just around the corner, just a bit longer.) Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-19 at 17:30:07 said: >Dunno if Go will become that "default language" but it's taking a decent run at it; most of the other contenders I can think of at the moment have the short-term-strength-yet-long-term-weakness of being tied to a strong platform already.

      I really think the significance of Go being an easy step up from C cannot be overestimated – see my previous blogging about the role of inward transition costs.

      Ken Thompson is insidiously clever. I like channels and goroutines and := but the really consequential hack in Go's design is the way it is almost perfectly designed to co-opt people like me – that is, experienced C programmers who have figured out that ad-hoc AMM is a disaster area. Reply ↓

      • Vote -1 Vote +1 Jeff Read on 2017-12-20 at 08:58:23 said: Go probably owes as much to Rob Pike and Phil Winterbottom for its design as it does to Thompson -- because it's basically Alef with the feature whose lack, according to Pike, basically killed Alef: garbage collection.

        I don't know that it's "insidiously clever" to add concurrency primitives and GC to a C-like language, as concurrency and memory management were the two obvious banes of every C programmer's existence back in the 90s -- so if Go is "insidiously clever", so is Java. IMHO it's just smart, savvy design which is no small thing; languages are really hard to get right. And in the space Go thrives in, Go gets a lot right. Reply ↓

  16. Vote -1 Vote +1 John G on 2017-12-19 at 14:01:09 said: Eric, have you looked into D *lately*? These days:

    * it's fully open source (Boost license),

    * there's [three back-ends to choose from]( https://dlang.org/download.html ),

    * there's [exactly one standard library]( https://dlang.org/phobos/index.html ), and

    * it's got a standard package repository and management tool ([dub]( https://code.dlang.org/ )). Reply ↓

  17. Vote -1 Vote +1 Doctor Mist on 2017-12-19 at 18:28:54 said:

    As the greenspunity rises, you are likely to find that more and more of your effort and defect chasing is related to the AMM layer, and proportionally less goes to the application logic. Redoubling your effort, you increasingly miss your aim.

    Even when you're merely at the edge of this trap, your defect rates will be dominated by issues like double-free errors and malloc leaks. This is commonly the case in C/C++ programs of even low greenspunity.

    Interesting. This certainly fits my experience.

    Has anybody looked for common patterns in whatever parasitic distractions plague you when you start to reach the limits of a language with AMM? Reply ↓

    • Vote -1 Vote +1 Dave taht on 2017-12-23 at 10:44:24 said: The biggest thing that I hate about go is the

      result, err := whatever()
      if err != nil { dosomethingtofixit() }

      abstraction.

      I went through a phase earlier this year where I tried to eliminate the concept of an errno entirely (and failed, in the end reinventing lisp, badly), but sometimes I still think – to the tune of the Ride of the Valkyries – "Kill the errno, kill the errno, kill the ERRno, kill the err!" Reply ↓

    • Vote -1 Vote +1 jim on 2017-12-23 at 23:37:46 said: I have on several occasions been part of big projects using languages with AMM, many programmers, much code, and they hit scaling problems and died, but it is not altogether easy to explain what the problem was.

      But it was very clear that the fact that I could get a short program, or a quick fix up and running with an AMM much faster than in C or C++ was failing to translate into getting a very large program containing far too many quick fixes up and running. Reply ↓

  18. Vote -1 Vote +1 François-René Rideau on 2017-12-19 at 21:39:05 said: Insightful, but I think you are missing a key point about Lisp and Greenspunning.

    AMM is not the only thing that Lisp brings to the table when it comes to dealing with Greenspunity. Actually, the whole point of Lisp is that there is not _one_ conceptual barrier to development, or a few, or even a lot, but that there are _arbitrarily_many_, and that is why you need to be able to extend your language through _syntactic_abstraction_ to build DSLs so that every abstraction layer can be written in a language that is fit for that layer. [Actually, traditional Lisp is missing the fact that DSL tooling depends on _restriction_ as well as _extension_; but Haskell types and Racket languages show the way forward in this respect.]

    That is why all languages without macros, even with AMM, remain "blub" to those who grok Lisp. Even in Go, they reinvent macros, just very badly, with various preprocessors to cope with the otherwise very low abstraction ceiling.

    (Incidentally, I wouldn't say that Rust has no AMM; instead it has static AMM. It also has some support for macros.) Reply ↓

    • Vote -1 Vote +1 Patrick Maupin on 2017-12-23 at 18:44:27 said: " static AMM" ???

      WTF sort of abuse of language is this?

      Oh, yeah, rust -- the language developed by Humpty Dumpty acolytes:

      https://github.com/rust-lang/rust/pull/25640

      You just can't make this stuff up. Reply ↓

      • Vote -1 Vote +1 jim on 2017-12-23 at 22:02:18 said: Static AMM means that the compiler analyzes your code at compile time, and generates the appropriate frees.

        Static AMM means that the compiler automatically does what you manually do in C, and semi-automatically do in C++11. Reply ↓

        • Vote -1 Vote +1 Patrick Maupin on 2017-12-24 at 13:36:35 said: To the extent that the compiler's insertion of calls to free() can be easily deduced from the code without special syntax, the insertion is merely an optimization of the sort of standard AMM semantics that, for example, a PyPy compiler could do.

          To the extent that the compiler's ability to insert calls to free() requires the sort of special syntax about borrowing that means that the programmer has explicitly described a non-stack-based scope for the variable, the memory management isn't automatic.

          Perhaps this is why a google search for "static AMM" doesn't return much. Reply ↓

          • Vote -1 Vote +1 Jeff Read on 2017-12-27 at 03:01:19 said: I think you fundamentally misunderstand how borrowing works in Rust.

            In Rust, as in C++ or even C, references have value semantics. That is to say any copies of a given reference are considered to be "the same". You don't have to "explicitly describe a non-stack-based scope for the variable", but the hitch is that there can be one, and only one, copy of the original reference to a variable in use at any time. In Rust this is called ownership, and only the owner of an object may mutate it.

            Where borrowing comes in is that functions called by the owner of an object may borrow a reference to it. Borrowed references are read-only, and may not outlast the scope of the function that does the borrowing. So everything is still scope-based. This provides a convenient way to write functions in such a way that they don't have to worry about where the values they operate on come from or unwrap any special types, etc.

            If you want the scope of a reference to outlast the function that created it, the way to do that is to use a std::rc::Rc, which provides a regular, reference-counted pointer to a heap-allocated object, the same as Python.

            The borrow checker checks all of these invariants for you and will flag an error if they are violated. Since worrying about object lifetimes is work you have to do anyway lest you pay a steep price in performance degradation or resource leakage, you win because the borrow checker makes this job much easier.

            Rust does have explicit object lifetimes, but where these are most useful is to solve the problem of how to have structures, functions, and methods that contain/return values of limited lifetime. For example, declaring a struct Foo<'a> { x: &'a i32 } means that any instance of struct Foo is valid only as long as the borrowed reference inside it is valid. The borrow checker will complain if you attempt to use such a struct outside the lifetime of the internal reference. Reply ↓

      • Vote -1 Vote +1 Doctor Locketopus on 2017-12-27 at 00:16:54 said: Good Lord (not to be confused with Audre Lorde). If I weren't already convinced that Rust is a cult, that would do it.

        However, I must confess to some amusement about Karl Marx and Michel Foucault getting purged (presumably because Dead White Male). Reply ↓

      • Vote -1 Vote +1 Jeff Read on 2017-12-27 at 02:06:40 said: This is just a cost of doing business. Hacker culture has, for decades, tried to claim it was inclusive and nonjudgemental and yada yada -- "it doesn't matter if you're a brain in a jar or a superintelligent dolphin as long as your code is good" -- but when it comes to actually putting its money where its mouth is, hacker culture has fallen far short. Now that's changing, and one of the side effects of that is how we use language and communicate internally, and to the wider community, has to change.

        But none of this has to do with automatic memory management. In Rust, management of memory is not only fully automatic, it's "have your cake and eat it too": you have to worry about neither releasing memory at the appropriate time, nor the severe performance costs and lack of determinism inherent in tracing GCs. You do have to be more careful in how you access the objects you've created, but the compiler will assist you with that. Think of the borrow checker as your friend, not an adversary. Reply ↓

  19. Vote -1 Vote +1 John on 2017-12-20 at 05:03:22 said: Present-day C++ is far from the C++ that was first standardized in 1998. You should *never* be manually managing memory in present-day C++. You need a dynamically sized array? Use std::vector. You need an ad-hoc graph? Use std::shared_ptr and std::weak_ptr.

    Any code I see which uses new or delete, malloc or free, fails code review. Reply ↓

  20. Vote -1 Vote +1 Garrett on 2017-12-21 at 11:24:41 said: What makes you refer to this as a systems programming project? It seems to me to be a standard data-processing problem. Data in, data out. Sure, it's hella complicated and you're brushing up against several different constraints.

    In contrast to what I think of as systems programming, you have automatic memory management. You aren't working in kernel-space. You aren't modifying the core libraries or doing significant programmatic interface design.

    I'm missing something in your semantic usage and my understanding of the solution implementation. Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-21 at 15:08:28 said: >What makes you refer to this as a systems programming project?

      Never user-facing. Often scripted. Development-support tool. Used by systems programmers.

      I realize we're in an area where the "systems" vs. "application" distinction gets a little tricky to make. I hang out in that border zone a lot and have thought about this. Are GPSD and ntpd "applications"? Is giflib? Sure, they're out-of-kernel, but no end-user will ever touch them. Is GCC an application? Is apache or named?

      Inside the kernel is clearly systems. Outside it, I think the "systems" vs. "application" distinction is more about the skillset being applied and who your expected users are than anything else.

      I would not be upset at anyone who argued for a different distinction. I think you'll find the definitional questions start to get awfully slippery when you poke at them. Reply ↓

    • Vote -1 Vote +1 Jeff Read on 2017-12-24 at 03:21:34 said:

      What makes you refer to this as a systems programming project? It seems to me to be a standard data-processing problem. Data in, data out. Sure, it's hella complicated and you're brushing up against several different constraints.

      When you're talking about Unix, there is often considerable overlap between "systems" and "application" programming because the architecture of Unix, with pipes, input and output redirection, etc., allowed for essential OS components to be turned into simple, data-in-data-out user-space tools. The functionality of ls , cp , rm , or cat , for instance, would have been built into the shell of a pre-Unix OS (or many post-Unix ones). One of the great innovations of Unix is to turn these units of functionality into standalone programs, and then make spawning processes cheap enough to where using them interactively from the shell is easy and natural. This makes extending the system, as accessed through the shell, easy: just write a new, small program and add it to your PATH .

      So yeah, when you're working in an environment like Unix, there's no bright-line distinction between "systems" and "application" code, just like there's no bright-line distinction between "user" and "developer". Unix is a tool for facilitating humans working with computers. It cannot afford to discriminate, lest it lose its Unix-nature. (This is why Linux on the desktop will never be a thing, not without considerable decay in the facets of Linux that made it so great to begin with.) Reply ↓

  21. Vote -1 Vote +1 Peter Donis on 2017-12-21 at 22:15:44 said: @tz: you aren't going to get AMM on the current Arduino variants. At least not easily.

    At the upper end you can; the Yun has 64 MB, as do the Dragino variants. You can run OpenWRT on them and use its Python (although the latest OpenWRT release, Chaos Calmer, significantly increased its storage footprint from older firmware versions), which runs fine in that memory footprint, at least for the kinds of things you're likely to do on this type of device. Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-21 at 22:43:57 said: >You can run OpenWRT on them and use its Python

      I'd be comfortable in that environment, but if we're talking AMM languages Go would probably be a better match for it. Reply ↓

  22. Vote -1 Vote +1 jim on 2017-12-22 at 06:37:36 said: C++11 has an excellent automatic memory management layer. Its only defect is that it is optional, for backwards compatibility with C and C++98 (though it really is not all that compatible with C++98)

    And, being optional, you are apt to take the short cut of not using it, which will bite you.

    Rust is, more or less, C++17 with the automatic memory management layer being almost mandatory. Reply ↓

  23. Vote -1 Vote +1 jim on 2017-12-22 at 20:39:27 said:

    > you are likely to find that more and more of your effort and defect chasing is related to the AMM layer

    But the AMM layer for C++ has already been written and debugged, and standards and idioms exist for integrating it into your classes and type definitions.

    Once built into your classes, you are then free to write code as if in a fully garbage collected language in which all types act like ints.

    C++14, used correctly, is a metalanguage for writing domain specific languages.

    Now sometimes building your classes in C++ is weird, nonobvious, and apt to break for reasons that are difficult to explain, but done correctly all the weird stuff is done once in a small number of places, not spread all over your code Reply ↓
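
    A sketch of what "built into your classes" can mean in practice, with invented types that are not anyone's actual code: if the members are all standard RAII types, the compiler-generated destructor and copy operations already do the right thing (the so-called rule of zero), and calling code never touches new or delete.

    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    // All members manage their own memory, so this class needs no destructor,
    // no copy constructor, and no assignment operator: the defaults are correct.
    class Commit {
    public:
        std::string comment;
        std::vector<std::string> parents;
        std::map<std::string, std::shared_ptr<std::string>> blobs;
    };

    int main() {
        Commit a;
        a.comment = "initial revision";
        a.blobs["README"] = std::make_shared<std::string>("hello");

        Commit b = a;        // copied member-by-member, no manual bookkeeping
        return static_cast<int>(b.blobs.size());
    }                        // everything is released automatically here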

  24. Vote -1 Vote +1 Dave taht on 2017-12-22 at 22:31:40 said: Linux is the best C library ever created. And it's often, terrifying. Things like RCU are nearly impossible for mortals to understand. Reply ↓
  25. Vote -1 Vote +1 Alex Beamish on 2017-12-23 at 11:18:48 said: Interesting thesis .. it was the 'extra layer of goodness' surrounding file operations, and not memory management, that persuaded me to move from C to Perl about twenty years ago. Once I'd moved, I also appreciated the memory management in the shape of 'any size you want' arrays, hashes (where had they been all my life?) and autovivification -- on the spot creation of array or hash elements, at any depth.

    While C is a low-level language that masquerades as a high-level language, the original intent of the language was to make writing assembler easier and faster. It can still be used for that, when necessary, leaving the more complicated matters to higher level languages. Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-23 at 14:36:26 said: >Interesting thesis .. it was the 'extra layer of goodness' surrounding file operations, and not memory management, that persuaded me to move from C to Perl about twenty years ago.

      Pretty much all that goodness depends on AMM and could not be implemented without it. Reply ↓

    • Vote -1 Vote +1 jim on 2017-12-23 at 22:17:39 said: Autovivification saves you much effort, thought, and coding, because most of the time the perl interpreter correctly divines your intention, and does a pile of stuff for you, without you needing to think about it.

      And then it turns around and bites you because it does things for you that you did not intend or expect.

      The larger the program, and the longer you are keeping the program around, the more of a problem it is. If you are writing a quick one-off script to solve some specific problem, you are the only person who is going to use the script, and are then going to throw the script away, fine. If you are writing a big program that will be used by lots of people for a long time, autovivification is going to turn around and bite you hard, as are lots of similar perl features where perl makes life easy for you by doing stuff automagically.

      With the result that there are in practice very few big perl programs used by lots of people for a long time, while there are an immense number of very big C and C++ programs used by lots of people for a very long time.

      On esr's argument, we should never be writing big programs in C any more, and yet, we are.

      I have been part of big projects with many engineers using languages with automatic memory management. I noticed I could get something up and running in a fraction of the time that it took in C or C++.

      And yet, somehow, strangely, the projects as a whole never got successfully completed. We found ourselves fighting weird shit done by the vast pile of run time software that was invisibly under the hood automatically doing stuff for us. We would be fighting mysterious and arcane installation and integration issues.

      This, my personal experience, is the exact opposite of the outcome claimed by esr.

      Well, that was perl, Microsoft Visual Basic, and PHP. Maybe Java scales better.

      But perl, Microsoft visual basic, and PHP did not scale. Reply ↓

      • Vote -1 Vote +1 esr on 2017-12-23 at 22:41:15 said: >But perl, Microsoft visual basic, and PHP did not scale.

        Oh, dear Goddess, no wonder. All three of those languages are notorious sinkholes – they're where "maintainability" goes to die a horrible and lingering death.

        Now I understand your fondness for C++ better. It's bad, but those are way worse at any large scale. AMM isn't enough to keep you out of trouble if the rest of the language is a tar-pit. Those three are full of the bones of drowned devops victims.

        Yes, Java scales better. CPython would too from a pure maintainability standpoint, but it's too slow for the kind of deployment you're implying – on the other hand, PyPy might not be, I'm finding the JIT compilation works extremely well and I get runtimes I think are within 2x or 3x of C. Go would probably be da bomb. Reply ↓

        • Vote -1 Vote +1 esr on 2017-12-23 at 23:35:29 said: I wrote:

          >All three of those languages are notorious sinkholes

          You know when you're in deep shit? You're in deep shit when your figure of merit is long-term maintainability and Perl is the least bad alternative.

          *shudder* Reply ↓

        • Vote -1 Vote +1 Jeff Read on 2017-12-24 at 02:56:28 said:

          Oh, dear Goddess, no wonder. All three of those languages are notorious sinkholes – they're where "maintainability" goes to die a horrible and lingering death.

          Can confirm -- Visual Basic (6 and VBA) is a toilet. An absolute cesspool. It's full of little gotchas -- such as non-short-circuiting AND and OR operators (there are no differentiated bitwise/logical operators) and the cryptic Dir() function that exactly mimics the broken semantics of MS-DOS's directory-walking system call -- that betray its origins as an extended version of Microsoft's 8-bit BASIC interpreter (the same one used to write toy programs on TRS-80s and Commodores from a bygone era), and prevent you from writing programs in a way that feels natural and correct if you've been exposed to nearly anything else.

          VB is a language optimized to a particular workflow -- and like many languages so optimized as long as you color within the lines provided by the vendor you're fine, but it's a minefield when you need to step outside those lines (which happens sooner than you may think). And that's the case with just about every all-in-one silver-bullet "solution" I've seen -- Rails and PHP belong in this category too.

          It's no wonder the cuddly new Microsoft under Nadella is considering making Python a first-class extension language for Excel (and perhaps other Office apps as well).

          Visual Basic .NET is something quite different -- a sort of Microsoft-flavored Object Pascal, really. But I don't know of too many shops actually using it; if you're targeting the .NET runtime it makes just as much sense to just use C#.

          As for Perl, it's possible to write large, readable, maintainable code bases in object-oriented Perl. I've seen it done. BUT -- you have to be careful. You have to establish coding standards, and if you come across the stereotype of "typical, looks-like-line-noise Perl code" then you have to flunk it at code review and never let it touch prod. (Do modern developers even know what line noise is, or where it comes from?) You also have to choose your libraries carefully, ensuring they follow a sane semantics that doesn't require weirdness in your code. I'd much rather just do it in Python. Reply ↓

          • Vote -1 Vote +1 TheDividualist on 2017-12-27 at 11:24:59 said: VB.NET is unused in the kind of circles *you know* because these are competitive and status-conscious circles, and anything with BASIC in the name is so obviously low-status and just looks so bad on the resume that it makes sense to add that 10-20% more effort and learn C#. C# sounds a whole lot more high-status: it has C in the name, so it obviously looks like being a Real Programmer on the resume.

            What you don't know is what happens outside the circles where professional programmers compete for status and jobs.

            I can report that there are many "IT guys" who are not in these circles; they don't have the intra-programmer social life, hence no status concerns, nor do they ever intend to apply for Real Programmer jobs. They are just rural or non-first-world guys who grew up liking computers, took a generic "IT guy" job at some business in a small town, and there taught themselves Excel VBScript when the need arose to automate some reports, and then VB.NET when it was time to try to build some actual application for in-house use. They like it because it looks less intimidating – it sends out those "not only meant for Real Programmers" vibes.

            I wish we lived in a world where Python would fill that non-intimidating amateur-friendly niche, as it could do that job very well, but we are already on a hell of a path dependence. Seriously, Bill Gates and Joel Spolsky got it seriously right when they made Excel scriptable. The trick is how to provide a smooth transition between non-programming and programming.

            One classic way is that you are a sysadmin, you use the shell, then you automate tasks with shell scripts, then you graduate to Perl.

            One, relatively new way is that you are a web designer, write HTML and CSS, and then slowly you get dragged, kicking and screaming into JavaScript and PHP.

            The genius was that they realized that a spreadsheet is basically modern paper. It is the most basic and universal tool of the office drone. I print all my automatically generated reports into xlsx files, simply because for me it is the "paper" of 2017, you can view it on any Android phone, and unlike PDF and like paper you can interact and work with the figures, like add other numbers to them.

            So it was automating the spreadsheet, the VBScript Excel macro that led the way from not-programming to programming for an immense number of office drones, who are far more numerous than sysadmins and web designers.

            Aaand I think it was precisely because of those microcomputers, like the Commodore. Out of every 100 office drones in 1991 or so, 1 or 2 had entertained themselves in 1987 typing in some BASIC programs published in computer mags. So when they were told Excel was programmable with a form of BASIC, they were not too intimidated.

            This created such a giant path dependency that still if you want to sell a language to millions and millions of not-Real Programmers you have to at least make it look somewhat like Basic.

            I think from this angle it was a masterwork of creating and exploiting path dependency. Put BASIC on microcomputers. Have a lot of hobbyists learn it for fun. Create the most universal office tool. Let it be programmable in a form of BASIC – you can just work on the screen, let it generate a macro and then you just have to modify it. Mostly copy-pasting, not real programming. But you slowly pick up some programming idioms. Then the path curves up to VB and then VB.NET.

            To challenge it all, one needs to find an application area as important as number crunching and reporting in an office: Excel is basically electronic paper from this angle, and it is hard to come up with something like this. All our nearly computer-illiterate salespeople use it. (90% of the use beyond just typing data in a grid is using the auto-sum function.) And they don't use much else than that and Word and Outlook and chat apps.

            Anyway, supposing such a purpose can be found, you can make it scriptable in Python, and it is also important to be able to record a macro so that people can learn from the generated code. Then maybe that dominance can be challenged. Reply ↓

            • Vote -1 Vote +1 Jeff Read on 2018-01-18 at 12:00:29 said: TIOBE says that while VB.NET saw an uptick in popularity in 2011, it's on its way down now and usage was moribund before then.

              In your attempt to reframe my statements in your usual reference frame of Academic Programmer Bourgeoisie vs. Office Drone Proletariat, you missed my point entirely: VB.NET struggled to get a foothold during the time when VB6 was fresh in developers' minds. It was too different (and too C#-like) to win over VB6 devs, and didn't offer enough value-add beyond C# to win over the people who would've just used C# or Java. Reply ↓

              • Vote -1 Vote +1 jim of jim's blog on 2018-02-10 at 19:10:17 said: Yes, but he has a point.

                App -> macros -> macro script-> interpreted language with automatic memory management.

                So you tend to wind up with a widely used language that was not so much designed, as accreted.

                And, of course, programs written in this language fail to scale. Reply ↓

      • Vote -1 Vote +1 Jeff Read on 2017-12-24 at 02:30:27 said:

        I have been part of big projects with many engineers using languages with automatic memory management. I noticed I could get something up and running in a fraction of the time that it took in C or C++.

        And yet, somehow, strangely, the projects as a whole never got successfully completed. We found ourselves fighting weird shit done by the vast pile of run time software that was invisibly under the hood automatically doing stuff for us. We would be fighting mysterious and arcane installation and integration issues.

        Sounds just like every Ruby on Fails deployment I've ever seen. It's great when you're slapping together Version 0.1 of a product or so I've heard. But I've never joined a Fails team on version 0.1. The ones I saw were already well-established, and between the PFM in Rails itself, and the amount of monkeypatching done to system classes, it's very, very hard to reason about the code you're looking at. From a management level, you're asking for enormous pain trying to onboard new developers into that sort of environment, or even expand the scope of your product with an existing team, without them tripping all over each other.

        There's a reason why Twitter switched from Rails to Scala. Reply ↓

  26. Vote -1 Vote +1 jim on 2017-12-27 at 03:53:42 said: Jeff Read wrote:

    > Hacker culture has, for decades, tried to claim it was inclusive and nonjudgemental and yada yada , hacker culture has fallen far short. Now that's changing, has to change.

    Observe that "has to change" in practice means that the social justice warriors take charge.

    Observe that in practice, when the social justice warriors take charge, old bugs don't get fixed, new bugs appear, and projects turn into aimless garbage, if any development occurs at all.

    "has to change" is a power grab, and the people grabbing power are not competent to code, and do not care about code.

    Reflect on the attempted suicide of "Coraline". It is not people like me who keep using the correct pronouns that caused "her" to attempt suicide. It is the people who used "her" to grab power. Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-27 at 14:30:33 said: >"has to change" is a power grab, and the people grabbing power are not competent to code, and do not care about code.

      It's never happened before, and may very well never happen again but this once I completely agree with JAD. The "change" the SJWs actually want – as opposed to what they claim to want – would ruin us. Reply ↓

  27. Vote -1 Vote +1 jim on 2017-12-27 at 19:42:36 said: To get back on topic:

    Modern, mostly memory-safe C++ is enforced by:
    https://blogs.msdn.microsoft.com/vcblog/2016/03/31/c-core-guidelines-checkers-preview-of-the-lifetime-safety-checker/
    http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-abstract
    http://clang.llvm.org/extra/clang-tidy/

    $ clang-tidy test.cpp -checks='clang-analyzer-cplusplus*,cppcoreguidelines-*,modernize-*'

    cppcoreguidelines-* and modernize-* will catch most of the issues that esr complains about, in practice usually all of them, though I suppose that as the project gets bigger, some will slip through.
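    As a rough illustration (not code from this thread), the first function below is the sort of thing a run with those checks would typically flag, for example via cppcoreguidelines-owning-memory and modernize-loop-convert; the exact diagnostics depend on the clang-tidy version. The second is the modernized equivalent:

    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Node { int value; };

    int sum_legacy(const std::vector<int>& v) {
        Node* n = new Node();                        // raw owning pointer: flagged
        n->value = 1;
        int total = n->value;
        for (std::size_t i = 0; i < v.size(); i++)   // index loop: range-for suggested
            total += v[i];
        delete n;                                    // manual delete, easy to miss on early return
        return total;
    }

    int sum_modern(const std::vector<int>& v) {
        auto n = std::make_unique<Node>();           // ownership is explicit and automatic
        n->value = 1;
        int total = n->value;
        for (int x : v) total += x;                  // range-based for loop
        return total;
    }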

    Remember that gcc and g++ default to C++98, because of the vast base of old-fashioned C++ code which is subtly incompatible with C++11; C++11 onwards is the version of C++ that optionally supports memory safety, hence it is necessarily subtly incompatible.

    To turn on C++11

    Place

    cmake_minimum_required(VERSION 3.5)
    # set standard required to ensure that you get
    # the same version of C++ on every platform
    # as some environments default to older dialects
    # of C++ and some do not.
    set(CMAKE_CXX_STANDARD 11)
    set(CMAKE_CXX_STANDARD_REQUIRED ON)

    in your CMakeLists.txt. Reply ↓

  28. Vote -1 Vote +1 h0bby1 on 2018-05-19 at 09:27:02 said: I think I solved lots of those issues in C with a runtime I made.

    Originally I made this system because I wanted to test programming a microkernel OS, with protected mode, PCI bus, USB, ACPI etc., and I didn't want to get close to the 'event horizon' of memory management in C.

    But I didn't wait for Greenspun's law to kick in, so I first developed a safe memory system as a runtime, and replaced the standard C runtime and memory management with it.

    I wanted zero segfaults or memory errors to be possible anywhere in the C code, because debugging bare-metal exceptions, without a debugger, with complex data structures made in C, looks very close to the black hole.

    I didn't want to use C++ because C++ compilers have very unpredictable binary formats and function name decoration, which makes them much harder to interface with at kernel level.

    I also wanted a system as efficient as possible for managing lockless shared access to the whole memory between threads, to avoid the 'exclusive borrow' syndrome of Rust, with global variables shared between threads and lockless algorithms to access them.

    I took inspiration from the algorithms on this site http://www.1024cores.net/ to develop the basic system, with strong references as the norm, and direct 'bare pointers' only as weak references for fast access to memory in C.

    What I ended up doing is basically a 'strongly typed hashmap DAG' to store the object reference hierarchy, which can be manipulated using 'lambda expressions', so that applications manipulate objects only indirectly through the DAG abstraction, without having to manipulate bare pointers at all.

    This also makes a mark-and-sweep garbage collector easier to do, especially with an 'event based' system: the main loop can call the garbage collector between two executions of event/message handlers, which has the advantage that it runs at a point where there is no application data on the stack to mark, so it avoids mistaking application data on the stack for a pointer. All references that live only in stack variables can get automatically garbage collected when the function exits, much like in C++ actually.

    The garbage collector can still be called by the allocator when there is an OOM error; it will attempt a garbage collection before failing the allocation, but all references on the stack should be garbage collected once the function returns to the main loop and the garbage collector is run.

    As the whole reference hierarchy is expressed explicitly in the DAG, there shouldn't be any pointers stored in the heap outside of the module's data section, which corresponds to the C global variables used as the 'root elements' of the object hierarchy; these can be traversed to find all the active references to heap data that the code can potentially use. A quick system could be made so that the compiler automatically generates a list of the 'root references' among the global variables, to avoid memory leaks if some global data happens to look like a reference.

    As each thread has its own heap, it also avoids the 'stop the world' syndrome: all threads can garbage collect their own heap, and there is already a system of lockless synchronisation to access references based on expressions in the DAG, to avoid having to rely only on 'bare pointers' to manipulate the object hierarchy, which allows dynamic relocation and makes it easier to track active references.

    It's also very useful for tracking memory leaks: as the allocator can keep the time of each memory allocation, it's easy to see all the allocations that happened between two points of the program, and to dump their whole hierarchy and properties from just the 'bare reference'.

    Each thread contains two heaps: one which is manually managed, mostly used for temporary strings or IO buffers, and another heap which can be managed either with atomic reference counts or with mark-and-sweep.

    With this system, C programs rarely have to use malloc/free directly or manipulate pointers to allocated memory directly, other than for temporary buffer allocation (like a dynamic stack, for IO buffers or temporary strings) which can easily be managed manually. All the memory manipulation can be done via a runtime which internally keeps track of pointer address and size, data type, and eventually a 'finalizer' function that will be called when the pointer is freed.

    Since I started to use this system to make C programs, alongside my own ABI which can dynamically link binaries compiled with Visual Studio and gcc together, I have tested it for many different use cases. I could make a mini multi-threaded window manager/UI with async IRQ-driven HID driver events, and a system of distributed applications based on blockchain data, which includes a multi-threaded HTTP server that can handle parallel JSON/RPC calls, with an abstraction of the application stack via custom data type definitions / scripts stored on the blockchain, and I have very few memory problems, even though it's 100% in C, multi-threaded, and deals with heavily dynamic data.

    With the mark-and-sweep mode, it becomes quite easy to develop multi-threaded applications with a good level of concurrency, even a simple database system driven by a script over async HTTP/JSON/RPC, without having to care about complex memory management.

    Even with the reference-count mode, the manipulation of references is explicit, and it should not be too hard to detect leaks with simple parsers. I already did a test with the ANTLR C parser, with a visitor class to walk the grammar and detect potential errors; as all memory referencing happens through specific types instead of bare pointers, it's not too hard to detect potential memory leak problems with a simple parser. Reply ↓
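    Loosely, the strong/weak split described above maps onto the standard C++ shared_ptr/weak_ptr pair; the commenter's runtime is plain C and is not shown here, so this is only an analogy for the reference-counting mode, not his DAG or mark-and-sweep machinery:

    #include <iostream>
    #include <memory>

    struct Object { int value = 0; };

    int main() {
        // Strong reference: keeps the object alive (reference-counted).
        std::shared_ptr<Object> strong = std::make_shared<Object>();
        strong->value = 7;

        // Weak reference: a non-owning handle that does not extend lifetime.
        std::weak_ptr<Object> weak = strong;

        if (auto locked = weak.lock())           // promote to strong for safe access
            std::cout << locked->value << "\n";  // prints 7

        strong.reset();                          // last strong reference gone; object freed
        std::cout << std::boolalpha << weak.expired() << "\n";  // prints true
    }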

  29. Vote -1 Vote +1 Arron Grier on 2018-06-07 at 17:37:17 said: Since you've been talking a lot about Go lately, should you not mention it on your Document: How To Become A Hacker?

    Just wondering Reply ↓

    • Vote -1 Vote +1 esr on 2018-06-08 at 05:48:37 said: >Since you've been talking a lot about Go lately, should you not mention it on your Document: How To Become A Hacker?

      Too soon. Go is very interesting but it's not an essential tool yet.

      That might change in a few years. Reply ↓

  30. Vote -1 Vote +1 Yankes on 2018-12-18 at 19:20:46 said: I have one question, do you even need global AMM? Take one element of the graph: when will/should it be released in your reposurgeon? Overall I think the answer is never, because it usually links with others from this graph. And do you check how many objects are created and released during operations? I do not mean some temporary strings, but the objects representing the main working set.

    Depending on the answer: if you load some graph element and it stays indefinitely in memory, then this could easily be converted to C/C++ by simply never using `free` for graph elements (and all problems with memory management go out the window).
    If they should be released early, then when should that happen? Do you have some code in reposurgeon that purges objects once they are no longer needed? Simple reachability of an object does not mean it is needed; many times it is quite the opposite.

    I am now working on a C# application that had a similar bungle, and the previous developers' "solution" was to restart the whole application instead of fixing the lifetime problems. The correct solution was C++-like code: I create an object, do the work, and purge it explicitly. With this, none of the components have memory issues now. Of course the problem there lay with not knowing the tools they used rather than with the complexity of the domain, but did you do an analysis of what is needed, what is not, and for how long? AMM does not solve this.

    btw I am a big fan of the lisp that is in C++11, aka templates, a great pure functional language :D Reply ↓

    • Vote -1 Vote +1 esr on 2018-12-18 at 20:57:12 said: >I have one question, do you even need global AMM?

      Oh hell yes. Consider, for example, the demands of loading in and operating on multiple repositories. Reply ↓

      • Vote -1 Vote +1 Yankes on 2018-12-19 at 08:56:36 said: If I understood this correctly, the situation looks like this:
        I have a process that loaded repos A, B and C and is actively working on each one.
        Now, because of some demand, we need to load repo D.
        After we are done we go back to A, B and C.
        Now the question is: should the D data be purged?
        If there are memory connections from the previous repos then it will stay in memory; if not, then AMM will remove all its data from memory.
        If this is a complex graph, then when you have access to any element you can crawl to any other element of this graph (this is a simplification but probably a safe assumption).
        The first case (there is a connection) is equivalent to not using `free` in C. Of course if not all of the graph is reachable then there will be a partial purge of its memory (let's say 10% will stay), but what happens when you need to load repo D again? The data still available is hidden deep in other graphs and most of the data was lost to AMM; you need to load everything again and now repo D's size is 110%.

        In case there is no connection between repos A, B, C and repo D, then we can free it entirely.
        This is easily done in C++ (some kind of smart pointer that knows whether it points into the same repo or another).

        Is my reasoning correct? Or did I miss something?

        btw there is a BIG difference between C and C++: I can implement things in C++ that I will NEVER be able to implement in C. An example of this is my strongly typed simple script language:
        https://github.com/Yankes/OpenXcom/blob/master/src/Engine/Script.cpp
        I would need to drop functionality/protections to be able to convert this to C (or even C++03).

        Another example of this is https://github.com/fmtlib/fmt from C++ and `printf` from C.
        Both do exactly the same thing, but the C++ one is many times better and safer.
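        A minimal side-by-side sketch of that safety difference, assuming the fmt library from that repository is installed (fmt::print from <fmt/core.h>); it is only an illustration, not code from either project:

        #include <cstdio>
        #include <string>
        #include <fmt/core.h>

        int main() {
            std::string repo = "example-repo";
            int objects = 42;

            // printf: the format string is not checked against the argument types
            // by the language itself; passing repo instead of repo.c_str() here
            // would compile on some setups and be undefined behavior at run time.
            std::printf("%s holds %d objects\n", repo.c_str(), objects);

            // fmt: arguments are formatted according to their actual types, so a
            // mismatched format specification is rejected (at compile time in
            // recent fmt versions, otherwise by throwing fmt::format_error)
            // instead of silently corrupting output.
            fmt::print("{} holds {} objects\n", repo, objects);
            return 0;
        }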

        This means if we combine your statement on impossibility with mine, then we have:
        C <<< C++ <<< Go/Python
        but for me personally it is more:
        C <<< C++ < Go/Python
        than yours:
        C/C++ <<< Go/Python Reply ↓

        • Vote -1 Vote +1 esr on 2018-12-19 at 09:08:46 said: >Is my reasoning correct? Or did I miss something?

          Not much. The bigger issue is that it is fucking insane to try anything like this in a language where the core abstractions are leaky. That disqualifies C++. Reply ↓

          • Vote -1 Vote +1 Yankes on 2018-12-19 at 10:24:47 said: I only disagree with the word `insane`. C++ has lots of problems, like UB, lots of corner cases, leaky abstractions, the whole crap inherited from C (and my favorite: 1000-line errors from templates), but it is not insane to work with its memory problems.

            You can easily create tools that make all these problems bearable, and this is the biggest flaw in C++: many problems are solvable, but not out of the box. C++ is good at creating abstractions:
            https://www.youtube.com/watch?v=sPhpelUfu8Q
            If an abstraction fits your domain then it will not leak much, because it fits the underlying problem well.
            And you can enforce lots of things that allow you to reason locally about the behavior of the program.

            If creating this new abstraction is indeed insane, then I think you have problems in Go too, because the only problem that AMM solves is reachability of memory and how long you need it.

            btw the best thing that shows the difference between C++03 and C++11 is `std::vector<std::vector<T>>`: in C++03 this is insanely stupid, and in C++11 it is insanely clever, because it has the performance characteristics of `std::vector` (thanks to `std::move`) and no problems with memory management (keep indexes stable and use `v.at(i).at(j).x = 5;`, or wrap it in a helper class and use `v[i][j].x` that will throw on a wrong index). Reply ↓
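            A small illustration of that last point, assuming C++11 or later (not code from either side of this thread):

            #include <iostream>
            #include <vector>

            int main() {
                // 3 rows of 4 zeros.
                std::vector<std::vector<int>> v(3, std::vector<int>(4, 0));

                // Growing the outer vector moves the inner vectors (cheap pointer
                // moves in C++11) instead of deep-copying them as C++03 would.
                v.push_back(std::vector<int>(4, 7));

                // at() is bounds-checked and throws std::out_of_range on a bad
                // index instead of corrupting memory.
                v.at(1).at(2) = 5;
                std::cout << v.at(1).at(2) << "\n";   // prints 5
            }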


[Nov 04, 2019] Go (programming language) - Wikipedia

Nov 04, 2019 | en.wikipedia.org

... ... ...

The designers were primarily motivated by their shared dislike of C++.[26][27][28]

... ... ...

Omissions [ edit ]

Go deliberately omits certain features common in other languages, including (implementation) inheritance, generic programming, assertions,[e] pointer arithmetic,[d] implicit type conversions, untagged unions,[f] and tagged unions.[g] The designers added only those facilities that all three agreed on.[95]

Of the omitted language features, the designers explicitly argue against assertions and pointer arithmetic, while defending the choice to omit type inheritance as giving a more useful language, encouraging instead the use of interfaces to achieve dynamic dispatch[h] and composition to reuse code. Composition and delegation are in fact largely automated by struct embedding; according to researchers Schmager et al., this feature "has many of the drawbacks of inheritance: it affects the public interface of objects, it is not fine-grained (i.e., no method-level control over embedding), methods of embedded objects cannot be hidden, and it is static", making it "not obvious" whether programmers will overuse it to the extent that programmers in other languages are reputed to overuse inheritance.[61]

The designers express an openness to generic programming and note that built-in functions are in fact type-generic, but these are treated as special cases; Pike calls this a weakness that may at some point be changed. [53] The Google team built at least one compiler for an experimental Go dialect with generics, but did not release it. [96] They are also open to standardizing ways to apply code generation. [97]

Initially omitted, the exception-like panic / recover mechanism was eventually added, which the Go authors advise using for unrecoverable errors such as those that should halt an entire program or server request, or as a shortcut to propagate errors up the stack within a package (but not across package boundaries; there, error returns are the standard API).[98]

[Oct 15, 2019] Learning doxygen for source code documentation by Arpan Sen

Jul 29, 2008 | developer.ibm.com
Maintaining and adding new features to legacy systems developed using C/C++ is a daunting task. There are several facets to the problem -- understanding the existing class hierarchy and global variables, the different user-defined types, and function call graph analysis, to name a few. This article discusses several features of doxygen, with examples in the context of projects using C/C++.

However, doxygen is flexible enough to be used for software projects developed using the Python, Java, PHP, and other languages, as well. The primary motivation of this article is to help extract information from C/C++ sources, but it also briefly describes how to document code using doxygen-defined tags.

Installing doxygen

You have two choices for acquiring doxygen. You can download it as a pre-compiled executable file, or you can check out sources from the SVN repository and build it. Listing 1 shows the latter process.

Listing 1. Install and build doxygen sources
                
bash-2.05$ svn co https://doxygen.svn.sourceforge.net/svnroot/doxygen/trunk doxygen-svn

bash-2.05$ cd doxygen-svn
bash-2.05$ ./configure --prefix=/home/user1/bin
bash-2.05$ make

bash-2.05$ make install
Note that the configure script is tailored to dump the compiled sources in /home/user1/bin (add this directory to the PATH variable after the build), as not every UNIX® user has permission to write to the /usr folder. Also, you need the svn utility to check out sources.

Generating documentation using doxygen

To use doxygen to generate documentation of the sources, you perform three steps.

Generate the configuration file

At a shell prompt, type the command doxygen -g . This command generates a text-editable configuration file called Doxyfile in the current directory. You can choose to override this file name, in which case the invocation should be doxygen -g <user-specified file name>, as shown in Listing 2.
Listing 2. Generate the default configuration file
                
bash-2.05b$ doxygen -g
Configuration file 'Doxyfile' created.
Now edit the configuration file and enter
  doxygen Doxyfile
to generate the documentation for your project
bash-2.05b$ ls Doxyfile
Doxyfile
Edit the configuration file

The configuration file is structured as <TAGNAME> = <VALUE>, similar to the Make file format. The most important tags are the ones used in Listing 3, which shows an example of a Doxyfile.
Listing 3. Sample doxyfile with user-provided tag values
                
OUTPUT_DIRECTORY = /home/user1/docs
EXTRACT_ALL = yes
EXTRACT_PRIVATE = yes
EXTRACT_STATIC = yes
INPUT = /home/user1/project/kernel
#Do not add anything here unless you need to. Doxygen already covers all 
#common formats like .c/.cc/.cxx/.c++/.cpp/.inl/.h/.hpp
FILE_PATTERNS = 
RECURSIVE = yes
Run doxygen

Run doxygen in the shell prompt as doxygen Doxyfile (or with whatever file name you've chosen for the configuration file). Doxygen issues several messages before it finally produces the documentation in Hypertext Markup Language (HTML) and Latex formats (the default). In the folder that the <OUTPUT_DIRECTORY> tag specifies, two sub-folders named html and latex are created as part of the documentation-generation process. Listing 4 shows a sample doxygen run log.
Listing 4. Sample log output from doxygen
                
Searching for include files...
Searching for example files...
Searching for images...
Searching for dot files...
Searching for files to exclude
Reading input files...
Reading and parsing tag files
Preprocessing /home/user1/project/kernel/kernel.h
 
Read 12489207 bytes
Parsing input...
Parsing file /project/user1/project/kernel/epico.cxx
 
Freeing input...
Building group list...
..
Generating docs for compound MemoryManager::ProcessSpec
 
Generating docs for namespace std
Generating group index...
Generating example index...
Generating file member index...
Generating namespace member index...
Generating page index...
Generating graph info page...
Generating search index...
Generating style sheet...
Documentation output formats

Doxygen can generate documentation in several output formats other than HTML. You can configure doxygen to produce documentation in HTML Help (CHM), Latex, RTF, man page, and XML formats as well. Listing 5 provides an example of a Doxyfile that generates documentation in all the formats discussed.
Listing 5. Doxyfile with tags for generating documentation in several formats
                
#for HTML 
GENERATE_HTML = YES
HTML_FILE_EXTENSION = .htm

#for CHM files
GENERATE_HTMLHELP = YES

#for Latex output
GENERATE_LATEX = YES
LATEX_OUTPUT = latex

#for RTF
GENERATE_RTF = YES
RTF_OUTPUT = rtf 
RTF_HYPERLINKS = YES

#for MAN pages
GENERATE_MAN = YES
MAN_OUTPUT = man
#for XML
GENERATE_XML = YES
Special tags in doxygen

Doxygen contains a couple of special tags.

Preprocessing C/C++ code

First, doxygen must preprocess C/C++ code to extract information. By default, however, it does only partial preprocessing -- conditional compilation statements ( #if #endif ) are evaluated, but macro expansions are not performed. Consider the code in Listing 6.
Listing 6. Sample C code that makes use of macros
                
#include <cstring>
#include <rope>

#define USE_ROPE

#ifdef USE_ROPE
  #define STRING std::rope
#else
  #define STRING std::string
#endif

static STRING name;
With USE_ROPE defined in sources, generated documentation from doxygen looks like this:
Defines
    #define USE_ROPE
    #define STRING std::rope

Variables
    static STRING name
Here, you see that doxygen has performed a conditional compilation but has not done a macro expansion of STRING. The <ENABLE_PREPROCESSING> tag in the Doxyfile is set by default to Yes. To allow for macro expansions, also set the <MACRO_EXPANSION> tag to Yes. Doing so produces this output from doxygen:
Defines
   #define USE_ROPE
    #define STRING std::string

Variables
    static std::rope name
If you set the <ENABLE_PREPROCESSING> tag to No, the output from doxygen for the earlier sources looks like this:
Variables
    static STRING name
Note that the documentation now has no definitions, and it is not possible to deduce the type of STRING. It thus makes sense always to set the <ENABLE_PREPROCESSING> tag to Yes.

As part of the documentation, it might be desirable to expand only specific macros. For such purposes, along with setting <ENABLE_PREPROCESSING> and <MACRO_EXPANSION> to Yes, you must set the <EXPAND_ONLY_PREDEF> tag to Yes (this tag is set to No by default) and provide the macro details as part of the <PREDEFINED> or <EXPAND_AS_DEFINED> tag. Consider the code in Listing 7, where only the macro CONTAINER would be expanded.
Listing 7. C source with multiple macros
                
#ifdef USE_ROPE
  #define STRING std::rope
#else
  #define STRING std::string
#endif

#if ALLOW_RANDOM_ACCESS == 1
  #define CONTAINER std::vector
#else
  #define CONTAINER std::list
#endif

static STRING name;
static CONTAINER gList;
Show more Listing 8 shows the configuration file.
Listing 8. Doxyfile set to allow select macro expansions
                
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = YES
EXPAND_AS_DEFINED = CONTAINER

Here's the doxygen output with only CONTAINER expanded:
Defines
#define STRING   std::string 
#define CONTAINER   std::list

Variables
static STRING name
static std::list gList
Notice that only the CONTAINER macro has been expanded. Subject to <MACRO_EXPANSION> and <EXPAND_ONLY_PREDEF> both being set to Yes, the <EXPAND_AS_DEFINED> tag selectively expands only those macros listed on the right-hand side of the equality operator.

As part of preprocessing, the final tag to note is <PREDEFINED>. Much like the way you use the -D switch to pass preprocessor definitions to the G++ compiler, you use this tag to define macros. Consider the Doxyfile in Listing 9.
Listing 9. Doxyfile with macro expansion tags defined
                
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = YES
EXPAND_AS_DEFINED = 
PREDEFINED = USE_ROPE= \
                             ALLOW_RANDOM_ACCESS=1
Show more Here's the doxygen-generated output: Here's the doxygen-generated output: Here's the doxygen-generated output: Here's the doxygen-generated output:
Defines
#define USE_ROPE 
#define STRING   std::rope 
#define CONTAINER   std::vector

Variables
static std::rope name 
static std::vector gList
When used with the <PREDEFINED> tag, macros should be defined as <macro name>=<value>. If no value is provided -- as in the case of a simple #define -- just using <macro name>=<spaces> suffices. Separate multiple macro definitions by spaces or a backslash ( \ ).

Excluding specific files or directories from the documentation process

In the <EXCLUDE> tag in the Doxyfile, add the names of the files and directories for which documentation should not be generated, separated by spaces. This comes in handy when the root of the source hierarchy is provided and some sub-directories must be skipped. For example, if the root of the hierarchy is src_root and you want to skip the examples/ and test/memoryleaks folders from the documentation process, the Doxyfile should look like Listing 10.
Listing 10. Using the EXCLUDE tag as part of the Doxyfile
                
INPUT = /home/user1/src_root
EXCLUDE = /home/user1/src_root/examples /home/user1/src_root/test/memoryleaks

Generating graphs and diagrams

By default, the Doxyfile has the <CLASS_DIAGRAMS> tag set to Yes. This tag is used for generation of class hierarchy diagrams. Several other tags in the Doxyfile deal with generating diagrams; Listing 11 provides an example using a few data structures. Note that the <HAVE_DOT>, <CLASS_GRAPH>, and <COLLABORATION_GRAPH> tags are all set to Yes in the configuration file.
Listing 11. Interacting C classes and structures
                
struct D {
  int d;
};

class A {
  int a;
};

class B : public A {
  int b;
};

class C : public B {
  int c;
  D d;
};
Figure 1 shows the output from doxygen.
Figure 1. The Class inheritance graph and collaboration graph generated using the dot tool
Code documentation style

So far, you've used doxygen to extract information from code that is otherwise undocumented. However, doxygen also advocates a documentation style and syntax, which helps it generate more detailed documentation. This section discusses some of the more common tags doxygen advocates using as part of C/C++ code. For further details, see the doxygen documentation.

Every code item has two kinds of descriptions: one brief and one detailed. Brief descriptions are typically single lines. Functions and class methods have a third kind of description known as the in-body description, which is a concatenation of all comment blocks found within the function body.

To document global functions, variables, and enum types, the corresponding file must first be documented using the \file tag. Listing 12 provides an example using a function tag ( \fn ), a function argument tag ( \param ), a variable name tag ( \var ), a tag for #define ( \def ), and a tag to indicate specific issues related to a code snippet ( \warning ).
Listing 12. Typical doxygen tags and their use
                
/*! \file globaldecls.h
      \brief Place to look for global variables, enums, functions
           and macro definitions
  */

/** \var const int fileSize
      \brief Default size of the file on disk
  */
const int fileSize = 1048576;

/** \def SHIFT(value, length)
      \brief Left shift value by length in bits
  */
#define SHIFT(value, length) ((value) << (length))

/** \fn bool check_for_io_errors(FILE* fp)
      \brief Checks if a file is corrupted or not
      \param fp Pointer to an already opened file
      \warning Not thread safe!
  */
bool check_for_io_errors(FILE* fp);

Here's how the generated documentation looks:

Defines
#define SHIFT(value, length)   ((value) << (length))  
             Left shift value by length in bits.

Functions
bool check_for_io_errors (FILE *fp)  
        Checks if a file is corrupted or not.

Variables
const int fileSize = 1048576;
Function Documentation
bool check_for_io_errors (FILE* fp)
Checks if a file is corrupted or not.

Parameters
              fp: Pointer to an already opened file

Warning
Not thread safe!
Conclusion

This article discusses how doxygen can extract a lot of relevant information from legacy C/C++ code. If the code is documented using doxygen tags, doxygen generates output in an easy-to-read format. Put to good use, doxygen is a ripe candidate in any developer's arsenal for maintaining and managing legacy systems.

[Oct 15, 2019] Economist's View The Opportunity Cost of Computer Programming

Oct 15, 2019 | economistsview.typepad.com

From Reuters Odd News :

Man gets the poop on outsourcing , By Holly McKenna, May 2, Reuters

Computer programmer Steve Relles has the poop on what to do when your job is outsourced to India. Relles has spent the past year making his living scooping up dog droppings as the "Delmar Dog Butler." "My parents paid for me to get a (degree) in math and now I am a pooper scooper," he said. "I can clean four to five yards in an hour if they are close together." Relles, who lost his computer programming job about three years ago ... has over 100 clients who pay $10 each for a once-a-week cleaning of their yard.

Relles competes for business with another local company called "Scoopy Do." Similar outfits have sprung up across America, including Petbutler.net, which operates in Ohio. Relles says his business is growing by word of mouth and that most of his clients are women who either don't have the time or desire to pick up the droppings. "St. Bernard (dogs) are my favorite customers since they poop in large piles which are easy to find," Relles said. "It sure beats computer programming because it's flexible, and I get to be outside,"

[Oct 13, 2019] https://www.quora.com/If-Donald-Knuth-were-25-years-old-today-which-programming-language-would-he-choose

Notable quotes:
"... He mostly writes in C today. ..."
Oct 13, 2019 | www.quora.com

Eugene Miya , A friend/colleague. Sometimes driver. Other shared experiences. Updated Mar 22 2017 · Author has 11.2k answers and 7.9m answer views

He mostly writes in C today.

I can assure you he at least knows about Python. Guido's office at Dropbox is 1 -- 2 blocks by a backdoor gate from Don's house.

I would tend to doubt that he would use R (I've used S before as one of my stat packages). Don would probably write something for himself.

Don is not big on functional languages, so I would doubt either Haskell (sorry Paul) or LISP (but McCarthy lived just around the corner from Don; I used to drive him to meetings; actually, I've driven all 3 of us to meetings, and he got his wife an electric version of my car based on riding in my car (score one for friend's choices)). He does use emacs and he does write MLISP macros, but he believes in being closer to the hardware which is why he sticks with MMIX (and MIX) in his books.

Don't discount him learning the machine language of a given architecture.

I'm having dinner with Don and Jill and a dozen other mutual friends in 3 weeks or so (our quarterly dinner). I can ask him then, if I remember (either a calendar entry or at job). I try not to bother him with things like this. Don is well connected to the hacker community

Don's name was brought up at an undergrad architecture seminar today, but Don was not in the audience (an amazing audience; I took a photo for the collection of architects and other computer scientists in the audience (Hennessey and Patterson were talking)). I came close to biking by his house on my way back home.

We do have a mutual friend (actually, I introduced Don to my biology friend at Don's request) who arrives next week, and Don is my wine drinking proxy. So there is a chance I may see him sooner.

Steven de Rooij , Theoretical computer scientist Answered Mar 9, 2017 · Author has 4.6k answers and 7.7m answer views

Nice question :-)

Don Knuth would want to use something that's low level, because details matter. So no Haskell; LISP is borderline. Perhaps if the Lisp machine had ever become a thing.

He’d want something with well-defined and simple semantics, so definitely no R. Python also contains quite a few strange ad hoc rules, especially in its OO and lambda features. Yes Python is easy to learn and it looks pretty, but Don doesn’t care about superficialities like that. He’d want a language whose version number is converging to a mathematical constant, which is also not in favor of R or Python.

What remains is C. Out of the five languages listed, my guess is Don would pick that one. But actually, his own old choice of Pascal suits him even better. I don't think any languages have been invented since then that score higher on the Knuthometer than Knuth's own original pick.

And yes, I feel that this is actually a conclusion that bears some thinking about.

Dan Allen , I've been programming for 34 years now. Still not finished. Answered Mar 9, 2017 · Author has 4.5k answers and 1.8m answer views

In The Art of Computer Programming I think he'd do exactly what he did. He'd invent his own architecture and implement programs in an assembly language targeting that theoretical machine.

He did that for a reason because he wanted to reveal the detail of algorithms at the lowest level of detail which is machine level.

He didn't use any available languages at the time and I don't see why that would suit his purpose now. All the languages above are too high-level for his purposes.

[Oct 08, 2019] Southwest Pilots Blast Boeing in Suit for Deception and Losses from "Unsafe, Unairworthy" 737 Max

Notable quotes:
"... The lawsuit also aggressively contests Boeing's spin that competent pilots could have prevented the Lion Air and Ethiopian Air crashes: ..."
"... When asked why Boeing did not alert pilots to the existence of the MCAS, Boeing responded that the company decided against disclosing more details due to concerns about "inundate[ing] average pilots with too much information -- and significantly more technical data -- than [they] needed or could realistically digest." ..."
"... The filing has a detailed explanation of why the addition of heavier, bigger LEAP1-B engines to the 737 airframe made the plane less stable, changed how it handled, and increased the risk of catastrophic stall. It also describes at length how Boeing ignored warning signs during the design and development process, and misrepresented the 737 Max as essentially the same as older 737s to the FAA, potential buyers, and pilots. It also has juicy bits presented in earlier media accounts but bear repeating, like: ..."
"... Then, on November 7, 2018, the FAA issued an "Emergency Airworthiness Directive (AD) 2018-23-51," warning that an unsafe condition likely could exist or develop on 737 MAX aircraft. ..."
"... Moreover, unlike runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and relied upon in earlier generations of 737 aircraft. ..."
"... And making the point that to turn off MCAS all you had to do was flip two switches behind everything else on the center condole. Not exactly true, normally those switches were there to shut off power to electrically assisted trim. Ah, it one thing to shut off MCAS it's a whole other thing to shut off power to the planes trim, especially in high speed ✓ and the plane noise up ✓, and not much altitude ✓. ..."
"... Classic addiction behavior. Boeing has a major behavioral problem, the repetitive need for and irrational insistence on profit above safety all else , that is glaringly obvious to everyone except Boeing. ..."
"... In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots " ..."
"... This "MCAS" was always hidden from pilots? The military implemented checks on MCAS to maintain a level of pilot control. The commercial airlines did not. Commercial airlines were in thrall of every little feature that they felt would eliminate the need for pilots at all. Fell right into the automation crapification of everything. ..."
Oct 08, 2019 | www.nakedcapitalism.com

At first blush, the suit filed in Dallas by the Southwest Airlines Pilots Association (SwAPA) against Boeing may seem like a family feud. SWAPA is seeking an estimated $115 million for lost pilots' pay as a result of the grounding of the 34 Boeing 737 Max planes that Southwest owns and the additional 20 that Southwest had planned to add to its fleet by year end 2019. Recall that Southwest was the largest buyer of the 737 Max, followed by American Airlines. However, the damning accusations made by the pilots' union, meaning, erm, pilots, are likely to cause Boeing not just more public relations headaches, but will also give grist to suits by crash victims.

However, one reason that the Max is a sore point with the union was that it was a key leverage point in 2016 contract negotiations:

And Boeing's assurances that the 737 Max was for all practical purposes just a newer 737 factored into the pilots' bargaining stance. Accordingly, one of the causes of action is tortious interference, that Boeing interfered in the contract negotiations to the benefit of Southwest. The filing describes at length how Boeing and Southwest were highly motivated not to have the contract dispute drag on and set back the launch of the 737 Max at Southwest, its showcase buyer. The big point that the suit makes is the plane was unsafe and the pilots never would have agreed to fly it had they known what they know now.

We've embedded the complaint at the end of the post. It's colorful and does a fine job of recapping the sorry history of the development of the airplane. It has damning passages like:

Boeing concealed the fact that the 737 MAX aircraft was not airworthy because, inter alia, it incorporated a single-point failure condition -- a software/flight control logic called the Maneuvering Characteristics Augmentation System ("MCAS") -- that,if fed erroneous data from a single angle-of-attack sensor, would command the aircraft nose-down and into an unrecoverable dive without pilot input or knowledge.

The lawsuit also aggressively contests Boeing's spin that competent pilots could have prevented the Lion Air and Ethiopian Air crashes:

Had SWAPA known the truth about the 737 MAX aircraft in 2016, it never would have approved the inclusion of the 737 MAX aircraft as a term in its CBA [collective bargaining agreement], and agreed to operate the aircraft for Southwest. Worse still, had SWAPA known the truth about the 737 MAX aircraft, it would have demanded that Boeing rectify the aircraft's fatal flaws before agreeing to include the aircraft in its CBA, and to provide its pilots, and all pilots, with the necessary information and training needed to respond to the circumstances that the Lion Air Flight 610 and Ethiopian Airlines Flight 302 pilots encountered nearly three years later.

And (boldface original):

Boeing Set SWAPA Pilots Up to Fail

As SWAPA President Jon Weaks, publicly stated, SWAPA pilots "were kept in the dark" by Boeing.

Boeing did not tell SWAPA pilots that MCAS existed and there was no description or mention of MCAS in the Boeing Flight Crew Operations Manual.

There was therefore no way for commercial airline pilots, including SWAPA pilots, to know that MCAS would work in the background to override pilot inputs.

There was no way for them to know that MCAS drew on only one of two angle of attack sensors on the aircraft.

And there was no way for them to know of the terrifying consequences that would follow from a malfunction.

When asked why Boeing did not alert pilots to the existence of the MCAS, Boeing responded that the company decided against disclosing more details due to concerns about "inundate[ing] average pilots with too much information -- and significantly more technical data -- than [they] needed or could realistically digest."

SWAPA's pilots, like their counterparts all over the world, were set up for failure

The filing has a detailed explanation of why the addition of heavier, bigger LEAP1-B engines to the 737 airframe made the plane less stable, changed how it handled, and increased the risk of catastrophic stall. It also describes at length how Boeing ignored warning signs during the design and development process, and misrepresented the 737 Max as essentially the same as older 737s to the FAA, potential buyers, and pilots. It also has juicy bits presented in earlier media accounts but bear repeating, like:

By March 2016, Boeing settled on a revision of the MCAS flight control logic.

However, Boeing chose to omit key safeguards that had previously been included in earlier iterations of MCAS used on the Boeing KC-46A Pegasus, a military tanker derivative of the Boeing 767 aircraft.

The engineers who created MCAS for the military tanker designed the system to rely on inputs from multiple sensors and with limited power to move the tanker's nose. These deliberate checks sought to ensure that the system could not act erroneously or cause a pilot to lose control. Those familiar with the tanker's design explained that these checks were incorporated because "[y]ou don't want the solution to be worse than the initial problem."

The 737 MAX version of MCAS abandoned the safeguards previously relied upon. As discussed below, the 737 MAX MCAS had greater control authority than its predecessor, activated repeatedly upon activation, and relied on input from just one of the plane's two sensors that measure the angle of the plane's nose.

In other words, Boeing can't credibly say that it didn't know better.

Here is one of the sections describing Boeing's cover-ups:

Yet Boeing's website, press releases, annual reports, public statements and statements to operators and customers, submissions to the FAA and other civil aviation authorities, and 737 MAX flight manuals made no mention of the increased stall hazard or MCAS itself.

In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots.

We urge you to read the complaint in full, since it contains juicy insider details, like the significance of Southwest being Boeing's 737 Max "launch partner" and what that entailed in practice, plus recounting dates and names of Boeing personnel who met with SWAPA pilots and made misrepresentations about the aircraft.

If you are time-pressed, the best MSM account is from the Seattle Times, In scathing lawsuit, Southwest pilots' union says Boeing 737 MAX was unsafe

Even though Southwest Airlines is negotiating a settlement with Boeing over losses resulting from the grounding of the 737 Max and the airline has promised to compensate the pilots, the pilots' union at a minimum apparently feels the need to put the heat on Boeing directly. After all, the union could withdraw the complaint if Southwest were to offer satisfactory compensation for the pilots' lost income. And pilots have incentives not to raise safety concerns about the planes they fly. Don't want to spook the horses, after all.

But Southwest pilots are not only the ones most harmed by Boeing's debacle; they are also arguably less exposed to the downside of bad press about the 737 Max. It's business fliers who are most sensitive to the risks of the 737 Max, due to seeing the story regularly covered in the business press plus due to often being road warriors. Even though corporate customers account for only 12% of airline customers, they represent an estimated 75% of profits.

Southwest customers don't pay up for front of the bus seats. And many of them presumably value the combination of cheap travel, point to point routes between cities underserved by the majors, and close-in airports, which cut travel times. In other words, that combination of features will make it hard for business travelers who use Southwest regularly to give the airline up, even if the 737 Max gives them the willies. By contrast, premium seat passengers on American or United might find it not all that costly, in terms of convenience and ticket cost (if they are budget sensitive), to fly 737-Max-free Delta until those passengers regain confidence in the grounded plane.

Note that American Airlines' pilot union, when asked about the Southwest claim, said that it also believes its pilots deserve to be compensated for lost flying time, but they plan to obtain it through American Airlines.

If Boeing were smart, it would settle this suit quickly, but so far, Boeing has relied on bluster and denial. So your guess is as good as mine as to how long the legal arm-wrestling goes on.

Update 5:30 AM EDT: One important point that I neglected to include is that the filing also recounts, in gory detail, how Boeing went into "Blame the pilots" mode after the Lion Air crash, insisting the cause was pilot error and would therefore not happen again. Boeing made that claim on a call to all operators, including SWAPA, and then three days later in a meeting with SWAPA.

However, Boeing's actions were inconsistent with this claim. From the filing:

Then, on November 7, 2018, the FAA issued an "Emergency Airworthiness Directive (AD) 2018-23-51," warning that an unsafe condition likely could exist or develop on 737 MAX aircraft.

Relying on Boeing's description of the problem, the AD directed that in the event of un-commanded nose-down stabilizer trim such as what happened during the Lion Air crash, the flight crew should comply with the Runaway Stabilizer procedure in the Operating Procedures of the 737 MAX manual.

But the AD did not provide a complete description of MCAS or the problem in 737 MAX aircraft that led to the Lion Air crash, and would lead to another crash and the 737 MAX's grounding just months later.

An MCAS failure is not like a runaway stabilizer. A runaway stabilizer has continuous un-commanded movement of the tail, whereas MCAS is not continuous and pilots (theoretically) can counter the nose-down movement, after which MCAS would move the aircraft tail down again.

Moreover, unlike runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and relied upon in earlier generations of 737 aircraft.

Even after the Lion Air crash, Boeing's description of MCAS was still insufficient to correct its lack of disclosure, as demonstrated by a second MCAS-caused crash.

We hoisted this detail because insiders were spouting in our comments section, presumably based on Boeing's patter, that the Lion Air pilots were clearly incompetent, and that had they only executed the well-known "runaway stabilizer" procedure, all would have been fine. Needless to say, this assertion has been shown to be incorrect.


Titus , October 8, 2019 at 4:38 am

Excellent, by any standard. Which does remind me of the NYT magazine story (William Langewiesche, published Sept. 18, 2019) making the claim that basically the pilots who crashed their planes weren't real "Airmen".

And making the point that to turn off MCAS all you had to do was flip two switches behind everything else on the center console. Not exactly true; normally those switches were there to shut off power to the electrically assisted trim. Ah, it's one thing to shut off MCAS, it's a whole other thing to shut off power to the plane's trim, especially at high speed ✓, with the plane nose up ✓, and not much altitude ✓.

And especially if you as a pilot didn't know MCAS was there in the first place. This sort of engineering by Boeing is criminal. And the lying. To everyone. Oh, lest we all forget, the processing power of the in-flight computer is that of an Intel 286. There are times I just want to be beamed back to the home planet. Where we care for each other.

Carolinian , October 8, 2019 at 8:32 am

One should also point out that Langewiesche said that Boeing made disastrous mistakes with the MCAS and that the very future of the Max is cloudy. His article was useful both for greater detail about what happened and for offering some pushback to the idea that the pilots had nothing to do with the accidents.

As for the above, it was obvious from the first Seattle Times stories that these two events and the grounding were going to be a lawsuit magnet. But some of us think Boeing deserves at least a little bit of a defense because their side has been totally silent–either for legal reasons or CYA reasons on the part of their board and bad management.

Brooklin Bridge , October 8, 2019 at 8:08 am

Classic addiction behavior. Boeing has a major behavioral problem, the repetitive need for, and irrational insistence on, profit above safety and all else, that is glaringly obvious to everyone except Boeing.

Summer , October 8, 2019 at 9:01 am

"The engineers who created MCAS for the military tanker designed the system to rely on inputs from multiple sensors and with limited power to move the tanker's nose. These deliberate checks sought to ensure that the system could not act erroneously or cause a pilot to lose control "

"Yet Boeing's website, press releases, annual reports, public statements and statements to operators and customers, submissions to the FAA and other civil aviation authorities, and 737 MAX flight manuals made no mention of the increased stall hazard or MCAS itself.

In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots "

This "MCAS" was always hidden from pilots? The military implemented checks on MCAS to maintain a level of pilot control. The commercial airlines did not. Commercial airlines were in thrall of every little feature that they felt would eliminate the need for pilots at all. Fell right into the automation crapification of everything.

[Oct 08, 2019] Serious question/Semi-Rant. What the hell is DevOps supposed to be and how does it affect me as a sysadmin in higher ed?

Notable quotes:
"... Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to offer me? ..."
"... So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn, but it gets so overwhelming trying to find information because everything I find just assumes you're a software developer with all this prerequisite knowledge. Additionally, how the hell do you find the time to learn all of this? It seems like new DevOps software or platforms or whatever you call them spin up every single month. I'm already in the middle of trying to learn JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in addition to networking concepts in general), and AV design stuff (like Crestron programming). ..."
Oct 08, 2019 | www.reddit.com

Posted by u/kevbo423 59 minutes ago

What the hell is DevOps? Every couple months I find myself trying to look into it as all I ever hear and see about is DevOps being the way forward. But each time I research it I can only find things talking about streamlining software updates and quality assurance and yada yada yada. It seems like DevOps only applies to companies that make software as a product. How does that affect me as a sysadmin for higher education? My "company's" product isn't software.

Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to offer me? Again, when I try to research them a majority of what I find just links back to software development.

To give a rough idea of what I deal with, below is a list of my three main responsibilities.

  1. macOS/iOS Systems Administration (I'm the only sysadmin that does this for around 150+ machines)

  2. Network Administration (I just started with this a couple months ago and I'm slowly learning about our infrastructure and network administration in general from our IT director. We have several buildings spread across our entire campus with a mixture of Juniper, Dell, and Brocade equipment.)

  3. AV Systems Design and Programming (I'm the only person who does anything related to video conferencing, meeting room equipment, presentation systems, digital signage, etc. for 7 buildings.)

So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn, but it gets so overwhelming trying to find information because everything I find just assumes you're a software developer with all this prerequisite knowledge. Additionally, how the hell do you find the time to learn all of this? It seems like new DevOps software or platforms or whatever you call them spin up every single month. I'm already in the middle of trying to learn JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in addition to networking concepts in general), and AV design stuff (like Crestron programming).

I've been working at the same job for 5 years and I feel like I'm being left in the dust by the entire rest of the industry. I'm being pulled in so many different directions that I feel like it's impossible for me to ever get another job. At the same time, I can't specialize in anything because I have so many different unrelated areas I'm supposed to be doing work in.

And this is what I go through/ask myself every few months I try to research and learn DevOps. This is mainly a rant, but I am more than open to any and all advice anyone is willing to offer. Thanks in advance.

kimvila 2 points · 27 minutes ago

· edited 23 minutes ago

There's a lot of tools that are used on a daily basis for DevOps and that can make your life much easier, but apparently that's not the case for you. When you manage infra as code, you're using DevOps.

There's a lot of space for operations guys like you (and me), so look to DevOps as an alternative source of knowledge, just to stay tuned to the trends of the industry and improve your skills.

For higher education, this is useful for managing large projects and looking for improvement during the development of the product/service itself. But again, that's not the case for you. If you intend to switch to another position, you may try to search for a certification program that suits your needs.

Mongoloid_the_Retard 0 points · 46 minutes ago

DevOps is a cult.

[Oct 08, 2019] FALLACIES AND PITFALLS OF OO PROGRAMMING by David Hoag and Anthony Sintes

Notable quotes:
"... In the programming world, the term silver bullet refers to a technology or methodology that is touted as the ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more impressive, it will do all of this without any effort on your part! ..."
"... Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and architecture. ..."
"... OO will insure the success of your project: An object-oriented approach to software development does not guarantee the automatic success of a project. A developer cannot ignore the importance of sound design and architecture. Only careful analysis and a complete understanding of the problem will make the project succeed. A successful project will utilize sound techniques, competent programmers, sound processes and solid project management. ..."
"... OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger and slower than programs written using other techniques. ..."
"... OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in protecting you from making a mistake. ..."
Apr 27, 2000 | www.chicagotribune.com

"Hooked on Objects" is dedicated to providing readers with insight into object-oriented technologies. In our first few articles, we introduced the three tenants of object-oriented programming: encapsulation, inheritance and polymorphism. We then covered software process and design patterns. We even got our hands dirty and dissected the Java class.

Each of our previous articles had a common thread. We have written about the strengths and benefits of the object paradigm and highlighted the advantages the object approach brings to the development effort. However, we do not want to give anyone a false sense that object-oriented techniques are always the perfect answer. Object-oriented techniques are not the magic "silver bullets" of programming.

In the programming world, the term silver bullet refers to a technology or methodology that is touted as the ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more impressive, it will do all of this without any effort on your part!

Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and architecture.

If anything, using OO makes design and architecture more important because without a clear, well-planned design, OO will fail almost every time. Spaghetti code (that which is written without a coherent structure) spells trouble for procedural programming, and weak architecture and design can mean the death of an OO project. A poorly planned system will fail to achieve the promises of OO: increased productivity, reusability, scalability and easier maintenance.

Some critics claim OO has not lived up to its advance billing, while others claim its techniques are flawed. OO isn't flawed, but some of the hype has given OO developers and managers a false sense of security.

Successful OO requires careful analysis and design. Our previous articles have stressed the positive attributes of OO. This time we'll explore some of the common fallacies of this promising technology and some of the potential pitfalls.

Fallacies of OO

It is important to have realistic expectations before choosing to use object-oriented technologies. Do not allow these common fallacies to mislead you.

OO Pitfalls

Life is full of compromise and nothing comes without cost. OO is no exception. Before choosing to employ object technologies it is imperative to understand this. When used properly, OO has many benefits; when used improperly, however, the results can be disastrous.

OO technologies take time to learn: Don't expect to become an OO expert overnight. Good OO takes time and effort to learn. Like all technologies, change is the only constant. If you do not continue to enhance and strengthen your skills, you will fall behind.

OO benefits might not pay off in the short term: Because of the long learning curve and initial extra development costs, the benefits of increased productivity and reuse might take time to materialize. Don't forget this or you might be disappointed in your initial OO results.

OO technologies might not fit your corporate culture: The successful application of OO requires that your development team feels involved. If developers are frequently shifted, they will struggle to deliver reusable objects. There's less incentive to deliver truly robust, reusable code if you are not required to live with your work or if you'll never reap the benefits of it.

OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger and slower than programs written using other techniques. This isn't as much of a problem today. Memory prices are dropping every day. CPUs continue to provide better performance and compilers and virtual machines continue to improve. The small efficiency that you trade for increased productivity and reuse should be well worth it. However, if you're developing an application that tracks millions of data points in real time, OO might not be the answer for you.

OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in protecting you from making a mistake.

What do you need to do to avoid these pitfalls and fallacies? The answer is to keep expectations realistic. Beware of the hype. Use an OO approach only when appropriate.

Programmers should not feel compelled to use every OO trick that the implementation language offers. It is wise to use only the ones that make sense. When used without forethought, object-oriented techniques could cause more harm than good. Of course, there is one other thing that you should always do to improve your OO: Don't miss a single installment of "Hooked on Objects."

David Hoag is vice president-development and chief object guru for ObjectWave, a Chicago-based object-oriented software engineering firm. Anthony Sintes is a Sun Certified Java Developer and team member specializing in telecommunications consulting for ObjectWave. Contact them at hob@objectwave.com or visit their Web site at www.objectwave.com.

BOOKMARKS

Hooked on Objects archive:

chicagotribune.com/go/HOBarchive

Associated message board:

chicagotribune.com/go/HOBtalk

[Oct 07, 2019] Pitfalls of Object Oriented Programming by Tony Albrecht - Technical Consultant

This isn't a general discussion of OO pitfalls and conceptual weaknesses, but a discussion of how conventional 'textbook' OO design approaches can lead to inefficient use of cache and RAM, especially on consoles and other hardware-constrained platforms. But it's still good.
Sony Computer Entertainment Europe Research & Development Division

OO is not necessarily EVIL

It's all about the memory

Homogeneity

Data Oriented Design Delivers

[Oct 06, 2019] Weird Al Yankovic - Mission Statement

Highly recommended!
This song seriously streamlined my workflow.
Oct 06, 2019 | www.youtube.com

FanmaR , 4 years ago

Props to the artist who actually found a way to visualize most of this meaningless corporate lingo. I'm sure it wasn't easy to come up with everything.

Maxwelhse , 3 years ago

He missed "sea change" and "vertical integration". Otherwise, that was pretty much all of the useless corporate meetings I've ever attended distilled down to 4.5 minutes. Oh, and you're getting laid off and/or no raises this year.

VenetianTemper , 4 years ago

From my experiences as an engineer, never trust a company that describes their product with the word "synergy".

Swag Mcfresh , 5 years ago

For those too young to get the joke, this is a style parody of Crosby, Stills & Nash, a folk-pop super-group from the 60's. They were hippies who spoke out against corporate interests, war, and politics. Al took their sound (flawlessly), and wrote a song in corporate jargon (the exact opposite of everything CSN was about). It's really brilliant, to those who get the joke.

112steinway , 4 years ago

Only in corporate speak can you use a whole lot of words while saying nothing at all.

Jonathan Ingersoll , 3 years ago

As a business major this is basically every essay I wrote.

A.J. Collins , 3 years ago

"The company has undergone organization optimization due to our strategy modification, which includes empowering the support to the operation in various global markets" - Red 5 on why they laid off 40 people suddenly. Weird Al would be proud.

meanmanturbo , 3 years ago

So this is basically a Dilbert strip turned into a song. I approve.

zyxwut321 , 4 years ago

In his big long career this has to be one of the best songs Weird Al's ever done. Very ambitious rendering of one of the most ambitious songs in pop music history.

teenygozer , 3 years ago

This should be played before corporate meetings to shame anyone who's about to get up and do the usual corporate presentation. Genius as usual, Mr. Yankovic!

Dunoid , 4 years ago

Maybe I'm too far gone to the world of computer nerds, but "Cloud Computing" seems like it should have been in the song somewhere.

Snoo Lee , 4 years ago

The "paradigm shift" at the end of the video / song is when the corporation screws everybody at the end. Brilliantly done, Al.

A Piece Of Bread , 3 years ago

Don't forget to triangulate the automatonic business monetizer to create exceptional synergy.

GeoffryHawk , 3 years ago

There's a quote that goes something like: a politician is someone who speaks for hours while saying nothing at all. And this is exactly it, and it's brilliant.

Sefie Ezephiel , 4 months ago

From the current GameStop earnings call: "address the challenges that have impacted our results, and execute both deliberately and with urgency. We believe we will transform the business and shape the strategy for the GameStop of the future. This will be driven by our go-forward leadership team that is now in place, a multi-year transformation effort underway, a commitment to focusing on the core elements of our business that are meaningful to our future, and a disciplined approach to capital allocation." Yeah, Weird Al totally nailed it.

Phil H , 6 months ago

"People who enjoy meetings should not be put in charge of anything." -Thomas Sowell

Laff , 3 years ago

I heard "monetize our asses" for some reason...

Brett Naylor , 4 years ago

Excuse me, but "proactive" and "paradigm"? Aren't these just buzzwords that dumb people use to sound important? Not that I'm accusing you of anything like that. [pause] I'm fired, aren't I?~George Meyer

Mark Kahn , 4 years ago

Brilliant social commentary, on how the height of 60's optimism was bastardized into corporate enthusiasm. I hope Steve Jobs got to see this.

Mark , 4 years ago

That's the strangest "Draw My Life" I've ever seen.

Δ , 17 hours ago

I watch this at least once a day to take the edge off my job search whenever I have to decipher fifteen daily want-ads claiming to seek "Hospitality Ambassadors", "Customer Satisfaction Specialists", "Brand Representatives" and "Team Commitment Associates", eventually to discover they want someone to run a cash register and sweep up.

Mike The SandbridgeKid , 5 years ago

The irony is a song about Corporate Speak in the style of tie-dyed, hippie-dippy CSN (+/- Y) four-part harmony. Suite: Judy Blue Eyes via Almost Cut My Hair filtered through Carry On. "Fantastic" middle finger to Wall Street, The City, and the monstrous excesses of Unbridled Capitalism.

Geetar Bear , 4 years ago (edited)

This reminds me of George Carlin so much.

Vaugn Ripen , 2 years ago

If you understand who and what he's taking a jab at, this is one of the greatest songs and videos of all time. So spot on. This and Frank's 2000 inch tv are my favorite songs of yours. Thanks Al!

Joolz Godfree , 4 years ago

hahaha, "Client-Centric Solutions...!" (or in my case at the time, 'Customer-Centric' solutions) now THAT's a term i haven't heard/read/seen in years, since last being an office drone. =D

Miles Lacey , 4 years ago

When I interact with this musical visual medium I am motivated to conceptualize how the English language can be better compartmentalized to synergize with the client-centric requirements of the microcosmic community focussed social entities that I administrate on social media while interfacing energetically about the inherent shortcomings of the current socio-economic and geo-political order in which we co-habitate. Now does this tedium flow in an effortless stream of coherent verbalisations capable of comprehension?

Soufriere , 5 years ago

When I bought "Mandatory Fun", put it in my car, and first heard this song, I busted a gut, laughing so hard I nearly crashed. All the corporate buzzwords! (except "pivot", apparently).

[Oct 06, 2019] DevOps created huge opportunities for a new generation of snake oil salesmen

Highly recommended!
Oct 06, 2019 | www.reddit.com

DragonDrew Jack of All Trades 772 points · 4 days ago

"I am resolute in my ability to elevate this collaborative, forward-thinking team into the revenue powerhouse that I believe it can be. We will transition into a DevOps team specialising in migrating our existing infrastructure entirely to code and go completely serverless!" - CFO that outsources IT level 2 OpenScore Sysadmin 527 points · 4 days ago

"We will utilize Artificial Intelligence, machine learning, Cloud technologies, python, data science and blockchain to achieve business value"

[Oct 05, 2019] Sick and tired of listening to these so-called architects and full stack developers who watch a bunch of videos on YouTube and Pluralsight, find articles online. They go around the workplace throwing words like containers, devops, NoOps, azure, infrastructure as code, serverless, etc, but they don't understand half of the stuff

DevOps created a new generation of bullsheeters
Oct 05, 2019 | www.reddit.com

They say, No more IT or system or server admins needed very soon...

Sick and tired of listening to these so called architects and full stack developers who watch bunch of videos on YouTube and Pluralsight, find articles online. They go around workplace throwing words like containers, devops, NoOps, azure, infrastructure as code, serverless, etc, they don't understand half of the stuff. I do some of the devops tasks in our company, I understand what it takes to implement and manage these technologies. Every meeting is infested with these A holes.

ntengineer 613 points · 4 days ago

Your best defense against these is to come up with non-sarcastic and quality questions to ask these people during the meeting, and watch them not have a clue how to answer them.

For example, a friend of mine worked at a smallish company, some manager really wanted to move more of their stuff into Azure including AD and Exchange environment. But they had common problems with their internet connection due to limited bandwidth and them not wanting to spend more. So during a meeting my friend asked a question something like this:

"You said on this slide that moving the AD environment and Exchange environment to Azure will save us money. Did you take into account that we will need to increase our internet speed by a factor of at least 4 in order to accommodate the increase in traffic going out to the Azure cloud? "

Of course, they hadn't. So the CEO asked my friend if he had the numbers, which he had already done his homework, and it was a significant increase in cost every month and taking into account the cost for Azure and the increase in bandwidth wiped away the manager's savings.

I know this won't work for everyone. Sometimes there is real savings in moving things to the cloud. But often times there really isn't. Calling the uneducated people out on what they see as facts can be rewarding.

PuzzledSwitch 101 points · 4 days ago

My previous boss was that kind of a guy. He waited till other people were done throwing their weight around in a meeting and then calmly and politely dismantled them with facts.

No amount of corporate pressuring or bitching could ever stand up to that.

themastermatt 42 points · 4 days ago

I've been trying to do this. Problem is that everyone keeps talking all the way to the end of the meeting, leaving no room for rational facts.

PuzzledSwitch 35 points · 4 days ago

Make a follow-up in email, then.

Or, you might have to interject for a moment.

williamfny Jack of All Trades 26 points · 4 days ago

This is my approach. I don't yell or raise my voice, I just wait. Then I start asking questions that they generally cannot answer and slowly take them apart. I don't have to be loud to get my point across.

MaxHedrome 6 points · 4 days ago

Listen to this guy OP

This tactic is called "the box game". Just continuously ask them logical questions that can't be answered with their stupidity. (Box them in), let them be their own argument against themselves.

CrazyTachikoma 4 days ago

Most DevOps I've met are devs trying to bypass the sysadmins. This, and the Cloud fad, are burning serious amount of money from companies managed by stupid people that get easily impressed by PR stunts and shiny conferences. Then when everything goes to shit, they call the infrastructure team to fix it...

[Oct 01, 2019] Being a woman in programming in the Soviet Union Vicki Boykis

Oct 01, 2019 | veekaybee.github.io

In 1976, after eight years in the Soviet education system, I graduated the equivalent of middle school. Afterwards, I could choose to go for two more years, which would earn me a high school diploma, and then do three years of college, which would get me a diploma in "higher education."

Or, I could go for the equivalent of a blend of an associate and bachelor's degree, with an emphasis on vocational skills. This option took four years.

I went with the second option, mainly because it was common knowledge in the Soviet Union at the time that there was a restrictive quota for Jews applying to the five-year college program, which almost certainly meant that I, as a Jew, wouldn't get in. I didn't want to risk it.

My best friend at the time proposed that we take the entrance exams to attend Nizhniy Novgorod Industrial and Economic College. (At that time, it was known as Gorky Industrial and Economic College - the city, originally named for famous poet Maxim Gorky, was renamed in the 1990s after the fall of the Soviet Union.)

They had a program called "Programming for high-speed computing machines." Since I got good grades in math and geometry, this looked like I'd be able to get in. It also didn't hurt that my aunt, a very good seamstress and dressmaker, sewed several dresses specifically for the school's chief accountant, who was involved in enrollment decisions. So I got in.

What's interesting is that of the almost sixty students accepted into the program that year, all of them were female. It was the same for the class before us, and for the class after us. Later, after I started working in the Soviet Union, and even in the United States in the early 1990s, I understood that this was a trend. I'd say that 70% of the programmers I encountered in the IT industry were female. The males were mostly in middle and upper management.

image

My mom's code notebook, with her name and "Macroassembler" on it.

We started what would be considered our major concentration courses during the second year. Along with programming, there were a lot of related classes: "Computing Appliances and Their Organization", "Electro Technology", "Algorithms of Numerical Methods," and a lot of math that included integral and differential calculations. But programming was the main course, and we spent the most hours on it.

image

Notes on programming - Heading is "Directives (Commands) for job control implementation", covering the ABRT command

In the programming classes, we studied programming the "dry" way: using paper, pencil and eraser. In fact, this method was so important that students who forgot their pencils were sent to the main office to ask for one. It was extremely embarrassing, and we learned quickly not to forget them.

image

Paper and pencil code for opening a file in Macroassembler

Every semester we would take a new programming language to learn. We learned Algol, Fortran, and PL/1. We would learn from the simplest commands to loop organization, function and sub-function programming, multi-dimensional array processing, and more.

After mastering the basics, we would take exams, which were logical computing tasks to code in this specific language.

At some point midway through the program, our school bought the very first physical computer I ever saw: the Nairi. The programming language was AP, which was one of the few computer languages with Russian keywords.

Then, we started taking labs. It was a terrifying experience. You had to type your program into an input device, which was basically a typewriter connected to a huge computer. The programs looked like step-by-step instructions, and if you made even one mistake you had to start all over again. To code a solution for a linear algebraic equation usually would take 10-12 steps.

image

Program output in Macroassembler ("I was so creative with my program names," jokes my mom.)

Every once in a while, our teacher would go for one week of "practice work and curriculum development" to a serious IT shop with more advanced machines. At that time, the heavy computing power was in the ES Series, produced by Soviet bloc countries.

These machines were clones of the IBM 360. They worked with punch cards and punch tapes. She would bring back tons of papers with printed code and debugging comments for us to learn in classroom.

After two and a half years of rigorous study using pencil and paper, we had six months of practice. Most of the time it was at one of several scientific research institutes that existed in Nizhny Novgorod. I went to an institute that was oriented towards the auto industry.

I graduated with the title "Programmer-Technician". Most of the girls from my class took computer operator jobs, but I did not want to settle. I continued my education at Lobachevsky State University, named after Lobachevsky, the famous Russian mathematician. Since I was taking evening classes, it took me six years to graduate.

I wrote a lot about my first college because, looking back now, I realize that this is where I really learned to code and developed my programming skills. At the State University, we took a huge number of unnecessary courses. The only useful one was professional English. After this course I could read technical documentation in English without issues.

My final university degree was equivalent to a US master's in Computer Science. The actual major was called "Computational Mathematics and Cybernetics".

In total I worked for about seven years in the USSR as computer programmer, from 1982 to 1989. Technology changed rapidly, even there. I started out writing programs on special blanks for punch card machines using a Russian version of Assembler. To maximize performance, we would leave stacks of our punch cards for nightly processing.

After a couple years, we got terminals with keyboards. First they were installed in the same room where main computer was. Initially, there were not enough terminals and "machine time" was evenly divided between all of the programmers during the day.

Then, the terminals started to appear in the same room where programmers were. The displays were small, with black background and green font. We were now working in the terminal.

The languages were also changing. I switched to C and had to get hands-on training. I did not know it then, but I had picked a profession where things are constantly moving. The most I've ever worked with the same software was for about three years.

In 1991, we emigrated to the States. I had to quit my job two years before to avoid any issues with the Soviet government. Every programmer I knew had to sign a special form commanding them to keep state secrets. Such a signature could prevent us from getting exit visas.

When I arrived in the US, I worried I had fallen behind. To refresh my skills and to become more marketable, I had to take programming course for six months. It was the then-popular mix of COBOL, DB2, JCL etc.

The main difference between the USA and the USSR was the level at which computers were incorporated into everyday life. In the USSR, they were still a novelty. There was not a lot of practical usage. Some of the reasons were the planned organization of the economy and a politicized approach to science. Cybernetics was considered a "capitalist" discovery and was in exile in the 1950s. In the United States, computers were already widely in use, even in consumer settings.

The other difference is the gender makeup of the profession. In the United States, it is more male-dominated. In Russia, as I was starting my professional life, it was considered more of a female occupation. In both programs I studied, girls represented 100% of the class. Guys would go for something that was considered more masculine. These choices included majors like construction engineering and mechanical engineering.

Now, things have changed in Russia. The average salary for a software developer in Moscow is around $21K annually, versus a $10K average salary for Russia as a whole. Like in the United States, it has become a male-dominated field.

In conclusion, I have to say I picked the good profession to be in. Although I constantly have to learn new things, I've never had to worry about being employed. When I did go through a layoff, I was able to find a job very quickly. It is also a good paying job. I was very lucky compared to other immigrants, who had to study programming from scratch.

[Sep 21, 2019] Dr. Dobb's Journal February 1998 A Conversation with Larry Wall

Perl is a uniquely complex, non-orthogonal language, and due to this it has a unique level of expressiveness.
Also, the complexity of Perl to a large extent reflects the complexity of the Perl environment (which was the Unix environment at the beginning, but is now also the Windows environment with its quirks).
Notable quotes:
"... On a syntactic level, in the particular case of Perl, I placed variable names in a separate namespace from reserved words. That's one of the reasons there are funny characters on the front of variable names -- dollar signs and so forth. That allowed me to add new reserved words without breaking old programs. ..."
"... A script is something that is easy to tweak, and a program is something that is locked in. There are all sorts of metaphorical tie-ins that tend to make programs static and scripts dynamic, but of course, it's a continuum. You can write Perl programs, and you can write C scripts. People do talk more about Perl programs than C scripts. Maybe that just means Perl is more versatile. ..."
"... A good language actually gives you a range, a wide dynamic range, of your level of discipline. We're starting to move in that direction with Perl. The initial Perl was lackadaisical about requiring things to be defined or declared or what have you. Perl 5 has some declarations that you can use if you want to increase your level of discipline. But it's optional. So you can say "use strict," or you can turn on warnings, or you can do various sorts of declarations. ..."
"... But Perl was an experiment in trying to come up with not a large language -- not as large as English -- but a medium-sized language, and to try to see if, by adding certain kinds of complexity from natural language, the expressiveness of the language grew faster than the pain of using it. And, by and large, I think that experiment has been successful. ..."
"... If you used the regular expression in a list context, it will pass back a list of the various subexpressions that it matched. A different computer language may add regular expressions, even have a module that's called Perl 5 regular expressions, but it won't be integrated into the language. You'll have to jump through an extra hoop, take that right angle turn, in order to say, "Okay, well here, now apply the regular expression, now let's pull the things out of the regular expression," rather than being able to use the thing in a particular context and have it do something meaningful. ..."
"... A language is not a set of syntax rules. It is not just a set of semantics. It's the entire culture surrounding the language itself. So part of the cultural context in which you analyze a language includes all the personalities and people involved -- how everybody sees the language, how they propagate the language to other people, how it gets taught, the attitudes of people who are helping each other learn the language -- all of this goes into the pot of context. ..."
"... In the beginning, I just tried to help everybody. Particularly being on USENET. You know, there are even some sneaky things in there -- like looking for people's Perl questions in many different newsgroups. For a long time, I resisted creating a newsgroup for Perl, specifically because I did not want it to be ghettoized. You know, if someone can say, "Oh, this is a discussion about Perl, take it over to the Perl newsgroup," then they shut off the discussion in the shell newsgroup. If there are only the shell newsgroups, and someone says, "Oh, by the way, in Perl, you can solve it like this," that's free advertising. So, it's fuzzy. We had proposed Perl as a newsgroup probably a year or two before we actually created it. It eventually came to the point where the time was right for it, and we did that. ..."
"... For most web applications, Perl is severely underutilized. Your typical CGI script says print, print, print, print, print, print, print. But in a sense, it's the dynamic range of Perl that allows for that. You don't have to say a whole lot to write a simple Perl script, whereas your minimal Java program is, you know, eight or ten lines long anyway. Many of the features that made it competitive in the UNIX space will make it competitive in other spaces. ..."
"... Over the years, much of the work of making Perl work for people has been in designing ways for people to come to Perl. I actually delayed the first version of Perl for a couple of months until I had a sed-to-Perl and an awk-to-Perl translator. One of the benefits of borrowing features from various other languages is that those subsets of Perl that use those features are familiar to people coming from that other culture. What would be best, in my book, is if someone had a way of saying, "Well, I've got this thing in Visual Basic. Now, can I just rewrite some of these things in Perl?" ..."
Feb 28, 1998 | www.ddj.com

... ... ...

The creator of Perl talks about language design and Perl. By Eugene Eric Kim

DDJ : Is Perl 5.005 what you envisioned Perl to be when you set out to do it?

LW: That assumes that I'm smart enough to envision something as complicated as Perl. I knew that Perl would be good at some things, and would be good at more things as time went on. So, in a sense, I'm sort of blessed with natural stupidity -- as opposed to artificial intelligence -- in the sense that I know what my intellectual limits are.

I'm not one of these people who can sit down and design an entire system from scratch and figure out how everything relates to everything else, so I knew from the start that I had to take the bear-of-very-little-brain approach, and design the thing to evolve. But that fit in with my background in linguistics, because natural languages evolve over time.

You can apply biological metaphors to languages. They move into niches, and as new needs arise, languages change over time. It's actually a practical way to design a computer language. Not all computer programs can be designed that way, but I think more can be designed that way than have been. A lot of the majestic failures that have occurred in computer science have been because people thought they could design the whole thing in advance.

DDJ : How do you design a language to evolve?

LW: There are several aspects to that, depending on whether you are talking about syntax or semantics. On a syntactic level, in the particular case of Perl, I placed variable names in a separate namespace from reserved words. That's one of the reasons there are funny characters on the front of variable names -- dollar signs and so forth. That allowed me to add new reserved words without breaking old programs.
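To illustrate the point (this sketch is not from the interview, and the variable name is invented): because the sigil marks $length as a variable, it lives in its own namespace and can never collide with the length builtin, so a later Perl could add new reserved words without breaking code that already uses those words as variable names.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # The $ sigil puts the variable in its own namespace, so it cannot
    # collide with a builtin or any future reserved word.
    my $length = 42;               # a variable that happens to be named "length"
    print length("hello"), "\n";   # the builtin length(): prints 5
    print $length, "\n";           # the variable: prints 42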

DDJ : What is a scripting language? Does Perl fall into the category of a scripting language?

LW: Well, being a linguist, I tend to go back to the etymological meanings of "script" and "program," though, of course, that's fallacious in terms of what they mean nowadays. A script is what you hand to the actors, and a program is what you hand to the audience. Now hopefully, the program is already locked in by the time you hand that out, whereas the script is something you can tinker with. I think of phrases like "following the script," or "breaking from the script." The notion that you can evolve your script ties into the notion of rapid prototyping.

A script is something that is easy to tweak, and a program is something that is locked in. There are all sorts of metaphorical tie-ins that tend to make programs static and scripts dynamic, but of course, it's a continuum. You can write Perl programs, and you can write C scripts. People do talk more about Perl programs than C scripts. Maybe that just means Perl is more versatile.

... ... ...

DDJ : Would that be a better distinction than interpreted versus compiled -- run-time versus compile-time binding?

LW: It's a more useful distinction in many ways because, with late-binding languages like Perl or Java, you cannot make up your mind about what the real meaning of it is until the last moment. But there are different definitions of what the last moment is. Computer scientists would say there are really different "latenesses" of binding.

A good language actually gives you a range, a wide dynamic range, of your level of discipline. We're starting to move in that direction with Perl. The initial Perl was lackadaisical about requiring things to be defined or declared or what have you. Perl 5 has some declarations that you can use if you want to increase your level of discipline. But it's optional. So you can say "use strict," or you can turn on warnings, or you can do various sorts of declarations.
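As a small illustration of that optional discipline (the variable names here are invented, not from the interview), the pragmas turn a silent typo into a compile-time error instead of a quietly wrong answer:

    #!/usr/bin/perl
    # Without "use strict", a misspelled variable such as $totla would
    # silently spring into existence; with the pragmas below, the same
    # typo is reported at compile time.
    use strict;      # require variables to be declared with my/our
    use warnings;    # warn about suspicious constructs

    my $total = 0;
    $total += $_ for (1 .. 5);
    print "$total\n";    # prints 15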

DDJ : Would it be accurate to say that Perl doesn't enforce good design?

LW: No, it does not. It tries to give you some tools to help if you want to do that, but I'm a firm believer that a language -- whether it's a natural language or a computer language -- ought to be an amoral artistic medium.

You can write pretty poems or you can write ugly poems, but that doesn't say whether English is pretty or ugly. So, while I kind of like to see beautiful computer programs, I don't think the chief virtue of a language is beauty. That's like asking an artist whether they use beautiful paints and a beautiful canvas and a beautiful palette. A language should be a medium of expression, which does not restrict your feeling unless you ask it to.

DDJ : Where does the beauty of a program lie? In the underlying algorithms, in the syntax of the description?

LW: Well, there are many different definitions of artistic beauty. It can be argued that it's symmetry, which in a computer language might be considered orthogonality. It's also been argued that broken symmetry is what is considered most beautiful and most artistic and diverse. Symmetry breaking is the root of our whole universe according to physicists, so if God is an artist, then maybe that's his definition of what beauty is.

This actually ties back in with the built-to-evolve concept on the semantic level. A lot of computer languages were defined to be naturally orthogonal, or at least the computer scientists who designed them were giving lip service to orthogonality. And that's all very well if you're trying to define a position in a space. But that's not how people think. It's not how natural languages work. Natural languages are not orthogonal, they're diagonal. They give you hypotenuses.

Suppose you're flying from California to Quebec. You don't fly due east, and take a left turn over Nashville, and then go due north. You fly straight, more or less, from here to there. And it's a network. And it's actually sort of a fractal network, where your big link is straight, and you have little "fractally" things at the end for your taxi and bicycle and whatever the mode of transport you use. Languages work the same way. And they're designed to get you most of the way here, and then have ways of refining the additional shades of meaning.

When they first built the University of California at Irvine campus, they just put the buildings in. They did not put any sidewalks, they just planted grass. The next year, they came back and built the sidewalks where the trails were in the grass. Perl is that kind of a language. It is not designed from first principles. Perl is those sidewalks in the grass. Those trails that were there before were the previous computer languages that Perl has borrowed ideas from. And Perl has unashamedly borrowed ideas from many, many different languages. Those paths can go diagonally. We want shortcuts. Sometimes we want to be able to do the orthogonal thing, so Perl generally allows the orthogonal approach also. But it also allows a certain number of shortcuts, and being able to insert those shortcuts is part of that evolutionary thing.

I don't want to claim that this is the only way to design a computer language, or that everyone is going to actually enjoy a computer language that is designed in this way. Obviously, some people speak other languages. But Perl was an experiment in trying to come up with not a large language -- not as large as English -- but a medium-sized language, and to try to see if, by adding certain kinds of complexity from natural language, the expressiveness of the language grew faster than the pain of using it. And, by and large, I think that experiment has been successful.

DDJ : Give an example of one of the things you think is expressive about Perl that you wouldn't find in other languages.

LW: The fact that regular-expression parsing and the use of regular expressions is built right into the language. If you used the regular expression in a list context, it will pass back a list of the various subexpressions that it matched. A different computer language may add regular expressions, even have a module that's called Perl 5 regular expressions, but it won't be integrated into the language. You'll have to jump through an extra hoop, take that right angle turn, in order to say, "Okay, well here, now apply the regular expression, now let's pull the things out of the regular expression," rather than being able to use the thing in a particular context and have it do something meaningful.
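A short sketch of what he means, using a made-up date string: the same match expression yields the captured pieces in list context and a simple success/failure value in scalar context, with no separate extraction step.

    my $stamp = "2019-10-08";

    # List context: the match returns the captured subexpressions directly.
    my ($year, $month, $day) = $stamp =~ /(\d{4})-(\d{2})-(\d{2})/;
    print "$year $month $day\n";    # prints: 2019 10 08

    # Scalar context: the same match simply reports whether it succeeded.
    if ($stamp =~ /(\d{4})-(\d{2})-(\d{2})/) {
        print "looks like a date\n";
    }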

The school of linguistics I happened to come up through is called tagmemics, and it makes a big deal about context. In a real language -- this is a tagmemic idea -- you can distinguish between what the conventional meaning of the "thing" is and how it's being used. You think of "dog" primarily as a noun, but you can use it as a verb. That's the prototypical example, but the "thing" applies at many different levels. You think of a sentence as a sentence. Transformational grammar was built on the notion of analyzing a sentence. And they had all their cute rules, and they eventually ended up throwing most of them back out again.

But in the tagmemic view, you can take a sentence as a unit and use it differently. You can say a sentence like, "I don't like your I-can-use-anything-like-a-sentence attitude." There, I've used the sentence as an adjective. The sentence isn't an adjective if you analyze it, any way you want to analyze it. But this is the way people think. If there's a way to make sense of something in a particular context, they'll do so. And Perl is just trying to make those things make sense. There's the basic distinction in Perl between singular and plural context -- call it list context and scalar context, if you will. But you can use a particular construct in a singular context that has one meaning that sort of makes sense using the list context, and it may have a different meaning that makes sense in the plural context.
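A concrete sketch of that singular/plural distinction (the data is invented): the same expression gives back the elements in list context and the element count in scalar context, and a builtin like localtime() returns different but equally sensible results in each.

    my @lines = ("alpha\n", "beta\n", "gamma\n");

    my @copy  = @lines;    # list (plural) context: a copy of the elements
    my $count = @lines;    # scalar (singular) context: the element count, 3

    my @parts = localtime();   # list context: seconds, minutes, hours, ...
    my $when  = localtime();   # scalar context: a human-readable timestamp
    print "$count lines, generated at $when\n";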

That is where the expressiveness comes from. In English, you read essays by people who say, "Well, how does this metaphor thing work?" Owen Barfield talks about this. You say one thing and mean another. That's how metaphors arise. Or you take two things and jam them together. I think it was Owen Barfield, or maybe it was C.S. Lewis, who talked about "a piercing sweetness." And we know what "piercing" is, and we know what "sweetness" is, but you put those two together, and you've created a new meaning. And that's how languages ought to work.

DDJ : Is a more expressive language more difficult to learn?

LW: Yes. It was a conscious tradeoff at the beginning of Perl that it would be more difficult to master the whole language. However, taking another clue from a natural language, we do not require 5-year olds to speak with the same diction as 50-year olds. It is okay for you to use the subset of a language that you are comfortable with, and to learn as you go. This is not true of so many computer-science languages. If you program C++ in a subset that corresponds to C, you get laughed out of the office.

There's a whole subject that we haven't touched here. A language is not a set of syntax rules. It is not just a set of semantics. It's the entire culture surrounding the language itself. So part of the cultural context in which you analyze a language includes all the personalities and people involved -- how everybody sees the language, how they propagate the language to other people, how it gets taught, the attitudes of people who are helping each other learn the language -- all of this goes into the pot of context.

Because I had already put out other freeware projects (rn and patch), I realized before I ever wrote Perl that a great deal of the value of those things was from collaboration. Many of the really good ideas in rn and Perl came from other people.

I think that Perl is in its adolescence right now. There are places where it is grown up, and places where it's still throwing tantrums. I have a couple of teenagers, and the thing you notice about teenagers is that they're always plus or minus ten years from their real age. So if you've got a 15-year old, they're either acting 25 or they're acting 5. Sometimes simultaneously! And Perl is a little that way, but that's okay.

DDJ : What part of Perl isn't quite grown up?

LW: Well, I think that the part of Perl, which has not been realistic up until now has been on the order of how you enable people in certain business situations to actually use it properly. There are a lot of people who cannot use freeware because it is, you know, schlocky. Their bosses won't let them, their government won't let them, or they think their government won't let them. There are a lot of people who, unknown to their bosses or their government, are using Perl.

DDJ : So these aren't technical issues.

LW: I suppose it depends on how you define technology. Some of it is perceptions, some of it is business models, and things like that. I'm trying to generate a new symbiosis between the commercial and the freeware interests. I think there's an artificial dividing line between those groups and that they could be more collaborative.

As a linguist, the generation of a linguistic culture is a technical issue. So, these adjustments we might make in people's attitudes toward commercial operations or in how Perl is being supported, distributed, advertised, and marketed -- not in terms of trying to make bucks, but just how we propagate the culture -- these are technical ideas in the psychological and the linguistic sense. They are, of course, not technical in the computer-science sense. But I think that's where Perl has really excelled -- its growth has not been driven solely by technical merits.

DDJ : What are the things that you do when you set out to create a culture around the software that you write?

LW: In the beginning, I just tried to help everybody. Particularly being on USENET. You know, there are even some sneaky things in there -- like looking for people's Perl questions in many different newsgroups. For a long time, I resisted creating a newsgroup for Perl, specifically because I did not want it to be ghettoized. You know, if someone can say, "Oh, this is a discussion about Perl, take it over to the Perl newsgroup," then they shut off the discussion in the shell newsgroup. If there are only the shell newsgroups, and someone says, "Oh, by the way, in Perl, you can solve it like this," that's free advertising. So, it's fuzzy. We had proposed Perl as a newsgroup probably a year or two before we actually created it. It eventually came to the point where the time was right for it, and we did that.

DDJ : Perl has really been pigeonholed as a language of the Web. One result is that people mistakenly try to compare Perl to Java. Why do you think people make the comparison in the first place? Is there anything to compare?

LW: Well, people always compare everything.

DDJ : Do you agree that Perl has been pigeonholed?

LW: Yes, but I'm not sure that it bothers me. Before it was pigeonholed as a web language, it was pigeonholed as a system-administration language, and I think that -- this goes counter to what I was saying earlier about marketing Perl -- if the abilities are there to do a particular job, there will be somebody there to apply it, generally speaking. So I'm not too worried about Perl moving into new ecological niches, as long as it has the capability of surviving in there.

Perl is actually a scrappy language for surviving in a particular ecological niche. (Can you tell I like biological metaphors?) You've got to understand that it first went up against C and against shell, both of which were much loved in the UNIX community, and it succeeded against them. So that early competition actually makes it quite a fit competitor in many other realms, too.

For most web applications, Perl is severely underutilized. Your typical CGI script says print, print, print, print, print, print, print. But in a sense, it's the dynamic range of Perl that allows for that. You don't have to say a whole lot to write a simple Perl script, whereas your minimal Java program is, you know, eight or ten lines long anyway. Many of the features that made it competitive in the UNIX space will make it competitive in other spaces.
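
For what it is worth, the "print, print, print" style Wall mentions looks roughly like the sketch below; the headers and page text are invented for illustration, and a real script would more likely use the CGI module or a web framework.

    #!/usr/bin/perl
    # A rough sketch of a minimal "print, print, print" CGI script;
    # the page content is invented for illustration.
    use strict;
    use warnings;

    print "Content-type: text/html\r\n\r\n";
    print "<html><head><title>Hello</title></head>\n";
    print "<body><p>Hello from a tiny Perl CGI script.</p></body></html>\n";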

Now, there are things that Perl can't do. One of the things that you can't do with Perl right now is compile it down to Java bytecode. And if that, in the long run, becomes a large ecological niche (and this is not yet a sure thing), then that is a capability I want to be certain that Perl has.

DDJ : There's been a movement to merge the two development paths between the ActiveWare Perl for Windows and the main distribution of Perl. You were talking about ecological niches earlier, and how Perl started off as a text-processing language. The scripting languages that are dominant on the Microsoft platforms -- like VB -- tend to be more visual than textual. Given Perl's UNIX origins -- awk, sed, and C, for that matter -- do you think that Perl, as it currently stands, has the tools to fit into a Windows niche?

LW: Yes and no. It depends on your problem domain and who's trying to solve the problem. There are problems that only need a textual solution or don't need a visual solution. Automation things of certain sorts don't need to interact with the desktop, so for those sorts of things -- and for the programmers who aren't really all that interested in visual programming -- it's already good for that. And people are already using it for that. Certainly, there is a group of people who would be enabled to use Perl if it had more of a visual interface, and one of the things we're talking about doing for the O'Reilly NT Perl Resource Kit is some sort of a visual interface.

A lot of what Windows is designed to do is to get mere mortals from 0 to 60, and there are some people who want to get from 60 to 100. We are not really interested in being in Microsoft's crosshairs. We're not actually interested in competing head-to-head with Visual Basic, and to the extent that we do compete with them, it's going to be kind of subtle. There has to be some way to get people from the slow lane to the fast lane. It's one thing to give them a way to get from 60 to 100, but if they have to spin out to get from the slow lane to the fast lane, then that's not going to work either.

Over the years, much of the work of making Perl work for people has been in designing ways for people to come to Perl. I actually delayed the first version of Perl for a couple of months until I had a sed-to-Perl and an awk-to-Perl translator. One of the benefits of borrowing features from various other languages is that those subsets of Perl that use those features are familiar to people coming from that other culture. What would be best, in my book, is if someone had a way of saying, "Well, I've got this thing in Visual Basic. Now, can I just rewrite some of these things in Perl?"
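
As an illustration of that "familiar subset" idea, here is the awk-flavored corner of Perl; the data file name and the task (summing the second column) are assumptions made for the example.

    #!/usr/bin/perl -lan
    # A sketch of the "awk subset" of Perl: run as  ./sum.pl data.txt
    # -a autosplits each line into @F, -n wraps the body in a read loop,
    # -l handles line endings. The awk original would be:
    #   awk '{ sum += $2 } END { print sum }' data.txt
    $sum += $F[1];
    END { print $sum + 0 }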

We're already doing this with Java. On our UNIX Perl Resource Kit, I've got a hybrid language called "jpl" -- that's partly a pun on my old alma mater, Jet Propulsion Laboratory, and partly for Java, Perl...Lingo, there we go! That's good. "Java Perl Lingo." You've heard it first here! jpl lets you take a Java program and magically turn one of the methods into a chunk of Perl right there inline. It turns Perl code into a native method, and automates the linkage so that when you pull in the Java code, it also pulls in the Perl code, and the interpreter, and everything else. It's actually calling out from Java's Virtual Machine into Perl's virtual machine. And we can call in the other direction, too. You can embed Java in Perl, except that there's a bug in JDK having to do with threads that prevents us from doing any I/O. But that's Java's problem.

It's a way of letting somebody evolve from a purely Java solution into, at least partly, a Perl solution. It's important not only to make Perl evolve, but to make it so that people can evolve their own programs. It's how I program, and I think a lot of people program that way. Most of us are too stupid to know what we want at the beginning.

DDJ : Is there hope down the line to present Perl to a standardization body?

LW: Well, I have said in jest that people will be free to standardize Perl when I'm dead. There may come a time when that is the right thing to do, but it doesn't seem appropriate yet.

DDJ : When would that time be?

LW: Oh, maybe when the federal government declares that we can't export Perl unless it's standardized or something.

DDJ : Only when you're forced to, basically.

LW: Yeah. To me, once things get to a standards body, it's not very interesting anymore. The most efficient form of government is a benevolent dictatorship. I remember walking into some BOF that USENIX held six or seven years ago, and John Quarterman was running it, and he saw me sneak in, sit in the back corner, and he said, "Oh, here comes Larry Wall! He's a standards committee all of his own!"

A great deal of the success of Perl so far has been based on some of my own idiosyncrasies. And I recognize that they are idiosyncrasies, and I try to let people argue me out of them whenever appropriate. But there are still ways of looking at things that I seem to do differently than anybody else. It may well be that perl5-porters will one day degenerate into a standards committee. So far, I have not abused my authority to the point that people have written me off, and so I am still allowed to exercise a certain amount of absolute power over the Perl core.

I just think headless standards committees tend to reduce everything to mush. There is a conservatism that committees have that individuals don't, and there are times when you want to have that conservatism and times you don't. I try to exercise my authority where we don't want that conservatism. And I try not to exercise it at other times.

DDJ : How did you get involved in computer science? You're a linguist by background?

LW: Because I talk to computer scientists more than I talk to linguists, I wear the linguistics mantle more than I wear the computer-science mantle, but they actually came along in parallel, and I'm probably a 50/50 hybrid. You know, basically, I'm no good at either linguistics or computer science.

DDJ : So you took computer-science courses in college?

LW: In college, yeah. In college, I had various majors, but what I eventually graduated in -- I'm one of those people that packed four years into eight -- what I eventually graduated in was a self-constructed major, and it was Natural and Artificial Languages, which seems positively prescient considering where I ended up.

DDJ : When did you join O'Reilly as a salaried employee? And how did that come about?

LW: A year-and-a-half ago. It was partly because my previous job was kind of winding down.

DDJ : What was your previous job?

LW: I was working for Seagate Software. They were shutting down that branch of operations there. So, I was just starting to look around a little bit, and Tim noticed me looking around and said, "Well, you know, I've wanted to hire you for a long time," so we talked. And Gina Blaber (O'Reilly's software director) and I met. So, they more or less offered to pay me to mess around with Perl.

So it's sort of my dream job. I get to work from home, and if I feel like taking a nap in the afternoon, I can take a nap in the afternoon and work all night.

DDJ : Do you have any final comments, or tips for aspiring programmers? Or aspiring Perl programmers?

LW: Assume that your first idea is wrong, and try to think through the various options. I think that the biggest mistake people make is latching onto the first idea that comes to them and trying to do that. It really comes to a thing that my folks taught me about money. Don't buy something unless you've wanted it three times. Similarly, don't throw in a feature when you first think of it. Think if there's a way to generalize it, think if it should be generalized. Sometimes you can generalize things too much. I think like the things in Scheme were generalized too much. There is a level of abstraction beyond which people don't want to go. Take a good look at what you want to do, and try to come up with the long-term lazy way, not the short-term lazy way.

[Sep 21, 2019] The list of programming languages by dates

Sep 21, 2019 | www.scriptol.com

Years covered in this excerpt: 1948, 1949, 1951, 1952, 1955, 1956, 1957, 1958, 1959, 1960, 1962, 1963, 1964 (the per-year language names are on the linked page).

[Sep 21, 2019] Papers on programming languages ideas from 70's for today by Mikhail Barash

Jul 03, 2018 | medium.com

I present here a small bibliography of papers on programming languages from the 1970s. I have personally found these papers interesting in my research on the syntax of programming languages. I give short annotations and comments (adapted to modern-day notions) on some of them.

[Sep 18, 2019] MCAS design, Boeing and ethics of software architect

Sep 18, 2019 | www.moonofalabama.org

... ... ...

Boeing screwed up by designing and installing a faulty system that was unsafe. It did not even tell the pilots that MCAS existed. It still insists that the system's failure should not be covered in simulator-type training. Boeing's failure and the FAA's negligence, not the pilots, caused two major accidents.

Nearly a year after the first incident Boeing has still not presented a solution that the FAA would accept. Meanwhile more safety critical issues on the 737 MAX were found for which Boeing has still not provided any acceptable solution.

But to Langewiesche all of this is irrelevant anyway. He closes his piece with more "blame the pilots" whitewashing of "poor Boeing":

The 737 Max remains grounded under impossibly close scrutiny, and any suggestion that this might be an overreaction, or that ulterior motives might be at play, or that the Indonesian and Ethiopian investigations might be inadequate, is dismissed summarily. To top it off, while the technical fixes to the MCAS have been accomplished, other barely related imperfections have been discovered and added to the airplane's woes. All signs are that the reintroduction of the 737 Max will be exceedingly difficult because of political and bureaucratic obstacles that are formidable and widespread. Who in a position of authority will say to the public that the airplane is safe?

I would if I were in such a position. What we had in the two downed airplanes was a textbook failure of airmanship. In broad daylight, these pilots couldn't decipher a variant of a simple runaway trim, and they ended up flying too fast at low altitude, neglecting to throttle back and leading their passengers over an aerodynamic edge into oblivion. They were the deciding factor here -- not the MCAS, not the Max.

One wonders how much Boeing paid the author to assemble his screed.

foolisholdman , Sep 18 2019 17:14 utc | 5

14,000 Words Of "Blame The Pilots" That Whitewash Boeing Of 737 MAX Failure
The New York Times

No doubt, this WAS intended as a whitewash of Boeing, but having read the 14,000 words, I don't think it qualifies as more than a somewhat greywash. It is true he blames the pilots for mishandling a situation that could, perhaps, have been better handled, but Boeing still comes out of it pretty badly and so does the NTSB. The other thing I took away from the article is that Airbus planes are, in principle, & by design, more failsafe/idiot-proof.

William Herschel , Sep 18 2019 17:18 utc | 6
Key words: New York Times Magazine. I think when your body is for sale you are called a whore. Trump's almost hysterical bashing of the NYT is enough to make anyone like the paper, but at its core it is a mouthpiece for the military industrial complex. Cf. Judith Miller.
BM , Sep 18 2019 17:23 utc | 7
The New York Times Magazine just published a 14,000 words piece

An ill-disguised attempt to prepare the ground for premature approval for the 737max. It won't succeed - impossible. Opposition will come from too many directions. The blowback from this article will make Boeing regret it very soon, I am quite sure.

foolisholdman , Sep 18 2019 17:23 utc | 8
Come to think about it: (apart from the MCAS) what sort of crap design is it, if an absolutely vital control, which the elevator is, can become impossibly stiff under just those conditions where you absolutely have to be able to move it quickly?
A.L. , Sep 18 2019 17:27 utc | 9
This NYT article is great.

It will only highlight the hubris of "my sh1t doesn't stink" mentality of the American elite and increase the resolve of other civil aviation authorities with a backbone (or in ascendancy) to put Boeing through the wringer.

For the longest time the FAA was the gold standard, and years of "Air Crash Investigation" TV shows solidified its place, but that has been taken for granted. Until now, if it's good enough for the FAA, it's good enough for all.

That reputation has now been irreparably damaged by this sh1tshow. I can't help but think this NYT article is only meant for domestic sheeple or stockbrokers' consumption, as anyone who is going to have anything technical to do with this investigation is going to see right through this load of literal diarrhoea.

I wouldn't be surprised if some insider wants to offload some stock and planted this story ahead of some 737MAX return-to-service timetable announcement to get an uplift. Someone needs to track the SEC forms 3 4 and 5. But there are also many ways to skirt insider reporting requirements. As usual, rules are only meant for the rest of us.

jayc , Sep 18 2019 17:38 utc | 10
An appalling indifference to life/lives has been a signature feature of the American experience.
psychohistorian , Sep 18 2019 17:40 utc | 11
Thanks for the ongoing reporting of this debacle b....you are saving peoples lives

@ A.L who wrote

"
I wouldn't be surprised if some insider wants to offload some stock and planted this story ahead of some 737MAX return-to-service timetable announcement to get an uplift. Someone needs to track the SEC forms 3 4 and 5. But there are also many ways to skirt insider reporting requirements. As usual, rules are only meant for the rest of us.
"

I agree but would pluralize your "insider" to "insiders". This SOP gut and run financialization strategy is just like we are seeing with Purdue Pharma that just filed bankruptcy because their opioids have killed so many....the owners will never see jail time and their profits are protected by the God of Mammon legal system.

Hopefully the WWIII we are engaged in about public/private finance will put an end to this perfidy by the God of Mammon/private finance cult of the Western form of social organization.

b , Sep 18 2019 17:46 utc | 14
Peter Lemme, the satcom guru, was once an engineer at Boeing. He testified about technical MAX issues before Congress and wrote up a lot of the technical details. He retweeted the NYT Mag piece with this comment:
Peter Lemme @Satcom_Guru

Blame the pilots.
Blame the training.
Blame the airline standards.
Imply rampant corruption at all levels.
Claim Airbus flight envelope protection is superior to Boeing.
Fumble the technical details.
Stack the quotes with lots of hearsay to drive the theme.
Ignore everything else

[Sep 18, 2019] the myopic drive to profitability and naivety to unintended consequences are pushing these tech out into the world before they are ready.

Sep 18, 2019 | www.moonofalabama.org

A.L. , Sep 18 2019 19:56 utc | 31

@30 David G

perhaps, just like proponents of AI and self-driving cars. They just love the technology, and are so financially and emotionally invested in it that they can't see the forest for the trees.

I like technology, I studied engineering. But the myopic drive to profitability and naivety to unintended consequences are pushing these tech out into the world before they are ready.

engineering used to be a discipline with ethics and responsibilities... But now anybody who could write two lines of code can call themselves a software engineer....

[Sep 14, 2019] The Man Who Could Speak Japanese

This impostor definitely demonstrated programming abilities, although at the time there was no such term :-)
Notable quotes:
"... "We wrote it down. ..."
"... The next phrase was: ..."
"... " ' Booki fai kiz soy ?' " said Whitey. "It means 'Do you surrender?' " ..."
"... " ' Mizi pok loi ooni rak tong zin ?' 'Where are your comrades?' " ..."
"... "Tong what ?" rasped the colonel. ..."
"... "Tong zin , sir," our instructor replied, rolling chalk between his palms. He arched his eyebrows, as though inviting another question. There was one. The adjutant asked, "What's that gizmo on the end?" ..."
"... Of course, it might have been a Japanese newspaper. Whitey's claim to be a linguist was the last of his status symbols, and he clung to it desperately. Looking back, I think his improvisations on the Morton fantail must have been one of the most heroic achievements in the history of confidence men -- which, as you may have gathered by now, was Whitey's true profession. Toward the end of our tour of duty on the 'Canal he was totally discredited with us and transferred at his own request to the 81-millimeter platoon, where our disregard for him was no stigma, since the 81 millimeter musclemen regarded us as a bunch of eight balls anyway. Yet even then, even after we had become completely disillusioned with him, he remained a figure of wonder among us. We could scarcely believe that an impostor could be clever enough actually to invent a language -- phonics, calligraphy, and all. It had looked like Japanese and sounded like Japanese, and during his seventeen days of lecturing on that ship Whitey had carried it all in his head, remembering every variation, every subtlety, every syntactic construction. ..."
"... https://www.americanheritage.com/man-who-could-speak-japanese ..."
Sep 14, 2019 | www.nakedcapitalism.com

Wukchumni , September 13, 2019 at 4:29 pm

Re: Fake list of grunge slang:

a fabulous tale of the South Pacific by William Manchester

The Man Who Could Speak Japanese

"We wrote it down.

The next phrase was:

" ' Booki fai kiz soy ?' " said Whitey. "It means 'Do you surrender?' "

Then:

" ' Mizi pok loi ooni rak tong zin ?' 'Where are your comrades?' "

"Tong what ?" rasped the colonel.

"Tong zin , sir," our instructor replied, rolling chalk between his palms. He arched his eyebrows, as though inviting another question. There was one. The adjutant asked, "What's that gizmo on the end?"

Of course, it might have been a Japanese newspaper. Whitey's claim to be a linguist was the last of his status symbols, and he clung to it desperately. Looking back, I think his improvisations on the Morton fantail must have been one of the most heroic achievements in the history of confidence men -- which, as you may have gathered by now, was Whitey's true profession. Toward the end of our tour of duty on the 'Canal he was totally discredited with us and transferred at his own request to the 81-millimeter platoon, where our disregard for him was no stigma, since the 81 millimeter musclemen regarded us as a bunch of eight balls anyway. Yet even then, even after we had become completely disillusioned with him, he remained a figure of wonder among us. We could scarcely believe that an impostor could be clever enough actually to invent a language -- phonics, calligraphy, and all. It had looked like Japanese and sounded like Japanese, and during his seventeen days of lecturing on that ship Whitey had carried it all in his head, remembering every variation, every subtlety, every syntactic construction.

https://www.americanheritage.com/man-who-could-speak-japanese

[Sep 10, 2019] Thinking Forth by Leo Brodie is available online

Aug 31, 2019 | developers.slashdot.org

mccoma (64578), Friday February 22, 2019 @06:10PM (#58166468)

Thinking Forth (Score: 3)

I wish I had read Thinking Forth by Leo Brodie (ISBN-10: 0976458705, ISBN-13: 978-0976458708) much earlier. It is an amazing book that really shows you a different way to approach programming problems. It is available online these days.

[Sep 07, 2019] As soon as you stop writing code on a regular basis you stop being a programmer. You lose your qualification very quickly. That's the typical tragedy of talented programmers who become mediocre managers or, worse, theoretical computer scientists

Programming skills are somewhat similar to the skills of people who play the violin or piano. As soon as you stop playing, the skills start to evaporate: first slowly, then more quickly. In two years you will probably lose 80% of them.
Notable quotes:
"... I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. ..."
Sep 07, 2019 | archive.computerhistory.org

Dijkstra said he was proud to be a programmer. Unfortunately he changed his attitude completely, and I think he wrote his last computer program in the 1980s. At this conference I went to in 1967 about simulation language, Chris Strachey was going around asking everybody at the conference what was the last computer program you wrote. This was 1967. Some of the people said, "I've never written a computer program." Others would say, "Oh yeah, here's what I did last week." I asked Edsger this question when I visited him in Texas in the 90s and he said, "Don, I write programs now with pencil and paper, and I execute them in my head." He finds that a good enough discipline.

I think he was mistaken on that. He taught me a lot of things, but I really think that if he had continued... One of Dijkstra's greatest strengths was that he felt a strong sense of aesthetics, and he didn't want to compromise his notions of beauty. They were so intense that when he visited me in the 1960s, I had just come to Stanford. I remember the conversation we had. It was in the first apartment, our little rented house, before we had electricity in the house.

We were sitting there in the dark, and he was telling me how he had just learned about the specifications of the IBM System/360, and it made him so ill that his heart was actually starting to flutter.

He intensely disliked things that he didn't consider clean to work with. So I can see that he would have distaste for the languages that he had to work with on real computers. My reaction to that was to design my own language, and then make Pascal so that it would work well for me in those days. But his response was to do everything only intellectually.

So, programming.

I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. I think of a question that I want to answer, or I have part of my book where I want to present something. But I can't just present it by reading about it in a book. As I code it, it all becomes clear in my head. It's just the discipline. The fact that I have to translate my knowledge of this method into something that the machine is going to understand just forces me to make that crystal-clear in my head. Then I can explain it to somebody else infinitely better. The exposition is always better if I've implemented it, even though it's going to take me more time.

[Sep 07, 2019] Knuth about computer science and money: At that point I made the decision in my life that I wasn't going to optimize my income;

Sep 07, 2019 | archive.computerhistory.org

So I had a programming hat when I was outside of Caltech, and at Caltech I am a mathematician taking my grad studies. A startup company, called Green Tree Corporation because green is the color of money, came to me and said, "Don, name your price. Write compilers for us and we will take care of finding computers for you to debug them on, and assistance for you to do your work. Name your price." I said, "Oh, okay. $100,000," assuming that this was... In that era this was not quite at Bill Gates's level today, but it was sort of out there.

The guy didn't blink. He said, "Okay." I didn't really blink either. I said, "Well, I'm not going to do it. I just thought this was an impossible number."

At that point I made the decision in my life that I wasn't going to optimize my income; I was really going to do what I thought I could do for well, I don't know. If you ask me what makes me most happy, number one would be somebody saying "I learned something from you". Number two would be somebody saying "I used your software". But number infinity would be Well, no. Number infinity minus one would be "I bought your book". It's not as good as "I read your book", you know. Then there is "I bought your software"; that was not in my own personal value. So that decision came up. I kept up with the literature about compilers. The Communications of the ACM was where the action was. I also worked with people on trying to debug the ALGOL language, which had problems with it. I published a few papers, like "The Remaining Trouble Spots in ALGOL 60" was one of the papers that I worked on. I chaired a committee called "Smallgol" which was to find a subset of ALGOL that would work on small computers. I was active in programming languages.

[Sep 07, 2019] Knuth: maybe 1 in 50 people have the "computer scientist's" type of intellect

Sep 07, 2019 | conservancy.umn.edu

Frana: You have made the comment several times that maybe 1 in 50 people have the "computer scientist's mind."

Knuth: Yes.

Frana: I am wondering if a large number of those people are trained professional librarians? [laughter] There is some strangeness there. But can you pinpoint what it is about the mind of the computer scientist that is....

Knuth: That is different?

Frana: What are the characteristics?

Knuth: Two things: one is the ability to deal with non-uniform structure, where you have case one, case two, case three, case four. Or that you have a model of something where the first component is integer, the next component is a Boolean, and the next component is a real number, or something like that, you know, non-uniform structure. To deal fluently with those kinds of entities, which is not typical in other branches of mathematics, is critical. And the other characteristic ability is to shift levels quickly, from looking at something in the large to looking at something in the small, and many levels in between, jumping from one level of abstraction to another. You know that, when you are adding one to some number, that you are actually getting closer to some overarching goal. These skills, being able to deal with nonuniform objects and to see through things from the top level to the bottom level, these are very essential to computer programming, it seems to me. But maybe I am fooling myself because I am too close to it.
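
A small Perl sketch of the kind of non-uniform structure Knuth mentions: records whose components mix an integer, a Boolean, and a real number, handled case by case. The field names and cases are invented for illustration.

    #!/usr/bin/perl
    # Non-uniform data: each record mixes an integer, a Boolean, and a
    # real number, and is handled case by case. Fields are invented.
    use strict;
    use warnings;

    my @events = (
        { kind => 'retry',   count   => 3,   urgent => 0 },
        { kind => 'timeout', seconds => 1.5, urgent => 1 },
        { kind => 'message', text    => 'ok' },
    );

    for my $e (@events) {
        if    ($e->{kind} eq 'retry')   { printf "retry %d times\n", $e->{count}; }
        elsif ($e->{kind} eq 'timeout') { printf "wait %.1f s%s\n", $e->{seconds},
                                                 $e->{urgent} ? ' (urgent)' : ''; }
        elsif ($e->{kind} eq 'message') { print  "note: $e->{text}\n"; }
        else                            { warn   "unknown kind: $e->{kind}\n"; }
    }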

Frana: It is the hardest thing to really understand that which you are existing within.

Knuth: Yes.

[Sep 07, 2019] Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a more coherent structure so that it is easier to put things together

Sep 07, 2019 | conservancy.umn.edu

Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a more coherent structure so that it is easier to put things together. I can see that I could be viewed as a scholar that does his best to check out sources of material, so that people get credit where it is due. And to check facts over, not just to look at the abstract of something, but to see what the methods were that did it and to fill in holes if necessary. I look at my role as being able to understand the motivations and terminology of one group of specialists and boil it down to a certain extent so that people in other parts of the field can use it. I try to listen to the theoreticians and select what they have done that is important to the programmer on the street; to remove technical jargon when possible.

But I have never been good at any kind of a role that would be making policy, or advising people on strategies, or what to do. I have always been best at refining things that are there and bringing order out of chaos. I sometimes raise new ideas that might stimulate people, but not really in a way that would be in any way controlling the flow. The only time I have ever advocated something strongly was with literate programming; but I do this always with the caveat that it works for me, not knowing if it would work for anybody else.

When I work with a system that I have created myself, I can always change it if I don't like it. But everybody who works with my system has to work with what I give them. So I am not able to judge my own stuff impartially. So anyway, I have always felt bad about if anyone says, 'Don, please forecast the future,'...

[Sep 06, 2019] Knuth: Programming and architecture are interrelated and it is impossible to create good architecture without actually programming at least a prototype

Notable quotes:
"... When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?" ..."
"... When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this." ..."
Sep 06, 2019 | archive.computerhistory.org

...I showed the second version of this design to two of my graduate students, and I said, "Okay, implement this, please, this summer. That's your summer job." I thought I had specified a language. I had to go away. I spent several weeks in China during the summer of 1977, and I had various other obligations. I assumed that when I got back from my summer trips, I would be able to play around with TeX and refine it a little bit. To my amazement, the students, who were outstanding students, had not completed [it]. They had a system that was able to do about three lines of TeX. I thought, "My goodness, what's going on? I thought these were good students." Well afterwards I changed my attitude to saying, "Boy, they accomplished a miracle."

Because going from my specification, which I thought was complete, they really had an impossible task, and they had succeeded wonderfully with it. These students, by the way, [were] Michael Plass, who has gone on to be the brains behind almost all of Xerox's Docutech software and all kind of things that are inside of typesetting devices now, and Frank Liang, one of the key people for Microsoft Word.

He did important mathematical things as well as his hyphenation methods which are quite used in all languages now. These guys were actually doing great work, but I was amazed that they couldn't do what I thought was just sort of a routine task. Then I became a programmer in earnest, where I had to do it. The reason is when you're doing programming, you have to explain something to a computer, which is dumb.

When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?"

It just didn't occur to the person writing the design specification. When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this."

If I hadn't been in China they would've scheduled an appointment with me and stopped their programming for a day. Then they would come in at the designated hour and we would talk. They would take 15 minutes to present to me what the problem was, and then I would think about it for a while, and then I'd say, "Oh yeah, do this. " Then they would go home and they would write code for another five minutes and they'd have to schedule another appointment.

I'm probably exaggerating, but this is why I think Bob Floyd's Chiron compiler never got going. Bob worked many years on a beautiful idea for a programming language, where he designed a language called Chiron, but he never touched the programming himself. I think this was actually the reason that he had trouble with that project, because it's so hard to do the design unless you're faced with the low-level aspects of it, explaining it to a machine instead of to another person.

Forsythe, I think it was, who said, "People have said traditionally that you don't understand something until you've taught it in a class. The truth is you don't really understand something until you've taught it to a computer, until you've been able to program it." At this level, programming was absolutely important

[Sep 06, 2019] Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming keeps getting harder because more stuff is discovered

Sep 06, 2019 | conservancy.umn.edu

Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming keeps getting harder because more stuff is discovered. I can cope with learning about one new technique per day, but I can't take ten in a day all at once. So conferences are depressing; it means I have so much more work to do. If I hide myself from the truth I am much happier.

[Sep 06, 2019] How TAOCP was hatched

Notable quotes:
"... Also, Addison-Wesley was the people who were asking me to do this book; my favorite textbooks had been published by Addison Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a secondyear gradate student -- this was a thrill. ..."
"... But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly. ..."
Sep 06, 2019 | archive.computerhistory.org

Knuth: This is, of course, really the story of my life, because I hope to live long enough to finish it. But I may not, because it's turned out to be such a huge project. I got married in the summer of 1961, after my first year of graduate school. My wife finished college, and I could use the money I had made -- the $5000 on the compiler -- to finance a trip to Europe for our honeymoon.

We had four months of wedded bliss in Southern California, and then a man from Addison-Wesley came to visit me and said "Don, we would like you to write a book about how to write compilers."

The more I thought about it, I decided "Oh yes, I've got this book inside of me."

I sketched out that day -- I still have the sheet of tablet paper on which I wrote -- I sketched out 12 chapters that I thought ought to be in such a book. I told Jill, my wife, "I think I'm going to write a book."

As I say, we had four months of bliss, because the rest of our marriage has all been devoted to this book. Well, we still have had happiness. But really, I wake up every morning and I still haven't finished the book. So I try to -- I have to -- organize the rest of my life around this, as one main unifying theme. The book was supposed to be about how to write a compiler. They had heard about me from one of their editorial advisors, that I knew something about how to do this. The idea appealed to me for two main reasons. One is that I did enjoy writing. In high school I had been editor of the weekly paper. In college I was editor of the science magazine, and I worked on the campus paper as copy editor. And, as I told you, I wrote the manual for that compiler that we wrote. I enjoyed writing, number one.

Also, Addison-Wesley was the people who were asking me to do this book; my favorite textbooks had been published by Addison Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a second-year graduate student -- this was a thrill.

Another very important reason at the time was that I knew that there was a great need for a book about compilers, because there were a lot of people who even in 1962 -- this was January of 1962 -- were starting to rediscover the wheel. The knowledge was out there, but it hadn't been explained. The people who had discovered it, though, were scattered all over the world and they didn't know of each other's work either, very much. I had been following it. Everybody I could think of who could write a book about compilers, as far as I could see, they would only give a piece of the fabric. They would slant it to their own view of it. There might be four people who could write about it, but they would write four different books. I could present all four of their viewpoints in what I would think was a balanced way, without any axe to grind, without slanting it towards something that I thought would be misleading to the compiler writer for the future. I considered myself as a journalist, essentially. I could be the expositor, the tech writer, that could do the job that was needed in order to take the work of these brilliant people and make it accessible to the world. That was my motivation. Now, I didn't have much time to spend on it then, I just had this page of paper with 12 chapter headings on it. That's all I could do while I'm a consultant at Burroughs and doing my graduate work. I signed a contract, but they said "We know it'll take you a while." I didn't really begin to have much time to work on it until 1963, my third year of graduate school, as I'm already finishing up on my thesis. In the summer of '62, I guess I should mention, I wrote another compiler. This was for Univac; it was a FORTRAN compiler. I spent the summer, I sold my soul to the devil, I guess you say, for three months in the summer of 1962 to write a FORTRAN compiler. I believe that the salary for that was $15,000, which was much more than an assistant professor. I think assistant professors were getting eight or nine thousand in those days.

Feigenbaum: Well, when I started in 1960 at [University of California] Berkeley, I was getting $7,600 for the nine-month year.

Knuth: Yeah, so you see it. I got $15,000 for a summer job in 1962 writing a FORTRAN compiler. One day during that summer I was writing the part of the compiler that looks up identifiers in a hash table. The method that we used is called linear probing. Basically you take the variable name that you want to look up, you scramble it, like you square it or something like this, and that gives you a number between one and, well in those days it would have been between 1 and 1000, and then you look there. If you find it, good; if you don't find it, go to the next place and keep on going until you either get to an empty place, or you find the number you're looking for. It's called linear probing. There was a rumor that one of Professor Feller's students at Princeton had tried to figure out how fast linear probing works and was unable to succeed. This was a new thing for me. It was a case where I was doing programming, but I also had a mathematical problem that would go into my other [job]. My winter job was being a math student, my summer job was writing compilers. There was no mix. These worlds did not intersect at all in my life at that point. So I spent one day during the summer while writing the compiler looking at the mathematics of how fast does linear probing work. I got lucky, and I solved the problem. I figured out some math, and I kept two or three sheets of paper with me and I typed it up. ["Notes on 'Open' Addressing", 7/22/63] I guess that's on the internet now, because this became really the genesis of my main research work, which developed not to be working on compilers, but to be working on what they call analysis of algorithms, which is, have a computer method and find out how good is it quantitatively. I can say, if I got so many things to look up in the table, how long is linear probing going to take. It dawned on me that this was just one of many algorithms that would be important, and each one would lead to a fascinating mathematical problem. This was easily a good lifetime source of rich problems to work on. Here I am then, in the middle of 1962, writing this FORTRAN compiler, and I had one day to do the research and mathematics that changed my life for my future research trends. But now I've gotten off the topic of what your original question was.
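
Knuth's description of linear probing translates almost directly into code. Here is a minimal Perl sketch with a table of 1000 slots, as in the anecdote; the scrambling function is an illustrative stand-in, not the one he used.

    #!/usr/bin/perl
    # Linear probing as described above: scramble the identifier into a
    # slot number, then scan forward until the name or an empty slot
    # turns up. Table size and hash function are illustrative choices.
    use strict;
    use warnings;

    my $SIZE  = 1000;
    my @table = (undef) x $SIZE;

    sub scramble {
        my ($name) = @_;
        my $h = 0;
        $h = ($h * 31 + ord $_) % $SIZE for split //, $name;
        return $h;
    }

    sub insert {
        my ($name) = @_;
        my $i = scramble($name);
        $i = ($i + 1) % $SIZE
            while defined $table[$i] && $table[$i] ne $name;
        $table[$i] = $name;
    }

    sub lookup {                      # returns the slot, or -1 if absent
        my ($name) = @_;
        my $i = scramble($name);
        while (defined $table[$i]) {
            return $i if $table[$i] eq $name;
            $i = ($i + 1) % $SIZE;
        }
        return -1;
    }

    insert($_) for qw(alpha beta gamma);
    printf "%-5s -> slot %d\n", $_, lookup($_) for qw(alpha gamma delta);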

Feigenbaum: We were talking about sort of the.. You talked about the embryo of The Art of Computing. The compiler book morphed into The Art of Computer Programming, which became a seven-volume plan.

Knuth: Exactly. Anyway, I'm working on a compiler and I'm thinking about this. But now I'm starting, after I finish this summer job, then I began to do things that were going to be relating to the book. One of the things I knew I had to have in the book was an artificial machine, because I'm writing a compiler book but machines are changing faster than I can write books. I have to have a machine that I'm totally in control of. I invented this machine called MIX, which was typical of the computers of 1962.

In 1963 I wrote a simulator for MIX so that I could write sample programs for it, and I taught a class at Caltech on how to write programs in assembly language for this hypothetical computer. Then I started writing the parts that dealt with sorting problems and searching problems, like the linear probing idea. I began to write those parts, which are part of a compiler, of the book. I had several hundred pages of notes gathering for those chapters for The Art of Computer Programming. Before I graduated, I've already done quite a bit of writing on The Art of Computer Programming.

I met George Forsythe about this time. George was the man who inspired both of us [Knuth and Feigenbaum] to come to Stanford during the '60s. George came down to Southern California for a talk, and he said, "Come up to Stanford. How about joining our faculty?" I said "Oh no, I can't do that. I just got married, and I've got to finish this book first." I said, "I think I'll finish the book next year, and then I can come up [and] start thinking about the rest of my life, but I want to get my book done before my son is born." Well, John is now 40-some years old and I'm not done with the book. Part of my lack of expertise is any good estimation procedure as to how long projects are going to take. I way underestimated how much needed to be written about in this book. Anyway, I started writing the manuscript, and I went merrily along writing pages of things that I thought really needed to be said. Of course, it didn't take long before I had started to discover a few things of my own that weren't in any of the existing literature. I did have an axe to grind. The message that I was presenting was in fact not going to be unbiased at all. It was going to be based on my own particular slant on stuff, and that original reason for why I should write the book became impossible to sustain. But the fact that I had worked on linear probing and solved the problem gave me a new unifying theme for the book. I was going to base it around this idea of analyzing algorithms, and have some quantitative ideas about how good methods were. Not just that they worked, but that they worked well: this method worked 3 times better than this method, or 3.1 times better than this method. Also, at this time I was learning mathematical techniques that I had never been taught in school. I found they were out there, but they just hadn't been emphasized openly, about how to solve problems of this kind.

So my book would also present a different kind of mathematics than was common in the curriculum at the time, that was very relevant to analysis of algorithm. I went to the publishers, I went to Addison Wesley, and said "How about changing the title of the book from 'The Art of Computer Programming' to 'The Analysis of Algorithms'." They said that will never sell; their focus group couldn't buy that one. I'm glad they stuck to the original title, although I'm also glad to see that several books have now come out called "The Analysis of Algorithms", 20 years down the line.

But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly.

I've got The Art of Computer Programming started out, and I'm working on my 12 chapters. I finish a rough draft of all 12 chapters by, I think it was like 1965. I've got 3,000 pages of notes, including a very good example of what you mentioned about seeing holes in the fabric. One of the most important chapters in the book is parsing: going from somebody's algebraic formula and figuring out the structure of the formula. Just the way I had done in seventh grade finding the structure of English sentences, I had to do this with mathematical sentences.

Chapter ten is all about parsing of context-free language, [which] is what we called it at the time. I covered what people had published about context-free languages and parsing. I got to the end of the chapter and I said, well, you can combine these ideas and these ideas, and all of a sudden you get a unifying thing which goes all the way to the limit. These other ideas had sort of gone partway there. They would say "Oh, if a grammar satisfies this condition, I can do it efficiently." "If a grammar satisfies this condition, I can do it efficiently." But now, all of a sudden, I saw there was a way to say I can find the most general condition that can be done efficiently without looking ahead to the end of the sentence. That you could make a decision on the fly, reading from left to right, about the structure of the thing. That was just a natural outgrowth of seeing the different pieces of the fabric that other people had put together, and writing it into a chapter for the first time. But I felt that this general concept, well, I didn't feel that I had surrounded the concept. I knew that I had it, and I could prove it, and I could check it, but I couldn't really intuit it all in my head. I knew it was right, but it was too hard for me, really, to explain it well.

So I didn't put in The Art of Computer Programming. I thought it was beyond the scope of my book. Textbooks don't have to cover everything when you get to the harder things; then you have to go to the literature. My idea at that time [is] I'm writing this book and I'm thinking it's going to be published very soon, so any little things I discover and put in the book I didn't bother to write a paper and publish in the journal because I figure it'll be in my book pretty soon anyway. Computer science is changing so fast, my book is bound to be obsolete.

It takes a year for it to go through editing, and people drawing the illustrations, and then they have to print it and bind it and so on. I have to be a little bit ahead of the state-of-the-art if my book isn't going to be obsolete when it comes out. So I kept most of the stuff to myself that I had, these little ideas I had been coming up with. But when I got to this idea of left-to-right parsing, I said "Well here's something I don't really understand very well. I'll publish this, let other people figure out what it is, and then they can tell me what I should have said." I published that paper I believe in 1965, at the end of finishing my draft of the chapter, which didn't get as far as that story, LR(k). Well now, textbooks of computer science start with LR(k) and take off from there. But I want to give you an idea of

[Sep 06, 2019] Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping).

Notable quotes:
"... Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping). ..."
Sep 06, 2019 | news.ycombinator.com
fhars on Mar 29, 2011
Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping).

The only widely used OO language (for sufficiently narrow values of wide and wide values of OO) to get that right used to be Objective Caml, and recently its stepchildren F# and scala. So it is actually FP that helps you with the classification.

Xurinos on Mar 29, 2011

This is a very interesting point and should be highlighted. You said implementation artifacts (especially in reference to reducing code duplication), and for clarity, I think you are referring to the definition of operators on data (class methods, friend methods, and so on).

I agree with you that subclassing (for the purpose of reusing behavior), traits (for adding behavior), and the like can be confused with classification to such an extent that modern designs tend to depart from type systems and be used for mere code organization.

ajays on Mar 29, 2011
"was there really a point to the illusion of wrapping the entrypoint main() function in a class (I am looking at you, Java)?"

Far be it from me to defend Java (I hate the damn thing), but: main is just a function in a class. The class is the entry point, as specified on the command line; main is just what the JVM looks for, by convention. You could have a "main" in each class, but only the one in the specified class will be the entry point.

GrooveStomp on Mar 29, 2011
The way of the theorist is to tell any non-theorist that the non-theorist is wrong, then leave without any explanation. Or, simply hand-wave the explanation away, claiming it is "too complex" to fully understand without years of rigorous training. Of course I jest. :)

[Sep 04, 2019] 737 MAX - Boeing Insults International Safety Regulators As New Problems Cause Longer Grounding

The 80286 Intel processors: The Intel 80286 (also marketed as the iAPX 286 and often called Intel 286) is a 16-bit microprocessor that was introduced on February 1, 1982. The 80286 was employed for the IBM PC/AT, introduced in 1984, and then widely used in most PC/AT compatible computers until the early 1990s.
Notable quotes:
"... The fate of Boeing's civil aircraft business hangs on the re-certification of the 737 MAX. The regulators convened an international meeting to get their questions answered and Boeing arrogantly showed up without having done its homework. The regulators saw that as an insult. Boeing was sent back to do what it was supposed to do in the first place: provide details and analysis that prove the safety of its planes. ..."
"... In recent weeks, Boeing and the FAA identified another potential flight-control computer risk requiring additional software changes and testing, according to two of the government and pilot officials. ..."
"... Any additional software changes will make the issue even more complicated. The 80286 Intel processors the FCC software is running on is limited in its capacity. All the extras procedures Boeing now will add to them may well exceed the system's capabilities. ..."
"... The old architecture was possible because the plane could still be flown without any computer. It was expected that the pilots would detect a computer error and would be able to intervene. The FAA did not require a high design assurance level (DAL) for the system. The MCAS accidents showed that a software or hardware problem can now indeed crash a 737 MAX plane. That changes the level of scrutiny the system will have to undergo. ..."
"... Flight safety regulators know of these complexities. That is why they need to take a deep look into such systems. That Boeing's management was not prepared to answer their questions shows that the company has not learned from its failure. Its culture is still one of finance orientated arrogance. ..."
"... I also want to add that Boeing's focus on profit over safety is not restricted to the 737 Max but undoubtedly permeates the manufacture of spare parts for the rest of the their plane line and all else they make.....I have no intention of ever flying in another Boeing airplane, given the attitude shown by Boeing leadership. ..."
"... So again, Boeing mgmt. mirrors its Neoliberal government officials when it comes to arrogance and impudence. ..."
"... Arrogance? When the money keeps flowing in anyway, it comes naturally. ..."
"... In the neoliberal world order governments, regulators and the public are secondary to corporate profits. ..."
"... I am surprised that none of the coverage has mentioned the fact that, if China's CAAC does not sign off on the mods, it will cripple, if not doom the MAX. ..."
"... I am equally surprised that we continue to sabotage China's export leader, as the WSJ reports today: "China's Huawei Technologies Co. accused the U.S. of "using every tool at its disposal" to disrupt its business, including launching cyberattacks on its networks and instructing law enforcement to "menace" its employees. ..."
"... Boeing is backstopped by the Murkan MIC, which is to say the US taxpayer. ..."
"... Military Industrial Complex welfare programs, including wars in Syria and Yemen, are slowly winding down. We are about to get a massive bill from the financiers who already own everything in this sector, because what they have left now is completely unsustainable, with or without a Third World War. ..."
"... In my mind, the fact that Boeing transferred its head office from Seattle (where the main manufacturing and presumable the main design and engineering functions are based) to Chicago (centre of the neoliberal economic universe with the University of Chicago being its central shrine of worship, not to mention supply of future managers and administrators) in 1997 says much about the change in corporate culture and values from a culture that emphasised technical and design excellence, deliberate redundancies in essential functions (in case of emergencies or failures of core functions), consistently high standards and care for the people who adhered to these principles, to a predatory culture in which profits prevail over people and performance. ..."
"... For many amerikans, a good "offensive" is far preferable than a good defense even if that only involves an apology. Remember what ALL US presidents say.. We will never apologize.. ..."
"... Actually can you show me a single place in the US where ethics are considered a bastion of governorship? ..."
"... You got to be daft or bribed to use intel cpu's in embedded systems. Going from a motorolla cpu, the intel chips were dinosaurs in every way. ..."
"... Initially I thought it was just the new over-sized engines they retro-fitted. A situation that would surely have been easier to get around by just going back to the original engines -- any inefficiencies being less $costly than the time the planes have been grounded. But this post makes the whole rabbit warren 10 miles deeper. ..."
"... That is because the price is propped up by $9 billion share buyback per year . Share buyback is an effective scheme to airlift all the cash out of a company towards the major shareholders. I mean, who wants to develop reliable airplanes if you can funnel the cash into your pockets? ..."
"... If Boeing had invested some of this money that it blew on share buybacks to design a new modern plane from ground up to replace the ancient 737 airframe, these tragedies could have been prevented, and Boeing wouldn't have this nightmare on its hands. But the corporate cost-cutters and financial engineers, rather than real engineers, had the final word. ..."
"... Markets don't care about any of this. They don't care about real engineers either. They love corporate cost-cutters and financial engineers. They want share buybacks, and if something bad happens, they'll overlook the $5 billion to pay for the fallout because it's just a "one-time item." ..."
"... Overall, Boeing buy-backs exceeded 40 billion dollars, one could guess that half or quarter of that would suffice to build a plane that logically combines the latest technologies. E.g. the entire frame design to fit together with engines, processors proper for the information processing load, hydraulics for steering that satisfy force requirements in almost all circumstances etc. New technologies also fail because they are not completely understood, but when the overall design is logical with margins of safety, the faults can be eliminated. ..."
"... Once the buyback ends the dive begins and just before it hits ground zero, they buy the company for pennies on the dollar, possibly with government bailout as a bonus. Then the company flies towards the next climb and subsequent dive. MCAS economics. ..."
"... The problem is not new, and it is well understood. What computer modelling is is cheap, and easy to fudge, and that is why it is popular with people who care about money a lot. Much of what is called "AI" is very similar in its limitations, a complicated way to fudge up the results you want, or something close enough for casual examination. ..."
Sep 04, 2019 | www.moonofalabama.org

United Airline and American Airlines further prolonged the grounding of their Boeing 737 MAX airplanes. They now schedule the plane's return to the flight line in December. But it is likely that the grounding will continue well into the next year.

After Boeing's shabby design and lack of safety analysis of its Maneuver Characteristics Augmentation System (MCAS) led to the death of 347 people, the grounding of the type and billions in losses, one would expect the company to show some decency and humility. Unfortunately Boeing's behavior demonstrates none.

There is still little detailed information on how Boeing will fix MCAS. Nothing was said by Boeing about the manual trim system of the 737 MAX that does not work when it is needed. The unprotected rudder cables of the plane do not meet safety guidelines but were still certified. The plane's flight control computers can be overwhelmed by bad data and a fix will be difficult to implement. Boeing continues to say nothing about these issues.

International flight safety regulators no longer trust the Federal Aviation Administration (FAA) which failed to uncover those problems when it originally certified the new type. The FAA was also the last regulator to ground the plane after two 737 MAX had crashed. The European Aviation Safety Agency (EASA) asked Boeing to explain and correct five major issues it identified. Other regulators asked additional questions.

Boeing needs to regain the trust of the airlines, pilots and passengers to be able to again sell those planes. Only full and detailed information can achieve that. But the company does not provide any.

As Boeing sells some 80% of its airplanes abroad it needs the good will of the international regulators to get the 737 MAX back into the air. This makes the arrogance it displayed in a meeting with those regulators inexplicable:

Friction between Boeing Co. and international air-safety authorities threatens a new delay in bringing the grounded 737 MAX fleet back into service, according to government and pilot union officials briefed on the matter.

The latest complication in the long-running saga, these officials said, stems from a Boeing briefing in August that was cut short by regulators from the U.S., Europe, Brazil and elsewhere, who complained that the plane maker had failed to provide technical details and answer specific questions about modifications in the operation of MAX flight-control computers.

The fate of Boeing's civil aircraft business hangs on the re-certification of the 737 MAX. The regulators convened an international meeting to get their questions answered and Boeing arrogantly showed up without having done its homework. The regulators saw that as an insult. Boeing was sent back to do what it was supposed to do in the first place: provide details and analysis that prove the safety of its planes.

What did the Boeing managers think those regulatory agencies are? Hapless lapdogs like the FAA managers who signed off on Boeing 'features' even after their engineers told them that these were not safe?

Buried in the Wall Street Journal piece quoted above is another little shocker:

In recent weeks, Boeing and the FAA identified another potential flight-control computer risk requiring additional software changes and testing, according to two of the government and pilot officials.

The new issue must be going beyond the flight control computer (FCC) issues the FAA identified in June.

Boeing's original plan to fix the uncontrolled activation of MCAS was to have both FCCs active at the same time and to switch MCAS off when the two computers disagree. That was already a huge change in the general architecture which so far consisted of one active and one passive FCC system that could be switched over when a failure occurred.
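
To illustrate the general idea of such a cross-channel comparison -- this is only a conceptual sketch, not Boeing's actual FCC code (which is not public), and every name, tolerance and threshold below is hypothetical -- a dual-channel disagreement check might look roughly like this in C:

/* Conceptual sketch only -- NOT Boeing's implementation (which is not public).
 * Two flight control computers each produce a trim command; the automatic
 * function is inhibited when the channels disagree persistently.
 * Names, tolerance and persistence filter are hypothetical. */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

#define DISAGREE_LIMIT  0.25   /* hypothetical tolerance, in trim units */
#define DISAGREE_CYCLES 3      /* hypothetical persistence filter, in cycles */

static int disagree_count = 0;

/* Returns true if the automatic trim command may be used this cycle. */
bool cross_channel_ok(double cmd_fcc_a, double cmd_fcc_b)
{
    if (fabs(cmd_fcc_a - cmd_fcc_b) > DISAGREE_LIMIT) {
        if (++disagree_count >= DISAGREE_CYCLES)
            return false;          /* persistent disagreement: inhibit the function */
    } else {
        disagree_count = 0;        /* channels agree again: reset the filter */
    }
    return true;
}

int main(void)
{
    /* Simulated commands: the channels drift apart over a few cycles. */
    double a[] = { 0.10, 0.12, 0.50, 0.55, 0.60 };
    double b[] = { 0.11, 0.13, 0.10, 0.10, 0.10 };
    for (int i = 0; i < 5; i++)
        printf("cycle %d: %s\n", i, cross_channel_ok(a[i], b[i]) ? "ok" : "inhibit");
    return 0;
}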

Any additional software changes will make the issue even more complicated. The 80286 Intel processors the FCC software is running on are limited in their capacity. All the extra procedures Boeing will now add to them may well exceed the system's capabilities.

Changing software in a delicate environment like a flight control computer is extremely difficult. There will always be surprising side effects or regressions where already corrected errors unexpectedly reappear.

The old architecture was possible because the plane could still be flown without any computer. It was expected that the pilots would detect a computer error and would be able to intervene. The FAA did not require a high design assurance level (DAL) for the system. The MCAS accidents showed that a software or hardware problem can now indeed crash a 737 MAX plane. That changes the level of scrutiny the system will have to undergo.

All procedures and functions of the software will have to be tested in all thinkable combinations to ensure that they will not block or otherwise influence each other. This will take months and there is a high chance that new issues will appear during these tests. They will require more software changes and more testing.

Flight safety regulators know of these complexities. That is why they need to take a deep look into such systems. That Boeing's management was not prepared to answer their questions shows that the company has not learned from its failure. Its culture is still one of finance orientated arrogance.

Building safe airplanes requires engineers who know that they may make mistakes and who have the humility to allow others to check and correct their work. It requires open communication about such issues. Boeing's say-nothing strategy will prolong the grounding of its planes. It will increase the damage to Boeing's financial situation and reputation.

--- Previous Moon of Alabama posts on Boeing 737 MAX issues:

Posted by b on September 3, 2019 at 18:05 UTC | Permalink


Choderlos de Laclos , Sep 3 2019 18:15 utc | 1

"The 80286 Intel processors the FCC software is running on is limited in its capacity." You must be joking, right? If this is the case, the problem is unfixable: you can't find two competent software engineers who can program these dinosaur 16-bit processors.
b , Sep 3 2019 18:22 utc | 2
You must be joking, right? If this is the case, the problem is unfixable: you can't find two competent software engineers who can program these dinosaur 16-bit processors.

One of the two is writing this.

Half-joking aside. The 737 MAX FCC runs on 80286 processors. There are tens of thousands of programmers available who can program them, though not all are qualified to write real-time systems. That resource is not a problem. The processor's inherent limits are one.

Meshpal , Sep 3 2019 18:24 utc | 3
Thanks b for the fine 737 MAX update. Other news sources seem to have dropped coverage. It is a very big deal that this grounding has lasted this long. Things are going to get real bad for Boeing if this bird does not get back in the air soon. In any case their credibility is tarnished if not downright trashed.
BraveNewWorld , Sep 3 2019 18:35 utc | 4
@1 Choderlos de Laclos

Whatever software language these are programmed in (my guess is C), the compilers still exist for it and do the translation from the human-readable code to the machine code for you. Of course the code could be assembler, but writing assembly code for a 286 is far easier than writing it for, say, an i9 because the CPU is so much simpler and has a far smaller set of instructions to work with.

Choderlos de Laclos , Sep 3 2019 18:52 utc | 5
@b: It was hyperbole. I might be another one, but I left them behind as fast as I could. The last time I had to deal with it was an embedded system in 1998-ish. But I am also retiring, and so are thousands of others. The problems with supporting a legacy system are legendary.
psychohistorian , Sep 3 2019 18:56 utc | 6
Thanks for the demise of Boeing update b

I commented when you first started writing about this that it would take Boeing down, and I still believe that to be true. The extent to which Boeing is stonewalling the international safety regulators says to me that upper management and big stockholders are being given time to minimize their exposure before the axe falls.

I also want to add that Boeing's focus on profit over safety is not restricted to the 737 Max but undoubtedly permeates the manufacture of spare parts for the rest of their plane line and all else they make.....I have no intention of ever flying in another Boeing airplane, given the attitude shown by Boeing leadership.

This is how private financialization works in the Western world. Their bottom line is profit, not service to the flying public. It is in line with the recent public statement by the CEO's from the Business Roundtable that said that they were going to focus more on customer satisfaction over profit but their actions continue to say profit is their primary motive.

The God of Mammon private finance religion can not end soon enough for humanity's sake. It is not like we all have to become China but their core public finance example is well worth following.

karlof1 , Sep 3 2019 19:13 utc | 7
So again, Boeing mgmt. mirrors its Neoliberal government officials when it comes to arrogance and impudence. IMO, Boeing shareholders' hair ought to be on fire given their BoD's behavior, and they ought to be getting ready to litigate.

As b notes, Boeing's international credibility's hanging by a very thin thread. A year from now, Boeing could very well see its share price deeply dive into the Penny Stock category--its current P/E is 41.5:1 which is massively overpriced. Boeing Bombs might come to mean something vastly different from its initial meaning.

bjd , Sep 3 2019 19:22 utc | 8
Arrogance? When the money keeps flowing in anyway, it comes naturally.
What did I just read , Sep 3 2019 19:49 utc | 10
Such seemingly archaic processors are the norm in aerospace. If the plane's flight characteristics had been properly engineered from the start the processor wouldn't be an issue. You can't just spray perfume on a garbage pile and call it a rose.
VietnamVet , Sep 3 2019 20:31 utc | 12
In the neoliberal world order governments, regulators and the public are secondary to corporate profits. This is the same belief system that is suspending the British Parliament to guarantee the chaos of a no-deal Brexit. The irony is that globalist Joe Biden's restarting of the Cold War and nationalist Donald Trump's Trade Wars both assure that foreign regulators will closely scrutinize the safety of the 737 MAX. Even if ignored by corporate media and cleared by the FAA to fly in the USA, Boeing and Wall Street's Dow Jones average are cooked geese with only 20% of the market. Taking the risk of flying the 737 MAX on their family vacation or on their next business trip might even get the credentialed class to realize that their subservient service to corrupt plutocrats is deadly in the long term.
jared , Sep 3 2019 20:55 utc | 14
It doesn't get any TBTF'er than Boeing. A bail-out is only a phone call away. With a down-turn looming, the line is forming.
Piotr Berman , Sep 3 2019 21:11 utc | 15
"The latest complication in the long-running saga, these officials said, stems from a Boeing BA, -2.66% briefing in August that was cut short by regulators from the U.S., Europe, Brazil and elsewhere, who complained that the plane maker had failed to provide technical details and answer specific questions about modifications in the operation of MAX flight-control computers."

It seems to me that Boeing had no intention to insult anybody, but it has an impossible task. After decades of applying duct tape and baling wire with much success, they finally designed an unfixable plane, and they can either abandon this line of business (narrow bodied airliners) or start working on a new design grounded in 21st century technologies.

Ken Murray , Sep 3 2019 21:12 utc | 16
Boeing's military sales are so much more significant and important to them, they are just ignoring/down-playing their commercial problem with the 737 MAX. Follow the real money.
Arata , Sep 3 2019 21:57 utc | 17
That is unbelievable: the flight control computer is based on the 80286! A control system needs real-time operation, at least some pre-emptive task operation, in terms of milliseconds or microseconds. Whatever way you program the 80286 you cannot achieve RT operation on it. I do not think that is the case. Maybe the 80286 is doing some peripheral work, other than control.
Bemildred , Sep 3 2019 22:11 utc | 18
It is quite likely (IMHO) that they are no longer able to provide the requested information, but of course they cannot say that.

I once wrote a keyboard driver for an 80286, part of an editor, in assembler, on my first PC type computer, I still have it around here somewhere I think, the keyboard driver, but I would be rusty like the Titanic when it comes to writing code. I wrote some things in DEC assembler too, on VAXen.

Peter AU 1 , Sep 3 2019 22:14 utc | 19
Arata 16

The spoiler system is fly by wire.

Bemildred , Sep 3 2019 22:17 utc | 20
arata @16: 80286 does interrupts just fine, but you have to grok asynchronous operation, and most coders don't really, I see that every day in Linux and my browser. I wish I could get that box back, it had DOS, you could program on the bare wires, but God it was slow.
Tod , Sep 3 2019 22:28 utc | 21
Boeing will just need to press the TURBO button on the 286 processor. Problem solved.
karlof1 , Sep 3 2019 22:43 utc | 23
Ken Murray @15--

Boeing recently lost a $6+Billion weapons contract thanks to its similar Q&A in that realm of its business. Its annual earnings are due out in October. Plan to short-sell soon!

Godfree Roberts , Sep 3 2019 22:56 utc | 24
I am surprised that none of the coverage has mentioned the fact that, if China's CAAC does not sign off on the mods, it will cripple, if not doom the MAX.

I am equally surprised that we continue to sabotage China's export leader, as the WSJ reports today: "China's Huawei Technologies Co. accused the U.S. of "using every tool at its disposal" to disrupt its business, including launching cyberattacks on its networks and instructing law enforcement to "menace" its employees.

The telecommunications giant also said law enforcement in the U.S. have searched, detained and arrested Huawei employees and its business partners, and have sent FBI agents to the homes of its workers to pressure them to collect information on behalf of the U.S."

https://www.wsj.com/articles/huawei-accuses-the-u-s-of-cyberattacks-threatening-its-employees-11567500484?mod=hp_lead_pos2

Arioch , Sep 3 2019 23:18 utc | 25
I wonder how much blind trust in Boeing is intertwined into the fabric of civic aviation all around the world.

I mean something like this: Boeing publishes some research into failure statistics, solid materials aging or something. One that is really hard and expensive to reproduce. Everyone takes the results for granted without trying to independently reproduce and verify them, because The Boeing!

Some later "derived" research is built upon the foundation of prior works, *including* that old Boeing research. Then the FAA and similar institutions around the world make official regulations and guidelines deriving from research which was in part derived from the original Boeing work. Then insurance companies calculate their tariffs and rate plans, basing their estimates upon those "government standards", and when governments determine taxation levels they use that data too. Then airline companies and airliner leasing companies make their business plans, take huge loans from the banks (and the banks make their own plans expecting those loans to finally be paid back), and so on and so forth, building the house of cards, layer after layer.

And among the very many cornerstones there would be dust-covered and god-forgotten research made by Boeing 10 or maybe 20 years ago, when no one, even in drunken delirium, could ever imagine questioning Boeing's verdicts on engineering and scientific matters.

Now, the longevity of that trust is slowly unraveling. Like, the so universally trusted 737NG generation turned out to be inherently unsafe, and while only pilots knew it before - and even of them, only the most curious and pedantic - today it is becoming public knowledge that the 737NG is tainted.

Now, when did this corruption start? What should be the deadline cast into the past, such that from that day on every piece of technical data coming from Boeing should be considered unreliable unless it passes full-fledged independent verification? Should that day be somewhere in the 2000s? The 1990s? Maybe even the 1970s?

And ALL THE BODY of civic aviation industry knowledge that was accumulated since that date can NO LONGER BE TRUSTED and should be almost scrapped and re-researched anew! ALL THE tacit INPUT that can be traced back to Boeing and ALL THE DERIVED KNOWLEDGE now has to be verified in its entirety.

Miss Lacy , Sep 3 2019 23:19 utc | 26
Boeing is backstopped by the Murkan MIC, which is to say the US taxpayer. Until the lawsuits become too enormous. I wonder how much that will cost. And speaking of rigged markets - why do ya suppose that Trumpilator et al have been so keen to make huge sales to the Saudis, etc. etc. ? Ya don't suppose they had an inkling of trouble in the wind do ya? Speaking of insiders, how many million billions do ya suppose is being made in the Wall Street "trade war" roller coaster by peeps, munchkins not muppets, who have access to the Tweeter-in-Chief?
C I eh? , Sep 3 2019 23:25 utc | 27
@6 psychohistorian
I commented when you first started writing about this that it would take Boeing down, and I still believe that to be true. The extent to which Boeing is stonewalling the international safety regulators says to me that upper management and big stockholders are being given time to minimize their exposure before the axe falls.

Have you considered the costs of restructuring versus breaking apart Boeing and selling it into little pieces; to the owners specifically?

The MIC is restructuring itself - by first creating the political conditions to make the transformation highly profitable. It can only be made highly profitable by forcing the public to pay the associated costs of Rape and Pillage Incorporated.

Military Industrial Complex welfare programs, including wars in Syria and Yemen, are slowly winding down. We are about to get a massive bill from the financiers who already own everything in this sector, because what they have left now is completely unsustainable, with or without a Third World War.

It is fine that you won't fly Boeing but that is not the point. You may not ever fly again since air transit is subsidized at every level and the US dollar will no longer be available to fund the world's air travel infrastructure.

You will instead be paying for the replacement of Boeing, and seeing what Google is planning, it may not be for the renewal of the airline business but rather for dedicated ground transportation, self-driving cars and perhaps 'aerospace' defense forces; thank you Russia for setting the trend.

Lochearn , Sep 3 2019 23:45 utc | 30
As readers may remember I made a case study of Boeing for a fairly recent PhD. The examiners insisted that this case study be taken out because it was "speculative." I had forecast serious problems with the 787 and the 737 MAX back in 2012. I still believe the 787 is seriously flawed and will go the way of the MAX. I came to admire this once brilliant company whose work culminated in the superb 777.

America really did make some excellent products in the 20th century - with the exception of cars. Big money piled into GM from the early 1920s, especially the ultra greedy, quasi fascist Du Pont brothers, with the result that GM failed to innovate. It produced beautiful cars but technically they were almost identical to previous models.

The only real innovation over 40 years was automatic transmission. Does this sound reminiscent of the 737 MAX? What glued together GM for more than thirty years was the brilliance of CEO Alfred Sloan, who managed to keep the Du Ponts (and J P Morgan) more or less happy while delegating total responsibility for production to divisional managers responsible for the different GM brands. When Sloan went, the company started falling apart, and the memoirs of bad boy John DeLorean testify to the complete dysfunctionality of senior management.

At Ford the situation was perhaps even worse in the 1960s and 1970s. Management was at war with the workers, and faulty transmissions were knowingly installed. All this is documented by ex-Ford supervisor Robert Dewar in his excellent book "A Savage Factory."

dus7 , Sep 3 2019 23:53 utc | 32
Well, the first thing that came to mind upon reading about Boeing's apparent arrogance overseas - silly, I know - was that Boeing may be counting on some weird Trump sanctions for anyone not cooperating with the big important USian corporation! The U.S. has influence on European and many other countries, but it can only be stretched so far, and I would guess messing with European/international airline regulators, especially in view of the very real fatal accidents with the 737 MAX, would be too far.
david , Sep 4 2019 0:09 utc | 34
Please read the following article to get further info about how the 5 big Funds that hold 67% of Boeing stocks are working hard with the big banks to keep the stock high. Meanwhile Boeing is also trying its best to blackmail US taxpayers through Pentagon, for example, by pretending to walk away from a competitive bidding contract because it wants the Air Force to provide better cost formula.

https://www.theamericanconservative.com/articles/despite-devastating-737-crashes-boeing-stocks-fly-high/

So basically, Boeing is being kept afloat by US taxpayers because it is "too big to fail" and an important component of the Dow. Please tell: who are the biggest suckers here?

chu teh , Sep 4 2019 0:13 utc | 36
re Piotr Berman | Sep 3 2019 21:11 utc [I have a tiny bit of standing in this matter based on experience with an amazingly similar situation that has not heretofore been mentioned. More at end. Thus I offer my opinion.] Indeed, an impossible task to design a workable answer and still maintain the fiction that the 737 MAX is a high-profit-margin upgrade requiring minimal training of already-trained 737-series pilots, either male or female. Turning off the autopilot to bypass a runaway stabilizer necessitates:

[1] The earlier 737-series "rollercoaster" procedure to overcome too-high aerodynamic forces must be taught and demonstrated as a memory item to all pilots. The procedure was designed for the early 737-series models, not the 737 MAX, which has a uniquely different center of gravity and a pitch-up problem requiring MCAS to auto-correct, especially on take-off.

[2] The "rollercoaster" procedure does not work at all altitudes. It causes the aircraft to lose some altitude and, therefore, requires at least [about] 7,000 feet of above-ground clearance to avoid ground contact. [The altitude loss consumed by the procedure is based on alleged reports of simulator demonstrations. There seems to be no known agreement on the actual amount of loss.]

[3] The physical requirements to perform the "rollercoaster" procedure were established at a time when female pilots were rare. Any 737 MAX pilots, male or female, will have to pass new physical requirements demonstrating actual conditions on newly-designed flight simulators that mimic the higher load requirements of the 737 MAX. Such new standards will also have to compensate for left- vs right-handed pilots because the manual-trim wheel is located between the pilot/copilot seats.

================

Now where/when has a similar situation occurred? I.e., wherein a Federal regulatory agency [the FAA] allowed a vendor [Boeing] to claim that a modified product did not need full inspection/review to get agency certification of performance [airworthiness]. As you may know, two working nuclear power plants were forced to shut down and be decommissioned when, in 2011, two newly-installed critical components in each plant were discovered to be defective, beyond repair and not replaceable. These power plants had each been producing over 1,000 megawatts of power for over 20 years. In short, the failed components were modifications of the original, successful design that claimed to need only a low level of Federal Nuclear Regulatory Commission oversight and approval. The mods were, in fact, new and untried and yet only tested by computer modeling and theoretical estimations based on experience with smaller/different designs.

<<< The NRC had not given full inspection/oversight to the new units because of manufacturer/operator claims that the changes were not significant. The NRC did not verify the veracity of those claims. >>>

All 4 components [2 required in each plant] were essentially heat-exchangers weighing 640 tons each, having 10,000 tubes carrying radioactive water surrounded by [transferring their heat to] a separate flow of "clean" water. The tubes were progressively damaged and began leaking. The new design failed. It can not be fixed. Thus, both plants of the San Onofre Nuclear Generating Station are now a complete loss and await dismantling [as the courts will decide who pays for the fiasco].

Jen , Sep 4 2019 0:20 utc | 37
In my mind, the fact that Boeing transferred its head office from Seattle (where the main manufacturing and presumably the main design and engineering functions are based) to Chicago (centre of the neoliberal economic universe with the University of Chicago being its central shrine of worship, not to mention supply of future managers and administrators) in 1997 says much about the change in corporate culture and values from a culture that emphasised technical and design excellence, deliberate redundancies in essential functions (in case of emergencies or failures of core functions), consistently high standards and care for the people who adhered to these principles, to a predatory culture in which profits prevail over people and performance.

Phew! I barely took a breath there! :-)

Lochearn , Sep 4 2019 0:22 utc | 38
@ 32 david

Good article. Boeing is, or used to be, America's biggest manufacturing export. So you are right it cannot be allowed to fail. Boeing is also a manufacturer of military aircraft. The fact that it is now in such a pitiful state is symptomatic of America's decline and decadence and its takeover by financial predators.

jo6pac , Sep 4 2019 0:39 utc | 40
Posted by: Jen | Sep 4 2019 0:20 utc | 35

Nailed it. Moved to the city of the dead-but-not-forgotten Uncle Milton Friedman, friend of Ayn Rand.

vk , Sep 4 2019 0:53 utc | 41
I don't think Boeing was arrogant. I think the 737 is simply unfixable and that they know that -- hence they went to the meeting with empty hands.
C I eh? , Sep 4 2019 1:14 utc | 42
They did the same with Nortel, whose share value exceeded 300 billion not long before it was scrapped. Insiders took everything while pension funds were wiped out of existence.

It is so very helpful to understand that everything you read is corporate/intel propaganda, and you are always being set up to pay for the next great scam. The murder of 300+ people by Boeing was yet another tragedy our sadistic elites could not let go to waste.

Walter , Sep 4 2019 3:10 utc | 43

...And to the idea that Boeing is being kept afloat by financial agencies.

Willow , Sep 4 2019 3:16 utc | 44
Al Jazeera has a series of excellent investigative documentaries on Boeing. Here is one from 2014. https://www.aljazeera.com/investigations/boeing787/
Igor Bundy , Sep 4 2019 3:17 utc | 45
For many amerikans, a good "offensive" is far preferable to a good defense even if that only involves an apology. Remember what ALL US presidents say.. We will never apologize.. For the extermination of natives, for shooting down civilian airliners, for blowing up mosques full of worshipers, for bombing hospitals.. for reducing many countries to the stone age and using biological and chemical and nuclear weapons against the planet.. For supporting terrorists who plague the planet now. For basically being able to be unaccountable to anyone including themselves as a peculiar race of feces. So it is not the least surprising that amerikan corporations also follow the same bad manners as those they put into and pre-elect to rule them.
Igor Bundy , Sep 4 2019 3:26 utc | 46
People talk about Seattle as if it's a bastion of integrity.. It's the same place where Microsoft screwed up countless companies to become the largest OS maker? The same place where Amazon fashions how to screw its own employees to work longer and cheaper? There are enough examples that Seattle is not Toronto.. and will never be a bastion of ethics..

Actually can you show me a single place in the US where ethics are considered a bastion of governorship? Other than the libraries of content written about ethics, rarely do amerikans ever follow it. Yet expect others to do so.. This is getting so perverse that other cultures are now beginning to emulate it. Because its everywhere..

Remember Dallas? I watched people who saw in fascination how business can function like that. Well, they can't in the long run, but throw enough money and resources at it and it works wonders in the short term because it destroys the competition. But yeah, around 1998 when they got rid of the laws on making money by magic, most everything has gone to hell.. because now there are no constraints but making money.. any which way.. That's all that matters..

Igor Bundy , Sep 4 2019 3:54 utc | 47
You got to be daft or bribed to use Intel CPUs in embedded systems. Going from a Motorola CPU, the Intel chips were dinosaurs in every way, requiring the CPU to be almost twice as fast to get the same thing done.. Also its interrupt control was not up to par. A simple example was how the Commodore Amiga could read from the disk and not stutter or slow down anything else you were doing. I've never seen this fixed.. In fact going from 8 MHz to 4 GHz seems to have fixed it by brute force. Yes, the 8 MHz Motorola CPU worked wonders when you had music, video, and IO all going at the same time. It's not just the CPU but the support chips which don't lock up the bus. Why would anyone use Intel? When there are so many specific embedded controllers designed for such specific things.
imo , Sep 4 2019 4:00 utc | 48
Initially I thought it was just the new over-sized engines they retro-fitted. A situation that would surely have been easier to get around by just going back to the original engines -- any inefficiencies being less $costly than the time the planes have been grounded. But this post makes the whole rabbit warren 10 miles deeper.

I do not travel much these days and find the cattle-class seating on these planes a major disincentive. Becoming aware of all these added technical issues I will now positively select for alternatives to 737 and bear the cost.

Joost , Sep 4 2019 4:25 utc | 50
I'm surprised Boeing stock still hasn't taken a nose dive

Posted by: Bob burger | Sep 3 2019 19:27 utc | 9

That is because the price is propped up by $9 billion in share buybacks per year. Share buybacks are an effective scheme to airlift all the cash out of a company towards the major shareholders. I mean, who wants to develop reliable airplanes if you can funnel the cash into your pockets?

Once the buyback ends the dive begins and just before it hits ground zero, they buy the company for pennies on the dollar, possibly with government bailout as a bonus. Then the company flies towards the next climb and subsequent dive. MCAS economics.

Henkie , Sep 4 2019 7:04 utc | 53
Hi, I am new here in writing but not in reading. About the 80286: where is the coprocessor, the 80287? How can the 80286 do IEEE math calculations? So how can it fly a controlled flight when it cannot calculate with the needed accuracy? How is it possible that this system is certified? It should have at least an 80386 DX, not an SX!!!!
snake , Sep 4 2019 7:35 utc | 54
moved to Chicago in 1997 says much about the change in corporate culture and values from a culture that emphasised technical and design excellence, deliberate redundancies in essential functions (in case of emergencies or failures of core functions), consistently high standards and care for the people who adhered to these principles, to a predatory culture in which profits prevail over people and performance.

Jen @ 35 < ==

yes, the morality of the companies and their exclusive hold on a complicit or controlled government always default the government to supporting, enforcing and encouraging the principles of economic Zionism.

But it is more than just the corporate culture => the corporate fat cats 1. use the rule-making powers of the government to make law for them. Such laws create high-valued assets from the pockets of the masses. The best-known of those corporate uses of government involves the intangible property laws (copyright, patent, and government franchise). The government-generated copyright, franchise and patent laws are monopolies. So when the government subsidizes a successful R&D project, its findings are packaged up into a set of monopolies [copyrights, patents and privatized government franchises], which means that instead of 50 or more companies competing for the next increment in technology, one gains the full advantage of that government research; only one can use or abuse it, and the patented and copyrighted technology is used to extract untold billions, in small increments, from the pockets of the public. 2. use the judicial power of governments and their courts, in both domestic and international settings, to police the use of, and to impose fake values on, intangible property monopolies. Government-made, privately owned monopoly rights (intangible property rights) generated from the pockets of the masses do two things: they exclude, deny and prevent would-be competition, and they create value in the form of a hidden revenue tax that passes to the privately held monopolist with each sale of a copyrighted, government-franchised, or patented service or product. Please note the one-two nature of the "use of government law-making powers to generate intangible private monopoly property rights".

Canthama , Sep 4 2019 10:37 utc | 56
There is no doubt Boeing has committed crimes with the 737 MAX; its arrogance & greed should be severely punished by the international community as an example to other global corporations. It represents the worst of Corporate America, which places profits in front of lives.
Christian J Chuba , Sep 4 2019 11:55 utc | 59
How is the U.S. keeping Russia out of the international market?

Iran and other sanctioned countries are a potential captive market and they have growth opportunities in what we sometimes call the non-aligned, emerging markets countries (Turkey, Africa, SE Asia, India, ...).

One thing I have learned is that the U.S. always games the system; we never play fair. So what did we do? Do their manufacturers use 1% U.S.-made parts, and do they need that for international certification?

BM , Sep 4 2019 12:48 utc | 60
Ultimately all of the issues in the news these days are the same one and the same issue - as the US gets closer and closer to the brink of catastrophic collapse they get ever more desperate. As they get more and more desperate they descend into what comes most naturally to the US - throughout its entire history - frenzied violence, total absence of morality, war, murder, genocide, and everything else that the US is so well known for (by those who are not blinded by exceptionalist propaganda).

The Hong Kong violence is a perfect example - it is impossible that a self-respecting nation state could allow itself to be seen to degenerate into such idiotic degeneracy, and so grossly flout the most basic human decency. Ergo, the US is not a self-respecting nation state. It is a failed state.

I am certain the arrogance of Boeing reflects two things: (a) an assurance from the US government that the government will back them to the hilt, come what may, to make sure that the 737Max flies again; and (b) a threat that if Boeing fails to get the 737Max in the air despite that support, the entire top level management and board of directors will be jailed. Boeing know very well they cannot deliver. But just as the US government is desperate to avoid the inevitable collapse of the US, the Boeing top management are desperate to avoid jail. It is a charade.

It is time for international regulators to withdraw certification totally - after the problems are all fixed (I don't believe they ever will be), the plane needs complete new certification of every detail from the bottom up, at Boeing's expense, and with total openness from Boeing. The current Boeing management are not going to cooperate with that, therefore the international regulators need to demand a complete replacement of the management and board of directors as a condition for working with them.

Piotr Berman , Sep 4 2019 13:23 utc | 61
From ZeroHedge link:

If Boeing had invested some of this money that it blew on share buybacks to design a new modern plane from ground up to replace the ancient 737 airframe, these tragedies could have been prevented, and Boeing wouldn't have this nightmare on its hands. But the corporate cost-cutters and financial engineers, rather than real engineers, had the final word.

Markets don't care about any of this. They don't care about real engineers either. They love corporate cost-cutters and financial engineers. They want share buybacks, and if something bad happens, they'll overlook the $5 billion to pay for the fallout because it's just a "one-time item."

And now Boeing still has this plane, instead of a modern plane, and the history of this plane is now tainted, as is its brand, and by extension, that of Boeing. But markets blow that off too. Nothing matters.

Companies are each getting away with their own thing. There are companies that are losing a ton of money and are burning tons of cash, with no indication that they will ever make money. And market valuations are just ludicrous.

======

Thus Boeing issue is part of a much larger picture. Something systemic had to make "markets" less rational. And who is this "market"? In large part, fund managers wracking their brains how to create "decent return" while the cost of borrowing and returns on lending are super low. What remains are forms of real estate and stocks.

Overall, Boeing buy-backs exceeded 40 billion dollars; one could guess that half or a quarter of that would suffice to build a plane that logically combines the latest technologies. E.g. the entire frame designed to fit together with the engines, processors proper for the information-processing load, hydraulics for steering that satisfy force requirements in almost all circumstances, etc. New technologies also fail because they are not completely understood, but when the overall design is logical, with margins of safety, the faults can be eliminated.

Instead, the 737 was slowly modified toward failure, eliminating safety margins one by one.

morongobill , Sep 4 2019 14:08 utc | 63

Regarding the 80286 and the 737, don't forget that the air traffic control system and the ICBM system use old technology as well.

Seems our big systems have feet of old silicon.

Allan Bowman , Sep 4 2019 15:15 utc | 66
Boeing has apparently either never heard of, or ignores, a procedure that is mandatory in satellite design and design reviews. This is FMEA, or Failure Modes and Effects Analysis. This requires design engineers to document the impact of every potential failure and combination of failures, thereby highlighting everything from catastrophic effects to mere annoyances. Clearly Boeing has done none of this, and their troubles are a direct result. It can be assumed that their arrogant and incompetent management has not yet understood just how serious their behavior is for the future of the company.
fx , Sep 4 2019 16:08 utc | 69
Once the buyback ends the dive begins and just before it hits ground zero, they buy the company for pennies on the dollar, possibly with government bailout as a bonus. Then the company flies towards the next climb and subsequent dive. MCAS economics.

Posted by: Joost | Sep 4 2019 4:25 utc | 50

Well put!

Bemildred , Sep 4 2019 16:11 utc | 70
Computer modelling is what they are talking about in the cliche "Garbage in, garbage out".

The problem is not new, and it is well understood. What computer modelling is is cheap, and easy to fudge, and that is why it is popular with people who care about money a lot. Much of what is called "AI" is very similar in its limitations, a complicated way to fudge up the results you want, or something close enough for casual examination.

In particular cases where you have a well-defined and well-mathematized theory, then you can get some useful results with models. Like in Physics, Chemistry.

And they can be useful for "realistic" training situations, like aircraft simulators. The old story about wargame failures against Iran is another such situation. A lot of video games are big simulations in essence. But that is not reality, it's fake reality.

Trond , Sep 4 2019 17:01 utc | 79
@ SteveK9 71 "By the way, the problem was caused by Mitsubishi, who designed the heat exchangers."

Ahh. The furriners...

I once made the "mistake" of pointing out (in a comment under an article in Salon) that the reactors that exploded at Fukushima was made by GE and that GE people was still in charge of the reactors of American quality when they exploded. (The amerikans got out on one of the first planes out of the country).

I have never seen so many angry replies to one of my comments. I even got e-mails for several weeks from angry Americans.

c1ue , Sep 4 2019 19:44 utc | 80
@Henkie #53 You need floating point for scientific calculations, but I really doubt the 737 is doing any scientific research. Also, a regular CPU can do mathematical calculations; it just isn't as fast, nor does it have the same capacity, as a dedicated FPU. Another common use for FPUs is in live-action shooter games - the neo-physics portions utilize scientific-like calculations to create lifelike actions. I sold computer systems in the 1990s while in school - Doom was a significant driver for newer systems (as well as hedge fund types). Again, I don't see why an airplane needs this.
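
As an aside on how integer-only hardware copes: embedded code on FPU-less parts typically uses scaled integers (fixed-point) rather than floating point. A minimal illustrative sketch follows -- my own, with a hypothetical Q8 scaling; real 80286-era code would more likely use 16-bit quantities and assembler helpers:

/* Fixed-point sketch: fractional math on integer-only hardware.
 * Values are scaled by 2^8 ("Q8"), so 1.0 is stored as 256.
 * Purely illustrative; the scaling and names are hypothetical. */
#include <stdio.h>
#include <stdint.h>

#define Q 8                               /* 8 fractional bits */
typedef int32_t fix_t;

static fix_t fix_from_double(double x) { return (fix_t)(x * (1 << Q)); }
static double fix_to_double(fix_t x)   { return (double)x / (1 << Q); }

/* Multiply two Q8 numbers; widen to 64 bits to avoid overflow, then rescale. */
static fix_t fix_mul(fix_t a, fix_t b) { return (fix_t)(((int64_t)a * b) >> Q); }

int main(void)
{
    fix_t gain  = fix_from_double(1.5);   /* 1.5  -> 384 in Q8 */
    fix_t input = fix_from_double(2.25);  /* 2.25 -> 576 in Q8 */
    fix_t out   = fix_mul(gain, input);   /* 3.375, computed with integers only */
    printf("%f\n", fix_to_double(out));   /* prints 3.375000 */
    return 0;
}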

[Aug 31, 2019] Programming is about Effective Communication

Aug 31, 2019 | developers.slashdot.org

Anonymous Coward , Friday February 22, 2019 @02:42PM ( #58165060 )

Algorithms, not code ( Score: 4 , Insightful)

Sad to see these are all books about coding and coding style. Nothing at all here about algorithms, or data structures.

My vote goes for Algorithms by Sedgewick

Seven Spirals ( 4924941 ) , Friday February 22, 2019 @02:57PM ( #58165150 )
MOTIF Programming by Marshall Brain ( Score: 3 )

Amazing how little memory and CPU MOTIF applications take. Once you get over the callbacks, it's actually not bad!

Seven Spirals ( 4924941 ) writes:
Re: ( Score: 2 )

Interesting. Sorry you had that experience. I'm not sure what you mean by a "multi-line text widget". I can tell you that early versions of OpenMOTIF were very very buggy in my experience. You probably know this, but after OpenMOTIF was completed and revved a few times the original MOTIF code was released as open-source. Many of the bugs I'd been seeing (and some just strange visual artifacts) disappeared. I know a lot of people love QT and it's produced real apps and real results - I won't poo-poo it. How

SuperKendall ( 25149 ) writes:
Design and Evolution of C++ ( Score: 2 )

Even if you don't like C++ much, The Design and Evolution of C++ [amazon.com] is a great book for understanding why pretty much any language ends up the way it does, seeing the tradeoffs and how a language comes to grow and expand from simple roots. It's way more interesting to read than you might expect (not very dry, and more about human interaction than you would expect).

Other than that reading through back posts in a lot of coding blogs that have been around a long time is probably a really good idea.

Also a side re

shanen ( 462549 ) writes:
What about books that hadn't been written yet? ( Score: 2 )

You young whippersnappers don't 'preciate how good you have it!

Back in my day, the only book about programming was the 1401 assembly language manual!

But seriously, folks, it's pretty clear we still don't know shite about how to program properly. We have some fairly clear success criteria for improving the hardware, but the criteria for good software are clear as mud, and the criteria for ways to produce good software are much muddier than that.

Having said that, I will now peruse the thread rather carefully

shanen ( 462549 ) writes:
TMI, especially PII ( Score: 2 )

Couldn't find any mention of Guy Steele, so I'll throw in The New Hacker's Dictionary , which I once owned in dead tree form. Not sure if Version 4.4.7 http://catb.org/jargon/html/ [catb.org] is the latest online... Also remember a couple of his language manuals. Probably used the Common Lisp one the most...

Didn't find any mention of a lot of books that I consider highly relevant, but that may reflect my personal bias towards history. Not really relevant for most programmers.

TMI, but if I open up my database on all t

UnknownSoldier ( 67820 ) , Friday February 22, 2019 @03:52PM ( #58165532 )
Programming is about **Effective Communication** ( Score: 5 , Insightful)

I've been programming for the past ~40 years and I'll try to summarize what I believe are the most important bits about programming (pardon the pun.) Think of this as a META: " HOWTO: Be A Great Programmer " summary. (I'll get to the books section in a bit.)

1. All code can be summarized as a trinity of 3 fundamental concepts:

* Linear ; that is, sequence: A, B, C
* Cyclic ; that is, unconditional jumps: A-B-C-goto B
* Choice ; that is, conditional jumps: if A then B
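
A tiny illustration in C of these three building blocks (mine, not part of the original post):

/* Sequence, loop (cyclic), and choice -- the whole control-flow toolbox.
 * Illustrative only. */
#include <stdio.h>

int main(void)
{
    /* Linear: statements run one after another -- A, B, C */
    int a = 1;
    int b = a + 1;
    int c = b + 1;

    /* Cyclic: a loop is a structured jump back to an earlier point */
    int sum = 0;
    for (int i = 0; i < c; i++)
        sum += i;

    /* Choice: a conditional jump -- if A then B */
    if (sum > 2)
        printf("sum = %d\n", sum);   /* prints "sum = 3" */
    return 0;
}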

2. ~80% of programming is NOT about code; it is about Effective Communication. Whether that be:

* with your compiler / interpreter / REPL
* with other code (levels of abstraction, level of coupling, separation of concerns, etc.)
* with your boss(es) / manager(s)
* with your colleagues
* with your legal team
* with your QA dept
* with your customer(s)
* with the general public

The other ~20% is effective time management and design. A good programmer knows how to budget their time. Programming is about balancing the three conflicting goals of the Project Management Triangle [wikipedia.org]: You can have it on time, on budget, on quality. Pick two.

3. Stages of a Programmer

There are two old jokes:

In Lisp all code is data. In Haskell all data is code.

And:

Progression of a (Lisp) Programmer:

* The newbie realizes that the difference between code and data is trivial.
* The expert realizes that all code is data.
* The true master realizes that all data is code.

(Attributed to Aristotle Pagaltzis)

The point of these jokes is that as you work with systems you start to realize that a data-driven process can often greatly simplify things.
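
As a small, self-contained illustration (mine, not from the original post) of what "data-driven" buys you: a chain of special-case if/else code collapses into a table that the code merely walks:

/* Illustrative sketch: a table-driven command dispatcher.
 * Adding a command is a one-line data change, not new control flow. */
#include <stdio.h>
#include <string.h>

struct command {
    const char *name;
    void (*handler)(void);
};

static void do_start(void) { puts("starting"); }
static void do_stop(void)  { puts("stopping"); }
static void do_help(void)  { puts("commands: start stop help"); }

static const struct command commands[] = {
    { "start", do_start },
    { "stop",  do_stop  },
    { "help",  do_help  },
};

int main(int argc, char **argv)
{
    const char *cmd = argc > 1 ? argv[1] : "help";
    for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++) {
        if (strcmp(cmd, commands[i].name) == 0) {
            commands[i].handler();     /* the data, not the code, decides */
            return 0;
        }
    }
    do_help();
    return 1;
}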

4. Know Thy Data

Fred Brooks once wrote

"Show me your flowcharts (source code), and conceal your tables (domain model), and I shall continue to be mystified; show me your tables (domain model) and I won't usually need your flowcharts (source code): they'll be obvious."

A more modern version would read like this:

Show me your code and I'll have to see your data,
Show me your data and I won't have to see your code.

The importance of data can't be overstated:

* Optimization STARTS with understanding HOW the data is being generated and used, NOT the code as has been traditionally taught.
* Post 2000 "Big Data" has been called the new oil. We are generating upwards to millions of GB of data every second. Analyzing that data is import to spot trends and potential problems.

5. There are three levels of optimizations. From slowest to fastest run-time:

a) Bit-twiddling hacks [stanford.edu]
b) Algorithmic -- Algorithmic complexity or Analysis of algorithms [wikipedia.org] (such as Big-O notation)
c) Data-Orientated Design [dataorienteddesign.com] -- Understanding how hardware caches such as instruction and data caches matter. Optimize for the common case, NOT the single case that OOP tends to favor.

Optimizing is understanding Bang-for-the-Buck. 80% of execution time is spent in 20% of the code. Speeding up hot-spots with bit twiddling won't be as effective as using a more efficient algorithm which, in turn, won't be as efficient as understanding HOW the data is manipulated in the first place.
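
As a sketch of level (c) above -- my own illustration, with hypothetical names and sizes -- laying the data out the way the hot loop actually uses it (struct-of-arrays instead of array-of-structs) often buys more than micro-optimizing the loop body itself:

/* Illustrative only: data-oriented layout. The update loop touches only
 * positions, so a struct-of-arrays keeps exactly that data densely packed
 * in cache instead of dragging unrelated fields through it. */
#include <stddef.h>

#define N 10000

/* Array-of-structs: every iteration would also load health/name it never uses. */
struct entity_aos { float x, y; float health; char name[32]; };

/* Struct-of-arrays: the hot loop streams through just x and y. */
struct entities_soa {
    float x[N];
    float y[N];
    float health[N];
    char  name[N][32];
};

void update_positions(struct entities_soa *e, float dx, float dy)
{
    for (size_t i = 0; i < N; i++) {   /* contiguous, cache-friendly accesses */
        e->x[i] += dx;
        e->y[i] += dy;
    }
}

int main(void)
{
    static struct entities_soa world;  /* static: too large for the stack */
    update_positions(&world, 1.0f, 0.5f);
    return 0;
}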

6. Fundamental Reading

Since the OP specifically asked about books -- there are lots of great ones. The ones that have impressed me that I would mark as "required" reading:

* The Mythical Man-Month
* Godel, Escher, Bach
* Knuth: The Art of Computer Programming
* The Pragmatic Programmer
* Zero Bugs and Program Faster
* Writing Solid Code / Code Complete by Steve McConnell
* Game Programming Patterns [gameprogra...tterns.com] (*)
* Game Engine Design
* Thinking in Java by Bruce Eckel
* Puzzles for Hackers by Ivan Sklyarov

(*) I did NOT list Design Patterns: Elements of Reusable Object-Oriented Software as that leads to typical, bloated, over-engineered crap. The main problem with "Design Patterns" is that a programmer will often get locked into a mindset of seeing everything as a pattern -- even when a simple few lines of code would solve the problem. For example, here are 1,100+ lines of Crap++ code such as Boost's over-engineered CRC code [boost.org] when a mere ~25 lines of SIMPLE C code would have done the trick. When was the last time you ACTUALLY needed to _modify_ a CRC function? The BIG picture is that you are probably looking for a BETTER HASHING function with less collisions. You probably would be better off using a DIFFERENT algorithm such as SHA-2, etc.
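
For what it's worth, the following is roughly the kind of "~25 lines of SIMPLE C" being alluded to -- a plain bitwise CRC-32 (IEEE 802.3 polynomial, reflected form) written as an illustration, not taken from any particular codebase; a table-driven version would be faster without being harder to read:

/* Plain bitwise CRC-32 (IEEE 802.3 / zlib polynomial, reflected form).
 * Illustrative: favors clarity over speed. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

uint32_t crc32(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;                     /* fold the next byte into the register */
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;                         /* final inversion */
}

int main(void)
{
    const char *msg = "123456789";
    printf("%08X\n", crc32(msg, strlen(msg)));  /* standard check value: CBF43926 */
    return 0;
}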

7. Do NOT copy-pasta

Roughly 80% of bugs creep in because someone blindly copied-pasted without thinking. Type out ALL code so you actually THINK about what you are writing.

8. K.I.S.S.

Over-engineering, aka technical debt, will be your Achilles' heel. Keep It Simple, Silly.

9. Use DESCRIPTIVE variable names

You spend ~80% of your time READING code, and only ~20% writing it. Use good, descriptive variable names. Far too many programmers write useless comments and don't understand the difference between code and comments:

Code says HOW, Comments say WHY

A crap comment will say something like: // increment i

No, Shit Sherlock! Don't comment the obvious!

A good comment will say something like: // BUGFIX: 1234: Work-around issues caused by A, B, and C.

10. Ignoring Memory Management doesn't make it go away -- now you have two problems. (With apologies to JWZ)

TINSTAAFL.

11. Learn Multi-Paradigm programming [wikipedia.org].

If you don't understand both the pros and cons of these programming paradigms ...

* Procedural
* Object-Orientated
* Functional, and
* Data-Orientated Design

... then you will never really understand programming, nor abstraction, at a deep level, along with how and when it should and shouldn't be used.

12. Multi-disciplinary POV

ALL non-trivial code has bugs. If you aren't using static code analysis [wikipedia.org] then you are not catching as many bugs as the people who are.

Also, a good programmer looks at his code from many different angles. As a programmer you must put on many different hats to find them:

* Architect -- design the code
* Engineer / Construction Worker -- implement the code
* Tester -- test the code
* Consumer -- doesn't see the code, only sees the results. Does it even work?? Did you VERIFY it did BEFORE you checked your code into version control?

13. Learn multiple Programming Languages

Each language was designed to solve certain problems. Learning different languages, even ones you hate, will expose you to different concepts. e.g. If you don't know how to read assembly language AND your high-level language then you will never be as good as the programmer who does both.

14. Respect your Colleagues' and Consumers Time, Space, and Money.

Mobile games are the WORST at respecting people's time, space and money, turning "players into payers." They treat customers as whales. Don't do this. A practical example: If you are in a Slack channel with 50+ people, do NOT use @here. YOUR fire is not their emergency!

15. Be Passionate

If you aren't passionate about programming, that is, you are only doing it for the money, it will show. Take some pride in doing a GOOD job.

16. Perfect Practice Makes Perfect.

If you aren't programming every day you will never be as good as someone who is. Programming is about solving interesting problems. Practice solving puzzles to develop your intuition and lateral thinking. The more you practice the better you get.

"Sorry" for the book but I felt it was important to summarize the "essentials" of programming.


raymorris ( 2726007 ) , Friday February 22, 2019 @05:39PM ( #58166230 ) Journal
Shared this with my team ( Score: 4 , Insightful)

You crammed a lot of good ideas into a short post.
I'm sending my team at work a link to your post.

You mentioned code and data. Linus Torvalds had this to say:

"I'm a huge proponent of designing your code around the data, rather than the other way around, and I think it's one of the reasons git has been fairly successful [â¦] I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important."

"Bad programmers worry about the code. Good programmers worry about data structures and their relationships."

I'm inclined to agree. Once the data structure is right, the code often almost writes itself. It'll be easy to write and easy to read because it's obvious how one would handle data structured in that elegant way.

Writing the code necessary to transform the data from the input format into the right structure can be non-obvious, but it's normally worth it.
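A tiny sketch of what that looks like in practice (the log format and all names are invented): once the flat input is reshaped into the structure the problem actually wants -- pages grouped by user -- the remaining code is almost trivial.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public final class DataFirst {
        // Non-obvious part: transform flat "user,page" log lines into the structure we want.
        static Map<String, List<String>> pagesByUser(List<String> logLines) {
            Map<String, List<String>> byUser = new HashMap<>();
            for (String line : logLines) {
                String[] parts = line.split(",", 2);   // assumes well-formed "user,page" lines
                byUser.computeIfAbsent(parts[0], k -> new ArrayList<>()).add(parts[1]);
            }
            return byUser;
        }

        // Once the structure is right, the queries write themselves.
        static int pageViews(Map<String, List<String>> byUser, String user) {
            return byUser.getOrDefault(user, List.of()).size();
        }

        public static void main(String[] args) {
            var byUser = pagesByUser(List.of("alice,/home", "bob,/cart", "alice,/cart"));
            System.out.println(pageViews(byUser, "alice"));   // 2
        }
    }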

[Aug 31, 2019] Slashdot Asks How Did You Learn How To Code - Slashdot

Aug 31, 2019 | ask.slashdot.org

GreatDrok ( 684119 ) , Saturday June 04, 2016 @10:03PM ( #52250917 ) Journal

Programming, not coding ( Score: 5 , Interesting)

I learnt to program at school from a Ph.D. computer scientist. We never even had computers in the class. We learnt to break the problem down into sections using flowcharts or pseudo-code, and then we would translate that program into whatever coding language we were using. I still do this, usually in my notebook: I figure out all the things I need to do, write the skeleton of the code as a series of comments describing each section of my program, and then fill in the code for each section. It is a combination of top-down and bottom-up programming, writing routines that can be independently tested and validated.

[Jul 23, 2019] Object-Oriented Programming -- The Trillion Dollar Disaster

While the OO critique is good (although most points are far from new) and to the point, the proposed solution is not. There is no universal opener for creating elegant, reliable programs.
Notable quotes:
"... Object-Oriented Programming has been created with one goal in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to improve code organization. There's no objective and open evidence that OOP is better than plain procedural programming. ..."
"... The bitter truth is that OOP fails at the only task it was intended to address. It looks good on paper -- we have clean hierarchies of animals, dogs, humans, etc. However, it falls flat once the complexity of the application starts increasing. Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns . OOP makes common development practices, like refactoring and testing, needlessly hard. ..."
"... C++ is a horrible [object-oriented] language And limiting your project to C means that people don't screw things up with any idiotic "object model" c&@p. -- Linus Torvalds, the creator of Linux ..."
"... Many dislike speed limits on the roads, but they're essential to help prevent people from crashing to death. Similarly, a good programming framework should provide mechanisms that prevent us from doing stupid things. ..."
"... Unfortunately, OOP provides developers too many tools and choices, without imposing the right kinds of limitations. Even though OOP promises to address modularity and improve reusability, it fails to deliver on its promises (more on this later). OOP code encourages the use of shared mutable state, which has been proven to be unsafe time and time again. OOP typically requires a lot of boilerplate code (low signal-to-noise ratio). ..."
Jul 23, 2019 | medium.com

The ultimate goal of every software developer should be to write reliable code. Nothing else matters if the code is buggy and unreliable. And what is the best way to write code that is reliable? Simplicity . Simplicity is the opposite of complexity . Therefore our first and foremost responsibility as software developers should be to reduce code complexity.

Disclaimer

I'll be honest, I'm not a raving fan of object-orientation. Of course, this article is going to be biased. However, I have good reasons to dislike OOP.

I also understand that criticism of OOP is a very sensitive topic -- I will probably offend many readers. However, I'm doing what I think is right. My goal is not to offend, but to raise awareness of the issues that OOP introduces.

I'm not criticizing Alan Kay's OOP -- he is a genius. I wish OOP was implemented the way he designed it. I'm criticizing the modern Java/C# approach to OOP.

I will also admit that I'm angry. Very angry. I think that it is plain wrong that OOP is considered the de-facto standard for code organization by many people, including those in very senior technical positions. It is also wrong that many mainstream languages don't offer any other alternatives to code organization other than OOP.

Hell, I used to struggle a lot myself while working on OOP projects. And I had no single clue why I was struggling this much. Maybe I wasn't good enough? I had to learn a couple more design patterns (I thought)! Eventually, I got completely burned out.

This post sums up my first-hand decade-long journey from Object-Oriented to Functional programming. I've seen it all. Unfortunately, no matter how hard I try, I can no longer find use cases for OOP. I have personally seen OOP projects fail because they become too complex to maintain.


TLDR

Object oriented programs are offered as alternatives to correct ones -- Edsger W. Dijkstra , pioneer of computer science

<img src="https://miro.medium.com/max/1400/1*MTb-Xx5D0H6LUJu_cQ9fMQ.jpeg" width="700" height="467"/>
Photo by Sebastian Herrmann on Unsplash

Object-Oriented Programming has been created with one goal in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to improve code organization. There's no objective and open evidence that OOP is better than plain procedural programming.

The bitter truth is that OOP fails at the only task it was intended to address. It looks good on paper -- we have clean hierarchies of animals, dogs, humans, etc. However, it falls flat once the complexity of the application starts increasing. Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns . OOP makes common development practices, like refactoring and testing, needlessly hard.

Some might disagree with me, but the truth is that modern OOP has never been properly designed. It never came out of a proper research institution (in contrast with Haskell/FP). I do not consider Xerox or another enterprise to be a "proper research institution". OOP doesn't have decades of rigorous scientific research to back it up. Lambda calculus offers a complete theoretical foundation for Functional Programming. OOP has nothing to match that. OOP mainly "just happened".

Using OOP is seemingly innocent in the short-term, especially on greenfield projects. But what are the long-term consequences of using OOP? OOP is a time bomb, set to explode sometime in the future when the codebase gets big enough.

Projects get delayed, deadlines get missed, developers get burned-out, adding in new features becomes next to impossible. The organization labels the codebase as the "legacy codebase" , and the development team plans a rewrite .

OOP is not natural for the human brain, our thought process is centered around "doing" things -- go for a walk, talk to a friend, eat pizza. Our brains have evolved to do things, not to organize the world into complex hierarchies of abstract objects.

OOP code is non-deterministic -- unlike with functional programming, we're not guaranteed to get the same output given the same inputs. This makes reasoning about the program very hard. As an oversimplified example, the output of 2+2 or calculator.Add(2, 2) is mostly equal to four, but sometimes it might turn out to be three, five, and maybe even 1004. The dependencies of the Calculator object might change the result of the computation in subtle but profound ways.
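A contrived sketch of that claim (Calculator and RoundingPolicy are invented names): the call site looks like pure arithmetic, but the answer depends on behavior and state injected somewhere else entirely.

    // All classes share one file for brevity; none of this is a real API.
    interface RoundingPolicy {
        double apply(double value);
    }

    final class Calculator {
        private final RoundingPolicy policy;   // injected dependency with its own behavior
        private double lastResult;             // mutable internal state

        Calculator(RoundingPolicy policy) {
            this.policy = policy;
        }

        double add(double a, double b) {
            lastResult = policy.apply(a + b);  // the result now depends on the policy object
            return lastResult;
        }
    }

    final class NonDeterminismDemo {
        public static void main(String[] args) {
            Calculator honest = new Calculator(v -> v);
            Calculator surprising = new Calculator(v -> Math.floor(v * 0.75)); // someone's "optimization"
            System.out.println(honest.add(2, 2));      // 4.0
            System.out.println(surprising.add(2, 2));  // 3.0 -- same call, different answer
        }
    }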


The Need for a Resilient Framework

I know, this may sound weird, but as programmers, we shouldn't trust ourselves to write reliable code. Personally, I am unable to write good code without a strong framework to base my work on. Yes, there are frameworks that concern themselves with some very particular problems (e.g. Angular or ASP.Net).

I'm not talking about the software frameworks. I'm talking about the more abstract dictionary definition of a framework: "an essential supporting structure " -- frameworks that concern themselves with the more abstract things like code organization and tackling code complexity. Even though Object-Oriented and Functional Programming are both programming paradigms, they're also both very high-level frameworks.

Limiting our choices

C++ is a horrible [object-oriented] language And limiting your project to C means that people don't screw things up with any idiotic "object model" c&@p. -- Linus Torvalds, the creator of Linux

Linus Torvalds is widely known for his open criticism of C++ and OOP. One thing he was 100% right about is limiting programmers in the choices they can make. In fact, the fewer choices programmers have, the more resilient their code becomes. In the quote above, Linus Torvalds highly recommends having a good framework to base our code upon.

<img src="https://miro.medium.com/max/1400/1*ujt2PMrbhCZuGhufoxfr5w.jpeg" width="700" height="465"/>
Photo by specphotops on Unsplash

Many dislike speed limits on the roads, but they're essential to help prevent people from crashing to death. Similarly, a good programming framework should provide mechanisms that prevent us from doing stupid things.

A good programming framework helps us to write reliable code. First and foremost, it should help reduce complexity by providing the following things:

  1. Modularity and reusability
  2. Proper state isolation
  3. High signal-to-noise ratio

Unfortunately, OOP provides developers too many tools and choices, without imposing the right kinds of limitations. Even though OOP promises to address modularity and improve reusability, it fails to deliver on its promises (more on this later). OOP code encourages the use of shared mutable state, which has been proven to be unsafe time and time again. OOP typically requires a lot of boilerplate code (low signal-to-noise ratio).

... ... ...

Messaging

Alan Kay coined the term "Object Oriented Programming" in the 1960s. He had a background in biology and was attempting to make computer programs communicate the same way living cells do.

<img src="https://miro.medium.com/max/1400/1*bzRsnzakR7O4RMbDfEZ1sA.jpeg" width="700" height="467"/>
Photo by Muukii on Unsplash

Alan Kay's big idea was to have independent programs (cells) communicate by sending messages to each other. The state of the independent programs would never be shared with the outside world (encapsulation).

That's it. OOP was never intended to have things like inheritance, polymorphism, the "new" keyword, and the myriad of design patterns.

OOP in its purest form

Erlang is OOP in its purest form. Unlike more mainstream languages, it focuses on the core idea of OOP -- messaging. In Erlang, objects communicate by passing immutable messages to one another.

Is there proof that immutable messages are a superior approach compared to method calls?

Hell yes! Erlang is probably the most reliable language in the world. It powers most of the world's telecom (and hence the internet) infrastructure. Some of the systems written in Erlang have reliability of 99.9999999% (you read that right -- nine nines).

Code Complexity

With OOP-inflected programming languages, computer software becomes more verbose, less readable, less descriptive, and harder to modify and maintain.

-- Richard Mansfield

The most important aspect of software development is keeping the code complexity down. Period. None of the fancy features matter if the codebase becomes impossible to maintain. Even 100% test coverage is worth nothing if the codebase becomes too complex and unmaintainable .

What makes the codebase complex? There are many things to consider, but in my opinion, the top offenders are: shared mutable state, erroneous abstractions, and low signal-to-noise ratio (often caused by boilerplate code). All of them are prevalent in OOP.


The Problems of State

<img src="https://miro.medium.com/max/1400/1*1WeuR9OoKyD5EvtT9KjXOA.jpeg" width="700" height="467"/>
Photo by Mika Baumeister on Unsplash

What is state? Simply put, state is any temporary data stored in memory. Think variables or fields/properties in OOP. Imperative programming (including OOP) describes computation in terms of the program state and changes to that state. Declarative (functional) programming describes the desired results instead, and doesn't specify changes to the state explicitly.
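A minimal side-by-side of the two styles, using nothing beyond the JDK: the imperative version spells out each change to a mutable accumulator, while the declarative version only states the desired result.

    import java.util.List;

    public final class StateDemo {
        public static void main(String[] args) {
            List<Integer> xs = List.of(1, 2, 3, 4, 5);

            // Imperative: the computation is expressed as explicit changes to state.
            int sum = 0;
            for (int x : xs) {
                if (x % 2 == 0) {
                    sum += x * x;              // mutate the accumulator step by step
                }
            }

            // Declarative: describe the result; no visible mutable state.
            int sum2 = xs.stream()
                         .filter(x -> x % 2 == 0)
                         .mapToInt(x -> x * x)
                         .sum();

            System.out.println(sum + " " + sum2);   // 20 20
        }
    }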

... ... ...

To make the code more efficient, objects are passed not by their value, but by their reference . This is where "dependency injection" falls flat.

Let me explain. Whenever we create an object in OOP, we pass references to its dependencies to the constructor . Those dependencies also have their own internal state. The newly created object happily stores references to those dependencies in its internal state and is then happy to modify them in any way it pleases. And it also passes those references down to anything else it might end up using.

This creates a complex graph of promiscuously shared objects that all end up changing each other's state. This, in turn, causes huge problems since it becomes almost impossible to see what caused the program state to change. Days might be wasted trying to debug such state changes. And you're lucky if you don't have to deal with concurrency (more on this later).
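Here is a compressed sketch of that failure mode (Cache, ReportService, and CleanupJob are invented names; a real codebase just has many more hops between the two mutations):

    import java.util.ArrayList;
    import java.util.List;

    // Two objects receive a reference to the same mutable dependency and both change it.
    final class Cache {
        final List<String> entries = new ArrayList<>();
    }

    final class ReportService {
        private final Cache cache;
        ReportService(Cache cache) { this.cache = cache; }
        void run() { cache.entries.add("report-row"); }   // mutates shared state
    }

    final class CleanupJob {
        private final Cache cache;
        CleanupJob(Cache cache) { this.cache = cache; }
        void run() { cache.entries.clear(); }             // so does this one
    }

    final class SharedStateDemo {
        public static void main(String[] args) {
            Cache shared = new Cache();
            new ReportService(shared).run();
            new CleanupJob(shared).run();                  // far away in a real codebase
            System.out.println(shared.entries);            // [] -- who emptied it, and when?
        }
    }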

Methods/Properties

The methods or properties that provide access to particular fields are no better than changing the value of a field directly. It doesn't matter whether you mutate an object's state by using a fancy property or method -- the result is the same: mutated state.

Some people say that OOP tries to model the real world. This is simply not true -- OOP has nothing to relate to in the real world. Trying to model programs as objects probably is one of the biggest OOP mistakes.

The real world is not hierarchical

OOP attempts to model everything as a hierarchy of objects. Unfortunately, that is not how things work in the real world. Objects in the real world interact with each other using messages, but they mostly are independent of each other.

Inheritance in the real world

OOP inheritance is not modeled after the real world. The parent object in the real world is unable to change the behavior of child objects at run-time. Even though you inherit your DNA from your parents, they're unable to make changes to your DNA as they please. You do not inherit "behaviors" from your parents, you develop your own behaviors. And you're unable to "override" your parents' behaviors.

The real world has no methods

Does the piece of paper you're writing on have a "write" method ? No! You take an empty piece of paper, pick up a pen, and write some text. You, as a person, don't have a "write" method either -- you make the decision to write some text based on outside events or your internal thoughts.


The Kingdom of Nouns

Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions and data structures belong in totally different worlds.

-- Joe Armstrong , creator of Erlang

Objects (or nouns) are at the very core of OOP. A fundamental limitation of OOP is that it forces everything into nouns. And not everything should be modeled as nouns. Operations (functions) should not be modeled as objects. Why are we forced to create a Multiplier class when all we need is a function that multiplies two numbers? Simply have a Multiply function, let data be data and let functions be functions!
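A deliberately silly contrast with invented names (Java has no free-standing functions, so a static method stands in for "just a function"):

    // The noun-shaped version the article complains about:
    final class Multiplier {
        private final int factor;
        Multiplier(int factor) { this.factor = factor; }
        int multiplyBy(int x) { return x * factor; }
    }

    // Letting data be data and functions be functions, as far as Java allows:
    final class MathFns {
        static int multiply(int a, int b) { return a * b; }
    }

    final class NounsDemo {
        public static void main(String[] args) {
            System.out.println(new Multiplier(3).multiplyBy(4));   // 12, via an object
            System.out.println(MathFns.multiply(3, 4));            // 12, via a plain function
        }
    }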

In non-OOP languages, doing trivial things like saving data to a file is straightforward -- very similar to how you would describe an action in plain English.

Real-world example, please!

Sure, going back to the painter example, the painter owns a PaintingFactory . He has hired a dedicated BrushManager , ColorManager , a CanvasManager and a MonaLisaProvider . His good friend zombie makes use of a BrainConsumingStrategy . Those objects, in turn, define the following methods: CreatePainting , FindBrush , PickColor , CallMonaLisa , and ConsumeBrainz .

Of course, this is plain stupidity, and could never have happened in the real world. How much unnecessary complexity has been created for the simple act of drawing a painting?

There's no need to invent strange concepts to hold your functions when they're allowed to exist separately from the objects.


Unit Testing
<img src="https://miro.medium.com/max/1400/1*xGn4uGgVyrRAXnqSwTF69w.jpeg" width="700" height="477"/>
Photo by Ani Kolleshi on Unsplash

Automated testing is an important part of the development process and helps tremendously in preventing regressions (i.e. bugs being introduced into existing code). Unit Testing plays a huge role in the process of automated testing.

Some might disagree, but OOP code is notoriously difficult to unit test. Unit Testing assumes testing things in isolation, and to make a method unit-testable:

  1. Extract its dependencies into a separate class.
  2. Create an interface for the newly created class.
  3. Declare fields to hold the instance of the newly created class.
  4. Make use of a mocking framework to mock the dependencies.
  5. Make use of a dependency-injection framework to inject the dependencies.

How much more complexity has to be created just to make a piece of code testable? How much time was wasted just to make some code testable?

PS: we'd also have to instantiate the entire class in order to test a single method. This will also bring in the code from all of its parent classes.
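To make the ceremony concrete, here is a compressed sketch of those five steps for a single method of an invented InvoiceService (the "mock" is hand-rolled so the example doesn't assume any particular framework):

    // Invented names throughout; the numbered steps refer to the list above.
    interface TaxRateProvider {                              // step 2: interface for the extracted dependency
        double rateFor(String region);
    }

    final class FlatTaxRates implements TaxRateProvider {    // step 1: dependency extracted into its own class
        public double rateFor(String region) { return 0.20; }
    }

    final class InvoiceService {
        private final TaxRateProvider rates;                 // step 3: field holding the dependency
        InvoiceService(TaxRateProvider rates) {              // step 5: dependency injected via constructor
            this.rates = rates;
        }
        double total(double net, String region) {
            return net * (1 + rates.rateFor(region));
        }
    }

    final class InvoiceServiceTest {
        public static void main(String[] args) {             // run with -ea to enable the assert
            TaxRateProvider fake = region -> 0.10;           // step 4: a hand-rolled stand-in "mock"
            InvoiceService service = new InvoiceService(fake);
            assert Math.abs(service.total(100, "EU") - 110.0) < 1e-9;
            System.out.println("ok");
        }
    }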

With OOP, writing tests for legacy code is even harder -- almost impossible. Entire companies have been created ( TypeMock ) around the issue of testing legacy OOP code.

Boilerplate code

Boilerplate code is probably the biggest offender when it comes to the signal-to-noise ratio. Boilerplate code is "noise" that is required to get the program to compile. Boilerplate code takes time to write and makes the codebase less readable because of the added noise.

While "program to an interface, not to an implementation" is the recommended approach in OOP, not everything should become an interface. We'd have to resort to using interfaces in the entire codebase, for the sole purpose of testability. We'd also probably have to make use of dependency injection, which further introduced unnecessary complexity.

Testing private methods

Some people say that private methods shouldn't be tested. I tend to disagree: unit testing is called "unit" for a reason -- test small units of code in isolation. Yet testing private methods in OOP is nearly impossible. We shouldn't be making private methods internal just for the sake of testability.

In order to achieve testability of private methods, they usually have to be extracted into a separate object. This, in turn, introduces unnecessary complexity and boilerplate code.


Refactoring

Refactoring is an important part of a developer's day-to-day job. Ironically, OOP code is notoriously hard to refactor. Refactoring is supposed to make the code less complex, and more maintainable. On the contrary, refactored OOP code becomes significantly more complex -- to make the code testable, we'd have to make use of dependency injection, and create an interface for the refactored class. Even then, refactoring OOP code is really hard without dedicated tools like Resharper.

https://medium.com/media/b557e5152569ad4569e250d2c2ba21b6

In the simple example above, the line count has more than doubled just to extract a single method. Why does refactoring create even more complexity, when the code is being refactored in order to decrease complexity in the first place?

Contrast this to a similar refactor of non-OOP code in JavaScript:

https://medium.com/media/36d6f2f2e78929c6bcd783f12c929f90

The code has literally stayed the same -- we simply moved the isValidInput function to a different file and added a single line to import that function. We've also added _isValidInput to the function signature for the sake of testability.

This is a simple example, but in practice the complexity grows exponentially as the codebase gets bigger.

And that's not all. Refactoring OOP code is extremely risky. Complex dependency graphs and state scattered all over the OOP codebase make it impossible for the human brain to consider all of the potential issues.


The Band-aids
<img src="https://miro.medium.com/max/1400/1*JOtbVvacgu-nH3ZR4mY2Og.jpeg" width="700" height="567"/>
Image source: Photo by Pixabay from Pexels

What do we do when something is not working? It is simple: we only have two options -- throw it away or try fixing it. OOP is not something that can be thrown away easily; millions of developers are trained in OOP, and millions of organizations worldwide are using it.

You probably see by now that OOP doesn't really work: it makes our code complex and unreliable. And you're not alone! People have been thinking hard for decades trying to address the issues prevalent in OOP code. They've come up with a myriad of design patterns.

Design patterns

OOP provides a set of guidelines that should theoretically allow developers to incrementally build larger and larger systems: SOLID principle, dependency injection, design patterns, and others.

Unfortunately, the design patterns are nothing other than band-aids. They exist solely to address the shortcomings of OOP. A myriad of books has even been written on the topic. They wouldn't have been so bad, had they not been responsible for the introduction of enormous complexity to our codebases.

The problem factory

In fact, it is impossible to write good and maintainable Object-Oriented code.

On one side of the spectrum we have an OOP codebase that is inconsistent and doesn't seem to adhere to any standards. On the other side of the spectrum, we have a tower of over-engineered code, a bunch of erroneous abstractions built one on top of one another. Design patterns are very helpful in building such towers of abstractions.

Soon, adding in new functionality, and even making sense of all the complexity, gets harder and harder. The codebase will be full of things like SimpleBeanFactoryAwareAspectInstanceFactory , AbstractInterceptorDrivenBeanDefinitionDecorator , TransactionAwarePersistenceManagerFactoryProxy or RequestProcessorFactoryFactory .

Precious brainpower has to be wasted trying to understand the tower of abstractions that the developers themselves have created. The absence of structure is in many cases better than having bad structure (if you ask me).

<img src="https://miro.medium.com/max/1400/1*_xDSrTC0F2lke6OYtkRm8g.png" width="700" height="308"/>
Image source: https://www.reddit.com/r/ProgrammerHumor/comments/418x95/theory_vs_reality/

Further reading: FizzBuzzEnterpriseEdition

[Jul 22, 2019] Is Object-Oriented Programming a Trillion Dollar Disaster - Slashdot

Jul 22, 2019 | developers.slashdot.org

Is Object-Oriented Programming a Trillion Dollar Disaster? (medium.com)
Posted by EditorDavid on Monday July 22, 2019 @01:04AM from the OOPs dept.
Senior full-stack engineer Ilya Suzdalnitski recently published a lively 6,000-word essay calling object-oriented programming "a trillion dollar disaster." Precious time and brainpower are being spent thinking about "abstractions" and "design patterns" instead of solving real-world problems... Object-Oriented Programming (OOP) has been created with one goal in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to improve code organization. There's no objective and open evidence that OOP is better than plain procedural programming... Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns. OOP makes common development practices, like refactoring and testing, needlessly hard...

Using OOP is seemingly innocent in the short-term, especially on greenfield projects. But what are the long-term consequences of using OOP? OOP is a time bomb, set to explode sometime in the future when the codebase gets big enough. Projects get delayed, deadlines get missed, developers get burned-out, adding in new features becomes next to impossible . The organization labels the codebase as the " legacy codebase ", and the development team plans a rewrite .... OOP provides developers too many tools and choices, without imposing the right kinds of limitations. Even though OOP promises to address modularity and improve reusability, it fails to deliver on its promises...

I'm not criticizing Alan Kay's OOP -- he is a genius. I wish OOP was implemented the way he designed it. I'm criticizing the modern Java/C# approach to OOP... I think that it is plain wrong that OOP is considered the de-facto standard for code organization by many people, including those in very senior technical positions. It is also wrong that many mainstream languages don't offer any other alternatives to code organization other than OOP.

The essay ultimately blames Java for the popularity of OOP, citing Alan Kay's comment that Java "is the most distressing thing to happen to computing since MS-DOS." It also quotes Linus Torvalds's observation that "limiting your project to C means that people don't screw things up with any idiotic 'object model'."

And it ultimately suggests Functional Programming as a superior alternative, making the following assertions about OOP:

"OOP code encourages the use of shared mutable state, which has been proven to be unsafe time and time again... [E]ncapsulation, in fact, is glorified global state." "OOP typically requires a lot of boilerplate code (low signal-to-noise ratio)." "Some might disagree, but OOP code is notoriously difficult to unit test... [R]efactoring OOP code is really hard without dedicated tools like Resharper." "It is impossible to write good and maintainable Object-Oriented code."

segedunum ( 883035 ) , Monday July 22, 2019 @05:36AM ( #58964224 )

Re:Not Tiresome, Hilariously Hypocritical ( Score: 4 , Informative)
There's no objective and open evidence that OOP is better than plain procedural programming...

...which is followed by the author's subjective opinions about why procedural programming is better than OOP. There's no objective comparison of the pros and cons of OOP vs procedural, just a rant about some of OOP's problems.

We start from the point-of-view that OOP has to prove itself. Has it? Has any project or programming exercise ever taken less time because it is object-oriented?

Precious time and brainpower are being spent thinking about "abstractions" and "design patterns" instead of solving real-world problems...

...says the person who took the time to write a 6,000 word rant on "why I hate OOP".

Sadly, that was something you hallucinated. He doesn't say that anywhere.

mfnickster ( 182520 ) , Monday July 22, 2019 @10:54AM ( #58965660 )
Re:Tiresome ( Score: 5 , Interesting)

Inheritance, while not "inherently" bad, is often the wrong solution. See: Why extends is evil [javaworld.com]

Composition is frequently a more appropriate choice. Aaron Hillegass wrote this funny little anecdote in Cocoa Programming for Mac OS X [google.com]:

"Once upon a time, there was a company called Taligent. Taligent was created by IBM and Apple to develop a set of tools and libraries like Cocoa. About the time Taligent reached the peak of its mindshare, I met one of its engineers at a trade show. I asked him to create a simple application for me: A window would appear with a button, and when the button was clicked, the words 'Hello, World!' would appear in a text field. The engineer created a project and started subclassing madly: subclassing the window and the button and the event handler. Then he started generating code: dozens of lines to get the button and the text field onto the window. After 45 minutes, I had to leave. The app still did not work. That day, I knew that the company was doomed. A couple of years later, Taligent quietly closed its doors forever."

Darinbob ( 1142669 ) , Monday July 22, 2019 @03:00AM ( #58963760 )
Re:The issue ( Score: 5 , Insightful)

Almost every programming methodology can be abused by people who really don't know how to program well, or who don't want to. They'll happily create frameworks, implement new development processes, and chart tons of metrics, all while avoiding the work of getting the job done. In some cases the person who writes the most code is the same one who gets the least amount of useful work done.

So, OOP can be misused the same way. Never mind that OOP essentially began very early and has been reimplemented over and over, even before Alan Kay. Ie, files in Unix are essentially an object oriented system. It's just data encapsulation and separating work into manageable modules. That's how it was before anyone ever came up with the dumb name "full-stack developer".

cardpuncher ( 713057 ) , Monday July 22, 2019 @04:06AM ( #58963948 )
Re:The issue ( Score: 5 , Insightful)

As a developer who started in the days of FORTRAN (when it was all-caps), I've watched the rise of OOP with some curiosity. I think there's a general consensus that abstraction and re-usability are good things - they're the reason subroutines exist - the issue is whether they are ends in themselves.

I struggle with the whole concept of "design patterns". There are clearly common themes in software, but there seems to be a great deal of pressure these days to make your implementation fit some pre-defined template rather than thinking about the application's specific needs for state and concurrency. I have seen some rather eccentric consequences of "patternism".

Correctly written, OOP code allows you to encapsulate just the logic you need for a specific task and to make that specific task available in a wide variety of contexts by judicious use of templating and virtual functions that obviate the need for "refactoring". Badly written, OOP code can have as many dangerous side effects and as much opacity as any other kind of code. However, I think the key factor is not the choice of programming paradigm, but the design process. You need to think first about what your code is intended to do and in what circumstances it might be reused. In the context of a larger project, it means identifying commonalities and deciding how best to implement them once. You need to document that design and review it with other interested parties. You need to document the code with clear information about its valid and invalid use. If you've done that, testing should not be a problem.

Some people seem to believe that OOP removes the need for some of that design and documentation. It doesn't and indeed code that you intend to be reused needs *more* design and documentation than the glue that binds it together in any one specific use case. I'm still a firm believer that coding begins with a pencil, not with a keyboard. That's particularly true if you intend to design abstract interfaces that will serve many purposes. In other words, it's more work to do OOP properly, so only do it if the benefits outweigh the costs - and that usually means you not only know your code will be genuinely reusable but will also genuinely be reused.

ImdatS ( 958642 ) , Monday July 22, 2019 @04:43AM ( #58964070 ) Homepage
Re:The issue ( Score: 5 , Insightful)
[...] I'm still a firm believer that coding begins with a pencil, not with a keyboard. [...]

This!
In fact, even more: I'm a firm believer that coding begins with a pencil designing the data model that you want to implement.

Everything else is just code that operates on that data model. Though I agree with most of what you say, I believe the classical "MVC" design-pattern is still valid. And, you know what, there is a reason why it is called "M-V-C": Start with the Model, continue with the View and finalize with the Controller. MVC not only stood for Model-View-Controller but also for the order of the implementation of each.

And preferably, as you stated correctly, "... start with pencil & paper ..."

Rockoon ( 1252108 ) , Monday July 22, 2019 @05:23AM ( #58964192 )
Re:The issue ( Score: 5 , Insightful)
I struggle with the whole concept of "design patterns".

Because design patterns are stupid.

A reasonable programmer can understand reasonable code so long as the data is documented, even when the code isn't documented, but will struggle immensely if it were the other way around. Bad programmers create objects for objects' sake, and because of that they have to follow so-called "design patterns", because no amount of code commenting makes the code easily understandable when it's a spaghetti web of interacting "objects". The "design patterns" don't make the code easier to read, just easier to write.

Those OOP fanatics, if they do "document" their code, add comments like "// increment the index" which is useless shit.

The big win of OOP is only in the encapsulation of the data with the code. Great code treats objects like data structures with attached subroutines, not as "objects"; it documents the fuck out of the contained data while more or less letting the code document itself, and it keeps OO elements to a minimum. As it turns out, OOP is just much more effort than procedural and it rarely pays off to invest that effort, at least for me.

Z00L00K ( 682162 ) , Monday July 22, 2019 @05:14AM ( #58964162 ) Homepage
Re:The issue ( Score: 4 , Insightful)

The problem isn't the object orientation paradigm itself, it's how it's applied.

The big problem in any project is that you have to understand how to break down the final solution into modules that can be developed independently of each other to a large extent, and identify the items that are shared. But even items that are apparently identical won't necessarily stay that way in the long run, so shared code may even be dangerous, because future developers don't know that by fixing problem A they create problems B, C, D and E.

Futurepower(R) ( 558542 ) writes: < MJennings.USA@NOT_any_of_THISgmail.com > on Monday July 22, 2019 @06:03AM ( #58964326 ) Homepage
Eternal September? ( Score: 4 , Informative)

Eternal September [wikipedia.org]

gweihir ( 88907 ) , Monday July 22, 2019 @07:48AM ( #58964672 )
Re:The issue ( Score: 3 )
Any time you make something easier, you lower the bar as well and now have a pack of idiots that never could have been hired if it weren't for a programming language that stripped out a lot of complexity for them.

Exactly. There are quite a few aspects of writing code that are difficult regardless of language and there the difference in skill and insight really matters.

Joce640k ( 829181 ) , Monday July 22, 2019 @04:14AM ( #58963972 ) Homepage
Re:The issue ( Score: 2 )

OO programming doesn't have any real advantages for small projects.

ImdatS ( 958642 ) , Monday July 22, 2019 @04:36AM ( #58964040 ) Homepage
Re:The issue ( Score: 5 , Insightful)

I have about 35+ years of software development experience, including with procedural, OOP and functional programming languages.

My experience is: The question "is procedural better than OOP or functional?" (or vice-versa) has a single answer: "it depends".

Like in your cases above, I would exactly do the same: use some procedural language that solves my problem quickly and easily.

In large-scale applications, I mostly used OOP (having learned OOP with Smalltalk & Objective-C). I don't like C++ or Java - but that's a matter of personal preference.

I use Python for large-scale scripts or machine learning/AI tasks.

I use Perl for short scripts that need to do a quick task.

Procedural is in fact easier to grasp for beginners as OOP and functional require a different way of thinking. If you start developing software, after a while (when the project gets complex enough) you will probably switch to OOP or functional.

Again, in my opinion neither is better than the other (procedural, OOP or functional). It just depends on the task at hand (and of course on the experience of the software developer).

spazmonkey ( 920425 ) , Monday July 22, 2019 @01:22AM ( #58963430 )
its the way OOP is taught ( Score: 5 , Interesting)

There is nothing inherently wrong with some of the functionality it offers; it's the way OOP is abused as a substitute for basic good programming practices. I was helping interns - students from a local CC - deal with idiotic assignments like making a random number generator USING CLASSES, or displaying text to a screen USING CLASSES. Seriously, WTF? A room full of career programmers could not even figure out how you were supposed to do that, much less why. What was worse was a lack of understanding of basic programming skill or even the use of variables, as the kids were being taught EVERY program was to be assembled solely by sticking together bits of libraries. There was no coding, just hunting for snippets of preexisting code to glue together. Zero idea they could add their own, much less how to do it. OOP isn't the problem, it's the idea that it replaces basic programming skills and best practice.

sjames ( 1099 ) , Monday July 22, 2019 @02:30AM ( #58963680 ) Homepage Journal
Re:its the way OOP is taught ( Score: 5 , Interesting)

That and the obsession with absofrackinglutely EVERYTHING just having to be a formally declared object, including the whole program being an object with a run() method.

Some things actually cry out to be objects, some not so much. Generally, I find that my most readable and maintainable code turns out to be a procedural program that manipulates objects.

Even there, some things just naturally want to be a struct or just an array of values.

The same is true of most ingenious ideas in programming. It's one thing if code is demonstrating a particular idea, but production code is supposed to be there to do work, not grind an academic ax.

For example, slavish adherence to "patterns". They're quite useful for thinking about code and talking about code, but they shouldn't be the end of the discussion. They work better as a starting point. Some programs seem to want patterns to be mixed and matched.

In reality those problems are just cargo cult programming one level higher.

I suspect a lot of that is because too many developers barely grasp programming and never learned to go beyond the patterns they were explicitly taught.

When all you have is a hammer, the whole world looks like a nail.

bradley13 ( 1118935 ) , Monday July 22, 2019 @02:15AM ( #58963622 ) Homepage
It depends... ( Score: 5 , Insightful)

There are a lot of mediocre programmers who follow the principle "if you have a hammer, everything looks like a nail". They know OOP, so they think that every problem must be solved in an OOP way. In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modelling follows naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best paradigm.

In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules. For example, I am working on a natural language system that is supposed to generate textual answers to user inquiries. What "object" am I supposed to create to do this task? An "Answer" object that generates itself? Yes, that would work, but an imperative, static "generate answer" method makes at least as much sense.

There are different ways of thinking, different ways of modelling a problem. I get tired of the purists who think that OO is the only possible answer. The world is not a nail.

Beechmere ( 538241 ) , Monday July 22, 2019 @02:31AM ( #58963684 )
Class? Object? ( Score: 5 , Interesting)

I'm approaching 60, and I've been coding in COBOL, VB, FORTRAN, REXX, SQL for almost 40 years. I remember seeing Object Oriented Programming being introduced in the 80s, and I went on a course once (paid by work). I remember not understanding the concept of "Classes", and my impression was that the software we were buying was just trying to invent stupid new words for old familiar constructs (eg: Files, Records, Rows, Tables, etc). So I never transitioned away from my reliable mainframe programming platform. I thought the phrase OOP had died out long ago, along with "Client Server" (whatever that meant). I'm retiring in a few years, and the mainframe will outlive me. Everything else is buggy.

cb88 ( 1410145 ) , Monday July 22, 2019 @03:11AM ( #58963794 )
Going back to Torvald's quote.... ( Score: 5 , Funny)

"limiting your project to C means that people don't screw things up with any idiotic 'object model'."

GTK .... hold my beer... it is not a good argument against OOP languages. But first, let's see how OOP came into place. OOP was designed to provide encapsulation, like components, support reuse and code sharing. It was the next step coming from modules and units, which were better than libraries, as functions and procedures had namespaces, which helped structure code. OOP is a great idea when writing UI toolkits or similar stuff, as you can as

DrXym ( 126579 ) , Monday July 22, 2019 @04:57AM ( #58964116 )
No ( Score: 3 )

Like all things OO is fine in moderation but it's easy to go completely overboard, decomposing, normalizing, producing enormous inheritance trees. Yes your enormous UML diagram looks impressive, and yes it will be incomprehensible, fragile and horrible to maintain.

That said, it's completely fine in moderation. The same goes for functional programming. Most programmers can wrap their heads around things like functions, closures / lambdas, streams and so on. But if you mean true functional programming then forget it.

As for the kernel's choice to use C, that really boils down to the fact that a kernel needs to be lower level than a typical user-land application. It has to do its own memory allocation and other things that were beyond C++ at the time. The STL would not have been usable, nor would new / delete, or exceptions & unwinding. And at that point why even bother? That doesn't mean C is wonderful or doesn't inflict its own pain and bugs on development. But at the time, it was the only sane choice.

[Jul 22, 2019] Almost right

Jul 22, 2019 | developers.slashdot.org
Tough Love ( 215404 ), Monday July 22, 2019 @01:27AM ( #58963442 )

The entire software world is a multi-trillion dollar disaster.

Agile, Waterfall, Oop, fucking Javascript or worse its wannabe spawn of the devil Node. C, C++, Java wankers, did I say wankers? Yes wankers.

IT architects, pundit of the week, carpetbaggers, Aspies, total incompetents moving from job to job, you name it.

Disaster, complete and utter. Anybody who doesn't know this hasn't been paying attention.

About the only bright spot is a few open source projects like Linux Kernel, Postgres, Samba, Squid etc, totally outnumbered by wankers and posers.

[Jul 01, 2019] I worked twenty years in commercial software development including in aviation for UA and while Indian software developers are capable, their corporate culture is completely different as it is based on feudal workplace relations of subordinates and management that result in extreme cronyism

Notable quotes:
"... Being powerless within calcifies totalitarian corporate culture ..."
"... ultimately promoted wide spread culture of obscurantism and opportunism what amounts to extreme office politics of covering their own butts often knowing that entire development strategy is flawed, as long as they are not personally blamed or if they in fact benefit by collapse of the project. ..."
"... As I worked side by side and later as project manager with Indian developers I can attest to that culture which while widely spread also among American developers reaches extremes among Indian corporations which infect often are engaged in fraud to be blamed on developers. ..."
Jul 01, 2019 | www.moonofalabama.org
dh-mtl , Jun 30, 2019 3:51:11 PM | 29

@Kalen , Jun 30, 2019 12:58:14 PM | 13

The programmers in India are well capable of writing good software. The difficulty lies in communicating the design requirements for the software. If they do not know in detail how air planes are engineered, they will implement the design to the letter but not to its intent.

I worked twenty years in commercial software development, including in aviation for UA, and while Indian software developers are capable, their corporate culture is completely different, as it is based on feudal workplace relations between subordinates and management that result in extreme cronyism, far exceeding that in the US. Such relations are not only based on extreme exploitation (few jobs, hundreds of qualified candidates) but on personal, almost paternal relations that preclude the required independence of judgment and practically eliminate any major critical discussion about the efficacy of technological solutions and their risks.

Being powerless within a calcified, totalitarian corporate culture, and facing the alternative of hurting family-like relations with bosses (and their feelings) who had committed themselves, emotionally and financially, to certain often wrong solutions dictated more by margins than by technological imperatives, ultimately promoted a widespread culture of obscurantism and opportunism. It amounts to extreme office politics of covering one's own butt, often while knowing that the entire development strategy is flawed, as long as one is not personally blamed or in fact benefits from the collapse of the project.

As I worked side by side and later as project manager with Indian developers, I can attest to that culture, which, while widespread among American developers as well, reaches extremes in Indian corporations, which in fact are often engaged in fraud that ends up being blamed on developers.

In fact it is a shocking contrast with German culture, which practically prevents anyone from engaging in any project unless it is, almost always and in its entirety, discussed, analyzed, understood and fully supported by every member of the team; otherwise they often simply refuse to work on the project, citing professional ethics. A high-quality social welfare state and handsome unemployment benefits definitely supported such an ethical stand back then.

While what I describe happened over twenty years ago it is still applicable I believe.

[Jun 30, 2019] Design Genius Jony Ive Leaves Apple, Leaving Behind Crapified Products That Cannot Be Repaired naked capitalism

Notable quotes:
"... Honestly, since 2015 feels like Apple wants to abandon it's PC business but just doesn't know how so ..."
"... The new line seems like a valid refresh, but the prices are higher than ever, and remember young people are earning less than ever, so I still think they are looking for a way out of the PC trade, maybe this refresh is to just buy time for an other five years before they close up. ..."
"... I wonder how much those tooling engineers in the US make compared to their Chinese competitors? It seems like a neoliberal virtuous circle: loot/guts education, then find skilled labor from places that still support education, by moving abroad or importing workers, reducing wages and further undermining the local skill base. ..."
"... I sympathize with y'all. It's not uncommon for good products to become less useful and more trouble as the original designers, etc., get arrogant from their success and start to believe that every idea they have is a useful improvement. Not even close. Too much of fixing things that aren't broken and gilding lilies. ..."
Jun 30, 2019 | www.nakedcapitalism.com

As iFixit notes :

The iPod, the iPhone, the MacBook Air, the physical Apple Store, even the iconic packaging of Apple products -- these products changed how we view and use their categories, or created new categories, and will be with us a long time.

But the title of that iFixit post, Jony Ive's Fragmented Legacy: Unreliable, Unrepairable, Beautiful Gadgets , makes clear that those beautiful products carried with them considerable costs- above and beyond their high prices. They're unreliable, and difficult to repair.

Ironically, both Jobs and Ive were inspired by Dieter Rams – whom iFixit calls "the legendary industrial designer renowned for functional and simple consumer products." And unlike Apple, Rams believed that good design didn't have to come at the expense of either durability or the environment:

Rams loves durable products that are environmentally friendly. That's one of his 10 principles for good design : "Design makes an important contribution to the preservation of the environment." But Ive has never publicly discussed the dissonance between his inspiration and Apple's disposable, glued-together products. For years, Apple has openly combated green standards that would make products easier to repair and recycle, stating that they need "complete design flexibility" no matter the impact on the environment.

Complete Design Flexibility Spells Environmental Disaster

In fact, that complete design flexibility – at least as practiced by Ive – has resulted in crapified products that are an environmental disaster. Their lack of durability means they must be repaired to be functional, and the lack of repairability means many of these products end up being tossed prematurely – no doubt not a bug, but a feature. As Vice recounts :

But history will not be kind to Ive, to Apple, or to their design choices. While the company popularized the smartphone and minimalistic, sleek, gadget design, it also did things like create brand new screws designed to keep consumers from repairing their iPhones.

Under Ive, Apple began gluing down batteries inside laptops and smartphones (rather than screwing them down) to shave off a fraction of a millimeter at the expense of repairability and sustainability.

It redesigned MacBook Pro keyboards with mechanisms that are, again, a fraction of a millimeter thinner, but that are easily defeated by dust and crumbs (the computer I am typing on right now -- which is six months old -- has a busted spacebar and 'r' key). These keyboards are not easily repairable, even by Apple, and many MacBook Pros have to be completely replaced due to a single key breaking. The iPhone 6 Plus had a design flaw that led to its touch screen spontaneously breaking -- it then told consumers there was no problem for months before ultimately creating a repair program. Meanwhile, Apple's own internal tests showed those flaws. He designed AirPods, which feature an unreplaceable battery that must be physically destroyed in order to open.

Vice also notes that in addition to Apple's products becoming "less modular, less consumer friendly, less upgradable, less repairable, and, at times, less functional than earlier models", Apple's design decisions have not been confined to Apple. Instead, "Ive's influence is obvious in products released by Samsung, HTC, Huawei, and others, which have similarly traded modularity for sleekness."

Right to Repair

As I've written before, Apple is a leading opponent of giving consumers a right to repair. Nonetheless, there's been some global progress on this issue (see Global Gains on Right to Repair). And we've also seen a widening of support in the US for such a right. The issue has arisen in the current presidential campaign, with Elizabeth Warren throwing down the gauntlet by endorsing a right to repair for farm tractors. The New York Times has also taken up the cause more generally (see Right to Repair Initiatives Gain Support in US). More than twenty states are considering enacting right to repair statutes.


samhill , June 30, 2019 at 5:41 pm

I've been using Apple since 1990, I concur with the article about h/w and add that from Snow Leopard to Sierra the OSX was buggy as anything from the Windows world if not more so. Got better with High Sierra but still not up to the hype. I haven't lived with Mojave. I use Apple out of habit, haven't felt the love from them since Snow Leopard, exactly when they became a cell phone company. People think Apple is Mercedes and PCs are Fords, but for a long time now in practical use, leaving aside the snazzy aesthetics, under the hood it's GM vs Ford. I'm not rich enough to buy a $1500 non-upgradable, non-repairable product so the new T2 protected computers can't be for me.

The new Dell XPS's are tempting, they got the right idea, if you go to their service page you can dl complete service instructions, diagrams, and blow ups. They don't seem at all worried about my hurting myself.

In the last few years PCs offer what before I could only get from Apple; good screen, back lit keyboard, long battery life, trim size.

Honestly, since 2015 it feels like Apple wants to abandon its PC business but just doesn't know how, so it's trying to drive off all the old legacy power users, the creative people that actually work hard for their money, exchanging them for rich dilettantes, hedge fund managers, and status seekers – an easier crowd to finally close up shop on.

The new line seems like a valid refresh, but the prices are higher than ever, and remember young people are earning less than ever, so I still think they are looking for a way out of the PC trade, maybe this refresh is to just buy time for an other five years before they close up.

When you start thinking like this about a company you've been loyal to for 30 years something is definitely wrong.

TG , June 30, 2019 at 6:09 pm

The reason that Apple moved the last of its production to China is, quite simply, that China now has basically the entire industrial infrastructure that we used to have. We have been hollowed out, and are now essentially third-world when it comes to industry. The entire integrated supply chain that defines an industrial power, is now gone.

The part about China no longer being a low-wage country is correct. China's wages have been higher than Mexico's for some time. But the part about the skilled workers is a slap in the face.

How can US workers be skilled at manufacturing, when there are no longer any jobs here where they can learn or use those skills?

fdr-fan , June 30, 2019 at 6:10 pm

A thin rectangle isn't more beautiful than a thick rectangle. They're both just rectangles.

Skip Intro , June 30, 2019 at 2:14 pm

I wonder how much those tooling engineers in the US make compared to their Chinese competitors? It seems like a neoliberal virtuous circle: loot/guts education, then find skilled labor from places that still support education, by moving abroad or importing workers, reducing wages and further undermining the local skill base.

EMtz , June 30, 2019 at 4:08 pm

They lost me when they made the iMac so thin it couldn't play a CD – and had the nerve to charge $85 for an Apple player. Bought another brand for $25. I don't care that it's not as pretty. I do care that I had to buy it at all.

I need a new cellphone. You can bet it won't be an iPhone.

John Zelnicker , June 30, 2019 at 4:24 pm

Jerri-Lynn – Indeed, a great article.

Although I have never used an Apple product, I sympathize with y'all. It's not uncommon for good products to become less useful and more trouble as the original designers, etc., get arrogant from their success and start to believe that every idea they have is a useful improvement. Not even close. Too much of fixing things that aren't broken and gilding lilies.

Charles Leseau , June 30, 2019 at 5:13 pm

Worst computer I've ever owned: Apple Macbook Pro, c. 2011 or so.

Died within 2 years, and also more expensive than the desktops I've built since that absolutely crush it in every possible performance metric (and last longer).

Meanwhile, I also still use a $300 Best Buy Toshiba craptop that has now lasted for 8 straight years.

Never again.

Alfred , June 30, 2019 at 5:23 pm

"Beautiful objects" – aye, there's the rub. In point of fact, the goal of industrial design is not to create beautiful objects. It is the goal of the fine arts to create beautiful objects. The goal of design is to create useful things that are easy to use and are effective at their tasks. Some -- including me -- would add to those most basic goals, the additional goals of being safe to use, durable, and easy to repair; perhaps even easy to adapt or suitable for recycling, or conservative of precious materials. The principles of good product design are laid out admirably in the classic book by Donald A. Norman, The Design of Everyday Things (1988). So this book was available to Jony Ive (born 1967) during his entire career (which overlapped almost exactly the wonder years of Postmodernism – and therein lies a clue). It would indeed be astonishing to learn that Ive took no notice of it. Yet Norman's book can be used to show that Ive's Apple violated so many of the principles of good design, so habitually, as to raise the suspicion that the company was not engaged in "product design" at all. The output Apple in the Ive era, I'd say, belongs instead to the realm of so-called "commodity aesthetics," which aims to give manufactured items a sufficiently seductive appearance to induce their purchase – nothing more. Aethetics appears as Dieter Rams's principle 3, as just one (and the only purely commercial) function in his 10; so in a theoretical context that remains ensconced within a genuine, Modernist functionalism. But in the Apple dispensation that single (aesthetic) principle seems to have subsumed the entire design enterprise – precisely as one would expect from "the cultural logic of late capitalism" (hat tip to Mr Jameson). Ive and his staff of formalists were not designing industrial products, or what Norman calls "everyday things," let alone devices; they were aestheticizing products in ways that first, foremost, and almost only enhanced their performance as expressions of a brand. Their eyes turned away from the prosaic prize of functionality to focus instead on the more profitable prize of sales -- to repeat customers, aka the devotees of 'iconic' fetishism. Thus did they serve not the masses but Mammon, and they did so as minions of minimalism. Nor was theirs the minimalism of the Frankfurt kitchen, with its deep roots in ethics and ergonomics. It was only superficially Miesian. Bauhaus-inspired? Oh, please. Only the more careless readers of Tom Wolfe and Wikipedia could believe anything so preposterous. Surely Steve Jobs, he of the featureless black turtleneck by Issey Miyake, knew better. Anyone who has so much as walked by an Apple Store, ever, should know better. And I guess I should know how to write shorter

[Jun 29, 2019] Boeing Outsourced Its 737 MAX Software To $9-Per-Hour Engineers

Jun 29, 2019 | www.zerohedge.com

The software at the heart of the Boeing 737 MAX crisis was developed at a time when the company was laying off experienced engineers and replacing them with temporary workers making as little as $9 per hour, according to Bloomberg .

In an effort to cut costs, Boeing was relying on subcontractors making paltry wages to develop and test its software. Oftentimes, these subcontractors would be from countries lacking a deep background in aerospace, like India.

Boeing had recent college graduates working for Indian software developer HCL Technologies Ltd. in a building across from Seattle's Boeing Field, in flight test groups supporting the MAX. The coders from HCL designed to specifications set by Boeing but, according to Mark Rabin, a former Boeing software engineer, "it was controversial because it was far less efficient than Boeing engineers just writing the code."

Rabin said: "...it took many rounds going back and forth because the code was not done correctly."

In addition to cutting costs, the hiring of Indian companies may have landed Boeing orders for the Indian military and commercial aircraft, like a $22 billion order received in January 2017 . That order included 100 737 MAX 8 jets and was Boeing's largest order ever from an Indian airline. India traditionally orders from Airbus.

HCL engineers helped develop and test the 737 MAX's flight display software while employees from another Indian company, Cyient Ltd, handled the software for flight test equipment. In 2011, Boeing named Cyient, then known as Infotech, to a list of its "suppliers of the year".

One HCL employee posted online: "Provided quick workaround to resolve production issue which resulted in not delaying flight test of 737-Max (delay in each flight test will cost very big amount for Boeing) ."

But Boeing says the company didn't rely on engineers from HCL for the Maneuvering Characteristics Augmentation System, which was linked to both last October's crash and March's crash. The company also says it didn't rely on Indian companies for the cockpit warning light issue that was disclosed after the crashes.

A Boeing spokesperson said: "Boeing has many decades of experience working with supplier/partners around the world. Our primary focus is on always ensuring that our products and services are safe, of the highest quality and comply with all applicable regulations."

HCL, on the other hand, said: "HCL has a strong and long-standing business relationship with The Boeing Company, and we take pride in the work we do for all our customers. However, HCL does not comment on specific work we do for our customers. HCL is not associated with any ongoing issues with 737 Max."

Recent simulator tests run by the FAA indicate that software issues on the 737 MAX run deeper than first thought. Engineers who worked on the plane, which Boeing started developing eight years ago, complained of pressure from managers to limit changes that might introduce extra time or cost.

Rick Ludtke, a former Boeing flight controls engineer laid off in 2017, said: "Boeing was doing all kinds of things, everything you can imagine, to reduce cost , including moving work from Puget Sound, because we'd become very expensive here. All that's very understandable if you think of it from a business perspective. Slowly over time it appears that's eroded the ability for Puget Sound designers to design."

Rabin even recalled an incident where senior software engineers were told they weren't needed because Boeing's productions were mature. Rabin said: "I was shocked that in a room full of a couple hundred mostly senior engineers we were being told that we weren't needed."

Any given jetliner is made up of millions of parts and millions of lines of code. Boeing has often turned over large portions of the work to suppliers and subcontractors that follow its blueprints. But beginning in 2004 with the 787 Dreamliner, Boeing sought to increase profits by providing high-level specs and then asking suppliers to design more parts themselves.

Boeing also promised to invest $1.7 billion in Indian companies as a result of an $11 billion order in 2005 from Air India. This investment helped HCL and other software developers.

For the 787, HCL also offered Boeing a price it couldn't refuse: free. HCL "took no up-front payments on the 787 and only started collecting payments based on sales years later".

Rockwell Collins won the MAX contract for cockpit displays and relied in part on HCL engineers and contract engineers from Cyient to test flight test equipment.

Charles LoveJoy, a former flight-test instrumentation design engineer at the company, said: "We did have our challenges with the India team. They met the requirements, per se, but you could do it better."


Anonymous IX , 2 minutes ago link

I love it. A company which fell so much in love with its extraordinary profits that it sabotaged its design and will now suffer enormous financial consequences. They're lucky to have all their defense/military contracts.

scraping_by , 4 minutes ago link

Oftentimes, it's the cut-and-paste code that's the problem. If you don't have a good appreciation for what every line does, you're never going to know what the sub or entire program does.

vienna_proxy , 7 minutes ago link

hahahaha non-technical managers making design decisions are complete **** ups wherever they go and here it blew up in their faces rofl

Ignorance is bliss , 2 minutes ago link

I see this all the time, and a lot of the time these non-technical decision makers are women.

hispanicLoser , 13 minutes ago link

By 2002 I could not sit down with any developers without hearing at least one story about how they had been in a code review meeting and seen absolute garbage turned out by H-1B workers.

Lots of people have known about this problem for many years now.

brazilian , 11 minutes ago link

May the gods damn all financial managers! One of the two professions, along with bankers, which have absolutely no social value whatsoever. There should be open hunting season on both!

scraping_by , 15 minutes ago link

Shifting to high-level specs puts more power in the hands of management/accounting types, since it doesn't require engineering knowledge to track a deadline. Indeed, this whole story is the wet dream of business school, the idea of being able to accomplish technical tasks purely by demand. A lot of public schools teach kids that science is magic, so when they grow up they think they can just give directions and technology appears.

pops , 20 minutes ago link

In this country, one must have a license from the FAA to work on commercial aircraft. That means training and certification that usually results in higher pay for those qualified to perform the repairs to the aircraft your family will fly on.

In case you're not aware, much of the heavy stuff like D checks (overhauls) has been outsourced by the airlines to foreign countries where the FAA has nothing to say about it. Those contractors can hire whomever they wish for whatever they'll accept. I have worked with some of those "mechanics" who cannot even read.

Keep that in mind next time the TSA perv is fondling your junk. That might be your last sexual encounter.

Klassenfeind , 22 minutes ago link

Boeing Outsourced Its 737 MAX Software To $9-Per-Hour Engineers

Long live the free market, right Tylers?

You ZH guys always rail against the minimum wage here, well there you go: $9/hr aircraft 'engineers!' Happy now?

asteroids , 25 minutes ago link

You gotta be kidding. You let kids straight out of school write mission critical code? How ******* stupid are you BA?

reader2010 , 20 minutes ago link

Go to India. There are many outsourcing companies that only hire new college graduates for work and they are paid less than $2 an hour for the job.

For the DoD contractors, they have to bring them to the US to work. There are tons of H1B guys from India working for defense contractors.

[Jun 29, 2019] Hiring aircraft computer engineers at $9/hr by Boeing is a great idea. Who could argue with smart cost saving?

Jun 29, 2019 | www.zerohedge.com


[Jun 29, 2019] If you have to be told that H-1B code in critical aircraft software might be not reliable you are too stupid to live

Jun 29, 2019 | www.zerohedge.com

hispanicLoser , 25 minutes ago link

If you have to be told that H-1B code in aircraft software is not reliable you are too stupid to live.

zob2020 , 16 minutes ago link

Or take this online shop designed back in 1997. It was supposed to take over all internet shopping, which didn't really exist back then yet. And they used Indian doctors to code. Well, sure, they ended up with a site... but one so heavy with pictures it took 30 minutes to open one page, and another 20 minutes to even click on a product to read its text, etc. This with good university internet.

Unsurprisingly, I don't think they ever managed to sell anything. But they gave out free movie tickets to every registered customer... so a friend and I each registered some 80 accounts and went to free movies for a good bit over a year.

The mailman must have had fun delivering 160 letters to random names in the same student apartment :D

[May 17, 2019] Shareholder Capitalism, the Military, and the Beginning of the End for Boeing

Highly recommended!
Notable quotes:
"... Like many of its Wall Street counterparts, Boeing also used complexity as a mechanism to obfuscate and conceal activity that is incompetent, nefarious and/or harmful to not only the corporation itself but to society as a whole (instead of complexity being a benign byproduct of a move up the technology curve). ..."
"... The economists who built on Friedman's work, along with increasingly aggressive institutional investors, devised solutions to ensure the primacy of enhancing shareholder value, via the advocacy of hostile takeovers, the promotion of massive stock buybacks or repurchases (which increased the stock value), higher dividend payouts and, most importantly, the introduction of stock-based pay for top executives in order to align their interests to those of the shareholders. These ideas were influenced by the idea that corporate efficiency and profitability were impinged upon by archaic regulation and unionization, which, according to the theory, precluded the ability to compete globally. ..."
"... "Return on Net Assets" (RONA) forms a key part of the shareholder capitalism doctrine. ..."
"... If the choice is between putting a million bucks into new factory machinery or returning it to shareholders, say, via dividend payments, the latter is the optimal way to go because in theory it means higher net returns accruing to the shareholders (as the "owners" of the company), implicitly assuming that they can make better use of that money than the company itself can. ..."
"... It is an absurd conceit to believe that a dilettante portfolio manager is in a better position than an aviation engineer to gauge whether corporate investment in fixed assets will generate productivity gains well north of the expected return for the cash distributed to the shareholders. But such is the perverse fantasy embedded in the myth of shareholder capitalism ..."
"... When real engineering clashes with financial engineering, the damage takes the form of a geographically disparate and demoralized workforce: The factory-floor denominator goes down. Workers' wages are depressed, testing and quality assurance are curtailed. ..."
May 17, 2019 | www.nakedcapitalism.com

The fall of the Berlin Wall and the corresponding end of the Soviet Empire gave the fullest impetus imaginable to the forces of globalized capitalism, and correspondingly unfettered access to the world's cheapest labor. What was not to like about that? It afforded multinational corporations vastly expanded opportunities to fatten their profit margins and increase the bottom line with seemingly no risk posed to their business model.

Or so it appeared. In 2000, aerospace engineer L.J. Hart-Smith's remarkable paper, sardonically titled "Out-Sourced Profits – The Cornerstone of Successful Subcontracting," laid out the case against several business practices of Hart-Smith's previous employer, McDonnell Douglas, which had incautiously ridden the wave of outsourcing when it merged with the author's new employer, Boeing. Hart-Smith's intention in telling his story was a cautionary one for the newly combined Boeing, lest it follow its then recent acquisition down the same disastrous path.

Of the manifold points and issues identified by Hart-Smith, there is one that stands out as the most compelling in terms of understanding the current crisis enveloping Boeing: The embrace of the metric "Return on Net Assets" (RONA). When combined with the relentless pursuit of cost reduction (via offshoring), RONA taken to the extreme can undermine overall safety standards.

Related to this problem is the intentional and unnecessary use of complexity as an instrument of propaganda. Like many of its Wall Street counterparts, Boeing also used complexity as a mechanism to obfuscate and conceal activity that is incompetent, nefarious and/or harmful to not only the corporation itself but to society as a whole (instead of complexity being a benign byproduct of a move up the technology curve).

All of these pernicious concepts are branches of the same poisoned tree: " shareholder capitalism ":

[A] notion best epitomized by Milton Friedman that the only social responsibility of a corporation is to increase its profits, laying the groundwork for the idea that shareholders, being the owners and the main risk-bearing participants, ought therefore to receive the biggest rewards. Profits therefore should be generated first and foremost with a view toward maximizing the interests of shareholders, not the executives or managers who (according to the theory) were spending too much of their time, and the shareholders' money, worrying about employees, customers, and the community at large. The economists who built on Friedman's work, along with increasingly aggressive institutional investors, devised solutions to ensure the primacy of enhancing shareholder value, via the advocacy of hostile takeovers, the promotion of massive stock buybacks or repurchases (which increased the stock value), higher dividend payouts and, most importantly, the introduction of stock-based pay for top executives in order to align their interests to those of the shareholders. These ideas were influenced by the idea that corporate efficiency and profitability were impinged upon by archaic regulation and unionization, which, according to the theory, precluded the ability to compete globally.

"Return on Net Assets" (RONA) forms a key part of the shareholder capitalism doctrine. In essence, it means maximizing the returns of those dollars deployed in the operation of the business. Applied to a corporation, it comes down to this: If the choice is between putting a million bucks into new factory machinery or returning it to shareholders, say, via dividend payments, the latter is the optimal way to go because in theory it means higher net returns accruing to the shareholders (as the "owners" of the company), implicitly assuming that they can make better use of that money than the company itself can.

It is an absurd conceit to believe that a dilettante portfolio manager is in a better position than an aviation engineer to gauge whether corporate investment in fixed assets will generate productivity gains well north of the expected return for the cash distributed to the shareholders. But such is the perverse fantasy embedded in the myth of shareholder capitalism.
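
To put rough numbers on the argument above (the figures are invented purely for illustration): RONA is essentially net operating income divided by net assets. A division earning $2 billion on $10 billion of net assets reports 20%. Sell off $2 billion of machinery and return the cash to shareholders, and the same $2 billion of income is now measured against $8 billion of assets: 25% on paper, even though nothing about the underlying business has improved. That is the "lower the denominator" logic discussed later in the piece.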

Engineering reality, however, is far more complicated than what is outlined in university MBA textbooks. For corporations like McDonnell Douglas, for example, RONA was used not as a way to prioritize new investment in the corporation but rather to justify disinvestment in the corporation. This disinvestment ultimately degraded the company's underlying profitability and the quality of its planes (which is one of the reasons the Pentagon helped to broker the merger with Boeing; in another perverse echo of the 2008 financial disaster, it was a politically engineered bailout).

RONA in Practice

When real engineering clashes with financial engineering, the damage takes the form of a geographically disparate and demoralized workforce: The factory-floor denominator goes down. Workers' wages are depressed, testing and quality assurance are curtailed. Productivity is diminished, even as labor-saving technologies are introduced. Precision machinery is sold off and replaced by inferior, but cheaper, machines. Engineering quality deteriorates. And the upshot is that a reliable plane like Boeing's 737, which had been a tried and true money-spinner with an impressive safety record since 1967, becomes a high-tech death trap.

The drive toward efficiency is translated into a drive to do more with less. Get more out of workers while paying them less. Make more parts with fewer machines. Outsourcing is viewed as a way to release capital by transferring investment from skilled domestic human capital to offshore entities not imbued with the same talents, corporate culture and dedication to quality. The benefits to the bottom line are temporary; the long-term pathologies become embedded as the company's market share begins to shrink, as the airlines search for less shoddy alternatives.

You must do one more thing if you are a Boeing director: you must erect barriers to bad news, because there is nothing that bursts a magic bubble faster than reality, particularly if it's bad reality.

The illusion that Boeing sought to perpetuate was that it continued to produce the same thing it had produced for decades: namely, a safe, reliable, quality airplane. But it was doing so with a production apparatus that was stripped, for cost reasons, of many of the means necessary to make good aircraft. So while the wine still came in a bottle signifying Premier Cru quality, and still carried the same price, someone had poured out the contents and replaced them with cheap plonk.

And that has become remarkably easy to do in aviation. Because Boeing is no longer subject to proper independent regulatory scrutiny. This is what happens when you're allowed to " self-certify" your own airplane , as the Washington Post described: "One Boeing engineer would conduct a test of a particular system on the Max 8, while another Boeing engineer would act as the FAA's representative, signing on behalf of the U.S. government that the technology complied with federal safety regulations."

This is a recipe for disaster. Boeing relentlessly cut costs; it outsourced across the globe to workforces that knew nothing about aviation or aviation's safety culture. It sent things everywhere on one criterion and one criterion only: lower the denominator. Make it the same, but cheaper. And then self-certify the plane, so that nobody, including the FAA, was ever the wiser.

Boeing also greased the wheels in Washington to ensure the continuation of this convenient state of regulatory affairs for the company. According to OpenSecrets.org, Boeing and its affiliates spent $15,120,000 in lobbying expenses in 2018, after spending $16,740,000 in 2017 (along with a further $4,551,078 in 2018 political contributions, which placed the company 82nd out of a total of 19,087 contributors). Looking back at these figures over the past four elections (congressional and presidential) since 2012, these numbers represent fairly typical spending sums for the company.

But clever financial engineering, extensive political lobbying and self-certification can't perpetually hold back the effects of shoddy engineering. One of the sad byproducts of the FAA's acquiescence to "self-certification" is how many things fall through the cracks so easily.

[Feb 11, 2019] 6 most prevalent problems in the software development world

Dec 01, 2018 | www.catswhocode.com


[Jan 14, 2019] Quickly move an executable between systems with ELF Statifier (Linux.com) by Ben Martin

Oct 23, 2008 | monkeyiq.blogspot.com

Shared libraries that are dynamically linked make more efficient use of disk space than those that are statically linked, and more importantly allow you to perform security updates in a more efficient manner, but executables compiled against a particular version of a dynamic library expect that version of the shared library to be available on the machine they run on. If you are running machines with both Fedora 9 and openSUSE 11, the versions of some shared libraries are likely to be slightly different, and if you copy an executable between the machines, the file might fail to execute because of these version differences.

With ELF Statifier you can create a statically linked version of an executable, so the executable includes the shared libraries instead of seeking them at run time. A statically linked executable is much more likely to run on a different Linux distribution or a different version of the same distribution.

Of course, to do this you sacrifice some disk space, because the statically linked executable includes a copy of the shared libraries that it needs, but in these days of terabyte disks the space consideration is less important than the security one. Consider what happens if your executables are dynamically linked to a shared library, say libfoo, and there is a security update to libfoo. When your applications are dynamically linked you can just update the shared copy of libfoo and your applications will no longer be vulnerable to the security issue in the older libfoo. If on the other hand you have a statically linked executable, it will still include and use its own private copy of the old libfoo. You'll have to recreate the statically linked executable to get the newer libfoo and security update.

Still, there are times when you want to take a daemon you compiled on a Fedora machine and run it on your openSUSE machine without having to recompile it and all its dependencies. Sometimes you just want it to execute now and can rebuild it later if desired. Of course, the machine you copy the executable from and the one on which you want to run it must have the same architecture.
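
Before copying anything, it is worth a quick check that the two machines really do share an architecture. A minimal sketch (the exact file output varies by distribution and build, and is abridged here):

$ uname -m        # run this on both machines; the values must match (e.g. i686 or x86_64)
i686
$ file /bin/ls    # the target architecture is also recorded in the binary's ELF header
/bin/ls: ELF 32-bit LSB executable, Intel 80386, dynamically linked ...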

ELF Statifier is packaged as a 1-Click install for openSUSE 10.3 but not for Ubuntu Hardy or Fedora. I'll use version 1.6.14 of ELF Statifier and build it from source on a Fedora 9 x86 machine. ELF Statifier does not use autotools, so you compile by simply invoking make. Compilation and installation are shown below.

$ tar xzvf statifier-1.6.14.tar.gz
$ cd ./statifier-*
$ make
$ sudo make install

As an example of how to use the utility, I'll create a statically linked version of the ls binary in the commands shown below. First I create a personal copy of the dynamically linked executable and inspect it to see what it dynamically links to. You run statifier with the path to the dynamically linked executable as the first argument and the path where you want to create the statically linked executable as the second argument. Notice that the ldd command reports that no dynamically linked libraries are required by ls-static. The next command shows that the binary size has grown significantly for the static version of ls.

$ mkdir test
$ cd ./test
$ cp -a /bin/ls ls-dynamic
$ ls -lh
-rwxr-xr-x 1 ben ben 112K 2008-08-01 04:05 ls-dynamic
$ ldd ls-dynamic
linux-gate.so.1 => (0x00110000)
librt.so.1 => /lib/librt.so.1 (0x00a3a000)
libselinux.so.1 => /lib/libselinux.so.1 (0x00a06000)
libacl.so.1 => /lib/libacl.so.1 (0x00d8a000)
libc.so.6 => /lib/libc.so.6 (0x0084e000)
libpthread.so.0 => /lib/libpthread.so.0 (0x009eb000)
/lib/ld-linux.so.2 (0x0082e000)
libdl.so.2 => /lib/libdl.so.2 (0x009e4000)
libattr.so.1 => /lib/libattr.so.1 (0x0606d000)

$ statifier ls-dynamic ls-static
$ ldd ls-static
not a dynamic executable

$ ls -lh ls-static
-rwxr-x--- 1 ben ben 2.0M 2008-10-03 12:05 ls-static

$ ls-static /tmp
...
$ ls-static -lh
Segmentation fault

As you can see above, the statified ls crashes when you run it with the -l option. If you get segmentation faults when running your statified executables you should disable stack randomization and recreate the statified executable. The stack and address space randomization feature of the Linux kernel makes the locations used for the stack and other important parts of an executable change every time it is executed. Randomizing things each time you run a binary hinders attacks such as the return-to-libc attack because the location of libc functions changes all the time.

You are giving away some security by changing the randomize_va_space parameter as shown below. The change to randomize_va_space affects not only attacks on the executables themselves but also exploit attempts that rely on buffer overflows to compromise the system. Without randomization, both attacks become more straightforward. If you set randomize_va_space to zero as shown below and recreate the ls-static binary, things should work as expected. You'll have to leave the stack randomization feature disabled in order to execute the statified executable.

# cd /proc/sys/kernel
# cat randomize_va_space
2
# echo -n 0 >| randomize_va_space
# cat randomize_va_space
0
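
On most systems the same switch is also reachable through sysctl, which can be more convenient for scripting; a minimal sketch, assuming your kernel exposes kernel.randomize_va_space (set it back to 2 to restore the default when you are done):

# sysctl -w kernel.randomize_va_space=0
kernel.randomize_va_space = 0
# sysctl kernel.randomize_va_space
kernel.randomize_va_space = 0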

There are a few other tricks up statifier's sleeve: you can set or unset environment variables for the statified executable, and include additional libraries (LD_PRELOAD libraries) into the static executable. Being able to set additional environment variables for a static executable is useful when the binary you are statifying relies on finding additional resources like configuration files. If the binary allows you to tell it where to find its resources through environment variables, you can include these settings directly into the statified executable.

The ability to include preloaded shared libraries into the statified binary (LD_PRELOADing) is probably a less commonly used feature. One use is including additional functionality such as making the statically linked executable "trashcan friendly" by default, perhaps using delsafe , but without needing to install any additional software on the machine that is running the statically linked executable.
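
For comparison, this is what the same trick looks like in the ordinary dynamic case; the library path below is purely an assumption for illustration, and delsafe, if installed, intercepts calls such as unlink() so that deleted files land in a trash directory instead:

$ LD_PRELOAD=/usr/local/lib/libdelsafe.so rm somefile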

Security measures that randomize the address space of binaries might interfere with ELF Statifier and cause it not to work. But when you just want to move the execution of an application to another Linux machine, ELF Statifier might get you up and running without the hassle of a recompile.

Categories:

[Dec 27, 2018] The Yoda of Silicon Valley by Siobhan Roberts

Highly recommended!
Although he is certainly a giant, Knuth will never be able to complete this monograph: the technology developed too quickly. Three volumes came out in 1968-1973, and then there was a lull. On January 10 he will be 81. At this age it is difficult to work in the field of mathematics and system programming, so we will probably never see the complete fourth volume.
This inability to finish the work to which he devoted a large part of his life is definitely a tragedy. The key problem is that it is now simply impossible for one person to cover the whole area of system programming and related algorithms. But the first three volumes certainly played a tremendous positive role.
He was also distracted for several years by the creation of TeX. He could have set up a non-profit and completed that work by attracting the best minds from outside, but he is by nature a loner, as many great scientists are, and prefers to work that way.
His other mistake was that MIX, his model machine, was too far from the IBM S/360, which became the de facto standard in the mid-60s. He later realized this was a blunder and replaced MIX with the more modern MMIX, but it was "too little, too late," and the switch took time and effort. So the first three volumes and fragments of the fourth are all that we have now, and probably forever.
Not all volumes fared equally well with time. The third volume suffered most, IMHO, and as of 2019 is partially obsolete. It was also written in some haste, and some parts of it are far from clearly written (it was based on earlier lectures by Floyd, so it is oriented toward single-CPU computers only). Now that multiprocessor machines, huge amounts of RAM, and SSDs are the norm, the situation is very different from the late '60s, and it calls for different sorting algorithms (the importance of mergesort has increased, that of quicksort decreased). He also got too carried away with sorting random numbers and establishing upper bounds and average run times, whereas real data is almost never random and typically contains sorted fragments. For example, he overestimated the importance of quicksort and thus pushed the discipline in the wrong direction.
Notable quotes:
"... These days, it is 'coding', which is more like 'code-spraying'. Throw code at a problem until it kind of works, then fix the bugs in the post-release, or the next update. ..."
"... AI is a joke. None of the current 'AI' actually is. It is just another new buzz-word to throw around to people that do not understand it at all. ..."
"... One good teacher makes all the difference in life. More than one is a rare blessing. ..."
Dec 17, 2018 | www.nytimes.com

With more than one million copies in print, "The Art of Computer Programming " is the Bible of its field. "Like an actual bible, it is long and comprehensive; no other book is as comprehensive," said Peter Norvig, a director of research at Google. After 652 pages, volume one closes with a blurb on the back cover from Bill Gates: "You should definitely send me a résumé if you can read the whole thing."

The volume opens with an excerpt from " McCall's Cookbook ":

Here is your book, the one your thousands of letters have asked us to publish. It has taken us years to do, checking and rechecking countless recipes to bring you only the best, only the interesting, only the perfect.

Inside are algorithms, the recipes that feed the digital age -- although, as Dr. Knuth likes to point out, algorithms can also be found on Babylonian tablets from 3,800 years ago. He is an esteemed algorithmist; his name is attached to some of the field's most important specimens, such as the Knuth-Morris-Pratt string-searching algorithm. Devised in 1970, it finds all occurrences of a given word or pattern of letters in a text -- for instance, when you hit Command+F to search for a keyword in a document.

... ... ...

During summer vacations, Dr. Knuth made more money than professors earned in a year by writing compilers. A compiler is like a translator, converting a high-level programming language (resembling algebra) to a lower-level one (sometimes arcane binary) and, ideally, improving it in the process. In computer science, "optimization" is truly an art, and this is articulated in another Knuthian proverb: "Premature optimization is the root of all evil."

Eventually Dr. Knuth became a compiler himself, inadvertently founding a new field that he came to call the "analysis of algorithms." A publisher hired him to write a book about compilers, but it evolved into a book collecting everything he knew about how to write for computers -- a book about algorithms.

... ... ...

When Dr. Knuth started out, he intended to write a single work. Soon after, computer science underwent its Big Bang, so he reimagined and recast the project in seven volumes. Now he metes out sub-volumes, called fascicles. The next installation, "Volume 4, Fascicle 5," covering, among other things, "backtracking" and "dancing links," was meant to be published in time for Christmas. It is delayed until next April because he keeps finding more and more irresistible problems that he wants to present.

In order to optimize his chances of getting to the end, Dr. Knuth has long guarded his time. He retired at 55, restricted his public engagements and quit email (officially, at least). Andrei Broder recalled that time management was his professor's defining characteristic even in the early 1980s.

Dr. Knuth typically held student appointments on Friday mornings, until he started spending his nights in the lab of John McCarthy, a founder of artificial intelligence, to get access to the computers when they were free. Horrified by what his beloved book looked like on the page with the advent of digital publishing, Dr. Knuth had gone on a mission to create the TeX computer typesetting system, which remains the gold standard for all forms of scientific communication and publication. Some consider it Dr. Knuth's greatest contribution to the world, and the greatest contribution to typography since Gutenberg.

This decade-long detour took place back in the age when computers were shared among users and ran faster at night while most humans slept. So Dr. Knuth switched day into night, shifted his schedule by 12 hours and mapped his student appointments to Fridays from 8 p.m. to midnight. Dr. Broder recalled, "When I told my girlfriend that we can't do anything Friday night because Friday night at 10 I have to meet with my adviser, she thought, 'This is something that is so stupid it must be true.'"

... ... ...

Lucky, then, Dr. Knuth keeps at it. He figures it will take another 25 years to finish "The Art of Computer Programming," although that time frame has been a constant since about 1980. Might the algorithm-writing algorithms get their own chapter, or maybe a page in the epilogue? "Definitely not," said Dr. Knuth.

"I am worried that algorithms are getting too prominent in the world," he added. "It started out that computer scientists were worried nobody was listening to us. Now I'm worried that too many people are listening."


Scott Kim Burlingame, CA Dec. 18

Thanks Siobhan for your vivid portrait of my friend and mentor. When I came to Stanford as an undergrad in 1973 I asked who in the math dept was interested in puzzles. They pointed me to the computer science dept, where I met Knuth and we hit it off immediately. Not only a great thinker and writer, but as you so well described, always present and warm in person. He was also one of the best teachers I've ever had -- clear, funny, and interested in every student (his elegant policy was that each student could speak only twice in class during a period, to give everyone a chance to participate, and he made a point of remembering everyone's names). Some thoughts from Knuth I carry with me: finding the right name for a project is half the work (not literally true, but he labored hard on finding the right names for TeX, Metafont, etc.); always do your best work; half of why the field of computer science exists is that it is a way for mathematically minded people who like to build things to meet each other; and the observation that when the computer science dept began at Stanford one of the standard interview questions was "what instrument do you play" -- there was a deep connection between music and computer science, and indeed the dept had multiple string quartets. But in recent decades that has changed entirely. If you do a book on Knuth (he deserves it), please be in touch.

IMiss America US Dec. 18

I remember when programming was art. I remember when programming was programming. These days, it is 'coding', which is more like 'code-spraying'. Throw code at a problem until it kind of works, then fix the bugs in the post-release, or the next update.

AI is a joke. None of the current 'AI' actually is. It is just another new buzz-word to throw around to people that do not understand it at all. We should be in a golden age of computing. Instead, we are cutting all corners to get something out as fast as possible. The technology exists to do far more. It is the human element that fails us.

Ronald Aaronson Armonk, NY Dec. 18

My particular field of interest has always been compiler writing, and I have long been awaiting Knuth's volume on that subject. I would just like to point out that among Knuth's many accomplishments is the invention of LR parsers, which are widely used for writing programming language compilers.

Edward Snowden Russia Dec. 18

Yes, \TeX, and its derivative, \LaTeX{} contributed greatly to being able to create elegant documents. It is also available for the web in the form MathJax, and it's about time the New York Times supported MathJax. Many times I want one of my New York Times comments to include math, but there's no way to do so! It comes up equivalent to: $e^{i\pi}+1$.

henry pick new york Dec. 18

I read it at the time, because what I really wanted to read was volume 7, Compilers. As I understood it at the time, Professor Knuth wrote it in order to make enough money to build an organ. That apparently happened by 3: Knuth, Searching and Sorting. The most impressive part is the mathematics in Semi-numerical (2: Knuth). A lot of those problems are research projects over the literature of the last 400 years of mathematics.

Steve Singer Chicago Dec. 18

I own the three volume "Art of Computer Programming", the hardbound boxed set. Luxurious. I don't look at it very often thanks to time constraints, given my workload. But your article motivated me to at least pick it up and carry it from my reserve library to a spot closer to my main desk so I can at least grab Volume 1 and try to read some of it when the mood strikes. I had forgotten just how heavy it is, intellectual content aside. It must weigh more than 25 pounds.

Terry Hayes Los Altos, CA Dec. 18

I too used my copies of The Art of Computer Programming to guide me in several projects in my career, across a variety of topic areas. Now that I'm living in Silicon Valley, I enjoy seeing Knuth at events at the Computer History Museum (where he was a 1998 Fellow Award winner), and at Stanford. Another facet of his teaching is the annual Christmas Lecture, in which he presents something of recent (or not-so-recent) interest. The 2018 lecture is available online - https://www.youtube.com/watch?v=_cR9zDlvP88

Chris Tong Kelseyville, California Dec. 17

One of the most special treats for first year Ph.D. students in the Stanford University Computer Science Department was to take the Computer Problem-Solving class with Don Knuth. It was small and intimate, and we sat around a table for our meetings. Knuth started the semester by giving us an extremely challenging, previously unsolved problem. We then formed teams of 2 or 3. Each week, each team would report progress (or lack thereof), and Knuth, in the most supportive way, would assess our problem-solving approach and make suggestions for how to improve it. To have a master thinker giving one feedback on how to think better was a rare and extraordinary experience, from which I am still benefiting! Knuth ended the semester (after we had all solved the problem) by having us over to his house for food, drink, and tales from his life. . . And for those like me with a musical interest, he let us play the magnificent pipe organ that was at the center of his music room. Thank you Professor Knuth, for giving me one of the most profound educational experiences I've ever had, with such encouragement and humor!

Been there Boulder, Colorado Dec. 17

I learned about Dr. Knuth as a graduate student in the early 70s from one of my professors and made the financial sacrifice (graduate student assistantships were not lucrative) to buy the first and then the second volume of the Art of Computer Programming. Later, at Bell Labs, when I was a bit richer, I bought the third volume. I have those books still and have used them for reference for years. Thank you, Dr. Knuth. Art, indeed!

Gianni New York Dec. 18

@Trerra In the good old days, before Computer Science, anyone could take the Programming Aptitude Test. Pass it and companies would train you. Although there were many mathematicians and scientists, some of the best programmers turned out to be music majors. English, Social Sciences, and History majors were represented as well as scientists and mathematicians. It was a wonderful atmosphere to work in. When I started to look for a job as a programmer, I took Prudential Life Insurance's version of the Aptitude Test. After the test, the interviewer was all bent out of shape because my verbal score was higher than my math score; I was a physics major. Luckily they didn't hire me and I got a job with IBM.

M Martínez Miami Dec. 17

In summary, "May the force be with you" means: Did you read Donald Knuth's "The Art of Computer Programming"? Excellent, we loved this article. We will share it with many young developers we know.

mds USA Dec. 17

Dr. Knuth is a great Computer Scientist. Around 25 years ago, I met Dr. Knuth in a small gathering a day before he was awarded a honorary Doctorate in a university. This is my approximate recollection of a conversation. I said-- " Dr. Knuth, you have dedicated your book to a computer (one with which he had spent a lot of time, perhaps a predecessor to PDP-11). Isn't it unusual?". He said-- "Well, I love my wife as much as anyone." He then turned to his wife and said --"Don't you think so?". It would be nice if scientists with the gift of such great minds tried to address some problems of ordinary people, e.g. a model of economy where everyone can get a job and health insurance, say, like Dr. Paul Krugman.

Nadine NYC Dec. 17

I was in a training program for women in computer systems at CUNY graduate center, and they used his obtuse book. It was one of the reasons I dropped out. He used a fantasy language to describe his algorithms in his book that one could not test on computers. I already had work experience as a programmer with algorithms and I know how valuable real languages are. I might as well have read Animal Farm. It might have been different if he was the instructor.

Doug McKenna Boulder Colorado Dec. 17

Don Knuth's work has been a curious thread weaving in and out of my life. I was first introduced to Knuth and his The Art of Computer Programming back in 1973, when I was tasked with understanding a section of the then-only-two-volume Book well enough to give a lecture explaining it to my college algorithms class. But when I first met him in 1981 at Stanford, he was all-in on thinking about typography and this new-fangled system of his called TeX. Skip a quarter century. One day in 2009, I foolishly decided kind of on a whim to rewrite TeX from scratch (in my copious spare time), as a simple C library, so that its typesetting algorithms could be put to use in other software such as electronic eBook's with high-quality math typesetting and interactive pictures. I asked Knuth for advice. He warned me, prepare yourself, it's going to consume five years of your life. I didn't believe him, so I set off and tried anyway. As usual, he was right.

Baddy Khan San Francisco Dec. 17

I have a signed copy of "Fundamental Algorithms" in my library, which I treasure. Knuth was a fine teacher, and is truly a brilliant and inspiring individual. He taught during the same period as Vint Cerf, another wonderful teacher with a great sense of humor who is truly a "father of the internet". One good teacher makes all the difference in life. More than one is a rare blessing.

Indisk Fringe Dec. 17

I am a biologist, specifically a geneticist. I became interested in LaTeX typesetting early in my career and have been either called pompous or vilified by people at all levels for wanting to use it. One of my PhD advisors famously told me to forget LaTeX because it was a thing of the past. I have now forgotten him completely. I still use LaTeX almost every day in my work even though I don't generally typeset with equations or algorithms. My students always get trained in using proper typesetting. Unfortunately, the publishing industry has all but given up on TeX. Very few journals in my field accept TeX manuscripts, and most of them convert to Word before feeding text to their publishing software. Whatever people might argue against TeX, the beauty and elegance of a properly typeset document is unparalleled. Long live LaTeX

PaulSFO San Francisco Dec. 17

A few years ago Severo Ornstein (who, incidentally, did the hardware design for the first router, in 1969), and his wife Laura, hosted a concert in their home in the hills above Palo Alto. During a break a friend and I were chatting when a man came over and *asked* if he could chat with us (a high honor, indeed). His name was Don. After a few minutes I grew suspicious and asked "What's your last name?" Friendly, modest, brilliant; a nice addition to our little chat.

Tim Black Wilmington, NC Dec. 17

When I was a physics undergraduate (at Trinity in Hartford), I was hired to re-write professor's papers into TeX. Seeing the beauty of TeX, I wrote a program that re-wrote my lab reports (including graphs!) into TeX. My lab instructors were amazed! How did I do it? I never told them. But I just recognized that Knuth was a genius and rode his coat-tails, as I have continued to do for the last 30 years!

Jack512 Alexandria VA Dec. 17

A famous quote from Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it." Anyone who has ever programmed a computer will feel the truth of this in their bones.

[Dec 11, 2018] Software "upgrades" require workers to constantly relearn the same task because some young "genius" observed that a carefully thought out interface "looked tired" and glitzed it up.

Dec 11, 2018 | www.ianwelsh.net

S Brennan permalink April 24, 2016

My grandfather, in the early '60s, could board a 707 in New York and arrive in LA in far less time than I can today. And no, I am not counting 4-hour layovers with the long waits to be "screened"; the jets were 50-70 knots faster. Back then your time was worth more, today less.

Not counting longer hours AT WORK, we spend far more time commuting, making for much longer workdays. Back then your time was worth more, today less!

Software "upgrades" require workers to constantly relearn the same task because some young "genius" observed that a carefully thought out interface "looked tired" and glitzed it up. Think about the almost perfect Google Maps driver interface being redesigned by people who take private buses to work. Way back in the '90's your time was worth more than today!

Life is all the "time" YOU will ever have and if we let the elite do so, they will suck every bit of it out of you.

[Nov 07, 2018] The Computer Languages Employers Want Most in Silicon Valley

Nov 07, 2018 | qz.com


Quartz
Michael J. Coren
November 2, 2018

The Indeed jobs website determined, by counting the most requested computer languages in technology job postings, that Java and Python were most desired by employers across the U.S. The site compared posts from employers in San Francisco, San Jose, and the wider U.S. between October 2017 and October 2018. Analysis found most of the non-Python or non-Java languages to be a reflection of the digital economy's demands, with HTML, CSS, and JavaScript buttressing the everyday Web. Meanwhile, SQL and PHP drive back-end functions such as data retrieval and dynamic content display. Although languages from tech giants such as Microsoft's C# and Apple's Swift for iOS and macOS applications were not among the top 10, both were cited as among the language skills most wanted by developers. Meanwhile, Amazon Web Services, which has proved vital to cloud computing, did crack the top 10.

[Nov 05, 2018] Revisiting the Unix philosophy in 2018 Opensource.com by Michael Hausenblas

Nov 05, 2018 | opensource.com

Revisiting the Unix philosophy in 2018

The old strategy of building small, focused applications is new again in the modern microservices environment.

In 1984, Rob Pike and Brian W. Kernighan published "Program Design in the Unix Environment" in the AT&T Bell Laboratories Technical Journal, in which they argued the Unix philosophy, using the example of BSD's cat -v implementation. In a nutshell that philosophy is: Build small, focused programs -- in whatever language -- that do only one thing but do this thing well, communicate via stdin/stdout, and are connected through pipes.

Sound familiar?

Yeah, I thought so. That's pretty much the definition of microservices offered by James Lewis and Martin Fowler:

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.

While one *nix program or one microservice may be very limited or not even very interesting on its own, it's the combination of such independently working units that reveals their true benefit and, therefore, their power.
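
A concrete illustration of that combinational power is the classic word-frequency pipeline (essentially Doug McIlroy's well-known one-liner). Each stage is a tiny, single-purpose program, yet together they solve a non-trivial problem; this sketch assumes a plain-text file named input.txt:

$ tr -cs '[:alpha:]' '\n' < input.txt | tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort -rn | head -10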

*nix vs. microservices

The following table compares programs (such as cat or lsof ) in a *nix environment against programs in a microservices environment.

                                  *nix                                     Microservices
Unit of execution                 program using stdin/stdout               service with HTTP or gRPC API
Data flow                         pipes                                    ?
Configuration & parameterization  command-line arguments, environment      JSON/YAML docs
                                  variables, config files
Discovery                         package manager, man, make               DNS, environment variables, OpenAPI

Let's explore each line in slightly greater detail.

Unit of execution

The unit of execution in *nix (such as Linux) is an executable file (binary or interpreted script) that, ideally, reads input from stdin and writes output to stdout. A microservices setup deals with a service that exposes one or more communication interfaces, such as HTTP or gRPC APIs. In both cases, you'll find stateless examples (essentially a purely functional behavior) and stateful examples, where, in addition to the input, some internal (persisted) state decides what happens.

Data flow

Traditionally, *nix programs could communicate via pipes. In other words, thanks to Doug McIlroy , you don't need to create temporary files to pass around and each can process virtually endless streams of data between processes. To my knowledge, there is nothing comparable to a pipe standardized in microservices, besides my little Apache Kafka-based experiment from 2017 .
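
A rough sketch of the same hand-off done both ways: the pipe needs no prior agreement beyond "bytes in, bytes out," while the microservice version assumes a hypothetical compression service listening on localhost:8080 (the endpoint name is made up for illustration):

$ # *nix: connect two programs directly with a pipe
$ gzip -c access.log | wc -c

$ # microservices: hand the data to another process over HTTP
$ curl -s --data-binary @access.log http://localhost:8080/compress | wc -c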

Configuration and parameterization

How do you configure a program or service -- either on a permanent or a by-call basis? Well, with *nix programs you essentially have three options: command-line arguments, environment variables, or full-blown config files. In microservices, you typically deal with YAML (or even worse, JSON) documents, defining the layout and configuration of a single microservice as well as dependencies and communication, storage, and runtime settings. Examples include Kubernetes resource definitions , Nomad job specifications , or Docker Compose files. These may or may not be parameterized; that is, either you have some templating language, such as Helm in Kubernetes, or you find yourself doing an awful lot of sed -i commands.
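
The three *nix channels in one hedged sketch (any real tool will use some subset of them); the microservices counterpart would be a YAML manifest describing the same settings declaratively:

$ # 1. command-line arguments
$ sort -t, -k2 -r data.csv
$ # 2. environment variables
$ LC_ALL=C sort data.csv
$ # 3. configuration files: many tools read dotfiles such as ~/.curlrc or ~/.gitconfig
$ git config --global core.editor vim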

Discovery

How do you know what programs or services are available and how they are supposed to be used? Well, in *nix, you typically have a package manager as well as good old man; between them, they should be able to answer all the questions you might have. In a microservices setup, there's a bit more automation in finding a service. In addition to bespoke approaches like Airbnb's SmartStack or Netflix's Eureka , there usually are environment variable-based or DNS-based approaches that allow you to discover services dynamically. Equally important, OpenAPI provides a de-facto standard for HTTP API documentation and design, and gRPC does the same for more tightly coupled high-performance cases. Last but not least, take developer experience (DX) into account, starting with writing good Makefiles and ending with writing your docs with (or in?) style .
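
Side by side, and hedged: the *nix commands below are standard, while the microservice half assumes a Kubernetes-style environment where a Service named myservice injects MYSERVICE_SERVICE_HOST/MYSERVICE_SERVICE_PORT variables and is resolvable through cluster DNS (the names are illustrative, not from the article):

$ # *nix: what is installed, and how do I use it?
$ apropos compress
$ man gzip
$ rpm -ql gzip          # or: dpkg -L gzip on Debian-based systems

$ # microservices: environment-variable and DNS-based discovery
$ echo "$MYSERVICE_SERVICE_HOST:$MYSERVICE_SERVICE_PORT"
$ nslookup myservice.default.svc.cluster.local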

Pros and cons

Both *nix and microservices offer a number of challenges and opportunities

Composability

It's hard to design something that has a clear, sharp focus and can also play well with others. It's even harder to get it right across different versions and to introduce respective error case handling capabilities. In microservices, this could mean retry logic and timeouts -- maybe it's a better option to outsource these features into a service mesh? It's hard, but if you get it right, its reusability can be enormous.
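
As a back-of-the-envelope example of what "retry logic and timeouts" means in practice (before deciding to push it into a service mesh), here is a crude shell loop against a hypothetical health endpoint:

$ # try up to 3 times, giving each attempt at most 2 seconds
$ for i in 1 2 3; do curl -sf --max-time 2 http://localhost:8080/healthz && break; sleep 1; done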

Observability

In a monolith (in 2018) or a big program that tries to do it all (in 1984), it's rather straightforward to find the culprit when things go south. But, in a

yes | tr \\n x | head -c 450m | grep n

or a request path in a microservices setup that involves, say, 20 services, how do you even start to figure out which one is behaving badly? Luckily we have standards, notably OpenCensus and OpenTracing . Observability still might be the biggest single blocker if you are looking to move to microservices.

Global state

While it may not be such a big issue for *nix programs, in microservices, global state remains something of a discussion. Namely, how to make sure the local (persistent) state is managed effectively and how to make the global state consistent with as little effort as possible.

Wrapping up

In the end, the question remains: Are you using the right tool for a given task? That is, in the same way a specialized *nix program implementing a range of functions might be the better choice for certain use cases or phases, it might be that a monolith is the best option for your organization or workload. Regardless, I hope this article helps you see the many, strong parallels between the Unix philosophy and microservices -- maybe we can learn something from the former to benefit the latter.

Michael Hausenblas is a Developer Advocate for Kubernetes and OpenShift at Red Hat where he helps appops to build and operate apps. His background is in large-scale data processing and container orchestration and he's experienced in advocacy and standardization at W3C and IETF. Before Red Hat, Michael worked at Mesosphere, MapR and in two research institutions in Ireland and Austria. He contributes to open source software incl. Kubernetes, speaks at conferences and user groups, and shares good practices...

[Nov 05, 2018] The Linux Philosophy for SysAdmins And Everyone Who Wants To Be One eBook by David Both

Nov 05, 2018 | www.amazon.com

Elegance is one of those things that can be difficult to define. I know it when I see it, but putting what I see into a terse definition is a challenge. Using the Linux dict command, Wordnet provides one definition of elegance as, "a quality of neatness and ingenious simplicity in the solution of a problem (especially in science or mathematics); 'the simplicity and elegance of his invention.'"

In the context of this book, I think that elegance is a state of beauty and simplicity in the design and working of both hardware and software. When a design is elegant,
software and hardware work better and are more efficient. The user is aided by simple, efficient, and understandable tools.

Creating elegance in a technological environment is hard. It is also necessary. Elegant solutions produce elegant results and are easy to maintain and fix. Elegance does not happen by accident; you must work for it.

The quality of simplicity is a large part of technical elegance. So large, in fact, that it deserves a chapter of its own, Chapter 18, "Find the Simplicity," but we do not ignore it here. This chapter discusses what it means for hardware and software to be elegant.

Hardware Elegance

Yes, hardware can be elegant -- even beautiful, pleasing to the eye. Hardware that is well designed is more reliable as well. Elegant hardware solutions improve reliability.

[Oct 27, 2018] One issue with Microsoft (not just Microsoft) is that their business model (not the benefit of the users) requires frequent changes in the systems, so bugs are introduced at a steady clip.

Oct 27, 2018 | www.moonofalabama.org

Piotr Berman , Oct 26, 2018 2:55:29 PM | 5

"Even Microsoft, the biggest software company in the world, recently screwed up..."

Isn't it rather logical that the larger a company is, the more screw-ups it can make? After all, Microsoft has armies of programmers to make those bugs.

Once I created a joke that the best way to disable missile defense would be to have a rocket that can stop in mid-air, thus provoking the software to divide by zero and crash. One day I told that joke to a military officer who told me that something like that actually happened, but it was in the Navy and it involved a test with a torpedo. Not only did the program for "torpedo defense" go down, but the system crashed too and the ship's engine stopped working as well. I also recall explanations that a new complex software system typically has all major bugs removed only after being used for about a year; the occasion was the Internal Revenue Service changing hardware and software, leading to widely reported problems.

One issue with Microsoft (not just Microsoft) is that their business model (not the benefit of the users) requires frequent changes in the systems, so bugs are introduced at a steady clip. Of course, they do not make money on bugs per se, but on new features that in time make it impossible to use older versions of the software and hardware.

[Sep 21, 2018] 'It Just Seems That Nobody is Interested in Building Quality, Fast, Efficient, Lasting, Foundational Stuff Anymore'

Sep 21, 2018 | tech.slashdot.org

Nikita Prokopov, a software programmer and author of Fira Code, a popular programming font, AnyBar, a universal status indicator, and some open-source Clojure libraries, writes :

Remember times when an OS, apps and all your data fit on a floppy? Your desktop todo app is probably written in Electron and thus has a userland driver for an Xbox 360 controller in it, can render 3D graphics, play audio and take photos with your web camera. A simple text chat is notorious for its load speed and memory consumption. Yes, you really have to count Slack in as a resource-heavy application. I mean, a chatroom and a barebones text editor are supposed to be two of the least demanding apps in the whole world. Welcome to 2018.

At least it works, you might say. Well, bigger doesn't imply better. Bigger means someone has lost control. Bigger means we don't know what's going on. Bigger means complexity tax, performance tax, reliability tax. This is not the norm and should not become the norm. Overweight apps should be a red flag. They should mean run away scared. A 16GB Android phone was perfectly fine 3 years ago. Today with Android 8.1 it's barely usable because each app has become at least twice as big for no apparent reason. There are no additional functions. They are not faster or more optimized. They don't look different. They just...grow?

iPhone 4s was released with iOS 5, but can barely run iOS 9. And it's not because iOS 9 is that much superior -- it's basically the same. But their new hardware is faster, so they made software slower. Don't worry -- you got exciting new capabilities like...running the same apps with the same speed! I dunno. [...] Nobody understands anything at this point. Nor do they want to. We just throw barely baked shit out there, hope for the best and call it "startup wisdom." Web pages ask you to refresh if anything goes wrong. Who has time to figure out what happened? Any web app produces a constant stream of "random" JS errors in the wild, even on compatible browsers.

[...] It just seems that nobody is interested in building quality, fast, efficient, lasting, foundational stuff anymore. Even when efficient solutions have been known for ages, we still struggle with the same problems: package management, build systems, compilers, language design, IDEs. Build systems are inherently unreliable and periodically require a full clean, even though all the info needed for invalidation is there. Nothing stops us from making the build process reliable, predictable and 100% reproducible. Just nobody thinks it's important. NPM has stayed in a "sometimes works" state for years.


K. S. Kyosuke ( 729550 ) , Friday September 21, 2018 @11:32AM ( #57354556 )

Re:Why should they? ( Score: 4 , Insightful)

Less resource use to accomplish the required tasks? Both in manufacturing (more chips from the same amount of manufacturing input) and in operation (less power used)?

K. S. Kyosuke ( 729550 ) writes: on Friday September 21, 2018 @11:58AM ( #57354754 )
Re:Why should they? ( Score: 2 )

Ehm...so for example using smaller cars with better mileage to commute isn't more environmentally friendly either, according to you?

DontBeAMoran ( 4843879 ) writes: on Friday September 21, 2018 @12:04PM ( #57354826 )
Re:Why should they? ( Score: 2 )

iPhone 4S used to be the best and could run all the applications.

Today, the same power is not sufficient because of software bloat. So you could say that all the iPhones since the iPhone 4S are devices that were created and then dumped for no reason.

It doesn't matter, since we can't change the past, and it doesn't matter much going forward, since improvements are slowing down and people are changing their phones less often.

Mark of the North ( 19760 ) , Friday September 21, 2018 @01:02PM ( #57355296 )
Re:Why should they? ( Score: 5 , Interesting)

Can you really not see the connection between inefficient software and environmental harm? All those computers running code that uses four times as much data, and four times the number crunching, as is reasonable? That excess RAM and storage has to be built as well as powered along with the CPU. Those material and electrical resources have to come from somewhere.

But the calculus changes completely when the software manufacturer hosts the software (or pays for the hosting) for their customers. Our projected AWS bill motivated our management to let me write the sort of efficient code I've been trained to write. After two years of maintaining some pretty horrible legacy code, it is a welcome change.

The big players care a great deal about efficiency when they can't outsource inefficiency to the user's computing resources.

eth1 ( 94901 ) , Friday September 21, 2018 @11:45AM ( #57354656 )
Re:Why should they? ( Score: 5 , Informative)
We've been trained to be a consuming society of disposable goods. The latest and greatest feature will always be more important than something that is reliable and durable for the long haul.

It's not just consumer stuff.

The network team I'm a part of has been dealing with more and more frequent outages, 90% of which are due to bugs in software running our devices. These aren't fly-by-night vendors either, they're the "no one ever got fired for buying X" ones like Cisco, F5, Palo Alto, EMC, etc.

10 years ago, outages were 10% bugs, and 90% human error, now it seems to be the other way around. Everyone's chasing features, because that's what sells, so there's no time for efficiency/stability/security any more.

LucasBC ( 1138637 ) , Friday September 21, 2018 @12:05PM ( #57354836 )
Re:Why should they? ( Score: 3 , Interesting)

Poor software engineering means that very capable computers are no longer capable of running modern, unnecessarily bloated software. This, in turn, leads to people having to replace computers that are otherwise working well, solely to keep up with software that requires more and more system resources for no tangible benefit. In a nutshell -- sloppy, lazy programming leads to more technology waste. That impacts the environment. I have a unique perspective on this topic. I do web development for a company that does electronics recycling. I have suffered the continued bloat in software in the tools I use (most egregiously, Adobe), and I see the impact of technological waste in the increasing amount of electronics recycling that is occurring. Ironically, I'm working at home today because my computer at the office kept stalling every time I had Photoshop and Illustrator open at the same time. A few years ago that wasn't a problem.

arglebargle_xiv ( 2212710 ) writes:
Re: ( Score: 3 )

There is one place where people still produce stuff like the OP wants, and that's embedded. Not IoT wank, but real embedded, running on CPUs clocked at tens of MHz with RAM in two-digit kilobyte (not megabyte or gigabyte) quantities. And a lot of that stuff is written to very exacting standards, particularly where something like realtime control and/or safety is involved.

The one problem in this area is the endless battle with standards morons who begin each standard with an implicit "assume an infinitely

commodore64_love ( 1445365 ) , Friday September 21, 2018 @03:58PM ( #57356680 ) Journal
Re:Why should they? ( Score: 3 )

> Poor software engineering means that very capable computers are no longer capable of running modern, unnecessarily bloated software.

Not just computers.

You can add Smart TVs, set-top internet boxes, Kindles, tablets, et cetera that must be thrown away when they become too old (say 5 years) to run the latest bloatware. Software non-engineering is causing a lot of working hardware to be landfilled, and for no good reason.

[Sep 21, 2018] Fast, cheap (efficient) and reliable (robust, long lasting): pick 2

Sep 21, 2018 | tech.slashdot.org

JoeDuncan ( 874519 ) , Friday September 21, 2018 @12:58PM ( #57355276 )

Obligatory ( Score: 2 )

Fast, cheap (efficient) and reliable (robust, long lasting): pick 2.

roc97007 ( 608802 ) , Friday September 21, 2018 @12:16PM ( #57354946 ) Journal
Re:Bloat = growth ( Score: 2 )

There's probably some truth to that. And it's a sad commentary on the industry.

[Sep 21, 2018] Since Moore's law appears to have stalled at least five years ago, it will be interesting to see if we start to see algorithm research or code optimization techniques coming to the fore again.

Sep 21, 2018 | tech.slashdot.org

Anonymous Coward , Friday September 21, 2018 @11:26AM ( #57354512 )

Moore's law ( Score: 5 , Interesting)

When the speed of your processor doubles every two years along with a concurrent doubling of RAM and disk space, then you can get away with bloatware.

Since Moore's law appears to have stalled at least five years ago, it will be interesting to see if we start to see algorithm research or code optimization techniques coming to the fore again.

[Sep 16, 2018] After the iron curtain fell, there was a big demand for Russian-trained programmers because they could program in a very efficient and light manner that didn't demand too much of the hardware, if I remember correctly

Notable quotes:
"... It's a bit of chicken-and-egg problem, though. Russia, throughout 20th century, had problem with developing small, effective hardware, so their programmers learned how to code to take maximum advantage of what they had, with their technological deficiency in one field giving rise to superiority in another. ..."
"... Russian tech ppl should always be viewed with certain amount of awe and respect...although they are hardly good on everything. ..."
"... Soviet university training in "cybernetics" as it was called in the late 1980s involved two years of programming on blackboards before the students even touched an actual computer. ..."
"... I recall flowcharting entirely on paper before committing a program to punched cards. ..."
Aug 01, 2018 | turcopolier.typepad.com

Bill Herschel 2 days ago ,

Very, very slightly off-topic.

Much has been made, including in this post, of the excellent organization of Russian forces and Russian military technology.

I have been re-investigating an open-source relational database system known as PostgreSQL (variously), and I remember finding perhaps a decade ago a very useful full-text search feature of this system which I vaguely remember was written by a Russian and, for that reason, mildly distrusted by me.

Come to find out that the principal developers and maintainers of PostgreSQL are Russian. OMG. Double OMG, because the reason I chose it in the first place is that it is the best non-proprietary RDBMS out there and today is supported on Google Cloud, AWS, etc.

The US has met an equal, or conceivably a superior; case closed. Trump's thoroughly odd behavior with Putin is just one, but a very obvious, example of this.

Of course, Trump's nationalistic blather is creating a "base" of people who believe in the godliness of the US. They are in for a very serious disappointment.

kao_hsien_chih Bill Herschel a day ago ,

After the iron curtain fell, there was a big demand for Russian-trained programmers because they could program in a very efficient and "light" manner that didn't demand too much of the hardware, if I remember correctly.

It's a bit of chicken-and-egg problem, though. Russia, throughout 20th century, had problem with developing small, effective hardware, so their programmers learned how to code to take maximum advantage of what they had, with their technological deficiency in one field giving rise to superiority in another.

Russia has plenty of very skilled, very well-trained folks and their science and math education is, in a way, more fundamentally and soundly grounded on the foundational stuff than US (based on my personal interactions anyways).

Russian tech ppl should always be viewed with certain amount of awe and respect...although they are hardly good on everything.

TTG kao_hsien_chih a day ago ,

Well said. Soviet university training in "cybernetics" as it was called in the late 1980s involved two years of programming on blackboards before the students even touched an actual computer.

It gave the students an understanding of how computers work down to the bit-flipping level. Imagine trying to fuzz code in your head.

FarNorthSolitude TTG a day ago ,

I recall flowcharting entirely on paper before committing a program to punched cards. I used to do hex and octal math in my head as part of debugging core dumps. Ah, the glory days.

Honeywell once made a military computer that was 10 bit. That stumped me for a while, as everything was 8 or 16 bit back then.

kao_hsien_chih FarNorthSolitude 10 hours ago ,

That used to be fairly common in the civilian sector (in US) too: computing time was expensive, so you had to make sure that the stuff worked flawlessly before it was committed.

No opportunity to see things go wrong and do them over, like much of how things happen nowadays. Russians, with their hardware limitations/shortages, I imagine must have been much more thorough than US programmers were back in the old days, and you could only get there by being very thoroughly grounded in the basics.

[Sep 07, 2018] How Can We Fix The Broken Economics of Open Source?

Notable quotes:
"... [with some subset of features behind a paywall] ..."
Sep 07, 2018 | news.slashdot.org

If we take consulting, services, and support off the table as an option for high-growth revenue generation (the only thing VCs care about), we are left with open core [with some subset of features behind a paywall], software as a service, or some blurring of the two... Everyone wants infrastructure software to be free and continuously developed by highly skilled professional developers (who in turn expect to make substantial salaries), but no one wants to pay for it. The economics of this situation are unsustainable and broken ...

[W]e now come to what I have recently called "loose" open core and SaaS. In the future, I believe the most successful OSS projects will be primarily monetized via this method. What is it? The idea behind "loose" open core and SaaS is that a popular OSS project can be developed as a completely community driven project (this avoids the conflicts of interest inherent in "pure" open core), while value added proprietary services and software can be sold in an ecosystem that forms around the OSS...

Unfortunately, there is an inflection point at which in some sense an OSS project becomes too popular for its own good, and outgrows its ability to generate enough revenue via either "pure" open core or services and support... [B]uilding a vibrant community and then enabling an ecosystem of "loose" open core and SaaS businesses on top appears to me to be the only viable path forward for modern VC-backed OSS startups.
Klein also suggests OSS foundations start providing fellowships to key maintainers, who currently "operate under an almost feudal system of patronage, hopping from company to company, trying to earn a living, keep the community vibrant, and all the while stay impartial..."

"[A]s an industry, we are going to have to come to terms with the economic reality: nothing is free, including OSS. If we want vibrant OSS projects maintained by engineers that are well compensated and not conflicted, we are going to have to decide that this is something worth paying for. In my opinion, fellowships provided by OSS foundations and funded by companies generating revenue off of the OSS is a great way to start down this path."

[Apr 30, 2018] New Book Describes Bluffing Programmers in Silicon Valley

Notable quotes:
"... Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley ..."
"... Older generations called this kind of fraud "fake it 'til you make it." ..."
"... Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring ..."
"... It's not a "kids these days" sort of issue, it's *always* been the case that shameless, baseless self-promotion wins out over sincere skill without the self-promotion, because the people who control the money generally understand boasting more than they understand the technology. ..."
"... In the bad old days we had a hell of a lot of ridiculous restriction We must somehow made our programs to run successfully inside a RAM that was 48KB in size (yes, 48KB, not 48MB or 48GB), on a CPU with a clock speed of 1.023 MHz ..."
"... So what are the uses for that? I am curious what things people have put these to use for. ..."
"... Also, Oracle, SAP, IBM... I would never buy from them, nor use their products. I have used plenty of IBM products and they suck big time. They make software development 100 times harder than it could be. ..."
"... I have a theory that 10% of people are good at what they do. It doesn't really matter what they do, they will still be good at it, because of their nature. These are the people who invent new things, who fix things that others didn't even see as broken and who automate routine tasks or simply question and erase tasks that are not necessary. ..."
"... 10% are just causing damage. I'm not talking about terrorists and criminals. ..."
"... Programming is statistically a dead-end job. Why should anyone hone a dead-end skill that you won't be able to use for long? For whatever reason, the industry doesn't want old programmers. ..."
Apr 30, 2018 | news.slashdot.org

Long-time Slashdot reader Martin S. pointed us to an excerpt from the new book Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley by Portland-based investigative reporter Corey Pein.

The author shares what he realized at a job recruitment fair seeking "Java Legends, Python Badasses, Hadoop Heroes," and other gratingly childish classifications describing various programming specialities.

" I wasn't the only one bluffing my way through the tech scene. Everyone was doing it, even the much-sought-after engineering talent.

I was struck by how many developers were, like myself, not really programmers, but rather this, that and the other. A great number of tech ninjas were not exactly black belts when it came to the actual onerous work of computer programming. So many of the complex, discrete tasks involved in the creation of a website or an app had been automated that it was no longer necessary to possess knowledge of software mechanics. The coder's work was rarely a craft. The apps ran on an assembly line, built with "open-source", off-the-shelf components. The most important computer commands for the ninja to master were copy and paste...

[M]any programmers who had "made it" in Silicon Valley were scrambling to promote themselves from coder to "founder". There wasn't necessarily more money to be had running a startup, and the increase in status was marginal unless one's startup attracted major investment and the right kind of press coverage. It's because the programmers knew that their own ladder to prosperity was on fire and disintegrating fast. They knew that well-paid programming jobs would also soon turn to smoke and ash, as the proliferation of learn-to-code courses around the world lowered the market value of their skills, and as advances in artificial intelligence allowed for computers to take over more of the mundane work of producing software. The programmers also knew that the fastest way to win that promotion to founder was to find some new domain that hadn't yet been automated. Every tech industry campaign designed to spur investment in the Next Big Thing -- at that time, it was the "sharing economy" -- concealed a larger programme for the transformation of society, always in a direction that favoured the investor and executive classes.

"I wasn't just changing careers and jumping on the 'learn to code' bandwagon," he writes at one point. "I was being steadily indoctrinated in a specious ideology."


Anonymous Coward , Saturday April 28, 2018 @11:40PM ( #56522045 )

older generations already had a term for this ( Score: 5 , Interesting)

Older generations called this kind of fraud "fake it 'til you make it."

raymorris ( 2726007 ) , Sunday April 29, 2018 @02:05AM ( #56522343 ) Journal
The people who are smarter won't ( Score: 5 , Informative)

> The people can do both are smart enough to build their own company and compete with you.

Been there, done that. Learned a few lessons. Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring, managing people, corporate strategy, staying up on the competition, figuring out tax changes each year and getting taxes filed six times each year, the various state and local requirements, legal changes, contract hassles, etc, while hoping the company makes money this month so they can take a paycheck and pay their rent.

I learned that I'm good at creating software systems and I enjoy it. I don't enjoy all-nighters, partners being dickheads trying to pull out of a contract, or any of a thousand other things related to running a start-up business. I really enjoy a consistent, six-figure compensation package too.

brian.stinar ( 1104135 ) writes:
Re: ( Score: 2 )

* getting taxes filed eighteen times a year.

I pay monthly gross receipts tax (12), quarterly withholdings (4) and a corporate (1) and individual (1) returns. The gross receipts can vary based on the state, so I can see how six times a year would be the minimum.

Cederic ( 9623 ) writes:
Re: ( Score: 2 )

Fuck no.
Cost of full automation: $4m
Cost of manual entry: $0
Opportunity cost of manual entry: $800/year

At worst, pay for an accountant, if you can get one that cheaply. Bear in mind talking to them incurs most of that opportunity cost anyway.

serviscope_minor ( 664417 ) writes:
Re: ( Score: 2 )

Nowadays I work 9:30-4:30 for a very good, consistent paycheck and let some other "smart person" put in 75 hours a week dealing with hiring

There's nothing wrong with not wanting to run your own business; it's not for most people, and even if it were, the numbers don't add up. But putting the scare quotes in like that makes it sound like you have a huge chip on your shoulder. Those things are just as essential to the business as your work, and without them you wouldn't have the steady 9:30-4:30 with a good paycheck.

raymorris ( 2726007 ) writes:
Important, and dumb. ( Score: 3 , Informative)

Of course they are important. I wouldn't have done those things if they weren't important!

I frequently have friends say things like "I love baking. I can't get enough of baking. I'm going to open a bakery.". I ask them "do you love dealing with taxes, every month? Do you love contract law? Employment law? Marketing? Accounting?" If you LOVE baking, the smart thing to do is to spend your time baking. Running a start-up business, you're not going to do much baking.

If you love marketing, employment law, taxes

raymorris ( 2726007 ) writes:
Four tips for a better job. Who has more? ( Score: 3 )

I can tell you a few things that have worked for me. I'll go in chronological order rather than priority order.

Make friends in the industry you want to be in. Referrals are a major way people get jobs.

Look at the job listings for jobs you'd like to have and see which skills a lot of companies want, but you're missing. For me that's Java. A lot of companies list Java skills and I'm not particularly good with Java. Then consider learning the skills you lack, the ones a lot of job postings are looking for.

Certifi

goose-incarnated ( 1145029 ) , Sunday April 29, 2018 @02:34PM ( #56524475 ) Journal
Re: older generations already had a term for this ( Score: 5 , Insightful)
You don't understand the point of an ORM do you? I'd suggest reading why they exist

They exist because programmers value code design more than data design. ORMs are the poster-child for square-peg-round-hole solutions, which is why all ORMs choose one of three different ways of squashing hierarchical data into a relational form, all of which are crappy.

If the devs of the system (the ones choosing to use an ORM) had any competence at all, they'd design their database first, because in any application that uses a database, the database is the most important bit, not the OO-ness or Functional-ness of the design.

Over the last few decades I've seen programs in a system come and go; a component here gets rewritten, a component there gets rewritten, but you know what? They all have to work with the same damn data.

You can more easily switch out your code for new code with a new design in a new language than you can change the database structure. So explain to me why it is that you think the database should be mangled to fit your OO code rather than mangling your OO code to fit the database?
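
A small illustration of the point, using sqlite3 only because it ships with Python: the aggregate belongs in the database, not in a loop over objects the ORM dragged into memory.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("alice", 10.0), ("bob", 5.0), ("alice", 7.5)])

    # Code-first reflex: fetch every row and aggregate in the application.
    total = 0.0
    for _, amount in conn.execute("SELECT customer, amount FROM orders"):
        total += amount

    # Data-first version: one SQL statement, no rows shipped to the app.
    (total_sql,) = conn.execute("SELECT SUM(amount) FROM orders").fetchone()
    assert total == total_sql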

cheekyboy ( 598084 ) writes:
im sick of reinventors and new frameworks ( Score: 3 )

Stick to the one thing for 10-15 years. Often all this new shit doesn't do jack different from the old shit; it's not faster, it's not better. Every dick wants to be famous, so he makes another damn library/tool with his own fancy name and feature, instead of enhancing an existing product.

gbjbaanb ( 229885 ) writes:
Re: ( Score: 2 )

amen to that.

Or kids who can't hack the main stuff suddenly discover the cool new thing, and then they can pretend they're "learning" it, and when the going gets tough (as it always does) they can declare the tech to be pants and move to another.

hence we had so many people on the bandwagon for functional programming, then dumped it for ruby on rails, then dumped that for Node.js; not sure what they're on currently, probably back to asp.net.

Greyfox ( 87712 ) writes:
Re: ( Score: 2 )

How much code do you have to reuse before you're not really programming anymore? When I started in this business, it was reasonably possible that you could end up on a project that didn't particularly have much (or any) of an operating system. They taught you assembly language and the process by which the system boots up, but I think if I were to ask most of the programmers where I work, they wouldn't be able to explain how all that works...

djinn6 ( 1868030 ) writes:
Re: ( Score: 2 )
It really feels like if you know what you're doing it should be possible to build a team of actually good programmers and put everyone else out of business by actually meeting your deliverables, but no one has yet. I wonder why that is.

You mean Amazon, Google, Facebook and the like? People may not always like what they do, but they manage to get things done and make plenty of money in the process. The problem for a lot of other businesses is not having a way to identify and promote actually good programmers. In your example, you could've spent 10 minutes fixing their query and saved them days of headache, but how much recognition will you actually get? Where is your motivation to help them?

Junta ( 36770 ) writes:
Re: ( Score: 2 )

It's not a "kids these days" sort of issue, it's *always* been the case that shameless, baseless self-promotion wins out over sincere skill without the self-promotion, because the people who control the money generally understand boasting more than they understand the technology. Yes it can happen that baseless boasts can be called out over time by a large enough mass of feedback from competent peers, but it takes a *lot* to overcome the tendency for them to have faith in the boasts.

It does correlate stron

cheekyboy ( 598084 ) writes:
Re: ( Score: 2 )

And all these modern coders forget old lessons, and make shit stuff, just look at instagram windows app, what a load of garbage shit, that us old fuckers could code in 2-3 weeks.

Instagram - your app sucks, cookie cutter coders suck, no refinement, coolness. Just cheap ass shit, with limited usefulness.

Just like most of commercial software that's new - quick shit.

Oh and it's obvious if you're an Indian faking it: you haven't worked at 100 companies by the age of 29.

Junta ( 36770 ) writes:
Re: ( Score: 2 )

Here's another problem, if faced with a skilled team that says "this will take 6 months to do right" and a more naive team that says "oh, we can slap that together in a month", management goes with the latter. Then the security compromises occur, then the application fails due to pulling in an unvetted dependency update live into production. When the project grows to handling thousands instead of dozens of users and it starts mysteriously folding over and the dev team is at a loss, well the choice has be

molarmass192 ( 608071 ) , Sunday April 29, 2018 @02:15AM ( #56522359 ) Homepage Journal
Re:older generations already had a term for this ( Score: 5 , Interesting)

These restrictions are a large part of what makes Arduino programming "fun". If you don't plan out your memory usage, you're gonna run out of it. I cringe when I see 8MB web pages of bloated "throw in everything including the kitchen sink and the neighbor's car". Unfortunately, the careful and cautious way is dying in favor of throwing 3rd-party code at it until it does something. Of course, I don't have time to review it, but I'm sure everybody else has peer reviewed it for flaws and exploits line by line.

AmiMoJo ( 196126 ) writes: < mojo@@@world3...net > on Sunday April 29, 2018 @05:15AM ( #56522597 ) Homepage Journal
Re:older generations already had a term for this ( Score: 4 , Informative)
Unfortunately, the careful and cautious way is dying in favor of throwing 3rd-party code at it until it does something.

Of course. What is the business case for making it efficient? Those massive frameworks are cached by the browser and run on the client's system, so they cost you nothing and save you time to market. Efficiency costs money with no real benefit to the business.

If we want to fix this, we need to make bloat have an associated cost somehow.

locketine ( 1101453 ) writes:
Re: older generations already had a term for this ( Score: 2 )

My company is dealing with the result of this mentality right now. We released the web app to the customer without performance testing and doing several majorly inefficient things to meet deadlines. Once real load was put on the application by users with non-ideal hardware and browsers, the app was infuriatingly slow. Suddenly our standard sub-40 hour workweek became a 50+ hour workweek for months while we fixed all the inefficient code and design issues.

So, while you're right that getting to market and opt

serviscope_minor ( 664417 ) writes:
Re: ( Score: 2 )

In the bad old days we had a hell of a lot of ridiculous restrictions. We somehow had to make our programs run successfully inside RAM that was 48KB in size (yes, 48KB, not 48MB or 48GB), on a CPU with a clock speed of 1.023 MHz

We still have them. In fact some of the systems I've programmed have been more resource limited than the gloriously spacious 32KiB memory of the BBC Model B. Take the PIC12F or 10F series. A glorious 64 bytes of RAM, max clock speed of 16MHz, but it's not unusual to run it at 32kHz.

serviscope_minor ( 664417 ) writes:
Re: ( Score: 2 )

So what are the uses for that? I am curious what things people have put these to use for.

It's hard to determine because people don't advertise use of them at all. However, I know that my electric toothbrush uses an Epson 4-bit MCU of some description. It's got a status LED, a basic NiMH battery charger and a PWM controller for an H-bridge. Braun sell a *lot* of electric toothbrushes. Any gadget that's smarter than a simple switch will probably have some sort of basic MCU in it. Alarm system components, sensor

tlhIngan ( 30335 ) writes:
Re: ( Score: 3 , Insightful)
b) No computer ever ran at 1.023 MHz. It was either a nice multiple of 1MHz or maybe a multiple of 3.579545MHz (i.e. using the TV output circuit's color clock crystal to drive the CPU).

Well, it could be used to drive the TV output circuit, OR, it was used because it's a stupidly cheap high speed crystal. You have to remember except for a few frequencies, most crystals would have to be specially cut for the desired frequency. This occurs even today, where most oscillators are either 32.768kHz (real time clock

Anonymous Coward writes:
Re: ( Score: 2 , Interesting)

Yeah, nice talk. You could have stopped after the first sentence. The other AC is referring to the Commodore C64 [wikipedia.org]. The frequency has nothing to do with crystal availability but with the simple fact that everything in the C64 is synced to the TV. One clock cycle equals 8 pixels. The graphics chip and the CPU take turns accessing the RAM. The different frequencies dictated by the TV standards are the reason why the CPU in the NTSC version of the C64 runs at 1.023MHz and the PAL version at 0.985MHz.

Wraithlyn ( 133796 ) writes:
Re: ( Score: 2 )

LOL what exactly is so special about 16K RAM? https://yourlogicalfallacyis.c... [yourlogicalfallacyis.com]

I cut my teeth on a VIC20 (5K RAM), then later a C64 (which ran at 1.023MHz...)

Anonymous Coward writes:
Re: ( Score: 2 , Interesting)

Commodore 64 for the win. I worked for a company that made detection devices for the railroad, things like monitoring axle temperatures, reading the rail car ID tags. The original devices were made using Commodore 64 boards using software written by an employee at the one rail road company working with them.

The company then hired some electrical engineers to design custom boards using the 68000 chips and I was hired as the only programmer. Had to rewrite all of the code which was fine...

wierd_w ( 1375923 ) , Saturday April 28, 2018 @11:58PM ( #56522075 )
... A job fair can easily test this competency. ( Score: 4 , Interesting)

Many of these languages have an interactive interpreter. I know for a fact that Python does.

So, since job-fairs are an all day thing, and setup is already a thing for them -- set up a booth with like 4 computers at it, and an admin station. The 4 terminals have an interactive session with the interpreter of choice. Every 20min or so, have a challenge for "Solve this problem" (needs to be easy and already solved in general. Programmers hate being pimped without pay. They don't mind tests of skill, but hate being pimped. Something like "sort this array, while picking out all the prime numbers" or something.) and see who steps up. The ones that step up have confidence they can solve the problem, and you can quickly see who can do the work and who can't.

The ones that solve it, and solve it to your satisfaction, you offer a nice gig to.
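
For reference, the kind of five-minute answer such a booth challenge is fishing for (this particular solution is mine, not the poster's): sort the array and pull out the primes.

    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def sort_and_primes(values):
        ordered = sorted(values)                     # sorted copy of the input
        primes = [v for v in ordered if is_prime(v)]
        return ordered, primes

    print(sort_and_primes([10, 3, 7, 8, 2, 9]))      # ([2, 3, 7, 8, 9, 10], [2, 3, 7])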

ShanghaiBill ( 739463 ) , Sunday April 29, 2018 @01:50AM ( #56522321 )
Re:... A job fair can easily test this competency. ( Score: 5 , Informative)
Then you get someone good at sorting arrays while picking out prime numbers, but potentially not much else.

The point of the test is not to identify the perfect candidate, but to filter out the clearly incompetent. If you can't sort an array and write a function to identify a prime number, I certainly would not hire you. Passing the test doesn't get you a job, but it may get you an interview ... where there will be other tests.

wierd_w ( 1375923 ) writes:
Re: ( Score: 2 )

BINGO!

(I am not even a professional programmer, but I can totally perform such a trivially easy task. The example tests basic understanding of loop construction, function construction, variable use, efficient sorting, and error correction-- especially with mixed-type arrays. All of these are things any programmer SHOULD know how to do, without being overly complicated, or clearly a disguised occupational problem trying to get a free solution. Like I said, programmers hate being pimped, and will be turned off

wierd_w ( 1375923 ) , Sunday April 29, 2018 @04:02AM ( #56522443 )
Re: ... A job fair can easily test this competency ( Score: 5 , Insightful)

Again, the quality applicant and the code monkey both have something the fakers do not-- Actual comprehension of what a program is, and how to create one.

As Bill points out, this is not the final exam. This is the "Oh, I see you do actually know how to program-- show me more" portion of the process. This is the part that HR drones are not capable of performing, due to Dunning-Kruger. Those that are actually, REALLY competent will do more than just satisfy the requirements of the challenge; they will provide actually working solutions to the challenge that properly validate their input, and return proper error states if the input is invalid, etc-- You can learn a LOT about a potential hire by observing their work. *THAT* is what this is really about. The triviality of the problem is a necessity, because you ***DON'T*** try to get free solutions out of people.

I realize that may be difficult for you to comprehend, but you *DON'T* do that. The job fair is to let people know that you have a position available, and try to curry interest in people to apply. A successful pre-screening is confidence building, and helps the potential hire to feel that your company is actually interested in actually hiring somebody, and not just fucking off in the booth, to cover for "failing to find somebody" and then "Getting yet another H1B". It gives them a chance to show you what they can do. That is what it is for, and what it does. It also excludes the fakers that this article is about-- The ones that can talk a good talk, but could not program a simple boolean check condition if their life depended on it.

If it were not for the time constraints of a job fair (usually only 2 days, and in that time you need to try and pre-screen as many as possible), I would suggest a tiered challenge, with progressively harder challenges, where you hand out resumes to the ones that make it to the top 3 brackets, but that is not the way the world works.

luis_a_espinal ( 1810296 ) writes:
Re: ( Score: 2 )
This in my opinion is really a waste of time. Challenges like this have to be so simple they can be done walking up to a booth that they are not likely to filter the "all talk" types any better than a few interview questions could (in person, so the candidate can't just google it).

Tougher, more involved stuff isn't good either; it gives a huge advantage to the full-time job hunter, while the guy or gal who already has a 9-5 and a family that wants to see them has not got time for games. We have been struggling with hiring where I work (I do a lot of the interviews) and these are the conclusions we have reached

You would be surprised at the number of people with impeccable-looking resumes failing at something as simple as the FizzBuzz test [codinghorror.com]
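
For anyone who has somehow never met it, the entire FizzBuzz exercise fits in a dozen lines; this is just one straightforward rendition, not a reference version.

    def fizzbuzz(n: int) -> str:
        if n % 15 == 0:
            return "FizzBuzz"
        if n % 3 == 0:
            return "Fizz"
        if n % 5 == 0:
            return "Buzz"
        return str(n)

    for i in range(1, 101):          # the canonical 1..100 run
        print(fizzbuzz(i))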

PaulRivers10 ( 4110595 ) writes:
Re: ... A job fair can easily test this competenc ( Score: 2 )

The only thing fizzbuzz tests is "have you done fizzbuzz before?" It's a short question filled with every petty trick the author could think to throw in there. If you haven't seen the tricks, they trip you up for no reason related to your actual coding skills. Once you have seen them, they're trivial and again unrelated to real work. Fizzbuzz is best passed by someone aiming to game the interview system. It passes people gaming it and trips up people who spent their time doing real work on the job.

Hognoxious ( 631665 ) writes:
Re: ( Score: 2 )
they trip you up for no reason related to your actual coding skills.

Bullshit!

luis_a_espinal ( 1810296 ) , Sunday April 29, 2018 @07:49AM ( #56522861 ) Homepage
filter the lame code monkeys ( Score: 4 , Informative)
Lame monkey tests select for lame monkeys.

A good programmer first and foremost has a clean mind. Experience suggests puzzle geeks, who excel at contrived tests, are usually sloppy thinkers.

No. Good programmers can trivially knock out any of these so-called lame monkey tests. It's lame code monkeys who can't do it. And I've seen their work. Many night shifts and weekends I've burned trying to fix their shit because they couldn't actually do any of the things behind what you call "lame monkey tests", like:

    - pulling expensive invariant calculations out of loops
    - using for loops to scan a fucking table to pull rows or calculate an aggregate when they could let the database do what it does best with a simple SQL statement
    - systems crashing under actual load because their shitty code was never stress tested (but it worked on my dev box!)
    - again with databases, having to redo their schemas because they were fattened up so much with columns like VALUE1, VALUE2, ... VALUE20 (normalize, you assholes!)
    - chatty remote APIs - because these code monkeys cannot think about the need for bulk operations in increasingly distributed systems
    - storing dates in unsortable strings because the idiots do not know most modern programming languages have a date data type

Oh, and the most important: off-by-one looping errors. I see this all the time -- the type of thing a good programmer can spot quickly because he or she can do the so-called "lame monkey tests" that involve arrays and sorting.

I've seen the type: "I don't need to do this shit because I have business knowledge and I code for business and IT not google", and then they go and code and fuck it up... and then the rest of us have to go clean up their shit at 1AM or on weekends.

If you work as an hourly paid contractor cleaning that crap, it can be quite lucrative. But sooner or later it truly sucks the energy out of your soul.

So yeah, we need more lame monkey tests ... to filter the lame code monkeys.
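
Two of the items on that list are easy to show side by side; the numbers below are contrived, purely for illustration.

    import math

    rates = [0.12, 0.07, 0.30, 0.05]

    # Sloppy: recomputes the same scale factor on every pass,
    # and the loop bound silently drops the last element.
    total = 0.0
    for i in range(len(rates) - 1):            # off-by-one: misses rates[-1]
        total += rates[i] * math.sqrt(365)     # invariant recomputed each time

    # Fixed: hoist the invariant out of the loop and cover the whole list.
    scale = math.sqrt(365)
    total = sum(rate * scale for rate in rates)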

ShanghaiBill ( 739463 ) writes:
Re: ( Score: 3 )
Someone could Google the problem with the phone then step up and solve the challenge.

If, given a spec, someone can consistently cobble together working code by Googling, then I would love to hire them. That is the most productive way to get things done.

There is nothing wrong with using external references. When I am coding, I have three windows open: an editor, a testing window, and a browser with a Stackoverflow tab open.

Junta ( 36770 ) writes:
Re: ( Score: 2 )

Yeah, when we do tech interviews, we ask questions that we are certain they won't be able to answer, but want to see how they would think about the problem and what questions they ask to get more data and that they don't just fold up and say "well that's not the sort of problem I'd be thinking of" The examples aren't made up or anything, they are generally selection of real problems that were incredibly difficult that our company had faced before, that one may not think at first glance such a position would

bobstreo ( 1320787 ) writes:
Not