
Software Engineering

News | Programming Languages Design | Recommended Books | Recommended Links | Selected Papers | Architecture | Unix Component Model
Conceptual Integrity | Brooks law | Conway Law | A Note on the Relationship of Brooks Law and Conway Law | Configuration Management | Simplification and KISS | Defensive programming
Software Life Cycle Models | Software Prototyping | CMM (Capability Maturity Model) | Extreme programming as yet another SE fad | Agile -- Fake Solution to an Important Problem | anti-OO | Literate Programming
Reverse Engineering Links | Programming style | Project Management | Code Reviews and Inspections | The Mythical Man-Month | Design patterns
Bad Software | Information Overload | Inhouse vs outsourced applications development | OSS Development as a Special Type of Academic Research | A Second Look at the Cathedral and Bazaar | Labyrinth of Software Freedom
Program Understanding | Git | Software archeology | Refactoring vs Restructuring | Software Testing | Featuritis | Managing Distributed Software Development
Distributed software development | LAMP Stack | Perl-based Bug Tracking | Code Metrics | Cargo cult programming | Document Management Systems
Testing | Over 50 and unemployed
Programming as a profession | Primitive views of the Software Design | Sysadmin Horror Stories | Health Issues | SE quotes | Humor | Etc

Software Engineering: A study akin to numerology and astrology, but lacking the precision of the former and the success of the latter.

KISS Principle     /kis' prin'si-pl/ n.     "Keep It Simple, Stupid". A maxim often invoked when discussing design to fend off creeping featurism and control development complexity. Possibly related to the marketroid maxim on sales presentations, "Keep It Short and Simple".

creeping featurism     /kree'ping fee'chr-izm/ n.     [common] 1. Describes a systematic tendency to load more chrome and features onto systems at the expense of whatever elegance they may have possessed when originally designed. See also feeping creaturism. "You know, the main problem with BSD Unix has always been creeping featurism." 2. More generally, the tendency for anything complicated to become even more complicated because people keep saying "Gee, it would be even better if it had this feature too". (See feature.) The result is usually a patchwork because it grew one ad-hoc step at a time, rather than being planned. Planning is a lot of work, but it's easy to add just one extra little feature to help someone ... and then another ... and another... When creeping featurism gets out of hand, it's like a cancer. Usually this term is used to describe computer programs, but it could also be said of the federal government, the IRS 1040 form, and new cars. A similar phenomenon sometimes afflicts conscious redesigns; see second-system effect. See also creeping elegance.
Jargon file

Software engineering (SE) probably has the largest concentration of snake oil salesmen after OO programming, and software architecture is no exception. Many published software methodologies and architectures claim benefits that most of them cannot deliver (UML is one good example). I see a lot of oversimplification of the real situation and a lot of unnecessary (and useless) formalisms. The main ideas advocated here are simplification of software architecture (including use of the well-understood "pipe and filter" model) and the use of scripting languages.
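
To make the "pipe and filter" idea concrete, here is a minimal sketch in Python (the log-filtering task and all names are invented for illustration): each stage is a generator that lazily consumes the previous one, the same way a Unix pipeline chains small filters.

    import sys

    def read_lines(stream):
        # Source filter: yield lines from any file-like object, newline stripped.
        for line in stream:
            yield line.rstrip("\n")

    def grep(pattern, lines):
        # Filter: pass through only the lines containing the pattern.
        return (line for line in lines if pattern in line)

    def first_field(lines):
        # Filter: keep only the first whitespace-separated field of each line.
        return (line.split()[0] for line in lines if line.split())

    if __name__ == "__main__":
        # Roughly equivalent in spirit to: grep ERROR | awk '{print $1}'
        pipeline = first_field(grep("ERROR", read_lines(sys.stdin)))
        for field in pipeline:
            print(field)

Each filter knows nothing about its neighbors, so stages can be added, removed, or reordered without touching the rest of the chain; that is the whole architectural point.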

There are few quality general architecture resources available on the Net, so the list below represents only links that I am personally interested in. The stress here is on skepticism, and this collection is neither complete nor up to date. Still, it might help students trying to study this complex and interesting subject. Or perhaps, if you are already a software architect, you might be able to expand your knowledge of the subject.

Excessive zeal in adopting some fashionable but questionable methodology is a "clear and present danger" in software engineering. This is not a new threat: it started with the structured programming revolution and then the search for the verification "holy grail", with Edsger W. Dijkstra as the new prophet of an obscure cult. The main problem is that all those methodologies contain perhaps 20% useful elements, but the other 80% kill the useful elements and probably introduce some real disadvantages. After a dozen or so partially useful but mostly useless methodologies have come, been enthusiastically adopted, and gone into oblivion, we should definitely be skeptical.

All this "extreme programming" idiotism or CMM Lysenkoism should be treated as we treat dangerous religious sects.  It's undemocratic and stupid to prohibit them but it's equally dangerous and stupid to follow their recommendations ;-). As Talleyrand advised to junior diplomats: "Above all, gentlemen, not too much zeal. "  By this phrase, Talleyrand was reportedly recommended to his subordinates that important decisions must be based upon the exercise of cool-headed reason and not upon emotions or any waxing or waning popular delusion.

One interesting fact about software architecture is that it cannot be practiced from an "ivory tower". Only when you do the coding yourself and face the limitations of the tools and hardware can you create a great architecture. See Real Insights into Architecture Come Only From Actual Programming.

The primary purpose of software architecture courses is to teach students higher-level skills useful in designing and implementing complex software systems. They usually include some information about classification (general and domain-specific architectures), analysis, and tools. As the folks at Bredemeyer Consulting aptly noted in their paper on the software architect's role:

A simplistic view of the role is that architects create architectures, and their responsibilities encompass all that is involved in doing so. This would include articulating the architectural vision, conceptualizing and experimenting with alternative architectural approaches, creating models and component and interface specification documents, and validating the architecture against requirements and assumptions.

However, any experienced architect knows that the role involves not just these technical activities, but others that are more political and strategic in nature on the one hand, and more like those of a consultant, on the other. A sound sense of business and technical strategy is required to envision the "right" architectural approach to the customer's problem set, given the business objectives of the architect's organization. Activities in this area include the creation of technology roadmaps, making assertions about technology directions and determining their consequences for the technical strategy and hence architectural approach.

Further, architectures are seldom embraced without considerable challenges from many fronts. The architect thus has to shed any distaste for what may be considered "organizational politics", and actively work to sell the architecture to its various stakeholders, communicating extensively and working networks of influence to ensure the ongoing success of the architecture.

But "buy-in" to the architecture vision is not enough either. Anyone involved in implementing the architecture needs to understand it. Since weighty architectural documents are notorious dust-gatherers, this involves creating and teaching tutorials and actively consulting on the application of the architecture, and being available to explain the rationale behind architectural choices and to make amendments to the architecture when justified.

Lastly, the architect must lead--the architecture team, the developer community, and, in its technical direction, the organization.

Again, I would like to stress that the main principle of software architecture is simple and well known -- the famous KISS principle. While the principle is simple, its implementation is not, and a lot of developers (especially developers with limited resources) have paid dearly for violating it. I have found only one reference on simplicity in SE: R. S. Pressman. Simplicity. In Software Engineering, A Practitioner's Approach, page 452. McGraw Hill, 1997. Here open source tools can help, because for those tools complexity is not the competitive advantage it is for closed source tools. But that is not necessarily true of particular tools, as one problem with open source projects is a change of leader. That is the moment when many projects lose architectural integrity and become a Byzantine compendium of conflicting approaches.

I appreciate an architecture of a software system that leads to a small implementation with a simple, Spartan interface. These days the use of scripting languages can cut the volume of code by more than half in comparison with Java. That is why this site advocates the use of scripting languages for complex software projects.

"Real Beauty can be found in Simplicity," and as you may know already, ' "Less" sometimes equal "More".' I continue to adhere to that philosophy. If you, too, have an eye for simplicity in software engineering, then you might benefit from this collection of links.

I think writing a good software system is somewhat similar to writing a multivolume series of books. Most writers will rewrite each chapter several times and change the general structure a lot. Rewriting large systems is more difficult, but also very beneficial. It makes sense always to consider the current version of the system a draft that can be substantially improved and simplified by discovering some new unifying and simplifying paradigm. Sometimes you can take a wrong direction, but still, "nothing ventured, nothing gained."

On the subsystem level a decent configuration management system can help with going back. Too often people try to write and debug an architecturally flawed "first draft" when it would have been much simpler and faster to rewrite it based on a better understanding of the architecture and of the problem. Rewriting can actually save the time spent debugging the old version. That way, when you're done, you may get easy-to-understand, simple software systems instead of systems that merely "seem to work okay" (that is, systems that are only as correct as your testing).

On the component level, refactoring (see Refactoring: Improving the Design of Existing Code) might be a useful simplification technique. Actually, rewriting is a simpler term, but let's assume that refactoring is rewriting with some ideological frosting ;-). See the Slashdot book review of Refactoring: Improving the Design of Existing Code.
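
As a small illustration of what such a simplification step looks like (the shipping-cost example is invented), here is a "first draft" with duplicated branches and the same logic after one refactoring pass; the behavior is unchanged, but the shared structure becomes explicit and table-driven.

    # Before: near-identical branches, a typical first draft.
    def shipping_cost_v1(weight, destination):
        if destination == "domestic":
            if weight <= 1:
                return 5.0
            return 5.0 + (weight - 1) * 1.5
        elif destination == "international":
            if weight <= 1:
                return 15.0
            return 15.0 + (weight - 1) * 4.0

    # After: the duplicated branches collapse into one rule plus a table.
    RATES = {"domestic": (5.0, 1.5), "international": (15.0, 4.0)}

    def shipping_cost_v2(weight, destination):
        base, per_unit = RATES[destination]
        return base + max(0.0, weight - 1) * per_unit

    # Behavior is preserved (only as correct as your testing, of course).
    assert shipping_cost_v1(3, "domestic") == shipping_cost_v2(3, "domestic")
    assert shipping_cost_v1(0.5, "international") == shipping_cost_v2(0.5, "international")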

Another relevant work (the author tries to promote his own solution -- you can skip that part) is the critique of the "technology mudslide" in the book The Innovator's Dilemma by Harvard Business School Professor Clayton M. Christensen. He coined the term "technology mudslide", a concept very similar to Brooks's "software development tar pit" -- a perpetual cycle of abandonment or retooling of existing systems in pursuit of the latest fashionable technology trend, a cycle in which:

 "Coping with the relentless onslaught of technology change was akin to trying to climb a mudslide raging down a hill. You have to scramble with everything you've got to stay on top of it. and if you ever once stop to catch your breath, you get buried."

The complexity caused by adopting new technology for the sake of new technology is further exacerbated by the narrow focus and inexperience of many project leaders -- inexperience with mission-critical systems, with systems of larger scale than previously built, with software development disciplines, and with project management. A Standish Group International survey recently showed that 46% of IT projects were over budget and overdue -- and 28% failed altogether. That is probably normal, and the real failure figures are likely higher: great software managers and architects are rare, and it is those people who determine the success of a software project.

Dr. Nikolai Bezroukov



NEWS CONTENTS

Old News ;-)

[Dec 01, 2019] Academic Conformism is the road to 1984. - Sic Semper Tyrannis

Highly recommended!
Dec 01, 2019 | turcopolier.typepad.com

Academic Conformism is the road to "1984."


The world is filled with conformism and groupthink. Most people do not wish to think for themselves. Thinking for oneself is dangerous, requires effort and often leads to rejection by the herd of one's peers.

The profession of arms, the intelligence business, the civil service bureaucracy, the wondrous world of groups like the League of Women Voters, Rotary Club as well as the empire of the thinktanks are all rotten with this sickness, an illness which leads inevitably to stereotyped and unrealistic thinking, thinking that does not reflect reality.

The worst locus of this mentally crippling phenomenon is the world of the academics. I have served on a number of boards that awarded Ph.D and post doctoral grants. I was on the Fulbright Fellowship federal board. I was on the HF Guggenheim program and executive boards for a long time. Those are two examples of my exposure to the individual and collective academic minds.

As a class of people I find them unimpressive. The credentialing exercise in acquiring a doctorate is basically a nepotistic process of sucking up to elders and a crutch for ego support as well as an entrance ticket for various hierarchies, among them the world of the academy. The process of degree acquisition itself requires sponsorship by esteemed academics who recommend candidates who do not stray very far from the corpus of known work in whichever narrow field is involved. The endorsements from RESPECTED academics are often decisive in the award of grants.

This process is continued throughout a career in academic research. PEER REVIEW is the sine qua non for acceptance of a "paper," invitation to career making conferences, or to the Holy of Holies, TENURE.

This life experience forms and creates CONFORMISTS, people who instinctively boot-lick their fellows in a search for the "Good Doggy" moments that make up their lives. These people are for sale. Their price may not be money, but they are still for sale. They want to be accepted as members of their group. Dissent leads to expulsion or effective rejection from the group.

This mentality renders doubtful any assertion that a large group of academics supports any stated conclusion. As a species academics will say or do anything to be included in their caste.

This makes them inherently dangerous. They will support any party or parties, of any political inclination if that group has the money, and the potential or actual power to maintain the academics as a tribe. pl


doug , 01 December 2019 at 01:01 PM

Sir,

That is the nature of tribes and humans are very tribal. At least most of them. Fortunately, there are outliers. I was recently reading "Political Tribes" which was written by a couple who are both law professors that examines this.

Take global warming (aka the rebranded climate change). Good luck getting grants to do any skeptical research. This highly complex subject which posits human impact is a perfect example of tribal bias.

My success in the private sector comes from consistent questioning what I wanted to be true to prevent suboptimal design decisions.

I also instinctively dislike groups that have some idealized view of "What is to be done?"

As Groucho said: "I refuse to join any club that would have me as a member"

J , 01 December 2019 at 01:22 PM
Reminds one of the Borg, doesn't it?

The 'isms' had it, be it Nazism, Fascism, Communism, Totalitarianism, Elitism -- all demand conformity and adherence to group think. If one does not kowtow to whichever 'ism' is at play, those outside their group think are persecuted, ostracized, jailed, and executed, all because they defy their conformity demands, and defy allegiance to them.

One world, one religion, one government, one Borg. all lead down the same road to -- Orwell's 1984.

Factotum , 01 December 2019 at 03:18 PM
David Halberstam: The Best and the Brightest. (Reminder how the heck we got into Vietnam, when the best and the brightest were serving as presidential advisors.)

Also good Halberstam re-read: The Powers that Be - when the conservative media controlled the levers of power; not the uber-liberal one we experience today.

[Nov 26, 2019] OOP has been a golden hammer ever since Java, but we've noticed the downsides quite a while ago. Ruby on Rails was the convention-over-configuration darling child of the last decade and stopped a large piece of the circular abstraction craze that Java was/is -- Qbertino (265505)

Notable quotes:
"... In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modeling follows naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best paradigm. ..."
"... In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules ..."
"... I get tired of the purists who think that OO is the only possible answer. The world is not a nail. ..."
Nov 15, 2019 | developers.slashdot.org
No, not really, don't think so. ( Score: 2 )

OOP has been a golden hammer ever since Java, but we've noticed the downsides quite a while ago. Ruby on rails was the convention over configuration darling child of the last decade and stopped a large piece of the circular abstraction craze that Java was/is. Every half-assed PHP toy project is kicking Javas ass on the web and it's because WordPress gets the job done, fast, despite having a DB model that was built by non-programmers on crack.

Most critical processes are procedural, even today, if only for the ...

bradley13 ( 1118935 ) , Monday July 22, 2019 @01:15AM ( #58963622 ) Homepage

It depends... ( Score: 5 , Insightful)

There are a lot of mediocre programmers who follow the principle "if you have a hammer, everything looks like a nail". They know OOP, so they think that every problem must be solved in an OOP way.

In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modeling follows naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best paradigm.

In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules. For example, I am working on a natural language system that is supposed to generate textual answers to user inquiries. What "object" am I supposed to create to do this task? An "Answer" object that generates itself? Yes, that would work, but an imperative, static "generate answer" method makes at least as much sense.

There are different ways of thinking, different ways of modelling a problem. I get tired of the purists who think that OO is the only possible answer. The world is not a nail.
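
A minimal Python sketch of the contrast described above (the Answer class and the inquiry text are invented for illustration): the same behavior expressed as an object that "generates itself" and as a plain function.

    class Answer:
        # The "OOP way": an object whose only job is to produce one string.
        def __init__(self, inquiry):
            self.inquiry = inquiry

        def generate(self):
            return f"You asked {self.inquiry!r}; here is a canned reply."

    def generate_answer(inquiry):
        # The imperative way: a plain function, no ceremony.
        return f"You asked {inquiry!r}; here is a canned reply."

    print(Answer("What time is it?").generate())
    print(generate_answer("What time is it?"))

Both print the same thing; the question is only which reads better in context, which is exactly the poster's point about knowing when to break the rules.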

[Nov 22, 2019] python - Using global variables in a function

Nov 22, 2019 | stackoverflow.com

Peter Mortensen, Mar 4 '17 at 22:00

If you want to refer to a global variable in a function, you can use the global keyword to declare which variables are global. You don't have to use it in all cases (as someone here incorrectly claims) - if the name referenced in an expression cannot be found in local scope or scopes in the functions in which this function is defined, it is looked up among global variables.

However, if you assign to a new variable not declared as global in the function, it is implicitly declared as local, and it can overshadow any existing global variable with the same name.

Also, global variables are useful, contrary to some OOP zealots who claim otherwise - especially for smaller scripts, where OOP is overkill.
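
A short sketch of the lookup and shadowing rules described in this answer (the variable names are arbitrary):

    counter = 0                 # a module-level (global) variable

    def read_counter():
        # No assignment to 'counter' here, so the name is found by the
        # normal lookup: local scope, then enclosing scopes, then globals.
        return counter

    def increment_badly():
        # Assignment without 'global' creates a new local name that
        # shadows the module-level one; the global stays untouched.
        counter = 1

    def increment_properly():
        global counter          # assignments now target the global
        counter += 1

    increment_badly()
    print(read_counter())       # -> 0
    increment_properly()
    print(read_counter())       # -> 1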

J S, Jan 8 '09

Absolutely re. zealots. Most Python users use it for scripting and create little functions to separate out small bits of code. – Paul Uszak Sep 22 at 22:57

[Nov 15, 2019] Your Code: OOP or POO?

Mar 02, 2007 | blog.codinghorror.com
I'm not a fan of object orientation for the sake of object orientation. Often the proper OO way of doing things ends up being a productivity tax. Sure, objects are the backbone of any modern programming language, but sometimes I can't help feeling that slavish adherence to objects is making my life a lot more difficult. I've always found inheritance hierarchies to be brittle and unstable, and then there's the massive object-relational divide to contend with. OO seems to bring at least as many problems to the table as it solves.

Perhaps Paul Graham summarized it best:

Object-oriented programming generates a lot of what looks like work. Back in the days of fanfold, there was a type of programmer who would only put five or ten lines of code on a page, preceded by twenty lines of elaborately formatted comments. Object-oriented programming is like crack for these people: it lets you incorporate all this scaffolding right into your source code. Something that a Lisp hacker might handle by pushing a symbol onto a list becomes a whole file of classes and methods. So it is a good tool if you want to convince yourself, or someone else, that you are doing a lot of work.

Eric Lippert observed a similar occupational hazard among developers. It's something he calls object happiness.

What I sometimes see when I interview people and review code is symptoms of a disease I call Object Happiness. Object Happy people feel the need to apply principles of OO design to small, trivial, throwaway projects. They invest lots of unnecessary time making pure virtual abstract base classes -- writing programs where IFoos talk to IBars but there is only one implementation of each interface! I suspect that early exposure to OO design principles divorced from any practical context that motivates those principles leads to object happiness. People come away as OO True Believers rather than OO pragmatists.

I've seen so many problems caused by excessive, slavish adherence to OOP in production applications. Not that object oriented programming is inherently bad, mind you, but a little OOP goes a very long way. Adding objects to your code is like adding salt to a dish: use a little, and it's a savory seasoning; add too much and it utterly ruins the meal. Sometimes it's better to err on the side of simplicity, and I tend to favor the approach that results in less code, not more.

Given my ambivalence about all things OO, I was amused when Jon Galloway forwarded me a link to Patrick Smacchia's web page. Patrick is a French software developer. Evidently the acronym for object oriented programming is spelled a little differently in French than it is in English: POO.

[Image: S.S. Adams gag fake dog poo, 'Doggonit']

That's exactly what I've imagined when I had to work on code that abused objects.

But POO code can have another, more constructive, meaning. This blog author argues that OOP pales in importance to POO. Programming fOr Others, that is.

The problem is that programmers are taught all about how to write OO code, and how doing so will improve the maintainability of their code. And by "taught", I don't just mean "taken a class or two". I mean: have pounded into head in school, spend years as a professional being mentored by senior OO "architects" and only then finally kind of understand how to use properly, some of the time. Most engineers wouldn't consider using a non-OO language, even if it had amazing features. The hype is that major.

So what, then, about all that code programmers write before their 10 years OO apprenticeship is complete? Is it just doomed to suck? Of course not, as long as they apply other techniques than OO. These techniques are out there but aren't as widely discussed.

The improvement [I propose] has little to do with any specific programming technique. It's more a matter of empathy; in this case, empathy for the programmer who might have to use your code. The author of this code actually thought through what kinds of mistakes another programmer might make, and strove to make the computer tell the programmer what they did wrong.

In my experience the best code, like the best user interfaces, seems to magically anticipate what you want or need to do next. Yet it's discussed infrequently relative to OO. Maybe what's missing is a buzzword. So let's make one up, Programming fOr Others, or POO for short.

The principles of object oriented programming are far more important than mindlessly, robotically instantiating objects everywhere:

Stop worrying so much about the objects. Concentrate on satisfying the principles of object orientation rather than object-izing everything. And most of all, consider the poor sap who will have to read and support this code after you're done with it. That's why POO trumps OOP: programming as if people mattered will always be a more effective strategy than satisfying the architecture astronauts.

[Nov 15, 2019] Why do many people assume OOP is on the decline?

Nov 15, 2019 | www.quora.com

Daniel Korenblum, works at Bayes Impact (updated May 25, 2015):

There are many reasons why non-OOP languages and paradigms/practices are on the rise, contributing to the relative decline of OOP.

First off, there are a few things about OOP that many people don't like, which makes them interested in learning and using other approaches. Below are some references from the OOP wiki article:

  1. Cardelli, Luca (1996). "Bad Engineering Properties of Object-Oriented Languages". ACM Comput. Surv. (ACM) 28 (4es): 150. doi:10.1145/242224.242415. ISSN 0360-0300. Retrieved 21 April 2010.
  2. Armstrong, Joe. In Coders at Work: Reflections on the Craft of Programming. Peter Seibel, ed. Codersatwork.com , Accessed 13 November 2009.
  3. Stepanov, Alexander. "STLport: An Interview with A. Stepanov". Retrieved 21 April 2010.
  4. Rich Hickey, JVM Languages Summit 2009 keynote, Are We There Yet? November 2009. (edited)
taken from:

Object-oriented programming

Also see this post and discussion on hackernews:

Object Oriented Programming is an expensive disaster which must end

One of the comments therein linked a few other good wikipedia articles which also provide relevant discussion on increasingly-popular alternatives to OOP:

  1. Modularity and design-by-contract are better implemented by module systems ( Standard ML )
  2. Encapsulation is better served by lexical scope ( http://en.wikipedia.org/wiki/Sco... )
  3. Data is better modelled by algebraic datatypes ( Algebraic data type )
  4. Type-checking is better performed structurally ( Structural type system )
  5. Polymorphism is better handled by first-class functions ( First-class function ) and parametricity ( Parametric polymorphism )
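
Two of the alternatives listed above can be shown in a few lines of Python (the account and discount examples are invented): encapsulation via lexical scope, using a closure instead of a class, and polymorphism via first-class functions, with behavior passed in rather than inherited.

    def make_account(balance):
        # Encapsulation by lexical scope: 'balance' is reachable only
        # through the two functions returned here.
        def deposit(amount):
            nonlocal balance
            balance += amount
            return balance
        def current():
            return balance
        return deposit, current

    deposit, current = make_account(100)
    deposit(25)
    print(current())                                   # -> 125

    def apply_rule(prices, rule):
        # Polymorphism via first-class functions: the caller supplies
        # the behavior instead of subclassing some PricingStrategy.
        return [rule(p) for p in prices]

    print(apply_rule([100, 200], lambda p: p * 0.9))   # -> [90.0, 180.0]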

Personally, I sometimes think that OOP is a bit like an antique car. Sure, it has a bigger engine and fins and lots of chrome etc., it's fun to drive around, and it does look pretty. It is good for some applications, all kidding aside. The real question is not whether it's useful or not, but for how many projects?

When I'm done building an OOP application, it's like a large and elaborate structure. Changing the way objects are connected and organized can be hard, and the design choices of the past tend to become "frozen" or locked in place for all future times. Is this the best choice for every application? Probably not.

If you want to drive 500-5000 miles a week in a car that you can fix yourself without special ordering any parts, it's probably better to go with a Honda or something more easily adaptable than an antique vehicle-with-fins.

Finally, the best example is the growth of JavaScript as a language (officially called EcmaScript now?). Although JavaScript/EcmaScript (JS/ES) is not a pure functional programming language, it is much more "functional" than "OOP" in its design. JS/ES was the first mainstream language to promote the use of functional programming concepts such as higher-order functions, currying, and monads.

The recent growth of the JS/ES open-source community has not only been impressive in its extent but also unexpected from the standpoint of many established programmers. This is partly evidenced by the overwhelming number of active repositories on Github using JavaScript/EcmaScript:

Top Github Languages of 2014 (So far)

Because JS/ES treats both functions and objects as structs/hashes, it encourages us to blur the line dividing them in our minds. This is a division that many other languages impose - "there are functions and there are objects/variables, and they are different".

This seemingly minor (and often confusing) design choice enables a lot of flexibility and power. In part this seemingly tiny detail has enabled JS/ES to achieve its meteoric growth between 2005-2015.

This partially explains the rise of JS/ES and the corresponding relative decline of OOP. OOP had become a "standard" or "fixed" way of doing things for a while, and there will probably always be a time and place for OOP. But as programmers we should avoid getting too stuck in one way of thinking / doing things, because different applications may require different approaches.

Above and beyond the OOP-vs-non-OOP debate, one of our main goals as engineers should be custom-tailoring our designs by skillfully choosing the most appropriate programming paradigm(s) for each distinct type of application, in order to maximize the "bang for the buck" that our software provides.

Although this is something most engineers can agree on, we still have a long way to go until we reach some sort of consensus about how best to teach and hone these skills. This is not only a challenge for us as programmers today, but also a huge opportunity for the next generation of educators to create better guidelines and best practices than the current OOP-centric pedagogical system.

Here are a couple of good books that elaborates on these ideas and techniques in more detail. They are free-to-read online:

  1. https://leanpub.com/javascriptal...
  2. https://leanpub.com/javascript-s...
Mike MacHenry, software engineer, improv comedian, maker (answered Feb 14, 2015): Because the phrase itself was overhyped to an extraordinary degree. Then, as is common with overhyped things, many other things took on that phrase as a name. Then people got confused and stopped calling what they are doing OOP.

Yes, I think OOP (the phrase) is on the decline because people are becoming more educated about the topic.

It's like artificial intelligence, now that I think about it. There aren't many people these days who say they do AI to anyone but laymen. They would say they do machine learning or natural language processing or something else. These are fields that the vastly overhyped and really nebulous term AI used to describe, but then AI (the term) experienced a sharp decline while these very concrete fields continued to flourish.

[Nov 15, 2019] There is nothing inherently wrong with some of the functionality it offers, its the way OOP is abused as a substitute for basic good programming practices

Nov 15, 2019 | developers.slashdot.org

spazmonkey ( 920425 ) , Monday July 22, 2019 @12:22AM ( #58963430 )

its the way OOP is taught ( Score: 5 , Interesting)

There is nothing inherently wrong with some of the functionality it offers, its the way OOP is abused as a substitute for basic good programming practices.

I was helping interns - students from a local CC - deal with idiotic assignments like making a random number generator USING CLASSES, or displaying text to a screen USING CLASSES. Seriously, WTF?

A room full of career programmers could not even figure out how you were supposed to do that, much less why.

What was worse was a lack of understanding of basic programming skill or even the use of variables, as the kids were being taught that EVERY program was to be assembled solely by sticking together bits of libraries.

There was no coding, just hunting for snippets of preexisting code to glue together. Zero idea they could add their own, much less how to do it. OOP isn't the problem, it's the idea that it replaces basic programming skills and best practice.
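
For what it's worth, here is roughly what that kind of assignment looks like next to the obvious non-OOP answer (a reconstruction, not the actual coursework):

    import random

    class RandomNumberGenerator:
        # The classroom version: a class wrapped around something
        # that needed no class at all.
        def __init__(self, low, high):
            self.low = low
            self.high = high

        def generate(self):
            return random.randint(self.low, self.high)

    print(RandomNumberGenerator(1, 6).generate())

    # The plain version the interns were apparently never shown:
    print(random.randint(1, 6))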

sjames ( 1099 ) , Monday July 22, 2019 @01:30AM ( #58963680 ) Homepage Journal

Re:its the way OOP is taught ( Score: 5 , Interesting)

That and the obsession with absofrackinglutely EVERYTHING just having to be a formally declared object, including the whole program being an object with a run() method.

Some things actually cry out to be objects, some not so much. Generally, I find that my most readable and maintainable code turns out to be a procedural program that manipulates objects.

Even there, some things just naturally want to be a struct or just an array of values.

The same is true of most ingenious ideas in programming. It's one thing if code is demonstrating a particular idea, but production code is supposed to be there to do work, not grind an academic ax.

For example, slavish adherence to "patterns". They're quite useful for thinking about code and talking about code, but they shouldn't be the end of the discussion. They work better as a starting point. Some programs seem to want patterns to be mixed and matched.

In reality those problems are just cargo cult programming one level higher.

I suspect a lot of that is because too many developers barely grasp programming and never learned to go beyond the patterns they were explicitly taught.

When all you have is a hammer, the whole world looks like a nail.

[Nov 15, 2019] Inheritance, while not "inherently" bad, is often the wrong solution

Nov 15, 2019 | developers.slashdot.org

mfnickster ( 182520 ) , Monday July 22, 2019 @09:54AM ( #58965660 )

Re:Tiresome ( Score: 5 , Interesting)

Inheritance, while not "inherently" bad, is often the wrong solution. See: Why extends is evil [javaworld.com]

Composition is frequently a more appropriate choice. Aaron Hillegass wrote this funny little anecdote in Cocoa Programming for Mac OS X [google.com]:

"Once upon a time, there was a company called Taligent. Taligent was created by IBM and Apple to develop a set of tools and libraries like Cocoa. About the time Taligent reached the peak of its mindshare, I met one of its engineers at a trade show.

I asked him to create a simple application for me: A window would appear with a button, and when the button was clicked, the words 'Hello, World!' would appear in a text field. The engineer created a project and started subclassing madly: subclassing the window and the button and the event handler.

Then he started generating code: dozens of lines to get the button and the text field onto the window. After 45 minutes, I had to leave. The app still did not work. That day, I knew that the company was doomed. A couple of years later, Taligent quietly closed its doors forever."
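
A minimal Python sketch of the composition-over-inheritance point (the Window and Logger classes are invented): instead of subclassing a window to change how it reports events, the reporting behavior is handed to it as a collaborator.

    class ConsoleLogger:
        def log(self, message):
            print(f"[log] {message}")

    class NullLogger:
        def log(self, message):
            pass                      # silently discard

    class Window:
        def __init__(self, title, logger=None):
            self.title = title
            self.logger = logger or ConsoleLogger()   # has-a, not is-a

        def show(self):
            self.logger.log(f"showing window {self.title!r}")

    Window("Hello, World!").show()        # -> [log] showing window 'Hello, World!'
    Window("Quiet", NullLogger()).show()  # no output

Swapping behavior means passing a different collaborator, not growing another branch of a subclass tree.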

[Nov 15, 2019] Never mind that OOP essentially began very early and has been reimplemented over and over, even before Alan Kay. Ie, files in Unix are essentially an object oriented system. It's just data encapsulation and separating work into manageable modules

Nov 15, 2019 | developers.slashdot.org

Darinbob ( 1142669 ) , Monday July 22, 2019 @02:00AM ( #58963760 )

Re:The issue ( Score: 5 , Insightful)

Almost every programming methodology can be abused by people who really don't know how to program well, or who don't want to. They'll happily create frameworks, implement new development processes, and chart tons of metrics, all while avoiding the work of getting the job done. In some cases the person who writes the most code is the same one who gets the least amount of useful work done.

So, OOP can be misused the same way. Never mind that OOP essentially began very early and has been reimplemented over and over, even before Alan Kay. Ie, files in Unix are essentially an object oriented system. It's just data encapsulation and separating work into manageable modules. That's how it was before anyone ever came up with the dumb name "full-stack developer".

[Nov 15, 2019] Is Object-Oriented Programming a Trillion Dollar Disaster?

Nov 15, 2019 | developers.slashdot.org

(medium.com) | Posted by EditorDavid on Monday July 22, 2019 @12:04AM from the OOPs dept.

Senior full-stack engineer Ilya Suzdalnitski recently published a lively 6,000-word essay calling object-oriented programming "a trillion dollar disaster."

Precious time and brainpower are being spent thinking about "abstractions" and "design patterns" instead of solving real-world problems... Object-Oriented Programming (OOP) has been created with one goal in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to improve code organization. There's no objective and open evidence that OOP is better than plain procedural programming...

Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns. OOP makes common development practices, like refactoring and testing, needlessly hard...

[Nov 15, 2019] Bad programmers create objects for objects' sake, following so-called "design patterns", and no amount of comments saves this spaghetti web of interacting "objects"

Nov 15, 2019 | developers.slashdot.org

cardpuncher ( 713057 ) , Monday July 22, 2019 @03:06AM ( #58963948 )

Re:The issue ( Score: 5 , Insightful)

As a developer who started in the days of FORTRAN (when it was all-caps), I've watched the rise of OOP with some curiosity. I think there's a general consensus that abstraction and re-usability are good things - they're the reason subroutines exist - the issue is whether they are ends in themselves.

I struggle with the whole concept of "design patterns". There are clearly common themes in software, but there seems to be a great deal of pressure these days to make your implementation fit some pre-defined template rather than thinking about the application's specific needs for state and concurrency. I have seen some rather eccentric consequences of "patternism".

Correctly written, OOP code allows you to encapsulate just the logic you need for a specific task and to make that specific task available in a wide variety of contexts by judicious use of templating and virtual functions that obviate the need for "refactoring".

Badly written, OOP code can have as many dangerous side effects and as much opacity as any other kind of code. However, I think the key factor is not the choice of programming paradigm, but the design process.

You need to think first about what your code is intended to do and in what circumstances it might be reused. In the context of a larger project, it means identifying commonalities and deciding how best to implement them once. You need to document that design and review it with other interested parties. You need to document the code with clear information about its valid and invalid use. If you've done that, testing should not be a problem.

Some people seem to believe that OOP removes the need for some of that design and documentation. It doesn't and indeed code that you intend to be reused needs *more* design and documentation than the glue that binds it together in any one specific use case. I'm still a firm believer that coding begins with a pencil, not with a keyboard. That's particularly true if you intend to design abstract interfaces that will serve many purposes. In other words, it's more work to do OOP properly, so only do it if the benefits outweigh the costs - and that usually means you not only know your code will be genuinely reusable but will also genuinely be reused.

Rockoon ( 1252108 ) , Monday July 22, 2019 @04:23AM ( #58964192 )

Re:The issue ( Score: 5 , Insightful)
I struggle with the whole concept of "design patterns".

Because design patterns are stupid.

A reasonable programmer can understand reasonable code so long as the data is documented even when the code isn't documented, but will struggle immensely if it were the other way around.

Bad programmers create objects for objects' sake, and because of that they have to follow so-called "design patterns", because no amount of code commenting makes the code easily understandable when it's a spaghetti web of interacting "objects". The "design patterns" don't make the code easier to read, just easier to write.

Those OOP fanatics, if they do "document" their code, add comments like "// increment the index" which is useless shit.

The big win of OOP is only in the encapsulation of the data with the code, and great code treats objects like data structures with attached subroutines, not as "objects", and documents the fuck out of the contained data, while more or less letting the code document itself.
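
What "data structures with attached subroutines" looks like in practice can be sketched in a few lines of Python (the Point example is invented; nothing beyond the standard library is assumed): the data is plain, visible, and documented, and the methods are thin.

    from dataclasses import dataclass

    @dataclass
    class Point:
        """A 2-D point; both fields are plain, documented data."""
        x: float
        y: float

        def translated(self, dx: float, dy: float) -> "Point":
            # A thin subroutine attached to the data it operates on.
            return Point(self.x + dx, self.y + dy)

    p = Point(1.0, 2.0)
    print(p.translated(3.0, 0.5))   # -> Point(x=4.0, y=2.5)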

[Nov 15, 2019] 600K lines of code probably would have been more like 100K lines if you had used a language whose ecosystem doesn't goad people into writing so many superfluous layers of indirection, abstraction and boilerplate.

Nov 15, 2019 | developers.slashdot.org

Waffle Iron ( 339739 ) , Monday July 22, 2019 @01:22AM ( #58963646 )

Re:680,303 lines ( Score: 4 , Insightful)
680,303 lines of Java code in the main project in my system.

Probably would've been more like 100,000 lines if you had used a language whose ecosystem doesn't goad people into writing so many superfluous layers of indirection, abstraction and boilerplate.

[Nov 11, 2019] C, Python, Go, and the Generalized Greenspun Law

Dec 18, 2017 | esr.ibiblio.org

Posted on 2017-12-18 by esr In recent discussion on this blog of the GCC repository transition and reposurgeon, I observed "If I'd been restricted to C, forget it – reposurgeon wouldn't have happened at all"

I should be more specific about this, since I think the underlying problem is general to a great deal more that the implementation of reposurgeon. It ties back to a lot of recent discussion here of C, Python, Go, and the transition to a post-C world that I think I see happening in systems programming.

(This post perhaps best viewed as a continuation of my three-part series: The long goodbye to C , The big break in computer languages , and Language engineering for great justice .)

I shall start by urging that you must take me seriously when I speak of C's limitations. I've been programming in C for 35 years. Some of my oldest C code is still in wide production use. Speaking from that experience, I say there are some things only a damn fool tries to do in C, or in any other language without automatic memory management (AMM, for the rest of this article).

This is another angle on Greenspun's Law: "Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp." Anyone who's been in the trenches long enough gets that Greenspun's real point is not about C or Fortran or Common Lisp. His maxim could be generalized in a Henry-Spencer-does-Santyana style as this:

"At any sufficient scale, those who do not have automatic memory management in their language are condemned to reinvent it, poorly."

In other words, there's a complexity threshold above which lack of AMM becomes intolerable. Lack of it either makes expressive programming in your application domain impossible or sends your defect rate skyrocketing, or both. Usually both.

When you hit that point in a language like C (or C++), your way out is usually to write an ad-hoc layer or a bunch of semi-disconnected little facilities that implement parts of an AMM layer, poorly. Hello, Greenspun's Law!

It's not particularly the line count of your source code driving this, but rather the complexity of the data structures it uses internally; I'll call this its "greenspunity". Large programs that process data in simple, linear, straight-through ways may evade needing an ad-hoc AMM layer. Smaller ones with gnarlier data management (higher greenspunity) won't. Anything that has to do – for example – graph theory is doomed to need one (why, hello, there, reposurgeon!)
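
To make the graph-theory point concrete, here is a small Python sketch (the Node class is invented) of the kind of structure that is trivial in a language with AMM and a minefield without one: a cyclic graph is built, dropped, and reclaimed by the runtime's cycle collector, with no ad-hoc memory-management layer in sight.

    import gc

    class Node:
        def __init__(self, name):
            self.name = name
            self.edges = []

    def build_cyclic_graph():
        a, b, c = Node("a"), Node("b"), Node("c")
        a.edges.append(b)
        b.edges.append(c)
        c.edges.append(a)          # a reference cycle
        return a

    g = build_cyclic_graph()
    del g                          # the whole cycle is now unreachable
    print(gc.collect())            # nonzero: the collector found and freed the cycle

In C the equivalent code would already force decisions about ownership, traversal order for freeing, and double-free avoidance, which is exactly the ad-hoc AMM layer described above.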

There's a trap waiting here. As the greenspunity rises, you are likely to find that more and more of your effort and defect chasing is related to the AMM layer, and proportionally less goes to the application logic. Redoubling your effort, you increasingly miss your aim.

Even when you're merely at the edge of this trap, your defect rates will be dominated by issues like double-free errors and malloc leaks. This is commonly the case in C/C++ programs of even low greenspunity.

Sometimes you really have no alternative but to be stuck with an ad-hoc AMM layer. Usually you get pinned to this situation because real AMM would impose latency costs you can't afford. The major case of this is operating-system kernels. I could say a lot more about the costs and contortions this forces you to assume, and perhaps I will in a future post, but it's out of scope for this one.

On the other hand, reposurgeon is representative of a very large class of "systems" programs that don't have these tight latency constraints. Before I get to back to the implications of not being latency constrained, one last thing – the most important thing – about escalating AMM-layer complexity.

At high enough levels of greenspunity, the effort required to build and maintain your ad-hoc AMM layer becomes a black hole. You can't actually make any progress on the application domain at all – when you try it's like being nibbled to death by ducks.

Now consider this prospectively, from the point of view of someone like me who has architect skill. A lot of that skill is being pretty good at visualizing the data flows and structures – and thus estimating the greenspunity – implied by a problem domain. Before you've written any code, that is.

If you see the world that way, possible projects will be divided into "Yes, can be done in a language without AMM." versus "Nope. Nope. Nope. Not a damn fool, it's a black hole, ain't nohow going there without AMM."

This is why I said that if I were restricted to C, reposurgeon would never have happened at all. I wasn't being hyperbolic – that evaluation comes from a cool and exact sense of how far reposurgeon's problem domain floats above the greenspunity level where an ad-hoc AMM layer becomes a black hole. I shudder just thinking about it.

Of course, where that black-hole level of ad-hoc AMM complexity is varies by programmer. But, though software is sometimes written by people who are exceptionally good at managing that kind of hair, it then generally has to be maintained by people who are less so.

The really smart people in my audience have already figured out that this is why Ken Thompson, the co-designer of C, put AMM in Go, in spite of the latency issues.

Ken understands something large and simple. Software expands, not just in line count but in greenspunity, to meet hardware capacity and user demand. In languages like C and C++ we are approaching a point of singularity at which typical – not just worst-case – greenspunity is so high that the ad-hoc AMM becomes a black hole, or at best a trap nigh-indistinguishable from one.

Thus, Go. It didn't have to be Go; I'm not actually being a partisan for that language here. It could have been (say) Ocaml, or any of half a dozen other languages I can think of. The point is the combination of AMM with compiled-code speed is ceasing to be a luxury option; increasingly it will be baseline for getting most kinds of systems work done at all.

Sociologically, this implies an interesting split. Historically the boundary between systems work under hard latency constraints and systems work without it has been blurry and permeable. People on both sides of it coded in C and skillsets were similar. People like me who mostly do out-of-kernel systems work but have code in several different kernels were, if not common, at least not odd outliers.

Increasingly, I think, this will cease being true. Out-of-kernel work will move to Go, or languages in its class. C – or non-AMM languages intended as C successors, like Rust – will keep kernels and real-time firmware, at least for the foreseeable future. Skillsets will diverge.

It'll be a more fragmented systems-programming world. Oh well; one does what one must, and the tide of rising software complexity is not about to be turned. This entry was posted in General , Software by esr . Bookmark the permalink . 144 thoughts on "C, Python, Go, and the Generalized Greenspun Law"

  1. David Collier-Brown on 2017-12-18 at 17:38:05 said: Andrew Forber quasi-accidentally created a similar truth: any sufficiently complex program using overlays will eventually contain an implementation of virtual memory.
    • esr on 2017-12-18 at 17:40:45 said: >Andrew Forber quasi-accidentally created a similar truth: any sufficiently complex program using overlays will eventually contain an implementation of virtual memory.

      Oh, neat. I think that's a closer approximation to the most general statement than Greenspun's, actually.

      • Alex K. on 2017-12-20 at 09:50:37 said: For today, maybe -- but the first time I had Greenspun's Tenth quoted at me was in the late '90s. [I know this was around/just before the first C++ standard, maybe contrasting it to this new upstart Java thing?] This was definitely during the era where big computers still did your serious work, and pretty much all of it was in either C, COBOL, or FORTRAN. [Yeah, yeah, I know– COBOL is all caps for being an acronym, while Fortran ain't–but since I'm talking about an earlier epoch of computing, I'm going to use the conventions of that era.]

        Now the Object-Oriented paradigm has really mitigated this to an enormous degree, but I seem to recall at that time the argument was that multimethod dispatch (a benefit so great you happily accept the flaw of memory management) was the Killer Feature of LISP.

        Given the way the other advantage I would have given Lisp over the past two decades–anonymous functions [lambdas] and treating them as first-class values–are creeping into a more mainstream usage, I think automated memory management is the last visible "Lispy" feature people will associate with Greenspun. [What, are you now visualizing lisp macros? Perish the thought–anytime I see a foot cannon that big, I stop calling it a feature ]

  2. Mycroft Jones on 2017-12-18 at 17:41:04 said: After looking at the Linear Lisp paper, I think that is where Lutz Mueller got One Reference Only memory management from. For automatic memory management, I'm a big fan of ORO. Not sure how to apply it to a statically typed language though. Wish it was available for Go. ORO is extremely predictable and repeatable, not stuttery.
    • lliamander on 2017-12-18 at 19:28:04 said: > Not sure how to apply it to a statically typed language though.

      Clean is probably what you would be looking for: https://en.wikipedia.org/wiki/Clean_(programming_language)

    • Jeff Read on 2017-12-19 at 00:38:57 said: If Lutz was inspired by Linear Lisp, he didn't cite it. Actually ORO is more like region-based memory allocation with a single region: values which leave the current scope are copied which can be slow if you're passing large lists or vectors around.

      Linear Lisp is something quite a bit different, and allows for arbitrary data structures with arbitrarily deep linking within, so long as there are no cycles in the data structures. You can even pass references into and out of functions if you like; what you can't do is alias them. As for statically typed programming languages well, there are linear type systems , which as lliamander mentioned are implemented in Clean.

      Newlisp in general is smack in the middle between Rust and Urbit in terms of cultishness of its community, and that scares me right off it. That and it doesn't really bring anything to the table that couldn't be had by "old" lisps (and Lutz frequently doubles down on mistakes in the design that had been discovered and corrected decades ago by "old" Lisp implementers).

  3. Gary E. Miller on 2017-12-18 at 18:02:10 said: For a long time I've been holding out hope for a 'standard' garbage collector library for C. But not gonna hold my breath. One probable reason Ken Thompson had to invent Go is to go around the tremendous difficulty in getting new stuff into C.
    • esr on 2017-12-18 at 18:40:53 said: >For a long time I've been holding out hope for a 'standard' garbage collector library for C. But not gonna hold my breath.

      Yeah, good idea not to. People as smart/skilled as you and me have been poking at this problem since the 1980s and it's pretty easy to show that you can't do better than Boehm–Demers–Weiser, which has limitations that make it impractical. Sigh

      • John Cowan on 2018-04-15 at 00:11:56 said: What's impractical about it? I replaced the native GC in the standard implementation of the Joy interpreter with BDW, and it worked very well.
        • esr on 2018-04-15 at 08:30:12 said: >What's impractical about it? I replaced the native GC in the standard implementation of the Joy interpreter with BDW, and it worked very well.

          GCing data on the stack is a crapshoot. Pointers can get mistaken for data and vice-versa.

    • Konstantin Khomoutov on 2017-12-20 at 06:30:05 said: I think it's not about C. Let me cite a little bit from "The Go Programming Language" (A. Donovan, B. Kernighan) --
      in the section about Go influences, it states:

      "Rob Pike and others began to experiment with CSP implementations as actual languages. The first was called Squeak which provided a language with statically created channels. This was followed by Newsqueak, which offered C-like statement and expression syntax and Pascal-like type notation. It was a purely functional language with garbage collection, again aimed at managing keyboard, mouse, and window events. Channels became first-class values, dynamically created and storable in variables.

      The Plan 9 operating system carried these ideas forward in a language called Alef. Alef tried to make Newsqueak a viable system programming language, but its omission of garbage collection made concurrency too painful."

      So my takeaway was that AMM was key to get proper concurrency.
      Before Go, I dabbled with Erlang (which I enjoy, too), and I'd say there the AMM is also a key to have concurrency made easy.

      (Update: the ellipses I put into the citation were eaten by the engine and won't appear when I tried to re-edit my comment; sorry.)

  4. tz on 2017-12-18 at 18:29:20 said: I think this is the key insight.
    There are programs with zero MM.
    There are programs with orderly MM, e.g. unzip does mallocs and frees in a stacklike formation, Malloc a,b,c, free c,b,a. (as of 1.1.4). This is laminar, not chaotic flow.

    Then there is the complex, nonlinear, turbulent flow, chaos. You can't do that in basic C, you need AMM. But it is easier in a language that includes it (and does it well).

    Virtual Memory is related to AMM – too often the memory leaks were hidden (think of your O(n**2) for small values of n) – small leaks that weren't visible under ordinary circumstances.

    Still, you aren't going to get AMM on the current Arduino variants. At least not easily.

    That is where the line is, how much resources. Because you require a medium to large OS, or the equivalent resources to do AMM.

    Yet this is similar to using FPGAs, or GPUs for blockchain coin mining instead of the CPU. Sometimes you have to go big. Your Mini Cooper might be great most of the time, but sometimes you need a big diesel pickup. I think a Mini would fit in the bed of my F250.

    As tasks get bigger they need bigger machines. Reply ↓

  5. Vote -1 Vote +1 Zygo on 2017-12-18 at 18:31:34 said: > Of course, where that black-hole level of ad-hoc AMM complexity is varies by programmer.

    I was about to say something about writing an AMM layer before breakfast on the way to writing backtracking parallel graph-searchers at lunchtime, but I guess you covered that. Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-18 at 18:34:59 said: >I was about to say something about writing an AMM layer before breakfast on the way to writing backtracking parallel graph-searchers at lunchtime, but I guess you covered that.

      Well, yeah. I have days like that occasionally, but it would be unwise to plan a project based on the assumption that I will. And deeply foolish to assume that J. Random Programmer will. Reply ↓

  6. Vote -1 Vote +1 tz on 2017-12-18 at 18:32:37 said: C displaced assembler because it had the speed and flexibility while being portable.

    Go, or something like it, will displace C wherever they can get just the right features into the standard library, including AMM/GC.

    Maybe we need Garbage Collecting C. GCC?

    One problem is you can't do the pointer aliasing if you have a GC (unless you also do some auxiliary bits which would be hard to maintain). void *x = y; might be decodable, but there are deeper and more complex things a compiler can't detect. If the compiler gets it wrong, you get a memory leak, or you have to constrain the language to forbid pointer manipulation even where it is required or clearer. Reply ↓

    • Vote -1 Vote +1 Zygo on 2017-12-18 at 20:52:40 said: C++11 shared_ptr does handle the aliasing case. Each pointer object has two fields, one for the thing being pointed to, and one for the thing's containing object (or its associated GC metadata). A pointer alias assignment alters the former during the assignment and copies the latter verbatim. The syntax is (as far as a C programmer knows, after a few typedefs) identical to C.

      The trouble with applying that idea to C is that the standard pointers don't have space or time for the second field, and heap management isn't standardized at all (free() is provided, but programs are not required to use it or any other function exclusively for this purpose). Change either of those two things and the resulting language becomes very different from C. Reply ↓
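
      For illustration, a minimal sketch of the aliasing behaviour described above, using the std::shared_ptr aliasing constructor that has been in the standard since C++11; the Repo type and its member are made up:

          // The aliasing constructor: 'second' points at one member but shares
          // ownership of (and keeps alive) the whole Repo object.
          #include <iostream>
          #include <memory>
          #include <vector>

          struct Repo {
              std::vector<int> commits{1, 2, 3};
          };

          int main() {
              auto repo = std::make_shared<Repo>();
              std::shared_ptr<int> second(repo, &repo->commits[1]);
              repo.reset();                  // drop the direct reference
              std::cout << *second << "\n";  // Repo is still alive: prints 2
          }                                  // Repo freed here, when 'second' dies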

  7. Vote -1 Vote +1 IGnatius T Foobar on 2017-12-18 at 18:39:28 said: Eric, I love you, you're a pepper, but you have a bad habit of painting a portrait of J. Random Hacker that is actually a portrait of Eric S. Raymond. The world is getting along with C just fine. 95% of the use cases you describe for needing garbage collection are eliminated with the simple addition of a string class which nearly everyone has in their toolkit. Reply ↓
    • Vote -1 Vote +1 esr on 2017-12-18 at 18:55:46 said: >The world is getting along with C just fine. 95% of the use cases you describe for needing garbage collection are eliminated with the simple addition of a string class which nearly everyone has in their toolkit.

      Even if you're right, the escalation of complexity means that what I'm facing now, J. Random Hacker will face in a couple of years. Yes, not everybody writes reposurgeon but a string class won't suffice for much longer even if it does today. Reply ↓

      • Vote -1 Vote +1 tz on 2017-12-18 at 19:27:12 said: Here's another key.

        I once had a sign:

        I don't solve complex problems.
        I simplify complex problems and solve them.

        Complexity does escalate, at least in the sense that a few centuries ago we could cross oceans, and today we can go to the planets and beyond.

        We shouldn't use a rocket ship to get groceries from the local market.

        J Random H-1B will face some easily decomposed apparently complex problem and write a pile of spaghetti.

        The true nature of a hacker is not so much being able to handle the deepest and most complex situations, but being able to recognize which situations are truly complex and, in preference to writing something to handle the complexity, working hard to simplify and reduce it. Dealing with a slain dragon's corpse is easier than with one that is live, annoyed, and immolating anything within a few hundred yards. Some are capable of handling the latter. The wise knight prefers to reduce the problem to the former. Reply ↓

        • Vote -1 Vote +1 William O. B'Livion on 2017-12-20 at 02:02:40 said: > J Random H-1B will face some easily decomposed
          > apparently complex problem and write a pile of spaghetti.

          J Random H-1B will do it with Informatica and Java. Reply ↓

  8. Vote -1 Vote +1 tz on 2017-12-18 at 18:42:33 said: I will add one last "perils of java school" comment.

    One of the epic fails of C++ is that it was sold as C but with safeties that would let anyone program. Instead it created bloatware and the very memory leaks it promised to prevent, because the lesser programmers didn't KNOW (grok, understand) what they were doing. It was all "automatic".

    This is the opportunity and danger of AMM/GC. It is a tool, and one with hot areas and sharp edges. Wendy (formerly Walter) Carlos had a law that said "Whatever parameter you can control, you must control". Having a really good AMM/GC requires you to respect what it can and cannot do. OK, form a huge linked list – one that spills into VM. Won't it just handle everything? NO! You still have to think about reference counts, at least in the back of your mind. It simplifies the problem but doesn't eliminate it. It turns the black hole into a pulsar, but you can still be hit.

    Many will gloss over the "how to use automatic memory management" part of their CS course, either learning it superficially (but being unable to apply it) or ignoring it outright. Just as they didn't bother with pointers, recursion, or multithreading subtleties. Reply ↓

  9. Vote -1 Vote +1 lliamander on 2017-12-18 at 19:36:35 said: I would say that there is a parallel between concurrency models and memory management approaches. Beyond a certain level of complexity, it's simply infeasible for J. Random Hacker to implement a locks-based solution just as it is infeasible for Mr. Hacker to write a solution with manual memory management.

    My worry is that by allowing the unsafe sharing of mutable state between goroutines, Go will never be able to achieve the per-process (i.e. language-level process, not OS-level) GC that would allow the really low latencies necessary for an AMM language to move closer into kernel space. But certainly insofar as many "systems"-level applications don't require extremely low latencies, Go will probably be a viable solution going forward. Reply ↓

  10. Vote -1 Vote +1 Jeff Read on 2017-12-18 at 20:14:18 said: Putting aside the hard deadlines found in real-time systems programming, it has been empirically determined that a GC'd program requires five times as much memory as the equivalent program with explicit memory management. Applications which are both CPU- and RAM-intensive, where you need to have your performance cake and eat it in as little memory as possible, are thus severely constrained in terms of viable languages they could be implemented in. And by "severely constrained" I mean you get your choice of C++ or Rust. (C, Pascal, and Ada are on the table, but none offer quite the same metaprogramming flexibility as those two.)

    I think your problems with reposurgeon stem from the fact that you're just running up against the hard upper bound on the vector sum of CPU and RAM efficiency that a dynamic language like Python (even sped up with PyPy) can feasibly deliver on a hardware configuration you can order from Amazon. For applications like that, you need to forgo GC entirely and rely on smart pointers, automatic reference counting, value semantics, and RAII. Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-18 at 20:27:20 said: > For applications like that, you need to forgo GC entirely and rely on smart pointers, automatic reference counting, value semantics, and RAII.

      How many times do I have to repeat "reposurgeon would never have been written under that constraint" before somebody who claims LISP experience gets it? Reply ↓

      • Vote -1 Vote +1 Jeff Read on 2017-12-18 at 20:48:24 said: You mentioned that reposurgeon wouldn't have been written under the constraints of C. But C++ is not C, and has an entirely different set of constraints. In practice, it's not that far off from Lisp, especially if you avail yourself of those wonderful features in C++1x. C++ programmers talk about "zero-cost abstractions" for a reason.

        Semantically, programming in a GC'd language and programming in a language that uses smart pointers and RAII are very similar: you create the objects you need, and they are automatically disposed of when no longer needed. But instead of delegating to a GC which cleans them up whenever, both you and the compiler have compile-time knowledge of when those cleanups will take place, allowing you finer-grained control over how memory -- or any other resource -- is used.

        Oh, that's another thing: GC only has something to say about memory -- not file handles, sockets, or any other resource. In C++, with appropriate types value semantics can be made to apply to those too and they will immediately be destructed after their last use. There is no special with construct in C++; you simply construct the objects you need and they're destructed when they go out of scope.
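
        For illustration, a minimal sketch of the scope-based cleanup being described here, using a standard file stream (the file name is made up):

            // RAII: the log file is closed when 'log' goes out of scope; there is
            // no 'with' block and no explicit close() call needed.
            #include <fstream>
            #include <string>

            void append_line(const std::string& text) {
                std::ofstream log("events.log", std::ios::app);  // resource acquired
                if (log)
                    log << text << '\n';
            }   // destructor runs here: the handle is flushed and closed deterministically

            int main() {
                append_line("hello");
                append_line("world");
            }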

        This is how the big boys do systems programming. Again, Go has barely displaced C++ at all inside Google despite being intended for just that purpose. Their entire critical path in search is still C++ code. And it always will be until Rust gains traction.

        As for my Lisp experience, I know enough to know that Lisp has utterly failed and this is one of the major reasons why. It's not even a decent AI language, because the scruffies won, AI is basically large-scale statistics, and most practitioners these days use C++. Reply ↓

        • Vote -1 Vote +1 esr on 2017-12-18 at 20:54:08 said: >C++ is not C, and has an entirely different set of constraints. In practice, it's not thst far off from Lisp,

          Oh, bullshit. I think you're just trolling, now.

          I've been a C++ programmer and know better than this.

          But don't argue with me. Argue with Ken Thompson, who designed Go because he knows better than this. Reply ↓

            • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 06:02:03 said: Modern C++ is a long way from C++ when it was first standardized in 1998. You should *never* be manually managing memory in modern C++. You want a dynamically sized array? Use std::vector. You want an ad-hoc graph? Use std::shared_ptr and std::weak_ptr.
            Any code I see which uses new or delete, malloc or free will fail code review.
            Destructors and the RAII idiom mean that this covers *any* resource, not just memory.
            See the C++ Core Guidelines on resource and memory management: http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-resource Reply ↓
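
            For illustration, a minimal sketch of that "no bare new/delete" style, assuming a made-up Widget type:

                // Containers and smart pointers own the memory; there is no new,
                // new[], delete or free anywhere in the program.
                #include <memory>
                #include <string>
                #include <vector>

                struct Widget {
                    std::string name;
                };

                int main() {
                    std::vector<int> samples;            // dynamically sized array
                    for (int i = 0; i < 100; ++i)
                        samples.push_back(i * i);

                    auto w = std::make_unique<Widget>(); // single owner, no delete needed
                    w->name = "gadget";

                    std::vector<std::unique_ptr<Widget>> widgets;
                    widgets.push_back(std::move(w));     // ownership moves into the vector
                }                                        // everything is freed here automatically
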
            • Vote -1 Vote +1 esr on 2017-12-19 at 07:53:58 said: >Modern C++ is a long way from C++ when it was first standardized in 1998.

              That's correct. Modern C++ is a disaster area of compounded complexity and fragile kludges piled on in a failed attempt to fix leaky abstractions. 1998 C++ had the leaky-abstractions problem, but at least it was drastically simpler. Clue: complexification when you don't even fix the problems is bad.

              My experience dates from 2009 and included Boost – I was a senior dev on Battle For Wesnoth. Don't try to tell me I don't know what "modern C++" is like. Reply ↓

              • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 08:17:58 said: > My experience dates from 2009 and included Boost – I was a senior dev on Battle For Wesnoth. Don't try to tell me I don't know what "modern C++" is like.

                C++ in 2009 with boost was C++ from 1998 with a few extra libraries. I mean that quite literally -- the standard was unchanged apart from minor fixes in 2003.

                C++ has changed a lot since then. There have been 3 standards issued, in 2011, 2014, and just now in 2017. Between them, there is a huge list of changes to the language and the standard library, and these are readily available -- both clang and gcc have kept up-to-date with the changes, and even MSVC isn't far behind. Even more changes are coming with C++20.

                So, with all due respect, C++ from 2009 is not "modern C++", though there certainly were parts of boost that were leaning that way.

                If you are interested, browse the wikipedia entries: https://en.wikipedia.org/wiki/C%2B%2B11 https://en.wikipedia.org/wiki/C%2B%2B14 and https://en.wikipedia.org/wiki/C%2B%2B17 along with articles like https://blog.smartbear.com/development/the-biggest-changes-in-c11-and-why-you-should-care/ http://www.drdobbs.com/cpp/the-c14-standard-what-you-need-to-know/240169034 and https://isocpp.org/files/papers/p0636r0.html Reply ↓

                • Vote -1 Vote +1 esr on 2017-12-19 at 08:37:11 said: >So, with all due respect, C++ from 2009 is not "modern C++", though there certainly were parts of boost that were leaning that way.

                  But the foundational abstractions are still leaky. So when you tell me "it's all better now", I don't believe you. I just plain do not.

                  I've been hearing this soothing song ever since around 1989. "Trust us, it's all fixed." Then I look at the "fixes" and they're horrifying monstrosities like templates – all the dangers of preprocessor macros and a whole new class of Turing-complete nightmares, too! In thirty years I'm certain I'll be hearing that C++2047 solves all the problems this time for sure, and I won't believe a word of it then, either. Reply ↓

                  • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 08:45:34 said: > But the foundational abstractions are still leaky.

                    If you would elaborate on this, I would be grateful. What are the problematic leaky abstractions you are concerned about? Reply ↓

                    • Vote -1 Vote +1 esr on 2017-12-19 at 09:26:24 said: >If you would elaborate on this, I would be grateful. What are the problematic leaky abstractions you are concerned about?

                      Are array accesses bounds-checked? Don't yammer about iterators; what happens if I say foo[3] and foo is dimension 2? Never mind, I know the answer.

                      Are bare, untyped pointers still in the language? Never mind, I know the answer.

                      Can I get a core dump from code that the compiler has statically checked and contains no casts? Never mind, I know the answer.

                      Yes, C has these problems too. But it doesn't pretend not to, and in C I'm never afflicted by masochistic cultists denying that they're problems.

                    • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 09:54:51 said: Thank you for the list of concerns.

                      > Are array accesses bounds-checked? Don't yammer about iterators; what happens if I say foo[3] and foo is dimension 2? Never mind, I know the answer.

                      You are right, bare arrays are not bounds-checked, but std::array provides an at() member function, so arr.at(3) will throw if the array is too small.

                      Also, ranged-for loops can avoid the need for explicit indexing lots of the time anyway.
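
                      For illustration, a minimal sketch of the two points above (a bounds-checked at() call and a ranged-for loop), using a two-element std::array as in the foo example:

                          #include <array>
                          #include <iostream>
                          #include <stdexcept>

                          int main() {
                              std::array<int, 2> foo{10, 20};

                              for (int v : foo)          // ranged-for: no explicit indexing
                                  std::cout << v << "\n";

                              try {
                                  std::cout << foo.at(3);            // out of range:
                              } catch (const std::out_of_range& e) { // throws instead of
                                  std::cout << "caught: "            // corrupting memory
                                            << e.what() << "\n";
                              }
                          }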

                      > Are bare, untyped pointers still in the language? Never mind, I know the answer.

                      Yes, void* is still in the language. You need to cast it to use it, which is something that is easy to spot in a code review.

                      > Can I get a core dump from code that the compiler has statically checked and contains no casts? Never mind, I know the answer.

                      Probably. Is it possible to write code in any language that dies horribly in an unintended fashion?

                      > Yes, C has these problems too. But it doesn't pretend not to, and in C I'm never afflicted by masochistic cultists denying that they're problems.

                      Did I say C++ was perfect? This blog post was about the problems inherent in the lack of automatic memory management in C and C++, and thus why you wouldn't have written reposurgeon if that's all you had. My point is that it is easy to write C++ in a way that doesn't suffer from those problems.

                    • Vote -1 Vote +1 esr on 2017-12-19 at 10:10:11 said: > My point is that it is easy to write C++ in a way that doesn't suffer from those problems.

                      No, it is not. The error statistics of large C++ programs refute you.

                      My personal experience on Battle for Wesnoth refutes you.

                      The persistent self-deception I hear from C++ advocates on this score does nothing to endear the language to me.

                    • Vote -1 Vote +1 Ian Bruene on 2017-12-19 at 11:05:22 said: So what I am hearing in this is: "Use these new standards built on top of the language, and make sure every single one of your dependencies holds to them just as religiously as you do. And if anyone fails at any point in the chain you are doomed."

                      Cool.

                  • Vote -1 Vote +1 Casey Barker on 2017-12-19 at 11:12:16 said: Using Go has been a revelation, so I mostly agree with Eric here. My only objection is to equating C++03/Boost with "modern" C++. I used both heavily, and given a green field, I would consider C++14 for some of these thorny designs that I'd never have used C++03/Boost for. It's a qualitatively different experience. Just browse a copy of Scott Meyers's _Effective Modern C++_ for a few minutes, and I think you'll at least understand why C++14 users object to the comparison. Modern C++ enables better designs.

                    Alas, C++ is a multi-layered tool chest. If you stick to the top two shelves, you can build large-scale, complex designs with pretty good safety and nigh unmatched performance. Everything below the third shelf has rusty tools with exposed wires and no blade guards, and on large-scale projects, it's impossible to keep J. Random Programmer from reaching for those tools.

                    So no, if they keep adding features, C++ 2047 won't materially improve this situation. But there is a contingent (including Meyers) pushing for the *removal* of features. I think that's the only way C++ will stay relevant in the long-term.
                    http://scottmeyers.blogspot.com/2015/11/breaking-all-eggs-in-c.html Reply ↓

                  • Vote -1 Vote +1 Zygo on 2017-12-19 at 11:52:17 said: My personal experience is that C++11 code (in particular, code that uses closures, deleted methods, auto (a feature you yourself recommended for C with different syntax), and the automatic memory and resource management classes) has fewer defects per developer-year than the equivalent C++03-and-earlier code.

                    This is especially so if you turn on compiler flags that disable the legacy features (e.g. -Werror=old-style-cast), and treat any legacy C or C++03 code like foreign language code that needs to be buried under a FFI to make it safe to use.

                    Qualitatively, the defects that do occur are easier to debug in C++11 vs C++03. There are fewer opportunities for the compiler to interpolate in surprising ways because the automatic rules are tighter, the library has better utility classes that make overloads and premature optimization less necessary, the core language has features that make templates less necessary, and it's now possible to explicitly select or rule out invalid candidates for automatic code generation.

                    I can design in Lisp, but write C++11 without much effort of mental translation. Contrast with C++03, where people usually just write all the Lispy bits in some completely separate language (or create shambling horrors like Boost to try to band-aid over the missing limbs – boost::lambda, anyone? Oh, look, since C++11 they've doubled down on something called boost::phoenix).

                    Does C++11 solve all the problems? Absolutely not, that would break compatibility. But C++11 is noticeably better than its predecessors. I would say the defect rates are now comparable to Perl with a bunch of custom C modules (i.e. exact defect rate depends on how much you wrote in each language). Reply ↓
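
                    For illustration, a minimal sketch of a few of the C++11 features mentioned above (auto, a closure, and a deleted method that rules out an invalid operation at compile time); the Connection type is made up:

                        #include <algorithm>
                        #include <iostream>
                        #include <vector>

                        struct Connection {
                            Connection() = default;
                            Connection(const Connection&) = delete;            // copying a live connection
                            Connection& operator=(const Connection&) = delete; // is ruled out at compile time
                        };

                        int main() {
                            std::vector<int> latencies{12, 7, 42, 3};

                            auto threshold = 10;                               // auto: deduced as int
                            auto slow = std::count_if(latencies.begin(), latencies.end(),
                                                      [threshold](int ms) {    // closure capturing threshold
                                                          return ms > threshold;
                                                      });
                            std::cout << slow << " slow samples\n";

                            Connection c;
                            // Connection copy = c;  // error if uncommented: copy constructor is deleted
                            (void)c;
                        }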

                  • Vote -1 Vote +1 NHO on 2017-12-19 at 11:55:11 said: C++ happily turned into a complexity tar pit with the attitude that "everything that could be implemented in the STL with templates should be, instead of in the core language". And instead of deprecating/removing features, it just leaves them there. Reply ↓
                • Vote -1 Vote +1 Michael on 2017-12-19 at 08:59:41 said: For the curious, can you point to a C++ tutorial/intro that shows how to do it the right way? Reply ↓
              • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 08:26:13 said: > That's correct. Modern C++ is a disaster area of compounded complexity and fragile kludges piled on in a failed attempt to fix leaky abstractions. 1998 C++ had the leaky-abstractions problem, but at least it was drastically simpler. Clue: complexification when you don't even fix the problems is bad.

                I agree that there is a lot of complexity in C++. That doesn't mean you have to use all of it. Yes, it makes maintaining legacy code harder, because the older code might use dangerous or complex parts, but for new code we can avoid the danger, and just stick to the simple, safe parts.

                The complexity isn't all bad, though. Part of the complexity arises by providing the ability to express more complex things in the language. This can then be used to provide something simple to the user.

                Take std::variant as an example. This is a new facility from C++17 that provides a type-safe discriminated variant. If you have a variant that could hold an int or a string and you store an int in it, then attempting to access it as a string will cause an exception rather than a silent error. The code that *implements* std::variant is complex. The code that uses it is simple. Reply ↓
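
                For illustration, a minimal sketch of the std::variant behaviour described above (C++17):

                    #include <iostream>
                    #include <string>
                    #include <variant>

                    int main() {
                        std::variant<int, std::string> v = 42;       // currently holds an int

                        std::cout << std::get<int>(v) << "\n";       // fine: prints 42

                        try {
                            std::cout << std::get<std::string>(v);   // wrong alternative:
                        } catch (const std::bad_variant_access& e) { // throws, no silent error
                            std::cout << "caught: " << e.what() << "\n";
                        }
                    }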

          • Vote -1 Vote +1 Jeff Read on 2017-12-20 at 09:07:06 said: I won't argue with you. C++ is error-prone (albeit less so than C) and horrid to work in. But for certain classes of algorithmically complex, CPU- and RAM-intensive problems it is literally the only viable choice. And it looks like performing surgery on GCC-scale repos falls into that class of problem.

            I'm not even saying it was a bad idea to initially write reposurgeon in Python. Python and even Ruby are great languages to write prototypes or even small-scale production versions of things because of how rapidly they may be changed while you're hammering out the details. But scale comes around to bite you in the ass sooner than most people think and when it does, your choice of language hobbles you in a way that can't be compensated for by throwing more silicon at the problem. And it's in that niche where C++ and Rust dominate, absolutely uncontested. Reply ↓

          • Vote -1 Vote +1 jim on 2017-12-22 at 06:41:27 said: If you found rust hard going, you are not a C++ programmer who knows better than this.

            You were writing C in C++ Reply ↓

      • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 06:15:12 said: > How many times do I have to repeat "reposurgeon would never have been
        > written under that constraint" before somebody who claims LISP
        > experience gets it?

        That speaks to your lack of experience with modern C++, rather than an inherent limitation. *You* might not have written reposurgeon under that constraint, because *you* don't feel comfortable that you wouldn't have ended up with a black hole of AMM. That does not mean that others wouldn't have or couldn't have, or that their code would necessarily be an unmaintainable black hole.

        In well-written modern C++, memory management errors are a solved problem. You can just write code, and know that the compiler and library will take care of cleaning up for you, just like with a GC-based system, but with the added benefit that it's deterministic, and can handle non-memory resources such as file handles and sockets too. Reply ↓

        • Vote -1 Vote +1 esr on 2017-12-19 at 07:59:30 said: >In well-written modern C++, memory management errors are a solved problem

          In well-written assembler memory management errors are a solved problem. I hate this idiotic cant repetition about how if you're just good enough for the language it won't hurt you – it sweeps the actual problem under the rug while pretending to virtue. Reply ↓

          • Vote -1 Vote +1 Anthony Williams on 2017-12-19 at 08:08:53 said: > I hate this idiotic repetition about how if you're just good enough for the language it won't hurt you – it sweeps the actual problem under the rug while pretending to virtue.

            It's not about being "just good enough". It's about *not* using the dangerous parts. If you never use manual memory management, then you can't forget to free, for example, and automatic memory management is *easy* to use. std::string is a darn sight easier to use than the C string functions, for example, and std::vector is a darn sight easier to use than dynamic arrays with new. In both cases, the runtime manages the memory, and it is *easier* to use than the dangerous version.

            Every language has "dangerous" features that allow you to cause problems. Well-written programs in a given language don't use the dangerous features when there are equivalent ones without the problems. The same is true with C++.

            The fact that there are historically areas where C++ didn't provide a good solution, and thus there are programs that don't use the modern solution and experience the consequent problems, is not an inherent problem with the language, but it does make it harder to educate people. Reply ↓

            • Vote -1 Vote +1 John D. Bell on 2017-12-19 at 10:48:09 said: > It's about *not* using the dangerous parts. Every language has "dangerous" features that allow you to cause problems. Well-written programs in a given language don't use the dangerous features when there are equivalent ones without the problems.

              Why not use a language that doesn't have "'dangerous' features"?

              NOTES: [1] I am not saying that Go is necessarily that language – I am not even saying that any existing language is necessarily that language.
              [2] /me is being someplace between naive and trolling here. Reply ↓

              • Vote -1 Vote +1 esr on 2017-12-19 at 11:10:15 said: >Why not use a language that doesn't have "'dangerous' features"?

                Historically, it was because hardware was weak and expensive – you couldn't afford the overhead imposed by those languages. Now it's because the culture of software engineering has bad habits formed in those days and reflexively flinches from using higher-overhead safe languages, though it should not. Reply ↓

              • Vote -1 Vote +1 Paul R on 2017-12-19 at 12:30:42 said: Runtime efficiency still matters. That and the ability to innovate are the reasons I think C++ is in such wide use.

                To be provocative, I think there are two types of programmer, the ones who watch Eric Niebler on Ranges https://www.youtube.com/watch?v=mFUXNMfaciE&t=4230s and think 'Wow, I want to find out more!' and the rest. The rest can have Go and Rust

                D of course is the baby elephant in the room, worth much more attention than it gets. Reply ↓

                • Vote -1 Vote +1 Michael on 2017-12-19 at 12:53:33 said: Runtime efficiency still matters. That and the ability to innovate are the reasons I think C++ is in such wide use.

                  Because you can't get runtime efficiency in any other language?

                  Because you can't innovate in any other language? Reply ↓

                  • Vote -1 Vote +1 Paul R on 2017-12-19 at 13:50:56 said: Obviously not.

                    Our three main advantages: runtime efficiency, innovation opportunity, building on a base of millions of lines of code that run the internet, and an international standard.

                    Our four main advantages

                    More seriously, C++ enabled the STL, the STL transforms the approach of its users, with much increased reliability and readability, but no loss of performance. And at the same time your old code still runs. Now that is old stuff, and STL2 is on the way. Evolution.

                    That's five. Damn Reply ↓

                  • Vote -1 Vote +1 Zygo on 2017-12-19 at 14:14:42 said: > Because you can't innovate in any other language?

                    That claim sounded odd to me too. C++ looks like the place that well-proven features of younger languages go to die and become fossilized. The standardization process would seem to require it. Reply ↓

                    • Vote -1 Vote +1 Paul R on 2017-12-20 at 06:27:47 said: Such as?

                      My thought was the language is flexible enough to enable new stuff, and has sufficient weight behind it to get that new stuff actually used.

                      Generic programming being a prime example.

                    • Vote -1 Vote +1 Michael on 2017-12-20 at 08:19:41 said: My thought was the language is flexible enough to enable new stuff, and has sufficient weight behind it to get that new stuff actually used.

                      Are you sure it's that, or is it more the fact that the standards committee has forever had a me-too kitchen-sink no-feature-left-behind obsession?

                      (Makes me wonder if it doesn't share some DNA with the featuritis that has been Microsoft's calling card for so long – they grew up together.)

                    • Vote -1 Vote +1 Paul R on 2017-12-20 at 11:13:20 said: No, because people come to the standards committee with ideas, and you cannot have too many libraries. You don't pay for what you don't use. Prime directive C++.
                    • Vote -1 Vote +1 Michael on 2017-12-20 at 11:35:06 said: and you cannot have too many libraries. You don't pay for what you don't use.

                      And this, I suspect, is the primary weakness in your perspective.

                      Is the defect rate of C++ code better or worse because of that?

                    • Vote -1 Vote +1 Paul R on 2017-12-20 at 15:49:29 said: The rate is obviously lower because I've written less code and library code only survives if it is sound. Are you suggesting that reusing code is a bad idea? Or that an indeterminate number of reimplementations of the same functionality is a good thing?

                      You're not on the most productive path to effective criticism of C++ here.

                    • Vote -1 Vote +1 Michael on 2017-12-20 at 17:40:45 said: The rate is obviously lower because I've written less code

                      Please reconsider that statement in light of how defect rates are measured.

                      Are you suggesting..

                      Arguing strawmen and words you put in someone's mouth is not the most productive path to effective defense of C++.

                      But thank you for the discussion.

                    • Vote -1 Vote +1 Paul R on 2017-12-20 at 18:46:53 said: This column is too narrow to have a decent discussion. WordPress should rewrite in C++ or I should dig out my Latin dictionary.

                      Seriously, extending the reach of libraries that become standardised is hard to criticise, extending the reach of the core language is.

                      It used to be a thing that C didn't have built in functionality for I/O (for example) rather it was supplied by libraries written in C interfacing to a lower level system interface. This principle seems to have been thrown out of the window for Go and the others. I'm not sure that's a long term win. YMMV.

                      But use what you like or what your cannot talk your employer out of using, or what you can get a job using. As long as it's not Rust.

            • Vote -1 Vote +1 Zygo on 2017-12-19 at 12:24:25 said: > Well-written programs in a given language don't use the dangerous features

              Some languages have dangerous features that are disabled by default and must be explicitly enabled prior to use. C++ should become one of those languages.

              I am very fond of the 'override' keyword in C++11, which allows me to say "I think this virtual method overrides something, and don't compile the code if I'm wrong about that." Making that assertion incorrectly was a huge source of C++ errors for me back in the days when I still used C++ virtual methods instead of lambdas. C++11 solved that problem two completely different ways: one informs me when I make a mistake, and the other makes it impossible to be wrong.
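
              For illustration, a minimal sketch of the 'override' check described above; the mismatched signature in Bad only compiles because 'override' is omitted, and adding the keyword turns the mistake into a compile error:

                  struct Base {
                      virtual void frobnicate(int) {}
                      virtual ~Base() = default;
                  };

                  struct Good : Base {
                      void frobnicate(int) override {}     // OK: really overrides Base::frobnicate
                  };

                  struct Bad : Base {
                      // void frobnicate(long) override {} // error if uncommented: overrides nothing
                      void frobnicate(long) {}             // without 'override' this compiles silently
                                                           // and merely hides the base method
                  };

                  int main() {}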

              Arguably, one should be able to annotate any C++ block and say "there shall be no manipulation of bare pointers here" or "all array access shall be bounds-checked here" or even "and that's the default for the entire compilation unit." GCC can already emit warnings for these without human help in some cases. Reply ↓

      • Vote -1 Vote +1 Kevin S. Van Horn on 2017-12-20 at 12:20:14 said: Is this a good summary of your objections to C++ smart pointers as a solution to AMM?

        1. Circular references. C++ has smart pointer classes that work when your data structures are acyclic, but it doesn't have a good solution for circular references. I'm guessing that reposurgeon's graphs are almost never DAGs.

        2. Subversion of AMM. Bare news and deletes are still available, so some later maintenance programmer could still introduce memory leaks. You could forbid the use of bare new and delete in your project, and write a check-in hook to look for violations of the policy, but that's one more complication to worry about, and it would be difficult to impossible to implement reliably due to macros and the general difficulty of parsing C++.

        3. Memory corruption. It's too easy to overrun the end of arrays, treat a pointer to a single object as an array pointer, or otherwise corrupt memory. Reply ↓

        • Vote -1 Vote +1 esr on 2017-12-20 at 15:51:55 said: >Is this a good summary of your objections to C++ smart pointers as a solution to AMM?

          That is at least a large subset of my objections, and probably the most important ones. Reply ↓

          • Vote -1 Vote +1 jim on 2017-12-22 at 07:15:20 said: It is uncommon to find a cyclic graph that cannot be rendered acyclic by weak pointers.

            C++17 cheerfully breaks backward compatibility by removing some dangerous idioms, refusing to compile code that should never have been written. Reply ↓

        • Vote -1 Vote +1 guest on 2017-12-20 at 19:12:01 said: > Circular references. C++ has smart pointer classes that work when your data structures are acyclic, but it doesn't have a good solution for circular references. I'm guessing that reposurgeon's graphs are almost never DAGs.

          General graphs with possibly-cyclical references are precisely the workload GC was created to deal with optimally, so ESR is right in the sense that reposurgeon _requires_ a GC-capable language to work. In most other programs, you'd still want to make sure that the extent of the resources that are under GC control is properly contained (which a Rust-like language would help a lot with), but it's possible that even this is not quite worthwhile for reposurgeon. Still, I'd want to make sure that my program is optimized in _other_ possible ways, especially wrt. using memory bandwidth efficiently – and Go looks like it doesn't really allow that. Reply ↓

          • Vote -1 Vote +1 esr on 2017-12-20 at 20:12:49 said: >Still, I'd want to make sure that my program is optimized in _other_ possible ways, especially wrt. using memory bandwidth efficiently – and Go looks like it doesn't really allow that.

            Er, there's any language that does allow it? Reply ↓

            • Vote -1 Vote +1 Jeff Read on 2017-12-27 at 20:58:43 said: Yes -- ahem -- C++. That's why it's pretty much the only language taken seriously by game developers. Reply ↓
        • Vote -1 Vote +1 Zygo on 2017-12-21 at 12:56:20 said: > I'm guessing that reposurgeon's graphs are almost never DAGs

          Why would reposurgeon's graphs not be DAGs? Some exotic case that comes up with e.g. CVS imports that never arises in a SVN->Git conversion (admittedly the only case I've really looked deeply at)?

          Git repos, at least, are cannot-be-cyclic-without-astronomical-effort graphs (assuming no significant advances in SHA1 cracking and no grafts – and even then, all you have to do is detect the cycle and error out). I don't know how a generic revision history data structure could contain a cycle anywhere even if I wanted to force one in somehow. Reply ↓

          • Vote -1 Vote +1 esr on 2017-12-21 at 15:13:18 said: >Why would reposurgeon's graphs not be DAGs?

            The repo graph is, but a lot of the structures have reference loops for fast lookup. For example, a blob instance has a pointer back to the containing repo, as well as being part of the repo through a pointer chain that goes from the repo object to a list of commits to a blob.

            Without those loops, navigation in the repo structure would get very expensive. Reply ↓

            • Vote -1 Vote +1 guest on 2017-12-21 at 15:22:32 said: Aren't these inherently "weak" pointers though? In that they don't imply ownership/live data, whereas the "true" DAG references do? In that case, and assuming you can be sufficiently sure that only DAGs will happen, refcounting (ideally using something like Rust) would very likely be the most efficient choice. No need for a fully-general GC here. Reply ↓
              • Vote -1 Vote +1 esr on 2017-12-21 at 15:34:40 said: >Aren't these inherently "weak" pointers though? In that they don't imply ownership/live data

                I think they do. Unless you're using "ownership" in some sense I don't understand. Reply ↓

                • Vote -1 Vote +1 jim on 2017-12-22 at 07:31:39 said: A weak pointer does not own the object it points to. A shared pointer does.

                  When there are zero shared pointers pointing to an object, it gets freed, regardless of how many weak pointers are pointing to it.

                  Shared pointers and unique pointers own, weak pointers do not own. Reply ↓

            • Vote -1 Vote +1 jim on 2017-12-22 at 07:23:35 said: In C++11, one would implement a pointer back to the owning object as a weak pointer. Reply ↓
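
              For illustration, a minimal sketch of that idiom with made-up Repo/Commit types: the back-pointer is a std::weak_ptr, so it navigates upward without owning the repo and without creating a reference cycle.

                  #include <iostream>
                  #include <memory>
                  #include <vector>

                  struct Repo;

                  struct Commit {
                      std::weak_ptr<Repo> owner;                    // back-pointer, non-owning
                  };

                  struct Repo : std::enable_shared_from_this<Repo> {
                      std::vector<std::shared_ptr<Commit>> commits; // owning, "downward" pointers
                      void add_commit() {
                          auto c = std::make_shared<Commit>();
                          c->owner = shared_from_this();
                          commits.push_back(std::move(c));
                      }
                  };

                  int main() {
                      auto repo = std::make_shared<Repo>();
                      repo->add_commit();
                      auto commit = repo->commits[0];

                      if (auto r = commit->owner.lock())            // follow the back-pointer
                          std::cout << "repo has " << r->commits.size() << " commit(s)\n";

                      repo.reset();                                 // drop the only owning reference
                      std::cout << std::boolalpha
                                << "owner gone? " << commit->owner.expired() << "\n";  // true
                  }
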
      • Vote -1 Vote +1 jim on 2017-12-23 at 00:40:36 said:

        > How many times do I have to repeat "reposurgeon would never have been written under that constraint" before somebody who claims LISP experience gets it?

        Maybe it is true, but since you do not understand, or particularly wish to understand, Rust scoping, ownership, and zero-cost abstractions, or C++ weak pointers, what we hear you say is that you would never have written reposurgeon under that constraint.

        Which, since no one else is writing reposurgeon, is an argument, but not an argument that those who do get weak pointers and rust scopes find all that convincing.

        I am inclined to think that those who write C++98 (which is the gcc default) could not write reposurgeon under that constraint, but those who write C++11 could write reposurgeon under that constraint, and except for some rather unintelligible, complicated, and twisted class constructors invoking and enforcing the C++11 automatic memory management system, it would look very similar to your existing python code. Reply ↓

        • Vote -1 Vote +1 esr on 2017-12-23 at 02:49:13 said: >since you do not understand, or particularly wish to understand, Rust scoping, ownership, and zero cost abstractions, or C++ weak pointers

          Thank you, I understand those concepts quite well. I simply prefer to apply them in languages not made of barbed wire and landmines. Reply ↓

          • Vote -1 Vote +1 guest on 2017-12-23 at 07:11:48 said: I'm sure that you understand the _gist_ of all of these notions quite accurately, and this alone is of course quite impressive for any developer – but this is not quite the same as being comprehensively aware of their subtler implications. For instance, both James and I have suggested to you that backpointers implemented as an optimization of an overall DAG structure should be considered "weak" pointers, which can work well alongside reference counting.

            For that matter, I'm sure that Rustlang developers share your aversion to "barbed wire and landmines" in a programming language. You've criticized Rust before (not without some justification!) for having half-baked async-IO facilities, but I would think that reposurgeon does not depend significantly on async-IO. Reply ↓

            • Vote -1 Vote +1 esr on 2017-12-23 at 08:14:25 said: >For instance, both James and I have suggested to you that backpointers implemented as an optimization of an overall DAG structure should be considered "weak" pointers, which can work well alongside reference counting.

              Yes, I got that between the time I wrote my first reply and JAD brought it up. I've used Python weakrefs in similar situations. I would have seemed less dense if I'd had more sleep at the time.

              >For that matter, I'm sure that Rustlang developers share your aversion to "barbed wire and landmines" in a programming language.

              That acidulousness was mainly aimed at C++. Rust, if it implements its theory correctly (a point on which I am willing to be optimistic) doesn't have C++'s fatal structural flaws. It has problems of its own which I won't rehash as I've already anatomized them in detail. Reply ↓

    • Vote -1 Vote +1 Garrett on 2017-12-21 at 11:16:25 said: There's also development cost. I suspect that using e.g. Python drastically reduces the cost of developing the code. And since most repositories are small enough that Eric hasn't noticed accidental O(n**2) or O(n**3) algorithms until recently, it's pretty obvious that execution time just plainly doesn't matter. Migration is going to involve a temporary interruption to service and is going to be performed roughly once per repo. The amount of time involved in just stopping the e.g. SVN service and bringing up the e.g. Git hosting service is likely to be longer than the conversion time for the median conversion operation.

      So in these cases, most users don't care about the run-time, and outside of a handful of examples, wouldn't brush up against the CPU or memory limitations of a whitebox PC.

      This is in contrast to some other cases in which I've worked such as file-serving (where latency is measured in microseconds and is actually counted), or large data processing (where wasting resources reduces the total amount of stuff everybody can do). Reply ↓

  11. Vote -1 Vote +1 David Collier-Brown on 2017-12-18 at 20:20:59 said: Hmmn, I wonder if the virtual memory of Linux (and Unix, and Multics) is really the OS equivalent of the automatic memory management of application programs? One works in pages, admittedly, not bytes or groups of bytes, but one could argue that the sub-page stuff is just expensive anti-internal-fragmentation plumbing

    –dave
    [In polite Canajan, "I wonder" is the equivalent of saying "Hey everybody, look at this" in the US. And yes, that's also the redneck's famous last words.] Reply ↓

  12. Vote -1 Vote +1 John Moore on 2017-12-18 at 22:20:21 said: In my experience, with most of my C systems programming in protocol stacks and transaction processing infrastructure, the MM problem has been one of code, not data structure complexity. The memory is often allocated by code which first encounters the need, and it is then passed on through layers and at some point, encounters code which determines the memory is no longer needed. All of this creates an implicit contract that he who is handed a pointer to something (say, a buffer) becomes responsible for disposing of it. But, there may be many places where that is needed – most of them in exception handling.

    That creates many, many opportunities for someone to simply forget to release it. Also, when the code is handed off to someone unfamiliar, they may not even know about the contract. Crises (or bad habits) lead to failures to document this stuff (or to create variable names or clear conventions that suggest one should look for the contract).

    I've also done a bunch of stuff in Java, both applications level (such as a very complex Android app with concurrency) and some infrastructural stuff that wasn't as performance constrained. Of course, none of this was hard real-time although it usually at least needed to provide response within human limits, which GC sometimes caused trouble with. But, the GC was worth it, as it substantially reduced bugs which showed up only at runtime, and it simplified things.

    On the side, I write hard real time stuff on tiny, RAM constrained embedded systems – PIC18F series stuff (with the most horrible machine model imaginable for such a simple little beast). In that world, there is no malloc used, and shouldn't be. It's compile time created buffers and structures for the most part. Fortunately, the applications don't require advanced dynamic structures (like symbol tables) where you need memory allocation. In that world, AMM isn't an issue. Reply ↓

    • Vote -1 Vote +1 Michael on 2017-12-18 at 22:47:26 said: PIC18F series stuff (with the most horrible machine model imaginable for such a simple little beast)
      LOL. Glad I'm not the only one who thought that. Most of my work was on the 16F – after I found out what it took to do a simple table lookup, I was ready for a stiff drink. Reply ↓
    • Vote -1 Vote +1 esr on 2017-12-18 at 23:45:03 said: >In my experience, with most of my C systems programming in protocol stacks and transaction processing infrastructure, the MM problem has been one of code, not data structure complexity.

      I believe you. I think I gravitate to problems with data-structure complexity because, well, that's just the way my brain works.

      But it's also true that I have never forgotten one of the earliest lessons I learned from Lisp. When you can turn code complexity into data structure complexity, that's usually a win. Or to put it slightly differently, dumb code munching smart data beats smart code munching dumb data. It's easier to debug and reason about. Reply ↓

      • Vote -1 Vote +1 Jeremy on 2017-12-19 at 01:36:47 said: Perhaps it's because my coding experience has mostly been short Python scripts of varying degrees of quick-and-dirtiness, but I'm having trouble grokking the difference between smart code/dumb data vs dumb code/smart data. How does one tell the difference?

        Now, as I type this, my intuition says it's more than just the scary mess of nested if statements being in the class definition for your data types, as opposed to the function definitions which munch on those data types; a scary mess of nested if statements is probably the former. The latter though I'm coming up blank.

        Perhaps a better question than my one above: what codebases would you recommend for study which would be good examples of the latter (besides reposurgeon)? Reply ↓

        • Vote -1 Vote +1 jsn on 2017-12-19 at 02:35:48 said: I've always expressed it as "smart data + dumb logic = win".

          You almost said my favorite canned example: a big conditional block vs. a lookup table. The LUT can replace all the conditional logic with structured data and shorter (simpler, less bug-prone, faster, easier to read) unconditional logic that merely does the lookup. Concretely in Python, imagine a long list of "if this, assign that" replaced by a lookup into a dictionary. It's still all "code", but the amount of program logic is reduced.

          So I would answer your first question by saying look for places where data structures are used. Then guesstimate how complex some logic would have to be to replace that data. If that complexity would outstrip that of the data itself, then you have a "smart data" situation. Reply ↓
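
          For illustration, a minimal C++ analogue of that dictionary-lookup pattern (the command names are made up): the mapping is structured data, and the remaining logic is an unconditional lookup rather than an if/else chain.

              #include <functional>
              #include <iostream>
              #include <string>
              #include <unordered_map>

              int main() {
                  // Smart data: the behaviour lives in a table...
                  const std::unordered_map<std::string, std::function<void()>> actions = {
                      {"start",  [] { std::cout << "starting\n"; }},
                      {"stop",   [] { std::cout << "stopping\n"; }},
                      {"status", [] { std::cout << "all good\n"; }},
                  };

                  // ...dumb logic: just look the command up.
                  for (std::string cmd : {"start", "status", "frobnicate"}) {
                      auto it = actions.find(cmd);
                      if (it != actions.end())
                          it->second();
                      else
                          std::cout << "unknown command: " << cmd << "\n";
                  }
              }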

          • Vote -1 Vote +1 Emanuel Rylke on 2017-12-19 at 04:07:58 said: To expand on this, it can even be worth to use complex code to generate that dumb lookup table. This is so because the code generating the lookup table runs before, and therefore separately, from the code using the LUT. This means that both can be considered in isolation more often; bringing the combined complexity closer to m+n than m*n. Reply ↓
          • Vote -1 Vote +1 TheDividualist on 2017-12-19 at 05:39:39 said: Admittedly I have an SQL hammer and think everything is a nail, but why wouldn't *every* program include a database, like the SQLite that even comes bundled with Python distros, no sweat, and put that lookup table into it, not in a dictionary inside the code?

            Of course the more you go in this direction the more problems you will have with unit testing, in case you want to do such a thing. Generally we SQL-hammer guys don't do that much, because in theory any function can read any part of the database, making the whole database the potential "inputs" for every function.

            That is pretty lousy design, but I think good design patterns for separations of concerns and unit testability are not yet really known for database driven software, I mean, for example, model-view-controller claims to be one, but actually fails as these can and should call each other. So you have in the "customer" model or controller a function to check if the customer has unpaid invoices, and decide to call it from the "sales order" controller or model to ensure such customers get no new orders registered. In the same "sales order" controller you also check the "product" model or controller if it is not a discontinued product and check the "user" model or controller if they have the proper rights for this operation and the "state" controller if you are even offering this product in that state and so on a gazillion other things, so if you wanted to automatically unit test that "register a new sales order" function you have a potential "input" space of half the database. And all that with good separation of concerns MVC patterns. So I think no one really figured this out yet? Reply ↓

          • Vote -1 Vote +1 guest on 2017-12-20 at 19:21:13 said: There's a reason not to do this if you can help it – dispatching through a non-constant LUT is way slower than running easily-predicted conditionals. Like, an order of magnitude slower, or even worse. Reply ↓
        • Vote -1 Vote +1 esr on 2017-12-19 at 07:45:38 said: >Perhaps a better question than my one above: what codebases would you recommend for study which would be good examples of the latter (besides reposurgeon)?

          I do not have an instant answer, sorry. I'll hand that question to my backbrain and hope an answer pops up. Reply ↓

      • Vote -1 Vote +1 Jon Brase on 2017-12-20 at 00:54:15 said: When you can turn code complexity into data structure complexity, that's usually a win. Or to put it slightly differently, dumb code munching smart data beats smart code munching dumb data. It's easier to debug and reason about.

        Doesn't "dumb code munching smart data" really reduce to "dumb code implementing a virtual machine that runs a different sort of dumb code to munch dumb data"? Reply ↓

        • Vote -1 Vote +1 jim on 2017-12-22 at 20:25:07 said: "Smart Data" is effectively a domain specific language.

          A domain specific language is easier to reason about within its proper domain, because it lowers the difference between the problem and the representation of the problem. Reply ↓

  13. Vote -1 Vote +1 wisd0me on 2017-12-19 at 02:35:10 said: I wonder why you talked so much about inventing an AMM-layer, but said nothing about the GC that is already available for the C language. Why do you need to invent some AMM-layer in the first place, instead of just using the GC?
    For example, Bigloo Scheme and The GNU Objective C runtime successfully used it, among many others. Reply ↓
  14. Vote -1 Vote +1 Walter Bright on 2017-12-19 at 04:53:13 said: Maybe D, with its support for mixed GC / manual memory allocation is the right path after all! Reply ↓
  15. Vote -1 Vote +1 Jeremy Bowers on 2017-12-19 at 10:40:24 said: Rust seems like a good fit for the cases where you need the low latency (and other speed considerations) and can't afford the automation. Firefox finally got to benefit from that in the Quantum release, and there's more coming. I wouldn't dream of writing a browser engine in Go, let alone a highly-concurrent one. When you're willing to spend on that sort of quality, Rust is a good tool to get there.

    But the very characteristics necessary to be good in that space will prevent it from becoming the "default language" the way C was for so long. As much fun as it would be to fantasize about teaching Rust as a first language, I think that's crazy talk for anything but maybe MIT. (And I'm not claiming it's a good idea even then; just saying that's the level of student it would take for it to even be possible.) Dunno if Go will become that "default language" but it's taking a decent run at it; most of the other contenders I can think of at the moment have the short-term-strength-yet-long-term-weakness of being tied to a strong platform already. (I keep hearing about how Swift is going to be usable off of Apple platforms real soon now, just around the corner, just a bit longer.) Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-19 at 17:30:07 said: >Dunno if Go will become that "default language" but it's taking a decent run at it; most of the other contenders I can think of at the moment have the short-term-strength-yet-long-term-weakness of being tied to a strong platform already.

      I really think the significance of Go being an easy step up from C cannot be overestimated – see my previous blogging about the role of inward transition costs.

      Ken Thompson is insidiously clever. I like channels and subroutines and := but the really consequential hack in Go's design is the way it is almost perfectly designed to co-opt people like me – that is, experienced C programmers who have figured out that ad-hoc AMM is a disaster area. Reply ↓

      • Vote -1 Vote +1 Jeff Read on 2017-12-20 at 08:58:23 said: Go probably owes as much to Rob Pike and Phil Winterbottom for its design as it does to Thompson -- because it's basically Alef with the feature whose lack, according to Pike, basically killed Alef: garbage collection.

        I don't know that it's "insidiously clever" to add concurrency primitives and GC to a C-like language, as concurrency and memory management were the two obvious banes of every C programmer's existence back in the 90s -- so if Go is "insidiously clever", so is Java. IMHO it's just smart, savvy design which is no small thing; languages are really hard to get right. And in the space Go thrives in, Go gets a lot right. Reply ↓

  16. Vote -1 Vote +1 John G on 2017-12-19 at 14:01:09 said: Eric, have you looked into D *lately*? These days:

    * it's fully open source (Boost license),

    * there's [three back-ends to choose from]( https://dlang.org/download.html ),

    * there's [exactly one standard library]( https://dlang.org/phobos/index.html ), and

    * it's got a standard package repository and management tool ([dub]( https://code.dlang.org/ )). Reply ↓

  17. Vote -1 Vote +1 Doctor Mist on 2017-12-19 at 18:28:54 said:

    As the greenspunity rises, you are likely to find that more and more of your effort and defect chasing is related to the AMM layer, and proportionally less goes to the application logic. Redoubling your effort, you increasingly miss your aim.

    Even when you're merely at the edge of this trap, your defect rates will be dominated by issues like double-free errors and malloc leaks. This is commonly the case in C/C++ programs of even low greenspunity.

    Interesting. This certainly fits my experience.

    Has anybody looked for common patterns in whatever parasitic distractions plague you when you start to reach the limits of a language with AMM? Reply ↓

    • Vote -1 Vote +1 Dave taht on 2017-12-23 at 10:44:24 said: The biggest thing that I hate about go is the

      result, err := whatever()
      if err != nil {
          dosomethingtofixit()
      }

      abstraction.

      I went through a phase earlier this year where I tried to eliminate the concept of an errno entirely (and failed, in the end reinventing lisp, badly), but sometimes I still think – to the tune of the Flight of the Valkyries – "Kill the errno, kill the errno, kill the ERRno, kill the err!" Reply ↓

    • Vote -1 Vote +1 jim on 2017-12-23 at 23:37:46 said: I have on several occasions been part of big projects using languages with AMM, many programmers, much code, and they hit scaling problems and died, but it is not altogether easy to explain what the problem was.

      But it was very clear that the fact that I could get a short program, or a quick fix, up and running with an AMM language much faster than in C or C++ was failing to translate into getting a very large program containing far too many quick fixes up and running. Reply ↓

  18. Vote -1 Vote +1 François-René Rideau on 2017-12-19 at 21:39:05 said: Insightful, but I think you are missing a key point about Lisp and Greenspunning.

    AMM is not the only thing that Lisp brings to the table when it comes to dealing with Greenspunity. Actually, the whole point of Lisp is that there is not _one_ conceptual barrier to development, or a few, or even a lot, but that there are _arbitrarily_many_, and that is why you need to be able to extend your language through _syntactic_abstraction_ to build DSLs so that every abstraction layer can be written in a language that is fit for that layer. [Actually, traditional Lisp is missing the fact that DSL tooling depends on _restriction_ as well as _extension_; but Haskell types and Racket languages show the way forward in this respect.]

    That is why all languages without macros, even with AMM, remain "blub" to those who grok Lisp. Even in Go, they reinvent macros, just very badly, with various preprocessors to cope with the otherwise very low abstraction ceiling.

    (Incidentally, I wouldn't say that Rust has no AMM; instead it has static AMM. It also has some support for macros.) Reply ↓

    • Vote -1 Vote +1 Patrick Maupin on 2017-12-23 at 18:44:27 said: " static AMM" ???

      WTF sort of abuse of language is this?

      Oh, yeah, rust -- the language developed by Humpty Dumpty acolytes:

      https://github.com/rust-lang/rust/pull/25640

      You just can't make this stuff up. Reply ↓

        • Vote -1 Vote +1 jim on 2017-12-23 at 22:02:18 said: Static AMM means that the compiler analyzes your code at compile time and generates the appropriate frees.

          Static AMM means that the compiler automatically does what you do manually in C, and semi-automatically in C++11. Reply ↓

        • Vote -1 Vote +1 Patrick Maupin on 2017-12-24 at 13:36:35 said: To the extent that the compiler's insertion of calls to free() can be easily deduced from the code without special syntax, the insertion is merely an optimization of the sort of standard AMM semantics that, for example, a PyPy compiler could do.

          To the extent that the compiler's ability to insert calls to free() requires the sort of special syntax about borrowing that means that the programmer has explicitly described a non-stack-based scope for the variable, the memory management isn't automatic.

          Perhaps this is why a google search for "static AMM" doesn't return much. Reply ↓

          • Vote -1 Vote +1 Jeff Read on 2017-12-27 at 03:01:19 said: I think you fundamentally misunderstand how borrowing works in Rust.

            In Rust, as in C++ or even C, references have value semantics. That is to say any copies of a given reference are considered to be "the same". You don't have to "explicitly describe a non-stack-based scope for the variable", but the hitch is that there can be one, and only one, copy of the original reference to a variable in use at any time. In Rust this is called ownership, and only the owner of an object may mutate it.

            Where borrowing comes in is that functions called by the owner of an object may borrow a reference to it. Borrowed references are read-only, and may not outlast the scope of the function that does the borrowing. So everything is still scope-based. This provides a convenient way to write functions in such a way that they don't have to worry about where the values they operate on come from or unwrap any special types, etc.

            If you want the scope of a reference to outlast the function that created it, the way to do that is to use std::rc::Rc, which provides a regular, reference-counted pointer to a heap-allocated object, much as Python does.

            The borrow checker checks all of these invariants for you and will flag an error if they are violated. Since worrying about object lifetimes is work you have to do anyway lest you pay a steep price in performance degradation or resource leakage, you win because the borrow checker makes this job much easier.

            Rust does have explicit object lifetimes, but where these are most useful is to solve the problem of how to have structures, functions, and methods that contain/return values of limited lifetime. For example, declaring struct Foo<'a> { x: &'a i32 } means that any instance of struct Foo is valid only as long as the borrowed reference inside it is valid. The borrow checker will complain if you attempt to use such a struct outside the lifetime of the internal reference. Reply ↓

      • Vote -1 Vote +1 Doctor Locketopus on 2017-12-27 at 00:16:54 said: Good Lord (not to be confused with Audre Lorde). If I weren't already convinced that Rust is a cult, that would do it.

        However, I must confess to some amusement about Karl Marx and Michel Foucault getting purged (presumably because Dead White Male). Reply ↓

      • Vote -1 Vote +1 Jeff Read on 2017-12-27 at 02:06:40 said: This is just a cost of doing business. Hacker culture has, for decades, tried to claim it was inclusive and nonjudgemental and yada yada -- "it doesn't matter if you're a brain in a jar or a superintelligent dolphin as long as your code is good" -- but when it comes to actually putting its money where its mouth is, hacker culture has fallen far short. Now that's changing, and one of the side effects of that is how we use language and communicate internally, and to the wider community, has to change.

        But none of this has to do with automatic memory management. In Rust, management of memory is not only fully automatic, it's "have your cake and eat it too": you have to worry about neither releasing memory at the appropriate time, nor the severe performance costs and lack of determinism inherent in tracing GCs. You do have to be more careful in how you access the objects you've created, but the compiler will assist you with that. Think of the borrow checker as your friend, not an adversary. Reply ↓

  19. Vote -1 Vote +1 John on 2017-12-20 at 05:03:22 said: Modern C++ is far from the C++ that was first standardized in 1998. You should *never* be manually managing memory in modern C++. You need a dynamically sized array? Use std::vector. You need an ad-hoc graph? Use std::shared_ptr and std::weak_ptr.

    Any code I see which uses new or delete, malloc or free, fails code review. Reply ↓
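
    For illustration, here is a minimal sketch of the two idioms that comment is pointing at; Node, its fields, and main() are invented names for this example, not anything from the thread:

        #include <memory>
        #include <string>
        #include <vector>

        // Hypothetical graph node: children are owning edges, the parent
        // link is weak so that cycles do not keep nodes alive forever.
        struct Node {
            std::string name;
            std::vector<std::shared_ptr<Node>> children;
            std::weak_ptr<Node> parent;
        };

        int main() {
            std::vector<int> sizes = {1, 2, 3};   // dynamically sized array, no new/delete
            sizes.push_back(4);

            auto root  = std::make_shared<Node>();
            auto child = std::make_shared<Node>();
            child->parent = root;
            root->children.push_back(child);
            // Both nodes are released automatically when the last
            // shared_ptr to them goes out of scope.
        }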

  20. Vote -1 Vote +1 Garrett on 2017-12-21 at 11:24:41 said: What makes you refer to this as a systems programming project? It seems to me to be a standard data-processing problem. Data in, data out. Sure, it's hella complicated and you're brushing up against several different constraints.

    In contrast to what I think of as systems programming, you have automatic memory management. You aren't working in kernel-space. You aren't modifying the core libraries or doing significant programmatic interface design.

    I'm missing something in your semantic usage and my understanding of the solution implementation. Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-21 at 15:08:28 said: >What makes you refer to this as a systems programming project?

      Never user-facing. Often scripted. Development-support tool. Used by systems programmers.

      I realize we're in an area where the "systems" vs. "application" distinction gets a little tricky to make. I hang out in that border zone a lot and have thought about this. Are GPSD and ntpd "applications"? Is giflib? Sure, they're out-of-kernel, but no end-user will ever touch them. Is GCC an application? Is apache or named?

      Inside the kernel is clearly systems. Outside it, I think the "systems" vs. "application" distinction is more about the skillset being applied and who your expected users are than anything else.

      I would not be upset at anyone who argued for a different distinction. I think you'll find the definitional questions start to get awfully slippery when you poke at them. Reply ↓

    • Vote -1 Vote +1 Jeff Read on 2017-12-24 at 03:21:34 said:

      What makes you refer to this as a systems programming project? It seems to me to be a standard data-processing problem. Data in, data out. Sure, it's hella complicated and you're brushing up against several different constraints.

      When you're talking about Unix, there is often considerable overlap between "systems" and "application" programming because the architecture of Unix, with pipes, input and output redirection, etc., allowed for essential OS components to be turned into simple, data-in-data-out user-space tools. The functionality of ls, cp, rm, or cat, for instance, would have been built into the shell of a pre-Unix OS (or many post-Unix ones). One of the great innovations of Unix is to turn these units of functionality into standalone programs, and then make spawning processes cheap enough to where using them interactively from the shell is easy and natural. This makes extending the system, as accessed through the shell, easy: just write a new, small program and add it to your PATH.

      So yeah, when you're working in an environment like Unix, there's no bright-line distinction between "systems" and "application" code, just like there's no bright-line distinction between "user" and "developer". Unix is a tool for facilitating humans working with computers. It cannot afford to discriminate, lest it lose its Unix-nature. (This is why Linux on the desktop will never be a thing, not without considerable decay in the facets of Linux that made it so great to begin with.) Reply ↓

  21. Vote -1 Vote +1 Peter Donis on 2017-12-21 at 22:15:44 said: @tz: you aren't going to get AMM on the current Arduino variants. At least not easily.

    At the upper end you can; the Yun has 64 MB, as do the Dragino variants. You can run OpenWRT on them and use its Python (although the latest OpenWRT release, Chaos Calmer, significantly increased its storage footprint from older firmware versions), which runs fine in that memory footprint, at least for the kinds of things you're likely to do on this type of device. Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-21 at 22:43:57 said: >You can run OpenWRT on them and use its Python

      I'd be comfortable in that environment, but if we're talking AMM languages Go would probably be a better match for it. Reply ↓

  22. Vote -1 Vote +1 jim on 2017-12-22 at 06:37:36 said: C++11 has an excellent automatic memory management layer. Its only defect is that it is optional, for backwards compatibility with C and C++98 (though it really is not all that compatible with C++98)

    And, being optional, you are apt to take the short cut of not using it, which will bite you.

    Rust is, more or less, C++17 with the automatic memory management layer being almost mandatory. Reply ↓

  23. Vote -1 Vote +1 jim on 2017-12-22 at 20:39:27 said:

    > you are likely to find that more and more of your effort and defect chasing is related to the AMM layer

    But the AMM layer for C++ has already been written and debugged, and standards and idioms exist for integrating it into your classes and type definitions.

    Once built into your classes, you are then free to write code as if in a fully garbage collected language in which all types act like ints.

    C++14, used correctly, is a metalanguage for writing domain specific languages.

    Now sometimes building your classes in C++ is weird, nonobvious, and apt to break for reasons that are difficult to explain, but done correctly all the weird stuff is done once in a small number of places, not spread all over your code. Reply ↓
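
    A minimal sketch of that "build it into your classes once" idea; Commit and Repository are invented names for illustration, not anyone's actual code:

        #include <cstddef>
        #include <memory>
        #include <string>
        #include <vector>

        class Commit {
        public:
            explicit Commit(std::string msg) : msg_(std::move(msg)) {}
            const std::string& message() const { return msg_; }
        private:
            std::string msg_;
        };

        class Repository {
        public:
            void add(std::string msg) {
                commits_.push_back(std::make_unique<Commit>(std::move(msg)));
            }
            std::size_t size() const { return commits_.size(); }
        private:
            // The only "weird stuff": ownership is declared once, here.
            std::vector<std::unique_ptr<Commit>> commits_;
            // No destructor, copy, or move code is written; the
            // compiler-generated ones already do the right thing.
        };

        int main() {
            Repository repo;
            repo.add("first commit");
            repo.add("second commit");
            // Every Commit is freed automatically when repo goes out of scope.
        }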

  24. Vote -1 Vote +1 Dave taht on 2017-12-22 at 22:31:40 said: Linux is the best C library ever created. And it's often terrifying. Things like RCU are nearly impossible for mortals to understand. Reply ↓
  25. Vote -1 Vote +1 Alex Beamish on 2017-12-23 at 11:18:48 said: Interesting thesis .. it was the 'extra layer of goodness' surrounding file operations, and not memory management, that persuaded me to move from C to Perl about twenty years ago. Once I'd moved, I also appreciated the memory management in the shape of 'any size you want' arrays, hashes (where had they been all my life?) and autovivification -- on the spot creation of array or hash elements, at any depth.

    While C is a low-level language that masquerades as a high-level language, the original intent of the language was to make writing assembler easier and faster. It can still be used for that, when necessary, leaving the more complicated matters to higher level languages. Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-23 at 14:36:26 said: >Interesting thesis .. it was the 'extra layer of goodness' surrounding file operations, and not memory management, that persuaded me to move from C to Perl about twenty years ago.

      Pretty much all that goodness depends on AMM and could not be implemented without it. Reply ↓

    • Vote -1 Vote +1 jim on 2017-12-23 at 22:17:39 said: Autovivification saves you much effort, thought, and coding, because most of the time the perl interpreter correctly divines your intention, and does a pile of stuff for you, without you needing to think about it.

      And then it turns around and bites you because it does things for you that you did not intend or expect.

      The larger the program, and the longer you are keeping the program around, the more of a problem it is. If you are writing a quick one-off script to solve some specific problem, you are the only person who is going to use the script, and you are then going to throw the script away, fine. If you are writing a big program that will be used by lots of people for a long time, autovivification is going to turn around and bite you hard, as are lots of similar perl features where perl makes life easy for you by doing stuff automagically.

      With the result that there are in practice very few big perl programs used by lots of people for a long time, while there are an immense number of very big C and C++ programs used by lots of people for a very long time.

      On esr's argument, we should never be writing big programs in C any more, and yet, we are.

      I have been part of big projects with many engineers using languages with automatic memory management. I noticed I could get something up and running in a fraction of the time that it took in C or C++.

      And yet, somehow, strangely, the projects as a whole never got successfully completed. We found ourselves fighting weird shit done by the vast pile of run time software that was invisibly under the hood automatically doing stuff for us. We would be fighting mysterious and arcane installation and integration issues.

      This, my personal experience, is the exact opposite of the outcome claimed by esr.

      Well, that was perl, Microsoft Visual Basic, and PHP. Maybe Java scales better.

      But perl, Microsoft visual basic, and PHP did not scale. Reply ↓

      • Vote -1 Vote +1 esr on 2017-12-23 at 22:41:15 said: >But perl, Microsoft visual basic, and PHP did not scale.

        Oh, dear Goddess, no wonder. All three of those languages are notorious sinkholes – they're where "maintainability" goes to die a horrible and lingering death.

        Now I understand your fondness for C++ better. It's bad, but those are way worse at any large scale. AMM isn't enough to keep you out of trouble if the rest of the language is a tar-pit. Those three are full of the bones of drowned devops victims.

        Yes, Java scales better. CPython would too from a pure maintainability standpoint, but it's too slow for the kind of deployment you're implying – on the other hand, PyPy might not be, I'm finding the JIT compilation works extremely well and I get runtimes I think are within 2x or 3x of C. Go would probably be da bomb. Reply ↓

        • Vote -1 Vote +1 esr on 2017-12-23 at 23:35:29 said: I wrote:

          >All three of those languages are notorious sinkholes

          You know when you're in deep shit? You're in deep shit when your figure of merit is long-term maintainability and Perl is the least bad alternative.

          *shudder* Reply ↓

        • Vote -1 Vote +1 Jeff Read on 2017-12-24 at 02:56:28 said:

          Oh, dear Goddess, no wonder. All three of those languages are notorious sinkholes – they're where "maintainability" goes to die a horrible and lingering death.

          Can confirm -- Visual Basic (6 and VBA) is a toilet. An absolute cesspool. It's full of little gotchas -- such as non-short-circuiting AND and OR operators (there are no differentiated bitwise/logical operators) and the cryptic Dir() function that exactly mimics the broken semantics of MS-DOS's directory-walking system call -- that betray its origins as an extended version of Microsoft's 8-bit BASIC interpreter (the same one used to write toy programs on TRS-80s and Commodores from a bygone era), and prevent you from writing programs in a way that feels natural and correct if you've been exposed to nearly anything else.

          VB is a language optimized to a particular workflow -- and like many languages so optimized as long as you color within the lines provided by the vendor you're fine, but it's a minefield when you need to step outside those lines (which happens sooner than you may think). And that's the case with just about every all-in-one silver-bullet "solution" I've seen -- Rails and PHP belong in this category too.

          It's no wonder the cuddly new Microsoft under Nadella is considering making Python a first-class extension language for Excel (and perhaps other Office apps as well).

          Visual Basic .NET is something quite different -- a sort of Microsoft-flavored Object Pascal, really. But I don't know of too many shops actually using it; if you're targeting the .NET runtime it makes just as much sense to just use C#.

          As for Perl, it's possible to write large, readable, maintainable code bases in object-oriented Perl. I've seen it done. BUT -- you have to be careful. You have to establish coding standards, and if you come across the stereotype of "typical, looks-like-line-noise Perl code" then you have to flunk it at code review and never let it touch prod. (Do modern developers even know what line noise is, or where it comes from?) You also have to choose your libraries carefully, ensuring they follow a sane semantics that doesn't require weirdness in your code. I'd much rather just do it in Python. Reply ↓

          • Vote -1 Vote +1 TheDividualist on 2017-12-27 at 11:24:59 said: VB.NET is unused in the kind of circles *you know* because these are competitive and status-conscious circles and anything with BASIC in the name is so obviously low-status and just looks so bad on the resume that it makes sense to add that 10-20% more effort and learn C#. C# sounds a whole lot more high status, as it has C in the name, so obviously it looks like being a Real Programmer on the resume.

            What you don't know is what happens outside the circles where professional programmers compete for status and jobs.

            I can report that there are many "IT guys" who are not in these circles, they don't have the intra-programmer social life hence no status concerns, nor do they ever intend to apply for Real Programmer jobs. They are just rural or not first worlder guys who grew up liking computers, and took a generic "IT guy" job at some business in a small town and there they taught themselves Excel VBscript when the need arose to automate some reports, and then VB.NET when it was time to try to build some actual application for in-house use. They like it because it looks less intimidating – it sends out those "not only meant for Real Programmers" vibes.

            I wish we lived in a world where Python would fill that non-intimidating amateur-friendly niche, as it could do that job very well, but we are already on a hell of a path dependence. Seriously, Bill Gates and Joel Spolsky got it seriously right when they made Excel scriptable. The trick is how to provide a smooth transition between non-programming and programming.

            One classic way is that you are a sysadmin, you use the shell, then you automate tasks with shell scripts, then you graduate to Perl.

            One, relatively new way is that you are a web designer, write HTML and CSS, and then slowly you get dragged, kicking and screaming into JavaScript and PHP.

            The genius was that they realized that a spreadsheet is basically modern paper. It is the most basic and universal tool of the office drone. I print all my automatically generated reports into xlsx files, simply because for me it is the "paper" of 2017, you can view it on any Android phone, and unlike PDF and like paper you can interact and work with the figures, like add other numbers to them.

            So it was automating the spreadsheet, the VBScript Excel macro that led the way from not-programming to programming for an immense number of office drones, who are far more numerous than sysadmins and web designers.

            Aaand I think it was precisely because of those microcomputers, like the Commodore. Out of every 100 office drones in 1991 or so, 1 or 2 had entertained themselves in 1987 typing in some BASIC programs published in computer mags. So when they were told Excel is programmable with a form of BASIC they were not too intimidated.

            This created such a giant path dependency that still if you want to sell a language to millions and millions of not-Real Programmers you have to at least make it look somewhat like Basic.

            I think from this angle it was a masterwork of creating and exploiting path dependency. Put BASIC on microcomputers. Have a lot of hobbyists learn it for fun. Create the most universal office tool. Let it be programmable in a form of BASIC – you can just work on the screen, let it generate a macro and then you just have to modify it. Mostly copy-pasting, not real programming. But you slowly pick up some programming idioms. Then the path curves up to VB and then VB.NET.

            To challenge it all, one needs to find an application area as important as number crunching and reporting in an office: Excel is basically electronic paper from this angle and it is hard to come up with something like this. All our nearly computer illiterate salespeople use it. (90% of the use beyond just typing data in a grid is using the auto sum function.) And they don't use much else than that and Word and Outlook and chat apps.

            Anyway suppose such a purpose can be found, then you can make it scriptable in Python and it is also important to be able to record a macro so that people can learn from the generated code. Then maybe that dominance can be challenged. Reply ↓

            • Vote -1 Vote +1 Jeff Read on 2018-01-18 at 12:00:29 said: TIOBE says that while VB.NET saw an uptick in popularity in 2011, it's on its way down now and usage was moribund before then.

              In your attempt to reframe my statements in your usual reference frame of Academic Programmer Bourgeoisie vs. Office Drone Proletariat, you missed my point entirely: VB.NET struggled to get a foothold during the time when VB6 was fresh in developers' minds. It was too different (and too C#-like) to win over VB6 devs, and didn't offer enough value-add beyond C# to win over the people who would've just used C# or Java. Reply ↓

              • Vote -1 Vote +1 jim of jim's blog on 2018-02-10 at 19:10:17 said: Yes, but he has a point.

                App -> macros -> macro script-> interpreted language with automatic memory management.

                So you tend to wind up with a widely used language that was not so much designed, as accreted.

                And, of course, programs written in this language fail to scale. Reply ↓

      • Vote -1 Vote +1 Jeff Read on 2017-12-24 at 02:30:27 said:

        I have been part of big projects with many engineers using languages with automatic memory management. I noticed I could get something up and running in a fraction of the time that it took in C or C++.

        And yet, somehow, strangely, the projects as a whole never got successfully completed. We found ourselves fighting weird shit done by the vast pile of run time software that was invisibly under the hood automatically doing stuff for us. We would be fighting mysterious and arcane installation and integration issues.

        Sounds just like every Ruby on Fails deployment I've ever seen. It's great when you're slapping together Version 0.1 of a product or so I've heard. But I've never joined a Fails team on version 0.1. The ones I saw were already well-established, and between the PFM in Rails itself, and the amount of monkeypatching done to system classes, it's very, very hard to reason about the code you're looking at. From a management level, you're asking for enormous pain trying to onboard new developers into that sort of environment, or even expand the scope of your product with an existing team, without them tripping all over each other.

        There's a reason why Twitter switched from Rails to Scala. Reply ↓

  26. Vote -1 Vote +1 jim on 2017-12-27 at 03:53:42 said: Jeff Read wrote:

    > Hacker culture has, for decades, tried to claim it was inclusive and nonjudgemental and yada yada ... hacker culture has fallen far short. Now that's changing ... has to change.

    Observe that "has to change" in practice means that the social justice warriors take charge.

    Observe that in practice, when the social justice warriors take charge, old bugs don't get fixed, new bugs appear, and projects turn into aimless garbage, if any development occurs at all.

    "has to change" is a power grab, and the people grabbing power are not competent to code, and do not care about code.

    Reflect on the attempted suicide of "Coraline". It is not people like me who keep using the correct pronouns that caused "her" to attempt suicide. It is the people who used "her" to grab power. Reply ↓

    • Vote -1 Vote +1 esr on 2017-12-27 at 14:30:33 said: >"has to change" is a power grab, and the people grabbing power are not competent to code, and do not care about code.

      It's never happened before, and may very well never happen again but this once I completely agree with JAD. The "change" the SJWs actually want – as opposed to what they claim to want – would ruin us. Reply ↓

  27. Vote -1 Vote +1 jim on 2017-12-27 at 19:42:36 said: To get back on topic:

    Modern, mostly memory safe C++, is enforced by:
    https://blogs.msdn.microsoft.com/vcblog/2016/03/31/c-core-guidelines-checkers-preview-of-the-lifetime-safety-checker/
    http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-abstract
    http://clang.llvm.org/extra/clang-tidy/

    $ clang-tidy test.cpp -checks='clang-analyzer-cplusplus*,cppcoreguidelines-*,modernize-*'

    cppcoreguidelines-* and modernize-* will catch most of the issues that esr complains about, in practice usually all of them, though I suppose that as the project gets bigger, some will slip through.

    Remember that gcc and g++ default to C++98, because of the vast base of old-fashioned C++ code which is subtly incompatible with C++11; C++11 onward is the version of C++ that optionally supports memory safety, and hence is necessarily subtly incompatible.

    To turn on C++11

    Place

    cmake_minimum_required(VERSION 3.5)
    # set standard required to ensure that you get
    # the same version of C++ on every platform
    # as some environments default to older dialects
    # of C++ and some do not.
    set(CMAKE_CXX_STANDARD 11)
    set(CMAKE_CXX_STANDARD_REQUIRED ON)

    in your CMakeLists.txt Reply ↓
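
    As a rough illustration of the kind of code those checks are aimed at (Widget and the function names are invented; exactly which checks fire depends on the clang-tidy version and configuration):

        #include <memory>

        struct Widget { int x = 0; };

        void old_style() {
            Widget* w = new Widget();   // raw owning new: the style these checks flag
            w->x = 42;
            delete w;                   // easy to leak on an early return or exception
        }

        void new_style() {
            // std::make_unique needs C++14; with the C++11 setting above,
            // std::unique_ptr<Widget> w(new Widget()); is the equivalent.
            auto w = std::make_unique<Widget>();
            w->x = 42;                  // freed automatically, even on exceptions
        }

        int main() {
            old_style();
            new_style();
        }

    Note that clang-tidy typically also needs the compile flags, either from a compile_commands.json or passed after a trailing "--", e.g. "-- -std=c++11", so that it parses the file with the intended standard.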

  28. Vote -1 Vote +1 h0bby1 on 2018-05-19 at 09:27:02 said: I think I solved a lot of those issues in C with a runtime I made.

    Originally I made this system because I wanted to try programming a microkernel OS, with protected mode, PCI bus, USB, ACPI etc., and I didn't want to get close to the 'event horizon' of memory management in C.

    But I didn't wait for Greenspun's law to kick in, so I first developed a safe memory system as a runtime and replaced the standard C runtime and memory management with it.

    I wanted zero segfaults or memory errors to be possible anywhere in the C code, because debugging bare-metal exceptions, without a debugger, with complex data structures made in C, looks very close to the black hole.

    I didn't want to use C++ because C++ compilers have very unpredictable binary formats and function name decoration, which makes them much harder to interface with at kernel level.

    I also wanted a system as efficient as possible to manage lockless shared access to memory between threads, to avoid the 'exclusive borrow' syndrome of Rust, with global variables shared between threads and lockless algorithms to access them.

    I took inspiration from the algorithms on http://www.1024cores.net/ to develop the basic system, with strong references as the norm, and direct 'bare pointers' only as weak references for fast access to memory in C.

    What I ended up doing is basically a 'strongly typed hashmap DAG' to store the object reference hierarchy, which can be manipulated using 'lambda expressions', so that applications manipulate objects only indirectly, through the DAG abstraction, without having to manipulate bare pointers at all.

    This also makes a mark-and-sweep garbage collector easier to do, especially with an 'event based' system: the main loop can call the garbage collector between two executions of event/message handlers, which has the advantage that it can run at a point where there is no application data on the stack to mark, so it avoids mistaking application data on the stack for a pointer. All references that live only in stack variables get automatically collected when the function exits, much like in C++ actually.

    The garbage collector can still be called by the allocator when there is an OOM error; it will attempt a garbage collection before failing the allocation, but all references on the stack should already have been collected by the time the function returns to the main loop and the garbage collector is run.

    As the whole reference hierarchy is expressed explicitly in the DAG, there shouldn't be any pointer stored in the heap outside of the module's data section, which corresponds to the C global variables used as the 'root elements' of the object hierarchy; these can be traversed to find all the active references to heap data that the code can potentially use. A quick system could be made so that the compiler automatically generates a list of the 'root references' among the global variables, to avoid memory leaks if some global data happens to look like a reference.

    As each thread has its own heap, it also avoids the 'stop the world' syndrome: all threads can garbage collect their own heaps, and there is already a system of lockless synchronisation to access references based on expressions over the DAG, to avoid relying only on 'bare pointers' to manipulate the object hierarchy, which allows dynamic relocation and makes it easier to track active references.

    It's also very useful for tracking memory leaks: since the allocator can record the time of each memory allocation, it's easy to see all the allocations that happened between two points of the program, and dump their whole hierarchy and properties from just the 'bare reference'.

    Each thread contains two heaps, one which is manually managed, mostly used for temporary strings or IO buffers, and another which can be managed either with atomic reference counting or with mark-and-sweep.

    With this system, C programs rarely have to use malloc/free directly, or manipulate pointers to allocated memory directly, other than for temporary buffer allocation (like a dynamic stack, for IO buffers or temporary strings that can easily be managed manually). All memory manipulation can be done via a runtime which internally keeps track of pointer address and size, data type, and optionally a 'finalizer' function that will be called when the pointer is freed.

    Since I started to use this system to make C programs, alongside my own ABI which can dynamically link binaries compiled with Visual Studio and gcc together, I have tested it on many different use cases. I could make a mini multi-threaded window manager/UI, with async IRQ-driven HID driver events, and a system of distributed applications based on blockchain data, which includes a multi-threaded HTTP server that can handle parallel JSON/RPC calls, with an abstraction of the application stack via custom data type definitions / scripts stored on the blockchain; and I have very few memory problems, even though it's 100% in C, multi-threaded, and deals with heavily dynamic data.

    With the mark-and-sweep mode, it becomes quite easy to develop multi-threaded applications with a good level of concurrency, even to build a simple database system driven by a script over async HTTP/JSON/RPC, without having to care about complex memory management.

    Even with the reference-count mode, the manipulation of references is explicit, and it should not be too hard to detect leaks with simple parsers. I already did a test with the ANTLR C parser, with a visitor class to walk the grammar and detect potential errors; since all memory referencing happens through specific types instead of bare pointers, it's not too hard to detect potential memory leak problems with a simple parser. Reply ↓

  29. Vote -1 Vote +1 Arron Grier on 2018-06-07 at 17:37:17 said: Since you've been talking a lot about Go lately, should you not mention it on your Document: How To Become A Hacker?

    Just wondering Reply ↓

    • Vote -1 Vote +1 esr on 2018-06-08 at 05:48:37 said: >Since you've been talking a lot about Go lately, should you not mention it on your Document: How To Become A Hacker?

      Too soon. Go is very interesting but it's not an essential tool yet.

      That might change in a few years. Reply ↓

  30. Vote -1 Vote +1 Yankes on 2018-12-18 at 19:20:46 said: I have one question: do you even need global AMM? Take one element of the graph: when will/should it be released in your reposurgeon? Overall I think the answer is never, because it usually links to other elements of the graph. Do you check how many objects are created and released during operations? I do not mean temporary strings, but the objects representing the main working set.

    Depending on the answer: if you load a graph element and it stays in memory indefinitely, then this could easily be converted to C/C++ by simply never calling `free` for graph elements (and all the problems with memory management go out the window).
    If they should be released early, then when should that happen? Do you have code in reposurgeon that purges objects once they are no longer needed? The mere reachability of an object does not mean it is needed; many times it is quite the opposite.

    I am now working on a C# application that had a similar bungle, and the previous developers' "solution" was to restart the whole application instead of fixing the lifetime problems. The correct solution was C++-like code: I create an object, do the work, and purge it explicitly. With this, none of the components have memory issues now. Of course the problem there lay in not knowing the tools they used rather than in the complexity of the domain; but did you analyze what is needed and what is not, and for how long? AMM does not solve this.

    btw I am a big fan of the Lisp that lives in C++11, aka templates: a great pure functional language :D Reply ↓

    • Vote -1 Vote +1 esr on 2018-12-18 at 20:57:12 said: >I have one question, do you even need global AMM?

      Oh hell yes. Consider, for example, the demands of loading in and operating on multiple repositories. Reply ↓

      • Vote -1 Vote +1 Yankes on 2018-12-19 at 08:56:36 said: If I understood this correctly, the situation looks like this:
        I have a process that has loaded repos A, B and C and is actively working on each one.
        Now, because of some demand, we need to load repo D.
        After we are done we go back to A, B and C.
        Now the question is: should D's data be purged?
        If there are memory connections from the previous repos then it will stay in memory; if not, AMM will remove all of its data from memory.
        If this is a complex graph, then once you have access to any element you can crawl to any other element of the graph (this is a simplification but probably a safe assumption).
        The first case (there is a connection) is equivalent to never using `free` in C. Of course, if not all of the graph is reachable there will be a partial purge of its memory (let's say 10% will stay), but what happens when you need to load repo D again? The data still available is hidden deep in other graphs and most of the data has been lost to AMM; you need to load everything again, and now repo D's footprint is 110%.

        In case there is no connection between repos A, B, C and repo D, then we can free it entirely.
        This is easily done in C++ (some kind of smart pointer that knows whether it points into the same repo or another one).
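
        One way that idea could look, as a minimal sketch only; Repo and Element are invented names for illustration, not reposurgeon's actual types:

            #include <memory>
            #include <string>
            #include <vector>

            // Hypothetical: each repo owns its elements outright, so unloading
            // repo D frees all of D's graph in one step, no matter how tangled
            // the links inside D are.
            struct Element {
                std::string data;
                std::vector<Element*> edges;   // non-owning links within the same repo
            };

            class Repo {
            public:
                Element* make(std::string data) {
                    owned_.push_back(std::make_unique<Element>());
                    owned_.back()->data = std::move(data);
                    return owned_.back().get();
                }
                // Destroying the Repo releases every Element it created.
            private:
                std::vector<std::unique_ptr<Element>> owned_;
            };

            int main() {
                auto d = std::make_unique<Repo>();
                Element* a = d->make("commit A");
                Element* b = d->make("commit B");
                a->edges.push_back(b);   // internal cycles are fine
                b->edges.push_back(a);
                d.reset();               // "unload repo D": everything freed at once
                // a and b are dangling now; nothing uses them past this point.
            }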

        Is my reasoning correct? Or am I missing something?

        btw there is a BIG difference between C and C++: I can implement things in C++ that I will NEVER be able to implement in C. An example of this is my strongly typed simple script language:
        https://github.com/Yankes/OpenXcom/blob/master/src/Engine/Script.cpp
        I would need to drop functionality/protections to be able to convert this to C (or even C++03).

        Another example of this is https://github.com/fmtlib/fmt from C++ versus `printf` from C.
        Both do exactly the same thing, but the C++ one is many times better and safer.
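
        For a concrete feel of that difference, a minimal sketch (assumes the fmt library from the linked repo is installed; the names and numbers are made up):

            #include <cstdio>
            #include <string>
            #include <fmt/core.h>

            int main() {
                std::string name = "reposurgeon";
                int commits = 12345;

                // C: the format string is not checked against the argument types;
                // forgetting .c_str() here would be undefined behavior.
                std::printf("%s has %d commits\n", name.c_str(), commits);

                // C++ with fmt: arguments are matched to "{}" by type, and
                // user-defined types can be made formattable too.
                fmt::print("{} has {} commits\n", name, commits);
            }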

        This means that if we combine your statement on impossibility with mine, then we have:
        C <<< C++ <<< Go/Python
        but for me personally it is more like:
        C <<< C++ < Go/Python
        rather than yours:
        C/C++ <<< Go/Python Reply ↓

        • Vote -1 Vote +1 esr on 2018-12-19 at 09:08:46 said: >Is my reasoning correct? Or am I missing something?

          Not much. The bigger issue is that it is fucking insane to try anything like this in a language where the core abstractions are leaky. That disqualifies C++. Reply ↓

          • Vote -1 Vote +1 Yankes on 2018-12-19 at 10:24:47 said: I only disagree with the word `insane`. C++ has a lot of problems, like UB, lots of corner cases, leaky abstractions, all the crap inherited from C (and my favorite: 1000-line errors from templates), but it is not insane to deal with its memory problems.

            You can easily create tools that make all these problems bearable, and this is the biggest flaw in C++: many problems are solvable, but not out of the box. C++ is good at creating abstractions:
            https://www.youtube.com/watch?v=sPhpelUfu8Q
            If an abstraction fits your domain then it will not leak much, because it fits the underlying problem well.
            And you can enforce a lot of things that allow you to reason locally about the behavior of a program.

            If creating such a new abstraction is indeed insane, then I think you have problems in Go too, because the only problem AMM solves is the reachability of memory and how long you need it.

            btw the best thing that shows the difference between C++03 and C++11 is `std::vector<std::vector<T>>`: in C++03 this is insanely stupid and in C++11 it is insanely clever, because it has the performance characteristics of `std::vector` (thanks to `std::move`) and no problems with memory management (keep indices stable and use `v.at(i).at(j).x = 5;`, or wrap it in a helper class and use `v[i][j].x` that throws on a wrong index). Reply ↓
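
            A minimal sketch of that in practice (the grid contents are made up):

                #include <iostream>
                #include <vector>

                int main() {
                    // A 3x4 grid as nested vectors.
                    std::vector<std::vector<int>> v(3, std::vector<int>(4, 0));

                    // C++11: growing the outer vector moves the inner vectors
                    // (cheap pointer swaps) rather than copying every element
                    // as C++03 would have.
                    v.push_back(std::vector<int>(4, 7));

                    v.at(1).at(2) = 5;   // bounds-checked; throws std::out_of_range on a bad index
                    // v[1][2] = 5;      // unchecked; a wrapper class could add its own check

                    std::cout << v.at(1).at(2) << "\n";
                }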


[Nov 04, 2019] Go (programming language) - Wikipedia

Nov 04, 2019 | en.wikipedia.org

... ... ...

The designers were primarily motivated by their shared dislike of C++ . [26] [27] [28]

... ... ...

Omissions [ edit ]

Go deliberately omits certain features common in other languages, including (implementation) inheritance, generic programming, assertions,[e] pointer arithmetic,[d] implicit type conversions, untagged unions,[f] and tagged unions.[g] The designers added only those facilities that all three agreed on.[95]

Of the omitted language features, the designers explicitly argue against assertions and pointer arithmetic, while defending the choice to omit type inheritance as giving a more useful language, encouraging instead the use of interfaces to achieve dynamic dispatch [h] and composition to reuse code. Composition and delegation are in fact largely automated by struct embedding; according to researchers Schmager et al. , this feature "has many of the drawbacks of inheritance: it affects the public interface of objects, it is not fine-grained (i.e, no method-level control over embedding), methods of embedded objects cannot be hidden, and it is static", making it "not obvious" whether programmers will overuse it to the extent that programmers in other languages are reputed to overuse inheritance. [61]

The designers express an openness to generic programming and note that built-in functions are in fact type-generic, but these are treated as special cases; Pike calls this a weakness that may at some point be changed. [53] The Google team built at least one compiler for an experimental Go dialect with generics, but did not release it. [96] They are also open to standardizing ways to apply code generation. [97]

Initially omitted, the exception-like panic/recover mechanism was eventually added, which the Go authors advise using for unrecoverable errors such as those that should halt an entire program or server request, or as a shortcut to propagate errors up the stack within a package (but not across package boundaries; there, error returns are the standard API).[98]

[Oct 29, 2019] Blame the Policies, Not the Robots

Oct 29, 2019 | economistsview.typepad.com

anne , October 26, 2019 at 11:59 AM

http://cepr.net/publications/op-eds-columns/blame-the-policies-not-the-robots

October 23, 2019

Blame the Policies, Not the Robots
By Jared Bernstein and Dean Baker - Washington Post

The claim that automation is responsible for massive job losses has been made in almost every one of the Democratic debates. In the last debate, technology entrepreneur Andrew Yang told of automation closing stores on Main Street and of self-driving trucks that would shortly displace "3.5 million truckers or the 7 million Americans who work in truck stops, motels, and diners" that serve them. Rep. Tulsi Gabbard (Hawaii) suggested that the "automation revolution" was at "the heart of the fear that is well-founded."

When Sen. Elizabeth Warren (Mass.) argued that trade was a bigger culprit than automation, the fact-checker at the Associated Press claimed she was "off" and that "economists mostly blame those job losses on automation and robots, not trade deals."

In fact, such claims about the impact of automation are seriously at odds with the standard data that we economists rely on in our work. And because the data so clearly contradict the narrative, the automation view misrepresents our actual current challenges and distracts from effective solutions.

Output-per-hour, or productivity, is one of those key data points. If a firm applies a technology that increases its output without adding additional workers, its productivity goes up, making it a critical diagnostic in this space.

Contrary to the claim that automation has led to massive job displacement, data from the Bureau of Labor Statistics (BLS) show that productivity is growing at a historically slow pace. Since 2005, it has been increasing at just over a 1 percent annual rate. That compares with a rate of almost 3 percent annually in the decade from 1995 to 2005.

This productivity slowdown has occurred across advanced economies. If the robots are hiding from the people compiling the productivity data at BLS, they are also managing to hide from the statistical agencies in other countries.

Furthermore, the idea that jobs are disappearing is directly contradicted by the fact that we have the lowest unemployment rate in 50 years. The recovery that began in June 2009 is the longest on record. To be clear, many of those jobs are of poor quality, and there are people and places that have been left behind, often where factories have closed. But this, as Warren correctly claimed, was more about trade than technology.

Consider, for example, the "China shock" of the 2000s, when sharply rising imports from countries with much lower-paid labor than ours drove up the U.S. trade deficit by 2.4 percentage points of GDP (almost $520 billion in today's economy). From 2000 to 2007 (before the Great Recession), the country lost 3.4 million manufacturing jobs, or 20 percent of the total.

Addressing that loss, Susan Houseman, an economist who has done exhaustive, evidence-based analysis debunking the automation explanation, argues that "intuitively and quite simply, there doesn't seem to have been a technology shock that could have caused a 20 to 30 percent decline in manufacturing employment in the space of a decade." What really happened in those years was that policymakers sat by while millions of U.S. factory workers and their communities were exposed to global competition with no plan for transition or adjustment to the shock, decimating parts of Ohio, Michigan and Pennsylvania. That was the fault of the policymakers, not the robots.

Before the China shock, from 1970 to 2000, the number (not the share) of manufacturing jobs held remarkably steady at around 17 million. Conversely, since 2010 and post-China shock, the trade deficit has stabilized and manufacturing has been adding jobs at a modest pace. (Most recently, the trade war has significantly dented the sector and worsened the trade deficit.) Over these periods, productivity, automation and robotics all grew apace.

In other words, automation isn't the problem. We need to look elsewhere to craft a progressive jobs agenda that focuses on the real needs of working people.

First and foremost, the low unemployment rate -- which wouldn't prevail if the automation story were true -- is giving workers at the middle and the bottom a bit more of the bargaining power they require to achieve real wage gains. The median weekly wage has risen at an annual average rate, after adjusting for inflation, of 1.5 percent over the past four years. For workers at the bottom end of the wage ladder (the 10th percentile), it has risen 2.8 percent annually, boosted also by minimum wage increases in many states and cities.

To be clear, these are not outsize wage gains, and they certainly are not sufficient to reverse four decades of wage stagnation and rising inequality. But they are evidence that current technologies are not preventing us from running hotter-for-longer labor markets with the capacity to generate more broadly shared prosperity.

National minimum wage hikes will further boost incomes at the bottom. Stronger labor unions will help ensure that workers get a fairer share of productivity gains. Still, many toiling in low-wage jobs, even with recent gains, will still be hard-pressed to afford child care, health care, college tuition and adequate housing without significant government subsidies.

Contrary to those hawking the automation story, faster productivity growth -- by boosting growth and pretax national income -- would make it easier to meet these challenges. The problem isn't and never was automation. Working with better technology to produce more efficiently, not to mention more sustainably, is something we should obviously welcome.

The thing to fear isn't productivity growth. It's false narratives and bad economic policy.

Paine -> anne... , October 27, 2019 at 06:54 AM
The domestic manufacturing sector and employment both shrank because of net off-shoring of formerly domestic production

Simple fact


The net job losses are not evenly distributed. Nor are the jobs lost overseas primarily low-wage jobs

Okay so we need special federal actions in areas with high concentrations of off-shoring induced job losses

But more easily we can simply raise service sector wages by heating up demand

Caution

Two sectors need controls however: health and housing. Otherwise wage gains will be drained by rent-sucking operations in these two sectors.

Mr. Bill -> Paine... , October 28, 2019 at 02:21 PM
It is easy to spot the ignorance of those that have enough. Comfort reprises a certain arrogance.

The aura of deservedness is palpable. There are those here that would be excommunicated by society when the troubles come to their town.

[Oct 27, 2019] by Michael Olenick

Notable quotes:
"... "Too much automation is really all about narrowing the choices in your life and making it cheaper instead of enabling a richer lifestyle." Many times the only way to automate the creation of a product is to change it to fit the machine. ..."
"... You've gotta' get out of Paris: great French bread remains awesome. I live here. I've lived here for over half a decade and know many elderly French. The bread, from the right bakeries, remains great. ..."
"... I agree with others here who distinguish between labor saving automation and labor eliminating automation, but I don't think the former per se is the problem as much as the gradual shift toward the mentality and "rightness" of mass production and globalization. ..."
"... When I started doing robotics, I developed a working definition of a robot as: (a.) Senses its environment; (b.) Has goals and goal-seeking logic; (c.) Has means to affect environment in order to get goal and reality (the environment) to converge. Under that definition, Amazon's Alexa and your household air conditioning and heating system both qualify as "robot". ..."
"... The addition of a computer (with a program, or even downloadable-on-the-fly programs) to a static machine, e.g. today's computer-controlled-manufacturing machines (lathes, milling, welding, plasma cutters, etc.) makes a massive change in utility. It's almost the same physically, but ever so much more flexible, useful, and more profitable to own/operate. ..."
"... And if you add massive databases, internet connectivity, the latest machine-learning, language and image processing and some nefarious intent, then you get into trouble. ..."
Oct 25, 2019 | www.nakedcapitalism.com

By Michael Olenick, a research fellow at INSEAD who writes regularly at Olen on Economics and Innowiki . Originally published at Innowiki

Part I , "Automation Armageddon: a Legitimate Worry?" reviewed the history of automation, focused on projections of gloom-and-doom.

"It smells like death," is how a friend of mine described a nearby chain grocery store. He tends to exaggerate and visiting France admittedly brings about strong feelings of passion. Anyway, the only reason we go there is for things like foil or plastic bags that aren't available at any of the smaller stores.

Before getting to why that matters – and, yes, it does matter – first a tasty digression.

I live in a French village. To the French, high-quality food is a vital component to good life.

My daughter counts eight independent bakeries on the short drive between home and school. Most are owned by a couple of people. Counting high-quality bakeries embedded in grocery stores would add a few more. Going out of our way more than a minute or two would more than double that number.

Typical Bakery: Bread is cooked at least twice daily

Despite so many, the bakeries seem to do well. In the half-decade I've been here, three new ones opened and none of the old ones closed. They all seem to be busy. Bakeries are normally owner operated. The busiest might employ a few people but many are mom-and-pop operations with him baking and her selling. To remain economically viable, they rely on a dance of people and robots. Flour arrives in sacks with high-quality grains milled by machines. People measure ingredients, with each bakery using slightly different recipes. A human-fed robot mixes and kneads the ingredients into the dough. Some kind of machine churns the lumps of dough into baguettes.

Video: https://www.youtube.com/embed/O22jWIjcdaY?feature=oembed
Baguette Forming Machine: This would make a good animated GIF

The baker places the formed baguettes onto baking trays then puts them in the oven. Big ovens maintain a steady temperature while timers keep track of how long various loaves of bread have been baking. Despite the sensors, bakers make the final decision when to pull the loaves out, with some preferring a bien cuit more cooked flavor and others a softer crust. Finally, a person uses a robot in the form of a cash register to ring up transactions and processes payments, either by cash or card.

Nobody -- not the owners, workers, or customers -- think twice about any of this. I doubt most people realize how much automation technology is involved or even that much of the equipment is automation tech. There would be no improvement in quality mixing and kneading the dough by hand. There would, however, be an enormous increase in cost. The baguette forming machines churn out exactly what a person would do by hand, only faster and at a far lower cost. We take the thermostatically controlled ovens for granted. However, for anybody who has tried to cook over wood controlling heat via air and fuel, thermostatically controlled ovens are clearly automation technology.

Is the cash register really a robot? James Ritty, who invented it, didn't think so; he sold the patent for cheap. The person who bought the patent built it into NCR, a seminal company laying the groundwork of the modern computer revolution.

Would these bakeries be financially viable if forced to do all this by hand? Probably not. They'd be forced to produce less output at higher cost; many would likely fail. Bread would cost more leaving less money for other purchases. Fewer jobs, less consumer spending power, and hungry bellies to boot; that doesn't sound like good public policy.

Getting back to the grocery store my friend thinks smells like death; just a few weeks ago they started using robots in a new and, to many, not especially welcome way.

As any tourist knows, most stores in France are closed on Sunday afternoons, including and especially grocery stores. That's part of French labor law: grocery stores must close Sunday afternoons. Except that the chain grocery store near me announced they are opening Sunday afternoon. How? Robots, and sleight-of-hand. Grocers may not work on Sunday afternoons but guards are allowed.

Not my store but similar.

Dimanche means Sunday. Après-midi means afternoon.

I stopped in to get a feel for how the system works. Instead of grocers, the store uses security guards and self-checkout kiosks.

When you step inside, a guard reminds you there are no grocers. Nobody restocks the shelves but, presumably for half a day, it doesn't matter. On Sunday afternoons, in place of a bored-looking person wearing a store uniform and overseeing the robo-checkout kiosks sits a bored-looking person wearing a security guard uniform doing the same. There are no human-assisted checkout lanes open but this store seldom has more than one operating anyway.

I have no idea how long the French government will allow this loophole to continue. I thought it might attract yellow vest protestors or at least a cranky store worker – maybe a few locals annoyed at an ancient tradition being buried – but there was nobody complaining. There were hardly any customers, either.

The use of robots to sidestep labor law and replace people, in one of the most labor-friendly countries in the world, produced a big yawn.

Paul Krugman and Matt Stoller argue convincingly that it's the bosses, not the robots, that crush the spirits and souls of workers. Krugman calls it "automation obsession" and Stoller points out that predictions of robo-Armageddon have existed for decades. The well over 100 examples I have of major automation tech ultimately led to more jobs, not fewer.

Jerry Yang envisions some type of forthcoming automation-induced dystopia. Zuck and the tech-bros argue for a forthcoming Star Trek style robo-utopia.

My guess is we're heading for something in between: a place where artisanal bakers use locally grown wheat, made affordable thanks to machine milling; where small family-owned bakeries rely on automation tech to do the undifferentiated grunt work. The robots in my future are more likely to look like cash registers than like Terminators.

It's an admittedly blander vision of the future: neither utopian nor dystopian, at least not one fueled by automation tech. However, it's a vision supported by the historical adoption of automation technology.


The Rev Kev , October 25, 2019 at 10:46 am

I have no real disagreement with a lot of automation. But how it is done is another matter altogether. Using the main example in this article: Australia is probably like a lot of countries in that most of the loaves of bread you get in a supermarket are typically bland, come in plastic bags, and are cheap. You only really know what you grew up with.

When I first went to Germany I stepped into a Bäckerei and it was a revelation. There were dozens of different sorts and types of bread on display, with flavours that I had never experienced. I didn't know whether to order a loaf or to go for my camera instead. And that is the point. Too much automation is really all about narrowing the choices in your life and making it cheaper, instead of enabling a richer lifestyle.

We are all familiar with crapification and I contend that it is automation that enables this to become a thing.

WobblyTelomeres , October 25, 2019 at 11:08 am

"I contend that it is automation that enables this to become a thing."

As does electricity. And math. Automation doesn't necessarily narrow choices; economies of scale and the profit motive do. What I find annoying (as in pollyannish) is the avoidance of the issue of those that cannot operate the machinery, those that cannot open their own store, etc.

I gave a guest lecture to a roomful of young roboticists (largely undergrads, some first-year grad engineering students) a decade ago. After discussing the economics/finance of creating and selling a burgerbot, I asked about those who would be unemployed by the contraption. One student immediately snorted out, "Not my problem!" Another replied, "But what if they cannot do anything else?". Again, "Not my problem!". And that is San Jose in a nutshell.

washparkhorn , October 26, 2019 at 3:25 am

A capitalist market that fails to account for the cost of a product's negative externalities is underpricing (and incentivizing more of the same). It's cheating (or sanctioned cheating due to ignorance and corruption). It is not capitalism (unless that is the only reasonable outcome of capitalism).

Tom Pfotzer , October 25, 2019 at 11:33 am

The author's vision of "appropriate tech" – local enterprise supported by relatively simple automation – is also my answer to the vexing question of "how do I cope with automation?"

In a recent posting here at NC, I said the way to cope with automation of your job(s) is to get good at automation. My remark caused a howl of outrage: "most people can't do automation! Your solution is unrealistic for the masses. Dismissed with prejudice!".

Thank you for that outrage, as it provides a wonderful foil for this article. The article shows a small business which learned to re-design its business processes and acquire machines that reduce costs. It's a good example of someone that "got good at automation".

Instead of being the victim of automation, these people adapted. They bought automation, took control of it, and operated it for their own benefit.

Key point: this entrepreneur is now harvesting the benefits of automation, rather than being systematically marginalized by it. Another noteworthy aspect of this article is that local-scale "appropriate" automation serves to reduce the scale advantages of the big players. The availability of small-scale machines that enable efficiencies comparable to the big guys' is a huge problem. Most of the machines made for small-scale operators like this are manufactured in China, India, Iran, Russia, or Italy – places where industrial consolidation (scale) hasn't squashed the little players yet.

Suppose you're a grain farmer, but only have 50 acres (not 100s or 1000s like the big guys). You need a combine – a big machine that cuts the grain stalks and separates grain from stalk (threshing). This cut/thresh function is terribly labor-intensive, so the combine is a must-have. Right now, there is no small-size ($50K or less) combine manufactured in the U.S., to my knowledge. They cost upwards of $200K, and sometimes a great deal more. The 50-acre farmer can't afford $200K (plus maintenance costs), and therefore can't farm at that scale, and has to sell out.

So, the design, production, and sale of these sorts of small-scale, high-productivity machines is what is needed to re-distribute production (organically, not by revolution, thanks) back into the hands of the middle class.

If we make it possible for the middle class to capture the benefits of automation, we 1) solve the social dilemma of concentration of wealth, 2) address the declining standard of living of the middle and lower classes, and 3) get a chance to re-design an economy (the business processes and collaborating suppliers that deliver end-user products and services) that actually fixes the planet as we make our living, instead of degrading it at every ka-ching of the cash register.

Point 3 is the most important, and this isn't the time or place to expand on that, but I hope others might consider it a bit.

marcel , October 25, 2019 at 12:07 pm

Regarding the combine, I have seen them operating on small-sized lands for the last 50 years. Without exception, you have one guy (sometimes a farmer, often not) who has this kind of harvester, works 24h a day for a week or something, harvesting for all farmers in the neighborhood, and then moves to the next crop (eg corn). Wintertime is used for maintenance. So that one person/farm/company specializes in these services, and everybody gets along well.

Tom Pfotzer , October 25, 2019 at 2:49 pm

Marcel – great solution to the problem. Choosing the right supplier (using combine service instead of buying a dedicated combine) is a great skill to develop. On the flip side, the fellow that provides that combine service probably makes a decent side-income from it. Choosing the right service to provide is another good skill to develop.

Jesper , October 25, 2019 at 5:59 pm

One counter-argument might be that, while hoping for the best, it might be prudent to prepare for the worst. Currently, and for a couple of decades now, the efficiency gains have been left to the market to allocate. Some might argue that, for the common good, the government might need to be more active.

What would happen if efficiency gains continued to be distributed according to the market? According to the relative bargaining power of the market participants where one side, the public good as represented by government, is asking for and therefore getting almost nothing?

As is, I do believe that people who are concerned do have reason to be concerned.

Kent , October 25, 2019 at 11:33 am

"Too much automation is really all about narrowing the choices in your life and making it cheaper instead of enabling a richer lifestyle." Many times the only way to automate the creation of a product is to change it to fit the machine.

Brooklin Bridge , October 25, 2019 at 12:02 pm

Some people make a living saying these sorts of things about automation. The quality of French bread is simply not what it used to be (or at least good bread is harder to find), though that is a complicated subject having to do with flour and wheat as well as human preparation and many other things; and the cost (in terms of purchasing power) has, in my opinion, gone up, not down, since the 70's.

As some might say, "It's complicated," but automation does (not sure about "has to") come with trade-offs in quality, while price remains closer to what an ever more sophisticated set of algorithms says can be "gotten away with."

This may be totally different for cars or other things, but the author chose French bread, and the only overall improvement, or even lack of change, in quality there has come, if at all, from the dark art of marketing magicians.

Brooklin Bridge , October 25, 2019 at 12:11 pm

/ from the dark art of marketing magicians, AND people's innate ability to accept/be unaware of decreases in quality/quantity if they are implemented over time in small enough steps.

Michael , October 25, 2019 at 1:47 pm

You've gotta' get out of Paris: great French bread remains awesome. I live here. I've lived here for over half a decade and know many elderly French. The bread, from the right bakeries, remains great. But you're unlikely to find it where tourists might wander: the rent is too high.

As a general rule, if the bakers have a large staff or speak English you're probably in the wrong bakery. Except for one of my favorites where she learned her English watching every episode of Friends multiple times and likes to practice with me, though that's more of a fluke.

Brooklin Bridge , October 25, 2019 at 3:11 pm

It's a difficult subject to argue. I suspect that comparatively speaking, French bread remains good and there are still bakers who make high quality bread (given what they have to work with). My experience when talking to family in France (not Paris) is that indeed, they are in general quite happy with the quality of bread and each seems to know a bakery where they can still get that "je ne sais quoi" that makes it so special.

I, on the other hand, who have only been there once every few years since the 70's – kind of like seeing once every so many frames of the movie – see a lowering of quality in general in France, and of flour and bread in particular, though I'll grant it's quite gradual.

The French love food and were among the best farmers in the world in the 1930s, and they have made a point of resisting radical change at any given point in time when it comes to the things they love (wine, cheese, bread, etc.), so they have a long way to fall, and are doing so slowly; but gradually, it's happening.

I agree with others here who distinguish between labor saving automation and labor eliminating automation, but I don't think the former per se is the problem as much as the gradual shift toward the mentality and "rightness" of mass production and globalization.

Oregoncharles , October 26, 2019 at 12:58 am

I was exposed to that conflict, in a small way, because my father was an investment manager. He told me they were considering investing in a smallish Swiss pasta (IIRC) factory. He was frustrated with the negotiations; the owners just weren't interested in getting a lot bigger – which would be the point of the investment, from the investors' POV.

I thought, but I don't think I said very articulately, that of course, they thought of themselves as craftspeople – making people's food, after all. It was a fundamental culture clash. All that was 50 years ago; looks like the European attitude has been receding.

Incidentally, this is a possible approach to a better, more sustainable economy: substitute craft for capital and resources, on as large a scale as possible. More value with less consumption. But how we get there from here is another question.

Carolinian , October 25, 2019 at 12:42 pm

I have been touring around by car and was surprised to see that all Oregon gas stations are full serve with no self serve allowed (I vaguely remember Oregon Charles talking about this). It applies to every station including the ones with a couple of dozen pumps like we see back east. I have since been told that this system has been in place for years.

It's hard to see how this is more efficient – in fact it seems just the opposite, as there are fewer attendants than waiting customers, and at a couple of stations the action seemed chaotic. Gas is also more expensive, although nothing could be more expensive than California gas (over $5/gal occasionally spotted). It's also unclear how this system was preserved – perhaps out of fire-safety concerns – but it seems unlikely that any other state will want to imitate it, just as those bakeries aren't going to bring back their wood-fired ovens.

JohnnyGL , October 25, 2019 at 1:40 pm

I think NJ still requires all gas stations to be full-serve. Most in MA have only self-serve, but there are a few towns with by-laws requiring full-serve.

Brooklin Bridge , October 25, 2019 at 2:16 pm

I'm not sure just how much I should be jumping up and down about our ability to get more gasoline into our cars quicker. But convenient for sure.

The Observer , October 25, 2019 at 4:33 pm

In the 1980s when self-serve gas started being implemented, NIOSH scientists said oh no, now 'everyone' will be increasingly exposed to benzene while filling up. Benzene is close to various radioactive elements in causing damage and cancer.

Oregoncharles , October 26, 2019 at 1:06 am

It was preserved by a series of referenda; turns out it's a 3rd rail here, like the sales tax. The motive was explicitly to preserve entry-level jobs while allowing drivers to keep the gas off their hands. And we like the more personal quality.

Also, we go to states that allow self-serve and observe that the gas isn't any cheaper. It's mainly the tax that sets the price, and location.

There are several bakeries in this area with wood-fired ovens. They charge a premium, of course. One we love is way out in the country, in Falls City. It's a reason to go there.

shinola , October 25, 2019 at 12:47 pm

Unless I misunderstood, the author of this article seems to equate mechanization/automation of nearly any type with robotics.

"Is the cash register really a robot? James Ritty, who invented it, didn't think so;" – Nor do I.

To me, "robot" implies a machine with a high degree of autonomy. Would the author consider an old fashioned manual typewriter or adding machine (remember those?) to be robotic? How about when those machines became electrified?

I think the author uses the term "robot" too broadly.

Dan , October 25, 2019 at 1:05 pm

Agree. Those are just electrified extensions of the lever or sand timer.
It's the "thinking" that is A.I.

Refuse to allow A.I. to destroy jobs and cheapen our standard of living.
Never interact with a robo call, just hang up.
Never log into a website when there is a human alternative.
Refuse to do business with companies that have no human alternative.
Never join a medical "portal" of any kind, demand to talk to medical personnel.
etc.

Sabotage A.I. whenever possible.
The Ten Commandments do not apply to corporations.

https://medium.com/@TerranceT/im-never-going-to-stop-stealing-from-the-self-checkout-22cbfff9919b

marieann , October 25, 2019 at 1:49 pm

I don't use self checkouts but sometimes I will allow a cashier to use one for me while I am supposedly learning how to work the machine.

Sancho Panza , October 25, 2019 at 1:52 pm

During a Chicago hotel stay my wife ordered an extra bath towel from the front desk. About 5 minutes later, a mini version of R2D2 rolled up to her door with towel in tow. It was really cute and interacted with her in a human-like way. Cute but really scary in the way that you indicate in your comment. It seems many low wage activities would be in immediate risk of replacement. But sabotage? I would never encourage sabotage; in fact, when it comes to true robots like this one, I would highly discourage any of the following: yanking its recharge cord in the middle of the night, zapping it with a car battery, lift its payload and replace with something else, give it a hip high-five to help it calibrate its balance, and of course, the good old kick'm in the bolts.

Sancho Panza , October 26, 2019 at 9:53 am

Here's a clip of that robot, Leo, bringing bottled water and a bath towel to my wife.

Sancho Panza , October 26, 2019 at 10:49 pm

https://www.youtube.com/watch?v=TXygNznHSs0

Barbara , October 26, 2019 at 11:48 am

Stop and Shop supermarket chain now has robots in the store. According to Stop and Shop they are oh so innocent! and friendly! why don't you just go up and say hello?
All the robots do, they say, is go around scanning the shelves, looking for shelf price tags that don't match the current price and merchandise in the wrong place (that cereal box you picked up in the breakfast aisle and decided, in the laundry aisle, that you didn't want, so you put it on a shelf with the detergent). All the robots do is notify management of wrong prices and misplaced merchandise.

The damn robot is cute – perky, lit-up eyes and a smile – so why does it remind me of the Stepford Wives?

S&S is the closest supermarket to me, so I go there when I need something in a hurry, but the bulk of my shopping is now done elsewhere. Thank goodness there are some stores that are not doing this: the area Shoprites and FoodTowns don't – and they are all run by family businesses. Shoprite succeeds by having a large assortment of brands in every grocery category and keeping prices really competitive. FoodTown operates at a higher price and quality level, with real butcher and seafood counters, prepackaged assortments in open cases, and a cooked-food counter of the most excellent quality with the store's cooks behind the counter to serve you and answer questions. You never have to come home from work tired and hungry, knowing that you just don't want to cook, and settle for a power bar.

Danny , October 26, 2019 at 9:23 pm

OK, so how do you sabotage the cute SS robot? I suggest a laser pointer to blind its sensors. Or, maybe smear some peanut butter on them. What happens when it runs over your foot that just happens to get in its way? Contingency tort lawsuit?

The more automation you see in a business that you still patronize, the bigger the discount you should ask for. The Ten Commandments do not apply to corporations or job destroyers.

Barbara , October 26, 2019 at 11:30 pm

My husband recently retired from teaching. I'll have to see if he still has his laser pointer or if he had to give it back :-)

Carolinian , October 25, 2019 at 1:11 pm

A robot is a machine -- especially one programmable by a computer -- capable of carrying out a complex series of actions automatically. Robots can be guided by an external control device or the control may be embedded

https://en.wikipedia.org/wiki/Robot

Those early cash registers were perhaps an early form of analog computer. But Wiki reminds us that the origin of the term is a work of fiction.

The term comes from a Czech word, robota, meaning "forced labor"; the word 'robot' was first used to denote a fictional humanoid in a 1920 play, R.U.R. (Rossumovi Univerzální Roboti – Rossum's Universal Robots), by the Czech writer Karel Čapek

Michael , October 25, 2019 at 1:42 pm

The adding machine – yes, definitely.

The typewriter – maybe. Before the typewriter printed type had to be typeset by hand, a laborious and expensive process.

The French call food processors "robots." Industrial robots have no autonomy, much less a high degree of it. They repeatedly perform a task they're programmed to do, but they're still robots.

The idea that a robot has some type of autonomy, that it thinks, makes for good science fiction. But by that definition robots don't exist because there are no thinking machines with a high level of autonomy outside the sci-fi genre. Except that they do exist and they're everywhere.

Math is Your Friend , October 25, 2019 at 3:15 pm

"The typewriter – maybe. Before the typewriter printed type had to be typeset by hand, a laborious and expensive process."

This is really not the appropriate comparison.

The typewriter replaced pen and ink.

Hand assembly of type, a character at a time, was replaced by linotype machines in the late 1800s, which were in turn replaced by phototypesetting in the second half of the 20th century, in turn replaced by computerized typesetting.

Where you go, and if you go, after that, depends on use cases – laser printers, e-books, web pages, videos and probably a few other methods of delivering information.

shinola , October 25, 2019 at 4:26 pm

Perhaps I didn't qualify "autonomous" properly. I didn't mean to imply a 'Rosie the Robot' level of autonomy but the ability of a machine to perform its programmed task without human intervention (other than switching on/off or maintenance & adjustments).

If viewed this way, an adding machine or a typewriter is not a robot, because it requires constant manual input in order to function – if you don't push the keys, nothing happens. A computer printer might be considered robotic because it can be programmed to function somewhat autonomously (as in, print 'x' number of copies of this document).

"Robotics" is a subset of mechanized/automated functions.

Michael , October 26, 2019 at 7:56 am

Adding machines are robots under your definition. You input what you want them to calculate but the actual calculations are done by the machine (which is the point of having the machine in the first place). You're touching the machine a lot more because it needs you to input more to do its thing but the calculation part is automatic. Typewriters, now that I think about it, really aren't: they don't automate anything.

The difference is a machine that extends human abilities with no automation (ex: a shovel) vs a machine that mimics human abilities (ex: a calculator or bread mixing machine).

The next level would be an autonomous machine but, depending on the definition of autonomy, I think those are a long way off. For example, a self-driving car – once perfected – can handle all sorts of different conditions but still can't really think. It comes down to the definition of autonomy, or maybe it's more accurate to say the degree of autonomy.

Brooklin Bridge , October 25, 2019 at 2:30 pm

I think the author confuses automation – in its most general sense – with progress, in a down-home, folksy, "let's be real" sort of way. The two are not always one and the same for all people, and certainly not so for all other forms of life on the planet. At the end, he does at least make a passing gesture toward automation in its more sinister form than that of helping artisans avoid drudgery.

Stephen Gardner , October 25, 2019 at 4:48 pm

When I first got out of grad school I worked at United Technologies Research Center, where I worked in the robotics lab. In general, at least in those days, we made a distinction between robotics and hard automation. A robot is programmable to do multiple tasks, and hard automation is limited to a single task unless retooled. The machines the author is talking about are hard automation. We had ASEA robots that could be programmed to do various things. One of ours drilled, riveted, and sealed the skin on the horizontal stabilators (the wing on the tail of a helicopter that controls pitch) of a Sikorsky Sea Hawk. The same robot, with just a change of the fixture on the end, could be programmed to paint a car or weld a seam on equipment. The drilling and riveting robot was capable of modifying where the rivets were placed (in the robot's frame of reference) based on the location of precisely milled blocks built into the fixture that held the stabilator. There was always some variation, and it was important to precisely place the rivets because the spars were very narrow (weight at the tail is bad because of the lever arm). It was considered state of the art back in the day, but now auto companies have far more sophisticated robotics.

Michael , October 26, 2019 at 8:03 am

By that definition aren't mixers still robots? You can put in a whisk and they'll mix one way. Put in a bread hook, set the right setting, and it will knead dough – just like the helicopter-building robots. Same with cash registers: press one button and they add money, another and they calculate change, a third and they'll do a return, a fourth and they'll print a total. Though, despite the multiple functions, you can't reprogram the old mechanical ones (of course, the newer ones are computers running a program). A baguette-making machine seems like what you're describing as hard automation: it has one and only one function.

Alex Cox , October 25, 2019 at 12:55 pm

The Oregon gas attendant rule is a job-creation scheme. It works well, and very rarely is there an annoying wait.

Gas in Oregon is considerably cheaper than in California.

A few years ago my wife told me I had to go out and get a real job. I realized I was only qualified to do two things: teach, or pump gas.

Thank you Oregon for giving me a choice!

Carolinian , October 25, 2019 at 1:17 pm

Gas in Oregon is considerably cheaper than in California.

And considerably more expensive than in low-tax South Carolina ($2.19/gal a recent example).

Socal Rhino , October 25, 2019 at 1:44 pm

But what happens when the bread machine is connected to the internet, can't function without an active internet connection, and requires an annual subscription to use?

That is the issue to me: however we define the tools, who will own them?

The Rev Kev , October 25, 2019 at 6:53 pm

You know, that is quite a good point. It is not so much the automation that is the threat as the rent-seeking that anything connected to the internet allows to be implemented.

*_* , October 25, 2019 at 2:28 pm

Until 100 petaflops costs less than a typical human worker, total automation isn't going to happen. Developments in AI software can't overcome basic hardware limits.

breadbaker , October 25, 2019 at 2:29 pm

The story about automation not worsening the quality of bread is not exactly true. Bakers had to develop and incorporate a new method called autolyse ( https://www.kingarthurflour.com/blog/2017/09/29/using-the-autolyse-method ) in the mid-20th century to bring back some of the flavor lost with modern baking. There is also a trend of a new generation of bakeries that use natural yeast, hand shaping, and hand kneading to get better flavors and quality bread.

But it is certainly true that much of the automation gives almost as good quality for much lower labor costs.

Tom Pfotzer , October 25, 2019 at 3:05 pm

On the subject of the machine-robot continuum

When I started doing robotics, I developed a working definition of a robot as: (a.) Senses its environment; (b.) Has goals and goal-seeking logic; (c.) Has means to affect environment in order to get goal and reality (the environment) to converge. Under that definition, Amazon's Alexa and your household air conditioning and heating system both qualify as "robot".

How you implement a, b, and c above can have more or less sophistication, depending upon the complexity, variability, etc. of the environment, or the solutions, or the means used to affect the environment.
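To make that concrete, here is a minimal sketch of the sense / goal-seek / act loop described above, with a thermostat standing in as the "robot". This is an illustration only, not from the comment; the names Environment, ThermostatRobot, decide, and applyHeat are invented for the example.

#include <iostream>

// The "environment": a simulated room temperature the robot can sense and affect.
struct Environment {
    double roomTemp = 15.0;          // (a) what gets sensed
    void applyHeat(bool heaterOn) {  // (c) the means to affect the environment
        roomTemp += heaterOn ? 0.5 : -0.1;
    }
};

// The "robot": holds a goal and the goal-seeking logic.
class ThermostatRobot {
public:
    explicit ThermostatRobot(double goalTemp) : goal_(goalTemp) {}
    // (b) goal-seeking logic: heat whenever the sensed value is below the goal.
    bool decide(double sensedTemp) const { return sensedTemp < goal_; }
private:
    double goal_;
};

int main() {
    Environment env;
    ThermostatRobot robot(20.0);                // goal: 20 degrees C
    for (int step = 0; step < 20; ++step) {
        double sensed = env.roomTemp;           // sense
        bool heaterOn = robot.decide(sensed);   // goal-seek
        env.applyHeat(heaterOn);                // act
        std::cout << "step " << step << ": temp=" << sensed
                  << " heater=" << (heaterOn ? "on" : "off") << '\n';
    }
}

Under this definition the sophistication lives entirely in decide() and applyHeat(); swap in a learned model and real actuators and the same loop describes Alexa or an HVAC system.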

A machine, like a typewriter or a lawn-mower engine, has its logic expressed in metal; it's static.

The addition of a computer (with a program, or even downloadable-on-the-fly programs) to a static machine, e.g. today's computer-controlled-manufacturing machines (lathes, milling, welding, plasma cutters, etc.) makes a massive change in utility. It's almost the same physically, but ever so much more flexible, useful, and more profitable to own/operate.

And if you add massive databases, internet connectivity, the latest machine-learning, language and image processing and some nefarious intent, then you get into trouble.

:)

Phacops , October 25, 2019 at 3:08 pm

Sometimes automation is necessary to eliminate the risks of manual processes. There are parenteral (injectable) drugs that cannot be sterilized except by filtration. Most of the work of filling, post-filling processing, and sealing is done using automation, in areas that make surgical suites seem filthy, and people are kept away from these operations.

Manual operations are only undertaken to correct issues with the automation, and the procedures are tested to ensure that they do not introduce contamination, microbial or otherwise. Because even one non-sterile unit is a failure and testing is a destructive process, a full lot of product obviously cannot be tested to prove that all units are sterile. Instead, the automated process and the manual interventions are tested periodically, and it is expensive and time-consuming to test to a level of confidence that there is far less than a one-in-a-million chance of any unit in a lot being non-sterile.

In that respect, automation and the skills necessary to interface with it are fundamental to the safety of drugs frequently used on already compromised patients.

Brooklin Bridge , October 25, 2019 at 3:27 pm

Agree. Good example. Digital technology and miniaturization seem particularly well suited to many aspects of the medical world. But I doubt they will eliminate the doctor or the nurse very soon. Insurance companies, on the other hand ...

lyman alpha blob , October 25, 2019 at 8:34 pm

Bill Burr has some thoughts on self checkouts and the potential bonanza for shoppers – https://www.youtube.com/watch?v=FxINJzqzn4w

TG , October 26, 2019 at 11:51 am

"There would be no improvement in quality mixing and kneading the dough by hand. There would, however, be an enormous increase in cost." WRONG! If you had an unlimited supply of 50-cents-an-hour disposable labor, mixing and kneading the dough by hand would be cheaper. It is only because labor is expensive in France that the machine saves money.

In Japan there is a lot of automation, and wages and living standards are high. In Bangladesh there is very little automation, and wages and living standards are very low.

Are we done with the 'automation is destroying jobs' meme yet? Excessive population growth is the problem, not robots. And the root cause of excessive population growth is the corporate-sponsored virtual taboo of talking about it seriously.

[Oct 15, 2019] Learning doxygen for source code documentation by Arpan Sen

Jul 29, 2008 | developer.ibm.com
Maintaining and adding new features to legacy systems developed using C/C++ is a daunting task. There are several facets to the problem -- understanding the existing class hierarchy and global variables, the different user-defined types, and function call graph analysis, to name a few. This article discusses several features of doxygen, with examples in the context of projects using C/C++.

However, doxygen is flexible enough to be used for software projects developed using the Python, Java, PHP, and other languages, as well. The primary motivation of this article is to help extract information from C/C++ sources, but it also briefly describes how to document code using doxygen-defined tags.

Installing doxygen

You have two choices for acquiring doxygen. You can download it as a pre-compiled executable file, or you can check out sources from the SVN repository and build it. Listing 1 shows the latter process.

Listing 1. Install and build doxygen sources
                
bash-2.05$ svn co https://doxygen.svn.sourceforge.net/svnroot/doxygen/trunk doxygen-svn

bash-2.05$ cd doxygen-svn
bash-2.05$ ./configure --prefix=/home/user1/bin
bash-2.05$ make

bash-2.05$ make install
Note that the configure script is tailored to dump the compiled sources in /home/user1/bin (add this directory to the PATH variable after the build), as not every UNIX® user has permission to write to the /usr folder. Also, you need the svn utility to check out sources.

Generating documentation using doxygen

To use doxygen to generate documentation of the sources, you perform three steps.

Generate the configuration file

At a shell prompt, type the command doxygen -g. This command generates a text-editable configuration file called Doxyfile in the current directory. You can choose to override this file name, in which case the invocation should be doxygen -g <user-specified file name>, as shown in Listing 2.
Listing 2. Generate the default configuration file
                
bash-2.05b$ doxygen -g
Configuration file 'Doxyfile' created.
Now edit the configuration file and enter
  doxygen Doxyfile
to generate the documentation for your project
bash-2.05b$ ls Doxyfile
Doxyfile
Edit the configuration file

The configuration file is structured as <TAGNAME> = <VALUE>, similar to the Make file format. Here are the most important tags (all of them appear in Listing 3):

<OUTPUT_DIRECTORY>: the directory in which the generated documentation is placed
<INPUT>: the root directory of the sources to be documented
<EXTRACT_ALL>: when set to yes, doxygen documents all entities, even those lacking documentation blocks
<EXTRACT_PRIVATE>: when set to yes, private class members are included
<EXTRACT_STATIC>: when set to yes, static members and file-level statics are included
<FILE_PATTERNS>: the file-name patterns to scan; the common C/C++ extensions are covered by default
<RECURSIVE>: when set to yes, sub-directories of <INPUT> are scanned as well

Listing 3 shows an example of a Doxyfile.
Listing 3. Sample doxyfile with user-provided tag values
                
OUTPUT_DIRECTORY = /home/user1/docs
EXTRACT_ALL = yes
EXTRACT_PRIVATE = yes
EXTRACT_STATIC = yes
INPUT = /home/user1/project/kernel
#Do not add anything here unless you need to. Doxygen already covers all 
#common formats like .c/.cc/.cxx/.c++/.cpp/.inl/.h/.hpp
FILE_PATTERNS = 
RECURSIVE = yes
Run doxygen

Run doxygen at the shell prompt as doxygen Doxyfile (or with whatever file name you've chosen for the configuration file). Doxygen issues several messages before it finally produces the documentation in Hypertext Markup Language (HTML) and LaTeX formats (the default). In the folder that the <OUTPUT_DIRECTORY> tag specifies, two sub-folders named html and latex are created as part of the documentation-generation process. Listing 4 shows a sample doxygen run log.
Listing 4. Sample log output from doxygen
                
Searching for include files...
Searching for example files...
Searching for images...
Searching for dot files...
Searching for files to exclude
Reading input files...
Reading and parsing tag files
Preprocessing /home/user1/project/kernel/kernel.h
 
Read 12489207 bytes
Parsing input...
Parsing file /project/user1/project/kernel/epico.cxx
 
Freeing input...
Building group list...
..
Generating docs for compound MemoryManager::ProcessSpec
 
Generating docs for namespace std
Generating group index...
Generating example index...
Generating file member index...
Generating namespace member index...
Generating page index...
Generating graph info page...
Generating search index...
Generating style sheet...
Documentation output formats

Doxygen can generate documentation in several output formats other than HTML. You can configure doxygen to produce documentation in the following formats (all of which appear in Listing 5):

HTML, optionally packaged as Microsoft® compiled HTML help (.chm) files
LaTeX
Rich Text Format (RTF)
UNIX man pages
XML

Listing 5 provides an example of a Doxyfile that generates documentation in all the formats discussed.
Listing 5. Doxyfile with tags for generating documentation in several formats
                
#for HTML 
GENERATE_HTML = YES
HTML_FILE_EXTENSION = .htm

#for CHM files
GENERATE_HTMLHELP = YES

#for Latex output
GENERATE_LATEX = YES
LATEX_OUTPUT = latex

#for RTF
GENERATE_RTF = YES
RTF_OUTPUT = rtf 
RTF_HYPERLINKS = YES

#for MAN pages
GENERATE_MAN = YES
MAN_OUTPUT = man
#for XML
GENERATE_XML = YES
Special tags in doxygen

Doxygen contains a couple of special tags.

Preprocessing C/C++ code

First, doxygen must preprocess C/C++ code to extract information. By default, however, it does only partial preprocessing -- conditional compilation statements (#if ... #endif) are evaluated, but macro expansions are not performed. Consider the code in Listing 6.
Listing 6. Sample C code that makes use of macros
                
#include <cstring>
#include <rope>

#define USE_ROPE

#ifdef USE_ROPE
  #define STRING std::rope
#else
  #define STRING std::string
#endif

static STRING name;
With USE_ROPE defined in the sources, the documentation doxygen generates looks like this:
Defines
    #define USE_ROPE
    #define STRING std::rope

Variables
    static STRING name
Here, you see that doxygen has performed a conditional compilation but has not done a macro expansion of STRING. The <ENABLE_PREPROCESSING> tag in the Doxyfile is set by default to Yes. To allow for macro expansions, also set the <MACRO_EXPANSION> tag to Yes. Doing so produces this output from doxygen:
Defines
   #define USE_ROPE
    #define STRING std::string

Variables
    static std::rope name
If you set the <ENABLE_PREPROCESSING> tag to No, the output from doxygen for the earlier sources looks like this:
Variables
    static STRING name
Note that the documentation now has no definitions, and it is not possible to deduce the type of STRING. It thus makes sense always to set the <ENABLE_PREPROCESSING> tag to Yes.

As part of the documentation, it might be desirable to expand only specific macros. For such purposes, along with setting <ENABLE_PREPROCESSING> and <MACRO_EXPANSION> to Yes, you must set the <EXPAND_ONLY_PREDEF> tag to Yes (this tag is set to No by default) and provide the macro details as part of the <PREDEFINED> or <EXPAND_AS_DEFINED> tag. Consider the code in Listing 7, where only the macro CONTAINER would be expanded.
Listing 7. C source with multiple macros
                
#ifdef USE_ROPE
  #define STRING std::rope
#else
  #define STRING std::string
#endif

#if ALLOW_RANDOM_ACCESS == 1
  #define CONTAINER std::vector
#else
  #define CONTAINER std::list
#endif

static STRING name;
static CONTAINER gList;
Listing 8 shows the configuration file.
Listing 8. Doxyfile set to allow select macro expansions
                
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = YES
EXPAND_AS_DEFINED = CONTAINER

Here's the doxygen output with only CONTAINER expanded:
Defines
#define STRING   std::string 
#define CONTAINER   std::list

Variables
static STRING name
static std::list gList
Notice that only the CONTAINER macro has been expanded. Subject to <MACRO_EXPANSION> and <EXPAND_ONLY_PREDEF> both being Yes, the <EXPAND_AS_DEFINED> tag selectively expands only those macros listed on the right-hand side of the equality operator.

As part of preprocessing, the final tag to note is <PREDEFINED>. Much as you use the -D switch to pass preprocessor definitions to the G++ compiler, you use this tag to define macros. Consider the Doxyfile in Listing 9.
Listing 9. Doxyfile with macro expansion tags defined
                
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = YES
EXPAND_AS_DEFINED = 
PREDEFINED = USE_ROPE= \
                             ALLOW_RANDOM_ACCESS=1
Here's the doxygen-generated output:
Defines
#define USE_ROPE
#define STRING   std::rope 
#define CONTAINER   std::vector

Variables
static std::rope name 
static std::vector gList
When used with the <PREDEFINED> tag, macros should be defined as <macro name>=<value>. If no value is provided -- as in the case of a simple #define -- just using <macro name>=<spaces> suffices. Separate multiple macro definitions by spaces or a backslash (\).

Excluding specific files or directories from the documentation process

In the <EXCLUDE> tag in the Doxyfile, add the names of the files and directories for which documentation should not be generated, separated by spaces. This comes in handy when the root of the source hierarchy is provided and some sub-directories must be skipped. For example, if the root of the hierarchy is src_root and you want to skip the examples/ and test/memoryleaks folders from the documentation process, the Doxyfile should look like Listing 10.
Listing 10. Using the EXCLUDE tag as part of the Doxyfile
                
INPUT = /home/user1/src_root
EXCLUDE = /home/user1/src_root/examples /home/user1/src_root/test/memoryleaks

Generating graphs and diagrams

By default, the Doxyfile has the <CLASS_DIAGRAMS> tag set to Yes. This tag is used for generation of class hierarchy diagrams. The following tags in the Doxyfile deal with generating diagrams:

<CLASS_DIAGRAMS>: generates the basic class hierarchy diagrams
<HAVE_DOT>: tells doxygen that the dot tool from the Graphviz package is available, enabling richer graphs
<CLASS_GRAPH>: generates an inheritance graph for each class (when dot is available)
<COLLABORATION_GRAPH>: generates collaboration diagrams that also show usage relationships between classes

Listing 11 provides an example using a few data structures. Note that the <HAVE_DOT>, <CLASS_GRAPH>, and <COLLABORATION_GRAPH> tags are all set to Yes in the configuration file.
Listing 11. Interacting C classes and structures
                
struct D {
  int d;
};

class A {
  int a;
};

class B : public A {
  int b;
};

class C : public B {
  int c;
  D d;
};
Figure 1 shows the output from doxygen.
Figure 1. The Class inheritance graph and collaboration graph generated using the dot tool
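For reference, the diagram-related settings described above would typically be switched on in the Doxyfile along these lines. This is a sketch rather than a listing from the original article; HAVE_DOT assumes the Graphviz dot tool is installed and reachable on the PATH, and all tag names are standard doxygen tags:

CLASS_DIAGRAMS      = YES
HAVE_DOT            = YES
CLASS_GRAPH         = YES
COLLABORATION_GRAPH = YES
#Optional and also standard: per-function call graphs (can be slow on large code bases)
#CALL_GRAPH         = YES

With these set, re-running doxygen regenerates the HTML pages with inheritance and collaboration diagrams like the ones in Figure 1.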
Code documentation style

So far, you've used doxygen to extract information from code that is otherwise undocumented. However, doxygen also advocates a documentation style and syntax, which helps it generate more detailed documentation. This section discusses some of the more common tags doxygen advocates using as part of C/C++ code.

Every code item has two kinds of descriptions: one brief and one detailed. Brief descriptions are typically single lines. Functions and class methods have a third kind of description, known as the in-body description, which is a concatenation of all comment blocks found within the function body.

To document global functions, variables, and enum types, the corresponding file must first be documented using the <\file> tag. Listing 12 provides an example using a function tag (<\fn>), a function argument tag (<\param>), a variable name tag (<\var>), a tag for #define (<\def>), and a tag to indicate specific issues related to a code snippet (<\warning>).
Listing 12. Typical doxygen tags and their use
                
/*! \file globaldecls.h
      \brief Place to look for global variables, enums, functions
           and macro definitions
  */

/** \var const int fileSize
      \brief Default size of the file on disk
  */
const int fileSize = 1048576;

/** \def SHIFT(value, length)
      \brief Left shift value by length in bits
  */
#define SHIFT(value, length) ((value) << (length))

/** \fn bool check_for_io_errors(FILE* fp)
      \brief Checks if a file is corrupted or not
      \param fp Pointer to an already opened file
      \warning Not thread safe!
  */
bool check_for_io_errors(FILE* fp);

Here's how the generated documentation looks:

Defines
#define SHIFT(value, length)   ((value) << (length))  
             Left shift value by length in bits.

Functions
bool check_for_io_errors (FILE *fp)
        Checks if a file is corrupted or not.

Variables
const int fileSize = 1048576;
Function Documentation
bool check_for_io_errors (FILE* fp)
Checks if a file is corrupted or not.

Parameters
              fp: Pointer to an already opened file

Warning
Not thread safe!
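Classes and their methods can be documented in the same spirit. The following is a small illustrative sketch, not part of the original article: the Buffer class and its members are invented, but the tags used (\brief, \param, \return, \warning, and the //!< member-comment form) are standard doxygen syntax.

/*! \file buffer.h
    \brief Example of documenting a class and its methods with doxygen tags
*/

/** \brief Fixed-size byte buffer used by an I/O layer (brief description).
 *
 *  This longer comment block becomes the detailed description in the
 *  generated documentation: the buffer owns its storage and exposes
 *  bounds-checked access.
 */
class Buffer {
public:
    /** \brief Construct a buffer of the given size.
     *  \param size Number of bytes to allocate
     */
    explicit Buffer(unsigned size);

    /** \brief Read one byte.
     *  \param index Zero-based position to read
     *  \return The byte stored at the given index
     *  \warning Performs no locking; not thread safe
     */
    unsigned char at(unsigned index) const;

private:
    unsigned size_;          //!< Number of bytes owned
    unsigned char* data_;    //!< Raw storage
};

Running doxygen over a header like this produces a class page with the brief line in the class list, the detailed block on the class page, and Parameters/Returns sections for at(), in the same way the generated output above documents check_for_io_errors().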
Conclusion

This article discusses how doxygen can extract a lot of relevant information from legacy C/C++ code. If the code is documented using doxygen tags, doxygen generates output in an easy-to-read format. Put to good use, doxygen deserves a place in any developer's arsenal for maintaining and managing legacy systems.

[Oct 15, 2019] A Toxic Work Culture is forcing your Best Employees to Quit!

Now, show me a large company at which at least half of these 10 points do not apply ;-)
This is pretty superficial advice...
Notable quotes:
"... Employee suggestions are discarded. People are afraid to give honest feedback. ..."
"... Overworking is a badge of honor and is expected. ..."
"... Gossiping and/or social cliques. ..."
"... Favoritism and office politics. ..."
"... Aggressive or bullying behavior ..."
Oct 15, 2019 | www.linkedin.com

Whenever a boss acts like a dictator – shutting down, embarrassing, or firing anyone who dares to challenge the status quo – you've got a toxic workplace problem. And that's not just because of the boss's bad behavior, but because that behavior creates an environment in which everyone is scared, intimidated, and often willing to throw their colleagues under the bus, just to stay on the good side of such bosses.

... ... ...

10 Signs your workplace culture is Toxic

[Oct 15, 2019] Economist's View The Opportunity Cost of Computer Programming

Oct 15, 2019 | economistsview.typepad.com

From Reuters Odd News :

Man gets the poop on outsourcing , By Holly McKenna, May 2, Reuters

Computer programmer Steve Relles has the poop on what to do when your job is outsourced to India. Relles has spent the past year making his living scooping up dog droppings as the "Delmar Dog Butler." "My parents paid for me to get a (degree) in math and now I am a pooper scooper," "I can clean four to five yards in an hour if they are close together." Relles, who lost his computer programming job about three years ago ... has over 100 clients who pay $10 each for a once-a-week cleaning of their yard.

Relles competes for business with another local company called "Scoopy Do." Similar outfits have sprung up across America, including Petbutler.net, which operates in Ohio. Relles says his business is growing by word of mouth and that most of his clients are women who either don't have the time or desire to pick up the droppings. "St. Bernard (dogs) are my favorite customers since they poop in large piles which are easy to find," Relles said. "It sure beats computer programming because it's flexible, and I get to be outside,"

[Oct 13, 2019] https://www.quora.com/If-Donald-Knuth-were-25-years-old-today-which-programming-language-would-he-choose

Notable quotes:
"... He mostly writes in C today. ..."
Oct 13, 2019 | www.quora.com

Eugene Miya, A friend/colleague. Sometimes driver. Other shared experiences. Updated Mar 22, 2017

He mostly writes in C today.

I can assure you he at least knows about Python. Guido's office at Dropbox is 1 -- 2 blocks by a backdoor gate from Don's house.

I would tend to doubt that he would use R (I've used S before as one of my stat packages). Don would probably write something for himself.

Don is not big on functional languages, so I would doubt either Haskell (sorry Paul) or LISP (but McCarthy lived just around the corner from Don; I used to drive him to meetings; actually, I've driven all 3 of us to meetings, and he got his wife an electric version of my car based on riding in my car (score one for friend's choices)). He does use emacs and he does write MLISP macros, but he believes in being closer to the hardware which is why he sticks with MMIX (and MIX) in his books.

Don't discount him learning the machine language of a given architecture.

I'm having dinner with Don and Jill and a dozen other mutual friends in 3 weeks or so (our quarterly dinner). I can ask him then, if I remember (either a calendar entry or at job). I try not to bother him with things like this. Don is well connected to the hacker community.

Don's name was brought up at an undergrad architecture seminar today, but Don was not in the audience (an amazing audience; I took a photo for the collection of architects and other computer scientists in the audience (Hennessey and Patterson were talking)). I came close to biking by his house on my way back home.

We do have a mutual friend (actually, I introduced Don to my biology friend at Don's request) who arrives next week, and Don is my wine drinking proxy. So there is a chance I may see him sooner.

Steven de Rooij, Theoretical computer scientist. Answered Mar 9, 2017

Nice question :-)

Don Knuth would want to use something that's low level, because details matter. So no Haskell; LISP is borderline. Perhaps if the Lisp machine had ever become a thing.

He’d want something with well-defined and simple semantics, so definitely no R. Python also contains quite a few strange ad hoc rules, especially in its OO and lambda features. Yes Python is easy to learn and it looks pretty, but Don doesn’t care about superficialities like that. He’d want a language whose version number is converging to a mathematical constant, which is also not in favor of R or Python.

What remains is C. Out of the five languages listed, my guess is Don would pick that one. But actually, his own old choice of Pascal suits him even better. I don't think any languages have been invented since then that score higher on the Knuthometer than Knuth's own original pick.

And yes, I feel that this is actually a conclusion that bears some thinking about.

Dan Allen, I've been programming for 34 years now. Still not finished. Answered Mar 9, 2017

In The Art of Computer Programming I think he'd do exactly what he did. He'd invent his own architecture and implement programs in an assembly language targeting that theoretical machine.

He did that for a reason because he wanted to reveal the detail of algorithms at the lowest level of detail which is machine level.

He didn't use any available languages at the time and I don't see why that would suit his purpose now. All the languages above are too high-level for his purposes.

[Oct 08, 2019] Southwest Pilots Blast Boeing in Suit for Deception and Losses from 'Unsafe, Unairworthy' 737 Max

Notable quotes:
"... The lawsuit also aggressively contests Boeing's spin that competent pilots could have prevented the Lion Air and Ethiopian Air crashes: ..."
"... When asked why Boeing did not alert pilots to the existence of the MCAS, Boeing responded that the company decided against disclosing more details due to concerns about "inundate[ing] average pilots with too much information -- and significantly more technical data -- than [they] needed or could realistically digest." ..."
"... The filing has a detailed explanation of why the addition of heavier, bigger LEAP1-B engines to the 737 airframe made the plane less stable, changed how it handled, and increased the risk of catastrophic stall. It also describes at length how Boeing ignored warning signs during the design and development process, and misrepresented the 737 Max as essentially the same as older 737s to the FAA, potential buyers, and pilots. It also has juicy bits presented in earlier media accounts but bear repeating, like: ..."
"... Then, on November 7, 2018, the FAA issued an "Emergency Airworthiness Directive (AD) 2018-23-51," warning that an unsafe condition likely could exist or develop on 737 MAX aircraft. ..."
"... Moreover, unlike runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and relied upon in earlier generations of 737 aircraft. ..."
"... And making the point that to turn off MCAS all you had to do was flip two switches behind everything else on the center condole. Not exactly true, normally those switches were there to shut off power to electrically assisted trim. Ah, it one thing to shut off MCAS it's a whole other thing to shut off power to the planes trim, especially in high speed ✓ and the plane noise up ✓, and not much altitude ✓. ..."
"... Classic addiction behavior. Boeing has a major behavioral problem, the repetitive need for and irrational insistence on profit above safety all else , that is glaringly obvious to everyone except Boeing. ..."
"... In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots " ..."
"... This "MCAS" was always hidden from pilots? The military implemented checks on MCAS to maintain a level of pilot control. The commercial airlines did not. Commercial airlines were in thrall of every little feature that they felt would eliminate the need for pilots at all. Fell right into the automation crapification of everything. ..."
Oct 08, 2019 | www.nakedcapitalism.com

At first blush, the suit filed in Dallas by the Southwest Airlines Pilots Association (SwAPA) against Boeing may seem like a family feud. SWAPA is seeking an estimated $115 million for lost pilots' pay as a result of the grounding of the 34 Boeing 737 Max planes that Southwest owns and the additional 20 that Southwest had planned to add to its fleet by year-end 2019. Recall that Southwest was the largest buyer of the 737 Max, followed by American Airlines. However, the damning accusations made by the pilots' union, meaning, erm, pilots, are likely to cause Boeing not just more public relations headaches but will also give grist to suits by crash victims.

However, one reason that the Max is a sore point with the union is that it was a key leverage point in the 2016 contract negotiations:

And Boeing's assurances that the 737 Max was for all practical purposes just a newer 737 factored into the pilots' bargaining stance. Accordingly, one of the causes of action is tortious interference, that Boeing interfered in the contract negotiations to the benefit of Southwest. The filing describes at length how Boeing and Southwest were highly motivated not to have the contract dispute drag on and set back the launch of the 737 Max at Southwest, its showcase buyer. The big point that the suit makes is the plane was unsafe and the pilots never would have agreed to fly it had they known what they know now.

We've embedded the complaint at the end of the post. It's colorful and does a fine job of recapping the sorry history of the development of the airplane. It has damning passages like:

Boeing concealed the fact that the 737 MAX aircraft was not airworthy because, inter alia, it incorporated a single-point failure condition -- a software/flight control logic called the Maneuvering Characteristics Augmentation System ("MCAS") -- that, if fed erroneous data from a single angle-of-attack sensor, would command the aircraft nose-down and into an unrecoverable dive without pilot input or knowledge.

The lawsuit also aggressively contests Boeing's spin that competent pilots could have prevented the Lion Air and Ethiopian Air crashes:

Had SWAPA known the truth about the 737 MAX aircraft in 2016, it never would have approved the inclusion of the 737 MAX aircraft as a term in its CBA [collective bargaining agreement], and agreed to operate the aircraft for Southwest. Worse still, had SWAPA known the truth about the 737 MAX aircraft, it would have demanded that Boeing rectify the aircraft's fatal flaws before agreeing to include the aircraft in its CBA, and to provide its pilots, and all pilots, with the necessary information and training needed to respond to the circumstances that the Lion Air Flight 610 and Ethiopian Airlines Flight 302 pilots encountered nearly three years later.

And (boldface original):

Boeing Set SWAPA Pilots Up to Fail

As SWAPA President Jon Weaks publicly stated, SWAPA pilots "were kept in the dark" by Boeing.

Boeing did not tell SWAPA pilots that MCAS existed and there was no description or mention of MCAS in the Boeing Flight Crew Operations Manual.

There was therefore no way for commercial airline pilots, including SWAPA pilots, to know that MCAS would work in the background to override pilot inputs.

There was no way for them to know that MCAS drew on only one of two angle of attack sensors on the aircraft.

And there was no way for them to know of the terrifying consequences that would follow from a malfunction.

When asked why Boeing did not alert pilots to the existence of the MCAS, Boeing responded that the company decided against disclosing more details due to concerns about "inundate[ing] average pilots with too much information -- and significantly more technical data -- than [they] needed or could realistically digest."

SWAPA's pilots, like their counterparts all over the world, were set up for failure

The filing has a detailed explanation of why the addition of heavier, bigger LEAP1-B engines to the 737 airframe made the plane less stable, changed how it handled, and increased the risk of catastrophic stall. It also describes at length how Boeing ignored warning signs during the design and development process, and misrepresented the 737 Max as essentially the same as older 737s to the FAA, potential buyers, and pilots. It also has juicy bits that were presented in earlier media accounts but bear repeating, like:

By March 2016, Boeing settled on a revision of the MCAS flight control logic.

However, Boeing chose to omit key safeguards that had previously been included in earlier iterations of MCAS used on the Boeing KC-46A Pegasus, a military tanker derivative of the Boeing 767 aircraft.

The engineers who created MCAS for the military tanker designed the system to rely on inputs from multiple sensors and with limited power to move the tanker's nose. These deliberate checks sought to ensure that the system could not act erroneously or cause a pilot to lose control. Those familiar with the tanker's design explained that these checks were incorporated because "[y]ou don't want the solution to be worse than the initial problem."

The 737 MAX version of MCAS abandoned the safeguards previously relied upon. As discussed below, the 737 MAX MCAS had greater control authority than its predecessor, activated repeatedly upon activation, and relied on input from just one of the plane's two sensors that measure the angle of the plane's nose.

In other words, Boeing can't credibly say that it didn't know better.

Here is one of the sections describing Boeing's cover-ups:

Yet Boeing's website, press releases, annual reports, public statements and statements to operators and customers, submissions to the FAA and other civil aviation authorities, and 737 MAX flight manuals made no mention of the increased stall hazard or MCAS itself.

In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots.

We urge you to read the complaint in full, since it contains juicy insider details, like the significance of Southwest being Boeing's 737 Max "launch partner" and what that entailed in practice, plus recounting dates and names of Boeing personnel who met with SWAPA pilots and made misrepresentations about the aircraft.

If you are time-pressed, the best MSM account is from the Seattle Times, In scathing lawsuit, Southwest pilots' union says Boeing 737 MAX was unsafe

Even though Southwest Airlines is negotiating a settlement with Boeing over losses resulting from the grounding of the 737 Max and the airline has promised to compensate the pilots, the pilots' union at a minimum apparently feels the need to put the heat on Boeing directly. After all, the union could withdraw the complaint if Southwest were to offer satisfactory compensation for the pilots' lost income. And pilots have incentives not to raise safety concerns about the planes they fly. Don't want to spook the horses, after all.

But Southwest pilots are not only the ones most harmed by Boeing's debacle; they are also arguably less exposed to the downside of bad press about the 737 Max. It's business fliers who are most sensitive to the risks of the 737 Max, due to seeing the story regularly covered in the business press plus due to often being road warriors. Even though corporate customers account for only 12% of airline customers, they represent an estimated 75% of profits.

Southwest customers don't pay up for front of the bus seats. And many of them presumably value the combination of cheap travel, point to point routes between cities underserved by the majors, and close-in airports, which cut travel times. In other words, that combination of features will make it hard for business travelers who use Southwest regularly to give the airline up, even if the 737 Max gives them the willies. By contrast, premium seat passengers on American or United might find it not all that costly, in terms of convenience and ticket cost (if they are budget sensitive), to fly 737-Max-free Delta until those passengers regain confidence in the grounded plane.

Note that American Airlines' pilot union, when asked about the Southwest claim, said that it also believes its pilots deserve to be compensated for lost flying time, but they plan to obtain it through American Airlines.

If Boeing were smart, it would settle this suit quickly, but so far, Boeing has relied on bluster and denial. So your guess is as good as mine as to how long the legal arm-wrestling goes on.

Update 5:30 AM EDT : One important point that I neglected to include is that the filing also recounts, in gory detail, how Boeing went into "Blame the pilots" mode after the Lion Air crash, insisting the cause was pilot error and would therefore not happen again. Boeing made that claim on a call to all operators, including SWAPA, and then three days later in a meeting with SWAPA.

However, Boeing's actions were inconsistent with this claim. From the filing:

Then, on November 7, 2018, the FAA issued an "Emergency Airworthiness Directive (AD) 2018-23-51," warning that an unsafe condition likely could exist or develop on 737 MAX aircraft.

Relying on Boeing's description of the problem, the AD directed that in the event of un-commanded nose-down stabilizer trim such as what happened during the Lion Air crash, the flight crew should comply with the Runaway Stabilizer procedure in the Operating Procedures of the 737 MAX manual.

But the AD did not provide a complete description of MCAS or the problem in 737 MAX aircraft that led to the Lion Air crash, and would lead to another crash and the 737 MAX's grounding just months later.

An MCAS failure is not like a runaway stabilizer. A runaway stabilizer has continuous un-commanded movement of the tail, whereas MCAS is not continuous and pilots (theoretically) can counter the nose-down movement, after which MCAS would move the aircraft tail down again.

Moreover, unlike runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and relied upon in earlier generations of 737 aircraft.

Even after the Lion Air crash, Boeing's description of MCAS was still insufficient to correct its lack of disclosure, as demonstrated by a second MCAS-caused crash.

We hoisted this detail because insiders were spouting in our comments section, presumably based on Boeing's patter, that the Lion Air pilots were clearly incompetent and that, had they only executed the well-known "runaway stabilizer" procedure, all would have been fine. Needless to say, this assertion has been shown to be incorrect.


Titus , October 8, 2019 at 4:38 am

Excellent, by any standard. Which does remind me of the NYT magazine story (William Langewiesche, published Sept. 18, 2019) making the claim that basically the pilots who crashed their planes weren't real "Airmen".

And making the point that to turn off MCAS all you had to do was flip two switches behind everything else on the center console. Not exactly true; normally those switches were there to shut off power to the electrically assisted trim. Ah, it's one thing to shut off MCAS; it's a whole other thing to shut off power to the plane's trim, especially at high speed ✓, with the plane nose up ✓, and not much altitude ✓.

And especially if you as a pilot didn't know MCAS was there in the first place. This sort of engineering by Boeing is criminal. And the lying. To everyone. Oh, lest we all forget, the processing power of the in-flight computer is that of an Intel 286. There are times I just want to be beamed back to the home planet. Where we care for each other.

Carolinian , October 8, 2019 at 8:32 am

One should also point out that Langewiesche said that Boeing made disastrous mistakes with the MCAS and that the very future of the Max is cloudy. His article was useful both for greater detail about what happened and for offering some pushback to the idea that the pilots had nothing to do with the accidents.

As for the above, it was obvious from the first Seattle Times stories that these two events and the grounding were going to be a lawsuit magnet. But some of us think Boeing deserves at least a little bit of a defense because their side has been totally silent–either for legal reasons or CYA reasons on the part of their board and bad management.

Brooklin Bridge , October 8, 2019 at 8:08 am

Classic addiction behavior. Boeing has a major behavioral problem, the repetitive need for and irrational insistence on profit above safety and all else, that is glaringly obvious to everyone except Boeing.

Summer , October 8, 2019 at 9:01 am

"The engineers who created MCAS for the military tanker designed the system to rely on inputs from multiple sensors and with limited power to move the tanker's nose. These deliberate checks sought to ensure that the system could not act erroneously or cause a pilot to lose control "

"Yet Boeing's website, press releases, annual reports, public statements and statements to operators and customers, submissions to the FAA and other civil aviation authorities, and 737 MAX flight manuals made no mention of the increased stall hazard or MCAS itself.

In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots "

This "MCAS" was always hidden from pilots? The military implemented checks on MCAS to maintain a level of pilot control. The commercial airlines did not. Commercial airlines were in thrall of every little feature that they felt would eliminate the need for pilots at all. Fell right into the automation crapification of everything.

[Oct 08, 2019] Serious question/Semi-Rant. What the hell is DevOps supposed to be and how does it affect me as a sysadmin in higher ed?

Notable quotes:
"... Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to offer me? ..."
"... So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn, but it gets so overwhelming trying to find information because everything I find just assumes you're a software developer with all this prerequisite knowledge. Additionally, how the hell do you find the time to learn all of this? It seems like new DevOps software or platforms or whatever you call them spin up every single month. I'm already in the middle of trying to learn JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in addition to networking concepts in general), and AV design stuff (like Crestron programming). ..."
Oct 08, 2019 | www.reddit.com

Posted by u/kevbo423 59 minutes ago

What the hell is DevOps? Every couple months I find myself trying to look into it as all I ever hear and see about is DevOps being the way forward. But each time I research it I can only find things talking about streamlining software updates and quality assurance and yada yada yada. It seems like DevOps only applies to companies that make software as a product. How does that affect me as a sysadmin for higher education? My "company's" product isn't software.

Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to offer me? Again, when I try to research them a majority of what I find just links back to software development.

To give a rough idea of what I deal with, below is a list of my three main responsibilities.

  1. macOS/iOS Systems Administration (I'm the only sysadmin that does this for around 150+ machines)

  2. Network Administration (I just started with this a couple months ago and I'm slowly learning about our infrastructure and network administration in general from our IT director. We have several buildings spread across our entire campus with a mixture of Juniper, Dell, and Brocade equipment.)

  3. AV Systems Design and Programming (I'm the only person who does anything related to video conferencing, meeting room equipment, presentation systems, digital signage, etc. for 7 buildings.)

So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn, but it gets so overwhelming trying to find information because everything I find just assumes you're a software developer with all this prerequisite knowledge. Additionally, how the hell do you find the time to learn all of this? It seems like new DevOps software or platforms or whatever you call them spin up every single month. I'm already in the middle of trying to learn JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in addition to networking concepts in general), and AV design stuff (like Crestron programming).

I've been working at the same job for 5 years and I feel like I'm being left in the dust by the entire rest of the industry. I'm being pulled in so many different directions that I feel like it's impossible for me to ever get another job. At the same time, I can't specialize in anything because I have so many different unrelated areas I'm supposed to be doing work in.

And this is what I go through/ask myself every few months I try to research and learn DevOps. This is mainly a rant, but I am more than open to any and all advice anyone is willing to offer. Thanks in advance.

kimvila 2 points · 27 minutes ago

· edited 23 minutes ago

There are a lot of tools used on a daily basis for DevOps that can make your life much easier, but apparently that's not the case for you. When you manage infra as code, you're using DevOps.

There's a lot of space for operations guys like you (and me), so look to DevOps as an alternative source of knowledge, just to stay tuned to the trends of the industry and improve your skills.

For higher education, this is useful for managing large projects and looking for improvement during the development of the product/service itself. But again, that's not the case for you. If you intend to switch to another position, you may try to search for a certification program that suits your needs.

Mongoloid_the_Retard 0 points · 46 minutes ago

DevOps is a cult.

[Oct 08, 2019] FALLACIES AND PITFALLS OF OO PROGRAMMING by David Hoag and Anthony Sintes

Notable quotes:
"... In the programming world, the term silver bullet refers to a technology or methodology that is touted as the ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more impressive, it will do all of this without any effort on your part! ..."
"... Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and architecture. ..."
"... OO will insure the success of your project: An object-oriented approach to software development does not guarantee the automatic success of a project. A developer cannot ignore the importance of sound design and architecture. Only careful analysis and a complete understanding of the problem will make the project succeed. A successful project will utilize sound techniques, competent programmers, sound processes and solid project management. ..."
"... OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger and slower than programs written using other techniques. ..."
"... OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in protecting you from making a mistake. ..."
Apr 27, 2000 | www.chicagotribune.com

"Hooked on Objects" is dedicated to providing readers with insight into object-oriented technologies. In our first few articles, we introduced the three tenants of object-oriented programming: encapsulation, inheritance and polymorphism. We then covered software process and design patterns. We even got our hands dirty and dissected the Java class.

Each of our previous articles had a common thread. We have written about the strengths and benefits of the object paradigm and highlighted the advantages the object approach brings to the development effort. However, we do not want to give anyone a false sense that object-oriented techniques are always the perfect answer. Object-oriented techniques are not the magic "silver bullets" of programming.

In the programming world, the term silver bullet refers to a technology or methodology that is touted as the ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more impressive, it will do all of this without any effort on your part!

Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and architecture.

If anything, using OO makes design and architecture more important because without a clear, well-planned design, OO will fail almost every time. Spaghetti code (that which is written without a coherent structure) spells trouble for procedural programming, and weak architecture and design can mean the death of an OO project. A poorly planned system will fail to achieve the promises of OO: increased productivity, reusability, scalability and easier maintenance.

Some critics claim OO has not lived up to its advance billing, while others claim its techniques are flawed. OO isn't flawed, but some of the hype has given OO developers and managers a false sense of security.

Successful OO requires careful analysis and design. Our previous articles have stressed the positive attributes of OO. This time we'll explore some of the common fallacies of this promising technology and some of the potential pitfalls.

Fallacies of OO

It is important to have realistic expectations before choosing to use object-oriented technologies. Do not allow these common fallacies to mislead you.

OO Pitfalls

Life is full of compromise and nothing comes without cost. OO is no exception. Before choosing to employ object technologies it is imperative to understand this. When used properly, OO has many benefits; when used improperly, however, the results can be disastrous.

OO technologies take time to learn: Don't expect to become an OO expert overnight. Good OO takes time and effort to learn. Like all technologies, change is the only constant. If you do not continue to enhance and strengthen your skills, you will fall behind.

OO benefits might not pay off in the short term: Because of the long learning curve and initial extra development costs, the benefits of increased productivity and reuse might take time to materialize. Don't forget this or you might be disappointed in your initial OO results.

OO technologies might not fit your corporate culture: The successful application of OO requires that your development team feels involved. If developers are frequently shifted, they will struggle to deliver reusable objects. There's less incentive to deliver truly robust, reusable code if you are not required to live with your work or if you'll never reap the benefits of it.

OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger and slower than programs written using other techniques. This isn't as much of a problem today. Memory prices are dropping every day. CPUs continue to provide better performance, and compilers and virtual machines continue to improve. The small amount of efficiency that you trade for increased productivity and reuse should be well worth it. However, if you're developing an application that tracks millions of data points in real time, OO might not be the answer for you.

OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in protecting you from making a mistake.

What do you need to do to avoid these pitfalls and fallacies? The answer is to keep expectations realistic. Beware of the hype. Use an OO approach only when appropriate.

Programmers should not feel compelled to use every OO trick that the implementation language offers. It is wise to use only the ones that make sense. When used without forethought, object-oriented techniques could cause more harm than good. Of course, there is one other thing that you should always do to improve your OO: Don't miss a single installment of "Hooked on Objects."

David Hoag is vice president-development and chief object guru for ObjectWave, a Chicago-based object-oriented software engineering firm. Anthony Sintes is a Sun Certified Java Developer and team member specializing in telecommunications consulting for ObjectWave. Contact them at hob@objectwave.com or visit their Web site at www.objectwave.com.

BOOKMARKS

Hooked on Objects archive:

chicagotribune.com/go/HOBarchive

Associated message board:

chicagotribune.com/go/HOBtalk

[Oct 07, 2019] Pitfalls of Object Oriented Programming by Tony Albrecht - Technical Consultant

This isn't a general discussion of OO pitfalls and conceptual weaknesses, but a discussion of how conventional 'textbook' OO design approaches can lead to inefficient use of cache & RAM, especially on consoles or other hardware-constrained environments. But it's still good.
Sony Computer Entertainment Europe Research & Development Division

OO is not necessarily EVIL

It's all about the memory

Homogeneity

Data Oriented Design Delivers

[Oct 06, 2019] Weird Al Yankovic - Mission Statement

Highly recommended!
This song seriously streamlined my workflow.
Oct 06, 2019 | www.youtube.com

FanmaR , 4 years ago

Props to the artist who actually found a way to visualize most of this meaningless corporate lingo. I'm sure it wasn't easy to come up with everything.

Maxwelhse , 3 years ago

He missed "sea change" and "vertical integration". Otherwise, that was pretty much all of the useless corporate meetings I've ever attended distilled down to 4.5 minutes. Oh, and you're getting laid off and/or no raises this year.

VenetianTemper , 4 years ago

From my experiences as an engineer, never trust a company that describes their product with the word "synergy".

Swag Mcfresh , 5 years ago

For those too young to get the joke, this is a style parody of Crosby, Stills & Nash, a folk-pop super-group from the 60's. They were hippies who spoke out against corporate interests, war, and politics. Al took their sound (flawlessly), and wrote a song in corporate jargon (the exact opposite of everything CSN was about). It's really brilliant, to those who get the joke.

112steinway , 4 years ago

Only in corporate speak can you use a whole lot of words while saying nothing at all.

Jonathan Ingersoll , 3 years ago

As a business major this is basically every essay I wrote.

A.J. Collins , 3 years ago

"The company has undergone organization optimization due to our strategy modification, which includes empowering the support to the operation in various global markets" - Red 5 on why they laid off 40 people suddenly. Weird Al would be proud.

meanmanturbo , 3 years ago

So this is basically a Dilbert strip turned into a song. I approve.

zyxwut321 , 4 years ago

In his big long career this has to be one of the best songs Weird Al's ever done. Very ambitious rendering of one of the most ambitious songs in pop music history.

teenygozer , 3 years ago

This should be played before corporate meetings to shame anyone who's about to get up and do the usual corporate presentation. Genius as usual, Mr. Yankovic!

Dunoid , 4 years ago

Maybe I'm too far gone to the world of computer nerds, but "Cloud Computing" seems like it should have been in the song somewhere.

Snoo Lee , 4 years ago

The "paradigm shift" at the end of the video / song is when the corporation screws everybody at the end. Brilliantly done, Al.

A Piece Of Bread , 3 years ago

Don't forget to triangulate the automatonic business monetizer to create exceptional synergy.

GeoffryHawk , 3 years ago

There's a quote that goes something like: A politician is someone who speaks for hours while saying nothing at all. And this is exactly it, and it's brilliant.

Sefie Ezephiel , 4 months ago

From the current Gamestop earnings call "address the challenges that have impacted our results, and execute both deliberately and with urgency. We believe we will transform the business and shape the strategy for the GameStop of the future. This will be driven by our go-forward leadership team that is now in place, a multi-year transformation effort underway, a commitment to focusing on the core elements of our business that are meaningful to our future, and a disciplined approach to capital allocation."" yeah Weird Al totally nailed it

Phil H , 6 months ago

"People who enjoy meetings should not be put in charge of anything." -Thomas Sowell

Laff , 3 years ago

I heard "monetize our asses" for some reason...

Brett Naylor , 4 years ago

Excuse me, but "proactive" and "paradigm"? Aren't these just buzzwords that dumb people use to sound important? Not that I'm accusing you of anything like that. [pause] I'm fired, aren't I?~George Meyer

Mark Kahn , 4 years ago

Brilliant social commentary, on how the height of 60's optimism was bastardized into corporate enthusiasm. I hope Steve Jobs got to see this.

Mark , 4 years ago

That's the strangest "Draw My Life" I've ever seen.

Δ , 17 hours ago

I watch this at least once a day to take the edge off my job search whenever I have to decipher fifteen daily want-ads claiming to seek "Hospitality Ambassadors", "Customer Satisfaction Specialists", "Brand Representatives" and "Team Commitment Associates", eventually to discover they want someone to run a cash register and sweep up.

Mike The SandbridgeKid , 5 years ago

The irony is a song about Corporate Speak in the style of tie-dyed, hippie-dippy CSN(+/-Y) four-part harmony. Suite Judy Blue Eyes via Almost Cut My Hair filtered through Carry On. "Fantastic" middle finger to Wall Street, The City, and the monstrous excesses of Unbridled Capitalism.

Geetar Bear , 4 years ago (edited)

This reminds me of George Carlin so much

Vaugn Ripen , 2 years ago

If you understand who and what he's taking a jab at, this is one of the greatest songs and videos of all time. So spot on. This and Frank's 2000 inch tv are my favorite songs of yours. Thanks Al!

Joolz Godfree , 4 years ago

hahaha, "Client-Centric Solutions...!" (or in my case at the time, 'Customer-Centric' solutions) now THAT's a term i haven't heard/read/seen in years, since last being an office drone. =D

Miles Lacey , 4 years ago

When I interact with this musical visual medium I am motivated to conceptualize how the English language can be better compartmentalized to synergize with the client-centric requirements of the microcosmic community focussed social entities that I administrate on social media while interfacing energetically about the inherent shortcomings of the current socio-economic and geo-political order in which we co-habitate. Now does this tedium flow in an effortless stream of coherent verbalisations capable of comprehension?

Soufriere , 5 years ago

When I bought "Mandatory Fun", put it in my car, and first heard this song, I busted a gut, laughing so hard I nearly crashed. All the corporate buzzwords! (except "pivot", apparently).

[Oct 06, 2019] DevOps created huge opportunities for a new generation of snake oil salesmen

Highly recommended!
Oct 06, 2019 | www.reddit.com

DragonDrew Jack of All Trades 772 points · 4 days ago

"I am resolute in my ability to elevate this collaborative, forward-thinking team into the revenue powerhouse that I believe it can be. We will transition into a DevOps team specialising in migrating our existing infrastructure entirely to code and go completely serverless!" - CFO that outsources IT level 2 OpenScore Sysadmin 527 points · 4 days ago

"We will utilize Artificial Intelligence, machine learning, Cloud technologies, python, data science and blockchain to achieve business value"

[Oct 05, 2019] Sick and tired of listening to these so-called architects and full stack developers who watch a bunch of videos on YouTube and Pluralsight and find articles online. They go around the workplace throwing around words like containers, devops, NoOps, Azure, infrastructure as code, serverless, etc., but they don't understand half of the stuff

DevOps created a new generation of bullsheeters
Oct 05, 2019 | www.reddit.com

They say, No more IT or system or server admins needed very soon...

Sick and tired of listening to these so-called architects and full stack developers who watch a bunch of videos on YouTube and Pluralsight and find articles online. They go around the workplace throwing around words like containers, devops, NoOps, Azure, infrastructure as code, serverless, etc., but they don't understand half of the stuff. I do some of the devops tasks in our company, so I understand what it takes to implement and manage these technologies. Every meeting is infested with these A holes.

ntengineer 613 points · 4 days ago

Your best defense against these is to come up with non-sarcastic and quality questions to ask these people during the meeting, and watch them not have a clue how to answer them.

For example, a friend of mine worked at a smallish company where some manager really wanted to move more of their stuff into Azure, including the AD and Exchange environment. But they had common problems with their internet connection due to limited bandwidth and them not wanting to spend more. So during a meeting my friend asked a question something like this:

"You said on this slide that moving the AD environment and Exchange environment to Azure will save us money. Did you take into account that we will need to increase our internet speed by a factor of at least 4 in order to accommodate the increase in traffic going out to the Azure cloud? "

Of course, they hadn't. So the CEO asked my friend if he had the numbers. He had already done his homework: it was a significant increase in cost every month, and taking into account the cost of Azure plus the increase in bandwidth, it wiped away the manager's savings.

I know this won't work for everyone. Sometimes there are real savings in moving things to the cloud. But often times there really aren't. Calling the uneducated people out on what they see as facts can be rewarding.

PuzzledSwitch 101 points · 4 days ago

My previous boss was that kind of a guy. He waited till other people were done throwing their weight around in a meeting and then calmly and politely dismantled them with facts.

No amount of corporate pressuring or bitching could ever stand up to that.

themastermatt 42 points · 4 days ago

I've been trying to do this. The problem is that everyone keeps talking all the way to the end of the meeting, leaving no room for rational facts.

PuzzledSwitch 35 points · 4 days ago

Make a follow-up in email, then.

Or, you might have to interject for a moment.

williamfny Jack of All Trades 26 points · 4 days ago

This is my approach. I don't yell or raise my voice, I just wait. Then I start asking questions that they generally cannot answer and slowly take them apart. I don't have to be loud to get my point across.

MaxHedrome 6 points · 4 days ago

Listen to this guy OP

This tactic is called "the box game". Just continuously ask them logical questions that can't be answered with their stupidity. (Box them in), let them be their own argument against themselves.

CrazyTachikoma 4 days ago

Most DevOps I've met are devs trying to bypass the sysadmins. This, and the Cloud fad, are burning serious amounts of money at companies managed by stupid people that get easily impressed by PR stunts and shiny conferences. Then when everything goes to shit, they call the infrastructure team to fix it...

[Sep 18, 2019] MCAS design, Boeing and ethics of software architect

Sep 18, 2019 | www.moonofalabama.org

... ... ...

Boeing screwed up by designing and installing a faulty system that was unsafe. It did not even tell the pilots that MCAS existed. It still insists that the system's failure should not be trained in simulator type training. Boeing's failure and the FAA's negligence, not the pilots, caused two major accidents.

Nearly a year after the first incident Boeing has still not presented a solution that the FAA would accept. Meanwhile more safety critical issues on the 737 MAX were found for which Boeing has still not provided any acceptable solution.

But to Langewiesche this anyway all irrelevant. He closes his piece out with more "blame the pilots" whitewash of "poor Boeing":

The 737 Max remains grounded under impossibly close scrutiny, and any suggestion that this might be an overreaction, or that ulterior motives might be at play, or that the Indonesian and Ethiopian investigations might be inadequate, is dismissed summarily. To top it off, while the technical fixes to the MCAS have been accomplished, other barely related imperfections have been discovered and added to the airplane's woes. All signs are that the reintroduction of the 737 Max will be exceedingly difficult because of political and bureaucratic obstacles that are formidable and widespread. Who in a position of authority will say to the public that the airplane is safe?

I would if I were in such a position. What we had in the two downed airplanes was a textbook failure of airmanship . In broad daylight, these pilots couldn't decipher a variant of a simple runaway trim, and they ended up flying too fast at low altitude, neglecting to throttle back and leading their passengers over an aerodynamic edge into oblivion. They were the deciding factor here -- not the MCAS, not the Max.

One wonders how much Boeing paid the author to assemble his screed.

foolisholdman , Sep 18 2019 17:14 utc | 5

14,000 Words Of "Blame The Pilots" That Whitewash Boeing Of 737 MAX Failure
The New York Times

No doubt, this WAS intended as a whitewash of Boeing, but having read the 14,000 words, I don't think it qualifies as more than a somewhat greywash. It is true he blames the pilots for mishandling a situation that could, perhaps, have been better handled, but Boeing still comes out of it pretty badly and so does the NTSB. The other thing I took away from the article is that Airbus planes are, in principle, & by design, more failsafe/idiot-proof.

William Herschel , Sep 18 2019 17:18 utc | 6
Key words: New York Times Magazine. I think when your body is for sale you are called a whore. Trump's almost hysterical bashing of the NYT is enough to make anyone like the paper, but at its core it is a mouthpiece for the military industrial complex. Cf. Judith Miller.
BM , Sep 18 2019 17:23 utc | 7
The New York Times Magazine just published a 14,000-word piece

An ill-disguised attempt to prepare the ground for premature approval for the 737max. It won't succeed - impossible. Opposition will come from too many directions. The blowback from this article will make Boeing regret it very soon, I am quite sure.

foolisholdman , Sep 18 2019 17:23 utc | 8
Come to think about it: (apart from the MCAS) what sort of crap design is it, if an absolutely vital control, which the elevator is, can become impossibly stiff under just those conditions where you absolutely have to be able to move it quickly?
A.L. , Sep 18 2019 17:27 utc | 9
This NYT article is great.

It will only highlight the hubris of "my sh1t doesn't stink" mentality of the American elite and increase the resolve of other civil aviation authorities with a backbone (or in ascendancy) to put Boeing through the wringer.

For the longest time the FAA was the gold standard, and years of "Air Crash Investigation" TV shows solidified its place, but that has been taken for granted. Until now, if it's good enough for the FAA it's good enough for all.

That reputation has now been irreparably damaged over this sh1tshow. I can't help but think this NYT article is only meant for domestic sheeple or stock brokers' consumption, as anyone who is going to have anything technical to do with this investigation is going to see right through this load of literal diarrhoea.

I wouldn't be surprised if some insider wants to offload some stock and planted this story ahead of some 737MAX return-to-service timetable announcement to get an uplift. Someone needs to track the SEC forms 3 4 and 5. But there are also many ways to skirt insider reporting requirements. As usual, rules are only meant for the rest of us.

jayc , Sep 18 2019 17:38 utc | 10
An appalling indifference to life/lives has been a signature feature of the American experience.
psychohistorian , Sep 18 2019 17:40 utc | 11
Thanks for the ongoing reporting of this debacle b.... you are saving people's lives

@ A.L who wrote

"
I wouldn't be surprised if some insider wants to offload some stock and planted this story ahead of some 737MAX return-to-service timetable announcement to get an uplift. Someone needs to track the SEC forms 3 4 and 5. But there are also many ways to skirt insider reporting requirements. As usual, rules are only meant for the rest of us.
"

I agree but would pluralize your "insider" to "insiders". This SOP gut and run financialization strategy is just like we are seeing with Purdue Pharma that just filed bankruptcy because their opioids have killed so many....the owners will never see jail time and their profits are protected by the God of Mammon legal system.

Hopefully the WWIII we are engaged in about public/private finance will put an end to this perfidy by the God of Mammon/private finance cult of the Western form of social organization.

b , Sep 18 2019 17:46 utc | 14
Peter Lemme, the satcom guru, was once an engineer at Boeing. He testified on technical MAX issues before Congress and wrote a lot of technical details about it. He retweeted the NYT Mag piece with this comment:
Peter Lemme @Satcom_Guru

Blame the pilots.
Blame the training.
Blame the airline standards.
Imply rampant corruption at all levels.
Claim Airbus flight envelope protection is superior to Boeing.
Fumble the technical details.
Stack the quotes with lots of hearsay to drive the theme.
Ignore everything else

[Sep 18, 2019] The myopic drive to profitability and naivety about unintended consequences are pushing these technologies out into the world before they are ready.

Sep 18, 2019 | www.moonofalabama.org

A.L. , Sep 18 2019 19:56 utc | 31

@30 David G

Perhaps, just like proponents of AI and self-driving cars. They just love the technology, and are financially and emotionally invested in it so much they can't see the forest for the trees.

I like technology, I studied engineering. But the myopic drive to profitability and naivety about unintended consequences are pushing these technologies out into the world before they are ready.

Engineering used to be a discipline with ethics and responsibilities... But now anybody who can write two lines of code can call themselves a software engineer....

[Sep 14, 2019] The Man Who Could Speak Japanese

This impostor definitely demonstrated programming abilities, although at the time there was no such term :-)
Notable quotes:
"... "We wrote it down. ..."
"... The next phrase was: ..."
"... " ' Booki fai kiz soy ?' " said Whitey. "It means 'Do you surrender?' " ..."
"... " ' Mizi pok loi ooni rak tong zin ?' 'Where are your comrades?' " ..."
"... "Tong what ?" rasped the colonel. ..."
"... "Tong zin , sir," our instructor replied, rolling chalk between his palms. He arched his eyebrows, as though inviting another question. There was one. The adjutant asked, "What's that gizmo on the end?" ..."
"... Of course, it might have been a Japanese newspaper. Whitey's claim to be a linguist was the last of his status symbols, and he clung to it desperately. Looking back, I think his improvisations on the Morton fantail must have been one of the most heroic achievements in the history of confidence men -- which, as you may have gathered by now, was Whitey's true profession. Toward the end of our tour of duty on the 'Canal he was totally discredited with us and transferred at his own request to the 81-millimeter platoon, where our disregard for him was no stigma, since the 81 millimeter musclemen regarded us as a bunch of eight balls anyway. Yet even then, even after we had become completely disillusioned with him, he remained a figure of wonder among us. We could scarcely believe that an impostor could be clever enough actually to invent a language -- phonics, calligraphy, and all. It had looked like Japanese and sounded like Japanese, and during his seventeen days of lecturing on that ship Whitey had carried it all in his head, remembering every variation, every subtlety, every syntactic construction. ..."
"... https://www.americanheritage.com/man-who-could-speak-japanese ..."
Sep 14, 2019 | www.nakedcapitalism.com

Wukchumni , September 13, 2019 at 4:29 pm

Re: Fake list of grunge slang:

a fabulous tale of the South Pacific by William Manchester

The Man Who Could Speak Japanese

"We wrote it down.

The next phrase was:

" ' Booki fai kiz soy ?' " said Whitey. "It means 'Do you surrender?' "

Then:

" ' Mizi pok loi ooni rak tong zin ?' 'Where are your comrades?' "

"Tong what ?" rasped the colonel.

"Tong zin , sir," our instructor replied, rolling chalk between his palms. He arched his eyebrows, as though inviting another question. There was one. The adjutant asked, "What's that gizmo on the end?"

Of course, it might have been a Japanese newspaper. Whitey's claim to be a linguist was the last of his status symbols, and he clung to it desperately. Looking back, I think his improvisations on the Morton fantail must have been one of the most heroic achievements in the history of confidence men -- which, as you may have gathered by now, was Whitey's true profession. Toward the end of our tour of duty on the 'Canal he was totally discredited with us and transferred at his own request to the 81-millimeter platoon, where our disregard for him was no stigma, since the 81 millimeter musclemen regarded us as a bunch of eight balls anyway. Yet even then, even after we had become completely disillusioned with him, he remained a figure of wonder among us. We could scarcely believe that an impostor could be clever enough actually to invent a language -- phonics, calligraphy, and all. It had looked like Japanese and sounded like Japanese, and during his seventeen days of lecturing on that ship Whitey had carried it all in his head, remembering every variation, every subtlety, every syntactic construction.

https://www.americanheritage.com/man-who-could-speak-japanese

[Sep 08, 2019] The Art of Defensive Programming by Diego

Dec 25, 2016 | medium.com

... ... ...

Never trust user input

Always assume you're going to receive something you don't expect. This should be your approach as a defensive programmer against user input, or in general against things coming into your system. That's because, as we said, we can expect the unexpected. Try to be as strict as possible. Assert that your input values are what you expect.
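
A minimal sketch of this kind of strictness, assuming a hypothetical quantity form field that must be a positive integer:

<?php

// filter_var() returns false when the value is missing or not a valid integer in range.
$quantity = filter_var($_POST['quantity'] ?? null, FILTER_VALIDATE_INT, [
    'options' => ['min_range' => 1],
]);

if ($quantity === false) {
    // Reject anything that is not what we expect, as early as possible.
    throw new InvalidArgumentException('quantity must be a positive integer');
}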

The best defense is a good offense

Use whitelists, not blacklists. For example, when validating an image extension, don't check for the invalid types; check for the valid types, excluding all the rest. In PHP, however, you also have a large number of open source validation libraries to make your job easier.
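
A rough sketch of whitelist validation, assuming the uploaded file name is in a $filename variable and only a handful of image extensions are allowed:

<?php

// Only explicitly allowed extensions pass; everything else is rejected.
$allowed = ['jpg', 'jpeg', 'png', 'gif'];

$extension = strtolower(pathinfo($filename, PATHINFO_EXTENSION));

if (!in_array($extension, $allowed, true)) {
    throw new InvalidArgumentException("Unsupported image type: {$extension}");
}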

The best defense is a good offense. Be strict

Use database abstraction

The first of the OWASP Top 10 Security Vulnerabilities is Injection. That means someone (a lot of people out there) is still not using secure tools to query their databases. Please use database abstraction packages and libraries. In PHP you can use PDO to ensure basic injection protection.
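
A minimal sketch with a PDO prepared statement, assuming a local MySQL database and a users table (both just illustrative); the user input is bound as a parameter instead of being concatenated into the SQL string:

<?php

$pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// The :email placeholder keeps the input out of the SQL text entirely.
$stmt = $pdo->prepare('SELECT id, email FROM users WHERE email = :email');
$stmt->execute([':email' => $_POST['email'] ?? '']);
$user = $stmt->fetch(PDO::FETCH_ASSOC);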

Don't reinvent the wheel

You don't use a framework (or micro framework)? Well, you like doing extra work for no reason, congratulations! It's not only about frameworks, but also about new features where you could easily use something that's already out there, well tested, trusted by thousands of developers and stable, rather than crafting something by yourself only for the sake of it. The only reasons why you should build something by yourself are that you need something that doesn't exist, or that exists but doesn't fit your needs (bad performance, missing features, etc.)

That's what is called intelligent code reuse. Embrace it

Don't trust developers

Defensive programming can be related to something called Defensive Driving. In Defensive Driving we assume that everyone around us can potentially make mistakes. So we have to be careful even about others' behavior. The same concept applies to Defensive Programming, where we, as developers, shouldn't trust other developers' code. We shouldn't trust our own code either.

In big projects, where many people are involved, we can have many different ways of writing and organizing code. This can also lead to confusion and even more bugs. That's why we should enforce coding styles and a mess detector to make our life easier.

Write SOLID code

That's the tough part for a (defensive) programmer: writing code that doesn't suck. And this is a thing many people know and talk about, but few really care about or put the right amount of attention and effort into in order to achieve SOLID code.

Let's see some bad examples

Don't: Uninitialized properties

<?php

class BankAccount
{
    protected $currency = null;

    public function setCurrency($currency) { ... }

    public function payTo(Account $to, $amount)
    {
        // sorry for this silly example
        $this->transaction->process($to, $amount, $this->currency);
    }
}

// I forgot to call $bankAccount->setCurrency('GBP');
$bankAccount->payTo($joe, 100);

In this case we have to remember that for issuing a payment we need to call setCurrency first. That's a really bad thing: a state-changing operation like that (issuing a payment) shouldn't be done in two steps, using two (n) public methods. We can still have many methods to do the payment, but we must have only one simple public method in order to change the state (objects should never be in an inconsistent state).

In this case we made it even better, encapsulating the uninitialised property into the Money object

<?php

class BankAccount
{
    public function payTo(Account $to, Money $money) { ... }
}

$bankAccount->payTo($joe, new Money(100, new Currency('GBP')));

Make it foolproof. Don't use uninitialized object properties

Don't: Leaking state outside class scope

<?php

class Message
{
    protected $content;

    public function setContent($content)
    {
        $this->content = $content;
    }
}

class Mailer
{
    protected $message;

    public function __construct(Message $message)
    {
        $this->message = $message;
    }

    public function sendMessage()
    {
        var_dump($this->message);
    }
}

$message = new Message();
$message->setContent("bob message");
$joeMailer = new Mailer($message);

$message->setContent("joe message");
$bobMailer = new Mailer($message);

$joeMailer->sendMessage();
$bobMailer->sendMessage();

In this case Message is passed by reference and the result will be "joe message" in both cases. One solution would be to clone the message object in the Mailer constructor. But what we should always try to do is use an (immutable) value object instead of a plain mutable Message object. Use immutable objects when you can

<?php

class Message
{
    protected $content;

    public function __construct($content)
    {
        $this->content = $content;
    }
}

class Mailer
{
    protected $message;

    public function __construct(Message $message)
    {
        $this->message = $message;
    }

    public function sendMessage()
    {
        var_dump($this->message);
    }
}

$joeMailer = new Mailer(new Message("bob message"));
$bobMailer = new Mailer(new Message("joe message"));

$joeMailer->sendMessage();
$bobMailer->sendMessage();
Write tests

We still need to say that? Writing unit tests will help you adhere to common principles such as High Cohesion, Single Responsibility, Low Coupling and right object composition. It helps you test not only the small working unit cases but also the way you structured your objects. Indeed, you'll clearly see, when testing your small functions, how many cases you need to test and how many objects you need to mock in order to achieve 100% code coverage
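
A small sketch of such a test with PHPUnit, assuming hypothetical getAmount() and getCurrency() accessors on the Money and Currency objects used earlier (they are not defined in the snippets above):

<?php

use PHPUnit\Framework\TestCase;

final class MoneyTest extends TestCase
{
    // One small, isolated behaviour per test keeps cohesion high and mocking low.
    public function testKeepsAmountAndCurrency(): void
    {
        $money = new Money(100, new Currency('GBP'));

        $this->assertSame(100, $money->getAmount());                  // hypothetical accessor
        $this->assertSame('GBP', $money->getCurrency()->getCode());   // hypothetical accessor
    }
}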

Conclusions

Hope you liked the article. Remember, those are just suggestions; it's up to you to know when, where and if to apply them.

[Sep 07, 2019] Knuth: Early on in the TeX project I also had to do programming of a completely different type on the Zilog CPU which was at the heart of the laser printer that I used

Sep 07, 2019 | archive.computerhistory.org

Knuth: Yeah. That's absolutely true. I've got to get another thought out of my mind though. That is, early on in the TeX project I also had to do programming of a completely different type. I told you last week that this was my first real exercise in structured programming, which was one of Dijkstra's huge... That's one of the few breakthroughs in the history of computer science, in a way. He was actually responsible for maybe two of the ten that I know.

So I'm doing structured programming as I'm writing TeX. I'm trying to do it right, the way I should've been writing programs in the 60s. Then I also got this typesetting machine, which had, inside of it, a tiny 8080 chip or something. I'm not sure exactly. It was a Zilog, or some very early Intel chip. Way before the 386s. A little computer with 8-bit registers and a small number of things it could do. I had to write my own assembly language for this, because the existing software for writing programs for this little micro thing were so bad. I had to write actually thousands of lines of code for this, in order to control the typesetting. Inside the machine I had to control a stepper motor, and I had to accelerate it.

Every so often I had to give another [command] saying, "Okay, now take a step," and then continue downloading a font from the mainframe.

I had six levels of interrupts in this program. I remember talking to you at this time, saying, "Ed, I'm programming in assembly language for an 8-bit computer," and you said "Yeah, you've been doing the same thing and it's fun again."

You know, you'll remember. We'll undoubtedly talk more about that when I have my turn interviewing you in a week or so. This is another aspect of programming: that you also feel that you're in control and that there's not a black box separating you. It's not only the power, but it's the knowledge of what's going on; that nobody's hiding something. It's also this aspect of jumping levels of abstraction. In my opinion, the thing that computer scientists are best at is seeing things at many levels of detail: high level, intermediate levels, and lowest levels. I know if I'm adding 1 to a certain number, that this is getting me towards some big goal at the top. People enjoy most the things that they're good at. Here's a case where if you're working on a machine that has only this 8-bit capability, but in order to do this you have to go through levels, of not only that machine, but also to the next level up of the assembler, and then you have a simulator in which you can help debug your programs, and you have higher level languages that go through, and then you have the typesetting at the top. There are these six or seven levels all present at the same time. A computer scientist is in heaven in a situation like this.

Feigenbaum: Don, to get back, I want to ask you about that as part of the next question. You went back into programming in a really serious way. It took you, as I said before, ten years, not one year, and you didn't quit. As soon as you mastered one part of it, you went into Metafont, which is another big deal. To what extent were you doing that because you needed to, what I might call expose yourself to, or upgrade your skills in, the art that had emerged over the decade-and-a-half since you had done RUNCIBLE? And to what extent did you do it just because you were driven to be a programmer? You loved programming.

Knuth: Yeah. I think your hypothesis is good. It didn't occur to me at the time that I just had to program in order to be a happy man. Certainly I didn't find my other roles distasteful, except for fundraising. I enjoyed every aspect of being a professor except dealing with proposals, which I did my share of, but that was a necessary evil sort of in my own thinking, I guess. But the fact that now I'm still compelled to I wake up in the morning with an idea, and it makes my day to think of adding a couple of lines to my program. Gives me a real high. It must be the way poets feel, or musicians and so on, and other people, painters, whatever. Programming does that for me. It's certainly true. But the fact that I had to put so much time in it was not totally that, I'm sure, because it became a responsibility. It wasn't just for Phyllis and me, as it turned out. I started working on it at the AI lab, and people were looking at the output coming out of the machine and they would say, "Hey, Don, how did you do that?" Guy Steele was visiting from MIT that summer and he said, "Don, I want to port this to take it to MIT." I didn't have two users.

First I had 10, and then I had 100, and then I had 1000. Every time it went to another order of magnitude I had to change the system, because it would almost match their needs but then they would have very good suggestions as to something it wasn't covering. Then when it went to 10,000 and when it went to 100,000, the last stage was 10 years later when I made it friendly for the other alphabets of the world, where people have accented letters and Russian letters. I had started out with only 7-bit codes. I had so many international users by that time, I saw that was a fundamental error. I started out with the idea that nobody would ever want to use a keyboard that could generate more than about 90 characters. It was going to be too complicated. But I was wrong. So it [TeX] was a burden as well, in the sense that I wanted to do a responsible job.

I had actually consciously planned an end-game that would take me four years to finish, and [then] not continue maintaining it and adding on, so that I could have something where I could say, "And now it's done and it's never going to change." I believe this is one aspect of software that, not for every system, but for TeX, it was vital that it became something that wouldn't be a moving target after while.

Feigenbaum: The books on TeX were a period. That is, you put a period down and you said, "This is it."

[Sep 07, 2019] As soon as you stop writing code on a regular basis you stop being a programmer. You lose your qualifications very quickly. That's the typical tragedy of talented programmers who become mediocre managers or, worse, theoretical computer scientists

Programming skills are somewhat similar to the skills of people who play the violin or piano. As soon as you stop playing, the skills start to evaporate: first slowly, then more quickly. In two years you will probably lose 80%.
Notable quotes:
"... I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. ..."
Sep 07, 2019 | archive.computerhistory.org

Dijkstra said he was proud to be a programmer. Unfortunately he changed his attitude completely, and I think he wrote his last computer program in the 1980s. At this conference I went to in 1967 about simulation language, Chris Strachey was going around asking everybody at the conference what was the last computer program you wrote. This was 1967. Some of the people said, "I've never written a computer program." Others would say, "Oh yeah, here's what I did last week." I asked Edsger this question when I visited him in Texas in the 90s and he said, "Don, I write programs now with pencil and paper, and I execute them in my head." He finds that a good enough discipline.

I think he was mistaken on that. He taught me a lot of things, but I really think that if he had continued... One of Dijkstra's greatest strengths was that he felt a strong sense of aesthetics, and he didn't want to compromise his notions of beauty. They were so intense that when he visited me in the 1960s, I had just come to Stanford. I remember the conversation we had. It was in the first apartment, our little rented house, before we had electricity in the house.

We were sitting there in the dark, and he was telling me how he had just learned about the specifications of the IBM System/360, and it made him so ill that his heart was actually starting to flutter.

He intensely disliked things that he didn't consider clean to work with. So I can see that he would have distaste for the languages that he had to work with on real computers. My reaction to that was to design my own language, and then make Pascal so that it would work well for me in those days. But his response was to do everything only intellectually.

So, programming.

I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. I think of a question that I want to answer, or I have part of my book where I want to present something. But I can't just present it by reading about it in a book. As I code it, it all becomes clear in my head. It's just the discipline. The fact that I have to translate my knowledge of this method into something that the machine is going to understand just forces me to make that crystal-clear in my head. Then I can explain it to somebody else infinitely better. The exposition is always better if I've implemented it, even though it's going to take me more time.

[Sep 07, 2019] Knuth about computer science and money: At that point I made the decision in my life that I wasn't going to optimize my income;

Sep 07, 2019 | archive.computerhistory.org

So I had a programming hat when I was outside of Cal Tech, and at Cal Tech I am a mathematician taking my grad studies. A startup company, called Green Tree Corporation because green is the color of money, came to me and said, "Don, name your price. Write compilers for us and we will take care of finding computers for you to debug them on, and assistance for you to do your work. Name your price." I said, "Oh, okay. $100,000.", assuming that this was... In that era this was not quite at Bill Gates' level today, but it was sort of out there.

The guy didn't blink. He said, "Okay." I didn't really blink either. I said, "Well, I'm not going to do it. I just thought this was an impossible number."

At that point I made the decision in my life that I wasn't going to optimize my income; I was really going to do what I thought I could do for well, I don't know. If you ask me what makes me most happy, number one would be somebody saying "I learned something from you". Number two would be somebody saying "I used your software". But number infinity would be Well, no. Number infinity minus one would be "I bought your book". It's not as good as "I read your book", you know. Then there is "I bought your software"; that was not in my own personal value. So that decision came up. I kept up with the literature about compilers. The Communications of the ACM was where the action was. I also worked with people on trying to debug the ALGOL language, which had problems with it. I published a few papers, like "The Remaining Trouble Spots in ALGOL 60" was one of the papers that I worked on. I chaired a committee called "Smallgol" which was to find a subset of ALGOL that would work on small computers. I was active in programming languages.

[Sep 07, 2019] Knuth: maybe 1 in 50 people have the "computer scientist's" type of intellect

Sep 07, 2019 | conservancy.umn.edu

Frana: You have made the comment several times that maybe 1 in 50 people have the "computer scientist's mind."

Knuth: Yes.

Frana: I am wondering if a large number of those people are trained professional librarians? [laughter] There is some strangeness there. But can you pinpoint what it is about the mind of the computer scientist that is....

Knuth: That is different?

Frana: What are the characteristics?

Knuth: Two things: one is the ability to deal with non-uniform structure, where you have case one, case two, case three, case four. Or that you have a model of something where the first component is integer, the next component is a Boolean, and the next component is a real number, or something like that, you know, non-uniform structure. To deal fluently with those kinds of entities, which is not typical in other branches of mathematics, is critical. And the other characteristic ability is to shift levels quickly, from looking at something in the large to looking at something in the small, and many levels in between, jumping from one level of abstraction to another. You know that, when you are adding one to some number, that you are actually getting closer to some overarching goal. These skills, being able to deal with nonuniform objects and to see through things from the top level to the bottom level, these are very essential to computer programming, it seems to me. But maybe I am fooling myself because I am too close to it.
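A tiny illustration of what "non-uniform structure" means in practice (an editor's sketch in C; the field names and cases are invented, not Knuth's examples): a record whose components have different types, and a value that can be one of several cases.

    #include <stdbool.h>

    /* A record whose components are of different types: "the first
       component is integer, the next component is a Boolean, and the
       next component is a real number". */
    struct record {
        int    count;
        bool   seen;
        double weight;
    };

    /* A value that is "case one, case two, case three": a tagged union. */
    enum kind { CASE_ONE, CASE_TWO, CASE_THREE };

    struct variant {
        enum kind tag;
        union {
            int         as_int;    /* CASE_ONE   */
            double      as_real;   /* CASE_TWO   */
            const char *as_text;   /* CASE_THREE */
        } u;
    };

    int main(void)
    {
        struct record  r = { 3, true, 2.5 };
        struct variant v = { CASE_TWO, { .as_real = r.weight } };
        return v.tag == CASE_TWO ? 0 : 1;
    }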

Frana: It is the hardest thing to really understand that which you are existing within.

Knuth: Yes.

[Sep 07, 2019] conservancy.umn.edu

Sep 07, 2019 | conservancy.umn.edu

Knuth: Well, certainly it seems the way things are going. You take any particular subject that you are interested in and you try to see if somebody with an American high school education has learned it, and you will be appalled. You know, Jesse Jackson thinks that students know nothing about political science, and I am sure the chemists think that students don't know chemistry, and so on. But somehow they get it when they have to later. But I would say certainly the students now have been getting more of a superficial idea of mathematics than they used to. We have to do remedial stuff at Stanford that we didn't have to do thirty years ago.

Frana: Gio [Wiederhold] said much the same thing to me.

Knuth: The most scandalous thing was that Stanford's course in linear algebra could not get to eigenvalues because the students didn't know about complex numbers. Now every course at Stanford that takes linear algebra as a prerequisite does so because they want the students to know about eigenvalues. But here at Stanford, with one of the highest admission standards of any university, our students don't know complex numbers. So we have to teach them that when they get to college. Yes, this is definitely a breakdown.

Frana: Was your mathematics training in high school particularly good, or was it that you spent a lot of time actually doing problems?

Knuth: No, my mathematics training in high school was not good. My teachers could not answer my questions and so I decided I'd go into physics. I mean, I had played with mathematics in high school. I did a lot of work drawing graphs and plotting points and I used pi as the radix of a number system, and explored what the world would be like if you wanted to do logarithms and you had a number system based on pi. And I had played with stuff like that. But my teachers couldn't answer questions that I had.
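For readers curious what "a number system based on pi" might look like, here is a small editor's sketch in C (not Knuth's high-school work; the number chosen and the digit count are arbitrary): a greedy expansion of a value in radix pi, where every digit is between 0 and 3 because pi is less than 4.

    /* Greedy expansion of x in radix pi: at each power of pi, take the
       largest multiple that still fits.  Precision handling is naive;
       compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double base = 3.14159265358979323846;
        double x = 10.0;                        /* the number to expand */
        int k = (int)floor(log(x) / log(base));
        int int_digits = k + 1;                 /* digits before the radix point */

        printf("10 (decimal) in radix pi: ");
        for (int i = 0; i < 12; i++, k--) {
            double p = pow(base, k);
            int d = (int)floor(x / p);          /* always 0..3 for radix pi */
            x -= d * p;
            printf("%d", d);
            if (k == 0)
                printf(".");
        }
        printf("  (%d digits before the point)\n", int_digits);
        return 0;
    }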

... ... ... Frana: Do you have an answer? Are American students different today? In one of your interviews you discuss the problem of creativity versus gross absorption of knowledge.

Knuth: Well, that is part of it. Today we have mostly a sound bite culture, this lack of attention span and trying to learn how to pass exams.

Frana: Yes,

[Sep 07, 2019] Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a more coherent structure so that it is easier to put things together

Sep 07, 2019 | conservancy.umn.edu

Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a more coherent structure so that it is easier to put things together. I can see that I could be viewed as a scholar that does his best to check out sources of material, so that people get credit where it is due. And to check facts over, not just to look at the abstract of something, but to see what the methods were that did it and to fill in holes if necessary. I look at my role as being able to understand the motivations and terminology of one group of specialists and boil it down to a certain extent so that people in other parts of the field can use it. I try to listen to the theoreticians and select what they have done that is important to the programmer on the street; to remove technical jargon when possible.

But I have never been good at any kind of a role that would be making policy, or advising people on strategies, or what to do. I have always been best at refining things that are there and bringing order out of chaos. I sometimes raise new ideas that might stimulate people, but not really in a way that would be in any way controlling the flow. The only time I have ever advocated something strongly was with literate programming; but I do this always with the caveat that it works for me, not knowing if it would work for anybody else.

When I work with a system that I have created myself, I can always change it if I don't like it. But everybody who works with my system has to work with what I give them. So I am not able to judge my own stuff impartially. So anyway, I have always felt bad about if anyone says, 'Don, please forecast the future,'...

[Sep 07, 2019] The idea of literate programming is that I'm talking to, I'm writing a program for, a human being to read rather than a computer to read. This is probably not enough

Knuth's description is convoluted and not very convincing. Essentially, Perl POD implements the idea of literate programming inside the Perl interpreter, allowing long fragments of documentation to be mixed with the text of the program. But this is not enough. Essentially Knuth simply adapted TeX to provide a high-level description of what the program is doing. But mixing the description with the code has one important problem: while it helps to understand the logic of the program, the program itself becomes more difficult to debug because it spreads over far too many pages.
So there should be an additional step that provides the capability to separate the documentation and the program in the programming editor, folding all documentation (or folding all program text). You need the capability to see alternately just the documentation or just the program, preserving the original line numbers. This issue evades Knuth, who probably mostly works with paper anyway.
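To make the point concrete, here is a minimal sketch (an editor's example in plain C, not Knuth's WEB/CWEB and not Perl POD): the narrative lives in block comments ahead of each chunk, so a folding editor could collapse either the comments (to debug the bare program) or the code (to read just the documentation) while line numbers stay stable.

    /* == Purpose ==========================================================
       We want to count the lines of a text stream.  The informal account
       goes here, as long as it needs to be; the compiler ignores it.     */
    #include <stdio.h>

    static long count_lines(FILE *in)
    {
        long lines = 0;
        int ch;

        /* == The scan =====================================================
           Read one character at a time; every '\n' ends a line.  A final
           line without a trailing newline is still counted.              */
        int last = '\n';
        while ((ch = getc(in)) != EOF) {
            if (ch == '\n')
                lines++;
            last = ch;
        }
        if (last != '\n')
            lines++;
        return lines;
    }

    int main(void)
    {
        printf("%ld\n", count_lines(stdin));
        return 0;
    }
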
Sep 07, 2019 | archive.computerhistory.org
Feigenbaum: I'd like to do that, to move on to the third period. You've already mentioned one of them, the retirement issue, and let's talk about that. The second one you mentioned quite early on, which is the birth in your mind of literate programming, and that's another major development. Before I quit my little monologue here I also would like to talk about random graphs, because I think that's a stunning story that needs to be told. Let's talk about either the retirement or literate programming.

Knuth: I'm glad you brought up literate programming, because it was in my mind the greatest spinoff of the TeX project. I'm not the best person to judge, but in some ways, certainly for my own life, it was the main plus I got out of the TeX project was that I learned a new way to program.

I love programming, but I really love literate programming. The idea of literate programming is that I'm talking to, I'm writing a program for, a human being to read rather than a computer to read. It's still a program and it's still doing the stuff, but I'm a teacher to a person. I'm addressing my program to a thinking being, but I'm also being exact enough so that a computer can understand it as well.

And that made me think. I'm not sure if I mentioned last week, but I think I did mention last week, that the genesis of literate programming was that Tony Hoare was interested in publishing source code for programs. This was a challenge, to find a way to do this, and literate programming was my answer to this question. That is, if I had to take a large program like TeX or METAFONT, fairly large, it's 5 or 600 pages of a book--how would you do that?

The answer was to present it as sort of a hypertext, where you have a lot of simple things connected in simple ways in order to understand the whole. Once I realized that this was a good way to write programs, then I had this strong urge to go through and take every program I'd ever written in my life and make it literate. It's so much better than the next best way, I can't imagine trying to write a program any other way. On the other hand, the next best way is good enough that people can write lots and lots of very great programs without using literate programming. So it's not essential that they do. But I do have the gut feeling that if some company would start using literate programming for all of its software that I would be much more inclined to buy that software than any other.

Feigenbaum: Just a couple of things about that that you have mentioned to me in the past. One is your feeling that programs can be beautiful, and therefore they ought to be read like poetry. The other one is a heuristic that you told me about, which is if you want to get across an idea, you got to present it two ways: a kind of intuitive way, and a formal way, and that fits in with literate programming.

Knuth: Right.

Feigenbaum: Do you want to comment on those?

Knuth: Yeah. That's the key idea that I realized as I'm writing The Art of Computer Programming, the textbook. That the key to good exposition is to say everything twice, or three times, where I say something informally and formally. The reader gets to lodge it in his brain in two different ways, and they reinforce each other. All the time I'm giving in my textbooks I'm saying not only that I'm.. Well, let's see. I'm giving a formula, but I'm also interpreting the formula as to what it's good for. I'm giving a definition, and immediately I apply the definition to a simple case, so that the person learns not only the output of the definition -- what it means -- but also to internalize, using it once in your head. Describing a computer program, it's natural to say everything in the program twice. You say it in English, what the goals of this part of the program are, but then you say in your computer language -- in the formal language, whatever language you're using, if it's LISP or Pascal or Fortran or whatever, C, Java -- you give it in the computer language.

You alternate between the informal and the formal.

Literate programming enforces this idea. It has very interesting effects. I find that, for example, writing a system program, I did examples with literate programming where I took device drivers that I received from Sun Microsystems. They had device drivers for one of my printers, and I rewrote the device driver so that I could combine my laser printer with a previewer that would get exactly the same raster image. I took this industrial strength software and I redid it as a literate program. I found out that the literate version was actually a lot better in several other ways that were completely unexpected to me, because it was more robust.

When you're writing a subroutine in the normal way, a good system program, a subroutine, is supposed to check that its parameters make sense, or else it's going to crash the machine.

If they don't make sense it tries to do a reasonable error recovery from the bad data. If you're writing the subroutine in the ordinary way, just start the subroutine, and then all the code.

Then at the end, if you do a really good job of this testing and error recovery, it turns out that your subroutine ends up having 30 lines of code for error recovery and checking, and five lines of code for what the real purpose of the subroutine is. It doesn't look right to you. You're looking at the subroutine and it looks the purpose of the subroutine is to write certain error messages out, or something like this.

Since it doesn't quite look right, a programmer, as he's writing it, is suddenly unconsciously encouraged to minimize the amount of error checking that's going on, and get it done in some elegant fashion so that you can see what the real purpose of the subroutine is in these five lines. Okay.

But now with literate programming, you start out, you write the subroutine, and you put a line in there to say, "Check for errors," and then you do your five lines.

The subroutine looks good. Now you turn the page. On the next page it says, "Check for errors." Now you're encouraged.

As you're writing the next page, it looks really right to do a good checking for errors. This kind of thing happened over and over again when I was looking at the industrial software. This is part of what I meant by some of the effects of it.
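A rough approximation of the effect Knuth describes, as an editor's sketch in plain C rather than a real literate-programming chunk: the "Check for errors" part is factored out so the handful of lines that do the real work stay visible, and the checking gets a place of its own where it is natural to do it thoroughly. The function names are invented for illustration.

    #include <stddef.h>
    #include <stdio.h>

    /* "Check for errors": gets a page of its own, so there is no
       temptation to skimp on it.  Returns 0 if the parameters make sense. */
    static int buffer_copy_check(const char *dst, const char *src, size_t n)
    {
        if (dst == NULL || src == NULL) {
            fprintf(stderr, "buffer_copy: NULL buffer\n");
            return -1;
        }
        if (n == 0) {
            fprintf(stderr, "buffer_copy: zero-length copy requested\n");
            return -1;
        }
        return 0;
    }

    /* The subroutine itself now reads as its real purpose. */
    static int buffer_copy(char *dst, const char *src, size_t n)
    {
        if (buffer_copy_check(dst, src, n) != 0)   /* "Check for errors" */
            return -1;
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i];
        return 0;
    }

    int main(void)
    {
        char out[4];
        return buffer_copy(out, "abc", 4) == 0 ? 0 : 1;
    }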

But the main point of being able to combine the informal and the formal means that a human being can understand the code much better than just looking at one or the other, or just looking at an ordinary program with sprinkled comments. It's so much easier to maintain the program. In the comments you also explain what doesn't work, or any subtleties. Or you can say, "Now note the following. Here is the tricky part in line 5, and it works because of this." You can explain all of the things that a maintainer needs to know.

I'm the maintainer too, but after a year I've forgotten totally what I was thinking when I wrote the program. All this goes in as part of the literate program, and makes the program easier to debug, easier to maintain, and better in quality. It does better error messages and things like that, because of the other effects. That's why I'm so convinced that literate programming is a great spinoff of the TeX project.

Feigenbaum: Just one other comment. As you describe this, it's the kind of programming methodology you wish were being used on, let's say, the complex system that controls an aircraft. But Boeing isn't using it.

Knuth: Yeah. Well, some companies do, but the small ones. Hewlett-Packard had a group in Boise that was sold on it for a while. I keep getting I got a letter from Korea not so long ago. The guy says he thinks it's wonderful; he just translated the CWEB manual into Korean. A lot of people like it, but it doesn't take over. It doesn't get to a critical mass. I think the reason is that a lot of people don't enjoy writing the English parts. A lot of good programmers don't enjoy writing the English parts. Two percent of the world's population is born to be programmers. I don't know what percent is born to be writers, but you have to be in the intersection in order to be really happy with literate programming. I tried it with Stanford students. I had seven undergraduates. We did a project leading to the Stanford GraphBase. Six of the seven did very well with it, and the seventh one hated it.

Feigenbaum: Don, I want to get on to other topics, but you mentioned GWEB. Can you talk about WEB and GWEB, just because we're trying to be complete?

Knuth: Yeah. It's CWEB. The original WEB language was invented before the [world wide] web of the internet, but it was the only pronounceable three-letter acronym that hadn't been used at the time. It described nicely the hypertext idea, which now is why we often refer to the internet as a web too. CWEB is the version that Silvio Levy ported from the original Pascal. English and Pascal was WEB. English and C is CWEB. Now it works also with C++. Then there's FWEB for Fortran, and there's noweb that works with any language. There's all kinds of spinoffs. There's the one for Lisp. People have written books where they have their own versions of CWEB too. I got this wonderful book from Germany a year ago that goes through the entire MP3 standard. The book is not only a textbook that you can use in an undergraduate course, but it's also a program that will read an MP3 file. The book itself will tell exactly what's in the MP3 file, including its header and its redundancy check mechanism, plus all the ways to play the audio, and algorithms for synthesizing music. All of it a part of a textbook, all part of a literate program. In other words, I see the idea isn't dying. But it's just not taking over.

Feigenbaum: We've been talking about, as we've been moving toward the third Stanford period which includes the work on literate programming even though that originated earlier. There was another event that you told me about which you described as probably your best contribution to mathematics, the subject of random graphs. It involved a discovery story which I think is very interesting. If you could sort of wander us through random graphs and what this discovery was.

[Sep 06, 2019] Knuth: Programming and architecture are interrelated and it is impossible to create good architecture without actually programming at least a prototype

Notable quotes:
"... When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?" ..."
"... When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this." ..."
Sep 06, 2019 | archive.computerhistory.org

...I showed the second version of this design to two of my graduate students, and I said, "Okay, implement this, please, this summer. That's your summer job." I thought I had specified a language. I had to go away. I spent several weeks in China during the summer of 1977, and I had various other obligations. I assumed that when I got back from my summer trips, I would be able to play around with TeX and refine it a little bit. To my amazement, the students, who were outstanding students, had not completed [it]. They had a system that was able to do about three lines of TeX. I thought, "My goodness, what's going on? I thought these were good students." Well afterwards I changed my attitude to saying, "Boy, they accomplished a miracle."

Because going from my specification, which I thought was complete, they really had an impossible task, and they had succeeded wonderfully with it. These students, by the way, [were] Michael Plass, who has gone on to be the brains behind almost all of Xerox's Docutech software and all kind of things that are inside of typesetting devices now, and Frank Liang, one of the key people for Microsoft Word.

He did important mathematical things as well as his hyphenation methods which are quite used in all languages now. These guys were actually doing great work, but I was amazed that they couldn't do what I thought was just sort of a routine task. Then I became a programmer in earnest, where I had to do it. The reason is when you're doing programming, you have to explain something to a computer, which is dumb.

When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?"

It just didn't occur to the person writing the design specification. When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this."

If I hadn't been in China they would've scheduled an appointment with me and stopped their programming for a day. Then they would come in at the designated hour and we would talk. They would take 15 minutes to present to me what the problem was, and then I would think about it for a while, and then I'd say, "Oh yeah, do this. " Then they would go home and they would write code for another five minutes and they'd have to schedule another appointment.

I'm probably exaggerating, but this is why I think Bob Floyd's Chiron compiler never got going. Bob worked many years on a beautiful idea for a programming language, where he designed a language called Chiron, but he never touched the programming himself. I think this was actually the reason that he had trouble with that project, because it's so hard to do the design unless you're faced with the low-level aspects of it, explaining it to a machine instead of to another person.

Forsythe, I think it was, who said, "People have said traditionally that you don't understand something until you've taught it in a class. The truth is you don't really understand something until you've taught it to a computer, until you've been able to program it." At this level, programming was absolutely important

[Sep 06, 2019] Oral histories

Sep 06, 2019 | www-cs-faculty.stanford.edu

Having just celebrated my 10000th birthday (in base three), I'm operating a little bit in history mode. Every once in awhile, people have asked me to record some of my memories of past events --- I guess because I've been fortunate enough to live at some pretty exciting times, computersciencewise. These after-the-fact recollections aren't really as reliable as contemporary records; but they do at least show what I think I remember. And the stories are interesting, because they involve lots of other people.

So, before these instances of oral history themselves begin to fade from my memory, I've decided to record some links to several that I still know about:

Interview by Philip L Frana at the Charles Babbage Institute, November 2001
transcript of OH 332
audio file (2:00:33)
Interviews commissioned by Peoples Archive, taped in March 2006
playlist for 97 videos (about 2--8 minutes each)
Interview by Ed Feigenbaum at the Computer History Museum, March 2007
Part 1 (3:07:25) Part 2 (4:02:46)
(transcript)
Interview by Susan Schofield for the Stanford Historical Society, May 2018
(audio files, 2:20:30 and 2:14:25; transcript)
Interview by David Brock and Hansen Hsu about the computer programs that I wrote during the 1950s, July 2018
video (1:30:0)
(texts of the actual programs)

Some extended interviews, not available online, have also been published in books, notably in Chapters 7--17 of Companion to the Papers of Donald Knuth (conversations with Dikran Karagueuzian in the summer of 1996), and in two books by Edgar G. Daylight, The Essential Knuth (2013), Algorithmic Barriers Falling (2014).

[Sep 06, 2019] Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming keeps getting harder because more stuff is discovered

Sep 06, 2019 | conservancy.umn.edu

Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming keeps getting harder because more stuff is discovered. I can cope with learning about one new technique per day, but I can't take ten in a day all at once. So conferences are depressing; it means I have so much more work to do. If I hide myself from the truth I am much happier.

[Sep 06, 2019] How TAOCP was hatched

Notable quotes:
"... Also, Addison-Wesley was the people who were asking me to do this book; my favorite textbooks had been published by Addison Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a secondyear gradate student -- this was a thrill. ..."
"... But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly. ..."
Sep 06, 2019 | archive.computerhistory.org

Knuth: This is, of course, really the story of my life, because I hope to live long enough to finish it. But I may not, because it's turned out to be such a huge project. I got married in the summer of 1961, after my first year of graduate school. My wife finished college, and I could use the money I had made -- the $5000 on the compiler -- to finance a trip to Europe for our honeymoon.

We had four months of wedded bliss in Southern California, and then a man from Addison-Wesley came to visit me and said "Don, we would like you to write a book about how to write compilers."

The more I thought about it, I decided "Oh yes, I've got this book inside of me."

I sketched out that day -- I still have the sheet of tablet paper on which I wrote -- I sketched out 12 chapters that I thought ought to be in such a book. I told Jill, my wife, "I think I'm going to write a book."

As I say, we had four months of bliss, because the rest of our marriage has all been devoted to this book. Well, we still have had happiness. But really, I wake up every morning and I still haven't finished the book. So I try to -- I have to -- organize the rest of my life around this, as one main unifying theme. The book was supposed to be about how to write a compiler. They had heard about me from one of their editorial advisors, that I knew something about how to do this. The idea appealed to me for two main reasons. One is that I did enjoy writing. In high school I had been editor of the weekly paper. In college I was editor of the science magazine, and I worked on the campus paper as copy editor. And, as I told you, I wrote the manual for that compiler that we wrote. I enjoyed writing, number one.

Also, Addison-Wesley was the people who were asking me to do this book; my favorite textbooks had been published by Addison Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a second-year graduate student -- this was a thrill.

Another very important reason at the time was that I knew that there was a great need for a book about compilers, because there were a lot of people who even in 1962 -- this was January of 1962 -- were starting to rediscover the wheel. The knowledge was out there, but it hadn't been explained. The people who had discovered it, though, were scattered all over the world and they didn't know of each other's work either, very much. I had been following it. Everybody I could think of who could write a book about compilers, as far as I could see, they would only give a piece of the fabric. They would slant it to their own view of it. There might be four people who could write about it, but they would write four different books. I could present all four of their viewpoints in what I would think was a balanced way, without any axe to grind, without slanting it towards something that I thought would be misleading to the compiler writer for the future. I considered myself as a journalist, essentially. I could be the expositor, the tech writer, that could do the job that was needed in order to take the work of these brilliant people and make it accessible to the world. That was my motivation. Now, I didn't have much time to spend on it then, I just had this page of paper with 12 chapter headings on it. That's all I could do while I'm a consultant at Burroughs and doing my graduate work. I signed a contract, but they said "We know it'll take you a while." I didn't really begin to have much time to work on it until 1963, my third year of graduate school, as I'm already finishing up on my thesis. In the summer of '62, I guess I should mention, I wrote another compiler. This was for Univac; it was a FORTRAN compiler. I spent the summer, I sold my soul to the devil, I guess you say, for three months in the summer of 1962 to write a FORTRAN compiler. I believe that the salary for that was $15,000, which was much more than an assistant professor. I think assistant professors were getting eight or nine thousand in those days.

Feigenbaum: Well, when I started in 1960 at [University of California] Berkeley, I was getting $7,600 for the nine-month year.

Knuth: Yeah, so you see it. I got $15,000 for a summer job in 1962 writing a FORTRAN compiler. One day during that summer I was writing the part of the compiler that looks up identifiers in a hash table. The method that we used is called linear probing. Basically you take the variable name that you want to look up, you scramble it, like you square it or something like this, and that gives you a number between one and, well in those days it would have been between 1 and 1000, and then you look there. If you find it, good; if you don't find it, go to the next place and keep on going until you either get to an empty place, or you find the number you're looking for. It's called linear probing. There was a rumor that one of Professor Feller's students at Princeton had tried to figure out how fast linear probing works and was unable to succeed. This was a new thing for me. It was a case where I was doing programming, but I also had a mathematical problem that would go into my other [job]. My winter job was being a math student, my summer job was writing compilers. There was no mix. These worlds did not intersect at all in my life at that point. So I spent one day during the summer while writing the compiler looking at the mathematics of how fast does linear probing work. I got lucky, and I solved the problem. I figured out some math, and I kept two or three sheets of paper with me and I typed it up. ["Notes on 'Open' Addressing', 7/22/63] I guess that's on the internet now, because this became really the genesis of my main research work, which developed not to be working on compilers, but to be working on what they call analysis of algorithms, which is, have a computer method and find out how good is it quantitatively. I can say, if I got so many things to look up in the table, how long is linear probing going to take. It dawned on me that this was just one of many algorithms that would be important, and each one would lead to a fascinating mathematical problem. This was easily a good lifetime source of rich problems to work on. Here I am then, in the middle of 1962, writing this FORTRAN compiler, and I had one day to do the research and mathematics that changed my life for my future research trends. But now I've gotten off the topic of what your original question was.
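For readers who have not seen it, here is a small editor's sketch in C of the linear probing Knuth describes (the table size and the scrambling function are illustrative, not the 1962 compiler's actual code): scramble the identifier into a slot number, then walk forward until you find either the identifier or an empty slot.

    #include <stdio.h>
    #include <string.h>

    #define TABLE_SIZE 1000               /* "between 1 and 1000" in those days */

    static char table[TABLE_SIZE][32];    /* empty string means empty slot */

    /* Scramble the name into a starting slot. */
    static unsigned scramble(const char *name)
    {
        unsigned h = 0;
        while (*name)
            h = h * 31 + (unsigned char)*name++;
        return h % TABLE_SIZE;
    }

    /* Return the slot holding name, inserting it if absent; -1 if the table is full. */
    static int lookup(const char *name)
    {
        unsigned start = scramble(name);
        for (unsigned i = 0; i < TABLE_SIZE; i++) {
            unsigned slot = (start + i) % TABLE_SIZE;
            if (table[slot][0] == '\0') {              /* empty: not there yet */
                strncpy(table[slot], name, sizeof table[slot] - 1);
                return (int)slot;
            }
            if (strcmp(table[slot], name) == 0)        /* found it */
                return (int)slot;
        }
        return -1;
    }

    int main(void)
    {
        printf("x goes to slot %d\n", lookup("x"));
        printf("x found again at %d\n", lookup("x"));
        return 0;
    }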

Feigenbaum: We were talking about sort of the.. You talked about the embryo of The Art of Computing. The compiler book morphed into The Art of Computer Programming, which became a seven-volume plan.

Knuth: Exactly. Anyway, I'm working on a compiler and I'm thinking about this. But now I'm starting, after I finish this summer job, then I began to do things that were going to be relating to the book. One of the things I knew I had to have in the book was an artificial machine, because I'm writing a compiler book but machines are changing faster than I can write books. I have to have a machine that I'm totally in control of. I invented this machine called MIX, which was typical of the computers of 1962.

In 1963 I wrote a simulator for MIX so that I could write sample programs for it, and I taught a class at Caltech on how to write programs in assembly language for this hypothetical computer. Then I started writing the parts that dealt with sorting problems and searching problems, like the linear probing idea. I began to write those parts, which are part of a compiler, of the book. I had several hundred pages of notes gathering for those chapters for The Art of Computer Programming. Before I graduated, I've already done quite a bit of writing on The Art of Computer Programming.

I met George Forsythe about this time. George was the man who inspired both of us [Knuth and Feigenbaum] to come to Stanford during the '60s. George came down to Southern California for a talk, and he said, "Come up to Stanford. How about joining our faculty?" I said "Oh no, I can't do that. I just got married, and I've got to finish this book first." I said, "I think I'll finish the book next year, and then I can come up [and] start thinking about the rest of my life, but I want to get my book done before my son is born." Well, John is now 40-some years old and I'm not done with the book. Part of my lack of expertise is any good estimation procedure as to how long projects are going to take. I way underestimated how much needed to be written about in this book. Anyway, I started writing the manuscript, and I went merrily along writing pages of things that I thought really needed to be said. Of course, it didn't take long before I had started to discover a few things of my own that weren't in any of the existing literature. I did have an axe to grind. The message that I was presenting was in fact not going to be unbiased at all. It was going to be based on my own particular slant on stuff, and that original reason for why I should write the book became impossible to sustain. But the fact that I had worked on linear probing and solved the problem gave me a new unifying theme for the book. I was going to base it around this idea of analyzing algorithms, and have some quantitative ideas about how good methods were. Not just that they worked, but that they worked well: this method worked 3 times better than this method, or 3.1 times better than this method. Also, at this time I was learning mathematical techniques that I had never been taught in school. I found they were out there, but they just hadn't been emphasized openly, about how to solve problems of this kind.

So my book would also present a different kind of mathematics than was common in the curriculum at the time, that was very relevant to analysis of algorithm. I went to the publishers, I went to Addison Wesley, and said "How about changing the title of the book from 'The Art of Computer Programming' to 'The Analysis of Algorithms'." They said that will never sell; their focus group couldn't buy that one. I'm glad they stuck to the original title, although I'm also glad to see that several books have now come out called "The Analysis of Algorithms", 20 years down the line.

But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly.

I've got The Art of Computer Programming started out, and I'm working on my 12 chapters. I finish a rough draft of all 12 chapters by, I think it was like 1965. I've got 3,000 pages of notes, including a very good example of what you mentioned about seeing holes in the fabric. One of the most important chapters in the book is parsing: going from somebody's algebraic formula and figuring out the structure of the formula. Just the way I had done in seventh grade finding the structure of English sentences, I had to do this with mathematical sentences.

Chapter ten is all about parsing of context-free language, [which] is what we called it at the time. I covered what people had published about context-free languages and parsing. I got to the end of the chapter and I said, well, you can combine these ideas and these ideas, and all of a sudden you get a unifying thing which goes all the way to the limit. These other ideas had sort of gone partway there. They would say "Oh, if a grammar satisfies this condition, I can do it efficiently." "If a grammar satisfies this condition, I can do it efficiently." But now, all of a sudden, I saw there was a way to say I can find the most general condition that can be done efficiently without looking ahead to the end of the sentence. That you could make a decision on the fly, reading from left to right, about the structure of the thing. That was just a natural outgrowth of seeing the different pieces of the fabric that other people had put together, and writing it into a chapter for the first time. But I felt that this general concept, well, I didn't feel that I had surrounded the concept. I knew that I had it, and I could prove it, and I could check it, but I couldn't really intuit it all in my head. I knew it was right, but it was too hard for me, really, to explain it well.

So I didn't put in The Art of Computer Programming. I thought it was beyond the scope of my book. Textbooks don't have to cover everything when you get to the harder things; then you have to go to the literature. My idea at that time [is] I'm writing this book and I'm thinking it's going to be published very soon, so any little things I discover and put in the book I didn't bother to write a paper and publish in the journal because I figure it'll be in my book pretty soon anyway. Computer science is changing so fast, my book is bound to be obsolete.

It takes a year for it to go through editing, and people drawing the illustrations, and then they have to print it and bind it and so on. I have to be a little bit ahead of the state-of-the-art if my book isn't going to be obsolete when it comes out. So I kept most of the stuff to myself that I had, these little ideas I had been coming up with. But when I got to this idea of left-to-right parsing, I said "Well here's something I don't really understand very well. I'll publish this, let other people figure out what it is, and then they can tell me what I should have said." I published that paper I believe in 1965, at the end of finishing my draft of the chapter, which didn't get as far as that story, LR(k). Well now, textbooks of computer science start with LR(k) and take off from there. But I want to give you an idea of

[Sep 06, 2019] Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping).

Notable quotes:
"... Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping). ..."
Sep 06, 2019 | news.ycombinator.com
fhars on Mar 29, 2011
Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping).

The only widely used OO language (for sufficiently narrow values of wide and wide values of OO) to get that right used to be Objective Caml, and recently its stepchildren F# and scala. So it is actually FP that helps you with the classification.

Xurinos on Mar 29, 2011

This is a very interesting point and should be highlighted. You said implementation artifacts (especially in reference to reducing code duplication), and for clarity, I think you are referring to the definition of operators on data (class methods, friend methods, and so on).

I agree with you that subclassing (for the purpose of reusing behavior), traits (for adding behavior), and the like can be confused with classification to such an extent that modern designs tend to depart from type systems and be used for mere code organization.

ajays on Mar 29, 2011
"was there really a point to the illusion of wrapping the entrypoint main() function in a class (I am looking at you, Java)?"

Far be it for me to defend Java (I hate the damn thing), but: main is just a function in a class. The class is the entry point, as specified in the command line; main is just what the OS looks for, by convention. You could have a "main" in each class, but only the one in the specified class will be the entry point.

GrooveStomp on Mar 29, 2011
The way of the theorist is to tell any non-theorist that the non-theorist is wrong, then leave without any explanation. Or, simply hand-wave the explanation away, claiming it as "too complex" too fully understand without years of rigorous training. Of course I jest. :)

[Sep 04, 2019] 737 MAX - Boeing Insults International Safety Regulators As New Problems Cause Longer Grounding

The 80286 Intel processors: The Intel 80286 (also marketed as the iAPX 286 and often called Intel 286) is a 16-bit microprocessor that was introduced on February 1, 1982. The 80286 was employed for the IBM PC/AT, introduced in 1984, and then widely used in most PC/AT compatible computers until the early 1990s.
Notable quotes:
"... The fate of Boeing's civil aircraft business hangs on the re-certification of the 737 MAX. The regulators convened an international meeting to get their questions answered and Boeing arrogantly showed up without having done its homework. The regulators saw that as an insult. Boeing was sent back to do what it was supposed to do in the first place: provide details and analysis that prove the safety of its planes. ..."
"... In recent weeks, Boeing and the FAA identified another potential flight-control computer risk requiring additional software changes and testing, according to two of the government and pilot officials. ..."
"... Any additional software changes will make the issue even more complicated. The 80286 Intel processors the FCC software is running on is limited in its capacity. All the extras procedures Boeing now will add to them may well exceed the system's capabilities. ..."
"... The old architecture was possible because the plane could still be flown without any computer. It was expected that the pilots would detect a computer error and would be able to intervene. The FAA did not require a high design assurance level (DAL) for the system. The MCAS accidents showed that a software or hardware problem can now indeed crash a 737 MAX plane. That changes the level of scrutiny the system will have to undergo. ..."
"... Flight safety regulators know of these complexities. That is why they need to take a deep look into such systems. That Boeing's management was not prepared to answer their questions shows that the company has not learned from its failure. Its culture is still one of finance orientated arrogance. ..."
"... I also want to add that Boeing's focus on profit over safety is not restricted to the 737 Max but undoubtedly permeates the manufacture of spare parts for the rest of the their plane line and all else they make.....I have no intention of ever flying in another Boeing airplane, given the attitude shown by Boeing leadership. ..."
"... So again, Boeing mgmt. mirrors its Neoliberal government officials when it comes to arrogance and impudence. ..."
"... Arrogance? When the money keeps flowing in anyway, it comes naturally. ..."
"... In the neoliberal world order governments, regulators and the public are secondary to corporate profits. ..."
"... I am surprised that none of the coverage has mentioned the fact that, if China's CAAC does not sign off on the mods, it will cripple, if not doom the MAX. ..."
"... I am equally surprised that we continue to sabotage China's export leader, as the WSJ reports today: "China's Huawei Technologies Co. accused the U.S. of "using every tool at its disposal" to disrupt its business, including launching cyberattacks on its networks and instructing law enforcement to "menace" its employees. ..."
"... Boeing is backstopped by the Murkan MIC, which is to say the US taxpayer. ..."
"... Military Industrial Complex welfare programs, including wars in Syria and Yemen, are slowly winding down. We are about to get a massive bill from the financiers who already own everything in this sector, because what they have left now is completely unsustainable, with or without a Third World War. ..."
"... In my mind, the fact that Boeing transferred its head office from Seattle (where the main manufacturing and presumable the main design and engineering functions are based) to Chicago (centre of the neoliberal economic universe with the University of Chicago being its central shrine of worship, not to mention supply of future managers and administrators) in 1997 says much about the change in corporate culture and values from a culture that emphasised technical and design excellence, deliberate redundancies in essential functions (in case of emergencies or failures of core functions), consistently high standards and care for the people who adhered to these principles, to a predatory culture in which profits prevail over people and performance. ..."
"... For many amerikans, a good "offensive" is far preferable than a good defense even if that only involves an apology. Remember what ALL US presidents say.. We will never apologize.. ..."
"... Actually can you show me a single place in the US where ethics are considered a bastion of governorship? ..."
"... You got to be daft or bribed to use intel cpu's in embedded systems. Going from a motorolla cpu, the intel chips were dinosaurs in every way. ..."
"... Initially I thought it was just the new over-sized engines they retro-fitted. A situation that would surely have been easier to get around by just going back to the original engines -- any inefficiencies being less $costly than the time the planes have been grounded. But this post makes the whole rabbit warren 10 miles deeper. ..."
"... That is because the price is propped up by $9 billion share buyback per year . Share buyback is an effective scheme to airlift all the cash out of a company towards the major shareholders. I mean, who wants to develop reliable airplanes if you can funnel the cash into your pockets? ..."
"... If Boeing had invested some of this money that it blew on share buybacks to design a new modern plane from ground up to replace the ancient 737 airframe, these tragedies could have been prevented, and Boeing wouldn't have this nightmare on its hands. But the corporate cost-cutters and financial engineers, rather than real engineers, had the final word. ..."
"... Markets don't care about any of this. They don't care about real engineers either. They love corporate cost-cutters and financial engineers. They want share buybacks, and if something bad happens, they'll overlook the $5 billion to pay for the fallout because it's just a "one-time item." ..."
"... Overall, Boeing buy-backs exceeded 40 billion dollars, one could guess that half or quarter of that would suffice to build a plane that logically combines the latest technologies. E.g. the entire frame design to fit together with engines, processors proper for the information processing load, hydraulics for steering that satisfy force requirements in almost all circumstances etc. New technologies also fail because they are not completely understood, but when the overall design is logical with margins of safety, the faults can be eliminated. ..."
"... Once the buyback ends the dive begins and just before it hits ground zero, they buy the company for pennies on the dollar, possibly with government bailout as a bonus. Then the company flies towards the next climb and subsequent dive. MCAS economics. ..."
"... The problem is not new, and it is well understood. What computer modelling is is cheap, and easy to fudge, and that is why it is popular with people who care about money a lot. Much of what is called "AI" is very similar in its limitations, a complicated way to fudge up the results you want, or something close enough for casual examination. ..."
Sep 04, 2019 | www.moonofalabama.org

United Airlines and American Airlines further prolonged the grounding of their Boeing 737 MAX airplanes. They now schedule the plane's return to the flight line in December. But it is likely that the grounding will continue well into the next year.

After Boeing's shabby design and lack of safety analysis of its Maneuver Characteristics Augmentation System (MCAS) led to the death of 347 people, the grounding of the type and billions of losses, one would expect the company to show some decency and humility. Unfortunately Boeing behavior demonstrates none.

There is still little detailed information on how Boeing will fix MCAS. Nothing was said by Boeing about the manual trim system of the 737 MAX that does not work when it is needed. The unprotected rudder cables of the plane do not meet safety guidelines but were still certified. The plane's flight control computers can be overwhelmed by bad data and a fix will be difficult to implement. Boeing continues to say nothing about these issues.

International flight safety regulators no longer trust the Federal Aviation Administration (FAA) which failed to uncover those problems when it originally certified the new type. The FAA was also the last regulator to ground the plane after two 737 MAX had crashed. The European Aviation Safety Agency (EASA) asked Boeing to explain and correct five major issues it identified. Other regulators asked additional questions.

Boeing needs to regain the trust of the airlines, pilots and passengers to be able to again sell those planes. Only full and detailed information can achieve that. But the company does not provide any.

As Boeing sells some 80% of its airplanes abroad it needs the good will of the international regulators to get the 737 MAX back into the air. This makes the arrogance it displayed in a meeting with those regulators inexplicable:

Friction between Boeing Co. and international air-safety authorities threatens a new delay in bringing the grounded 737 MAX fleet back into service, according to government and pilot union officials briefed on the matter.

The latest complication in the long-running saga, these officials said, stems from a Boeing briefing in August that was cut short by regulators from the U.S., Europe, Brazil and elsewhere, who complained that the plane maker had failed to provide technical details and answer specific questions about modifications in the operation of MAX flight-control computers.

The fate of Boeing's civil aircraft business hangs on the re-certification of the 737 MAX. The regulators convened an international meeting to get their questions answered and Boeing arrogantly showed up without having done its homework. The regulators saw that as an insult. Boeing was sent back to do what it was supposed to do in the first place: provide details and analysis that prove the safety of its planes.

What did the Boeing managers think those regulatory agencies are? Hapless lapdogs like the FAA managers who signed off on Boeing 'features' even after their own engineers told them that these were not safe?

Buried in the Wall Street Journal piece quoted above is another little shocker:

In recent weeks, Boeing and the FAA identified another potential flight-control computer risk requiring additional software changes and testing, according to two of the government and pilot officials.

The new issue must go beyond the flight control computer (FCC) issues the FAA identified in June.

Boeing's original plan to fix the uncontrolled activation of MCAS was to have both FCCs active at the same time and to switch MCAS off when the two computers disagree. That was already a huge change in the general architecture which so far consisted of one active and one passive FCC system that could be switched over when a failure occurred.
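To make the idea concrete, here is a minimal sketch, in Python rather than anything avionics-grade, of what "run both channels and lock out MCAS when they disagree" could look like. Every name, data field and threshold below is hypothetical; this illustrates the cross-check concept only and is not Boeing's design.

# Purely illustrative sketch: a toy cross-channel monitor in the spirit of
# "run both flight control computers and disable MCAS when they disagree".
# All names, fields and thresholds are hypothetical; this is not Boeing's code.

from dataclasses import dataclass

@dataclass
class ChannelOutput:
    trim_command: float      # commanded stabilizer trim, hypothetical units
    angle_of_attack: float   # sensed angle of attack, degrees

TRIM_TOLERANCE = 0.25   # hypothetical allowed disagreement between channels
AOA_TOLERANCE = 5.0     # hypothetical allowed AoA disagreement, degrees

def mcas_permitted(a: ChannelOutput, b: ChannelOutput) -> bool:
    """Allow MCAS to command trim only while both channels agree."""
    trim_ok = abs(a.trim_command - b.trim_command) <= TRIM_TOLERANCE
    aoa_ok = abs(a.angle_of_attack - b.angle_of_attack) <= AOA_TOLERANCE
    return trim_ok and aoa_ok

if __name__ == "__main__":
    good = ChannelOutput(trim_command=0.4, angle_of_attack=6.0)
    bad = ChannelOutput(trim_command=0.5, angle_of_attack=21.0)  # one bad AoA input
    print("MCAS permitted:", mcas_permitted(good, bad))  # False: channels disagree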

Any additional software changes will make the issue even more complicated. The 80286 Intel processors the FCC software runs on are limited in capacity. All the extra procedures Boeing will now add to them may well exceed the system's capabilities.

Changing software in a delicate environment like a flight control computer is extremely difficult. There will always be surprising side effects or regressions where already corrected errors unexpectedly reappear.

The old architecture was possible because the plane could still be flown without any computer. It was expected that the pilots would detect a computer error and would be able to intervene. The FAA did not require a high design assurance level (DAL) for the system. The MCAS accidents showed that a software or hardware problem can now indeed crash a 737 MAX plane. That changes the level of scrutiny the system will have to undergo.

All procedures and functions of the software will have to be tested in all thinkable combinations to ensure that they will not block or otherwise influence each other. This will take months and there is a high chance that new issues will appear during these tests. They will require more software changes and more testing.

Flight safety regulators know of these complexities. That is why they need to take a deep look into such systems. That Boeing's management was not prepared to answer their questions shows that the company has not learned from its failure. Its culture is still one of finance orientated arrogance.

Building safe airplanes requires engineers who know that they may make mistakes and who have the humility to allow others to check and correct their work. It requires open communication about such issues. Boeing's say-nothing strategy will prolong the grounding of its planes. It will increase the damage to Boeing's financial situation and reputation.

--- Previous Moon of Alabama posts on Boeing 737 MAX issues:

Posted by b on September 3, 2019 at 18:05 UTC | Permalink


Choderlos de Laclos , Sep 3 2019 18:15 utc | 1

"The 80286 Intel processors the FCC software is running on is limited in its capacity." You must be joking, right? If this is the case, the problem is unfixable: you can't find two competent software engineers who can program these dinosaur 16-bit processors.
b , Sep 3 2019 18:22 utc | 2
You must be joking, right? If this is the case, the problem is unfixable: you can't find two competent software engineers who can program these dinosaur 16-bit processors.

One of the two is writing this.

Half-joking aside: the 737 MAX FCC runs on 80286 processors. There are tens of thousands of programmers available who can program them, though not all are qualified to write real-time systems. That resource is not the problem. The processors' inherent limits are.

Meshpal , Sep 3 2019 18:24 utc | 3
Thanks b for the fine 737 MAX update. Other news sources seem to have dropped coverage. It is a very big deal that this grounding has lasted this long. Things are going to get really bad for Boeing if this bird does not get back in the air soon. In any case their credibility is tarnished if not downright trashed.
BraveNewWorld , Sep 3 2019 18:35 utc | 4
@1 Choderlos de Laclos

Whatever software language these are programmed in (my guess is C), the compilers still exist for it and do the translation from human-readable code to machine code for you. Of course the code could be assembler, but writing assembly code for a 286 is far easier than writing it for, say, an i9, because the CPU is so much simpler and has a far smaller set of instructions to work with.

Choderlos de Laclos , Sep 3 2019 18:52 utc | 5
@b: It was hyperbole. I might be another one, but I left those processors behind as fast as I could. The last time I had to deal with one was an embedded system around 1998. But I am also retiring, and so are thousands of others. The problems with supporting a legacy system are legendary.
psychohistorian , Sep 3 2019 18:56 utc | 6
Thanks for the demise of Boeing update b

I commented when you first started writing about this that it would take Boeing down, and I still believe that to be true. The extent to which Boeing is stonewalling the international safety regulators says to me that upper management and big stockholders are being given time to minimize their exposure before the axe falls.

I also want to add that Boeing's focus on profit over safety is not restricted to the 737 MAX but undoubtedly permeates the manufacture of spare parts for the rest of their plane line and all else they make... I have no intention of ever flying in another Boeing airplane, given the attitude shown by Boeing leadership.

This is how private financialization works in the Western world. The bottom line is profit, not service to the flying public. It is in line with the recent public statement by the CEOs of the Business Roundtable, who said they were going to focus more on customer satisfaction than on profit, while their actions continue to say profit is their primary motive.

The God of Mammon private finance religion can not end soon enough for humanity's sake. It is not like we all have to become China but their core public finance example is well worth following.

karlof1 , Sep 3 2019 19:13 utc | 7
So again, Boeing mgmt. mirrors its neoliberal government officials when it comes to arrogance and impudence. IMO, Boeing shareholders' hair ought to be on fire given their BoD's behavior, and they ought to be getting ready to litigate.

As b notes, Boeing's international credibility's hanging by a very thin thread. A year from now, Boeing could very well see its share price deeply dive into the Penny Stock category--its current P/E is 41.5:1 which is massively overpriced. Boeing Bombs might come to mean something vastly different from its initial meaning.

bjd , Sep 3 2019 19:22 utc | 8
Arrogance? When the money keeps flowing in anyway, it comes naturally.
What did I just read , Sep 3 2019 19:49 utc | 10
Such seemingly archaic processors are the norm in aerospace. If the plane's flight characteristics had been properly engineered from the start, the processor wouldn't be an issue. You can't just spray perfume on a garbage pile and call it a rose.
VietnamVet , Sep 3 2019 20:31 utc | 12
In the neoliberal world order, governments, regulators and the public are secondary to corporate profits. This is the same belief system that is suspending the British Parliament to guarantee the chaos of a no-deal Brexit. The irony is that globalist Joe Biden's restart of the Cold War and nationalist Donald Trump's trade wars both assure that foreign regulators will closely scrutinize the safety of the 737 MAX. Even if ignored by corporate media and cleared by the FAA to fly in the USA, Boeing and Wall Street's Dow Jones average are cooked geese with only 20% of the market. Taking the risk of flying the 737 MAX on their family vacation or their next business trip might even get the credentialed class to realize that their subservient service to corrupt plutocrats is deadly in the long term.
jared , Sep 3 2019 20:55 utc | 14
It doesn't get any more TBTF than Boeing. A bail-out is only a phone call away. With a downturn looming, the line is forming.
Piotr Berman , Sep 3 2019 21:11 utc | 15
"The latest complication in the long-running saga, these officials said, stems from a Boeing BA, -2.66% briefing in August that was cut short by regulators from the U.S., Europe, Brazil and elsewhere, who complained that the plane maker had failed to provide technical details and answer specific questions about modifications in the operation of MAX flight-control computers."

It seems to me that Boeing had no intention to insult anybody, but it has an impossible task. After decades of applying duct tape and baling wire with much success, they finally designed an unfixable plane, and they can either abandon this line of business (narrow bodied airliners) or start working on a new design grounded in 21st century technologies.

Ken Murray , Sep 3 2019 21:12 utc | 16
Boeing's military sales are so much more significant and important to them, they are just ignoring/down-playing their commercial problem with the 737 MAX. Follow the real money.
Arata , Sep 3 2019 21:57 utc | 17
It is unbelievable that the flight control computer is based on the 80286! A control system needs real-time operation, with at least some pre-emptive task scheduling, on a millisecond or microsecond scale. Whatever way you program it, you cannot achieve real-time operation on an 80286. I do not think that is really the case; maybe the 80286 is doing some peripheral work other than control.
Bemildred , Sep 3 2019 22:11 utc | 18
It is quite likely (IMHO) that they are no longer able to provide the requested information, but of course they cannot say that.

I once wrote a keyboard driver for an 80286, part of an editor, in assembler, on my first PC type computer, I still have it around here somewhere I think, the keyboard driver, but I would be rusty like the Titanic when it comes to writing code. I wrote some things in DEC assembler too, on VAXen.

Peter AU 1 , Sep 3 2019 22:14 utc | 19
Arata 16

The spoiler system is fly by wire.

Bemildred , Sep 3 2019 22:17 utc | 20
arata @16: 80286 does interrupts just fine, but you have to grok asynchronous operation, and most coders don't really, I see that every day in Linux and my browser. I wish I could get that box back, it had DOS, you could program on the bare wires, but God it was slow.
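For readers wondering how hard real-time control was ever done on such small chips, the classic pattern is a cyclic executive: one deterministic loop that runs each task at a fixed rate. The sketch below is a toy illustration of that pattern only; the task names and rates are invented and it has nothing to do with the actual 737 software.

# Toy illustration of a cyclic executive: one deterministic loop that runs each
# task at a fixed rate. Task names and rates are invented; this only shows that
# hard real-time scheduling does not require a fast or fancy CPU.
import time

def read_sensors():
    pass  # placeholder task

def run_control_law():
    pass  # placeholder task

def update_displays():
    pass  # placeholder task

MINOR_FRAME = 0.010  # 10 ms minor cycle (hypothetical)

for frame in range(100):              # bounded so the demo terminates
    start = time.monotonic()
    read_sensors()                    # every frame (10 ms)
    run_control_law()                 # every frame (10 ms)
    if frame % 5 == 0:
        update_displays()             # every fifth frame (50 ms)
    spare = MINOR_FRAME - (time.monotonic() - start)
    if spare > 0:
        time.sleep(spare)             # idle until the next frame boundary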
Tod , Sep 3 2019 22:28 utc | 21
Boeing will just need to press the TURBO button on the 286 processor. Problem solved.
karlof1 , Sep 3 2019 22:43 utc | 23
Ken Murray @15--

Boeing recently lost a $6+Billion weapons contract thanks to its similar Q&A in that realm of its business. Its annual earnings are due out in October. Plan to short-sell soon!

Godfree Roberts , Sep 3 2019 22:56 utc | 24
I am surprised that none of the coverage has mentioned the fact that, if China's CAAC does not sign off on the mods, it will cripple, if not doom the MAX.

I am equally surprised that we continue to sabotage China's export leader, as the WSJ reports today: "China's Huawei Technologies Co. accused the U.S. of "using every tool at its disposal" to disrupt its business, including launching cyberattacks on its networks and instructing law enforcement to "menace" its employees.

The telecommunications giant also said law enforcement in the U.S. have searched, detained and arrested Huawei employees and its business partners, and have sent FBI agents to the homes of its workers to pressure them to collect information on behalf of the U.S."

https://www.wsj.com/articles/huawei-accuses-the-u-s-of-cyberattacks-threatening-its-employees-11567500484?mod=hp_lead_pos2

Arioch , Sep 3 2019 23:18 utc | 25
I wonder how much blind trust in Boeing is woven into the fabric of civil aviation all around the world.

I mean something like this: Boeing publishes some research into failure statistics, solid-material aging or something similar, research that is really hard and expensive to carry out. Everyone takes the results for granted without trying to independently reproduce and verify them, because it's Boeing!

Some later "derived" researches being made, upon the foundation of some prior works *including* that old Boeing research. Then FAA and similar company institutions around the world make some official regulations and guidelines deriving from the research which was in part derived form original Boeing work. Then insurance companies calculate their tarifs and rate plans, basing their estimation upon those "government standards", and when governments determine taxation levels they use that data too. Then airline companies and airliner leasing companies make their business plans, take huge loans in the banks (and banks do make their own plans expecting those loans to finally be paid back), and so on and so forth, building the cards-deck house, layer after layer.

And among the very many cornerstones there would be a dust-covered and god-forgotten piece of research made by Boeing 10 or maybe 20 years ago, when no one, even in drunken delirium, could have imagined questioning Boeing's verdicts on engineering and scientific matters.

Now the longevity of that trust is slowly unraveling. The so universally trusted 737NG generation, for example, turned out to be inherently unsafe, and while only pilots knew it before (and even then only the most curious and pedantic pilots), today it is becoming public knowledge that the 737NG is tainted.

Now, when did this corruption start? What should be the cutoff date cast into the past, such that from that day on every piece of technical data coming from Boeing should be considered unreliable unless it passes full-fledged independent verification? Should that day be somewhere in the 2000s? The 1990s? Maybe even the 1970s?

And ALL THE BODY of civil aviation industry knowledge that has accumulated since that date can NO MORE BE TRUSTED and should be almost scrapped and re-researched anew! ALL THE tacit INPUT that can be traced back to Boeing, and ALL THE DERIVED KNOWLEDGE, now has to be verified in its entirety.

Miss Lacy , Sep 3 2019 23:19 utc | 26
Boeing is backstopped by the Murkan MIC, which is to say the US taxpayer. Until the lawsuits become too enormous. I wonder how much that will cost. And speaking of rigged markets - why do ya suppose that Trumpilator et al have been so keen to make huge sales to the Saudis, etc. etc. ? Ya don't suppose they had an inkling of trouble in the wind do ya? Speaking of insiders, how many million billions do ya suppose is being made in the Wall Street "trade war" roller coaster by peeps, munchkins not muppets, who have access to the Tweeter-in-Chief?
C I eh? , Sep 3 2019 23:25 utc | 27
@6 psychohistorian
I commented when you first started writing about this that it would take Boeing down and still believe that to be true. To the extent that Boeing is stonewalling the international safety regulators says to me that upper management and big stock holders are being given time to minimize their exposure before the axe falls.

Have you considered the costs of restructuring versus breaking Boeing apart and selling it off in little pieces, and what each would mean for the owners specifically?

The MIC is restructuring itself - by first creating the political conditions to make the transformation highly profitable. It can only be made highly profitable by forcing the public to pay the associated costs of Rape and Pillage Incorporated.

Military Industrial Complex welfare programs, including wars in Syria and Yemen, are slowly winding down. We are about to get a massive bill from the financiers who already own everything in this sector, because what they have left now is completely unsustainable, with or without a Third World War.

It is fine that you won't fly Boeing but that is not the point. You may not ever fly again since air transit is subsidized at every level and the US dollar will no longer be available to fund the world's air travel infrastructure.

You will instead be paying for the replacement of Boeing and seeing what google is planning it may not be for the renewal of the airline business but rather for dedicated ground transportation, self driving cars and perhaps 'aerospace' defense forces, thank you Russia for setting the trend.

Lochearn , Sep 3 2019 23:45 utc | 30
As readers may remember, I made a case study of Boeing for a fairly recent PhD. The examiners insisted that this case study be taken out because it was "speculative." I had forecast serious problems with the 787 and the 737 MAX back in 2012. I still believe the 787 is seriously flawed and will go the way of the MAX. I came to admire this once brilliant company whose work culminated in the superb 777.

America really did make some excellent products in the 20th century - with the exception of cars. Big money piled into GM from the early 1920s, especially the ultra greedy, quasi fascist Du Pont brothers, with the result that GM failed to innovate. It produced beautiful cars but technically they were almost identical to previous models.

The only real innovation over 40 years was the automatic transmission. Does this sound reminiscent of the 737 MAX? What glued GM together for more than thirty years was the brilliance of CEO Alfred Sloan, who managed to keep the Du Ponts (and J. P. Morgan) more or less happy while delegating total responsibility for production to the divisional managers responsible for the different GM brands. When Sloan went, the company started falling apart, and the memoirs of bad boy John DeLorean testify to the complete dysfunctionality of senior management.

At Ford the situation was perhaps even worse in the 1960s and 1970s. Management was at war with the workers, and faulty transmissions were knowingly installed. All this is documented by ex-Ford supervisor Robert Dewar in his excellent book "A Savage Factory."

dus7 , Sep 3 2019 23:53 utc | 32
Well, the first thing that came to mind upon reading about Boeing's apparent arrogance overseas (silly, I know) was that Boeing may be counting on some weird Trump sanctions against anyone not cooperating with the big important USian corporation! The U.S. has influence on European and many other countries, but it can only be stretched so far, and I would guess that messing with European and other international airline regulators, especially in view of the very real fatal accidents with the 737 MAX, would be too far.
david , Sep 4 2019 0:09 utc | 34
Please read the following article to get more info about how the 5 big funds that hold 67% of Boeing stock are working hard with the big banks to keep the stock price high. Meanwhile Boeing is also trying its best to blackmail US taxpayers through the Pentagon, for example by pretending to walk away from a competitive bidding contract because it wants the Air Force to provide a better cost formula.

https://www.theamericanconservative.com/articles/despite-devastating-737-crashes-boeing-stocks-fly-high/

So basically, Boeing is being kept afloat by US taxpayers because it is "too big to fail" and an important component of the Dow. Pray tell, who are the biggest suckers here?

chu teh , Sep 4 2019 0:13 utc | 36
re Piotr Berman | Sep 3 2019 21:11 utc [I have a tiny bit of standing in this matter based on experience with an amazingly similar situation that has not heretofore been mentioned. More at the end. Thus I offer my opinion.] Indeed, an impossible task to design a workable answer and still maintain the fiction that the 737MAX is a high-profit-margin upgrade requiring minimal training of already-trained 737-series pilots, either male or female. Turning off the autopilot to bypass a runaway stabilizer necessitates the following:

  1. The earlier 737-series "rollercoaster" procedure to overcome too-high aerodynamic forces must be taught and demonstrated as a memory item to all pilots. The procedure was designed for early 737-series models, not the 737MAX, which has a uniquely different center of gravity and a pitch-up problem requiring MCAS to auto-correct, especially on take-off.

  2. The "rollercoaster" procedure does not work at all altitudes. It causes the aircraft to lose some altitude and therefore requires at least [about] 7,000 feet of above-ground clearance to avoid ground contact. [This altitude loss consumed by the procedure is based on alleged reports of simulator demonstrations. There seems to be no agreement on the actual amount of loss.]

  3. The physical requirements to perform the "rollercoaster" procedure were established at a time when female pilots were rare. Any 737MAX pilots, male or female, will have to pass new physical requirements demonstrating actual conditions on newly designed flight simulators that mimic the higher load requirements of the 737MAX. Such new standards will also have to compensate for left- vs right-handed pilots, because the manual trim wheel is located between the pilot/copilot seats.

================

Now where/when has a similar situation occurred? I.e., where a Federal regulatory agency [the FAA] allowed a vendor [Boeing] to claim that a modified product did not need full inspection/review to get agency certification of performance [airworthiness]. As you may know, 2 working nuclear power plants were forced to shut down and be decommissioned when, in 2011, 2 newly installed critical components in each plant were discovered to be defective, beyond repair and not replaceable. These power plants had each been producing over 1,000 megawatts of power for over 20 years. In short, the failed components were modifications of the original, successful design that were claimed to need only a low level of Federal Nuclear Regulatory Commission oversight and approval. The mods were, in fact, new and untried, and yet they were only tested by computer modeling and theoretical estimations based on experience with smaller/different designs.

<<< The NRC had not given full inspection/oversight to the new units because of manufacturer/operator claims that the changes were not significant. The NRC did not verify the veracity of those claims. >>>

All 4 components [2 required in each plant] were essentially heat exchangers weighing 640 tons each, with 10,000 tubes carrying radioactive water surrounded by [transferring their heat to] a separate flow of "clean" water. The tubes were progressively damaged and began leaking. The new design failed. It cannot be fixed. Thus both plants of the San Onofre Nuclear Generating Station are now a complete loss and await dismantling [as the courts decide who pays for the fiasco].

Jen , Sep 4 2019 0:20 utc | 37
In my mind, the fact that Boeing transferred its head office from Seattle (where the main manufacturing and presumably the main design and engineering functions are based) to Chicago (centre of the neoliberal economic universe with the University of Chicago being its central shrine of worship, not to mention supply of future managers and administrators) in 1997 says much about the change in corporate culture and values from a culture that emphasised technical and design excellence, deliberate redundancies in essential functions (in case of emergencies or failures of core functions), consistently high standards and care for the people who adhered to these principles, to a predatory culture in which profits prevail over people and performance.

Phew! I barely took a breath there! :-)

Lochearn , Sep 4 2019 0:22 utc | 38
@ 32 david

Good article. Boeing is, or used to be, America's biggest manufacturing exporter. So you are right, it cannot be allowed to fail. Boeing is also a manufacturer of military aircraft. The fact that it is now in such a pitiful state is symptomatic of America's decline and decadence and its takeover by financial predators.

jo6pac , Sep 4 2019 0:39 utc | 40
Posted by: Jen | Sep 4 2019 0:20 utc | 35

Nailed it. Moved to the city of the dead but not forgotten Uncle Milton Friedman, friend of Ayn Rand.

vk , Sep 4 2019 0:53 utc | 41
I don't think Boeing was arrogant. I think the 737 is simply unfixable and that they know that -- hence they went to the meeting with empty hands.
C I eh? , Sep 4 2019 1:14 utc | 42
They did the same with Nortel, whose share value exceeded 300 billion not long before it was scrapped. Insiders took everything while pension funds were wiped out of existence.

It is so very helpful to understand that everything you read is corporate/intel propaganda, and that you are always being set up to pay for the next great scam. The murder of 300+ people by Boeing was yet another tragedy our sadistic elites could not let go to waste.

Walter , Sep 4 2019 3:10 utc | 43

...And to the idea that Boeing is being kept afloat by financial agencies.

Willow , Sep 4 2019 3:16 utc | 44
Al Jazeera has a series of excellent investigative documentaries on Boeing. Here is one from 2014. https://www.aljazeera.com/investigations/boeing787/
Igor Bundy , Sep 4 2019 3:17 utc | 45
For many amerikans, a good "offensive" is far preferable than a good defense even if that only involves an apology. Remember what ALL US presidents say.. We will never apologize.. For the extermination of natives, for shooting down civilian airliners, for blowing up mosques full of worshipers, for bombing hospitals.. for reducing many countries to the stone age and using biological and chemical and nuclear weapons against the planet.. For supporting terrorists who plague the planet now. For basically being able to be unaccountable to anyone including themselves as a peculiar race of feces. So it is not the least surprising that amerikan corporations also follow the same bad manners as those they put into and pre-elect to rule them.
Igor Bundy , Sep 4 2019 3:26 utc | 46
People talk about Seattle as if it's a bastion of integrity. It's the same place where Microsoft screwed up countless companies to become the largest OS maker, and the same place where Amazon works out how to squeeze its own employees into working longer and cheaper. There are enough examples that Seattle is not Toronto... and will never be a bastion of ethics.

Actually can you show me a single place in the US where ethics are considered a bastion of governorship? Other than the libraries of content written about ethics, rarely do amerikans ever follow it. Yet expect others to do so.. This is getting so perverse that other cultures are now beginning to emulate it. Because its everywhere..

Remember Dallas? I watched people look on in fascination at how business could function like that. Well, it can't in the long run, but throw enough money and resources at it and it works wonders in the short term, because it destroys the competition. But yeah, around 1998, when they got rid of the laws against making money by magic, most everything went to hell, because now there are no constraints but making money, any which way. That's all that matters.

Igor Bundy , Sep 4 2019 3:54 utc | 47
You've got to be daft or bribed to use Intel CPUs in embedded systems. Coming from a Motorola CPU, the Intel chips were dinosaurs in every way, requiring the CPU to be almost twice as fast to get the same thing done. Its interrupt handling was not up to par either. A simple example was how the Commodore Amiga could read from the disk without stuttering or slowing down anything else you were doing. I have never seen this fixed on Intel; in fact, going from 8 MHz to 4 GHz seems to have fixed it by brute force. The 8 MHz Motorola CPU worked wonders when you had music, video and IO all going at the same time. It's not just the CPU but the support chips, which don't lock up the bus. Why would anyone use Intel when there are so many specific embedded controllers designed for such specific things?
imo , Sep 4 2019 4:00 utc | 48
Initially I thought it was just the new over-sized engines they retro-fitted. A situation that would surely have been easier to get around by just going back to the original engines -- any inefficiencies being less $costly than the time the planes have been grounded. But this post makes the whole rabbit warren 10 miles deeper.

I do not travel much these days and find the cattle-class seating on these planes a major disincentive. Becoming aware of all these added technical issues I will now positively select for alternatives to 737 and bear the cost.

Joost , Sep 4 2019 4:25 utc | 50
I'm surprised Boeing stock still hasn't taken a nosedive.

Posted by: Bob burger | Sep 3 2019 19:27 utc | 9

That is because the price is propped up by a $9 billion share buyback per year. A share buyback is an effective scheme to airlift all the cash out of a company towards the major shareholders. I mean, who wants to develop reliable airplanes if you can funnel the cash into your own pockets?

Once the buyback ends the dive begins and just before it hits ground zero, they buy the company for pennies on the dollar, possibly with government bailout as a bonus. Then the company flies towards the next climb and subsequent dive. MCAS economics.

Henkie , Sep 4 2019 7:04 utc | 53
Hi, I am new here in writing but not in reading. About the 80286: where is the coprocessor, the 80287? How can the 80286 do IEEE floating-point math? And how can it fly a controlled flight when it cannot calculate to that accuracy? How is it possible that this system is certified? It should have at least an 80386 DX, not an SX!!!!
snake , Sep 4 2019 7:35 utc | 54
moved to Chicago in 1997 says much about the change in corporate culture and values from a culture that emphasised technical and design excellence, deliberate redundancies in essential functions (in case of emergencies or failures of core functions), consistently high standards and care for the people who adhered to these principles, to a predatory culture in which profits prevail over people and performance.

Jen @ 35 < ==

Yes, the morality of these companies and their exclusive hold on a complicit or controlled government always defaults the government to supporting, enforcing and encouraging the principles of economic Zionism.

But it is more than just the corporate culture => the corporate fat cats:

1. Use the rule-making powers of the government to make law for them. Such laws create highly valued assets out of the pockets of the masses. The best known of those corporate uses of government involves the intangible property laws (copyright, patent, and government franchise). Government-generated copyright, franchise and patent laws are monopolies. So when the government subsidizes a successful R&D project, its findings are packaged up into a set of monopolies [copyrights, patents, privatized government franchises], which means that instead of 50 or more companies competing for the next increment in technology, one company gains the full advantage of that government research; only that one can use or abuse it, and the patented and copyrighted technology is used to extract untold billions, in small increments, from the pockets of the public.

2. Use the judicial power of governments and their courts, in both domestic and international settings, to police the use of, and to impose fake values on, intangible property monopolies.

Government-rule-made, privately owned monopoly rights (intangible property rights) generated from the pockets of the masses do two things: they exclude, deny and prevent would-be competition, and they create value as a hidden revenue tax that passes to the privately held monopolist with each sale of a copyrighted, government-franchised, or patented service or product. Please note the one-two nature of the "use of government law-making powers to generate intangible private monopoly property rights".

Canthama , Sep 4 2019 10:37 utc | 56
There is no doubt Boeing has committed crimes with the 737MAX; its arrogance & greed should be severely punished by the international community as an example to other global corporations. It represents the worst of corporate America, which places profits in front of lives.
Christian J Chuba , Sep 4 2019 11:55 utc | 59
How is the U.S. keeping Russia out of the international market?

Iran and other sanctioned countries are a potential captive market, and they have growth opportunities in what we sometimes call the non-aligned, emerging-market countries (Turkey, Africa, SE Asia, India, ...).

One thing I have learned is that the U.S. always games the system; we never play fair. So what did we do? Do their manufacturers use 1% U.S.-made parts, and do they need those for international certification?

BM , Sep 4 2019 12:48 utc | 60
Ultimately all of the issues in the news these days are the same one and the same issue - as the US gets closer and closer to the brink of catastrophic collapse they get ever more desperate. As they get more and more desperate they descend into what comes most naturally to the US - throughout its entire history - frenzied violence, total absence of morality, war, murder, genocide, and everything else that the US is so well known for (by those who are not blinded by exceptionalist propaganda).

The Hong Kong violence is a perfect example: it is impossible that a self-respecting nation state could allow itself to be seen to degenerate into such idiocy, and so grossly flout the most basic human decency. Ergo, the US is not a self-respecting nation state. It is a failed state.

I am certain the arrogance of Boeing reflects two things: (a) an assurance from the US government that the government will back them to the hilt, come what may, to make sure that the 737Max flies again; and (b) a threat that if Boeing fails to get the 737Max in the air despite that support, the entire top level management and board of directors will be jailed. Boeing know very well they cannot deliver. But just as the US government is desperate to avoid the inevitable collapse of the US, the Boeing top management are desperate to avoid jail. It is a charade.

It is time for international regulators to withdraw certification totally - after the problems are all fixed (I don't believe they ever will be), the plane needs complete new certification of every detail from the bottom up, at Boeing's expense, and with total openness from Boeing. The current Boeing management are not going to cooperate with that, therefore the international regulators need to demand a complete replacement of the management and board of directors as a condition for working with them.

Piotr Berman , Sep 4 2019 13:23 utc | 61
From ZeroHedge link:

If Boeing had invested some of this money that it blew on share buybacks to design a new modern plane from ground up to replace the ancient 737 airframe, these tragedies could have been prevented, and Boeing wouldn't have this nightmare on its hands. But the corporate cost-cutters and financial engineers, rather than real engineers, had the final word.

Markets don't care about any of this. They don't care about real engineers either. They love corporate cost-cutters and financial engineers. They want share buybacks, and if something bad happens, they'll overlook the $5 billion to pay for the fallout because it's just a "one-time item."

And now Boeing still has this plane, instead of a modern plane, and the history of this plane is now tainted, as is its brand, and by extension, that of Boeing. But markets blow that off too. Nothing matters.

Companies are getting away each with their own thing. There are companies that are losing a ton of money and are burning tons of cash, with no indications that they will ever make money. And market valuations are just ludicrous.

======

Thus Boeing issue is part of a much larger picture. Something systemic had to make "markets" less rational. And who is this "market"? In large part, fund managers wracking their brains how to create "decent return" while the cost of borrowing and returns on lending are super low. What remains are forms of real estate and stocks.

Overall, Boeing buy-backs exceeded 40 billion dollars, one could guess that half or quarter of that would suffice to build a plane that logically combines the latest technologies. E.g. the entire frame design to fit together with engines, processors proper for the information processing load, hydraulics for steering that satisfy force requirements in almost all circumstances etc. New technologies also fail because they are not completely understood, but when the overall design is logical with margins of safety, the faults can be eliminated.

Instead, 737 was slowly modified toward failure, eliminating safety margins one by one.

morongobill , Sep 4 2019 14:08 utc | 63

Regarding the 80286 and the 737, don't forget that the air traffic control system and the ICBM system use old technology as well.

Seems our big systems have feet of old silicon.

Allan Bowman , Sep 4 2019 15:15 utc | 66
Boeing has apparently either never heard of, or ignores, a procedure that is mandatory in satellite design and design reviews: FMEA, or Failure Modes and Effects Analysis. This requires design engineers to document the impact of every potential failure and combination of failures, thereby highlighting everything from catastrophic effects to mere annoyances. Clearly Boeing has done none of this, and their troubles are a direct result. It can be assumed that their arrogant and incompetent management has not yet understood just how serious their behavior is for the future of the company.
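For readers unfamiliar with FMEA, the sketch below shows the shape of such a worksheet: enumerate failure modes and rank them by a risk priority number (severity x occurrence x detectability). The rows are invented examples for illustration, not an actual aircraft analysis.

# Toy FMEA-style worksheet: enumerate failure modes and rank them by risk
# priority number (severity x occurrence x detectability, each on a 1-10 scale).
# The rows below are invented examples, not an actual aircraft analysis.

failure_modes = [
    {"item": "AoA sensor", "mode": "reads high", "severity": 10, "occurrence": 4, "detection": 6},
    {"item": "Trim motor", "mode": "runs away", "severity": 9, "occurrence": 2, "detection": 3},
]

for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Review the worksheet worst-first.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f'{fm["item"]:12s} {fm["mode"]:12s} RPN={fm["rpn"]}')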
fx , Sep 4 2019 16:08 utc | 69
Once the buyback ends the dive begins and just before it hits ground zero, they buy the company for pennies on the dollar, possibly with government bailout as a bonus. Then the company flies towards the next climb and subsequent dive. MCAS economics.

Posted by: Joost | Sep 4 2019 4:25 utc | 50

Well put!

Bemildred , Sep 4 2019 16:11 utc | 70
Computer modelling is what they are talking about in the cliche "Garbage in, garbage out".

The problem is not new, and it is well understood. What computer modelling is is cheap, and easy to fudge, and that is why it is popular with people who care about money a lot. Much of what is called "AI" is very similar in its limitations, a complicated way to fudge up the results you want, or something close enough for casual examination.

In particular cases where you have a well-defined and well-mathematized theory, then you can get some useful results with models. Like in Physics, Chemistry.

And they can be useful for "realistic" training situations, like aircraft simulators. The old story about wargame failures against Iran is another such situation. A lot of video games are big simulations in essence. But that is not reality, it's fake reality.

Trond , Sep 4 2019 17:01 utc | 79
@ SteveK9 71 "By the way, the problem was caused by Mitsubishi, who designed the heat exchangers."

Ahh. The furriners...

I once made the "mistake" of pointing out (in a comment under an article in Salon) that the reactors that exploded at Fukushima was made by GE and that GE people was still in charge of the reactors of American quality when they exploded. (The amerikans got out on one of the first planes out of the country).

I have never seen so many angry replies to one of my comments. I even got e-mails for several weeks from angry Americans.

c1ue , Sep 4 2019 19:44 utc | 80
@Henkie #53 You need floating point for scientific calculations, but I really doubt the 737 is doing any scientific research. Also, a regular CPU can do mathematical calculations; it just isn't as fast and doesn't have the same capacity as a dedicated FPU. Another common use for FPUs is in live-action shooter games, where the physics portions use scientific-style calculations to create lifelike motion. I sold computer systems in the 1990s while in school; Doom was a significant driver for newer systems (as were hedge fund types). Again, I don't see why an airplane needs this.
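The usual alternative to an FPU on integer-only hardware is fixed-point arithmetic: scale fractional values to integers, compute, then scale back. A toy illustration (the format and numbers are hypothetical, and this is not avionics code):

# Minimal illustration (not avionics code): fractional math on integer-only
# hardware via fixed-point scaling, the usual alternative to a dedicated FPU.

SCALE = 2 ** 8  # hypothetical Q8 format: the value 1.0 is stored as 256

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    # the product of two Q8 numbers carries SCALE*SCALE, so shift one back out
    return (a * b) // SCALE

gain = to_fixed(0.75)    # hypothetical control gain
error = to_fixed(2.5)    # hypothetical error signal
command = fixed_mul(gain, error)
print(command / SCALE)   # ~1.875, computed with integer arithmetic throughout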

[Sep 02, 2019] The Joel Test 12 Steps to Better Code - Joel on Software by Joel Spolsky

Somewhat simplistic but still useful
Sep 02, 2019 | www.joelonsoftware.com
Wednesday, August 09, 2000

Have you ever heard of SEMA ? It's a fairly esoteric system for measuring how good a software team is. No, wait! Don't follow that link! It will take you about six years just to understand that stuff. So I've come up with my own, highly irresponsible, sloppy test to rate the quality of a software team. The great part about it is that it takes about 3 minutes. With all the time you save, you can go to medical school.

The Joel Test

  1. Do you use source control?
  2. Can you make a build in one step?
  3. Do you make daily builds?
  4. Do you have a bug database?
  5. Do you fix bugs before writing new code?
  6. Do you have an up-to-date schedule?
  7. Do you have a spec?
  8. Do programmers have quiet working conditions?
  9. Do you use the best tools money can buy?
  10. Do you have testers?
  11. Do new candidates write code during their interview?
  12. Do you do hallway usability testing?

The neat thing about The Joel Test is that it's easy to get a quick yes or no to each question. You don't have to figure out lines-of-code-per-day or average-bugs-per-inflection-point. Give your team 1 point for each "yes" answer. The bummer about The Joel Test is that you really shouldn't use it to make sure that your nuclear power plant software is safe.

A score of 12 is perfect, 11 is tolerable, but 10 or lower and you've got serious problems. The truth is that most software organizations are running with a score of 2 or 3, and they need serious help, because companies like Microsoft run at 12 full-time.

Of course, these are not the only factors that determine success or failure: in particular, if you have a great software team working on a product that nobody wants, well, people aren't going to want it. And it's possible to imagine a team of "gunslingers" that doesn't do any of this stuff that still manages to produce incredible software that changes the world. But, all else being equal, if you get these 12 things right, you'll have a disciplined team that can consistently deliver.

1. Do you use source control?
I've used commercial source control packages, and I've used CVS, which is free, and let me tell you, CVS is fine. But if you don't have source control, you're going to stress out trying to get programmers to work together. Programmers have no way to know what other people did. Mistakes can't be rolled back easily. The other neat thing about source control systems is that the source code itself is checked out on every programmer's hard drive -- I've never heard of a project using source control that lost a lot of code.

2. Can you make a build in one step?
By this I mean: how many steps does it take to make a shipping build from the latest source snapshot? On good teams, there's a single script you can run that does a full checkout from scratch, rebuilds every line of code, makes the EXEs, in all their various versions, languages, and #ifdef combinations, creates the installation package, and creates the final media -- CDROM layout, download website, whatever.

If the process takes any more than one step, it is prone to errors. And when you get closer to shipping, you want to have a very fast cycle of fixing the "last" bug, making the final EXEs, etc. If it takes 20 steps to compile the code, run the installation builder, etc., you're going to go crazy and you're going to make silly mistakes.

For this very reason, the last company I worked at switched from WISE to InstallShield: we required that the installation process be able to run, from a script, automatically, overnight, using the NT scheduler, and WISE couldn't run from the scheduler overnight, so we threw it out. (The kind folks at WISE assure me that their latest version does support nightly builds.)
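As a rough sketch of what "one step" means in practice, a single driver script can chain every stage and stop on the first failure. The repository URL and make targets below are placeholders; substitute whatever VCS, compiler and installer your project actually uses.

#!/usr/bin/env python3
# Sketch of a one-step build driver: every stage from a clean checkout to a
# shippable package runs from a single command and stops on the first failure.
# The repository URL and make targets are placeholders, not a real project.

import subprocess
import sys

STEPS = [
    ["git", "clone", "--depth", "1", "https://example.com/project.git", "build-src"],
    ["make", "-C", "build-src", "all"],        # rebuild every line of code
    ["make", "-C", "build-src", "installer"],  # produce the installation package
]

def main() -> int:
    for cmd in STEPS:
        print("==>", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("BUILD FAILED at:", " ".join(cmd), file=sys.stderr)
            return 1
    print("Build OK: one command, no manual steps.")
    return 0

if __name__ == "__main__":
    sys.exit(main())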

3. Do you make daily builds?
When you're using source control, sometimes one programmer accidentally checks in something that breaks the build. For example, they've added a new source file, and everything compiles fine on their machine, but they forgot to add the source file to the code repository. So they lock their machine and go home, oblivious and happy. But nobody else can work, so they have to go home too, unhappy.

Breaking the build is so bad (and so common) that it helps to make daily builds, to insure that no breakage goes unnoticed. On large teams, one good way to insure that breakages are fixed right away is to do the daily build every afternoon at, say, lunchtime. Everyone does as many checkins as possible before lunch. When they come back, the build is done. If it worked, great! Everybody checks out the latest version of the source and goes on working. If the build failed, you fix it, but everybody can keep on working with the pre-build, unbroken version of the source.

On the Excel team we had a rule that whoever broke the build, as their "punishment", was responsible for babysitting the builds until someone else broke it. This was a good incentive not to break the build, and a good way to rotate everyone through the build process so that everyone learned how it worked.

Read more about daily builds in my article Daily Builds are Your Friend .

4. Do you have a bug database?
I don't care what you say. If you are developing code, even on a team of one, without an organized database listing all known bugs in the code, you are going to ship low quality code. Lots of programmers think they can hold the bug list in their heads. Nonsense. I can't remember more than two or three bugs at a time, and the next morning, or in the rush of shipping, they are forgotten. You absolutely have to keep track of bugs formally.

Bug databases can be complicated or simple. A minimal useful bug database must include the following data for every bug:

  1. complete steps to reproduce the bug
  2. expected behavior
  3. observed (buggy) behavior
  4. who it's assigned to
  5. whether it has been fixed or not

If the complexity of bug tracking software is the only thing stopping you from tracking your bugs, just make a simple 5 column table with these crucial fields and start using it.
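As a minimal sketch of that "simple 5 column table", the snippet below uses nothing beyond the Python standard library; the column names mirror the fields listed above (plus an id), and the sample row is invented.

# Minimal sketch of the "simple 5 column table" using only the standard library.
# Column names mirror the fields listed above; the sample row is invented.
import sqlite3

con = sqlite3.connect("bugs.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS bugs (
        id          INTEGER PRIMARY KEY,
        steps       TEXT,               -- complete steps to reproduce
        expected    TEXT,               -- expected behavior
        observed    TEXT,               -- observed (buggy) behavior
        assigned_to TEXT,               -- who it's assigned to
        fixed       INTEGER DEFAULT 0   -- 0 = open, 1 = fixed
    )
""")
con.execute(
    "INSERT INTO bugs (steps, expected, observed, assigned_to) VALUES (?, ?, ?, ?)",
    ("Open a file larger than 2 GB", "File loads", "Editor crashes", "mutt"),
)
con.commit()
for row in con.execute("SELECT id, observed, assigned_to FROM bugs WHERE fixed = 0"):
    print(row)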

For more on bug tracking, read Painless Bug Tracking .

5. Do you fix bugs before writing new code?
The very first version of Microsoft Word for Windows was considered a "death march" project. It took forever. It kept slipping. The whole team was working ridiculous hours, the project was delayed again, and again, and again, and the stress was incredible. When the dang thing finally shipped, years late, Microsoft sent the whole team off to Cancun for a vacation, then sat down for some serious soul-searching.

What they realized was that the project managers had been so insistent on keeping to the "schedule" that programmers simply rushed through the coding process, writing extremely bad code, because the bug fixing phase was not a part of the formal schedule. There was no attempt to keep the bug-count down. Quite the opposite. The story goes that one programmer, who had to write the code to calculate the height of a line of text, simply wrote "return 12;" and waited for the bug report to come in about how his function is not always correct. The schedule was merely a checklist of features waiting to be turned into bugs. In the post-mortem, this was referred to as "infinite defects methodology".

To correct the problem, Microsoft universally adopted something called a "zero defects methodology". Many of the programmers in the company giggled, since it sounded like management thought they could reduce the bug count by executive fiat. Actually, "zero defects" meant that at any given time, the highest priority is to eliminate bugs before writing any new code. Here's why.

In general, the longer you wait before fixing a bug, the costlier (in time and money) it is to fix.

For example, when you make a typo or syntax error that the compiler catches, fixing it is basically trivial.

When you have a bug in your code that you see the first time you try to run it, you will be able to fix it in no time at all, because all the code is still fresh in your mind.

If you find a bug in some code that you wrote a few days ago, it will take you a while to hunt it down, but when you reread the code you wrote, you'll remember everything and you'll be able to fix the bug in a reasonable amount of time.

But if you find a bug in code that you wrote a few months ago, you'll probably have forgotten a lot of things about that code, and it's much harder to fix. By that time you may be fixing somebody else's code, and they may be in Aruba on vacation, in which case, fixing the bug is like science: you have to be slow, methodical, and meticulous, and you can't be sure how long it will take to discover the cure.

And if you find a bug in code that has already shipped , you're going to incur incredible expense getting it fixed.

That's one reason to fix bugs right away: because it takes less time. There's another reason, which relates to the fact that it's easier to predict how long it will take to write new code than to fix an existing bug. For example, if I asked you to predict how long it would take to write the code to sort a list, you could give me a pretty good estimate. But if I asked you how to predict how long it would take to fix that bug where your code doesn't work if Internet Explorer 5.5 is installed, you can't even guess , because you don't know (by definition) what's causing the bug. It could take 3 days to track it down, or it could take 2 minutes.

What this means is that if you have a schedule with a lot of bugs remaining to be fixed, the schedule is unreliable. But if you've fixed all the known bugs, and all that's left is new code, then your schedule will be stunningly more accurate.

Another great thing about keeping the bug count at zero is that you can respond much faster to competition. Some programmers think of this as keeping the product ready to ship at all times. Then if your competitor introduces a killer new feature that is stealing your customers, you can implement just that feature and ship on the spot, without having to fix a large number of accumulated bugs.

6. Do you have an up-to-date schedule?
Which brings us to schedules. If your code is at all important to the business, there are lots of reasons why it's important to the business to know when the code is going to be done. Programmers are notoriously crabby about making schedules. "It will be done when it's done!" they scream at the business people.

Unfortunately, that just doesn't cut it. There are too many planning decisions that the business needs to make well in advance of shipping the code: demos, trade shows, advertising, etc. And the only way to do this is to have a schedule, and to keep it up to date.

The other crucial thing about having a schedule is that it forces you to decide what features you are going to do, and then it forces you to pick the least important features and cut them rather than slipping into featuritis (a.k.a. scope creep).

Keeping schedules does not have to be hard. Read my article Painless Software Schedules , which describes a simple way to make great schedules.

7. Do you have a spec?
Writing specs is like flossing: everybody agrees that it's a good thing, but nobody does it.

I'm not sure why this is, but it's probably because most programmers hate writing documents. As a result, when teams consisting solely of programmers attack a problem, they prefer to express their solution in code, rather than in documents. They would much rather dive in and write code than produce a spec first.

At the design stage, when you discover problems, you can fix them easily by editing a few lines of text. Once the code is written, the cost of fixing problems is dramatically higher, both emotionally (people hate to throw away code) and in terms of time, so there's resistance to actually fixing the problems. Software that wasn't built from a spec usually winds up badly designed and the schedule gets out of control. This seems to have been the problem at Netscape, where the first four versions grew into such a mess that management stupidly decided to throw out the code and start over. And then they made this mistake all over again with Mozilla, creating a monster that spun out of control and took several years to get to alpha stage.

My pet theory is that this problem can be fixed by teaching programmers to be less reluctant writers by sending them off to take an intensive course in writing . Another solution is to hire smart program managers who produce the written spec. In either case, you should enforce the simple rule "no code without spec".

Learn all about writing specs by reading my 4-part series .

8. Do programmers have quiet working conditions?
There are extensively documented productivity gains provided by giving knowledge workers space, quiet, and privacy. The classic software management book Peopleware documents these productivity benefits extensively.

Here's the trouble. We all know that knowledge workers work best by getting into "flow", also known as being "in the zone", where they are fully concentrated on their work and fully tuned out of their environment. They lose track of time and produce great stuff through absolute concentration. This is when they get all of their productive work done. Writers, programmers, scientists, and even basketball players will tell you about being in the zone.

The trouble is, getting into "the zone" is not easy. When you try to measure it, it looks like it takes an average of 15 minutes to start working at maximum productivity. Sometimes, if you're tired or have already done a lot of creative work that day, you just can't get into the zone and you spend the rest of your work day fiddling around, reading the web, playing Tetris.

The other trouble is that it's so easy to get knocked out of the zone. Noise, phone calls, going out for lunch, having to drive 5 minutes to Starbucks for coffee, and interruptions by coworkers -- especially interruptions by coworkers -- all knock you out of the zone. If a coworker asks you a question, causing a 1 minute interruption, but this knocks you out of the zone badly enough that it takes you half an hour to get productive again, your overall productivity is in serious trouble. If you're in a noisy bullpen environment like the type that caffeinated dotcoms love to create, with marketing guys screaming on the phone next to programmers, your productivity will plunge as knowledge workers get interrupted time after time and never get into the zone.

With programmers, it's especially hard. Productivity depends on being able to juggle a lot of little details in short term memory all at once. Any kind of interruption can cause these details to come crashing down. When you resume work, you can't remember any of the details (like local variable names you were using, or where you were up to in implementing that search algorithm) and you have to keep looking these things up, which slows you down a lot until you get back up to speed.

Here's the simple algebra. Let's say (as the evidence seems to suggest) that if we interrupt a programmer, even for a minute, we're really blowing away 15 minutes of productivity. For this example, let's put two programmers, Jeff and Mutt, in open cubicles next to each other in a standard Dilbert veal-fattening farm. Mutt can't remember the name of the Unicode version of the strcpy function. He could look it up, which takes 30 seconds, or he could ask Jeff, which takes 15 seconds. Since he's sitting right next to Jeff, he asks Jeff. Jeff gets distracted and loses 15 minutes of productivity (to save Mutt 15 seconds).

Now let's move them into separate offices with walls and doors. Now when Mutt can't remember the name of that function, he could look it up, which still takes 30 seconds, or he could ask Jeff, which now takes 45 seconds and involves standing up (not an easy task given the average physical fitness of programmers!). So he looks it up. So now Mutt loses 30 seconds of productivity, but we save 15 minutes for Jeff. Ahhh!
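The arithmetic behind that trade-off, spelled out (the 15-minute zone-loss figure is the article's assumption, not a measurement):

# The Jeff-and-Mutt arithmetic in seconds; the 15-minute zone-loss figure is the
# article's assumption, not a measurement.
ZONE_LOSS = 15 * 60   # productivity Jeff loses when he is interrupted
LOOKUP = 30           # Mutt looks the answer up himself
ASK = 15              # Mutt asks Jeff across the cubicle wall

open_plan = ASK + ZONE_LOSS   # 915 seconds of combined cost per question
offices = LOOKUP              # 30 seconds; Jeff never leaves the zone
print(open_plan, offices, round(open_plan / offices))  # roughly a 30x difference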

9. Do you use the best tools money can buy?
Writing code in a compiled language is one of the last things that still can't be done instantly on a garden variety home computer. If your compilation process takes more than a few seconds, getting the latest and greatest computer is going to save you time. If compiling takes even 15 seconds, programmers will get bored while the compiler runs and switch over to reading The Onion , which will suck them in and kill hours of productivity.

Debugging GUI code with a single monitor system is painful if not impossible. If you're writing GUI code, two monitors will make things much easier.

Most programmers eventually have to manipulate bitmaps for icons or toolbars, and most programmers don't have a good bitmap editor available. Trying to use Microsoft Paint to manipulate bitmaps is a joke, but that's what most programmers have to do.

At my last job, the system administrator kept sending me automated spam complaining that I was using more than ... get this ... 220 megabytes of hard drive space on the server. I pointed out that given the price of hard drives these days, the cost of this space was significantly less than the cost of the toilet paper I used. Spending even 10 minutes cleaning up my directory would be a fabulous waste of productivity.

Top notch development teams don't torture their programmers. Even minor frustrations caused by using underpowered tools add up, making programmers grumpy and unhappy. And a grumpy programmer is an unproductive programmer.

To add to all this... programmers are easily bribed by giving them the coolest, latest stuff. This is a far cheaper way to get them to work for you than actually paying competitive salaries!

10. Do you have testers?
If your team doesn't have dedicated testers, at least one for every two or three programmers, you are either shipping buggy products, or you're wasting money by having $100/hour programmers do work that can be done by $30/hour testers. Skimping on testers is such an outrageous false economy that I'm simply blown away that more people don't recognize it.

Read Top Five (Wrong) Reasons You Don't Have Testers , an article I wrote about this subject.

11. Do new candidates write code during their interview?
Would you hire a magician without asking them to show you some magic tricks? Of course not.

Would you hire a caterer for your wedding without tasting their food? I doubt it. (Unless it's Aunt Marge, and she would hate you for ever if you didn't let her make her "famous" chopped liver cake).

Yet, every day, programmers are hired on the basis of an impressive resumé or because the interviewer enjoyed chatting with them. Or they are asked trivia questions ("what's the difference between CreateDialog() and DialogBox()?") which could be answered by looking at the documentation. You don't care if they have memorized thousands of trivia questions about programming, you care if they are able to produce code. Or, even worse, they are asked "AHA!" questions: the kind of questions that seem easy when you know the answer, but if you don't know the answer, they are impossible.

Please, just stop doing this . Do whatever you want during interviews, but make the candidate write some code . (For more advice, read my Guerrilla Guide to Interviewing .)

12. Do you do hallway usability testing?
A hallway usability test is where you grab the next person that passes by in the hallway and force them to try to use the code you just wrote. If you do this to five people, you will learn 95% of what there is to learn about usability problems in your code.

Good user interface design is not as hard as you would think, and it's crucial if you want customers to love and buy your product. You can read my free online book on UI design , a short primer for programmers.

But the most important thing about user interfaces is that if you show your program to a handful of people (in fact, five or six is enough), you will quickly discover the biggest problems people are having. Read Jakob Nielsen's article explaining why. Even if your UI design skills are lacking, as long as you force yourself to do hallway usability tests, which cost nothing, your UI will be much, much better.

Four Ways To Use The Joel Test

  1. Rate your own software organization, and tell me how it rates, so I can gossip.
  2. If you're the manager of a programming team, use this as a checklist to make sure your team is working as well as possible. When you start rating a 12, you can leave your programmers alone and focus full time on keeping the business people from bothering them.
  3. If you're trying to decide whether to take a programming job, ask your prospective employer how they rate on this test. If it's too low, make sure that you'll have the authority to fix these things. Otherwise you're going to be frustrated and unproductive.
  4. If you're an investor doing due diligence to judge the value of a programming team, or if your software company is considering merging with another, this test can provide a quick rule of thumb.

[Aug 31, 2019] The Substance of Style - Slashdot

Aug 31, 2019 | news.slashdot.org

Kazoo the Clown ( 644526 ) , Thursday October 16, 2003 @04:35PM ( #7233354 )

AESTHETICS of STYLE? Try CORRUPTION of GREED ( Score: 3 , Insightful)

You're looking at the downside of the "invisible hand" here, methinks.

Take anything by the Sharper Image, for example. Their corporate motto is apparently "Style over Substance," though they are only one of the most blatant. A particularly good example would be their "Ionic Breeze." Selling points? Quieter than HEPA filters (that's because HEPA filters actually DO something). Empty BOXES are quiet too, and pollute your air less. Standardized tests show the Ionic Breeze's ability to remove airborne particles to be almost negligible. Tests also show it doesn't trap the particles it does catch very well, such that they can be re-introduced to the environment. It produces levels of the oxidant gas ozone that accumulate over time, reportedly less than 0.05 ppm after 24 hours, but what about after 48? The EPA's safe limit is 0.08; are you sure your ventilation is sufficient to keep it below that level if you have it on all the time? Do you trust the EPA's limit as being actually safe? (They dropped it to 0.08 from 0.12 in 1997 as, apparently, 0.12 wasn't good enough.) And what does it matter if the darn thing doesn't even remove dust and germs from your environment worth a darn, because most dust and germs are not airborne? Oh, but it LOOKS SO SEXY.

There are countless products that people buy not because they are tuned into the brilliant aesthetics, but because of the intimidation value of the brilliant marketing campaigns that convince them that if they don't have the product, they're deprived. That they need it to shallowly show off that they have good taste, when they really have no taste at all except that which was sold to them.

[Aug 31, 2019] Ask Slashdot How Would You Teach 'Best Practices' For Programmers - Slashdot

Aug 31, 2019 | ask.slashdot.org

Strider- ( 39683 ) , Sunday February 25, 2018 @08:43AM ( #56184459 )

Re:Back to basics ( Score: 5 , Insightful)

Oh hell no. So-called "self-documenting code" isn't. You can write the most comprehensible, clear code in the history of mankind, and that's still not good enough.

The issue is that your code only documents what the code is doing, not what it is supposed to be doing. You wouldn't believe how many subtle issues I've come across over the decades where on the face of it everything should have been good, but in reality the code was behaving slightly differently than what was intended.

JaredOfEuropa ( 526365 ) , Sunday February 25, 2018 @08:58AM ( #56184487 ) Journal
Re:Back to basics ( Score: 5 , Insightful)
The issue is that your code only documents what the code is doing, not what it is supposed to be doing

Mod this up. I aim to document my intent, i.e. what the code is supposed to do. Not only does this help catch bugs within a procedure, but it also forces me to think a little bit about the purpose of each method or function. It helps catch bugs or inconsistencies in the software architecture as well.

johnsnails ( 1715452 ) , Sunday February 25, 2018 @09:32AM ( #56184523 )
Re: Back to basics ( Score: 2 )

I agree with everything you said besides keeping comments short (whatever that means precisely). Sometimes a good comment will be a solid 4-5 line paragraph. But maybe I should instead fix the code that needs that long a comment.

pjt33 ( 739471 ) , Sunday February 25, 2018 @04:06PM ( #56184861 )
Re: Back to basics ( Score: 2 )

I once wrote a library for (essentially) GIS which was full of comments that were 20 lines or longer. When the correctness of the code depends on theorems in non-Euclidean geometry and you can't assume that the maintainer will know any, I don't think it's a bad idea to make the proofs quite explicit.

[Aug 31, 2019] Programming is about Effective Communication

Aug 31, 2019 | developers.slashdot.org

Anonymous Coward , Friday February 22, 2019 @02:42PM ( #58165060 )

Algorithms, not code ( Score: 4 , Insightful)

Sad to see these are all books about coding and coding style. Nothing at all here about algorithms, or data structures.

My vote goes for Algorithms by Sedgewick

Seven Spirals ( 4924941 ) , Friday February 22, 2019 @02:57PM ( #58165150 )
MOTIF Programming by Marshall Brain ( Score: 3 )

Amazing how little memory and CPU MOTIF applications take. Once you get over the callbacks, it's actually not bad!

Seven Spirals ( 4924941 ) writes:
Re: ( Score: 2 )

Interesting. Sorry you had that experience. I'm not sure what you mean by a "multi-line text widget". I can tell you that early versions of OpenMOTIF were very very buggy in my experience. You probably know this, but after OpenMOTIF was completed and revved a few times the original MOTIF code was released as open-source. Many of the bugs I'd been seeing (and some just strange visual artifacts) disappeared. I know a lot of people love QT and it's produced real apps and real results - I won't poo-poo it. How

SuperKendall ( 25149 ) writes:
Design and Evolution of C++ ( Score: 2 )

Even if you don't like C++ much, The Design and Evolution of C++ [amazon.com] is a great book for understanding why pretty much any language ends up the way it does, seeing the tradeoffs and how a language comes to grow and expand from simple roots. It's way more interesting to read than you might expect (not very dry, and more about human interaction than you would expect).

Other than that reading through back posts in a lot of coding blogs that have been around a long time is probably a really good idea.

Also a side re

shanen ( 462549 ) writes:
What about books that hadn't been written yet? ( Score: 2 )

You young whippersnappers don't 'preciate how good you have it!

Back in my day, the only book about programming was the 1401 assembly language manual!

But seriously, folks, it's pretty clear we still don't know shite about how to program properly. We have some fairly clear success criteria for improving the hardware, but the criteria for good software are clear as mud, and the criteria for ways to produce good software are much muddier than that.

Having said that, I will now peruse the thread rather carefully

shanen ( 462549 ) writes:
TMI, especially PII ( Score: 2 )

Couldn't find any mention of Guy Steele, so I'll throw in The New Hacker's Dictionary , which I once owned in dead tree form. Not sure if Version 4.4.7 http://catb.org/jargon/html/ [catb.org] is the latest online... Also remember a couple of his language manuals. Probably used the Common Lisp one the most...

Didn't find any mention of a lot of books that I consider highly relevant, but that may reflect my personal bias towards history. Not really relevant for most programmers.

TMI, but if I open up my database on all t

UnknownSoldier ( 67820 ) , Friday February 22, 2019 @03:52PM ( #58165532 )
Programming is about **Effective Communication** ( Score: 5 , Insightful)

I've been programming for the past ~40 years and I'll try to summarize what I believe are the most important bits about programming (pardon the pun.) Think of this as a META: " HOWTO: Be A Great Programmer " summary. (I'll get to the books section in a bit.)

1. All code can be summarized as a trinity of 3 fundamental concepts (a minimal sketch follows this list):

* Linear; that is, sequence: A, B, C
* Cyclic; that is, unconditional jumps: A-B-C-goto B
* Choice; that is, conditional jumps: if A then B
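
A minimal Perl sketch of these three concepts side by side (the values and variable names are made up for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Linear: statements executed in sequence
    my @numbers = (3, 1, 4, 1, 5);
    my $total   = 0;

    # Cyclic: a loop (the structured form of the unconditional jump)
    for my $n (@numbers) {
        # Choice: a conditional jump
        if ( $n > 1 ) {
            $total += $n;
        }
    }

    print "total of values greater than 1: $total\n";   # prints 12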

2. ~80% of programming is NOT about code; it is about Effective Communication. Whether that be:

* with your compiler / interpreter / REPL
* with other code (levels of abstraction, level of coupling, separation of concerns, etc.)
* with your boss(es) / manager(s)
* with your colleagues
* with your legal team
* with your QA dept
* with your customer(s)
* with the general public

The other ~20% is effective time management and design. A good programmer knows how to budget their time. Programming is about balancing the three conflicting goals of the Project Management Triangle [wikipedia.org]: You can have it on time, on budget, on quality. Pick two.

3. Stages of a Programmer

There are two old jokes:

In Lisp all code is data. In Haskell all data is code.

And:

Progression of a (Lisp) Programmer:

* The newbie realizes that the difference between code and data is trivial.
* The expert realizes that all code is data.
* The true master realizes that all data is code.

(Attributed to Aristotle Pagaltzis)

The point of these jokes is that as you work with systems you start to realize that a data-driven process can often greatly simplify things.

4. Know Thy Data

Fred Brooks once wrote

"Show me your flowcharts (source code), and conceal your tables (domain model), and I shall continue to be mystified; show me your tables (domain model) and I won't usually need your flowcharts (source code): they'll be obvious."

A more modern version would read like this:

Show me your code and I'll have to see your data,
Show me your data and I won't have to see your code.

The importance of data can't be overstated:

* Optimization STARTS with understanding HOW the data is being generated and used, NOT the code as has been traditionally taught.
* Post 2000 "Big Data" has been called the new oil. We are generating upwards to millions of GB of data every second. Analyzing that data is import to spot trends and potential problems.

5. There are three levels of optimization, listed here from least to greatest impact on run time:

a) Bit-twiddling hacks [stanford.edu]
b) Algorithmic -- Algorithmic complexity or Analysis of algorithms [wikipedia.org] (such as Big-O notation)
c) Data-Orientated Design [dataorienteddesign.com] -- Understanding how hardware caches such as instruction and data caches matter. Optimize for the common case, NOT the single case that OOP tends to favor.

Optimizing is understanding Bang-for-the-Buck. 80% of execution time is spent in 20% of the code. Speeding up hot-spots with bit twiddling won't be as effective as using a more efficient algorithm, which, in turn, won't be as effective as understanding HOW the data is manipulated in the first place.
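
As a rough Perl illustration of the algorithmic level (the user list and sub names are invented here, not from the original post), replacing a linear scan with a hash lookup turns a per-call O(n) cost into O(1); the data-oriented level would go further and ask how the data is generated and laid out in the first place:

    use strict;
    use warnings;

    my @allowed_users = qw(alice bob carol dave);

    # Algorithmic level: a linear scan costs O(n) per lookup ...
    sub is_allowed_scan {
        my ($name) = @_;
        return scalar grep { $_ eq $name } @allowed_users;
    }

    # ... while building a hash once makes each lookup O(1).
    my %allowed = map { $_ => 1 } @allowed_users;

    sub is_allowed_hash {
        my ($name) = @_;
        return exists $allowed{$name};
    }

    print is_allowed_hash('carol') ? "allowed\n" : "denied\n";   # allowed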

6. Fundamental Reading

Since the OP specifically asked about books -- there are lots of great ones. The ones that have impressed me that I would mark as "required" reading:

* The Mythical Man-Month
* Godel, Escher, Bach
* Knuth: The Art of Computer Programming
* The Pragmatic Programmer
* Zero Bugs and Program Faster
* Writing Solid Code / Code Complete by Steve McConnell
* Game Programming Patterns [gameprogra...tterns.com] (*)
* Game Engine Design
* Thinking in Java by Bruce Eckel
* Puzzles for Hackers by Ivan Sklyarov

(*) I did NOT list Design Patterns: Elements of Reusable Object-Oriented Software as that leads to typical, bloated, over-engineered crap. The main problem with "Design Patterns" is that a programmer will often get locked into a mindset of seeing everything as a pattern -- even when a simple few lines of code would solve the problem. For example, here is 1,100+ lines of Crap++ code, Boost's over-engineered CRC code [boost.org], when a mere ~25 lines of SIMPLE C code would have done the trick. When was the last time you ACTUALLY needed to _modify_ a CRC function? The BIG picture is that you are probably looking for a BETTER HASHING function with fewer collisions. You probably would be better off using a DIFFERENT algorithm such as SHA-2, etc.

7. Do NOT copy-pasta

Roughly 80% of bugs creep in because someone blindly copied-pasted without thinking. Type out ALL code so you actually THINK about what you are writing.

8. K.I.S.S.

Over-engineering, aka technical debt, will be your Achilles' heel. Keep It Simple, Silly.

9. Use DESCRIPTIVE variable names

You spend ~80% of your time READING code, and only ~20% writing it. Use good, descriptive variable names. Far too many programmers write useless comments and don't understand the difference between code and comments:

Code says HOW, Comments say WHY

A crap comment will say something like: // increment i

No shit, Sherlock! Don't comment the obvious!

A good comment will say something like: // BUGFIX: 1234: Work-around issues caused by A, B, and C.
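
A small Perl illustration of the difference (the bug number and scenario are hypothetical):

    use strict;
    use warnings;

    my $retry_count = 0;

    # Crap comment: restates WHAT the code already says.
    $retry_count++;    # increment retry_count

    # Useful comment: records WHY the code is the way it is.
    # BUGFIX 1234: the vendor API sporadically returns HTTP 503 under load,
    # so allow up to three attempts before giving up.
    my $max_attempts = 3;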

10. Ignoring Memory Management doesn't make it go away -- now you have two problems. (With apologies to JWZ)

TINSTAAFL.

11. Learn Multi-Paradigm programming [wikipedia.org].

If you don't understand both the pros and cons of these programming paradigms ...

* Procedural
* Object-Orientated
* Functional, and
* Data-Orientated Design

... then you will never really understand programming, nor abstraction, at a deep level, along with how and when it should and shouldn't be used.

12. Multi-disciplinary POV

ALL non-trivial code has bugs. If you aren't using static code analysis [wikipedia.org] then you are not catching as many bugs as the people who are.

Also, a good programmer looks at his code from many different angles. As a programmer you must put on many different hats to find them:

* Architect -- design the code
* Engineer / Construction Worker -- implement the code
* Tester -- test the code
* Consumer -- doesn't see the code, only sees the results. Does it even work?? Did you VERIFY it did BEFORE you checked your code into version control?

13. Learn multiple Programming Languages

Each language was designed to solve certain problems. Learning different languages, even ones you hate, will expose you to different concepts. e.g. If you don't know how to read assembly language AND your high-level language, then you will never be as good as the programmer who does both.

14. Respect your Colleagues' and Consumers' Time, Space, and Money.

Mobile games are the WORST at respecting people's time, space and money, turning "players into payers." They treat customers as whales. Don't do this. A practical example: if you are in a Slack channel with 50+ people, do NOT use @here. YOUR fire is not their emergency!

15. Be Passionate

If you aren't passionate about programming, that is, you are only doing it for the money, it will show. Take some pride in doing a GOOD job.

16. Perfect Practice Makes Perfect.

If you aren't programming every day you will never be as good as someone who is. Programming is about solving interesting problems. Practice solving puzzles to develop your intuition and lateral thinking. The more you practice the better you get.

"Sorry" for the book but I felt it was important to summarize the "essentials" of programming.

--
Hey Slashdot. Fix your shitty filter so long lists can be posted.: "Your comment has too few characters per line (currently 37.0)."

raymorris ( 2726007 ) , Friday February 22, 2019 @05:39PM ( #58166230 ) Journal
Shared this with my team ( Score: 4 , Insightful)

You crammed a lot of good ideas into a short post.
I'm sending my team at work a link to your post.

You mentioned that code can be data. Linus Torvalds had this to say:

"I'm a huge proponent of designing your code around the data, rather than the other way around, and I think it's one of the reasons git has been fairly successful [â¦] I will, in fact, claim that the difference between a bad programmer and a good one is whether he considers his code or his data structures more important."

"Bad programmers worry about the code. Good programmers worry about data structures and their relationships."

I'm inclined to agree. Once the data structure is right, the code often almost writes itself. It'll be easy to write and easy to read because it's obvious how one would handle data structured in that elegant way.

Writing the code necessary to transform the data from the input format into the right structure can be non-obvious, but it's normally worth it.
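
A tiny Perl illustration of that point (invented for this page, not part of the original comment): once you decide the data structure is a hash keyed by word, word counting is almost mechanical.

    use strict;
    use warnings;

    # The design decision is the data structure: a hash keyed by word.
    my %count;
    while ( my $line = <STDIN> ) {
        $count{ lc $_ }++ for grep { length } split /\W+/, $line;
    }

    # With that structure chosen, reporting is a one-liner per word.
    for my $word ( sort { $count{$b} <=> $count{$a} } keys %count ) {
        printf "%6d  %s\n", $count{$word}, $word;
    }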

[Aug 31, 2019] Slashdot Asks How Did You Learn How To Code - Slashdot

Aug 31, 2019 | ask.slashdot.org

GreatDrok ( 684119 ) , Saturday June 04, 2016 @10:03PM ( #52250917 ) Journal

Programming, not coding ( Score: 5 , Interesting)

I learnt to program at school from a Ph.D. computer scientist. We never even had computers in the class. We learnt to break the problem down into sections using flowcharts or pseudo-code, and then we would translate that into whatever coding language we were using. I still do this, usually in my notebook, where I figure out all the things I need to do, write the skeleton of the code as a series of comments describing each section of my program, and then fill in the code for each section. It is a combination of top-down and bottom-up programming, writing routines that can be independently tested and validated.

[Aug 28, 2019] Carp::Assert - executable comments - metacpan.org

Aug 28, 2019 | metacpan.org


NAME

Carp::Assert - executable comments

SYNOPSIS

    # Assertions are on.
    use Carp::Assert;

    $next_sunrise_time = sunrise();

    # Assert that the sun must rise in the next 24 hours.
    assert(($next_sunrise_time - time) < 24*60*60) if DEBUG;

    # Assert that your customer's primary credit card is active
    affirm {
        my @cards = @{ $customer->credit_cards };
        $cards[0]->is_active;
    };

    # Assertions are off.
    no Carp::Assert;

    $next_pres = divine_next_president();

    # Assert that if you predict Dan Quayle will be the next president
    # your crystal ball might need some polishing.  However, since
    # assertions are off, IT COULD HAPPEN!
    shouldnt($next_pres, 'Dan Quayle') if DEBUG;

DESCRIPTION

"We are ready for any unforseen event that may or may not occur." - Dan Quayle

Carp::Assert is intended for a purpose like the ANSI C library assert.h . If you're already familiar with assert.h, then you can probably skip this and go straight to the FUNCTIONS section.

Assertions are the explicit expressions of your assumptions about the reality your program is expected to deal with, and a declaration of those which it is not. They are used to prevent your program from blissfully processing garbage inputs (garbage in, garbage out becomes garbage in, error out) and to tell you when you've produced garbage output. (If I was going to be a cynic about Perl and the user nature, I'd say there are no user inputs but garbage, and Perl produces nothing but...)

An assertion is used to prevent the impossible from being asked of your code, or at least tell you when it does. For example:

    # Take the square root of a number.
    sub my_sqrt {
        my ($num) = shift;

        # the square root of a negative number is imaginary.
        assert($num >= 0);

        return sqrt $num;
    }

The assertion will warn you if a negative number was handed to your subroutine, a reality the routine has no intention of dealing with.

An assertion should also be used as something of a reality check, to make sure what your code just did really did happen:

    open(FILE, $filename) || die $!;
    @stuff = <FILE>;
    @stuff = do_something(@stuff);

    # I should have some stuff.
    assert(@stuff > 0);

The assertion makes sure you have some @stuff at the end. Maybe the file was empty, maybe do_something() returned an empty list... either way, the assert() will give you a clue as to where the problem lies, rather than 50 lines further down when you wonder why your program isn't printing anything.

Since assertions are designed for debugging and will remove themselves from production code, your assertions should be carefully crafted so as to not have any side-effects, change any variables, or otherwise have any effect on your program. Here is an example of a bad assertion:

assert( $error = 1 if $king ne 'Henry' ); # Bad!

It sets an error flag which may then be used somewhere else in your program. When you shut off your assertions with the $DEBUG flag, $error will no longer be set.

Here's another example of bad use:

assert( $next_pres ne 'Dan Quayle' or goto Canada); # Bad!

This assertion has the side effect of moving to Canada should it fail. This is a very bad assertion since error handling should not be placed in an assertion, nor should it have side-effects.

In short, an assertion is an executable comment. For instance, instead of writing this

    # $life ends with a '!'
    $life = begin_life();

you'd replace the comment with an assertion which enforces the comment.

    $life = begin_life();
    assert( $life =~ /!$/ );

FUNCTIONS

assert
    assert(EXPR) if DEBUG;
    assert(EXPR, $name) if DEBUG;

assert's functionality is affected by the compile-time value of the DEBUG constant, controlled by saying use Carp::Assert or no Carp::Assert . In the former case, assert will function as below. Otherwise, the assert function will compile itself out of the program. See "Debugging vs Production" for details.

Given an expression, assert will Carp::confess() if that expression is false; otherwise it does nothing. (DO NOT use the return value of assert for anything, I mean it... really!).

The error from assert will look something like this:

    Assertion failed!
            Carp::Assert::assert(0) called at prog line 23
            main::foo called at prog line 50

Indicating that in the file "prog" an assert failed inside the function main::foo() on line 23 and that foo() was in turn called from line 50 in the same file.

If given a $name, assert() will incorporate this into your error message, giving users something of a better idea what's going on.

assert( Dogs->isa( 'People' ), 'Dogs are people, too!' ) if DEBUG; # Result - "Assertion (Dogs are people, too!) failed!"
affirm
    affirm BLOCK if DEBUG;
    affirm BLOCK $name if DEBUG;

Very similar to assert(), but instead of taking just a simple expression it takes an entire block of code and evaluates it to make sure it's true. This can allow more complicated assertions than assert() can without letting the debugging code leak out into production and without having to smash together several statements into one.

    affirm {
        my $customer = Customer->new($customerid);
        my @cards = $customer->credit_cards;
        grep { $_->is_active } @cards;
    } "Our customer has an active credit card";

affirm() also has the nice side effect that if you forgot the if DEBUG suffix its arguments will not be evaluated at all. This can be nice if you stick affirm()s with expensive checks into hot loops and other time-sensitive parts of your program.

If the $name is left off and your Perl version is 5.6 or higher, the affirm() diagnostics will include the code being affirmed.

should
shouldnt
    should  ($this, $shouldbe)   if DEBUG;
    shouldnt($this, $shouldntbe) if DEBUG;

Similar to assert(), it is specially for simple "this should be that" or "this should be anything but that" style of assertions.

Due to Perl's lack of a good macro system, assert() can only report where something failed, but it can't report what failed or how . should() and shouldnt() can produce more informative error messages:

    Assertion ('this' should be 'that'!) failed!
            Carp::Assert::should('this', 'that') called at moof line 29
            main::foo() called at moof line 58

So this:

should( $this , $that ) if DEBUG;

is similar to this:

assert( $this eq $that ) if DEBUG;

except for the better error message.

Currently, should() and shouldnt() can only do simple eq and ne tests (respectively). Future versions may allow regexes.

Debugging vs Production

Because assertions are extra code and because it is sometimes necessary to place them in 'hot' portions of your code where speed is paramount, Carp::Assert provides the option to remove its assert() calls from your program.

So, we provide a way to force Perl to inline the switched off assert() routine, thereby removing almost all performance impact on your production code.

    no Carp::Assert;   # assertions are off.
    assert(1==1) if DEBUG;

DEBUG is a constant set to 0. Adding the 'if DEBUG' condition on your assert() call gives perl the cue to go ahead and remove the assert() call from your program entirely, since the if conditional will always be false.

# With C<no Carp::Assert> the assert() has no impact. for (1..100) { assert( do_some_really_time_consuming_check ) if DEBUG; }

If if DEBUG gets too annoying, you can always use affirm().

    # Once again, affirm() has (almost) no impact with no Carp::Assert
    for (1..100) {
        affirm { do_some_really_time_consuming_check };
    }

Another way to switch off all asserts, system wide, is to define the NDEBUG or the PERL_NDEBUG environment variable.

You can safely leave out the "if DEBUG" part, but then your assert() function will always execute (and its arguments evaluated and time spent). To get around this, use affirm(). You still have the overhead of calling a function but at least its arguments will not be evaluated.

Differences from ANSI C

assert() is intended to act like the function from ANSI C fame. Unfortunately, due to Perl's lack of macros or strong inlining, it's not nearly as unobtrusive.

Well, the obvious one is the "if DEBUG" part. This is the cleanest way I could think of to cause each assert() call and its arguments to be removed from the program at compile-time, like the ANSI C macro does.

Also, this version of assert does not report the statement which failed, just the line number and call frame via Carp::confess. You can't do assert('$a == $b') because $a and $b will probably be lexical, and thus unavailable to assert(). But with Perl, unlike C, you always have the source to look through, so the need isn't as great.

EFFICIENCY

With no Carp::Assert (or NDEBUG) and using the if DEBUG suffixes on all your assertions, Carp::Assert has almost no impact on your production code. I say almost because it does still add some load-time to your code (I've tried to reduce this as much as possible).

If you forget the if DEBUG on an assert() , should() or shouldnt() , its arguments are still evaluated and thus will impact your code. You'll also have the extra overhead of calling a subroutine (even if that subroutine does nothing).

Forgetting the if DEBUG on an affirm() is not so bad. While you still have the overhead of calling a subroutine (one that does nothing) it will not evaluate its code block and that can save a lot.

Try to remember the if DEBUG .

ENVIRONMENT


NDEBUG
Defining NDEBUG switches off all assertions. It has the same effect as changing "use Carp::Assert" to "no Carp::Assert" but it affects all code.

PERL_NDEBUG
Same as NDEBUG and will override it. It's provided to give you something which won't conflict with any C programs you might be working on at the same time.

BUGS, CAVEATS and other MUSINGS

Conflicts with POSIX.pm

The POSIX module exports an assert routine which will conflict with Carp::Assert if both are used in the same namespace. If you are using both together, prevent POSIX from exporting like so:

    use POSIX ();
    use Carp::Assert;

Since POSIX exports way too much, you should be using it like that anyway.

affirm and $^S

affirm() mucks with the expression's caller and it is run in an eval so anything that checks $^S will be wrong.

shouldn't

Yes, there is a shouldn't routine. It mostly works, but you must put the if DEBUG after it.

missing if DEBUG

It would be nice if we could warn about missing if DEBUG .

SEE ALSO

assert.h - the wikipedia page about assert.h .

Carp::Assert::More provides a set of convenience functions that are wrappers around Carp::Assert .

Sub::Assert provides support for subroutine pre- and post-conditions. The documentation says it's slow.

PerlX::Assert provides compile-time assertions, which are usually optimised away at compile time. Currently part of the Moops distribution, but may get its own distribution sometime in 2014.

Devel::Assert also provides an assert function, for Perl >= 5.8.1.

assertions provides an assertion mechanism for Perl >= 5.9.0.

REPOSITORY

https://github.com/schwern/Carp-Assert

COPYRIGHT

Copyright 2001-2007 by Michael G Schwern <schwern@pobox.com>.

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

See http://dev.perl.org/licenses/

AUTHOR

Michael G Schwern <schwern@pobox.com>

[Aug 27, 2019] Retire your debugger, log smartly with LogLog4perl! by Michael Schilli

This is a large, currently unmaintained subsystem (the last changes were made on Feb 21, 2017) of questionable value for simple scripts; the main problems are overcomplexity and a large number of dependencies. It makes things way too complex for simple applications.
It still might make perfect sense for very complex applications.
Sep 11, 2002 | www.perl.com

You've rolled out an application and it produces mysterious, sporadic errors? That's pretty common, even if fairly well-tested applications are exposed to real-world data. How can you track down when and where exactly your problem occurs? What kind of user data is it caused by? A debugger won't help you there.

And you don't want to keep track of only bad cases. It's helpful to log all types of meaningful incidents while your system is running in production, in order to extract statistical data from your logs later. Or, what if a problem only happens after a certain sequence of 'good' cases? Especially in dynamic environments like the Web, anything can happen at any time and you want a footprint of every event later, when you're counting the corpses.

What you need is well-architected logging : Log statements in your code and a logging package like Log::Log4perl providing a "remote-control," which allows you to turn on previously inactive logging statements, increase or decrease their verbosity independently in different parts of the system, or turn them back off entirely. Certainly without touching your system's code – and even without restarting it.

However, with traditional logging systems, the amount of data written to the logs can be overwhelming. In fact, turning on low-level-logging on a system under heavy load can cause it to slow down to a crawl or even crash.

Log::Log4perl is different. It is a pure Perl port of the widely popular Apache/Jakarta log4j library [3] for Java, a project made public in 1999, which has been actively supported and enhanced by a team around head honcho Ceki Gülcü during the years.

The comforting facts about log4j are that it's really well thought out, it's the alternative logging standard for Java and it's been in use for years with numerous projects. If you don't like Java, then don't worry, you're not alone – the Log::Log4perl authors (yours truly among them) are all Perl hardliners who made sure Log::Log4perl is real Perl.

In the spirit of log4j, Log::Log4perl addresses the shortcomings of typical ad-hoc or homegrown logging systems by providing three mechanisms to control the amount of data being logged and where it ends up:

  • Levels, which rank each message by priority (DEBUG, INFO, WARN, ERROR, FATAL) and let you set a threshold below which messages are suppressed;
  • Categories, which let you switch logging on and off, and raise or lower verbosity, independently in different parts of the system;
  • Appenders, which determine where the surviving messages go: the screen, files, syslog, a database, and so on.

In combination, these three control mechanisms turn out to be very powerful. They allow you to control the logging behavior of even the most complex applications at a granular level. However, it takes time to get used to the concept, so let's start the easy way:

Getting Your Feet Wet With Log4perl

If you've used logging before, then you're probably familiar with logging priorities or levels . Each log incident is assigned a level. If this incident level is higher than the system's logging level setting (typically initialized at system startup), then the message is logged, otherwise it is suppressed.

Log::Log4perl defines five logging levels, listed here from low to high:

    DEBUG
    INFO
    WARN
    ERROR
    FATAL

Let's assume that you decide at system startup that only messages of level WARN and higher are supposed to make it through. If your code then contains a log statement with priority DEBUG, then it won't ever be executed. However, if you choose at some point to bump up the amount of detail, then you can just set your system's logging priority to DEBUG and you will see these DEBUG messages starting to show up in your logs, too.
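
A minimal sketch of that setup using Log::Log4perl's ':easy' interface (assuming the module is installed; the messages are invented):

    use strict;
    use warnings;
    use Log::Log4perl qw(:easy);

    # Decided at system startup: only WARN and higher make it through.
    Log::Log4perl->easy_init($WARN);

    DEBUG "connecting to database";            # suppressed at $WARN
    WARN  "connection pool nearly exhausted";  # logged
    ERROR "cannot reach database server";      # logged

    # Re-initializing with $DEBUG (for example, driven by a configuration
    # file rather than a code change) makes the DEBUG statement show up too.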

... ... ...

[Aug 27, 2019] perl defensive programming (die, assert, croak) - Stack Overflow

Aug 27, 2019 | stackoverflow.com



Zaid ,Feb 23, 2014 at 17:11

What is the best (or recommended) approach to do defensive programming in perl? For example if I have a sub which must be called with a (defined) SCALAR, an ARRAYREF and an optional HASHREF.

Three of the approaches I have seen:

sub test1 {
    die if !(@_ == 2 || @_ == 3);
    my ($scalar, $arrayref, $hashref) = @_;
    die if !defined($scalar) || ref($scalar);
    die if ref($arrayref) ne 'ARRAY';
    die if defined($hashref) && ref($hashref) ne 'HASH';
    #do s.th with scalar, arrayref and hashref
}

sub test2 {
    Carp::assert(@_ == 2 || @_ == 3) if DEBUG;
    my ($scalar, $arrayref, $hashref) = @_;
    if(DEBUG) {
        Carp::assert defined($scalar) && !ref($scalar);
        Carp::assert ref($arrayref) eq 'ARRAY';
        Carp::assert !defined($hashref) || ref($hashref) eq 'HASH';
    }
    #do s.th with scalar, arrayref and hashref
}

sub test3 {
    my ($scalar, $arrayref, $hashref) = @_;
    (@_ == 2 || @_ == 3 && defined($scalar) && !ref($scalar) && ref($arrayref) eq 'ARRAY' && (!defined($hashref) || ref($hashref) eq 'HASH'))
        or Carp::croak 'usage: test3(SCALAR, ARRAYREF, [HASHREF])';
    #do s.th with scalar, arrayref and hashref
}

tobyink ,Feb 23, 2014 at 21:44

use Params::Validate qw(:all);

sub Yada {
   my (...)=validate_pos(@_,{ type=>SCALAR },{ type=>ARRAYREF },{ type=>HASHREF,optional=>1 });
   ...
}

ikegami ,Feb 23, 2014 at 17:33

I wouldn't use any of them. Aside from not accepting many array and hash references, the checks you used are almost always redundant.
>perl -we"use strict; sub { my ($x) = @_; my $y = $x->[0] }->( 'abc' )"
Can't use string ("abc") as an ARRAY ref nda"strict refs" in use at -e line 1.

>perl -we"use strict; sub { my ($x) = @_; my $y = $x->[0] }->( {} )"
Not an ARRAY reference at -e line 1.

The only advantage to checking is that you can use croak to show the caller in the error message.


Proper way to check if you have a reference to an array:

defined($x) && eval { @$x; 1 }

Proper way to check if you have a reference to a hash:

defined($x) && eval { %$x; 1 }

Borodin ,Feb 23, 2014 at 17:23

None of the options you show display any message to give a reason for the failure, which I think is paramount.

It is also preferable to use croak instead of die from within library subroutines, so that the error is reported from the point of view of the caller.

I would replace all occurrences of if ! with unless . The former is a C programmer's habit.

I suggest something like this

sub test1 {
    croak "Incorrect number of parameters" unless @_ == 2 or @_ == 3;
    my ($scalar, $arrayref, $hashref) = @_;
    croak "Invalid first parameter" unless $scalar and not ref $scalar;
    croak "Invalid second parameter" unless $arrayref eq 'ARRAY';
    croak "Invalid third parameter" if defined $hashref and ref $hashref ne 'HASH';

    # do s.th with scalar, arrayref and hashref
}

[Aug 27, 2019] What Is Defensive Programming

Notable quotes:
"... Defensive programming is a method of prevention, rather than a form of cure. Compare this to debugging -- the act of removing bugs after they've bitten. Debugging is all about finding a cure. ..."
"... Defensive programming saves you literally hours of debugging and lets you do more fun stuff instead. Remember Murphy: If your code can be used incorrectly, it will be. ..."
"... Working code that runs properly, but ever-so-slightly slower, is far superior to code that works most of the time but occasionally collapses in a shower of brightly colored sparks ..."
"... Defensive programming avoids a large number of security problems -- a serious issue in modern software development. ..."
Aug 26, 2019 | Amazon.com

Originally from: Code Craft The Practice of Writing Excellent Code Pete Goodliffe 0689145711905 Amazon.com Gateway

Okay, defensive programming won't remove program failures altogether. But problems will become less of a hassle and easier to fix. Defensive programmers catch falling snowflakes rather than get buried under an avalanche of errors.

Defensive programming is a method of prevention, rather than a form of cure. Compare this to debugging -- the act of removing bugs after they've bitten. Debugging is all about finding a cure.

WHAT DEFENSIVE PROGRAMMING ISN'T

There are a few common misconceptions about defensive programming . Defensive programming is not:

Error checking
If there are error conditions that might arise in your code, you should be checking for them anyway. This is not defensive code. It's just plain good practice -- a part of writing correct code.
Testing
Testing your code is not defensive . It's another normal part of our development work. Test harnesses aren't defensive ; they can prove the code is correct now, but won't prove that it will stand up to future modification. Even with the best test suite in the world, anyone can make a change and slip it past untested.
Debugging
You might add some defensive code during a spell of debugging, but debugging is something you do after your program has failed. Defensive programming is something you do to prevent your program from failing in the first place (or to detect failures early before they manifest in incomprehensible ways, demanding all-night debugging sessions).

Is defensive programming really worth the hassle? There are arguments for and against:

The case against
Defensive programming consumes resources, both yours and the computer's.
  • It eats into the efficiency of your code; even a little extra code requires a little extra execution. For a single function or class, this might not matter, but when you have a system made up of 100,000 functions, you may have more of a problem.
  • Each defensive practice requires some extra work. Why should you follow any of them? You have enough to do already, right? Just make sure people use your code correctly. If they don't, then any problems are their own fault.
The case for
The counterargument is compelling.
  • Defensive programming saves you literally hours of debugging and lets you do more fun stuff instead. Remember Murphy: If your code can be used incorrectly, it will be.
  • Working code that runs properly, but ever-so-slightly slower, is far superior to code that works most of the time but occasionally collapses in a shower of brightly colored sparks.
  • We can design some defensive code to be physically removed in release builds, circumventing the performance issue. The majority of the items we'll consider here don't have any significant overhead, anyway.
  • Defensive programming avoids a large number of security problems -- a serious issue in modern software development. More on this follows.

As the market demands software that's built faster and cheaper, we need to focus on techniques that deliver results. Don't skip the bit of extra work up front that will prevent a whole world of pain and delay later.

[Aug 26, 2019] Error-Handling Techniques

Notable quotes:
"... Return a neutral value. Sometimes the best response to bad data is to continue operating and simply return a value that's known to be harmless. A numeric computation might return 0. A string operation might return an empty string, or a pointer operation might return an empty pointer. A drawing routine that gets a bad input value for color in a video game might use the default background or foreground color. A drawing routine that displays x-ray data for cancer patients, however, would not want to display a "neutral value." In that case, you'd be better off shutting down the program than displaying incorrect patient data. ..."
Aug 26, 2019 | Amazon.com

Originally from: Code Complete, Second Edition Books

Assertions are used to handle errors that should never occur in the code. How do you handle errors that you do expect to occur? Depending on the specific circumstances, you might want to return a neutral value, substitute the next piece of valid data, return the same answer as the previous time, substitute the closest legal value, log a warning message to a file, return an error code, call an error-processing routine or object, display an error message, or shut down -- or you might want to use a combination of these responses.

Here are some more details on these options:

Return a neutral value. Sometimes the best response to bad data is to continue operating and simply return a value that's known to be harmless. A numeric computation might return 0. A string operation might return an empty string, or a pointer operation might return an empty pointer. A drawing routine that gets a bad input value for color in a video game might use the default background or foreground color. A drawing routine that displays x-ray data for cancer patients, however, would not want to display a "neutral value." In that case, you'd be better off shutting down the program than displaying incorrect patient data.

Substitute the next piece of valid data. When processing a stream of data, some circumstances call for simply returning the next valid data. If you're reading records from a database and encounter a corrupted record, you might simply continue reading until you find a valid record. If you're taking readings from a thermometer 100 times per second and you don't get a valid reading one time, you might simply wait another 1/100th of a second and take the next reading.

Return the same answer as the previous time. If the thermometer-reading software doesn't get a reading one time, it might simply return the same value as last time. Depending on the application, temperatures might not be very likely to change much in 1/100th of a second. In a video game, if you detect a request to paint part of the screen an invalid color, you might simply return the same color used previously. But if you're authorizing transactions at a cash machine, you probably wouldn't want to use the "same answer as last time" -- that would be the previous user's bank account number!

Substitute the closest legal value. In some cases, you might choose to return the closest legal value, as in the Velocity example earlier. This is often a reasonable approach when taking readings from a calibrated instrument. The thermometer might be calibrated between 0 and 100 degrees Celsius, for example. If you detect a reading less than 0, you can substitute 0, which is the closest legal value. If you detect a value greater than 100, you can substitute 100. For a string operation, if a string length is reported to be less than 0, you could substitute 0. My car uses this approach to error handling whenever I back up. Since my speedometer doesn't show negative speeds, when I back up it simply shows a speed of 0 -- the closest legal value.
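
A short Perl sketch of the closest-legal-value approach for the thermometer example (the range and sub name are assumptions for illustration):

    use strict;
    use warnings;

    # The thermometer is calibrated for 0..100 degrees Celsius; readings
    # outside that range are replaced by the closest legal value.
    sub clamp_reading {
        my ($celsius) = @_;
        return   0 if $celsius <   0;
        return 100 if $celsius > 100;
        return $celsius;
    }

    print clamp_reading(-3.2), "\n";   # 0
    print clamp_reading(37.5), "\n";   # 37.5
    print clamp_reading(212),  "\n";   # 100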

Log a warning message to a file. When bad data is detected, you might choose to log a warning message to a file and then continue on. This approach can be used in conjunction with other techniques like substituting the closest legal value or substituting the next piece of valid data. If you use a log, consider whether you can safely make it publicly available or whether you need to encrypt it or protect it some other way.

Return an error code. You could decide that only certain parts of a system will handle errors. Other parts will not handle errors locally; they will simply report that an error has been detected and trust that some other routine higher up in the calling hierarchy will handle the error. The specific mechanism for notifying the rest of the system that an error has occurred could be any of the following:

In this case, the specific error-reporting mechanism is less important than the decision about which parts of the system will handle errors directly and which will just report that they've occurred. If security is an issue, be sure that calling routines always check return codes.

Call an error-processing routine/object. Another approach is to centralize error handling in a global error-handling routine or error-handling object. The advantage of this approach is that error-processing responsibility can be centralized, which can make debugging easier. The tradeoff is that the whole program will know about this central capability and will be coupled to it. If you ever want to reuse any of the code from the system in another system, you'll have to drag the error-handling machinery along with the code you reuse.

This approach has an important security implication. If your code has encountered a buffer overrun, it's possible that an attacker has compromised the address of the handler routine or object. Thus, once a buffer overrun has occurred while an application is running, it is no longer safe to use this approach.

Display an error message wherever the error is encountered. This approach minimizes error-handling overhead; however, it does have the potential to spread user interface messages through the entire application, which can create challenges when you need to create a consistent user interface, when you try to clearly separate the UI from the rest of the system, or when you try to localize the software into a different language. Also, beware of telling a potential attacker of the system too much. Attackers sometimes use error messages to discover how to attack a system.

Handle the error in whatever way works best locally. Some designs call for handling all errors locally -- the decision of which specific error-handling method to use is left up to the programmer designing and implementing the part of the system that encounters the error.

This approach provides individual developers with great flexibility, but it creates a significant risk that the overall performance of the system will not satisfy its requirements for correctness or robustness (more on this in a moment). Depending on how developers end up handling specific errors, this approach also has the potential to spread user interface code throughout the system, which exposes the program to all the problems associated with displaying error messages.

Shut down. Some systems shut down whenever they detect an error. This approach is useful in safety-critical applications. For example, if the software that controls radiation equipment for treating cancer patients receives bad input data for the radiation dosage, what is its best error-handling response? Should it use the same value as last time? Should it use the closest legal value? Should it use a neutral value? In this case, shutting down is the best option. We'd much prefer to reboot the machine than to run the risk of delivering the wrong dosage.

A similar approach can be used to improve the security of Microsoft Windows. By default, Windows continues to operate even when its security log is full. But you can configure Windows to halt the server if the security log becomes full, which can be appropriate in a security-critical environment.

Robustness vs. Correctness

As the video game and x-ray examples show us, the style of error processing that is most appropriate depends on the kind of software the error occurs in. These examples also illustrate that error processing generally favors more correctness or more robustness. Developers tend to use these terms informally, but, strictly speaking, these terms are at opposite ends of the scale from each other. Correctness means never returning an inaccurate result; returning no result is better than returning an inaccurate result. Robustness means always trying to do something that will allow the software to keep operating, even if that leads to results that are inaccurate sometimes.

Safety-critical applications tend to favor correctness to robustness. It is better to return no result than to return a wrong result. The radiation machine is a good example of this principle.

Consumer applications tend to favor robustness to correctness. Any result whatsoever is usually better than the software shutting down. The word processor I'm using occasionally displays a fraction of a line of text at the bottom of the screen. If it detects that condition, do I want the word processor to shut down? No. I know that the next time I hit Page Up or Page Down, the screen will refresh and the display will be back to normal.

High-Level Design Implications of Error Processing

With so many options, you need to be careful to handle invalid parameters in consistent ways throughout the program . The way in which errors are handled affects the software's ability to meet requirements related to correctness, robustness, and other nonfunctional attributes. Deciding on a general approach to bad parameters is an architectural or high-level design decision and should be addressed at one of those levels.

Once you decide on the approach, make sure you follow it consistently. If you decide to have high-level code handle errors and low-level code merely report errors, make sure the high-level code actually handles the errors! Some languages give you the option of ignoring the fact that a function is returning an error code -- in C++, you're not required to do anything with a function's return value -- but don't ignore error information! Test the function return value. If you don't expect the function ever to produce an error, check it anyway. The whole point of defensive programming is guarding against errors you don't expect.

This guideline holds true for system functions as well as for your own functions. Unless you've set an architectural guideline of not checking system calls for errors, check for error codes after each call. If you detect an error, include the error number and the description of the error.
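
In Perl terms, a minimal sketch of checking a system call and including the error description (the file name and routine are invented):

    use strict;
    use warnings;
    use Carp qw(croak);

    sub read_config {
        my ($path) = @_;

        # Check the system call even if you "know" it can't fail, and
        # include the operating system's error description ($!).
        open my $fh, '<', $path
            or croak "Cannot open configuration file '$path': $!";

        my @lines = <$fh>;

        close $fh
            or croak "Error while closing '$path': $!";

        return @lines;
    }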

[Aug 26, 2019] Example of correctable error

Aug 26, 2019 | www.amazon.com

Originally from: Good Habits for Great Coding Improving Programming Skills with Examples in Python Michael Stueben 9781484234587 Amazon.com

There is one danger to defensive coding: It can bury errors. Consider the following code:

def drawLine(m, b, image, start = 0, stop = WIDTH):
    step = 1
    start = int(start)
    stop =  int(stop)
    if stop-start < 0:
       step = -1
       print('WARNING: drawLine parameters were reversed.')
    for x in range(start, stop, step):
        index = int(m*x + b) * WIDTH + x
        if 0 <= index < len(image):
           image[index] = 255 # Poke in a white (= 255) pixel.

This function runs from start to stop . If stop is less than start , it just steps backward and no error is reported .

Maybe we want this kind of error to be "fixed " during the run -- buried -- but I think we should at least print a warning that the range is coming in backwards. Maybe we should abort the program .

[Aug 26, 2019] Being Defensive About Defensive Programming

Notable quotes:
"... Code installed for defensive programming is not immune to defects, and you're just as likely to find a defect in defensive-programming code as in any other code -- more likely, if you write the code casually. Think about where you need to be defensive , and set your defensive-programming priorities accordingly. ..."
Aug 26, 2019 | www.amazon.com

Originally from: Code Complete, Second Edition II. Creating High-Quality Code

8.3. Error-Handling Techniques

Too much of anything is bad, but too much whiskey is just enough. -- Mark Twain

Too much defensive programming creates problems of its own. If you check data passed as parameters in every conceivable way in every conceivable place, your program will be fat and slow.

What's worse, the additional code needed for defensive programming adds complexity to the software.

Code installed for defensive programming is not immune to defects, and you're just as likely to find a defect in defensive-programming code as in any other code -- more likely, if you write the code casually. Think about where you need to be defensive , and set your defensive-programming priorities accordingly.

Defensive Programming

General

Exceptions

Security Issues

[Aug 26, 2019] Creating High-Quality Code

Assertions as a special statement are a questionable approach unless there is a switch to exclude them from the code. Other than that, a Bash exit with a condition or a Perl die can serve equally well.
The main question here is which assertions should be in the code only for debugging and which should remain in production.
Notable quotes:
"... That an input parameter's value falls within its expected range (or an output parameter's value does) ..."
"... Many languages have built-in support for assertions, including C++, Java, and Microsoft Visual Basic. If your language doesn't directly support assertion routines, they are easy to write. The standard C++ assert macro doesn't provide for text messages. Here's an example of an improved ASSERT implemented as a C++ macro: ..."
"... Use assertions to document and verify preconditions and postconditions. Preconditions and postconditions are part of an approach to program design and development known as "design by contract" (Meyer 1997). When preconditions and postconditions are used, each routine or class forms a contract with the rest of the program . ..."
Aug 26, 2019 | www.amazon.com

Originally from: Code Complete A Practical Handbook of Software Construction, Second Edition Steve McConnell 0790145196705 Amazon.com Books

Assertions

An assertion is code that's used during development -- usually a routine or macro -- that allows a program to check itself as it runs. When an assertion is true, that means everything is operating as expected. When it's false, that means it has detected an unexpected error in the code. For example, if the system assumes that a customer-information file will never have more than 50,000 records, the program might contain an assertion that the number of records is less than or equal to 50,000. As long as the number of records is less than or equal to 50,000, the assertion will be silent. If it encounters more than 50,000 records, however, it will loudly "assert" that an error is in the program.

Assertions are especially useful in large, complicated programs and in high-reliability programs. They enable programmers to more quickly flush out mismatched interface assumptions, errors that creep in when code is modified, and so on.

An assertion usually takes two arguments: a boolean expression that describes the assumption that's supposed to be true, and a message to display if it isn't. Here's what a Java assertion would look like if the variable denominator were expected to be nonzero:

Example 8-1. Java Example of an Assertion

assert denominator != 0 : "denominator is unexpectedly equal to 0.";

This assertion asserts that denominator is not equal to 0. The first argument, denominator != 0, is a boolean expression that evaluates to true or false. The second argument is a message to print if the first argument is false -- that is, if the assertion is false.

Use assertions to document assumptions made in the code and to flush out unexpected conditions. Assertions can be used to check assumptions like these -- for example, that an input parameter's value falls within its expected range (or that an output parameter's value does).

Of course, these are just the basics, and your own routines will contain many more specific assumptions that you can document using assertions.

Normally, you don't want users to see assertion messages in production code; assertions are primarily for use during development and maintenance. Assertions are normally compiled into the code at development time and compiled out of the code for production. During development, assertions flush out contradictory assumptions, unexpected conditions, bad values passed to routines, and so on. During production, they can be compiled out of the code so that the assertions don't degrade system performance.
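Python behaves the same way: assert statements are active during normal development runs and are stripped when the interpreter is started with -O, so they cost nothing in production. A minimal sketch:

def average(prices):
    # Development-time check; removed entirely under "python -O program.py".
    assert len(prices) > 0, "average() called with an empty list"
    return sum(prices) / len(prices)

if __debug__:
    # __debug__ is False under -O, so this block also disappears in production.
    print("assertions are enabled")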

Building Your Own Assertion Mechanism

Many languages have built-in support for assertions, including C++, Java, and Microsoft Visual Basic. If your language doesn't directly support assertion routines, they are easy to write. The standard C++ assert macro doesn't provide for text messages. Here's an example of an improved ASSERT implemented as a C++ macro:

Cross-Reference

Building your own assertion routine is a good example of programming "into" a language rather than just programming "in" a language. For more details on this distinction, see Program into Your Language, Not in It .

Example 8-2. C++ Example of an Assertion Macro

#define ASSERT( condition, message ) {       \
   if ( !(condition) ) {                     \
      LogError( "Assertion failed: ",        \
          #condition, message );             \
      exit( EXIT_FAILURE );                  \
   }                                         \
}

Guidelines for Using Assertions

Here are some guidelines for using assertions:

Use error-handling code for conditions you expect to occur; use assertions for conditions that should never occur. Assertions check for conditions that should never occur. Error-handling code checks for off-nominal circumstances that might not occur very often, but that have been anticipated by the programmer who wrote the code and that need to be handled by the production code. Error handling typically checks for bad input data; assertions check for bugs in the code.

If error-handling code is used to address an anomalous condition, the error handling will enable the program to respond to the error gracefully. If an assertion is fired for an anomalous condition, the corrective action is not merely to handle an error gracefully -- the corrective action is to change the program's source code, recompile, and release a new version of the software.

A good way to think of assertions is as executable documentation -- you can't rely on them to make the code work, but they can document assumptions more actively than programming-language comments can.
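A short Python sketch of the distinction (the function names are hypothetical): expected bad input from the outside world gets error-handling code, while a violated internal assumption gets an assertion.

def read_record_count(user_text):
    # Error handling: user input is expected to be wrong now and then.
    try:
        count = int(user_text)
    except ValueError:
        raise ValueError(f"record count must be a number, got {user_text!r}")
    if count < 0 or count > 50000:
        raise ValueError(f"record count out of range: {count}")
    return count

def store_records(records):
    # Assertion: by the time we get here the caller has already validated
    # the count, so a violation means a bug in the program, not bad input.
    assert len(records) <= 50000, "record limit exceeded -- fix the caller"
    ...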

Avoid putting executable code into assertions. Putting code into an assertion raises the possibility that the compiler will eliminate the code when you turn off the assertions. Suppose you have an assertion like this:

Example 8-3. Visual Basic Example of a Dangerous Use of an Assertion

Debug.Assert( PerformAction() ) ' Couldn't perform action

Cross-Reference

You could view this as one of many problems associated with putting multiple statements on one line. For more examples, see "Using Only One Statement Per Line" in Laying Out Individual Statements.

The problem with this code is that, if you don't compile the assertions, you don't compile the code that performs the action. Put executable statements on their own lines, assign the results to status variables, and test the status variables instead. Here's an example of a safe use of an assertion:

Example 8-4. Visual Basic Example of a Safe Use of an Assertion

actionPerformed = PerformAction()
Debug.Assert( actionPerformed ) ' Couldn't perform action
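Python has the same pitfall: everything inside an assert statement disappears when assertions are disabled with -O, including any side effects. A sketch of the unsafe and safe forms (perform_action() stands in for whatever operation is being checked):

def perform_action():
    ...  # placeholder for the real work
    return True

# Unsafe: under "python -O" the assert is removed and perform_action()
# is never called at all.
assert perform_action(), "couldn't perform action"

# Safe: the action always runs; only the check is optional.
action_performed = perform_action()
assert action_performed, "couldn't perform action"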

Use assertions to document and verify preconditions and postconditions. Preconditions and postconditions are part of an approach to program design and development known as "design by contract" (Meyer 1997). When preconditions and postconditions are used, each routine or class forms a contract with the rest of the program .

Further Reading

For much more on preconditions and postconditions, see Object-Oriented Software Construction (Meyer 1997).

Preconditions are the properties that the client code of a routine or class promises will be true before it calls the routine or instantiates the object. Preconditions are the client code's obligations to the code it calls.

Postconditions are the properties that the routine or class promises will be true when it concludes executing. Postconditions are the routine's or class's obligations to the code that uses it.

Assertions are a useful tool for documenting preconditions and postconditions. Comments could be used to document preconditions and postconditions, but, unlike comments, assertions can check dynamically whether the preconditions and postconditions are true.

In the following example, assertions are used to document the preconditions and postcondition of the Velocity routine.

Example 8-5. Visual Basic Example of Using Assertions to Document Preconditions and Postconditions

Private Function Velocity ( _
   ByVal latitude As Single, _
   ByVal longitude As Single, _
   ByVal elevation As Single _
   ) As Single

   ' Preconditions
   Debug.Assert ( -90 <= latitude And latitude <= 90 )
   Debug.Assert ( 0 <= longitude And longitude < 360 )
   Debug.Assert ( -500 <= elevation And elevation <= 75000 )
   ...
   ' Postconditions
   Debug.Assert ( 0 <= returnVelocity And returnVelocity <= 600 )

   ' return value
   Velocity = returnVelocity
End Function

If the variables latitude, longitude, and elevation were coming from an external source, invalid values should be checked and handled by error-handling code rather than by assertions. If the variables are coming from a trusted, internal source, however, and the routine's design is based on the assumption that these values will be within their valid ranges, then assertions are appropriate.

For highly robust code, assert and then handle the error anyway. For any given error condition, a routine will generally use either an assertion or error-handling code, but not both. Some experts argue that only one kind is needed (Meyer 1997).

Cross-Reference

For more on robustness, see "Robustness vs. Correctness" in Error-Handling Techniques, later in this chapter.

But real-world programs and projects tend to be too messy to rely solely on assertions. On a large, long-lasting system, different parts might be designed by different designers over a period of 5–10 years or more. The designers will be separated in time, across numerous versions. Their designs will focus on different technologies at different points in the system's development. The designers will be separated geographically, especially if parts of the system are acquired from external sources. Programmers will have worked to different coding standards at different points in the system's lifetime. On a large development team, some programmers will inevitably be more conscientious than others and some parts of the code will be reviewed more rigorously than other parts of the code. Some programmers will unit test their code more thoroughly than others. With test teams working across different geographic regions and subject to business pressures that result in test coverage that varies with each release, you can't count on comprehensive, system-level regression testing, either.

In such circumstances, both assertions and error-handling code might be used to address the same error. In the source code for Microsoft Word, for example, conditions that should always be true are asserted, but such errors are also handled by error-handling code in case the assertion fails. For extremely large, complex, long-lived applications like Word, assertions are valuable because they help to flush out as many development-time errors as possible. But the application is so complex (millions of lines of code) and has gone through so many generations of modification that it isn't realistic to assume that every conceivable error will be detected and corrected before the software ships, and so errors must be handled in the production version of the system as well.

Here's an example of how that might work in the Velocity example:

Example 8-6. Visual Basic Example of Using Assertions to Document Preconditions and Postconditions

Private Function Velocity ( _
   ByRef latitude As Single, _
   ByRef longitude As Single, _
   ByRef elevation As Single _
   ) As Single

   ' Preconditions
   Debug.Assert ( -90 <= latitude And latitude <= 90 )       <-- 1
   Debug.Assert ( 0 <= longitude And longitude < 360 )         |
   Debug.Assert ( -500 <= elevation And elevation <= 75000 )       <-- 1
   ...

   ' Sanitize input data. Values should be within the ranges asserted above,
   ' but if a value is not within its valid range, it will be changed to the
   ' closest legal value
   If ( latitude < -90 ) Then       <-- 2
      latitude = -90                  |
   ElseIf ( latitude > 90 ) Then      |
      latitude = 90                   |
   End If                             |
   If ( longitude < 0 ) Then          |
      longitude = 0                   |
   ElseIf ( longitude > 360 ) Then       <-- 2
   ...

(1) Here is assertion code.

(2) Here is the code that handles bad input data at run time.
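A compressed Python rendition of the same belt-and-suspenders idea (the ranges are borrowed from the Velocity example; compute_velocity() is a hypothetical helper):

def velocity(latitude, longitude, elevation):
    # Preconditions: a violation here is a bug in the caller and should be
    # flushed out during development.
    assert -90 <= latitude <= 90
    assert 0 <= longitude < 360
    assert -500 <= elevation <= 75000

    # Production still sanitizes the same values, in case an assertion was
    # compiled out or simply never fired during testing.
    latitude = max(-90.0, min(90.0, latitude))
    longitude = max(0.0, min(360.0, longitude))
    elevation = max(-500.0, min(75000.0, elevation))

    return compute_velocity(latitude, longitude, elevation)  # hypothetical helper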

[Aug 26, 2019] Defensive Programming in C++

Notable quotes:
"... Defensive programming means always checking whether an operation succeeded. ..."
"... Exceptional usually means out of the ordinary and unusually good, but when it comes to errors, the word has a more negative meaning. The system throws an exception when some error condition happens, and if you don't catch that exception, it will give you a dialog box that says something like "your program has caused an error -- –goodbye." ..."
Aug 26, 2019 | www.amazon.com

Originally from: C++ by Example: UnderC Learning Edition, by Steve Donovan (Amazon.com)

There are five desirable properties of good programs: They should be robust, correct, maintainable, friendly, and efficient. Obviously, these properties can be prioritized in different orders, but generally, efficiency is less important than correctness; it is nearly always possible to optimize a well-designed program, whereas badly written "lean and mean" code is often a disaster. (Donald Knuth, the algorithms guru, says that "premature optimization is the root of all evil.")

Here I am mostly talking about programs that have to be used by non-expert users. (You can forgive programs you write for your own purposes when they behave badly: For example, many scientific number-crunching programs are like bad-tempered sports cars.) Being unbreakable is important for programs to be acceptable to users, and you, therefore, need to be a little paranoid and not assume that everything is going to work according to plan. 'Defensive programming' means writing programs that cope with all common errors. It means things like not assuming that a file exists, or not assuming that you can write to any file (think of a CD-ROM), or always checking for divide by zero.

In the next few sections I want to show you how to 'bullet-proof' programs. First, there is a silly example to illustrate the traditional approach (check everything), and then I will introduce exception handling.

Bullet-Proofing Programs

Say you have to teach a computer to wash its hair. The problem, of course, is that computers have no common sense about these matters: "Lather, rinse, repeat" would certainly lead to a house flooded with bubbles. So you divide the operation into simpler tasks, which return true or false, and check the result of each task before going on to the next one. For example, you can't begin to wash your hair if you can't get the top off the shampoo bottle.

Defensive programming means always checking whether an operation succeeded. So the following code is full of if-else statements, and if you were trying to do something more complicated than wash hair, the code would rapidly become very ugly indeed (and the code would soon scroll off the page):


void wash_hair()
{
  string msg = "";
  if (! find_shampoo() || ! open_shampoo()) msg = "no shampoo";
  else {
    if (! wet_hair()) msg = "no water!";
    else {
      if (! apply_shampoo()) msg = "shampoo application error";
      else {
        for(int i = 0; i < 2; i++)  // repeat twice
          if (! lather() || ! rinse()) {
                msg = "no hands!";
                break;  // break out of the loop
          }
          if (! dry_hair())  msg = "no towel!";
      }
    }
  }
  if (msg != "") cerr << "Hair error: " << msg << endl;
  // clean up after washing hair
  put_away_towel();
  put_away_shampoo();
}                                        

Part of the hair-washing process is to clean up afterward (as anybody who has a roommate soon learns). This would be a problem for the following code, now assuming that wash_hair() returns a string:

string wash_hair()
{
 ...
  if (! wet_hair()) return "no water!";
  if (! apply_shampoo()) return "application error!";
...
}

You would need another function to call this wash_hair() , write out the message (if the operation failed), and do the cleanup. This would still be an improvement over the first wash_hair() because the code doesn't have all those nested blocks.

NOTE

Some people disapprove of returning from a function from more than one place, but this is left over from the days when cleanup had to be done manually. C++ guarantees that any object is properly cleaned up, no matter from where you return (for instance, any open file objects are automatically closed). Besides, C++ exception handling works much like a return , except that it can occur from many functions deep. The following section describes this and explains why it makes error checking easier.
Catching Exceptions

An alternative to constantly checking for errors is to let the problem (for example, division by zero, access violation) occur and then use the C++ exception-handling mechanism to gracefully recover from the problem.

Exceptional usually means out of the ordinary and unusually good, but when it comes to errors, the word has a more negative meaning. The system throws an exception when some error condition happens, and if you don't catch that exception, it will give you a dialog box that says something like "your program has caused an error -- goodbye."

You should avoid doing that to your users -- at the very least you should give them a more reassuring and polite message.

If an exception occurs in a try block, the system tries to match the exception with one (or more) catch blocks.

try {  // your code goes inside this block
  ... problem happens - system throws exception
}
catch(Exception) {  // exception caught here
  ... handle the problem
}

It is an error to have a try without a catch and vice versa. The ON ERROR clause in Visual Basic achieves a similar goal, as do signals in C; they allow you to jump out of trouble to a place where you can deal with the problem. The example is a function div() , which does integer division. Instead of checking whether the divisor is zero, this code lets the division by zero happen but catches the exception. Any code within the try block can safely do integer division, without having to worry about the problem. I've also defined a function bad_div() that does not catch the exception, which will give a system error message when called:

int div(int i, int j)
{
 int k = 0;
 try {
   k = i/j;
   cout << "successful value " << k << endl;
 }
 catch(IntDivideByZero) {
   cout << "divide by zero\n";
 }
 return k;
}
;> int bad_div(int i,int j) {  return i/j; }
;> bad_div(10,0);
integer division by zero <main> (2)
;> div(2,1);
successful value 1
(int) 1
;> div(1,0);
divide by zero
(int) 0

This example is not how you would normally organize things. A lowly function like div() should not have to decide how an error should be handled; its job is to do a straightforward calculation. Generally, it is not a good idea to directly output error information to cout or cerr because Windows graphical user interface programs typically don't do that kind of output. Fortunately, any function call, made from within a try block, that throws an exception will have that exception caught by the catch block. The following is a little program that calls the (trivial) div() function repeatedly but catches any divide-by-zero errors:

// div.cpp
#include <iostream>
#include <uc_except.h>
using namespace std;

int div(int i, int j)
{  return i/j;   }

int main() {
 int i,j,k;
 cout << "Enter 0 0 to exit\n";
 for(;;) { // loop forever
   try {
     cout << "Give two numbers: ";
     cin >> i >> j;
     if (i == 0 && j == 0) return 0; // exit program!
     k = div(i,j);
     cout << "i/j = " << k << endl;
   }  catch(IntDivideByZero) {
     cout << "divide by zero\n";
   }
  }
  return 0;
}

Notice two crucial things about this example: First, the error-handling code appears as a separate exceptional case, and second, the program does not crash due to divide-by-zero errors (instead, it politely tells the user about the problem and keeps going).

Note the inclusion of <uc_except.h> , which is a nonstandard extension specific to UnderC. The ISO standard does not specify any hardware error exceptions, mostly because not all platforms support them, and a standard has to work everywhere. So IntDivideByZero is not available on all systems. (I have included some library code that implements these hardware exceptions for GCC and BCC32; please see the Appendix for more details.)

How do you catch more than one kind of error? There may be more than one catch block after the try block, and the runtime system looks for the best match. In some ways, a catch block is like a function definition; you supply an argument, and you can name a parameter that should be passed as a reference. For example, in the following code, whatever do_something() does, catch_all_errors() catches it -- specifically a divide-by-zero error -- and it catches any other exceptions as well:

void catch_all_errors()
{
  try {
    do_something();
  }
  catch(IntDivideByZero) {
    cerr << "divide by zero\n";
  }
  catch(HardwareException& e) {
    cerr << "runtime error: " << e.what() << endl;
  }
  catch(Exception& e) {
    cerr << "other error " << e.what() << endl;
  }
}

The standard exceptions have a what() method, which gives more information about them. Order is important here. Exception includes HardwareException , so putting Exception first would catch just about everything. When an exception is thrown, the system picks the first catch block that would match that exception. The rule is to put the catch blocks in order of increasing generality.
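The same ordering rule applies in Python: the first matching except clause wins, so handlers must be listed from most specific to most general. A small sketch (do_something() is just a stand-in that happens to divide by zero):

def do_something():
    return 1 / 0

try:
    do_something()
except ZeroDivisionError:
    print("divide by zero")
except OSError as e:
    print("runtime error:", e)
except Exception as e:   # the catch-all must come last, or it masks the others
    print("other error:", e)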

Throwing Exceptions

You can throw your own exceptions, which can be of any type, including C++ strings. (In Chapter 8 , "Inheritance and Virtual Methods," you will see how you can create a hierarchy of errors, but for now, strings and integers will do fine.) It is a good idea to write an error-generating function fail() , which allows you to add extra error-tracking features later. The following example returns to the hair-washing algorithm and is even more paranoid about possible problems:

void fail(string msg)
{
  throw msg;
}

void wash_hair()
{
  try {
    if (! find_shampoo()) fail("no shampoo");
    if (! open_shampoo()) fail("can't open shampoo");
    if (! wet_hair())     fail("no water!");
    if (! apply_shampoo())fail("shampoo application error");
    for(int i = 0; i < 2; i++)  // repeat twice
      if (! lather() || ! rinse()) fail("no hands!");
    if (! dry_hair())     fail("no towel!");
  }
  catch(string err) {
    cerr << "Known Hair washing failure: " << err << endl;
  }
  catch(...) {
    cerr << "Catastropic failure\n";
  }
  // clean up after washing hair
  put_away_towel();
  put_away_shampoo();
}

In this example, the general logic is clear, and the cleanup code is always run, whatever disaster happens. This example includes a catch-all catch block at the end. It is a good idea to put one of these in your program's main() function so that it can deliver a more polite message than "illegal instruction." But because you will then have no information about what caused the problem, it's a good idea to cover a number of known cases first. Such a catch-all must be the last catch block; otherwise, it will mask more specific errors.

It is also possible to use a trick that Perl programmers use: If the fail() function returns a bool , then the following expression is valid C++ and does exactly what you want:

dry_hair() || fail("no towel");
lather() && rinse() || fail("no hands!");

If dry_hair() returns true, the or expression must be true, and there's no need to evaluate the second term. Conversely, if dry_hair() returns false, the fail() function would be evaluated and the side effect would be to throw an exception. This short-circuiting of Boolean expressions applies also to && and is guaranteed by the C++ standard.
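The same short-circuit trick reads almost identically in Python, where or and and also evaluate left to right and stop as soon as the result is known (a sketch; here fail() raises instead of returning a value, and the hair-washing helpers are the same hypothetical functions as above):

def fail(msg):
    raise RuntimeError(msg)

# dry_hair(), lather(), rinse() are the same hypothetical helpers as above.
dry_hair() or fail("no towel")
lather() and rinse() or fail("no hands!")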

[Aug 26, 2019] The Eight Defensive Programmer Strategies

Notable quotes:
"... Never Trust Input. Never trust the data you're given and always validate it. ..."
"... Prevent Errors. If an error is possible, no matter how probable, try to prevent it. ..."
"... Document Assumptions Clearly state the pre-conditions, post-conditions, and invariants. ..."
"... Automate everything, especially testing. ..."
Aug 26, 2019 | www.amazon.com

Originally from: Learn C the Hard Way Practical Exercises on the Computational Subjects You Keep Avoiding (Like C) by Zed Shaw

Once you've adopted this mind-set, you can then rewrite your prototype and follow a set of eight strategies to make your code as solid as possible.

While I work on the real version, I ruthlessly follow these strategies and try to remove as many errors as I can, thinking like someone who wants to break the software.

  1. Never Trust Input. Never trust the data you're given and always validate it.
  2. Prevent Errors. If an error is possible, no matter how probable, try to prevent it.
  3. Fail Early and Openly. Fail early, cleanly, and openly, stating what happened, where, and how to fix it.
  4. Document Assumptions. Clearly state the pre-conditions, post-conditions, and invariants.
  5. Prevention over Documentation. Don't do with documentation that which can be done with code or avoided completely.
  6. Automate Everything. Automate everything, especially testing.
  7. Simplify and Clarify. Always simplify the code to the smallest, cleanest form that works without sacrificing safety.
  8. Question Authority. Don't blindly follow or reject rules.

These aren't the only strategies, but they're the core things I feel programmers have to focus on when trying to make good, solid code. Notice that I don't really say exactly how to do these. I'll go into each of these in more detail, and some of the exercises will actually cover them extensively.
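As a minimal sketch of strategy 1 in Python (the field and function names are made up for illustration), validate everything that crosses the boundary into your code before acting on it:

def parse_price(raw):
    """Validate one externally supplied price string before using it."""
    text = raw.strip()
    if not text:
        raise ValueError("empty price field")
    try:
        price = float(text)
    except ValueError:
        raise ValueError(f"price is not a number: {raw!r}")
    if price < 0:
        raise ValueError(f"price cannot be negative: {price}")
    return price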

[Aug 26, 2019] Clean Code in Python General Traits of Good Code

Notable quotes:
"... Different responsibilities should go into different components, layers, or modules of the application. Each part of the program should only be responsible for a part of the functionality (what we call its concerns) and should know nothing about the rest. ..."
"... The goal of separating concerns in software is to enhance maintainability by minimizing ripple effects. A ripple effect means the propagation of a change in the software from a starting point. This could be the case of an error or exception triggering a chain of other exceptions, causing failures that will result in a defect on a remote part of the application. It can also be that we have to change a lot of code scattered through multiple parts of the code base, as a result of a simple change in a function definition. ..."
"... Rule of thumb: Well-defined software will achieve high cohesion and low coupling. ..."
Aug 26, 2019 | www.amazon.com

Separation of concerns

This is a design principle that is applied at multiple levels. It is not just about the low-level design (code), but it is also relevant at a higher level of abstraction, so it will come up later when we talk about architecture.

Different responsibilities should go into different components, layers, or modules of the application. Each part of the program should only be responsible for a part of the functionality (what we call its concerns) and should know nothing about the rest.

The goal of separating concerns in software is to enhance maintainability by minimizing ripple effects. A ripple effect means the propagation of a change in the software from a starting point. This could be the case of an error or exception triggering a chain of other exceptions, causing failures that will result in a defect on a remote part of the application. It can also be that we have to change a lot of code scattered through multiple parts of the code base, as a result of a simple change in a function definition.

Clearly, we do not want these scenarios to happen. The software has to be easy to change. If we have to modify or refactor some part of the code that has to have a minimal impact on the rest of the application, the way to achieve this is through proper encapsulation.

In a similar way, we want any potential errors to be contained so that they don't cause major damage.

This concept is related to the DbC principle in the sense that each concern can be enforced by a contract. When a contract is violated, and an exception is raised as a result of such a violation, we know what part of the program has the failure, and what responsibilities failed to be met.

Despite this similarity, separation of concerns goes further. We normally think of contracts between functions, methods, or classes, and while this also applies to responsibilities that have to be separated, the idea of separation of concerns also applies to Python modules, packages, and basically any software component.

Cohesion and coupling

These are important concepts for good software design.

On the one hand, cohesion means that objects should have a small and well-defined purpose, and they should do as little as possible. It follows a similar philosophy as Unix commands that do only one thing and do it well. The more cohesive our objects are, the more useful and reusable they become, making our design better.

On the other hand, coupling refers to the idea of how two or more objects depend on each other. This dependency poses a limitation: if two parts of the code (objects or methods) are too dependent on each other, changes to one ripple into the other, and neither can be reused or tested on its own.

Rule of thumb: Well-defined software will achieve high cohesion and low coupling.
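A small Python sketch of the rule (the classes are hypothetical): each class has one narrow purpose, and the report depends only on the minimal interface it needs, not on how or where the prices were loaded.

class PriceLoader:
    """Cohesive: its only concern is reading prices from a file."""
    def load(self, path):
        with open(path) as f:
            return [float(line) for line in f if line.strip()]

class PriceReport:
    """Loosely coupled: it accepts any sequence of numbers, so it knows
    nothing about files, parsing, or where the data came from."""
    def average(self, prices):
        return sum(prices) / len(prices) if prices else 0.0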

[Aug 26, 2019] Software Development and Professional Practice by John Dooley

Notable quotes:
"... Did the read operation return anything? ..."
"... Did the write operation write anything? ..."
"... Check all values in function/method parameter lists. ..."
"... Are they all the correct type and size? ..."
"... You should always initialize variables and not depend on the system to do the initialization for you. ..."
"... taking the time to make your code readable and have the code layout match the logical structure of your design is essential to writing code that is understandable by humans and that works. Adhering to coding standards and conventions, keeping to a consistent style, and including good, accurate comments will help you immensely during debugging and testing. And it will help you six months from now when you come back and try to figure out what the heck you were thinking here. ..."
Jul 15, 2011 | www.amazon.com
Defensive Programming

By defensive programming we mean that your code should protect itself from bad data. The bad data can come from user input via the command line, a graphical text box or form, or a file. Bad data can also come from other routines in your program via input parameters like in the first example above.

How do you protect your program from bad data? Validate! As tedious as it sounds, you should always check the validity of data that you receive from outside your routine. This means you should check things like the number and type of command-line arguments, whether each file operation succeeded (did the read operation return anything? did the write operation write anything?), and all the values in function/method parameter lists -- are they all the correct type and size?

What else should you check for? Well, here's one more for the list: always initialize variables, and don't depend on the system to do the initialization for you.

As an example, here's a C program that takes in a list of house prices from a file and computes the average house price from the list. The file is provided to the program from the command line.

/*
* program to compute the average selling price of a set of homes.
* Input comes from a file that is passed via the command line.

* Output is the Total and Average sale prices for
* all the homes and the number of prices in the file.
*
* jfdooley
*/
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *fp;
    double totalPrice, avgPrice;
    double price;
    int numPrices;

    /* check that the user entered the correct number of args */
    if (argc < 2) {
        fprintf(stderr,"Usage: %s <filename>\n", argv[0]);
        exit(1);
    }

    /* try to open the input file */
    fp = fopen(argv[1], "r");
    if (fp == NULL) {
        fprintf(stderr, "File Not Found: %s\n", argv[1]);
        exit(1);
    }
    totalPrice = 0.0;
    numPrices = 0;

    while (!feof(fp)) {
        fscanf(fp, "%10lf\n", &price);
        totalPrice += price;
        numPrices++;
    }

    avgPrice = totalPrice / numPrices;
    printf("Number of houses is %d\n", numPrices);
    printf("Total Price of all houses is $%10.2f\n", totalPrice);
    printf("Average Price per house is $%10.2f\n", avgPrice);

    return 0;
}

Assertions Can Be Your Friend

Defensive programming means that using assertions is a great idea if your language supports them. Java, C99, and C++ all support assertions. Assertions will test an expression that you give them and, if the expression is false, will throw an error and normally abort the program. You should use error-handling code for errors you think might happen -- erroneous user input, for example -- and use assertions for errors that should never happen -- off-by-one errors in loops, for example. Assertions are great for testing your program, but because you should remove them before giving programs to customers (you don't want the program to abort on the user, right?), they aren't good to use to validate input data.

Exceptions and Error Handling

We've talked about using assertions to handle truly bad errors, ones that should never occur in production. But what about handling "normal" errors? Part of defensive programming is to handle errors in such a way that no damage is done to any data in the program or the files it uses, and so that the program stays running for as long as possible (making your program robust).

Let's look at exceptions first. You should take advantage of built-in exception handling in whatever programming language you're using. The exception handling mechanism will give you information about what bad thing has just happened. It's then up to you to decide what to do. Normally in an exception handling mechanism you have two choices, handle the exception yourself, or pass it along to whoever called you and let them handle it. What you do and how you do it depends on the language you're using and the capabilities it gives you. We'll talk about exception handling in Java later.

Error Handling

Just like with validation, you're most likely to encounter errors in input data, whether it's command-line input, file handling, or input from a graphical user interface form. Here we're talking about errors that occur at run time. (Compile-time and testing errors are covered in the next chapter on debugging and testing.) Other types of errors can be data that your program computes incorrectly, errors in other programs that interact with your program (the operating system, for instance), race conditions, and interaction errors where your program is communicating with another program and yours is at fault.

The main purpose of error handling is to have your program survive and run correctly for as long as possible. When it gets to a point where your program cannot continue, it needs to report what is wrong as best as it can and then exit gracefully. Exiting is the last resort for error handling. So what should you do? Well, once again we come to the "it depends" answer. What you should do depends on what your program's context is when the error occurs and what its purpose is. You won't handle an error in a video game the same way you handle one in a cardiac pacemaker. In every case, your first goal should be – try to recover.

Trying to recover from an error will have different meanings in different programs. Recovery means that your program needs to try to either ignore the bad data, fix it, or substitute something else that is valid for the bad data. See McConnell 8 for a further discussion of error handling.

__________

8 McConnell, 2004.
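A sketch of those ignore / fix / substitute options in Python, applied to a single bad sensor reading (the names, ranges, and fallback policy are illustrative only):

def clean_reading(value, last_good):
    """Recover from one bad reading instead of aborting."""
    if value is None:
        return last_good        # substitute: fall back to the last known good value
    if value > 100.0:
        return 100.0            # fix: clamp an out-of-range value to the nearest legal one
    if value < 0.0:
        return None             # ignore: tell the caller to skip this reading entirely
    return value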

Exceptions in Java

Some programming languages have built-in error reporting systems that will tell you when an error occurs, and leave it up to you to handle it one way or another. These errors that would normally cause your program to die a horrible death are called exceptions. Exceptions get thrown by the code that encounters the error. Once something is thrown, it's usually a good idea if someone catches it. This is the same with exceptions. So there are two sides to exceptions that you need to be aware of when you're writing code: the code that throws the exception, and the code that catches the exception and handles it.

Java has three different types of exceptions – checked exceptions, errors, and unchecked exceptions. Checked exceptions are those that you should catch and handle yourself using an exception handler; they are exceptions that you should anticipate and handle as you design and write your code. For example, if your code asks a user for a file name, you should anticipate that they will type it wrong and be prepared to catch the resulting FileNotFoundException . Checked exceptions must be caught.

Errors on the other hand are exceptions that usually are related to things happening outside your program and are things you can't do anything about except fail gracefully. You might try to catch the error exception and provide some output for the user, but you will still usually have to exit.

The third type of exception is the runtime exception . Runtime exceptions all result from problems within your program that occur as it runs and almost always indicate errors in your code. For example, a NullPointerException nearly always indicates a bug in your code and shows up as a runtime exception. Errors and runtime exceptions are collectively called unchecked exceptions (that would be because you usually don't try to catch them, so they're unchecked). In the program below we deliberately cause a runtime exception:

public class TestNull {
    public static void main(String[] args) {
        String str = null;
        int len = str.length();
    }
}

This program will compile just fine, but when you run it you'll get this as output:


Exception in thread "main" java.lang.NullPointerException

at TestNull.main(TestNull.java:4)


This is a classic runtime exception. There's no need to catch this exception because the only thing we can do is exit. If we do catch it, the program might look like:

public class TestNullCatch {
    public static void main(String[] args) {
        String str = null;

        try {
            int len = str.length();
        } catch (NullPointerException e) {
            System.out.println("Oops: " + e.getMessage());
            System.exit(1);
        }
    }
}

which gives us the output


Oops: null

Note that the getMessage() method will return a String containing whatever error message Java deems appropriate – if there is one. Otherwise it returns a null . This is somewhat less helpful than the default stack trace above.

Let's rewrite the short C program above in Java and illustrate how to catch a checked exception .

import java.io.*;
import java.util.*;

public class FileTest
{
    public static void main(String [] args)
    {
        File fd = new File("NotAFile.txt");
        System.out.println("File exists " + fd.exists());

        try {
            FileReader fr = new FileReader(fd);
        } catch (FileNotFoundException e) {
            System.out.println(e.getMessage());
        }
    }
}

and the output we get when we execute FileTest is


File exists false

NotAFile.txt (No such file or directory)


By the way, if we don't use the try-catch block in the above program , then it won't compile. We get the compiler error message


FileTestWrong.java:11: unreported exception java.io.FileNotFoundException; must be caught or declared to be thrown

FileReader fr = new FileReader(fd);


^
1 error

Remember, checked exceptions must be caught. This type of error doesn't show up for unchecked exceptions. This is far from everything you should know about exceptions and exception handling in Java; start digging through the Java tutorials and the Java API!

The Last Word on Coding

Coding is the heart of software development. Code is what you produce. But coding is hard; translating even a good, detailed design into code takes a lot of thought, experience, and knowledge, even for small programs . Depending on the programming language you are using and the target system, programming can be a very time-consuming and difficult task.

That's why taking the time to make your code readable and have the code layout match the logical structure of your design is essential to writing code that is understandable by humans and that works. Adhering to coding standards and conventions, keeping to a consistent style, and including good, accurate comments will help you immensely during debugging and testing. And it will help you six months from now when you come back and try to figure out what the heck you were thinking here.

And finally,

I am rarely happier than when spending an entire day programming my computer to perform automatically a task that it would otherwise take me a good ten seconds to do by hand.

-- Douglas Adams, "Last Chance to See"

[Aug 26, 2019] Defensive Programming

Notable quotes:
"... How do you protect your program from bad data? Validate! As tedious as it sounds, you should always check the validity of data that you receive from outside your routine. This means you should check the following ..."
"... Check the number and type of command line arguments. ..."
Aug 26, 2019 | www.amazon.com

Originally from: Software Development and Professional Practice (Expert's Voice in Software Development), by John Dooley (Amazon.com)

By defensive programming we mean that your code should protect itself from bad data. The bad data can come from user input via the command line, a graphical text box or form, or a file. Bad data can also come from other routines in your program via input parameters like in the first example above.

How do you protect your program from bad data? Validate! As tedious as it sounds, you should always check the validity of data that you receive from outside your routine. This means you should check things like the number and type of command-line arguments, whether each file operation succeeded, and all the values in function/method parameter lists.

What else should you check for? Well, here's one more for the list: always initialize variables, and don't depend on the system to do the initialization for you.

As an example, here's a C program that takes in a list of house prices from a file and computes the average house price from the list. The file is provided to the program from the command line.

/*
* program to compute the average selling price of a set of homes.
* Input comes from a file that is passed via the command line.

* Output is the Total and Average sale prices for
* all the homes and the number of prices in the file.
*
* jfdooley
*/
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char **argv)
{
FILE *fp;
double totalPrice, avgPrice;
double price;
int numPrices;