As Donald Knuth noted (Don Knuth and the Art of Computer Programming The Interview):
I think of a programming language as a tool to convert a programmer's mental images into precise operations that a machine can perform. The main idea is to match the user's intuition as well as possible. There are many kinds of users, and many kinds of application areas, so we need many kinds of languages.
Ordinarily technology changes fast. But programming languages are different: programming languages are not just technology, but what programmers think in. They're half technology and half religion. And so the median language, meaning whatever language the median programmer uses, moves as slow as an iceberg. (Paul Graham, Beating the Averages)
Libraries are more important than the language. (Donald Knuth)
A fruitful way to think about language development is to consider it to be a special type of theory building. Peter Naur suggested that programming in general is a theory-building activity in his 1985 paper "Programming as Theory Building", but the idea is especially applicable to compilers and interpreters. What Peter Naur failed to understand was that the design of programming languages has religious overtones and sometimes represents an activity pretty close to the process of creating a new, obscure cult ;-). Clueless academics publishing junk papers at obscure conferences are the high priests of the church of programming languages. Some, like Niklaus Wirth and Edsger W. Dijkstra, (temporarily) reached a status close to that of (false) prophets :-).
On a deep conceptual level, building a new language is a human way of solving complex problems. That means that compiler construction is probably the most underappreciated paradigm for programming large systems, much more so than the greatly oversold object-oriented programming, whose benefits are greatly overstated. For users, programming languages distinctly have religious aspects, so decisions about what language to use are often far from rational and are mainly cultural. Indoctrination at the university plays a very important role; recently universities were instrumental in making Java the new Cobol.
The second important observation about programming languages is that the language per se is just a tiny part of what can be called the language programming environment. The latter includes libraries, IDEs, books, the level of adoption at universities, popular and important applications written in the language, the level of support, key players that back the language on major platforms such as Windows and Linux, and other similar things. A mediocre language with a good programming environment can give a run for the money to languages that are superior in design but arrive "naked". This is the story behind the success of Java. A critical application is also very important, and this is the story of the success of PHP, which is nothing but a bastardized derivative of Perl (with most of the interesting Perl features removed ;-) adapted to the creation of dynamic web sites using the so-called LAMP stack.
Progress in programming languages has been very uneven and contains several setbacks. Currently this progress is mainly limited to the development of so-called scripting languages. The field of traditional high-level languages has been stagnant for many decades.
At the same time there are some mysterious, unanswered questions about the factors that help a language to succeed or fail. Those are difficult questions to answer without some way of classifying languages into different categories, and several such classifications exist. First of all, as with natural languages, the number of people who speak a given language is a tremendous force that can overcome any real or perceived deficiencies of the language. In programming languages, as in natural languages, nothing succeeds like success.
The history of programming languages raises interesting general questions about the limits of complexity of programming languages. There is strong historical evidence that a language with a simpler core, or even a simplistic core (Basic, Pascal), has better chances of acquiring a high level of popularity. The underlying fact here is probably that most programmers are at best mediocre, and such programmers tend, on an intuitive level, to avoid more complex, richer languages and prefer, say, Pascal to PL/1 and PHP to Perl. Or at least they avoid them at a particular phase of language development (C++ is not a simpler language than PL/1, but it was widely adopted because of the progress of hardware, the availability of compilers and, not least, because it was associated with OO exactly at the time OO became a mainstream fashion). Complex non-orthogonal languages can succeed only as the result of a long period of language development from a smaller core (which usually adds complexity -- just compare Fortran IV with Fortran 90, or PHP 3 with PHP 5). The banner of some fashionable new trend, extending an existing popular language to the new "paradigm", is also a possibility (OO programming in the case of C++, which is a superset of C).
Historically, few complex languages were successful (PL/1, Ada, Perl, C++), and even when they were successful, their success typically was temporary rather than permanent (PL/1, Ada, Perl). As Professor Wilkes noted (ieee90):
Things move slowly in the computer language field but, over a sufficiently long period of time, it is possible to discern trends. In the 1970s, there was a vogue among system programmers for BCPL, a typeless language. This has now run its course, and system programmers appreciate some typing support. At the same time, they like a language with low level features that enable them to do things their way, rather than the compiler’s way, when they want to.
They continue to have a strong preference for a lean language. At present they tend to favor C in its various versions. For applications in which flexibility is important, Lisp may be said to have gained strength as a popular programming language.
Further progress is necessary in the direction of achieving modularity. No language has so far emerged which exploits objects in a fully satisfactory manner, although C++ goes a long way. ADA was progressive in this respect, but unfortunately it is in the process of collapsing under its own great weight.
ADA is an example of what can happen when an official attempt is made to orchestrate technical advances. After the experience with PL/1 and ALGOL 68, it should have been clear that the future did not lie with massively large languages.
I would direct the reader’s attention to Modula-3, a modest attempt to build on the appeal and success of Pascal and Modula-2 [12].
The complexity of the compiler/interpreter also matters, as it affects portability: this is one thing that probably doomed PL/1 (and later Ada), although these days a new language typically comes with an open source compiler (or, in the case of scripting languages, an interpreter), so this is less of a problem.
Here is an interesting take on language design from the preface to The D Programming Language book:
Programming language design seeks power in simplicity and, when successful, begets beauty.
Choosing the trade-offs among contradictory requirements is a difficult task that requires good taste from the language designer as much as mastery of theoretical principles and of practical implementation matters. Programming language design is software-engineering-complete.
D is a language that attempts to consistently do the right thing within the constraints it chose: system-level access to computing resources, high performance, and syntactic similarity with C-derived languages. In trying to do the right thing, D sometimes stays with tradition and does what other languages do, and other times it breaks tradition with a fresh, innovative solution. On occasion that meant revisiting the very constraints that D ostensibly embraced. For example, large program fragments or indeed entire programs can be written in a well-defined memory-safe subset of D, which entails giving away a small amount of system-level access for a large gain in program debuggability.
You may be interested in D if the following values are important to you:
- Performance. D is a systems programming language. It has a memory model that, although highly structured, is compatible with C’s and can call into and be called from C functions without any intervening translation.
- Expressiveness. D is not a small, minimalistic language, but it does have a high power-to-weight ratio. You can define eloquent, self-explanatory designs in D that model intricate realities accurately.
- “Torque.” Any backyard hot-rodder would tell you that power isn’t everything; its availability is. Some languages are most powerful for small programs, whereas other languages justify their syntactic overhead only past a certain size. D helps you get work done in short scripts and large programs alike, and it isn’t unusual for a large program to grow organically from a simple single-file script.
- Concurrency. D’s approach to concurrency is a definite departure from the languages it resembles, mirroring the departure of modern hardware designs from the architectures of yesteryear. D breaks away from the curse of implicit memory sharing (though it allows statically checked explicit sharing) and fosters mostly independent threads that communicate with one another via messages.
- Generic code. Generic code that manipulates other code has been pioneered by the powerful Lisp macros and continued by C++ templates, Java generics, and similar features in various other languages. D offers extremely powerful generic and generational mechanisms.
- Eclecticism. D recognizes that different programming paradigms are advantageous for different design challenges and fosters a highly integrated federation of styles instead of One True Approach.
- “These are my principles. If you don’t like them, I’ve got others.” D tries to observe solid principles of language design. At times, these run into considerations of implementation difficulty, usability difficulties, and above all human nature that doesn’t always find blind consistency sensible and intuitive. In such cases, all languages must make judgment calls that are ultimately subjective and are about balance, flexibility, and good taste more than anything else. In my opinion, at least, D compares very favorably with other languages that inevitably have had to make similar decisions.
At the initial, most difficult stage of language development, the language should solve an important problem that was inadequately solved by currently popular languages. But at the same time the language has few chances to succeed unless it fits perfectly into the current software fashion. This "fashion factor" is probably as important as several other factors combined, with the exception of the "language sponsor" factor.
As in women's dress, fashion rules in language design, and with time this trend has become more and more pronounced. A new language should represent the current fashionable trend. For example, OO programming was the calling card into the world of "big, successful languages" since probably the early 1990s (C++, Java, Python). Before that, "structured programming" and "verification" (Pascal, Modula) played a similar role.
PL/1, Java, C#, Ada are languages that had powerful sponsors. Pascal, Basic, Forth are examples of the languages that had no such sponsor during the initial period of development. C and C++ are somewhere in between.
But any language now needs a "programming environment" consisting of a set of libraries, a debugger and other tools (a make tool, a linker, a pretty-printer, etc.). The set of "standard" libraries and the debugger are probably the two most important elements. They cost a lot of time (or money) to develop, and here the role of a powerful sponsor is difficult to overestimate.
While this is not a necessary condition for becoming popular, it really helps: other things being equal, the weight of the sponsor of the language does matter. For example Java, being a weak, inconsistent language (C-- with garbage collection and OO), was pushed down people's throats on the strength of marketing and the huge amount of money spent on creating the Java programming environment. The same was partially true for C# and Python. That's why Python, despite its "non-Unix" origin, is a more viable scripting language now than, say, Perl (which is better integrated with Unix and has pretty innovative, for scripting languages, support of pointers and regular expressions) or Ruby (which has had support for coroutines from day one, not as a "bolted on" feature as in Python). As in political campaigns, negative advertising also matters. For example Perl suffered greatly from the smear comparing programs written in it to "line noise", and then from the withdrawal of O'Reilly from the role of sponsor of the language (although it continues to milk the Perl book publishing franchise ;-)
People have proved to be pretty gullible, and in this sense language marketing is not that different from the marketing of women's clothing :-)
One very important classification of programming languages is based on the so-called level of the language. Essentially, once there is at least one language that is successful on a given level, the success of other languages on the same level becomes more problematic. Higher chances of success belong to languages that sit on an even slightly higher level than their successful predecessors.
The level of the language can informally be described as the number of statements (or, more precisely, the number of lexical units (tokens)) needed to write a solution to a particular problem in one language versus another. This way we can distinguish several levels of programming languages:
Lowest levels. This level is occupied by assemblers and languages designed for specific instruction sets, like PL/360.
High level with automatic memory allocation for variables and garbage collection. Languages of this category (Java, C#) typically are compiled not to the native instruction set of the computer on which they run, but to an abstract instruction set executed by a virtual machine.
Some people distinguish between "nanny languages" and "sharp razor" languages. The latter do not attempt to protect the user from his errors, while the former usually go too far... The right compromise is extremely difficult to find.
For example, I consider the explicit availability of pointers an important feature of a language that greatly increases its expressive power and far outweighs the risk of errors in the hands of unskilled practitioners. In other words, attempts to make a language "safer" often misfire.
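As a small illustration of that expressive power, here is a sketch of a standard C idiom (not tied to any particular codebase): deleting a node from a singly linked list through a pointer to a pointer, so one uniform loop handles both the head and the middle of the list.

    #include <stdio.h>
    #include <stdlib.h>

    struct node { int key; struct node *next; };

    /* Remove the first node with the given key.  The pointer-to-pointer idiom
       lets a single loop handle deletion at the head and in the middle of the
       list uniformly -- the kind of economy that explicit pointers buy. */
    void remove_key(struct node **head, int key)
    {
        struct node **pp = head;
        while (*pp != NULL && (*pp)->key != key)
            pp = &(*pp)->next;
        if (*pp != NULL) {
            struct node *victim = *pp;
            *pp = victim->next;
            free(victim);
        }
    }

    int main(void)
    {
        struct node *head = NULL;
        for (int k = 3; k >= 1; k--) {        /* build the list 1 -> 2 -> 3 */
            struct node *n = malloc(sizeof *n);
            n->key = k;
            n->next = head;
            head = n;
        }
        remove_key(&head, 2);                  /* delete the middle node */
        for (struct node *p = head; p; p = p->next)
            printf("%d ", p->key);             /* prints: 1 3 */
        printf("\n");
        return 0;
    }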
Another useful typology is based on the expressive style of the language.
The popularity of programming languages is not strongly connected to their quality. Some languages that look like a collection of language designer blunders (PHP, Java) became quite popular. Java essentially became the new Cobol, and PHP dominates the construction of dynamic Web sites. The dominant technology for such Web sites is often called LAMP, which stands for Linux, Apache, MySQL, PHP. Being a highly simplified but badly constructed subset of Perl (a kind of new Basic for dynamic Web site construction), PHP provides a most depressing experience. I was unpleasantly surprised when I learned that the Wikipedia engine had been rewritten from Perl to PHP some time ago, but this illustrates the trend quite well.
So language design quality has little to do with a language's success in the marketplace. Simpler languages have wider appeal, as the success of PHP (which at the beginning came at the expense of Perl) suggests. In addition, much depends on whether the language has a powerful sponsor, as was the case with Java (Sun and IBM) as well as Python (Google).
Progress in programming languages has been very uneven and contains several setbacks, like Java. Currently this progress is usually associated with scripting languages. The history of programming languages raises interesting general questions about the "laws" of programming language design.
Please note that it is one thing to read a language manual and appreciate how good the concepts are, and another to bet your project on a new, unproven language without good debuggers, manuals and, most importantly, libraries. The debugger is very important, but standard libraries are crucial: they are the factor that makes or breaks new languages.
In this sense languages are much like cars. For many people a car is the thing they use to get to work and the shopping mall; they are not very interested in whether the engine is inline or V-type, or whether the transmission uses fuzzy logic. What they care about is safety, reliability, mileage, insurance and the size of the trunk. In this sense "worse is better" is very true. I already mentioned the importance of the debugger. The other important criterion is the quality and availability of libraries. Actually libraries account for perhaps 80% of the usability of a language; in a sense libraries are more important than the language itself...
The popular belief that scripting is an "unsafe", "second rate" or "prototype" solution is completely wrong. If a project dies, it does not matter what the implementation language was, so for any successful project with a tough schedule a scripting language (especially in a dual scripting-language-plus-C combination, for example Tcl+C) is an optimal blend for a large class of tasks. Such an approach helps to separate architectural decisions from implementation details much better than any OO model does.
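To make the dual-language idea concrete, here is a minimal sketch of embedding Tcl in a C host program (assuming a Tcl development package is installed and the program is linked against the Tcl library; the command name square and the script are invented for illustration). The C side supplies a fast primitive, the Tcl side supplies the glue and the "architecture":

    #include <stdio.h>
    #include <tcl.h>

    /* A "fast" primitive implemented in C and exported to the script level. */
    static int SquareCmd(ClientData cd, Tcl_Interp *interp,
                         int objc, Tcl_Obj *const objv[])
    {
        int n;
        if (objc != 2) {
            Tcl_WrongNumArgs(interp, 1, objv, "number");
            return TCL_ERROR;
        }
        if (Tcl_GetIntFromObj(interp, objv[1], &n) != TCL_OK)
            return TCL_ERROR;
        Tcl_SetObjResult(interp, Tcl_NewIntObj(n * n));
        return TCL_OK;
    }

    int main(void)
    {
        Tcl_Interp *interp = Tcl_CreateInterp();
        Tcl_CreateObjCommand(interp, "square", SquareCmd, NULL, NULL);

        /* The high-level logic lives in the script; C provides the primitives. */
        if (Tcl_Eval(interp,
                "set s 0; foreach i {1 2 3 4} { incr s [square $i] }; set s")
                != TCL_OK) {
            fprintf(stderr, "tcl error: %s\n", Tcl_GetStringResult(interp));
            return 1;
        }
        printf("sum of squares: %s\n", Tcl_GetStringResult(interp));
        Tcl_DeleteInterp(interp);
        return 0;
    }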
Moreover, even for tasks that handle a fair amount of computation and data (computationally intensive tasks), languages such as Python and Perl are often (but not always!) competitive with C++, C# and, especially, Java.
Here is a timeline of programming languages, modified from Byte (for the original see BYTE.com, September 1995, 20th Anniversary issue):
ca. 1946
- Konrad Zuse , a German engineer working alone while hiding out in the Bavarian Alps, develops Plankalkul. He applies the language to, among other things, chess.
1949
- Short Code , the first computer language actually used on an electronic computing device, appears. It is, however, a "hand-compiled" language.
Fifties
1951
- Grace Hopper , working for Remington Rand, begins design work on the first widely known compiler, named A-0. When the language is released by Rand in 1957, it is called MATH-MATIC.
1952
- Alick E. Glennie , in his spare time at the University of Manchester, devises a programming system called AUTOCODE, a rudimentary compiler.
1957
- FORTRAN --mathematical FORmula TRANslating system--appears. Heading the team is John Backus, who goes on to contribute to the development of ALGOL and the well-known syntax-specification system known as BNF.
1958
- FORTRAN II appears, able to handle subroutines and links to assembly language.
- LISP. John McCarthy at M.I.T. begins work on LISP--LISt Processing.
- Algol-58. The original specification for ALGOL appears. The specification does not describe how data will be input or output; that is left to the individual implementations.
1959
- LISP 1.5 appears.
- COBOL is created by the Conference on Data Systems Languages (CODASYL).
Sixties
1960
- ALGOL 60, the specification for Algol-60, the first block-structured language, appears. This is the root of the family tree that will ultimately produce the likes of Pascal. ALGOL goes on to become the most popular language in Europe in the mid- to late 1960s. Compilers for the language were quite difficult to write, and that hampered its widespread use. FORTRAN managed to hold its own in the area of numeric computations, and Cobol in data processing. Only PL/1 (which was released in 1964) managed to advance the ideas of Algol 60 to a reasonably wide audience.
- APL Sometime in the early 1960s , Kenneth Iverson begins work on the language that will become APL--A Programming Language. It uses a specialized character set that, for proper use, requires APL-compatible I/O devices.
- Discovery of context free languages formalism. The 1960's also saw the rise of automata theory and the theory of formal languages. Noam Chomsky introduced the notion of context free languages and later became well-known for his theory that language is "hard-wired" in human brains, and for his criticism of American foreign policy.
1962
- Snobol was designed in 1962 in Bell Labs by R. E. Griswold and I. Polonsky. Work begins on the sure-fire winner of the "clever acronym" award, SNOBOL--StriNg-Oriented symBOlic Language. It will spawn other clever acronyms: FASBOL, a SNOBOL compiler (in 1971), and SPITBOL--SPeedy ImplemenTation of snoBOL--also in 1971.
- APL is documented in Iverson's book, A Programming Language .
- FORTRAN IV appears.
1963
- ALGOL 60 is revised.
- PL/1. Work begins on PL/1.
1964
- System/360 is announced in April of 1964.
- PL/1 is released with a high quality compiler (the F compiler), which beat most compilers of the time in the quality of both compile-time and run-time diagnostics. Later two brilliantly written and in some respects unsurpassed compilers were added: the debugging and optimizing PL/1 compilers, both representing the state of the art of compiler writing. Cornell University implemented a subset of PL/1 for teaching, called PL/C, with a compiler that had probably the most advanced error detection and correction capabilities of any batch compiler of all time. PL/1 was also adopted as the system implementation language for Multics.
- APL\360 is implemented.
- BASIC. At Dartmouth University , professors John G. Kemeny and Thomas E. Kurtz invent BASIC. The first implementation was on a timesharing system. The first BASIC program runs at about 4:00 a.m. on May 1, 1964.
1965
- SNOBOL3 appears.
1966
- FORTRAN 66 appears.
- LISP 2 appears.
- Work begins on LOGO at Bolt, Beranek, & Newman. The team is headed by Wally Fuerzeig and includes Seymour Papert. LOGO is best known for its "turtle graphics."
1967
- SNOBOL4 , a much-enhanced SNOBOL, appears.
- The first volume of The Art of Computer Programming is published in 1968 and instantly becomes a classic. Donald Knuth (b. 1938) later published two additional volumes of his world-famous three-volume treatise.
- The structured programming movement starts: the first religious cult in programming language design. It was created by Edsger Dijkstra, who published his infamous "Go to statement considered harmful" (CACM 11(3), March 1968, pp. 147-148). While misguided, this cult somewhat contributed to the design of control structures in programming languages, serving as a stimulus for the creation of a richer set of control structures in new programming languages (with PL/1 and its derivative C as probably the two popular programming languages that incorporated these new tendencies). Later it degenerated into a completely fundamentalist and mostly counterproductive verification cult.
- ALGOL 68, the successor of ALGOL 60, appears. It was the first extensible language that got some traction, but generally it was a flop. Some members of the specification committee -- including C.A.R. Hoare and Niklaus Wirth -- protested its approval on the basis of its overcomplexity. They proved to be partially right: ALGOL 68 compilers proved difficult to implement, and that doomed the language. Dissatisfied with the complexity of Algol-68, Niklaus Wirth began his work on a simple teaching language which later became Pascal.
- ALTRAN , a FORTRAN variant, appears.
- COBOL is officially defined by ANSI.
- Niklaus Wirth begins work on the Pascal language design (in part as a reaction to the overcomplexity of Algol 68). Like Basic before it, Pascal was specifically designed for teaching programming at universities and, as such, was designed to allow a one-pass recursive descent compiler. But the language had multiple grave deficiencies. While a talented language designer, Wirth went overboard in simplifying the language (for example, in the initial version of the language loops were allowed to have only an increment of one, arrays were only static, etc.). It was also used to promote the bizarre idea of correctness proofs of programs, inspired by the verification movement with its high priest Edsger Dijkstra -- the first (or maybe the second, after structured programming) mass religious cult in programming language history, which destroyed the careers of several talented computer scientists who joined it, such as David Gries. Some of the blunders in Pascal's design were later corrected in Modula and Modula-2.
1969
- 500 people attend an APL conference at IBM's headquarters in Armonk, New York. The demands for APL's distribution are so great that the event is later referred to as "The March on Armonk."
Seventies
1970
- Forth. Sometime in the early 1970s , Charles Moore writes the first significant programs in his new language, Forth.
- Prolog. Work on Prolog begins about this time. For some time Prolog became fashionable due to Japan's Fifth Generation initiative. Later it returned to relative obscurity, although it did not completely disappear from the language map.
- Also sometime in the early 1970s , work on Smalltalk begins at Xerox PARC, led by Alan Kay. Early versions will include Smalltalk-72, Smalltalk-74, and Smalltalk-76.
- An implementation of Pascal appears on a CDC 6000-series computer.
- Icon , a descendant of SNOBOL4, appears.
1972
- The manuscript for Konrad Zuse's Plankalkul (see 1946) is finally published.
- Dennis Ritchie produces C. The definitive reference manual for it will not appear until 1974.
- PL/M. In 1972 Gary Kildall implemented a subset of PL/1, called "PL/M" for microprocessors. PL/M was used to write the CP/M operating system - and much application software running on CP/M and MP/M. Digital Research also sold a PL/I compiler for the PC written in PL/M. PL/M was used to write much other software at Intel for the 8080, 8085, and Z-80 processors during the 1970s.
- The first implementation of Prolog appears, by Alain Colmerauer and Philippe Roussel.
1974
- Donald E. Knuth publishes the article that gave a decisive blow to the "structured programming fundamentalists" led by Edsger Dijkstra: Structured Programming with go to Statements. ACM Comput. Surv. 6(4): 261-301 (1974).
- Another ANSI specification for COBOL appears.
1975
- Paul Abrahams (Courant Institute of Mathematical Sciences) destroyed the credibility of the "structured programming" cult in his article "'Structured programming' considered harmful" (SIGPLAN Notices, April 1975, pp. 13-24).
- Tiny BASIC by Bob Albrecht and Dennis Allison (implementation by Dick Whipple and John Arnold) runs on a microcomputer in 2 KB of RAM. It is usable on a 4-KB machine, which leaves 2 KB available for the program.
- Microsoft is formed on April 4, 1975 to develop and sell BASIC interpreters for the Altair 8800. Bill Gates and Paul Allen write a version of BASIC that they sell to MITS (Micro Instrumentation and Telemetry Systems) on a per-copy royalty basis. MITS is producing the Altair, one of the earliest 8080-based microcomputers, which came with an interpreter for a programming language.
- Scheme , a LISP dialect by G.L. Steele and G.J. Sussman, appears.
- Pascal User Manual and Report, by Jensen and Wirth, is published. Still considered by many to be the definitive reference on Pascal. This was a kind of attempt to replicate the success of Basic by relying on the growing "structured programming" fundamentalism movement started by Edsger Dijkstra. Pascal acquired a large following in universities as the compiler was made freely available. It was adequate for teaching, had a fast compiler, and was superior to Basic.
- B.W. Kernighan describes RATFOR -- RATional FORTRAN. It is a preprocessor that allows C-like control structures in FORTRAN. RATFOR is used in Kernighan and Plauger's "Software Tools," which appears in 1976.
1976
A backlash against Dijkstra's correctness-proofs pseudo-religious cult starts:
- Andrew Tanenbaum (Vrije Universiteit, Amsterdam) published the paper In Defense of Program Testing or Correctness Proofs Considered Harmful (SIGPLAN Notices, May 1976, pp. 64-68). It made a crucial contribution to the "structured programming without GOTO" debate and was a decisive blow to the structured programming fundamentalists led by E. Dijkstra.
- Maurice Wilkes, the famous computer scientist and first president of the British Computer Society (1957-1960), attacked the "verification cult" in his article Software Engineering and Structured Programming, published in IEEE Transactions on Software Engineering (SE-2, No. 4, December 1976, pp. 274-276). The paper was also presented as a keynote address at the Second International Conference on Software Engineering, San Francisco, CA, October 1976.
- Design System Language , considered to be a forerunner of PostScript, appears.
1977
- AWK was probably the second (after Snobol) string processing language to make extensive use of regular expressions. The first version was created at Bell Labs by Alfred V. Aho, Peter J. Weinberger, and Brian W. Kernighan in 1977. It was also among the first widely used scripting languages with built-in garbage collection.
- The ANSI standard for MUMPS -- Massachusetts General Hospital Utility Multi-Programming System -- appears. Used originally to handle medical records, MUMPS recognizes only a string data-type. Later renamed M.
- The design competition that will produce Ada begins. Honeywell Bull's team, led by Jean Ichbiah, will win the competition. Ada never lived up to its promises and became an expensive flop.
- Kim Harris and others set up FIG, the FORTH interest group. They develop FIG-FORTH, which they sell for around $20.
- UCSD Pascal. In the late 1970s, Kenneth Bowles produces UCSD Pascal, which makes Pascal available on PDP-11 and Z80-based computers.
- Niklaus Wirth begins work on Modula, forerunner of Modula-2 and successor to Pascal. It was among the first widely used languages to incorporate the concept of coroutines.
1978
- AWK -- a text-processing language named after the designers, Aho, Weinberger, and Kernighan -- appears.
- FORTRAN 77: The ANSI standard for FORTRAN 77 appears.
1979
- Bourne shell. The Bourne shell was included in Unix Version 7. It was inferior to the C shell, developed in parallel, but gained tremendous popularity on the strength of AT&T's ownership of Unix.
- C shell. The Second Berkeley Software Distribution (2BSD) was released in May 1979. It included updated versions of the 1BSD software as well as two new programs by Joy that persist on Unix systems to this day: the vi text editor (a visual version of ex) and the C shell.
- REXX was designed and first implemented between 1979 and mid-1982 by Mike Cowlishaw of IBM.
Eighties
1980
- Smalltalk-80 appears.
- Modula-2 appears.
- Franz LISP appears.
- Bjarne Stroustrup develops a set of languages -- collectively referred to as "C With Classes" -- that serve as the breeding ground for C++.
1981
- C-shell was extended into tcsh.
- Effort begins on a common dialect of LISP, referred to as Common LISP.
- Japan begins the Fifth Generation Computer System project. The primary language is Prolog.
1982
- ISO Pascal appears.
- In 1982 one of the first scripting languages REXX was released by IBM as a product. It was four years after AWK was released. Over the years IBM included REXX in almost all of its operating systems (VM/CMS, VM/GCS, MVS TSO/E, AS/400, VSE/ESA, AIX, CICS/ESA, PC DOS, and OS/2), and has made versions available for Novell NetWare, Windows, Java, and Linux.
- PostScript appears. It revolutionized printing on dot matrix and laser printers.
1983
- REXX was included in the third release of IBM's VM/CMS, shipped in 1983, four years after AWK was released.
- The Korn shell (ksh) was released in 1983.
- Smalltalk-80: The Language and Its Implementation by Goldberg et al. is published. An influential early book that promoted the ideas of OO programming.
- Ada appears . Its name comes from Lady Augusta Ada Byron, Countess of Lovelace and daughter of the English poet Byron. She has been called the first computer programmer because of her work on Charles Babbage's analytical engine. In 1983, the Department of Defense directs that all new "mission-critical" applications be written in Ada.
- In late 1983 and early 1984, Microsoft and Digital Research both release the first C compilers for microcomputers.
- In July , the first implementation of C++ appears. The name was coined by Rick Mascitti.
- In November , Borland's Turbo Pascal hits the scene like a nuclear blast, thanks to an advertisement in BYTE magazine.
1984
- GCC development started. In 1984 Stallman started his work on an open source C compiler that became widely known as gcc. The same year Steven Levy's book "Hackers" is published, with a chapter devoted to RMS that presented him in an extremely favorable light.
- Icon. R. E. Griswold designed the Icon programming language (see overview). Like Perl, Icon is a high-level programming language with a large repertoire of features for processing data structures and character strings. Icon is an imperative, procedural language with a syntax reminiscent of C and Pascal, but with semantics at a much higher level (see Griswold, Ralph E. and Madge T. Griswold. The Icon Programming Language, Second Edition, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1990, ISBN 0-13-447889-4).
- APL2. A reference manual for APL2 appears. APL2 is an extension of APL that permits nested arrays.
1985
- REXX. The first PC implementation of REXX was released.
- Forth controls the submersible sled that locates the wreck of the Titanic.
- Vanilla SNOBOL4 for microcomputers is released.
- Methods, a line-oriented Smalltalk for PCs, is introduced.
- The first version of GCC able to compile itself appeared in late 1985. The same year the GNU Manifesto was published.
1986
- Smalltalk/V appears--the first widely available version of Smalltalk for microcomputers.
- Apple releases Object Pascal for the Mac.
- Borland releases Turbo Prolog.
- Charles Duff releases Actor, an object-oriented language for developing Microsoft Windows applications.
- Eiffel , another object-oriented language, appears.
- C++ appears.
1987
- PERL. The first version of Perl, Perl 1.000 was released by Larry Wall in 1987. See an excellent PerlTimeline for more information.
- Turbo Pascal version 4.0 is released.
1988
- The specification for CLOS -- Common LISP Object System -- is published.
- Oberon. Niklaus Wirth finishes Oberon, his follow-up to Modula-2. The language was stillborn, but some of its ideas found their way into Python.
- PERL 2 was released.
- TCL was created. The Tcl scripting language grew out of John Ousterhout's work on design tools for integrated circuits at the University of California at Berkeley in the early 1980s. In the fall of 1987, while on sabbatical at DEC's Western Research Laboratory, he decided to build an embeddable command language. He started work on Tcl in early 1988, and began using the first version of Tcl in a graphical text editor in the spring of 1988. The idea of Tcl is different from, and to a certain extent more interesting than, the idea of Perl: Tcl was designed as an embeddable macro language for applications. In this sense Tcl is closer to REXX (which was probably one of the first languages used both as a shell language and as a macro language). Important products that use Tcl are the Tk toolkit and Expect.
1989
- The ANSI C specification is published.
- C++ 2.0 arrives in the form of a draft reference manual. The 2.0 version adds features such as multiple inheritance and pointers to members.
- Perl 3.0, released in 1989, was distributed under the GNU General Public License -- one of the first major open source projects distributed under the GPL, and probably the first outside the FSF.
Nineties
1990
- zsh. Paul Falstad wrote zsh, a superset of the ksh88 which also had many csh features.
- C++ 2.1 , detailed in Annotated C++ Reference Manual by B. Stroustrup et al, is published. This adds templates and exception-handling features.
- FORTRAN 90 includes such new elements as case statements and derived types.
- Kenneth Iverson and Roger Hui present J at the APL90 conference.
1991
- Visual Basic wins BYTE's Best of Show award at Spring COMDEX.
- PERL 4 released. In January 1991 the first edition of Programming Perl, a.k.a. The Pink Camel, by Larry Wall and Randal Schwartz, is published by O'Reilly and Associates. It described a new, 4.0 version of Perl; Perl 4.0 itself was released in March of the same year. The final version of Perl 4 was released in 1993. Larry Wall is awarded the Dr. Dobb's Journal Excellence in Programming Award (March).
1992
- Dylan -- named for Dylan Thomas -- an object-oriented language resembling Scheme, is released by Apple.
1993
- ksh93 was released by David Korn. It was the last in the line of AT&T-developed shells.
- ANSI releases the X3J4.1 technical report -- the first-draft proposal for (gulp) object-oriented COBOL. The standard is expected to be finalized in 1997.
- PERL 4. Version 4 was the first widely used version of Perl. The timing was simply perfect: it was already widely available before the Web explosion of 1994.
1994
- PERL 5. Version 5 was released at the end of 1994.
- Microsoft incorporates Visual Basic for Applications into Excel.
1995
- In February , ISO accepts the 1995 revision of the Ada language. Called Ada 95, it includes OOP features and support for real-time systems.
- RUBY. December: first release, 0.95.
1996
- first ANSI C++ standard .
- Ruby 1.0 released. Did not gain much popularity until later.
1997
- Java. Sun launches a tremendous and widely successful campaign to replace Cobol with Java as the standard language for writing commercial applications for the industry.
2006
2007
2011
- Dennis Ritchie, the creator of C, dies. He was only 70 at the time.
There are several interesting "language-induced" errors -- errors that a particular programming language facilitates rather than helps to avoid. They are most studied for C-style languages. Funnily enough, PL/1 (from which C was derived) was a better designed language than the much simpler C in several of those categories.
One of the most famous C design blunders was the small lexical difference between assignment and comparison (remember that Algol used := for assignment), caused by the design decision to make the language more compact (terminals at that time were not very reliable, and the number of symbols typed mattered greatly). In C, assignment is allowed inside an if statement, and no attempt was made to make the language more failsafe by avoiding the possibility of mixing up "=" and "==". In C syntax the statement
if (alpha = beta) ...
assigns the contents of the variable beta to the variable alpha and executes the code in the then-branch if beta is not zero.
It is easy to mix things up and write if (alpha = beta) instead of if (alpha == beta), which is a pretty nasty, and remarkably common, C-induced bug. In case you are comparing a constant to a variable, you can often reverse the operands and put the constant first, as in
if ( 1 == i ) ...
since
if ( 1 = i ) ...
does not make any sense. With the constant first, such a blunder will be detected at the syntax level.
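A compilable sketch of both pitfalls (the warning mentioned is the one typically produced by gcc/clang with -Wall; exact wording varies by compiler):

    #include <stdio.h>

    int main(void)
    {
        int alpha = 0, beta = 5;

        /* Intended as a comparison, written as an assignment.  This compiles
           and runs, but assigns 5 to alpha, and the condition is true here.
           gcc/clang with -Wall warn along the lines of "suggest parentheses
           around assignment used as truth value". */
        if (alpha = beta)
            printf("branch taken, alpha is now %d\n", alpha);

        /* "Constant first" style: the same slip becomes a compile-time error,
           because a constant cannot be assigned to. */
        if (5 == beta)
            printf("beta is 5\n");
        /* if (5 = beta) ...   -- rejected by the compiler */

        return 0;
    }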
Another nasty problem with C, C++, Java, Perl and other C-style languages is that missing curly brackets are pretty difficult to find. They can also be inserted incorrectly, ending with an even more nasty logical error. One effective solution, first implemented in PL/1, was based on calculating the level of nesting (in the compiler listing) and on the ability to close multiple blocks with a single end statement (PL/1 did not use the brackets {}; they were introduced in C).
In C one can use pseudo comments that signify nesting level zero and check those points with a special program or an editor macro.
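A sketch of that convention in C (the marker name NESTING-0 is invented for illustration; any fixed token that a small checker or editor macro can search for would do):

    /* Every point where brace nesting should be back at zero is tagged with a
       marker comment.  A trivial checker (or an editor macro) counts '{' and '}'
       between consecutive markers, so a missing or extra brace is localized to
       one region instead of being reported far away, at the end of the file. */

    void log_event(int code)
    {
        if (code != 0) {
            /* ... handle the error ... */
        }
    }   /* NESTING-0 */

    void run(int n)
    {
        for (int i = 0; i < n; i++) {
            if (i % 2 == 0) {
                /* ... */
            }
        }
    }   /* NESTING-0 */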
Many editors have the ability to jump to the closing bracket for any given opening bracket and vice versa. This is also useful, but it is a less efficient way to solve the problem.
Specifying the maximum length of literals is an effective way of catching a missing quote. This idea was first implemented in the debugging PL/1 compilers. You can also have an option to limit literals to a single line. In general, multi-line literals should have distinct lexical markers (like the "here document" construct in the shell). Some languages like Perl provide the opportunity to use a concatenation operator for splitting literals into multiple lines, which are "merged" at compile time. But if there is no limit on the number of lines a string literal can occupy, a bug can slip in in which an unmatched quote is closed by another unmatched quote in a nearby literal, "commenting out" part of the code. So by itself this does not help much.
A limit on the length of literals can be communicated via a pragma statement at compile time for a particular fragment of text. This is an effective way to avoid the problem. Usually only a few places in a program use multi-line literals, if any.
Editors that use syntax coloring help to detect the unclosed-literal problem, but there are cases when they are useless.
This is best done not with comments, but with a preprocessor, if the language has one (PL/1, C, etc.).
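In C the "merge at compile time" approach mentioned above comes for free: adjacent string literals are concatenated during translation, so a long message can be split into one-literal-per-line pieces, and a missing quote is then reported on the line where it actually occurs. A small sketch:

    #include <stdio.h>

    /* Splitting a long literal into per-line pieces keeps every quote matched
       on its own line; adjacent string literals are concatenated by the C
       compiler at translation time, so the result is a single string. */
    static const char usage[] =
        "usage: report [-v] [-o FILE] INPUT...\n"
        "  -v        verbose diagnostics\n"
        "  -o FILE   write output to FILE instead of stdout\n";

    int main(void) {
        fputs(usage, stdout);
        return 0;
    }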
So a good strategy for the notation of if-else statements is to always use { brace brackets } around the clauses of an if-else or if statement. Having both an if-else and an if statement leads to some possibility of confusion when one of the clauses of a selection statement is itself a selection statement. For example, the code
if (level >= good)
    if (level == excellent)
        cout << "excellent" << endl;
else
    cout << "bad" << endl;
is intended to process a three-state situation in which something can be bad, good or (as a special case of good) excellent; it is supposed to print an appropriate description for the excellent and bad cases, and print nothing for the good case. The indentation of the code reflects these expectations. Unfortunately, the code does not do this. Instead, it prints excellent for the excellent case, bad for the good case, and nothing for the bad case.
The problem is deciding which if matches the else in this expression. The basic rule is
an else matches the nearest previous unmatched if
There are two ways to avoid the dangling else problem:
In fact, you can avoid the dangling else problem completely by always using brackets around the clauses of an if or if-else statement, even if they only enclose a single statement.
- reverse the logic of the outer branch, so that the else is nested inside another else instead of an unmatched if:
if (bad)
    cout << "bad" << endl;
else if (excellent)
    cout << "excellent" << endl;
- use brackets around the if clause so that the inner if is terminated by the end of the enclosing bracket:
if (good) {
    if (excellent)
        cout << "excellent" << endl;
} else
    cout << "bad" << endl;
Always use { brace brackets } around the clauses of an if-else or if statement.
Jul 17, 2021 | www.linkedin.com
Development was easier in the days of classical CICS, where all the logic was managed by a single mainframe computer and 3270 clients were responsible for nothing except displaying output and responding to keystrokes. But that's no longer adequate when smart phones and PC's are more powerful than mainframes of old, and our task is to develop systems that can integrate large shared databases with local processing to provide the modern systems that we need. This needs web services, but development of distributed systems with COBOL, Java, C#, and similar technology is difficult.
Since 2015 MANASYS Jazz has been able to develop CICS web services, but it remained difficult to develop client programs to work with them. Build 16.1 (December 2020) was a major breakthrough, offering integrated development of COBOL CICS web services for the mainframe, and C# client interfaces that make client development as easy as discovering properties and methods with Intellisense.
Build 16.2 (January 2021) supported services returning several records. We'd found that each request/response took a second or two, whether it was returning 1 or many records, but the interface could page forward and back instantly within the list of returned records. Build 16.2 also offered easy addition of related-table data, and interfaces for VSAM as well as DB2 web services. Build 16.3 (June 2021) takes a further step, adding services and interfaces for parent-child record collections, for example a Department record with the list of Employees who work there.
Our video "Bridging Two Worlds" has been updated to demonstrate these features. See how easy it is to create a web service and related client logic that will display and update one or many records at a time. See how MANASYS controls updating with CICS-style pseudo-locking, preventing invalid updates automatically. See how easily MANASYS handles data from many records at a time, resulting in clean and efficient service architecture.
Robert Barnes,
CEO, Jazz Software Ltd
Birkenhead, Auckland 0626, New Zealand
Mobile +64 27 4592702
Skype Robert.barnes3
Jun 07, 2021 | dev.to
The working assumption should be "Nobody, including myself, will ever reuse this code." It is a very realistic assumption, as programmers are notoriously reluctant to reuse code written by somebody else. And as your programming skills evolve, your old code will look pretty foreign to you.
"In the one and only true way. The object-oriented version of 'Spaghetti code' is, of course, 'Lasagna code'. (Too many layers)." - Roberto Waltman
This week on our show we discuss this quote. Does OOP encourage too many layers in code?
I first saw this phenomenon when doing Java programming. It wasn't a fault of the language itself, but of excessive levels of abstraction. I wrote about this before in the false abstraction antipattern
So what is your story of there being too many layers in the code? Or do you disagree with the quote, or us? Bertil Muth • Dec 9 '18
I once worked for a project, the codebase had over a hundred classes for quite a simple job to be done. The programmer was no longer available and had almost used every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality. Maybe the unnecessary thick lasagne is a symptom of devs looking for a one-size-fits-all solution. Nested Software • Dec 9 '18 • Edited on Dec 16
I think there's a very pervasive mentality of "I must to use these tools, design patterns, etc." instead of "I need to solve a problem" and then only use the tools that are really necessary. I'm not sure where it comes from, but there's a kind of brainwashing that people have where they're not happy unless they're applying complicated techniques to accomplish a task. It's a fundamental problem in software development... Nested Software • Dec 9 '18
I tend to think of layers of inheritance when it comes to OO. I've seen a lot of cases where the developers just build up long chains of inheritance. Nowadays I tend to think that such a static way of sharing code is usually bad. Having a base class with one level of subclasses can be okay, but anything more than that is not a great idea in my book. Composition is almost always a better fit for re-using code.
Jan 01, 2011 | www.pixelstech.net
Anyone who claims to be even remotely versed in computer science knows what "spaghetti code" is. That type of code still sadly exists. But today we also have, for lack of a better term (and sticking to the pasta metaphor), "lasagna code".
Lasagna Code is layer upon layer of abstractions, objects and other meaningless misdirections that result in bloated, hard to maintain code all in the name of "clarity". It drives me nuts to see how bad some code today is. And then you come across how small Turbo Pascal v3 was, and after comprehending it was a full-blown Pascal compiler, one wonders why applications and compilers today are all so massive.
Turbo Pascal v3 was less than 40k. That's right, 40 thousand bytes. Try to get anything useful today in that small a footprint. Most people can't even compile "Hello World" in less than a few megabytes courtesy of our object-oriented obsessed programming styles which seem to demand "lines of code" over clarity and "abstractions and objects" over simplicity and elegance.
Back when I was starting out in computer science I thought by today we'd be writing a few lines of code to accomplish much. Instead, we write hundreds of thousands of lines of code to accomplish little. It's so sad it's enough to make one cry, or just throw your hands in the air in disgust and walk away.
There are bright spots. There are people out there that code small and beautifully. But they're becoming rarer, especially when someone who seemed to have thrived on writing elegant, small, beautiful code recently passed away. Dennis Ritchie understood you could write small programs that did a lot. He comprehended that the algorithm is at the core of what you're trying to accomplish. Create something beautiful and well thought out and people will examine it forever, such as Thompson's version of Regular Expressions !
... ... ...
Dec 04, 2011 | www.badcheese.com
I've seen many infrastructures in my day. I work for a company with a very complicated infrastructure now. They've got a dev/stage/prod environment for every product (and they've got many of them). Trust is not a word spoken lightly here. There is no 'trust' for even sysadmins (I've been working here for 7 months now and still don't have production sudo access). Developers constantly complain about not having the access that they need to do their jobs and there are multiple failures a week that can only be fixed by a small handful of people that know the (very complex) systems in place. Not only that, but in order to save work, they've used every cutting-edge piece of software that they can get their hands on (mainly to learn it so they can put it on their resume, I assume), but this causes more complexity that only a handful of people can manage. As a result of this the site uptime is (on a good month) 3 nines at best.
In my last position (pronto.com) I put together an infrastructure that any idiot could maintain. I used unmanaged switches behind a load-balancer/firewall and a few VPNs around to the different sites. It was simple. It had very little complexity, and a new sysadmin could take over in a very short time if I were to be hit by a bus. A single person could run the network and servers and if the documentation was lost, a new sysadmin could figure it out without much trouble.
Over time, I handed off my ownership of many of the Infrastructure components to other people in the operations group and of course, complexity took over. We ended up with a multi-tier network with bunches of VLANs and complexity that could only be understood with charts, documentation and a CCNA. Now the team is 4+ people and if something happens, people run around like chickens with their heads cut off not knowing what to do or who to contact when something goes wrong.
Complexity kills productivity. Security is inversely proportionate to usability. Keep it simple, stupid. These are all rules to live by in my book.
Downtimes: Beatport: not unlikely to have 1-2 hours downtime for the main site per month.
Pronto: several 10-15 minute outages a year
Pronto (under my supervision): a few seconds a month (mostly human error though, no mechanical failure)
Jul 22, 2005 | hxr.us
Fri Jul 22 13:56:52 EDT 2005
Category [ Internet Politics ]
This was sent to me by a colleague. From "S4 -- The System Standards Stockholm Syndrome" by John G. Waclawsky, Ph.D.:
The "Stockholm Syndrome" describes the behavior of some hostages. The "System Standards Stockholm Syndrome" (S4) describes the behavior of system standards participants who, over time, become addicted to technology complexity and hostages of group thinking.Read the whole thing over at BCR .
And while this particularly picks on the ITU types, it should hit close to home to a whole host of other "endeavors".
IMS & Stockholm Syndrome - Light Reading
12:45 PM -- While we flood you with IMS-related content this week, perhaps it's sensible to share some airtime with a clever warning about being held "captive" to the hype.
Sunday, August 07, 2005: S4 - The Systems Standards Stockholm Syndrome. John Waclawsky, part of the Mobile Wireless Group at Cisco Systems, features an interesting article in the July 2005 issue of the Business Communications Review on The Systems Standards Stockholm Syndrome. Since his responsibilities include standards activities (WiMAX, IETF, OMA, 3GPP and TISPAN), identification of product requirements and the definition of mobile wireless and broadband architectures, he seems to know very well what he is talking about, namely the IP Multimedia Subsystem (IMS). See also his article in the June 2005 issue on IMS 101 - What You Need To Know Now.
This warning comes from John G. Waclawsky, PhD, senior technical staff, Wireless Group, Cisco Systems Inc. (Nasdaq: CSCO). Waclawsky, writing in the July issue of Business Communications Review, compares the fervor over IMS to the "Stockholm Syndrome," a term that comes from a 1973 hostage event in which hostages became sympathetic to their captors.
Waclawsky says a form of the Stockholm Syndrome has taken root in technical standards groups, which he calls "System Standards Stockholm Syndrome," or S4.
Here's a snippet from Waclawsky's column:
What causes S4? Captives identify with their captors initially as a defensive mechanism, out of fear of intellectual challenges. Small acts of kindness by the captors, such as granting a secretarial role (often called a "chair") to a captive in a working group are magnified, since finding perspective in a systems standards meeting, just like a hostage situation, is by definition impossible. Rescue attempts are problematic, since the captive could become mentally incapacitated by suddenly being removed from a codependent environment.
The full article can be found here -- R. Scott Raynovich, US Editor, Light Reading
See also the Wikedpedia glossary from Martin below:
IMS. Internet Monetisation System. A minor adjustment to Internet Protocol to add a "price" field to packet headers. Earlier versions referred to Innovation Minimisation System. This usage is now deprecated. (Expected release Q2 2012, not available in all markets, check with your service provider in case of sudden loss of unmediated connectivity.)
It is so true that I have to cite it completely (bold emphasis added):
The "Stockholm Syndrome" describes the behavior of some hostages. The "System Standards Stockholm Syndrome" (S4) describes the behavior of system standards participants who, over time, become addicted to technology complexity and hostages of group thinking.
Although the original name derives from a 1973 hostage incident in Stockholm, Sweden, the expanded name and its acronym, S4, applies specifically to systems standards participants who suffer repeated exposure to cult dogma contained in working group documents and plenary presentations. By the end of a week in captivity, Stockholm Syndrome victims may resist rescue attempts, and afterwards refuse to testify against their captors. In system standards settings, S4 victims have been known to resist innovation and even refuse to compete against their competitors.
Recent incidents involving too much system standards attendance have resulted in people being captured by radical ITU-like factions known as the 3GPP or 3GPP2.
I have to add of course ETSI TISPAN and it seems that the syndrome is also spreading into IETF, especially to SIP and SIPPING.
The victims evolve to unwitting accomplices of the group as they become immune to the frustration of slow plodding progress, thrive on complexity and slowly turn a blind eye to innovative ideas. When released, they continue to support their captors in filtering out disruptive innovation, and have been known to even assist in the creation and perpetuation of bureaucracy.
Years after intervention and detoxification, they often regret their system standards involvement. Today, I am afraid that S4 cases occur regularly at system standards organizations.

What causes S4? Captives identify with their captors initially as a defensive mechanism, out of fear of intellectual challenges. Small acts of kindness by the captors, such as granting a secretarial role (often called a "chair") to a captive in a working group are magnified, since finding perspective in a systems standards meeting, just like a hostage situation, is by definition impossible. Rescue attempts are problematic, since the captive could become mentally incapacitated by suddenly being removed from a codependent environment.

It's important to note that these symptoms occur under tremendous emotional and/or physical duress due to lack of sleep and abusive travel schedules. Victims of S4 often report the application of other classic "cult programming" techniques, including:
- The encouraged ingestion of mind-altering substances. Under the influence of alcohol, complex systems standards can seem simpler and almost rational.
- "Love-fests" in which victims are surrounded by cultists who feign an interest in them and their ideas. For example, "We'd love you to tell us how the Internet would solve this problem!"
- Peer pressure. Professional, well-dressed individuals with standing in the systems standards bureaucracy often become more attractive to the captive than the casual sorts commonly seen at IETF meetings.
Back in their home environments, S4 victims may justify continuing their bureaucratic behavior, often rationalizing and defending their system standard tormentors, even to the extent of projecting undesirable system standard attributes onto component standards bodies. For example, some have been heard murmuring, "The IETF is no picnic and even more bureaucratic than 3GPP or the ITU," or, "The IEEE is hugely political." (For more serious discussion of component and system standards models, see "Closed Architectures, Closed Systems And Closed Minds," BCR, October 2004.)
On a serious note, the ITU's IMS (IP Multimedia Subsystem) shows every sign of becoming the latest example of systems standards groupthink. Its concepts are more than seven years old and still not deployed, while its release train lengthens with functional expansions and change requests. Even a cursory inspection of the IMS architecture reveals the complexity that results from:
- decomposing every device into its most granular functions and linkages; and
- tracking and controlling every user's behavior and related billing.
The proliferation of boxes and protocols, and the state management required for data tracking and control, lead to cognitive overload but little end user value.
It is remarkable that engineers who attend system standards bodies and use modern Internet- and Ethernet-based tools don't apply to their work some of the simplicity learned from years of Internet and Ethernet success: to build only what is good enough, and as simply as possible.
Now here I have to break in: I think the syndrome is also spreading to the IETF, because the IETF is starting to leave these principles behind - especially in SIP and SIPPING, not to mention the Session Border Confuser (SBC).
The lengthy and detailed effort that characterizes systems standards sometimes produces a bit of success, as the 18 years of GSM development (1980 to 1998) demonstrate. Yet such successes are highly optimized, very complex and thus difficult to upgrade, modify and extend.
Email is a great example. More than 15 years of popular email usage have passed, and today email on wireless is just beginning to approach significant usage by ordinary people.
The IMS is being hyped as a way to reduce the difficulty of integrating new services, when in fact it may do just the opposite. IMS could well inhibit new services integration due to its complexity and related impacts on cost, scalability, reliability, OAM, etc.
Not to mention the sad S4 effects on all those engineers participating in IMS-related standards efforts.
Here the Wikedpedia glossary from Martin Geddes (Telepocalypse), cited above, fits in very well.
Jun 02, 2021 | www.reddit.com
Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.
By now, and to be frank in the last 30 years too, this is complete and utter bollocks. Feature creep is everywhere, typical shell tools are chock-full of spurious additions, from formatting to "side" features, all half-assed and barely, if at all, consistent.
Nothing can resist feature creep.

not_perfect_yet, 3 years ago
It's still a good idea. It's become very rare though. Many problems we have today are a result of not following it.

name_censored_, 3 years ago (edited)
"By now, and to be frank in the last 30 years too, this is complete and utter bollocks."
There is not one single other idea in computing that is as unbastardised as the unix philosophy - given that it's been around fifty years. Heck, Microsoft only just developed PowerShell - and if that's not Microsoft's take on the Unix philosophy, I don't know what is.
In that same time, we've vacillated between thick and thin computing (mainframes, thin clients, PCs, cloud). We've rebelled against at least four major schools of program design thought (structured, procedural, symbolic, dynamic). We've had three different database revolutions (RDBMS, NoSQL, NewSQL). We've gone from grassroots movements to corporate dominance on countless occasions (notably - the internet, IBM PCs/Wintel, Linux/FOSS, video gaming). In public perception, we've run the gamut from clerks ('60s-'70s) to boffins ('80s) to hackers ('90s) to professionals ('00s post-dotcom) to entrepreneurs/hipsters/bros ('10s "startup culture").
It's a small miracle that iproute2 only has formatting options and grep only has --color. If they feature-crept anywhere near the same pace as the rest of the computing world, they would probably be a RESTful SaaS microservice with ML-powered autosuggestions.

badsectoracula, 3 years ago
This is because adding a new feature is actually easier than trying to figure out how to do it the Unix way - often you already have the data structures in memory and the functions to manipulate them at hand, so adding a --frob parameter that does something special with that feels trivial.
GNU and their stance to ignore the Unix philosophy (AFAIK Stallman said at some point he didn't care about it) while becoming the most available set of tools for Unix systems didn't help either.
ILikeBumblebees, 3 years ago (edited)
"Feature creep is everywhere"
No, it certainly isn't. There are tons of well-designed, single-purpose tools available for all sorts of purposes. If you live in the world of heavy, bloated GUI apps, well, that's your prerogative, and I don't begrudge you it, but just because you're not aware of alternatives doesn't mean they don't exist.
"typical shell tools are chock-full of spurious additions,"
What does "feature creep" even mean with respect to shell tools? If they have lots of features, but each function is well-defined and invoked separately, and still conforms to conventional syntax, uses stdio in the expected way, etc., does that make it un-Unixy? Is BusyBox bloatware because it has lots of discrete shell tools bundled into a single binary?

nirreskeya, 3 years ago
Zawinski's Law :)

waivek, 3 years ago
The (anti) foreword by Dennis Ritchie -
I have succumbed to the temptation you offered in your preface: I do write you off as envious malcontents and romantic keepers of memories. The systems you remember so fondly (TOPS-20, ITS, Multics, Lisp Machine, Cedar/Mesa, the Dorado) are not just out to pasture, they are fertilizing it from below.
Your judgments are not keen, they are intoxicated by metaphor. In the Preface you suffer first from heat, lice, and malnourishment, then become prisoners in a Gulag. In Chapter 1 you are in turn infected by a virus, racked by drug addiction, and addled by puffiness of the genome.
Yet your prison without coherent design continues to imprison you. How can this be, if it has no strong places? The rational prisoner exploits the weak places, creates order from chaos: instead, collectives like the FSF vindicate their jailers by building cells almost compatible with the existing ones, albeit with more features. The journalist with three undergraduate degrees from MIT, the researcher at Microsoft, and the senior scientist at Apple might volunteer a few words about the regulations of the prisons to which they have been transferred.
Your sense of the possible is in no sense pure: sometimes you want the same thing you have, but wish you had done it yourselves; other times you want something different, but can't seem to get people to use it; sometimes one wonders why you just don't shut up and tell people to buy a PC with Windows or a Mac. No Gulag or lice, just a future whose intellectual tone and interaction style is set by Sonic the Hedgehog. You claim to seek progress, but you succeed mainly in whining.
Here is my metaphor: your book is a pudding stuffed with apposite observations, many well-conceived. Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy.
Bon appetit!
Jun 02, 2021 | www.reddit.com
I agree with Linus Torvalds on that issue:
There's still value in understanding the traditional UNIX "do one thing and do it well" model where many workflows can be done as a pipeline of simple tools each adding their own value, but let's face it, it's not how complex systems really work, and it's not how major applications have been working or been designed for a long time. It's a useful simplification, and it's still true at /some/ level, but I think it's also clear that it doesn't really describe most of reality.
http://www.itwire.com/business-it-news/open-source/65402-torvalds-says-he-has-no-strong-opinions-on-systemd

Almost nothing on the Desktop works as the original Unix inventors prescribed as the "Unix way", and even editors like "Vim" are questionable since it has integrated syntax highlighting and spell checker. According to dogmatic Unix Philosophy you should use "ed, the standard editor" to compose the text and then pipe your text into "spell". Nobody really wants to work that way.
But while "Unix Philosophy" in many ways have utterly failed as a way people actually work with computers and software, it is still very good to understand, and in many respects still very useful for certain things. Personally I love those standard Linux text tools like "sort", "grep" "tee", "sed" "wc" etc, and they have occasionally been very useful even outside Linux system administration.
May 23, 2021 | sookocheff.com
One of the recurring themes of any technology discussion is programming language. It doesn't take much effort to find blog posts with dramatic headlines (and even more dramatic comments) about how shipping a new project with Haskell or Clojure or Elm improved someone's job, marriage, and life. These success stories are posted by raving fans that have nothing but the best to say about their language of choice. A common thread running through these posts is that they are typically tied to building out new, greenfield projects. I can't help but wonder. After the honeymoon of building a new project with a new programming language, what happens next? Is it all bubble gum and roses?
Sadly, it doesn't matter how suited to the job a language is, how much fun it is to program in, or how much you learn along the way. What matters the most is if the company you work for can support it. Engineers move on, there are -- believe it or not -- lean times, and after a few years there is no one left who can support the new esoteric system. Once the application is in production, how do you support it? Who is going to be on call?
Can't you solve this problem by hiring? Not really. First you need to either find someone with the appropriate skill set or train someone in the skills. Both of these cost time and money. Second, what is the new hire going to do? They will be tasked with a part-time responsibility of maintaining a legacy system written in an esoteric language, while everyone else in the organization is working in something else. And they will be the 24/7 on-call support person. You can help with the on-call situation by hiring an additional two or three people to help support the service. Assuming you need three engineers to maintain a healthy on-call schedule, at roughly $200K per engineer, you are going to be spending $600K a year on this service. Does this new programming language save you that much money every year over using the language everyone else in the organization knows? As a manager or technology lead, what do you do? Keep trying to hire? Keep spending $600K a year on a programming language with no discernible business impact? No. You design it out of the system.
A case study. We had an internal development team working on a new documentation portal and they chose to use Elm for the frontend when everyone else was using Dart and JavaScript. The team was able to get the basics up and running quickly, and man they were having fun. But the service became increasingly difficult to manage as product requirements expanded beyond the strengths of the Elm ecosystem's core competencies. Shortly after, the core development team left to pursue other opportunities.
At first we tried to train existing engineers on Elm. That doesn't work. Not because Elm is bad, but because people don't want to change the direction of their career just to support someone else's legacy project. A second option is to hire at least two, preferably three Elm engineers. This is harder than it sounds. Not all engineers are excited about learning esoteric languages, and these new engineers you hire will be quickly wondering about their career development prospects if they are tied to maintaining the only legacy system written in Elm while everyone else is working on something else.
In the end, instead of trying to maintain the system it was scrapped and rewritten in the common frontend language the rest of the organization uses. Rewrites are a difficult decision for well-known reasons, but ultimately it was the correct choice.
The thesis of this post is that you need to choose a programming language that your organization can support. A natural corollary is that if you want to introduce a new programming language to a company, it is your responsibility to convince the business of the benefit of the language. You need to generate organizational support for the language before you go ahead and start using it. This can be difficult, it can be uncomfortable, and you could be told no. But without that organizational support, your new service is dead in the water.
The most important criterion for choosing a programming language is choosing something your organization can support.
May 03, 2021 | dev.to
# discuss
edA-qa mort-ora-y · Dec 8, 2018 · 1 min read

"In the one and only true way. The object-oriented version of 'Spaghetti code' is, of course, 'Lasagna code'. (Too many layers)." - Roberto Waltman
This week on our show we discuss this quote. Does OOP encourage too many layers in code?
#14 Spaghetti OOPs - Edaqa & Stephane Podcast
I first saw this phenomenon when doing Java programming. It wasn't a fault of the language itself, but of excessive levels of abstraction. I wrote about this before in the false abstraction antipattern
So what is your story of there being too many layers in the code? Or do you disagree with the quote, or us?
Shrek: Object-oriented programs are like onions.
Donkey: They stink?
Shrek: Yes. No.
Donkey: Oh, they make you cry.
Shrek: No.
Donkey: Oh, you leave em out in the sun, they get all brown, start sproutin’ little white hairs.
Shrek: No. Layers. Onions have layers. Object-oriented programs have layers. Onions have layers. You get it? They both have layers.
Donkey: Oh, they both have layers. Oh. You know, not everybody like onions.

Unrelated, but I love both spaghetti and lasagna 😋
I once worked on a project where the codebase had over a hundred classes for quite a simple job. The programmer was no longer available and had used almost every design pattern in the GoF book. We cut it down to ca. 10 classes, hardly losing any functionality. Maybe the unnecessarily thick lasagne is a symptom of devs looking for a one-size-fits-all solution.
Nested Software Dec 9 '18 Edited on Dec 16
I think there's a very pervasive mentality of "I must use these tools, design patterns, etc." instead of "I need to solve a problem" and then only using the tools that are really necessary. I'm not sure where it comes from, but there's a kind of brainwashing that people have where they're not happy unless they're applying complicated techniques to accomplish a task. It's a fundamental problem in software development...
I tend to think of layers of inheritance when it comes to OO. I've seen a lot of cases where the developers just build up long chains of inheritance. Nowadays I tend to think that such a static way of sharing code is usually bad. Having a base class with one level of subclasses can be okay, but anything more than that is not a great idea in my book. Composition is almost always a better fit for re-using code.
Inheritance is my preferred option for things that model type hierarchies. For example, widgets in a UI, or literal types in a compiler.
One reason inheritance is over-used is because languages don't offer enough options to do composition correctly. It ends up becoming a lot of boilerplate code. Proper support for mixins would go a long way to reducing bad inheritance.
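To make the commenter's point concrete, here is a small sketch of my own (hypothetical names, Java as the illustration language): a deep inheritance chain versus composition plus default-method interfaces, which is roughly the "mixin" support being asked for.

```java
// Deep inheritance: each layer adds a little behavior, and the chain hardens quickly.
class Widget { void draw() { /* render the basic widget */ } }
class ClickableWidget extends Widget { void onClick() { /* handle click */ } }
class DraggableClickableWidget extends ClickableWidget { void onDrag() { /* handle drag */ } }

// Composition plus default-method "mixins": behaviors are combined per class, not stacked in a chain.
interface Clickable { default void onClick() { System.out.println("clicked"); } }
interface Draggable { default void onDrag() { System.out.println("dragged"); } }

class Button implements Clickable {            // picks only the behavior it needs
    private final Widget look = new Widget();  // reuse by composition, not extension
    void draw() { look.draw(); }
}

class Slider implements Clickable, Draggable { // combines two behaviors without a base-class chain
    private final Widget look = new Widget();
    void draw() { look.draw(); }
}
```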
It is always up to the task. For small programs of course you don't need so many layers, interfaces and so on. For a bigger, more complex one you need them to avoid a lot of issues: code duplication, unreadable code, constant merge conflicts, etc.
So build layers only as needed. I would agree with that.
I'm building a personal project as a means to get something from zero to production for learning purposes, and I am struggling with wiring the front-end to the back. Either I dump all the code in the fetch callback or I use DTOs, two sets of interfaces to describe the API data structure and the internal data structure... It's a mess really, but I haven't found a good level of compromise.
Thanks for sharing your thoughts!
It's interesting, because a project that gets burned by spaghetti can drift into lasagna code to overcompensate. Still bad, but lasagna code is somewhat more manageable (just a huge headache to reason about).
But having an ungodly combination of those two... I dare not think about it. shudder
Sidenote before I finish listening: I appreciate that I can minimize the browser on mobile and have this keep playing, unlike with other apps (looking at you, YouTube).
Do not build solutions for problems you do not have.
At some point you need to add something because it makes sense. Until it makes sense, STICK WITH THE SPAGHETTI!!
May 03, 2021 | georgik.rocks
Code smells or anti-patterns are a common classification of source code quality. There is also classification based on food which you can find on Wikipedia.
Spaghetti code
Spaghetti code is a pejorative term for source code that has a complex and tangled control structure, especially one using many GOTOs, exceptions, threads, or other "unstructured" branching constructs. It is named such because program flow tends to look like a bowl of spaghetti, i.e. twisted and tangled. Spaghetti code can be caused by several factors, including inexperienced programmers and a complex program which has been continuously modified over a long life cycle. Structured programming greatly decreased the incidence of spaghetti code.

Ravioli code
Ravioli code is a type of computer program structure, characterized by a number of small and (ideally) loosely-coupled software components. The term is in comparison with spaghetti code, comparing program structure to pasta; with ravioli (small pasta pouches containing cheese, meat, or vegetables) being analogous to objects (which ideally are encapsulated modules consisting of both code and data).

Lasagna code
Lasagna code is a type of program structure, characterized by several well-defined and separable layers, where each layer of code accesses services in the layers below through well-defined interfaces. The term is in comparison with spaghetti code, comparing program structure to pasta.

Spaghetti with meatballs
The term "spaghetti with meatballs" is a pejorative term used in computer science to describe loosely constructed object-oriented programming (OOP) that remains dependent on procedural code. It may be the result of a system whose development has transitioned over a long life-cycle, language constraints, micro-optimization theatre, or a lack of coherent coding standards.
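As a rough illustration of the difference (my own toy example in Java, not from the original post), compare a tangled method with a small layered version of the same job:

```java
// "Spaghetti": pricing, validation and I/O all mixed in one method's control flow.
class OrderSpaghetti {
    void handle(String item, int qty, boolean vip) {
        if (qty <= 0) { System.out.println("bad qty"); return; }
        double price = item.equals("book") ? 10 : 25;
        if (vip) price *= 0.9;
        System.out.println("charged " + price * qty);
    }
}

// "Lasagna": separable layers, each talking only to the layer below through a small interface.
interface PriceCatalog { double priceOf(String item); }

class InMemoryCatalog implements PriceCatalog {        // data layer
    public double priceOf(String item) { return item.equals("book") ? 10 : 25; }
}

class OrderService {                                   // business layer
    private final PriceCatalog catalog;                // depends on an interface, not a concrete store
    OrderService(PriceCatalog catalog) { this.catalog = catalog; }
    double total(String item, int qty, boolean vip) {
        double price = catalog.priceOf(item) * qty;
        return vip ? price * 0.9 : price;
    }
}
```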
Do you know about other interesting source code classification?
![]() |
![]() |
![]() |
Apr 22, 2021 | www.redhat.com
When introducing a new tool, programming language, or dependency into your environment, what steps do you take to evaluate it? In this article, I will walk through a six-question framework I use to make these determinations.
What problem am I trying to solve?

We all get caught up in the minutiae of the immediate problem at hand. An honest, critical assessment helps divulge broader root causes and prevents micro-optimizations.
Let's say you are experiencing issues with your configuration management system. Day-to-day operational tasks are taking longer than they should, and working with the language is difficult. A new configuration management system might alleviate these concerns, but make sure to take a broader look at this system's context. Maybe switching from virtual machines to immutable containers eases these issues and more across your environment while being an equivalent amount of work. At this point, you should explore the feasibility of more comprehensive solutions as well. You may decide that this is not a feasible project for the organization at this time due to a lack of organizational knowledge around containers, but conscientiously accepting this tradeoff allows you to put containers on a roadmap for the next quarter.
This intellectual exercise helps you drill down to the root causes and solve core issues, not the symptoms of larger problems. This is not always going to be possible, but be intentional about making this decision.
Does this tool solve that problem?
Now that we have identified the problem, it is time for critical evaluation of both ourselves and the selected tool.
A particular technology might seem appealing because it is new, because you read a cool blog post about it, or because you want to be the one giving a conference talk. Bells and whistles can be nice, but the tool must resolve the core issues you identified in the first question.
What am I giving up?

The tool will, in fact, solve the problem, and we know we're solving the right problem, but what are the tradeoffs?
These considerations can be purely technical. Will the lack of observability tooling prevent efficient debugging in production? Does the closed-source nature of this tool make it more difficult to track down subtle bugs? Is managing yet another dependency worth the operational benefits of using this tool?
Additionally, include the larger organizational, business, and legal contexts that you operate under.
Are you giving up control of a critical business workflow to a third-party vendor? If that vendor doubles their API cost, is that something that your organization can afford and is willing to accept? Are you comfortable with closed-source tooling handling a sensitive bit of proprietary information? Does the software licensing make this difficult to use commercially?
While not simple questions to answer, taking the time to evaluate this upfront will save you a lot of pain later on.
Is the project or vendor healthy?

This question comes with the addendum "for the balance of your requirements." If you only need a tool to get your team over a four to six-month hump until Project X is complete, this question becomes less important. If this is a multi-year commitment and the tool drives a critical business workflow, this is a concern.
When going through this step, make use of all available resources. If the solution is open source, look through the commit history, mailing lists, and forum discussions about that software. Does the community seem to communicate effectively and work well together, or are there obvious rifts between community members? If part of what you are purchasing is a support contract, use that support during the proof-of-concept phase. Does it live up to your expectations? Is the quality of support worth the cost?
Make sure you take a step beyond GitHub stars and forks when evaluating open source tools as well. Something might hit the front page of a news aggregator and receive attention for a few days, but a deeper look might reveal that only a couple of core developers are actually working on a project, and they've had difficulty finding outside contributions. Maybe a tool is open source, but a corporate-funded team drives core development, and support will likely cease if that organization abandons the project. Perhaps the API has changed every six months, causing a lot of pain for folks who have adopted earlier versions.
What are the risks?

As a technologist, you understand that nothing ever goes as planned. Networks go down, drives fail, servers reboot, rows in the data center lose power, entire AWS regions become inaccessible, or BGP hijacks re-route hundreds of terabytes of Internet traffic.
Ask yourself how this tooling could fail and what the impact would be. If you are adding a security vendor product to your CI/CD pipeline, what happens if the vendor goes down?
This brings up both technical and business considerations. Do the CI/CD pipelines simply time out because they can't reach the vendor, or do you have it "fail open" and allow the pipeline to complete with a warning? This is a technical problem but ultimately a business decision. Are you willing to go to production with a change that has bypassed the security scanning in this scenario?
Obviously, this task becomes more difficult as we increase the complexity of the system. Thankfully, sites like k8s.af consolidate example outage scenarios. These public postmortems are very helpful for understanding how a piece of software can fail and how to plan for that scenario.
What are the costs?

The primary considerations here are employee time and, if applicable, vendor cost. Is that SaaS app cheaper than more headcount? If you save each developer on the team two hours a day with that new CI/CD tool, does it pay for itself over the next fiscal year?
Granted, not everything has to be a cost-saving proposition. Maybe it won't be cost-neutral if you save the dev team a couple of hours a day, but you're removing a huge blocker in their daily workflow, and they would be much happier for it. That happiness is likely worth the financial cost. Onboarding new developers is costly, so don't underestimate the value of increased retention when making these calculations.
Wrap up

I hope you've found this framework insightful, and I encourage you to incorporate it into your own decision-making processes. There is no one-size-fits-all framework that works for every decision. Don't forget that, sometimes, you might need to go with your gut and make a judgment call. However, having a standardized process like this will help differentiate between those times when you can critically analyze a decision and when you need to make that leap.
Nov 22, 2020 | www.sysprog.net
They have computers, and they may have other weapons of mass destruction. (Janet Reno)
I think computer viruses should count as life. I think it says something about human nature that the only form of life we have created so far is purely destructive. We've created life in our own image. (Stephen Hawking)
If it keeps up, man will atrophy all his limbs but the push-button finger. (Frank Lloyd Wright)
If software were as unreliable as economic theory, there wouldn't be a plane made of anything other than paper that could get off the ground. (Jim Fawcette)
Computers are like bikinis. They save people a lot of guesswork. (Sam Ewing)
If the automobile had followed the same development cycle as the computer, a Rolls-Royce would today cost $100, get a million miles per gallon, and explode once a year, killing everyone inside. ("Robert X. Cringely", Computerworld)
To err is human, but to really foul things up you need a computer. (Paul Ehrlich)
All parts should go together without forcing. You must remember that the parts you are reassembling were disassembled by you. Therefore, if you can't get them together again, there must be a reason. By all means, do not use a hammer. (1925 IBM Maintenance Manual)
Considering the current sad state of our computer programs, software development is clearly still a black art, and cannot yet be called an engineering discipline. (Bill Clinton)
Man is still the most extraordinary computer of all. (John F Kennedy)
At this time I do not have a personal relationship with a computer. (Janet Reno)
For a long time it puzzled me how something so expensive, so leading edge, could be so useless, and then it occurred to me that a computer is a stupid machine with the ability to do incredibly smart things, while computer programmers are smart people with the ability to do incredibly stupid things. They are, in short, a perfect match. (Bill Bryson)
Just remember: you're not a "dummy," no matter what those computer books claim. The real dummies are the people who, though technically expert, couldn't design hardware and software that's usable by normal consumers if their lives depended upon it. (Walter Mossberg)
You have to ask yourself how many IT organizations, how many CIOs have on their goal sheet, or their mission statement, "Encouraging creativity and innovation in the corporation?" That's not why the IT organization was created. (Tom Austin)
The real problem is not whether machines think but whether men do. (B. F. Skinner)
The global village is not created by the motor car or even by the airplane. It's created by instant electronic information movement. (Marshall Mcluhan)
Replicating assemblers and thinking machines pose basic threats to people and to life on Earth. Among the cognoscenti of nanotechnology, this threat has become known as the gray goo problem. (Eric Drexler)
Computers are merely ingenious devices to fulfill unimportant functions. The computer revolution is an explosion of nonsense. (Neil Postman)
Who cares how it works, just as long as it gives the right answer? (Jeff Scholnik)
There's an old story about the person who wished his computer were as easy to use as his telephone. That wish has come true, since I no longer know how to use my telephone. (Bjarne Stroustrup)
I think and think for months and years. Ninety-nine times, the conclusion is false. The hundredth time I am right. (Albert Einstein)
The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency. (Bill Gates)
See, no matter how clever your automation systems might be, it all falls apart if your human wetware isn't up to the job. (Andrew Orlowski)
That's the thing about people who think they hate computers. What they really hate is lousy programmers. (Larry Niven)
On the Internet, nobody knows you're a dog. (Peter Steiner)
We are a bit of stellar matter gone wrong. We are physical machinery - puppets that strut and talk and laugh and die as the hand of time pulls the strings beneath. But there is one elementary inescapable answer. We are that which asks the question.(Sir Arthur Eddington)
The nice thing about standards is that there are so many of them to choose from. (Andrew Tanenbaum)
Standards are always out of date. That's what makes them standards. (Alan Bennett)
Computer Science : 1. A study akin to numerology and astrology, but lacking the precision of the former and the success of the latter. 2. The boring art of coping with a large number of trivialities. (Stan Kelly-Bootle)
Once there was a time when the bringing-forth of the true into the beautiful was called technology. And art was simply called techne. (Martin Heidegger)
The computer actually may have aggravated management's degenerative tendency to focus inward on costs. (Peter Drucker)
The buyer needs a hundred eyes, the vendor not one. (George Herbert)
Anyone who puts a small gloss on a fundamental technology, calls it proprietary, and then tries to keep others from building on it, is a thief. (Tim O'Reilly)
What a satire, by the way, is that machine [Babbage's Engine], on the mere mathematician! A Frankenstein-monster, a thing without brains and without heart, too stupid to make a blunder; that turns out results like a corn-sheller, and never grows any wiser or better, though it grind a thousand bushels of them! (Oliver Wendell Holmes)
No, no, you're not thinking, you're just being logical. (Niels Bohr)
Never trust a computer you can't throw out a window. (Steve Wozniak)
If you put tomfoolery into a computer, nothing comes out but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow ennobled and no one dares criticize it. (Pierre Gallois)
A computer is essentially a trained squirrel: acting on reflex, thoughtlessly running back and forth and storing away nuts until some other stimulus makes it do something else. (Ted Nelson)
Software people would never drive to the office if building engineers and automotive engineers were as cavalier about buildings and autos as the software "engineer" is about his software. (Henry Baker)
Since the invention of the microprocessor, the cost of moving a byte of information around has fallen on the order of 10-million-fold. Never before in the human history has any product or service gotten 10 million times cheaper-much less in the course of a couple decades. That's as if a 747 plane, once at $150 million a piece, could now be bought for about the price of a large pizza. (Michael Rothschild)
Physics is the universe's operating system. (Steven R Garman)
If patterns of ones and zeros were like patterns of human lives and death, if everything about an individual could be represented in a computer record by a long string of ones and zeros, then what kind of creature would be represented by a long string of lives and deaths? (Thomas Pynchon)
Man is the best computer we can put aboard a spacecraft...and the only one that can be mass produced with unskilled labor. (Wernher von Braun)
I've noticed lately that the paranoid fear of computers becoming intelligent and taking over the world has almost entirely disappeared from the common culture. Near as I can tell, this coincides with the release of MS-DOS. (Larry DeLuca)
A friend of the Feline reports that Big Blue marketing and sales personnel have been strictly forbidden to use the word "mainframe." Instead, in an attempt to distance themselves from the dinosaur, they're to use the more PC-friendly phrase "large enterprise server." If that's the case, the Katt retorted, they should also refer to "dumb terminals" as "intelligence-challenged workstations." (Spencer Katt)
The computer is no better than its program. (Elting Elmore Morison)
There is no doubt that human survival will continue to depend more and more on human intellect and technology. It is idle to argue whether this is good or bad. The point of no return was passed long ago, before anyone knew it was happening. (Theodosius Dobzansky)
Man is the lowest-cost, 150-pound, nonlinear, all-purpose computer system which can be mass-produced by unskilled labor. (NASA in 1965)
A computer lets you make more mistakes faster than any invention in human history - with the possible exceptions of handguns and tequila. (Mitch Radcliffe)
Lisp has all the visual appeal of oatmeal with fingernail clippings mixed in. (((Larry Wall)))
If Java had true garbage collection, most programs would delete themselves upon execution. (Robert Sewell)
I fear that the new object-oriented systems may suffer the fate of LISP, in that they can do many things, but the complexity of the class hierarchies may cause them to collapse under their own weight. (Bill Joy)
Using Java for serious jobs is like trying to take the skin off a rice pudding wearing boxing gloves. (Tel Hudson)
Anybody who thinks a little 9,000-line program [ Java ] that's distributed free and can be cloned by anyone is going to affect anything we do at Microsoft has his head screwed on wrong. (Bill Gates)
Take a cup of coffee and add three drops of poison and what have you got? Microsoft J++. (Scott McNealy)
Of all the great programmers I can think of, I know of only one who would voluntarily program in Java. And of all the great programmers I can think of who don't work for Sun, on Java, I know of zero. (Paul Graham)
Using PL/I must be like flying a plane with 7,000 buttons, switches, and handles to manipulate in the cockpit. (Edsger Dijkstra)
Thirty years from now nobody will remember Java and everyone will remember Microsoft. (Charles Simonyi)
If you want to shoot yourself in the foot, Perl will give you ten bullets and a laser scope, then stand by and cheer you on. (Teodor Zlatanov)
Java is the most distressing thing to happen to computing since MS-DOS. (Alan Kay)
Your development cycle is much faster because Java is interpreted. The compile-link-load-test-crash-debug cycle is obsolete. (James Gosling)
Actually, I'm trying to make Ruby natural, not simple. (Yukihiro "Matz" Matsumoto)
Historically, languages designed for other people to use have been bad: Cobol, PL/I, Pascal, Ada, C++. The good languages have been those that were designed for their own creators: C, Perl, Smalltalk, Lisp. (Paul Graham)
When FORTRAN has been called an infantile disorder, PL/I, with its growth characteristics of a dangerous tumor, could turn out to be a fatal disease. (Edsger Dijkstra)
The three characteristics of Perl programmers: mundaneness, sloppiness, and fatuousness. (Xah Lee)
PL/I, "the fatal disease", belongs more to the problem set than to the solution set. (Edsger Dijkstra)
C treats you like a consenting adult. Pascal treats you like a naughty child. Ada treats you like a criminal. (Bruce Powel Douglass)
Java is, in many ways, C++--. (Michael Feldman)
Perl has grown from being a very good scripting language into something like a cross between a universal solvent and an open-ended Mandarin where new ideograms are invented hourly. (Jeffrey Davis)
LISP is like a ball of mud. You can add any amount of mud to it and it still looks like a ball of mud. (Joel Moses)
Perl is like vise grips. You can do anything with it but it is the wrong tool for every job. (Bruce Eckel)
I view the JVM as just another architecture that Perl ought to be ported to. (That, and the Underwood typewriter...) (Larry Wall)
I have found that humans often use Smalltalk during awkward moments. ("Data")
Perl: The only language that looks the same before and after RSA encryption. (Keith Bostic)
PL/I and Ada started out with all the bloat, were very daunting languages, and got bad reputations (deservedly). C++ has shown that if you slowly bloat up a language over a period of years, people don't seem to mind as much. (James Hague)
C++ is history repeated as tragedy. Java is history repeated as farce. (Scott McKay)
A Lisp programmer knows the value of everything, but the cost of nothing. (Alan Perlis)
Claiming Java is easier than C++ is like saying that K2 is shorter than Everest. (Larry O'Brien)
In the best possible scenario Java will end up mostly like Eiffel but with extra warts because of insufficiently thoughtful early design. (Matthew B Kennel)
Java, the best argument for Smalltalk since C++. (Frank Winkler)
[Perl] is the sanctuary of dunces. The godsend for brainless coders. The means and banner of sysadmins. The lingua franca of trial-and-error hackers. The song and dance of stultified engineers. (Xah Lee)
Java is the SUV of programming tools. (Philip Greenspun)
Going from programming in Pascal to programming in C, is like learning to write in Morse code. (J P Candusso)
Arguing that Java is better than C++ is like arguing that grasshoppers taste better than tree bark. (Thant Tessman)
I think conventional languages are for the birds. They're just extensions of the von Neumann computer, and they keep our noses in the dirt of dealing with individual words and computing addresses, and doing all kinds of silly things like that, things that we've picked up from programming for computers; we've built them into programming languages; we've built them into Fortran; we've built them in PL/1; we've built them into almost every language. (John Backus)
C++: Simula in wolf's clothing. (Bjarne Stroustrup)
Perl is a car with an autopilot designed by insane aliens. (Jeff Smith)
Like the creators of sitcoms or junk food or package tours, Java's designers were consciously designing a product for people not as smart as them. (Paul Graham)
High thoughts must have a high language. (Aristophanes)
There are undoubtedly a lot of very intelligent people writing Java, better programmers than I will ever be. I just wish I knew why. (Steve Holden)
The more of an IT flavor the job descriptions had, the less dangerous was the company. The safest kind were the ones that wanted Oracle experience. You never had to worry about those. You were also safe if they said they wanted C++ or Java developers. If they wanted Perl or Python programmers, that would be a bit frightening. If I had ever seen a job posting looking for Lisp hackers, I would have been really worried. (Paul Graham)
If you learn to program in Java, you'll never be without a job! (Patricia Seybold in 1998)
Anyone could learn Lisp in one day, except that if they already knew Fortran, it would take three days. (Marvin Minsky)
Knowing the syntax of Java does not make someone a software engineer. (John Knight)
Javascript is the duct tape of the Internet. (Charlie Campbell)
To Our IBM Home Office Staff (to the tune of Polly Wolly Doodle)
In Old New York, at 270 Broadway,
They're working night and day.
Our IBM fine girls and men --
All tasks to them, mere play.
Our President Watson's loyal band,
Well-serving our Four Lines.
All faithful workers, heart and hand,
Two hundred brilliant minds. (IBM Song Book)
Jan 01, 2019 | softwareengineering.stackexchange.com
Is premature optimization really the root of all evil?
A colleague of mine today committed a class called ThreadLocalFormat, which basically moved instances of Java Format classes into a thread local, since they are not thread safe and "relatively expensive" to create. I wrote a quick test and calculated that I could create 200,000 instances a second, asked him was he creating that many, to which he answered "nowhere near that many". He's a great programmer and everyone on the team is highly skilled so we have no problem understanding the resulting code, but it was clearly a case of optimizing where there is no real need. He backed the code out at my request. What do you think? Is this a case of "premature optimization" and how bad is it really? – Craig Day
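For readers who have not seen the pattern, here is a minimal sketch of the kind of class being described (the class name comes from the question; the concrete format string and field names are my own guesses):

```java
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;

// One SimpleDateFormat per thread, because DateFormat instances are not thread safe.
public final class ThreadLocalFormat {
    private static final ThreadLocal<DateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

    private ThreadLocalFormat() {}

    public static String format(Date date) {
        return FORMAT.get().format(date);   // reuses the calling thread's cached instance
    }
}
```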
I think you need to distinguish between premature optimization and unnecessary optimization. Premature to me suggests 'too early in the life cycle', whereas unnecessary suggests 'does not add significant value'. IMO, a requirement for late optimization implies shoddy design. – Shane MacLaughlin, Oct 17 '08

It's important to keep in mind the full quote:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
What this means is that, in the absence of measured performance issues you shouldn't optimize because you think you will get a performance gain. There are obvious optimizations (like not doing string concatenation inside a tight loop) but anything that isn't a trivially clear optimization should be avoided until it can be measured.
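For instance, the "obvious" case mentioned above looks roughly like this (a generic illustration, not code from the thread):

```java
import java.util.List;

class ConcatDemo {
    // Repeated String concatenation copies the whole string on every pass: roughly O(n^2) work.
    static String slowJoin(List<String> words) {
        String out = "";
        for (String w : words) out += w + " ";
        return out;
    }

    // The trivially clear optimization: accumulate in a StringBuilder and build the String once.
    static String fastJoin(List<String> words) {
        StringBuilder sb = new StringBuilder();
        for (String w : words) sb.append(w).append(' ');
        return sb.toString();
    }
}
```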
The biggest problems with "premature optimization" are that it can introduce unexpected bugs and can be a huge time waster. – Scott Dorman

Being from Donald Knuth, I wouldn't be surprised if he had some evidence to back it up. BTW, source: Structured Programming with go to Statements, ACM Computing Surveys, Vol. 6, No. 4, Dec. 1974, p. 268. citeseerx.ist.psu.edu/viewdoc/ – mctylr, Mar 1 '10

Premature micro-optimizations are the root of all evil, because micro-optimizations leave out context. They almost never behave the way they are expected.
What are some good early optimizations in the order of importance:
- Architectural optimizations (application structure, the way it is componentized and layered)
- Data flow optimizations (inside and outside of application)
Some mid development cycle optimizations:
- Data structures, introduce new data structures that have better performance or lower overhead if necessary
- Algorithms (now it's a good time to start deciding between quicksort3 and heapsort ;-) )
Some end development cycle optimizations
- Finding code hotspots (tight loops that should be optimized)
- Profiling based optimizations of computational parts of the code
- Micro optimizations can be done now as they are done in the context of the application and their impact can be measured correctly.
Not all early optimizations are evil, micro optimizations are evil if done at the wrong time in the development life cycle , as they can negatively affect architecture, can negatively affect initial productivity, can be irrelevant performance wise or even have a detrimental effect at the end of development due to different environment conditions.
If performance is a concern (and it always should be), always think big. Performance is a bigger picture, and not about things like "should I use int or long?". Go top-down when working on performance, instead of bottom-up. – Pop Catalin
"Optimization: Your Worst Enemy", by Joseph M. Newcomer: flounder.com/optimization.htm – Ron Ruble May 23 '17 at 21:50Jeff Atwood , 2008-10-17 09:29:14
54optimization without first measuring is almost always premature.
I believe that's true in this case, and true in the general case as well. share improve this answer follow answered Oct 17 '08 at 9:29 community wiki
Jeff AtwoodBengie ,
Here Here! Unconsidered optimization makes code un-maintainable and is often the cause of performance problems. e.g. You multi-thread a program because you imagine it might help performance, but, the real solution would have been multiple processes which are now too complex to implement. – James Anderson May 2 '12 at 5:01John Mulder , 2008-10-17 08:42:58
45Optimization is "evil" if it causes:
- less clear code
- significantly more code
- less secure code
- wasted programmer time
In your case, it seems like a little programmer time was already spent, the code was not too complex (a guess from your comment that everyone on the team would be able to understand it), and the code is a bit more future-proof (being thread safe now, if I understood your description). Sounds like only a little evil. :) – John Mulder

Only if the cost, in terms of your bullet points, is greater than the amortized value delivered. Often complexity introduces value, and in these cases one can encapsulate it such that it passes your criteria. It also gets reused and continues to provide more value. – Shane MacLaughlin, Oct 17 '08

I'm surprised that this question is 5 years old, and yet nobody has posted more of what Knuth had to say than a couple of sentences. The couple of paragraphs surrounding the famous quote explain it quite well. The paper that is being quoted is called "Structured Programming with go to Statements", and while it's nearly 40 years old, is about a controversy and a software movement that both no longer exist, and has examples in programming languages that many people have never heard of, a surprisingly large amount of what it said still applies.
Here's a larger quote (from page 8 of the pdf, page 268 in the original):
The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies.
There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail.
Another good bit from the previous page:
My own programming style has of course changed during the last decade, according to the trends of the times (e.g., I'm not quite so tricky anymore, and I use fewer go to's), but the major change in my style has been due to this inner loop phenomenon. I now look with an extremely jaundiced eye at every operation in a critical inner loop, seeking to modify my program and data structure (as in the change from Example 1 to Example 2) so that some of the operations can be eliminated. The reasons for this approach are that: a) it doesn't take long, since the inner loop is short; b) the payoff is real; and c) I can then afford to be less efficient in the other parts of my programs, which therefore are more readable and more easily written and debugged.

(answer by Michael Shaw)

I've often seen this quote used to justify obviously bad code or code that, while its performance has not been measured, could probably be made faster quite easily, without increasing code size or compromising its readability.
In general, I do think early micro-optimizations may be a bad idea. However, macro-optimizations (things like choosing an O(log N) algorithm instead of O(N^2)) are often worthwhile and should be done early, since it may be wasteful to write a O(N^2) algorithm and then throw it away completely in favor of a O(log N) approach.
Note the words may be : if the O(N^2) algorithm is simple and easy to write, you can throw it away later without much guilt if it turns out to be too slow. But if both algorithms are similarly complex, or if the expected workload is so large that you already know you'll need the faster one, then optimizing early is a sound engineering decision that will reduce your total workload in the long run.
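As a concrete toy example of the kind of macro-choice being described (my own illustration, and O(N) versus O(N^2) rather than O(log N), but the same idea): a duplicate check written with a nested loop versus a hash set takes about the same amount of code, so choosing the cheaper one early costs nothing.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class DuplicateCheck {
    // O(N^2): fine for small inputs, painful if the workload grows.
    static boolean hasDuplicateQuadratic(List<String> items) {
        for (int i = 0; i < items.size(); i++)
            for (int j = i + 1; j < items.size(); j++)
                if (items.get(i).equals(items.get(j))) return true;
        return false;
    }

    // O(N) expected time: roughly the same amount of code, no extra complexity to justify.
    static boolean hasDuplicateHashed(List<String> items) {
        Set<String> seen = new HashSet<>();
        for (String s : items)
            if (!seen.add(s)) return true;   // add() returns false if the element was already present
        return false;
    }
}
```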
Thus, in general, I think the right approach is to find out what your options are before you start writing code, and consciously choose the best algorithm for your situation. Most importantly, the phrase "premature optimization is the root of all evil" is no excuse for ignorance. Career developers should have a general idea of how much common operations cost; they should know, for example,
- that strings cost more than numbers
- that dynamic languages are much slower than statically-typed languages
- the advantages of array/vector lists over linked lists, and vice versa
- when to use a hashtable, when to use a sorted map, and when to use a heap
- that (if they work with mobile devices) "double" and "int" have similar performance on desktops (FP may even be faster) but "double" may be a hundred times slower on low-end mobile devices without FPUs;
- that transferring data over the internet is slower than HDD access, HDDs are vastly slower than RAM, RAM is much slower than L1 cache and registers, and internet operations may block indefinitely (and fail at any time).
And developers should be familiar with a toolbox of data structures and algorithms so that they can easily use the right tools for the job.
Having plenty of knowledge and a personal toolbox enables you to optimize almost effortlessly. Putting a lot of effort into an optimization that might be unnecessary is evil (and I admit to falling into that trap more than once). But when optimization is as easy as picking a set/hashtable instead of an array, or storing a list of numbers in double[] instead of string[], then why not? I might be disagreeing with Knuth here, I'm not sure, but I think he was talking about low-level optimization whereas I am talking about high-level optimization.
Remember, that quote is originally from 1974. In 1974 computers were slow and computing power was expensive, which gave some developers a tendency to overoptimize, line-by-line. I think that's what Knuth was pushing against. He wasn't saying "don't worry about performance at all", because in 1974 that would just be crazy talk. Knuth was explaining how to optimize; in short, one should focus only on the bottlenecks, and before you do that you must perform measurements to find the bottlenecks.
Note that you can't find the bottlenecks until you have written a program to measure, which means that some performance decisions must be made before anything exists to measure. Sometimes these decisions are difficult to change if you get them wrong. For this reason, it's good to have a general idea of what things cost so you can make reasonable decisions when no hard data is available.
How early to optimize, and how much to worry about performance depend on the job. When writing scripts that you'll only run a few times, worrying about performance at all is usually a complete waste of time. But if you work for Microsoft or Oracle and you're working on a library that thousands of other developers are going to use in thousands of different ways, it may pay to optimize the hell out of it, so that you can cover all the diverse use cases efficiently. Even so, the need for performance must always be balanced against the need for readability, maintainability, elegance, extensibility, and so on.
Sep 30, 2020 | www.youtube.com
MarquisDeSang, 3 years ago
Awesome video, I loved watching it. In my experience, there are many situations where, like you pointed out, procedural style makes things easier and prevents you from overthinking and overgeneralizing the problem you are trying to tackle. However, in some cases, object-oriented programming removes unnecessary conditions and switches that make your code harder to read. Especially in complex game engines where you deal with a bunch of objects which interact in diverse ways with the environment, other objects and the physics engine. In a procedural style, a program like this would become an unmanageable clutter of flags, variables and switch-statements. Therefore, the statement "Object-Oriented Programming is Garbage" is an unnecessary generalization. Object-oriented programming is a tool programmers can use - and just like you would not use pliers to get a nail into a wall, you should not force yourself to use object-oriented programming to solve every problem at hand. Instead, you use it when it is appropriate and necessary. Nevertheless, I would like to hear how you would realize such a complex program. Maybe I'm wrong and procedural programming is the best solution in any case - but right now, I think you need to differentiate situations which require a procedural style from those that require an object-oriented style.
Gm3dco, 3 months ago
I have been brainwashed with C++ for 20 years. I have recently switched to ANSI C and my mind is now free. Not only do I feel free to create designs that are more efficient and elegant, but I also feel in control of what I do.
Marvin Blum, 4 years ago
You make a lot of very solid points. In your refactoring of the Mapper interface to a type-switch though: what is the point of still using a declared interface here? If you are disregarding extensibility (which would require adding to the internal type switch, rather than conforming a possible new struct to an interface) anyway, why not just make Mapper of type interface{} and add a (failing) default case to your switch?
I recommend installing the Gosublime extension, so your code gets formatted on save and you can use autocompletion. But it looks good enough. I disagree with large functions, though: small ones are just easier to understand and test.
Lucid Moses, 4 years ago
Being the lead designer of a larger app (2M lines of code as of 3 years ago), I like to say we use "C+", because C++ breaks down in the real world. I'm happy to use encapsulation when it fits well, but developers that use OO just for OO-ness' sake get their hands slapped. So in our app small classes like PhoneNumber and SIN make sense. Large classes like UserInterface also work nicely (we talk to specialty hardware like forklifts and such). So, it may be all coded in C++, but basic C developers wouldn't have too much of an issue with most of it. I don't think OO is garbage. It's just that a lot of people use it in inappropriate ways. When all you have is a hammer, everything looks like a nail. So if you use OO on everything then you sometimes end up with garbage.
TekkGnostic, 4 years ago (edited)
Loving the series. The hardest part of actually becoming an efficient programmer is unlearning all the OOP brainwashing. It can be useful for high-level structuring, so I've been starting with C++ then reducing everything into procedural functions and tightly-packed data structs. Just by doing that I reduced static memory use and compiled program size by at least 10-15% (which is a lot when you only have 32 KB). And holy damn, nearly 20 years of C and I never knew you could nest a function within a function; I had to try that right away.
RyuDarragh, 4 years ago
I have a design for a networked audio platform that goes into large buildings (over 11 stories) and can have 250 networked nodes (it uses an E1-style robbed-bit networking system) and 65K addressable points (we implemented 1024 of them for individual control by grouping them). This system ties to a fire panel at one end with a microphone and speakers at the other end. You can manually select any combination of points to page to, or the fire panel can select zones to send alarm messages to. It works in real time with 50 ms built-in delays and has access to 12 audio channels. What really puts the frosting on this cake is, the CPU is an i8051 running at 18 MHz and the code is a bit over 200K bytes that took close to 800K lines of code. In assembler. And it took less than a year from concept to first installation. By one designer/coder. The only OOP in this code was when an infinite loop happened or a bug crept in - "OOPs!"
Y HA, 1 month ago
For many cases OOP has a heavy overhead. But as I learned the hard way, in many others it can save a huge deal of time and be more practical.
LedoCool1, 1 year ago (edited)
There's a way of declaring subfunctions in C++ (I don't know if it works in C). I saw it done by a friend. The general idea is to declare a struct inside which a function can be declared. Since you can declare structs inside functions, you can safely use one as a wrapper for your function-inside-function declaration. This has been done in MSVC, but I believe it will compile in GCC too.
Sep 29, 2020 | www.youtube.com
Learned C# first and that was a huge mistake. Programming got all exciting when I learned C
Sep 29, 2020 | www.youtube.com
Thoughts Feeder , 3 months ago
"Is pixel an object or a group of objects? Is there a container? Do I have to ask a factory to get me a color?" I literally died there... that's literally the best description of my programming for the last 5 years.
Karan Joisher, 2 years ago
It's really sad that we are only taught OOP and no other paradigms in our college. When I discovered programming I had no idea about OOP and it was really easy to build programs. But then I came across OOP: "how to deconstruct a problem statement into nouns for objects and verbs for methods", and it really messed up my thinking. I have been struggling for a long time with how to organize my code on the conceptual level; only recently I realized that OOP is the reason for this struggle. Handmade Hero helped a lot to bring me back to the roots of how programming is done. Remember: never push OOP into areas where it is not needed. You don't have to model your program as real-world entities, because it's not going to run in the real world; it's going to run on a CPU!
Ai, 2 years ago
Esben Olsen, 10 months ago
Learned C# first and that was a huge mistake. Programming got all exciting when I learned C.
I made a game 4 years ago. Then I learned OOP and now I haven't finished any projects since
theb1rd, 5 months ago (edited)
I lost an entire decade to OOP, and agree with everything Casey said here. The code I wrote in my first year as a programmer (before OOP) was better than the code I wrote in my 15th year (OOP expert). It's a shame that students are still indoctrinated into this regressive model.
John Appleseed, 2 years ago
Unfortunately, when I first started programming, I encountered nothing but tutorials that jumped right into OOP like it was the only way to program. And of course I didn't know any better! So much friction has been removed from my process since I've broken free from that state of mind. It's easier to judge when objects are appropriate when you don't think they're always appropriate!
judged by time, 1 year ago
"It's not that OOP is bad or even flawed. It's that object-oriented programming isn't the fundamental particle of computing that some people want it to be. When blindly applied to problems below an arbitrary complexity threshold, OOP can be verbose and contrived, yet there's often an aesthetic insistence on objects for everything all the way down. That's too bad, because it makes it harder to identify the cases where an object-oriented style truly results in an overall simplicity and ease of understanding." - https://prog21.dadgum.com/156.html
Chen Huang, 3 years ago
judged by time, 1 year ago
The first language I was taught was Java, so I was taught OOP from the get-go. Removing the OOP mindset was actually really easy, but what was left stuck in my head is the practice of having small functions and making your code look artificially "clean". So I am in a constant struggle of refactoring and not refactoring, knowing that over-refactoring will unnecessarily complicate my codebase if it gets big. Even after removing my OOP mindset, my emphasis is still on the code itself, and that is much harder to cure in comparison.
"I want to emphasize that the problem with object-oriented programming is not the concept that there could be an object. The problem with it is the fact that you're orienting your program, the thinking, around the object, not the function. So it's the orientation that's bad about it, NOT whether you end up with an object. And it's a really important distinction to understand."
joseph fatur, 2 years ago
Nicely stated, HH. On YouTube, MPJ, Brian Will, and Jonathan Blow also address this matter. OOP sucks and can be largely avoided. Even "reuse" is overdone. Straight-line code probably results in faster execution but slightly greater memory use. But memory is cheap and the resultant code is much easier to follow. Learn a little assembly language. x86 is fascinating and you'll know what the computer is actually doing.
Hao Wu, 1 year ago
I think schools should teach at least 3 languages / paradigms: C for procedural, Java for OOP, and Scheme (or any Lisp-style language) for functional paradigms.
J. Bradley Bulsterbaum, 10 months ago
bbkane, 5 months ago (edited)
It sounds to me like you're describing the JavaScript framework programming that people learn to start from. It hasn't seemed to me like object-oriented programmers who aren't doing web stuff have any problem directly describing an algorithm and then translating it into imperative or functional or just direct instructions for a computer. It's quite possible to use object-oriented languages, or languages that support object-oriented features, to directly command a computer.
I dunno man. Object-oriented programming can (sometimes badly) solve real problems - notably polymorphism. For example, suppose you have a Dog and a Cat sprite and they both have a move method. The "non-OO" way Casey does this is using tagged unions - and that was not an obvious solution when I first saw it. Quite glad I watched that episode though, it's very interesting! Also see this tweet thread from Casey - https://twitter.com/cmuratori/status/1187262806313160704
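For readers who have not watched the episode: the tagged-union idea the commenter mentions looks roughly like the following. This is a hedged Python sketch for illustration, not Casey's actual code; the Dog/Cat behaviours are invented.

```python
from dataclasses import dataclass

DOG, CAT = "dog", "cat"   # the "tag" that says which kind of sprite this is

@dataclass
class Sprite:
    kind: str     # DOG or CAT
    x: float
    y: float

def move(sprite: Sprite, dx: float, dy: float) -> None:
    # One function owns the behaviour; branching on the tag replaces
    # virtual dispatch through a Dog/Cat class hierarchy.
    if sprite.kind == DOG:
        sprite.x += dx
        sprite.y += dy
    elif sprite.kind == CAT:
        sprite.x += dx * 0.5   # invented rule: cats wander half as far
        sprite.y += dy * 0.5

sprites = [Sprite(DOG, 0.0, 0.0), Sprite(CAT, 0.0, 0.0)]
for s in sprites:
    move(s, 10.0, 10.0)
print(sprites)
```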
Sep 29, 2020 | en.wikipedia.org
Geovane Piccinin, PHP Programmer (2015-present), answered November 23, 2018
Sourav Datta, a programmer trying to find the ultimate source code of life, answered August 6, 2015

My deepest feeling after crossing so many discussions and books about this is a sincere YES.
Without entering into any technical details about it, because even after some years I don't find myself qualified to talk about this (is there someone who really understands it completely?), I would argue that the main problem is that every time I read something about OOP it is trying to justify why it is "so good".

Then a huge number of examples are shown, many arguments are made, and many expectations are created.

It is not stated simply, like this: "oh, this is another programming paradigm." It is usually stated that: "This is a fantastic paradigm, it is better, it is simpler, it permits so many interesting things, ... it is this, it is that," and so on.

What happens is that, based on the "good" arguments, it creates the expectation that things produced with OOP should be very good. But no one really knows if they are doing it right. They say: the problem is not the paradigm, it is you who are not experienced yet. When will I be experienced enough?

Are you following me? My feeling is that this commonplace of saying it is so good, while you never know how good you are actually being, makes all of us very frustrated and confused.

Yes, it is a great paradigm as long as you see it as just another paradigm and drop all the expectations and the excessive claims that it is so good.

It seems to me that the real problem is the huge propaganda around it, not the paradigm itself. Again, if it made a more humble claim about its advantages and how difficult they are to achieve, people would be much less frustrated.
In recent years, OOP is indeed being regarded as an overrated paradigm by many. If we look at the most recent famous languages like Go and Rust, they do not have the traditional OO approaches in language design. Instead, they choose to pack data into something akin to structs in C and provide ways to specify "protocols" (similar to interfaces/abstract methods) which can work on that packed data...
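To make that separation of packed data and "protocols" concrete without leaving Python, here is a hedged sketch using a plain data record plus a structural protocol (typing.Protocol). It is only an analogy to the Go/Rust approach, not the actual mechanism of either language; the types and methods are invented.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Point:            # plain packed data, akin to a C struct
    x: float
    y: float

class Scalable(Protocol):
    # Any type with a matching scale() method satisfies the protocol;
    # no inheritance from a common base class is required.
    def scale(self, factor: float) -> "Scalable": ...

@dataclass
class Circle:
    center: Point
    radius: float

    def scale(self, factor: float) -> "Circle":
        return Circle(self.center, self.radius * factor)

def double(shape: Scalable) -> Scalable:
    return shape.scale(2.0)

print(double(Circle(Point(0.0, 0.0), 1.5)))
```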
Apr 20, 2013 | cwsof.com
The last decade has seen object-oriented programming (OOP) dominate the programming world. While there is no doubt that there are benefits to OOP, some programmers question whether OOP has been overrated and ponder whether alternate styles of coding are worth pursuing. To even suggest that OOP has in some way failed to produce the quality software we all desire could in some instances cost a programmer his job, so why even ask the question?
Quality software is the goal.
Likely all programmers can agree that we all want to produce quality software. We would like to be able to produce software faster, make it more reliable and improve its performance. So with such goals in mind, shouldn't we be willing to at least consider all possibilities? Also, it is reasonable to conclude that no single tool can match all situations. For example, while few programmers today would even consider using assembler, there are times when low-level coding such as assembler could be warranted. The old adage applies: "the right tool for the job". So it is fair to pose the question, "Has OOP been overused to the point of trying to make it some kind of universal tool, even when it may not fit a job very well?"
Others are asking the same question.
I won't go into detail about what others have said about object oriented programming, but I will simply post some links to some interesting comments by others about OOP.
Richard Mansfield
http://www.4js.com/files/documents/products/genero/WhitePaperHasOOPFailed.pdf
Intel Blog: by Asaf Shelly
http://software.intel.com/en-us/blogs/2008/08/22/flaws-of-object-oriented-modeling/
Usenix article: by Stephen C. Johnson (Melismatic Software)
http://static.usenix.org/publications/library/proceedings/sf94/johnson.html
Department of Computer. Science and IT, University of Jammu
http://www.csjournals.com/IJCSC/PDF1-2/9..pdf
An aspect which may be overlooked.
I have watched a number of videos online and read a number of articles by programmers about different concepts in programming. When OOP is discussed they talk about things like modeling the real world, abstractions, etc. But two things are often missing in such discussions, which I will discuss here. These two aspects greatly affect programming, but may not be discussed.
First, what is programming really? Programming is a method of using some kind of human-readable language to generate machine code (or scripts eventually read by machine code) so one can make a computer do a task. Looking back at all the years I have been programming, the most profound thing I have ever learned about programming was machine language. Seeing what a CPU is actually doing with our programs provides a great deal of insight. It helps one understand why integer arithmetic is so much faster than floating point. It helps one understand what graphics is really all about (simply moving around a lot of pixels, or blocks of four bytes). It helps one understand what a procedure really must do to have parameters passed. It helps one understand why a string is simply a block of bytes (or double bytes for Unicode). It helps one understand why we use bytes so much, and what bit flags and pointers are.
When one looks at OOP from the perspective of machine code and all the work a compiler must do to convert things like classes and objects into something the machine can work with, then one very quickly begins to see that OOP adds significant overhead to an application. Also if a programmer comes from a background of working with assembler, where keeping things simple is critical to writing maintainable code, one may wonder if OOP is improving coding or making it more complicated.
Second, is the often said rule of "keep it simple". This applies to programming. Consider classic Visual Basic. One of the reasons it was so popular was that it was so simple compared to other languages, say C for example. I know what is involved in writing a pure old fashioned WIN32 application using the Windows API and it is not simple, nor is it intuitive. Visual Basic took much of that complexity and made it simple. Now Visual Basic was sort of OOP based, but actually mostly in the GUI command set. One could actually write all the rest of the code using purely procedural style code and likely many did just that. I would venture to say that when Visual Basic went the way of dot.net, it left behind many programmers who simply wanted to keep it simple. Not that they were poor programmers who didn't want to learn something new, but that they knew the value of simple and taking that away took away a core aspect of their programming mindset.
Another aspect of simple is also seen in the syntax of some programming languages. For example, BASIC has stood the test of time and continues to be the language of choice for many hobby programmers. If you don't think that BASIC is still alive and well, take a look at this extensive list of different BASIC programming languages.
http://basic.mindteq.com/index.php?i=full
While some of these BASICs are object oriented, many of them are also procedural in nature. But the key here is simplicity. Natural readable code.
Simple and low level can work together.
Now consider this. What happens when you combine a simple language with the power of machine language ? You get something very powerful. For example, I write some very complex code using purely procedural style coding, using BASIC, but you may be surprised that my appreciation for machine language (or assembler) also comes to the fore. For example, I use the BASIC language GOTO and GOSUB. How some would cringe to hear this. But these constructs are native to machine language and very useful, so when used properly they are powerful even in a high level language. Another example is that I like to use pointers a lot. Oh how powerful pointers are. In BASIC I can create variable length strings (which are simply a block of bytes) and I can embed complex structures into those strings by using pointers. In BASIC I use the DIM AT command, which allows me to dimension an array of any fixed data type or structure within a block of memory, which in this case happens to be a string.
Appreciating machine code also affects my view of performance. Every CPU cycle counts. This is one reason I use BASIC's GOSUB command. It allows me to write some reusable code within a procedure, without the need to call an external routine and pass parameters. The performance improvement is significant. Performance also affects how I tackle a problem. While I want code to be simple, I also want it to run as fast as possible, so amazingly some of the best performance tips have to do with keeping code simple, with minimal overhead, and also understanding what the machine code must do to accomplish what I have written in a higher-level language. For example, in BASIC I have a number of options for the SELECT CASE structure. One option can optimize the code using jump tables (the compiler handles this), and one option can optimize if the values are only Integers or DWords. But even then the compiler can only do so much. What happens if a large SELECT CASE has to compare dozens and dozens of string constants to a variable-length string being tested? If this code is part of a parser, then it really can slow things down. I had this problem in a scripting language I created for an OpenGL-based 3D custom control. The 3D scripting language is text based and has to be interpreted to generate 3D OpenGL calls internally. I didn't want the scripting language to bog things down. So what would I do?
The solution was simple. Appreciating how the compiled machine code would have to compare so many bytes across so many string constants, one quickly realizes that the compiler alone could not solve this. I had to think like an assembler programmer, but still use a high-level language. The solution was so simple, it was surprising. I could use a pointer to read the first byte of the string being parsed. Since the first character would always be a letter in the scripting language, this meant there were 26 possible outcomes. The SELECT CASE simply tested the first character's value (converted to a number), which executes fast. Then for each letter (A,B,C, ) I would only compare the parsed word to the scripting-language keywords which started with that letter. This in essence improved speed by 26-fold (or better).
The fastest solutions are often very simple to code. No complex classes needed here. Just a simple procedure to read through a text string using the simplest logic I could find. The procedure is a little more complex than what I describe, but this is the core logic of the routine.
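The first-character dispatch described above translates readily to other languages. Here is a hedged Python sketch of the same idea; the scripting-language keywords are invented, and the original is BASIC using SELECT CASE and pointers rather than a dictionary.

```python
# Invented keywords for a small 3D scripting language, grouped by first letter
# so each parsed word is only compared against keywords sharing its initial.
KEYWORDS = ["AMBIENT", "BOX", "CAMERA", "COLOR", "LIGHT", "ROTATE", "SCALE", "SPHERE"]

BY_FIRST_LETTER = {}
for kw in KEYWORDS:
    BY_FIRST_LETTER.setdefault(kw[0], []).append(kw)

def is_keyword(word):
    # One cheap test on the first character narrows the search to a handful
    # of candidates instead of scanning the whole keyword table.
    if not word:
        return False
    word = word.upper()
    return word in BY_FIRST_LETTER.get(word[0], [])

print(is_keyword("camera"), is_keyword("cube"))
```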
From experience, I have found that a purely procedural style of coding, using a language which is natural and simple (BASIC), while using constructs of the language which are closer to pure machine (or assembler) in the language produces smaller and faster applications which are also easier to maintain.
Now I am not saying that all OOP is bad. Nor am I saying that OOP never has a place in programming. What I am saying, though, is that it is worth considering the possibility that OOP is not always the best solution and that there are other choices.
Here are some of my other blog articles which may interest you if this one interested you:
Classic Visual Basic's end marked a key change in software development.
Is software development too complex today ?
BASIC, OOP and Learning programming in the 21st century !
Why BASIC ?
Reliable Software !
Maybe a shift in software development is required ?
Stop being a programmer for a moment !
Sep 29, 2020 | beinghappyprogramming.wordpress.com
Posted on January 26, 2013 by silviomarcovilla
Yes it is. For application code at least, I'm pretty sure.
Not claiming any originality here; people smarter than me already noticed this fact ages ago. Also, don't misunderstand me, I'm not saying that OOP is bad. It probably is the best variant of procedural programming.

Maybe the term OOP is overused to describe anything that ends up in OO systems.

Things like VMs, garbage collection, type safety, modules, generics or declarative queries (LINQ) are a given, but they are not inherently object oriented.
I think these things (and others) are more relevant than the classic three principles.

Inheritance
Current advice is usually to prefer composition over inheritance. I totally agree.

Polymorphism
This is very, very important. Polymorphism cannot be ignored, but you don't write lots of polymorphic methods in application code. You implement the occasional interface, but not every day. Mostly you use them, because polymorphism is what you need to write reusable components, much less to use them.

Encapsulation
Encapsulation is tricky. Again, if you ship reusable components, then method-level access modifiers make a lot of sense. But if you work on application code, such fine-grained encapsulation can be overkill. You don't want to struggle over the choice between internal and public for that fantastic method that will only ever be called once. Except in test code maybe. Hiding all implementation details in private members while retaining nice simple tests can be very difficult and not worth the trouble (InternalsVisibleTo being the least trouble, abstruse mock objects bigger trouble, and Reflection-in-tests Armageddon).

Nice, simple unit tests are just more important than encapsulation for application code, so hello public! So, my point is: if most programmers work on applications, and application code is not very OO, why do we always talk about inheritance at the job interview? 🙂
PS
If you think about it, C# hasn't been pure object oriented since the beginning (think delegates) and its evolution is a trajectory from OOP to something else, something multiparadigm.
Nov 22, 2019 | stackoverflow.com
Peter Mortensen, Mar 4 '17 at 22:00
If you want to refer to a global variable in a function, you can use the global keyword to declare which variables are global. You don't have to use it in all cases (as someone here incorrectly claims): if the name referenced in an expression cannot be found in the local scope, or in the scopes of the enclosing functions in which this function is defined, it is looked up among global variables.
However, if you assign to a new variable not declared as global in the function, it is implicitly declared as local, and it can overshadow any existing global variable with the same name.
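A small runnable Python example of the scoping rules just described (the variable names are arbitrary):

```python
counter = 0          # a module-level (global) variable

def read_it():
    # No assignment here, so the lookup falls through to the global scope;
    # no "global" declaration is needed just to read the value.
    return counter + 1

def shadow_it():
    # Assignment without "global" creates a new local variable that
    # shadows the global one inside this function only.
    counter = 99
    return counter

def update_it():
    # To rebind the module-level name, declare it global first.
    global counter
    counter += 1

print(read_it())    # 1
print(shadow_it())  # 99; the global counter is untouched
update_it()
print(counter)      # 1
```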
Also, global variables are useful, contrary to some OOP zealots who claim otherwise - especially for smaller scripts, where OOP is overkill.
J S, Jan 8 '09
Absolutely re. zealots. Most Python users use it for scripting and create little functions to separate out small bits of code. – Paul Uszak Sep 22 at 22:57
Sep 12, 2020 | www.sciencedirect.com
https://www.sciencedirect.com/science/article/abs/pii/0096055178900413
A statistical analysis of syntax errors - ScienceDirect
Related work (search-result excerpts; only the recoverable citations and fragments are kept):

- [PDF] Error log analysis in C programming language courses
- [BOOK] J. J. Horning, Programming languages, 1979 (books.google.com)
- S. O. Anderson, R. C. Backhouse, E. H. Bugge, "An assessment of locally least-cost error recovery", The Computer, 1983 (academic.oup.com): in the former case one anticipates the possibility of a missing semicolon; in contrast, a missing comma is not anticipated
- J. Segal, K. Ahmad, M. Rogers, "The role of systematic errors in developmental studies of programming language learners", Journal of Educational, 1992 (journals.sagepub.com): errors were classified by their surface characteristics into single-token categories; students experienced considerable difficulties with using semicolons, and with the specific rule of ALGOL 68 syntax concerning the role of the semicolon
- C. Stirling, "Follow set error recovery", Software: Practice and Experience, 1985 (Wiley Online Library)
- M. C. Jadud, "A first look at novice compilation behaviour using BlueJ", Computer Science Education, 2005 (Taylor & Francis)
- A. Repenning, "Making programming more conversational", 2011 IEEE Symposium on Visual Languages (ieeexplore.ieee.org): "Miss one semicolon in a C program and the program may no longer work at all"; visual programming environments with auto-completion prevent syntactic mistakes such as missing semicolons or typos

Recoverable fragments from the abstracts: over 14% of the faults occurring in topps programs during the second half of the experiment were still semicolon faults (compared to 1% for toppsii), and approximately one-fourth of all original syntax errors in the Pascal sample were missing semicolons or the use of a comma in place of a semicolon.
Sep 09, 2020 | en.wikipedia.org
Criticism
The OOP paradigm has been criticised for a number of reasons, including not meeting its stated goals of reusability and modularity, [36] [37] and for overemphasizing one aspect of software design and modeling (data/objects) at the expense of other important aspects (computation/algorithms). [38] [39]
Luca Cardelli has claimed that OOP code is "intrinsically less efficient" than procedural code, that OOP can take longer to compile, and that OOP languages have "extremely poor modularity properties with respect to class extension and modification", and tend to be extremely complex. [36] The latter point is reiterated by Joe Armstrong , the principal inventor of Erlang , who is quoted as saying: [37]
The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
A study by Potok et al. has shown no significant difference in productivity between OOP and procedural approaches. [40]
Christopher J. Date stated that critical comparison of OOP to other technologies, relational in particular, is difficult because of lack of an agreed-upon and rigorous definition of OOP; [41] however, Date and Darwen have proposed a theoretical foundation on OOP that uses OOP as a kind of customizable type system to support RDBMS . [42]
In an article Lawrence Krubner claimed that compared to other languages (LISP dialects, functional languages, etc.) OOP languages have no unique strengths, and inflict a heavy burden of unneeded complexity. [43]
Alexander Stepanov compares object orientation unfavourably to generic programming : [38]
I find OOP technically unsound. It attempts to decompose the world in terms of interfaces that vary on a single type. To deal with the real problems you need multisorted algebras -- families of interfaces that span multiple types. I find OOP philosophically unsound. It claims that everything is an object. Even if it is true it is not very interesting -- saying that everything is an object is saying nothing at all.
Paul Graham has suggested that OOP's popularity within large companies is due to "large (and frequently changing) groups of mediocre programmers". According to Graham, the discipline imposed by OOP prevents any one programmer from "doing too much damage". [44]
Leo Brodie has suggested a connection between the standalone nature of objects and a tendency to duplicate code [45] in violation of the don't repeat yourself principle [46] of software development.
Steve Yegge noted that, as opposed to functional programming : [47]
Object Oriented Programming puts the Nouns first and foremost. Why would you go to such lengths to put one part of speech on a pedestal? Why should one kind of concept take precedence over another? It's not as if OOP has suddenly made verbs less important in the way we actually think. It's a strangely skewed perspective.
Rich Hickey , creator of Clojure , described object systems as overly simplistic models of the real world. He emphasized the inability of OOP to model time properly, which is getting increasingly problematic as software systems become more concurrent. [39]
Eric S. Raymond , a Unix programmer and open-source software advocate, has been critical of claims that present object-oriented programming as the "One True Solution", and has written that object-oriented programming languages tend to encourage thickly layered programs that destroy transparency. [48] Raymond compares this unfavourably to the approach taken with Unix and the C programming language . [48]
Rob Pike , a programmer involved in the creation of UTF-8 and Go , has called object-oriented programming "the Roman numerals of computing" [49] and has said that OOP languages frequently shift the focus from data structures and algorithms to types . [50] Furthermore, he cites an instance of a Java professor whose "idiomatic" solution to a problem was to create six new classes, rather than to simply use a lookup table . [51]
Sep 09, 2020 | medium.com
The Reference Problem
For efficiency's sake, Objects are passed to functions NOT by their value but by reference.
What that means is that functions will not pass the Object, but instead pass a reference or pointer to the Object.
If an Object is passed by reference to an Object Constructor, the constructor can put that Object reference in a private variable which is protected by Encapsulation.
But the passed Object is NOT safe!
Why not? Because some other piece of code has a pointer to the Object, viz. the code that called the Constructor. It MUST have a reference to the Object; otherwise it couldn't have passed it to the Constructor.
The Reference Solution

The Constructor will have to Clone the passed-in Object. And not a shallow clone but a deep clone, i.e. every object that is contained in the passed-in Object, and every object in those objects, and so on and so on.
So much for efficiency.
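A hedged Python sketch of the problem and the deep-clone workaround the article describes (the class and data are invented for illustration):

```python
import copy

class Account:
    def __init__(self, transactions):
        # Defensive deep copy: the caller keeps its own reference to the list
        # it passed in, so without this copy it could still mutate the
        # "encapsulated" state behind the object's back.
        self._transactions = copy.deepcopy(transactions)

    def total(self):
        return sum(self._transactions)

history = [10, 20]
acct = Account(history)
history.append(1_000_000)   # caller mutates its own list...
print(acct.total())         # ...but the account still reports 30
```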
And here's the kicker. Not all objects can be Cloned. Some have Operating System resources associated with them making cloning useless at best or at worst impossible.
And EVERY single mainstream OO language has this problem.
Goodbye, Encapsulation.
Aug 27, 2020 | rubberduckdebugging.com
The rubber duck debugging method is as follows:
- Beg, borrow, steal, buy, fabricate or otherwise obtain a rubber duck (bathtub variety).
- Place rubber duck on desk and inform it you are just going to go over some code with it, if that's all right.
- Explain to the duck what your code is supposed to do, and then go into detail and explain your code line by line.
- At some point you will tell the duck what you are doing next and then realise that that is not in fact what you are actually doing. The duck will sit there serenely, happy in the knowledge that it has helped you on your way.
Note : In a pinch a coworker might be able to substitute for the duck, however, it is often preferred to confide mistakes to the duck instead of your coworker.
Original Credit : ~Andy from lists.ethernal.org
FAQs
If ducks are so smart, why don't we just let the ducks do all the work? It would be wonderful if this were true, but the fact is that most ducks prefer to take a mentoring role. There are a few ducks however that do choose to code, but these are the ducks that nobody hears about because they are selected for secret government projects that are highly classified in nature.
Where can I learn more about rubber duck debugging? More information can be found at wikipedia.org , lists.ethernal.org , codinghorror.com , and zenhub.com .
Where can I hire my own duck? Great question! Amazon.com hosts a wide selection of affordable ducks that have graduated with a technical degree from some of the world's leading universities.
Why does this site exist? As a young intern in 2008 I repeatedly pestered a mentor of mine similar to Kevin's Rubber Duck Story and eventually my mentor pointed me at the 2002 lists.ethernal.org post by Andy , which paraphrased a story from the 1999 book The Pragmatic Programmer . That night I ordered a rubber duck from Amazon and purchased this domain name as a way of owning up to my behavior.
I'd also highly recommend this chapter on debugging from Think Python book: https://greenteapress.com/thinkpython2/html/thinkpython2021.html
and this article: https://jvns.ca/blog/2019/06/23/a-few-debugging-resources/
Jul 10, 2020 | www.zdnet.com
ZDNet
Liam Tung
July 6, 2020

Tiobe Software's latest programming language popularity index shows the statistical programming language R making a comeback, rising to eighth place after falling out of the top 20 in May for the first time in three years. Tiobe's Paul Jansen believes demand in universities and from global efforts to find a vaccine for Covid-19 has given a boost to R and Python. Said Jansen, "Lots of statistics and data mining needs to be done to find a vaccine for the Covid-19 virus. As a consequence, statistical programming languages that are easy to learn and use gain popularity now." Tiobe's rankings are based on search engine results related to programming language queries. The C programming language topped the latest index, followed in descending order by Java, Python, C++, C#, Visual Basic, JavaScript, R, PHP, and Swift.
Oct 15, 2017 | zwischenzugs.com
Sapiens and Collective Fictions
Recently I read Sapiens: A Brief History of Humankind by Yuval Harari. The basic thesis of the book is that humans require 'collective fictions' so that we can collaborate in larger numbers than the 150 or so our brains are big enough to cope with by default. Collective fictions are things that don't describe solid objects in the real world we can see and touch. Things like religions, nationalism, liberal democracy, or Popperian falsifiability in science. Things that don't exist, but when we act like they do, we easily forget that they don't.
Collective Fictions in IT – Waterfall

This got me thinking about some of the things that bother me today about the world of software engineering. When I started in software 20 years ago, God was waterfall. I joined a consultancy (ca. 400 people) that wrote very long specs which were honed to within an inch of their life, down to the individual Java classes and attributes. These specs were submitted to the customer (God knows what they made of it), who signed it off. This was then built, delivered, and monies were received soon after. Life was simpler then and everyone was happy.
Except there were gaps in the story – customers complained that the spec didn't match the delivery, and often the product delivered would not match the spec, as 'things' changed while the project went on. In other words, the waterfall process was a 'collective fiction' that gave us enough stability and coherence to collaborate, get something out of the door, and get paid.
This consultancy went out of business soon after I joined. No conclusions can be drawn from this.
Collective Fictions in IT – Startups ca. 2000

I got a job at another software development company that had a niche with lots of work in the pipe. I was employee #39. There was no waterfall. In fact, there was nothing in the way of methodology I could see at all. Specs were agreed with a phone call. Design, prototype and build were indistinguishable. In fact it felt like total chaos; it was against all of the precepts of my training. There was more work than we could handle, and we got on with it.
The fact was, we were small enough not to need a collective fiction we had to name. Relationships and facts could be kept in our heads, and if you needed help, you literally called out to the room. The tone was like this, basically:
Of course there were collective fictions, we just didn't name them:
- We will never have a mission statement
- We don't need HR or corporate communications, we have the pub (tough luck if you have a family)
- We only hire the best
We got slightly bigger, and customers started asking us what our software methodology was. We guessed it wasn't acceptable to say 'we just write the code' (legend had it our C-based application server – still in use and blazingly fast – was written before my time in a fit of pique, with a stash of amphetamines, over a weekend).
Turns out there was this thing called 'Rapid Application Development' that emphasized prototyping. We told customers we did RAD, and they seemed happy, as it was A Thing. It sounded to me like 'hacking', but to be honest I'm not sure anyone among us really properly understood it or read up on it.
As a collective fiction it worked, because it kept customers off our backs while we wrote the software.
Soon we doubled in size, moved out of our cramped little office into a much bigger one with bigger desks, and multiple floors. You couldn't shout out your question to the room anymore. Teams got bigger, and these things called 'project managers' started appearing everywhere talking about 'specs' and 'requirements gathering'. We tried and failed to rewrite our entire platform from scratch.
Yes, we were back to waterfall again, but this time the working cycles were faster and smaller, and the same problems of changing requirements and disputes with customers as before. So was it waterfall? We didn't really know.
Collective Fictions in IT – Agile

I started hearing the word 'Agile' about 2003. Again, I don't think I properly read up on it ever, actually. I got snippets here and there from various websites I visited and occasionally from customers or evangelists that talked about it. When I quizzed people who claimed to know about it their explanations almost invariably lost coherence quickly. The few that really had read up on it seemed incapable of actually dealing with the very real pressures we faced when delivering software to non-sprint-friendly customers, timescales, and blockers. So we carried on delivering software with our specs, and some sprinkling of agile terminology. Meetings were called 'scrums' now, but otherwise it felt very similar to what went on before.
As a collective fiction it worked, because it kept customers and project managers off our backs while we wrote the software.
Since then I've worked in a company that grew to 700 people, and now work in a corporation of 100K+ employees, but the pattern is essentially the same: which incantation of the liturgy will satisfy this congregation before me?
Don't You Believe?

I'm not going to beat up on any of these paradigms, because what's the point? If software methodologies didn't exist we'd have to invent them, because how else would we work together effectively? You need these fictions in order to function at scale. It's no coincidence that the Agile paradigm has such a quasi-religious hold over a workforce that is immensely fluid and mobile. (If you want to know what I really think about software development methodologies, read this because it lays it out much better than I ever could.)
One of many interesting arguments in Sapiens is that because these collective fictions can't adequately explain the world, and often conflict with each other, the interesting parts of a culture are those where these tensions are felt. Often, humour derives from these tensions.
'The test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still retain the ability to function.' F. Scott Fitzgerald
I don't know about you, but I often feel this tension when discussion of Agile goes beyond a small team. When I'm told in a motivational poster written by someone I've never met and who knows nothing about my job that I should 'obliterate my blockers', and those blockers are both external and non-negotiable, what else can I do but laugh at it?
How can you be agile when there are blockers outside your control at every turn? Infrastructure, audit, security, financial planning, financial structures all militate against the ability to quickly deliver meaningful iterations of products. And who is the customer here, anyway? We're talking about the square of despair:
When I see diagrams like this representing Agile I can only respond with black humour shared with my colleagues, like kids giggling at the back of a church.
Within a smaller and well-functioning team, the totems of Agile often fly out of the window and what you're left with (when it's good) is a team that trusts each other, is open about its trials, and has a clear structure (formal or informal) in which agreement and solutions can be found and co-operation is productive. Google recently articulated this (reported briefly here , and more in-depth here ).
So Why Not Tell It Like It Is?

You might think the answer is to come up with a new methodology that's better. It's not like we haven't tried:
It's just not that easy, like the book says:
'Telling effective stories is not easy. The difficulty lies not in telling the story, but in convincing everyone else to believe it. Much of history revolves around this question: how does one convince millions of people to believe particular stories about gods, or nations, or limited liability companies? Yet when it succeeds, it gives Sapiens immense power, because it enables millions of strangers to cooperate and work towards common goals. Just try to imagine how difficult it would have been to create states, or churches, or legal systems if we could speak only about things that really exist, such as rivers, trees and lions.'
Let's rephrase that:
'Coming up with useful software methodologies is not easy. The difficulty lies not in defining them, but in convincing others to follow them. Much of the history of software development revolves around this question: how does one convince engineers to believe particular stories about the effectiveness of requirements gathering, story points, burndown charts or backlog grooming? Yet when adopted, it gives organisations immense power, because it enables distributed teams to cooperate and work towards delivery. Just try to imagine how difficult it would have been to create Microsoft, Google, or IBM if we could only speak about specific technical challenges.'
Anyway, does the world need more methodologies? It's not like some very smart people haven't already thought about this.
Acceptance

So I'm cool with it. Lean, Agile, Waterfall, whatever, the fact is we need some kind of common ideology to co-operate in large numbers. None of them are evil, so it's not like you're picking racism over socialism or something. Whichever one you pick is not going to reflect the reality, but if you expect perfection you will be disappointed. And watch yourself for unspoken or unarticulated collective fictions. Your life is full of them. Like that your opinion is important. I can't resist quoting this passage from Sapiens about our relationship with wheat:
'The body of Homo sapiens had not evolved for [farming wheat]. It was adapted to climbing apple trees and running after gazelles, not to clearing rocks and carrying water buckets. Human spines, knees, necks and arches paid the price. Studies of ancient skeletons indicate that the transition to agriculture brought about a plethora of ailments, such as slipped discs, arthritis and hernias. Moreover, the new agricultural tasks demanded so much time that people were forced to settle permanently next to their wheat fields. This completely changed their way of life. We did not domesticate wheat. It domesticated us. The word 'domesticate' comes from the Latin domus, which means 'house'. Who's the one living in a house? Not the wheat. It's the Sapiens.'
Maybe we're not here to direct the code, but the code is directing us. Who's the one compromising reason and logic to grow code? Not the code. It's the Sapiens.
If you liked this, you may want to look at my book Learn Bash the Hard Way , available at $5 :
Also currently co-authoring the Second Edition of a book on Docker: Get 39% off with the code 39miell2
60 thoughts on "My 20-Year Experience of Software Development Methodologies"
- Pingback: My 20-Year Experience of Software Development Methodologies | ExtendTree
- gregjor October 15, 2017 at 11:28 am
Great article, matches my experience. And thanks for the link and compliment on my article. Reply
- zwischenzugs October 15, 2017 at 1:07 pm
Wow, that was yours? Have toted that article around for years. Pleasure to finally meet you! Reply
- primogatto October 15, 2017 at 1:04 pm
"And watch yourself for unspoken or unarticulated collective fictions. Your life is full of them."
Agree completely.
As for software development methodologies, I personally think that with a few tweaks the waterfall methodology could work quite well. The key changes I'd suggest are to introduce developer guidance at the planning stage, including timeboxed explorations of the feasibility of the proposals, and to aim for specs that outline business requirements rather than dictating how they should be implemented.
- pheeque October 15, 2017 at 6:19 pm
And then there were 16 competing standards. Reply
- Neel October 15, 2017 at 5:30 pm
wonderful Reply
- Rob Lang October 15, 2017 at 9:15 pm
A very entertaining article! I have a similar experience and outlook. I've not tried Lean. I once heard a senior developer say that methodologies were just a stick with which to beat developers. This was largely in the case of clients who agree to engage in whatever process when amongst business people and then are absent at grooming, demos, releases, feedback meetings and so on. When the software is delivered at progressively short notice, it's always the developer that has to carry the burden of ensuring quality, feeling keenly responsible for the work they do (the conscientious ones anyway). Then non-technical management hide behind the process, and failing to have the client fully engaged is quickly forgotten.
It reminds me (I'm rambling now, sorry) of factory workers in the 80s complaining about working conditions and the management nodding and smiling while doing nothing to rectify the situation and doomed to repeat the same error. Except now the workers are intelligent and will walk, taking their business knowledge and skill set with them. Reply
- Mike Will October 16, 2017 at 1:36 am
Very enjoyable. I had a stab at the small sub-trail of 'syntonicity' here: http://www.scidata.ca/?p=895
Syntonicity is Stuart Watt's term, which he probably got from Seymour Papert. Of course, this may all become moot soon as our robot overlords take their place at the keyboard. Reply
- joskid October 16, 2017 at 7:23 am
Reblogged this on josephdung . Reply
- otomato October 16, 2017 at 8:31 am
A great article! I was very much inspired by Yuval's book myself. So much that I wrote a post about DevOps being a collective fiction : http://otomato.link/devops-is-a-myth/
Basically same ideas as yours but from a different angle. Reply

- Roger October 16, 2017 at 5:24 pm
Fantastic article – I wonder what the next fashionable methodology will be? Reply
- Pingback: Evolving Software Development | CR 279 | Jupiter Broadcasting
- Rafiqunnabi Nayan October 17, 2017 at 5:31 am
A great article. Thanks a lot for writing. Reply
- Follow Blog Widget - Support - WordPress.com October 17, 2017 at 6:47 am
This site truly has all the information I needed about this subject and didn't know who to ask. Reply

- Pingback: Five Blogs – 18 October 2017 – 5blogs
- Pingback: Weekly Links #83 – Useful Links For Developers
- Anthony Kesterton October 22, 2017 at 3:16 pm
Brilliant – well said Ian!
I think part of the "need" for methodology is the desire for a common terminology. However, if everyone has their own view of what these terms mean, then it all starts to go horribly wrong. The focus quickly becomes adhering to the methodology rather than getting the work done. Reply
- Pingback: Die KW 42/2017 im Link-Rückblick | artodeto's blog about coding, politics and the world
- Pingback: programming reading notes | Electronics DIY
- Steve Naidamast October 23, 2017 at 1:15 pm
A very well-written article. I retired from corporate development in 2014 but am still developing my own projects. I have written on this very subject and these pieces have been published as well.
The idea that the Waterfall technique for development was the only one in use as we go back towards the earlier years is a myth that has been built up by the folks who have been promoting the Agile technique, which for seniors like me has been just another word for what we used to call "guerrilla programming". In fact, if one were to review the standards of design in software engineering, there are 13 types of design techniques, all of which have been used at one time or another by many different companies successfully. Waterfall was just one of them and was only recommended for very large projects.
The author is correct to conclude by implication that the best technique for design and implementation is the RAD technique promoted by Stephen McConnell of Construx, together with a team that can work well with others. His book, still in its first edition since 1996, is considered the Bible for software development and describes every aspect of software engineering one could require. His point, however, is that his book is only suggested as a guide from which engineers can pick what they really need for the development of their projects, not hard standards. Nonetheless, McConnell stresses the need for good specifications and risk management; the latter, if not used, always causes a project to fail or produce less than satisfactory results. His work is proven by over 35 years of research. Reply
- Mike October 23, 2017 at 1:39 pm
Hilarious and oh so true. Remember the first time you were being taught Agile and they told you that the stakeholders would take responsibility for their role and decisions. What a hoot! Seriously, I guess they did used to write detailed specs, but in my twenty-some years, I've just been thrilled if I had a business analyst that knew what they wanted. Reply
- Kurt Guntheroth October 23, 2017 at 4:16 pm
OK, here's a collective fiction for you. "Methodologies don't work. They don't reflect reality. They are just something we tell customers because they are appalled when we admit that our software is developed in a chaotic and unprofessional manner." This fiction serves those people who already don't like process, and gives them excuses.
We do things the same way over and over for a reason. We have traffic lights because it reduces congestion and reduces traffic fatalities. We make cakes using a recipe because we like it when the result is consistently pleasing. So too with software methodologies.
Like cake recipes, not all software methodologies are equally good at producing a consistently good result. This fact alone should tell you that there is something of value in the best ones. While there may be a very few software chefs who can whip up a perfect result every time, the vast bulk of developers need a recipe to follow or the results are predictably bad.
Your diatribe against process does the community a disservice. Reply
- Doug October 24, 2017 at 5:34 am
I have arrived at the conclusion that any and all methodologies would work – IF (and it's a big one), everyone managed to arrive at a place where they considered the benefit of others before themselves. And, perhaps, they all used the same approach.
For me, it comes down to character rather than anything else. I can learn the skills or trade a chore with someone else.
Software developers, the ones who create "new stuff", by definition have no roadmap. They have experience, good judgment, the ability to 'survive in the wild', are always wanting to "see what is over there", and trust, as was noted, is key. And there are varying levels of developer. Some want to build the roads; others use the roads built for them; and some want to survey for the road yet to be built. None of these are wrong – or right.
The various methodology fights are like arguing over what side of the road to drive on, how to spell colour and color. Just pick one, get over yourself and help your partner(s) become successful.
Ah, right. Where do the various methodologies resolve greed, envy, distrust, selfishness, stepping on others for personal gain, and all of the other REAL killers of success again?
I have seen great teams succeed and far too many fail. Those that have failed more often than not did so for character-related issues rather than technical ones. Reply
- Pingback: into #SoftwareDevelopment ? this is a good read https://zwischenzugs.wordpress.com/2017/10/15/my-20-year-experience-of-software-development-methodologies/
- Morten Damsgaard-madsen October 24, 2017 at 7:32 am
One of the best articles I have read in a long time about – well everything :-). Reply
- Pingback: Java Weekly, Issue 199 | Baeldung
- Pingback: My 20-Year Experience of Software Development Methodologies | beloschuk
- Pingback: 테스트메일 | simple note
- Ben Hayden November 7, 2017 at 1:36 pm
Before there exists any success, a methodology must freeze a definition for roles, as well as process. Unless there exist sufficient numbers and specifications of roles, and appropriate numbers of sapiens to hold those roles, then the one on the end becomes overburdened and triggers systemic failure.
There has never been a sufficiently-complex methodology that could encompass every field, duty, and responsibility in a software development task. (This is one of the reasons "chaos" is successful. At least it accepts the natural order of things, and works within the interstitial spaces of a thousand objects moving at once.)
We even lie to ourselves when we name what we're doing: Methodology. It sounds so official, so logical, so orderly. That's a myth. It's just a way of pushing the responsibility down from the most powerful to the least powerful -- every time.
For every "methodology," who is the caboose on the end of this authority train? The "coder."
The tighter the role definitions become in any methodology, the more actual responsibilities cascade down to the "coder." If the specs conflict, who raises his hand and asks the question? If a deadline is unreasonable, who complains? If a technique is unusable in a situation, who brings that up?
The person is obviously the "coder." And what happens when the coder asks this question?
In one methodology the "coder" is told to stop production and raise the issue with the manager who will talk to the analyst who will talk to the client who will complain that his instructions were clear and it all falls back to the "coder" who, obviously, was too dim to understand the 1,200 pages of specifications the analyst handed him.
In another, the "coder" is told, "you just work it out." And the concomitant chaos renders the project unstable.
In another, the "coder" is told "just do what you're told." And the result is incompatible with the rest of the project.
I've stopped "coding" for these reasons and because everybody is happy with the myth of programming process because they aren't the caboose. Reply
- Kurt Guntheroth November 7, 2017 at 4:29 pm
I was going to make fun of this post for being whiney and defeatist. But the more I thought about it, the more I realized it contained a big nugget of truth. A lot of methodologies, as practiced, have the purpose of putting off risk onto the developers, of fixing responsibility on developers so the managers aren't responsible for any of the things that can go wrong with projects. Reply
- Pingback: Organizing Teams With Collective Fictions | Hackaday
- Pingback: Organizing Teams With Collective Fictions – High Tech Newz
- Pingback: Organizing Teams With Collective Fictions – LorePop
- Pingback: Seven Hypothesis of German Tech Culture and Challenging the Status Quo – @Virtual_Patrick
- Pingback: My 20-Year Experience of Software Development Methodologies – InnovateStartup
- Pingback: Interesting Links for 04-12-2017 | Made from Truth and Lies
- Pingback: My 20-Year Trip of Gadget Trend Methodologies | A1A
- William (Bill) Meade December 4, 2017 at 2:27 pm
A pleasure to read. Gödel incompleteness in software? Development environments are nothing if not formalisms. :-) Reply
- Pingback: My 20-Year Experience of Software Development Methodologies – Demo
- Scott Armit (@smarmit) December 4, 2017 at 4:32 pm
Really enjoyable and matches my 20+ years in the industry. Thank you. Reply
- dinkarshastri December 4, 2017 at 5:44 pm
Reblogged this on High output engineering . Reply
- Pedro Liska December 6, 2017 at 4:14 pm
Great article! I have experienced the same regarding software methodologies. And at a greater level, thank you for introducing me to the concept of collective fictions; it makes so much sense. I will be reading Sapiens. Reply
- Pingback: The 20 MB hard drive; 3.5 billion Reddit comments; and much more - Intertech Blog
- Alex Staveley December 8, 2017 at 5:33 pm
Actually, come to think of it, there are two types of Software Engineers who take process very seriously. One is acutely aware of software entropy and wants to proactively fight against it, because they want to engineer to a high standard and don't like working the weekend. So they want things organised. Then there's another type who can come across as being a bit dogmatic. Maybe your links with collective delusions help explain some of the human psychology here. Reply
- Pingback: My 20-Year Experience of Software Development Methodologies – zwischenzugs | A Place Like This
- Pingback: Newsletter 40 | import digest
- Pingback: Interesting articles Jan-Mar 2018 – ProgBlog
- Frank Thun February 11, 2018 at 10:31 am
Great Article. Here is one I did about Agile Management Systems, which are trying to lay the managerial foundations for "Agile". Or should I say to liberate Organisations? None of the systems help if a fool is using the tool, though.
https://managementdigital.net/2017/06/30/holacracy-liberation-and-management-3-0/ Reply
- Pingback: Five Things I Did to Change a Team's Culture – zwischenzugs
- Pingback: Things I Learned Managing Site Reliability for Some of the World's Busiest Gambling Sites – zwischenzugs
- Cara Mudah Memblokir Situs dengan MikroTik June 2, 2018 at 4:02 pm
Mumtaz, i like this so much Reply
- Pingback: Personal experiences with agile: 16 comments, pictures and a video about practically applying agile - stratejos blog
- Praxent July 24, 2018 at 2:49 pm
really good site Reply
- Pingback: The software dev "process" | Joe Teibel
- Pingback: Why Are Enterprises So Slow? – zwischenzugs
- Kostas Chairopoulos (@khairop) November 17, 2018 at 8:54 am
First of all this is a great article, very well written. A couple of remarks. Early in waterfall, the large business requirements documents didn't work for two reasons. First, there was no new business process; it was the same business process that had to be applied within a new technology (from mainframes to open Unix systems, from ASCII to RAD tools and 4GL languages). Second, many consultancy companies (mostly the big 4) were using "copy & paste" methods to fill these documents, submit the time and material forms for the consultants, increase the revenue, and move on. Things have changed with the adoption of smartphones, etc.
To reflect the author's idea, in my humble opinion the collective fiction is the embedding of quality of work into the whole development life cycle.
Thanks
Kostas Reply
- AriC December 8, 2018 at 3:40 pm
Sorry, did you forget to finish the article? I don't see the conclusion providing the one true programming methodology that works in all occasions. What is the magic procedure? Thanks in advance. Reply
- Pingback: Notes on Books Read in 2018 – zwischenzugs
- Pingback: 'AWS vs K8s' is the new 'Windows vs Linux' – zwischenzugs
- Pingback: Notes on Books Read in 2019 – zwischenzugs
Jun 17, 2020 | opensource.com
Knowing how Linux uses libraries, including the difference between static and dynamic linking, can help you fix dependency problems.
Linux, in a way, is a series of static and dynamic libraries that depend on each other. For new users of Linux-based systems, the whole handling of libraries can be a mystery. But with experience, the massive amount of shared code built into the operating system can be an advantage when writing new applications.
To help you get familiar with this topic, I prepared a small example application that shows the most common methods that work on common Linux distributions (these have not been tested on other systems). To follow along with this hands-on tutorial using the example application, open a command prompt and type:
$ git clone https://github.com/hANSIc99/library_sample
$ cd library_sample/
$ make
cc -c main.c -Wall -Werror
cc -c libmy_static_a.c -o libmy_static_a.o -Wall -Werror
cc -c libmy_static_b.c -o libmy_static_b.o -Wall -Werror
ar -rsv libmy_static.a libmy_static_a.o libmy_static_b.o
ar: creating libmy_static.a
a - libmy_static_a.o
a - libmy_static_b.o
cc -c -fPIC libmy_shared.c -o libmy_shared.o
cc -shared -o libmy_shared.so libmy_shared.o
$ make clean
rm *.o

After executing these commands, these files should be added to the directory (run ls to see them):

my_app
libmy_static.a
libmy_shared.so

About static linking

When your application links against a static library, the library's code becomes part of the resulting executable. This is performed only once at linking time, and these static libraries usually end with a .a extension.

A static library is an archive (ar) of object files. The object files are usually in the ELF format. ELF is short for Executable and Linkable Format, which is compatible with many operating systems.
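To make this concrete, here is a minimal sketch of what a source file for the static library and a program that uses it could look like. The function names and file contents below are assumptions for illustration only; they are not the actual contents of the library_sample repository.

/* libmy_static_a.c -- hypothetical example of a source file that goes into the static library */
int add_numbers(int a, int b)
{
    return a + b;
}

/* main.c -- hypothetical program calling the library function */
#include <stdio.h>

int add_numbers(int a, int b);  /* would normally be declared in a header shipped with the library */

int main(void)
{
    /* With static linking (e.g., cc main.o libmy_static.a -o my_app), the machine
       code of add_numbers() is copied into the my_app executable itself. */
    printf("2 + 3 = %d\n", add_numbers(2, 3));
    return 0;
}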
The output of the file command tells you that the static library libmy_static.a is the ar archive type:

$ file libmy_static.a
libmy_static.a: current ar archive

With ar -t, you can look into this archive; it shows two object files:

$ ar -t libmy_static.a
libmy_static_a.o
libmy_static_b.o

You can extract the archive's files with ar -x <archive-file>. The extracted files are object files in ELF format:

$ ar -x libmy_static.a
$ file libmy_static_a.o
libmy_static_a.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped

About dynamic linking
Dynamic linking means the use of shared libraries. Shared libraries usually end with .so (short for "shared object").

Shared libraries are the most common way to manage dependencies on Linux systems. These shared resources are loaded into memory before the application starts, and when several processes require the same library, it will be loaded only once on the system. This feature saves on memory usage by the application.
Another thing to note is that when a bug is fixed in a shared library, every application that references this library will profit from it. This also means that if the bug remains undetected, each referencing application will suffer from it (if the application uses the affected parts).
It can be very hard for beginners when an application requires a specific version of the library, but the linker only knows the location of an incompatible version. In this case, you must help the linker find the path to the correct version.
Although this is not an everyday issue, understanding dynamic linking will surely help you in fixing such problems.
Fortunately, the mechanics for this are quite straightforward.
To detect which libraries are required for an application to start, you can use ldd, which will print out the shared libraries used by a given file:

$ ldd my_app
linux-vdso.so.1 (0x00007ffd1299c000)
libmy_shared.so => not found
libc.so.6 => /lib64/libc.so.6 (0x00007f56b869b000)
/lib64/ld-linux-x86-64.so.2 (0x00007f56b8881000)

Note that the library libmy_shared.so is part of the repository but is not found. This is because the dynamic linker, which is responsible for loading all dependencies into memory before executing the application, cannot find this library in the standard locations it searches.

Errors associated with linkers finding incompatible versions of common libraries (like bzip2, for example) can be quite confusing for a new user. One way around this is to add the repository folder to the environment variable LD_LIBRARY_PATH to tell the linker where to look for the correct version. In this case, the right version is in this folder, so you can export it:

$ LD_LIBRARY_PATH=$(pwd):$LD_LIBRARY_PATH
$ export LD_LIBRARY_PATH
Now the dynamic linker knows where to find the library, and the application can be executed. You can rerun ldd to invoke the dynamic linker, which inspects the application's dependencies and loads them into memory. The memory address is shown after the object path:

$ ldd my_app
linux-vdso.so.1 (0x00007ffd385f7000)
libmy_shared.so => /home/stephan/library_sample/libmy_shared.so (0x00007f3fad401000)
libc.so.6 => /lib64/libc.so.6 (0x00007f3fad21d000)
/lib64/ld-linux-x86-64.so.2 (0x00007f3fad408000)

To find out which linker is invoked, you can use file:

$ file my_app
my_app: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=26c677b771122b4c99f0fd9ee001e6c743550fa6, for GNU/Linux 3.2.0, not stripped

The linker /lib64/ld-linux-x86-64.so.2 is a symbolic link to ld-2.31.so, which is the default linker for my Linux distribution:

$ file /lib64/ld-linux-x86-64.so.2
/lib64/ld-linux-x86-64.so.2: symbolic link to ld-2.31.so

Looking back at the output of ldd, you can also see (next to libmy_shared.so) that each dependency ends with a number (e.g., /lib64/libc.so.6). The usual naming scheme of shared objects is:

libXYZ.so.<MAJOR>.<MINOR>

On my system, libc.so.6 is also a symbolic link to the shared object libc-2.31.so in the same folder:

$ file /lib64/libc.so.6
/lib64/libc.so.6: symbolic link to libc-2.31.so

If you are facing the issue that an application will not start because the loaded library has the wrong version, it is very likely that you can fix this issue by inspecting and rearranging the symbolic links or specifying the correct search path (see "The dynamic loader: ld.so" below).
For more information, look on the ldd man page.

Dynamic loading

Dynamic loading means that a library (e.g., a .so file) is loaded during a program's runtime. This is done using a certain programming scheme. Dynamic loading is applied when an application uses plugins that can be modified during runtime.

See the dlopen man page for more information.
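As a rough illustration of that programming scheme, here is a minimal sketch (not part of the original article; the library path and the symbol name "my_function" are assumptions) of dynamic loading with dlopen() and dlsym():

#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* Load the shared object at runtime instead of linking against it. */
    void *handle = dlopen("./libmy_shared.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up a symbol by name; "my_function" is a hypothetical symbol. */
    void (*my_function)(void) = (void (*)(void)) dlsym(handle, "my_function");
    if (!my_function) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    my_function();   /* call the dynamically loaded code */
    dlclose(handle); /* unload the library when done */
    return 0;
}

Compile it with something like cc dlopen_demo.c -o dlopen_demo -ldl (the -ldl flag is not needed on newer glibc versions, where the dl* functions live in libc itself).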
The dynamic loader: ld.so

On Linux, you mostly are dealing with shared objects, so there must be a mechanism that detects an application's dependencies and loads them into memory.

ld.so looks for shared objects in these places in the following order:

- The relative or absolute path in the application (hardcoded with the -rpath compiler option on GCC)
- In the environment variable LD_LIBRARY_PATH
- In the file /etc/ld.so.cache
Keep in mind that adding a library to the system's library archive /usr/lib64 requires administrator privileges. You could copy libmy_shared.so manually to the library archive and make the application work without setting LD_LIBRARY_PATH:

unset LD_LIBRARY_PATH
sudo cp libmy_shared.so /usr/lib64/

When you run ldd, you can see the path to the library archive shows up now:

$ ldd my_app
linux-vdso.so.1 (0x00007ffe82fab000)
libmy_shared.so => /lib64/libmy_shared.so (0x00007f0a963e0000)
libc.so.6 => /lib64/libc.so.6 (0x00007f0a96216000)
/lib64/ld-linux-x86-64.so.2 (0x00007f0a96401000)

Customize the shared library at compile time

If you want your application to use your shared libraries, you can specify an absolute or relative path during compile time.
Modify the makefile (line 10) and recompile the program by invoking make -B. Then, the output of ldd shows libmy_shared.so is listed with its absolute path.

Change this:

CFLAGS =-Wall -Werror -Wl,-rpath,$(shell pwd)

To this (be sure to edit the username):

CFLAGS =/home/stephan/library_sample/libmy_shared.so

Then recompile:

$ make

Confirm it is using the absolute path you set, which you can see on line 2 of the output:

$ ldd my_app
linux-vdso.so.1 (0x00007ffe143ed000)
libmy_shared.so => /lib64/libmy_shared.so (0x00007fe50926d000)
/home/stephan/library_sample/libmy_shared.so (0x00007fe509268000)
libc.so.6 => /lib64/libc.so.6 (0x00007fe50909e000)
/lib64/ld-linux-x86-64.so.2 (0x00007fe50928e000)
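As a side note, and not a step from the original walkthrough, you can check which RPATH or RUNPATH actually ended up inside the binary by inspecting its dynamic section with a generally available tool:

$ readelf -d my_app | grep -E 'RPATH|RUNPATH'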
This is a good example, but how would this work if you were making a library for others to use? New library locations can be registered by writing them to /etc/ld.so.conf or creating a <library-name>.conf file containing the location under /etc/ld.so.conf.d/. Afterward, ldconfig must be executed to rewrite the ld.so.cache file. This step is sometimes necessary after you install a program that brings some special shared libraries with it.
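For example, registering a library installed under /opt/my_library/lib could look like the following sketch (the path and the file name my_library.conf are hypothetical, not taken from the article):

$ echo "/opt/my_library/lib" | sudo tee /etc/ld.so.conf.d/my_library.conf
$ sudo ldconfig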
See the ld.so man page for more information.

How to handle multiple architectures

Usually, there are different libraries for the 32-bit and 64-bit versions of applications. The following list shows their standard locations for different Linux distributions:
Red Hat family
- 32 bit: /usr/lib
- 64 bit: /usr/lib64

Debian family
- 32 bit: /usr/lib/i386-linux-gnu
- 64 bit: /usr/lib/x86_64-linux-gnu

Arch Linux family
- 32 bit: /usr/lib32
- 64 bit: /usr/lib64

FreeBSD (technically not a Linux distribution)
- 32 bit: /usr/lib32
- 64 bit: /usr/lib
Knowing where to look for these key libraries can make broken library links a problem of the past.
While it may be confusing at first, understanding dependency management in Linux libraries is a way to feel in control of the operating system. Run through these steps with other applications to become familiar with common libraries, and continue to learn how to fix any library challenges that could come up along your way.
Jun 13, 2020 | insights.dice.com
Next, Stack Overflow breaks down the languages that developers most want to use but haven't started yet. This list is radically different from the most-loved and most-dreaded lists, which are composed of languages that developers already utilize in the course of software development. As you can see, Python tops this particular list, followed by JavaScript and Go:
[ image omitted]
What conclusions can we draw from this data? Python and JavaScript are widely used languages -- it wouldn't be a stretch to call them ubiquitous -- and their presence at the top of the "most wanted" list suggests that technologists realize knowing these languages can unlock all kinds of opportunities. In a similar fashion, the presence of up-and-comers such as Go and Kotlin hints that developers suspect these languages could become big in coming years, and they want to learn them now.
If you want to learn Python, start by swinging past Python.org , which offers tons of documentation, including a useful beginner's guide to programming in it. From there, you can jump to tutorials on writing faster code (via Functions, Lists, and more), debugging , and other more advanced skills. If you're the kind who likes to learn via videos, check out Microsoft's video series, "Python for Beginners," with 44 short videos (recent additions include "More Python for Beginners" and "Even More Python for Beginners: Data Tools." )
And if you want to learn Rust (and find out why it topped the "most loved" languages), rust-lang.org offers lots of handy documentation. There are also some handy (free) tutorials available via Medium .
Jun 09, 2020 | opensource.com
Rust feels like the place to be: it's well-structured, it's expressive, it helps you do the right thing. I recently started learning Rust after many years of Java development. The five points that keep coming to mind are:
- Rust feels familiar
- References make sense
- Ownership will make sense
- Cargo is helpful
- The compiler is amazing
I absolutely stand by all of these, but I've got a little more to say because I now feel like a Rustacean 1 in that:
- I don't feel like programming in anything else ever again.
- I've moved away from simple incantations.
What do I mean by these two statements? Well, the first is pretty simple: Rust feels like the place to be. It's well-structured, it's expressive, it helps you do the right thing, 2 it's got great documentation and tools, and there's a fantastic community. And, of course, it's all open source, which is something that I care about deeply.
Here is an example of what it is like to use Rust:
// Where checkhashes is a pre-defined vector of hashes to verify
let algorithms = vec![String::from("SHA-256"); checkhashes.len()];

This creates a new vector called "algorithms," of the same length as the vector "checkhashes," and fills it with the String "SHA-256."

And the second thing? Well, I decided that in order to learn Rust properly, I should take a project that I had originally written in Java and reimplement it in hopefully fairly idiomatic Rust. Before long, I started fixing mistakes -- and making mistakes -- around implementation rather than around syntax. And I wasn't just copying text from tutorials or making minor, seemingly random changes to my code based on the compiler output. In other words, I was getting things to compile, understanding why they compiled, and then just making programming mistakes. 3
Here's another example, which should feel quite familiar:
fn usage() {
    println!("Usage: findfromserial KEY_LENGTH INITIAL_SALT CHECK_HASH1 [CHECK_HASH2, ...]");
    std::process::exit(1);
}

This is a big step forward. When you start learning a language, it's easy just to copy and paste text that you've seen elsewhere, or fiddle with unfamiliar constructs until they -- sort of -- work. Using code or producing code that you don't really understand but seems to work is sometimes referred to as "using incantations" (from the idea that most magicians in fiction, film, and gaming recite collections of magic words that "just work" without really understanding what they're doing or what the combination of words actually means). Some languages 4 are particularly prone to this sort of approach, but many -- most? -- people learning a new language are prone to doing this when they start out just because they want things to work.
Recently, I was up until 1am implementing a new feature -- accepting command-line input -- that I couldn't really get my head 'round. I'd spent quite a lot of time on it (including looking for -- and failing to find -- some appropriate incantations), and then asked for some help on an internal rust-lang channel. (You might want to sign up to the general Slack Rust channel inhabited by some people I know.) A number of people had made some suggestions about what had been going wrong, and one person was enormously helpful in picking apart some of the suggestions, so I understood them better. He explained quite a lot, but finished with, "I don't know the return type of the hash function you're calling -- I think this is a good spot for you to figure this piece out on your own."
Here's where I was trying to get to:
checkhashes = std::env::args()
    .skip(3)
    .map(|x| hex::decode(x))
    .collect::<Result<Vec<Vec<u8>>, _>>()
    .unwrap();

It may seem weird until you get your head 'round it, but it actually works as you might expect: I wanted to take input from the command line, skip the first three inputs, iterate over the rest, casting each to a vector of u8's and creating a vector of those. The _ at the end of the "collect" call vacuums up any errors or problems and basically throws them away.

This was just what I needed, and what any learner of anything, including programming languages, needs. So when I had to go downstairs at midnight to let the dog out, I decided to stay down and see if I could work things out for myself. And I did. I took the suggestions that people had made, understood what they were doing, tried to divine what they should be doing, worked out how they should be doing it, and then found the right way of making it happen.
I've still got lots to learn, and I'll make lots of mistakes still, but I now feel that I'm in a place to find my way through those mistakes (with a little help along the way, probably -- thanks to everyone who's already pointed me in the right direction). But I do feel that I'm now actually programming in Rust. And I like it.
- This is what Rust programmers call themselves.
- It's almost impossible to stop people doing the wrong thing entirely, but encouraging people to do the right thing is great. In fact, Rust goes further and actually makes it difficult to do the wrong thing in many situations. You really have to try quite hard to do bad things in Rust.
- I found a particularly egregious off-by-one error in my code, for instance, which had nothing to do with Rust, and everything to do with my not paying enough attention to the program flow.
- Cough Perl cough
May 27, 2020 | techrights.org
...it was developed along lines that are not entirely different from Microsoft's EEE tactics -- which today I will offer a new acronym and description for:
1. Steal
2. Add Bloat
3. Original TrashedIt's difficult conceptually to "steal" Free software, because it (sort of, effectively) belongs to everyone. It's not always Public Domain -- copyleft is meant to prevent that. The only way you can "steal" free software is by taking it from everyone and restricting it again. That's like "stealing" the ocean or the sky, and putting it somewhere that people can't get to it. But this is what non-free software does. (You could also simply go against the license terms, but I doubt Stallman would go for the word "stealing" or "theft" as a first choice to describe non-compliance).
... ... ...
Again and again, Microsoft "Steals" or "Steers" the development process itself so it can gain control (pronounced: "ownership") of the software. It is a gradual process, where Microsoft has more and more influence until they dominate the project and with it, the user. This is similar to the process where cults (or drug addiction) take over people's lives, and similar to the process where narcissists interfere in the lives of others -- by staking a claim and gradually dominating the person or project.
Then they Add Bloat -- more features. GitHub is friendly to use, you don't have to care about how Git works to use it (this is true of many GitHub clones as well, as even I do not really care how Git works very much. It took a long time for someone to even drag me towards GitHub for code hosting, until they were acquired and I stopped using it) and due to its GLOBAL size, nobody can or ought to reproduce its network effects.
I understand the draw of network effects. That's why larger federated instances of code hosts are going to be more popular than smaller instances. We really need a mix -- smaller instances to be easy to host and autonomous, larger instances to draw people away from even more gigantic code silos. We can't get away from network effects (just like the War on Drugs will never work) but we can make them easier and less troublesome (or safer) to deal with.
Finally, the Original is trashed, and the SABOTage is complete. This has happened with Python against Python 2, despite protests from seasoned and professional developers, it was deliberately attempted with Systemd against not just sysvinit but ALL alternatives -- Free software acts like proprietary software when it treats the existence of alternatives as a problem to be solved. I personally never trust a project with developers as arrogant as that.
... ... ...
There's a meme about creepy vans with "FREE CANDY" painted on the side, which I took one of the photos from and edited it so that it said "FEATURES" instead. This is more or less how I feel about new features in general, given my experience with their abuse in development, marketing and the takeover of formerly good software projects.
People then accuse me of being against features, of course. As with the Dijkstra article, the real problem isn't Basic itself. The problem isn't features per se (though they do play a very key role in this problem) and I'm not really against features -- or candy, for that matter.
I'm against these things being used as bait, to entrap people in an unpleasant situation that makes escape difficult. You know, "lock-in". Don't get in the van -- don't even go NEAR the van.
Candy is nice, and some features are nice too. But we would all be better off if we could get the candy safely, and delete the creepy horrible van that comes with it. That's true whether the creepy van is GitHub, or surveillance by GIAFAM, or a Leviathan "init" system, or just breaking decades of perfectly good Python code, to try to force people to develop differently because Google or Microsoft (who both have had heavy influence over newer Python development) want to try to force you to -- all while using "free" software.
If all that makes free software "free" is the license -- (yes, it's the primary and key part, it's a necessary ingredient) then putting "free" software on GitHub shouldn't be a problem, right? Not if you're running LibreJS, at least.
In practice, "Free in license only" ignores the fact that if software is effectively free, the user is also effectively free. If free software development gets dragged into doing the bidding of non-free software companies and starts creating lock-in for the user, even if it's external or peripheral, then they simply found an effective way around the true goal of the license. They did it with Tivoisation, so we know that it's possible. They've done this in a number of ways, and they're doing it now.
If people are trying to make the user less free, and they're effectively making the user less free, maybe the license isn't an effective monolithic solution. The cost of freedom is eternal vigilance. They never said "The cost of freedom is slapping a free license on things", as far as I know. (Of course it helps). This really isn't a straw man, so much as a rebuttal to the extremely glib take on software freedom in general that permeates development communities these days.
But the benefits of Free software, free candy and new features are all meaningless, if the user isn't in control.
Don't get in the van.
"The freedom to NOT run the software, to be free to avoid vendor lock-in through appropriate modularization/encapsulation and minimized dependencies; meaning any free software can be replaced with a user's preferred alternatives (freedom 4)." – Peter Boughton
... ... ...
Dec 01, 2019 | turcopolier.typepad.com
Academic Conformism is the road to "1984."
The world is filled with conformism and groupthink. Most people do not wish to think for themselves. Thinking for oneself is dangerous, requires effort and often leads to rejection by the herd of one's peers.
The profession of arms, the intelligence business, the civil service bureaucracy, the wondrous world of groups like the League of Women Voters, Rotary Club as well as the empire of the thinktanks are all rotten with this sickness, an illness which leads inevitably to stereotyped and unrealistic thinking, thinking that does not reflect reality.
The worst locus of this mentally crippling phenomenon is the world of the academics. I have served on a number of boards that awarded Ph.D and post doctoral grants. I was on the Fulbright Fellowship federal board. I was on the HF Guggenheim program and executive boards for a long time. Those are two examples of my exposure to the individual and collective academic minds.
As a class of people I find them unimpressive. The credentialing exercise in acquiring a doctorate is basically a nepotistic process of sucking up to elders and a crutch for ego support as well as an entrance ticket for various hierarchies, among them the world of the academy. The process of degree acquisition itself requires sponsorship by esteemed academics who recommend candidates who do not stray very far from the corpus of known work in whichever narrow field is involved. The endorsements from RESPECTED academics are often decisive in the award of grants.
This process is continued throughout a career in academic research. PEER REVIEW is the sine qua non for acceptance of a "paper," invitation to career making conferences, or to the Holy of Holies, TENURE.
This life experience forms and creates CONFORMISTS, people who instinctively boot-lick their fellows in a search for the "Good Doggy" moments that make up their lives. These people are for sale. Their price may not be money, but they are still for sale. They want to be accepted as members of their group. Dissent leads to expulsion or effective rejection from the group.
This mentality renders doubtful any assertion that a large group of academics supports any stated conclusion. As a species academics will say or do anything to be included in their caste.
This makes them inherently dangerous. They will support any party or parties, of any political inclination if that group has the money, and the potential or actual power to maintain the academics as a tribe. pl
doug , 01 December 2019 at 01:01 PM
Sir,J , 01 December 2019 at 01:22 PMThat is the nature of tribes and humans are very tribal. At least most of them. Fortunately, there are outliers. I was recently reading "Political Tribes" which was written by a couple who are both law professors that examines this.
Take global warming (aka the rebranded climate change). Good luck getting grants to do any skeptical research. This highly complex subject which posits human impact is a perfect example of tribal bias.
My success in the private sector comes from consistent questioning what I wanted to be true to prevent suboptimal design decisions.
I also instinctively dislike groups that have some idealized view of "What is to be done?"
As Groucho said: "I refuse to join any club that would have me as a member"
Reminds one of the Borg, doesn't it?Factotum , 01 December 2019 at 03:18 PMThe 'isms' had it, be it Nazism, Fascism, Communism, Totalitarianism, Elitism all demand conformity and adherence to group think. If one does not co-tow to whichever 'ism' is at play, those outside their group think are persecuted, ostracized, jailed, and executed all because they defy their conformity demands, and defy allegiance to them.
One world, one religion, one government, one Borg. all lead down the same road to -- Orwell's 1984.
David Halberstam: The Best and the Brightest. (Reminder how the heck we got into Vietnam, when the best and the brightest were serving as presidential advisors.)Also good Halberstam re-read: The Powers that Be - when the conservative media controlled the levers of power; not the uber-liberal one we experience today.
Nov 15, 2019 | developers.slashdot.org
No, not really, don't think so. ( Score: 2 )

OOP has been a golden hammer ever since Java, but we've noticed the downsides quite a while ago.

Ruby on rails was the convention over configuration darling child of the last decade and stopped a large piece of the circular abstraction craze that Java was/is.

Every half-assed PHP toy project is kicking Javas ass on the web and it's because WordPress gets the job done, fast, despite having a DB model that was built by non-programmers on crack.

Most critical processes are procedural, even today if only for the
bradley13 ( 1118935 ) , Monday July 22, 2019 @01:15AM ( #58963622 ) Homepage
It depends... ( Score: 5, Insightful)
There are a lot of mediocre programmers who follow the principle "if you have a hammer, everything looks like a nail". They know OOP, so they think that every problem must be solved in an OOP way.
In fact, OOP works well when your program needs to deal with relatively simple, real-world objects: the modeling follows naturally. If you are dealing with abstract concepts, or with highly complex real-world objects, then OOP may not be the best paradigm.
In Java, for example, you can program imperatively, by using static methods. The problem is knowing when to break the rules. For example, I am working on a natural language system that is supposed to generate textual answers to user inquiries. What "object" am I supposed to create to do this task? An "Answer" object that generates itself? Yes, that would work, but an imperative, static "generate answer" method makes at least as much sense.
There are different ways of thinking, different ways of modelling a problem. I get tired of the purists who think that OO is the only possible answer. The world is not a nail.
Mar 02, 2007 | blog.codinghorror.com
I'm not a fan of object orientation for the sake of object orientation. Often the proper OO way of doing things ends up being a productivity tax. Sure, objects are the backbone of any modern programming language, but sometimes I can't help feeling that slavish adherence to objects is making my life a lot more difficult. I've always found inheritance hierarchies to be brittle and unstable, and then there's the massive object-relational divide to contend with. OO seems to bring at least as many problems to the table as it solves.

Perhaps Paul Graham summarized it best:
Object-oriented programming generates a lot of what looks like work. Back in the days of fanfold, there was a type of programmer who would only put five or ten lines of code on a page, preceded by twenty lines of elaborately formatted comments. Object-oriented programming is like crack for these people: it lets you incorporate all this scaffolding right into your source code. Something that a Lisp hacker might handle by pushing a symbol onto a list becomes a whole file of classes and methods. So it is a good tool if you want to convince yourself, or someone else, that you are doing a lot of work.

Eric Lippert observed a similar occupational hazard among developers. It's something he calls object happiness.
What I sometimes see when I interview people and review code is symptoms of a disease I call Object Happiness. Object Happy people feel the need to apply principles of OO design to small, trivial, throwaway projects. They invest lots of unnecessary time making pure virtual abstract base classes -- writing programs where IFoos talk to IBars but there is only one implementation of each interface! I suspect that early exposure to OO design principles divorced from any practical context that motivates those principles leads to object happiness. People come away as OO True Believers rather than OO pragmatists.

I've seen so many problems caused by excessive, slavish adherence to OOP in production applications. Not that object oriented programming is inherently bad, mind you, but a little OOP goes a very long way. Adding objects to your code is like adding salt to a dish: use a little, and it's a savory seasoning; add too much and it utterly ruins the meal. Sometimes it's better to err on the side of simplicity, and I tend to favor the approach that results in less code, not more.
Given my ambivalence about all things OO, I was amused when Jon Galloway forwarded me a link to Patrick Smacchia's web page . Patrick is a French software developer. Evidently the acronym for object oriented programming is spelled a little differently in French than it is in English: POO.
That's exactly what I've imagined when I had to work on code that abused objects.
But POO code can have another, more constructive, meaning. This blog author argues that OOP pales in importance to POO. Programming fOr Others , that is.
The problem is that programmers are taught all about how to write OO code, and how doing so will improve the maintainability of their code. And by "taught", I don't just mean "taken a class or two". I mean: have it pounded into their heads in school, spend years as a professional being mentored by senior OO "architects" and only then finally kind of understand how to use it properly, some of the time. Most engineers wouldn't consider using a non-OO language, even if it had amazing features. The hype is that major.

So what, then, about all that code programmers write before their 10 years of OO apprenticeship is complete? Is it just doomed to suck? Of course not, as long as they apply other techniques than OO. These techniques are out there but aren't as widely discussed.
The improvement [I propose] has little to do with any specific programming technique. It's more a matter of empathy; in this case, empathy for the programmer who might have to use your code. The author of this code actually thought through what kinds of mistakes another programmer might make, and strove to make the computer tell the programmer what they did wrong.
In my experience the best code, like the best user interfaces, seems to magically anticipate what you want or need to do next. Yet it's discussed infrequently relative to OO. Maybe what's missing is a buzzword. So let's make one up, Programming fOr Others, or POO for short.
The principles of object oriented programming are far more important than mindlessly, robotically instantiating objects everywhere:
- Information hiding and encapsulation
- Simplicity
- Re-use
- Maintainability and empathy
Stop worrying so much about the objects. Concentrate on satisfying the principles of object orientation rather than object-izing everything. And most of all, consider the poor sap who will have to read and support this code after you're done with it . That's why POO trumps OOP: programming as if people mattered will always be a more effective strategy than satisfying the architecture astronauts .
Nov 15, 2019 | www.quora.com
Daniel Korenblum, works at Bayes Impact. Updated May 25, 2015
There are many reasons why non-OOP languages and paradigms/practices are on the rise, contributing to the relative decline of OOP.
First off, there are a few things about OOP that many people don't like, which makes them interested in learning and using other approaches. Below are some references from the OOP wiki article:
taken from:
- Cardelli, Luca (1996). "Bad Engineering Properties of Object-Oriented Languages". ACM Comput. Surv. (ACM) 28 (4es): 150. doi:10.1145/242224.242415. ISSN 0360-0300. Retrieved 21 April 2010.
- Armstrong, Joe. In Coders at Work: Reflections on the Craft of Programming. Peter Seibel, ed. Codersatwork.com , Accessed 13 November 2009.
- Stepanov, Alexander. "STLport: An Interview with A. Stepanov". Retrieved 21 April 2010.
- Rich Hickey, JVM Languages Summit 2009 keynote, Are We There Yet? November 2009. (edited)
Also see this post and discussion on hackernews:
Object Oriented Programming is an expensive disaster which must end
One of the comments therein linked a few other good wikipedia articles which also provide relevant discussion on increasingly-popular alternatives to OOP:
- Modularity and design-by-contract are better implemented by module systems ( Standard ML )
- Encapsulation is better served by lexical scope ( http://en.wikipedia.org/wiki/Sco... )
- Data is better modelled by algebraic datatypes ( Algebraic data type )
- Type-checking is better performed structurally ( Structural type system )
- Polymorphism is better handled by first-class functions ( First-class function ) and parametricity ( Parametric polymorphism )
Personally, I sometimes think that OOP is a bit like an antique car. Sure, it has a bigger engine and fins and lots of chrome etc., it's fun to drive around, and it does look pretty. It is good for some applications, all kidding aside. The real question is not whether it's useful or not, but for how many projects?

When I'm done building an OOP application, it's like a large and elaborate structure. Changing the way objects are connected and organized can be hard, and the design choices of the past tend to become "frozen" or locked in place for all future times. Is this the best choice for every application? Probably not.
If you want to drive 500-5000 miles a week in a car that you can fix yourself without special ordering any parts, it's probably better to go with a Honda or something more easily adaptable than an antique vehicle-with-fins.
Finally, the best example is the growth of JavaScript as a language (officially called EcmaScript now?). Although JavaScript/EcmaScript (JS/ES) is not a pure functional programming language, it is much more "functional" than "OOP" in its design. JS/ES was the first mainstream language to promote the use of functional programming concepts such as higher-order functions, currying, and monads.
The recent growth of the JS/ES open-source community has not only been impressive in its extent but also unexpected from the standpoint of many established programmers. This is partly evidenced by the overwhelming number of active repositories on Github using JavaScript/EcmaScript:
Top Github Languages of 2014 (So far)
Because JS/ES treats both functions and objects as structs/hashes, it encourages us to blur the line dividing them in our minds. This is a division that many other languages impose - "there are functions and there are objects/variables, and they are different".
This seemingly minor (and often confusing) design choice enables a lot of flexibility and power. In part this seemingly tiny detail has enabled JS/ES to achieve its meteoric growth between 2005-2015.
This partially explains the rise of JS/ES and the corresponding relative decline of OOP. OOP had become a "standard" or "fixed" way of doing things for a while, and there will probably always be a time and place for OOP. But as programmers we should avoid getting too stuck in one way of thinking / doing things, because different applications may require different approaches.
Above and beyond the OOP-vs-non-OOP debate, one of our main goals as engineers should be custom-tailoring our designs by skillfully choosing the most appropriate programming paradigm(s) for each distinct type of application, in order to maximize the "bang for the buck" that our software provides.
Although this is something most engineers can agree on, we still have a long way to go until we reach some sort of consensus about how best to teach and hone these skills. This is not only a challenge for us as programmers today, but also a huge opportunity for the next generation of educators to create better guidelines and best practices than the current OOP-centric pedagogical system.
Here are a couple of good books that elaborate on these ideas and techniques in more detail. They are free-to-read online:
Mike MacHenry, software engineer, improv comedian, maker Answered Feb 14, 2015 · Author has 286 answers and 513.7k answer views Because the phrase itself was over-hyped to an extraordinary degree. Then, as is common with over-hyped things, many other things took on that phrase as a name. Then people got confused and stopped calling what they are doing OOP.
It's like, artificial intelligence, now that I think about it. There aren't many people these days that say they do AI to anyone but the laymen. They would say they do machine learning or natural language processing or something else. These are fields that the vastly over hyped and really nebulous term AI used to describe but then AI ( the term ) experienced a sharp decline while these very concrete fields continued to flourish.
Nov 15, 2019 | developers.slashdot.org
spazmonkey ( 920425 ) , Monday July 22, 2019 @12:22AM ( #58963430 )
its the way OOP is taught ( Score: 5, Interesting)
There is nothing inherently wrong with some of the functionality it offers; it's the way OOP is abused as a substitute for basic good programming practices.
I was helping interns - students from a local CC - deal with idiotic assignments like making a random number generator USING CLASSES, or displaying text to a screen USING CLASSES. Seriously, WTF?
A room full of career programmers could not even figure out how you were supposed to do that, much less why.
What was worse was a lack of understanding of basic programming skill or even the use of variables, as the kids were being taught EVERY program was to be assembled solely by sticking together bits of libraries.
There was no coding, just hunting for snippets of preexisting code to glue together. Zero idea they could add their own, much less how to do it. OOP isn't the problem; it's the idea that it replaces basic programming skills and best practice.
sjames ( 1099 ) , Monday July 22, 2019 @01:30AM ( #58963680 ) Homepage Journal
Re:its the way OOP is taught ( Score: 5, Interesting)
That and the obsession with absofrackinglutely EVERYTHING just having to be a formally declared object, including the whole program being an object with a run() method.
Some things actually cry out to be objects, some not so much. Generally, I find that my most readable and maintainable code turns out to be a procedural program that manipulates objects.
Even there, some things just naturally want to be a struct or just an array of values.
The same is true of most ingenious ideas in programming. It's one thing if code is demonstrating a particular idea, but production code is supposed to be there to do work, not grind an academic ax.
For example, slavish adherence to "patterns". They're quite useful for thinking about code and talking about code, but they shouldn't be the end of the discussion. They work better as a starting point. Some programs seem to want patterns to be mixed and matched.
In reality those problems are just cargo cult programming one level higher.
I suspect a lot of that is because too many developers barely grasp programming and never learned to go beyond the patterns they were explicitly taught.
When all you have is a hammer, the whole world looks like a nail.
Nov 15, 2019 | developers.slashdot.org
mfnickster ( 182520 ) , Monday July 22, 2019 @09:54AM ( #58965660 )
Re:Tiresome ( Score: 5, Interesting)
Inheritance, while not "inherently" bad, is often the wrong solution. See: Why extends is evil [javaworld.com]
Composition is frequently a more appropriate choice. Aaron Hillegass wrote this funny little anecdote in Cocoa Programming for Mac OS X [google.com]:
"Once upon a time, there was a company called Taligent. Taligent was created by IBM and Apple to develop a set of tools and libraries like Cocoa. About the time Taligent reached the peak of its mindshare, I met one of its engineers at a trade show.
I asked him to create a simple application for me: A window would appear with a button, and when the button was clicked, the words 'Hello, World!' would appear in a text field. The engineer created a project and started subclassing madly: subclassing the window and the button and the event handler.
Then he started generating code: dozens of lines to get the button and the text field onto the window. After 45 minutes, I had to leave. The app still did not work. That day, I knew that the company was doomed. A couple of years later, Taligent quietly closed its doors forever."
Nov 15, 2019 | developers.slashdot.org
Darinbob ( 1142669 ) , Monday July 22, 2019 @02:00AM ( #58963760 )
Re:The issue ( Score: 5, Insightful)
Almost every programming methodology can be abused by people who really don't know how to program well, or who don't want to. They'll happily create frameworks, implement new development processes, and chart tons of metrics, all while avoiding the work of getting the job done. In some cases the person who writes the most code is the same one who gets the least amount of useful work done.
So, OOP can be misused the same way. Never mind that OOP essentially began very early and has been reimplemented over and over, even before Alan Kay. Ie, files in Unix are essentially an object oriented system. It's just data encapsulation and separating work into manageable modules. That's how it was before anyone ever came up with the dumb name "full-stack developer".
Nov 15, 2019 | developers.slashdot.org
(medium.com) 782 Posted by EditorDavid on Monday July 22, 2019 @12:04AM from the OOPs dept. Senior full-stack engineer Ilya Suzdalnitski recently published a lively 6,000-word essay calling object-oriented programming "a trillion dollar disaster."
Precious time and brainpower are being spent thinking about "abstractions" and "design patterns" instead of solving real-world problems... Object-Oriented Programming (OOP) has been created with one goal in mind -- to manage the complexity of procedural codebases. In other words, it was supposed to improve code organization . There's no objective and open evidence that OOP is better than plain procedural programming ...
Instead of reducing complexity, it encourages promiscuous sharing of mutable state and introduces additional complexity with its numerous design patterns . OOP makes common development practices, like refactoring and testing, needlessly hard...
Nov 15, 2019 | developers.slashdot.org
cardpuncher ( 713057 ) , Monday July 22, 2019 @03:06AM ( #58963948 )
Re:The issue ( Score: 5, Insightful)
As a developer who started in the days of FORTRAN (when it was all-caps), I've watched the rise of OOP with some curiosity. I think there's a general consensus that abstraction and re-usability are good things - they're the reason subroutines exist - the issue is whether they are ends in themselves.
I struggle with the whole concept of "design patterns". There are clearly common themes in software, but there seems to be a great deal of pressure these days to make your implementation fit some pre-defined template rather than thinking about the application's specific needs for state and concurrency. I have seen some rather eccentric consequences of "patternism".
Correctly written, OOP code allows you to encapsulate just the logic you need for a specific task and to make that specific task available in a wide variety of contexts by judicious use of templating and virtual functions that obviate the need for "refactoring".
Badly written, OOP code can have as many dangerous side effects and as much opacity as any other kind of code. However, I think the key factor is not the choice of programming paradigm, but the design process.
You need to think first about what your code is intended to do and in what circumstances it might be reused. In the context of a larger project, it means identifying commonalities and deciding how best to implement them once. You need to document that design and review it with other interested parties. You need to document the code with clear information about its valid and invalid use. If you've done that, testing should not be a problem.
Some people seem to believe that OOP removes the need for some of that design and documentation. It doesn't and indeed code that you intend to be reused needs *more* design and documentation than the glue that binds it together in any one specific use case. I'm still a firm believer that coding begins with a pencil, not with a keyboard. That's particularly true if you intend to design abstract interfaces that will serve many purposes. In other words, it's more work to do OOP properly, so only do it if the benefits outweigh the costs - and that usually means you not only know your code will be genuinely reusable but will also genuinely be reused.
Rockoon ( 1252108 ) , Monday July 22, 2019 @04:23AM ( #58964192 )
Re:The issue ( Score: 5 , Insightful)
> I struggle with the whole concept of "design patterns".
Because design patterns are stupid.
A reasonable programmer can understand reasonable code so long as the data is documented even when the code isn't documented, but will struggle immensely if it were the other way around.
Bad programmers create objects for objects' sake, and because of that they have to follow so-called "design patterns", because no amount of code commenting makes the code easily understandable when it's a spaghetti web of interacting "objects". The "design patterns" don't make the code easier to read, just easier to write.
Those OOP fanatics, if they do "document" their code, add comments like "// increment the index" which is useless shit.
The big win of OOP is only in the encapsulation of the data with the code, and great code treats objects like data structures with attached subroutines, not as "objects", and document the fuck out of the contained data, while more or less letting the code document itself.
Nov 15, 2019 | developers.slashdot.org
Waffle Iron ( 339739 ) , Monday July 22, 2019 @01:22AM ( #58963646 )
Re:680,303 lines ( Score: 4 , Insightful)
> 680,303 lines of Java code in the main project in my system.
Probably would've been more like 100,000 lines if you had used a language whose ecosystem doesn't goad people into writing so many superfluous layers of indirection, abstraction and boilerplate.
Dec 18, 2017 | esr.ibiblio.org
Posted on 2017-12-18 by esr In recent discussion on this blog of the GCC repository transition and reposurgeon, I observed "If I'd been restricted to C, forget it – reposurgeon wouldn't have happened at all"
I should be more specific about this, since I think the underlying problem is general to a great deal more than the implementation of reposurgeon. It ties back to a lot of recent discussion here of C, Python, Go, and the transition to a post-C world that I think I see happening in systems programming.
(This post perhaps best viewed as a continuation of my three-part series: The long goodbye to C , The big break in computer languages , and Language engineering for great justice .)
I shall start by urging that you must take me seriously when I speak of C's limitations. I've been programming in C for 35 years. Some of my oldest C code is still in wide production use. Speaking from that experience, I say there are some things only a damn fool tries to do in C, or in any other language without automatic memory management (AMM, for the rest of this article).
This is another angle on Greenspun's Law: "Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp." Anyone who's been in the trenches long enough gets that Greenspun's real point is not about C or Fortran or Common Lisp. His maxim could be generalized in a Henry-Spencer-does-Santayana style as this:
"At any sufficient scale, those who do not have automatic memory management in their language are condemned to reinvent it, poorly."
In other words, there's a complexity threshold above which lack of AMM becomes intolerable. Lack of it either makes expressive programming in your application domain impossible or sends your defect rate skyrocketing, or both. Usually both.
When you hit that point in a language like C (or C++), your way out is usually to write an ad-hoc layer or a bunch of semi-disconnected little facilities that implement parts of an AMM layer, poorly. Hello, Greenspun's Law!
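To make "an ad-hoc layer that implements parts of an AMM layer, poorly" concrete, here is a minimal sketch of the kind of hand-rolled reference counting such code tends to grow; every name in it is invented for illustration, not taken from reposurgeon or any other real codebase:

```cpp
#include <cstdlib>
#include <cstring>

// A C-style "object" with a refcount bolted on -- the poor man's AMM layer.
struct Blob {
    int   refcount;
    char* data;
};

Blob* blob_new(const char* s) {
    Blob* b = static_cast<Blob*>(std::malloc(sizeof(Blob)));
    b->refcount = 1;
    b->data = static_cast<char*>(std::malloc(std::strlen(s) + 1));
    std::strcpy(b->data, s);
    return b;
}

Blob* blob_ref(Blob* b) { if (b) ++b->refcount; return b; }

void blob_unref(Blob* b) {
    // Forget one unref on any error path and you leak; call it twice and you
    // get a double-free. Now multiply by every struct type in the program.
    if (b && --b->refcount == 0) {
        std::free(b->data);
        std::free(b);
    }
}
```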
It's not particularly the line count of your source code driving this, but rather the complexity of the data structures it uses internally; I'll call this its "greenspunity". Large programs that process data in simple, linear, straight-through ways may evade needing an ad-hoc AMM layer. Smaller ones with gnarlier data management (higher greenspunity) won't. Anything that has to do – for example – graph theory is doomed to need one (why, hello, there, reposurgeon!)
There's a trap waiting here. As the greenspunity rises, you are likely to find that more and more of your effort and defect chasing is related to the AMM layer, and proportionally less goes to the application logic. Redoubling your effort, you increasingly miss your aim.
Even when you're merely at the edge of this trap, your defect rates will be dominated by issues like double-free errors and malloc leaks. This is commonly the case in C/C++ programs of even low greenspunity.
Sometimes you really have no alternative but to be stuck with an ad-hoc AMM layer. Usually you get pinned to this situation because real AMM would impose latency costs you can't afford. The major case of this is operating-system kernels. I could say a lot more about the costs and contortions this forces you to assume, and perhaps I will in a future post, but it's out of scope for this one.
On the other hand, reposurgeon is representative of a very large class of "systems" programs that don't have these tight latency constraints. Before I get back to the implications of not being latency constrained, one last thing – the most important thing – about escalating AMM-layer complexity.
At high enough levels of greenspunity, the effort required to build and maintain your ad-hoc AMM layer becomes a black hole. You can't actually make any progress on the application domain at all – when you try it's like being nibbled to death by ducks.
Now consider this prospectively, from the point of view of someone like me who has architect skill. A lot of that skill is being pretty good at visualizing the data flows and structures – and thus estimating the greenspunity – implied by a problem domain. Before you've written any code, that is.
If you see the world that way, possible projects will be divided into "Yes, can be done in a language without AMM." versus "Nope. Nope. Nope. Not a damn fool, it's a black hole, ain't nohow going there without AMM."
This is why I said that if I were restricted to C, reposurgeon would never have happened at all. I wasn't being hyperbolic – that evaluation comes from a cool and exact sense of how far reposurgeon's problem domain floats above the greenspunity level where an ad-hoc AMM layer becomes a black hole. I shudder just thinking about it.
Of course, where that black-hole level of ad-hoc AMM complexity is varies by programmer. But, though software is sometimes written by people who are exceptionally good at managing that kind of hair, it then generally has to be maintained by people who are less so.
The really smart people in my audience have already figured out that this is why Ken Thompson, the co-designer of C, put AMM in Go, in spite of the latency issues.
Ken understands something large and simple. Software expands, not just in line count but in greenspunity, to meet hardware capacity and user demand. In languages like C and C++ we are approaching a point of singularity at which typical – not just worst-case – greenspunity is so high that the ad-hoc AMM becomes a black hole, or at best a trap nigh-indistinguishable from one.
Thus, Go. It didn't have to be Go; I'm not actually being a partisan for that language here. It could have been (say) Ocaml, or any of half a dozen other languages I can think of. The point is the combination of AMM with compiled-code speed is ceasing to be a luxury option; increasingly it will be baseline for getting most kinds of systems work done at all.
Sociologically, this implies an interesting split. Historically the boundary between systems work under hard latency constraints and systems work without it has been blurry and permeable. People on both sides of it coded in C and skillsets were similar. People like me who mostly do out-of-kernel systems work but have code in several different kernels were, if not common, at least not odd outliers.
Increasingly, I think, this will cease being true. Out-of-kernel work will move to Go, or languages in its class. C – or non-AMM languages intended as C successors, like Rust – will keep kernels and real-time firmware, at least for the foreseeable future. Skillsets will diverge.
It'll be a more fragmented systems-programming world. Oh well; one does what one must, and the tide of rising software complexity is not about to be turned.

144 thoughts on "C, Python, Go, and the Generalized Greenspun Law"
David Collier-Brown on 2017-12-18 at 17:38:05 said: Andrew Forber quasily-accidentally created a similar truth: any sufficiently complex program using overlays will eventually contain an implementation of virtual memory. Reply ↓
esr on 2017-12-18 at 17:40:45 said: >Andrew Forber quasily-accidentally created a similar truth: any sufficiently complex program using overlays will eventually contain an implementation of virtual memory.
Oh, neat. I think that's a closer approximation to the most general statement than Greenspun's, actually. Reply ↓
Alex K. on 2017-12-20 at 09:50:37 said: For today, maybe -- but the first time I had Greenspun's Tenth quoted at me was in the late '90s. [I know this was around/just before the first C++ standard, maybe contrasting it to this new upstart Java thing?] This was definitely during the era where big computers still did your serious work, and pretty much all of it was in either C, COBOL, or FORTRAN. [Yeah, yeah, I know– COBOL is all caps for being an acronym, while Fortran ain't–but since I'm talking about an earlier epoch of computing, I'm going to use the conventions of that era.]
Now the Object-Oriented paradigm has really mitigated this to an enormous degree, but I seem to recall at that time the argument was that multimethod dispatch (a benefit so great you happily accept the flaw of memory management) was the Killer Feature of LISP.
Given the way the other advantage I would have given Lisp over the past two decades–anonymous functions [lambdas] and treating them as first-class values–are creeping into a more mainstream usage, I think automated memory management is the last visible "Lispy" feature people will associate with Greenspun. [What, are you now visualizing lisp macros? Perish the thought–anytime I see a foot cannon that big, I stop calling it a feature ] Reply ↓
Mycroft Jones on 2017-12-18 at 17:41:04 said: After looking at the Linear Lisp paper, I think that is where Lutz Mueller got One Reference Only memory management from. For automatic memory management, I'm a big fan of ORO. Not sure how to apply it to a statically typed language though. Wish it was available for Go. ORO is extremely predictable and repeatable, not stuttery. Reply ↓
lliamander on 2017-12-18 at 19:28:04 said: > Not sure how to apply it to a statically typed language though.
Clean is probably what you would be looking for: https://en.wikipedia.org/wiki/Clean_(programming_language) Reply ↓
Jeff Read on 2017-12-19 at 00:38:57 said: If Lutz was inspired by Linear Lisp, he didn't cite it. Actually ORO is more like region-based memory allocation with a single region: values which leave the current scope are copied which can be slow if you're passing large lists or vectors around.
Linear Lisp is something quite a bit different, and allows for arbitrary data structures with arbitrarily deep linking within, so long as there are no cycles in the data structures. You can even pass references into and out of functions if you like; what you can't do is alias them. As for statically typed programming languages well, there are linear type systems , which as lliamander mentioned are implemented in Clean.
Newlisp in general is smack in the middle between Rust and Urbit in terms of cultishness of its community, and that scares me right off it. That and it doesn't really bring anything to the table that couldn't be had by "old" lisps (and Lutz frequently doubles down on mistakes in the design that had been discovered and corrected decades ago by "old" Lisp implementers). Reply ↓
Gary E. Miller on 2017-12-18 at 18:02:10 said: For a long time I've been holding out hope for a 'standard' garbage collector library for C. But not gonna hold my breath. One probable reason Ken Thompson had to invent Go is to go around the tremendous difficulty in getting new stuff into C. Reply ↓
esr on 2017-12-18 at 18:40:53 said: >For a long time I've been holding out hope for a 'standard' garbage collector library for C. But not gonna hold my breath.
Yeah, good idea not to. People as smart/skilled as you and me have been poking at this problem since the 1980s and it's pretty easy to show that you can't do better than Boehm–Demers–Weiser, which has limitations that make it impractical. Sigh Reply ↓
John Cowan on 2018-04-15 at 00:11:56 said: What's impractical about it? I replaced the native GC in the standard implementation of the Joy interpreter with BDW, and it worked very well. Reply ↓
esr on 2018-04-15 at 08:30:12 said: >What's impractical about it? I replaced the native GC in the standard implementation of the Joy interpreter with BDW, and it worked very well.
GCing data on the stack is a crapshoot. Pointers can get mistaken for data and vice-versa. Reply ↓
Konstantin Khomoutov on 2017-12-20 at 06:30:05 said: I think it's not about C. Let me cite a little bit from "The Go Programming Language" (A. Donovan, B. Kernighan) --
in the section about Go influences, it states: "Rob Pike and others began to experiment with CSP implementations as actual languages. The first was called Squeak which provided a language with statically created channels. This was followed by Newsqueak, which offered C-like statement and expression syntax and Pascal-like type notation. It was a purely functional language with garbage collection, again aimed at managing keyboard, mouse, and window events. Channels became first-class values, dynamically created and storable in variables.
The Plan 9 operating system carried these ideas forward in a language called Alef. Alef tried to make Newsqueak a viable system programming language, but its omission of garbage collection made concurrency too painful."
So my takeaway was that AMM was key to get proper concurrency.
Before Go, I dabbled with Erlang (which I enjoy, too), and I'd say there the AMM is also key to making concurrency easy. (Update: the ellipses I put into the citation were eaten by the engine and won't appear when I try to re-edit my comment; sorry.) Reply ↓
tz on 2017-12-18 at 18:29:20 said: I think this is the key insight.
There are programs with zero MM.
There are programs with orderly MM, e.g. unzip (as of 1.1.4) does mallocs and frees in a stacklike formation: malloc a, b, c; free c, b, a. This is laminar, not chaotic flow. Then there is the complex, nonlinear, turbulent flow – chaos. You can't do that in basic C, you need AMM. But it is easier in a language that includes it (and does it well).
Virtual Memory is related to AMM – too often the memory leaks were hidden (think of your O(n**2) for small values of n) – small leaks that weren't visible under ordinary circumstances.
Still, you aren't going to get AMM on the current Arduino variants. At least not easily.
That is where the line is, how much resources. Because you require a medium to large OS, or the equivalent resources to do AMM.
Yet this is similar to using FPGAs or GPUs for blockchain coin mining instead of the CPU. Sometimes you have to go big. Your Mini Cooper might be great most of the time, but sometimes you need a big diesel pickup. I think a Mini would fit in the bed of my F250.
As tasks get bigger they need bigger machines. Reply ↓
Zygo on 2017-12-18 at 18:31:34 said: > Of course, where that black-hole level of ad-hoc AMM complexity is varies by programmer.
I was about to say something about writing an AMM layer before breakfast on the way to writing backtracking parallel graph-searchers at lunchtime, but I guess you covered that. Reply ↓
esr on 2017-12-18 at 18:34:59 said: >I was about to say something about writing an AMM layer before breakfast on the way to writing backtracking parallel graph-searchers at lunchtime, but I guess you covered that.
Well, yeah. I have days like that occasionally, but it would be unwise to plan a project based on the assumption that I will. And deeply foolish to assume that J. Random Programmer will. Reply ↓
tz on 2017-12-18 at 18:32:37 said: C displaced assembler because it had the speed and flexibility while being portable.
Go, or something like it will displace C where they can get just the right features into the standard library including AMM/GC.
Maybe we need Garbage Collecting C. GCC?
One problem is you can't do the pointer aliasing if you have a GC (unless you also do some auxiliary bits which would be hard to maintain). void *x = y; might be decodable, but there are deeper and more complex things a compiler can't detect. If the compiler gets it wrong, you get a memory leak, or you have to constrain the language to prevent things which manipulate pointers even when that is required or clearer. Reply ↓
Zygo on 2017-12-18 at 20:52:40 said: C++11 shared_ptr does handle the aliasing case. Each pointer object has two fields, one for the thing being pointed to, and one for the thing's containing object (or its associated GC metadata). A pointer alias assignment alters the former during the assignment and copies the latter verbatim. The syntax is (as far as a C programmer knows, after a few typedefs) identical to C.
The trouble with applying that idea to C is that the standard pointers don't have space or time for the second field, and heap management isn't standardized at all (free() is provided, but programs are not required to use it or any other function exclusively for this purpose). Change either of those two things and the resulting language becomes very different from C. Reply ↓
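A minimal sketch of the aliasing constructor being described (the Node type and its field are invented for the example): the second shared_ptr shares the first one's control block while its stored pointer refers to a sub-object.

```cpp
#include <iostream>
#include <memory>

struct Node {
    int payload = 42;
};

int main() {
    std::shared_ptr<Node> node = std::make_shared<Node>();

    // Aliasing constructor: 'field' shares node's control block (the second,
    // ownership-tracking field described above) but its stored pointer refers
    // to a member inside the Node.
    std::shared_ptr<int> field(node, &node->payload);

    node.reset();                 // drop the outer reference...
    std::cout << *field << "\n";  // ...the Node stays alive as long as 'field' does
}
```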
IGnatius T Foobar on 2017-12-18 at 18:39:28 said: Eric, I love you, you're a pepper, but you have a bad habit of painting a portrait of J. Random Hacker that is actually a portrait of Eric S. Raymond. The world is getting along with C just fine. 95% of the use cases you describe for needing garbage collection are eliminated with the simple addition of a string class which nearly everyone has in their toolkit. Reply ↓
esr on 2017-12-18 at 18:55:46 said: >The world is getting along with C just fine. 95% of the use cases you describe for needing garbage collection are eliminated with the simple addition of a string class which nearly everyone has in their toolkit.
Even if you're right, the escalation of complexity means that what I'm facing now, J. Random Hacker will face in a couple of years. Yes, not everybody writes reposurgeon but a string class won't suffice for much longer even if it does today. Reply ↓
tz on 2017-12-18 at 19:27:12 said: Here's another key.
I once had a sign:
I don't solve complex problems.
I simplify complex problems and solve them.

Complexity does escalate, or at least in the sense that we could cross oceans a few centuries ago, and can go to the planets and beyond today.
We shouldn't use a rocket ship to get groceries from the local market.
J Random H-1B will face some easily decomposed apparently complex problem and write a pile of spaghetti.
The true nature of a hacker is not so much in being able to handle the deepest and most complex situations, but in being able to recognize which situations are truly complex, and to prefer working hard to simplify and reduce complexity over writing something to handle the complexity. Dealing with a slain dragon's corpse is easier than one that is live, annoyed, and immolating anything within a few hundred yards. Some are capable of handling the latter. The wise knight prefers to reduce the problem to the former. Reply ↓
William O. B'Livion on 2017-12-20 at 02:02:40 said: > J Random H-1B will face some easily decomposed
> apparently complex problem and write a pile of spaghetti.
J Random H-1B will do it with Informatica and Java. Reply ↓
tz on 2017-12-18 at 18:42:33 said: I will add one last "perils of java school" comment.
One of the epic fails of C++ is it being sold as C but where anyone could program because of all the safeties. Instead it created bloatware and the very memory leaks it was supposed to prevent, because the lesser programmers didn't KNOW (grok, understand) what they were doing. It was all "automatic".
This is the opportunity and danger of AMM/GC. It is a tool, and one with hot areas and sharp edges. Wendy (formerly Walter) Carlos had a law that said "Whatever parameter you can control, you must control". Having a really good AMM/GC requires you to respect what it can and cannot do. OK, form a huge – into VM – linked list. Won't it just handle everything? NO! You have to think reference counts, at least in the back of your mind. It simplifies the problem but doesn't eliminate it. It turns the black hole into a pulsar, but you still can be hit.
Many will gloss over and either superficially learn (but can't apply) or ignore the "how to use automatic memory management" in their CS course. Like they didn't bother with pointers, recursion, or multithreading subtleties. Reply ↓
lliamander on 2017-12-18 at 19:36:35 said: I would say that there is a parallel between concurrency models and memory management approaches. Beyond a certain level of complexity, it's simply infeasible for J. Random Hacker to implement a locks-based solution just as it is infeasible for Mr. Hacker to write a solution with manual memory management.
My worry is that by allowing the unsafe sharing of mutable state between goroutines, Go will never be able to achieve the per-process (i.e. language-level process, not OS-level) GC that would allow for the really low latencies necessary for an AMM language to move closer into the kernel space. But certainly insofar as many "systems" level applications don't require extremely low latencies, Go will probably be a viable solution going forward. Reply ↓
Jeff Read on 2017-12-18 at 20:14:18 said: Putting aside the hard deadlines found in real-time systems programming, it has been empirically determined that a GC'd program requires five times as much memory as the equivalent program with explicit memory management. Applications which are both CPU- and RAM-intensive, where you need to have your performance cake and eat it in as little memory as possible, are thus severely constrained in terms of viable languages they could be implemented in. And by "severely constrained" I mean you get your choice of C++ or Rust. (C, Pascal, and Ada are on the table, but none offer quite the same metaprogramming flexibility as those two.)
I think your problems with reposurgeon stem from the fact that you're just running up against the hard upper bound on the vector sum of CPU and RAM efficiency that a dynamic language like Python (even sped up with PyPy) can feasibly deliver on a hardware configuration you can order from Amazon. For applications like that, you need to forgo GC entirely and rely on smart pointers, automatic reference counting, value semantics, and RAII. Reply ↓
esr on 2017-12-18 at 20:27:20 said: > For applications like that, you need to forgo GC entirely and rely on smart pointers, automatic reference counting, value semantics, and RAII.
How many times do I have to repeat "reposurgeon would never have been written under that constraint" before somebody who claims LISP experience gets it? Reply ↓
Jeff Read on 2017-12-18 at 20:48:24 said: You mentioned that reposurgeon wouldn't have been written under the constraints of C. But C++ is not C, and has an entirely different set of constraints. In practice, it's not that far off from Lisp, especially if you avail yourself of those wonderful features in C++1x. C++ programmers talk about "zero-cost abstractions" for a reason.
Semantically, programming in a GC'd language and programming in a language that uses smart pointers and RAII are very similar: you create the objects you need, and they are automatically disposed of when no longer needed. But instead of delegating to a GC which cleans them up whenever, both you and the compiler have compile-time knowledge of when those cleanups will take place, allowing you finer-grained control over how memory -- or any other resource -- is used.
Oh, that's another thing: GC only has something to say about memory -- not file handles, sockets, or any other resource. In C++, with appropriate types, value semantics can be made to apply to those too and they will immediately be destructed after their last use. There is no special "with" construct in C++; you simply construct the objects you need and they're destructed when they go out of scope.
This is how the big boys do systems programming. Again, Go has barely displaced C++ at all inside Google despite being intended for just that purpose. Their entire critical path in search is still C++ code. And it always will be until Rust gains traction.
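A minimal sketch of that point (the file path and mutex are invented for the example): destructors release non-memory resources at scope exit, which is exactly what a Python-style with block spells out explicitly.

```cpp
#include <fstream>
#include <mutex>
#include <string>

std::mutex log_mutex;  // illustrative only

void append_line(const std::string& path, const std::string& line) {
    std::lock_guard<std::mutex> guard(log_mutex);  // lock acquired here
    std::ofstream out(path, std::ios::app);        // file opened here
    out << line << '\n';
}   // guard and out go out of scope: mutex released, file flushed and closed
```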
As for my Lisp experience, I know enough to know that Lisp has utterly failed and this is one of the major reasons why. It's not even a decent AI language, because the scruffies won, AI is basically large-scale statistics, and most practitioners these days use C++. Reply ↓
esr on 2017-12-18 at 20:54:08 said: >C++ is not C, and has an entirely different set of constraints. In practice, it's not thst far off from Lisp,
Oh, bullshit. I think you're just trolling, now.
I've been a C++ programmer and know better than this.
But don't argue with me. Argue with Ken Thompson, who designed Go because he knows better than this. Reply ↓
Anthony Williams on 2017-12-19 at 06:02:03 said: Modern C++ is a long way from C++ when it was first standardized in 1998. You should *never* be manually managing memory in modern C++. You want a dynamically sized array? Use std::vector. You want an adhoc graph? Use std::shared_ptr and std::weak_ptr.
Any code I see which uses new or delete, malloc or free will fail code review.
Destructors and the RAII idiom mean that this covers *any* resource, not just memory.
See the C++ Core Guidelines on resource and memory management: http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-resource Reply ↓
esr on 2017-12-19 at 07:53:58 said: >Modern C++ is a long way from C++ when it was first standardized in 1998.
That's correct. Modern C++ is a disaster area of compounded complexity and fragile kludges piled on in a failed attempt to fix leaky abstractions. 1998 C++ had the leaky-abstractions problem, but at least it was drastically simpler. Clue: complexification when you don't even fix the problems is bad .
My experience dates from 2009 and included Boost – I was a senior dev on Battle For Wesnoth. Don't try to tell me I don't know what "modern C++" is like. Reply ↓
Anthony Williams on 2017-12-19 at 08:17:58 said: > My experience dates from 2009 and included Boost – I was a senior dev on Battle For Wesnoth. Don't try to tell me I don't know what "modern C++" is like.
C++ in 2009 with boost was C++ from 1998 with a few extra libraries. I mean that quite literally -- the standard was unchanged apart from minor fixes in 2003.
C++ has changed a lot since then. There have been 3 standards issued, in 2011, 2014, and just now in 2017. Between them, there is a huge list of changes to the language and the standard library, and these are readily available -- both clang and gcc have kept up-to-date with the changes, and even MSVC isn't far behind. Even more changes are coming with C++20.
So, with all due respect, C++ from 2009 is not "modern C++", though there certainly were parts of boost that were leaning that way.
If you are interested, browse the wikipedia entries: https://en.wikipedia.org/wiki/C%2B%2B11 https://en.wikipedia.org/wiki/C%2B%2B14 and https://en.wikipedia.org/wiki/C%2B%2B17 along with articles like https://blog.smartbear.com/development/the-biggest-changes-in-c11-and-why-you-should-care/ http://www.drdobbs.com/cpp/the-c14-standard-what-you-need-to-know/240169034 and https://isocpp.org/files/papers/p0636r0.html Reply ↓
esr on 2017-12-19 at 08:37:11 said: >So, with all due respect, C++ from 2009 is not "modern C++", though there certainly were parts of boost that were leaning that way.
But the foundational abstractions are still leaky. So when you tell me "it's all better now", I don't believe you. I just plain do not.
I've been hearing this soothing song ever since around 1989. "Trust us, it's all fixed." Then I look at the "fixes" and they're horrifying monstrosities like templates – all the dangers of preprocessor macros and a whole new class of Turing-complete nightmares, too! In thirty years I'm certain I'll be hearing that C++2047 solves all the problems this time for sure , and I won't believe a word of it then, either. Reply ↓
Anthony Williams on 2017-12-19 at 08:45:34 said: > But the foundational abstractions are still leaky.
If you would elaborate on this, I would be grateful. What are the problematic leaky abstractions you are concerned about? Reply ↓
esr on 2017-12-19 at 09:26:24 said: >If you would elaborate on this, I would be grateful. What are the problematic leaky abstractions you are concerned about?
Are array accesses bounds-checked? Don't yammer about iterators; what happens if I say foo[3] and foo is dimension 2? Never mind, I know the answer.
Are bare, untyped pointers still in the language? Never mind, I know the answer.
Can I get a core dump from code that the compiler has statically checked and contains no casts? Never mind, I know the answer.
Yes, C has these problems too. But it doesn't pretend not to, and in C I'm never afflicted by masochistic cultists denying that they're problems.
Anthony Williams on 2017-12-19 at 09:54:51 said: Thank you for the list of concerns.
> Are array accesses bounds-checked? Don't yammer about iterators; what happens if I say foo[3] and foo is dimension 2? Never mind, I know the answer.
You are right, bare arrays are not bounds-checked, but std::array provides an at() member function, so arr.at(3) will throw if the array is too small.
Also, ranged-for loops can avoid the need for explicit indexing lots of the time anyway.
> Are bare, untyped pointers still in the language? Never mind, I know the answer.
Yes, void* is still in the language. You need to cast it to use it, which is something that is easy to spot in a code review.
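Two quick illustrations of those answers (values are invented for the example): at() throws instead of silently reading past the end, and void* cannot be dereferenced without a cast that stands out in review.

```cpp
#include <array>
#include <iostream>
#include <stdexcept>

int main() {
    std::array<int, 2> foo{10, 20};
    try {
        std::cout << foo.at(3) << "\n";     // bounds-checked: throws instead of corrupting memory
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << "\n";
    }

    void* p = foo.data();                   // void* still exists...
    // int n = *p;                          // ...but this does not compile;
    int n = *static_cast<int*>(p);          // the cast is easy to spot in a code review
    std::cout << n << "\n";
}
```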
> Can I get a core dump from code that the compiler has statically checked and contains no casts? Never mind, I know the answer.
Probably. Is it possible to write code in any language that dies horribly in an unintended fashion?
> Yes, C has these problems too. But it doesn't pretend not to, and in C I'm never afflicted by masochistic cultists denying that they're problems.
Did I say C++ was perfect? This blog post was about the problems inherent in the lack of automatic memory management in C and C++, and thus why you wouldn't have written reposurgeon if that's all you had. My point is that it is easy to write C++ in a way that doesn't suffer from those problems.
esr on 2017-12-19 at 10:10:11 said: > My point is that it is easy to write C++ in a way that doesn't suffer from those problems.
No, it is not. The error statistics of large C++ programs refute you.
My personal experience on Battle for Wesnoth refutes you.
The persistent self-deception I hear from C++ advocates on this score does nothing to endear the language to me.
Ian Bruene on 2017-12-19 at 11:05:22 said: So what I am hearing in this is: "Use these new standards built on top of the language, and make sure every single one of your dependencies holds to them just as religiously as you are. And if anyone fails at any point in the chain you are doomed."
Cool.
Casey Barker on 2017-12-19 at 11:12:16 said: Using Go has been a revelation, so I mostly agree with Eric here. My only objection is to equating C++03/Boost with "modern" C++. I used both heavily, and given a green field, I would consider C++14 for some of these thorny designs that I'd never have used C++03/Boost for. It's a qualitatively different experience. Just browse a copy of Scott Meyers' _Effective Modern C++_ for a few minutes, and I think you'll at least understand why C++14 users object to the comparison. Modern C++ enables better designs.
Alas, C++ is a multi-layered tool chest. If you stick to the top two shelves, you can build large-scale, complex designs with pretty good safety and nigh unmatched performance. Everything below the third shelf has rusty tools with exposed wires and no blade guards, and on large-scale projects, it's impossible to keep J. Random Programmer from reaching for those tools.
So no, if they keep adding features, C++ 2047 won't materially improve this situation. But there is a contingent (including Meyers) pushing for the *removal* of features. I think that's the only way C++ will stay relevant in the long-term.
http://scottmeyers.blogspot.com/2015/11/breaking-all-eggs-in-c.html Reply ↓
Zygo on 2017-12-19 at 11:52:17 said: My personal experience is that C++11 code (in particular, code that uses closures, deleted methods, auto (a feature you yourself recommended for C with different syntax), and the automatic memory and resource management classes) has fewer defects per developer-year than the equivalent C++03-and-earlier code.
This is especially so if you turn on compiler flags that disable the legacy features (e.g. -Werror=old-style-cast), and treat any legacy C or C++03 code like foreign language code that needs to be buried under a FFI to make it safe to use.
Qualitatively, the defects that do occur are easier to debug in C++11 vs C++03. There are fewer opportunities for the compiler to interpolate in surprising ways because the automatic rules are tighter, the library has better utility classes that make overloads and premature optimization less necessary, the core language has features that make templates less necessary, and it's now possible to explicitly select or rule out invalid candidates for automatic code generation.
I can design in Lisp, but write C++11 without much effort of mental translation. Contrast with C++03, where people usually just write all the Lispy bits in some completely separate language (or create shambling horrors like Boost to try to bandaid over the missing limbs – boost::lambda, anyone? Oh, look, since C++11 they've doubled down on something called boost::phoenix).
Does C++11 solve all the problems? Absolutely not, that would break compatibility. But C++11 is noticeably better than its predecessors. I would say the defect rates are now comparable to Perl with a bunch of custom C modules (i.e. exact defect rate depends on how much you wrote in each language). Reply ↓
NHO on 2017-12-19 at 11:55:11 said: C++ happily turned into a complexity meta-tarpit with "Everything that could be implemented in the STL with templates should be, instead of in the core language". And not deprecating/removing features, instead leaving them there. Reply ↓
Michael on 2017-12-19 at 08:59:41 said: For the curious, can you point to a C++ tutorial/intro that shows how to do it the right way ? Reply ↓
Anthony Williams on 2017-12-19 at 09:58:12 said: I would suggest checking out the C++ Core Guidelines http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines Reply ↓
Michael on 2017-12-19 at 12:09:45 said: Thank you. Not sure this is what I was looking for.
Was thinking more along the lines of "Learning Python" equivalent.
Anthony Williams on 2017-12-19 at 08:26:13 said: > That's correct. Modern C++ is a disaster area of compounded complexity and fragile kludges piled on in a failed attempt to fix leaky abstractions. 1998 C++ had the leaky-abstractions problem, but at least it was drastically simpler. Clue: complexification when you don't even fix the problems is bad.
I agree that there is a lot of complexity in C++. That doesn't mean you have to use all of it. Yes, it makes maintaining legacy code harder, because the older code might use dangerous or complex parts, but for new code we can avoid the danger, and just stick to the simple, safe parts.
The complexity isn't all bad, though. Part of the complexity arises by providing the ability to express more complex things in the language. This can then be used to provide something simple to the user.
Take std::variant as an example. This is a new facility from C++17 that provides a type-safe discriminated variant. If you have a variant that could hold an int or a string and you store an int in it, then attempting to access it as a string will cause an exception rather than a silent error. The code that *implements* std::variant is complex. The code that uses it is simple. Reply ↓
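A minimal sketch of the std::variant behaviour being described (C++17): accessing the wrong alternative throws rather than silently misreading the stored value.

```cpp
#include <iostream>
#include <string>
#include <variant>

int main() {
    std::variant<int, std::string> v = 42;       // currently holds an int

    std::cout << std::get<int>(v) << "\n";       // fine: prints 42

    try {
        std::cout << std::get<std::string>(v);   // wrong alternative...
    } catch (const std::bad_variant_access&) {
        std::cout << "not a string right now\n"; // ...throws instead of a silent error
    }
}
```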
Jeff Read on 2017-12-20 at 09:07:06 said: I won't argue with you. C++ is error-prone (albeit less so than C) and horrid to work in. But for certain classes of algorithmically complex, CPU- and RAM-intensive problems it is literally the only viable choice. And it looks like performing surgery on GCC-scale repos falls into that class of problem.
I'm not even saying it was a bad idea to initially write reposurgeon in Python. Python and even Ruby are great languages to write prototypes or even small-scale production versions of things because of how rapidly they may be changed while you're hammering out the details. But scale comes around to bite you in the ass sooner than most people think and when it does, your choice of language hobbles you in a way that can't be compensated for by throwing more silicon at the problem. And it's in that niche where C++ and Rust dominate, absolutely uncontested. Reply ↓
jim on 2017-12-22 at 06:41:27 said: If you found Rust hard going, you are not a C++ programmer who knows better than this.
You were writing C in C++. Reply ↓
Anthony Williams on 2017-12-19 at 06:15:12 said:
> How many times do I have to repeat "reposurgeon would never have been written under that constraint" before somebody who claims LISP experience gets it?
That speaks to your lack of experience with modern C++, rather than an inherent limitation. *You* might not have written reposurgeon under that constraint, because *you* don't feel comfortable that you wouldn't have ended up with a black hole of AMM. That does not mean that others wouldn't have or couldn't have, or that their code would necessarily be an unmaintainable black hole.
In well-written modern C++, memory management errors are a solved problem. You can just write code, and know that the compiler and library will take care of cleaning up for you, just like with a GC-based system, but with the added benefit that it's deterministic, and can handle non-memory resources such as file handles and sockets too. Reply ↓
esr on 2017-12-19 at 07:59:30 said: >In well-written modern C++, memory management errors are a solved problem
In well-written assembler memory management errors are a solved problem. I hate this idiotic cant repetition about how if you're just good enough for the language it won't hurt you – it sweeps the actual problem under the rug while pretending to virtue. Reply ↓
Anthony Williams on 2017-12-19 at 08:08:53 said: > I hate this idiotic repetition about how if you're just good enough for the language it won't hurt you – it sweeps the actual problem under the rug while pretending to virtue.
It's not about being "just good enough". It's about *not* using the dangerous parts. If you never use manual memory management, then you can't forget to free, for example, and automatic memory management is *easy* to use. std::string is a darn sight easier to use than the C string functions, for example, and std::vector is a darn sight easier to use than dynamic arrays with new. In both cases, the runtime manages the memory, and it is *easier* to use than the dangerous version.
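As a quick illustration of that contrast (the join functions here are invented for the example, not taken from any codebase under discussion):

```cpp
#include <cstdlib>
#include <cstring>
#include <string>
#include <vector>

// The "dangerous" style: caller must remember to free, size bookkeeping by hand.
char* join_c(const char* a, const char* b) {
    char* out = static_cast<char*>(std::malloc(std::strlen(a) + std::strlen(b) + 1));
    std::strcpy(out, a);
    std::strcat(out, b);
    return out;   // who frees this, and on which error path?
}

// The automatic style: the runtime manages the memory, nothing to forget.
std::string join_cpp(const std::string& a, const std::string& b) {
    return a + b;
}

std::vector<int> squares(int n) {
    std::vector<int> v;                        // grows as needed,
    for (int i = 0; i < n; ++i)                // freed when it goes out of scope
        v.push_back(i * i);
    return v;
}
```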
Every language has "dangerous" features that allow you to cause problems. Well-written programs in a given language don't use the dangerous features when there are equivalent ones without the problems. The same is true with C++.
The fact that historically there are areas where C++ didn't provide a good solution, and thus there are programs that don't use the modern solution, and experience the consequential problems is not an inherent problem with the language, but it does make it harder to educate people. Reply ↓
John D. Bell on 2017-12-19 at 10:48:09 said: > It's about *not* using the dangerous parts. Every language has "dangerous" features that allow you to cause problems. Well-written programs in a given language don't use the dangerous features when there are equivalent ones without the problems.
Why not use a language that doesn't have "'dangerous' features"?
NOTES: [1] I am not saying that Go is necessarily that language – I am not even saying that any existing language is necessarily that language.
[2] /me is being someplace between naive and trolling here. Reply ↓
esr on 2017-12-19 at 11:10:15 said: >Why not use a language that doesn't have "'dangerous' features"?
Historically, it was because hardware was weak and expensive – you couldn't afford the overhead imposed by those languages. Now it's because the culture of software engineering has bad habits formed in those days and reflexively flinches from using higher-overhead safe languages, though it should not. Reply ↓
Paul R on 2017-12-19 at 12:30:42 said: Runtime efficiency still matters. That and the ability to innovate are the reasons I think C++ is in such wide use.
To be provocative, I think there are two types of programmer, the ones who watch Eric Niebler on Ranges https://www.youtube.com/watch?v=mFUXNMfaciE&t=4230s and think 'Wow, I want to find out more!' and the rest. The rest can have Go and Rust
D of course is the baby elephant in the room, worth much more attention than it gets. Reply ↓
Michael on 2017-12-19 at 12:53:33 said: Runtime efficiency still matters. That and the ability to innovate are the reasons I think C++ is in such wide use.
Because you can't get runtime efficiency in any other language?
Because you can't innovate in any other language? Reply ↓
Paul R on 2017-12-19 at 13:50:56 said: Obviously not.
Our three main advantages, runtime efficiency, innovation opportunity, building on a base of millions of lines of code that run the internet and an international standard.
Our four main advantages
More seriously, C++ enabled the STL, the STL transforms the approach of its users, with much increased reliability and readability, but no loss of performance. And at the same time your old code still runs. Now that is old stuff, and STL2 is on the way. Evolution.
That's five. Damn Reply ↓
Zygo on 2017-12-19 at 14:14:42 said: > Because you can't innovate in any other language?
That claim sounded odd to me too. C++ looks like the place that well-proven features of younger languages go to die and become fossilized. The standardization process would seem to require it. Reply ↓
Paul R on 2017-12-20 at 06:27:47 said: Such as?
My thought was the language is flexible enough to enable new stuff, and has sufficient weight behind it to get that new stuff actually used.
Generic programming being a prime example.
Michael on 2017-12-20 at 08:19:41 said: My thought was the language is flexible enough to enable new stuff, and has sufficient weight behind it to get that new stuff actually used.
Are you sure it's that, or is it more the fact that the standards committee has forever had a me-too kitchen-sink no-feature-left-behind obsession?
(Makes me wonder if it doesn't share some DNA with the featuritis that has been Microsoft's calling card for so long. – they grew up together.)
Paul R on 2017-12-20 at 11:13:20 said: No, because people come to the standards committee with ideas, and you cannot have too many libraries. You don't pay for what you don't use. Prime directive C++.
Michael on 2017-12-20 at 11:35:06 said: and you cannot have too many libraries. You don't pay for what you don't use.
And this, I suspect, is the primary weakness in your perspective.
Is the defect rate of C++ code better or worse because of that?
Paul R on 2017-12-20 at 15:49:29 said: The rate is obviously lower because I've written less code and library code only survives if it is sound. Are you suggesting that reusing code is a bad idea? Or that an indeterminate number of reimplementations of the same functionality is a good thing?
You're not on the most productive path to effective criticism of C++ here.
Michael on 2017-12-20 at 17:40:45 said: The rate is obviously lower because I've written less code
Please reconsider that statement in light of how defect rates are measured.
Are you suggesting..
Arguing strawmen and words you put in someone's mouth is not the most productive path to effective defense of C++.
But thank you for the discussion.
Paul R on 2017-12-20 at 18:46:53 said: This column is too narrow to have a decent discussion. WordPress should rewrite in C++ or I should dig out my Latin dictionary.
Seriously, extending the reach of libraries that become standardised is hard to criticise, extending the reach of the core language is.
It used to be a thing that C didn't have built in functionality for I/O (for example) rather it was supplied by libraries written in C interfacing to a lower level system interface. This principle seems to have been thrown out of the window for Go and the others. I'm not sure that's a long term win. YMMV.
But use what you like or what your cannot talk your employer out of using, or what you can get a job using. As long as it's not Rust.
Zygo on 2017-12-19 at 12:24:25 said: > Well-written programs in a given language don't use the dangerous features
Some languages have dangerous features that are disabled by default and must be explicitly enabled prior to use. C++ should become one of those languages.
I am very fond of the 'override' keyword in C++11, which allows me to say "I think this virtual method overrides something, and don't compile the code if I'm wrong about that." Making that assertion incorrectly was a huge source of C++ errors for me back in the days when I still used C++ virtual methods instead of lambdas. C++11 solved that problem two completely different ways: one informs me when I make a mistake, and the other makes it impossible to be wrong.
Arguably, one should be able to annotate any C++ block and say "there shall be no manipulation of bare pointers here" or "all array access shall be bounds-checked here" or even " and that's the default for the entire compilation unit." GCC can already emit warnings for these without human help in some cases. Reply ↓
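For readers who have not met it, a minimal sketch of the 'override' behaviour described above (class names are invented for the example): the keyword turns a silent signature mismatch into a compile error.

```cpp
#include <iostream>

struct Widget {
    virtual void draw(int scale) const { std::cout << "Widget x" << scale << "\n"; }
    virtual ~Widget() = default;
};

struct Button : Widget {
    // A signature typo without 'override' would silently declare a new, unrelated
    // virtual function; with 'override' the compiler rejects the mismatch:
    //   void draw(unsigned scale) const override;   // error: does not override
    void draw(int scale) const override { std::cout << "Button x" << scale << "\n"; }
};

int main() {
    Button b;
    const Widget& w = b;
    w.draw(2);   // prints "Button x2": the override really takes part in dispatch
}
```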
Kevin S. Van Horn on 2017-12-20 at 12:20:14 said: Is this a good summary of your objections to C++ smart pointers as a solution to AMM?
1. Circular references. C++ has smart pointer classes that work when your data structures are acyclic, but it doesn't have a good solution for circular references. I'm guessing that reposurgeon's graphs are almost never DAGs.
2. Subversion of AMM. Bare news and deletes are still available, so some later maintenance programmer could still introduce memory leaks. You could forbid the use of bare new and delete in your project, and write a check-in hook to look for violations of the policy, but that's one more complication to worry about, and it would be difficult or impossible to implement reliably due to macros and the general difficulty of parsing C++.
3. Memory corruption. It's too easy to overrun the end of arrays, treat a pointer to a single object as an array pointer, or otherwise corrupt memory. Reply ↓
esr on 2017-12-20 at 15:51:55 said: >Is this a good summary of your objections to C++ smart pointers as a solution to AMM?
That is at least a large subset of my objections, and probably the most important ones. Reply ↓
jim on 2017-12-22 at 07:15:20 said: It is uncommon to find a cyclic graph that cannot be rendered acyclic by weak pointers.
C++17 cheerfully breaks backward compatibility by removing some dangerous idioms, refusing to compile code that should never have been written. Reply ↓
guest on 2017-12-20 at 19:12:01 said: > Circular references. C++ has smart pointer classes that work when your data structures are acyclic, but it doesn't have a good solution for circular references. I'm guessing that reposurgeon's graphs are almost never DAGs.
General graphs with possibly-cyclical references are precisely the workload GC was created to deal with optimally, so ESR is right in a sense that reposurgeon _requires_ a GC-capable language to work. In most other programs, you'd still want to make sure that the extent of the resources that are under GC control is properly contained (which a Rust-like language would help a lot with), but it's possible that even this is not quite worthwhile for reposurgeon. Still, I'd want to make sure that my program is optimized in _other_ possible ways, especially wrt. using memory bandwidth efficiently – and Go looks like it doesn't really allow that. Reply ↓
esr on 2017-12-20 at 20:12:49 said: >Still, I'd want to make sure that my program is optimized in _other_ possible ways, especially wrt. using memory bandwidth efficiently – and Go looks like it doesn't really allow that.
Er, there's any language that does allow it? Reply ↓
Jeff Read on 2017-12-27 at 20:58:43 said: Yes -- ahem -- C++. That's why it's pretty much the only language taken seriously by game developers. Reply ↓
Zygo on 2017-12-21 at 12:56:20 said: > I'm guessing that reposurgeon's graphs are almost never DAGs
Why would reposurgeon's graphs not be DAGs? Some exotic case that comes up with e.g. CVS imports that never arises in a SVN->Git conversion (admittedly the only case I've really looked deeply at)?
Git repos, at least, are cannot-be-cyclic-without-astronomical-effort graphs (assuming no significant advances in SHA1 cracking and no grafts–and even then, all you have to do is detect the cycle and error out). I don't know how a generic revision history data structure could contain a cycle anywhere even if I wanted to force one in somehow. Reply ↓
esr on 2017-12-21 at 15:13:18 said: >Why would reposurgeon's graphs not be DAGs?
The repo graph is, but a lot of the structures have reference loops for fast lookup. For example, a blob instance has a pointer back to the containing repo, as well as being part of the repo through a pointer chain that goes from the repo object to a list of commits to a blob.
Without those loops, navigation in the repo structure would get very expensive. Reply ↓
guest on 2017-12-21 at 15:22:32 said: Aren't these inherently "weak" pointers though? In that they don't imply ownership/live data, whereas the "true" DAG references do? In that case, and assuming you can be sufficiently sure that only DAGs will happen, refcounting (ideally using something like Rust) would very likely be the most efficient choice. No need for a fully-general GC here. Reply ↓
esr on 2017-12-21 at 15:34:40 said: >Aren't these inherently "weak" pointers though? In that they don't imply ownership/live data
I think they do. Unless you're using "ownership" in some sense I don't understand. Reply ↓
jim on 2017-12-22 at 07:31:39 said: A weak pointer does not own the object it points to. A shared pointer does.
When there are zero shared pointers pointing to an object, it gets freed, regardless of how many weak pointers are pointing to it.
Shared pointers and unique pointers own, weak pointers do not own. Reply ↓
jim on 2017-12-22 at 07:23:35 said: In C++11, one would implement a pointer back to the owning object as a weak pointer. Reply ↓
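A minimal sketch of that weak-pointer back-reference pattern (Repo and Blob here are invented stand-ins, not reposurgeon's actual types): the owning chain uses shared_ptr, the back-pointer uses weak_ptr, so the fast-lookup loop keeps nothing alive.

```cpp
#include <memory>
#include <vector>

struct Repo;   // forward declaration

struct Blob {
    std::weak_ptr<Repo> owner;   // back-pointer for fast lookup; does not own
};

struct Repo : std::enable_shared_from_this<Repo> {
    std::vector<std::shared_ptr<Blob>> blobs;    // owning edges form a DAG

    std::shared_ptr<Blob> add_blob() {
        auto b = std::make_shared<Blob>();
        b->owner = shared_from_this();           // a cycle in pointers, not in ownership
        blobs.push_back(b);
        return b;
    }
};

int main() {
    auto repo = std::make_shared<Repo>();
    auto blob = repo->add_blob();
    if (auto r = blob->owner.lock()) {           // navigate back to the container
        // ... use r ...
    }
}   // repo's refcount hits zero here; blobs are freed; no leak despite the loop
```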
jim on 2017-12-23 at 00:40:36 said:
> How many times do I have to repeat "reposurgeon would never have been written under that constraint" before somebody who claims LISP experience gets it?
Maybe it is true, but since you do not understand, or particularly wish to understand, Rust scoping, ownership, and zero cost abstractions, or C++ weak pointers, we hear you say that you would never have written reposurgeon under that constraint.
Which, since no one else is writing reposurgeon, is an argument, but not an argument that those who do get weak pointers and rust scopes find all that convincing.
I am inclined to think that those who write C++98 (which is the gcc default) could not write reposurgeon under that constraint, but those who write C++11 could write reposurgeon under that constraint, and except for some rather unintelligible, complicated, and twisted class constructors invoking and enforcing the C++11 automatic memory management system, it would look very similar to your existing python code. Reply ↓
esr on 2017-12-23 at 02:49:13 said: >since you do not understand, or particularly wish to understand, Rust scoping, ownership, and zero cost abstractions, or C++ weak pointers
Thank you, I understand those concepts quite well. I simply prefer to apply them in languages not made of barbed wire and landmines. Reply ↓
guest on 2017-12-23 at 07:11:48 said: I'm sure that you understand the _gist_ of all of these notions quite accurately, and this alone is of course quite impressive for any developer – but this is not quite the same as being comprehensively aware of their subtler implications. For instance, both James and I have suggested to you that backpointers implemented as an optimization of an overall DAG structure should be considered "weak" pointers, which can work well alongside reference counting.
For that matter, I'm sure that Rustlang developers share your aversion to "barbed wire and landmines" in a programming language. You've criticized Rust before (not without some justification!) for having half-baked async-IO facilities, but I would think that reposurgeon does not depend significantly on async-IO. Reply ↓
esr on 2017-12-23 at 08:14:25 said: >For instance, both James and I have suggested to you that backpointers implemented as an optimization of an overall DAG structure should be considered "weak" pointers, which can work well alongside reference counting.
Yes, I got that between the time I wrote my first reply and JAD brought it up. I've used Python weakrefs in similar situations. I would have seemed less dense if I'd had more sleep at the time.
>For that matter, I'm sure that Rustlang developers share your aversion to "barbed wire and landmines" in a programming language.
That acidulousness was mainly aimed at C++. Rust, if it implements its theory correctly (a point on which I am willing to be optimistic) doesn't have C++'s fatal structural flaws. It has problems of its own which I won't rehash as I've already anatomized them in detail. Reply ↓
Garrett on 2017-12-21 at 11:16:25 said: There's also development cost. I suspect that using eg. Python drastically reduces the cost for developing the code. And since most repositories are small enough that Eric hasn't noticed accidental O(n**2) or O(n**3) algorithms until recently, it's pretty obvious that execution time just plainly doesn't matter. Migration is going to involve a temporary interruption to service and is going to be performed roughly once per repo. The amount of time involved in just stopping the eg. SVN service and bringing up the eg. GIT hosting service is likely to be longer than the conversion time for the median conversion operation.
So in these cases, most users don't care about the run-time, and outside of a handful of examples, wouldn't brush up against the CPU or memory limitations of a whitebox PC.
This is in contrast to some other cases in which I've worked such as file-serving (where latency is measured in microseconds and is actually counted), or large data processing (where wasting resources reduces the total amount of stuff everybody can do). Reply ↓
David Collier-Brown on 2017-12-18 at 20:20:59 said: Hmmn, I wonder if the virtual memory of Linux (and Unix, and Multics) is really the OS equivalent of the automatic memory management of application programs? One works in pages, admittedly, not bytes or groups of bytes, but one could argue that the sub-page stuff is just expensive anti-internal-fragmentation plumbing
–dave
[In polite Canajan, "I wonder" is the equivalent of saying "Hey everybody, look at this" in the US. And yes, that's also the redneck's famous last words.] Reply ↓
John Moore on 2017-12-18 at 22:20:21 said: In my experience, with most of my C systems programming in protocol stacks and transaction processing infrastructure, the MM problem has been one of code, not data structure complexity. The memory is often allocated by code which first encounters the need, and it is then passed on through layers and at some point, encounters code which determines the memory is no longer needed. All of this creates an implicit contract that he who is handed a pointer to something (say, a buffer) becomes responsible for disposing of it. But, there may be many places where that is needed – most of them in exception handling.
That creates many, many opportunities for some to simply forget to release it. Also, when the code is handed off to someone unfamiliar, they may not even know about the contract. Crises (or bad habits) lead to failures to document this stuff (or create variable names or clear conventions that suggest one should look for the contract).
I've also done a bunch of stuff in Java, both applications level (such as a very complex Android app with concurrency) and some infrastructural stuff that wasn't as performance constrained. Of course, none of this was hard real-time although it usually at least needed to provide response within human limits, which GC sometimes caused trouble with. But, the GC was worth it, as it substantially reduced bugs which showed up only at runtime, and it simplified things.
On the side, I write hard real time stuff on tiny, RAM constrained embedded systems – PIC18F series stuff (with the most horrible machine model imaginable for such a simple little beast). In that world, there is no malloc used, and shouldn't be. It's compile time created buffers and structures for the most part. Fortunately, the applications don't require advanced dynamic structures (like symbol tables) where you need memory allocation. In that world, AMM isn't an issue. Reply ↓
Michael on 2017-12-18 at 22:47:26 said: PIC18F series stuff (with the most horrible machine model imaginable for such a simple little beast)
LOL. Glad I'm not the only one who thought that. Most of my work was on the 16F – after I found out what it took to do a simple table lookup, I was ready for a stiff drink. Reply ↓
esr on 2017-12-18 at 23:45:03 said: >In my experience, with most of my C systems programming in protocol stacks and transaction processing infrastructure, the MM problem has been one of code, not data structure complexity.
I believe you. I think I gravitate to problems with data-structure complexity because, well, that's just the way my brain works.
But it's also true that I have never forgotten one of the earliest lessons I learned from Lisp. When you can turn code complexity into data structure complexity, that's usually a win. Or to put it slightly differently, dumb code munching smart data beats smart code munching dumb data. It's easier to debug and reason about. Reply ↓
Jeremy on 2017-12-19 at 01:36:47 said: Perhaps it's because my coding experience has mostly been short Python scripts of varying degrees of quick-and-dirtiness, but I'm having trouble grokking the difference between smart code/dumb data vs dumb code/smart data. How does one tell the difference?
Now, as I type this, my intuition says it's more than just the scary mess of nested if statements being in the class definition for your data types, as opposed to the function definitions which munch on those data types; a scary mess of nested if statements is probably the former. The latter, though, I'm coming up blank on.
Perhaps a better question than my one above: what codebases would you recommend for study which would be good examples of the latter (besides reposurgeon)? Reply ↓
jsn on 2017-12-19 at 02:35:48 said: I've always expressed it as "smart data + dumb logic = win".
You almost said my favorite canned example: a big conditional block vs. a lookup table. The LUT can replace all the conditional logic with structured data and shorter (simpler, less bug-prone, faster, easier to read) unconditional logic that merely does the lookup. Concretely in Python, imagine a long list of "if this, assign that" replaced by a lookup into a dictionary. It's still all "code", but the amount of program logic is reduced.
So I would answer your first question by saying look for places where data structures are used. Then guesstimate how complex some logic would have to be to replace that data. If that complexity would outstrip that of the data itself, then you have a "smart data" situation. Reply ↓
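A minimal C++ sketch of the same trade-off, using a hypothetical log-severity mapping: both functions encode identical behavior, but the second moves the knowledge out of the branching logic and into data.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Smart code, dumb data: the mapping expressed as a conditional chain.
int severity_if(const std::string& level) {
    if (level == "debug") return 0;
    if (level == "info")  return 1;
    if (level == "warn")  return 2;
    if (level == "error") return 3;
    return -1;
}

// Dumb code, smart data: the same mapping as a lookup table.
int severity_lut(const std::string& level) {
    static const std::unordered_map<std::string, int> table = {
        {"debug", 0}, {"info", 1}, {"warn", 2}, {"error", 3},
    };
    auto it = table.find(level);
    return it == table.end() ? -1 : it->second;
}

int main() {
    assert(severity_if("warn") == severity_lut("warn"));
}
```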
Emanuel Rylke on 2017-12-19 at 04:07:58 said: To expand on this, it can even be worth to use complex code to generate that dumb lookup table. This is so because the code generating the lookup table runs before, and therefore separately, from the code using the LUT. This means that both can be considered in isolation more often; bringing the combined complexity closer to m+n than m*n. Reply ↓
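A sketch of that separation in C++, using a hypothetical 8-bit popcount table: the moderately clever code that builds the table runs once, up front and in isolation, while the code that consumes it remains a single dumb lookup.

```cpp
#include <cstdint>
#include <cstdio>

// The (relatively) complex code that builds the table runs once, up front...
struct PopcountTable {
    std::uint8_t bits[256];
    PopcountTable() {
        for (int i = 0; i < 256; ++i) {
            int n = i, c = 0;
            while (n) { c += n & 1; n >>= 1; }
            bits[i] = static_cast<std::uint8_t>(c);
        }
    }
};
static const PopcountTable kTable;

// ...while the code that uses it stays dumb: a single lookup.
inline int popcount8(std::uint8_t b) { return kTable.bits[b]; }

int main() { std::printf("%d\n", popcount8(0xF0)); }  // prints 4
```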
TheDividualist on 2017-12-19 at 05:39:39 said: Admittedly I have an SQL hammer and think everything is a nail, but why wouldn't *every* program include a database, like the SQLite that even comes bundled with Python distros, no sweat, and put that lookup table into it instead of a dictionary inside the code?
Of course the more you go in this direction the more problems you will have with unit testing, in case you want to do such a thing. Generally we SQL-hammer guys don't do that much, because in theory any function can read any part of the database, making the whole database the potential "inputs" for every function.
That is pretty lousy design, but I think good design patterns for separations of concerns and unit testability are not yet really known for database driven software, I mean, for example, model-view-controller claims to be one, but actually fails as these can and should call each other. So you have in the "customer" model or controller a function to check if the customer has unpaid invoices, and decide to call it from the "sales order" controller or model to ensure such customers get no new orders registered. In the same "sales order" controller you also check the "product" model or controller if it is not a discontinued product and check the "user" model or controller if they have the proper rights for this operation and the "state" controller if you are even offering this product in that state and so on a gazillion other things, so if you wanted to automatically unit test that "register a new sales order" function you have a potential "input" space of half the database. And all that with good separation of concerns MVC patterns. So I think no one really figured this out yet? Reply ↓
guest on 2017-12-20 at 19:21:13 said: There's a reason not to do this if you can help it – dispatching through a non-constant LUT is way slower than running easily-predicted conditionals. Like, an order of magnitude slower, or even worse. Reply ↓
esr on 2017-12-19 at 07:45:38 said: >Perhaps a better question than my one above: what codebases would you recommend for study which would be good examples of the latter (besides reposurgeon)?
I do not have an instant answer, sorry. I'll hand that question to my backbrain and hope an answer pops up. Reply ↓
Jon Brase on 2017-12-20 at 00:54:15 said: When you can turn code complexity into data structure complexity, that's usually a win. Or to put it slightly differently, dumb code munching smart data beats smart code munching dumb data. It's easier to debug and reason about.
Doesn't "dumb code munching smart data" really reduce to "dumb code implementing a virtual machine that runs a different sort of dumb code to munch dumb data"? Reply ↓
jim on 2017-12-22 at 20:25:07 said: "Smart Data" is effectively a domain specific language.
A domain specific language is easier to reason about within its proper domain, because it lowers the difference between the problem and the representation of the problem. Reply ↓
wisd0me on 2017-12-19 at 02:35:10 said: I wonder why you talked about inventing an AMM-layer so much, but told nothing about the GC, which is available for C language. Why do you need to invent some AMM-layer in the first place, instead of just using the GC?
For example, Bigloo Scheme and The GNU Objective C runtime successfully used it, among many others. Reply ↓
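Presumably this means a conservative collector such as the Boehm-Demers-Weiser GC, which is what Bigloo and the GNU Objective-C runtime used. A minimal sketch of what "just using the GC" from C/C++ looks like, assuming libgc is installed and linked with -lgc:

```cpp
// Sketch of allocating from the Boehm-Demers-Weiser conservative collector.
// Build (assumption): g++ demo.cpp -lgc
#include <gc.h>      // on some distributions the header is <gc/gc.h>
#include <cstdio>

int main() {
    GC_INIT();
    for (int i = 0; i < 1000000; ++i) {
        // Allocate from the collected heap; free() is never called.
        int* p = static_cast<int*>(GC_MALLOC(64 * sizeof(int)));
        p[0] = i;
    }
    std::printf("GC heap size: %zu bytes\n", GC_get_heap_size());
    return 0;
}
```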
Walter Bright on 2017-12-19 at 04:53:13 said: Maybe D, with its support for mixed GC / manual memory allocation is the right path after all! Reply ↓
Jeremy Bowers on 2017-12-19 at 10:40:24 said: Rust seems like a good fit for the cases where you need the low latency (and other speed considerations) and can't afford the automation. Firefox finally got to benefit from that in the Quantum release, and there's more coming. I wouldn't dream of writing a browser engine in Go, let alone a highly-concurrent one. When you're willing to spend on that sort of quality, Rust is a good tool to get there.
But the very characteristics necessary to be good in that space will prevent it from becoming the "default language" the way C was for so long. As much fun as it would be to fantasize about teaching Rust as a first language, I think that's crazy talk for anything but maybe MIT. (And I'm not claiming it's a good idea even then; just saying that's the level of student it would take for it to even be possible .) Dunno if Go will become that "default language" but it's taking a decent run at it; most of the other contenders I can think of at the moment have the short-term-strength-yet-long-term-weakness of being tied to a strong platform already. (I keep hearing about how Swift is going to be usable off of Apple platforms real soon now just around the corner just a bit longer .) Reply ↓
esr on 2017-12-19 at 17:30:07 said: >Dunno if Go will become that "default language" but it's taking a decent run at it; most of the other contenders I can think of at the moment have the short-term-strength-yet-long-term-weakness of being tied to a strong platform already.
I really think the significance of Go being an easy step up from C cannot be overestimated – see my previous blogging about the role of inward transition costs.
Ken Thompson is insidiously clever. I like channels and subroutines and := but the really consequential hack in Go's design is the way it is almost perfectly designed to co-opt people like me – that is, experienced C programmers who have figured out that ad-hoc AMM is a disaster area. Reply ↓
Jeff Read on 2017-12-20 at 08:58:23 said: Go probably owes as much to Rob Pike and Phil Winterbottom for its design as it does to Thompson -- because it's basically Alef with the feature whose lack, according to Pike, basically killed Alef: garbage collection.
I don't know that it's "insidiously clever" to add concurrency primitives and GC to a C-like language, as concurrency and memory management were the two obvious banes of every C programmer's existence back in the 90s -- so if Go is "insidiously clever", so is Java. IMHO it's just smart, savvy design which is no small thing; languages are really hard to get right. And in the space Go thrives in, Go gets a lot right. Reply ↓
John G on 2017-12-19 at 14:01:09 said: Eric, have you looked into D *lately*? These days:
* it's fully open source (Boost license),
* there's [three back-ends to choose from]( https://dlang.org/download.html ),
* there's [exactly one standard library]( https://dlang.org/phobos/index.html ), and
* it's got a standard package repository and management tool ([dub]( https://code.dlang.org/ )). Reply ↓
esr on 2017-12-19 at 15:14:02 said: >Eric, have you looked into D *lately*?
No. What's it got that Go does not?
That's not intended as a hostile question, I'm trying to figure out where to focus my attention when I read up on it. Reply ↓
PInver on 2017-12-19 at 15:36:18 said: D takes safety seriously, like Rust, but with a human-usable approach, see:
https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md
D's template system is simply the most powerful and simple template system I've seen so far, see:
https://github.com/PhilippeSigaud/D-templates-tutorial
The garbage collector can be tamed, see:
https://wiki.dlang.org/DIP60
A minimal, no-runtime subset can be used, suitable as a replacement for C code, see:
https://dlang.org/spec/betterc.html Reply ↓
xenon325 on 2017-12-20 at 12:15:01 said: Good list. I would add to that:
First, `pure` functions and transitive `const`, which make code so much easier to reason about
Second, almost the entire language is available at compile time. That, combined with templates, enables crazy (in a good way) stuff, like building an optimized state machine for a regex at compile time. Given that the regex pattern is known at compile time, of course. But that's pretty common.
Can't find it now, but there were benchmarks which show it's faster than any run-time-built regex engine out there. Still, the source code is pretty straightforward – one doesn't have to be Einstein to write code like that [1].
There is a talk by Andrei Alexandrescu called "Fastware" where he shows how various metaprogramming facilities enable useful optimizations [2].
And a more recent talk, "Design By Introspection" [3], where he shows how these facilities enable much more compact designs and implementations.
[1] https://github.com/dlang/phobos/blob/master/std/regex/package.d#L426
[2] https://youtu.be/AxnotgLql0k
[3] video: https://youtu.be/29h6jGtZD-U?t=1m6s
slides: https://dconf.org/2017/talks/alexandrescu.pdf Reply ↓
John G on 2017-12-19 at 15:45:27 said: > > Eric, have you looked into D *lately*?
> No. What's it got that Go does not?
Not sure. I've only recently begun learning D, and I don't know Go. [The D overview]( https://dlang.org/overview.html ) may include enough for you to surmise the differences though. Reply ↓
Doctor Mist on 2017-12-19 at 18:28:54 said:
As the greenspunity rises, you are likely to find that more and more of your effort and defect chasing is related to the AMM layer, and proportionally less goes to the application logic. Redoubling your effort, you increasingly miss your aim.
Even when you're merely at the edge of this trap, your defect rates will be dominated by issues like double-free errors and malloc leaks. This is commonly the case in C/C++ programs of even low greenspunity.
Interesting. This certainly fits my experience.
Has anybody looked for common patterns in whatever parasitic distractions plague you when you start to reach the limits of a language with AMM? Reply ↓
Dave taht on 2017-12-23 at 10:44:24 said: The biggest thing that I hate about go is the
result, err = whatever()
if (err) dosomethingtofixit();
abstraction.
I went through a phase earlier this year where I tried to eliminate the concept of an errno entirely (and failed, in the end reinventing lisp, badly), but sometimes I still think – to the tune of the Flight of the Valkyries – "Kill the errno, kill the errno, kill the ERRno, kill the err!" Reply ↓
jim on 2017-12-23 at 23:37:46 said: I have on several occasions been part of big projects using languages with AMM, many programmers, much code, and they hit scaling problems and died, but it is not altogether easy to explain what the problem was.
But it was very clear that the fact that I could get a short program, or a quick fix up and running with an AMM much faster than in C or C++ was failing to translate into getting a very large program containing far too many quick fixes up and running. Reply ↓
François-René Rideau on 2017-12-19 at 21:39:05 said: Insightful, but I think you are missing a key point about Lisp and Greenspunning.
AMM is not the only thing that Lisp brings to the table when it comes to dealing with Greenspunity. Actually, the whole point of Lisp is that there is not _one_ conceptual barrier to development, or a few, or even a lot, but that there are _arbitrarily_many_, and that is why you need to be able to extend your language through _syntactic_abstraction_ to build DSLs so that every abstraction layer can be written in a language that is fit for that layer. [Actually, traditional Lisp is missing the fact that DSL tooling depends on _restriction_ as well as _extension_; but Haskell types and Racket languages show the way forward in this respect.]
That is why all languages without macros, even with AMM, remain "blub" to those who grok Lisp. Even in Go, they reinvent macros, just very badly, with various preprocessors to cope with the otherwise very low abstraction ceiling.
(Incidentally, I wouldn't say that Rust has no AMM; instead it has static AMM. It also has some support for macros.) Reply ↓
Patrick Maupin on 2017-12-23 at 18:44:27 said: " static AMM" ???
WTF sort of abuse of language is this?
Oh, yeah, rust -- the language developed by Humpty Dumpty acolytes:
https://github.com/rust-lang/rust/pull/25640
You just can't make this stuff up. Reply ↓
jim on 2017-12-23 at 22:02:18 said: Static AMM means that the compiler analyzes your code at compile time, and generates the appropriate frees.
Static AMM means that the compiler automatically does what you do manually in C, and semi-automatically in C++11. Reply ↓
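A sketch of the C++ half of that claim, with a hypothetical Buffer type: the destructor call the compiler inserts at every scope exit is the free() one would otherwise write by hand in C.

```cpp
#include <cstddef>
#include <cstdio>
#include <memory>

// Hypothetical Buffer type owning a heap allocation through unique_ptr.
struct Buffer {
    explicit Buffer(std::size_t n) : data(new char[n]) {}
    std::unique_ptr<char[]> data;
};

void use_buffer() {
    Buffer b(4096);             // allocation
    std::puts("working...");
}   // no explicit free: the compiler emits the destructor call here, on
    // every exit path (including exceptions), which is the delete[]/free()
    // you would otherwise have to write and place by hand

int main() { use_buffer(); }
```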
Patrick Maupin on 2017-12-24 at 13:36:35 said: To the extent that the compiler's insertion of calls to free() can be easily deduced from the code without special syntax, the insertion is merely an optimization of the sort of standard AMM semantics that, for example, a PyPy compiler could do.
To the extent that the compiler's ability to insert calls to free() requires the sort of special syntax about borrowing that means that the programmer has explicitly described a non-stack-based scope for the variable, the memory management isn't automatic.
Perhaps this is why a google search for "static AMM" doesn't return much. Reply ↓
Jeff Read on 2017-12-27 at 03:01:19 said: I think you fundamentally misunderstand how borrowing works in Rust.
In Rust, as in C++ or even C, references have value semantics. That is to say any copies of a given reference are considered to be "the same". You don't have to "explicitly describe a non-stack-based scope for the variable", but the hitch is that there can be one, and only one, copy of the original reference to a variable in use at any time. In Rust this is called ownership, and only the owner of an object may mutate it.
Where borrowing comes in is that functions called by the owner of an object may borrow a reference to it. Borrowed references are read-only, and may not outlast the scope of the function that does the borrowing. So everything is still scope-based. This provides a convenient way to write functions in such a way that they don't have to worry about where the values they operate on come from or unwrap any special types, etc.
If you want the scope of a reference to outlast the function that created it, the way to do that is to use a std::Rc, which provides a regular, reference-counted pointer to a heap-allocated object, the same as Python.
The borrow checker checks all of these invariants for you and will flag an error if they are violated. Since worrying about object lifetimes is work you have to do anyway lest you pay a steep price in performance degradation or resource leakage, you win because the borrow checker makes this job much easier.
Rust does have explicit object lifetimes, but where these are most useful is to solve the problem of how to have structures, functions, and methods that contain/return values of limited lifetime. For example, declaring a struct Foo { x: &'a i32 } means that any instance of struct Foo is valid only as long as the borrowed reference inside it is valid. The borrow checker will complain if you attempt to use such a struct outside the lifetime of the internal reference. Reply ↓
Doctor Locketopus on 2017-12-27 at 00:16:54 said: Good Lord (not to be confused with Audre Lorde). If I weren't already convinced that Rust is a cult, that would do it.
However, I must confess to some amusement about Karl Marx and Michel Foucault getting purged (presumably because Dead White Male). Reply ↓
Jeff Read on 2017-12-27 at 02:06:40 said: This is just a cost of doing business. Hacker culture has, for decades, tried to claim it was inclusive and nonjudgemental and yada yada -- "it doesn't matter if you're a brain in a jar or a superintelligent dolphin as long as your code is good" -- but when it comes to actually putting its money where its mouth is, hacker culture has fallen far short. Now that's changing, and one of the side effects of that is how we use language and communicate internally, and to the wider community, has to change.
But none of this has to do with automatic memory management. In Rust, management of memory is not only fully automatic, it's "have your cake and eat it too": you have to worry about neither releasing memory at the appropriate time, nor the severe performance costs and lack of determinism inherent in tracing GCs. You do have to be more careful in how you access the objects you've created, but the compiler will assist you with that. Think of the borrow checker as your friend, not an adversary. Reply ↓
John on 2017-12-20 at 05:03:22 said: Present-day C++ is far from the C++ that was first standardized in 1998. You should *never* be manually managing memory in present-day C++. You need a dynamically sized array? Use std::vector. You need an ad-hoc graph? Use std::shared_ptr and std::weak_ptr.
Any code I see which uses new or delete, malloc or free, fails code review. Reply ↓
Garrett on 2017-12-21 at 11:24:41 said: What makes you refer to this as a systems programming project? It seems to me to be a standard data-processing problem. Data in, data out. Sure, it's hella complicated and you're brushing up against several different constraints.
In contrast to what I think of as systems programming, you have automatic memory management. You aren't working in kernel-space. You aren't modifying the core libraries or doing significant programmatic interface design.
I'm missing something in your semantic usage and my understanding of the solution implementation. Reply ↓
esr on 2017-12-21 at 15:08:28 said: >What makes you refer to this as a systems programming project?
Never user-facing. Often scripted. Development-support tool. Used by systems programmers.
I realize we're in an area where the "systems" vs. "application" distinction gets a little tricky to make. I hang out in that border zone a lot and have thought about this. Are GPSD and ntpd "applications"? Is giflib? Sure, they're out-of-kernel, but no end-user will ever touch them. Is GCC an application? Is apache or named?
Inside the kernel is clearly systems. Outside it, I think the "systems" vs. "application" distinction is more about the skillset being applied and who your expected users are than anything else.
I would not be upset at anyone who argued for a different distinction. I think you'll find the definitional questions start to get awfully slippery when you poke at them. Reply ↓
Jeff Read on 2017-12-24 at 03:21:34 said:
What makes you refer to this as a systems programming project? It seems to me to be a standard data-processing problem. Data in, data out. Sure, it's hella complicated and you're brushing up against several different constraints.
When you're talking about Unix, there is often considerable overlap between "systems" and "application" programming because the architecture of Unix, with pipes, input and output redirection, etc., allowed for essential OS components to be turned into simple, data-in-data-out user-space tools. The functionality of ls, cp, rm, or cat, for instance, would have been built into the shell of a pre-Unix OS (or many post-Unix ones). One of the great innovations of Unix is to turn these units of functionality into standalone programs, and then make spawning processes cheap enough to where using them interactively from the shell is easy and natural. This makes extending the system, as accessed through the shell, easy: just write a new, small program and add it to your PATH.
So yeah, when you're working in an environment like Unix, there's no bright-line distinction between "systems" and "application" code, just like there's no bright-line distinction between "user" and "developer". Unix is a tool for facilitating humans working with computers. It cannot afford to discriminate, lest it lose its Unix-nature. (This is why Linux on the desktop will never be a thing, not without considerable decay in the facets of Linux that made it so great to begin with.) Reply ↓
Peter Donis on 2017-12-21 at 22:15:44 said: @tz: you aren't going to get AMM on the current Arduino variants. At least not easily.
At the upper end you can; the Yun has 64 MB, as do the Dragino variants. You can run OpenWRT on them and use its Python (although the latest OpenWRT release, Chaos Calmer, significantly increased its storage footprint from older firmware versions), which runs fine in that memory footprint, at least for the kinds of things you're likely to do on this type of device. Reply ↓
esr on 2017-12-21 at 22:43:57 said: >You can run OpenWRT on them and use its Python
I'd be comfortable in that environment, but if we're talking AMM languages Go would probably be a better match for it. Reply ↓
Peter Donis on 2017-12-21 at 23:16:33 said: Go is not available as a standard package on OpenWRT, but it probably won't be too much longer before it is. Reply ↓
Jeff Read on 2017-12-22 at 14:07:21 said: Go binaries are statically linked, so the best approach is probably to install Go on your big PC, cross compile, and push the binary out to the device. Cross-compiling is a doddle; simply set GOOS and GOARCH. Reply ↓
Michael on 2017-12-22 at 15:07:09 said: This is one of Go's best features IMO. Reply ↓
jim on 2017-12-22 at 06:37:36 said: C++11 has an excellent automatic memory management layer. Its only defect is that it is optional, for backwards compatibility with C and C++98 (though it really is not all that compatible with C++98)
And, being optional, you are apt to take the short cut of not using it, which will bite you.
Rust is, more or less, C++17 with the automatic memory management layer being almost mandatory. Reply ↓
jim on 2017-12-22 at 20:39:27 said:
> you are likely to find that more and more of your effort and defect chasing is related to the AMM layer
But the AMM layer for C++ has already been written and debugged, and standards and idioms exist for integrating it into your classes and type definitions.
Once built into your classes, you are then free to write code as if in a fully garbage collected language in which all types act like ints.
C++14, used correctly, is a metalanguage for writing domain specific languages.
Now sometimes building your classes in C++ is weird, nonobvious, and apt to break for reasons that are difficult to explain, but done correctly all the weird stuff is done once in a small number of places, not spread all over your code. Reply ↓
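One common idiom behind that claim, sketched here with a hypothetical Document handle rather than jim's actual classes: the shared_ptr plumbing is written once, inside the class, and callers then copy and drop handles as casually as they would in a garbage-collected language.

```cpp
#include <memory>
#include <string>

// The "weird stuff" lives in one place: the handle owns its state through
// a shared_ptr, so copying and dropping handles needs no further thought.
class Document {
    struct State { std::string text; };
    std::shared_ptr<State> s = std::make_shared<State>();
public:
    void append(const std::string& t) { s->text += t; }
    const std::string& text() const   { return s->text; }
};

int main() {
    Document a;
    a.append("hello");
    Document b = a;        // handles share state, as references do under GC
    b.append(" world");
    return a.text().size() == 11 ? 0 : 1;
}   // both handles go out of scope here; the State is freed exactly once
```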
Dave taht on 2017-12-22 at 22:31:40 said: Linux is the best C library ever created. And it's often terrifying. Things like RCU are nearly impossible for mortals to understand. Reply ↓
Alex Beamish on 2017-12-23 at 11:18:48 said: Interesting thesis .. it was the 'extra layer of goodness' surrounding file operations, and not memory management, that persuaded me to move from C to Perl about twenty years ago. Once I'd moved, I also appreciated the memory management in the shape of 'any size you want' arrays, hashes (where had they been all my life?) and autovivification -- on the spot creation of array or hash elements, at any depth.
While C is a low-level language that masquerades as a high-level language, the original intent of the language was to make writing assembler easier and faster. It can still be used for that, when necessary, leaving the more complicated matters to higher level languages. Reply ↓
esr on 2017-12-23 at 14:36:26 said: >Interesting thesis .. it was the 'extra layer of goodness' surrounding file operations, and not memory management, that persuaded me to move from C to Perl about twenty years ago.
Pretty much all that goodness depends on AMM and could not be implemented without it. Reply ↓
jim on 2017-12-23 at 22:17:39 said: Autovivification saves you much effort, thought, and coding, because most of the time the perl interpreter correctly divines your intention, and does a pile of stuff for you, without you needing to think about it.
And then it turns around and bites you because it does things for you that you did not intend or expect.
The larger the program, and the longer you are keeping the program around, the more it is a problem. If you are writing a quick one-off script to solve some specific problem, you are the only person who is going to use the script, and are then going to throw the script away, fine. If you are writing a big program that will be used by lots of people for a long time, autovivification is going to turn around and bite you hard, as are lots of similar perl features where perl makes life easy for you by doing stuff automagically.
With the result that there are in practice very few big perl programs used by lots of people for a long time, while there are an immense number of very big C and C++ programs used by lots of people for a very long time.
On esr's argument, we should never be writing big programs in C any more, and yet, we are.
I have been part of big projects with many engineers using languages with automatic memory management. I noticed I could get something up and running in a fraction of the time that it took in C or C++.
And yet, somehow, strangely, the projects as a whole never got successfully completed. We found ourselves fighting weird shit done by the vast pile of run time software that was invisibly under the hood automatically doing stuff for us. We would be fighting mysterious and arcane installation and integration issues.
This, my personal experience, is the exact opposite of the outcome claimed by esr.
Well, that was perl, Microsoft Visual Basic, and PHP. Maybe Java scales better.
But perl, Microsoft visual basic, and PHP did not scale. Reply ↓
esr on 2017-12-23 at 22:41:15 said: >But perl, Microsoft visual basic, and PHP did not scale.
Oh, dear Goddess, no wonder. All three of those languages are notorious sinkholes – they're where "maintainability" goes to die a horrible and lingering death.
Now I understand your fondness for C++ better. It's bad, but those are way worse at any large scale. AMM isn't enough to keep you out of trouble if the rest of the language is a tar-pit. Those three are full of the bones of drowned devops victims.
Yes, Java scales better. CPython would too from a pure maintainability standpoint, but it's too slow for the kind of deployment you're implying – on the other hand, PyPy might not be, I'm finding the JIT compilation works extremely well and I get runtimes I think are within 2x or 3x of C. Go would probably be da bomb. Reply ↓
esr on 2017-12-23 at 23:35:29 said: I wrote:
>All three of those languages are notorious sinkholes
You know when you're in deep shit? You're in deep shit when your figure of merit is long-term maintainability and Perl is the least bad alternative.
*shudder* Reply ↓
Jeff Read on 2017-12-24 at 02:56:28 said:
Oh, dear Goddess, no wonder. All three of those languages are notorious sinkholes – they're where "maintainability" goes to die a horrible and lingering death.
Can confirm -- Visual Basic (6 and VBA) is a toilet. An absolute cesspool. It's full of little gotchas -- such as non-short-circuiting AND and OR operators (there are no differentiated bitwise/logical operators) and the cryptic Dir() function that exactly mimics the broken semantics of MS-DOS's directory-walking system call -- that betray its origins as an extended version of Microsoft's 8-bit BASIC interpreter (the same one used to write toy programs on TRS-80s and Commodores from a bygone era), and prevent you from writing programs in a way that feels natural and correct if you've been exposed to nearly anything else.
VB is a language optimized to a particular workflow -- and like many languages so optimized as long as you color within the lines provided by the vendor you're fine, but it's a minefield when you need to step outside those lines (which happens sooner than you may think). And that's the case with just about every all-in-one silver-bullet "solution" I've seen -- Rails and PHP belong in this category too.
It's no wonder the cuddly new Microsoft under Nadella is considering making Python a first-class extension language for Excel (and perhaps other Office apps as well).
Visual Basic .NET is something quite different -- a sort of Microsoft-flavored Object Pascal, really. But I don't know of too many shops actually using it; if you're targeting the .NET runtime it makes just as much sense to just use C#.
As for Perl, it's possible to write large, readable, maintainable code bases in object-oriented Perl. I've seen it done. BUT -- you have to be careful. You have to establish coding standards, and if you come across the stereotype of "typical, looks-like-line-noise Perl code" then you have to flunk it at code review and never let it touch prod. (Do modern developers even know what line noise is, or where it comes from?) You also have to choose your libraries carefully, ensuring they follow a sane semantics that doesn't require weirdness in your code. I'd much rather just do it in Python. Reply ↓
TheDividualist on 2017-12-27 at 11:24:59 said: VB.NET is unused in the kind of circles *you know* because these are competitive and status-conscious circles and anything with BASIC in the name is so obviously low-status and just looks so bad on the resume that it makes sense to add that 10-20% more effort and learn C#. C# sounds a whole lot more high-status, as it has C in the name, so obviously it looks like being a Real Programmer on the resume.
What you don't know is what happens outside the circles where professional programmers compete for status and jobs.
I can report that there are many "IT guys" who are not in these circles, they don't have the intra-programmer social life hence no status concerns, nor do they ever intend to apply for Real Programmer jobs. They are just rural or not-first-world guys who grew up liking computers, and took a generic "IT guy" job at some business in a small town and there they taught themselves Excel VBScript when the need arose to automate some reports, and then VB.NET when it was time to try to build some actual application for in-house use. They like it because it looks less intimidating – it sends out those "not only meant for Real Programmers" vibes.
I wish we lived in a world where Python would fill that non-intimidating amateur-friendly niche, as it could do that job very well, but we are already on a hell of a path dependence. Seriously, Bill Gates and Joel Spolsky got it seriously right when they made Excel scriptable. The trick is how to provide a smooth transition between non-programming and programming.
One classic way is that you are a sysadmin, you use the shell, then you automate tasks with shell scripts, then you graduate to Perl.
One, relatively new way is that you are a web designer, write HTML and CSS, and then slowly you get dragged, kicking and screaming into JavaScript and PHP.
The genius was that they realized that a spreadsheet is basically modern paper. It is the most basic and universal tool of the office drone. I print all my automatically generated reports into xlsx files, simply because for me it is the "paper" of 2017, you can view it on any Android phone, and unlike PDF and like paper you can interact and work with the figures, like add other numbers to them.
So it was automating the spreadsheet, the VBScript Excel macro that led the way from not-programming to programming for an immense number of office drones, who are far more numerous than sysadmins and web designers.
Aaand I think it was precisely because of those microcomputers, like the Commodore. Out of every 100 office drones in 1991 or so, 1 or 2 had entertained themselves in 1987 typing in some BASIC programs published in computer mags. So when they were told Excel is programmable with a form of BASIC they were not too intimidated.
This created such a giant path dependency that still if you want to sell a language to millions and millions of not-Real Programmers you have to at least make it look somewhat like Basic.
I think from this angle it was a masterwork of creating and exploiting path dependency. Put BASIC on microcomputers. Have a lot of hobbyists learn it for fun. Create the most universal office tool. Let it be programmable in a form of BASIC – you can just work on the screen, let it generate a macro and then you just have to modify it. Mostly copy-pasting, not real programming. But you slowly pick up some programming idioms. Then the path curves up to VB and then VB.NET.
To challenge it all, one needs to find an application area as important as number cruching and reporting in an office: Excel is basically electronic paper from this angle and it is hard to come up with something like this. All our nearly computer illiterate salespeople use it. (90% of the use beyond just typing data in a grid is using the auto sum function.) And they don't use much else than that and Word and Outlook and chat apps.
Anyway suppose such a purpose can be found, then you can make it scriptable in Python and it is also important to be able to record a macro so that people can learn from the generated code. Then maybe that dominance can be challenged. Reply ↓
Jeff Read on 2018-01-18 at 12:00:29 said: TIOBE says that while VB.NET saw an uptick in popularity in 2011, it's on its way down now and usage was moribund before then.
In your attempt to reframe my statements in your usual reference frame of Academic Programmer Bourgeoisie vs. Office Drone Proletariat, you missed my point entirely: VB.NET struggled to get a foothold during the time when VB6 was fresh in developers' minds. It was too different (and too C#-like) to win over VB6 devs, and didn't offer enough value-add beyond C# to win over the people who would've just used C# or Java. Reply ↓
jim of jim's blog on 2018-02-10 at 19:10:17 said: Yes, but he has a point.
App -> macros -> macro script -> interpreted language with automatic memory management.
So you tend to wind up with a widely used language that was not so much designed, as accreted.
And, of course, programs written in this language fail to scale. Reply ↓
Jeff Read on 2017-12-24 at 02:30:27 said:
I have been part of big projects with many engineers using languages with automatic memory management. I noticed I could get something up and running in a fraction of the time that it took in C or C++.
And yet, somehow, strangely, the projects as a whole never got successfully completed. We found ourselves fighting weird shit done by the vast pile of run time software that was invisibly under the hood automatically doing stuff for us. We would be fighting mysterious and arcane installation and integration issues.
Sounds just like every Ruby on Fails deployment I've ever seen. It's great when you're slapping together Version 0.1 of a product or so I've heard. But I've never joined a Fails team on version 0.1. The ones I saw were already well-established, and between the PFM in Rails itself, and the amount of monkeypatching done to system classes, it's very, very hard to reason about the code you're looking at. From a management level, you're asking for enormous pain trying to onboard new developers into that sort of environment, or even expand the scope of your product with an existing team, without them tripping all over each other.
There's a reason why Twitter switched from Rails to Scala. Reply ↓
jim on 2017-12-27 at 03:53:42 said: Jeff Read wrote:
> Hacker culture has, for decades, tried to claim it was inclusive and nonjudgemental and yada yada … hacker culture has fallen far short. Now that's changing … has to change.
Observe that "has to change" in practice means that the social justice warriors take charge.
Observe that in practice, when the social justice warriors take charge, old bugs don't get fixed, new bugs appear, and projects turn into aimless garbage, if any development occurs at all.
"has to change" is a power grab, and the people grabbing power are not competent to code, and do not care about code.
Reflect on the attempted suicide of "Coraline" It is not people like me who keep using the correct pronouns that caused "her" to attempt suicide. It is the people who used "her" to grab power. Reply ↓
esr on 2017-12-27 at 14:30:33 said: >"has to change" is a power grab, and the people grabbing power are not competent to code, and do not care about code.
It's never happened before, and may very well never happen again but this once I completely agree with JAD. The "change" the SJWs actually want – as opposed to what they claim to want – would ruin us. Reply ↓
jim on 2017-12-27 at 19:42:36 said: To get back on topic:
Modern, mostly memory safe C++, is enforced by:
https://blogs.msdn.microsoft.com/vcblog/2016/03/31/c-core-guidelines-checkers-preview-of-the-lifetime-safety-checker/
http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#S-abstract
http://clang.llvm.org/extra/clang-tidy/
$ clang-tidy test.cpp -checks=clang-analyzer-cplusplus*, cppcoreguidelines-*, modernize-*
cppcoreguidelines-* and modernize-* will catch most of the issues that esr complains about, in practice usually all of them, though I suppose that as the project gets bigger, some will slip through.
Remember that gcc and g++ is C++98 by default, because of the vast base of old fashioned C++ code which is subtly incompatible with C++11, C++11 onwards being the version of C++ that optionally supports memory safety, hence necessarily subtly incompatible.
To turn on C++11
Place
cmake_minimum_required(VERSION 3.5)
# set standard required to ensure that you get
# the same version of C++ on every platform
# as some environments default to older dialects
# of C++ and some do not.
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
in your CMakeLists.txt Reply ↓
Paul R on 2017-12-28 at 10:26:44 said: The g++ default, as of now, see 2.2 at https://gcc.gnu.org/onlinedocs/gcc/Standards.html is '-std=gnu++14' which is c++14 and some GNU extras.
Which is good, I think. MSVC is similar: you have to specifically ask for C++17. Reply ↓
h0bby1 on 2018-05-19 at 09:27:02 said: I think I solved lots of those issues in C with a runtime I made.
Originally I made this system because I wanted to test programming a microkernel OS, with protected mode, PCI bus, USB, ACPI etc, and I didn't want to get close to the 'event horizon' of memory management in C.
But I didn't wait for the Greenspun law to kick in, so I first developed a safe memory system as a runtime, and replaced the standard C runtime and memory management with it.
I wanted zero segfaults or memory errors possible at all anywhere in the C code, because debugging bare-metal exceptions, without a debugger, with complex data structures made in C looks very close to the black hole.
I didn't want to use C++ because C++ compilers have a very unpredictable binary format and function name decoration, which makes it much harder to interface with at kernel level.
I also wanted a system as efficient as possible to manage lockless shared access to the whole memory between threads, to avoid the 'exclusive borrow' syndrome of Rust, with global variables shared between threads and lockless algorithms to access them.
I took inspiration from the algorithm on this site http://www.1024cores.net/ to develop the basic system, with strong references as the norm, and direct 'bare pointer' only as weak references for fast access to memory in C.
What I ended up doing is basically a 'strongly typed hashmap DAG' to store the object reference hierarchy, which can be manipulated using 'lambda expressions', such that applications can manipulate objects in an indirect manner only through the DAG abstraction, without having to manipulate bare pointers at all.
This also makes a mark-and-sweep garbage collector easier to do, especially with an 'event based' system: the main loop can call the garbage collector between two executions of event/message handlers, which has the advantage that it can be run at a point where there is no application data on the stack to mark, so it avoids mistaking application data on the stack for a pointer. All references that are only in stack variables can get automatically garbage collected when the function exits, much like in C++ actually.
The garbage collector can still be called by the allocator when there is an OOM error; it will attempt a garbage collection before failing the allocation, but all references in the stack should be garbage collected when the function returns to the main loop and the garbage collector is run.
As all of the reference hierarchy is expressed explicitly in the DAG, there shouldn't be any pointer stored in the heap outside of the module's data section, which corresponds to C global variables that are used as the 'root elements' of the object hierarchy, which can be traversed to find all the active references to heap data that the code can potentially use. A quick system could be made so that the compiler can automatically generate a list of the 'root references' in the global variables, to avoid memory leaks if some global data can look like a reference.
As each thread has its own heap, it also avoids the 'stop the world' syndrome: all threads can garbage collect their own heap, and there is already a system of lockless synchronisation to access references based on expressions in the DAG, to avoid having to rely only on 'bare pointers' to manipulate the object hierarchy, which allows dynamic relocation and makes it easier to track active references.
It's also very useful for tracking memory leaks: as the allocator can keep the time of each memory allocation, it's easy to see all the allocations that happened between two points of the program, and dump all their hierarchy and properties from just the 'bare reference'.
Each thread contains two heaps: one which is manually managed, mostly used for temporary strings or IO buffers, and the other heap, which can be managed either with atomic reference counting or mark and sweep.
With this system, C programs rarely have to use malloc/free directly, nor manipulate pointers to allocated memory directly, other than for temporary buffer allocation, like a dynamic stack, for IO buffers or temporary strings which can easily be managed manually. And all the memory manipulation can be made via a runtime which keeps track internally of pointer address and size, data type, and possibly a 'finalizer' function that will be called when the pointer is freed.
Since I started to use this system to make C programs, alongside my own ABI which can dynamically link binaries compiled with Visual Studio and gcc together, I have tested it for many different use cases. I could make a mini multi-threaded window manager/UI, with async IRQ-driven HID driver events, and a system of distributed applications based on blockchain data, which includes a multi-threaded HTTP server that can handle parallel JSON/RPC calls, with an abstraction of the application stack via custom data type definitions / scripts stored on the blockchain, and I have had very few memory problems, albeit it's 100% in C, multi-threaded, and deals with heavily dynamic data.
With the mark-and-sweep mode, it can become quite easy to develop multi-threaded applications with a good level of concurrency, even to do a simple database system, driven by a script over async HTTP/JSON/RPC, without having to care about complex memory management.
Even with the reference count mode, the manipulation of references is explicit, and it should not be too hard to detect leaks with simple parsers. I already did a test with the ANTLR C parser, with a visitor class to parse the grammar and detect potential errors; as all memory referencing happens through specific types instead of bare pointers, it's not too hard to detect potential memory leak problems with a simple parser. Reply ↓
Arron Grier on 2018-06-07 at 17:37:17 said: Since you've been talking a lot about Go lately, should you not mention it on your Document: How To Become A Hacker?
Just wondering Reply ↓
esr on 2018-06-08 at 05:48:37 said: >Since you've been talking a lot about Go lately, should you not mention it on your Document: How To Become A Hacker?
Too soon. Go is very interesting but it's not an essential tool yet.
That might change in a few years. Reply ↓
Yankes on 2018-12-18 at 19:20:46 said: I have one question: do you even need global AMM? Take one element of the graph – when will/should it be released in your reposurgeon? Overall I think the answer is never, because it is usually linked with other elements of this graph. Do you check how many objects are created and released during operations? I do not mean some temporary strings but the objects representing the main working set.
Depending on the answer: if you load some graph element and it will stay indefinitely in memory, then this could easily be converted to C/C++ by simply never using `free` for graph elements (and all problems with memory management go out the window).
If they should be released early, then when should that happen? Do you have some code in reposurgeon that purges objects when they are not needed any more? Simple reachability of some object does not mean it is needed; many times it is quite the opposite.
I am now working on a C# application that had a similar bungle, and the previous developers' "solution" was to restart the whole application instead of fixing lifetime problems. The correct solution was C++-like code: I create an object, do the work, and purge it explicitly. With this, none of the components have memory issues now. Of course the problem there lay with a lack of knowledge of the tools they used and not the complexity of the domain, but did you do an analysis of what is needed and what is not, and for how long? AMM does not solve this.
btw I am a big fan of the lisp that is in C++11, aka templates – a great pure functional language :D Reply ↓
esr on 2018-12-18 at 20:57:12 said: >I have one question, do you even need global AMM?
Oh hell yes. Consider, for example, the demands of loading in and operating on multiple repositories. Reply ↓
Yankes on 2018-12-19 at 08:56:36 said: If I understood this correctly, the situation looks like this:
I have processes that loaded repos A, B and C and are actively working on each one.
Now because of some demand we need to load repo D.
After we are done we go back to A, B and C.
Now the question is: should D's data be purged?
If there are memory connections from the previous repos then it will stay in memory; if not, then AMM will remove all of its data from memory.
Assume this is a complex graph where, when you have access to any element, you can crawl to any other element of the graph (this is a simplification but probably a safe assumption).
The first case (there is a connection) is equivalent to not using `free` in C. Of course if not all of the graph is reachable then there will be a partial purge of its memory (let's say that 10% will stay), but what happens when you need to load repo D again? The data currently available is hidden deep in other graphs and most of the data was lost to AMM; you need to load everything again and now repo D's size is 110%.
In case there is no connection between repos A, B, C and repo D, then we can free it entirely.
This is easily done in C++ (some kind of smart pointer that knows whether it points into the same repo or another).
Is my reasoning correct? Or do I miss something?
btw, there is a BIG difference between C and C++: I can implement things in C++ that I will NEVER be able to implement in C. An example of this is my strongly typed simple script language:
https://github.com/Yankes/OpenXcom/blob/master/src/Engine/Script.cpp
I would need to drop functionality/protections to be able to convert this to C (or even C++03).
Another example of this is https://github.com/fmtlib/fmt from C++ and `printf` from C.
Both do exactly the same thing but the C++ one is many times better and safer.
This means if we add your statement on impossibility and mine then we have:
C <<< C++ <<< Go/Python
but for me personally is more:
C <<< C++ < Go/Python
than yours:
C/C++ <<< Go/Python Reply ↓
esr on 2018-12-19 at 09:08:46 said: >Do my reasoning is correct? or I miss something?
Not much. The bigger issue is that it is fucking insane to try anything like this in a language where the core abstractions are leaky. That disqualifies C++. Reply ↓
Yankes on 2018-12-19 at 10:24:47 said: I only disagree with the word `insane`. C++ has a lot of problems, like UB, lots of corner cases, leaking abstractions, the whole crap from C (and my favorite: 1000-line errors from templates), but it is not insane to work with memory problems.
You can easily create tools that make all these problems bearable, and this is the biggest flaw in C++: many problems are solvable but not out of the box. C++ is good at creating abstractions:
https://www.youtube.com/watch?v=sPhpelUfu8Q
If the abstraction fits your domain then it will not leak much, because it fits the underlying problem well.
And you can enforce a lot of things that allow you to reason locally about the behavior of the program.
In case creating this new abstraction is indeed insane, then I think you have problems in Go too, because the only problem that AMM solves is reachability of memory and how long you need it.
btw, the best thing that shows the difference between C++03 and C++11 is `std::vector<std::vector<T>>`: in C++03 this is insanely stupid and in C++11 it is insanely clever, because it has the performance characteristics of `std::vector` (thanks to `std::move`) and no problems with memory management (keep indexes stable and use `v.at(i).at(j).x = 5;`, or wrap it in a helper class and use `v[i][j].x` that will throw on a wrong index). Reply ↓
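A small sketch of that nested-vector point, assuming a hypothetical Cell element type: C++11 move semantics make growing the outer vector cheap, and at() provides the bounds-checked access mentioned above.

```cpp
#include <vector>

struct Cell { int x = 0; };

int main() {
    // 3 rows of 4 cells; when the outer vector reallocates, the rows are
    // moved (cheap pointer swaps in C++11) rather than deep-copied.
    std::vector<std::vector<Cell>> v(3, std::vector<Cell>(4));
    v.at(1).at(2).x = 5;     // bounds-checked; throws std::out_of_range
    v.emplace_back(1000);    // grow the outer vector; existing rows move
    return v.at(1).at(2).x;  // 5
}
```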
Nov 04, 2019 | en.wikipedia.org
... ... ...
The designers were primarily motivated by their shared dislike of C++ . [26] [27] [28]
... ... ...
Omissions
Go deliberately omits certain features common in other languages, including (implementation) inheritance, generic programming, assertions, [e] pointer arithmetic, [d] implicit type conversions, untagged unions, [f] and tagged unions. [g] The designers added only those facilities that all three agreed on. [95]
Of the omitted language features, the designers explicitly argue against assertions and pointer arithmetic, while defending the choice to omit type inheritance as giving a more useful language, encouraging instead the use of interfaces to achieve dynamic dispatch [h] and composition to reuse code. Composition and delegation are in fact largely automated by struct embedding; according to researchers Schmager et al. , this feature "has many of the drawbacks of inheritance: it affects the public interface of objects, it is not fine-grained (i.e, no method-level control over embedding), methods of embedded objects cannot be hidden, and it is static", making it "not obvious" whether programmers will overuse it to the extent that programmers in other languages are reputed to overuse inheritance. [61]
The designers express an openness to generic programming and note that built-in functions are in fact type-generic, but these are treated as special cases; Pike calls this a weakness that may at some point be changed. [53] The Google team built at least one compiler for an experimental Go dialect with generics, but did not release it. [96] They are also open to standardizing ways to apply code generation. [97]
Initially omitted, the exception-like panic/recover mechanism was eventually added, which the Go authors advise using for unrecoverable errors such as those that should halt an entire program or server request, or as a shortcut to propagate errors up the stack within a package (but not across package boundaries; there, error returns are the standard API). [98]
Jul 29, 2008 | developer.ibm.com

Maintaining and adding new features to legacy systems developed using C/C++ is a daunting task. There are several facets to the problem -- understanding the existing class hierarchy and global variables, the different user-defined types, and function call graph analysis, to name a few. This article discusses several features of doxygen, with examples in the context of projects using C/C++. However, doxygen is flexible enough to be used for software projects developed using Python, Java, PHP, and other languages as well. The primary motivation of this article is to help extract information from C/C++ sources, but it also briefly describes how to document code using doxygen-defined tags.

Installing doxygen

You have two choices for acquiring doxygen. You can download it as a pre-compiled executable file, or you can check out the sources from the SVN repository and build them. Listing 1 shows the latter process.

Listing 1. Install and build doxygen sources

bash-2.05$ svn co https://doxygen.svn.sourceforge.net/svnroot/doxygen/trunk doxygen-svn
bash-2.05$ cd doxygen-svn
bash-2.05$ ./configure --prefix=/home/user1/bin
bash-2.05$ make
bash-2.05$ make install

Note that the configure script is tailored to dump the compiled sources in /home/user1/bin (add this directory to the PATH variable after the build), as not every UNIX user has permission to write to the /usr folder. Also, you need the svn utility to check out sources.

Generating documentation using doxygen

To use doxygen to generate documentation of the sources, you perform three steps.

Generate the configuration file

At a shell prompt, type the command doxygen -g. This command generates a text-editable configuration file called Doxyfile in the current directory. You can choose to override this file name, in which case the invocation should be doxygen -g <user-specified file name>, as shown in Listing 2.

Listing 2. Generate the default configuration file

bash-2.05b$ doxygen -g
Configuration file 'Doxyfile' created.
Now edit the configuration file and enter
  doxygen Doxyfile
to generate the documentation for your project
bash-2.05b$ ls Doxyfile
Doxyfile

Edit the configuration file

The configuration file is structured as <TAGNAME> = <VALUE>, similar to the Make file format. Here are the most important tags:

- <OUTPUT_DIRECTORY>: You must provide a directory name here -- for example, /home/user1/documentation -- for the directory in which the generated documentation files will reside. If you provide a nonexistent directory name, doxygen creates the directory subject to proper user permissions.
- <INPUT>: This tag creates a space-separated list of all the directories in which the C/C++ source and header files reside whose documentation is to be generated. For example, consider the following snippet:

INPUT = /home/user1/project/kernel /home/user1/project/memory

In this case, doxygen would read in the C/C++ sources from these two directories. If your project has a single source root directory with multiple sub-directories, specify that folder and set the <RECURSIVE> tag to Yes.
- <FILE_PATTERNS>: By default, doxygen searches for files with typical C/C++ extensions such as .c, .cc, .cpp, .h, and .hpp. This happens when the <FILE_PATTERNS> tag has no value associated with it. If the sources use different naming conventions, update this tag accordingly. For example, if a project convention is to use .c86 as a C file extension, add this to the <FILE_PATTERNS> tag.
- <RECURSIVE>: Set this tag to Yes if the source hierarchy is nested and you need to generate documentation for C/C++ files at all hierarchy levels. For example, consider the root-level source hierarchy /home/user1/project/kernel, which has multiple sub-directories such as /home/user1/project/kernel/vmm and /home/user1/project/kernel/asm. If this tag is set to Yes, doxygen recursively traverses the hierarchy, extracting information.
- <EXTRACT_ALL>: This tag is an indicator to doxygen to extract documentation even when the individual classes or functions are undocumented. You must set this tag to Yes.
- <EXTRACT_PRIVATE>: Set this tag to Yes. Otherwise, private data members of a class would not be included in the documentation.
- <EXTRACT_STATIC>: Set this tag to Yes. Otherwise, static members of a file (both functions and variables) would not be included in the documentation.

Listing 3 shows an example of a Doxyfile.

Listing 3. Sample Doxyfile with user-provided tag values

OUTPUT_DIRECTORY = /home/user1/docs
EXTRACT_ALL = yes
EXTRACT_PRIVATE = yes
EXTRACT_STATIC = yes
INPUT = /home/user1/project/kernel
#Do not add anything here unless you need to. Doxygen already covers all
#common formats like .c/.cc/.cxx/.c++/.cpp/.inl/.h/.hpp
FILE_PATTERNS =
RECURSIVE = yes

Run doxygen

Run doxygen at the shell prompt as doxygen Doxyfile (or with whatever file name you've chosen for the configuration file). Doxygen issues several messages before it finally produces the documentation in Hypertext Markup Language (HTML) and Latex formats (the default). In the folder that the <OUTPUT_DIRECTORY> tag specifies, two sub-folders named html and latex are created as part of the documentation-generation process. Listing 4 shows a sample doxygen run log.

Listing 4. Sample log output from doxygen

Searching for include files...
Searching for example files...
Searching for images...
Searching for dot files...
Searching for files to exclude
Reading input files...
Reading and parsing tag files
Preprocessing /home/user1/project/kernel/kernel.h
Read 12489207 bytes
Parsing input...
Parsing file /project/user1/project/kernel/epico.cxx
Freeing input...
Building group list...
..
Generating docs for compound MemoryManager::ProcessSpec
Generating docs for namespace std
Generating group index...
Generating example index...
Generating file member index...
Generating namespace member index...
Generating page index...
Generating graph info page...
Generating search index...
Generating style sheet...

Documentation output formats

Doxygen can generate documentation in several output formats other than HTML. You can configure doxygen to produce documentation in the following formats:

- UNIX man pages: Set the <GENERATE_MAN> tag to Yes. By default, a sub-folder named man is created within the directory provided using <OUTPUT_DIRECTORY>, and the documentation is generated inside the folder. You must add this folder to the MANPATH environment variable.
- Rich Text Format (RTF): Set the <GENERATE_RTF> tag to Yes. Set the <RTF_OUTPUT> to wherever you want the .rtf files to be generated -- by default, the documentation is within a sub-folder named rtf within the OUTPUT_DIRECTORY. For browsing across documents, set the <RTF_HYPERLINKS> tag to Yes. If set, the generated .rtf files contain links for cross-browsing.
- Latex: By default, doxygen generates documentation in Latex and HTML formats. The <GENERATE_LATEX> tag is set to Yes in the default Doxyfile. Also, the <LATEX_OUTPUT> tag is set to latex, which implies that a folder named latex would be generated inside OUTPUT_DIRECTORY, where the Latex files would reside.
- Microsoft Compiled HTML Help (CHM) format: Set the <GENERATE_HTMLHELP> tag to Yes. Because this format is not supported on UNIX platforms, doxygen would only generate a file named index.hhp in the same folder in which it keeps the HTML files. You must feed this file to the HTML help compiler for actual generation of the .chm file.
- Extensible Markup Language (XML) format: Set the <GENERATE_XML> tag to Yes. (Note that the XML output is still a work in progress for the doxygen team.)

Listing 5 provides an example of a Doxyfile that generates documentation in all the formats discussed.

Listing 5. Doxyfile with tags for generating documentation in several formats

#for HTML
GENERATE_HTML = YES
HTML_FILE_EXTENSION = .htm
#for CHM files
GENERATE_HTMLHELP = YES
#for Latex output
GENERATE_LATEX = YES
LATEX_OUTPUT = latex
#for RTF
GENERATE_RTF = YES
RTF_OUTPUT = rtf
RTF_HYPERLINKS = YES
#for MAN pages
GENERATE_MAN = YES
MAN_OUTPUT = man
#for XML
GENERATE_XML = YES

Special tags in doxygen

Doxygen contains a couple of special tags.

Preprocessing C/C++ code

First, doxygen must preprocess C/C++ code to extract information. By default, however, it does only partial preprocessing -- conditional compilation statements (#if/#endif) are evaluated, but macro expansions are not performed. Consider the code in Listing 6.

Listing 6. Sample C code that makes use of macros

#include <cstring>
#include <rope>
#define USE_ROPE
#ifdef USE_ROPE
#define STRING std::rope
#else
#define STRING std::string
#endif
static STRING name;

With USE_ROPE defined in the sources, the generated documentation from doxygen looks like this:

Defines
  #define USE_ROPE
  #define STRING std::rope
Variables
  static STRING name

Here, you see that doxygen has performed a conditional compilation but has not done a macro expansion of STRING. The <ENABLE_PREPROCESSING> tag in the Doxyfile is set by default to Yes. To allow for macro expansions, also set the <MACRO_EXPANSION> tag to Yes. Doing so produces this output from doxygen:

Defines
  #define USE_ROPE
  #define STRING std::string
Variables
  static std::rope name

If you set the <ENABLE_PREPROCESSING> tag to No, the output from doxygen for the earlier sources looks like this:

Variables
  static STRING name

Note that the documentation now has no definitions, and it is not possible to deduce the type of STRING. It thus makes sense always to set the <ENABLE_PREPROCESSING> tag to Yes.

As part of the documentation, it might be desirable to expand only specific macros. For such purposes, along with setting <ENABLE_PREPROCESSING> and <MACRO_EXPANSION> to Yes, you must set the <EXPAND_ONLY_PREDEF> tag to Yes (this tag is set to No by default) and provide the macro details as part of the <PREDEFINED> or <EXPAND_AS_DEFINED> tag. Consider the code in Listing 7, where only the macro CONTAINER would be expanded.

Listing 7. C source with multiple macros

#ifdef USE_ROPE
#define STRING std::rope
#else
#define STRING std::string
#endif
#if ALLOW_RANDOM_ACCESS == 1
#define CONTAINER std::vector
#else
#define CONTAINER std::list
#endif
static STRING name;
static CONTAINER gList;

Listing 8 shows the configuration file.

Listing 8. Doxyfile set to allow select macro expansions

ENABLE_PREPROCESSING = YES
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = YES
EXPAND_AS_DEFINED = CONTAINER

Here's the doxygen output with only CONTAINER expanded:

Defines
  #define STRING std::string
  #define CONTAINER std::list
Variables
  static STRING name
  static std::list gList

Notice that only the CONTAINER macro has been expanded. Subject to <MACRO_EXPANSION> and <EXPAND_AS_DEFINED> both being Yes, the <EXPAND_AS_DEFINED> tag selectively expands only those macros listed on the right-hand side of the equality operator.

As part of preprocessing, the final tag to note is <PREDEFINED>. Much like the way you use the -D switch to pass preprocessor definitions to the G++ compiler, you use this tag to define macros. Consider the Doxyfile in Listing 9.

Listing 9. Doxyfile with macro expansion tags defined

ENABLE_PREPROCESSING = YES
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = YES
EXPAND_AS_DEFINED =
PREDEFINED = USE_ROPE= \
             ALLOW_RANDOM_ACCESS=1

Here's the doxygen-generated output:

Defines
  #define USE_CROPE
  #define STRING std::rope
  #define CONTAINER std::vector
Variables
  static std::rope name
  static std::vector gList

When used with the <PREDEFINED> tag, macros should be defined as <macro name>=<value>. If no value is provided -- as in the case of a simple #define -- just using <macro name>=<spaces> suffices. Separate multiple macro definitions by spaces or a backslash (\).

Excluding specific files or directories from the documentation process

In the <EXCLUDE> tag in the Doxyfile, add the names of the files and directories for which documentation should not be generated, separated by spaces. This comes in handy when the root of the source hierarchy is provided and some sub-directories must be skipped. For example, if the root of the hierarchy is src_root and you want to skip the examples/ and test/memoryleaks folders from the documentation process, the Doxyfile should look like Listing 10.

Listing 10. Using the EXCLUDE tag as part of the Doxyfile

INPUT = /home/user1/src_root
EXCLUDE = /home/user1/src_root/examples /home/user1/src_root/test/memoryleaks

Generating graphs and diagrams

By default, the Doxyfile has the <CLASS_DIAGRAMS> tag set to Yes. This tag is used for generation of class hierarchy diagrams. The following tags in the Doxyfile deal with generating diagrams:

- <CLASS_DIAGRAMS>: The default tag is set to Yes in the Doxyfile. If the tag is set to No, diagrams for the inheritance hierarchy would not be generated.
- <HAVE_DOT>: If this tag is set to Yes, doxygen uses the dot tool to generate more powerful graphs, such as collaboration diagrams that help you understand individual class members and their data structures. Note that if this tag is set to Yes, the effect of the <CLASS_DIAGRAMS> tag is nullified.
- <CLASS_GRAPH>: If the <HAVE_DOT> tag is set to Yes along with this tag, the inheritance hierarchy diagrams are generated using the dot tool and have a richer look and feel than what you'd get by using only <CLASS_DIAGRAMS>.
- <COLLABORATION_GRAPH>: If the <HAVE_DOT> tag is set to Yes along with this tag, doxygen generates a collaboration diagram (apart from an inheritance diagram) that shows the individual class members (that is, containment) and their inheritance hierarchy.

Listing 11 provides an example using a few data structures. Note that the <HAVE_DOT>, <CLASS_GRAPH>, and <COLLABORATION_GRAPH> tags are all set to Yes in the configuration file.

Listing 11. Interacting C classes and structures

struct D {
  int d;
};
class A {
  int a;
};
class B : public A {
  int b;
};
class C : public B {
  int c;
  D d;
};

Figure 1 shows the output from doxygen.

Figure 1. The class inheritance graph and collaboration graph generated using the dot tool (image not reproduced here)

Code documentation style

So far, you've used doxygen to extract information from code that is otherwise undocumented. However, doxygen also advocates documentation style and syntax, which helps it generate more detailed documentation. This section discusses some of the more common tags doxygen advocates using as part of C/C++ code. For further details, see resources on the right.

Every code item has two kinds of descriptions: one brief and one detailed. Brief descriptions are typically single lines. Functions and class methods have a third kind of description known as the in-body description, which is a concatenation of all comment blocks found within the function body. Some of the more common doxygen tags and styles of commenting are:

- Brief description: Use a single-line C++ comment, or use the <\brief> tag.
- Detailed description: Use JavaDoc-style commenting /** test */ (note the two asterisks in the beginning) or the Qt-style /*! text */.
- In-body description: Individual C++ elements like classes, structures, unions, and namespaces have their own tags, such as <\class>, <\struct>, <\union>, and <\namespace>.
- Global items: To document global functions, variables, and enum types, the corresponding file must first be documented using the <\file> tag.

Listing 12 provides an example that discusses item 4 with a function tag (<\fn>), a function argument tag (<\param>), a variable name tag (<\var>), a tag for #define (<\def>), and a tag to indicate some specific issues related to a code snippet (<\warning>).

Listing 12. Typical doxygen tags and their use

/*! \file globaldecls.h
    \brief Place to look for global variables, enums, functions
           and macro definitions
*/

/** \var const int fileSize
    \brief Default size of the file on disk
*/
const int fileSize = 1048576;

/** \def SHIFT(value, length)
    \brief Left shift value by length in bits
*/
#define SHIFT(value, length) ((value) << (length))

/** \fn bool check_for_io_errors(FILE* fp)
    \brief Checks if a file is corrupted or not
    \param fp Pointer to an already opened file
    \warning Not thread safe!
*/
bool check_for_io_errors(FILE* fp);

Here's how the generated documentation looks:

Defines
  #define SHIFT(value, length) ((value) << (length))
    Left shift value by length in bits.
Functions
  bool check_for_io_errors (FILE *fp)
    Checks if a file is corrupted or not.
Variables
  const int fileSize = 1048576

Function Documentation
  bool check_for_io_errors (FILE* fp)
    Checks if a file is corrupted or not.
    Parameters: fp: Pointer to an already opened file
    Warning: Not thread safe!

Conclusion

This article discusses how doxygen can extract a lot of relevant information from legacy C/C++ code. If the code is documented using doxygen tags, doxygen generates output in an easy-to-read format. Put to good use, doxygen is a ripe candidate in any developer's arsenal for maintaining and managing legacy systems.
Oct 15, 2019 | economistsview.typepad.com
From Reuters Odd News :
Man gets the poop on outsourcing , By Holly McKenna, May 2, Reuters
Computer programmer Steve Relles has the poop on what to do when your job is outsourced to India. Relles has spent the past year making his living scooping up dog droppings as the "Delmar Dog Butler." "My parents paid for me to get a (degree) in math and now I am a pooper scooper," he said. "I can clean four to five yards in an hour if they are close together." Relles, who lost his computer programming job about three years ago ... has over 100 clients who pay $10 each for a once-a-week cleaning of their yard.
Relles competes for business with another local company called "Scoopy Do." Similar outfits have sprung up across America, including Petbutler.net, which operates in Ohio. Relles says his business is growing by word of mouth and that most of his clients are women who either don't have the time or desire to pick up the droppings. "St. Bernard (dogs) are my favorite customers since they poop in large piles which are easy to find," Relles said. "It sure beats computer programming because it's flexible, and I get to be outside,"
Oct 13, 2019 | www.quora.com
Eugene Miya , A friend/colleague. Sometimes driver. Other shared experiences. Updated Mar 22 2017 · Author has 11.2k answers and 7.9m answer views
He mostly writes in C today.
I can assure you he at least knows about Python. Guido's office at Dropbox is 1 -- 2 blocks by a backdoor gate from Don's house.
I would tend to doubt that he would use R (I've used S before as one of my stat packages). Don would probably write something for himself.
Don is not big on functional languages, so I would doubt either Haskell (sorry Paul) or LISP (but McCarthy lived just around the corner from Don; I used to drive him to meetings; actually, I've driven all 3 of us to meetings, and he got his wife an electric version of my car based on riding in my car (score one for friend's choices)). He does use emacs and he does write MLISP macros, but he believes in being closer to the hardware which is why he sticks with MMIX (and MIX) in his books.
Don't discount him learning the machine language of a given architecture.
I'm having dinner with Don and Jill and a dozen other mutual friends in 3 weeks or so (our quarterly dinner). I can ask him then, if I remember (either a calendar entry or at job). I try not to bother him with things like this. Don is well connected to the hacker community
Don's name was brought up at an undergrad architecture seminar today, but Don was not in the audience (an amazing audience; I took a photo for the collection of architects and other computer scientists in the audience (Hennessey and Patterson were talking)). I came close to biking by his house on my way back home.
We do have a mutual friend (actually, I introduced Don to my biology friend at Don's request) who arrives next week, and Don is my wine drinking proxy. So there is a chance I may see him sooner.
Steven de Rooij , Theoretical computer scientist Answered Mar 9, 2017 · Author has 4.6k answers and 7.7m answer views
Nice question :-)
Don Knuth would want to use something that's low level, because details matter. So no Haskell; LISP is borderline. Perhaps if the Lisp machine had ever become a thing.
He’d want something with well-defined and simple semantics, so definitely no R. Python also contains quite a few strange ad hoc rules, especially in its OO and lambda features. Yes Python is easy to learn and it looks pretty, but Don doesn’t care about superficialities like that. He’d want a language whose version number is converging to a mathematical constant, which is also not in favor of R or Python.
What remains is C. Out of the five languages listed, my guess is Don would pick that one. But actually, his own old choice of Pascal suits him even better. I don't think any languages have been invented since The Art of Computer Programming was written that score higher on the Knuthometer than Knuth's own original pick.
And yes, I feel that this is actually a conclusion that bears some thinking about.
Dan Allen , I've been programming for 34 years now. Still not finished. Answered Mar 9, 2017 · Author has 4.5k answers and 1.8m answer views
In The Art of Computer Programming I think he'd do exactly what he did. He'd invent his own architecture and implement programs in an assembly language targeting that theoretical machine.
He did that for a reason: he wanted to reveal the detail of algorithms at the lowest level, which is the machine level.
He didn't use any available languages at the time and I don't see why that would suit his purpose now. All the languages above are too high-level for his purposes.
Oct 08, 2019 | www.nakedcapitalism.com
At first blush, the suit filed in Dallas by the Southwest Airlines Pilots Association (SWAPA) against Boeing may seem like a family feud. SWAPA is seeking an estimated $115 million for lost pilots' pay as a result of the grounding of the 34 Boeing 737 Max planes that Southwest owns and the additional 20 that Southwest had planned to add to its fleet by year end 2019. Recall that Southwest was the largest buyer of the 737 Max, followed by American Airlines. However, the damning accusations made by the pilots' union, meaning, erm, pilots, are likely to cause Boeing not just more public relations headaches, but will also give grist to suits by crash victims.
However, one reason that the Max is a sore point with the union was that it was a key leverage point in 2016 contract negotiations:
And Boeing's assurances that the 737 Max was for all practical purposes just a newer 737 factored into the pilots' bargaining stance. Accordingly, one of the causes of action is tortious interference, that Boeing interfered in the contract negotiations to the benefit of Southwest. The filing describes at length how Boeing and Southwest were highly motivated not to have the contract dispute drag on and set back the launch of the 737 Max at Southwest, its showcase buyer. The big point that the suit makes is the plane was unsafe and the pilots never would have agreed to fly it had they known what they know now.
We've embedded the complaint at the end of the post. It's colorful and does a fine job of recapping the sorry history of the development of the airplane. It has damning passages like:
Boeing concealed the fact that the 737 MAX aircraft was not airworthy because, inter alia, it incorporated a single-point failure condition -- a software/flight control logic called the Maneuvering Characteristics Augmentation System ("MCAS") -- that, if fed erroneous data from a single angle-of-attack sensor, would command the aircraft nose-down and into an unrecoverable dive without pilot input or knowledge.
The lawsuit also aggressively contests Boeing's spin that competent pilots could have prevented the Lion Air and Ethiopian Air crashes:
Had SWAPA known the truth about the 737 MAX aircraft in 2016, it never would have approved the inclusion of the 737 MAX aircraft as a term in its CBA [collective bargaining agreement], and agreed to operate the aircraft for Southwest. Worse still, had SWAPA known the truth about the 737 MAX aircraft, it would have demanded that Boeing rectify the aircraft's fatal flaws before agreeing to include the aircraft in its CBA, and to provide its pilots, and all pilots, with the necessary information and training needed to respond to the circumstances that the Lion Air Flight 610 and Ethiopian Airlines Flight 302 pilots encountered nearly three years later.
And (boldface original):
Boeing Set SWAPA Pilots Up to Fail
As SWAPA President Jon Weaks, publicly stated, SWAPA pilots "were kept in the dark" by Boeing.
Boeing did not tell SWAPA pilots that MCAS existed and there was no description or mention of MCAS in the Boeing Flight Crew Operations Manual.
There was therefore no way for commercial airline pilots, including SWAPA pilots, to know that MCAS would work in the background to override pilot inputs.
There was no way for them to know that MCAS drew on only one of two angle of attack sensors on the aircraft.
And there was no way for them to know of the terrifying consequences that would follow from a malfunction.
When asked why Boeing did not alert pilots to the existence of the MCAS, Boeing responded that the company decided against disclosing more details due to concerns about "inundate[ing] average pilots with too much information -- and significantly more technical data -- than [they] needed or could realistically digest."
SWAPA's pilots, like their counterparts all over the world, were set up for failure
The filing has a detailed explanation of why the addition of heavier, bigger LEAP1-B engines to the 737 airframe made the plane less stable, changed how it handled, and increased the risk of catastrophic stall. It also describes at length how Boeing ignored warning signs during the design and development process, and misrepresented the 737 Max as essentially the same as older 737s to the FAA, potential buyers, and pilots. It also has juicy bits presented in earlier media accounts but bear repeating, like:
By March 2016, Boeing settled on a revision of the MCAS flight control logic.
However, Boeing chose to omit key safeguards that had previously been included in earlier iterations of MCAS used on the Boeing KC-46A Pegasus, a military tanker derivative of the Boeing 767 aircraft.
The engineers who created MCAS for the military tanker designed the system to rely on inputs from multiple sensors and with limited power to move the tanker's nose. These deliberate checks sought to ensure that the system could not act erroneously or cause a pilot to lose control. Those familiar with the tanker's design explained that these checks were incorporated because "[y]ou don't want the solution to be worse than the initial problem."
The 737 MAX version of MCAS abandoned the safeguards previously relied upon. As discussed below, the 737 MAX MCAS had greater control authority than its predecessor, activated repeatedly upon activation, and relied on input from just one of the plane's two sensors that measure the angle of the plane's nose.
In other words, Boeing can't credibly say that it didn't know better.
Here is one of the sections describing Boeing's cover-ups:
Yet Boeing's website, press releases, annual reports, public statements and statements to operators and customers, submissions to the FAA and other civil aviation authorities, and 737 MAX flight manuals made no mention of the increased stall hazard or MCAS itself.
In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots.
We urge you to read the complaint in full, since it contains juicy insider details, like the significance of Southwest being Boeing's 737 Max "launch partner" and what that entailed in practice, plus recounting dates and names of Boeing personnel who met with SWAPA pilots and made misrepresentations about the aircraft.
If you are time-pressed, the best MSM account is from the Seattle Times, In scathing lawsuit, Southwest pilots' union says Boeing 737 MAX was unsafe
Even though Southwest Airlines is negotiating a settlement with Boeing over losses resulting from the grounding of the 737 Max and the airline has promised to compensate the pilots, the pilots' union at a minimum apparently feels the need to put the heat on Boeing directly. After all, the union could withdraw the complaint if Southwest were to offer satisfactory compensation for the pilots' lost income. And pilots have incentives not to raise safety concerns about the planes they fly. Don't want to spook the horses, after all.
But Southwest pilots are not only the ones most harmed by Boeing's debacle but they are arguably less exposed to the downside of bad press about the 737 Max. It's business fliers who are most sensitive to the risks of the 737 Max, due to seeing the story regularly covered in the business press plus due to often being road warriors. Even though corporate customers account for only 12% of airline customers, they represent an estimated 75% of profits.
Southwest customers don't pay up for front of the bus seats. And many of them presumably value the combination of cheap travel, point to point routes between cities underserved by the majors, and close-in airports, which cut travel times. In other words, that combination of features will make it hard for business travelers who use Southwest regularly to give the airline up, even if the 737 Max gives them the willies. By contrast, premium seat passengers on American or United might find it not all that costly, in terms of convenience and ticket cost (if they are budget sensitive), to fly 737-Max-free Delta until those passengers regain confidence in the grounded plane.
Note that American Airlines' pilot union, when asked about the Southwest claim, said that it also believes its pilots deserve to be compensated for lost flying time, but they plan to obtain it through American Airlines.
If Boeing were smart, it would settle this suit quickly, but so far, Boeing has relied on bluster and denial. So your guess is as good as mine as to how long the legal arm-wrestling goes on.
Update 5:30 AM EDT : One important point that I neglected to include is that the filing also recounts, in gory detail, how Boeing went into "Blame the pilots" mode after the Lion Air crash, insisting the cause was pilot error and would therefore not happen again. Boeing made that claim on a call to all operators, including SWAPA, and then three days later in a meeting with SWAPA.
However, Boeing's actions were inconsistent with this claim. From the filing:
Then, on November 7, 2018, the FAA issued an "Emergency Airworthiness Directive (AD) 2018-23-51," warning that an unsafe condition likely could exist or develop on 737 MAX aircraft.
Relying on Boeing's description of the problem, the AD directed that in the event of un-commanded nose-down stabilizer trim such as what happened during the Lion Air crash, the flight crew should comply with the Runaway Stabilizer procedure in the Operating Procedures of the 737 MAX manual.
But the AD did not provide a complete description of MCAS or the problem in 737 MAX aircraft that led to the Lion Air crash, and would lead to another crash and the 737 MAX's grounding just months later.
An MCAS failure is not like a runaway stabilizer. A runaway stabilizer has continuous un-commanded movement of the tail, whereas MCAS is not continuous and pilots (theoretically) can counter the nose-down movement, after which MCAS would move the aircraft tail down again.
Moreover, unlike runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and relied upon in earlier generations of 737 aircraft.
Even after the Lion Air crash, Boeing's description of MCAS was still insufficient to correct its lack of disclosure, as demonstrated by a second MCAS-caused crash.
We hoisted this detail because insiders were spouting in our comments section, presumably based on Boeing's patter, that the Lion Air pilots were clearly incompetent, and that had they only executed the well-known "runaway stabilizer" procedure, all would have been fine. Needless to say, this assertion has been shown to be incorrect.
Titus , October 8, 2019 at 4:38 am
Excellent, by any standard. Which does remind me of the NYT magazine story (William Langewiesche, published Sept. 18, 2019) making the claim that basically the pilots who crashed their planes weren't real "airmen".
And making the point that to turn off MCAS all you had to do was flip two switches behind everything else on the center console. Not exactly true: normally those switches were there to shut off power to the electrically assisted trim. Ah, it's one thing to shut off MCAS; it's a whole other thing to shut off power to the plane's trim, especially at high speed ✓, with the plane nose up ✓, and not much altitude ✓.
And especially if you as a pilot didn't know MCAS was there in the first place. This sort of engineering by Boeing is criminal. And the lying. To everyone. Oh, lest we all forget, the processing power of the in-flight computer is that of an Intel 286. There are times I just want to be beamed back to the home planet. Where we care for each other.
Carolinian , October 8, 2019 at 8:32 am
One should also point out that Langewiesche said that Boeing made disastrous mistakes with the MCAS and that the very future of the Max is cloudy. His article was useful both for greater detail about what happened and for offering some pushback to the idea that the pilots had nothing to do with the accidents.
As for the above, it was obvious from the first Seattle Times stories that these two events and the grounding were going to be a lawsuit magnet. But some of us think Boeing deserves at least a little bit of a defense because their side has been totally silent–either for legal reasons or CYA reasons on the part of their board and bad management.
Brooklin Bridge , October 8, 2019 at 8:08 am
Classic addiction behavior. Boeing has a major behavioral problem: the repetitive need for, and irrational insistence on, profit above safety and all else, that is glaringly obvious to everyone except Boeing.
Summer , October 8, 2019 at 9:01 am
"The engineers who created MCAS for the military tanker designed the system to rely on inputs from multiple sensors and with limited power to move the tanker's nose. These deliberate checks sought to ensure that the system could not act erroneously or cause a pilot to lose control "
"Yet Boeing's website, press releases, annual reports, public statements and statements to operators and customers, submissions to the FAA and other civil aviation authorities, and 737 MAX flight manuals made no mention of the increased stall hazard or MCAS itself.
In fact, Boeing 737 Chief Technical Pilot, Mark Forkner asked the FAA to delete any mention of MCAS from the pilot manual so as to further hide its existence from the public and pilots "
This "MCAS" was always hidden from pilots? The military implemented checks on MCAS to maintain a level of pilot control. The commercial airlines did not. Commercial airlines were in thrall of every little feature that they felt would eliminate the need for pilots at all. Fell right into the automation crapification of everything.
Oct 08, 2019 | www.reddit.com
Posted by u/kevbo423 59 minutes ago
What the hell is DevOps? Every couple months I find myself trying to look into it as all I ever hear and see about is DevOps being the way forward. But each time I research it I can only find things talking about streamlining software updates and quality assurance and yada yada yada. It seems like DevOps only applies to companies that make software as a product. How does that affect me as a sysadmin for higher education? My "company's" product isn't software.
Additionally, what does Chef, Puppet, Docker, Kubernetes, Jenkins, or whatever else have to offer me? Again, when I try to research them a majority of what I find just links back to software development.
To give a rough idea of what I deal with, below is a list of my three main responsibilities.
macOS/iOS Systems Administration (I'm the only sysadmin that does this for around 150+ machines)
Network Administration (I just started with this a couple months ago and I'm slowly learning about our infrastructure and network administration in general from our IT director. We have several buildings spread across our entire campus with a mixture of Juniper, Dell, and Brocade equipment.)
AV Systems Design and Programming (I'm the only person who does anything related to video conferencing, meeting room equipment, presentation systems, digital signage, etc. for 7 buildings.)
So what does DevOps have to do with what I do in my job? I'm legitimately trying to learn, but it gets so overwhelming trying to find information because everything I find just assumes you're a software developer with all this prerequisite knowledge. Additionally, how the hell do you find the time to learn all of this? It seems like new DevOps software or platforms or whatever you call them spin up every single month. I'm already in the middle of trying to learn JAMF (macOS/iOS administration), Junos, Dell, and Brocade for network administration (in addition to networking concepts in general), and AV design stuff (like Crestron programming).
I've been working at the same job for 5 years and I feel like I'm being left in the dust by the entire rest of the industry. I'm being pulled in so many different directions that I feel like it's impossible for me to ever get another job. At the same time, I can't specialize in anything because I have so many different unrelated areas I'm supposed to be doing work in.
And this is what I go through/ask myself every few months I try to research and learn DevOps. This is mainly a rant, but I am more than open to any and all advice anyone is willing to offer. Thanks in advance.
kimvila 2 points · 27 minutes ago
· edited 23 minutes ago
There are a lot of tools used on a daily basis for DevOps that can make your life much easier, but apparently that's not the case for you. When you manage infra as code, you're using DevOps.
there's a lot of space for operations guys like you (and me) so look to DevOps as an alternative source of knowledge, just to stay tuned on the trends of the industry and improve your skills.
for higher education, this is useful for managing large projects and looking for improvement during the development of the product/service itself. but again, that's not the case for you. if you intend to switch to another position, you may try to search for a certification program that suits your needs
Mongoloid_the_Retard 0 points · 46 minutes ago
DevOps is a cult.
Apr 27, 2000 | www.chicagotribune.com
"Hooked on Objects" is dedicated to providing readers with insight into object-oriented technologies. In our first few articles, we introduced the three tenants of object-oriented programming: encapsulation, inheritance and polymorphism. We then covered software process and design patterns. We even got our hands dirty and dissected the Java class.
Each of our previous articles had a common thread. We have written about the strengths and benefits of the object paradigm and highlighted the advantages the object approach brings to the development effort. However, we do not want to give anyone a false sense that object-oriented techniques are always the perfect answer. Object-oriented techniques are not the magic "silver bullets" of programming.
In the programming world, the term silver bullet refers to a technology or methodology that is touted as the ultimate cure for all programming challenges. A silver bullet will make you more productive. It will automatically make design, code and the finished product perfect. It will also make your coffee and butter your toast. Even more impressive, it will do all of this without any effort on your part!
Naturally (and unfortunately) the silver bullet does not exist. Object-oriented technologies are not, and never will be, the ultimate panacea. Object-oriented approaches do not eliminate the need for well-planned design and architecture.
If anything, using OO makes design and architecture more important because without a clear, well-planned design, OO will fail almost every time. Spaghetti code (that which is written without a coherent structure) spells trouble for procedural programming, and weak architecture and design can mean the death of an OO project. A poorly planned system will fail to achieve the promises of OO: increased productivity, reusability, scalability and easier maintenance.
Some critics claim OO has not lived up to its advance billing, while others claim its techniques are flawed. OO isn't flawed, but some of the hype has given OO developers and managers a false sense of security.
Successful OO requires careful analysis and design. Our previous articles have stressed the positive attributes of OO. This time we'll explore some of the common fallacies of this promising technology and some of the potential pitfalls.
Fallacies of OO
It is important to have realistic expectations before choosing to use object-oriented technologies. Do not allow these common fallacies to mislead you.
- OO will ensure the success of your project: An object-oriented approach to software development does not guarantee the automatic success of a project. A developer cannot ignore the importance of sound design and architecture. Only careful analysis and a complete understanding of the problem will make the project succeed. A successful project will utilize sound techniques, competent programmers, sound processes and solid project management.
- OO makes you a better programmer: OO does not make a programmer better. Only experience can do that. A coder might know all of the OO lingo and syntactical tricks, but if he or she doesn't know when and where to employ these features, the resulting code will be error-prone and difficult for others to maintain and reuse.
- OO-derived software is superior to other forms of software: OO techniques do not make good software; features make good software. You can use every OO trick in the book, but if the application lacks the features and functionality users need, no one will use it.
- OO techniques mean you don't need to worry about business plans: Before jumping onto the object bandwagon, be certain to conduct a full business analysis. Don't go in without careful consideration or on the faith of marketing hype. It is critical to understand the costs as well as the benefits of object-oriented development. If you plan for only one or two internal development projects, you will see few of the benefits of reuse. You might be able to use preexisting object-oriented technologies, but rolling your own will not be cost effective.
- OO will cure your corporate ills: OO will not solve morale and other corporate problems. If your company suffers from motivational or morale problems, fix those with other solutions. An OO Band-Aid will only worsen an already unfortunate situation.
OO Pitfalls
Life is full of compromise and nothing comes without cost. OO is no exception. Before choosing to employ object technologies it is imperative to understand this. When used properly, OO has many benefits; when used improperly, however, the results can be disastrous.
OO technologies take time to learn: Don't expect to become an OO expert overnight. Good OO takes time and effort to learn. Like all technologies, change is the only constant. If you do not continue to enhance and strengthen your skills, you will fall behind.
OO benefits might not pay off in the short term: Because of the long learning curve and initial extra development costs, the benefits of increased productivity and reuse might take time to materialize. Don't forget this or you might be disappointed in your initial OO results.
OO technologies might not fit your corporate culture: The successful application of OO requires that your development team feels involved. If developers are frequently shifted, they will struggle to deliver reusable objects. There's less incentive to deliver truly robust, reusable code if you are not required to live with your work or if you'll never reap the benefits of it.
OO technologies might incur penalties: In general, programs written using object-oriented techniques are larger and slower than programs written using other techniques. This isn't as much of a problem today. Memory prices are dropping every day. CPUs continue to provide better performance and compilers and virtual machines continue to improve. The small efficiency that you trade for increased productivity and reuse should be well worth it. However, if you're developing an application that tracks millions of data points in real time, OO might not be the answer for you.
OO techniques are not appropriate for all problems: An OO approach is not an appropriate solution for every situation. Don't try to put square pegs through round holes! Understand the challenges fully before attempting to design a solution. As you gain experience, you will begin to learn when and where it is appropriate to use OO technologies to address a given problem. Careful problem analysis and cost/benefit analysis go a long way in protecting you from making a mistake.
What do you need to do to avoid these pitfalls and fallacies? The answer is to keep expectations realistic. Beware of the hype. Use an OO approach only when appropriate.
Programmers should not feel compelled to use every OO trick that the implementation language offers. It is wise to use only the ones that make sense. When used without forethought, object-oriented techniques could cause more harm than good. Of course, there is one other thing that you should always do to improve your OO: Don't miss a single installment of "Hooked on Objects."
David Hoag is vice president-development and chief object guru for ObjectWave, a Chicago-based object-oriented software engineering firm. Anthony Sintes is a Sun Certified Java Developer and team member specializing in telecommunications consulting for ObjectWave. Contact them at [email protected] or visit their Web site at www.objectwave.com.
BOOKMARKS
Hooked on Objects archive:
chicagotribune.com/go/HOBarchive
Associated message board:
chicagotribune.com/go/HOBtalk
Sony Computer Entertainment Europe Research & Development Division
OO is not necessarily EVIL
- Be careful not to design yourself into a corner
- Consider data in your design
- - Can you decouple data from objects?
- ... code from objects?
- Be aware of what the compiler and HW are doing
It's all about the memory
- Optimise for data first, then code.
- Memory access is probably going to be your biggest bottleneck
- Simplify systems
- - KISS
- - Easier to optimize, easier to parallelize
Homogeneity
- Keep code and data homogenous
- Avoid introducing variations
- Don't test for exceptions - sort by them.
- Not everything needs to be an object
- If you must have a pattern, then consider using Managers
Data Oriented Design Delivers
- Better performance
- Better realisation of code optimisations
- Often simpler code
- More parallelisable code
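As an illustration of the "consider data in your design" point in the slides above, here is a minimal, hypothetical C++ sketch (not from the presentation) contrasting the usual array-of-structs layout with the struct-of-arrays layout that data-oriented design favors; the point is that the hot loop only streams the fields it actually touches.

#include <cstddef>
#include <iostream>
#include <vector>

// "Object-oriented" layout: hot and cold fields travel together.
struct ParticleAoS {
    float x = 0, y = 0, z = 0;
    float vx = 0, vy = 0, vz = 0;
    char  debug_name[32] = {};   // cold data dragged through the cache on every update
};

// Data-oriented layout: each hot field is a separate, contiguous array.
struct ParticlesSoA {
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
};

void update_aos(std::vector<ParticleAoS>& ps, float dt) {
    for (auto& p : ps) {                    // every iteration also pulls in debug_name
        p.x += p.vx * dt;
        p.y += p.vy * dt;
        p.z += p.vz * dt;
    }
}

void update_soa(ParticlesSoA& ps, float dt) {
    const std::size_t n = ps.x.size();
    for (std::size_t i = 0; i < n; ++i) {   // only the arrays being updated are streamed
        ps.x[i] += ps.vx[i] * dt;
        ps.y[i] += ps.vy[i] * dt;
        ps.z[i] += ps.vz[i] * dt;
    }
}

int main() {
    std::vector<ParticleAoS> aos(1000);
    ParticlesSoA soa{std::vector<float>(1000), std::vector<float>(1000), std::vector<float>(1000),
                     std::vector<float>(1000, 1.0f), std::vector<float>(1000, 1.0f), std::vector<float>(1000, 1.0f)};
    update_aos(aos, 0.016f);
    update_soa(soa, 0.016f);
    std::cout << soa.x[0] << "\n";          // prints 0.016
}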
Oct 06, 2019 | www.youtube.com
Maxwelhse, 3 years ago: Props to the artist who actually found a way to visualize most of this meaningless corporate lingo. I'm sure it wasn't easy to come up with everything.
VenetianTemper, 4 years ago: He missed "sea change" and "vertical integration". Otherwise, that was pretty much all of the useless corporate meetings I've ever attended distilled down to 4.5 minutes. Oh, and you're getting laid off and/or no raises this year.
Swag Mcfresh, 5 years ago: From my experiences as an engineer, never trust a company that describes their product with the word "synergy".
112steinway, 4 years ago: For those too young to get the joke, this is a style parody of Crosby, Stills & Nash, a folk-pop super-group from the 60's. They were hippies who spoke out against corporate interests, war, and politics. Al took their sound (flawlessly), and wrote a song in corporate jargon (the exact opposite of everything CSN was about). It's really brilliant, to those who get the joke.
Jonathan Ingersoll, 3 years ago: Only in corporate speak can you use a whole lot of words while saying nothing at all.
A.J. Collins, 3 years ago: As a business major this is basically every essay I wrote.
meanmanturbo, 3 years ago: "The company has undergone organization optimization due to our strategy modification, which includes empowering the support to the operation in various global markets" - Red 5 on why they laid off 40 people suddenly. Weird Al would be proud.
zyxwut321, 4 years ago: So this is basically a Dilbert strip turned into a song. I approve.
teenygozer, 3 years ago: In his big long career this has to be one of the best songs Weird Al's ever done. Very ambitious rendering of one of the most ambitious songs in pop music history.
Dunoid, 4 years ago: This should be played before corporate meetings to shame anyone who's about to get up and do the usual corporate presentation. Genius as usual, Mr. Yankovic!
Snoo Lee, 4 years ago: Maybe I'm too far gone to the world of computer nerds, but "Cloud Computing" seems like it should have been in the song somewhere.
A Piece Of Bread, 3 years ago: The "paradigm shift" at the end of the video / song is when the corporation screws everybody at the end. Brilliantly done, Al.
GeoffryHawk, 3 years ago: Don't forget to triangulate the automatonic business monetizer to create exceptional synergy.
Sefie Ezephiel, 4 months ago: There's a quote that goes something like: "A politician is someone who speaks for hours while saying nothing at all." And this is exactly it, and it's brilliant.
Phil H, 6 months ago: From the current GameStop earnings call: "address the challenges that have impacted our results, and execute both deliberately and with urgency. We believe we will transform the business and shape the strategy for the GameStop of the future. This will be driven by our go-forward leadership team that is now in place, a multi-year transformation effort underway, a commitment to focusing on the core elements of our business that are meaningful to our future, and a disciplined approach to capital allocation." Yeah, Weird Al totally nailed it.
Laff, 3 years ago: "People who enjoy meetings should not be put in charge of anything." -Thomas Sowell
Brett Naylor, 4 years ago: I heard "monetize our asses" for some reason...
Mark Kahn, 4 years ago: "Excuse me, but "proactive" and "paradigm"? Aren't these just buzzwords that dumb people use to sound important? Not that I'm accusing you of anything like that. [pause] I'm fired, aren't I?" ~George Meyer
Mark, 4 years ago: Brilliant social commentary on how the height of 60's optimism was bastardized into corporate enthusiasm. I hope Steve Jobs got to see this.
Δ, 17 hours ago: That's the strangest "Draw My Life" I've ever seen.
Mike The SandbridgeKid, 5 years ago: I watch this at least once a day to take the edge off my job search whenever I have to decipher fifteen daily want-ads claiming to seek "Hospitality Ambassadors", "Customer Satisfaction Specialists", "Brand Representatives" and "Team Commitment Associates", eventually to discover they want someone to run a cash register and sweep up.
Geetar Bear, 4 years ago (edited): The irony is a song about Corporate Speak in the style of tie-dyed, hippie-dippy CSN(+/-Y) four-part harmony. Suite: Judy Blue Eyes via Almost Cut My Hair filtered through Carry On. "Fantastic" middle finger to Wall Street, The City, and the monstrous excesses of unbridled capitalism.
Vaugn Ripen, 2 years ago: This reminds me of George Carlin so much
Joolz Godfree, 4 years ago: If you understand who and what he's taking a jab at, this is one of the greatest songs and videos of all time. So spot on. This and Frank's 2000 inch TV are my favorite songs of yours. Thanks Al!
Miles Lacey, 4 years ago: hahaha, "Client-Centric Solutions...!" (or in my case at the time, 'Customer-Centric' solutions) now THAT's a term I haven't heard/read/seen in years, since last being an office drone. =D
Soufriere, 5 years ago: When I interact with this musical visual medium I am motivated to conceptualize how the English language can be better compartmentalized to synergize with the client-centric requirements of the microcosmic community focussed social entities that I administrate on social media while interfacing energetically about the inherent shortcomings of the current socio-economic and geo-political order in which we co-habitate. Now does this tedium flow in an effortless stream of coherent verbalisations capable of comprehension?
When I bought "Mandatory Fun", put it in my car, and first heard this song, I busted a gut, laughing so hard I nearly crashed. All the corporate buzzwords! (except "pivot", apparently).
Oct 06, 2019 | www.reddit.com
DragonDrew Jack of All Trades 772 points · 4 days ago
"I am resolute in my ability to elevate this collaborative, forward-thinking team into the revenue powerhouse that I believe it can be. We will transition into a DevOps team specialising in migrating our existing infrastructure entirely to code and go completely serverless!" - CFO that outsources IT level 2 OpenScore Sysadmin 527 points · 4 days ago
"We will utilize Artificial Intelligence, machine learning, Cloud technologies, python, data science and blockchain to achieve business value"
Oct 05, 2019 | www.reddit.com
They say, No more IT or system or server admins needed very soon...
Sick and tired of listening to these so-called architects and full stack developers who watch a bunch of videos on YouTube and Pluralsight and find articles online. They go around the workplace throwing around words like containers, devops, NoOps, azure, infrastructure as code, serverless, etc., but they don't understand half of the stuff. I do some of the devops tasks in our company, so I understand what it takes to implement and manage these technologies. Every meeting is infested with these A holes.
ntengineer 613 points · 4 days ago
Your best defense against these is to come up with non-sarcastic and quality questions to ask these people during the meeting, and watch them not have a clue how to answer them.
For example, a friend of mine worked at a smallish company, some manager really wanted to move more of their stuff into Azure including AD and Exchange environment. But they had common problems with their internet connection due to limited bandwidth and them not wanting to spend more. So during a meeting my friend asked a question something like this:
"You said on this slide that moving the AD environment and Exchange environment to Azure will save us money. Did you take into account that we will need to increase our internet speed by a factor of at least 4 in order to accommodate the increase in traffic going out to the Azure cloud? "
Of course, they hadn't. So the CEO asked my friend if he had the numbers. He had already done his homework: it was a significant increase in cost every month, and taking into account the cost for Azure plus the increase in bandwidth, it wiped away the manager's savings.
I know this won't work for everyone. Sometimes there are real savings in moving things to the cloud. But often there really aren't. Calling the uneducated people out on what they see as facts can be rewarding.
level 2 PuzzledSwitch 101 points · 4 days ago
my previous boss was that kind of a guy. he waited till other people were done throwing their weight around in a meeting and then calmly and politely dismantled them with facts.
no amount of corporate pressuring or bitching could ever stand up to that.
level 3 themastermatt 42 points · 4 days ago
I've been trying to do this. Problem is that everyone keeps talking all the way to the end of the meeting, leaving no room for rational facts.
level 4 PuzzledSwitch 35 points · 4 days ago
make a follow-up in email, then.
or, you might have to interject for a moment.
williamfny Jack of All Trades 26 points · 4 days ago
This is my approach. I don't yell or raise my voice, I just wait. Then I start asking questions that they generally cannot answer and slowly take them apart. I don't have to be loud to get my point across.
level 4 MaxHedrome 6 points · 4 days ago
CrazyTachikoma, 4 days ago: Listen to this guy OP
This tactic is called "the box game". Just continuously ask them logical questions that can't be answered with their stupidity. (Box them in), let them be their own argument against themselves.
Most DevOps I've met are devs trying to bypass the sysadmins. This, and the Cloud fad, are burning serious amount of money from companies managed by stupid people that get easily impressed by PR stunts and shiny conferences. Then when everything goes to shit, they call the infrastructure team to fix it...
Oct 01, 2019 | veekaybee.github.io
In 1976, after eight years in the Soviet education system, I graduated the equivalent of middle school. Afterwards, I could choose to go for two more years, which would earn me a high school diploma, and then do three years of college, which would get me a diploma in "higher education."
Or, I could go for the equivalent of a blend of an associate and bachelor's degree, with an emphasis on vocational skills. This option took four years.
I went with the second option, mainly because it was common knowledge in the Soviet Union at the time that there was a restrictive quota for Jews applying to the five-year college program, which almost certainly meant that I, as a Jew, wouldn't get in. I didn't want to risk it.
My best friend at the time proposed that we take the entrance exams to attend Nizhniy Novgorod Industrial and Economic College. (At that time, it was known as Gorky Industrial and Economic College - the city, originally named for famous poet Maxim Gorky, was renamed in the 1990s after the fall of the Soviet Union.)
They had a program called "Programming for high-speed computing machines." Since I got good grades in math and geometry, this looked like I'd be able to get in. It also didn't hurt that my aunt, a very good seamstress and dressmaker, sewed several dresses specifically for the school's chief accountant, who was involved in enrollment decisions. So I got in.
What's interesting is that of the almost sixty students accepted into the program that year, all were female. It was the same for the class before us, and for the class after us. Later, after I started working in the Soviet Union, and even in the United States in the early 1990s, I understood that this was a trend. I'd say that 70% of the programmers I encountered in the IT industry were female. The males were mostly in middle and upper management.
My mom's code notebook, with her name and "Macroassembler" on it.
We started what would be considered our major concentration courses during the second year. Along with programming, there were a lot of related classes: "Computing Appliances and Their Organization", "Electro Technology", "Algorithms of Numerical Methods," and a lot of math that included integral and differential calculations. But programming was the main course, and we spent the most hours on it.
Notes on programming - Heading is "Directives (Commands) for job control implementation", covering the ABRT command
In the programming classes, we studied programming the "dry" way: using paper, pencil and eraser. In fact, this method was so important that students who forgot their pencils were sent to the main office to ask for one. It was extremely embarrassing, and we learned quickly not to forget them.
Paper and pencil code for opening a file in Macroassembler
Every semester we would take a new programming language to learn. We learned Algol, Fortran,and PL/1. We would learn from simplest commands to loop organization, function and sub-function programming, multi-dimensional array processing, and more.
After mastering the basics, we would take exams, which were logical computing tasks to code in this specific language.
At some point midway through the program, our school bought the very first physical computer I ever saw : the Nairi. The programming language was AP, which was one of the few computer languages with Russian keywords.
Then, we started taking labs. It was a terrifying experience. You had to type your program into an input device, which was basically a typewriter connected to a huge computer. The programs looked like step-by-step instructions, and if you made even one mistake you had to start all over again. To code a solution for a linear algebraic equation would usually take 10-12 steps.
Program output in Macroassembler ("I was so creative with my program names," jokes my mom.)
Every once in a while, our teacher would go for one week of "practice work and curriculum development" to a serious IT shop with more advanced machines. At that time, the heavy computing power was in the ES Series, produced by Soviet bloc countries.
These machines were clones of the IBM 360. They worked with punch cards and punch tapes. She would bring back tons of papers with printed code and debugging comments for us to study in the classroom.
After two and a half years of rigorous study using pencil and paper, we had six months of practice. Most of the time it was at one of several scientific research institutes that existed in Nizhny Novgorod. I went to an institute that was oriented towards the auto industry.
I graduated with the title "Programmer-Technician". Most of the girls from my class took computer operator jobs, but I did not want to settle. I continued my education at Lobachevsky State University, named after Lobachevsky, the famous Russian mathematician. Since I was taking evening classes, it took me six years to graduate.
I wrote a lot about my first college because now looking back I realize that this is where I really learned to code and developed my programming skills. At the State University, we took a huge amount of unnecessary courses. The only useful one was professional English. After this course I could read technical documentation in English without issues.
My final university degree was equivalent to a US master's in Computer Science. The actual major was called "Computational Mathematics and Cybernetics".
In total I worked for about seven years in the USSR as computer programmer, from 1982 to 1989. Technology changed rapidly, even there. I started out writing programs on special blanks for punch card machines using a Russian version of Assembler. To maximize performance, we would leave stacks of our punch cards for nightly processing.
After a couple of years, we got terminals with keyboards. First they were installed in the same room where the main computer was. Initially, there were not enough terminals, and "machine time" was evenly divided between all of the programmers during the day.
Then, the terminals started to appear in the same room where programmers were. The displays were small, with black background and green font. We were now working in the terminal.
The languages were also changing. I switched to C and had to get hands-on training. I did not know it then, but I had picked a profession where things are constantly moving. The longest I've ever worked with the same software was about three years.
In 1991, we emigrated to the States. I had to quit my job two years before to avoid any issues with the Soviet government. Every programmer I knew had to sign a special form commanding them to keep state secrets. Such a signature could prevent us from getting exit visas.
When I arrived in the US, I worried that I had fallen behind. To refresh my skills and become more marketable, I took a programming course for six months. It was the then-popular mix of COBOL, DB2, JCL, etc.
The main difference between the USA and the USSR was the level at which computers were incorporated into everyday life. In the USSR, they were still a novelty. There was not a lot of practical usage. Some of the reasons were the planned organization of the economy and a politicized approach to science: cybernetics was considered a "capitalist" discovery and was in exile in the 1950s. In the United States, computers were already widely in use, even in consumer settings.
The other difference is the gender makeup of the profession. In the United States, it is more male-dominated. In Russia, as I was starting my professional life, it was considered more of a female occupation. In both programs I studied, girls represented 100% of the class. Guys would go for something that was considered more masculine; these choices included majors like construction engineering and mechanical engineering.
Now, things have changed in Russia. The average salary for a software developer in Moscow is around $21K annually, versus a $10K average salary for Russia as a whole. Like in the United States, it has become a male-dominated field.
In conclusion, I have to say I picked a good profession to be in. Although I constantly have to learn new things, I have never had to worry about being employed. When I did go through a layoff, I was able to find a job very quickly. It is also a good-paying job. I was very lucky compared to other immigrants, who had to study programming from scratch.
Feb 28, 1998 | www.ddj.com
... ... ...
The creator of Perl talks about language design and Perl. By Eugene Eric Kim
DDJ : Is Perl 5.005 what you envisioned Perl to be when you set out to do it?
LW: That assumes that I'm smart enough to envision something as complicated as Perl. I knew that Perl would be good at some things, and would be good at more things as time went on. So, in a sense, I'm sort of blessed with natural stupidity -- as opposed to artificial intelligence -- in the sense that I know what my intellectual limits are.
I'm not one of these people who can sit down and design an entire system from scratch and figure out how everything relates to everything else, so I knew from the start that I had to take the bear-of-very-little-brain approach, and design the thing to evolve. But that fit in with my background in linguistics, because natural languages evolve over time.
You can apply biological metaphors to languages. They move into niches, and as new needs arise, languages change over time. It's actually a practical way to design a computer language. Not all computer programs can be designed that way, but I think more can be designed that way than have been. A lot of the majestic failures that have occurred in computer science have been because people thought they could design the whole thing in advance.
DDJ : How do you design a language to evolve?
LW: There are several aspects to that, depending on whether you are talking about syntax or semantics. On a syntactic level, in the particular case of Perl, I placed variable names in a separate namespace from reserved words. That's one of the reasons there are funny characters on the front of variable names -- dollar signs and so forth. That allowed me to add new reserved words without breaking old programs.
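A minimal sketch of that syntactic point, with hypothetical variable names: because the sigil puts the name in a separate namespace, a variable can share its spelling with a reserved word or builtin without a clash, so new reserved words can be added later without breaking old code.

```perl
#!/usr/bin/perl
# Sketch: variables live in their own sigil-marked namespace,
# separate from reserved words and builtins.
use strict;
use warnings;

my $if    = 5;          # fine: $if is not the 'if' keyword
my @print = (1, 2, 3);  # fine: @print is unrelated to the print() builtin

if ($if > 1) {
    print "sum: ", $if + $print[0], "\n";   # prints "sum: 6"
}
```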
DDJ : What is a scripting language? Does Perl fall into the category of a scripting language?
LW: Well, being a linguist, I tend to go back to the etymological meanings of "script" and "program," though, of course, that's fallacious in terms of what they mean nowadays. A script is what you hand to the actors, and a program is what you hand to the audience. Now hopefully, the program is already locked in by the time you hand that out, whereas the script is something you can tinker with. I think of phrases like "following the script," or "breaking from the script." The notion that you can evolve your script ties into the notion of rapid prototyping.
A script is something that is easy to tweak, and a program is something that is locked in. There are all sorts of metaphorical tie-ins that tend to make programs static and scripts dynamic, but of course, it's a continuum. You can write Perl programs, and you can write C scripts. People do talk more about Perl programs than C scripts. Maybe that just means Perl is more versatile.
... ... ...
DDJ : Would that be a better distinction than interpreted versus compiled -- run-time versus compile-time binding?
LW: It's a more useful distinction in many ways because, with late-binding languages like Perl or Java, you cannot make up your mind about what the real meaning of it is until the last moment. But there are different definitions of what the last moment is. Computer scientists would say there are really different "latenesses" of binding.
A good language actually gives you a range, a wide dynamic range, of your level of discipline. We're starting to move in that direction with Perl. The initial Perl was lackadaisical about requiring things to be defined or declared or what have you. Perl 5 has some declarations that you can use if you want to increase your level of discipline. But it's optional. So you can say "use strict," or you can turn on warnings, or you can do various sorts of declarations.
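A small sketch of that optional discipline; the commented-out line is a hypothetical typo that "use strict" would turn from a silent bug into a compile-time error.

```perl
#!/usr/bin/perl
# Optional discipline: without 'use strict' the misspelled variable would
# silently spring into existence; with it, compilation fails.
use strict;
use warnings;

my $total = 10;
# $tota1 = $total + 5;   # error: Global symbol "$tota1" requires explicit package name
print "$total\n";
```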
DDJ : Would it be accurate to say that Perl doesn't enforce good design?
LW: No, it does not. It tries to give you some tools to help if you want to do that, but I'm a firm believer that a language -- whether it's a natural language or a computer language -- ought to be an amoral artistic medium.
You can write pretty poems or you can write ugly poems, but that doesn't say whether English is pretty or ugly. So, while I kind of like to see beautiful computer programs, I don't think the chief virtue of a language is beauty. That's like asking an artist whether they use beautiful paints and a beautiful canvas and a beautiful palette. A language should be a medium of expression, which does not restrict your feeling unless you ask it to.
DDJ : Where does the beauty of a program lie? In the underlying algorithms, in the syntax of the description?
LW: Well, there are many different definitions of artistic beauty. It can be argued that it's symmetry, which in a computer language might be considered orthogonality. It's also been argued that broken symmetry is what is considered most beautiful and most artistic and diverse. Symmetry breaking is the root of our whole universe according to physicists, so if God is an artist, then maybe that's his definition of what beauty is.
This actually ties back in with the built-to-evolve concept on the semantic level. A lot of computer languages were defined to be naturally orthogonal, or at least the computer scientists who designed them were giving lip service to orthogonality. And that's all very well if you're trying to define a position in a space. But that's not how people think. It's not how natural languages work. Natural languages are not orthogonal, they're diagonal. They give you hypotenuses.
Suppose you're flying from California to Quebec. You don't fly due east, and take a left turn over Nashville, and then go due north. You fly straight, more or less, from here to there. And it's a network. And it's actually sort of a fractal network, where your big link is straight, and you have little "fractally" things at the end for your taxi and bicycle and whatever the mode of transport you use. Languages work the same way. And they're designed to get you most of the way here, and then have ways of refining the additional shades of meaning.
When they first built the University of California at Irvine campus, they just put the buildings in. They did not put any sidewalks, they just planted grass. The next year, they came back and built the sidewalks where the trails were in the grass. Perl is that kind of a language. It is not designed from first principles. Perl is those sidewalks in the grass. Those trails that were there before were the previous computer languages that Perl has borrowed ideas from. And Perl has unashamedly borrowed ideas from many, many different languages. Those paths can go diagonally. We want shortcuts. Sometimes we want to be able to do the orthogonal thing, so Perl generally allows the orthogonal approach also. But it also allows a certain number of shortcuts, and being able to insert those shortcuts is part of that evolutionary thing.
I don't want to claim that this is the only way to design a computer language, or that everyone is going to actually enjoy a computer language that is designed in this way. Obviously, some people speak other languages. But Perl was an experiment in trying to come up with not a large language -- not as large as English -- but a medium-sized language, and to try to see if, by adding certain kinds of complexity from natural language, the expressiveness of the language grew faster than the pain of using it. And, by and large, I think that experiment has been successful.
DDJ : Give an example of one of the things you think is expressive about Perl that you wouldn't find in other languages.
LW: The fact that regular-expression parsing and the use of regular expressions is built right into the language. If you used the regular expression in a list context, it will pass back a list of the various subexpressions that it matched. A different computer language may add regular expressions, even have a module that's called Perl 5 regular expressions, but it won't be integrated into the language. You'll have to jump through an extra hoop, take that right angle turn, in order to say, "Okay, well here, now apply the regular expression, now let's pull the things out of the regular expression," rather than being able to use the thing in a particular context and have it do something meaningful.
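A short sketch of what that integration looks like in practice, using a hypothetical date string: the same match hands back its captured subexpressions in list context and a simple success value in scalar context.

```perl
#!/usr/bin/perl
# Regular expressions built into the language: list context returns captures.
use strict;
use warnings;

my $date = "1998-02-28";
my ($year, $month, $day) = $date =~ /^(\d{4})-(\d{2})-(\d{2})$/;
print "$year $month $day\n";          # 1998 02 28

# In scalar (boolean) context the identical match just reports success.
print "looks like a date\n" if $date =~ /^(\d{4})-(\d{2})-(\d{2})$/;
```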
The school of linguistics I happened to come up through is called tagmemics, and it makes a big deal about context. In a real language -- this is a tagmemic idea -- you can distinguish between what the conventional meaning of the "thing" is and how it's being used. You think of "dog" primarily as a noun, but you can use it as a verb. That's the prototypical example, but the "thing" applies at many different levels. You think of a sentence as a sentence. Transformational grammar was built on the notion of analyzing a sentence. And they had all their cute rules, and they eventually ended up throwing most of them back out again.
But in the tagmemic view, you can take a sentence as a unit and use it differently. You can say a sentence like, "I don't like your I-can-use-anything-like-a-sentence attitude." There, I've used the sentence as an adjective. The sentence isn't an adjective if you analyze it, any way you want to analyze it. But this is the way people think. If there's a way to make sense of something in a particular context, they'll do so. And Perl is just trying to make those things make sense. There's the basic distinction in Perl between singular and plural context -- call it list context and scalar context, if you will. But you can use a particular construct in a singular context that has one meaning that sort of makes sense using the list context, and it may have a different meaning that makes sense in the plural context.
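A minimal sketch of that singular/plural distinction, using an ordinary array: the same construct means "the elements" in list context and "how many elements" in scalar context.

```perl
#!/usr/bin/perl
# Sketch of list vs. scalar context for one and the same construct.
use strict;
use warnings;

my @words = qw(natural artificial languages);
my @copy  = @words;     # list context: copies the three elements
my $count = @words;     # scalar context: the number of elements, 3

print "$count\n";              # 3
print scalar(@words), "\n";    # forcing scalar context explicitly: 3
```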
That is where the expressiveness comes from. In English, you read essays by people who say, "Well, how does this metaphor thing work?" Owen Barfield talks about this. You say one thing and mean another. That's how metaphors arise. Or you take two things and jam them together. I think it was Owen Barfield, or maybe it was C.S. Lewis, who talked about "a piercing sweetness." And we know what "piercing" is, and we know what "sweetness" is, but you put those two together, and you've created a new meaning. And that's how languages ought to work.
DDJ : Is a more expressive language more difficult to learn?
LW: Yes. It was a conscious tradeoff at the beginning of Perl that it would be more difficult to master the whole language. However, taking another clue from a natural language, we do not require 5-year olds to speak with the same diction as 50-year olds. It is okay for you to use the subset of a language that you are comfortable with, and to learn as you go. This is not true of so many computer-science languages. If you program C++ in a subset that corresponds to C, you get laughed out of the office.
There's a whole subject that we haven't touched here. A language is not a set of syntax rules. It is not just a set of semantics. It's the entire culture surrounding the language itself. So part of the cultural context in which you analyze a language includes all the personalities and people involved -- how everybody sees the language, how they propagate the language to other people, how it gets taught, the attitudes of people who are helping each other learn the language -- all of this goes into the pot of context.
Because I had already put out other freeware projects (rn and patch), I realized before I ever wrote Perl that a great deal of the value of those things was from collaboration. Many of the really good ideas in rn and Perl came from other people.
I think that Perl is in its adolescence right now. There are places where it is grown up, and places where it's still throwing tantrums. I have a couple of teenagers, and the thing you notice about teenagers is that they're always plus or minus ten years from their real age. So if you've got a 15-year old, they're either acting 25 or they're acting 5. Sometimes simultaneously! And Perl is a little that way, but that's okay.
DDJ : What part of Perl isn't quite grown up?
LW: Well, I think that the part of Perl, which has not been realistic up until now has been on the order of how you enable people in certain business situations to actually use it properly. There are a lot of people who cannot use freeware because it is, you know, schlocky. Their bosses won't let them, their government won't let them, or they think their government won't let them. There are a lot of people who, unknown to their bosses or their government, are using Perl.
DDJ : So these aren't technical issues.
LW: I suppose it depends on how you define technology. Some of it is perceptions, some of it is business models, and things like that. I'm trying to generate a new symbiosis between the commercial and the freeware interests. I think there's an artificial dividing line between those groups and that they could be more collaborative.
As a linguist, the generation of a linguistic culture is a technical issue. So, these adjustments we might make in people's attitudes toward commercial operations or in how Perl is being supported, distributed, advertised, and marketed -- not in terms of trying to make bucks, but just how we propagate the culture -- these are technical ideas in the psychological and the linguistic sense. They are, of course, not technical in the computer-science sense. But I think that's where Perl has really excelled -- its growth has not been driven solely by technical merits.
DDJ : What are the things that you do when you set out to create a culture around the software that you write?
LW: In the beginning, I just tried to help everybody. Particularly being on USENET. You know, there are even some sneaky things in there -- like looking for people's Perl questions in many different newsgroups. For a long time, I resisted creating a newsgroup for Perl, specifically because I did not want it to be ghettoized. You know, if someone can say, "Oh, this is a discussion about Perl, take it over to the Perl newsgroup," then they shut off the discussion in the shell newsgroup. If there are only the shell newsgroups, and someone says, "Oh, by the way, in Perl, you can solve it like this," that's free advertising. So, it's fuzzy. We had proposed Perl as a newsgroup probably a year or two before we actually created it. It eventually came to the point where the time was right for it, and we did that.
DDJ : Perl has really been pigeonholed as a language of the Web. One result is that people mistakenly try to compare Perl to Java. Why do you think people make the comparison in the first place? Is there anything to compare?
LW: Well, people always compare everything.
DDJ : Do you agree that Perl has been pigeonholed?
LW: Yes, but I'm not sure that it bothers me. Before it was pigeonholed as a web language, it was pigeonholed as a system-administration language, and I think that -- this goes counter to what I was saying earlier about marketing Perl -- if the abilities are there to do a particular job, there will be somebody there to apply it, generally speaking. So I'm not too worried about Perl moving into new ecological niches, as long as it has the capability of surviving in there.
Perl is actually a scrappy language for surviving in a particular ecological niche. (Can you tell I like biological metaphors?) You've got to understand that it first went up against C and against shell, both of which were much loved in the UNIX community, and it succeeded against them. So that early competition actually makes it quite a fit competitor in many other realms, too.
For most web applications, Perl is severely underutilized. Your typical CGI script says print, print, print, print, print, print, print. But in a sense, it's the dynamic range of Perl that allows for that. You don't have to say a whole lot to write a simple Perl script, whereas your minimal Java program is, you know, eight or ten lines long anyway. Many of the features that made it competitive in the UNIX space will make it competitive in other spaces.
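A deliberately minimal sketch of that print-heavy CGI style (hypothetical page content, core Perl only, no modules assumed):

```perl
#!/usr/bin/perl
# The "typical CGI script" mentioned above: print, print, print.
use strict;
use warnings;

print "Content-type: text/html\r\n\r\n";
print "<html><head><title>Hello</title></head><body>\n";
print "<h1>Hello from a tiny Perl CGI script</h1>\n";
print "<p>Generated at ", scalar localtime, ".</p>\n";
print "</body></html>\n";
```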
Now, there are things that Perl can't do. One of the things that you can't do with Perl right now is compile it down to Java bytecode. And if that, in the long run, becomes a large ecological niche (and this is not yet a sure thing), then that is a capability I want to be certain that Perl has.
DDJ : There's been a movement to merge the two development paths between the ActiveWare Perl for Windows and the main distribution of Perl. You were talking about ecological niches earlier, and how Perl started off as a text-processing language. The scripting languages that are dominant on the Microsoft platforms -- like VB -- tend to be more visual than textual. Given Perl's UNIX origins -- awk, sed, and C, for that matter -- do you think that Perl, as it currently stands, has the tools to fit into a Windows niche?
LW: Yes and no. It depends on your problem domain and who's trying to solve the problem. There are problems that only need a textual solution or don't need a visual solution. Automation things of certain sorts don't need to interact with the desktop, so for those sorts of things -- and for the programmers who aren't really all that interested in visual programming -- it's already good for that. And people are already using it for that. Certainly, there is a group of people who would be enabled to use Perl if it had more of a visual interface, and one of the things we're talking about doing for the O'Reilly NT Perl Resource Kit is some sort of a visual interface.
A lot of what Windows is designed to do is to get mere mortals from 0 to 60, and there are some people who want to get from 60 to 100. We are not really interested in being in Microsoft's crosshairs. We're not actually interested in competing head-to-head with Visual Basic, and to the extent that we do compete with them, it's going to be kind of subtle. There has to be some way to get people from the slow lane to the fast lane. It's one thing to give them a way to get from 60 to 100, but if they have to spin out to get from the slow lane to the fast lane, then that's not going to work either.
Over the years, much of the work of making Perl work for people has been in designing ways for people to come to Perl. I actually delayed the first version of Perl for a couple of months until I had a sed-to-Perl and an awk-to-Perl translator. One of the benefits of borrowing features from various other languages is that those subsets of Perl that use those features are familiar to people coming from that other culture. What would be best, in my book, is if someone had a way of saying, "Well, I've got this thing in Visual Basic. Now, can I just rewrite some of these things in Perl?"
We're already doing this with Java. On our UNIX Perl Resource Kit, I've got a hybrid language called "jpl" -- that's partly a pun on my old alma mater, Jet Propulsion Laboratory, and partly for Java, Perl...Lingo, there we go! That's good. "Java Perl Lingo." You've heard it first here! jpl lets you take a Java program and magically turn one of the methods into a chunk of Perl right there inline. It turns Perl code into a native method, and automates the linkage so that when you pull in the Java code, it also pulls in the Perl code, and the interpreter, and everything else. It's actually calling out from Java's Virtual Machine into Perl's virtual machine. And we can call in the other direction, too. You can embed Java in Perl, except that there's a bug in JDK having to do with threads that prevents us from doing any I/O. But that's Java's problem.
It's a way of letting somebody evolve from a purely Java solution into, at least partly, a Perl solution. It's important not only to make Perl evolve, but to make it so that people can evolve their own programs. It's how I program, and I think a lot of people program that way. Most of us are too stupid to know what we want at the beginning.
DDJ : Is there hope down the line to present Perl to a standardization body?
LW: Well, I have said in jest that people will be free to standardize Perl when I'm dead. There may come a time when that is the right thing to do, but it doesn't seem appropriate yet.
DDJ : When would that time be?
LW: Oh, maybe when the federal government declares that we can't export Perl unless it's standardized or something.
DDJ : Only when you're forced to, basically.
LW: Yeah. To me, once things get to a standards body, it's not very interesting anymore. The most efficient form of government is a benevolent dictatorship. I remember walking into some BOF that USENIX held six or seven years ago, and John Quarterman was running it, and he saw me sneak in, sit in the back corner, and he said, "Oh, here comes Larry Wall! He's a standards committee all of his own!"
A great deal of the success of Perl so far has been based on some of my own idiosyncrasies. And I recognize that they are idiosyncrasies, and I try to let people argue me out of them whenever appropriate. But there are still ways of looking at things that I seem to do differently than anybody else. It may well be that perl5-porters will one day degenerate into a standards committee. So far, I have not abused my authority to the point that people have written me off, and so I am still allowed to exercise a certain amount of absolute power over the Perl core.
I just think headless standards committees tend to reduce everything to mush. There is a conservatism that committees have that individuals don't, and there are times when you want to have that conservatism and times you don't. I try to exercise my authority where we don't want that conservatism. And I try not to exercise it at other times.
DDJ : How did you get involved in computer science? You're a linguist by background?
LW: Because I talk to computer scientists more than I talk to linguists, I wear the linguistics mantle more than I wear the computer-science mantle, but they actually came along in parallel, and I'm probably a 50/50 hybrid. You know, basically, I'm no good at either linguistics or computer science.
DDJ : So you took computer-science courses in college?
LW: In college, yeah. In college, I had various majors, but what I eventually graduated in -- I'm one of those people that packed four years into eight -- what I eventually graduated in was a self-constructed major, and it was Natural and Artificial Languages, which seems positively prescient considering where I ended up.
DDJ : When did you join O'Reilly as a salaried employee? And how did that come about?
LW: A year-and-a-half ago. It was partly because my previous job was kind of winding down.
DDJ : What was your previous job?
LW: I was working for Seagate Software. They were shutting down that branch of operations there. So, I was just starting to look around a little bit, and Tim noticed me looking around and said, "Well, you know, I've wanted to hire you for a long time," so we talked. And Gina Blaber (O'Reilly's software director) and I met. So, they more or less offered to pay me to mess around with Perl.
So it's sort of my dream job. I get to work from home, and if I feel like taking a nap in the afternoon, I can take a nap in the afternoon and work all night.
DDJ : Do you have any final comments, or tips for aspiring programmers? Or aspiring Perl programmers?
LW: Assume that your first idea is wrong, and try to think through the various options. I think that the biggest mistake people make is latching onto the first idea that comes to them and trying to do that. It really comes to a thing that my folks taught me about money. Don't buy something unless you've wanted it three times. Similarly, don't throw in a feature when you first think of it. Think if there's a way to generalize it, think if it should be generalized. Sometimes you can generalize things too much. I think like the things in Scheme were generalized too much. There is a level of abstraction beyond which people don't want to go. Take a good look at what you want to do, and try to come up with the long-term lazy way, not the short-term lazy way.
Sep 21, 2019 | www.scriptol.com
1948
1949
- Plankalkül. First high-level language. The date is that of the first public description.
1951
- Short Code.
1952
- A-0 (starting work for Math-Matic).
1955
- Autocode.
1956
- FLOW-MATIC. By Grace Hopper, first language with words.
1957
- IPL.
1958
- Fortran.
- Math-Matic.
1959
- Fortran II.
- Lisp, work begins by John McCarthy at MIT.
- ALGOL 58, also called IAL (International Algorithmic Language). Original specification by a committee of European and American computer scientists.
- IAL.
- UNCOL. First intermediate language for a virtual machine.
1960
- Lisp 1.5.
- COBOL, work begins.
1962
- ALGOL 60. Revision of ALGOL 58, and first implementation.
- APL, work begins.
- COBOL defined.
- First JIT functions used for Lisp.
1963
- APL implemented.
- Fortran IV appears.
- SNOBOL, work begins.
- Simula.
1964
- ALGOL 60 is revised.
- CPL. Universities of Cambridge and of London. Extended version of Algol 60. Predecessor of BCPL.
- PL/1, work begins.
- Joss.
- Apl-360 is implemented.
- Basic.
- PL/1.
- COWSEL. Renamed POP-1 in 1966, sort of Lisp without parenthesis.
- MATHLAB. Became popular since MATHLAB 68.
- ... ... ...
Jul 03, 2018 | medium.com
I present here a small bibliography of papers on programming languages from the 1970s. I found these papers interesting in my research on the syntax of programming languages. I give short annotations and comments (adapted to modern-day notions) on some of them.
Sep 18, 2019 | www.moonofalabama.org
... ... ...
Boeing screwed up by designing and installing a faulty system that was unsafe. It did not even tell the pilots that MCAS existed. It still insists that the system's failure does not need to be covered in simulator training. Boeing's failure and the FAA's negligence, not the pilots, caused two major accidents.
Nearly a year after the first incident Boeing has still not presented a solution that the FAA would accept. Meanwhile more safety critical issues on the 737 MAX were found for which Boeing has still not provided any acceptable solution.
But to Langewiesche all of this is irrelevant anyway. He closes his piece out with more "blame the pilots" whitewash of "poor Boeing":
The 737 Max remains grounded under impossibly close scrutiny, and any suggestion that this might be an overreaction, or that ulterior motives might be at play, or that the Indonesian and Ethiopian investigations might be inadequate, is dismissed summarily. To top it off, while the technical fixes to the MCAS have been accomplished, other barely related imperfections have been discovered and added to the airplane's woes. All signs are that the reintroduction of the 737 Max will be exceedingly difficult because of political and bureaucratic obstacles that are formidable and widespread. Who in a position of authority will say to the public that the airplane is safe?I would if I were in such a position. What we had in the two downed airplanes was a textbook failure of airmanship . In broad daylight, these pilots couldn't decipher a variant of a simple runaway trim, and they ended up flying too fast at low altitude, neglecting to throttle back and leading their passengers over an aerodynamic edge into oblivion. They were the deciding factor here -- not the MCAS, not the Max.
One wonders how much Boeing paid the author to assemble his screed.
foolisholdman , Sep 18 2019 17:14 utc | 5
William Herschel , Sep 18 2019 17:18 utc | 614,000 Words Of "Blame The Pilots" That Whitewash Boeing Of 737 MAX Failure
The New York TimesNo doubt, this WAS intended as a whitewash of Boeing, but having read the 14,000 words, I don't think it qualifies as more than a somewhat greywash. It is true he blames the pilots for mishandling a situation that could, perhaps, have been better handled, but Boeing still comes out of it pretty badly and so does the NTSB. The other thing I took away from the article is that Airbus planes are, in principle, & by design, more failsafe/idiot-proof.
Key words: New York Times Magazine. I think when your body is for sale you are called a whore. Trump's almost hysterical bashing of the NYT is enough to make anyone like the paper, but at its core it is a mouthpiece for the military industrial complex. Cf. Judith Miller.BM , Sep 18 2019 17:23 utc | 7The New York Times Magazine just published a 14,000 words piecefoolisholdman , Sep 18 2019 17:23 utc | 8An ill-disguised attempt to prepare the ground for premature approval for the 737max. It won't succeed - impossible. Opposition will come from too many directions. The blowback from this article will make Boeing regret it very soon, I am quite sure.
Come to think about it: (apart from the MCAS) what sort of crap design is it, if an absolutely vital control, which the elevator is, can become impossibly stiff under just those conditions where you absolutely have to be able to move it quickly?A.L. , Sep 18 2019 17:27 utc | 9This NYT article is great.jayc , Sep 18 2019 17:38 utc | 10It will only highlight the hubris of "my sh1t doesn't stink" mentality of the American elite and increase the resolve of other civil aviation authorities with a backbone (or in ascendancy) to put Boeing through the wringer.
For the longest time FAA was the gold standard and years of "Air Crash Investigation" TV shows solidified its place but has been taken for granted. Unitl now if it's good enough for the FAA it's good enough for all.
That reputation has now been irreparably damaged over this sh1tshow. I can't help but think this NYT article is only meant for domestic sheeple or stock brokers' consumption as anyone who is going to have anything technical to do with this investigation is going to see right through this load literal diarroeh.
I wouldn't be surprised if some insider wants to offload some stock and planted this story ahead of some 737MAX return-to-service timetable announcement to get an uplift. Someone needs to track the SEC forms 3 4 and 5. But there are also many ways to skirt insider reporting requirements. As usual, rules are only meant for the rest of us.
An appalling indifference to life/lives has been a signature feature of the American experience.psychohistorian , Sep 18 2019 17:40 utc | 11Thanks for the ongoing reporting of this debacle b....you are saving peoples livesb , Sep 18 2019 17:46 utc | 14@ A.L who wrote
"
I wouldn't be surprised if some insider wants to offload some stock and planted this story ahead of some 737MAX return-to-service timetable announcement to get an uplift. Someone needs to track the SEC forms 3 4 and 5. But there are also many ways to skirt insider reporting requirements. As usual, rules are only meant for the rest of us.
"I agree but would pluralize your "insider" to "insiders". This SOP gut and run financialization strategy is just like we are seeing with Purdue Pharma that just filed bankruptcy because their opioids have killed so many....the owners will never see jail time and their profits are protected by the God of Mammon legal system.
Hopefully the WWIII we are engaged in about public/private finance will put an end to this perfidy by the God of Mammon/private finance cult of the Western form of social organization.
Peter Lemme, the satcom guru, was once an engineer at Boeing. He testified on technical MAX issues before Congress and wrote a lot of technical details about them. He retweeted the NYT Mag piece with this comment:
Peter Lemme @Satcom_Guru
Blame the pilots.
Blame the training.
Blame the airline standards.
Imply rampant corruption at all levels.
Claim Airbus flight envelope protection is superior to Boeing.
Fumble the technical details.
Stack the quotes with lots of hearsay to drive the theme.
Ignore everything else
Sep 18, 2019 | www.moonofalabama.org
A.L. , Sep 18 2019 19:56 utc | 31
@30 David Gperhaps, just like proponents of AI and self driving cars. They just love the technology, financially and emotionally invested in it so much they can't see the forest from the trees.
I like technology, I studied engineering. But the myopic drive to profitability and naivety to unintended consequences are pushing these tech out into the world before they are ready.
engineering used to be a discipline with ethics and responsibilities... But now anybody who could write two lines of code can call themselves a software engineer....
Sep 14, 2019 | www.nakedcapitalism.com
Wukchumni , September 13, 2019 at 4:29 pm
Re: Fake list of grunge slang:
a fabulous tale of the South Pacific by William Manchester
The Man Who Could Speak Japanese
"We wrote it down.
The next phrase was:
" ' Booki fai kiz soy ?' " said Whitey. "It means 'Do you surrender?' "
Then:
" ' Mizi pok loi ooni rak tong zin ?' 'Where are your comrades?' "
"Tong what ?" rasped the colonel.
"Tong zin , sir," our instructor replied, rolling chalk between his palms. He arched his eyebrows, as though inviting another question. There was one. The adjutant asked, "What's that gizmo on the end?"
Of course, it might have been a Japanese newspaper. Whitey's claim to be a linguist was the last of his status symbols, and he clung to it desperately. Looking back, I think his improvisations on the Morton fantail must have been one of the most heroic achievements in the history of confidence men -- which, as you may have gathered by now, was Whitey's true profession. Toward the end of our tour of duty on the 'Canal he was totally discredited with us and transferred at his own request to the 81-millimeter platoon, where our disregard for him was no stigma, since the 81 millimeter musclemen regarded us as a bunch of eight balls anyway. Yet even then, even after we had become completely disillusioned with him, he remained a figure of wonder among us. We could scarcely believe that an impostor could be clever enough actually to invent a language -- phonics, calligraphy, and all. It had looked like Japanese and sounded like Japanese, and during his seventeen days of lecturing on that ship Whitey had carried it all in his head, remembering every variation, every subtlety, every syntactic construction.
https://www.americanheritage.com/man-who-could-speak-japanese
Aug 31, 2019 | developers.slashdot.org
mccoma ( 64578 ) , Friday February 22, 2019 @06:10PM ( #58166468 )
Thinking Forth (Score: 3)
I wish I had read Thinking Forth by Leo Brodie (ISBN-10: 0976458705, ISBN-13: 978-0976458708) much earlier. It is an amazing book that really shows you a different way to approach programming problems. It is available online these days.
Sep 07, 2019 | archive.computerhistory.org
Dijkstra said he was proud to be a programmer. Unfortunately he changed his attitude completely, and I think he wrote his last computer program in the 1980s. At this conference I went to in 1967 about simulation language, Chris Strachey was going around asking everybody at the conference what was the last computer program you wrote. This was 1967. Some of the people said, "I've never written a computer program." Others would say, "Oh yeah, here's what I did last week." I asked Edsger this question when I visited him in Texas in the 90s and he said, "Don, I write programs now with pencil and paper, and I execute them in my head." He finds that a good enough discipline.
I think he was mistaken on that. He taught me a lot of things, but I really think that if he had continued... One of Dijkstra's greatest strengths was that he felt a strong sense of aesthetics, and he didn't want to compromise his notions of beauty. They were so intense that when he visited me in the 1960s, I had just come to Stanford. I remember the conversation we had. It was in the first apartment, our little rented house, before we had electricity in the house.
We were sitting there in the dark, and he was telling me how he had just learned about the specifications of the IBM System/360, and it made him so ill that his heart was actually starting to flutter.
He intensely disliked things that he didn't consider clean to work with. So I can see that he would have distaste for the languages that he had to work with on real computers. My reaction to that was to design my own language, and then make Pascal so that it would work well for me in those days. But his response was to do everything only intellectually.
So, programming.
I happened to look the other day. I wrote 35 programs in January, and 28 or 29 programs in February. These are small programs, but I have a compulsion. I love to write programs and put things into it. I think of a question that I want to answer, or I have part of my book where I want to present something. But I can't just present it by reading about it in a book. As I code it, it all becomes clear in my head. It's just the discipline. The fact that I have to translate my knowledge of this method into something that the machine is going to understand just forces me to make that crystal-clear in my head. Then I can explain it to somebody else infinitely better. The exposition is always better if I've implemented it, even though it's going to take me more time.
Sep 07, 2019 | archive.computerhistory.org
So I had a programming hat when I was outside of Cal Tech, and at Cal Tech I was a mathematician taking my grad studies. A startup company, called Green Tree Corporation because green is the color of money, came to me and said, "Don, name your price. Write compilers for us and we will take care of finding computers for you to debug them on, and assistance for you to do your work. Name your price." I said, "Oh, okay. $100,000," assuming that this was... In that era this was not quite at Bill Gates's level today, but it was sort of out there.
The guy didn't blink. He said, "Okay." I didn't really blink either. I said, "Well, I'm not going to do it. I just thought this was an impossible number."
At that point I made the decision in my life that I wasn't going to optimize my income; I was really going to do what I thought I could do for well, I don't know. If you ask me what makes me most happy, number one would be somebody saying "I learned something from you". Number two would be somebody saying "I used your software". But number infinity would be Well, no. Number infinity minus one would be "I bought your book". It's not as good as "I read your book", you know. Then there is "I bought your software"; that was not in my own personal value. So that decision came up. I kept up with the literature about compilers. The Communications of the ACM was where the action was. I also worked with people on trying to debug the ALGOL language, which had problems with it. I published a few papers, like "The Remaining Trouble Spots in ALGOL 60" was one of the papers that I worked on. I chaired a committee called "Smallgol" which was to find a subset of ALGOL that would work on small computers. I was active in programming languages.
Sep 07, 2019 | conservancy.umn.edu
Frana: You have made the comment several times that maybe 1 in 50 people have the "computer scientist's mind."
Knuth: Yes.
Frana: I am wondering if a large number of those people are trained professional librarians? [laughter] There is some strangeness there. But can you pinpoint what it is about the mind of the computer scientist that is....
Knuth: That is different?
Frana: What are the characteristics?
Knuth: Two things: one is the ability to deal with non-uniform structure, where you have case one, case two, case three, case four. Or that you have a model of something where the first component is integer, the next component is a Boolean, and the next component is a real number, or something like that, you know, non-uniform structure. To deal fluently with those kinds of entities, which is not typical in other branches of mathematics, is critical. And the other characteristic ability is to shift levels quickly, from looking at something in the large to looking at something in the small, and many levels in between, jumping from one level of abstraction to another. You know that, when you are adding one to some number, that you are actually getting closer to some overarching goal. These skills, being able to deal with nonuniform objects and to see through things from the top level to the bottom level, these are very essential to computer programming, it seems to me. But maybe I am fooling myself because I am too close to it.
Frana: It is the hardest thing to really understand that which you are existing within.
Knuth: Yes.
![]() |
![]() |
![]() |
Sep 07, 2019 | conservancy.umn.edu
Knuth: I can be a writer, who tries to organize other people's ideas into some kind of a more coherent structure so that it is easier to put things together. I can see that I could be viewed as a scholar that does his best to check out sources of material, so that people get credit where it is due. And to check facts over, not just to look at the abstract of something, but to see what the methods were that did it and to fill in holes if necessary. I look at my role as being able to understand the motivations and terminology of one group of specialists and boil it down to a certain extent so that people in other parts of the field can use it. I try to listen to the theoreticians and select what they have done that is important to the programmer on the street; to remove technical jargon when possible.
But I have never been good at any kind of a role that would be making policy, or advising people on strategies, or what to do. I have always been best at refining things that are there and bringing order out of chaos. I sometimes raise new ideas that might stimulate people, but not really in a way that would be in any way controlling the flow. The only time I have ever advocated something strongly was with literate programming; but I do this always with the caveat that it works for me, not knowing if it would work for anybody else.
When I work with a system that I have created myself, I can always change it if I don't like it. But everybody who works with my system has to work with what I give them. So I am not able to judge my own stuff impartially. So anyway, I have always felt bad about if anyone says, 'Don, please forecast the future,'...
![]() |
![]() |
![]() |
Sep 06, 2019 | archive.computerhistory.org
...I showed the second version of this design to two of my graduate students, and I said, "Okay, implement this, please, this summer. That's your summer job." I thought I had specified a language. I had to go away. I spent several weeks in China during the summer of 1977, and I had various other obligations. I assumed that when I got back from my summer trips, I would be able to play around with TeX and refine it a little bit. To my amazement, the students, who were outstanding students, had not completed it. They had a system that was able to do about three lines of TeX. I thought, "My goodness, what's going on? I thought these were good students." Well afterwards I changed my attitude to saying, "Boy, they accomplished a miracle."
Because going from my specification, which I thought was complete, they really had an impossible task, and they had succeeded wonderfully with it. These students, by the way, [were] Michael Plass, who has gone on to be the brains behind almost all of Xerox's Docutech software and all kinds of things that are inside of typesetting devices now, and Frank Liang, one of the key people for Microsoft Word.
He did important mathematical things as well as his hyphenation methods which are quite used in all languages now. These guys were actually doing great work, but I was amazed that they couldn't do what I thought was just sort of a routine task. Then I became a programmer in earnest, where I had to do it. The reason is when you're doing programming, you have to explain something to a computer, which is dumb.
When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?"
It just didn't occur to the person writing the design specification. When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this."
If I hadn't been in China they would've scheduled an appointment with me and stopped their programming for a day. Then they would come in at the designated hour and we would talk. They would take 15 minutes to present to me what the problem was, and then I would think about it for a while, and then I'd say, "Oh yeah, do this. " Then they would go home and they would write code for another five minutes and they'd have to schedule another appointment.
I'm probably exaggerating, but this is why I think Bob Floyd's Chiron compiler never got going. Bob worked many years on a beautiful idea for a programming language, where he designed a language called Chiron, but he never touched the programming himself. I think this was actually the reason that he had trouble with that project, because it's so hard to do the design unless you're faced with the low-level aspects of it, explaining it to a machine instead of to another person.
Forsythe, I think it was, who said, "People have said traditionally that you don't understand something until you've taught it in a class. The truth is you don't really understand something until you've taught it to a computer, until you've been able to program it." At this level, programming was absolutely important.
![]() |
![]() |
![]() |
Sep 06, 2019 | conservancy.umn.edu
Knuth: No, I stopped going to conferences. It was too discouraging. Computer programming keeps getting harder because more stuff is discovered. I can cope with learning about one new technique per day, but I can't take ten in a day all at once. So conferences are depressing; it means I have so much more work to do. If I hide myself from the truth I am much happier.
![]() |
![]() |
![]() |
Sep 06, 2019 | archive.computerhistory.org
Knuth: This is, of course, really the story of my life, because I hope to live long enough to finish it. But I may not, because it's turned out to be such a huge project. I got married in the summer of 1961, after my first year of graduate school. My wife finished college, and I could use the money I had made -- the $5000 on the compiler -- to finance a trip to Europe for our honeymoon.
We had four months of wedded bliss in Southern California, and then a man from Addison-Wesley came to visit me and said "Don, we would like you to write a book about how to write compilers."
The more I thought about it, I decided "Oh yes, I've got this book inside of me."
I sketched out that day -- I still have the sheet of tablet paper on which I wrote -- I sketched out 12 chapters that I thought ought to be in such a book. I told Jill, my wife, "I think I'm going to write a book."
As I say, we had four months of bliss, because the rest of our marriage has all been devoted to this book. Well, we still have had happiness. But really, I wake up every morning and I still haven't finished the book. So I try to -- I have to -- organize the rest of my life around this, as one main unifying theme. The book was supposed to be about how to write a compiler. They had heard about me from one of their editorial advisors, that I knew something about how to do this. The idea appealed to me for two main reasons. One is that I did enjoy writing. In high school I had been editor of the weekly paper. In college I was editor of the science magazine, and I worked on the campus paper as copy editor. And, as I told you, I wrote the manual for that compiler that we wrote. I enjoyed writing, number one.
Also, Addison-Wesley were the people who were asking me to do this book; my favorite textbooks had been published by Addison-Wesley. They had done the books that I loved the most as a student. For them to come to me and say, "Would you write a book for us?", and here I am just a second-year graduate student -- this was a thrill.
Another very important reason at the time was that I knew that there was a great need for a book about compilers, because there were a lot of people who even in 1962 -- this was January of 1962 -- were starting to rediscover the wheel. The knowledge was out there, but it hadn't been explained. The people who had discovered it, though, were scattered all over the world and they didn't know of each other's work either, very much. I had been following it. Everybody I could think of who could write a book about compilers, as far as I could see, they would only give a piece of the fabric. They would slant it to their own view of it. There might be four people who could write about it, but they would write four different books.

I could present all four of their viewpoints in what I would think was a balanced way, without any axe to grind, without slanting it towards something that I thought would be misleading to the compiler writer for the future. I considered myself as a journalist, essentially. I could be the expositor, the tech writer, that could do the job that was needed in order to take the work of these brilliant people and make it accessible to the world. That was my motivation.

Now, I didn't have much time to spend on it then, I just had this page of paper with 12 chapter headings on it. That's all I could do while I'm a consultant at Burroughs and doing my graduate work. I signed a contract, but they said "We know it'll take you a while." I didn't really begin to have much time to work on it until 1963, my third year of graduate school, as I'm already finishing up on my thesis.

In the summer of '62, I guess I should mention, I wrote another compiler. This was for Univac; it was a FORTRAN compiler. I spent the summer, I sold my soul to the devil, I guess you say, for three months in the summer of 1962 to write a FORTRAN compiler. I believe that the salary for that was $15,000, which was much more than an assistant professor. I think assistant professors were getting eight or nine thousand in those days.
Feigenbaum: Well, when I started in 1960 at [University of California] Berkeley, I was getting $7,600 for the nine-month year.
Knuth: Yeah, so you see it. I got $15,000 for a summer job in 1962 writing a FORTRAN compiler. One day during that summer I was writing the part of the compiler that looks up identifiers in a hash table. The method that we used is called linear probing. Basically you take the variable name that you want to look up, you scramble it, like you square it or something like this, and that gives you a number between one and, well in those days it would have been between 1 and 1000, and then you look there. If you find it, good; if you don't find it, go to the next place and keep on going until you either get to an empty place, or you find the number you're looking for. It's called linear probing.

There was a rumor that one of Professor Feller's students at Princeton had tried to figure out how fast linear probing works and was unable to succeed. This was a new thing for me. It was a case where I was doing programming, but I also had a mathematical problem that would go into my other [job]. My winter job was being a math student, my summer job was writing compilers. There was no mix. These worlds did not intersect at all in my life at that point.

So I spent one day during the summer while writing the compiler looking at the mathematics of how fast does linear probing work. I got lucky, and I solved the problem. I figured out some math, and I kept two or three sheets of paper with me and I typed it up. ["Notes on 'Open' Addressing", 7/22/63] I guess that's on the internet now, because this became really the genesis of my main research work, which developed not to be working on compilers, but to be working on what they call analysis of algorithms, which is, have a computer method and find out how good is it quantitatively. I can say, if I got so many things to look up in the table, how long is linear probing going to take.

It dawned on me that this was just one of many algorithms that would be important, and each one would lead to a fascinating mathematical problem. This was easily a good lifetime source of rich problems to work on. Here I am then, in the middle of 1962, writing this FORTRAN compiler, and I had one day to do the research and mathematics that changed my life for my future research trends. But now I've gotten off the topic of what your original question was.
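A minimal sketch of the linear probing Knuth describes, in Python; the table size echoes his "between 1 and 1000" example, while the hash function and names are illustrative, not those of his 1962 compiler:

```python
TABLE_SIZE = 1000                      # 1000 slots, echoing "between 1 and 1000"
table = [None] * TABLE_SIZE            # each slot holds an identifier or None (empty)

def probe(name):
    """Yield the slots to inspect for `name`: the home slot, then successive ones."""
    start = hash(name) % TABLE_SIZE    # "scramble" the variable name into a slot number
    for i in range(TABLE_SIZE):
        yield (start + i) % TABLE_SIZE # keep going to the next place, wrapping around

def lookup_or_insert(name):
    """Return the slot holding `name`, inserting it into the first empty slot if absent."""
    for slot in probe(name):
        if table[slot] is None:        # an empty place: name was not present, claim the slot
            table[slot] = name
            return slot
        if table[slot] == name:        # found the identifier we are looking for
            return slot
    raise RuntimeError("hash table is full")

lookup_or_insert("alpha")
lookup_or_insert("beta")
assert lookup_or_insert("alpha") == lookup_or_insert("alpha")  # repeated lookups hit the same slot
```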
Feigenbaum: We were talking about sort of the... You talked about the embryo of The Art of Computing. The compiler book morphed into The Art of Computer Programming, which became a seven-volume plan.
Knuth: Exactly. Anyway, I'm working on a compiler and I'm thinking about this. But now I'm starting, after I finish this summer job, then I began to do things that were going to be relating to the book. One of the things I knew I had to have in the book was an artificial machine, because I'm writing a compiler book but machines are changing faster than I can write books. I have to have a machine that I'm totally in control of. I invented this machine called MIX, which was typical of the computers of 1962.
In 1963 I wrote a simulator for MIX so that I could write sample programs for it, and I taught a class at Caltech on how to write programs in assembly language for this hypothetical computer. Then I started writing the parts that dealt with sorting problems and searching problems, like the linear probing idea. I began to write those parts, which are part of a compiler, of the book. I had several hundred pages of notes gathering for those chapters for The Art of Computer Programming. Before I graduated, I've already done quite a bit of writing on The Art of Computer Programming.
I met George Forsythe about this time. George was the man who inspired both of us [Knuth and Feigenbaum] to come to Stanford during the '60s. George came down to Southern California for a talk, and he said, "Come up to Stanford. How about joining our faculty?" I said "Oh no, I can't do that. I just got married, and I've got to finish this book first." I said, "I think I'll finish the book next year, and then I can come up [and] start thinking about the rest of my life, but I want to get my book done before my son is born." Well, John is now 40-some years old and I'm not done with the book. Part of my lack of expertise is any good estimation procedure as to how long projects are going to take. I way underestimated how much needed to be written about in this book.

Anyway, I started writing the manuscript, and I went merrily along writing pages of things that I thought really needed to be said. Of course, it didn't take long before I had started to discover a few things of my own that weren't in any of the existing literature. I did have an axe to grind. The message that I was presenting was in fact not going to be unbiased at all. It was going to be based on my own particular slant on stuff, and that original reason for why I should write the book became impossible to sustain.

But the fact that I had worked on linear probing and solved the problem gave me a new unifying theme for the book. I was going to base it around this idea of analyzing algorithms, and have some quantitative ideas about how good methods were. Not just that they worked, but that they worked well: this method worked 3 times better than this method, or 3.1 times better than this method. Also, at this time I was learning mathematical techniques that I had never been taught in school. I found they were out there, but they just hadn't been emphasized openly, about how to solve problems of this kind.
So my book would also present a different kind of mathematics than was common in the curriculum at the time, that was very relevant to analysis of algorithms. I went to the publishers, I went to Addison-Wesley, and said "How about changing the title of the book from 'The Art of Computer Programming' to 'The Analysis of Algorithms'?" They said that will never sell; their focus group couldn't buy that one. I'm glad they stuck to the original title, although I'm also glad to see that several books have now come out called "The Analysis of Algorithms", 20 years down the line.
But in those days, The Art of Computer Programming was very important because I'm thinking of the aesthetical: the whole question of writing programs as something that has artistic aspects in all senses of the word. The one idea is "art" which means artificial, and the other "art" means fine art. All these are long stories, but I've got to cover it fairly quickly.
I've got The Art of Computer Programming started out, and I'm working on my 12 chapters. I finish a rough draft of all 12 chapters by, I think it was like 1965. I've got 3,000 pages of notes, including a very good example of what you mentioned about seeing holes in the fabric. One of the most important chapters in the book is parsing: going from somebody's algebraic formula and figuring out the structure of the formula. Just the way I had done in seventh grade finding the structure of English sentences, I had to do this with mathematical sentences.
Chapter ten is all about parsing of context-free language, [which] is what we called it at the time. I covered what people had published about context-free languages and parsing. I got to the end of the chapter and I said, well, you can combine these ideas and these ideas, and all of a sudden you get a unifying thing which goes all the way to the limit. These other ideas had sort of gone partway there. They would say "Oh, if a grammar satisfies this condition, I can do it efficiently." "If a grammar satisfies this condition, I can do it efficiently." But now, all of a sudden, I saw there was a way to say I can find the most general condition that can be done efficiently without looking ahead to the end of the sentence. That you could make a decision on the fly, reading from left to right, about the structure of the thing. That was just a natural outgrowth of seeing the different pieces of the fabric that other people had put together, and writing it into a chapter for the first time. But I felt that this general concept, well, I didn't feel that I had surrounded the concept. I knew that I had it, and I could prove it, and I could check it, but I couldn't really intuit it all in my head. I knew it was right, but it was too hard for me, really, to explain it well.
So I didn't put it in The Art of Computer Programming. I thought it was beyond the scope of my book. Textbooks don't have to cover everything when you get to the harder things; then you have to go to the literature. My idea at that time [is] I'm writing this book and I'm thinking it's going to be published very soon, so any little things I discover and put in the book I didn't bother to write a paper and publish in the journal because I figure it'll be in my book pretty soon anyway. Computer science is changing so fast, my book is bound to be obsolete.
It takes a year for it to go through editing, and people drawing the illustrations, and then they have to print it and bind it and so on. I have to be a little bit ahead of the state-of-the-art if my book isn't going to be obsolete when it comes out. So I kept most of the stuff to myself that I had, these little ideas I had been coming up with. But when I got to this idea of left-to-right parsing, I said "Well here's something I don't really understand very well. I'll publish this, let other people figure out what it is, and then they can tell me what I should have said." I published that paper I believe in 1965, at the end of finishing my draft of the chapter, which didn't get as far as that story, LR(k). Well now, textbooks of computer science start with LR(k) and take off from there. But I want to give you an idea of
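The idea Knuth describes above, of deciding the structure on the fly while reading from left to right, can be illustrated with a toy shift-reduce recognizer. This is only a sketch of the general mechanism (a real LR(k) parser drives its shift and reduce decisions from generated tables), and the tiny grammar here is invented for the example:

```python
def recognize(tokens):
    """Recognize the toy grammar  S -> ( S ) | x  in one left-to-right pass.

    Each token is shifted onto a stack, and reductions are applied as soon as a
    handle appears on top; the decision is made on the fly, never by looking
    ahead to the end of the input.
    """
    stack = []
    for tok in tokens:
        stack.append(tok)                      # shift
        while True:                            # reduce while a handle is on top
            if stack[-1:] == ["x"]:
                stack[-1:] = ["S"]             # reduce by  S -> x
            elif stack[-3:] == ["(", "S", ")"]:
                stack[-3:] = ["S"]             # reduce by  S -> ( S )
            else:
                break
    return stack == ["S"]

print(recognize(list("((x))")))   # True
print(recognize(list("((x)")))    # False
```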
![]() |
![]() |
![]() |
Sep 06, 2019 | news.ycombinator.com
fhars on Mar 29, 2011

Most mainstream OO languages with a type system to speak of actually get in the way of correctly classifying data by confusing the separate issues of reusing implementation artefacts (aka subclassing) and classifying data into a hierarchy of concepts (aka subtyping).

Xurinos on Mar 29, 2011

The only widely used OO language (for sufficiently narrow values of wide and wide values of OO) to get that right used to be Objective Caml, and recently its stepchildren F# and Scala. So it is actually FP that helps you with the classification.
ajays on Mar 29, 2011

This is a very interesting point and should be highlighted. You said implementation artifacts (especially in reference to reducing code duplication), and for clarity, I think you are referring to the definition of operators on data (class methods, friend methods, and so on).
I agree with you that subclassing (for the purpose of reusing behavior), traits (for adding behavior), and the like can be confused with classification to such an extent that modern designs tend to depart from type systems and be used for mere code organization.
GrooveStomp on Mar 29, 2011

"was there really a point to the illusion of wrapping the entrypoint main() function in a class (I am looking at you, Java)?"

Far be it from me to defend Java (I hate the damn thing), but: main is just a function in a class. The class is the entry point, as specified on the command line; main is just what the JVM looks for, by convention. You could have a "main" in each class, but only the one in the specified class will be the entry point.
The way of the theorist is to tell any non-theorist that the non-theorist is wrong, then leave without any explanation. Or, simply hand-wave the explanation away, claiming it is "too complex" to fully understand without years of rigorous training. Of course I jest. :)
![]() |
![]() |
![]() |
Sep 04, 2019 | www.moonofalabama.org
United Airlines and American Airlines have further prolonged the grounding of their Boeing 737 MAX airplanes. They now schedule the plane's return to the flight line for December. But it is likely that the grounding will continue well into next year.
After Boeing's shabby design and lack of safety analysis of its Maneuver Characteristics Augmentation System (MCAS) led to the death of 347 people, the grounding of the type and billions in losses, one would expect the company to show some decency and humility. Unfortunately Boeing's behavior demonstrates none.
There is still little detailed information on how Boeing will fix MCAS. Nothing was said by Boeing about the manual trim system of the 737 MAX that does not work when it is needed. The unprotected rudder cables of the plane do not meet safety guidelines but were still certified. The plane's flight control computers can be overwhelmed by bad data and a fix will be difficult to implement. Boeing continues to say nothing about these issues.
International flight safety regulators no longer trust the Federal Aviation Administration (FAA) which failed to uncover those problems when it originally certified the new type. The FAA was also the last regulator to ground the plane after two 737 MAX had crashed. The European Aviation Safety Agency (EASA) asked Boeing to explain and correct five major issues it identified. Other regulators asked additional questions.
Boeing needs to regain the trust of the airlines, pilots and passengers to be able to again sell those planes. Only full and detailed information can achieve that. But the company does not provide any.
As Boeing sells some 80% of its airplanes abroad it needs the good will of the international regulators to get the 737 MAX back into the air. This makes the arrogance it displayed in a meeting with those regulators inexplicable:
Friction between Boeing Co. and international air-safety authorities threatens a new delay in bringing the grounded 737 MAX fleet back into service, according to government and pilot union officials briefed on the matter.

The latest complication in the long-running saga, these officials said, stems from a Boeing briefing in August that was cut short by regulators from the U.S., Europe, Brazil and elsewhere, who complained that the plane maker had failed to provide technical details and answer specific questions about modifications in the operation of MAX flight-control computers.
The fate of Boeing's civil aircraft business hangs on the re-certification of the 737 MAX. The regulators convened an international meeting to get their questions answered and Boeing arrogantly showed up without having done its homework. The regulators saw that as an insult. Boeing was sent back to do what it was supposed to do in the first place: provide details and analysis that prove the safety of its planes.
What did the Boeing managers think those regulatory agencies are? Hapless lapdogs like the FAA managers who signed off on Boeing 'features' even after their engineers told them that these were not safe?
Buried in the Wall Street Journal piece quoted above is another little shocker:
In recent weeks, Boeing and the FAA identified another potential flight-control computer risk requiring additional software changes and testing, according to two of the government and pilot officials.

The new issue must go beyond the flight control computer (FCC) issues the FAA identified in June.
Boeing's original plan to fix the uncontrolled activation of MCAS was to have both FCCs active at the same time and to switch MCAS off when the two computers disagree. That was already a huge change in the general architecture which so far consisted of one active and one passive FCC system that could be switched over when a failure occurred.
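As a rough illustration of that cross-check idea (a conceptual sketch only, with invented names, threshold, and choice of applied command, not Boeing's implementation), the monitoring logic amounts to comparing the two computers' outputs and disabling MCAS when they disagree:

```python
from typing import Optional

DISAGREE_THRESHOLD = 0.1   # invented tolerance, in units of stabilizer trim

def mcas_output(fcc_a_cmd: float, fcc_b_cmd: float) -> Optional[float]:
    """Return the trim command to apply, or None if MCAS must stay off.

    Both flight control computers are assumed active; if their commands
    diverge by more than the tolerance, MCAS is switched off instead of
    trusting either value.
    """
    if abs(fcc_a_cmd - fcc_b_cmd) > DISAGREE_THRESHOLD:
        return None            # disagreement between the FCCs: disable MCAS
    return fcc_a_cmd           # which agreed value gets applied is an assumption here

print(mcas_output(1.00, 1.05))  # agreement -> 1.0
print(mcas_output(1.00, 2.00))  # disagreement -> None (MCAS off)
```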
Any additional software changes will make the issue even more complicated. The 80286 Intel processors the FCC software is running on are limited in their capacity. All the extra procedures Boeing will now add to them may well exceed the system's capabilities.
Changing software in a delicate environment like a flight control computer is extremely difficult. There will always be surprising side effects or regressions where already corrected errors unexpectedly reappear.
The old architecture was possible because the plane could still be flown without any computer. It was expected that the pilots would detect a computer error and would be able to intervene. The FAA did not require a high design assurance level (DAL) for the system. The MCAS accidents showed that a software or hardware problem can now indeed crash a 737 MAX plane. That changes the level of scrutiny the system will have to undergo.
All procedures and functions of the software will have to be tested in all thinkable combinations to ensure that they will not block or otherwise influence each other. This will take months and there is a high chance that new issues will appear during these tests. They will require more software changes and more testing.
Flight safety regulators know of these complexities. That is why they need to take a deep look into such systems. That Boeing's management was not prepared to answer their questions shows that the company has not learned from its failure. Its culture is still one of finance orientated arrogance.
Building safe airplanes requires engineers who know that they may make mistakes and who have the humility to allow others to check and correct their work. It requires open communication about such issues. Boeing's say-nothing strategy will prolong the grounding of its planes. It will increase the damage to Boeing's financial situation and reputation.
--- Previous Moon of Alabama posts on Boeing 737 MAX issues:
- Boeing, The FAA, And Why Two 737 MAX Planes Crashed - March 12 2019
- Flawed Safety Analysis, Failed Oversight - Why Two 737 MAX Planes Crashed - March 17 2019
- Regulators Knew Of 737 MAX Trim Problems - Certification Demanded Training That Boeing Failed To Deliver - March 29 2019
- Ethiopian Airline Crash - Boeing Advice To 737 MAX Pilots Was Flawed - April 9 2019
- Boeing 737 MAX Crash Reveals Severe Problem With Older Boeing 737 NGs - May 25 2019
- Boeing's Software Fix For The 737 MAX Problem Overwhelms The Plane's Computer - June 27 2019
- EASA Tells Boeing To Fix 5 Major 737 MAX Issues - July 7 2019
- The New Delay Of Boeing's 737 MAX Return Will Not Be The Last One - July 15 2019
- 737 MAX Rudder Control Does Not Meet Safety Guidelines - It Was Still Certified - July 28 2019
Posted by b on September 3, 2019 at 18:05 UTC | Permalink
Choderlos de Laclos , Sep 3 2019 18:15 utc | 1
"The 80286 Intel processors the FCC software is running on is limited in its capacity." You must be joking, right? If this is the case, the problem is unfixable: you can't find two competent software engineers who can program these dinosaur 16-bit processors.b , Sep 3 2019 18:22 utc | 2You must be joking, right? If this is the case, the problem is unfixable: you can't find two competent software engineers who can program these dinosaur 16-bit processors.Meshpal , Sep 3 2019 18:24 utc | 3One of the two is writing this.
Half-joking aside. The 737 MAX FCC runs on 80286 processors. There are ten thousands of programmers available who can program them though not all are qualified to write real-time systems. That resource is not a problem. The processors inherent limits are one.
Thanks b for the fine 737 MAX update. Other news sources seem to have dropped coverage. It is a very big deal that this grounding has lasted this long. Things are going to get real bad for Boeing if this bird does not get back in the air soon. In any case their credibility is tarnished if not downright trashed.

BraveNewWorld , Sep 3 2019 18:35 utc | 4

@1 Choderlos de Laclos: Whatever software language these are programmed in (my guess is C), the compilers still exist for it and do the translation from the human-readable code to the machine code for you. Of course the code could be assembler, but writing assembly code for a 286 is far easier than writing it for, say, an i9 because the CPU is so much simpler and has a far smaller set of instructions to work with.

Choderlos de Laclos , Sep 3 2019 18:52 utc | 5
@b: It was a hyperbole. I might be another one, but left them behind as fast as I could. The last time I had to deal with it was an embedded system in 1998-ish. But I am also retiring, and so are thousands of others. The problems with support of a legacy system are a legend.

psychohistorian , Sep 3 2019 18:56 utc | 6
I also want to add that Boeing's focus on profit over safety is not restricted to the 737 Max but undoubtedly permeates the manufacture of spare parts for the rest of the their plane line and all else they make.....I have no intention of ever flying in another Boeing airplane, given the attitude shown by Boeing leadership.
This is how private financialization works in the Western world. Their bottom line is profit, not service to the flying public. It is in line with the recent public statement by the CEO's from the Business Roundtable that said that they were going to focus more on customer satisfaction over profit but their actions continue to say profit is their primary motive.
The God of Mammon private finance religion can not end soon enough for humanity's sake. It is not like we all have to become China but their core public finance example is well worth following.
So again, Boeing mgmt. mirrors its Neoliberal government officials when it comes to arrogance and impudence. IMO, Boeing shareholders's hair ought to be on fire given their BoD's behavior and getting ready to litigate.bjd , Sep 3 2019 19:22 utc | 8As b notes, Boeing's international credibility's hanging by a very thin thread. A year from now, Boeing could very well see its share price deeply dive into the Penny Stock category--its current P/E is 41.5:1 which is massively overpriced. Boeing Bombs might come to mean something vastly different from its initial meaning.
Arrogance? When the money keeps flowing in anyway, it comes naturally.What did I just read , Sep 3 2019 19:49 utc | 10Such seemingly archaic processors are the norm in aerospace. If the planes flight characteristics had been properly engineered from the start the processor wouldn't be an issue. You can't just spray perfume on a garbage pile and call it a rose.VietnamVet , Sep 3 2019 20:31 utc | 12In the neoliberal world order governments, regulators and the public are secondary to corporate profits. This is the same belief system that is suspending the British Parliament to guarantee the chaos of a no deal Brexit. The irony is that globalist, Joe Biden's restart the Cold War and nationalist Donald Trump's Trade Wars both assure that foreign regulators will closely scrutinize the safety of the 737 Max. Even if ignored by corporate media and cleared by the FAA to fly in the USA, Boeing and Wall Street's Dow Jones average are cooked gooses with only 20% of the market. Taking the risk of flying the 737 Max on their family vacation or to their next business trip might even get the credentialed class to realize that their subservient service to corrupt Plutocrats is deadly in the long term.jared , Sep 3 2019 20:55 utc | 14It doesn't get any TBTF'er than Boing. Bail-out is only phone-call away. With down-turn looming, the line is forming.Piotr Berman , Sep 3 2019 21:11 utc | 15Ken Murray , Sep 3 2019 21:12 utc | 16"The latest complication in the long-running saga, these officials said, stems from a Boeing BA, -2.66% briefing in August that was cut short by regulators from the U.S., Europe, Brazil and elsewhere, who complained that the plane maker had failed to provide technical details and answer specific questions about modifications in the operation of MAX flight-control computers."It seems to me that Boeing had no intention to insult anybody, but it has an impossible task. After decades of applying duct tape and baling wire with much success, they finally designed an unfixable plane, and they can either abandon this line of business (narrow bodied airliners) or start working on a new design grounded in 21st century technologies.
Boeing's military sales are so much more significant and important to them, they are just ignoring/down-playing their commercial problem with the 737 MAX. Follow the real money.Arata , Sep 3 2019 21:57 utc | 17That is unblievable FLight Control comptuer is based on 80286! A control system needs Real Time operation, at least some pre-emptive task operation, in terms of milisecond or microsecond. What ever way you program 80286 you can not achieve RT operation on 80286. I do not think that is the case. My be 80286 is doing some pripherial work, other than control.Bemildred , Sep 3 2019 22:11 utc | 18It is quite likely (IMHO) that they are no longer able to provide the requested information, but of course they cannot say that.Peter AU 1 , Sep 3 2019 22:14 utc | 19I once wrote a keyboard driver for an 80286, part of an editor, in assembler, on my first PC type computer, I still have it around here somewhere I think, the keyboard driver, but I would be rusty like the Titanic when it comes to writing code. I wrote some things in DEC assembler too, on VAXen.
Arata 16Bemildred , Sep 3 2019 22:17 utc | 20The spoiler system is fly by wire.
arata @16: 80286 does interrupts just fine, but you have to grok asynchronous operation, and most coders don't really, I see that every day in Linux and my browser. I wish I could get that box back, it had DOS, you could program on the bare wires, but God it was slow.Tod , Sep 3 2019 22:28 utc | 21Boeing will just need to press the TURBO button on the 286 processor. Problem solved.karlof1 , Sep 3 2019 22:43 utc | 23Ken Murray @15--Godfree Roberts , Sep 3 2019 22:56 utc | 24Boeing recently lost a $6+Billion weapons contract thanks to its similar Q&A in that realm of its business. Its annual earnings are due out in October. Plan to short-sell soon!
I am surprised that none of the coverage has mentioned the fact that, if China's CAAC does not sign off on the mods, it will cripple, if not doom the MAX.Arioch , Sep 3 2019 23:18 utc | 25I am equally surprised that we continue to sabotage China's export leader, as the WSJ reports today: "China's Huawei Technologies Co. accused the U.S. of "using every tool at its disposal" to disrupt its business, including launching cyberattacks on its networks and instructing law enforcement to "menace" its employees.
The telecommunications giant also said law enforcement in the U.S. have searched, detained and arrested Huawei employees and its business partners, and have sent FBI agents to the homes of its workers to pressure them to collect information on behalf of the U.S."
I wonder how much blind trust in Boeing is intertwined into the fabric of civic aviation all around the world.Miss Lacy , Sep 3 2019 23:19 utc | 26I mean something like this: Boeing publishes some research into failure statistics, solid materials aging or something. One that is really hard and expensive to proceed with. Everything take the results for granted without trying to independently reproduce and verify, because The Boeing!
Some later "derived" researches being made, upon the foundation of some prior works *including* that old Boeing research. Then FAA and similar company institutions around the world make some official regulations and guidelines deriving from the research which was in part derived form original Boeing work. Then insurance companies calculate their tarifs and rate plans, basing their estimation upon those "government standards", and when governments determine taxation levels they use that data too. Then airline companies and airliner leasing companies make their business plans, take huge loans in the banks (and banks do make their own plans expecting those loans to finally be paid back), and so on and so forth, building the cards-deck house, layer after layer.
And among the very many of the cornerstones - there would be dust covered and god-forgotten research made by Boeing 10 or maybe 20 years ago when no one even in drunk delirium could ever imagine questioning Boeing's verdicts upon engineering and scientific matters.
Now, the longevity of that trust is slowly unraveled. Like, the so universally trusted 737NG generation turned out to be inherently unsafe, and while only pilots knew it before, and even of them - only most curious and pedantic pilots, today it becomes public knowledge that 737NG are tainted.
Now, when did this corruption started? Wheat should be some deadline cast into the past, that since the day every other technical data coming from Boeing should be considered unreliable unless passing full-fledged independent verification? Should that day be somewhere in 2000-s? 1990-s? Maybe even 1970-s?
And ALL THE BODY of civic aviation industry knowledge that was accumulated since that date can NO MORE BE TRUSTED and should be almost scrapped and re-researched new! ALL THE tacit INPUT that can be traced back to Boeing and ALL THE DERIVED KNOWLEDGE now has to be verified in its entirety.
Boeing is backstopped by the Murkan MIC, which is to say the US