
Real Insights into Architecture Come Only From Actual Programming



I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas.

... ... ...

For a given level of function, however, that system is best in which one can specify things with the most simplicity and straightforwardness. Simplicity is not enough. Mooers's TRAC language and Algol 68 achieve simplicity as measured by the number of distinct elementary concepts. They are not, however, straightforward. The expression of the things one wants to do often requires involuted and unexpected combinations of the basic facilities. It is not enough to learn the elements and rules of combination; one must also learn the idiomatic usage, a whole lore of how the elements are combined in practice. Simplicity and straightforwardness proceed from conceptual integrity. Every part must reflect the same philosophies and the same balancing of desiderata. Every part must even use the same techniques in syntax and analogous notions in semantics. Ease of use, then, dictates unity of design, conceptual integrity.

Frederick P. Brooks, Jr.: The Mythical Man-Month. Addison-Wesley, Reading MA, 1995 (anniversary ed.)

To me, development consists of two processes that feed each other. First, you figure out what you want the computer to do. Then, you instruct the computer to do it. Trying to write those instructions inevitably changes what you want the computer to do and so it goes.

In this model, coding isn't the poor handmaiden of design or analysis. Coding is where your fuzzy, comfortable ideas awaken in the harsh domain of reality. It is where you learn what your computer can do. If you stop coding, you stop learning.

We aren't always good at guessing where responsibilities should go. Coding is where our design guesses are tested. Being prepared to be flexible about making design changes during coding results in programs that get better and better over time. Insisting that early design ideas be carried through is short sighted.

Kent Beck: Smalltalk Best Practice Patterns. Prentice Hall, NJ 1997


Introduction

One widespread delusion is that you can separate architecture from actual programming. Designing software architecture is a complex activity that suffers greatly if it is detached from implementation: you lose an important feedback loop, and things instantly become more complex and less predictable. The higher the level of detachment of the architect from the implementation, the higher the chances of ending up in Brooks' "software development tar pit". As Kent Beck noted in Smalltalk Best Practice Patterns:

Coding is where our design guesses are tested. Being prepared to be flexible about making design changes during coding results in programs that get better and better over time. Insisting that early design ideas be carried through is short sighted.

That means that approaches which simplify prototyping are vastly superior to the alternatives. Several such "prototyping friendly" approaches to the design of complex software systems are discussed in the sections below.

As the folks at Bredemeyer Consulting aptly noted in their paper The Role of the Software Architect:

A simplistic view of the role is that architects create architectures, and their responsibilities encompass all that is involved in doing so. This would include articulating the architectural vision, conceptualizing and experimenting with alternative architectural approaches, creating models and component and interface specification documents, and validating the architecture against requirements and assumptions.

However, any experienced architect knows that the role involves not just these technical activities, but others that are more political and strategic in nature on the one hand, and more like those of a consultant, on the other. A sound sense of business and technical strategy is required to envision the "right" architectural approach to the customer's problem set, given the business objectives of the architect's organization. Activities in this area include the creation of technology roadmaps, making assertions about technology directions and determining their consequences for the technical strategy and hence architectural approach.

Further, architectures are seldom embraced without considerable challenges from many fronts. The architect thus has to shed any distaste for what may be considered "organizational politics", and actively work to sell the architecture to its various stakeholders, communicating extensively and working networks of influence to ensure the ongoing success of the architecture.

But "buy-in" to the architecture vision is not enough either. Anyone involved in implementing the architecture needs to understand it. Since weighty architectural documents are notorious dust-gatherers, this involves creating and teaching tutorials and actively consulting on the application of the architecture, and being available to explain the rationale behind architectural choices and to make amendments to the architecture when justified.

Lastly, the architect must lead -- the architecture team, the developer community, and, in its technical direction, the organization.

The concept of "technology mudslide"

...some regard the management of software development akin to the management of manufacturing, which can be performed by someone with management skills, but no programming skills. John C. Reynolds rebuts this view, and argues that software development is entirely design work, and compares a manager who cannot program to the managing editor of a newspaper who cannot write.[3]

Software project management
 Wikipedia, the free encyclopedia

The main principle of software architecture is simple and well known -- the famous KISS principle. While the principle is simple, its implementation is not, and a lot of developers (especially developers with limited resources) have paid dearly for violating it. I have found only one reference on simplicity in SE: R. S. Pressman. Simplicity. In Software Engineering, A Practitioner's Approach, page 452. McGraw Hill, 1997. But lack of references notwithstanding, open source tools can help here, if only because for those tools complexity is not such a competitive advantage as it is for closed source tools. That also helps to avoid what is called the "technology mudslide".

In his book The Innovator's Dilemma, Harvard Business School professor Clayton M. Christensen defined the "technology mudslide", a concept very similar to Brooks' "software development tar pit" -- a perpetual cycle of abandonment or retooling of existing systems in pursuit of the latest fashionable technology trend, a cycle in which

 "Coping with the relentless onslaught of technology change was akin to trying to climb a mudslide raging down a hill. You have to scramble with everything you've got to stay on top of it, and if you ever once stop to catch your breath, you get buried."


Featuritis (aka creeping featurism)

Featuritis, or creeping featurism, is the tendency for the number of features in a software product to rise with each release. If the software architect is detached from the actual implementation, the chances increase that the project will suffer from creeping featurism. What may have been a cohesive and consistent design in the early versions may end up as a patchwork of added features. And with extra features comes extra complexity.

As Donald Norman explains: "Complexity probably increases as the square of the features: double the number of features, quadruple the complexity. Provide ten times as many features, multiply the complexity by one hundred." (Norman 1988: p. 174) The result is, in other words, that the product may be extremely productive to the small proportion of expert users whose knowledge of the use of the product has been extended with each incremental addition of features. For the first-time user or the beginner, however, the sum of features is overwhelming and it can be very discouraging to have to spend large amounts of time finding out how to accomplish simple tasks.
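Norman's rule of thumb can be made concrete with a toy calculation: if complexity is driven by pairwise feature interactions, the interaction count grows quadratically with the number of features. A minimal sketch in Python (the feature counts here are made up for illustration):

    # Toy illustration of Norman's estimate: if every feature can interact
    # with every other feature, the number of distinct pairs -- a rough
    # proxy for complexity -- grows as n * (n - 1) / 2, i.e. quadratically.
    def interactions(n_features: int) -> int:
        return n_features * (n_features - 1) // 2

    for n in (10, 20, 100):
        print(f"{n:3d} features -> {interactions(n):4d} pairwise interactions")
    # 10 -> 45, 20 -> 190 (~4x), 100 -> 4950: doubling the features roughly
    # quadruples the "complexity", just as Norman suggests.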

Another factor in the spread of featuritis is pressure from power users or designers. Power users are especially prone to request additional features to better meet their specific needs, naively thinking that such additions can only "improve" the software, at least from their point of view. In reality, features that contradict the existing architecture and are difficult to implement within the current design framework are often a major source of deterioration of project quality.

Moreover, typically only a few users can actually profit from the continuous addition of features, as new features become difficult to remember and as such are never used. It is important to differentiate between adding a feature that generalizes a sequence of very frequently performed operations (a variation of Huffman encoding) and adding a feature that merely looks desirable ("Wouldn't it be nice if it had this feature too?"). Well-meaning designers who are not aware of the danger of featuritis tend to respond to pressure from power users, and in the process make the software more difficult to use for the average user or beginner, who is not necessarily interested in extra features.

Once a software application suffers from featuritis, designers often resort to providing a "beginner's mode" that contains a basic subset of the full set of features. But the resulting software complexity, and the destruction of the initial architecture in the process of adding features, represent a more serious and not easily solved problem, making featuritis a dangerous disease.

Don Norman says: #1

Hah! The example shown in Figure 1 is a wonderful example of a "self-defeating mechanism" (a concept worthy of its own dictionary entry). Too many features in a product? Well, we will simply add yet another feature to let you reduce the number of features. As the text for the figure legend puts it: "Example of featuritis overcome by letting the user choose a 'mode' corresponding to his/her skills." Um, but I am confused. Seems like the addition cancels any reduction. Self-defeating mechanism, self-defined. That's not reducing featuritis -- that is propagating it. I can think of other similar examples -- such as all the manuals one can purchase that explain the instruction manuals of products. Writing manuals to explain PowerPoint or Photoshop is a big business. Manuals that explain manuals. Added features in order to reduce the number of features. It's wonderful.

John Mashey (mash(at)heymash(dot)com) says: Oct 13th, 2008 #2

The term "creeping featurism" was used in a 1976 Programmer's Workbench paper I wrote, and in a talk first done in 1977, and later gave (as an ACM National Lecture) about 50-70 times through 1982. The original foils were scanned in 2002, and the phrase is used on Slide 033 within the talk.

I've lost the cartoon pair that went with this: the first, a smiling little innocent baby feature, the second, the monstrous tentacled adult creature.

I can't recall if I actually coined this myself or heard it somewhere, but in any case, the phrase was certainly in public use by 1976.

-John Mashey

Martin Van Zanten (martinjzu(at)gmail(dot)com) says: Nov 11th, 2008 #3

Quite well said! One other aspect I would like to point out: part of this featuritis is the feeling of "shooting on a moving target". It would be great if a "core application" would stay the same forever, so in my lifetime the "language used" would stay the same!

Of course modules could also be treated in this way... and for the adventurous this modular setup would provide an open end to experiment in different directions...

Get the point?!

Mads Soegaard (mads(at)interaction-design(dot)org) says: Nov 19th, 2008 #4

Frank Spillers has written a good article called "Feature frenzy - 10 tips to getting feature creep under control".

You can find it at http://experiencedynamics.blogs.com/site_search_usability/2007/02/feature_frenzy_.html 

Achieving simplicity via rewriting: the effect of the second system is not absolute

Great authors of literary works are known for relentlessly rewriting their masterpieces, achieving perfection only after a dozen drafts. Software is not that different. Writing a software system is a learning experience, and after finishing it you can start anew with a much deeper understanding of the problem and a greater appreciation of the advantages and disadvantages of various solutions. That's why software prototyping is such a valuable approach to writing complex software systems. See Software Life Cycle Models.

Sometimes people start with an architecturally flawed "first draft" and continue to enhance and debug it long past the point when it would have been simpler and faster to rewrite it based on a better understanding of the architecture and of the problem. In this case the second variant of the system can be a major improvement, if you resist the temptation to "overstretch" and adhere to the KISS principle. If you fail, then Brooks' "second-system effect" comes into play and typically dooms your effort. Humility and a clear understanding of your own limitations in dealing with complexity are necessary traits for any good software architect.

Actually, rewriting and simplifying the first "draft" can save a tremendous amount of time otherwise spent debugging the initial version, which typically suffers from an architecture inferior to your later ideas. Clean architecture simplifies debugging many times over. And despite the additional effort of rewriting/refactoring, when you're done you get a codebase that is easier to understand and much faster to debug -- with a real chance of getting most bugs out, instead of a system that "seems to work OK" but fails almost instantly as soon as you stray from a limited, common set of cases.

An interesting note about programming as an art form can be found at The GNU-Linux Art Farm - 08-30-99

On the component level, refactoring (see Refactoring: Improving the Design of Existing Code) might be a useful simplification technique. Actually "rewriting" is the simpler term, but let's assume that refactoring is rewriting with some ideological frosting ;-). See Slashdot Book Reviews: Refactoring: Improving the Design of Existing Code.

The complexity caused by adopting new technology for the sake of new technology is further exacerbated by the narrow focus and inexperience of many project leaders -- inexperience with mission-critical systems, systems of scale, software development disciplines, and project management. The resulting systems suffer from overcomplexity, and that alone typically dooms them in the long run.

The typical rate of failure in software development is high. A Standish Group International survey showed that 46% of IT projects were over budget and overdue -- and 28% failed altogether. That's normal, and the real figures are probably more dramatic: great software managers and architects are rare, and it is those people who determine the success of a software project.


Prototyping friendly approaches to implementation of complex software systems

Software prototyping refers to the activity of creating a "disposable", throw-away version of a software application to test the key ideas of the architecture.

A prototype typically simulates only key aspects of the system, avoiding bells and whistles, and as such can use a different, higher-level implementation language than the final product. It can also be a virtual machine, not a software implementation in the traditional sense (see below).

As the Wikipedia article states:

Prototyping has several benefits: The software designer and implementer can get valuable feedback from the users early in the project. The client and the contractor can compare if the software made matches the software specification, according to which the software program is built. It also allows the software engineer some insight into the accuracy of initial project estimates and whether the deadlines and milestones proposed can be successfully met. The degree of completeness and the techniques used in the prototyping have been in development and debate since its proposal in the early 1970s.[6]

The monolithic "waterfall" approach to software development has been dubbed the "Slaying the (software) Dragon" technique, since it assumes that the software designer and developer is a single hero who has to slay the entire dragon alone.

Prototyping can help avoid the great expense and difficulty of changing an almost finished software product due to changes in the specification, or in the understanding of the problem, that are necessary but introduced too late in the development cycle (and understanding of the problem often comes too late in "prototype-less" software development).

The process of prototyping involves the following steps:

  1. Identify key requirements
    Determine the key requirements, including the input and output information desired. Details, such as security, can typically be ignored.
  2. Develop the initial prototype
    The initial prototype includes only the key parts of the system and is written in a high-level language, which currently means a scripting language such as bash, Perl, Python, or JavaScript. The customers, including end users, can provide some (limited) feedback on what is right and what is wrong in the specifications and the implementation. Often at this point the specification changes radically. Sometimes you discover that the project can be implemented within the LAMP paradigm, which means at less cost and with higher quality. Sometimes a virtual machine environment can be used (the VM-as-a-software-system approach is a very interesting and powerful paradigm of software development).
  3. Review the architecture of the prototype
    Typically the initial approach to the architecture is sub-optimal, and during the implementation of the first version better ideas surface. That's why it is of paramount importance that the architects be part of the prototype implementation team and "feel the heat" of the selected approaches to system design.
  4. Revise and enhance the prototype
    Using the feedback, the specifications can be made more realistic; often some requirements can be simplified or eliminated. Negotiations about what is within the scope of the project are crucial for getting a high-quality system in time and within budget.
  5. Rewrite part or all of the prototype (possibly in a different, for example compiled, language), creating the production version of the software. A minimal sketch of a "key aspects only" prototype in the spirit of step 2 is shown after this list.
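As a minimal sketch of step 2 (the product here is a hypothetical log-report tool; all names are invented for illustration), a first prototype implements only the key input-to-output path and stubs everything else out:

    #!/usr/bin/env python3
    # Throw-away prototype of a (hypothetical) log-report tool. Only the
    # key requirement -- turn a log file into a per-user event count --
    # is implemented; security, error handling and output formats are
    # deliberately stubbed out, as befits a disposable prototype.
    import sys
    from collections import Counter

    def parse_user(line: str) -> str:
        # Key assumption to validate with users: the user id is field 3.
        return line.split()[2]

    def report(path: str) -> None:
        with open(path) as f:
            counts = Counter(parse_user(l) for l in f if l.strip())
        for user, n in counts.most_common():
            print(f"{user:20s} {n}")

    def authenticate() -> None:
        pass  # a detail ignored in the prototype (see step 1)

    if __name__ == "__main__":
        authenticate()
        report(sys.argv[1])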

Importance of scripting languages, the idea of tandem programming

Unless this is a re-implementation of pre-existing software, writing a complex system in a low-level language such as C, C++ or Java is actually a questionable idea. You will get all the necessary insights, but at a pretty high price. A better option is "tandem programming", in which the system is programmed with a tandem of a scripting language, serving as the main implementation language, and low-level subroutines, used where the scripting language is insufficient or inefficient. The latter can't be determined a priori: you need to profile the program to see which parts have a critical influence on the total execution time and which of them represent bottlenecks.
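A minimal sketch of this tandem approach (the checksum hot spot and the library name libchecksum.so are invented for illustration, not taken from any real project): profile the scripting-language version first, and only then move the proven bottleneck into C.

    # Step 1: profile the pure-Python prototype to find the real bottleneck.
    import cProfile
    import ctypes

    def checksum(data: bytes) -> int:
        # Hypothetical hot spot, identified by the profiler run below.
        total = 0
        for b in data:
            total = (total * 31 + b) & 0xFFFFFFFF
        return total

    cProfile.run("checksum(b'x' * 1_000_000)")  # shows where the time goes

    # Step 2: only after profiling, rewrite the proven bottleneck in C,
    # compiled separately into libchecksum.so (an assumed artifact):
    #
    #   unsigned int checksum(const unsigned char *d, size_t n) {
    #       unsigned int t = 0;
    #       for (size_t i = 0; i < n; i++) t = t * 31 + d[i];
    #       return t;
    #   }
    #
    # and call it from the scripting layer via ctypes:
    lib = ctypes.CDLL("./libchecksum.so")
    lib.checksum.argtypes = [ctypes.c_char_p, ctypes.c_size_t]
    lib.checksum.restype = ctypes.c_uint
    data = b"x" * 1_000_000
    print(lib.checksum(data, len(data)))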

Using a mixture of a scripting language and a compiled language, say TCL and C, or Python and C++, is a more productive approach than OO. The latter has its place and advantages, but they are overrated. The main danger of OO is that it puts at the forefront of software system design religious-fundamentalist types who will defend their (often stupid and superficial) object structuring ideas to death. The whole discussion of architecture degenerates into endless catfights and fruitless attempts to create an orthogonal system of classes that supposedly encompasses the problem space. But in many problem domains such an approach is as artificial as one can get, and some of its assumptions are totally unrealistic. Simply put, those domains need an approach other than object-oriented software design. See also anti-OO and Software Prototyping.

Prototyping usually allows you to get the most important insights into the actual problem you are facing more quickly, and to improve your understanding of the specification (which most often is totally unrealistic in its first draft). Your knowledge increases dramatically before you make the final decisions about the architecture of the system, as the prototype can perform the basic functions required soon enough, although it may do so very slowly or in incomplete fashion. This approach, in which you do not put all your eggs into one basket by committing to the initial approach (which may or may not be good), usually lets you create a better system at the same or even lesser cost and in less time: changing specifications at the late stages of system development is very costly and time consuming indeed, and it is what usually doubles the development time of large software projects. And you get there with a much lower level of frustration and fewer setbacks and nervous breakdowns along the way.

Another problem that this approach partially solves is the "building on shifting sands" problem. End users (future application owners) usually specify what they want incorrectly. Seeing a prototype makes them more sober and allows you to negotiate more viable specifications. In a way the prototype is a great specification negotiation tool, which gives you leverage unavailable with other approaches.

These days the usage of scripting languages can cut the volume of code by a factor of three or more in comparison with Java or C++. People say that "Real Beauty can be found in Simplicity," and as you may know already, "less" sometimes equals "more", as it enables reuse of a given tool in combination with others, in Lego fashion -- as the Unix pipe-based component architecture demonstrated to everybody long ago. I continue to adhere to that philosophy. And this component-based approach can be implemented in CGI applications too. If you, like me, value simplicity in software engineering, then you might benefit from the selected quotes and links below.
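A minimal sketch of this Lego-style composition: the classic word-frequency pipeline (in spirit, tr | sort | uniq -c | sort -rn | head) rebuilt from small single-purpose Python filters chained like Unix tools; the input file is whatever you pass on the command line.

    import re
    import sys
    from collections import Counter

    # Each "filter" does one thing and is composed like a Unix pipeline:
    # cat | tokenize | top.
    def cat(path):
        with open(path) as f:
            yield from f

    def tokenize(lines):
        for line in lines:
            yield from re.findall(r"[a-z']+", line.lower())

    def top(words, n=10):
        return Counter(words).most_common(n)

    for word, count in top(tokenize(cat(sys.argv[1]))):
        print(f"{count:6d} {word}")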

Simplicity does not guarantee commercial success. On the contrary, it somewhat complicates your market position, as cloning of your ideas becomes simpler and less expensive. Microsoft, which in many ways can be called the king of software complexity, demonstrated that it is possible to succeed and keep market share with complex, but well debugged, products. So the value of simplicity is, in a way, relative. But all in all, simplicity has a deep aesthetic value. See Aesthetics and the Human Factor in Programming.

I think writing a good software system is somewhat similar to writing a multivolume series of books. Not all writers can do that. Still, most writers, even writers of short stories, will rewrite each part of the final product several times and change the general structure a lot. Rewriting large systems is more difficult, but also very beneficial. The only problem here is that life is short ;-). Still, it makes sense to consider the current version of the system a draft that can be substantially improved and simplified by discovering some new unifying and simplifying paradigm. Sometimes you take a wrong direction and the rewrite fails despite honest efforts to make the system architecture simpler and more transparent. But still, "nothing ventured, nothing gained."

On the subsystem level, a decent configuration management system is a must, as it simplifies recovering from wrong turns in development. In software development, the road to hell is paved with good intentions way too often.

Compiler as a software architecture paradigm, usage of coroutines as a structuring mechanism

Compiler design and its related set of classic algorithms provide a pretty flexible software architecture that can be called the "abstract machine" architecture. Sometimes using this architecture and adapting it to a particular task can make a design more transparent and more easily debugged. Separating the lexer, the parser and the semantic routines is a very powerful way of structuring many types of programs, especially text conversion/transformation programs.

In other words, structuring a large program as if it were a compiler for some new specialized language helps to solve problems that are more difficult or even impossible to solve another way. It can add some elegance to the architecture of many types of text processing and interactive programs. BTW, coroutines were first introduced as a way to simplify writing compilers. The approach in which the input to the program is treated as some formal language, and the program transforms it into another language, has now got some traction in the XML world. But it is in no way limited to XML, and the idea has much wider applicability. In other words, elements of compiler technology, especially the notion of the scanner-parser-generator troika, are applicable to a wide range of programming tasks. Of course, to use it you need to understand how a compiler is structured and what algorithms are used. That's why a compiler construction course is so crucial. And again, while difficult (especially if overloaded with redundant formalisms), it is really exciting. In some ways, creating a new language is the way humans solve complex problems. A minimal sketch of the troika is shown below.
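A minimal sketch of the scanner-parser troika for a toy expression language (the grammar and all names are invented for illustration): the scanner is a generator feeding a recursive-descent parser that evaluates as it parses.

    import re

    # Scanner: raw text -> stream of (kind, value) tokens.
    def scan(text):
        for m in re.finditer(r"\s*(\d+|[+*()])", text):
            tok = m.group(1)
            yield ("NUM", int(tok)) if tok.isdigit() else ("OP", tok)

    # Parser/evaluator, by recursive descent over the toy grammar:
    #   expr := term ('+' term)*    term := factor ('*' factor)*
    #   factor := NUM | '(' expr ')'
    class Parser:
        def __init__(self, tokens):
            self.tokens = list(tokens)
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else (None, None)

        def next(self):
            tok = self.peek()
            self.pos += 1
            return tok

        def expr(self):
            val = self.term()
            while self.peek() == ("OP", "+"):
                self.next()
                val += self.term()
            return val

        def term(self):
            val = self.factor()
            while self.peek() == ("OP", "*"):
                self.next()
                val *= self.factor()
            return val

        def factor(self):
            kind, tok = self.next()
            if kind == "NUM":
                return tok
            assert tok == "("                  # only other legal factor
            val = self.expr()
            assert self.next() == ("OP", ")")  # consume the closing paren
            return val

    print(Parser(scan("2 + 3 * (4 + 1)")).expr())  # -> 17

In a production compiler the "evaluator" would be replaced by semantic routines emitting an intermediate representation, but the three-stage structure stays the same.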

LAMP as element of software architecture

If you don't want to outdo Microsoft in the design of interfaces and can limit yourself to basic stuff, then LAMP (with Perl, or Python, or Ruby instead of PHP, which I don't like ;-) is a tremendously powerful platform for creating applications that fall into this paradigm (and this is a surprisingly broad class of applications, as the web interface can be replaced by a command line interface).

In this case you can structure the application in several abstraction layers, such as an interface layer and a partial "abstract machine" layer behind a CGI/FastCGI or similar interface. In a way this resembles the structure of a compiler, and using this structure is a very powerful software architecture paradigm, actually deeper than OO (the latter can be considered a primitive compiler-compiler system). See also Software as a Service. A minimal sketch of such layering is shown below.
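A minimal sketch of this layering (the report logic and all names are invented; the same backend layer could just as well be driven from a command line wrapper): the CGI script is only a thin interface layer that translates the request into calls on the backend "abstract machine".

    #!/usr/bin/env python3
    import os
    from urllib.parse import parse_qs

    # Backend "abstract machine" layer: knows nothing about the web.
    def run_report(user: str) -> str:
        # Hypothetical domain logic; a CLI front end could call this too.
        return f"report for {user}: 3 open tickets"

    # Interface layer: a thin CGI shim that maps the request onto a
    # backend call and renders the result.
    def main() -> None:
        query = parse_qs(os.environ.get("QUERY_STRING", ""))
        user = query.get("user", ["anonymous"])[0]
        print("Content-Type: text/plain\r\n")
        print(run_report(user))

    if __name__ == "__main__":
        main()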

Virtual machine as a new architecture for software systems

With the advent of virtual machines it is now possible to create an architecture in which the OS is used not only as the environment in which the software runs, but as a part of the software system itself. This is a pretty new approach and has its pitfalls (you need a VM to implement it, so the VM becomes a necessary part of the solution).

Still, such an approach provides important advantages in comparison with traditional software application architectures. A specialized virtual machine provides many services that would be costly and/or difficult to implement with other approaches to the architecture of a particular application, such as logging, parallelization of execution, etc.

One limitation is that you need to work in a particular VM environment (Xen, VMware, etc.), but with the tremendous power of modern PCs and servers this is less and less of a negative factor.

This approach is sometimes called Virtual Software Appliances.

This way you can use all the facilities of the OS as components of your system. If you use Unix, you can usually reuse quite a bit of pre-existing functionality in your software system. You also have the ability to use multiple processes, with tools to control them and schedule them, named pipes for communication between them, etc. It's pretty liberating. In a way it allows reusing Conway's idea of writing a complex system as a set of coroutines on a new level. Here is my old review of the Salus book that touched on this aspect of Unix:

Expensive short chronology; most material is available online, July 9, 2004

This is an expensive short book with mainly trivial chronological information, 90% of which is freely available on the Internet. As for the history of the first 25 years of Unix, it is both incomplete and superficial. Peter Salus is reasonably good as a facts collector (although for a person with his level of access to the Unix pioneers he looks extremely lazy, and he essentially missed an opportunity to write a real history, settling for a glossy superficial chronology instead). He probably just felt the market need for such a book and decided to fill the niche.

In my humble opinion Salus lacks real understanding of the technical and social dynamics of Unix development, understanding that can be found, say, in the chapter "Twenty Years of Berkeley Unix: From AT&T-Owned to Freely Redistributable" in the book "Open Sources: Voices from the Open Source Revolution (O'Reilly, 1999)" (available online). The extended version of this chapter will be published in the second edition of "The Design and Implementation of the 4.4BSD Operating System (Unix and Open Systems Series)", which I highly recommend (I read a preprint at Usenix.)

In any case Kirk McKusick is a real insider, not a former Usenix bureaucrat like Salus. Salus was definitely close to the center of the events; but it is unclear to what extent he understood the events he was close to.

Unix history is a very interesting example of how the interests of the military (DARPA) shape modern technical projects (not always to the detriment of technical quality -- quite the opposite in the case of Unix) and how DARPA investment in Unix created a completely unforeseen side effect: BSD Unix, which later became the first free/open Unix ever (the Net2 tape and then the Free/Open/NetBSD distributions). Another interesting side of Unix history is that AT&T brass never understood what a jewel they had in their hands.

Salus's Usenix position prevented him from touching many bitter conflicts that litter the first 25 years of Unix, including personal conflicts. The reader should be advised that the book represents "official" version of history, and that Salus is, in essence, a court historian, a person whose main task is to put gloss on the events, he is writing about. As far as I understand, Salus never strays from this very safe position.

Actually Unix created a new style of computing, a new way of thinking of how to attack a problem with a computer. This style was essentially the first successful component model in programming. As Frederick P. Brooks Jr (another computer pioneer who early recognized the importance of pipes) noted, the creators of Unix "...attacked the accidental difficulties that result from using individual programs together, by providing integrated libraries, unified file formats, and pipes and filters." As a non-programmer, Salus is in no position to touch this important side of Unix. The book contains standard and trivial praise for pipes, without understanding of the full scope and limitations of this component programming model...

I can also attest that as a historian, Peter Salus can be extremely boring: this July I was unfortunate enough to sit through one of his talks, when he essentially stole from Kirk McKusick more than an hour (out of the two scheduled for the BSD history section at this year's Usenix Technical Conference) with some paternalistic trivia insulting the intelligence of the Usenix audience, instead of the short 10 min introduction he was expected to give; only after he eventually managed to finish did Kirk McKusick make a really interesting, but necessarily short (he had only 50 minutes left :-) presentation about the history of the BSD project, which was what this session was about.

Dr. Nikolai Bezroukov



Old News ;-)

[Jun 02, 2021] The Basics of the Unix Philosophy - programming

Jun 02, 2021 | www.reddit.com

Gotebe 3 years ago

Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.

By now, and to be frank in the last 30 years too, this is complete and utter bollocks. Feature creep is everywhere, typical shell tools are chock-full of spurious additions, from formatting to "side" features, all half-assed and barely, if at all, consistent.

Nothing can resist feature creep.

not_perfect_yet 3 years ago

It's still a good idea. It's become very rare though. Many problems we have today are a result of not following it.

name_censored_ 3 years ago
· edited 3 years ago

By now, and to be frank in the last 30 years too, this is complete and utter bollocks.

There is not one single other idea in computing that is as unbastardised as the unix philosophy - given that it's been around fifty years. Heck, Microsoft only just developed PowerShell - and if that's not Microsoft's take on the Unix philosophy, I don't know what is.

In that same time, we've vacillated between thick and thin computing (mainframes, thin clients, PCs, cloud). We've rebelled against at least four major schools of program design thought (structured, procedural, symbolic, dynamic). We've had three different database revolutions (RDBMS, NoSQL, NewSQL). We've gone from grassroots movements to corporate dominance on countless occasions (notably - the internet, IBM PCs/Wintel, Linux/FOSS, video gaming). In public perception, we've run the gamut from clerks ('60s-'70s) to boffins ('80s) to hackers ('90s) to professionals ('00s post-dotcom) to entrepreneurs/hipsters/bros ('10s "startup culture").

It's a small miracle that iproute2 only has formatting options and grep only has --color . If they feature-crept anywhere near the same pace as the rest of the computing world, they would probably be a RESTful SaaS microservice with ML-powered autosuggestions.

badsectoracula 3 years ago

This is because adding a new feature is actually easier than trying to figure out how to do it the Unix way - often you already have the data structures in memory and the functions to manipulate them at hand, so adding a --frob parameter that does something special with that feels trivial.

GNU and their stance to ignore the Unix philosophy (AFAIK Stallman said at some point he didn't care about it) while becoming the most available set of tools for Unix systems didn't help either.



ILikeBumblebees 3 years ago
· edited 3 years ago

Feature creep is everywhere

No, it certainly isn't. There are tons of well-designed, single-purpose tools available for all sorts of purposes. If you live in the world of heavy, bloated GUI apps, well, that's your prerogative, and I don't begrudge you it, but just because you're not aware of alternatives doesn't mean they don't exist.

typical shell tools are choke-full of spurious additions,

What does "feature creep" even mean with respect to shell tools? If they have lots of features, but each function is well-defined and invoked separately, and still conforms to conventional syntax, uses stdio in the expected way, etc., does that make it un-Unixy? Is BusyBox bloatware because it has lots of discrete shell tools bundled into a single binary?

nirreskeya 3 years ago

Zawinski's Law :)

waivek 3 years ago

The (anti) foreword by Dennis Ritchie -

I have succumbed to the temptation you offered in your preface: I do write you off as envious malcontents and romantic keepers of memories. The systems you remember so fondly (TOPS-20, ITS, Multics, Lisp Machine, Cedar/Mesa, the Dorado) are not just out to pasture, they are fertilizing it from below.

Your judgments are not keen, they are intoxicated by metaphor. In the Preface you suffer first from heat, lice, and malnourishment, then become prisoners in a Gulag. In Chapter 1 you are in turn infected by a virus, racked by drug addiction, and addled by puffiness of the genome.

Yet your prison without coherent design continues to imprison you. How can this be, if it has no strong places? The rational prisoner exploits the weak places, creates order from chaos: instead, collectives like the FSF vindicate their jailers by building cells almost compatible with the existing ones, albeit with more features. The journalist with three undergraduate degrees from MIT, the researcher at Microsoft, and the senior scientist at Apple might volunteer a few words about the regulations of the prisons to which they have been transferred.

Your sense of the possible is in no sense pure: sometimes you want the same thing you have, but wish you had done it yourselves; other times you want something different, but can't seem to get people to use it; sometimes one wonders why you just don't shut up and tell people to buy a PC with Windows or a Mac. No Gulag or lice, just a future whose intellectual tone and interaction style is set by Sonic the Hedgehog. You claim to seek progress, but you succeed mainly in whining.

Here is my metaphor: your book is a pudding stuffed with apposite observations, many well-conceived. Like excrement, it contains enough undigested nuggets of nutrition to sustain life for some. But it is not a tasty pie: it reeks too much of contempt and of envy.

Bon appetit!

[Jun 02, 2021] UNIX Philosophy and (GNU-)Linux- Is it still relevant

Notable quotes:
"... There's still value in understanding the traditional UNIX "do one thing and do it well" model where many workflows can be done as a pipeline of simple tools each adding their own value, but let's face it, it's not how complex systems really work, and it's not how major applications have been working or been designed for a long time. It's a useful simplification, and it's still true at /some/ level, but I think it's also clear that it doesn't really describe most of reality. ..."
Jun 02, 2021 | www.reddit.com

sub200ms 5 years ago

I agree with Linus Torvalds on that issue:

There's still value in understanding the traditional UNIX "do one thing and do it well" model where many workflows can be done as a pipeline of simple tools each adding their own value, but let's face it, it's not how complex systems really work, and it's not how major applications have been working or been designed for a long time. It's a useful simplification, and it's still true at /some/ level, but I think it's also clear that it doesn't really describe most of reality.
http://www.itwire.com/business-it-news/open-source/65402-torvalds-says-he-has-no-strong-opinions-on-systemd

Almost nothing on the Desktop works as the original Unix inventors prescribed as the "Unix way", and even editors like "Vim" are questionable since it has integrated syntax highlighting and spell checker. According to dogmatic Unix Philosophy you should use "ed, the standard editor" to compose the text and then pipe your text into "spell". Nobody really wants to work that way.

But while "Unix Philosophy" in many ways have utterly failed as a way people actually work with computers and software, it is still very good to understand, and in many respects still very useful for certain things. Personally I love those standard Linux text tools like "sort", "grep" "tee", "sed" "wc" etc, and they have occasionally been very useful even outside Linux system administration.

[Oct 06, 2019] Devop created huge opportunities for a new generation of snake oil salesman

Highly recommended!
Oct 06, 2019 | www.reddit.com

DragonDrew Jack of All Trades 772 points · 4 days ago

"I am resolute in my ability to elevate this collaborative, forward-thinking team into the revenue powerhouse that I believe it can be. We will transition into a DevOps team specialising in migrating our existing infrastructure entirely to code and go completely serverless!" - CFO that outsources IT level 2 OpenScore Sysadmin 527 points · 4 days ago

"We will utilize Artificial Intelligence, machine learning, Cloud technologies, python, data science and blockchain to achieve business value"

[Oct 05, 2019] Sick and tired of listening to these so called architects and full stack developers who watch bunch of videos on YouTube and Pluralsight, find articles online. They go around workplace throwing words like containers, devops, NoOps, azure, infrastructure as code, serverless, etc, but they don't understand half of the stuff

Devop created a new generation of bullsheeters
Oct 05, 2019 | www.reddit.com

They say, No more IT or system or server admins needed very soon...

Sick and tired of listening to these so called architects and full stack developers who watch bunch of videos on YouTube and Pluralsight, find articles online. They go around workplace throwing words like containers, devops, NoOps, azure, infrastructure as code, serverless, etc, they don't understand half of the stuff. I do some of the devops tasks in our company, I understand what it takes to implement and manage these technologies. Every meeting is infested with these A holes.

ntengineer 613 points · 4 days ago

Your best defense against these is to come up with non-sarcastic and quality questions to ask these people during the meeting, and watch them not have a clue how to answer them.

For example, a friend of mine worked at a smallish company, some manager really wanted to move more of their stuff into Azure including AD and Exchange environment. But they had common problems with their internet connection due to limited bandwidth and them not wanting to spend more. So during a meeting my friend asked a question something like this:

"You said on this slide that moving the AD environment and Exchange environment to Azure will save us money. Did you take into account that we will need to increase our internet speed by a factor of at least 4 in order to accommodate the increase in traffic going out to the Azure cloud? "

Of course, they hadn't. So the CEO asked my friend if he had the numbers, which he had already done his homework, and it was a significant increase in cost every month and taking into account the cost for Azure and the increase in bandwidth wiped away the manager's savings.

I know this won't work for everyone. Sometimes there is real savings in moving things to the cloud. But often times there really isn't. Calling the uneducated people out on what they see as facts can be rewarding.

PuzzledSwitch 101 points · 4 days ago

my previous boss was that kind of a guy. he waited till other people were done throwing their weight around in a meeting and then calmly and politely dismantled them with facts.

no amount of corporate pressuring or bitching could ever stand up to that.

themastermatt 42 points · 4 days ago

I've been trying to do this. Problem is that everyone keeps talking all the way to the end of the meeting leaving no room for rational facts.

PuzzledSwitch 35 points · 4 days ago

make a follow-up in email, then.

or, you might have to interject for a moment.

williamfny Jack of All Trades 26 points · 4 days ago

This is my approach. I don't yell or raise my voice, I just wait. Then I start asking questions that they generally cannot answer and slowly take them apart. I don't have to be loud to get my point across.

MaxHedrome 6 points · 4 days ago

Listen to this guy OP

This tactic is called "the box game". Just continuously ask them logical questions that can't be answered with their stupidity. (Box them in), let them be their own argument against themselves.

CrazyTachikoma 4 days ago

Most DevOps I've met are devs trying to bypass the sysadmins. This, and the Cloud fad, are burning serious amount of money from companies managed by stupid people that get easily impressed by PR stunts and shiny conferences. Then when everything goes to shit, they call the infrastructure team to fix it...

[Sep 18, 2019] MCAS design, Boeing and ethics of software architect

Sep 18, 2019 | www.moonofalabama.org

... ... ...

Boeing screwed up by designing and installing a faulty systems that was unsafe. It did not even tell the pilots that MCAS existed. It still insists that the system's failure should not be trained in simulator type training. Boeing's failure and the FAA's negligence, not the pilots, caused two major accidents.

Nearly a year after the first incident Boeing has still not presented a solution that the FAA would accept. Meanwhile more safety critical issues on the 737 MAX were found for which Boeing has still not provided any acceptable solution.

But to Langewiesche this anyway all irrelevant. He closes his piece out with more "blame the pilots" whitewash of "poor Boeing":

The 737 Max remains grounded under impossibly close scrutiny, and any suggestion that this might be an overreaction, or that ulterior motives might be at play, or that the Indonesian and Ethiopian investigations might be inadequate, is dismissed summarily. To top it off, while the technical fixes to the MCAS have been accomplished, other barely related imperfections have been discovered and added to the airplane's woes. All signs are that the reintroduction of the 737 Max will be exceedingly difficult because of political and bureaucratic obstacles that are formidable and widespread. Who in a position of authority will say to the public that the airplane is safe?

I would if I were in such a position. What we had in the two downed airplanes was a textbook failure of airmanship. In broad daylight, these pilots couldn't decipher a variant of a simple runaway trim, and they ended up flying too fast at low altitude, neglecting to throttle back and leading their passengers over an aerodynamic edge into oblivion. They were the deciding factor here -- not the MCAS, not the Max.

One wonders how much Boeing paid the author to assemble his screed.

foolisholdman , Sep 18 2019 17:14 utc | 5

14,000 Words Of "Blame The Pilots" That Whitewash Boeing Of 737 MAX Failure
The New York Times

No doubt, this WAS intended as a whitewash of Boeing, but having read the 14,000 words, I don't think it qualifies as more than a somewhat greywash. It is true he blames the pilots for mishandling a situation that could, perhaps, have been better handled, but Boeing still comes out of it pretty badly and so does the NTSB. The other thing I took away from the article is that Airbus planes are, in principle, & by design, more failsafe/idiot-proof.

William Herschel , Sep 18 2019 17:18 utc | 6
Key words: New York Times Magazine. I think when your body is for sale you are called a whore. Trump's almost hysterical bashing of the NYT is enough to make anyone like the paper, but at its core it is a mouthpiece for the military industrial complex. Cf. Judith Miller.
BM , Sep 18 2019 17:23 utc | 7
The New York Times Magazine just published a 14,000 words piece

An ill-disguised attempt to prepare the ground for premature approval for the 737max. It won't succeed - impossible. Opposition will come from too many directions. The blowback from this article will make Boeing regret it very soon, I am quite sure.

foolisholdman , Sep 18 2019 17:23 utc | 8
Come to think about it: (apart from the MCAS) what sort of crap design is it, if an absolutely vital control, which the elevator is, can become impossibly stiff under just those conditions where you absolutely have to be able to move it quickly?
A.L. , Sep 18 2019 17:27 utc | 9
This NYT article is great.

It will only highlight the hubris of "my sh1t doesn't stink" mentality of the American elite and increase the resolve of other civil aviation authorities with a backbone (or in ascendancy) to put Boeing through the wringer.

For the longest time FAA was the gold standard and years of "Air Crash Investigation" TV shows solidified its place but has been taken for granted. Unitl now if it's good enough for the FAA it's good enough for all.

That reputation has now been irreparably damaged over this sh1tshow. I can't help but think this NYT article is only meant for domestic sheeple or stock brokers' consumption as anyone who is going to have anything technical to do with this investigation is going to see right through this load of literal diarrhoea.

I wouldn't be surprised if some insider wants to offload some stock and planted this story ahead of some 737MAX return-to-service timetable announcement to get an uplift. Someone needs to track the SEC forms 3 4 and 5. But there are also many ways to skirt insider reporting requirements. As usual, rules are only meant for the rest of us.

jayc , Sep 18 2019 17:38 utc | 10
An appalling indifference to life/lives has been a signature feature of the American experience.
psychohistorian , Sep 18 2019 17:40 utc | 11
Thanks for the ongoing reporting of this debacle b....you are saving peoples lives

@ A.L who wrote

"
I wouldn't be surprised if some insider wants to offload some stock and planted this story ahead of some 737MAX return-to-service timetable announcement to get an uplift. Someone needs to track the SEC forms 3 4 and 5. But there are also many ways to skirt insider reporting requirements. As usual, rules are only meant for the rest of us.
"

I agree but would pluralize your "insider" to "insiders". This SOP gut and run financialization strategy is just like we are seeing with Purdue Pharma that just filed bankruptcy because their opioids have killed so many....the owners will never see jail time and their profits are protected by the God of Mammon legal system.

Hopefully the WWIII we are engaged in about public/private finance will put an end to this perfidy by the God of Mammon/private finance cult of the Western form of social organization.

b , Sep 18 2019 17:46 utc | 14
Peter Lemme, the satcom guru, was once an engineer at Boeing. He testified on technical MAX issues before Congress and wrote a lot of technical detail about them. He retweeted the NYT Mag piece with this comment:
Peter Lemme @Satcom_Guru

Blame the pilots.
Blame the training.
Blame the airline standards.
Imply rampant corruption at all levels.
Claim Airbus flight envelope protection is superior to Boeing.
Fumble the technical details.
Stack the quotes with lots of hearsay to drive the theme.
Ignore everything else

[Sep 06, 2019] Knuth: Programming and architecture are interrelated and it is impossible to create good architecture without actually programming at least a prototype

Notable quotes:
"... When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?" ..."
"... When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this." ..."
Sep 06, 2019 | archive.computerhistory.org

...I showed the second version of this design to two of my graduate students, and I said, "Okay, implement this, please, this summer. That's your summer job." I thought I had specified a language. I had to go away. I spent several weeks in China during the summer of 1977, and I had various other obligations. I assumed that when I got back from my summer trips, I would be able to play around with TeX and refine it a little bit. To my amazement, the students, who were outstanding students, had not completed [it]. They had a system that was able to do about three lines of TeX. I thought, "My goodness, what's going on? I thought these were good students." Well afterwards I changed my attitude to saying, "Boy, they accomplished a miracle."

Because going from my specification, which I thought was complete, they really had an impossible task, and they had succeeded wonderfully with it. These students, by the way, [were] Michael Plass, who has gone on to be the brains behind almost all of Xerox's Docutech software and all kind of things that are inside of typesetting devices now, and Frank Liang, one of the key people for Microsoft Word.

He did important mathematical things as well as his hyphenation methods which are quite used in all languages now. These guys were actually doing great work, but I was amazed that they couldn't do what I thought was just sort of a routine task. Then I became a programmer in earnest, where I had to do it. The reason is when you're doing programming, you have to explain something to a computer, which is dumb.

When you're writing a document for a human being to understand, the human being will look at it and nod his head and say, "Yeah, this makes sense." But then there's all kinds of ambiguities and vagueness that you don't realize until you try to put it into a computer. Then all of a sudden, almost every five minutes as you're writing the code, a question comes up that wasn't addressed in the specification. "What if this combination occurs?"

It just didn't occur to the person writing the design specification. When you're faced with implementation, a person who has been delegated this job of working from a design would have to say, "Well hmm, I don't know what the designer meant by this."

If I hadn't been in China they would've scheduled an appointment with me and stopped their programming for a day. Then they would come in at the designated hour and we would talk. They would take 15 minutes to present to me what the problem was, and then I would think about it for a while, and then I'd say, "Oh yeah, do this. " Then they would go home and they would write code for another five minutes and they'd have to schedule another appointment.

I'm probably exaggerating, but this is why I think Bob Floyd's Chiron compiler never got going. Bob worked many years on a beautiful idea for a programming language, where he designed a language called Chiron, but he never touched the programming himself. I think this was actually the reason that he had trouble with that project, because it's so hard to do the design unless you're faced with the low-level aspects of it, explaining it to a machine instead of to another person.

Forsythe, I think it was, who said, "People have said traditionally that you don't understand something until you've taught it in a class. The truth is you don't really understand something until you've taught it to a computer, until you've been able to program it." At this level, programming was absolutely important.

Bad Government Software

Edwardus

I used to work as a software developer many years (and generations of hardware and software) ago. The large systems integration houses tend to assign their worst staff to government projects, and governments complain less about quality of work and are less likely to go ballistic when projects fail (and many certainly do, in both private and public sectors). To put a career bureaucrat in charge (responsible/accountable on the RACI chart) of a complex software development project is irresponsible.

I also question why they had to build from scratch when production code is available from third parties, some of it written for consumer-driven healthcare (CDHP), which shares some of the Obamacare functionality. While those systems are far less complex in terms of constraints and total functionality, the software core of these systems contains (once again) production-grade code that has been tested extensively. Why on Earth anyone would want to build this from scratch is beyond my comprehension, unless of course someone wanted to enrich one or more government contractors.

I read some inept excuse about the project team not being able to perform beta testing. Using production-grade code would have reduced part of that problem, and of course stress testing (!) the code for a greater-than-anticipated volume of users and transactions would have revealed any design or environmental problems early.

[Nov 24, 2013] Bob Goodwin What Can Be Done About the Software Engineering Crisis

The main article is very weak, but some comments make sense...
naked capitalism

Michael Robinson

Unfortunately, I don't have time to rebut the sheer quantity of wrong in this article. I hope someone else who has experience delivering large software projects will come along to do so.

I will simply state that there is no "crisis" in software engineering. Software engineering practices have improved remarkably since the famous 1994 Standish Group Chaos study.

What has not improved at all in that time is the procurement and specification process. Software continues to be specified and purchased by people who are utterly unqualified to do so, and I have never seen any attempt by a procuring organization to impose any sort of accountability on those responsible for making critical decisions.

I've seen many projects that were doomed to failure from the moment they were greenlighted.

To use a civil engineering analogy, just because you want to build a bridge to Hawaii, and can find a salesman who will promise to build one for you, doesn't mean it is going to happen, no matter how much you beat up on the engineers.

Brooklin Bridge

Personally, I think what you did is wonderful and only wish more people would try it. One of the strongest concerns I have regarding the direction computing is taking is that computer literacy in general, and programming specifically, are becoming so far removed from the general public and its needs, and so tailor-made for solving the problems of big business and big data only, problems that big business invariably uses to extract more RENT from the public and to monitor it more intrusively, and for little else.

In the early 90′s, it looked for a while as if Microsoft was truly going to provide a foundation for the democratization of computer literacy and software development. But then they took a sharp turn in the opposite direction. Soon after, they became lost, like everyone else, in the race to generate web-based services for RENT EXTRACTION and to set up an environment – the cloud – where people increasingly HAVE to use them. Most people didn't even notice this trend and couldn't care less; they are utterly seduced by convenience. But it comes at a huge hidden cost, and partly explains why violations of our privacy create such a collective yawn – people simply are NOT aware of what is going on, of how deep and intrusive the invasion of privacy is.

What the industry now calls easy programming languages, such as Java and C#, are not necessarily easy for the general public; they are easy for sweatshops full of poorly paid programmers who are hired to crank out code more and more quickly with less and less skill and definitely less and less craftsmanship, but they are not easy for the average person to just pick up. The software industry needs its own Charlie Chaplin to poke fun at the way it is becoming more and more drone-like and less and less human in its goals.

Brooklin Bridge

Why has OO not lived up to its promise? Object oriented design can be very difficult, for one thing. To make an elegant framework that publishes its functionality in an inheritable object oriented manner requires unusual skill and considerable practice.

Also, it doesn't always lend itself well to groups of engineers, each of whom is to some degree the designer of his or her own solutions to a part of a project, but who are not given the time, or don't have the inclination, to develop a common set of objects between them.

You can have an extremely smart engineer who might do quite poorly when trying to think in OO. That doesn't mean his or her solutions to problems don't represent excellent solutions.

Then, OO's emphasis on hierarchy doesn't always adapt readily to modeling every problem. OO code can be wonderful at making tool kits and frameworks, such as graphical interfaces; things that are specifically designed for subsequent use by other developers. But if your job is to design and implement some one-off that hooks up two protocols, by yesterday, OO design in its formal sense may be more than you want.

Frequently there is confusion about OO languages such as Java and C#. It is assumed that because they are object oriented by nature, their power derives from that aspect. But frequently developers get around this aspect when, for whatever reason, they don't want to design actual OO solutions. Make every procedure into an object and voila, you have OO code used in a procedural manner. Each object has little or no relation, and absolutely no hierarchical relation, to the objects surrounding it. They are called from a managing routine with the usual and traditional control flow directives.
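To make the last point concrete, here is a minimal Java sketch (all names invented for illustration) of OO used in a procedural manner: each class is just a bag for one procedure, with no hierarchy and no polymorphism anywhere, and a managing routine drives the steps with traditional control flow.

class LoadStep {
    // A "class" that exists only to hold one procedure.
    static String run(String path) { return "data from " + path; }
}

class TransformStep {
    static String run(String data) { return data.toUpperCase(); }
}

class SaveStep {
    static void run(String data) { System.out.println("saving: " + data); }
}

public class Manager {
    // The managing routine: plain sequential control flow; no objects interact.
    public static void main(String[] args) {
        String data = LoadStep.run("input.txt");
        data = TransformStep.run(data);
        SaveStep.run(data);
    }
}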

But often the greatest power of these "interpreted" languages (run inside a virtual machine – often written in straight C) is in managing memory and other aspects of the code process that languages such as C++ left up to each developer and/or team, with extremely diverse results running the gamut from true shantytowns to absolutely sparkling and breathtaking Taj Mahals.

As to the three points Mr. Goodwin made, Standards, Transparency and Modularity/Simplicity, I always have problems with software evangelists no matter how brilliant or well presented their arguments are or how good their intentions.

A lot of these suggestions, such as enforced standardization and modularity, would benefit business as rent extractor at the expense of innovation (and thus ultimately at the expense of business). It would direct the purpose of software to a narrow band of Lego applications, and the result would tend towards an increasingly narrowly skilled set of programmers – particularly with the overtones of enforcement Mr. Goodwin requires for putting his views into practice. How many times have I heard the phrase, "Let's not re-invent the wheel", only to find that someone with no special license and no special authorization makes a MUCH better widget than the one that was supposed to be the "standard". It's called evolution, and yet it has the terminal flaw of timidity when forced into a world of GMOs where even the inventors must be authorized to invent.

Code requirements in the building industry are an excellent example of the dangers involved in this approach. Ten years ago there used to be essentially one book of code requirements for building residential homes. Now there are seven or eight. The average contractor simply doesn't have time to keep up with it all. Naturally, one can argue that as we develop more and more restrictions on what one can do, and more and more requirements on what one must do, we are making the buildings safer and more energy efficient. And that is true as far as it goes. But we are also making them expensive to the point where no one except big business can afford to build them and no one except big business can afford to buy them, and they come with so many restrictions about what even the home-owner can do that no one would want to live in them. Indeed, they DO satisfy all the requirements Mr. Goodwin specifies above, but the outcome is essentially that everyone will end up renting them. And as to innovation, forget it. Just plain forget it unless it is officially sanctioned or unless you are very rich (the two are usually one and the same). It may start out with everyone being free to be "innovative" with software as much as they want, say in their own officially sanctioned play pens, but it may end up, as with building codes, with it being illegal even to own a compiler without obnoxious and often corrupt code inspectors sniffing around at every turn.

Brond

As a physics PhD with 25 years experience working in software engineering, I've heard and considered the building analogy before. I always find that it misleads more than it illuminates. Here are some of the reasons.

(1) Building is based on slowly evolving properties. The size of people does not change over time, nor do the properties of wood or the laws of physics. Software is built to run on systems which have increased in performance by many many orders of magnitude in relevant dimensions over just a few decades. Economics dictates that large software systems take advantage of this fire hose of new performance. Software projects that delay time to market in order to build in much future extensibility fail. Nor do the relevant dimensions change at the same rate. Over just a decade this can open up a relative order of magnitude or more between two of the dimensions (say, bandwidth to SSD vs bandwidth to disk). This rate of change means completely reorganized software systems become (un)economic over little more than a decade. Simple scale-up does not work for long.

(2) Software can be copied free, unlike buildings. We never build the same thing twice. True, applications that are popular and similar enough spawn frameworks and automation tools (think thick-client apps during the 90′s and three tier web apps in the 2000′s). Then, given (1), the very monoculture of these opens up competitive opportunities for doing better for some important subclass of the problem space (think, NoSQL databases).

(3) Software system testing faces more challenges than do physical systems. There are several reasons which bear on this. Building materials have very good continuum approximations – we don't have to model atomic scale structure to understand when a beam will fail. So far, there are few similar effective descriptions for software systems which offer compositional utility together with the kind of guarantees that can enhance reliability testing (operating systems are relatively good at hiding hardware details – maybe the most important success). Also most software has very complex inputs and outputs, and the flow of information down the layers is also very complex – it's "all surface" compared with the physical interface of, say, a tunnel or bridge. Think of modeling a fractal bridge as an analogy and you won't be far wrong. It might be objected that these are deficiencies of software engineering, and not inherent in the undertaking. I won't argue it, but they are not easily solved, and building analogies won't help solve them.

(4) Building codes involve the legal system. Now consider for a moment the speed at which the legal system adapts to change in the world (sometimes with very good reason), and the political forces which are involved. There is a huge impedance mismatch between the legal and the techno-economic evolution of (1).

So, it would be nice, and good luck to you with helping it happen. We shall watch your future career with interest!

bob

If you're driving too fast to be able to pay attention, the normal, logical step is to SLOW DOWN. Cursing the road or the car for being too old and slow does not help.

[Mar 13, 2013] Life Cycle of a Silver Bullet by Sarah A. Sheard, Software Productivity Consortium

Jul 2003 | STSC CrossTalk

"Attention! Throw out those other improvement methods - we have just discovered the best ever. With our method, your quality will go up and costs and cycle time will go down." Almost any improvement method is hailed as the best way to save business from problems when it is new. Unfortunately, a few years later, this same method is now the reviled, flawed method that a new method is replacing. This parable tells how this happens.

In the 17th century, Europeans believed that silver bullets could kill werewolves. Today's executives seek silver bullets to protect themselves not from werewolves but from sliding profits, disillusioned stockholders, and lost market share. The silver bullets for our executives are those new management trends that promise to transform the way business is done.

Examples over the decades have included Management by Objectives and Total Quality Management, while Six Sigma, Lean Enterprise, the Capability Maturity Model Integration (CMMI®), and agile software development techniques are more recent methods earning silver-bullet reputations.

Process improvement initiatives like these can and do work, but how they are implemented is critical to their success. The following parable shows the 11 phases in the life cycle of such an improvement initiative.

Phase 1: Fresh Start

An executive of Porcine Products, Mr. Hamm, decides to throw away all silver bullets. He decides that no one knows his company like he does. He takes a close look at how the company is working to determine what its problems are and how they arose. He also looks at company strengths to leverage them and make them more effective in the future.

Envision a little pig in a suit, wiping a bunch of architectural drawings and books off a table.

Phase 2: Executive Dedication and Openness

Hamm makes it his single-minded focus to improve Porcine Products. Having identified its problems and strengths and determined how to address them, he dedicates time and money to implementing the identified improvements and eliminating conflicting initiatives. He hires forward-thinking, intelligent managers and devotes considerable amounts of his own time to be sure that the problems are truly solved, not just glossed over. Hamm and his managers research a number of current and future improvement methods to help define current problems and to serve as potential tool kits providing applicable suggestions.

The executive insists that the senior managers become part of the solution. Hamm forces them to examine their roles in contributing to company problems and to restructure their own work to change the way the company operates. A climate of openness without retribution is fostered, and senior managers listen to messages from all levels of the company, especially messages suggesting improvements in their own work.

Envision a little pig constructing a house made of bricks.

Phase 3: Success

Porcine Products reaps the rewards of this thorough effort. Executives and managers change the way they lead. Cross-company improvements change the way the company operates. Products are created more efficiently and have better quality. Costs go down, orders increase, and morale improves.

Phase 4: Publicity

The business press notices the successes of Porcine Products. Hamm explains the improvements his company has achieved and is asked for a name for his method. In honor of his French grandfather, he calls the improvement Balle-Argentee. The press also wants to report how much time and money was spent, and what was reaped from the improvements; Hamm looks back and makes estimates. From these the business press calculates the magic return-on-investment (ROI) number for the Balle-Argentee method of business improvement.

Envision a little pig proudly holding a book showing a house of bricks on the cover. The book's title is "The Balle-Argentee Method."

Phase 5: Momentum

Other companies look eagerly at the success of Porcine Products. Some of them are experiencing a competitive disadvantage because Porcine Products is now working more effectively than their own company, while others want to achieve the publicized ROI. Discussions at meetings of executives focus on what Porcine Products did, and why it worked.

Phase 6: First Replication

Executives at these other companies decide they want to reproduce Porcine Products' success. They talk with Hamm and others in his company about what actually happened. Each company assigns a senior manager to oversee the implementation of this improvement method across the company. These senior managers carefully read the literature about the Balle-Argentee method. Implementers look at their own companies' problems and seek to implement the spirit as well as the letter of the Balle-Argentee approach. When they make recommendations, they listen to suggestions for improvement in their own work. They keep close watch on expenditures and benefits of this approach so they will be able to report their ROI.

Envision two or three other little pigs constructing houses of wood.

Phase 7: Confirmation

Some of these companies publish studies of their own success using the Balle-Argentee method. The studies cite the specific improvements each company decided to make. Because of the attention paid to investments and returns since the adoption of Balle-Argentee, this set of companies can cite precise ROI figures. These companies earn accolades from shareholders for fiscally effective management. General business books about this method are published, including "Balle-Argentee in Warp Time," and "Balle-Argentee for Small Companies."

Envision a collection of books with houses of wood on the cover.

Phase 8: Proceduralization

Many more companies decide that this method is valuable. The ROI convinces some, and the fact that their competitors are reaping the returns from Balle-Argentee convinces the rest. With this second set of companies, the executives and senior managers add Balle-Argentee as one more method in their current process initiatives. Because they cannot adequately focus on all of the methods, they delegate implementation of the Balle-Argentee effort to middle managers. These middle managers are given ROI goals that match the published numbers. Other middle managers are given comparable ROI goals for simultaneously implementing different process improvement efforts. Executives believe that the competition engendered by these multiple initiatives will increase the fervor in implementing all the initiatives.

What the implementing managers know about the Balle-Argentee method is limited to the published results. Time constraints prevent these managers from contacting Porcine Products or from reading any but the shortest summary articles. To reduce the risk of missing their ROI goals, the managers seek ways to improve the cost-effectiveness of Balle-Argentee as they implement it. Implementation that took Porcine Products several years must now be completed within a fiscal cycle. The implementing managers require their people to use some of the specific improvements described in the literature exactly as they are described, without costly discussion or modification. Other specific improvements are ruled out because they would be costly to implement. The stated rationale is that these improvements will not work here because company circumstances differ.

Instead, the implementing managers restate general strategies in the Balle-Argentee literature as broad goals, which they then apply in a sparing manner. In almost all cases, the imperative for executives and managers to listen to workers and to change their own work accordingly is the first general strategy to be deleted. It is restated as "improve communication" and then becomes implemented as "improve communication downward." These implementing managers have risen in their companies because they respect the wisdom of their superiors. They do not ask for literal implementation of the strategy "executives must listen more" because to do so might cause their superiors to feel threatened or embarrassed.

Finally, these implementing managers seek to cast their own actions in the best light. They believe involving executives would signal weakness. Much of the implementation of Balle-Argentee shifts to managing the news. Executives and senior managers remain uninformed and are uninvolved in the improvement effort except in expecting to reap benefits.

Envision an entire village of houses made of straw.

Phase 9: Diminished Returns

Because of cost cutting, time compression of the improvement effort, lack of executive involvement, dilution of emphasis due to other improvement initiatives, and a tendency to apply the steps as a checklist rather than to seek and fix the company's basic business problems, these more recent Balle-Argentee improvement efforts do not reap the published ROI numbers. This happens broadly across the industry.

Envision the village of straw houses starting to crumble, propped up by sticks and invaded by mice.

Phase 10: Blaming the Method

Workers in these companies feel bombarded by misunderstood management initiatives, and Balle-Argentee is applied intrusively, asking for additional work merely to claim compliance. Workers know that the checklists they are being asked to follow and fill out are not solving any real problems. Some attend conferences and complain that the Balle-Argentee method makes companies do stupid things. They cite their experiences, complaining that the Balle-Argentee sponsor does not want to hear about any real problems that are not quickly solved. They complain that checklists and complex documentation substitute for investigation and solutions, and that the intense focus on ROI severely decreases the investment money for making complex improvements rather than applying Band-Aids.

Coupled with the evidence from Phase 9 that current implementations of Balle-Argentee do not provide good ROI, these very real complaints cause the business press to be ruthless in denigrating Balle-Argentee as a flawed approach. Articles appear advocating slaying the Balle-Argentee monster.

Envision the big bad wolf blowing down the village of straw houses.

Phase 11: Starting Fresh

Mr. Boar, a true improvement-minded executive at Animalia, Inc., decides that no one knows his company like he does. He decides to throw out Balle-Argentee along with all the other silver bullets and takes a close look at Animalia's problems and how to fix them.

Envision a different little pig wiping a bunch of books and drawings off his desk. One of the books has a picture of a house of bricks on the cover.

Morals of the Story: How to Use Silver Bullets

A great deal has been written about the appropriate way to do process improvement. You must focus on the business goal of improvement, not just on the method used to get there (e.g., CMMI) or on intermediate indicators (e.g., Level 3) [2, 3]. Executives must devote the appropriate resources and stay involved [4]. Managers must learn what is real and react appropriately [5, 6]. The process group must analyze the real causes of problems [7], plan changes, get them approved, and make sure the organization follows through [8]. And everyone must make sure the changes actually improve the product development processes, not interfere with them.

Specific guidance on how to avoid making mistakes with a silver bullet follows:

Special Credit

Special thanks goes to Cathy Kreyche for her contributions to this article.

References
  1. Sheard, Sarah A., and Christopher L. Miller. The Shangri-La of ROI. Proc. of the 10th Annual Symposium of the International Council on Systems Engineering, Minneapolis, MN, July 2000.
  2. Sheard, Sarah A. What Is Senior Management Commitment? Proc. of the 11th Annual Symposium of the International Council on Systems Engineering, 2001. Republished in Proc. of the Software Technology Conference, Salt Lake City, UT, 2002.
  3. Kaplan, Robert S. "Implementing the Balanced Scorecard at FMC Corporation: An Interview with Larry D. Brady." Harvard Business Review Sept.-Oct. 1993.
  4. Gardner, Robert A. "10 Process Improvement Lessons for Leaders." Quality Progress Nov. 2002.
  5. Gilb, Tom. "The 10 Most Powerful Principles for Quality in Software and Software Organizations." CrossTalk Nov. 2002.
  6. Baxter, Peter. "Focusing Measurements on Managers' Informational Needs." CrossTalk July 2002: 22-25.
  7. Card, David. Learning From Our Mistakes With Defect Causal Analysis. Proc. of the International Conference on Software Process Improvement, Adelphi, MD, Nov. 2002.
  8. Bowers, Pam. "Raytheon Stands Firm on Benefits of Process Improvement." CrossTalk March 2001: 9-12.
  9. Argyris, Chris. Overcoming Organizational Defenses: Facilitating Organizational Learning. Prentice Hall, 1990.
  10. Argyris, Chris. Flawed Advice and the Management Trap. New York: Oxford UP, 2000.

About the Author

Sarah A. Sheard is technical lead for Systems Engineering at the Software Productivity Consortium. She has more than 20 years of experience in systems engineering and process improvement. Sheard has published more than 20 articles and papers on systems engineering and process improvement in CrossTalk, the proceedings of software technology conferences, International Council on Systems Engineering (INCOSE) symposiums, and the INCOSE journal. Sheard received INCOSE's "Founder's Award" in 2002. As the consortium's technical lead for the Capability Maturity Model Integration (CMMI®), Sheard was the lead author of the Software Productivity Consortium's course on "Transitioning to the CMMI."

Software Productivity Consortium
Phone: (703) 742-7106
Fax: (703) 742-7350
E-mail: [email protected]

Linux: Linus On Specifications

Posted by Jeremy on Friday, September 30, 2005 - 20:14

In a conversation that began as a request to include the SAS Transport Layer in the mainline Linux kernel, there was an interesting thread regarding specifications. Linux creator Linus Torvalds began the discussion saying, "a 'spec' is close to useless. I have _never_ seen a spec that was both big enough to be useful _and_ accurate. And I have seen _lots_ of total crap work that was based on specs. It's _the_ single worst way to write software, because it by definition means that the software was written to match theory, not reality."

Linus went on to list two reasons to avoid specifications when writing software. First, "they're dangerously wrong. Reality is different, and anybody who thinks specs matter over reality should get out of kernel programming NOW." Second, "specs have an inevitable tendency to try to introduce abstraction levels and wording and documentation policies that make sense for a written spec. Trying to implement actual code off the spec leads to the code looking and working like CRAP." As a "classic example" he pointed to the OSI model: "we still talk about the seven layers model, because it's a convenient model for _discussion_, but that has absolutely zero to do with any real-life software engineering. In other words, it's a way to _talk_ about things, not to implement them. And that's important. Specs are a basis for _talking_about_ things. But they are _not_ a basis for implementing software."


From: Linus Torvalds [email blocked]

To: Arjan van de Ven [email blocked]

Subject: Re: I request inclusion of SAS Transport Layer and AIC-94xx into the kernel

Date:	Thu, 29 Sep 2005 12:57:05 -0700 (PDT)

On Thu, 29 Sep 2005, Arjan van de Ven wrote:
>
> a spec describes how the hw works... how we do the sw piece is up to
> us ;)

How we do the SW is indeed up to us, but I want to step in on your first point.

Again.

A "spec" is close to useless. I have _never_ seen a spec that was both big enough to be useful _and_ accurate.

And I have seen _lots_ of total crap work that was based on specs. It's _the_ single worst way to write software, because it by definition means that the software was written to match theory, not reality.

So there's two MAJOR reasons to avoid specs:

- they're dangerously wrong. Reality is different, and anybody who thinks specs matter over reality should get out of kernel programming NOW. When reality and specs clash, the spec has zero meaning. Zilch. Nada. None.

It's like real science: if you have a theory that doesn't match experiments, it doesn't matter _how_ much you like that theory. It's wrong. You can use it as an approximation, but you MUST keep in mind that it's an approximation.

- specs have an inevitable tendency to try to introduce abstraction levels and wording and documentation policies that make sense for a written spec. Trying to implement actual code off the spec leads to the code looking and working like CRAP.

The classic example of this is the OSI network model protocols. Classic spec-design, which had absolutely _zero_ relevance for the real world. We still talk about the seven layers model, because it's a convenient model for _discussion_, but that has absolutely zero to do with any real-life software engineering. In other words, it's a way to _talk_ about things, not to implement them.

And that's important. Specs are a basis for _talking_about_ things. But they are _not_ a basis for implementing software.

So please don't bother talking about specs. Real standards grow up _despite_ specs, not thanks to them.

Linus

From: Luben Tuikov [email blocked]

Subject: Re: I request inclusion of SAS Transport Layer and AIC-94xx into the kernel
Date: Thu, 29 Sep 2005 16:20:13 -0700 (PDT)

--- Linus Torvalds [email blocked] wrote:
>
> A "spec" is close to useless. I have _never_ seen a spec that was both big
> enough to be useful _and_ accurate.
>
> And I have seen _lots_ of total crap work that was based on specs. It's
> _the_ single worst way to write software, because it by definition means
> that the software was written to match theory, not reality.

A spec defines how a protocol works and behaves. All SCSI specs are currently very layered and defined by FSMs.

This is _the reason_ I can plug in an Adaptec SAS host adapter to Vitesse Expander which has a Seagate SAS disk attached to phy X... And guess what? They interoperate and communicate with each other.

Why? Because at each layer (physical/link/phy/etc) each one of them follows the FSMs defined in the, guess where, SAS spec.

If you take a SAS/SATA/FC/etc course, they _show you_ a link trace and then _show_ you how all of it is defined by the FSM specs, and make you follow the FSMs.

> So there's two MAJOR reasons to avoid specs:

Ok, then I accept that you and James Bottomley and Christoph are _right_, and I'm wrong.

I see we differ in ideology.

> It's like real science: if you have a theory that doesn't match
> experiments, it doesn't matter _how_ much you like that theory. It's
> wrong. You can use it as an approximation, but you MUST keep in mind
> that it's an approximation.

But this is _the_ definition of a theory. No one is arguing that a theory is not an approximation to observed behaviour.

What you have here is interoperability. Only possible because different vendors follow the same spec(s).

> - specs have an inevitable tendency to try to introduce abstraction
> levels and wording and documentation policies that make sense for a
> written spec. Trying to implement actual code off the spec leads to the
> code looking and working like CRAP.

Ok, I give up: I'm wrong and you and James B are right.

> The classic example of this is the OSI network model protocols. Classic

Yes, it is a _classic_ example and OSI is _very_ old.

_But_ the tendency of representing things in a _layered_, object oriented design has persisted.

> spec-design, which had absolutely _zero_ relevance for the real world.
> We still talk about the seven layers model, because it's a convenient
> model for _discussion_, but that has absolutely zero to do with any
> real-life software engineering. In other words, it's a way to _talk_
> about things, not to implement them.

Ok.

> And that's important. Specs are a basis for _talking_about_ things. But
> they are _not_ a basis for implementing software.

Ok. Let's forget about maintenance and adding _new_ functionality.

> So please don't bother talking about specs. Real standards grow up
> _despite_ specs, not thanks to them.

Yes, you're right. Linus is always right. Now to things more pertinent, which I'm sure people are interested in:

Jeff has been appointed to the role of integrating the SAS code with the Linux SCSI _model_, with James Bottomley's "transport attributes". So you can expect more patches from him.

Regards,
Luben

P.S. I have to get this 8139too.c network card here working.
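For readers who have not seen protocol FSMs of the kind Luben describes, here is a toy Java sketch of a layer defined as a finite state machine. The states and events are invented for illustration and are far simpler than the real SAS link-layer FSMs; the point is that two independent implementations of the same transition table will interoperate.

public class LinkFsm {

    enum State { IDLE, NEGOTIATING, CONNECTED }
    enum Event { PHY_READY, SPEED_AGREED, LINK_LOST }

    private State state = State.IDLE;

    // The layer's entire behaviour is this transition table.
    void handle(Event e) {
        switch (state) {
            case IDLE        -> { if (e == Event.PHY_READY)    state = State.NEGOTIATING; }
            case NEGOTIATING -> { if (e == Event.SPEED_AGREED) state = State.CONNECTED; }
            case CONNECTED   -> { if (e == Event.LINK_LOST)    state = State.IDLE; }
        }
        System.out.println(e + " -> " + state);
    }

    public static void main(String[] args) {
        LinkFsm fsm = new LinkFsm();
        fsm.handle(Event.PHY_READY);    // IDLE -> NEGOTIATING
        fsm.handle(Event.SPEED_AGREED); // NEGOTIATING -> CONNECTED
        fsm.handle(Event.LINK_LOST);    // CONNECTED -> IDLE
    }
}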

From: Linus Torvalds [email blocked]
Subject: Re: I request inclusion of SAS Transport Layer and AIC-94xx into the kernel
Date: Thu, 29 Sep 2005 17:35:27 -0700 (PDT)

On Thu, 29 Sep 2005, Luben Tuikov wrote:
>
> > It's like real science: if you have a theory that doesn't match
> > experiments, it doesn't matter _how_ much you like that theory. It's
> > wrong. You can use it as an approximation, but you MUST keep in mind
> > that it's an approximation.
>
> But this is _the_ definition of a theory. No one is arguing that
> a theory is not an approximation to observed behaviour.

No.

A scientific theory is an approximation of observed behaviour WITH NO KNOWN HOLES.

Once there are known holes in the theory, it's not a scientific theory. At best it's an approximation, but quite possibly it's just plain wrong.

And that's my point. Specs are not only almost invariably badly written, they also never actually match reality.

At which point at _best_ it's just an approximation. At worst, it's much worse. At worst, it causes people to ignore reality, and then it becomes religion.

And that's way _way_ too common. People who ignore reality are sadly not at all unusual.

"But the spec says ..." is pretty much always a sign of somebody who has just blocked out the fact that some device doesn't.

So don't talk about specs.

Talk about working code that is _readable_ and _works_.

There's an absolutely mindbogglingly huge difference between the two.

Linus
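Linus's point that "some device doesn't" is not just rhetoric: real drivers routinely carry, alongside the spec-conformant path, an explicit list of devices known to violate the spec (the Linux kernel's PCI quirk tables are one example, written in C). The Java sketch below is a hypothetical illustration of the pattern; the device ids and workaround notes are invented.

import java.util.Map;

public class QuirkDemo {

    // Devices known to violate the spec, mapped to their workarounds.
    // All ids and notes here are invented for illustration.
    static final Map<String, String> QUIRKS = Map.of(
        "vendorA:1234", "reports wrong queue depth; clamp to 1",
        "vendorB:abcd", "ignores reset command; power cycle instead"
    );

    static void initDevice(String id) {
        String quirk = QUIRKS.get(id);
        if (quirk != null) {
            // Shipping hardware wins over the paper spec.
            System.out.println(id + ": applying workaround (" + quirk + ")");
        } else {
            System.out.println(id + ": following the spec-defined init sequence");
        }
    }

    public static void main(String[] args) {
        initDevice("vendorA:1234"); // known spec violator
        initDevice("vendorC:0000"); // well-behaved device
    }
}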


From: Theodore Ts'o [email blocked]

Subject: Re: I request inclusion of SAS Transport Layer and AIC-94xx into the kernel

Date:	Fri, 30 Sep 2005 01:31:49 -0400

On Thu, Sep 29, 2005 at 04:20:13PM -0700, Luben Tuikov wrote:
>
> A spec defines how a protocol works and behaves. All SCSI specs
> are currently very layered and defined by FSMs.

A spec defines how a protocol works and behaves --- *if* it is well-specified and unambiguous, and *if* vendors actually implement the spec correctly. (And sometimes vendors have major economic incentives to cheat and either intentionally violate the specification, or simply not bother to test whether or not they implemented their hardware correctly.)

Computing history has been littered with specifications that were incompetently written and/or incompetently implemented --- from the disaster known as ACPI, to FDDI (early FDDI networking gear was interoperable only if you bought all of your gear from one vendor, natch), to consumer-grade disks which lied about when data had been safely written to iron oxide to garner better Winbench scores, and many, many, many others.

This is one of the reasons why the IETF doesn't bless a networking standard until there are multiple independent, interoperable implementations --- and even _then_ there will be edge cases that won't be caught until much, much later.

In those cases, if you implement something which is religiously adherent to the specification, and it doesn't interoperate with the real world (i.e., everybody else, or some large part of the industry) --- do you claim that you are right because you are following the specification, and everyone else in the world is wrong? Or do you adapt to reality? People who are too in love with specifications so that they are not willing to be flexible will generally not be able to achieve complete interoperability. This is the reason for the IETF Maxim --- be conservative in what you send, liberal in what you will accept. And it's why interoperability testing and reference implementations are critical.

But it's also important to remember that when there is a reference implementation, or pseudo-code in the specification, it's not the only way you can implement things. Very often, as Linus has pointed out, there are reasons why the pseudo-code in the specification is wholly inappropriate for a particular implementation. But that's OK; the implementer can use a different implementation, as long as the result is interoperable.

Regards,

- Ted
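The IETF maxim Ted quotes is easy to sketch in code. The hypothetical Java fragment below parses a protocol header name liberally (tolerating case and whitespace variations) but always emits one strict canonical form; all names are invented for illustration.

public class PostelDemo {

    // Liberal in what you accept: tolerate spacing and case variations.
    static String parseHeaderName(String raw) {
        return raw.trim().toLowerCase();
    }

    // Conservative in what you send: one canonical output format only.
    static String emitHeader(String name, String value) {
        return name.toLowerCase() + ": " + value.trim();
    }

    public static void main(String[] args) {
        // "  CONTENT-TYPE " and "content-type" parse to the same name...
        System.out.println(parseHeaderName("  CONTENT-TYPE "));
        System.out.println(parseHeaderName("content-type"));
        // ...but output is always emitted in the one canonical form.
        System.out.println(emitHeader("Content-Type", " text/html "));
    }
}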


Call me crazy, but a scientif

Comment posted by Anonymous (not verified) on Friday, September 30, 2005 - 22:57

Call me crazy, but a scientific theory is only what is purported to be the best explanation of a set of naturally observed occurrences and behaviors. It isn't required to be without holes. Perhaps Linus is referring to Laws, which are not certified by anybody, but assumed to carry the weight of universal (or at least near universal) truth, such as the Law of Gravity or the Law of Conservation of Mass and Energy.

Certainly Linus is allowed to say that a particular spec is useless, but I'm not throwing out my processor spec book anytime soon. The point is that useful specs match reality. It would have been nice if this post had included anything regarding whether the protocol in question actually matches reality or not.

Linus' hole in a theory seems

Comment posted by Ano Nymous on Saturday, October 1, 2005 - 04:32

Linus' hole in a theory seems to be a counter case for the theory, NOT an area which isn't explained by the theory.

Keeping that meaning in mind, if a theory has such a hole, it means that there exists something which proves that the theory isn't correct, in which case the theory isn't valid.

If my theory is that all cats are green, and someone encounters a black one, it means my theory is bogus. Perhaps most cats are green, in which case the theory becomes only an approximation (but in this case even that's not true ;-).

There's no real difference between Laws and theories, except that laws were perhaps deduced more from experimental data than thought out as theoretical constructs. "This seems to be always true, but we don't know why".

Very close. Theories usually

Comment posted by Pingu (not verified) on Saturday, October 1, 2005 - 14:14

Very close. Theories usually attempt to explain why or how things happen, while laws are always just generalizations for what does happen. Laws often take the form of mathematical relationships which express what always happens, without attempting to explain why.
Many of our best scientific theories "have holes"

Comment posted by Eivind Eklund (not verified) on Monday, October 3, 2005 - 04:42

To use a really specific example, Einstein's General Theory of Relativity and Quantum Electrodynamics - two of our best present theories - are in conflict. Stating that they "aren't valid" is a cop-out against how science works. We just end up using them, and trying to find some way we can actually get at the edge cases that may give us a clue to how to unify them.

Eivind.

leave science out, please :)

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 06:49

at the same time, a scientific theory is a mere intellectual exploration space. to compare it to specs is misleading. please leave science out of specs. science doesn't care about specs. if at all, it makes some sense in engineering problems. it does make interoperability easy but only in a perfect world. so i'm leaning towards mr. torvalds :) on the "usefulness" of specs.

i guess, by processor spec book you mean what's already been "implemented" in the hardware. if the x86 spec was useful, you should have been able to swap a processor made by Intel with one made by AMD. of course you can at least compile your programs to a generic x86 spec, but it is only an interoperability "compromise".

my 2 cents :)

you can run i386 binaries on

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 07:30

you can run i386 binaries on all Intel, AMD, VIA, etc. processors thanks to the x86 spec.
Not an argument for Specs at all.

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 07:51

As far as I'm aware x86 was never an open, agreed-upon standard; people knew what commands were available and what they did, and they just reverse engineered the bugger. Then Cyrix had a bunch of extensions, then Intel had a bunch of extensions, then they stuck the MMX code on to it, then it became incorporated. Then AMD extended it again, etc. etc., all the way up to AMD64. One big bloody mess.

Proving Linus's last point that standards grow up despite specs, not because of them.

Documentation

Comment posted by Anonymous (not verified) on Friday, September 30, 2005 - 23:38

Documenting the code isn't useful either. It never reflects reality. So, just quit commenting code and writing user/architecture documentation.
wrong target

Comment posted by anonymous (not verified) on Saturday, October 1, 2005 - 00:56

Linus is an engineer/tech. He dislikes theory work because it often gives nothing in practice.

However, specs are not always theory, and they may be useful, as may docs. He may be smart enough (or know the Linux code well enough) not to need any doc/spec, but that's not the case for many other people. Some specs are good, and sometimes necessary.

He cited the OSI model, well, but I can assure you I won't get on an airplane that was built with Linus' practices... There are specs in some places that are good, and that are read and followed. Even in non-dangerous domains such as Web standards, specs are necessary, and those who don't follow these specs make crap software/browsers!

Moreover, in the Linux development model, which is fuzzy and distributed, not directed, defining the software up front may be futile. However, in a commercial environment, defining the spec is really writing a contract, which protects both the customer and the vendor. Specs there define what the software can and must do, and ensure it will do it. Linus obviously lacks experience in industrial and critical projects. He may be right for kernel development (though I still doubt he should be so absolute even on that subject), but he's wrong in many other domains.

IOW, Linus makes here a generalization which is at least as wrong as the examples he cited. As we say: "all generalizations are false".

If he finds a bad spec, either he throws it away or he fixes it. It's the same for technical docs. But he shouldn't say that all specs are useless and bad. That's wrong.

specs

Comment posted by Anonymous (not verified) on Saturday, October 1, 2005 - 01:05

specs snatched out of thin air just don't work. specs drawn up and refined by experimentation are the ones that are useful.

i never write specs when i start a software project. i start with small test programs that do what is required.

right, but limited to small things/implementation details

Comment posted by Anonymous (not verified) on Saturday, October 1, 2005 - 10:20

Knowledge of the technology, gained through small tests, is a good input for a spec. However, you cannot blindly build a whole complex piece of software at a limited cost.

If you do not have at least a plan, you won't do what you should. The software may be useful and correct, but it will certainly be inadequate and won't do what was asked for.

And building specs on top of experience alone will certainly prevent you from controlling your development process, and therefore forbid access to many big and serious projects. I'm sorry, but when things get really complex, hackers have no place. Structure, rigor and organization do, and are far more important than good or bad code. This is where specs appear (among other things, of course).

And do not cite an OS kernel like Linux as an example of complexity: it is big (wide?) because of the amount of hardware it supports, but the core OS is still simple (vertically). Moreover, Linux was rewritten several times, which shows perfectly that a) nobody stated initially what the needs were, and b) the statement was wrong several times. In an industrial process, this is simply unacceptable (you simply won't get the contract).

linux isn't targeting a singl

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 04:34

linux isn't targeting a single customer nor a single process. it is flexible and always evolving, unlike an industrial process where you design something once and you're done with it. there's no comparison between the two; they are completely different domains.

just because linus doesn't have any specs to show you, doesn't mean linus doesn't have a plan. don't confuse the two.

even specs don't help produce reliable code in the industrial process domains. what counts there is good peer review and testing to the limits of your sanity. choosing a good language can help too.

that's wishful thinking, afai

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 05:20

that's wishful thinking, afaiac. in my experience, commercial software is rewritten just about as often as non-commercial software, for one thing. for another, specs often are complete crap. not so much because they contain bad ideas, but more often because they are overly complex or worded in an ambiguous manner.

take soap, for example. overengineered, because when there was no real implementation lots of things probably sounded like good ideas. now half the implementations implement half the spec (all different halves), and have to concentrate on a common subset in order to interoperate. you know what? that common subset is almost as simple as xml-rpc, which was defined because soap was deemed too complex.

that being said, xml-rpc is rather limited because - among other things - it allows only for 32 bit integers. it's not a great spec either, but it works because it's easily implemented and simple enough that interoperability problems are rare.

i don't think linus is right in dismissing specs. some specs, such as rfc 2045/2046, are complex and still pretty unambiguous. you can work with them, therefore they are useful specs.

wrong target

Comment posted by Joris (not verified) on Monday, October 3, 2005 - 03:27

I have the distinct, yet humble, impression Linus is not talking about NOT complying with specs as a matter of establishing standardisation, but about NOT complying with specs when actually writing code which is to be used by a piece of software which is dependent on a/the spec.

In the end, the spec is the 'communications layer/protocol'; the code is both the carrier and the signal. It does not matter from which 'material' the 'carrier' is built or in which 'modulation' the 'signal' is sent; as long as it complies with the demands of the spec at some point along the 'transmission', it will work without any problem.

Maybe he could have referred to this as a "blackbox" programming technique ;-)

Though I'm by no means a programmer of any kind and have no fundamental insights into the Linux kernel, I do understand that what matters in the end is that the final product supports the Linux system calls; what's in that 'black box' does not matter all that much as long as it's working up to expectations.

note: even hardware support is being dealt with in the same fashion

I'm not sure any spec can handle this, as even specs are subject to interpretation on both sides, and even in the implementation a spec can be interpreted contrary to expectation (the reality vs. spec argument). In short, anyone should be able to admit a spec does not guarantee any predictable results.

note: This looks like a good step-up for an ITIL discussion as well

To my mind, Linux is also pro

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 06:01

To my mind, Linus is also being provocative, and he's right. There are many, many examples of standards and specifications that are simply not followed, because they conflict with the real world or are too complicated. I have also experienced this very often.

This does not mean that specs are bad; quite the contrary, they are to my mind very important, but they must always stay in contact with the real world - and this is often not the case.

For example, have a look at ISDN: it was specified over and over, and in the end every country implemented its own ISDN standard. Have a look at SOAP: 200 pages, extremely complicated, few implementations, and those not fully compatible. There are many, many other examples.

Nevertheless all these specifications are important, similar to all those computer languages that died out. People are "taking the best and leaving the rest" - and there is for sure a *lot* of "good" in all these specifications.

I think Linus is way out of l

Comment posted by Ferry (not verified) on Saturday, October 1, 2005 - 03:40

I think Linus is way out of line here. As an engineer he should know that a spec is as much a work in progress as the code that implements the desired behaviour for a device!

The tendency in technology is to create devices with greater and greater complexity. A way to keep a grip on this complexity is to create specs, have interoperability tests, plug fests, etc.

A released spec is just a point in time at which the spec covers the desired behaviour for a device _at that time_. Of course there can be mistakes in a spec, as there can be mistakes in an implementation of a device, just like there can be mistakes in the code!

stop being so childish, Linus, and accept the fact that in the real world specs are in fact used. Specs are not bibles or fixed laws but approximations of reality (as you pointed out yourself), just as the code is an approximation of reality!!!

as a side note: maybe you should talk to a few CPU engineers; they will tell you that specs are the only way to go when implementing a CPU (or any other piece of complex hardware). Re-spinning the chip in the fab is quite expensive and you will want to avoid this, hence the specs: they define the behaviour, and the implementation can be tested against it....

in reality, spec == best guess by a bunch of theorists.

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 03:58

specifications in programming/software are not like specifications in engineering.

Programming/Software contains a lot of art and especially a lot of NEW stuff that hopefully works... certainly very little of the tried and true stuff you find in engineering. And very little testing like in engineering, IF any, BEFORE actually going live.

Programming/Software != engineering

Software doesn't even come close to the rigors of engineering by any means.

You are EXACTLY right!

Comment posted by Sproggit (not verified) on Monday, October 3, 2005 - 07:24

You are EXACTLY right!
and THAT is EXACTLY why today's software is generally crap.

Once programming software == the rigors of engineering I will finally be out of a testing job.

It can't happen soon enough, and I for one would have preferred Linus to be helping in the drive for this to happen, as opposed to helping perpetuate the situation of software engineering having no more discipline than any other transient hobby activity.

'spec' == standard

Comment posted by Ano Nymous on Saturday, October 1, 2005 - 04:45

There are different kinds of specifications. The kind discussed in this thread is standards, not specific hardware or software specs.

There is no x86 standard, only the Intel spec of their implementation. Then others decided they wanted to be compatible with that, and mostly followed it. This is totally different from, e.g., HTML specs, where a browser that follows the specs blindly is near useless in practice, thanks to reality.

Linus Torvalds said:
"Specs are a basis for _talking_about_ things. But
they are _not_ a basis for implementing software."

And this is his main point I think.

I'd rather base my software o

Comment posted by William Poetra Yoga Hadisoeseno (not verified) on Saturday, October 1, 2005 - 11:04

I'd rather base my software on specs, then implement real-world "unexpected" behaviour on top of it as special cases.
Specs are abstractions which

Comment posted by Ano Nymous on Saturday, October 1, 2005 - 12:52

Specs are abstractions which describe certain behaviour. Writing code while using the spec as a sort of design document for the actual implementation gives different results than writing code that fits best in its environment (e.g. hardware and the rest of the kernel) while still being compliant with the spec.

Imagine a spec which describes something with 5 layers and 10 classes. If you write code the same way, and someone points out that you only need two layers and 3 classes to do the same with much less code, in a nicer and more efficient way, wouldn't you agree? Or do you sputter and say "but according to the spec..."? Or in other words, what do you value more: the spec or reality?

If that doesn't break the spe

Comment posted by William Poetra Yoga Hadisoeseno (not verified) on Saturday, October 1, 2005 - 19:00

If that doesn't break the spec (as in interoperability) it might be considered. But other people who read our software will have to figure out what's going on in the code then.

And if a spec doesn't translate into good code, then it's not a good spec.

You can make a layer and say

Comment posted by Ano Nymous on Sunday, October 2, 2005 - 03:52

You can make a layer and say in a comment somewhere that it implements layers a, b, c of spec X. The code doesn't have to be totally alienated from the spec; there's always a middle ground.

Some complex things may be best described very verbosely so that people can understand them. When that understanding is there, the description can be trimmed down to the essential. So I don't think the point about code translation is always true, but in practice it probably is more often than not.

Imagine that almost no spec is good (technically, what they describe is good, but they're poorly made), and you'll understand Linus' point of view of not translating specs into bad code. If you're lucky enough to work with a good spec you can get away with it, but the code shouldn't be judged by how well it resembles the spec; it should be judged on its own.
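A minimal Java sketch of that middle ground, assuming a hypothetical "FOO transport" spec: the implementation collapses several spec layers into one class for its own convenience, while a comment records which layers and sections it covers so a reader can map the code back to the spec.

/**
 * Implements layers 2-4 of the (hypothetical) FOO transport spec,
 * sections 5.1-5.3, collapsed into a single class: in this code base,
 * framing and sequencing share one piece of state, so separate
 * per-layer classes would only add indirection.
 */
public class FooTransport {
    private long nextSeq = 0;

    // Spec section 5.2 (hypothetical): every frame carries a
    // monotonically increasing sequence number.
    public String frame(String payload) {
        return (nextSeq++) + "|" + payload;
    }

    public static void main(String[] args) {
        FooTransport t = new FooTransport();
        System.out.println(t.frame("hello")); // 0|hello
        System.out.println(t.frame("world")); // 1|world
    }
}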

You don't have a clue what yo

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 05:32

You don't have a clue what you're doing then. A spec isn't supposed to translate into code at all. It is supposed to describe, in as few words as possible, with no implementation details whatsoever, with no subjective opinions whatsoever, what minimum requirements the project must meet when it is complete to be considered functional and fit for purpose.

A spec is not a recipe, nor a set of instructions. If I give you a spec to build a bridge, it's not going to read

"must use x beams, attach beam a to beam b like so..."

it's going to read

"must support x tonnes, must stand in windspeeds of x, must last x years"

And anything you can do to meet it is fine, but you better fuckin meet it.

Man... if I was as public a figure as Linus is, I'd be embarrassed as hell to have said something so stupid.

Sure there are specs like that

Comment posted by Mr_Z on Monday, October 3, 2005 - 05:47

...but there also are specs that ARE much lower level and do specify details. As a spec writer myself, I know that specs have their limitations. I owned a set of architecture specs for a DSP core, and I know those specs have holes.

My specs were much more detailed than "It must support X tonnes," to be sure. Nonetheless, another set of people wrote microarchitecture specs that sat under my architecture specs and went even further--"Use such-and-such register file for this, use such-and-such memory for that, etc."--complete with pipeline diagrams, etc.

And then there was the actual implementation. In the end, the implementation did deviate from both the architecture and microarchitecture specs. But the device did ultimately work.

So in a sense Linus is right. The spec gives you guideposts and bounds. It does not give you the implementation. And, it won't, by itself, give you interoperability. There's no substitute for testing against other implementations.

It shouldn't give you any imp

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 06:37

It shouldn't give you any implementation. When a spec deviates from the what and starts filling in the how, it stops being a spec and becomes a half-assed plan. And a half-assed plan is worse than none at all.

I don't deal with low level hardware, so my experiences are different from yours, but I've done a lot of requirements gathering and spec writing myself, and I consider the hardest part of the job to be refining out the implementation crap that every armchair quarterback wants to throw into the mix.

When you get right down to it, you're writing a spec because you don't have the time, skill or inclination to do something yourself, so you're telling someone you trust to make competent decisions about how to go about doing what you need. When you go from telling them what you need to telling them how to do their jobs, you have to question why you're writing a spec instead of doing it yourself.

I'm sorry, but you are wrong.

Comment posted by Sproggit (not verified) on Monday, October 3, 2005 - 07:35

I'm sorry, but you are wrong.
You are describing a requirement.
A requirement indicates what something must do.
A spec details how it must go about doing it.

Requirement:
Move 20 tons of rubble to the dump.

Spec:
1) One heavy-duty truck with shovel attachment
2) An operator
3) A map to & permission to use the dump
4) Fuel for the machinery.

If there is anything missing from the spec, it can (and should) be modified in accordance with real-world experience.
It's better (and in the software world, anywhere between 20x and 120x cheaper) to change the spec before implementing, but you MUST change the spec to suit the real world at all times; otherwise you get pissed-off hackers who start to ignore specs, or just consider them crap, because they've only ever encountered weak or outdated specs (no offence, Linus).

I'm very afraid!

Comment posted by Anonymous (not verified) on Saturday, October 1, 2005 - 09:26

So this is the reason why Linux just happens to run?
eehhh

Comment posted by Anonymous (not verified) on Saturday, October 1, 2005 - 10:08

It often doesn't work and has many bugs...
But specs are not the way to reduce the bug count either. They are the way to tell what the software must do, how, and how it can interoperate.
Of course, specs are always wrong, and must be fixed every time the code evolves. Exactly the same as the code: it is always wrong too, and must always be fixed...
ELF is a pretty good spec, not too broken

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 04:36

ELF is a pretty good spec, not too broken. It's stood the test of time without any big problems.

OpenGL is a pretty good spec too.
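ELF's durability is easy to demonstrate: the header layout the spec published decades ago still describes today's binaries, so a reader written from the documented offsets alone just works. A minimal sketch that stops after the e_machine field:

    import struct

    def read_elf_header(path):
        with open(path, "rb") as f:
            ident = f.read(16)                      # e_ident, per the ELF spec
            if ident[:4] != b"\x7fELF":
                raise ValueError("not an ELF file")
            ei_class = ident[4]                     # 1 = 32-bit, 2 = 64-bit
            endian = "<" if ident[5] == 1 else ">"  # EI_DATA: 1 = little-endian
            e_type, e_machine = struct.unpack(endian + "HH", f.read(4))
            return {"class": ei_class, "type": e_type, "machine": e_machine}

    # e.g. on an x86-64 Linux box: read_elf_header("/bin/ls")
    # -> {'class': 2, 'type': 3, 'machine': 62}  (ET_DYN, EM_X86_64)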

Observation

Comment posted by Anonymous (not verified) on Saturday, October 1, 2005 - 10:24

It's funny how so many arguments are really about the definition of words. Stuff is put forward by fitting it to one collection of word meanings, and interpreted by fitting it to a (usually slightly different) collection of word meanings. Things can get pretty heated.

Guess you could say the "language spec" (a dictionary) is the source of a lot of pain. Without it there would be no interoperability though... *grin*

People thinking

Comment posted by MighMoS on Saturday, October 1, 2005 - 11:56

This comment has little to do with the thread itself, but I'm glad that it's not filled with "Linus groupies" but rather with people who still think for themselves, regardless of who said what. That's all.
I'm sure he wriggles in glee

Comment posted by Ano Nymous on Saturday, October 1, 2005 - 12:54

I'm sure he wriggles in glee at the mass confusion he has caused. ;-)
Joke or forgery?

Comment posted by Ike Ahnoklast (not verified) on Sunday, October 2, 2005 - 06:56

Somebody (possibly Linus) is jerking our chains by describing ideas that are mostly nonsense, with the sole intent of stirring up the ants' nest. If it's a joke, it's in rather bad taste. If it's a forgery, I'd not be surprised to learn that the forger was a Microsoft stooge trying to make FOSS proponents look irresponsible. In either case it's advisable to keep a grain or two of salt handy...

--Ike

See above. There's a difference

Comment posted by Ano Nymous on Sunday, October 2, 2005 - 09:20

See above.

There's a difference between being compliant with a spec and designing your code like the spec. Linus said the second is bad, but as far as I can see, nowhere did he say the first is bad too.

Very true

Comment posted by miro (not verified) on Sunday, October 2, 2005 - 13:21

The only useful specs are those where the written word is combined with actual code and/or data structures, and that is nothing more than a well-documented source file.
comment + source = best

Comment posted by Anonymous (not verified) on Sunday, October 2, 2005 - 16:06

Because _some_ specs (ACPI, BIOS stuff, nForce chipsets, Intel's UHCI USB, just to name a few) tend to be inaccurate or incomplete: the hardware vendor found a better way to do something without telling the spec writer, the big company messed it up, or the spec was simply written by a complete idiot!

(USB is fine now, and ACPI _works_ too, to some degree, but there are still some broken laptops around!)
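When the silicon disagrees with its own spec, drivers end up carrying exactly the workarounds the commenter describes, typically as a quirk table keyed by device ID. A sketch of the pattern (the IDs and flags below are invented, not real hardware):

    # Illustrative only: vendor/device IDs and quirk flags are made up.
    QUIRKS = {
        (0x1234, 0x0001): {"no_msi": True},       # ignores the spec'd MSI feature
        (0x5678, 0x00ff): {"reset_twice": True},  # needs a double reset to come up
    }

    def apply_quirks(vendor_id, device_id, config):
        config.update(QUIRKS.get((vendor_id, device_id), {}))
        return config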

And I think the reason for that

Comment posted by hobgoblin (not verified) on Monday, October 3, 2005 - 04:01

And I think the reason for that is that most of this stuff was implemented on Windows first. That allows the vendor that makes the hardware to put special-case changes into its drivers while still letting Windows boot on the hardware, barely.

If a spec were followed, rather than constantly extended by corporations A, B, and C, there would be less work: you would implement the baseline spec once instead of first implementing it and then trying to nail down variations A, B, and C.

Yes, you can't follow the specs like an orthodoxy, but they are a place to start. Without specs we would not have things like the net or the web. The problem is when there is no open talk about the changes done or extensions made, so vendor A extends or changes something and vendor B has to reverse-engineer the changes.

So as things currently stand, in our money-driven computer world, specs are virtually useless: implement the spec, it fails, and you're not sure whether it's the implementation that's bad or the hardware that's insane.

As long as not all processes are open, one is worse off following those that are; one is better off dropping the spec and doing hardware probes.

Still, I don't think the OSI model was the best example in this discussion, since you can't implement the different layers perfectly; basically it's too fine-grained. But it's a better starting point than a spec that's like gravel, where vendors take totally different views of the different "rocks" and then throw in some extra rocks in a later driver revision to try to help interoperability.

Specs, like anything else, can be a double-edged sword if followed as if they were a religion. But they are a nice basis to start from and to make future observations and refinements from.

Einstein extends Newton, because Newton works as long as you're inside the gravity well :P Come from the outside with enough speed and things start to act up ;)
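The point about the OSI model can be shown in code: what must match is the bytes on the wire, not the internal layering. This flat function emits a valid UDP datagram per RFC 768 without any "layer 4 object" in sight (checksum 0 is the legal "no checksum" value over IPv4):

    import struct

    def udp_datagram(src_port: int, dst_port: int, payload: bytes) -> bytes:
        length = 8 + len(payload)  # fixed 8-byte header plus data
        header = struct.pack("!HHHH", src_port, dst_port, length, 0)
        return header + payload

    wire = udp_datagram(5353, 5353, b"hello")
    assert len(wire) == 13 and wire[4:6] == (13).to_bytes(2, "big")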

Ok Linus...

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 03:13

...want to live near a nuclear power station that hasn't had its software design fully specified? No, I didn't think so.
Linus is a highly technical coder, but he really doesn't see much beyond his own specialised capabilities.
Linus is splitting one definition into two to make a valid point

Comment posted by Stephen White (not verified) on Monday, October 3, 2005 - 03:49

Linus is putting a common misperception of "specification" on stage and explaining that it is really two separate things that are often lumped together as one. He isn't invalidating the concept of specification, only pointing out that people confuse ways of talking about things with ways of implementing things.

He is right that the reality of the way things are takes priority over the way we talk about the problem. We still need to talk about the problem, which is why specifications exist: so that devices from different manufacturers are able to communicate.

So the main point, as he said: "it's a way to _talk_ about things, not to implement them."

The common misconception is that it's possible to do systems development by progressively amending the specifications, rather than making the specifications be driven by technical requirements.

No sacred cows are being shot here, and you don't need to defend any favourite positions or dream up disaster scenarios that disprove the point Linus has made. He is simply asking that you realise there are two distinct areas and one follows the other.

How about this interpretation

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 06:23

How about this interpretation?

Prior to software development someone comes up with "requirements". This is often where people who usually have little clue about reality analyze and document what THEY think is reality. The programmer then comes along and hopefully can speak to the people that need the software to find out what reality really is.

During the development of said software you might have some modules that "require" that you follow "specs". For instance, how you interoperate with another, already developed and working module/standard. This is key, specifications should document things that exist and already work.

From my 20+ years of experience, requirements typically do not reflect what the end software product specifications are. This is largely due to lack of cooperation/apathy during requirements gathering and having the wrong people do the analysis.

Theories and such....

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 04:43

I could be wrong, but last I knew, a theory is a set of scientific facts and observations supporting an idea...

A spec defines how certain things (protocols, hardware, etc.) are supposed to work in theory. Not all theories are correct, which leads to some crappy specs.

I'll join the side that thinks

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 06:12

I'll join the side that thinks Linus is kinda weird in this statement.

Maybe what he really dislikes is that some people whine about stuff not following spec. Of course you have to be flexible and say: "ok, this clearly doesn't follow spec, but then I'll just go and make it work regardless". That's ok, but _only_ if you at the same time specify exactly what you did instead, and why.

Specs aren't something written in blood. But they make it possible for people to work on different parts of a system and have those parts work together without having to take stress pills and visit therapists. If they are followed.

The problem seems to be that Linus just isn't a follower. He (and other programmers who think like him) is exactly the reason why specs don't match reality. They try to code something, read the spec for it, and think: "Hey, this sucks, I have a much cooler way of doing this, I'll just do that instead." Well. Ahem... and now it's a surprise that the spec doesn't match reality?

What you should do is take it up with the designers of that spec, revise it, and work out what the spec really should say. More often than not the designers of a spec have a reason for designing it that way, believe it or not. It's called a team; it might be a new concept for some of you, but it can work quite well.
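The "specify exactly what you did instead, and why" discipline is cheap to practice: put the deviation note next to the deviating code. A hypothetical sketch (the device object, register address, and spec name are all invented):

    def read_status(dev):
        # DEVIATION from the (hypothetical) FooBus spec rev 1.2, section 4.3:
        # the spec says one read of STATUS suffices, but sampled parts latch
        # stale data on the first read. Read twice and keep the second value;
        # the discrepancy has been reported back to the spec owners.
        dev.read_register(0x10)          # discarded: may be stale
        return dev.read_register(0x10)   # the value we trust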

In deference to his obvious technical skills

Comment posted by hmurray (not verified) on Monday, October 3, 2005 - 06:44

In deference to his obvious technical skills, I'll have to assume Linus was either:

a) Being intentionally provocative, or
b) Having a bad day

No non-trivial piece of software is written without some sort of spec. It might be a detailed spec, or a high-level one. It might be written, or verbal, or even a mental one ("this is what I want to do")...but ALL software is spec-based. All of it.

Even when you don't write anything down, what you end up coding is often very different from what you intended to write. This is true for all forms of specification. It in no way invalidates the concept of a spec; it just means that a spec is supposed to provide guidelines for development, NOT describe a finished product in intimate detail.

His comment that specs are the "single worst way to write software" is baffling, because I guarantee he has a plan for writing any piece of software he creates. I truly doubt he just types from a stream of consciousness and ten days later goes "oh look, I wrote a kernel!".

I can't believe he honestly believes what he's saying, so I'll have to assume he's pulling our collective chain.

Software Specification vs. Protocol Standard

Comment posted by Anonymous (not verified) on Monday, October 3, 2005 - 07:02

Reading the thread I come away with the notion that there is some confusion among the respondents regarding Software Specifications and Protocol Standards.

A Software Specification is a document or collection of documents used to specify the functional and design details for the development of a piece of software. I have to agree with Linus that software specifications in all their forms rarely, if ever, survive the reality of iterative experimentation and software implementation. Reality indeed bends specifications, and after that specs only serve to document what came before, racing to catch up with reality.

A Protocol Standard is a document which describes interfaces. In isolation a standard serves much like a software specification, and is just as malleable. Nevertheless the protocol standard is very important to a wider audience because it records an agreement between various parties (manufacturers, software designers, network and system administrators) about how a particular interface is supposed to function. Unless you are creating a new standard in a particular area, it would be wise to provide an implementation that follows the standard, to maintain interoperability. Was the original standard developed on paper first, then implemented? Probably not. Would you agree that following the standard in this instance is important? Heck yes.

Given that, I think both parties were 'right' in their approach, but were mixing terms between software development and standards compliance. Perhaps the word 'specification' should relate exclusively to software development, and 'standards' to protocol and API specifications, so we can avoid the confusion between emergent technology and established interface standards.

[Jul 24, 2011] What Apple Has That Google Doesn't - An Auteur By RANDALL STROSS

July 23, 2011 | NYTimes.com

AT Apple, one is the magic number.

One person is the Decider for final design choices. Not focus groups. Not data crunchers. Not committee consensus-builders. The decisions reflect the sensibility of just one person: Steven P. Jobs, the C.E.O.

By contrast, Google has followed the conventional approach, with lots of people playing a role. That group prefers to rely on experimental data, not designers, to guide its decisions.

The contest is not even close. The company that has a single arbiter of taste has been producing superior products, showing that you don't need multiple teams and dozens or hundreds or thousands of voices.

Two years ago, the technology blogger John Gruber presented a talk, "The Auteur Theory of Design," at the Macworld Expo. Mr. Gruber suggested how filmmaking could be a helpful model in guiding creative collaboration in other realms, like software.

The auteur, a film director who both has a distinctive vision for a work and exercises creative control, works with many other creative people. "What the director is doing, nonstop, from the beginning of signing on until the movie is done, is making decisions," Mr. Gruber said. "And just simply making decisions, one after another, can be a form of art."

"The quality of any collaborative creative endeavor tends to approach the level of taste of whoever is in charge," Mr. Gruber pointed out.

Two years after he outlined his theory, it is still a touchstone in design circles for discussing Apple and its rivals.

Garry Tan, designer in residence and a venture partner at Y Combinator, an investor in start-ups, says: "Steve Jobs is not always right; MobileMe would be an example. But we do know that all major design decisions have to pass his muster. That is what an auteur does."

Mr. Jobs has acquired a reputation as a great designer, Mr. Tan says, not because he personally makes the designs but because "he's got the eye." He has also hired classically trained designers like Jonathan Ive. "Design excellence also attracts design talent," Mr. Tan explains.

Google has what it calls a "creative lab," a group that had originally worked on advertising to promote its brand. More recently, the lab has been asked to supply a design vision to the engineering and user-experience groups that work on all of Google's products. Chris L. Wiggins, the lab's creative director, whose own background is in advertising, describes design as a collaborative process among groups "with really fruitful back-and-forth."

"There's only one Steve Jobs, and he's a genius," says Mr. Wiggins. "But it's important to distinguish that we're discussing the design of Web applications, not hardware or desktop software. And for that we take a different approach to design than Apple," he says. Google, he says, utilizes the Web to pull feedback from users and make constant improvements.

Mr. Wiggins's argument that Apple's apples should not be compared to Google's oranges does not explain, however, why Apple's smartphone software gets much higher marks than Google's.

GOOGLE'S ability to attract and retain design talent has not been helped by the departure of designers who felt their expertise was not fully appreciated. "Google is an engineering company, and as a researcher or designer, it's very difficult to have your voice heard at a strategic level," writes Paul Adams on his blog, "Think Outside In." Mr. Adams was a senior user-experience researcher at Google until last year; he is now at Facebook.

Douglas Bowman is another example. He was hired as Google's first visual designer in 2006, when the company was already seven years old. "Seven years is a long time to run a company without a classically trained designer," he wrote in his blog Stopdesign in 2009. He complained that there was no one at or near the helm of Google who "thoroughly understands the principles and elements of design." "I had a recent debate over whether a border should be 3, 4 or 5 pixels wide," Mr. Bowman wrote, adding, "I can't operate in an environment like that." His post was titled, "Goodbye, Google."

Mr. Bowman's departure spurred other designers with experience at either Google or Apple to comment on differences between the two companies. Mr. Gruber, at his Daring Fireball blog, concisely summarized one account under the headline "Apple Is a Design Company With Engineers; Google Is an Engineering Company With Designers."

In May, Google, ever the engineering company, showed an unwillingness to notice design expertise when it tried to recruit Pablo Villalba Villar, the chief executive of Teambox, an online project management company. Mr. Villalba later wrote that he had no intention of leaving Teambox and cooperated to experience Google's hiring process for himself. He tried to call attention to his main expertise in user interaction and product design. But he said that what the recruiter wanted to know was his mastery of 14 programming languages.

Mr. Villalba was dismayed that Google did not appear to have changed since Mr. Bowman left. "Design can't be done by committee," he said.

Recently, as Larry Page, the company co-founder, began his tenure as C.E.O., Google rolled out Google+ and a new look for the Google home page, Gmail and its calendar. More redesigns have been promised. But they will be produced, as before, within a very crowded and noisy editing booth. Google does not have a true auteur who unilaterally decides on the final cut.

Randall Stross is an author based in Silicon Valley and a professor of business at San Jose State University. E-mail: [email protected].

Recommended Links

Software Architecture Courses

Conferences

WICSA 2001 The Working IEEE-IFIP Conference on Software Architecture


Bibliographies

Ric Holt's Annotated Bibliography on Software Architecture
Rick Kazman's Software Architecture Bibliography
Kamran Sartipi's Software Architecture Bibliography
SEI Bibliography on Software Architecture


