There are many people who use UNIX or Linux but who IMHO do not understand UNIX. UNIX is not just an operating system, it is a way of doing things, and the shell plays a key role by providing the glue that makes it work. The UNIX methodology relies heavily on reuse of a set of tools rather than on building monolithic applications. Even perl programmers often miss the point, writing the heart and soul of the application as perl script without making use of the UNIX toolkit.
David Korn (bold italic is mine -- BNN)
Expensive short chronology; most material is available online, July 9, 2004
This is an expensive short book with mainly trivial chronological information, 90% of which is freely available on the Internet. As a history of the first 25 years of Unix it is both incomplete and superficial. Peter Salus is reasonably good as a collector of facts (although for a person with his level of access to the Unix pioneers he looks extremely lazy, and he essentially missed an opportunity to write a real history, settling for a glossy, superficial chronology instead). He probably just sensed the market need for such a book and decided to fill the niche.
In my humble opinion Salus lacks real understanding of the technical and social dynamics of Unix development, an understanding that can be found, say, in the chapter "Twenty Years of Berkeley Unix: From AT&T-Owned to Freely Redistributable" in the book "Open Sources: Voices from the Open Source Revolution" (O'Reilly, 1999), which is available online. An extended version of this chapter will be published in the second edition of "The Design and Implementation of the 4.4BSD Operating System (Unix and Open Systems Series)", which I highly recommend (I read a preprint at Usenix).
In any case, Kirk McKusick is a real insider, not a former Usenix bureaucrat like Salus. Salus was definitely close to the center of the events, but it is unclear to what extent he understood the events he was close to.
Unix history is a very interesting example of how the interests of the military (DARPA) shape modern technical projects (not always to the detriment of technical quality; quite the opposite in the case of Unix) and how DARPA's investment in Unix created a completely unforeseen side effect: BSD Unix, which later became the first free/open Unix ever (the Net2 tape and then the FreeBSD/OpenBSD/NetBSD distributions). Another interesting side of Unix history is that the AT&T brass never understood what a jewel they had in their hands.
Salus's Usenix position prevented him from touching the many bitter conflicts that litter the first 25 years of Unix, including personal conflicts. The reader should be advised that the book represents the "official" version of history, and that Salus is, in essence, a court historian, a person whose main task is to put a gloss on the events he is writing about. As far as I understand, Salus never strays from this very safe position.
Actually Unix created a new style of computing, a new way of thinking about how to attack a problem with a computer. This style was essentially the first successful component model in programming. As Frederick P. Brooks Jr. (another computer pioneer, who early recognized the importance of pipes) noted, the creators of Unix "...attacked the accidental difficulties that result from using individual programs together, by providing integrated libraries, unified file formats, and pipes and filters." As a non-programmer, Salus is in no position to touch this important side of Unix. The book contains standard, trivial praise for pipes, without understanding of the full scope and limitations of this component programming model...
I can also attest that as a historian Peter Salus can be extremely boring: this July I was unfortunate enough to sit through one of his talks, when he essentially stole from Kirk McKusick more than an hour (out of the two scheduled for the BSD history section at this year's Usenix Technical Conference) with paternalistic trivia insulting the intelligence of the Usenix audience, instead of the short 10-minute introduction he was expected to give; only after he eventually managed to finish did Kirk McKusick give a really interesting, but necessarily short (he had only 50 minutes left :-) presentation about the history of the BSD project, which was what the session was about.
Nov 05, 2018 | opensource.com
Revisiting the Unix philosophy in 2018

The old strategy of building small, focused applications is new again in the modern microservices environment.

In 1984, Rob Pike and Brian W. Kernighan published "Program Design in the Unix Environment" in the AT&T Bell Laboratories Technical Journal, in which they argued the Unix philosophy, using the example of BSD's cat -v implementation. In a nutshell that philosophy is: Build small, focused programs -- in whatever language -- that do only one thing but do this thing well, communicate via stdin/stdout, and are connected through pipes.
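That philosophy can be sketched with a generic word-frequency pipeline (my illustration, not from the article): each stage is a small single-purpose program, and pipes provide the glue.

```shell
# Count word frequencies; every stage does exactly one job,
# and stdin/stdout plus pipes connect them.
printf 'to be or not to be\n' |
  tr ' ' '\n' |   # split: one word per line
  sort |          # group identical words together
  uniq -c |       # count each group
  sort -rn        # most frequent first
```

Any stage can be swapped out (say, tr for a smarter tokenizer) without touching the others; the interface is just lines of text.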
Yeah, I thought so. That's pretty much the definition of microservices offered by James Lewis and Martin Fowler:
In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.
While one *nix program or one microservice may be very limited or not even very interesting on its own, it's the combination of such independently working units that reveals their true benefit and, therefore, their power.

*nix vs. microservices
The following table compares programs (such as cat or lsof) in a *nix environment against programs in a microservices environment.
Unit of execution: *nix -- program using stdin/stdout; microservices -- service with HTTP or gRPC API
Data flow: *nix -- pipes; microservices -- ?
Configuration & parameterization: *nix -- command-line arguments, environment variables, config files; microservices -- JSON/YAML docs
Discovery: *nix -- package manager, man, make; microservices -- DNS, environment variables, OpenAPI
Let's explore each line in slightly greater detail.

Unit of execution
The unit of execution in *nix (such as Linux) is an executable file (binary or interpreted script) that, ideally, reads input from stdin and writes output to stdout. A microservices setup deals with a service that exposes one or more communication interfaces, such as HTTP or gRPC APIs. In both cases, you'll find stateless examples (essentially a purely functional behavior) and stateful examples, where, in addition to the input, some internal (persisted) state decides what happens.
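A stateless *nix unit of execution can be sketched as a one-line filter script (upcase.sh is my hypothetical name, not from the article):

```shell
# A stateless unit of execution: reads stdin, writes stdout,
# and keeps no state between invocations.
cat > upcase.sh <<'EOF'
#!/bin/sh
tr '[:lower:]' '[:upper:]'
EOF
chmod +x upcase.sh

echo 'hello' | ./upcase.sh   # prints HELLO
```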
Data flow

Traditionally, *nix programs could communicate via pipes. In other words, thanks to Doug McIlroy, you don't need to create temporary files to pass around, and you can process virtually endless streams of data between processes. To my knowledge, there is nothing comparable to a pipe standardized in microservices, besides my little Apache Kafka-based experiment from 2017.
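The difference pipes make can be sketched by doing the same job twice (a generic example of mine): once with temporary files, once as a stream.

```shell
# Without pipes: every intermediate result lands in a temp file.
seq 1 100 > step1.tmp
grep 5 step1.tmp > step2.tmp
wc -l < step2.tmp            # 19 numbers in 1..100 contain a 5
rm -f step1.tmp step2.tmp

# With pipes: the same data flows as a stream -- no files,
# and all three stages run concurrently.
seq 1 100 | grep 5 | wc -l   # 19
```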
Configuration and parameterization

How do you configure a program or service -- either on a permanent or a by-call basis? Well, with *nix programs you essentially have three options: command-line arguments, environment variables, or full-blown config files. In microservices, you typically deal with YAML (or even worse, JSON) documents, defining the layout and configuration of a single microservice as well as dependencies and communication, storage, and runtime settings. Examples include Kubernetes resource definitions, Nomad job specifications, or Docker Compose files. These may or may not be parameterized; that is, either you have some templating language, such as Helm in Kubernetes, or you find yourself doing an awful lot of sed -i commands.
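The three *nix mechanisms, and a typical precedence order among them, can be sketched with a hypothetical greet.sh (my example; the name and the precedence choice are assumptions, not from the article):

```shell
# A script that layers all three configuration mechanisms;
# later ones override earlier ones.
cat > greet.sh <<'EOF'
#!/bin/sh
NAME=world                                # built-in default
[ -f ./greet.conf ] && . ./greet.conf     # 1. config file may set NAME
[ -n "$GREET_NAME" ] && NAME=$GREET_NAME  # 2. environment variable overrides it
[ -n "$1" ] && NAME=$1                    # 3. command-line argument wins
echo "hello, $NAME"
EOF
chmod +x greet.sh

./greet.sh                     # hello, world
GREET_NAME=env ./greet.sh      # hello, env
GREET_NAME=env ./greet.sh cli  # hello, cli
```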
Discovery

How do you know what programs or services are available and how they are supposed to be used? Well, in *nix, you typically have a package manager as well as good old man; between them, they should be able to answer all the questions you might have. In a microservices setup, there's a bit more automation in finding a service. In addition to bespoke approaches like Airbnb's SmartStack or Netflix's Eureka, there are usually environment variable-based or DNS-based approaches that allow you to discover services dynamically. Equally important, OpenAPI provides a de facto standard for HTTP API documentation and design, and gRPC does the same for more tightly coupled high-performance cases. Last but not least, take developer experience (DX) into account, starting with writing good Makefiles and ending with writing your docs with (or in?) style.
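On a *nix box, discovery is interactive (a generic sketch of mine; tool availability varies by system):

```shell
# Where does a tool live, and what does it do?
command -v sort                        # path of the executable, if installed
man -k sort 2>/dev/null | head -n 3    # apropos search of the man pages
```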
Pros and cons

Both *nix and microservices offer a number of challenges and opportunities.
Composability

It's hard to design something that has a clear, sharp focus and can also play well with others. It's even harder to get it right across different versions and to introduce respective error case handling capabilities. In microservices, this could mean retry logic and timeouts -- maybe it's a better option to outsource these features into a service mesh? It's hard, but if you get it right, its reusability can be enormous.
Observability

In a monolith (in 2018) or a big program that tries to do it all (in 1984), it's rather straightforward to find the culprit when things go south. But in a

yes | tr \\n x | head -c 450m | grep n

or a request path in a microservices setup that involves, say, 20 services, how do you even start to figure out which one is behaving badly? Luckily we have standards, notably OpenCensus and OpenTracing. Observability still might be the biggest single blocker if you are looking to move to microservices.
Global state

While it may not be such a big issue for *nix programs, in microservices, global state remains something of a discussion. Namely, how to make sure the local (persistent) state is managed effectively and how to make the global state consistent with as little effort as possible.
Wrapping up

In the end, the question remains: Are you using the right tool for a given task? That is, in the same way a specialized *nix program implementing a range of functions might be the better choice for certain use cases or phases, it might be that a monolith is the best option for your organization or workload. Regardless, I hope this article helps you see the many strong parallels between the Unix philosophy and microservices -- maybe we can learn something from the former to benefit the latter.
Michael Hausenblas is a Developer Advocate for Kubernetes and OpenShift at Red Hat, where he helps appops to build and operate apps. His background is in large-scale data processing and container orchestration, and he's experienced in advocacy and standardization at W3C and IETF. Before Red Hat, Michael worked at Mesosphere, at MapR, and in two research institutions in Ireland and Austria. He contributes to open source software including Kubernetes, speaks at conferences and user groups, and shares good practices...
Gain a better understanding of how to achieve successful code reuse.
"Those who don't understand UNIX are doomed to reinvent it, poorly." -- Henry Spencer
Component architects can learn a lot of important design principles from studying Unix; study of Unix principles can help us realize the gains in development speed and reliability that component architecture promises.
Unix provides a beautiful example of an architecture that achieves many of the goals of component architecture, including portability and code reuse. Some of the key benefits include:
- Shell scripts are broadly portable among Unix systems. Programs in C or Perl are generally fairly portable, too.
- No other system has ever had a component used as broadly as
- Code reuse is actually practical in a Unix environment.
- Ad hoc and scripting capabilities available for Unix support rapid prototyping and testing.
- Time is spent focused on solving problems, not filling out checklists of API features.
Lesson 1: Simple interfaces
If a simple program can't be expressed simply, something is wrong with your environment. Unix's shell interface allows many simple programs to be written in a single line -- and in a line simple enough that people will actually use this facility. The possibility of ad hoc programming, accessible even to people who don't think of themselves as programmers, is one of the things that makes Unix powerful.
Without a genuinely simple interface, you can't have ad hoc programming for naive users. A simple interface implies that the data structure at least looks simple, and that brings us to our next lesson.
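An ad hoc question a user might compose on the spot, without thinking of it as "programming" (a generic sketch of mine):

```shell
# "What are the five biggest things under /etc?" -- answered by
# gluing three generic tools together, disposable after use.
du -a /etc 2>/dev/null | sort -rn | head -n 5
```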
Lesson 2: Human-readable data
One of the things that makes it practical to quickly develop and test applications based on the Unix tool architecture is that, in general, the intermediate steps of a process can be reviewed quickly and easily by a human being. You don't need to write a special program to display your data in a readable form -- so there's one less place a bug can hide when you can't get the output you expect. At every step in the process, you can check your work.
If you need to do something to your data that is obvious to you, but you can't figure out how to explain it to a computer, you can do it by hand, on the "real" data. You don't need to use a special translation program to get data in and out of a human-readable format.
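Because every intermediate stage is plain text, you can checkpoint a pipeline anywhere with tee and read the data directly (my illustration):

```shell
# Checkpoint the intermediate data while the pipeline runs on.
ck=$(mktemp)
seq 1 5 | sed 's/^/item /' | tee "$ck" | wc -l   # prints 5
head -n 2 "$ck"   # item 1, item 2 -- checkable by eye
rm -f "$ck"
```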
Lesson 3: Lots of simple components
The lesson of sort, and of gzip, is that a single monolithic component that solves your whole problem is less useful to you than two or three components that can be combined one way to solve your problem today and can be combined another way to solve a different problem tomorrow.
By keeping each logically distinct task physically separate, Unix encourages users to experiment with new combinations. While you can't change the compression encoding for WinZip or StuffIt without huge changeover costs, it is practical to add a new compression algorithm to a Unix system.
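For instance (a sketch of mine), the same two components recombine for different jobs, and either can be swapped for a new algorithm independently:

```shell
# Today: a sorted, de-duplicated, compressed word list.
printf 'pear\napple\npear\n' | sort -u | gzip > words.gz

# Tomorrow: gzip reused on its own to read back any compressed stream.
gzip -dc words.gz   # apple, pear
rm -f words.gz
```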
Lesson 4: Simple API
Just as a simple front-end interface makes it possible for inexperienced users to write simple shell programs, the simple programming interface of a Unix tool makes it practical for more experienced programmers to write tools for every application they use. The easier it is to write a new component for your library, the more components you will find lying around, waiting to be used again and again.
A Unix tool needs to know about an API that involves a network only if the tool's function is to interact with the network. Otherwise, this complexity is not merely hidden, but entirely removed from the program. If you want to use the network, you use one of the tools for talking to networks.
Lesson 5: Encourage reuse
Unix encourages reuse by making it easy to reuse code, and by providing a huge variety of important pieces to reuse. An elegant and beautiful component architecture that lacks a selection of key building blocks will not encourage the reuse of code. A system where it's less work to teach a component to sort than to have it interface with an existing module for sorting is a system where no one will bother reusing the existing modules.
Lesson 6: Allow evolution
If I don't like one of the standard Unix tools, I am not obliged to use it: I can add a new one. Over time, some of these new utilities have become popular enough to be adopted into one or more of the various standards Unix vendors base their systems on.
Allowing users to evolve their own utilities and tools leads to better components for everyone else to use.
Unix doesn't provide a standard mechanism for verifying that a new feature you've come to like is available on a slightly older system, and it doesn't provide a way to handle the feature's absence. In practice, this is rarely a problem; the behavior you get if you don't specify a new option will be the same as always, and the new utility is probably portable to the old system.
Once again, keeping the logically separate parts of a utility genuinely separate is necessary for this to take place. If your editor can do its own calculation, how can you upgrade the calculator in it? In Unix, the calculator is a separate tool. So if I switch to a smarter or more flexible one, the editor doesn't even notice (let alone protest).
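The point about the separate calculator can be seen with expr, one of the standard calculator-like tools (my example, not from the article):

```shell
# Arithmetic is a tool of its own; anything that can run a
# command or write a pipe can use it -- including an editor.
expr 6 \* 7                 # 42
echo '3 + 4' | xargs expr   # 7 -- the expression arrived on a pipe
```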
Lesson 7: Everything is a tool
Programmers tend to see a strong division between components used to build an application, and applications, which are in some way final products. Unix doesn't do this. Editors are tools, just like any other tool. The command-line interface is a tool. Everything can be used as a tool.
The MH mail system is a particularly stunning example of this. It provides a set of tools, each of which handles some portion of the process of reading, writing, sorting, and responding to e-mail. Since each tool is separate, and each tool is designed to allow the use of other tools with it, users can create their own customized mail processing environments -- all very different, and all well-integrated with the basic capabilities of the OS.
Lesson 8: Eschew safety
"UNIX was not designed to stop you from doing stupid things, because that would also stop you from doing clever things." -- Doug Gwyn
Unix utilities and applications should not try to anticipate all possible needs, and they should never try to keep you from doing something that might be dangerous. Your program may be so useful that it outlives you. Even if it doesn't, it should most certainly outlive your current project whenever possible: Don't try to guess at how people -- even you yourself -- might want to use it in the future.
This feeds back into the simplicity argument. Trying to prevent undesirable consequences can be a disaster. (For instance, imagine how much less convenient it would be if programs that used $EDITOR tried to enforce the assumption that it will be an interactive program.)
Lesson 9: 10% of the code does 90% of the work
When writing a component, don't try to anticipate every need; figure out what your core functionality is, and stay focused. This isn't to say you should avoid generality when it's easy to add, but if it's really hard to handle a special case, don't handle it: Other programs that handle that case and nothing else will show up, and they won't clutter your design. The resulting component will be smaller, faster, easier to use, and it will be done on time.
Ideological purity has never been part of the Unix model. There are exceptions to every rule. Don't try to anticipate them all. Don't try to include them all. Pick a problem domain, point to everything outside it, and say "here there be dragons." Recognize that there are special cases, but don't try to accommodate them all, or fit them all into your framework. A flexible and robust framework can accommodate special cases; a rickety framework designed to handle everything without special cases will be destroyed when a new special case comes along.
Lesson 10: Structured data is more important than structured code
Try to focus on simpler input and output formats. Put data in streams. And please, please, make the data human readable. XML is a great step forward in this regard: It allows you to look at the output of a failing utility and read it directly.
Unix pipes marshal structured data between applications in a transparent way, which allows a great deal of flexibility for other applications and tools to operate on this data in ways that the original designers would not even have anticipated. In practical terms, this ability to repurpose the data is much more likely to save development effort than even the best structured code methodologies. XML is one tremendous example of a technology that has learned this lesson. One of the reasons for XML's meteoric rise is that XML processing brings about flexibility and extensibility similar to that afforded by Unix pipes. At the same time, XML processing is designed to fit into more "mainstream" environments.
XML pretty much brings us full circle past component technology, and back to the idea of synthesizing data from independent processing modes. Certainly, XML has adopted many of the lessons of Unix pipes, especially with the XSLT language.
Taking these lessons with you
Of course, not everything is Unix. (More's the pity!) Not everything is even like Unix. Can these lessons be applied to other environments? Other tasks? Other APIs?
OF COURSE they can!
Even if it's a little more expensive in your environment of choice, take the time to separate out parts of a program. Make sure you're decomposing them as far as you reasonably can. Remember why uniq doesn't sort its input; make sure that you keep separate tasks separate. This principle will serve you well whether you are developing or using components, or are involved in any other sort of software development process.
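The uniq example is worth seeing concretely (my illustration): uniq collapses only adjacent duplicates, so sorting stays a separate task, composed in only when needed.

```shell
# uniq alone: nothing is adjacent, so nothing is collapsed.
printf 'b\na\nb\n' | uniq          # b, a, b

# sort | uniq: two single-purpose tools composed into "distinct".
printf 'b\na\nb\n' | sort | uniq   # a, b
```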
Keep an eye out for fundamental operations; build components that provide these services and do nothing else. Then use them, regularly.
Finally, above all else: Have fun. Unix has survived, more than anything else, because it is a delightful environment to work in. Always take pride in your work. Don't cut corners. (Narrowing your problem domain isn't cutting corners; failing to handle a domain you never narrowed is cutting corners.)
pretty.pl, are available at http://www.plethora.net/~seebs/comp/. unsort.c is also available for download so that you may examine the source.
Copyright © 1996-2018 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author's free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials' copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
Last modified: November 13, 2018