Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and bastardization of classic Unix

Computer History Bulletin, 2000-2009




Old News ;-)

[Aug 1, 2008] Author Wallace Says Gates Surrounds Himself With Smart People

July 31 (Bloomberg) -- Author James Wallace, a reporter at the Seattle Post-Intelligencer, talks with Bloomberg's Tom Keene about Microsoft Corp.'s strategy and competition with Google Inc., Boeing Co.'s performance, and the shortage of engineers in the U.S. James Wallace and Jim Erickson co-wrote the best seller "Hard Drive: Bill Gates & the Making of the Microsoft Empire," published in 1992.


[Jul 23, 2008] Randy Pausch, whose 'last lecture' became a sensation, dies

chicagotribune.com

Randy Pausch, a terminally ill professor whose earnest farewell lecture at Carnegie Mellon University became an Internet phenomenon and bestselling book that turned him into a symbol for living and dying well, died Friday. He was 47.

Pausch, a computer science professor and virtual-reality pioneer, died at his home in Chesapeake, Va., of complications from pancreatic cancer, the Pittsburgh university announced.

When Pausch agreed to give the talk, he was participating in a long-standing academic tradition that calls on professors to share their wisdom in a theoretical "last lecture." A month before the speech, the 46-year-old Pausch was told he had only months to live, a prognosis that heightened the poignancy of his address.

Originally delivered last September to about 400 students and colleagues, his message about how to make the most of life has been viewed by millions on the Internet. Pausch gave an abbreviated version of it on "Oprah" and expanded it into a best-selling book, "The Last Lecture," released in April.


Yet Pausch insisted that both the spoken and written words were designed for an audience of three: his children, then 5, 2 and 1.

"I was trying to put myself in a bottle that would one day wash up on the beach for my children," Pausch wrote in his book.

Unwilling to take time from his family to pen the book, Pausch hired a coauthor, Jeffrey Zaslow, a Wall Street Journal writer who had covered the lecture. During more than 50 bicycle rides crucial to his health, Pausch spoke to Zaslow on a cellphone headset.

"The speech made him famous all over the world," Zaslow told The Times. "It was almost a shared secret, a peek into him telling his colleagues and students to go on and do great things. It touched so many people because it was authentic."

Thousands of strangers e-mailed Pausch to say they found his upbeat lecture, laced with humor, to be inspiring and life-changing. They drank up the sentiments of a seemingly vibrant terminally ill man, a showman with Jerry Seinfeld-esque jokes and an earnest Jimmy Stewart delivery.

If I don't seem as depressed or morose as I should be, sorry to disappoint you.

He used that line after projecting CT scans, complete with helpful arrows pointing to the tumors on his liver as he addressed "the elephant in the room" that made every word carry more weight.

Some people believe that those who are dying may be especially insightful because they must make every moment count. Some are drawn to valedictories like the one Pausch gave because they offer a spiritual way to grapple with mortality that isn't based in religion.

Sandra Yarlott, director of spiritual care at UCLA Medical Center, said researchers, including Elisabeth Kübler-Ross, have observed that work done by dying patients "resonates with people in that timeless place deep within."

As Pausch essentially said goodbye at Carnegie Mellon, he touched on just about everything but religion as he raucously relived how he achieved most of his childhood dreams. His ambitions included experiencing the weightlessness of zero gravity; writing an article in the World Book Encyclopedia ("You can tell the nerds early on," he joked); wanting to be both a Disney Imagineer and Captain Kirk from "Star Trek"; and playing professional football.

Onstage, Pausch was a frenetic verbal billboard, delivering as many one-liners as he did phrases to live by.

Experience is what you get when you didn't get what you wanted.

When his virtual-reality students at Carnegie Mellon won a flight in a NASA training plane that briefly simulates weightlessness, Pausch was told faculty members were not allowed to fly. Finding a loophole, he applied to cover it as his team's hometown Web journalist -- and got his 25 seconds of floating.

Since 1997, Pausch had been a professor of computer science, human-computer interaction and design at Carnegie Mellon. With a drama professor, he founded the university's Entertainment Technology Center, which teams students from the arts with those in technology to develop projects.

The popular professor had an "enormous and lasting impact" on Carnegie Mellon, said Jared L. Cohon, the university's president, in a statement. He pointed out that Pausch's "love of teaching, his sense of fun and his brilliance" came together in his innovative software program, Alice, which uses animated characters and storytelling to make it easier to learn to write computer code.

During the lecture, Pausch joked that he had become just enough of an expert to fulfill one childhood ambition. World Book sought him out to write its virtual-reality entry.

[Apr 24, 2008] Eliza's world by Nicholas Carr

April 11, 2008 | roughtype.com

Reposted from the new edition of Edge:

What is the compelling urgency of the machine that it can so intrude itself into the very stuff out of which man builds his world? - Joseph Weizenbaum

Somehow I managed to miss, until just a few days ago, the news that Joseph Weizenbaum had died. He died of cancer on March 5, in his native Germany, at the age of 85. Coincidentally, I was in Germany that same day, giving a talk at the CeBIT technology show, and - strange but true - one of the books I had taken along on the trip was Weizenbaum's Computer Power and Human Reason.

Born in 1923, Weizenbaum left Germany with his family in 1936, to escape the Nazis, and came to America. After earning a degree in mathematics and working on programming some of the earliest mainframes, he spent most of his career as a professor of computer science at MIT. He became - to his chagrin - something of a celebrity in the 1960s when he wrote the Eliza software program, an early attempt at using a computer to simulate a person. Eliza was designed to mimic the conversational style of a psychotherapist, and many people who used the program found the conversations so realistic that they were convinced that Eliza had a capacity for empathy.

The reaction to Eliza startled Weizenbaum, and after much soul-searching he became, as John Markoff wrote in his New York Times obituary, a "heretic" in the computer-science world, raising uncomfortable questions about man's growing dependence on computers. Computer Power and Human Reason, published in 1976, remains one of the best books ever written about computing and its human implications. It's dated in some of its details, but its messages seem as relevant, and as troubling, as ever. Weizenbaum argued, essentially, that computers impose a mechanistic point of view on their users - on us - and that that perspective can all too easily crowd out other, possibly more human, perspectives.

The influence of computers is hard to resist and even harder to escape, wrote Weizenbaum:

The computer becomes an indispensable component of any structure once it is so thoroughly integrated with the structure, so enmeshed in various vital substructures, that it can no longer be factored out without fatally impairing the whole structure. That is virtually a tautology. The utility of this tautology is that it can reawaken us to the possibility that some human actions, e.g., the introduction of computers into some complex human activities, may constitute an irreversible commitment. . . . The computer was not a prerequisite to the survival of modern society in the post-war period and beyond; its enthusiastic, uncritical embrace by the most "progressive" elements of American government, business, and industry quickly made it a resource essential to society's survival in the form that the computer itself had been instrumental in shaping.

The machine's influence shapes not only society's structures but the more intimate structures of the self. Under the sway of the ubiquitous, "indispensable" computer, we begin to take on its characteristics, to see the world, and ourselves, in the computer's (and its programmers') terms. We become ever further removed from the "direct experience" of nature, from the signals sent by our senses, and ever more encased in the self-contained world delineated and mediated by technology. It is, cautioned Weizenbaum, a perilous transformation:

Science and technology are sustained by their translations into power and control. To the extent that computers and computation may be counted as part of science and technology, they feed at the same table. The extreme phenomenon of the compulsive programmer teaches us that computers have the power to sustain megalomaniac fantasies. But the power of the computer is merely an extreme version of a power that is inherent in all self-validating systems of thought. Perhaps we are beginning to understand that the abstract systems - the games computer people can generate in their infinite freedom from the constraints that delimit the dreams of workers in the real world - may fail catastrophically when their rules are applied in earnest. We must also learn that the same danger is inherent in other magical systems that are equally detached from authentic human experience, and particularly in those sciences that insist they can capture the whole man in their abstract skeletal frameworks.

His own invention, Eliza, revealed to Weizenbaum the ease with which we will embrace a fabricated world. He spent the rest of his life trying to warn us away from the seductions of Eliza and her many friends. The quest may have been quixotic, but there was something heroic about it too.

See other appreciations of Weizenbaum by Andrew Brown, Jaron Lanier, and Thomas Otter.

[Apr 18, 2008] The Machine That Made Us by Kevin Kelly

April 18, 2008 | kk.org/thetechnium

Computer scientist Joseph Weizenbaum recently passed away at the age of 85. Weizenbaum invented the famous Eliza chat bot forty years ago. Amazingly, this pseudo-AI still has the power to both amuse and confuse us. But later in life Weizenbaum became a critic of artificial intelligence. He was primarily concerned about the pervasive conquest of our culture by the computational metaphor - the idea that everything interesting is computation - and worried that in trying to make thinking machines, we would become machines ourselves. Weizenbaum's death has prompted a review of his ideas, set out in his book "Computer Power and Human Reason".

On Edge, Nick Carr says this book "remains one of the best books ever written about computing and its human implications. It's dated in some of its details, but its messages seem as relevant, and as troubling, as ever. Weizenbaum argued, essentially, that computers impose a mechanistic point of view on their users - on us - and that that perspective can all too easily crowd out other, possibly more human, perspectives." He highlights one passage worth inspecting.

The computer becomes an indispensable component of any structure once it is so thoroughly integrated with the structure, so enmeshed in various vital substructures, that it can no longer be factored out without fatally impairing the whole structure. That is virtually a tautology. The utility of this tautology is that it can reawaken us to the possibility that some human actions, e.g., the introduction of computers into some complex human activities, may constitute an irreversible commitment. . . . The computer was not a prerequisite to the survival of modern society in the post-war period and beyond; its enthusiastic, uncritical embrace by the most "progressive" elements of American government, business, and industry quickly made it a resource essential to society's survival in the form that the computer itself had been instrumental in shaping.

That's an elegant summary of a common worry: we are letting the Machine take over, and taking us over in the process.

Reading this worry, I was reminded of a new BBC program called "The Machine That Made Us." This video series celebrates not the computer but the other machine that made us - the printing press. It's a four-part investigation into the role that printing has played in our culture. And it suggested to me that everything that Weizenbaum said about AI might be said about printing.

So I did a search-and-replace in Weizenbaum's text. I replaced "computer" with this other, older technology, "printing."

Printing becomes an indispensable component of any structure once it is so thoroughly integrated with the structure, so enmeshed in various vital substructures, that it can no longer be factored out without fatally impairing the whole structure. That is virtually a tautology. The utility of this tautology is that it can reawaken us to the possibility that some human actions, e.g., the introduction of printing into some complex human activities, may constitute an irreversible commitment. . . . Printing was not a prerequisite to the survival of modern society; its enthusiastic, uncritical embrace by the most "progressive" elements of government, business, and industry quickly made it a resource essential to society's survival in the form that the printing itself had been instrumental in shaping.

Stated this way, it's clear that printing is pretty vital and foundational, and it is. I could have done the same replacement with the technologies of "writing" or "the alphabet" - both equally transformative and essential to our society.

Printing, writing, and the alphabet did in fact bend the culture to favor themselves. They also made themselves so indispensable that we cannot imagine culture and society without them. Who would deny that our culture is unrecognizable without writing? And, as Weizenbaum indicated, the new embedded technology tends to displace the former mindset. Orality is gone, and our bookish culture is often at odds with oral cultures.

Weizenbaum's chief worry seems to be that we would become dependent on this new technology, and because it has its own agenda and self-reinforcement, it will therefore change us away from ourselves (whatever that may be).

All these are true. But as this exercise makes clear, we've gone through these kinds of self-augmenting transitions several times before, and I believe we have come out better for it. Literacy and printing have improved us, even though we left something behind.

Weizenbaum (and probably Carr) would have been among those smart, well-meaning elder figures in ancient times preaching against the coming horrors of printing and books. They would have highlighted the loss of orality, and the way these new-fangled auxiliary technologies demean humanity. We have our own memories, people: use them! They would have been in good company, since even Plato lamented the same.

There may indeed be reasons to worry about AI, but the fact that AI and computers tend to be pervasive, indispensable, foundational, self-reinforcing, and irreversible is not by itself a reason to worry. Rather, if the past history of printing and writing is any indication, it is a reason to celebrate. With the advent of ubiquitous computation we are about to undergo another overhaul of our identity.

[Apr 10, 2008] The creation of artificial stupidity reflects badly on the human race by Andrew Brown

April 10 2008 | guardian.co.uk

Joseph Weizenbaum, who died last month, was one of the computer scientists who changed the way we think. Unfortunately for all of us, he didn't change it in the way he wanted to. His family was driven from Germany by the Nazis in 1936, and by the early 1960s he was a professor at MIT, part of the first wave of brilliant programmers to whom it sometimes seemed that there was nothing that computers could not do. Contemporaries like John McCarthy and Marvin Minsky confidently predicted the emergence of "strong" human-like artificial intelligence (AI). Then, in 1965, Weizenbaum demonstrated artificial stupidity, and the world has never been the same since.

He wrote a program called Eliza, which would respond to sentences typed in at a terminal with sentences of its own that bore some relation to what had been typed in; it mimicked a completely non-directional psychotherapist, who simply encouraged the patient to ramble till they stumbled on the truth, or the end of the session. What happened, of course, was that some students started to confide in the program as if it were a real person.
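
How little machinery this took is easy to demonstrate. The sketch below is not Weizenbaum's program - his script mechanism was considerably richer - but a minimal, hypothetical Python reconstruction of the same pattern-match-and-reflect trick (the rules and word lists here are invented for illustration):

    import random
    import re

    # Swap first- and second-person words when echoing a phrase back.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my"}

    # Keyword patterns paired with response templates, in priority order.
    RULES = [
        (re.compile(r"\bi feel (.+)", re.I),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"\bi am (.+)", re.I),
         ["Why do you say you are {0}?"]),
        (re.compile(r"\bmy (.+)", re.I),
         ["Tell me more about your {0}."]),
    ]

    # Content-free fallbacks: the non-directional therapist's stock in trade.
    DEFAULTS = ["Please go on.", "What does that suggest to you?"]

    def reflect(phrase):
        return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

    def respond(sentence):
        for pattern, templates in RULES:
            match = pattern.search(sentence)
            if match:
                return random.choice(templates).format(reflect(match.group(1)))
        return random.choice(DEFAULTS)

    print(respond("I feel lost without my father"))
    # e.g. "Why do you feel lost without your father?"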

Even professional psychiatrists were completely deceived. One of them wrote: "If the Eliza method proves beneficial then it would provide a therapeutic tool which can be made widely available to mental hospitals and psychiatric centres suffering a shortage of therapists ... several hundred patients an hour could be handled by a computer system." Clearly, this is not a misunderstanding of the particular powers of one program, but a much larger misunderstanding of what computers are and what we are.

For Weizenbaum this raised unsettling questions about what human understanding might be. Instead of building computers which were genuinely capable of understanding the world, his colleagues had simply redefined understanding and knowledge until they were things of which computers were, in principle, capable.

We live in a world full of Eliza's grandchildren now, a race of counterfeit humans. I am not thinking of the automated systems that appear to parse the things that we say on customer service hotlines, but the humans chained to scripts whom we eventually reach, trained to react like machines to anything that is said to them.

What made Weizenbaum such an acute critic was not just that he understood computers very well and was himself a considerable programmer. He shared the enthusiasms of his enemies, but unlike them he saw the limits of enthusiasm. Perhaps because of the circumstances of his family's expulsion from Germany, he saw very clearly that the values associated with science - curiosity, determination, hard work and cleverness - were not on their own going to make us happy or good. Scientists had been complicit, sometimes enthusiastically complicit, in the Nazi war machine, and now computer programmers were making possible the weapons that threaten all life on Earth. He was an early campaigner against anti-ballistic missile systems, because they would make war more likely.

He wrote a wonderful denunciation of the early hacking culture in his book, Computer Power and Human Reason:

"Bright young men of disheveled appearance, often with sunken glowing eyes, can be seen sitting at computer consoles, their arms tensed and waiting to fire their fingers at the buttons and keys on which their attention seems to be as riveted ... The hacker ... has only technique, not knowledge. He has nothing he can analyze or synthesize. His skill is therefore aimless, even disembodied. It is simply not connected with anything other than the instrument on which it may be exercised. His skill is like that of a monastic copyist who, though illiterate, is a first-rate calligrapher. His grandiose projects must therefore necessarily have the quality of illusions, indeed, of illusions of grandeur. He will construct the one grand system in which all other experts will soon write their systems."

But Weizenbaum himself did much more than that, even if he wrote only one long book. His book has dated very little, and nothing else I've read shows so well how a humanist may love computers without idolising them.

thewormbook.com/helmintholog

Eliza's World by Jaron Lanier

edge.org

We have lost a lion of Computer Science. Joseph Weizenbaum's life is proof that someone can be an absolute alpha-geek and a compassionate, soulful person at the same time. He displayed innovative courage in recognizing the seductive dangers of computation.

History will remember Weizenbaum as the clearest thinker about the philosophy of computation. A metaphysical confrontation dominated his interactions with the non-human centered mainstream. There were endless arguments about whether people were special in ways that cybernetic artifacts could never be. The mainstream preferred to sprinkle the magic dust of specialness on the "instruments," as Weizenbaum put it, instead of people.

But there was a less metaphysical side of Weizenbaum's thinking that is urgently applicable to the most pressing problems we all face right now. He warned that if you believe in computers too much, you lose touch with reality. That's the real danger of the magic dust so liberally sprinkled by the mainstream. We pass this fallacy from the lab out into the world. This is what apparently happened to Wall Street traders in fomenting a series of massive financial failures. Computers can be used rather too easily to improve the efficiency with which we lie to ourselves. This is the side of Weizenbaum that I wish was better known.

We wouldn't let a student become a professional medical researcher without learning about double blind experiments, control groups, placebos, the replication of results, and so on. Why is computer science given a unique pass that allows us to be soft on ourselves? Every computer science student should be trained in Weizenbaumian skepticism, and should try to pass that precious discipline along to the users of our inventions.

Weizenbaum's legacy includes an unofficial minority school in computer science that has remained human-centered. A few of the other members, in my opinion, are David Gelernter, Ted Nelson, Terry Winograd, Alan Kay, and Ben Shneiderman.

Everything about computers has become associated with youth. Turing's abstractions have been woven into a theater in which we can enjoy fantasies of eternal youth. We are fascinated by whiz kids and the latest young billionaires in Silicon Valley. We fantasize that we will be uploaded when the singularity arrives in order to become immortal, and so on. But when we look away from the stage for a moment, we realize that we computer scientists are ultimately people. We die.

[Feb 6, 2008] Industry milestone: DNS turns 25

02/06/08 | networkworld.com

The Domain Name System turned 25 last week.

Paul Mockapetris is credited with creating DNS 25 years ago; he successfully tested the technology in June 1983, according to several sources.

The anniversary of the technology that underpins the Internet -- and prevents Web surfers from having to type a string of numbers when looking for their favorite sites -- reminds us how network managers can't afford to overlook even the smallest of details. Now in all honesty, DNS has been on my mind lately because of a recent film that used DNS and network technology in its plot, but savvy network managers have DNS on the mind daily.

DNS is often referred to as the phone book for the Internet: it matches an IP address with a name and makes sure people and devices requesting an address actually arrive at the right place. And if the servers hosting DNS are configured wrong, networks can be susceptible to downtime and attacks, such as DNS poisoning.
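
The "phone book" lookup itself is easy to see in action; here is a minimal sketch using Python's standard library (example.com is just a placeholder host):

    import socket

    # Forward lookup: ask the system resolver for the IPv4 address of a name.
    name = "example.com"
    print(name, "->", socket.gethostbyname(name))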

And in terms of managing networks, DNS has become a critical part of many IT organizations' IP address management strategies. And with voice-over-IP and wireless technologies ramping up the number of IP addresses that need to be managed, IT staff are learning they need to ramp up their IP address management efforts as well. Companies such as Whirlpool are on top of IP address management projects, but industry watchers say not all IT shops have that luxury.

"IP address management sometimes gets pushed to the back burner because a lot of times the business doesn't see the immediate benefit -- until something goes wrong," says Larry Burton, senior analyst with Enterprise Management Associates.

And the way people are doing IP address management today won't hold up under the proliferation of new devices, an update to the Internet Protocol (from IPv4 to IPv6) and the compliance requirements that demand detailed data on IP addresses.

"IP address management for a lot of IT shops today is manual and archaic. It is now how most would say to manage a critical network service," says Robert Whiteley, a senior analyst at Forrester Research. "Network teams need to fix how they approach IP address management to be considered up to date."

And those looking to overhaul their approach to IP address management might want to reconsider how they do DNS and DHCP services as well. While these functions can be handled by separate platforms -- though integration among them is a must -- some experts say that network managers updating how they manage IP addresses should also take a look at their DNS and DHCP infrastructure.

"Some people think of IP address management as the straight up managing of IP addresses and others incorporate the DNS/DHCP infrastructure, says Lawrence Orans, research director at Gartner. "If you are updating how you manage IPs it's a good time to also see if how you are doing DNS and DHCP needs an update."

Email in the 18th century

Low-tech Magazine

More than 200 years ago it was already possible to send messages throughout Europe and America at the speed of an aeroplane – wireless and without need for electricity.

Email leaves all other communication systems far behind in terms of speed. But the principle of the technology – forwarding coded messages over long distances – is nothing new. It has its origins in the use of plumes of smoke, fire signals and drums, thousands of years before the start of our era. Coded long distance communication also formed the basis of a remarkable but largely forgotten communications network that prepared the arrival of the internet: the optical telegraph.

(Maps and picture: Ecole Centrale de Lyon)

Every tower had a telegrapher, looking through the telescope at the previous tower in the chain.

Throughout history, long distance communication was a matter of patience – lots of patience. Postmen have existed for longer than humans have been able to write, but the physical transport of spoken or written messages was always limited by the speed of the messenger. Humans or horses can maintain a speed of 5 or 6 kilometres an hour for long distances. If they walk 10 hours a day, the transmission of a message from Paris to Antwerp would take about a week.

Already in antiquity, post systems were designed that made use of relays of postmen. At these relay stations, the message was transferred to another runner or rider, or the horseman could change his horse. These organised systems greatly increased the speed of the postal services. The average speed of a galloping horse is 21 kilometres an hour, which means that the travel time between Paris and Antwerp could be shortened to a few days. A carrier pigeon was twice as fast, but less reliable. Intercontinental communication was limited to the speed of shipping.

A chain of towers

Centuries of slow long-distance communications came to an end with the arrival of the telegraph. Most history books start this chapter with the appearance of the electrical telegraph, midway through the nineteenth century. However, they skip an important intermediate step. Fifty years earlier (in 1791), the Frenchman Claude Chappe developed the optical telegraph. Thanks to this technology, messages could be transferred very quickly over long distances, without the need for postmen, horses, wires or electricity.

The optical telegraph network consisted of a chain of towers, each placed 5 to 20 kilometres apart. On each of these towers a wooden semaphore and two telescopes were mounted (the telescope was invented in 1600). The semaphore had two signalling arms which could each be placed in seven positions. The wooden post itself could also be turned to 4 positions, so that 196 different positions were possible. Every one of these arrangements corresponded to a code for a letter, a number, a word or (a part of) a sentence.
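
The arithmetic behind that code space is simple enough to check (a trivial sketch, nothing more):

    # Two arms with 7 positions each, and a post that turns to 4 positions:
    arm_combinations = 7 * 7
    post_positions = 4
    print(arm_combinations * post_positions)  # 196 distinct signals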

1,380 kilometres an hour

Every tower had a telegrapher, looking through the telescope at the previous tower in the chain. If the semaphore on that tower was put into a certain position, the telegrapher copied that symbol on his own tower. Next he used the telescope to look at the succeeding tower in the chain, to check whether the next telegrapher had copied the symbol correctly. In this way, messages were signalled onward symbol by symbol from tower to tower. The semaphore was operated by two levers. A telegrapher could reach a speed of 1 to 3 symbols per minute.

The technology may sound a bit absurd today, but in those times the optical telegraph was a genuine revolution. In a few decades, continental networks were built both in Europe and the United States. The first line was built between Paris and Lille during the French revolution, close to the frontline. It was 230 kilometres long and consisted of 15 semaphores. The very first message – a military victory over the Austrians – was transmitted in less than half an hour. The transmission of 1 symbol from Paris to Lille could happen in ten minutes, which comes down to a speed of 1,380 kilometres an hour. Faster than a modern passenger plane – a machine that was invented only one and a half centuries later.
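
That speed figure is straightforward arithmetic: one symbol traversing the 230-kilometre line in ten minutes works out as follows.

    # Effective signalling speed of the Paris-Lille line:
    line_km = 230
    minutes_per_symbol = 10
    print(line_km / (minutes_per_symbol / 60))  # 1380.0 km/h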

From Amsterdam to Venice

The technology expanded very fast. In less than 50 years' time the French built a national infrastructure with more than 530 towers and a total length of almost 5,000 kilometres. Paris was connected to Strasbourg, Amsterdam, Toulon, Perpignan, Lyon, Turin, Milan and Venice. At the beginning of the 19th century, it was possible to wirelessly transmit a short message from Amsterdam to Venice in one hour's time. A few years before, a messenger on horseback would have needed at least a month to do the same.

The system was copied on a large scale in other countries. Sweden developed a country-wide network, followed by parts of England and North America. A bit later, Spain, Germany and Russia also constructed large optical telegraph infrastructures. Most of these countries devised their own variations on the optical telegraph, using shutters instead of arms, for example. Sweden developed a system that was twice as fast; Spain built a telegraph that was windproof. Later the optical telegraph was also put into action in shipping and rail traffic.

A real European network never really existed. The connection between Amsterdam and Venice existed for only a short period. When Napoleon was chased out of the Netherlands, his telegraph network was dismantled. The Spanish, on the other hand, started too late. Their nationwide network was only finished when the technology had started to fall into disuse in other countries. The optical telegraph network was used solely for military and national communications; individuals did not have access to it – although it was used for transmitting winning lottery numbers and stock market data. (Map: Ecole Centrale de Lyon)

Intercontinental communication

The optical telegraph disappeared as fast as it came, with the arrival of the electrical telegraph fifty years later. The last optical line in France was taken out of service in 1853; in Sweden the technology was used up to 1880. The electrical telegraph was not hindered by mist, wind, heavy rainfall or low-hanging clouds, and it could also be used at night. Moreover, the electrical telegraph was cheaper than the mechanical variant. Another advantage was that it was much harder to intercept a message – whoever knew the code of the optical telegraph could decipher the message. The electrical telegraph also made intercontinental communication possible, which was impossible with the optical telegraph (unless you made a large detour via Asia).

The electrical telegraph was the main means of communication for transmitting text messages over long distances for more than 100 years. At first, electrical wires were used; later on, radio waves. The first line was built in 1844; the first transatlantic connection was put into use in 1865. The telegraph made use of Morse code, in which dots and dashes symbolize letters and numbers.
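
To make the dots and dashes concrete, here is a toy encoder covering a few letters of the Morse alphabet (the real table covers all letters, digits and punctuation):

    MORSE = {"E": ".", "T": "-", "S": "...", "O": "---"}

    def encode(text):
        # Space-separate the code groups for readability.
        return " ".join(MORSE[c] for c in text.upper() if c in MORSE)

    print(encode("sos"))  # ... --- ...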

Not the telephone, nor the railroads, nor radio or television made the telegraph obsolete. The technology only died with the arrival of the fax and computer networks in the second half of the 20th century. In rail traffic and shipping, too, optical telegraphy was replaced by electronic variants, though in shipping the technology is still used in emergency situations (by means of flags or lamps).

Keyboard

The electrical telegraph is the immediate predecessor of e-mail and the internet. From the 1930s on, it was even possible to transmit images. A variant equipped with a keyboard was also developed, so that the technology could be used by people without any knowledge of Morse code. The optical as well as the electrical telegraph are in essence the same technology as the internet and e-mail. All these means of communication make use of code language and intermediate stations to transmit information across large distances; the optical telegraph uses visual signs, the electrical telegraph dots and dashes, the internet ones and zeroes. Plumes of smoke and fire signals are also telegraphic systems – in combination with a telescope they would be as efficient as an optical telegraph.

Low-tech internet

Of course, e-mail is much more efficient than the optical telegraph. But that does not alter the fact that the low-tech predecessor of electronic mail achieved more or less the same result without wires or energy, while the internet consists of a cluster of cables and is devouring our energy resources at an ever faster pace.

© Kris De Decker (edited by Vincent Grosjean)

[Nov 10, 2007] MIT releases the sources of MULTICS, the father of UNIX!

November 10, 2007 | Jos Kirps's Popular Science and Technology Blog

This is extraordinary news for all nerds, computer scientists and the Open Source community: the source code of the MULTICS operating system (Multiplexed Information and Computing Service), the father of UNIX and all modern OSes, has finally been opened.

Multics was an extremely influential early time-sharing operating system, started in 1964, that introduced a large number of new concepts, including dynamic linking and a hierarchical file system. It was extremely powerful, and UNIX can in fact be considered a "simplified" successor to MULTICS (the name "Unix" is itself a hack on "Multics"). The last running Multics installation was shut down on October 31, 2000.

From now on, MULTICS can be downloaded from the following page (it's the complete MR12.5 source dumped at CGI in Calgary in 2000, including the PL/1 compiler):

http://web.mit.edu/multics-history

Unfortunately you can't install this on any PC, as MULTICS requires dedicated hardware, and there's no operational computer system today that could run this OS. Nevertheless the software should be considered an outstanding resource for computer researchers and scientists. It is not yet known whether it will be possible to emulate the required hardware to run the OS.

Special thanks to Tom Van Vleck for his continuous work on www.multicians.org, to the Group BULL including BULL HN Information Systems Inc. for opening the sources and making all this possible, to the folks at MIT for releasing it and to all of those who helped to convince BULL to open this great piece of computer history.

UNIX letters: Anti-Foreword by Dennis Ritchie

Dear Mr. Ritchie,

I heard a story from a guy in a UNIX sysadmin class, and was wondering if it was true.

The guy in this class told of a co-worker of his who was in a UNIX training class that got involved in UNIX bashing. You know, like why does the -i option for grep mean ignore case, while the -f option for sort means ignore case, and so on. Well, the instructor of the course decided to chime in and said something like this:

"Here's another good example of this problem with UNIX. Take the find command for example. WHAT idiot would program a command so that you have to say -print to print the output to the screen. What IDIOT would make a command like this and not have the output go to the screen by default."

And the instructor went on and on, and vented his spleen...

The next morning, one of the ladies in the class raised her hand, the instructor called on her, and she proceeded to say something like this:

"The reason my father programmed the find command that way, was because he was told to do so in his specifications."

I've always wondered if this story was true, and who it was who wrote the find command. In the Oct. 94 issue of Byte they had an article on "UNIX at 25" which said that Dick Haight wrote the find command along with cpio, expr, and a lot of the include files for Version 7 of UNIX. I don't know where to send this message directly to Dick Haight, and I would appreciate it if you would forward it to him, if you are able. If you can't, well then I hope you liked the story. I got your mail address from "The UNIX Haters Handbook", and would like to add this to your Anti-Foreword:
Until that frozen day in HELL occurs, and the authors of that book write a better operating system, I'm sticking with UNIX.

Sincerely,

Dan Bacus
[email protected].

From daemon Thu Feb  9 02:22 GMT 1995
Return-Path: [email protected]
Received: from plan9.research.att.com ([192.20.225.252]) by nscsgi.nscedu.com (8.6
From: [email protected]
Message-Id:  <[email protected]>
To: danb
Date: Wed, 8 Feb 1995 21:20:30 EST
Subject: Re: story
Content-Type: text
Content-Length: 1031
Status: RO

Thanks for the story and the note.  Dick Haight was in what was
then probably called USG, for Unix Support Group (the name changed
as they grew).  Their major role was to support the system within
AT&T, and later to turn it into a real commercial product.  He was indeed
one of the major people behind find and cpio.  This group was distinct from
the research area where the system originated, and we were somewhat put
off by the syntax of their things.  However, they were clearly quite useful,
and they were accepted.

Dick left AT&T some years ago and I think he's somewhere in South
Carolina, but I don't have an e-mail address for him.  I'm not sure what
he thinks of find and cpio today.  That group always was more concerned
with specifications and the like than we were, but I don't know enough
about their internal interactions to judge how these commands evolved.
All of your story is consistent with what I know up to the punchline,
about which I can't render an opinion!

Thanks again for your note.

       Dennis
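
A footnote for readers who never met the quirk the letter complains about: in the find of that era, a command like "find / -name core" walked the entire directory tree but displayed nothing, because -print was an explicit action rather than a default; you had to type "find / -name core -print" to see the matches. GNU find and today's POSIX specification supply an implicit -print when the expression contains no action, which is why the complaint now reads as a historical curiosity.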

[Sep 24, 2007] Happy Birthday, Sputnik! (Thanks for the Internet) by Gary Anthes

September 24, 2007 | computerworld.com

Quick, what's the most influential piece of hardware from the early days of computing? The IBM 360 mainframe? The DEC PDP-1 minicomputer? Maybe earlier computers such as Binac, ENIAC or Univac? Or, going way back to the 1800s, is it the Babbage Difference Engine?

More likely, it was a 183-pound aluminum sphere called Sputnik, Russian for "traveling companion." Fifty years ago, on Oct. 4, 1957, radio-transmitted beeps from the first man-made object to orbit the Earth stunned and frightened the U.S., and the country's reaction to the "October surprise" changed computing forever.

Although Sputnik fell from orbit just three months after launch, it marked the beginning of the Space Age, and in the U.S., it produced angst bordering on hysteria. Soon, there was talk of a U.S.-Soviet "missile gap." Then on Dec. 6, 1957, a Vanguard rocket that was to have carried aloft the first U.S. satellite exploded on the launch pad. The press dubbed the Vanguard "Kaputnik," and the public demanded that something be done.

The most immediate "something" was the creation of the Advanced Research Projects Agency (ARPA), a freewheeling Pentagon office created by President Eisenhower on Feb. 7, 1958. Its mission was to "prevent technological surprises," and in those first days, it was heavily weighted toward space programs.

Speaking of surprises, it might surprise some to learn that on the list of people who have most influenced the course of IT -- people with names like von Neumann, Watson, Hopper, Amdahl, Cerf, Gates and Berners-Lee -- appears the name J.C.R. Licklider, the first director of IT research at ARPA.

Armed with a big budget, carte blanche from his bosses and an unerring ability to attract bright people, Licklider catalyzed the invention of an astonishing array of IT, from time sharing to computer graphics to microprocessors to the Internet.

J.C.R. Licklider

Indeed, although he left ARPA in 1964 and returned only briefly in 1974, it would be hard to name a major branch of IT today that Licklider did not significantly shape through ARPA funding -- all ultimately in reaction to the little Soviet satellite.

But now, the special culture that enabled Licklider and his successors to work their magic has largely disappeared from government, many say, setting up the U.S. once again for a technological drubbing. Could there be another Sputnik? "Oh, yes," says Leonard Kleinrock, the Internet pioneer who developed the principles behind packet-switching, the basis for the Internet, while Licklider was at ARPA. "But it's not going to be a surprise this time. We all see it coming."

The ARPA Way

Licklider had studied psychology as an undergraduate, and in 1962, he brought to ARPA a passionate belief that computers could be far more user-friendly than the unconnected, batch-processing behemoths of the day. Two years earlier, he had published an influential paper, "Man-Computer Symbiosis," in which he laid out his vision for computers that could interact with users in real time. It was a radical idea, one utterly rejected by most academic and industrial researchers at the time. (See sidebar, Advanced Computing Visions from 1960.)

Driven by the idea that computers might not only converse with their users, but also with one another, Licklider set out on behalf of ARPA to find the best available research talent. He found it at companies like the RAND Corp., but mostly he found it at universities, starting first at MIT and then adding to his list Carnegie Mellon University; Stanford University; University of California, Berkeley; the University of Utah; and others.


Advanced Computing Visions from 1960
Nearly a half-century ago, a former MIT professor of psychology and electrical engineering wrote a paper -- largely forgotten today -- that anticipated by decades the emergence of computer time sharing, networks and some features that even today are at the leading edge of IT.

Licklider wrote "Man-Computer Symbiosis" in 1960, at a time when computing was done by a handful of big, stand-alone batch-processing machines. In addition to predicting "networks of thinking centers," he said man-computer symbiosis would require the following advances:

  • Indexed databases. "Implicit in the idea of man-computer symbiosis are the requirements that information be retrievable both by name and by pattern and that it be accessible through procedures much faster than serial search."
  • Machine learning in the form of "self-organizing" programs. "Computers will in due course be able to devise and simplify their own procedures for achieving stated goals."
  • Dynamic linking of programs and applications, or "real-time concatenation of preprogrammed segments and closed subroutines which the human operator can designate and call into action simply by name."
  • More and better methods for input and output. "In generally available computers, there is almost no provision for any more effective, immediate man-machine communication than can be achieved with an electric typewriter."
  • Tablet input and handwriting recognition. "It will be necessary for the man and the computer to draw graphs and pictures and to write notes and equations to each other on the same display surface."
  • Speech recognition. "The interest stems from realization that one can hardly take a ... corporation president away from his work to teach him to type."


Licklider sought out researchers like himself: bright, farsighted and impatient with bureaucratic impediments. He established a culture and modus operandi -- and passed it on to his successors Ivan Sutherland, Robert Taylor, Larry Roberts and Bob Kahn -- that would make the agency, over the next 30 years, the most powerful engine for IT innovation in the world.

Recalls Kleinrock, "Licklider set the tone for ARPA's funding model: long-term, high-risk, high-payoff and visionary, with program managers that let principal investigators run with research as they saw fit." (Although Kleinrock never worked at ARPA, he played a key role in the development of the ARPAnet, and in 1969, he directed the installation of the first ARPAnet node at UCLA.)

Leonard Kleinrock

From the early 1960s, ARPA built close relationships with universities and a few companies, each doing what it did best while drawing on the accomplishments of the others. What began as a simple attempt to link the computers used by a handful of U.S. Department of Defense researchers ultimately led to the global Internet of today.

Along the way, ARPA spawned an incredible array of supporting technologies, including time sharing, workstations, computer graphics, graphical user interfaces, very large-scale integration (VLSI) design, RISC processors and parallel computing (see DARPA's Role in IT Innovations). There were four ingredients in this recipe for success: generous funding, brilliant people, freedom from red tape and the occasional ascent to the bully pulpit by ARPA managers.

These individual technologies had a way of cross-fertilizing and combining over time in ways probably not foreseen even by ARPA managers. What would become the Sun Microsystems Inc. workstation, for example, owes its origins rather directly to a half-dozen major technologies developed at multiple universities and companies, all funded by ARPA. (See Timeline: Three Decades of DARPA Hegemony.)

Ed Lazowska, a computer science professor at the University of Washington in Seattle, offers this story from the 1970s and early 1980s, when Kahn was a DARPA program manager, then director of its Information Processing Techniques Office:

What Kahn did was absolutely remarkable. He supported the DARPA VLSI program, which funded the [Carver] Mead-[Lynn] Conway integrated circuit design methodology. Then he funded the SUN workstation at Stanford because Forest Baskett needed a high-resolution, bitmapped workstation for doing VLSI design, and his grad student, Andy Bechtolsheim, had an idea for a new frame buffer.

Meanwhile, [Kahn] funded Berkeley to do Berkeley Unix. He wanted to turn Unix into a common platform for all his researchers so they could share results more easily, and he also saw it as a Trojan horse to drive the adoption of TCP/IP. That was at a time when every company had its own networking protocol -- IBM with SNA, DEC with DECnet, the Europeans with X.25 -- all brain-dead protocols.

Bob Kahn

One thing Kahn required in Berkeley Unix was that it have a great implementation of TCP/IP. So he went to Baskett and Bechtolsheim and said, "By the way, boys, you need to run Berkeley Unix on this thing." Meanwhile, Jim Clark was a faculty member at Stanford, and he looked at what Baskett was doing with the VLSI program and realized he could take the entire rack of chips that were Baskett's graphics processor and reduce them to a single board. That's where Silicon Graphics came from.

All this stuff happened because one brilliant guy, Bob Kahn, cherry-picked a bunch of phenomenal researchers -- Clark, Baskett, Mead, Conway, [Bill] Joy -- and headed them off in complementary directions and cross-fertilized their work. It's just utterly remarkable.

Surprise?
The launch of the Soviet satellite Sputnik shocked the world and became known as the "October surprise." But was it really?
Paul Green

Paul Green was working at MIT's Lincoln Laboratory in 1957 as a communications researcher. He had learned Russian and was invited to give talks to the Popov Society, a group of Soviet technology professionals. "So I knew Russian scientists," Green recalls. "In particular, I knew this big-shot academician named [Vladimir] Kotelnikov."

In the summer of 1957, Green told Computerworld, a coterie of Soviet scientists, including Kotelnikov, attended a meeting of the International Scientific Radio Union in Boulder, Colo. Says Green, "At the meeting, Kotelnikov -- who, it turned out later, was involved with Sputnik -- just mentioned casually, 'Yeah, we are about to launch a satellite.'"

"It didn't register much because the Russians were given to braggadocio. And we didn't realize what that might mean -- that if you could launch a satellite in those days, you must have a giant missile and all kinds of capabilities that were scary. It sort of went in one ear and out the other."

And did he tell anyone in Washington? "None of us even mentioned it in our trip reports," he says.

DARPA Today

But around 2000, Kleinrock and other top-shelf technology researchers say, the agency, now called the Defense Advanced Research Projects Agency (DARPA), began to focus more on pragmatic, military objectives. A new administration was in power in Washington, and then 9/11 changed priorities everywhere. Observers say DARPA shifted much of its funding from long-range to shorter-term research, from universities to military contractors, and from unclassified work to secret programs.

Of government funding for IT, Kleinrock says, "our researchers are now being channeled into small science, small and incremental goals, short-term focus and small funding levels." The result, critics say, is that DARPA is much less likely today to spawn the kinds of revolutionary advances in IT that came from Licklider and his successors.

DARPA officials declined to be interviewed for this story. But Jan Walker, a spokesperson for DARPA Director Anthony Tether, said, "Dr. Tether ... does not agree. DARPA has not pulled back from long-term, high-risk, high-payoff research in IT or turned more to short-term projects." (See sidebar, DARPA's Response.)

A Shot in the Rear

David Farber, now a professor of computer science and public policy at Carnegie Mellon, was a young researcher at AT&T Bell Laboratories when Sputnik went up.

"We people in technology had a firm belief that we were leaders in science, and suddenly we got trumped," he recalls. "That was deeply disturbing. The Russians were considerably better than we thought they were, so what other fields were they good in?"

David Farber

Farber says U.S. university science programs back then were weak and out of date, but higher education soon got a "shot in the rear end" via Eisenhower's ARPA. "It provided a jolt of funding," he says. "There's nothing to move academics like funding."

Farber says U.S. universities are no longer weak in science, but they are again suffering from lack of funds for long-range research.

"In the early years, ARPA was willing to fund things like artificial intelligence -- take five years and see what happens," he says. "Nobody cared whether you delivered something in six months. It was, 'Go and put forth your best effort and see if you can budge the field.' Now that's changed. It's more driven by, 'What did you do for us this year?'"

DARPA's budget calls for it to spend $414 million this year on information, communications and computing technologies, plus $483 million more on electronics, including things such as semiconductors. From 2001 to 2004, the percentage going to universities shrank from 39% to 21%, according to the Senate Armed Services Committee. The beneficiaries have been defense contractors.

Victor Zue

Meanwhile, funding from the National Science Foundation (NSF) for computer science and engineering -- most of it for universities -- has increased from $478 million in 2001 to $709 million this year, up 48%. But the NSF tends to fund smaller, more-focused efforts. And because contract awards are based on peer review, bidders on NSF jobs are inhibited from taking the kinds of chances that Licklider would have favored.

"At NSF, people look at your proposal and assign a grade, and if you are an outlier, chances are you won't get funded," says Victor Zue, who directs MIT's 900-person Computer Science and Artificial Intelligence Laboratory, the direct descendent of MIT's Project MAC, which was started with a $2 million ARPA grant in 1963.

"At DARPA, at least in the old days, they tended to fund people, and the program managers had tremendous latitude to say, 'I'm just going to bet on this.' At NSF, you don't bet on something."


DARPA's Response
"We are confident that anyone who attended DARPATech [in Aug. 2007] and heard the speeches given by DARPA's [managers] clearly understands that DARPA continues to be interested in high-risk, high-payoff research," says DARPA spokesperson Jan Walker.

Walker offers the following projects as examples of DARPA's current research efforts:

  • Computing systems able to assimilate knowledge by being immersed in a situation
  • Universal [language] translation
  • Realistic agent-based societal simulation environments
  • Networks that design themselves and collaborate with application services to jointly optimize performance
  • Self-forming information infrastructures that automatically organize services and applications
  • Routing protocols that allow computers to choose the best path for traffic, and new methods for route discovery for wide area networks
  • Devices to interconnect an optically switched backbone with metropolitan-level IP networks
  • Photonic communications in a microprocessor having a theoretical maximum performance of 10 TFLOPS (trillion floating-point operations per second)

Farber sits on a computer science advisory board at the NSF, and he says he has been urging the agency to "take a much more aggressive role in high-risk research." He explains, "Right now, the mechanisms guarantee that low-risk research gets funded. It's always, 'How do you know you can do that when you haven't done it?' A program manager is going to tell you, 'Look, a year from now, I have to write a report that says what this contributed to the country. I can't take a chance that it's not going to contribute to the country.' "

A report by the President's Council of Advisors on Science and Technology, released Sept. 10, indicates that at least some in the White House agree. In "Leadership Under Challenge: Information Technology R&D in a Competitive World," John H. Marburger, science advisor to the president, said, "The report highlights in particular the need to ... rebalance the federal networking and IT research and development portfolio to emphasize more large-scale, long-term, multidisciplinary activities and visionary, high-payoff goals."

Still, turning the clock back would not be easy, says Charles Herzfeld, who was ARPA director in the mid-1960s. The freewheeling behavior of the agency in those days might not even be legal today, he adds. (See The IT Godfather Speaks: Q&A With Charles M. Herzfeld.)

No Help From Industry

The U.S. has become the world's leader in IT because of the country's unique combination of government funding, university research, and industrial research and development, says the University of Washington's Lazowska. But just as the government has turned away from long-range research, so has industry, he says.

According to the Committee on Science, Engineering and Public Policy at the National Academy of Sciences, U.S. industry spent more on tort litigation than on research and development in 2001, the last year for which figures are available. And more than 95% of that R&D is engineering or development, not long-range research, Lazowska says.


"It's not looking out more than one product cycle; it's building the next release of the product," he says. "The question is, where do the ideas come from that allow you to do that five years from now? A lot of it has come from federally funded university research."

A great deal of fundamental research in IT used to take place at IBM, AT&T Inc. and Xerox Corp., but that has been cut way back, Lazowska says. "And of the new companies -- those created over the past 30 years -- only Microsoft is making significant investments that look out more than one product cycle."

Lazowska isn't expecting another event like Sputnik. "But I do think we are likely to wake up one day and find that China and India are producing far more highly qualified engineers than we are. Their educational systems are improving unbelievably quickly."

Farber also worries about those countries. His "Sputnik" vision is to "wake up and find that all our critical resources are now supplied by people who may not always be friendly." He recalls the book, The Japan That Can Say No (Simon & Schuster), which sent a Sputnik-like chill through the U.S. when it was published in 1991 by suggesting that Japan would one day outstrip the U.S. in technological prowess and thus exert economic hegemony over it.

"Japan could never pull that off because their internal markets aren't big enough, but a China that could say no or an India that could say no could be real," Farber says.

The U.S. has already fallen behind in communications, Farber says. "In computer science, we are right at the tender edge, although I do think we still have leadership there."


Science and Technology Funding by the U.S. Department of Defense (in millions)

Account                                 FY 2006 Level   FY 2007 Estimate   FY 2008 Request   $ Change FY07 vs. FY08   % Change FY07 vs. FY08
Total Basic Research                    $1,457          $1,563             $1,428            -$135                    -8.6%
Total Applied Research                  $4,948          $5,329             $4,357            -$972                    -18%
Total Advanced Technology Development   $6,866          $6,432             $4,987            -$1,445                  -22.4%
Total Science and Technology            $13,272         $13,325            $10,772           -$2,553                  -19%

Source: The Computing Research Association

Some of the cutbacks in DARPA funding at universities are welcome, says MIT's Zue. "Our reliance on government funding is nowhere near what it was in 1963. In a way, that's healthy, because when a discipline matures, the people who benefit from it ought to begin paying the freight."

"But," Zue adds, "it's sad to see DARPA changing its priorities so that we can no longer rely on it to do the big things."


Gary Anthes is a Computerworld national correspondent.

[Apr 20, 2007] Early History of Computing by Nathan Ensmenger

April 20, 2007 | The Franklin Institute's Resources for Science Learning


The electronic computer is the defining technology of the modern era. For many of us it is difficult, if not impossible, to imagine life without computers: we use computers to do our work, to help us study, to create and access entertainment, and to communicate with friends and family. And those are just the ways in which computers are most obviously visible in our society: millions of other tiny computing devices, called microprocessors, are hidden inside other products and technologies, quietly gathering data, controlling processes, and communicating between components. Your automobile almost certainly has its own computer (in fact, probably several), as does your cell phone, and perhaps even your refrigerator. Computers are everywhere. But where did they come from?

The history of the computer is, like the computer itself, complicated but fascinating. It encompasses many of the great events of the 19th and 20th centuries: the industrial and communications revolutions, the Second World War, the Space Race, the emergence of the electronics and plastics industries, the establishment of a truly global economy. Some of the actors in this history have become famous - the IBM corporation, for example, and Apple Computer, and Bill Gates - while others have yet to be widely recognized for their contributions. This exhibit explores some of the less well-known but nevertheless extremely important pioneers of the computer era: Herman Hollerith, the inventor of the electric tabulating machine; John Mauchly and Presper Eckert, who designed and built the ENIAC, one of the earliest and most influential electronic computers; John Bardeen and Walter Brattain, whose work on the point-contact transistor won them a Nobel Prize in Physics; and Claude Shannon, who defined for the world a theory of information and communications that has profoundly shaped not only the science and technology of computing, but also of biology, ecology, economics, and physics.

One way to introduce the history of the computer is to begin with what would seem to be a simple and straightforward question: who invented the first computer? This is a question that is often asked, quite understandably, but which is in fact surprisingly difficult to answer. To begin with, the word "computer" has been with us a long time: it was first used in the third century AD to describe the calculations used to determine the constantly shifting date of the Easter holiday.1 More recently, "computer" was used to describe, not a machine, but a person: well into the 20th century, these "computers" were employed, by a wide variety of scientific, governmental, and commercial organizations, to make calculations, either by hand or with the assistance of calculating machines.2 But while these "human computers" played an important role in the larger history of computation, they are not what most of us would consider to be true computers; we associate the computer revolution with machines, rather than people.

Even if we confine ourselves to the more conventional understanding of the computer as an electronic, digital, and programmable device (more on all of these characteristics later), our search for the "first" computer is complicated. The shift from human to machine-based computing can be traced back to the early 17th century and beyond, as clever individuals developed new tools for manipulating numbers. It is easy to imagine why numbers were important: they are widely used in business, science, and warfare. But it was not until the 19th century that the search for large-scale mechanical computation began in earnest. It was in this period that population growth, economic expansion, and the rise of powerful nation-states created new demands for information processing techniques and technologies. In the United States, for example, innovations in communication and transportation allowed for the emergence of large national (and eventually international) corporations whose seemingly insatiable demand for data spurred numerous innovations in information technology. Many of the most important players in the early computer industry - including Burroughs, Remington Rand, National Cash Register (now NCR), and most importantly, the International Business Machines Company (IBM) - were creations of the burgeoning business machines industry of the late 19th century. Although IBM itself was not incorporated until 1924, it traces its origins directly to the 19th century Tabulating Machines Company, founded in 1886 by the inventor and engineer Herman Hollerith.3

The story of Herman Hollerith and his Tabulating Machine provides valuable insights into the early origins of the electronic computer. In the early 1880s Herman Hollerith had worked as a statistician for the United States Census Bureau. At that time the Census Bureau was facing a problem typical of many Industrial Era governments and corporations: it had more data than it knew what to do with. In the case of the Census Bureau, of course, this was population data: for the 1880 census, the Census Bureau had to gather and enumerate data on more than 50 million US citizens. The 1880 census report was 21,000 pages long and took seven years to compile. It was clear that, without some dramatic change in the way the Census Bureau dealt with their data, the 1890 census was going to prove too much to handle.

The case file on Herman Hollerith describes how this remarkable young inventor harnessed ideas from science and technology (some quite new, some well-established) to help contain the information explosion that threatened the Census Bureau. His tabulating machines provided an industrial-strength solution to the problem of information processing. Indeed, his tabulating machines formed essential components of an "information factory" approach to computing that gradually replaced older, human-based methods. In many respects, the earliest electronic computers were simply evolutionary extensions of Hollerith's 19th century technology - "glorified tabulating machines," as they were sometimes dismissively referred to by contemporaries.4 But in another very real sense, tabulating machines, despite their importance in the history of computing, were not, by modern standards, real computers. Which returns us to our original question: so who actually did invent the computer?

[Apr 5, 2007] AlanTuring.net The Turing Archive for the History of Computing

Largest web collection of digital facsimiles of original documents by
Turing and other pioneers of computing. Plus articles about
Turing and his work, including Artificial Intelligence.

NEW Recently declassified previously top-secret documents about codebreaking.

[Apr 5, 2007] Proceedings of the Symposium "Computers in Europe", 1998

Rare and little-known material

[Apr 5, 2007] Andrei P. Ershov, Aesthetics and the human factor in programming, Communications of the ACM, v.15 n.7, p.501-505, July 1972

In 1988 the charitable Ershov Fund was founded. Its main aim was the development of informatics through invention, creation, art, and educational activity.

Academician A. Ershov's archive: Documents (raw images of the ACM article)

[Mar 20, 2007] Fortran creator John Backus dies by Brian Bergstein

The first FORTRAN compilers had quite sophisticated optimization algorithms, and much of early compiler optimization research was done for Fortran compilers. More information can be found at The History of the Development of Programming Languages
March 20, 2007 | MSNBC.com

John Backus, whose development of the Fortran programming language in the 1950s changed how people interacted with computers and paved the way for modern software, has died. He was 82.

Backus died Saturday in Ashland, Ore., according to IBM Corp., where he spent his career.

Prior to Fortran, computers had to be meticulously "hand-coded" - programmed in the raw strings of digits that triggered actions inside the machine. Fortran was a "high-level" programming language because it abstracted that work - it let programmers enter commands in a more intuitive system, which the computer would translate into machine code on its own.
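
To make the contrast concrete, here is a minimal sketch (in Python rather than Fortran, purely for illustration; the function name and the numbers are mine) of the kind of one-line formula statement Fortran introduced. Before such languages, every load, multiply, and store implied by the formula had to be written out by hand as machine instructions:

    import math

    def projectile_range(v, theta_deg, g=9.81):
        # One formula-style statement: the translator, not the
        # programmer, decides how it maps onto machine instructions.
        return v ** 2 * math.sin(2 * math.radians(theta_deg)) / g

    print(projectile_range(300.0, 45.0))  # range in meters for a 300 m/s launch at 45 degrees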

The breakthrough earned Backus the 1977 Turing Award from the Association for Computing Machinery, one of the industry's highest accolades. The citation praised Backus' "profound, influential, and lasting contributions."

Backus also won a National Medal of Science in 1975 and got the 1993 Charles Stark Draper Prize, the top honor from the National Academy of Engineering.

"Much of my work has come from being lazy," Backus told Think, the IBM employee magazine, in 1979. "I didn't like writing programs, and so, when I was working on the IBM 701 (an early computer), writing programs for computing missile trajectories, I started work on a programming system to make it easier to write programs."

John Warner Backus was born in Wilmington, Del., in 1924. His father was a chemist who became a stockbroker. Backus had what he would later describe as a "checkered educational career" in prep school and the University of Virginia, which he left after six months. After being drafted into the Army, Backus studied medicine but dropped it when he found radio engineering more compelling.

Backus finally found his calling in math, and he pursued a master's degree at Columbia University in New York. Shortly before graduating, Backus toured the IBM offices in midtown Manhattan and came across the company's Selective Sequence Electronic Calculator, an early computer stuffed with 13,000 vacuum tubes. Backus met one of the machine's inventors, Rex Seeber - who "gave me a little homemade test and hired me on the spot," Backus recalled in 1979.

Backus' early work at IBM included computing lunar positions on the balky, bulky computers that were state of the art in the 1950s. But he tired of hand-coding the hardware, and in 1954 he got his bosses to let him assemble a team that could design an easier system.

The result, Fortran, short for Formula Translation, reduced the number of programming statements necessary to operate a machine by a factor of 20.

It showed skeptics that machines could run just as efficiently without hand-coding. A wide range of programming languages and software approaches proliferated, although Fortran also evolved over the years and remains in use.

Backus remained with IBM until his retirement in 1991. Among his other important contributions was a method for describing the particular grammar of computer languages. The system is known as Backus-Naur Form.

© 2007 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

[Jan 12, 2007] Jeff Raikes interview -- the whole thing from .. By Jack Schofield

December 14, 2006 | Guardian

This week's interview, in the printed Guardian Technology section, is with Jeff Raikes, president of the Microsoft Business Division, and "a member of the company's Senior Leadership Team" with Bill Gates and Steve Ballmer. Obviously we don't have room to print more than 3,000 words, even if you have time to read it. However, if you do want more, what follows is an almost complete transcript. You don't often get Microsoft's most senior guys one-on-one, and they are rarely as forthcoming as Raikes was this time....
For searches: the topics covered include Office Genuine Advantage (piracy), the Office 2007 user interface (the ribbon), SharePoint Server, hosted Office and online applications, the new XML file formats, and the Office bundles....

To set the scene, it's around 8.20am at the QEII Conference Centre in London, where Microsoft is holding a conference for software partners. I'm setting up my tape, and one of the PRs is getting us cups of coffee. I'm telling Raikes that I used to run VisiCalc on an Apple II, so I remember he joined Microsoft from Apple in 1981. "Unlike most of the people I talk to nowadays, you've been in this business longer than I have!"

Jeff Raikes: [Laughs] I started on VisiCalc in June of 1980, I actually worked for Atari briefly, when I was finishing up college. I ended up spending more in the company store than I made in income, so it's probably a good thing I moved on. Atari at that time was owned by Warner, so you could buy all the music albums for like a dollar, and games machines for all my friends.

Jack Schofield: Before we get going, did you write the Gates memo?
JR: Which memo are you referring to?

JS: The 1985 memo that Bill Gates sent to Apple, saying "you ought to license Mac OS to make it an industry standard." (http://www.scripting.com/specials/gatesLetter/text.html)
JR: I did. It's funny, there's a great irony in that memo, in that I was absolutely sincere in wanting the Macintosh to succeed, because that was the heart of our applications business at the time. And Apple somehow decided it was a devious plot and that I was the devil....

The irony is that I think if they'd taken the advice in the memo, we'd probably have ended up seeing the Mac be more successful and Windows perhaps not quite as successful, so I guess it all worked out OK in the end!

JS: It was good advice: I always thought you were right!
JR: Thankyou. I always thought it was a good memo, too, but if nobody did anything about it then perhaps it wasn't so good...

JS: And you're still in applications, which is amazing after all these years.
JR: It's amazing to see how much the opportunity has grown. If in 1981 we'd said that there would be 500 million people using Microsoft Office tools, people would have thought we were nuts. Yet today, I look at the landscape, at the broad opportunities of impacting how people find, use and share information, how they work together in a world where there's a lot of pressure; at the explosion of content, and how people manage content. And on the horizon, there's voice over IP and Unified Communications, and business intelligence, and software as a service to enhance the information work experience. So I look at it today, and I'm amazed at how much opportunity I had, and how much there is.
I've done different roles -- I spent eight or nine years with Steve Ballmer as we were building the worldwide sales and marketing organisation -- and when I returned to Office in 2000, some people thought there's not that much more to do. Quite the contrary, there was an incredible amount to do!

JS: Is that 500 million paid up users?
JR: Mmm, mmm, no, that's opportunity, Jack! [Laughs]

JS: Now you're getting the equivalent of Windows Genuine Advantage [piracy protection], which is going to be fun.
JR: We do have Office Genuine Advantage now, but it's not implemented exactly the same. Encouraging licensing is an important activity, but it's one of those things where you have to strike the right balance. We want to encourage usage of our software, and we want to make sure that those people who have licensed the software appropriately have a good experience.
I've lived on this copy protection thing since the 1980s, and it could be very obtrusive, so you have to strike the right balance. Office Genuine Advantage is a good approach in that it incents people to want to be licensed.

JS: What about your approach to licensing the ribbon from the Office 2007 user interface? What would happen if OpenOffice.org did a knock-off of Office 2007?
JR: Well, we'd just have to see. We have a certain responsibility to protect our intellectual property, and we try to do that in ways that are good for our customers and of course for our shareholders. So we've come up with a licensing programme [for the ribbon] and we'll see what others want to do. We have made no decisions yet as to exactly what we might do in a set of various scenarios.

JS: You seem to be offering to license it to people who write applications and utilities that support Office but not ones that are competing with Office....
JR: That's right.

JS: There's possibly a fuzzy line there....
JR: That's true, it can be. That's why I say there's a lot to come, to understand what people's interests are and what they may wish to do.

JS: How do you think the take-up of the ribbon is going to go?
JR: If we were to go by the research -- and of course that doesn't always bear out in the market -- it would be extremely positive. If you poll Office users, there's a couple of things that really stand out. One is that they really see that Office is very important to what they do in their jobs, so they care a lot about it. The second thing is that they'd like to be able to do even more. They recognise there's a lot of capability in the product that they're not getting to today. So the research that we put into designing the user experience was to address that issue: to help folks get to more capability and get things done faster and easier. Our research shows they can use 65% fewer keystrokes and less mouse-travel.
People want a results-oriented interface: they want to get things done. So that's the most notable step with Office 2007.
Now there's Office SharePoint Server, which takes the server side to a new level. Bill and I would draw the analogy to when we put together the Office productivity suite in the late 80s: we think Office SharePoint Server will in a few years be recognised as a similarly important strategic initiative. We're bringing together the collaboration, the document libraries, integrated workflow, electronic forms, business intelligence, content management, the portal capability, and having the opportunity to build on it. Bringing that platform together is important.

JS: But how do you promulgate it? SharePoint is more or less confined to large corporations, except for Office Live, and you don't even say that that's based on SharePoint.
JR: Office Live will be a way for people to have access to it quite broadly, both small businesses and individual Office users. In fact, SharePoint is perhaps the fastest growth business in the history of our company: we went from zero to $500 million in three years.

JS: Why isn't it talked about as part of the big web conversation, along with wikis and blogs and so on?
JR: Well, of course, SharePoint 2007 does have support for blogs and wikis, is that what you mean? I'm sorry, I may not be following your question....

JS: Well, when you created Office, it wasn't a corporate sale, it was part of the mass market, part of the conversation between ordinary users. Now SharePoint is a corporate sale, but it isn't part of the wider market conversation about blogs and wikis, Apache, MySQL and so on.

JR: Today, not as much as we would like ... and I think that's an opportunity. As you say, SharePoint is one of the foundations of Office Live, and we have chosen to build Office Live in a progression. We've started with small businesses, but I think that as you recognise -- and the broad market doesn't, yet -- there's certainly the opportunity to open that up to anybody who does information work and anybody who uses Office tools and wants to extend that. So I think that's a great opportunity.

JS: Are you doing anything with hosted Office, apart from watching it disappear?
JR: Today, I don't get a lot of interest in running Word over the internet. Bandwidth is precious, and most people have Office. Nobody's crystal ball is perfect, but I think in a few years those who say software is dead will go the way of those people who said PCs were dead and network computing was the thing.
The reason is, people get very focused in on trying to undermine Microsoft and they don't get very focused in on the customer. You have all this horsepower at your fingertips, whether it's your PC or your laptop or your mobile device, and you have all that horsepower in the cloud. Why not use the combination of the horsepower in order to optimise the experience? Do I really want to run the Word bits over my network connection, or do I want to use it to store contents, to have access to them anywhere, to share and collaborate and so on? It's the combination....

JS: It's noticeable with Office 2007 that you don't always know which things are on the server and which are on your PC, so ultimately the two things blend together....
JR: I think it's important to think about what are the scenarios that will really enhance and extend information work.

JS: You did do Office 2003 as a hosted online service, as part of the Microsoft.Net launch....
JR: People can do that, but most people already have the software on their computers, so there isn't that big a demand for that today. I think Exchange is a platform that will more rapidly move to a service form than Office client applications, where most of the time you want to optimise the power at your fingertips. Or at least that would be my prediction. I think the key strategy is to be able to use the combination.

JS: Hosted Exchange hasn't got a lot of traction, has it?
JR: I think it's a market that's still in its early stage. I would also say that hosted Exchange has done as well as any hosted business email system. So the question is, to what extent will businesses want to access these things online? Some of my colleagues think that, in 10 years, no companies will have their own Exchange servers. I'm not quite that aggressive!
I do believe, though, that many companies will look to hosted Exchange, hosted SharePoint.... I think we'll see more and more of those infrastructure elements. And frankly, Jack, I'll make sure that the people who are developing our servers are thinking of hosted services, which means they have to think through the technical issues. We are going to make sure we have service thinking integrated throughout our software.
At the end of the day, my point of view is: give the customer the choice. Sell them on the value of Exchange as a messaging system and let them choose whether they want it on the premises or have someone run it for them as a service.

JS: What about web-based alternatives such as ThinkFree, which offers a sort of Office online? Is that part of your bailiwick?
JR: There are a number of those web productivity ideas out there. As I said, the thing that will probably trip people up is they'll get focussed on the idea that that's a replacement for the Office suite, when what's most interesting are the new and unique scenarios that you can get by having that capability. But then, it's our responsibility to make sure that our customers have access to those services as part of their use of Office tools. It's about software and services, as opposed to services versus software.

JS: I wondered if that online element was part of your empire or something that someone else was looking after....
JR: It's certainly something that's very top of mind for me....

JS: And I wondered that because Office is a blockbuster, but it does take a while to do things compared to the speed at which things happen on the web. Look at YouTube!
JR: That's a fair point. You know, for better or for worse -- and it's probably both -- the core of what we do with Office probably doesn't have that characteristic, even in a web context. There are billions of documents out there, and people want tools that are compatible with billions of documents, and that have the functionality to allow people to do what they want to do. Things such as Google Docs, there are certainly some nice elements, but if you're a student and you need to do a paper that requires footnotes, well, good luck! [Laughs]
That's not to say they won't get better, but I try and temper my reaction to these things. In the same way I think our competitors get confused by focusing on trying to undermine us, instead of delivering customer value, I think we could get confused if we overreact to what might be the trend. The thing to do is to step back and say: "What is it that customers really want to do?" They may not be doing it today, and they might not know what they want to do, and they don't know the technology well enough to know what's possible, which is what makes this business kind of fun. But if you can make those predictions then you can end up with a winning business.
As an example, what happened with Mac Excel in 1985 was that we had a programmer called Steve Hazelrig who was doing the printing code. Laser printers were expensive then, and ours was way down the hall, so Steve wrote a little routine that put an image of the page up on the screen, with a magnifying glass so he could examine every pixel to make sure he had an accurate rendering of the page. The program manager Jake Blumenthal came down the hall and said: "Wow, that would be a great feature." That's how Print Preview made it into all of our products: no customer ever asked for it.
So the trick is to understand the things people want to do, and they may not know to ask for them, but the opportunity is there. So I think it's more important to understand what customers really want to do, and to make sure we deliver on that.

JS: Who's driving XML file formats? Is that customers or is it Microsoft?
JR: It's a combination: there are actually multiple motivations. First of all, there's the obvious reason: that you can increase the interoperability with file formats by using XML. We have customers who are very excited by the ability to extract information from Open XML formats and use that as part of their applications. But frankly, we would say that we feel document binaries are outliving their usefulness. They're now a source of security threats, in the sense that it's the new frontier for trying to attack computing infrastructures. And we can have more resilient file formats with XML.
People forget that we had rtf, and we had sylk, and those were "open formats" that people didn't really use that much because they were less performant. OK, so we're now in a different era where we can use XML as the foundation, get the benefits of interoperability, and have it be performant. It can actually be smaller footprint, and it's a much better structure than what we had before.
And, frankly, we've had to step up to the recognition that putting these formats with an open standards group is a good thing: it's a good thing for Microsoft, and it's a good thing for society. When I meet with governments, I recognise there's a legitimate interest in making sure that the way we store information is done on a long term basis. I fully support that. Some people say, "Hey, there's a lot of our intellectual property in there and you're opening that up for cloning." Well, we did. We decided. I decided. We needed to go forward and make these part of a standards body and address that interest.

JS: Do you foresee a mass migration of the installed base to the new formats?
JR: I don't. I wish I did -- I think it would be a good thing -- but I just think that's very hard. I think we have to do an excellent job of supporting compatibility, and with Office 2003, we download the compatibility packs so that you can read and write the XML formats. And we're going to work with third parties on ODF [Open Document Format] converters. But again, given your deeper knowledge, you probably recognise that when you don't have [a feature] in your format, well, how does that work? The idea that somehow everybody is going to be able to use ODF for these things, well, let's just say they've got a lot of work to do!
[PR says one last question, we have to go....]

JS: Have you got in a bit of a rut with Office bundles, now you have SharePoint and OneNote and so on. The operating system side have had a pretty good hit with the Small Business Server, SBS. Now you've got 19 products in Office....
JR: Maybe 22! [Laughs] We've got Small Business Office and it's one of our big products, but its primary distribution vehicle is with the hardware manufacturers and the reseller channel. That's been hugely successful. One of the key changes that we made was to give it small business specific value, with things like Contact Manager. For many businesses, Publisher is the number two used application because they use it as their sales and marketing vehicle.
But we do have Office Professional Plus and Enterprise, which gives people OneNote and Groove, not just Standard and Professional. In the retail market there's an all-in-one type package, like Vista Ultimate, but the high volume at retail is our new Home and Student Edition, at a very attractive price. We used to think of that as our Student and Teacher Edition.

JS: Could you throw SharePoint Server into that? [Laughs]
JR: I'd really like to include some kind of subscription to Office Live: we've looked at doing that and we probably will do that some time in the future. That's one of the beauties of software as a service: it does give us a way to complement the core client applications.
[Getting up to leave]
Thanks very much. It's fun to reflect on history a little bit...

The Raikes empire

The Information Worker Group includes: Access, Business Intelligence Applications, Data Analyzer, Excel, FrontPage, Groove, InfoPath, Live Meeting, Natural Language Processing, Office Live, Microsoft Office System, Office Online, OneNote, Outlook, PowerPoint, Project, Publisher, SharePoint, Visio, and Word.
Business Solutions includes: Microsoft Dynamics, Supply Chain Management, Customer Relationship Management, Financial Management, Microsoft Dynamics AX, Great Plains, Microsoft Dynamics NAV, Enterprise Reporting, Retail Management, Small Business Products, Microsoft Small Business Financials, Microsoft Dynamics SL, Business Contact Manager, Exchange Server and Speech Server.

[Dec 15, 2006] Ralph Griswold died

Ralph Griswold, the creator of the Snobol and Icon programming languages, died in October 2006 of cancer. Until recently Computer Science was a discipline where the founders were still around. That's changing. Griswold was an important pioneer of programming language design; Snobol's string manipulation facilities were different from, and somewhat faster than, regular expressions.
Lambda the Ultimate

Ralph Griswold died two weeks ago. He created several programming languages, most notably Snobol (in the 60s) and Icon (in the 70s) - both outstandingly innovative, integral, and efficacious in their areas. Despite the abundance of scripting and other languages today, Snobol and Icon are still unsurpassed in many respects, both as elegance of design and as practicality.


See also Ralph Griswold 1934-2006 and Griswold Memorial Endowment
plex86.org

Ralph E. Griswold died in Tucson on October 4, 2006, of complications from pancreatic cancer. He was Regents Professor Emeritus in the Department of Computer Science at the University of Arizona.

Griswold was born in Modesto, California, in 1934. He was an award winner in the 1952 Westinghouse National Science Talent Search and went on to attend Stanford University, culminating in a PhD in Electrical Engineering in 1962.

Griswold joined the staff of Bell Telephone Laboratories in Holmdel, New Jersey, and rose to become head of Programming Research and Development. In 1971, he came to the University of Arizona to found the Department of Computer Science, and he served as department head through 1981. His insistence on high standards brought the department recognition and respect. In recognition of his work the university granted him the title of Regents Professor in 1990.

While at Bell Labs, Griswold led the design and implementation of the groundbreaking SNOBOL4 programming language with its emphasis on string manipulation and high-level data structures. At Arizona, he developed the Icon programming language, a high-level language whose influence can be seen in Python and other recent languages.
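
One concrete line of influence: Icon is organized around generators and goal-directed evaluation, and Python's generators (PEP 255 explicitly credits Icon) are a recognizable descendant. Below is a minimal Python sketch of the behavior of Icon's built-in find(), which generates every position at which a substring occurs; the helper name find_all is mine, for illustration:

    def find_all(pattern, text):
        # Like Icon's find(): produce one match position, then
        # suspend until the caller asks for the next one.
        i = text.find(pattern)
        while i != -1:
            yield i
            i = text.find(pattern, i + 1)

    for pos in find_all("ab", "abcabcab"):
        print(pos)  # prints 0, then 3, then 6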

Griswold authored numerous books and articles about computer science. After retiring in 1997, his interests turned to weaving. While researching mathematical aspects of weaving design he collected and digitized a large library of weaving documents and maintained a public website. He published technical monographs and weaving designs that inspired the work of others, and he remained active until his final week.

----- Gregg Townsend, Staff Scientist, The University of Arizona

[Mar 3, 2006] ACM Press Release, March 01, 2006

BTW John Backus authored the extremely speculative 1977 ACM Turing Award lecture "Can Programming Be Liberated from the von Neumann Style? A Functional Style and its Algebra of Programs". It can be found here. As E.W. Dijkstra noted, "The article is a progress report on a valid research effort but suffers badly from aggressive overselling of its significance. This is the more regrettable as it has been published by way of Turing Award Lecture."
From Slashdot: "It's interesting that Peter Naur is being recognized 40 years later, when another Algol team member, Alan Perlis, received the first Turing Award in 1966. Here's a photo of Perlis, Naur and the other Algol 1960 conference participants. [tugurium.com] ".
Some contributions of Algol60 (Score:2, Informative)
by Marc Rochkind (775756) on Saturday March 04, @04:39PM (#14851091)
(http://mudbag.com/)

1. The Report on the language used a formal syntax specification, one of the first, if not the first, to do so. Semantics were specified with prose, however.
2. There was a distinction between the publication language and the implementation language (those probably aren't the right terms). Among other things, it got around differences such as whether to use decimal points or commas in numeric constants.
3. Designed by a committee, rather than a private company or government agency.
4. Archetype of the so-called "Algol-like languages," examples of which are (were?) Pascal, PL/I, Algol68, Ada, C, and Java. (The term Algol-like languages is hardly used any more, since we have few examples of contemporary non-Algol-like languages.)

However, as someone who actually programmed in it (on a Univac 1108 in 1972 or 1973), I can say that Algol60 was extremely difficult to use for anything real, since it lacked string processing, data structures, adequate control flow constructs, and separate compilation. (Or so I recall... it's been a while since I've read the Report.)

Backus Normal Form vs. Backus Naur Form

The following exchange comes from the transcript of the 1978 conference which the book documents:

CHEATHAM: The next question is from Bernie Galler of the University of Michigan, and he asks: "BNF is sometimes pronounced Backus-Naur-Form and sometimes Backus-Normal-Form. What was the original intention?"

NAUR: I don't know where BNF came from in the first place. I don't know -- surely BNF originally meant Backus Normal Form. I don't know who suggested it. Perhaps Ingerman. [This is denied by Peter Z. Ingerman.] I don't know.

CHEATHAM: It was a suggestion that Peter Ingerman proposed then?

NAUR: ... Then the suggestion to change that I think was made by Don Knuth in a letter to the Communications of the ACM, and the justification -- well, he has the justification there. I think I made reference to it, so there you'll find whatever justification was originally made. That's all I would like to say.

About BNF notation

BNF is an acronym for "Backus Naur Form". John Backus and Peter Naur introduced for the first time a formal notation to describe the syntax of a given language (This was for the description of the ALGOL 60 programming language, see [Naur 60]). To be precise, most of BNF was introduced by Backus in a report presented at an earlier UNESCO conference on ALGOL 58.

Few read the report, but when Peter Naur read it he was surprised at some of the differences he found between his and Backus's interpretation of ALGOL 58. He decided that the description of the successor to ALGOL, in which all participants of the first design had come to recognize some weaknesses, should be given in a similar form, so that all participants would be aware of what they were agreeing to. He made a few modifications that are almost universally used and drew up on his own the BNF for ALGOL 60 at the meeting where it was designed. Depending on how you attribute presenting it to the world, it was either by Backus in 59 or Naur in 60.

(For more details on this period of programming languages history, see the introduction to Backus's Turing award article in Communications of the ACM, Vol. 21, No. 8, August 1978. This note was suggested by William B. Clodius from Los Alamos Natl. Lab).
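
For readers who have never seen the notation: a BNF grammar defines the legal sentences of a language by rewriting rules. The toy grammar below is my own illustration (not taken from the ALGOL 60 report), together with a hand-written Python recognizer showing how directly such rules map to code:

    # A toy BNF grammar for signed integers (illustration only):
    #   <integer> ::= <sign> <digits> | <digits>
    #   <sign>    ::= + | -
    #   <digits>  ::= <digit> | <digit> <digits>
    #   <digit>   ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

    def is_integer(s):
        if s[:1] in ("+", "-"):   # optional <sign>
            s = s[1:]
        # one or more <digit>s
        return s != "" and all(c in "0123456789" for c in s)

    assert is_integer("-42") and is_integer("7")
    assert not is_integer("4-2") and not is_integer("")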

[Jan 6, 2006] Frank Cary; Drove Personal Computer Creation for IBM By Patricia Sullivan

washingtonpost.com
Frank T. Cary, 85, the chairman and chief executive of IBM who pushed for the creation of the company's once-dominant personal computer, defended the giant business against a 13-year-long federal antitrust lawsuit and helped launch a decade-long effort by U.S. corporations to end apartheid in South Africa, died Jan. 1 at his home in Darien, Conn.

His wife of 63 years, Anne Curtis Cary, described her husband as a quiet, down-to-earth person who would prefer a bare-bones obituary, if any at all. She declined to provide a cause of death.

Mr. Cary led IBM as its chief executive from 1973 to 1981 and as chairman from 1973 to 1983. He was chairman of biopharmaceutical company Celgene Corp. from 1986 to 1990.

Under his watch in the 1970s, IBM more than doubled its revenues and earnings while operating in one of the most competitive industries in the world.

Mr. Cary, described in the news media at the time as a tight-lipped, powerfully built sales executive with a crushing handshake and an Irish twinkle, oversaw the introduction of the widely popular Selectric typewriter, among other innovations.

Annoyed that smaller companies such as Commodore and Apple had successfully introduced desktop personal computers while IBM's first two efforts had failed, Mr. Cary delegated the task of coming up with a personal computer in a single calendar year to an independent business unit headed by executive Bill Lowe, ordering him to "teach the Big Blue elephant to dance."

Working in Boca Raton, Fla., Lowe and his colleagues used off-the-shelf components, a key decision that let them make their deadline and set the course of the PC industry. They bought the operating system, the software that runs the computer, from the startup company Microsoft; that sale launched the juggernaut that made Microsoft founder Bill Gates the richest man in the world.

When Mr. Cary left IBM, under its then-mandatory policy of retiring executives at 60, the company seemed invincible. Its PC sales were a highly profitable $4 billion, and customers seeking a personal computer were often offered the choice of Commodores, Apples or "IBM clones." But within a decade, the once-dominant business lost control of the PC standard, and of the market as well. No one nowadays refers to the ubiquitous personal computer as an "IBM clone."

Born in Gooding, Idaho, Frank Taylor Cary moved to California when young and graduated from the University of California at Los Angeles. He served in the Army during World War II, received a master's degree in business administration from Stanford University in 1948 and went to work for IBM.

He succeeded Thomas J. Watson Jr., the son of the company's founder, as chief executive. Under Watson, IBM had what the New York Times called "the premier glamour stock," when it sold for 66 times earnings in the mid-1960s. Mr. Cary told the Times that his singular disappointment was that IBM's stock price hovered between 15 and 20 times earnings during his tenure.

He suffered through 45 days of questioning during what he called "the Methuselah of antitrust cases" and what The Washington Post called "a Homeric pretrial paper chase . . . involving 66 million documents and 2,500 depositions."

The trial lasted six years, off and on, with 974 witnesses, 66 million pages of evidence and 104,000 pages of testimony, as the government tried to prove that IBM had broken the law by forming a monopoly and taking out competitors one by one. The case, filed on the last day of President Lyndon B. Johnson's administration, lasted until the Justice Department dropped it in January 1982, a year into President Ronald Reagan's administration.

In 1975, Mr. Cary authorized the creation of IBM's first lobbying office in Washington, which later became a powerful force among corporate lobbyists.

Mr. Cary was not solely concerned with financial goals. According to IBM, he joined General Motors chief executive Tom Murphy and the Rev. Leon Sullivan, a General Motors board member, in 1975 to recruit 21 top American corporate leaders for a decade-long effort to end apartheid in South Africa. The meeting led to the creation of the original Sullivan Principles, which committed businesses to equal and fair pay practices, training of nonwhites for management positions, and improving the quality of life for nonwhites in housing, transportation, school, health and recreation facilities.

Mr. Cary's hobbies included skiing, swimming, tennis and golf. He served on company boards in recent years, including printer manufacturer Lexmark International Inc., medical services provider Lincare Holdings Inc., media company Capital Cities/ABC Inc., and the engineering- and construction-oriented Bechtel Group Inc.

Besides his wife, of Darien, Conn., survivors include four children and 12 grandchildren.

[Jun 4, 2005] Q&A Internet Pioneer Looks Ahead

Computerworld

Q&A: An Internet Pioneer Looks Ahead Leonard Kleinrock predicts 'really smart' handhelds, but warns of out-of-control complexity.

Leonard Kleinrock with Interface Message Processor 1, the Arpanet's first switching node. The minicomputer, configured by Bolt, Beranek and Newman, arrived at UCLA on Labor Day weekend in 1969. Two days later, a team led by Kleinrock had messages moving between IMP1 and another computer at UCLA. Thus the Arpanet, the forerunner of today's Internet, was born.

JULY 04, 2005 (COMPUTERWORLD) - Leonard Kleinrock is emeritus professor of computer science at the University of California, Los Angeles. He created the basic principles of packet switching, the foundation of the Internet, while a graduate student at MIT, where he earned a Ph.D. in 1963. The Los Angeles Times in 1999 called him one of the "50 people who most influenced business this century."

Computerworld's Gary H. Anthes interviewed Kleinrock in 1994 as part of the Internet's 25th anniversary celebration. Recently, Anthes asked Kleinrock for an update.

You told Computerworld 11 years ago that the Internet needed, among other things, "a proper security framework." What about today? In the past 11 years, things have gotten far worse, so much so that there are parts of the population that are beginning to question whether the pain they are encountering with spam, viruses and so on is worth the benefit. I don't think there's a silver bullet. We need systemwide solutions. Strong authentication will help. IPv6 will help. Identifying the source of information-a networking issue-to make sure it's not being spoofed will help.

You called for better multimedia capabilities in 1994 as well. One of the major changes related to multimedia in these 11 years has been the explosion of what we call the "mobile Internet." There's this ability now to travel from one location to another and gain access to a rich set of services as easily as you can from your office. The digitization of nearly all content and the convergence of function and content on really smart handheld devices are beginning to enable anytime, anywhere, by anyone Internet -- the mobile Internet. But there is a lot more to be done.

Such as? We have to make it easier for people to move from place to place and get access. What's missing is the billing and authentication interface that allows one to identify oneself easily in a global, mobile, roaming fashion. We [will] see this change to an alternate pricing model where people can subscribe to a Wi-Fi roaming service offered by their company or from their home ISP. As these roaming agreements are forged between the subscription provider and the owners/operators of today's disparate public-access networks, the effective number of locations where a subscriber will be able to connect at no or low fee will grow. A key component in this environment is internetwork interoperability, not only for data traffic but for authentication and billing. The benefits will be ease of use and predictable cost.

You mentioned smart handheld devices. Where are they going? We are seeing your phone, PDA, GPS, camera, e-mail, pager, walkie-talkie, TV, radio, all converging on this handheld device, which you carry around in addition to your laptop. It will [alter the properties of] a lot of content - video, images, music - to match what's come down to the particular device you have. For example, you may be using your handheld cell phone to serve as a passthrough device to receive an image or video that you wish to display on some other output device - say, your PC or your TV. The handheld may need to "dumb down" the image for itself but pass the high-quality stream to the TV, which will render the stream to match its - the TV's - display capability.

Is that capability of interest to corporate IT? Absolutely. We see e-mail already on the handheld, as well as the ability to download business documents such as spreadsheets and PowerPoint presentations. We'll see the ability to handle the occasional videoconference on a handheld, as well as other media-rich communications. We are right on the threshold of seeing these multifunction devices. Of course, the human-computer interface is always a problem.

How might that improve? Voice recognition is going to be really important. And there will be flexible devices where you actually pull out keyboards and screens and expand what you are carrying with you. Haptic technologies - based on touch and force feedback - are not yet here, but there's a lot of research going on. For example, with a handheld, you could display a virtual keyboard on a piece of paper and just touch that.

You have warned that we are "hitting a wall of complexity." What do you mean? We once arrogantly thought that any man-made system could be completely understood, because we created it. But we have reached the point where we can't predict how the systems we design will perform, and it's inhibiting our ability to do some really interesting system designs. We are allowing distributed control and intelligent agents to govern the way these systems behave. But that has its own dangers; there are cascading failures and dependencies we don't understand in these automatic protective mechanisms.

Will we see catastrophic failures of complex systems, like the Internet or power grid? Yes. The better you design a system, the more likely it is to fail catastrophically. It's designed to perform very well up to some limit, and if you can't tell how close it is to this limit, the collapse will occur suddenly and surprisingly. On the other hand, if a system slowly erodes, you can tell when it's weakening; typically, a well-designed system doesn't expose that.

So, how can complex systems be made more safe and reliable? Put the protective control functions in one portion of the design, one portion of the code, so you can see it. People, in an ad hoc fashion, add a little control here, a little protocol there, and they can't see the big picture of how these things interact. When you are willy-nilly patching new controls on top of old ones, that's one way you get unpredictable behavior.
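
A minimal sketch of the principle Kleinrock describes (the class name and threshold are mine, for illustration): instead of scattering ad hoc guards through the request path, every request passes through a single, inspectable protection layer, so all the overload rules can be read, and reasoned about, in one place:

    class ProtectionLayer:
        # All protective controls live here, in one visible place,
        # rather than being patched ad hoc throughout the system.
        def __init__(self, max_in_flight=100):
            self.max_in_flight = max_in_flight
            self.in_flight = 0

        def admit(self):
            if self.in_flight >= self.max_in_flight:
                return False       # shed load early and visibly
            self.in_flight += 1
            return True

        def release(self):
            self.in_flight -= 1

    guard = ProtectionLayer()

    def handle(request):
        if not guard.admit():
            return "503 overloaded"
        try:
            return "ok: " + request
        finally:
            guard.release()

    print(handle("ping"))  # ok: ping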

[Apr 16, 2005] The Daemon, the Gnu and the Penguin By Peter H. Salus

Apr 16, 2005 | groklaw.net

So, by the beginning of 1974 there were a number of user groups exchanging information and a new operating system that was beginning to get folks excited. No one had thought seriously about licensing. And there were 40 nodes on the ARPAnet.

Early in 1974, Mel Ferentz (then at Brooklyn College)4 and Lou Katz (then at Columbia's College of Physicians and Surgeons)5 called a meeting of UNIX users in New York in May. Ken Thompson supplied them with a list of those who had requested a copy of UNIX after the SOSP meeting. Nearly three dozen in under six months. The meeting took place on May 15, 1974. The agenda was a simple one: descriptions of several installations and uses; lunch; "Ken Thompson speaks!"; interchange of UNIX hints; interchange of DEC hints; free-for-all discussion. Lou told me that he thought there were about 20 people in attendance; Mel thought it might have been a few more than that. That's the organization that's now the USENIX Association.

The Ritchie-Thompson paper appeared in the July 1974 issue of Communications of the ACM. The editor described it as "elegant." Soon, Ken was awash in requests for UNIX.

Mike O'Dell's reaction to the article is typical. In 1974, Mike was an undergraduate at the University of Oklahoma. He told me:

When the famous 1974 CACM issue appeared, I was working at the OU Computer Center. We had this thing called ITF, the Intermittent Terminal Facility, which had the world's worst implementation of BASIC, and one of the guys had written some routines which let you do I/O on terminals -- and this was a non-trivial feat. So a group of us sat down and tried to figure out whether we could do something interesting. ...

The UNIX issue came. I remember going down the hall and getting it out of my mailbox and saying to myself, Oh, ACM's got something on operating systems, maybe it's worth reading. And I started reading through it. I remember reading this paper on the UNIX time-sharing system. It was sort of like being hit in the head with a rock. And I reread it. And I got up and went out of my office, around the corner to George Maybry who was one of the other guys involved with this. And I threw the issue down on his desk and said: "How could this many people have been so wrong for so long?"

And he said: "What are you talking about?"

And I said: "Read this and then try to tell me that what we've been doing is not just nuts. We've been crazy. This is what we want."

The CACM article most definitely had a dramatic impact.

[Aug 22, 2003] SCO Claims 2002 UNIX Source Release Was Non-Commercial By Don Marti

Aug 22, 2003 | Linux Journal
SCO offers a novel interpretation of its 2002 open-source letter.

Blake Stowell, Director of Public Relations for The SCO Group, said this week that SCO's 2002 letter that released old UNIX versions did not offer free, open-source terms but included a non-commercial use restriction. The company then was called Caldera.

"I do not dispute that this letter was distributed and that Caldera at the time allowed 16-bit, non-UNIX System V code to be contributed to Linux for non-commercial use", Stowell wrote in an e-mail interview.

The text of the letter, sent January 23, 2002 by Bill Broderick, Director of Licensing Services for Caldera, in fact makes no mention of "non-commercial use" restrictions, does not include the words "non-commercial use" anywhere and specifically mentions "32-bit 32V Unix" as well as the 16-bit versions.

When asked for clarification on the "non-commercial" assertion, Stowell replied by e-mail, "That is what I was told by Chris Sontag." Sontag is SCO's Senior Vice President and General Manager of the SCOsource division and is responsible for controversial license demands from Linux users based on SCO's claim that Linux contains code illegally copied from SCO.

SCO CEO Darl McBride included examples of allegedly infringing code in a presentation at this week's SCO Forum and promptly triggered a flurry of UNIX and Linux research on such sites as Linux Weekly News, as users attempt to trace the code's origin.

UNIX developer and author Eric Raymond, who formerly maintained an exhaustive buyers' guide covering proprietary UNIX versions for x86 PCs and writes that he has access to both open-source and proprietary UNIX source code, has traced the code McBride presented specifically to the 32V version of UNIX, which is covered by the Broderick letter.

Don Marti is Editor in Chief of Linux Journal.

Linux: How Does SCO Compare?

linux.ittoolbox.com


Adapted from responses by Brian on Tuesday, July 08, 2003

SCO was the first to popularize the use of UNIX software on Intel-based hardware, building on the software they acquired from the XENIX work done by Microsoft in the late 1970s and early 1980s. They've probably sold more UNIX licenses than anyone else, including Sun, but their profit margins have never come close to the margins that Sun has long enjoyed on the larger server systems.

SCO has always excelled in the retail market, but IBM and Sun are moving into that space along with many Linux software vendors.

According to http://www.computerwire.com/recentresearch/CA8DBB43AE69514C80256D57003848CB

"In 1969/70, Kenneth Thompson, Dennis Ritchie, and others at AT&T's Bell
Labs began the development of the Unix operating system. Unable to sell the software due to a DoJ injunction, AT&T instead licensed Unix Version 7 (and later Unix System III and V) to other organizations including the University of California at Berkeley (which developed its version as BSD under a more liberal license) and Microsoft (Xenix), while AT&T continued the development of the original Unix code (System III, System V).

In 1980 Microsoft began the development of Xenix, a version of Unix for Intel processors, with the Santa Cruz Operation (SCO), before handing it over to SCO to concentrate on MS-DOS. SCO renamed it SCO Xenix, then SCO Unix after a code merge with the original System V, before settling on OpenServer."

SCO has always sold low end licenses on Intel hardware. Typical license prices ranged from $100-1000, rarely more than that. In contrast, Sun's prices start around $1000 and go up dramatically for high end hardware. Sun paid at least $1,000,000 to AT&T, but was granted complete rights to all UNIX source code, so that investment was regained long ago.

In contrast, both Caldera and SCO have been jockeying to establish a profitable business, something that neither of them has done consistently. SCO has probably sold more UNIX licenses over the years than anyone else, but just when people really started to run server software in larger numbers on Intel hardware, Sun (with its Solaris on Intel and now Linux) and IBM (with its embrace of Linux software) started to compete in areas which once belonged almost completely to SCO.

From my own experience, I've used a few different versions of Caldera products (Caldera being the Linux company that later bought SCO's UNIX business and took the SCO name). They were always very easy to install, mixing freely available and commercial software. They tended to be quite stable and useful, but rarely near the leading or bleeding edge. I used Caldera Open Linux Base, an early release combining commercial and free software. It was decent. The one I really enjoyed was Caldera Open Linux eDesktop 2.4. Though now quite dated, at the time it was one of the first really friendly, truly easy to install and configure systems.

More recently, I've used Caldera Open Linux Workstation 3.1. It's now dated, too, and probably pulled from the market. But it ran quite well and exhibited many of the same friendly characteristics as eDesktop 2.4. As for SCO UNIX products, I've used XENIX, the Microsoft predecessor to SCO UNIX, and I've used SCO UNIX, too, but I haven't used any of their products in the past two years. In the meantime, things changed: the company changed its name from Caldera to SCO, underwent management changes, and my interest diminished.

But all in all, SCO seems to offer good, stable, but aging software. It compares very well to Linux software in stability, but lags in current features and tends to be higher in price because of its higher proprietary content. Given the present situation, I feel that SCO will probably try to make a little money off the current litigation, then sell off or get out of the business entirely. For that reason, I hesitate to recommend their software - the future stability of the company itself is in question.

More information:
More information on Caldera/SCO is available in the Linux-Select discussion group archives.

[May 21, 2003] The Microsoft-SCO Connection By Steven J. Vaughan-Nichols

May 21, 2003 | practical-tech.com

Cyber Cynic: The Microsoft-SCO Connection -- 21 May 2003

What is Microsoft really up to by licensing Unix from SCO for somewhere between $10 million and $30 million? I think the answer's quite simple: they want to hurt Linux. Anything that damages Linux's reputation, which lending support to SCO's Unix intellectual property claims does, is to Microsoft's advantage.

Mary Jo Foley, top reporter of Microsoft Watch, agrees with me. She tells me, "This is just Microsoft making sure the Linux waters get muddier. They are doing this to hurt Linux and keep customers off balance." Eric Raymond, president of the Open Source Initiative, agrees and adds, "Any money they (Microsoft) give SCO helps SCO hurt Linux. I think it's that simple."

Dan Kusnetzky, IDC vice president for system software research, also believes that Microsoft winning can be the only sure result from SCO's legal maneuvering. But, he also thinks that whether SCO wins, loses, or draws, Microsoft will get blamed for SCO's actions.

He's right. People are already accusing Microsoft of bankrolling SCO's attacks on IBM and Linux.

But is there more to it? Is Microsoft actually in cahoots with SCO? I don't think so. Before this deal, both SCO and Caldera had long, rancorous histories with Microsoft.

While Microsoft certainly benefits from any doubt thrown Linux's way, despite rumors to the contrary Microsoft no longer owns any share of SCO and hasn't for years. In fact, Microsoft's last official dealing with Caldera/SCO was in early January 2000, when Microsoft paid approximately $60 million to Caldera to settle Caldera's claims that Microsoft had tried to destroy DR-DOS. While Microsoft never admitted to wrongdoing, the pay-off speaks louder than words.

The deal didn't make SCO/Caldera feel any kinder towards Microsoft. A typical example of SCO's view of Microsoft until recently can be found in the title of such marketing white papers as "Caldera vs. Microsoft: Attacking the Soft Underbelly" from February 2002.

Historically, Microsoft licensed the Unix code from AT&T in 1980 to make its own version of Unix: Xenix. At the time, the plan was that Xenix would be Microsoft's 16-bit operating system. Microsoft quickly found they couldn't do it on their own, and so started work with what was then a small Unix porting company, SCO. By 1983, SCO XENIX System V had arrived for 8086 and 8088 chips and both companies were marketing it.

It didn't take long, though, for Microsoft to decide that Xenix wasn't for them. In 1984, the combination of AT&T licensing fees and the rise of MS-DOS made Microsoft decide to start moving out of the Unix business.

Microsoft and SCO were far from done with each other yet though. By 1988, Microsoft and IBM were at loggerheads over the next generation of operating systems: OS/2 and Unix. Microsoft watched IBM throw its support behind the Open Software Foundation (OSF), an attempt to come up with a common AIX-based Unix to battle the alliance of AT&T and Sun, which was to lead to Solaris.

Microsoft saw this as working against its plans for the IBM-Microsoft joint operating system project, OS/2, and its own plans for Windows. Microsoft thought briefly about joining the OSF, but decided not to. Instead, Bill Gates and company hedged their operating system bets by buying about 16% of SCO, an OSF member, in March 1989.

In January 2000, Microsoft finally divested the last of its SCO stock. Even before Caldera bought out SCO in August 2000, though, Microsoft and SCO kept fighting with each other. The last such battle was in 1997, when they finally settled a squabble over European Xenix technology royalties that SCO had been paying Microsoft since the 80s.

Despite their long, bad history, no one calling the shots in today's SCO has anything to do with either the old SCO or Caldera. I also think, though, that there hasn't been enough time for SCO and Microsoft to cuddle up close enough for joint efforts against IBM and Linux.

I also think that it's doubtful that Microsoft would buy SCO with the hopes of launching licensing and legal battles against IBM, Sun and the Linux companies. They're still too close to their own monopoly trials. Remember, even though they ended up only being slapped on the wrist, they did lose the trial. Buying the ability to attack their rivals' operating systems could only give Microsoft a world of hurt.

Besides, as Eric Raymond's Open Source Initiative position paper on SCO vs. IBM and Bruce Perens' "The FUD War against Linux" point out, it's not as if SCO has a great case.

Indeed, as Perens told me the other day, in addition to all the points that have already been made about SCO's weak case, SCO made most 16-bit Unix and 32V Unix source code freely available. To be precise, on January 23, 2002, Caldera wrote, "Caldera International, Inc. hereby grants a fee free license that includes the rights to use, modify and distribute this named source code, including creating derived binary products created from the source code." Although the license is not mentioned by name, the letter seems to me to put these operating systems under the BSD license. While System III and System V code are specifically not included, it certainly makes SCO's case even murkier.

SCO has since taken down its own 'Ancient Unix' source code site, but the code and the letter remain available at many mirror sites.

Given all this, I think Microsoft has done all they're going to do with SCO. They've helped spread more FUD for a minimal investment. To try more could only entangle them in further legal problems. No, SCO alone is responsible for our current Unix/Linux situation, and SCO alone will have to face its day in court.

[Apr 19, 2003] Pirates of Silicon Valley (1999)

April 19, 2003 | amazon.com
3 out of 5 stars Cheezefest, but also insightful, April 19, 2003
Reviewer: A viewer (Arlington, MA USA)

The video is brutally honest about how Jobs neglects his daughter and abuses Apple employees. He seems to have had a hard time dealing with his own illegitimacy (he was adopted). It is no coincidence that he shaped the Mac project to be the "bastard" project that tears Apple apart from within. Too bad the movie didn't spend a little more time on this theme. I also loved the climactic scene where Gates and Jobs confront each other. Although certainly fictional, it sums up the Mac/PC war brilliantly. Jobs shouts about how the Macintosh is a superior product, and Gates, almost whispering, answers, "That doesn't matter. It just doesn't matter."

4 out of 5 stars A fair overview of Microsoft and Apple beginnings, August 5, 2001
Reviewer: aaron wittenberg (portland, or)

From the people I've talked to who had seen it, this sounded like a great movie. I finally got my chance to see it just a few days ago.

I wasn't using computers back when this movie starts at Berkeley in the early 70's, but from the time the Apple I was invented until the IBM PC came around, I recall that history pretty well.

This movie does an all-right job of explaining the start of Microsoft and Apple. The downside is that many interesting facts have been left out. The writers never mention why IBM made a personal computer: they did it because almost every other computer-related company was building computers, and they wanted to cash in on the market. The scene where Gates goes to IBM and offers them DOS was not entirely correct.

What they didn't tell you is that IBM first approached the late Gary Kildall (the owner of CP/M) about writing DOS, and for whatever reason he wasn't interested. Next was Microsoft, and Bill Gates was interested, but he sold IBM something he didn't have. Instead, Gates bought 86-DOS for $50,000 from its author, Tim Paterson, who worked at Seattle Computer Products way back in 1980. Tim later went to work for Microsoft.

Just think... had Gary been interested, his business would likely be the Microsoft of today.

There are other small differences where the movie either didn't tell the full story or wasn't entirely accurate. They failed to mention that while Microsoft supplied MS-DOS (the renamed 86-DOS, after acquiring licensing and ownership) to IBM, the IBM PC used virtually all Intel components; a huge chunk of that history came from Intel. It was also never mentioned that Apple used Motorola for processors, or else their beloved Macintosh would not exist.

They were right on track about Apple stealing the graphical interface from Xerox. It is true that Xerox invented it sometime during the early-to-mid '70s and that management wasn't interested in this invention. BIG mistake.

Apple is often credited with inventing the GUI. Not true.

I was a little surprised that the movie made no mention of how Microsoft teamed up with IBM back in the '80s to work on OS/2. Microsoft spent more of its time working on Windows, and IBM finally finished OS/2 on their own. I truly feel that if they had worked together on a single operating system, we would have one today that doesn't crash like Windows and actually works like an operating system should.

If you are even a little interested in the history of computers and how some of these huge companies started out, you might find this very interesting.

I still remember using Windows 1.0 back in 1986. A lot has changed with it!

So to close, this would make a good time killer and something to give you a little more knowledge about computer history. But please keep in mind that not all of the events are totally accurate, and a lot of critical information was left out. This is by no means the end-all authority.

4 out of 5 stars Great Movie!, June 12, 2001
Reviewer: Matt (Reno, NV)

Even though I do not know either man personally, this movie gives so much insight into both sides. It shows that Apple followers are much like Jobs himself: arrogant, condescending, and self-righteous. It also shows why Apple has not, and never will, get more than 10% of the market share. There is even speculation that Jobs is going to drop Motorola and use AMD chips instead. On the other side, it shows how Gates tells his employees what he wants and lets them complete their tasks without intervention, while Jobs continually abuses and verbally trashes his employees to get the job done. Jobs is an emotional powder keg, while Gates plays it cool. Great movie!

4 out of 5 stars The great american success story, timing is everything!, February 27, 2003
Reviewer: A viewer (Nampa, Id. United States)

This takes you from the beginning to the present day. It shows Paul Allen (who now OWNS the Seahawks and Trail Blazers pro teams), Bill Gates, Steve Jobs, etc., dropping out of college to pursue a slow-burning fire that would become the personal computer and Windows software that we know today.

What is interesting is that it shows who talks and who works. Gates lies a lot, pretty much living by the saying "telling people what they want to hear" while Paul Allen grinds away at making code.

On the other end it's much the same: loose-cannon Steve Jobs handles the business part, while we get the sense that Steve Wozniak is a true tech who goes above and beyond Jobs' rantings to produce the final product.

What is so funny is the irony of this movie:

Loan Officer: "Sorry Mr. Jobs, but we don't think the ordinary person will have any use for a computer".

HP: "You think people are interested in something called a mouse?".

Xerox: "We build it and then they can come right in here and steal it from us? It's just not fair, this operating system is a result of our hard work!".

Jobs to Gates: "You're STEALING FROM US!!!"

Assistant to Gates: "Do you realize Apple has a pirate flag over their front door, and they just gave us 3 prototypes of their operating system?"

Jobs: "I don't want people to look at it like a monitor and mouse, I think of this as art, a vision, people need to think outside the box".

Jobs: "You stole it from ussss!"

Gates: "No it's not stealing, you see, it's like we both have this neighbor, and he leaves his door open all the time. You go over there to get his TV, only I've gotten their first..and now you're calling me the thief?!".

Just some of the excerpts that make this movie a classic and show you everything that went down when a bunch of college dropouts set out and changed the world in which we live today.

5 out of 5 stars Apple vs Microsoft...but not a war, January 17, 2002
Reviewer: Sebastian Brytting (Stockholm, Sweden)

The best thing about this movie, I think, is that it manages to deal with the Apple vs. Microsoft discussion without picking a side. It shows Steve Jobs yelling at his employees when his private life is messy. But it also shows him inspiring people and developing products that changed the world, and how he eventually sorted out his private problems.

It shows Bill Gates stealing from every large company he comes across, but he is not portrayed as the 'bad guy.' The viewer can pick sides himself.

Computer-related movies most often end up really lousy, but not this one. When Steve Jobs is having fun, you get happy. When he finds out that Bill Gates has betrayed his trust and stolen his life's work, you get sad. When Bill Gates tries to be 'cool', you laugh. (Hilarious scene)

The other great thing about this movie is that since it's so neutral, it makes even the toughest Microsoft fan admit that it was all pirated from Apple. (Though they always add "at least from the beginning" to preserve their pride) =)

Bottom line: This movie rocks! See it! Newbie or hacker, you've got to see this movie!

[News] It's a blizzard--time to innovate By Rupert Goodwins

Summary: As feet of snow blanket the eastern United States, just remember that it was a blizzard in 1978 that served as the catalyst for the invention of the online bulletin board.

Twenty-five years and one month ago saw a doozy of a storm. Ward Christensen, mainframe programmer and home computer hobbyist, was stuck at home behind drifts too thick to dig. He'd been in the habit of swapping programs with Randy Suess, a fellow hacker--in the old sense of someone who did smart things with dumb electronics--by recording them onto cassettes and posting them.

They'd invented the hardware and software to do that, but in that same chilly month of 1978 someone called Dennis Hayes came up with a neat circuit called the Hayes MicroModem 100. Ward called Randy, complained about the weather, and said wouldn't it be a smart idea to have a computer on the phone line where people could leave messages. "I'll do the hardware. When will the software be ready?" said Randy.

The answer was two weeks later, when the Computerized Bulletin Board System first spun its disk, picked up the line and took a message. February 16th, 1978 was the official birthday: another two weeks after it really came to life, says Christensen, because nobody would believe they did it in a fortnight. He's got a point: these were the days when you couldn't just pop down to PC World and pick up a box, download some freeware and spend most of your time wondering what color to make the opening screen.

Everything about the CBBS was 1970s state of the hobbyist's art: a single 173-kilobyte 8-inch floppy disk to store everything, a 300-baud modem, an 8-bit processor running at a megahertz or so, and--blimey--64KB of memory.

Christensen wrote the BIOS and all the drivers (as well as the small matter of the bulletin board code itself), while Suess took care of five million solder joints and the odd unforeseen problem. Little things were important: the motor in the floppy disk drive ran from mains electricity instead of the cute little five volts of today--things burned out quickly if left on. So the floppy had to be modified to turn itself on when the phone rang, keep going for a few seconds after the caller had finished to let the rest of the computer save its data, and then quietly go back to sleep. Tell the kids of today that...

The kids of yesterday didn't need telling. Bulletin boards running CBBS spread across the US and further afield; by 1980, Christensen was reporting 11,000 users on his board alone, some of whom called in from Europe and Australia--in the days of monopoly telcos with monstrous international call charges. But that was because there was nothing else like it. People dialed in and got news instantly--well, after five hours of engaged tone--that would otherwise have to wait for the monthly specialist magazines to get into print. And of course, they could swap files and software, starting the process which today has grown into the savior of the human race or the destroyer of all that is noble and good (pick one).

The experience of a BBS (the C got dropped as alternative programs proliferated) was very different on some levels from our broadband, Webbed online lifestyle. Three hundred baud is around five words a second: you can read faster than that. Commands were single characters; messages were terse but elegant; and a wrong command could land you with a minute's worth of stuff you just didn't need to know. Some software even threw off users who pressed too many keys without being productive enough: it was a harsh, monochrome and entirely textual world.
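
As a back-of-the-envelope check on that five-words-a-second figure, here is a minimal C sketch; the 10-bits-per-character asynchronous framing (start bit, 8 data bits, stop bit) and the six-characters-per-word average are conventional assumptions, not figures from the article:

    #include <stdio.h>

    /* Rough throughput of a 300-baud BBS link. */
    int main(void)
    {
        const double baud = 300.0;          /* bits per second on the line */
        const double bits_per_char = 10.0;  /* 8N1 framing: start + 8 data + stop */
        const double chars_per_word = 6.0;  /* average word plus trailing space */

        double cps = baud / bits_per_char;  /* 30 characters per second */
        double wps = cps / chars_per_word;  /* = 5 words per second */

        printf("%.0f chars/sec, about %.0f words/sec\n", cps, wps);
        return 0;
    }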

It was also utterly addictive. For the first time, people could converse with others independently of social, temporal or spatial connections. People made the comparison at the time with the great epistolary conversations of the Victorians, where men and women of letters sent long hand-written notes to each other two or three times a day, but BBS life was much more anarchic than that. You didn't know with whom you were swapping messages, but you could quickly find out if they were worth it. At first envisioned as local meeting places where people who knew each other in real life could get together from home, BBSs rapidly became haunts for complete strangers--the virtual community had arrived, with its joys, flamewars and intense emotions.

Ten years later, bulletin boards had evolved into a cooperative mesh of considerable complexity. A system called Fidonet linked them together, so mail from one could be forwarded to another anywhere in the world via a tortuous skein of late-night automated phone calls. File-transfer protocols, graphics, interactive games and far too much politics had all transformed that first box of bits and chips beyond recognition.

Then came that world's own extinction-level event, as the asteroid of the Internet came smashing through the stratosphere and changed the ecosystem for good. That we were ready for the Net, and that every country had its own set of local experts who'd been there, done that and knew what to do next, is in large part due to the great Chicago snowstorm of 1978 and two people who rolled up their sleeves to make a good idea happen. It's an anniversary well worth remembering.

[Feb 7, 2003] UNIX and Bull by Jean Bellec

Feb 7, 2003 | feb-patrimoine.com

Bull, like many "old" computer companies, faced from the early 1980s the dilemma of "open systems". Bull had a "proprietary systems" culture and had a business model oriented towards being the sole supplier of its customers needs. Engineers in all laboratories were used to design all the parts of a computer system, excluding the elementary electronic components. Even when Bull adopted a system from another laboratory, the whole system or software was revisited to be "adapted" to specific manufacturing requirements or to specific customer needs? The advent of open systems, where the specifications and the implementations were to be adopted as such, was a cultural shock that had since traumatized the company.

The new management of Groupe Bull in the 1980s was convinced of the eventual domination of open systems. Jacques Stern, the new CEO, even prophesied in 1982 the decline and fall of IBM under the pressure of government-backed open standards.

The Bull strategy was then to phase out the various proprietary product lines very progressively and to take positions in the promising open systems market.
Many UNIX projects had been considered in the various components of what was now part of Groupe Bull: Thomson had concluded agreements with Fortune, Transac Alcatel was considering its own line (based on the NS3032), and CNET (the engineering arm of France Télécom) had developed its own architecture on the basis of the Motorola 68000 (the SM-90)...

The take-over of R2E in the late 1970s had given Bull-Micral a sizeable position in the PC market. But, at that time, many in the company did not envision the overwhelming success of the personal computer. So Bull decided to invest in the more promising minicomputer market based on the UNIX operating system.

Bull developed a UNIX strategy independently of Honeywell's. Honeywell did start a port of UNIX to a customized 386 PC and reoriented Honeywell Italia towards a 68000-based UNIX computer. However, plans were exchanged between the companies and did not differ significantly, while products were separately developed until the eventual take-over of Honeywell's computer business by Bull.

Open Software

UNIX was, at that time, the property of AT&T, which was then also a potential competitor to existing computer companies. So Bull undertook a lobbying effort, both in the standards organizations (ECMA, ISO) and at the European Commission, to establish UNIX as a standard not controlled by AT&T. This lobbying effort succeeded in establishing the X-Open standards, initially for Europe and eventually backed by U.S. manufacturers.

X-Open standardized the UNIX APIs (Application Programming Interfaces), an obvious desire of software houses. But that objective was not sufficient for a hardware or basic-software manufacturer. So, when approached by Digital Equipment and IBM in 1988, Bull supported with enthusiasm the Open Software Foundation (OSF), which had the purpose of developing an alternative to the AT&T-supplied UNIX source code. An OSF laboratory was installed in Boston with a subsidiary lab in Grenoble. Bull enlisted support for OSF from a majority of X-Open backers.
That was the climax of the Unix wars: while AT&T got the support of Sun Microsystems and of the majority of Japanese suppliers (including NEC), the OSF clan gathered H-P, DEC, IBM and even Microsoft, which planned support for the X-Open source code in the still-secret Windows/NT.
IBM had initially granted the AIX know-how to OSF, but a chasm progressively appeared between the Austin AIX developers and the Cambridge OSF. Eventually, OSF abandoned the idea of using AIX as the base of its operating system and went its own way.
When eventually delivered, the first version of the OSF system was adopted by DEC alone; IBM and H-P stuck to their own versions of UNIX.

In the meantime, Bull and Honeywell engineers had ported license-free old versions of UNIX to some mainframe architectures: Level 6, DPS-4, Intel PC and DPS-7. Those implementations were not fully X-Open standardized and their distribution was quite limited.

UNIX Hardware

UNIX was the only successful example of an architecture-independent operating system. In the early 1980s, that independence, and the related openness of peripheral subsystems, was considered satisfying to customers. Architects at all companies expected to remain free to invent new instruction sets, and the early 1980s saw a blooming of new RISC architectures, increasing processor performance and occupying many engineers with porting "standard" software to those architectures.

The initial entry of Bull into the UNIX market was to adopt the French PTT CNET's platform known as the SM-90. That platform was based on the Motorola MC-68000 microprocessor, for which Thomson (whose semiconductor business later became part of SGS-Thomson) had obtained a manufacturing license.

In parallel, Bull in its Echirolles center and Honeywell in Boston and Pregnana developed several versions of 68000-based UNIX systems. After the purchase of Honeywell's computer assets by Bull, those systems were consolidated into the DPX/2 product line.

Jacques Stern, convinced of the superiority of RISC architectures and having failed to convince his engineers to build the right one, decided in 1984 to invest in Ridge Computers, a Santa Clara start-up founded in 1980 by ex-Hewlett-Packard employees. Ridge systems were licensed to Bull and sold by Bull as the SPS-9. However, Ridge entered a financial crisis in 1986-1988 and, after new capital injections from Bull and others, eventually vanished.

Going back to Silicon Valley to shop for another RISC architecture in 1988, Bull decided to license the upper-range MIPS systems and to move its own MC-68000 products to the soon-to-be-announced MOS-technology MIPS microprocessors. MIPS looked very promising in 1990: its architecture was adopted by Digital, by Siemens, by Silicon Graphics, by Nintendo and by others. However, the multiprocessor version of the MIPS chip was delayed, the company entered a financial crisis, and it ended up absorbed by Silicon Graphics.

Bull decided to abandon MIPS and went shopping for yet another partner. Both Hewlett-Packard and IBM courted Bull in 1991, each hoping Bull would adopt its architecture. The French prime minister publicly supported Hewlett-Packard, while Bull's Francis Lorentz and the French ministry of industry were leaning towards IBM.
Eventually, in January 1992, Bull chose IBM. It adopted the PowerPC RISC architecture, introduced the RS/6000 and entered into cooperative work with IBM's Austin laboratory to develop a multi-processor computer running IBM's AIX operating system. That project, code-named Pegasus, which involved the Bull laboratories of Pregnana and Grenoble, gave birth to the Escala product line.
The PowerPC offering was completed by Bull's adoption of Motorola's workstations and small servers, sold as the Estrella systems.
In the upper range, Bull attempted unsuccessfully to cooperate with IBM on the SP-x parallel systems. It also did not succeed in its 1994 attempt to repackage Escala as a mainframe-priced system under the name Sagister.

Escala and AIX did satisfactorily in the market. But customers switching to UNIX from mainframes wanted the price of their systems low enough to offset their conversion costs, and they were very reluctant to buy the kind of hardware that was profitable to the manufacturer.

In addition to maintaining its AIX/PowerPC-based Escala systems, Bull also had to introduce, in 1996, a line of open systems designed by NEC, based on Intel Pentium microprocessors and running Microsoft Windows/NT.

Conclusion (not a definitive one)

The conversion of the industry to open systems has been much slower than predicted in the early 1980s.

Large-systems customers were reluctant to move, perhaps afraid of Y2K problems. The lower part of the computer world adopted the Intel-Microsoft standard, and that success allowed many companies to take over the small-server market with Windows/NT.

Bull, when defining its UNIX strategy, did not expect that the future of UNIX might reside in an open version (Linux), designed by a then-obscure Finnish university student, running on PC hardware with an early-1980s architecture.

68000 products (Honeywell DPX, SM-90 - Bull SPS-7, DPX-2)
MIPS alliance
PowerPC based servers

Intel based servers

Bull UNIX contributors

Index

Revision: 12 March 2019.

Unix's Founding Fathers

Slashdot

by js7a (579872) <james AT bovik DOT org> on Monday July 26, @05:00AM (#9799332)
(http://www.bovik.org./ | Last Journal: Monday July 19, @05:17PM)

... It was proprietary software, patents wouldn't have done a thing to it.

Actually, a crucial part of Unix was patented, before software patents were technically allowed. But the fact that it had been patented was the main reason that Unix spread so rapidly in the 70s and 80s.

Back in the 70s, Bell Labs was required by the antitrust consent decree of January 1956 to reveal what patents it had applied for, supply information about them to competitors, and license them, in anticipation of issuance, to anyone for nominal fees. Any source code covered by such a Bell Labs patent also had to be licensed for a nominal fee. So just about every computer science department on the planet was able to obtain the Unix source.

The patent in question was for the setuid bit, U.S. No. 4,135,240 [uspto.gov]. If you look at it, you will see that it is apparently a hardware patent! This is the kicker paragraph:

... So far this Detailed Description has described the file access control information associated with each stored file, and the function of each piece of information in regulating access to the associated file. It remains now to complete this Detailed Description by illustrating an implementation giving concrete form to this functional description. To those skilled in the computer art it is obvious that such an implementation can be expressed either in terms of a computer program (software) implementation or a computer circuitry (hardware) implementation, the two being functional equivalents of one another. It will be understood that a functionally equivalent software embodiment is within the scope of the inventive contribution herein described. For some purposes a software embodiment may likely be preferable in practice.

Technically, even though the patent said it "will be understood," and it was understood by everyone as a software patent, it wasn't until the 1981 Supreme Court case of Diamond v. Diehr that it became enforceable as such. Perhaps that is why the patent took six years to issue back in the 70s.

So, through the 1970s, Unix spread because it was covered by an unenforceable software patent! Doug McIlroy said, "AT&T distributed Unix with the understanding that a license fee would be collected if and when the setuid patent issued. When the event finally occurred, the logistical problems of retroactively collecting small fees from hundreds of licensees did not seem worth the effort, so the patent was placed in the public domain."
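
For readers who never met the mechanism in question: the setuid bit is a per-file mode flag that makes an executable run with the file owner's privileges rather than the caller's. A minimal POSIX C sketch of inspecting it (the path used here is just a convenient example of a setuid program on most systems):

    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Check the setuid bit -- the mechanism of U.S. patent 4,135,240 --
     * on a file, and show this process's real vs. effective user IDs.
     * When a setuid-root program such as passwd runs, getuid() still
     * reports the invoking user while geteuid() reports root. */
    int main(void)
    {
        const char *path = "/usr/bin/passwd";   /* classic setuid example */
        struct stat st;

        if (stat(path, &st) == 0)
            printf("%s: setuid bit %s (owner uid %d)\n", path,
                   (st.st_mode & S_ISUID) ? "set" : "clear", (int)st.st_uid);

        printf("this process: real uid %d, effective uid %d\n",
               (int)getuid(), (int)geteuid());
        return 0;
    }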

Windows NT and VMS: The Rest of the Story by Mark Russinovich

December 1, 1998 | winnetmag.com

Most of NT's core designers had worked on and with VMS at Digital; some had worked directly with Cutler. How could these developers prevent their VMS design decisions from affecting their design and implementation of NT? Many users believe that NT's developers carried concepts from VMS to NT, but most don't know just how similar NT and VMS are at the kernel level (despite the Usenet joke that if you increment each letter in VMS you end up with WNT--Windows NT).
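
The joke is at least mechanically sound, as a throwaway C sketch shows:

    #include <stdio.h>

    /* Increment each letter of "VMS" by one: V->W, M->N, S->T. */
    int main(void)
    {
        char name[] = "VMS";
        for (int i = 0; name[i] != '\0'; i++)
            name[i]++;
        printf("%s\n", name);   /* prints WNT */
        return 0;
    }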

As in UNIX and most commercial OSs, NT has two modes of execution, as Figure 2 shows. In user mode, applications execute, and the OS/2, DOS, and POSIX subsystems execute and export APIs for applications to use. These components are unprivileged because NT controls them and the hardware they run on. Without NT's permission, these components cannot directly access hardware. In addition, the components and hardware cannot access each other's memory space, nor can they access the memory associated with NT's kernel. The components in user mode must call on the kernel if they want to access hardware or allocate physical or logical resources.

The kernel executes in a privileged mode: It can directly access memory and hardware. The kernel consists of several Executive subsystems, which are responsible for managing resources, including the Process Manager, the I/O Manager, the Virtual Memory Manager, the Security Reference Monitor, and a microkernel that handles scheduling and interrupts. The system dynamically loads device drivers, which are kernel components that interface NT to different peripheral devices. The hardware abstraction layer (HAL) hides the specific intricacies of an underlying CPU and motherboard from NT. NT's native API is the API that user-mode applications use to speak to the kernel. This native API is mostly undocumented, because applications are supposed to speak Win32, DOS, OS/2, POSIX, or Win16, and these respective OS environments interact with the kernel on the application's behalf.
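
To make that layering concrete, here is a minimal user-mode sketch, assuming a Windows toolchain: the documented Win32 route goes through kernel32.dll, while the mostly undocumented native API is exported by ntdll.dll. Resolving NtQuerySystemInformation by hand, and the information-class value 0, are shown purely for illustration:

    #include <windows.h>
    #include <stdio.h>

    /* The native API returns an NTSTATUS; zero means STATUS_SUCCESS. */
    typedef LONG (WINAPI *NtQSI_t)(ULONG SystemInformationClass,
                                   PVOID SystemInformation,
                                   ULONG SystemInformationLength,
                                   PULONG ReturnLength);

    int main(void)
    {
        /* Documented route: an ordinary Win32 call. */
        SYSTEM_INFO si;
        GetSystemInfo(&si);
        printf("Win32 GetSystemInfo: %lu processor(s)\n",
               (unsigned long)si.dwNumberOfProcessors);

        /* One layer down: the same kind of query through the native API. */
        HMODULE ntdll = GetModuleHandleA("ntdll.dll");
        NtQSI_t NtQSI = ntdll
            ? (NtQSI_t)GetProcAddress(ntdll, "NtQuerySystemInformation")
            : NULL;
        if (NtQSI) {
            BYTE buf[128];
            ULONG len = 0;
            LONG status = NtQSI(0 /* SystemBasicInformation */,
                                buf, sizeof buf, &len);
            printf("native NtQuerySystemInformation: status 0x%08lX, "
                   "%lu bytes returned\n", (unsigned long)status,
                   (unsigned long)len);
        }
        return 0;
    }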

VMS doesn't have different OS personalities, as NT does, but its kernel and Executive subsystems are clear predecessors to NT's. Digital developers wrote the VMS kernel almost entirely in VAX assembly language. To be portable across different CPU architectures, Microsoft developers wrote NT's kernel almost entirely in C. In developing NT, these designers rewrote VMS in C, cleaning up, tuning, tweaking, and adding some new functionality and capabilities as they went. This statement is in danger of trivializing their efforts; after all, the designers built a new API (i.e., Win32), a new file system (i.e., NTFS), and a new graphical interface subsystem and administrative environment while maintaining backward compatibility with DOS, OS/2, POSIX, and Win16. Nevertheless, the migration of VMS internals to NT was so thorough that within a few weeks of NT's release, Digital engineers noticed the striking similarities.

Those similarities could fill a book. In fact, you can read sections of VAX/VMS Internals and Data Structures (Digital Press) as an accurate description of NT internals simply by translating VMS terms to NT terms. Table 1 lists a few VMS terms and their NT translations. Although I won't go into detail, I will discuss some of the major similarities and differences between Windows NT 3.1 and VMS 5.0, the last version of VMS Dave Cutler and his team might have influenced. This discussion assumes you have some familiarity with OS concepts (for background information about NT's architecture, see "Windows NT Architecture, Part 1" March 1998 and "Windows NT Architecture, Part 2" April 1998).

Microsoft OS/2 Announcement

prodigy.net

NEWS RELEASE

M-3592

FOR RELEASE APRIL 2, 1987

Microsoft Operating System/2™ With Windows Presentation Manager Provides Foundation for Next Generation of Personal Computer Industry

REDMOND, WA • April 2, 1987 • Microsoft Corporation today announced Microsoft Operating System/2 (MS OS/2™), a new personal computer system operating system. MS OS/2 is planned for phased release to OEM manufacturers beginning in the fourth quarter of 1987. Designed and developed specifically to harness the capabilities of personal computers based upon the Intel® 80286 and 80386 microprocessors, MS OS/2 provides significant new benefits to personal computer application software developers and end-users.

MS OS/2, a multi-tasking operating system which allows applications software to use up to 16 Mb of memory on 80286 and 80386-based personal computers, can be adapted for use on most personal computers based on the 80286 and 80386 processors, including the IBM® PC AT and other popular systems in use today. The MS OS/2 Windows presentation manager is an integral part of the MS OS/2 product, providing a sophisticated graphical user interface to the MS OS/2 system. The MS OS/2 Windows presentation manager is derived from the existing Microsoft® Windows product developed and marketed by Microsoft for the current generation of IBM personal computers and compatible machines.

The MS OS/2 product is the first to be announced as the result of the Joint Development Agreement announced by IBM and Microsoft in August 1985. Microsoft will be offering MS OS/2, including the MS OS/2 Windows presentation manager, to all its existing OEM customers.

"Microsoft Operating System/2 provides the foundation for the next phase of exciting growth in the personal computer industry," said Bill Gates, chairman of Microsoft. "Microsoft is committed to providing outstanding systems software products to the personal computer industry. MS OS/2 will be the platform upon which the next 1000 exciting personal computer applications software products are built. In particular, our commitment to the power of the graphical user interface has been realized with the announcement of the MS OS/2 Windows presentation manager and the new IBM Personal System/2™ series. We believe that these machines represent a new standard in personal computer graphics capabilities which will drive the software industry toward the creation of incredible new graphics-based applications software products."

Microsoft products to support MS OS/2 local area network systems and applications software developers

In a series of related announcements, Microsoft announced the Microsoft Operating System/2 LAN Manager, a high-performance local area networking software product. The MS OS/2 LAN Manager enables personal computers running either MS OS/2 or MS-DOS® to be connected together on a local area network. Any machine in the network can function as either a server or a workstation.

Microsoft also announced that it plans to begin distribution of the MS OS/2 Software Development Kit (SDK) on August 1. The MS OS/2 SDK includes pre-release software and full product specifications for MS OS/2. This will enable applications developers and other hardware and software developers to begin design and development of products for the MS OS/2 environment in advance of general end-user availability of the MS OS/2 product. The MS OS/2 SDK will include a pre-release version of MS OS/2 and a comprehensive set of software development tools. Microsoft will be providing a high level of support for the MS OS/2 SDK product using the Microsoft Direct Information Access Line (DIAL) electronic mail support service. In addition, users of the MS OS/2 SDK will receive regular software updates and credit for attendance at technical training seminars given by Microsoft personnel.

New version of Microsoft Windows to provide bridge to MS OS/2

Microsoft also announced today a new version of Microsoft Windows for MS-DOS. To be made available in the third quarter of 1987, Microsoft Windows version 2.0 has a number of new features, including significantly improved performance and full utilization of expanded memory features. This MS-DOS version of Windows will run existing Windows applications and will present users with the same visual interface used by the Microsoft OS/2 Windows presentation manager. This interface is based upon the use of overlapping windows rather than the tiling technique used in the current release of Microsoft Windows. The incorporation of the MS OS/2 Windows presentation manager user interface into Microsoft Windows version 2.0 will provide a consistent interface between the current MS-DOS generation of personal computers and future systems based on MS OS/2.

New version of Microsoft MS-DOS to enhance installed base of personal computers

Separately, Microsoft also announced a new version of the MS-DOS operating system, version 3.3, which provides improved performance and increased hard-disk storage capability for MS-DOS personal computers. "Microsoft is committed both to providing significant enhancements to the current generation of operating systems technology, and to introducing revolutionary new products, such as MS OS/2. MS-DOS version 3.3 is part of Microsoft's ongoing effort to improve our current products," said Gates.

Microsoft products support new IBM Personal System/2 series

Microsoft also announced that it will be releasing updated versions of its existing MS-DOS-based applications software, Microsoft Windows and Microsoft XENIX® products to take advantage of the new IBM Personal System/2 series. Microsoft will also be supporting the new IBM Personal System/2 with versions of the Microsoft Mouse, the most popular pointing device in use on personal computers today.

Microsoft Corporation (NASDAQ "MSFT") develops, markets and supports a wide range of software for business and professional use, including operating systems, languages and application programs as well as books and hardware for the microcomputer marketplace.

# # # # #

Microsoft, MS-DOS, XENIX and the Microsoft logo are registered trademarks of Microsoft Corporation.

Microsoft Operating System/2 and MS OS/2 are trademarks of Microsoft Corporation.
IBM is a registered trademark of International Business Machines Corporation.
Personal System/2 is a trademark of International Business Machines Corporation.
Intel is a registered trademark of Intel Corporation.

Product Information

Microsoft OS/2 Windows Presentation Manager

Introduction

The Windows presentation manager is the graphical user-interface component of Microsoft Operating System/2™.

Product Features

The Windows presentation manager provides a windowed graphical user interface to MS OS/2™ users. It replaces the MS OS/2 command line interface with a full function user shell. This shell features:

Tandy 16b

The Tandy 16b, and the similar Tandy 6000, were Tandy's stab at a multi-user business system. These machines came in 1984 equipped with both a Motorola 68000 and a Zilog Z-80. 64KB of RAM was standard, but could be expanded up to a whole megabyte.

Utilizing the 68000 chip, they could also run Xenix, Microsoft's early UNIX experiment, which later morphed into SCO UNIX. Under Xenix, the 16b/6000 could handle two stand-alone terminals in addition to the base unit itself. The 16b came standard with two 8" 1.2-meg floppy drives. The 6000 could also be equipped with one floppy and an internal 15-meg hard drive. A green monochrome screen was also standard.

Pranks at Microsoft

Tsunami 386

Everyone that knows me knows that I tend to get really worked up about some issues, and I've earned a reputation as quite a flamer over the years.

In late '86 and early '87, Microsoft had decided to get out of the Xenix (unix) operating system business by farming the work off to other vendors, and merging the product with AT&T's offering.

I still believe that Xenix is the best operating system Microsoft has ever sold (even though I was also heavily involved with the development of OS/2, OS/2 2.0, Windows 1 through 3.1, and NT/OS) and it frustrated me greatly that we were dumping Xenix in this ignoble way. On April 1, 1987, several of the xenix developers, most particularly Dave Perlin and Paul Butzi, decided to see if they could get me to go really wild.

They cooked up a story about a deal that had been worked out with some company over a machine called the Tsunami 386, which involved giving them extraordinary rights to the xenix sources, and most particularly, the sources and support of the xenix 386 compiler, which I had written and which was in no way coupled to any of the usual AT&T licenses.

My bosses and their bosses were all informed of the prank, and everybody played along beautifully. It must have been quite a sight, as I went storming around building 2, explaining why this contract was so terribly outrageous. I don't think I've ever been so angry about anything. Finally, when I'm getting ready to escalate to Bill Gates, Dave suggests that I check the date.

I swear, I'll get them back some day. I just haven't thought of how, yet.

Re Microsoft and Xenix

Linux-Kernel Archive

On Friday 22 June 2001 18:41, Alan Chandler wrote:
> I am not subscribed to the list, but I scan the archives and saw the
> following. Please cc e-mail me in followups.

I've had several requests to start a mailing list on this, actually... Might do so in a bit...

> I was working (and still am) for a UK computer systems integrator called
> Logica. One of our departments sold and supported Xenix (as distributor
> for Microsoft? - all the manuals had Logica on the covers although there
> was at least some mention of Microsoft inside) in the UK. At the time it

I don't suppose you have any of those manuals still lying around?

> It was more like (can't remember exactly when) 1985/1986 that Xenix got
> ported to the IBM PC.

Sure. Before that the PC didn't have enough RAM. DOS 2.0 was preparing the
DOS user base for the day when the PC -would- have enough RAM.

Stuff Paul Allen set in motion while he was in charge of the technical side
of MS still had some momentum when he left. Initially, Microsoft's
partnership with SCO was more along the lines of outsourcing development and
partnering with people who knew Unix. But without Allen rooting for it,
Xenix gradually stopped being strategic.

Gates allowed his company to be led around by the nose by IBM, and sucked
into the whole SAA/SNA thing (of which DOS was the bottom tier, along with
a bunch of IBM big iron, and from which OS/2 emerged as an upgrade
path bringing IBM mainframe technology to higher-end PCs).

IBM had a unix, AIX, which had more or less emerged from the early RISC
research (the 701 project? Lemme grab my notebook...)

Ok, SAA/SNA was "Systems Application Architecture" and "Systems Network
Architecture", which was launched coinciding with the big PS/2 announcement
on April 2, 1987. (models 50, 60, and 80.) The SAA/SNA push also extended
through the System/370 and AS400 stuff too. (I think 370's the mainframe and
AS400 is the minicomputer, but I'd have to look it up. One of them (AS400?)
had a database built into the OS. Interestingly, this is where SQL
originated (my notes say SQL came from the System/370 but I have to
double-check that, I thought the AS400 was the one with the built in
database?). In either case, it was first ported to the PC as part of SAA.
We also got the acronym "API" from IBM about this time.) Dos 4.0 was new, it
added 723 meg disks, EMS bundled into the OS rather than an add-on (the
Lotus-Intel-Microsoft Expanded Memory Specification), and "DOSShell" which
conformed to the SAA graphical user interface guidelines. (Think an
extremely primitive version of midnight commander.)

The PS/2 models 70/80 (desktop/tower versions of the same thing) were IBM's
first 386-based PC boxes, and came with either DOS 3.3, DOS 4.0, OS/2 (1.0),
or AIX.

AIX was NOT fully SAA/SNA compliant, since Unix had its own standards that
conflicted with IBM's. Either they'd have a non-standard unix, or a non-IBM
os. (They kind of wound up with both, actually.) The IBM customers who
insisted on Unix wanted it to comply with Unix standards, and the result is
that AIX was an outsider in the big IBM cross-platform push of the 80's, and
was basically sidelined within IBM as a result. It was its own little world.

skip skip skip skip (notes about Boca's early days... The PC was launched in
August 1981, list of specs, XT, AT, specs for PS/2 models 25/30, 50, 70/80,
and the "PC Convertible", which is a REALLY ugly laptop.)

Here's what I'm looking for:

AIX was first introduced for the IBM RT/PC in 1986, and came out of the
early RISC research. It was ported to the PS/2 and S/370 as part of SAA, and
was based on Unix SVR2. (The book didn't specify whether the original
version or the version ported for SAA was based on SVR2; I'm guessing both
were.)

AIX was "not fully compliant" with SAA due to established and conflicting
unix standards it had to be complant with, and was treated as a second class
citizen by IBM because of this. It was still fairly hosed according to the
rest of the unix world, but IBM mostly bent standards rather than breaking
them.

Hmmm... Notes on the history of shareware (PC-Write/Bob Wallace/Quicksoft,
PC-File/PC-Calc/Jim Button/ButtonWare, PC-Talk/Andrew Fluegelman; apparently
the chronological order is Andrew-Jim-Bob, and Bob came up with the name
"shareware" because "freeware" was a trademark of Headlands Press, Inc...).
Notes on the IBM RISC System 6000 launch out of a book by Jim Hoskins (which
is where Micro Channel came from, and which also had one of the first CD-ROM
drives: SCSI-based, 380 ms access time, 150K/second, with a caddy). Notes on
the specifications of the 8080 and 8085 processors, plus the Z80.

Sorry, that RISC thing was the 801 project, led by John Cocke, named after
the building it was in, and started in 1975.

Ah, here's the rest of it:

The IBM Personal Computer RT (RISC Technology) was launched in January 1986
running AIX. The engineers (in Austin) went on to the second-generation RISC
System 6000 (the RS/6000) with AIX version 3, launched February 15, 1990.
The acronym "POWER" stands for Performance Optimized With Enhanced RISC.

Then my notes diverge into the history of Ethernet and Token Ring (IEEE
802.3 and 802.5, respectively). The nutshell is that Ethernet was a
commodity and Token Ring was IBM-only, and commodity out-evolves proprietary
every time. The second-generation Ethernet increased in speed 10x while the
second-generation Token Ring only increased 4x, and Ethernet could mix
speeds while Token Ring had to be homogeneous. Plus Ethernet moved to the
"baseT" stuff, which was just so much more reliable and convenient, and
still cheaper even if you had to purchase hubs, because it was commodity.

> instead) and I was comparing Xenix, GEM (remember that - for a time it
> looked like it might be ahead of windows) and Microsoft Windows v 1 . We

Ummm... GEM was the Geos stuff? (Yeah I remember it, I haven't researched
it yet though...)

> chose Windows in the end for its graphics capability although by the time
> we started development it was up to v2 and we were using 286's (this was
> 1987/88).

I used Windows 2.0 briefly. It was black and white, and you could watch the
individual pixels appear on the screen as it drew the fonts. (It looked
about like somebody writing with a pen. Really fast for writing with a pen,
but insanely slow by most other standards. Scrolling the screen was an
excuse to take a sip of beverage du jour.)

The suckiness of Windows through the 80's has several reasons. The first
Apple windowing system Gates saw was the Lisa, -before- the Macintosh, and
they actually had a pre-release Mac prototype (since they were doing
application software for it) to clone. Yet it took them 11 years to get it
right.

In part this was because PC graphics hardware really sucked. CGA, Hercules,
EGA... Painful. Black-and-white frame buffers pumped through an 8 MHz ISA
bus. (Even the move to a 16-bit bus with the AT didn't really help matters
too much.)

In part, when Paul Allen left, Microsoft's in-house technical staff just
disintegrated. (Would YOU work for a company where marketing had absolute
power?) The scraps of talent they had left mostly followed the agenda set by
IBM (DOS 4/5, OS/2 1.0/1.1). A lot of other stuff (like the AIX work) got
outsourced.

Windows was Gates' pet project (I suspect an ego thing with steve jobs may
have been involved a bit, but they BOTH knew that the stuff from Xerox parc
was the future). He didn't want to outsource it, but the in-house resources
available to work on it were just pathetic.

There are a couple of good histories of Windows (with dates, detailed
feature lists, and screen shots of the various versions) available online.
And if you're discussing Windows, you not only have to compare it with the
Macintosh but at least take a swipe at the Amiga and Atari ST as well. And
OS/2's Presentation Manager development, and of course the early X days.
(The first version of X came out of MIT in 1984, the year the Macintosh
launched. Unfortunately, in 1988 X got caught in a standards committee and
development STOPPED for the next ten years. Development finally got back in
gear when the XFree86 guys told X Open where it could stick its new license
a year or two back and finally decided to forge ahead on their own, and
they've been making up for lost time ever since, but they've had a LOT of
ground to cover. Using 3D accelerator cards to play MPEG video streams is
only now becoming feasible to do under X. And it SHOULD be possible to do
that through a 100baseT network, let alone gigabit, but the layering's all
wrong...)

> Logica sold out its Xenix operation to Santa-Cruz around 1987 (definately
> before October 1987) because we couldn't afford the costs of developing the
> product (which makes me think that we had bought it out from Microsoft - at
> least in the UK). By then we had switched our PDP 11s to System V (I also
> remember BUYING an editor called "emacs" for use on it:-) ).

That would be the X version of emacs. And there's the explanation for the
split between GNU and X emacs: it got forked, and the closed-source version
had a few years of divergent development before opening back up, by which
point it was very difficult to reconcile the two code bases.

Such is the fate of BSD licensed code, it seems. At least when there's money
in it, anyway...

And THAT happy experience is why Richard Stallman stopped writing code for a
while and instead started writing licenses. The GPL 1.0 descended directly
from that (and 2.0 from real-world use, experience, and users' comments in
the field).

(Yes, I HAVE been doing a lot of research. I think I'll head down to the UT
library again this afternoon, actually...)

Rob

Xenix: back when MS was just starting out.

netherworld.com

Cable
[email protected]

>Yup, Xenix was 16-bit only.
>But there was a time - and this is hilarious when you
>consider the current state of affairs of NT v. Unix and
>Microsoft v. all the sorry Unix hardware vendors - that
>there were more computers running Microsoft Xenix than ALL
>OTHER VERSIONS OF UNIX COMBINED!

That much may be true. At that time MS had a stable OS, powerful and 99.44% crash-proof, with true multitasking.

In fact, IBM had planned to offer Xenix as an option for its AT computers in 1984, with dumb terminals hooked up to it for multi-user access.

>Yes indeed, Microsoft was the BIGGEST UNIX LICENSEE of ALL!

Maybe.

>No kidding, that's what a mass market platform like a PC
>will get you; even the primitive 286 based PCs that
>existed back then. Lots and lots of stores ran vertical
>applications on Xenix software. Lots and lots of fast food
>joints (big brand names) ran their store and all their
>cash registers on Xenix software.

Yeah, one of my business partners used to work at a Jack In The Box around the time Xenix was popular, and they used it in the kitchen to place orders. He knew the modem number and was able to dial in and run programs on their system. He went to college at the time and learned all he could about Unix.

>Even the mighty Microsoft used 100s of Xenix PCs to run
>their internal email system.

Now they use 100s of multi-processor PC systems to run their e-mail globally.

>(Hard to imagine 1000s of Microsoft developers using vi,
>but who knows? Maybe it happened.)

There are other editors besides the archaic vi; Pico is one such editor.

But what happened to Xenix? Why did it almost vanish in 1987, replaced by a product known as OS/2 (with Windows 3.0 later replacing OS/2 after IBM and MS had a falling out)? Why not just use that X Window interface on Xenix and make Xenix easier to install? :)

Microsoft Applauds European Commission Decision to Close Santa Cruz Operation Matter

Microsoft


Decision upholds Microsoft's right to receive royalties if SCO utilizes Microsoft's technology

REDMOND, Wash.-November 24, 1997 - Microsoft Corporation today applauded the decision of the European Commission to close the file and take no further action on a dispute between Microsoft and Santa Cruz Operation (SCO) involving a 1987 contract. The Commission's decision follows progress by Microsoft and SCO to resolve a number of commercial issues related to the contract, and upholds Microsoft's right to receive royalty payments from SCO if software code developed by Microsoft is used in SCO's UNIX products.

"We are gratified that the European Commission rejected SCO's request for further action and approved our request to close the file on this case," said Brad Smith, Microsoft's associate general counsel, international.

"We were prepared to address SCO's concerns as long as our intellectual property royalty rights could be protected at the same time. The unique nature of the original 1987 contract made it difficult, but we were able to find a workable solution that resolves SCO's major concerns and still protects Microsoft's intellectual property rights," Smith said.

SCO's complaint concerned a contract originally negotiated in 1987 between Microsoft and AT&T for the development of the UNIX operating system. A principal goal of that contract was to help AT&T reduce fragmentation in the UNIX marketplace by creating a single merged UNIX product. To accomplish this goal, under the contract Microsoft developed for AT&T a new Intel-compatible version of UNIX that improved the program's performance and added compatibility with Microsoft's popular XENIX® operating system, which was at the time the most popular version of UNIX on any hardware platform. When completed in 1988, the merged product created by Microsoft was named "Product of the Year" by UnixWorld Magazine.

To prevent further UNIX fragmentation and at AT&T's behest, the contract obligated the parties to ensure that any future versions of UNIX they developed for the Intel platform would be compatible with this new version of UNIX.

As compensation for Microsoft's technology and for its agreement to give up its leadership position with XENIX, AT&T agreed to pay Microsoft a set royalty for the future copies of UNIX it shipped. AT&T subsequently transferred its rights and obligations under the contract to Novell, which transferred the contract to SCO in 1995.

The code developed by Microsoft under the 1987 contract continues to play an important role in SCO's OpenServer UNIX product. This includes improvements Microsoft made in memory management and system performance, development of a multi-step bootstrap sequence, numerous bug fixes, and the addition of new functions originally developed for XENIX and still documented today by SCO for use by current application developers.

SCO complained to the EC that the provisions in the 1987 contract restricted the manner in which it could develop a future version of UNIX (code-named "Gemini") for the 64-bit generation of Intel processors. After reviewing the matter, Microsoft modified the contract to waive SCO's backward compatibility and development obligations, but insisted on continued payment of royalties for any UNIX versions that include Microsoft's technology. Microsoft then requested that the Commission close the file on the case and take no further action, and the Commission agreed to do so. SCO therefore withdrew its complaint.

Microsoft's Smith said there were basically three issues in the contract that needed to be resolved: (1) the backward compatibility requirement, (2) a development requirement designed to reduce UNIX fragmentation under which each new version of UNIX would be built on the previous versions, and (3) royalty payment obligations for Microsoft's intellectual property rights.

"Microsoft was willing to waive the backward compatibility and development requirements, which were included in the 1987 agreement at AT&T's behest, but we needed to preserve our intellectual property royalty rights, which are fundamental to the software industry as a whole," he noted. "Unfortunately, the old contract was written in a way that made it difficult to separate the development requirement from the royalty rights, but we were able to find a solution that gave SCO what it wanted but protected our intellectual property rights."

Microsoft first learned of SCO's complaint to the European Commission in late March. In a May 22 submission to European Commission officials, Microsoft affirmed that it was willing to waive the backward compatibility requirement in the contract, as long as Microsoft's right to receive royalty payment for use of its copyrighted technology was preserved. On May 26, before receiving Microsoft's submission, the Commission provided Microsoft with a Statement of Objections. This is a preliminary step in the EC process that identifies issues for further deliberation and provides a company an opportunity to present its position in person at an internal hearing. Microsoft reiterated its willingness to waive the backward compatibility requirements in an August 1 filing with the European Commission. Microsoft also requested that the Commission hold a hearing, so that Microsoft could document the various ways in which Microsoft's intellectual property is contained in SCO's present UNIX products.

On November 4, after discussions with SCO were unsuccessful in resolving the matter, Microsoft informed SCO that it was unilaterally waiving the compatibility and development requirements of the contract, but retaining the requirement that SCO pay a royalty to Microsoft when it ships product that utilizes Microsoft's intellectual property rights. Upon receiving Microsoft's waiver, the Commission canceled the hearing, which was scheduled for November 13. Despite Microsoft's action to address SCO's concerns, SCO continued to ask for further action by the European Commission. However, the Commission rejected SCO's request and decided to close the case. SCO therefore withdrew its complaint.

"We're pleased that we were able to resolve these issues to the satisfaction of everyone involved, and we're particularly pleased that the EC upheld our right to collect royalties for the use of our technology. This principle is fundamental to the entire software industry," said Smith.

Founded in 1975, Microsoft (NASDAQ "MSFT") is the worldwide leader in software for personal computers. The company offers a wide range of products and services for business and personal use, each designed with the mission of making it easier and more enjoyable for people to take advantage of the full power of personal computing every day.

Microsoft and XENIX are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Other products and company names mentioned herein may be the trademarks of their respective owners.

Note to editors: If you are interested in viewing additional information on Microsoft, please visit the Microsoft Web page at http://www.microsoft.com/presspass/ on Microsoft's corporate information pages.

[Feb 11, 2002] Unix/Xenix history comments -- Re: SCO boot disk images

Feb 11, 2002 | aplawrence.com

From: [email protected] (Bill Vermillion)
Subject: Re: SCO boot disk images
References: <[email protected]> <[email protected]> <[email protected]> <[email protected]>
Date: Mon, 11 Feb 2002 00:00:22 GMT

In article <[email protected]>,
Bela Lubkin <[email protected]> wrote:
>Bill Vermillion wrote:

>> In article <[email protected]>,
>> Tony Lawrence <[email protected]> wrote:
>> >Bela Lubkin wrote:

>> >> Amazing that after all these years and however many SCO Xenix,
>> >> Unix and OpenServer systems in the field (well over a million
>> >> licenses sold), they still can't bring themselves to name it.
>> >> Also amazing the GNU HURD chose the same ID as SysV Unix --
>> >> something like 10 years after SysV Unix had already claimed
>> >> it...

>> >Yeah, doesn't it just frost you? Far and away SCO had more Unix out
>> >there than anybody else, but it always got ignored- didn't exist as
>> >far as anyone else was concerned.

>> I was running SysV systems on iNTEL devices before SCO ever brought
>> forth their first Unix implementation. The SysV2 was really raw, as it
>> was one where you had to add all the lines in the /etc/passwd file
>> by hand, and make sure you got all the :'s correct, etc. That was
>> from MicroPort - which I think was the first iNTEL based SysV2. When
>> I used Esix - which was V.3 - it was about a year before SCO came
>> out with their V.3 based implementation. So you can't blame someone
>> for not mentioning something that hadn't yet existed. Of course
>> Linux came along after that so your point is valid there :-)

>The first releases of SCO Xenix were based on AT&T 7th Edition and then
>System III, back in 1984. But even then they used the 0x63 partition
>ID. MicroPort's '286 SysV was released in early '86, I believe. SCO's
>SysV port (Xenix System V 2.1.0 or so) was around the same timeframe,
>with the '386 version a year or two later.

Tony's comment was on SCO SysV Unix. I'm well aware of the early
Xenix uses as I maintained several machines with it. The MicroPort
time frame is correct as it was being promoted at the 1986 summer
Usenix conference, where I first saw it. They were promoting it along
the lines of 'buy this hard drive and get Unix free'. This was
also about the same time I'd see your posts on the Dr. Dobbs forum.
[some of us still remember!]

>> Until SCO brought out the Unix implementation the Xenix would only
>> support 16 users - so that's one reason it was looked down upon by
>> people in that group. It was thought of more along the lines of a
>> small car than a large truck.

>Was there really a 16-user limitation? I can only think of license
>reasons for that, not software...

That sticks in my memory - but I could be wrong on that one.

>> One of the constant grumblings concerned what was forced upon SCO
>> by licensing issues: only the base system was
>> there, and you had to purchase the development system and text
>> processing system separately. I heard that a lot from others
>> who were using SysV iNTEL based systems, who were independent of
>> the above group.

>SCO was forced to unbundle the devsys and text processing
>portions due to unfavorable license agreements with MS and AT&T,
>respectively. The royalties due on each of those portions would
>have made the overall price of the OS unacceptable. You could
>argue (and I would agree) that SCO should have made better royalty
>agreements with MS & AT&T, initially _or_ by renegotiating. But it
>didn't happen.

You notice I did say 'forced upon SCO'. It was the other Unix
users who complained - thinking it was something that SCO did on
purpose. And who of us who was on this list in the early 1990s
will ever forget all of Larry's rants against SCO. SCO's problem
as I see it was that they were about the only pure SW vendor while
others had HW ties.

Intel even had their own brand of Unix for a while. And maybe you
recall - was that the one that went to Kodak, which then became
Interactive, which then went to Sun? Others came and went.
SCO has always 'been there'. That's more than you can say for
other vendors who championed Unix for awhile and then quit.

Dell comes immediately to mind - the one Larry championed so
loudly. They pushed it for a while and then they dropped it. Dell
later pushed Linux and then dropped it. Now if they would only
push MS products there might be hope ;-).

The next time I stumble across the SCO price list from 1984/5
that has the pricing for Xenix on the Apple Lisa and the Lyrix word
processing for other platforms [I think the VAX was included] I'll
scan it in. Far too many think of SCO as only doing Xenix and
Unix on iNTEL, but they were far more than that. Their list of
cross-assemblers for different CPUs/platforms was amazing too.
I had forgotten how broad that field was - they had well over a dozen
at that time.

>By the OSR5 timeframe, when we finally got rid of the MS-based
>development system, the idea of selling the DS separately was well
>entrenched, and persists to date (though you can get really deep
>discounts by joining a developer's program, which is either free or
>cheap -- I've lost track). And the text processing package had become
>almost completely irrelevant.

The EU suit against Microsoft, making them stop forcing the inclusion
of the Xenix code, was a good thing. ISTR that it was only about
six months after that when SCO was able to drop that part. People
seem to forget that MS's licensing hurt more than just MS users.
Given the environments where SCO was used I don't know whether
a cheaper or bundled DS would have been beneficial to the business
side or not.

About the only thing the text-processing was being used for by that
time, in many cases, was to write/format man pages, it seemed. Of course all
of SCO's man pages were already formatted, probably because of this.
And writing in troff style was certainly nothing I ever felt I
would like to learn.

The best parts of AT&T text processing never seemed to make it past
AT&T. I was really impressed by the Writers Work Bench. But by
that time serious document production was being done by companies
who specialized in it - and it really didn't belong in the OS.
FrameMaker comes immediately to mind. That did some truly amazing
things, but its target customers were HUGE companies. Main users
were places such as drug manufacturers who would generate a
semi-truck full of paper for submission for drug approval, and
automobile manufacturers. Unix really shone in those environments.

Bill

--
Bill Vermillion - bv @ wjv . com

Computer Source Publications - Under The Hood Part 8

November 4, 2002 | computersourcemag.com


E Pluribus UNIX

Last time, I explained how UNIX evolved and became the operating system (OS) that most computer science students were most familiar with and wanted to emulate. Now, I'll explain how Microsoft became a champion of UNIX for the microcomputer.

Due to its portability and flexibility, UNIX Version 6 (V6) became the minicomputer OS of choice for universities in 1975. At about the same time, microprocessors from Intel, Motorola, MOS Technology and Zilog ushered in the age of the microcomputer and created the home- or personal-computer market. That's also when Bill Gates and Paul Allen founded Micro-Soft and created the MBASIC Beginner's All-purpose Symbolic Instruction Code (BASIC) interpreter for the MITS Altair 8800 microcomputer.

Ironically, MBASIC was actually written on a Digital Equipment Corporation (DEC) Programmed Data Processor 11 (PDP-11) minicomputer running UNIX V6 on the University of California at Berkeley (UCB) campus, which Gates and Allen timeshared.

The Altair 8800 used the Intel 8080 microprocessor, which couldn't run UNIX. Instead, it used MBASIC as a self-contained programming environment. The same was true of the MBASIC interpreters for the Motorola 6800 for the Ohio Scientific OS-9, the Zilog Z80 for the Tandy/Radio Shack TRS-80 and the MOS Technology 6502 for the Apple II and Commodore Personal Electronic Transactor (PET). Each MBASIC interpreter was custom-written specifically for each processor.

UNIX could also be ported to different processors, but at that time only ran on high-end minicomputer and mainframe systems from DEC and IBM. In 1974, the closest thing to UNIX was Digital Research CP/M for the Intel 8080 and Zilog Z80 microprocessors. By 1977, 8080 or Z80 systems with an S-100 bus running CP/M were considered as close to a "real computer" running UNIX as you could get with a microcomputer. It was at this time that Micro-Soft became Microsoft and expanded its inventory of language offerings.

In 1978, Bell Labs distributed UNIX with full source code and, within a year, academic researchers began developing their own custom versions, most notably the UCB Berkeley Standard Distribution (BSD). In 1979, Microsoft licensed UNIX directly from AT&T, but couldn't license the UNIX name, so it called its UNIX variant Microsoft XENIX.

XENIX was originally developed on a DEC Virtual Address Extension (VAX) running the Virtual Memory System (VMS) and a PDP-11 running UNIX V7, albeit now using Microsoft's own in-house minicomputers, and then converted into assembly language specific to the new 16-bit Motorola 68000 and Intel 8086 microprocessors. This put XENIX at the high end of the microcomputer market, which was still dominated by 8-bit machines, but well below the lowest end of the minicomputer market.

In 1979, brothers Doug and Larry Michels founded the Santa Cruz Operation (SCO) as a UNIX porting and consulting company using venture capital from Microsoft, which handed over all further development of Microsoft XENIX to SCO. Doug Michels recalled that the company's name was a bit of "social engineering" to obscure the fact that it was essentially a two-man operation. "I'd call up and say, 'This is Doug from the Santa Cruz Operation' and be pretty sure they wouldn't catch that the 'O' was capitalized and think I was from another branch of their company."

By 1980, the UNIX family tree had split into three distinct major branches:

  1. AT&T UNIX System III from Bell Labs' UNIX Support Group (USG).
  2. Berkeley Standard Distribution 4.1 from UCB.
  3. XENIX 3.0 from Microsoft and SCO.


Microsoft XENIX was initially an Intel 8086 port of AT&T UNIX Version 7 with some BSD-like enhancements. This became Microsoft/SCO XENIX 3.0 a year or so later. SCO XENIX 5.0 was updated to conform to AT&T UNIX System V Release 0 (SVR0) in 1983, for which SCO bought its own rights to the UNIX source code. By then, XENIX had the largest installed base of any UNIX system of the early 1980s.

Microsoft acquired a 25 percent share of SCO, which at the time gave it a controlling interest. While SCO handled the actual development and added some enhancements of its own, Microsoft handled the marketing of the product, which it touted as the "Microcomputer Operating System of the Future!"

A 1980 issue of Microsoft Quarterly stated, "The XENIX system's inherent flexibility … will make the XENIX OS the standard operating system for the computers of the '80s." The 1983 XENIX Users' Manual declared, "Microsoft announces the XENIX Operating System, a 16 bit adaptation of Bell Laboratories UNIX™ Operating System. We have enhanced the UNIX software for our commercial customer base, and ported it to popular 16-bit microprocessors. We've put the XENIX OS on the DEC® PDP-11™, Intel® 8086, Zilog® Z8000 and Motorola® 68000." It went on to warn against "so-called UNIX-like" products. Similar sentiments were echoed in ads for Microsoft XENIX in the UNIX Review and UNIX World magazines as late as 1984. That's when Microsoft and SCO had a parting of the ways.
What changed?
On August 12, 1981, the IBM Model 5150 Personal Computer changed everything. Then, on January 24, 1984, the Apple Macintosh changed everything … again!

-Dafydd Neal Dyar

notes01

guinness.cs.stevens-tech.edu/~jschauma
Slide ``Unix history'':

trademark UNIX, genetic UNIX, UNIX-like
	very simple: trademark UNIX: certified by The Open Group to bear the
		trademark UNIX(R), genetic: by inheritance, possibly without
		trademark (examples: BSDs), UNIX-like: independent (linux)

	1969: Thompson and Ritchie worked on Multics: Multiplexed Information
		and Computing Service; joint effort by Bell Telephone
		Laboratories (BTL), GE and MIT.

		BTL withdrew, and "Unics" (UNIplexed Information...) was
		begun; Thompson wrote "B", a ``cut-down version'' of BCPL
		(Basic Combined Programming Language)

	1970: First elementary UNIX system installed on PDP-11 for text
		preparation, featuring such exciting tools as ``ed'' and
		``roff''.

	1971: First manual published.  (Manual in print!)  Commands included
		b, cat(1), chmod(1), chown(1), cp(1), ls(1), mv(1), wc(1)

	1972: Ritchie rewrote "B" to become "C"; Thompson implements the
		concept of the pipe (which is attributed to Douglas McIlroy,
		but simultaneously popped up at Dartmouth)

[Photo caption: Ritchie and Thompson, porting UNIX to the PDP-11
via two Teletype 33 terminals.]

	1974: Thompson teaches at Berkeley.  Together with Bill Joy (Sun!),
		Chuck Haley and others, he developed the Berkeley Software
		Distribution (BSD)

		BSD added (over the years) the vi editor (Joy), sendmail,
		virtual memory, TCP/IP networking

	throughout 70's:
		Second Berkeley Software Distribution, aka 2BSD.  final
		version of this distribution, 2.11BSD, is a complete system
		used on hundreds of PDP-11's still running in various corners
		of the world.
		
		spread of UNIX also thanks to VAX, to which Joy ported 2BSD,
		which included the virtual memory kernel

		other important factors influencing OS development:  research
		from Xerox Parc, including GUI studies and the mouse, then
		adopted by Steve Jobs for the Apple OS.

	1978: UNIX Version 7 was released and licensed (free to universities);
		from here on (actually, since Version 5), we see two distinct
		directions: BSD and what was to become ``System V''

	1979: 3BSD released for VAX

		At this time, commercial vendors become interested in UNIX and
		start to license it from BTL / ATT.  ATT could not sell the
		work from BTL, so it spread to universities and academia.
		Eventually, work from ATT was taken over by Western Electric,
		to become UNIX System Laboratories, later on owned by Novell.


		Some dates:

Version 6 	1975 	Universities
Version 7 	1978 	Universities and commercial. The basis for System V.
System III 	1981 	Commercial
System V, Release 1 	1983 	Commercial
System V, Release 2 	1984 	Commercial, enhancements and performance improvements
Version 8 	1985 	Universities
Version 9 	1986 	Universities
Version 10 	1989 	Universities


	The 80s:
		Note that there never was a ``System I'':  it was thought that
		a ``I'' would imply a buggy system.  (Note references to today's
		version insanity:  RH 100.12, SuSE 7, etc., Solaris 2.6
		becomes Solaris 7)  Software version numbering is (largely)
		arbitrary!

		Note there was no 5BSD:  4.1BSD should have been 5BSD, but
		ATT thought users would get confused with ``System V'' and
		objected.

		4.2BSD shipped in greater numbers than SV, since early SV
		versions did not yet include TCP/IP (developed with funding
		from DARPA at Berkeley) or the Berkeley Fast Filesystem
		(covered in a future lecture).

Legal stuff:
	Up until 4.3BSD-Tahoe (in which machine-dependent and
	machine-independent parts were first separated), everybody who wanted
	to get BSD had to get a license from ATT (BSD was never released as
	binary only, but always contained the source, too).  Other vendors
	wanted to use the Berkeley networking code, but didn't want to pay for
	the entire BSD binaries in licenses.

	TCP/IP, entirely developed at Berkeley, was broken out of BSD and
	released in 1989 as the Networking Release 1, with the first BSD
	license.

	Then, people wanted to get a more complete version of a freely
	redistributable OS.  So folks at Berkeley started to rewrite every
	utility from scratch, solely based on the documentation.  In the end
	only about 6 files were left that were ATT contaminated and could not
	trivially be rewritten.  The rewrites etc. were released as Networking
	Release 2;  soon after the remaining parts were rewritten and 386/BSD
	was released which then turned into NetBSD (hence the name).

	Similarly, BSDI rewrote the files and released their system, even
	advertising it as UNIX (call 1-800-ITS-UNIX).  USL (Unix System
	Laboratories), the part of ATT that sold UNIX, did not like that one
	bit.  Even after BSDI changed their ads and didn't claim it was UNIX,
	they still sued, claiming that BSDI contains USL code.

	BSDI argued that it was willing to discuss the six files it wrote, but
	that it should not be held responsible for the Berkeley code.  USL,
	knowing they'd have no case based on just six files, *refiled* the
	lawsuit against BSDI and UC Berkeley.

	UC Berkeley then counter-sued USL, saying they didn't comply with
	their license (no credit for code incorporated into SV).  Soon after,
	USL was bought by Novell, and settlement talks started.  Settlement was
	reached in 1994: three out of 18,000 files of Networking Release 2
	were removed, some minor changes.

	This was released as 4.4BSD-Lite.  Since the settlement also included
	that USL would not sue anybody who used 4.4BSD-Lite as the basis,
	BSDI, NetBSD and FreeBSD all had to merge their changes with
	4.4BSD-Lite and get rid of the encumbered files.

	We'll cover more legal stuff in a future lecture.


BSD: First to support VM (inspired by SunOS implementation and MACH OS); SunOS
versions 3 and 4, which brought the UNIX world some of its most well known
features, such as NFS, were derived from 4.2BSD and 4.3BSD, respectively. The
NetBSD and BSD/OS operating systems are descendants of 4.4BSD, with a wealth
of new features; FreeBSD and OpenBSD are two other operating systems which are
descended from 4.4BSD through NetBSD, and, last but certainly not least,
Apple's "Rhapsody", "Darwin", and "OS X" operating systems are mostly NetBSD
and FreeBSD code running atop a microkernel.

System V R4: Commercial UNIX's attempt at standardization. Most commercial
versions of UNIX are compliant with SVR4; some are compliant with the more
recent, but only slightly different, SVR4.2 or SVR5 specifications. Sun's
Solaris is SVR4, with many Sun enhancements; SGI's IRIX is, similarly, SVR4
with many subsequent changes by SGI; UnixWare is SVR4.2 or SVR5, depending on
version. Most differences between different vendor operating systems derived
from SVR4 are not obvious to the application programmer or are comparatively
minor.

Another Unix-like operating system is Linux. While Linux isn't directly
descended from any version of Unix, it is generally similar to SVR4 from the
programmer's point of view; modern Linux systems also implement much of the
functionality of the 4.4BSD-derived systems such as NetBSD or OS X.
Unfortunately, sometimes Linux quite simply "goes its own way" to an extent
otherwise relatively uncommon in the modern era of Unix; functions won't work
quite as they do in other versions of Unix, common utilities will implement
strange new options and deprecate old ones, and so forth. If you learn to
write code that is portable between many versions of Unix, it will run on
Linux -- but be prepared to scratch your head at times!

Xenix was Microsoft's version of UNIX for microprocessors.  When Microsoft
entered into an agreement with IBM to develop OS/2, it lost interest in
promoting Xenix.  Microsoft transferred ownership of Xenix to SCO in an
agreement that left Microsoft owning 25% of SCO.  However, Microsoft continued
to use Xenix internally, submitting a patch to support functionality in UNIX
to AT&T in 1987, which trickled down to the code base of both Xenix and SCO
UNIX. Microsoft is said to have used Xenix on VAX minicomputers extensively
within their company as late as 1992.

SCO released a version of Xenix for the Intel 286 processor in 1985, and
following their port of Xenix to the 386 processor, a 32-bit chip, renamed it
SCO UNIX.


Today's big UNIX versions: SCO UNIX, largely SVR4, mostly irrelevant
			SCO UnixWare (from Novell (again)).
			SCO Linux (from Caldera)

			SunOS: largely irrelevant, as it's superseded by
			Solaris.  SunOS is BSD derived.

			Solaris: SVR4, including features from SunOS (such as
			NFS).  Naming nonsense:  Solaris 2.x contains SunOS 5.x
			(Solaris 2.3 = SunOS 5.3, Solaris 2.4 = SunOS 5.4, etc.),
			and Solaris 2.6's successor was rebranded plain
			Solaris 7.  silly
			Strength: NFS
			One of the most popular UNIX versions.  Here lie big
			bucks!

			HP-UX: Version 10 mostly SVR4, dunno much about it.
			
			Digital UNIX: DEC's version, mostly BSD.

			IRIX:  guinness, SV based, includes BSD extensions /
			compatibilities from SGI for Mips architecture.
			Strength: XFS, graphics (SGI -> OpenGL)
			Might soon go away in favor of Linux

			AIX: IBMs SV based, including a large number of BSD
			changes.  dunno much about, but obviously IBM's big
			in the linux business these days
			Acronym: Advanced Interactive eXecutive or AIn't uniX

			*BSD:  NetBSD first BSD.  Designed for correctness
			(portability is just a side effect!).  First release
			was 0.8 in March 1993.
			FreeBSD, born as 1.0 in Dec. 1993, concentrates on
			i386 performance
			OpenBSD forked off NetBSD in October 1995,
			self-proclaimed focus on security.
			BSD/OS or BSDi: commercial BSD.  Mostly irrelevant.

			Linux: completely different:  neither genetic nor
			trademark.  Minix-like.  Just a kernel, not a complete
			OS.  Only GNU makes it a complete OS (so Stallman has
			a point, after all).  Kernel first announced in
			1991.  Monolithic.

Slide ``Some UNIX versions'':
Monolithic vs microkernel:
The microkernel approach consists of defining a very simple virtual machine
over the hardware, with a set of primitives or system calls to implement
minimal OS services such as thread management, address spaces and interprocess
communication.
The main objective is the separation of basic service implementations from the
operation policy of the system.

Examples of a microkernel: 
	GNU Hurd, AIX, Windows NT, Minix, QNX

Monolithic kernels:
	traditional UNIX kernels, BSD
hybrid monolithic kernel:
	can load modules at runtime (Linux, BSD, ...)


Some of the more interesting other UNIX and UNIX-like OS:

		GNU started in 1983 by Stallman.  Intention: use HURD
		(a Mach microkernel), then took Linux.  GNU's Not Unix.

		HURD:  a unix-like microkernel, currently based on GNU Mach

		Mach: unix-like microkernel developed at Carnegie Mellon.
		Mach-based OS include NeXTSTEP and OS/2

		NeXTSTEP: of interest due to connection to Mac OS X.
		Influence on WindowManagers:  see AfterStep, WindowMaker etc.
		NeXT was founded by Steve Jobs (throwing in the towel at Apple
		after revolutionizing the GUI using ideas from Xerox Parc with
		the Apple Lisa and the Apple Macintosh) around 1985.

NeXTSTEP is the original object-oriented, multitasking operating system that
NeXT Computer, Inc. developed to run on its proprietary NeXT computers
(informally known as "black boxes"). NeXTSTEP 1.0 was released in 1989 after
several previews starting in 1986, and the last release 3.3 in early 1995. By
that point NeXT had teamed up with Sun Microsystems to develop OpenStep, a
cross-platform standard and implementation (for SPARC, Intel, HP and NeXT m68k
architectures), based on NeXTSTEP.

		includes: mach-kernel + BSD code, postscript windowing engine
		(see OS X), Objective-C, advanced, interesting UI (dock,
		shelf)

		Apple bought NeXT in 1997 (putting Apple under Steve Jobs again).


	Darwin:
		open source, XNU kernel, integrates Mach 3.0, and Net- and
		FreeBSD.  Stand-alone OS, but really interesting as the core
		of Mac OS X.

	Mac OS X:
		Apple's Unix (not UNIX - not trademarked).  Combines NeXTSTEP
		and Darwin.  Quartz (postscript based windowing engine),
		netinfo, unix core + pretty user interface

Some other interesting UNIX versions:
	QNX:	POSIX compliant real-time UNIX-like OS

	Plan 9 / Inferno:
		Since 2003 even Open Source.
		Incorporates interesting Operating System research.  again
		from Bell Labs.  Absolutely _everything_ is a file, no
		distinction between a local and a remote object.  low-level
		networking protocol known as 9P


Slide: UNIX Basics:
	- kernel: schedules tasks, manages storage, controls hardware, makes
	  system calls

	- shell: user interface to the OS and other applications, usually
	  a command interpreter.  Provides the functionality of pipes
	  (see the sketch after this list).

	- tools and applications: everything else.  Most common tools adhere
	  to written (POSIX, SUSV3) or unwritten standards.

	- multitasking: running multiple jobs at the same time without hanging
	  in between

	- multiuser: more than one person can use the same resources at one
	  time.  Windows did not have this until XXX, Mac OS only since OS X!
	  Prioritization (nice(1)), user privileges and permissions etc.

	- portability: easily port applications from one UNIX to another --
	  even if sometimes hurdles have to be overcome.  But try porting from
	  Mac OS (pre-X) or Windows to UNIX!

	- networking capabilities: inclusion of TCP/IP early on brought email
	  to every UNIX.  UNIX is the foundation of the internet.
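
A minimal C sketch of the pipe functionality mentioned above (my addition, not part of the original notes): it wires up the equivalent of "ls | wc -l" with pipe(2), fork(2), dup2(2) and execlp(3), which is essentially what a shell does for every "|" on a command line.

/* pipeline.c -- sketch of how a shell implements "ls | wc -l".
 * Illustrative only; error handling kept to a bare minimum. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];

    if (pipe(fd) == -1) {               /* fd[0]: read end, fd[1]: write end */
        perror("pipe");
        exit(1);
    }
    if (fork() == 0) {                  /* first child: the writer, "ls" */
        dup2(fd[1], STDOUT_FILENO);     /* stdout now feeds the pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp ls");            /* reached only if exec failed */
        exit(1);
    }
    if (fork() == 0) {                  /* second child: the reader, "wc -l" */
        dup2(fd[0], STDIN_FILENO);      /* stdin now drains the pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp wc");
        exit(1);
    }
    close(fd[0]);                       /* parent must close both ends, or */
    close(fd[1]);                       /* "wc" would never see end-of-file */
    wait(NULL);
    wait(NULL);
    return 0;
}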


Slide: Unix Basics necessitate:

	- multi user concepts:
		each user has unique ID, access granted based on numeric ID,
		not name
		concepts of groups
		some UNIX versions have ACLs

		file ownership:
			ls -l
			chmod bits
			chown
			directory permissions, sticky bit, /tmp dir
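
A small C sketch of the mode bits behind ``ls -l'' (my addition, not from the slides; assumes a hypothetical scratch file named "demo.txt"), using stat(2) and chmod(2):

/* modebits.c -- inspect and change the permission bits on a file. */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;

    if (stat("demo.txt", &st) == -1) {  /* hypothetical scratch file */
        perror("stat");
        return 1;
    }
    /* st_mode holds the type and permission bits; 0644 = rw-r--r-- */
    printf("mode: %04o  uid: %d  gid: %d\n",
           (unsigned)(st.st_mode & 07777), (int)st.st_uid, (int)st.st_gid);

    /* the equivalent of "chmod 640 demo.txt" */
    if (chmod("demo.txt", S_IRUSR | S_IWUSR | S_IRGRP) == -1) {
        perror("chmod");
        return 1;
    }
    return 0;
}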

		process priorities:
			kill, pkill, killall
			nice, runaway processes
			fork-exec, PID 1, example login process:

			init (PID1) 	(via fork-exec)
			^ |
			| +----getty----getty----getty----getty
			|        |
			|      *exec*
			|        |
			|      login
			|        |
			|      *exec*
			|        |
			+----- shell ---  *fork-exec* --- ls
			         ^                         |
			         +-------------------------+
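
The fork-exec idiom in the diagram compresses to a few lines of C (a generic sketch, not code from the notes): the parent forks, the child replaces its image with a new program via exec, and the parent waits, which is what init/getty/login and an interactive shell all do.

/* forkexec.c -- the fork-exec idiom from the diagram above. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == -1) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {                     /* child: replace image with "ls" */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec failed */
        exit(127);
    }
    waitpid(pid, NULL, 0);              /* parent: wait, as a shell does */
    return 0;
}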


		login:
			get username
			modify terminal not to echo next input
			get password
			open /etc/passwd

jschauma:abcdEFG12345:2379:600:Jan Schaumann:/home/jschauma:/bin/ksh

			get information
			if credentials are right, set UID, GID and HOME and
				finally exec shell
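
A rough C sketch of the tail end of this login sequence (my addition; real login(1) also verifies the password, manages terminal echo, updates utmp, and more), using getpwnam(3) to read the /etc/passwd fields shown above:

/* minilogin.c -- look up a user, set UID/GID/HOME, exec the shell.
 * Simplified sketch: must run as root; password check omitted. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pwd.h>

int main(int argc, char **argv)
{
    struct passwd *pw;

    if (argc != 2) {
        fprintf(stderr, "usage: %s username\n", argv[0]);
        return 1;
    }
    if ((pw = getpwnam(argv[1])) == NULL) {  /* consults /etc/passwd */
        fprintf(stderr, "no such user\n");
        return 1;
    }
    /* order matters: drop group privileges before user privileges */
    if (setgid(pw->pw_gid) == -1 || setuid(pw->pw_uid) == -1) {
        perror("setgid/setuid");
        return 1;
    }
    setenv("HOME", pw->pw_dir, 1);
    if (chdir(pw->pw_dir) == -1)
        perror("chdir");                     /* non-fatal in this sketch */
    execl(pw->pw_shell, pw->pw_shell, (char *)NULL);
    perror("execl");                         /* reached only on failure */
    return 1;
}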



		communication with users:
			wall, write, talk
			mailing lists
			make sure you can always reach most of the users
			need to notify users in advance

	- super user account
		as before: only identified by ID 0
		toor vs root
		logins, remote logins, password in the clear, su and sudo

	- security considerations in a networked world
		the internet is no longer what it used to be:
			open relays for SMTP
			telnet vs ssh
		use encryption where possible
		strike balance between security and convenience
		guard against threats from inside or outside the network
			80% of all attempts come from inside
			95% of all successful attempts come from the inside!

[Aug 25, 2001] HALL OF FAME

[Jul 1, 2001] Biographies of pioneers in computing history

This index mentions most of the computer pioneers and their inventions, as well as other important people in computers or the computing industry.

Due to its length, the index has been cut into two parts: [A-J] and [K-Z]. You can navigate between the parts via the alphabet icon (see below) from both pages; they have identical mappings.
Sometimes a link will bring you to an external site; use the back button of your browser to come back.
If you see inactive or unlinked pages, they are either under revision or will be added in the future. So come back often, or press the what's new hotspot on the main page.

Recently we have added a list of historic papers referenced in the biographies; readers asked us to insert an index page for easier retrieval:

[Jun 30, 2001] Usenet Co-founder Jim Ellis Dies

Slashdot

"Jim Ellis, one of the cofounders of Usenet, has passed away. Usenet is considered the first large information sharing service, predating the WWW by years." He was 45 years old, and died after battling non-Hodgkins lymphoma for 2 years. Usenet of course began in 1979, and is the 2nd of the 3 most important applications on the net (the first being email, and the third being the web). Truly a man who changed the world.

Thanks a lot. (Score:1, Interesting)
by Anonymous Coward on Friday June 29, @12:01PM EST (#34)
I never knew you, but thanks anyway, dude.

If Usenet is one of the first really democratic institutions, shouldn't we all recognize this as being as significant as when one of the country's Founding Fathers died? Just an idea...

Son of Usenet (Score:2, Interesting)
by mike_the_kid (http://www.nedyah.org/mailer.asp) on Friday June 29, @12:02PM EST (#35)
(User #58164 Info) http://www.nedyah.org/music_for_the_masses/
Anyone who remembers FidoNet or BBS can realize just how far ahead of its time usenet was. Fidonet was a direct descendant of usenet, and it was quite a resource in its heyday.

The model of usenet, where people can post new articles or reply to older ones, is seen right here on slashdot discussions and all the other web based discussion boards. Bulletin boards are one of the great things about the Internet. The format for discussion, seen today in mailing lists and forums like this, started with usenet.

Fido was my first exposure to this type of information, way before I had an IP address.

If the core of this model was not usenet, what was it? If it was, I must give credit to the people who developed usenet for their forward thinking on information exchange and hierarchy.

It is not a perfect system, but in its flaws (namely the signal to noise ratio) is hope for better methods of communication.

www.nedyah.org -- Careful!

He deserves respect (Score:5, Insightful)
by nougatmachine ([email protected]) on Friday June 29, @12:02PM EST (#39)
(User #445974 Info)
Besides the obvious need to have respect for the dead, I feel that Jim Ellis deserves respect because he made the first internet resource that strived to create a community atmosphere.

This is the model that the web boards found on many websites were based on, and certainly was an influence on the Slashdot model.

Whoever made the sarcastic comment about the graves saying "make money now", I understand you were trying to be funny, but I have a hard time laughing about people who have recently died. It's hardly Jim's fault Usenet has become such a wasteland.

So many important dudez in heaven (Score:4, Insightful)
by chrysalis on Friday June 29, @12:13PM EST (#69)
(User #50680 Info) http://www.jedi.claranet.fr
Richard Stevens. Douglas Adams (not really internet-related but definitely someone I loved). The ZIP algorithm inventor (sorry I can't remember his name) . And now Usenet's daddy. All rest in heaven now.

But do you think Richard Stevens and the Usenet creator were enjoying today's internet? They built something that worked perfectly to exchange tons of messages with low bandwidths. Now, everyone has 100x the bandwidth they had when they designed their product. Computers are 100x faster. So what? Do we find info 100x faster than before?

Actually not. To read a simple text, you have to download hundreds of kilobytes. 99% is bloat (ads, bloated HTML, useless Java, etc). Reading messages on a web discussion board is slow. You have to issue dozens of clicks before reading a thread, and wait for every ad to load. Usenet provided a consistent, sorted, easy to parse, and *fast* way to share info with other people.

7 years ago, I was providing access to 12000 newsgroups on Minitel. Minitel is a French terminal with a 1200 baud modem (and 75 baud for sending). And it worked. People could easily browse all Usenet news. Faster and easier than on web sites.

Another thing is that Usenet let you choose any client. You can choose your preferred fancy interface. Web discussion boards don't leave you a lot of choice.

Migrating from Usenet to web sites is stupid. It wastes a lot of bandwidth for nothing. People do this because:

  • Everyone can open their own web site

  • People can force users to see web ads to read messages. Great deal.

        Web discussion boards provide inconsistency and redundancy. How many web sites discuss the same thing? How many questions are asked on a web site though they were already answered on another web site?

        Usenet solved this a long time ago.

        What killed Usenet is the load of uuencoded warez and spam. Everyone has to filter messages to find real ones. Lousy. But we can't fight stupidity. Give people mail access, they will send spam. Give people Napster, they will share copyrighted songs. Give people a CD writer, they will burn commercial software. Give people the web, they will DOS it or try root exploits. Give people usenet, they will kill it. And there's no way back.

    -- Pure FTP server - Upgrade your FTP server to something simple and secure.

  • That is truly sad (Score:4, Interesting)
    by jfunk ([email protected]) on Friday June 29, @12:23PM EST (#88)
    (User #33224 Info) http://www.funktronics.ca/
    The Internet to me, at first, was news, ftp, and telnet. I spent an inordinate amount of time in 'nn' every day reading sci.electronics, alt.hackers (that was a very fun newsgroup about *real* hacking), and a host of others.

    When I first saw the 'web' I thought, "this is crap, random words are linked to various things and it doesn't seem to make sense. Back to the newsgroups with me." I realise now that it was just my initial sampling that was total crap, but I kept up with the newsgroups anyway.

    I'm totally sad about the state of USENET over the past few years, and this just makes it all worse.

    However, for that long time I spent thriving on the USENET, I'll have to thank Jim Ellis. He indirectly helped me find out about Linux, electronics, hardware hacking, etc. Things I do professionally these days.

    I think it's a somewhat appropriate time for an:

    ObHack (I'm sorry if it's not a very good one. Good hacks, that are not your employer's intellectual property, seem to decrease to almost nothingness when you're no longer a poor student): We had this hub where a heatsink had broken off inside. I grabbed some solid wire and threaded it through the fins and through holes in the circuit board. Through a fair bit of messing around I made sure that it will *never* come out of place again. Ok, that was bad, so I'll add another simple one: Never underestimate the power of a hot glue gun. It allows you to easily provide strain relief for wires that you've soldered onto a PCB, and I've also used it to make prototypes of various sensors. If you want to take it apart, an x-acto knife does the trick very easily.

    Sigh.

    Honor Jim Ellis and help others with lymphoma (Score:5, Informative)
    by redowa (trillatpurpleturtledotcom) on Friday June 29, @02:56PM EST (#197)
    (User #102115 Info)
    One way to truly honor Jim Ellis's memory and his contributions to the internet as we know it would be to help find a cure for the cancer that killed him.

    The Leukemia & Lymphoma Society (nat'l. non-profit org.) has this amazing program called Team in Training - basically, you train for an endurance event (marathon, century cycle, triathlon, etc.), and in exchange for 3-5 months of professional coaching, staff support, transportation, accommodation, and entrance fee for your event, you agree to fundraise for the Leukemia & Lymphoma Society.

    It's such an inspiring experience. It's totally doable - you can go from complete slothdom to finishing a marathon in just a few months. And you get to meet patients with various blood-related cancers, and hear about their experiences - after you find out what chemo & marrow transplants are like, suddenly your upcoming 14-mile run doesn't seem so hard - and you directly affect their chances of survival with every dollar you raise. It is such a good feeling, both physically and mentally, to be a part of this program.

    Usenet when you could talk to the heroes (Score:2)
    by King Babar on Friday June 29, @03:46PM EST (#209)
    (User #19862 Info) http://www.missouri.edu/~kingjw
    Here, at the time of the passing of its co-creator, I see a great outpouring of nostalgia for Usenet of old. I also see the posts of many people who were not lucky enough to have seen it at its zenith. I think the one most amazing aspect of Usenet was not merely that you could get fast answers to pressing technical questions, but that you had direct access to some real giants of that day, and could see a little bit about how they think. It wasn't just that there was more signal; in some cases the signal came from the creator of whatever it was you were asking about. Even if they worked for a Big Important Company. So if you asked an interesting question in comp.sys.mac.hypercard, chances were good that somebody from Apple would respond. Alexander Stepanov used to respond to traffic about the C++ STL. World experts at your fingertips everywhere! It should have been paradise!

    And I have to say that by and large we really blew it. It wasn't just the spam, or even the massive flamefests. It was really the corrosive effects of ignorance and greed. Take Tom Christiansen (most recently [email protected]). Not always a bunch of rainbows and smiles, he, but an incredibly well-informed individual whose contributions to Usenet are the stuff of legend. Apparently chased away from Usenet for good by one too many "gimme gimme" question and one too many displays of horrible netiquette. A real tragedy.

    This was around the time I discovered Slashdot, and saw what looked like a more clueful albeit imperfect mirror of the Spirit of Usenet. I was quite cheered when I found out that tchrist himself was becoming a key contributor. It might be a new geek paradise! But, of course, that didn't happen. Tom got chased away again by a bunch of cretins.

    And, getting back to the idea of an elegy for Ellis, I believe the final straw there was some jerk maligning Jon Postel when his obituary came up in this forum. Much worse than spam.

    Babar

    Usenet was NOT the Internet (Score:5, Interesting)
    by JoeBuck (jbuck at welsh-buck dot org) on Friday June 29, @12:34PM EST (#114)
    (User #7947 Info) http://www.welsh-buck.org/jbuck/
    Back in the 80s, Usenet was the net for those of us who couldn't get on the Internet, because we didn't have the connections into DARPA (by virtue of being a defense contractor or big research university) to get on it. The only connectivity we had was 1200 baud modems (in some cases, 300 baud). The way you got on was that you had a Unix system and a modem, and a contact with someone that was willing to give you a news feed (possibly in exchange for lightening the load by feeding a couple of other folks).

    Actually, you didn't even need Unix. I was at a small company that did a lot of digital signal processing, and it was a VMS shop, so we ran Usenet on top of Eunice (a Unix-on-top-of-VMS emulation that sort of worked, but had only symbolic links, no hard links). I was the guy who did the Eunice port for 2.11B news: my first involvement in what would now be called a major open source project.

    Back in those days, to send mail you had to have a picture of the UUCP network topology in your head: a series of paths that would get you from here to there. There were a couple of short cuts: sites that would move messages across the country (ihnp4) or internationally (seismo, which later became uunet, the first commercial provider of news feeds).

    Because of the way Usenet worked, in the days where it went over UUCP (before NNTP), it was based on personal connections and a web of trust. Things were pretty loose, but if someone ignored community norms and behaved in a way that would clearly damage the fabric of the net, they just lost their news feed and that was that. It was cheap Internet connections and NNTP that made Canter and Siegel (the first big Usenet spammers) possible. But this reliance on personal connections had its downside: some admins enjoyed being petty dictators too much. The UUCP connection between AMD and National Semi (yes, competitors fed each other news on a completely informal basis, it was a different era) was temporarily dropped because of a personal squabble between the sysadmins.

    There were many other nets then that weren't the Internet: Bitnet, berknet (at Berkeley) and the like. Figuring out how to get mail around required wizardry: mixes of bang paths (...!oliveb!epimass!jbuck), percent signs, and at-signs (user%[email protected]).

    The user interfaces on sites like Slashdot are still vastly inferior to newsreader interfaces, like trn and friends. I could quickly blast through hundreds of messages, killing threads I wasn't interested in, skimming to get to the meat. If only sites like Slashdot would pay more attention to what worked so well about Usenet.

    MAKE GREEN CARDS FAST WITH SERDAR ARGIC AND KIBO!! (Score:5, Insightful)
    by connorbd on Friday June 29, @12:01PM EST (#33)
    (User #151811 Info) http://www.geocities.com/ResearchTriangle/Station/2266
    I remember the end of the Usenet glory days (mid-90s, unfortunately just after the September That Never Ended), before it was swallowed by spam. Usenet IMHO is the place where net.culture grew up, even if it wasn't part of the Internet in the beginning. No offense to the /. community, but to those of you who never experienced it, Usenet back in the day was a place the likes of which we probably won't see again.

    Places like /. and k5 still have an echo of the old Usenet, and you likewise still get some of it on mailing lists now, but take a look through Google Groups now -- too much garbage, and the community that's there is somewhat isolated because Usenet isn't as integral to the net experience as it once was.

    Two taps and a v-sign for the man -- not everyone can claim to have created a true community single-handedly.

    /brian

    [Jun 29, 2001] Jim Ellis, one of the founders of the Usenet system which predated and helped shape the Web, died

    Jun 29, 2001 | alt.rest.in.peace

    Jim Ellis, one of the founders of the Usenet system which predated and helped shape the Web, died Thursday of non-Hodgkin's lymphoma at his home in Pennsylvania. He was 45. The newbies amongst us might not be familiar with Usenet, a massive information-sharing service where files were swapped, friends and enemies were made, and just about every topic imaginable was discussed. It served as the model for modern message boards and for a long time was the coolest thing happening on the Internet. In 1979, as a graduate student at Duke University, Ellis helped design the system, linking computers at Duke to some at the University of North Carolina. Within just a few years, it spread worldwide. By 1993, there were 1,200 newsgroups and the system reflected an increasingly diverse and chaotic online community. Users would post messages and encrypted files in a series of newsgroups built into a hierarchy of interests, such as rec.collecting.stamps and comp.os.linux. The infamous alt. groups were home to the wilder topics, from alt.religion.kibology to alt.pave.the.earth.

    In time, as with many communities, it got crowded and went into decline. By 1999, an estimated 37,000 newsgroups were in operation, and legitimate postings had largely been drowned out by ads, spam, and flame wars. But the impact of Ellis' creation on our modern Internet can't be dismissed. For his contributions, Jim Ellis received the Electronic Frontier Foundation's Pioneer Award in 1993 and the Usenix Lifetime Achievement Award in 1995.

    An archive of Usenet postings dating back to 1995 is hosted by Google.

    The People's Embassy Computer history section

    Disappeared from the Web...

    hammer.prohosting.com/~penz/home.htm

    10 Years of Impact: Technology, Products, and People. The Role of Courage in Applied Research, by Ivan Sutherland

    Sun

    While the formulation of a research strategy is a business decision for Sun Labs, the choice of a worthy problem is a highly personal decision for researchers.

    "Selecting a project worthy of your team's time and the company's money requires foresight, passion, suspension of disbelief, luck, and courage," said Dr. Ivan Sutherland, Vice President and Fellow of Sun Microsystems. A pioneer in the field of computer graphics and integrated circuit design, Ivan produced some of the world's first virtual reality and 3-D display systems as well as some of the most advanced computer image generators now in use. His groundbreaking research on asynchronous circuits has resulted in technology that could lead to circuits with much higher processing speeds than currently possible using conventional technology.

    "A critical first step in picking problems is to understand where the technology will be in 10 years," he said. "And that requires two things: First, you need to project ahead based on your knowledge of the technology; but there's also a critical element of self-deception. The danger in projecting that far ahead is that you'll become overwhelmed with the complexity or difficulty of your mission and never actually get to work. So in part it's a matter of convincing yourself that things are really simpler than they are and getting started before you realize how hard the problem actually is.

    "It is also important to weigh the opportunity," Ivan continued. "Some problems aren't worth consideration because a solution is simply too far off--a `beam me up Scotty' transporter, for example. I try to select a problem I think I can solve in a limited time frame using the existing base of technology.

    "And on a personal level, it is important not to overlook the role of courage," said Ivan. "With research comes risk. Researchers daily face the uncertainty of whether their chosen approach will succeed or fail. We sometimes face periods of weeks, months, or even years with no visible progress. To succeed, you must have the personal fortitude to overcome discouragement and to keep your focus on the task at hand."

    Successful development of technology can also require courage on the part of the enterprise, according to Jon Kannegaard, Sun Vice President and Deputy Director of Sun Labs, "It can be a leap of faith from a business perspective as well as a personal perspective," he said. "There isn't always an objective way to determine whether or not there's a pot of gold at the end of the rainbow, yet the company is called upon to decide whether or not to invest resources to develop the technology. In some cases, ongoing investment may be required to transform the technology into a product, again with no certainty in the outcome. There's courage required at every step."

    USENIX ;login: - A Tribute to Rich Stevens, by Rik Farrow

    Hal Stern reflected that he respected Stevens's teaching and his ability to illuminate and explain.

    "Charles Mingus, late jazz bassist, sums it up well: 'Taking something complex and making it simple is true creativity.'

    We have lost one of our truly creative."

    [Nov 25, 2000] Michael John Muuss -- homepage of the late author of ping...

    Mr. Muuss was born in 1958, received a BES in Electrical Engineering from the Johns Hopkins University in 1979, and has subsequently received numerous awards and citations for his work, and is a two-time winner of the U.S. Army Research and Development Achievement Award.

    "The author of the enormously popular freeware network tool PING, Mike Muuss, died in a Maryland car crash last night. The accident happened at 9.30pm (New York time) on route 95 as a result of a previous accident.
    Mike hit a car stuck in the middle of the road and was pushed into the path of an oncoming tractor."

    http://www.theregister.co.uk/content/4/14936.html

    [Sep 12, 2000] Raph's Page Online World Timeline -- interesting game technology timeline

    Also mirrored in other sites, for example Four Below Zero - The Gamer - Timelines

    [Jul 21, 2000] Peter Salus wrote a short overview of the vi history in Open Source Library - Papers:

    The original UNIX editor was ed. It was a line editor of reluctant and recalcitrant style. When UNIX (version 4) got to Queen Mary College, London, in 1973, George Coulouris -- a Professor of Computing -- wasn't happy with it. So he wrote a screen editor, which he called "em," or "ed for mortals."

    Coulouris went on sabbatical to Berkeley, where he installed em on "his" machine. A graduate student noticed it one day, and asked about it. Coulouris explained. He then went off to New Jersey to Bell Labs, and when he returned to Berkeley, he found that em had been transmuted into ex, a display editor that is a superset of ed with a number of extensions -- primarily the one that enables display editing.

    At the beginning of 1978, the first Berkeley Software Distribution was available. It consisted of a tape of the Berkeley Pascal System and the ex text editor. The graduate student was Bill Joy, and the distribution cost $50. The next year Berkeley got some ADM-3a terminals, and Joy rewrote ex into vi -- a truly visual editor.

    In sum, ed came out of Bell Labs in New Jersey, went to Queen Mary College in London, from there to the University of California at Berkeley, and from there back to New Jersey, where it was incorporated into the next edition of UNIX.

[May 27, 2000] Falling on their Face: Six Incidents from Corporate History

    May 27, 2000 | cableone.net

3) In 1988/89, UNIX was growing by leaps and bounds. AT&T's Unix Systems Labs and the Open Software Foundation (OSF) each had dozens of allies and were slinging "roadmaps" at each other, claiming to have the best view of the future of the computer industry.

It was obvious that networking, security, multiprocessing, multi-platform, and multi-vendor were the future. Obviously (to all of us UNIX-lovers), that DOS/Windows kludge was nothing but a sick joke of a toy, a prototype that should never have left the lab… worse, it was a copy of Apple's copy of a prototype at Xerox Palo Alto Research Center. All the good computer scientists knew that the future was already under development at CMU, Berkeley, Bell Labs and other great centers of UNIX-based innovation: X-Windows, NeWS, TCP/IP, Distributed Computing Environment (DCE), Kerberos, C++, Object Oriented Programming, etc.

    The "roadmaps" of the various standards committees simply spelled out how each of these "excellent features" would be integrated into product over the next few years. Microsoft (probably Gates) looked at these and again saw the danger… the challengers were about to blow by Microsoft into the new "networked" world, leaving the boys with their toys in the dust, just the way they had left IBM a decade earlier.

Gates went out and bought the best operating system architect willing to challenge UNIX dominance, and the race to Windows NT began. By the time Windows NT 3.1 was finally released (some say the "Not a Toy" version of Windows 3.1), the UNIX community had largely self-destructed into a series of battles between vendors who were sure their fellow UNIX vendors were out to get them. The roadmaps that were waved so bravely in 1989 were never followed; USL was sold to Novell and then dismembered. OSF stumbled along and finally joined X/Open, and the attempt to "standardize" UNIX faded into history. Though IBM, NCR and DEC did field DCE implementations that spanned UNIX and mainframes, Sun and other UNIX vendors scoffed and followed ARPA funding, forsaking much of the early research and standardization efforts. Meanwhile, Microsoft fielded a cheaper, stripped-down DCE in the form of the Windows NT Domain model, which today, with Windows 2000, is beginning to meet the goals of the original DCE effort. Lesson: if you're going to publish a roadmap of where you're going, be sure you get there before your competition!

This bit of history makes me sick. In the case of the Apple/Microsoft encounter, Apple was the proprietary 'fool' and suffered for it. In this case, Microsoft first tried to join the UNIX community early on (remember XENIX?) but was roundly ostracized. Apparently Gates saw a flaw in the UNIX brotherhood that we all missed: the self-destructive Not-Invented-Here attitude that ultimately doomed the Open Systems revolution. We forced Microsoft into a one-against-many position, not unlike the position Apple chose. We should have won. Instead, we fractured into a mass of incompatible (enough) UNIX variants and proceeded to blame each other for the failure to meet the #1 promise of UNIX: source-level platform independence.

In this case, we committed fratricide while Microsoft took our plans for a Distributed Computing Environment and made it the centerpiece of their entire Enterprise business. I don't know about the rest of the UNIX community, but living through this history from my position in Bell Labs, I felt we did fall on our faces!

    Early Unix history and evolution

    Dennis M. Ritchie
    Bell Laboratories, Murray Hill, NJ, 07974
    ABSTRACT

    This paper presents a brief history of the early development of the Unix operating system. It concentrates on the evolution of the file system, the process-control mechanism, and the idea of pipelined commands. Some attention is paid to social conditions during the development of the system.

NOTE: This paper was first presented at the Language Design and Programming Methodology conference at Sydney, Australia, September 1979. The conference proceedings were published as Lecture Notes in Computer Science #79: Language Design and Programming Methodology, Springer-Verlag, 1980. This rendition is based on a reprinted version appearing in AT&T Bell Laboratories Technical Journal 63 No. 6 Part 2, October 1984, pp. 1577-93.

    Introduction

    During the past few years, the Unix operating system has come into wide use, so wide that its very name has become a trademark of Bell Laboratories. Its important characteristics have become known to many people. It has suffered much rewriting and tinkering since the first publication describing it in 1974 [1], but few fundamental changes. However, Unix was born in 1969 not 1974, and the account of its development makes a little-known and perhaps instructive story. This paper presents a technical and social history of the evolution of the system.

    Origins

    For computer science at Bell Laboratories, the period 1968-1969 was somewhat unsettled. The main reason for this was the slow, though clearly inevitable, withdrawal of the Labs from the Multics project. To the Labs computing community as a whole, the problem was the increasing obviousness of the failure of Multics to deliver promptly any sort of usable system, let alone the panacea envisioned earlier. For much of this time, the Murray Hill Computer Center was also running a costly GE 645 machine that inadequately simulated the GE 635. Another shake-up that occurred during this period was the organizational separation of computing services and computing research.

    From the point of view of the group that was to be most involved in the beginnings of Unix (K. Thompson, Ritchie, M. D. McIlroy, J. F. Ossanna), the decline and fall of Multics had a directly felt effect. We were among the last Bell Laboratories holdouts actually working on Multics, so we still felt some sort of stake in its success. More important, the convenient interactive computing service that Multics had promised to the entire community was in fact available to our limited group, at first under the CTSS system used to develop Multics, and later under Multics itself. Even though Multics could not then support many users, it could support us, albeit at exorbitant cost. We didn't want to lose the pleasant niche we occupied, because no similar ones were available; even the time-sharing service that would later be offered under GE's operating system did not exist. What we wanted to preserve was not just a good environment in which to do programming, but a system around which a fellowship could form. We knew from experience that the essence of communal computing, as supplied by remote-access, time-shared machines, is not just to type programs into a terminal instead of a keypunch, but to encourage close communication.

    Thus, during 1969, we began trying to find an alternative to Multics. The search took several forms. Throughout 1969 we (mainly Ossanna, Thompson, Ritchie) lobbied intensively for the purchase of a medium-scale machine for which we promised to write an operating system; the machines we suggested were the DEC PDP-10 and the SDS (later Xerox) Sigma 7. The effort was frustrating, because our proposals were never clearly and finally turned down, but yet were certainly never accepted. Several times it seemed we were very near success. The final blow to this effort came when we presented an exquisitely complicated proposal, designed to minimize financial outlay, that involved some outright purchase, some third-party lease, and a plan to turn in a DEC KA-10 processor on the soon-to-be-announced and more capable KI-10. The proposal was rejected, and rumor soon had it that W. O. Baker (then vice-president of Research) had reacted to it with the comment `Bell Laboratories just doesn't do business this way!'

    Actually, it is perfectly obvious in retrospect (and should have been at the time) that we were asking the Labs to spend too much money on too few people with too vague a plan. Moreover, I am quite sure that at that time operating systems were not, for our management, an attractive area in which to support work. They were in the process of extricating themselves not only from an operating system development effort that had failed, but from running the local Computation Center. Thus it may have seemed that buying a machine such as we suggested might lead on the one hand to yet another Multics, or on the other, if we produced something useful, to yet another Comp Center for them to be responsible for.

    Besides the financial agitations that took place in 1969, there was technical work also. Thompson, R. H. Canaday, and Ritchie developed, on blackboards and scribbled notes, the basic design of a file system that was later to become the heart of Unix. Most of the design was Thompson's, as was the impulse to think about file systems at all, but I believe I contributed the idea of device files. Thompson's itch for creation of an operating system took several forms during this period; he also wrote (on Multics) a fairly detailed simulation of the performance of the proposed file system design and of paging behavior of programs. In addition, he started work on a new operating system for the GE-645, going as far as writing an assembler for the machine and a rudimentary operating system kernel whose greatest achievement, so far as I remember, was to type a greeting message. The complexity of the machine was such that a mere message was already a fairly notable accomplishment, but when it became clear that the lifetime of the 645 at the Labs was measured in months, the work was dropped.

    Also during 1969, Thompson developed the game of `Space Travel.' First written on Multics, then transliterated into Fortran for GECOS (the operating system for the GE, later Honeywell, 635), it was nothing less than a simulation of the movement of the major bodies of the Solar System, with the player guiding a ship here and there, observing the scenery, and attempting to land on the various planets and moons. The GECOS version was unsatisfactory in two important respects: first, the display of the state of the game was jerky and hard to control because one had to type commands at it, and second, a game cost about $75 for CPU time on the big computer. It did not take long, therefore, for Thompson to find a little-used PDP-7 computer with an excellent display processor; the whole system was used as a Graphic-II terminal. He and I rewrote Space Travel to run on this machine. The undertaking was more ambitious than it might seem; because we disdained all existing software, we had to write a floating-point arithmetic package, the pointwise specification of the graphic characters for the display, and a debugging subsystem that continuously displayed the contents of typed-in locations in a corner of the screen. All this was written in assembly language for a cross-assembler that ran under GECOS and produced paper tapes to be carried to the PDP-7.

    Space Travel, though it made a very attractive game, served mainly as an introduction to the clumsy technology of preparing programs for the PDP-7. Soon Thompson began implementing the paper file system (perhaps `chalk file system' would be more accurate) that had been designed earlier. A file system without a way to exercise it is a sterile proposition, so he proceeded to flesh it out with the other requirements for a working operating system, in particular the notion of processes. Then came a small set of user-level utilities: the means to copy, print, delete, and edit files, and of course a simple command interpreter (shell). Up to this time all the programs were written using GECOS and files were transferred to the PDP-7 on paper tape; but once an assembler was completed the system was able to support itself. Although it was not until well into 1970 that Brian Kernighan suggested the name `Unix,' in a somewhat treacherous pun on `Multics,' the operating system we know today was born.

    The PDP-7 Unix file system

    Structurally, the file system of PDP-7 Unix was nearly identical to today's. It had

    1)
    An i-list: a linear array of i-nodes each describing a file. An i-node contained less than it does now, but the essential information was the same: the protection mode of the file, its type and size, and the list of physical blocks holding the contents.
    2)
    Directories: a special kind of file containing a sequence of names and the associated i-number.
    3)
    Special files describing devices. The device specification was not contained explicitly in the i-node, but was instead encoded in the number: specific i-numbers corresponded to specific files.

    The important file system calls were also present from the start. Read, write, open, creat (sic), close: with one very important exception, discussed below, they were similar to what one finds now. A minor difference was that the unit of I/O was the word, not the byte, because the PDP-7 was a word-addressed machine. In practice this meant merely that all programs dealing with character streams ignored null characters, because null was used to pad a file to an even number of characters. Another minor, occasionally annoying difference was the lack of erase and kill processing for terminals. Terminals, in effect, were always in raw mode. Only a few programs (notably the shell and the editor) bothered to implement erase-kill processing.

    In spite of its considerable similarity to the current file system, the PDP-7 file system was in one way remarkably different: there were no path names, and each file-name argument to the system was a simple name (without `/') taken relative to the current directory. Links, in the usual Unix sense, did exist. Together with an elaborate set of conventions, they were the principal means by which the lack of path names became acceptable.

    The link call took the form

    link(dir, file, newname)
    
    where dir was a directory file in the current directory, file an existing entry in that directory, and newname the name of the link, which was added to the current directory. Because dir needed to be in the current directory, it is evident that today's prohibition against links to directories was not enforced; the PDP-7 Unix file system had the shape of a general directed graph.

    So that every user did not need to maintain a link to all directories of interest, there existed a directory called dd that contained entries for the directory of each user. Thus, to make a link to file x in directory ken, I might do

    ln dd ken ken
    ln ken x x
    rm ken
    
    This scheme rendered subdirectories sufficiently hard to use as to make them unused in practice. Another important barrier was that there was no way to create a directory while the system was running; all were made during recreation of the file system from paper tape, so that directories were in effect a nonrenewable resource.
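
    As a modern illustration (not part of the original paper), the effect of that ln/rm sequence corresponds roughly to link(2) followed by unlink(2). A minimal C sketch, assuming a present-day Unix and the hypothetical path /usr/ken/x:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Bring a file from another directory into the current one by
           linking to it, use it, then remove the temporary name --
           roughly what `ln dd ken ken; ln ken x x; rm ken' accomplished. */
        if (link("/usr/ken/x", "x") != 0) {   /* hypothetical path */
            perror("link");
            return 1;
        }
        /* ... work with ./x here ... */
        unlink("x");                          /* drop the temporary link */
        return 0;
    }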

    The dd convention made the chdir command relatively convenient. It took multiple arguments, and switched the current directory to each named directory in turn. Thus

    chdir dd ken
    
    would move to directory ken. (Incidentally, chdir was spelled ch; why this was expanded when we went to the PDP-11 I don't remember.)

    The most serious inconvenience of the implementation of the file system, aside from the lack of path names, was the difficulty of changing its configuration; as mentioned, directories and special files were both made only when the disk was recreated. Installation of a new device was very painful, because the code for devices was spread widely throughout the system; for example there were several loops that visited each device in turn. Not surprisingly, there was no notion of mounting a removable disk pack, because the machine had only a single fixed-head disk.

    The operating system code that implemented this file system was a drastically simplified version of the present scheme. One important simplification followed from the fact that the system was not multi-programmed; only one program was in memory at a time, and control was passed between processes only when an explicit swap took place. So, for example, there was an iget routine that made a named i-node available, but it left the i-node in a constant, static location rather than returning a pointer into a large table of active i-nodes. A precursor of the current buffering mechanism was present (with about 4 buffers) but there was essentially no overlap of disk I/O with computation. This was avoided not merely for simplicity. The disk attached to the PDP-7 was fast for its time; it transferred one 18-bit word every 2 microseconds. On the other hand, the PDP-7 itself had a memory cycle time of 1 microsecond, and most instructions took 2 cycles (one for the instruction itself, one for the operand). However, indirectly addressed instructions required 3 cycles, and indirection was quite common, because the machine had no index registers. Finally, the DMA controller was unable to access memory during an instruction. The upshot was that the disk would incur overrun errors if any indirectly-addressed instructions were executed while it was transferring. Thus control could not be returned to the user, nor in fact could general system code be executed, with the disk running. The interrupt routines for the clock and terminals, which needed to be runnable at all times, had to be coded in very strange fashion to avoid indirection.

    Process control

    By `process control,' I mean the mechanisms by which processes are created and used; today the system calls fork, exec, wait, and exit implement these mechanisms. Unlike the file system, which existed in nearly its present form from the earliest days, the process control scheme underwent considerable mutation after PDP-7 Unix was already in use. (The introduction of path names in the PDP-11 system was certainly a considerable notational advance, but not a change in fundamental structure.)

    Today, the way in which commands are executed by the shell can be summarized as follows:

    1)
    The shell reads a command line from the terminal.
    2)
    It creates a child process by fork.
    3)
    The child process uses exec to call in the command from a file.
    4)
    Meanwhile, the parent shell uses wait to wait for the child (command) process to terminate by calling exit.
    5)
    The parent shell goes back to step 1).
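
    As a modern illustration (not from the original paper), those five steps map almost directly onto today's system calls. A minimal C sketch of the loop, handling only one-word commands and omitting most error handling:

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];

        for (;;) {
            fputs("$ ", stdout);                /* step 1: read a command line */
            fflush(stdout);
            if (fgets(line, sizeof line, stdin) == NULL)
                break;
            line[strcspn(line, "\n")] = '\0';
            if (line[0] == '\0')
                continue;

            pid_t pid = fork();                 /* step 2: create a child */
            if (pid == 0) {
                char *argv[] = { line, NULL };
                execvp(line, argv);             /* step 3: overlay the command */
                perror("execvp");
                _exit(127);
            }
            waitpid(pid, NULL, 0);              /* step 4: wait for termination */
        }                                       /* step 5: back to step 1 */
        return 0;
    }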

    Processes (independently executing entities) existed very early in PDP-7 Unix. There were in fact precisely two of them, one for each of the two terminals attached to the machine. There was no fork, wait, or exec. There was an exit, but its meaning was rather different, as will be seen. The main loop of the shell went as follows.

    1)
    The shell closed all its open files, then opened the terminal special file for standard input and output (file descriptors 0 and 1).
    2)
    It read a command line from the terminal.
    3)
    It linked to the file specifying the command, opened the file, and removed the link. Then it copied a small bootstrap program to the top of memory and jumped to it; this bootstrap program read in the file over the shell code, then jumped to the first location of the command (in effect an exec).
    4)
    The command did its work, then terminated by calling exit. The exit call caused the system to read in a fresh copy of the shell over the terminated command, then to jump to its start (and thus in effect to go to step 1).

    The most interesting thing about this primitive implementation is the degree to which it anticipated themes developed more fully later. True, it could support neither background processes nor shell command files (let alone pipes and filters); but IO redirection (via `<' and `>') was soon there; it is discussed below. The implementation of redirection was quite straightforward; in step 3) above the shell just replaced its standard input or output with the appropriate file. Crucial to subsequent development was the implementation of the shell as a user-level program stored in a file, rather than a part of the operating system.

    The structure of this process control scheme, with one process per terminal, is similar to that of many interactive systems, for example CTSS, Multics, Honeywell TSS, and IBM TSS and TSO. In general such systems require special mechanisms to implement useful facilities such as detached computations and command files; Unix at that stage didn't bother to supply the special mechanisms. It also exhibited some irritating, idiosyncratic problems. For example, a newly recreated shell had to close all its open files both to get rid of any open files left by the command just executed and to rescind previous IO redirection. Then it had to reopen the special file corresponding to its terminal, in order to read a new command line. There was no /dev directory (because no path names); moreover, the shell could retain no memory across commands, because it was reexecuted afresh after each command. Thus a further file system convention was required: each directory had to contain an entry tty for a special file that referred to the terminal of the process that opened it. If by accident one changed into some directory that lacked this entry, the shell would loop hopelessly; about the only remedy was to reboot. (Sometimes the missing link could be made from the other terminal.)

    Process control in its modern form was designed and implemented within a couple of days. It is astonishing how easily it fitted into the existing system; at the same time it is easy to see how some of the slightly unusual features of the design are present precisely because they represented small, easily-coded changes to what existed. A good example is the separation of the fork and exec functions. The most common model for the creation of new processes involves specifying a program for the process to execute; in Unix, a forked process continues to run the same program as its parent until it performs an explicit exec. The separation of the functions is certainly not unique to Unix, and in fact it was present in the Berkeley time-sharing system [2], which was well-known to Thompson. Still, it seems reasonable to suppose that it exists in Unix mainly because of the ease with which fork could be implemented without changing much else. The system already handled multiple (i.e. two) processes; there was a process table, and the processes were swapped between main memory and the disk. The initial implementation of fork required only

    1)
    Expansion of the process table
    2)
    Addition of a fork call that copied the current process to the disk swap area, using the already existing swap IO primitives, and made some adjustments to the process table.

    In fact, the PDP-7's fork call required precisely 27 lines of assembly code. Of course, other changes in the operating system and user programs were required, and some of them were rather interesting and unexpected. But a combined fork-exec would have been considerably more complicated, if only because exec as such did not exist; its function was already performed, using explicit IO, by the shell.

    The exit system call, which previously read in a new copy of the shell (actually a sort of automatic exec but without arguments), simplified considerably; in the new version a process only had to clean out its process table entry, and give up control.

    Curiously, the primitives that became wait were considerably more general than the present scheme. A pair of primitives sent one-word messages between named processes:

    smes(pid, message)
    (pid, message) = rmes()
    
    The target process of smes did not need to have any ancestral relationship with the receiver, although the system provided no explicit mechanism for communicating process IDs except that fork returned to each of the parent and child the ID of its relative. Messages were not queued; a sender delayed until the receiver read the message.

    The message facility was used as follows: the parent shell, after creating a process to execute a command, sent a message to the new process by smes; when the command terminated (assuming it did not try to read any messages) the shell's blocked smes call returned an error indication that the target process did not exist. Thus the shell's smes became, in effect, the equivalent of wait.

    A different protocol, which took advantage of more of the generality offered by messages, was used between the initialization program and the shells for each terminal. The initialization process, whose ID was understood to be 1, created a shell for each of the terminals, and then issued rmes; each shell, when it read the end of its input file, used smes to send a conventional `I am terminating' message to the initialization process, which recreated a new shell process for that terminal.

    I can recall no other use of messages. This explains why the facility was replaced by the wait call of the present system, which is less general, but more directly applicable to the desired purpose. Possibly relevant also is the evident bug in the mechanism: if a command process attempted to use messages to communicate with other processes, it would disrupt the shell's synchronization. The shell depended on sending a message that was never received; if a command executed rmes, it would receive the shell's phony message, and cause the shell to read another input line just as if the command had terminated. If a need for general messages had manifested itself, the bug would have been repaired.

    At any rate, the new process control scheme instantly rendered some very valuable features trivial to implement; for example detached processes (with `&') and recursive use of the shell as a command. Most systems have to supply some sort of special `batch job submission' facility and a special command interpreter for files distinct from the one used interactively.

    Although the multiple-process idea slipped in very easily indeed, there were some aftereffects that weren't anticipated. The most memorable of these became evident soon after the new system came up and apparently worked. In the midst of our jubilation, it was discovered that the chdir (change current directory) command had stopped working. There was much reading of code and anxious introspection about how the addition of fork could have broken the chdir call. Finally the truth dawned: in the old system chdir was an ordinary command; it adjusted the current directory of the (unique) process attached to the terminal. Under the new system, the chdir command correctly changed the current directory of the process created to execute it, but this process promptly terminated and had no effect whatsoever on its parent shell! It was necessary to make chdir a special command, executed internally within the shell. It turns out that several command-like functions have the same property, for example login.
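
    The chdir surprise is easy to reproduce today. In this small C demonstration (a modern sketch, not from the original paper), the child's chdir has no effect on its parent, which is exactly why cd and login must be executed within the shell itself:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[256];

        if (fork() == 0) {
            chdir("/");             /* the child changes ITS directory... */
            _exit(0);
        }
        wait(NULL);
        /* ...but the parent's working directory is untouched, so a cd
           implemented as an external command would accomplish nothing. */
        if (getcwd(buf, sizeof buf) != NULL)
            printf("still in %s\n", buf);
        return 0;
    }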

    Another mismatch between the system as it had been and the new process control scheme took longer to become evident. Originally, the read/write pointer associated with each open file was stored within the process that opened the file. (This pointer indicates where in the file the next read or write will take place.) The problem with this organization became evident only when we tried to use command files. Suppose a simple command file contains

    ls
    who
    
    and it is executed as follows:
    sh comfile >output
    
    The sequence of events was

    1)
The main shell creates a new process, which opens output to receive the standard output and executes the shell recursively.
    2)
    The new shell creates another process to execute ls, which correctly writes on file output and then terminates.
    3)
    Another process is created to execute the next command. However, the IO pointer for the output is copied from that of the shell, and it is still 0, because the shell has never written on its output, and IO pointers are associated with processes. The effect is that the output of who overwrites and destroys the output of the preceding ls command.

    Solution of this problem required creation of a new system table to contain the IO pointers of open files independently of the process in which they were opened.
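
    That new system table survives in modern kernels as the open-file table, and the sharing it provides is easy to observe. In this small C demonstration (a modern sketch, not from the original paper), parent and child share one open-file-table entry, so the child's write advances the offset the parent sees and the two lines do not overwrite each other:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* One open() yields one open-file-table entry, shared across fork(). */
        int fd = open("demo.txt", O_CREAT | O_TRUNC | O_WRONLY, 0644);
        if (fd < 0) { perror("open"); return 1; }

        if (fork() == 0) {
            write(fd, "child\n", 6);    /* advances the shared IO pointer */
            _exit(0);
        }
        wait(NULL);
        write(fd, "parent\n", 7);       /* lands after the child's data */
        close(fd);
        return 0;                       /* demo.txt now holds both lines */
    }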

    IO Redirection

    The very convenient notation for IO redirection, using the `>' and `<' characters, was not present from the very beginning of the PDP-7 Unix system, but it did appear quite early. Like much else in Unix, it was inspired by an idea from Multics. Multics has a rather general IO redirection mechanism [3] embodying named IO streams that can be dynamically redirected to various devices, files, and even through special stream-processing modules. Even in the version of Multics we were familiar with a decade ago, there existed a command that switched subsequent output normally destined for the terminal to a file, and another command to reattach output to the terminal. Where under Unix one might say

    ls >xx
    
    to get a listing of the names of one's files in xx, on Multics the notation was
    iocall attach user_output file xx
    list
    iocall attach user_output syn user_i/o
    
    Even though this very clumsy sequence was used often during the Multics days, and would have been utterly straightforward to integrate into the Multics shell, the idea did not occur to us or anyone else at the time. I speculate that the reason it did not was the sheer size of the Multics project: the implementors of the IO system were at Bell Labs in Murray Hill, while the shell was done at MIT. We didn't consider making changes to the shell (it was their program); correspondingly, the keepers of the shell may not even have known of the usefulness, albeit clumsiness, of iocall. (The 1969 Multics manual [4] lists iocall as an `author-maintained,' that is non-standard, command.) Because both the Unix IO system and its shell were under the exclusive control of Thompson, when the right idea finally surfaced, it was a matter of an hour or so to implement it.
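
    By contrast with the Multics incantation, the Unix mechanism is small enough to show whole. Here is a sketch in modern C of what a shell does for `ls >xx' (dup2 is the present-day call; per the text above, the early shell simply replaced its standard output descriptor directly):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        if (fork() == 0) {
            int fd = open("xx", O_CREAT | O_TRUNC | O_WRONLY, 0644);
            if (fd < 0) { perror("open"); _exit(1); }
            dup2(fd, STDOUT_FILENO);    /* descriptor 1 now refers to xx */
            close(fd);
            execlp("ls", "ls", (char *)NULL);
            perror("execlp");           /* reached only if exec fails */
            _exit(127);
        }
        wait(NULL);
        return 0;
    }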

    The advent of the PDP-11

    By the beginning of 1970, PDP-7 Unix was a going concern. Primitive by today's standards, it was still capable of providing a more congenial programming environment than its alternatives. Nevertheless, it was clear that the PDP-7, a machine we didn't even own, was already obsolete, and its successors in the same line offered little of interest. In early 1970 we proposed acquisition of a PDP-11, which had just been introduced by Digital. In some sense, this proposal was merely the latest in the series of attempts that had been made throughout the preceding year. It differed in two important ways. First, the amount of money (about $65,000) was an order of magnitude less than what we had previously asked; second, the charter sought was not merely to write some (unspecified) operating system, but instead to create a system specifically designed for editing and formatting text, what might today be called a `word-processing system.' The impetus for the proposal came mainly from J. F. Ossanna, who was then and until the end of his life interested in text processing. If our early proposals were too vague, this one was perhaps too specific; at first it too met with disfavor. Before long, however, funds were obtained through the efforts of L. E. McMahon and an order for a PDP-11 was placed in May.

The processor arrived at the end of the summer, but the PDP-11 was so new a product that no disk was available until December. In the meantime, a rudimentary, core-only version of Unix was written using a cross-assembler on the PDP-7. Most of the time, the machine sat in a corner, enumerating all the closed Knight's tours on a 6×8 chess board -- a three-month job.

    The first PDP-11 system

    Once the disk arrived, the system was quickly completed. In internal structure, the first version of Unix for the PDP-11 represented a relatively minor advance over the PDP-7 system; writing it was largely a matter of transliteration. For example, there was no multi-programming; only one user program was present in core at any moment. On the other hand, there were important changes in the interface to the user: the present directory structure, with full path names, was in place, along with the modern form of exec and wait, and conveniences like character-erase and line-kill processing for terminals. Perhaps the most interesting thing about the enterprise was its small size: there were 24K bytes of core memory (16K for the system, 8K for user programs), and a disk with 1K blocks (512K bytes). Files were limited to 64K bytes.

    At the time of the placement of the order for the PDP-11, it had seemed natural, or perhaps expedient, to promise a system dedicated to word processing. During the protracted arrival of the hardware, the increasing usefulness of PDP-7 Unix made it appropriate to justify creating PDP-11 Unix as a development tool, to be used in writing the more special-purpose system. By the spring of 1971, it was generally agreed that no one had the slightest interest in scrapping Unix. Therefore, we transliterated the roff text formatter into PDP-11 assembler language, starting from the PDP-7 version that had been transliterated from McIlroy's BCPL version on Multics, which had in turn been inspired by J. Saltzer's runoff program on CTSS. In early summer, editor and formatter in hand, we felt prepared to fulfill our charter by offering to supply a text-processing service to the Patent department for preparing patent applications. At the time, they were evaluating a commercial system for this purpose; the main advantages we offered (besides the dubious one of taking part in an in-house experiment) were two in number: first, we supported Teletype's model 37 terminals, which, with an extended type-box, could print most of the math symbols they required; second, we quickly endowed roff with the ability to produce line-numbered pages, which the Patent Office required and which the other system could not handle.

    During the last half of 1971, we supported three typists from the Patent department, who spent the day busily typing, editing, and formatting patent applications, and meanwhile tried to carry on our own work. Unix has a reputation for supplying interesting services on modest hardware, and this period may mark a high point in the benefit/equipment ratio; on a machine with no memory protection and a single .5 MB disk, every test of a new program required care and boldness, because it could easily crash the system, and every few hours' work by the typists meant pushing out more information onto DECtape, because of the very small disk.

    The experiment was trying but successful. Not only did the Patent department adopt Unix, and thus become the first of many groups at the Laboratories to ratify our work, but we achieved sufficient credibility to convince our own management to acquire one of the first PDP 11/45 systems made. We have accumulated much hardware since then, and labored continuously on the software, but because most of the interesting work has already been published, (e.g. on the system itself [1, 5, 6, 7, 8, 9]) it seems unnecessary to repeat it here.

    Pipes

    One of the most widely admired contributions of Unix to the culture of operating systems and command languages is the pipe, as used in a pipeline of commands. Of course, the fundamental idea was by no means new; the pipeline is merely a specific form of coroutine. Even the implementation was not unprecedented, although we didn't know it at the time; the `communication files' of the Dartmouth Time-Sharing System [10] did very nearly what Unix pipes do, though they seem not to have been exploited so fully.

    Pipes appeared in Unix in 1972, well after the PDP-11 version of the system was in operation, at the suggestion (or perhaps insistence) of M. D. McIlroy, a long-time advocate of the non-hierarchical control flow that characterizes coroutines. Some years before pipes were implemented, he suggested that commands should be thought of as binary operators, whose left and right operand specified the input and output files. Thus a `copy' utility would be commanded by

    inputfile copy outputfile
    
    To make a pipeline, command operators could be stacked up. Thus, to sort input, paginate it neatly, and print the result off-line, one would write
    input sort paginate offprint
    
    In today's system, this would correspond to
    sort input | pr | opr
    
    The idea, explained one afternoon on a blackboard, intrigued us but failed to ignite any immediate action. There were several objections to the idea as put: the infix notation seemed too radical (we were too accustomed to typing `cp x y' to copy x to y); and we were unable to see how to distinguish command parameters from the input or output files. Also, the one-input one-output model of command execution seemed too confining. What a failure of imagination!

    Some time later, thanks to McIlroy's persistence, pipes were finally installed in the operating system (a relatively simple job), and a new notation was introduced. It used the same characters as for I/O redirection. For example, the pipeline above might have been written

    sort input >pr>opr>
    
    The idea is that following a `>' may be either a file, to specify redirection of output to that file, or a command into which the output of the preceding command is directed as input. The trailing `>' was needed in the example to specify that the (nonexistent) output of opr should be directed to the console; otherwise the command opr would not have been executed at all; instead a file opr would have been created.

    The new facility was enthusiastically received, and the term `filter' was soon coined. Many commands were changed to make them usable in pipelines. For example, no one had imagined that anyone would want the sort or pr utility to sort or print its standard input if given no explicit arguments.

    Soon some problems with the notation became evident. Most annoying was a silly lexical problem: the string after `>' was delimited by blanks, so, to give a parameter to pr in the example, one had to quote:

    sort input >"pr -2">opr>
    
Second, in an attempt to give generality, the pipe notation accepted `<' as an input redirection in a way corresponding to `>'; this meant that the notation was not unique. One could also write, for example,
    opr <pr<"sort input"<
    
    or even
    pr <"sort input"< >opr>
    
    The pipe notation using `<' and `>' survived only a couple of months; it was replaced by the present one that uses a unique operator to separate components of a pipeline. Although the old notation had a certain charm and inner consistency, the new one is certainly superior. Of course, it too has limitations. It is unabashedly linear, though there are situations in which multiple redirected inputs and outputs are called for. For example, what is the best way to compare the outputs of two programs? What is the appropriate notation for invoking a program with two parallel output streams?
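
    Mechanically, the surviving notation rests on the pipe system call. As a modern sketch (not the historical implementation), here is roughly how a shell wires up the equivalent of `sort input | pr':

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                  /* left side: sort input */
            dup2(fds[1], STDOUT_FILENO);    /* write end becomes stdout */
            close(fds[0]); close(fds[1]);
            execlp("sort", "sort", "input", (char *)NULL);
            _exit(127);
        }
        if (fork() == 0) {                  /* right side: pr */
            dup2(fds[0], STDIN_FILENO);     /* read end becomes stdin */
            close(fds[0]); close(fds[1]);
            execlp("pr", "pr", (char *)NULL);
            _exit(127);
        }
        close(fds[0]); close(fds[1]);       /* parent must close both ends */
        wait(NULL); wait(NULL);
        return 0;
    }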

    I mentioned above in the section on IO redirection that Multics provided a mechanism by which IO streams could be directed through processing modules on the way to (or from) the device or file serving as source or sink. Thus it might seem that stream-splicing in Multics was the direct precursor of Unix pipes, as Multics IO redirection certainly was for its Unix version. In fact I do not think this is true, or is true only in a weak sense. Not only were coroutines well-known already, but their embodiment as Multics spliceable IO modules required that the modules be specially coded in such a way that they could be used for no other purpose. The genius of the Unix pipeline is precisely that it is constructed from the very same commands used constantly in simplex fashion. The mental leap needed to see this possibility and to invent the notation is large indeed.

    High-level languages

Every program for the original PDP-7 Unix system was written in assembly language, and bare assembly language it was -- for example, there were no macros. Moreover, there was no loader or link-editor, so every program had to be complete in itself. The first interesting language to appear was a version of McClure's TMG [11] that was implemented by McIlroy. Soon after TMG became available, Thompson decided that we could not pretend to offer a real computing service without Fortran, so he sat down to write a Fortran in TMG. As I recall, the intent to handle Fortran lasted about a week. What he produced instead was a definition of and a compiler for the new language B [12]. B was much influenced by the BCPL language [13]; other influences were Thompson's taste for spartan syntax, and the very small space into which the compiler had to fit. The compiler produced simple interpretive code; although it and the programs it produced were rather slow, it made life much more pleasant. Once interfaces to the regular system calls were made available, we began once again to enjoy the benefits of using a reasonable language to write what are usually called `systems programs:' compilers, assemblers, and the like. (Although some might consider the PL/I we used under Multics unreasonable, it was much better than assembly language.) Among other programs, the PDP-7 B cross-compiler for the PDP-11 was written in B, and in the course of time, the B compiler for the PDP-7 itself was transliterated from TMG into B.

    When the PDP-11 arrived, B was moved to it almost immediately. In fact, a version of the multi-precision `desk calculator' program dc was one of the earliest programs to run on the PDP-11, well before the disk arrived. However, B did not take over instantly. Only passing thought was given to rewriting the operating system in B rather than assembler, and the same was true of most of the utilities. Even the assembler was rewritten in assembler. This approach was taken mainly because of the slowness of the interpretive code. Of smaller but still real importance was the mismatch of the word-oriented B language with the byte-addressed PDP-11.

    Thus, in 1971, work began on what was to become the C language [14]. The story of the language developments from BCPL through B to C is told elsewhere [15], and need not be repeated here. Perhaps the most important watershed occurred during 1973, when the operating system kernel was rewritten in C. It was at this point that the system assumed its modern form; the most far-reaching change was the introduction of multi-programming. There were few externally-visible changes, but the internal structure of the system became much more rational and general. The success of this effort convinced us that C was useful as a nearly universal tool for systems programming, instead of just a toy for simple applications.

    Today, the only important Unix program still written in assembler is the assembler itself; virtually all the utility programs are in C, and so are most of the applications programs, although there are sites with many in Fortran, Pascal, and Algol 68 as well. It seems certain that much of the success of Unix follows from the readability, modifiability, and portability of its software that in turn follows from its expression in high-level languages.

    Conclusion

    One of the comforting things about old memories is their tendency to take on a rosy glow. The programming environment provided by the early versions of Unix seems, when described here, to be extremely harsh and primitive. I am sure that if forced back to the PDP-7 I would find it intolerably limiting and lacking in conveniences. Nevertheless, it did not seem so at the time; the memory fixes on what was good and what lasted, and on the joy of helping to create the improvements that made life better. In ten years, I hope we can look back with the same mixed impression of progress combined with continuity.

    About the FreeBSD Project by Jordan Hubbard.

    freebsd.org

    1.3.1 A Brief History of FreeBSD Contributed by Jordan Hubbard.

The first CDROM (and general net-wide) distribution was FreeBSD 1.0, released in December of 1993. This was based on the 4.3BSD-Lite ("Net/2") tape from U.C. Berkeley, with many components also provided by 386BSD and the Free Software Foundation. It was a fairly reasonable success for a first offering, and we followed it with the highly successful FreeBSD 1.1 release in May of 1994.

Around this time, some rather unexpected storm clouds formed on the horizon as Novell and U.C. Berkeley settled their long-running lawsuit over the legal status of the Berkeley Net/2 tape. A condition of that settlement was U.C. Berkeley's concession that large parts of Net/2 were "encumbered" code and the property of Novell, who had in turn acquired it from AT&T some time previously. What Berkeley got in return was Novell's "blessing" that the 4.4BSD-Lite release, when it was finally released, would be declared unencumbered, and that all existing Net/2 users would be strongly encouraged to switch to it; the project was given until the end of July 1994 to stop shipping its own Net/2-based product. Under the terms of that agreement, the project was allowed one last release before the deadline, that release being FreeBSD 1.1.5.1.

FreeBSD then set about the arduous task of literally re-inventing itself from a completely new and rather incomplete set of 4.4BSD-Lite bits. The "Lite" releases were light in part because Berkeley's CSRG had removed large chunks of the code required for actually constructing a bootable running system (due to various legal requirements), and in part because the Intel port of 4.4 was highly incomplete. It took the project until November of 1994 to make this transition, at which point it released FreeBSD 2.0 to the net and on CDROM (in late December). Despite being still more than a little rough around the edges, the release was a significant success and was followed by the more robust and easier to install FreeBSD 2.0.5 release in June of 1995.

We released FreeBSD 2.1.5 in August of 1996, and it appeared to be popular enough among the ISP and commercial communities that another release along the 2.1-STABLE branch was merited. This was FreeBSD 2.1.7.1, released in February 1997 and capping the end of mainstream development on 2.1-STABLE. That branch (RELENG_2_1_0) is now in maintenance mode, and only security enhancements and other critical bug fixes will be done on it.

    25th Anniversary UNIX Playing Cards

The year 1994 was the 25th anniversary of UNIX, and as part of the celebrations held during the Summer 1994 USENIX conference in Boston, a commemorative deck of playing cards was created. Each playing card had a picture and the name of a UNIX contributor. The intent was that this be a small way to honor the contributors, including both well known and lesser known UNIX personalities, and it was a fun collector's item for the attendees. Evi Nemeth coordinated the production of the card deck, and a committee consisting of J.R. Oldroyd (it was his idea), Dennis Ritchie, Kirk McKusick, Keith Bostic, and Margo Seltzer nominated people to be included on each card.


    See also ais.org -- Unix and Computer Science by Ronda Hauben
    comp.unix.misc

    Ronda Hauben

    It's 1994 and the 25th anniversary of the creation of UNIX in 1969.

    In the tradition of open code and of UNIX it would be good to see some open discussion of what commemorative activities Usenix and other groups and publications are planning to celebrate this important anniversary.

    Jeff Tranter

    In article <[email protected]> [email protected] (Ronda Hauben) writes:
    It's 1994 and the 25th anniversary of the creation of UNIX
    in 1969.


    In the tradition of open code and of UNIX it would be good to see some
    open discussion of what commemorative activities Usenix and other
    groups and publications are planning to celebrate this important
    anniversary.
    --

    This brings to mind an entry in the fortune file:

    Get GUMMed
    --- ------
    The Gurus of Unix Meeting of Minds (GUMM) takes place Wednesday, April
    1, 2076 (check THAT in your perpetual calendar program), 14 feet above
    the ground directly in front of the Milpitas Gumps. Members will grep
    each other by the hand (after intro), yacc a lot, smoke filtered
    chroots in pipes, chown with forks, use the wc (unless uuclean), fseek
    nice zombie processes, strip, and sleep, but not, we hope, od. Three
    days will be devoted to discussion of the ramifications of whodo. Two
    seconds have been allotted for a complete rundown of all the user-
    friendly features of Unix. Seminars include "Everything You Know is
    Wrong", led by Tom Kempson, "Batman or Cat:man?" led by Richie Dennis
    "cc C? Si! Si!" led by Kerwin Bernighan, and "Document Unix, Are You
    Kidding?" led by Jan Yeats. No Reader Service No. is necessary because
    all GUGUs (Gurus of Unix Group of Users) already know everything we
    could tell them.

    -- Dr. Dobb's Journal, June '84

    Prophetic Petroglyphs

    Attached by magnet to the wall of my office is a yellowed sheet of paper, evidently the tenth page of an internal Bell Labs memo by Doug McIlroy. Unfortunately, I don't have the rest of the note.

    To put my strongest concerns into a nutshell:

1. We should have some ways of connecting programs like garden hose--screw in another segment when it becomes necessary to massage data in another way. This is the way of IO also.

    2. Our loader should be able to do link-loading and controlled establishment.

    3. Our library filing scheme should allow for rather general indexing, responsibility, generations, data path switching.

    4. It should be possible to get private system components (all routines are system components) for buggering around with.

    Selected Computing Sciences Technical Reports

    cs.bell-labs.com

A Brief History of Unix by Charles Severance

    Disappeared...

    CSRG Archive CD-ROMs

Thanks to the efforts of the volunteers of the "Unix Heritage Society" and the willingness of Caldera to release 32/V under an open source license, it is now possible to make the full source archives of the University of California at Berkeley's Computer Systems Research Group (CSRG) available. The archive consists of four CD-ROMs; CD-ROM #1 covers the Berkeley systems of 1978-1986 (1bsd, 2.9pucc, 4.1, ...).

    Twenty Years of Berkeley Unix -- From AT&T-Owned to Freely Redistributable by Marshall Kirk McKusick

    oreilly.com

    Early History

    Ken Thompson and Dennis Ritchie presented the first Unix paper at the Symposium on Operating Systems Principles at Purdue University in November 1973. Professor Bob Fabry, of the University of California at Berkeley, was in attendance and immediately became interested in obtaining a copy of the system to experiment with at Berkeley.

    At the time, Berkeley had only large mainframe computer systems doing batch processing, so the first order of business was to get a PDP-11/45 suitable for running with the then-current Version 4 of Unix. The Computer Science Department at Berkeley, together with the Mathematics Department and the Statistics Department, were able to jointly purchase a PDP-11/45. In January 1974, a Version 4 tape was delivered and Unix was installed by graduate student Keith Standiford.

Although Ken Thompson was not involved in the installation at Berkeley as he had been for most systems up to that time, his expertise was soon needed to determine the cause of several strange system crashes. Because Berkeley had only a 300-baud acoustic-coupled modem without auto answer capability, Thompson would call Standiford in the machine room and have him insert the phone into the modem; in this way Thompson was able to remotely debug crash dumps from New Jersey.

    Many of the crashes were caused by the disk controller's inability to reliably do overlapped seeks, contrary to the documentation. Berkeley's 11/45 was among the first systems that Thompson had encountered that had two disks on the same controller! Thompson's remote debugging was the first example of the cooperation that sprang up between Berkeley and Bell Labs. The willingness of the researchers at the Labs to share their work with Berkeley was instrumental in the rapid improvement of the software available at Berkeley.

    Though Unix was soon reliably up and running, the coalition of Computer Science, Mathematics, and Statistics began to run into problems; Math and Statistics wanted to run DEC's RSTS system. After much debate, a compromise was reached in which each department would get an eight-hour shift; Unix would run for eight hours followed by sixteen hours of RSTS. To promote fairness, the time slices were rotated each day. Thus, Unix ran 8 a.m. to 4 p.m. one day, 4 p.m. to midnight the next day, and midnight to 8 a.m. the third day. Despite the bizarre schedule, students taking the Operating Systems course preferred to do their projects on Unix rather than on the batch machine.

    Professors Eugene Wong and Michael Stonebraker were both stymied by the confinements of the batch environment, so their INGRES database project was among the first groups to move from the batch machines to the interactive environment provided by Unix. They quickly found the shortage of machine time and the odd hours on the 11/45 intolerable, so in the spring of 1974, they purchased an 11/40 running the newly available Version 5. With their first distribution of INGRES in the fall of 1974, the INGRES project became the first group in the Computer Science department to distribute their software. Several hundred INGRES tapes were shipped over the next six years, helping to establish Berkeley's reputation for designing and building real systems.

    Even with the departure of the INGRES project from the 11/45, there was still insufficient time available for the remaining students. To alleviate the shortage, Professors Michael Stonebraker and Bob Fabry set out in June 1974, to get two instructional 11/45's for the Computer Science department's own use. Early in 1975, the money was obtained. At nearly the same time, DEC announced the 11/70, a machine that appeared to be much superior to the 11/45. Money for the two 11/45s was pooled to buy a single 11/70 that arrived in the fall of 1975. Coincident with the arrival of the 11/70, Ken Thompson decided to take a one-year sabbatical as a visiting professor at the University of California at Berkeley, his alma mater. Thompson, together with Jeff Schriebman and Bob Kridle, brought up the latest Unix, Version 6, on the newly installed 11/70.

    Also arriving in the fall of 1975 were two unnoticed graduate students, Bill Joy and Chuck Haley; they both took an immediate interest in the new system. Initially they began working on a Pascal system that Thompson had hacked together while hanging around the 11/70 machine room. They expanded and improved the Pascal interpreter to the point that it became the programming system of choice for students because of its excellent error recovery scheme and fast compile and execute time.

    With the replacement of Model 33 teletypes by ADM-3 screen terminals, Joy and Haley began to feel stymied by the constraints of the ed editor. Working from an editor named em that they had obtained from Professor George Coulouris at Queen Mary's College in London, they worked to produce the line-at-a-time editor ex.

With Ken Thompson's departure at the end of the summer of 1976, Joy and Haley began to take an interest in exploring the internals of the Unix kernel. Under Schriebman's watchful eye, they first installed the fixes and improvements provided on the "fifty changes" tape from Bell Labs. Having learned to maneuver through the source code, they suggested several small enhancements to streamline certain kernel bottlenecks.

    Early Distributions

    Meanwhile, interest in the error recovery work in the Pascal compiler brought in requests for copies of the system. Early in 1977, Joy put together the "Berkeley Software Distribution." This first distribution included the Pascal system, and, in an obscure subdirectory of the Pascal source, the editor ex. Over the next year, Joy, acting in the capacity of distribution secretary, sent out about thirty free copies of the system.

    With the arrival of some ADM-3a terminals offering screen-addressable cursors, Joy was finally able to write vi, bringing screen-based editing to Berkeley. He soon found himself in a quandary. As is frequently the case in universities strapped for money, old equipment is never replaced all at once, so the machine room held a mix of terminal types. Rather than writing and maintaining separate optimized screen-update code for each of them, Joy decided to consolidate screen management by using a small interpreter to redraw the screen. This interpreter was driven by a description of the terminal's characteristics, an effort that eventually became termcap.
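    That design survives in the classic termcap programming interface: a program looks up the entry for the terminal named in the TERM environment variable and feeds the returned capability strings to an output interpreter, never hard-coding any terminal's escape sequences. The minimal sketch below uses the long-standing termcap calls tgetent, tgetstr, tgoto, and tputs with the standard capability names cl (clear screen) and cm (cursor motion); header name and link library vary between systems, so treat the build details as illustrative rather than definitive.

        /* Minimal termcap sketch; on many systems compile with -ltermcap.
           Some systems put these declarations in <term.h> instead. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <termcap.h>

        static int out(int c) { return putchar(c); }

        int main(void)
        {
            char entry[1024], caps[256], *p = caps;
            char *term, *cl, *cm;

            /* Load the description for the terminal named by $TERM. */
            term = getenv("TERM");
            if (term == NULL || tgetent(entry, term) != 1) {
                fprintf(stderr, "unknown terminal type\n");
                return 1;
            }
            cl = tgetstr("cl", &p);            /* clear-screen string  */
            cm = tgetstr("cm", &p);            /* cursor-motion string */
            if (cl == NULL || cm == NULL) {
                fprintf(stderr, "terminal lacks cl/cm capabilities\n");
                return 1;
            }

            tputs(cl, 1, out);                 /* clear the screen */
            tputs(tgoto(cm, 10, 5), 1, out);   /* column 10, row 5 */
            printf("drawn via termcap, not hard-coded escapes\n");
            return 0;
        }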

    By mid-1978, the software distribution clearly needed to be updated. The Pascal system had been made markedly more robust through feedback from its expanding user community, and had been split into two passes so that it could be run on PDP-11/34s. The result of the update was the "Second Berkeley Software Distribution," a name that was quickly shortened to 2BSD. Along with the enhanced Pascal system, vi and termcap entries for several terminals were included. Once again Bill Joy single-handedly put together distributions, answered the phone, and incorporated user feedback into the system. Over the next year nearly seventy-five tapes were shipped. Though Joy moved on to other projects the following year, the 2BSD distribution continued to expand. The final version of this distribution, 2.11BSD, is a complete system used on hundreds of PDP-11's still running in various corners of the world.

    VAX Unix

    Early in 1978, Professor Richard Fateman began looking for a machine with a larger address space on which he could continue his work on Macsyma (originally started on a PDP-10). The newly announced VAX-11/780 fulfilled the requirements and was available within budget. Fateman and thirteen other faculty members put together an NSF proposal that they combined with some departmental funds to purchase a VAX.

    Initially the VAX ran DEC's operating system VMS, but the department had gotten used to the Unix environment and wanted to continue using it. So, shortly after the arrival of the VAX, Fateman obtained a copy of the 32/V port of Unix to the VAX by John Reiser and Tom London of Bell Labs.

    Although 32/V provided a Version 7 Unix environment on the VAX, it did not take advantage of the virtual memory capability of the VAX hardware. Like its predecessors on the PDP-11, it was entirely a swap-based system. For the Macsyma group at Berkeley, the lack of virtual memory meant that the process address space was limited by the size of the physical memory, initially 1 megabyte on the new VAX.

    To alleviate this problem, Fateman approached Professor Domenico Ferrari, a member of the systems faculty at Berkeley, to investigate the possibility of having his group write a virtual memory system for Unix. Ozalp Babaoglu, one of Ferrari's students, set about to find some way of implementing a working set paging system on the VAX; his task was complicated because the VAX lacked reference bits.
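    One standard way around missing hardware reference bits, sketched below purely as an illustration (the details of Babaoglu's actual implementation are not recorded here, and all names in the fragment are invented), is to simulate them in software: the working-set scanner revokes the valid bit on a resident page, and the fault taken on the next touch of that page is recorded as "referenced" before access is restored.

        /* Illustrative sketch: software-simulated reference bits on a
           machine (like the VAX) whose page-table entries have a valid
           bit but no referenced bit. */
        #include <stdbool.h>

        struct pte { bool valid; unsigned frame; };

        struct page_info {
            struct pte *pte;
            bool soft_ref;     /* software-maintained "referenced" bit */
            bool resident;     /* the page really is in memory         */
        };

        /* Working-set scanner: clear the soft bit and revoke the valid
           bit so that the next touch of the page will fault. */
        void clear_reference(struct page_info *pg)
        {
            pg->soft_ref = false;
            pg->pte->valid = false;   /* page stays resident */
        }

        /* Fault handler: a fault on a resident page only means the page
           was referenced; record that and restore access. */
        bool fault_was_reference(struct page_info *pg)
        {
            if (pg->resident) {
                pg->soft_ref = true;
                pg->pte->valid = true;
                return true;          /* not a genuine page fault */
            }
            return false;             /* genuine fault: start a page-in */
        }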

    As Babaoglu neared the completion of his first cut at an implementation, he approached Bill Joy for some help in understanding the intricacies of the Unix kernel. Intrigued by Babaoglu's approach, Joy joined in helping to integrate the code into 32/V and then with the ensuing debugging.

    Unfortunately, Berkeley had only a single VAX for both system development and general production use. Thus, for several weeks over the Christmas break, the tolerant user community alternately found themselves logging into 32/V and "Virtual VAX/Unix." Often their work on the latter system would come to an abrupt halt, followed several minutes later by a 32/V login prompt. By January, 1979, most of the bugs had been worked out, and 32/V had been relegated to history.

    Joy saw that the 32-bit VAX would soon make the 16-bit PDP-11 obsolete, and began to port the 2BSD software to the VAX. While Peter Kessler and I ported the Pascal system, Joy ported the editors ex and vi, the C shell, and the myriad other smaller programs from the 2BSD distribution. By the end of 1979, a complete distribution had been put together. This distribution included the virtual memory kernel, the standard 32/V utilities, and the additions from 2BSD. In December, 1979, Joy shipped the first of nearly a hundred copies of 3BSD, the first VAX distribution from Berkeley.

    The final release from Bell Laboratories was 32/V; thereafter all Unix releases from AT&T, initially System III and later System V, were managed by a different group that emphasized stable commercial releases. With the commercialization of Unix, the researchers at Bell Laboratories were no longer able to act as a clearing-house for the ongoing Unix research. As the research community continued to modify the Unix system, it found that it needed an organization that could produce research releases. Because of its early involvement in Unix and its history of releasing Unix-based tools, Berkeley quickly stepped into the role previously provided by the Labs.

    DARPA Support

    Meanwhile, in the offices of the planners for the Defense Advanced Research Projects Agency (DARPA), discussions were being held that would have a major influence on the work at Berkeley. One of DARPA's early successes had been to set up a nationwide computer network to link together all their major research centers. At that time, they were finding that many of the computers at these centers were reaching the end of their useful lifetime and had to be replaced. The heaviest cost of replacement was the porting of the research software to the new machines. In addition, many sites were unable to share their software because of the diversity of hardware and operating systems.

    Choosing a single hardware vendor was impractical because of the widely varying computing needs of the research groups and the undesirability of depending on a single manufacturer. Thus, the planners at DARPA decided that the best solution was to unify at the operating systems level. After much discussion, Unix was chosen as a standard because of its proven portability.

    In the fall of 1979, Bob Fabry responded to DARPA's interest in moving towards Unix by writing a proposal suggesting that Berkeley develop an enhanced version of 3BSD for the use of the DARPA community. Fabry took a copy of his proposal to a meeting of DARPA image processing and VLSI contractors, plus representatives from Bolt, Beranek, and Newman, the developers of the ARPAnet. There was some reservation about whether Berkeley could produce a working system; however, the release of 3BSD in December 1979 assuaged most of the doubts.

    With the increasingly good reputation of the 3BSD release to validate his claims, Bob Fabry was able to get an 18-month contract with DARPA beginning in April 1980. This contract was to add features needed by the DARPA contractors. Under the auspices of this contract, Bob Fabry set up an organization that was christened the Computer Systems Research Group, or CSRG for short. He immediately hired Laura Tong to handle the project administration. Fabry turned his attention to finding a project leader to manage the software development. Fabry assumed that since Joy had just passed his Ph.D. qualifying examination, he would rather concentrate on completing his degree than take the software development position. But Joy had other plans. One night in early March he phoned Fabry at home to express interest in taking charge of the further development of Unix. Though surprised by the offer, Fabry took little time to agree.

    The project started promptly. Tong set up a distribution system that could handle a higher volume of orders than Joy's previous distributions. Fabry managed to coordinate with Bob Guffy at AT&T and lawyers at the University of California to formally release Unix under terms agreeable to all. Joy incorporated Jim Kulp's job control, and added auto reboot, a 1K-block file system, and support for the latest VAX machine, the VAX-11/750. By October 1980, a polished distribution that also included the Pascal compiler, the Franz Lisp system, and an enhanced mail handling system was released as 4BSD. During its nine-month lifetime, nearly 150 copies were shipped. The license arrangement was on a per-institution basis rather than a per-machine basis; thus the distribution ran on about 500 machines.

    With the increasingly wide distribution and visibility of Berkeley Unix, several critics began to emerge. David Kashtan at Stanford Research Institute wrote a paper describing the results of benchmarks he had run on both VMS and Berkeley Unix. These benchmarks showed severe performance problems with the Unix system for the VAX. Setting his future plans aside for several months, Joy systematically began tuning up the kernel. Within weeks he had a rebuttal paper written showing that Kashtan's benchmarks could be made to run as well on Unix as they could on VMS.

    Rather than continue shipping 4BSD, the tuned-up system, with the addition of Robert Elz's auto configuration code, was released as 4.1BSD in June, 1981. Over its two-year lifetime about 400 distributions were shipped. The original intent had been to call it the 5BSD release; however, there were objections from AT&T that there would be customer confusion between their commercial Unix release, System V, and a Berkeley release named 5BSD. So, to resolve the issue, Berkeley agreed to change the naming scheme for future releases to stay at 4BSD and just increment the minor number.

    A Brief History of UNIX by Sam Coniglio

    September 07, 1999

    In the beginning, there was AT&T.

    Bell Labs' Ken Thompson developed UNIX in 1969 so he could play games on a scavenged DEC PDP-7. With the help of Dennis Ritchie, the inventor of the "C" programming language, Ken rewrote UNIX entirely in "C" so that it could be used on different computers. In 1974, the OS was licensed to universities for educational purposes. Over the years, hundreds of people added to and improved upon the system, and it spread into the commercial world. Dozens of different UNIX "flavors" appeared, each with unique qualities, yet still similar enough to the original AT&T version to be recognizably UNIX. All of the "flavors" were based on either AT&T's System V or Berkeley System Distribution (BSD) UNIX, or a hybrid of both. During the late 1980's there were several commercial implementations of UNIX:

    • Apple Computer's A/UX
    • AT&T's System V Release 3
    • Digital Equipment Corporation's Ultrix and OSF/1 (renamed to DEC UNIX)
    • Hewlett Packard's HP-UX
    • IBM's AIX
    • Lynx's Real-Time UNIX
    • NeXT's NeXTStep
    • Santa Cruz Operation's SCO UNIX
    • Silicon Graphics' IRIX
    • SUN Microsystems' SUN OS and Solaris

    ... and dozens more.

    The Open Software Foundation (OSF) was a UNIX industry organization formed to keep the various UNIX flavors working together. Operating system guidelines such as POSIX (standardized through the IEEE) encourage inter-operability of applications from one flavor of UNIX to another. Portability of applications across different platforms gave UNIX a distinct advantage over its mainframe competition.

    Then came the GUIs. Apple's Macintosh operating system and Microsoft's Windows operating environment simplified computing tasks and made computers more appealing to a larger number of users. UNIX wizards enjoyed the power of the command line interface, but acknowledged the difficult learning curve for new users. The Athena Project at MIT developed the X Window System graphical user interface for UNIX computers. Also known as the X11 environment, it became the base on which corporations developed their own "flavors" of UNIX GUI. Eventually, a GUI standard called Motif was generally accepted by the corporations and academia.

    During the late 1990's Microsoft's Windows NT operating system started encroaching into traditional UNIX businesses such as banking and high-end graphics. Although not as reliable as UNIX, NT became popular because of its lower learning curve and its similarities to Windows 95 and 98. Some traditional UNIX companies, such as DEC and Silicon Graphics, began favoring NT over their own operating systems. Others, such as SUN, focused their efforts on niche markets, such as the Internet.

    Linus Torvalds had a dream. He wanted to create the coolest operating system in the world, one that was free for anyone to use and modify. Starting from MINIX, a small UNIX clone written for teaching, Linus wrote his own kernel from scratch and called the result Linux. Using the power of the Internet, he distributed copies of his OS all over the world, and fellow programmers improved upon his work. In 1999, with a dozen versions of the OS and many GUIs to choose from, Linux is causing a UNIX revival. Knowing that people are used to the Windows tools, Linux developers are making applications that combine the best of Windows with the best of UNIX.

    A brief history of Unix

    UNIX development was started in 1969 at Bell Laboratories in New Jersey. From 1964 to 1968, Bell Laboratories had been involved in the development of a multi-user, time-sharing operating system called Multics (Multiplexed Information and Computing System). Multics was a failure from Bell Labs' point of view, and in early 1969 Bell Labs withdrew from the Multics project.

    Bell Labs researchers who had worked on Multics (Ken Thompson, Dennis Ritchie, Douglas McIlroy, Joseph Ossanna, and others) still wanted an operating system for their own and Bell Labs' programming, job control, and resource usage needs. After the withdrawal from Multics, Ken Thompson and Dennis Ritchie wrote an operating system of their own, partly so they could play the game Space Travel on a smaller machine: a DEC PDP-7 (Programmed Data Processor) with 4K words of memory for user programs. The result was a system called UNICS (UNiplexed Information and Computing Service), jokingly described as an "emasculated Multics."

    The first version of Unix was written in the low-level PDP-7 assembler language. Later, a language called TMG was developed for the PDP-7 by R. M. McClure. Ken Thompson set out to use TMG to build a FORTRAN compiler, but instead ended up developing a compiler for a new high-level language he called B, based on the earlier BCPL language developed by Martin Richards. When the PDP-11 computer arrived at Bell Labs, Dennis Ritchie built on B to create a new language called C. Unix components were later rewritten in C, culminating in the kernel itself in 1973.

    Unix V6, released in 1975, became very popular. V6 was available to universities for a nominal license fee and was distributed with its source code.

    In 1983, AT&T released Unix System V which was a commercial version.

    Meanwhile, the University of California at Berkeley started the development of its own version of Unix. Berkeley was also responsible for integrating the Transmission Control Protocol/Internet Protocol (TCP/IP) networking suite into Unix.

    The following were the major milestones in UNIX history during the early 1980s:

    • AT&T was developing its System V Unix.

    • Berkeley pursued its own Unix, BSD (Berkeley Software Distribution).

    • Sun Microsystems developed its own BSD-based Unix called SunOS, which was later renamed Solaris.

    • Microsoft and the Santa Cruz Operation (SCO) were involved in another version of UNIX called XENIX.

    • Hewlett-Packard developed HP-UX for its workstations.

    • DEC released ULTRIX.

    • In 1986, IBM developed AIX (Advanced Interactive eXecutive).

    [Jun 1, 1998] Use the Source, Luke! Again

    Jun 1, 1998 | USENIX ;login:

    Editor's note: This article originally appeared in a slightly different form in the AUUG Newsletter.

    use the source, Luke! again

    By Warren Toomey
    <[email protected]>

    Warren Toomey is a lecturer in computer science at the Australian Defence Force Academy, where he just finished his Ph.D. in network congestion. He teaches operating systems, data networks, and system administration courses. He has been playing around on UNIX since 4.2BSD.

    So you call yourself a UNIX hacker: you know what bread() is, and the various splxx() routines don't faze you. But are you really a UNIX hacker? Let's have a look at a brief history of UNIX and the community of UNIX users and hackers that grew up around it and some recent developments for real UNIX hackers.
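    (For the record: bread() is the classic kernel's block-read routine, and the splxx() routines raise the processor interrupt priority level to lock out interrupts around critical sections. The fragment below is a from-memory sketch of the idiom, kernel-context code rather than any particular kernel's exact source.)

        /* Classic spl idiom: protect buffer-cache queues from the
           block-I/O interrupt handler that also manipulates them. */
        int s;                /* saved interrupt priority level */

        s = splbio();         /* block out block-I/O interrupts */
        /* ... manipulate queues that the disk interrupt also touches ... */
        splx(s);              /* restore the previous priority level */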

    UNIX took the academic world by storm in 1974 with Ken Thompson and Dennis Ritchie's paper about its design, published in Communications of the ACM. Although it didn't contain many radically new ideas, UNIX had an elegance, simplicity, and flexibility that other contemporary operating systems did not have. Soon lots of people were asking Bell Laboratories if they could get copies of this wondrous new system.

    This was the cause of some concern within AT&T, because of the restrictions of an antitrust decree brought against them in the 1950s. This decree effectively stopped AT&T from selling or supporting software: they could only engage in telco business. Their solution to meet the UNIX demand was to charge a nominal "license" fee to obtain UNIX and to distribute tapes or disks "as is." You'd receive your disk in the mail with just a short note: "Here's your rk05. Love, Dennis."

    AT&T's stance on UNIX was often summed up in an OHP slide shown at early conferences; in essence the slide read: no advertising, no support, no bug fixes, payment in advance.

    "This slide was always greeted with wild applause and laughter," says Andy Tanenbaum. This lack of support was tolerated for several reasons: Ken and Dennis did unofficially fix things if you sent them bug reports, and you also had the full source code to UNIX.

    At the time, having full source code access for a useful operating system was unheard of. Source code allowed UNIX users to study how the code worked (John Lions's commentary on the sixth edition), fix bugs, write code for new devices, and add extra functionality (the Berkeley Software Releases, AUSAM from UNSW). The access to full source code, combined with AT&T's "no support" policy, engendered the strong UNIX community spirit that thrived in the late 1970s and early 1980s, and brought many UNIX users groups into existence. When in doubt as to how a program (or the kernel) worked, you could always "use the source, Luke!"

    During this period, UNIX became wildly popular at universities and in many other places. In 1982, a review of the antitrust decree caused the breakup of AT&T into the various "Baby Bell" companies. This gave AT&T the freedom to start selling software. Source code licenses for UNIX became very expensive, as AT&T realized that UNIX was indeed a money spinner for them. Thus the era of UNIX source code hackers ended, except for some notable activities like the 4BSD work carried out at the University of California, Berkeley.

    Those organizations lucky enough to have bought a "cheap" UNIX source license before 1982 were able to obtain the 4BSD releases from UCB and continue to hack UNIX. Everybody else had to be satisfied with a binary-only license and wait for vendors to fix bugs and add extra functionality. John Lions's commentary on how the UNIX kernel worked was no longer available for study; it was restricted to one copy per source code license, and was not to be used for educational purposes.

    What were UNIX hackers going to do with no UNIX source code to hack anymore? The solution was to create UNIX clones that didn't require source code licenses. One of the first was Minix, created by Andy Tanenbaum and aimed squarely at teaching operating systems. Early versions of Minix were compatible with the seventh edition UNIX; the most recent version is POSIX compliant and can run on an AT with 2 MB of memory and 30 MB of disk space.

    Many Minix users tried to convince Andy to add features such as virtual memory and networking, but Andy wanted to keep the system small for teaching purposes. Eventually, a user named Linus Torvalds got annoyed enough that he used Minix to create another UNIX clone with these extra features. And so Linux was born.

    While Linux was taking off like a plague of rabbits, the BSD hackers were working on removing the last vestiges of UNIX source code from their system. They thought they had done so, and BSDI released BSD/386, a version of 4.3BSD that ran on Intel platforms. AT&T, however, wasn't so sure about the complete removal of UNIX source code and took them to court about it.

    AT&T is not a good company to be sued by: it has a small army of lawyers. Eventually, the conflict was settled out of court with a few compromises, and we now have several freely available BSDs: FreeBSD, NetBSD, and OpenBSD. Of course, they all come with source code.

    UNIX hackers of the late 1990s surely have an abundance of source code to hack on: Linux, Minix, OpenBSD, etc. But are they really UNIX hackers, or just UNIX clone hackers? Wouldn't it be nice if we could hack on real UNIX, for old time's sake?

    UNIX turned 25 in 1994, which makes its early versions nearly antiques. Many of the old UNIX hackers (hackers of old UNIX, that is) thought the time had come to get the old, completely antiquated UNIX systems back out for sentimental reasons. After all, ITS, CTSS, and TOPS-20 had been rescued and made publicly available; why not UNIX?

    At the time, UNIX was undergoing a crisis of ownership. Did AT&T own UNIX this week, or was it Novell, Hewlett-Packard, or SCO? UNIX is a trademark of someone, but I'm not sure who. After the dust had settled, SCO had the rights to the source code, and X/Open had dibs on the name "UNIX," which is probably still an adjective.

    During the ownership crisis, Peter Salus, Dennis Ritchie, and John Lions had begun to lobby Novell: they wanted John's commentary on UNIX to be made publicly available in printed form. It wasn't until the UNIX source code rights had been sold to SCO that this finally was approved. It helped to have some old UNIX hackers, Mike Tilson and Doug Michels, inside SCO to fight the battle. You can now buy John Lions's commentary on 6th Edition UNIX (with source code) from Peer to Peer Communications, ISBN 1-57398-013-7. As Ken Thompson says: "After 20 years, this is still the best exposition of a 'real' operating system."

    One of the restrictions on the commentary's publication is that the UNIX source contained within cannot be entered into a computer. OK, so you can read the book, but what use is source code unless you can hack at it?!

    At the time that SCO bought UNIX, I began to lobby SCO to make the old source available again, unaware of the efforts to release the Lions's commentary. SCO's initial response was "this will dilute the trade secrets we have in UNIX, and it wouldn't be economically viable." My efforts drew a blank.

    To help bring greater lobbying power to bear on SCO, the PDP UNIX Preservation Society (PUPS) was formed. Its aims are to fight for the release of the old UNIX source, to preserve information and source from these old systems, and to help those people who still own PDP-11s to get UNIX up and running on them. After realizing that SCO was never going to make the old UNIX source code freely available, we explored the avenue of cheap, personal-use source licenses. The society set up a Web petition on the topic and gathered nearly 400 electronic signatures.

    Inside SCO, we were very fortunate to contact Dion Johnson, who took up our cause and fought tooth and nail with the naysayers and the legal eagles at SCO. The combined efforts of the PUPS petition and Dion's hard work inside SCO have finally borne fruit.

    On March 10, 1998, SCO made cheap, personal-use UNIX source code licenses available for the following versions of UNIX: first through seventh edition UNIX, 32V, and derived systems that also run on PDP-11s, such as 2.11BSD. The cost of the license is US$100, and the main restriction is that you cannot distribute the source code to people without licenses. Finally, we can be real UNIX hackers and "use the source, Luke!" again.

    Acknowledgments and References

    I'd like to thank Dion Johnson, Steven Schultz, the members of the PDP UNIX Preservation Society, and the people who signed the PUPS petition for their help in making cheap UNIX source licenses available again. Dion, in particular, deserves a medal for his efforts on our behalf.

    You can find more about the PDP UNIX Preservation Society at <http://minnie.cs.adfa.oz.au/PUPS/> and details on how to obtain your own personal UNIX source license at <http://minnie.cs.adfa.oz.au/PUPS/getlicense.html>.

    SCO won't be distributing UNIX source code as part of the license. PUPS members have volunteered to write CDs and tapes to distribute old versions of UNIX to license holders. We currently have fifth, sixth, and seventh editions, 32V, 1BSD, all 2BSDs, Mini UNIX, and Xinu. We are looking for complete versions of PWB UNIX and AUSAM. We desperately want anything before fifth edition and hope these early systems haven't gone to the bit bucket. Please contact us if you have anything from this era worth preserving.

    If you are licensed and want a copy of the PUPS Archive, see the PUPS Web page above for more information. We expect to be deluged with requests for copies, so if you can volunteer to write CDs or tapes for us, please let us know.

    You don't need to own a PDP-11 to run these old systems. The PUPS Archive includes a number of excellent PDP-11 emulators. If you have bought a copy of the Lions's commentary (and you should), you can now run real sixth edition UNIX on an emulator. And if you want, you can hack the code!
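    As a purely illustrative sketch (the archive offers several emulators, and the disk image name here is invented), a session with a SIMH-style PDP-11 simulator booting a sixth edition RK05 image might look like the transcript below; at the V6 bootstrap's "@" prompt you type the name of the kernel to load:

        $ pdp11
        sim> set cpu 11/40
        sim> attach rk0 v6root.rk05
        sim> boot rk0
        @unix

        login: root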

    SCO - Ancient UNIX -- SCO has since opened its early UNIX source code.

    [Jun 12, 1996] Netizens Netbook Table of Contents by Ronda Hauben and Michael Hauben

    6/12/96 | columbia.edu

    Note: Please do not link to individual file names as file names are subject to change. Instead link to the Netizens netbook page. Thanks.

    Foreword: By Tom Truscott
    Preface: What is a Netizen?
    Introduction: Participatory Networks

    Part I - The Present: What Has Been Created and How?

    Chapter 1 - The Net and the Netizens: The Effect the Net has on People's Lives
    Chapter 2 - The Evolution of Usenet: The Poor Man's Arpanet
    Chapter 3 - The Social Forces Behind The Development of Usenet
    Chapter 4 - The World of Usenet

    Part II - The Past: Where Has It All Come From?

    Chapter 5 - The Vision of Interactive Computing and the Future
    Chapter 6 - Cybernetics, Time-sharing, Human-Computer Symbiosis and On-line Communities: Creating a Supercommunity of On-line Communities
    Chapter 7 - Behind the Net: Computer Science and the Untold Story of the ARPANET
    Chapter 8 - The Birth and Development of the ARPANET
    Chapter 9 - On the Early History and Impact of UNIX: Tools to Build the Tools for a New Millennium
    Chapter 10 - On the Early Days of Usenet: The Roots of the Cooperative Online Culture

    Part III - And the Future?

    Chapter 11 - The NTIA Conference on the Future of the Net Creating a Prototype for a Democratic Decision Making Process
    Chapter 12 - "Imminent Death of the Net Predicted!"
    Chapter 13 - The Effect of the Net on the Professional News Media: The Usenet News Collective and Man-Computer News Symbiosis
    Chapter 14 - The Net and the Future of Politics: The Ascendancy of the Commons
    Chapter 15 - Exploring New York City's On-Line Community: A Snapshot of NYC.General

    Part IV - Contributions Towards Developing a Theoretical Framework

    Chapter 16 - The Expanding Commonwealth of Learning: Printing and the Net
    Chapter 17 - `Arte': An Economic Perspective
    Chapter 18 - The Computer as Democratizer

    Bibliography
    Glossary of Acronyms

    Appendix

    Proposed draft Declaration of the Rights of Netizens


