
TCP/IP Networks


TCP/IP was and is the crown jewel of US engineering acumen, the technology that changed civilization as we know it in less than 50 years.


The key idea behind TCP and IP was to create a "network of networks". That's why the Department of Defense (DOD) initiated a research project to connect a number of different networks, designed by different vendors, into a network of networks (the "Internet").

The Army puts out a bid on a computer and DEC wins the bid. The Air Force puts out a bid and IBM wins. The Navy bid is won by Unisys. Then the President decides to invade Grenada and the armed forces discover that their computers cannot talk to each other. The DOD must build a "network" out of systems each of which, by law, was delivered by the lowest bidder on a single contract.

TCP/IP was successful because it was relatively simple and delivered a few basic services that everyone needs (file transfer, electronic mail, remote logon) across many different types of clients, servers and operating systems. The IP component provides routing from the local LAN to the enterprise network, and then to the global Internet. On the battlefield a communications network will sustain damage, so the DOD designed TCP/IP to be robust and to recover automatically from any node or line failure. This design allows the construction of very large networks with minimal central management.

As with all other communications protocols, TCP/IP is composed of layers: IP is responsible for moving packets of data from node to node, forwarding each packet based on its destination address; TCP is responsible for verifying the correct delivery of data from client to server, detecting errors and triggering retransmission until the data is correctly and completely received; and application-level services (sockets, FTP, SMTP, Telnet and so on) sit on top.

To ensure that all types of systems from all vendors can communicate, TCP/IP was completely standardized and open from the beginning. The sudden explosion of high speed microprocessors, fiber optics, and digital phone systems has created a burst of new options: ISDN, frame relay, FDDI, and Asynchronous Transfer Mode (ATM). On the physical level new technologies arise and become obsolete within a few years, so no single standard can govern citywide, nationwide, or worldwide communications. But on the logical level TCP/IP dominates.

The original design of TCP/IP as a Network of Networks fits nicely within the current technological uncertainty. TCP/IP data can be sent across a LAN, carried within an internal corporate network, or piggybacked on the cable service. Furthermore, machines connected to any of these networks can communicate with any other network through gateways supplied by the network vendor.

Early research

The Internet protocol suite resulted from research and development conducted by the Defense Advanced Research Projects Agency (DARPA) in the early 1970s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, the developer of the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmerman and Louis Pouzin, designer of the CYCLADES network, with important influences on this design.

The network's design included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. Using a simple design, it became possible to connect almost any network to the ARPANET, irrespective of local characteristics, thereby solving Kahn's initial problem. One popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two tin cans and a string."

A computer, called a router, is provided with an interface to each network. It forwards packets back and forth between them. Originally a router was called a gateway, but the term was changed to avoid confusion with other types of gateways.

Specification

From 1973 to 1974, Cerf's networking research group at Stanford worked out details of the idea, resulting in the first TCP specification. A significant technical influence was the early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which existed around that time. DARPA then contracted with BBN Technologies, Stanford University, and the University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, TCP v3 and IP v3, and TCP/IP v4. The last protocol is still in use today.

In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London (UCL). In November 1977, a three-network TCP/IP test was conducted between sites in the US, UK, and Norway. Several other TCP/IP prototypes were developed at multiple research centers between 1978 and 1983. The migration of the ARPANET to TCP/IP was officially completed on January 1, 1983.

Adoption

In March 1982, the US Department of Defense adopted TCP/IP as the standard for all military computer networking. In 1985, the Internet Architecture Board held a three-day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. In 1985 the first Interop conference was held, focusing on network interoperability via further adoption of TCP/IP. It was founded by Dan Lynch, an early Internet activist. From the beginning, it was attended by large corporations, such as IBM and DEC. Interoperability conferences have been held every year since then. Every year from 1985 through 1993, the number of attendees tripled.

IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, despite having competing internal protocols (SNA, XNS, etc.). At IBM, from 1984, Barry Appelman's group did TCP/IP development. (Appelman later moved to AOL to be the head of all its development efforts.) They managed to navigate around the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software, began offering TCP/IP stacks for DOS and MS Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin.

Back then, most of these TCP/IP stacks were written single-handedly by a few talented programmers. For example, John Romkey of FTP Software was the author of the MIT PC/IP package. John Romkey's PC/IP implementation was the first IBM PC TCP/IP stack. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively.

The spread of TCP/IP was fueled further in June 1989, when AT&T agreed to put into the public domain the TCP/IP code developed for UNIX. Various vendors, including IBM, included this code in their own TCP/IP stacks. Many companies sold TCP/IP stacks for Windows until Microsoft released its own TCP/IP stack in Windows 95. This event cemented TCP/IP's dominance over other protocols. These protocols included IBM's SNA, OSI, Microsoft's native NetBIOS (still widely used for file sharing), and Xerox' XNS.

Addresses

Each network technology has its own convention for transmitting messages between two machines within the same network. On the physical level, packets are sent between machines by supplying a six-byte unique identifier (the "MAC" address). In an SNA network, every machine has Logical Units with their own network address. DECNET, Appletalk, and Novell IPX all have a scheme for assigning numbers to each local network and to each workstation attached to the network.

On top of these local or vendor-specific network addresses, TCP/IP assigns a unique number to every workstation on the net. This "IP number" is a four-byte value that, by convention, is expressed by converting each byte into a decimal number (0 to 255) and separating the bytes with a period.
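For illustration, here is a minimal Python sketch (standard library only; the example address is the PCLT server address used later in this article) showing that the dotted-decimal form is just a byte-by-byte rendering of a single 32-bit value:

import socket
import struct

# Pack the dotted-decimal string into its 4 raw bytes, then read those
# bytes as one 32-bit integer in network byte order.
packed = socket.inet_aton("130.132.59.234")
value = struct.unpack("!I", packed)[0]
print(value)                                        # 2189704170
# And back again: the same 32 bits rendered byte by byte with periods.
print(socket.inet_ntoa(struct.pack("!I", value)))   # 130.132.59.234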

In the early days an organization needed to send an electronic mail message to Hostmaster@INTERNIC.NET requesting assignment of a network number. It is still possible for almost anyone to get assignment of a number for a small "Class C" network, in which the first three bytes identify the network and the last byte identifies the individual computer. Before 1996 some people followed this procedure and were assigned Class C network numbers for the computers at their houses. Large organizations before 1996 typically got a "Class B" network, where the first two bytes identify the network and the last two bytes identify each of up to 64 thousand individual workstations. For example, Yale's Class B network is 130.132, so all computers with IP addresses 130.132.*.* are connected through Yale.

The organization then connects to the Internet through one of a dozen regional or specialized network suppliers. The network vendor is given the subscriber network number and adds it to the routing configuration in its own machines and those of the other major network suppliers.

There is no mathematical formula that translates the numbers 192.35.91 or 130.132 into "Yale University" or "New Haven, CT." The machines that manage large regional networks or the central Internet routers managed by the National Science Foundation can only locate these networks by looking each network number up in a table. There are potentially thousands of Class B networks, and millions of Class C networks, but computer memory costs are low, so the tables are reasonable. Customers that connect to the Internet, even customers as large as IBM, do not need to maintain any such information. They send all external data to the regional carrier to which they subscribe, and the regional carrier maintains the tables and does the appropriate routing.

New Haven is in a border state, split 50-50 between the Yankees and the Red Sox. In this spirit, Yale recently switched its connection from the Middle Atlantic regional network to the New England carrier. When the switch occurred, tables in the other regional areas and in the national spine had to be updated, so that traffic for 130.132 was routed through Boston instead of New Jersey. The large network carriers handle the paperwork and can perform such a switch given sufficient notice. During a conversion period, the university was connected to both networks so that messages could arrive through either path.

Subnets

Although the individual subscribers do not need to tabulate network numbers or provide explicit routing, it is convenient for most Class B networks to be internally managed as a much smaller and simpler version of the larger network organizations. It is common to subdivide the two bytes available for internal assignment into a one byte department number and a one byte workstation ID.

[Figure classb.gif: a Class B address split into a two-byte network number (130.132), a one-byte department (subnet) number, and a one-byte workstation ID]

The enterprise network is built using commercially available TCP/IP router boxes. Each router has small tables with 255 entries to translate the one-byte department number into selection of a destination Ethernet connected to one of the routers. Messages to the PC Lube and Tune server (130.132.59.234) are sent through the national and New England regional networks based on the 130.132 part of the number. Arriving at Yale, the 59 department ID selects an Ethernet connector in the C&IS building. The 234 selects a particular workstation on that LAN. The Yale network must be updated as new Ethernets and departments are added, but it is not affected by changes outside the university or the movement of machines within the department.
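A minimal sketch (Python) of the two-stage decision described above; the department-to-Ethernet table entry is hypothetical, and a real router would hold such mappings in its routing table:

# Hypothetical table: one-byte department number -> Ethernet segment.
dept_table = {59: "eth-C&IS-building"}

def route(ip):
    b = [int(x) for x in ip.split(".")]
    if (b[0], b[1]) != (130, 132):                # not a Yale (130.132) address:
        return "forward to regional carrier"      # handled outside the university
    # Inside Yale: the third byte selects a departmental Ethernet; the
    # fourth byte then selects the workstation on that LAN.
    return dept_table.get(b[2], "unknown department")

print(route("130.132.59.234"))   # eth-C&IS-building (host 234 on that LAN)
print(route("18.26.0.1"))        # forward to regional carrier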

An Uncertain Path

Every time a message arrives at an IP router, the router makes an individual decision about where to send it next. There is no concept of a session with a preselected path for all traffic. Consider a company with facilities in New York, Los Angeles, Chicago and Atlanta. It could build a network from four phone lines forming a loop (NY to Chicago to LA to Atlanta to NY). A message arriving at the NY router could go to LA via either Chicago or Atlanta. The reply could come back the other way.

How does the router make a decision between routes? There is no correct answer. Traffic could be routed by the "clockwise" algorithm (go NY to Atlanta, LA to Chicago). The routers could alternate, sending one message to Atlanta and the next to Chicago. More sophisticated routing measures traffic patterns and sends data through the least busy link.

If one phone line in this network breaks down, traffic can still reach its destination through a roundabout path. After losing the NY to Chicago line, data can be sent NY to Atlanta to LA to Chicago. This provides continued service though with degraded performance. This kind of recovery is the primary design feature of IP. The loss of the line is immediately detected by the routers in NY and Chicago, but somehow this information must be sent to the other nodes. Otherwise, LA could continue to send NY messages through Chicago, where they arrive at a "dead end." Each network adopts some Router Protocol which periodically updates the routing tables throughout the network with information about changes in route status.
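The rerouting idea can be illustrated with a toy sketch (Python); this is only a depth-first path search over the four-city ring from the example, not any particular Router Protocol:

# Four sites connected in a ring, as in the example above.
NODES = ["NY", "Chicago", "LA", "Atlanta"]
LINKS = {frozenset(p) for p in [("NY", "Chicago"), ("Chicago", "LA"),
                                ("LA", "Atlanta"), ("Atlanta", "NY")]}

def find_path(src, dst, failed=frozenset(), path=None):
    # Depth-first search for any surviving route, skipping failed links.
    path = path or [src]
    if src == dst:
        return path
    for nxt in NODES:
        if nxt not in path and frozenset((src, nxt)) in LINKS - set(failed):
            found = find_path(nxt, dst, failed, path + [nxt])
            if found:
                return found
    return None

print(find_path("NY", "Chicago"))                   # ['NY', 'Chicago']
print(find_path("NY", "Chicago",                    # after the NY-Chicago line fails:
                failed={frozenset(("NY", "Chicago"))}))
                                                    # ['NY', 'Atlanta', 'LA', 'Chicago']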

If the size of the network grows, then the complexity of the routing updates will increase as will the cost of transmitting them. Building a single network that covers the entire US would be unreasonably complicated. Fortunately, the Internet is designed as a Network of Networks. This means that loops and redundancy are built into each regional carrier. The regional network handles its own problems and reroutes messages internally. Its Router Protocol updates the tables in its own routers, but no routing updates need to propagate from a regional carrier to the NSF spine or to the other regions (unless, of course, a subscriber switches permanently from one region to another).

Undiagnosed Problems

IBM designs its SNA networks to be centrally managed. If any error occurs, it is reported to the network authorities. By design, any error is a problem that should be corrected or repaired. IP networks, however, were designed to be robust. In battlefield conditions, the loss of a node or line is a normal circumstance. Casualties can be sorted out later on, but the network must stay up. So IP networks are robust. They automatically (and silently) reconfigure themselves when something goes wrong. If there is enough redundancy built into the system, then communication is maintained.

In 1975, when SNA was designed, such redundancy would have been prohibitively expensive, or it might have been argued that only the Defense Department could afford it. Today, however, simple routers cost no more than a PC. Yet the TCP/IP design attitude that "errors are normal and can be largely ignored" produces problems of its own.

Data traffic is frequently organized around "hubs," much like airline traffic. One could imagine an IP router in Atlanta routing messages for smaller cities throughout the Southeast. The problem is that data arrives without a reservation. Airline companies experience the problem around major events, like the Super Bowl. Just before the game, everyone wants to fly into the city. After the game, everyone wants to fly out. Imbalance occurs on the network when something new gets advertised. Adam Curry announced the server at "mtv.com" and his regional carrier was swamped with traffic the next day. The problem is that messages come in from the entire world over high speed lines, but they go out to mtv.com over what was then a slow speed phone line.

Occasionally a snow storm cancels flights and airports fill up with stranded passengers. Many go off to hotels in town. When data arrives at a congested router, there is no place to send the overflow. Excess packets are simply discarded. It becomes the responsibility of the sender to retry the data a few seconds later and to persist until it finally gets through. This recovery is provided by the TCP component of the Internet protocol.

TCP was designed to recover from node or line failures, where the network propagates routing table changes to all router nodes. Since the update takes some time, TCP is slow to initiate recovery. The TCP algorithms are not tuned to optimally handle packet loss due to traffic congestion. Instead, the traditional Internet response to traffic problems has been to increase the speed of lines and equipment in order to stay ahead of growth in demand.

TCP treats the data as a stream of bytes. It logically assigns a sequence number to each byte. The TCP packet has a header that says, in effect, "This packet starts with byte 379642 and contains 200 bytes of data." The receiver can detect missing or incorrectly sequenced packets. TCP acknowledges data that has been received and retransmits data that has been lost. The TCP design means that error recovery is done end-to-end between the Client and Server machine. There is no formal standard for tracking problems in the middle of the network, though each network has adopted some ad hoc tools.
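The byte-numbering idea can be sketched in a few lines of Python; this toy receiver only tracks which byte ranges have arrived and reports the first gap, which is essentially what TCP's acknowledgment number conveys:

# Toy receiver: remember each arriving segment by its starting byte number.
received = {}   # start byte -> segment length

def segment_arrived(start, length):
    received[start] = length

def next_byte_expected(start):
    # Walk contiguous segments from `start`; the first missing byte is
    # what a TCP receiver would acknowledge (i.e., ask for next).
    pos = start
    while pos in received:
        pos += received[pos]
    return pos

segment_arrived(379642, 200)       # "starts with byte 379642, contains 200 bytes"
segment_arrived(379942, 100)       # a later segment arrives out of order
print(next_byte_expected(379642))  # 379842 -> bytes 379842..379941 were lost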

Need to Know

There are three levels of TCP/IP knowledge. Those who administer a regional or national network must design a system of long distance phone lines, dedicated routing devices, and very large configuration files. They must know the IP numbers and physical locations of thousands of subscriber networks. They must also have a formal network monitoring strategy to detect problems and respond quickly.

Each large company or university that subscribes to the Internet must have an intermediate level of network organization and expertise. A half dozen routers might be configured to connect several dozen departmental LANs in several buildings. All traffic outside the organization would typically be routed to a single connection to a regional network provider.

However, the end user can install TCP/IP on a personal computer without any knowledge of either the corporate or regional network. Three pieces of information are required:

  1. The IP address assigned to this personal computer
  2. The part of the IP address (the subnet mask) that distinguishes other machines on the same LAN (messages can be sent to them directly) from machines in other departments or elsewhere in the world (which are sent to a router machine)
  3. The IP address of the router machine that connects this LAN to the rest of the world.

In the case of the PCLT server, the IP address is 130.132.59.234. Since the first three bytes designate the department network, the "subnet mask" is defined as 255.255.255.0 (255 is the largest byte value and represents the number with all bits turned on). It is a Yale convention (which we recommend to everyone) that the router for each department have station number 1 within the department network. Thus the PCLT router is 130.132.59.1, and the PCLT server is configured with the values:

IP address: 130.132.59.234
Subnet mask: 255.255.255.0
Default router: 130.132.59.1

The subnet mask tells the server that any other machine with an IP address beginning 130.132.59.* is on the same department LAN, so messages are sent to it directly. Any IP address beginning with a different value is accessed indirectly by sending the message through the router at 130.132.59.1 (which is on the departmental LAN).
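A minimal sketch (Python standard library) of the decision the subnet mask drives; the destination addresses are made up for illustration:

import ipaddress

# The PCLT department LAN: 130.132.59.* with mask 255.255.255.0 (/24).
lan = ipaddress.ip_network("130.132.59.0/24")
router = ipaddress.ip_address("130.132.59.1")    # departmental router, station 1

def next_hop(dest):
    dest = ipaddress.ip_address(dest)
    # Same subnet: deliver directly on the LAN; otherwise hand to the router.
    return dest if dest in lan else router

print(next_hop("130.132.59.77"))   # 130.132.59.77 (same LAN, sent directly)
print(next_hop("128.36.12.1"))     # 130.132.59.1  (elsewhere, goes via the router)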



Old News ;-)

[Jun 01, 2017] How To Configure SAMBA Server And Transfer Files Between Linux and Windows (LinuxAndUbuntu)

Jun 01, 2017 | www.linuxandubuntu.com

If you are setting this up on an Ubuntu server you can use vim or nano to edit the smb.conf file; on an Ubuntu desktop just use the default text editor. Note that all commands (server or desktop) must be run as root.

$ sudo nano /etc/samba/smb.conf

Then add the information below to the very end of the file:

[share]
comment = Ubuntu File Server Share
path = /srv/samba/share
browsable = yes
guest ok = yes
read only = no
create mask = 0755

comment : a short description of the share.
path : the path of the directory to be shared.

This example uses /srv/samba/share because, according to the Filesystem Hierarchy Standard (FHS), /srv is where site-specific data should be served. Technically Samba shares can be placed anywhere on the filesystem as long as the permissions are correct, but adhering to standards is recommended.

create mask : determines the permissions new files will have when created.

Now that Samba is configured, the directory /srv/samba/share needs to be created and the permissions need to be set. Create the directory and change the permissions from the terminal:

$ sudo mkdir -p /srv/samba/share
$ sudo chown nobody:nogroup /srv/samba/share/

The -p switch tells mkdir to create the entire directory tree if it does not exist.

Finally, restart the Samba services to enable the new configuration:

$ sudo systemctl restart smbd.service nmbd.service

From a Windows client, you should now be able to browse to the Ubuntu file server and see the shared directory. If your client doesn't show your share automatically, try to access your server by its IP address (e.g. \\192.168.1.1) or by hostname in a Windows Explorer window. To check that everything is working, try creating a directory from Windows.
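If the share does not appear, a quick way to confirm basic reachability from any client is to test the SMB port; here is a minimal Python sketch (standard library; substitute your server's actual address for the 192.168.1.1 example above):

import socket

# Try to open a TCP connection to the SMB port (445) on the server.
try:
    with socket.create_connection(("192.168.1.1", 445), timeout=5):
        print("SMB port 445 is reachable")
except OSError as err:
    print("cannot reach the server:", err)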

To create additional shares simply create new [dir] sections in /etc/samba/smb.conf , and restart Samba. Just make sure that the directory you want to share actually exists and the permissions are correct.

[Jul 23, 2009] Twitter's Google Docs Hack - A Warning For Cloud App Users (eWeekEurope.co.uk) By Eric Lundquist

20-07-2009

Twitter lost its data through a hack on Google Docs. Learn from this to be very careful how much trust you place on cloud apps and Web 2.0, says Eric Lundquist

Here's the background. A hacker apparently was able to access the Google account of a Twitter employee. Twitter uses Google Docs as a method to create and share information. The hacker apparently got at the docs and sent them to TechCrunch, which decided to publish much of the information.

The entire event - not the first time Twitter has been hacked into through cloud apps - sent the Web world into a frenzy. How smart was Twitter to rely on Google applications? How can Google build up business-to-business trust when one hack opens the gates on corporate secrets? Were TechCrunch journalists right to publish stolen documents? Whatever happened to journalists using documents as a starting point for a story rather than the end point story in itself?

Alongside all this, what are the serious lessons that business execs and information technology professionals can learn from the Twitter/TechCrunch episode? Here are my suggestions:

1. Don't confuse the cloud with secure, locked-down environments.
Cloud computing is all the rage. It makes it easy to scale up applications, design around flexible demand and make content widely accessible [in the UK, the Tory party is proposing more use of it by Government, and the Labour Government has appointed a Tsar of Twitter - Editor]. But the same attributes that make the cloud easy for everyone to access makes it, well, easy for everyone to access.

2. Cloud computing requires more, not less, stringent security procedures.
In your own network would you defend your most vital corporate information with only a username and user-created password? I don't think so. Recent surveys have found that Web 2.0 users are slack on security.

3. Putting security procedures in place after a hack is dumb.
Security should be a tiered approach. Non-vital information requires less security than, say, your company's five-year plan, financials or salaries. If you don't think about this stuff in advance you will pay for it when it appears on the evening news.

4. Don't rely on the good will of others to build your security.
Take the initiative. I like the ease and access of Google applications, but I would never include those capabilities in a corporate security framework without a lengthy discussion about rights, procedures and responsibilities. I'd also think about having a white hat hacker take a look at what I was planning.

5. The older IT generation has something to teach the youngsters.
The world of business 2.0 is cool, exciting... and full of holes. Those grey haired guys in the server room grew up with procedures that might seem antiquated, but were designed to protect a company's most important assets.

6. Consider compliance.
Compliance issues have to be considered whether you are going to keep your information on a local server you keep in a safe or a cloud computing platform. Finger-pointing will not satisfy corporate stakeholders or government enforcers.

[Dec 28, 2006] TCP-IP Protocol Sequence Diagrams

Tutorial articles in this section describe TCP/IP and related protocols as sequence diagrams. (The sequence diagrams were generated using EventStudio System Designer 2.5.)

[PDF] TCP/IP reference card from SANS

[Dec 6, 2005] TCP-IP Stack Hardening

[Dec 6, 2005] Daryl's TCP-IP Primer Good and up-to-date primer...

[Mar 19, 2005] TCP-IP Protocol Sequence Diagrams

Articles in this section describe TCP/IP and related protocols as sequence diagrams.
(The sequence diagrams were generated using EventStudio).

Obscurantism in Information Technology: Nicholas Carr's "IT Does not Matter" Fallacy and "Everything in the Cloud" Utopia

Nicholas Carr's provocative HBR article published five years ago and his subsequent books suffer from a lack of understanding of IT history, of electrical transmission networks (which he uses as a close historical analogy), and of the "in the cloud" software service provider model (SaaS). He cherry-picks historical facts to fit his needs instead of trying to describe the real history of development of each of those three technologies. To be more accurate, Carr tortures facts to get them to fit his fantasy. The central idea of the article, "IT does not matter", is simply a fallacy. At best Carr managed to ask a couple of interesting questions, but provided inferior and misleading answers. While Carr is definitely a gifted writer, ignorance of the technology about which he is writing leads him to absurd conclusions, which due to his lucid writing style look quite plausible to non-specialists and as such influence public opinion about IT. Still, as a writer Carr comes across as a guy who can write engagingly about a variety of topics, including those about which he knows almost nothing. Here lies the danger, as only specialists can sense that something is deeply amiss, while ordinary readers tend to believe the aura of credibility emanating from the "former editor of HBR" title.

Unfortunately the charge of irrelevance of IT made by Carr was perfectly in sync with higher management's desire to accelerate outsourcing, and Carr's 2003 HBR paper served as a kind of "IT outsourcing manifesto". The fact that many people were sitting on the fence about the value of IT outsourcing partially explains why his initial HBR article, as weak and detached from reality as it was, generated less effective rebuttals than it should have. This paper is an attempt to provide a more coherent analysis of the main components of Carr's fallacious vision five years after the event.

If one looks closer at what Carr proposes, it is evident that this is a pretty reactionary and defeatist framework, which I would call "IT obscurantism" and which is not that different from creationism. Like with the latter, his justifications are extremely weak and consist on one hand of the usage of fuzzy facts and questionable analogies, and on the other of putting forward radical, absurd recommendations ("Spend less", "Follow, don't lead", "Focus on vulnerabilities, not opportunities" and "move to utility-based 'in the cloud' computing") which can hurt anybody who trusts them or, worse, tries to blindly adopt them. The irony of Carr's position is that for the five years since the publication of his HBR article local datacenters actually flourished and until 2008 showed no signs of impending demise. In 2008 the credit crunch hit datacenters, but they were just collateral damage of the financial storm. From 2003 to 2008 datacenters experienced just another technological reorganization, which increased the role of Intel computers in the datacenter (including the appearance of blades as alternatives to small and midrange servers, and of laptops as the alternative to desktops), virtualization, wireless technologies and distributed computing. Moreover, there was some trend toward the consolidation of datacenters within large companies.

The paper contains a critique of key aspects of Carr's utopia, including but not limited to such typical problems of Carr's writings as frivolous treatment of IT history, limited understanding of enterprise IT, idealization of the "in the cloud" computing model, and complete absence of discussion of competing technologies. The author argues that the level of hype about "utility computing" makes it prudent to treat all promoters of this interesting new technology, especially those who severely lack technical depth, with extreme skepticism. Junk science is and always was based on cherry-picked evidence which has been carefully selected or edited to support a pre-selected, absurd "truth". The article claims that Carr's doom-and-gloom predictions about IT and datacenters are based on cherry-picked evidence, and while the future is unpredictable by definition, the total switch to Internet-based remote "in the cloud" computing will probably never materialize. Private and hybrid models are definitely more viable. There is no free lunch: moving computation to the cloud increases the load on the remote servers as well as drastically increasing security requirements. Both factors increase costs. Achieving the same reliability with cloud computing as with a local solution is another problem. Outages of a large datacenter are usually more severe and more difficult to recover from than outages of a small local datacenter. Restrictions on the flow of information about an outage additionally hurt the clients.

[Jul 30, 2008] OPEC 2.0: Why Bandwidth Is the Oil of the Information Economy By TIM WU

July 30, 2008 | NYTimes.com

AMERICANS today spend almost as much on bandwidth - the capacity to move information - as we do on energy. A family of four likely spends several hundred dollars a month on cellphones, cable television and Internet connections, which is about what we spend on gas and heating oil.

Just as the industrial revolution depended on oil and other energy sources, the information revolution is fueled by bandwidth. If we aren't careful, we're going to repeat the history of the oil industry by creating a bandwidth cartel.

Like energy, bandwidth is an essential economic input. You can't run an engine without gas, or a cellphone without bandwidth. Both are also resources controlled by a tight group of producers, whether oil companies and Middle Eastern nations or communications companies like AT&T, Comcast and Vodafone. That's why, as with energy, we need to develop alternative sources of bandwidth.

Wired connections to the home - cable and telephone lines - are the major way that Americans move information. In the United States and in most of the world, a monopoly or duopoly controls the pipes that supply homes with information. These companies, primarily phone and cable companies, have a natural interest in controlling supply to maintain price levels and extract maximum profit from their investments - similar to how OPEC sets production quotas to guarantee high prices.

But just as with oil, there are alternatives. Amsterdam and some cities in Utah have deployed their own fiber to carry bandwidth as a public utility. A future possibility is to buy your own fiber, the way you might buy a solar panel for your home.

Encouraging competition is another path, though not an easy one: most of the much-hyped competitors from earlier this decade, like businesses that would provide broadband Internet over power lines, are dead or moribund. But alternatives are important. Relying on monopoly producers for the transmission of information is a dangerous path.

After physical wires, the other major way to move information is through the airwaves, a natural resource with enormous potential. But that potential is untapped because of a false scarcity created by bad government policy.

Our current approach is a command and control system dating from the 1920s. The federal government dictates exactly what licensees of the airwaves may do with their part of the spectrum. These Soviet-style rules create waste that is worthy of Brezhnev.

Many "owners" of spectrum either hardly use the stuff or use it in highly inefficient ways. At any given moment, more than 90 percent of the nation's airwaves are empty.

The solution is to relax the overregulation of the airwaves and allow use of the wasted spaces. Anyone, so long as he or she complies with a few basic rules to avoid interference, could try to build a better Wi-Fi and become a broadband billionaire. These wireless entrepreneurs could one day liberate us from wires, cables and rising prices.

Such technologies would not work perfectly right away, but over time clever entrepreneurs would find a way, if we gave them the chance. The Federal Communications Commission promised this kind of reform nearly a decade ago, but it continues to drag its heels.

In an information economy, the supply and price of bandwidth matters, in the way that oil prices matter: not just for gas stations, but for the whole economy.

And that's why there is a pressing need to explore all alternative supplies of bandwidth before it is too late. Americans are as addicted to bandwidth as they are to oil. The first step is facing the problem.

Tim Wu is a professor at Columbia Law School and the co-author of "Who Controls the Internet?"

[Aug 7, 2007] Expect plays a crucial role in network management by Cameron Laird

Jul 31, 2007 | developerworks

If you manage systems and networks, you need Expect.

More precisely, why would you want to be without Expect? It saves the hours that common tasks otherwise demand. Even if you already depend on Expect, though, you might not be aware of the capabilities described below.

Expect automates command-line interactions

You don't have to understand all of Expect to begin profiting from the tool; let's start with a concrete example of how Expect can simplify your work on AIX® or other operating systems:

Suppose you have logins on several UNIX® or UNIX-like hosts and you need to change the passwords of these accounts, but the accounts are not synchronized by Network Information Service (NIS), Lightweight Directory Access Protocol (LDAP), or some other mechanism that recognizes you're the same person logging in on each machine. Logging in to a specific host and running the appropriate passwd command doesn't take long - probably only a minute, in most cases. And you must log in "by hand," right, because there's no way to script your password?

Wrong. In fact, the standard Expect distribution (full distribution) includes a command-line tool (and a manual page describing its use!) that takes over precisely this chore. passmass (see Resources) is a short script written in Expect that makes it as easy to change passwords on twenty machines as on one. Rather than retyping the same password over and over, you can launch passmass once and let your desktop computer take care of updating each individual host. You save yourself enough time to get a bit of fresh air, and you avoid multiple opportunities for the frustration of mistyping something you've already entered.
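To make the idea concrete, here is a minimal sketch of a passmass-style password change written in Python with the pexpect library (an Expect work-alike) rather than in Expect itself. The host list and account are hypothetical, the exact prompts vary from system to system, and real use needs error handling and a safer way to supply passwords:

import pexpect

hosts = ["hostA.example.edu", "hostB.example.edu"]   # hypothetical hosts
user, old, new = "jdoe", "OLD-password", "NEW-password"

for host in hosts:
    # Log in over ssh and run passwd, answering each prompt in turn.
    child = pexpect.spawn(f"ssh {user}@{host} passwd", timeout=30)
    child.expect("[Pp]assword:")                 # ssh login prompt
    child.sendline(old)
    child.expect("(?i)(current|old).*password")  # passwd asks for the old password
    child.sendline(old)
    child.expect("(?i)new.*password")
    child.sendline(new)
    child.expect("(?i)(retype|re-enter|new).*password")
    child.sendline(new)
    child.expect(pexpect.EOF)
    print(host, "done")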

The limits of Expect

This passmass application is an excellent model - it illustrates many of Expect's general properties.

You probably know enough already to begin to write or modify your own Expect tools. As it turns out, the passmass distribution actually includes code to log in by means of ssh, but omits the command-line parsing to reach that code. Here's one way you might modify the distribution source to put ssh on the same footing as telnet and the other protocols:

Listing 1. Modified passmass fragment that accepts the -ssh argument
...
} "-rlogin" {
    set login "rlogin"
    continue
} "-slogin" {
    set login "slogin"
    continue
} "-ssh" {
    set login "ssh"
    continue
} "-telnet" {
    set login "telnet"
    continue
...

In my own code, I actually factor out more of this "boilerplate." For now, though, this cascade of tests, in the vicinity of line #100 of passmass, gives a good idea of Expect's readability. There's no deep programming here - no need for object-orientation, monadic application, co-routines, or other subtleties. You just ask the computer to take over the typing you usually do for yourself. As it happens, this small step represents many minutes or hours of human effort saved.

WANdoc Open Source (Perl-based)

WANdoc Open Source is free software that generates interactive documentation for large Cisco networks. It uses syslog and router configuration files to produce summarized, hyperlinked, and error-checked router information. It speeds up the WAN troubleshooting process and identifies inconsistencies in router deployment.

Understanding IP Addressing: Everything You Ever Wanted To Know - by Chuck Semeria -- a good tutorial from 3Com. This white paper is now available in the three PDFs below.


Pages 1 - 21
Pages 22 - 43
Pages 44 - 65

Top websites:

TCP/IP online books Free TCP/IP online books

AW • Professional - Networking Series Catalog Page Books from Addison Wesley, a respected name in technical publication.

Bill Stallings: Home Page Web Site for the Books of William Stallings

Douglas Comer This is the home page of Douglas Comer, the author of the book "Internetworking with TCP/IP".

Illustrated TCP/IP Online version of the book "Illustrated TCP/IP", by Matthew G. Naugle, published by Wiley Computer Publishing, John Wiley & Sons, Inc.

The Internet Companion Online version of the book "The Internet Companion". This book explains the basics of communication on the Internet and the applications available

Internetworking Multimedia This is a online book covering multimedia communication using the Internet

McGraw Hill Networking books A search on networking books published by McGraw Hill.

McGraw-Hill - Bet@ Books Free online prerelease versions of many new books on networking and other topics.

The Mechanics of Routing Protocols An online book published by Cisco Press.

The Network Book A comprehensive introduction to network and distributed computing technologies online

Network Reading List: TCP/IP,UNIX and Ethernet Compilation of links on the Internet relating to TCP/IP, Unix and Ethernet

Networking and Communications Prentice Hall Professional Technical Reference: Special Interests

Routing in the Internet A very comprehensive book on routing, written by Christian Huitema, from the Internet Architecture Board. A must read for those interested on routing protocols

Routing Information Protocols The Network Book, Chapter 3, Section 3. This document is part of the Network Book

TCP/IP and Data Communications Administration Guide An online book, in PDF format, explaining how to setup, maintain and expand a network using the Solaris implementation of the TCP/IP protocols

TCP/IP Network Administration, 2nd Edition Clearly written, this book is a good introduction to the TCP/IP protocols and practical applications.

Troubleshooting TCP/IP This is a sample chapter from the book "Windows NT TCP/IP Network Administration", published by O'Reilly and Associates, which explains how to solve problems related to TCP/IP in a Windows NT environment

Understanding Networking Technologies Online course providing training on a host of networking topics.

Windows NT TCP/IP Network Administration O'Reilly publication covering TCP/IP and NT

Wireless Networking Handbook Online version of the book "Wireless Networking Handbook" by Jim Geier, and published by New Riders, Macmillan Computer Publishing


MCI Arms ISPs with Means to Counterattack Hackers

[October 9] MCI today introduced a security product designed to help Internet Service Providers detect network intruders.

The networkMCI DoS (Denial of Service) Tracker constantly monitors the network; once a denial of service attack has been detected, the product immediately works to trace the root of the attack.

The product is designed to eliminate the time technical engineers spend manually searching for the intrusion. MCI claims the product takes little programming knowledge to find the network intruder.

The DoS Tracker combats SYN flood, ICMP flood, Bandwidth Saturation, Concentrated Source, and the newly detected Smurf hacker attacks.

"Obviously, we can't guarantee the safety of other networks from all hacker activity, but we believe the networkMCI DoS Tracker provides ISPs and other network operators with a powerful tool that will help them protect their Internet assets," Rob Hagens, director of Internet Engineering.

The product is available for free from MCI's Web site.


Tutorials

TCP/IP in 14 Days


Daryl's TCP-IP Primer Good and up-to-date primer...

Understanding IP addressing -- tutorial from 3Com

**** The Network Administrators' Guide -- the first several chapters contain a good introduction to TCP/IP

Contents (fragment)

FAME Computer Education TCPIP for Idiots Tutorial

RFC1180 TCP/IP Tutorial by T. Socolofsky & C. Kale, January 1991 (63 KBytes) -- old, but still a decent tutorial (UK mirror: RFC 1180)

TCP-IP and IPX Routing tutorial (mirror TCP-IP and IPX routing Tutorial )

Introduction to the Internet Protocols by Charles L. Hedrick. 3 July 1987 (Rutgers University). See also a mirror Introduction to TCPIP

Fast Guide to Subnets by Chuck Semeria (3Com)

Understanding IP Addressing

Integrating Your Machine With the Network - good guide from USAIL

PC Magazine PC Tech (A Beginner's Guide to TCPIP)

IP Masquerading for Linux




IBM Redbook

***+ TCP-IP Tutorial and Technical Overview -- a pretty decent and up-to-date IBM Redbook PDF

Table of Contents (old version was in HTML, now only PDF is available from the IBM site)

Part 1. Architecture and Core Protocols

  • Chapter 1. Introduction to TCP/IP - History, Architecture and Standards
  • 1.1 Internet History - Where It All Came From
  • 1.2 TCP/IP Architectural Model - What It Is All About
  • 1.3 Finding Standards for TCP/IP and the Internet
  • 1.4 Future of the Internet
  • 1.5 IBM and the Internet
  • Chapter 2. Internetworking and Transport Layer Protocols
  • 2.1 Internet Protocol (IP)
  • 2.2 Internet Control Message Protocol (ICMP)
  • 2.3 Internet Group Management Protocol (IGMP)
  • 2.4 Address Resolution Protocol (ARP)
  • 2.5 Reverse Address Resolution Protocol (RARP)
  • 2.6 Ports and Sockets
  • 2.7 User Datagram Protocol (UDP)
  • 2.8 Transmission Control Protocol (TCP)
  • 2.9 TCP Congestion Control Algorithms
  • Chapter 3. Routing Protocols
  • 3.1 Basic IP Routing
  • 3.2 Routing Algorithms
  • 3.3 Interior Gateway Protocols (IGP)
  • 3.4 Exterior Routing Protocols
  • Chapter 4. Application Protocols
  • 4.1 Characteristics of Applications
  • 4.2 Domain Name System (DNS)
  • 4.3 TELNET
  • 4.4 File Transfer Protocol (FTP)
  • 4.5 Trivial File Transfer Protocol (TFTP)
  • 4.6 Remote Execution Command Protocol (REXEC and RSH)
  • 4.7 Simple Mail Transfer Protocol (SMTP)
  • 4.8 Multipurpose Internet Mail Extensions (MIME)
  • 4.9 Post Office Protocol (POP)
  • 4.10 Internet Message Access Protocol Version 4 (IMAP4)
  • 4.11 Network Management
  • 4.12 Remote Printing (LPR and LPD)
  • 4.13 Network File System (NFS)
  • 4.14 X Window System
  • 4.15 Internet Relay Chat Protocol (IRCP)
  • 4.16 Finger Protocol
  • 4.17 NETSTAT
  • 4.18 Network Information Systems (NIS)
  • 4.19 NetBIOS over TCP/IP
  • 4.20 Application Programming Interfaces (APIs)
Part 2. Special Purpose Protocols and New Technologies

  • Chapter 5. TCP/IP Security Overview
  • 5.1 Security Exposures and Solutions
  • 5.2 A Short Introduction to Cryptography
  • 5.3 Firewalls
  • 5.4 Network Address Translation (NAT)
  • 5.5 The IP Security Architecture (IPSec)
  • 5.6 SOCKS
  • 5.7 Secure Sockets Layer (SSL)
  • 5.8 Transport Layer Security (TLS)
  • 5.9 Secure Multipurpose Internet Mail Extension (S-MIME)
  • 5.10 Virtual Private Networks (VPN) Overview
  • 5.11 Kerberos Authentication and Authorization System
  • 5.12 Remote Access Authentication Protocols
  • 5.13 Layer Two Tunneling Protocol (L2TP)
  • 5.14 Secure Electronic Transaction (SET)
  • Chapter 6. IP Version 6
  • 6.1 IPv6 Overview
  • 6.2 The IPv6 Header Format
  • 6.3 Internet Control Message Protocol Version 6 (ICMPv6)
  • 6.4 DNS in IPv6
  • 6.5 DHCP in IPv6
  • 6.6 Mobility Support in IPv6
  • 6.7 Internet Transition - Migrating from IPv4 to IPv6
  • 6.8 The Drive Towards IPv6
  • 6.9 References
Part 3. Connection Protocols and Platform Implementations

  • Chapter 13. Connection Protocols
  • 13.1 Serial Line IP (SLIP)
  • 13.2 Point-to-Point Protocol (PPP)
  • 13.3 Ethernet and IEEE 802.x Local Area Networks (LANs)
  • 13.4 Fiber Distributed Data Interface (FDDI)
  • 13.5 Asynchronous Transfer Mode (ATM)
  • 13.6 Data Link Switching: Switch-to-Switch Protocol
  • 13.7 Integrated Services Digital Network (ISDN)
  • 13.8 TCP/IP and X.25
  • 13.9 Frame Relay
  • 13.10 Enterprise Extender
  • 13.11 PPP Over SONET and SDH Circuits
  • 13.12 Multiprotocol Label Switching (MPLS)
  • 13.13 Multiprotocol over ATM (MPOA)
  • 13.14 Private Network-to-Network Interface (PNNI)
  • 13.15 Multi-Path Channel+ (MPC+)
  • 13.16 Multiprotocol Transport Network (MPTN)
  • 13.17 S/390 Open Systems Adapter 2
  • Chapter 14. Platform Implementations
  • 14.1 Software Operating System Implementations
  • 14.2 IBM Hardware Platform Implementations



