
TCP/IP Networks


TCP/IP was and is the crown jewel of US engineering acumen, the technology that changed civilization as we know it in less than 50 years.

The key idea behind TCP/IP was to create a "network of networks". That is why the Department of Defense (DOD) initiated a research project to connect a number of different networks, designed by different vendors, into a single network of networks (the "Internet").

The Army puts out a bid on a computer and DEC wins the bid. The Air Force puts out a bid and IBM wins. The Navy bid is won by Unisys. Then the President decides to invade Grenada and the armed forces discover that their computers cannot talk to each other. The DOD must build a "network" out of systems each of which, by law, was delivered by the lowest bidder on a single contract.

TCP/IP was successful because it was relatively simple and delivered a few basic services that everyone needs (file transfer, electronic mail, remote logon) across many different types of clients, servers, and operating systems. The IP component provides routing from the local LAN to the enterprise network and then to the global Internet. On the battlefield a communications network will sustain damage, so the DOD designed TCP/IP to be robust and to recover automatically from any node or line failure. This design allows the construction of very large networks with minimal central management.

As with other communications protocols, TCP/IP is composed of layers: IP, which moves packets of data from node to node; TCP, which verifies correct delivery of data from client to server; and the application services (sockets and the programs built on them) that sit on top of TCP.

To ensure that all types of systems from all vendors can communicate, TCP/IP was completely standardized and open from the beginning. The sudden explosion of high-speed microprocessors, fiber optics, and digital phone systems has created a burst of new options: ISDN, frame relay, FDDI, Asynchronous Transfer Mode (ATM). At the physical level new technologies arise and become obsolete within a few years, so no single standard can govern citywide, nationwide, or worldwide communications. But at the logical level TCP/IP dominates.

The original design of TCP/IP as a network of networks fits nicely within the current technological uncertainty. TCP/IP data can be sent across a LAN, or it can be carried within an internal corporate network, or it can piggyback on the cable service. Furthermore, machines connected to any of these networks can communicate with any other network through gateways supplied by the network vendor.

Early research

The Internet protocol suite resulted from research and development conducted by the Defense Advanced Research Projects Agency (DARPA) in the early 1970s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, the developer of the existing ARPANET Network Control Program (NCP) protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET.

By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, where the differences between network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the ARPANET, the hosts became responsible. Cerf credits Hubert Zimmerman and Louis Pouzin, designer of the CYCLADES network, with important influences on this design.

The network's design included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. Using this simple design, it became possible to connect almost any network to the ARPANET, irrespective of its local characteristics, thereby solving Kahn's initial problem. One popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, will run over "two tin cans and a string."

A computer, called a router, is provided with an interface to each network and forwards packets back and forth between them. Originally a router was called a gateway, but the term was changed to avoid confusion with other types of gateways.

Specification

From 1973 to 1974, Cerf's networking research group at Stanford worked out details of the idea, resulting in the first TCP specification. A significant technical influence was the early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which existed around that time. DARPA then contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, TCP v3 and IP v3, and TCP/IP v4. The last protocol is still in use today.

In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London (UCL). In November 1977, a three-network TCP/IP test was conducted between sites in the US, UK, and Norway. Several other TCP/IP prototypes were developed at multiple research centers between 1978 and 1983. The migration of the ARPANET to TCP/IP was officially completed on January 1, 1983.

Adoption

In March 1982, the US Department of Defense adopted TCP/IP as the standard for all military computer networking. In 1985, the Internet Architecture Board held a three-day workshop on TCP/IP for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. In 1985 the first Interop conference was held, focusing on network interoperability via further adoption of TCP/IP. It was founded by Dan Lynch, an early Internet activist. From the beginning, it was attended by large corporations, such as IBM and DEC. Interoperability conferences have been held every year since then. Every year from 1985 through 1993, the number of attendees tripled.

IBM, AT&T, and DEC were the first major corporations to adopt TCP/IP, despite having competing internal protocols (SNA, XNS, etc.). In IBM, from 1984, Barry Appelman's group did TCP/IP development. (Appelman later moved to AOL to head all of its development efforts.) They managed to navigate around corporate politics to get out a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software, began offering TCP/IP stacks for DOS and MS Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin.

Back then, most of these TCP/IP stacks were written single-handedly by a few talented programmers. For example, John Romkey of FTP Software was the author of the MIT PC/IP package. John Romkey's PC/IP implementation was the first IBM PC TCP/IP stack. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively.

The spread of TCP/IP was fueled further in June 1989, when AT&T agreed to put into the public domain the TCP/IP code developed for UNIX. Various vendors, including IBM, included this code in their own TCP/IP stacks. Many companies sold TCP/IP stacks for Windows until Microsoft released its own TCP/IP stack in Windows 95. This event cemented TCP/IP's dominance over other protocols. These protocols included IBM's SNA, OSI, Microsoft's native NetBIOS (still widely used for file sharing), and Xerox' XNS.

Addresses

Each technology has its own convention for transmitting messages between two machines within the same network. At the physical level, packets are sent between machines by supplying the six-byte unique identifier (the "MAC" address). In an SNA network, every machine has Logical Units with their own network addresses. DECNET, Appletalk, and Novell IPX all have a scheme for assigning numbers to each local network and to each workstation attached to the network.

On top of these local or vendor specific network addresses, TCP/IP assigns a unique number to every workstation in the net. This "IP number" is a four byte value that, by convention, is expressed by converting each byte into a decimal number (0 to 255) and separating the bytes with a period.
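As a small illustration (a sketch added here, not part of the original text), the dotted-decimal form is simply each of the four bytes printed as a decimal number and joined with periods; the helper function name below is invented for the example. In Python:

# Convert a four-byte IP address into dotted-decimal notation.
def to_dotted_decimal(raw: bytes) -> str:
    assert len(raw) == 4                      # IPv4 addresses are exactly four bytes
    return ".".join(str(b) for b in raw)      # each byte prints as a number from 0 to 255

# The Yale address used later in this article:
print(to_dotted_decimal(bytes([130, 132, 59, 234])))   # prints 130.132.59.234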

In the early days an organization needed to send electronic mail to Hostmaster@INTERNIC.NET requesting assignment of a network number. It is still possible for almost anyone to get a small "Class C" network, in which the first three bytes identify the network and the last byte identifies the individual computer. Before 1996 some people followed this procedure and were assigned Class C network numbers for the computers at their houses. Before 1996 a large organization typically got a "Class B" network, where the first two bytes identify the network and the last two bytes identify each of up to 64 thousand individual workstations. For example, Yale's Class B network is 130.132, so all computers with IP addresses 130.132.*.* are connected through Yale.

The organization then connects to the Internet through one of a dozen regional or specialized network suppliers. The network vendor is given the subscriber network number and adds it to the routing configuration in its own machines and those of the other major network suppliers.

There is no mathematical formula that translates the numbers 192.35.91 or 130.132 into "Yale University" or "New Haven, CT." The machines that manage large regional networks or the central Internet routers managed by the National Science Foundation can only locate these networks by looking each network number up in a table. There are potentially thousands of Class B networks, and millions of Class C networks, but computer memory costs are low, so the tables are reasonable. Customers that connect to the Internet, even customers as large as IBM, do not need to maintain any such information. They send all external data to the regional carrier to which they subscribe, and the regional carrier maintains the tables and does the appropriate routing.

New Haven is in a border state, split 50-50 between the Yankees and the Red Sox. In this spirit, Yale recently switched its connection from the Middle Atlantic regional network to the New England carrier. When the switch occurred, tables in the other regional areas and in the national spine had to be updated, so that traffic for 130.132 was routed through Boston instead of New Jersey. The large network carriers handle the paperwork and can perform such a switch given sufficient notice. During a conversion period, the university was connected to both networks so that messages could arrive through either path.

Subnets

Although the individual subscribers do not need to tabulate network numbers or provide explicit routing, it is convenient for most Class B networks to be internally managed as a much smaller and simpler version of the larger network organizations. It is common to subdivide the two bytes available for internal assignment into a one byte department number and a one byte workstation ID.


The enterprise network is built using commercially available TCP/IP router boxes. Each router has small tables with 255 entries to translate the one-byte department number into the selection of a destination Ethernet connected to one of the routers. Messages to the PC Lube and Tune server (130.132.59.234) are sent through the national and New England regional networks based on the 130.132 part of the number. Arriving at Yale, the 59 department ID selects an Ethernet connector in the C&IS building. The 234 selects a particular workstation on that LAN. The Yale network must be updated as new Ethernets and departments are added, but it is not affected by changes outside the university or the movement of machines within the department.

An Uncertain Path

Every time a message arrives at an IP router, it makes an individual decision about where to send it next. There is no concept of a session with a preselected path for all traffic. Consider a company with facilities in New York, Los Angeles, Chicago, and Atlanta. It could build a network from four phone lines forming a loop (NY to Chicago to LA to Atlanta to NY). A message arriving at the NY router could go to LA via either Chicago or Atlanta. The reply could come back the other way.

How does the router make a decision between routes? There is no correct answer. Traffic could be routed by the "clockwise" algorithm (go NY to Atlanta, LA to Chicago). The routers could alternate, sending one message to Atlanta and the next to Chicago. More sophisticated routing measures traffic patterns and sends data through the least busy link.

If one phone line in this network breaks down, traffic can still reach its destination through a roundabout path. After losing the NY to Chicago line, data can be sent NY to Atlanta to LA to Chicago. This provides continued service though with degraded performance. This kind of recovery is the primary design feature of IP. The loss of the line is immediately detected by the routers in NY and Chicago, but somehow this information must be sent to the other nodes. Otherwise, LA could continue to send NY messages through Chicago, where they arrive at a "dead end." Each network adopts some Router Protocol which periodically updates the routing tables throughout the network with information about changes in route status.

If the size of the network grows, then the complexity of the routing updates will increase as will the cost of transmitting them. Building a single network that covers the entire US would be unreasonably complicated. Fortunately, the Internet is designed as a Network of Networks. This means that loops and redundancy are built into each regional carrier. The regional network handles its own problems and reroutes messages internally. Its Router Protocol updates the tables in its own routers, but no routing updates need to propagate from a regional carrier to the NSF spine or to the other regions (unless, of course, a subscriber switches permanently from one region to another).

Undiagnosed Problems

IBM designs its SNA networks to be centrally managed. If any error occurs, it is reported to the network authorities. By design, any error is a problem that should be corrected or repaired. IP networks, however, were designed to be robust. In battlefield conditions, the loss of a node or line is a normal circumstance. Casualties can be sorted out later on, but the network must stay up. So IP networks are robust. They automatically (and silently) reconfigure themselves when something goes wrong. If there is enough redundancy built into the system, then communication is maintained.

In 1975, when SNA was designed, such redundancy would have been prohibitively expensive, or it might have been argued that only the Defense Department could afford it. Today, however, simple routers cost no more than a PC. However, the TCP/IP design principle that "errors are normal and can be largely ignored" produces problems of its own.

Data traffic is frequently organized around "hubs," much like airline traffic. One could imagine an IP router in Atlanta routing messages for smaller cities throughout the Southeast. The problem is that data arrives without a reservation. Airline companies experience the problem around major events, like the Super Bowl. Just before the game, everyone wants to fly into the city. After the game, everyone wants to fly out. Imbalance occurs on the network when something new gets advertised. Adam Curry announced the server at "mtv.com" and his regional carrier was swamped with traffic the next day. The problem is that messages come in from the entire world over high speed lines, but they go out to mtv.com over what was then a slow speed phone line.

Occasionally a snow storm cancels flights and airports fill up with stranded passengers. Many go off to hotels in town. When data arrives at a congested router, there is no place to send the overflow. Excess packets are simply discarded. It becomes the responsibility of the sender to retry the data a few seconds later and to persist until it finally gets through. This recovery is provided by the TCP component of the Internet protocol.

TCP was designed to recover from node or line failures where the network propagates routing table changes to all router nodes. Since the update takes some time, TCP is slow to initiate recovery. The TCP algorithms are not tuned to optimally handle packet loss due to traffic congestion. Instead, the traditional Internet response to traffic problems has been to increase the speed of lines and equipment in order to stay ahead of growth in demand.

TCP treats the data as a stream of bytes. It logically assigns a sequence number to each byte. The TCP packet has a header that says, in effect, "This packet starts with byte 379642 and contains 200 bytes of data." The receiver can detect missing or incorrectly sequenced packets. TCP acknowledges data that has been received and retransmits data that has been lost. The TCP design means that error recovery is done end-to-end between the Client and Server machine. There is no formal standard for tracking problems in the middle of the network, though each network has adopted some ad hoc tools.
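To make the byte-numbering idea concrete, here is a hedged sketch (an illustration added here, not real TCP code) of how a receiver could detect missing byte ranges when every segment announces "starts at byte N, contains L bytes"; the function and variable names are invented for the example:

# Each received segment is a (start_byte, length) pair, matching the header
# example above ("starts with byte 379642 and contains 200 bytes of data").
def find_gaps(segments, stream_start):
    expected = stream_start                   # next byte number we expect to see
    gaps = []
    for start, length in sorted(segments):
        if start > expected:                  # some bytes never arrived
            gaps.append((expected, start - 1))
        expected = max(expected, start + length)
    return gaps

# The segment starting at byte 379842 was lost in transit:
received = [(379642, 200), (380042, 200)]
print(find_gaps(received, stream_start=379642))   # prints [(379842, 380041)]

A real TCP implementation acknowledges the contiguous bytes it has received and the sender retransmits the rest; the sketch only shows the bookkeeping that makes a missing range detectable.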

Need to Know

There are three levels of TCP/IP knowledge. Those who administer a regional or national network must design a system of long distance phone lines, dedicated routing devices, and very large configuration files. They must know the IP numbers and physical locations of thousands of subscriber networks. They must also have a formal network monitor strategy to detect problems and respond quickly.

Each large company or university that subscribes to the Internet must have an intermediate level of network organization and expertise. A half dozen routers might be configured to connect several dozen departmental LANs in several buildings. All traffic outside the organization would typically be routed to a single connection to a regional network provider.

However, the end user can install TCP/IP on a personal computer without any knowledge of either the corporate or regional network. Three pieces of information are required:

  1. The IP address assigned to this personal computer
  2. The part of the IP address (the subnet mask) that distinguishes other machines on the same LAN (messages can be sent to them directly) from machines in other departments or elsewhere in the world (which are sent to a router machine)
  3. The IP address of the router machine that connects this LAN to the rest of the world.

In the case of the PCLT server, the IP address is 130.132.59.234. Since the first three bytes designate this department, a "subnet mask" is defined as 255.255.255.0 (255 is the largest byte value and represents the number with all bits turned on). It is a Yale convention (which we recommend to everyone) that the router for each department have station number 1 within the department network. Thus the PCLT router is 130.132.59.1, and the PCLT server is configured with the values:

  IP address: 130.132.59.234
  Subnet mask: 255.255.255.0
  Default router: 130.132.59.1

The subnet mask tells the server that any other machine with an IP address beginning 130.132.59.* is on the same department LAN, so messages are sent to it directly. Any IP address beginning with a different value is accessed indirectly by sending the message through the router at 130.132.59.1 (which is on the departmental LAN).
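That decision can be written down in a few lines. The sketch below (added here as an illustration, not part of the original article) applies the 255.255.255.0 mask from the PCLT example using Python's standard ipaddress module; the second destination address is made up for the example, borrowing the 192.35.91 network mentioned earlier:

import ipaddress

# Decide where an outgoing packet is handed off: directly to the destination
# if it is on the same department LAN, otherwise to the default router.
def next_hop(local_ip, netmask, router, destination):
    lan = ipaddress.ip_network(f"{local_ip}/{netmask}", strict=False)
    if ipaddress.ip_address(destination) in lan:
        return destination        # same 130.132.59.* LAN: deliver directly
    return router                 # everything else goes through the router

print(next_hop("130.132.59.234", "255.255.255.0", "130.132.59.1", "130.132.59.10"))
# prints 130.132.59.10 (delivered directly on the department LAN)
print(next_hop("130.132.59.234", "255.255.255.0", "130.132.59.1", "192.35.91.5"))
# prints 130.132.59.1 (handed to the router)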



Old News ;-)

[Oct 14, 2018] There're a metric fuckton of reports of systemd killing detached/nohup'd processes

Notable quotes:
"... Reading stuff in /proc is a standard mechanism and where appropriate, all the tools are doing the same including 'ss' that you mentioned (which is btw very poorly designed) ..."
Oct 14, 2018 | linux.slashdot.org
Andrew Gryaznov ( 4260673 ) ( #56684222 )

Total nonsense ( Score: 3 , Interesting)

I am Linux kernel network and proprietary distributions developer and have actually read the code.

Reading stuff in /proc is a standard mechanism and where appropriate, all the tools are doing the same including 'ss' that you mentioned (which is btw very poorly designed)

Also there are several implementations of the net tools, the one from busybox probably the most famous alternative one and implementations don't hesitate changing how, when and what is being presented.

What is true though is that Linux kernel APIs are sometimes messy and tools like e.g. pyroute2 are struggling with working around limitations and confusions. There is also a big mess with the whole netfilter package as the only "API" is the iptables command-line tool itself.

Linux is arguably the biggest and most important project on Earth and should respect all views, races and opinions. If you would like to implement a more efficient and streamlined network interface (which I very much beg for and may eventually find time to do) - then I'm all in with you. I have some ideas of how to make the interface programmable by extending JIT rules engine and making possible to implement the most demanding network logic in kernel directly (e.g. protocols like mptcp and algorithms like Google Congestion Control for WebRTC).

adosch ( 1397357 ) , Sunday May 27, 2018 @04:04PM ( #56684724 )
Thats... the argument? FML ( Score: 4 , Interesting)

The OP's argument is that netlink sockets are more efficient in theory so we should abandon anything that uses a pseudo-proc, re-invent the wheel and move even farther from the UNIX tradition and POSIX compliance? And it may be slower on larger systems? Define that for me because I've never experienced that. I've worked on single stove-pipe x86 systems, to the 'SPARC architecture' generation where everyone thought Sun/Solaris was the way to go with single entire systems in a 42U rack, IRIX systems, all the way on hundreds of RPM-based Linux distros that are physical, hypervised and containered nodes in an HPC which are LARGE compute systems (fat and compute nodes).

That's a total shit comment with zero facts to back it up. This is like Good Will Hunting 'the bar scene' revisited...

OP, if you're an old hat like me, I'd fucking LOVE to know how old? You sound like you've got about 5 days soaking wet under your belt with a Milkshake IPA in your hand. You sound like a millennial developer-turned-sysadmin-for-a-day who's got all but cloud-framework-administration under your belt and are being a complete poser. Any true sys-admin is going to flip-their-shit just like we ALL did with systemd, and that shit still needs to die. There, I got that off my chest.

I'd say you got two things right, but are completely off on one of them:

* Your description of inefficient is what you got right: you sound like my mother or grandmother describing their computing experiences to look at Pinterest on a web browser at times. You might as well have just said slow without any bearing, educated guess or reason. Sigh...

* I would agree that some of these tools need to change, but only to handle deeper kernel containerization being built into Linux. One example that comes to mind is 'hostnamectl' where it's more dev-ops centric in terms of 'what' slice or provision you're referencing. A lot of those tools like ifconfig, route and alike still do work in any Linux environment, containerized or not --- fuck, they work today .

Anymore, I'm just a disgruntled and I'm sure soon-to-be-modded-down voice on /. that should be taken with a grain of salt. I'm not happy with the way the movements of Linux have gone, and if this doesn't sound old hat I don't know what is: At the end of the day, you have to embrace change. I'd say 0.0001% of any of us are in control of those types of changes, no matter how we feel about is as end-user administrators of those tools we've grown to be complacent about. I got about 15y left and this thing called Linux that I've made a good living on will be the-next-guys steaming pile to deal with.

Greyfox ( 87712 ) writes:
Re: ( Score: 3 )

Yeah. The other day I set up some demo video streaming on a Linux box. Fire up screen, start my streaming program. Disconnect screen and exit my ssh system, and my streaming freezes. There're a metric fuckton of reports of systemd killing detached/nohup'd processes, but I check my config file and it's not that. Although them being that willing to walk away from expected system behavior is already cause to blow a gasket. But no, something else is going on here. I tweak the streaming code to catch all catchab

jgotts ( 2785 ) writes: < jgottsNO@SPAMgmail.com > on Sunday May 27, 2018 @06:17PM ( #56685380 )
Some historical color ( Score: 4 , Interesting)

Just to give you guys some color commentary, I was participating quite heavily in Linux development from 1994-1999, and Linus even added me to the CREDITS file while I was at the University of Michigan for my fairly modest contributions to the kernel. [I prefer application development, and I'm still a Linux developer after 24 years. I currently work for the company Internet Brands.]

What I remember about ip and net is that they came about seemingly out of nowhere two decades ago and the person who wrote the tools could barely communicate in English. There was no documentation. net-tools by that time was a well-understood and well-documented package, and many Linux devs at the time had UNIX experience pre-dating Linux (which was announced in 1991 but not very usable until 1994).

We Linux developers virtually created Internet programming, where most of our effort was accomplished online, but in those days everybody still used books and of course the Linux Documentation Project. I have a huge stack of UNIX and Linux books from the 1990's, and I even wrote a mini-HOWTO. There was no Google. People who used Linux back then may seem like wizards today because we had to memorize everything, or else waste time looking it up in a book. Today, even if I'm fairly certain I already know how to do something, I look it up with Google anyway.

Given that, ip and net were downright offensive. We were supposed to switch from a well-documented system to programs written by somebody who can barely speak English (the lingua franca of Linux development)?

Today, the discussion is irrelevant. Solaris, HP-UX, and the other commercial UNIX versions are dead. Ubuntu has the common user and CentOS has the server. Google has complete documentation for these tools at a glance. In my mind, there is now no reason to not switch.

Although, to be fair, I still use ifconfig, even if it is not installed by default.

[Oct 14, 2018] The problem isn't so much new tools as new tools that suck

Systemd looks OK until you get into major troubles and start troubleshooting. After that you are ready to kill systemd developers and blow up Red Hat headquarters ;-)
Notable quotes:
"... Crap tools written by morons with huge egos and rather mediocre skills. Happens time and again an the only sane answer to these people is "no". Good new tools also do not have to be pushed on anybody, they can compete on merit. As soon as there is pressure to use something new though, you can be sure it is inferior. ..."
Oct 14, 2018 | linux.slashdot.org

drinkypoo ( 153816 ) writes: < martin.espinoza@gmail.com > on Sunday May 27, 2018 @11:14AM ( #56683018 ) Homepage Journal

Re:That would break scripts which use the UI ( Score: 5 , Informative)
In general, it's better for application programs, including scripts to use an application programming interface (API) such as /proc, rather than a user interface such as ifconfig, but in reality tons of scripts do use ifconfig and such.

...and they have no other choice, and shell scripting is a central feature of UNIX.

The problem isn't so much new tools as new tools that suck. If I just type ifconfig it will show me the state of all the active interfaces on the system. If I type ifconfig interface I get back pretty much everything I want to know about it. If I want to get the same data back with the ip tool, not only can't I, but I have to type multiple commands, with far more complex arguments.

The problem isn't new tools. It's crap tools.

gweihir ( 88907 ) , Sunday May 27, 2018 @12:22PM ( #56683440 )
Re:That would break scripts which use the UI ( Score: 5 , Insightful)
The problem isn't new tools. It's crap tools.

Crap tools written by morons with huge egos and rather mediocre skills. Happens time and again and the only sane answer to these people is "no". Good new tools also do not have to be pushed on anybody, they can compete on merit. As soon as there is pressure to use something new though, you can be sure it is inferior.

Anonymous Coward , Sunday May 27, 2018 @02:00PM ( #56684068 )
Re:That would break scripts which use the UI ( Score: 5 , Interesting)
The problem isn't new tools. It's crap tools.

The problem isn't new tools. It's not even crap tools. It's the mindset that we need to get rid of an ~70KB netstat, ~120KB ifconfig, etc. Like others have posted, this has more to do with the ego of the new tools creators and/or their supporters who see the old tools as some sort of competition. Well, that's the real problem, then, isn't it? They don't want to have to face competition and the notion that their tools aren't vastly superior to the user to justify switching completely, so they must force the issue.

Now, it'd be different if this was 5 years down the road, netstat wasn't being maintained*, and most scripts/dependents had already been converted over. At that point there'd be a good, serious reason to consider removing an outdated package. That's obviously not the debate, though.

* Vs developed. If seven year old stable tools are sufficiently bug free that no further work is necessary, that's a good thing.

locofungus ( 179280 ) , Sunday May 27, 2018 @02:46PM ( #56684296 )
Re:That would break scripts which use the UI ( Score: 4 , Informative)
If I type ifconfig interface I get back pretty much everything I want to know about it

How do you tell in ifconfig output which addresses are deprecated? When I run ifconfig eth0.100 it lists 8 global addresses. I can deduce that the one with fffe in the middle is the permanent address but I have no idea which address it will use for outgoing connections.

ip addr show dev eth0.100 tells me what I need to know. And it's only a few more keystrokes to type.

Anonymous Coward , Sunday May 27, 2018 @11:13AM ( #56683016 )
Re:So ( Score: 5 , Insightful)

Following the systemd model, "if it aint broken, you're not trying hard enough"...

Anonymous Coward , Sunday May 27, 2018 @11:35AM ( #56683144 )
That's the reason ( Score: 5 , Interesting)

It done one thing: Maintain the routing table.

"ip" (and "ip2" and whatever that other candidate not-so-better not-so-replacement of ifconfig was) all have the same problem: They try to be the one tool that does everything "ip". That's "assign ip address somewhere", "route the table", and all that. But that means you still need a complete zoo of other tools, like brconfig, iwconfig/iw/whatever-this-week.

In other words, it's a modeling difference. On sane systems, ifconfig _configures the interface_, for all protocols and hardware features, bridges, vlans, what-have-you. And then route _configures the routing table_. On linux... the poor kids didn't understand what they were doing, couldn't fix their broken ifconfig to save their lives, and so went off to reinvent the wheel, badly, a couple times over.

And I say the blogposter is just as much an idiot.

Per various people, netstat et al operate by reading various files in /proc, and doing this is not the most efficient thing in the world

So don't use it. That does not mean you gotta change the user interface too. Sheesh.

However, the deeper issue is the interface that netstat, ifconfig, and company present to users.

No, that interface is a close match to the hardware. Here is an interface, IOW something that connects to a radio or a wire, and you can make it ready to talk IP (or back when, IPX, appletalk, and whatever other networks your system supported). That makes those tools hardware-centric. At least on sane systems. It's when you want to pretend shit that it all goes awry. And boy, does linux like to pretend. The linux ifconfig-replacements are IP-only-stack-centric. Which causes problems.

For example because that only does half the job and you still need the aforementioned zoo of helper utilities that do things you can have ifconfig do if your system is halfway sane. Which linux isn't, it's just completely confused. As is this blogposter.

On the other hand, the users expect netstat, ifconfig and so on to have their traditional interface (in terms of output, command line arguments, and so on); any number of scripts and tools fish things out of ifconfig output, for example.

linux' ifconfig always was enormously shitty here. It outputs lots of stuff I expect to find through netstat and it doesn't output stuff I expect to find out through ifconfig. That's linux, and that is NOT "traditional" compared to, say, the *BSDs.

As the Linux kernel has changed how it does networking, this has presented things like ifconfig with a deep conflict; their traditional output is no longer necessarily an accurate representation of reality.

Was it ever? linux is the great pretender here.

But then, "linux" embraced the idiocy oozing out of poettering-land. Everything out of there so far has caused me problems that were best resolved by getting rid of that crap code. Point in case: "Network-Manager". Another attempt at "replacing ifconfig" with something that causes problems and solves very few.

locofungus ( 179280 ) , Sunday May 27, 2018 @03:27PM ( #56684516 )
Re:That's the reason ( Score: 4 , Insightful)
It done one thing: Maintain the routing table.

Should the ip rule stuff be part of route or a separate command?

There are things that could be better with ip. IIRC it's very fussy about where the table selector goes in the argument list but route doesn't support this at all.

I also don't think route has anything like 'nexthop dev $if' which is a godsend for ipv6 configuration.

I stayed with route for years. But ipv6 exposed how incomplete the tool is - and clearly nobody cares enough to add all the missing functionality.

Perhaps ip addr, ip route, ip rule, ip mroute, ip link should be separate commands. I've never looked at the sourcecode to see whether it's mostly common or mostly separate.

Anonymous Coward writes:
Re: That's the reason ( Score: 3 , Informative)

^this^

The people who think the old tools work fine don't understand all the advanced networking concepts that are only possible with the new tools: interfaces can have multiple IPs, one IP can be assigned to multiple interfaces, there's more than one routing table, firewall rules can add metadata to packets that affects routing, etc. These features can't be accommodated by the old tools without breaking compatibility.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @09:11PM ( #56686032 )
Re:That's the reason ( Score: 3 )
Someone cared enough to implement an entirely different tool to do the same old jobs plus some new stuff, it's too bad they didn't do the sane thing and add that functionality to the old tool where it would have made sense.

It's not that simple. The iproute2 suite wasn't written to *replace* anything.
It was written to provide a user interface to the rapidly expanding RTNL API.
The net-tools maintainers (or anyone who cared) could have started porting it if they liked. They didn't. iproute2 kept growing to provide access to all the new RTNL interfaces, while net-tools got farther and farther behind.
What happened was organic. If someone brought net-tools up to date tomorrow and everyone liked the interface, iproute2 would be dead in its tracks. As it sits, myself, and most of the more advanced level system and network engineers I know have been using iproute2 for just over a decade now (really, the point where ifconfig became an incomplete and poorly simplified way to manage the networking stack)

DamnOregonian ( 963763 ) , Monday May 28, 2018 @02:26AM ( #56686960 )
Re:That's the reason ( Score: 4 , Informative)

Nope. Kernel authors come up with a fancy new netlink interface for better interaction with the kernel's network stack. They don't give two squirts of piss whether or not a user-space interface exists for it yet. Some guy decides to write an interface to it. Initially, it only supported things like modifying the routing rule database (something that can't be done with route) and he is trying to make an implementation of this protocol, not trying to hack it into software that already has its own framework using different APIs.
This source was always freely available for the net-tools guys to take and add to their own software.
Instead, we get this. [sourceforge.net]
Nobody is giving a positive spin. This is simply how it happened. This is what happens when software isn't maintained, and you don't get to tell other people to maintain it. You're free, right now, today, to port the iproute2 functionality into net-tools. They're unwilling to, however. That's their right. It's also the right of other people to either fork it, or move to more functional software. It's your right to help influence that. Or bitch on slashdot. That probably helps, too.

TeknoHog ( 164938 ) writes:
Re: ( Score: 2 )
keep the command names the same but rewrite how they function?

Well, keep the syntax too, so old scripts would still work. The old command name could just be a script that calls the new commands under the hood. (Perhaps this is just what you meant, but I thought I'd elaborate.)

gweihir ( 88907 ) , Sunday May 27, 2018 @12:18PM ( #56683412 )
Re:So ( Score: 4 , Insightful)
What was the reason for replacing "route" anyhow? It's worked for decades and done one thing.

Idiots that confuse "new" with better and want to put their mark on things. Because they are so much greater than the people that got the things to work originally, right? Same as the systemd crowd. Sometimes, they realize decades later they were stupid, but only after having done a lot of damage for a long time.

TheRaven64 ( 641858 ) writes:
Re: ( Score: 2 )

I didn't RTFA (this is Slashdot, after all) but from TFS it sounds like exactly the reason I moved to FreeBSD in the first place: the Linux attitude of 'our implementation is broken, let's completely change the interface'. ALSA replacing OSS was the instance of this that pushed me away. On Linux, back around 2002, I had some KDE and some GNOME apps that talked to their respective sound daemon, and some things like XMMS and BZFlag that used /dev/dsp directly. Unfortunately, Linux decided to only support s

zippthorne ( 748122 ) writes:
Re: ( Score: 3 )

On the other hand, on most systems, vi is basically an alias to vim....

goombah99 ( 560566 ) , Sunday May 27, 2018 @11:08AM ( #56682986 )
Bad idea ( Score: 5 , Insightful)

Unix was founded on the idea of lots of simple command line tools that do one job well and don't depend on system idiosyncrasies. If you make the tool have to know the lower layers of the system to exploit them then you break the encapsulation. Polling proc has worked across eons of Linux flavors without breaking. When you make everything integrated it creates paralysis to change down the road for backward compatibility. Small speed gain now for massive fragility and no portability later.

goombah99 ( 560566 ) writes:
Re: ( Score: 3 )

Gnu may not be unix but its foundational idea lies in the simple command tool paradigm. It's why GNU was so popular and it's why people even think that Linux is unix. That idea is the character of Linux. If you want a marvelously smooth, efficient, consistent integrated system that then after a decade of revisions feels like a knotted tangle of twine in your junk drawer, try Windows.

llamalad ( 12917 ) , Sunday May 27, 2018 @11:46AM ( #56683198 )
Re:Bad idea ( Score: 5 , Insightful)

The error you're making is thinking that Linux is UNIX.

It's not. It's merely UNIX-like. And with first SystemD and now this nonsense, it's rapidly becoming less UNIX-like. The Windows of the UNIX(ish) world.

Happily, the BSDs seem to be staying true to their UNIX roots.

petes_PoV ( 912422 ) , Sunday May 27, 2018 @12:01PM ( #56683282 )
The dislike of support work ( Score: 5 , Interesting)
In theory netstat, ifconfig, and company could be rewritten to use netlink too; in practice this doesn't seem to have happened and there may be political issues involving different groups of developers with different opinions on which way to go.

No, it is far simpler than looking for some mythical "political" issues. It is simply that hackers - especially amateur ones, who write code as a hobby - dislike trying to work out how old stuff works. They like writing new stuff, instead.

Partly this is because of the poor documentation: explanations of why things work, what other code was tried but didn't work out, the reasons for weird-looking constructs, techniques and the history behind patches. It could even be that many programmers are wedded to a particular development environment and lack the skill and experience (or find it beyond their capacity) to do things in ways that are alien to it. I feel that another big part is that merely rewriting old code does not allow for the " look how clever I am " element that is present in fresh, new, software. That seems to be a big part of the amateur hacker's effort-reward equation.

One thing that is imperative however is to keep backwards compatibility. So that the same options continue to work and that they provide the same content and format. Possibly Unix / Linux only remaining advantage over Windows for sysadmins is its scripting. If that was lost, there would be little point keeping it around.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @05:13PM ( #56685074 )
Re:The dislike of support work ( Score: 5 , Insightful)

iproute2 exists because ifconfig, netstat, and route do not support the full capabilities of the linux network stack.
This is because today's network stack is far more complicated than it was in the past. For very simple networks, the old tools work fine. For complicated ones, you must use the new ones.

Your post could not be any more wrong. Your moderation amazes me. It seems that slashdot is full of people who are mostly amateurs.
iproute2 has been the main network management suite for linux amongst higher end sysadmins for a decade. It wasn't written to sate someone's desire to change for the sake of change, to make more complicated, to NIH. It was written because the old tools can't encompass new functionality without being rewritten themselves.

Craig Cruden ( 3592465 ) , Sunday May 27, 2018 @12:11PM ( #56683352 )
So windowification (making it incompatible) ( Score: 5 , Interesting)

So basically there is a proposal to dump existing terminal utilities that are cross-platform and create custom Linux utilities - then get rid of the existing functionality? That would be moronic! I already go nuts remoting into a windows platform and then an AIX and Linux platform and having different command line utilities / directory separators / etc. Adding yet another difference between my Linux and macOS/AIX terminals would absolutely drive me bonkers!

I have no problem with updating or rewriting or adding functionalities to existing utilities (for all 'nix platforms), but creating a yet another incompatible platform would be crazily annoying.

(not a sys admin, just a dev who has to deal with multiple different server platforms)

Anonymous Coward , Sunday May 27, 2018 @12:16PM ( #56683388 )
Output for 'ip' is machine readable, not human ( Score: 5 , Interesting)

All output for 'ip' is machine readable, not human.
Compare
$ ip route
to
$ route -n

Which is more readable? Fuckers.

Same for
$ ip a
and
$ ifconfig
Which is more readable? Fuckers.

The new commands should generally make the same output as the old, using the same options, by default. Using additional options to get new behavior. -m is commonly used to get "machine readable" output. Fuckers.

It is like the systemd interface fuckers took hold of everything. Fuckers.

BTW, I'm a happy person almost always, but change for the sake of change is fucking stupid.

Want to talk about resolv.conf, anyone? Fuckers! Easier just to purge that shit.

SigmundFloyd ( 994648 ) , Sunday May 27, 2018 @12:39PM ( #56683558 )
Linux' userland is UNSTABLE ! ( Score: 3 )

I'm growing increasingly annoyed with Linux' userland instability. Seriously considering a switch to NetBSD because I'm SICK of having to learn new ways of doing old things.

For those who are advocating the new tools as additions rather than replacements: Remember that this will lead to some scripts expecting the new tools and some other scripts expecting the old tools. You'll need to keep both flavors installed to do ONE thing. I don't know about you, but I HATE to waste disk space on redundant crap.

fluffernutter ( 1411889 ) , Sunday May 27, 2018 @12:46PM ( #56683592 )
Piss and vinigar ( Score: 5 , Interesting)

What pisses me off is when I go to run ifconfig and it isn't there, and then I Google on it and there doesn't seem to be *any* direct substitute that gives me the same information. If you want to change the command then fine, but allow the same output from the new commands. Furthermore, another bitch I have is most systemd installations don't have an easy substitute for /etc/rc.local.

what about ( 730877 ) , Sunday May 27, 2018 @01:35PM ( #56683874 ) Homepage
Let's try hard to break Linux ( Score: 3 , Insightful)

It does not make any sense that some people spend time and money replacing what is currently working with some incompatible crap.

Therefore, the only logical alternative is that they are paid (in some way) to break what is working.

Also, if you rewrite tons of systems tools you have plenty of opportunities to insert useful bugs that can be used by the various spying agencies.

You do not think that the current CPU Flaws are just by chance, right ?
Imagine the wonder of being able to spy on any machine, regardless of the level of SW protection.

There is no need to point out that I cannot prove it, I know, it just make sense to me.

Kjella ( 173770 ) writes:
Re: ( Score: 3 )
It does not make any sense that some people spend time and money replacing what is currently working with some incompatible crap. (...) There is no need to point out that I cannot prove it, I know, it just make sense to me.

Many developers fix problems like a guy about to lose a two week vacation because he can't find his passport. Rip open every drawer, empty every shelf, spread it all across the tables and floors until you find it, then rush out the door leaving everything in a mess. It solved HIS problem.

WaffleMonster ( 969671 ) , Sunday May 27, 2018 @01:52PM ( #56684010 )
Changes for changes sake ( Score: 4 , Informative)

TFA is full of shit.

IP aliases have always and still do appear in ifconfig as separate logical interfaces.

The assertion ifconfig only displays one IP address per interface also demonstrably false.

Using these false bits of information to advocate for change seems rather ridiculous.

One change I would love to see... "ping" bundled with most Linux distros doesn't support IPv6. You have to call IPv6 specific analogue which is unworkable. Knowing address family in advance is not a reasonable expectation and works contrary to how all other IPv6 capable software any user would actually run work.

Heck, for a while traceroute supported both address families. The one by Olaf Kirch eons ago did; then someone decided "not invented here" and replaced it with one that works like ping6, where you have to call traceroute6 if you want v6.

It seems anymore nobody spends time fixing broken shit... they just spend their time finding new ways to piss me off. Now I have to type journalctl and wait for hell to freeze over just to liberate log data I previously could access nearly instantaneously. It almost feels like Microsoft's event viewer now.

DamnOregonian ( 963763 ) , Sunday May 27, 2018 @05:30PM ( #56685130 )
Re:Changes for changes sake ( Score: 4 , Insightful)
TFA is full of shit. IP aliases have always and still do appear in ifconfig as separate logical interfaces.

No, you're just ignorant.
Aliases do not appear in ifconfig as separate logical interfaces.
Logical interfaces appear in ifconfig as logical interfaces.
Logical interfaces are one way to add an alias to an interface. A crude way, but a way.

The assertion ifconfig only displays one IP address per interface also demonstrably false.

Nope. Again, you're just ignorant.

root@swalker-samtop:~# tunctl
Set 'tap0' persistent and owned by uid 0
root@swalker-samtop:~# ifconfig tap0 10.10.10.1 netmask 255.255.255.0 up
root@swalker-samtop:~# ip addr add 10.10.10.2/24 dev tap0
root@swalker-samtop:~# ifconfig tap0:0 10.10.10.3 netmask 255.255.255.0 up
root@swalker-samtop:~# ip addr add 10.10.1.1/24 scope link dev tap0:0
root@swalker-samtop:~# ifconfig tap0 | grep inet
inet 10.10.1.1 netmask 255.255.255.0 broadcast 0.0.0.0
root@swalker-samtop:~# ifconfig tap0:0 | grep inet
inet 10.10.10.3 netmask 255.255.255.0 broadcast 10.10.10.255
root@swalker-samtop:~# ip addr show dev tap0 | grep inet
inet 10.10.1.1/24 scope link tap0
inet 10.10.10.1/24 brd 10.10.10.255 scope global tap0
inet 10.10.10.2/24 scope global secondary tap0
inet 10.10.10.3/24 brd 10.10.10.255 scope global secondary tap0:0

If you don't understand what the differences are, you really aren't qualified to opine on the matter.
Ifconfig is fundamentally incapable of displaying the amount of information that can go with layer-3 addresses, interfaces, and the architecture of the stack in general. This is why iproute2 exists.

JustNiz ( 692889 ) , Sunday May 27, 2018 @01:55PM ( #56684030 )
I propose a new word: ( Score: 5 , Funny)

SysD: (v). To force an unnecessary replacement of something that already works well with an alternative that the majority perceive as fundamentally worse.
Example usage: Wow you really SysD'd that up.

[Oct 12, 2018] How to Install Iptables on CentOS 7

Oct 12, 2018 | linuxize.com

Starting with CentOS 7, FirewallD replaces iptables as the default firewall management tool.

FirewallD is a complete firewall solution that can be controlled with a command-line utility called firewall-cmd. If you are more comfortable with the Iptables command line syntax, then you can disable FirewallD and go back to the classic iptables setup.

This tutorial will show you how to disable the FirewallD service and install iptables.

Prerequisites

Before starting with the tutorial, make sure you are logged in as a user with sudo privileges .

Disable FirewallD

To disable the FirewallD on your CentOS 7 system , follow these steps:

  1. Type the following command to stop the FirewallD service:
    sudo systemctl stop firewalld
  2. Disable the FirewallD service from starting automatically on system boot:
    sudo systemctl disable firewalld
  3. Mask the FirewallD service to prevent it from being started by other services:
    sudo systemctl mask --now firewalld
Install and Enable Iptables

Perform the following steps to install Iptables on a CentOS 7 system:

  1. Run the following command to install the iptables-services package from the CentOS repositories:
    sudo yum install iptables-services
  2. Once the package is installed, start the iptables services (IPv4 and IPv6):
    sudo systemctl start iptables
    sudo systemctl start ip6tables
  3. Enable the iptables services to start automatically on system boot:
    sudo systemctl enable iptables
    sudo systemctl enable ip6tables
  4. Check the iptables service status with:
    sudo systemctl status iptables
    sudo systemctl status ip6tables
  5. To check the current iptables rules, use the following commands:
    sudo iptables -nvL
    sudo ip6tables -nvL

    By default only the SSH port 22 is open. The output should look something like this:

    Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination         
     5400 6736K ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
        0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0           
        2   148 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
        3   180 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
        0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
    
    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination         
        0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
    
    Chain OUTPUT (policy ACCEPT 4298 packets, 295K bytes)
     pkts bytes target     prot opt in     out     source               destination
    

At this point, you have successfully enabled the iptables service and you can start building your firewall. The changes will persist after reboot.

Conclusion

In this tutorial, you learned how to disable the FirewallD service and install iptables.

If you have any question or remarks, please leave a comment below.

[Oct 02, 2018] 16 iptables tips and tricks for sysadmins Opensource.com

Oct 02, 2018 | opensource.com

Avoid locking yourself out

Scenario: You are going to make changes to the iptables policy rules on your company's primary server. You want to avoid locking yourself -- and potentially everybody else -- out. (This costs time and money and causes your phone to ring off the wall.)

Tip #1: Take a backup of your iptables configuration before you start working on it.

Back up your configuration with the command:

/sbin/iptables-save > /root/iptables-works
Tip #2: Even better, include a timestamp in the filename.

Add the timestamp with the command:

/sbin/iptables-save > /root/iptables-works-`date +%F`

You get a file with a name like:

/root/iptables-works-2018-09-11

If you do something that prevents your system from working, you can quickly restore it:

/sbin/iptables-restore < /root/iptables-works-2018-09-11
Tip #3: Every time you create a backup copy of the iptables policy, create a link to the file with 'latest' in the name.
ln -s /root/iptables-works-`date +%F` /root/iptables-works-latest
Tip #4: Put specific rules at the top of the policy and generic rules at the bottom.

Avoid generic rules like this at the top of the policy rules:

iptables -A INPUT -p tcp --dport 22 -j DROP

The more criteria you specify in the rule, the less chance you will have of locking yourself out. Instead of the very generic rule above, use something like this:

iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -d 192.168.100.101 -j DROP

This rule appends ( -A ) to the INPUT chain a rule that will DROP any packets originating from the CIDR block 10.0.0.0/8 on TCP ( -p tcp ) port 22 ( --dport 22 ) destined for IP address 192.168.100.101 ( -d 192.168.100.101 ).

There are plenty of ways you can be more specific. For example, using -i eth0 will limit the processing to a single NIC in your server. This way, the filtering actions will not apply the rule to eth1 .

Tip #5: Whitelist your IP address at the top of your policy rules.

This is a very effective method of not locking yourself out. Everybody else, not so much.

iptables -I INPUT -s <your IP> -j ACCEPT

You need to put this as the first rule for it to work properly. Remember, -I inserts it as the first rule; -A appends it to the end of the list.

Tip #6: Know and understand all the rules in your current policy.

Not making a mistake in the first place is half the battle. If you understand the inner workings behind your iptables policy, it will make your life easier. Draw a flowchart if you must. Also remember: What the policy does and what it is supposed to do can be two different things.

Set up a workstation firewall policy

Scenario: You want to set up a workstation with a restrictive firewall policy.

Tip #1: Set the default policy to DROP.

# Set a default policy of DROP
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]

Tip #2: Allow users the minimum amount of services needed to get their work done.

The iptables rules need to allow the workstation to get an IP address, netmask, and other important information via DHCP ( -p udp --dport 67:68 --sport 67:68 ). For remote management, the rules need to allow inbound SSH ( --dport 22 ), outbound mail ( --dport 25 ), DNS ( --dport 53 ), outbound ping ( -p icmp ), Network Time Protocol ( --dport 123 --sport 123 ), and outbound HTTP ( --dport 80 ) and HTTPS ( --dport 443 ).

# Set a default policy of DROP
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]

# Accept any related or established connections
-I INPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
-I OUTPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT

# Allow all traffic on the loopback interface
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT

# Allow outbound DHCP request
-A OUTPUT -o eth0 -p udp --dport 67:68 --sport 67:68 -j ACCEPT

# Allow inbound SSH
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT

# Allow outbound email
-A OUTPUT -o eth0 -p tcp -m tcp --dport 25 -m state --state NEW -j ACCEPT

# Outbound DNS lookups
-A OUTPUT -o eth0 -p udp -m udp --dport 53 -j ACCEPT

# Outbound PING requests
-A OUTPUT -o eth0 -p icmp -j ACCEPT

# Outbound Network Time Protocol (NTP) requests
-A OUTPUT -o eth0 -p udp --dport 123 --sport 123 -j ACCEPT

# Outbound HTTP
-A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT

COMMIT
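The policy above is in iptables-save format, so it can be loaded atomically with iptables-restore; a minimal sketch, assuming the rules were saved to the hypothetical file /etc/iptables/workstation.rules:

# Load the whole policy in one transaction (either every rule applies, or none do)
sudo iptables-restore < /etc/iptables/workstation.rules

# Confirm what is actually active
sudo iptables -nvL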

Restrict an IP address range

Scenario: The CEO of your company thinks the employees are spending too much time on Facebook and not getting any work done. The CEO tells the CIO to do something about the employees wasting time on Facebook. The CIO tells the CISO to do something about employees wasting time on Facebook. Eventually, you are told the employees are wasting too much time on Facebook, and you have to do something about it. You decide to block all access to Facebook. First, find out Facebook's IP address by using the host and whois commands.

host -t a www.facebook.com
www.facebook.com is an alias for star.c10r.facebook.com.
star.c10r.facebook.com has address 31.13.65.17
whois 31.13.65.17 | grep inetnum
inetnum: 31.13.64.0 - 31.13.127.255

Then convert that range to CIDR notation by using the CIDR to IPv4 Conversion page. You get 31.13.64.0/18 . To prevent outgoing access to www.facebook.com , enter:

iptables -A OUTPUT -p tcp -o eth1 -d 31.13.64.0/18 -j DROP
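In practice Facebook announces more than one netblock, so the lookup and the DROP rule have to be repeated for every range it returns; a minimal sketch, with a placeholder list of ranges gathered by hand using host and whois as shown above:

# Placeholder list -- replace with the ranges your own whois queries return
FACEBOOK_NETS="31.13.64.0/18 157.240.0.0/16"

for net in $FACEBOOK_NETS; do
    sudo iptables -A OUTPUT -p tcp -o eth1 -d "$net" -j DROP
done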
Regulate by time

Scenario: The backlash from the company's employees over denying access to Facebook causes the CEO to relent a little (that, and his administrative assistant's reminding him that she keeps HIS Facebook page up-to-date). The CEO decides to allow access to Facebook.com only at lunchtime (12PM to 1PM). Assuming the default policy is DROP, use iptables' time features to open up access.

iptables -A OUTPUT -p tcp -m multiport --dports http,https -o eth1 -m time --timestart 12:00 --timestop 13:00 -d 31.13.64.0/18 -j ACCEPT

This command sets the policy to allow ( -j ACCEPT ) http and https ( -m multiport --dports http,https ) between noon ( --timestart 12:00 ) and 1PM ( --timestop 13:00 ) to Facebook.com ( -d 31.13.64.0/18 ).

Regulate by time -- Take 2

Scenario: During planned downtime for system maintenance, you need to deny all TCP and UDP traffic between the hours of 2AM and 3AM so maintenance tasks won't be disrupted by incoming traffic. This will take two iptables rules:

iptables -A INPUT -p tcp -m time --timestart 02:00 --timestop 03:00 -j DROP
iptables -A INPUT -p udp -m time --timestart 02:00 --timestop 03:00 -j DROP

With these rules, TCP and UDP traffic ( -p tcp and -p udp ) are denied ( -j DROP ) between the hours of 2AM ( --timestart 02:00 ) and 3AM ( --timestop 03:00 ) on input ( -A INPUT ).

Limit connections with iptables

Scenario: Your internet-connected web servers are under attack by bad actors from around the world attempting to DoS (Denial of Service) them. To mitigate these attacks, you restrict the number of connections a single IP address can have to your web server:

iptables -A INPUT -p tcp --syn -m multiport --dports http,https -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset

Let's look at what this rule does. If a host opens more than 20 ( --connlimit-above 20 ) simultaneous connections ( -p tcp --syn ) to the web servers ( --dports http,https ), reject the new connection ( -j REJECT ) and tell the connecting host you are rejecting it ( --reject-with tcp-reset ).
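connlimit caps simultaneous connections per source; to cap the rate at which a single source may open new connections, the hashlimit match can be used instead of (or in addition to) connlimit. A minimal sketch, assuming a ceiling of 20 new connections per minute per source IP:

iptables -A INPUT -p tcp --syn -m multiport --dports http,https \
    -m hashlimit --hashlimit-name web-rate --hashlimit-mode srcip \
    --hashlimit-above 20/minute -j REJECT --reject-with tcp-reset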

Monitor iptables rules

Scenario: Since iptables operates on a "first match wins" basis as packets traverse the rules in a chain, frequently matched rules should be near the top of the policy and less frequently matched rules should be near the bottom. How do you know which rules are traversed the most or the least so they can be ordered nearer the top or the bottom?

Tip #1: See how many times each rule has been hit.

Use this command:

iptables -L -v -n --line-numbers

The command lists all the rules in the chain ( -L ). Since no chain was specified, all chains are listed, with verbose output ( -v ) that includes packet and byte counters, numeric addresses and ports ( -n ), and a line number at the beginning of each rule corresponding to that rule's position in the chain ( --line-numbers ).

Using the packet and byte counts, you can order the most frequently traversed rules toward the top and the least frequently traversed rules toward the bottom.
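One quick way to spot the reordering candidates is to sort a chain by its packet counters; a minimal sketch for the INPUT chain ( -x prints exact counts so the numeric sort is not confused by K/M suffixes):

# Columns after --line-numbers: num pkts bytes target prot opt in out source destination
# Skip the two header lines, sort by the pkts column, and show the most-hit rules first
iptables -L INPUT -v -n -x --line-numbers | tail -n +3 | sort -k2,2 -rn | head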

Tip #2: Remove unnecessary rules.

Which rules aren't getting any matches at all? These would be good candidates for removal from the policy. You can find that out with this command:

iptables -nvL | grep -v "0     0"

Note: that's not a tab between the zeros; there are five spaces between the zeros.

Tip #3: Monitor what's going on.

You would like to monitor what's going on with iptables in real time, like with top . Use this command to monitor iptables activity dynamically and show only the rules that are actively being traversed:

watch --interval=5 'iptables -nvL | grep -v "0     0"'

watch runs 'iptables -nvL | grep -v "0     0"' every five seconds and displays the first screen of its output. This allows you to watch the packet and byte counts change over time.

Report on iptables

Scenario: Your manager thinks this iptables firewall stuff is just great, but a daily activity report would be even better. Sometimes it's more important to write a report than to do the work.

Use the packet filter/firewall/IDS log analyzer FWLogwatch to create reports based on the iptables firewall logs. FWLogwatch supports many log formats and offers many analysis options. It generates daily and monthly summaries of the log files, allowing the security administrator to free up substantial time, maintain better control over network security, and reduce unnoticed attacks.
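FWLogwatch works from the log entries that the kernel writes for iptables LOG targets, so the policy needs at least one LOG rule to have anything to report on; a minimal sketch (the prefix string is an arbitrary marker):

# Log new connections that reach this point; in a real policy, place this just
# above the final REJECT/DROP rule so that only refused traffic is logged.
iptables -A INPUT -m state --state NEW -j LOG --log-prefix "iptables-denied: " --log-level 4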

Here is sample output from FWLogwatch:

FWLogwatch output (fwlogwatch.png)

More than just ACCEPT and DROP

We've covered many facets of iptables, all the way from making sure you don't lock yourself out when working with iptables to monitoring iptables to visualizing the activity of an iptables firewall. These will get you started down the path to realizing even more iptables tips and tricks.

[Jul 16, 2018] netstat to find ports which are in use on linux server

Another example of a more or less complex pipeline, built from netstat, awk and cut
Oct 02, 2008 | midnight-cafe.co.uk

Below is a command to find out the number of connections to each port in use, using netstat and cut.

netstat -nap | grep 'tcp\|udp' | awk '{print $4}' | cut -d: -f2 | sort | uniq -c | sort -n

Below is a description of each command:

netstat is used to check all incoming and outgoing connections on the Linux server. grep selects only the lines that match the pattern you defined (here, tcp or udp).

awk is a pattern scanning and processing tool and a powerful aid in shell scripting; here it prints the fourth field (the local address and port). sort orders the output, and sort -n sorts it in numeric order.

uniq -c removes duplicate lines and prefixes each remaining line with a count of how many times it occurred.
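On newer distributions netstat may not be installed at all; the same per-port connection count can be produced with ss. A minimal sketch (awk's $NF is used so that IPv6 addresses, which themselves contain colons, still yield the port number):

ss -tuna | awk 'NR > 1 {print $5}' | awk -F: '{print $NF}' | sort | uniq -c | sort -n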

[Jul 16, 2018] Listing TCP apps listening on ports

Jun 13, 2018 | www.fredshack.com

netstat -nltp

[Jun 01, 2017] How To Configure SAMBA Server And Transfer Files Between Linux Windows - LinuxAndUbuntu - Linux News Apps Reviews Linux T

Jun 01, 2017 | www.linuxandubuntu.com

If you are setting this up on an Ubuntu server you can use vim or nano to edit the smb.conf file; on an Ubuntu desktop just use the default text editor. Note that all commands (server or desktop) must be run as root.

$ sudo nano /etc/samba/smb.conf

Then add the information below to the very end of the file:

[share] 
comment = Ubuntu File Server Share 
path = /srv/samba/share 
browsable = yes 
guest ok = yes 
read only = no 
create mask = 0755 

comment : a short description of the share.
path : the path of the directory to be shared.

This example uses /srv/samba/share because, according to the Filesystem Hierarchy Standard (FHS), /srv is where site-specific data should be served. Technically Samba shares can be placed anywhere on the filesystem as long as the permissions are correct, but adhering to standards is recommended.

create mask : determines the permissions new files will have when created.

Now that Samba is configured, the directory /srv/samba/share needs to be created and the permissions need to be set. Create the directory and change its ownership from the terminal:

   sudo mkdir -p /srv/samba/share
   sudo chown nobody:nogroup /srv/samba/share/

The -p switch tells mkdir to create the entire directory tree if it does not exist.

Finally, restart the samba services to enable the new configuration:

   sudo systemctl restart smbd.service nmbd.service

From a Windows client, you should now be able to browse to the Ubuntu file server and see the shared directory. If your client doesn't show your share automatically, try to access your server by its IP address, e.g. \\192.168.1.1, or by hostname in a Windows Explorer window. To check that everything is working, try creating a directory from Windows.

To create additional shares simply create new [dir] sections in /etc/samba/smb.conf , and restart Samba. Just make sure that the directory you want to share actually exists and the permissions are correct.
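Before pointing Windows clients at the server, two quick checks from the Linux side help catch typos in smb.conf; a minimal sketch, run on the Samba host itself:

   # Syntax-check smb.conf and print the effective share definitions
   testparm -s

   # List the shares the server exposes, connecting as an anonymous (guest) client
   smbclient -L //localhost -N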

Obscurantism in Information Technology: Nicholas Carr's "IT Does not Matter" Fallacy and "Everything in the Cloud" Utopia

Nicholas Carr's provocative HBR article published five years ago, and his subsequent books, suffer from a lack of understanding of IT history, of electrical transmission networks (which he uses as a close historical analogy), and of the "in the cloud" software-as-a-service (SaaS) provider model. He cherry-picks historical facts to fit his needs instead of trying to describe the real history of each of those three technologies. To put it more bluntly, Carr tortures facts to get them to fit his fantasy. The central idea of the article, "IT does not matter," is simply a fallacy. At best Carr managed to ask a couple of interesting questions, but he provided inferior and misleading answers. While Carr is definitely a gifted writer, ignorance of the technology he writes about leads him to absurd conclusions, which, thanks to his lucid style, look quite plausible to non-specialists and as such influence public opinion about IT. As a writer Carr comes across as a guy who can write engagingly about a variety of topics, including those about which he knows almost nothing. Here lies the danger: only specialists can sense that something is deeply amiss, while ordinary readers tend to believe the aura of credibility emanating from the "former editor of HBR" title.

Unfortunately, Carr's charge that IT is irrelevant was perfectly in sync with upper management's desire to accelerate outsourcing, and his 2003 HBR paper served as a kind of "IT outsourcing manifesto". The fact that many people were sitting on the fence about the value of IT outsourcing partially explains why his initial HBR article, as weak and detached from reality as it was, generated fewer effective rebuttals than it should have. This paper is an attempt to provide a more coherent analysis of the main components of Carr's fallacious vision five years after the event.

If one looks closer at what Carr proposes, it is evident that this is a pretty reactionary and defeatist framework, which I would call "IT obscurantism" and which is not that different from creationism. As with the latter, his justifications are extremely weak: on one hand they rest on fuzzy facts and questionable analogies, and on the other they put forward radical, absurd recommendations ("Spend less", "Follow, don't lead", "Focus on vulnerabilities, not opportunities" and "move to utility-based 'in the cloud' computing") which can hurt anybody who trusts them or, worse, tries to adopt them blindly. The irony of Carr's position is that in the five years since the publication of his HBR article local datacenters actually flourished and until 2008 showed no signs of impending demise. In 2008 the credit crunch hit data centers, but they were just collateral damage of the financial storm. From 2003 to 2008 datacenters simply went through another technological reorganization, which increased the role of Intel-based computers (including the appearance of blades as alternatives to small and midrange servers, and of laptops as alternatives to desktops), virtualization, wireless technologies and distributed computing. Moreover, there was some trend toward consolidation of datacenters within large companies.

The paper contains a critique of key aspects of Carr's utopia, including but not limited to such typical problems of Carr's writings as a frivolous treatment of IT history, a limited understanding of enterprise IT, an idealization of the "in the cloud" computing model, and a complete absence of discussion of competing technologies. The author argues that the level of hype around "utility computing" makes it prudent to treat all promoters of this interesting new technology, especially those who severely lack technical depth, with extreme skepticism. Junk science is, and always was, based on cherry-picked evidence carefully selected or edited to support a pre-selected, absurd "truth". The article claims that Carr's doom-and-gloom predictions about IT and datacenters are based on cherry-picked evidence and that, while the future is unpredictable by definition, a total switch to Internet-based remote "in the cloud" computing will probably never materialize. Private and hybrid models are definitely more viable. There is no free lunch: moving computation to the cloud increases the load on remote servers and drastically increases security requirements, and both factors increase costs. Achieving the same reliability in cloud computing as in a local solution is another problem. Outages of a large datacenter are usually more severe and more difficult to recover from than outages of a small local datacenter, and restrictions on the information flow about an outage additionally hurt the clients.

[Jul 23, 2009] Twitter's Google Docs Hack - A Warning For Cloud App Users - News - eWeekEurope.co.uk By Eric Lundquist

20-07-2009

Twitter lost its data through a hack on Google Docs. Learn from this to be very careful how much trust you place on cloud apps and Web 2.0, says Eric Lundquist

Here's the background. A hacker apparently was able to access the Google account of a Twitter employee. Twitter uses Google Docs as a method to create and share information. The hacker apparently got at the docs and sent them to TechCrunch, which decided to publish much of the information.

The entire event - not the first time Twitter has been hacked into through cloud apps - sent the Web world into a frenzy. How smart was Twitter to rely on Google applications? How can Google build up business-to-business trust when one hack opens the gates on corporate secrets? Were TechCrunch journalists right to publish stolen documents? Whatever happened to journalists using documents as a starting point for a story rather than the end point story in itself?

Alongside all this, what are the serious lessons that business execs and information technology professionals can learn from the Twitter/TechCrunch episode? Here are my suggestions:

1. Don't confuse the cloud with secure, locked-down environments.
Cloud computing is all the rage. It makes it easy to scale up applications, design around flexible demand and make content widely accessible [in the UK, the Tory party is proposing more use of it by Government, and the Labour Government has appointed a Tsar of Twitter - Editor]. But the same attributes that make the cloud easy for everyone to access makes it, well, easy for everyone to access.

2. Cloud computing requires more, not less, stringent security procedures.
In your own network would you defend your most vital corporate information with only a username and user-created password? I don't think so. Recent surveys have found that Web 2.0 users are slack on security.

3. Putting security procedures in place after a hack is dumb.
Security should be a tiered approach. Non-vital information requires less security than, say, your company's five-year plan, financials or salaries. If you don't think about this stuff in advance you will pay for it when it appears on the evening news.

4. Don't rely on the good will of others to build your security.
Take the initiative. I like the ease and access of Google applications, but I would never include those capabilities in a corporate security framework without a lengthy discussion about rights, procedures and responsibilities. I'd also think about having a white hat hacker take a look at what I was planning.

5. The older IT generation has something to teach the youngsters.
The world of business 2.0 is cool, exciting... and full of holes. Those grey haired guys in the server room grew up with procedures that might seem antiquated, but were designed to protect a company's most important assets.

6. Consider compliance.
Compliance issues have to be considered whether you are going to keep your information on a local server you keep in a safe or a cloud computing platform. Finger-pointing will not satisfy corporate stakeholders or government enforcers.

[Jul 30, 2008] OPEC 2.0: Why Bandwidth Is the Oil of the Information Economy By TIM WU

July 30, 2008 | NYTimes.com

AMERICANS today spend almost as much on bandwidth - the capacity to move information - as we do on energy. A family of four likely spends several hundred dollars a month on cellphones, cable television and Internet connections, which is about what we spend on gas and heating oil.

Just as the industrial revolution depended on oil and other energy sources, the information revolution is fueled by bandwidth. If we aren't careful, we're going to repeat the history of the oil industry by creating a bandwidth cartel.

Like energy, bandwidth is an essential economic input. You can't run an engine without gas, or a cellphone without bandwidth. Both are also resources controlled by a tight group of producers, whether oil companies and Middle Eastern nations or communications companies like AT&T, Comcast and Vodafone. That's why, as with energy, we need to develop alternative sources of bandwidth.

Wired connections to the home - cable and telephone lines - are the major way that Americans move information. In the United States and in most of the world, a monopoly or duopoly controls the pipes that supply homes with information. These companies, primarily phone and cable companies, have a natural interest in controlling supply to maintain price levels and extract maximum profit from their investments - similar to how OPEC sets production quotas to guarantee high prices.

But just as with oil, there are alternatives. Amsterdam and some cities in Utah have deployed their own fiber to carry bandwidth as a public utility. A future possibility is to buy your own fiber, the way you might buy a solar panel for your home.

Encouraging competition is another path, though not an easy one: most of the much-hyped competitors from earlier this decade, like businesses that would provide broadband Internet over power lines, are dead or moribund. But alternatives are important. Relying on monopoly producers for the transmission of information is a dangerous path.

After physical wires, the other major way to move information is through the airwaves, a natural resource with enormous potential. But that potential is untapped because of a false scarcity created by bad government policy.

Our current approach is a command and control system dating from the 1920s. The federal government dictates exactly what licensees of the airwaves may do with their part of the spectrum. These Soviet-style rules create waste that is worthy of Brezhnev.

Many "owners" of spectrum either hardly use the stuff or use it in highly inefficient ways. At any given moment, more than 90 percent of the nation's airwaves are empty.

The solution is to relax the overregulation of the airwaves and allow use of the wasted spaces. Anyone, so long as he or she complies with a few basic rules to avoid interference, could try to build a better Wi-Fi and become a broadband billionaire. These wireless entrepreneurs could one day liberate us from wires, cables and rising prices.

Such technologies would not work perfectly right away, but over time clever entrepreneurs would find a way, if we gave them the chance. The Federal Communications Commission promised this kind of reform nearly a decade ago, but it continues to drag its heels.

In an information economy, the supply and price of bandwidth matters, in the way that oil prices matter: not just for gas stations, but for the whole economy.

And that's why there is a pressing need to explore all alternative supplies of bandwidth before it is too late. Americans are as addicted to bandwidth as they are to oil. The first step is facing the problem.

Tim Wu is a professor at Columbia Law School and the co-author of "Who Controls the Internet?"

[Aug 7, 2007] Expect plays a crucial role in network management by Cameron Laird

Jul 31, 2007 | developerworks

If you manage systems and networks, you need Expect.

More precisely, why would you want to be without Expect? It saves hours common tasks otherwise demand. Even if you already depend on Expect, though, you might not be aware of the capabilities described below.

Expect automates command-line interactions

You don't have to understand all of Expect to begin profiting from the tool; let's start with a concrete example of how Expect can simplify your work on AIX® or other operating systems:

Suppose you have logins on several UNIX® or UNIX-like hosts and you need to change the passwords of these accounts, but the accounts are not synchronized by Network Information Service (NIS), Lightweight Directory Access Protocol (LDAP), or some other mechanism that recognizes you're the same person logging in on each machine. Logging in to a specific host and running the appropriate passwd command doesn't take long, probably only a minute in most cases. And you must log in "by hand," right, because there's no way to script your password?

Wrong. In fact, the standard Expect distribution (full distribution) includes a command-line tool (and a manual page describing its use!) that precisely takes over this chore. passmass (see Resources) is a short script written in Expect that makes it as easy to change passwords on twenty machines as on one. Rather than retyping the same password over and over, you can launch passmass once and let your desktop computer take care of updating each individual host. You save yourself enough time to get a bit of fresh air, and you avoid multiple opportunities for the frustration of mistyping something you've already entered.
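Invocation is straightforward; a hypothetical run (the host names are placeholders, and the exact prompts depend on the passmass version shipped with your Expect distribution):

# Change the password on three hosts in one pass; passmass prompts for the
# old and new passwords and then replays them on each host in turn.
passmass host1.example.com host2.example.com host3.example.com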

The limits of Expect

This passmass application is an excellent model; it illustrates many of Expect's general properties:

You probably know enough already to begin to write or modify your own Expect tools. As it turns out, the passmass distribution actually includes code to log in by means of ssh, but omits the command-line parsing to reach that code. Here's one way you might modify the distribution source to put ssh on the same footing as telnet and the other protocols:

Listing 1. Modified passmass fragment that accepts the -ssh argument
...
        } "-rlogin" {
                set login "rlogin"
                continue
        } "-slogin" {
                set login "slogin"
                continue
        } "-ssh" {
                set login "ssh"
                continue
        } "-telnet" {
                set login "telnet"
                continue
...

In my own code, I actually factor out more of this "boilerplate." For now, though, this cascade of tests, in the vicinity of line #100 of passmass, gives a good idea of Expect's readability. There's no deep programming here: no need for object-orientation, monadic application, co-routines, or other subtleties. You just ask the computer to take over the typing you usually do for yourself. As it happens, this small step represents many minutes or hours of human effort saved.

[Dec 28, 2006] TCP-IP Protocol Sequence Diagrams

Tutorial articles in this section describe TCP/IP and related protocols as sequence diagrams. (The sequence diagrams were generated using EventStudio System Designer 2.5.)

[PDF] TCP/IP reference card from SANS

[Dec 6, 2005] TCP-IP Stack Hardening

[Dec 6, 2005] Daryl's TCP-IP Primer Good and up-to-date primer...

[Mar 19, 2005] TCP-IP Protocol Sequence Diagrams

Articles in this section describe TCP/IP and related protocols as sequence diagrams.
(The sequence diagrams were generated using EventStudio).

WANdoc Open Source (Perl-based)

WANdoc Open Source is free software that generates interactive documentation for large Cisco networks. It uses syslog and router configuration files to produce summarized, hyperlinked, and error- checked router information. It speeds up the WAN troubleshooting process and identifies inconsistencies in router deployment.

Understanding IP Addressing: Everything You Ever Wanted To Know - by Chuck Semeria -- a good tutorial from 3Com. This white paper is now available in the three PDFs below.


Pages 1 - 21
Pages 22 - 43
Pages 44 - 65

Top websites:

TCP/IP online books Free TCP/IP online books

AW • Professional - Networking Series Catalog Page Books from Addison Wesley, a respected name in technical publication.

Bill Stallings: Home Page Web Site for the Books of William Stallings

Douglas Comer This is the home page of Douglas Comer, the author of the book "Internetworking with TCP/IP".

Illustrated TCP/IP Online version of the book "Illustrated TCP/IP", by Matthew G. Naugle, published by Wiley Computer Publishing, John Wiley & Sons, Inc.

The Internet Companion Online version of the book "The Internet Companion". This book explains the basics of communication on the Internet and the applications available

Internetworking Multimedia This is a online book covering multimedia communication using the Internet

McGraw Hill Networking books A search on networking books published by McGraw Hill.

McGraw-Hill - Bet@ Books Free online prerelease versions of many new books on networking and other topics.

The Mechanics of Routing Protocols An online book published by Cisco Press.

The Network Book A comprehensive introduction to network and distributed computing technologies online

Network Reading List: TCP/IP,UNIX and Ethernet Compilation of links on the Internet relating to TCP/IP, Unix and Ethernet

Networking and Communications Prentice Hall Professional Technical Reference: Special Interests

Routing in the Internet A very comprehensive book on routing, written by Christian Huitema, from the Internet Architecture Board. A must read for those interested on routing protocols

Routing Information Protocols The Network Book, Chapter 3, Section 3. This document is part of the Network Book

TCP/IP and Data Communications Administration Guide An online book, in PDF format, explaining how to setup, maintain and expand a network using the Solaris implementation of the TCP/IP protocols

TCP/IP Network Administration, 2nd Edition Clearly written, this book is a good introduction to the TCP/IP protocols and practical applications.

Troubleshooting TCP/IP This is a sample chapter from the book "Windows NT TCP/IP Network Administration", published by O'Reilly and Associates, which explains how to solve problems related to TCP/IP in a Windows NT environment

Understanding Networking Technologies Online course providing training on a host of networking topics.

Windows NT TCP/IP Network Administration O'Reilly publication covering TCP/IP and NT

Wireless Networking Handbook Online version of the book "Wireless Networking Handbook" by Jim Geier, and published by New Riders, Macmillan Computer Publishing


MCI Arms ISPs with Means to Counterattack Hackers

[October 9] MCI introduced today a security product designed to help Internet Service Providers detect network intruders.

The networkMCI DoS (Denial of Service) Tracker constantly monitors the network and, once a denial of service attack has been detected, immediately works to trace the root of the attack.

The product is designed to eliminate the time technical engineers spend manually searching for the intrusion. MCI claims the product takes little programming knowledge to find the network intruder.

The DoS Tracker combats SYN, ICMP Flood, Bandwidth Saturation, Concentrated Source, and the newly detected Smurf attacks.

"Obviously, we can't guarantee the safety of other networks from all hacker activity, but we believe the networkMCI DoS Tracker provides ISPs and other network operators with a powerful tool that will help them protect their Internet assets," Rob Hagens, director of Internet Engineering.

The product is available for free from MCI's Web site.


Tutorials

TCP/IP in 14 Days

The Linux Network Administrators' Guide

FAME Computer Education TCPIP for Idiots Tutorial

RFC1180 Introduction to the Internet Protocols

Daryl's TCP-IP Primer Good and up-to-date primer...

Understanding IP addressing -- tutorial from 3Com

**** The Network Administrators' Guide -- the first several chapter contain good introduction to TCP/IP

Contents (fragment)

FAME Computer Education TCPIP for Idiots Tutorial

RFC1180 TCP/IP Tutorial by T. Socolofsky & C. Kale, January 1991 (63 KBytes) -- old, but still a decent tutorial (UK mirror: RFC 1180)

TCP-IP and IPX Routing tutorial (mirror TCP-IP and IPX routing Tutorial )

Introduction to the Internet Protocols by Charles L. Hedrick. 3 July 1987 (Rutgers University). See also a mirror Introduction to TCPIP

Fast Guide to Subnets by Chuck Semeria (3Com)

Understanding IP Addressing

Integrating Your Machine With the Network - good guide from USAIL

PC Magazine PC Tech (A Beginner's Guide to TCPIP)

IP Masquerading for Linux


Lecture Notes


Recommended Links


Softpanorama Recommended

Top articles

Sites


FAQs


Win TCP/IP


Random Findings

Old and broken links


IBM Redbook

***+ TCP-IP Tutorial and Technical Overview -- a pretty decent and up to date IBM Redbook PDF

Table of Contents (old version was in HTML, now only PDF is available from the IBM site)

Part 1. Architecture and Core Protocols

  • Chapter 1. Introduction to TCP/IP - History, Architecture and Standards
  • 1.1 Internet History - Where It All Came From
  • 1.2 TCP/IP Architectural Model - What It Is All About
  • 1.3 Finding Standards for TCP/IP and the Internet
  • 1.4 Future of the Internet
  • 1.5 IBM and the Internet
  • Chapter 2. Internetworking and Transport Layer Protocols
  • 2.1 Internet Protocol (IP)
  • 2.2 Internet Control Message Protocol (ICMP)
  • 2.3 Internet Group Management Protocol (IGMP)
  • 2.4 Address Resolution Protocol (ARP)
  • 2.5 Reverse Address Resolution Protocol (RARP)
  • 2.6 Ports and Sockets
  • 2.7 User Datagram Protocol (UDP)
  • 2.8 Transmission Control Protocol (TCP)
  • 2.9 TCP Congestion Control Algorithms
  • Chapter 3. Routing Protocols
  • 3.1 Basic IP Routing
  • 3.2 Routing Algorithms
  • 3.3 Interior Gateway Protocols (IGP)
  • 3.4 Exterior Routing Protocols
  • Chapter 4. Application Protocols
  • 4.1 Characteristics of Applications
  • 4.2 Domain Name System (DNS)
  • 4.3 TELNET
  • 4.4 File Transfer Protocol (FTP)
  • 4.5 Trivial File Transfer Protocol (TFTP)
  • 4.6 Remote Execution Command Protocol (REXEC and RSH)
  • 4.7 Simple Mail Transfer Protocol (SMTP)
  • 4.8 Multipurpose Internet Mail Extensions (MIME)
  • 4.9 Post Office Protocol (POP)
  • 4.10 Internet Message Access Protocol Version 4 (IMAP4)
  • 4.11 Network Management
  • 4.12 Remote Printing (LPR and LPD)
  • 4.13 Network File System (NFS)
  • 4.14 X Window System
  • 4.15 Internet Relay Chat Protocol (IRCP)
  • 4.16 Finger Protocol
  • 4.17 NETSTAT
  • 4.18 Network Information Systems (NIS)
  • 4.19 NetBIOS over TCP/IP
  • 4.20 Application Programming Interfaces (APIs)
  • Part 2. Special Purpose Protocols and New Technologies

  • Chapter 5. TCP/IP Security Overview
  • 5.1 Security Exposures and Solutions
  • 5.2 A Short Introduction to Cryptography
  • 5.3 Firewalls
  • 5.4 Network Address Translation (NAT)
  • 5.5 The IP Security Architecture (IPSec)
  • 5.6 SOCKS
  • 5.7 Secure Sockets Layer (SSL)
  • 5.8 Transport Layer Security (TLS)
  • 5.9 Secure Multipurpose Internet Mail Extension (S-MIME)
  • 5.10 Virtual Private Networks (VPN) Overview
  • 5.11 Kerberos Authentication and Authorization System
  • 5.12 Remote Access Authentication Protocols
  • 5.13 Layer Two Tunneling Protocol (L2TP)
  • 5.14 Secure Electronic Transaction (SET)
  • Chapter 6. IP Version 6
  • 6.1 IPv6 Overview
  • 6.2 The IPv6 Header Format
  • 6.3 Internet Control Message Protocol Version 6 (ICMPv6)
  • 6.4 DNS in IPv6
  • 6.5 DHCP in IPv6
  • 6.6 Mobility Support in IPv6
  • 6.7 Internet Transition - Migrating from IPv4 to IPv6
  • 6.8 The Drive Towards IPv6
  • 6.9 References
  • Part 3. Connection Protocols and Platform Implementations

  • Chapter 13. Connection Protocols
  • 13.1 Serial Line IP (SLIP)
  • 13.2 Point-to-Point Protocol (PPP)
  • 13.3 Ethernet and IEEE 802.x Local Area Networks (LANs)
  • 13.4 Fiber Distributed Data Interface (FDDI)
  • 13.5 Asynchronous Transfer Mode (ATM)
  • 13.6 Data Link Switching: Switch-to-Switch Protocol
  • 13.7 Integrated Services Digital Network (ISDN)
  • 13.8 TCP/IP and X.25
  • 13.9 Frame Relay
  • 13.10 Enterprise Extender
  • 13.11 PPP Over SONET and SDH Circuits
  • 13.12 Multiprotocol Label Switching (MPLS)
  • 13.13 Multiprotocol over ATM (MPOA)
  • 13.14 Private Network-to-Network Interface (PNNI)
  • 13.15 Multi-Path Channel+ (MPC+)
  • 13.16 Multiprotocol Transport Network (MPTN)
  • 13.17 S/390 Open Systems Adapter 2
  • Chapter 14. Platform Implementations
  • 14.1 Software Operating System Implementations
  • 14.2 IBM Hardware Platform Implementations

  • Cisco materials



    The Last but not Least Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand ~Archibald Putt. Ph.D


    Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

    FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

    This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

    You can use PayPal to make a contribution, supporting development of this site and speeding up access. In case softpanorama.org is down you can use softpanorama.info instead.

    Disclaimer:

    The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author present and former employers, SDNP or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

    The site uses AdSense so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

    Last modified: October 15, 2018