Displays statistics by protocol. By default, statistics are shown for the TCP, UDP, ICMP,
and IP protocols. The -s option can be combined with a protocol option (such as -t or -u)
to limit the statistics to specific protocols.
# netstat -s
Ip:
2461 total packets received
0 forwarded
0 incoming packets discarded
2431 incoming packets delivered
2049 requests sent out
Icmp:
0 ICMP messages received
0 input ICMP message failed.
ICMP input histogram:
1 ICMP messages sent
0 ICMP messages failed
ICMP output histogram:
destination unreachable: 1
Tcp:
159 active connections openings
1 passive connection openings
4 failed connection attempts
0 connection resets received
1 connections established
2191 segments received
1745 segments send out
24 segments retransmited
0 bad segments received.
4 resets sent
Udp:
243 packets received
1 packets to unknown port received.
0 packet receive errors
281 packets sent
9. Showing Statistics by TCP Protocol
Show statistics for the TCP protocol only by using the -st option:
# netstat -st
Tcp:
2805201 active connections openings
1597466 passive connection openings
1522484 failed connection attempts
37806 connection resets received
1 connections established
57718706 segments received
64280042 segments send out
3135688 segments retransmited
74 bad segments received.
17580 resets sent
10. Showing Statistics by UDP Protocol
# netstat -su
Udp:
1774823 packets received
901848 packets to unknown port received.
0 packet receive errors
2968722 packets sent
11. Displaying Service Name with PID
Display service names together with their PID numbers. The -tp option adds a
"PID/Program name" column to the output.
# netstat -tp
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 192.168.0.2:ssh 192.168.0.1:egs ESTABLISHED 2179/sshd
tcp 1 0 192.168.0.2:59292 www.gov.com:http CLOSE_WAIT 1939/clock-applet
12. Displaying Promiscuous Mode
With the -ac switch, netstat prints the selected information and refreshes the screen
every five seconds (the default refresh interval is one second).
The --verbose option also reports address families that are not configured on the system,
along with some other useful information:
# netstat --verbose
netstat: no support for `AF IPX' on this system.
netstat: no support for `AF AX25' on this system.
netstat: no support for `AF X25' on this system.
netstat: no support for `AF NETROM' on this system.
19. Displaying Raw Network Statistics
Display raw network statistics for each protocol:
# netstat --statistics --raw
Ip:
62175683 total packets received
52970 with invalid addresses
0 forwarded
Icmp:
875519 ICMP messages received
destination unreachable: 901671
echo request: 8
echo replies: 16253
IcmpMsg:
InType0: 83
IpExt:
InMcastPkts: 117
That's it! If you are looking for more information and options for the netstat command,
refer to the netstat manual pages or run man netstat. If we've missed anything in this
list, please let us know in the comments section below so we can keep it updated.
ss (socket statistics) is a command line tool that monitors socket connections and displays the socket statistics of the Linux
system. It can display stats for PACKET sockets, TCP sockets, UDP sockets, DCCP sockets, RAW sockets, Unix domain sockets, and much
more.
It replaces the deprecated netstat command on modern Linux systems. The ss command is much faster and prints more detailed
network statistics than the netstat command.
If you are familiar with the netstat command, it will be easier for you to understand the ss command as it uses similar command
line options to display network connections information.
Running the basic ss command without any arguments displays all socket or network connections, as shown below:
$ ss
Understanding the output header:
Netid: Type of socket. Common types are TCP, UDP, u_str (Unix stream), and u_seq (Unix sequence).
State: State of the socket. Common states are ESTAB (established), UNCONN (unconnected), LISTEN (listening), CLOSE-WAIT, and
SYN-SENT.
Recv-Q: Amount of data received and queued, not yet read by the application.
Send-Q: Amount of data sent but not yet acknowledged by the remote host.
Local Address:Port: Address and port of the local machine.
Peer Address:Port: Address and port of the remote machine.
The default output can run to thousands of lines, and part of it will scroll off the terminal, so pipe it into the less
command for page-wise viewing.
By default, the -t option reports only the TCP sockets that are established (connected), and does not report TCP sockets
that are listening. Use the -a option together with -t if you want to view them all at once.
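For example, the difference looks like this (ss ships with the iproute2 package on most distributions):

```shell
# Only established TCP connections (the -t default)
ss -t

# All TCP sockets, including those in the LISTEN state
ss -ta
```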
To list the process name and PID associated with each network connection, run the following. Note that you need to run this
command with sudo privileges to view all process names and their associated PIDs.
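The exact command isn't shown above; based on ss's documented options, it combines -t with -p (process information), for example:

```shell
# TCP connections plus the owning process name and PID (-p);
# sudo is needed to see sockets owned by other users
sudo ss -tp
```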
To view an overall summary of all socket connections, run the following. It prints the results in a tabular format,
including the numbers of TCP and UDP sockets and of IPv4 and IPv6 socket connections.
$ ss -s
Total: 1278
TCP: 35 (estab 10, closed 11, orphaned 0, timewait 2)
Transport Total IP IPv6
RAW 1 0 1
UDP 11 7 4
TCP 24 13 11
INET 36 20 16
FRAG 0 0 0
10) View extended output of socket connections
To view extended output of socket connections, run the following. The extended output displays the UID that owns each
socket and the socket's inode number.
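The flag isn't named above; ss's extended-output option is -e, so a typical invocation would be:

```shell
# -e adds extended socket details such as the owning uid
# and the socket's inode number
ss -te
```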
This article is the final part of my three-part series covering 18 different tcpdump tips and tricks, where I continue to
demonstrate features that help you filter and organize the information returned by tcpdump. I recommend reading parts one
and two before continuing with the content below.
# tcpdump -i any -c4 -A
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
21:03:21.363917 wlp0s20f3 Out IP6 kkulkarni > ff02::1:ff0e:bfb6: ICMP6, neighbor solicitation, who has kkulkarni, length 32
`.... :.........Q{AZq..w.................................r.pm.....`.b...
21:03:21.363953 lo In IP6 kkulkarni.45656 > kkulkarni.hostmon: Flags [S], seq 3428690149, win 65476, options [mss 65476,sackOK,TS val 1750938785 ecr 0,nop,wscale 7,tfo cookiereq,nop,nop], length 0
`....,...........r.pm............r.pm....X...].....................
h]4........."...
21:03:21.363972 lo In IP6 kkulkarni.hostmon > kkulkarni.45656: Flags [S.], seq 3072789718, ack 3428690150, win 65464, options [mss 65476,sackOK,TS val 1750938785 ecr 1750938785,nop,wscale 7], length 0
`....(...........r.pm............r.pm......X.'...].................
h]4.h]4.....
21:03:21.363988 lo In IP6 kkulkarni.45656 > kkulkarni.hostmon: Flags [.], ack 1, win 512, options [nop,nop,TS val 1750938785 ecr 1750938785], length 0
`.... ...........r.pm............r.pm....X...]...'.......w.....
h]4.h]4.
4 packets captured
173 packets received by filter
0 packets dropped by kernel
15. Options for extra verbosity
With some Linux programs, it's sometimes useful to have more verbose output. tcpdump uses -v, -vv, or -vvv to provide
different levels of verbosity. See below for examples ranging from no verbosity to three levels of verbosity.
Default verbosity:
# tcpdump -i any -c1
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
21:06:00.903186 lo In IP kkulkarni.39876 > kkulkarni.hostmon: Flags [S], seq 1718143023, win 65495, options [mss 65495,sackOK,TS val 1879208671 ecr 0,nop,wscale 7,tfo cookiereq,nop,nop], length 0
1 packet captured
100 packets received by filter
0 packets dropped by kernel
Using the -v option:
# tcpdump -i any -c1 -v
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
21:06:04.209638 lo In IP6 (flowlabel 0xd17f0, hlim 1, next-header TCP (6) payload length: 44) kkulkarni.33022 > kkulkarni.hostmon: Flags [S], cksum 0x0d5b (incorrect -> 0x6c92), seq 2003870985, win 65476, options [mss 65476,sackOK,TS val 3266653263 ecr 0,nop,wscale 7,tfo cookiereq,nop,nop], length 0
1 packet captured
20 packets received by filter
0 packets dropped by kernel
Here is the -vv option:
# tcpdump -i any -c1 -vv
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
21:06:05.916423 tun0 Out IP (tos 0x0, ttl 64, id 22069, offset 0, flags [DF], proto TCP (6), length 1360)
kkulkarni.37152 > 10.0.115.119.https: Flags [.], cksum 0xe218 (correct), seq 168413028:168414336, ack 944490821, win 502, options [nop,nop,TS val 1351042119 ecr 3391883323], length 1308
1 packet captured
235 packets received by filter
0 packets dropped by kernel
Finally, display the highest level of detail with the -vvv option:
The arping command is one of the lesser-known commands that works much like the ping
command. The name stands for "arp ping", and it's a tool that allows you to perform limited
ping requests in that it collects information on local systems only. The reason for this is
that it uses a Layer 2 network protocol and is, therefore, non-routable. The arping command
is used for discovering and probing hosts on your local network.
You can use it much like ping and, as with ping , you can set a count for the packets to be
sent using -c (e.g., arping -c 2 hostname) or allow it to keep sending requests until you type
^c . In this first example, we send two requests to a system:
$ arping -c 2 192.168.0.7
ARPING 192.168.0.7 from 192.168.0.11 enp0s25
Unicast reply from 192.168.0.7 [20:EA:16:01:55:EB] 64.895ms
Unicast reply from 192.168.0.7 [20:EA:16:01:55:EB] 5.423ms
Sent 2 probes (1 broadcast(s))
Received 2 response(s)
Note that the response shows the time it takes to receive replies and the MAC address of the
system being probed.
If you use the -f option, arping will stop as soon as it has confirmed that the system
is responding. That might sound efficient, but it will never reach the stopping point if the
system -- possibly some non-existent or shut-down system -- fails to respond. Using a small
-c count value is generally a better approach. In this next example, the command tried 83
times to reach the remote system before I killed it with ^c, and it then provided the count.
$ arping -f 192.168.0.77
ARPING 192.168.0.77 from 192.168.0.11 enp0s25
^CSent 83 probes (83 broadcast(s))
Received 0 response(s)
For a system that is up and ready to respond, the response is quick.
$ arping -f 192.168.0.7
ARPING 192.168.0.7 from 192.168.0.11 enp0s25
Unicast reply from 192.168.0.7 [20:EA:16:01:55:EB] 82.963ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
Broadcast – send out for all to receive
The ping command can easily reach remote systems where arping tries but gets no
responses. Compare the responses below.
$ arping -c 2 world.std.com
ARPING 192.74.137.5 from 192.168.0.11 enp0s25
Sent 2 probes (2 broadcast(s))
Received 0 response(s)
$ ping -c 2 world.std.com
PING world.std.com (192.74.137.5) 56(84) bytes of data.
64 bytes from world.std.com (192.74.137.5): icmp_seq=1 ttl=48 time=321 ms
64 bytes from world.std.com (192.74.137.5): icmp_seq=2 ttl=48 time=331 ms
--- world.std.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 321.451/326.068/330.685/4.617 ms
Clearly, arping cannot collect information on the remote server.
If you want to use arping for a range of systems, you can use a command like the following,
which would be fairly quick because it only tries once to reach each host in the range
provided.
$ for num in {1..100}; do arping -c 1 192.168.0.$num; done
ARPING 192.168.0.1 from 192.168.0.11 enp0s25
Unicast reply from 192.168.0.1 [F8:8E:85:35:7F:B9] 5.530ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.2 from 192.168.0.11 enp0s25
Sent 1 probes (1 broadcast(s))
Received 0 response(s)
ARPING 192.168.0.3 from 192.168.0.11 enp0s25
Unicast reply from 192.168.0.3 [02:0F:B5:22:E5:90] 76.856ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
ARPING 192.168.0.4 from 192.168.0.11 enp0s25
Unicast reply from 192.168.0.4 [02:0F:B5:5B:D9:66] 83.000ms
Sent 1 probes (1 broadcast(s))
Received 1 response(s)
Notice that some probes received one response while others received none.
Here's a simple script that will provide a list of which systems in a network range respond
and which do not:
#!/bin/bash
for num in {1..255}; do
echo -n "192.168.0.$num "
arping -c 1 192.168.0.$num | grep "1 response"
if [ $? != 0 ]; then
echo ""
fi
done
Change the IP address range in the script to match your local network. The output should
look something like this:
$ ./detectIPs
192.168.0.1 Received 1 response(s)
192.168.0.2 Received 1 response(s)
192.168.0.3 Received 1 response(s)
192.168.0.4 Received 1 response(s)
192.168.0.5
192.168.0.6 Received 1 response(s)
192.168.0.7 Received 1 response(s)
192.168.0.8
192.168.0.9 Received 1 response(s)
192.168.0.10
192.168.0.11 Received 1 response(s)
If you only want to see the responding systems, simplify the script like this:
#!/bin/bash
for num in {1..30}; do
arping -c 1 192.168.0.$num | grep "1 response" > /dev/null
if [ $? == 0 ]; then
echo "192.168.0.$num "
fi
done
Below is what the output will look like with the second script. It lists only responding
systems.
Network troubleshooting sometimes requires tracking specific network packets based on
complex filter criteria or just determining whether a connection can be made.
... ... ...
Using the ncat command, you will set up a TCP listener, which is a TCP service
that waits for a connection from a remote system on a specified port. The following command
starts a listening socket on TCP port 9999.
$ sudo ncat -l 9999
This command will "hang" your terminal. You can place the command into background mode, to
operate similarly to a service daemon, using the & (ampersand) operator. Your
prompt will return.
$ sudo ncat -l 9999 &
From a remote system, use the following command to attempt a connection:
$ telnet <IP address of ncat system> 9999
The attempt should fail as shown:
Trying <IP address of ncat system>...
telnet: connect to address <IP address of ncat system>: No route to host
This might be similar to the message you receive when attempting to connect to your original
service. The first thing to try is to add a firewall exception to the ncat
system:
$ sudo firewall-cmd --add-port=9999/tcp
This command allows TCP requests on port 9999 to pass through to a listening daemon on port
9999.
Retry the connection to the ncat system:
$ telnet <IP address of ncat system> 9999
Trying <IP address of ncat system>...
Connected to <IP address of ncat system>.
Escape character is '^]'.
This message means that you are now connected to the listening port, 9999, on the remote
system. To disconnect, use the keyboard combination, CTRL + ] . Type quit to return to a
prompt.
$ telnet <IP address of ncat system> 9999
Trying <IP address of ncat system>...
Connected to <IP address of ncat system>.
Escape character is '^]'.
^]
telnet>quit
Connection closed.
$
Disconnecting will also kill the TCP listening port on the remote (ncat) system, so don't
attempt another connection until you reissue the ncat command. If you want to keep
the listening port open rather than letting it die each time you disconnect, use the -k (keep
open) option. Some sysadmins avoid this option because leaving a listening port open can
cause security problems or port conflicts with other services.
$ sudo ncat -k -l 9999 &
What ncat tells you
The success of connecting to the listening port of the ncat system means that
you can bind a port to your system's NIC. You can successfully create a firewall exception. And
you can successfully connect to that listening port from a remote system. Failures along the
path will help narrow down where your problem is.
What ncat doesn't tell you
Unfortunately, this troubleshooting technique offers no solution for connectivity issues
that aren't related to binding, port listening, or firewall exceptions. It is a limited-scope
troubleshooting session, but it's quick, easy, and definitive. What I've found is that most
connectivity issues boil down to one of these three. My next step in the process would be to
remove and reinstall the service package. If that doesn't work, download a different version of
the package and see if that works for you. Try going back at least two revisions until you find
one that works. You can always update to the latest version after you have a working
service.
Wrap up
The ncat command is a useful troubleshooting tool. This article only focused on
one tiny aspect of the many uses for ncat . Troubleshooting is as much of an art
as it is a science. You have to know which answers you have and which ones you don't have. You
don't have to troubleshoot or test things that already work. Explore ncat 's
various uses and see if your connectivity issues go away faster than they did before.
I use Telnet, netcat, Nmap, and other tools to test whether a remote service is up and
whether I can connect to it. These tools are handy, but they aren't installed by default on all
systems.
Fortunately, there is a simple way to test a connection without using external tools. To see
if a remote server is running a web, database, SSH, or any other service, run:
If the connection fails, the Failed to connect message is displayed on your
screen.
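The command itself is missing from the text. One way to do this with no external tools is bash's built-in /dev/tcp pseudo-device; the host and port below are placeholders, and the "Failed to connect" message is printed by the sketch itself:

```shell
# Probe a TCP port using bash's /dev/tcp redirection; no external
# tools are required. Replace the host and port with your target.
host=127.0.0.1
port=22
if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "Connected to $host:$port"
else
    echo "Failed to connect to $host:$port"
fi
```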
Assume serverA is behind a firewall/NAT. I want to see if the firewall is configured to
allow a database connection to serverA , but I haven't installed a database server yet. To
emulate a database port (or any other port), I can use the following:
Netcat (also known as 'nc') is a networking tool used for reading from and writing to
TCP and UDP sockets using an easy interface. It is designed as a dependable 'back-end'
device that can be used directly or easily driven by other programs and scripts. This tool
is a treat for network administrators, programmers, and pen-testers, as it's a feature-rich
network debugging and investigation tool.
To open netcat simply go to your shell and enter 'nc':
#nc
CONNECTING TO A HOST WITH NETCAT
Use the following to open a connection to a specified host and port (by default nc uses
TCP; the -u option switches to UDP):
#nc -u <host_ip> <port>
LISTEN FOR INBOUND CONNECTIONS
You can set nc to listen on a port using the -l option:
#nc -l <port>
SCAN PORTS WITH NETCAT
This can easily be done using the '-z' flag, which instructs netcat not to initiate a full
connection but just check whether the port is open. For example, the following command
checks which ports are open between 80 and 100 on 'localhost':
#nc -zv localhost 80-100
ADVANCED PORT SCAN
To run an advanced port scan on a target, use the following command:
#nc -v -n -z -w1 -r <target_ip> <port_range>
This command attempts to connect to random ports (-r) on the target IP, running verbosely
(-v), without resolving names (-n), without sending any data (-z), and waiting no more than
one second for each connection attempt (-w1).
TCP BANNER GRABBING WITH NETCAT
You can grab the banner of any TCP service running on an IP address using nc:
#echo "" | nc -v -n -w1 <target_ip> <port_range>
TRANSFER FILES WITH NETCAT
For this, you should have nc installed on both the sending and receiving machines. First,
start nc in listener mode on the receiving host:
#nc -l <port> > file.txt
Now run the following command on the sending host:
#nc <target_ip> <port> --send-only < file.txt
In conclusion, Netcat comes with a lot of cool features that we can use to simplify our
day-to-day tasks.
As you know from my previous two articles, Linux troubleshooting: Setting up a TCP listener with ncat and The ncat
command is a problematic security tool for Linux sysadmins, netcat is a command that is both your best friend and your
worst enemy. And this article further perpetuates this fact with a look into how ncat delivers a useful, but potentially
dangerous, option for creating a port redirection link. I show you how to set up a port or site forwarding link so that
you can perform maintenance on a site while still serving customers.
The scenario
You need to perform maintenance on an Apache installation on server1, but you don't want the service to appear offline
for your customers, which in this scenario are internal corporate users of the labor portal that records hours worked for
your remote users. Rather than notifying them that the portal will be offline for six to eight hours, you've decided to
create a forwarding service to another system, server2, while you take care of server1's needs.
This method is an easy way of keeping a specific service alive without tinkering with DNS or corporate firewall NAT
settings.
Server1: Port 8088
Server2: Port 80
The steps
To set up this site/service forward, you need to satisfy the following prerequisites:
The ncat-nmap package (should be installed by default)
A functional duplicate of the server1 portal on server2
Root or sudo access to servers 1 and 2 for firewall changes
If you've cleared these hurdles, it's time to make this change happen.
The implementation
Configuring ncat in this way makes use of named pipes, which are an efficient way to create this two-way communication
link by writing to and reading from a file in your home directory. There are multiple ways to do this, but I'm going to
use the one that works best for this type of port forwarding.
Create the named pipe
Creating the named pipe is easy using the mkfifo command. I used the file command to demonstrate that the file is there
and that it is a named pipe; this command is not required for the service to work. I named the file svr1_to_svr2, but you
can use any name you want. I chose this name because I'm forwarding from server1 to server2.
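The screenshots of these two commands did not survive; based on the description, they would look like this:

```shell
# Create the named pipe (FIFO) in the home directory
mkfifo svr1_to_svr2

# Optional: confirm that the file is a named pipe
file svr1_to_svr2
```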
Create the forward service
Formally, this was called setting up a Listener-to-Client relay, but it makes a little more sense if you think of it in
firewall terms, hence my "forward" name and description.
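The screenshot of the relay command is also missing; reconstructed from the command breakdown that follows (the backend address 192.168.1.60 and ports 8088/80 come from the scenario above), it would look something like this:

```shell
# Listen on 8088 (-k keeps serving after each client disconnects)
# and relay traffic to the backend web server on 192.168.1.60:80;
# the named pipe carries the backend's replies back to the listener
ncat -k -l 8088 < svr1_to_svr2 | ncat 192.168.1.60 80 > svr1_to_svr2 &
```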
Issuing this command drops you back to your prompt because you put the service into the background with the &. As you
can see, the named pipe and the service are both created as a standard user. I discussed the reasons for this restriction
in my previous article, The ncat command is a problematic security tool for Linux sysadmins.
Command breakdown
The first part of the command, ncat -k -l 8088, sets up the listener for connections that ordinarily would be answered by
the Apache service on server1. That service is offline, so you create a listener to answer those requests. The -k option
is the keep-alive feature, meaning that it can serve multiple requests. The -l is the listen option. Port 8088 is the
port you want to mimic, which is that of the customer portal.
The second part, to the right of the pipe operator (|), accepts and relays the requests to 192.168.1.60 on port 80. The
named pipe (svr1_to_svr2) handles the data in and out.
The usage
Now that you have your relay set up, it's easy to use. Point your browser to the original host and customer portal, which
is http://server1:8088. This automatically redirects your browser to server2 on port 80. Your browser still displays the
original URL and port.
I have found that too many repetitive requests can cause this service to fail with a broken pipe message on server1. This
doesn't always kill the service, but it can. My suggestion is to set up a script that checks for the forward command and
restarts it if it doesn't exist. You can't check for the existence of the svr1_to_svr2 file because it always exists;
remember, you created it with the mkfifo command.
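Such a check might look like this hypothetical sketch; it assumes the relay was started as the ncat -k -l 8088 pipeline dissected in the command breakdown above:

```shell
#!/bin/bash
# Hypothetical watchdog: if no ncat listener on 8088 is running,
# restart the relay (addresses match the scenario in this article)
if ! pgrep -f 'ncat -k -l 8088' > /dev/null; then
    ncat -k -l 8088 < svr1_to_svr2 | ncat 192.168.1.60 80 > svr1_to_svr2 &
fi
```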
The caveat
The downside of this ncat capability is that a user could forward traffic to their own duplicate site and gather
usernames and passwords. The malicious actor would have to kill the current port listener/web service to make this work,
but it's possible to do this even without root access. Sysadmins have to maintain vigilance through monitoring and
alerting to avoid this type of security loophole.
The wrap up
The ncat command has so many uses that it would require one article per feature to describe each one. This article
introduced you to the concept of the Listener-to-Client relay, or service forwarding, as I call it. It's useful for short
maintenance periods but should not be used for permanent redirects. For those, you should edit DNS and corporate firewall
NAT rules to send requests to their new destinations. Remember to turn off any ncat listeners when you're finished with
them, as they do open a system to compromise. Never create these services with the root user account.
The life of a sysadmin is hectic, rushed, and often frustrating. So, what you really need is a toolbox filled with tools
that you easily recognize and can use quickly, without another learning curve, when things are going bad. One such tool
is the ncat command.
ncat - Concatenate and redirect sockets
The ncat command has many uses, but the one I use it for is troubleshooting network connectivity issues. It is a handy,
quick, and easy-to-use tool that I can't live without. Follow along and see if you decide to add it to your toolbox as
well.
Ncat is a feature-packed networking utility which reads and writes data across networks from the command line. Ncat was
written for the Nmap Project and is the culmination of the currently splintered family of Netcat incarnations. It is
designed to be a reliable back-end tool to instantly provide network connectivity to other applications and users. Ncat
will not only work with IPv4 and IPv6 but provides the user with a virtually limitless number of potential uses.
Among Ncat's vast number of features there is the ability to chain Ncats together; redirection of TCP, UDP, and SCTP
ports to other sites; SSL support; and proxy connections via SOCKS4, SOCKS5 or HTTP proxies (with optional proxy
authentication as well).
Firewall problem or something else?
You've just installed <insert network service here>, and you can't connect to it from another computer on the same
network. It's frustrating. The service is enabled. The service is started. You think you've created the correct firewall
exception for it, but yet, it doesn't respond.
Your troubleshooting life begins. In what can stretch from minutes to days to infinity and beyond, you attempt to
troubleshoot the problem. It could be many things: an improperly configured (or unconfigured) firewall exception, a NIC
binding problem, a software problem somewhere in the service's code, a service misconfiguration, some weird compatibility
issue, or something else unrelated to the network or the service blocking access. This is your scenario. Where do you
start when you've checked all of the obvious places?
The ncat command to the rescue
The ncat command should be part of your basic Linux distribution; if it isn't, install the nmap-ncat package and you'll
have the latest version of it. Check the ncat man page for usage if you're interested in its many capabilities beyond
this simple troubleshooting exercise.
I'm often asked in my technical troubleshooting job to solve problems that development teams can't solve. Usually these
don't involve knowledge of API calls or syntax, but rather require insight into what the right tool to use is, and why
and how to use it. Probably because they're not taught in college, developers are often unaware that these tools exist,
which is a shame, as playing with them can give a much deeper understanding of what's going on and ultimately lead to
better code.
My favourite secret weapon in this path to understanding is strace.
strace (or its equivalents on other systems, truss and dtruss) is a tool that tells you which operating system (OS)
calls your program is making.
An OS call (or just "system call") is your program asking the OS to provide some service for it. Since this covers a lot of the
things that cause problems outside the domain of your application code (I/O, finding files, permissions, etc.),
its use has a very high hit rate in resolving problems outside developers' normal problem space.
Usage Patterns
strace is useful in all sorts of contexts. Here are a couple of examples garnered from my experience.
My Netcat Server Won't Start!
Imagine you're trying to start an executable, but it's failing silently (no log file, no output at all). You don't have the source,
and even if you did, it might be neither ready to compile nor readily comprehensible.
Simply running the command through strace will likely give you clues as to what's gone on.
$ nc -l localhost 80
nc: Permission denied
Let's say someone's trying to run this and doesn't understand why it's not working (let's assume manuals are unavailable).
Simply put strace at the front of your command. (Note that the output has been heavily edited for space
reasons.)
To most people that see this flying up their terminal this initially looks like gobbledygook, but it's really quite easy to parse
when a few things are explained.
For each line:
the first entry on the left is the system call being performed
the bit in the parentheses are the arguments to the system call
the right side of the equals sign is the return value of the system call
open("/etc/gai.conf", O_RDONLY) = 3
Therefore for this particular line, the system call is open , the arguments are the string /etc/gai.conf
and the constant O_RDONLY , and the return value was 3 .
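The mapping from a high-level language down to this system call is easy to reproduce yourself. Here is a minimal Python sketch; the file it opens is a scratch temp file invented for the example, not the /etc/gai.conf from the trace above:

```python
import os
import tempfile

# Create a scratch file so the open(2) call below succeeds.
tmp_fd, path = tempfile.mkstemp()
os.close(tmp_fd)

# os.open() is a thin wrapper around the open(2) system call that
# strace displays; its return value is the file descriptor, i.e.
# the number on the right of the equals sign in the strace output.
fd = os.open(path, os.O_RDONLY)
print(fd)  # a small integer

os.close(fd)
os.unlink(path)
```

Running this script itself under strace would show the corresponding open/openat lines with the same arguments and return value.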
How to make sense of this?
Some of these system calls can be guessed or enough can be inferred from context. Most readers will figure out that the above
line is the attempt to open a file with read-only permission.
In the case of the above failure, we can see that before the program calls exit_group, there are a couple of calls to bind that
return "Permission denied":
We might therefore want to understand what "bind" is and why it might be failing.
You need to get a copy of the system call's documentation. On Ubuntu and related Linux distributions, the documentation is
in the manpages-dev package, and can be invoked with, e.g., man 2 bind (I just used strace to
determine which file man 2 bind opened and then ran dpkg -S to determine which package it came from!).
You can also look up online if you have access, but if you can auto-install via a package manager you're more likely to get docs
that match your installation.
Right there in my man 2 bind page it says:
ERRORS
EACCES The address is protected, and the user is not the superuser.
So there is the answer: we're trying to bind to a port that can only be bound to if you are the superuser.
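The bind(2) call in question is easy to exercise from a high-level language. The Python sketch below binds to port 0 (meaning "any free unprivileged port"), which succeeds for a normal user; it is binding to ports below 1024, as in the nc -l localhost 80 example, that typically fails with EACCES:

```python
import socket

# bind(2) is the system call strace showed failing. On Linux,
# ports below 1024 are privileged: binding them as a normal user
# fails with EACCES ("Permission denied"). Port 0 asks the kernel
# for any free unprivileged port instead, so this succeeds.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]
print("bound to port", port)
s.close()
```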
My Library Is Not Loading!
Imagine a situation where developer A's Perl script works fine, but an identical one on developer B's machine does not (again, the
output has been edited).
In this case, we strace the script on developer B's computer to see what it's doing:
We observe that the file is found in what looks like an unusual place.
open("/space/myperllib/blahlib.pm", O_RDONLY) = 4
Inspecting the environment, we see that:
$ env | grep myperl
PERL5LIB=/space/myperllib
So the solution is to set the same env variable before running:
export PERL5LIB=/space/myperllib
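The same library-search mechanism exists in other languages. Here is a Python sketch of the equivalent situation; the directory and module names are hypothetical stand-ins for /space/myperllib/blahlib.pm:

```python
import os
import sys
import tempfile

# Create a module in a non-standard directory, playing the role
# of /space/myperllib/blahlib.pm in the example above.
libdir = tempfile.mkdtemp()
with open(os.path.join(libdir, "blahlib.py"), "w") as f:
    f.write("VALUE = 42\n")

# PYTHONPATH plays the role of PERL5LIB: its entries end up in
# sys.path, the list of directories the interpreter searches.
sys.path.insert(0, libdir)
import blahlib

print(blahlib.VALUE)  # 42
```

Stracing such a script would show an open() of blahlib.py in the unusual directory, just like the Perl case above.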
Get to know the internals bit by bit
If you do this a lot, or idly run strace on various commands and peruse the output, you can learn all sorts of things
about the internals of your OS. If you're like me, this is a great way to learn how things work. For example, just now I've had a
look at the file /etc/gai.conf , which I'd never come across before writing this.
Once your interest has been piqued, I recommend getting a copy of "Advanced Programming in the Unix Environment" by Stevens &
Rago, and reading it cover to cover. Not all of it will go in, but as you use strace more and more, and (hopefully)
browse C code more and more understanding will grow.
Gotchas
If you're running a program that calls other programs, it's important to run with the -f flag, which "follows" child processes
and straces them. Combined with -o, the -ff flag writes each process's trace to a separate file with the PID suffixed to the name.
If you're on Solaris, this program doesn't exist; you need to use truss instead.
Many production environments will not have this program installed for security reasons. strace doesn't have many library dependencies
(on my machine it has the same dependencies as echo), so if you have permission (or are feeling sneaky) you can just copy the
executable up.
Other useful tidbits
You can attach to running processes (can be handy if your program appears to hang or the issue is not readily reproducible) with
-p .
If you're looking at performance issues, then the time flags ( -t , -tt , -ttt , and
-T ) can help significantly.
A failed access or open system call is not usually an error in the context of launching a program. Generally it is merely checking
if a config file exists.
curl transfers data to or from a URL. Use this command to test an application's endpoint or
connectivity to an upstream service endpoint. curl can be useful for determining if
your application can reach another service, such as a database, or for checking if your service is
healthy.
As an example, imagine your application throws an HTTP 500 error indicating it can't reach a
MongoDB database:
The -I option shows the header information and the -s option silences the
response body. Checking the endpoint of your database from your local desktop:
$ curl -I -s database:27017
HTTP/1.0 200 OK
So what could be the problem? Check if your application can get to other places besides the
database from the application host:
This indicates that your application cannot resolve the database because the URL of the
database is unavailable or the host (container or VM) does not have a nameserver it can use to
resolve the hostname.
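The HEAD-request health check that curl -I -s performs can be reproduced in a few lines of Python. This sketch spins up a throwaway local HTTP server to stand in for the service being checked; the handler class and ephemeral port are inventions of the example:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class HealthHandler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        # Reply with status and headers only, like a healthy endpoint.
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# Throwaway server on an ephemeral port, standing in for the service.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Equivalent of `curl -I -s http://127.0.0.1:<port>/`:
url = "http://127.0.0.1:%d/" % server.server_address[1]
status = urlopen(Request(url, method="HEAD")).status
print(status)  # 200

server.shutdown()
```

If name resolution were broken, as in the scenario above, the urlopen call would fail before any HTTP exchange, which is exactly the distinction the curl checks are drawing.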
The socat utility is a relay
for bidirectional data transfers between two independent data channels.
There are many different types of channels socat can connect, including:
Files
Pipes
Devices (serial line, pseudo-terminal, etc)
Sockets (UNIX, IP4, IP6 - raw, UDP, TCP)
SSL sockets
Proxy CONNECT connections
File descriptors (stdin, etc)
The GNU line editor (readline)
Programs
Combinations of two of these
This tool is regarded as an advanced version of netcat. They do similar things, but socat
has additional functionality, such as permitting multiple clients to listen on a port, or
reusing connections.
Why do we need socat?
There are many ways to use socat effectively. Here are a few examples:
TCP port forwarder (one-shot or daemon)
External socksifier
Tool to attack weak firewalls (security and audit)
Shell interface to Unix sockets
IP6 relay
Redirect TCP-oriented programs to a serial line
Logically connect serial lines on different computers
Establish a relatively secure environment ( su and chroot ) for
running client or server shell scripts with network connections
How do we use socat?
The syntax for socat is fairly simple:
socat [options] <address> <address>
You must provide the source and destination addresses for it to work. The syntax for these
addresses is:
protocol:ip:port
Examples of using socat
Let's get started with some basic examples of using socat for various
connections.
1. Connect to TCP port 80 on the local or remote system:
# socat - TCP4:www.example.com:80
In this case, socat transfers data between STDIO (-) and a TCP4 connection to
port 80 on a host named www.example.com.
2. Use socat as a TCP port forwarder:
For a single connection, enter:
# socat TCP4-LISTEN:81 TCP4:192.168.1.10:80
For multiple connections, use the fork option. In one such example, when a client connects to port 3334, a new child process is generated. All
data sent by the clients is appended to the file /tmp/test.log . If the file does
not exist, socat creates it. The option reuseaddr allows an immediate
restart of the server process.
In this case, socat transfers data from stdin to the specified
multicast address using UDP over port 6666 for both the local and remote connections. The
command also tells the interface eth0 to accept multicast packets for the given
group.
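What the single-connection forwarder above (socat TCP4-LISTEN:81 TCP4:192.168.1.10:80) does can be sketched conceptually in Python. The echo backend and ephemeral ports below are stand-ins invented for the example, not the real web server and ports:

```python
import socket
import threading

def echo_backend(listener):
    # Stand-in for the real service at 192.168.1.10:80.
    conn, _ = listener.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

def forward_once(listener, target):
    # One-shot version of `socat TCP4-LISTEN:81 TCP4:host:80`:
    # accept a client, connect to the target, and relay one
    # request and one response between them.
    client, _ = listener.accept()
    upstream = socket.create_connection(target)
    upstream.sendall(client.recv(1024))
    client.sendall(upstream.recv(1024))
    client.close()
    upstream.close()

backend = socket.socket()
backend.bind(("127.0.0.1", 0))
backend.listen(1)
threading.Thread(target=echo_backend, args=(backend,), daemon=True).start()

front = socket.socket()
front.bind(("127.0.0.1", 0))
front.listen(1)
threading.Thread(target=forward_once,
                 args=(front, backend.getsockname()), daemon=True).start()

# The client talks only to the forwarder, yet reaches the backend.
c = socket.create_connection(front.getsockname())
c.sendall(b"hello")
reply = c.recv(1024)
print(reply)  # b'hello'
c.close()
```

socat's fork option corresponds to looping on accept and handling each client in a child, which is what makes it serve multiple connections where this one-shot sketch stops after one.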
Practical uses for socat
Socat is a great tool for troubleshooting. It is also handy for easily making
remote connections. Practically, I have used socat for remote MySQL connections.
In the example below, I demonstrate how I use socat to connect my web application
to a remote MySQL server by connecting over the local socket.
The above command connects to the remote server 192.168.100.5 by using port 3307.
However, all communication will be done on the Unix socket
/var/lib/mysql/mysql.sock , and this makes it appear to be a local
server.
Wrap up
socat is a sophisticated utility and indeed an excellent tool for every
sysadmin for getting things done and for troubleshooting.
"Error, some other host
already uses address" is printed when running the 'service network restart' or 'ifup ethX' command
on a CentOS/RHEL system. How do you check for a duplicate IP address on the network?
Using the arping command
Run the arping command with the -D switch to enable Duplicate Address
Detection. In the following example, substitute the address that you believe has been
duplicated and the interface that address is on, for example: arping -D -I eth0 192.168.1.100 (the address and interface here are placeholders).
The ip command is used to assign an address to a network interface and/or configure network interface parameters on Linux operating
systems. This command replaces the good old, now-deprecated ifconfig command on modern Linux distributions.
Find out which interfaces are configured on the system.
Query the status of an IP interface.
Configure the local loopback, Ethernet, and other IP interfaces.
Mark an interface as up or down.
Configure and modify default and static routing.
Set up a tunnel over IP.
Show ARP or NDISC cache entries.
Assign, delete, and set up IP addresses, routes, subnets, and other IP information on IP interfaces.
List IP addresses and property information.
Manage and display the state of all network interfaces.
Gather multicast IP address info.
Show neighbour objects, i.e. the ARP cache; invalidate the ARP cache; add an entry to the ARP cache; and more.
Set or delete routing entries.
Find the route an address (say 8.8.8.8 or 192.168.2.24) will take.
Modify the status of an interface.
Purpose
Use this command to display and configure the network parameters for host interfaces.
Syntax
ip OBJECT COMMAND
ip [options] OBJECT COMMAND
ip OBJECT help
Understanding ip command OBJECTS syntax
OBJECTS can be any one of the following and may be written in full or abbreviated form:
Object (abbreviated form) - Purpose
link (l) - Network device.
address (a, addr) - Protocol (IP or IPv6) address on a device.
addrlabel (addrl) - Label configuration for protocol address selection.
neighbour (n, neigh) - ARP or NDISC cache entry.
route (r) - Routing table entry.
rule (ru) - Rule in routing policy database.
maddress (m, maddr) - Multicast address.
mroute (mr) - Multicast routing cache entry.
tunnel (t) - Tunnel over IP.
xfrm (x) - Framework for IPsec protocol.
To get information about each object, use the help command as follows:
ip OBJECT help
ip OBJECT h
ip a help
ip r help
Warning: The commands described below must be executed with care. If you make a mistake, you will lose connectivity to the server.
You must take special care while working over an SSH-based remote session.
ip command examples
Don't be intimidated by ip command syntax. Let us get started quickly with examples.
Display info about all network interfaces
Type the following command to list all IP addresses assigned to all network interfaces: ip a
OR ip addr
Sample outputs:
Fig.01 Showing IP address assigned to eth0, eth1, lo using ip command
You can select between IPv4 and IPv6 using the following syntax:
### Only show TCP/IP IPv4 ###
ip -4 a
### Only show TCP/IP IPv6 ###
ip -6 a
It is also possible to specify and list particular interface TCP/IP details:
### Only show eth0 interface ###
ip a show eth0
ip a list eth0
ip a show dev eth0
### Only show running interfaces ###
ip link ls up
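Programs can read the same interface list that ip link shows. A small sketch using Python's standard library (the interface names printed will of course differ per machine):

```python
import socket

# socket.if_nameindex() returns (index, name) pairs for the same
# network devices that `ip link` lists (it queries the kernel's
# interface table, just as ip does).
interfaces = socket.if_nameindex()
for index, name in interfaces:
    print(index, name)
```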
Assign an IP address to an interface
The syntax is as follows to add an IPv4/IPv6 address: ip a add {ip_addr/mask} dev {interface}
To assign 192.168.1.200/255.255.255.0 to eth0, enter: ip a add 192.168.1.200/255.255.255.0 dev eth0
OR ip a add 192.168.1.200/24 dev eth0
Adding the broadcast address on the interface
By default, the ip command does not set any broadcast address unless explicitly requested. The syntax to set the broadcast
address is: ip addr add brd {ADDRESS-HERE} dev {interface}
ip addr add broadcast {ADDRESS-HERE} dev {interface}
ip addr add broadcast 172.20.10.255 dev dummy0
It is possible to use the special symbols + and - instead of the broadcast address by setting/resetting
the host bits of the interface prefix. In this example, add the address 192.168.1.50 with netmask 255.255.255.0 (/24), with standard
broadcast and label "eth0Home", to the interface eth0: ip addr add 192.168.1.50/24 brd + dev eth0 label eth0Home
You can set loopback address to the loopback device lo as follows: ip addr add 127.0.0.1/8 dev lo brd + scope host
Remove / Delete the IP address from the interface
The syntax is as follows to remove an IPv4/IPv6 address: ip a del {ipv6_addr_OR_ipv4_addr} dev {interface}
To delete 192.168.1.200/24 from eth0, enter: ip a del 192.168.1.200/24 dev eth0
Flush the IP address from the interface
You can delete or remove IPv4/IPv6 addresses one by one as
described above. However,
the flush command can remove IP addresses matching a given condition. For example, you can delete all the IP addresses from
the private network 192.168.2.0/24 using the following command: ip -s -s a f to 192.168.2.0/24
Sample outputs:
2: eth0 inet 192.168.2.201/24 scope global secondary eth0
2: eth0 inet 192.168.2.200/24 scope global eth0
*** Round 1, deleting 2 addresses ***
*** Flush is complete after 1 round ***
You can disable IP address on all the ppp (Point-to-Point) interfaces: ip -4 addr flush label "ppp*"
Here is another example for all the Ethernet interfaces: ip -4 addr flush label "eth*"
How do I change the state of the device to UP or DOWN?
The syntax is as follows: ip link set dev {DEVICE} {up|down}
To make the state of the device eth1 down, enter: ip link set dev eth1 down
To make the state of the device eth1 up, enter: ip link set dev eth1 up
How do I change the txqueuelen of the device?
You can set the
length of the transmit queue of the device using ifconfig command or ip command as follows: ip link set txqueuelen {NUMBER} dev {DEVICE}
In this example, change the default txqueuelen from 1000 to 10000 for the eth0: ip link set txqueuelen 10000 dev eth0
ip a list eth0
How do I change the MTU of the device?
For gigabit networks
you can set maximum transmission units (MTU) sizes (JumboFrames) for better network performance. The syntax is: ip link set mtu {NUMBER} dev {DEVICE}
To change the MTU of the device eth0 to 9000, enter: ip link set mtu 9000 dev eth0
ip a list eth0
Sample outputs:
2: eth0: mtu 9000 qdisc pfifo_fast state UP qlen 1000
link/ether 00:08:9b:c4:30:30 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.10/24 brd 192.168.1.255 scope global eth1
inet6 fe80::208:9bff:fec4:3030/64 scope link
valid_lft forever preferred_lft forever
Display neighbour/arp cache
The syntax is: ip n show
ip neigh show
Sample outputs (note: I masked out some data with alphabets):
74.xx.yy.zz dev eth1 lladdr 00:30:48:yy:zz:ww REACHABLE
10.10.29.66 dev eth0 lladdr 00:30:48:c6:0a:d8 REACHABLE
74.ww.yyy.xxx dev eth1 lladdr 00:1a:30:yy:zz:ww REACHABLE
10.10.29.68 dev eth0 lladdr 00:30:48:33:bc:32 REACHABLE
74.fff.uu.cc dev eth1 lladdr 00:30:48:yy:zz:ww STALE
74.rr.ww.fff dev eth1 lladdr 00:30:48:yy:zz:ww DELAY
10.10.29.65 dev eth0 lladdr 00:1a:30:38:a8:00 REACHABLE
10.10.29.74 dev eth0 lladdr 00:30:48:8e:31:ac REACHABLE
The last field shows the state of the "neighbour unreachability detection" machine for this entry:
STALE The neighbour is valid, but is probably already unreachable, so the kernel will try to check it at the first transmission.
DELAY A packet has been sent to the stale neighbour and the kernel is waiting for confirmation.
REACHABLE The neighbour is valid and apparently reachable.
Add a new ARP entry
The syntax is: ip neigh add {IP-HERE} lladdr {MAC/LLADDRESS} dev {DEVICE} nud {STATE}
In this example, add a permanent ARP entry for the neighbour 192.168.1.5 on the device eth0: ip neigh add 192.168.1.5 lladdr 00:1a:30:38:a8:00 dev eth0 nud perm
Where the neighbour state (nud) values mean:
permanent - The neighbour entry is valid forever and can only be removed administratively.
noarp - The neighbour entry is valid. No attempts to validate this entry will be made, but it can be removed when its lifetime expires.
stale - The neighbour entry is valid but suspicious. This option to ip neigh does not change the neighbour state if it was valid
and the address has not been changed by this command.
reachable - The neighbour entry is valid until the reachability timeout expires.
Delete an ARP entry
The syntax to invalidate or delete an ARP entry for the neighbour 192.168.1.5 on the device eth1 is as follows: ip neigh del {IPAddress} dev {DEVICE}
ip neigh del 192.168.1.5 dev eth1
Change the state to reachable for the neighbour 192.168.1.100 on the device eth1:
ip neigh chg 192.168.1.100 dev eth1 nud reachable
Flush ARP entry
The flush or f command flushes neighbour/ARP tables, by specifying some condition. The syntax is: ip -s -s n f {IPAddress}
In this example, flush the neighbour/ARP table entry for 192.168.1.5: ip -s -s n f 192.168.1.5
OR ip -s -s n flush 192.168.1.5
ip route: Routing table management commands
Use the following command to manage or manipulate the kernel routing table.
Show routing table
To display the contents of the routing tables: ip r
ip r list
ip route list
ip r list [options]
ip route
Sample outputs:
default via 192.168.1.254 dev eth1
192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.10
Display routing for 192.168.1.0/24: ip r list 192.168.1.0/24
Sample outputs:
192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.10
Add a new route
The syntax is: ip route add {NETWORK/MASK} via {GATEWAYIP}
ip route add {NETWORK/MASK} dev {DEVICE}
ip route add default via {GATEWAYIP}
ip route add default dev {DEVICE}
The syntax is as follows to delete default gateway: ip route del default
In this example, delete the route created in
previous subsection : ip route del 192.168.1.0/24 dev eth0
Old vs. new tool
Deprecated Linux command and their replacement cheat sheet:
Old command (deprecated) -> New command
ifconfig -a -> ip a
ifconfig enp6s0 down -> ip link set enp6s0 down
ifconfig enp6s0 up -> ip link set enp6s0 up
ifconfig enp6s0 192.168.2.24 -> ip addr add 192.168.2.24/24 dev enp6s0
ifconfig enp6s0 netmask 255.255.255.0 -> ip addr add 192.168.1.1/24 dev enp6s0
ifconfig enp6s0 mtu 9000 -> ip link set enp6s0 mtu 9000
ifconfig enp6s0:0 192.168.2.25 -> ip addr add 192.168.2.25/24 dev enp6s0
netstat -> ss
netstat -tulpn -> ss -tulpn
netstat -neopa -> ss -neopa
netstat -g -> ip maddr
route -> ip r
route add -net 192.168.2.0 netmask 255.255.255.0 dev enp6s0 -> ip route add 192.168.2.0/24 dev enp6s0
route add default gw 192.168.2.254 -> ip route add default via 192.168.2.254
arp -a -> ip neigh
arp -v -> ip -s neigh
arp -s 192.168.2.33 1:2:3:4:5:6 -> ip neigh add 192.168.2.33 lladdr 1:2:3:4:5:6 dev enp6s0
Can you please comment on whether it is possible to configure a point-to-point interface using the "ip" command set? I am especially
looking to change the broadcast nature of an eth interface (the link encap and network type) to behave as a point-to-point link.
At the same time, I don't want to use PPP or any other protocol.
How do I save the configuration so it persists after a reboot?
There is, for example, ip route save, but its output is binary and mostly useless.
The ip command needs an ip xxx dump mode that emits valid ip calls to recreate the same configuration, the same way iptables has iptables-save.
Now, in the age of cloud, we need a JSON interface, so we can incorporate all the power of ip into a REST interface in a couple of easy steps.
It is difficult to find a Linux computer that is not connected to the network, be
it server or workstation. From time to time it becomes necessary to diagnose faults,
intermittence, or slowness in the network. In this article, we will review some of the
Linux commands most used for network diagnostics.
10 Linux Commands For Network Diagnostics
1. ping One of the first commands, if not the first one, when
diagnosing a network failure or intermittence. The ping tool will help us determine if
there is connectivity on the network, be it local or the Internet.
[root@horla]# ping www.linuxandubuntu.com
PING www.linuxandubuntu.com (173.274.34.38) 56(84) bytes of data.
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=1 ttl=59 time=2.52 ms
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=2 ttl=59 time=2.26 ms
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=3 ttl=59 time=2.31 ms
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=4 ttl=59 time=2.36 ms
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=5 ttl=59 time=2.33 ms
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=6 ttl=59 time=2.24 ms
64 bytes from r4-nyc.webserversystems.com (173.274.34.38): icmp_seq=7 ttl=59 time=2.35 ms
2. traceroute This command lets us see the hops needed to reach a
destination. In this case, we see the hops required to reach our website. This test was
done from a laptop running Linux. In the example, we run a traceroute to our website,
www.linuxandubuntu.com.
horla@horla-ProBook:~$ traceroute www.linuxandubuntu.com
traceroute to www.linuxandubuntu.com (173.274.34.38), 30 hops max, 60 byte packets
 1  linuxandubuntu.com (192.168.1.1)  267.686 ms  267.656 ms  267.616 ms
 2  10.104.0.1 (10.104.0.1)  267.630 ms  267.579 ms  267.553 ms
 3  10.226.252.209 (10.226.252.209)  267.459 ms  267.426 ms  267.396 ms
 4  * * *
 5  10.111.2.137 (10.111.2.137)  266.913 ms  10.111.2.141 (10.111.2.141)  266.784 ms  10.111.2.101 (10.111.2.101)  266.678 ms
 6  5.53.0.149 (5.53.0.149)  266.594 ms  104.340 ms  104.273 ms
 7  5.53.3.155 (5.53.3.155)  135.133 ms  94.142.98.147 (94.142.98.147)  135.055 ms  176.52.255.35 (176.52.255.35)  135.069 ms
 8  94.142.127.229 (94.142.127.229)  197.890 ms  5.53.6.49 (5.53.6.49)  197.850 ms  94.142.126.161 (94.142.126.161)  223.327 ms
 9  ae-11.r07.nycmny01.us.bb.gin.ntt.net (129.250.9.1)  197.702 ms  197.715 ms  180.145 ms
10  * * *
11  csc180.gsc.webair.net (173.239.0.26)  179.719 ms  149.475 ms  149.383 ms
12  dsn010.gsc.webair.net (173.239.0.34)  149.288 ms  168.309 ms  168.202 ms
13  r4-nyc.webserversystems.com (173.274.34.38)  168.086 ms  168.105 ms  142.733 ms
horla@horla-ProBook:~$
3. route This command shows the routing table our Linux machine uses to connect
to the network. In this case, our machine goes out through the router 192.168.1.1.
horla@horla-ProBook:~$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.1 0.0.0.0 UG 600 0 0 wlo1
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 wlo1
192.168.1.0 0.0.0.0 255.255.255.0 U 600 0 0 wlo1
horla@horla-ProBook:~$
4. dig This command allows us to verify that DNS is working correctly; before that,
we should check which DNS servers are set in the network configuration. In this example, we look up the
IP address of our website, www.linuxandubuntu.com, which returns 173.274.34.38.
horla@horla-ProBook:~$ dig www.linuxandubuntu.com

; <<>> DiG 9.10.3-P4-Ubuntu <<>> www.linuxandubuntu.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12083
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.linuxandubuntu.com. IN A

;; ANSWER SECTION:
www.linuxandubuntu.com. 2821 IN A 173.274.34.38

;; Query time: 21 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Wed Nov 7 19:58:30 PET 2018
;; MSG SIZE rcvd: 51

horla@horla-ProBook:~$
5. ethtool This tool is a replacement for mii-tool. It is available from CentOS 6 onwards and
lets us see whether the network card is physically connected to the network; that is, we can diagnose
whether the network cable is actually connected to the switch.
# ethtool eth0
Settings for eth0: Supported ports: []
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No Advertised
link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: Unknown! Duplex: Unknown! (255)
Port: Other PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Link detected: yes
6. ip addr ls Another of the Linux-specific tools, which lets us list the
network cards and their respective IP addresses. This tool is very useful when you have several IP
addresses configured.
[root@linux named]# ip addr ls
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth6: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:15:5d:a0:f6:05 brd ff:ff:ff:ff:ff:ff
inet 193.82.34.169/27 brd 190.82.35.192 scope global eth6
inet 192.168.61.10/24 brd 192.168.61.255 scope global eth6:1
inet6 fe80::215:5dff:fea0:f605/64 scope link
valid_lft forever preferred_lft forever
7. ifconfig As essential as the previous ones, ifconfig shows the network
configuration of the cards installed in our machine. In this case, it shows one physical network card
(disconnected) on p37s0, the loopback interface lo, and the wireless card wlo1,
which is connected to the network. We intentionally highlight the installed cards and their
assigned IP addresses.
8. mtr Another of our favorite tools, mtr (My Traceroute), lets us
see the router hops and ping each one. This is very useful for determining which of these routers
have delays in network traffic.
9. nslookup Another tool to know the IP address of the host we want to reach. In this
case, we want to know the IP of our website, www.linuxandubuntu.com.
10. nmtui-edit Network Manager Text User Interface (nmtui, or Network Manager on the
command line). It uses ncurses and allows us to configure the network easily from the terminal, without
additional dependencies. It offers a text-based graphical interface for the user to make
those modifications.
Conclusion With these networking commands, we can manage the various parameters of the
network in Linux environments much more directly and precisely. Also, with the mtr command mentioned above, we get simpler
control over the state of our network and can check its different aspects, focused on optimization,
in a much more centralized way. Thanks for reading.
I am Linux kernel network and proprietary distributions developer and have actually read
the code.
Reading stuff in /proc is a standard mechanism and where
appropriate, all the tools are doing the same including 'ss' that you mentioned (which is btw
very poorly designed)
Also, there are several implementations of the net tools; the one from busybox is probably the
most famous alternative, and implementations don't hesitate to change how, when, and what is
being presented.
What is true though is that Linux kernel APIs are sometimes messy and tools like e.g.
pyroute2 are struggling with working around limitations and confusions. There is also a big
mess with the whole netfilter package as the only "API" is the iptables command-line tool
itself.
Linux is arguably the biggest and most important project on Earth and should respect all
views, races and opinions. If you would like to implement a more efficient and streamlined
network interface (which I very much beg for and may eventually find time to do) - then I'm
all in with you. I have some ideas of how to make the interface programmable by extending JIT
rules engine and making possible to implement the most demanding network logic in kernel
directly (e.g. protocols like mptcp and algorithms like Google Congestion Control for
WebRTC).
The OP's argument is that netlink sockets are more efficient in theory, so we should
abandon anything that uses the proc pseudo-filesystem, re-invent the wheel, and move even farther from the
UNIX tradition and POSIX compliance? And it may be slower on larger
systems? Define that for me, because I've never experienced that. I've worked on single
stove-pipe x86 systems, through the 'SPARC architecture' generation where everyone thought
Sun/Solaris was the way to go with single entire systems in a 42U rack, IRIX systems, all the
way to hundreds of RPM-based Linux distro nodes, physical, hypervised, and containered,
in an HPC, which are LARGE compute systems (fat and compute nodes).
That's a total shit comment with zero facts to back it up. This is like Good Will Hunting
'the bar scene' revisited...
OP, if you're an old hat like me, I'd fucking LOVE to know how old? You sound like
you've got about 5 days soaking wet under your belt with a Milkshake IPA in your hand. You
sound like a millennial developer-turned-sysadmin-for-a-day who's got all but
cloud-framework-administration under your belt and are being a complete poser. Any true
sys-admin is going to flip-their-shit just like we ALL did with systemd, and that shit still
needs to die. There, I got that off my chest.
I'd say you got two things right, but are completely off on one of them:
* Your description of 'inefficient' is what you got right: you sound like my mother
or grandmother describing their computing experiences looking at Pinterest in a web browser at
times. You might as well have just said slow, without any bearing, educated guess, or reason.
Sigh...
* I would agree that some of these tools need to change, but only to handle deeper kernel
containerization being built into Linux. One example that comes to mind is 'hostnamectl',
which is more dev-ops centric in terms of 'what' slice or provision you're referencing. A
lot of those tools like ifconfig, route, and the like still work in any Linux environment,
containerized or not --- fuck, they work today.
Anymore, I'm just a disgruntled and, I'm sure, soon-to-be-modded-down voice on /. that should be taken with a grain of salt. I'm not happy with the way
the movements of Linux have gone, and if this doesn't sound old hat I don't know what
does: at the end of the day, you have to embrace change. I'd say 0.0001% of any of us are in
control of those types of changes, no matter how we feel about it as end-user administrators
of those tools we've grown to be complacent about. I've got about 15 years left, and this thing
called Linux that I've made a good living on will be the next guy's steaming pile to deal
with.
Yeah. The other day I set up some demo video streaming on a Linux box. Fire up screen,
start my streaming program. Disconnect screen and exit my ssh system, and my streaming
freezes. There're a metric fuckton of reports of systemd killing detached/nohup'd processes,
but I check my config file and it's not that. Although them being that willing to walk away
from expected system behavior is already cause to blow a gasket. But no, something else is
going on here. I tweak the streaming code to catch all catchab
Just to give you guys some color commentary, I was participating quite heavily in Linux
development from 1994-1999, and Linus even added me to the CREDITS file while I was at the
University of Michigan for my fairly modest contributions to the kernel. [I prefer
application development, and I'm still a Linux developer after 24 years. I currently work for
the company Internet Brands.]
What I remember about ip and net is that they came about seemingly out of nowhere two
decades ago and the person who wrote the tools could barely communicate in English. There was
no documentation. net-tools by that time was a well-understood and well-documented package,
and many Linux devs at the time had UNIX experience pre-dating Linux (which was announced in
1991 but not very usable until 1994).
We Linux developers virtually created Internet programming, where most of our effort was
accomplished online, but in those days everybody still used books and of course the Linux
Documentation Project. I have a huge stack of UNIX and Linux books from the 1990's, and I
even wrote a mini-HOWTO. There was no Google. People who used Linux back then may seem like
wizards today because we had to memorize everything, or else waste time looking it up in a
book. Today, even if I'm fairly certain I already know how to do something, I look it up with
Google anyway.
Given that, ip and net were downright offensive. We were supposed to switch from a
well-documented system to programs written by somebody who can barely speak English (the
lingua franca of Linux development)?
Today, the discussion is irrelevant. Solaris, HP-UX, and the other commercial UNIX
versions are dead. Ubuntu has the common user and CentOS has the server. Google has complete
documentation for these tools at a glance. In my mind, there is now no reason to not
switch.
Although, to be fair, I still use ifconfig, even if it is not installed by default.
Systemd looks OK until you get into major troubles and start troubleshooting. After that you are ready to kill systemd
developers and blow up Red Hat headquarters ;-)
In general, it's better for application programs, including scripts, to use an
application programming interface (API) such as /proc, rather than a
user interface such as ifconfig, but in reality tons of scripts do use ifconfig and
such.
...and they have no other choice, and shell scripting is a central feature of UNIX.
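As a sketch of what reading the /proc interface directly looks like (field positions follow the standard Linux /proc/net/dev layout; the interface names on your system will differ):

```shell
# Print per-interface RX/TX byte counters straight from /proc/net/dev,
# skipping the two header lines. Field 1 is the interface name (with a
# trailing colon), field 2 is RX bytes, and field 10 is TX bytes.
awk 'NR > 2 { gsub(/:/, "", $1); printf "%s rx=%s tx=%s\n", $1, $2, $10 }' /proc/net/dev
```

This reads the same kernel data that ifconfig summarises, without depending on ifconfig's output format.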
The problem isn't so much new tools as new tools that suck. If I just type ifconfig it
will show me the state of all the active interfaces on the system. If I type ifconfig
interface I get back pretty much everything I want to know about it. If I want to get the
same data back with the ip tool, not only can't I, but I have to type multiple commands, with
far more complex arguments.
Crap tools written by morons with huge egos and rather mediocre skills. Happens time and
again, and the only sane answer to these people is "no". Good new tools also do not have to be
pushed on anybody; they can compete on merit. As soon as there is pressure to use something
new though, you can be sure it is inferior.
The problem isn't new tools. It's not even crap tools. It's the mindset that we need to
get rid of an ~70KB netstat, ~120KB ifconfig, etc. Like others have posted, this has more to
do with the ego of the new tools creators and/or their supporters who see the old tools as
some sort of competition. Well, that's the real problem, then, isn't it? They don't want to
have to face competition and the notion that their tools aren't vastly superior to the user
to justify switching completely, so they must force the issue.
Now, it'd be different if this was 5 years down the road, netstat wasn't being
maintained*, and most scripts/dependents had already been converted over. At that point
there'd be a good, serious reason to consider removing an outdated package. That's obviously
not the debate, though.
* Vs developed. If seven year old stable tools are sufficiently bug free that no further
work is necessary, that's a good thing.
If I type ifconfig interface I get back pretty much everything I want to know about it
How do you tell in ifconfig output which addresses are deprecated? When I run ifconfig
eth0.100 it lists 8 global addresses. I can deduce that the one with fffe in the middle is the
permanent address, but I have no idea which address it will use for outgoing
connections.
ip addr show dev eth0.100 tells me what I need to know. And it's only a few more
keystrokes to type.
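For reference, a sketch of that query (the interface name follows the commenter's example, and this assumes a system with IPv6 addresses configured):

```shell
# Show only addresses whose preferred lifetime has expired; "ip addr
# show" accepts flag selectors such as "deprecated".
ip -6 addr show dev eth0.100 deprecated

# The full listing also marks such addresses with the word "deprecated":
ip -6 addr show dev eth0.100 | grep -B1 deprecated
```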
"ip" (and "ip2" and whatever that other candidate not-so-better not-so-replacement of
ifconfig was) all have the same problem: They try to be the one tool that does everything
"ip". That's "assign ip address somewhere", "route the table", and all that. But that means
you still need a complete zoo of other tools, like brconfig,
iwconfig/iw/whatever-this-week.
In other words, it's a modeling difference. On sane systems, ifconfig _configures the
interface_, for all protocols and hardware features, bridges, vlans, what-have-you. And then
route _configures the routing table_. On linux... the poor kids didn't understand what they
were doing, couldn't fix their broken ifconfig to save their lives, and so went off to
reinvent the wheel, badly, a couple times over.
And I say the blogposter is just as much an idiot.
Per various people, netstat et al operate by reading various files in
/proc, and doing this is not the most efficient thing in the world
So don't use it. That does not mean you gotta change the user interface too. Sheesh.
However, the deeper issue is the interface that netstat, ifconfig, and company present
to users.
No, that interface is a close match to the hardware. Here is an interface, IOW something
that connects to a radio or a wire, and you can make it ready to talk IP (or back when, IPX,
appletalk, and whatever other networks your system supported). That makes those tools
hardware-centric. At least on sane systems. It's when you want to pretend shit that it all
goes awry. And boy, does linux like to pretend. The linux ifconfig-replacements are
IP-only-stack-centric. Which causes problems.
For example because that only does half the job and you still need the aforementioned zoo
of helper utilities that do things you can have ifconfig do if your system is halfway sane.
Which linux isn't, it's just completely confused. As is this blogposter.
On the other hand, the users expect netstat, ifconfig and so on to have their
traditional interface (in terms of output, command line arguments, and so on); any number
of scripts and tools fish things out of ifconfig output, for example.
linux' ifconfig always was enormously shitty here. It outputs lots of stuff I expect to
find through netstat and it doesn't output stuff I expect to find out through ifconfig.
That's linux, and that is NOT "traditional" compared to, say, the *BSDs.
As the Linux kernel has changed how it does networking, this has presented things like
ifconfig with a deep conflict; their traditional output is no longer necessarily an
accurate representation of reality.
Was it ever? linux is the great pretender here.
But then, "linux" embraced the idiocy oozing out of poettering-land. Everything out of
there so far has caused me problems that were best resolved by getting rid of that crap code.
Case in point: "Network-Manager". Another attempt at "replacing ifconfig" with something that
causes problems and solves very few.
Should the ip rule stuff be part of route or a separate command?
There are things that could be better with ip. IIRC it's very fussy about where the table
selector goes in the argument list but route doesn't support this at all.
I also don't think route has anything like 'nexthop dev $if' which is a godsend for ipv6
configuration.
I stayed with route for years. But ipv6 exposed how incomplete the tool is - and clearly
nobody cares enough to add all the missing functionality.
Perhaps ip addr, ip route, ip rule, ip mroute, ip link should be separate commands. I've
never looked at the source code to see whether it's mostly common or mostly separate.
The people who think the old tools work fine don't understand all the advanced networking
concepts that are only possible with the new tools: interfaces can have multiple IPs, one IP
can be assigned to multiple interfaces, there's more than one routing table, firewall rules
can add metadata to packets that affects routing, etc. These features can't be accommodated
by the old tools without breaking compatibility.
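A sketch of the kind of configuration the old tools cannot express: a second routing table selected by source address. The addresses, the eth1 device, and table number 100 are all made up for illustration, and these commands need root:

```shell
# Give traffic sourced from 10.0.0.0/24 its own default route via a
# second routing table (table 100), selected by an "ip rule".
ip route add default via 192.168.2.1 dev eth1 table 100
ip rule add from 10.0.0.0/24 lookup 100

# Inspect the result:
ip rule show
ip route show table 100
```

route(8) has no notion of multiple tables or source-based rules, which is exactly the gap described above.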
Someone cared enough to implement an entirely different tool to do the same old jobs
plus some new stuff, it's too bad they didn't do the sane thing and add that functionality
to the old tool where it would have made sense.
It's not that simple. The iproute2 suite wasn't written to *replace* anything.
It was written to provide a user interface to the rapidly expanding RTNL API.
The net-tools maintainers (or anyone who cared) could have started porting it if they liked.
They didn't. iproute2 kept growing to provide access to all the new RTNL interfaces, while
net-tools got farther and farther behind.
What happened was organic. If someone brought net-tools up to date tomorrow and everyone
liked the interface, iproute2 would be dead in its tracks. As it sits, myself, and most of
the more advanced level system and network engineers I know have been using iproute2 for just
over a decade now (really, since the point where ifconfig became an incomplete and poorly
simplified way to manage the networking stack).
Nope. Kernel authors come up with a fancy new netlink interface for better interaction with
the kernel's network stack. They don't give two squirts of piss whether or not a user-space
interface exists for it yet. Some guy decides to write an interface to it. Initially, it only
supports things like modifying the routing rule database (something that can't be done with
route), and he is trying to make an implementation of this protocol, not trying to hack it into
software that already has its own framework using different APIs.
This source was always freely available for the net-tools guys to take and add to their own
software.
Instead, we get
this. [sourceforge.net]
Nobody is giving a positive spin. This is simply how it happened. This is what happens when
software isn't maintained, and you don't get to tell other people to maintain it. You're
free, right now, today, to port the iproute2 functionality into net-tools. They're unwilling
to, however. That's their right. It's also the right of other people to either fork it, or
move to more functional software. It's your right to help influence that. Or bitch on
slashdot. That probably helps, too.
keep the command names the same but rewrite how they function?
Well, keep the syntax too, so old scripts would still work. The old command name could
just be a script that calls the new commands under the hood. (Perhaps this is just what you
meant, but I thought I'd elaborate.)
What was the reason for replacing "route" anyhow? It's worked for decades and done one
thing.
Idiots that confuse "new" with better and want to put their mark on things. Because they
are so much greater than the people that got the things to work originally, right? Same as
the systemd crowd. Sometimes, they realize decades later they were stupid, but only after
having done a lot of damage for a long time.
I didn't RTFA (this is Slashdot, after all) but from TFS it sounds like exactly the reason
I moved to FreeBSD in the first place: the Linux attitude of 'our implementation is broken,
let's completely change the interface'. ALSA replacing OSS was the instance of this that
pushed me away. On Linux, back around 2002, I had some KDE and some GNOME apps that talked to
their respective sound daemon, and some things like XMMS and BZFlag that used
/dev/dsp directly. Unfortunately, Linux decided to only support s
Unix was founded on the idea of lots of simple command line tools that do one job well
and don't depend on system idiosyncrasies. If you make the tool have to know the lower layers
of the system to exploit them, then you break the encapsulation. Polling /proc has worked
across eons of Linux flavors without breaking. When you make everything integrated, it creates
paralysis to change down the road for backward compatibility: a small speed gain now for
massive fragility and no portability later.
GNU may not be Unix, but its foundational idea lies in the simple command tool paradigm.
It's why GNU was so popular, and it's why people even think that Linux is Unix. That idea is
the character of Linux. If you want a marvelously smooth, efficient, consistent integrated
system that then, after a decade of revisions, feels like a knotted tangle of twine in your
junk drawer, try Windows.
The error you're making is thinking that Linux is UNIX.
It's not. It's merely UNIX-like. And with first SystemD and now this nonsense, it's
rapidly becoming less UNIX-like. The Windows of the UNIX(ish) world.
Happily, the BSDs seem to be staying true to their UNIX roots.
In theory netstat, ifconfig, and company could be rewritten to use netlink too; in
practice this doesn't seem to have happened and there may be political issues involving
different groups of developers with different opinions on which way to go.
No, it is far simpler than looking for some mythical "political" issues. It is simply that
hackers - especially amateur ones, who write code as a hobby - dislike trying to work out how
old stuff works. They like writing new stuff, instead.
Partly this is because of the poor documentation: explanations of why things work, what
other code was tried but didn't work out, the reasons for weird-looking constructs,
techniques and the history behind patches. It could even be that many programmers are wedded
to a particular development environment and lack the skill and experience (or find it beyond
their capacity) to do things in ways that are alien to it. I feel that another big part is
that merely rewriting old code does not allow for the "look how clever I am" element
that is present in fresh, new software. That seems to be a big part of the amateur hacker's
effort-reward equation.
One thing that is imperative, however, is to keep backwards compatibility, so that the same
options continue to work and provide the same content and format. Possibly Unix/Linux's
only remaining advantage over Windows for sysadmins is its scripting. If that were lost,
there would be little point keeping it around.
iproute2 exists because ifconfig, netstat, and route do not support the full capabilities
of the linux network stack.
This is because today's network stack is far more complicated than it was in the past. For
very simple networks, the old tools work fine. For complicated ones, you must use the new
ones.
Your post could not be any more wrong. Your moderation amazes me. It seems that slashdot
is full of people who are mostly amateurs.
iproute2 has been the main network management suite for linux amongst higher end sysadmins
for a decade. It wasn't written to sate someone's desire to change for the sake of change, to
make more complicated, to NIH. It was written because the old tools can't encompass new
functionality without being rewritten themselves.
So basically there is a proposal to dump existing terminal utilities that are
cross-platform and create custom Linux utilities - then get rid of the existing
functionality? That would be moronic! I already go nuts remoting into a windows platform and
then an AIX and Linux platform and having different command line utilities / directory
separators / etc. Adding yet another difference between my Linux and macOS/AIX terminals
would absolutely drive me bonkers!
I have no problem with updating or rewriting or adding functionalities to existing
utilities (for all 'nix platforms), but creating a yet another incompatible platform would be
crazily annoying.
(not a sys admin, just a dev who has to deal with multiple different server platforms)
All output for 'ip' is machine readable, not human.
Compare
$ ip route
to
$ route -n
Which is more readable? Fuckers.
Same for
$ ip a
and
$ ifconfig
Which is more readable? Fuckers.
The new commands should generally make the same output as the old, using the same options,
by default. Using additional options to get new behavior. -m is commonly used to get "machine
readable" output. Fuckers.
It is like the systemd interface fuckers took hold of everything. Fuckers.
BTW, I'm a happy person almost always, but change for the sake of change is fucking
stupid.
Want to talk about resolv.conf, anyone? Fuckers! Easier just to purge that shit.
I'm growing increasingly annoyed with Linux' userland instability. Seriously considering a
switch to NetBSD because I'm SICK of having to learn new ways of doing old things.
For those who are advocating the new tools as additions rather than replacements: Remember
that this will lead to some scripts expecting the new tools and some other scripts expecting
the old tools. You'll need to keep both flavors installed to do ONE thing. I don't
know about you, but I HATE to waste disk space on redundant crap.
What pisses me off is when I go to run ifconfig and it isn't there, and then I Google on
it and there doesn't seem to be *any* direct substitute that gives me the same information.
If you want to change the command then fine, but allow the same output from the new commands.
Furthermore, another bitch I have is most systemd installations don't have an easy substitute
for /etc/rc.local.
It does not make any sense that some people spend time and money replacing what is
currently working with some incompatible crap.
Therefore, the only logical alternative is that they are paid (in some way) to break what
is working.
Also, if you rewrite tons of systems tools you have plenty of opportunities to insert
useful bugs that can be used by the various spying agencies.
You do not think that the current CPU flaws are just by chance, right?
Imagine the wonder of being able to spy on any machine, regardless of the level of SW
protection.
There is no need to point out that I cannot prove it, I know; it just makes sense to
me.
It does not make any sense that some people spend time and money replacing what is
currently working with some incompatible crap. (...) There is no need to point out that I
cannot prove it, I know; it just makes sense to me.
Many developers fix problems like a guy about to lose a two week vacation because he can't
find his passport. Rip open every drawer, empty every shelf, spread it all across the tables
and floors until you find it, then rush out the door leaving everything in a mess. It solved
HIS problem.
IP aliases have always and still do appear in ifconfig as separate logical interfaces.
The assertion that ifconfig only displays one IP address per interface is also demonstrably
false.
Using these false bits of information to advocate for change seems rather ridiculous.
One change I would love to see... "ping" bundled with most Linux distros doesn't support
IPv6. You have to call the IPv6-specific analogue, which is unworkable. Knowing the address
family in advance is not a reasonable expectation and works contrary to how all other
IPv6-capable software any user would actually run works.
Heck, for a while traceroute supported both address families. The one by Olaf Kirch eons
ago did; then someone decided "not invented here" and replaced it with one that works like
ping6, where you have to call traceroute6 if you want v6.
It seems anymore nobody spends time fixing broken shit... they just spend their time
finding new ways to piss me off. Now I have to type journalctl and wait for hell to freeze
over just to liberate log data I previously could access nearly instantaneously. It almost
feels like Microsoft's event viewer now.
TFA is full of shit. IP aliases have always and still do appear in ifconfig as separate
logical interfaces.
No, you're just ignorant.
Aliases do not appear in ifconfig as separate logical interfaces.
Logical interfaces appear in ifconfig as logical interfaces.
Logical interfaces are one way to add an alias to an interface. A crude way, but a way.
The assertion that ifconfig only displays one IP address per interface is also demonstrably
false.
Nope. Again, you're just ignorant.
root@swalker-samtop:~# tunctl
Set 'tap0' persistent and owned by uid 0
root@swalker-samtop:~# ifconfig tap0 10.10.10.1 netmask 255.255.255.0 up
root@swalker-samtop:~# ip addr add 10.10.10.2/24 dev tap0
root@swalker-samtop:~# ifconfig tap0:0 10.10.10.3 netmask 255.255.255.0 up
root@swalker-samtop:~# ip addr add 10.10.1.1/24 scope link dev tap0:0
root@swalker-samtop:~# ifconfig tap0 | grep inet
inet 10.10.1.1 netmask 255.255.255.0 broadcast 0.0.0.0
root@swalker-samtop:~# ifconfig tap0:0 | grep inet
inet 10.10.10.3 netmask 255.255.255.0 broadcast 10.10.10.255
root@swalker-samtop:~# ip addr show dev tap0 | grep inet
inet 10.10.1.1/24 scope link tap0
inet 10.10.10.1/24 brd 10.10.10.255 scope global tap0
inet 10.10.10.2/24 scope global secondary tap0
inet 10.10.10.3/24 brd 10.10.10.255 scope global secondary tap0:0
If you don't understand what the differences are, you really aren't qualified to opine on
the matter.
Ifconfig is fundamentally incapable of displaying the amount of information that can go with
layer-3 addresses, interfaces, and the architecture of the stack in general. This is why
iproute2 exists.
SysD: (v). To force an unnecessary replacement of something that already works well with
an alternative that the majority perceive as fundamentally worse.
Example usage: Wow you really SysD'd that up.
The netstat command is used to check all
incoming and outgoing connections on a Linux server. Using the grep command you can select
lines matching a pattern you define.
awk is a very important command, generally
used for scanning for patterns and processing text. It is a powerful tool for shell scripting.
sort is used to sort output, and sort -n sorts output in numeric order.
uniq -c helps to get unique output by collapsing duplicate lines
and prefixing each with its count.
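Put together, the pipeline described above might look like this (a sketch; the column layout matches net-tools netstat -ant output, where field 5 is the foreign address and field 6 the state):

```shell
# Count ESTABLISHED connections per remote IP: netstat lists the
# connections, awk keeps the remote address ($5) of established ones,
# and sort | uniq -c | sort -n produces a ranked count.
netstat -ant \
    | awk '$6 == "ESTABLISHED" { split($5, a, ":"); print a[1] }' \
    | sort | uniq -c | sort -n
```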
Telnet, the protocol and the command line tool, were how system administrators used to log
into remote servers. However, because there is no encryption, all communication,
including passwords, is sent in plaintext; this meant that Telnet was abandoned in favour of SSH
almost as soon as SSH was created.
For the purposes of logging into a remote server, you should never use it, and you probably
never have considered it. This does not mean that the telnet command is not a very useful
tool when used for debugging remote connection problems.
In this guide, we will explore using telnet to answer the all too common
question, "Why can't I ###### connect‽".
This frustrated question is usually encountered after installing an application server like a
web server, an email server, an ssh server, a Samba server, etc., and for some reason the client
won't connect to the server.
telnet isn't going to solve your problem but it will, very quickly, narrow down
where you need to start looking to fix your problem.
telnet is a very simple command to use for debugging network related issues and
has the syntax:
telnet <hostname or IP> <port>
Because telnet will initially simply establish a connection to the port without
sending any data it can be used with almost any protocol including encrypted protocols.
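Where telnet itself is not installed, bash's built-in /dev/tcp pseudo-device gives a similar quick check (a sketch; samba.example.com is the guide's placeholder host):

```shell
# Try to open a TCP connection to the port, succeeding within 5 seconds
# or giving up. /dev/tcp/HOST/PORT is a bash-only redirection target.
host=samba.example.com
port=445
if timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "open"
else
    echo "closed or filtered"
fi
```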
There are four main errors that you will encounter when trying to connect to a problem
server. We will look at all four, explore what they mean and look at how you should fix
them.
For this guide we will assume that we have just installed a Samba server at samba.example.com and we can't get a
local client to connect to the server.
Error 1 - The connection that hangs forever
First, we need to attempt to connect to the Samba server with telnet . This is
done with the following command (Samba listens on port 445):
telnet samba.example.com 445
Sometimes, the connection will get to this point, stop, and hang indefinitely:
This means that telnet has not received any response to its request to
establish a connection. This can happen for two reasons:
There is a router down between you and the server.
There is a firewall dropping your request.
To rule out 1., run a quick mtr samba.example.com to
the server. If the server is accessible, then it's a firewall (note: it's almost always a
firewall).
First, check whether there are any firewall rules on the server itself with the command
iptables -L -v -n . If there are none, you will get the following
output:
iptables -L -v -n
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
If you see anything else then this is likely the problem. In order to check, stop
iptables for a moment and run telnet samba.example.com 445 again and
see if you can connect. If you still can't connect see if your provider and/or office has a
firewall in place that is blocking you.
Error 2 - DNS problems
A DNS issue will occur if the hostname you are using does not resolve to an IP address. The
error that you will see is as follows:
telnet samba.example.com 445
Server lookup failure: samba.example.com:445, Name or service not known
The first step here is to substitute the IP address of the server for the hostname. If you
can connect to the IP but not the hostname then the problem is the hostname.
This can happen for many reasons (I have seen all of the following):
Is the domain registered? Use whois to find out if it is.
Is the domain expired? Use whois to find out if it is.
Are you using the correct hostname? Use dig or host to ensure
that the hostname you are using resolves to the correct IP.
Is your A record correct? Check that you didn't accidentally create an A record for
something like smaba.example.com .
Always double check the spelling and the correct hostname (is it
samba.example.com or samba1.example.com ) as this will often trip you
up especially with long, complicated or foreign hostnames.
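A quick way to test resolution from the command line (a sketch; getent is part of glibc, and samba.example.com is the guide's placeholder):

```shell
# Resolve the name the same way client programs do, consulting
# /etc/hosts and resolv.conf in nsswitch order. No output means the
# name does not resolve.
getent hosts samba.example.com || echo "name does not resolve"

# To query DNS directly, bypassing /etc/hosts (if dig is installed):
# dig +short samba.example.com A
```

If getent resolves the name but dig does not (or vice versa), the mismatch itself tells you whether the problem is in DNS or in the local hosts file.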
Error 3 - The server isn't
listening on that port
This error occurs when telnet is able to reach the server but there is
nothing listening on the port you specified. The error looks like this:
telnet samba.example.com 445
Trying 172.31.25.31...
telnet: Unable to connect to remote host: Connection refused
This can happen for a couple of reasons:
Are you sure you're connecting to the right server?
Your application server is not listening on the port you think it is. Check exactly what
it's doing by running netstat -plunt on the server and see what port it is, in
fact, listening on.
The application server isn't running. This can happen when the application server exits
immediately and silently after you start it. Start the server and run ps auxf or
systemctl status application.service to check it's running.
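If netstat itself is missing, listening sockets can also be read straight from /proc (a sketch; /proc/net/tcp reports addresses and ports in hex, and socket state 0A means LISTEN):

```shell
# Check whether anything is listening on TCP port 445 by scanning
# /proc/net/tcp: field 2 is local_address (IP:PORT in hex) and field 4
# is the socket state.
port_hex=$(printf ':%04X' 445)
awk -v p="$port_hex" '$2 ~ p"$" && $4 == "0A" { found = 1 }
    END { print (found ? "listening" : "not listening") }' /proc/net/tcp
```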
Error 4 - The connection was closed by the server
This error happens when the connection was successful, but the application server has a
built-in security measure that killed the connection as soon as it was made. This error looks
like:
telnet samba.example.com 445
Trying 172.31.25.31...
Connected to samba.example.com.
Escape character is '^]'.
Connection closed by foreign host.
The last line Connection closed by foreign host. indicates that the connection
was actively terminated by the server. In order to fix this, you need to look at the security
configuration of the application server to ensure your IP or user is allowed to connect to
it.
A successful connection
This is what a successful telnet connection attempt looks like:
telnet samba.example.com 445
Trying 172.31.25.31...
Connected to samba.example.com.
Escape character is '^]'.
The connection will stay open for a while depending on the timeout of the application server
you are connected to.
A telnet connection is closed by typing CTRL+] and then when you see the
telnet> prompt, type "quit" and hit ENTER i.e.:
telnet samba.example.com 445
Trying 172.31.25.31...
Connected to samba.example.com.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
Conclusion
There are a lot of reasons that a client application can't connect to a server. The exact
reason can be difficult to establish especially when the client is a GUI that offers little or
no error information. Using telnet and observing the output will allow you to very
rapidly narrow down where the problem lies and save you a whole lot of time.
Another way to process the content from tcpdump
is to save the raw network packet data to a file and then process the file to find and decode the information that you
want.
There are a number of modules in different languages that provide functionality for reading and decoding the data
captured by tcpdump and snoop. For example, within Perl there are two modules: Net::SnoopLog (for snoop) and Net::TcpDumpLog
(for tcpdump). These will read the raw data content. The basic interface for both of these modules is the same.
To start, you first need to create a binary record of the packets going past on the network by writing out the data
to a file using either snoop or tcpdump. For this example, we'll use tcpdump and the Net::TcpDumpLog module: $
tcpdump -w packets.raw
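As a sketch of the capture step (-c limits the packet count, -w writes raw packets, -r reads them back, and -nn disables name and port lookups; capturing normally requires root):

```shell
# Capture 100 packets to a file, then read the file back for a quick
# human-readable summary before handing it to the Perl script.
tcpdump -c 100 -w packets.raw
tcpdump -nn -r packets.raw | head
```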
Once you have amassed the network data, you can start to process the contents to find the information
you want. Net::TcpDumpLog parses the raw network data saved by tcpdump. Because the data is in its raw binary format,
parsing the information requires processing this binary data. For convenience, another suite of modules, NetPacket::*,
provides decoding of the raw data.
For example, Listing 8 shows a simple script that prints out the IP address information for all
of the packets.
use Net::TcpDumpLog;
use NetPacket::Ethernet;
use NetPacket::IP;

my $log = Net::TcpDumpLog->new();
$log->read("packets.raw");

# Walk every captured packet by index.
foreach my $index ($log->indexes)
{
    my $packet = $log->data($index);
    my $ethernet = NetPacket::Ethernet->decode($packet);

    # 0x0800 is the Ethernet frame type for IPv4.
    if ($ethernet->{type} == 0x0800)
    {
        my $ip = NetPacket::IP->decode($ethernet->{data});
        printf(" %s to %s protocol %s \n",
            $ip->{src_ip}, $ip->{dest_ip}, $ip->{proto});
    }
}
The first part is to extract each packet. The Net::TcpDumpLog module serializes each packet, so that we
can read each packet by using the packet ID. The data() method then returns the raw data for the entire
packet.
As with the output from snoop, we have to extract each of the blocks of data from the raw network packet information.
So in this example, we first need to extract the ethernet packet, including the data payload, from the raw network packet.
The NetPacket::Ethernet module does this for us.
Since we are looking for IP packets, we can check for IP packets by looking at the Ethernet packet type. IP packets
have an ID of 0x0800.
The NetPacket::IP module can then be used to extract the IP information from the data payload of the
Ethernet packet. The module provides the source IP, destination IP and protocol information, among others, which we
can then print.
Using this basic framework you can perform more complex lookups and decoding that do not rely on the automated solutions
provided by tcpdump or snoop. For example, if you suspect that there is HTTP traffic going past on a non-standard port
(i.e., not port 80), you could look for the string HTTP on ports other than 80 from the suspected host IP using the
script in Listing 9.
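Listing 9 itself is not reproduced in this extract, but a minimal sketch of such a script can be built on the same framework as Listing 8. Note that the NetPacket::TCP usage, the port test, and the HTTP-string match below are assumptions for illustration, not the article's actual listing:

```perl
use strict;
use warnings;

use Net::TcpDumpLog;
use NetPacket::Ethernet;
use NetPacket::IP;
use NetPacket::TCP;

my $log = Net::TcpDumpLog->new();
$log->read("packets.raw");

foreach my $index ($log->indexes)
{
    my $ethernet = NetPacket::Ethernet->decode($log->data($index));
    next unless $ethernet->{type} == 0x0800;    # IPv4 frames only

    my $ip = NetPacket::IP->decode($ethernet->{data});
    next unless $ip->{proto} == 6;              # TCP segments only

    my $tcp = NetPacket::TCP->decode($ip->{data});

    # The sample output below suggests the check is on the source port;
    # a stricter test might use dest_port instead
    if ($tcp->{src_port} != 80
        && $tcp->{data} =~ m{^(GET|POST|HEAD)\s.*HTTP/1}s)
    {
        print "Found HTTP traffic on non-port 80\n";
        printf("%s (port: %d) to %s (port: %d)\n",
               $ip->{src_ip},  $tcp->{src_port},
               $ip->{dest_ip}, $tcp->{dest_port});

        # Print the request line (first line of the payload)
        print((split /\r?\n/, $tcp->{data})[0], "\n");
    }
}
```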
$ perl http-non80.pl
Found HTTP traffic on non-port 80
192.168.0.2 (port: 39280) to 168.143.162.100 (port: 80)
GET /statuses/user_timeline.json HTTP/1.1
Found HTTP traffic on non-port 80
192.168.0.2 (port: 39282) to 168.143.162.100 (port: 80)
GET /statuses/friends_timeline.json HTTP/1
In this particular case we're seeing traffic from the host to an external website (Twitter).
Obviously, in this example, we are dumping out the raw data, but you could use the same basic structure to decode
data in any format, using any public or proprietary protocol structure. If you are using or developing a protocol
and know its format, you could use this method to extract and monitor the data being transferred.
Although, as already mentioned, tools like tcpdump, iptrace and snoop provide basic network analysis and decoding,
there are GUI-based tools that make the process even easier. Wireshark is one such tool that supports a vast array of
network protocol decoding and analysis.
One of the main benefits of Wireshark is that you can capture packets over a period of time (just as with tcpdump)
and then interactively analyze and filter the content based on the different protocols, ports and other data. Wireshark
also supports a huge array of protocol decoders, enabling you to examine in minute detail the contents of the packets
and conversations.
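As an illustration of that interactive filtering, the same check the earlier script performs can be expressed as a Wireshark display filter. The sketch below shows one plausible form (per Wireshark's display-filter syntax, where tcp.port matches either endpoint, so the port test must be negated as a whole):

```
http && !(tcp.port == 80)
```

This shows only conversations Wireshark decodes as HTTP where neither TCP port is 80.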
Figure 1 shows a basic Wireshark screenshot with packets of all types listed. The window is divided into three main
sections: the list of filtered packets, the decoded protocol details, and the raw packet data in hex/ASCII format.
The goal of Xplico is to extract the application data from an Internet traffic capture. For example, from a pcap
file Xplico extracts each email (POP, IMAP, and SMTP protocols), all HTTP contents, each VoIP call (SIP), and so
on. Xplico isn't a packet sniffer or a network protocol analyzer; it's an IP/Internet traffic decoder, or network forensic
analysis tool (NFAT).
About:
vnStat is a console-based network traffic monitor that keeps a log of hourly, daily, and monthly network traffic for
the selected interface(s). However, it isn't a packet sniffer. The traffic information is analyzed from the /proc filesystem.
That way, vnStat can be used even without root permissions.
Release focus: Minor bugfixes
Changes:
This release fixes a bug that caused a segmentation fault if the environment variable "HOME" wasn't defined, which in
turn caused most PHP/CGI scripts using vnStat to malfunction. Some minor feature enhancements are also included.
About:
Tcpreplay is a set of Unix tools which allows the editing and replaying of captured network traffic in pcap (tcpdump)
format. It can be used to test a variety of passive and inline network devices, including IPSs, UTMs, routers, firewalls,
and NIDS.
Release focus: Major bugfixes
Changes:
This release fixes some serious regression bugs that prevented tcprewrite from editing most packets on Intel and other
little-endian systems. Some smaller bugfixes and tweaks were made to improve replay performance.
A separate tcpreplay release improved OpenBSD, HP-UX, Cygwin/Win32, x86_64, and little-endian support, made enhancements
to allow editing packets with tcpreplay, and improved libpcap detection.
About: netrw is a simple (but powerful) tool for transporting data over the Internet. Its main purpose is
to simplify and speed up file transfers to hosts without an FTP server. It can also be used for uploading data to
another user. It is something like a one-way netcat (nc) with some nice features for data transfers. It can compute
and check message digests (MD5, SHA-1, and some others) of all the data being transferred. It can also print information
on progress and average speed. At the end, it prints a summary of the transfer.
Changes: A bug causing netread to sometimes end up in an endless loop after receiving all data was fixed.
Copyright 1996-2021 by Softpanorama Society. www.softpanorama.org