Softpanorama

May the source be with you, but remember the KISS principle ;-)
Home Switchboard Unix Administration Red Hat TCP/IP Networks Neoliberalism Toxic Managers
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

SSH for System Administrators

News Application Layer of TCP/IP Protocol Recommended Books Recommended Links SSH troubleshooting Passwordless login Server Configuration
SCP Teraterm Putty WinSCP FileZilla Free SSH Windows clients sftp
Mini-tutorial SSH Usage in Pipes SSH client ssh & TCP Wrappers TCP Wrappers Xinetd sshfc
Reverse SSH Tunnels Tunneling/port forwarding X11 forwaring over ssh SSH Bouncing   Software distribution Digital Signatures
X11 forwarding over SSH SOCKS4 proxy   Cheap Web hosting with SSH access   Securing SSH daemon Installation of PaderSyncSSH on Blackberry
Public-Key Cryptography SSH History Horror Stories Tips Random Findings Humor Etc

Secure Shell (SSH) was originally developed by Tatu Ylцnen, Finland, as a secure replacement for Telnet and rsh. Essentially he re-implemented rsh over secure channel. Later the author introduced some restrictions and the previous freely licensed version of the project was forked by FreeBSD team to what now became OpenSSH. which become now dominant SSH implementation. See History. rlogin, rcp, rsh. It is based on so called Public-Key Cryptography Optional compression of traffic is provided.

SSH can use many authentication schemes such as SecurID, Kerberos, S/KEY to provide a highly secure remote access point to UNIX servers. By default, the OpenSSH server listens for requests on port 22 and port 6010 for X11 forwarding.

SSH1 was the the first version (protocol v1.2 and v1.5) that was free in the earlier days, but licensing has become more restrictive and SSH Communications and DataFellows are trying to get people to move to the newer SSH2 (which is commercial).

OpenSSH has been produced by the OpenBSD team and community. It was first integrated in OpenBSD in 1999. Linux got it much later as a present from OpenBSD community.

OpenSSH intended as a drop-in replacement for the Berkeley "r"-tools (rsh, rlogin, rcp). It can also tunnel via encrypted tunnel X windows and other TCP/IP application level protocols.

SSH usage in pipes

See SSH Usage in Pipes for more info

Like r-tools ssh supports cross computer pipes:

tar cvzf - . | rsh xxx.xxx.xxx.xxx "( cd $dir; tar xzf - )"
or
tar cvzf - . | ssh xxx.xxx.xxx.xxx 	"( cd $dir; tar xzf - )"

This is also possible to do in the reverse direction

ssh xxx.xxx.xxx.xxx "( cd $dir; tar cvzf - )" | tar xzf -

Components

Openssh is a leading SSH implementation (by world famous OpenBSD team). It was first included into OpenBSD 2.6. The software was developed outside the USA, using code from roughly 10 countries, and is freely useable and re-useable by everyone under a BSD license.

The OpenSSH suite includes three main components:

Also included is sshd which is the server part of the package, and several utilities like ssh-add, ssh-agent, ssh-keysign, ssh-keyscan, ssh-keygen and sftp-server.

Binary name Description
ssh rlogin/rsh-like client program
ssh-agent This command can store passphrase for private RSA keys in memory, to respond to challenges (challenge response) from the server. This simplifies repeated authentication (imitation of no password authentication)
ssh-add Tool which adds keys to ssh-agent
sftp FTP-like program that works over SSH1 and SSH2 protocol
scp File copy program that acts like rcp
ssh-keygen This command generates RSA public and private keys pair
ssh-keyscan A utility for gathering the public ssh host keys from a
number of SSH servers. The keys gathered are
displayed on the standard output. This output can then be compared with the key in the file /etc/ssh/ssh_known_hosts and be included in the
file.
ssh-keysign Utility for hostbased authentication
sshd The daemon that permits you to login
sftp-server SFTP server subsystem (started automatically by sshd)

OpenSSH supports SSH protocol versions 1.3, 1.5, and 2.0. Only the latest versions of v. 2.0 have required level of security. Earlier versions have serious security vulnerabilities.

It's not without drawbacks: since SSH is encrypted, troubleshooting is quite challenging. There are also multiple hazards that are inherent in the protocol of that level complexity. Vulnerabilities with SSH, especially were one of the major exploits that were used against ISPs. Beyond protocol problems, there are architecture issues by allowing encrypted pipes. It defeats IDS mechanisms.

Shhh . . . secrets about SSH

Secure Shell (SSH) is a rich subsystem used to log in to remote systems, copy files, and tunnel through firewalls—securely. Since SSH is a subsystem, it offers plenty of options to customize and streamline its operation. In fact, SSH provides an entire "dot directory", named $HOME/.ssh, to contain all its data. (Your .ssh directory must be mode 600 to preclude access by others. A mode other than 600 interferes with proper operation.) Specifically, the file $HOME/.ssh/config can define lots of shortcuts, including aliases for machine names, per-host access controls, and more.

Here is a typical block found in $HOME/.ssh/config to customize SSH for a specific host:

Host worker
HostName worker.example.com
IdentityFile ~/.ssh/id_rsa_worker
User joeuser

Each block in ~/.ssh/config configures one or more hosts. Separate individual blocks with a blank line. This block uses four options: Host, HostName, IdentityFile, and User. Host establishes a nickname for the machine specified by HostName. A nickname allows you to type ssh worker instead of ssh worker.example.com. Moreover, the IdentityFile and User options dictate how to log in to worker. The former option points to a private key to use with the host; the latter option provides the login ID. Thus, this block is the equivalent of the command:

ssh [email protected] -i ~/.ssh/id_rsa_worker

A powerful but little-known option is ControlMaster. If set, multiple SSH sessions to the same host share a single connection. Once the first connection is established, credentials are not required for subsequent connections, eliminating the drudgery of typing a password each and every time you connect to the same machine. ControlMaster is so handy, you'll likely want to enable it for every machine. That's accomplished easily enough with the host wildcard, *:
Host *
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p
>
As you might guess, a block tagged Host * applies to every host, even those not explicitly named in the config file. ControlMaster auto tries to reuse an existing connection but will create a new connection if a shared connection cannot be found. ControlPath points to a file to persist a control socket for sharing. %r is replaced by the remote login user name, %h is replaced by the target host name, and %p stands in for the port used for the connection. (You can also use %l; it is replaced with the local host name.) The specification above creates control sockets with file names akin to:
[email protected]:22

Each control socket is removed when all connections to the remote host are severed. If you want to know which machines you are connected to at any time, simply type ls ~/.ssh and look at the host name portion of the control socket (%h).

The SSH configuration file is so expansive, it too has its own man page. Type man ssh_config to see all possible options. And here's a clever SSH trick: You can tunnel from a local system to a remote one via SSH. The command line to use looks something like this:

$ ssh example.com -L 5000:localhost:3306
>

This command says, "Connect via example.com and establish a tunnel between port 5000 on the local machine and port 3306 [the MySQL server port] on the machine named 'localhost.'" Because localhost is interpreted on example.com as the tunnel is established, localhost is example.com. With the outbound tunnel—formally called a local forward—established, local clients can connect to port 5000 and talk to the MySQL server running on example.com.

This is the general form of tunneling:

$ ssh proxyhost localport:targethost:targetport

Here, proxyhost is a machine you can access via SSH and one that has a network connection (not via SSH) to targethost. localport is a non-privileged port (any unused port above 1024) on your local system, and targetport is the port of the service you want to connect to.

The previous command tunnels out from your machine to the outside world. You can also use SSH to tunnel in, or connect to your local system from the outside world. This is the general form of an inbound tunnel:

$ ssh user@proxyhost -R proxyport:targethosttargetport

When establishing an inbound tunnel—formally called a remote forward—the roles of proxyhost and targethost are reversed: The target is your local machine, and the proxy is the remote machine. user is your login on the proxy. This command provides a concrete example:

$ ssh [email protected] -R 8080:localhost:80

The command reads, "Connect to example.com as joe, and connect the remote port 8080 to local port 80." This command gives users on example.com a tunnel to Joe's machine. A remote user can connect to 8080 to hit the Web server on Joe's machine.

In addition to -L and -R for local and remote forwards, respectively, SSH offers -D to create an HTTP proxy on a remote machine. See the SSH man page for the proper syntax.


Top Visited
Switchboard
Latest
Past week
Past month

NEWS CONTENTS

Old News ;-)

[Sep 09, 2020] SSH password automation in Linux with sshpass by Evans Amoany

Aug 31, 2020 | www.redhat.com

The sshpass utility helps administrators more easily manage SSH connections in scripts. More Linux resources

Connecting and transferring files to remote systems is something system administrators do all the time. One essential tool used by many system administrators on Linux platforms is SSH. SSH supports two forms of authentication:

  1. Password authentication
  2. Public-key Authentication

Public-key authentication is considered the most secure form of these two methods, though password authentication is the most popular and easiest. However, with password authentication, the user is always asked to enter the password. This repetition is tedious. Furthermore, SSH also requires manual intervention when used in a shell script. If automation is needed when using SSH password authentication, then a simple tool called sshpass is indispensable.

What is sshpass?

The sshpass utility is designed to run SSH using the keyboard-interactive password authentication mode, but in a non-interactive way.

SSH uses direct TTY access to ensure that the password is indeed issued by an interactive keyboard user. sshpass runs SSH in a dedicated TTY, fooling SSH into thinking it is getting the password from an interactive user.

Install sshpass

You can install sshpass with this simple command:

# yum install sshpass
Use sshpass

Specify the command you want to run after the sshpass options. Typically, the command is ssh with arguments, but it can also be any other command. The SSH password prompt is, however, currently hardcoded into sshpass .

The synopsis for the sshpass command is described below:

sshpass [-ffilename|-dnum|-ppassword|-e] [options] command arguments

Where:

-ppassword
    The password is given on the command line. 
-ffilename
    The password is the first line of the file filename. 
-dnumber
    number is a file descriptor inherited by sshpass from the runner. The password is read from the open file descriptor. 
-e
    The password is taken from the environment variable "SSHPASS".
Examples

To better understand the value and use of sshpass , let's look at some examples with several different utilities, including SSH, Rsync, Scp, and GPG.

Example 1: SSH

Use sshpass to log into a remote server by using SSH. Let's assume the password is !4u2tryhack . Below are several ways to use the sshpass options.

A. Use the -p (this is considered the least secure choice and shouldn't be used):

$ sshpass -p !4u2tryhack ssh [email protected]

The -p option looks like this when used in a shell script:

$ sshpass -p !4u2tryhack ssh -o StrictHostKeyChecking=no [email protected]

B. Use the -f option (the password should be the first line of the filename):

$ echo '!4u2tryhack' >pass_file
$ chmod 0400 pass_file
$ sshpass -f pass_file ssh [email protected]

Here is the -f option when used in shell script:

$ sshpass -f pass_file ssh -o StrictHostKeyChecking=no [email protected]

C. Use the -e option (the password should be the first line of the filename):

$ SSHPASS='!4u2tryhack' sshpass -e ssh [email protected]

The -e option when used in shell script looks like this:

$ SSHPASS='!4u2tryhack' sshpass -e ssh -o StrictHostKeyChecking=no [email protected]

Example 2: Rsync

Use sshpass with rsync :

$ SSHPASS='!4u2tryhack' rsync --rsh="sshpass -e ssh -l username" /custom/ host.example.com:/opt/custom/

The above uses the -e option, which passes the password to the environment variable SSHPASS

We can use the -f switch like this:

$ rsync --rsh="sshpass -f pass_file ssh -l username" /custom/ host.example.com:/opt/custom/

Example 3: Scp

Use sshpass with scp:

$ scp -r /var/www/html/example.com --rsh="sshpass -f pass_file ssh -l user" host.example.com:/var/www/html

Example 4: GPG

You can also use sshpass with a GPG-encrypted file. When the -f switch is used, the reference file is in plaintext. Let's see how we can encrypt a file with GPG and use it.

First, create a file as follows:

$ echo '!4u2tryhack' > .sshpasswd

Next, encrypt the file using the gpg command:

$ gpg -c .sshpasswd

Remove the file which contains the plaintext:

$ rm .sshpasswd

Finally, use it as follows:

$ gpg -d -q .sshpassword.gpg > pass_file; sshpass -f pass_file ssh [email protected]
Wrap up

sshpass is a simple tool that can be of great help to sysadmins. This doesn't, by any means, override the most secure form of SSH authentication, which is public-key authentication. However, sshpass can also be added to the sysadmin toolbox.

[Jun 10, 2020] Linux security: Protect your systems with fail2ban by Ken Hess

Notable quotes:
"... For us, fail2ban uses iptables to ban the IP address of the offending system for a "bantime" of 600 seconds (10 minutes). ..."
"... You can, of course, change any of these settings to meet your needs. Ten minutes seems to be long enough to cause a bot or script to "move on" to less secure hosts. However, ten minutes isn't so long as to alienate users who mistype their passwords more than three times. ..."
Jun 04, 2020 | russia-insider.com
Linux security is a constant struggle but you can use fail2ban to protect authenticated services.

Security, for system administrators, is an ongoing struggle because you must secure your systems enough to protect them from unwanted attacks but not so much that user productivity is hindered. It's a difficult balance to maintain. There are always complaints of "too much" security, but when a system is compromised, the complaints range from, "There wasn't enough security" to "Why didn't you use better security controls?" The struggle is real. There are controls you can put into place that are both effective against intruder attack and yet stealthy enough to allow users to operate in a generally unfettered manner. Fail2ban is the answer to protect services from brute force and other automated attacks.

Note: Fail2ban can only be used to protect services that require username/password authentication. For example, you can't protect ping with fail2ban.

In this article, I demonstrate how to protect the SSH daemon (SSHD) from a brute force attack. You can set up filters, as fail2ban calls them, to protect almost every listening service on your system.

Installation and initial setup

Fortunately, there is a ready-to-install package for fail2ban that includes all dependencies, if any, for your system.

$ sudo dnf -y install fail2ban

Enable and start fail2ban .

$ sudo systemctl enable fail2ban
$ sudo systemctl start fail2ban

Unless you have some sort of syntax problem in your fail2ban configuration, you won't see any standard output messages.

Now to configure a few basic things in fail2ban to protect the system without it interfering with itself. Copy the /etc/fail2ban/jail.conf file to /etc/fail2ban/jail.local .

The jail.local file is the configuration file of interest for us.

$ sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Open /etc/fail2van/jail.local in your favorite editor and make the following changes or check to be sure these few parameters are set. Look for the setting ignoreip and add all IP addresses to this line that must have access without the possibility of a lockout. By default, you should add the loopback address, and all IP addresses local to the protected system.

ignoreip = 127.0.0.1/8 192.168.1.10 192.168.1.20

You can also add entire networks of IP addresses, but this takes away much of the protection that you wish to engage fail2ban for. Keep it simple and local for now. Save the jail.local file and restart the fail2ban service.

$ sudo systemctl restart fail2ban

You must restart fail2ban every time you make a configuration change.

Setting up a filtered service

A fresh install of fail2ban doesn't really do much for you. You have to set up so-called filters for any service that you want to protect. Almost every Linux system must be accessible by SSH. There are some circumstances where you would most certainly stop and disable SSHD to better secure your system, but I assume that every Linux system allows SSH connections.

Passwords, as everyone knows, are not a good security solution. However, it is often the standard by which we live. So, if user or administrative access is limited to SSH, then you should take steps to protect it. Using fail2ban to "watch" SSHD for failed access attempts with subsequent banning is a good start.

Note: Before implementing any security control that might hinder a user's access to a system, inform the users that this new control might lock them out of a system for ten minutes (or however long you decide) if their failed login attempts exceed your threshold setting.

To set up filtered services, you must create a corresponding "jail" file under the /etc/fail2ban/jail.d directory. For SSHD, create a new file named sshd.local and enter service filtering instructions into it.

[sshd]
enabled = true
port = ssh
action = iptables-multiport
logpath = /var/log/secure
maxretry = 3
bantime = 600

Create the [sshd] heading and enter the setting you see above as a starting place. Most of the settings are self-explanatory. For the two that might not be intuitively obvious, the "action" setting describes the action you want fail2ban to take in the case of a violation. For us, fail2ban uses iptables to ban the IP address of the offending system for a "bantime" of 600 seconds (10 minutes).

You can, of course, change any of these settings to meet your needs. Ten minutes seems to be long enough to cause a bot or script to "move on" to less secure hosts. However, ten minutes isn't so long as to alienate users who mistype their passwords more than three times.

Once you're satisfied with the settings, restart the fail2ban service.

What banning looks like

On the protected system (192.168.1.83), tail the /var/log/fail2ban.log to see any current ban actions.

2020-05-15 09:12:06,722 fail2ban.filter         [25417]: INFO    [sshd] Found 192.168.1.69 - 2020-05-15 09:12:06
2020-05-15 09:12:07,018 fail2ban.filter         [25417]: INFO    [sshd] Found 192.168.1.69 - 2020-05-15 09:12:07
2020-05-15 09:12:07,286 fail2ban.actions        [25417]: NOTICE  [sshd] Ban 192.168.1.69
2020-05-15 09:22:08,931 fail2ban.actions        [25417]: NOTICE  [sshd] Unban 192.168.1.69

You can see that the IP address 192.168.1.69 was banned at 09:12 and unbanned ten minutes later at 09:22.

On the remote system, 192.168.1.69, a ban action looks like the following:

$ ssh 192.168.1.83

[email protected]'s password: 

Permission denied, please try again.

[email protected]'s password: 

Permission denied, please try again.

[email protected]'s password: 

Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

$ ssh 192.168.1.83

ssh: connect to host 192.168.1.83 port 22: Connection refused

You can see that I entered my password incorrectly three times before being banned. The banned user, unless explicitly informed, won't know why they can no longer reach the target system. The fail2ban filter performs a silent ban action. It gives no explanation to the remote user, nor is the user notified when the ban is lifted.

Unbanning a system

It will inevitably happen that a system gets banned that needs to be quickly unbanned. In other words, you can't or don't want to wait for the ban period to expire. The following command will immediately unban a system.

$ sudo fail2ban-client set sshd unbanip 192.168.1.69

You don't need to restart the fail2ban daemon after issuing this command.

Wrap up

That's basically how fail2ban works. You set up a filter, and when conditions are met, then the remote system is banned. You can ban for longer periods of time, and you can set up multiple filters to protect your system. Remember that fail2ban is a single solution and does not secure your system from other vulnerabilities. A layered, multi-faceted approach to security is the strategy you want to pursue. No single solution provides enough security.

You can find examples of other filters and some advanced fail2ban implementations described at fail2ban.org .

[ Want to learn more about security? Check out the IT security and compliance checklist . ]

[Jan 15, 2020] HowTo Disable SSH Host Key Checking - ShellHacks

Jan 15, 2020 | www.shellhacks.com

HowTo: Disable SSH Host Key Checking Posted on Tuesday December 27th, 2016 Sunday March 19th, 2017 by admin

By default, the SSH client verifies the identity of the host to which it connects.

If the remote host key is unknown to your SSH client, you would be asked to accept it by typing "yes" or "no".

This could cause a trouble when running from script that automatically connects to a remote host over SSH protocol.

Cool Tip: Slow SSH login? Password prompt takes too long? You can easily remove the delay! Read more →

This article explains how to bypass this verification step by disabling host key checking .

The Authenticity Of Host Can't Be Established

When you log into a remote host that you have never connected before, the remote host key is most likely unknown to your SSH client, and you would be asked to confirm its fingerprint :

The authenticity of host ***** can't be established.
RSA key fingerprint is *****.
Are you sure you want to continue connecting (yes/no)?
If your answer is 'yes', the SSH client continues login, and stores the host key locally in the file ~/.ssh/known_hosts .

If your answer is 'no', the connection will be terminated.

If you would like to bypass this verification step , you can set the " StrictHostKeyChecking " option to " no " on the command line:

$ ssh -o "StrictHostKeyChecking=no" user@host

This option disables the prompt and automatically adds the host key to the ~/.ssh/known_hosts file.

Remote Host Identification Has Changed

However, even with " StrictHostKeyChecking=no ", you may be refused to connect with the following warning message:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
*****
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending key in /home/user/.ssh/known_hosts:1
RSA host key for ***** has changed and you have requested strict checking.
Host key verification failed.

If you are sure that it is harmless and the remote host key has been changed in a legitimate way, you can skip the host key checking by sending the key to a null known_hosts file:

$ ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no" user@host

You can also set these options permanently in ~/.ssh/config (for the current user) or in /etc/ssh/ssh_config (for all users).

Cool Tip: Log in to a remote Linux server without entering password! Set up password-less SSH login! Read more →

Also the option can be set either for the all hosts or for a given set of IP addresses.

Disable SSH host key checking for all hosts
Host *
   StrictHostKeyChecking no
   UserKnownHostsFile=/dev/null
Disable SSH host key checking For 192.168.0.0/24
Host 192.168.0.*
   StrictHostKeyChecking no
   UserKnownHostsFile=/dev/null
Comments (7) ssh 7 Replies to "HowTo: Disable SSH Host Key Checking"
  1. Justin Tilson says: Reply Tuesday September 12th, 2017 at 08:00 PM

    Exactly what I needed. Thx for posting.

  2. Arvi says: Reply Friday April 27th, 2018 at 08:27 AM

    Well explained.

  3. Pete says: Reply Friday May 18th, 2018 at 10:40 AM

    I was looking for a way to disable host checking from Python's pexpect. -o "UserKnownHostsFile=/dev/null" did the job. Thank you.

  4. tsuj says: Reply Saturday May 19th, 2018 at 08:47 PM

    Feels like it's a little irresponsible to tell people to do this without warning them of the dangers of doing so

  5. Nick says: Reply Monday April 15th, 2019 at 04:24 PM

    Thanks!

  6. pAbLo says: Reply Thursday May 16th, 2019 at 10:29 AM

    Thanks for this post, is exactly what I need.

  7. Michael Q says: Reply Monday June 17th, 2019 at 06:00 PM

    Note: It is one thing to do this to allow a local IP address such as above 192.168.x.x but it risky to do with a remote host etc.. I would probably just edit ~/.ssh/known_hosts or wipe the file and start over if I am seeing the messages above.

[Aug 20, 2019] How to exclude file when using scp command recursively

Aug 12, 2019 | www.cyberciti.biz

I need to copy all the *.c files from local laptop named hostA to hostB including all directories. I am using the following scp command but do not know how to exclude specific files (such as *.out): $ scp -r ~/projects/ user@hostB:/home/delta/projects/ How do I tell scp command to exclude particular file or directory at the Linux/Unix command line? One can use scp command to securely copy files between hosts on a network. It uses ssh for data transfer and authentication purpose. Typical scp command syntax is as follows: scp file1 user@host:/path/to/dest/ scp -r /path/to/source/ user@host:/path/to/dest/ scp [options] /dir/to/source/ user@host:/dir/to/dest/

Scp exclude files

I don't think so you can filter or exclude files when using scp command. However, there is a great workaround to exclude files and copy it securely using ssh. This page explains how to filter or excludes files when using scp to copy a directory recursively.

How to use rsync command to exclude files

The syntax is:

rsync -av -e ssh --exclude='*.out' /path/to/source/ user@hostB:/path/to/dest/

Where,

  1. -a : Recurse into directories i.e. copy all files and subdirectories. Also, turn on archive mode and all other options (-rlptgoD)
  2. -v : Verbose output
  3. -e ssh : Use ssh for remote shell so everything gets encrypted
  4. --exclude='*.out' : exclude files matching PATTERN e.g. *.out or *.c and so on.
Example of rsync command

In this example copy all file recursively from ~/virt/ directory but exclude all *.new files:
$ rsync -av -e ssh --exclude='*.new' ~/virt/ root@centos7:/tmp

[Aug 12, 2019] How to Set up SSH Tunneling (Port Forwarding)

Aug 08, 2019 | linuxize.com
SSH tunneling or SSH port forwarding is a method of creating an encrypted SSH connection between a client and a server machine through which services ports can be relayed.

SSH forwarding is useful for transporting network data of services that uses an unencrypted protocol, such as VNC or FTP , accessing geo-restricted content or bypassing intermediate firewalls. Basically, you can forward any TCP port and tunnel the traffic over a secure SSH connection.

There are three types of SSH port forwarding:

In this article, we will talk about how to set up local, remote, and dynamic encrypted SSH tunnels.

Local Port Forwarding

Local port forwarding allows you to forward a port on the local (ssh client) machine to a port on the remote (ssh server) machine, which is then forwarded to a port on the destination machine.

In this type of forwarding the SSH client listens on a given port and tunnels any connection to that port to the specified port on the remote SSH server, which then connects to a port on the destination machine. The destination machine can be the remote SSH server or any other machine.

Local port forwarding is mostly used to connect to a remote service on an internal network such as a database or VNC server.

In Linux, macOS and other Unix systems to create a local port forwarding pass the -L option to the ssh client:

ssh -L [LOCAL_IP:]LOCAL_PORT:DESTINATION:DESTINATION_PORT [USER@]SSH_SERVER

The options used are as follows:

You can use any port number greater than 1024 as a LOCAL_PORT . Ports numbers less than 1024 are privileged ports and can be used only by root. If your SSH server is listening on a port other than 22 (the default) use the -p [PORT_NUMBER] option.

The destination hostname must be resolvable from the SSH server.

Let's say you have a MySQL database server running on machine db001.host on an internal (private) network, on port 3306 which is accessible from the machine pub001.host and you want to connect using your local machine mysql client to the database server. To do so you can forward the connection like so:

ssh -L 3336:db001.host:3306 [email protected]

Once you run the command, you'll be prompted to enter the remote SSH user password. After entering it, you will be logged in to the remote server and the SSH tunnel will be established. It is a good idea to set up an SSH key-based authentication and connect to the server without entering a password.

Now if you point your local machine database client to 127.0.0.1:3336 , the connection will be forwarded to the db001.host:3306 MySQL server through the pub001.host machine which will act as an intermediate server.

You can forward multiple ports to multiple destinations in a single ssh command. For example, you have another MySQL database server running on machine db002.host and you want to connect to both servers from your local client you would run:

ssh -L 3336:db001.host:3306 3337:db002.host:3306 [email protected]

To connect to the second server you would use 127.0.0.1:3337 .

When the destination host is the same as the SSH server instead of specifying the destination host IP or hostname you can use localhost .

Say you need to connect to a remote machine through VNC which runs on the same server and it is not accessible from the outside. The command you would use is:

ssh -L 5901:127.0.0.1:5901 -N -f [email protected]

The -f option tells the ssh command to run in the background and -N not to execute a remote command. We are using localhost because the VNC and the SSH server are running on the same host.

If you are having trouble setting up tunneling check your remote SSH server configuration and make sure AllowTcpForwarding is not set to no . By default, forwarding is allowed.

Remote Port Forwarding

Remote port forwarding is the opposite of local port forwarding. It allows you to forward a port on the remote (ssh server) machine to a port on the local (ssh client) machine, which is then forwarded to a port on the destination machine.

In this type of forwarding the SSH server listens on a given port and tunnels any connection to that port to the specified port on the local SSH client, which then connects to a port on the destination machine. The destination machine can be the local or any other machine.

In Linux, macOS and other Unix systems to create a remote port forwarding pass the -R option to the ssh client:

ssh -R [REMOTE:]REMOTE_PORT:DESTINATION:DESTINATION_PORT [USER@]SSH_SERVER

The options used are as follows:

Local port forwarding is mostly used to give access to an internal service to someone from the outside.

Let's say you are developing a web application on your local machine and you want to show a preview to your fellow developer. You do not have a public IP so the other developer can't access the application via the Internet.

If you have access to a remote SSH server you can set up a remote port forwarding as follows:

ssh -L 8080:127.0.0.1:3000 -N -f [email protected]

The command above will make ssh server to listen on port 8080 and tunnel all traffic from this port to your local machine on port 3000 .

Now your fellow developer can type the_ssh_server_ip:8080 in his/her browser and preview your awesome application.

If you are having trouble setting up remote port forwarding make sure GatewayPorts is set to yes in the remote SSH server configuration.

Dynamic Port Forwarding

Dynamic port forwarding allows you to create a socket on the local (ssh client) machine which acts as a SOCKS proxy server. When a client connects to this port the connection is forwarded to the remote (ssh server) machine, which is then forwarded to a dynamic port on the destination machine.

This way, all the applications using the SOCKS proxy will connect to the SSH server and the server will forward all the traffic to its actual destination.

In Linux, macOS and other Unix systems to create a dynamic port forwarding (SOCKS) pass the -D option to the ssh client:

ssh -R [LOCAL_IP:]LOCAL_PORT [USER@]SSH_SERVER

The options used are as follows:

A typical example of a dynamic port forwarding is to tunnel the web browser traffic through an SSH server.

The following command will create a SOCKS tunnel on port 9090 :

ssh -D 9090 -N -f [email protected]

Once the tunneling is established you can configure your application to use it. This article explains how to configure Firefox and Google Chrome browser to use the SOCKS proxy.

The port forwarding has to be separately configured for each application that you want to tunnel the traffic thought it.

Set up SSH Tunneling in Windows

Windows users can create SSH tunnels using the PuTTY SSH client. You can download PuTTY here .

  1. Launch Putty and enter the SSH server IP Address in the Host name (or IP address) field.
    <img alt="" src=/post/how-to-setup-ssh-tunneling/launch-putty.jpg>
  2. Under the Connection menu, expand SSH and select Tunnels . Check the Local radio button to setup local, Remote for remote, and Dynamic for dynamic port forwarding.
    • If setting up local forwarding enter the local forwarding port in the Source Port field and in Destination enter the destination host and IP, for example, localhost:5901 .
    • For remote port forwarding enter the remote SSH server forwarding port in the Source Port field and in Destination enter the destination host and IP, for example, localhost:3000 .
    • If setting up dynamic forwarding enter only the local SOCKS port in the Source Port field.
    <img alt="" src=/post/how-to-setup-ssh-tunneling/configure-tunnel-putty.jpg>
  3. Click on the Add button as shown in the image below.
    <img alt="" src=/post/how-to-setup-ssh-tunneling/add-tunnel-putty.jpg>
  4. Go back to the Session page to save the settings so that you do not need to enter them each time. Enter the session name in the Saved Session field and click on the Save button.
    <img alt="" src=/post/how-to-setup-ssh-tunneling/save-session-putty.jpg>
  5. Select the saved session and log in to the remote server by clicking on the Open button.
    <img alt="" src=/post/how-to-setup-ssh-tunneling/open-session-putty.jpg>

    A new window asking for your username and password will show up. Once you enter your username and password you will be logged in to your server and the SSH tunnel will be started.

    Setting up public key authentication will allow you to connect to your server without entering a password.

Conclusion

We have shown you how to set up SSH tunnels and forward the traffic through a secure SSH connection. For ease of use, you can define the SSH tunnel in your SSH config file or create a Bash alias that will set up the SSH tunnel.

If you hit a problem or have feedback, leave a comment below.

[Aug 03, 2019] How to Access a Remote Server Using a Jump Host

Notable quotes:
"... A classic scenario is connecting from your desktop or laptop from inside your company's internal network, which is highly secured with firewalls to a DMZ. In order to easily manage a server in a DMZ, you may access it via a jump host . ..."
Aug 03, 2019 | www.tecmint.com

How to Access a Remote Server Using a Jump Host

by Aaron Kili | Published: November 29, 2018 | Last Updated: December 4, 2018

Download Your Free eBooks NOW - 10 Free Linux eBooks for Administrators | 4 Free Shell Scripting eBooks
A jump host (also known as a jump server ) is an intermediary host or an SSH gateway to a remote network, through which a connection can be made to another host in a dissimilar security zone, for example a demilitarized zone ( DMZ ). It bridges two dissimilar security zones and offers controlled access between them.

A jump host should be highly secured and monitored especially when it spans a private network and a DMZ with servers providing services to users on the internet.

A classic scenario is connecting from your desktop or laptop from inside your company's internal network, which is highly secured with firewalls to a DMZ. In order to easily manage a server in a DMZ, you may access it via a jump host .

In this article, we will demonstrate how to access a remote Linux server via a jump host and also we will configure necessary settings in your per-user SSH client configurations.

Consider the following scenario.

SSH Jump Host SSH Jump Host

In above scenario, you want to connect to HOST 2 , but you have to go through HOST 1 , because of firewalling, routing and access privileges. There is a number of valid reasons why jumphosts are needed..

Dynamic Jumphost List

The simplest way to connect to a target server via a jump host is using the -J flag from the command line. This tells ssh to make a connection to the jump host and then establish a TCP forwarding to the target server, from there (make sure you've Passwordless SSH Login between machines).

$ ssh -J host1 host2

If usernames or ports on machines differ, specify them on the terminal as shown.

$ ssh -J username@host1:port username@host2:port   
Multiple Jumphosts List

The same syntax can be used to make jumps over multiple servers.

$ ssh -J username@host1:port,username@host2:port username@host3:port
Static Jumphost List

Static jumphost list means, that you know the jumphost or jumphosts that you need to connect a machine. Therefore you need to add the following static jumphost 'routing' in ~/.ssh/config file and specify the host aliases as shown.

### First jumphost. Directly reachable
Host vps1
  HostName vps1.example.org

### Host to jump to via jumphost1.example.org
Host contabo
  HostName contabo.example.org
  ProxyJump contabo

Now try to connect to a target server via a jump host as shown.

$ ssh -J vps1 contabo
Login to Target Host via Jumphost

Login to Target Host via Jumphost

The second method is to use the ProxyCommand option to add the jumphost configuration in your ~.ssh/config or $HOME/.ssh/config file as shown.

In this example, the target host is contabo and the jumphost is vps1 .

Host vps1
        HostName vps1.example.org
        IdentityFile ~/.ssh/vps1.pem
        User ec2-user

Host contabo
        HostName contabo.example.org    
        IdentityFile ~/.ssh/contabovps
        Port 22
        User admin      
        Proxy Command ssh -q -W %h:%p vps1

Where the command Proxy Command ssh -q -W %h:%p vps1 , means run ssh in quiet mode (using -q ) and in stdio forwarding (using -W ) mode, redirect the connection through an intermediate host ( vps1 ).

Then try to access your target host as shown.

$ ssh contabo

The above command will first open an ssh connection to vps1 in the background effected by the ProxyCommand , and there after, start the ssh session to the target server contabo .

For more information, see the ssh man page or refer to: OpenSSH/Cookbxook/Proxies and Jump Hosts .

That's all for now! In this article, we have demonstrated how to access a remote server via a jump host. Use the feedback form below to ask any questions or share your thoughts with us.

[Jun 26, 2019] How To Enable Or Disable SSH Access For A Particular User Or Group In Linux by Magesh Maruthamuthu

May 23, 2019 | www.2daygeek.com
How To Allow A User To Access SSH In Linux?

... ... ...

# echo "AllowUsers user3" >> /etc/ssh/sshd_config

You can double check this by running the following command.

# cat /etc/ssh/sshd_config | grep -i allowusers
AllowUsers user3

That's it. Just bounce the ssh service...

We can deny/disable the ssh access for a particular user or list of the users using the following method.

... ... ...

# echo "DenyUsers user1" >> /etc/ssh/sshd_config

You can double check this by running the following command.

# cat /etc/ssh/sshd_config | grep -i denyusers
DenyUsers user1

That's it. Just bounce the ssh service...

How To Allow Groups To Access SSH In Linux?
... ... ...
# echo "AllowGroups 2g-admin" >> /etc/ssh/sshd_config
How To Deny Group To Access SSH In Linux?

... ... ...

# echo "DenyGroups 2g-admin" >> /etc/ssh/sshd_config
... ... ...

[Jun 22, 2019] Using SSH X session forwarding by Seth Kenlon

Jun 22, 2019 | www.redhat.com

Normally, you would forward a remote computer's X11 graphical display to your local computer with the -X option, but the OpenSSH application places additional security limits on such connections as a precaution. As long as you're starting a shell on a trusted machine, you can use the -Y option to opt out of the excess security:

$ ssh -Y 93.184.216.34

Now you can launch an instance of any one of the remote computer's applications, but have it appear on your screen. For instance, try launching the Nautilus file manager:

remote$ nautilus &

The result is a Nautilus file manager window on your screen, displaying files on the remote computer. Your user can't see the window you're seeing, but at least you have graphical access to what they are using. Through this, you can debug, modify settings, or perform actions that are otherwise unavailable through a normal text-based SSH session.

Keep in mind, though, that a forwarded X11 session does not bring the whole remote session to you. You don't have access to the target computer's audio playback, for example, though you can make the remote system play audio through its speakers. You also can't access any custom application themes on the target computer, and so on (at least, not without some skillful redirection of environment variables).

However, if you only need to view files or use an application that you don't have access to locally, forwarding X can be invaluable.

[Jun 18, 2019] Using SSH Port Forwarding as a Security Tool in Linux by Helder

Images deleted...
Jun 05, 2019 | linuxhandbook.com
Learn to configure SSH port forwarding on your Linux system. Remote forwarding is also explained.

Regular Linux users know about SSH , as it is basically what allows them to connect to any server remotely to be able to manage it via command line. However, this is not the only thing SSH can provide you for, it can also act as a great security tool to encrypt your connections even when there is no encryption by default.

For example, let's say you have a remote Linux desktop that you wish to connect via SMTP or email but the firewall on that network currently blocks the SMTP port (25) which is very common. Through a SSH tunnel you would simply connect to that particular SMTP service using another port by simply using SSH without having to reconfigure SMTP configuration to a different port and on top of that, gaining the encryption capabilities of SSH.

Configure OpenSSH for port forwarding

In order for OpenSSH Server to allow forwarding, you have to make sure it is active in the configuration. To do this, you must edit your /etc/ssh/ssh_config file.

For Ubuntu 18.04 this file has changed a little bit so, you must un-comment one line in it:

By default this line comes commented, you need to un-comment to allow forwarding

Once un-commented, you need to restart the SSH service to apply the changes:

restart SSH Daemon to apply changes recently done in its configuration

Now that we have our target configured to allow SSH forwarding, we simply need to re-route things through a port we know is not blocked. Let's use a very uncommonly blocked port like 3300:

So now we have done this, all traffic that comes to port 25 will automatically sent over to port 3300. From another computer or client we simply will connect to this server to its port 3300 and we will then be able to interact with it as it was SMTP server without any firewall restrictions to its 25 port, basically we simply re-routed its port 25 traffic to another (non blocked) one to be able to access it.

READ 3 Ways to List Users in Linux The other way around: Remote Forwarding

We talked about forwarding a local port to another port, but let's say you want to do it exactly opposite: you want to route a remote port or something you currently can access from the server to a local port.

To explain it easily, let's use an example similar to the previous one: from this server you access a particular server through port 25 (SMTP) and you want to "share" that through a local port 3302 so anyone else can connect to your server to the 3302 port and see whatever that server sees on port 25:

Summing up and some tips on SSH port forwarding

As you can see, this SSH forwarding acts like a very small VPN, because it routes things to given ports. Whenever you execute these commands, they will open SSH shells, as it understands you need to interact to the server via SSH. If you don't need this, it will be enough to simply add the "-N" option in them, so they will simply not open any shell.

Liked the article? Please share it and help us grow :)

About Helder Systems Engineer, technology evangelist, Ubuntu user, Linux enthusiast, father and husband.

[Dec 05, 2018] How to make putty ssh connection never to timeout when user is idle?

Dec 05, 2018 | askubuntu.com

David MZ ,Feb 13, 2013 at 18:07

I have a Ubuntu 12.04 server I bought, if I connect with putty using ssh and a sudoer user putty gets disconnected by the server after some time if I am idle How do I configure Ubuntu to keep this connection alive indefinitely?

das Keks ,Feb 13, 2013 at 18:24

If you go to your putty settings -> Connection and set the value of "Seconds between keepalives" to 30 seconds this should solve your problem.

kokbira ,Feb 19 at 11:42

?????? "0 to turn off" or 30 to turn off????????? I think he must put 0 instead of 30! – kokbira Feb 19 at 11:42

das Keks ,Feb 19 at 11:46

No, it's the time between keepalives. If you set it to 0, no keepalives are sent but you want putty to send keepalives to keep the connection alive. – das Keks Feb 19 at 11:46

Aaron ,Mar 19 at 20:39

I did this but still it drops.. – Aaron Mar 19 at 20:39

0xC0000022L ,Feb 13, 2013 at 19:29

In addition to the answer from "das Keks" there is at least one other aspect that can affect this behavior. Bash (usually the default shell on Ubuntu) has a value TMOUT which governs (decimal value in seconds) after which time an idle shell session will time out and the user will be logged out, leading to a disconnect in an SSH session.

In addition I would strongly recommend that you do something else entirely. Set up byobu (or even just tmux alone as it's superior to GNU screen ) and always log in and attach to a preexisting session (that's GNU screen and tmux terminology). This way even if you get forcibly disconnected - let's face it, a power outage or network interruption can always happen - you can always resume your work where you left. And that works across different machines. So you can connect to the same session from another machine (e.g. from home). The possibilities are manifold and it's a true productivity booster. And not to forget, terminal multiplexers overcome one of the big disadvantages of PuTTY: no tabbed interface. Now you get "tabs" in the form of windows and panes inside GNU screen and tmux .

apt-get install tmux
apt-get install byobu

Byobu is a nice frontend to both terminal multiplexers, but tmux is so comfortable that in my opinion it obsoletes byobu to a large extent. So my recommendation would be tmux .

Also search for "dotfiles", in particular tmux.conf and .tmux.conf on the web for many good customizations to get you started.

Rajesh ,Mar 19, 2015 at 15:10

Go to PuTTy options --> Connection
  1. Change the default value for "Seconds between keepalives(0 to turn off)" : from 0 to 600 (10 minutes) --This varies...reduce if 10 minutes doesn't help
  2. Check the "Enable TCP_keepalives (SO_KEEPALIVE option)" check box.
  3. Finally save setting for session

,

I keep my PuTTY sessions alive by monitoring the cron logs
tail -f /var/log/cron

I want the PuTTY session alive because I'm proxying through socks.

[Nov 03, 2018] setting up your own VPN server

Nov 03, 2018 | linuxize.com

The simpler alternative is to route your local network traffic with an encrypted SOCKS proxy tunnel. This way, all your applications using the proxy will connect to the SSH server and the server will forward all the traffic to its actual destination. Your ISP (internet service provider) and other third parties will not be able to inspect your traffic and block your access to websites.

This tutorial will walk you through the process of creating an encrypted SSH tunnel and configuring Firefox and Google Chrome web browsers to use SOCKS proxy.

Prerequisites Set up the SSH tunnel

We'll create an SSH tunnel that will securely forwards traffic from your local machine on port 9090 to the SSH server on port 22 . You can use any port number greater than 1024 .

Linux and macOS

If you run Linux, macOS or any other Unix-based operating system on your local machine, you can easily start an SSH tunnel with the following command:

ssh -N -D 9090 [USER]@[SERVER_IP]

The options used are as follows:

Once you run the command, you'll be prompted to enter your user password. After entering it, you will be logged in to your server and the SSH tunnel will be established.

You can setup a SSH key-based authentication and connect to your server without entering a password.

Windows

Windows users can create an SSH tunnel using the PuTTY SSH client. You can download PuTTY here .

  1. Launch Putty and enter your server IP Address in the Host name (or IP address) field.
  2. Under the Connection menu, expand SSH and select Tunnels . Enter the port 9090 in the Source Port field , and check the Dynamic radio button.
  3. Click on the Add button as shown in the image bellow.
  4. Go back to the Session page to save the settings so that you do not need to enter them each time. Enter the session name in the Saved Session field and click on the Save button.
  5. Select the saved session and login to the remote server by clicking on the Open button.

    A new window asking for your username and password will show up. Once you enter your username and password you will be logged in to your server and the SSH tunnel will be started.

Configuring Your Browser to Use Proxy

Now that you have open the SSH SOCKS tunnel the last step is to configure your preferred browser to use it.

Firefox

The steps bellow are the same for Windows, macOS, and Linux.

  1. In the upper right hand corner, click on the hamburger icon to open Firefox's menu:
  2. Click on the ⚙ Preferences link.
  3. Scroll down to the Network Settings section and click on the Settings... button.
  4. A new window will open.
    • Select the Manual proxy configuration radio button.
    • Enter 127.0.0.1 in the SOCKS Host field and 9090 in the Port field.
    • Check the Proxy DNS when using SOCKS v5 checkbox.
    • Click on the OK button to save the settings.

At this point your Firefox is configured and you can browse the Internet thought your SSH tunnel. To verify it you can open google.com , type "what is my ip" and you should see your server IP address.

To revert back to the default settings go to Network Settings , select the Use system proxy settings radio button and save the settings.

There are also several plugins that can help you to configure Firefox's proxy settings such as FoxyProxy .

Google Chrome

Google Chrome uses the default system proxy settings. Instead of changing your operating system proxy settings you can either use an addon such as SwitchyOmega or start Chrome web browser from the command line.

To launch Chrome using a new profile and your SSH tunnel use the following command:

Linux :

/usr/bin/google-chrome \
    --user-data-dir="$HOME/proxy-profile" \
    --proxy-server="socks5://localhost:9090"

macOS :

"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" \
    --user-data-dir="$HOME/proxy-profile" \
    --proxy-server="socks5://localhost:9090"

Windows :

"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" ^
    --user-data-dir="%USERPROFILE%\proxy-profile" ^
    --proxy-server="socks5://localhost:9090"

The profile will be created automatically if it does not exist. This way you can run multiple instances of Chrome at the same time.

To confirm the SSH tunnel is working properly, open google.com , and type "what is my ip". The IP shown should be the IP address of your server.

Conclusion

You have learned how to set up an SSH SOCKS 5 tunnel and configure your browser to access the Internet privately and anonymously.

If you hit a problem or have a feedback, leave a comment below.

[Aug 07, 2018] Linux Basics How To Create and Install SSH Keys on the Shell

Notable quotes:
"... ssh-keygen -b 4096 -t rsa ..."
"... /root/.ssh/id_rsa ..."
Aug 07, 2018 | www.howtoforge.com
ssh-keygen -o -b 4096 -t rsa

The above command kicks off the SSH Key installation process for users. The -o option instructs ssh-keygen to store the private key in the new OpenSSH format instead of the old (and more compatible PEM format). It is highly recommended to use the -o option as the new OpenSSH format has an increased resistance to brute-force password cracking. In case the -o option does not work on your server (it has been introduced in 2014) or you need a private key in the old PEM format, then use the command ' ssh-keygen -b 4096 -t rsa '.

The -b option of the ssh-keygen command is used to set the key length to 4096 bit instead of the default 1024 bit for security reasons.

Upon entering the primary Gen Key command, users need to go through the following drill by answering the following prompts:

Enter the file where you wish to save the key (/home/demo/.ssh/id_rsa)

Users need to press ENTER in order to save the file to the user home

The next prompt would read as follows:

Enter passphrase

If, as an administrator, you wish to assign the passphrase, you may do so when prompted (as per the question above), though this is optional, and you may leave the field vacant in case you do not wish to assign a passphrase.

However, it is pertinent to note there that keying in a unique passphrase does offer a bevy of benefits listed below:

1. The security of a key, even when highly encrypted, depends largely on its invisibility to any other party. I 2. In the likely instance of a passphrase-secure private key falling into the custody of an unauthorized user, they will be rendered unable to log in to its allied accounts until they can crack the passphrase. This invariably gives the victim (the hacked user) precious extra time to avert the hacking bid On the downside, assigning a passphrase to the key requires you to key it in every time you make use of the Key Pair, which makes the process a tad tedious, nonetheless absolutely failsafe.

Here is a broad outline of the end-to-end key generation process:

root@server1:~# ssh-keygen -b 4096 -o -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:KBZP/guc7lND8I239zKv8PRziF/5jnA6N0nEocCDlLA root@server1
The key's randomart image is:
+---[RSA 2048]----+
| .o.+ |
| ..o + . |
| . Eo o o o . |
| = .+ o . o |
| o +.S. . . |
| . o oo . . . .|
| +.....o+.+.|
| ... . +==Boo|
| .o.. +O==o|
+----[SHA256]-----+

The public key can now be traced to the link ~/.ssh/id_rsa.pub

The private key (identification) can now be traced to the link-/home/demo/.ssh/id_rsa 3

Step Two: Copying the Public Key

Once the distinct key pair has been generated, the next step remains to place the public key on the virtual server that we intend to use. Users would be able to copy the public key into the authorized_keys file of the new machine using the ssh-copy-id command. Given below is the prescribed format (strictly an example) for keying in the username and IP address, and must be replaced with actual system values:

ssh-copy-id [email protected]

As an alternative, users may paste the keys by using SSH (as per the given command):

cat ~/.ssh/id_rsa.pub | ssh [email protected] "cat >> ~/.ssh/authorized_keys"

Either of the above commands, when used, shall toss the following message on your system:

The authenticity of host '192.168.0.100 ' can't be established. RSA key fingerprint is b1:2d:32:67:ce:35:4d:5f:13:a8:cd:c0:c4:48:86:12. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added '192.168.0.100' (RSA) to the list of known hosts. [email protected]'s password: Now try logging into the machine, with "ssh '[email protected]'", and check in: ~/.ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting.

After the above drill, users are ready to go ahead and log into [email protected] without being prompted for a password. However, if you have earlier assigned a passphrase to the key (as per Step 2 above), you will be prompted to enter the passphrase at this point (and each time for subsequent log-ins.).

Step Three (This Step is Optional): Disabling the Password to Facilitate Root Login

After users have copied their SSH keys unto your server and ensured seamless log-in with the SSH keys only, they have the option to restrict the root login, and permit the same only through SSH keys. To accomplish this, users need to access the SSH configuration file using the following command:

sudo nano /etc/ssh/sshd_config

Once the file is accessed, users need to find the line within the file that includes PermitRootLogin , and modify the same to ensure a foolproof connection using the SSH key. The following command shall help you do that:

PermitRootLogin without-password

The last step in the process remains to implement the changes by using the following command:

reload ssh

The above completes the process of installing SSH keys on the Linux server.

Converting OpenSSH private key to new format

Most older OpenSSH keys are stored in the PEM format. While this format is compatible with many older applications, it has the drawback that the password of a password-protected private key can be attacked with brute-force attacks. This chapter explains how to convert a private key in PEM format to one in the new OpenSSH format.

ssh-keygen -p -o -f /root/.ssh/id_rsa

The path /root/.ssh/id_rsa is the path of the old private key file.

Conclusion

The above steps shall help you install SSH keys on any virtual private server in a completely safe, secure and hassle-free manner.

[Aug 07, 2018] Managing Multiple Linux Servers with ClusterSSH Linux.com The source for Linux information

Aug 07, 2018 | www.linux.com

Managing Multiple Linux Servers with ClusterSSH

If you're a Linux system administrator, chances are you've got more than one machine that you're responsible for on a daily basis. You may even have a bank of machines that you maintain that are similar -- a farm of Web servers, for example. If you have a need to type the same command into several machines at once, you can login to each one with SSH and do it serially, or you can save yourself a lot of time and effort and use a tool like ClusterSSH.

ClusterSSH is a Tk/Perl wrapper around standard Linux tools like XTerm and SSH. As such, it'll run on just about any POSIX-compliant OS where the libraries exist -- I've run it on Linux, Solaris, and Mac OS X. It requires the Perl libraries Tk ( perl-tk on Debian or Ubuntu) and X11::Protocol ( libx11-protocol-perl on Debian or Ubuntu), in addition to xterm and OpenSSH.

Installation

Installing ClusterSSH on a Debian or Ubuntu system is trivial -- a simple sudo apt-get install clusterssh will install it and its dependencies. It is also packaged for use with Fedora, and it is installable via the ports system on FreeBSD. There's also a MacPorts version for use with Mac OS X, if you use an Apple machine. Of course, it can also be compiled from source.

Configuration

ClusterSSH can be configured either via its global configuration file -- /etc/clusters , or via a file in the user's home directory called .csshrc . I tend to favor the user-level configuration as that lets multiple people on the same system to setup their ClusterSSH client as they choose. Configuration is straightforward in either case, as the file format is the same. ClusterSSH defines a "cluster" as a group of machines that you'd like to control via one interface. With that in mind, you enumerate your clusters at the top of the file in a "clusters" block, and then you describe each cluster in a separate section below.

For example, let's say I've got two clusters, each consisting of two machines. "Cluster1" has the machines "Test1" and "Test2" in it, and "Cluster2" has the machines "Test3" and "Test4" in it. The ~.csshrc (or /etc/clusters ) control file would look like this:

clusters = cluster1 cluster2

cluster1 = test1 test2
cluster2 = test3 test4

You can also make meta-clusters -- clusters that refer to clusters. If you wanted to make a cluster called "all" that encompassed all the machines, you could define it two ways. First, you could simply create a cluster that held all the machines, like the following:

clusters = cluster1 cluster2 all

cluster1 = test1 test2
cluster2 = test3 test4
all = test1 test2 test3 test4

However, my preferred method is to use a meta-cluster that encompasses the other clusters:

clusters = cluster1 cluster2 all

cluster1 = test1 test2
cluster2 = test3 test4
all = cluster1 cluster2

ClusterSSH

By calling out the "all" cluster as containing cluster1 and cluster2, if either of those clusters ever change, the change is automatically captured so you don't have to update the "all" definition. This will save you time and headache if your .csshrc file ever grows in size.

Using ClusterSSH

Using ClusterSSH is similar to launching SSH by itself. Simply running cssh -l <username> <clustername> will launch ClusterSSH and log you in as the desired user on that cluster. In the figure below, you can see I've logged into "cluster1" as myself. The small window labeled "CSSH [2]" is the Cluster SSH console window. Anything I type into that small window gets echoed to all the machines in the cluster -- in this case, machines "test1" and "test2". In a pinch, you can also login to machines that aren't in your .csshrc file, simply by running cssh -l <username> <machinename1> <machinename2> <machinename3> .

If I want to send something to one of the terminals, I can simply switch focus by clicking in the desired XTerm, and just type in that window like I usually would. ClusterSSH has a few menu items that really help when dealing with a mix of machines. As per the figure below, in the "Hosts" menu of the ClusterSSH console there's are several options that come in handy.

"Retile Windows" does just that if you've manually resized or moved something. "Add host(s) or Cluster(s)" is great if you want to add another set of machines or another cluster to the running ClusterSSH session. Finally, you'll see each host listed at the bottom of the "Hosts" menu. By checking or unchecking the boxes next to each hostname, you can select which hosts the ClusterSSH console will echo commands to. This is handy if you want to exclude a host or two for a one-off or particular reason. The final menu option that's nice to have is under the "Send" menu, called "Hostname". This simply echoes each machine's hostname to the command line, which can be handy if you're constructing something host-specific across your cluster.

Resize Windows

Caveats with ClusterSSH

Like many UNIX tools, ClusterSSH has the potential to go horribly awry if you aren't very careful with its use. I've seen ClusterSSH mistakes take out an entire tier of Web servers simply by propagating a typo in an Apache configuration. Having access to multiple machines at once, possibly as a privileged user, means mistakes come at a great cost. Take care, and double-check what you're doing before you punch that Enter key.

Conclusion

ClusterSSH isn't a replacement for having a configuration management system or any of the other best practices when managing a number of machines. However, if you need to do something in a pinch outside of your usual toolset or process, or if you're doing prototype work, ClusterSSH is indispensable. It can save a lot of time when doing tasks that need to be done on more than one machine, but like any power tool, it can cause a lot of damage if used haphazardly.

[Aug 07, 2018] SSH Tips And Tricks You Need

Aug 07, 2018 | www.symkat.com

Programming

SSH is one of the most widely used protocols for connecting to remote shells. While there are numerous SSH clients the most-used still remains OpenSSH's ssh . OpenSSH has been the default ssh client for every major Linux operation, and is trusted by cloud computing providers such as Amazon's EC2 services and web hosting companies like MediaTemple . There is a plethora of tips and tricks that can be used to make your experience even better than it already is. Read on to discover some of the best tweaks to your favorite SSH client.

Adding A Keep-Alive

A keep-alive is a small piece of data transmitted between a client and a server to ensure that the connection is still open or to keep the connection open. Many protocols implement this as a way of cleaning up dead connections to the server. If a client does not respond, the connection is closed.

SSH does not enable this by default. There are pros and cons to this. A major pro is that under a lot of conditions if you disconnect from the Internet, your connection will be usable when you reconnect. For those who drop out of WiFi a lot, this is a major plus when you discover you don't need to login again.

For those who get the following message from their SSH client when they stop typing for a few minutes it's not as convenient:

symkat@symkat:~$ Read from remote host symkat.com: Connection reset by peer
Connection to symkat.com closed.

This happens because your router or firewall is trying to clean up dead connections. It's seeing that no data has been transmitted in N seconds and falsely assumes that the connection is no longer in use.

To rectify this you can add a Keep-Alive. This will ensure that your connection stays open to the server and the firewall doesn't close it.

To make all connections from your shell send a keepalive add the following to your ~/.ssh/config file:

KeepAlive yes
ServerAliveInterval 60

The con is that if your connection drops and a KeepAlive packet is sent SSH will disconnect you. If that becomes a problem, you can always actually fix the Internet connection.

Multiplexing Your Connection

Do you make a lot of connections to the same servers? You may not have noticed how slow an initial connection to a shell is. If you multiplex your connection you will definitely notice it though. Let's test the difference between a multiplexed connection using SSH keys and a non-multiplexed connection using SSH keys:

# Without multiplexing enabled:
$ time ssh [email protected] uptime
 20:47:42 up 16 days,  1:13,  3 users,  load average: 0.00, 0.01, 0.00

real    0m1.215s
user    0m0.031s
sys 0m0.008s

# With multiplexing enabled:
$ time ssh [email protected] uptime
 20:48:43 up 16 days,  1:14,  4 users,  load average: 0.00, 0.00, 0.00

real    0m0.174s
user    0m0.003s
sys 0m0.004s

We can see that multiplexing the connection is much faster, in this instance on an order of 7 times faster than not multiplexing the connection. Multiplexing allows us to have a â??controlâ? connection, which is your initial connection to a server, this is then turned into a UNIX socket file on your computer. All subsequent connections will use that socket to connect to the remote host. This allows us to save time by not requiring all the initial encryption, key exchanges, and negotiations for subsequent connections to the server.

To enable multiplexing do the following:

In a shell:

$ mkdir -p ~/.ssh/connections
$ chmod 700 ~/.ssh/connections

Add this to your ~/.ssh/config file:

Host *
ControlMaster auto
ControlPath ~/.ssh/connections/%r_%h_%p

A negative to this is that some uses of ssh may fail to work with your multiplexed connection. Most notably commands which use tunneling like git, svn or rsync, or forwarding a port. For these you can add the option -oControlMaster=no . To prevent a specific host from using a multiplexed connection add the following to your ~/.ssh/config file:

Host YOUR_SERVER_OR_IP
MasterControl no

There are security precautions that one should take with this approach. Let's take a look at what actually happens when we connect a second connection:

$ ssh -v -i /dev/null [email protected]
OpenSSH_4.7p1, OpenSSL 0.9.7l 28 Sep 2006
debug1: Reading configuration data /Users/symkat/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: Applying options for *
debug1: auto-mux: Trying existing master
Last login:
symkat@symkat:~$ exit

As we see no actual authentication took place. This poses a significant security risk if running it from a host that is not trusted, as a user who can read and write to the socket can easily make the connection without having to supply a password. Take the same care to secure the sockets as you take in protecting a private key.

Using SSH As A Proxy

Even Starbucks now has free WiFi in its stores. It seems the world has caught on to giving free Internet at most retail locations. The downside is that more teenagers with "Got Root?" stickers are camping out at these locations running the latest version of wireshark.

SSH's encryption can stand up to most any hostile network, but what about web traffic?

Most web browsers, and certainly all the popular ones, support using a proxy to tunnel your traffic. SSH can provide a SOCKS proxy on localhost that tunnels to your remote server with the -D option. You get all the encryption of SSH for your web traffic, and can rest assured no one will be capturing your login credentials to all those non-ssl websites you're using.

$ ssh -D1080 -oControlMaster=no [email protected]
symkat@symkat:~$

Now there is a proxy running on 127.0.0.1:1080 that can be used in a web browser or email client. Any application that supports SOCKS 4 or 5 proxies can use 127.0.0.1:1080 to tunnel its traffic.

$ nc -vvv 127.0.0.1 1080
Connection to 127.0.0.1 1080 port [tcp/socks] succeeded!
Using One-Off Commands

Often times you may want only a single piece of information from a remote host. "Is the file system full?" "What's the uptime on the server?" "Who is logged in?"

Normally you would need to login, type the command, see the output and then type exit (or Control-D for those in the know.) There is a better way: combine the ssh with the command you want to execute and get your result:

 $ ssh [email protected] uptime
 18:41:16 up 15 days, 23:07,  0 users,  load average: 0.00, 0.00, 0.00

This executed the ssh symkat.com, logged in as symkat, and ran the command uptime on symkat. If you're not using SSH keys then you'll be presented with a password prompt before the command is executed.

$ ssh [email protected] ps aux | echo $HOSTNAME
symkats-macbook-pro.local

This executed the command ps aux on symkat.com, sent the output to STDOUT, a pipe on my local laptop picked it up to execute echo $HOSTNAME locally. Although in most situations using auxiliary data processing like grep or awk will work flawlessly, there are many situations where you need your pipes and file IO redirects to work on the remote system instead of the local system. In that case you would want to wrap the command in single quotes:

$ ssh [email protected] 'ps aux | echo $HOSTNAME'
symkat.com

As a basic rule if you're using > >> < - or | you're going to want to wrap in single quotes.

It is also worth noting that in using this method of executing a command some programs will not work. Notably anything that requires a terminal, such as screen, irssi, less, or a plethora of other interactive or curses based applications. To force a terminal to be allocated you can use the -t option:

$ ssh [email protected] screen -r
Must be connected to a terminal.
$ ssh â?�t [email protected] screen -r
$ This worked!
Making SSH A Pipe

Pipes are useful. The concept is simple: take the output from one program's STDOUT and feed it to another program's STDIN. OpenSSH can be used as a pipe into a remote system. Let's say that we would like to transfer a directory structure from one machine to another. The directory structure has a lot of files and sub directories.

We could make a tarball of the directory on our own server and scp it over. If the file system this directory is on lacks the space though we may be better off piping the tarballed content to the remote system.

$ ls content/
1   18  27  36  45  54  63  72  81  90
10  19  28  37  46  55  64  73  82  91
100 2   29  38  47  56  65  74  83  92
11  20  3   39  48  57  66  75  84  93
12  21  30  4   49  58  67  76  85  94
13  22  31  40  5   59  68  77  86  95
14  23  32  41  50  6   69  78  87  96
15  24  33  42  51  60  7   79  88  97
16  25  34  43  52  61  70  8   89  98
17  26  35  44  53  62  71  80  9   99

$ tar -cz content | ssh [email protected] 'tar -xz'
$ ssh symcat@symkat
symkat@lazygeek:~$ ls content/
1    14  2   25  30  36  41  47  52  58  63  69  74  8   85  90  96
10   15  20  26  31  37  42  48  53  59  64  7   75  80  86  91  97
100  16  21  27  32  38  43  49  54  6   65  70  76  81  87  92  98
11   17  22  28  33  39  44  5   55  60  66  71  77  82  88  93  99
12   18  23  29  34  4   45  50  56  61  67  72  78  83  89  94
13   19  24  3   35  40  46  51  57  62  68  73  79  84  9   95

What we did in this example was to create a new archive ( -c ) and to compress the archive with gzip ( -z ). Because we did not use -f to tell it to output to a file, the compressed archive was send to STDOUT. We then piped STDOUT with | to ssh . We used a one-off command in ssh to invoke tar with the extract ( -x ) and gzip compressed ( -z ) arguments. This read the compressed archive from the originating server and unpacked it into our server. We then logged in to see the listing of files.

Additionally, we can pipe in the other direction as well. Take for example a situation where you with to make a copy of a remote database, into a local database:

symkat@chard:~$ echo "create database backup" | mysql -uroot -ppassword
symkat@chard:~$ ssh [email protected] 'mysqldump -udbuser -ppassword symkat' | \
> mysql -uroot -ppassword backup
symkat@chard:~$ echo "use backup;select count(*) from wp_links;" | mysql -uroot -ppassword
count(*)
12
symkat@chard:~$

What we did here is to create the database backup on our local machine. Once we had the database created we used a one-off command to get a dump of the database from symkat.com. The SQL Dump came through STDOUT and was piped to another command. We used mysql to access the database, and read STDIN (which is where the data now is after piping it) to create the database on our local machine. We then ran a MySQL command to ensure that there is data in the backup table. As we can see, SSH can provide a true pipe in either direction.

Using a Non Standard Port

Many people run SSH on an alternate port for one reason or another. For instance, if outgoing port 22 is blocked at your college or place of employment you may have ssh listen on port 443.

Instead of saying ssh -p443 [email protected] you can add a configuration option to your ~/.ssh/config file that is specific to yourserver.com:

Host yourserver.com
Port 443

You can extrapolate from this information further that you can make ssh configurations specific to a host. There is little reason to use all those -oOptions when you have a well-written ~/.ssh/config file.


harris6 years ago ,

Good Article ! I would like to try to implement Two Factor authentication with Google Authenticator , steps can be followed here http://www.digitaljournal.sg

Scribe63 • 7 years ago ,

Can you explain this syntax.

$ ssh -D1080 -oControlMaster=no [email protected]

I understand:
[-D [bind_address:]port]
[-o option]

What i may not understanding is "[email protected]".
Is this a user account on a remote server symkat.com or the local machine.

symkat Mod Scribe637 years ago ,

Yes, user@hostname is common in SSH lines. Although, you could say, "-l symkat symkat.com " (-l is username), [email protected] works just the same. Anything preceding the @ is the username to submit, and anything following the @ is the hostname or IP address to connect to.

Serge • 8 years ago ,

Is there a way to multiplex sshfs connections to obtain a higher throughput ???

Kent • 8 years ago ,

I'm quite surprised you didn't cover key based authentication.

My favorite trick for key-based authentication is having per-host keys, which gives you an extra layer of theoretical security in the event your key is leaked.

1. If your public key is leaked, nasty people could ( in theory, but its unlikely ) give you permission to log into their machines with said key, and then log your actions, which, are you not observant, could be an information leak. ( This is insane paranoia really ).

2. If your *private* key is leaked, every machine you gave a copy of your public key to is now vulnerable. ( This is a much more valid concern ).

Having per-host keys makes this much weaker in some respects, because if you have a per-host key, then stealing *a* key will only give them access to *one* machine instead of several. However, in saying that, chances are, if they get in and steal *one* key, if you have multiple, they can probably steal *every* key, meaning blocking all those accesses via key deletion becomes much harder. I'm not sure which is the most sane option really, I still just like per-host keys =P.

Doing this is very similar to setting up per-host auto-master connections.


# ~/.ssh/config
IdentityFile ~/.ssh/ident/%r@%h
IdentityFile ~/.ssh/id_rsa
IdentityFile ~/.ssh/id_dsa

the %r@%h uses the same syntax as the connection multiplexing.

In my case, it would mean for [email protected] I'd create

~/.ssh/ident/[email protected]
~/.ssh/ident/[email protected]

and then send a copy of [email protected] to the admin of bar.com to put in the 'foo' users "authorized keys" file.

It will then JustWork(TM).

And if you can't be arsed having to set up a seperate key for a given host, it tries the per-host one before using the general key, so you can just send them your common .pub file instead =).

David Orchard • 8 years ago ,

Another option that I like when signing on to remote hosts who's ip changes - like AWS - is to prevent ssh from doing strict host key checking via
-o StrictHostKeyChecking=no

Matteo • 8 years ago ,

Here's mine: remote to local mysql backup in one line
ssh user@server "/usr/bin/mysqldump -u user -p password database" | dd of=/where/you/want/the/dump.sql

Lars • 8 years ago ,

Here's another one I found useful... Redirect local STDOUT to a file on a remote server.

If in the example above I wanted to create a tar.gz file of contents on the remote machine:
tar -cz contents | ssh [email protected] "cat > contents.tar.gz"

Michael • 8 years ago ,

Interesting read, but I don't understand the part where you are piping into echo. echo never reads its stdin, so what's the point?

Altreus Michael8 years ago ,

To clarify symkat's point: The fact that echo doesn't read its stdin *is* the point.

This use of piping to echo is being used to illustrate the difference between

ssh [email protected] ps aux | echo $HOSTNAME

and

ssh [email protected] 'ps aux | echo $HOSTNAME'

Clarifying that with the single quotes, the piping is done by the *remote* server, hence the different value of $HOSTNAME.

A similar point could have been made using && instead of |.

Perpetualrabbit Stephen Veit8 years ago ,

Wow. You must have looked in the wrong place all that time, because it is right there in the manpage:

# man ssh_config

Specifies whether the system should send TCP keepalive messages to the other side. If they are sent, death of the connection or crash of one of the machines will be properly noticed. This option only uses TCP keepalives (as opposed to using ssh level keepalives), so takes a long time to notice when the connection dies. As such, you probably want the ServerAliveInterval option as well. However, this means
that connections will die if the route is down temporarily, and some people find it annoying.

The default is "yes" (to send TCP keepalive messages), and the client will notice if the network goes down or the remote host dies. This is important in scripts, and many users want it too.

To disable TCP keepalive messages, the value should be set to "no".

Multimedia Mike • 8 years ago ,

My personal favorite SSH trick: Establishing a VPN by running PPPD on both ends of an SSH connection:

http://www.faqs.org/docs/Li...

Scott Carlson • 8 years ago ,

ProxyCommand is great. I use it all the time, including using my dd-wrt router as a jump point to my internal network.

using a command restriction in authorized_keys allows a password-less key to be used to ship backups to a remote box.

Dave Drager8 years ago ,

All great pointers for SSH tricks!

Joe Shaw • 8 years ago ,

Lots of additional great tips on this Hacker News thread: http://news.ycombinator.com...

[Aug 07, 2018] 9 Awesome SSH Tricks by tychoish

Notable quotes:
"... original connection ..."
"... And it just works ..."
Mar 01, 2011 | tychoish.com

...Here's a list of 10 things that I think are particularly awesome and perhaps a bit off the beaten path.

Update: ( 2011-09-19 ) There are some user-submitted ssh-tricks on the wiki now! Please feel free to add your favorites. Also the hacker news thread might be helpful for some.

SSH Config

I used SSH regularly for years before I learned about the config file, that you can create at ~/.ssh/config to tell how you want ssh to behave.

Consider the following configuration example:

Host example.com *.example.net
User root
Host dev.example.net dev.example.net
User shared
Port 220
Host test.example.com
User root
UserKnownHostsFile /dev/null
StrictHostKeyChecking no
Host t
HostName test.example.org
Host *
Compression yes
CompressionLevel 7
Cipher blowfish
ServerAliveInterval 600
ControlMaster auto
ControlPath /tmp/ssh-%r@%h:%p

I'll cover some of the settings in the " Host * " block, which apply to all outgoing ssh connections, in other items in this post, but basically you can use this to create shortcuts with the ssh command, to control what username is used to connect to a given host, what port number, if you need to connect to an ssh daemon running on a non-standard port. See " man ssh_config " for more information. Control Master/Control Path

This is probably the coolest thing that I know about in SSH. Set the " ControlMaster " and " ControlPath " as above in the ssh configuration. Anytime you try to connect to a host that matches that configuration a "master session" is created. Then, subsequent connections to the same host will reuse the same master connection rather than attempt to renegotiate and create a separate connection. The result is greater speed less overhead.

This can cause problems if you' want to do port forwarding, as this must be configured on the original connection , otherwise it won't work. SSH Keys

While ControlMaster/ControlPath is the coolest thing you can do with SSH, key-based authentication is probably my favorite. Basically, rather than force users to authenticate with passwords, you can use a secure cryptographic method to gain (and grant) access to a system. Deposit a public key on servers far and wide, while keeping a "private" key secure on your local machine. And it just works .

You can generate multiple keys, to make it more difficult for an intruder to gain access to multiple machines by breaching a specific key, or machine. You can specify specific keys and key files to be used when connected to specific hosts in the ssh config file (see above.) Keys can also be (optionally) encrypted locally with a pass-code, for additional security. Once I understood how secure the system is (or can be), I found my self thinking "I wish you could use this for more than just SSH." SSH Agent

Most people start using SSH keys because they're easier and it means that you don't have to enter a password every time that you want to connect to a host. But the truth is that in most cases you want to have unencrypted private keys that have meaningful access to systems because once someone has access to a copy of the private key the have full access to the system. That's not good.

But the truth is that typing in passwords is a pain, so there's a solution: the ssh-agent . Basically one authenticates to the ssh-agent locally, which decrypts the key and does some magic, so that then whenever the key is needed for the connecting to a host you don't have to enter your password. ssh-agent manages the local encryption on your key for the current session.

SSH Reagent

I'm not sure where I found this amazing little function but it's great. Typically, ssh-agents are attached to the current session, like the window manager, so that when the window manager dies, the ssh-agent loses the decrypted bits from your ssh key. That's nice, but it also means that if you have some processes that exist outside of your window manager's session (e.g. Screen sessions) they loose the ssh-agent and get trapped without access to an ssh-agent so you end up having to restart would-be-persistent processes, or you have to run a large number of ssh-agents which is not ideal.

Enter "ssh-reagent." stick this in your shell configuration (e.g. ~/.bashrc or ~/.zshrc ) and run ssh-reagent whenever you have an agent session running and a terminal that can't see it.

ssh-reagent () {
  for agent in /tmp/ssh-*/agent.*; do
      export SSH_AUTH_SOCK=$agent
      if ssh-add -l 2>&1 > /dev/null; then
         echo Found working SSH Agent:
         ssh-add -l
         return
      fi
  done
  echo Cannot find ssh agent - maybe you should reconnect and forward it?
}

It's magic.

SSHFS and SFTP

Typically we think of ssh as a way to run a command or get a prompt on a remote machine. But SSH can do a lot more than that, and the OpenSSH package that probably the most popular implementation of SSH these days has a lot of features that go beyond just "shell" access. Here are two cool ones:

SSHFS creates a mountable file system using FUSE of the files located on a remote system over SSH. It's not always very fast, but it's simple and works great for quick operations on local systems, where the speed issue is much less relevant.

SFTP, replaces FTP (which is plagued by security problems,) with a similar tool for transferring files between two systems that's secure (because it works over SSH) and is just as easy to use. In fact most recent OpenSSH daemons provide SFTP access by default.

There's more, like a full VPN solution in recent versions, secure remote file copy, port forwarding, and the list could go on. SSH Tunnels

SSH includes the ability to connect a port on your local system to a port on a remote system, so that to applications on your local system the local port looks like a normal local port, but when accessed the service running on the remote machine responds. All traffic is really sent over ssh.

I set up an SSH tunnel for my local system to the outgoing mail server on my server. I tell my mail client to send mail to localhost server (without mail server authentication!), and it magically goes to my personal mail relay encrypted over ssh. The applications of this are nearly endless.

Keep Alive Packets

The problem: unless you're doing something with SSH it doesn't send any packets, and as a result the connections can be pretty resilient to network disturbances. That's not a problem, but it does mean that unless you're actively using an SSH session, it can go silent causing your local area network's NAT to eat a connection that it thinks has died, but hasn't. The solution is to set the " ServerAliveInterval [seconds] " configuration in the SSH configuration so that your ssh client sends a "dummy packet" on a regular interval so that the router thinks that the connection is active even if it's particularly quiet. It's good stuff.

/dev/null .known_hosts

A lot of what I do in my day job involves deploying new systems, testing something out and then destroying that installation and starting over in the same virtual machine. So my "test rigs" have a few IP addresses, I can't readily deploy keys on these hosts, and every time I redeploy SSH's host-key checking tells me that a different system is responding for the host, which in most cases is the symptom of some sort of security error, and in most cases knowing this is a good thing, but in some cases it can be very annoying.

These configuration values tell your SSH session to save keys to ` /dev/null (i.e. drop them on the floor) and to not ask you to verify an unknown host:

UserKnownHostsFile /dev/null
StrictHostKeyChecking no

This probably saves me a little annoyance and minute or two every day or more, but it's totally worth it. Don't set these values for hosts that you actually care about.


I'm sure there are other awesome things you can do with ssh, and I'd live to hear more . Onward and Upward!

[Nov 29, 2017] SSH Tips And Tricks You Need

Aug 23, 2010 | SymKat

Multiplexing Your Connection

Do you make a lot of connections to the same servers? You may not have noticed how slow an initial connection to a shell is. If you multiplex your connection you will definitely notice it though. Let's test the difference between a multiplexed connection using SSH keys and a non-multiplexed connection using SSH keys:


# Without multiplexing enabled:
$ time ssh [email protected] uptime
 20:47:42 up 16 days,  1:13,  3 users,  load average: 0.00, 0.01, 0.00

real	0m1.215s
user	0m0.031s
sys	0m0.008s

# With multiplexing enabled:
$ time ssh [email protected] uptime
 20:48:43 up 16 days,  1:14,  4 users,  load average: 0.00, 0.00, 0.00

real	0m0.174s
user	0m0.003s
sys	0m0.004s

We can see that multiplexing the connection is much faster, in this instance on an order of 7 times faster than not multiplexing the connection. Multiplexing allows us to have a "control" connection, which is your initial connection to a server, this is then turned into a UNIX socket file on your computer. All subsequent connections will use that socket to connect to the remote host. This allows us to save time by not requiring all the initial encryption, key exchanges, and negotiations for subsequent connections to the server.

To enable multiplexing do the following:

In a shell:


$ mkdir -p ~/.ssh/connections
$ chmod 700 ~/.ssh/connections

Add this to your ~/.ssh/config file:


Host *
ControlMaster auto
ControlPath ~/.ssh/connections/%r_%h_%p

A negative to this is that some uses of ssh may fail to work with your multiplexed connection. Most notably commands which use tunneling like git, svn or rsync, or forwarding a port. For these you can add the option -oControlMaster=no. To prevent a specific host from using a multiplexed connection add the following to your ~/.ssh/config file:

Host YOUR_SERVER_OR_IP
MasterControl no

There are security precautions that one should take with this approach. Let's take a look at what actually happens when we connect a second connection:


$ ssh -v -i /dev/null [email protected]
OpenSSH_4.7p1, OpenSSL 0.9.7l 28 Sep 2006
debug1: Reading configuration data /Users/symkat/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: Applying options for *
debug1: auto-mux: Trying existing master
Last login:
symkat@symkat:~$ exit

As we see no actual authentication took place. This poses a significant security risk if running it from a host that is not trusted, as a user who can read and write to the socket can easily make the connection without having to supply a password. Take the same care to secure the sockets as you take in protecting a private key.

Using SSH As A Proxy

Even Starbucks now has free WiFi in its stores. It seems the world has caught on to giving free Internet at most retail locations. The downside is that more teenagers with "Got Root?" stickers are camping out at these locations running the latest version of wireshark.

SSH's encryption can stand up to most any hostile network, but what about web traffic?

Most web browsers, and certainly all the popular ones, support using a proxy to tunnel your traffic. SSH can provide a SOCKS proxy on localhost that tunnels to your remote server with the -D option. You get all the encryption of SSH for your web traffic, and can rest assured no one will be capturing your login credentials to all those non-ssl websites you're using.


$ ssh -D1080 -oControlMaster=no [email protected]
symkat@symkat:~$

Now there is a proxy running on 127.0.0.1:1080 that can be used in a web browser or email client. Any application that supports SOCKS 4 or 5 proxies can use 127.0.0.1:1080 to tunnel its traffic.


$ nc -vvv 127.0.0.1 1080
Connection to 127.0.0.1 1080 port [tcp/socks] succeeded!

Using One-Off Commands

Often times you may want only a single piece of information from a remote host. "Is the file system full?" "What's the uptime on the server?" "Who is logged in?"

Normally you would need to login, type the command, see the output and then type exit (or Control-D for those in the know.) There is a better way: combine the ssh with the command you want to execute and get your result:


$ ssh [email protected] uptime
 18:41:16 up 15 days, 23:07,  0 users,  load average: 0.00, 0.00, 0.00

This executed the ssh symkat.com, logged in as symkat, and ran the command "uptime" on symkat. If you're not using SSH keys then you'll be presented with a password prompt before the command is executed.


$ ssh [email protected] ps aux | echo $HOSTNAME
symkats-macbook-pro.local

This executed the command ps aux on symkat.com, sent the output to STDOUT, a pipe on my local laptop picked it up to execute "echo $HOSTNAME" locally. Although in most situations using auxiliary data processing like grep or awk will work flawlessly, there are many situations where you need your pipes and file IO redirects to work on the remote system instead of the local system. In that case you would want to wrap the command in single quotes:


$ ssh [email protected] 'ps aux | echo $HOSTNAME'
symkat.com

As a basic rule if you're using > >> < - or | you're going to want to wrap in single quotes.

It is also worth noting that in using this method of executing a command some programs will not work. Notably anything that requires a terminal, such as screen, irssi, less, or a plethora of other interactive or curses based applications. To force a terminal to be allocated you can use the -t option:


$ ssh [email protected] screen -r
Must be connected to a terminal.
$ ssh -t [email protected] screen -r
$ This worked!

Making SSH A Pipe

Pipes are useful. The concept is simple: take the output from one program's STDOUT and feed it to another program's STDIN. OpenSSH can be used as a pipe into a remote system. Let's say that we would like to transfer a directory structure from one machine to another. The directory structure has a lot of files and sub directories.

We could make a tarball of the directory on our own server and scp it over. If the file system this directory is on lacks the space though we may be better off piping the tarballed content to the remote system.

....
$ tar -cz content | ssh [email protected] 'tar -xz'
$ ssh symcat@symkat
....

What we did in this example was to create a new archive (-c) and to compress the archive with gzip (-z). Because we did not use -f to tell it to output to a file, the compressed archive was send to STDOUT. We then piped STDOUT with | to ssh. We used a one-off command in ssh to invoke tar with the extract (-x) and gzip compressed (-z) arguments. This read the compressed archive from the originating server and unpacked it into our server. We then logged in to see the listing of files.

Additionally, we can pipe in the other direction as well. Take for example a situation where you with to make a copy of a remote database, into a local database:

symkat@chard:~$ echo "create database backup" | mysql -uroot -ppassword
symkat@chard:~$ ssh [email protected] 'mysqldump -udbuser -ppassword symkat' | mysql -uroot -ppassword backup
symkat@chard:~$ echo use backup;select count(*) from wp_links;" | mysql -uroot -ppassword
count(*)
12
symkat@chard:~$

What we did here is to create the database "backup" on our local machine. Once we had the database created we used a one-off command to get a dump of the database from symkat.com. The SQL Dump came through STDOUT and was piped to another command. We used mysql to access the database, and read STDIN (which is where the data now is after piping it) to create the database on our local machine. We then ran a MySQL command to ensure that there is data in the backup table. As we can see, SSH can provide a true pipe in either direction.

[Nov 29, 2017] SSH Tips And Tricks You Need

Aug 23, 2010 | SymKat

Multiplexing Your Connection

Do you make a lot of connections to the same servers? You may not have noticed how slow an initial connection to a shell is. If you multiplex your connection you will definitely notice it though. Let's test the difference between a multiplexed connection using SSH keys and a non-multiplexed connection using SSH keys:


# Without multiplexing enabled:
$ time ssh [email protected] uptime
 20:47:42 up 16 days,  1:13,  3 users,  load average: 0.00, 0.01, 0.00

real	0m1.215s
user	0m0.031s
sys	0m0.008s

# With multiplexing enabled:
$ time ssh [email protected] uptime
 20:48:43 up 16 days,  1:14,  4 users,  load average: 0.00, 0.00, 0.00

real	0m0.174s
user	0m0.003s
sys	0m0.004s

We can see that multiplexing the connection is much faster, in this instance on an order of 7 times faster than not multiplexing the connection. Multiplexing allows us to have a "control" connection, which is your initial connection to a server, this is then turned into a UNIX socket file on your computer. All subsequent connections will use that socket to connect to the remote host. This allows us to save time by not requiring all the initial encryption, key exchanges, and negotiations for subsequent connections to the server.

To enable multiplexing do the following:

In a shell:


$ mkdir -p ~/.ssh/connections
$ chmod 700 ~/.ssh/connections

Add this to your ~/.ssh/config file:


Host *
ControlMaster auto
ControlPath ~/.ssh/connections/%r_%h_%p

A negative to this is that some uses of ssh may fail to work with your multiplexed connection. Most notably commands which use tunneling like git, svn or rsync, or forwarding a port. For these you can add the option -oControlMaster=no. To prevent a specific host from using a multiplexed connection add the following to your ~/.ssh/config file:

Host YOUR_SERVER_OR_IP
MasterControl no

There are security precautions that one should take with this approach. Let's take a look at what actually happens when we connect a second connection:


$ ssh -v -i /dev/null [email protected]
OpenSSH_4.7p1, OpenSSL 0.9.7l 28 Sep 2006
debug1: Reading configuration data /Users/symkat/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: Applying options for *
debug1: auto-mux: Trying existing master
Last login:
symkat@symkat:~$ exit

As we see no actual authentication took place. This poses a significant security risk if running it from a host that is not trusted, as a user who can read and write to the socket can easily make the connection without having to supply a password. Take the same care to secure the sockets as you take in protecting a private key.

Using SSH As A Proxy

Even Starbucks now has free WiFi in its stores. It seems the world has caught on to giving free Internet at most retail locations. The downside is that more teenagers with "Got Root?" stickers are camping out at these locations running the latest version of wireshark.

SSH's encryption can stand up to most any hostile network, but what about web traffic?

Most web browsers, and certainly all the popular ones, support using a proxy to tunnel your traffic. SSH can provide a SOCKS proxy on localhost that tunnels to your remote server with the -D option. You get all the encryption of SSH for your web traffic, and can rest assured no one will be capturing your login credentials to all those non-ssl websites you're using.


$ ssh -D1080 -oControlMaster=no [email protected]
symkat@symkat:~$

Now there is a proxy running on 127.0.0.1:1080 that can be used in a web browser or email client. Any application that supports SOCKS 4 or 5 proxies can use 127.0.0.1:1080 to tunnel its traffic.


$ nc -vvv 127.0.0.1 1080
Connection to 127.0.0.1 1080 port [tcp/socks] succeeded!

Using One-Off Commands

Often times you may want only a single piece of information from a remote host. "Is the file system full?" "What's the uptime on the server?" "Who is logged in?"

Normally you would need to login, type the command, see the output and then type exit (or Control-D for those in the know.) There is a better way: combine the ssh with the command you want to execute and get your result:


$ ssh [email protected] uptime
 18:41:16 up 15 days, 23:07,  0 users,  load average: 0.00, 0.00, 0.00

This executed the ssh symkat.com, logged in as symkat, and ran the command "uptime" on symkat. If you're not using SSH keys then you'll be presented with a password prompt before the command is executed.


$ ssh [email protected] ps aux | echo $HOSTNAME
symkats-macbook-pro.local

This executed the command ps aux on symkat.com, sent the output to STDOUT, a pipe on my local laptop picked it up to execute "echo $HOSTNAME" locally. Although in most situations using auxiliary data processing like grep or awk will work flawlessly, there are many situations where you need your pipes and file IO redirects to work on the remote system instead of the local system. In that case you would want to wrap the command in single quotes:


$ ssh [email protected] 'ps aux | echo $HOSTNAME'
symkat.com

As a basic rule if you're using > >> < - or | you're going to want to wrap in single quotes.

It is also worth noting that in using this method of executing a command some programs will not work. Notably anything that requires a terminal, such as screen, irssi, less, or a plethora of other interactive or curses based applications. To force a terminal to be allocated you can use the -t option:


$ ssh [email protected] screen -r
Must be connected to a terminal.
$ ssh -t [email protected] screen -r
$ This worked!

Making SSH A Pipe

Pipes are useful. The concept is simple: take the output from one program's STDOUT and feed it to another program's STDIN. OpenSSH can be used as a pipe into a remote system. Let's say that we would like to transfer a directory structure from one machine to another. The directory structure has a lot of files and sub directories.

We could make a tarball of the directory on our own server and scp it over. If the file system this directory is on lacks the space though we may be better off piping the tarballed content to the remote system.

....
$ tar -cz content | ssh [email protected] 'tar -xz'
$ ssh symcat@symkat
....

What we did in this example was to create a new archive (-c) and to compress the archive with gzip (-z). Because we did not use -f to tell it to output to a file, the compressed archive was send to STDOUT. We then piped STDOUT with | to ssh. We used a one-off command in ssh to invoke tar with the extract (-x) and gzip compressed (-z) arguments. This read the compressed archive from the originating server and unpacked it into our server. We then logged in to see the listing of files.

Additionally, we can pipe in the other direction as well. Take for example a situation where you with to make a copy of a remote database, into a local database:

symkat@chard:~$ echo "create database backup" | mysql -uroot -ppassword
symkat@chard:~$ ssh [email protected] 'mysqldump -udbuser -ppassword symkat' | mysql -uroot -ppassword backup
symkat@chard:~$ echo use backup;select count(*) from wp_links;" | mysql -uroot -ppassword
count(*)
12
symkat@chard:~$

What we did here is to create the database "backup" on our local machine. Once we had the database created we used a one-off command to get a dump of the database from symkat.com. The SQL Dump came through STDOUT and was piped to another command. We used mysql to access the database, and read STDIN (which is where the data now is after piping it) to create the database on our local machine. We then ran a MySQL command to ensure that there is data in the backup table. As we can see, SSH can provide a true pipe in either direction.

[Nov 11, 2017] How to configure login banners for ssh

Notable quotes:
"... Now, you need to supply this file and path to sshd daemon so that it can fetch this banner for each user login request. For that open /etc/sshd/sshd_config file and search for line #Banner none Here you have to edit file and write your filename and remove hash mark. It should look like : Banner /etc/login.warn ..."
"... Save file and restart sshd daemon. To avoid disconnecting existing connected users, use HUP signal to restart sshd. ..."
Nov 11, 2017 | kerneltalks.com

How to display message when user connects to system before login

This message will be displayed to user when he connects to server and before he logged in. Means when he enter the username, this message will be displayed before password prompt.

You can use any filename and enter your message within. Here we used /etc/login.warn file and put our messages inside.

Shell
# cat /etc/login.warn !!!! Welcome to KernelTalks test server !!!! This server is meant for testing Linux commands and tools. If you are not associated with kerneltalks.com and not authorized please dis-connect immediately.

Now, you need to supply this file and path to sshd daemon so that it can fetch this banner for each user login request. For that open /etc/sshd/sshd_config file and search for line #Banner none Here you have to edit file and write your filename and remove hash mark. It should look like : Banner /etc/login.warn

Save file and restart sshd daemon. To avoid disconnecting existing connected users, use HUP signal to restart sshd.

Shell
root@kerneltalks # ps -ef |grep -i sshd root 14255 1 0 18:42 ? 00:00:00 /usr/sbin/sshd -D root 19074 14255 0 18:46 ? 00:00:00 sshd: ec2-user [priv] root 19177 19127 0 18:54 pts/0 00:00:00 grep -i sshd root@kerneltalks # kill -HUP 14255

Thats it! Open new session and try login. You will be greeted with the message you configured in above steps .

[Nov 01, 2017] SSH, SOCKS, and cURL by Tom Ryder

Nov 18, 2012 | sanctum.geek.nz

Port forwarding using SSH tunnels is a convenient way to circumvent well-intentioned firewall rules, or to access resources on otherwise unaddressable networks, particularly those behind NAT (with addresses such as 192.168.0.1 ).

However, it has a shortcoming in that it only allows us to address a specific host and port on the remote end of the connection; if we forward a local port to machine A on the remote subnet, we can't also reach machine B unless we forward another port. Fetching documents from a single server therefore works just fine, but browsing multiple resources over the endpoint is a hassle.

The proper way to do this, if possible, is to have a VPN connection into the appropriate network, whether via a virtual interface or a network route through an IPsec tunnel. In cases where this isn't possible or practicable, we can use a SOCKS proxy set up via an SSH connection to delegate all kinds of network connections through a remote machine, using its exact network stack, provided our client application supports it.

Being command-line junkies, we'll show how to set the tunnel up with ssh and to retrieve resources on it via curl , but of course graphical browsers are able to use SOCKS proxies as well.

As an added benefit, using this for browsing implicitly encrypts all of the traffic up to the remote endpoint of the SSH connection, including the addresses of the machines you're contacting; it's thus a useful way to protect unencrypted traffic from snoopers on your local network, or to circumvent firewall policies.

Establishing the tunnel

First of all we'll make an SSH connection to the machine we'd like to act as a SOCKS proxy, which has access to the network services that we don't. Perhaps it's the only publically addressable machine in the network.

$ ssh -fN -D localhost:8001 remote.example.com

In this example, we're backgrounding the connection immediately with -f , and explicitly saying we don't intend to run a command or shell with -N . We're only interested in establishing the tunnel.

Of course, if you do want a shell as well, you can leave these options out:

$ ssh -D localhost:8001 remote.example.com

If the tunnel setup fails, check that AllowTcpForwarding is set to yes in /etc/ssh/sshd_config on the remote machine:

AllowTcpForwarding yes

Note that in both cases we use localhost rather than 127.0.0.1 , in order to establish both IPv4 and IPv6 sockets if appropriate.

We can then check that the tunnel is established with ss on GNU/Linux:

# ss dst :8001
State      Recv-Q Send-Q   Local Address:Port       Peer Address:Port
ESTAB      0      0            127.0.0.1:45666         127.0.0.1:8001
ESTAB      0      0            127.0.0.1:45656         127.0.0.1:8001
ESTAB      0      0            127.0.0.1:45654         127.0.0.1:8001
Requesting documents

Now that we have a SOCKS proxy running on the far end of the tunnel, we can use it to retrieve documents from some of the servers that are otherwise inaccessible. For example, when we were trying to run this from the client side, we found it wouldn't work:

$ curl http://private.example/contacts.html
curl: (6) Couldn't resolve host 'private.example'

This is because the example subnet is on a remote and unroutable LAN. If its name comes from a private DNS server, we may not even be able to resolve its address, let alone retrieve the document.

We can fix both problems with our local SOCKS proxy, by pointing curl to it with its --proxy option:

$ curl --proxy socks5h://localhost:8001 http://private.example/contacts.html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
    <head>
        <title>Contacts</title>
...

Older versions of curl may need to use the --socks5-hostname option:

$ curl --socks5-hostname localhost:8001 http://private.example/contacts.html

This not only tunnels our HTTP request through to remote.example.com and returns any response, it does the DNS lookup on the other end too. This means we can not only retrieve documents from remote servers, we can resolve their hostnames too, even if our client side can't contact the appropriate DNS server on its own. This is what the h suffix does in the socks5h:// URI syntax above.

We can configure graphical web browsers to use the SOCKS proxy in the same way, optionally including DNS resolution:

Browsers are not the only application that can use SOCKS proxies; many IM clients such as Pidgin and Bitlbee can use them too, for example.

Making things more permanent

If this all works for you and you'd like to set up the SOCKS proxy on the far end each time you connect, you can add it to your ssh_config file in $HOME/.ssh/config :

Host remote.example.com
    DynamicForward localhost:8001

With this done, you should only need to type the hostname of the machine to get a shell and to set up the dynamic forward in the background:

$ ssh remote.example.com
Posted in SSH Tagged circumvent , curl , firewall , remote , socks

[Oct 31, 2017] Shortcut for adding SSH keys by Tom Ryder

Jan 23, 2012 | sanctum.geek.nz

If you've dabbled with SSH much, for example by following the excellent suso.org tutorial a few years ago, you'll know about adding keys to allow passwordless login (or, if you prefer, a passphrase) using public key authentication. Specifically, you copy the public key ~/.ssh/id_rsa.pub or ~/.ssh/id_dsa.pub off the machine from which you wish to connect into the /.ssh/authorized_keys file on the target machine. That will allow you to open an SSH session with the machine from the user account on the local machine to the one on the remote machine, without having to type in a password.

tom@conan:~$ scp ~/.ssh/id_rsa.pub crom:.ssh/conan.pubkey
tom@conan:~$ ssh crom
Password:
tom@crom:~$ cd .ssh
tom@crom:~$ cat .ssh/conan.pubkey >>~/.ssh/authorized_keys

However, there's a nice shortcut that I didn't know about when I first learned how to do this, which has since been added to that tutorial too -- specifically, the ssh-copy-id tool, which is available in most modern OpenSSH distributions and combines this all into one less error-prone step. If you have it available to you, it's definitely a much better way to add authorized keys onto a remote machine.

tom@conan:~$ ssh-copy-id crom

Incidentally, this isn't just good for convenience or for automated processes; strong security policies for publically accessible servers might disallow logging in via passwords completely, as usernames and passwords can be guessed. It's a lot harder to guess an entire SSH key, so forcing this login method will reduce your risk of script kiddies or automated attacks brute-forcing your OpenSSH server to zero. You can arrange this by setting ChallengeResponseAuthentication to no in your sshd_config , but if that's a remote server, be careful not to lock yourself out!

[Oct 31, 2017] SSH tunnels and escapes by Tom Ryder

Jan 25, 2012 | sanctum.geek.nz

Quite apart from replacing Telnet and other insecure protocols as the primary means of choice for contacting and administrating services, the OpenSSH implementation of the SSH protocol has developed into a general-purpose toolbox for all kinds of well-secured communication, whether using both simple challenge-response authentication in the form of user and password logins, or for more complex public key authentication.

SSH is useful in a general sense for tunneling pretty much any kind of TCP traffic, and doing so securely and with appropriate authentication. This can be used both for ad-hoc purposes such as talking to a process on a remote host that's only listening locally or within a secured network, or for bypassing restrictive firewall rules, to more stable implementations such as setting up a persistent SSH tunnel between two machines to ensure sensitive traffic that might otherwise be sent in cleartext is not only encrypted but authenticated. I'll discuss a couple of simple examples here, in addition to talking about the SSH escape sequences, about which I don't seem to have seen very much information online.

SSH tunnelling for port forwarding

Suppose you're at work or on a client site and you need some information off a webserver on your network at home, perhaps a private wiki you run, or a bug tracker or version control repository. This being private information, and your HTTP daemon perhaps not the most secure in the world, the server only listens on its local address of 192.168.1.1 , and HTTP traffic is not allowed through your firewall anyway. However, SSH traffic is, so all you need to do is set up a tunnel to port forward a local port on your client machine to a local port on the remote machine. Assuming your SSH-accessible firewall was listening on firewall.yourdomain.com , one possible syntax would be:

$ ssh [email protected] -L5080:192.168.1.1:80

If you then pointed your browser to localhost:5080 , your traffic would be transparently tunnelled to your webserver by your firewall, and you could act more or less as if you were actually at home on your office network with the webserver happily trusting all of your requests. This will work as long as the SSH session is open, and there are means to background it instead if you prefer -- see man ssh and look for the -f and -N options. As you can see by the use of the 192.168.1.1 address here, this also works through NAT.

This can work in reverse, too; if you need to be able to access a service on your local network that might be behind a restrictive firewall from a remote machine, a perhaps less typical but still useful case, you could set up a tunnel to listen for SSH connections on the network you're on from your remote firewall:

$ ssh [email protected] -R5022:localhost:22 -f -N

As long as this TCP session stays active on the machine, you'll be able to point an SSH client on your firewall to localhost on port 5022, and it will open an SSH session as normal:

$ ssh localhost -p 5022

I have used this as an ad-hoc VPN back into a remote site when the established VPN system was being replaced, and it worked very well. With appropriate settings for sshd , you can even allow other machines on that network to use the forward through the firewall, by allowing GatewayPorts and providing a bind_address to the SSH invocation. This is also in the manual.

SSH's practicality and transparency in this regard has meant it's quite typical for advanced or particularly cautious administrators to make the SSH daemon the only process on appropriate servers that listens on a network interface other than localhost , or as the only port left open on a private network firewall, since an available SSH service proffers full connectivity for any legitimate user with a basic knowledge of SSH tunnelling anyway. This has the added bonus of transparent encryption when working on any sort of insecure network. This would be a necessity, for example, if you needed to pass sensitive information to another network while on a public WiFi network at a café or library; it's the same rationale for using HTTPS rather than HTTP wherever possible on public networks.

Escape sequences

If you use these often, however, you'll probably find it's a bit inconvenient to be working on a remote machine through an SSH session, and then have to start a new SSH session or restart your current one just to forward a local port to some resource that you discovered you need on the remote machine. Fortunately, the OpenSSH client provides a shortcut in the form of its escape sequence, ~C .

Typed on its own at a fresh Bash prompt in an ssh session, before any other character has been inserted or deleted, this will drop you to an ssh> prompt. You can type ? and press Enter here to get a list of the commands available:

$ ~C
ssh> ?
Commands:
    -L[bind_address:]port:host:hostport  Request local forward
    -R[bind_address:]port:host:hostport  Request remote forward
    -D[bind_address:]port                Request dynamic forward
    -KR[bind_address:]port               Cancel remote forward

The syntax for the -L and -R commands is the same as when used as a parameter for SSH. So to return to our earlier example, if you had an established SSH session to the firewall of your local network, to forward a port you could drop to the ssh> prompt and type -L5080:localhost:80 to get the same port forward rule working.

[Oct 31, 2017] Uses for ~/.ssh/config by Tom Ryder

Feb 17, 2012 | sanctum.geek.nz

Posted on For system and network administrators or other users who frequently deal with sessions on multiple machines, SSH ends up being one of the most oft-used Unix tools. SSH usually works so well that until you use it for something slightly more complex than starting a terminal session on a remote machine, you tend to use it fairly automatically. However, the ~/.ssh/config file bears mentioning for a few ways it can make using the ssh a client a little easier. Abbreviating hostnames

If you often have to SSH into a machine with a long host and/or network name, it can get irritating to type it every time. For example, consider the following command:

$ ssh web0911.colo.sta.solutionnetworkgroup.com

If you interact with the web0911 machine a lot, you could include a stanza like this in your ~/.ssh/config :

Host web0911
    HostName web0911.colo.sta.solutionnetworkgroup.com

This would allow you to just type the following for the same result:

$ ssh web0911

Of course, if you have root access on the system, you could also do this by adding the hostname to your /etc/hosts file, or by adding the domain to your /etc/resolv.conf to search it, but I prefer the above solution as it's cleaner and doesn't apply system-wide.

Fixing alternative ports

If any of the hosts with which you interact have SSH processes listening on alternative ports, it can be a pain to both remember the port number and to type it in every time:

$ ssh webserver.example.com -p 5331

You can affix this port permanently into your .ssh/config file instead:

Host webserver.example.com
    Port 5331

This will allow you to leave out the port definition when you call ssh on that host:

$ ssh webserver.example.com
Custom identity files

If you have a private/public key setup working between your client machine and the server, but for whatever reason you need to use a different key from your normal one, you'll be using the -i flag to specify the key pair that should be used for the connection:

$ ssh -i ~/.ssh/id_dsa.mail srv1.mail.example.com
$ ssh -i ~/.ssh/id_dsa.mail srv2.mail.example.com

You can specify a fixed identity file in .ssh/config just for these hosts instead, using an asterisk to match everything in that domain:

Host *.mail.example.com
    IdentityFile ~/.ssh/id_dsa.mail

I need to do this for Mikrotik's RouterOS connections, as my own private key structure is 2048-bit RSA which RouterOS doesn't support, so I keep a DSA key as well just for that purpose.

Logging in as a different user

By default, if you omit a username, SSH assumes the username on the remote machine is the same as the local one, so for servers on which I'm called tom , I can just type:

tom@conan:$ ssh server.network

However, on some machines I might be known as a different username, and hence need to remember to connect with one of the following:

tom@conan:$ ssh -l tomryder server.anothernetwork
tom@conan:$ ssh [email protected]

If I always connect as the same user, it makes sense to put that into my .ssh/config instead, so I can leave it out of the command entirely:

Host server.anothernetwork
    User tomryder
SSH proxies

If you have an SSH server that's only accessible to you via an SSH session on an intermediate machine, which is a very common situation when dealing with remote networks using private RFC1918 addresses through network address translation, you can automate that in .ssh/config too. Say you can't reach the host nathost directly, but you can reach some other SSH server on the same private subnet that is publically accessible, publichost.example.com :

Host nathost
    ProxyCommand ssh -q -W %h:%p public.example.com

This will allow you to just type:

$ ssh nathost
More information

The above are the .ssh/config settings most useful to me, but there are plenty more available; check man ssh_config for a complete list.

[Oct 31, 2017] SSH agents by Tom Ryder

Notable quotes:
"... The ssh-agent program is designed as a wrapper for a shell. If you have a private and public key setup ready, and you have remote machines for which your key is authorised, you can get an idea of how the agent works by typing: ..."
Feb 24, 2012 | sanctum.geek.nz

Public key authentication has a lot of advantages for connecting to servers, particularly if it's the only allowed means of authentication, reducing the chances of a brute force password attack to zero. However, it doesn't solve the problem of having to type in a password or passphrase on each connection, unless you're using a private key with no passphrase, which is quite risky if the private key is compromised.

Thankfully, there's a nice supplement to a well-secured SSH key setup in the use of agents on trusted boxes to securely store decrypted keys per-session, per-user. Judicious use of an SSH agent program on a trusted machine allows you to connect to any server for which your public key is authorised by typing your passphrase to decrypt your private key only once.

SSH agent setup

The ssh-agent program is designed as a wrapper for a shell. If you have a private and public key setup ready, and you have remote machines for which your key is authorised, you can get an idea of how the agent works by typing:

$ ssh-agent bash

This will prompt you for your passphrase, and once entered, within the context of that subshell, you will be able to connect to authorised remote servers without typing in the passphrase again. Once loaded, you can examine the identities you have by using ssh-add -l to see the fingerprints, and ssh-add -L for the public keys:

$ ssh-agent bash
Enter passphrase for /home/user/.ssh/id_rsa:
Identity added: /home/user/.ssh/id_rsa (/home/user/.ssh/id_rsa)
$ ssh-add -l
2048 07:1e:7d:c4:8a:0e:bc:b0:74:40:71:49:7c:70:9c /home/user/.ssh/id_rsa (RSA)
$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC+WvWXmVPx6UYB/uf+HTh1Y5zEVOmSeFfj6IC0fwN
lELVoFco9qdM4cuh6E6UaDURezjLSiayKt237DFHMgK9Hp4QPgN3ZJ7f7mesH7EHRnpLcvt0Rl3k1I4
C6gConwmkPZj3ax/cr6DAI9v7Ggeo7YPdKYhntB4TCEZfXlfihF5Vh5A2Od8cCNqy5KFKsFaLoI8Gwr
+ZC0CoxIoW6t5t6C/ZNRK2ojVwRWvp3nxcZsOzSdZJu3jcNHGSr0fxpdythRrOjzdDHgCiBuH+7mGKa
tLewbchdj8AgdeCE410xDJkov+tQuGYXZQAOx+JzWgiDI0VzWZsaV2QuyEF4NyG/
/home/user/.ssh/id_rsa

You can set up your .bashrc file to automatically search for accessible SSH agents to use for the credentials for new connections, and to prompt you for a passphrase to open a new one if it need be. There are very workable instructions on GitHub for setting this up.

If you want to shut down the agent at any time, you can use ssh-agent -k .

$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 790 killed;
SSH agent forwarding

Where the configuration of the remote machine allows it, you can forward authentication requests made from the remote machine back to the agent on your workstation. This is handy for working with semi-trusted gateway machines that you trust to forward your authentication requests correctly, but on which you'd prefer not to put your private key.

This means that if you connect to a remote machine from your workstation running an SSH agent with the following, using the -A parameter:

user@workstation:~$ ssh -A remote.example.com

You can then connect to another machine from remote.example.com using your private key on workstation :

user@remote:~$ ssh another.example.com
SSH agent authentication via PAM

It's also possible to use SSH agent authentication as a PAM method for general authentication, such as for sudo , using pam_ssh_agent_auth .

[Oct 31, 2017] Restricting public keys by Tom Ryder

Mar 09, 2012 | sanctum.geek.nz

It may be the case that while you're happy to allow a user or process to have public key authentication access to your server via the ~/.ssh/authorized_keys file, you don't necessarily want to give them a full shell, or you may want to restrict them from doing things like SSH port forwarding or X11 forwarding.

One method that's supposed to prevent users from accessing a shell is by defining their shell in /etc/passwd as /bin/false , which does indeed prevent them from logging in with the usual ssh or ssh command syntax. This isn't a good approach because it still allows port forwarding and other SSH-enabled services.

If you want to restrict the use of logins with a public key, you can prepend option pairs to its line in the authorized_keys file. Some of the most useful options here include:

So, for example, a public key that is only used to run a script called runscript on the server by the client [email protected] :

command="runscript",client="client.example",no-pty,no-agent-forwarding,no-port-forwarding ssh-rsa AAAAB2....19Q [email protected]

A public key for a user whom you were happy to allow to log in from anywhere with a full shell, but did not want to allow agent, port, or X11 forwarding:

no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAAD3....19Q [email protected]

Use of these options goes a long way to making your public key authentication setup harder to exploit, and is very consistent with the principle of least privilege . To see a complete list of the options available, check out the man page for sshd .

[Oct 31, 2017] Additional sshd ports by Tom Ryder

Dec 11, 2012 | sanctum.geek.nz

Occasionally you may find yourself using a network behind a firewall that doesn't allow outgoing TCP connections with a destination port of 22, meaning you're unable to connect to your OpenSSH server, perhaps to take advantage of a SOCKS proxy for encrypted and unfiltered web browsing.

Since these restricted networks almost always allow port 443 out, since it's the destination port for outgoing HTTPS requests, an easy workaround is to have your OpenSSH server listen on port 443 if it isn't already using the port.

This is sometimes given as a rationale for changing the sshd port completely, but you don't need to do that; you can simply add another Port directive to sshd_config(5) :

Port 22
Port 443

After restarting the OpenSSH server with this new line in place, you can verify that it's listening with ss(8) or netstat(8)

# ss -lnp src :22
State      Recv-Q Send-Q    Local Address:Port      Peer Address:Port
LISTEN     0      128                  :::22                  :::*
users:(("sshd",3039,6))
LISTEN     0      128                   *:22                   *:*
users:(("sshd",3039,5))
# ss -lnp src :443
State      Recv-Q Send-Q    Local Address:Port      Peer Address:Port
LISTEN     0      128                  :::443                 :::*
users:(("sshd",3039,4))
LISTEN     0      128                   *:443                  *:*
users:(("sshd",3039,3))

You'll then be able to connect to the server on port 443, the same way you would on port 22. If you intend this setup to be permanent, it would be a good idea to save the configuration in your ssh_config(5) file, or whichever SSH client you happen to use. Posted in SSH Tagged additional ports , multiple ports , workaround

[Oct 31, 2017] What's new in SSHGuard 2.1

Oct 31, 2017 | www.ctrl.blog

SSHGuard is an intrusion prevention utility that parses logs and automatically blocks misbehaving IP addresses (or their subnets) with the system firewall. SSHGuard version 2.1 was just released with new blocking services, the ability to block a configurable-sized subnet, and better log reading capabilities.

[Oct 31, 2017] Uses for ~-.ssh-config

Oct 31, 2017 | sanctum.geek.nz

For system and network administrators or other users who frequently deal with sessions on multiple machines, SSH ends up being one of the most oft-used Unix tools. SSH usually works so well that until you use it for something slightly more complex than starting a terminal session on a remote machine, you tend to use it fairly automatically. However, the ~/.ssh/config file bears mentioning for a few ways it can make using the ssh a client a little easier. Abbreviating hostnames

If you often have to SSH into a machine with a long host and/or network name, it can get irritating to type it every time. For example, consider the following command:

$ ssh web0911.colo.sta.solutionnetworkgroup.com

If you interact with the web0911 machine a lot, you could include a stanza like this in your ~/.ssh/config :

Host web0911
    HostName web0911.colo.sta.solutionnetworkgroup.com

This would allow you to just type the following for the same result:

$ ssh web0911

Of course, if you have root access on the system, you could also do this by adding the hostname to your /etc/hosts file, or by adding the domain to your /etc/resolv.conf to search it, but I prefer the above solution as it's cleaner and doesn't apply system-wide.

Fixing alternative ports

If any of the hosts with which you interact have SSH processes listening on alternative ports, it can be a pain to both remember the port number and to type it in every time:

$ ssh webserver.example.com -p 5331

You can affix this port permanently into your .ssh/config file instead:

Host webserver.example.com
    Port 5331

This will allow you to leave out the port definition when you call ssh on that host:

$ ssh webserver.example.com
Custom identity files

If you have a private/public key setup working between your client machine and the server, but for whatever reason you need to use a different key from your normal one, you'll be using the -i flag to specify the key pair that should be used for the connection:

$ ssh -i ~/.ssh/id_dsa.mail srv1.mail.example.com
$ ssh -i ~/.ssh/id_dsa.mail srv2.mail.example.com

You can specify a fixed identity file in .ssh/config just for these hosts instead, using an asterisk to match everything in that domain:

Host *.mail.example.com
    IdentityFile ~/.ssh/id_dsa.mail

I need to do this for Mikrotik's RouterOS connections, as my own private key structure is 2048-bit RSA which RouterOS doesn't support, so I keep a DSA key as well just for that purpose.

Logging in as a different user

By default, if you omit a username, SSH assumes the username on the remote machine is the same as the local one, so for servers on which I'm called tom , I can just type:

tom@conan:$ ssh server.network

However, on some machines I might be known as a different username, and hence need to remember to connect with one of the following:

tom@conan:$ ssh -l tomryder server.anothernetwork
tom@conan:$ ssh [email protected]

If I always connect as the same user, it makes sense to put that into my .ssh/config instead, so I can leave it out of the command entirely:

Host server.anothernetwork
    User tomryder
SSH proxies

If you have an SSH server that's only accessible to you via an SSH session on an intermediate machine, which is a very common situation when dealing with remote networks using private RFC1918 addresses through network address translation, you can automate that in .ssh/config too. Say you can't reach the host nathost directly, but you can reach some other SSH server on the same private subnet that is publically accessible, publichost.example.com :

Host nathost
    ProxyCommand ssh -q -W %h:%p public.example.com

This will allow you to just type:

$ ssh nathost
More information

The above are the .ssh/config settings most useful to me, but there are plenty more available; check man ssh_config for a complete list.

[Oct 17, 2017] 5 SSH alias examples in Linux - The Linux Juggernaut

Oct 17, 2017 | www.linuxnix.com

about:blank

As a Linux user, we use ssh command to log in to remote machines. The more you use ssh command, the more time you are wasting in typing some significant commands. We can use either alias defined in your .bashrc file or functions to minimize the time you spend on CLI. But this is not a better solution. The better solution is to use SSH-alias in ssh config file.

A couple of examples where we can better the ssh commands we use.

Connecting to ssh to AWS instance is a pain. Just to type below command, every time is complete waste your time as well.

ssh -p 3000 -i /home/surendra/mysshkey.pem [email protected]

to

ssh aws1

Connecting to a system when debugging.

ssh -vvv [email protected]

to

ssh xyz

In this post, we will see how to achieve shorting of your ssh commands without using bash alias or functions. The main advantage of ssh alias is that all your ssh command shortcuts are stored in a single file and easy to maintain. The other advantage is we can use same alias for both SSH and SCP commands alike

Before we jump into actual configurations, we should know difference between /etc/ssh/ssh_config, /etc/ssh/sshd_config, and ~/.ssh/config files. Below is the explanation for these files.

System-level SSH configurations are stored in /etc/ssh/ssh_config. Whereas user-level ssh configurations are stored in ~/.ssh/config file.

System-level SSH configurations are stored in /etc/ssh/ssh_config. Whereas system level SSH server configurations are stored in /etc/ssh/sshd_config file.

... ... ...

Example1: Create SSH alias for a host(www.linuxnix.com)

Edit file ~/.ssh/config with following content

Host tlj
  User root
  HostName 18.197.176.13
  port 22

... ... ...

Examaple5: Resolve SSH timeout issues in Linux. By default, your ssh logins are timed out if you don't activily use the terminial.

SSH timeouts are one more pain where you have to re-login to a remote machine after a certain time. We can set SSH time out right in side your ~/.ssh/config file to make your session active for whatever time you want. To achieve this we will use two SSH options for keeping the session alive. One ServerAliveInterval keeps your session live for number of seconds and ServerAliveCountMax will initial session after session for a given number.

ServerAliveInterval A
ServerAliveCountMax B

Example:

Host tlj linuxnix linuxnix.com
  User root
  HostName 18.197.176.13
  port 22
  ServerAliveInterval 60
  ServerAliveCountMax 30

We will see some other exiting howto in our next post. Keep visiting linuxnix.com.

[Aug 14, 2017] How-To: Thwart brute force SSH attacks in CentOS/RHEL 5

Aug 14, 2017 | deadlockprocess.wordpress.com

5 Replies

UPDATE: This was a good exercise but I decided to replace the script with denyhosts: http://denyhosts.sourceforge.net/ . In CentOS, just install the EPEL repo first, then you can install it via yum.

This is one of the problems that my team encountered when we opened up a firewall for SSH connections. Brute force SSH attacks using botnets are just everywhere! And if you're not careful, it's quite a headache if one of your servers was compromised.

Lot of tips can be found in the Internet and this is the approach that I came up with based on numerous sites that I've read.

  1. strong passwords
    DUH! This is obvious but most people ignore it. Don't be lazy.
  2. disable root access through SSH
    Most of the time, direct root access is not needed. Disabling it is highly recommended.
    • open /etc/ssh/sshd_config
    • enable and set this SSH config to no: PermitRootLogin no
    • restart SSH: service sshd restart
  3. limit users who can log-in through SSH
    Users who can use the SSH service can be specified. Botnets often use user names that were added by an application, so listing the users can lessen the vulnerability.
    • open /etc/ssh/sshd_config
    • enable and list the users with this SSH config: AllowUsers user1 user2 user3
    • restart SSH: service sshd restart
  4. use a script to automatically block malicious IPs
    Utilizing SSH daemon's log file (in CentOS/RHEL, it's in /var/log/secure ), a simple script can be written that can automatically block malicious IPs using tcp_wrapper's host.deny
    If AllowUsers is enabled, the SSH daemon will log invalid attempts in this format:
    sshd[8207]: User apache from 125.5.112.165 not allowed because not listed in AllowUsers
    sshd[15398]: User ftp from 222.169.11.13 not allowed because not listed in AllowUsers

    SSH also logs invalid attempts in this format: sshd[6419]: Failed password for invalid user zabbix from 69.10.143.168 port 50962 ssh2 Based on the information above, I came up with this script:

    #!/bin/bash
    
    # always exclude these IPs
    exclude_ips='192.168.60.1|192.168.60.10'
    
    file_log='/var/log/secure'
    file_host_deny='/etc/hosts.deny'
    
    tmp_list='/tmp/ips.for.restriction'
    
    if [[ -e $tmp_list ]]
    then
        rm $tmp_list
    fi
    
    # set the separator to new lines only
    IFS=$'\n'
    
    # REGEX filter
    filter="^$(date +%b\\s*%e).+(not listed in AllowUsers|\
    Failed password.+invalid user)"
    
    for ip in $( pcregrep  $filter $file_log \
      | perl -ne 'if (m/from\s+([^\s]+)\s+(not|port)/) { print $1,"\n"; }' )
    do
        if [[ $ip ]]
        then
            echo "ALL: $ip" >> $tmp_list
        fi
    done
    
    # reset
    unset IFS
    
    cat $file_host_deny >> $tmp_list
    sort -u $tmp_list  | pcregrep -v $exclude_ips > $file_host_deny
    

    I deployed the script in root's crontab and set it to run every minute 🙂

There, of course YMMV. Always test deployments and I'm pretty sure there are a lot of other tools available 🙂 This entry was posted in bash , centos/rhel , how to , linux , security , ssh and tagged security , ssh on July 8, 2010 by tar .

[Aug 04, 2017] SSH Troubleshooting - Metawerx Java Wiki

Jun 97, 2007 | wiki.metawerx.net
SSH Troubleshooting

This page shows common problems experienced with SSH in general, and when establishing an SSH tunnel , and solutions for each problem.

Tip: Most port-forwarding problems are caused by a basic misunderstanding of how an SSH tunnel actually works, so it is highly recommended that you read the SSH Tunnel page before continuing.

Table of Contents

Connection Problems Unable to open connection: Host does not exist Connection fails with the following error:
Unable to open connection:
Host does not exist
This error occurs when:
ping servername
Unable to open connection: gethostbyname: unknown error Connection fails with the
    following error:
Unable to open connection:
gethostbyname: unknown error
This error occurs when:

Connection refused Connection fails with the following error:

Failed to connect to 100.101.102.103: Network error: Connection refused
Network error: Connection refused
FATAL ERROR: Network error: Connection refused

This error occurs when:

Failed to add the host to the list of known hosts (/home/USERNAME/.ssh/known_hosts) Connection works, but the following warning is issued
Failed to add the host to the list of known hosts (/home/USERNAME/.ssh/known_hosts)

This error occurs when:

To fix, execute these commands (as root) to reset the permissions to their correct values (replace USERNAME with the appropriate username)

cd ~
chown USERNAME /home/username
chown USERNAME -R /home/username/.ssh
chmod 700 /home/USERNAME/.ssh
chmod 600 /home/USERNAME/.ssh/*
Authentication Problems When using a key, you are prompted for a password (instead of automatically authenticating)

This can be caused be:

Unable to use key file "keys\KEYNAME.ppk" (unable to open file)

This is caused by an inability to open the specified SSH key file.

Tunnel Problems / Port Forwarding Problems

Note that some of these errors will only appear if verbose-output (-v) is switched on for the PLINK command or SSH commands. PuTTY hides them, but PLINK can be used with exactly the same command line arguments, so test with PLINK and the -v command line option.

Forwarded connection refused by server: Administratively prohibited [open failed], or channel N: open failed: administratively prohibited: open failed

This error appears in the PLINK/PuTTY/ssh window when:

For example, you have tried to connect to servername.example.com using an SSH command line argument such as:

-L 127.0.0.1:3500:servername.example.com:3506
However, servername.example.com does not exist, is not permitted, or cannot be resolved correctly by the remote server. Unfortunately, the error message is quite vague, and always makes it look like a security issue. Verify the server name is correct and try again, then check with your administrator.

When this is the problem the following will appear in the SSH server logs (eg: /var/log/auth.log in Linux):

Nov 28 17:00:57 server sshd[27850]: error: connect_to servername.example.com: unknown host (Name or service not known)

or

Aug 26 17:48:10 server sshd[24180]: Received request to connect to host servername.example.com port NNNN, but the request was denied.

Forwarded connection refused by server: Connect failed [Connection refused]

This error appears in the PLINK/PuTTY/ssh window, when you try to establish a connection to the tunnel, and the server cannot connect to the remote port specified.

For example, you have specified that the tunnel goes to servername.example.com:3506 using an SSH command line argument such as:

-L 127.0.0.1:3500:servername.example.com:3506
When you then try to telnet to 127.0.0.1:3500 on the client machine, this is tunnelled through to the server, which then attempts to connect to servername.example.com:3506. However, that that connection between the server and servername.example.com:3506 is refused.

Check the tunnel server:port is correct, or ensure that the server is able to connect to the specified server:port.

Service lookup failed for destination port ""

This error appears in the PLINK/PuTTY/ssh window, if your tunnel definition is incomplete or incorrect.

For example, the additional space after "3500:" in the following line will cause this error:

line which causes error:
-L 127.0.0.1:3500: mysql5.metawerx.net:3506
correct line:
-L 127.0.0.1:3500:mysql5.metawerx.net:3506
Local port 127.0.0.1:nnnnn forwarding to nnn.nnn.nnn.nnn:nnnnn failed: Network error: Permission denied

This error appears in the PLINK/PuTTY/ssh window, if your PuTTY client cannot listen on the local port you have specified.

This normally occurs because of another service already running on that port.

For example, the tunnel below will fail if you have a local version of SQL/Server already listening on port 1433:

-L 127.0.0.1:1433:sql2005-1.metawerx.net:1433

To fix, close the program that is listening on that port (ie: SQL/Server in the example above).

Advanced: You can also adjust to tunnel from another port, such as 127.0.0.2:1433 or 127.0.0.1:1434. However, with SQL/Server, the Management Console application will only allow connections to 1433. Additionally, it listens on 0.0.0.0:1433, preventing use of port 1433 on any other IP address. Therefore, unless you first adjust the SQL/Server registry settings to listen on a specific IP first, it is not possible to have SQL/Server running at the same time as a local tunnel.

<some program>: not found

If you have connected successfully, but get errors when you try to enter commands at the tunnel prompt, this is because you have access to the tunnel itself, but not to an SSH prompt or any tools on the server. You should not be running these commands at the SSH prompt itself.

Example errors:

If you were trying to establish an SSH tunnel, you have already accomplished this part. Your tunnel should be listening on 127.0.0.1:<some port>. The commands you are trying to execute should be performed in a new Command Prompt or Shell.

Remember - the tunnel is providing access to a remote service, on your local machine, as if the server is your own computer.

You can therefore use any command line or GUI tools at your disposal, and connect directly to 127.0.0.1:<whatever port>.

If you are confused about how this works, see the SSH Tunnel page for diagrams and a full explanation.

See Also
Problem not found / not solved? Something to add?

[Aug 04, 2017] John E. McCarthy

Dec 20, 2016 | www.racf.bnl.gov
Contributors: Christopher Hollowell, John DeStefano There are a number of problems that can cause failures when connecting to the RACF. Here are some things to look at and try in order to resolve your problem.
Contents
  1. Private and Public Key Issues
  2. Username Issues
  3. Ownership/Access Rights Issues
  4. PuTTY Issues
  5. Viewing Your Public Key
  6. Frozen Sessions and Terminals
  7. Host Key Issues
  8. Error: Agent admitted failure to sign using the key
  9. Further Troubleshooting
Private and Public Key Issues Username Issues

If your username on your local system is different from your username at the RACF, then you must specify your RACF username when you connect to the RACF, using the -l option to the ssh commmand:

ssh -l [username] [RACF-hostname]

or prepending username@ to the SSH gateway system name (no space between the @ and the SSH gateway system name):

ssh [username]@[RACF-hostname]

In Windows SSH clients, there is typically a text box in which you type in your username.

Ownership/Access Rights Issues

If you are using a Linux/UNIX based SSH client, please check the ownership and access rights of your ~/.ssh/ directory and
the private key file in that directory. Both must be owned by your local user account (not necessarily the same as your
RACF user account). The rights on your ~/.ssh/ directory should be 700 , and the rights on the private key file (possibly,
but not definitely, named ~/.ssh/id_rsa ), should be 600 . The important thing here is that "group" and "other" access rights
must be 00 .

PuTTY Issues

If you are using PuTTY in Windows , then you have to either import your private key , or somehow tell PuTTY where the key file is.

In the main PuTTY Configuration, click on SSH and then Auth . The window will have a text box where you can put the path
of the key or browse for it. See Windows SSH Key Generation for more information on generating SSH keys for use with PuTTY.

You may also need to forward your private key through a remote gateway machine to another server. See SSH Agent for more information on storing and forwarding your private key.

Viewing Your Public Key

You can view the contents of the public key you uploaded to the RACF by directing your Web client to:
https://www.racf.bnl.gov/docs/authentication/ssh/sshkeys
and clicking on SSH Public Key File Viewing Utility . You can check this against the public key that may be on your local
system (the public key is not required to be on your local system; the private key is required to be there). If they
are not the same, then the private key on your local system may not paired with the public key you uploaded to the RACF.

If you have both private and public keys on your local system, check the date/time stamps on them, as they should be the same. If they are not the same, then the private key on your local system may not be paired with the public key that you uploaded to the RACF. If you are using the openssh client, then you can also check to see if your local private key is paired to the public key that you uploaded to the RACF. Run the command:

ssh-keygen -y

on your local system. It will ask for the filename of your private key and its passphrase and will display the public
key (without the trailing comment field) that is paired with it. Check this against the results of viewing the public key
you uploaded to the RACF as described above.

Frozen Sessions and Terminals

If your connection or session intermittently freezes, try adding a server keep-alive option to your usual SSH command:

ssh ... -o ServerAliveInterval=120

This ensures that a set of request and acknowledgment packets will be sent between the connection every two minutes, even when no other data has been requested. You can also add this option to your SSH configuration file ( ~/.ssh/config ) instead of specifying it with each SSH command:

 ServerAliveInterval 120
Host Key Issues

Sometimes host key problems can close the ssh connection before login completes. If you see an error like this:

ssh_exchange_identification: Connection closed by remote host

Then you might try removing the offending host key from your ~/.ssh/known_hosts file and try again.

Error: Agent admitted failure to sign using the key

This error might occur if you accidentally load the wrong SSH identity for a specific key, if you've uploaded a new public key that hasn't yet been synced with your account (or uploaded multiple or invalid keys), or if you're trying to load too many SSH identities at one time. Your best recourse is usually to:

  1. Log out of all current sessions
  2. Log back in
  3. Add your identity with the ssh-add command.
Further Troubleshooting

Some additional SSH-related sites and resources:

If all else fails, try running this command, substituting your account user name for [username] :

... ... ...

[Aug 04, 2017] Troubleshooting SSH Connections

Aug 04, 2017 | www.unixlore.net

I've helped a few people recently who have had trouble getting OpenSSH working properly; I've also had my share of issues over the years. Generally problems with SSH connections fall into two groups - network related and server related. Most of these problems can be fixed fairly quickly if you know what to look for.

Network Related Problems

These will typically be caused by improper routing or firewall configurations. Here are some things to check.

1. If your SSH server sits behind a firewall or router, make sure the default route of your internal SSH server points back to that firewall or router. Seems obvious, but it's common to forget about the return trip packets need to make. This will display your default gateway:

netstat -rn | grep UG

Sometimes the default gateway is just one of your server interfaces, this is OK as long as that interface is directly connected to something that knows how to get back to your client.

2. While you're at it, make sure the incoming SSH packets are actually getting to your SSH server. Tcpdump works very nicely for this, you'll need to be root to run it on the server:

tcpdump -n -i eth0 tcp port 22 and host [IP address of client]

Just replace eth0 by your client-facing interface name. If you don't see incoming SSH packets during connection attempts, it's probably due to a firewall or router access list.

SSH Server Problems

All of these issues revolve around SSH server configuration settings - not misconfigurations necessarily, just settings you may not be aware of.

1. Permissions can be a problem - in its default configuration, OpenSSH sets StrictModes to yes and won't allow any connections if the account you're trying to SSH into has group- or world-writable permissions on its home directory, ~/.ssh directory, or ~/.ssh/authorized_keys file. I typically just make the two directories mode 700 and the authorized_keys file mode 600. The sshd man page suggests this one-liner:

chmod go-w ~/ ~/.ssh ~/.ssh/authorized_keys

2. On Debian or Ubuntu systems, it is possible the keys you are using to connect are blacklisted. This is only an issue on Debian or Debian-based clients, and stems from this now-famous vulnerability in May of 2008 . To detect any such blacklisted keys, run ssh-vulnkey on the client, while logged into the account you are connecting from. Debian and Ubuntu SSH servers will reject any such keys unless the PermitBlacklistedKeys directive in the /etc/ssh/sshd_config file is set to no . I don't recommend you actually leave this security check disabled, but it can be useful to temporarily disable it during testing.

3. Finally, if all else fails, you can see exactly what the SSH server is doing by running it in debug mode on a non-standard port:

/usr/sbin/sshd -d -p 2222

Then, on the client, connect and watch the server output:

ssh -vv -p 2222 [Server IP]

Note the -vv option to provide verbose client output. This alone can sometimes help debug connection issues (and try -vvv for even more output).

[Aug 03, 2017] centos - SELinux preventing ssh via public key

Notable quotes:
"... When I have SELinux enabled I am unable to ssh into the server using the public key. If I setenabled 0 , $USER can now log in. ..."
Jun 13, 2014 | unix.stackexchange.com

Q:

I have user $USER which is a system user account with an authorized users file. When I have SELinux enabled I am unable to ssh into the server using the public key. If I setenabled 0 , $USER can now log in.

What SELinux bool/policy should I change to correct this behaviour without disabling SELinux entirely?

It's worth noting that $USER can login with a password under this default SELinux configuration, I'd appreciate some insight as to what is happening here, and why SELinux isn't blocking that. (I will be disabling

A:

Assuming the filesystem permissions are correct on ~/.ssh/*, then check the output of
sealert -a /var/log/audit/audit.log

There should be a clue in an AVC entry there. Most likely the solution will boil down to running:
restorecon -R -v ~/.ssh

[Aug 03, 2017] SSH Permission denied on Correct Password Authentication - Super User

Aug 03, 2017 | superuser.com
could successfully SSH into my machine yesterday with the exact same credentials I am using today. The machine is running Centos 6.3 . But now for some reason it is giving me permission denied. Here is my -v print out, sshd_config, and ssh_config files.
$ ssh -vg -L 3333:localhost:6666 misfitred@devilsmilk
OpenSSH_6.1p1, OpenSSL 1.0.1c 10 May 2012
debug1: Reading configuration data /etc/ssh_config
debug1: Connecting to devilsmilk [10.0.10.113] port 22.
debug1: Connection established.
debug1: identity file /home/kgraves/.ssh/id_rsa type -1
debug1: identity file /home/kgraves/.ssh/id_rsa-cert type -1
debug1: identity file /home/kgraves/.ssh/id_dsa type -1
debug1: identity file /home/kgraves/.ssh/id_dsa-cert type -1
debug1: identity file /home/kgraves/.ssh/id_ecdsa type -1
debug1: identity file /home/kgraves/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.1
debug1: match: OpenSSH_6.1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.1
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: sending SSH2_MSG_KEX_ECDH_INIT
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ECDSA de:1c:37:d7:84:0b:f8:f9:5e:da:11:49:57:4f:b8:f1
debug1: Host 'devilsmilk' is known and matches the ECDSA host key.
debug1: Found key in /home/kgraves/.ssh/known_hosts:1
debug1: ssh_ecdsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password,keyboard-interacti                                ve
debug1: Next authentication method: publickey
debug1: Trying private key: /home/kgraves/.ssh/id_rsa
debug1: Trying private key: /home/kgraves/.ssh/id_dsa
debug1: Trying private key: /home/kgraves/.ssh/id_ecdsa
debug1: Next authentication method: keyboard-interactive
debug1: Authentications that can continue: publickey,password,keyboard-interacti                                ve
debug1: Next authentication method: password
misfitred@devilsmilk's password:
debug1: Authentications that can continue: publickey,password,keyboard-interacti                                ve
Permission denied, please try again.
misfitred@devilsmilk's password:
debug1: Authentications that can continue: publickey,password,keyboard-interacti                                ve
Permission denied, please try again.
misfitred@devilsmilk's password:
debug1: Authentications that can continue: publickey,password,keyboard-interactive
debug1: No more authentication methods to try.
Permission denied (publickey,password,keyboard-interactive).

Here is my sshd_config file on devilsmilk:

#   $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $

# This is the sshd server system-wide configuration file.  See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin

# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented.  Uncommented options change a
# default value.

Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::

# Disable legacy (protocol version 1) support in the server for new
# installations. In future the default will change to require explicit
# activation of protocol 1
#Protocol 2

# HostKey for protocol version 1
# HostKey /etc/ssh/ssh_host_key
# HostKeys for protocol version 2
# HostKey /etc/ssh/ssh_host_rsa_key
# HostKey /etc/ssh/ssh_host_dsa_key

# Lifetime and size of ephemeral version 1 server key
#KeyRegenerationInterval 1h
#ServerKeyBits 1024

# Logging
# obsoletes QuietMode and FascistLogging
#SyslogFacility AUTH
#LogLevel INFO

# Authentication:

#LoginGraceTime 2m
#PermitRootLogin yes 
StrictModes no
#MaxAuthTries 6
#MaxSessions 10

#RSAAuthentication yes
#PubkeyAuthentication yes
#AuthorizedKeysFile .ssh/authorized_keys
#AuthorizedKeysCommand none
#AuthorizedKeysCommandRunAs nobody

# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#RhostsRSAAuthentication no
# similar for protocol version 2
#HostbasedAuthentication yes
# Change to yes if you don't trust ~/.ssh/known_hosts for
# RhostsRSAAuthentication and HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes

# To disable tunneled clear text passwords, change to no here!
#PasswordAuthentication yes
#PermitEmptyPasswords no

# Change to no to disable s/key passwords
#ChallengeResponseAuthentication yes

# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no
#KerberosUseKuserok yes

# GSSAPI options
#GSSAPIAuthentication no
#GSSAPIAuthentication yes
#GSSAPICleanupCredentials yes
#GSSAPICleanupCredentials yes
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no

# Set this to 'yes' to enable PAM authentication, account processing, 
# and session processing. If this is enabled, PAM authentication will 
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication.  Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
#UsePAM no

# Accept locale-related environment variables
#AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
#AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
#AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
#AcceptEnv XMODIFIERS

#AllowAgentForwarding yes
AllowTcpForwarding yes
GatewayPorts yes
#X11Forwarding no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PrintMotd yes
#PrintLastLog yes
TCPKeepAlive yes
#UseLogin no
#UsePrivilegeSeparation yes
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#ShowPatchLevel no
#UseDNS yes
#PidFile /var/run/sshd.pid
#MaxStartups 10
#PermitTunnel no
#ChrootDirectory none

# no default banner path
#Banner none

# override default of no subsystems
Subsystem   sftp    /usr/libexec/openssh/sftp-server

# Example of overriding settings on a per-user basis
#Match User anoncvs
#   X11Forwarding no
#   AllowTcpForwarding no
#   ForceCommand cvs server

And here is my ssh_config file:

#   $OpenBSD: ssh_config,v 1.25 2009/02/17 01:28:32 djm Exp $

# This is the ssh client system-wide configuration file.  See
# ssh_config(5) for more information.  This file provides defaults for
# users, and the values can be changed in per-user configuration files
# or on the command line.

# Configuration data is parsed as follows:
#  1. command line options
#  2. user-specific file
#  3. system-wide file
# Any configuration value is only changed the first time it is set.
# Thus, host-specific definitions should be at the beginning of the
# configuration file, and defaults at the end.

# Site-wide defaults for some commonly used options.  For a comprehensive
# list of available options, their meanings and defaults, please see the
# ssh_config(5) man page.

# Host *
#   ForwardAgent no
#   ForwardX11 no
#   RhostsRSAAuthentication no
#   RSAAuthentication yes
#   PasswordAuthentication yes
#   HostbasedAuthentication no
#   GSSAPIAuthentication no
#   GSSAPIDelegateCredentials no
#   GSSAPIKeyExchange no
#   GSSAPITrustDNS no
#   BatchMode no
#   CheckHostIP yes
#   AddressFamily any
#   ConnectTimeout 0
#   StrictHostKeyChecking ask
#   IdentityFile ~/.ssh/identity
#   IdentityFile ~/.ssh/id_rsa
#   IdentityFile ~/.ssh/id_dsa
#   Port 22
#   Protocol 2,1
#   Cipher 3des
#   Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
#   MACs hmac-md5,hmac-sha1,[email protected],hmac-ripemd160
#   EscapeChar ~
#   Tunnel no
#   TunnelDevice any:any
#   PermitLocalCommand no
#   VisualHostKey no
#Host * 
# GSSAPIAuthentication yes
# If this option is set to yes then remote X11 clients will have full access
# to the original X11 display. As virtually no X11 client supports the untrusted
# mode correctly we set this to yes.
    ForwardX11Trusted yes
# Send locale-related environment variables
    SendEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES 
    SendEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT 
    SendEnv LC_IDENTIFICATION LC_ALL LANGUAGE
    SendEnv XMODIFIERS

UPDATE REQUEST 1: /var/log/secure

Jan 29 12:26:26 localhost sshd[2317]: Server listening on 0.0.0.0 port 22.
Jan 29 12:26:26 localhost sshd[2317]: Server listening on :: port 22.
Jan 29 12:26:34 localhost polkitd(authority=local): Registered Authentication Agent for session /org/freedesktop/ConsoleKit/Session1 (system bus name :1.29 [/usr/libexec/polkit-gnome-authentication-agent-1], object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 29 12:36:09 localhost pam: gdm-password[3029]: pam_unix(gdm-password:session): session opened for user misfitred by (uid=0)
Jan 29 12:36:09 localhost polkitd(authority=local): Unregistered Authentication Agent for session /org/freedesktop/ConsoleKit/Session1 (system bus name :1.29, object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 29 12:36:11 localhost polkitd(authority=local): Registered Authentication Agent for session /org/freedesktop/ConsoleKit/Session2 (system bus name :1.45 [/usr/libexec/polkit-gnome-authentication-agent-1], object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 29 12:53:39 localhost polkitd(authority=local): Operator of unix-session:/org/freedesktop/ConsoleKit/Session2 successfully authenticated as unix-user:root to gain TEMPORARY authorization for action org.freedesktop.packagekit.system-update for system-bus-name::1.64 [gpk-update-viewer] (owned by unix-user:misfitred)
Jan 29 12:54:02 localhost su: pam_unix(su:session): session opened for user root by misfitred(uid=501)
Jan 29 12:54:06 localhost sshd[2317]: Received signal 15; terminating.
Jan 29 12:54:06 localhost sshd[3948]: Server listening on 0.0.0.0 port 22.
Jan 29 12:54:06 localhost sshd[3948]: Server listening on :: port 22.
Jan 29 12:55:46 localhost su: pam_unix(su:session): session closed for user root
Jan 29 12:55:56 localhost pam: gdm-password[3029]: pam_unix(gdm-password:session): session closed for user misfitred
Jan 29 12:55:56 localhost polkitd(authority=local): Unregistered Authentication Agent for session /org/freedesktop/ConsoleKit/Session2 (system bus name :1.45, object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 29 12:55:58 localhost polkitd(authority=local): Registered Authentication Agent for session /org/freedesktop/ConsoleKit/Session3 (system bus name :1.78 [/usr/libexec/polkit-gnome-authentication-agent-1], object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 29 12:56:29 localhost pam: gdm-password[4044]: pam_unix(gdm-password:auth): conversation failed
Jan 29 12:56:29 localhost pam: gdm-password[4044]: pam_unix(gdm-password:auth): auth could not identify password for [misfitred]
Jan 29 12:56:29 localhost pam: gdm-password[4044]: gkr-pam: no password is available for user
Jan 29 12:57:11 localhost pam: gdm-password[4051]: pam_selinux_permit(gdm-password:auth): Cannot determine the user's name
Jan 29 12:57:11 localhost pam: gdm-password[4051]: pam_succeed_if(gdm-password:auth): error retrieving user name: Conversation error
Jan 29 12:57:11 localhost pam: gdm-password[4051]: gkr-pam: couldn't get the user name: Conversation error
Jan 29 12:57:17 localhost pam: gdm-password[4053]: pam_unix(gdm-password:session): session opened for user misfitred by (uid=0)
Jan 29 12:57:17 localhost polkitd(authority=local): Unregistered Authentication Agent for session /org/freedesktop/ConsoleKit/Session3 (system bus name :1.78, object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 29 12:57:17 localhost polkitd(authority=local): Registered Authentication Agent for session /org/freedesktop/ConsoleKit/Session4 (system bus name :1.93 [/usr/libexec/polkit-gnome-authentication-agent-1], object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 29 12:57:49 localhost unix_chkpwd[4495]: password check failed for user (root)
Jan 29 12:57:49 localhost su: pam_unix(su:auth): authentication failure; logname=misfitred uid=501 euid=0 tty=pts/0 ruser=misfitred rhost=  user=root
Jan 29 12:58:04 localhost su: pam_unix(su:session): session opened for user root by misfitred(uid=501)
Jan 29 13:16:16 localhost su: pam_unix(su:session): session closed for user root
Jan 29 13:18:05 localhost su: pam_unix(su:session): session opened for user root by misfitred(uid=501)
Jan 29 13:21:14 localhost su: pam_unix(su:session): session closed for user root
Jan 29 13:21:40 localhost su: pam_unix(su:session): session opened for user root by misfitred(uid=501)
Jan 29 13:24:17 localhost su: pam_unix(su:session): session opened for user misfitred by misfitred(uid=0)
Jan 29 13:27:10 localhost su: pam_unix(su:session): session opened for user root by misfitred(uid=501)
Jan 29 13:28:55 localhost su: pam_unix(su:session): session closed for user root
Jan 29 13:28:55 localhost su: pam_unix(su:session): session closed for user misfitred
Jan 29 13:28:55 localhost su: pam_unix(su:session): session closed for user root
Jan 29 13:29:00 localhost su: pam_unix(su:session): session opened for user root by misfitred(uid=501)
Jan 29 13:31:48 localhost sshd[3948]: Received signal 15; terminating.
Jan 29 13:31:48 localhost sshd[5498]: Server listening on 0.0.0.0 port 22.
Jan 29 13:31:48 localhost sshd[5498]: Server listening on :: port 22.
Jan 29 13:44:58 localhost sshd[5498]: Received signal 15; terminating.
Jan 29 13:44:58 localhost sshd[5711]: Server listening on 0.0.0.0 port 22.
Jan 29 13:44:58 localhost sshd[5711]: Server listening on :: port 22.
Jan 29 14:00:19 localhost sshd[5711]: Received signal 15; terminating.
Jan 29 14:00:19 localhost sshd[5956]: Server listening on 0.0.0.0 port 22.
Jan 29 14:00:19 localhost sshd[5956]: Server listening on :: port 22.
Jan 29 15:03:00 localhost sshd[5956]: Received signal 15; terminating.
Jan 29 15:10:23 localhost su: pam_unix(su:session): session closed for user root
Jan 29 15:10:38 localhost pam: gdm-password[4053]: pam_unix(gdm-password:session): session closed for user misfitred
Jan 29 15:10:38 localhost polkitd(authority=local): Unregistered Authentication Agent for session /org/freedesktop/ConsoleKit/Session4 (system bus name :1.93, object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 29 15:11:21 localhost polkitd(authority=local): Registered Authentication Agent for session /org/freedesktop/ConsoleKit/Session1 (system bus name :1.29 [/usr/libexec/polkit-gnome-authentication-agent-1], object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 29 15:11:32 localhost pam: gdm-password[2919]: pam_unix(gdm-password:session): session opened for user misfitred by (uid=0)
Jan 29 15:11:32 localhost polkitd(authority=local): Unregistered Authentication Agent for session /org/freedesktop/ConsoleKit/Session1 (system bus name :1.29, object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 29 15:11:33 localhost polkitd(authority=local): Registered Authentication Agent for session /org/freedesktop/ConsoleKit/Session2 (system bus name :1.45 [/usr/libexec/polkit-gnome-authentication-agent-1], object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 29 15:15:10 localhost su: pam_unix(su:session): session opened for user root by misfitred(uid=501)
Jan 29 15:30:24 localhost userhelper[3700]: running '/usr/share/system-config-users/system-config-users ' with root privileges on behalf of 'root'
Jan 29 15:32:00 localhost su: pam_unix(su:session): session opened for user misfitred by misfitred(uid=0)
Jan 29 15:32:23 localhost passwd: gkr-pam: changed password for 'login' keyring
Jan 29 15:32:39 localhost passwd: pam_unix(passwd:chauthtok): password changed for misfitred
Jan 29 15:32:39 localhost passwd: gkr-pam: couldn't change password for 'login' keyring: 1
Jan 29 15:33:06 localhost passwd: pam_unix(passwd:chauthtok): password changed for misfitred
Jan 29 15:33:06 localhost passwd: gkr-pam: changed password for 'login' keyring
Jan 29 15:37:08 localhost su: pam_unix(su:session): session opened for user root by misfitred(uid=501)
Jan 29 15:38:16 localhost su: pam_unix(su:session): session closed for user root
Jan 29 15:38:16 localhost su: pam_unix(su:session): session closed for user misfitred
Jan 29 15:38:16 localhost su: pam_unix(su:session): session closed for user root
Jan 29 15:38:25 localhost su: pam_unix(su:session): session opened for user root by misfitred(uid=501)
Jan 29 15:42:47 localhost su: pam_unix(su:session): session closed for user root
Jan 29 15:47:13 localhost sshd[4111]: pam_unix(sshd:session): session opened for user misfitred by (uid=0)
Jan 29 16:49:40 localhost su: pam_unix(su:session): session opened for user root by misfitred(uid=501)
Jan 29 16:55:19 localhost su: pam_unix(su:session): session opened for user root by misfitred(uid=501)
Jan 30 08:34:57 localhost sshd[4111]: pam_unix(sshd:session): session closed for user misfitred
Jan 30 08:34:57 localhost su: pam_unix(su:session): session closed for user root
Jan 30 08:35:24 localhost su: pam_unix(su:session): session opened for user root by misfitred(uid=501)
ssh permissions centos user-accounts openssh
share improve this question edited Jan 30 '13 at 13:40 asked Jan 29 '13 at 20:24 Kentgrav 743 16
1
Have you tried another user? Or changing the password for this one? – fboaventura Jan 29 '13 at 20:46
1
I agree with fboaventura; The configs look fine; try changing the password for your user to what you think it should be, also check that it isn't expired/account locked. And try another user just in case. Also, are you able to log in locally as that user? i.e. is the error specific to SSH or is it having an error via other auth mechs. – Justin Jan 29 '13 at 22:56
1
(1) caplock? (2) From server, post related error in /var/log/secure John Siu Jan 29 '13 at 23:18
@ fboaventura & Justin I did try another user and I also changed the password and tried it again with no success. I can login locally just fine and I can also SSH to localhost just fine. – Kentgrav Jan 30 '13 at 13:33
@ John Siu I added the /var/log/secure and I attempted the SSH right before I copied it. And nothing was added to it. Hope it helps. – Kentgrav Jan 30 '13 at 13:41
add a comment |
3 Answers active oldest votes
up vote 19 down vote server's /etc/ssh/sshd_config :
  1. To enable password authentication, uncomment
    #PasswordAuthentication yes
    
  2. To enable root login, uncomment
    #PermitRootLogin yes
    
  3. To enable ssh key login, uncomment
    #PubkeyAuthentication yes
    #AuthorizedKeysFile .ssh/authorized_keys
    

I believe (1) is what you're looking for.

share improve this answer edited Jul 10 '16 at 11:54 The Sexiest Man in Jamaica 113 answered Jan 30 '13 at 14:02 John Siu 4,542 10 20
Yeah I did this already I actually figured out what the problem was. And as I thought...it was the one thing that should have been blatantly obvious. – Kentgrav Jan 30 '13 at 16:08
2
For anyone else who is wondering you can find it sshd_config here: /etc/ssh/sshd_config – Oliver Dixon Aug 21 '15 at 9:20
The problem with this answer is that the defaults are commented out by default as the comments in the file explain. It doesn't matter if (1) is commented or not because the default is "yes". The correct answer is below. It's probably a DNS problem and can easily test by using the IP address instead of the domain name. – Colin Keenan Sep 18 '15 at 4:41
and you will have to restart the ssh service – Radu Gabriel May 24 at 12:56

[Aug 02, 2017] Why am I still getting a password prompt with ssh with public key authentication

Aug 02, 2017 | unix.stackexchange.com

Thom , asked Apr 16 '12 at 14:38

I'm working from the URL I found here:

http://web.archive.org/web/20160404025901/http://jaybyjayfresh.com/2009/02/04/logging-in-without-a-password-certificates-ssh/

My ssh client is Ubuntu 64 bit 11.10 desktop and my server is Centos 6.2 64 bit. I have followed the directions. I still get a password prompt on ssh.

I'm not sure what to do next.

Rob , answered Apr 17 '12 at 15:28

Make sure the permissions on the ~/.ssh directory and its contents are proper. When I first set up my ssh key auth, I didn't have the ~/.ssh folder properly set up, and it yelled at me.

¹ Except on some distributions (Debian and derivatives) which have patched the code to allow group writability if you are the only user in your group.

Tgr , answered Nov 12 '12 at 7:55

If you have root access to the server, the easy way to solve such problems is to run sshd in debug mode, e.g.:
service ssh stop      # will not kill existing ssh connections
/usr/sbin/sshd -d     # full path to sshd executable needed, 'which sshd' can help
...debug output...
service ssh start

(If you can access the server through any port, you can just use /usr/sbin/sshd -d -p <port number> to avoid having to stop the SSH server. You still need to be root though.)

In the debug output, look for something like

debug1: trying public key file /path/to/home/.ssh/authorized_keys
...
Authentication refused: bad ownership or modes for directory /path/to/home/

cee , answered Sep 23 '12 at 9:31

Is your home dir encrypted? If so, for your first ssh session you will have to provide a password. The second ssh session to the same server is working with auth key. If this is the case, you could move your authorized_keys to an unencrypted dir and change the path in ~/.ssh/config .

What I ended up doing was create a /etc/ssh/username folder, owned by username, with the correct permissions, and placed the authorized_keys file in there. Then changed the AuthorizedKeysFile directive in /etc/ssh/config to :

AuthorizedKeysFile    /etc/ssh/%u/authorized_keys

This allows multiple users to have this ssh access without compromising permissions.

Sahil , answered Jul 3 '12 at 7:34

I faced challenges when the home directory on the remote does not have correct privileges. In my case the user changed the home dir to 777 for some local access with in the team. The machine could not connect with ssh keys any longer. I changed the permission to 744 and it started to work again.

gusior , answered Nov 7 '13 at 0:16

After copying keys to the remote machine and putting them inside the authorized_keys you've got to do something like this:
ssh-agent bash
ssh-add ~/.ssh/id_dsa or id_rsa

Ravindra , answered May 17 '13 at 8:46

Just try these following commands
  1. ssh-keygen

    Press Enter key till you get the prompt

  2. ssh-copy-id -i root@ip_address

    (It will once ask for the password of the host system)

  3. ssh root@ip_address

    Now you should be able to login without any password

David Mackintosh , answered Sep 8 '14 at 18:44

SELinux on RedHat/CentOS 6 has an issue with pubkey authentication , probably when some of the files are created selinux is not setting its ACLs correctly.

To manually fix the SElinux ACLs for the root user:

restorecon -R -v /root/.ssh

Joachim Nilsson , answered Nov 6 '14 at 9:34

We ran into the same problem and we followed the steps in the answer. But it still did not work for us. Our problem was that login worked from one client but not from another (the .ssh directory was NFS mounted and both clients were using the same keys).

So we had to go one step further. By running the ssh command in verbose mode you get a lot of information.

ssh -vv user@host

What we discovered was that the default key (id_rsa) was not accepted and instead the ssh client offered a key matching the client hostname:

debug1: Offering public key: /home/user/.ssh/id_rsa                                    
debug2: we sent a publickey packet, wait for reply                                        
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Offering public key: /home/user/.ssh/id_dsa                                    
debug2: we sent a publickey packet, wait for reply                                        
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Offering public key: user@myclient                                          
debug2: we sent a publickey packet, wait for reply                                        
debug1: Server accepts key: pkalg ssh-rsa blen 277

Obviously this will not work from any other client.

So the solution in our case was to switch the default rsa key to the one that contained user@myclient. When a key is default, there is no checking for client name.

Then we ran into another problem, after the switch. Apparently the keys are cached in the local ssh agent and we got the following error on the debug log:

'Agent admitted failure to sign using the key'

This was solved by reloading the keys to the ssh agent:

ssh-add

,

It would be SSH miss configuration at server end. Server side sshd_config file has to be edited. Located in /etc/ssh/sshd_config . In that file, change variables

Based on http://kaotickreation.com/2008/05/21/disable-ssh-password-authentication-for-added-security/

[Mar 04, 2017] shell - How to scp a folder from remote to local - Stack Overflow

Notable quotes:
"... Did not know about the config file, this is awesome! ..."
Feb 18, 2017 | stackoverflow.com
How to scp a folder from remote to local? [closed]
Ask Question up vote 1197 down vote favorite 340

I am not sure whether it is possible to scp a folder from remote to local, but still I am left with no other options. I use ssh to log into my server and from there I would like to copy the folder foo to home/user/Desktop (my local). Is there any command so that I can do this?

To use full power of scp you need to go through next steps:

  1. Public key authorisation
  2. Create ssh aliases

Then, for example if you'll have this ~/.ssh/config :


Host
 test
    
User
 testuser
    
HostName
 test
-
site
.
com
    
Port
22022
Host
 prod
    
User
 produser
    
HostName
 production
-
site
.
com
    
Port
22022

you'll save yourself from password entry and simplify scp syntax like this:


scp 
-
r prod
:/
path
/
foo 
/
home
/
user
/
Desktop
# copy to local

scp 
-
r prod
:/
path
/
foo test
:/
tmp            
# copy from remote prod to remote test

More over, you will be able to use remote path-completion:


scp test
:/
var
/
log
/
# press tab twice
Display
 all 
151
 possibilities
?
(
y or n
)

Update:

For enabling remote bash-completion you need to have bash-shell on both <source> and <target> hosts, and properly working bash-completion. For more information see related questions:

How to enable autocompletion for remote paths when using scp?
SCP filename tab completion

6
Did not know about the config file, this is awesome! dmastylo Mar 1 '14 at 20:27

What I always use is:


scp 
-
r username@IP
:/
path
/
to
/
server
/
source
/
folder
/
.

. (dot) : it means current folder . so copy from server and paste here only.

IP : can be an IP address like 125.55.41.311 or it can be host like ns1.mysite.com .

[Feb 18, 2017] ssh - scp without replacing existing files in the destination - Unix Linux Stack Exchange

Notable quotes:
"... scp will overwrite the files only if you have write permissions to them. In other words: You can make scp effectively skip said files by temporarily removing the write permissions on them (if you are the files' owner, that is). ..."
"... before running scp (it will complain and skip the existing files). And change them back afterward ( chmod +w to get umask based value). If the files do not all have write permission according to your umask, you would somehow have to store the permissions so that you can restore them. (Gilles' answer overwrites existing files if locally they are newer, I lost valuable data that way. Do not understand why that wrong and harmful answer has so many up votes). I don't get it: how did rsync --ignore-existing cause you to lose data? – ..."
"... Unable to create temporary file Clock skew detected ..."
"... In my case - I could not do this and the solution was: lftp . lftp 's usage for syncronization is below: ..."
"... To copy a whole bunch of files, it's faster to tar them. By using -k you also prevent tar from overwriting files when unpacking it on the target system. ..."
Feb 18, 2017 | unix.stackexchange.com

scp without replacing existing files in the destination

How do I copy an entire directory into a directory of the same name without replacing the content in the destination directory? (instead, I would like to add to the contents of the destination folder) ssh rsync scp synchronization

Use rsync , and pass -u if you want to only update files that are newer in the original directory, or --ignore-existing to skip all files that already exist in the destination.
rsync -au /local/directory/ host:/remote/directory/
rsync -a --ignore-existing /local/directory/ host:/remote/directory/

(Note the / on the source side: without it rsync would create /remote/directory/directory .)

@Anthon I don't understand your comment and I don't see an answer or comment by chandra. --ignore-existing does add without replacing, what data loss do you see? – Gilles Nov 27 '13 at 9:59

Sorry, I only looked at your first example that is where you can have data loss (and is IMHO not what the OP asked for), if you include --ignore-existing data-loss should not happen. – Anthon Nov 27 '13 at 10:08

This does not help if the remote system does not have rsync easily available.... (Like Win32-OpenSSH) – Gert van den Berg Oct 25 '16 at 8:00

@GertvandenBerg rsync is pretty easy to install on Windows, no harder than SSH. – Gilles Oct 25 '16 at 11:51

@Gilles: True, but all of the options seems to involve Cygwin DLLs... (The current state of the MS port of OpenSSH is such that enabling compression on scp is enough to break SCP...) (Getting rsync functional over Win32-OpenSSH also seems non-trivial - hopefully that improves over time) (Solaris 10 is the other example, where a third party package and --rsync-path is needed) – Gert van den Berg Oct 25 '16 at 13:01

scp will overwrite the files only if you have write permissions to them. In other words: You can make scp effectively skip said files by temporarily removing the write permissions on them (if you are the files' owner, that is).

Anthon 49.6k 14 68 132 answered Oct 15 '12 at 21:10 Reimund 491 4 2

Thanks for this. Was exactly the trick I was looking for. – saccharine Jul 16 '13 at 21:02

make sure you copy the files back you add a * to do so. Example scp -r [email protected]:/location/of/files/* /local/location/ Rick May 27 '15 at 19:16

find . -type f -exec chown root:root {} \; – ling Aug 21 '16 at 19:58

You can copy only new files by date. Use find

scp  `find /data/*.gz -type f -mtime +7` USER@SERVER:/backup/

Naks

If you can make the destination file contents read-only:

find . -type f -exec chmod a-w

before running scp (it will complain and skip the existing files). And change 
      them back afterward ( chmod +w to get umask based value). If the files do not all 
      have write permission according to your umask, you would somehow have to store the permissions 
      so that you can restore them. 

(Gilles' answer overwrites existing files if locally they are newer, I lost valuable data that way. Do not understand why that wrong and harmful answer has so many up votes).

I don't get it: how did rsync --ignore-existing cause you to lose data? – Gilles Nov 27 '13 at 10:01 add a comment |

I had a similar task, in my case I could not use rsync , csync , or FUSE because my storage has only SFTP. rsync could not change the date and time for the file, some other utilities (like csync ) showed me other errors: " Unable to create temporary file Clock skew detected ".

If you have access to the storage-server - just install openssh-server or launch rsync as a daemon here.

In my case - I could not do this and the solution was: lftp . lftp 's usage for syncronization is below:

lftp -c "open -u login,password sftp://sft.domain.tld/; \
    mirror -c --verbose=9 -e -R -L /srs/folder /rem/folder"

/src/folder - is the folder on my PC, /rem/folder - is sftp://sft.domain.tld/rem/folder .

You may find man pages by the link: http://lftp.yar.ru/lftp-man.html

scp does overwrite files and there's no switch to stop it doing that, but you can copy things out the way, do the scp and then copy the existing files back. Examples:

  1. Copy all existing files out the way
    mkdir original_files ; cp -r * original_files/
    
    
  2. Copy everything using scp
    scp -r user@server:dir/* ./
    
    
  3. Copy the original files over anything scp has written over:
    cp -r original_files/* ./
    
    

This method doesn't help when you're trying to pull files over from a remote and pick up where you left off. I.e. if the whole purpose is to save time. – Oliver Williams Dec 1 '16 at 17:58

>To copy a whole bunch of files, it's faster to tar them. By using -k you also prevent tar from overwriting files when unpacking it on the target system.

tar -c <source-dir> | ssh <name>@<host> 'tar -kxzf - -C <target-dir>' 

It does make a remote connection. First it tar's the source, pipes it into the ssh connection and unpacks it on the remote system. – huembi Aug 22 '16 at 21:17

[Dec 26, 2016] How To Stop SSH Session From Disconnecting In Linux

Notable quotes:
"... The following steps needs to be performed in your SSH client, not in the remote server. ..."
Dec 26, 2016 | www.ostechnix.com

The following steps needs to be performed in your SSH client, not in the remote server.

To configure the current user, edit SSH config file:

sudo nano ~/.ssh/config

Add the following lines:

Host *
 ServerAliveInterval 60

Please ensure you indent the second line with a space . Let me explain what these lines do. Once you added these lines in your SSH client system, it will send a packet called no-op (No Operation) to your Remote system. The no-op packet will inform the remote system "Nothing to do". Also it tells the SSH client is still connected with the remote system, hence do not close the TCP connection and log you out.

Here "Host *" indicates this configuration is applicable for all remote hosts. "ServerAliveInterval 60" indicates the number of seconds to wait to send a no-op packet.

... ... ...

To apply this settings for all users (globally) in your system, add or modify the following line in /etc/ssh/ssh_config file.

ServerAliveInterval 60

SSH Proxy for Secure Web Browsing

Do you have the need to securely browse an internal-only company webpage remotely? Well, here is a method for tunnelling your web browser through an encrypted connection. Please note that this will NOT hide the DNS queries which can reveal the target site.

Many people don't realize that SSH can emulate a SOCKS proxy. You can use any server you have SSH terminal access to as your own personal proxy.

Lazy Linux: 10 essential tricks for admins by Vallard Benincosa Certified Technical Sales Specialist, IBM

20 Jul 2008 | IBM DeveloperWorks

Trick 5: SSH back door

Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come into you.

In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door. To use it, you'll need a machine on the Internet that you can use as an intermediary.

In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.


Figure 4. Poking a hole in the firewall

Here's how to proceed:

  1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.

  2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you'll forward instructions of port 2222 on blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're not putting ginger out on the Internet naked.

    You can do this with the following syntax:

    ~# ssh -R 2222:localhost:22 [email protected]

    Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

    thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

    to keep the machine busy. And minimize the window.

  3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

    root@tech:~# ssh [email protected] .

  4. Once tech is on the blackbox, they can SSH to ginger using the following command:

    thedude@blackbox:~$: ssh -p 2222 root@localhost

  5. Tech will then be prompted for a password. They should enter the root password of ginger.

  6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 4.)

Trick 6: Remote VNC session through an SSH tunnel

VNC or virtual network computing has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

For example, suppose in Trick 5, ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

  1. Start a VNC server session on ginger. This is done by running something like:

    root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99

    The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are using a really slow connection setting, 8 may be a better option. Using :99 specifies the port the VNC server will be accessible from. The VNC protocol starts at 5900 so specifying :99 means the server is accessible from port 5999.

    When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

  2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

    root@ginger:~# ssh -R 5999:localhost:5999 [email protected]

    Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

    thedude@blackbox:~$ vncviewer localhost:99

    That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

  3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:

    root@tech:~# ssh -L 5999:localhost:5999 [email protected]

    This time the SSH flag we used was -L, which instead of pushing 5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

  4. From tech, VNC to ginger by running the command:

    root@tech:~# vncviewer localhost:99 .

    Tech will now have a VNC session directly to ginger.

While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.

Let me add a trick to this trick: If tech was running the Windows® operating system and didn't have a command-line SSH client, then tech can run Putty. Putty can be set to forward SSH ports by looking in the options in the sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like in Figure 5.


Figure 5. Putty can forward SSH ports for tunneling

If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.

[Aug 09, 2012] Speaking UNIX: The best-kept secrets of UNIX power users by Martin Streicher, Software Developer, Pixel, Byte, and Comma

May 25, 2010 | IBM DeveloperWorld

Shhh . . . secrets about SSH

Secure Shell (SSH) is a rich subsystem used to log in to remote systems, copy files, and tunnel through firewalls-securely. Since SSH is a subsystem, it offers plenty of options to customize and streamline its operation. In fact, SSH provides an entire "dot directory", named $HOME/.ssh, to contain all its data. (Your .ssh directory must be mode 600 to preclude access by others. A mode other than 600 interferes with proper operation.) Specifically, the file $HOME/.ssh/config can define lots of shortcuts, including aliases for machine names, per-host access controls, and more.

Here is a typical block found in $HOME/.ssh/config to customize SSH for a specific host:

Host worker
HostName worker.example.com
IdentityFile ~/.ssh/id_rsa_worker
User joeuser

Each block in ~/.ssh/config configures one or more hosts. Separate individual blocks with a blank line. This block uses four options: Host, HostName, IdentityFile, and User. Host establishes a nickname for the machine specified by HostName. A nickname allows you to type ssh worker instead of ssh worker.example.com. Moreover, the IdentityFile and User options dictate how to log in to worker. The former option points to a private key to use with the host; the latter option provides the login ID. Thus, this block is the equivalent of the command:

ssh [email protected] -i ~/.ssh/id_rsa_worker

A powerful but little-known option is ControlMaster. If set, multiple SSH sessions to the same host share a single connection. Once the first connection is established, credentials are not required for subsequent connections, eliminating the drudgery of typing a password each and every time you connect to the same machine. ControlMaster is so handy, you'll likely want to enable it for every machine. That's accomplished easily enough with the host wildcard, *:
Host *
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p

As you might guess, a block tagged Host * applies to every host, even those not explicitly named in the config file. ControlMaster auto tries to reuse an existing connection but will create a new connection if a shared connection cannot be found. ControlPath points to a file to persist a control socket for sharing. %r is replaced by the remote login user name, %h is replaced by the target host name, and %p stands in for the port used for the connection. (You can also use %l; it is replaced with the local host name.) The specification above creates control sockets with file names akin to:
[email protected]:22

Each control socket is removed when all connections to the remote host are severed. If you want to know which machines you are connected to at any time, simply type ls ~/.ssh and look at the host name portion of the control socket (%h).

The SSH configuration file is so expansive, it too has its own man page. Type man ssh_config to see all possible options. And here's a clever SSH trick: You can tunnel from a local system to a remote one via SSH. The command line to use looks something like this:

$ ssh example.com -L 5000:localhost:3306

This command says, "Connect via example.com and establish a tunnel between port 5000 on the local machine and port 3306 [the MySQL server port] on the machine named 'localhost.'" Because localhost is interpreted on example.com as the tunnel is established, localhost is example.com. With the outbound tunnel-formally called a local forward-established, local clients can connect to port 5000 and talk to the MySQL server running on example.com.

This is the general form of tunneling:

$ ssh proxyhost localport:targethost:targetport

Here, proxyhost is a machine you can access via SSH and one that has a network connection (not via SSH) to targethost. localport is a non-privileged port (any unused port above 1024) on your local system, and targetport is the port of the service you want to connect to.

The previous command tunnels out from your machine to the outside world. You can also use SSH to tunnel in, or connect to your local system from the outside world. This is the general form of an inbound tunnel:

$ ssh user@proxyhost -R proxyport:targethosttargetport

When establishing an inbound tunnel-formally called a remote forward-the roles of proxyhost and targethost are reversed: The target is your local machine, and the proxy is the remote machine. user is your login on the proxy. This command provides a concrete example:

$ ssh [email protected] -R 8080:localhost:80

The command reads, "Connect to example.com as joe, and connect the remote port 8080 to local port 80." This command gives users on example.com a tunnel to Joe's machine. A remote user can connect to 8080 to hit the Web server on Joe's machine.

In addition to -L and -R for local and remote forwards, respectively, SSH offers -D to create an HTTP proxy on a remote machine. See the SSH man page for the proper syntax.

Martin Streicher is a freelance Ruby on Rails developer and the former Editor-in-Chief of Linux Magazine. Martin holds a Masters of Science degree in computer science from Purdue University and has programmed UNIX-like systems since 1986. He collects art and toys. You can reach Martin at [email protected].

[Sep 17, 2011] libssh 0.5.2

libssh is a C library to access SSH services from a program. It can remotely execute programs, transfer files, and serve as a secure and transparent tunnel for remote programs. Its Secure FTP implementation

[Jun 10, 2011] Non-interactive ssh password auth

Sshpass is a tool for non-interactively performing password authentication with SSH's so called "interactive keyboard password authentication"
May 5, 2008 | SourceForge.net

Syntax

sshpass [options] command arguments

Options

If not option is given, sshpass reads the password from the standard input. The user may give at most one alternative source for the password:

-p password - The password is given on the command line. Please note the section titled "SECURITY CONSIDERATIONS".

-f filename - The password is the first line of the file filename.

-d number - number is a file descriptor inherited by sshpass from the runner. The password is read from the open file descriptor.

-e - The password is taken from the environment variable "SSHPASS".

Security Considerations

First and foremost, users of sshpass should realize that ssh's insistance on only getting the password interactively is not without reason. It is close to impossible to securely store the password, and users of sshpass should consider whether ssh's public key authentication provides the same end-user experience, while involving less hassle and being more secure.

The -p option should be considered the least secure of all of sshpass's options. All system users can see the password in the command line with a simple "ps"command. Sshpass makes no attempt to hide the password, as such attempts create race conditions without actually solving the problem. Users of sshpass are encouraged to use one of the other password passing techniques, which are all more secure.

In particular, people writing programs that are meant to communicate the password programatically are encouraged to use an anonymous pipe and pass the pipe's reading end to sshpass using the -d option.

sshpass Examples

1) Run rsync over SSH using password authentication, passing the password on the command line:

rsync -rsh='sshpass -p 12345 ssh -l test' host.example.com:path

2) Specify password explicitly

sshpass -p [yourpassword] ssh [yourusername]@[host] 

[Aug 23, 2010] SSH Tips And Tricks You Need

SymKat

Multiplexing Your Connection

Do you make a lot of connections to the same servers? You may not have noticed how slow an initial connection to a shell is. If you multiplex your connection you will definitely notice it though. Let's test the difference between a multiplexed connection using SSH keys and a non-multiplexed connection using SSH keys:

# Without multiplexing enabled:
$ time ssh [email protected] uptime
 20:47:42 up 16 days,  1:13,  3 users,  load average: 0.00, 0.01, 0.00

real	0m1.215s
user	0m0.031s
sys	0m0.008s

# With multiplexing enabled:
$ time ssh [email protected] uptime
 20:48:43 up 16 days,  1:14,  4 users,  load average: 0.00, 0.00, 0.00

real	0m0.174s
user	0m0.003s
sys	0m0.004s

We can see that multiplexing the connection is much faster, in this instance on an order of 7 times faster than not multiplexing the connection. Multiplexing allows us to have a "control"connection, which is your initial connection to a server, this is then turned into a UNIX socket file on your computer. All subsequent connections will use that socket to connect to the remote host. This allows us to save time by not requiring all the initial encryption, key exchanges, and negotiations for subsequent connections to the server.

To enable multiplexing do the following:

In a shell:

$ mkdir -p ~/.ssh/connections
$ chmod 700 ~/.ssh/connections

Add this to your ~/.ssh/config file:

Host *
ControlMaster auto
ControlPath ~/.ssh/connections/%r_%h_%p

A negative to this is that some uses of ssh may fail to work with your multiplexed connection. Most notably commands which use tunneling like git, svn or rsync, or forwarding a port. For these you can add the option -oControlMaster=no. To prevent a specific host from using a multiplexed connection add the following to your ~/.ssh/config file:

Host YOUR_SERVER_OR_IP
MasterControl no

There are security precautions that one should take with this approach. Let's take a look at what actually happens when we connect a second connection:

$ ssh -v -i /dev/null [email protected]
OpenSSH_4.7p1, OpenSSL 0.9.7l 28 Sep 2006
debug1: Reading configuration data /Users/symkat/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: Applying options for *
debug1: auto-mux: Trying existing master
Last login:
symkat@symkat:~$ exit

As we see no actual authentication took place. This poses a significant security risk if running it from a host that is not trusted, as a user who can read and write to the socket can easily make the connection without having to supply a password. Take the same care to secure the sockets as you take in protecting a private key.

Using SSH As A Proxy

Even Starbucks now has free WiFi in its stores. It seems the world has caught on to giving free Internet at most retail locations. The downside is that more teenagers with "Got Root?"stickers are camping out at these locations running the latest version of wireshark.

SSH's encryption can stand up to most any hostile network, but what about web traffic?

Most web browsers, and certainly all the popular ones, support using a proxy to tunnel your traffic. SSH can provide a SOCKS proxy on localhost that tunnels to your remote server with the -D option. You get all the encryption of SSH for your web traffic, and can rest assured no one will be capturing your login credentials to all those non-ssl websites you're using.

$ ssh -D1080 -oControlMaster=no [email protected]
symkat@symkat:~$

Now there is a proxy running on 127.0.0.1:1080 that can be used in a web browser or email client. Any application that supports SOCKS 4 or 5 proxies can use 127.0.0.1:1080 to tunnel its traffic.

$ nc -vvv 127.0.0.1 1080
Connection to 127.0.0.1 1080 port [tcp/socks] succeeded!

Using One-Off Commands

Often times you may want only a single piece of information from a remote host. "Is the file system full?""What's the uptime on the server?""Who is logged in?"

Normally you would need to login, type the command, see the output and then type exit (or Control-D for those in the know.) There is a better way: combine the ssh with the command you want to execute and get your result:

$ ssh [email protected] uptime
 18:41:16 up 15 days, 23:07,  0 users,  load average: 0.00, 0.00, 0.00

This executed the ssh symkat.com, logged in as symkat, and ran the command "uptime"on symkat. If you're not using SSH keys then you'll be presented with a password prompt before the command is executed.

$ ssh [email protected] ps aux | echo $HOSTNAME
symkats-macbook-pro.local

This executed the command ps aux on symkat.com, sent the output to STDOUT, a pipe on my local laptop picked it up to execute "echo $HOSTNAME" locally. Although in most situations using auxiliary data processing like grep or awk will work flawlessly, there are many situations where you need your pipes and file IO redirects to work on the remote system instead of the local system. In that case you would want to wrap the command in single quotes:

$ ssh [email protected] 'ps aux | echo $HOSTNAME'
symkat.com

As a basic rule if you're using > >> < - or | you're going to want to wrap in single quotes.

It is also worth noting that in using this method of executing a command some programs will not work. Notably anything that requires a terminal, such as screen, irssi, less, or a plethora of other interactive or curses based applications. To force a terminal to be allocated you can use the -t option:

$ ssh [email protected] screen -r
Must be connected to a terminal.
$ ssh -t [email protected] screen -r
$ This worked!

Making SSH A Pipe

Pipes are useful. The concept is simple: take the output from one program's STDOUT and feed it to another program's STDIN. OpenSSH can be used as a pipe into a remote system. Let's say that we would like to transfer a directory structure from one machine to another. The directory structure has a lot of files and sub directories.

We could make a tarball of the directory on our own server and scp it over. If the file system this directory is on lacks the space though we may be better off piping the tarballed content to the remote system.

....
$ tar -cz content | ssh [email protected] 'tar -xz'
$ ssh symcat@symkat
....

What we did in this example was to create a new archive (-c) and to compress the archive with gzip (-z). Because we did not use -f to tell it to output to a file, the compressed archive was send to STDOUT. We then piped STDOUT with | to ssh. We used a one-off command in ssh to invoke tar with the extract (-x) and gzip compressed (-z) arguments. This read the compressed archive from the originating server and unpacked it into our server. We then logged in to see the listing of files.

Additionally, we can pipe in the other direction as well. Take for example a situation where you with to make a copy of a remote database, into a local database:

symkat@chard:~$ echo "create database backup"| mysql -uroot -ppassword
symkat@chard:~$ ssh [email protected] 'mysqldump -udbuser -ppassword symkat' | mysql -uroot -ppassword backup
symkat@chard:~$ echo use backup;select count(*) from wp_links;"| mysql -uroot -ppassword
count(*)
12
symkat@chard:~$

What we did here is to create the database "backup"on our local machine. Once we had the database created we used a one-off command to get a dump of the database from symkat.com. The SQL Dump came through STDOUT and was piped to another command. We used mysql to access the database, and read STDIN (which is where the data now is after piping it) to create the database on our local machine. We then ran a MySQL command to ensure that there is data in the backup table. As we can see, SSH can provide a true pipe in either direction.

[Mar 25, 2010] Use ssh_config To Simplify Your Life By Mitch Frazier

Mar 25, 2010 | Linux Journal

When using multiple systems the indispensable tool is, as we all know, ssh. Using ssh you can login to other (remote) systems and work with them as if you were sitting in front of them. Even if some of your systems exist behind firewalls you can still get to them with ssh, but getting there can end up requiring a number of command line options and the more systems you have the more difficult it gets to remember them. However, you don't have to remember them, at least not more than once: you can just enter them into ssh's config file and be done with it.

For example, let's say that you have two "servers" that you connect to regularly, one at your house that's behind your firewall. Further, let's say that you use dyndns to make your home IP address known, and that you've got ssh listening on port 12022 rather than the default port 22 (and you've got your firewall forwarding that port to the server). So to connect you need to run:

$ ssh -p 12022 example.dyndns.org

The second system, let's say is local and you just connect with:

$ ssh 192.168.1.15

The second one is not too bad to type, but a name would be easier. You could put the name in your /etc/hosts file, or you could set up a local DNS server, but you can also solve this problem using ssh's config file.

To create an ssh config file execute the commands:

$ touch ~/.ssh/config
$ chmod 600 ~/.ssh/config

Now use your favorite text editor to edit the file and enter the following into it:

Host server1
HostName example.dyndns.org
Port 12022

Host server2
HostName 192.168.1.15

The Host option starts a new "section": all the options that follow apply to that host till a new "Host" option is seen. The "HostName" option specifies the "real" host name that ssh tries to connect to (otherwise the "Host" value is used). The "Port" is obviously the port that ssh tries to connect to, if you don't specify a port, the default port is used.

Now you can connect much more simply:

$ ssh server1
$ ssh server2

These are just a few of the options that you can set in ssh's config file. You can also, for example, specify that X11 forwarding be enabled. You can set up local and remote port forwarding (i.e. ssh's -L and -R command line options, respectively). Take a look at the man page (man ssh_config) for more information on the available options.

One of the added benefits of using ssh's config file is that programs like scp, rsync, and rdiff-backup automatically pick up these options also and work just as you'd expect (hope).

______________________

Mitch Frazier is an Associate Editor for Linux Journal and the Web Editor for linuxjournal.com.

[Mar 16, 2010] Resuming interrupted sftp transfers in Linux

Even though the Linux version of the sftp client doesn't offer a direct way to resume an interrupted transfer, doing so is quite simple by using common shell tools, as long as you are able to login to the remote server through a console. Assuming that you are transferring data.zip from source_server to target_server and the transfer was interrupted, you can do the following:

This works for both text and binary files. Apparently a better way would be integrating this ability into the sftp client, which is the way some clients such as putty and winscp work, but until that happy day you can use the tips above as a workaround.

Comments

mikeX:

lftp is a very nice command line ftp client, which supports tab completion, directory mirroring and of course resuming of interrupted downloads. It can also work as an sftp client, with the right protocol prefix, e.g. lftp sftp://user@host, and can even be used in batch mode.

You can find a copy at http://lftp.yar.ru/. Still, nice trick :)

[Jun 2, 2009] sshwindows.sf.net OpenSSH for Windows

OpenSSH for Windows is a free package that installs a minimal OpenSSH server and client utilities in the Cygwin package without needing the full Cygwin installation.

The OpenSSH for Windows package provides full SSH/SCP/SFTP support. SSH terminal support provides a familiar Windows Command prompt, while retaining Unix/Cygwin-style paths for SCP and SFTP.

[Jun 1, 2009] HOWTO setup the Cygwin SSH daemon on a Windows 2003 server

Note : This set of instructions has worked for me at our institution. You should read /usr/share/doc/Cygwin/openssh.README after installing cygwin and check the cygwin mailing list if you encounter problems.

[Mar 28, 2009] Linux.com Starting SSH connections simply with SSHMenu

June 18, 2008 | Linux.com
SSHMenu adds a button to your GNOME panel that displays a configurable drop-down list of hosts that you have might like to connect to with SSH.

SSHMenu is packaged and available in repositories for both Ubuntu (as sshmenu-gnome) and Fedora (gnome-applet-sshmenu). Other SSHMenu packages available for both distributions do not include GNOME support. In those, the button for the SSH menu is started in its own window and an xterm is started when you wish to connect to a host with SSH. If you install the GNOME-aware SSHMenu packages, you can add SSHMenu to your panel by right-clicking the panel and choosing "Add to Panel..." and selecting the "SSH Menu Applet."

When using the GNOME-aware SSHMenu, a gnome-terminal is started to handle your SSH connections, and you can select the profile gnome-terminal should use on a per-host basis. That lets you specify a font and background color in the terminal that can act as a reminder of which host that terminal is connected with.

[Mar 28, 2009] SSHMenu

SSHMenu is a GNOME panel applet* that keeps all your regular SSH connections within a single mouse click.

Each menu option will open an SSH session in a new terminal window. You can organise groups of hosts with separator bars or sub-menus. You can even open all the connections on a submenu (in separate windows or tabs) with one click.

Here's a killer feature: imagine if every time you connected to a production server the terminal window had a red-tinted background, to remind you to tread carefully. Using terminal profiles, SSHMenu allows you to specify colours, fonts, transparency and a variety of other settings on a per-connection basis. You can even set window size and position.

[Mar 10, 2009] Cluster SSH

Perl-based
freshmeat.net

Cluster SSH opens terminal windows with connections to specified hosts and an administration console. Any text typed into the administration console is replicated to all other connected and active windows. This tool is intended for, but not limited to, cluster administration where the same configuration or commands must be run on each node within the cluster. Performing these commands all at once via this tool ensures all nodes are kept in sync.

[Aug 24, 2008] SSH Tutorial for Linux by Mark Krenz (Suso Technology Services Inc.)

This document covers the SSH client on the Linux Operating System and other OSes that use OpenSSH. If you use Windows, please read the document SSH Tutorial for Windows If you use Mac OS X, you should already have OpenSSH installed and can use this document as a reference.

This is one of the top tutorials covering SSH on the Internet. It was originally written back in 1999 and was completely revised in 2006 to include new and more accurate information. It has been read by over 227,000 people and consistently appears at the top of Google's search results for SSH Tutorial and Linux SSH.

[Jun 10, 2008] Bélier 1.0

Belier allows opening a shell or executing a command on a remote computer through an SSH session. The main feature of Belier is its ability to cross several intermediate computers before realizing the... job. You can execute commands with any account available on the remote computer. It is possible to switch an account on intermediate computers before accessing the final computer, and Belier will generate one script for each final computer to reach

[May 6, 2008] freshmeat.net Project details for Silk Tree by Aleksandr O. Levchuk

Ruby script

Silk Tree propagate /etc/passwd and /etc/group files from a master to a list of hosts via SSH. Neither the sending nor the receiving end connect to each other as root. Instead there is a read-only sudo sub-component on the receiver's side that makes the final modifications in /etc. Many checks are made to ensure reliable authorization updates. ACLs are used to enforce a simple security policy. Differences between old and new versions are shown. Two small scripts are included for exporting LDAP users and groups.

[Mar 5, 2008] MindTerm 3.2 by Martin Forssen

About: MindTerm is a complete ssh-client in pure Java. It can be used either as a standalone Java application or as a Java applet. Three packages of importance are provided (terminal, ssh, and security). The terminal package is a rather complete vt102/xterm-terminal, and the ssh-package contains the ssh- protocol and also "drop-in" socket replacements to use ssh-tunnels transparently from a Java application/applet. It also contains functionality to realize a ssh-server. Finally, the security package contains RSA, DES, 3DES, Blowfish, IDEA, and RC4 ciphers.

[Mar 2, 2008] From John Hinsley

Mar 29, 2000

Q: I use telnet from my Linux box at home to use the HP_UX boxes at university. No problems with telnet, but is there a way to get it to export the X display so that I can use tools other than command line ones?

John Hinsley

Short answer: Use ssh instead.

The default for telnet is to preserve a number of environment settings, including TERM, and DISPLAY. (Any recent telnet daemon should also perform some sanitization on these variables to prevent some degenerate values from being propagated through them to a potentially vulnerable program).

So, if you issue a 'set', 'env' or 'printenv' command and look you might find that your DISPLAY variable IS set. However, it's probably set to the wrong thing.

When you run 'startx' on the local system, it sets your DISPLAY variable to something like: DISPLAY=:0.0 X client programs seeing this value under Linux or UNIX will attempt to connect to the X server via a local UNIX domain socket (one of those nodes in the a filesystem whose permissions/type starts with an "s" in a "long" 'ls' output). That works for the local processes talking to the local X server.

However, to start a remote process that needs to talk to your local X server you must set the DISPLAY variable to a hostname and display number. What you need is something like

DISPLAY=123.45.67.85:0.0 or DISPLAY=foo.bar.not:0.0

Programs that are linked against X libraries will automatically search their environment for a DISPLAY value. If it specifies a hostname or IP address, they will attempt to open a TCP connection (Internet domain socket) instead of a local file/node (UNIX domain socket) connection. Specifically they will try to connect to port 6000 for :0.0, and 6001 for ...:1.0, etc. (Incidentally, the .0 in :0.0 or localhost:0.0 refers to a possible display number. Some X servers support multiple displays/monitors, and these address each of the displays as 0.0, 0.1, 1.0, 1.1 etc).

So, one solution is to use the following sort of command (assuming that you are using a Bourne compatible shell like 'bash' which is the Linux default):

DISPLAY=your.local.hostname:0.0 telnet to.your.remote...

... this variation of the familiar syntax sets this value for the DISPLAY in the environment of the following command (that is on the same line as the assignment, and NOT separated with one of the normal command delimiters, like the semicolon).

Naturally you'd probably put this into whatever function, alias, or shell script you are using to start these telnet sessions. You could use a more portable syntax like:

DISPLAY=`hostname`:0.0 telnet ... 

... where the backtick (command substitution) expression is used to fill in the blank. This will allow those shell scripts, etc to adapt to whatever system you copy them to, and will save you from having to fix all of them if you change hostname (and ISP).

Of course, these days your machine's hostname might not match anything that your ISP has set for you. So you might want to extract your IP address and use that instead of your idea of your hostname. I'll leave the extraction of your IP address from the output of the 'ifconfig' command using sh, awk, PERL, TCL or whatever, as an exercise to the reader, it's not difficult (*).

Another problem with using straight IP addresses is that you might be going through some sort of IP masquerading (NAT --- network address translation) between your local system and the remote.

There is a better way!

USE ssh!

ssh will automatically handle your DISPLAY variable for you. When you establish a remote shell session using ssh it creates it's own version of the DISPLAY variable, one which points "localhost:10" (or localhost:11, etc).

What? Yep! You read that right. Your ssh client tells the remote sshd (daemon/server) to pretend to be the "10th (or later) X server" on the remote system. The the sshd will listen for X protocol activity on TCP port 6010 (or 6011, 6012, etc) and relay that through to your local X server. This feature of ssh is called X11 port forwarding. It is completely transparent and automatic.

On top of that all the traffic between your remote X clients and your local display server is encrypted from the time it gets to the remote sshd (X proxy) until it gets to your local ssh client process. It can't be sniffed or spoofed (not without some heretofore unheard of cryptanalysis or the application of a WHOLE LOT or brute computing force).

Also, when you install and configure ssh you can put one or more public keys in the ~/.ssh/authorized_keys on each of the remote systems to which you want access. So long as you keep the corresponding private keys secure on your system, you can safely access your remote accounts without a password. It's as convenient as 'rsh' and as safe as Kerberos (possibly more so).

You can even publish one or more ssh public identities. Then anyone who wants to let you access an account on any of their systems can just add that to the authorized_keys file there. Possession of the public key can let them let you in, while not directly compromising the security of any other sites to which you have access.

On top of all that you can also use the 'scp' program as a "secure 'rcp'." That's a way to copy files to and from a remote system using basically the same syntax as a 'cp' command and without having to start up a copy of ftp or C-Kermit, etc.

It's also possible to set up ssh tunnels and run any number of common protocols through them.

There's also an ssh-agent program. This is a way of allowing you to login, start up one shell under the ssh-agent, give it your passphrase (in effect unlocking your local private key) and having all your other ssh commands in all of descendent processes, including those on remote systems all automatically use the "unlocked" key. When you exit that one ssh-agent shell or X session, you've effectively "locked" the key back up. (It's actually a rather clever hack).

Oh, yeah! That X11 forwarding trick works right through any IP masquerading, NAT, or applications proxying. It's just more traffic between your ssh client and the remote daemon multiplexed in band with the rest of your session.

It makes no sense to use rsh or telnet in the modern world. We should all switch to more secure protocols like ssh, Kerberos etc. (Ironically, the emergence of IPSec and the future of ubiquitously secure DNS may eventually make the 'net safe for plain telnet and rsh protocols. But that's a different story.)

[Dec 11, 2007] An Illustrated Guide to SSH Agent Forwarding

[Dec 1, 2007] gnome-sshman by Jordi Ivars

About: Gnome-sshman is an SSH session manager for GNOME. It is easy and fast to use, and is useful for system administrators that need to connect to many SSH servers. Gnome-sshman saves ssh sessions and allows you to open a saved session with a double click in nautilus.

Changes: The "open sessions folder" button was removed, so nautilus is now an optional dependency. A session information tool was added to view session data and attach notes to an ssh session. Telnet support was added. A warning is given if you are closing a session with opened tabs. A preferences window was added to change colors, fonts, and set other default options. Gconf support was added. Two bugs were corrected: a cypher module bug with the hwrandom module and a bug with GNOME 2.20 nautilus in background mode.

[Dec 1, 2007] Suse 9 Admin Guide

[Nov 20, 2007] Common threads OpenSSH key management, Part 3

[Oct 27, 2007] UNIX System Administration Tools

autosync
Copies files to remote hosts based on a configuration file. (Perl)
View the README
Download version 1.4 - gzipped tarball, 5 KB
Last update: April 2007

SftpDrive Access SFTP as a Windows Drive Letter

SftpDrive lets your applications access your files from anywhere on the Internet safely and securely, like a VPN, without the VPN.

SSH is the industry standard for remote access to Linux, Mac OS X, and UNIX computers because it's safe, secure, and just works from anywhere on the Internet. SSH servers like OpenSSH and VShell have a powerful system called SFTP built-in. Unrelated to the archaic FTP protocol, SFTP is a modern, secure system that gives you the power to treat your network files as if they were right on your desktop. Stream movies and music. Run programs. Load and save any file from any application. Best of all, your SSH server is ready to go.

[Jun 20, 2007] Gentoo Linux Documentation -- Keychain

Many of us use the excellent OpenSSH as a secure, encrypted replacement for the venerable telnet and rsh commands. One of OpenSSH's (and the commercial SSH2's) intriguing features is its ability to authenticate users using the RSA and DSA authentication protocols, which are based upon a pair of complementary numerical "keys". And one of the main appeals of RSA and DSA authentication is the promise of being able to establish connections to remote systems without supplying a password. The keychain script makes handling RSA and DSA keys both convenient and secure. It acts as a front-end to ssh-agent, allowing you to easily have one long-running ssh-agent process per system, rather than per login session. This dramatically reduces the number of times you need to enter your passphrase from once per new login session to once every time your local machine is rebooted.

Keychain was first introduced in a series of IBM developerWorks articles.

Current versions of keychain are known to run on Linux, BSD, Cygwin, Tru64 UNIX, HP-UX, Mac OS X, and Solaris using whatever variant of Bourne shell you have available.

Building OpenSSH--Tools and Tradeoffs, Updated for OpenSSH 3.7.1p2 TCP Wrappers

TCP wrappers provides limited, connection-oriented host-based firewall functionality with which connections can be denied or accepted based on the originating host. Connection attempts are logged using syslog(3C). OpenSSH uses this functionality by linking in the libwrap library. TCP wrappers is dependent on the name and IP address information returned by the name services, such as DNS. It cannot stop low-level network-based attacks, such as port scanning, IP spoofing, or denial of service. For those, a packet-based firewall solution such as SunScreenTM software is necessary. The Solaris 9 OE has TCP wrappers integrated into it, package SFWtcpd, which is located in the /usr/sfw directory. For the Solaris 8 OE, TCP wrappers can be found on the Software Companion CD (starting in the Solaris 8 10/00 release). For the Solaris 2.6 and 7 OE releases, TCP wrappers must be downloaded and built from the source. TCP wrappers is not required to build OpenSSH.

[Jun 4, 2007] Secure Shell FAQ Section 4 Running Secure Shell

On modern machines this is not a problem and you can run ssh via intetd/xinetd, which provides you the ability to use TCP wrapper controls unless ssh is complied with them.

4.6. Can I run Secure Shell from inetd?

Yes, you can. No, you generally shouldn't. And boy, do I hate this question :)

When the Secure Shell daemon is started, it processes its configuration file and generates a cryptographic key. This can take several seconds, especially on a slow or busy server, and the startup time can be unacceptably long.

However, as Mike Friedman writes: "What many people (including me) do is run a 'backup' sshd at a non-standard port out of inetd, for use just when the standalone sshd has failed. This gives you a way to login to restart the regular sshd (or to investigate why it won't start!), but the latter would still be what most users normally connect to (at the standard port 22)."

If you decide to run Secure Shell via inetd:

To reduce the startup time for SSH1, you can reduce the size of the key that is generated with the -b flag (e.g. "-b 512"). The default keysize is 768 bits, and a keysize of 512 bits should be small enough to reduce the startup time. This is not recommended, however, as a 512-bit key is significantly easier to break than a larger key. The key size cannot be altered at runtime with SSH2; a new server key must be generated with ssh-keygen2.

When starting sshd from inetd, be sure to pass it the -i flag so it behaves properly.

[May 10, 2007] Redhat Enterprise Linux securely mount remote Linux - UNIX directory or file system using SSHFS nixCraft by Vivek

cyberciti.biz

FUSE is a Linux kernelmodule also available for FreeBSD, OpenSolaris and Mac OS X that allows non-privileged users to create their own file systems without the need to write any kernel code. This is achieved by running the file system code in user space, while the FUSE module only provides a "bridge"to the actual kernel interfaces. FUSE was officially merged into the mainstream Linux kernel tree in kernel version 2.6.14.

You need to use SSHFS to access to a remote filesystem through SSH or even you can use Gmail account to store files.

Following instructions are tested on CentOS, Fedora Core and RHEL 4/5 only. But instructions should work with any other Linux distro without a problem.

Step # 1: Download and Install FUSE

Visit fuse home page and download latest source code tar ball. Use wget command to download fuse package:
# wget http://superb-west.dl.sourceforge.net/sourceforge/fuse/fuse-2.6.5.tar.gz
Untar source code:
# tar -zxvf fuse-2.6.5.tar.gz
Compile and Install fuse:
# cd fuse-2.6.5
# ./configure
# make
# make install

Step # 2: Configure Fuse shared libraries loading

You need to configure dynamic linker run time bindings using ldconfig command so that sshfs command can load shared libraries such as libfuse.so.2:
# vi /etc/ld.so.conf.d/fuse.conf
Append following path:
/usr/local/lib
Run ldconfig:
# ldconfig

Step # 3: Install sshfs

Now fuse is loaded and ready to use. Now you need sshfs to access and mount file system using ssh. Visit sshfs home page and download latest source code tar ball. Use wget command to download fuse package:
# wget http://easynews.dl.sourceforge.net/sourceforge/fuse/sshfs-fuse-1.7.tar.gz
Untar source code:
# tar -zxvf sshfs-fuse-1.7.tar.gz
Compile and Install fuse:
# cd sshfs-fuse-1.7
# ./configure
# make
# make install

Mounting your remote filesystem

Now you have working setup, all you need to do is mount a filesystem under Linux. First create a mount point:
# mkdir /mnt/remote
Now mount a remote server filesystem using sshfs command:
# sshfs [email protected]: /mnt/remote
Where,

When promoted supply vivek (ssh user) password. Make sure you replace username and hostname as per your requirements.

Now you can access your filesystem securely using Internet or your LAN/ WAN:
# cd /mnt/remote
# ls
# cp -a /ftpdata . &

To unmount file system just type:
# fusermount -u /mnt/remote ...

[Jan 1, 2007] GNU screen and SSH

Updated 13 Sep 04. Nevermind. phil_g's comment says it well. keychain is the way to go. I'll rewrite this when I have more time.

Some co-workers turned me on to GNU screen last year. It's a handy addition to my toolbox. It became most useful after I learned how to use it with SSH. The original URL that gave me the solution appears to be gone (a message in the now-defunct gnu-screen Yahoo group). So I thought I'd write this up and see how it fares when people google gnu screen ssh.

The solution I settled on is a nested invocation I learned from Jason White. I recommend you read my screenrc and my slave screenrc in another window and read along here for commentary. You run an "outer" screen session (the "slave" session) that in turn runs an "inner" (or "master") session. You use the regular escape sequence (Ctrl-A d) to detatch from the master, and you map Ctrl-^ to be the control key for the slave session. If you press Ctrl-^ while using screen this way, you'll see one process in the slave session. It's running ssh-agent. That's the key to using ssh with screen. The slave's only purpose is to run ssh-agent. The master runs as a child of that. Consequently, all shells in the master session are running under the ssh-agent. Just run ssh-add from any master shell, and then all shells have your ssh identity.

For more information about GNU screen, see GNU Screen: an introduction and beginner's tutorial or Power Sessions with Screen. For more information about SSH, see openssh.com.

Nested Screens Not Necessary
phil_g
2004-07-06 08:57 am UTC (link)

You don't need to use nested screens to get this effect. I achieve it by the use of a simple wrapper script for screen. To attach to a screen session, I have a single script that I run; it loads the agent before starting screen. (I use keychain to ensure that only one agent instance is running, regardless of how many times I attach to screen.) See my attach-screen script for specific details.

[Dec 8, 2006] The Connection Manager A Python-based tool to connect to foreign systems using a variety of methods.

[Dec 8, 2006] pam_eps A PAM module for ssh authentication against a remote server.

[Dec 8, 2006] syncpasswd An Expect script that synchronizes passwords via SSH on multiple platforms.

[Dec 8, 2006] SecurID authentication for OpenSSH A SecurID authentication method for OpenSSH.

[Dec 8, 2006] Autossh A program to monitor and automatically reestablish SSH connections.

[Dec 8, 2006] snapshot Incremental disk-to-disk backups using rsync

[Dec 8, 2006] tssh A shell script that tunnels an ssh session through untrusted machines.

[Dec 8, 2006] PerlSSH A Perl module to use SSH and SCP with Perl.

[Nov 5, 2006] Shell Scripts for execution on multiple servers

[Nov 5, 2006] Python Scripts for execution on multiple servers

[Sept 15, 2006] fsh Fast and secure remote command execution.

[Sept 15, 2006] remote_update.pl A script to automate administration via SSH.

[Sept 3, 2006] SSH with Keys HOWTO A document which shows how to use SSH with keys, passphrases, and ssh-agent.

[Aug 27, 2006] Mitigating the Security Risks of SSH The True Issues

But what about administrators using SSH on other platforms? Will they just plop in this tool as a simple FTP replacement, get it to work in that limited role, and then declare success?

The biggest issues with SSH lie at Layer 8 of the OSI model-politics and personnel:

Security mitigations must do more than suggest technical settings for one SSH version. (And the technical settings vary by version, anyway, so don't expect this article to be a primer on SSH server and client security. There are too many features to discuss, and we must address greater issues than just technical settings.)

So what can your organization do to help secure multiple versions of SSH running on multiple operating systems?

Server clinic Connect securely with ssh

[Oct 10, 2005] Eleven SSH Tricks Linux Journal by Daniel Allen

SSH is the descendant of rsh and rlogin, which are non-encrypted programs for remote shell logins. Rsh and rlogin, like telnet, have a long lineage but now are outdated and insecure. However, these programs evolved a surprising number of nifty features over two decades of UNIX development, and the best of them made their way into SSH. Following are the 11 tricks I have found useful for squeezing the most power out of SSH.

SSH, The Secure Shell The Definitive Guide

by Daniel J. Barrett (Author), Richard Silverman (Author)

Sample Chapters

Read an excerpt from this book on ONLamp.com:

www.perl.com - Recipe of the Day

[ Aug 6, 2001] Building and Deploying OpenSSH for the Solaris Operating Environment

Submitted by <Nobody Important> on Monday at 23:31:53 (EDT))

The July BluePrints OnLine magazine includes an article on building OpenSSH for Solaris. It talks about compiling the Zlib compression library, OpenSSL, PRNGD, and OpenSSH using either Forte Compilers or GCC and the appropriate compilation options. There are also some included scripts to help build a Solaris software package for easier deployment and a quite useful and powerful init script.

[May 16, 2001] http://www.chiark.greenend.org.uk/~sgtatham/putty/

PuTTY is free software that provides an ssh client, telnet, and several other things. PuTTY also does both ssh1 and ssh2, and saves settings (e.g., hostnames, IP addresses, and telnet, ssh, orraw selections), providing me a way to record (instead of having to remember) the systems that I can connect to. PuTTY also allows me to change window colors.

[May 26, 2000] DevShed - The Shell Game By Icarus Melonfire Nine parts paper about ssh and its installation.


Recommended Links

Google matched content

Softpanorama Recommended

Top articles

Sites

SSH, The Secure Shell The Definitive Guide

This site is operated by the authors of the O'Reilly book on SSH. The first edition was published in February of 2001, by Dan Barrett and Richard Silverman. Joined by Robert Byrnes, we completed the second edition in May of 2005.

We have also written O'Reilly's

Robert Byrnes

Tutorials


FAQs


Solaris

Solaris 9 and 10 has integrated SSH2.

Secure Shell in the Enterprise

Installing OpenSSH Packages - SPARC and Intel-Solaris 8


Windows Servers

sshwindows.sf.net OpenSSH for Windows


Random Findings



Etc

Society

Groupthink : Two Party System as Polyarchy : Corruption of Regulators : Bureaucracies : Understanding Micromanagers and Control Freaks : Toxic Managers :   Harvard Mafia : Diplomatic Communication : Surviving a Bad Performance Review : Insufficient Retirement Funds as Immanent Problem of Neoliberal Regime : PseudoScience : Who Rules America : Neoliberalism  : The Iron Law of Oligarchy : Libertarian Philosophy

Quotes

War and Peace : Skeptical Finance : John Kenneth Galbraith :Talleyrand : Oscar Wilde : Otto Von Bismarck : Keynes : George Carlin : Skeptics : Propaganda  : SE quotes : Language Design and Programming Quotes : Random IT-related quotesSomerset Maugham : Marcus Aurelius : Kurt Vonnegut : Eric Hoffer : Winston Churchill : Napoleon Bonaparte : Ambrose BierceBernard Shaw : Mark Twain Quotes

Bulletin:

Vol 25, No.12 (December, 2013) Rational Fools vs. Efficient Crooks The efficient markets hypothesis : Political Skeptic Bulletin, 2013 : Unemployment Bulletin, 2010 :  Vol 23, No.10 (October, 2011) An observation about corporate security departments : Slightly Skeptical Euromaydan Chronicles, June 2014 : Greenspan legacy bulletin, 2008 : Vol 25, No.10 (October, 2013) Cryptolocker Trojan (Win32/Crilock.A) : Vol 25, No.08 (August, 2013) Cloud providers as intelligence collection hubs : Financial Humor Bulletin, 2010 : Inequality Bulletin, 2009 : Financial Humor Bulletin, 2008 : Copyleft Problems Bulletin, 2004 : Financial Humor Bulletin, 2011 : Energy Bulletin, 2010 : Malware Protection Bulletin, 2010 : Vol 26, No.1 (January, 2013) Object-Oriented Cult : Political Skeptic Bulletin, 2011 : Vol 23, No.11 (November, 2011) Softpanorama classification of sysadmin horror stories : Vol 25, No.05 (May, 2013) Corporate bullshit as a communication method  : Vol 25, No.06 (June, 2013) A Note on the Relationship of Brooks Law and Conway Law

History:

Fifty glorious years (1950-2000): the triumph of the US computer engineering : Donald Knuth : TAoCP and its Influence of Computer Science : Richard Stallman : Linus Torvalds  : Larry Wall  : John K. Ousterhout : CTSS : Multix OS Unix History : Unix shell history : VI editor : History of pipes concept : Solaris : MS DOSProgramming Languages History : PL/1 : Simula 67 : C : History of GCC developmentScripting Languages : Perl history   : OS History : Mail : DNS : SSH : CPU Instruction Sets : SPARC systems 1987-2006 : Norton Commander : Norton Utilities : Norton Ghost : Frontpage history : Malware Defense History : GNU Screen : OSS early history

Classic books:

The Peter Principle : Parkinson Law : 1984 : The Mythical Man-MonthHow to Solve It by George Polya : The Art of Computer Programming : The Elements of Programming Style : The Unix Hater’s Handbook : The Jargon file : The True Believer : Programming Pearls : The Good Soldier Svejk : The Power Elite

Most popular humor pages:

Manifest of the Softpanorama IT Slacker Society : Ten Commandments of the IT Slackers Society : Computer Humor Collection : BSD Logo Story : The Cuckoo's Egg : IT Slang : C++ Humor : ARE YOU A BBS ADDICT? : The Perl Purity Test : Object oriented programmers of all nations : Financial Humor : Financial Humor Bulletin, 2008 : Financial Humor Bulletin, 2010 : The Most Comprehensive Collection of Editor-related Humor : Programming Language Humor : Goldman Sachs related humor : Greenspan humor : C Humor : Scripting Humor : Real Programmers Humor : Web Humor : GPL-related Humor : OFM Humor : Politically Incorrect Humor : IDS Humor : "Linux Sucks" Humor : Russian Musical Humor : Best Russian Programmer Humor : Microsoft plans to buy Catholic Church : Richard Stallman Related Humor : Admin Humor : Perl-related Humor : Linus Torvalds Related humor : PseudoScience Related Humor : Networking Humor : Shell Humor : Financial Humor Bulletin, 2011 : Financial Humor Bulletin, 2012 : Financial Humor Bulletin, 2013 : Java Humor : Software Engineering Humor : Sun Solaris Related Humor : Education Humor : IBM Humor : Assembler-related Humor : VIM Humor : Computer Viruses Humor : Bright tomorrow is rescheduled to a day after tomorrow : Classic Computer Humor

The Last but not Least Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand ~Archibald Putt. Ph.D


Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contain some broken links as it develops like a living tree...

You can use PayPal to to buy a cup of coffee for authors of this site

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense so you need to be aware of Google privacy policy. You you do not want to be tracked by Google please disable Javascript for this site. This site is perfectly usable without Javascript.

Last modified: September 10, 2020