
# Unix Sysadmin Tips


#### Lazy Linux: 10 essential tricks for admins by Vallard Benincosa  Certified Technical Sales Specialist, IBM

###### 20 Jul 2008 | IBM DeveloperWorks

How to be a more productive Linux systems administrator

Learn these 10 tricks and you'll be the most powerful Linux® systems administrator in the universe...well, maybe not the universe, but you will need these tips to play in the big leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples accompany each trick, so you can duplicate them on your own systems.

The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?

The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin's bag of tricks. These tips will save you time—and even if you don't get paid more money to be more efficient, you'll at least have more time to play Halo.

Trick 1: Unmounting the unresponsive DVD drive

The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it will eject immediately. He then complains that, in most enterprise Linux servers, if a process is running in that directory, then the ejection won't happen. For too long as a Linux administrator, I would reboot the machine and get my disk on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD drive. But this is ineffective.

Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:

# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

Now open up a second terminal and try to eject the DVD drive:

# eject

You'll get a message like:

umount: /media/cdrom: device is busy

Before you free it, let's find out who is using it.

# fuser /media/cdrom

You see that the process is running and, indeed, it is our fault we cannot eject the disk.

Now, if you are root, you can exercise your godlike powers and kill processes:

# fuser -k /media/cdrom

Boom! Just like that, freedom. Now solemnly unmount the drive:

# eject

fuser is good.

Trick 2: Getting your screen back when it's hosed

Try this:

# cat /bin/cat

Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?

You type reset. But wait you say, typing reset is too close to typing reboot or shutdown. Your palms start to sweat—especially if you are doing this on a production machine.

Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:

# reset

Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.

Trick 3: Collaboration with screen

David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."

"Fine," you say. "What machine are you on?"

David responds: " Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:

# su - david

Then you go over to posh:

# ssh posh

Once you are there, you run:

# screen -S foo

Then you holler at David:

"Hey David, run the following command on your terminal: # screen -x foo."

This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.

At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.

The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.

But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type: Ctrl-A D . (I mean, hold down the Ctrl key and strike the A key. Then push the D key.)

You can then reattach by running the screen -x foo command again.

Trick 4: Getting back the root password

You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get on the machine and change the password. This doesn't work in all cases (like if you made a GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS Linux example.

First reboot the system. When it reboots you'll come to the GRUB screen as shown in Figure 1. Move the arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.

Figure 1. GRUB screen after reboot

Next, select the kernel that will boot with the arrow keys, and type E to edit the kernel line. You'll then see something like Figure 2:

Figure 2. Ready to edit the kernel line

Use the arrow key again to highlight the line that begins with kernel, and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments as shown in Figure 3:

Figure 3. Append the argument with the number 1

Then press Enter, then B, and the kernel will boot into single-user mode. Once there, you can run the passwd command to change the password for user root:

sh-3.00# passwd
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

Now you can reboot, and the machine will boot up with your new password.

Trick 5: SSH back door

Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come into you.

In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door. To use it, you'll need a machine on the Internet that you can use as an intermediary.

In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.

Figure 4. Poking a hole in the firewall

Here's how to proceed:

1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.

2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you'll forward connections to port 2222 on blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: you're not putting ginger out on the Internet naked.

You can do this with the following syntax:

~# ssh -R 2222:localhost:22 thedude@blackbox.example.com

Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

to keep the machine busy. And minimize the window.

3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

root@tech:~# ssh thedude@blackbox.example.com

4. Once tech is on the blackbox, they can SSH to ginger using the following command:

thedude@blackbox:~$ ssh -p 2222 root@localhost

5. Tech will then be prompted for a password. They should enter the root password of ginger.

6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.)
Trick 6: Remote VNC session through an SSH tunnel

VNC or virtual network computing has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

For example, suppose in Trick 5, ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

1. Start a VNC server session on ginger. This is done by running something like:

root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99 

The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are using a really slow connection, a depth of 8 may be a better option. The :99 specifies the display the VNC server runs on. The VNC protocol's base port is 5900, so display :99 is reachable on port 5999.
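The display-to-port mapping is just arithmetic, which a quick sketch can confirm (the display numbers are the examples used in this article):

```shell
# VNC listens on TCP port 5900 + display number
display=99
port=$((5900 + display))
echo "$port"    # 5999

# the later PuTTY example uses display :2
echo "$((5900 + 2))"    # 5902
```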

When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com 

Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

thedude@blackbox:~$ vncviewer localhost:99

That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

3. From tech, open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This is done by running:

root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com

This time the SSH flag we used was -L, which, instead of pushing 5999 to blackbox, pulls it from there. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

4. From tech, VNC to ginger by running the command:

root@tech:~# vncviewer localhost:99

Tech will now have a VNC session directly to ginger. While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.

Let me add a trick to this trick: if tech is running the Windows® operating system and doesn't have a command-line SSH client, tech can run PuTTY. PuTTY can be set to forward SSH ports through the options in its sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like in Figure 5.

Figure 5. PuTTY can forward SSH ports for tunneling

If this were set up, tech could VNC to localhost:2 just as if tech were running the Linux operating system.

Trick 7: Checking your bandwidth

Imagine this: Company A has a storage server named ginger, and it is NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS-mount ginger's shared filesystem. The most common and cheapest way to do this is to bond two Gigabit Ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere. So they do this.
But now the question is: how much bandwidth do they really have? Gigabit Ethernet has a theoretical limit of 128MBps. Where does that number come from?

1Gb = 1024Mb; 1024Mb/8 = 128MB; "b" = "bits," "B" = "bytes"

But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:

# wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz

You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user that is viewable on both nodes:

tar zxvf iperf*gz
cd iperf-2.0.2
./configure -prefix=/home/bob/perf
make
make install

On ginger, run:

# /home/bob/perf/bin/iperf -s -f M

This machine will act as the server and print out performance speeds in MBps.

On the beckham node, run:

# /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60

You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal, as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps. In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.

I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!

Trick 8: Command-line scripting and utilities

A Linux systems administrator becomes more efficient by using command-line scripting with authority.
This includes crafting loops and knowing how to parse data using utilities like awk, grep, and sed. There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.

For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:

# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1); done >> /etc/hosts

Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.
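A slightly safer variant of the same loop, as a sketch: it avoids expr by reusing seq's padded counter, and writes to a scratch file (the name hosts.new is just an example) so you can review before appending to /etc/hosts.

```shell
# Generate n001-n200 with matching IPs; review before appending to /etc/hosts.
for i in $(seq -w 1 200); do
    # 10#$i forces base 10 so padded values like 008 aren't read as octal
    printf '192.168.99.%d n%s\n' "$((10#$i))" "$i"
done > hosts.new

head -n 1 hosts.new    # 192.168.99.1 n001
tail -n 1 hosts.new    # 192.168.99.200 n200
```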

As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.

Assume the SSH is set up to authenticate without a password. Then run:

# for num in $(seq -w 200); do ssh n$num "free -tm | grep Mem | awk '{print \$2}'"; done | sort | uniq

A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.

First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:

free -tm | grep Mem | awk '{print $2}'

That command says to:

• Use the free command to get the memory size in megabytes.
• Take the output of that command and use grep to get the line that has the string Mem in it.
• Take that line and use awk to print the second field, which is the total memory in the node.
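The parsing stage can be tried in isolation on captured output; the figures in this sample are made up for illustration:

```shell
# Hypothetical `free -tm` output captured into a variable (values invented):
sample='             total       used       free     shared    buffers     cached
Mem:          7812       1024       6788          0         95        512
Swap:         2047          0       2047
Total:        9859       1024       8835'

# Same grep/awk stage as in the loop above:
echo "$sample" | grep Mem | awk '{print $2}'    # prints 7812
```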

This operation is performed on every node.

Once you have performed the command on every node, the output from all 200 nodes is piped to the sort command so that all the memory values are sorted.

Finally, you eliminate duplicates with the uniq command. This command will result in one of the following cases:

• If all the nodes, n001-n200, have the same memory size, then only one number will be displayed. This is the size of memory as seen by each operating system.
• If node memory size is different, you will see several memory size values.
• Finally, if the SSH failed on a certain node, then you may see some error messages.

This command isn't perfect. If you find that a value of memory is different than what you expect, you won't know on which node it was or how many nodes there were. Another command may need to be issued for that.

What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. That is its real value: speed, for a quick-and-dirty check.

Trick 9: Spying on the console

Some software prints error messages to the console that may not necessarily show up in your SSH session. Using the vcs devices lets you examine these. From within an SSH session, run the following command on a remote server:

# cat /dev/vcs1

This will show you what is on the first console. You can also look at the other virtual terminals using /dev/vcs2, /dev/vcs3, and so on. If a user is typing on the remote system, you'll be able to see what he typed.

In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.

Trick 10: Random system information collection

In Trick 8, you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.

First, let's gather information about the processor. This is easily done as follows:

# cat /proc/cpuinfo

This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.

A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:

# cat /proc/cpuinfo | grep processor | wc -l

I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.
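As an aside, grep can count matching lines itself, so the same check can be written without cat and wc (this assumes a Linux-style /proc):

```shell
# -c makes grep print the number of matching lines directly
grep -c '^processor' /proc/cpuinfo
```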

Another piece of information I may require is disk information. This can be gotten with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes; # df -h also shows which partitions are mounted where.
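Building on df, here is a small sketch that flags any filesystem above an arbitrary usage threshold (80% here); the -P flag keeps each entry on one line so awk's field numbers ($5 usage, $6 mount point) stay aligned:

```shell
# Print mount point and usage for any filesystem above 80% full
df -hP | awk 'NR > 1 {
    use = $5; gsub(/%/, "", use)    # strip the % sign for numeric comparison
    if (use + 0 > 80) print $6, $5
}'
```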

And to end the list, here's a way to look at the firmware of your system—a method to get the BIOS level and the firmware on the NIC.

To check the BIOS version, you can run the dmidecode command. The output isn't easy to grep for a single value, so piping it through less and paging to the section you need is the practical approach. On my Lenovo T61 laptop, the output looks like this:

# dmidecode | less
...
BIOS Information
        Vendor: LENOVO
        Version: 7LET52WW (1.22 )
        Release Date: 08/27/2007
...

This is much more efficient than rebooting your machine and looking at the POST output.

To examine the driver and firmware versions of your Ethernet adapter, run ethtool:

# ethtool -i eth0
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: 0.3-0

Conclusion

There are thousands of tricks you can learn from someone who's an expert at the command line. The best ways to learn are to:

• Work with others. Share screen sessions and watch how others work—you'll see new approaches to doing things. You may need to swallow your pride and let other people drive, but often you can learn a lot.
• Read the man pages. Seriously; reading man pages, even on commands you know like the back of your hand, can provide amazing insights. For example, did you know you can do network programming with awk?
• Solve problems. As the system administrator, you are always solving problems whether they are created by you or by others. This is called experience, and experience makes you better and more efficient.

I hope at least one of these tricks helped you learn something you didn't know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don't like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.

 Vallard Benincosa is a lazy Linux Certified IT professional working for the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.


## Old News ;-)

#### [Jan 14, 2018] How to remount filesystem in read write mode under Linux

###### Jan 14, 2018 | kerneltalks.com

Often, on newly created filesystems or NFS filesystems, we see an error like the one below:

root@kerneltalks # touch file1
touch: cannot touch 'file1': Read-only file system

This is because the file system is mounted read-only. In such a scenario, you have to mount it in read-write mode. First we will see how to check whether a file system is mounted read-only, and then how to remount it read-write.

How to check if a file system is read-only

To confirm that a file system is mounted read-only, use the command below:

# cat /proc/mounts | grep datastore
/dev/xvdf /datastore ext3 ro,seclabel,relatime,data=ordered 0 0

Grep for your mount point in cat /proc/mounts and observe the fourth column, which shows all the options the file system is mounted with. Here ro denotes that the file system is mounted read-only.
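A scripted check can test for the ro flag the same way. This sketch parses a captured /proc/mounts line (in real use you would read /proc/mounts itself and grep for your mount point):

```shell
# Sample /proc/mounts entry, copied from the output above
line='/dev/xvdf /datastore ext3 ro,seclabel,relatime,data=ordered 0 0'

opts=$(echo "$line" | awk '{print $4}')   # fourth field holds the mount options
case ",$opts," in
    *,ro,*) echo "read-only" ;;
    *)      echo "read-write" ;;
esac
```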

You can also get these details using the mount -v command:

root@kerneltalks # mount -v | grep datastore
/dev/xvdf on /datastore type ext3 (ro,relatime,seclabel,data=ordered)

In this output, the file system options are listed in parentheses in the last column.

Re-mount file system in read-write mode

To remount the file system in read-write mode, use the command below:

root@kerneltalks # mount -o remount,rw /datastore
root@kerneltalks # mount -v | grep datastore
/dev/xvdf on /datastore type ext3 (rw,relatime,seclabel,data=ordered)

Observe that after remounting, the option ro changed to rw. The file system is now mounted read-write, and you can write files to it.

Note: it is recommended to fsck a file system before remounting it.

You can check the file system by running fsck on its volume.

root@kerneltalks # df -h /datastore
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2       10G  881M  9.2G   9% /
root@kerneltalks # fsck /dev/xvdf
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
/dev/xvdf: clean, 12/655360 files, 79696/2621440 blocks

Sometimes corrections need to be made to the file system, which requires a reboot to make sure no processes are accessing it.

#### [Jan 14, 2018] Linux yes Command Tutorial for Beginners (with Examples)

###### Jan 14, 2018 | www.howtoforge.com

You can see that the user has to type 'y' for each query. It's in situations like these that yes can help. For the above scenario specifically, you can use yes in the following way:

yes | rm -ri test

Q3. Is there any use of yes when it's used alone?

Yes, there's at least one use: to tell how well a computer system handles a high load. The reason is that the tool utilizes 100% of the processor on a system with a single processor. To apply this test to a system with multiple processors, you need to run one yes process per processor.
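A sketch of that load test, assuming GNU coreutils for nproc and timeout; the two-second duration is only for demonstration, and a real test would run longer while you watch a monitoring tool:

```shell
# Start one `yes` writer per CPU, let them spin for 2 seconds, then reap them.
cpus=$(nproc)
for i in $(seq "$cpus"); do
    timeout 2 yes > /dev/null &
done
wait
echo "stressed $cpus CPUs"
```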

#### [Jan 14, 2018] Working with Vim Editor Advanced concepts

###### Jan 14, 2018 | linuxtechlab.com

Opening multiple files with VI/VIM editor

To open multiple files, command would be same as is for a single file; we just add the file name for second file as well.

$ vi file1 file2 file3

Now to move to the next file, we can use

$ :n

or we can also use

$ :e filename

Run external commands inside the editor

We can run external Linux/Unix commands from inside the vi editor, i.e. without exiting the editor. To issue a command from the editor, go back to Command mode if in Insert mode, and use the bang ('!') followed by the command to be run. The syntax for running a command is:

$ :! command

An example for this would be

$ :! df -H

Searching for a pattern

To search for a word or pattern in the text file, we use the following two commands in Command mode:

• the '/' command searches for the pattern in the forward direction
• the '?' command searches for the pattern in the backward direction

Both of these commands are used for the same purpose; the only difference is the direction they search in. An example would be:

$ :/ search pattern (if at the beginning of the file)

$ :? search pattern (if at the end of the file)

Searching & replacing a pattern

We might need to search for & replace a word or a pattern in our text files. Rather than finding each occurrence of the word in the whole text file & replacing it by hand, we can issue a command from Command mode to replace the word automatically. The syntax for search & replace across the whole file is:

$ :%s/pattern_to_be_found/New_pattern/g

Suppose we want to find the word "alpha" & replace it with the word "beta"; the command would be

$ :%s/alpha/beta/g

If we want to replace only the first occurrence of "alpha" on each line, drop the trailing g:

$ :%s/alpha/beta/
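For comparison, the same substitution can be done non-interactively from the shell with sed, which is handy when many files need the change (the sample text here is invented):

```shell
# Global replace, like :%s/alpha/beta/g in vim
echo "alpha meets alpha" | sed 's/alpha/beta/g'    # beta meets beta

# First occurrence on each line only, like :%s/alpha/beta/
echo "alpha meets alpha" | sed 's/alpha/beta/'     # beta meets alpha
```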

Using Set commands

We can also customize the behaviour and the look and feel of the vi/vim editor by using the set command. Here is a list of some options that can be used with the set command to modify the behaviour of the vi/vim editor:

$ :set ic          ignores case while searching
$ :set smartcase   enforces case-sensitive search
$ :set nu          displays line numbers at the beginning of each line
$ :set hlsearch    highlights the matching words
$ :set ro          changes the file type to read-only
$ :set term        prints the terminal type
$ :set ai          sets auto-indent
$ :set noai        unsets auto-indent

Some other commands to modify vi behaviour are:

$ :colorscheme     changes the color scheme of the editor (VIM editor only)
$ :syntax on       turns on color syntax highlighting for .xml, .html files, etc. (VIM editor only)

This completes our tutorial; do mention your queries/questions or suggestions in the comment box below.

#### [Jan 14, 2018] Learn to use Wget command with 12 examples

###### Jan 14, 2018 | linuxtechlab.com

If we want to save the downloaded file under a different name than its default, we can use the '-O' parameter with the wget command:

$ wget -O nagios_latest https://downloads.sourceforge.net/project/nagios/nagios-4.x/nagios-4.3.1/nagios-4.3.1.tar.gz?r=&ts=1489637334&use_mirror=excellmedia

Replicate a whole website

If you need to download all the contents of a website, you can do so by using the '--mirror' parameter:

$ wget --mirror -p --convert-links -P /home/dan xyz.com


Here, wget --mirror is the command to download the website,

-p, will download all files necessary to display HTML files properly,

--convert-links, will convert links in documents for viewing,

-P /home/dan, will save the file in /home/dan directory.

Download only a certain type of files

To download only a file with certain format type, use '-r -A' parameters,

$ wget -r -A.txt Website_url

Exclude a certain file type

While downloading a website, if you don't want to download a certain file type, you can do so by using the '--reject' parameter:

$ wget --reject=png Website_url


#### [Jan 14, 2018] Sysadmin Tips on Preparing for Vacation by Kyle Rankin

##### "... Make sure all of your backup scripts are working and all of your backups are up to date. ..."
###### Jan 11, 2018 | www.linuxjournal.com

... ... ...

If you do need to take your computer, I highly recommend making a full backup before the trip. Your computer is more likely to be lost, stolen, or broken while traveling than when sitting safely at the office, so I always take a backup of my work machine before a trip. Even better, leave your expensive work computer behind: use a cheaper, more disposable machine for travel, restore just the files and settings you need for work before you leave, and wipe it when you return. If you decide to go the disposable-computer route, I recommend working one or two full work days on that machine before the vacation to make sure all of your files and settings are in place.

Documentation

Good documentation is the best way to reduce or eliminate how much you have to step in when you aren't on call, whether you're on vacation or not. Everything from routine procedures to emergency response should be documented and kept up to date. Honestly, this falls under standard best practices as a sysadmin, so it's something you should have whether or not you are about to go on vacation.

• First, all routine procedures from how you deploy code and configuration changes, how you manage tickets, how you perform security patches, how you add and remove users, and how the overall environment is structured should be documented in a clear step-by-step way. If you use automation tools for routine procedures, whether it's as simple as a few scripts or as complex as full orchestration tools, you should make sure you document not only how to use the automation tools, but also how to perform the same tasks manually should the automation tools fail.
• If you are on call, that means you have a monitoring system in place that scans your infrastructure for problems and pages you when it finds any. Every single system check in your monitoring tool should have a corresponding playbook that a sysadmin can follow to troubleshoot and fix the problem. If your monitoring tool allows you to customize the alerts it sends, create corresponding wiki entries for each alert name, and then customize the alert so that it provides a direct link to the playbook in the wiki.
• If you happen to be the subject-matter expert on a particular system, make sure that documentation in particular is well fleshed out and understandable. These are the systems that will pull you out of your vacation, so look through those documents for any assumptions you may have made when writing them that a junior member of the team might not understand. Have other members of the team review the documentation and ask you questions.

One saying about documentation is that if something is documented in two places, one of them will be out of date. Even if something is documented in only one place, there's a good chance it is out of date unless you perform routine maintenance. It's good practice to review your documentation from time to time and update it where necessary, and right before a vacation is a particularly good time to do it. If you are the only person who knows about the new way to perform a procedure, make sure your documentation covers it.

Finally, have your team maintain a page to capture anything that happens while you are gone that they want to tell you about when you get back. If you are the main maintainer of a particular system, but they had to perform some emergency maintenance of it while you were gone, that's the kind of thing you'd like to know about when you get back. If there's a central place for the team to capture these notes, they will be more likely to write things down as they happen and less likely to forget about things when you get back.

Stable State

The more stable your infrastructure is before you leave and the more stable it stays while you are gone, the less likely you'll be disturbed on your vacation. Right before a vacation is a terrible time to make a major change to critical systems. If you can, freeze changes in the weeks leading up to your vacation. Try to encourage other teams to push off any major changes until after you get back.

Before a vacation is also a great time to perform any preventative maintenance on your systems. Check for any systems about to hit a disk warning threshold and clear out space. In general, if you collect trending data, skim through it for any resources that are trending upward that might go past thresholds while you are gone. If you have any tasks that might add extra load to your systems while you are gone, pause or postpone them if you can. Make sure all of your backup scripts are working and all of your backups are up to date.

Emergency Contact Methods

Although it would be great to unplug completely while on vacation, there's a chance that someone from work might want to reach you in an emergency. Depending on where you plan to travel, some contact options may work better than others. For instance, some cell-phone plans that work while traveling might charge high rates for calls, but text messages and data bill at the same rates as at home.

... ... ... Kyle Rankin is senior security and infrastructure architect, the author of many books including Linux Hardening in Hostile Networks, DevOps Troubleshooting and The Official Ubuntu Server Book, and a columnist for Linux Journal. Follow him @kylerankin

#### [Oct 27, 2017] Neat trick of using su command for killing all processes for a particular user

###### Oct 27, 2017 | unix.stackexchange.com

If you pass -1 as the process ID argument to either the kill shell command or the kill() C function, the signal is sent to all the processes it can reach, which in practice means all the processes of the user running the kill command or syscall.

su -c 'kill -TERM -1' bob


In C (error checking omitted):

#include <signal.h>
#include <unistd.h>

if (fork() == 0) {
    setuid(uid);              /* become the target user */
    signal(SIGTERM, SIG_DFL); /* make sure the child doesn't ignore its own signal */
    kill(-1, SIGTERM);        /* -1: signal every process this user may signal */
    _exit(0);                 /* not normally reached: the kill above also terminates this child */
}


#### [Oct 27, 2017] c - How do I kill all a user's processes using their UID - Unix Linux Stack Exchange

###### Oct 27, 2017 | unix.stackexchange.com

osgx ,Aug 4, 2011 at 10:07

Use pkill -U UID or pkill -u UID, or pass a username instead of the UID. Sometimes skill -u USERNAME may work; another tool is killall -u USERNAME.

skill was Linux-specific and is now obsolete; pkill is more portable (Linux, Solaris, BSD).

pkill allows both numeric and symbolic UIDs, effective and real: http://man7.org/linux/man-pages/man1/pkill.1.html

pkill - ... signal processes based on name and other attributes

    -u, --euid euid,...
Only match processes whose effective user ID is listed.
Either the numerical or symbolical value may be used.
-U, --uid uid,...
Only match processes whose real user ID is listed.  Either the
numerical or symbolical value may be used.


The skill man page says it accepts only a username, not a user ID: http://man7.org/linux/man-pages/man1/skill.1.html

skill, snice ... These tools are obsolete and unportable. The command syntax is poorly defined. Consider using the killall, pkill

  -u, --user user
The next expression is a username.


killall is not marked as obsolete in Linux, but it also will not work with a numeric UID, only a username: http://man7.org/linux/man-pages/man1/killall.1.html

killall - kill processes by name

   -u, --user
Kill only processes the specified user owns.  Command names
are optional.


I think any utility that finds processes via the Linux/Solaris-style /proc (procfs) has to read the full process list (a readdir of /proc), iterating over the numeric subdirectories and checking every process found for a match.

To get the list of users, use getpwent (it returns one user per call).

The skill (procps & procps-ng) and killall (psmisc) tools both use the getpwnam library call to parse the argument of the -u option, so only a username is accepted. pkill (procps & procps-ng) uses both atol and getpwnam to parse its -u / -U argument, so both numeric and textual user specifiers are accepted.
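That /proc walk is easy to sketch in shell. This is a Linux-only sketch, not the actual procps code, and the pids_of_uid helper name is my own:

```shell
# List PIDs owned by a given numeric UID by walking /proc,
# roughly what pkill does internally (Linux-only sketch).
pids_of_uid() {
    for d in /proc/[0-9]*; do
        # the owner of /proc/<pid> is the UID of that process;
        # 2>/dev/null hides races with processes that just exited
        [ "$(stat -c %u "$d" 2>/dev/null)" = "$1" ] && basename "$d"
    done
}

pids_of_uid "$(id -u)"
```

A real pkill also reads /proc/<pid>/status so it can distinguish real from effective UIDs, which this shortcut does not do.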

pkill is not obsolete. It may be unportable outside Linux, but the question was about Linux specifically. – Lars Wirzenius Aug 4 '11 at 10:11

#### [Feb 20, 2017] OpenSuse also has RPM

###### Feb 20, 2017 | cisofy.com

sudo lynis

[ Lynis 2.4.0 ]

################################################################################
Lynis comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
welcome to redistribute it under the terms of the GNU General Public License.
See the LICENSE file for details about using this software.

2007-2016, CISOfy - https://cisofy.com/lynis/
Enterprise support available (compliance, plugins, interface and tools)
################################################################################

[+] Initializing program
------------------------------------
Usage: lynis command [options]

Command:

audit
audit system                  : Perform local security scan
audit system remote           : Remote security scan
audit dockerfile              : Analyze Dockerfile

show
show                          : Show all commands
show version                  : Show Lynis version
show help                     : Show help

update
update info                   : Show update details
update release                : Update Lynis release

Options:

--no-log                          : Don't create a log file
--pentest                         : Non-privileged scan (useful for pentest)
--profile                : Scan the system with the given profile file
--quick (-Q)                      : Quick mode, don't wait for user input

Layout options
--no-colors                       : Don't use colors in output
--quiet (-q)                      : No output
--reverse-colors                  : Optimize color display for light backgrounds

Misc options
--debug                           : Debug logging to screen
--view-manpage (--man)            : View man page
--verbose                         : Show more details on screen
--version (-V)                    : Display version number and quit

Enterprise options
--plugin-dir ""             : Define path of available plugins
--upload                          : Upload data to central node

More options available. Run '/usr/sbin/lynis show options', or use the man page.

No command provided. Exiting..


#### [Feb 19, 2017] How to change the hostname on CentOS and Ubuntu

###### Feb 19, 2017 | www.rosehosting.com
To change the hostname on your CentOS or Ubuntu machine you should run the following command:
# hostnamectl set-hostname virtual.server.com

For more command options you can add the --help  flag at the end.
# hostnamectl --help
hostnamectl [OPTIONS...] COMMAND ...

Query or change system hostname.

-h --help              Show this help
--version           Show package version
-H --host=[USER@]HOST  Operate on remote host
-M --machine=CONTAINER Operate on local container
--transient         Only set transient hostname
--static            Only set static hostname
--pretty            Only set pretty hostname

Commands:
status                 Show current hostname settings
set-hostname NAME      Set system hostname
set-icon-name NAME     Set icon name for host
set-chassis NAME       Set chassis type for host
set-deployment NAME    Set deployment environment for host
set-location NAME      Set location for host


#### [Feb 19, 2017] Trash-cli A Command Line Trashcan For Unix-like Systems - OSTechNix

###### Feb 19, 2017 | www.ostechnix.com
Trash-cli supports the following functions:
• trash-put – Delete files and folders
• trash-empty – Empty the trashcan.
• trash-list – List deleted files and folders.
• trash-restore – Restore a trashed file or folder.
• trash-rm – Remove individual files from the trashcan.
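Under the hood, trash-put follows the FreeDesktop.org trash layout: the file moves into a files/ directory and a matching .trashinfo record is written next to it. A rough sketch (the trash_put function here is illustrative, not trash-cli's actual code):

```shell
# Minimal sketch of the FreeDesktop.org trash layout:
# file goes to TRASH_DIR/files, origin goes to TRASH_DIR/info/<name>.trashinfo
trash_put() {           # usage: trash_put FILE TRASH_DIR
    mkdir -p "$2/files" "$2/info"
    name=$(basename "$1")
    {
        echo "[Trash Info]"
        echo "Path=$(readlink -f "$1")"
        echo "DeletionDate=$(date +%Y-%m-%dT%H:%M:%S)"
    } > "$2/info/$name.trashinfo"
    mv "$1" "$2/files/$name"
}

cd "$(mktemp -d)"
echo bye > doomed.txt
trash_put doomed.txt ./Trash
ls Trash/files Trash/info
```

Restoring (what trash-restore does) is just the reverse: read the Path= line from the .trashinfo record and move the file back.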

#### [Feb 15, 2017] Web proxy, NAS and email server installed as appliance

###### Feb 15, 2017 | www.cyberciti.biz
Operating system : Linux

Purpose : Turn a normal server into an appliance

Artica Tech offers a powerful but easy-to-use Enterprise-Class Web Security and Control solution, usually the preserve of large companies. Prices start at 99€ / year for 5 users.

#### [Feb 15, 2017] Synkron – Folder synchronisation

###### synkron.sourceforge.net

Folder synchronisation

Synkron is an application that helps you keep your files and folders always updated. You can easily sync your documents, music or pictures to have their latest versions everywhere.

Synkron provides an easy-to-use interface and a lot of features. Moreover, it is free and cross-platform.

Features

• Sync multiple folders. With Synkron you can sync multiple folders at once
• Analyse. Analyse folders to see what is going to be done in sync.
• Blacklist. Exclude files from sync. Apply wildcards to sync only the files you want.
• Restore. Restore files that were overwritten or deleted in previous syncs.
• Options. Synkron lets you configure your synchronisations in detail.
• Runs everywhere. Synkron is a cross-platform application that runs on Windows, Mac OS X and Linux.
• Documentation. Have a look at the documentation to learn about all the features of Synkron.

Get Synkron at SourceForge.net. Copyright ©2011, Matúš Tomlein

#### [Feb 15, 2017] grep like tool

###### Feb 15, 2017 | www.cyberciti.biz
A grep-like search tool optimized for programmers. This tool isn't aimed at "search all text files"; it is specifically created to search source code trees, not trees of text files. It searches entire trees by default while ignoring Subversion, Git, and other VCS directories, along with other files that aren't your source code.

 Operating system : Cross-platform Purpose : Search source trees Download url : beyondgrep.com
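GNU grep can approximate that "skip the VCS noise" default with --exclude-dir. A quick demonstration in a scratch directory (file and directory names are made up):

```shell
# A "needle" hidden in .git bookkeeping should not show up in a
# source-tree search; --exclude-dir prunes those directories.
cd "$(mktemp -d)"
mkdir -p project/.git project/src
echo 'needle' > project/.git/config
echo 'needle in code' > project/src/main.c

# finds only project/src/main.c, not the .git copy
grep -rn --exclude-dir=.git --exclude-dir=.svn --exclude-dir=.hg --exclude-dir=CVS needle project
```

The dedicated tool goes further (it also skips backup files, binaries, and so on, and restricts itself to recognized source-file types), but --exclude-dir covers the most common annoyance.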

#### [Feb 15, 2017] 15 Greatest Open Source Terminal Applications Of 2012

###### Dec 11, 2012 | www.cyberciti.biz
Last updated January 7, 2013 in Command Line Hacks , Open Source , Web Developer

Linux on the desktop is making great progress. However, the real beauty of Linux and Unix like operating system lies beneath the surface at the command prompt. nixCraft picks his best open source terminal applications of 2012.

Most of the following tools are packaged by all major Linux distributions and can be installed on *BSD or Apple OS X.

#3: ngrep – Network grep

Fig.02: ngrep in action
Ngrep is a network packet analyzer. It follows most of GNU grep's common features, applying them to the network layer. Ngrep is not related to tcpdump. It is just an easy to use tool. You can run queries such as:

## grep all HTTP GET or POST requests from network traffic on eth0 interface ##
sudo ngrep -l -q -d eth0 "^GET |^POST " tcp and port 80

I often use this tool to find security-related problems and to track down other network and server related problems.

... ... ...

#5: dtrx

dtrx is an acronym for "Do The Right Extraction." It's a tool for Unix-like systems that takes all the hassle out of extracting archives. As a sysadmin, I download source code and tarballs all the time. This tool saves lots of time.

• You only need to remember one simple command to extract tar, zip, cpio, deb, rpm, gem, 7z, cab, lzh, rar, gz, bz2, lzma, xz, and many kinds of exe files, including Microsoft Cabinet archives, InstallShield archives, and self-extracting zip files. If they have any extra compression, like tar.bz2 files, dtrx will take care of that for you, too.
• dtrx will make sure that archives are extracted into their own dedicated directories.
• dtrx makes sure you can read and write all the files you just extracted, while leaving the rest of the permissions intact.
• Recursive extraction: dtrx can find archives inside the archive and extract those too.
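The "dedicated directory" behavior is the part most worth copying even without dtrx installed. A small sketch (extract_into is my own helper, not dtrx itself, and it only handles .tar.gz/.tgz/.zip):

```shell
# Extract an archive into a directory named after it, like dtrx does,
# instead of spraying files into the current directory.
extract_into() {        # usage: extract_into ARCHIVE
    base=$(basename "$1")
    dir=${base%.tar.gz}; dir=${dir%.tgz}; dir=${dir%.zip}
    mkdir -p "$dir"
    case "$1" in
        *.tar.gz|*.tgz) tar -xzf "$1" -C "$dir" ;;
        *.zip)          unzip -q "$1" -d "$dir" ;;
    esac
}

cd "$(mktemp -d)"
mkdir src && echo hi > src/a.txt
tar -czf proj.tar.gz -C src .
extract_into proj.tar.gz
ls proj
```

Real dtrx also fixes up permissions and recurses into nested archives, which this sketch does not attempt.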
#6: dstat – Versatile resource statistics tool

Fig.05: dstat in action
As a sysadmin, I depend heavily upon tools such as vmstat, iostat, and friends for troubleshooting server issues. Dstat overcomes some of the limitations of vmstat and friends and adds some extra features. It allows me to view all of my system resources instantly. I can compare disk usage in combination with interrupts from the hard disk controller, or compare the network bandwidth numbers directly with the disk throughput, and much more.

... ... ..

#8: mtr – Traceroute+ping in a single network diagnostic tool

Fig.07: mtr in action
The mtr command combines the functionality of the traceroute and ping programs in a single network diagnostic tool. Use mtr to monitor outgoing bandwidth, latency and jitter in your network. A great little app for solving network problems. A sudden increase in packet loss or response time is often an indication of a bad or simply overloaded link.

#9: multitail – Tail command on steroids

Fig.08: multitail in action (image credit – official project)
MultiTail is a program for monitoring multiple log files, in the fashion of the original tail program. This program lets you view one or multiple files like the original tail program. The difference is that it creates multiple windows on your console (with ncurses). I often use this tool when I am monitoring logs on my server.

... ... ...

#11: netcat – TCP/IP swiss army knife

Fig.10: nc server and telnet client in action
Netcat or nc is a simple Linux or Unix command which reads and writes data across network connections, using TCP or UDP protocol. I often use this tool to open up a network pipe to test network connectivity, make backups, bind to sockets to handle incoming / outgoing requests and much more. In this example, I tell nc to listen to a port # 3005 and execute /usr/bin/w command when client connects and send data back to the client:
$ nc -l -p 3005 -e /usr/bin/w

From a different system, try to connect to port # 3005:

$ telnet server1.cyberciti.biz.lan 3005

... ... ...

#14: lftp: A better command-line ftp/http/sftp client

This is the best and most sophisticated sftp/ftp/http download and upload client program. I often use this tool to:

1. Recursively mirror entire directory trees from an ftp server.
2. Accelerate ftp / http download speed.
3. Location bookmarks and resuming downloads.
4. Backup files to remote ftp servers.
5. Transfers can be scheduled for execution at a later time.
6. Bandwidth can be throttled and transfer queues can be set up.
7. Lftp has shell-like command syntax allowing you to launch several commands in parallel in the background (&).
8. Segmented file transfer, which allows more than one connection for the same file.
9. And much more.

... ... ...

#16: Rest
• Mutt – Email client and I often use mutt to send email attachments from my shell scripts .
• bittorrent – Command line torrent client.
• screen – A full-screen window manager and must have tool for all *nix admins.
• rsync – Sync files and save bandwidth.
• sar – Old good system activity collector and reporter.
• lsof – List open files.
• vim – Best text editor ever.
• elinks or lynx – I use these to browse remotely when some sites (such as RHN or Novell or Sun/Oracle) require registration/login before making downloads.
• wget – Best download tool ever. I use wget all the time, even with Gnome desktop.
• mplayer – Best console mp3 player that can play any audio file format.
• newsbeuter – Text mode rss feed reader with podcast support.
• parallel – Build and execute shell command lines from standard input in parallel.
• iftop – Display bandwidth usage on network interface by host.
• iotop – Find out what's stressing and increasing load on your hard disks.
Conclusion

This is my personal FOSS terminal apps list and it is not absolutely definitive, so if you've got your own terminal apps, share in the comments below.

• GuentherHugo July 16, 2014, 8:27 am have a look at cluster-ssh
• Whattteva August 23, 2013, 8:00 pm This is not quite a terminal program, but Terminator is one of the best terminal emulators I know of out there. It makes multi-tasking in the terminal 100 times better, IMHO.
• Boy nux January 8, 2013, 3:23 am lsblk
watch
• Brendon December 30, 2012, 7:05 pm This is a great list – some of these utilities I've only recently discovered and others I know will be super useful.

Another one that hasn't been mentioned here is iperf. From the Debian package description:

Iperf is a modern alternative for measuring TCP and UDP bandwidth performance, allowing the tuning of various parameters and characteristics.

Features:

* Measure bandwidth, packet loss, delay jitter.
* Report MSS/MTU size and observed read sizes.
* Support for TCP window size via socket buffers.
* Multi-threaded. Client and server can have multiple simultaneous connections.
* Client can create UDP streams of specified bandwidth.
* Multicast and IPv6 capable.
* Options can be specified with K (kilo-) and M (mega-) suffices.
* Can run for specified time, rather than a set amount of data to transfer.
* Picks the best units for the size of data being reported.
* Server handles multiple connections.
* Print periodic, intermediate bandwidth, jitter, and loss reports at specified intervals.
* Server can be run as a daemon.
* Use representative streams to test out how link layer compression affects

Homepage: http://iperf.sourceforge.net/

This is the first tool I use when I am troubleshooting file server transfer speeds, for example.

• Strx December 13, 2012, 1:18 pm Good list, take a look here for some good combinations
http://www.bashoneliners.com/

thanks, bye

vidir – edit directories (part of the 'moreutils' package)

• @yjmbo December 12, 2012, 2:16 am htop, for sure. Thanks for dtrx, I'd not heard of that one.

mitmproxy ( http://mitmproxy.org/ ) might be a nice complement for nc/nmap/openssl it's a curses-based HTTP/HTTPS proxy that lets you examine, edit and replay the conversations your browser is having with the rest of the world

• phusss December 12, 2012, 12:48 am socat > netcat
openssh > *
:)

#### [Feb 14, 2017] Three useful aliases for du command

###### Feb 14, 2017 | www.cyberciti.biz

Rishi G June 12, 2012, 4:01 am

Here are 4 commands I use for checking disk usage.
#Grabs the disk usage in the current directory
alias usage='du -ch | grep total'

#Gets the total disk usage on your machine
alias totalusage='df -hl --total | grep total'

#Shows the individual partition usages without the temporary memory values
alias partusage='df -hlT --exclude-type=tmpfs --exclude-type=devtmpfs'

#Gives you what is using the most space. Both directories and files. Varies on
#current directory
alias most='du -hsx * | sort -rh | head -10'
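The most alias can be exercised in a throwaway directory to see the ordering it produces (the directory and file names below are invented for the demonstration):

```shell
# Two differently-sized entries: "most" should list the big one first.
cd "$(mktemp -d)"
mkdir big small
dd if=/dev/zero of=big/blob bs=1K count=200 2>/dev/null
echo tiny > small/file

# same pipeline as the alias: human-readable sizes, largest first
du -hsx -- * | sort -rh | head -10
```

Note that sort -rh (reverse human-numeric sort) is what keeps 1G above 200M; a plain sort -rn would compare the digits and get that wrong.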

• shadowbq December 17, 2012, 2:08 pm usage is better written as

alias usage='du -ch 2> /dev/null | tail -1'

• Mark January 12, 2013, 6:08 pm Thank you all for your aliases.
I found this one long time ago and it proved to be useful.

# shoot the fat ducks in your current dir and sub dirs
alias ducks='du -ck | sort -nr | head'

• Karsten July 17, 2013, 9:30 pm While it would still work, the problem with usage='du -ch | grep total' is that you will also get directory names that happen to also have the word 'total' in them.

A better way to do this might be: 'du -ch | tail -1'
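The pitfall is easy to reproduce in a scratch directory (the directory name is invented): an entry containing the word "total" fools the grep version but not the tail version:

```shell
# a directory whose name contains "total" poisons the grep approach
cd "$(mktemp -d)"
mkdir -p total_reports
echo data > total_reports/file.txt

du -ch | grep total   # matches ./total_reports AND the grand-total line
du -ch | tail -1      # matches only the grand-total line
```

tail -1 works because du -c always prints the grand total as the last line of its output.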

• Karsten July 17, 2013, 9:57 pm Over dinner I thought to myself "hmm, what if I want to use the total in a script?" and came up with this in mid-entrée:

du -h | awk 'END{print $1}'

Now you'll just get something like: 92G

• James C. Woodburn June 12, 2012, 11:45 am I always create a ps2 command that I can easily pass a string to and look for it in the process table. I even have it remove the grep of the current line.

alias ps2='ps -ef | grep -v $$ | grep -i '

• sbin_bash March 26, 2013, 1:14 pm with header:

alias psg='ps -Helf | grep -v $$ | grep -i -e WCHAN -e '
#### [Feb 04, 2017] Quickly find differences between two directories

##### You will be surprised, but GNU diff as used in Linux understands the situation when two arguments are directories and behaves accordingly

###### Feb 04, 2017 | www.cyberciti.biz

The diff command compares files line by line. It can also compare two directories:

## Compare two folders using diff ##
diff /etc /tmp/etc_old

Rafal Matczak September 29, 2015, 7:36 am
§ Quickly find differences between two directories
And quicker:

diff -y <(ls -l ${DIR1}) <(ls -l ${DIR2})

#### [Feb 04, 2017] Restoring deleted /tmp folder

###### Jan 13, 2015 | cyberciti.biz

As my journey continues with Linux and Unix shell, I made a few mistakes. I accidentally deleted the /tmp folder. To restore it, all you have to do is:

mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp

#### [Feb 04, 2017] Use CDPATH to access frequent directories in bash - Mac OS X Hints

###### Feb 04, 2017 | hints.macworld.com

##### The variable CDPATH defines the search path for the directory containing directories, so it serves much like a "directories home". The danger is in creating too complex a CDPATH; often a single directory works best. For example, export CDPATH=/srv/www/public_html. Now, instead of typing cd /srv/www/public_html/CSS I can simply type: cd CSS

Use CDPATH to access frequent directories in bash
Mar 21, '05 10:01:00AM • Contributed by: jonbauman

I often find myself wanting to cd to the various directories beneath my home directory (i.e. ~/Library, ~/Music, etc.), but being lazy, I find it painful to have to type the ~/ if I'm not in my home directory already. Enter CDPATH, as described in man bash:

The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr".

Personally, I use the following command (either on the command line for use in just that session, or in .bash_profile for permanent use):

CDPATH=".:~:~/Library"

This way, no matter where I am in the directory tree, I can just cd dirname, and it will take me to the directory that is a subdirectory of any of the ones in the list. For example:

$ cd
$ cd Documents
/Users/baumanj/Documents
$ cd Pictures
$ cd Preferences
/Users/username/Library/Preferences
etc...

[ robg adds: No, this isn't some deeply buried treasure of OS X, but I'd never heard of the CDPATH variable, so I'm assuming it will be of interest to some other readers as well.]

cdable_vars is also nice
Authored by: clh on Mar 21, '05 08:16:26PM

Check out the bash command shopt -s cdable_vars

From the man bash page:

cdable_vars
If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.

With this set, if I give the following bash command:

export d="/Users/chap/Desktop"

I can then simply type cd d to change to my Desktop directory. I put the shopt command and the various export commands in my .bashrc file.

#### [May 08, 2014] 25 Even More – Sick Linux Commands UrFix's Blog

6) Display a cool clock on your terminal

watch -t -n1 "date +%T|figlet"

This command displays a clock on your terminal which updates the time every second. Press Ctrl-C to exit. A couple of variants:

A little bit bigger text:

watch -t -n1 "date +%T|figlet -f big"

You can try other figlet fonts, too.

Big sideways characters:

watch -n 1 -t '/usr/games/banner -w 30 $(date +%M:%S)'

This requires a particular version of banner and a 40-line terminal, or you can adjust the width ("30" here).

7) intercept stdout/stderr of another process
strace -ff -e trace=write -e write=1,2 -p SOME_PID

8) Remove duplicate entries in a file without sorting.
awk '!x[$0]++' <file>

Using awk, find duplicates in a file without sorting, which would reorder the contents. awk will not reorder the lines; it still finds and removes the duplicates, and you can then redirect the output into another file.

9) Record a screencast and convert it to an mpeg

ffmpeg -f x11grab -r 25 -s 800x600 -i :0.0 /tmp/outputFile.mpg

Grab X11 input and create an MPEG at 25 fps with the resolution 800x600.

10) Mount a .iso file in UNIX/Linux

mount /path/to/file.iso /mnt/cdrom -o loop

"-o loop" lets you use a file as a block device.

11) Insert the last command without the last argument (bash)

!:-

If the previous command was

/usr/sbin/ab2 -f TLS1 -S -n 1000 -c 100 -t 2 http://www.google.com/

then

!:- http://www.urfix.com/

is the same as

/usr/sbin/ab2 -f TLS1 -S -n 1000 -c 100 -t 2 http://www.urfix.com/

12) Convert seconds since the epoch to human-readable format

date -d@1234567890

This example produces the output "Fri Feb 13 15:26:30 EST 2009".

13) Job Control

^Z  $bg  $disown

You're running a script, command, whatever. You don't expect it to take long; now 5pm has rolled around and you're ready to go home... Wait, it's still running... You forgot to nohup it before running it... Suspend it, send it to the background, then disown it. The output won't go anywhere, but at least the command will still run.

14) Edit a file on a remote host using vim

vim scp://username@host//path/to/somefile

15) Monitor the queries being run by MySQL

watch -n 1 mysqladmin --user=<user> --password=<password> processlist

watch is a very useful command for periodically running another command -- in this case using mysqladmin to display the processlist. This is useful for monitoring which queries are causing your server to clog up.

16) Escape any command aliases

\[command]

For example, if rm is aliased to 'rm -i', you can escape the alias by prepending a backslash:

rm [file]   # WILL prompt for confirmation per the alias
\rm [file]  # will NOT prompt for confirmation per the default behavior of the command

17) Show apps that use the internet connection at the moment (multi-language)

ss -p

For one line per process: ss -p | cat
For established sockets only: ss -p | grep STA
For just process names: ss -p | cut -f2 -sd\" or ss -p | grep STA | cut -f2 -d\"

18) Send pop-up notifications on Gnome

notify-send ["<title>"] "<body>"

The title is optional. Options: -t: expire time in milliseconds. -u: urgency (low, normal, critical). -i: icon path. On Debian-based systems you may need to install the 'libnotify-bin' package. Useful to advise when a wget download or a simulation ends. Example: wget URL; notify-send "Done"

19) Quickly rename a file

mv filename.{old,new}

20) Remove all but one specific file (requires bash's extglob shell option)

rm -f !(survivor.txt)

21) Generate a random password 30 characters long

strings /dev/urandom | grep -o '[[:alnum:]]' | head -n 30 | tr -d '\n'; echo

Find random strings within /dev/urandom, use grep to filter to just alphanumeric characters, then print the first 30 and remove all the line feeds.

22) Run a command only when the load average is below a certain threshold

echo "rm -rf /unwanted-but-large/folder" | batch

Good for one-off jobs that you want to run at a quiet time. The default threshold is a load average of 0.8, but this can be set using atrun.

23) Binary Clock

watch -n 1 'echo "obase=2;$(date +%s)" | bc'

Create a binary clock.

24) Processor / memory bandwidth in GB/s

dd if=/dev/zero of=/dev/null bs=1M count=32768

Read 32GB of zeros and throw them away. How fast is your system?

25) Back up all MySQL databases to individual files

for I in $(mysql -e 'show databases' -s --skip-column-names); do mysqldump "$I" | gzip > "$I.sql.gz"; done
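Trick 8's awk one-liner is easy to sanity-check with a scratch file (the path here is hypothetical):

```shell
# build a file with duplicate lines out of order
printf 'beta\nalpha\nbeta\ngamma\nalpha\n' > /tmp/dedup_demo.txt

# keep only the first occurrence of each line, preserving the original order
awk '!x[$0]++' /tmp/dedup_demo.txt
# beta
# alpha
# gamma
```

The array x counts how many times each whole line ($0) has been seen; the pattern is true (and the line printed) only on the first sighting.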



#### [May 08, 2014] 25 Best Linux Commands UrFix's Blog

25) sshfs name@server:/path/to/folder /path/to/mount/point
Mount folder/filesystem through SSH
Install SSHFS from http://fuse.sourceforge.net/sshfs.html
Will allow you to mount a folder securely over a network.

24) !!:gs/foo/bar
Runs previous command replacing foo by bar every time that foo appears
Very useful for rerunning a long command changing some arguments globally.
As opposed to ^foo^bar, which only replaces the first occurrence of foo, this one changes every occurrence.

23) mount | column -t
currently mounted filesystems in nice layout
Particularly useful if you're mounting different drives, using the following command will allow you to see all the filesystems currently mounted on your computer and their respective specs with the added benefit of nice formatting.
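The same column -t trick works on any whitespace-separated output, not just mount (sample rows below are made up):

```shell
# column -t pads each field so the columns line up
printf '%s\n' 'proto address state' 'tcp 0.0.0.0:22 LISTEN' 'tcp 0.0.0.0:80 LISTEN' | column -t
```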

22) <space>command
Execute a command without saving it in the history
Prepend one or more spaces to your command and it won't be saved in history. In bash this requires HISTCONTROL to include ignorespace or ignoreboth, which is the default on many distributions.
Useful for pr0n or passwords on the commandline.

21) ssh user@host cat /path/to/remotefile | diff /path/to/localfile -
Compare a remote file with a local file
Useful for checking if there are differences between local and remote files.
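The trailing "-" tells diff to read one side from stdin, which is what makes the ssh pipeline work. You can see the mechanism locally (file names made up):

```shell
echo 'line one' > /tmp/localfile

# identical input produces no diff output and exit status 0
echo 'line one' | diff /tmp/localfile - && echo identical

# a differing line is reported and the exit status is nonzero
echo 'line two' | diff /tmp/localfile - || echo different
```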

20) mount -t tmpfs tmpfs /mnt -o size=1024m
Mount a temporary ram partition
Makes a partition in ram which is useful if you need a temporary working space as read/write access is fast.
Be aware that anything saved in this partition will be gone after your computer is turned off.

19) dig +short txt <keyword>.wp.dg.cx
Query Wikipedia via console over DNS
Query Wikipedia by issuing a DNS query for a TXT record. The TXT record will also include a short URL to the complete corresponding Wikipedia entry.

18) netstat -tlnp
Lists all listening ports together with the PID of the associated process
The PID will only be printed if you're holding a root equivalent ID.

17) dd if=/dev/dsp | ssh -c arcfour -C username@host dd of=/dev/dsp
output your microphone to a remote computer's speaker
This will output the sound from your microphone port to the ssh target computer's speaker port. The sound quality is very bad, so you will hear a lot of hissing.

16) echo "ls -l" | at midnight
Execute a command at a given time
This is an alternative to cron which allows a one-off task to be scheduled for a certain time.

15) curl -u user:pass -d status="Tweeting from the shell" http://twitter.com/statuses/update.xml
Update twitter via curl

14) ssh -N -L2001:localhost:80 somemachine
start a tunnel from somemachine's port 80 to your local port 2001
now you can access the website by going to http://localhost:2001/

13) reset
Salvage a borked terminal
If you bork your terminal by sending binary data to STDOUT or similar, you can get your terminal back using this command rather than killing and restarting the session. Note that you often won't be able to see the characters as you type them.

12) ffmpeg -f x11grab -s wxga -r 25 -i :0.0 -sameq /tmp/out.mpg
Capture video of a linux desktop

11) > file.txt
Empty a file
For when you want to flush all content from a file without removing it (hat-tip to Marc Kilgus).
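A quick demonstration that the bare redirection truncates in place (scratch file hypothetical):

```shell
echo 'some data' > /tmp/trunc_demo.txt

# ">" with no command truncates the file to zero bytes without deleting it,
# so the inode, permissions, and any open file handles survive
> /tmp/trunc_demo.txt

wc -c < /tmp/trunc_demo.txt
# 0
```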

10) ssh-copy-id user@host
Copy ssh keys to user@host to enable password-less ssh logins.
To generate the keys, use the command ssh-keygen.

9) ctrl-x e
Rapidly invoke an editor to write a long, complex, or tricky command
Next time you are using your shell, try typing ctrl-x e (that is, holding the control key, press x and then e). The shell will take what you've written on the command line thus far and paste it into the editor specified by $EDITOR. Then you can edit at leisure using all the powerful macros and commands of vi, emacs, nano, or whatever.

8) !whatever:p
Check command history, but avoid running it
!whatever will search your command history and execute the first command that matches 'whatever'. If you don't feel safe doing this put :p on the end to print without executing. Recommended when running as superuser.

7) mtr: better than traceroute and ping combined
mtr combines the functionality of the traceroute and ping programs in a single network diagnostic tool.
As mtr starts, it investigates the network connection between the host mtr runs on and HOSTNAME by sending packets with purposely low TTLs. It continues to send packets with low TTL, noting the response time of the intervening routers. This allows mtr to print the response percentage and response times of the internet route to HOSTNAME. A sudden increase in packet loss or response time is often an indication of a bad (or simply overloaded) link.

6) cp filename{,.bak}
quickly backup or copy a file with bash
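Trick 6 relies on bash brace expansion: the shell rewrites the braces before cp ever runs (file name below is made up):

```shell
#!/bin/bash
touch /tmp/settings.conf

# cp /tmp/settings.conf{,.bak} expands to:
#   cp /tmp/settings.conf /tmp/settings.conf.bak
cp /tmp/settings.conf{,.bak}

ls /tmp/settings.conf.bak
# /tmp/settings.conf.bak
```

Note this is a bash (and zsh/ksh) feature; a plain POSIX sh will not expand the braces.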

5) ^foo^bar
Runs the previous command, replacing foo with bar
Really useful when you have a typo in a previous command. Also, the replacement defaults to empty, so if you accidentally run: echo "no typozs"
you can correct it with ^z

4) cd -
change to the previous working directory

3) :w !sudo tee %
Save a file you edited in vim without the needed permissions
I often forget to sudo before editing a file I don't have write permissions on. When you come to save that file and get the infamous "E212: Can't open file for writing", just issue that vim command in order to save the file without the need to save it to a temp file and then copy it back again.

2) python -m SimpleHTTPServer
Serve the current directory tree at http://$HOSTNAME:8000/

1) sudo !!
Run the last command as root
Useful when you forget to use sudo for a command. "!!" grabs the last run command.

#### Monitoring Processes with pgrep

By Sandra Henry-Stocker

This week, we're going to look at a simple bash script for monitoring processes that we want to ensure are running all the time. We'll use a couple of cute scripting "tricks" to facilitate this process and make it as useful as possible.

The basic command we're going to use is pgrep. For those of you unfamiliar with pgrep, it's a very nice Solaris command that looks in the process queue to see whether a process by a particular name is running. If it finds the requested process, it returns the process id. For example:

% pgrep httpd
1345
1346
1347
1348

This output tells us that there are four httpd processes running on our system. The same processes would show up if we were to execute a ps -ef command:

% ps -ef | grep httpd

The pgrep command, therefore, accomplishes what many of us used to do with strings of Unix commands of this variety:

% ps -ef | grep httpd | grep -v grep | awk '{print $2}'

In this command, we ran the ps command, narrowed the output down to only those lines containing the word "httpd", removed the grep command itself, and then printed out the second column of the output, the process id. With pgrep, extracting the process ids for the processes that we want to track is faster and "cleaner". Let's look at a couple code segments. First, the old way:
for PROC in proc1 proc2 proc3 proc4 proc5
do
    RUNNING=$(ps -ef | grep "$PROC" | grep -v grep | wc -l)
    if [ "$RUNNING" -ge 1 ]; then
        echo "$PROC is running"
    else
        echo "$PROC is down"
    fi
done

For each process, we generate a count of the number of instances we detect in the ps output and, if this number is one or more, we issue the "running" output. Otherwise, we display a message saying the process is down.

Now, here's our replacement code using pgrep:

for PROC in proc1 proc2 proc3 proc4 proc5
do
    if pgrep "$PROC" > /dev/null; then
        echo "$PROC is running"
    else
        echo "$PROC is down"
    fi
done

In this case, we've simplified our code in a couple of ways. First, we rely on pgrep to give us output (process ids) if the process is running and nothing if it isn't. Second, because we're not using ps and grep, we don't have to remove the output that isn't relevant to our task: the ps output relating to other running processes, and the process generated by our grep command.

The process for killing a set of processes would be quite similar. In fact, we could use both pgrep and a "sister" command, pkill, in a similar manner.

for PROC in proc1 proc2 proc3 proc4 proc5
do
    if pgrep "$PROC" > /dev/null; then
        pkill "$PROC"
    else
        echo "$PROC is not running"
    fi
done

The pgrep command is more predictable because we know we're going to get only the process id and that we won't be matching on other strings that just happen to appear in the ps output (e.g., if someone were editing the httpd.conf file).

The pgrep, pkill, and related commands are not only easier to use; the resulting code is easier to read and understand. One of the reasons for using sequences of commands such as:

ps -ef | grep $PROC | grep -v grep | wc -l

was to ensure that we knew what our answer would have to look like. If we left off the final "wc -l", we might get one or many pieces of output and have to deal with that fact when we went to check it. In addition, we could use similar logic when the number of processes, rather than just some or none, was important; we would just check the number against what we expected to see. Even so, anyone reading this script a year later would have to stop and think through this command. This is not true for pgrep. The command "pgrep httpd" is easy and quick to interpret as "is httpd running". Testing pgrep's exit status, as in "if pgrep $PROC > /dev/null", is especially efficient as well: it is compact and readable. Much as I love Unix for the way it allows me to pipe output from one command to the other, I love it even more when I don't have to.
#### sh -x

By S. Lee Henry


Whenever you enter a command in a Unix shell, whether interactively or through a script, the shell expands your commands and passes them on to the Unix kernel for execution. Normally, the shell does its work invisibly. In fact, it so unobtrusively processes your commands that you can easily forget that it's actually doing something for you. As we saw last week, presenting the shell with a command like "rm *" can, on rare occasions, result in a complaint. When the shell balks, producing an error indicating that the argument list is too long, it suddenly reminds us of its presence and that it is subject to resource limitations just like everything else.

Invoking the shell with an option to display commands as it processes them is another way to become acquainted with the shell's method of intercepting and interpreting your commands. The Bourne family of shells uses the option -x. If you start the shell with -x, commands will be displayed for you before execution. For example:

    boson% /bin/ksh -x
    $ date
    + date
    Mon Jun  4 07:11:01 EDT 2001

You can also see file expansion as the shell provides it for you:

    $ ls oops*
    + ls oops1 oops2 oop3 oops4 oops5 oopsies
    oops1 oops2 oop3 oops4 oops5 oopsies
This is all very exciting, of course, but of limited utility once you gain a solid appreciation of how hard the shell is working for you, command line after command line. The sh -x "trick" can be very useful when you are debugging a script, though. Instead of inserting lines of code like "echo at end of loop" to help determine where your code is failing, you can change your "shebang" line to include the -x option:
    #!/bin/sh -x
Afterwards, when you run the script, each line of code is displayed as it is processed, so you can easily see which commands are working and where the breakdown occurs. This is far more useful than staring at little or no output and wondering where processing is hanging up. Being able to watch the executed commands, and the order in which they are executed, while the script is running is an invaluable debugging aid -- particularly for complex scripts that follow numerous execution paths and don't write much output to the screen.
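A minimal sketch of the technique (script name and paths are made up):

```shell
# create a tiny script whose shebang enables tracing
cat > /tmp/trace_demo.sh <<'EOF'
#!/bin/sh -x
msg="hello"
echo "$msg"
EOF
chmod +x /tmp/trace_demo.sh

# the trace ("+ ..." lines) goes to stderr; the script's real output goes to stdout
/tmp/trace_demo.sh 2> /tmp/trace_demo.log
cat /tmp/trace_demo.log
```

You can get the same effect without editing the script at all by running it as "sh -x /tmp/trace_demo.sh".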

#### How Many is Too Many? By Sandra Henry-Stocker

I surprised myself recently when I issued a command to remove all the files in an old, and clearly unimportant, directory and received this response:
    /bin/rm: arg list too long

I seldom encounter this response when cleaning up server directories that I manage, so seeing it surprised me. When I began listing the directory's contents, I wasn't surprised that my command had failed. The directory contained more than 200,000 small, old, and meaningless files, which would take a long time to list, consumed quite a bit of directory file space, and would comprise a very long command line if the shell were able to manage it. Even if every file name had only eight characters, a line containing all of their names (with blank characters separating the names) would be nearly 1.8 million bytes long. Not surprisingly, my shell balked at the task.

Situations like this remind us that, even though Unix is flexible, powerful, and fun, each of the commands has built in limits. My shell could not allocate adequate space to "expand" the asterisk that I presented in my "rm *" command to a list of all 200,000+ files.

Of course, Unix offers several ways to solve every problem and running out of space to expand a command merely invites one to solve the problem differently. In my case, the easiest solution was to remove the directory along with its contents. The rm -r command, since it doesn't require any argument expansion, is "happy" to comply with such a request. Had I not wanted to remove every file in the directory, I would have gone through a little more trouble. I could have removed subsets of the files, using commands like "rm a*" or "rm *5" until I had removed all of the unwanted files.

A third approach would have been appropriate for preserving only a small number of the directory's files, especially files that are easily described by substring or date: I would have tarred up the interesting files using tar with a wild card, or with a find command to create an include file.

You will not often encounter situations where the shell will be unable to expand your file names into a workable command. Few directories house as many files as the one that I was cleaning up, and the Unix shells allocate enough buffer space for most commands that you might enter. Even so, limits exist and you might happen to bump into one of them every few years.
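When the asterisk won't expand, handing the file list to find (or xargs) sidesteps the shell's argument-list limit entirely, because find batches the names itself. A small-scale sketch (the directory is hypothetical):

```shell
# a stand-in for the oversized directory
mkdir -p /tmp/bigdir_demo
touch /tmp/bigdir_demo/a.log /tmp/bigdir_demo/b.log /tmp/bigdir_demo/c.log

# no single huge command line is ever built
find /tmp/bigdir_demo -type f -name '*.log' -exec rm {} +

# equivalent with xargs, safe for odd filenames thanks to -print0/-0:
# find /tmp/bigdir_demo -type f -print0 | xargs -0 rm -f

ls /tmp/bigdir_demo | wc -l
# 0
```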

#### Moving Around the Console.

So you're new to Linux and wondering how this virtual terminal stuff works. Well you can work in six different terminals at a time. To move around from one to another:

To change to Terminal 1 - Alt + F1
To change to Terminal 2 - Alt + F2
...
To change to Terminal 6 - Alt + F6
That's cool. But I just did locate on something and a lot of stuff scrolled up. How do I scroll up to see what flew by?
Shift +  PgUp - Scroll Up
Shift +  PgDn - Scroll Down

Note: If you switch away from a console and switch back to it,
you will lose what has already scrolled by.


If you had X running and wanted to change from X to text based and vice versa

To change to text based from X - Ctrl + Alt + F(n) where n = 1..6

To change to X from text based - Alt + F7

Something unexpected happened and I want to shut down my X server.
Just press:

Ctrl + Alt + Backspace


#### What do you do when you need to see what a program is doing, but it's not one that you'd normally run from the command line?

###### LinuxMonth

What do you do when you need to see what a program is doing, but it's not one that you'd normally run from the command line? Perhaps it's one that is called as a network daemon from inetd, is called from inside another shell script or application, or is even called from cron. Is it actually being called? What command line parameters is it being handed? Why is it dying?

Let's assume the app in question is /the/path/to/myapp . Here's what you do. Make sure you have the "strace" program installed. Download "apptrace" from ftp://ftp.stearns.org/pub/apptrace/ and place it in your path, mode 755. Then type:

apptrace /the/path/to/myapp

When that program is called in the future, apptrace will record the last time myapp ran (see the timestamp on myapp-last-run), the command line parameters used (see myapp-parameters), and the strace output from running myapp (see myapp.pid.trace) in either $HOME/apptrace or /tmp/apptrace if$HOME is not set.

Note that if the original application is setuid-root, strace will not honor that flag and it will run with the permissions of the user running it like any other non-setuid-root app. See the man page for strace for more information on why.

When you've found out what you need to know and wish to stop monitoring the application, type:

mv -f /the/path/to/myapp.orig /the/path/to/myapp

Many thanks to David S. Miller , kernel hacker extraordinaire, for the right to publish his idea. His original version was:

It's actually pretty easy if you can get a shell on the machine
before the event, once you know the program in question:

mv /path/to/${PROGRAM} /path/to/${PROGRAM}.ORIG

Then edit /path/to/${PROGRAM} so that it contains:

#!/bin/sh
strace -f -o /tmp/${PROGRAM}.trace /path/to/${PROGRAM}.ORIG $*

I do it all the time to debug network services started from
inetd for example.
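The wrapper idea above works for any executable you control. Here's a hedged sketch using a fake program under /tmp (all paths hypothetical), logging each invocation instead of calling strace so it runs even where strace isn't installed:

```shell
# set up a stand-in for the real application
mkdir -p /tmp/wrapdemo
printf '#!/bin/sh\necho "real app ran with: $*"\n' > /tmp/wrapdemo/myapp
chmod 755 /tmp/wrapdemo/myapp

# step 1: move the original aside
mv /tmp/wrapdemo/myapp /tmp/wrapdemo/myapp.ORIG

# step 2: drop in a wrapper that records the invocation, then runs the original
cat > /tmp/wrapdemo/myapp <<'EOF'
#!/bin/sh
date >> /tmp/wrapdemo/myapp-last-run
echo "$@" >> /tmp/wrapdemo/myapp-parameters
exec /tmp/wrapdemo/myapp.ORIG "$@"
EOF
chmod 755 /tmp/wrapdemo/myapp

# callers notice nothing:
/tmp/wrapdemo/myapp --verbose
# real app ran with: --verbose
```

Swapping the "date"/"echo" lines for the strace invocation shown above gives you the full trace version.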


#### Ever wonder what ports are open on your Linux machine ?

Did you ever want to know who was connecting to your machine and what services were they connecting to ? Netstat does just that.

To take a look at all TCP ports that are open on your system:
The '-n' option gives you numerical addresses instead of resolving hostnames, which speeds up the output. The '-l' option shows only sockets in "LISTEN" mode, and '-t' shows only TCP connections.

netstat -nlt

[user@mymachine /home/user]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN



The above output shows that I have 3 open ports (80, 3306, 22) on my system, waiting for connections on all interfaces. The three ports are 80 => apache, 3306 => mysql, 22 => ssh.

Let's take a look at the active connections to this machine. For this you don't use the '-l' option but instead the '-a' option. The '-a' stands for, yup, you guessed it, show all.

netstat -nat

[user@mymachine /user]# netstat -nat
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 206.112.62.102:80       204.210.35.27:3467      ESTABLISHED
tcp        0      0 206.112.62.102:80       208.229.189.4:2582      FIN_WAIT2
tcp        0   7605 206.112.62.102:80       208.243.30.195:36957    CLOSING
tcp        0      0 206.112.62.102:22       10.60.1.18:3150         ESTABLISHED
tcp        0      0 206.112.62.102:22       10.60.1.18:3149         ESTABLISHED
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN


The above output shows I have 3 web requests that are currently being made or are about to finish up. It also shows I have 2 SSH connections established. Now I know which IP addresses are making web requests or have SSH connections open. For more info on the different states, e.g. "FIN_WAIT2" and "CLOSING", please consult your local man pages.

Well that was a quick tip on how to use netstat to see what TCP ports are open on your machine and who is connecting to them. Hope it was helpful. Share the knowledge !


#### RPM - Installing, Querying, Deleting your packages.

RPM (Redhat Package Manager) is an excellent package manager. RPM, created by Red Hat, can be used for building, installing, querying, updating, verifying, and removing software packages. This brief article will show you some of the usage of the rpm tool.

So you have an rpm package that you wish to install, but you want to find out more information about it first: who built it and when it was built, or a short description of what it does. The following command will show you such information.

	rpm -qpi packagename.rpm

Now that you know more about the package, you're ready to install it. But before you install it, you may want to get a list of its files and find out where they will be installed. The following command will show you exactly that.

	rpm -qpl packagename.rpm

To actually install the package, use:

	rpm -i packagename.rpm

But what if you already have an older version of the package installed? Then you want to upgrade it. The following command will remove any older version of the package and install the newer version.

	rpm -Uvh packagename.rpm

How do I check all the packages installed on my system ? The following will list their names and version numbers.

	rpm -qa

and to see all the packages installed with the latest ones on top.

	rpm -qa --last

And if you want to see what package a file belongs to, if any, you can do the following. This command will show the rpm name or tell you that the file does not belong to any packages.

	rpm -qf file

And if you wanted to uninstall the package, you can do the following.

	rpm -e packagename


and to uninstall even if other packages depend on it. Note: this is dangerous and should only be done if you are absolutely sure the dependency does not apply in your case.

	rpm -e packagename --nodeps


There are a lot more commands to help you manage your packages better, but this will cover the needs of most users. If you want to learn more about rpm, type man rpm at your prompt or visit www.rpm.org. In particular, see the RPM-HOWTO at www.rpm.org.

#### Linux-etc Quickies

"Some snippets of helpful advice were lying around my hard drive, so I thought it a good time to unload it. There's no theme to any of it, really, but I think that themes are sometimes overrated, don't you?"

"My favorite mail reader, pine 4.21, does not lag behind when it comes to modern features. For example, it supports rule-based filtering just like those graphical clients that get all the press these days. Just head to Main menu -> Setup -> Rules -> Filters -> Add. Voila!"

"Red Hat 6.2 ships with the ability to display TrueType fonts with the XFree86 X server. Oddly, the freetype package doesn't include any TrueType fonts, nor does it provide clear instructions on how to add them to your system."

#### Linux Today - O'Reilly Network Top 10 Tips for Linux Users

• "Switch to another console.
• Linux lets you use "virtual consoles" to log on to multiple sessions simultaneously, so you can do more than one operation or log on as another user. Logging on to another virtual console is like sitting down and logging in at a different physical terminal, except you are actually at one terminal, switching between login sessions."
• "Temporarily use a different shell.
• Every user account has a shell associated with it. The default Linux shell is bash; a popular alternative is tcsh. The last field of the password table (/etc/passwd) entry for an account contains the login shell information. You can get the information by checking the password table, or you can use the finger command."
• "Print a man page.
• Here are a few useful tips for viewing or printing manpages:

To print a manpage, run the command:

man <command> | col -b | lpr

The col -b command removes any backspace or other characters that would make the printed manpage difficult to read."
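You can see what col -b does to overstruck (bold) man output with a quick pipe, assuming col (from util-linux or bsdmainutils) is installed:

```shell
# man renders bold as character-backspace-character; col -b collapses the overstrikes
printf 'N\bNAME\n' | col -b
# NAME
```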

#### Troubleshooting Tips

From the SGI Admin Guide - last I checked the CPU spends most of its time waiting for something to do

Table 5-3: Indications of an I/O-Bound System

Field                                            Value              sar Option
%busy (% time disk is busy)                      >85                sar -d
%rcache (reads in buffer cache)                  low, <85           sar -b
%wcache (writes in buffer cache)                 low, <60           sar -b
%wio (idle CPU waiting for disk I/O)             dev system >30     sar -u
                                                 fileserver >80

Table 5-5: Indications of Excessive Swapping/Paging

Field                                                Value          sar Option
bswot/s (transfers from memory to disk swap area)    >200           sar -w
bswin/s (transfers to memory)                        >200           sar -w
%swpocc (time swap queue is occupied)                >10            sar -q
rflt/s (page reference faults)                       >0             sar -t
freemem (average pages for user processes)           <100           sar -r

Indications of a CPU-Bound System

Field                                                        Value  sar Option
%idle (% of time CPU has no work to do)                      <5     sar -u
runq-sz (processes in memory waiting for CPU)                >2     sar -q
%runocc (% run queue occupied and processes not executing)   >90    sar -q


hypermail /usr/local/src/src/hypermail - mailing list to web page converter; grep hypermail /etc/aliases shows which lists use hypermail

pwck, grpck should be run weekly to make sure ok; grpck produces a ton of errors

can use local man pages - text only - see Ch3 User Services
put in /usr/local/manl (try /usr/man/local/manl) suffix .l
long ones pack -> pack program.1;mv program.1.z /usr/man/local/mannl/program.z

Linux Gazette Index

• Getting the most from multiple X servers - in the office and at home
• Starting and stopping daemons
• Disabling the console screensaver
• Linux Kernel Split
• Incorrect Tip....(lilo mem=128M)
• Re: Command line editing

Wed, 17 May 2000 08:38:09 +0200
From: Sebastian Schleussner Sebastian.Schleussner@gmx.de

I have been trying to set command line editing (vi mode) as part of
my bash shell environment and have been unsuccessful so far. You might
think this is trivial - well so did I.
I am using Red Hat Linux 6.1 and wanted to use "set -o vi" in my
start up scripts. I have tried all possible combinations but it JUST DOES
NOT WORK. I inserted the line in /etc/profile , in my .bash_profile, in
my .bashrc etc but I cannot get it to work. How can I get this done? This
used to be a breeze in the korn shell. Where am I going wrong?

Hi!
I recently learned from the SuSE help that you have to put the line
set keymap vi
into your /etc/inputrc or ~/.inputrc file, in addition to what you did
('set -o vi' in ~/.bashrc or /etc/profile)!
I hope that will do the trick for you.

Cheers,
Sebastian Schleussner

• mouse wheel and netscape
• Utility for those who changing HDDs very often

For those who are changing HDDs very often, here is small ugly but working utility which I wrote.

It detects filesystem types of all accessible partitions and checks/mounts them in folders named after device (hda7,hdb1,hdb3,sd1,...).

So you will never have to write sequences of fdisk,fsck,mount,df...

• Traceroute Resources

You may be interested in checking the site "Traceroute Lists by States. Backbone Maps List" http://cities.lk.net/trlist.html

You can find there many links to traceroute resources, sorted by the following items:

• Traceroute List by States
• Traceroute against Spam
• Other Traceroute Lists
• Traceroute and other tools
• Traceroute Analysis

There is also the List of Backbone Maps, sorted by geographical location, plus some other info about backbones.

• Info-search tips for Midnight Commander users

Mon, 31 Jan 2000 14:57:13 -0800
From: Ben Okopnik <fuzzybear@pocketmail.com>

Funny thing; I was just about to post this tip when I read Matt Willis' "HOWTO searching script" in LG45. Still, this script is a good bit more flexible (allows diving into subdirectories, actually displays the HOWTO or the document whether .gz or .html or whatever format, etc.), uses the Bash shell instead of csh (well, _I_ see it as an advantage ...), and reads the entire /usr/doc hierarchy - perfect for those times when the man page isn't quite enough. I find myself using it about as often as I do the 'man' command.

You will need the Midnight Commander on your system to take advantage of this (in my opinion, one of the top three apps ever written for the Linux console). I also find that it is at its best when used under X-windows, as this allows the use of GhostView, xdvi, and all the other nifty tools that aren't available on the console.

To use it, type (for example)

doc xl


and press Enter. The script will respond with a menu of all the /usr/doc subdirs beginning with 'xl' prefixed by menu numbers; simply select the number for the directory that you want, and the script will switch to that directory and present you with another menu. Whenever your selection is an actual file, MC will open it in the appropriate manner - and when you exit that view of it, you'll be presented with the menu again. To quit the script, press 'Ctrl-C'.

A couple of built-in minor features (read: 'bugs') - if given a nonsense number as a selection, 'doc' will drop you into your home directory. Simply 'Ctrl-C' to get out and try again. Also, for at least one directory in '/usr/doc' (the 'gimp-manual/html') there is simply not enough scroll-back buffer to see all the menu-items (526 of them!). I'm afraid that you'll simply have to switch there and look around; fortunately, MC makes that relatively easy!

Oh, one more MC tip. If you define the 'CDPATH' variable in your .bash_profile and make '/usr/doc' one of the entries in it, you'll be able to switch to any directory in that hierarchy by simply typing 'cd <first_few_letters_of_dir_name>' and pressing the Tab key for completion. Just like using 'doc', in some ways...
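The CDPATH trick above can be sketched in a few lines. This is a minimal illustration using /tmp/docs as a stand-in for /usr/doc (which not every modern system has):

```shell
# A minimal sketch of the CDPATH trick; /tmp/docs stands in for /usr/doc.
mkdir -p /tmp/docs/bash-howto
CDPATH=/tmp/docs
cd bash-howto     # resolved via CDPATH; cd echoes the full path it chose
pwd               # confirms we landed in /tmp/docs/bash-howto
```

In a ~/.bash_profile you would write something like `export CDPATH=".:$HOME:/usr/doc"`; the leading "." keeps ordinary relative `cd` working as before.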

Hope this is of help.

#### Copy Your Linux Install to a Different Partition or Drive

Jul 9, 2009

If you need to move your Linux installation to a different hard drive or partition (and keep it working) and your distro uses grub this tech tip is what you need.

To start, get a live CD and boot into it. I prefer Ubuntu for things like this. It has Gparted. Now follow the steps outlined below.

Copying

• Mount both your source and destination partitions.
• Run this command from a terminal:
  $ sudo cp -afv /path/to/source/* /path/to/destination
  Don't forget the asterisk after the source path.
• After the command finishes copying, shut down, remove the source drive, and boot the live CD again.

Configuration

• Mount your destination drive (or partition).
• Run the command "gksu gedit" (or use nano or vi).
• Edit the file /etc/fstab. Change the UUID or device entry with the mount point / (the root partition) to your new drive. You can find your new drive's (or partition's) UUID with this command:
  $ ls -l /dev/disk/by-uuid/
• Edit the file /boot/grub/menu.lst. Change the UUID of the appropriate entries at the bottom of the file to the new one.

Install Grub

• Run sudo grub.
• At the Grub prompt, type:
  find /boot/grub/menu.lst
This will tell you your new drive and partition number, in Grub's notation -- something like (hd0,0).
• Type:
  root (hd0,0)
but replace "(hd0,0)" with your partition's number from above.
• Type:
  setup (hd0)
but replace "(hd0)" with your drive's number from above. (Omit the comma and the number after it.)

That's it! You should now have a bootable working copy of your source drive on your destination drive! You can use this to move to a different drive, partition, or filesystem.
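The fstab edit in the Configuration step amounts to a one-line substitution. Here is a hedged sketch of that step, run against a throwaway copy rather than the real file; both UUIDs are made-up example values:

```shell
# Sketch of the fstab edit from the Configuration step, on a throwaway
# copy. Both UUID values below are made up for illustration.
OLD=11111111-1111-1111-1111-111111111111
NEW=22222222-2222-2222-2222-222222222222
printf 'UUID=%s / ext3 errors=remount-ro 0 1\n' "$OLD" > /tmp/fstab.test
sed -i "s/$OLD/$NEW/" /tmp/fstab.test   # swap in the new root UUID
cat /tmp/fstab.test
```

On the real system you would run the same substitution against the destination partition's /etc/fstab, taking the new UUID from `ls -l /dev/disk/by-uuid/`.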

Related Stories:
Linux - Compare two directories (Feb 18, 2009)
Cloning Linux Systems With CloneZilla Server Edition (CloneZilla SE) (Jan 22, 2009)
Copying a Filesystem between Computers (Oct 28, 2008)
rsnapshot: rsync-Based Filesystem Snapshot (Aug 26, 2008)
K9Copy Helps Make DVD Backups Easy (Aug 23, 2008)

#### UNIX tips: Productivity tips by Michael Stutz

Useful command-line secrets for increasing productivity in the office

Level: Intermediate

Michael Stutz (stutz@dsl.org), Author, Consultant

19 Sep 2006
Updated 21 Sep 2006

Using UNIX in a day-to-day office setting doesn't have to be clumsy. Learn some of the many ways, both simple and complex, to use the power of the UNIX shell and available system tools to greatly increase your productivity in the office.

Introduction

The language of the UNIX® command line is notoriously versatile: With a panorama of small tools and utilities and a shell to combine and execute them, you can specify many precise and complex tasks.

But when used in an office setting, these same tools can become a powerful ally toward increasing your productivity. Many techniques unique to UNIX can be applied to the issue of workplace efficiency.

This article gives several suggestions and techniques for bolstering office productivity at the command-line level: how to review your current system habits, how to time your work, secrets for manipulating dates, a quick and simple method of sending yourself a reminder, and a way to automate repetitive interactions.

Review your daily habits

The first step toward increasing your office productivity using the UNIX command line is to take a close look at your current day-to-day habits. The tools and applications you regularly use and the files you access and modify can give you an idea of what routines are taking up a lot of your time -- and what you might be avoiding.

Review the tools you use

You'll want to see what tools and applications you're using regularly. You can easily ascertain your daily work habits on the system with the shell's history built-in, which outputs an enumerated listing of the input lines you've sent to the shell in the current and past sessions. See Listing 1 for a typical example.

Listing 1. Sample output of the shell history built-in
 $ history
 1 who
 2 ls
 3 cd /usr/local/proj
 4 ls
 5 cd websphere
 6 ls
 7 ls -l
 $

The actual history is usually kept in a file so that it can be kept through future sessions; for example, the Korn shell keeps its command history hidden in the .sh_history file in the user's home directory, and the Bash shell uses .bash_history. These files are usually overwritten when they reach a certain length, but many shells have variables to set the maximum length of the history; the Korn and Bash shells have the HISTSIZE and HISTFILESIZE variables, which you can set in your shell startup file.
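For example, a ~/.bashrc fragment along these lines enlarges the Bash history (the sizes here are arbitrary example values):

```shell
# Sketch of a ~/.bashrc fragment enlarging the Bash history; the sizes
# are arbitrary example values.
export HISTSIZE=5000        # commands kept in memory for this session
export HISTFILESIZE=10000   # lines kept in ~/.bash_history across sessions
```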

It can be useful to run history through a short pipeline to get a list of the most popular commands. Use awk to strip out the command name, minus options and arguments; sort the result; pass the sorted list to uniq -c to get a counted list; and finally call sort again to re-sort the list in reverse order (highest first) by the first column, which is the count itself. Listing 2 shows an example of this in action.

Listing 2. Listing the commands in the shell history by popularity
 $ history | awk '{print $2}' | awk 'BEGIN {FS="|"} {print $1}' | sort | uniq -c | sort -r
      4 ls
      2 cd
      1 who
 $

If your history file is large, you can run periodic checks by piping to tail first -- for example, to check the last 1,000 commands, try:
 $ history | tail -1000 | awk '{print $2}' | awk 'BEGIN {FS="|"} {print $1}' | sort | uniq -c | sort -r

Review the files you access or modify

Use the same principle to review the files that you've modified or accessed. To do this, use the find utility to locate and review all files you've accessed or changed during a certain time period -- today, yesterday, or at any date or segment of time in the past. You generally can't find out who last accessed or modified a file, because this information isn't easily available under UNIX, but you can review your personal files by limiting the search to only files contained in your home directory tree. You can also limit the search to only files in the directory of a particular project that you're monitoring or otherwise working on.

The find utility has several flags that aid in locating files by time, as listed in Table 1. Directories aren't regular files but are accessed every time you list them or make them the current working directory, so exclude them from the search using a negation and the -type flag.

Table 1. Selected flags of the find utility

Flag        Description
-daystart   Measure time from the beginning of the current day.
-atime      The time the file was last accessed, in days.
-ctime      The time the file's status last changed, in days.
-mtime      The time the file was last modified, in days.
-amin       The time the file was last accessed, in minutes. (Not available in all implementations.)
-cmin       The time the file's status last changed, in minutes. (Not available in all implementations.)
-mmin       The time the file was last modified, in minutes. (Not available in all implementations.)
-type       The type of file, such as d for directories.
-user X     Files belonging to user X.
-group X    Files belonging to group X.
-newer X    Files that are newer than file X.
Here's how to list all the files in your home directory tree that were modified exactly one hour ago:

 $ find ~ -mmin 60 \! -type d


Giving a negative value for a time flag means to match less than that number -- that is, that recently or since. For example, here's how to list all the files in your home directory tree that were modified one hour ago or any time since:
 $ find ~ -mmin -60 \! -type d

Not all implementations of find support the min flags. If yours doesn't, you can work around it by using touch to create a dummy file whose timestamp is older than what you're looking for, and then search for files newer than it with the -newer flag:

 $ date
 Mon Oct 23 09:42:42 EDT 2006
 $ touch -t 10230842 temp
 $ ls -l temp
 -rw-r--r-- 1 joe joe 0 Oct 23 08:42 temp
 $ find ~ -newer temp \! -type d

The special -daystart flag, when used in conjunction with any of the day options, measures days from the beginning of the current day instead of from 24 hours before the moment the command is executed. Try listing all of your files, existing anywhere on the system, that have been accessed any time from the beginning of the day today up until right now:

 $ find / -user `whoami` -daystart -atime -1 \! -type d

Similarly, you can list all the files in your home directory tree that were modified at any time today:
 $ find ~ -daystart -mtime -1 \! -type d

Give different values for the various time flags to change the search times. You can also combine flags. For instance, you can list all the files in your home directory tree that were both accessed and modified between now and seven days ago:

 $ find ~ -daystart -atime -7 -mtime -7 \! -type d

You can also find files based on a specific date or a range of time, measured in either days or minutes. The general way to do this is to use touch to make a dummy file or files, as described earlier.

When you want to find files that match a certain range, make two dummy files whose timestamps delineate the range. Then, use the -newer flag with the older file, and use "\! -newer" on the second file.

For example, to find all the files in the /usr/share directory tree that were accessed in August, 2006, try the following:

 $ touch -d "Aug 1 2006" file.start
 $ touch -d "Sep 1 2006" file.end
 $ find /usr/share -daystart -newer file.start \! -daystart -newer file.end

Finally, it's sometimes helpful when listing the contents of a directory to view the files sorted by time. Some versions of the ls tool have the -c option, which uses the time of last status change rather than the modification time. In conjunction with the -l (long-listing) and -t (sort by time) options, you can peruse a directory listing with the most recently changed files first; the long listing then shows each file's status-change time instead of the default modification time:

 $ ls -ltc /usr/local/proj/websphere | less

Another useful means of increasing office productivity using UNIX is to time commands that you regularly execute. Then, you can evaluate the results and determine whether you're spending too much time waiting for a particular process to finish.

Time command execution

Is the system slowing you down? How long are you waiting at the shell, doing nothing, while a particular command is being executed? How long does it take you to run through your usual morning routine?

You can get concrete answers to these questions when you use the date, sleep, and echo commands to time your work.

To do this, type a long input line that first contains a date statement to output the time and date in the desired format (usually hours and minutes suffice). Then, run the command input line -- this can be several lines strung together with shell directives -- and finally, get the date again on the same input line. If the commands you're testing produce a lot of output, redirect it so that you can read both start and stop dates. Calculate the difference between the two dates:

 $ date; system-backup > /dev/null; system-diag > /dev/null; \
 > netstat > /dev/null; df > /dev/null; date

Test your typing speed

You can use these same principles to test your typing speed:

 $ date; cat | wc -w; date

This command works best if you give a long typing sample that lasts at least a minute, but ideally three minutes or more. Take the number of words you typed (which is output by the middle command) and divide it by the difference in minutes between the two dates to get the average number of words per minute you type.

You can automate this by setting variables for the start and stop dates and for the command that outputs the number of words. But to do this right, you must be careful to avoid a common error in calculation when subtracting times. A GNU extension to the date command, the %s format option, avoids such errors -- it outputs the number of seconds since the UNIX epoch, which is defined as midnight UTC on January 1, 1970. Then, you can calculate the time based on seconds alone.
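To see why %s makes the subtraction safe, here is the same calculation with canned values (three minutes of typing, 246 words), using plain shell arithmetic standing in for bc:

```shell
# Canned example of the epoch-seconds calculation: 246 words typed in
# the 180 seconds between two `date +%s` readings.
START=1000000000
STOP=1000000180
WORDS=246
SPEED=$(( WORDS / ( (STOP - START) / 60 ) ))
echo "You have a typing speed of $SPEED words per minute."
```

Because both readings are plain second counts, the subtraction never wraps around the way naive hour/minute arithmetic can.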

Assign a variable, SPEED, as the output of an echo command to set up the right equation to pipe to a calculator tool, such as bc. Then, output a new echo statement that outputs a message with the speed:

 $ START=`date +%s`; WORDS=`cat | wc -w`; STOP=`date +%s`; SPEED=`\
 > echo "$WORDS / ( ( $STOP - $START ) / 60 )" | bc`; echo \
 > "You have a typing speed of $SPEED words per minute."

You can put this in a script and then change the permissions to make it executable by all users, so that others on the system can use it, too, as in Listing 3.

 $ typespeed
 The quick brown fox jumped over the lazy dog. The quick brown dog--
 ...
 --jumped over the lazy fox.
 ^D
 You have a typing speed of 82.33333333 words per minute.
 $

Know your dates

The date tool can do much more than just print the current system date. You can use it to get the day of the week on which a given date falls and to get dates relative to the current date.

Get the day of a date

Another GNU extension to the date command, the -d option, comes in handy when you don't have a desk calendar nearby -- and what UNIX person bothers with one? With this powerful option, you can quickly find out what day of the week a particular date falls on by giving the date as a quoted argument:

 $ date -d "nov 22"
 Wed Nov 22 00:00:00 EST 2006
 $

In this example, you see that November 22 of this year falls on a Wednesday. So, when it's suggested that the big meeting be held on November 22, you'll know right away that it falls on a Wednesday -- which is the day you're out in the field office.

Get relative dates

The -d option can also tell you what the date will be relative to the current date -- either a number of days or weeks from now, or before now (ago). Do this by quoting the relative offset as an argument to the -d option.

Suppose, for example, that you need to know the date two weeks hence. If you're at a shell prompt, you can get the answer immediately:

 $ date -d '2 weeks'

There are other important ways to use this command. With the next directive, you can get the day of the week for a coming day:
 $ date -d 'next monday'

With the ago directive, you can get dates in the past:

 $ date -d '30 days ago'

And you can use negative numbers to get dates in reverse:

 $ date -d 'dec 14 -2 weeks'

This technique is useful for giving yourself a reminder based on a coming date, perhaps in a script or shell startup file, like so:

 DAY=`date -d '2 weeks' +"%b %d"`
 if test "`echo $DAY`" = "Aug 16"; then echo 'Product launch is now two weeks away!'; fi

Give yourself reminders

Use the tools at your disposal to leave reminders for yourself on the system -- they take up less space than notes on paper, and you'll see them from anywhere you happen to be logged in.

Know when it's time to leave

When you're working on the system, it's easy to get distracted. The leave tool, common on the IBM AIX® operating system and Berkeley Software Distribution (BSD) systems (see Resources) can help.

Give leave the time when you have to leave, using a 24-hour format: HHMM. It runs in the background, and five minutes before that given time, it outputs on your terminal a reminder for you to leave. It does this again one minute before the given time if you're still logged in, and then at the time itself -- and from then on, it keeps sending reminders every minute until you log out (or kill the leave process). See Listing 4 for an example. When you log out, the leave process is killed.

Listing 4. Example of running the leave command
 $ leave
 When do you have to leave? 1830
 Alarm set for Fri Aug 4 18:30. (pid 1735)
 $ date +"Time now: %l:%M%p"
 Time now: 6:20PM
 $
 You have to leave in 5 minutes.
 $ date +"Time now: %l:%M%p"
 Time now: 6:25PM
 $
 Just one more minute!
 $ date +"Time now: %l:%M%p"
 Time now: 6:29PM
 $
 Time to leave!
 $ date +"Time now: %l:%M%p"
 Time now: 6:30PM
 $
 Time to leave!
 $ date +"Time now: %l:%M%p"
 Time now: 6:31PM
 $ kill 1735
 $ sleep 120; date +"Time now: %l:%M%p"
 Time now: 6:33PM
 $

You can give relative times. If you want to leave a certain amount of time from now, precede the time argument with a +. So, to be reminded to leave in two hours, type the following:

 $ leave +0200

To give a time amount in minutes, make the hours field 0. For example, if you know you have only 10 more minutes before you absolutely have to go, type:

 $ leave +0010

#### Debugging a program with an strace wrapper

Rename the real binary and drop in a small wrapper script that runs it under strace:

#!/bin/sh
# Run the real binary under strace, logging all calls (including those
# of forked children, -f) to a trace file in /tmp.
# ("$@" preserves argument quoting better than $*.)
strace -f -o /tmp/${PROGRAM}.trace /path/to/${PROGRAM}.ORIG "$@"

I do it all the time to debug network services started from inetd, for example.

#### Linux Today - O'Reilly Network Top 10 Tips for Linux Users

• "Switch to another console. Linux lets you use 'virtual consoles' to log on to multiple sessions simultaneously, so you can do more than one operation or log on as another user. Logging on to another virtual console is like sitting down and logging in at a different physical terminal, except you are actually at one terminal, switching between login sessions."

• "Temporarily use a different shell. Every user account has a shell associated with it. The default Linux shell is bash; a popular alternative is tcsh. The last field of the password table (/etc/passwd) entry for an account contains the login shell information. You can get the information by checking the password table, or you can use the finger command."

• "Print a man page. To print a manpage, run the command: man topic | col -b | lpr. The col -b command removes any backspace or other characters that would make the printed manpage difficult to read."

#### Troubleshooting Tips

System performance

From the SGI Admin Guide - last I checked, the CPU spends most of its time waiting for something to do.

Table 5-3: Indications of an I/O-Bound System

Field                                    Value       sar Option
%busy (% time disk is busy)              >85         sar -d
%rcache (reads in buffer cache)          low, <85    sar -b
%wcache (writes in buffer cache)         low, <60%   sar -b
%wio (idle CPU waiting for disk I/O)
    dev. system                          >30         sar -u
    fileserver                           >80         sar -u

Table 5-5: Indications of Excessive Swapping/Paging

bswot/s (transfers from memory to disk swap area)   >200   sar -w
bswin/s (transfers to memory)                       >200   sar -w
%swpocc (time swap queue is occupied)               >10    sar -q
rflt/s (page reference fault)                       >0     sar -t
freemem (average pages for user processes)          <100   sar -r

Indications of a CPU-bound system

%idle (% of time CPU has no work to do)                      <5    sar -u
runq-sz (processes in memory waiting for CPU)                >2    sar -q
%runocc (% run queue occupied and processes not executing)   >90   sar -q

Miscellaneous notes:

• hypermail (/usr/local/src/src/hypermail) - mailing list to web page converter; "grep hypermail /etc/aliases" shows which lists use hypermail.
• pwck and grpck should be run weekly to make sure the password and group files are OK; grpck produces a ton of errors.
• You can use local man pages (text only - see Ch. 3, User Services). Put them in /usr/local/manl (try /usr/man/local/manl), suffix .l; pack long ones: pack program.1; mv program.1.z /usr/man/local/mannl/program.z

Linux Gazette Index - More 2-Cent Tips

• Getting the most from multiple X servers - in the office and at home
• Starting and stopping daemons
• Disabling the console screensaver
• Linux Kernel Split
• Incorrect Tip....(lilo mem=128M)
• Re: Command line editing

#### Re: Command line editing (vi mode in bash)

Wed, 17 May 2000 08:38:09 +0200
From: Sebastian Schleussner <Sebastian.Schleussner@gmx.de>

[The question:] I have been trying to set command line editing (vi mode) as part of my bash shell environment and have been unsuccessful so far. You might think this is trivial - well, so did I. I am using Red Hat Linux 6.1 and wanted to use "set -o vi" in my startup scripts. I have tried all possible combinations, but it JUST DOES NOT WORK. I inserted the line in /etc/profile, in my .bash_profile, in my .bashrc, etc., but I cannot get it to work. How can I get this done? This used to be a breeze in the Korn shell. Where am I going wrong?

Hi! I recently learned from the SuSE help that you have to put the line

set keymap vi

into your /etc/inputrc or ~/.inputrc file, in addition to what you did ('set -o vi' in ~/.bashrc or /etc/profile). I hope that will do the trick for you.

Cheers,
Sebastian Schleussner

More 2-Cent Tips

• Mouse wheel and netscape
• Utility for those who change HDDs very often: For those who change HDDs very often, here is a small, ugly but working utility which I wrote. It detects the filesystem types of all accessible partitions and checks/mounts them in folders named after the device (hda7, hdb1, hdb3, sd1, ...). So you will never have to type sequences of fdisk, fsck, mount, df...
• Traceroute resources: You may be interested in checking the site "Traceroute Lists by States. Backbone Maps List" at http://cities.lk.net/trlist.html. There you can find many links to traceroute resources, sorted by the following categories: Traceroute List by States; Traceroute against Spam; Other Traceroute Lists; Traceroute and other tools; Traceroute Analysis. The site also offers the List of Backbone Maps, sorted by geographical location, plus some other information about backbones.

#### faq_builder.pl script

Sat, 11 Mar 2000 07:08:15 +0100 (CET)
From: Hans Zoebelein <hzo@goldfish.cube.net>

Everybody who is running a software project needs a FAQ to clarify questions about the project and to enlighten newbies on how to run the software. Writing FAQs can be a time-consuming process without much fun. Here comes a little Perl script which transforms simple ASCII input into HTML output which is perfect for FAQs (Frequently Asked Questions). I'm using this script on a daily basis; it is really nice and spares a lot of time. Check out http://leb.net/blinux/blinux-faq.html for results.

The attachment faq_builder.txt is the ASCII input used to produce faq_builder.html with the faq_builder.pl script; 'faq_builder.pl faq_builder.txt > faq_builder.html' does the trick. faq_builder.html is the description of how to use faq_builder.pl.
#### Fantastic book on Linux - available for free both on/offline!

Sat, 18 Mar 2000 16:15:22 GMT
From: Esben Maaløe (Acebone) <acebone@f2s.com>

Hi! When I browse through the 2-cent tips, I see a lot of general sysadmin/bash questions that could be answered by a book called "An Introduction to Linux Systems Administration", written by David Jones and Bruce Jamieson. You can check it out at www.infocom.cqu.edu.au/Units/aut99/85321. It's available both on-line and as a downloadable PostScript file. Perhaps it's also available in PDF. It's a great book, and a great read!

#### Quick tip for mounting FDs, CDs, etc...

Fri, 25 Feb 2000 15:49:17 -0800
From: <fuzzybear@pocketmail.com>

If you can't or don't want to use auto-mounting, and are tired of typing out all those 'mount' and 'umount' commands, here's a script called 'fd' that will do "the right thing at the right time" - and is easily modified for other devices:

#!/bin/bash
# Toggle the floppy mount: if the device is already mounted, 'mount'
# prints an error (non-empty output), so unmount it; otherwise the
# 'mount' call itself has just mounted it.
d="/mnt/fd0"
if [ -n "$(mount $d 2>&1)" ]; then umount $d; fi


It's a fine example of "obfuscated Bash scripting", but it works well - I use it and its relatives 'cdr', 'dvd', and 'fdl' (Linux-ext2 floppy) every day.

Ben Okopnik

#### 2 Cent Tips

Wed, 08 Mar 2000 16:13:59 -0500
From: Bolen Coogler <bcoogler@dscga.com>

How to set vi edit mode in bash for Mandrake 7.0

If, like me, you prefer vi-style command line editing in bash, here's how to get it working in Mandrake 7.0.

When I wiped out Redhat 5.2 on my PC and installed Mandrake 7.0, I found vi command line editing no longer worked, even after issuing the "set -o vi" command. After much hair pulling and gnashing of teeth, I finally found the problem is with the /etc/inputrc file. I still don't know which line in this file caused the problem. If you have this same problem in Mandrake or some other distribution, my suggestion for a fix is:

1. su to root.
2. Save a copy of the original /etc/inputrc file (you may want it back).

3. Replace the contents of /etc/inputrc with the following:

set convert-meta off
set input-meta on
set output-meta on
set keymap vi
set editing-mode vi


The next time you start a terminal session, vi editing will be functional.

--Bolen Coogler


#### Dual booting NT and Linux

Thu, 03 Feb 2000 22:30:06 +0000
From: Clive Wright <clive_wright@telinco.co.uk>

I am not familiar with Norton Ghost; however I have been successfully dual booting NT 4 and versions of linux (currently Redhat 6.0) for the past year.

First let me refer you to the excellent article on multibooting by Tom de Blende in issue 47 of LG. Note step 17. "The tricky part is configuring Lilo. You must keep Lilo OUT OF THE MBR! The mbr is reserved for NT. If you'd install Lilo in your mbr, NT won't boot anymore".

As your requirements are quite modest, they can easily be accomplished without any third-party software such as "Bootpart".

If NT is on a Fat partition then install MSdos and use the NT loader floppy disks to repair the startup environment. If NT is on an NTFS partition then you will need a Fat partition to load MSdos. Either way you should get to a stage where you can use NT's boot manager to select between NT and MSdos.

Boot into dos and from the dos prompt: "copy bootsect.dos *.lux".

Use attrib to remove attributes from boot.ini "attrib -s -h -r boot.ini" and edit the boot.ini file; after a line similar to C:\bootsect.dos="MS-DOS v6.22" add the line C:\bootsect.lux="Redhat Linux".

Save the edited file and replace the attributes.

At the boot menu you should now have four options: two for NT (normal and vga mode) and one each for msdos and Linux. To get the linux option to work you will have to use redhat's boot disk to boot into Linux and configure Lilo. Log on as root and use your favorite text editor to edit /etc/lilo.conf. Here is a copy of mine:

boot=/c/bootsect.lux
map=/boot/map
install=/boot/boot.b
prompt
timeout=1
image=/boot/vmlinuz-2.2.14
label=linux
root=/dev/hda5


It can be quite minimal as it only has one operating system to boot; there is no requirement for a prompt and the timeout is reduced to 1 so that it boots almost immediately without further user intervention. If your linux root partition is not /dev/hda5 then the root line will require amendment.

I mount my MSdos C: drive as /c/ under Linux. I am sure this will make some Unix purists cringe, but I find mapping C: to /c easy to type and easy to remember. If you are happy with that, then all that is required is to create the mount point ("mkdir /c") and mount the C: drive. "mount -t msdos /dev/hda1 /c" will do for now, but you may want to include /dev/hda1 in /etc/fstab so that it will be mounted automatically in the future; this is useful for exporting files to make them available to NT.
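A sketch of the /etc/fstab line this describes (the device name and options may differ on your system):

```
/dev/hda1   /c   msdos   defaults   0   0
```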

Check that /c/bootsect.lux is visible to Linux: "ls /c/bootsect*"

/c/bootsect.dos  /c/bootsect.lux


Then run "lilo". It should report:

Added linux *

Following an orderly shutdown and reboot you can now select Redhat Linux at NT's boot prompt and boot into Linux. I hope you find the above useful.

#### developerWorks Linux Technical library view

Linux Magazine: Tip Pack: KDE (Aug 03, 2000)
O'Reilly Network: 12 Tips on Building Firewalls (Jul 29, 2000)
Linux.com: LILO Security Tips (Apr 20, 2000)
About.com: Small Computer Tips (Aug 16, 1999)
Ext2.org: Misc Kernel Tips #2 (Jul 06, 1999)
Ext2.org: Misc kernel tips (May 29, 1999)
Online book -- 100 Linux Tips and Tricks (May 12, 1999)
PC Week: Tips for those taking the Linux plunge (Apr 01, 1999)
ZDNet AnchorDesk: Tips and Tricks to Get You Started [with Linux] (Jan 21, 1999)
Linux Tips and Tricks (Jan 02, 1999)



Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author's free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright for original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.

FAIR USE NOTICE: This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, and economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided for by section 107 of the US Copyright Law, according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to make a contribution, supporting the development of this site and speeding up access. In case softpanorama.org is down, you can use the mirror at softpanorama.info.

Disclaimer:

The statements, views, and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP, or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

The site uses AdSense, so you need to be aware of the Google privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. The site is perfectly usable without JavaScript.