Unix Sysadmin Tips


Lazy Linux: 10 essential tricks for admins, by Vallard Benincosa, Certified Technical Sales Specialist, IBM

20 Jul 2008 | IBM DeveloperWorks

How to be a more productive Linux systems administrator

Learn these 10 tricks and you'll be the most powerful Linux® systems administrator in the universe...well, maybe not the universe, but you will need these tips to play in the big leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples accompany each trick, so you can duplicate them on your own systems.

The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?

The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin's bag of tricks. These tips will save you time—and even if you don't get paid more money to be more efficient, you'll at least have more time to play Halo.

Trick 1: Unmounting the unresponsive DVD drive

The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it will eject immediately. He then complains that, on most enterprise Linux servers, if a process is running in that directory, the ejection won't happen. For too long as a Linux administrator, I would reboot the machine and get my disk on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD drive. But this is inefficient.

Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:

# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

Now open up a second terminal and try to eject the DVD drive:

# eject

You'll get a message like:

umount: /media/cdrom: device is busy

Before you free it, let's find out who is using it.

# fuser /media/cdrom

You see the process was running and, indeed, it is our fault we cannot eject the disk.
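
With the -v flag, fuser also reports the user, PID, and access type; the output will look something like this sketch (the PID and command will differ on your machine, and the c in the ACCESS column marks a process whose current directory is on the mount):

# fuser -v /media/cdrom
                     USER        PID ACCESS COMMAND
/media/cdrom:        root       2745 ..c..  bash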

Now, if you are root, you can exercise your godlike powers and kill processes:

# fuser -k /media/cdrom

Boom! Just like that, freedom. Now solemnly unmount the drive:

# eject

fuser is good.

Trick 2: Getting your screen back when it's hosed

Try this:

# cat /bin/cat

Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?

You type reset. But wait, you say, typing reset is too close to typing reboot or shutdown. Your palms start to sweat—especially if you are doing this on a production machine.

Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:

# reset

Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.
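
One related tip: if the terminal is so scrambled that the Enter key itself no longer works, a literal line feed usually still does. Press Ctrl-J, type reset, and press Ctrl-J again.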

Trick 3: Collaboration with screen

David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."

"Fine," you say. "What machine are you on?"

David responds: "Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:

# su - david

Then you go over to posh:

# ssh posh

Once you are there, you run:

# screen -S foo

Then you holler at David:

"Hey David, run the following command on your terminal: # screen -x foo."

This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.

At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.

The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.

But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type Ctrl-A D. (That is, hold down the Ctrl key and strike the A key, then release both and press the D key.)

You can then reattach by running the screen -x foo command again.
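
A few related screen commands worth keeping at hand (all standard flags):

# screen -ls                  # list the screen sessions on this host
# screen -x foo               # attach to session "foo" without detaching other viewers
# screen -S bar -d -m top     # start a detached session named "bar" running a command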

Trick 4: Getting back the root password

You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get on the machine and change the password. This doesn't work in all cases (like if you made a GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS Linux example.

First reboot the system. When it reboots you'll come to the GRUB screen shown in Figure 1. Press an arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.


Figure 1. GRUB screen after reboot

Next, select the kernel that will boot with the arrow keys, and type E to edit the kernel line. You'll then see something like Figure 2:


Figure 2. Ready to edit the kernel line

Use the arrow key again to highlight the line that begins with kernel, and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments as shown in Figure 3:
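
Concretely, the edited line would end up looking something like this hypothetical CentOS entry (your kernel version and root device will differ):

kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet 1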


Figure 3. Append the argument with the number 1

Then press Enter, then B, and the kernel will boot up to single-user mode. Once there, you can run the passwd command to change the root password:

sh-3.00# passwd
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

Now you can reboot, and the machine will boot up with your new password.

Trick 5: SSH back door

Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come into you.

In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door. To use it, you'll need a machine on the Internet that you can use as an intermediary.

In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.


Figure 4. Poking a hole in the firewall

Here's how to proceed:

  1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.

     
  2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, connections to port 2222 on blackbox will be forwarded to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're not putting ginger out on the Internet naked.

    You can do this with the following syntax:

    ~# ssh -R 2222:localhost:22 thedude@blackbox.example.com

    Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

    thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

    to keep the machine busy. And minimize the window.

  3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

    root@tech:~# ssh thedude@blackbox.example.com

  4. Once tech is on the blackbox, they can SSH to ginger using the following command:

    thedude@blackbox:~$ ssh -p 2222 root@localhost

  5. Tech will then be prompted for a password. They should enter the root password of ginger.

     
  6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.) And if you'd rather not babysit the busy loop from step 2, see the keepalive sketch below.
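
As an alternative to the while loop in step 2, OpenSSH can keep the tunnel alive on its own. A minimal sketch using standard OpenSSH flags (-f backgrounds the session, -N skips running a remote command, and ServerAliveInterval sends a keepalive probe every 60 seconds):

~# ssh -f -N -o ServerAliveInterval=60 -R 2222:localhost:22 thedude@blackbox.example.com
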
Trick 6: Remote VNC session through an SSH tunnel

VNC, or virtual network computing, has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

For example, suppose in Trick 5, ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

  1. Start a VNC server session on ginger. This is done by running something like:

    root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99

    The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are using a really slow connection, a depth of 8 may be a better option. Using :99 specifies the display number the VNC server runs on. VNC port numbers start at 5900, so display :99 means the server is accessible on port 5999.

    When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

  2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

    root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com

    Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

    thedude@blackbox:~$ vncviewer localhost:99

    That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

  3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:

    root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com

    This time the SSH flag we used was -L, which instead of pushing 5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

  4. From tech, VNC to ginger by running the command:

    root@tech:~# vncviewer localhost:99

    Tech will now have a VNC session directly to ginger.

While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.

Let me add a trick to this trick: If tech were running the Windows® operating system and didn't have a command-line SSH client, then tech can run PuTTY. PuTTY can be set to forward SSH ports through the options in its sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like in Figure 5.


Figure 5. PuTTY can forward SSH ports for tunneling

If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.
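
The display-to-port arithmetic is the same on both ends: VNC display :N listens on TCP port 5900 + N, so localhost:2 maps to port 5902 here, just as :99 mapped to 5999 earlier.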

Trick 7: Checking your bandwidth

Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS mount ginger's shared filesystem.

The most common and cheapest way to do this is to bond two Gigabit Ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.

So they do this. But now the question is: How much bandwidth do they really have?

Gigabit Ethernet has a theoretical limit of 125MBps. Where does that number come from? Well,

1Gbps = 1000Mbps; 1000Mb/8 = 125MB; "b" = "bits," "B" = "bytes"

But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:

# wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz

You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user, which is visible on both nodes:

tar zxvf iperf*gz
cd iperf-2.0.2
./configure --prefix=/home/bob/perf
make
make install

On ginger, run:

# /home/bob/perf/bin/iperf -s -f M

This machine will act as the server and print out performance speeds in MBps.

On the beckham node, run:

# /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60

You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.

In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.

I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!

Trick 8: Command-line scripting and utilities

A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk, grep, and sed. There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.

For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:

# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
done >>/etc/hosts

Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.
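
A variant sketch that derives the IP's last octet from the loop counter itself, dropping the separate P variable (the 10#$i form forces base-10 arithmetic so that 008 and 009 aren't parsed as octal):

# for i in $(seq -w 200); do printf "192.168.99.%d n%s\n" $((10#$i)) $i; done >>/etc/hosts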

As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.

Assume the SSH is set up to authenticate without a password. Then run:

# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort | uniq

A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.

First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:

free -tm | grep Mem | awk '{print $2}'

That command says to: run free to report the node's memory in megabytes, grep the line containing Mem, and print the second field of that line (the total memory on the node).

This operation is performed on every node.

Once you have performed the command on every node, the entire output of all 200 nodes is piped (|d) to the sort command so that all the memory values are sorted.

Finally, you eliminate duplicates with the uniq command. This command will result in one of the following cases: a single value if every node reports the same memory size, or several values if the nodes differ.

This command isn't perfect. If you find that a value of memory is different than what you expect, you won't know on which node it was or how many nodes there were. Another command may need to be issued for that.
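
One variant sketch that tags each value with the node that produced it, at the cost of losing the compact uniq summary:

# for num in $(seq -w 200); do echo -n "n$num: "; ssh n$num free -tm | grep Mem | awk '{print $2}'; done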

What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. This is its real value: speed for a quick-and-dirty check.

Trick 9: Spying on the console

Some software prints error messages to the console that may not necessarily show up in your SSH session. The vcs devices let you examine these. From within an SSH session, run the following command on a remote server: # cat /dev/vcs1. This shows you what is on the first console. You can also look at the other virtual terminals using 2, 3, and so on. If a user is typing on the remote system, you'll be able to see what he typed.
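
A quick sketch for sweeping the first few consoles in one pass (assuming those virtual consoles exist on the system):

# for n in 1 2 3; do echo "=== vcs$n ==="; cat /dev/vcs$n; done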

In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.

Trick 10: Random system information collection

In Trick 8, you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.

First, let's gather information about the processor. This is easily done as follows:

# cat /proc/cpuinfo

This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.

A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:

# cat /proc/cpuinfo | grep processor | wc -l

I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.
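
A slightly shorter equivalent that skips the extra cat and wc:

# grep -c processor /proc/cpuinfo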

Another piece of information I may require is disk information. You can get this with the df command. I usually add the -h flag so that the output is in gigabytes or megabytes. # df -h also shows how the disk was partitioned.

And to end the list, here's a way to look at the firmware of your system—a method to get the BIOS level and the firmware on the NIC.

To check the BIOS version, you can run the dmidecode command. Unfortunately, you can't easily grep a single value out of its output, so pipe it through less and page to the section you need. On my Lenovo T61 laptop, the output looks like this:

# dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...

This is much more efficient than rebooting your machine and looking at the POST output.
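
Recent dmidecode versions can also pull a single value directly; a sketch, assuming your dmidecode supports the -s keyword:

# dmidecode -s bios-version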

To examine the driver and firmware versions of your Ethernet adapter, run ethtool:

# ethtool -i eth0
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: 0.3-0

Conclusion

There are thousands of tricks you can learn from someone who's an expert at the command line.

I hope at least one of these tricks helped you learn something you didn't know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don't like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.

About the author

  Vallard Benincosa is a lazy Linux Certified IT professional working for the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.
 

Old News ;-)

[Oct 05, 2020] Modular Perl in Red Hat Enterprise Linux 8 - Red Hat Developer

Oct 05, 2020 | developers.redhat.com

Modular Perl in Red Hat Enterprise Linux 8, by Petr Pisar, May 16, 2019

Red Hat Enterprise Linux 8 comes with modules as a packaging concept that allows system administrators to select the desired software version from multiple packaged versions. This article will show you how to manage Perl as a module.

Installing from a default stream

Let's install Perl:

# yum --allowerasing install perl
Last metadata expiration check: 1:37:36 ago on Tue 07 May 2019 04:18:01 PM CEST.
Dependencies resolved.
==========================================================================================
 Package                       Arch    Version                Repository             Size
==========================================================================================
Installing:
 perl                          x86_64  4:5.26.3-416.el8       rhel-8.0.z-appstream   72 k
Installing dependencies:
[ ]
Transaction Summary
==========================================================================================
Install  147 Packages

Total download size: 21 M
Installed size: 59 M
Is this ok [y/N]: y
[ ]
  perl-threads-shared-1.58-2.el8.x86_64                                                   

Complete!

Next, check which Perl you have:

$ perl -V:version
version='5.26.3';

You have Perl version 5.26.3. This is the default version, supported for the next 10 years, and if you are fine with it, you don't have to know anything about modules. But what if you want to try a different version?

Discovering streams

Let's find out what Perl modules are available using the yum module list command:

# yum module list
Last metadata expiration check: 1:45:10 ago on Tue 07 May 2019 04:18:01 PM CEST.
[ ]
Name                 Stream           Profiles     Summary
[ ]
parfait              0.5              common       Parfait Module
perl                 5.24             common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl                 5.26 [d]         common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl-App-cpanminus   1.7044 [d]       common [d]   Get, unpack, build and install CPAN mod
                                                   ules
perl-DBD-MySQL       4.046 [d]        common [d]   A MySQL interface for Perl
perl-DBD-Pg          3.7 [d]          common [d]   A PostgreSQL interface for Perl
perl-DBD-SQLite      1.58 [d]         common [d]   SQLite DBI driver
perl-DBI             1.641 [d]        common [d]   A database access API for Perl
perl-FCGI            0.78 [d]         common [d]   FastCGI Perl bindings
perl-YAML            1.24 [d]         common [d]   Perl parser for YAML
php                  7.2 [d]          common [d],  PHP scripting language
                                      devel, minim
                                      al
[ ]

Here you can see a Perl module is available in versions 5.24 and 5.26. Those are called streams in the modularity world, and they denote an independent variant, usually a different version, of the same software stack. The [d] flag marks a default stream. That means if you do not explicitly enable a different stream, the default one will be used. That explains why yum installed Perl 5.26.3 and not some of the 5.24 micro versions.

Now suppose you have an old application that you are migrating from Red Hat Enterprise Linux 7, which was running in the rh-perl524 software collection environment, and you want to give it a try on Red Hat Enterprise Linux 8. Let's try Perl 5.24 on Red Hat Enterprise Linux 8.

Enabling a stream

First, switch the Perl module to the 5.24 stream:

# yum module enable perl:5.24
Last metadata expiration check: 2:03:16 ago on Tue 07 May 2019 04:18:01 PM CEST.
Problems in request:
Modular dependency problems with Defaults:

 Problem 1: conflicting requests
  - module freeradius:3.0:8000020190425181943:75ec4169-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
  - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
  - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
 Problem 2: conflicting requests
  - module freeradius:3.0:820190131191847:fbe42456-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
  - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
  - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
Dependencies resolved.
==========================================================================================
 Package              Arch                Version              Repository            Size
==========================================================================================
Enabling module streams:
 perl                                     5.24

Transaction Summary
==========================================================================================

Is this ok [y/N]: y
Complete!

Switching module streams does not alter installed packages (see 'module enable' in dnf(8)
for details)

Here you can see a warning that the freeradius:3.0 stream is not compatible with perl:5.24. That's because FreeRADIUS was built for Perl 5.26 only. Not all modules are compatible with all other modules.

Next, you can see a confirmation for enabling the Perl 5.24 stream. And, finally, there is another warning about installed packages. The last warning means that the system can still have installed RPM packages from the 5.26 stream, and you need to sort that out explicitly.

Changing modules and changing packages are two separate phases. You can fix it by synchronizing the distribution content like this:

# yum --allowerasing distrosync
Last metadata expiration check: 0:00:56 ago on Tue 07 May 2019 06:33:36 PM CEST.
Modular dependency problems:

 Problem 1: module freeradius:3.0:8000020190425181943:75ec4169-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
  - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
  - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
  - conflicting requests
 Problem 2: module freeradius:3.0:820190131191847:fbe42456-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
  - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
  - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
  - conflicting requests
Dependencies resolved.
==========================================================================================
 Package           Arch   Version                              Repository            Size
==========================================================================================
[ ]
Downgrading:
 perl              x86_64 4:5.24.4-403.module+el8+2770+c759b41a
                                                               rhel-8.0.z-appstream 6.1 M
[ ]
Transaction Summary
==========================================================================================
Upgrade    69 Packages
Downgrade  66 Packages

Total download size: 20 M
Is this ok [y/N]: y
[ ]
Complete!

And try the perl command again:

$ perl -V:version
version='5.24.4';

Great! It works. We switched to a different Perl version, and the different Perl is still invoked with the perl command and is installed to a standard path (/usr/bin/perl). No scl enable incantation is needed, in contrast to the software collections.

You may have noticed the repeated warning about FreeRADIUS. A future YUM update is going to clean up the unnecessary warning. Despite that, I can show you that other Perl-ish modules are compatible with any Perl stream.

Dependent modules

Let's say the old application mentioned before uses the DBD::SQLite Perl module. (This nomenclature is a little ambiguous: Red Hat Enterprise Linux has modules; Perl has modules. If I want to emphasize the difference, I will say the Modularity modules or the CPAN modules.) So, let's install CPAN's DBD::SQLite module. Yum can search packaged CPAN modules, so give it a try:

# yum --allowerasing install 'perl(DBD::SQLite)'
[ ]
Dependencies resolved.
==========================================================================================
 Package          Arch    Version                             Repository             Size
==========================================================================================
Installing:
 perl-DBD-SQLite  x86_64  1.58-1.module+el8+2519+e351b2a7     rhel-8.0.z-appstream  186 k
Installing dependencies:
 perl-DBI         x86_64  1.641-2.module+el8+2701+78cee6b5    rhel-8.0.z-appstream  739 k
Enabling module streams:
 perl-DBD-SQLite          1.58
 perl-DBI                 1.641

Transaction Summary
==========================================================================================
Install  2 Packages

Total download size: 924 k
Installed size: 2.3 M
Is this ok [y/N]: y
[ ]
Installed:
  perl-DBD-SQLite-1.58-1.module+el8+2519+e351b2a7.x86_64
  perl-DBI-1.641-2.module+el8+2701+78cee6b5.x86_64

Complete!

Here you can see the DBD::SQLite CPAN module was found in the perl-DBD-SQLite RPM package that's part of the perl-DBD-SQLite:1.58 module, and apparently it requires some dependencies from the perl-DBI:1.641 module, too. Thus, yum asked for enabling the streams and installing the packages.

Before playing with DBD::SQLite under Perl 5.24, take a look at the listing of the Modularity modules and compare it with what you saw the first time:

# yum module list
[ ]
parfait              0.5              common       Parfait Module
perl                 5.24 [e]         common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl                 5.26 [d]         common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl-App-cpanminus   1.7044 [d]       common [d]   Get, unpack, build and install CPAN mod
                                                   ules
perl-DBD-MySQL       4.046 [d]        common [d]   A MySQL interface for Perl
perl-DBD-Pg          3.7 [d]          common [d]   A PostgreSQL interface for Perl
perl-DBD-SQLite      1.58 [d][e]      common [d]   SQLite DBI driver
perl-DBI             1.641 [d][e]     common [d]   A database access API for Perl
perl-FCGI            0.78 [d]         common [d]   FastCGI Perl bindings
perl-YAML            1.24 [d]         common [d]   Perl parser for YAML
php                  7.2 [d]          common [d],  PHP scripting language
                                      devel, minim
                                      al
[ ]

Notice that perl:5.24 is enabled ([e]) and thus takes precedence over perl:5.26, which would otherwise be the default one ([d]). Other enabled Modularity modules are perl-DBD-SQLite:1.58 and perl-DBI:1.641. Those were enabled when you installed DBD::SQLite. These two modules have no other streams.

In general, any module can have multiple streams. At most one stream of a module can be the default one. And at most one stream of a module can be enabled. An enabled stream takes precedence over a default one. If there is no enabled or default stream, the content of the module is unavailable.

If, for some reason, you need to disable a stream, even a default one, you do that with the yum module disable MODULE:STREAM command.
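
For example, disabling the default php stream listed earlier would look something like this (a hypothetical invocation, shown only to illustrate the syntax):

# yum module disable php:7.2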

Enough theory, back to some productive work. You are ready to test the DBD::SQLite CPAN module now. Let's create a test database, a foo table inside with one textual column called bar, and let's store a row with Hello text there:

$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test});
    $dbh->do(q{CREATE TABLE foo (bar text)});
    $sth=$dbh->prepare(q{INSERT INTO foo(bar) VALUES(?)});
    $sth->execute(q{Hello})'

Next, verify the Hello string was indeed stored by querying the database:

$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); print $dbh->selectrow_array(q{SELECT bar FROM foo}), qq{\n}'
Hello

It seems DBD::SQLite works.

Non-modular packages may not work with non-default streams

So far, everything is great and working. Now I will show what happens if you try to install an RPM package that has not been modularized and is thus compatible only with the default Perl, perl:5.26:

# yum --allowerasing install 'perl(LWP)'
[ ]
Error: 
 Problem: package perl-libwww-perl-6.34-1.el8.noarch requires perl(:MODULE_COMPAT_5.26.2), but none of the providers can be installed
  - cannot install the best candidate for the job
  - package perl-libs-4:5.26.3-416.el8.i686 is excluded
  - package perl-libs-4:5.26.3-416.el8.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Yum will report an error about the perl-libwww-perl RPM package being incompatible. The LWP CPAN module, which is packaged as perl-libwww-perl, is built only for Perl 5.26, and therefore the RPM dependencies cannot be satisfied. When the perl:5.24 stream is enabled, the packages from the perl:5.26 stream are masked and become unavailable. However, this masking does not apply to non-modular packages, like perl-libwww-perl. There are plenty of packages that have not been modularized yet. If you need some of them to be available and compatible with a non-default stream (e.g., not only with perl:5.26 but also with perl:5.24), do not hesitate to contact the Red Hat support team with your request.

Resetting a module

Let's say you tested your old application and now you want to find out if it works with the new Perl 5.26.

To do that, you need to switch back to the perl:5.26 stream. Unfortunately, switching from an enabled stream back to a default one, or to yet another non-default stream, is not straightforward. You'll need to perform a module reset:

# yum module reset perl
[ ]
Dependencies resolved.
==========================================================================================
 Package              Arch                Version              Repository            Size
==========================================================================================
Resetting module streams:
 perl                                     5.24                                           

Transaction Summary
==========================================================================================

Is this ok [y/N]: y
Complete!

Well, that did not hurt. Now you can synchronize the distribution again to replace the 5.24 RPM packages with 5.26 ones:

# yum --allowerasing distrosync
[ ]
Transaction Summary
==========================================================================================
Upgrade    65 Packages
Downgrade  71 Packages

Total download size: 22 M
Is this ok [y/N]: y
[ ]

After that, you can check the Perl version:

$ perl -V:version
version='5.26.3';

And, check the enabled modules:

# yum module list
[ ]
parfait              0.5              common       Parfait Module
perl                 5.24             common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl                 5.26 [d]         common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl-App-cpanminus   1.7044 [d]       common [d]   Get, unpack, build and install CPAN mod
                                                   ules
perl-DBD-MySQL       4.046 [d]        common [d]   A MySQL interface for Perl
perl-DBD-Pg          3.7 [d]          common [d]   A PostgreSQL interface for Perl
perl-DBD-SQLite      1.58 [d][e]      common [d]   SQLite DBI driver
perl-DBI             1.641 [d][e]     common [d]   A database access API for Perl
perl-FCGI            0.78 [d]         common [d]   FastCGI Perl bindings
perl-YAML            1.24 [d]         common [d]   Perl parser for YAML
php                  7.2 [d]          common [d],  PHP scripting language
                                      devel, minim
                                      al
[ ]

As you can see, we are back at square one. The perl:5.24 stream is not enabled, and perl:5.26 is the default and therefore preferred. Only the perl-DBD-SQLite:1.58 and perl-DBI:1.641 streams remain enabled. It does not matter much because those are the only streams. Nonetheless, you can reset them back using yum module reset perl-DBI perl-DBD-SQLite if you like.

Multi-context streams

What happened to DBD::SQLite? It's still there and working:

$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); print $dbh->selectrow_array(q{SELECT bar FROM foo}), qq{\n}'
Hello

That is possible because the perl-DBD-SQLite module is built for both the 5.24 and 5.26 Perls. We call these modules multi-contextual. That's the case for perl-DBD-SQLite and perl-DBI, but not for FreeRADIUS, which explains the warning you saw earlier. If you want to see these low-level details, such as which contexts are available, which dependencies are required, or which packages are contained in a module, you can use the yum module info MODULE:STREAM command.
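
For example, a query like the following sketch would print those details for the default Perl stream:

# yum module info perl:5.26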

Afterword

I hope this tutorial shed some light on modules -- the fresh feature of Red Hat Enterprise Linux 8 that enables us to provide you with multiple versions of software on top of one Linux platform. If you need more details, please read the documentation accompanying the product (namely, the user-space component management document and the yum(8) manual page) or ask the support team for help.

[Jul 14, 2020] Important Linux /proc filesystem files you need to know - Enable Sysadmin

Jul 14, 2020 | www.redhat.com

The /proc files I find most valuable, especially for inherited system discovery, are /proc/cmdline, /proc/cpuinfo, /proc/meminfo, and /proc/version.

And the most valuable of those are cpuinfo and meminfo.

Again, I'm not stating that other files don't have value, but these are the ones I've found that have the most value to me. For example, the /proc/uptime file gives you the system's uptime in seconds. For me, that's not particularly valuable. However, if I want that information, I use the uptime command, which also gives me a more readable version of /proc/loadavg.

By comparison:

$ cat /proc/uptime
46901.13 46856.69

$ cat /proc/loadavg 
0.00 0.01 0.03 2/111 2039

$ uptime
 00:56:13 up 13:01,  2 users,  load average: 0.00, 0.01, 0.03

I think you get the idea.

/proc/cmdline

This file shows the parameters passed to the kernel at the time it is started.

$ cat /proc/cmdline

BOOT_IMAGE=/vmlinuz-3.10.0-1062.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto spectre_v2=retpoline rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8

The value of this file is that it shows exactly how the kernel was booted, because any switches or special parameters will be listed here, too. And like all information under /proc, it can be found elsewhere, usually with better formatting, but /proc files are very handy when you can't remember the command or don't want to grep for something.

/proc/cpuinfo

The /proc/cpuinfo file is the first file I check when connecting to a new system. I want to know the CPU make-up of a system and this file tells me everything I need to know.

$ cat /proc/cpuinfo 

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 142
model name      : Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz
stepping        : 9
cpu MHz         : 2303.998
cache size      : 4096 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq monitor ssse3 cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase avx2 invpcid rdseed clflushopt md_clear flush_l1d
bogomips        : 4607.99
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

This is a virtual machine and only has one vCPU. If your system contains more than one CPU, the CPU numbering begins at 0 for the first CPU.
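
On a machine with many CPUs, you can collapse the per-CPU records into a quick summary; a small sketch using only fields shown above:

$ grep 'model name' /proc/cpuinfo | sort -u
model name      : Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz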

/proc/meminfo

The /proc/meminfo file is the second file I check on a new system. It gives me a general and a specific look at a system's memory allocation and usage.

$ cat /proc/meminfo 
MemTotal:        1014824 kB
MemFree:          643608 kB
MemAvailable:     706648 kB
Buffers:            1072 kB
Cached:           185568 kB
SwapCached:            0 kB
Active:           187568 kB
Inactive:          80092 kB
Active(anon):      81332 kB
Inactive(anon):     6604 kB
Active(file):     106236 kB
Inactive(file):    73488 kB
Unevictable:           0 kB
Mlocked:               0 kB
***Output truncated***

I think most sysadmins either use the free or the top command to pull some of the data contained here. The /proc/meminfo file gives me a quick memory overview that I like and can redirect to another file as a snapshot.
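
For example, a minimal snapshot sketch (the destination file name is just an illustration):

$ cat /proc/meminfo > /tmp/meminfo.$(date +%Y%m%d-%H%M%S)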

/proc/version

The /proc/version file provides more information than the related uname -a command does. Here are the two compared:

$ cat /proc/version
Linux version 3.10.0-1062.el7.x86_64 (mockbuild@kbuilder.bsys.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Wed Aug 7 18:08:02 UTC 2019

$ uname -a
Linux centos7 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Usually, the uname -a command is sufficient to give you kernel version info but for those of you who are developers or who are ultra-concerned with details, the /proc/version file is there for you.

Wrapping up

The /proc filesystem has a ton of valuable information available to system administrators who want a convenient, non-command way of getting at raw system info. As I stated earlier, there are other ways to display the information in /proc . Additionally, some of the /proc info isn't what you'd want to use for system assessment. For example, use commands such as vmstat 5 5 or iostat 5 5 to get a better picture of system performance rather than reading one of the available /proc files.

[Jul 12, 2020] 6 handy Bash scripts for Git - Opensource.com

Jul 12, 2020 | opensource.com

6 handy Bash scripts for Git

These six Bash scripts will make your life easier when you're working with Git repositories.

15 Jan 2020 | Bob Peterson (Red Hat)

I wrote a bunch of Bash scripts that make my life easier when I'm working with Git repositories. Many of my colleagues say there's no need; that everything I need to do can be done with Git commands. While that may be true, I find the scripts infinitely more convenient than trying to figure out the appropriate Git command to do what I want.

1. gitlog

gitlog prints an abbreviated list of current patches against the master version. It prints them from oldest to newest and shows the author and description, with H for HEAD, ^ for HEAD^, 2 for HEAD~2, and so forth. For example:

$ gitlog
-----------------------[ recovery25 ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time

If I want to see what patches are on a different branch, I can specify an alternate branch:

$ gitlog recovery24
2. gitlog.id

gitlog.id just prints the patch SHA1 IDs:

$ gitlog.id
-----------------------[ recovery25 ]-----------------------
56908eeb6940 2ca4a6b628a1 fc64ad5d99fe 02031a00a251 f6f38da7dd18 d8546e8f0023 fc3cc1f98f6b 12c3e0cb3523 76cce178b134 6fc1dce3ab9c 1b681ab074ca 26fed8de719b 802ff51a5670 49f67a512d8c f04f20193bbb 5f6afe809d23 2030521dc70e dada79b3be94 9b19a1e08161 78a035041d3e f03da011cae2 0d2b2e068fcd 2449976aa133 57dfb5e12ccd 53abedfdcf72 6fbdda3474b3 49544a547188 187032f7a63c 6f75dae23d93 95fc2a261b00 ebfb14ded191 f653ee9e414a 0e2911cb8111 73968b76e2e3 8a3e4cb5e92c a5f2da803b5b 7c9ef68388ed 71ca19d0cba8 340d27a33895 9b3c4e6efb10 d2e8c22be39b 9563e31f8bfd ebac7a38036c f703a3c27874 a3e86d2ef30e da3c604755b0 4525c2f5b46f a06a5b7dea02 8ba93c796d5c e8b5ff851bb9

Again, it assumes the current branch, but I can specify a different branch if I want.

3. gitlog.id2

gitlog.id2 is the same as gitlog.id but without the branch line at the top. This is handy for cherry-picking all patches from one branch to the current branch:

$ # create a new branch
$ git branch --track recovery26 origin/master
$ # check out the new branch I just created
$ git checkout recovery26
$ # cherry-pick all patches from the old branch to the new one
$ for i in `gitlog.id2 recovery25`; do git cherry-pick $i; done

4. gitlog.grep

gitlog.grep greps for a string within that collection of patches. For example, if I find a bug and want to fix the patch that has a reference to function inode_go_sync , I simply do:

$ gitlog.grep inode_go_sync
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
152:-static void inode_go_sync(struct gfs2_glock *gl)
153:+static int inode_go_sync(struct gfs2_glock *gl)
163:@@ -296,6 +302,7 @@ static void inode_go_sync(struct gfs2_glock *gl)
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time

So, now I know that patch HEAD~9 is the one that needs fixing. I use git rebase -i HEAD~10 to edit patch 9, git commit -a --amend , then git rebase --continue to make the necessary adjustments.
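
In other words, the edit cycle looks something like this sketch (the file being fixed is hypothetical):

$ git rebase -i HEAD~10      # mark the offending patch as "edit"
$ vi fs/gfs2/glops.c         # make the fix
$ git commit -a --amend      # fold the fix into the patch
$ git rebase --continue      # replay the remaining patches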

5. gitbranchcmp3

gitbranchcmp3 lets me compare my current branch to another branch, so I can compare older versions of patches to my newer versions and quickly see what's changed and what hasn't. It generates a compare script (that uses the KDE tool Kompare, which works on GNOME3 as well) to compare the patches that aren't quite the same. If there are no differences other than line numbers, it prints [SAME]. If there are only comment differences, it prints [same] (in lower case). For example:

$ gitbranchcmp3 recovery24
Branch recovery24 has 47 patches
Branch recovery25 has 50 patches

(snip)
38 87eb6901607a 340d27a33895 [same] gfs2: drain the ail2 list after io errors
39 90fefb577a26 9b3c4e6efb10 [same] gfs2: clean up iopen glock mess in gfs2_create_inode
40 ba3ae06b8b0e d2e8c22be39b [same] gfs2: Do proper error checking for go_sync family of glops
41 2ab662294329 9563e31f8bfd [SAME] gfs2: use page_offset in gfs2_page_mkwrite
42 0adc6d817b7a ebac7a38036c [SAME] gfs2: don't use buffer_heads in gfs2_allocate_page_backing
43 55ef1f8d0be8 f703a3c27874 [SAME] gfs2: Improve mmap write vs. punch_hole consistency
44 de57c2f72570 a3e86d2ef30e [SAME] gfs2: Multi-block allocations in gfs2_page_mkwrite
45 7c5305fbd68a da3c604755b0 [SAME] gfs2: Fix end-of-file handling in gfs2_page_mkwrite
46 162524005151 4525c2f5b46f [SAME] Rafael Aquini's slab instrumentation
47 a06a5b7dea02 [ ] GFS2: Add go_get_holdtime to gl_ops
48 8ba93c796d5c [ ] gfs2: introduce new function remaining_hold_time and use it in dq
49 e8b5ff851bb9 [ ] gfs2: Allow rgrps to have a minimum hold time

Missing from recovery25:
The missing:
Compare script generated at: /tmp/compare_mismatches.sh

6. gitlog.find

Finally, I have gitlog.find , a script to help me identify where the upstream versions of my patches are and each patch's current status. It does this by matching the patch description. It also generates a compare script (again, using Kompare) to compare the current patch to the upstream counterpart:

$ gitlog.find
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
lo 5bcb9be74b2a Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
fn 2c47c1be51fb Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
lo feb7ea639472 Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
ms f3915f83e84c Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
ms 35af80aef99b Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
fn 39c3a948ecf6 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
fn f53056c43063 Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
fn 184b4e60853d Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
Not found upstream
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
Not found upstream
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
Not found upstream
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
Not found upstream
Compare script generated: /tmp/compare_upstream.sh

The patches are shown on two lines: the first is your current patch, followed by the corresponding upstream patch, and a 2-character abbreviation indicating its upstream status (in the sample above, lo, fn, and ms mark patches found in the local tree, the for-next branch, and the mainline master branch, respectively).

Some of my scripts make assumptions based on how I normally work with Git. For example, when searching for upstream patches, it uses my well-known Git tree's location. So, you will need to adjust or improve them to suit your conditions. The gitlog.find script is designed to locate GFS2 and DLM patches only, so unless you're a GFS2 developer, you will want to customize it to the components that interest you.

Source code

Here is the source for these scripts.

1. gitlog

#!/bin/bash
branch=$1

if test "x$branch" = x; then
  branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

patches=0
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc); done

if [[ $branch =~ .*for-next.* ]]
then
  start=HEAD
  # start=origin/for-next
else
  start=origin/master
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

/usr/bin/echo "-----------------------[" $branch "]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
  if [ $patches -eq 1 ]; then
    cnt=" ^"
  elif [ $patches -eq 0 ]; then
    cnt=" H"
  else
    if [ $patches -lt 10 ]; then
      cnt=" $patches"
    else
      cnt="$patches"
    fi
  fi
  /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s%n" $i
  patches=$(echo $patches - 1 | bc)
done
#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" $tracking..$branch
#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" ^origin/master ^linux-gfs2/for-next $branch

2. gitlog.id

#!/bin/bash
branch=$1

if test "x$branch" = x; then
  branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

/usr/bin/echo "-----------------------[" $branch "]-----------------------"
git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '

3. gitlog.id2

#!/bin/bash
branch=$1

if test "x$branch" = x; then
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '

4. gitlog.grep

#!/bin/bash
param1=$1
param2=$2

if test "x$param2" = x; then
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
    string=$param1
else
    branch=$param1
    string=$param2
fi

patches=0
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc); done
/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
    if [ $patches -eq 1 ]; then
        cnt=" ^"
    elif [ $patches -eq 0 ]; then
        cnt=" H"
    else
        if [ $patches -lt 10 ]; then
            cnt=" $patches"
        else
            cnt="$patches"
        fi
    fi
    /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
    /usr/bin/git show --pretty=email --patch-with-stat $i | grep -n "$string"
    patches=$(echo $patches - 1 | bc)
done

5. gitbranchcmp3

#!/bin/bash
#
# gitbranchcmp3 <old_branch> [<new_branch>]
#
oldbranch=$1
newbranch=$2
script=/tmp/compare_mismatches.sh

/usr/bin/rm -f $script
echo "#!/bin/bash" > $script
/usr/bin/chmod 755 $script
echo "# Generated by gitbranchcmp3.sh" >> $script
echo "# Run this script to compare the mismatched patches" >> $script
echo " " >> $script
echo "function compare_them()" >> $script
echo "{" >> $script
echo "    git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
echo "    git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
echo "    kompare /tmp/gronk1 /tmp/gronk2" >> $script
echo "}" >> $script
echo " " >> $script

if test "x$newbranch" = x; then
    newbranch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

declare -a oldsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$oldbranch | cut -d ' ' -f1 | paste -s -d ' '`)
declare -a newsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$newbranch | cut -d ' ' -f1 | paste -s -d ' '`)

#echo "old: " $oldsha1s
oldcount=${#oldsha1s[@]}
echo "Branch $oldbranch has $oldcount patches"
oldcount=$(echo $oldcount - 1 | bc)
#for o in `seq 0 ${#oldsha1s[@]}`; do
#    echo -n ${oldsha1s[$o]} " "
#    desc=`git show $i | head -5 | tail -1 | cut -b5-`
#done

#echo "new: " $newsha1s
newcount=${#newsha1s[@]}
echo "Branch $newbranch has $newcount patches"
newcount=$(echo $newcount - 1 | bc)
#for o in `seq 0 ${#newsha1s[@]}`; do
#    echo -n ${newsha1s[$o]} " "
#    desc=`git show $i | head -5 | tail -1 | cut -b5-`
#done
echo

for new in `seq 0 $newcount`; do
    newsha=${newsha1s[$new]}
    newdesc=`git show $newsha | head -5 | tail -1 | cut -b5-`
    oldsha="            "
    same="[    ]"
    for old in `seq 0 $oldcount`; do
        if test "${oldsha1s[$old]}" = "match"; then
            continue;
        fi
        olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
        if test "$olddesc" = "$newdesc"; then
            oldsha=${oldsha1s[$old]}
            #echo $oldsha
            git show $oldsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
            git show $newsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
            diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
            if [ $? -eq 0 ]; then
                # No differences
                same="[SAME]"
                oldsha1s[$old]="match"
                break
            fi
            git show $oldsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
            git show $newsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
            diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
            if [ $? -eq 0 ]; then
                # Differences in comments only
                same="[same]"
                oldsha1s[$old]="match"
                break
            fi
            oldsha1s[$old]="match"
            echo "compare_them $oldsha $newsha" >> $script
        fi
    done
    echo "$new $oldsha $newsha $same $newdesc"
done

echo
echo "Missing from $newbranch:"
the_missing=""
# Now run through the olds we haven't matched up
for old in `seq 0 $oldcount`; do
    if test ${oldsha1s[$old]} != "match"; then
        olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
        echo "${oldsha1s[$old]} $olddesc"
        the_missing=`echo "$the_missing ${oldsha1s[$old]}"`
    fi
done

echo "The missing: " $the_missing
echo "Compare script generated at: $script"
#git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '

6. gitlog.find

#!/bin/bash
#
# Find the upstream equivalent patch
#
# gitlog.find
#
cwd=$PWD
param1=$1
ubranch=$2
patches=0
script=/tmp/compare_upstream.sh
echo "#!/bin/bash" > $script
/usr/bin/chmod 755 $script
echo "# Generated by gitbranchcmp3.sh" >> $script
echo "# Run this script to compare the mismatched patches" >> $script
echo " " >> $script
echo "function compare_them()" >> $script
echo "{" >> $script
echo "    cwd=$PWD" >> $script
echo "    git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
echo "    cd ~/linux.git/fs/gfs2" >> $script
echo "    git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
echo "    cd $cwd" >> $script
echo "    kompare /tmp/gronk1 /tmp/gronk2" >> $script
echo "}" >> $script
echo " " >> $script

#echo "Gathering upstream patch info. Please wait."
branch=`git branch -a | grep "*" | cut -d ' ' -f2`
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

cd ~/linux.git
if test "X${ubranch}" = "X"; then
    ubranch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi
utracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
#
# gather a list of gfs2 patches from master just in case we can't find it
#
#git log --abbrev-commit --pretty=format:" %h %<|(32)%an %s" master | grep -i -e "gfs2" -e "dlm" > /tmp/gronk
git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/gfs2/ > /tmp/gronk.gfs2
# ms = in Linus's master
git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/dlm/ > /tmp/gronk.dlm

cd $cwd
LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc); done
/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
    if [ $patches -eq 1 ]; then
        cnt=" ^"
    elif [ $patches -eq 0 ]; then
        cnt=" H"
    else
        if [ $patches -lt 10 ]; then
            cnt=" $patches"
        else
            cnt="$patches"
        fi
    fi
    /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
    desc=`/usr/bin/git show --abbrev-commit -s --pretty=format:"%s" $i`
    cd ~/linux.git
    cmp=1
    up_eq=`git log --reverse --abbrev-commit --pretty=format:"lo %h %<|(32)%an %s" $utracking..$ubranch | grep "$desc"`
    # lo = in local for-next
    if test "X$up_eq" = "X"; then
        up_eq=`git log --reverse --abbrev-commit --pretty=format:"fn %h %<|(32)%an %s" master..$utracking | grep "$desc"`
        # fn = in for-next for next merge window
        if test "X$up_eq" = "X"; then
            up_eq=`grep "$desc" /tmp/gronk.gfs2`
            if test "X$up_eq" = "X"; then
                up_eq=`grep "$desc" /tmp/gronk.dlm`
                if test "X$up_eq" = "X"; then
                    up_eq="   Not found upstream"
                    cmp=0
                fi
            fi
        fi
    fi
    echo "$up_eq"
    if [ $cmp -eq 1 ]; then
        UP_SHA1=`echo $up_eq | cut -d ' ' -f2`
        echo "compare_them $UP_SHA1 $i" >> $script
    fi
    cd $cwd
    patches=$(echo $patches - 1 | bc)
done
echo "Compare script generated: $script"

[Jul 11, 2020] Own your own content Vallard's Blog

Jul 11, 2020 | benincosa.com

Posted on December 31, 2019 by Vallard

Reading Hacker News this morning, I came across this article on how the old Internet has died because we trusted all our content to Facebook and Google. While hyperbole abounds in the headline, and there are plenty of things on the internet that are owned by neither Google nor Facebook (including this AWS-free blog), it is true that much of the web's information and content is in the hands of a giant ad-serving service and a social echo chamber (well, that is probably too harsh).

I heard this advice many years ago: you should own your own content. While there isn't much value in my trivial or obscure blog that nobody reads, it matters to me, and it is the reason I've run it on my own software, on my own servers, for 10+ years. This blog, for example, runs on open source WordPress, on a Linux server hosted by a friend, and is managed by me as I log in and make changes.

But of course, that is silly! Why not publish on Medium like everyone else? Or publish on someone else's service? Isn't that the point of the internet? Maybe. But in another sense, to me, the point is freedom. Freedom to express, do what I want, say what I will with no restrictions. The ability to own what I say and freedom from others monetizing me directly. There's no walled garden and anyone can access the content I write in my own little funzone.

While that may seem like ridiculousness, to me it's part of my hobby, and something I enjoy. In the next decade, whether this blog remains up or is shut down is not dependent upon the fates of Google, Facebook, Amazon, or Apple. It's dependent upon me, whether I want it up or not. If I change my views, I can delete it. It won't just sit on the Internet because someone else's terms of service agreement changed. I am in control, I am in charge. That to me is important and the reason I run this blog, don't use other people's services, and why I advocate for owning your own content.

[Jul 09, 2020] My Favourite Secret Weapon strace

Jul 09, 2020 | zwischenzugs.com

Why strace ?

I'm often asked in my technical troubleshooting job to solve problems that development teams can't solve. Usually these do not involve knowledge of API calls or syntax, rather some kind of insight into what the right tool to use is, and why and how to use it. Probably because they're not taught in college, developers are often unaware that these tools exist, which is a shame, as playing with them can give a much deeper understanding of what's going on and ultimately lead to better code.

My favourite secret weapon in this path to understanding is strace.

strace (or its equivalents on other Unix systems, truss and dtruss ) is a tool that tells you which operating system (OS) calls your program is making.

An OS call (or just "system call") is your program asking the OS to provide some service for it. Since this covers a lot of the things that cause problems not directly to do with the domain of your application development (I/O, finding files, permissions etc) its use has a very high hit rate in resolving problems out of developers' normal problem space.

Usage Patterns

strace is useful in all sorts of contexts. Here are a couple of examples garnered from my experience.

My Netcat Server Won't Start!

Imagine you're trying to start an executable, but it's failing silently (no log file, no output at all). You don't have the source, and even if you did, the source code is neither readily available, nor ready to compile, nor readily comprehensible.

Simply running it through strace will likely give you clues as to what's gone on.

$  nc -l localhost 80
nc: Permission denied

Let's say someone's trying to run this and doesn't understand why it's not working (let's assume manuals are unavailable).

Simply put strace at the front of your command. Note that the following output has been heavily edited for space reasons (deep breath):

 $ strace nc -l localhost 80
 execve("/bin/nc", ["nc", "-l", "localhost", "80"], [/* 54 vars */]) = 0
 brk(0)                                  = 0x1e7a000
 access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9c0000
 access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
 open("/usr/local/lib/tls/x86_64/libglib-2.0.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
 stat("/usr/local/lib/tls/x86_64", 0x7fff5686c240) = -1 ENOENT (No such file or directory)
 [...]
 open("libglib-2.0.so.0", O_RDONLY)      = -1 ENOENT (No such file or directory)
 open("/etc/ld.so.cache", O_RDONLY)      = 3
 fstat(3, {st_mode=S_IFREG|0644, st_size=179820, ...}) = 0
 mmap(NULL, 179820, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f751c994000
 close(3)                                = 0
 access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
 open("/lib/x86_64-linux-gnu/libglib-2.0.so.0", O_RDONLY) = 3
 read(3, "\177ELF\2\1\1\3>\1\320k\1"..., 832) = 832
 fstat(3, {st_mode=S_IFREG|0644, st_size=975080, ...}) = 0
 mmap(NULL, 3072520, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f751c4b3000
 mprotect(0x7f751c5a0000, 2093056, PROT_NONE) = 0
 mmap(0x7f751c79f000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xec000) = 0x7f751c79f000
 mmap(0x7f751c7a1000, 520, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f751c7a1000
 close(3)                                = 0
 open("/usr/local/lib/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
[...]
 mmap(NULL, 179820, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f751c994000
 close(3)                                = 0
 access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
 open("/lib/x86_64-linux-gnu/libnss_files.so.2", O_RDONLY) = 3
 read(3, "\177ELF\2\1\1\3>\1\20\""..., 832) = 832
 fstat(3, {st_mode=S_IFREG|0644, st_size=51728, ...}) = 0
 mmap(NULL, 2148104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f751b8b0000
 mprotect(0x7f751b8bc000, 2093056, PROT_NONE) = 0
 mmap(0x7f751babb000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f751babb000
 close(3)                                = 0
 mprotect(0x7f751babb000, 4096, PROT_READ) = 0
 munmap(0x7f751c994000, 179820)          = 0
 open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 3
 fcntl(3, F_GETFD)                       = 0x1 (flags FD_CLOEXEC)
 fstat(3, {st_mode=S_IFREG|0644, st_size=315, ...}) = 0
 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9bf000
 read(3, "127.0.0.1\tlocalhost\n127.0.1.1\tal"..., 4096) = 315
 read(3, "", 4096)                       = 0
 close(3)                                = 0
 munmap(0x7f751c9bf000, 4096)            = 0
 open("/etc/gai.conf", O_RDONLY)         = 3
 fstat(3, {st_mode=S_IFREG|0644, st_size=3343, ...}) = 0
 fstat(3, {st_mode=S_IFREG|0644, st_size=3343, ...}) = 0
 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9bf000
 read(3, "# Configuration for getaddrinfo("..., 4096) = 3343
 read(3, "", 4096)                       = 0
 close(3)                                = 0
 munmap(0x7f751c9bf000, 4096)            = 0
 futex(0x7f751c4af460, FUTEX_WAKE_PRIVATE, 2147483647) = 0
 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
 connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
 getsockname(3, {sa_family=AF_INET, sin_port=htons(58567), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0
 close(3)                                = 0
 socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
 connect(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
 getsockname(3, {sa_family=AF_INET6, sin6_port=htons(42803), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
 close(3)                                = 0
 socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EACCES (Permission denied)
 close(3)                                = 0
 socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EACCES (Permission denied)
 close(3)                                = 0
 write(2, "nc: ", 4nc: )                     = 4
 write(2, "Permission denied\n", 18Permission denied
 )     = 18
 exit_group(1)                           = ?

To most people that see this flying up their terminal this initially looks like gobbledygook, but it's really quite easy to parse when a few things are explained.

Each line shows the name of the system call, its arguments in parentheses, and, after the equals sign, its return value. For example:

open("/etc/gai.conf", O_RDONLY)         = 3

For this particular line, the system call is open , the arguments are the string /etc/gai.conf and the constant O_RDONLY , and the return value was 3 .

How to make sense of this?

Some of these system calls can be guessed or enough can be inferred from context. Most readers will figure out that the above line is the attempt to open a file with read-only permission.

In the case of the above failure, we can see that before the program calls exit_group, there are a couple of calls to bind that return "Permission denied":

 bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EACCES (Permission denied)
 close(3)                                = 0
 socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EACCES (Permission denied)
 close(3)                                = 0
 write(2, "nc: ", 4nc: )                     = 4
 write(2, "Permission denied\n", 18Permission denied
 )     = 18
 exit_group(1)                           = ?

We might therefore want to understand what "bind" is and why it might be failing.

You need to get a copy of the system call's documentation. On Ubuntu and related Linux distributions, the documentation is in the manpages-dev package, and can be invoked with man 2 bind (I just used strace to determine which file man 2 bind opened, and then did a dpkg -S to determine which package it came from!). You can also look it up online, but if you can auto-install via a package manager, you're more likely to get docs that match your installation.
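As a sketch of that trick (package name and paths here are from an Ubuntu-like system, so treat the output as illustrative):

$ strace -f -o /tmp/man.trace man 2 bind
$ grep -E 'open(at)?\(.*bind' /tmp/man.trace
openat(AT_FDCWD, "/usr/share/man/man2/bind.2.gz", O_RDONLY) = 3
$ dpkg -S /usr/share/man/man2/bind.2.gz
manpages-dev: /usr/share/man/man2/bind.2.gz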

Right there in my man 2 bind page it says:

ERRORS
EACCES The address is protected, and the user is not the superuser.

So there is the answer – we're trying to bind to a port that can only be bound to if you are the super-user.

My Library Is Not Loading!

Imagine a situation where developer A's perl script works fine, but developer B's identical one does not (again, the output has been edited).
In this case, we strace the script on the machine where it works, to see where it finds its library:

$ strace perl a.pl
execve("/usr/bin/perl", ["perl", "a.pl"], [/* 57 vars */]) = 0
brk(0)                                  = 0xa8f000
[...]fcntl(3, F_SETFD, FD_CLOEXEC)           = 0
fstat(3, {st_mode=S_IFREG|0664, st_size=14, ...}) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0
brk(0xad1000)                           = 0xad1000
read(3, "use blahlib;\n\n", 4096)       = 14
stat("/space/myperllib/blahlib.pmc", 0x7fffbaf7f3d0) = -1 ENOENT (No such file or directory)
stat("/space/myperllib/blahlib.pm", {st_mode=S_IFREG|0644, st_size=7692, ...}) = 0
open("/space/myperllib/blahlib.pm", O_RDONLY) = 4
ioctl(4, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fffbaf7f090) = -1 ENOTTY (Inappropriate ioctl for device)
[...]mmap(0x7f4c45ea8000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 5, 0x4000) = 0x7f4c45ea8000
close(5)                                = 0
mprotect(0x7f4c45ea8000, 4096, PROT_READ) = 0
brk(0xb55000)                           = 0xb55000
read(4, "swrite($_[0], $_[1], $_[2], $_[3"..., 4096) = 3596
brk(0xb77000)                           = 0xb77000
read(4, "", 4096)                       = 0
close(4)                                = 0
read(3, "", 4096)                       = 0
close(3)                                = 0
exit_group(0)                           = ?

We observe that the file is found in what looks like an unusual place.

open("/space/myperllib/blahlib.pm", O_RDONLY) = 4

Inspecting the environment, we see that:

$ env | grep myperl
PERL5LIB=/space/myperllib

So the solution is to set the same env variable before running:

export PERL5LIB=/space/myperllib
Get to know the internals bit by bit

If you do this a lot, or idly run strace on various commands and peruse the output, you can learn all sorts of things about the internals of your OS. If you're like me, this is a great way to learn how things work. For example, just now I've had a look at the file /etc/gai.conf , which I'd never come across before writing this.

Once your interest has been piqued, I recommend getting a copy of "Advanced Programming in the Unix Environment" by Stevens & Rago, and reading it cover to cover. Not all of it will go in, but as you use strace more and more, and (hopefully) browse C code more and more, your understanding will grow.

Gotchas

If you're running a program that calls other programs, it's important to run with the -f flag, which "follows" child processes and straces them. -ff creates a separate file with the pid suffixed to the name.

If you're on Solaris, this program doesn't exist; you need to use truss instead.

Many production environments will not have this program installed for security reasons. strace doesn't have many library dependencies (on my machine it has the same dependencies as 'echo'), so if you have permission, (or are feeling sneaky) you can just copy the executable up.

Other useful tidbits

You can attach to running processes (can be handy if your program appears to hang or the issue is not readily reproducible) with -p .

If you're looking at performance issues, then the time flags ( -t , -tt , -ttt , and -T ) can help significantly.
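Putting those together, a plausible invocation for a hung process ( myprog and the trace path are placeholders) would be:

$ strace -p "$(pgrep -o myprog)" -f -tt -T -o /tmp/myprog.trace

-tt prints a microsecond timestamp on each call, and -T appends the time spent inside each call, which together make slow system calls stand out.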

vasudevram February 11, 2018 at 5:29 pm

Interesting post. One point: the errors start earlier than you said. There is a call to access() near the top of the strace output, which fails:

access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)

vasudevram February 11, 2018 at 5:29 pm

I guess that could trigger the other errors.

Benji Wiebe February 11, 2018 at 7:30 pm

A failed access or open system call is not usually an error in the context of launching a program. Generally it is merely checking if a config file exists.

vasudevram February 11, 2018 at 8:24 pm

>A failed access or open system call is not usually an error in the context of launching a program.

Yes, good point, that could be so, if the programmer meant to ignore the error, and if it was not an issue to do so.

>Generally it is merely checking if a config file exists.

The file name being access'ed is "/etc/ld.so.nohwcap" – not sure if it is a config file or not.

[Jul 08, 2020] Exit Codes

From bash manual: The exit status of an executed command is the value returned by the waitpid system call or equivalent function. Exit statuses fall between 0 and 255, though, as explained below, the shell may use values above 125 specially. Exit statuses from shell builtins and compound commands are also limited to this range. Under certain circumstances, the shell will use special values to indicate specific failure modes.
For the shell’s purposes, a command which exits with a zero exit status has succeeded. A non-zero exit status indicates failure. This seemingly counter-intuitive scheme is used so there is one well-defined way to indicate success and a variety of ways to indicate various failure modes. When a command terminates on a fatal signal whose number is N, Bash uses the value 128+N as the exit status.
If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not executable, the return status is 126.
If a command fails because of an error during expansion or redirection, the exit status is greater than zero.
The exit status is used by the Bash conditional commands (see Conditional Constructs) and some of the list constructs (see Lists).
All of the Bash builtins return an exit status of zero if they succeed and a non-zero status on failure, so they may be used by the conditional and list constructs. All builtins return an exit status of 2 to indicate incorrect usage, generally invalid options or missing arguments.
Jul 08, 2020 | zwischenzugs.com

Not everyone knows that every time you run a shell command in bash, an 'exit code' is returned to bash.

Generally, if a command 'succeeds', you get an exit code of 0 . If it doesn't succeed, you get a non-zero code.

1 is a 'general error', and other codes can give you more information (for example, which signal killed the process). 255 is the upper limit of the range.

grep joeuser /etc/passwd # returns 0 if joeuser is found, 1 if not (and 2 on error)

or

grep not_there /dev/null
echo $?

$? is a special bash variable that's set to the exit code of each command after it runs.

Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match or not return 0 ?
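For the record, you can settle it with a quick test: grep returns 0 when it finds a match, 1 when it doesn't, and 2 on a genuine error such as an unreadable file:

$ grep root /etc/passwd > /dev/null; echo $?
0
$ grep not_there /etc/passwd > /dev/null; echo $?
1
$ grep root /no/such/file 2> /dev/null; echo $?
2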

[Jul 07, 2020] The Missing Readline Primer by Ian Miell

Highly recommended!
This is from the book Learn Bash the Hard Way, available for $6.99.
Jul 07, 2020 | zwischenzugs.com


Readline is one of those technologies that is so commonly used many users don't realise it's there.

I went looking for a good primer on it so I could understand it better, but failed to find one. This is an attempt to write a primer that may help users get to grips with it, based on what I've managed to glean as I've tried to research and experiment with it over the years.

Bash Without Readline

First you're going to see what bash looks like without readline.

In your 'normal' bash shell, hit the TAB key twice. You should see something like this:

    Display all 2335 possibilities? (y or n)

That's because bash normally has an 'autocomplete' function that allows you to see what commands are available to you if you tap tab twice.

Hit n to get out of that autocomplete.

Another useful function that's commonly used is that if you hit the up arrow key a few times, then the previously-run commands should be brought back to the command line.

Now type:

$ bash --noediting

The --noediting flag starts up bash without the readline library enabled.

If you hit TAB twice now you will see something different: the shell no longer 'sees' your tab and just sends a tab direct to the screen, moving your cursor along. Autocomplete has gone.

Autocomplete is just one of the things that the readline library gives you in the terminal. You might want to try hitting the up or down arrows as you did above to see that that no longer works as well.

Hit return to get a fresh command line, and exit your non-readline-enabled bash shell:

$ exit
Other Shortcuts

There are a great many shortcuts like autocomplete available to you if readline is enabled. I'll quickly outline four of the most commonly-used of these before explaining how you can find out more.

$ echo 'some command'

There should not be many surprises there. Now if you hit the 'up' arrow, you will see you can get the last command back on your line. If you like, you can re-run the command, but there are other things you can do with readline before you hit return.

If you hold down the ctrl key and then hit a at the same time your cursor will return to the start of the line. Another way of representing this 'multi-key' way of inputting is to write it like this: \C-a . This is one conventional way to represent this kind of input. The \C represents the control key, and the -a represents that the a key is depressed at the same time.

Now if you hit \C-e ( ctrl and e ) then your cursor has moved to the end of the line. I use these two dozens of times a day.

Another frequently useful one is \C-l , which clears the screen, but leaves your command line intact.

The last one I'll show you allows you to search your history to find matching commands while you type. Hit \C-r , and then type ec . You should see the echo command you just ran like this:

    (reverse-i-search)`ec': echo 'some command'

Then do it again, but keep hitting \C-r over and over. You should see all the commands that have `ec` in them that you've input before (if you've only got one echo command in your history then you will only see one). As you see them you are placed at that point in your history and you can move up and down from there or just hit return to re-run if you want.

There are many more shortcuts that readline gives you. Next I'll show you how to view these.

Using `bind` to Show Readline Shortcuts

If you type:

$ bind -p

You will see a list of bindings that readline is capable of. There's a lot of them!

Have a read through if you're interested, but don't worry about understanding them all yet.

If you type:

$ bind -p | grep C-a

you'll pick out the 'beginning-of-line' binding you used before, and see the \C-a notation I showed you before.

As an exercise at this point, you might want to look for the \C-e and \C-r bindings we used previously.

If you want to look through the entirety of the bind -p output, then you will want to know that \M refers to the Meta key (which you might also know as the Alt key), and \e refers to the Esc key on your keyboard. The 'escape' key bindings are different in that you don't hit it and another key at the same time, rather you hit it, and then hit another key afterwards. So, for example, typing the Esc key, and then the ? key also tries to auto-complete the command you are typing. This is documented as:

    "\e?": possible-completions

in the bind -p output.
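Incidentally, bind can create bindings as well as list them. As a throwaway example (this overrides readline's default abort binding on \C-g , for the current session only):

$ bind '"\C-g": "ls -l\n"'

Hitting ctrl and g now inserts ls -l followed by a newline, running it immediately. The same line, minus the bind wrapper and the outer quotes, can go in your ~/.inputrc to make it permanent.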

Readline and Terminal Options

If you've looked over the possibilities that readline offers you, you might have seen the \C-r binding we looked at earlier:

    "\C-r": reverse-search-history

You might also have seen that there is another binding that allows you to search forward through your history too:

    "\C-s": forward-search-history

What often happens to me is that I hit \C-r over and over again, and then go too fast through the history and fly past the command I was looking for. In these cases I might try to hit \C-s to search forward and get to the one I missed.

Watch out though! Hitting \C-s to search forward through the history might well not work for you.

Why is this, if the binding is there and readline is switched on?

It's because something picked up the \C-s before it got to the readline library: the terminal settings.

The terminal program you are running in may have standard settings that do other things on hitting some of these shortcuts before readline gets to see it.

If you type:

$ stty -e

you should get output similar to this:

speed 9600 baud; 47 rows; 202 columns;
lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl -echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo -extproc
iflags: -istrip icrnl -inlcr -igncr ixon -ixoff ixany imaxbel -iutf8 -ignbrk brkint -inpck -ignpar -parmrk
oflags: opost onlcr -oxtabs -onocr -onlret
cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -dsrflow -dtrflow -mdmbuf
discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      ^S      ^Z      0       ^W

You can see on the last four lines ( discard dsusp [...] ) there is a table of key bindings that your terminal will pick up before readline sees them. The ^ character (known as the 'caret') here represents the ctrl key that we previously represented with a \C .

If you think this is confusing I won't disagree. Unfortunately in the history of Unix and Linux documenters did not stick to one way of describing these key combinations.

If you encounter a problem where the terminal options seem to catch a shortcut key binding before it gets to readline, then you can use the stty program to unset that binding. In this case, we want to unset the 'stop' binding.

If you are in the same situation, type:

$ stty stop undef

Now, if you re-run stty -e , the last two lines might look like this:

[...]
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W

where the stop entry now has <undef> underneath it.

Strangely, for me C-r is also bound to 'reprint' above ( ^R ).

But (on my terminals at least) that gets to readline without issue as I search up the history. Why this is the case I haven't been able to figure out. I suspect that reprint is ignored by modern terminals that don't need to 'reprint' the current line.

While we are looking at this table:

discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W

it's worth noting a few other key bindings that are used regularly.

First, one you may well already be familiar with is \C-c , which interrupts a program, terminating it:

$ sleep 99
[[Hit \C-c]]
^C
$

Similarly, \C-z suspends a program, allowing you to 'foreground' it again and continue with the fg builtin.

$ sleep 10
[[ Hit \C-z]]
^Z
[1]+  Stopped                 sleep 10
$ fg
sleep 10

\C-d sends an 'end of file' character. It's often used to indicate to a program that input is over. If you type it on a bash shell, the bash shell you are in will close.

Finally, \C-w deletes the word before the cursor.

These are the most commonly-used shortcuts that are picked up by the terminal before they get to the readline library.

Daz April 29, 2019 at 11:15 pm

Hi Ian,

What OS are you running because stty -e gives the following on Centos 6.x and Ubuntu 18.04.2

stty -e
stty: invalid argument '-e'
Try 'stty --help' for more information.

Leon May 14, 2019 at 5:12 am

`stty -a` works for me (Ubuntu 14)

yachris May 16, 2019 at 4:40 pm

You might want to check out the 'rlwrap' program. It allows you to have readline behavior on programs that don't natively support readline, but which have a 'type in a command' type interface. For instance, we use Oracle here (alas :-) ) and the 'sqlplus' program, that lets you type SQL commands to an Oracle instance does not have anything like readline built into it, so you can't go back to edit previous commands. But running 'rlwrap sqlplus' gives me readline behavior in sqlplus! It's fantastic to have.

AriSweedler May 17, 2019 at 4:50 am

I was told to use this in a class, and I didn't understand what I did. One rabbit hole later, I was shocked and amazed at how advanced the readline library is. One thing I'd like to add is that you can write a '~/.inputrc' file and have those readline commands sourced at startup!

I do not know exactly when or how the inputrc is read.

Most of what I learned about inputrc stuff is from https://www.topbug.net/blog/2017/07/31/inputrc-for-humans/ .

Here is my inputrc, if anyone wants: https://github.com/AriSweedler/dotfiles/blob/master/.inputrc .
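For anyone who wants a minimal starting point without following links, here is a short, commented sketch (all standard readline settings; in bash you can re-read the file with \C-x \C-r):

# ~/.inputrc -- read by readline at startup
# case-insensitive tab completion
set completion-ignore-case on
# show all matches immediately instead of ringing the bell first
set show-all-if-ambiguous on
# make the arrow keys search history using the text already typed
"\e[A": history-search-backward
"\e[B": history-search-forward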

[Jul 04, 2020] Eleven bash Tips You Might Want to Know by Ian Miell

Highly recommended!
Notable quotes:
"... Material here based on material from my book Learn Bash the Hard Way . Free preview available here . ..."
"... natively in bash ..."
Jul 04, 2020 | zwischenzugs.com

Here are some tips that might help you be more productive with bash.

1) ^x^y^

A gem I use all the time.

Ever typed anything like this?

$ grp somestring somefile
-bash: grp: command not found

Sigh. Hit 'up', 'left' until at the 'p' and type 'e' and return.

Or do this:

$ ^rp^rep^
grep 'somestring' somefile
$

One subtlety you may want to note though is:

$ grp rp somefile
$ ^rp^rep^
$ grep rp somefile

If you wanted rep to be searched for, then you'll need to dig into the man page and use a more powerful history command:

$ grp rp somefile
$ !!:gs/rp/rep
grep rep somefile
$

... ... ...


Material here based on material from my book
Learn Bash the Hard Way .
Free preview available here .


3) shopt vs set

This one bothered me for a while.

What's the difference between set and shopt ?

set we saw before, but shopt looks very similar. Just inputting shopt shows a bunch of options:

$ shopt
cdable_vars    off
cdspell        on
checkhash      off
checkwinsize   on
cmdhist        on
compat31       off
dotglob        off

I found a set of answers here . Essentially, it looks like it's a consequence of bash (and other shells) being built on sh, with shopt added later as another way to set extra shell options. But I'm still unsure; if you know the answer, let me know.
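If you want to see the two families side by side, here's a quick experiment (safe to try in a throwaway shell):

$ set -o noclobber      # a 'set' option: redirection won't overwrite existing files
$ shopt -s dotglob      # a 'shopt' option: globs now match dotfiles too
$ shopt -o noclobber    # shopt can query the 'set' options with -o
noclobber       on
$ set +o noclobber; shopt -u dotglob    # undo both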

4) Here Docs and Here Strings

'Here docs' are files created inline in the shell.

The 'trick' is simple: define a closing word, and everything you type until that word appears alone on a line becomes a file.

Type this:

$ cat > afile << SOMEENDSTRING
> here is a doc
> it has three lines
> SOMEENDSTRING alone on a line will save the doc
> SOMEENDSTRING
$ cat afile
here is a doc
it has three lines
SOMEENDSTRING alone on a line will save the doc

Notice that the closing SOMEENDSTRING must appear alone on a line to end the here doc; where it is just part of a line (as in the third line above), it is treated as ordinary text.
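One related behaviour the demo above doesn't show: variables are expanded inside a here doc unless you quote the delimiter, in which case the text is taken literally ( /home/youruser stands in for whatever your home directory is):

$ cat << EOF
> $HOME
> EOF
/home/youruser
$ cat << 'EOF'
> $HOME
> EOF
$HOME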

Lesser known is the 'here string':

$ cat > asd <<< 'This file has one line'
5) String Variable Manipulation

You may have written code like this before, where you use tools like sed to manipulate strings:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="$(echo $VAR | sed 's/^HEADER\(.*\)FOOTER/\1/')"
$ echo $PASS

But you may not be aware that this is possible natively in bash .

This means that you can dispense with lots of sed and awk shenanigans.

One way to rewrite the above is:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="${VAR#HEADER}"
$ PASS="${PASS%FOOTER}"
$ echo $PASS

The second method is twice as fast as the first on my machine. And (to my surprise), it was roughly the same speed as a similar python script .

If you want to use glob patterns that are greedy (see globbing here ) then you double up:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ echo ${VAR##HEADER*}
$ echo ${VAR%%*FOOTER}
6) ​Variable Defaults

These are very handy when you're knocking up scripts quickly.

If you have a variable that's not set, you can 'default' them by using this. Create a file called default.sh with these contents

#!/bin/bash
FIRST_ARG="${1:-no_first_arg}"
SECOND_ARG="${2:-no_second_arg}"
THIRD_ARG="${3:-no_third_arg}"
echo ${FIRST_ARG}
echo ${SECOND_ARG}
echo ${THIRD_ARG}

Now run chmod +x default.sh and run the script with ./default.sh first second .

Observe how the third argument's default has been assigned, but not the first two.

You can also assign directly with ${VAR:=defaultval} (equals sign, not dash) but note that this won't work with positional variables in scripts or functions. Try changing the above script to see how it fails.
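A quick demonstration of both cases (the error message is bash's own):

$ unset VAR; echo "${VAR:=defaultval}"; echo "$VAR"
defaultval
defaultval
$ set --                    # clear the positional parameters
$ echo "${1:=defaultval}"
bash: $1: cannot assign in this way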

7) Traps

The trap built-in can be used to 'catch' when a signal is sent to your script.

Here's an example I use in my own cheapci script:

function cleanup() {
    rm -rf "${BUILD_DIR}"
    rm -f "${LOCK_FILE}"
    # get rid of /tmp detritus, removing anything last accessed 2+ days ago
    find "${BUILD_DIR_BASE}"/* -type d -atime +1 | xargs -r rm -rf
    echo "cleanup done"
} 
trap cleanup TERM INT QUIT

Any attempt to CTRL-C , CTRL-\ , or terminate the program using the TERM signal will result in cleanup being called first.

Be aware:

  • Trap logic can get very tricky (eg handling signal race conditions)
  • The KILL signal can't be trapped in this way

But mostly I've used this for 'cleanups' like the above, which serve their purpose.

8) Shell Variables

It's well worth getting to know the standard shell variables available to you . Here are some of my favourites:

RANDOM

Don't rely on this for your cryptography stack, but you can generate random numbers eg to create temporary files in scripts:

$ echo ${RANDOM}
16313
$ # Not enough digits?
$ echo ${RANDOM}${RANDOM}
113610703
$ NEWFILE=/tmp/newfile_${RANDOM}
$ touch $NEWFILE
REPLY

No need to give a variable name for read

$ read
my input
$ echo ${REPLY}
LINENO and SECONDS

Handy for debugging

$ echo ${LINENO}
115
$ echo ${SECONDS}; sleep 1; echo ${SECONDS}; echo $LINENO
174380
174381
116

Note that there are two 'lines' above, even though you used ; to separate the commands.

TMOUT

You can timeout reads, which can be really handy in some scripts

#!/bin/bash
TMOUT=5
echo You have 5 seconds to respond...
read
echo ${REPLY:-noreply}

... ... ...

10) Associative Arrays

Talking of moving to other languages, a rule of thumb I use is that if I need arrays then I drop bash to go to python (I even created a Docker container for a tool to help with this here ).

What I didn't know until I read up on it was that you can have associative arrays in bash.

Type this out for a demo:

$ declare -A MYAA=([one]=1 [two]=2 [three]=3)
$ MYAA[one]="1"
$ MYAA[two]="2"
$ echo $MYAA
$ echo ${MYAA[one]}
$ MYAA[one]="1"
$ WANT=two
$ echo ${MYAA[$WANT]}

Note that this is only available in bashes 4.x+.
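To loop over the whole array, expand the keys with ${!MYAA[@]} (note that, as with hashes in other languages, iteration order is not guaranteed):

$ for key in "${!MYAA[@]}"; do echo "$key => ${MYAA[$key]}"; done
one => 1
two => 2
three => 3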

... ... ...

[Jul 04, 2020] Learn Bash Debugging Techniques the Hard Way by Ian Miell

Highly recommended!
Notable quotes:
"... NOTE: If you are on a Mac, then you might only get second-level granularity on the date! ..."
Jul 04, 2020 | zwischenzugs.com

... ... ...

Managing Variables

Variables are a core part of most serious bash scripts (and even one-liners!), so managing them is another important way to reduce the possibility of your script breaking.

Change your script to add the 'set' line immediately after the first line and see what happens:

#!/bin/bash
set -o nounset
A="some value"
echo "${A}"
echo "${B}"

...I always set nounset on my scripts as a habit. It can catch many problems before they become serious.

Tracing Variables

If you are working with a particularly complex script, then you can get to the point where you are unsure what happened to a variable.

Try running this script and see what happens:

#!/bin/bash 
set -o nounset 
declare A="some value" 
function a { 
  echo "${BASH_SOURCE}>A A=${A} LINENO:${1}" 
} 
trap "a $LINENO" DEBUG 
B=value 
echo "${A}" 
A="another value" 
echo "${A}" 
echo "${B}"

There's a problem with this code. The output is slightly wrong. Can you work out what is going on? If so, try and fix it.

You may need to refer to the bash man page, and make sure you understand quoting in bash properly.

It's quite a tricky one to fix 'properly', so if you can't fix it, or work out what's wrong with it, then ask me directly and I will help.

Profiling Bash Scripts

Returning to the xtrace (or set -x flag), we can exploit its use of a PS variable to implement the profiling of a script:

#!/bin/bash
set -o nounset
set -o xtrace
declare A="some value"
PS4='$(date "+%s%N => ")'
B=
echo "${A}"
A="another value"
echo "${A}"
echo "${B}"
ls
pwd
curl -q bbc.co.uk

From this you should be able to tell what PS4 does. Have a play with it, and read up and experiment with the other PS variables to get familiar with what they do.

NOTE: If you are on a Mac, then you might only get second-level granularity on the date!
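If you want line context rather than timings, another PS4 worth experimenting with uses the standard bash variables BASH_SOURCE, LINENO and FUNCNAME (the :-main fallback covers code outside any function):

PS4='+ ${BASH_SOURCE}:${LINENO}:${FUNCNAME[0]:-main}: '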

Linting with Shellcheck

Finally, here is a very useful tip for understanding bash more deeply and improving any bash scripts you come across.

Shellcheck is a website and a package available on most platforms that gives you advice to help fix and improve your shell scripts. Very often, its advice has prompted me to research more deeply and understand bash better.

Here is some example output from a script I found on my laptop:

$ shellcheck shrinkpdf.sh
In shrinkpdf.sh line 44:
          -dColorImageResolution=$3             \
                                 ^-- SC2086: Double quote to prevent globbing and word splitting.
In shrinkpdf.sh line 46:
          -dGrayImageResolution=$3              \
                                ^-- SC2086: Double quote to prevent globbing and word splitting.
In shrinkpdf.sh line 48:
          -dMonoImageResolution=$3              \
                                ^-- SC2086: Double quote to prevent globbing and word splitting.
In shrinkpdf.sh line 57:
        if [ ! -f "$1" -o ! -f "$2" ]; then
                      ^-- SC2166: Prefer [ p ] || [ q ] as [ p -o q ] is not well defined.
In shrinkpdf.sh line 60:
        ISIZE="$(echo $(wc -c "$1") | cut -f1 -d\ )"
                      ^-- SC2046: Quote this to prevent word splitting.
                      ^-- SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.
In shrinkpdf.sh line 61:
        OSIZE="$(echo $(wc -c "$2") | cut -f1 -d\ )"
                      ^-- SC2046: Quote this to prevent word splitting.
                      ^-- SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.

The most common reminders are regarding potential quoting issues, but you can see other useful tips in the above output, such as preferred arguments to the test construct, and advice on "useless" echo s.

Exercise

1) Find a large bash script on a social coding site such as GitHub, and run shellcheck over it. Contribute back any improvements you find.


[Jul 02, 2020] 7 Bash history shortcuts you will actually use by Ian Miell

Highly recommended!
Notable quotes:
"... The "last argument" one: !$ ..."
"... The " n th argument" one: !:2 ..."
"... The "all the arguments": !* ..."
"... The "last but n " : !-2:$ ..."
"... The "get me the folder" one: !$:h ..."
"... I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need. ..."
"... Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than type !:1 (or having to remeber what it means). ..."
Oct 02, 2019 | opensource.com

Save time on the command line with these essential Bash shortcuts.

Most guides to Bash history shortcuts exhaustively list every single one available. The problem with that is I would use a shortcut once, then glaze over as I tried out all the possibilities. Then I'd move onto my working day and completely forget them, retaining only the well-known !! trick I learned when I first started using Bash.

So most of them were never committed to memory.

This article outlines the shortcuts I actually use every day. It is based on some of the contents of my book, Learn Bash the hard way (you can read a preview of it to learn more).

When people see me use these shortcuts, they often ask me, "What did you do there!?" There's minimal effort or intelligence required, but to really learn them, I recommend using one each day for a week, then moving to the next one. It's worth taking your time to get them under your fingers, as the time you save will be significant in the long run.

1. The "last argument" one: !$

If you only take one shortcut from this article, make it this one. It substitutes in the last argument of the last command into your line.

Consider this scenario:

$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory

Ach, I put the wrongfile filename in my command. I should have put rightfile instead.

You might decide to retype the last command and replace wrongfile with rightfile completely. Instead, you can type:

$ mv /path/to/rightfile !$
mv /path/to/rightfile /some/other/place

and the command will work.

There are other ways to achieve the same thing in Bash with shortcuts, but this trick of reusing the last argument of the last command is one I use the most.

2. The " n th argument" one: !:2

Ever done anything like this?

$ tar -cvf afolder afolder.tar
tar: failed to open

Like many others, I get the arguments to tar (and ln ) wrong more often than I would like to admit.


When you mix up arguments like that, you can run:

$ !:0 !:1 !:3 !:2
tar -cvf afolder.tar afolder

and your reputation will be saved.

The last command's items are zero-indexed and can be substituted in with the number after the !: .

Obviously, you can also use this to reuse specific arguments from the last command rather than all of them.

3. The "all the arguments": !*

Imagine I run a command like:

$ grep '(ping|pong)' afile

The arguments are correct; however, I want to match ping or pong in a file, but I used grep rather than egrep .

I start typing egrep , but I don't want to retype the other arguments. So I can use the !:1-$ shortcut to ask for all the arguments to the previous command from the second one (remember they're zero-indexed) to the last one (represented by the $ sign).

$ egrep !:1-$
egrep '(ping|pong)' afile
ping

You don't need to pick 1-$ ; you can pick a subset like 1-2 or 3-9 (if you had that many arguments in the previous command).

4. The "last but n " : !-2:$

The shortcuts above are great when I know immediately how to correct my last command, but often I run commands after the original one, which means that the last command is no longer the one I want to reference.

For example, using the mv example from before, if I follow up my mistake with an ls check of the folder's contents:

$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
$ ls /path/to/
rightfile

I can no longer use the !$ shortcut.

In these cases, I can insert a -n: (where n is the number of commands to go back in the history) after the ! to grab the last argument from an older command:

$ mv /path/to/rightfile !-2:$
mv /path/to/rightfile /some/other/place

Again, once you learn it, you may be surprised at how often you need it.

5. The "get me the folder" one: !$:h

This one looks less promising on the face of it, but I use it dozens of times daily.

Imagine I run a command like this:

$ tar -cvf system.tar /etc/system
tar: /etc/system: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors.

The first thing I might want to do is go to the /etc folder to see what's in there and work out what I've done wrong.

I can do this at a stroke with:

$ cd !$:h
cd /etc

This one says: "Get the last argument to the last command ( /etc/system ) and take off its last filename component, leaving only the /etc ."

6. The "the current line" one: !#:1

For years, I occasionally wondered if I could reference an argument on the current line before finally looking it up and learning it. I wish I'd done so a long time ago. I most commonly use it to make backup files:

$ cp /path/to/some/file !#:1.bak
cp /path/to/some/file /path/to/some/file.bak

but once under the fingers, it can be a very quick alternative to retyping the whole path.

7. The "search and replace" one: !!:gs

This one searches across the referenced command and replaces the pattern between the first two / characters with the replacement between the second two.

Say I want to tell the world that my s key does not work and outputs f instead:

$ echo my f key doef not work
my f key doef not work

Then I realize that I was just hitting the f key by accident. To replace all the f s with s es, I can type:

$ !!:gs/f/s/
echo my s key does not work
my s key does not work

It doesn't work only on single characters; I can replace words or sentences, too:

$ !!:gs/does/did/
echo my s key did not work
my s key did not work

Test them out

Just to show you how these shortcuts can be combined, can you work out what these toenail clippings will output?

$ ping !#:0:gs/i/o
$ vi /tmp/!:0.txt
$ ls !$:h
$ cd !-2:h
$ touch !$!-3:$ !! !$.txt
$ cat !:1-$

Conclusion

Bash can be an elegant source of shortcuts for the day-to-day command-line user. While there are thousands of tips and tricks to learn, these are my favorites that I frequently put to use.

If you want to dive even deeper into all that Bash can teach you, pick up my book, Learn Bash the hard way or check out my online course, Master the Bash shell .


This article was originally posted on Ian's blog, Zwischenzugs.com , and is reused with permission.

Orr, August 25, 2019 at 10:39 pm

BTW – you inspired me to try and understand how to repeat the nth command entered on command line. For example I type 'ls' and then accidentally type 'clear'. !! will retype clear again but I wanted to retype ls instead using a shortcut.
Bash doesn't accept ':' so !:2 didn't work. !-2 did however, thank you!

Dima August 26, 2019 at 7:40 am

Nice article! Just another one cool and often used command: i.e.: !vi opens the last vi command with their arguments.

cbarrick on 03 Oct 2019

Your "current line" example is too contrived. Your example is copying to a backup like this:

$ cp /path/to/some/file !#:1.bak

But a better way to write that is with filename generation:

$ cp /path/to/some/file{,.bak}

That's not a history expansion though... I'm not sure I can come up with a good reason to use `!#:1`.

Darryl Martin August 26, 2019 at 4:41 pm

I seldom get anything out of these "bash commands you didn't know" articles, but you've got some great tips here. I'm writing several down and sticking them on my terminal for reference.

A couple additions I'm sure you know.

  1. I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need.
  2. I recently started using Alt-. as a substitute for "!$" to get the last argument. It expands the argument on the line, allowing me to modify it if necessary.

Ricardo J. Barberis on 06 Oct 2019

The problem with bash's history shorcuts for me is... that I never had the need to learn them.

Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than to type !:1 (or having to remember what it means).

Examples:

Ctrl+R for a Reverse search
Ctrl+A to move to the begnining of the line (Home key also)
Ctrl+E to move to the End of the line (End key also)
Ctrl+K to Kill (delete) text from the cursor to the end of the line
Ctrl+U to kill text from the cursor to the beginning of the line
Alt+F to move Forward one word (Ctrl+Right arrow also)
Alt+B to move Backward one word (Ctrl+Left arrow also)
etc.

YMMV of course.

[Jul 02, 2020] Some Relatively Obscure Bash Tips zwischenzugs

Jul 02, 2020 | zwischenzugs.com

2) |&

You may already be familiar with 2>&1 , which redirects standard error to standard output, but until I stumbled on it in the manual, I had no idea that you can pipe both standard output and standard error into the next stage of the pipeline like this:

if doesnotexist |& grep 'command not found' >/dev/null
then
  echo oops
fi
3) $''

This construct allows you to specify specific bytes in scripts without fear of triggering some kind of encoding problem. Here's a command that will grep through files looking for UK currency ('£') signs in hexadecimal recursively:

grep -r $'\xc2\xa3' *

You can also use octal:

grep -r $'\302\243' *
4) HISTIGNORE

If you are concerned about security, and ever type in commands that might have sensitive data in them, then this one may be of use.

This environment variable stops the commands you specify from being written to your history file when you type them in. The commands are separated by colons:

HISTIGNORE="ls *:man *:history:clear:AWS_KEY*"

You have to specify the whole line, so a glob character may be needed if you want to exclude commands and their arguments or flags.
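For example (the history numbers are illustrative):

$ HISTIGNORE="ls *:history*:clear"
$ ls /tmp             # matches 'ls *', so it is not recorded
$ echo hello          # recorded as normal
hello
$ history 2           # matches 'history*', so this line isn't recorded either
  101  HISTIGNORE="ls *:history*:clear"
  102  echo hello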

5) fc

If readline key bindings aren't under your fingers, then this one may come in handy.

It calls up the last command you ran, and places it into your preferred editor (specified by the EDITOR variable). Once edited, it re-runs the command.
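A sketch of the workflow ( echoo is a deliberate typo):

$ echoo hello
bash: echoo: command not found
$ fc          # opens 'echoo hello' in $EDITOR; fix, save and quit, and it re-runs
$ fc -l -3    # with -l, fc just lists recent history instead of editing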

6) ((i++))

If you can't be bothered with faffing around with variables in bash with the $[] construct, you can use the C-style compound command.

So, instead of:

A=1
A=$[$A+1]
echo $A

you can do:

A=1
((A++))
echo $A

which, especially with more complex calculations, might be easier on the eye.

7) caller

Another builtin bash command, caller gives you the context of the current subroutine call: the line number, the subroutine name, and the source file of the call.

SHLVL is a related shell variable which gives the level of depth of the calling stack.

This can be used to create stack traces for more complex bash scripts.

Here's a die function, adapted from the bash hackers' wiki that gives a stack trace up through the calling frames:

#!/bin/bash
die() {
  local frame=0
  ((FRAMELEVEL=SHLVL - frame))
  echo -n "${FRAMELEVEL}: "
  while caller $frame; do
    ((frame++));
    ((FRAMELEVEL=SHLVL - frame))
    if [[ ${FRAMELEVEL} -gt -1 ]]
    then
      echo -n "${FRAMELEVEL}: "
    fi
  done
  echo "$*"
  exit 1
}

which outputs:

3: 17 f1 ./caller.sh
2: 18 f2 ./caller.sh
1: 19 f3 ./caller.sh
0: 20 main ./caller.sh
*** an error occurred ***
8) /dev/tcp/host/port

This one can be particularly handy if you find yourself on a container running within a Kubernetes cluster service mesh without any network tools (a frustratingly common experience).

Bash provides you with some virtual files which, when referenced, can create socket connections to other servers.

This snippet, for example, makes a web request to a site and returns the output.

exec 9<>/dev/tcp/brvtsdflnxhkzcmw.neverssl.com/80
echo -e "GET /online HTTP/1.1\r\nHost: brvtsdflnxhkzcmw.neverssl.com\r\n\r\n" >&9
cat <&9

The first line opens up file descriptor 9 to the host brvtsdflnxhkzcmw.neverssl.com on port 80 for reading and writing. Line two sends the raw HTTP request to that socket connection's file descriptor. The final line retrieves the response.

Obviously, this doesn't handle SSL for you, so its use is limited now that pretty much everyone is running on https, but when running from application containers within a service mesh it can still prove invaluable, as requests there are initiated using plain HTTP.
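The same virtual files also make a quick TCP port probe when no network tools are installed (host and port here are illustrative):

# succeeds only if something is listening on the port
(echo > /dev/tcp/localhost/22) 2>/dev/null && echo "port open" || echo "port closed"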

9) Co-processes

Since version 4, bash has offered the capability to run named coprocesses.

It seems to be particularly well-suited to managing the inputs and outputs to other processes in a fine-grained way. Here's an annotated and trivial example:

coproc testproc (
  i=1
  while true
  do
    echo "iteration:${i}"
    ((i++))
    read -r aline
    echo "${aline}"
  done
)

This sets up the coprocess as a subshell with the name testproc .

Within the subshell, there's a never-ending while loop that counts its own iterations with the i variable. It outputs two lines: the iteration number, and a line read in from standard input.

After creating the coprocess, bash sets up an array with that name with the file descriptor numbers for the standard input and standard output. So this:

echo "${testproc[@]}"

in my terminal outputs:

63 60

Bash also sets up a variable with the process identifier for the coprocess, which you can see by echoing it:

echo "${testproc_PID}"

You can now input data to the standard input of this coprocess at will like this:

echo input1 >&"${testproc[1]}"

In this case, the command resolves to: echo input1 >&60 , and the >&[INTEGER] construct ensures the redirection goes to the coprocess's standard input.

Now you can read the output of the coprocess's two lines in a similar way, like this:

read -r output1a <&"${testproc[0]}"
read -r output1b <&"${testproc[0]}"

You might use this to create an expect -like script if you were so inclined, but it could be generally useful if you want to manage inputs and outputs. Named pipes are another way to achieve a similar result.

Here's a complete listing for those who want to cut and paste:

#!/bin/bash
coproc testproc (
  i=1
  while true
  do
    echo "iteration:${i}"
    ((i++))
    read -r aline
    echo "${aline}"
  done
)
echo "${testproc[@]}"
echo "${testproc_PID}"
echo input1 >&"${testproc[1]}"
read -r output1a <&"${testproc[0]}"
read -r output1b <&"${testproc[0]}"
echo "${output1a}"
echo "${output1b}"
echo input2 >&"${testproc[1]}"
read -r output2a <&"${testproc[0]}"
read -r output2b <&"${testproc[0]}"
echo "${output2a}"
echo "${output2b}"

[Jul 01, 2020] Use curl to test an application's endpoint or connectivity to an upstream service endpoint

Notable quotes:
"... The -I option shows the header information and the -s option silences the response body. Checking the endpoint of your database from your local desktop: ..."
Jul 01, 2020 | opensource.com

curl

curl transfers a URL. Use this command to test an application's endpoint or connectivity to an upstream service endpoint. curl can be useful for determining if your application can reach another service, such as a database, or checking if your service is healthy.

As an example, imagine your application throws an HTTP 500 error indicating it can't reach a MongoDB database:

$ curl -I -s myapplication:5000
HTTP/1.0 500 INTERNAL SERVER ERROR

The -I option shows the header information and the -s option silences the response body. Checking the endpoint of your database from your local desktop:

$ curl -I -s database:27017
HTTP/1.0 200 OK

So what could be the problem? Check if your application can get to other places besides the database from the application host:

$ curl -I -s https://opensource.com
HTTP/1.1 200 OK

That seems to be okay. Now try to reach the database from the application host. Your application is using the database's hostname, so try that first:

$ curl database:27017
curl: (6) Couldn't resolve host 'database'

This indicates that your application cannot resolve the database because the URL of the database is unavailable or the host (container or VM) does not have a nameserver it can use to resolve the hostname.
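To confirm that it really is a name-resolution problem rather than a connectivity one, you can query the resolver directly (a sketch using the example's hostname):

$ getent hosts database || echo "no DNS or /etc/hosts entry for 'database'"
$ cat /etc/resolv.conf   # check that the host actually has a nameserver configured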

[Jul 01, 2020] Stupid Bash tricks- History, reusing arguments, files and directories, functions, and more by Valentin Bajrami

A moderately interesting example here is changing sudo systemctl status into sudo systemctl start via !!:s/status/start/
It can probably be shortened so that less retyping is needed: something like !!:0-1 start !$ (reuse the first two words plus the last argument) achieves the same result.
Jul 01, 2020 | www.redhat.com

See also Bash bang commands- A must-know trick for the Linux command line - Enable Sysadmin

Let's say I run the following command:

$> sudo systemctl status sshd

Bash tells me the sshd service is not running, so the next thing I want to do is start the service. I had checked its status with my previous command. That command was saved in history , so I can reference it. I simply run:

$> !!:s/status/start/
sudo systemctl start sshd

The above expression works as follows: !! recalls the previous command, and :s/status/start/ substitutes the first occurrence of status with start.

The result is that the sshd service is started.
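Worth noting: for this single-substitution case, bash also has a shorthand, the caret quick substitution, which is equivalent to the !!:s/.../.../ form:

$> ^status^start
sudo systemctl start sshd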

Next, I increase the default HISTSIZE value from 500 to 5000 by using the following command:

$> echo "HISTSIZE=5000" >> ~/.bashrc && source ~/.bashrc

What if I want to display the last three commands in my history? I enter:

$> history 3
 1002  ls
 1003  tail audit.log
 1004  history 3

I run tail on audit.log by referring to the history line number. In this case, I use line 1003:

$> !1003
tail audit.log
Reference the last argument of the previous command

When I want to list directory contents for different directories, I may change between directories quite often. There is a nice trick you can use to refer to the last argument of the previous command. For example:

$> pwd
/home/username/
$> ls some/very/long/path/to/some/directory
foo-file bar-file baz-file

In the above example, some/very/long/path/to/some/directory is the last argument of the previous command.

If I want to cd (change directory) to that location, I enter something like this:

$> cd $_

$> pwd
/home/username/some/very/long/path/to/some/directory

Now I simply use a dash character to go back to where I was:

$> cd -
$> pwd
/home/username/
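History expansion offers an equivalent to $_ here: !$ expands to the last argument of the previous command, and bash prints the expanded line before running it:

$> ls some/very/long/path/to/some/directory
$> cd !$
cd some/very/long/path/to/some/directory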

[Jun 26, 2020] Vim show line numbers by default on Linux

Notable quotes:
"... Apart from regular absolute line numbers, Vim supports relative and hybrid line numbers too to help navigate around text files. The 'relativenumber' vim option displays the line number relative to the line with the cursor in front of each line. Relative line numbers help you use the count you can precede some vertical motion commands with, without having to calculate it yourself. ..."
"... We can enable both absolute and relative line numbers at the same time to get "Hybrid" line numbers. ..."
Feb 29, 2020 | www.cyberciti.biz

How do I show line numbers in Vim by default on Linux? Vim (Vi IMproved) is not just a free text editor; it is the number one editor for Linux sysadmin and software development work.

By default, Vim doesn't show line numbers on Linux and Unix-like systems, however, we can turn it on using the following instructions. My experience shows that line numbers are useful for debugging shell scripts, program code, and configuration files. Let us see how to display the line number in vim permanently.

Vim show line numbers by default

Turn on absolute line numbering by default in vim:

  1. Open vim configuration file ~/.vimrc by typing the following command:
    vim ~/.vimrc
  2. Append set number
  3. Press the Esc key
  4. To save the config file, type :w and hit Enter key
  5. You can temporarily disable the absolute line numbers within vim session, type:
    :set nonumber
  6. Want to re-enable the absolute line numbers within a vim session? Try:
    :set number
  7. We can see vim line numbers on the left side.
Relative line numbers

Apart from regular absolute line numbers, Vim supports relative and hybrid line numbers too to help navigate around text files. The 'relativenumber' vim option displays the line number relative to the line with the cursor in front of each line. Relative line numbers help you use the count you can precede some vertical motion commands with, without having to calculate it yourself. Once again edit the ~/.vimrc, run:
vim ~/.vimrc
Finally, turn relative line numbers on:
set relativenumber
Save and close the file in vim text editor.
VIM relative line numbers

How to show "Hybrid" line numbers in Vim by default

What happens when you put the following two config directives in ~/.vimrc ?
set number
set relativenumber

That is right. We can enable both absolute and relative line numbers at the same time to get "Hybrid" line numbers.
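As a minimal sketch, the resulting ~/.vimrc block might look like this (the F3 toggle mapping is an illustrative extra, handy when copying text from the terminal):

" ~/.vimrc -- hybrid line numbers
set number
set relativenumber
" toggle both off and on with F3
nnoremap <F3> :set number! relativenumber!<CR>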

Conclusion

Today we learned about permanent line number settings for the vim text editor. By adding the "set number" config directive in the Vim configuration file named ~/.vimrc, we forced vim to show line numbers each time vim starts. See the vim docs for more info.

[May 20, 2020] The mktemp Command Tutorial With Examples For Beginners

May 20, 2020 | www.ostechnix.com

Mktemp is part of the GNU coreutils package, so there is nothing to install. We will see some practical examples now.

To create a new temporary file, simply run:

$ mktemp

You will see an output like below:

/tmp/tmp.U0C3cgGFpk

How To Create temporary file using mktemp command in Linux

As you see in the output, a new temporary file with the random name "tmp.U0C3cgGFpk" is created in the /tmp directory. This file is just an empty file.

You can also create a temporary file with a specified suffix. The following command will create a temporary file with ".txt" extension:

$ mktemp --suffix ".txt"
/tmp/tmp.sux7uKNgIA.txt

How about a temporary directory? Yes, it is also possible! To create a temporary directory, use the -d option.

$ mktemp -d

This will create a random empty directory in the /tmp folder.

Sample output:

/tmp/tmp.PE7tDnm4uN

Create temporary directory using mktemp command in Linux

All files will be created with u+rw permission, and directories with u+rwx , minus umask restrictions. In other words, the resulting file will have read and write permissions for the current user, but no permissions for the group or others. And the resulting directory will have read, write and executable permissions for the current user, but no permissions for groups or others.

You can verify the file permissions using "ls" command:

$ ls -al /tmp/tmp.U0C3cgGFpk
-rw------- 1 sk sk 0 May 14 13:20 /tmp/tmp.U0C3cgGFpk

Verify the directory permissions using "ls" command:

$ ls -ld /tmp/tmp.PE7tDnm4uN
drwx------ 2 sk sk 4096 May 14 13:25 /tmp/tmp.PE7tDnm4uN

Check file and directory permissions in Linux




Create temporary files or directories with custom names using mktemp command

As I already said, all files and directories are created with random file names. We can also create a temporary file or directory with a custom name. To do so, simply add at least three consecutive 'X's at the end of the file name, like below.

$ mktemp ostechnixXXX
ostechnixq70

Similarly, to create directory, just run:

$ mktemp -d ostechnixXXX
ostechnixcBO

Please note that if you choose a custom name, the files/directories will be created in the current working directory, not in /tmp . In this case, you need to clean them up manually.

Also, as you may have noticed, the X's in the file name are replaced with random characters. You can however add any suffix of your choice.

For instance, I want to add "blog" at the end of the filename. Hence, my command would be:

$ mktemp ostechnixXXX --suffix=blog
ostechnixZuZblog

Now we do have the suffix "blog" at the end of the filename.

If you don't want to create any file or directory, you can simply perform a dry run like below.

$ mktemp -u
/tmp/tmp.oK4N4U6rDG

For help, run:

$ mktemp --help
Why do we actually need mktemp?

You might wonder why we need "mktemp" when we can easily create empty files using the "touch filename" command. The mktemp command is mainly used for creating temporary files/directories with random names, so we don't need to bother figuring out the names ourselves. Since mktemp randomizes the names, there won't be any name collisions. Also, mktemp creates files safely with permission 600 (rw) and directories with permission 700 (rwx), so other users can't access them. For more details, check the man pages.

$ man mktemp
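A common pattern that ties this together is to create the temp file safely and guarantee its cleanup with a trap; a minimal sketch (the script body is illustrative):

#!/bin/bash
# Create a temporary file securely; abort if mktemp fails.
tmpfile=$(mktemp) || exit 1
# Remove it automatically when the script exits, even on error.
trap 'rm -f "$tmpfile"' EXIT

df -h > "$tmpfile"
grep '^/dev/' "$tmpfile"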

[May 06, 2020] Creating and managing partitions in Linux with parted Enable Sysadmin by Tyler Carrigan

Apr 30, 2020 | www.redhat.com


Listing partitions with parted

The first thing that you want to do anytime that you need to make changes to your disk is to find out what partitions you already have. Displaying existing partitions allows you to make informed decisions moving forward and helps you nail down the partition names you will need for future commands. Run the parted command to start parted in interactive mode and list partitions. It will default to your first listed drive. You will then use the print command to display disk information.

[root@rhel ~]# parted /dev/sdc
    GNU Parted 3.2
    Using /dev/sdc
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) print                                                            
    Error: /dev/sdc: unrecognised disk label
    Model: ATA VBOX HARDDISK (scsi)                                           
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: unknown
    Disk Flags:
    (parted)

Creating new partitions with parted

Now that you can see what partitions are active on the system, you are going to add a new partition to /dev/sdc . You can see in the output above that there is no partition table for this partition, so add one by using the mklabel command. Then use mkpart to add the new partition. You are creating a new primary partition with the ext4 filesystem type (note that parted only records the type; it does not format the partition). For demonstration purposes, I chose to create a 50 MB partition.

(parted) mklabel msdos                                                    
    (parted) mkpart                                                           
    Partition type?  primary/extended? primary                                
    File system type?  [ext2]? ext4                                           
    Start? 1                                                                  
    End? 50                                                                   
    (parted)                                                                  
    (parted) print                                                            
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  50.3MB  49.3MB  primary  ext4         lba
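The same steps can be scripted non-interactively with parted's -s ( --script ) flag; a sketch equivalent to the session above (device name as in the example; double-check it before running):

parted -s /dev/sdc mklabel msdos
parted -s /dev/sdc mkpart primary ext4 1MiB 50MiB
mkfs.ext4 /dev/sdc1   # mkpart records the type; mkfs actually creates the filesystem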

Modifying existing partitions with parted

Now that you have created the new partition at 50 MB, you can resize it to 100 MB, and then shrink it back to the original 50 MB. First, note the partition number. You can find this information by using the print command. You are then going to use the resizepart command to make the modifications.

(parted) resizepart                                                       
    Partition number? 1                                                       
    End?  [50.3MB]? 100                                                       
        
    (parted) print                                                            
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End    Size    Type     File system  Flags
     1      1049kB  100MB  99.0MB  primary

You can see in the above output that I resized partition number one from 50 MB to 100 MB. You can then verify the changes with the print command. You can now resize it back down to 50 MB. Keep in mind that shrinking a partition can cause data loss.

    (parted) resizepart                                                       
    Partition number? 1                                                       
    End?  [100MB]? 50                                                         
    Warning: Shrinking a partition can cause data loss, are you sure you want to
    continue?
    Yes/No? yes                                                               
    
    (parted) print
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  50.0MB  49.0MB  primary

Removing partitions with parted

Now, let's look at how to remove the partition you created at /dev/sdc1 by using the rm command inside of the parted suite. Again, you will need the partition number, which is found in the print output.

NOTE: Be sure that you have all of the information correct here; there are no safeguards or "Are you sure?" prompts. When you run the rm command, it will delete the partition number you give it.

    (parted) rm 1                                                             
    (parted) print                                                            
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start  End  Size  Type  File system  Flags

[Apr 03, 2020] Use Midnight Commander like a pro by Igor Kilmer

Apr 03, 2020 | klimer.eu

Only the article's section headings survived extraction: Panels, Common actions, Panel options, and Bonus assignments. See the original link for details.

[Mar 12, 2020] 7 tips to speed up your Linux command line navigation Enable Sysadmin

Mar 12, 2020 | www.redhat.com

A bonus shortcut

You can use the keyboard combination Alt+. to insert the last argument of the previous command.

Note: The shortcut is Alt+. (dot).

$ mkdir /path/to/mydir

$ cd (now press Alt+.)

The last argument is inserted, and you are now in the /path/to/mydir directory.

[Mar 05, 2020] Using Ctags with MC

Mar 05, 2020 | frankhesse.wordpress.com

It is surprising how capable the Midnight Commander's built-in editor turned out to be. Below is one of the features of mc 4.7, namely the use of the ctags / etags utilities together with mcedit to navigate through the code.

Code navigation

Setup

Support for this functionality appeared in mcedit starting with version 4.7.0-pre1.
To use it, you need to index the project directory with the ctags or etags utility; to do this, run the following commands:

$ cd /home/user/projects/myproj
$ find . -type f -name "*.[ch]" | etags -lc --declarations -

or
$ find . -type f -name "*.[ch]" | ctags --c-kinds=+p --fields=+iaS --extra=+q -e -L-



After the utility completes, a TAGS file will appear in the root directory of our project, which mcedit will use.
That is practically all that needs to be done for mcedit to find the definitions of functions, variables, or object properties in the code under study.

Usage

Imagine that we need to find where the locked property of an edit object is defined in the source code of a rather large project.


/* Succesful, so unlock both files */
if (different_filename) {
    if (save_lock)
        edit_unlock_file (exp);
    if (edit->locked)
        edit->locked = edit_unlock_file (edit->filename);
} else {
    if (edit->locked || save_lock)
        edit->locked = edit_unlock_file (edit->filename);
}


To do this, put the cursor at the end of the word locked and press Alt+Enter; a list of possible definitions appears.

After selecting the desired option, we get to the line with the definition.

[Mar 05, 2020] How to switch the editor in mc (midnight commander) from nano to mcedit?

Jan 01, 2014 | askubuntu.com




sdu ,

Using Ubuntu 10.10, the editor in mc (midnight commander) is nano. How can I switch to the internal mc editor (mcedit)?

Isaiah ,

Press the following keys in order, one at a time:
  1. F9 Activates the top menu.
  2. o Selects the Option menu.
  3. c Opens the configuration dialog.
  4. i Toggles the use internal edit option.
  5. s Saves your preferences.

Hurnst , 2014-06-21 02:34:51

Run MC as usual. On the command line right above the bottom row of menu selections type select-editor . This should open a menu with a list of all of your installed editors. This is working for me on all my current linux machines.

, 2010-12-09 18:07:18

You can also change the standard editor. Open a terminal and type this command:
sudo update-alternatives --config editor

You will get an list of the installed editors on your system, and you can chose your favorite.

AntonioK , 2015-01-27 07:06:33

If you want to leave mc and system settings as they are, you may just run it like:
$ EDITOR=mcedit mc

> ,

Open Midnight Commander, go to Options -> Configuration and check "use internal editor" Hit save and you are done.

[Mar 05, 2020] How to change your hostname in Linux Enable Sysadmin

Notable quotes:
"... pretty ..."
"... transient ..."
"... Want to try out Red Hat Enterprise Linux? Download it now for free. ..."
Mar 05, 2020 | www.redhat.com

What's in a name, you ask? Everything. It's how other systems, services, and users "see" your system.

Posted March 3, 2020 | by Tyler Carrigan (Red Hat)


Your hostname is a vital piece of system information that you need to keep track of as a system administrator. Hostnames are the designations by which we separate systems into easily recognizable assets. This information is especially important to make a note of when working on a remotely managed system. I have experienced multiple instances of companies changing the hostnames or IPs of storage servers and then wondering why their data replication broke. There are many ways to change your hostname in Linux; however, in this article, I'll focus on changing your name as viewed by the network (specifically in Red Hat Enterprise Linux and Fedora).

Background

A quick bit of background. Before the invention of DNS, your computer's hostname was managed through the HOSTS file located at /etc/hosts . Anytime that a new computer was connected to your local network, all other computers on the network needed to add the new machine into the /etc/hosts file in order to communicate over the network. As this method did not scale with the transition into the world wide web era, DNS was a clear way forward. With DNS configured, your systems are smart enough to translate unique IPs into hostnames and back again, ensuring that there is little confusion in web communications.

Modern Linux systems have three different types of hostnames configured. To minimize confusion, I list them here and provide basic information on each as well as a personal best practice:

  1. Static: the traditional hostname, stored in /etc/hostname and used to initialize the kernel's hostname at boot.
  2. Pretty: a free-form, human-friendly UTF-8 name, such as "Tyler's RHEL box".
  3. Transient: the dynamic hostname maintained by the kernel at runtime; it defaults to the static name but can be changed on the fly, for example by DHCP.

It is recommended to pick a pretty hostname that is unique and not easily confused with other systems. Allow the transient and static names to be variations on the pretty, and you will be good to go in most circumstances.

Working with hostnames

Now, let's look at how to view your current hostname. The most basic command used to see this information is hostname -f . This command displays the system's fully qualified domain name (FQDN). To relate back to the three types of hostnames, this is your transient hostname. A better way, at least in terms of the information provided, is to use the systemd command hostnamectl to view your transient hostname and other system information:

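A minimal sketch of both commands (hostname values illustrative):

$ hostname -f
rhel8.example.com
$ hostnamectl
   Static hostname: rhel8.example.com
   ...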

Before moving on from the hostname command, I'll show you how to use it to change your transient hostname. Using hostname <x> (where x is the new hostname), you can change your network name quickly, but be careful. I once changed the hostname of a customer's server by accident while trying to view it. That was a small but painful error that I overlooked for several hours. You can see that process below:

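A minimal sketch of that process (name illustrative):

$ sudo hostname temporary-name
$ hostname
temporary-name

A name set this way is transient and reverts to the static name at reboot.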

It is also possible to use the hostnamectl command to change your hostname. This command, in conjunction with the right flags, can be used to alter all three types of hostnames. As stated previously, for the purposes of this article, our focus is on the transient hostname. The command and its output look something like this:

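A minimal sketch (hostname value illustrative):

$ sudo hostnamectl set-hostname --transient db01.example.com
$ hostname
db01.example.com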

The final method to look at is the sysctl command. This command allows you to change the kernel parameter for your transient name without having to reboot the system. That method looks something like this:
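A minimal sketch (hostname value illustrative):

$ sudo sysctl kernel.hostname=db01
kernel.hostname = db01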

GNOME tip

Using GNOME, you can go to Settings -> Details to view and change the static and pretty hostnames. See below:

Wrapping up

I hope that you found this information useful as a quick and easy way to manipulate your machine's network-visible hostname. Remember to always be careful when changing system hostnames, especially in enterprise environments, and to document changes as they are made.

Want to try out Red Hat Enterprise Linux? Download it now for free.

[Mar 05, 2020] Debug your shell scripts with bashdb by Ben Martin

Nov 24, 2008 | www.linux.com

Author: Ben Martin

The Bash Debugger Project (bashdb) lets you set breakpoints, inspect variables, perform a backtrace, and step through a bash script line by line. In other words, it provides the features you expect in a C/C++ debugger to anyone programming a bash script.

To see if your standard bash executable has bashdb support, execute the command shown below; if you are not taken to a bashdb prompt then you'll have to install bashdb yourself.

$ bash --debugger -c "set|grep -i dbg"
...
bashdb

The Ubuntu Intrepid repository contains a package for bashdb, but there is no special bashdb package in the openSUSE 11 or Fedora 9 repositories. I built from source using version 4.0-0.1 of bashdb on a 64-bit Fedora 9 machine, using the normal ./configure; make; sudo make install commands.

You can start the Bash Debugger using the bash --debugger foo.sh syntax or the bashdb foo.sh command. The former method is recommended except in cases where I/O redirection might cause issues, and it's what I used. You can also use bashdb through ddd or from an Emacs buffer.

The syntax for many of the commands in bashdb mimics that of gdb, the GNU debugger. You can step into functions, use next to execute the next line without stepping into any functions, generate a backtrace with bt , exit bashdb with quit or Ctrl-D, and examine a variable with print $foo . Aside from the prefixing of the variable with $ at the end of the last sentence, there are some other minor differences that you'll notice. For instance, pressing Enter on a blank line in bashdb executes the previous step or next command instead of whatever the previous command was.

The print command forces you to prefix shell variables with the dollar sign ( $foo ). A slightly shorter way of inspecting variables and functions is to use the x foo command, which uses declare to print variables and functions.

Both bashdb and your script run inside the same bash shell. Because bash lacks some namespace properties, bashdb will include some functions and symbols into the global namespace which your script can get at. bashdb prefixes its symbols with _Dbg_ , so you should avoid that prefix in your scripts to avoid potential clashes. bashdb also uses some environment variables; it uses the DBG_ prefix for its own, and relies on some standard bash ones that begin with BASH_ .


To illustrate the use of bashdb, I'll work on the small bash script below, which expects a numeric argument n and calculates the nth Fibonacci number .

#!/bin/bash
version="0.01";

fibonacci() {
    n=${1:?If you want the nth fibonacci number, you must supply n as the first parameter.}
    if [ $n -le 1 ]; then
        echo $n
    else
        l=`fibonacci $((n-1))`
        r=`fibonacci $((n-2))`
        echo $((l + r))
    fi
}

for i in `seq 1 10`
do
    result=$(fibonacci $i)
    echo "i=$i result=$result"
done

The below session shows bashdb in action, stepping over and then into the fibonacci function and inspecting variables. I've made my input text bold for ease of reading. An initial backtrace ( bt ) shows that the script begins at line 3, which is where the version variable is written. The next and list commands then progress to the next line of the script a few times and show the context of the current execution line. After one of the next commands I press Enter to execute next again. I invoke the examine command through the single letter shortcut x . Notice that the variables are printed out using declare as opposed to their display on the next line using print . Finally I set a breakpoint at the start of the fibonacci function and continue the execution of the shell script. The fibonacci function is called and I move to the next line a few times and inspect a variable.

$ bash --debugger ./fibonacci.sh
...
(/home/ben/testing/bashdb/fibonacci.sh:3):
3:      version="0.01";
bashdb bt
->0 in file `./fibonacci.sh' at line 3
##1 main() called from file `./fibonacci.sh' at line 0
bashdb next
(/home/ben/testing/bashdb/fibonacci.sh:16):
16:     for i in `seq 1 10`
bashdb list
16:==>for i in `seq 1 10`
17:   do
18:     result=$(fibonacci $i)
19:     echo "i=$i result=$result"
20:   done
bashdb next
(/home/ben/testing/bashdb/fibonacci.sh:18):
18:     result=$(fibonacci $i)
bashdb
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"
bashdb x i result
declare -- i="1"
declare -- result=""
bashdb print $i $result
1
bashdb break fibonacci
Breakpoint 1 set in file /home/ben/testing/bashdb/fibonacci.sh, line 5.
bashdb continue
Breakpoint 1 hit (1 times).
(/home/ben/testing/bashdb/fibonacci.sh:5):
5:      fibonacci() {
bashdb next
(/home/ben/testing/bashdb/fibonacci.sh:6):
6:      n=${1:?If you want the nth fibonacci number, you must supply n as the first parameter.}
bashdb next
(/home/ben/testing/bashdb/fibonacci.sh:7):
7:      if [ $n -le 1 ]; then
bashdb x n
declare -- n="2"
bashdb quit

Notice that the number in the bashdb prompt toward the end of the above example is enclosed in parentheses. Each set of parentheses indicates that you have entered a subshell. In this example this is due to being inside a shell function.

In the below example I use a watchpoint to see if and where the result variable changes. Notice the initial next command. I found that if I didn't issue that next then my watch would fail to work. As you can see, after I issue c to continue execution, execution is stopped whenever the result variable is about to change, and the new and old value are displayed.

(/home/ben/testing/bashdb/fibonacci.sh:3):
3:      version="0.01";
bashdb<0> next
(/home/ben/testing/bashdb/fibonacci.sh:16):
16:     for i in `seq 1 10`
bashdb<1> watch result
 0: ($result)==0 arith: 0
bashdb<2> c
Watchpoint 0: $result changed:
  old value: ''
  new value: '1'
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"
bashdb<3> c
i=1 result=1
i=2 result=1
Watchpoint 0: $result changed:
  old value: '1'
  new value: '2'
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"

To get around the strange initial next requirement I used the watche command in the below session, which lets you stop whenever an expression becomes true. In this case I'm not overly interested in the first few Fibonacci numbers so I set a watch to have execution stop when the result is greater than 4. You can also use a watche command without a condition; for example, watche result would stop execution whenever the result variable changed.

$ bash --debugger ./fibonacci.sh
(/home/ben/testing/bashdb/fibonacci.sh:3):
3:      version="0.01";
bashdb<0> watche result > 4
 0: (result > 4)==0 arith: 1
bashdb<1> continue
i=1 result=1
i=2 result=1
i=3 result=2
i=4 result=3
Watchpoint 0: result > 4 changed:
  old value: '0'
  new value: '1'
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"

When a shell script goes wrong, many folks use the time-tested method of incrementally adding in echo or printf statements to look for invalid values or code paths that are never reached. With bashdb, you can save yourself time by just adding a few watches on variables or setting a few breakpoints.

[Mar 04, 2020] A command-line HTML pretty-printer Making messy HTML readable - Stack Overflow

Jan 01, 2019 | stackoverflow.com

A command-line HTML pretty-printer: Making messy HTML readable [closed]


knorv ,


jonjbar ,

Have a look at the HTML Tidy Project: http://www.html-tidy.org/

The granddaddy of HTML tools, with support for modern standards.

There used to be a fork called tidy-html5 which since became the official thing. Here is its GitHub repository .

Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to modern standards.

For your needs, here is the command line to call Tidy:

tidy inputfile.html

Paul Brit ,

Update 2018: The homebrew/dupes is now deprecated, tidy-html5 may be directly installed.
brew install tidy-html5

Original reply:

Tidy from OS X doesn't support HTML5 . But there is experimental branch on Github which does.

To get it:

 brew tap homebrew/dupes
 brew install tidy --HEAD
 brew untap homebrew/dupes

That's it! Have fun!

Boris , 2019-11-16 01:27:35

Error: No available formula with the name "tidy". brew install tidy-html5 works. – Pysis, Apr 4 '17 at 13:34

[Feb 29, 2020] files - How to get over device or resource busy

Jan 01, 2011 | unix.stackexchange.com

ripper234 , 2011-04-13 08:51:26

I tried to rm -rf a folder, and got "device or resource busy".

In Windows, I would have used LockHunter to resolve this. What's the linux equivalent? (Please give as answer a simple "unlock this" method, and not complete articles like this one . Although they're useful, I'm currently interested in just ASimpleMethodThatWorks™)

camh , 2011-04-13 09:22:46

The tool you want is lsof , which stands for list open files .

It has a lot of options, so check the man page, but if you want to see all open files under a directory:

lsof +D /path

That will recurse through the filesystem under /path , so beware doing it on large directory trees.

Once you know which processes have files open, you can exit those apps, or kill them with the kill(1) command.

kip2 , 2014-04-03 01:24:22

sometimes it's the result of mounting issues, so I'd unmount the filesystem or directory you're trying to remove:

umount /path

BillThor ,

I use fuser for this kind of thing. It will list which process is using a file or files within a mount.

user73011 ,

Here is the solution:
  1. Go into the directory and type ls -a
  2. You will find a .xyz file
  3. vi .xyz and look into what is the content of the file
  4. ps -ef | grep username
  5. You will see the .xyz content in the 8th column (last row)
  6. kill -9 job_ids - where job_ids is the value of the 2nd column of corresponding error caused content in the 8th column
  7. Now try to delete the folder or file.

Choylton B. Higginbottom ,

I had this same issue, built a one-liner starting with @camh recommendation:
lsof +D ./ | awk '{print $2}' | tail -n +2 | xargs kill -9

The awk command grabs the PIDS. The tail command gets rid of the pesky first entry: "PID". I used -9 on kill, others might have safer options.

user5359531 ,

I experience this frequently on servers that have NFS network file systems. I am assuming it has something to do with the filesystem, since the files are typically named like .nfs000000123089abcxyz .

My typical solution is to rename or move the parent directory of the file, then come back later in a day or two and the file will have been removed automatically, at which point I am free to delete the directory.

This typically happens in directories where I am installing or compiling software libraries.

gloriphobia , 2017-03-23 12:56:22

I had this problem when an automated test created a ramdisk. The commands suggested in the other answers, lsof and fuser , were of no help. After the tests I tried to unmount it and then delete the folder. I was really confused for ages because I couldn't get rid of it -- I kept getting "Device or resource busy" !

By accident I found out how to get rid of a ramdisk. I had to unmount it the same number of times that I had run the mount command, i.e. sudo umount path

Due to the fact that it was created using automated testing, it got mounted many times, hence why I couldn't get rid of it by simply unmounting it once after the tests. So, after I manually unmounted it lots of times it finally became a regular folder again and I could delete it.

Hopefully this can help someone else who comes across this problem!

bil , 2018-04-04 14:10:20

Riffing off of Prabhat's question, I had this issue on macOS High Sierra when I stranded an encfs process. Rebooting solved it, but this
ps -ef | grep name-of-busy-dir

Showed me the process and the PID (column two).

sudo kill -15 pid-here

fixed it.

Prabhat Kumar Singh , 2017-08-01 08:07:36

If you have the server accessible, Try

Deleting that dir from the server

Or, umount and mount again; try umount -l (lazy umount) if you face any issue with a normal umount.

I too had this problem where

lsof +D path : gives no output

ps -ef : gives no relevant information

[Feb 28, 2020] linux - Convert a time span in seconds to formatted time in shell - Stack Overflow

Jan 01, 2012 | stackoverflow.com

Convert a time span in seconds to formatted time in shell


Darren , 2012-11-16 18:59:53

I have a variable $i holding a number of seconds in a shell script, and I am trying to convert it to 24-hour HH:MM:SS format. Is this possible in shell?

sampson-chen , 2012-11-16 19:17:51

Here's a fun hacky way to do exactly what you are looking for =)
date -u -d @${i} +"%T"

Explanation: the -u flag makes date use UTC so the result is not shifted by your time zone, -d @${i} treats $i as seconds since the epoch, and +"%T" formats the output as HH:MM:SS (this works for durations under 24 hours).

glenn jackman ,

Another approach: arithmetic
i=6789
((sec=i%60, i/=60, min=i%60, hrs=i/60))
timestamp=$(printf "%d:%02d:%02d" $hrs $min $sec)
echo $timestamp

produces 1:53:09

Alan Tam , 2014-02-17 06:48:21

The -d argument applies to date from coreutils (Linux) only.

In BSD/OS X, use

date -u -r $i +%T

kossboss , 2015-01-07 13:43:36

Here are my algo/script helpers on my site: http://ram.kossboss.com/seconds-to-split-time-convert/ I used this elegant algo from here: Convert seconds to hours, minutes, seconds
convertsecs() {
 ((h=${1}/3600))
 ((m=(${1}%3600)/60))
 ((s=${1}%60))
 printf "%02d:%02d:%02d\n" $h $m $s
}
TIME1="36"
TIME2="1036"
TIME3="91925"

echo $(convertsecs $TIME1)
echo $(convertsecs $TIME2)
echo $(convertsecs $TIME3)

Example of my second to day, hour, minute, second converter:

# convert seconds to day-hour:min:sec
convertsecs2dhms() {
 ((d=${1}/(60*60*24)))
 ((h=(${1}%(60*60*24))/(60*60)))
 ((m=(${1}%(60*60))/60))
 ((s=${1}%60))
 printf "%02d-%02d:%02d:%02d\n" $d $h $m $s
 # PRETTY OUTPUT: uncomment below printf and comment out above printf if you want prettier output
 # printf "%02dd %02dh %02dm %02ds\n" $d $h $m $s
}
# setting test variables: testing some constant variables & evaluated variables
TIME1="36"
TIME2="1036"
TIME3="91925"
# one way to output results
((TIME4=$TIME3*2)) # 183850
((TIME5=$TIME3*$TIME1)) # 3309300
((TIME6=100*86400+3*3600+40*60+31)) # 8653231 s = 100 days + 3 hours + 40 min + 31 sec
# outputting results: another way to show results (via echo & command substitution with         backticks)
echo $TIME1 - `convertsecs2dhms $TIME1`
echo $TIME2 - `convertsecs2dhms $TIME2`
echo $TIME3 - `convertsecs2dhms $TIME3`
echo $TIME4 - `convertsecs2dhms $TIME4`
echo $TIME5 - `convertsecs2dhms $TIME5`
echo $TIME6 - `convertsecs2dhms $TIME6`

# OUTPUT WOULD BE LIKE THIS (If none pretty printf used): 
# 36 - 00-00:00:36
# 1036 - 00-00:17:16
# 91925 - 01-01:32:05
# 183850 - 02-03:04:10
# 3309300 - 38-07:15:00
# 8653231 - 100-03:40:31
# OUTPUT WOULD BE LIKE THIS (If pretty printf used): 
# 36 - 00d 00h 00m 36s
# 1036 - 00d 00h 17m 16s
# 91925 - 01d 01h 32m 05s
# 183850 - 02d 03h 04m 10s
# 3309300 - 38d 07h 15m 00s
# 1000000000 - 11574d 01h 46m 40s

Basile Starynkevitch ,

If $i represents some date in second since the Epoch, you could display it with
  date -u -d @$i +%H:%M:%S

but you seem to suppose that $i is an interval (e.g. some duration), not a date, and then I don't understand what you want.

Shilv , 2016-11-24 09:18:57

I use C shell, like this:
#! /bin/csh -f

set begDate_r = `date +%s`
set endDate_r = `date +%s`

set secs = `echo "$endDate_r - $begDate_r" | bc`
set h = `echo $secs/3600 | bc`
set m = `echo "$secs/60 - 60*$h" | bc`
set s = `echo $secs%60 | bc`

echo "Formatted Time: $h HOUR(s) - $m MIN(s) - $s SEC(s)"
Continuing @Darren's answer, just to be clear: if you want the conversion in your own time zone, don't use the -u switch, as in date -d @$i +%T or, in some cases, date -d @"$i" +%T

[Feb 16, 2020] Recover deleted files in Debian with TestDisk

Images deleted; see the original link for details
Feb 16, 2020 | vitux.com

... ... ...

You can verify if the utility is indeed installed on your system and also check its version number by using the following command:

$ testdisk --version

Or,

$ testdisk -v

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-64.png" alt="Check TestDisk version" width="734" height="216" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-64.png 734w, https://vitux.com/wp-content/uploads/2019/10/word-image-64-300x88.png 300w" sizes="(max-width: 734px) 100vw, 734px" />

Step 2: Run TestDisk and create a new testdisk.log file

Use the following command in order to run the testdisk command line utility:

$ sudo testdisk

The output will give you a description of the utility. It will also let you create a testdisk.log file. This file will later include useful information about how and where your lost file was found, listed and resumed.

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-65.png" alt="Using Testdisk" width="736" height="411" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-65.png 736w, https://vitux.com/wp-content/uploads/2019/10/word-image-65-300x168.png 300w" sizes="(max-width: 736px) 100vw, 736px" />

The above output gives you three options about what to do with this file:

Create: (recommended)- This option lets you create a new log file.

Append: This option lets you append new information to already listed information in this file from any previous session.

No Log: Choose this option if you do not want to record anything about the session for later use.

Important: TestDisk is a pretty intelligent tool. It does know that many beginners will also be using the utility for recovering lost files. Therefore, it predicts and suggests the option you should be ideally selecting on a particular screen. You can see the suggested options in a highlighted form. You can select an option through the up and down arrow keys and then entering to make your choice.

In the above output, I would opt for creating a new log file. The system might ask you the password for sudo at this point.

Step 3: Select your recovery drive

The utility will now display a list of drives attached to your system. In my case, it is showing my hard drive as it is the only storage device on my system.

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-66.png" alt="Choose recovery drive" width="729" height="493" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-66.png 729w, https://vitux.com/wp-content/uploads/2019/10/word-image-66-300x203.png 300w" sizes="(max-width: 729px) 100vw, 729px" />

Select Proceed, through the right and left arrow keys and hit Enter. As mentioned in the note in the above screenshot, correct disk capacity must be detected in order for a successful file recovery to be performed.

Step 4: Select Partition Table Type of your Selected Drive

Now that you have selected a drive, you need to specify its partition table type on the following screen:

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-67.png" alt="Choose partition table" width="736" height="433" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-67.png 736w, https://vitux.com/wp-content/uploads/2019/10/word-image-67-300x176.png 300w" sizes="(max-width: 736px) 100vw, 736px" />

The utility will automatically highlight the correct choice. Press Enter to continue.

If you are sure that the testdisk intelligence is incorrect, you can make the correct choice from the list and then hit Enter.

Step 5: Select the 'Advanced' option for file recovery

When you have specified the correct drive and its partition type, the following screen will appear:

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-68.png" alt="Advanced file recovery options" width="736" height="446" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-68.png 736w, https://vitux.com/wp-content/uploads/2019/10/word-image-68-300x182.png 300w" sizes="(max-width: 736px) 100vw, 736px" />

Recovering lost files is only one of the features of testdisk, the utility offers much more than that. Through the options displayed in the above screenshot, you can select any of those features. But here we are interested only in recovering our accidentally deleted file. For this, select the Advanced option and hit enter.

In this utility if you reach a point you did not intend to, you can go back by using the q key.

Step 6: Select the drive partition where you lost the file

If your selected drive has multiple partitions, the following screen lets you choose the relevant one from them.

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-69.png" alt="Choose partition from where the file shall be recovered" width="736" height="499" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-69.png 736w, https://vitux.com/wp-content/uploads/2019/10/word-image-69-300x203.png 300w" sizes="(max-width: 736px) 100vw, 736px" />

I lost my file while I was using Linux, Debian. Make your choice and then choose the List option from the options shown at the bottom of the screen.

This will list all the directories on your partition.

Step 7: Browse to the directory from where you lost the file

When the testdisk utility displays all the directories of your operating system, browse to the directory from where you deleted/lost the file. I remember that I lost the file from the Downloads folder in my home directory. So I will browse to home:

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-70.png" alt="Select directory" width="733" height="458" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-70.png 733w, https://vitux.com/wp-content/uploads/2019/10/word-image-70-300x187.png 300w" sizes="(max-width: 733px) 100vw, 733px" />

My username (sana):

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-71.png" alt="Choose user folder" width="735" height="449" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-71.png 735w, https://vitux.com/wp-content/uploads/2019/10/word-image-71-300x183.png 300w" sizes="(max-width: 735px) 100vw, 735px" />

And then the Downloads folder:

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-72.png" alt="Choose downloads" width="738" height="456" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-72.png 738w, https://vitux.com/wp-content/uploads/2019/10/word-image-72-300x185.png 300w" sizes="(max-width: 738px) 100vw, 738px" />

Tip: You can use the left arrow to go back to the previous directory.

When you have reached your required directory, you will see the deleted files in colored or highlighted form.

And, here I see my lost file "accidently_removed.docx" in the list. Of course, I intentionally named it this as I had to illustrate the whole process to you.

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-73.png" alt="Highlighted files" width="735" height="498" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-73.png 735w, https://vitux.com/wp-content/uploads/2019/10/word-image-73-300x203.png 300w" sizes="(max-width: 735px) 100vw, 735px" />

Step 8: Copy the deleted file to be restored

By now, you must have found your lost file in the list. Use the C option to copy the selected file. This file will later be restored to the location you will specify in the next step:

Step 9: Specify the location where the found file will be restored

Now that we have copied the lost file that we have now found, the testdisk utility will display the following screen so that we can specify where to restore it.

You can specify any accessible location as it is only a simple UI thing to copy and paste the file to your desired location.

I am specifically selecting the location from where I lost the file, my Downloads folder:

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-74.png" alt="Choose location to restore file" width="732" height="456" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-74.png 732w, https://vitux.com/wp-content/uploads/2019/10/word-image-74-300x187.png 300w" sizes="(max-width: 732px) 100vw, 732px" />

Step 10: Copy/restore the file to the selected location

After making the selection about where you want to restore the file, click the C button. This will restore your file to that location:

<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-75.png" alt="Restored file successfully" width="735" height="496" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-75.png 735w, https://vitux.com/wp-content/uploads/2019/10/word-image-75-300x202.png 300w" sizes="(max-width: 735px) 100vw, 735px" />

See the text in green in the above screenshot? This is actually great news. Now my file is restored on the specified location.

This might seem to be a slightly long process but it is definitely worth getting your lost file back. The restored file will most probably be in a locked state. This means that only an authorized user can access and open it.

We all need this tool time and again, but if you want to remove it until you need it again, you can do so through the following command:

$ sudo apt-get remove testdisk

You can also delete the testdisk.log file if you want. It is such a relief to get your lost file back!

Karim Buzdar, February 11, 2020 | vitux.com

[Feb 16, 2020] A List Of Useful Console Services For Linux Users by sk

Images deleted; see the original link for details
Feb 13, 2020 | www.ostechnix.com
Cheatsheets for Linux/Unix commands

You probably heard about cheat.sh . I use this service everyday! This is one of the useful service for all Linux users. It displays concise Linux command examples.

For instance, to view the curl command cheatsheet , simply run the following command from your console:

$ curl cheat.sh/curl

It is that simple! You don't need to go through man pages or use any online resources to learn about commands. It can get you the cheatsheets of most Linux and Unix commands in a couple of seconds.

ls command cheatsheet:

$ curl cheat.sh/ls

find command cheatsheet:

$ curl cheat.sh/find

It is a highly recommended tool!




... ... ...

IP Address

We can find the local IP address using the ip command. But what about the public IP address? It is simple!

To find your public IP address, just run the following commands from your Terminal:

$ curl ipinfo.io/ip
157.46.122.176
$ curl eth0.me
157.46.122.176
$ curl checkip.amazonaws.com
157.46.122.176
$ curl icanhazip.com
2409:4072:631a:c033:cc4b:4d25:e76c:9042

There is also a console service to display the ip address in JSON format.

$ curl httpbin.org/ip
{
  "origin": "157.46.122.176"
}

... ... ...

Dictionary

Want to know the meaning of an English word? Here is how you can get the meaning of a word – gustatory

$ curl 'dict://dict.org/d:gustatory'
220 pan.alephnull.com dictd 1.12.1/rf on Linux 4.4.0-1-amd64 <auth.mime> <100411284.5191.1581597016@pan.alephnull.com>
250 ok
150 1 definitions retrieved
151 "Gustatory" gcide "The Collaborative International Dictionary of English v.0.48"
Gustatory \Gust"a*to*ry\, a.
Pertaining to, or subservient to, the sense of taste; as, the
gustatory nerve which supplies the front of the tongue.
[1913 Webster]
.
250 ok [d/m/c = 1/0/16; 0.000r 0.000u 0.000s]
221 bye [d/m/c = 0/0/0; 0.000r 0.000u 0.000s]
Text sharing

You can share texts via some console services. These text sharing services are often useful for sharing code.

Here is an example.

$ echo "Welcome To OSTechNix!" | curl -F 'f:1=<-' ix.io
http://ix.io/2bCA

The above command will share the text "Welcome To OSTechNix!" via the ix.io site. Anyone can access this text from a web browser by navigating to the URL – http://ix.io/2bCA

Another example:

$ echo "Welcome To OSTechNix!" | curl -F file=@- 0x0.st
http://0x0.st/i-0G.txt
File sharing

Not just text; we can even share files with anyone using a console service called filepush .

$ curl --upload-file ostechnix.txt filepush.co/upload/ostechnix.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    72    0     0  100    72      0     54  0:00:01  0:00:01 --:--:--    54http://filepush.co/8x6h/ostechnix.txt
100   110  100    38  100    72     27     53  0:00:01  0:00:01 --:--:--    81

The above command will upload the ostechnix.txt file to the filepush.co site. You can access this file from anywhere by navigating to the link – http://filepush.co/8x6h/ostechnix.txt

Another text sharing console service is termbin :

$ echo "Welcome To OSTechNix!" | nc termbin.com 9999

There is also another console service named transfer.sh . But it doesn't work at the time of writing this guide.

Browser

There are many text browsers available for Linux. Browsh is one of them, and you can access it right from your Terminal using the command:

$ ssh brow.sh

Browsh is a modern text browser that supports graphics, including video. Technically speaking, it is not so much a browser as a terminal front end to a browser. It uses headless Firefox to render the web page and then converts it to ASCII art. Refer to the following guide for more details.

Create QR codes for given string

Do you want to create QR-codes for a given string? That's easy!

$ curl qrenco.de/ostechnix

Here is the QR code for "ostechnix" string.

URL Shorteners

Want to make long URLs shorter so they are easier to post or share with your friends? Use the Tinyurl console service to shorten them:

$ curl -s 'http://tinyurl.com/api-create.php?url=https://www.ostechnix.com/pigz-compress-and-decompress-files-in-parallel-in-linux/'
http://tinyurl.com/vkc5c5p

[Jan 25, 2020] timeout is a command-line utility that runs a specified command and terminates it if it is still running after a given period of time

You can achieve the same effect with the at command, which allows more flexible time patterns.
Jan 23, 2020 | linuxize.com

timeout is a command-line utility that runs a specified command and terminates it if it is still running after a given period of time. In other words, timeout allows you to run a command with a time limit. The timeout command is a part of the GNU core utilities package which is installed on almost any Linux distribution.

It is handy when you want to run a command that doesn't have a built-in timeout option.

In this article, we will explain how to use the Linux timeout command.

How to Use the timeout Command #

The syntax for the timeout command is as follows:

timeout [OPTIONS] DURATION COMMAND [ARG]

The DURATION can be a positive integer or a floating-point number, followed by an optional unit suffix: s for seconds, m for minutes, h for hours, or d for days.

When no unit is used, it defaults to seconds. If the duration is set to zero, the associated timeout is disabled.

The command options must be provided before the arguments.

Here are a few basic examples demonstrating how to use the timeout command:
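
For instance, a minimal invocation that limits ping to five seconds (after which timeout sends it the default SIGTERM signal):

timeout 5 ping 8.8.8.8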

If you want to run a command that requires elevated privileges such as tcpdump , prepend sudo before timeout :

sudo timeout 300 tcpdump -n -w data.pcap
Sending Specific Signal #

If no signal is given, timeout sends the SIGTERM signal to the managed command when the time limit is reached. You can specify which signal to send using the -s ( --signal ) option.

For example, to send SIGKILL to the ping command after one minute, you would use:

sudo timeout -s SIGKILL 1m ping 8.8.8.8

The signal can be specified by its name, like SIGKILL , or by its number, like 9 . The following command is identical to the previous one:

sudo timeout -s 9 1m ping 8.8.8.8

To get a list of all available signals, use the kill -l command:

kill -l
Killing Stuck Processes #

SIGTERM , the default signal that is sent when the time limit is exceeded, can be caught or ignored by some processes. In those situations, the process continues to run after the termination signal is sent.

To make sure the monitored command is killed, use the -k ( --kill-after ) option followed by a time period. When this option is used, after the given time limit is reached the timeout command sends a SIGKILL signal to the managed program, which cannot be caught or ignored.

In the following example, timeout runs the command for one minute, and if it is not terminated by then, it will kill it ten seconds later:

sudo timeout -k 10 1m ping 8.8.8.8


Preserving the Exit Status #

timeout returns 124 when the time limit is reached. Otherwise, it returns the exit status of the managed command.
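
A quick way to see this behavior is to force a timeout on purpose and inspect the exit status (sleep 10 is just a stand-in for a long-running command):

timeout 1 sleep 10
echo $?
124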

To return the exit status of the command even when the time limit is reached, use the --preserve-status option:

timeout --preserve-status 5 ping 8.8.8.8
Running in Foreground #

By default, timeout runs the managed command in the background. If you want to run the command in the foreground, use the --foreground option:

timeout --foreground 5m ./script.sh

This option is useful when you want to run an interactive command that requires user input.

Conclusion #

The timeout command is used to run a given command with a time limit.

timeout is a simple command that doesn't have a lot of options. Typically you will invoke timeout with only two arguments: the duration and the managed command.


[Jan 16, 2020] Watch Command in Linux

Jan 16, 2020 | linuxhandbook.com

Last Updated on January 10, 2020 By Abhishek

Watch is a great utility that automatically refreshes data. Some of the more common uses for this command involve monitoring system processes or logs, but it can be used in combination with pipes for more versatility.
watch [options] [command]
Watch command examples

Using the watch command without any options will use the default refresh interval of 2.0 seconds.

As I mentioned before, one of the more common uses is monitoring system processes. Let's use it with the free command . This will give you up-to-date information about your system's memory usage.

watch free

Yes, it is that simple my friends.

Every 2.0s: free                                pop-os: Wed Dec 25 13:47:59 2019

              total        used        free      shared  buff/cache   available
Mem:       32596848     3846372    25571572      676612     3178904    27702636
Swap:             0           0           0
Adjust refresh rate of watch command

You can easily change how quickly the output is updated using the -n flag.

watch -n 10 free
Every 10.0s: free                               pop-os: Wed Dec 25 13:58:32 2019

              total        used        free      shared  buff/cache   available
Mem:       32596848     4522508    24864196      715600     3210144    26988920
Swap:             0           0           0

This changes the refresh interval from the default 2.0 seconds to 10.0 seconds, as you can see in the top left corner of the output.

Remove title or header info from watch command output
watch -t free

The -t flag removes the title/header information to clean up the output. The information will still refresh every 2 seconds, but you can change that by combining it with the -n option.

              total        used        free      shared  buff/cache   available
Mem:       32596848     3683324    25089268     1251908     3824256    27286132
Swap:             0           0           0
Highlight the changes in watch command output

You can add the -d option and watch will automatically highlight changes for us. Let's take a look at this using the date command, as shown below.
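
A simple way to try the highlighting yourself is to refresh every second and watch the seconds field of date change on each update:

watch -n 1 -d date
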
Using pipes with watch

You can combine commands using pipes. This is not a feature exclusive to watch, but it enhances the functionality of this software. Pipes rely on the | symbol, which is called a pipe symbol or sometimes a vertical bar.

watch "cat /var/log/syslog | tail -n 3"

While this command runs, it will list the last 3 lines of the syslog file. The list will be refreshed every 2 seconds and any changes will be displayed.

Every 2.0s: cat /var/log/syslog | tail -n 3                                                      pop-os: Wed Dec 25 15:18:06 2019

Dec 25 15:17:24 pop-os dbus-daemon[1705]: [session uid=1000 pid=1705] Successfully activated service 'org.freedesktop.Tracker1.Min
er.Extract'
Dec 25 15:17:24 pop-os systemd[1591]: Started Tracker metadata extractor.
Dec 25 15:17:45 pop-os systemd[1591]: tracker-extract.service: Succeeded.

Conclusion

Watch is a simple, but very useful utility. I hope I've given you ideas that will help you improve your workflow.

This is a straightforward command, but there are a wide range of potential uses. If you have any interesting uses that you would like to share, let us know about them in the comments.

[Jan 16, 2020] Linux tools How to use the ss command by Ken Hess (Red Hat)

ss is the Swiss Army Knife of system statistics commands. It's time to say buh-bye to netstat and hello to ss.
Jan 13, 2020 | www.redhat.com

If you're like me, you still cling to soon-to-be-deprecated commands like ifconfig , nslookup , and netstat . The new replacements are ip , dig , and ss , respectively. It's time to (reluctantly) let go of legacy utilities and head into the future with ss . The ip command is worth a mention here because part of netstat 's functionality has been replaced by ip . This article covers the essentials for the ss command so that you don't have to dig (no pun intended) for them.


Formally, ss is the socket statistics command that replaces netstat . In this article, I provide netstat commands and their ss replacements. Michael Prokop, the developer of ss , made it easy for us to transition into ss from netstat by making some of netstat 's options operate in much the same fashion in ss .

For example, to display TCP sockets, use the -t option:

$ netstat -t
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 rhel8:ssh               khess-mac:62036         ESTABLISHED

$ ss -t
State         Recv-Q          Send-Q                    Local Address:Port                   Peer Address:Port          
ESTAB         0               0                          192.168.1.65:ssh                    192.168.1.94:62036

You can see that the information given is essentially the same, but to better mimic what you see in the netstat command, use the -r (resolve) option:

$ ss -tr
State            Recv-Q             Send-Q                          Local Address:Port                         Peer Address:Port             
ESTAB            0                  0                                       rhel8:ssh                             khess-mac:62036

And to see port numbers rather than their translations, use the -n option:

$ ss -ntr
State            Recv-Q             Send-Q                          Local Address:Port                         Peer Address:Port             
ESTAB            0                  0                                       rhel8:22                              khess-mac:62036

It isn't 100% necessary that netstat and ss mesh, but it does make the transition a little easier. So, try your standby netstat options before hitting the man page or the internet for answers, and you might be pleasantly surprised at the results.

For example, the netstat command with the old standby options -an yields comparable results (which are too long to show here in full):

$ netstat -an |grep LISTEN

tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN     
tcp6       0      0 :::22                   :::*                    LISTEN     
unix  2      [ ACC ]     STREAM     LISTENING     28165    /run/user/0/systemd/private
unix  2      [ ACC ]     STREAM     LISTENING     20942    /var/lib/sss/pipes/private/sbus-dp_implicit_files.642
unix  2      [ ACC ]     STREAM     LISTENING     28174    /run/user/0/bus
unix  2      [ ACC ]     STREAM     LISTENING     20241    /var/run/lsm/ipc/simc
<truncated>

$ ss -an |grep LISTEN

u_str             LISTEN              0                    128                                             /run/user/0/systemd/private 28165                  * 0                   
                                                            
u_str             LISTEN              0                    128                   /var/lib/sss/pipes/private/sbus-dp_implicit_files.642 20942                  * 0                   
                                                            
u_str             LISTEN              0                    128                                                         /run/user/0/bus 28174                  * 0                   
                                                            
u_str             LISTEN              0                    5                                                     /var/run/lsm/ipc/simc 20241                  * 0                   
<truncated>

The TCP entries fall at the end of the ss command's display and at the beginning of netstat 's. So, there are layout differences even though the displayed information is really the same.
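
One ss combination worth keeping at hand, since it covers the everyday question of "what is listening on this box, and which process owns it" (the -p process information is complete only when run as root):

$ ss -tulpn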

If you're wondering which netstat commands have been replaced by the ip command, here's one for you:

$ netstat -g
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      all-systems.mcast.net
enp0s3          1      all-systems.mcast.net
lo              1      ff02::1
lo              1      ff01::1
enp0s3          1      ff02::1:ffa6:ab3e
enp0s3          1      ff02::1:ff8d:912c
enp0s3          1      ff02::1
enp0s3          1      ff01::1

$ ip maddr
1:	lo
	inet  224.0.0.1
	inet6 ff02::1
	inet6 ff01::1
2:	enp0s3
	link  01:00:5e:00:00:01
	link  33:33:00:00:00:01
	link  33:33:ff:8d:91:2c
	link  33:33:ff:a6:ab:3e
	inet  224.0.0.1
	inet6 ff02::1:ffa6:ab3e
	inet6 ff02::1:ff8d:912c
	inet6 ff02::1
	inet6 ff01::1

The ss command isn't perfect (sorry, Michael). In fact, there is one significant ss bummer. You can try this one for yourself to compare the two:

$ netstat -s 

Ip:
    Forwarding: 2
    6231 total packets received
    2 with invalid addresses
    0 forwarded
    0 incoming packets discarded
    3104 incoming packets delivered
    2011 requests sent out
    243 dropped because of missing route
<truncated>

$ ss -s

Total: 182
TCP:   3 (estab 1, closed 0, orphaned 0, timewait 0)

Transport Total     IP        IPv6
RAW	  1         0         1        
UDP	  3         2         1        
TCP	  3         2         1        
INET	  7         4         3        
FRAG	  0         0         0

If you figure out how to display the same info with ss , please let me know.

Maybe as ss evolves, it will include more features. I guess Michael or someone else could always just look at the netstat command to glean those statistics from it. Personally, I prefer netstat , and I'm not sure exactly why it's being deprecated in favor of ss . The output from ss is less human-readable in almost every instance.

What do you think? What about ss makes it a better option than netstat ? I suppose I could ask the same question of the other net-tools utilities as well. I don't find anything wrong with them. In my mind, unless you're significantly improving an existing utility, why bother deprecating the other?

There, you have the ss command in a nutshell. As netstat fades into oblivion, I'm sure I'll eventually embrace ss as its successor.

Want more on networking topics? Check out the Linux networking cheat sheet .

Ken Hess is an Enable SysAdmin Community Manager and an Enable SysAdmin contributor. Ken has used Red Hat Linux since 1996 and has written ebooks, whitepapers, actual books, thousands of exam review questions, and hundreds of articles on open source and other topics.

[Jan 16, 2020] Thirteen Useful Tools for Working with Text on the Command Line - Make Tech Easier

Jan 16, 2020 | www.maketecheasier.com

Thirteen Useful Tools for Working with Text on the Command Line By Karl Wakim – Posted on Jan 9, 2020

GNU/Linux distributions include a wealth of programs for handling text, most of which are provided by the GNU core utilities. There's somewhat of a learning curve, but these utilities can prove very useful and efficient when used correctly.

Here are thirteen powerful text manipulation tools every command-line user should know.

1. cat

Cat was designed to concatenate files but is most often used to display a single file. Without any arguments, cat reads standard input until Ctrl + D is pressed (from the terminal, or from another program's output if using a pipe). Standard input can also be explicitly specified with a - .

Cat has a number of useful options, notably -n , which numbers output lines (as used in the example below).

In the following example, we are concatenating and numbering the contents of file1, standard input, and file3.

cat -n file1 - file3
2. sort

As its name suggests, sort sorts file contents alphabetically and numerically.

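For example (names.txt and numbers.txt are placeholder files):

sort names.txt          # sort lines alphabetically
sort -n numbers.txt     # sort lines numerically
sort -r names.txt       # sort in reverse order
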
3. uniq

Uniq takes a sorted file and removes duplicate lines. It is often chained with sort in a single command.

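A typical invocation chains the two commands (words.txt is a placeholder file with repeated lines):

sort words.txt | uniq        # print each distinct line once
sort words.txt | uniq -c     # prefix each line with its occurrence count
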
4. comm

Comm is used to compare two sorted files, line by line. It outputs three columns: the first two columns contain lines unique to the first and second file respectively, and the third displays those found in both files.

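For example, with two pre-sorted placeholder files, you can also suppress columns to show only the lines common to both:

comm file1.txt file2.txt
comm -12 file1.txt file2.txt     # suppress columns 1 and 2, leaving only common lines
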
5. cut

Cut is used to retrieve specific sections of lines, based on characters, fields, or bytes. It can read from a file or from standard input if no file is specified.

Cutting by character position

The -c option specifies a single character position or one or more ranges of characters.

For example:

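This keeps only the first five characters of each line (file.txt is a placeholder):

cut -c 1-5 file.txt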

Cutting by field

Fields are separated by a delimiter consisting of a single character, which is specified with the -d option. The -f option selects a field position or one or more ranges of fields using the same format as above.

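Since /etc/passwd uses : as its field delimiter, the following prints each user name and login shell:

cut -d ':' -f 1,7 /etc/passwd
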
6. dos2unix

GNU/Linux and Unix usually terminate text lines with a line feed (LF), while Windows uses carriage return and line feed (CRLF). Compatibility issues can arise when handling CRLF text on Linux, which is where dos2unix comes in. It converts CRLF terminators to LF.

In the following example, the file command is used to check the text format before and after using dos2unix .

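A minimal session illustrating the conversion (doc.txt is a placeholder; the file command reports the line-terminator style):

file doc.txt
doc.txt: ASCII text, with CRLF line terminators
dos2unix doc.txt
file doc.txt
doc.txt: ASCII text
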
7. fold

To make long lines of text easier to read and handle, you can use fold , which wraps lines to a specified width.

Fold strictly matches the specified width by default, breaking words where necessary.

fold -w 30 longline.txt

If breaking words is undesirable, you can use the -s option to break at spaces.

fold -w 30 -s longline.txt
8. iconv

This tool converts text from one encoding to another, which is very useful when dealing with unusual encodings.

iconv -f input_encoding -t output_encoding -o output_file input_file

Note: you can list the available encodings with iconv -l .

9. sed

sed is a powerful and flexible stream editor, most commonly used to find and replace strings with the following syntax.

The following command will read from the specified file (or standard input), replacing the parts of text that match the regular expression pattern with the replacement string and outputting the result to the terminal.

sed 's/pattern/replacement/g' filename

To modify the original file instead, you can use the -i flag.

10. wc

The wc utility prints the number of bytes, characters, words, or lines in a file.

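For example, using /etc/passwd since it exists on virtually every system:

wc -l /etc/passwd     # count lines
wc -w /etc/passwd     # count words
wc -c /etc/passwd     # count bytes
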
11. split

You can use split to divide a file into smaller files, by number of lines, by size, or to a specific number of files.

Splitting by number of lines

split -l num_lines input_file output_prefix

Splitting by bytes

split -b bytes input_file output_prefix

Splitting to a specific number of files

split -n num_files input_file output_prefix
12. tac

Tac, which is cat in reverse, does exactly that: it displays files with the lines in reverse order.

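For example, to read a placeholder log file with the newest lines first:

tac application.log | less
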
13. tr

The tr tool is used to translate or delete sets of characters.

A set of characters is usually either a string or a range of characters, for instance "a-z" .

Refer to the tr manual page for more details.

To translate one set to another, use the following syntax:

tr SET1 SET2

For instance, to replace lowercase characters with their uppercase equivalent, you can use the following:

tr "a-z" "A-Z"
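
Since tr reads standard input, a quick way to test it (the echoed string is arbitrary):

echo "make me shout" | tr "a-z" "A-Z"
MAKE ME SHOUT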

To delete a set of characters, use the -d flag.

tr -d SET

To delete the complement of a set of characters (i.e. everything except the set), use -dc .

tr -dc SET
Conclusion

There is plenty to learn when it comes to the Linux command line. Hopefully, the above commands can help you deal with text better on the command line.

[Dec 12, 2019] Use timedatectl to Control System Time and Date in Linux

Dec 12, 2019 | www.maketecheasier.com

Mastering the Command Line: Use timedatectl to Control System Time and Date in Linux By Himanshu Arora – Posted on Nov 11, 2014

The timedatectl command in Linux allows you to query and change the system clock and its settings. It comes as part of systemd, a replacement for the sysvinit daemon used in GNU/Linux systems.

In this article, we will discuss this command and the features it provides using relevant examples.

Timedatectl examples

Note – All examples described in this article are tested on GNU bash, version 4.3.11(1).

Display system date/time information

Simply run the command without any command line options or flags, and it gives you information on the system's current date and time, as well as time-related settings. For example, here is the output when I executed the command on my system:

$ timedatectl
      Local time: Sat 2014-11-08 05:46:40 IST
  Universal time: Sat 2014-11-08 00:16:40 UTC
        Timezone: Asia/Kolkata (IST, +0530)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a

So you can see that the output contains information on local time, universal time, and the time zone, as well as settings related to NTP, RTC, and DST for the localhost.

Update the system date or time using the set-time option

To set the system clock to a specified date or time, use the set-time option followed by a string containing the new date/time information. For example, to change the system time to 6:40 am, I used the following command:

$ sudo timedatectl set-time "2014-11-08 06:40:00"

and here is the output:

$ timedatectl
      Local time: Sat 2014-11-08 06:40:02 IST
  Universal time: Sat 2014-11-08 01:10:02 UTC
        Timezone: Asia/Kolkata (IST, +0530)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a

Observe that the Local time field now shows the updated time. Similarly, you can update the system date, too.

Update the system time zone using the set-timezone option

To set the system time zone to the specified value, you can use the set-timezone option followed by the time zone value. To help you with the task, the timedatectl command also provides another useful option. list-timezones provides you with a list of available time zones to choose from.

For example, here is how to produce the scrollable list of time zones on your system.
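
A sketch of the invocation (piping through grep narrows the list; the search term is just an example):

$ timedatectl list-timezones
$ timedatectl list-timezones | grep -i kolkata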

To change the system's current time zone from Asia/Kolkata to Asia/Kathmandu, here is the command I used:

$ timedatectl set-timezone Asia/Kathmandu

and to verify the change, here is the output of the timedatectl command:

$ timedatectl
      Local time: Sat 2014-11-08 07:11:23 NPT
  Universal time: Sat 2014-11-08 01:26:23 UTC
        Timezone: Asia/Kathmandu (NPT, +0545)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a

You can see that the time zone was changed to the new value.

Configure RTC

You can also use the timedatectl command to configure RTC (real-time clock). For those who are unaware, RTC is a battery-powered computer clock that keeps track of the time even when the system is turned off. The timedatectl command offers a set-local-rtc option which can be used to maintain the RTC in either local time or universal time.

This option requires a boolean argument. If 0 is supplied, the system is configured to maintain the RTC in universal time:

$ timedatectl set-local-rtc 0

but in case 1 is supplied, it will maintain the RTC in local time instead.

$ timedatectl set-local-rtc 1

A word of caution : Maintaining the RTC in the local time zone is not fully supported and will create various problems with time zone changes and daylight saving adjustments. If at all possible, use RTC in UTC.

Another point worth noting is that if set-local-rtc is invoked and the --adjust-system-clock option is passed, the system clock is synchronized from the RTC again, taking the new setting into account. Otherwise the RTC is synchronized from the system clock.

Configure NTP-based network time synchronization

NTP, or Network Time Protocol, is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. It is intended to synchronize all participating computers to within a few milliseconds of UTC.

The timedatectl command provides a set-ntp option that controls whether NTP based network time synchronization is enabled. This option expects a boolean argument. To enable NTP-based time synchronization, run the following command:

$ timedatectl set-ntp true

To disable, run:

$ timedatectl set-ntp false
Conclusion

As evident from the examples described above, the timedatectl command is a handy tool for system administrators, who can use it to adjust various system clocks and RTC configurations as well as poll remote servers for time information. To learn more about the command, head over to its man page.

[Dec 12, 2019] Set Time-Date-Timezone using Command Line in Linux

Dec 12, 2019 | linoxide.com

Set Time/Date/Timezone in Ubuntu Linux February 5, 2019 Updated September 27, 2019 By Pungki Arianto

Time is an important aspect of Linux systems, especially for critical services such as cron jobs. Having the correct time on the server ensures that the server operates in a healthy environment that consists of distributed systems and maintains accuracy in the workplace.

In this tutorial, we will focus on how to set time/date/time zone and to synchronize the server clock with your Ubuntu Linux machine.

Check Current Time

You can verify the current time and date using the date and the timedatectl commands. These Linux commands can be executed straight from the terminal as a regular user or as a superuser. The usefulness of the two commands is seen when you want to correct a wrong time from the command line.

Using the date command

Log in as a root user and use the command as follows

$ date


You can also use the same command to check a date 2 days ago

$ date --date="2 days ago"


Using timedatectl command

Checking on the status of the time on your system as well as the present time settings, use the command timedatectl as shown

# timedatectl

or

# timedatectl  status


Changing Time

We use timedatectl to change the system time using the format HH:MM:SS, where HH stands for the hour in 24-hour format, MM for minutes, and SS for seconds.

To set the time to 09:08:07, use the command as follows (using timedatectl):

# timedatectl set-time 09:08:07
Using the date command

Changing the time means all the system processes run on the same clock, putting the desktop and server at the same time. From the command line, use the date command as follows:

# date +%T -s "10:13:13"

Where,
• 10: Hour (hh)
• 13: Minute (mm)
• 13: Second (ss)

To set the time on the 12-hour clock with an AM or PM designator, use %p in the following format.

# date +%T%p -s "6:10:30AM"
# date +%T%p -s "12:10:30PM"
Change Date

Generally, you want your system date and time to be set automatically. If for some reason you have to change it manually, you can use the date command:

# date --set="20140125 09:17:00"

It will set the current date and time of your system to 'January 25, 2014' and '09:17:00 AM'. Please note that you must have root privileges to do this.

You can also use timedatectl to set the date. The accepted format is YYYY-MM-DD, where YYYY represents the year, MM the month in two digits, and DD the day in two digits. To change the date to 15 January 2019, you should use the following command:

# timedatectl set-time 20190115
Create custom date format

To create custom date format, use a plus sign (+)

$ date +"Day : %d Month : %m Year : %Y"
Day: 05 Month: 12 Year: 2013

$ date +%D
12/05/13

The %D format follows the Month/Day/Year format.

You can also put the day name if you want. Here are some examples :

$ date +"%a %b %d %y"
Fri Dec 06 13

$ date +"%A %B %d %Y"
Friday December 06 2013

$ date +"%A %B %d %Y %T"
Friday December 06 2013 00:30:37

$ date +"%A %B-%d-%Y %c"
Friday December-06-2013 12:30:37 AM WIB

List/Change time zone

Changing the time zone is crucial when you want to ensure that everything synchronizes with the Network Time Protocol. The first thing to do is to list all the regional time zones using the list-timezones option, optionally combined with grep to narrow down the output.

# timedatectl list-timezones

The above command will present a scrollable list.


The recommended time zone for servers is UTC, as it doesn't have daylight saving time. If you know the specific time zone you want, set it by name using the following command:

# timedatectl set-timezone America/Los_Angeles

To display the time zone, execute:

# timedatectl | grep "Time"


Set the Local-rtc

The real-time clock (RTC), which is also referred to as the hardware clock, is independent of the operating system and continues to run even when the server is shut down.

Use the following command to keep the RTC in universal time:

# timedatectl set-local-rtc 0

In addition, use the following command to keep the RTC in local time:

# timedatectl set-local-rtc 1
Check/Change CMOS Time

The computer's CMOS battery will automatically synchronize time with the system clock as long as the CMOS is working correctly.

Use the hwclock command to check the CMOS date as follows

# hwclock


To synchronize the CMOS date with the system date, use the following format:

# hwclock --systohc

Having the correct time in your Linux environment is critical because many operations depend on it. Such operations include logging events and cron jobs. We hope you found this article useful.


[Nov 09, 2019] Mirroring a running system into a ramdisk Oracle Linux Blog

Nov 09, 2019 | blogs.oracle.com


By Greg Marsden

In this blog post, Oracle Linux kernel developer William Roche presents a method to mirror a running system into a ramdisk.

A RAM mirrored System ?

There are cases where a system can boot correctly but after some time, can lose its system disk access - for example an iSCSI system disk configuration that has network issues, or any other disk driver problem. Once the system disk is no longer accessible, we rapidly face a hang situation followed by I/O failures, without the possibility of local investigation on this machine. I/O errors can be reported on the console:

 XFS (dm-0): Log I/O Error Detected....

Or losing access to basic commands like:

# ls
-bash: /bin/ls: Input/output error

The approach presented here allows a small system disk space to be mirrored in memory to avoid the above I/O failures situation, which provides the ability to investigate the reasons for the disk loss. The system disk loss will be noticed as an I/O hang, at which point there will be a transition to use only the ram-disk.

To enable this, the Oracle Linux developer Philip "Bryce" Copeland created the method described below (more details follow).

Disk and memory sizes:

As we are going to mirror the entire system installation to the memory, this system installation image has to fit in a fraction of the memory - giving enough memory room to hold the mirror image and necessary running space.

Of course this is a trade-off between the memory available to the server and the minimal disk size needed to run the system. For example a 12GB disk space can be used for a minimal system installation on a 16GB memory machine.

A standard Oracle Linux installation uses XFS as root fs, which (currently) can't be shrunk. In order to generate a usable "small enough" system, it is recommended to proceed to the OS installation on a correctly sized disk space. Of course, a correctly sized installation location can be created using partitions of large physical disk. Then, the needed application filesystems can be mounted from their current installation disk(s). Some system adjustments may also be required (services added, configuration changes, etc...).

This configuration phase should not be underestimated as it can be difficult to separate the system from the needed applications, and keeping both on the same space could be too large for a RAM disk mirroring.

The idea is not to keep an entire system load active when losing disks access, but to be able to have enough system to avoid system commands access failure and analyze the situation.

We are also going to avoid the use of swap. When the system disk access is lost, we don't want to require it for swap data. Also, we don't want to use more memory space to hold a swap space mirror. The memory is better used directly by the system itself.

The system installation can have a swap space (for example a 1.2GB space on our 12GB disk example) but we are neither going to mirror it nor use it.

Our 12GB disk example could be used with: 1GB /boot space, 11GB LVM Space (1.2GB swap volume, 9.8 GB root volume).

Ramdisk memory footprint:

The ramdisk size has to be a little larger (8M) than the root volume size that we are going to mirror, making room for metadata. But we can deal with 2 types of ramdisk: the standard uncompressed RAM block device (brd), and the compressed zRAM device (zram).

We can expect roughly 30% to 50% memory space gain from zram compared to brd, but zram must use 4k I/O blocks only. This means that the filesystem used for root has to deal only with I/Os that are a multiple of 4k.

Basic commands:

Here is a simple list of commands to manually create and use a ramdisk and mirror the root filesystem space. We create a temporary configuration that needs to be undone or the subsequent reboot will not work. But we also provide below a way of automating at startup and shutdown.

Note the root volume size (considered to be ol/root in this example):

# lvs --units k -o lv_size ol/root
  LSize
  10268672.00k

Create a ramdisk a little larger than that (at least 8M larger):

# modprobe brd rd_nr=1 rd_size=$((10268672 + 8*1024))

Verify the created disk:

# lsblk /dev/ram0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
ram0   1:0   0  9.8G  0 disk

Put the disk under lvm control

# pvcreate /dev/ram0
  Physical volume "/dev/ram0" successfully created.
# vgextend ol /dev/ram0
  Volume group "ol" successfully extended
# vgscan --cache
  Reading volume groups from cache.
  Found volume group "ol" using metadata type lvm2
# lvconvert -y -m 1 ol/root /dev/ram0
  Logical volume ol/root successfully converted.

We now have ol/root mirror to our /dev/ram0 disk.

# lvs -a -o +devices
  LV              VG Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---  9.79g                                  40.70            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---  9.79g                                                   /dev/sda2(307)
  [root_rimage_1] ol Iwi-aor---  9.79g                                                   /dev/ram0(1)
  [root_rmeta_0]  ol ewi-aor---  4.00m                                                   /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---  4.00m                                                   /dev/ram0(0)
  swap            ol -wi-ao---- <1.20g                                                   /dev/sda2(0)

A few minutes (or seconds) later, the synchronization is completed:

# lvs -a -o +devices
  LV              VG Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---  9.79g                                  100.00           root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---  9.79g                                                   /dev/sda2(307)
  [root_rimage_1] ol iwi-aor---  9.79g                                                   /dev/ram0(1)
  [root_rmeta_0]  ol ewi-aor---  4.00m                                                   /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---  4.00m                                                   /dev/ram0(0)
  swap            ol -wi-ao---- <1.20g                                                   /dev/sda2(0)

We have our mirrored configuration running !

For security, we can also remove the swap and /boot, /boot/efi(if it exists) mount points:

# swapoff -a
# umount /boot/efi
# umount /boot

Stopping the system also requires some actions as you need to cleanup the configuration so that it will not be looking for a gone ramdisk on reboot.

# lvconvert -y -m 0 ol/root /dev/ram0
  Logical volume ol/root successfully converted.
# vgreduce ol /dev/ram0
  Removed "/dev/ram0" from volume group "ol"
# mount /boot
# mount /boot/efi
# swapon -a
What about in-memory compression?

As indicated above, zRAM devices can compress data in-memory, but 2 main problems need to be fixed: lvm has to be made aware of zram devices, and the root file system has to perform only 4k I/Os.
Make lvm work with zram:

The lvm configuration file has to be changed to take into account the "zram" type of devices, by including the following "types" entry in the "devices" section of the /etc/lvm/lvm.conf file:

devices {
    types = [ "zram", 16 ]
}
Root file system I/Os:

A standard Oracle Linux installation uses XFS, and we can check the sector size used (depending on the disk type used) with

# xfs_info /
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=641792 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2567168, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

We can notice here that the sector size (sectsz) used on this root fs is a standard 512 bytes. This fs type cannot be mirrored with a zRAM device, and needs to be recreated with 4k sector sizes.

Transforming the root file system to 4k sector size:

This is simply a backup (to a zram disk) and restore procedure after recreating the root FS. To do so, the system has to be booted from another system image. Booting from an installation DVD image can be a good possibility.

sh-4.2# vgchange -a y ol
  2 logical volume(s) in volume group "ol" now active
sh-4.2# mount /dev/mapper/ol-root /mnt

sh-4.2# modprobe zram
sh-4.2# echo 10G > /sys/block/zram0/disksize
sh-4.2# mkfs.xfs /dev/zram0
meta-data=/dev/zram0             isize=256    agcount=4, agsize=655360 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
sh-4.2# mkdir /mnt2
sh-4.2# mount /dev/zram0 /mnt2
sh-4.2# xfsdump -L BckUp -M dump -f /mnt2/ROOT /mnt
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsdump: level 0 dump of localhost:/mnt
...
xfsdump: dump complete: 130 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /mnt2/ROOT OK (success)
xfsdump: Dump Status: SUCCESS
sh-4.2# umount /mnt

sh-4.2# mkfs.xfs -f -s size=4096 /dev/mapper/ol-root
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=641792 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2567168, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
sh-4.2# mount /dev/mapper/ol-root /mnt

sh-4.2# xfsrestore -f /mnt2/ROOT /mnt
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
...
xfsrestore: restore complete: 337 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /mnt2/ROOT OK (success)
xfsrestore: Restore Status: SUCCESS
sh-4.2# umount /mnt
sh-4.2# umount /mnt2

sh-4.2# reboot

$ xfs_info /
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=641792 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2567168, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

With sectsz=4096, our system is now ready for zRAM mirroring.

Basic commands with a zRAM device:

# modprobe zram
# zramctl --find --size 10G
/dev/zram0
# pvcreate /dev/zram0
  Physical volume "/dev/zram0" successfully created.
# vgextend ol /dev/zram0
  Volume group "ol" successfully extended
# vgscan --cache
  Reading volume groups from cache.
  Found volume group "ol" using metadata type lvm2
# lvconvert -y -m 1 ol/root /dev/zram0
  Logical volume ol/root successfully converted.
# lvs -a -o +devices
  LV              VG Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---  9.79g                                  12.38            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---  9.79g                                                   /dev/sda2(307)
  [root_rimage_1] ol Iwi-aor---  9.79g                                                   /dev/zram0(1)
  [root_rmeta_0]  ol ewi-aor---  4.00m                                                   /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---  4.00m                                                   /dev/zram0(0)
  swap            ol -wi-ao---- <1.20g                                                   /dev/sda2(0)
# lvs -a -o +devices
  LV              VG Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---  9.79g                                  100.00           root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---  9.79g                                                   /dev/sda2(307)
  [root_rimage_1] ol iwi-aor---  9.79g                                                   /dev/zram0(1)
  [root_rmeta_0]  ol ewi-aor---  4.00m                                                   /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---  4.00m                                                   /dev/zram0(0)
  swap            ol -wi-ao---- <1.20g                                                   /dev/sda2(0)
# zramctl
NAME       ALGORITHM DISKSIZE DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lzo            10G  9.8G  5.3G  5.5G       1

The compressed disk uses a total of 5.5GB of memory to mirror a 9.8G volume size (using in this case 8.5G).

Removal is performed the same way as brd, except that the device is /dev/zram0 instead of /dev/ram0.

Automating the process:

Fortunately, the procedure can be automated on system boot and shutdown with the following scripts (given as examples).

The start method: /usr/sbin/start-raid1-ramdisk: [ https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/start-raid1-ramdisk ]

After a chmod 555 /usr/sbin/start-raid1-ramdisk, running this script on a 4k xfs root file system should show something like:

# /usr/sbin/start-raid1-ramdisk
  Volume group "ol" is already consistent.
RAID1 ramdisk: intending to use 10276864 K of memory for facilitation of [ / ]
  Physical volume "/dev/zram0" successfully created.
  Volume group "ol" successfully extended
  Logical volume ol/root successfully converted.
Waiting for mirror to synchronize...
LVM RAID1 sync of [ / ] took 00:01:53 sec
  Logical volume ol/root changed.
NAME       ALGORITHM DISKSIZE DATA  COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4           9.8G  9.8G  5.5G  5.8G       1

The stop method: /usr/sbin/stop-raid1-ramdisk: [ https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/stop-raid1-ramdisk ]

After a chmod 555 /usr/sbin/stop-raid1-ramdisk, running this script should show something like:

# /usr/sbin/stop-raid1-ramdisk
  Volume group "ol" is already consistent.
  Logical volume ol/root changed.
  Logical volume ol/root successfully converted.
  Removed "/dev/zram0" from volume group "ol"
  Labels on physical volume "/dev/zram0" successfully wiped.

A service Unit file can also be created: /etc/systemd/system/raid1-ramdisk.service [https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/raid1-ramdisk.service]

[Unit]
Description=Enable RAMdisk RAID 1 on LVM
After=local-fs.target
Before=shutdown.target reboot.target halt.target

[Service]
ExecStart=/usr/sbin/start-raid1-ramdisk
ExecStop=/usr/sbin/stop-raid1-ramdisk
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0

[Install]
WantedBy=multi-user.target
Conclusion:

When the system disk access problem manifests itself, the ramdisk mirror branch will provide the possibility to investigate the situation. The goal of this procedure is not to keep the system running on this memory mirror configuration, but to help investigate a bad situation.

When the problem is identified and fixed, I really recommend coming back to a standard configuration -- enjoying the entire memory of the system, a standard system disk, a possible swap space, etc.

I hope the method described here can help. I also want to thank Philip "Bryce" Copeland, who created the first prototype of the above scripts, and Mark Kanda, who helped test many aspects of this work, for their reviews.

[Nov 09, 2019] chkservice Is A systemd Unit Manager With A Terminal User Interface

The site is https://github.com/linuxenko/chkservice . The tool is written in C++.
It looks like in version 0.3 the author increased the complexity by adding features which are probably not needed at all.
Nov 07, 2019 | www.linuxuprising.com

chkservice, a terminal user interface (TUI) for managing systemd units, has been updated recently with window resize and search support.

chkservice is a simplistic systemd unit manager that uses ncurses for its terminal interface. Using it you can enable or disable, and start or stop, a systemd unit. It also shows the unit's status (enabled, disabled, static or masked).

You can navigate the chkservice user interface using keyboard shortcuts.

To enable or disable a unit press Space , and to start or stop a unit press s . You can access the help screen, which shows all available keys, by pressing ? .

The command line tool had its first release in August 2017, with no new releases until a few days ago when version 0.2 was released, quickly followed by 0.3.

With the latest 0.3 release, chkservice adds a search feature that allows easily searching through all systemd units.

To search, type / followed by your search query, and press Enter . To search for the next item matching your search query you'll have to type / again, followed by Enter or Ctrl + m (without entering any search text).

Another addition to the latest chkservice is window resize support. In the 0.1 version, the tool would close when the user tried to resize the terminal window. That's no longer the case; chkservice now allows resizing the terminal window it runs in.

And finally, the last addition to chkservice 0.3 is G/g navigation support. Press G ( Shift + g ) to navigate to the bottom, and g to navigate to the top.

Download and install chkservice

The initial (0.1) chkservice version can be found in the official repositories of a few Linux distributions, including Debian and Ubuntu (and Debian- or Ubuntu-based Linux distributions -- e.g. Linux Mint, Pop!_OS, Elementary OS and so on).

There are some third-party repositories available as well, including a Fedora Copr, Ubuntu / Linux Mint PPA, and Arch Linux AUR, but at the time I'm writing this, only the AUR package was updated to the latest chkservice version 0.3.

You may also install chkservice from source. Use the instructions provided in the tool's readme to either create a DEB package or install it directly.

[Nov 08, 2019] Multiple Linux sysadmins working as root

No new interesting ideas for such an important topic whatsoever. One of the main problems here is documenting the actions of each administrator in such a way that the full set of actions is visible to everybody in a convenient and transparent manner. With multiple terminals open, the shell history is not a file from which you can deduce each sysadmin's actions, as the parts of the history from the additional terminals are missing. Actually, Solaris had some relevant ideas implemented in Solaris 10, but they never made it to Linux.
May 21, 2012 | serverfault.com

In our team we have three seasoned Linux sysadmins having to administer a few dozen Debian servers. Previously we have all worked as root using SSH public key authentication. But we had a discussion on what is the best practice for that scenario and couldn't agree on anything.

Using the root account via SSH keys

Everybody's SSH public key is put into ~root/.ssh/authorized_keys2

Using personalized accounts and sudo

That way we would login with personalized accounts using SSH public keys and use sudo to do single tasks with root permissions. In addition we could give ourselves the "adm" group that allows us to view log files.

Using multiple UID 0 users

This is a very unique proposal from one of the sysadmins. He suggests creating three users in /etc/passwd all having UID 0 but different login names. He claims that this is not actually forbidden and allows everyone to be UID 0 while still being able to audit.

Comments:

The second option is the best one IMHO. Personal accounts, sudo access. Disable root access via SSH completely. We have a few hundred servers and half a dozen system admins, this is how we do it.

How does agent forwarding break exactly?

Also, if it's such a hassle using sudo in front of every task you can invoke a sudo shell with sudo -s or switch to a root shell with sudo su -


With regard to the 3rd suggested strategy, other than perusal of the useradd -o -u userXXX options as recommended by @jlliagre, I am not familiar with running multiple users as the same uid. (Hence if you do go ahead with that, I would be interested if you could update the post with any issues (or successes) that arise...)

I guess my first observation regarding the first option "Everybody's SSH public key is put into ~root/.ssh/authorized_keys2" is that unless you absolutely are never going to work on any other systems;

  1. then at least some of the time, you are going to have to work with user accounts and sudo

The second observation would be that if you work on systems that aspire to HIPAA or PCI-DSS compliance, or stuff like CAPP and EAL, then you are going to have to work around the issues of sudo because;

  1. It is an industry standard to provide non-root individual user accounts that can be audited, disabled, expired, etc., typically using some centralized user database.

So; Using personalized accounts and sudo

It is unfortunate that, as a sysadmin, almost everything you will need to do on a remote machine is going to require some elevated permissions; however, it is annoying that most of the SSH-based tools and utilities are busted while you are in sudo.

Hence I can pass on some tricks that I use to work around the annoyances of sudo that you mention. The first problem is that if root login is blocked using PermitRootLogin=no , or you do not have the root ssh key, then it makes SCPing files something of a PITA.

Problem 1 : You want to scp files from the remote side, but they require root access, however you cannot login to the remote box as root directly.

Boring Solution : copy the files to home directory, chown, and scp down.

ssh userXXX@remotesystem , sudo su - etc., cp /etc/somefiles to /home/userXXX/somefiles , chown -R userXXX /home/userXXX/somefiles , use scp to retrieve the files from the remote host.

Less Boring Solution : sftp supports the -s sftp_server flag, hence you can do something like the following (if you have configured password-less sudo in /etc/sudoers );

sftp  -s '/usr/bin/sudo /usr/libexec/openssh/sftp-server' \
userXXX@remotehost:/etc/resolv.conf

(you can also use this hack-around with sshfs, but I am not sure it's recommended... ;-)

If you don't have password-less sudo rights, or for some configured reason that method above is broken, I can suggest one more less boring file transfer method, to access remote root files.

Port Forward Ninja Method :

Login to the remote host, but specify that the remote port 3022 (can be anything free, and non-reserved for admins, ie >1024) is to be forwarded back to port 22 on the local side.

 [localuser@localmachine ~]$ ssh userXXX@remotehost -R 3022:localhost:22
Last login: Mon May 21 05:46:07 2012 from 123.123.123.123
------------------------------------------------------------------------
This is a private system; blah blah blah
------------------------------------------------------------------------

Get root in the normal fashion...

-bash-3.2$ sudo su -
[root@remotehost ~]#

Now you can scp the files in the other direction avoiding the boring boring step of making a intermediate copy of the files;

[root@remotehost ~]#  scp -o NoHostAuthenticationForLocalhost=yes \
 -P3022 /etc/resolv.conf localuser@localhost:~
localuser@localhost's password: 
resolv.conf                                 100%  
[root@remotehost ~]#

Problem 2: SSH agent forwarding : If you load the root profile, e.g. by specifying a login shell, the necessary environment variables for SSH agent forwarding such as SSH_AUTH_SOCK are reset, hence SSH agent forwarding is "broken" under sudo su - .

Half baked answer :

Anything that properly loads a root shell is going to rightfully reset the environment; however, there is a slight work-around you can use when you need BOTH root permission AND the ability to use the SSH Agent, AT THE SAME TIME

This achieves a kind of chimera profile that should really not be used, because it is a nasty hack , but it is useful when you need to SCP files from the remote host as root to some other remote host.

Anyway, you can enable that your user can preserve their ENV variables, by setting the following in sudoers;

 Defaults:userXXX    !env_reset

this allows you to create nasty hybrid login environments like so;

login as normal;

[localuser@localmachine ~]$ ssh userXXX@remotehost 
Last login: Mon May 21 12:33:12 2012 from 123.123.123.123
------------------------------------------------------------------------
This is a private system; blah blah blah
------------------------------------------------------------------------
-bash-3.2$ env | grep SSH_AUTH
SSH_AUTH_SOCK=/tmp/ssh-qwO715/agent.1971

Create a bash shell that runs /root/.profile and /root/.bashrc , but preserves SSH_AUTH_SOCK :

-bash-3.2$ sudo -E bash -l

So this shell has root permissions, and root $PATH (but a borked home directory...)

bash-3.2# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel) context=user_u:system_r:unconfined_t
bash-3.2# echo $PATH
/usr/kerberos/sbin:/usr/local/sbin:/usr/sbin:/sbin:/home/xtrabm/xtrabackup-manager:/usr/kerberos/bin:/opt/admin/bin:/usr/local/bin:/bin:/usr/bin:/opt/mx/bin

But you can use that invocation to do things that require remote sudo root, but also the SSH agent access like so;

bash-3.2# scp /root/.ssh/authorized_keys ssh-agent-user@some-other-remote-host:~
/root/.ssh/authorized_keys              100%  126     0.1KB/s   00:00    
bash-3.2#


The 3rd option looks ideal - but have you actually tried it out to see what's happening? While you might see the additional usernames in the authentication step, any reverse lookup is going to return the same value.

Allowing root direct ssh access is a bad idea, even if your machines are not connected to the internet / use strong passwords.

Usually I use 'su' rather than sudo for root access.


I use (1), but I happened to type

rm -rf / tmp *

on one ill-fated day. I can see this being bad enough if you have more than a handful of admins.

(2) Is probably more engineered - and you can become full-fledged root through sudo su -. Accidents are still possible though.

(3) I would not touch with a barge pole. I used it on Suns, in order to have a non-barebone-sh root account (if I remember correctly), but it was never robust -- plus I doubt it would be very auditable.

Definitely answer 2.
  1. Means that you're allowing SSH access as root . If this machine is in any way public facing, this is just a terrible idea; back when I ran SSH on port 22, my VPS got multiple attempts hourly to authenticate as root. I had a basic IDS set up to log and ban IPs that made multiple failed attempts, but they kept coming. Thankfully, I'd disabled SSH access as the root user as soon as I had my own account and sudo configured. Additionally, you have virtually no audit trail doing this.
  2. Provides root access as and when it is needed. Yes, you barely have any privileges as a standard user, but this is pretty much exactly what you want; if an account does get compromised, you want it to be limited in its abilities. You want any super user access to require a password re-entry. Additionally, sudo access can be controlled through user groups, and restricted to particular commands if you like, giving you more control over who has access to what. Additionally, commands run as sudo can be logged, so it provides a much better audit trail if things go wrong. Oh, and don't just run "sudo su -" as soon as you log in. That's terrible, terrible practice.
  3. Your sysadmin's idea is bad. And he should feel bad. No, *nix machines probably won't stop you from doing this, but both your file system, and virtually every application out there expects each user to have a unique UID. If you start going down this road, I can guarantee that you'll run into problems. Maybe not immediately, but eventually. For example, despite displaying nice friendly names, files and directories use UID numbers to designate their owners; if you run into a program that has a problem with duplicate UIDs down the line, you can't just change a UID in your passwd file later on without having to do some serious manual file system cleanup.

sudo is the way forward. It may cause additional hassle with running commands as root, but it provides you with a more secure box, both in terms of access and auditing.

Rohaq

Definitely option 2, but use groups to give each user as much control as possible without needing to use sudo. sudo in front of every command loses half the benefit because you are always in the danger zone. If you make the relevant directories writable by the sysadmins without sudo you return sudo to the exception which makes everyone feel safer.

Julian
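As a hypothetical sketch of Julian's suggestion (the group name and directory are placeholders):

# groupadd sysadmins
# usermod -aG sysadmins alice
# chgrp -R sysadmins /opt/deploy
# chmod -R g+w /opt/deploy

With that in place, alice can work in /opt/deploy directly, and sudo returns to being the exception rather than the rule.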

In the old days, sudo did not exist. As a consequence, having multiple UID 0 users was the only available alternative. But it's still not that good, notably with logging based on the UID to obtain the username. Nowadays, sudo is the only appropriate solution. Forget anything else.

Multiple UID 0 accounts are, in fact, documented practice: BSD unices have had their toor account for a long time, and bashroot users tend to be accepted practice on systems where csh is the standard shell (accepted malpractice ;)

Perhaps I'm weird, but method (3) is what popped into my mind first as well. Pros: you'd have every user's name in the logs and would know who did what as root. Cons: they'd each be root all the time, so mistakes can be catastrophic.

I'd like to question why you need all admins to have root access. All 3 methods you propose have one distinct disadvantage: once an admin runs sudo bash -l or sudo su - or suchlike, you lose your ability to track who does what, and after that a mistake can be catastrophic. Moreover, in case of possible misbehaviour, things might end up a lot worse.

Instead, you might want to consider going another way: give each admin sudo rights only for the subsystem accounts they actually manage (for example, allow user martin to run commands only as the postfix user).

This way, martin would be able to safely handle postfix, and in case of mistake or misbehaviour, you'd only lose your postfix system, not the entire server.

Same logic can be applied to any other subsystem, such as apache, mysql, etc.
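As a hypothetical sketch of that idea in sudoers (usernames and target accounts are placeholders):

martin  ALL = (postfix) ALL
alice   ALL = (apache)  ALL

martin could then run, for example, sudo -u postfix postqueue -p to inspect the mail queue, but nothing as root.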

Of course, this is purely theoretical at this point and might be hard to set up. It does look like a better way to go, though. At least to me. If anyone tries this, please let me know how it went.

Tuncay Göncüoğlu

[Nov 08, 2019] Perl tricks for system administrators by Ruth Holloway

Notable quotes:
"... /home/<department>/<username> ..."
Jul 27, 2016 | opensource.com

Did you know that Perl is a great programming language for system administrators? Perl is platform-independent, so you can do things on different operating systems without rewriting your scripts. Scripting in Perl is quick and easy, and its portability makes your scripts amazingly useful. Here are a few examples, just to get your creative juices flowing!

Renaming a bunch of files

Suppose you need to rename a whole bunch of files in a directory. In this case, we've got a directory full of .xml files, and we want to rename them all to .html . Easy-peasy!

#!/usr/bin/perl
use strict;
use warnings;

foreach my $file (glob "*.xml") {
    my $new = substr($file, 0, -3) . "html";
    rename $file, $new;
}

Then just cd to the directory where you need to make the change, and run the script. You could put this in a cron job, if you needed to run it regularly, and it is easily enhanced to accept parameters.

Speaking of accepting parameters, let's take a look at a script that does just that.

Creating a Linux user account


Suppose you need to regularly create Linux user accounts on your system, and the format of the username is first initial/last name, as is common in many businesses. (This is, of course, a good idea, until you get John Smith and Jane Smith working at the same company -- or want John to have two accounts, as he works part-time in two different departments. But humor me, okay?) Each user account needs to be in a group based on their department, and home directories are of the format /home/<department>/<username> . Let's take a look at a script to do that:

#!/usr/bin/env perl
use strict;
use warnings;

my $adduser = '/usr/sbin/adduser';

use Getopt::Long qw(GetOptions);

# If the user calls the script with no parameters,
# give them help!

if (not @ARGV) {
    usage();
}

# Gather our options; if they specify any undefined option,
# they'll get sent some help!

my %opts;
GetOptions(\%opts,
    'fname=s',
    'lname=s',
    'dept=s',
    'run',
) or usage();

# Let's validate our inputs. All three parameters are
# required, and must be alphabetic.
# You could be clever, and do this with a foreach loop,
# but let's keep it simple for now.

if (not $opts{fname} or $opts{fname} !~ /^[a-zA-Z]+$/) {
    usage("First name must be alphabetic");
}
if (not $opts{lname} or $opts{lname} !~ /^[a-zA-Z]+$/) {
    usage("Last name must be alphabetic");
}
if (not $opts{dept} or $opts{dept} !~ /^[a-zA-Z]+$/) {
    usage("Department must be alphabetic");
}

# Construct the username and home directory

my $username = lc(substr($opts{fname}, 0, 1) . $opts{lname});
my $home = "/home/$opts{dept}/$username";

# Show them what we've got ready to go.

print "Name: $opts{fname} $opts{lname}\n";
print "Username: $username\n";
print "Department: $opts{dept}\n";
print "Home directory: $home\n\n";

# Use qq() here, so that the quotes in the --gecos flag
# get carried into the command!

my $cmd = qq($adduser --home $home --ingroup $opts{dept} \\
    --gecos "$opts{fname} $opts{lname}" $username);

print "$cmd\n";
if ($opts{run}) {
    system $cmd;
} else {
    print "You need to add the --run flag to actually execute\n";
}

sub usage {
    my ($msg) = @_;
    if ($msg) {
        print "$msg\n\n";
    }
    print "Usage: $0 --fname FirstName --lname LastName --dept Department --run\n";
    exit;
}

As with the previous script, there are opportunities for enhancement, but something like this might be all that you need for this task.
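As a hypothetical invocation (assuming the script is saved as mkuser.pl and made executable; the names are placeholders):

$ ./mkuser.pl --fname Jane --lname Smith --dept sales
Name: Jane Smith
Username: jsmith
Department: sales
Home directory: /home/sales/jsmith

/usr/sbin/adduser --home /home/sales/jsmith --ingroup sales \
    --gecos "Jane Smith" jsmith
You need to add the --run flag to actually execute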

One more, just for fun!

Change copyright text in every Perl source file in a directory tree

Now we're going to try a mass edit. Suppose you've got a directory full of code, and each file has a copyright statement somewhere in it. (Rich Bowen wrote a great article, Copyright statements proliferate inside open source code a couple of years ago that discusses the wisdom of copyright statements in open source code. It is a good read, and I recommend it highly. But again, humor me.) You want to change that text in each and every file in the directory tree. File::Find and File::Slurp are your friends!

#!/usr/bin/perl
use strict;
use warnings;

use File::Find qw(find);
use File::Slurp qw(read_file write_file);

# If the user gives a directory name, use that. Otherwise,
# use the current directory.

my $dir = $ARGV[0] || '.';

# File::Find::find is kind of dark-arts magic.
# You give it a reference to some code,
# and a directory to hunt in, and it will
# execute that code on every file in the
# directory, and all subdirectories. In this
# case, \&change_file is the reference
# to our code, a subroutine. You could, if
# what you wanted to do was really short,
# include it in a { } block instead. But doing
# it this way is nice and readable.

find(\&change_file, $dir);

sub change_file {
    my $name = $_;

    # If the file is a directory, symlink, or other
    # non-regular file, don't do anything

    if (not -f $name) {
        return;
    }

    # If it's not Perl, don't do anything.

    if (substr($name, -3) ne ".pl") {
        return;
    }
    print "$name\n";

    # Gobble up the file, complete with carriage
    # returns and everything.
    # Be wary of this if you have very large files
    # on a system with limited memory!

    my $data = read_file($name);

    # Use a regex to make the change. If the string appears
    # more than once, this will change it everywhere!

    $data =~ s/Copyright Old/Copyright New/g;

    # Let's not ruin our original files

    my $backup = "$name.bak";
    rename $name, $backup;
    write_file($name, $data);

    return;
}

Because of Perl's portability, you could use this script on a Windows system as well as a Linux system -- it Just Works because of the underlying Perl interpreter. The create-an-account code above, by contrast, is not portable; it is Linux-specific because it uses Linux commands such as adduser.

In my experience, I've found it useful to have a Git repository of these things somewhere that I can clone on each new system I'm working with. Over time, you'll think of changes to make to the code to enhance the capabilities, or you'll add new scripts, and Git can help you make sure that all your tools and tricks are available on all your systems.

I hope these little scripts have given you some ideas about how you can use Perl to make your system administration life a little easier. In addition to these longer scripts, take a look at the fantastic list of Perl one-liners, and links to other Perl magic, assembled by Mischa Peterson.

[Nov 08, 2019] Manage NTP with Chrony by David Both

Dec 03, 2018 | opensource.com

Chronyd is a better choice for most networks than ntpd for keeping computers synchronized with the Network Time Protocol.

"Does anybody really know what time it is? Does anybody really care?"
Chicago, 1969

Perhaps that rock group didn't care what time it was, but our computers do need to know the exact time. Timekeeping is very important to computer networks. In banking, stock markets, and other financial businesses, transactions must be maintained in the proper order, and exact time sequences are critical for that. For sysadmins and DevOps professionals, it's easier to follow the trail of email through a series of servers or to determine the exact sequence of events using log files on geographically dispersed hosts when exact times are kept on the computers in question.

I used to work at an organization that received over 20 million emails per day and had four servers just to accept and do a basic filter on the incoming flood of email. From there, emails were sent to one of four other servers to perform more complex anti-spam assessments, then they were delivered to one of several additional servers where the emails were placed in the correct inboxes. At each layer, the emails would be sent to one of the next-level servers, selected only by the randomness of round-robin DNS. Sometimes we had to trace a new message through the system until we could determine where it "got lost," according to the pointy-haired bosses. We had to do this with frightening regularity.

Most of that email turned out to be spam. Some people actually complained that their [joke, cat pic, recipe, inspirational saying, or other-strange-email]-of-the-day was missing and asked us to find it. We did reject those opportunities.

Our email and other transactional searches were aided by log entries with timestamps that -- today -- can resolve down to the nanosecond in even the slowest of modern Linux computers. In very high-volume transaction environments, even a few microseconds of difference in the system clocks can mean sorting thousands of transactions to find the correct one(s).

The NTP server hierarchy

Computers worldwide use the Network Time Protocol (NTP) to synchronize their times with internet standard reference clocks via a hierarchy of NTP servers. The primary servers are at stratum 1, and they are connected directly to various national time services at stratum 0 via satellite, radio, or even modems over phone lines. The time service at stratum 0 may be an atomic clock, a radio receiver tuned to the signals broadcast by an atomic clock, or a GPS receiver using the highly accurate clock signals broadcast by GPS satellites.

To prevent time requests from time servers lower in the hierarchy (i.e., with a higher stratum number) from overwhelming the primary reference servers, there are several thousand public NTP stratum 2 servers that are open and available for anyone to use. Many organizations with large numbers of hosts that need an NTP server will set up their own time servers so that only one local host accesses the stratum 2 time servers, then they configure the remaining network hosts to use the local time server which, in my case, is a stratum 3 server.

NTP choices

The original NTP daemon, ntpd , has been joined by a newer one, chronyd . Both keep the local host's time synchronized with the time server. Both services are available, and I have seen nothing to indicate that this will change anytime soon.

Chrony has features that make it the better choice for most environments, chief among them that it synchronizes faster, copes better with intermittent network connections and with machines that are suspended or rebooted frequently, and slews the clock gradually instead of stepping it.

The NTP and Chrony RPM packages are available from standard Fedora repositories. You can install both and switch between them, but modern Fedora, CentOS, and RHEL releases have moved from NTP to Chrony as their default time-keeping implementation. I have found that Chrony works well, provides a better interface for the sysadmin, presents much more information, and increases control.

Just to make it clear: NTP is a protocol, and it is implemented by either ntpd or Chrony. If you'd like to know more, read this comparison between NTP and Chrony as implementations of the NTP protocol.

This article explains how to configure Chrony clients and servers on a Fedora host, but the configuration for CentOS and RHEL current releases works the same.

Chrony structure

The Chrony daemon, chronyd , runs in the background and monitors the time and status of the time server specified in the chrony.conf file. If the local time needs to be adjusted, chronyd does it smoothly without the programmatic trauma that would occur if the clock were instantly reset to a new time.

Chrony's chronyc tool allows someone to monitor the current status of Chrony and make changes if necessary. The chronyc utility can be used as a command that accepts subcommands, or it can be used as an interactive text-mode program. This article will explain both uses.

Client configuration

The NTP client configuration is simple and requires little or no intervention. The NTP server can be defined during the Linux installation or provided by the DHCP server at boot time. The default /etc/chrony.conf file (shown below in its entirety) requires no intervention to work properly as a client. For Fedora, Chrony uses the Fedora NTP pool, and CentOS and RHEL have their own NTP server pools. As in many Red Hat-based distributions, the configuration file is well commented.

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool 2.fedora.pool.ntp.org iburst

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
keyfile /etc/chrony.keys

# Get TAI-UTC offset and leap seconds from the system tz database.
leapsectz right/UTC

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking

Let's look at the current status of NTP on a virtual machine I use for testing. The chronyc command, when used with the tracking subcommand, provides statistics that report how far off the local system is from the reference server.

[root@studentvm1 ~]# chronyc tracking
Reference ID : 23ABED4D (ec2-35-171-237-77.compute-1.amazonaws.com)
Stratum : 3
Ref time (UTC) : Fri Nov 16 16:21:30 2018
System time : 0.000645622 seconds slow of NTP time
Last offset : -0.000308577 seconds
RMS offset : 0.000786140 seconds
Frequency : 0.147 ppm slow
Residual freq : -0.073 ppm
Skew : 0.062 ppm
Root delay : 0.041452706 seconds
Root dispersion : 0.022665167 seconds
Update interval : 1044.2 seconds
Leap status : Normal
[root@studentvm1 ~]#

The Reference ID in the first line of the result is the server the host is synchronized to -- in this case, a stratum 3 reference server that was last contacted by the host at 16:21:30 2018. The other lines are described in the chronyc(1) man page .

The sources subcommand is also useful because it provides information about the time source configured in chrony.conf .

[root@studentvm1 ~]# chronyc sources
210 Number of sources = 5
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^+ 192.168.0.51 3 6 377 0 -2613us[-2613us] +/- 63ms
^+ dev.smatwebdesign.com 3 10 377 28m -2961us[-3534us] +/- 113ms
^+ propjet.latt.net 2 10 377 465 -1097us[-1085us] +/- 77ms
^* ec2-35-171-237-77.comput> 2 10 377 83 +2388us[+2395us] +/- 95ms
^+ PBX.cytranet.net 3 10 377 507 -1602us[-1589us] +/- 96ms
[root@studentvm1 ~]#

The first source in the list is the time server I set up for my personal network. The others were provided by the pool. Even though my NTP server doesn't appear in the Chrony configuration file above, my DHCP server provides its IP address for the NTP server. The "S" column -- Source State -- indicates with an asterisk ( * ) the server our host is synced to. This is consistent with the data from the tracking subcommand.

The -v option provides a nice description of the fields in this output.

[root@studentvm1 ~]# chronyc sources -v
210 Number of sources = 5

.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| / '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
|| .- xxxx [ yyyy ] +/- zzzz
|| Reachability register (octal) -. | xxxx = adjusted offset,
|| Log2(Polling interval) --. | | yyyy = measured offset,
|| \ | | zzzz = estimated error.
|| | | \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^+ 192.168.0.51 3 7 377 28 -2156us[-2156us] +/- 63ms
^+ triton.ellipse.net 2 10 377 24 +5716us[+5716us] +/- 62ms
^+ lithium.constant.com 2 10 377 351 -820us[ -820us] +/- 64ms
^* t2.time.bf1.yahoo.com 2 10 377 453 -992us[ -965us] +/- 46ms
^- ntp.idealab.com 2 10 377 799 +3653us[+3674us] +/- 87ms
[root@studentvm1 ~]#

If I wanted my server to be the preferred reference time source for this host, I would add the line below to the /etc/chrony.conf file.

server 192.168.0.51 iburst prefer

I usually place this line just above the first pool server statement near the top of the file. There is no special reason for this, except I like to keep the server statements together. It would work just as well at the bottom of the file, and I have done that on several hosts. This configuration file is not sequence-sensitive.

The prefer option marks this as the preferred reference source. As such, this host will always be synchronized with this reference source (as long as it is available). We can also use the fully qualified hostname for a remote reference server or the hostname only (without the domain name) for a local reference time source as long as the search statement is set in the /etc/resolv.conf file. I prefer the IP address to ensure that the time source is accessible even if DNS is not working. In most environments, the server name is probably the better option, because NTP will continue to work even if the server's IP address changes.

If you don't have a specific reference source you want to synchronize to, it is fine to use the defaults.

Configuring an NTP server with Chrony

The nice thing about the Chrony configuration file is that this single file configures the host as both a client and a server. To add a server function to our host -- it will always be a client, obtaining its time from a reference server -- we just need to make a couple of changes to the Chrony configuration, then configure the host's firewall to accept NTP requests.

Open the /etc/chrony.conf file in your favorite text editor and uncomment the local stratum 10 line. This enables the Chrony NTP server to continue to act as if it were connected to a remote reference server if the internet connection fails; this enables the host to continue to be an NTP server to other hosts on the local network.

Let's restart chronyd and track how the service is working for a few minutes. Before we enable our host as an NTP server, we want to test a bit.

[root@studentvm1 ~]# systemctl restart chronyd ; watch chronyc tracking

The results should look like this. The watch command runs the chronyc tracking command every two seconds so we can watch changes occur over time.

Every 2.0s: chronyc tracking studentvm1: Fri Nov 16 20:59:31 2018

Reference ID : C0A80033 (192.168.0.51)
Stratum : 4
Ref time (UTC) : Sat Nov 17 01:58:51 2018
System time : 0.001598277 seconds fast of NTP time
Last offset : +0.001791533 seconds
RMS offset : 0.001791533 seconds
Frequency : 0.546 ppm slow
Residual freq : -0.175 ppm
Skew : 0.168 ppm
Root delay : 0.094823152 seconds
Root dispersion : 0.021242738 seconds
Update interval : 65.0 seconds
Leap status : Normal

Notice that my NTP server, the studentvm1 host, synchronizes to the host at 192.168.0.51, which is my internal network NTP server, at stratum 4. Synchronizing directly to the Fedora pool machines would result in synchronization at stratum 3. Notice also that the amount of error decreases over time. Eventually, it should stabilize with a tiny variation around a fairly small range of error. The size of the error depends upon the stratum and other network factors. After a few minutes, use Ctrl+C to break out of the watch loop.

To turn our host into an NTP server, we need to allow it to listen on the local network. Uncomment the following line to allow hosts on the local network to access our NTP server.

# Allow NTP client access from local network.
allow 192.168.0.0/16

Note that the server can listen for requests on any local network it's attached to. The IP address in the "allow" line is just intended for illustrative purposes. Be sure to change the IP network and subnet mask in that line to match your local network's.

Restart chronyd .

[root@studentvm1 ~]# systemctl restart chronyd

To allow other hosts on your network to access this server, configure the firewall to allow inbound UDP packets on port 123. Check your firewall's documentation to find out how to do that.
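On a host running firewalld, for example, this is one way to do it (a sketch; adjust for your firewall and zone setup):

# firewall-cmd --permanent --add-service=ntp
# firewall-cmd --reload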

Testing

Your host is now an NTP server. You can test it with another host or a VM that has access to the network on which the NTP server is listening. Configure the client to use the new NTP server as the preferred server in the /etc/chrony.conf file, then monitor that client using the chronyc tools we used above.

Chronyc as an interactive tool

As I mentioned earlier, chronyc can be used as an interactive command tool. Simply run the command without a subcommand and you get a chronyc command prompt.

[root@studentvm1 ~]# chronyc
chrony version 3.4
Copyright (C) 1997-2003, 2007, 2009-2018 Richard P. Curnow and others
chrony comes with ABSOLUTELY NO WARRANTY. This is free software, and
you are welcome to redistribute it under certain conditions. See the
GNU General Public License version 2 for details.

chronyc>

You can enter just the subcommands at this prompt. Try using the tracking , ntpdata , and sources commands. The chronyc command line allows command recall and editing for chronyc subcommands. You can use the help subcommand to get a list of possible commands and their syntax.

Conclusion

Chrony is a powerful tool for synchronizing the times of client hosts, whether they are all on the local network or scattered around the globe. It's easy to configure because, despite the large number of options available, only a few configurations are required for most circumstances.

After my client computers have synchronized with the NTP server, I like to set the system hardware clock from the system (OS) time by using the following command:

/sbin/hwclock --systohc

This command can be added as a cron job or a script in cron.daily to keep the hardware clock synced with the system time.
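A minimal sketch of such a cron.daily script (the file name is an assumption):

# cat /etc/cron.daily/hwclock-sync
#!/bin/bash
# Set the hardware clock from the current system time
/sbin/hwclock --systohc

Remember to make the script executable so that run-parts will pick it up.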

Chrony and NTP (the service) both use the same configuration, and the files' contents are interchangeable. The man pages for chronyd , chronyc , and chrony.conf contain an amazing amount of information that can help you get started or learn about esoteric configuration options.

Do you run your own NTP server? Let us know in the comments and be sure to tell us which implementation you are using, NTP or Chrony.

[Nov 08, 2019] Vim universe. fzf - command line fuzzy finder by Alexey Samoshkin

Nov 08, 2019 | www.youtube.com

Zeeshan Jan , 1 month ago (edited)

Alexey, thanks for a great video. I have a question: how did you integrate fzf and bat? When I am in zsh under tmux and I type fzf and search for a file, I am not able to select multiple files using TAB (I can do this inside VIM, but not in the tmux/iTerm terminal), and I am not able to see the preview, even though I have already installed bat using brew on my MacBook Pro. Also, when I type cd ** it doesn't work.

Paul Hale , 4 months ago

Thanks for the video. When searching in vim, dotfiles are hidden. How can we configure it so that dotfiles are shown, but .git and its subfolders are ignored?
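One common approach, not from the video, is to point fzf at a file lister that includes hidden files but prunes .git (a sketch assuming ripgrep is installed):

export FZF_DEFAULT_COMMAND='rg --files --hidden --glob "!.git"'
export FZF_CTRL_T_COMMAND="$FZF_DEFAULT_COMMAND"

fzf runs FZF_DEFAULT_COMMAND to build its candidate list, so both the shell bindings and editor plugins that defer to it will then show dotfiles while skipping the .git tree.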

[Nov 08, 2019] 10 resources every sysadmin should know about Opensource.com

Nov 08, 2019 | opensource.com

Cheat

Having a hard time remembering a command? Normally you might resort to a man page, but some man pages have a hard time getting to the point. It's the reason Chris Allen Lane came up with the idea (and more importantly, the code) for a cheat command .

The cheat command displays cheatsheets for common tasks in your terminal. It's a man page without the preamble. It cuts to the chase and tells you exactly how to do whatever it is you're trying to do. And if it lacks a common example that you think ought to be included, you can submit an update.

$ cheat tar
# To extract an uncompressed archive:
tar -xvf '/path/to/foo.tar'

# To extract a .gz archive:
tar -xzvf '/path/to/foo.tgz'
[ ... ]

You can also treat cheat as a local cheatsheet system, which is great for all the in-house commands you and your team have invented over the years. You can easily add a local cheatsheet to your own home directory, and cheat will find and display it just as if it were a popular system command.
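As a sketch of that local use (paths vary between versions: the classic Python release reads ~/.cheat, while newer Go versions use configurable cheatpaths; the sheet name and script here are hypothetical):

$ mkdir -p ~/.cheat
$ cat > ~/.cheat/deploy <<'EOF'
# To push the current release to staging:
./scripts/deploy.sh staging
EOF
$ cheat deploy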

[Nov 08, 2019] A Linux user's guide to Logical Volume Management Opensource.com

Nov 08, 2019 | opensource.com

In Figure 1, two complete physical hard drives and one partition from a third hard drive have been combined into a single volume group. Two logical volumes have been created from the space in the volume group, and a filesystem, such as EXT3 or EXT4, has been created on each of the two logical volumes.

Figure 1: LVM allows combining partitions and entire hard drives into Volume Groups.

Adding disk space to a host is fairly straightforward but, in my experience, is done relatively infrequently. The basic steps needed are listed below. You can either create an entirely new volume group or you can add the new space to an existing volume group and either expand an existing logical volume or create a new one.

Adding a new logical volume

There are times when it is necessary to add a new logical volume to a host. For example, after noticing that the directory containing virtual disks for my VirtualBox virtual machines was filling up the /home filesystem, I decided to create a new logical volume in which to store the virtual machine data, including the virtual disks. This would free up a great deal of space in my /home filesystem and also allow me to manage the disk space for the VMs independently.

The basic steps for adding a new logical volume are as follows.

  1. If necessary, install a new hard drive.
  2. Optional: Create a partition on the hard drive.
  3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.
  4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.
  5. Create a new logical volume (LV) from the space in the volume group.
  6. Create a filesystem on the new logical volume.
  7. Add appropriate entries to /etc/fstab for mounting the filesystem.
  8. Mount the filesystem.

Now for the details. The following sequence is taken from an example I used as a lab project when teaching about Linux filesystems.

Example

This example shows how to use the CLI to extend an existing volume group to add more space to it, create a new logical volume in that space, and create a filesystem on the logical volume. This procedure can be performed on a running, mounted filesystem.

WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems including BTRFS and ZFS cannot be resized.

Install hard drive

If there is not enough space in the volume group on the existing hard drive(s) in the system to add the desired amount of space it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive, and then perform the following steps.

Create Physical Volume from hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.

pvcreate /dev/hdd

It is not necessary to create a partition of any kind on the new hard drive. This creation of the Physical Volume which will be recognized by the Logical Volume Manager can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

Extend the existing Volume Group

In this example we will extend an existing volume group rather than creating a new one; you can choose to do it either way. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example the existing Volume Group is named MyVG01.

vgextend /dev/MyVG01 /dev/hdd
Create the Logical Volume

First create the Logical Volume (LV) from existing free space within the Volume Group. The command below creates a LV with a size of 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.

lvcreate -L +50G --name Stuff MyVG01
Create the filesystem

Creating the Logical Volume does not create the filesystem. That task must be performed separately. The command below creates an EXT4 filesystem that fits the newly created Logical Volume.

mkfs -t ext4 /dev/MyVG01/Stuff
Add a filesystem label

Adding a filesystem label makes it easy to identify the filesystem later in case of a crash or other disk related problems.

e2label /dev/MyVG01/Stuff Stuff
Mount the filesystem

At this point you can create a mount point, add an appropriate entry to the /etc/fstab file, and mount the filesystem.
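A minimal sketch of those three steps, using the label set above (the mount point name is an assumption):

# mkdir /Stuff
# echo "LABEL=Stuff  /Stuff  ext4  defaults  1 2" >> /etc/fstab
# mount /Stuff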

You should also check to verify the volume has been created correctly. You can use the df , lvs, and vgs commands to do this.

Resizing a logical volume in an LVM filesystem

The need to resize a filesystem has been around since the beginning of the first versions of Unix and has not gone away with Linux. It has gotten easier, however, with Logical Volume Management.

  1. If necessary, install a new hard drive.
  2. Optional: Create a partition on the hard drive.
  3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.
  4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.
  5. Create one or more logical volumes (LV) from the space in the volume group, or expand an existing logical volume with some or all of the new space in the volume group.
  6. If you created a new logical volume, create a filesystem on it. If adding space to an existing logical volume, use the resize2fs command to enlarge the filesystem to fill the space in the logical volume.
  7. Add appropriate entries to /etc/fstab for mounting the filesystem.
  8. Mount the filesystem.
Example

This example describes how to resize an existing Logical Volume in an LVM environment using the CLI. It adds about 50GB of space to the /Stuff filesystem. This procedure can be used on a mounted, live filesystem only with the Linux 2.6 Kernel (and higher) and EXT3 and EXT4 filesystems. I do not recommend that you do so on any critical system, but it can be done and I have done so many times; even on the root (/) filesystem. Use your judgment.

WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems including BTRFS and ZFS cannot be resized.

Install the hard drive

If there is not enough space on the existing hard drive(s) in the system to add the desired amount of space it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive and then perform the following steps.

Create a Physical Volume from the hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.

pvcreate /dev/hdd

It is not necessary to create a partition of any kind on the new hard drive. This creation of the Physical Volume which will be recognized by the Logical Volume Manager can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

Add PV to existing Volume Group

For this example, we will use the new PV to extend an existing Volume Group. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example, the existing Volume Group is named MyVG01.

vgextend /dev/MyVG01 /dev/hdd
Extend the Logical Volume

Extend the Logical Volume (LV) from existing free space within the Volume Group. The command below expands the LV by 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.

lvextend -L +50G /dev/MyVG01/Stuff
Expand the filesystem

Extending the Logical Volume will also expand the filesystem if you use the -r option. If you do not use the -r option, that task must be performed separately. The command below resizes the filesystem to fit the newly resized Logical Volume.

resize2fs /dev/MyVG01/Stuff
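Alternatively, the extend and resize steps can be combined by adding the -r option mentioned above:

lvextend -r -L +50G /dev/MyVG01/Stuff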

You should check to verify the resizing has been performed correctly. You can use the df , lvs, and vgs commands to do this.

Tips

Over the years I have learned a few things that can make logical volume management even easier than it already is. Hopefully these tips can prove of some value to you.

I know that, like me, many sysadmins have resisted the change to Logical Volume Management. I hope that this article will encourage you to at least try LVM. I am really glad that I did; my disk management tasks are much easier since I made the switch.

[Nov 08, 2019] 10 killer tools for the admin in a hurry Opensource.com

Nov 08, 2019 | opensource.com

NixCraft
Use the site's internal search function. With more than a decade of regular updates, there's gold to be found here -- useful scripts and handy hints that can solve your problem straight away. This is often the second place I look after Google.

Webmin
This gives you a nice web interface to remotely edit your configuration files. It cuts down on a lot of time spent having to juggle directory paths and sudo nano , which is handy when you're handling several customers.

Windows Subsystem for Linux
The reality of the modern workplace is that most employees are on Windows, while the grown-up gear in the server room is on Linux. So sometimes you find yourself trying to do admin tasks from (gasp) a Windows desktop.

What do you do? Install a virtual machine? It's actually much faster and far less work to configure if you install the Windows Subsystem for Linux compatibility layer, now available at no cost on Windows 10.

This gives you a Bash terminal in a window where you can run Bash scripts and Linux binaries on the local machine, have full access to both Windows and Linux filesystems, and mount network drives. It's available in Ubuntu, OpenSUSE, SLES, Debian, and Kali flavors.

mRemoteNG
This is an excellent SSH and remote desktop client for when you have 100+ servers to manage.

Setting up a network so you don't have to do it again

A poorly planned network is the sworn enemy of the admin who hates working overtime.

IP Addressing Schemes that Scale
The diabolical thing about running out of IP addresses is that, when it happens, the network's grown large enough that a new addressing scheme is an expensive, time-consuming pain in the proverbial.

Ain't nobody got time for that!

At some point, IPv6 will finally arrive to save the day. Until then, these one-size-fits-most IP addressing schemes should keep you going, no matter how many network-connected wearables, tablets, smart locks, lights, security cameras, VoIP headsets, and espresso machines the world throws at us.

Linux Chmod Permissions Cheat Sheet
A short but sweet cheat sheet of Bash commands to set permissions across the network. This is so when Bill from Customer Service falls for that ransomware scam, you're recovering just his files and not the entire company's.

VLSM Subnet Calculator
Just put in the number of networks you want to create from an address space and the number of hosts you want per network, and it calculates what the subnet mask should be for everything.

Single-purpose Linux distributions

Need a Linux box that does just one thing? It helps if someone else has already sweated the small stuff on an operating system you can install and have ready immediately.

Each of these has, at one point, made my work day so much easier.

Porteus Kiosk
This is for when you want a computer totally locked down to just a web browser. With a little tweaking, you can even lock the browser down to just one website. This is great for public access machines. It works with touchscreens or with a keyboard and mouse.

Parted Magic
This is an operating system you can boot from a USB drive to partition hard drives, recover data, and run benchmarking tools.

IPFire
Hahahaha, I still can't believe someone called a router/firewall/proxy combo "I pee fire." That's my second favorite thing about this Linux distribution. My favorite is that it's a seriously solid software suite. It's so easy to set up and configure, and there is a heap of plugins available to extend it.

What about your top tools and cheat sheets?

So, how about you? What tools, resources, and cheat sheets have you found to make the workday easier? I'd love to know. Please share in the comments.

[Nov 02, 2019] LVM spanning over multiple disks What disk is a file on? Can I lose a drive without total loss

Notable quotes:
"... If you lose a drive in a volume group, you can force the volume group online with the missing physical volume, but you will be unable to open the LV's that were contained on the dead PV, whether they be in whole or in part. ..."
"... So, if you had for instance 10 LV's, 3 total on the first drive, #4 partially on first drive and second drive, then 5-7 on drive #2 wholly, then 8-10 on drive 3, you would be potentially able to force the VG online and recover LV's 1,2,3,8,9,10.. #4,5,6,7 would be completely lost. ..."
"... LVM doesn't really have the concept of a partition it uses PVs (Physical Volumes), which can be a partition. These PVs are broken up into extents and then these are mapped to the LVs (Logical Volumes). When you create the LVs you can specify if the data is striped or mirrored but the default is linear allocation. So it would use the extents in the first PV then the 2nd then the 3rd. ..."
"... As Peter has said the blocks appear as 0's if a PV goes missing. So you can potentially do data recovery on files that are on the other PVs. But I wouldn't rely on it. You normally see LVM used in conjunction with RAIDs for this reason. ..."
"... it's effectively as if a huge chunk of your disk suddenly turned to badblocks. You can patch things back together with a new, empty drive to which you give the same UUID, and then run an fsck on any filesystems on logical volumes that went across the bad drive to hope you can salvage something. ..."
Mar 16, 2015 | serverfault.com

I have three 990GB partitions over three drives in my server. Using LVM, I can create one ~3TB partition for file storage.

1) How does the system determine what partition to use first?
2) Can I find what disk a file or folder is physically on?
3) If I lose a drive in the LVM, do I lose all data, or just data physically on that disk?

Luke has no name

  1. The system fills from the first disk in the volume group to the last, unless you configure striping with extents.
  2. I don't think this is possible, but where I'd start to look is in the lvs/vgs commands man pages.
  3. If you lose a drive in a volume group, you can force the volume group online with the missing physical volume, but you will be unable to open the LVs that were contained on the dead PV, whether they be in whole or in part. So, if you had for instance 10 LVs, 3 wholly on the first drive, #4 partially on the first and second drives, 5-7 wholly on drive #2, and 8-10 on drive 3, you would potentially be able to force the VG online and recover LVs 1, 2, 3, 8, 9, 10; LVs 4, 5, 6, 7 would be completely lost.

Peter Grace

1) How does the system determine what partition to use first?

LVM doesn't really have the concept of a partition; it uses PVs (Physical Volumes), which can be partitions. These PVs are broken up into extents, and these are then mapped to the LVs (Logical Volumes). When you create the LVs you can specify if the data is striped or mirrored, but the default is linear allocation. So it would use the extents in the first PV, then the 2nd, then the 3rd.

2) Can I find what disk a file or folder is physically on?

You can determine what PVs a LV has allocation extents on. But I don't know of a way to get that information for an individual file.
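For the PV side of the question, the standard tools can at least show which physical volumes back each logical volume (a sketch):

# lvs -o +devices          # list LVs along with the PVs/extents backing them
# pvdisplay -m /dev/sdb1   # show the extent-to-LV mapping for one PV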

3) If I lose a drive in the LVM, do I lose all data, or just data physically on that disk?

As Peter has said the blocks appear as 0's if a PV goes missing. So you can potentially do data recovery on files that are on the other PVs. But I wouldn't rely on it. You normally see LVM used in conjunction with RAIDs for this reason.

3dinfluence

I don't know the answer to #2, so I'll leave that to someone else. I suspect "no", but I'm willing to be happily surprised.

1 is: you tell it, when you combine the physical volumes into a volume group.

3 is: it's effectively as if a huge chunk of your disk suddenly turned to badblocks. You can patch things back together with a new, empty drive to which you give the same UUID, and then run an fsck on any filesystems on logical volumes that went across the bad drive to hope you can salvage something.

And to the overall, unasked question: yeah, you probably don't really want to do that.

[Oct 08, 2019] Forward root email on Linux server

Oct 08, 2019 | www.reddit.com

Hi, generally I configure /etc/aliases to forward root messages to my work email address. I find this useful because sometimes I become aware of something wrong...

I create specific email filters on my MUA to put everything with "fail" in the subject into my ALERT subfolder, "update" or "upgrade" into my UPGRADE subfolder, and so on.

It is a bit annoying because, with more than 50 servers, there is a lot of noise anyway.

How do you manage that?

Thank you!
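For reference, a minimal sketch of the /etc/aliases approach mentioned above (the address is a placeholder); remember to run newaliases after editing so the change takes effect:

# grep '^root' /etc/aliases
root: sysadmin@example.com
# newaliases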

[Oct 02, 2019] raid5 - Can I recover a RAID 5 array if two drives have failed - Server Fault

Oct 02, 2019 | serverfault.com

I have a Dell 2600 with 6 drives configured in a RAID 5 on a PERC 4 controller. 2 drives failed at the same time, and according to what I know a RAID 5 is recoverable if 1 drive fails. I'm not sure if the fact I had six drives in the array might save my skin.

I bought 2 new drives and plugged them in, but no rebuild happened as I expected. Can anyone shed some light?

Regardless of how many drives are in use, a RAID 5 array only allows for recovery in the event that just one disk at a time fails.

What 3molo says is a fair point but even so, not quite correct I think - if two disks in a RAID5 array fail at the exact same time then a hot spare won't help, because a hot spare replaces one of the failed disks and rebuilds the array without any intervention, and a rebuild isn't possible if more than one disk fails.

For now, I am sorry to say that your options for recovering this data are going to involve restoring a backup.

For the future you may want to consider one of the more robust forms of RAID (not sure what options a PERC4 supports) such as RAID 6 or a nested RAID array . Once you get above a certain amount of disks in an array you reach the point where the chance that more than one of them can fail before a replacement is installed and rebuilt becomes unacceptably high. share Share a link to this answer Copy link | improve this answer edited Jun 8 '12 at 13:37 longneck 21.1k 3 3 gold badges 43 43 silver badges 76 76 bronze badges answered Sep 21 '10 at 14:43 Rob Moir Rob Moir 30k 4 4 gold badges 53 53 silver badges 84 84 bronze badges

You can try to force one or both of the failed disks to be online from the BIOS interface of the controller. Then check that the data and the file system are consistent.

Mircea Vutcovici

The direct answer is "No"; the indirect one is "It depends". Mainly it depends on whether the disks are partially or completely dead. If they are only partially broken, you can give it a try: I would copy both failed disks (using a tool like ddrescue), then try to run the bunch of disks using Linux SoftRAID, re-trying with the proper order of disks and stripe size in read-only mode and counting CRC mismatches. It's quite doable, I should say -- there is a text in Russian that mentions the recovery of a 12-disk RAID 50 this way, for example.

poige

It is possible if the RAID had one spare drive and one of the failed disks died before the second one. In that case, you just need to try to reconstruct the array virtually with third-party software. I found a small article about this process on this page: http://www.angeldatarecovery.com/raid5-data-recovery/

And if you really need one of the dead drives, you can send it to a recovery shop. With those disk images, you can reconstruct the RAID properly, with good chances.

[Sep 23, 2019] How to recover deleted files with foremost on Linux - LinuxConfig.org

Sep 23, 2019 | linuxconfig.org
In this article we will talk about foremost, a very useful open source forensic utility which is able to recover deleted files using the technique called data carving. The utility was originally developed by the United States Air Force Office of Special Investigations, and is able to recover several file types (support for specific file types can be added by the user, via the configuration file). The program can also work on partition images produced by dd or similar tools.

In this tutorial you will learn how to install foremost, how to recover deleted files from a partition or a disk image, and how to add support for additional file types via the configuration file.

Foremost is a forensic data recovery program for Linux used to recover files using their headers, footers, and data structures through a process known as file carving.

Software Requirements and Conventions Used
Software Requirements and Linux Command Line Conventions

Category      Requirements, Conventions or Software Version Used
System        Distribution-independent
Software      The "foremost" program
Other         Familiarity with the command line interface
Conventions   # - requires given linux commands to be executed with root privileges, either directly as the root user or by use of the sudo command
              $ - requires given linux commands to be executed as a regular non-privileged user
Installation

Since foremost is already present in all the major Linux distributions repositories, installing it is a very easy task. All we have to do is to use our favorite distribution package manager. On Debian and Ubuntu, we can use apt :

$ sudo apt install foremost

In recent versions of Fedora, we use the dnf package manager (the successor of yum) to install packages. The name of the package is the same:

$ sudo dnf install foremost

If we are using ArchLinux, we can use pacman to install foremost . The program can be found in the distribution "community" repository:

$ sudo pacman -S foremost



Basic usage
WARNING
No matter which file recovery tool or process you are going to use to recover your files, before you begin it is recommended to perform a low-level hard drive or partition backup, so that you avoid an accidental data overwrite. In this case you may re-try recovering your files even after an unsuccessful recovery attempt. Check the following dd command guide on how to perform a hard drive or partition low-level backup.

The foremost utility tries to recover and reconstruct files on the basis of their headers, footers and data structures, without relying on filesystem metadata. This forensic technique is known as file carving. The program supports various types of files, such as jpg, gif, png, bmp, avi, mpg, wav, pdf, doc, zip, and rar.

The most basic way to use foremost is by providing a source to scan for deleted files (it can be either a partition or an image file, such as those generated with dd). Let's see an example. Imagine we want to scan the /dev/sdb1 partition: before we begin, a very important thing to remember is to never store retrieved data on the same partition we are retrieving the data from, to avoid overwriting deleted files still present on the block device. The command we would run is:

$ sudo foremost -i /dev/sdb1

By default, the program creates a directory called output inside the directory we launched it from and uses it as destination. Inside this directory, a subdirectory for each supported file type we are attempting to retrieve is created. Each directory will hold the corresponding file type obtained from the data carving process:

output
├── audit.txt
├── avi
├── bmp
├── dll
├── doc
├── docx
├── exe
├── gif
├── htm
├── jar
├── jpg
├── mbd
├── mov
├── mp4
├── mpg
├── ole
├── pdf
├── png  
├── ppt
├── pptx
├── rar
├── rif
├── sdw
├── sx
├── sxc
├── sxi
├── sxw
├── vis
├── wav
├── wmv
├── xls
├── xlsx
└── zip

When foremost completes its job, empty directories are removed. Only the ones containing files are left on the filesystem: this lets us immediately know what types of files were successfully retrieved. By default the program tries to retrieve all the supported file types; to restrict our search, however, we can use the -t option and provide a list of the file types we want to retrieve, separated by a comma. In the example below, we restrict the search only to gif and pdf files:

$ sudo foremost -t gif,pdf -i /dev/sdb1

https://www.youtube.com/embed/58S2wlsJNvo

In this video we will test the forensic data recovery program Foremost to recover a single png file from /dev/sdb1 partition formatted with the EXT4 filesystem.



Specifying an alternative destination

As we already said, if a destination is not explicitly declared, foremost creates an output directory inside our cwd. What if we want to specify an alternative path? All we have to do is to use the -o option and provide said path as argument. If the specified directory doesn't exist, it is created; if it exists but is not empty, the program complains:

ERROR: /home/egdoc/data is not empty
        Please specify another directory or run with -T.

To solve the problem, as suggested by the program itself, we can either use another directory or re-launch the command with the -T option. If we use the -T option, the output directory specified with the -o option is timestamped. This makes it possible to run the program multiple times with the same destination. In our case the directory that would be used to store the retrieved files would be:

/home/egdoc/data_Thu_Sep_12_16_32_38_2019
The configuration file

The foremost configuration file can be used to specify file formats not natively supported by the program. Inside the file we can find several commented examples showing the syntax that should be used to accomplish the task. Here is an example involving the png type (the lines are commented since the file type is supported by default):

# PNG   (used in web pages)
#       (NOTE THIS FORMAT HAS A BUILTIN EXTRACTION FUNCTION)
#       png     y       200000  \x50\x4e\x47?   \xff\xfc\xfd\xfe

The fields needed to add support for a file type are, from left to right, separated by a tab character: the file extension (png in this case), whether the header and footer are case sensitive (y), the maximum file size in bytes (200000), the header (\x50\x4e\x47?) and the footer (\xff\xfc\xfd\xfe). Only the footer is optional and can be omitted.

If the path of the configuration file is not explicitly provided with the -c option, a file named foremost.conf in the current working directory is used, if present. If it is not found, the default configuration file, /etc/foremost.conf, is used instead.
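
For instance, a custom configuration file can be passed explicitly with the -c option; a minimal sketch (the device and paths here are placeholders):

$ sudo foremost -c ./myformats.conf -i /dev/sdb1 -o $HOME/recovered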

Adding the support for a file type

By reading the examples provided in the configuration file, we can easily add support for a new file type. In this example we will add support for flac audio files. FLAC (Free Lossless Audio Codec) is a non-proprietary lossless audio format which is able to provide compressed audio without quality loss. First of all, we know that the header of this file type in hexadecimal form is 66 4C 61 43 00 00 00 22 (fLaC in ASCII), and we can verify it by using a program like hexdump on a flac file:

$ hexdump -C blind_guardian_war_of_wrath.flac | head
00000000  66 4c 61 43 00 00 00 22  12 00 12 00 00 00 0e 00  |fLaC..."........|
00000010  36 f2 0a c4 42 f0 00 4d  04 60 6d 0b 64 36 d7 bd  |6...B..M.`m.d6..|
00000020  3e 4c 0d 8b c1 46 b6 fe  cd 42 04 00 03 db 20 00  |>L...F...B.... .|
00000030  00 00 72 65 66 65 72 65  6e 63 65 20 6c 69 62 46  |..reference libF|
00000040  4c 41 43 20 31 2e 33 2e  31 20 32 30 31 34 31 31  |LAC 1.3.1 201411|
00000050  32 35 21 00 00 00 12 00  00 00 54 49 54 4c 45 3d  |25!.......TITLE=|
00000060  57 61 72 20 6f 66 20 57  72 61 74 68 11 00 00 00  |War of Wrath....|
00000070  52 45 4c 45 41 53 45 43  4f 55 4e 54 52 59 3d 44  |RELEASECOUNTRY=D|
00000080  45 0c 00 00 00 54 4f 54  41 4c 44 49 53 43 53 3d  |E....TOTALDISCS=|
00000090  32 0c 00 00 00 4c 41 42  45 4c 3d 56 69 72 67 69  |2....LABEL=Virgi|

As you can see, the file signature is indeed what we expected. Here we will assume a maximum file size of 30 MB (30000000 bytes). Let's add the entry to the file:

flac    y       30000000    \x66\x4c\x61\x43\x00\x00\x00\x22

The footer signature is optional, so here we didn't provide it. The program should now be able to recover deleted flac files. Let's verify it. To test that everything works as expected, I previously placed, and then removed, a flac file from the /dev/sdb1 partition, then ran:

$ sudo foremost -i /dev/sdb1 -o $HOME/Documents/output

As expected, the program was able to retrieve the deleted flac file (it was, on purpose, the only file on the device), although it renamed it with a random string. The original filename cannot be retrieved because, as we know, file metadata is stored in the filesystem, not in the file itself:

/home/egdoc/Documents
└── output
    ├── audit.txt
    └── flac
        └── 00020482.flac



The audit.txt file contains information about the actions performed by the program, in this case:

Foremost version 1.5.7 by Jesse Kornblum, Kris
Kendall, and Nick Mikus
Audit File

Foremost started at Thu Sep 12 23:47:04 2019
Invocation: foremost -i /dev/sdb1 -o /home/egdoc/Documents/output
Output directory: /home/egdoc/Documents/output
Configuration file: /etc/foremost.conf
------------------------------------------------------------------
File: /dev/sdb1
Start: Thu Sep 12 23:47:04 2019
Length: 200 MB (209715200 bytes)

Num      Name (bs=512)         Size      File Offset     Comment

0:      00020482.flac         28 MB        10486784
Finish: Thu Sep 12 23:47:04 2019

1 FILES EXTRACTED

flac:= 1
------------------------------------------------------------------

Foremost finished at Thu Sep 12 23:47:04 2019
Conclusion

In this article we learned how to use foremost, a forensic program able to retrieve deleted files of various types. We learned that the program works by using a technique called file carving, and relies on file signatures to achieve its goal. We saw an example of the program's usage and we also learned how to add support for a specific file type using the syntax illustrated in the configuration file. For more information about the program, please consult its manual page.

[Sep 18, 2019] Delete Files That Have Not Been Accessed For A Given Time On Linux

Sep 18, 2019 | www.ostechnix.com

Delete Files That Have Not Been Accessed For A Given Time On Linux

by sk · Published September 16, 2019 · Updated September 17, 2019

We already have covered how to manually find and delete files older than X days using the "find" command in Linux. Today we will do the same, but only if the files have not been accessed for a certain period of time. Say hello to "Tmpwatch", a command line utility to recursively delete files that haven't been accessed for a given time. It will also remove empty directories.

By default, Tmpwatch decides which files and directories should be deleted based on their atime (access time). You can, of course, change this behaviour and use ctime (inode change time) or mtime (modification time) instead. Typically, Tmpwatch is used to clean up the contents of the /tmp directory and other unused or unwanted stuff like old log files.
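
For instance, a sketch of selecting files by inode change time instead of access time (the path and threshold are only examples; check tmpwatch(8) to confirm the option name on your distribution):

tmpwatch -c 30d /tmp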

An important warning!!

Before you start using this tool, you must know that Tmpwatch will delete files and directories recursively based on the given criteria. Do not run tmpwatch on / (the root directory). This directory contains important files which are required to keep the Linux system running. If you're not careful, tmpwatch will delete any important system files and directories that match the given criteria anywhere under the root directory. There is no safeguard mechanism built into Tmpwatch to prevent you from running it on the root directory, and there is no way to undo the operation. You have been warned!

Install Tmpwatch

Tmpwatch is available in the default repositories of most Linux distributions.

On Fedora, you can install it using command:

$ sudo dnf install tmpwatch

On CentOS:

$ sudo yum install tmpwatch

On openSUSE:

$ sudo zypper install tmpwatch

On Debian and its derivatives like Ubuntu, Tmpwatch is available under a different name: Tmpreaper. Tmpreaper is mostly based on tmpwatch 1.2/1.4 by Erik Troan from Red Hat. Nowadays, tmpreaper is maintained for Debian by Paul Slootman.

To install tmpreaper on Debian, Ubuntu, Linux Mint, run:

$ sudo apt install tmpreaper

Delete Files That Have Not Been Accessed For A Given Time Using Tmpwatch / Tmpreaper

Usage of Tmpwatch and Tmpreaper is almost the same. If you're on a Debian-based system, replace "tmpwatch" with "tmpreaper" in the following examples.

Delete files which have not been accessed for more than X days

To delete files which have not been accessed for more than 10 days, run:

tmpwatch 10d /var/log/

The above command will delete all the files and empty directories in the /var/log/ folder which have not been accessed for more than 10 days.

Delete files which have not been modified for more than X days

Like I already said, Tmpwatch deletes files based on their access time. You can also delete files based on their modification time (mtime) using the -m option.

For example, the following command will delete files in the /var/log/ folder which have not been modified for the past 10 days:

tmpwatch -m 10d /var/log/

Here, -m tells Tmpwatch to look at the modification time, and d is the <time_spec> suffix. The <time_spec> parameter defines the age threshold for removing files; the number can be suffixed with d (days), h (hours), m (minutes) or s (seconds).

Hours is the default.

For instance, to delete files which have not been modified for the past 10 hours, simply run:

tmpwatch -m 10 /var/log/

As you might have noticed, I haven't used a time_spec suffix in the above command: h (hours) is the default, so we don't have to mention it when deleting files that haven't been modified for the past X hours.

Delete Symlinks

If you want to delete symlinks too, not just regular files and directories, use the -s option like below:

tmpwatch -s 10 /var/log/

Delete all files

To remove all file types, not just regular files, symlinks, and directories, use the -a option.

tmpwatch -a 10 /var/log/

The above command will delete all types of files including regular files, symlinks, and directories in the /var/log/ folder.

Exclude directories from deletion

Sometimes, you might want to delete files but not directories. If so, the command would be:

tmpwatch -am 10 --nodirs /var/log/

The above command will delete all files, except directories, which have not been modified for the past 10 hours.

Perform a test run without actually deleting anything

Sometimes, you might want to view which files are actually going to be deleted, which is helpful when running Tmpwatch on an important directory. If so, run Tmpwatch in test mode with the -t option.

tmpwatch -t 30 /var/log/

Sample output from CentOS 7 server:

removing file /var/log/wtmp
removing directory /var/log/ppp if empty
removing directory /var/log/tuned if empty
removing directory /var/log/anaconda if empty
removing file /var/log/dmesg.old
removing file /var/log/boot.log
removing file /var/log/dnf.librepo.log

On Debian-based systems, you will see an output like below.

$ tmpreaper -t 30 /var/log/
(PID 1803) Pretending to clean up directory `/var/log/'.
(PID 1804) Pretending to clean up directory `apache2'.
Pretending to remove file `apache2/error.log'.
Pretending to remove file `apache2/access.log'.
Pretending to remove file `apache2/other_vhosts_access.log'.
(PID 1804) Back from recursing down `apache2'.
(PID 1804) Pretending to clean up directory `dbconfig-common'.
Pretending to remove file `dbconfig-common/dbc.log'.
(PID 1804) Back from recursing down `dbconfig-common'.
(PID 1804) Pretending to clean up directory `dist-upgrade'.
(PID 1804) Back from recursing down `dist-upgrade'.
(PID 1804) Pretending to clean up directory `lxd'.
(PID 1804) Back from recursing down `lxd'.
Pretending to remove file `/var/log//cloud-init.log'.
(PID 1804) Pretending to clean up directory `landscape'.
Pretending to remove file `landscape/sysinfo.log'.
(PID 1804) Back from recursing down `landscape'.
[...]

This only simulates the operation and doesn't actually delete anything: Tmpwatch performs a dry run and shows you which files would be deleted.

Force file deletion

If you want to forcibly delete files, use the -f option.

tmpwatch -f 10h /var/log/

Normally, files owned by the current user that are not writable by them are not removed. The -f option deletes them as well.

Skip certain files from deletion

Tmpreaper has an option to exclude files from deletion. This is useful when you want to keep certain types of files and delete everything else. If so, use the --protect option like below:

tmpreaper --protect '*.txt' -t 10h /var/log/

This command will exclude all files that have a .txt extension from deletion.

Sample output:

(PID 2623) Pretending to clean up directory `/var/log/'.
(PID 2624) Pretending to clean up directory `apache2'.
Pretending to remove file `apache2/error.log'.
Pretending to remove file `apache2/access.log'.
Pretending to remove file `apache2/other_vhosts_access.log'.
(PID 2624) Back from recursing down `apache2'.
(PID 2624) Pretending to clean up directory `dbconfig-common'.
Pretending to remove file `dbconfig-common/dbc.log'.
(PID 2624) Back from recursing down `dbconfig-common'.
(PID 2624) Pretending to clean up directory `dist-upgrade'.
(PID 2624) Back from recursing down `dist-upgrade'.
Pretending to remove empty directory `dist-upgrade'.
Entry matching `--protect' pattern skipped. `ostechnix.txt'
(PID 2624) Pretending to clean up directory `lxd'.

As you can see, Tmpreaper skips the *.txt files from deletion.

This option is not available in Tmpwatch, by the way.

Setting up cron job to delete files periodically

You may not want to manually run Tmpwatch/Tmpreaper all the time. In that case, you can set up a cron job to automate the cleanup.

When you install Tmpreaper, it creates a daily cron job (/etc/cron.daily/tmpreaper). This job reads the options from the /etc/tmpreaper.conf file and acts accordingly. Open the file and change the values as per your requirement. By default, Tmpreaper deletes files that are 7 days old. You can change this by modifying the value "TMPREAPER_TIME=7d" in the tmpreaper.conf file.

If you use "Tmpwatch", you need to manually create cron job and put the cron entry in it.

# crontab -e

Add the following line:

0 1 * * * /usr/sbin/tmpwatch 30d /var/log/

As per the above cron job, Tmpwatch will run every day at 1 AM and delete files which have not been accessed for 30 days.

For more details about setting up cron jobs, refer to the following link.

Again, please be careful while using the Tmpwatch/Tmpreaper commands. Double-check the path before running them to avoid data loss.

For more details, refer man pages.

$ man tmpwatch

Or,

$ man tmpreaper

[Sep 16, 2019] Artistic Style - Index

Sep 16, 2019 | astyle.sourceforge.net

Artistic Style 3.1 A Free, Fast, and Small Automatic Formatter
for C, C++, C++/CLI, Objective‑C, C#, and Java Source Code

Project Page: http://astyle.sourceforge.net/
SourceForge: http://sourceforge.net/projects/astyle/

Artistic Style is a source code indenter, formatter, and beautifier for the C, C++, C++/CLI, Objective‑C, C# and Java programming languages.

When indenting source code, we as programmers have a tendency to use both spaces and tab characters to create the wanted indentation. Moreover, some editors by default insert spaces instead of tabs when pressing the tab key. Other editors (Emacs for example) have the ability to "pretty up" lines by automatically setting up the white space before the code on the line, possibly inserting spaces in code that up to now used only tabs for indentation.

The NUMBER of spaces for each tab character in the source code can change between editors (unless the user sets up the number to his liking...). One of the standard problems programmers face when moving from one editor to another is that code containing both spaces and tabs, which was perfectly indented, suddenly becomes a mess to look at. Even if you as a programmer take care to ONLY use spaces or tabs, looking at other people's source code can still be problematic.

To address this problem, Artistic Style was created – a filter written in C++ that automatically re-indents and re-formats C / C++ / Objective‑C / C++/CLI / C# / Java source files. It can be used from a command line, or it can be incorporated as a library in another program.
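
As a quick illustration, a typical command-line invocation might look like this (a sketch; the file name is a placeholder, and astyle typically keeps a backup copy of the original with an .orig suffix):

astyle --style=allman --indent=spaces=4 Example.cpp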

[Sep 16, 2019] Usage -- PrettyPrinter 0.18.0 documentation

Sep 16, 2019 | prettyprinter.readthedocs.io

Usage

Install the package with pip :

pip install prettyprinter

Then, instead of

from pprint import pprint

do

from prettyprinter import cpprint

for colored output. For colorless output, remove the c prefix from the function name:

from prettyprinter import pprint

[Sep 16, 2019] JavaScript code prettifier

Sep 16, 2019 | github.com


An embeddable script that makes source-code snippets in HTML prettier.

[Sep 16, 2019] Pretty-print for shell script

Sep 16, 2019 | stackoverflow.com

Benoit ,Oct 21, 2010 at 13:19

I'm looking for something similiar to indent but for (bash) scripts. Console only, no colorizing, etc.

Do you know of one ?

Jamie ,Sep 11, 2012 at 3:00

Vim can indent bash scripts. But not reformat them before indenting.
Backup your bash script, open it with vim, type gg=GZZ and indent will be corrected. (Note for the impatient: this overwrites the file, so be sure to do that backup!)

Though, some bugs with << (expecting EOF as first character on a line) e.g.

EDIT: ZZ not ZQ

Daniel Martí ,Apr 8, 2018 at 13:52

A bit late to the party, but it looks like shfmt could do the trick for you.

Brian Chrisman ,Sep 9 at 7:47

In bash I do this:
reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3 | sed -e "s/^\s\s\s\s//"
}

this eliminates comments and reindents the script "bash way".

If you have HEREDOCS in your script, they got ruined by the sed in the previous function.

So use:

reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3
}

But all your script will have a 4 spaces indentation.

Or you can do:

reindent () 
{ 
    rstr=$(mktemp -u "XXXXXXXXXX");
    source <(echo "Zibri () {";cat "$1"|sed -e "s/^\s\s\s\s/$rstr/"; echo "}");
    echo '#!/bin/bash';
    declare -f Zibri | head --lines=-1 | tail --lines=+3 | sed -e "s/^\s\s\s\s//;s/$rstr/    /"
}

which takes care also of heredocs.


Found this http://www.linux-kheops.com/doc/perl/perl-aubert/fmt.script .

Very nice; the only thing I took out is the [...]->test substitution.

[Sep 16, 2019] A command-line HTML pretty-printer Making messy HTML readable - Stack Overflow

Notable quotes:
"... Have a look at the HTML Tidy Project: http://www.html-tidy.org/ ..."
Sep 16, 2019 | stackoverflow.com

nisetama ,Aug 12 at 10:33

I'm looking for recommendations for HTML pretty printers which fulfill the following requirements:


Have a look at the HTML Tidy Project: http://www.html-tidy.org/

The granddaddy of HTML tools, with support for modern standards.

There used to be a fork called tidy-html5 which since became the official thing. Here is its GitHub repository .

Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to modern standards.

For your needs, here is the command line to call Tidy:

[Sep 13, 2019] How To Delete Files Older Or Newer Than N Days Using find (With Extra Examples) - Linux Uprising Blog

Sep 13, 2019 | www.linuxuprising.com

Only delete files matching .extension older than N days from a directory and all its subdirectories:

find /directory/path/ -type f -mtime +N -name '*.extension' -delete

You can add -maxdepth 1 to prevent the command from descending into subdirectories, so that it only deletes files and first-level directories:
find /directory/path/ -mindepth 1 -maxdepth 1 -mtime +N -delete

You may also use -ctime +N to match (and, in this example, delete) files whose status was last changed more than N days ago (the file attributes/metadata and/or the file content was modified), as opposed to -mtime, which only matches files based on when their content was last modified:
find /directory/path/ -mindepth 1 -ctime +N -delete

[Sep 12, 2019] 9 Best File Comparison and Difference (Diff) Tools for Linux

Sep 12, 2019 | www.tecmint.com

3. Kompare

Kompare is a diff GUI wrapper that allows users to view differences between files and also merge them.

Some of its features include:

  1. Supports multiple diff formats
  2. Supports comparison of directories
  3. Supports reading diff files
  4. Customizable interface
  5. Creating and applying patches to source files

Kompare Tool – Compare Two Files in Linux

Visit Homepage : https://www.kde.org/applications/development/kompare/

4. DiffMerge

DiffMerge is a cross-platform GUI application for comparing and merging files. It has two functionality engines, the Diff engine which shows the difference between two files, which supports intra-line highlighting and editing and a Merge engine which outputs the changed lines between three files.

It has got the following features:

  1. Supports directory comparison
  2. File browser integration
  3. Highly configurable

DiffMerge – Compare Files in Linux

Visit Homepage : https://sourcegear.com/diffmerge/

5. Meld – Diff Tool

Meld is a lightweight GUI diff and merge tool. It enables users to compare files, directories plus version controlled programs. Built specifically for developers, it comes with the following features:

  1. Two-way and three-way comparison of files and directories
  2. Update of file comparison as a users types more words
  3. Makes merges easier using auto-merge mode and actions on changed blocks
  4. Easy comparisons using visualizations
  5. Supports Git, Mercurial, Subversion, Bazaar plus many more

Meld – A Diff Tool to Compare File in Linux

Visit Homepage : http://meldmerge.org/

6. Diffuse – GUI Diff Tool

Diffuse is another popular, free, small and simple GUI diff and merge tool that you can use on Linux. Written in Python, It offers two major functionalities, that is: file comparison and version control, allowing file editing, merging of files and also output the difference between files.

You can view a comparison summary, select lines of text in files using a mouse pointer, match lines in adjacent files and edit different file. Other features include:

  1. Syntax highlighting
  2. Keyboard shortcuts for easy navigation
  3. Supports unlimited undo
  4. Unicode support
  5. Supports Git, CVS, Darcs, Mercurial, RCS, Subversion, SVK and Monotone

DiffUse – A Tool to Compare Text Files in Linux

Visit Homepage : http://diffuse.sourceforge.net/

7. XXdiff – Diff and Merge Tool

XXdiff is a free, powerful file and directory comparator and merge tool that runs on Unix like operating systems such as Linux, Solaris, HP/UX, IRIX, DEC Tru64. One limitation of XXdiff is its lack of support for unicode files and inline editing of diff files.

It has the following list of features:

  1. Shallow and recursive comparison of two, three file or two directories
  2. Horizontal difference highlighting
  3. Interactive merging of files and saving of resulting output
  4. Supports merge reviews/policing
  5. Supports external diff tools such as GNU diff, SIG diff, Cleareddiff and many more
  6. Extensible using scripts
  7. Fully customizable using resource file plus many other minor features

xxdiff Tool

Visit Homepage : http://furius.ca/xxdiff/

8. KDiff3 – Diff and Merge Tool

KDiff3 is yet another cool, cross-platform diff and merge tool from the KDE family. It works on all Unix-like platforms including Linux and Mac OS X, as well as Windows.

It can compare or merge two to three files or directories and has the following notable features:

  1. Indicates differences line by line and character by character
  2. Supports auto-merge
  3. In-built editor to deal with merge-conflicts
  4. Supports Unicode, UTF-8 and many other codecs
  5. Allows printing of differences
  6. Windows explorer integration support
  7. Also supports auto-detection via byte-order-mark "BOM"
  8. Supports manual alignment of lines
  9. Intuitive GUI and many more

KDiff3 Tool for Linux

Visit Homepage : http://kdiff3.sourceforge.net/

9. TkDiff

TkDiff is also a cross-platform, easy-to-use GUI wrapper for the Unix diff tool. It provides a side-by-side view of the differences between two input files. It can run on Linux, Windows and Mac OS X.

Additionally, it has some other exciting features including diff bookmarks, a graphical map of differences for easy and quick navigation plus many more.

Visit Homepage : https://sourceforge.net/projects/tkdiff/

Having read this review of some of the best file and directory comparison and merge tools, you probably want to try some of them out. These may not be the only diff tools available on Linux, but they are known to offer some of the best features. If you have tested other diff tools that you think deserve to be mentioned among the best, let us know.

[Sep 07, 2019] How to Debug Bash Scripts by Mike Ward

Sep 05, 2019 | linuxconfig.org

05 September 2019

... ... ... How to use other Bash options

The Bash options for debugging are turned off by default, but once they are turned on by using the set command, they stay on until explicitly turned off. If you are not sure which options are enabled, you can examine the $- variable to see the current state of all the options.

$ echo $-
himBHs
$ set -xv && echo $-
himvxBHs

There is another useful switch we can use to help us find variables referenced without having any value set. This is the -u switch, and just like -x and -v it can also be used on the command line, as we see in the following example:

Setting the -u option at the command line

We mistakenly assigned a value of 7 to the variable called "level", then tried to echo a variable named "score", which simply resulted in printing nothing at all to the screen. Absolutely no debug information was given. Setting our -u switch allows us to see a specific error message, "score: unbound variable", that indicates exactly what went wrong.

We can use those options in short Bash scripts to give us debug information to identify problems that do not otherwise trigger feedback from the Bash interpreter. Let's walk through a couple of examples.

#!/bin/bash

read -p "Path to be added: " $path

if [ "$path" = "/home/mike/bin" ]; then
        echo $path >> $PATH
        echo "new path: $PATH"
else
        echo "did not modify PATH"
fi
Using the -x option when running your Bash script

In the example above we run the addpath script normally and it simply does not modify our PATH . It does not give us any indication of why or clues to mistakes made. Running it again using the -x option clearly shows us that the left side of our comparison is an empty string. $path is an empty string because we accidentally put a dollar sign in front of "path" in our read statement. Sometimes we look right at a mistake like this and it doesn't look wrong until we get a clue and think, "Why is $path evaluated to an empty string?"

Looking at this next example, we also get no indication of an error from the interpreter. We only get one value printed per line instead of two. This is not an error that halts execution of the script, so we're left to simply wonder without being given any clues. Using the -u switch, we immediately get a notification that our variable j is not bound to a value. So these are real time savers when we make mistakes that do not result in actual errors from the Bash interpreter's point of view.

#!/bin/bash

for i in 1 2 3
do
        echo $i $j
done
Using the -u option when running your script from the command line

Now surely you are thinking that sounds fine, but we seldom need help debugging mistakes made in one-liners at the command line or in short scripts like these. We typically struggle with debugging when we deal with longer and more complicated scripts, and we rarely need to set these options and leave them set while we run multiple scripts. Setting -xv options and then running a more complex script will often add confusion by doubling or tripling the amount of output generated.

Fortunately we can use these options in a more precise way by placing them inside our scripts. Instead of explicitly invoking a Bash shell with an option from the command line, we can set an option by adding it to the shebang line instead.

#!/bin/bash -x

This will set the -x option for the entire file or until it is unset during the script execution, allowing you to simply run the script by typing the filename instead of passing it to Bash as a parameter. A long script or one that has a lot of output will still become unwieldy using this technique however, so let's look at a more specific way to use options.




For a more targeted approach, surround only the suspicious blocks of code with the options you want. This approach is great for scripts that generate menus or detailed output, and it is accomplished by using the set keyword with plus or minus once again.

#!/bin/bash

read -p "Path to be added: " $path

set -xv
if [ "$path" = "/home/mike/bin" ]; then
        echo $path >> $PATH
        echo "new path: $PATH"
else
        echo "did not modify PATH"
fi
set +xv
Wrapping options around a block of code in your script

We surrounded only the blocks of code we suspect in order to reduce the output, making our task easier in the process. Notice we turn on our options only for the code block containing our if-then-else statement, then turn off the option(s) at the end of the suspect block. We can turn these options on and off multiple times in a single script if we can't narrow down the suspicious areas, or if we want to evaluate the state of variables at various points as we progress through the script. There is no need to turn off an option if we want it to continue for the remainder of the script execution.

For completeness' sake we should also mention that there are debuggers written by third parties that will allow us to step through the code execution line by line. You might want to investigate these tools, but most people find that they are not actually needed.

As seasoned programmers will suggest, if your code is too complex to isolate suspicious blocks with these options then the real problem is that the code should be refactored. Overly complex code means bugs can be difficult to detect and maintenance can be time consuming and costly.

One final thing to mention regarding Bash debugging options is that a file globbing option also exists and is set with -f. Setting this option turns off globbing (expansion of wildcards to generate file names) while it is enabled. The -f option can be used as a switch at the command line with bash, after the shebang in a file or, as in this example, to surround a block of code.

#!/bin/bash

echo "ignore fileglobbing option turned off"
ls *

echo "ignore file globbing option set"
set -f
ls *
set +f
Using the -f option to turn off file globbing

How to use trap to help debug

There are more involved techniques worth considering if your scripts are complicated, including using an assert function as mentioned earlier. One such method to keep in mind is the use of trap. Shell scripts allow us to trap signals and do something at that point.

A simple but useful example you can use in your Bash scripts is to trap on EXIT .

#!/bin/bash

trap 'echo score is $score, status is $status' EXIT

if [ -z "$1" ]; then
        status="default"
else
        status=$1
fi

score=0
if [ ${USER} = 'superman' ]; then
        score=99
elif [ $# -gt 1 ]; then
        score=$2
fi
Using trap EXIT to help debug your script



As you can see just dumping the current values of variables to the screen can be useful to show where your logic is failing. The EXIT signal obviously does not need an explicit exit statement to be generated; in this case the echo statement is executed when the end of the script is reached.

Another useful trap to use with Bash scripts is DEBUG . This happens after every statement, so it can be used as a brute force way to show the values of variables at each step in the script execution.

#!/bin/bash

trap 'echo "line ${LINENO}: score is $score"' DEBUG

score=0

if [ "${USER}" = "mike" ]; then
        let "score += 1"
fi

let "score += 1"

if [ "" = "7" ]; then
        score=7
fi
exit 0
Using trap DEBUG to help debug your script

Conclusion

When you notice your Bash script not behaving as expected and the reason is not clear to you, consider what information would be useful to help you identify the cause, then use the most comfortable tools available to help you pinpoint the issue. The xtrace option -x is easy to use and probably the most useful of the options presented here, so consider trying it out the next time you're faced with a script that's not doing what you thought it would.

[Sep 06, 2019] Using Case Insensitive Matches with Bash Case Statements by Steven Vona

Jun 30, 2019 | www.putorius.net

If you want to match the pattern regardless of its case (capital letters or lowercase letters) you can set the nocasematch shell option with the shopt builtin. You can do this as the first line of your script. Since the script runs in a subshell, it won't affect your normal environment.

#!/bin/bash
 shopt -s nocasematch
 read -p "Name a Star Trek character: " CHAR
 case $CHAR in
   "Seven of Nine" | Neelix | Chokotay | Tuvok | Janeway )
       echo "$CHAR was in Star Trek Voyager"
       ;;&
   Archer | Phlox | Tpol | Tucker )
       echo "$CHAR was in Star Trek Enterprise"
       ;;&
   Odo | Sisko | Dax | Worf | Quark )
       echo "$CHAR was in Star Trek Deep Space Nine"
       ;;&
   Worf | Data | Riker | Picard )
       echo "$CHAR was in Star Trek The Next Generation" &&  echo "/etc/redhat-release"
       ;;
   *) echo "$CHAR is not in this script." 
       ;;
 esac

[Sep 04, 2019] Exec - Process Replacement Redirection in Bash by Steven Vona

Sep 02, 2019 | www.putorius.net

The Linux exec command is a bash builtin and a very interesting utility. It is not something most people who are new to Linux know. Most seasoned admins understand it but only use it occasionally. If you are a developer, programmer or DevOps engineer, it is probably something you use more often. Let's take a deep dive into the builtin exec command, what it does and how to use it.


Basics of the Sub-Shell

In order to understand the exec command, you need a fundamental understanding of how sub-shells work.

... ... ...

What the Exec Command Does

In its most basic function, the exec command changes the default behavior of creating a sub-shell to run a command. If you run exec followed by a command, that command will REPLACE the original process; it will NOT create a sub-shell.

An additional feature of the exec command is redirection and manipulation of file descriptors. Explaining redirection and file descriptors is outside the scope of this tutorial. If these are new to you, please read "Linux IO, Standard Streams and Redirection" to get acquainted with these terms and functions.

In the following sections we will expand on both of these functions and try to demonstrate how to use them.

How to Use the Exec Command with Examples

Let's look at some examples of how to use the exec command and its options.

Basic Exec Command Usage – Replacement of Process

If you call exec and supply a command without any options, it simply replaces the shell with that command.

Let's run an experiment. First, I ran the ps command to find the process id of my second terminal window. In this case it was 17524. I then ran "exec tail" in that second terminal and checked the ps command again. If you look at the screenshot below, you will see the tail process replaced the bash process (same process ID).

Linux terminal screenshot showing the exec command replacing a parent process instead of creating a sub-shell.
Screenshot 3

Since the tail command replaced the bash shell process, the shell will close when the tail command terminates.
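
You can reproduce the experiment with a sketch like this (the PID is whatever your own shell reports):

$ echo $$                      # note this shell's PID
$ exec tail -f /etc/hostname   # tail replaces the shell
# ...then, from another terminal:
$ ps -p <PID>                  # the same PID now belongs to tail, not bash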

Exec Command Options

If the -l option is supplied, exec adds a dash at the beginning of the first (zeroth) argument given. So if we ran the following command:

exec -l tail -f /etc/redhat-release

It would produce the following output in the process list. Notice the highlighted dash in the CMD column.

The -c option causes the supplied command to run with an empty environment: environment variables like PATH are cleared before the command is run. Let's try an experiment. We know that the printenv command prints all the settings for a user's environment. So here we will open a new bash process, run the printenv command to show we have some variables set, then run printenv again, but this time with exec -c.

animated gif showing the exec command output with the -c option supplied.

In the example above you can see that an empty environment is used when using exec with the -c option. This is why there was no output from the printenv command when it was run with exec.
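
The sequence from the animation can be reproduced with a sketch like this:

$ bash                 # start a disposable sub-shell
$ printenv | head -3   # the environment is populated as usual
$ exec -c printenv     # replace the sub-shell; printenv sees an empty
                       # environment, prints nothing, and the sub-shell is gone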

The last option, -a [name], will pass name as the zeroth argument to the command. The command will still run as expected, but the name of the process will change. In this next example we opened a second terminal and ran the following command:

exec -a PUTORIUS tail -f /etc/redhat-release

Here is the process list showing the results of the above command:

Linux terminal screenshot showing the exec command using the -a option to replace the name of the first argument
Screenshot 5

As you can see, exec passed PUTORIUS as the zeroth argument (the process name) to the command, therefore it shows in the process list with that name.

Using the Exec Command for Redirection & File Descriptor Manipulation

The exec command is often used for redirection. When a file descriptor is redirected with exec, it affects the current shell. The redirection will exist for the life of the shell or until it is explicitly stopped.

If no command is specified, redirections may be used to affect the current shell environment.

– Bash Manual

Here are some examples of how to use exec for redirection and manipulating file descriptors. As we stated above, a deep dive into redirection and file descriptors is outside the scope of this tutorial. Please read " Linux IO, Standard Streams and Redirection " for a good primer and see the resources section for more information.

Redirect all standard output (STDOUT) to a file:
exec >file

In the example animation below, we use exec to redirect all standard output to a file. We then enter some commands that should generate some output. We then use exec to redirect STDOUT to the /dev/tty to restore standard output to the terminal. This effectively stops the redirection. Using the cat command we can see that the file contains all the redirected output.

Screenshot of Linux terminal using exec to redirect all standard output to a file
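
In plain commands, the sequence from that animation looks roughly like this (a sketch; the file name is arbitrary):

exec >captured.txt          # STDOUT of the current shell now goes to the file
echo "this line lands in captured.txt"
exec >/dev/tty              # point STDOUT back at the terminal, ending the redirection
cat captured.txt
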
Open a file as file descriptor 6 for writing:
exec 6> file2write
Open file as file descriptor 8 for reading:
exec 8< file2read
Copy file descriptor 5 to file descriptor 7:
exec 7<&5
Close file descriptor 8:
exec 8<&-
Conclusion

In this article we covered the basics of the exec command. We discussed how to use it for process replacement, redirection and file descriptor manipulation.

In the past I have seen exec used in some interesting ways. It is often used as a wrapper script for starting other binaries. Using process replacement you can call a binary and when it takes over there is no trace of the original wrapper script in the process table or memory. I have also seen many System Administrators use exec when transferring work from one script to another. If you call a script inside of another script the original process stays open as a parent. You can use exec to replace that original script.
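
A minimal sketch of that wrapper pattern (the paths and names are hypothetical):

#!/bin/bash
# hypothetical launcher: prepare the environment, then hand control over
export APP_HOME=/opt/myapp
cd "$APP_HOME" || exit 1
exec "$APP_HOME/bin/server" "$@"   # replaces this script; no wrapper process remains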

I am sure there are people out there using exec in some interesting ways. I would love to hear your experiences with exec. Please feel free to leave a comment below with anything on your mind.

Resources

[Sep 03, 2019] bash - How to convert strings like 19-FEB-12 to epoch date in UNIX - Stack Overflow

Feb 11, 2013 | stackoverflow.com


hellish ,Feb 11, 2013 at 3:45

In UNIX how to convert to epoch milliseconds date strings like:
19-FEB-12
16-FEB-12
05-AUG-09

I need this to compare these dates with the current time on the server.


To convert a date to seconds since the epoch:
date --date="19-FEB-12" +%s

Current epoch:

date +%s

So, since your dates are in the past:

NOW=`date +%s`
THEN=`date --date="19-FEB-12" +%s`

let DIFF=$NOW-$THEN
echo "The difference is: $DIFF"

Using BSD's date command, you would need

$ date -j -f "%d-%B-%y" 19-FEB-12 +%s

Differences from GNU date :

  1. -j prevents date from trying to set the clock
  2. The input format must be explicitly set with -f
  3. The input date is a regular argument, not an option (viz. -d )
  4. When no time is specified with the date, use the current time instead of midnight.

[Sep 03, 2019] Linux - UNIX Convert Epoch Seconds To the Current Time - nixCraft

Sep 03, 2019 | www.cyberciti.biz

Print Current UNIX Time

Type the following command to display the seconds since the epoch:

date +%s


Sample outputs:
1268727836

Convert Epoch To Current Time

Type the command:

date -d @Epoch
date -d @1268727836
date -d "1970-01-01 1268727836 sec GMT"


Sample outputs:

Tue Mar 16 13:53:56 IST 2010

Please note that the @ feature only works with recent versions of date (GNU coreutils v5.3.0+). To convert a number of seconds back to a more readable form, use a command like this:

date -d @1268727836 +"%d-%m-%Y %T %z"

date -d @1268727836 +"%d-%m-%Y %T %z"

Sample outputs:

16-03-2010 13:53:56 +0530

[Sep 03, 2019] command line - How do I convert an epoch timestamp to a human readable format on the cli - Unix Linux Stack Exchange

Sep 03, 2019 | unix.stackexchange.com

Gilles ,Oct 11, 2010 at 18:14

date -d @1190000000

Replace 1190000000 with your epoch.

Stefan Lasiewski ,Oct 11, 2010 at 18:04

$ echo 1190000000 | perl -pe 's/(\d+)/localtime($1)/e' 
Sun Sep 16 20:33:20 2007

This can come in handy for those applications which use epoch time in the logfiles:

$ tail -f /var/log/nagios/nagios.log | perl -pe 's/(\d+)/localtime($1)/e'
[Thu May 13 10:15:46 2010] EXTERNAL COMMAND: PROCESS_SERVICE_CHECK_RESULT;HOSTA;check_raid;0;check_raid.pl: OK (Unit 0 on Controller 0 is OK)

Stéphane Chazelas ,Jul 31, 2015 at 20:24

With bash-4.2 or above:
printf '%(%F %T)T\n' 1234567890

(where %F %T is the strftime() -type format)

That syntax is inspired from ksh93 .

In ksh93 however, the argument is taken as a date expression where various and hardly documented formats are supported.

For a Unix epoch time, the syntax in ksh93 is:

printf '%(%F %T)T\n' '#1234567890'

ksh93 however seems to use its own algorithm for the timezone and can get it wrong. For instance, in Britain, it was summer time all year in 1970, but:

$ TZ=Europe/London bash -c 'printf "%(%c)T\n" 0'
Thu 01 Jan 1970 01:00:00 BST
$ TZ=Europe/London ksh93 -c 'printf "%(%c)T\n" "#0"'
Thu Jan  1 00:00:00 1970

DarkHeart ,Jul 28, 2014 at 3:56

Custom format with GNU date :
date -d @1234567890 +'%Y-%m-%d %H:%M:%S'

Or with GNU awk :

awk 'BEGIN { print strftime("%Y-%m-%d %H:%M:%S", 1234567890); }'

Linked SO question: https://stackoverflow.com/questions/3249827/convert-from-unixtime-at-command-line


The two I frequently use are:
$ perl -leprint\ scalar\ localtime\ 1234567890
Sat Feb 14 00:31:30 2009

[Sep 03, 2019] Time conversion using Bash Vanstechelman.eu

Sep 03, 2019 | www.vanstechelman.eu

Time conversion using Bash

This article shows how you can obtain the UNIX epoch time (number of seconds since 1970-01-01 00:00:00 UTC) using the Linux bash "date" command. It also shows how you can convert a UNIX epoch time to a human readable time.

Obtain UNIX epoch time using bash
Obtaining the UNIX epoch time using bash is easy. Use the built-in date command and instruct it to output the number of seconds since 1970-01-01 00:00:00 UTC. You can do this by passing a format string as parameter to the date command. The format string for UNIX epoch time is '%s'.

lode@srv-debian6:~$ date "+%s"
1234567890

To convert a specific date and time into UNIX epoch time, use the -d parameter. The next example shows how to convert the timestamp "February 20th, 2013 at 08:41:15" into UNIX epoch time.

lode@srv-debian6:~$ date "+%s" -d "02/20/2013 08:41:15"
1361346075

Converting UNIX epoch time to human readable time
Even though I didn't find it in the date manual, it is possible to use the date command to reformat a UNIX epoch time into a human readable time. The syntax is the following:

lode@srv-debian6:~$ date -d @1234567890
Sat Feb 14 00:31:30 CET 2009

The same thing can also be achieved using a bit of perl programming:

lode@srv-debian6:~$ perl -e 'print scalar(localtime(1234567890)), "\n"'
Sat Feb 14 00:31:30 2009

Please note that the printed time is formatted in the timezone for which your Linux system is configured. My system is configured for UTC+2, so you may get different output for the same command.

[Sep 03, 2019] Run PerlTidy to beautify the code

Notable quotes:
"... Once I installed Code::TidyAll and placed those files in the root directory of the project, I could run tidyall -a . ..."
Sep 03, 2019 | perlmaven.com

The Code-TidyAll distribution provides a command line script called tidyall that will use Perl::Tidy to change the layout of the code.

This tandem needs two configuration files.

The .perltidyrc file contains the instructions to Perl::Tidy that describe the layout of a Perl file. We used the following file, copied from the source code of the Perl Maven project.

-pbp
-nst
-et=4
--maximum-line-length=120

# Break a line after opening/before closing token.
-vt=0
-vtc=0

The tidyall command uses a separate file called .tidyallrc that describes which files need to be beautified.

[PerlTidy]
select = {lib,t}/**/*.{pl,pm,t}
select = Makefile.PL
select = {mod2html,podtree2html,pods2html,perl2html}
argv = --profile=$ROOT/.perltidyrc

[SortLines]
select = .gitignore

Once I installed Code::TidyAll and placed those files in the root directory of the project, I could run tidyall -a.

That created a directory called .tidyall.d/ where it stores cached versions of the files, and changed all the files that were matches by the select statements in the .tidyallrc file.

Then, I added .tidyall.d/ to the .gitignore file to avoid adding that subdirectory to the repository and ran tidyall -a again to make sure the .gitignore file is sorted.

[Sep 02, 2019] Switch statement for bash script

Sep 02, 2019 | www.linuxquestions.org
Hello, I am currently trying out the switch statement using a bash script.

CODE:
showmenu () {
echo "1. Number1"
echo "2. Number2"
echo "3. Number3"
echo "4. All"
echo "5. Quit"
}

while true
do
showmenu
echo "Enter a choice:"
read choice
case "$choice" in
"1")
echo "Number One"
;;
"2")
echo "Number Two"
;;
"3")
echo "Number Three"
;;
"4")
echo "Number One, Two, Three"
;;
"5")
echo "Program Exited"
exit 0
;;
*)
echo "Please enter number ONLY ranging from 1-5!"
;;
esac
done

OUTPUT:
1. Number1
2. Number2
3. Number3
4. All
5. Quit
Enter a choice:

So, when the code is run, a menu with options 1-5 is shown, then the user is asked to enter a choice and finally an output is shown. But is it possible for the user to enter multiple choices? For example, if the user enters choices "1" and "3", the output will be "Number One" and "Number Three". Any idea?

Just something to get you started.

Code:

#! /bin/bash
showmenu ()
{
    typeset ii
    typeset -i jj=1
    typeset -i kk
    typeset -i valid=0  # valid=1 if input is good

    while (( ! valid ))
    do
        for ii in "${options[@]}"
        do
            echo "$jj) $ii"
            let jj++
        done
        read -e -p 'Select a list of actions : ' -a answer
        jj=0
        valid=1
        for kk in "${answer[@]}"
        do
            if (( kk < 1 || kk > "${#options[@]}" ))
            then
                echo "Error Item $jj is out of bounds" 1>&2
                valid=0
                break
            fi
            let jj++
        done
    done
}

typeset -r c1=Number1
typeset -r c2=Number2
typeset -r c3=Number3
typeset -r c4=All
typeset -r c5=Quit
typeset -ra options=($c1 $c2 $c3 $c4 $c5)
typeset -a answer
typeset -i kk
while true
do
    showmenu
    for kk in "${answer[@]}"
    do
        case $kk in
        1)
            echo 'Number One'
            ;;
        2)
            echo 'Number Two'
            ;;
        3)
            echo 'Number Three'
            ;;
        4)
            echo 'Number One, Two, Three'
            ;;
        5)
            echo 'Program Exit'
            exit 0
            ;;
        esac
    done 
done
wjs1990 ,Nov 16, 2009

Ok will try it out first. Thanks.

evo2 ,Nov 16, 2009

This can be done just by wrapping your case block in a for loop and changing one line.

Code:

#!/bin/bash
showmenu () {
    echo "1. Number1"
    echo "2. Number2"
    echo "3. Number3"
    echo "4. All"
    echo "5. Quit"
}

while true ; do
    showmenu
    read choices
    for choice in $choices ; do
        case "$choice" in
            1)
                echo "Number One" ;;
            2)
                echo "Number Two" ;;
            3)
                echo "Number Three" ;;
            4)
                echo "Numbers One, two, three" ;;
            5)
                echo "Exit"
                exit 0 ;;
            *)
                echo "Please enter number ONLY ranging from 1-5!"
                ;;
        esac
    done
done
You can now enter any number of numbers separated by white space.

Cheers,

EVo2.

[Sep 02, 2019] bash - Pretty-print for shell script

Oct 21, 2010 | stackoverflow.com



Benoit ,Oct 21, 2010 at 13:19

I'm looking for something similiar to indent but for (bash) scripts. Console only, no colorizing, etc.

Do you know of one ?

Jamie ,Sep 11, 2012 at 3:00

Vim can indent bash scripts. But not reformat them before indenting.
Backup your bash script, open it with vim, type gg=GZZ and indent will be corrected. (Note for the impatient: this overwrites the file, so be sure to do that backup!)

Though, some bugs with << (expecting EOF as first character on a line) e.g.

EDIT: ZZ not ZQ

Daniel Martí ,Apr 8, 2018 at 13:52

A bit late to the party, but it looks like shfmt could do the trick for you.

Brian Chrisman ,Aug 11 at 4:08

In bash I do this:
reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3 | sed -e "s/^\s\s\s\s//"
}

this eliminates comments and reindents the script "bash way".

If you have HEREDOCS in your script, they got ruined by the sed in the previous function.

So use:

reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3
}

But all your script will have a 4 spaces indentation.

Or you can do:

reindent () 
{ 
    rstr=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 1);
    source <(echo "Zibri () {";cat "$1"|sed -e "s/^\s\s\s\s/$rstr/"; echo "}");
    echo '#!/bin/bash';
    declare -f Zibri | head --lines=-1 | tail --lines=+3 | sed -e "s/^\s\s\s\s//;s/$rstr/    /"
}

which takes care also of heredocs.

Pius Raeder ,Jan 10, 2017 at 8:35

Found this http://www.linux-kheops.com/doc/perl/perl-aubert/fmt.script .

Very nice, only one thing i took out is the [...]->test substitution.

[Sep 02, 2019] mvdan-sh A shell parser, formatter, and interpreter (POSIX-Bash-mksh)

Written in the Go language
Sep 02, 2019 | github.com

sh

A shell parser, formatter and interpreter. Supports POSIX Shell , Bash and mksh . Requires Go 1.11 or later.

Quick start

To parse shell scripts, inspect them, and print them out, see the syntax examples .

For high-level operations like performing shell expansions on strings, see the shell examples .

shfmt

Go 1.11 and later can download the latest v2 stable release:

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/cmd/shfmt

The latest v3 pre-release can be downloaded in a similar manner, using the /v3 module:

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/v3/cmd/shfmt

Finally, any older release can be built with their respective older Go versions by manually cloning, checking out a tag, and running go build ./cmd/shfmt .
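
That is, something along these lines (a sketch; the tag shown is only an example):

git clone https://github.com/mvdan/sh
cd sh
git checkout v2.6.4    # pick whichever release tag you need
go build ./cmd/shfmt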

shfmt formats shell programs. It can use tabs or any number of spaces to indent. See canonical.sh for a quick look at its default style.

You can feed it standard input, any number of files or any number of directories to recurse into. When recursing, it will operate on .sh and .bash files and ignore files starting with a period. It will also operate on files with no extension and a shell shebang.

shfmt -l -w script.sh

Typically, CI builds should use the command below, to error if any shell scripts in a project don't adhere to the format:

shfmt -d .

Use -i N to indent with a number of spaces instead of tabs. There are other formatting options - see shfmt -h . For example, to get the formatting appropriate for Google's Style guide, use shfmt -i 2 -ci .
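
As a rough before/after illustration (the formatted output here is reconstructed by hand, so details may vary between shfmt versions), a sloppily indented fragment such as:

if [ -z "$1" ]
then
echo "usage: $0 file" >&2
   exit 1
fi

comes back from shfmt -i 2 as:

if [ -z "$1" ]; then
  echo "usage: $0 file" >&2
  exit 1
fi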

Packages are available on Arch , CRUX , Docker , FreeBSD , Homebrew , NixOS , Scoop , Snapcraft , and Void .

Replacing bash -n

bash -n can be useful to check for syntax errors in shell scripts. However, shfmt >/dev/null can do a better job as it checks for invalid UTF-8 and does all parsing statically, including checking POSIX Shell validity:

$ echo '${foo:1 2}' | bash -n
$ echo '${foo:1 2}' | shfmt
1:9: not a valid arithmetic operator: 2
$ echo 'foo=(1 2)' | bash --posix -n
$ echo 'foo=(1 2)' | shfmt -p
1:5: arrays are a bash feature

gosh

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/v3/cmd/gosh

Experimental shell that uses interp . Work in progress, so don't expect stability just yet.

Fuzzing

This project makes use of go-fuzz to find crashes and hangs in both the parser and the printer. To get started, run:

git checkout fuzz
./fuzz

Caveats

$ echo '${array[spaced string]}' | shfmt
1:16: not a valid arithmetic operator: string
$ echo '${array[dash-string]}' | shfmt
${array[dash - string]}
$ echo '$((foo); (bar))' | shfmt
1:1: reached ) without matching $(( with ))

JavaScript

A subset of the Go packages are available as an npm package called mvdan-sh . See the _js directory for more information.

Docker

To build a Docker image, checkout a specific version of the repository and run:

docker build -t my:tag -f cmd/shfmt/Dockerfile .

Related projects

[Aug 29, 2019] Parsing bash script options with getopts by Kevin Sookocheff

Mar 30, 2018 | sookocheff.com

A common task in shell scripting is to parse command line arguments to your script. Bash provides the getopts built-in function to do just that. This tutorial explains how to use it to parse arguments and options to a bash script.

The getopts function takes three parameters. The first is a specification of which options are valid, listed as a sequence of letters. For example, the string 'ht' signifies that the options -h and -t are valid.

The second argument to getopts is a variable that will be populated with the option or argument to be processed next. In the following loop, opt will hold the value of the current option that has been parsed by getopts .

while getopts ":ht" opt; do
  case ${opt} in
    h ) # process option h
      ;;
    t ) # process option t
      ;;
    \? ) echo "Usage: cmd [-h] [-t]"
      ;;
  esac
done

This example shows a few additional features of getopts . First, if an invalid option is provided, the option variable is assigned the value ? . You can catch this case and provide an appropriate usage message to the user. Second, this behaviour is only true when you prepend the list of valid options with : to disable the default error handling of invalid options. It is recommended to always disable the default error handling in your scripts.

The third argument to getopts is the list of arguments and options to be processed. When not provided, this defaults to the arguments and options provided to the application ( $@ ). You can provide this third argument to make getopts parse any list of arguments and options you choose.
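
For example (a minimal sketch), you can hand getopts an explicit argument list instead of letting it default to $@ :

while getopts ":t:" opt -t /tmp leftover; do
  case ${opt} in
    t ) echo "target is ${OPTARG}" ;;  # prints: target is /tmp
  esac
done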

Shifting processed options

The variable OPTIND holds the index of the next argument to be processed, so after the loop it points just past the options getopts handled. It is common practice to call the shift command at the end of your processing loop to remove options that have already been handled from $@ .

shift $((OPTIND -1))
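
As a quick sketch of the effect: if the ":ht" loop above is run as ./script.sh -h -t foo bar, then OPTIND ends up as 3 after the loop, and

shift $((OPTIND -1))
echo "$@"   # prints: foo bar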
Parsing options with arguments

Options that take an argument are signified by appending a : to the option letter. The argument to an option is placed in the variable OPTARG . In the following example, the option t takes an argument. When the argument is provided, we copy its value to the variable target . If no argument is provided, getopts will set opt to : . We can recognize this error condition by catching the : case and printing an appropriate error message.

while getopts ":t:" opt; do
  case ${opt} in
    t )
      target=$OPTARG
      ;;
    \? )
      echo "Invalid option: $OPTARG" 1>&2
      ;;
    : )
      echo "Invalid option: $OPTARG requires an argument" 1>&2
      ;;
  esac
done
shift $((OPTIND -1))
An extended example – parsing nested arguments and options

Let's walk through an extended example of processing a command that takes options, has a sub-command, and whose sub-command takes an additional option that has an argument. This is a mouthful so let's break it down using an example. Let's say we are writing our own version of the pip command . In this version you can call pip with the -h option to display a help message.

> pip -h
Usage:
    pip -h                      Display this help message.
    pip install                 Install a Python package.

We can use getopts to parse the -h option with the following while loop. In it we catch invalid options with \? and shift all arguments that have been processed with shift $((OPTIND -1)) .

while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install                 Install a Python package."
      exit 0
      ;;
    \? )
      echo "Invalid Option: -$OPTARG" 1>&2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))

Now let's add the sub-command install to our script. install takes as an argument the Python package to install.

> pip install urllib3

install also takes an option, -t . -t takes as an argument the location to install the package to relative to the current directory.

> pip install urllib3 -t ./src/lib

To process this line we must find the sub-command to execute. This value is the first argument to our script.

subcommand=$1
shift # Remove `pip` from the argument list

Now we can process the sub-command install . In our example, the option -t is actually an option that follows the package argument so we begin by removing install from the argument list and processing the remainder of the line.

case "$subcommand" in
  install)
    package=$1
    shift # Remove `install` from the argument list
    ;;
esac

After shifting the argument list we can process the remaining arguments as if they are of the form package -t src/lib . The -t option takes an argument itself. This argument will be stored in the variable OPTARG and we save it to the variable target for further work.

case "$subcommand" in
  install)
    package=$1
    shift # Remove `install` from the argument list

  while getopts ":t:" opt; do
    case ${opt} in
      t )
        target=$OPTARG
        ;;
      \? )
        echo "Invalid Option: -$OPTARG" 1>&2
        exit 1
        ;;
      : )
        echo "Invalid Option: -$OPTARG requires an argument" 1>&2
        exit 1
        ;;
    esac
  done
  shift $((OPTIND -1))
  ;;
esac

Putting this all together, we end up with the following script that parses arguments to our version of pip and its sub-command install .

package=""  # Default to empty package
target=""  # Default to empty target

# Parse options to the `pip` command
while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install <package>       Install <package>."
      exit 0
      ;;
   \? )
     echo "Invalid Option: -$OPTARG" 1>&2
     exit 1
     ;;
  esac
done
shift $((OPTIND -1))

subcommand=$1; shift  # Remove 'pip' from the argument list
case "$subcommand" in
  # Parse options to the install sub command
  install)
    package=$1; shift  # Remove 'install' from the argument list

    # Process package options
    while getopts ":t:" opt; do
      case ${opt} in
        t )
          target=$OPTARG
          ;;
        \? )
          echo "Invalid Option: -$OPTARG" 1>&2
          exit 1
          ;;
        : )
          echo "Invalid Option: -$OPTARG requires an argument" 1>&2
          exit 1
          ;;
      esac
    done
    shift $((OPTIND -1))
    ;;
esac

After processing the above sequence of commands, the variable package will hold the package to install and the variable target will hold the target to install the package to. You can use this as a template for processing any set of arguments and options to your scripts.


[Aug 29, 2019] How do I parse command line arguments in Bash - Stack Overflow

Jul 10, 2017 | stackoverflow.com

Livven, Jul 10, 2017 at 8:11

Update: It's been more than 5 years since I started this answer. Thank you for LOTS of great edits/comments/suggestions. In order to save maintenance time, I've modified the code block to be 100% copy-paste ready. Please do not post comments like "What if you changed X to Y". Instead, copy-paste the code block, see the output, make the change, rerun the script, and report the result; I don't have time to test your ideas and tell you if they work.
Method #1: Using bash without getopt[s]

Two common ways to pass key-value-pair arguments are:

Bash Space-Separated (e.g., --option argument ) (without getopt[s])

Usage demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

cat >/tmp/demo-space-separated.sh <<'EOF'
#!/bin/bash

POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"

case $key in
    -e|--extension)
    EXTENSION="$2"
    shift # past argument
    shift # past value
    ;;
    -s|--searchpath)
    SEARCHPATH="$2"
    shift # past argument
    shift # past value
    ;;
    -l|--lib)
    LIBPATH="$2"
    shift # past argument
    shift # past value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument
    ;;
    *)    # unknown option
    POSITIONAL+=("$1") # save it in an array for later
    shift # past argument
    ;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters

echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi
EOF

chmod +x /tmp/demo-space-separated.sh

/tmp/demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

output from copy-pasting the block above:

FILE EXTENSION  = conf
SEARCH PATH     = /etc
LIBRARY PATH    = /usr/lib
DEFAULT         =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34    example.com
Bash Equals-Separated (e.g., --option=argument ) (without getopt[s])

Usage demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

cat >/tmp/demo-equals-separated.sh <<'EOF'
#!/bin/bash

for i in "$@"
do
case $i in
    -e=*|--extension=*)
    EXTENSION="${i#*=}"
    shift # past argument=value
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    shift # past argument=value
    ;;
    -l=*|--lib=*)
    LIBPATH="${i#*=}"
    shift # past argument=value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument with no value
    ;;
    *)
          # unknown option
    ;;
esac
done
echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi
EOF

chmod +x /tmp/demo-equals-separated.sh

/tmp/demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

output from copy-pasting the block above:

FILE EXTENSION  = conf
SEARCH PATH     = /etc
LIBRARY PATH    = /usr/lib
DEFAULT         =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34    example.com

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` , which spawns a needless subprocess, or `echo "$i" | sed 's/[^=]*=//'` , which spawns two needless subprocesses.
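
A quick sketch of what that expansion does:

i="--searchpath=/etc"
echo "${i#*=}"     # /etc          (drop the shortest leading match of *=)
echo "${i%%=*}"    # --searchpath  (drop the longest trailing match of =*)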

Method #2: Using bash with getopt[s]

from: http://mywiki.wooledge.org/BashFAQ/035#getopts

getopt(1) limitations (older, relatively-recent getopt versions): it cannot handle arguments that are empty strings, or arguments with embedded whitespace.

More recent getopt versions don't have these limitations.

Additionally, the POSIX shell (and others) offer getopts which doesn't have these limitations. I've included a simplistic getopts example.

Usage demo-getopts.sh -vf /etc/hosts foo bar

cat >/tmp/demo-getopts.sh <<'EOF'
#!/bin/sh

# A POSIX variable
OPTIND=1         # Reset in case getopts has been used previously in the shell.

# Initialize our own variables:
output_file=""
verbose=0

while getopts "h?vf:" opt; do
    case "$opt" in
    h|\?)
        show_help
        exit 0
        ;;
    v)  verbose=1
        ;;
    f)  output_file=$OPTARG
        ;;
    esac
done

shift $((OPTIND-1))

[ "${1:-}" = "--" ] && shift

echo "verbose=$verbose, output_file='$output_file', Leftovers: $@"
EOF

chmod +x /tmp/demo-getopts.sh

/tmp/demo-getopts.sh -vf /etc/hosts foo bar

output from copy-pasting the block above:

verbose=1, output_file='/etc/hosts', Leftovers: foo bar

The advantages of getopts are:

  1. It's more portable, and will work in other shells like dash .
  2. It can handle multiple single options like -vf filename in the typical Unix way, automatically.

The disadvantage of getopts is that it can only handle short options ( -h , not --help ) without additional code.

There is a getopts tutorial which explains what all of the syntax and variables mean. In bash, there is also help getopts , which might be informative.
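
One well-known workaround for long options, sketched here, is to add "-:" to the optstring; bash then delivers --help as option - with the rest of the word in OPTARG:

while getopts "h-:" opt; do
    case "$opt" in
        h) show_help ;;              # assumes show_help is defined elsewhere, as above
        -) case "$OPTARG" in         # e.g. --help arrives as opt='-', OPTARG='help'
               help) show_help ;;
               *) echo "Unknown long option --$OPTARG" >&2; exit 1 ;;
           esac ;;
    esac
done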

johncip, Jul 23, 2018 at 15:15

No answer mentions enhanced getopt . And the top-voted answer is misleading: It either ignores -vfd style short options (requested by the OP) or options after positional arguments (also requested by the OP); and it ignores parsing-errors. Instead:

The following calls

myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile
myscript -v -f -d -o/fizz/someOtherFile -- ./foo/bar/someFile
myscript --verbose --force --debug ./foo/bar/someFile -o/fizz/someOtherFile
myscript --output=/fizz/someOtherFile ./foo/bar/someFile -vfd
myscript ./foo/bar/someFile -df -v --output /fizz/someOtherFile

all return

verbose: y, force: y, debug: y, in: ./foo/bar/someFile, out: /fizz/someOtherFile

with the following myscript

#!/bin/bash
# saner programming env: these switches turn some bugs into errors
set -o errexit -o pipefail -o noclobber -o nounset

# -allow a command to fail with !'s side effect on errexit
# -use return value from ${PIPESTATUS[0]}, because ! hosed $?
! getopt --test > /dev/null 
if [[ ${PIPESTATUS[0]} -ne 4 ]]; then
    echo "I'm sorry, \`getopt --test\` failed in this environment."
    exit 1
fi

OPTIONS=dfo:v
LONGOPTS=debug,force,output:,verbose

# -regarding ! and PIPESTATUS see above
# -temporarily store output to be able to check for errors
# -activate quoting/enhanced mode (e.g. by writing out "--options")
# -pass arguments only via   -- "$@"   to separate them correctly
! PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTS --name "$0" -- "$@")
if [[ ${PIPESTATUS[0]} -ne 0 ]]; then
    # e.g. return value is 1
    #  then getopt has complained about wrong arguments to stdout
    exit 2
fi
# read getopt's output this way to handle the quoting right:
eval set -- "$PARSED"

d=n f=n v=n outFile=-
# now enjoy the options in order and nicely split until we see --
while true; do
    case "$1" in
        -d|--debug)
            d=y
            shift
            ;;
        -f|--force)
            f=y
            shift
            ;;
        -v|--verbose)
            v=y
            shift
            ;;
        -o|--output)
            outFile="$2"
            shift 2
            ;;
        --)
            shift
            break
            ;;
        *)
            echo "Programming error"
            exit 3
            ;;
    esac
done

# handle non-option arguments
if [[ $# -ne 1 ]]; then
    echo "$0: A single input file is required."
    exit 4
fi

echo "verbose: $v, force: $f, debug: $d, in: $1, out: $outFile"

1 enhanced getopt is available on most "bash-systems", including Cygwin; on OS X try brew install gnu-getopt or sudo port install getopt
2 the POSIX exec() conventions have no reliable way to pass binary NULL in command line arguments; those bytes prematurely end the argument
3 first version released in 1997 or before (I only tracked it back to 1997)

Tobias Kienzler, Mar 19, 2016 at 15:23

From digitalpeer.com with minor modifications.

Usage myscript.sh -p=my_prefix -s=dirname -l=libname

#!/bin/bash
for i in "$@"
do
case $i in
    -p=*|--prefix=*)
    PREFIX="${i#*=}"
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    ;;
    -l=*|--lib=*)
    DIR="${i#*=}"
    ;;
    --default)
    DEFAULT=YES
    ;;
    *)
            # unknown option
    ;;
esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` , which spawns a needless subprocess, or `echo "$i" | sed 's/[^=]*=//'` , which spawns two needless subprocesses.

Robert Siemer, Jun 1, 2018 at 1:57

getopt() / getopts() is a good option. Stolen from here:

The simple use of "getopt" is shown in this mini-script:

#!/bin/bash
echo "Before getopt"
for i
do
  echo $i
done
args=`getopt abc:d $*`
set -- $args
echo "After getopt"
for i
do
  echo "-->$i"
done

What we have said is that any of -a, -b, -c or -d will be allowed, but that -c is followed by an argument (the "c:" says that).

If we call this "g" and try it out:

bash-2.05a$ ./g -abc foo
Before getopt
-abc
foo
After getopt
-->-a
-->-b
-->-c
-->foo
-->--

We start with two arguments, and "getopt" breaks apart the options and puts each in its own argument. It also added "--". Note that this simple form uses an unquoted $* and so breaks on arguments containing whitespace; the enhanced getopt invocation shown earlier (with -- "$@") avoids that.

hfossli, Jan 31 at 20:05

A more succinct way:

script.sh

#!/bin/bash

while [[ "$#" -gt 0 ]]; do case $1 in
  -d|--deploy) deploy="$2"; shift;;
  -u|--uglify) uglify=1;;
  *) echo "Unknown parameter passed: $1"; exit 1;;
esac; shift; done

echo "Should deploy? $deploy"
echo "Should uglify? $uglify"

Usage:

./script.sh -d dev -u

# OR:

./script.sh --deploy dev --uglify

bronson, Apr 27 at 23:22

At the risk of adding another example to ignore, here's my scheme.

Hope it's useful to someone.

while [ "$#" -gt 0 ]; do
  case "$1" in
    -n) name="$2"; shift 2;;
    -p) pidfile="$2"; shift 2;;
    -l) logfile="$2"; shift 2;;

    --name=*) name="${1#*=}"; shift 1;;
    --pidfile=*) pidfile="${1#*=}"; shift 1;;
    --logfile=*) logfile="${1#*=}"; shift 1;;
    --name|--pidfile|--logfile) echo "$1 requires an argument" >&2; exit 1;;

    -*) echo "unknown option: $1" >&2; exit 1;;
    *) handle_argument "$1"; shift 1;;
  esac
done
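
handle_argument is left undefined in the original; a minimal, hypothetical placeholder could be:

handle_argument() {
  # stash non-option arguments for processing after the loop
  args="$args $1"
}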

Robert Siemer, Jun 6, 2016 at 19:28

I'm about 4 years late to this question, but want to give back. I used the earlier answers as a starting point to tidy up my old ad hoc param parsing. I then refactored out the following template code. It handles both long and short params, using = or space-separated arguments, as well as multiple short params grouped together. Finally it re-inserts any non-param arguments back into the $1, $2... variables. I hope it's useful.
#!/usr/bin/env bash

# NOTICE: Uncomment if your script depends on bashisms.
#if [ -z "$BASH_VERSION" ]; then bash $0 $@ ; exit $? ; fi

echo "Before"
for i ; do echo - $i ; done


# Code template for parsing command line parameters using only portable shell
# code, while handling both long and short params, handling '-f file' and
# '-f=file' style param data and also capturing non-parameters to be inserted
# back into the shell positional parameters.

while [ -n "$1" ]; do
        # Copy so we can modify it (can't modify $1)
        OPT="$1"
        # Detect argument termination
        if [ x"$OPT" = x"--" ]; then
                shift
                for OPT ; do
                        REMAINS="$REMAINS \"$OPT\""
                done
                break
        fi
        # Parse current opt
        while [ x"$OPT" != x"-" ] ; do
                case "$OPT" in
                        # Handle --flag=value opts like this
                        -c=* | --config=* )
                                CONFIGFILE="${OPT#*=}"
                                shift
                                ;;
                        # and --flag value opts like this
                        -c* | --config )
                                CONFIGFILE="$2"
                                shift
                                ;;
                        -f* | --force )
                                FORCE=true
                                ;;
                        -r* | --retry )
                                RETRY=true
                                ;;
                        # Anything unknown is recorded for later
                        * )
                                REMAINS="$REMAINS \"$OPT\""
                                break
                                ;;
                esac
                # Check for multiple short options
                # NOTICE: be sure to update this pattern to match valid options
                NEXTOPT="${OPT#-[cfr]}" # try removing single short opt
                if [ x"$OPT" != x"$NEXTOPT" ] ; then
                        OPT="-$NEXTOPT"  # multiple short opts, keep going
                else
                        break  # long form, exit inner loop
                fi
        done
        # Done with that param. move to next
        shift
done
# Set the non-parameters back into the positional parameters ($1 $2 ..)
eval set -- $REMAINS


echo -e "After: \n configfile='$CONFIGFILE' \n force='$FORCE' \n retry='$RETRY' \n remains='$REMAINS'"
for i ; do echo - $i ; done


I have found writing portable argument parsing in scripts so frustrating that I have written Argbash, a FOSS code generator that can generate the argument-parsing code for your script, and it has some nice features:

https://argbash.io

[Aug 29, 2019] shell - An example of how to use getopts in bash - Stack Overflow

The key thing to understand is that getopts only parses options; you need to shift them off as a separate operation:
shift $((OPTIND-1))
May 10, 2013 | stackoverflow.com

An example of how to use getopts in bash

chepner, May 10, 2013 at 13:42

I want to call the myscript file in this way:
$ ./myscript -s 45 -p any_string

or

$ ./myscript -h >>> should display help
$ ./myscript    >>> should display help

My requirements are:

So far I have tried this code:

#!/bin/bash
while getopts "h:s:" arg; do
  case $arg in
    h)
      echo "usage" 
      ;;
    s)
      strength=$OPTARG
      echo $strength
      ;;
  esac
done

But with that code I get errors. How do I do it with Bash and getopt ?


#!/bin/bash

usage() { echo "Usage: $0 [-s <45|90>] [-p <string>]" 1>&2; exit 1; }

while getopts ":s:p:" o; do
    case "${o}" in
        s)
            s=${OPTARG}
            ((s == 45 || s == 90)) || usage
            ;;
        p)
            p=${OPTARG}
            ;;
        *)
            usage
            ;;
    esac
done
shift $((OPTIND-1))

if [ -z "${s}" ] || [ -z "${p}" ]; then
    usage
fi

echo "s = ${s}"
echo "p = ${p}"

Example runs:

$ ./myscript.sh
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -h
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s "" -p ""
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s 10 -p foo
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s 45 -p foo
s = 45
p = foo

$ ./myscript.sh -s 90 -p bar
s = 90
p = bar

[Aug 28, 2019] How to Replace Spaces in Filenames with Underscores on the Linux Shell

You would probably be better off with the -nv options for mv.
Aug 28, 2019 | vitux.com
$ for file in *; do mv "$file" "$(echo "$file" | tr ' ' '_')"; done
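
A pure-bash variant of the same idea (a sketch: ${file// /_} replaces all spaces via parameter expansion, the *' '* glob restricts the loop to names that actually contain a space, and -nv follows the note above):

$ for file in *' '*; do mv -nv "$file" "${file// /_}"; done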

[Aug 28, 2019] 9 Quick 'mv' Command Practical Examples in Linux

Aug 28, 2019 | www.linuxbuzz.com

Example:5) Do not overwrite existing file at destination (mv -n)

Use the '-n' option with mv if you don't want to overwrite an existing file at the destination:

[linuxbuzz@web ~]$ ls -l tools.txt /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 24 09:59 /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 24 10:10 tools.txt
[linuxbuzz@web ~]$

As we can see, tools.txt is present both in our current working directory and in /tmp/sysadmin. Use the mv command below to avoid overwriting the destination:

[linuxbuzz@web ~]$ mv -n tools.txt /tmp/sysadmin/tools.txt
[linuxbuzz@web ~]$
Example:6) Forcefully overwrite write protected file at destination (mv -f)

Use the '-f' option with mv to forcefully overwrite a write-protected file at the destination. Let's assume we have a file named "bands.txt" in our present working directory and in /tmp/sysadmin.

[linuxbuzz@web ~]$ ls -l bands.txt /tmp/sysadmin/bands.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:24 bands.txt
-r--r--r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:24 /tmp/sysadmin/bands.txt
[linuxbuzz@web ~]$

As we can see, under /tmp/sysadmin bands.txt is a write-protected file.

Without the -f option:

[linuxbuzz@web ~]$ mv bands.txt /tmp/sysadmin/bands.txt

mv: try to overwrite '/tmp/sysadmin/bands.txt', overriding mode 0444 (r--r--r--)?

To forcefully overwrite it, use the mv command below:

[linuxbuzz@web ~]$ mv -f bands.txt /tmp/sysadmin/bands.txt
[linuxbuzz@web ~]$
Example:7) Verbose output of mv command (mv -v)

Use the '-v' option to make mv print verbose output; an example is shown below:

[linuxbuzz@web ~]$ mv -v  buzz51.txt buzz52.txt buzz53.txt buzz54.txt /tmp/sysadmin/
'buzz51.txt' -> '/tmp/sysadmin/buzz51.txt'
'buzz52.txt' -> '/tmp/sysadmin/buzz52.txt'
'buzz53.txt' -> '/tmp/sysadmin/buzz53.txt'
'buzz54.txt' -> '/tmp/sysadmin/buzz54.txt'
[linuxbuzz@web ~]$
Example:8) Create backup at destination while using mv command (mv -b)

Use the '-b' option to take a backup of the destination file before it is overwritten; the backup file is created with a tilde character appended to its name, as the example shows:

[linuxbuzz@web ~]$ mv -b buzz55.txt /tmp/sysadmin/buzz55.txt
[linuxbuzz@web ~]$ ls -l /tmp/sysadmin/buzz55.txt*
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:47 /tmp/sysadmin/buzz55.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:37 /tmp/sysadmin/buzz55.txt~
[linuxbuzz@web ~]$
Example:9) Move file only when its newer than destination (mv -u)

There are scenarios where we have the same file at the source and the destination and want to move the file only when the source is newer than the destination; to accomplish this, use the -u option with mv. An example is shown below.

[linuxbuzz@web ~]$ ls -l tools.txt /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 55 Aug 25 00:55 /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 87 Aug 25 00:57 tools.txt
[linuxbuzz@web ~]$

Execute the mv command below to move the file only when it is newer than the destination:

[linuxbuzz@web ~]$ mv -u tools.txt /tmp/sysadmin/tools.txt
[linuxbuzz@web ~]$

That's all for this article; we have covered the important basic examples of the mv command.

Hopefully the above examples will help you learn more about mv. Write to us with your feedback and suggestions.

[Aug 28, 2019] Echo Command in Linux with Examples

Notable quotes:
"... The -e parameter is used for the interpretation of backslashes ..."
"... The -n option is used for omitting trailing newline. ..."
Aug 28, 2019 | linoxide.com

The -e parameter is used for the interpretation of backslashes

... ... ...

To start a new line after each word in a string, use the -e option together with the \n escape, as shown:
$ echo -e "Linux \nis \nan \nopensource \noperating \nsystem"

... ... ...

Omit echoing trailing newline

The -n option is used for omitting trailing newline. This is shown in the example below

$ echo -n "Linux is an opensource operating system"

Sample Output

Linux is an opensource operating systemjames@buster:/$

[Aug 27, 2019] Bash Variables - Bash Reference Manual

Aug 27, 2019 | bash.cyberciti.biz

BASH_LINENO

An array variable whose members are the line numbers in source files corresponding to each member of FUNCNAME . ${BASH_LINENO[$i]} is the line number in the source file where ${FUNCNAME[$i]} was called. The corresponding source file name is ${BASH_SOURCE[$i]} . Use LINENO to obtain the current line number.

[Aug 27, 2019] linux - How to show line number when executing bash script

Aug 27, 2019 | stackoverflow.com

How to show line number when executing bash script


dspjm, Jul 23, 2013 at 7:31

I have a test script which has a lot of commands and generates lots of output. I use set -x or set -v and set -e , so the script stops when an error occurs. However, it's still rather difficult for me to locate which line execution stopped at in order to find the problem. Is there a method which can output the line number of the script before each line is executed? Or output the line number before the output generated by set -x ? Any method which can deal with locating the line would be a great help. Thanks.

Suvarna Pattayil, Jul 28, 2017 at 17:25

You mention that you're already using -x . The variable PS4 holds the prompt printed before each command line is echoed when the -x option is set; it defaults to + followed by a space.

You can change PS4 to emit the LINENO (The line number in the script or shell function currently executing).

For example, if your script reads:

$ cat script
foo=10
echo ${foo}
echo $((2 + 2))

Executing it thus would print line numbers:

$ PS4='Line ${LINENO}: ' bash -x script
Line 1: foo=10
Line 2: echo 10
10
Line 3: echo 4
4

http://wiki.bash-hackers.org/scripting/debuggingtips gives the ultimate PS4 that would output everything you will possibly need for tracing:

export PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
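
Running the example script with that PS4 exported produces trace lines along these lines (reconstructed output, so the exact source path shown may differ):

$ export PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'
$ bash -x script
+(script:1): foo=10
+(script:2): echo 10
10
+(script:3): echo 4
4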

Deqing, Jul 23, 2013 at 8:16

In Bash, $LINENO contains the line number where the script is currently executing.

If you need to know the line number where the function was called, try $BASH_LINENO . Note that this variable is an array.

For example:

#!/bin/bash       

function log() {
    echo "LINENO: ${LINENO}"
    echo "BASH_LINENO: ${BASH_LINENO[*]}"
}

function foo() {
    log "$@"
}

foo "$@"

See here for details of Bash variables.

Eliran Malka, Apr 25, 2017 at 10:14

Simple (but powerful) solution: place an echo near the code you think causes the problem and move it line by line until the message no longer appears on screen, because the script stopped at an earlier error.

An even more powerful solution: install bashdb , the bash debugger, and debug the script line by line.

kklepper, Apr 2, 2018 at 22:44

Workaround for shells without LINENO

In a fairly sophisticated script I wouldn't like to see all line numbers; rather I would like to be in control of the output.

Define a function

echo_line_no () {
    grep -n "$1" "$0" | sed "s/echo_line_no//"
    # grep the line(s) containing input $1, with line numbers,
    # then strip the function name from the output
} # echo_line_no

Use it with quotes like

echo_line_no "this is a simple comment with a line number"

Output is

16   "this is a simple comment with a line number"

if the number of this line in the source file is 16.

This basically answers the question How to show line number when executing bash script for users of ash or other shells without LINENO .

Anything more to add?

Sure. Why do you need this? How do you work with this? What can you do with this? Is this simple approach really sufficient or useful? Why do you want to tinker with this at all?

Want to know more? Read reflections on debugging

[Aug 20, 2019] Fixing Midnight Commander's unreadable dropdown menus

Apr 24, 2011 | tech.iprock.com
Important: This is an edited version of a post that originally appeared on a blog called The Michigan Telephone Blog, which was written by a friend before he decided to stop blogging. It is reposted with his permission. Comments dated before the year 2013 were originally posted to his blog.

If you've installed Midnight Commander and haven't changed the default colors, when you try to access a dropdown menu you may see this:

Midnight Commander -- Original Colors

REALLY hard to read that menu, isn't it? Wouldn't you rather see this?

Midnight Commander -- Changed Colors

To fix the unreadable menus, just make sure Midnight Commander is not open, then use any text editor (such as nano) to open ~/.mc/ini:

nano ~/.mc/ini

Assuming that there is no existing [Colors] section in the file, just add this at the bottom of the file (if the second line exceeds the blog column width, just use copy and paste to get it all):

[Colors]
base_color=default,default:menu=black,cyan:menuhot=brightmagenta,cyan:menusel=white,blue:menuhotsel=brightmagenta,blue

If there is an existing [Colors] section, you can try tweaking it using the parameters shown above. If you have a very recent version of Midnight Commander (which you probably will have if you are running Ubuntu), then instead of menu= you'll need to use menunormal= , as shown here:

[Colors]
base_color=default,default:menunormal=black,cyan:menuhot=brightmagenta,cyan:menusel=white,blue:menuhotsel=brightmagenta,blue

Note that for some reason the base_color parameter must appear, or the other items are ignored. Save the change, exit the editor, and open Midnight Commander. If you then close Midnight Commander, you may find that the position of the [Colors] section has moved within the ini file -- apparently Midnight Commander rewrites the file when you close it -- but if you don't like the changes you can remove the [Colors] section to reverse the change.

I figured out how to do this after reading this blog post:
Ajnasz Blog – Midnight Commander colors and themes
Another source of information is:
Zagura's blog – Midnight Commander Color Themes

[Aug 20, 2019] Midnight Commander, using date in User menu

    Dec 31, 2013 | unix.stackexchange.com

    user2013619, Dec 31, 2013 at 0:43

    I would like to use MC (midnight commander) to compress the selected dir with the date in its name, e.g. dirname_20131231.tar.gz

    The command in the User menu is :

    tar -czf dirname_`date '+%Y%m%d'`.tar.gz %d

    The date is missing from the archive name because %m and %d have another meaning in MC. I made an alias for the date, but it also doesn't work.

    Has anybody ever solved this problem?

    John1024, Dec 31, 2013 at 1:06

    To escape the percent signs, double them:
    tar -czf dirname_$(date '+%%Y%%m%%d').tar.gz %d

    The above would compress the current directory (%d) to a file also in the current directory. If you want to compress the directory pointed to by the cursor rather than the current directory, use %f instead:

    tar -czf %f_$(date '+%%Y%%m%%d').tar.gz %f
    

    mc handles escaping of special characters so there is no need to put %f in quotes.

    By the way, midnight commander's special treatment of percent signs occurs not just in the user menu file but also at the command line. This is an issue when using shell commands with constructs like ${var%.c} . At the command line, the same as in the user menu file, percent signs can be escaped by doubling them.
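
    For example (a small sketch): to run the shell construct echo "${var%.c}" from the mc command line, type the percent sign doubled:

    echo "${var%%.c}"     # mc passes this to the shell as: echo "${var%.c}"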

    [Aug 19, 2019] Moreutils - A Collection Of More Useful Unix Utilities - OSTechNix

    Parallel is a really useful utility. RPM is installable from EPEL.
    Aug 19, 2019 | www.ostechnix.com

    ... ... ...

    On RHEL, CentOS, Scientific Linux:
    $ sudo yum install epel-release
    
    $ sudo yum install moreutils
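
    As one example of what the collection offers, the sponge tool soaks up all of standard input before writing the named file, so you can filter a file in place (a quick sketch):

    grep -v '^#' config.txt | sponge config.txt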
    

    [Aug 10, 2019] LinuxQuestions.org - [SOLVED] Midnight Commander Help

    Aug 10, 2019 | www.linuxquestions.org
    CrazyCatLover 12-22-2014 02:40 AM

    Midnight Commander Help
    Hi,

    I need to know how to check the current colour scheme for mc and how to change it.
    I googled it and people talk about changing some initial file /.mc/ini, which I have no idea about (no one ever gives the full filename) and I can't find at all. Wasted an hour of my life. I just need the simplest way to change it, not another 10+ steps to change a stupid colour.


    gengisdave 12-22-2014 03:22 AM

    in some distros (mine, e.g.) it is located in ~/.local/mc/ini

    sycamorex 12-22-2014 03:24 AM

    This is the full filename. Mind you, on my distro it's in ~/.config/mc/ini.
    Find or create this file and add the following (obviously, change the colour values):

    The syntax is: variable=foreground_colour,background_colour
    Code:


    [Colors]
    base_color=lightgray,green:normal=green,default:selected=white,gray:marked=yellow,default:markselect=yellow,gray:directory=blue,default:executable=brightgreen,default:link=cyan,default:device=brightmagenta,default:special=lightgray,default:errors=red,default:reverse=green,default:gauge=green,default:input=white,gray:dnormal=green,gray:dfocus=brightgreen,gray:dhotnormal=cyan,gray:dhotfocus=brightcyan,gray:menu=green,default:menuhot=cyan,default:menusel=green,gray:menuhotsel=cyan,default:helpnormal=cyan,default:editnormal=green,default:editbold=blue,default:editmarked=gray,blue:stalelink=red,default


    Also, have a look at this:
    http://blog.mybox.ro/2010/05/10/skin...ght-commander/

    [Aug 10, 2019] Plug-and-Pray Editing Midnight Commander's color scheme

    Aug 10, 2019 | plug-and-pray.blogspot.com

    Editing Midnight Commander's color scheme

    In a previous post I was sort of laying out a "formula" for transforming your Midnight Commander default color scheme into a transparent skin, without talking too much about how you can change the other colors.

    To my great shame, I didn't pay too much attention to this blog or to the comments asking for further advice. I found Mateus' comment rather late (just now!) and decided to dig further, in order to find out how exactly to deal with more refined color changes, while still keeping the transparent background (in both Midnight Commander and its editor).

    So the first thing to know is which are the colors that Midnight Commander supports; the available colors are:

    black
    gray
    lightgray
    white
    red
    brightred
    green
    brightgreen
    blue
    brightblue
    magenta
    brightmagenta
    cyan
    brightcyan
    brown
    yellow
    default

    The " default " color is the one giving out the nice transparency.

    Now, there are certain "components" in Midnight Commander's display that can have their colors altered. Here they are:

    base_color, normal, selected, marked, markselect, errors, menu, reverse, dnormal, dfocus, dhotnormal, dhotfocus, viewunderline, menuhot, menusel, menuhotsel, helpnormal, helpitalic, helpbold, helplink, helpslink, gauge, input, directory, executable, link, stalelink, device, core, special, editnormal, editbold, editmarked, errdhotnormal, errdhotfocus

    Each and every one of these "components" can have its own colors set according to the user's wish. Each component is assigned a color pair and must be followed by a colon (':') in order to separate it from the color pair of the next component. Here's what this basic syntax looks like:

    component=foreground_color,background_color:

    When you start modifying the color scheme in your Midnight Commander configuration file (located at ~/.mc/ini ), you just have to add a section called " [Colors] " and proceed with enumerating the color pairs. So you'd have something like this:

    # the rest of your ~/.mc/ini file

    [Colors]
    component1=foreground_color1,background_color1:...:componentN= foreground_colorN,background_colorN

    For increased readability, I will "truncate" that long line, adding a backslash ('\') to indicate that in fact what follows on the next line should be adjacent to the text on the previous line. This being said, the [Colors] section could look like this:

    # the rest of your ~/.mc/ini file

    [Colors]
    component1=foreground_color1,background_color1:\
    component2=foreground_color2,background_color2:\
    ...
    componentN=foreground_colorN,background_colorN

    Now that you've gotten the hang of this, let's see what the [Colors] section looks like in the default Midnight Commander color scheme (you know, the "ugly" one, with blue and dull cyan):

    IMPORTANT NOTE: For visual impact's sake and due to Blogspot breaking long lines, I wrote each color pair on a single row, followed by a backslash ('\'). Please note that this does NOT work in the ~/.mc/ini file, so the final [Colors] section in your Midnight Commander configuration file MUST be a SINGLE line with no spaces and with each color pair separated from the next one by a colon (':').

    # the rest of your ~/.mc/ini file

    [Colors]
    base_color=lightgray,blue:\
    normal=lightgray,blue:\
    selected=black,cyan:\
    marked=yellow,blue:\
    markselect=yellow,cyan:\
    errors=white,red:\
    menu=white,cyan:\
    reverse=black,lightgray:\
    dnormal=black,lightgray:\
    dfocus=black,cyan:\
    dhotnormal=blue,lightgray:\
    dhotfocus=blue,cyan:\
    viewunderline=brightred,blue:\
    menuhot=yellow,cyan:\
    menusel=white,black:\
    menuhotsel=yellow,black:\
    helpnormal=black,lightgray:\
    helpitalic=red,lightgray:\
    helpbold=blue,lightgray:\
    helplink=black,cyan:\
    helpslink=yellow,blue:\
    gauge=white,black:\
    input=black,cyan:\
    directory=white,blue:\
    executable=brightgreen,blue:\
    link=lightgray,blue:\
    stalelink=brightred,blue:\
    device=brightmagenta,blue:\
    core=red,blue:\
    special=black,blue:\
    editnormal=lightgray,blue:\
    editbold=yellow,blue:\
    editmarked=black,cyan:\
    errdhotnormal=yellow,red:\
    errdhotfocus=yellow,lightgray

    Now let's see. What you want to change first is the background of most of these "components", so that the display has a neat-looking transparent background. Make a few changes to these color pairs by replacing the background color "blue" with "default". After doing these changes, your [Colors] section will look a bit like this:

    # the rest of your ~/.mc/ini file

    [Colors]
    base_color=lightgray,default:\
    normal=lightgray,default:\
    selected=black,cyan:\
    marked=yellow,default:\
    markselect=yellow,cyan:\
    errors=white,red:\
    menu=white,cyan:\
    reverse=black,lightgray:\
    dnormal=black,lightgray:\
    dfocus=black,cyan:\
    dhotnormal=blue,lightgray:\
    dhotfocus=blue,cyan:\
    viewunderline=brightred,default:\
    menuhot=yellow,cyan:\
    menusel=white,black:\
    menuhotsel=yellow,black:\
    helpnormal=black,lightgray:\
    helpitalic=red,lightgray:\
    helpbold=blue,lightgray:\
    helplink=black,cyan:\
    helpslink=yellow,default:\
    gauge=white,black:\
    input=black,cyan:\
    directory=white,default:\
    executable=brightgreen,default:\
    link=lightgray,default:\
    stalelink=brightred,default:\
    device=brightmagenta,default:\
    core=red,default:\
    special=black,default:\
    editnormal=lightgray,default:\
    editbold=yellow,default:\
    editmarked=black,cyan:\
    errdhotnormal=yellow,red:\
    errdhotfocus=yellow,lightgray

    Now you've got the basic "Midnight Commander transparent scheme" that was the result of this post .

    Proceeding to Mateus' question, regarding how to change the rest of the colors now, it's about the same as before. What he didn't like there (and as a matter of fact I don't quite like it, either) is the dull cyan that's still seen in the following places:

    1. the bottom line (the one displaying the F1...F10 function keys);
    2. the line that signifies the current selection, the "prompt" which shows you on which file/directory you're "on" at a given moment;
    3. the uppermost line (the "menu" line);
    4. the menus themselves, once you open them.
    To "fix" issues 1, 2, and 3 it is sufficient to alter the value of the " selected " parameter. Notice how it is initially

    selected=black,cyan:\

    My personal choice is to replace the background cyan, which I don't really like, with green. To do this, I'll change this color pair to

    selected=black,green:\

    You can, of course, change the foreground color as well. For me, it's alright to keep the foreground (the text) "black". You can change it to whatever suits your taste.

    To "fix" issue number 4 in the list above, you need to change the " menu " parameter. To get it transparent, just change the "cyan" background to "default". Make other adjustments as you see fit. In other words, change

    menu=white,cyan:\

    into, for instance,

    menu=lightgray,default:\

    However, there are a few "leftovers" from the default color scheme.

    One of them is the parameter regarding the hotkeys in the menus (the "underlined" character on most of the menu options, showing you what key you can press in order to access that option faster than by moving to it with the arrow keys). This color pair is called " menuhot ". I changed it from

    menuhot=yellow,cyan:\

    into

    menuhot=yellow,default:\

    Another thing which might bother you is the color of the line in the panel you're in when you've "selected all" files (when you've pressed the "*" key). This parameter is called " markselect ". I changed it from

    markselect=yellow,cyan:\

    into

    markselect=white,green:\

    The color pair of the selected buttons in dialogs is called " dfocus ". I changed mine from

    dfocus=black,cyan:\

    into

    dfocus=black,green:\

    In the "focused" buttons or options, the underlined character is called " dhotfocus ". I changed mine from

    dhotfocus=blue,cyan:\

    into

    dhotfocus=brightgreen,green:\

    since the background color was already green, after I modified the " dfocus " color pair.

    The other buttons or options in the dialogs which have hotkeys assigned to them, but which are not "focused" (the buttons/options that you're not located on at a given moment) are still displayed in blue on a light gray background. This color pair is referred to as " dhotnormal ". Since the blue looks a bit odd there, I changed

    dhotnormal=blue,lightgray:\

    into

    dhotnormal=brightgreen,default:\

    Well, this is nice, in window titles and on normal (unfocused) hotkeys I get the transparent background. The problem now is that the rest of the dialog window is still light gray. To change this (to make the window transparent as well), you only need to alter the " dnormal " color pair, such as changing it from

    dnormal=black,lightgray:\

    into

    dnormal=white,default:\

    You may notice that the input fields stay cyan, as well; you find these fields in quite a lot of dialog boxes. To alter this, I changed

    input=black,cyan:\

    into

    input=black,green:\

    One thing which I consider useful is to have symbolic links displayed in bright cyan (as in the colored listings in the terminal). So I just changed

    link=lightgray,default:\

    into

    link=brightcyan,default:\

    Now, regarding the rest of the color pairs, I don't really know what they do. However, if at some point, after using Midnight Commander more with this new, neat, transparent/green color scheme, you notice unwanted leftovers, you can try out other changes in the color pair values, one at a time, until you determine the troublesome one.

    After operating the changes above, my [Colors] section in ~/.mc/ini now looks like this:

    [Colors]
    base_color=lightgray,default:\
    normal=lightgray,default:\
    selected=black,green:\
    marked=yellow,default:\
    markselect=white,green:\
    errors=white,red:\
    menu=lightgray,default:\
    reverse=black,lightgray:\
    dnormal=white,default:\
    dfocus=black,green:\
    dhotnormal=brightgreen,default:\
    dhotfocus=brightgreen,green:\
    viewunderline=brightred,default:\
    menuhot=yellow,default:\
    menusel=white,black:\
    menuhotsel=yellow,black:\
    helpnormal=black,lightgray:\
    helpitalic=red,lightgray:\
    helpbold=blue,lightgray:\
    helplink=black,cyan:\
    helpslink=yellow,default:\
    gauge=white,black:\
    input=black,green:\
    directory=white,default:\
    executable=brightgreen,default:\
    link=brightcyan,default:\
    stalelink=brightred,default:\
    device=brightmagenta,default:\
    core=red,default:\
    special=black,default:\
    editnormal=lightgray,default:\
    editbold=yellow,default:\
    editmarked=black,cyan:\
    errdhotnormal=yellow,red:\
    errdhotfocus=yellow,lightgray

    I need to direct you to the " IMPORTANT NOTE " above. The final [Colors] section above is written like this - one pair on each row, followed by a backslash - for clarity's sake. The actual final [Colors] section in your ~/.mc/ini file will have to be a one-liner, with no blanks and no backslashes. So it will probably look similar to this:

    base_color=lightgray,default:normal=lightgray,default:selected=black,green:marked=yellow,default:markselect=white,green:errors=white,red:menu=lightgray,default:reverse=black,lightgray:dnormal=white,default:dfocus=black,green:dhotnormal=brightgreen,default:dhotfocus=brightgreen,green:viewunderline=brightred,default:menuhot=yellow,default:menusel=white,black:menuhotsel=yellow,black:helpnormal=black,lightgray:helpitalic=red,lightgray:helpbold=blue,lightgray:helplink=black,cyan:helpslink=yellow,default:gauge=white,black:input=black,green:directory=white,default:executable=brightgreen,default:link=brightcyan,default:stalelink=brightred,default:device=brightmagenta,default:core=red,default:special=black,default:editnormal=lightgray,default:editbold=yellow,default:editmarked=black,cyan:errdhotnormal=yellow,red:errdhotfocus=yellow,lightgray

    Now, the next time you start mc , the new color scheme will take effect.

    As a bonus, here's a picture of what my Midnight Commander looks like with this new "skin" on:


    [Aug 10, 2019] Midnight Commander color scheme ~ centosvn

    Aug 10, 2019 | centos-vn.blogspot.com

    Midnight Commander (or "mc") can have transparent panels instead of the ugly, dull default blue. So can "mcedit", its text editor.

    Here's how to do it. Edit the file ~/.mc/ini and add at the end the following:

    [Colors]
    base_color=normal=,default:selected=,:marked=,default:\
    markselect=,:menu=,:menuhot=,:menusel=,:\
    menuhotsel=,:dnormal=,:dfocus=,:dhotnormal=,:dhotfocus=,:\
    input=,:reverse=,:executable=,default:directory=,default:\
    link=,default:device=,default:special=,:core=,:helpnormal=,:\
    helplink=,:helpslink=,:editnormal=,default:

    Note #1: In the above 'code' block, there is only one line below [Colors] . I truncated the line with the backslash because of blogspot rendering issues. You just write all that on one single line, without the "\" (backslashes).

    Note #2: At the end of this line, the " editnormal=,default: " option means that mcedit will have a transparent background in your console, as well.

    To my great shame, I didn't pay too much attention to this blog or to the comments asking for further advice. I found Mateus' comment rather late (just now!) and decided to dig further, in order to find out how exactly to deal with more refined color changes, while still keeping the transparent background (in both in Midnight Commander and its editor).

    So the first thing to know is which are the colors that Midnight Commander supports; the available colors are:

    black
    gray
    lightgray
    white
    red
    brightred
    green
    brightgreen
    blue
    brightblue
    magenta
    brightmagenta
    cyan
    brightcyan
    brown
    yellow
    default

    The " default " color is the one giving out the nice transparency.

    Now, there are certain "components" in Midnight Commander's display that can have their colors altered. Here they are:

    base_color, normal, selected, marked, markselect, errors, menu, reverse, dnormal, dfocus, dhotnormal, dhotfocus, viewunderline, menuhot, menusel, menuhotsel, helpnormal, helpitalic, helpbold, helplink, helpslink, gauge, input, directory, executable, link, stalelink, device, core, special, editnormal, editbold, editmarked, errdhotnormal, errdhotfocus

    Each and every one of these "components" can have its own colors set accordingly to the user's wish. Each component is assigned a color pair and must be followed by a colon (':') in order to separate it from the color pair of the next component. Here's how this basic syntax must look like:

    component=foreground_color,background_color:

    When you start modifying the color scheme in your Midnight Commander configuration file (located at ~/.mc/ini ), you just have to add a section called " [Colors] " and proceed with enumerating the color pairs. So you'd have something like this:

    # the rest of your ~/.mc/ini file

    [Colors]
    component1=foreground_color1,background_color1:...:componentN= foreground_colorN,background_colorN

    For increased readability, I will "truncate" that long line, adding a backslash ('\') to indicate that in fact what follows on the next line should be adjacent to the text on the previous line. This being said, the [Colors] section could look like this:

    # the rest of your ~/.mc/ini file

    [Colors]
    component1=foreground_color1,background_color1:\
    component2=foreground_color2,background_color2:\
    ...
    componentN=foreground_colorN,background_colorN

    Now that you've gotten the hang of this, let's see how the [Colors] section looks like in the default Midnight Commander color scheme (you know, the "ugly" one, with blue and dull cyan):

    IMPORTANT NOTE: For visual impact's sake and due to Blogspot breaking long lines, I wrote each color pair on a single row, followed by a backslash ('\'). Please note that this does NOT work in the ~/.mc/ini file, so the final [Colors] section in your Midnight Commander configuration file MUST be a SINGLE line with no spaces and with each color pair separated from the next one by a colon (':').

    # the rest of your ~/.mc/ini file

    [Colors]
    base_color=lightgray,blue:\
    normal=lightgray,blue:\
    selected=black,cyan:\
    marked=yellow,blue:\
    markselect=yellow,cyan:\
    errors=white,red:\
    menu=white,cyan:\
    reverse=black,lightgray:\
    dnormal=black,lightgray:\
    dfocus=black,cyan:\
    dhotnormal=blue,lightgray:\
    dhotfocus=blue,cyan:\
    viewunderline=brightred,blue:\
    menuhot=yellow,cyan:\
    menusel=white,black:\
    menuhotsel=yellow,black:\
    helpnormal=black,lightgray:\
    helpitalic=red,lightgray:\
    helpbold=blue,lightgray:\
    helplink=black,cyan:\
    helpslink=yellow,blue:\
    gauge=white,black:\
    input=black,cyan:\
    directory=white,blue:\
    executable=brightgreen,blue:\
    link=lightgray,blue:\
    stalelink=brightred,blue:\
    device=brightmagenta,blue:\
    core=red,blue:\
    special=black,blue:\
    editnormal=lightgray,blue:\
    editbold=yellow,blue:\
    editmarked=black,cyan:\
    errdhotnormal=yellow,red:\
    errdhotfocus=yellow,lightgray

    Now let's see. What you want to change first of all is most of the background of these "components", such that the display will be one with a neat looking transparent background. So first of all you might want to make a few changes to these color pairs by replacing the background color "blue" with "default". After doing these changes, your [Colors] section will look a bit like this:

    # the rest of your ~/.mc/ini file

    [Colors]
    base_color=lightgray,default:\
    normal=lightgray,default:\
    selected=black,cyan:\
    marked=yellow,default:\
    markselect=yellow,cyan:\
    errors=white,red:\
    menu=white,cyan:\
    reverse=black,lightgray:\
    dnormal=black,lightgray:\
    dfocus=black,cyan:\
    dhotnormal=blue,lightgray:\
    dhotfocus=blue,cyan:\
    viewunderline=brightred,default:\
    menuhot=yellow,cyan:\
    menusel=white,black:\
    menuhotsel=yellow,black:\
    helpnormal=black,lightgray:\
    helpitalic=red,lightgray:\
    helpbold=blue,lightgray:\
    helplink=black,cyan:\
    helpslink=yellow,default:\
    gauge=white,black:\
    input=black,cyan:\
    directory=white,default:\
    executable=brightgreen,default:\
    link=lightgray,default:\
    stalelink=brightred,default:\
    device=brightmagenta,default:\
    core=red,default:\
    special=black,default:\
    editnormal=lightgray,default:\
    editbold=yellow,default:\
    editmarked=black,cyan:\
    errdhotnormal=yellow,red:\
    errdhotfocus=yellow,lightgray

    Now you've got the basic "Midnight Commander transparent scheme" that was the result of this post.

    Proceeding to Mateus' question about how to change the rest of the colors: it's about the same as before. What he didn't like (and, as a matter of fact, I don't quite like it either) is the dull cyan that's still seen in the following places:

    1. the bottom line (the one displaying the F1...F10 function keys);
    2. the line that signifies the current selection, the "prompt" which shows you on which file/directory you're "on" at a given moment;
    3. the uppermost line (the "menu" line);
    4. the menus themselves, once you open them.
    To "fix" issues 1, 2, and 3 it is sufficient to alter the value of the " selected " parameter. Notice how it is initially

    selected=black,cyan:\

    My personal choice is to replace the background cyan, which I don't really like, with green. To do this, I'll change this color pair to

    selected=black,green:\

    You can, of course, change the foreground color as well. For me, it's alright to keep the foreground (the text) "black". You can change it to whatever suits your taste.

    To "fix" issue number 4 in the list above, you need to change the " menu " parameter. To get it transparent, just change the "cyan" background to "default". Make other adjustments as you see fit. In other words, change

    menu=white,cyan:\

    into, for instance,

    menu=lightgray,default:\

    However, there are a few "leftovers" from the default color scheme.

    One of them is the parameter regarding the hotkeys in the menus (the "underlined" character on most of the menu options, showing you what key you can press in order to access that option faster than by moving to it with the arrow keys). This color pair is called " menuhot ". I changed it from

    menuhot=yellow,cyan:\

    into

    menuhot=yellow,default:\

    Another thing which might bother you is the color of the line in the panel you're in when you've "selected all" files (when you've pressed the "*" key). This parameter is called " markselect ". I changed it from

    markselect=yellow,cyan:\

    into

    markselect=white,green:\

    The color pair of the selected buttons in dialogs is called " dfocus ". I changed mine from

    dfocus=black,cyan:\

    into

    dfocus=black,green:\

    In the "focused" buttons or options, the underlined character is called " dhotfocus ". I changed mine from

    dhotfocus=blue,cyan:\

    into

    dhotfocus=brightgreen,green:\

    since the background color was already green, after I modified the " dfocus " color pair.

    The other buttons or options in the dialogs which have hotkeys assigned to them, but which are not "focused" (the buttons/options that you're not located on at a given moment) are still displayed in blue on a light gray background. This color pair is referred to as " dhotnormal ". Since the blue looks a bit odd there, I changed

    dhotnormal=blue,lightgray:\

    into

    dhotnormal=brightgreen,default:\

    Well, this is nice, in window titles and on normal (unfocused) hotkeys I get the transparent background. The problem now is that the rest of the dialog window is still light gray. To change this (to make the window transparent as well), you only need to alter the " dnormal " color pair, such as changing it from

    dnormal=black,lightgray:\

    into

    dnormal=white,default:\

    You may notice that the input fields stay cyan, as well; you find these fields in quite a lot of dialog boxes. To alter this, I changed

    input=black,cyan:\

    into

    input=black,green:\

    One thing which I consider useful is to have symbolic links displayed in bright cyan (as in the colored listings in the terminal). So I just changed

    link=lightgray,default:\

    into

    link=brightcyan,default:\

    Now, regarding the rest of the color pairs, I don't really know what they do. However, if at some point, after using Midnight Commander with this new, neat, transparent/green color scheme, you notice unwanted leftovers, you can try changing the other color pair values, one at a time, until you determine the troublesome one.

    After making the changes above, my [Colors] section in ~/.mc/ini now looks like this:

    [Colors]
    base_color=lightgray,default:\
    normal=lightgray,default:\
    selected=black,green:\
    marked=yellow,default:\
    markselect=white,green:\
    errors=white,red:\
    menu=lightgray,default:\
    reverse=black,lightgray:\
    dnormal=white,default:\
    dfocus=black,green:\
    dhotnormal=brightgreen,default:\
    dhotfocus=brightgreen,green:\
    viewunderline=brightred,default:\
    menuhot=yellow,default:\
    menusel=white,black:\
    menuhotsel=yellow,black:\
    helpnormal=black,lightgray:\
    helpitalic=red,lightgray:\
    helpbold=blue,lightgray:\
    helplink=black,cyan:\
    helpslink=yellow,default:\
    gauge=white,black:\
    input=black,green:\
    directory=white,default:\
    executable=brightgreen,default:\
    link=brightcyan,default:\
    stalelink=brightred,default:\
    device=brightmagenta,default:\
    core=red,default:\
    special=black,default:\
    editnormal=lightgray,default:\
    editbold=yellow,default:\
    editmarked=black,cyan:\
    errdhotnormal=yellow,red:\
    errdhotfocus=yellow,lightgray

    I need to direct you to the " IMPORTANT NOTE " above. The final [Colors] section above is written like this - one pair on each row, followed by a backslash - for clarity's sake. The actual final [Colors] section in your ~/.mc/ini file will have to be a one-liner, with no blanks and no backslashes. So it will probably look similar to this:

    base_color=lightgray,default:normal=lightgray,default:selected=black,green:marked=yellow,default:markselect=white,green:errors=white,red:menu=lightgray,default:reverse=black,lightgray:dnormal=white,default:dfocus=black,green:dhotnormal=brightgreen,default:dhotfocus=brightgreen,green:viewunderline=brightred,default:menuhot=yellow,default:menusel=white,black:menuhotsel=yellow,black:helpnormal=black,lightgray:helpitalic=red,lightgray:helpbold=blue,lightgray:helplink=black,cyan:helpslink=yellow,default:gauge=white,black:input=black,green:directory=white,default:executable=brightgreen,default:link=brightcyan,default:stalelink=brightred,default:device=brightmagenta,default:core=red,default:special=black,default:editnormal=lightgray,default:editbold=yellow,default:editmarked=black,cyan:errdhotnormal=yellow,red:errdhotfocus=yellow,lightgray

    Now, the next time you start mc , the new color scheme will take effect.

    As a bonus, here's a picture of what my Midnight Commander looks like with this new "skin" on:


    [Aug 10, 2019] Midnight Commander colors and themes

    Aug 10, 2019 | ajnasz.hu

    Koszti Lajos Midnight Commander is the most popular file manager on Unix-like systems. It's fast and it has all the features you need. But it's only blue, and we know that everyone loves eye candy; everyone likes customizing his/her own desktop. But is there any way to customize mc ?
    Yes, and I'll try to show you how you can create your own theme .

    You can change the Midnight Commander colors if you edit the ~/.mc/ini file, where you have to add a new section, named [Colors] . You should define the new colors in this section, for example:

    [Colors]
    base_color=lightgray,green:normal=green,default:selected=white,gray ...

    As you see, it has a simple syntax:

    <keyword>=<foregroundcolor>,<backgroundcolor>:<keyword>= ...

    The colors are optional, so you can use this:

    [Colors]
    base_color=lightgray,green:normal=green:selected=,gray ...

    It's not exactly the same as the first version!

    Fine, you can change some colors of the file manager, but what are the keywords? They are the component names you've already seen in the [Colors] sections above: base_color, normal, selected, marked, markselect, errors, menu, and so on.

    And what are the colors? I don't know all of them, but here are some:
    white, gray, blue, green, yellow, magenta, cyan, red, brown, brightgreen, brightblue, brightmagenta, brightcyan, brightred, default

    Here is the config that I use:

    [Colors]
    base_color=lightgray,green:normal=green,default:selected=white,gray:marked=yellow,default:markselect=yellow,gray:directory=blue,default:executable=brightgreen,default:link=cyan,default:device=brightmagenta,default:special=lightgray,default:errors=red,default:reverse=green,default:gauge=green,default:input=white,gray:dnormal=green,gray:dfocus=brightgreen,gray:dhotnormal=cyan,gray:dhotfocus=brightcyan,gray:menu=green,default:menuhot=cyan,default:menusel=green,gray:menuhotsel=cyan,default:helpnormal=cyan,default:editnormal=green,default:editbold=blue,default:editmarked=gray,blue:stalelink=red,default

    Screenshot of my redesigned Midnight Commander

    On the screenshot you can see that the directory color is blue, the files are green, the executable files are brightgreen, and the selected line is white on a gray background.

    And another one, which I've been using recently:

    [Colors]
    base_color=lightgray,blue:normal=blue,default:selected=white,brightblue:marked=yellow,default:markselect=yellow,gray:directory=brightblue,default:executable=brightgreen,default:link=cyan,default:device=brightmagenta,default:special=lightgray,default:errors=red,default:reverse=green,default:gauge=green,default:input=white,gray:dnormal=green,gray:dfocus=brightgreen,gray:dhotnormal=cyan,gray:dhotfocus=brightcyan,gray:menu=green,default:menuhot=cyan,default:menusel=green,gray:menuhotsel=cyan,default:helpnormal=cyan,default:editnormal=green,default:editbold=blue,default:editmarked=gray,blue:stalelink=red,default

    Screenshot of my redesigned Midnight Commander

    And here is a small shell script which will help you test your new theme:

    #!/bin/sh
    mc --colors normal=green,default:selected=brightmagenta,gray:marked=yellow,default:markselect=yellow,gray:directory=blue,default:executable=brightgreen,default:link=cyan,default:device=brightmagenta,default:special=lightgray,default:errors=red,default:reverse=green,default:gauge=green,default:input=white,gray:dnormal=green,gray:dfocus=brightgreen,gray:dhotnormal=cyan,gray:dhotfocus=brightcyan,gray:menu=green,default:menuhot=cyan,default:menusel=green,gray:menuhotsel=cyan,default:helpnormal=cyan,default:editnormal=green,default:editbold=blue,default:editmarked=gray,blue:stalelink=red,default

    Download the shell script to make your own mc theme

    Save it as mccolortest.sh, make it executable with chmod +x mccolortest.sh , and run it with ./mccolortest.sh . If you want to change a color, just edit this file. When you're done, copy the colors and paste them below the [Colors] section in ~/.mc/ini . If it doesn't exist, create it yourself.

    For more information on redesigning mc, check its manual page.


    Mauricio • 2 months ago

    Awesome!
    Thank you for your clear explanation.

    Anonymous • 6 years ago

    Thank you for theme. I tried your last theme and it is exactly what I was searching for.

    Anonymous • 6 years ago

    Also, in 4.8.3 here, I copied the first example scheme line and my colors are different. I can't even set the background of the select bar to gray (or "grey"): it gets replaced with black. Also, the panel headings remain blue here, unlike the (first) screenshot, and I can see no corresponding tag in the line anyway.

    Good intro, regardless. Someone should post a pointer to a more up-to-date one, though, as Google seems to find this old thread within the top few hits. Awesome! ;)

    --lunakid

    Ajnasz Anonymous • 6 years ago

    The colors depend on the color settings of your terminal. I no longer have the settings I used when I posted this article, but here are my current ones. If I remember correctly, they're similar. Put this into your .Xdefaults:

    *background: #000000
    *foreground: #EEEEEC
    
    ! Default
    ! 0: black
    *color0: #1C1C1C
    *color8: #333333
    ! 1: red
    *color1: #C14242
    *color9: #EF2929
    ! 2: green
    *color2: #6AA037
    *color10: #9DCF70
    ! 3: yellow
    *color3: #CFAB2F
    *color11: #FCDA4F
    ! 4: blue
    *color4: #2D578A
    *color12: #729FCF
    ! 5: magenta
    *color5: #A85EB4
    *color13: #AD7FA8
    ! 6: cyan
    *color6: #2F8D8F
    *color14: #34E2E2
    ! 7: white
    *color7: #D3D7CF
    *color15: #EEEEEC
    
    Anonymous • 7 years ago

    The ~/.mc dir is now ignored. Now it's ~/.config/mc ;)

    Anonymous • 10 years ago

    Midnight Commander supports skins starting from 4.7.0-pre3 version. You can download a skin with black as a main color from here:
    http://zool.in.ua/software/bluemoon/

    Anonymous • 10 years ago

    I am using MC on my ASUS WL-500GP router and I am developing PHP scripts on it. But as far as I can see, MC in OpenWrt (Kamikaze 8.09) does not use syntax highlighting, and it is very uncomfortable.
    Do you know how I could turn it on? I have already downloaded the php.syntax file and put it into the /usr/share/syntax dir, but it does not seem to work. Is it possible that some support is not compiled into my version, or does the syntax file have to be compiled to another format?
    Br Zé.

    Anonymous Anonymous • 10 years ago

    I found it. In ~/.mc/cedit/Syntax there must be this:
    file ..\*\\.(php|PHP)$ PHP\sFile
    include php.syntax

    and the php.syntax file must be placed in the same dir (copied out from a source distribution).

    Anonymous • 10 years ago

    Hey Ajnasz, your color theme is so nice that it keeps my eyes on my PC longer than usual. Well, I don't have much time to explore these tricks further. I think your taste is very cool. If you have any other themes, I'd like to try them. :-)

    Regards,

    Dedi

    Anonymous • 10 years ago

    Any chance to change the color of the files by extension?

    Anonymous Anonymous • 10 years ago

    Midnight Commander supports this starting from 4.7.0-pre3 version.

    Ajnasz Anonymous • 10 years ago

    I didn't find anything about it. By the way, since the extension doesn't determine the file type on UNIX-like systems, it wouldn't make much sense to do it.

    Anonymous Ajnasz • 9 years ago

    Don't be silly. Mp3 is just music, txt is text, doc is a document. The only thing which is not exactly determinable is executables, but whatever, they have the +x flag.

    Anonymous • 11 years ago

    Also, you should know that most modern terminal applications allow you to redefine the exact shade of those 16 colors.

    Some of them (such as the Gnome or KDE terminals) may have a place under their preferences where you can redefine the colors.

    Older terminals, such as aterm, use ~/.Xdefaults for this. You can edit that file and add lines like this: "aterm*color1: OrangeRed" (without the quotes). What that does is tell aterm that "color1" (which was red) should now be "OrangeRed". See /usr/share/X11/rgb.txt for valid color names. You can use *color0 through *color15. So when you say "red" in MC's ini file, and you use aterm, it will be replaced by color1 from ~/.Xdefaults and rendered as OrangeRed. (Sorry, I don't remember the mappings between the names used by MC and 0-15 in Xdefaults by heart.)

    Anonymous • 12 years ago

    On the same subject:
    http://www.zagura.ro/index....

    [Jul 29, 2019] A Guide to Kill, Pkill and Killall Commands to Terminate a Process in Linux

    Jul 26, 2019 | www.tecmint.com
    ... ... ...

    How about killing a process using the process name?

    You must be sure of the process name before killing it; entering a wrong process name may get you into trouble.

    # pkill mysqld
    
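    Before killing by pattern, it is safer to preview what the pattern matches; a small sketch using pgrep (mysqld as in the example above):

    pgrep -l mysqld    # list the PIDs and names the pattern matches
    pkill mysqld       # proceed only if that list looks right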
    Kill more than one process at a time.
    # kill PID1 PID2 PID3
    
    or
    
    # kill -9 PID1 PID2 PID3
    
    or
    
    # kill -SIGKILL PID1 PID2 PID3
    
    What if a process has too many instances and a number of child processes? For that we have the ' killall ' command. Unlike kill, it takes a process name as its argument in place of a process number.

    Syntax:

    # killall [signal or option] Process Name
    

    To kill all mysql instances along with child processes, use the command as follows.

    # killall mysqld
    

    You can always verify whether the process is still running using any of the commands below.

    # service mysql status
    # pgrep mysql
    # ps -aux | grep mysql
    


    [Jul 29, 2019] Locate Command in Linux

    Jul 25, 2019 | linuxize.com

    ... ... ...

    The locate command also accepts patterns containing globbing characters such as the wildcard character * . When the pattern contains no globbing characters the command searches for *PATTERN* , that's why in the previous example all files containing the search pattern in their names were displayed.

    The wildcard is a symbol used to represent zero, one or more characters. For example, to search for all .md files on the system you would use:

    locate *.md
    

    To limit the search results use the -n option followed by the number of results you want to be displayed. For example, the following command will search for all .py files and display only 10 results:

    locate -n 10 *.py
    

    By default, locate performs case-sensitive searches. The -i ( --ignore-case ) option tells locate to ignore case and run a case-insensitive search.

    locate -i readme.md
    
    /home/linuxize/p1/readme.md
    /home/linuxize/p2/README.md
    /home/linuxize/p3/ReadMe.md
    

    To display the count of all matching entries, use the -c ( --count ) option. The following command would return the number of all files containing .bashrc in their names:

    locate -c .bashrc
    
    6
    

    By default, locate doesn't check whether the found files still exist on the file system. If you deleted a file after the latest database update, it will still be included in the search results as long as it matches the search pattern.

    To display only the names of the files that exist at the time locate is run, use the -e ( --existing ) option. For example, the following would return only the existing .json files:

    locate -e *.json
    
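    The locate database itself is refreshed periodically (typically by a cron job), but you can update it by hand before searching; assuming the mlocate implementation is installed:

    sudo updatedb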

    If you need to run a more complex search you can use the -r ( --regexp ) option which allows you to search using a basic regexp instead of patterns. This option can be specified multiple times.
    For example, to search for all .mp4 and .avi files on your system and ignore case you would run:

    locate --regex -i "(\.mp4|\.avi)"
    

    [Jul 28, 2019] command line - How do I extract a specific file from a tar archive - Ask Ubuntu

    Jul 28, 2019 | askubuntu.com

    CMCDragonkai, Jun 3, 2016 at 13:04

    1. Using the Command-line tar

    Yes, just give the full stored path of the file after the tarball name.

    Example: suppose you want file etc/apt/sources.list from etc.tar :

    tar -xf etc.tar etc/apt/sources.list

    This will extract sources.list and create the directories etc/apt under the current directory.
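    If you would rather not create those directories under the current directory, tar's -C option switches to another directory before extracting; a small sketch:

    tar -xf etc.tar -C /tmp etc/apt/sources.list

    The file then lands in /tmp/etc/apt/sources.list.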

    2. Extract it with the Archive Manager

    Open the tar in Archive Manager from Nautilus, go down into the folder hierarchy to find the file you need, and extract it.

    3. Using Nautilus/Archive-Mounter

    Right-click the tar in Nautilus, and select Open with ArchiveMounter.

    The tar will now appear similar to a removable drive on the left, and you can explore/navigate it like a normal drive and drag/copy/paste any file(s) you need to any destination.

    [Jul 28, 2019] iso - midnight commander rules for accessing archives through VFS - Unix Linux Stack Exchange

    Jul 28, 2019 | unix.stackexchange.com

    ,

    Midnight Commander uses a virtual filesystem ( VFS ) for displaying files, such as the contents of a .tar.gz archive or of an .iso image. This is configured in mc.ext with rules such as this one ( Open is Enter , View is F3 ):
    regex/\.([iI][sS][oO])$
        Open=%cd %p/iso9660://
        View=%view{ascii} isoinfo -d -i %f
    

    When I press Enter on an .iso file, mc will open the .iso and I can browse individual files. This is very useful.

    Now my question: I have also files which are disk images, i.e. created with pv /dev/sda1 > sda1.img

    I would like mc to "browse" the files inside these images in the same fashion as .iso .

    Is this possible? What would such a rule look like?

    [Jul 28, 2019] Use Midnight Commander like a pro

    Jul 28, 2019 | klimer.eu

    May 1, 2015

    If you've used an *nix system, at some point you've stumbled upon Midnight Commander , a file manager based on the venerable Norton Commander. You're probably familiar with the basic operations ( F5 for copying, F6 for moving, F8 for deleting, etc.) and how to switch panels (ummm, the Tab key). But mc offers so much more than that. This article aims to show all the useful (YMMV) shortcuts and functionalities that are often overlooked. Most of them can be accessed using the menu ( F9 ), but who has the time to do that?

    Before we get started, let's establish some facts. This article was written and tested on the following software:

    Oh, and make sure you're running a modern and UTF-8 friendly terminal - for example, rxvt-unicode.

    Hold your horses

    There's actually one thing I'd recommend doing before you run mc . mc has the ability to exit to its current directory. Meaning, you can navigate the filesystem using mc (sometimes it's easier than cd ing into that one directory buried deep down somewhere ) and when you quit mc ( F10 ), your shell will automagically cd to that directory. This is done thanks to the mc-wrapper script that should be bundled with your installation of mc . The exact location is dependent on your distribution - in mine (Gentoo) it's /usr/libexec/mc/ , in Ubuntu supposedly it's in /usr/share/mc/bin/ . Once found, modify your ~/.bashrc :

    alias mc='. /usr/libexec/mc/mc-wrapper.sh'
    

    Restart your shell, launch mc , change to another directory, exit and your shell should be set to that new directory.

    Virtual File System (VFS)

    mc has a concept known as Virtual File System. Try "entering" an archive ( *.tar.gz , *.rpm or even *.jar ) - you'll be able to browse the contents of the archive like a normal folder, without unpacking it first. You extract selected files from the archive by just copying them to the other panel. Bonus points: try "entering" a *.patch file.

    This concept is even more powerful when you realize that remote locations can be viewed the same way. A quick way to browse an FTP location is to just cd to it: cd ftp://mirrors.tera-byte.com/pub/gentoo (first Gentoo FTP mirror I found). You'll be able to interact with files as you normally do. To exit this remote location, cd to a local directory. Just typing cd will suffice as it will take you to your home directory.

    VFS works for SFTP and Samba shares too. Check the manpages for more information on how to specify user/pass, etc.
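    Both kinds of VFS paths can also be typed directly at the mc command line. A hedged sketch (the exact VFS prefixes, such as utar and sftp, depend on your mc version and build options):

    cd ./backup.tar.gz/utar://
    cd sftp://user@example.com/var/log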


    Well, that was a lot to take in. Of course, this list is not complete (that's what man mc is there for), but I've selected the commands and functionalities that are the most useful to me . Embrace the ones you find useful, forget the rest and learn about the other ones I've missed!

    [Jul 28, 2019] Bartosz Kosarzycki's blog Midnight Commander how to compress a file-directory; Make a tar archive with midnight commander

    Jul 28, 2019 | kosiara87.blogspot.com


    To compress a file in Midnight Commander (e.g. to make a tar.gz archive), navigate to the directory you want to pack and press 'F2'. This will bring up the 'User menu'. Choose the option 'Compress the current subdirectory'. This will compress the WHOLE directory you're currently in - not the highlighted directory. A rough shell equivalent is sketched below.
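    That user-menu entry just runs tar under the hood; this is a sketch only, since the actual user-menu script and output path vary by distribution:

    # pack the directory you are currently in (run from inside it)
    tar -czf /tmp/archive.tar.gz -C .. "$(basename "$PWD")"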

    [Jul 26, 2019] How To Check Swap Usage Size and Utilization in Linux by Vivek Gite

    Jul 26, 2019 | www.cyberciti.biz

    The procedure to check swap space usage and size in Linux is as follows:

    1. Open a terminal application.
    2. To see swap size in Linux, type the command: swapon -s .
    3. You can also refer to the /proc/swaps file to see swap areas in use on Linux.
    4. Type free -m to see both your ram and your swap space usage in Linux.
    5. Finally, one can use the top or htop command to look for swap space utilization on Linux too.
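    As a quick one-liner combining the data from step 3, you can compute the swap usage percentage; a small sketch reading /proc/meminfo (it prints nothing if no swap is configured):

    awk '/SwapTotal/ {t=$2} /SwapFree/ {f=$2} END {if (t) printf "swap used: %.1f%%\n", (t-f)*100/t}' /proc/meminfo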
    How to Check Swap Space in Linux using /proc/swaps file

    Type the following cat command to see total and used swap size:
    # cat /proc/swaps
    Sample outputs:

    Filename                           Type            Size    Used    Priority
    /dev/sda3                               partition       6291448 65680   0
    

    Another option is to type the grep command as follows:
    grep Swap /proc/meminfo

    SwapCached:            0 kB
    SwapTotal:        524284 kB
    SwapFree:         524284 kB
    
    Look for swap space in Linux using swapon command

    Type the following command to show swap usage summary by device
    # swapon -s
    Sample outputs:

    Filename                           Type            Size    Used    Priority
    /dev/sda3                               partition       6291448 65680   0
    
    Use free command to monitor swap space usage

    Use the free command as follows:
    # free -g
    # free -k
    # free -m

    Sample outputs:

                 total       used       free     shared    buffers     cached
    Mem:         11909      11645        264          0        324       8980
    -/+ buffers/cache:       2341       9568
    Swap:         6143         64       6079
    
    See swap size in Linux using vmstat command

    Type the following vmstat command:
    # vmstat
    # vmstat 1 5

    ... ... ...

    Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

    [Jul 26, 2019] Cheat.sh Shows Cheat Sheets On The Command Line Or In Your Code Editor

    The choice of shell as a programming language is strange, but the idea is good...
    Notable quotes:
    "... The tool is developed by Igor Chubin, also known for its console-oriented weather forecast service wttr.in , which can be used to retrieve the weather from the console using only cURL or Wget. ..."
    Jul 26, 2019 | www.linuxuprising.com

    While it does have its own cheat sheet repository too, the project is actually concentrated around the creation of a unified mechanism to access well developed and maintained cheat sheet repositories.

    The tool is developed by Igor Chubin, also known for its console-oriented weather forecast service wttr.in , which can be used to retrieve the weather from the console using only cURL or Wget.

    It's worth noting that cheat.sh is not new. In fact it had its initial commit around May, 2017, and is a very popular repository on GitHub. But I personally only came across it recently, and I found it very useful, so I figured there must be some Linux Uprising readers who are not aware of this cool gem.

    cheat.sh features & more
    cheat.sh major features:

    The command line client features a special shell mode with a persistent queries context and readline support. It also has a query history, it integrates with the clipboard, supports tab completion for shells like Bash, Fish and Zsh, and it includes the stealth mode I mentioned in the cheat.sh features.

    The web, curl and cht.sh (command line) interfaces all make use of https://cheat.sh/ but if you prefer, you can self-host it .

    It should be noted that each editor plugin supports a different feature set (configurable server, multiple answers, toggle comments, and so on). You can view a feature comparison of each cheat.sh editor plugin on the Editors integration section of the project's GitHub page.

    Want to contribute a cheat sheet? See the cheat.sh guide on editing or adding a new cheat sheet.

    Interested in bookmarking commands instead? You may want to give Marker, a command bookmark manager for the console , a try.

    cheat.sh curl / command line client usage examples
    Examples of using cheat.sh via the curl interface (this requires having curl installed, as you'd expect) from the command line:

    Show the tar command cheat sheet:

    curl cheat.sh/tar
    

    Example with output:
    $ curl cheat.sh/tar
    # To extract an uncompressed archive:
    tar -xvf /path/to/foo.tar
    
    # To create an uncompressed archive:
    tar -cvf /path/to/foo.tar /path/to/foo/
    
    # To extract a .gz archive:
    tar -xzvf /path/to/foo.tgz
    
    # To create a .gz archive:
    tar -czvf /path/to/foo.tgz /path/to/foo/
    
    # To list the content of an .gz archive:
    tar -ztvf /path/to/foo.tgz
    
    # To extract a .bz2 archive:
    tar -xjvf /path/to/foo.tgz
    
    # To create a .bz2 archive:
    tar -cjvf /path/to/foo.tgz /path/to/foo/
    
    # To extract a .tar in specified Directory:
    tar -xvf /path/to/foo.tar -C /path/to/destination/
    
    # To list the content of an .bz2 archive:
    tar -jtvf /path/to/foo.tgz
    
    # To create a .gz archive and exclude all jpg,gif,... from the tgz
    tar czvf /path/to/foo.tgz --exclude=\*.{jpg,gif,png,wmv,flv,tar.gz,zip} /path/to/foo/
    
    # To use parallel (multi-threaded) implementation of compression algorithms:
    tar -z ... -> tar -Ipigz ...
    tar -j ... -> tar -Ipbzip2 ...
    tar -J ... -> tar -Ipixz ...
    

    cht.sh also works instead of cheat.sh:
    curl cht.sh/tar
    

    Want to search for a keyword in all cheat sheets? Use:
    curl cheat.sh/~keyword
    

    List the Python programming language cheat sheet for random list :
    curl cht.sh/python/random+list
    

    Example with output:
    $ curl cht.sh/python/random+list
    #  python - How to randomly select an item from a list?
    #  
    #  Use random.choice
    #  (https://docs.python.org/2/library/random.html#random.choice):
    
    import random
    
    foo = ['a', 'b', 'c', 'd', 'e']
    print(random.choice(foo))
    
    #  For cryptographically secure random choices (e.g. for generating a
    #  passphrase from a wordlist), use random.SystemRandom
    #  (https://docs.python.org/2/library/random.html#random.SystemRandom)
    #  class:
    
    import random
    
    foo = ['battery', 'correct', 'horse', 'staple']
    secure_random = random.SystemRandom()
    print(secure_random.choice(foo))
    
    #  [Pēteris Caune] [so/q/306400] [cc by-sa 3.0]
    

    Replace python with some other programming language supported by cheat.sh, and random+list with the cheat sheet you want to show.
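    For instance, following that substitution pattern (the exact topics available depend on the upstream cheat sheet repositories):

    curl cht.sh/bash/loop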

    Want to eliminate the comments from your answer? Add ?Q at the end of the query (below is an example using the same /python/random+list):

    $ curl cht.sh/python/random+list?Q
    import random
    
    foo = ['a', 'b', 'c', 'd', 'e']
    print(random.choice(foo))
    
    import random
    
    foo = ['battery', 'correct', 'horse', 'staple']
    secure_random = random.SystemRandom()
    print(secure_random.choice(foo))
    

    For more flexibility and tab completion you can use cht.sh, the command line cheat.sh client; you'll find instructions for how to install it further down this article. Examples of using the cht.sh command line client:

    Show the tar command cheat sheet:

    cht.sh tar
    

    List the Python programming language cheat sheet for random list :
    cht.sh python random list
    

    There is no need to use quotes with multiple keywords.

    You can start the cht.sh client in a special shell mode using:

    cht.sh --shell
    

    And then you can start typing your queries. Example:
    $ cht.sh --shell
    cht.sh> bash loop
    

    If all your queries are about the same programming language, you can start the client in the special shell mode, directly in that context. As an example, start it with the Bash context using:
    cht.sh --shell bash
    

    Example with output:
    $ cht.sh --shell bash
    cht.sh/bash> loop
    ...........
    cht.sh/bash> switch case
    

    Want to copy the previously listed answer to the clipboard? Type c , then press Enter to copy the whole answer, or type C and press Enter to copy it without comments.

    Type help in the cht.sh interactive shell mode to see all available commands. Also look under the Usage section from the cheat.sh GitHub project page for more options and advanced usage.

    How to install cht.sh command line client
    You can use cheat.sh in a web browser; from the command line with the help of curl, without having to install anything else, as explained above; as a code editor plugin; or via its command line client, which has the extra features I already mentioned. The steps below are for installing the cht.sh command line client.

    If you'd rather install a code editor plugin for cheat.sh, see the Editors integration page.

    1. Install dependencies.

    To install the cht.sh command line client, the curl command line tool will be used, so this needs to be installed on your system. Another dependency is rlwrap , which is required by the cht.sh special shell mode. Install these dependencies as follows.

    # Debian / Ubuntu:
    sudo apt install curl rlwrap

    # Fedora:
    sudo dnf install curl rlwrap

    # Arch Linux:
    sudo pacman -S curl rlwrap

    # openSUSE:
    sudo zypper install curl rlwrap

    The packages seem to be named the same on most (if not all) Linux distributions, so if your Linux distribution is not on this list, just install the curl and rlwrap packages using your distro's package manager.

    2. Download and install the cht.sh command line interface.

    You can install this either for your user only (so only you can run it), or for all users:

    # for your user only (assumes ~/.bin exists and is in your PATH):
    curl https://cht.sh/:cht.sh > ~/.bin/cht.sh
    chmod +x ~/.bin/cht.sh

    # for all users:
    curl https://cht.sh/:cht.sh | sudo tee /usr/local/bin/cht.sh
    sudo chmod +x /usr/local/bin/cht.sh

    If the first command appears to have frozen displaying only the cURL output, press the Enter key and you'll be prompted to enter your password in order to save the file to /usr/local/bin .

    You may also download and install the cheat.sh command completion for Bash or Zsh:

    # Bash:
    mkdir ~/.bash.d
    curl https://cheat.sh/:bash_completion > ~/.bash.d/cht.sh
    echo ". ~/.bash.d/cht.sh" >> ~/.bashrc

    # Zsh:
    mkdir ~/.zsh.d
    curl https://cheat.sh/:zsh > ~/.zsh.d/_cht
    echo 'fpath=(~/.zsh.d/ $fpath)' >> ~/.zshrc

    Open a new shell/terminal and it will load the cheat.sh completion.

    [Jul 26, 2019] What Is /dev/null in Linux by Alexandru Andrei

    Images removed...
    Jul 23, 2019 | www.maketecheasier.com
    ... ... ...

    In technical terms, "/dev/null" is a virtual device file. As far as programs are concerned, these are treated just like real files. Utilities can request data from this kind of source, and the operating system feeds them data. But, instead of reading from disk, the operating system generates this data dynamically. An example of such a file is "/dev/zero."

    In this case, however, you will write to a device file. Whatever you write to "/dev/null" is discarded, forgotten, thrown into the void. To understand why this is useful, you must first have a basic understanding of standard output and standard error in Linux or *nix type operating systems.

    Related : How to Use the Tee Command in Linux

    stdout and stderr

    A command-line utility can generate two types of output. Standard output is sent to stdout. Errors are sent to stderr.

    By default, stdout and stderr are associated with your terminal window (or console). This means that anything sent to stdout and stderr is normally displayed on your screen. But through shell redirections, you can change this behavior. For example, you can redirect stdout to a file. This way, instead of displaying output on the screen, it will be saved to a file for you to read later – or you can redirect stdout to a physical device, say, a digital LED or LCD display.

    A full article about pipes and redirections is available if you want to learn more.

    Related : 12 Useful Linux Commands for New User

    Use /dev/null to Get Rid of Output You Don't Need

    Since there are two types of output, standard output and standard error, the first use case is to filter out one type or the other. It's easier to understand through a practical example. Let's say you're looking for a string in "/sys" to find files that refer to power settings.

    grep -r power /sys/
    

    There will be a lot of files that a regular, non-root user cannot read. This will result in many "Permission denied" errors.

    These clutter the output and make it harder to spot the results that you're looking for. Since "Permission denied" errors are part of stderr, you can redirect them to "/dev/null."

    grep -r power /sys/ 2>/dev/null
    

    As you can see, this is much easier to read.

    In other cases, it might be useful to do the reverse: filter out standard output so you can only see errors.

    ping google.com 1>/dev/null
    

    The screenshot above shows that, without redirecting, ping displays its normal output when it can reach the destination machine. In the second command, nothing is displayed while the network is online, but as soon as it gets disconnected, only error messages are displayed.

    You can redirect both stdout and stderr to two different locations.

    ping google.com 1>/dev/null 2>error.log
    

    In this case, stdout messages won't be displayed at all, and error messages will be saved to the "error.log" file.

    Redirect All Output to /dev/null

    Sometimes it's useful to get rid of all output. There are two ways to do this.

    grep -r power /sys/ >/dev/null 2>&1
    

    The string >/dev/null means "send stdout to /dev/null," and the second part, 2>&1 , means send stderr to stdout. In this case you have to refer to stdout as "&1" instead of simply "1." Writing "2>1" would just redirect stderr to a file named "1."

    What's important to note here is that the order is important. If you reverse the redirect parameters like this:

    grep -r power /sys/ 2>&1 >/dev/null
    

    it won't work as intended. That's because as soon as 2>&1 is interpreted, stderr is sent to stdout and displayed on screen. Next, stdout is suppressed when sent to "/dev/null." The final result is that you will see errors on the screen instead of suppressing all output. If you can't remember the correct order, there's a simpler redirect that is much easier to type:

    grep -r power /sys/ &>/dev/null
    

    In this case, &>/dev/null is equivalent to saying "redirect both stdout and stderr to this location."

    Other Examples Where It Can Be Useful to Redirect to /dev/null

    Say you want to see how fast your disk can read sequential data. The test is not extremely accurate but accurate enough. You can use dd for this, but dd either outputs to stdout or can be instructed to write to a file. With of=/dev/null you can tell dd to write to this virtual file. You don't even have to use shell redirections here. if= specifies the location of the input file to be read; of= specifies the name of the output file, where to write.

    dd if=debian-disk.qcow2 of=/dev/null status=progress bs=1M iflag=direct
    

    In some scenarios, you may want to see how fast you can download from a server. But you don't want to write to your disk unnecessarily. Simply enough, don't write to a regular file, write to "/dev/null."

    wget -O /dev/null http://ftp.halifax.rwth-aachen.de/ubuntu-releases/18.04/ubuntu-18.04.2-desktop-amd64.iso
    
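    One more everyday use before wrapping up: reading from "/dev/null" returns end-of-file immediately, so redirecting it into a file empties the file without deleting it (the log file name below is just an illustration):

    cat /dev/null > application.log
    # equivalent shorter form in most shells:
    : > application.log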
    Conclusion

    Hopefully, the examples in this article can inspire you to find your own creative ways to use "/dev/null."

    Know an interesting use-case for this special device file? Leave a comment below and share the knowledge!

    [Jul 26, 2019] How to check open ports in Linux using the CLI by Vivek Gite

    Jul 26, 2019 | www.cyberciti.biz

    Using netstat to list open ports

    Type the following netstat command
    sudo netstat -tulpn | grep LISTEN

    ... ... ...

    For example, TCP port 631 is opened by the cupsd process, and cupsd is listening only on the loopback address (127.0.0.1). Similarly, TCP port 22 is opened by the sshd process, and sshd is listening on all IP addresses for ssh connections:

    Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name 
    tcp   0      0      127.0.0.1:631           0.0.0.0:*               LISTEN      0          43385      1821/cupsd  
    tcp   0      0      0.0.0.0:22              0.0.0.0:*               LISTEN      0          44064      1823/sshd
    

    Where,

    -t : select TCP sockets
    -u : select UDP sockets
    -l : show only listening sockets
    -p : show the PID and name of the program each socket belongs to
    -n : show numerical addresses instead of resolving host names

    Use ss to list open ports

    The ss command is used to dump socket statistics. It allows showing information similar to netstat. It can display more TCP and state information than other tools. The syntax is:
    sudo ss -tulpn
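    To narrow the output to a single service, filter it just as with netstat; for example, for sshd:

    sudo ss -tulpn | grep ssh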

    ... ... ...

    Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

    [Jun 23, 2019] Utilizing multi core for tar+gzip-bzip compression-decompression

    Highly recommended!
    Notable quotes:
    "... There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files. ..."
    "... You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use. ..."
    Jun 23, 2019 | stackoverflow.com

    user1118764 , Sep 7, 2012 at 6:58

    I normally compress using tar zcvf and decompress using tar zxvf (using gzip due to habit).

    I've recently gotten a quad core CPU with hyperthreading, so I have 8 logical cores, and I notice that many of the cores are unused during compression/decompression.

    Is there any way I can utilize the unused cores to make it faster?

    Warren Severin , Nov 13, 2017 at 4:37

    The solution proposed by Xiong Chiamiov above works beautifully. I had just backed up my laptop with .tar.bz2 and it took 132 minutes using only one cpu thread. Then I compiled and installed tar from source: gnu.org/software/tar I included the options mentioned in the configure step: ./configure --with-gzip=pigz --with-bzip2=lbzip2 --with-lzip=plzip I ran the backup again and it took only 32 minutes. That's better than 4X improvement! I watched the system monitor and it kept all 4 cpus (8 threads) flatlined at 100% the whole time. THAT is the best solution. – Warren Severin Nov 13 '17 at 4:37

    Mark Adler , Sep 7, 2012 at 14:48

    You can use pigz instead of gzip, which does gzip compression on multiple cores. Instead of using the -z option, you would pipe it through pigz:
    tar cf - paths-to-archive | pigz > archive.tar.gz

    By default, pigz uses the number of available cores, or eight if it could not query that. You can ask for more with -p n, e.g. -p 32. pigz has the same options as gzip, so you can request better compression with -9. E.g.

    tar cf - paths-to-archive | pigz -9 -p 32 > archive.tar.gz

    user788171 , Feb 20, 2013 at 12:43

    How do you use pigz to decompress in the same fashion? Or does it only work for compression?

    Mark Adler , Feb 20, 2013 at 16:18

    pigz does use multiple cores for decompression, but only with limited improvement over a single core. The deflate format does not lend itself to parallel decompression.

    The decompression portion must be done serially. The other cores for pigz decompression are used for reading, writing, and calculating the CRC. When compressing on the other hand, pigz gets close to a factor of n improvement with n cores.
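    Even with that limitation, routing decompression through pigz keeps the reading, writing, and CRC work overlapped; a minimal sketch:

    pigz -dc archive.tar.gz | tar xf -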

    Garrett , Mar 1, 2014 at 7:26

    The hyphen here is stdout (see this page ).

    Mark Adler , Jul 2, 2014 at 21:29

    Yes. 100% compatible in both directions.

    Mark Adler , Apr 23, 2015 at 5:23

    There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files.

    Jen , Jun 14, 2013 at 14:34

    You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use.

    For example use:

    tar -c --use-compress-program=pigz -f tar.file dir_to_zip

    Valerio Schiavoni , Aug 5, 2014 at 22:38

    Unfortunately by doing so the concurrent feature of pigz is lost. You can see for yourself by executing that command and monitoring the load on each of the cores. – Valerio Schiavoni Aug 5 '14 at 22:38

    bovender , Sep 18, 2015 at 10:14

    @ValerioSchiavoni: Not here, I get full load on all 4 cores (Ubuntu 15.04 'Vivid'). – bovender Sep 18 '15 at 10:14

    Valerio Schiavoni , Sep 28, 2015 at 23:41

    On compress or on decompress ? – Valerio Schiavoni Sep 28 '15 at 23:41

    Offenso , Jan 11, 2017 at 17:26

    I prefer tar -c dir_to_zip | pv | pigz > tar.file pv helps me estimate, you can skip it. But still it's easier to write and remember. – Offenso Jan 11 '17 at 17:26

    Maxim Suslov , Dec 18, 2014 at 7:31

    Common approach

    There is an option for the tar program:

    -I, --use-compress-program PROG
          filter through PROG (must accept -d)

    You can use a multithreaded version of an archiver or compressor utility.

    The most popular multithreaded archivers are pigz (instead of gzip) and pbzip2 (instead of bzip2). For instance:

    $ tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 paths_to_archive
    $ tar --use-compress-program=pigz -cf OUTPUT_FILE.tar.gz paths_to_archive

    The archiver must accept -d. If your replacement utility doesn't have this parameter and/or you need to specify additional parameters, then use pipes (add parameters if necessary):

    $ tar cf - paths_to_archive | pbzip2 > OUTPUT_FILE.tar.bz2
    $ tar cf - paths_to_archive | pigz > OUTPUT_FILE.tar.gz
    

    The input and output of the single-threaded and multithreaded versions are compatible. You can compress using the multithreaded version and decompress using the single-threaded version, and vice versa.

    p7zip

    For compression with p7zip you need a small shell script like the following:

    #!/bin/sh
    case $1 in
      -d) 7za -txz -si -so e;;
       *) 7za -txz -si -so a .;;
    esac 2>/dev/null
    

    Save it as 7zhelper.sh. Here the example of usage:

    $ tar -I 7zhelper.sh -cf OUTPUT_FILE.tar.7z paths_to_archive
    $ tar -I 7zhelper.sh -xf OUTPUT_FILE.tar.7z
    
    xz

    Regarding multithreaded XZ support. If you are running version 5.2.0 or above of XZ Utils, you can utilize multiple cores for compression by setting -T or --threads to an appropriate value via the environment variable XZ_DEFAULTS (e.g. XZ_DEFAULTS="-T 0" ).

    This is a fragment of man for 5.1.0alpha version:

    Multithreaded compression and decompression are not implemented yet, so this option has no effect for now.

    However this will not work for decompression of files that haven't also been compressed with threading enabled. From man for version 5.2.2:

    Threaded decompression hasn't been implemented yet. It will only work on files that contain multiple blocks with size information in block headers. All files compressed in multi-threaded mode meet this condition, but files compressed in single-threaded mode don't even if --block-size=size is used.

    Recompiling with replacement

    If you build tar from sources, then you can recompile with parameters

    --with-gzip=pigz
    --with-bzip2=lbzip2
    --with-lzip=plzip
    

    After recompiling tar with these options you can check the output of tar's help:

    $ tar --help | grep "lbzip2\|plzip\|pigz"
      -j, --bzip2                filter the archive through lbzip2
          --lzip                 filter the archive through plzip
      -z, --gzip, --gunzip, --ungzip   filter the archive through pigz
    

    mpibzip2 , Apr 28, 2015 at 20:57

    I just found pbzip2 and mpibzip2 . mpibzip2 looks very promising for clusters or if you have a laptop and a multicore desktop computer for instance. – user1985657 Apr 28 '15 at 20:57

    oᴉɹǝɥɔ , Jun 10, 2015 at 17:39

    Processing STDIN may in fact be slower. – oᴉɹǝɥɔ Jun 10 '15 at 17:39

    selurvedu , May 26, 2016 at 22:13

    Plus 1 for xz option. It the simplest, yet effective approach. – selurvedu May 26 '16 at 22:13

    panticz.de , Sep 1, 2014 at 15:02

    You can use the shortcut -I for tar's --use-compress-program switch, and invoke pbzip2 for bzip2 compression on multiple cores:
    tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 DIRECTORY_TO_COMPRESS/
    

    einpoklum , Feb 11, 2017 at 15:59

    A nice TL;DR for @MaximSuslov's answer . – einpoklum Feb 11 '17 at 15:59
    If you want to have more flexibility with filenames and compression options, you can use:
    find /my/path/ -type f -name "*.sql" -o -name "*.log" -exec \
    tar -P --transform='s@/my/path/@@g' -cf - {} + | \
    pigz -9 -p 4 > myarchive.tar.gz
    
    Step 1: find

    find /my/path/ -type f -name "*.sql" -o -name "*.log" -exec

    This command will look for the files you want to archive, in this case /my/path/*.sql and /my/path/*.log . Add as many -o -name "pattern" as you want.

    -exec will execute the next command using the results of find : tar

    Step 2: tar

    tar -P --transform='s@/my/path/@@g' -cf - {} +

    --transform is a simple string replacement parameter. It will strip the path of the files from the archive, so the tarball's root becomes the current directory when extracting. Note that you can't use the -C option to change directory, as you'd lose the benefit of find : all files of the directory would be included.

    -P tells tar to use absolute paths, so it doesn't trigger the warning "Removing leading `/' from member names". The leading '/' will be removed by --transform anyway.

    -cf - tells tar to write the archive to stdout; the actual file name is supplied later by the shell redirection

    {} + passes every file that find found previously

    Step 3: pigz

    pigz -9 -p 4

    Use as many parameters as you want. In this case -9 is the compression level and -p 4 is the number of cores dedicated to compression. If you run this on a heavily loaded webserver, you probably don't want to use all available cores.

    Step 4: archive name

    > myarchive.tar.gz

    Finally.

    [Jun 20, 2019] Exploring run filesystem on Linux by Sandra Henry-Stocker

    Jun 20, 2019 | www.networkworld.com

    /run is home to a wide assortment of data. For example, if you take a look at /run/user, you will notice a group of directories with numeric names.

    $ ls /run/user
    1000  1002  121
    

    A long file listing will clarify the significance of these numbers.

    $ ls -l
    total 0
    drwx------ 5 shs  shs  120 Jun 16 12:44 1000
    drwx------ 5 dory dory 120 Jun 16 16:14 1002
    drwx------ 8 gdm  gdm  220 Jun 14 12:18 121

    This allows us to see that each directory is related to a user who is currently logged in or to the display manager, gdm. The numbers represent their UIDs. The contents of each of these directories are files used by running processes.

    The /run/user files represent only a very small portion of what you'll find in /run. There are lots of other files, as well. A handful contain the process IDs for various system processes.

    $ ls *.pid
    acpid.pid  atopacctd.pid  crond.pid  rsyslogd.pid
    atd.pid    atop.pid       gdm3.pid   sshd.pid
    

    As shown below, that sshd.pid file listed above contains the process ID for the ssh daemon (sshd).
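    You can cross-check such a pid file against the process table; a small sketch (it assumes sshd is running and the file exists; pgrep -o picks the oldest matching process, -x requires an exact name match):

    test "$(cat /run/sshd.pid)" = "$(pgrep -o -x sshd)" && echo "pid file matches"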

    [Mar 13, 2019] Getting started with the cat command by Alan Formy-Duval

    Mar 13, 2019 | opensource.com

    6 comments

    Cat can also number a file's lines during output. There are two options for this, as shown in the help documentation:

    -b, --number-nonblank    number nonempty output lines, overrides -n
    -n, --number             number all output lines

    If I use the -b command with the hello.world file, the output will be numbered like this:

       $ cat -b hello.world
       1 Hello World !

    In the example above, there is an empty line. We can determine why this empty line appears by using the -n argument:

    $ cat -n hello.world
       1 Hello World !
       2
       $

    Now we see that there is an extra empty line. These two arguments are operating on the final output rather than the file contents, so if we were to use the -n option with both files, numbering will count lines as follows:

       
       $ cat -n hello.world goodbye.world
       1 Hello World !
       2
       3 Good Bye World !
       4
       $

    One other option that can be useful is -s for squeeze-blank . This argument tells cat to reduce repeated empty line output down to one line. This is helpful when reviewing files that have a lot of empty lines, because it effectively fits more text on the screen. Suppose I have a file with three lines that are spaced apart by several empty lines, such as in this example, greetings.world :

       $ cat greetings.world
       Greetings World !
    
       Take me to your Leader !
    
       We Come in Peace !
       $

    Using the -s option saves screen space:

    $ cat -s greetings.world

    Cat is often used to copy contents of one file to another file. You may be asking, "Why not just use cp ?" Here is how I could create a new file, called both.files , that contains the contents of the hello and goodbye files:

    $ cat hello.world goodbye.world > both.files
    $ cat both.files
    Hello World !
    Good Bye World !
    $
    zcat

    There is another variation on the cat command known as zcat . This command is capable of displaying files that have been compressed with Gzip without needing to uncompress the files with the gunzip command. As an aside, this also preserves disk space, which is the entire reason files are compressed!

    The zcat command is a bit more exciting because it can be a huge time saver for system administrators who spend a lot of time reviewing system log files. Where can we find compressed log files? Take a look at /var/log on most Linux systems. On my system, /var/log contains several files, such as syslog.2.gz and syslog.3.gz . These files are the result of the log management system, which rotates and compresses log files to save disk space and prevent logs from growing to unmanageable file sizes. Without zcat , I would have to uncompress these files with the gunzip command before viewing them. Thankfully, I can use zcat :

    $ cd /var/log
    $ ls *.gz
    syslog.2.gz  syslog.3.gz
    $
    $ zcat syslog.2.gz | more
    Jan 30 00:02:26 workstation systemd[1850]: Starting GNOME Terminal Server...
    Jan 30 00:02:26 workstation dbus-daemon[1920]: [session uid=2112 pid=1920] Successfully activated service 'org.gnome.Terminal'
    Jan 30 00:02:26 workstation systemd[1850]: Started GNOME Terminal Server.
    Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)
    Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # unwatch_fast: "/org/gnome/terminal/legacy/" (active: 0, establishing: 1)
    Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # watch_established: "/org/gnome/terminal/legacy/" (establishing: 0)
    --More--

    We can also pass both files to zcat if we want to review both of them uninterrupted. Due to how log rotation works, you need to pass the filenames in reverse order to preserve the chronological order of the log contents:

    $ ls -l *.gz
    -rw-r----- 1 syslog adm 196383 Jan 31 00:00 syslog.2.gz
    -rw-r----- 1 syslog adm 1137176 Jan 30 00:00 syslog.3.gz
    $ zcat syslog.3.gz syslog.2.gz | more

    The cat command seems simple but is very useful. I use it regularly. You also don't need to feed or pet it like a real cat. As always, I suggest you review the man pages ( man cat ) for the cat and zcat commands to learn more about how they can be used. You can also use the --help argument for a quick synopsis of command line arguments.

    Victorhck on 13 Feb 2019 Permalink

    and there's also a "tac" command, that is just a "cat" upside down!
    Following your example:

        $ tac both.files
        Good Bye World !
        Hello World !

    Happy hacking! :)
    Johan Godfried on 26 Feb 2019 Permalink

    Interesting article, but please don't misuse cat to pipe to more...

    I am trying to teach people to use fewer pipes, and here you go abusing cat to pipe to other commands. IMHO, 99.9% of the time this is not necessary!

    Instead of "cat file | command", most of the time you can use "command file" (yes, I am an old dinosaur from a time when memory was very expensive and forking multiple commands could fill it all up)

    Uri Ran on 03 Mar 2019 Permalink

    Run cat, then press keys to see the codes your shortcut sends. (Press Ctrl+C to kill the cat when you're done.)

    For example, on my Mac, the key combination option-leftarrow is ^[^[[D and command-downarrow is ^[[B.

    I learned it from https://stackoverflow.com/users/787216/lolesque in his answer to https://stackoverflow.com/questions/12382499/looking-for-altleftarrowkey...

    Geordie on 04 Mar 2019 Permalink

    cat is also useful to make (or append to) text files without an editor:

    $ cat >> foo << "EOF"
    > Hello World
    > Another Line
    > EOF
    $

    [Mar 10, 2019] How do I detach a process from Terminal, entirely?

    Mar 10, 2019 | superuser.com

    stackoverflow.com, Aug 25, 2016 at 17:24

    I use Tilda (drop-down terminal) on Ubuntu as my "command central" - pretty much the way others might use GNOME Do, Quicksilver or Launchy.

    However, I'm struggling with how to completely detach a process (e.g. Firefox) from the terminal it's been launched from - i.e. prevent such a (non-)child process from polluting the terminal's output and from being killed when the terminal closes.

    For example, in order to start Vim in a "proper" terminal window, I have tried a simple script like the following:

    exec gnome-terminal -e "vim $@" &> /dev/null &
    

    However, that still causes pollution (also, passing a file name doesn't seem to work).

    lhunath, Sep 23, 2016 at 19:08

    First of all: once you've started a process, you can background it by first stopping it (hit Ctrl-Z) and then typing bg to let it resume in the background. It's now a "job", and its stdout/stderr/stdin are still connected to your terminal.

    You can start a process as backgrounded immediately by appending a "&" to the end of it:

    firefox &
    

    To run it in the background silenced, use this:

    firefox </dev/null &>/dev/null &
    

    Some additional info:

    nohup is a program you can use to run your application with, such that its stdout/stderr are sent to a file instead and closing the parent script won't SIGHUP the child. However, you need to have had the foresight to use it before you started the application; because of the way nohup works, you can't just apply it to a running process.
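    For instance, a minimal sketch (the log path is illustrative; by default nohup writes to ./nohup.out):

        $ nohup firefox &                       # stdout/stderr go to ./nohup.out
        $ nohup firefox >/tmp/ff.log 2>&1 &     # or redirect them explicitly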

    disown is a bash builtin that removes a shell job from the shell's job list. What this basically means is that you can't use fg , bg on it anymore, but more importantly, when you close your shell it won't hang or send a SIGHUP to that child anymore. Unlike nohup , disown is used after the process has been launched and backgrounded.
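    A quick sketch of that order of operations (the %1 job spec assumes this is your only background job):

        $ firefox &>/dev/null &     # launch, backgrounded and silenced
        $ disown %1                 # drop it from the job list; no SIGHUP on shell exit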

    What you can't do, is change the stdout/stderr/stdin of a process after having launched it. At least not from the shell. If you launch your process and tell it that its stdout is your terminal (which is what you do by default), then that process is configured to output to your terminal. Your shell has no business with the processes' FD setup, that's purely something the process itself manages. The process itself can decide whether to close its stdout/stderr/stdin or not, but you can't use your shell to force it to do so.

    To manage a background process' output, you have plenty of options from scripts, "nohup" probably being the first to come to mind. But for interactive processes you start but forgot to silence ( firefox < /dev/null &>/dev/null & ) you can't do much, really.

    I recommend you get GNU screen . With screen you can just close your running shell when the process' output becomes a bother and open a new one ( ^Ac ).


    Oh, and by the way, don't use "$@" where you're using it.

    $@ means $1, $2, $3 ..., which would turn your command into:

    gnome-terminal -e "vim $1" "$2" "$3" ...
    

    That's probably not what you want because -e only takes one argument. Use $1 to show that your script can only handle one argument.

    It's really difficult to get multiple arguments working properly in the scenario that you gave (with gnome-terminal -e) because -e takes only one argument, which is a shell command string. You'd have to encode your arguments into one. The best and most robust, but rather kludgy, way is like so:

    gnome-terminal -e "vim $(printf "%q " "$@")"
    

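    To see what printf "%q " does to each argument, a quick illustration (the file names are made up):

        $ printf "%q " "my file.txt" "it's"; echo
        my\ file.txt it\'s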
    Limited Atonement ,Aug 25, 2016 at 17:22

    nohup cmd &

    nohup detaches the process completely (daemonizes it)

    Randy Proctor ,Sep 13, 2016 at 23:00

    If you are using bash , try disown [ jobspec ] ; see bash(1) .

    Another approach you can try is at now . If you're not superuser, your permission to use at may be restricted.

    Stephen Rosen ,Jan 22, 2014 at 17:08

    Reading these answers, I was under the initial impression that issuing nohup <command> & would be sufficient. Running zsh in gnome-terminal, I found that nohup <command> & did not prevent my shell from killing child processes on exit. Although nohup is useful, especially with non-interactive shells, it only guarantees this behavior if the child process does not reset its handler for the SIGHUP signal.

    In my case, nohup should have prevented hangup signals from reaching the application, but the child application (VMware Player in this case) was resetting its SIGHUP handler. As a result, when the terminal emulator exits, it could still kill your subprocesses. This can only be resolved, to my knowledge, by ensuring that the process is removed from the shell's jobs table. If nohup is overridden with a shell builtin, as is sometimes the case, this may be sufficient; in the event that it is not...


    disown is a shell builtin in bash , zsh , and ksh93 ,

    <command> &
    disown
    

    or

    <command> & disown
    

    if you prefer one-liners. This has the generally desirable effect of removing the subprocess from the jobs table. This allows you to exit the terminal emulator without accidentally signaling the child process at all. No matter what the SIGHUP handler looks like, this should not kill your child process.

    After the disown, the process is still a child of your terminal emulator (play with pstree if you want to watch this in action), but after the terminal emulator exits, you should see it attached to the init process. In other words, everything is as it should be, and as you presumably want it to be.
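    A quick way to watch that re-parenting, as suggested above (the PID is a placeholder):

        $ firefox & disown
        $ pstree -p $$          # before closing the terminal: firefox hangs off your shell
        $ pstree -s -p 4242     # after closing it: 4242 (firefox's PID) now descends from PID 1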

    What to do if your shell does not support disown ? I'd strongly advocate switching to one that does, but in the absence of that option, you have a few choices.

    1. screen and tmux can solve this problem, but they are much heavier weight solutions, and I dislike having to run them for such a simple task. They are much more suitable for situations in which you want to maintain a tty, typically on a remote machine.
    2. For many users, it may be desirable to see if your shell supports a capability like zsh's setopt nohup . This can be used to specify that SIGHUP should not be sent to the jobs in the jobs table when the shell exits. You can either apply this just before exiting the shell, or add it to shell configuration like ~/.zshrc if you always want it on.
    3. Find a way to edit the jobs table. I couldn't find a way to do this in tcsh or csh , which is somewhat disturbing.
    4. Write a small C program to fork off and exec() . This is a very poor solution, but the source should only consist of a couple dozen lines. You can then pass commands as commandline arguments to the C program, and thus avoid a process specific entry in the jobs table.

    Sheljohn ,Jan 10 at 10:20

    1. nohup $COMMAND &
    2. $COMMAND & disown
    3. setsid command

    I've been using number 2 for a very long time, but number 3 works just as well. Also, disown has a nohup-like flag, -h; it can disown all jobs with -a, and all running jobs with -ar.

    Silencing is accomplished by '$COMMAND &>/dev/null'.
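    Option 3 in a minimal, silenced form (the command name is illustrative):

        $ setsid firefox </dev/null &>/dev/null &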

    Hope this helps!

    dunkyp, Mar 25, 2009 at 1:51

    I think screen might solve your problem

    Nathan Fellman ,Mar 23, 2009 at 14:55

    In tcsh (and maybe in other shells as well), you can use parentheses to detach the process.

    Compare this:

    > jobs # shows nothing
    > firefox &
    > jobs
    [1]  + Running                       firefox
    

    To this:

    > jobs # shows nothing
    > (firefox &)
    > jobs # still shows nothing
    >
    

    This removes firefox from the jobs listing, but it is still tied to the terminal; if you logged in to this node via 'ssh', trying to log out will still hang the ssh process.

    ,

    To dissociate a process from the tty/shell, run the command through a sub-shell, e.g.

    (command) &

    When exit is used, the terminal closes but the process is still alive.

    Check:

    (sleep 100) & exit
    

    Open other terminal

    ps aux | grep sleep
    

    Process is still alive.

    [Mar 10, 2019] linux - How to attach terminal to detached process

    Mar 10, 2019 | unix.stackexchange.com



    Gilles ,Feb 16, 2012 at 21:39

    I have detached a process from my terminal, like this:
    $ process &
    

    That terminal is now long closed, but the process is still running, and I want to send some commands to that process's stdin. Is that possible?

    Samuel Edwin Ward ,Dec 22, 2018 at 13:34

    Yes, it is. First, create a pipe: mkfifo /tmp/fifo. Use gdb to attach to the process: gdb -p PID

    Then close stdin: call close(0), and open it again: call open("/tmp/fifo", 0600)

    Finally, write away (from a different terminal, as gdb will probably hang):

    echo blah > /tmp/fifo
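    Putting those steps together as one sketch (the PID is a placeholder; the exact call syntax can vary between gdb versions, and the 0600 flags value is the one used above, whose low bits select read-only):

        $ mkfifo /tmp/fifo
        $ gdb -p 12345                       # attach to the detached process (12345: placeholder PID)
        (gdb) call close(0)                  # close its current stdin
        (gdb) call open("/tmp/fifo", 0600)   # reopen fd 0 on the pipe
        (gdb) detach
        (gdb) quit
        $ echo blah > /tmp/fifo              # from another terminal: this now reaches the process's stdin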

    NiKiZe ,Jan 6, 2017 at 22:52

    When the original terminal is no longer accessible...

    reptyr might be what you want, see https://serverfault.com/a/284795/187998

    Quote from there:

    Have a look at reptyr , which does exactly that. The github page has all the information.
    reptyr - A tool for "re-ptying" programs.

    reptyr is a utility for taking an existing running program and attaching it to a new terminal. Started a long-running process over ssh, but have to leave and don't want to interrupt it? Just start a screen, use reptyr to grab it, and then kill the ssh session and head on home.

    USAGE

    reptyr PID

    "reptyr PID" will grab the process with id PID and attach it to your current terminal.

    After attaching, the process will take input from and write output to the new terminal, including ^C and ^Z. (Unfortunately, if you background it, you will still have to run "bg" or "fg" in the old terminal. This is likely impossible to fix in a reasonable way without patching your shell.)
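    So the typical rescue sequence, per the description above, would be something like (host and PID are placeholders):

        $ ssh user@host        # get back onto the machine where the process runs
        $ screen               # start a multiplexer to hold the new terminal
        $ reptyr 4242          # grab the running process (4242: placeholder PID)
        # detach with Ctrl-A d and log out; the process now lives inside screen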

    manatwork ,Nov 20, 2014 at 22:59

    I am quite sure you cannot.

    Check using ps x. If a process has a ? as its controlling tty, you cannot send input to it any more.

    9942 ?        S      0:00 tail -F /var/log/messages
    9947 pts/1    S      0:00 tail -F /var/log/messages
    

    In this example, you can send input to 9947 by doing something like echo "test" > /dev/pts/1. The other process (9942) is not reachable.

    Next time, you could use screen or tmux to avoid this situation.

    Stéphane Gimenez ,Feb 16, 2012 at 16:16

    EDIT: As Stéphane Gimenez said, it's not that simple. It's only allowing you to print to a different terminal.

    You can try to write to this process using /proc. It should be located in /proc/PID/fd/0, so a simple:

    echo "hello" > /proc/PID/fd/0
    

    should do it. I have not tried it, but it should work, as long as this process still has a valid stdin file descriptor. You can check it with ls -l on /proc/PID/fd/.

    See nohup for more details about how to keep processes running.

    Stéphane Gimenez ,Nov 20, 2015 at 5:08

    Just ending the command line with & will not completely detach the process, it will just run it in the background. (With zsh you can use &! to actually detach it, otherwise you have to disown it later).

    When a process runs in the background, it won't receive input from its controlling terminal anymore. But you can send it back into the foreground with fg and then it will read input again.

    Otherwise, it's not possible to externally change its filedescriptors (including stdin) or to reattach a lost controlling terminal unless you use debugging tools (see Ansgar's answer , or have a look at the retty command).

    [Mar 10, 2019] linux - Preventing tmux session created by systemd from automatically terminating on Ctrl+C - Stack Overflow

    Mar 10, 2019 | stackoverflow.com

    Preventing tmux session created by systemd from automatically terminating on Ctrl+C


    Jim Stewart ,Nov 10, 2018 at 12:55

    For a few days now I've been successfully running the new Minecraft Bedrock Edition dedicated server on my Ubuntu 18.04 LTS home server. Because it should be available 24/7 and automatically start up after boot, I created a systemd service for a detached tmux session:

    tmux.minecraftserver.service

    [Unit]
    Description=tmux minecraft_server detached
    
    [Service]
    Type=forking
    WorkingDirectory=/home/mine/minecraftserver
    ExecStart=/usr/bin/tmux new -s minecraftserver -d "LD_LIBRARY_PATH=. /home/mine/minecraftser$
    User=mine
    
    [Install]
    WantedBy=multi-user.target
    

    Everything works as expected but there's one tiny thing that keeps bugging me:

    How can I prevent tmux from terminating its whole session when I press Ctrl+C? I just want to terminate the Minecraft server process itself instead of the whole tmux session. When starting the server from the command line in a manually created tmux session this does work (the session stays alive), but not when the session was brought up by systemd.

    FlKo ,Nov 12, 2018 at 6:21

    When starting the server from the command line in a manually created tmux session this does work (session stays alive) but not when the session was brought up by systemd .

    The difference between these situations is actually unrelated to systemd. In one case, you're starting the server from a shell within the tmux session, and when the server terminates, control returns to the shell. In the other case, you're starting the server directly within the tmux session, and when it terminates there's no shell to return to, so the tmux session also dies.

    tmux has an option to keep the session alive after the process inside it dies (look for remain-on-exit in the manpage), but that's probably not what you want: you want to be able to return to an interactive shell, to restart the server, investigate why it died, or perform maintenance tasks, for example. So it's probably better to change your command to this:

    'LD_LIBRARY_PATH=. /home/mine/minecraftserver/ ; exec bash'
    

    That is, first run the server, and then, after it terminates, replace the process (the shell which tmux implicitly spawns to run the command, but which would then exit) with another, interactive shell. (For some other ways to get an interactive shell after the command exits, see e.g. this question; note that the <(echo commands) syntax suggested in the top answer is not available in systemd unit files.)
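    Applied to the unit above, the ExecStart line would look something like this (a sketch; the binary name ./bedrock_server is taken from the answer that follows):

        ExecStart=/usr/bin/tmux new -s minecraftserver -d 'LD_LIBRARY_PATH=. ./bedrock_server ; exec bash'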

    FlKo ,Nov 12, 2018 at 6:21

    I was able to solve this by using systemd's ExecStartPost and tmux's send-keys, like this:
    [Unit]
    Description=tmux minecraft_server detached
    
    [Service]
    Type=forking
    WorkingDirectory=/home/mine/minecraftserver
    ExecStart=/usr/bin/tmux new -d -s minecraftserver
    ExecStartPost=/usr/bin/tmux send-keys -t minecraftserver "cd /home/mine/minecraftserver/" Enter "LD_LIBRARY_PATH=. ./bedrock_server" Enter
    
    User=mine
    
    [Install]
    WantedBy=multi-user.target
    

    [Feb 04, 2019] Do not play those dangerous games with resizing of partitions unless absolutely necessary

    Copying to an additional drive (can be USB), repartitioning, and then copying everything back is a safer bet
    May 07, 2017 | superuser.com
    womble

    In theory, you could reduce the size of sda1, increase the size of the extended partition, shift the contents of the extended partition down, then increase the size of the PV on the extended partition and you'd have the extra room.

    However, the number of possible things that can go wrong there is just astronomical.

    So I'd recommend either buying a second hard drive (and possibly transferring everything onto it in a more sensible layout, then repartitioning your current drive better) or just making some bind mounts of various bits and pieces out of /home into / to free up a bit more space.

    --womble

    [Jan 26, 2019] SysVinit to Systemd Cheatsheet

    Apr 15, 2015 | FedoraProject
    Sysvinit Command              | Systemd Command                                                                 | Notes
    ------------------------------+---------------------------------------------------------------------------------+------------------------------------------------------------
    service frobozz start         | systemctl start frobozz                                                         | Start a service (not reboot persistent)
    service frobozz stop          | systemctl stop frobozz                                                          | Stop a service (not reboot persistent)
    service frobozz restart       | systemctl restart frobozz                                                       | Stop and then start a service
    service frobozz reload        | systemctl reload frobozz                                                        | When supported, reload the config file without interrupting pending operations
    service frobozz condrestart   | systemctl condrestart frobozz                                                   | Restart if the service is already running
    service frobozz status        | systemctl status frobozz                                                        | Tell whether a service is currently running
    ls /etc/rc.d/init.d/          | systemctl list-unit-files --type=service (or) ls /lib/systemd/system/*.service /etc/systemd/system/*.service | List the services that can be started or stopped; list all services and other units
    chkconfig frobozz on          | systemctl enable frobozz                                                        | Turn the service on, for start at next boot, or other trigger
    chkconfig frobozz off         | systemctl disable frobozz                                                       | Turn the service off for the next reboot, or any other trigger
    chkconfig frobozz             | systemctl is-enabled frobozz                                                    | Check whether a service is configured to start in the current environment
    chkconfig --list              | systemctl list-unit-files --type=service (or) ls /etc/systemd/system/*.wants/   | Print a table of services listing which runlevels each is configured on or off
    chkconfig frobozz --list      | ls /etc/systemd/system/*.wants/frobozz.service                                  | List what levels this service is configured on or off
    chkconfig frobozz --add       | systemctl daemon-reload                                                         | Use when you create a new service file or modify any configuration
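    For example, a minimal systemd session using sshd as the unit (the unit name is illustrative):

        # systemctl status sshd           # is the service running, and is it enabled at boot?
        # systemctl enable --now sshd     # enable it at boot and start it in one step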

    [Nov 12, 2018] Linux Find Out Which Process Is Listening Upon a Port

    Jun 25, 2012 | www.cyberciti.biz

    How do I find out which running processes are associated with each open port? How do I find out what process has open TCP port 111 or UDP port 7000 under Linux?

    You can use the following programs to find out about port numbers and their associated processes:

    1. netstat – a command-line tool that displays network connections, routing tables, and a number of network interface statistics.
    2. fuser – a command-line tool to identify processes using files or sockets.
    3. lsof – a command-line tool to list open files under Linux / UNIX and report which processes opened them.
    4. /proc/$pid/ file system – under Linux, /proc includes a directory for each running process (including kernel processes) at /proc/PID, containing information about that process, notably including the name of the process that opened the port.

    You must run the above command(s) as the root user.

    netstat example

    Type the following command:
    # netstat -tulpn
    Sample outputs:

    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1138/mysqld     
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      850/portmap     
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1607/apache2    
    tcp        0      0 0.0.0.0:55091           0.0.0.0:*               LISTEN      910/rpc.statd   
    tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1467/dnsmasq    
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      992/sshd        
    tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1565/cupsd      
    tcp        0      0 0.0.0.0:7000            0.0.0.0:*               LISTEN      3813/transmission
    tcp6       0      0 :::22                   :::*                    LISTEN      992/sshd        
    tcp6       0      0 ::1:631                 :::*                    LISTEN      1565/cupsd      
    tcp6       0      0 :::7000                 :::*                    LISTEN      3813/transmission
    udp        0      0 0.0.0.0:111             0.0.0.0:*                           850/portmap     
    udp        0      0 0.0.0.0:662             0.0.0.0:*                           910/rpc.statd   
    udp        0      0 192.168.122.1:53        0.0.0.0:*                           1467/dnsmasq    
    udp        0      0 0.0.0.0:67              0.0.0.0:*                           1467/dnsmasq    
    udp        0      0 0.0.0.0:68              0.0.0.0:*                           3697/dhclient   
    udp        0      0 0.0.0.0:7000            0.0.0.0:*                           3813/transmission
    udp        0      0 0.0.0.0:54746           0.0.0.0:*                           910/rpc.statd
    

    TCP port 3306 was opened by the mysqld process having PID # 1138. You can verify this using /proc, enter:
    # ls -l /proc/1138/exe
    Sample outputs:

    lrwxrwxrwx 1 root root 0 2010-10-29 10:20 /proc/1138/exe -> /usr/sbin/mysqld
    

    You can use grep command to filter out information:
    # netstat -tulpn | grep :80
    Sample outputs:

    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1607/apache2
    

    fuser command

    Find out the processes PID that opened tcp port 7000, enter:
    # fuser 7000/tcp
    Sample outputs:

    7000/tcp:             3813
    

    Finally, find out process name associated with PID # 3813, enter:
    # ls -l /proc/3813/exe
    Sample outputs:

    lrwxrwxrwx 1 vivek vivek 0 2010-10-29 11:00 /proc/3813/exe -> /usr/bin/transmission
    

    /usr/bin/transmission is a bittorrent client, enter:
    # man transmission
    OR
    # whatis transmission
    Sample outputs:

    transmission (1)     - a bittorrent client
    
    Task: Find Out Current Working Directory Of a Process

    To find out the current working directory of the process with PID 3813 (our bittorrent client), enter:
    # ls -l /proc/3813/cwd
    Sample outputs:

    lrwxrwxrwx 1 vivek vivek 0 2010-10-29 12:04 /proc/3813/cwd -> /home/vivek
    

    OR use pwdx command, enter:
    # pwdx 3813
    Sample outputs:

    3813: /home/vivek
    
    Task: Find Out Owner Of a Process

    Use the following command to find out the owner of the process with PID 3813:
    # ps aux | grep 3813
    OR
    # ps aux | grep '[3]813'
    Sample outputs:

    vivek     3813  1.9  0.3 188372 26628 ?        Sl   10:58   2:27 transmission
    

    OR try the following ps command:
    # ps -eo pid,user,group,args,etime,lstart | grep '[3]813'
    Sample outputs:

    3813 vivek    vivek    transmission                   02:44:05 Fri Oct 29 10:58:40 2010
    

    Another option is /proc/$PID/environ, enter:
    # cat /proc/3813/environ
    OR
    # grep --color -w -a USER /proc/3813/environ
    Sample outputs (note the --color option):

    [Fig.01: grep output]

    lsof Command Example

    Type the command as follows:

    lsof -i :portNumber 
    lsof -i tcp:portNumber 
    lsof -i udp:portNumber 
    lsof -i :80
    lsof -i :80 | grep LISTEN
    


    Sample outputs:

    apache2   1607     root    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1616 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1617 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1618 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1619 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1620 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    

    Now, you get more information about pid # 1607 or 1616 and so on:
    # ps aux | grep '[1]616'
    Sample outputs:
    www-data 1616 0.0 0.0 35816 3880 ? S 10:20 0:00 /usr/sbin/apache2 -k start
    I recommend the following command to grab info about pid # 1616:
    # ps -eo pid,user,group,args,etime,lstart | grep '[1]616'
    Sample outputs:

    1616 www-data www-data /usr/sbin/apache2 -k start     03:16:22 Fri Oct 29 10:20:17 2010
    


    Help: I Discover an Open Port Which I Don't Recognize At All

    The file /etc/services is used to map port numbers and protocols to service names. Try matching port numbers:
    $ grep port /etc/services
    $ grep 443 /etc/services

    Sample outputs:

    https		443/tcp				# http protocol over TLS/SSL
    https		443/udp
    
    Check For rootkit

    I strongly recommend that you find out which processes are really running, especially on servers connected to high-speed Internet access. You can look for a rootkit, which is a program designed to take fundamental control (in Linux / UNIX terms "root" access, in Windows terms "Administrator" access) of a computer system, without authorization by the system's owners and legitimate managers. See how to detect / check for rootkits under Linux.
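    Two commonly used scanners, assuming they are installed from your distribution's repositories:

        # chkrootkit              # scan for signatures of known rootkits
        # rkhunter --check        # broader check, including file property changes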

    Keep an Eye On Your Bandwidth Graphs

    Usually, rooted servers are used to send large amounts of spam, distribute malware, or launch DoS-style attacks on other computers.


    See the following man pages for more information:
    $ man ps
    $ man grep
    $ man lsof
    $ man netstat
    $ man fuser

    Posted by: Vivek Gite

    The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting.

    [Nov 08, 2018] How to find which process is regularly writing to disk?

    Notable quotes:
    "... tick...tick...tick...trrrrrr ..."
    "... /var/log/syslog ..."
    Nov 08, 2018 | unix.stackexchange.com

    Cedric Martin , Jul 27, 2012 at 4:31

    How can I find which process is constantly writing to disk?

    I like my workstation to be close to silent, and I just built a new system (P8B75-M + Core i5 3450s -- the 's' because it has a lower max TDP) with quiet fans etc. and installed Debian Wheezy 64-bit on it.

    And something is getting on my nerves: I can hear some kind of pattern, as if the hard disk was writing or seeking something ( tick...tick...tick...trrrrrr , rinse and repeat every second or so).

    In the past (many, many years ago) I had a similar issue, and it turned out it was some CUPS log or something, and I simply redirected that one (unimportant) log to a (real) RAM disk.

    But here I'm not sure.

    I tried the following:

    ls -lR /var/log > /tmp/a.tmp && sleep 5 && ls -lR /var/log > /tmp/b.tmp && diff /tmp/?.tmp
    

    but nothing is changing there.

    Now the strange thing is that I also hear the pattern when the prompt asking me to enter my LVM decryption passphrase is showing.

    Could it be something in the kernel/system I just installed or do I have a faulty harddisk?

    hdparm -tT /dev/sda reports a correct HD speed (130 MB/s non-cached, SATA 6 Gb/s), and I've already installed and compiled from big sources (Emacs) without issue, so I don't think the system is bad.

    (HD is a Seagate Barracuda 500GB)

    Mat , Jul 27, 2012 at 6:03

    Are you sure it's a hard drive making that noise, and not something else? (Check the fans, including PSU fan. Had very strange clicking noises once when a very thin cable was too close to a fan and would sometimes very slightly touch the blades and bounce for a few "clicks"...) – Mat Jul 27 '12 at 6:03

    Cedric Martin , Jul 27, 2012 at 7:02

    @Mat: I'll take the hard drive outside of the case (the connectors should be long enough) to be sure and I'll report back ; ) – Cedric Martin Jul 27 '12 at 7:02

    camh , Jul 27, 2012 at 9:48

    Make sure your disk filesystems are mounted relatime or noatime. File reads can be causing writes to inodes to record the access time. – camh Jul 27 '12 at 9:48
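    A quick way to test this suggestion (the remount is temporary; to make it permanent, add noatime to the relevant /etc/fstab entry):

        $ mount | grep ' / '              # inspect the current mount options of the root filesystem
        # mount -o remount,noatime /      # remount with noatime and listen for a change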

    mnmnc , Jul 27, 2012 at 8:27

    Did you try to examine what a program like iotop is showing? It will tell you exactly which process is currently writing to the disk.

    example output:

    Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
      TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
        1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
        2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
        3 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
        6 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
        7 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/0]
        8 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/1]
     1033 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [flush-8:0]
       10 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/1]
    

    Cedric Martin , Aug 2, 2012 at 15:56

    thanks for that tip. I didn't know about iotop . On Debian I did an apt-cache search iotop to find out that I had to apt-get install iotop . Very cool command! – Cedric Martin Aug 2 '12 at 15:56

    ndemou , Jun 20, 2016 at 15:32

    I use iotop -o -b -d 10 , which every 10 seconds prints a list of processes that read/wrote to disk and the amount of IO bandwidth used. – ndemou Jun 20 '16 at 15:32

    scai , Jul 27, 2012 at 10:48

    You can enable IO debugging via echo 1 > /proc/sys/vm/block_dump and then watch the debugging messages in /var/log/syslog . This has the advantage of obtaining some type of log file with past activities whereas iotop only shows the current activity.
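    A minimal sketch of that workflow, reading the messages via dmesg (see the caveat about syslog in the comments below):

        # echo 1 > /proc/sys/vm/block_dump    # enable block-I/O debug messages
        # dmesg | tail -20                    # see which processes touched the disk
        # echo 0 > /proc/sys/vm/block_dump    # turn the debugging off again when done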

    dan3 , Jul 15, 2013 at 8:32

    It is absolutely crazy to leave syslogging enabled when block_dump is active. Logging causes disk activity, which causes logging, which causes disk activity, etc. Better to stop syslog before enabling this (and use dmesg to read the messages) – dan3 Jul 15 '13 at 8:32

    scai , Jul 16, 2013 at 6:32

    You are absolutely right, although the effect isn't as dramatic as you describe it. If you just want to have a short peek at the disk activity there is no need to stop the syslog daemon. – scai Jul 16 '13 at 6:32

    dan3 , Jul 16, 2013 at 7:22

    I've tried it about 2 years ago and it brought my machine to a halt. One of these days when I have nothing important running I'll try it again :) – dan3 Jul 16 '13 at 7:22

    scai , Jul 16, 2013 at 10:50

    I tried it, nothing really happened. Especially because of file system buffering. A write to syslog doesn't immediately trigger a write to disk. – scai Jul 16 '13 at 10:50

    Volker Siegel , Apr 16, 2014 at 22:57

    I would assume there is general rate limiting in place for the log messages, which handles this case too(?) – Volker Siegel Apr 16 '14 at 22:57

    Gilles , Jul 28, 2012 at 1:34

    Assuming that the disk noises are due to a process causing a write and not to some disk spindown problem, you can use the audit subsystem (install the auditd package). Put a watch on the sync calls and their friends:
    auditctl -S sync -S fsync -S fdatasync -a exit,always
    

    Watch the logs in /var/log/audit/audit.log . Be careful not to do this if the audit logs themselves are flushed! Check in /etc/auditd.conf that the flush option is set to none .

    If files are being flushed often, a likely culprit is the system logs. For example, if you log failed incoming connection attempts and someone is probing your machine, that will generate a lot of entries; this can cause a disk to emit machine-gun-style noises. With the basic log daemon sysklogd, check /etc/syslog.conf : if a log file name is not preceded by - , then that log is flushed to disk after each write.
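    For example, two hypothetical /etc/syslog.conf entries (sysklogd syntax):

        # no leading '-': kern.log is synced after every write (can cause constant disk noise)
        kern.*          /var/log/kern.log
        # leading '-': mail.info is written without flushing each time
        mail.info       -/var/log/mail.info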

    Gilles , Mar 23 at 18:24

    @StephenKitt Huh. No. The asker mentioned Debian so I've changed it to a link to the Debian package. – Gilles Mar 23 at 18:24

    cas , Jul 27, 2012 at 9:40

    It might be your drives automatically spinning down, lots of consumer-grade drives do that these days. Unfortunately on even a lightly loaded system, this results in the drives constantly spinning down and then spinning up again, especially if you're running hddtemp or similar to monitor the drive temperature (most drives stupidly don't let you query the SMART temperature value without spinning up the drive - cretinous!).

    This is not only annoying, it can wear out the drives faster as many drives have only a limited number of park cycles. e.g. see https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/952556 for a description of the problem.

    I disable idle-spindown on all my drives with the following bit of shell code. You could put it in an /etc/rc.boot script, or in /etc/rc.local or similar.

    for disk in /dev/sd? ; do
      # $disk already contains the /dev/ prefix from the glob
      /sbin/hdparm -q -S 0 "$disk"
    done
    

    Cedric Martin , Aug 2, 2012 at 16:03

    that you can't query SMART readings without spinning up the drive leaves me speechless :-/ Now obviously the "spinning down" issue can become quite complicated. Regarding disabling the spinning down: wouldn't that in itself cause the HD to wear out faster? I mean: it's never ever "resting" as long as the system is on then? – Cedric Martin Aug 2 '12 at 16:03

    cas , Aug 2, 2012 at 21:42

    IIRC you can query some SMART values without causing the drive to spin up, but temperature isn't one of them on any of the drives i've tested (incl models from WD, Seagate, Samsung, Hitachi). Which is, of course, crazy because concern over temperature is one of the reasons for idling a drive. re: wear: AIUI 1. constant velocity is less wearing than changing speed. 2. the drives have to park the heads in a safe area and a drive is only rated to do that so many times (IIRC up to a few hundred thousand - easily exceeded if the drive is idling and spinning up every few seconds) – cas Aug 2 '12 at 21:42

    Micheal Johnson , Mar 12, 2016 at 20:48

    It's a long debate regarding whether it's better to leave drives running or to spin them down. Personally I believe it's best to leave them running - I turn my computer off at night and when I go out but other than that I never spin my drives down. Some people prefer to spin them down, say, at night if they're leaving the computer on or if the computer's idle for a long time, and in such cases the advantage of spinning them down for a few hours versus leaving them running is debatable. What's never good though is when the hard drive repeatedly spins down and up again in a short period of time. – Micheal Johnson Mar 12 '16 at 20:48

    Micheal Johnson , Mar 12, 2016 at 20:51

    Note also that spinning the drive down after it's been idle for a few hours is a bit silly, because if it's been idle for a few hours then it's likely to be used again within an hour. In that case, it would seem better to spin the drive down promptly if it's idle (like, within 10 minutes), but it's also possible for the drive to be idle for a few minutes when someone is using the computer and is likely to need the drive again soon. – Micheal Johnson Mar 12 '16 at 20:51

    ,

    I just found that S.M.A.R.T. polling was causing an external USB disk to spin up again and again on my Raspberry Pi. Although SMART is generally a good thing, I decided to disable it again, and since then it seems that the unwanted disk activity has stopped

    [Nov 08, 2018] Determining what process is bound to a port

    Mar 14, 2011 | unix.stackexchange.com
    I know that using the command:
    lsof -i TCP

    (or some variant of parameters with lsof) I can determine which process is bound to a particular port. This is useful, say, if I'm trying to start something that wants to bind to 8080 and something else is already using that port, but I don't know what.

    Is there an easy way to do this without using lsof? I spend time working on many systems and lsof is often not installed.

    Cakemox , Mar 14, 2011 at 20:48

    netstat -lnp will list the pid and process name next to each listening port. This will work under Linux, but not all others (like AIX.) Add -t if you want TCP only.
    # netstat -lntp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 0.0.0.0:24800           0.0.0.0:*               LISTEN      27899/synergys
    tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      3361/python
    tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      2264/mysqld
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      22964/apache2
    tcp        0      0 192.168.99.1:53         0.0.0.0:*               LISTEN      3389/named
    tcp        0      0 192.168.88.1:53         0.0.0.0:*               LISTEN      3389/named
    

    etc.

    xxx , Mar 14, 2011 at 21:01

    Cool, thanks. Looks like that that works under RHEL, but not under Solaris (as you indicated). Anybody know if there's something similar for Solaris? – user5721 Mar 14 '11 at 21:01

    Rich Homolka , Mar 15, 2011 at 19:56

    netstat -p above is my vote. also look at lsof . – Rich Homolka Mar 15 '11 at 19:56

    Jonathan , Aug 26, 2014 at 18:50

    As an aside, for windows it's similar: netstat -aon | more – Jonathan Aug 26 '14 at 18:50

    sudo , May 25, 2017 at 2:24

    What about for SCTP? – sudo May 25 '17 at 2:24

    frielp , Mar 15, 2011 at 13:33

    On AIX, netstat & rmsock can be used to determine process binding:
    [root@aix] netstat -Ana|grep LISTEN|grep 80
    f100070000280bb0 tcp4       0      0  *.37               *.*        LISTEN
    f1000700025de3b0 tcp        0      0  *.80               *.*        LISTEN
    f1000700002803b0 tcp4       0      0  *.111              *.*        LISTEN
    f1000700021b33b0 tcp4       0      0  127.0.0.1.32780    *.*        LISTEN
    
    # Port 80 maps to f1000700025de3b0 above, so we type:
    [root@aix] rmsock f1000700025de3b0 tcpcb
    The socket 0x25de008 is being held by process 499790 (java).
    

    Olivier Dulac , Sep 18, 2013 at 4:05

    Thanks for this! Is there a way, however, to just display what process listen on the socket (instead of using rmsock which attempt to remove it) ? – Olivier Dulac Sep 18 '13 at 4:05

    Vitor Py , Sep 26, 2013 at 14:18

    @OlivierDulac: "Unlike what its name implies, rmsock does not remove the socket, if it is being used by a process. It just reports the process holding the socket." ( ibm.com/developerworks/community/blogs/cgaix/entry/ ) – Vitor Py Sep 26 '13 at 14:18

    Olivier Dulac , Sep 26, 2013 at 16:00

    @vitor-braga: Ah thx! I thought it was trying but just said which process holds in when it couldn't remove it. Apparently it doesn't even try to remove it when a process holds it. That's cool! Thx! – Olivier Dulac Sep 26 '13 at 16:00

    frielp , Mar 15, 2011 at 13:27

    Another tool available on Linux is ss . From the ss man page on Fedora:
    NAME
           ss - another utility to investigate sockets
    SYNOPSIS
           ss [options] [ FILTER ]
    DESCRIPTION
           ss is used to dump socket statistics. It allows showing information 
           similar to netstat. It can display more TCP and state informations  
           than other tools.
    

    Example output below - the final column shows the process binding:

    [root@box] ss -ap
    State      Recv-Q Send-Q      Local Address:Port          Peer Address:Port
    LISTEN     0      128                    :::http                    :::*        users:(("httpd",20891,4),("httpd",20894,4),("httpd",20895,4),("httpd",20896,4)
    LISTEN     0      128             127.0.0.1:munin                    *:*        users:(("munin-node",1278,5))
    LISTEN     0      128                    :::ssh                     :::*        users:(("sshd",1175,4))
    LISTEN     0      128                     *:ssh                      *:*        users:(("sshd",1175,3))
    LISTEN     0      10              127.0.0.1:smtp                     *:*        users:(("sendmail",1199,4))
    LISTEN     0      128             127.0.0.1:x11-ssh-offset                  *:*        users:(("sshd",25734,8))
    LISTEN     0      128                   ::1:x11-ssh-offset                 :::*        users:(("sshd",25734,7))
    

    Eugen Constantin Dinca , Mar 14, 2011 at 23:47

    For Solaris you can use pfiles and then grep by sockname: or port: .

    A sample (from here ):

    pfiles `ptree | awk '{print $1}'` | egrep '^[0-9]|port:'
    

    rickumali , May 8, 2011 at 14:40

    I was once faced with trying to determine what process was behind a particular port (this time it was 8000). I tried a variety of lsof and netstat, but then took a chance and tried hitting the port via a browser (i.e. http://hostname:8000/ ). Lo and behold, a splash screen greeted me, and it became obvious what the process was (for the record, it was Splunk ).

    One more thought: "ps -e -o pid,args" (YMMV) may sometimes show the port number in the arguments list. Grep is your friend!

    Gilles , Oct 8, 2015 at 21:04

    In the same vein, you could telnet hostname 8000 and see if the server prints a banner. However, that's mostly useful when the server is running on a machine where you don't have shell access, and then finding the process ID isn't relevant. – Gilles May 8 '11 at 14:45


    [Oct 23, 2018] To switch from vertical split to horizontal split fast in Vim

    Nov 24, 2013 | stackoverflow.com

    ДМИТРИЙ МАЛИКОВ, Nov 24, 2013 at 7:55

    How can you switch your current windows from horizontal split to vertical split and vice versa in Vim?

    I did that a moment ago by accident but I cannot find the key again.

    Mark Rushakoff

    Vim mailing list says (re-formatted for better readability):

    • To change two vertically split windows to horizontal split: Ctrl-W t Ctrl-W K
    • Horizontally to vertically: Ctrl-W t Ctrl-W H

    Explanations:

    • Ctrl-W t -- makes the first (topleft) window current
    • Ctrl-W K -- moves the current window to full-width at the very top
    • Ctrl-W H -- moves the current window to full-height at far left

    Note that the t is lowercase, and the K and H are uppercase.

    Also, with only two windows, it seems like you can drop the Ctrl-W t part because if you're already in one of only two windows, what's the point of making it current?

    Too much php Aug 13 '09 at 2:17

    So if you have two windows split horizontally, and you are in the lower window, you just use ^WL

    Alex Hart Dec 7 '12 at 14:10

    There are a ton of interesting ^w commands (b, w, etc)

    holms Feb 28 '13 at 9:07

    somehow doesn't work for me.. =/

    Lambart Mar 26 at 19:34

    Just toggle your NERDTree panel closed before 'rotating' the splits, then toggle it back open. :NERDTreeToggle (I have it mapped to a function key for convenience).

    xxx Feb 19 '13 at 20:26

    ^W followed by capital H , J , K or L will move the current window to the far left, bottom, top or right respectively, like normal cursor navigation.

    The lowercase equivalents move focus instead of moving the window.

    respectTheCode, Jul 21 '13 at 9:55

    Wow, cool! Thanks! :-) – infous Feb 6 at 8:46

    it's much better since users use hjkl to move between buffers. – Afshin Mehrabani

    In VIM, take a look at the following to see different alternatives for what you might have done:

    :help opening-window

    For instance:

    Ctrl-W s
    Ctrl-W o
    Ctrl-W v
    Ctrl-W o
    Ctrl-W s

    Anon, Apr 29 at 21:45

    The command ^W-o is great! I did not know it. – Masi Aug 13 '09 at 2:20

    The following ex commands will (re-)split any number of windows:

    If there are hidden buffers, issuing these commands will also make the hidden buffers visible.

    Mark Oct 22 at 19:31

    When you have two or more windows open horizontally or vertically and want to switch them all to the other orientation, you can use the following:

    [Oct 22, 2018] move selection to a separate file

    Highly recommended!
    Oct 22, 2018 | superuser.com

    greg0ire ,Jan 23, 2013 at 13:29

    With vim, how can I move a piece of text to a new file? For the moment, I do this:

    Is there a more efficient way to do this?

    Before

    a.txt

    sometext
    some other text
    some other other text
    end
    
    After

    a.txt

    sometext
    end
    

    b.txt

    some other text
    some other other text
    

    Ingo Karkat, Jan 23, 2013 at 15:20

    How about these custom commands:
    :command! -bang -range -nargs=1 -complete=file MoveWrite  <line1>,<line2>write<bang> <args> | <line1>,<line2>delete _
    :command! -bang -range -nargs=1 -complete=file MoveAppend <line1>,<line2>write<bang> >> <args> | <line1>,<line2>delete _
    

    greg0ire ,Jan 23, 2013 at 15:27

    This is very ugly, but hey, it seems to do in one step exactly what I asked for (I tried). +1, and accepted. I was looking for a native way to do this quickly but since there does not seem to be one, yours will do just fine. Thanks! – greg0ire Jan 23 '13 at 15:27

    Ingo Karkat ,Jan 23, 2013 at 16:15

    Beauty is in the eye of the beholder. I find this pretty elegant; you only need to type it once (into your .vimrc). – Ingo Karkat Jan 23 '13 at 16:15

    greg0ire ,Jan 23, 2013 at 16:21

    You're right, "very ugly" should have been "very unfamiliar". Your command is very handy, and I think I'm definitely going to carve it into my .vimrc – greg0ire Jan 23 '13 at 16:21

    embedded.kyle ,Jan 23, 2013 at 14:08

    By "move a piece of text to a new file" I assume you mean cut that piece of text from the current file and create a new file containing only that text.

    Various examples:
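
    The examples here are a sketch of the likely form (file names are illustrative):

    :1w new_file        write line 1 to new_file
    :5,10w new_file     write lines 5 through 10 to new_file
    :'<,'>w new_file    write the visual selection to new_file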

    The above only copies the text and creates a new file containing that text. You will then need to delete afterward.

    This can be done using the same range and the d command:
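
    For example, matching the range above (a sketch):

    :5,10d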

    Or by using dd for the single line case.

    If you instead select the text using visual mode, and then hit : while the text is selected, you will see the following on the command line:

    :'<,'>

    Which indicates the selected text. You can then expand the command to:

    :'<,'>w >> old_file

    Which will append the text to an existing file. Then delete as above.


    One liner:

    :2,3 d | new +put! "

    The breakdown: :2,3 d deletes lines 2 through 3 into the unnamed register; new opens a new window on an empty buffer; +put! " puts the unnamed register before the first (empty) line of the new buffer, so no blank line is left at the top.

    greg0ire, Jan 23, 2013 at 14:09

    Your assumption is right. This looks good, I'm going to test. Could you explain 2. a bit more? I'm not very familiar with ranges. EDIT: If I try this on the second line, it writes the first line to the other file, not the second line. – greg0ire Jan 23 '13 at 14:09

    embedded.kyle ,Jan 23, 2013 at 14:16

    @greg0ire I got that a bit backward, I'll edit to better explain – embedded.kyle Jan 23 '13 at 14:16

    greg0ire ,Jan 23, 2013 at 14:18

    I added an example to make my question clearer. – greg0ire Jan 23 '13 at 14:18

    embedded.kyle ,Jan 23, 2013 at 14:22

    @greg0ire I corrected my answer. It's still two steps. The first copies and writes. The second deletes. – embedded.kyle Jan 23 '13 at 14:22

    greg0ire ,Jan 23, 2013 at 14:41

    Ok, if I understand well, the trick is to use ranges to select and write in the same command. That's very similar to what I did. +1 for the detailed explanation, but I don't think this is more efficient, since the trick with hitting ':' is what I do for the moment. – greg0ire Jan 23 '13 at 14:41

    Xyon ,Jan 23, 2013 at 13:32

    Select the text in visual mode, then press y to "yank" it into the buffer (copy) or d to "delete" it into the buffer (cut).

    Then you can :split <new file name> to split your vim window up, and press p to paste in the yanked text. Write the file as normal.

    To close the split again, pass the split you want to close :q .

    greg0ire ,Jan 23, 2013 at 13:42

    I have 4 steps for the moment: select, write, select, delete. With your method, I have 6 steps: select, delete, split, paste, write, close. I asked for something more efficient :P – greg0ire Jan 23 '13 at 13:42

    Xyon ,Jan 23, 2013 at 13:44

    Well, if you pass the split :x instead, you can combine writing and closing into one and make it five steps. :P – Xyon Jan 23 '13 at 13:44

    greg0ire ,Jan 23, 2013 at 13:46

    That's better, but 5 still > 4 :P – greg0ire Jan 23 '13 at 13:46

    Based on @embedded.kyle's answer and this Q&A, I ended up with this one-liner to append a selection to a file and delete it from the current file. After selecting some lines with Shift+V , hit : and run:
    '<,'>w >> test | normal gvd
    

    The first part appends the selected lines. The second part enters normal mode and runs gvd to reselect the last visual selection and then delete it.

    [Oct 22, 2018] Cut/copy and paste using visual selection

    Oct 22, 2018 | vim.wikia.com
    Visual selection is a common feature in applications, but Vim's visual selection has several benefits.

    To cut-and-paste or copy-and-paste:

    1. Position the cursor at the beginning of the text you want to cut/copy.
    2. Press v to begin character-based visual selection, or V to select whole lines, or Ctrl-v or Ctrl-q to select a block.
    3. Move the cursor to the end of the text to be cut/copied. While selecting text, you can perform searches and other advanced movement.
    4. Press d (delete) to cut, or y (yank) to copy.
    5. Move the cursor to the desired paste location.
    6. Press p to paste after the cursor, or P to paste before.

    Visual selection (steps 1-3) can be performed using a mouse.

    If you want to change the selected text, press c instead of d or y in step 4. In a visual selection, pressing c performs a change by deleting the selected text and entering insert mode so you can type the new text.

    Pasting over a block of text

    You can copy a block of text by pressing Ctrl-v (or Ctrl-q if you use Ctrl-v for paste), then moving the cursor to select, and pressing y to yank. Now you can move elsewhere and press p to paste the text after the cursor (or P to paste before). The paste inserts a block (which might, for example, be 4 rows by 3 columns of text).

    Instead of inserting the block, it is also possible to replace (paste over) the destination. To do this, move to the target location then press 1vp ( 1v selects an area equal to the original, and p pastes over it).

    When a count is used before v , V , or ^V (character, line or block selection), an area equal to the previous area, multiplied by the count, is selected. See the paragraph after :help <LeftRelease> .

    Note that this will only work if you actually did something to the previous visual selection, such as a yank, delete, or change operation. It will not work after visually selecting an area and leaving visual mode without taking any actions.

    See also Comments

    NOTE: after selecting the visual copy mode, you can hold the shift key while selecting the region to get a multiple line copy. For example, to copy three lines, press V, then hold down the Shift key while pressing the down arrow key twice. Then do your action on the buffer.

    I have struck out the above new comment because I think it is talking about something that may apply to those who have used :behave mswin . To visually select multiple lines, you type V , then press j (or cursor down). You hold down Shift only to type the uppercase V . Do not press Shift after that. If I am wrong, please explain here. JohnBeckett 10:48, October 7, 2010 (UTC)

    If you just want to copy (yank) the visually marked text, you do not need to 'y'ank it. Marking it will already copy it.

    Using a mouse, you can insert it at another position by clicking the middle mouse button.

    This also works across Vim applications on Windows systems (the clipboard content is inserted)


    This is a really useful thing in Vim. I feel lost without it in any other editor. I have some more points I'd like to add to this tip:


    You can replace a set of text in a visual block very easily by selecting a block, press c and then make changes to the first line. Pressing <Esc> twice replaces all the text of the original selection. See :help v_b_c .


    On Windows the <mswin.vim> script seems to be getting sourced for many users.

    Result: more Windows-like behavior (ctrl-v is "paste", instead of visual-block selection). Hunt down your system vimrc and remove the sourcing thereof if you don't like that behavior (or substitute <mrswin.vim> in its place; see VimTip63).

    With VimTip588 one can sort lines or blocks based on visual-block selection.


    With reference to the earlier post asking how to paste an inner block

    1. Select the inner block to copy using Ctrl-v and highlighting with the hjkl keys
    2. Yank the visual region (y)
    3. Select the inner block you want to overwrite (Ctrl-v then highlight with the hjkl keys)
    4. Paste the selection with P (that is, Shift-P); this will overwrite while keeping the block formation

    The "yank" buffers in Vim are not the same as the Windows clipboard (i.e., cut-and-paste) buffers. If you're using the yank, it only puts it in a Vim buffer - that buffer is not accessible to the Windows paste command. You'll want to use the Edit | Copy and Edit | Paste (or their keyboard equivalents) if you're using the Windows GUI, or select with your mouse and use your X-Windows cut-n-paste mouse buttons if you're running UNIX.


    Double-quote and star gives one access to the Windows clipboard or the unix equivalent. As an example, if I wanted to yank the current line into the clipboard I would type "*yy

    If I wanted to paste the contents of the clipboard into Vim at my current cursor location I would type "*p

    The double-quote and star trick works well with visual mode as well. ex: visually select text to copy to the clipboard and then type "*y

    I find this very useful and I use it all the time but it is a bit slow typing "* all the time so I am thinking about creating a macro to speed it up a bit.


    Copy and Paste using the System Clipboard

    There are some caveats regarding how the "*y (copy into System Clipboard) command works. We have to be sure that we are using vim-full (sudo aptitude install vim-full on debian-based systems) or a Vim that has X11 support enabled. Only then will the "*y command work.

    For our convenience as we are all familiar with using Ctrl+c to copy a block of text in most other GUI applications, we can also map Ctrl+c to "*y so that in Vim Visual Mode, we can simply Ctrl+c to copy the block of text we want into our system buffer. To do that, we simply add this line in our .vimrc file:

    map <C-c> "+y<CR>

    Restart Vim (or re-source the .vimrc) and we are good. Now whenever we are in Visual Mode, we can Ctrl+c to grab what we want and paste it into another application or another editor in a convenient and intuitive manner.
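
    One caveat: the prose above says "*y while the mapping uses "+y . Under X11 these are different registers ( "* is the primary selection, "+ the desktop clipboard); on Windows they behave the same. A paste counterpart, as a sketch (the key choices are only examples):

    " visual mode: copy the selection to the system clipboard
    vnoremap <C-c> "+y
    " normal mode: paste from the system clipboard
    nnoremap <leader>v "+p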

    [Oct 21, 2018] Moving lines between split windows in vim

    Notable quotes:
    "... "send the line I am on (or the test I selected) to the other window" ..."
    Oct 21, 2018 | superuser.com

    brad ,Nov 24, 2015 at 12:28

    I have two files, say a.txt and b.txt , in the same session of vim and I split the screen so I have file a.txt in the upper window and b.txt in the lower window.

    I want to move lines here and there from a.txt to b.txt : I select a line with Shift + v , then I move to b.txt in the lower window with Ctrl + w , paste with p , get back to a.txt with Ctrl + w and I can repeat the operation when I get to another line I want to move.

    My question: is there a quicker way to say vim "send the line I am on (or the test I selected) to the other window" ?

    Chong ,Nov 24, 2015 at 12:33

    Use q macro? q[some_letter] [whatever operations] q , then call the macro with [times to be called]@q – Chong Nov 24 '15 at 12:33

    Anthony Geoghegan ,Nov 24, 2015 at 13:00

    I presume that you're deleting the line that you've selected in a.txt . If not, you'd be pasting something else into b.txt . If so, there's no need to select the line first. – Anthony Geoghegan Nov 24 '15 at 13:00

    Anthony Geoghegan ,Nov 24, 2015 at 13:17

    This sounds like a good use case for a macro. Macros are commands that can be recorded and stored in a Vim register. Each register is identified by a letter from a to z.

    Recording

    From Recording keys for repeated jobs - Vim Tips

    To start recording, press q in Normal mode followed by a letter (a to z). That starts recording keystrokes to the specified register. Vim displays "recording" in the status line. Type any Normal mode commands, or enter Insert mode and type text. To stop recording, again press q while in Normal mode.

    For this particular macro, I chose the m (for move) register to store it.

    I pressed qm to record the following commands (they can be read back from the register contents below): dd to delete the current line, Ctrl-W j to move to the lower window, p to paste the line, and Ctrl-W k to move back to the upper window.

    When I typed q to finish recording the macro, the contents of the m register were:

    dd^Wjp^Wk
    
    Usage
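
    Standard macro invocation applies (the m register matches the recording above):

    @m    run the macro once (move the current line to the other window)
    5@m   run it five times
    @@    repeat the most recently used macro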

    brad ,Nov 24, 2015 at 14:26

    I asked to see if there is a command unknown to me that does the job: it seems there is none. In absence of such a command, this can be a good solution. – brad Nov 24 '15 at 14:26

    romainl ,Nov 26, 2015 at 9:54

    @brad, you can find all the commands available to you in the documentation. If it's not there, it doesn't exist; no need to ask random strangers. – romainl Nov 26 '15 at 9:54

    brad ,Nov 26, 2015 at 10:17

    @romainl, yes, I know this, but vim documentation is really huge and, although it doesn't scare me, there is always the possibility of missing something. Moreover, it could also be that you can obtain the effect using a combination of 2 commands, and in that case it would hardly be documented – brad Nov 26 '15 at 10:17

    [Oct 21, 2018] How to move around buffers in vim?

    Oct 21, 2018 | stackoverflow.com

    user3721893 ,Jul 23, 2014 at 5:43

    I normally work with more than 5 files at a time. I use buffers to open different files. I use commands such as :buf file1, :buf file2 etc. Is there a faster way to move to different files?

    eckes ,Jul 23, 2014 at 5:49

    What I use:

    And have a short look at :he buffer

    And the wiki entry on Easier Buffer Switching on the Vim Wiki: http://vim.wikia.com/wiki/Easier_buffer_switching

    SO already has a question regarding yours: How do you prefer to switch between buffers in Vim?

    romainl ,Jul 23, 2014 at 6:13

    A few mappings can make your life a lot easier.

    This one lists your buffers and prompts you for a number:

    nnoremap gb :buffers<CR>:buffer<Space>
    

    This one lists your buffers in the "wildmenu". Depends on the 'wildcharm' option as well as 'wildmenu' and 'wildmode' :

    nnoremap <leader>b :buffer <C-z>
    

    These ones allow you to cycle between all your buffers without much thinking:

    nnoremap <PageUp>   :bprevious<CR>
    nnoremap <PageDown> :bnext<CR>
    

    Also, don't forget <C-^> which allows you to alternate between two buffers.

    mikew ,Jul 23, 2014 at 6:38

    Once the buffers are already open, you can just type :b partial_filename to switch

    So if :ls shows that I have my ~/.vimrc open, then I can just type :b vimr or :b rc to switch to that buffer

    Brady Trainor ,Jul 25, 2014 at 22:13

    Below I describe some excerpts from sections of my .vimrc . It includes mapping the leader key, setting wilds tab completion, and finally my buffer nav key choices (all mostly inspired by folks on the interweb, including romainl). Edit: Then I ramble on about my shortcuts for windows and tabs.
    " easier default keys {{{1
    
    let mapleader=','
    nnoremap <leader>2 :@"<CR>
    

    The leader key is a prefix key for mostly user-defined key commands (some plugins also use it). The default is \ , but many people suggest the easier to reach , .

    The second line there is a command to @ execute from the " clipboard, in case you'd like to quickly try out various key bindings (without relying on :so % ). (My mnemonic is that Shift - 2 is @ .)

    " wilds {{{1
    
    set wildmenu wildmode=list:full
    set wildcharm=<C-z>
    set wildignore+=*~ wildignorecase
    

    For built-in completion, wildmenu is probably the part that shows up yellow on your Vim when using tab completion on command-line. wildmode is set to a comma-separated list, each coming up in turn on each tab completion (that is, my list is simply one element, list:full ). list shows rows and columns of candidates. full 's meaning includes maintaining existence of the wildmenu . wildcharm is the way to include Tab presses in your macros. The *~ is for my use in :edit and :find commands.

    " nav keys {{{1
    " windows, buffers and tabs {{{2
    " buffers {{{3
    
    nnoremap <leader>bb :b <C-z><S-Tab>
    nnoremap <leader>bh :ls!<CR>:b<Space>
    nnoremap <leader>bw :ls!<CR>:bw<Space>
    nnoremap <leader>bt :TSelectBuffer<CR>
    nnoremap <leader>be :BufExplorer<CR>
    nnoremap <leader>bs :BufExplorerHorizontalSplit<CR>
    nnoremap <leader>bv :BufExplorerVerticalSplit<CR>
    nnoremap <leader>3 :e#<CR>
    nmap <C-n> :bn<cr>
    nmap <C-p> :bp<cr>
    

    The ,3 is for switching between the "two" last buffers (easier to reach than the built-in Ctrl - 6 ). Mnemonic: Shift - 3 is # , and # is the register symbol for the last buffer. (See :marks .)

    ,bh is to select from hidden buffers ( ! ).

    ,bw is to bwipeout buffers by number or name. For instance, you can wipeout several while looking at the list, with ,bw 1 3 4 8 10 <CR> . Note that wipeout is more destructive than :bdelete . They have their pros and cons. For instance, :bdelete leaves the buffer in the hidden list, while :bwipeout removes global marks (see :help marks , and the description of uppercase marks).

    I haven't settled on these keybindings, I would sort of prefer that my ,bb was simply ,b (simply defining while leaving the others defined makes Vim pause to see if you'll enter more).

    Those shortcuts for :BufExplorer are actually the defaults for that plugin, but I have it written out so I can change them if I want to start using ,b without a hang.

    You didn't ask for this:

    If you still find Vim buffers a little awkward to use, try to combine the functionality with tabs and windows (until you get more comfortable?).

    " windows {{{3
    
    " window nav
    nnoremap <leader>w <C-w>
    nnoremap <M-h> <C-w>h
    nnoremap <M-j> <C-w>j
    nnoremap <M-k> <C-w>k
    nnoremap <M-l> <C-w>l
    " resize window
    nnoremap <C-h> <C-w><
    nnoremap <C-j> <C-w>+
    nnoremap <C-k> <C-w>-
    nnoremap <C-l> <C-w>>
    

    Notice how nice ,w is for a prefix. Also, I reserve Ctrl key for resizing, because Alt ( M- ) is hard to realize in all environments, and I don't have a better way to resize. I'm fine using ,w to switch windows.

    " tabs {{{3
    
    nnoremap <leader>t :tab
    nnoremap <M-n> :tabn<cr>
    nnoremap <M-p> :tabp<cr>
    nnoremap <C-Tab> :tabn<cr>
    nnoremap <C-S-Tab> :tabp<cr>
    nnoremap tn :tabe<CR>
    nnoremap te :tabe<Space><C-z><S-Tab>
    nnoremap tf :tabf<Space>
    nnoremap tc :tabc<CR>
    nnoremap to :tabo<CR>
    nnoremap tm :tabm<CR>
    nnoremap ts :tabs<CR>
    
    nnoremap th :tabr<CR>
    nnoremap tj :tabn<CR>
    nnoremap tk :tabp<CR>
    nnoremap tl :tabl<CR>
    
    " or, it may make more sense to use
    " nnoremap th :tabp<CR>
    " nnoremap tj :tabl<CR>
    " nnoremap tk :tabr<CR>
    " nnoremap tl :tabn<CR>
    

    In summary of my window and tabs keys, I can navigate both of them with Alt , which is actually pretty easy to reach. In other words:

    " (modifier) key choice explanation {{{3
    "
    "       KEYS        CTRL                  ALT            
    "       hjkl        resize windows        switch windows        
    "       np          switch buffer         switch tab      
    "
    " (resize windows is hard to do otherwise, so we use ctrl which works across
    " more environments. i can use ',w' for windowcmds o.w.. alt is comfortable
    " enough for fast and gui nav in tabs and windows. we use np for navs that 
    " are more linear, hjkl for navs that are more planar.) 
    "
    

    This way, if the Alt is working, you can actually hold it down while you find your "open" buffer pretty quickly, amongst the tabs and windows.


    There are many ways to solve this. The best is the one that WORKS for YOU. You have lots of fuzzy-match plugins that help you navigate. The 2 things that impress me most are

    1) CtrlP or Unite's fuzzy buffer search

    2) LustyExplorer and/or LustyJuggler

    And the simplest :

    :map <F5> :ls<CR>:e #
    

    Pressing F5 lists all buffers; just type the number.

    [Oct 21, 2018] Favorite (G)Vim plugins/scripts?

    Dec 27, 2009 | stackoverflow.com
    What are your favorite (G)Vim plugins/scripts?

    community wiki 2 revs ,Jun 24, 2009 at 13:35

    Nerdtree

    The NERD tree allows you to explore your filesystem and to open files and directories. It presents the filesystem to you in the form of a tree which you manipulate with the keyboard and/or mouse. It also allows you to perform simple filesystem operations.

    The tree can be toggled easily with :NERDTreeToggle which can be mapped to a more suitable key. The keyboard shortcuts in the NERD tree are also easy and intuitive.

    Edit: Added synopsis

    SpoonMeiser ,Sep 17, 2008 at 19:32

    For those of us not wanting to follow every link to find out about each plugin, care to furnish us with a brief synopsis? – SpoonMeiser Sep 17 '08 at 19:32

    AbdullahDiaa ,Sep 10, 2012 at 19:51

    and NERDTree with NERDTreeTabs are awesome combination github.com/jistr/vim-nerdtree-tabs – AbdullahDiaa Sep 10 '12 at 19:51

    community wiki 2 revs ,May 27, 2010 at 0:08

    Tim Pope has some kickass plugins. I love his surround plugin.

    Taurus Olson ,Feb 21, 2010 at 18:01

    Surround is a great plugin for sure. – Taurus Olson Feb 21 '10 at 18:01

    Benjamin Oakes ,May 27, 2010 at 0:11

    Link to all his vim contributions: vim.org/account/profile.php?user_id=9012 – Benjamin Oakes May 27 '10 at 0:11

    community wiki SergioAraujo, Mar 15, 2011 at 15:35

    Pathogen plugin and more things commented by Steve Losh

    Patrizio Rullo ,Sep 26, 2011 at 12:11

    Pathogen is the FIRST plugin you have to install on every Vim installation! It resolves the plugin management problems every Vim developer has. – Patrizio Rullo Sep 26 '11 at 12:11

    Profpatsch ,Apr 12, 2013 at 8:53

    I would recommend switching to Vundle . It's better by a long shot and truly automates. You can give vim-addon-manager a try, too. – Profpatsch Apr 12 '13 at 8:53

    community wiki JPaget, Sep 15, 2008 at 20:47

    Taglist , a source code browser plugin for Vim, is currently the top rated plugin at the Vim website and is my favorite plugin.

    mindthief ,Jun 27, 2012 at 20:53

    A more recent alternative to this is Tagbar , which appears to have some improvements over Taglist. This blog post offers a comparison between the two plugins. – mindthief Jun 27 '12 at 20:53

    community wiki 1passenger, Nov 17, 2009 at 9:15

    I love snipMate . It's simular to snippetsEmu, but has a much better syntax to read (like Textmate).

    community wiki cschol, Aug 22, 2008 at 4:19

    A very nice grep replacement for GVim is Ack . A search plugin written in Perl that beats Vim's internal grep implementation and externally invoked greps, too. It also by default skips any version control directories in the project directory, e.g. '.svn'. This blog shows a way to integrate Ack with vim.

    FUD, Aug 27, 2013 at 15:50

    github.com/mileszs/ack.vim – FUD Aug 27 '13 at 15:50

    community wiki Dominic Dos Santos ,Sep 12, 2008 at 12:44

    A.vim is a great little plugin. It allows you to quickly switch between header and source files with a single command. The default is :A , but I remapped it to F2 to reduce keystrokes.
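
    Such a remap might look like this in a .vimrc (an assumption; the plugin itself only provides the :A command):

    " jump between header and source with A.vim
    nnoremap <F2> :A<CR>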

    community wiki 2 revs, Aug 25, 2008 at 15:06

    I really like the SuperTab plugin, it allows you to use the tab key to do all your insert completions.

    community wiki Greg Hewgill, Aug 25, 2008 at 19:23

    I have recently started using a plugin that highlights differences in your buffer from a previous version in your RCS system (Subversion, git, whatever). You just need to press a key to toggle the diff display on/off. You can find it here: http://github.com/ghewgill/vim-scmdiff . Patches welcome!

    Nathan Fellman, Sep 15, 2008 at 18:51

    Do you know if this supports bitkeeper? I looked on the website but couldn't even see whom to ask. – Nathan Fellman Sep 15 '08 at 18:51

    Greg Hewgill, Sep 16, 2008 at 9:26

    It doesn't explicitly support bitkeeper at the moment, but as long as bitkeeper has a "diff" command that outputs a normal patch file, it should be easy enough to add. – Greg Hewgill Sep 16 '08 at 9:26

    Yogesh Arora, Mar 10, 2010 at 0:47

    does it support clearcase – Yogesh Arora Mar 10 '10 at 0:47

    Greg Hewgill, Mar 10, 2010 at 1:39

    @Yogesh: No, it doesn't support ClearCase at this time. However, if you can add ClearCase support, a patch would certainly be accepted. – Greg Hewgill Mar 10 '10 at 1:39

    Olical ,Jan 23, 2013 at 11:05

    This version can be loaded via pathogen in a git submodule: github.com/tomasv/vim-scmdiff – Olical Jan 23 '13 at 11:05

    community wiki 4 revs, May 23, 2017 at 11:45

    1. Elegant (mini) buffer explorer - This is the multiple file/buffer manager I use. Takes very little screen space. It looks just like most IDEs where you have a top tab-bar with the files you've opened. I've tested some other similar plugins before, and this is my pick.
    2. TagList - Small file explorer, without the "extra" stuff the other file explorers have. Just lets you browse directories and open files with the "enter" key. Note that this has already been noted by previous commenters to your questions.
    3. SuperTab - Already noted by WMR in this post, looks very promising. It's an auto-completion replacement key for Ctrl-P.
    4. Desert256 color Scheme - Readable, dark one.
    5. Moria color scheme - Another good, dark one. Note that it's gVim only.
    6. Enhanced Python syntax - If you're using Python, this is an enhanced syntax version. Works better than the original. I'm not sure, but this might be already included in the newest version. Nonetheless, it's worth adding to your syntax folder if you need it.
    7. Enhanced JavaScript syntax - Same as the above.
    8. EDIT: Comments - Great little plugin to [un]comment chunks of text. Language recognition included ("#", "//", "/* .. */", etc.).

    community wiki Konrad Rudolph, Aug 25, 2008 at 14:19

    Not a plugin, but I advise any Mac user to switch to the MacVim distribution which is vastly superior to the official port.

    As for plugins, I used VIM-LaTeX for my thesis and was very satisfied with the usability boost. I also like the Taglist plugin which makes use of the ctags library.

    community wiki Yariv ,Nov 25, 2010 at 19:58

    clang complete - the best C++ code completion I have seen so far. By using an actual compiler (that would be clang) the plugin is able to complete complex expressions including STL and smart pointers.

    community wiki Greg Bowyer, Jul 30, 2009 at 19:51

    No one has mentioned matchit yet? It makes HTML / XML soup much nicer: http://www.vim.org/scripts/script.php?script_id=39

    community wiki 2 revs, 2 users 91% ,Nov 24, 2011 at 5:18

    Tomas Restrepo posted on some great Vim scripts/plugins . He has also pointed out some nice color themes on his blog, too. Check out his Vim category .

    community wiki HaskellElephant ,Mar 29, 2011 at 17:59,

    With version 7.3, undo branches were added to vim. A very powerful feature, but hard to use, until Steve Losh made Gundo which makes this feature possible to use with an ASCII representation of the tree and a diff of the change. A must for using undo branches.

    community wiki, Auguste ,Apr 20, 2009 at 8:05

    Matrix Mode .

    community wiki wilhelmtell ,Dec 10, 2010 at 19:11

    My latest favourite is Command-T . Granted, to install it you need to have Ruby support and you'll need to compile a C extension for Vim. But oy-yoy-yoy does this plugin make a difference in opening files in Vim!

    Victor Farazdagi, Apr 19, 2011 at 19:16

    Definitely! Let not the ruby + c compiling stop you, you will be amazed at how well this plugin enhances your toolset. I have been ignoring this plugin for too long; installed it today and already find myself using NERDTree less and less. – Victor Farazdagi Apr 19 '11 at 19:16

    datentyp ,Jan 11, 2012 at 12:54

    With ctrlp now there is something as awesome as Command-T written in pure Vimscript! It's available at github.com/kien/ctrlp.vim – datentyp Jan 11 '12 at 12:54

    FUD ,Dec 26, 2012 at 4:48

    just my 2 cents.. being a naive user of both plugins, with the first few characters of a file name I saw a much better result with the Command-T plugin and a lot of false positives for ctrlp. – FUD Dec 26 '12 at 4:48

    community wiki f3lix, Mar 15, 2011 at 12:55

    Conque Shell : Run interactive commands inside a Vim buffer

    Conque is a Vim plugin which allows you to run interactive programs, such as bash on linux or powershell.exe on Windows, inside a Vim buffer. In other words it is a terminal emulator which uses a Vim buffer to display the program output.

    http://code.google.com/p/conque/

    http://www.vim.org/scripts/script.php?script_id=2771

    community wiki 2 revs ,Nov 20, 2009 at 14:51

    The vcscommand plugin provides global ex commands for manipulating version-controlled source files and it supports CVS, SVN and some other repositories.

    You can do almost all repository related tasks from with in vim:
    * Taking the diff of current buffer with repository copy
    * Adding new files
    * Reverting the current buffer to the repository copy by nullifying the local changes....

    community wiki Sirupsen ,Nov 20, 2009 at 15:00

    Just gonna name a few I didn't see here, but which I still find extremely helpful:

    community wiki thestoneage ,Dec 22, 2011 at 16:25

    One plugin that is missing in the answers is NERDCommenter , which lets you do almost anything with comments. For example {add, toggle, remove} comments. And more. See this blog entry for some examples.

    community wiki james ,Feb 19, 2010 at 7:17

    I like taglist and fuzzyfinder; those are very cool plugins

    community wiki JAVH ,Aug 15, 2010 at 11:54

    TaskList

    This script is based on the eclipse Task List. It will search the file for FIXME, TODO, and XXX (or a custom list) and put them in a handy list for you to browse which at the same time will update the location in the document so you can see exactly where the tag is located. Something like an interactive 'cw'

    community wiki Peter Hoffmann ,Aug 29, 2008 at 4:07

    I really love the snippetsEmu Plugin. It emulates some of the behaviour of Snippets from the OS X editor TextMate, in particular the variable bouncing and replacement behaviour.

    community wiki Anon ,Sep 11, 2008 at 10:20

    Zenburn color scheme and good fonts - Droid Sans Mono ( http://en.wikipedia.org/wiki/Droid_(font) ) on Linux, Consolas on Windows.

    Gary Willoughby ,Jul 7, 2011 at 21:21

    Take a look at DejaVu Sans Mono too dejavu-fonts.org/wiki/Main_Page – Gary Willoughby Jul 7 '11 at 21:21

    Santosh Kumar ,Mar 28, 2013 at 4:48

    Droid Sans Mono makes capital m and 0 appear the same. – Santosh Kumar Mar 28 '13 at 4:48

    community wiki julienXX ,Jun 22, 2010 at 12:05

    If you're on a Mac, you got to use peepopen , fuzzyfinder on steroids.

    Khaja Minhajuddin ,Apr 5, 2012 at 9:24

    Command+T is a free alternative to this: github.com/wincent/Command-T – Khaja Minhajuddin Apr 5 '12 at 9:24

    community wiki Peter Stuifzand ,Aug 25, 2008 at 19:16

    I use the following two plugins all the time:

    Csaba_H ,Jun 24, 2009 at 13:47

    vimoutliner is really good for managing small pieces of information (from tasks/todo-s to links) – Csaba_H Jun 24 '09 at 13:47

    ThiefMaster ♦ ,Nov 25, 2010 at 20:35

    Adding some links/descriptions would be nice – ThiefMaster ♦ Nov 25 '10 at 20:35

    community wiki chiggsy ,Aug 26, 2009 at 18:22

    For vim I like a little help with completions. Vim has tons of completion modes, but really, I just want vim to complete anything it can, whenever it can.

    I hate typing ending quotes, but fortunately this plugin obviates the need for such misery.

    Those two are my heavy hitters.

    This one may step up to roam my code like an unquiet shade, but I've yet to try it.

    community wiki Brett Stahlman, Dec 11, 2009 at 13:28

    Txtfmt (The Vim Highlighter) Screenshots

    The Txtfmt plugin gives you a sort of "rich text" highlighting capability, similar to what is provided by RTF editors and word processors. You can use it to add colors (foreground and background) and formatting attributes (all combinations of bold, underline, italic, etc...) to your plain text documents in Vim.

    The advantage of this plugin over something like Latex is that with Txtfmt, your highlighting changes are visible "in real time", and as with a word processor, the highlighting is WYSIWYG. Txtfmt embeds special tokens directly in the file to accomplish the highlighting, so the highlighting is unaffected when you move the file around, even from one computer to another. The special tokens are hidden by the syntax; each appears as a single space. For those who have applied Vince Negri's conceal/ownsyntax patch, the tokens can even be made "zero-width".

    community wiki 2 revs, Dec 10, 2010 at 4:37

    tcomment

    "I map the "Command + /" keys so i can just comment stuff out while in insert mode imap :i

    [Oct 21, 2018] Duplicate a whole line in Vim

    Notable quotes:
    "... Do people not run vimtutor anymore? This is probably within the first five minutes of learning how to use Vim. ..."
    "... Can also use capital Y to copy the whole line. ..."
    "... I think the Y should be "copy from the cursor to the end" ..."
    "... In normal mode what this does is copy . copy this line to just below this line . ..."
    "... And in visual mode it turns into '<,'> copy '> copy from start of selection to end of selection to the line below end of selection . ..."
    "... I like: Shift + v (to select the whole line immediately and let you select other lines if you want), y, p ..."
    "... Multiple lines with a number in between: y7yp ..."
    "... 7yy is equivalent to y7y and is probably easier to remember how to do. ..."
    "... or :.,.+7 copy .+7 ..."
    "... When you press : in visual mode, it is transformed to '<,'> so it pre-selects the line range the visual selection spanned over ..."
    Oct 21, 2018 | stackoverflow.com

    sumek, Sep 16, 2008 at 15:02

    How do I duplicate a whole line in Vim in a similar way to Ctrl + D in IntelliJ IDEA/Resharper or Ctrl + Alt + / in Eclipse?

    dash-tom-bang, Feb 15, 2016 at 23:31

    Do people not run vimtutor anymore? This is probably within the first five minutes of learning how to use Vim. – dash-tom-bang Feb 15 '16 at 23:31

    Mark Biek, Sep 16, 2008 at 15:06

    yy or Y to copy the line
    or
    dd to delete (cutting) the line

    then

    p to paste the copied or deleted text after the current line
    or
    P to paste the copied or deleted text before the current line

    camflan, Sep 28, 2008 at 15:55

    Can also use capital Y to copy the whole line. – camflan Sep 28 '08 at 15:55

    nXqd, Jul 19, 2012 at 11:35

    @camflan I think the Y should be "copy from the cursor to the end" – nXqd Jul 19 '12 at 11:35

    Amir Ali Akbari, Oct 9, 2012 at 10:33

    and 2yy can be used to copy 2 lines (and for any other n) – Amir Ali Akbari Oct 9 '12 at 10:33

    zelk, Mar 9, 2014 at 13:29

    To copy two lines, it's even faster just to go yj or yk, especially since you don't double up on one character. Plus, yk is a backwards version that 2yy can't do, and you can put the number of lines to reach backwards in y9j or y2k, etc.. Only difference is that your count has to be n-1 for a total of n lines, but your head can learn that anyway. – zelk Mar 9 '14 at 13:29

    DarkWiiPlayer, Apr 13 at 7:26

    I know I'm late to the party, but whatever; I have this in my .vimrc:
    nnoremap <C-d> :copy .<CR>
    vnoremap <C-d> :copy '><CR>
    

    the :copy command just copies the selected line or the range (always whole lines) to below the line number given as its argument.

    In normal mode what this does is copy . copy this line to just below this line .

    And in visual mode it turns into '<,'> copy '> copy from start of selection to end of selection to the line below end of selection .

    yolenoyer, Apr 11 at 16:34

    I like to use this mapping:
    :nnoremap yp Yp

    because it makes it consistent to use alongside the native YP command.

    Gabe add a comment, Jul 14, 2009 at 4:45

    I like: Shift + v (to select the whole line immediately and let you select other lines if you want), y, p

    jedi, Feb 11 at 17:20

    If you would like to duplicate a line and paste it right away below the current line, just like in Sublime Ctrl + Shift + D, then you can add this to your .vimrc file.

    imap <S-C-d> <Esc>Yp

    jedi, Apr 14 at 17:48

    This works perfectly fine for me: imap <S-C-d> <Esc>Ypi in insert mode and nmap <S-C-d> <Esc>Yp in normal mode – jedi Apr 14 at 17:48

    Chris Penner, Apr 20, 2015 at 4:33

    Default is yyp, but I've been using this rebinding for a year or so and love it:

    " set Y to duplicate lines, works in visual mode as well. nnoremap Y yyp vnoremap Y y`>pgv

    yemu, Oct 12, 2013 at 18:23

    yyp - paste after

    yyP - paste before

    Mikk, Dec 4, 2015 at 9:09

    @A-B-B However, there is a miniature difference here - what line will your cursor land on. – Mikk Dec 4 '15 at 9:09

    theschmitzer, Sep 16, 2008 at 15:16

    yyp - remember it with "yippee!"

    Multiple lines with a number in between: y7yp

    graywh, Jan 4, 2009 at 21:25

    7yy is equivalent to y7y and is probably easier to remember how to do. – graywh Jan 4 '09 at 21:25

    Nefrubyr, Jul 29, 2014 at 14:09

    y7yp (or 7yyp) is rarely useful; the cursor remains on the first line copied so that p pastes the copied lines between the first and second line of the source. To duplicate a block of lines use 7yyP – Nefrubyr Jul 29 '14 at 14:09

    DarkWiiPlayer, Apr 13 at 7:28

    @Nefrubyr or :.,.+7 copy .+7 :P – DarkWiiPlayer Apr 13 at 7:28

    Michael, May 12, 2016 at 14:54

    For someone who doesn't know vi, some answers from above might mislead him with phrases like "paste ... after/before current line ".
    It's actually "paste ... after/before cursor ".

    yy or Y to copy the line
    or
    dd to delete the line

    then

    p to paste the copied or deleted text after the cursor
    or
    P to paste the copied or deleted text before the cursor


    For more key bindings, you can visit this site: vi Complete Key Binding List

    ap-osd, Feb 10, 2016 at 13:23

    For those starting to learn vi, here is a good introduction to vi by listing side by side vi commands to typical Windows GUI Editor cursor movement and shortcut keys. It lists all the basic commands including yy (copy line) and p (paste after) or P (paste before).

    vi (Vim) for Windows Users

    pjz, Sep 16, 2008 at 15:04

    yy

    will yank the current line without deleting it

    dd

    will delete the current line

    p

    will put a line grabbed by either of the previous methods

    Benoit, Apr 17, 2012 at 15:17

    Normal mode: see other answers.

    The Ex way:
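
    A sketch of the usual :t (alias of :copy ) forms:

    :t.        duplicate the current line below itself
    :t$        copy the current line to the end of the file
    :'<,'>t0   copy the visually selected lines to the top of the file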

    If you need to move instead of copying, use :m instead of :t .

    This can be really powerful if you combine it with :g or :v :
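
    For example (the pattern is illustrative only):

    :g/TODO/t$   append a copy of every line containing TODO to the end of the file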

    Reference: :help range, :help :t, :help :g, :help :m and :help :v

    Benoit, Jun 30, 2012 at 14:17

    When you press : in visual mode, it is transformed to '<,'> so it pre-selects the line range the visual selection spanned over. So, in visual mode, :t0 will copy the lines at the beginning. – Benoit Jun 30 '12 at 14:17

    Niels Bom, Jul 31, 2012 at 8:21

    For the record: when you type a colon (:) you go into command line mode where you can enter Ex commands. vimdoc.sourceforge.net/htmldoc/cmdline.html Ex commands can be really powerful and terse. The yyp solutions are "Normal mode" commands. If you want to copy/move/delete a far-away line or range of lines an Ex command can be a lot faster. – Niels Bom Jul 31 '12 at 8:21

    Burak Erdem, Jul 8, 2016 at 16:55

    :t. is the exact answer to the question. – Burak Erdem Jul 8 '16 at 16:55

    Aaron Thoma, Aug 22, 2013 at 23:31

    Y is usually remapped to y$ (yank (copy) until end of line (from current cursor position, not beginning of line)) though. With this line in .vimrc : :nnoremap Y y$ – Aaron Thoma Aug 22 '13 at 23:31

    Kwondri, Sep 16, 2008 at 15:37

    If you want another way :-)

    "ayy this will store the line in buffer a

    "ap this will put the contents of buffer a at the cursor.

    There are many variations on this.

    "a5yy this will store the 5 lines in buffer a

    see http://www.vim.org/htmldoc/help.html for more fun

    frbl, Jun 21, 2015 at 21:04

    Thanks, I used this as a bind: map <Leader>d "ayy"ap – frbl Jun 21 '15 at 21:04

    Rook, Jul 14, 2009 at 4:37

    Another option would be to go with:
    nmap <C-d> mzyyp`z

    gives you the advantage of preserving the cursor position.

    Anon, Sep 18, 2008 at 20:32

    You can also try <C-x><C-l> which will repeat the last line from insert mode and brings you a completion window with all of the lines. It works almost like <C-p>

    Jorge Gajon, May 11, 2009 at 6:38

    This is very useful, but to avoid having to press many keys I have mapped it to just CTRL-L, this is my map: inoremap ^L ^X^L – Jorge Gajon May 11 '09 at 6:38

    cori, Sep 16, 2008 at 15:06

    1 gotcha: when you use "p" to put the line, it puts it after the line your cursor is on, so if you want to add the line after the line you're yanking, don't move the cursor down a line before putting the new line.

    Ghoti, Jan 31, 2016 at 11:05

    or use capital P - put before – Ghoti Jan 31 '16 at 11:05

    [Oct 21, 2018] Indent multiple lines quickly in vi

    Oct 21, 2018 | stackoverflow.com

    Allain Lalonde, Oct 25, 2008 at 3:27

    Should be trivial, and it might even be in the help, but I can't figure out how to navigate it. How do I indent multiple lines quickly in vi?

    Greg Hewgill, Oct 25, 2008 at 3:28

    Use the > command. To indent 5 lines, 5>> . To mark a block of lines and indent it, Vjj> to indent 3 lines (vim only). To indent a curly-braces block, put your cursor on one of the curly braces and use >% .

    If you're copying blocks of text around and need to align the indent of a block in its new location, use ]p instead of just p . This aligns the pasted block with the surrounding text.

    Also, the shiftwidth setting allows you to control how many spaces to indent.
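
    For instance (2 is only an example width):

    :set shiftwidth=2

    After this, each > or < shifts lines by two spaces.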

    akdom, Oct 25, 2008 at 3:31

    <shift>-v also works to select a line in Vim. – akdom Oct 25 '08 at 3:31

    R. Martinho Fernandes, Feb 15, 2009 at 17:26

    I use >i} (indent inner {} block). Works in vim. Not sure it works in vi. – R. Martinho Fernandes Feb 15 '09 at 17:26

    Kamran Bigdely, Feb 28, 2011 at 23:25

    My problem (in gVim) is that the command > indents much more than 2 blanks (I want just two blanks, but > indents something like 5 blanks) – Kamran Bigdely Feb 28 '11 at 23:25

    Greg Hewgill, Mar 1, 2011 at 18:42

    @Kamran: See the shiftwidth setting for the way to change that. – Greg Hewgill Mar 1 '11 at 18:42

    Greg Hewgill, Feb 28, 2013 at 3:36

    @MattStevens: You can find extended discussion about this phenomenon here: meta.stackexchange.com/questions/9731/ – Greg Hewgill Feb 28 '13 at 3:36

    Michael Ekoka, Feb 15, 2009 at 5:42

    When you select a block and use > to indent, it indents then goes back to normal mode. I have this in my .vimrc file:
    vnoremap < <gv
    vnoremap > >gv

    It lets you indent your selection as many time as you want.

    sundar, Sep 1, 2009 at 17:14

    To indent the selection multiple times, you can simply press . to repeat the previous command. – sundar Sep 1 '09 at 17:14

    masukomi, Dec 6, 2013 at 21:24

    The problem with . in this situation is that you have to move your fingers. With @mike's solution (same one i use) you've already got your fingers on the indent key and can just keep whacking it to keep indenting rather than switching and doing something else. Using period takes longer because you have to move your hands and it requires more thought because it's a second, different, operation. – masukomi Dec 6 '13 at 21:24

    Johan, Jan 20, 2009 at 21:11

    A big selection would be:
    gg=G

    It is really fast, and everything gets indented ;-)
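
    A few related = (re-indent) variations, for reference:

    ==     re-indent the current line
    =i{    re-indent the inner { ... } block
    gg=G   re-indent the whole file (as above)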

    asgs, Jan 28, 2014 at 21:57

    I've an XML file and turned on syntax highlighting. Typing gg=G just puts every line starting from position 1. All the white spaces have been removed. Is there anything else specific to XML? – asgs Jan 28 '14 at 21:57

    Johan, Jan 29, 2014 at 6:10

    stackoverflow.com/questions/7600860/

    Amanuel Nega, May 19, 2015 at 19:51

    I think set cindent should be in vimrc, or you should run :set cindent before running that command – Amanuel Nega May 19 '15 at 19:51

    Amanuel Nega, May 19, 2015 at 19:57

    I think cindent must be set first. And @asgs, I think this only works for C-style programming languages. – Amanuel Nega May 19 '15 at 19:57

    sqqqrly, Sep 28, 2017 at 23:59

    I use block-mode visual selection:

    This is not a uni-tasker. It works: