Softpanorama

May the source be with you, but remember the KISS principle ;-)
Skepticism and critical thinking is not panacea, but can help to understand the world better

Unix Sysadmin Tips


Lazy Linux: 10 essential tricks for admins, by Vallard Benincosa, Certified Technical Sales Specialist, IBM

20 Jul 2008 | IBM DeveloperWorks

How to be a more productive Linux systems administrator

Learn these 10 tricks and you'll be the most powerful Linux® systems administrator in the universe...well, maybe not the universe, but you will need these tips to play in the big leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples accompany each trick, so you can duplicate them on your own systems.

The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?

The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin's bag of tricks. These tips will save you time—and even if you don't get paid more money to be more efficient, you'll at least have more time to play Halo.

Trick 1: Unmounting the unresponsive DVD drive

The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it ejects immediately. He then complains that, on most enterprise Linux servers, if a process is running in that directory, the ejection won't happen. For too long as a Linux administrator, if I couldn't figure out what was running and why it wouldn't release the DVD drive, I would reboot the machine and grab the disk on the way back up. But this is ineffective.

Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:

# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

Now open up a second terminal and try to eject the DVD drive:

# eject

You'll get a message like:

umount: /media/cdrom: device is busy

Before you free it, let's find out who is using it.

# fuser /media/cdrom

You see that a process is indeed holding the mount point, and that it is our own loop that prevents the disk from being ejected.
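If you want fuser to name the offending process as well, add the -v flag; the output will look roughly like the following (the PID shown is only an example):

# fuser -v /media/cdrom
                     USER        PID ACCESS COMMAND
/media/cdrom:        root       4402 ..c.. bash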

Now, if you are root, you can exercise your godlike powers and kill processes:

# fuser -k /media/cdrom

Boom! Just like that, freedom. Now solemnly unmount the drive:

# eject

fuser is good.

Trick 2: Getting your screen back when it's hosed

Try this:

# cat /bin/cat

Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?

You type reset. But wait you say, typing reset is too close to typing reboot or shutdown. Your palms start to sweat—especially if you are doing this on a production machine.

Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:

# reset

Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.

Trick 3: Collaboration with screen

David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."

"Fine," you say. "What machine are you on?"

David responds: "Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) "OK," you say. You exercise your godlike root powers and on another machine become David:

# su - david

Then you go over to posh:

# ssh posh

Once you are there, you run:

# screen -S foo

Then you holler at David:

"Hey David, run the following command on your terminal: # screen -x foo."

This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.

At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.

The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.

But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type: Ctrl-A D . (I mean, hold down the Ctrl key and strike the A key. Then push the D key.)

You can then reattach by running the screen -x foo command again.
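If you've forgotten what the session was called, screen can list the sessions it knows about; the PID and name below are only illustrative:

# screen -ls
There is a screen on:
        4847.foo        (Detached)
1 Socket in /var/run/screen/S-root.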

Trick 4: Getting back the root password

You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get on the machine and change the password. This doesn't work in all cases (like if you made a GRUB password and forgot that too), but here's how you do it in a normal case, using CentOS Linux as the example.

First reboot the system. When it reboots you'll come to the GRUB screen shown in Figure 1. Press an arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.


Figure 1. GRUB screen after reboot

Next, use the arrow keys to select the kernel entry that will boot, and press E to edit it. You'll then see something like Figure 2:


Figure 2. Ready to edit the kernel line
Ready to edit the kernel line
 

Use the arrow keys again to highlight the line that begins with kernel, and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments:
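If it helps to picture it, the edited line ends up looking something like the following (the kernel version and root device here are invented for illustration); the trailing 1 is the only change:

kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet 1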


Figure 3. Append the argument with the number 1
Append the argument with the number 1
 

Then press Enter, then B, and the kernel will boot into single-user mode. Once there, you can run the passwd command to change the root password:

sh-3.00# passwd
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

Now you can reboot, and the machine will boot up with your new password.

Trick 5: SSH back door

Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come into you.

In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door. To use it, you'll need a machine on the Internet that you can use as an intermediary.

In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.


Figure 4. Poking a hole in the firewall
Poking a hole in the firewall
 

Here's how to proceed:

  1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.

     
  2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you'll forward traffic arriving on port 2222 of blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're not putting ginger out on the Internet naked.

    You can do this with the following syntax:

    ~# ssh -R 2222:localhost:22 thedude@blackbox.example.com

    Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

    thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

    to keep the machine busy, and then minimize the window. (An alternative that avoids the busy loop is sketched just after this list.)

  3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

    root@tech:~# ssh thedude@blackbox.example.com

  4. Once tech is on the blackbox, they can SSH to ginger using the following command:

    thedude@blackbox:~$ ssh -p 2222 root@localhost

  5. Tech will then be prompted for a password. They should enter the root password of ginger.

     
  6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.)
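As promised in step 2, here is a keep-alive alternative to the while loop: OpenSSH can hold the reverse tunnel open by itself if you skip the remote command (-N) and enable periodic server-alive probes. Both options are standard OpenSSH; the 60-second interval is just a reasonable default:

~# ssh -N -R 2222:localhost:22 -o ServerAliveInterval=60 thedude@blackbox.example.com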
Trick 6: Remote VNC session through an SSH tunnel

VNC or virtual network computing has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

For example, suppose in Trick 5, ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

  1. Start a VNC server session on ginger. This is done by running something like:

    root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99

    The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are on a really slow connection, a depth of 8 may be a better option. The :99 specifies the display number the VNC server runs on. The VNC protocol's base port is 5900, so display :99 means the server is accessible on port 5999.

    When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

  2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

    root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com

    Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

    thedude@blackbox:~$ vncviewer localhost:99

    That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

  3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:

    root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com

    This time the SSH flag we used was -L, which instead of pushing 5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

  4. From tech, VNC to ginger by running the command:

    root@tech:~# vncviewer localhost:99

    Tech will now have a VNC session directly to ginger.

While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.

Let me add a trick to this trick: If tech is running the Windows® operating system and doesn't have a command-line SSH client, then tech can run PuTTY. PuTTY can be set to forward SSH ports by looking in the options in the sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like in Figure 5.


Figure 5. PuTTY can forward SSH ports for tunneling

If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.

Trick 7: Checking your bandwidth

Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS mount ginger's shared filesystem.

The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.

So they do this. But now the question is: How much bandwidth do they really have?

Gigabit Ethernet has a theoretical limit of about 125MBps. Where does that number come from? Well,

1Gb/s = 1000Mb/s; 1000Mb/8 = 125MB; "b" = "bits," "B" = "bytes"

But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:

# wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz

You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user, which is visible on both nodes:

tar zxvf iperf*gz
cd iperf-2.0.2
./configure --prefix=/home/bob/perf
make
make install

On ginger, run:

# /home/bob/perf/bin/iperf -s -f M

This machine will act as the server and print out performance speeds in MBps.

On the beckham node, run:

# /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60

You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.

In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.

I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!
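If you suspect the bond itself, two quick checks on a setup like this are the bonding driver's status file and the driver reported for each slave NIC (bond0, eth0, and eth1 are the usual names, but yours may differ):

# cat /proc/net/bonding/bond0
# ethtool -i eth0
# ethtool -i eth1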

Trick 8: Command-line scripting and utilities

A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk, grep, and sed. There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.

For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:

# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
done >>/etc/hosts

Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.
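A small variation on the same idea, in case you'd rather avoid the separate counter and expr: bash can strip the zero padding itself with base-10 arithmetic expansion, so this one-liner produces the same 200 lines:

# for i in $(seq -w 200); do echo "192.168.99.$((10#$i)) n$i"; done >> /etc/hosts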

As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.

Assume the SSH is set up to authenticate without a password. Then run:

# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort | uniq

A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.

First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:

free -tm | grep Mem | awk '{print $2}'

That command says to: run free with the -t and -m flags so memory is reported in megabytes with a totals line, grep out the line that begins with Mem, and use awk to print the second field, which is the total memory in the node.

This operation is performed on every node.

Once you have performed the command on every node, the entire output of all 200 nodes is piped (|d) to the sort command so that all the memory values are sorted.

Finally, you eliminate duplicates with the uniq command, which leaves you with one of two results: a single value, meaning every node reports the same total memory, or multiple values, meaning at least one node differs from the rest.

This command isn't perfect. If you find that a value of memory is different than what you expect, you won't know on which node it was or how many nodes there were. Another command may need to be issued for that.
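If uniq does report more than one value, a follow-up loop like this one (same assumptions: password-less SSH and host names n001 through n200) will show you which node is the odd one out:

# for num in $(seq -w 200); do echo -n "n$num: "; ssh n$num free -tm | grep Mem | awk '{print $2}'; done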

What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. That is its real value: speed for a quick-and-dirty check.

Trick 9: Spying on the console

Some software prints error messages to the console that may not necessarily show up on your SSH session. Using the vcs devices can let you examine these. From within an SSH session, run the following command on a remote server: # cat /dev/vcs1. This will show you what is on the first console. You can also look at the other virtual terminals using 2, 3, etc. If a user is typing on the remote system, you'll be able to see what he typed.
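To sweep the usual virtual consoles in one go, a quick loop around the same trick works (vcs1 through vcs6 exist on most default console setups):

# for n in 1 2 3 4 5 6; do echo "=== vcs$n ==="; cat /dev/vcs$n; done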

In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.

Trick 10: Random system information collection

In Trick 8, you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.

First, let's gather information about the processor. This is easily done as follows:

# cat /proc/cpuinfo

This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.

A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:

# cat /proc/cpuinfo | grep processor | wc -l

I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.

Another piece of information I may require is disk information. You can get this with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes. Running # df -h also shows how the disk was partitioned.

And to end the list, here's a way to look at the firmware of your system—a method to get the BIOS level and the firmware on the NIC.

To check the BIOS version, you can run the dmidecode command. Unfortunately, you can't easily grep out just the value you want, so piping the output through less and paging to the BIOS Information section is the practical approach. On my Lenovo T61 laptop, the output looks like this:

# dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...

This is much more efficient than rebooting your machine and looking at the POST output.

To examine the driver and firmware versions of your Ethernet adapter, run ethtool:

# ethtool -i eth0
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: 0.3-0

Conclusion

There are thousands of tricks you can learn from someone who's an expert at the command line, and the best way to pick them up is to watch one at work and keep the man pages close at hand.

I hope at least one of these tricks helped you learn something you didn't know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don't like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.

About the author

  Vallard Benincosa is a lazy Linux Certified IT professional working for the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.
 

Old News ;-)

[Nov 09, 2019] Mirroring a running system into a ramdisk Oracle Linux Blog

Nov 09, 2019 | blogs.oracle.com


Mirroring a running system into a ramdisk -- Greg Marsden

In this blog post, Oracle Linux kernel developer William Roche presents a method to mirror a running system into a ramdisk.

A RAM mirrored System ?

There are cases where a system can boot correctly but after some time, can lose its system disk access - for example an iSCSI system disk configuration that has network issues, or any other disk driver problem. Once the system disk is no longer accessible, we rapidly face a hang situation followed by I/O failures, without the possibility of local investigation on this machine. I/O errors can be reported on the console:

 XFS (dm-0): Log I/O Error Detected....

Or losing access to basic commands like:

# ls
-bash: /bin/ls: Input/output error

The approach presented here allows a small system disk to be mirrored in memory to avoid the I/O failure situation described above, which provides the ability to investigate the reasons for the disk loss. The system disk loss will be noticed as an I/O hang, at which point the system transitions to using only the ramdisk.

To enable this, the Oracle Linux developer Philip "Bryce" Copeland created the method detailed in the following sections.

Disk and memory sizes:

As we are going to mirror the entire system installation to the memory, this system installation image has to fit in a fraction of the memory - giving enough memory room to hold the mirror image and necessary running space.

Of course this is a trade-off between the memory available to the server and the minimal disk size needed to run the system. For example a 12GB disk space can be used for a minimal system installation on a 16GB memory machine.

A standard Oracle Linux installation uses XFS as the root filesystem, which (currently) can't be shrunk. In order to generate a usable "small enough" system, it is recommended to perform the OS installation on a correctly sized disk space. Of course, a correctly sized installation location can be created using partitions of a larger physical disk. Then, the needed application filesystems can be mounted from their current installation disk(s). Some system adjustments may also be required (services added, configuration changes, etc.).

This configuration phase should not be underestimated, as it can be difficult to separate the system from the needed applications, and keeping both on the same space could make the image too large for RAM-disk mirroring.

The idea is not to keep an entire system load active when disk access is lost, but to have enough of a system available to avoid failures of basic commands and to analyze the situation.

We are also going to avoid the use of swap. When the system disk access is lost, we don't want to require it for swap data. Also, we don't want to use more memory space to hold a swap space mirror. The memory is better used directly by the system itself.

The system installation can have a swap space (for example a 1.2GB space on our 12GB disk example) but we are neither going to mirror it nor use it.

Our 12GB disk example could be used with: 1GB /boot space, 11GB LVM Space (1.2GB swap volume, 9.8 GB root volume).

Ramdisk memory footprint:

The ramdisk size has to be a little larger (8M) than the root volume size that we are going to mirror, making room for metadata. We can use either of 2 types of ramdisk: brd, the standard block ramdisk driver, or zram, which compresses its contents in memory.

We can expect roughly 30% to 50% memory space gain from zram compared to brd, but zram must use 4k I/O blocks only. This means that the filesystem used for root has to only deal with a multiple of 4k I/Os.

Basic commands:

Here is a simple list of commands to manually create and use a ramdisk and mirror the root filesystem space. We create a temporary configuration that needs to be undone or the subsequent reboot will not work. But we also provide below a way of automating at startup and shutdown.

Note the root volume size (considered to be ol/root in this example):

# lvs --units k -o lv_size ol/root
  LSize
  10268672.00k

Create a ramdisk a little larger than that (at least 8M larger):

# modprobe brd rd_nr=1 rd_size=$((10268672 + 8*1024))

Verify the created disk:

# lsblk /dev/ram0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
ram0   1:0    0 9.8G  0 disk

Put the disk under lvm control

# pvcreate /dev/ram0
  Physical volume "/dev/ram0" successfully created.
# vgextend ol /dev/ram0
  Volume group "ol" successfully extended
# vgscan --cache
  Reading volume groups from cache.
  Found volume group "ol" using metadata type lvm2
# lvconvert -y -m 1 ol/root /dev/ram0
  Logical volume ol/root successfully converted.

We now have ol/root mirrored onto our /dev/ram0 disk.

# lvs -a -o +devices
  LV              VG Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---  9.79g                                   40.70            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---  9.79g                                                    /dev/sda2(307)
  [root_rimage_1] ol Iwi-aor---  9.79g                                                    /dev/ram0(1)
  [root_rmeta_0]  ol ewi-aor---  4.00m                                                    /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---  4.00m                                                    /dev/ram0(0)
  swap            ol -wi-ao---- <1.20g                                                    /dev/sda2(0)

A few minutes (or seconds) later, the synchronization is completed:

# lvs -a -o +devices
  LV              VG Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---  9.79g                                  100.00            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---  9.79g                                                    /dev/sda2(307)
  [root_rimage_1] ol iwi-aor---  9.79g                                                    /dev/ram0(1)
  [root_rmeta_0]  ol ewi-aor---  4.00m                                                    /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---  4.00m                                                    /dev/ram0(0)
  swap            ol -wi-ao---- <1.20g                                                    /dev/sda2(0)

We have our mirrored configuration running !

For security, we can also remove the swap and the /boot and /boot/efi (if it exists) mount points:

# swapoff -a
# umount /boot/efi
# umount /boot

Stopping the system also requires some actions, as you need to clean up the configuration so that it will not be looking for a now-gone ramdisk on reboot.

# lvconvert -y -m 0 ol/root /dev/ram0
  Logical volume ol/root successfully converted.
# vgreduce ol /dev/ram0
  Removed "/dev/ram0" from volume group "ol"
# mount /boot
# mount /boot/efi
# swapon -a
What about in-memory compression?

As indicated above, zRAM devices can compress data in-memory, but 2 main problems need to be fixed first: LVM has to be told to accept zram devices, and the root filesystem has to use 4k sector I/Os.

Make lvm work with zram:

The lvm configuration file has to be changed to take into account the "zram" type of devices. Include the following "types" entry in the "devices" section of the /etc/lvm/lvm.conf file:

devices {
        types = [ "zram", 16 ]
}
Root file system I/Os:

A standard Oracle Linux installation uses XFS, and we can check the sector size used (depending on the disk type used) with

# xfs_info /
meta-data=/dev/mapper/ol-root isize=256    agcount=4, agsize=641792 blks
         =                    sectsz=512   attr=2, projid32bit=1
         =                    crc=0        finobt=0 spinodes=0
data     =                    bsize=4096   blocks=2567168, imaxpct=25
         =                    sunit=0      swidth=0 blks
naming   =version 2           bsize=4096   ascii-ci=0 ftype=1
log      =internal            bsize=4096   blocks=2560, version=2
         =                    sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                extsz=4096   blocks=0, rtextents=0

We can notice here that the sector size (sectsz) used on this root fs is a standard 512 bytes. This fs type cannot be mirrored with a zRAM device, and needs to be recreated with 4k sector sizes.

Transforming the root file system to 4k sector size:

This is simply a backup (to a zram disk) and restore procedure after recreating the root FS. To do so, the system has to be booted from another system image. Booting from an installation DVD image can be a good possibility.

sh-4.2# vgchange -a y ol
  2 logical volume(s) in volume group "ol" now active
sh-4.2# mount /dev/mapper/ol-root /mnt
sh-4.2# modprobe zram
sh-4.2# echo 10G > /sys/block/zram0/disksize
sh-4.2# mkfs.xfs /dev/zram0
meta-data=/dev/zram0          isize=256    agcount=4, agsize=655360 blks
         =                    sectsz=4096  attr=2, projid32bit=1
         =                    crc=0        finobt=0, sparse=0
data     =                    bsize=4096   blocks=2621440, imaxpct=25
         =                    sunit=0      swidth=0 blks
naming   =version 2           bsize=4096   ascii-ci=0 ftype=1
log      =internal log        bsize=4096   blocks=2560, version=2
         =                    sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                extsz=4096   blocks=0, rtextents=0
sh-4.2# mkdir /mnt2
sh-4.2# mount /dev/zram0 /mnt2
sh-4.2# xfsdump -L BckUp -M dump -f /mnt2/ROOT /mnt
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsdump: level 0 dump of localhost:/mnt
...
xfsdump: dump complete: 130 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /mnt2/ROOT OK (success)
xfsdump: Dump Status: SUCCESS
sh-4.2# umount /mnt
sh-4.2# mkfs.xfs -f -s size=4096 /dev/mapper/ol-root
meta-data=/dev/mapper/ol-root isize=256    agcount=4, agsize=641792 blks
         =                    sectsz=4096  attr=2, projid32bit=1
         =                    crc=0        finobt=0, sparse=0
data     =                    bsize=4096   blocks=2567168, imaxpct=25
         =                    sunit=0      swidth=0 blks
naming   =version 2           bsize=4096   ascii-ci=0 ftype=1
log      =internal log        bsize=4096   blocks=2560, version=2
         =                    sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                extsz=4096   blocks=0, rtextents=0
sh-4.2# mount /dev/mapper/ol-root /mnt
sh-4.2# xfsrestore -f /mnt2/ROOT /mnt
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
...
xfsrestore: restore complete: 337 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /mnt2/ROOT OK (success)
xfsrestore: Restore Status: SUCCESS
sh-4.2# umount /mnt
sh-4.2# umount /mnt2
sh-4.2# reboot
$ xfs_info /
meta-data=/dev/mapper/ol-root isize=256    agcount=4, agsize=641792 blks
         =                    sectsz=4096  attr=2, projid32bit=1
         =                    crc=0        finobt=0 spinodes=0
data     =                    bsize=4096   blocks=2567168, imaxpct=25
         =                    sunit=0      swidth=0 blks
naming   =version 2           bsize=4096   ascii-ci=0 ftype=1
log      =internal            bsize=4096   blocks=2560, version=2
         =                    sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                extsz=4096   blocks=0, rtextents=0

With sectsz=4096, our system is now ready for zRAM mirroring.

Basic commands with a zRAM device:

# modprobe zram
# zramctl --find --size 10G
/dev/zram0
# pvcreate /dev/zram0
  Physical volume "/dev/zram0" successfully created.
# vgextend ol /dev/zram0
  Volume group "ol" successfully extended
# vgscan --cache
  Reading volume groups from cache.
  Found volume group "ol" using metadata type lvm2
# lvconvert -y -m 1 ol/root /dev/zram0
  Logical volume ol/root successfully converted.
# lvs -a -o +devices
  LV              VG Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---  9.79g                                   12.38            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---  9.79g                                                    /dev/sda2(307)
  [root_rimage_1] ol Iwi-aor---  9.79g                                                    /dev/zram0(1)
  [root_rmeta_0]  ol ewi-aor---  4.00m                                                    /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---  4.00m                                                    /dev/zram0(0)
  swap            ol -wi-ao---- <1.20g                                                    /dev/sda2(0)
# lvs -a -o +devices
  LV              VG Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---  9.79g                                  100.00            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---  9.79g                                                    /dev/sda2(307)
  [root_rimage_1] ol iwi-aor---  9.79g                                                    /dev/zram0(1)
  [root_rmeta_0]  ol ewi-aor---  4.00m                                                    /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---  4.00m                                                    /dev/zram0(0)
  swap            ol -wi-ao---- <1.20g                                                    /dev/sda2(0)
# zramctl
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lzo            10G  9.8G  5.3G  5.5G       1

The compressed disk uses a total of 5.5GB of memory to mirror a 9.8G volume size (using in this case 8.5G).

Removal is performed the same way as brd, except that the device is /dev/zram0 instead of /dev/ram0.
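Undoing the zRAM setup by hand is a sketch of the same steps shown earlier for brd, plus wiping the LVM label and resetting the zram device (the zramctl --reset step assumes a reasonably recent util-linux):

# lvconvert -y -m 0 ol/root /dev/zram0
# vgreduce ol /dev/zram0
# pvremove /dev/zram0
# zramctl --reset /dev/zram0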

Automating the process:

Fortunately, the procedure can be automated on system boot and shutdown with the following scripts (given as examples).

The start method: /usr/sbin/start-raid1-ramdisk: [ https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/start-raid1-ramdisk ]

After a chmod 555 /usr/sbin/start-raid1-ramdisk, running this script on a 4k xfs root file system should show something like:

# /usr/sbin/start-raid1-ramdisk
  Volume group "ol" is already consistent.
RAID1 ramdisk: intending to use 10276864 K of memory for facilitation of [ / ]
  Physical volume "/dev/zram0" successfully created.
  Volume group "ol" successfully extended
  Logical volume ol/root successfully converted.
Waiting for mirror to synchronize...
LVM RAID1 sync of [ / ] took 00:01:53 sec
  Logical volume ol/root changed.
NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4           9.8G  9.8G  5.5G  5.8G       1

The stop method: /usr/sbin/stop-raid1-ramdisk: [ https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/stop-raid1-ramdisk ]

After a chmod 555 /usr/sbin/stop-raid1-ramdisk, running this script should show something like:

# /usr/sbin/stop-raid1-ramdisk
  Volume group "ol" is already consistent.
  Logical volume ol/root changed.
  Logical volume ol/root successfully converted.
  Removed "/dev/zram0" from volume group "ol"
  Labels on physical volume "/dev/zram0" successfully wiped.

A service Unit file can also be created: /etc/systemd/system/raid1-ramdisk.service [https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/raid1-ramdisk.service]

[Unit]
Description=Enable RAMdisk RAID 1 on LVM
After=local-fs.target
Before=shutdown.target reboot.target halt.target

[Service]
ExecStart=/usr/sbin/start-raid1-ramdisk
ExecStop=/usr/sbin/stop-raid1-ramdisk
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0

[Install]
WantedBy=multi-user.target
Conclusion:

When the system disk access problem manifests itself, the ramdisk mirror branch will provide the possibility to investigate the situation. The goal of this procedure is not to keep the system running on this memory-mirror configuration, but to help investigate a bad situation.

When the problem is identified and fixed, I really recommend coming back to a standard configuration -- enjoying the entire memory of the system, a standard system disk, a possible swap space, and so on.

I hope the method described here can help. I also want to thank Philip "Bryce" Copeland, who created the first prototype of the above scripts, and Mark Kanda, who helped test many aspects of this work, for their reviews.

[Nov 09, 2019] chkservice Is A systemd Unit Manager With A Terminal User Interface

The site is https://github.com/linuxenko/chkservice . The tool is written in C++.
It looks like in version 0.3 the author increased the complexity by adding features which are probably not needed at all.
Nov 07, 2019 | www.linuxuprising.com

chkservice, a terminal user interface (TUI) for managing systemd units, has been updated recently with window resize and search support.

chkservice is a simplistic systemd unit manager that uses ncurses for its terminal interface. Using it you can enable or disable, and start or stop, a systemd unit. It also shows each unit's status (enabled, disabled, static or masked).

You can navigate the chkservice user interface using keyboard shortcuts. To enable or disable a unit press Space, and to start or stop a unit press s. You can access the help screen, which shows all available keys, by pressing ?.

The command line tool had its first release in August 2017, with no new releases until a few days ago when version 0.2 was released, quickly followed by 0.3.

With the latest 0.3 release, chkservice adds a search feature that allows easily searching through all systemd units.

To search, type / followed by your search query, and press Enter. To search for the next item matching your search query you'll have to type / again, followed by Enter or Ctrl + m (without entering any search text).

Another addition to the latest chkservice is window-resize support. In the 0.1 version, the tool would close when the user tried to resize the terminal window. That's no longer the case; chkservice now allows resizing of the terminal window it runs in.

And finally, the last addition to the latest chkservice 0.3 is G-g navigation support. Press G (Shift + g) to navigate to the bottom, and g to navigate to the top.

Download and install chkservice

The initial (0.1) chkservice version can be found in the official repositories of a few Linux distributions, including Debian and Ubuntu (and Debian- or Ubuntu-based Linux distributions -- e.g. Linux Mint, Pop!_OS, Elementary OS and so on).
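On one of those distributions, installing the packaged (0.1) version should be as simple as the following (assuming a Debian- or Ubuntu-based system with the package available in its repositories):

$ sudo apt install chkservice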

There are some third-party repositories available as well, including a Fedora Copr, Ubuntu / Linux Mint PPA, and Arch Linux AUR, but at the time I'm writing this, only the AUR package was updated to the latest chkservice version 0.3.

You may also install chkservice from source. Use the instructions provided in the tool's readme to either create a DEB package or install it directly.

[Nov 08, 2019] Multiple Linux sysadmins working as root

No new interesting ideas for such an important topic whatsoever. One of the main problems here is documenting the actions of each administrator in such a way that the full set of actions is visible to everybody in a convenient and transparent manner. With multiple terminals open, the shell history is not a file from which you can deduce each sysadmin's actions, because the parts of the history from the other terminals are missing. Solaris 10 actually had some ideas implemented in this area, but they never made it to Linux.
May 21, 2012 | serverfault.com

In our team we have three seasoned Linux sysadmins having to administer a few dozen Debian servers. Previously we have all worked as root using SSH public key authentication. But we had a discussion on what is the best practice for that scenario and couldn't agree on anything.

Everybody's SSH public key is put into ~root/.ssh/authorized_keys2

Using personalized accounts and sudo

That way we would login with personalized accounts using SSH public keys and use sudo to do single tasks with root permissions. In addition we could give ourselves the "adm" group that allows us to view log files.
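A minimal sketch of what this looks like on a Debian-family box (the user name david is just an example; the stock sudoers file already grants the sudo group full rights):

$ sudo usermod -aG adm,sudo david
$ sudo grep '^%sudo' /etc/sudoers
%sudo   ALL=(ALL:ALL) ALL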

Using multiple UID 0 users

This is a very unique proposal from one of the sysadmins. He suggests creating three users in /etc/passwd, all having UID 0 but different login names. He claims that this is not actually forbidden and allows everyone to be UID 0 while still being auditable.

Comments:

The second option is the best one IMHO. Personal accounts, sudo access. Disable root access via SSH completely. We have a few hundred servers and half a dozen system admins, this is how we do it.

How does agent forwarding break exactly?

Also, if it's such a hassle using sudo in front of every task you can invoke a sudo shell with sudo -s or switch to a root shell with sudo su -


With regard to the 3rd suggested strategy, other than perusal of the useradd -o -u userXXX options as recommended by @jlliagre, I am not familiar with running multiple users with the same UID. (Hence if you do go ahead with that, I would be interested if you could update the post with any issues (or successes) that arise...)
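For reference, the useradd invocation being alluded to would look roughly like this; admin2 is a made-up name, and it is shown only to illustrate the -o/-u mechanism, not to endorse it:

# useradd -o -u 0 -g 0 -M -d /root -s /bin/bash admin2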

I guess my first observation regarding the first option "Everybody's SSH public key is put into ~root/.ssh/authorized_keys2", is that unless you absolutely are never going to work on any other systems;

  1. then at least some of the time, you are going to have to work with user accounts and sudo

The second observation would be, that if you work on systems that aspire to HIPAA, PCI-DSS compliance, or stuff like CAPP and EAL, then you are going to have to work around the issues of sudo because;

  1. It is an industry standard to provide non-root individual user accounts that can be audited, disabled, expired, etc., typically using some centralized user database.

So; Using personalized accounts and sudo

It is unfortunate that as a sysadmin, almost everything you will need to do on a remote machine is going to require some elevated permissions; however, it is annoying that most of the SSH-based tools and utilities are busted while you are in sudo.

Hence I can pass on some tricks that I use to work around the annoyances of sudo that you mention. The first problem is that if root login is blocked using PermitRootLogin=no, or you do not have the root user's ssh key, then SCPing files becomes something of a PITA.

Problem 1 : You want to scp files from the remote side, but they require root access, however you cannot login to the remote box as root directly.

Boring Solution : copy the files to home directory, chown, and scp down.

ssh userXXX@remotesystem, sudo su - etc., cp /etc/somefiles to /home/userXXX/somefiles, chown -R userXXX /home/userXXX/somefiles, use scp to retrieve the files from the remote side.

Less Boring Solution : sftp supports the -s sftp_server flag, hence you can do something like the following (if you have configured password-less sudo in /etc/sudoers );

sftp  -s '/usr/bin/sudo /usr/libexec/openssh/sftp-server' \
userXXX@remotehost:/etc/resolv.conf

(You can also use this hack-around with sshfs, but I am not sure it's recommended... ;-)
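For what it's worth, the sshfs variant of the same hack would look something like this (the mount point is a placeholder, and it assumes the same password-less sudo setup on the remote side):

$ mkdir -p /tmp/remote-etc
$ sshfs -o sftp_server='/usr/bin/sudo /usr/libexec/openssh/sftp-server' \
    userXXX@remotehost:/etc /tmp/remote-etc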

If you don't have password-less sudo rights, or for some configured reason that method above is broken, I can suggest one more less boring file transfer method, to access remote root files.

Port Forward Ninja Method :

Login to the remote host, but specify that the remote port 3022 (can be anything free, and non-reserved for admins, ie >1024) is to be forwarded back to port 22 on the local side.

 [localuser@localmachine ~]$ ssh userXXX@remotehost -R 3022:localhost:22
Last login: Mon May 21 05:46:07 2012 from 123.123.123.123
------------------------------------------------------------------------
This is a private system; blah blah blah
------------------------------------------------------------------------

Get root in the normal fashion...

-bash-3.2$ sudo su -
[root@remotehost ~]#

Now you can scp the files in the other direction avoiding the boring boring step of making a intermediate copy of the files;

[root@remotehost ~]#  scp -o NoHostAuthenticationForLocalhost=yes \
 -P3022 /etc/resolv.conf localuser@localhost:~
localuser@localhost's password: 
resolv.conf                                 100%  
[root@remotehost ~]#

Problem 2: SSH agent forwarding : If you load the root profile, e.g. by specifying a login shell, the necessary environment variables for SSH agent forwarding such as SSH_AUTH_SOCK are reset, hence SSH agent forwarding is "broken" under sudo su - .

Half baked answer :

Anything that properly loads a root shell is going to rightfully reset the environment; however, there is a slight work-around you can use when you need BOTH root permission AND the ability to use the SSH agent, AT THE SAME TIME.

This achieves a kind of chimera profile that should really not be used, because it is a nasty hack, but it is useful when you need to SCP files from the remote host as root to some other remote host.

Anyway, you can allow your user to preserve their environment variables by setting the following in sudoers:

 Defaults:userXXX    !env_reset

this allows you to create nasty hybrid login environments like so;

login as normal;

[localuser@localmachine ~]$ ssh userXXX@remotehost 
Last login: Mon May 21 12:33:12 2012 from 123.123.123.123
------------------------------------------------------------------------
This is a private system; blah blah blah
------------------------------------------------------------------------
-bash-3.2$ env | grep SSH_AUTH
SSH_AUTH_SOCK=/tmp/ssh-qwO715/agent.1971

Create a bash shell that runs /root/.profile and /root/.bashrc but preserves SSH_AUTH_SOCK:

-bash-3.2$ sudo -E bash -l

So this shell has root permissions, and root $PATH (but a borked home directory...)

bash-3.2# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel) context=user_u:system_r:unconfined_t
bash-3.2# echo $PATH
/usr/kerberos/sbin:/usr/local/sbin:/usr/sbin:/sbin:/home/xtrabm/xtrabackup-manager:/usr/kerberos/bin:/opt/admin/bin:/usr/local/bin:/bin:/usr/bin:/opt/mx/bin

But you can use that invocation to do things that require remote sudo root, but also the SSH agent access like so;

bash-3.2# scp /root/.ssh/authorized_keys ssh-agent-user@some-other-remote-host:~
/root/.ssh/authorized_keys              100%  126     0.1KB/s   00:00    
bash-3.2#


The 3rd option looks ideal - but have you actually tried it out to see what's happening? While you might see the additional usernames in the authentication step, any reverse lookup is going to return the same value.

Allowing root direct ssh access is a bad idea, even if your machines are not connected to the internet / use strong passwords.

Usually I use 'su' rather than sudo for root access.


I use (1), but I happened to type

rm -rf / tmp *

on one ill-fated day. I can see (1) being bad enough if you have more than a handful of admins.

(2) is probably more engineered - and you can become full-fledged root through sudo su -. Accidents are still possible though.

(3) I would not touch with a barge pole. I used it on Suns, in order to have a non-barebone-sh root account (if I remember correctly) but it was never robust - plus I doubt it would be very auditable.

Definitely answer 2.
  1. Means that you're allowing SSH access as root . If this machine is in any way public facing, this is just a terrible idea; back when I ran SSH on port 22, my VPS got multiple attempts hourly to authenticate as root. I had a basic IDS set up to log and ban IPs that made multiple failed attempts, but they kept coming. Thankfully, I'd disabled SSH access as the root user as soon as I had my own account and sudo configured. Additionally, you have virtually no audit trail doing this.
  2. Provides root access as and when it is needed. Yes, you barely have any privileges as a standard user, but this is pretty much exactly what you want; if an account does get compromised, you want it to be limited in its abilities. You want any super user access to require a password re-entry. Additionally, sudo access can be controlled through user groups, and restricted to particular commands if you like, giving you more control over who has access to what. Additionally, commands run as sudo can be logged, so it provides a much better audit trail if things go wrong. Oh, and don't just run "sudo su -" as soon as you log in. That's terrible, terrible practice.
  3. Your sysadmin's idea is bad. And he should feel bad. No, *nix machines probably won't stop you from doing this, but both your file system, and virtually every application out there expects each user to have a unique UID. If you start going down this road, I can guarantee that you'll run into problems. Maybe not immediately, but eventually. For example, despite displaying nice friendly names, files and directories use UID numbers to designate their owners; if you run into a program that has a problem with duplicate UIDs down the line, you can't just change a UID in your passwd file later on without having to do some serious manual file system cleanup.

sudo is the way forward. It may cause additional hassle with running commands as root, but it provides you with a more secure box, both in terms of access and auditing.


Definitely option 2, but use groups to give each user as much control as possible without needing to use sudo. sudo in front of every command loses half the benefit because you are always in the danger zone. If you make the relevant directories writable by the sysadmins without sudo you return sudo to the exception which makes everyone feel safer.


In the old days, sudo did not exist. As a consequence, having multiple UID 0 users was the only available alternative. But it's still not that good, notably with logging based on the UID to obtain the username. Nowadays, sudo is the only appropriate solution. Forget anything else.

It is in fact documented as permissible: BSD unices have had their toor account for a long time, and bashroot users tend to be accepted practice on systems where csh is standard (accepted malpractice ;)

Perhaps I'm weird, but method (3) is what popped into my mind first as well. Pros: you'd have every user's name in the logs and would know who did what as root. Cons: they'd each be root all the time, so mistakes can be catastrophic.

I'd like to question why you need all admins to have root access. All 3 methods you propose have one distinct disadvantage: once an admin runs a sudo bash -l or sudo su - or such, you lose your ability to track who does what and after that, a mistake can be catastrophic. Moreover, in case of possible misbehaviour, this even might end up a lot worse.

Instead you might want to consider going another way: give each admin a personal account, and grant elevated rights only over the particular subsystems that admin is responsible for (via sudo rules or dedicated service accounts).

This way, martin would be able to safely handle postfix, and in case of mistake or misbehaviour, you'd only lose your postfix system, not the entire server.

Same logic can be applied to any other subsystem, such as apache, mysql, etc.
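In sudoers terms, a sketch of that idea might look like the following line (the user name and command list are illustrative only; tighten them to whatever the subsystem actually needs):

martin  ALL = (root) /usr/sbin/postfix, /usr/sbin/postqueue, /usr/sbin/postsuper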

Of course, this is purely theoretical at this point, and might be hard to set up. It does look like a better way to go tho. At least to me. If anyone tries this, please let me know how it went.


[Nov 08, 2019] Perl tricks for system administrators by Ruth Holloway

Notable quotes:
"... /home/<department>/<username> ..."
Jul 27, 2016 | opensource.com

Did you know that Perl is a great programming language for system administrators? Perl is platform-independent so you can do things on different operating systems without rewriting your scripts. Scripting in Perl is quick and easy, and its portability makes your scripts amazingly useful. Here are a few examples, just to get your creative juices flowing!

Renaming a bunch of files

Suppose you need to rename a whole bunch of files in a directory. In this case, we've got a directory full of .xml files, and we want to rename them all to .html . Easy-peasy!

#!/usr/bin/perl
use strict;
use warnings;

foreach my $file (glob "*.xml") {
    my $new = substr($file, 0, -3) . "html";
    rename $file, $new;
}

Then just cd to the directory where you need to make the change, and run the script. You could put this in a cron job, if you needed to run it regularly, and it is easily enhanced to accept parameters.
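For example, a crontab entry along these lines (both paths are hypothetical) would run the rename nightly at 2 a.m.:

0 2 * * * cd /var/www/docs && /usr/local/bin/rename-xml.pl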

Speaking of accepting parameters, let's take a look at a script that does just that.

Creating a Linux user account


Suppose you need to regularly create Linux user accounts on your system, and the format of the username is first initial/last name, as is common in many businesses. (This is, of course, a good idea, until you get John Smith and Jane Smith working at the same company -- or want John to have two accounts, as he works part-time in two different departments. But humor me, okay?) Each user account needs to be in a group based on their department, and home directories are of the format /home/<department>/<username> . Let's take a look at a script to do that:

#!/usr/bin/env perl
use strict;
use warnings;

my $adduser = '/usr/sbin/adduser';

use Getopt::Long qw(GetOptions);

# If the user calls the script with no parameters,
# give them help!

if (not @ARGV) {
    usage();
}

# Gather our options; if they specify any undefined option,
# they'll get sent some help!

my %opts;
GetOptions(\%opts,
    'fname=s',
    'lname=s',
    'dept=s',
    'run',
) or usage();

# Let's validate our inputs. All three parameters are
# required, and must be alphabetic.
# You could be clever, and do this with a foreach loop,
# but let's keep it simple for now.

if (not $opts{fname} or $opts{fname} !~ /^[a-zA-Z]+$/) {
    usage("First name must be alphabetic");
}
if (not $opts{lname} or $opts{lname} !~ /^[a-zA-Z]+$/) {
    usage("Last name must be alphabetic");
}
if (not $opts{dept} or $opts{dept} !~ /^[a-zA-Z]+$/) {
    usage("Department must be alphabetic");
}

# Construct the username and home directory

my $username = lc(substr($opts{fname}, 0, 1) . $opts{lname});
my $home = "/home/$opts{dept}/$username";

# Show them what we've got ready to go.

print "Name: $opts{fname} $opts{lname}\n";
print "Username: $username\n";
print "Department: $opts{dept}\n";
print "Home directory: $home\n\n";

# use qq() here, so that the quotes in the --gecos flag
# get carried into the command!

my $cmd = qq($adduser --home $home --ingroup $opts{dept} \\
  --gecos "$opts{fname} $opts{lname}" $username);

print "$cmd\n";
if ($opts{run}) {
    system $cmd;
} else {
    print "You need to add the --run flag to actually execute\n";
}

sub usage {
    my ($msg) = @_;
    if ($msg) {
        print "$msg\n\n";
    }
    print "Usage: $0 --fname FirstName --lname LastName --dept Department --run\n";
    exit;
}

As with the previous script, there are opportunities for enhancement, but something like this might be all that you need for this task.

One more, just for fun!

Change copyright text in every Perl source file in a directory tree

Now we're going to try a mass edit. Suppose you've got a directory full of code, and each file has a copyright statement somewhere in it. (Rich Bowen wrote a great article, Copyright statements proliferate inside open source code a couple of years ago that discusses the wisdom of copyright statements in open source code. It is a good read, and I recommend it highly. But again, humor me.) You want to change that text in each and every file in the directory tree. File::Find and File::Slurp are your friends!

#!/usr/bin/perl
use strict;
use warnings;

use File::Find qw(find);
use File::Slurp qw(read_file write_file);

# If the user gives a directory name, use that. Otherwise,
# use the current directory.

my $dir = $ARGV[0] || '.';

# File::Find::find is kind of dark-arts magic.
# You give it a reference to some code,
# and a directory to hunt in, and it will
# execute that code on every file in the
# directory, and all subdirectories. In this
# case, \&change_file is the reference
# to our code, a subroutine. You could, if
# what you wanted to do was really short,
# include it in a { } block instead. But doing
# it this way is nice and readable.

find(\&change_file, $dir);

sub change_file {
    my $name = $_;

    # If the file is a directory, symlink, or other
    # non-regular file, don't do anything

    if (not -f $name) {
        return;
    }

    # If it's not Perl, don't do anything.

    if (substr($name, -3) ne ".pl") {
        return;
    }
    print "$name\n";

    # Gobble up the file, complete with carriage
    # returns and everything.
    # Be wary of this if you have very large files
    # on a system with limited memory!

    my $data = read_file($name);

    # Use a regex to make the change. If the string appears
    # more than once, this will change it everywhere!

    $data =~ s/Copyright Old/Copyright New/g;

    # Let's not ruin our original files

    my $backup = "$name.bak";
    rename $name, $backup;
    write_file($name, $data);

    return;
}

Because of Perl's portability, you could use this script on a Windows system as well as on a Linux system -- it Just Works thanks to the underlying Perl interpreter. The create-an-account script above, by contrast, is not portable: it is Linux-specific because it shells out to Linux commands such as adduser.

In my experience, I've found it useful to have a Git repository of these things somewhere that I can clone on each new system I'm working with. Over time, you'll think of changes to make to the code to enhance the capabilities, or you'll add new scripts, and Git can help you make sure that all your tools and tricks are available on all your systems.

I hope these little scripts have given you some ideas how you can use Perl to make your system administration life a little easier. In addition to these longer scripts, take a look at a fantastic list of Perl one-liners, and links to other Perl magic assembled by Mischa Peterson.
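As a taste of what such a list contains, here are two classic one-liners of the kind you will find there, run from the shell (the file names are just examples):

# Replace a string in place across many files, keeping .bak backups
$ perl -pi.bak -e 's/Copyright Old/Copyright New/g' *.pl

# Print only the lines of a log file longer than 80 characters
$ perl -ne 'print if length > 80' /var/log/messages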

[Nov 08, 2019] Manage NTP with Chrony by David Both

Dec 03, 2018 | opensource.com

Chronyd is a better choice for most networks than ntpd for keeping computers synchronized with the Network Time Protocol.

"Does anybody really know what time it is? Does anybody really care?"
Chicago , 1969

Perhaps that rock group didn't care what time it was, but our computers do need to know the exact time. Timekeeping is very important to computer networks. In banking, stock markets, and other financial businesses, transactions must be maintained in the proper order, and exact time sequences are critical for that. For sysadmins and DevOps professionals, it's easier to follow the trail of email through a series of servers or to determine the exact sequence of events using log files on geographically dispersed hosts when exact times are kept on the computers in question.

I used to work at an organization that received over 20 million emails per day and had four servers just to accept and do a basic filter on the incoming flood of email. From there, emails were sent to one of four other servers to perform more complex anti-spam assessments, then they were delivered to one of several additional servers where the emails were placed in the correct inboxes. At each layer, the emails would be sent to one of the next-level servers, selected only by the randomness of round-robin DNS. Sometimes we had to trace a new message through the system until we could determine where it "got lost," according to the pointy-haired bosses. We had to do this with frightening regularity.

Most of that email turned out to be spam. Some people actually complained that their [joke, cat pic, recipe, inspirational saying, or other-strange-email]-of-the-day was missing and asked us to find it. We did reject those opportunities.

Our email and other transactional searches were aided by log entries with timestamps that -- today -- can resolve down to the nanosecond in even the slowest of modern Linux computers. In very high-volume transaction environments, even a few microseconds of difference in the system clocks can mean sorting thousands of transactions to find the correct one(s).

The NTP server hierarchy

Computers worldwide use the Network Time Protocol (NTP) to synchronize their times with internet standard reference clocks via a hierarchy of NTP servers. The primary servers are at stratum 1, and they are connected directly to various national time services at stratum 0 via satellite, radio, or even modems over phone lines. The time service at stratum 0 may be an atomic clock, a radio receiver tuned to the signals broadcast by an atomic clock, or a GPS receiver using the highly accurate clock signals broadcast by GPS satellites.

To prevent time requests from time servers lower in the hierarchy (i.e., with a higher stratum number) from overwhelming the primary reference servers, there are several thousand public NTP stratum 2 servers that are open and available for anyone to use. Many organizations with large numbers of hosts that need an NTP server will set up their own time servers so that only one local host accesses the stratum 2 time servers, then they configure the remaining network hosts to use the local time server which, in my case, is a stratum 3 server.

NTP choices

The original NTP daemon, ntpd , has been joined by a newer one, chronyd . Both keep the local host's time synchronized with the time server. Both services are available, and I have seen nothing to indicate that this will change anytime soon.

Chrony has several features that make it the better choice for most environments.

The NTP and Chrony RPM packages are available from standard Fedora repositories. You can install both and switch between them, but modern Fedora, CentOS, and RHEL releases have moved from NTP to Chrony as their default time-keeping implementation. I have found that Chrony works well, provides a better interface for the sysadmin, presents much more information, and increases control.

Just to make it clear, NTP is a protocol that is implemented with either NTP or Chrony. If you'd like to know more, read this comparison between NTP and Chrony as implementations of the NTP protocol.

This article explains how to configure Chrony clients and servers on a Fedora host, but the configuration for CentOS and RHEL current releases works the same.

Chrony structure

The Chrony daemon, chronyd , runs in the background and monitors the time and status of the time server specified in the chrony.conf file. If the local time needs to be adjusted, chronyd does it smoothly without the programmatic trauma that would occur if the clock were instantly reset to a new time.

Chrony's chronyc tool allows someone to monitor the current status of Chrony and make changes if necessary. The chronyc utility can be used as a command that accepts subcommands, or it can be used as an interactive text-mode program. This article will explain both uses.

Client configuration

The NTP client configuration is simple and requires little or no intervention. The NTP server can be defined during the Linux installation or provided by the DHCP server at boot time. The default /etc/chrony.conf file (shown below in its entirety) requires no intervention to work properly as a client. On Fedora, Chrony uses the Fedora NTP pool, and CentOS and RHEL have their own NTP server pools. As on many Red Hat-based distributions, the configuration file is well commented.

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool 2.fedora.pool.ntp.org iburst

# Record the rate at which the system clock gains/loses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
keyfile /etc/chrony.keys

# Get TAI-UTC offset and leap seconds from the system tz database.
leapsectz right/UTC

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking

Let's look at the current status of NTP on a virtual machine I use for testing. The chronyc command, when used with the tracking subcommand, provides statistics that report how far off the local system is from the reference server.

[root@studentvm1 ~]# chronyc tracking
Reference ID : 23ABED4D (ec2-35-171-237-77.compute-1.amazonaws.com)
Stratum : 3
Ref time (UTC) : Fri Nov 16 16:21:30 2018
System time : 0.000645622 seconds slow of NTP time
Last offset : -0.000308577 seconds
RMS offset : 0.000786140 seconds
Frequency : 0.147 ppm slow
Residual freq : -0.073 ppm
Skew : 0.062 ppm
Root delay : 0.041452706 seconds
Root dispersion : 0.022665167 seconds
Update interval : 1044.2 seconds
Leap status : Normal
[root@studentvm1 ~]#

The Reference ID in the first line of the result is the server the host is synchronized to -- in this case, a stratum 3 reference server that was last contacted by the host at 16:21:30 2018. The other lines are described in the chronyc(1) man page .

The sources subcommand is also useful because it provides information about the time source configured in chrony.conf .

[root@studentvm1 ~]# chronyc sources
210 Number of sources = 5
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^+ 192.168.0.51 3 6 377 0 -2613us[-2613us] +/- 63ms
^+ dev.smatwebdesign.com 3 10 377 28m -2961us[-3534us] +/- 113ms
^+ propjet.latt.net 2 10 377 465 -1097us[-1085us] +/- 77ms
^* ec2-35-171-237-77.comput> 2 10 377 83 +2388us[+2395us] +/- 95ms
^+ PBX.cytranet.net 3 10 377 507 -1602us[-1589us] +/- 96ms
[root@studentvm1 ~]#

The first source in the list is the time server I set up for my personal network. The others were provided by the pool. Even though my NTP server doesn't appear in the Chrony configuration file above, my DHCP server provides its IP address for the NTP server. The "S" column -- Source State -- indicates with an asterisk ( * ) the server our host is synced to. This is consistent with the data from the tracking subcommand.

The -v option provides a nice description of the fields in this output.

[root@studentvm1 ~]# chronyc sources -v
210 Number of sources = 5

.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| / '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
|| .- xxxx [ yyyy ] +/- zzzz
|| Reachability register (octal) -. | xxxx = adjusted offset,
|| Log2(Polling interval) --. | | yyyy = measured offset,
|| \ | | zzzz = estimated error.
|| | | \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^+ 192.168.0.51 3 7 377 28 -2156us[-2156us] +/- 63ms
^+ triton.ellipse.net 2 10 377 24 +5716us[+5716us] +/- 62ms
^+ lithium.constant.com 2 10 377 351 -820us[ -820us] +/- 64ms
^* t2.time.bf1.yahoo.com 2 10 377 453 -992us[ -965us] +/- 46ms
^- ntp.idealab.com 2 10 377 799 +3653us[+3674us] +/- 87ms
[root@studentvm1 ~]#

If I wanted my server to be the preferred reference time source for this host, I would add the line below to the /etc/chrony.conf file.

server 192.168.0.51 iburst prefer

I usually place this line just above the first pool server statement near the top of the file. There is no special reason for this, except I like to keep the server statements together. It would work just as well at the bottom of the file, and I have done that on several hosts. This configuration file is not sequence-sensitive.

The prefer option marks this as the preferred reference source. As such, this host will always be synchronized with this reference source (as long as it is available). We can also use the fully qualified hostname for a remote reference server or the hostname only (without the domain name) for a local reference time source as long as the search statement is set in the /etc/resolv.conf file. I prefer the IP address to ensure that the time source is accessible even if DNS is not working. In most environments, the server name is probably the better option, because NTP will continue to work even if the server's IP address changes.

If you don't have a specific reference source you want to synchronize to, it is fine to use the defaults.

Configuring an NTP server with Chrony

The nice thing about the Chrony configuration file is that this single file configures the host as both a client and a server. To add a server function to our host -- it will always be a client, obtaining its time from a reference server -- we just need to make a couple of changes to the Chrony configuration, then configure the host's firewall to accept NTP requests.

Open the /etc/chrony.conf file in your favorite text editor and uncomment the local stratum 10 line. This enables the Chrony NTP server to continue to act as if it were connected to a remote reference server if the internet connection fails; this enables the host to continue to be an NTP server to other hosts on the local network.

Let's restart chronyd and track how the service is working for a few minutes. Before we enable our host as an NTP server, we want to test a bit.

[root@studentvm1 ~]# systemctl restart chronyd ; watch chronyc tracking

The results should look like this. The watch command runs the chronyc tracking command every two seconds so we can watch changes occur over time.

Every 2.0s: chronyc tracking studentvm1: Fri Nov 16 20:59:31 2018

Reference ID : C0A80033 (192.168.0.51)
Stratum : 4
Ref time (UTC) : Sat Nov 17 01:58:51 2018
System time : 0.001598277 seconds fast of NTP time
Last offset : +0.001791533 seconds
RMS offset : 0.001791533 seconds
Frequency : 0.546 ppm slow
Residual freq : -0.175 ppm
Skew : 0.168 ppm
Root delay : 0.094823152 seconds
Root dispersion : 0.021242738 seconds
Update interval : 65.0 seconds
Leap status : Normal

Notice that my NTP server, the studentvm1 host, synchronizes to the host at 192.168.0.51, which is my internal network NTP server, at stratum 4. Synchronizing directly to the Fedora pool machines would result in synchronization at stratum 3. Notice also that the amount of error decreases over time. Eventually, it should stabilize with a tiny variation around a fairly small range of error. The size of the error depends upon the stratum and other network factors. After a few minutes, use Ctrl+C to break out of the watch loop.

To turn our host into an NTP server, we need to allow it to listen on the local network. Uncomment the following line to allow hosts on the local network to access our NTP server.

# Allow NTP client access from local network.
allow 192.168.0.0/16

Note that the server can listen for requests on any local network it's attached to. The IP address in the "allow" line is just intended for illustrative purposes. Be sure to change the IP network and subnet mask in that line to match your local network's.

Restart chronyd .

[root@studentvm1 ~]# systemctl restart chronyd

To allow other hosts on your network to access this server, configure the firewall to allow inbound UDP packets on port 123. Check your firewall's documentation to find out how to do that.
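For example, assuming the host runs firewalld (the default on Fedora, CentOS, and RHEL), a minimal sketch would be to enable the predefined ntp service, which simply opens UDP port 123:

[root@studentvm1 ~]# firewall-cmd --permanent --add-service=ntp
[root@studentvm1 ~]# firewall-cmd --reload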

Testing

Your host is now an NTP server. You can test it with another host or a VM that has access to the network on which the NTP server is listening. Configure the client to use the new NTP server as the preferred server in the /etc/chrony.conf file, then monitor that client using the chronyc tools we used above.
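As a sketch, on the test client you might add a line like the one below to /etc/chrony.conf (the IP address is just a placeholder for your new server), restart chronyd, and confirm that the new source is selected (marked with ^* in the output):

# /etc/chrony.conf on the client
server 192.168.0.200 iburst prefer

[root@client ~]# systemctl restart chronyd
[root@client ~]# chronyc sources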

Chronyc as an interactive tool

As I mentioned earlier, chronyc can be used as an interactive command tool. Simply run the command without a subcommand and you get a chronyc command prompt.

[root@studentvm1 ~]# chronyc
chrony version 3.4
Copyright (C) 1997-2003, 2007, 2009-2018 Richard P. Curnow and others
chrony comes with ABSOLUTELY NO WARRANTY. This is free software, and
you are welcome to redistribute it under certain conditions. See the
GNU General Public License version 2 for details.

chronyc>

You can enter just the subcommands at this prompt. Try using the tracking , ntpdata , and sources commands. The chronyc command line allows command recall and editing for chronyc subcommands. You can use the help subcommand to get a list of possible commands and their syntax.

Conclusion

Chrony is a powerful tool for synchronizing the times of client hosts, whether they are all on the local network or scattered around the globe. It's easy to configure because, despite the large number of options available, only a few configurations are required for most circumstances.

After my client computers have synchronized with the NTP server, I like to set the system hardware clock from the system (OS) time by using the following command:

/sbin/hwclock --systohc

This command can be added as a cron job or a script in cron.daily to keep the hardware clock synced with the system time.
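A minimal sketch of such a cron.daily script might look like this (the file name is just an example); remember to make it executable with chmod +x:

#!/bin/sh
# /etc/cron.daily/hwclock-sync -- hypothetical example
# Copy the current system time to the hardware clock once a day
/sbin/hwclock --systohc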

Chrony and NTP (the service) both use the same configuration, and the files' contents are interchangeable. The man pages for chronyd , chronyc , and chrony.conf contain an amazing amount of information that can help you get started or learn about esoteric configuration options.

Do you run your own NTP server? Let us know in the comments and be sure to tell us which implementation you are using, NTP or Chrony.

[Nov 08, 2019] Vim universe. fzf - command line fuzzy finder by Alexey Samoshkin

Nov 08, 2019 | www.youtube.com

Zeeshan Jan , 1 month ago (edited)

Alexey, thanks for a great video. I have a question: how did you integrate fzf and bat? When I am in zsh under tmux and I type fzf to search for a file, I am not able to select multiple files using TAB (I can do this inside Vim, but not in the tmux/iTerm terminal), and I am not able to see the preview, even though I have already installed bat using brew on my MacBook Pro. Also, when I type cd ** it doesn't work.

Paul Hale , 4 months ago

Thanks for the video. When searching in vim, dotfiles are hidden. How can we configure it so that dotfiles are shown, but the .git folder and its subfolders are ignored?

[Nov 08, 2019] 10 resources every sysadmin should know about Opensource.com

Nov 08, 2019 | opensource.com

Cheat

Having a hard time remembering a command? Normally you might resort to a man page, but some man pages have a hard time getting to the point. It's the reason Chris Allen Lane came up with the idea (and more importantly, the code) for a cheat command .

The cheat command displays cheatsheets for common tasks in your terminal. It's a man page without the preamble. It cuts to the chase and tells you exactly how to do whatever it is you're trying to do. And if it lacks a common example that you think ought to be included, you can submit an update.

$ cheat tar
# To extract an uncompressed archive:
tar -xvf '/path/to/foo.tar'

# To extract a .gz archive:
tar -xzvf '/path/to/foo.tgz'
[ ... ]

You can also treat cheat as a local cheatsheet system, which is great for all the in-house commands you and your team have invented over the years. You can easily add a local cheatsheet to your own home directory, and cheat will find and display it just as if it were a popular system command.
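As a sketch of how that can look: the exact personal cheatsheet directory depends on the cheat version and its configuration (older Python releases looked in ~/.cheat, which is what is assumed here; check your version's documentation):

$ mkdir -p ~/.cheat
$ cat > ~/.cheat/deploy-webapp << 'EOF'
# Push the current build to the staging host and restart the service
rsync -av build/ staging:/srv/webapp/
ssh staging 'sudo systemctl restart webapp'
EOF
$ cheat deploy-webapp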

[Nov 08, 2019] A Linux user's guide to Logical Volume Management Opensource.com

Nov 08, 2019 | opensource.com

In Figure 1, two complete physical hard drives and one partition from a third hard drive have been combined into a single volume group. Two logical volumes have been created from the space in the volume group, and a filesystem, such as an EXT3 or EXT4 filesystem, has been created on each of the two logical volumes.

Figure 1: LVM allows combining partitions and entire hard drives into Volume Groups.

Adding disk space to a host is fairly straightforward but, in my experience, is done relatively infrequently. The basic steps needed are listed below. You can either create an entirely new volume group or you can add the new space to an existing volume group and either expand an existing logical volume or create a new one.

Adding a new logical volume

There are times when it is necessary to add a new logical volume to a host. For example, after noticing that the directory containing virtual disks for my VirtualBox virtual machines was filling up the /home filesystem, I decided to create a new logical volume in which to store the virtual machine data, including the virtual disks. This would free up a great deal of space in my /home filesystem and also allow me to manage the disk space for the VMs independently.

The basic steps for adding a new logical volume are as follows.

  1. If necessary, install a new hard drive.
  2. Optional: Create a partition on the hard drive.
  3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.
  4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.
  5. Create a new logical volume (LV) from the space in the volume group.
  6. Create a filesystem on the new logical volume.
  7. Add appropriate entries to /etc/fstab for mounting the filesystem.
  8. Mount the filesystem.

Now for the details. The following sequence is taken from an example I used as a lab project when teaching about Linux filesystems.

Example

This example shows how to use the CLI to extend an existing volume group to add more space to it, create a new logical volume in that space, and create a filesystem on the logical volume. This procedure can be performed on a running, mounted filesystem.

WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems including BTRFS and ZFS cannot be resized.

Install hard drive

If there is not enough space in the volume group on the existing hard drive(s) in the system to add the desired amount of space it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive, and then perform the following steps.

Create Physical Volume from hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.

pvcreate /dev/hdd

It is not necessary to create a partition of any kind on the new hard drive. The creation of the Physical Volume, which will be recognized by the Logical Volume Manager, can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

Extend the existing Volume Group

In this example we will extend an existing volume group rather than creating a new one; you can choose to do it either way. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example the existing Volume Group is named MyVG01.

vgextend /dev/MyVG01 /dev/hdd
Create the Logical Volume

First create the Logical Volume (LV) from existing free space within the Volume Group. The command below creates a LV with a size of 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.

lvcreate -L +50G --name Stuff MyVG01
Create the filesystem

Creating the Logical Volume does not create the filesystem. That task must be performed separately. The command below creates an EXT4 filesystem that fits the newly created Logical Volume.

mkfs -t ext4 /dev/MyVG01/Stuff
Add a filesystem label

Adding a filesystem label makes it easy to identify the filesystem later in case of a crash or other disk related problems.

e2label /dev/MyVG01/Stuff Stuff
Mount the filesystem

At this point you can create a mount point, add an appropriate entry to the /etc/fstab file, and mount the filesystem.
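As a minimal sketch of those three steps, using the volume and label created above and the /Stuff mount point referenced later in this article:

# Create the mount point
mkdir /Stuff

# Add an entry to /etc/fstab (a single line), using the label we set earlier
echo "LABEL=Stuff  /Stuff  ext4  defaults  1 2" >> /etc/fstab

# Mount it
mount /Stuff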

You should also check to verify the volume has been created correctly. You can use the df , lvs, and vgs commands to do this.

Resizing a logical volume in an LVM filesystem

The need to resize a filesystem has been around since the beginning of the first versions of Unix and has not gone away with Linux. It has gotten easier, however, with Logical Volume Management.

  1. If necessary, install a new hard drive.
  2. Optional: Create a partition on the hard drive.
  3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.
  4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.
  5. Create one or more logical volumes (LV) from the space in the volume group, or expand an existing logical volume with some or all of the new space in the volume group.
  6. If you created a new logical volume, create a filesystem on it. If adding space to an existing logical volume, use the resize2fs command to enlarge the filesystem to fill the space in the logical volume.
  7. Add appropriate entries to /etc/fstab for mounting the filesystem.
  8. Mount the filesystem.
Example

This example describes how to resize an existing Logical Volume in an LVM environment using the CLI. It adds about 50GB of space to the /Stuff filesystem. This procedure can be used on a mounted, live filesystem only with the Linux 2.6 Kernel (and higher) and EXT3 and EXT4 filesystems. I do not recommend that you do so on any critical system, but it can be done and I have done so many times; even on the root (/) filesystem. Use your judgment.

WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems including BTRFS and ZFS cannot be resized.

Install the hard drive

If there is not enough space on the existing hard drive(s) in the system to add the desired amount of space it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive and then perform the following steps.

Create a Physical Volume from the hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.

pvcreate /dev/hdd

It is not necessary to create a partition of any kind on the new hard drive. The creation of the Physical Volume, which will be recognized by the Logical Volume Manager, can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

Add PV to existing Volume Group

For this example, we will use the new PV to extend an existing Volume Group. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example, the existing Volume Group is named MyVG01.

vgextend /dev/MyVG01 /dev/hdd
Extend the Logical Volume

Extend the Logical Volume (LV) from existing free space within the Volume Group. The command below expands the LV by 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.

lvextend -L +50G /dev/MyVG01/Stuff
Expand the filesystem

Extending the Logical Volume will also expand the filesystem if you use the -r option. If you do not use the -r option, that task must be performed separately. The command below resizes the filesystem to fit the newly resized Logical Volume.

resize2fs /dev/MyVG01/Stuff

You should check to verify the resizing has been performed correctly. You can use the df , lvs, and vgs commands to do this.

Tips

Over the years I have learned a few things that can make logical volume management even easier than it already is. Hopefully these tips can prove of some value to you.

I know that, like me, many sysadmins have resisted the change to Logical Volume Management. I hope that this article will encourage you to at least try LVM. I am really glad that I did; my disk management tasks are much easier since I made the switch.

[Nov 08, 2019] 10 killer tools for the admin in a hurry Opensource.com

Nov 08, 2019 | opensource.com

NixCraft
Use the site's internal search function. With more than a decade of regular updates, there's gold to be found here -- useful scripts and handy hints that can solve your problem straight away. This is often the second place I look after Google.

Webmin
This gives you a nice web interface to remotely edit your configuration files. It cuts down on a lot of time spent having to juggle directory paths and sudo nano , which is handy when you're handling several customers.

Windows Subsystem for Linux
The reality of the modern workplace is that most employees are on Windows, while the grown-up gear in the server room is on Linux. So sometimes you find yourself trying to do admin tasks from (gasp) a Windows desktop.

What do you do? Install a virtual machine? It's actually much faster and far less work to configure if you install the Windows Subsystem for Linux compatibility layer, now available at no cost on Windows 10.

This gives you a Bash terminal in a window where you can run Bash scripts and Linux binaries on the local machine, have full access to both Windows and Linux filesystems, and mount network drives. It's available in Ubuntu, OpenSUSE, SLES, Debian, and Kali flavors.

mRemoteNG
This is an excellent SSH and remote desktop client for when you have 100+ servers to manage.

Setting up a network so you don't have to do it again

A poorly planned network is the sworn enemy of the admin who hates working overtime.

IP Addressing Schemes that Scale
The diabolical thing about running out of IP addresses is that, when it happens, the network's grown large enough that a new addressing scheme is an expensive, time-consuming pain in the proverbial.

Ain't nobody got time for that!

At some point, IPv6 will finally arrive to save the day. Until then, these one-size-fits-most IP addressing schemes should keep you going, no matter how many network-connected wearables, tablets, smart locks, lights, security cameras, VoIP headsets, and espresso machines the world throws at us.

Linux Chmod Permissions Cheat Sheet
A short but sweet cheat sheet of Bash commands to set permissions across the network. This is so when Bill from Customer Service falls for that ransomware scam, you're recovering just his files and not the entire company's.

VLSM Subnet Calculator
Just put in the number of networks you want to create from an address space and the number of hosts you want per network, and it calculates what the subnet mask should be for everything.

Single-purpose Linux distributions

Need a Linux box that does just one thing? It helps if someone else has already sweated the small stuff on an operating system you can install and have ready immediately.

Each of these has, at one point, made my work day so much easier.

Porteus Kiosk
This is for when you want a computer totally locked down to just a web browser. With a little tweaking, you can even lock the browser down to just one website. This is great for public access machines. It works with touchscreens or with a keyboard and mouse.

Parted Magic
This is an operating system you can boot from a USB drive to partition hard drives, recover data, and run benchmarking tools.

IPFire
Hahahaha, I still can't believe someone called a router/firewall/proxy combo "I pee fire." That's my second favorite thing about this Linux distribution. My favorite is that it's a seriously solid software suite. It's so easy to set up and configure, and there is a heap of plugins available to extend it.

What about your top tools and cheat sheets?

So, how about you? What tools, resources, and cheat sheets have you found to make the workday easier? I'd love to know. Please share in the comments.

[Nov 02, 2019] LVM spanning over multiple disks What disk is a file on? Can I lose a drive without total loss

Notable quotes:
"... If you lose a drive in a volume group, you can force the volume group online with the missing physical volume, but you will be unable to open the LV's that were contained on the dead PV, whether they be in whole or in part. ..."
"... So, if you had for instance 10 LV's, 3 total on the first drive, #4 partially on first drive and second drive, then 5-7 on drive #2 wholly, then 8-10 on drive 3, you would be potentially able to force the VG online and recover LV's 1,2,3,8,9,10.. #4,5,6,7 would be completely lost. ..."
"... LVM doesn't really have the concept of a partition it uses PVs (Physical Volumes), which can be a partition. These PVs are broken up into extents and then these are mapped to the LVs (Logical Volumes). When you create the LVs you can specify if the data is striped or mirrored but the default is linear allocation. So it would use the extents in the first PV then the 2nd then the 3rd. ..."
"... As Peter has said the blocks appear as 0's if a PV goes missing. So you can potentially do data recovery on files that are on the other PVs. But I wouldn't rely on it. You normally see LVM used in conjunction with RAIDs for this reason. ..."
"... it's effectively as if a huge chunk of your disk suddenly turned to badblocks. You can patch things back together with a new, empty drive to which you give the same UUID, and then run an fsck on any filesystems on logical volumes that went across the bad drive to hope you can salvage something. ..."
Mar 16, 2015 | serverfault.com

I have three 990GB partitions over three drives in my server. Using LVM, I can create one ~3TB partition for file storage.

1) How does the system determine what partition to use first?
2) Can I find what disk a file or folder is physically on?
3) If I lose a drive in the LVM, do I lose all data, or just data physically on that disk?
  1. The system fills from the first disk in the volume group to the last, unless you configure striping with extents.
  2. I don't think this is possible, but where I'd start to look is in the lvs/vgs commands man pages.
  3. If you lose a drive in a volume group, you can force the volume group online with the missing physical volume, but you will be unable to open the LV's that were contained on the dead PV, whether they be in whole or in part.
  4. So, if you had for instance 10 LV's, 3 total on the first drive, #4 partially on first drive and second drive, then 5-7 on drive #2 wholly, then 8-10 on drive 3, you would be potentially able to force the VG online and recover LV's 1,2,3,8,9,10.. #4,5,6,7 would be completely lost.
(Answer by Peter Grace)

1) How does the system determine what partition to use first?

LVM doesn't really have the concept of a partition it uses PVs (Physical Volumes), which can be a partition. These PVs are broken up into extents and then these are mapped to the LVs (Logical Volumes). When you create the LVs you can specify if the data is striped or mirrored but the default is linear allocation. So it would use the extents in the first PV then the 2nd then the 3rd.

2) Can I find what disk a file or folder is physically on?

You can determine what PVs a LV has allocation extents on. But I don't know of a way to get that information for an individual file.

3) If I lose a drive in the LVM, do I lose all data, or just data physically on that disk?

As Peter has said the blocks appear as 0's if a PV goes missing. So you can potentially do data recovery on files that are on the other PVs. But I wouldn't rely on it. You normally see LVM used in conjunction with RAIDs for this reason.

(Answer by 3dinfluence)

I don't know the answer to #2, so I'll leave that to someone else. I suspect "no", but I'm willing to be happily surprised.

1 is: you tell it, when you combine the physical volumes into a volume group.

3 is: it's effectively as if a huge chunk of your disk suddenly turned to badblocks. You can patch things back together with a new, empty drive to which you give the same UUID, and then run an fsck on any filesystems on logical volumes that went across the bad drive to hope you can salvage something.
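As a hedged sketch of that repair, assuming the dead PV was /dev/sdc1 in a volume group named vg_data and that an LVM metadata archive still exists under /etc/lvm/archive (the UUID, paths, and names here are placeholders):

# Recreate the PV on the replacement disk with the old PV's UUID, using the archived metadata
pvcreate --uuid "<old-pv-uuid>" --restorefile /etc/lvm/archive/vg_data_00001.vg /dev/sdc1

# Restore the volume group metadata, activate the VG, then fsck the affected filesystems
vgcfgrestore vg_data
vgchange -ay vg_data
fsck -y /dev/vg_data/somelv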

And to the overall, unasked question: yeah, you probably don't really want to do that.

[Oct 08, 2019] Forward root email on Linux server

Oct 08, 2019 | www.reddit.com

Hi, I generally configure /etc/aliases to forward root messages to my work email address. I find this useful because sometimes it makes me aware that something is wrong...
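For reference, the kind of /etc/aliases entry being described looks roughly like this (the address is just a placeholder); after editing the file, run newaliases so the MTA picks up the change:

# /etc/aliases -- forward everything addressed to root
root: sysadmin-team@example.com

# rebuild the alias database
newaliases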

I create specific email filters in my MUA to put everything with "fail" in the subject into my ALERT subfolder, everything with "update" or "upgrade" into my UPGRADE subfolder, and so on.

It is a bit annoying, because with more than 50 servers there is a lot of noise, anyway.

How do you manage that?

Thank you!

[Oct 02, 2019] raid5 - Can I recover a RAID 5 array if two drives have failed - Server Fault

Oct 02, 2019 | serverfault.com

I have a Dell 2600 with 6 drives configured in a RAID 5 on a PERC 4 controller. 2 drives failed at the same time, and according to what I know a RAID 5 is recoverable if 1 drive fails. I'm not sure if the fact I had six drives in the array might save my skin.

I bought 2 new drives and plugged them in but no rebuild happened as I expected. Can anyone shed some light?


Regardless of how many drives are in use, a RAID 5 array only allows for recovery in the event that just one disk at a time fails.

What 3molo says is a fair point but even so, not quite correct I think - if two disks in a RAID5 array fail at the exact same time then a hot spare won't help, because a hot spare replaces one of the failed disks and rebuilds the array without any intervention, and a rebuild isn't possible if more than one disk fails.

For now, I am sorry to say that your options for recovering this data are going to involve restoring a backup.

For the future you may want to consider one of the more robust forms of RAID (not sure what options a PERC4 supports) such as RAID 6 or a nested RAID array . Once you get above a certain amount of disks in an array you reach the point where the chance that more than one of them can fail before a replacement is installed and rebuilt becomes unacceptably high. share Share a link to this answer Copy link | improve this answer edited Jun 8 '12 at 13:37 longneck 21.1k 3 3 gold badges 43 43 silver badges 76 76 bronze badges answered Sep 21 '10 at 14:43 Rob Moir Rob Moir 30k 4 4 gold badges 53 53 silver badges 84 84 bronze badges

You can try to force one or both of the failed disks to be online from the BIOS interface of the controller. Then check that the data and the file system are consistent. (Answer by Mircea Vutcovici)

The direct answer is "No"; the indirect one is "It depends". Mainly it depends on whether the disks are partially or completely out of order. If they are only partially broken, you can give it a try -- I would copy both failed disks (using a tool like ddrescue). Then I'd try to run the bunch of disks using Linux software RAID, re-trying with the proper order of disks and stripe size in read-only mode and counting CRC mismatches. It's quite doable, I should say -- this text in Russian mentions a 12-disk RAID50 recovery using LSR, for example. (Answer by poige)

It is possible if the RAID had one spare drive and one of your failed disks died before the second one. In that case, you just need to try to reconstruct the array virtually with third-party software. I found a small article about this process on this page: http://www.angeldatarecovery.com/raid5-data-recovery/

And if you really need data from one of the dead drives, you can send it to a recovery shop. With those images you can reconstruct the RAID properly with a good chance of success.

[Sep 23, 2019] How to recover deleted files with foremost on Linux - LinuxConfig.org

Sep 23, 2019 | linuxconfig.org
In this article we will talk about foremost, a very useful open source forensic utility which is able to recover deleted files using the technique called data carving. The utility was originally developed by the United States Air Force Office of Special Investigations, and is able to recover several file types (support for specific file types can be added by the user, via the configuration file). The program can also work on partition images produced by dd or similar tools.

In this tutorial you will learn:

Foremost is a forensic data recovery program for Linux used to recover files using their headers, footers, and data structures through a process known as file carving.

Software Requirements and Conventions Used

Category      Requirements, Conventions or Software Version Used
System        Distribution-independent
Software      The "foremost" program
Other         Familiarity with the command line interface
Conventions   # - requires given linux commands to be executed with root privileges, either directly as a root user or by use of the sudo command
              $ - requires given linux commands to be executed as a regular non-privileged user
Installation

Since foremost is already present in all the major Linux distributions repositories, installing it is a very easy task. All we have to do is to use our favorite distribution package manager. On Debian and Ubuntu, we can use apt :

$ sudo apt install foremost

In recent versions of Fedora, we use the dnf package manager to install packages , the dnf is a successor of yum . The name of the package is the same:

$ sudo dnf install foremost

If we are using ArchLinux, we can use pacman to install foremost . The program can be found in the distribution "community" repository:

$ sudo pacman -S foremost



Basic usage
WARNING
No matter which file recovery tool or process you are going to use to recover your files, before you begin it is recommended to perform a low-level hard drive or partition backup, hence avoiding an accidental data overwrite!!! That way you may re-try to recover your files even after an unsuccessful recovery attempt. Check the following dd command guide on how to perform a hard drive or partition low-level backup.
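For example, here is a minimal sketch of such a low-level backup with dd, assuming the partition to be scanned is /dev/sdb1 and that /mnt/backup lives on a different disk with enough free space:

# Create a bit-for-bit image of the partition on another disk
$ sudo dd if=/dev/sdb1 of=/mnt/backup/sdb1.img bs=4M conv=sync,noerror status=progress

You can later run foremost against the image file instead of the live partition.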

The foremost utility tries to recover and reconstruct files on the basis of their headers, footers and data structures, without relying on filesystem metadata. This forensic technique is known as file carving. The program supports various types of files, such as jpg, png, pdf, doc, zip, mp4 and avi (the full list can be seen in the output directory tree below).

The most basic way to use foremost is by providing a source to scan for deleted files (it can be either a partition or an image file, as those generated with dd ). Let's see an example. Imagine we want to scan the /dev/sdb1 partition: before we begin, a very important thing to remember is to never store retrieved data on the same partition we are retrieving the data from, to avoid overwriting delete files still present on the block device. The command we would run is:

$ sudo foremost -i /dev/sdb1

By default, the program creates a directory called output inside the directory we launched it from and uses it as destination. Inside this directory, a subdirectory for each supported file type we are attempting to retrieve is created. Each directory will hold the corresponding file type obtained from the data carving process:

output
├── audit.txt
├── avi
├── bmp
├── dll
├── doc
├── docx
├── exe
├── gif
├── htm
├── jar
├── jpg
├── mbd
├── mov
├── mp4
├── mpg
├── ole
├── pdf
├── png  
├── ppt
├── pptx
├── rar
├── rif
├── sdw
├── sx
├── sxc
├── sxi
├── sxw
├── vis
├── wav
├── wmv
├── xls
├── xlsx
└── zip

When foremost completes its job, empty directories are removed. Only the ones containing files are left on the filesystem: this let us immediately know what type of files were successfully retrieved. By default the program tries to retrieve all the supported file types; to restrict our search, we can, however, use the -t option and provide a list of the file types we want to retrieve, separated by a comma. In the example below, we restrict the search only to gif and pdf files:

$ sudo foremost -t gif,pdf -i /dev/sdb1

https://www.youtube.com/embed/58S2wlsJNvo

In this video we will test the forensic data recovery program Foremost to recover a single png file from /dev/sdb1 partition formatted with the EXT4 filesystem.



Specifying an alternative destination

As we already said, if a destination is not explicitly declared, foremost creates an output directory inside our cwd. What if we want to specify an alternative path? All we have to do is to use the -o option and provide said path as an argument. If the specified directory doesn't exist, it is created; if it exists but is not empty, the program complains:

ERROR: /home/egdoc/data is not empty
        Please specify another directory or run with -T.

To solve the problem, as suggested by the program itself, we can either use another directory or re-launch the command with the -T option. If we use the -T option, the output directory specified with the -o option is timestamped. This makes it possible to run the program multiple times with the same destination. In our case the directory that would be used to store the retrieved files would be:

/home/egdoc/data_Thu_Sep_12_16_32_38_2019
The configuration file

The foremost configuration file can be used to specify file formats not natively supported by the program. Inside the file we can find several commented examples showing the syntax that should be used to accomplish the task. Here is an example involving the png type (the lines are commented since the file type is supported by default):

# PNG   (used in web pages)
#       (NOTE THIS FORMAT HAS A BUILTIN EXTRACTION FUNCTION)
#       png     y       200000  \x50\x4e\x47?   \xff\xfc\xfd\xfe

The pieces of information to provide in order to add support for a file type are, from left to right, separated by a tab character: the file extension (png in this case), whether the header and footer are case sensitive (y), the maximum file size in bytes (200000), the header (\x50\x4e\x47?) and the footer (\xff\xfc\xfd\xfe). Only the footer is optional and can be omitted.

If the path of the configuration file is not explicitly provided with the -c option, a file named foremost.conf is searched for and used, if present, in the current working directory. If it is not found, the default configuration file, /etc/foremost.conf, is used instead.

Adding the support for a file type

By reading the examples provided in the configuration file, we can easily add support for a new file type. In this example we will add support for flac audio files. Flac (Free Lossless Audio Codec) is a non-proprietary lossless audio format which is able to provide compressed audio without quality loss. First of all, we know that the header of this file type in hexadecimal form is 66 4C 61 43 00 00 00 22 (fLaC in ASCII), and we can verify it by using a program like hexdump on a flac file:

$ hexdump -C blind_guardian_war_of_wrath.flac | head
00000000  66 4c 61 43 00 00 00 22  12 00 12 00 00 00 0e 00  |fLaC..."........|
00000010  36 f2 0a c4 42 f0 00 4d  04 60 6d 0b 64 36 d7 bd  |6...B..M.`m.d6..|
00000020  3e 4c 0d 8b c1 46 b6 fe  cd 42 04 00 03 db 20 00  |>L...F...B.... .|
00000030  00 00 72 65 66 65 72 65  6e 63 65 20 6c 69 62 46  |..reference libF|
00000040  4c 41 43 20 31 2e 33 2e  31 20 32 30 31 34 31 31  |LAC 1.3.1 201411|
00000050  32 35 21 00 00 00 12 00  00 00 54 49 54 4c 45 3d  |25!.......TITLE=|
00000060  57 61 72 20 6f 66 20 57  72 61 74 68 11 00 00 00  |War of Wrath....|
00000070  52 45 4c 45 41 53 45 43  4f 55 4e 54 52 59 3d 44  |RELEASECOUNTRY=D|
00000080  45 0c 00 00 00 54 4f 54  41 4c 44 49 53 43 53 3d  |E....TOTALDISCS=|
00000090  32 0c 00 00 00 4c 41 42  45 4c 3d 56 69 72 67 69  |2....LABEL=Virgi|

As you can see the file signature is indeed what we expected. Here we will assume a maximum file size of 30 MB, or 30000000 Bytes. Let's add the entry to the file:

flac    y       30000000    \x66\x4c\x61\x43\x00\x00\x00\x22

The footer signature is optional so here we didn't provide it. The program should now be able to recover deleted flac files. Let's verify it. To test that everything works as expected I previously placed, and then removed, a flac file from the /dev/sdb1 partition, and then proceeded to run the command:

$ sudo foremost -i /dev/sdb1 -o $HOME/Documents/output

As expected, the program was able to retrieve the deleted flac file (it was the only file on the device, on purpose), although it renamed it with a random string. The original filename cannot be retrieved because, as we know, file metadata is contained in the filesystem, and not in the file itself:

/home/egdoc/Documents
└── output
    ├── audit.txt
    └── flac
        └── 00020482.flac



The audit.txt file contains information about the actions performed by the program, in this case:

Foremost version 1.5.7 by Jesse Kornblum, Kris
Kendall, and Nick Mikus
Audit File

Foremost started at Thu Sep 12 23:47:04 2019
Invocation: foremost -i /dev/sdb1 -o /home/egdoc/Documents/output
Output directory: /home/egdoc/Documents/output
Configuration file: /etc/foremost.conf
------------------------------------------------------------------
File: /dev/sdb1
Start: Thu Sep 12 23:47:04 2019
Length: 200 MB (209715200 bytes)

Num      Name (bs=512)         Size      File Offset     Comment

0:      00020482.flac         28 MB        10486784
Finish: Thu Sep 12 23:47:04 2019

1 FILES EXTRACTED

flac:= 1
------------------------------------------------------------------

Foremost finished at Thu Sep 12 23:47:04 2019
Conclusion

In this article we learned how to use foremost, a forensic program able to retrieve deleted files of various types. We learned that the program works by using a technique called data carving, and relies on file signatures to achieve its goal. We saw an example of the program's usage and we also learned how to add support for a specific file type using the syntax illustrated in the configuration file. For more information about the program usage, please consult its manual page.

[Sep 18, 2019] Delete Files That Have Not Been Accessed For A Given Time On Linux

Sep 18, 2019 | www.ostechnix.com


by sk · Published September 16, 2019 · Updated September 17, 2019

We have already covered how to manually find and delete files older than X days using the "find" command in Linux. Today we will do the same, but only if the files have not been accessed for a certain period of time. Say hello to "Tmpwatch", a command line utility to recursively delete files that haven't been accessed for a given time. Not just files: tmpwatch will delete empty directories as well.

By default, Tmpwatch will decide which files/directories should be deleted based on their atime (access time). You can, of course, change this behaviour and use the ctime (inode change time) or mtime (modification time) values instead. Normally, Tmpwatch is used to delete the contents of the /tmp directory and other unused/unwanted stuff like old log files.

An important warning!!

Before you start using this tool, you must know that Tmpwatch will delete files and directories recursively based on the given criteria. Do not run tmpwatch on / (the root directory). This directory contains important files which are required to keep the Linux system running. If you're not careful enough, tmpwatch will delete any important system files and directories that match the given criteria anywhere under the root directory. There is no safeguard mechanism built into the Tmpwatch tool to prevent you from running it on the root directory, and there is no way to undo the operation. You have been warned!

Install Tmpwatch

Tmpwatch is available in the default repositories of most Linux distributions.

On Fedora, you can install it using command:

$ sudo dnf install tmpwatch

On CentOS:

$ sudo yum install tmpwatch

On openSUSE:

$ sudo zypper install tmpwatch

On Debian and its derivatives like Ubuntu, Tmpwatch is available under a different name, i.e. Tmpreaper. Tmpreaper is mostly based on tmpwatch 1.2/1.4 by Erik Troan from Red Hat. Nowadays, tmpreaper is maintained for Debian by Paul Slootman.

To install tmpreaper on Debian, Ubuntu, Linux Mint, run:

$ sudo apt install tmpreaper
Delete Files That Have Not Been Accessed For A Given Time Using Tmpwatch / Tmpreaper

Usage of Tmpwatch and Tmpreaper is almost the same. If you're on Debian-based systems, replace "Tmpwatch" with "Tmpreaper" in the following examples.

Delete files which have not been accessed for more than X days

To delete files more than 10 days old, run:

tmpwatch 10d /var/log/

The above command will delete all the files and empty directories in the /var/log/ folder which have not been accessed for more than 10 days.

Delete files which have not been modified for more than X days

Like I already said, Tmpwatch will delete files based on their access time. You can also delete files based on their modification time (mtime) using -m option.

For example, the following command will delete files in the /var/log/ folder which have not been modified for the past 10 days.

tmpwatch -m 10d /var/log/

Here, -m refers to the modification time and d is the <time_spec> parameter. The <time_spec> parameter defines the age threshold for removing files; the accepted suffixes are d (days), h (hours), m (minutes), and s (seconds).

Hours is the default.

For instance, to delete files which have not been modified for the past 10 hours, simply run:

tmpwatch -m 10 /var/log/

As you might have noticed, I haven't used the time_spec parameter in the above command. Because h (for hours) is the default, we don't have to mention it when deleting files that haven't been modified for the past X hours.

Delete Symlinks

If you want to delete symlinks, not just regular files and directories, use -s option like below.

tmpwatch -s 10 /var/log/
Delete all files

To remove all file types, not just regular files, symlinks, and directories, use -a option.

tmpwatch -a 10 /var/log/

The above command will delete all types of files including regular files, symlinks, and directories in the /var/log/ folder.

Exclude directories from deletion

Sometimes, you might want to delete files, but not directories. If so, the command would be:

tmpwatch -am 10 --nodirs /var/log/

The above command will delete all files (but not directories) that have not been modified for the past 10 hours.

Perform a test run without actually deleting anything

Sometimes, you might want to see which files are actually going to be deleted. This is helpful when running Tmpwatch on an important directory. If so, run Tmpwatch in test mode with the -t option.

tmpwatch -t 30 /var/log/

Sample output from CentOS 7 server:

removing file /var/log/wtmp
removing directory /var/log/ppp if empty
removing directory /var/log/tuned if empty
removing directory /var/log/anaconda if empty
removing file /var/log/dmesg.old
removing file /var/log/boot.log
removing file /var/log/dnf.librepo.log

On Debian-based systems, you will see an output like below.

$ tmpreaper -t 30 /var/log/
(PID 1803) Pretending to clean up directory `/var/log/'.
(PID 1804) Pretending to clean up directory `apache2'.
Pretending to remove file `apache2/error.log'.
Pretending to remove file `apache2/access.log'.
Pretending to remove file `apache2/other_vhosts_access.log'.
(PID 1804) Back from recursing down `apache2'.
(PID 1804) Pretending to clean up directory `dbconfig-common'.
Pretending to remove file `dbconfig-common/dbc.log'.
(PID 1804) Back from recursing down `dbconfig-common'.
(PID 1804) Pretending to clean up directory `dist-upgrade'.
(PID 1804) Back from recursing down `dist-upgrade'.
(PID 1804) Pretending to clean up directory `lxd'.
(PID 1804) Back from recursing down `lxd'.
Pretending to remove file `/var/log//cloud-init.log'.
(PID 1804) Pretending to clean up directory `landscape'.
Pretending to remove file `landscape/sysinfo.log'.
(PID 1804) Back from recursing down `landscape'.
[...]

This only simulates the operation and doesn't actually delete anything. Tmpwatch simply performs a dry run and shows in the output which files would be deleted.

Force file deletion

If you want to forcibly delete the files, use the -f option.

tmpwatch -f 10h /var/log/

Normally, files owned by the current user that have no write permission are not removed. The -f option deletes them as well.

Skip certain files from deletion

Tmpreaper has an option to exclude files from deletion. This is useful when you want to keep certain types of files while deleting everything else. To do so, use the --protect option like below.

tmpreaper --protect '*.txt' -t 10h /var/log/

This command will exclude all files with the .txt extension from deletion.

Sample output:

(PID 2623) Pretending to clean up directory `/var/log/'.
(PID 2624) Pretending to clean up directory `apache2'.
Pretending to remove file `apache2/error.log'.
Pretending to remove file `apache2/access.log'.
Pretending to remove file `apache2/other_vhosts_access.log'.
(PID 2624) Back from recursing down `apache2'.
(PID 2624) Pretending to clean up directory `dbconfig-common'.
Pretending to remove file `dbconfig-common/dbc.log'.
(PID 2624) Back from recursing down `dbconfig-common'.
(PID 2624) Pretending to clean up directory `dist-upgrade'.
(PID 2624) Back from recursing down `dist-upgrade'.
Pretending to remove empty directory `dist-upgrade'.
Entry matching `--protect' pattern skipped. `ostechnix.txt'
(PID 2624) Pretending to clean up directory `lxd'.

As you can see, Tmpreaper skips the *.txt files from deletion.

This option is not available in Tmpwatch, by the way.

Setting up a cron job to delete files periodically

You may not want to manually run Tmpwatch/Tmpreaper all the time. In that case, you can set up a cron job to automate the cleanup process.

When you install Tmpreaper, it creates a daily cron job ( /etc/cron.daily/tmpreaper ). This job reads its options from the /etc/tmpreaper.conf file and acts accordingly. Open the file and change the values as per your requirements. By default, Tmpreaper deletes files that are older than 7 days. You can, however, change this by modifying the value "TMPREAPER_TIME=7d" in the tmpreaper.conf file.
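
As a rough illustration, the relevant part of /etc/tmpreaper.conf might look like the sketch below. TMPREAPER_TIME is the value mentioned above; the other variable names are assumptions based on the Debian package defaults, so check the file actually shipped on your system:

TMPREAPER_TIME=7d               # delete files not accessed for 7 days
TMPREAPER_PROTECT_EXTRA=''      # extra shell patterns to protect, e.g. '/tmp/important*'
TMPREAPER_DIRS='/tmp/.'         # directories the daily cron job should clean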

If you use "Tmpwatch", you need to manually create cron job and put the cron entry in it.

# crontab -e

Add the following line:

0 1 * * * /usr/sbin/tmpwatch 30d /var/log/

As per the above cron job, Tmpwatch will run every day at 1 AM and delete files which have not been accessed for more than 30 days.

For more details about setting up cron jobs, refer to the cron documentation.

Again, please be careful while using the Tmpwatch/Tmpreaper commands. Double-check the path before running them to avoid data loss.

For more details, refer to the man pages.

$ man tmpwatch

Or,

$ man tmpreaper

[Sep 16, 2019] Artistic Style - Index

Sep 16, 2019 | astyle.sourceforge.net

Artistic Style 3.1 A Free, Fast, and Small Automatic Formatter
for C, C++, C++/CLI, Objective‑C, C#, and Java Source Code

Project Page: http://astyle.sourceforge.net/
SourceForge: http://sourceforge.net/projects/astyle/

Artistic Style is a source code indenter, formatter, and beautifier for the C, C++, C++/CLI, Objective‑C, C# and Java programming languages.

When indenting source code, we as programmers have a tendency to use both spaces and tab characters to create the desired indentation. Moreover, some editors by default insert spaces instead of tabs when the tab key is pressed. Other editors (Emacs for example) have the ability to "pretty up" lines by automatically setting up the white space before the code on the line, possibly inserting spaces in code that up to now used only tabs for indentation.

The NUMBER of spaces for each tab character in the source code can change between editors (unless the user sets up the number to his liking...). One of the standard problems programmers face when moving from one editor to another is that code containing both spaces and tabs, which was perfectly indented, suddenly becomes a mess to look at. Even if you as a programmer take care to ONLY use spaces or tabs, looking at other people's source code can still be problematic.

To address this problem, Artistic Style was created – a filter written in C++ that automatically re-indents and re-formats C / C++ / Objective‑C / C++/CLI / C# / Java source files. It can be used from a command line, or it can be incorporated as a library in another program.
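
A minimal sketch of typical command-line use (the --style and --indent options are real astyle flags; the file names are just placeholders):

astyle --style=allman --indent=spaces=4 main.cpp          # reformat one file; the original is kept as main.cpp.orig
astyle --style=google --recursive "src/*.cpp" "src/*.h"   # reformat a whole source tree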

[Sep 16, 2019] Usage -- PrettyPrinter 0.18.0 documentation

Sep 16, 2019 | prettyprinter.readthedocs.io

Usage

Install the package with pip :

pip install prettyprinter

Then, instead of

from pprint import pprint

do

from prettyprinter import cpprint

for colored output. For colorless output, remove the c prefix from the function name:

from prettyprinter import pprint
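
A quick way to try it straight from the shell (the sample dictionary below is arbitrary, and pip/python3 are assumed to be on your PATH):

pip install prettyprinter
python3 -c "from prettyprinter import cpprint; cpprint({'name': 'example', 'ids': [1, 2, 3]})"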

[Sep 16, 2019] JavaScript code prettifier

Sep 16, 2019 | github.com


An embeddable script that makes source-code snippets in HTML prettier.

[Sep 16, 2019] Pretty-print for shell script

Sep 16, 2019 | stackoverflow.com

Benoit ,Oct 21, 2010 at 13:19

I'm looking for something similiar to indent but for (bash) scripts. Console only, no colorizing, etc.

Do you know of one ?

Jamie ,Sep 11, 2012 at 3:00

Vim can indent bash scripts. But not reformat them before indenting.
Backup your bash script, open it with vim, type gg=GZZ and indent will be corrected. (Note for the impatient: this overwrites the file, so be sure to do that backup!)

Though, some bugs with << (expecting EOF as first character on a line) e.g.

EDIT: ZZ not ZQ

Daniel Martí ,Apr 8, 2018 at 13:52

A bit late to the party, but it looks like shfmt could do the trick for you.

Brian Chrisman ,Sep 9 at 7:47

In bash I do this:
reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3 | sed -e "s/^\s\s\s\s//"
}

this eliminates comments and reindents the script "bash way".

If you have HEREDOCS in your script, they get ruined by the sed in the previous function.

So use:

reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3
}

But all of your script will have 4-space indentation.

Or you can do:

reindent () 
{ 
    rstr=$(mktemp -u "XXXXXXXXXX");
    source <(echo "Zibri () {";cat "$1"|sed -e "s/^\s\s\s\s/$rstr/"; echo "}");
    echo '#!/bin/bash';
    declare -f Zibri | head --lines=-1 | tail --lines=+3 | sed -e "s/^\s\s\s\s//;s/$rstr/    /"
}

which takes care also of heredocs.

> ,

Found this http://www.linux-kheops.com/doc/perl/perl-aubert/fmt.script .

Very nice; the only thing I took out is the [...]->test substitution.

[Sep 16, 2019] A command-line HTML pretty-printer Making messy HTML readable - Stack Overflow

Notable quotes:
"... Have a look at the HTML Tidy Project: http://www.html-tidy.org/ ..."
Sep 16, 2019 | stackoverflow.com

nisetama ,Aug 12 at 10:33

I'm looking for recommendations for HTML pretty printers which fulfill the following requirements:

> ,

Have a look at the HTML Tidy Project: http://www.html-tidy.org/

The granddaddy of HTML tools, with support for modern standards.

There used to be a fork called tidy-html5 which has since become the official version. Here is its GitHub repository .

Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to modern standards.

For your needs, here is the command line to call Tidy:
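
The exact command line from the original answer is not reproduced here; a typical invocation, using real HTML Tidy options and placeholder file names, might look like:

tidy -indent -quiet -wrap 100 -output pretty.html messy.html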

[Sep 13, 2019] How To Delete Files Older Or Newer Than N Days Using find (With Extra Examples) - Linux Uprising Blog

Sep 13, 2019 | www.linuxuprising.com

Only delete files matching .extension older than N days from a directory and all its subdirectories:

find /directory/path/ -type f -mtime +N -name '*.extension' -delete

You can add -maxdepth 1 to prevent the command from descending into subdirectories, so that it only deletes files and first-level directories:
find /directory/path/ -mindepth 1 -maxdepth 1 -mtime +N -delete

You may also use -ctime +N to match (and, in this example, delete) files whose status was last changed more than N days ago (that is, the file's attributes/metadata and/or content were modified), as opposed to -mtime , which matches files based only on when their content was last modified:
find /directory/path/ -mindepth 1 -ctime +N -delete
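
Before running any of these with -delete , it is worth doing a dry run first by replacing -delete with -print (the path, age and extension below are placeholders):

find /directory/path/ -type f -mtime +30 -name '*.log' -print    # list what would be removed
find /directory/path/ -type f -mtime +30 -name '*.log' -delete   # then actually remove it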

[Sep 12, 2019] 9 Best File Comparison and Difference (Diff) Tools for Linux

Sep 12, 2019 | www.tecmint.com

3. Kompare

Kompare is a diff GUI wrapper that allows users to view differences between files and also merge them.

Some of its features include:

  1. Supports multiple diff formats
  2. Supports comparison of directories
  3. Supports reading diff files
  4. Customizable interface
  5. Creating and applying patches to source files

Kompare Tool – Compare Two Files in Linux

Visit Homepage : https://www.kde.org/applications/development/kompare/

4. DiffMerge

DiffMerge is a cross-platform GUI application for comparing and merging files. It has two functionality engines, the Diff engine which shows the difference between two files, which supports intra-line highlighting and editing and a Merge engine which outputs the changed lines between three files.

It has got the following features:

  1. Supports directory comparison
  2. File browser integration
  3. Highly configurable

DiffMerge – Compare Files in Linux

Visit Homepage : https://sourcegear.com/diffmerge/

5. Meld – Diff Tool

Meld is a lightweight GUI diff and merge tool. It enables users to compare files, directories plus version controlled programs. Built specifically for developers, it comes with the following features:

  1. Two-way and three-way comparison of files and directories
  2. Updates the file comparison as the user types more words
  3. Makes merges easier using auto-merge mode and actions on changed blocks
  4. Easy comparisons using visualizations
  5. Supports Git, Mercurial, Subversion, Bazaar plus many more

Meld – A Diff Tool to Compare File in Linux

Visit Homepage : http://meldmerge.org/

6. Diffuse – GUI Diff Tool

Diffuse is another popular, free, small and simple GUI diff and merge tool that you can use on Linux. Written in Python, it offers two major functionalities, namely file comparison and version control, allowing file editing, merging of files, and output of the differences between files.

You can view a comparison summary, select lines of text in files using a mouse pointer, match lines in adjacent files, and edit different files. Other features include:

  1. Syntax highlighting
  2. Keyboard shortcuts for easy navigation
  3. Supports unlimited undo
  4. Unicode support
  5. Supports Git, CVS, Darcs, Mercurial, RCS, Subversion, SVK and Monotone

DiffUse – A Tool to Compare Text Files in Linux

Visit Homepage : http://diffuse.sourceforge.net/

7. XXdiff – Diff and Merge Tool

XXdiff is a free, powerful file and directory comparator and merge tool that runs on Unix-like operating systems such as Linux, Solaris, HP/UX, IRIX, and DEC Tru64. One limitation of XXdiff is its lack of support for Unicode files and inline editing of diff files.

It has the following list of features:

  1. Shallow and recursive comparison of two or three files, or two directories
  2. Horizontal difference highlighting
  3. Interactive merging of files and saving of resulting output
  4. Supports merge reviews/policing
  5. Supports external diff tools such as GNU diff, SIG diff, Cleareddiff and many more
  6. Extensible using scripts
  7. Fully customizable using resource file plus many other minor features

xxdiff Tool

Visit Homepage : http://furius.ca/xxdiff/

8. KDiff3 – Diff and Merge Tool

KDiff3 is yet another cool, cross-platform diff and merge tool from KDevelop. It works on all Unix-like platforms including Linux, Mac OS X, and Windows.

It can compare or merge two to three files or directories and has the following notable features:

  1. Indicates differences line by line and character by character
  2. Supports auto-merge
  3. In-built editor to deal with merge-conflicts
  4. Supports Unicode, UTF-8 and many other codecs
  5. Allows printing of differences
  6. Windows explorer integration support
  7. Also supports auto-detection via byte-order-mark "BOM"
  8. Supports manual alignment of lines
  9. Intuitive GUI and many more

KDiff3 Tool for Linux

Visit Homepage : http://kdiff3.sourceforge.net/

9. TkDiff

TkDiff is also a cross-platform, easy-to-use GUI wrapper for the Unix diff tool. It provides a side-by-side view of the differences between two input files. It can run on Linux, Windows and Mac OS X.

Additionally, it has some other exciting features including diff bookmarks, a graphical map of differences for easy and quick navigation plus many more.

Visit Homepage : https://sourceforge.net/projects/tkdiff/

Having read this review of some of the best file and directory comparison and merge tools, you probably want to try some of them out. These may not be the only diff tools available on Linux, but they are known to offer some of the best features. Let us know of any other diff tools you have tested and think deserve to be mentioned among the best.

[Sep 06, 2019] Using Case Insensitive Matches with Bash Case Statements by Steven Vona

Jun 30, 2019 | www.putorius.net

If you want to match the pattern regardless of its case (capital letters or lowercase letters) you can set the nocasematch shell option with the shopt builtin. You can do this as the first line of your script. Since the script runs in a subshell, it won't affect your normal environment.

#!/bin/bash
 shopt -s nocasematch
 read -p "Name a Star Trek character: " CHAR
 case $CHAR in
   "Seven of Nine" | Neelix | Chokotay | Tuvok | Janeway )
       echo "$CHAR was in Star Trek Voyager"
       ;;&
   Archer | Phlox | Tpol | Tucker )
       echo "$CHAR was in Star Trek Enterprise"
       ;;&
   Odo | Sisko | Dax | Worf | Quark )
       echo "$CHAR was in Star Trek Deep Space Nine"
       ;;&
   Worf | Data | Riker | Picard )
       echo "$CHAR was in Star Trek The Next Generation" &&  echo "/etc/redhat-release"
       ;;
   *) echo "$CHAR is not in this script." 
       ;;
 esac

[Sep 04, 2019] Exec - Process Replacement Redirection in Bash by Steven Vona

Sep 02, 2019 | www.putorius.net

The Linux exec command is a bash builtin and a very interesting utility. It is not something most people who are new to Linux know. Most seasoned admins understand it but only use it occasionally. If you are a developer, programmer, or DevOps engineer, it is probably something you use more often. Let's take a deep dive into the builtin exec command, what it does, and how to use it.

Table of Contents

Basics of the Sub-Shell

In order to understand the exec command, you need a fundamental understanding of how sub-shells work.

... ... ...

What the Exec Command Does

In its most basic function, the exec command changes the default behavior of creating a sub-shell to run a command. If you run exec followed by a command, that command will REPLACE the original process; it will NOT create a sub-shell.

An additional feature of the exec command is redirection and manipulation of file descriptors . Explaining redirection and file descriptors is outside the scope of this tutorial. If these are new to you, please read " Linux IO, Standard Streams and Redirection " to get acquainted with these terms and functions.

In the following sections we will expand on both of these functions and try to demonstrate how to use them.

How to Use the Exec Command with Examples

Let's look at some examples of how to use the exec command and its options.

Basic Exec Command Usage – Replacement of Process

If you call exec and supply a command without any options, it simply replaces the shell with that command.

Let's run an experiment. First, I ran the ps command to find the process id of my second terminal window. In this case it was 17524. I then ran "exec tail" in that second terminal and checked the ps command again. If you look at the screenshot below, you will see the tail process replaced the bash process (same process ID).

Linux terminal screenshot showing the exec command replacing a parent process instead of creating a sub-shell.
Screenshot 3

Since the tail command replaced the bash shell process, the shell will close when the tail command terminates.
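
Here is a minimal sketch of the same experiment you can run yourself (the file being tailed is just a placeholder; any long-running command will do):

echo $$                       # note the PID of the current shell, e.g. 17524
exec tail -f /etc/hostname    # tail now runs under that same PID; no sub-shell is created
# when tail terminates (Ctrl+C), there is no shell left, so the terminal closes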

Exec Command Options

If the -l option is supplied, exec adds a dash at the beginning of the first (zeroth) argument given. So if we ran the following command:

exec -l tail -f /etc/redhat-release

It would produce the following output in the process list. Notice the highlighted dash in the CMD column.

The -c option causes the supplied command to run with an empty environment. Environment variables like PATH are cleared before the command is run. Let's try an experiment. We know that the printenv command prints all the settings for a user's environment. So here we will open a new bash process and run the printenv command to show we have some variables set. We will then run printenv again, but this time with the exec -c option.

animated gif showing the exec command output with the -c option supplied.

In the example above you can see that an empty environment is used when using exec with the -c option. This is why there was no output from the printenv command when it was run with exec.

The last option, -a [name], will pass name as the first (zeroth) argument to the command. The command will still run as expected, but the name of the process will change. In this next example we opened a second terminal and ran the following command:

exec -a PUTORIUS tail -f /etc/redhat-release

Here is the process list showing the results of the above command:

Linux terminal screenshot showing the exec command using the -a option to replace the name of the first argument
Screenshot 5

As you can see, exec passed PUTORIUS as the first (zeroth) argument to the command; therefore it shows in the process list with that name.

Using the Exec Command for Redirection & File Descriptor Manipulation

The exec command is often used for redirection. When a file descriptor is redirected with exec it affects the current shell. The redirection will last for the life of the shell or until it is explicitly stopped.

If no command is specified, redirections may be used to affect the current shell environment.

– Bash Manual

Here are some examples of how to use exec for redirection and manipulating file descriptors. As we stated above, a deep dive into redirection and file descriptors is outside the scope of this tutorial. Please read " Linux IO, Standard Streams and Redirection " for a good primer and see the resources section for more information.

Redirect all standard output (STDOUT) to a file:
exec >file

In the example animation below, we use exec to redirect all standard output to a file. We then enter some commands that should generate some output. We then use exec to redirect STDOUT to /dev/tty to restore standard output to the terminal. This effectively stops the redirection. Using the cat command, we can see that the file contains all the redirected output.

Screenshot of Linux terminal using exec to redirect all standard output to a file
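A minimal sketch of the same sequence, with a placeholder file name:

exec > /tmp/captured.log    # from here on, STDOUT of the current shell goes to the file
date                        # output lands in the file, not on the screen
uname -r
exec > /dev/tty             # restore STDOUT to the terminal, stopping the redirection
cat /tmp/captured.log       # shows the output that was captured
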
Open a file as file descriptor 6 for writing:
exec 6> file2write
Open file as file descriptor 8 for reading:
exec 8< file2read
Copy file descriptor 5 to file descriptor 7:
exec 7<&5
Close file descriptor 8:
exec 8<&-
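
Putting a couple of those together, here is a small sketch that writes through a descriptor opened with exec and then closes it (the file name is a placeholder):

exec 6> /tmp/notes.txt      # open /tmp/notes.txt for writing as file descriptor 6
echo "first note" >&6       # write to the file through fd 6
echo "second note" >&6
exec 6>&-                   # close fd 6 when done
cat /tmp/notes.txt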
Conclusion

In this article we covered the basics of the exec command. We discussed how to use it for process replacement, redirection and file descriptor manipulation.

In the past I have seen exec used in some interesting ways. It is often used in wrapper scripts for starting other binaries. Using process replacement, you can call a binary, and when it takes over there is no trace of the original wrapper script in the process table or memory. I have also seen many system administrators use exec when transferring work from one script to another. If you call a script inside of another script, the original process stays open as a parent. You can use exec to replace that original script.

I am sure there are people out there using exec in some interesting ways. I would love to hear your experiences with exec. Please feel free to leave a comment below with anything on your mind.

Resources

[Sep 03, 2019] bash - How to convert strings like 19-FEB-12 to epoch date in UNIX - Stack Overflow

Feb 11, 2013 | stackoverflow.com


hellish ,Feb 11, 2013 at 3:45

In UNIX, how do I convert date strings like the following to epoch milliseconds:
19-FEB-12
16-FEB-12
05-AUG-09

I need this to compare these dates with the current time on the server.

> ,

To convert a date to seconds since the epoch:
date --date="19-FEB-12" +%s

Current epoch:

date +%s

So, since your dates are in the past:

NOW=`date +%s`
THEN=`date --date="19-FEB-12" +%s`

let DIFF=$NOW-$THEN
echo "The difference is: $DIFF"

Using BSD's date command, you would need

$ date -j -f "%d-%B-%y" 19-FEB-12 +%s

Differences from GNU date :

  1. -j prevents date from trying to set the clock
  2. The input format must be explicitly set with -f
  3. The input date is a regular argument, not an option (viz. -d )
  4. When no time is specified with the date, use the current time instead of midnight.

[Sep 03, 2019] Linux - UNIX Convert Epoch Seconds To the Current Time - nixCraft

Sep 03, 2019 | www.cyberciti.biz

Print Current UNIX Time

Type the following command to display the seconds since the epoch:

date +%s


Sample outputs:
1268727836

Convert Epoch To Current Time

Type the command:

date -d @Epoch
date -d @1268727836
date -d "1970-01-01 1268727836 sec GMT"


Sample outputs:

Tue Mar 16 13:53:56 IST 2010

Please note that the @ feature only works with newer versions of date (GNU coreutils v5.3.0+). To convert a number of seconds back to a more readable form, use a command like this:

date -d @1268727836 +"%d-%m-%Y %T %z"


Sample outputs:

16-03-2010 13:53:56 +0530

[Sep 03, 2019] command line - How do I convert an epoch timestamp to a human readable format on the cli - Unix Linux Stack Exchange

Sep 03, 2019 | unix.stackexchange.com

Gilles ,Oct 11, 2010 at 18:14

date -d @1190000000 Replace 1190000000 with your epoch

Stefan Lasiewski ,Oct 11, 2010 at 18:04

$ echo 1190000000 | perl -pe 's/(\d+)/localtime($1)/e' 
Sun Sep 16 20:33:20 2007

This can come in handy for those applications which use epoch time in the logfiles:

$ tail -f /var/log/nagios/nagios.log | perl -pe 's/(\d+)/localtime($1)/e'
[Thu May 13 10:15:46 2010] EXTERNAL COMMAND: PROCESS_SERVICE_CHECK_RESULT;HOSTA;check_raid;0;check_raid.pl: OK (Unit 0 on Controller 0 is OK)

Stéphane Chazelas ,Jul 31, 2015 at 20:24

With bash-4.2 or above:
printf '%(%F %T)T\n' 1234567890

(where %F %T is the strftime() -type format)

That syntax is inspired from ksh93 .

In ksh93 however, the argument is taken as a date expression where various and hardly documented formats are supported.

For a Unix epoch time, the syntax in ksh93 is:

printf '%(%F %T)T\n' '#1234567890'

ksh93 however seems to use its own algorithm for the timezone and can get it wrong. For instance, in Britain, it was summer time all year in 1970, but:

$ TZ=Europe/London bash -c 'printf "%(%c)T\n" 0'
Thu 01 Jan 1970 01:00:00 BST
$ TZ=Europe/London ksh93 -c 'printf "%(%c)T\n" "#0"'
Thu Jan  1 00:00:00 1970

DarkHeart ,Jul 28, 2014 at 3:56

Custom format with GNU date :
date -d @1234567890 +'%Y-%m-%d %H:%M:%S'

Or with GNU awk :

awk 'BEGIN { print strftime("%Y-%m-%d %H:%M:%S", 1234567890); }'

Linked SO question: https://stackoverflow.com/questions/3249827/convert-from-unixtime-at-command-line

,

The two I frequently use are:
$ perl -leprint\ scalar\ localtime\ 1234567890
Sat Feb 14 00:31:30 2009

[Sep 03, 2019] Time conversion using Bash Vanstechelman.eu

Sep 03, 2019 | www.vanstechelman.eu

Time conversion using Bash

This article shows how you can obtain the UNIX epoch time (number of seconds since 1970-01-01 00:00:00 UTC) using the Linux bash "date" command. It also shows how you can convert a UNIX epoch time to a human readable time.

Obtain UNIX epoch time using bash
Obtaining the UNIX epoch time using bash is easy. Use the built-in date command and instruct it to output the number of seconds since 1970-01-01 00:00:00 UTC. You can do this by passing a format string as a parameter to the date command. The format string for UNIX epoch time is '%s'.

lode@srv-debian6:~$ date "+%s"
1234567890

To convert a specific date and time into UNIX epoch time, use the -d parameter. The next example shows how to convert the timestamp "February 20th, 2013 at 08:41:15" into UNIX epoch time.

lode@srv-debian6:~$ date "+%s" -d "02/20/2013 08:41:15"
1361346075

Converting UNIX epoch time to human readable time
Even though I didn't find it in the date manual, it is possible to use the date command to reformat a UNIX epoch time into a human readable time. The syntax is the following:

lode@srv-debian6:~$ date -d @1234567890
Sat Feb 14 00:31:30 CET 2009

The same thing can also be achieved using a bit of perl programming:

lode@srv-debian6:~$ perl -e 'print scalar(localtime(1234567890)), "\n"'
Sat Feb 14 00:31:30 2009

Please note that the printed time is formatted for the timezone in which your Linux system is configured. My system is configured for UTC+2, so you may get different output for the same command.
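
If you want the output in a specific timezone regardless of the system configuration, you can override TZ for a single command (a quick sketch using GNU date):

TZ=UTC date -d @1234567890               # Fri Feb 13 23:31:30 UTC 2009
TZ=Europe/Brussels date -d @1234567890   # same instant, shown in Brussels local time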

[Sep 03, 2019] Run PerlTidy to beautify the code

Notable quotes:
"... Once I installed Code::TidyAll and placed those files in the root directory of the project, I could run tidyall -a . ..."
Sep 03, 2019 | perlmaven.com

The Code-TidyAll distribution provides a command line script called tidyall that will use Perl::Tidy to change the layout of the code.

This tandem needs two configuration files.

The .perltidyrc file contains the instructions to Perl::Tidy that describes the layout of a Perl-file. We used the following file copied from the source code of the Perl Maven project.

-pbp
-nst
-et=4
--maximum-line-length=120

# Break a line after opening/before closing token.
-vt=0
-vtc=0

The tidyall command uses a separate file called .tidyallrc that describes which files need to be beautified.

[PerlTidy]
select = {lib,t}/**/*.{pl,pm,t}
select = Makefile.PL
select = {mod2html,podtree2html,pods2html,perl2html}
argv = --profile=$ROOT/.perltidyrc

[SortLines]
select = .gitignore
Once I installed Code::TidyAll and placed those files in the root directory of the project, I could run tidyall -a .

That created a directory called .tidyall.d/ where it stores cached versions of the files, and changed all the files that were matched by the select statements in the .tidyallrc file.

Then, I added .tidyall.d/ to the .gitignore file to avoid adding that subdirectory to the repository and ran tidyall -a again to make sure the .gitignore file is sorted.
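
As a rough sketch of the whole workflow described above (assuming the cpanm client is available for installing the modules):

cpanm Perl::Tidy Code::TidyAll      # install the formatter and the tidyall driver
tidyall -a                          # tidy every file matched by .tidyallrc
echo '.tidyall.d/' >> .gitignore    # keep the cache directory out of the repository
tidyall -a                          # run again so the modified .gitignore gets sorted too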

[Sep 02, 2019] Switch statement for bash script

Sep 02, 2019 | www.linuxquestions.org
Switch statement for bash script
Hello, I am currently trying out the switch statement using a bash script.

CODE:
showmenu () {
echo "1. Number1"
echo "2. Number2"
echo "3. Number3"
echo "4. All"
echo "5. Quit"
}

while true
do
showmenu
read choice
echo "Enter a choice:"
case "$choice" in
"1")
echo "Number One"
;;
"2")
echo "Number Two"
;;
"3")
echo "Number Three"
;;
"4")
echo "Number One, Two, Three"
;;
"5")
echo "Program Exited"
exit 0
;;
*)
echo "Please enter number ONLY ranging from 1-5!"
;;
esac
done

OUTPUT:
1. Number1
2. Number2
3. Number3
4. All
5. Quit
Enter a choice:

So, when the code is run, a menu with options 1-5 is shown, then the user is asked to enter a choice, and finally the output is shown. But is it possible for the user to enter multiple choices? For example, if the user enters choices "1" and "3", the output should be "Number One" and "Number Three". Any idea?

Just something to get you started.

Code:

#! /bin/bash
showmenu ()
{
    typeset ii
    typeset -i jj=1
    typeset -i kk
    typeset -i valid=0  # valid=1 if input is good

    while (( ! valid ))
    do
        for ii in "${options[@]}"
        do
            echo "$jj) $ii"
            let jj++
        done
        read -e -p 'Select a list of actions : ' -a answer
        jj=0
        valid=1
        for kk in "${answer[@]}"
        do
            if (( kk < 1 || kk > "${#options[@]}" ))
            then
                echo "Error Item $jj is out of bounds" 1>&2
                valid=0
                break
            fi
            let jj++
        done
    done
}

typeset -r c1=Number1
typeset -r c2=Number2
typeset -r c3=Number3
typeset -r c4=All
typeset -r c5=Quit
typeset -ra options=($c1 $c2 $c3 $c4 $c5)
typeset -a answer
typeset -i kk
while true
do
    showmenu
    for kk in "${answer[@]}"
    do
        case $kk in
        1)
            echo 'Number One'
            ;;
        2)
            echo 'Number Two'
            ;;
        3)
            echo 'Number Three'
            ;;
        4)
            echo 'Number One, Two, Three'
            ;;
        5)
            echo 'Program Exit'
            exit 0
            ;;
        esac
    done 
done
wjs1990, Nov 16, 2009:

Ok will try it out first. Thanks.

evo2, Nov 16, 2009:

This can be done just by wrapping your case block in a for loop and changing one line.

Code:

#!/bin/bash
showmenu () {
    echo "1. Number1"
    echo "2. Number2"
    echo "3. Number3"
    echo "4. All"
    echo "5. Quit"
}

while true ; do
    showmenu
    read choices
    for choice in $choices ; do
        case "$choice" in
            1)
                echo "Number One" ;;
            2)
                echo "Number Two" ;;
            3)
                echo "Number Three" ;;
            4)
                echo "Numbers One, two, three" ;;
            5)
                echo "Exit"
                exit 0 ;;
            *)
                echo "Please enter number ONLY ranging from 1-5!"
                ;;
        esac
    done
done
You can now enter any number of numbers separated by whitespace.

Cheers,

EVo2.

[Sep 02, 2019] bash - Pretty-print for shell script

Oct 21, 2010 | stackoverflow.com



Benoit ,Oct 21, 2010 at 13:19

I'm looking for something similiar to indent but for (bash) scripts. Console only, no colorizing, etc.

Do you know of one ?

Jamie ,Sep 11, 2012 at 3:00

Vim can indent bash scripts. But not reformat them before indenting.
Backup your bash script, open it with vim, type gg=GZZ and indent will be corrected. (Note for the impatient: this overwrites the file, so be sure to do that backup!)

Though, some bugs with << (expecting EOF as first character on a line) e.g.

EDIT: ZZ not ZQ

Daniel Martí ,Apr 8, 2018 at 13:52

A bit late to the party, but it looks like shfmt could do the trick for you.

Brian Chrisman ,Aug 11 at 4:08

In bash I do this:
reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3 | sed -e "s/^\s\s\s\s//"
}

this eliminates comments and reindents the script "bash way".

If you have HEREDOCS in your script, they get ruined by the sed in the previous function.

So use:

reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3
}

But all of your script will have 4-space indentation.

Or you can do:

reindent () 
{ 
    rstr=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 1);
    source <(echo "Zibri () {";cat "$1"|sed -e "s/^\s\s\s\s/$rstr/"; echo "}");
    echo '#!/bin/bash';
    declare -f Zibri | head --lines=-1 | tail --lines=+3 | sed -e "s/^\s\s\s\s//;s/$rstr/    /"
}

which takes care also of heredocs.

Pius Raeder ,Jan 10, 2017 at 8:35

Found this http://www.linux-kheops.com/doc/perl/perl-aubert/fmt.script .

Very nice; the only thing I took out is the [...]->test substitution.

[Sep 02, 2019] mvdan-sh A shell parser, formatter, and interpreter (POSIX-Bash-mksh)

Written in Go language
Sep 02, 2019 | github.com

sh

A shell parser, formatter and interpreter. Supports POSIX Shell , Bash and mksh . Requires Go 1.11 or later.

Quick start

To parse shell scripts, inspect them, and print them out, see the syntax examples .

For high-level operations like performing shell expansions on strings, see the shell examples .

shfmt

Go 1.11 and later can download the latest v2 stable release:

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/cmd/shfmt

The latest v3 pre-release can be downloaded in a similar manner, using the /v3 module:

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/v3/cmd/shfmt

Finally, any older release can be built with their respective older Go versions by manually cloning, checking out a tag, and running go build ./cmd/shfmt .

shfmt formats shell programs. It can use tabs or any number of spaces to indent. See canonical.sh for a quick look at its default style.

You can feed it standard input, any number of files or any number of directories to recurse into. When recursing, it will operate on .sh and .bash files and ignore files starting with a period. It will also operate on files with no extension and a shell shebang.

shfmt -l -w script.sh

Typically, CI builds should use the command below, to error if any shell scripts in a project don't adhere to the format:

shfmt -d .

Use -i N to indent with a number of spaces instead of tabs. There are other formatting options - see shfmt -h . For example, to get the formatting appropriate for Google's Style guide, use shfmt -i 2 -ci .

Packages are available on Arch , CRUX , Docker , FreeBSD , Homebrew , NixOS , Scoop , Snapcraft , and Void .

Replacing bash -n

bash -n can be useful to check for syntax errors in shell scripts. However, shfmt >/dev/null can do a better job as it checks for invalid UTF-8 and does all parsing statically, including checking POSIX Shell validity:

$ echo '${foo:1 2}' | bash -n
$ echo '${foo:1 2}' | shfmt
1:9: not a valid arithmetic operator: 2
$ echo 'foo=(1 2)' | bash --posix -n
$ echo 'foo=(1 2)' | shfmt -p
1:5: arrays are a bash feature

gosh

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/v3/cmd/gosh

Experimental shell that uses interp . Work in progress, so don't expect stability just yet.

Fuzzing

This project makes use of go-fuzz to find crashes and hangs in both the parser and the printer. To get started, run:

git checkout fuzz
./fuzz

Caveats

$ echo '${array[spaced string]}' | shfmt
1:16: not a valid arithmetic operator: string
$ echo '${array[dash-string]}' | shfmt
${array[dash - string]}
$ echo '$((foo); (bar))' | shfmt
1:1: reached ) without matching $(( with ))

JavaScript

A subset of the Go packages are available as an npm package called mvdan-sh . See the _js directory for more information.

Docker

To build a Docker image, checkout a specific version of the repository and run:

docker build -t my:tag -f cmd/shfmt/Dockerfile .

Related projects

[Aug 29, 2019] Parsing bash script options with getopts by Kevin Sookocheff

Mar 30, 2018 | sookocheff.com

Posted on January 4, 2015 by Kevin Sookocheff

A common task in shell scripting is to parse command line arguments to your script. Bash provides the getopts built-in function to do just that. This tutorial explains how to use the getopts built-in function to parse arguments and options to a bash script.

The getopts function takes three parameters. The first is a specification of which options are valid, listed as a sequence of letters. For example, the string 'ht' signifies that the options -h and -t are valid.

The second argument to getopts is a variable that will be populated with the option or argument to be processed next. In the following loop, opt will hold the value of the current option that has been parsed by getopts .

while getopts ":ht" opt; do
  case ${opt} in
    h ) # process option h
      ;;
    t ) # process option t
      ;;
    \? ) echo "Usage: cmd [-h] [-t]"
      ;;
  esac
done

This example shows a few additional features of getopts . First, if an invalid option is provided, the option variable is assigned the value ? . You can catch this case and provide an appropriate usage message to the user. Second, this behaviour is only true when you prepend the list of valid options with : to disable the default error handling of invalid options. It is recommended to always disable the default error handling in your scripts.

The third argument to getopts is the list of arguments and options to be processed. When not provided, this defaults to the arguments and options provided to the application ( $@ ). You can provide this third argument to use getopts to parse any list of arguments and options you provide.
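
For example, here is a small sketch of parsing an explicit word list instead of the script's own parameters (the list contents are made up):

custom_args=(-t /tmp/out.txt -h)            # hypothetical list to parse instead of "$@"
while getopts ":ht:" opt "${custom_args[@]}"; do
  case ${opt} in
    h ) echo "help requested" ;;
    t ) echo "target is $OPTARG" ;;
    \? ) echo "invalid option: -$OPTARG" ;;
  esac
done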

Shifting processed options

The variable OPTIND holds the index of the next argument to be processed after the last call to getopts . It is common practice to call the shift command at the end of your processing loop to remove options that have already been handled from $@ .

shift $((OPTIND -1))
Parsing options with arguments

Options that themselves have arguments are signified with a : . The argument to an option is placed in the variable OPTARG . In the following example, the option t takes an argument. When the argument is provided, we copy its value to the variable target . If no argument is provided getopts will set opt to : . We can recognize this error condition by catching the : case and printing an appropriate error message.

while getopts ":t:" opt; do
  case ${opt} in
    t )
      target=$OPTARG
      ;;
    \? )
      echo "Invalid option: $OPTARG" 1>&2
      ;;
    : )
      echo "Invalid option: $OPTARG requires an argument" 1>&2
      ;;
  esac
done
shift $((OPTIND -1))
An extended example – parsing nested arguments and options

Let's walk through an extended example of processing a command that takes options, has a sub-command, and whose sub-command takes an additional option that has an argument. This is a mouthful so let's break it down using an example. Let's say we are writing our own version of the pip command . In this version you can call pip with the -h option to display a help message.

> pip -h
Usage:
    pip -h                      Display this help message.
    pip install                 Install a Python package.

We can use getopts to parse the -h option with the following while loop. In it we catch invalid options with \? and shift all arguments that have been processed with shift $((OPTIND -1)) .

while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install                 Install a Python package."
      exit 0
      ;;
    \? )
      echo "Invalid Option: -$OPTARG" 1>&2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))

Now let's add the sub-command install to our script. install takes as an argument the Python package to install.

> pip install urllib3

install also takes an option, -t . -t takes as an argument the location to install the package to relative to the current directory.

> pip install urllib3 -t ./src/lib

To process this line we must find the sub-command to execute. This value is the first argument to our script.

subcommand=$1
shift # Remove `pip` from the argument list

Now we can process the sub-command install . In our example, the option -t is actually an option that follows the package argument so we begin by removing install from the argument list and processing the remainder of the line.

case "$subcommand" in
  install)
    package=$1
    shift # Remove `install` from the argument list
    ;;
esac

After shifting the argument list we can process the remaining arguments as if they are of the form package -t src/lib . The -t option takes an argument itself. This argument will be stored in the variable OPTARG and we save it to the variable target for further work.

case "$subcommand" in
  install)
    package=$1
    shift # Remove `install` from the argument list

  while getopts ":t:" opt; do
    case ${opt} in
      t )
        target=$OPTARG
        ;;
      \? )
        echo "Invalid Option: -$OPTARG" 1>&2
        exit 1
        ;;
      : )
        echo "Invalid Option: -$OPTARG requires an argument" 1>&2
        exit 1
        ;;
    esac
  done
  shift $((OPTIND -1))
  ;;
esac

Putting this all together, we end up with the following script that parses arguments to our version of pip and its sub-command install .

package=""  # Default to empty package
target=""  # Default to empty target

# Parse options to the `pip` command
while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install <package>       Install <package>."
      exit 0
      ;;
   \? )
     echo "Invalid Option: -$OPTARG" 1>&2
     exit 1
     ;;
  esac
done
shift $((OPTIND -1))

subcommand=$1; shift  # Remove 'pip' from the argument list
case "$subcommand" in
  # Parse options to the install sub command
  install)
    package=$1; shift  # Remove 'install' from the argument list

    # Process package options
    while getopts ":t:" opt; do
      case ${opt} in
        t )
          target=$OPTARG
          ;;
        \? )
          echo "Invalid Option: -$OPTARG" 1>&2
          exit 1
          ;;
        : )
          echo "Invalid Option: -$OPTARG requires an argument" 1>&2
          exit 1
          ;;
      esac
    done
    shift $((OPTIND -1))
    ;;
esac

After processing the above sequence of commands, the variable package will hold the package to install and the variable target will hold the target to install the package to. You can use this as a template for processing any set of arguments and options to your scripts.


[Aug 29, 2019] How do I parse command line arguments in Bash - Stack Overflow

Jul 10, 2017 | stackoverflow.com

Livven, Jul 10, 2017 at 8:11

Update: It's been more than 5 years since I started this answer. Thank you for LOTS of great edits/comments/suggestions. In order to save maintenance time, I've modified the code block to be 100% copy-paste ready. Please do not post comments like "What if you changed X to Y ". Instead, copy-paste the code block, see the output, make the change, rerun the script, and comment "I changed X to Y and " I don't have time to test your ideas and tell you if they work.
Method #1: Using bash without getopt[s]

Two common ways to pass key-value-pair arguments are:

Bash Space-Separated (e.g., --option argument ) (without getopt[s])

Usage demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

cat >/tmp/demo-space-separated.sh <<'EOF'
#!/bin/bash

POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"

case $key in
    -e|--extension)
    EXTENSION="$2"
    shift # past argument
    shift # past value
    ;;
    -s|--searchpath)
    SEARCHPATH="$2"
    shift # past argument
    shift # past value
    ;;
    -l|--lib)
    LIBPATH="$2"
    shift # past argument
    shift # past value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument
    ;;
    *)    # unknown option
    POSITIONAL+=("$1") # save it in an array for later
    shift # past argument
    ;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters

echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi
EOF

chmod +x /tmp/demo-space-separated.sh

/tmp/demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

output from copy-pasting the block above:

FILE EXTENSION  = conf
SEARCH PATH     = /etc
LIBRARY PATH    = /usr/lib
DEFAULT         =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34    example.com
Bash Equals-Separated (e.g., --option=argument ) (without getopt[s])

Usage demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

cat >/tmp/demo-equals-separated.sh <<'EOF'
#!/bin/bash

for i in "$@"
do
case $i in
    -e=*|--extension=*)
    EXTENSION="${i#*=}"
    shift # past argument=value
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    shift # past argument=value
    ;;
    -l=*|--lib=*)
    LIBPATH="${i#*=}"
    shift # past argument=value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument with no value
    ;;
    *)
          # unknown option
    ;;
esac
done
echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 $1
fi
EOF

chmod +x /tmp/demo-equals-separated.sh

/tmp/demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

output from copy-pasting the block above:

FILE EXTENSION  = conf
SEARCH PATH     = /etc
LIBRARY PATH    = /usr/lib
DEFAULT         =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34    example.com

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess, or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.
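
A quick sketch of what that parameter expansion does in isolation:

i="--extension=conf"
echo "${i#*=}"     # prints "conf": strips the shortest prefix matching '*=' (everything up to the first '=')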

Method #2: Using bash with getopt[s]

from: http://mywiki.wooledge.org/BashFAQ/035#getopts

getopt(1) limitations (older, relatively-recent getopt versions):

More recent getopt versions don't have these limitations.

Additionally, the POSIX shell (and others) offer getopts which doesn't have these limitations. I've included a simplistic getopts example.

Usage demo-getopts.sh -vf /etc/hosts foo bar

cat >/tmp/demo-getopts.sh <<'EOF'
#!/bin/sh

# A POSIX variable
OPTIND=1         # Reset in case getopts has been used previously in the shell.

# Initialize our own variables:
output_file=""
verbose=0

while getopts "h?vf:" opt; do
    case "$opt" in
    h|\?)
        show_help
        exit 0
        ;;
    v)  verbose=1
        ;;
    f)  output_file=$OPTARG
        ;;
    esac
done

shift $((OPTIND-1))

[ "${1:-}" = "--" ] && shift

echo "verbose=$verbose, output_file='$output_file', Leftovers: $@"
EOF

chmod +x /tmp/demo-getopts.sh

/tmp/demo-getopts.sh -vf /etc/hosts foo bar

output from copy-pasting the block above:

verbose=1, output_file='/etc/hosts', Leftovers: foo bar

The advantages of getopts are:

  1. It's more portable, and will work in other shells like dash .
  2. It can handle multiple single options like -vf filename in the typical Unix way, automatically.

The disadvantage of getopts is that it can only handle short options ( -h , not --help ) without additional code.

There is a getopts tutorial which explains what all of the syntax and variables mean. In bash, there is also help getopts , which might be informative.

johncip ,Jul 23, 2018 at 15:15

No answer mentions enhanced getopt . And the top-voted answer is misleading: It either ignores -vfd style short options (requested by the OP) or options after positional arguments (also requested by the OP); and it ignores parsing errors. Instead:

The following calls

myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile
myscript -v -f -d -o/fizz/someOtherFile -- ./foo/bar/someFile
myscript --verbose --force --debug ./foo/bar/someFile -o/fizz/someOtherFile
myscript --output=/fizz/someOtherFile ./foo/bar/someFile -vfd
myscript ./foo/bar/someFile -df -v --output /fizz/someOtherFile

all return

verbose: y, force: y, debug: y, in: ./foo/bar/someFile, out: /fizz/someOtherFile

with the following myscript

#!/bin/bash
# saner programming env: these switches turn some bugs into errors
set -o errexit -o pipefail -o noclobber -o nounset

# -allow a command to fail with !'s side effect on errexit
# -use return value from ${PIPESTATUS[0]}, because ! hosed $?
! getopt --test > /dev/null 
if [[ ${PIPESTATUS[0]} -ne 4 ]]; then
    echo "I'm sorry, 'getopt --test' failed in this environment."
    exit 1
fi

OPTIONS=dfo:v
LONGOPTS=debug,force,output:,verbose

# -regarding ! and PIPESTATUS see above
# -temporarily store output to be able to check for errors
# -activate quoting/enhanced mode (e.g. by writing out "--options")
# -pass arguments only via   -- "$@"   to separate them correctly
! PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTS --name "$0" -- "$@")
if [[ ${PIPESTATUS[0]} -ne 0 ]]; then
    # e.g. return value is 1
    #  then getopt has complained about wrong arguments to stdout
    exit 2
fi
# read getopt's output this way to handle the quoting right:
eval set -- "$PARSED"

d=n f=n v=n outFile=-
# now enjoy the options in order and nicely split until we see --
while true; do
    case "$1" in
        -d|--debug)
            d=y
            shift
            ;;
        -f|--force)
            f=y
            shift
            ;;
        -v|--verbose)
            v=y
            shift
            ;;
        -o|--output)
            outFile="$2"
            shift 2
            ;;
        --)
            shift
            break
            ;;
        *)
            echo "Programming error"
            exit 3
            ;;
    esac
done

# handle non-option arguments
if [[ $# -ne 1 ]]; then
    echo "$0: A single input file is required."
    exit 4
fi

echo "verbose: $v, force: $f, debug: $d, in: $1, out: $outFile"

1 enhanced getopt is available on most "bash-systems", including Cygwin; on OS X try brew install gnu-getopt or sudo port install getopt
2 the POSIX exec() conventions have no reliable way to pass binary NULL in command line arguments; those bytes prematurely end the argument
3 first version released in 1997 or before (I only tracked it back to 1997)

Tobias Kienzler ,Mar 19, 2016 at 15:23

from : digitalpeer.com with minor modifications

Usage myscript.sh -p=my_prefix -s=dirname -l=libname

#!/bin/bash
for i in "$@"
do
case $i in
    -p=*|--prefix=*)
    PREFIX="${i#*=}"

    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    ;;
    -l=*|--lib=*)
    DIR="${i#*=}"
    ;;
    --default)
    DEFAULT=YES
    ;;
    *)
            # unknown option
    ;;
esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}

To better understand ${i#*=} search for "Substring Removal" in this guide . It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.

Robert Siemer ,Jun 1, 2018 at 1:57

getopt() / getopts() is a good option. Stolen from here :

The simple use of "getopt" is shown in this mini-script:

#!/bin/bash
echo "Before getopt"
for i
do
  echo $i
done
args=`getopt abc:d $*`
set -- $args
echo "After getopt"
for i
do
  echo "-->$i"
done

What we have said is that any of -a, -b, -c or -d will be allowed, but that -c is followed by an argument (the "c:" says that).

If we call this "g" and try it out:

bash-2.05a$ ./g -abc foo
Before getopt
-abc
foo
After getopt
-->-a
-->-b
-->-c
-->foo
-->--

We start with two arguments, and "getopt" breaks apart the options and puts each in its own argument. It also added "--".
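
The mini-script only prints the rearranged arguments; as a rough sketch (not part of the quoted answer), the usual next step is to walk through them with case until the -- marker that getopt appended:

while [ "$1" != "--" ]; do
  case "$1" in
    -a) a_flag=1 ;;
    -b) b_flag=1 ;;
    -c) c_arg="$2"; shift ;;   # -c takes a value, so consume it as well
    -d) d_flag=1 ;;
  esac
  shift
done
shift   # drop the "--" itself; anything left in "$@" is a non-option argument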

hfossli ,Jan 31 at 20:05

More succinct way

script.sh

#!/bin/bash

while [[ "$#" -gt 0 ]]; do case $1 in
  -d|--deploy) deploy="$2"; shift;;
  -u|--uglify) uglify=1;;
  *) echo "Unknown parameter passed: $1"; exit 1;;
esac; shift; done

echo "Should deploy? $deploy"
echo "Should uglify? $uglify"

Usage:

./script.sh -d dev -u

# OR:

./script.sh --deploy dev --uglify

bronson ,Apr 27 at 23:22

At the risk of adding another example to ignore, here's my scheme.

Hope it's useful to someone.

while [ "$#" -gt 0 ]; do
  case "$1" in
    -n) name="$2"; shift 2;;
    -p) pidfile="$2"; shift 2;;
    -l) logfile="$2"; shift 2;;

    --name=*) name="${1#*=}"; shift 1;;
    --pidfile=*) pidfile="${1#*=}"; shift 1;;
    --logfile=*) logfile="${1#*=}"; shift 1;;
    --name|--pidfile|--logfile) echo "$1 requires an argument" >&2; exit 1;;

    -*) echo "unknown option: $1" >&2; exit 1;;
    *) handle_argument "$1"; shift 1;;
  esac
done
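
The loop above hands anything that is not an option to a handle_argument function that the author leaves to the reader; a minimal hypothetical definition that simply collects the positional arguments into a bash array could look like this:

# hypothetical helper for the loop above (bash-specific)
positional=()
handle_argument() {
  positional+=("$1")
}

# after the loop has finished:
# set -- "${positional[@]}"   # restore them as $1, $2, ... if you prefer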

Robert Siemer ,Jun 6, 2016 at 19:28

I'm about 4 years late to this question, but want to give back. I used the earlier answers as a starting point to tidy up my old adhoc param parsing. I then refactored out the following template code. It handles both long and short params, using = or space separated arguments, as well as multiple short params grouped together. Finally it re-inserts any non-param arguments back into the $1,$2.. variables. I hope it's useful.
#!/usr/bin/env bash

# NOTICE: Uncomment if your script depends on bashisms.
#if [ -z "$BASH_VERSION" ]; then bash $0 $@ ; exit $? ; fi

echo "Before"
for i ; do echo - $i ; done


# Code template for parsing command line parameters using only portable shell
# code, while handling both long and short params, handling '-f file' and
# '-f=file' style param data and also capturing non-parameters to be inserted
# back into the shell positional parameters.

while [ -n "$1" ]; do
        # Copy so we can modify it (can't modify $1)
        OPT="$1"
        # Detect argument termination
        if [ x"$OPT" = x"--" ]; then
                shift
                for OPT ; do
                        REMAINS="$REMAINS \"$OPT\""
                done
                break
        fi
        # Parse current opt
        while [ x"$OPT" != x"-" ] ; do
                case "$OPT" in
                        # Handle --flag=value opts like this
                        -c=* | --config=* )
                                CONFIGFILE="${OPT#*=}"
                                shift
                                ;;
                        # and --flag value opts like this
                        -c* | --config )
                                CONFIGFILE="$2"
                                shift
                                ;;
                        -f* | --force )
                                FORCE=true
                                ;;
                        -r* | --retry )
                                RETRY=true
                                ;;
                        # Anything unknown is recorded for later
                        * )
                                REMAINS="$REMAINS \"$OPT\""
                                break
                                ;;
                esac
                # Check for multiple short options
                # NOTICE: be sure to update this pattern to match valid options
                NEXTOPT="${OPT#-[cfr]}" # try removing single short opt
                if [ x"$OPT" != x"$NEXTOPT" ] ; then
                        OPT="-$NEXTOPT"  # multiple short opts, keep going
                else
                        break  # long form, exit inner loop
                fi
        done
        # Done with that param. move to next
        shift
done
# Set the non-parameters back into the positional parameters ($1 $2 ..)
eval set -- $REMAINS


echo -e "After: \n configfile='$CONFIGFILE' \n force='$FORCE' \n retry='$RETRY' \n remains='$REMAINS'"
for i ; do echo - $i ; done


I have found writing portable argument parsing in scripts so frustrating that I have written Argbash - a FOSS code generator that can generate the argument-parsing code for your script, plus it has some nice features:

https://argbash.io

[Aug 29, 2019] shell - An example of how to use getopts in bash - Stack Overflow

The key thing to understand is that getopts only parses options. You need to shift them as a separate operation:
shift $((OPTIND-1))
May 10, 2013 | stackoverflow.com

An example of how to use getopts in bash

chepner ,May 10, 2013 at 13:42

I want to call myscript file in this way:
$ ./myscript -s 45 -p any_string

or

$ ./myscript -h >>> should display help
$ ./myscript    >>> should display help

My requirements are:

So far I have tried this code:

#!/bin/bash
while getopts "h:s:" arg; do
  case $arg in
    h)
      echo "usage" 
      ;;
    s)
      strength=$OPTARG
      echo $strength
      ;;
  esac
done

But with that code I get errors. How to do it with Bash and getopt ?


#!/bin/bash

usage() { echo "Usage: $0 [-s <45|90>] [-p <string>]" 1>&2; exit 1; }

while getopts ":s:p:" o; do
    case "${o}" in
        s)
            s=${OPTARG}
            ((s == 45 || s == 90)) || usage
            ;;
        p)
            p=${OPTARG}
            ;;
        *)
            usage
            ;;
    esac
done
shift $((OPTIND-1))

if [ -z "${s}" ] || [ -z "${p}" ]; then
    usage
fi

echo "s = ${s}"
echo "p = ${p}"

Example runs:

$ ./myscript.sh
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -h
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s "" -p ""
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s 10 -p foo
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s 45 -p foo
s = 45
p = foo

$ ./myscript.sh -s 90 -p bar
s = 90
p = bar

[Aug 28, 2019] How to Replace Spaces in Filenames with Underscores on the Linux Shell

You probably would be better off with -nv options for mv
Aug 28, 2019 | vitux.com
$ for file in *; do mv "$file" `echo $file | tr ' ' '_'` ; done
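
Following the note above about -nv, a sketch of a slightly safer variant: bash parameter expansion avoids the word splitting of the unquoted backticks, -n refuses to overwrite an existing target, and -v reports each rename (file names are whatever the glob matches):

for file in *\ *; do                 # only names that actually contain a space
  mv -nv -- "$file" "${file// /_}"   # replace every space with an underscore
done
# (if nothing matches, the literal pattern is passed through; shopt -s nullglob avoids that)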

[Aug 28, 2019] 9 Quick 'mv' Command Practical Examples in Linux

Aug 28, 2019 | www.linuxbuzz.com

Example:5) Do not overwrite existing file at destination (mv -n)

Use the '-n' option in the mv command if we don't want to overwrite an existing file at the destination.

[linuxbuzz@web ~]$ ls -l tools.txt /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 24 09:59 /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 24 10:10 tools.txt
[linuxbuzz@web ~]$

As we can see, tools.txt is present in our current working directory and in /tmp/sysadmin; use the below mv command to avoid overwriting at the destination:

[linuxbuzz@web ~]$ mv -n tools.txt /tmp/sysadmin/tools.txt
[linuxbuzz@web ~]$
Example:6) Forcefully overwrite write protected file at destination (mv -f)

Use the '-f' option in the mv command to forcefully overwrite a write-protected file at the destination. Let's assume we have a file named "bands.txt" in our present working directory and in /tmp/sysadmin.

[linuxbuzz@web ~]$ ls -l bands.txt /tmp/sysadmin/bands.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:24 bands.txt
-r--r--r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:24 /tmp/sysadmin/bands.txt
[linuxbuzz@web ~]$

As we can see, under /tmp/sysadmin bands.txt is a write-protected file.

Without -f option

[linuxbuzz@web ~]$ mv bands.txt /tmp/sysadmin/bands.txt

mv: try to overwrite '/tmp/sysadmin/bands.txt', overriding mode 0444 (r--r--r--)?

To forcefully overwrite, use below mv command,

[linuxbuzz@web ~]$ mv -f bands.txt /tmp/sysadmin/bands.txt
[linuxbuzz@web ~]$
Example:7) Verbose output of mv command (mv -v)

Use the '-v' option in the mv command to print verbose output; an example is shown below:

[linuxbuzz@web ~]$ mv -v  buzz51.txt buzz52.txt buzz53.txt buzz54.txt /tmp/sysadmin/
'buzz51.txt' -> '/tmp/sysadmin/buzz51.txt'
'buzz52.txt' -> '/tmp/sysadmin/buzz52.txt'
'buzz53.txt' -> '/tmp/sysadmin/buzz53.txt'
'buzz54.txt' -> '/tmp/sysadmin/buzz54.txt'
[linuxbuzz@web ~]$
Example:8) Create backup at destination while using mv command (mv -b)

Use the '-b' option to take a backup of a file at the destination while performing the mv command; the backup file at the destination will be created with a tilde character appended to it. An example is shown below:

[linuxbuzz@web ~]$ mv -b buzz55.txt /tmp/sysadmin/buzz55.txt
[linuxbuzz@web ~]$ ls -l /tmp/sysadmin/buzz55.txt*
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:47 /tmp/sysadmin/buzz55.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:37 /tmp/sysadmin/buzz55.txt~
[linuxbuzz@web ~]$
Example:9) Move file only when its newer than destination (mv -u)

There are some scenarios where we have the same file at the source and the destination and we want to move the file only when the source file is newer than the one at the destination; to accomplish this, use the -u option in the mv command. An example is shown below.

[linuxbuzz@web ~]$ ls -l tools.txt /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 55 Aug 25 00:55 /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 87 Aug 25 00:57 tools.txt
[linuxbuzz@web ~]$

Execute the below mv command to move the file only when it's newer than the destination:

[linuxbuzz@web ~]$ mv -u tools.txt /tmp/sysadmin/tools.txt
[linuxbuzz@web ~]$

That's all from this article; we have covered all the important and basic examples of the mv command.

Hopefully the above examples will help you learn more about the mv command. Write your feedback and suggestions to us.

[Aug 28, 2019] Echo Command in Linux with Examples

Notable quotes:
"... The -e parameter is used for the interpretation of backslashes ..."
"... The -n option is used for omitting trailing newline. ..."
Aug 28, 2019 | linoxide.com

The -e parameter is used for the interpretation of backslashes

... ... ...

To create a new line after each word in a string, use the -e option with the \n escape sequence as shown:
$ echo -e "Linux \nis \nan \nopensource \noperating \nsystem"

... ... ...

Omit echoing trailing newline

The -n option is used for omitting the trailing newline. This is shown in the example below:

$ echo -n "Linux is an opensource operating system"

Sample Output

Linux is an opensource operating systemjames@buster:/$

[Aug 20, 2019] Fixing Midnight Commander's unreadable dropdown menus

Apr 24, 2011 | tech.iprock.com
Important This is an edited version of a post that originally appeared on a blog called The Michigan Telephone Blog, which was written by a friend before he decided to stop blogging. It is reposted with his permission. Comments dated before the year 2013 were originally posted to his blog.

If you've installed Midnight Commander and haven't changed the default colors, when you try to access a dropdown menu you may see this:

Midnight Commander -- Original Colors

REALLY hard to read that menu, isn't it? Wouldn't you rather see this?

Midnight Commander -- Changed Colors

To fix the unreadable menus, just make sure Midnight Commander is not open, then use any text editor (such as nano) to open ~/.mc/ini:

nano ~/.mc/ini

Assuming that there is no existing [Colors] section in the file, just add this at the bottom of the file (if the second line exceeds the blog column width, just use copy and paste to get it all):

[Colors] base_color=default,default:menu=black,cyan:menuhot=brightmagenta,cyan:menusel=white,blue:menuhotsel=brightmagenta,blue

If there is an existing [Colors] section, you can try tweaking it using the parameters shown above. If you have a very recent version of Midnight Commander (which you probably will have if you are running Ubuntu), then instead of menu= you'll need to use menunormal= , as shown here:

[Colors] base_color=default,default:menunormal=black,cyan:menuhot=brightmagenta,cyan:menusel=white,blue:menuhotsel=brightmagenta,blue

Note that for some reason the base_color parameter must appear, or the other items are ignored. Save the change, exit the editor, and open Midnight Commander. If you then close Midnight Commander, you may find that the position of the [Colors] section has moved within the ini file -- apparently Midnight Commander rewrites the file when you close it -- but if you don't like the changes you can remove the [Colors] section to reverse the change.

I figured out how to do this after reading this blog post:
Ajnasz Blog – Midnight Commander colors and themes
Another source of information is:
Zagura's blog – Midnight Commander Color Themes

Related Posts
  • [Aug 20, 2019] Midnight Commander, using date in User menu

    Dec 31, 2013 | unix.stackexchange.com

    user2013619 ,Dec 31, 2013 at 0:43

    I would like to use MC (midnight commander) to compress the selected dir with date in its name, e.g: dirname_20131231.tar.gz

    The command in the User menu is :

    tar -czf dirname_`date '+%Y%m%d'`.tar.gz %d

    The archive is missing because %m and %d have another meaning in MC. I made an alias for the date, but it also doesn't work.

    Does anybody solved this problem ever?

    John1024 ,Dec 31, 2013 at 1:06

    To escape the percent signs, double them:
    tar -czf dirname_$(date '+%%Y%%m%%d').tar.gz %d

    The above would compress the current directory (%d) to a file also in the current directory. If you want to compress the directory pointed to by the cursor rather than the current directory, use %f instead:

    tar -czf %f_$(date '+%%Y%%m%%d').tar.gz %f
    

    mc handles escaping of special characters so there is no need to put %f in quotes.

    By the way, midnight commander's special treatment of percent signs occurs not just in the user menu file but also at the command line. This is an issue when using shell commands with constructs like ${var%.c} . At the command line, the same as in the user menu file, percent signs can be escaped by doubling them.
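
    For example, to use the shell construct ${f%.c} on mc's own command line (assuming a shell variable f that holds a C file name), the percent sign is typed twice:

    cp "$f" "${f%%.c}.bak"   # mc collapses %% to %, so the shell actually runs: cp "$f" "${f%.c}.bak"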

    [Aug 19, 2019] Moreutils - A Collection Of More Useful Unix Utilities - OSTechNix

    Parallel is a really useful utility. RPM is installable from EPEL.
    Aug 19, 2019 | www.ostechnix.com

    ... ... ...

    On RHEL , CentOS , Scientific Linux :
    $ sudo yum install epel-release
    
    $ sudo yum install moreutils
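
    Once installed, two of the bundled tools show why the collection is handy: sponge lets a pipeline safely rewrite its own input file, and moreutils' parallel (not GNU parallel; as far as I recall its syntax, the command comes before a -- separator) runs a command over several arguments at once. The file names below are illustrative:

    # sponge: rewrite a file in place without an explicit temporary file
    grep -v 'DEBUG' app.log | sponge app.log

    # moreutils parallel: gzip several logs, at most 4 jobs at a time
    parallel -j 4 gzip -- access.log.1 access.log.2 access.log.3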
    

    [Aug 10, 2019] LinuxQuestions.org - [SOLVED] Midnight Commander Help

    Aug 10, 2019 | www.linuxquestions.org
    CrazyCatLover 12-22-2014 02:40 AM

    Midnight Commander Help
    Hi,

    I need to know how to check the current colour for mc and how to change it.
    I googled it and they talk about changing some initial file /.mc/ini which I have no idea about (no one ever gives the full filename) and I can't find it at all. Wasted an hour of my life. I just need the simplest way to change it, not another 10+ steps to change a stupid colour.


    gengisdave 12-22-2014 03:22 AM

    in some distros (mine, e.g.) it is located in ~/.local/mc/ini

    sycamorex 12-22-2014 03:24 AM

    This is the full filename. Mind you on my distro it's in ~/.config/mc/ini
    Find / Create this file and add the following (obviously change the colour values):

    The syntax is: variable=foreground_colour,background_colour
    Code:


    [Colors]
    base_color=lightgray,green:normal=green,default:selected=white,gray:marked=yellow,default:markselect=yellow,gray:directory=blue,default:executable=brightgreen,default:link=cyan,default:device=brightmagenta,default:special=lightgray,default:errors=red,default:reverse=green,default:gauge=green,default:input=white,gray:dnormal=green,gray:dfocus=brightgreen,gray:dhotnormal=cyan,gray:dhotfocus=brightcyan,gray:menu=green,default:menuhot=cyan,default:menusel=green,gray:menuhotsel=cyan,default:helpnormal=cyan,default:editnormal=green,default:editbold=blue,default:editmarked=gray,blue:stalelink=red,default


    Also, have a look at this:
    http://blog.mybox.ro/2010/05/10/skin...ght-commander/

    [Aug 10, 2019] Plug-and-Pray Editing Midnight Commander's color scheme

    Aug 10, 2019 | plug-and-pray.blogspot.com

    Editing Midnight Commander's color scheme

    In a previous post I was sort of laying out a "formula" for how to transform your default Midnight Commander color scheme into a transparent skin, without talking too much about how you can change the other colors.

    To my great shame, I didn't pay too much attention to this blog or to the comments asking for further advice. I found Mateus' comment rather late (just now!) and decided to dig further, in order to find out how exactly to deal with more refined color changes, while still keeping the transparent background (both in Midnight Commander and in its editor).

    So the first thing to know is which are the colors that Midnight Commander supports; the available colors are:

    black
    gray
    lightgray
    white
    red
    brightred
    green
    brightgreen
    blue
    brightblue
    magenta
    brightmagenta
    cyan
    brightcyan
    brown
    yellow
    default

    The " default " color is the one giving out the nice transparency.

    Now, there are certain "components" in Midnight Commander's display that can have their colors altered. Here they are:

    base_color, normal, selected, marked, markselect, errors, menu, reverse, dnormal, dfocus, dhotnormal, dhotfocus, viewunderline, menuhot, menusel, menuhotsel, helpnormal, helpitalic, helpbold, helplink, helpslink, gauge, input, directory, executable, link, stalelink, device, core, special, editnormal, editbold, editmarked, errdhotnormal, errdhotfocus

    Each and every one of these "components" can have its own colors set according to the user's wish. Each component is assigned a color pair and must be followed by a colon (':') in order to separate it from the color pair of the next component. Here's what this basic syntax looks like:

    component=foreground_color,background_color:

    When you start modifying the color scheme in your Midnight Commander configuration file (located at ~/.mc/ini ), you just have to add a section called " [Colors] " and proceed with enumerating the color pairs. So you'd have something like this:

    # the rest of your ~/.mc/ini file

    [Colors]
    component1=foreground_color1,background_color1:...:componentN= foreground_colorN,background_colorN

    For increased readability, I will "truncate" that long line, adding a backslash ('\') to indicate that in fact what follows on the next line should be adjacent to the text on the previous line. This being said, the [Colors] section could look like this:

    # the rest of your ~/.mc/ini file

    [Colors]
    component1=foreground_color1,background_color1:\
    component2=foreground_color2,background_color2:\
    ...
    componentN=foreground_colorN,background_colorN

    Now that you've gotten the hang of this, let's see what the [Colors] section looks like in the default Midnight Commander color scheme (you know, the "ugly" one, with blue and dull cyan):

    IMPORTANT NOTE: For visual impact's sake and due to Blogspot breaking long lines, I wrote each color pair on a single row, followed by a backslash ('\'). Please note that this does NOT work in the ~/.mc/ini file, so the final [Colors] section in your Midnight Commander configuration file MUST be a SINGLE line with no spaces and with each color pair separated from the next one by a colon (':').

    # the rest of your ~/.mc/ini file

    [Colors]
    base_color=lightgray,blue:\
    normal=lightgray,blue:\
    selected=black,cyan:\
    marked=yellow,blue:\
    markselect=yellow,cyan:\
    errors=white,red:\
    menu=white,cyan:\
    reverse=black,lightgray:\
    dnormal=black,lightgray:\
    dfocus=black,cyan:\
    dhotnormal=blue,lightgray:\
    dhotfocus=blue,cyan:\
    viewunderline=brightred,blue:\
    menuhot=yellow,cyan:\
    menusel=white,black:\
    menuhotsel=yellow,black:\
    helpnormal=black,lightgray:\
    helpitalic=red,lightgray:\
    helpbold=blue,lightgray:\
    helplink=black,cyan:\
    helpslink=yellow,blue:\
    gauge=white,black:\
    input=black,cyan:\
    directory=white,blue:\
    executable=brightgreen,blue:\
    link=lightgray,blue:\
    stalelink=brightred,blue:\
    device=brightmagenta,blue:\
    core=red,blue:\
    special=black,blue:\
    editnormal=lightgray,blue:\
    editbold=yellow,blue:\
    editmarked=black,cyan:\
    errdhotnormal=yellow,red:\
    errdhotfocus=yellow,lightgray

    Now let's see. What you want to change first of all is most of the backgrounds of these "components", so that the display ends up with a neat-looking transparent background. You might want to make a few changes to these color pairs by replacing the background color "blue" with "default". After doing these changes, your [Colors] section will look a bit like this:

    # the rest of your ~/.mc/ini file

    [Colors]
    base_color=lightgray,default:\
    normal=lightgray,default:\
    selected=black,cyan:\
    marked=yellow,default:\
    markselect=yellow,cyan:\
    errors=white,red:\
    menu=white,cyan:\
    reverse=black,lightgray:\
    dnormal=black,lightgray:\
    dfocus=black,cyan:\
    dhotnormal=blue,lightgray:\
    dhotfocus=blue,cyan:\
    viewunderline=brightred,default:\
    menuhot=yellow,cyan:\
    menusel=white,black:\
    menuhotsel=yellow,black:\
    helpnormal=black,lightgray:\
    helpitalic=red,lightgray:\
    helpbold=blue,lightgray:\
    helplink=black,cyan:\
    helpslink=yellow,default:\
    gauge=white,black:\
    input=black,cyan:\
    directory=white,default:\
    executable=brightgreen,default:\
    link=lightgray,default:\
    stalelink=brightred,default:\
    device=brightmagenta,default:\
    core=red,default:\
    special=black,default:\
    editnormal=lightgray,default:\
    editbold=yellow,default:\
    editmarked=black,cyan:\
    errdhotnormal=yellow,red:\
    errdhotfocus=yellow,lightgray

    Now you've got the basic "Midnight Commander transparent scheme" that was the result of this post .

    Proceeding to Mateus' question, regarding how to change the rest of the colors now, it's about the same as before. What he didn't like there (and as a matter of fact I don't quite like it, either) is the dull cyan that's still seen in the following places:

    1. the bottom line (the one displaying the F1...F10 function keys);
    2. the line that signifies the current selection, the "prompt" which shows you on which file/directory you're "on" at a given moment;
    3. the uppermost line (the "menu" line);
    4. the menus themselves, once you open them.
    To "fix" issues 1, 2, and 3 it is sufficient to alter the value of the " selected " parameter. Notice how it is initially

    selected=black,cyan:\

    My personal choice is to replace the background cyan, which I don't really like, with green. To do this, I'll change this color pair to

    selected=black,green:\

    You can, of course, change the foreground color as well. For me, it's alright to keep the foreground (the text) "black". You can change it to whatever suits your taste.

    To "fix" issue number 4 in the list above, you need to change the " menu " parameter. To get it transparent, just change the "cyan" background to "default". Make other adjustments as you see fit. In other words, change

    menu=white,cyan:\

    into, for instance,

    menu=lightgray,default:\

    However, there are a few "leftovers" from the default color scheme.

    One of them is the parameter regarding the hotkeys in the menus (the "underlined" character on most of the menu options, showing you what key you can press in order to access that option faster than by moving to it with the arrow keys). This color pair is called " menuhot ". I changed it from

    menuhot=yellow,cyan:\

    into

    menuhot=yellow,default:\

    Another thing which might bother you is the color of the line in the panel you're in when you've "selected all" files (when you've pressed the "*" key). This parameter is called " markselect ". I changed it from

    markselect=yellow,cyan:\

    into

    markselect=white,green:\

    The color pair of the selected buttons in dialogs is called " dfocus ". I changed mine from

    dfocus=black,cyan:\

    into

    dfocus=black,green:\

    In the "focused" buttons or options, the underlined character is called " dhotfocus ". I changed mine from

    dhotfocus=blue,cyan:\

    into

    dhotfocus=brightgreen,green:\

    since the background color was already green, after I modified the " dfocus " color pair.

    The other buttons or options in the dialogs which have hotkeys assigned to them, but which are not "focused" (the buttons/options that you're not located on at a given moment) are still displayed in blue on a light gray background. This color pair is referred to as " dhotnormal ". Since the blue looks a bit odd there, I changed

    dhotnormal=blue,lightgray:\

    into

    dhotnormal=brightgreen,default:\

    Well, this is nice, in window titles and on normal (unfocused) hotkeys I get the transparent background. The problem now is that the rest of the dialog window is still light gray. To change this (to make the window transparent as well), you only need to alter the " dnormal " color pair, such as changing it from

    dnormal=black,lightgray:\

    into

    dnormal=white,default:\

    You may notice that the input fields stay cyan, as well; you find these fields in quite a lot of dialog boxes. To alter this, I changed

    input=black,cyan:\

    into

    input=black,green:\

    One thing which I consider useful is to have symbolic links displayed in bright cyan (as in the colored listings in the terminal). So I just changed

    link=lightgray,default:\

    into

    link=brightcyan,default:\

    Now, regarding the rest of the color pairs, I don't really know what they do. However, if at some point after using Midnight Commander more with this new, neat, transparent/green color scheme you'll notice unwanted leftovers, you can try out other changes in the color pairs values, one at a time, until you determine the troublesome one.

    After operating the changes above, my [Colors] section in ~/.mc/ini now looks like this:

    [Colors]
    base_color=lightgray,default:\
    normal=lightgray,default:\
    selected=black,green:\
    marked=yellow,default:\
    markselect=white,green:\
    errors=white,red:\
    menu=lightgray,default:\
    reverse=black,lightgray:\
    dnormal=white,default:\
    dfocus=black,green:\
    dhotnormal=brightgreen,default:\
    dhotfocus=brightgreen,green:\
    viewunderline=brightred,default:\
    menuhot=yellow,default:\
    menusel=white,black:\
    menuhotsel=yellow,black:\
    helpnormal=black,lightgray:\
    helpitalic=red,lightgray:\
    helpbold=blue,lightgray:\
    helplink=black,cyan:\
    helpslink=yellow,default:\
    gauge=white,black:\
    input=black,green:\
    directory=white,default:\
    executable=brightgreen,default:\
    link=brightcyan,default:\
    stalelink=brightred,default:\
    device=brightmagenta,default:\
    core=red,default:\
    special=black,default:\
    editnormal=lightgray,default:\
    editbold=yellow,default:\
    editmarked=black,cyan:\
    errdhotnormal=yellow,red:\
    errdhotfocus=yellow,lightgray

    I need to direct you to the " IMPORTANT NOTE " above. The final [Colors] section above is written like this - one pair on each row, followed by a backslash - for clarity's sake. The actual final [Colors] section in your ~/.mc/ini file will have to be a one-liner, with no blanks and no backslashes. So it will probably look similar to this:

    base_color=lightgray,default:normal=lightgray,default:selected=black,green:marked=yellow,default:markselect=white,green:errors=white,red:menu=lightgray,default:reverse=black,lightgray:dnormal=white,default:dfocus=black,green:dhotnormal=brightgreen,default:dhotfocus=brightgreen,green:viewunderline=brightred,default:menuhot=yellow,default:menusel=white,black:menuhotsel=yellow,black:helpnormal=black,lightgray:helpitalic=red,lightgray:helpbold=blue,lightgray:helplink=black,cyan:helpslink=yellow,default:gauge=white,black:input=black,green:directory=white,default:executable=brightgreen,default:link=brightcyan,default:stalelink=brightred,default:device=brightmagenta,default:core=red,default:special=black,default:editnormal=lightgray,default:editbold=yellow,default:editmarked=black,cyan:errdhotnormal=yellow,red:errdhotfocus=yellow,lightgray

    Now, the next time you start mc , the new color scheme will take effect.

    As a bonus, here's a picture of what my Midnight Commander looks like with this new "skin" on:

    Posted by Alexandra at 1:54 PM Labels: color scheme , mc , transparency

    [Aug 10, 2019] Midnight Commander color scheme ~ centosvn

    Aug 10, 2019 | centos-vn.blogspot.com

    Midnight Commander (or "mc") can have transparent panels instead of the ugly, dull default blue. So can "mcedit", its text editor.

    Here's how to do it. Edit the file ~/.mc/ini and add at the end the following:

    [Colors]
    base_color=normal=,default:selected=,:marked=,default:\
    markselect=,:menu=,:menuhot=,:menusel=,:\
    menuhotsel=,:dnormal=,:dfocus=,:dhotnormal=,:dhotfocus=,:\
    input=,:reverse=,:executable=,default:directory=,default:\
    link=,default:device=,default:special=,:core=,:helpnormal=,:\
    helplink=,:helpslink=,:editnormal=,default:

    Note #1: In the above 'code' block, there is only one line below [Colors]. I truncated the line with the backslash because of blogspot rendering issues. You just write all that on one single line, without the "\" (backslashes).

    Note #2: At the end of this line, the "editnormal=,default:" option means that mcedit will have a transparent background in your console as well.


    [Aug 10, 2019] Midnight Commander colors and themes

    Aug 10, 2019 | ajnasz.hu

    Koszti Lajos: Midnight Commander is the most popular file manager on Unix-like systems. It's fast and it has all the features you need. But it's only blue, and we know that everyone loves eye candy; everyone likes customizing his/her own desktop. But is there any way to customize mc?
    Yes, and I'll try to show you how you can create your own theme.

    You can change the Midnight Commander colors if you edit the ~/.mc/ini file, where you have to add a new section, named [Colors] . You should define the new colors in this section, for example:

    [Colors] base_color=lightgray,green:normal=green,default:selected=white,gray ...

    As you see, it has a simple syntax:

    <keyword>=<foregroundcolor>,<backgroundcolor>:<keyword>= ...

    The colors are optional, so you can use this:

    [Colors] base_color=lightgray,green:normal=green:selected=,gray ...

    It's not exactly the same as the first version!

    Fine, you can change some colors of the file manager, but which are the keywords? These are:

    And which are the colors? I don't know all, but here are some of them:
    white, gray, blue, green, yellow, magenta, cyan, red, brown, brightgreen, brightblue, brightmagenta, brightcyan, brightred, default

    Here is the config, what I use:

    [Colors] base_color=lightgray,green:normal=green,default:selected=white,gray:marked=yellow,default:markselect=yellow,gray:directory=blue,default:executable=brightgreen,default:link=cyan,default:device=brightmagenta,default:special=lightgray,default:errors=red,default:reverse=green,default:gauge=green,default:input=white,gray:dnormal=green,gray:dfocus=brightgreen,gray:dhotnormal=cyan,gray:dhotfocus=brightcyan,gray:menu=green,default:menuhot=cyan,default:menusel=green,gray:menuhotsel=cyan,default:helpnormal=cyan,default:editnormal=green,default:editbold=blue,default:editmarked=gray,blue:stalelink=red,default

    Screenshot about my redesigned Midnight Commander

    On the screenshot you can see that the directory color is blue, the files are green, the executable files are brightgreen, and the selected line is white on a gray background.

    And another one, which I've been using recently:

    [Colors] base_color=lightgray,blue:normal=blue,default:selected=white,brightblue:marked=yellow,default:markselect=yellow,gray:directory=brightblue,default:executable=brightgreen,default:link=cyan,default:device=brightmagenta,default:special=lightgray,default:errors=red,default:reverse=green,default:gauge=green,default:input=white,gray:dnormal=green,gray:dfocus=brightgreen,gray:dhotnormal=cyan,gray:dhotfocus=brightcyan,gray:menu=green,default:menuhot=cyan,default:menusel=green,gray:menuhotsel=cyan,default:helpnormal=cyan,default:editnormal=green,default:editbold=blue,default:editmarked=gray,blue:stalelink=red,default

    Screenshot about my redesigned Midnight Commander

    And here is a small shell script, which will help for you to test your new theme:

    #!/bin/sh
    mc --colors normal=green,default:selected=brightmagenta,gray:marked=yellow,default:markselect=yellow,gray:directory=blue,default:executable=brightgreen,default:link=cyan,default:device=brightmagenta,default:special=lightgray,default:errors=red,default:reverse=green,default:gauge=green,default:input=white,gray:dnormal=green,gray:dfocus=brightgreen,gray:dhotnormal=cyan,gray:dhotfocus=brightcyan,gray:menu=green,default:menuhot=cyan,default:menusel=green,gray:menuhotsel=cyan,default:helpnormal=cyan,default:editnormal=green,default:editbold=blue,default:editmarked=gray,blue:stalelink=red,default

    Download the shell script to make your own mc theme

    Save it as mccolortest.sh, make it executable with the chmod +x mccolortest.sh command, and run it with the ./mccolortest.sh command. If you want to change a color, just edit this file. When you're done, copy the colors and paste them below the [Colors] section in ~/.mc/ini. If it doesn't exist, create it yourself.

    For more information of the mc redesigning check its manual page .


    Mauricio, 2 months ago

    Awesome!
    Thank you for your clear explanation.

    Anonymous, 6 years ago

    Thank you for theme. I tried your last theme and it is exactly what I was searching for.

    Anonymous, 6 years ago

    Also, in 4.8.3 here, I copied the first example scheme line and my colors are different. I can't even set the background of the select bar to gray (or "grey"): it gets replaced with black. Also, the panel headings remain blue here, unlike the (first) screenshot, and I can see no corresponding tag in the line anyway.

    Good intro, regardless. Someone should post a pointer to a more up-to-date one, though, as Google seems to find this old thread within the top few hits. Awesome! ;)

    --lunakid

    Ajnasz (in reply to Anonymous), 6 years ago

    The colors depend on the color settings of your terminal. I no longer have the settings I used when I posted this article, but here is my current one. If I'm right, it's similar. Put it into your .Xdefaults:

    *background: #000000
    *foreground: #EEEEEC
    
    ! Default
    ! 0: black
    *color0: #1C1C1C
    *color8: #333333
    ! 1: red
    *color1: #C14242
    *color9: #EF2929
    ! 2: green
    *color2: #6AA037
    *color10: #9DCF70
    ! 3: yellow
    *color3: #CFAB2F
    *color11: #FCDA4F
    ! 4: blue
    *color4: #2D578A
    *color12: #729FCF
    ! 5: magenta
    *color5: #A85EB4
    *color13: #AD7FA8
    ! 6: cyan
    *color6: #2F8D8F
    *color14: #34E2E2
    ! 7: white
    *color7: #D3D7CF
    *color15: #EEEEEC
    
    Anonymous, 7 years ago

    The ~/.mc dir is ignored now. It's ~/.config/mc these days ;)

    Anonymous, 10 years ago

    Midnight Commander supports skins starting from 4.7.0-pre3 version. You can download a skin with black as a main color from here:
    http://zool.in.ua/software/bluemoon/

    Anonymous, 10 years ago

    I am using MC on my router ASUS WL-500GP and I am developing PHP scripts on it. But as I see it, MC in OpenWrt (Kamikaze 8.09) does not use syntax highlighting and it is very uncomfortable.
    Do you know how I could turn it on? I have already downloaded the php.syntax file and put it into the /usr/share/syntax dir but it does not seem to work. Is it possible that some support is not compiled into my version, or that the syntax file must be compiled to another format?
    Br Zé.

    Anonymous (in reply to Anonymous), 10 years ago

    I found it. In ~/.mc/cedit/Syntax there must be this:
    file ..\*\\.(php|PHP)$ PHP\sFile
    include php.syntax

    and the php.syntax file must be placed in the same dir (copied out from a source distribution).

    Anonymous, 10 years ago

    Hey Ajnasz, your color theme is very nice; it keeps my eyes on my PC longer than usual. Well, I don't have much time to explore these tricks further. I think your taste is cool. If you have any other kind of theme, I should try it. :-)

    Regards,

    Dedi

    Anonymous, 10 years ago

    Any chance to change the color of the files by extension?

    Anonymous (in reply to Anonymous), 10 years ago

    Midnight Commander supports this starting from 4.7.0-pre3 version.

    Ajnasz (in reply to Anonymous), 10 years ago

    I didn't find anything about it. By the way, since the extension doesn't determine the file type on Unix-like systems, it wouldn't make much sense to do it.

    Anonymous (in reply to Ajnasz), 9 years ago

    Don't be silly. Mp3 is just music, txt is text, doc is a document. The only thing which is not exactly determinable is the executables, but whatever, they have the +x flag.

    Anonymous, 11 years ago

    Also, you should know that most modern terminal applications allow you to redefine the exact shade of those 16 colors.

    Some of them (such as the Gnome or KDE terminals) may have a place under their preferences where you can redefine the colors.

    Older terminals, such as aterm, use ~/.Xdefaults for this. You can edit that file and add lines like this: "aterm*color1: OrangeRed" (without the quotes). What I've done with that is tell aterm that the "color1" (which was red) should now be "OrangeRed". See /usr/share/X11/rgb.txt for valid color names. You can use *color0 through *color15. So when you'll say "red" in MC's ini file, and if you use aterm, it will get replaced by color1 in ~/.Xdefaults and changed to OrangeRed. (Sorry, I don't remember the mappings between the names used by MC and 0-15 in Xdefaults by heart.)

    Anonymous, 12 years ago

    On the same subject:
    http://www.zagura.ro/index....

    [Jul 29, 2019] A Guide to Kill, Pkill and Killall Commands to Terminate a Process in Linux

    Jul 26, 2019 | www.tecmint.com
    ... ... ...

    How about killing a process using process name

    You must be sure of the process name before killing it; entering a wrong process name may cause trouble.

    # pkill mysqld
    
    Kill more than one process at a time.
    # kill PID1 PID2 PID3
    
    or
    
    # kill -9 PID1 PID2 PID3
    
    or
    
    # kill -SIGKILL PID1 PID2 PID3
    
    What if a process has many instances and a number of child processes? For that we have the 'killall' command. This is the only command of this family which takes a process name as its argument in place of a process number.

    Syntax:

    # killall [signal or option] Process Name
    

    To kill all mysql instances along with child processes, use the command as follows:

    # killall mysqld
    

    You can always verify whether the process is still running using any of the below commands.

    # service mysql status
    # pgrep mysql
    # ps -aux | grep mysql
    

    That's all for now from my side. I will soon be here again with another interesting and informative topic. Till then, stay tuned, connected to Tecmint, and healthy. Don't forget to give your valuable feedback in the comment section.

    [Jul 29, 2019] Locate Command in Linux

    Jul 25, 2019 | linuxize.com

    ... ... ...

    The locate command also accepts patterns containing globbing characters such as the wildcard character * . When the pattern contains no globbing characters the command searches for *PATTERN* , that's why in the previous example all files containing the search pattern in their names were displayed.

    The wildcard is a symbol used to represent zero, one or more characters. For example, to search for all .md files on the system you would use:

    locate *.md
    

    To limit the search results use the -n option followed by the number of results you want to be displayed. For example, the following command will search for all .py files and display only 10 results:

    locate -n 10 *.py
    

    By default, locate performs case-sensitive searches. The -i ( --ignore-case ) option tells locate to ignore case and run a case-insensitive search.

    locate -i readme.md
    
    /home/linuxize/p1/readme.md
    /home/linuxize/p2/README.md
    /home/linuxize/p3/ReadMe.md
    

    To display the count of all matching entries, use the -c ( --count ) option. The following command would return the number of all files containing .bashrc in their names:

    locate -c .bashrc
    
    6
    

    By default, locate doesn't check whether the found files still exist on the file system. If you deleted a file after the latest database update, it will still be included in the search results if it matches the search pattern.

    To display only the names of the files that exist at the time locate is run use the -e ( --existing ) option. For example, the following would return only the existing .json files:

    locate -e *.json
    

    If you need to run a more complex search you can use the -r ( --regexp ) option which allows you to search using a basic regexp instead of patterns. This option can be specified multiple times.
    For example, to search for all .mp4 and .avi files on your system and ignore case you would run:

    locate --regex -i "(\.mp4|\.avi)"
    

    [Jul 28, 2019] command line - How do I extract a specific file from a tar archive - Ask Ubuntu

    Jul 28, 2019 | askubuntu.com

    CMCDragonkai, Jun 3, 2016 at 13:04

    1. Using the Command-line tar

    Yes, just give the full stored path of the file after the tarball name.

    Example: suppose you want file etc/apt/sources.list from etc.tar :

    tar -xf etc.tar etc/apt/sources.list

    Will extract sources.list and create directories etc/apt under the current directory.

    2. Extract it with the Archive Manager

    Open the tar in Archive Manager from Nautilus, go down into the folder hierarchy to find the file you need, and extract it.

    3. Using Nautilus/Archive-Mounter

    Right-click the tar in Nautilus, and select Open with ArchiveMounter.

    The tar will now appear similar to a removable drive on the left, and you can explore/navigate it like a normal drive and drag/copy/paste any file(s) you need to any destination.

    [Jul 28, 2019] iso - midnight commander rules for accessing archives through VFS - Unix Linux Stack Exchange

    Jul 28, 2019 | unix.stackexchange.com


    Midnight Commander uses a virtual filesystem (VFS) for displaying files, such as the contents of a .tar.gz archive or a .iso image. This is configured in mc.ext with rules such as this one (Open is Enter, View is F3):
    regex/\.([iI][sS][oO])$
        Open=%cd %p/iso9660://
        View=%view{ascii} isoinfo -d -i %f
    

    When I press Enter on an .iso file, mc will open the .iso and I can browse individual files. This is very useful.

    Now my question: I also have files which are disk images, e.g. created with pv /dev/sda1 > sda1.img

    I would like mc to "browse" the files inside these images in the same fashion as .iso .

    Is this possible? What would such a rule look like?

    [Jul 28, 2019] Use Midnight Commander like a pro

    Jul 28, 2019 | klimer.eu

    May 1, 2015

    If you've used an *nix system, at some point you've stumbled upon Midnight Commander , a file manager based on the venerable Norton Commander. You're probably familiar with the basic operations ( F5 for copying, F6 for moving, F8 for deleting, etc.) and how to switch panels (ummm, the Tab key). But mc offers so much more than that. This article aims to show all the useful (YMMV) shortcuts and functionalities that are often overlooked. Most of them can be accessed using the menu ( F9 ), but who has the time to do that?

    Before we get started, let's establish some facts. This article was written and tested on the following software:

    Oh, and make sure you're running a modern and UTF-8 friendly terminal - for example, rxvt-unicode.

    Hold your horses

    There's actually one thing I'd recommend doing before you run mc . mc has the ability to exit to its current directory. Meaning, you can navigate the filesystem using mc (sometimes it's easier than cd ing into that one directory buried deep down somewhere ) and when you quit mc ( F10 ), your shell will automagically cd to that directory. This is done thanks to the mc-wrapper script that should be bundled with your installation of mc . The exact location is dependent on your distribution - in mine (Gentoo) it's /usr/libexec/mc/ , in Ubuntu supposedly it's in /usr/share/mc/bin/ . Once found, modify your ~/.bashrc :

    alias mc='. /usr/libexec/mc/mc-wrapper.sh'
    

    Restart your shell, launch mc , change to another directory, exit and your shell should be set to that new directory.

    Virtual File System (VFS)

    mc has a concept known as Virtual File System. Try "entering" an archive ( *.tar.gz , *.rpm or even *.jar ) - you'll be able to browse the contents of the archive like a normal folder, without unpacking it first. You extract selected files from the archive by just copying them to the other panel. Bonus points: try "entering" a *.patch file.

    This concept is even more powerful when you realize that remote locations can be viewed the same way. A quick way to browse an FTP location is to just cd to it: cd ftp://mirrors.tera-byte.com/pub/gentoo (first Gentoo FTP mirror I found). You'll be able to interact with files as you normally do. To exit this remote location, cd to a local directory. Just typing cd will suffice as it will take you to your home directory.

    VFS works for SFTP and Samba shares too. Check the manpages for more information on how to specify user/pass, etc.
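
    For orientation, the remote VFS paths look roughly like this (a sketch; user, host, server and share are placeholders, and the exact prefixes depend on your mc version and build options):

    cd sftp://user@host/var/log
    cd smb://server/share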


    Well, that was a lot to take in. Of course, this list is not complete (that's what man mc is there for), but I've selected the commands and functionalities that are the most useful to me . Embrace the ones you find useful, forget the rest and learn about the other ones I've missed!

    [Jul 28, 2019] Bartosz Kosarzycki's blog Midnight Commander how to compress a file-directory; Make a tar archive with midnight commander

    Jul 28, 2019 | kosiara87.blogspot.com

    Midnight Commander how to compress a file/directory; Make a tar archive with midnight commander

    To compress a file in Midnight Commander (e.g. to make a tar.gz archive), navigate to the directory you want to pack and press 'F2'. This will bring up the 'User menu'. Choose the option 'Compress the current subdirectory'. This will compress the WHOLE directory you're currently in - not the highlighted directory.
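
    For reference, the rough shell equivalent of that menu entry, run from the parent directory (mydir is a placeholder), would be:

    tar czf mydir.tar.gz mydir/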

    [Jul 26, 2019] How To Check Swap Usage Size and Utilization in Linux by Vivek Gite

    Jul 26, 2019 | www.cyberciti.biz

    The procedure to check swap space usage and size in Linux is as follows:

    1. Open a terminal application.
    2. To see swap size in Linux, type the command: swapon -s .
    3. You can also refer to the /proc/swaps file to see swap areas in use on Linux.
    4. Type free -m to see both your ram and your swap space usage in Linux.
    5. Finally, one can use the top or htop command to look for swap space utilization on Linux too.
    How to Check Swap Space in Linux using /proc/swaps file

    Type the following cat command to see total and used swap size:
    # cat /proc/swaps
    Sample outputs:

    Filename                           Type            Size    Used    Priority
    /dev/sda3                               partition       6291448 65680   0
    

    Another option is to type the grep command as follows:
    grep Swap /proc/meminfo

    SwapCached:            0 kB
    SwapTotal:        524284 kB
    SwapFree:         524284 kB
    
    Look for swap space in Linux using swapon command

    Type the following command to show swap usage summary by device
    # swapon -s
    Sample outputs:

    Filename                           Type            Size    Used    Priority
    /dev/sda3                               partition       6291448 65680   0
    
    Use free command to monitor swap space usage

    Use the free command as follows:
    # free -g
    # free -k
    # free -m

    Sample outputs:

                 total       used       free     shared    buffers     cached
    Mem:         11909      11645        264          0        324       8980
    -/+ buffers/cache:       2341       9568
    Swap:         6143         64       6079
    
    See swap size in Linux using vmstat command

    Type the following vmstat command:
    # vmstat
    # vmstat 1 5

    ... ... ...
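
    On systems with a reasonably recent util-linux, swapon --show prints the same information as swapon -s in a cleaner, column-labelled form (NAME, TYPE, SIZE, USED, PRIO); a minimal sketch:

    # swapon --show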

    Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

    [Jul 26, 2019] Cheat.sh Shows Cheat Sheets On The Command Line Or In Your Code Editor

    The choice of shell as a programming language is strange, but the idea is good...
    Notable quotes:
    "... The tool is developed by Igor Chubin, also known for its console-oriented weather forecast service wttr.in , which can be used to retrieve the weather from the console using only cURL or Wget. ..."
    Jul 26, 2019 | www.linuxuprising.com

    While it does have its own cheat sheet repository too, the project is actually concentrated around the creation of a unified mechanism to access well developed and maintained cheat sheet repositories.

    The tool is developed by Igor Chubin, also known for its console-oriented weather forecast service wttr.in , which can be used to retrieve the weather from the console using only cURL or Wget.

    It's worth noting that cheat.sh is not new. In fact it had its initial commit around May, 2017, and is a very popular repository on GitHub. But I personally only came across it recently, and I found it very useful, so I figured there must be some Linux Uprising readers who are not aware of this cool gem.

    cheat.sh features & more
    cheat.sh tar example
    cheat.sh major features:

    The command line client features a special shell mode with a persistent queries context and readline support. It also has a query history, it integrates with the clipboard, supports tab completion for shells like Bash, Fish and Zsh, and it includes the stealth mode I mentioned in the cheat.sh features.

    The web, curl and cht.sh (command line) interfaces all make use of https://cheat.sh/ but if you prefer, you can self-host it .

    It should be noted that each editor plugin supports a different feature set (configurable server, multiple answers, toggle comments, and so on). You can view a feature comparison of each cheat.sh editor plugin on the Editors integration section of the project's GitHub page.

    Want to contribute a cheat sheet? See the cheat.sh guide on editing or adding a new cheat sheet.

    Interested in bookmarking commands instead? You may want to give Marker, a command bookmark manager for the console , a try.

    cheat.sh curl / command line client usage examples
    Examples of using cheat.sh using the curl interface (this requires having curl installed as you'd expect) from the command line:

    Show the tar command cheat sheet:

    curl cheat.sh/tar
    

    Example with output:
    $ curl cheat.sh/tar
    # To extract an uncompressed archive:
    tar -xvf /path/to/foo.tar
    
    # To create an uncompressed archive:
    tar -cvf /path/to/foo.tar /path/to/foo/
    
    # To extract a .gz archive:
    tar -xzvf /path/to/foo.tgz
    
    # To create a .gz archive:
    tar -czvf /path/to/foo.tgz /path/to/foo/
    
    # To list the content of an .gz archive:
    tar -ztvf /path/to/foo.tgz
    
    # To extract a .bz2 archive:
    tar -xjvf /path/to/foo.tgz
    
    # To create a .bz2 archive:
    tar -cjvf /path/to/foo.tgz /path/to/foo/
    
    # To extract a .tar in specified Directory:
    tar -xvf /path/to/foo.tar -C /path/to/destination/
    
    # To list the content of an .bz2 archive:
    tar -jtvf /path/to/foo.tgz
    
    # To create a .gz archive and exclude all jpg,gif,... from the tgz
    tar czvf /path/to/foo.tgz --exclude=\*.{jpg,gif,png,wmv,flv,tar.gz,zip} /path/to/foo/
    
    # To use parallel (multi-threaded) implementation of compression algorithms:
    tar -z ... -> tar -Ipigz ...
    tar -j ... -> tar -Ipbzip2 ...
    tar -J ... -> tar -Ipixz ...
    

    cht.sh also works instead of cheat.sh:
    curl cht.sh/tar
    

    Want to search for a keyword in all cheat sheets? Use:
    curl cheat.sh/~keyword
    

    List the Python programming language cheat sheet for random list :
    curl cht.sh/python/random+list
    

    Example with output:
    $ curl cht.sh/python/random+list
    #  python - How to randomly select an item from a list?
    #  
    #  Use random.choice
    #  (https://docs.python.org/2/library/random.html#random.choice):
    
    import random
    
    foo = ['a', 'b', 'c', 'd', 'e']
    print(random.choice(foo))
    
    #  For cryptographically secure random choices (e.g. for generating a
    #  passphrase from a wordlist), use random.SystemRandom
    #  (https://docs.python.org/2/library/random.html#random.SystemRandom)
    #  class:
    
    import random
    
    foo = ['battery', 'correct', 'horse', 'staple']
    secure_random = random.SystemRandom()
    print(secure_random.choice(foo))
    
    #  [Pēteris Caune] [so/q/306400] [cc by-sa 3.0]
    

    Replace python with some other programming language supported by cheat.sh, and random+list with the cheat sheet you want to show.
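
    For example, to pull up a Go cheat sheet on reading a file (assuming Go is among the languages cheat.sh covers; the query keywords are arbitrary):

    curl cht.sh/go/read+file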

    Want to eliminate the comments from your answer? Add ?Q at the end of the query (below is an example using the same /python/random+list):

    $ curl cht.sh/python/random+list?Q
    import random
    
    foo = ['a', 'b', 'c', 'd', 'e']
    print(random.choice(foo))
    
    import random
    
    foo = ['battery', 'correct', 'horse', 'staple']
    secure_random = random.SystemRandom()
    print(secure_random.choice(foo))
    

    For more flexibility and tab completion you can use cht.sh, the command line cheat.sh client; you'll find instructions for how to install it further down this article. Examples of using the cht.sh command line client:

    Show the tar command cheat sheet:

    cht.sh tar
    

    List the Python programming language cheat sheet for random list :
    cht.sh python random list
    

    There is no need to use quotes with multiple keywords.

    You can start the cht.sh client in a special shell mode using:

    cht.sh --shell
    

    And then you can start typing your queries. Example:
    $ cht.sh --shell
    cht.sh> bash loop
    

    If all your queries are about the same programming language, you can start the client in the special shell mode, directly in that context. As an example, start it with the Bash context using:
    cht.sh --shell bash
    

    Example with output:
    $ cht.sh --shell bash
    cht.sh/bash> loop
    ...........
    cht.sh/bash> switch case
    

    Want to copy the previously listed answer to the clipboard? Type c , then press Enter to copy the whole answer, or type C and press Enter to copy it without comments.

    Type help in the cht.sh interactive shell mode to see all available commands. Also look under the Usage section from the cheat.sh GitHub project page for more options and advanced usage.

    How to install cht.sh command line client
    You can use cheat.sh in a web browser, from the command line with the help of curl (without having to install anything else, as explained above), as a code editor plugin, or through its command line client, which has some extra features I already mentioned. The steps below are for installing this cht.sh command line client.

    If you'd rather install a code editor plugin for cheat.sh, see the Editors integration page.

    1. Install dependencies.

    To install the cht.sh command line client, the curl command line tool will be used, so this needs to be installed on your system. Another dependency is rlwrap , which is required by the cht.sh special shell mode. Install these dependencies as follows.

    sudo apt install curl rlwrap
    

    sudo dnf install curl rlwrap
    

    sudo pacman -S curl rlwrap
    

    sudo zypper install curl rlwrap
    

    The packages seem to be named the same on most (if not all) Linux distributions, so if your Linux distribution is not on this list, just install the curl and rlwrap packages using your distro's package manager.

    2. Download and install the cht.sh command line interface.

    You can install this either for your user only (so only you can run it), or for all users:

    curl https://cht.sh/:cht.sh > ~/.bin/cht.sh
    
    chmod +x ~/.bin/cht.sh
    

    curl https://cht.sh/:cht.sh | sudo tee /usr/local/bin/cht.sh
    
    sudo chmod +x /usr/local/bin/cht.sh
    

    If the first command appears to have frozen displaying only the cURL output, press the Enter key and you'll be prompted to enter your password in order to save the file to /usr/local/bin .
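
    If you went with the per-user install instead, make sure ~/.bin exists and is on your PATH; a minimal sketch, assuming bash (skip it if ~/.bin is already set up):

    mkdir -p ~/.bin
    echo 'export PATH="$HOME/.bin:$PATH"' >> ~/.bashrc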

    You may also download and install the cheat.sh command completion for Bash or Zsh:

    mkdir ~/.bash.d
    
    curl https://cheat.sh/:bash_completion > ~/.bash.d/cht.sh
    
    echo ". ~/.bash.d/cht.sh" >> ~/.bashrc
    

    mkdir ~/.zsh.d
    
    curl https://cheat.sh/:zsh > ~/.zsh.d/_cht
    
    echo 'fpath=(~/.zsh.d/ $fpath)' >> ~/.zshrc
    

    Open a new shell / terminal and it will load the cheat.sh completion.

    [Jul 26, 2019] What Is /dev/null in Linux by Alexandru Andrei

    Images removed...
    Jul 23, 2019 | www.maketecheasier.com
    ... ... ...

    In technical terms, "/dev/null" is a virtual device file. As far as programs are concerned, these are treated just like real files. Utilities can request data from this kind of source, and the operating system feeds them data. But, instead of reading from disk, the operating system generates this data dynamically. An example of such a file is "/dev/zero."

    In this case, however, you will write to a device file. Whatever you write to "/dev/null" is discarded, forgotten, thrown into the void. To understand why this is useful, you must first have a basic understanding of standard output and standard error in Linux or *nix type operating systems.

    Related : How to Use the Tee Command in Linux

    stdout and stderr

    A command-line utility can generate two types of output. Standard output is sent to stdout. Errors are sent to stderr.

    By default, stdout and stderr are associated with your terminal window (or console). This means that anything sent to stdout and stderr is normally displayed on your screen. But through shell redirections, you can change this behavior. For example, you can redirect stdout to a file. This way, instead of displaying output on the screen, it will be saved to a file for you to read later – or you can redirect stdout to a physical device, say, a digital LED or LCD display.
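
    For instance, a quick sketch of sending stdout to a file (the file name is arbitrary):

    ls -l /etc > listing.txt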

    A full article about pipes and redirections is available if you want to learn more.

    Related : 12 Useful Linux Commands for New User

    Use /dev/null to Get Rid of Output You Don't Need

    Since there are two types of output, standard output and standard error, the first use case is to filter out one type or the other. It's easier to understand through a practical example. Let's say you're looking for a string in "/sys" to find files that refer to power settings.

    grep -r power /sys/
    

    There will be a lot of files that a regular, non-root user cannot read. This will result in many "Permission denied" errors.

    These clutter the output and make it harder to spot the results that you're looking for. Since "Permission denied" errors are part of stderr, you can redirect them to "/dev/null."

    grep -r power /sys/ 2>/dev/null
    

    As you can see, this is much easier to read.

    In other cases, it might be useful to do the reverse: filter out standard output so you can only see errors.

    ping google.com 1>/dev/null
    

    The screenshot above shows that, without redirecting, ping displays its normal output when it can reach the destination machine. In the second command, nothing is displayed while the network is online, but as soon as it gets disconnected, only error messages are displayed.

    You can redirect both stdout and stderr to two different locations.

    ping google.com 1>/dev/null 2>error.log
    

    In this case, stdout messages won't be displayed at all, and error messages will be saved to the "error.log" file.

    Redirect All Output to /dev/null

    Sometimes it's useful to get rid of all output. There are two ways to do this.

    grep -r power /sys/ >/dev/null 2>&1
    

    The string >/dev/null means "send stdout to /dev/null," and the second part, 2>&1 , means send stderr to stdout. In this case you have to refer to stdout as "&1" instead of simply "1." Writing "2>1" would just redirect stdout to a file named "1."

    Note that the order of the redirections matters. If you reverse them like this:

    grep -r power /sys/ 2>&1 >/dev/null
    

    it won't work as intended. That's because as soon as 2>&1 is interpreted, stderr is sent to stdout and displayed on screen. Next, stdout is suppressed when sent to "/dev/null." The final result is that you will see errors on the screen instead of suppressing all output. If you can't remember the correct order, there's a simpler redirect that is much easier to type:

    grep -r power /sys/ &>/dev/null
    

    In this case, &>/dev/null is equivalent to saying "redirect both stdout and stderr to this location."

    Other Examples Where It Can Be Useful to Redirect to /dev/null

    Say you want to see how fast your disk can read sequential data. The test is not extremely accurate but accurate enough. You can use dd for this, but dd either outputs to stdout or can be instructed to write to a file. With of=/dev/null you can tell dd to write to this virtual file. You don't even have to use shell redirections here. if= specifies the location of the input file to be read; of= specifies the name of the output file, where to write.

    dd if=debian-disk.qcow2 of=/dev/null status=progress bs=1M iflag=direct
    

    In some scenarios, you may want to see how fast you can download from a server. But you don't want to write to your disk unnecessarily. Simply enough, don't write to a regular file, write to "/dev/null."

    wget -O /dev/null http://ftp.halifax.rwth-aachen.de/ubuntu-releases/18.04/ubuntu-18.04.2-desktop-amd64.iso
    
    Conclusion

    Hopefully, the examples in this article can inspire you to find your own creative ways to use "/dev/null."

    Know an interesting use-case for this special device file? Leave a comment below and share the knowledge!

    [Jul 26, 2019] How to check open ports in Linux using the CLI by Vivek Gite

    Jul 26, 2019 | www.cyberciti.biz

    Using netstat to list open ports

    Type the following netstat command
    sudo netstat -tulpn | grep LISTEN

    ... ... ...

    For example, TCP port 631 is opened by the cupsd process, and cupsd is listening only on the loopback address (127.0.0.1). Similarly, TCP port 22 is opened by the sshd process, and sshd is listening on all IP addresses for ssh connections:

    Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name 
    tcp   0      0      127.0.0.1:631           0.0.0.0:*               LISTEN      0          43385      1821/cupsd  
    tcp   0      0      0.0.0.0:22              0.0.0.0:*               LISTEN      0          44064      1823/sshd
    

    Where,

    Use ss to list open ports

    The ss command is used to dump socket statistics. It allows showing information similar to netstat. It can display more TCP and state information than other tools. The syntax is:
    sudo ss -tulpn

    ... ... ...
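
    As with netstat, you can filter the ss output for a particular port; a quick sketch (port 22 is just an example):

    sudo ss -tulpn | grep ':22'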

    Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

    [Jun 23, 2019] Utilizing multi core for tar+gzip-bzip compression-decompression

    Highly recommended!
    Notable quotes:
    "... There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files. ..."
    "... You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use. ..."
    Jun 23, 2019 | stackoverflow.com

    user1118764 , Sep 7, 2012 at 6:58

    I normally compress using tar zcvf and decompress using tar zxvf (using gzip due to habit).

    I've recently gotten a quad core CPU with hyperthreading, so I have 8 logical cores, and I notice that many of the cores are unused during compression/decompression.

    Is there any way I can utilize the unused cores to make it faster?

    Warren Severin , Nov 13, 2017 at 4:37

    The solution proposed by Xiong Chiamiov above works beautifully. I had just backed up my laptop with .tar.bz2 and it took 132 minutes using only one cpu thread. Then I compiled and installed tar from source: gnu.org/software/tar I included the options mentioned in the configure step: ./configure --with-gzip=pigz --with-bzip2=lbzip2 --with-lzip=plzip I ran the backup again and it took only 32 minutes. That's better than 4X improvement! I watched the system monitor and it kept all 4 cpus (8 threads) flatlined at 100% the whole time. THAT is the best solution. – Warren Severin Nov 13 '17 at 4:37

    Mark Adler , Sep 7, 2012 at 14:48

    You can use pigz instead of gzip, which does gzip compression on multiple cores. Instead of using the -z option, you would pipe it through pigz:
    tar cf - paths-to-archive | pigz > archive.tar.gz

    By default, pigz uses the number of available cores, or eight if it could not query that. You can ask for more with -p n, e.g. -p 32. pigz has the same options as gzip, so you can request better compression with -9. E.g.

    tar cf - paths-to-archive | pigz -9 -p 32 > archive.tar.gz

    user788171 , Feb 20, 2013 at 12:43

    How do you use pigz to decompress in the same fashion? Or does it only work for compression?

    Mark Adler , Feb 20, 2013 at 16:18

    pigz does use multiple cores for decompression, but only with limited improvement over a single core. The deflate format does not lend itself to parallel decompression.

    The decompression portion must be done serially. The other cores for pigz decompression are used for reading, writing, and calculating the CRC. When compressing on the other hand, pigz gets close to a factor of n improvement with n cores.
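
    For completeness, a minimal sketch of decompressing through pigz (the archive name is a placeholder); either form works with GNU tar:

    pigz -dc archive.tar.gz | tar xf -
    tar -I pigz -xf archive.tar.gz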

    Garrett , Mar 1, 2014 at 7:26

    The hyphen here is stdout (see this page ).

    Mark Adler , Jul 2, 2014 at 21:29

    Yes. 100% compatible in both directions.

    Mark Adler , Apr 23, 2015 at 5:23

    There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files.

    Jen , Jun 14, 2013 at 14:34

    You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use.

    For example use:

    tar -c --use-compress-program=pigz -f tar.file dir_to_zip

    Valerio Schiavoni , Aug 5, 2014 at 22:38

    Unfortunately by doing so the concurrent feature of pigz is lost. You can see for yourself by executing that command and monitoring the load on each of the cores. – Valerio Schiavoni Aug 5 '14 at 22:38

    bovender , Sep 18, 2015 at 10:14

    @ValerioSchiavoni: Not here, I get full load on all 4 cores (Ubuntu 15.04 'Vivid'). – bovender Sep 18 '15 at 10:14

    Valerio Schiavoni , Sep 28, 2015 at 23:41

    On compress or on decompress ? – Valerio Schiavoni Sep 28 '15 at 23:41

    Offenso , Jan 11, 2017 at 17:26

    I prefer tar cf - dir_to_zip | pv | pigz > tar.file . pv helps me estimate progress; you can skip it. But it's still easier to write and remember. – Offenso Jan 11 '17 at 17:26

    Maxim Suslov , Dec 18, 2014 at 7:31

    Common approach

    There is option for tar program:

    -I, --use-compress-program PROG
          filter through PROG (must accept -d)

    You can use multithread version of archiver or compressor utility.

    Most popular multithread archivers are pigz (instead of gzip) and pbzip2 (instead of bzip2). For instance:

    $ tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 paths_to_archive
    $ tar --use-compress-program=pigz -cf OUTPUT_FILE.tar.gz paths_to_archive

    The archiver must accept -d. If your replacement utility doesn't have this parameter and/or you need to specify additional parameters, then use pipes (add parameters if necessary):

    $ tar cf - paths_to_archive | pbzip2 > OUTPUT_FILE.tar.bz2
    $ tar cf - paths_to_archive | pigz > OUTPUT_FILE.tar.gz
    

    Input and output of singlethread and multithread are compatible. You can compress using multithread version and decompress using singlethread version and vice versa.

    p7zip

    For p7zip compression you need a small shell script like the following:

    #!/bin/sh
    # Wrapper so tar's -I/--use-compress-program option can drive 7za;
    # tar invokes "PROG -d" for decompression and plain "PROG" for compression.
    case $1 in
      -d) 7za -txz -si -so e;;   # decompress: read xz stream from stdin, write to stdout
       *) 7za -txz -si -so a .;; # compress: read stdin, write xz stream to stdout
    esac 2>/dev/null
    

    Save it as 7zhelper.sh. Here is an example of usage:

    $ tar -I 7zhelper.sh -cf OUTPUT_FILE.tar.7z paths_to_archive
    $ tar -I 7zhelper.sh -xf OUTPUT_FILE.tar.7z
    
    xz

    Regarding multithreaded XZ support: if you are running version 5.2.0 or above of XZ Utils, you can utilize multiple cores for compression by setting -T or --threads to an appropriate value via the environment variable XZ_DEFAULTS (e.g. XZ_DEFAULTS="-T 0" ).
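
    For example, a minimal sketch (archive and path names are placeholders):

    XZ_DEFAULTS="-T 0" tar -cJf archive.tar.xz paths_to_archive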

    This is a fragment of man for 5.1.0alpha version:

    Multithreaded compression and decompression are not implemented yet, so this option has no effect for now.

    However this will not work for decompression of files that haven't also been compressed with threading enabled. From man for version 5.2.2:

    Threaded decompression hasn't been implemented yet. It will only work on files that contain multiple blocks with size information in block headers. All files compressed in multi-threaded mode meet this condition, but files compressed in single-threaded mode don't even if --block-size=size is used.

    Recompiling with replacement

    If you build tar from sources, then you can recompile with parameters

    --with-gzip=pigz
    --with-bzip2=lbzip2
    --with-lzip=plzip
    

    After recompiling tar with these options you can check the output of tar's help:

    $ tar --help | grep "lbzip2\|plzip\|pigz"
      -j, --bzip2                filter the archive through lbzip2
          --lzip                 filter the archive through plzip
      -z, --gzip, --gunzip, --ungzip   filter the archive through pigz
    

    mpibzip2 , Apr 28, 2015 at 20:57

    I just found pbzip2 and mpibzip2 . mpibzip2 looks very promising for clusters or if you have a laptop and a multicore desktop computer for instance. – user1985657 Apr 28 '15 at 20:57

    oᴉɹǝɥɔ , Jun 10, 2015 at 17:39

    Processing STDIN may in fact be slower. – oᴉɹǝɥɔ Jun 10 '15 at 17:39

    selurvedu , May 26, 2016 at 22:13

    Plus 1 for the xz option. It's the simplest, yet effective approach. – selurvedu May 26 '16 at 22:13

    panticz.de , Sep 1, 2014 at 15:02

    You can use the shortcut -I for tar's --use-compress-program switch, and invoke pbzip2 for bzip2 compression on multiple cores:
    tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 DIRECTORY_TO_COMPRESS/
    

    einpoklum , Feb 11, 2017 at 15:59

    A nice TL;DR for @MaximSuslov's answer. – einpoklum Feb 11 '17 at 15:59

    If you want to have more flexibility with filenames and compression options, you can use:
    find /my/path/ -type f \( -name "*.sql" -o -name "*.log" \) -exec \
    tar -P --transform='s@/my/path/@@g' -cf - {} + | \
    pigz -9 -p 4 > myarchive.tar.gz
    
    Step 1: find

    find /my/path/ -type f \( -name "*.sql" -o -name "*.log" \) -exec

    This command will look for the files you want to archive, in this case /my/path/*.sql and /my/path/*.log . Add as many -o -name "pattern" clauses as you want inside the \( ... \) group; the grouping ensures that -exec applies to every pattern, not just the last one.

    -exec will execute the next command using the results of find : tar

    Step 2: tar

    tar -P --transform='s@/my/path/@@g' -cf - {} +

    --transform is a simple string replacement parameter. It strips the path of the files from the archive, so the tarball's root becomes the current directory when extracting. Note that you can't use the -C option to change directory here, because you'd lose the benefit of find: all files of the directory would be included.

    -P tells tar to use absolute paths, so it doesn't trigger the warning "Removing leading `/' from member names". The leading '/' will be removed by --transform anyway.

    -cf - tells tar to create the archive and write it to stdout, so it can be piped to pigz; the final file name is given later, after pigz.

    {} + passes every file that find found to the single tar invocation.

    Step 3: pigz

    pigz -9 -p 4

    Use as many parameters as you want. In this case -9 is the compression level and -p 4 is the number of cores dedicated to compression. If you run this on a heavily loaded web server, you probably don't want to use all available cores.
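
    If you want the worker count to match whatever machine the command runs on, you can derive it at run time; a small sketch, assuming GNU coreutils' nproc is available:

    pigz -9 -p "$(nproc)"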

    Step 4: archive name

    > myarchive.tar.gz

    Finally.

    [Jun 20, 2019] Exploring run filesystem on Linux by Sandra Henry-Stocker

    Jun 20, 2019 | www.networkworld.com

    /run is home to a wide assortment of data. For example, if you take a look at /run/user, you will notice a group of directories with numeric names.

    $ ls /run/user
    1000  1002  121
    

    A long file listing will clarify the significance of these numbers.

    $ ls -l
    total 0
    drwx------ 5 shs  shs  120 Jun 16 12:44 1000
    drwx------ 5 dory dory 120 Jun 16 16:14 1002
    drwx------ 8 gdm  gdm  220 Jun 14 12:18 121

    This allows us to see that each directory is related to a user who is currently logged in or to the display manager, gdm. The numbers represent their UIDs. The contents of each of these directories are files used by running processes.

    The /run/user files represent only a very small portion of what you'll find in /run. There are lots of other files, as well. A handful contain the process IDs for various system processes.

    $ ls *.pid
    acpid.pid  atopacctd.pid  crond.pid  rsyslogd.pid
    atd.pid    atop.pid       gdm3.pid   sshd.pid
    

    As shown below, that sshd.pid file listed above contains the process ID for the ssh daemon (sshd).
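
    A quick way to confirm this (the PID value will differ on every system):

    $ cat /run/sshd.pid
    $ ps -p "$(cat /run/sshd.pid)" -o pid,comm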

    [Mar 13, 2019] Getting started with the cat command by Alan Formy-Duval

    Mar 13, 2019 | opensource.com

    6 comments

    Cat can also number a file's lines during output. There are two options to do this, as shown in the help documentation:

    -b, --number-nonblank    number nonempty output lines, overrides -n
    -n, --number             number all output lines

    If I use the -b option with the hello.world file, the output will be numbered like this:

       $ cat -b hello.world
       1 Hello World !

    In the example above, there is an empty line. We can determine why this empty line appears by using the -n argument:

    $ cat -n hello.world
       1 Hello World !
       2
       $

    Now we see that there is an extra empty line. These two arguments operate on the final output rather than on the file contents, so if we use the -n option with both files, the numbering continues across both, as follows:

       
       $ cat -n hello.world goodbye.world
       1 Hello World !
       2
       3 Good Bye World !
       4
       $

    One other option that can be useful is -s for squeeze-blank . This argument tells cat to reduce repeated empty line output down to one line. This is helpful when reviewing files that have a lot of empty lines, because it effectively fits more text on the screen. Suppose I have a file with three lines that are spaced apart by several empty lines, such as in this example, greetings.world :

       $ cat greetings.world
       Greetings World !
    
       Take me to your Leader !
    
       We Come in Peace !
       $

    Using the -s option saves screen space:

    $ cat -s greetings.world
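
    The output (reconstructed here, since the screenshot is not part of this excerpt) looks roughly like this:

       Greetings World !

       Take me to your Leader !

       We Come in Peace !
       $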

    Cat is often used to copy contents of one file to another file. You may be asking, "Why not just use cp ?" Here is how I could create a new file, called both.files , that contains the contents of the hello and goodbye files:

    $ cat hello.world goodbye.world > both.files
    $ cat both.files
    Hello World !
    Good Bye World !
    $
    zcat

    There is another variation on the cat command known as zcat . This command is capable of displaying files that have been compressed with Gzip without needing to uncompress the files with the gunzip command. As an aside, this also preserves disk space, which is the entire reason files are compressed!

    The zcat command is a bit more exciting because it can be a huge time saver for system administrators who spend a lot of time reviewing system log files. Where can we find compressed log files? Take a look at /var/log on most Linux systems. On my system, /var/log contains several files, such as syslog.2.gz and syslog.3.gz . These files are the result of the log management system, which rotates and compresses log files to save disk space and prevent logs from growing to unmanageable file sizes. Without zcat , I would have to uncompress these files with the gunzip command before viewing them. Thankfully, I can use zcat :

    $ cd /var/log
    $ ls *.gz
    syslog.2.gz  syslog.3.gz
    $
    $ zcat syslog.2.gz | more
    Jan 30 00:02:26 workstation systemd[1850]: Starting GNOME Terminal Server...
    Jan 30 00:02:26 workstation dbus-daemon[1920]: [session uid=2112 pid=1920] Successfully activated service 'org.gnome.Terminal'
    Jan 30 00:02:26 workstation systemd[1850]: Started GNOME Terminal Server.
    Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # watch_fast: "/org/gnome/terminal/legacy/" (establishing: 0, active: 0)
    Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # unwatch_fast: "/org/gnome/terminal/legacy/" (active: 0, establishing: 1)
    Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # watch_established: "/org/gnome/terminal/legacy/" (establishing: 0)
    --More--

    We can also pass both files to zcat if we want to review both of them uninterrupted. Due to how log rotation works, you need to pass the filenames in reverse order to preserve the chronological order of the log contents:

    $ ls -l *.gz
    -rw-r----- 1 syslog adm  196383 Jan 31 00:00 syslog.2.gz
    -rw-r----- 1 syslog adm 1137176 Jan 30 00:00 syslog.3.gz
    $ zcat syslog.3.gz syslog.2.gz | more

    The cat command seems simple but is very useful. I use it regularly. You also don't need to feed or pet it like a real cat. As always, I suggest you review the man pages ( man cat ) for the cat and zcat commands to learn more about how they can be used. You can also use the --help argument for a quick synopsis of command line arguments.

    Victorhck on 13 Feb 2019 Permalink

    and there's also a "tac" command, that is just a "cat" upside down!
    Following your example:

    ~~~~~

    tac both.files
    Good Bye World!
    Hello World!
    ~~~~
    Happy hacking! :)
    Johan Godfried on 26 Feb 2019 Permalink

    Interesting article but please don't misuse cat to pipe to more......

    I am trying to teach people to use less pipes and here you go abusing cat to pipe to other commands. IMHO, 99.9% of the time this is not necessary!

    Instead of "cat file | command" most of the time, you can use "command file" (yes, I am an old dinosaur from a time when memory was very expensive and forking multiple commands could fill it all up)

    Uri Ran on 03 Mar 2019 Permalink

    Run cat, then press keys to see the codes your shortcuts send. (Press Ctrl+C to kill the cat when you're done.)

    For example, on my Mac, the key combination option-leftarrow is ^[^[[D and command-downarrow is ^[[B.

    I learned it from https://stackoverflow.com/users/787216/lolesque in his answer to https://stackoverflow.com/questions/12382499/looking-for-altleftarrowkey...

    Geordie on 04 Mar 2019 Permalink

    cat is also useful to make (or append to) text files without an editor:

    $ cat >> foo << "EOF"
    > Hello World
    > Another Line
    > EOF
    $

    [Mar 10, 2019] How do I detach a process from Terminal, entirely?

    Mar 10, 2019 | superuser.com

    stackoverflow.com, Aug 25, 2016 at 17:24

    I use Tilda (drop-down terminal) on Ubuntu as my "command central" - pretty much the way others might use GNOME Do, Quicksilver or Launchy.

    However, I'm struggling with how to completely detach a process (e.g. Firefox) from the terminal it's been launched from - i.e. prevent such a (non-)child process from being terminated when the originating terminal is closed, and from polluting that terminal via stdout/stderr.

    For example, in order to start Vim in a "proper" terminal window, I have tried a simple script like the following:

    exec gnome-terminal -e "vim $@" &> /dev/null &
    

    However, that still causes pollution (also, passing a file name doesn't seem to work).

    lhunath, Sep 23, 2016 at 19:08

    First of all; once you've started a process, you can background it by first stopping it (hit Ctrl - Z ) and then typing bg to let it resume in the background. It's now a "job", and its stdout / stderr / stdin are still connected to your terminal.

    You can start a process as backgrounded immediately by appending a "&" to the end of it:

    firefox &
    

    To run it in the background silenced, use this:

    firefox </dev/null &>/dev/null &
    

    Some additional info:

    nohup is a program you can use to run your application with such that its stdout/stderr can be sent to a file instead and such that closing the parent script won't SIGHUP the child. However, you need to have had the foresight to have used it before you started the application. Because of the way nohup works, you can't just apply it to a running process .

    disown is a bash builtin that removes a shell job from the shell's job list. What this basically means is that you can't use fg , bg on it anymore, but more importantly, when you close your shell it won't hang or send a SIGHUP to that child anymore. Unlike nohup , disown is used after the process has been launched and backgrounded.

    What you can't do, is change the stdout/stderr/stdin of a process after having launched it. At least not from the shell. If you launch your process and tell it that its stdout is your terminal (which is what you do by default), then that process is configured to output to your terminal. Your shell has no business with the processes' FD setup, that's purely something the process itself manages. The process itself can decide whether to close its stdout/stderr/stdin or not, but you can't use your shell to force it to do so.

    To manage a background process' output, you have plenty of options from scripts, "nohup" probably being the first to come to mind. But for interactive processes you start but forgot to silence ( firefox < /dev/null &>/dev/null & ) you can't do much, really.

    I recommend you get GNU screen . With screen you can just close your running shell when the process' output becomes a bother and open a new one ( ^Ac ).


    Oh, and by the way, don't use " $@ " where you're using it.

    $@ means, $1 , $2 , $3 ..., which would turn your command into:

    gnome-terminal -e "vim $1" "$2" "$3" ...
    

    That's probably not what you want because -e only takes one argument. Use $1 to show that your script can only handle one argument.

    It's really difficult to get multiple arguments working properly in the scenario that you gave (with the gnome-terminal -e ) because -e takes only one argument, which is a shell command string. You'd have to encode your arguments into one. The best and most robust, but rather cludgy, way is like so:

    gnome-terminal -e "vim $(printf "%q " "$@")"
    

    Limited Atonement ,Aug 25, 2016 at 17:22

    nohup cmd &

    nohup detaches the process completely (daemonizes it)

    Randy Proctor ,Sep 13, 2016 at 23:00

    If you are using bash , try disown [ jobspec ] ; see bash(1) .

    Another approach you can try is at now . If you're not superuser, your permission to use at may be restricted.

    Stephen Rosen ,Jan 22, 2014 at 17:08

    Reading these answers, I was under the initial impression that issuing nohup <command> & would be sufficient. Running zsh in gnome-terminal, I found that nohup <command> & did not prevent my shell from killing child processes on exit. Although nohup is useful, especially with non-interactive shells, it only guarantees this behavior if the child process does not reset its handler for the SIGHUP signal.

    In my case, nohup should have prevented hangup signals from reaching the application, but the child application (VMWare Player in this case) was resetting its SIGHUP handler. As a result when the terminal emulator exits, it could still kill your subprocesses. This can only be resolved, to my knowledge, by ensuring that the process is removed from the shell's jobs table. If nohup is overridden with a shell builtin, as is sometimes the case, this may be sufficient, however, in the event that it is not...


    disown is a shell builtin in bash , zsh , and ksh93 ,

    <command> &
    disown
    

    or

    <command> & disown
    

    if you prefer one-liners. This has the generally desirable effect of removing the subprocess from the jobs table. This allows you to exit the terminal emulator without accidentally signaling the child process at all. No matter what the SIGHUP handler looks like, this should not kill your child process.

    After the disown, the process is still a child of your terminal emulator (play with pstree if you want to watch this in action), but after the terminal emulator exits, you should see it attached to the init process. In other words, everything is as it should be, and as you presumably want it to be.

    What to do if your shell does not support disown ? I'd strongly advocate switching to one that does, but in the absence of that option, you have a few choices.

    1. screen and tmux can solve this problem, but they are much heavier weight solutions, and I dislike having to run them for such a simple task. They are much more suitable for situations in which you want to maintain a tty, typically on a remote machine.
    2. For many users, it may be desirable to see if your shell supports a capability like zsh's setopt nohup . This can be used to specify that SIGHUP should not be sent to the jobs in the jobs table when the shell exits. You can either apply this just before exiting the shell, or add it to shell configuration like ~/.zshrc if you always want it on.
    3. Find a way to edit the jobs table. I couldn't find a way to do this in tcsh or csh , which is somewhat disturbing.
    4. Write a small C program to fork off and exec() . This is a very poor solution, but the source should only consist of a couple dozen lines. You can then pass commands as commandline arguments to the C program, and thus avoid a process specific entry in the jobs table.

    Sheljohn ,Jan 10 at 10:20

    1. nohup $COMMAND &
    2. $COMMAND & disown
    3. setsid command

    I've been using number 2 for a very long time, but number 3 works just as well. Also, disown has a 'nohup' flag of '-h', can disown all processes with '-a', and can disown all running processes with '-ar'.

    Silencing is accomplished by '$COMMAND &>/dev/null'.

    Hope this helps!
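
    Putting those pieces together, a minimal sketch of a fully detached and silenced launch (firefox is just an example program):

    setsid firefox </dev/null &>/dev/null &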

    dunkyp, Mar 25, 2009 at 1:51

    I think screen might solve your problem

    Nathan Fellman ,Mar 23, 2009 at 14:55

    in tcsh (and maybe in other shells as well), you can use parentheses to detach the process.

    Compare this:

    > jobs # shows nothing
    > firefox &
    > jobs
    [1]  + Running                       firefox
    

    To this:

    > jobs # shows nothing
    > (firefox &)
    > jobs # still shows nothing
    >
    

    This removes firefox from the jobs listing, but it is still tied to the terminal; if you logged in to this node via 'ssh', trying to log out will still hang the ssh process.


    To disassociate a process from the controlling tty, run the command through a sub-shell, e.g.:

    (command)&

    When you exit, the terminal is closed but the process is still alive.

    check -

    (sleep 100) & exit
    

    Open other terminal

    ps aux | grep sleep
    

    Process is still alive.

    [Mar 10, 2019] linux - How to attach terminal to detached process

    Mar 10, 2019 | unix.stackexchange.com



    Gilles ,Feb 16, 2012 at 21:39

    I have detached a process from my terminal, like this:
    $ process &
    

    That terminal is now long closed, but process is still running and I want to send some commands to that process's stdin. Is that possible?

    Samuel Edwin Ward ,Dec 22, 2018 at 13:34

    Yes, it is. First, create a pipe: mkfifo /tmp/fifo . Use gdb to attach to the process: gdb -p PID

    Then close stdin: call close (0) ; and open it again: call open ("/tmp/fifo", 0600)

    Finally, write away (from a different terminal, as gdb will probably hang):

    echo blah > /tmp/fifo
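
    Putting those steps together (PID is a placeholder; the fifo path and the open() arguments are exactly the ones used above):

    mkfifo /tmp/fifo
    gdb -p PID
    (gdb) call close(0)
    (gdb) call open("/tmp/fifo", 0600)
    # then, from a different terminal, while gdb stays attached:
    echo blah > /tmp/fifo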

    NiKiZe ,Jan 6, 2017 at 22:52

    When original terminal is no longer accessible...

    reptyr might be what you want, see https://serverfault.com/a/284795/187998

    Quote from there:

    Have a look at reptyr , which does exactly that. The github page has all the information.
    reptyr - A tool for "re-ptying" programs.

    reptyr is a utility for taking an existing running program and attaching it to a new terminal. Started a long-running process over ssh, but have to leave and don't want to interrupt it? Just start a screen, use reptyr to grab it, and then kill the ssh session and head on home.

    USAGE

    reptyr PID

    "reptyr PID" will grab the process with id PID and attach it to your current terminal.

    After attaching, the process will take input from and write output to the new terminal, including ^C and ^Z. (Unfortunately, if you background it, you will still have to run "bg" or "fg" in the old terminal. This is likely impossible to fix in a reasonable way without patching your shell.)

    manatwork ,Nov 20, 2014 at 22:59

    I am quite sure you can not.

    Check using ps x . If a process has a ? as controlling tty , you can not send input to it any more.

    9942 ?        S      0:00 tail -F /var/log/messages
    9947 pts/1    S      0:00 tail -F /var/log/messages
    

    In this example, you can send input to 9947 doing something like echo "test" > /dev/pts/1 . The other process ( 9942 ) is not reachable.

    Next time, you could use screen or tmux to avoid this situation.

    Stéphane Gimenez ,Feb 16, 2012 at 16:16

    EDIT : As Stephane Gimenez said, it's not that simple. It's only allowing you to print to a different terminal.

    You can try to write to this process using /proc . It should be located in /proc/ pid /fd/0 , so a simple :

    echo "hello" > /proc/PID/fd/0
    

    should do it. I have not tried it, but it should work, as long as this process still has a valid stdin file descriptor. You can check it with ls -l on /proc/ pid /fd/ .

    See nohup for more details about how to keep processes running.

    Stéphane Gimenez ,Nov 20, 2015 at 5:08

    Just ending the command line with & will not completely detach the process, it will just run it in the background. (With zsh you can use &! to actually detach it, otherwise you have to disown it later).

    When a process runs in the background, it won't receive input from its controlling terminal anymore. But you can send it back into the foreground with fg and then it will read input again.

    Otherwise, it's not possible to externally change its filedescriptors (including stdin) or to reattach a lost controlling terminal unless you use debugging tools (see Ansgar's answer , or have a look at the retty command).

    [Mar 10, 2019] linux - Preventing tmux session created by systemd from automatically terminating on Ctrl+C - Stack Overflow

    Mar 10, 2019 | stackoverflow.com



    Jim Stewart ,Nov 10, 2018 at 12:55

    Since a few days I'm successfully running the new Minecraft Bedrock Edition dedicated server on my Ubuntu 18.04 LTS home server. Because it should be available 24/7 and automatically startup after boot I created a systemd service for a detached tmux session:

    tmux.minecraftserver.service

    [Unit]
    Description=tmux minecraft_server detached
    
    [Service]
    Type=forking
    WorkingDirectory=/home/mine/minecraftserver
    ExecStart=/usr/bin/tmux new -s minecraftserver -d "LD_LIBRARY_PATH=. /home/mine/minecraftser$
    User=mine
    
    [Install]
    WantedBy=multi-user.target
    

    Everything works as expected but there's one tiny thing that keeps bugging me:

    How can I prevent tmux from terminating its whole session when I press Ctrl+C ? I just want to terminate the Minecraft server process itself instead of the whole tmux session. When starting the server from the command line in a manually created tmux session this does work (session stays alive) but not when the session was brought up by systemd .

    FlKo ,Nov 12, 2018 at 6:21

    When starting the server from the command line in a manually created tmux session this does work (session stays alive) but not when the session was brought up by systemd .

    The difference between these situations is actually unrelated to systemd. In one case, you're starting the server from a shell within the tmux session, and when the server terminates, control returns to the shell. In the other case, you're starting the server directly within the tmux session, and when it terminates there's no shell to return to, so the tmux session also dies.

    tmux has an option to keep the session alive after the process inside it dies (look for remain-on-exit in the manpage), but that's probably not what you want: you want to be able to return to an interactive shell, to restart the server, investigate why it died, or perform maintenance tasks, for example. So it's probably better to change your command to this:

    'LD_LIBRARY_PATH=. /home/mine/minecraftserver/ ; exec bash'
    

    That is, first run the server, and then, after it terminates, replace the process (the shell which tmux implicitly spawns to run the command, but which will then exit) with another, interactive shell. (For some other ways to get an interactive shell after the command exits, see e. g. this question – but note that the <(echo commands) syntax suggested in the top answer is not available in systemd unit files.)

    FlKo ,Nov 12, 2018 at 6:21

    I was able to solve this by using systemd's ExecStartPost and tmux's send-keys like this:
    [Unit]
    Description=tmux minecraft_server detached
    
    [Service]
    Type=forking
    WorkingDirectory=/home/mine/minecraftserver
    ExecStart=/usr/bin/tmux new -d -s minecraftserver
    ExecStartPost=/usr/bin/tmux send-keys -t minecraftserver "cd /home/mine/minecraftserver/" Enter "LD_LIBRARY_PATH=. ./bedrock_server" Enter
    
    User=mine
    
    [Install]
    WantedBy=multi-user.target
    

    [Feb 04, 2019] Do not play those dangerous games with resizing of partitions unless absolutely necessary

    Copying to an additional drive (can be USB), repartitioning, and then copying everything back is a safer bet
    May 07, 2017 | superuser.com
    womble

    In theory, you could reduce the size of sda1, increase the size of the extended partition, shift the contents of the extended partition down, then increase the size of the PV on the extended partition and you'd have the extra room.

    However, the number of possible things that can go wrong there is just astronomical.

    So I'd recommend either buying a second hard drive (and possibly transferring everything onto it in a more sensible layout, then repartitioning your current drive better) or just making some bind mounts of various bits and pieces out of /home into / to free up a bit more space.

    --womble

    [Jan 26, 2019] SysVinit to Systemd Cheatsheet

    Apr 15, 2015 | FedoraProject
    Sysvinit Command             | Systemd Command                                | Notes
    service frobozz start        | systemctl start frobozz                        | Used to start a service (not reboot persistent)
    service frobozz stop         | systemctl stop frobozz                         | Used to stop a service (not reboot persistent)
    service frobozz restart      | systemctl restart frobozz                      | Used to stop and then start a service
    service frobozz reload       | systemctl reload frobozz                       | When supported, reloads the config file without interrupting pending operations.
    service frobozz condrestart  | systemctl condrestart frobozz                  | Restarts if the service is already running.
    service frobozz status       | systemctl status frobozz                       | Tells whether a service is currently running.
    ls /etc/rc.d/init.d/         | systemctl (or) systemctl list-unit-files --type=service (or) ls /lib/systemd/system/*.service /etc/systemd/system/*.service | Used to list the services that can be started or stopped; plain systemctl lists all the services and other units
    chkconfig frobozz on         | systemctl enable frobozz                       | Turn the service on, for start at next boot, or other trigger.
    chkconfig frobozz off        | systemctl disable frobozz                      | Turn the service off for the next reboot, or any other trigger.
    chkconfig frobozz            | systemctl is-enabled frobozz                   | Used to check whether a service is configured to start or not in the current environment.
    chkconfig --list             | systemctl list-unit-files --type=service (or) ls /etc/systemd/system/*.wants/ | Print a table of services that lists which runlevels each is configured on or off
    chkconfig frobozz --list     | ls /etc/systemd/system/*.wants/frobozz.service | Used to list what levels this service is configured on or off
    chkconfig frobozz --add      | systemctl daemon-reload                        | Used when you create a new service file or modify any configuration

    [Nov 12, 2018] Linux Find Out Which Process Is Listening Upon a Port

    Jun 25, 2012 | www.cyberciti.biz

    How do I find out running processes were associated with each open port? How do I find out what process has open tcp port 111 or udp port 7000 under Linux?

    You can the following programs to find out about port numbers and its associated process:

    1. netstat – a command-line tool that displays network connections, routing tables, and a number of network interface statistics.
    2. fuser – a command line tool to identify processes using files or sockets.
    3. lsof – a command line tool to list open files under Linux / UNIX to report a list of all open files and the processes that opened them.
    4. /proc/$pid/ file system – Under Linux /proc includes a directory for each running process (including kernel processes) at /proc/PID, containing information about that process, notably including the processes name that opened port.

    You must run the above command(s) as the root user.

    netstat example

    Type the following command:
    # netstat -tulpn
    Sample outputs:

    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1138/mysqld     
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      850/portmap     
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1607/apache2    
    tcp        0      0 0.0.0.0:55091           0.0.0.0:*               LISTEN      910/rpc.statd   
    tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1467/dnsmasq    
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      992/sshd        
    tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1565/cupsd      
    tcp        0      0 0.0.0.0:7000            0.0.0.0:*               LISTEN      3813/transmission
    tcp6       0      0 :::22                   :::*                    LISTEN      992/sshd        
    tcp6       0      0 ::1:631                 :::*                    LISTEN      1565/cupsd      
    tcp6       0      0 :::7000                 :::*                    LISTEN      3813/transmission
    udp        0      0 0.0.0.0:111             0.0.0.0:*                           850/portmap     
    udp        0      0 0.0.0.0:662             0.0.0.0:*                           910/rpc.statd   
    udp        0      0 192.168.122.1:53        0.0.0.0:*                           1467/dnsmasq    
    udp        0      0 0.0.0.0:67              0.0.0.0:*                           1467/dnsmasq    
    udp        0      0 0.0.0.0:68              0.0.0.0:*                           3697/dhclient   
    udp        0      0 0.0.0.0:7000            0.0.0.0:*                           3813/transmission
    udp        0      0 0.0.0.0:54746           0.0.0.0:*                           910/rpc.statd
    

    TCP port 3306 was opened by mysqld process having PID # 1138. You can verify this using /proc, enter:
    # ls -l /proc/1138/exe
    Sample outputs:

    lrwxrwxrwx 1 root root 0 2010-10-29 10:20 /proc/1138/exe -> /usr/sbin/mysqld
    

    You can use the grep command to filter the output:
    # netstat -tulpn | grep :80
    Sample outputs:

    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1607/apache2
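
    On newer distributions where netstat from the net-tools package is no longer installed by default, the ss command from iproute2 accepts the same option letters for this purpose (a minimal sketch):

    # ss -tulpn
    # ss -tulpn | grep :80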
    
    Video demo

    https://www.youtube.com/embed/h3fJlmuGyos

    fuser command

    Find out the PID of the process that opened TCP port 7000; enter:
    # fuser 7000/tcp
    Sample outputs:

    7000/tcp:             3813
    

    Finally, find out the process name associated with PID 3813; enter:
    # ls -l /proc/3813/exe
    Sample outputs:

    lrwxrwxrwx 1 vivek vivek 0 2010-10-29 11:00 /proc/3813/exe -> /usr/bin/transmission
    

    /usr/bin/transmission is a bittorrent client, enter:
    # man transmission
    OR
    # whatis transmission
    Sample outputs:

    transmission (1)     - a bittorrent client
    
    Task: Find Out Current Working Directory Of a Process

    To find out the current working directory of the process with PID 3813 (the transmission bittorrent client), enter:
    # ls -l /proc/3813/cwd
    Sample outputs:

    lrwxrwxrwx 1 vivek vivek 0 2010-10-29 12:04 /proc/3813/cwd -> /home/vivek
    

    OR use pwdx command, enter:
    # pwdx 3813
    Sample outputs:

    3813: /home/vivek
    
    Task: Find Out Owner Of a Process

    Use the following command to find out the owner of the process with PID 3813:
    # ps aux | grep 3813
    OR
    # ps aux | grep '[3]813'
    Sample outputs:

    vivek     3813  1.9  0.3 188372 26628 ?        Sl   10:58   2:27 transmission
    

    OR try the following ps command:
    # ps -eo pid,user,group,args,etime,lstart | grep '[3]813'
    Sample outputs:

    3813 vivek    vivek    transmission                   02:44:05 Fri Oct 29 10:58:40 2010
    

    Another option is /proc/$PID/environ, enter:
    # cat /proc/3813/environ
    OR
    # grep --color -w -a USER /proc/3813/environ
    Sample outputs (note the --color option):

    Fig.01: grep output (screenshot not included)
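
    Because the entries in /proc/$PID/environ are separated by NUL bytes rather than newlines, the output can be easier to read after translating the separators (a small sketch using the same PID as above):

    # tr '\0' '\n' < /proc/3813/environ | grep '^USER='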

    lsof Command Example

    Type the command as follows:

    lsof -i :portNumber 
    lsof -i tcp:portNumber 
    lsof -i udp:portNumber 
    lsof -i :80
    lsof -i :80 | grep LISTEN
    


    Sample outputs:

    apache2   1607     root    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1616 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1617 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1618 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1619 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    apache2   1620 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
    

    Now you can get more information about PID 1607, 1616, and so on:
    # ps aux | grep '[1]616'
    Sample outputs:
    www-data 1616 0.0 0.0 35816 3880 ? S 10:20 0:00 /usr/sbin/apache2 -k start
    I recommend the following command to grab info about pid # 1616:
    # ps -eo pid,user,group,args,etime,lstart | grep '[1]616'
    Sample outputs:

    1616 www-data www-data /usr/sbin/apache2 -k start     03:16:22 Fri Oct 29 10:20:17 2010
    


    Help: I Discover an Open Port Which I Don't Recognize At All

    The file /etc/services is used to map port numbers and protocols to service names. Try matching port numbers:
    $ grep port /etc/services
    $ grep 443 /etc/services

    Sample outputs:

    https		443/tcp				# http protocol over TLS/SSL
    https		443/udp
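
    To avoid accidentally matching unrelated lines (for example port 1443, or a comment that merely contains 443), you can anchor the search on the exact port/protocol pair (a minimal sketch):

    $ grep -w '443/tcp' /etc/services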
    
    Check For rootkit

    I strongly recommend that you find out which processes are really running, especially on servers connected to high-speed Internet access. You can look for a rootkit, which is a program designed to take fundamental control (in Linux / UNIX terms "root" access, in Windows terms "Administrator" access) of a computer system without authorization from the system's owners and legitimate managers. See how to detect / check for rootkits under Linux .

    Keep an Eye On Your Bandwidth Graphs

    Usually, rooted servers are used to send large volumes of spam or malware, or to launch DoS-style attacks against other computers.

    See the following man pages for more information:
    $ man ps
    $ man grep
    $ man lsof
    $ man netstat
    $ man fuser

    Posted by: Vivek Gite

    The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system / Unix shell scripting.

    [Nov 08, 2018] How to find which process is regularly writing to disk?

    Notable quotes:
    "... tick...tick...tick...trrrrrr ..."
    "... /var/log/syslog ..."
    Nov 08, 2018 | unix.stackexchange.com

    Cedric Martin , Jul 27, 2012 at 4:31

    How can I find which process is constantly writing to disk?

    I like my workstation to be close to silent and I just built a new system (P8B75-M + Core i5 3450s -- the 's' because it has a lower max TDP) with quiet fans etc. and installed Debian Wheezy 64-bit on it.

    And something is getting on my nerves: I can hear some kind of pattern, as if the hard disk was writing or seeking something ( tick...tick...tick...trrrrrr rinse and repeat every second or so).

    I had a similar issue in the past (many, many years ago) and it turned out it was some CUPS log or something and I simply redirected that one (not important) logging to a (real) RAM disk.

    But here I'm not sure.

    I tried the following:

    ls -lR /var/log > /tmp/a.tmp && sleep 5 && ls -lR /var/log > /tmp/b.tmp && diff /tmp/?.tmp
    

    but nothing is changing there.

    Now the strange thing is that I also hear the pattern when the prompt asking me to enter my LVM decryption passphrase is showing.

    Could it be something in the kernel/system I just installed or do I have a faulty harddisk?

    hdparm -tT /dev/sda reports a correct HD speed (130 MB/s non-cached, SATA 6 Gb/s) and I've already installed and compiled from big sources (Emacs) without issue so I don't think the system is bad.

    (HD is a Seagate Barracuda 500GB)

    Mat , Jul 27, 2012 at 6:03

    Are you sure it's a hard drive making that noise, and not something else? (Check the fans, including PSU fan. Had very strange clicking noises once when a very thin cable was too close to a fan and would sometimes very slightly touch the blades and bounce for a few "clicks"...) – Mat Jul 27 '12 at 6:03

    Cedric Martin , Jul 27, 2012 at 7:02

    @Mat: I'll take the hard drive outside of the case (the connectors should be long enough) to be sure and I'll report back ; ) – Cedric Martin Jul 27 '12 at 7:02

    camh , Jul 27, 2012 at 9:48

    Make sure your disk filesystems are mounted relatime or noatime. File reads can be causing writes to inodes to record the access time. – camh Jul 27 '12 at 9:48

    mnmnc , Jul 27, 2012 at 8:27

    Did you try examining what programs like iotop are showing? It will tell you exactly which process is currently writing to the disk.

    example output:

    Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
      TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
        1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
        2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
        3 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
        6 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
        7 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/0]
        8 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/1]
     1033 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [flush-8:0]
       10 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/1]
    

    Cedric Martin , Aug 2, 2012 at 15:56

    thanks for that tip. I didn't know about iotop . On Debian I did an apt-cache search iotop to find out that I had to apt-get install iotop . Very cool command! – Cedric Martin Aug 2 '12 at 15:56

    ndemou , Jun 20, 2016 at 15:32

    I use iotop -o -b -d 10 which every 10secs prints a list of processes that read/wrote to disk and the amount of IO bandwidth used. – ndemou Jun 20 '16 at 15:32

    scai , Jul 27, 2012 at 10:48

    You can enable IO debugging via echo 1 > /proc/sys/vm/block_dump and then watch the debugging messages in /var/log/syslog . This has the advantage of obtaining some type of log file with past activities whereas iotop only shows the current activity.

    dan3 , Jul 15, 2013 at 8:32

    It is absolutely crazy to leave syslogging enabled when block_dump is active. Logging causes disk activity, which causes logging, which causes disk activity etc. Better stop syslog before enabling this (and use dmesg to read the messages) – dan3 Jul 15 '13 at 8:32

    scai , Jul 16, 2013 at 6:32

    You are absolutely right, although the effect isn't as dramatic as you describe it. If you just want to have a short peek at the disk activity there is no need to stop the syslog daemon. – scai Jul 16 '13 at 6:32

    dan3 , Jul 16, 2013 at 7:22

    I've tried it about 2 years ago and it brought my machine to a halt. One of these days when I have nothing important running I'll try it again :) – dan3 Jul 16 '13 at 7:22

    scai , Jul 16, 2013 at 10:50

    I tried it, nothing really happened. Especially because of file system buffering. A write to syslog doesn't immediately trigger a write to disk. – scai Jul 16 '13 at 10:50

    Volker Siegel , Apr 16, 2014 at 22:57

    I would assume there is general rate limiting in place for the log messages, which handles this case too(?) – Volker Siegel Apr 16 '14 at 22:57
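
    A small sketch of the block_dump approach, reading the messages with dmesg rather than syslog as suggested in the comments above (assumes a dmesg from util-linux 2.23 or later, which supports -w to follow new messages):

    # echo 1 > /proc/sys/vm/block_dump    # enable block I/O debug messages
    # dmesg -w                            # watch which process issues READ/WRITE/dirtied messages
    # echo 0 > /proc/sys/vm/block_dump    # switch it off again when done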

    Gilles , Jul 28, 2012 at 1:34

    Assuming that the disk noises are due to a process causing a write and not to some disk spindown problem , you can use the audit subsystem (install the auditd package ). Put a watch on the sync calls and its friends:
    auditctl -S sync -S fsync -S fdatasync -a exit,always
    

    Watch the logs in /var/log/audit/audit.log . Be careful not to do this if the audit logs themselves are flushed! Check in /etc/auditd.conf that the flush option is set to none .
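
    To review afterwards which processes made those calls, the ausearch tool shipped in the same auditd package can filter the audit log by syscall (a minimal sketch, assuming the rule above has been loaded):

    # ausearch -sc fsync -i | tail -n 40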

    If files are being flushed often, a likely culprit is the system logs. For example, if you log failed incoming connection attempts and someone is probing your machine, that will generate a lot of entries; this can cause a disk to emit machine gun-style noises. With the basic log daemon sysklogd, check /etc/syslog.conf : if a log file name is not preceded by - , then that log is flushed to disk after each write.

    Gilles , Mar 23 at 18:24

    @StephenKitt Huh. No. The asker mentioned Debian so I've changed it to a link to the Debian package. – Gilles Mar 23 at 18:24

    cas , Jul 27, 2012 at 9:40

    It might be your drives automatically spinning down, lots of consumer-grade drives do that these days. Unfortunately on even a lightly loaded system, this results in the drives constantly spinning down and then spinning up again, especially if you're running hddtemp or similar to monitor the drive temperature (most drives stupidly don't let you query the SMART temperature value without spinning up the drive - cretinous!).

    This is not only annoying, it can wear out the drives faster as many drives have only a limited number of park cycles. e.g. see https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/952556 for a description of the problem.

    I disable idle-spindown on all my drives with the following bit of shell code. You could put it in an /etc/rc.boot script, or in /etc/rc.local or similar.

    for disk in /dev/sd? ; do
      # $disk already expands to the full device path (e.g. /dev/sda),
      # so pass it as-is; -S 0 disables the idle spin-down timeout
      /sbin/hdparm -q -S 0 "$disk"
    done
    

    Cedric Martin , Aug 2, 2012 at 16:03

    that you can't query SMART readings without spinning up the drive leaves me speechless :-/ Now obviously the "spinning down" issue can become quite complicated. Regarding disabling the spinning down: wouldn't that in itself cause the HD to wear out faster? I mean: it's never ever "resting" as long as the system is on then? – Cedric Martin Aug 2 '12 at 16:03

    cas , Aug 2, 2012 at 21:42

    IIRC you can query some SMART values without causing the drive to spin up, but temperature isn't one of them on any of the drives i've tested (incl models from WD, Seagate, Samsung, Hitachi). Which is, of course, crazy because concern over temperature is one of the reasons for idling a drive. re: wear: AIUI 1. constant velocity is less wearing than changing speed. 2. the drives have to park the heads in a safe area and a drive is only rated to do that so many times (IIRC up to a few hundred thousand - easily exceeded if the drive is idling and spinning up every few seconds) – cas Aug 2 '12 at 21:42

    Micheal Johnson , Mar 12, 2016 at 20:48

    It's a long debate regarding whether it's better to leave drives running or to spin them down. Personally I believe it's best to leave them running - I turn my computer off at night and when I go out but other than that I never spin my drives down. Some people prefer to spin them down, say, at night if they're leaving the computer on or if the computer's idle for a long time, and in such cases the advantage of spinning them down for a few hours versus leaving them running is debatable. What's never good though is when the hard drive repeatedly spins down and up again in a short period of time. – Micheal Johnson Mar 12 '16 at 20:48

    Micheal Johnson , Mar 12, 2016 at 20:51

    Note also that spinning the drive down after it's been idle for a few hours is a bit silly, because if it's been idle for a few hours then it's likely to be used again within an hour. In that case, it would seem better to spin the drive down promptly if it's idle (like, within 10 minutes), but it's also possible for the drive to be idle for a few minutes when someone is using the computer and is likely to need the drive again soon. – Micheal Johnson Mar 12 '16 at 20:51


    I just found that s.m.a.r.t was causing an external USB disk to spin up again and again on my raspberry pi. Although SMART is generally a good thing, I decided to disable it again and since then it seems that unwanted disk activity has stopped

    [Nov 08, 2018] Determining what process is bound to a port

    Mar 14, 2011 | unix.stackexchange.com
    I know that using the command:
    lsof -i TCP

    (or some variant of parameters with lsof) I can determine which process is bound to a particular port. This is useful say if I'm trying to start something that wants to bind to 8080 and some else is already using that port, but I don't know what.

    Is there an easy way to do this without using lsof? I spend time working on many systems and lsof is often not installed.

    Cakemox , Mar 14, 2011 at 20:48

    netstat -lnp will list the pid and process name next to each listening port. This will work under Linux, but not all others (like AIX.) Add -t if you want TCP only.
    # netstat -lntp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
    tcp        0      0 0.0.0.0:24800           0.0.0.0:*               LISTEN      27899/synergys
    tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      3361/python
    tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      2264/mysqld
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      22964/apache2
    tcp        0      0 192.168.99.1:53         0.0.0.0:*               LISTEN      3389/named
    tcp        0      0 192.168.88.1:53         0.0.0.0:*               LISTEN      3389/named
    

    etc.

    xxx , Mar 14, 2011 at 21:01

    Cool, thanks. Looks like that that works under RHEL, but not under Solaris (as you indicated). Anybody know if there's something similar for Solaris? – user5721 Mar 14 '11 at 21:01

    Rich Homolka , Mar 15, 2011 at 19:56

    netstat -p above is my vote. also look at lsof . – Rich Homolka Mar 15 '11 at 19:56

    Jonathan , Aug 26, 2014 at 18:50

    As an aside, for windows it's similar: netstat -aon | more – Jonathan Aug 26 '14 at 18:50

    sudo , May 25, 2017 at 2:24

    What about for SCTP? – sudo May 25 '17 at 2:24

    frielp , Mar 15, 2011 at 13:33

    On AIX, netstat & rmsock can be used to determine process binding:
    [root@aix] netstat -Ana|grep LISTEN|grep 80
    f100070000280bb0 tcp4       0      0  *.37               *.*        LISTEN
    f1000700025de3b0 tcp        0      0  *.80               *.*        LISTEN
    f1000700002803b0 tcp4       0      0  *.111              *.*        LISTEN
    f1000700021b33b0 tcp4       0      0  127.0.0.1.32780    *.*        LISTEN
    
    # Port 80 maps to f1000700025de3b0 above, so we type:
    [root@aix] rmsock f1000700025de3b0 tcpcb
    The socket 0x25de008 is being held by process 499790 (java).
    

    Olivier Dulac , Sep 18, 2013 at 4:05

    Thanks for this! Is there a way, however, to just display what process listen on the socket (instead of using rmsock which attempt to remove it) ? – Olivier Dulac Sep 18 '13 at 4:05

    Vitor Py , Sep 26, 2013 at 14:18

    @OlivierDulac: "Unlike what its name implies, rmsock does not remove the socket, if it is being used by a process. It just reports the process holding the socket." ( ibm.com/developerworks/community/blogs/cgaix/entry/ ) – Vitor Py Sep 26 '13 at 14:18

    Olivier Dulac , Sep 26, 2013 at 16:00

    @vitor-braga: Ah thx! I thought it was trying but just said which process holds in when it couldn't remove it. Apparently it doesn't even try to remove it when a process holds it. That's cool! Thx! – Olivier Dulac Sep 26 '13 at 16:00

    frielp , Mar 15, 2011 at 13:27

    Another tool available on Linux is ss . From the ss man page on Fedora:
    NAME
           ss - another utility to investigate sockets
    SYNOPSIS
           ss [options] [ FILTER ]
    DESCRIPTION
           ss is used to dump socket statistics. It allows showing information 
           similar to netstat. It can display more TCP and state informations  
           than other tools.
    

    Example output below - the final column shows the process binding:

    [root@box] ss -ap
    State      Recv-Q Send-Q      Local Address:Port          Peer Address:Port
    LISTEN     0      128                    :::http                    :::*        users:(("httpd",20891,4),("httpd",20894,4),("httpd",20895,4),("httpd",20896,4)
    LISTEN     0      128             127.0.0.1:munin                    *:*        users:(("munin-node",1278,5))
    LISTEN     0      128                    :::ssh                     :::*        users:(("sshd",1175,4))
    LISTEN     0      128                     *:ssh                      *:*        users:(("sshd",1175,3))
    LISTEN     0      10              127.0.0.1:smtp                     *:*        users:(("sendmail",1199,4))
    LISTEN     0      128             127.0.0.1:x11-ssh-offset                  *:*        users:(("sshd",25734,8))
    LISTEN     0      128                   ::1:x11-ssh-offset                 :::*        users:(("sshd",25734,7))
    

    Eugen Constantin Dinca , Mar 14, 2011 at 23:47

    For Solaris you can use pfiles and then grep by sockname: or port: .

    A sample (from here ):

    pfiles `ptree | awk '{print $1}'` | egrep '^[0-9]|port:'
    

    rickumali , May 8, 2011 at 14:40

    I was once faced with trying to determine what process was behind a particular port (this time it was 8000). I tried a variety of lsof and netstat, but then took a chance and tried hitting the port via a browser (i.e. http://hostname:8000/ ). Lo and behold, a splash screen greeted me, and it became obvious what the process was (for the record, it was Splunk ).

    One more thought: "ps -e -o pid,args" (YMMV) may sometimes show the port number in the arguments list. Grep is your friend!
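
    For instance, reusing port 8000 from the question (the bracket in the pattern is just the usual trick to keep grep from matching its own command line):

    $ ps -e -o pid,args | grep '[8]000'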

    Gilles , Oct 8, 2015 at 21:04

    In the same vein, you could telnet hostname 8000 and see if the server prints a banner. However, that's mostly useful when the server is running on a machine where you don't have shell access, and then finding the process ID isn't relevant. – Gilles May 8 '11 at 14:45

    [Oct 23, 2018] To switch from vertical split to horizontal split fast in Vim

    Nov 24, 2013 | stackoverflow.com

    ДМИТРИЙ МАЛИКОВ, Nov 24, 2013 at 7:55

    How can you switch your current windows from horizontal split to vertical split and vice versa in Vim?

    I did that a moment ago by accident but I cannot find the key again.

    Mark Rushakoff

    Vim mailing list says (re-formatted for better readability):

    • To change two vertically split windows to horizontal split: Ctrl - W t Ctrl - W K
    • Horizontally to vertically: Ctrl - W t Ctrl - W H

    Explanations:

    • Ctrl - W t -- makes the first (topleft) window current
    • Ctrl - W K -- moves the current window to full-width at the very top
    • Ctrl - W H -- moves the current window to full-height at far left

    Note that the t is lowercase, and the K and H are uppercase.

    Also, with only two windows, it seems like you can drop the Ctrl - W t part because if you're already in one of only two windows, what's the point of making it current?

    Too much php Aug 13 '09 at 2:17

    So if you have two windows split horizontally, and you are in the lower window, you just use ^WL

    Alex Hart Dec 7 '12 at 14:10

    There are a ton of interesting ^w commands (b, w, etc)

    holms Feb 28 '13 at 9:07

    somehow doesn't work for me.. =/ –

    Lambart Mar 26 at 19:34

    Just toggle your NERDTree panel closed before 'rotating' the splits, then toggle it back open. :NERDTreeToggle (I have it mapped to a function key for convenience).

    xxx Feb 19 '13 at 20:26

    ^w followed by capital H , J , K or L will move the current window to the far left , bottom , top or right respectively like normal cursor navigation.

    The lower case equivalents move focus instead of moving the window.

    respectTheCode, Jul 21 '13 at 9:55

    Wow, cool! Thanks! :-) – infous Feb 6 at 8:46

    it's much better since users use hjkl to move between buffers. – Afshin Mehrabani

    In VIM, take a look at the following to see different alternatives for what you might have done:

    :help opening-window

    For instance:

    Ctrl - W s   (split the current window horizontally)
    Ctrl - W o   (make the current window the only one on the screen)
    Ctrl - W v   (split the current window vertically)
    Ctrl - W o   (make the current window the only one on the screen)
    Ctrl - W s   (split the current window horizontally)

    Anon, Apr 29 at 21:45

    The command ^W-o is great! I did not know it. – Masi Aug 13 '09 at 2:20

    The following ex commands will (re-)split any number of windows:

    If there are hidden buffers, issuing these commands will also make the hidden buffers visible.
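
    The Ex commands usually used for this (an assumption; the poster's own list is not shown above) are:

    :ball
    :vertical ball

    :ball opens one horizontal window per listed buffer and :vertical ball does the same with vertical splits; both bring hidden buffers back into view.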

    Mark Oct 22 at 19:31

    When you have two or more windows open horizontally or vertically and want to switch them all to the other orientation, you can use the following:
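
    A commonly used pair for this (an assumption, not necessarily what the poster meant) is:

    :windo wincmd K
    :windo wincmd H

    The first rotates every open window into a horizontal split, the second into a vertical split.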

    [Oct 22, 2018] move selection to a separate file

    Highly recommended!
    Oct 22, 2018 | superuser.com

    greg0ire ,Jan 23, 2013 at 13:29

    With vim, how can I move a piece of text to a new file? For the moment, I do this: select the text, write it to the new file, select it again, and delete it.

    Is there a more efficient way to do this?

    Before

    a.txt

    sometext
    some other text
    some other other text
    end
    
    After

    a.txt

    sometext
    end
    

    b.txt

    some other text
    some other other text
    

    Ingo Karkat, Jan 23, 2013 at 15:20

    How about these custom commands:
    :command! -bang -range -nargs=1 -complete=file MoveWrite  <line1>,<line2>write<bang> <args> | <line1>,<line2>delete _
    :command! -bang -range -nargs=1 -complete=file MoveAppend <line1>,<line2>write<bang> >> <args> | <line1>,<line2>delete _
    

    greg0ire ,Jan 23, 2013 at 15:27

    This is very ugly, but hey, it seems to do in one step exactly what I asked for (I tried). +1, and accepted. I was looking for a native way to do this quickly but since there does not seem to be one, yours will do just fine. Thanks! – greg0ire Jan 23 '13 at 15:27

    Ingo Karkat ,Jan 23, 2013 at 16:15

    Beauty is in the eye of the beholder. I find this pretty elegant; you only need to type it once (into your .vimrc). – Ingo Karkat Jan 23 '13 at 16:15

    greg0ire ,Jan 23, 2013 at 16:21

    You're right, "very ugly" shoud have been "very unfamiliar". Your command is very handy, and I think I definitely going to carve it in my .vimrc – greg0ire Jan 23 '13 at 16:21

    embedded.kyle ,Jan 23, 2013 at 14:08

    By "move a piece of text to a new file" I assume you mean cut that piece of text from the current file and create a new file containing only that text.

    Various examples:

    The above only copies the text and creates a new file containing that text. You will then need to delete afterward.

    This can be done using the same range and the d command:

    Or by using dd for the single line case.
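
    As an illustration of the idea (the range 1,10 is only an assumption, not the poster's example):

    :1,10w new_file
    :1,10d

    The first command writes lines 1 through 10 to new_file; the second then deletes the same range from the current buffer.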

    If you instead select the text using visual mode, and then hit : while the text is selected, you will see the following on the command line:

    :'<,'>

    Which indicates the selected text. You can then expand the command to:

    :'<,'>w >> old_file

    Which will append the text to an existing file. Then delete as above.


    One liner:

    :2,3 d | new +put! "

    The breakdown: :2,3 d deletes lines 2 and 3 into the unnamed register; | chains a second Ex command; new opens a new window on an empty buffer; and +put! " puts the contents of the unnamed register ( " ) above the first line of that new buffer.

    greg0ire, Jan 23, 2013 at 14:09

    Your assumption is right. This looks good, I'm going to test. Could you explain 2. a bit more? I'm not very familiar with ranges. EDIT: If I try this on the second line, it writes the first line to the other file, not the second line. – greg0ire Jan 23 '13 at 14:09

    embedded.kyle ,Jan 23, 2013 at 14:16

    @greg0ire I got that a bit backward, I'll edit to better explain – embedded.kyle Jan 23 '13 at 14:16

    greg0ire ,Jan 23, 2013 at 14:18

    I added an example to make my question clearer. – greg0ire Jan 23 '13 at 14:18

    embedded.kyle ,Jan 23, 2013 at 14:22

    @greg0ire I corrected my answer. It's still two steps. The first copies and writes. The second deletes. – embedded.kyle Jan 23 '13 at 14:22

    greg0ire ,Jan 23, 2013 at 14:41

    Ok, if I understand well, the trick is to use ranges to select and write in the same command. That's very similar to what I did. +1 for the detailed explanation, but I don't think this is more efficient, since the trick with hitting ':' is what I do for the moment. – greg0ire Jan 23 '13 at 14:41

    Xyon ,Jan 23, 2013 at 13:32

    Select the text in visual mode, then press y to "yank" it into the buffer (copy) or d to "delete" it into the buffer (cut).

    Then you can :split <new file name> to split your vim window up, and press p to paste in the yanked text. Write the file as normal.

    To close the split again, use :q in the split you want to close.

    greg0ire ,Jan 23, 2013 at 13:42

    I have 4 steps for the moment: select, write, select, delete. With your method, I have 6 steps: select, delete, split, paste, write, close. I asked for something more efficient :P – greg0ire Jan 23 '13 at 13:42

    Xyon ,Jan 23, 2013 at 13:44

    Well, if you pass the split :x instead, you can combine writing and closing into one and make it five steps. :P – Xyon Jan 23 '13 at 13:44

    greg0ire ,Jan 23, 2013 at 13:46

    That's better, but 5 still > 4 :P – greg0ire Jan 23 '13 at 13:46

    Based on @embedded.kyle's answer and this Q&A , I ended up with this one liner to append a selection to a file and delete from current file. After selecting some lines with Shift+V , hit : and run:
    '<,'>w >> test | normal gvd
    

    The first part appends selected lines. The second command enters normal mode and runs gvd to select the last selection and then deletes.

    [Oct 22, 2018] Cut/copy and paste using visual selection

    Oct 22, 2018 | vim.wikia.com
    Visual selection is a common feature in applications, but Vim's visual selection has several benefits.

    To cut-and-paste or copy-and-paste:

    1. Position the cursor at the beginning of the text you want to cut/copy.
    2. Press v to begin character-based visual selection, or V to select whole lines, or Ctrl-v or Ctrl-q to select a block.
    3. Move the cursor to the end of the text to be cut/copied. While selecting text, you can perform searches and other advanced movement.
    4. Press d (delete) to cut, or y (yank) to copy.
    5. Move the cursor to the desired paste location.
    6. Press p to paste after the cursor, or P to paste before.

    Visual selection (steps 1-3) can be performed using a mouse.

    If you want to change the selected text, press c instead of d or y in step 4. In a visual selection, pressing c performs a change by deleting the selected text and entering insert mode so you can type the new text.

    Pasting over a block of text

    You can copy a block of text by pressing Ctrl-v (or Ctrl-q if you use Ctrl-v for paste), then moving the cursor to select, and pressing y to yank. Now you can move elsewhere and press p to paste the text after the cursor (or P to paste before). The paste inserts a block (which might, for example, be 4 rows by 3 columns of text).

    Instead of inserting the block, it is also possible to replace (paste over) the destination. To do this, move to the target location then press 1vp ( 1v selects an area equal to the original, and p pastes over it).

    When a count is used before v , V , or ^V (character, line or block selection), an area equal to the previous area, multiplied by the count, is selected. See the paragraph after :help <LeftRelease> .

    Note that this will only work if you actually did something to the previous visual selection, such as a yank, delete, or change operation. It will not work after visually selecting an area and leaving visual mode without taking any actions.

    See also Comments

    NOTE: after selecting the visual copy mode, you can hold the shift key while selection the region to get a multiple line copy. For example, to copy three lines, press V, then hold down the Shift key while pressing the down arrow key twice. Then do your action on the buffer.

    I have struck out the above new comment because I think it is talking about something that may apply to those who have used :behave mswin . To visually select multiple lines, you type V , then press j (or cursor down). You hold down Shift only to type the uppercase V . Do not press Shift after that. If I am wrong, please explain here. JohnBeckett 10:48, October 7, 2010 (UTC)

    If you just want to copy (yank) the visually marked text, you do not need to 'y'ank it. Marking it will already copy it.

    Using a mouse, you can insert it at another position by clicking the middle mouse button.

    This also works in across Vim applications on Windows systems (clipboard is inserted)


    This is a really useful thing in Vim. I feel lost without it in any other editor. I have some more points I'd like to add to this tip:


    You can replace a set of text in a visual block very easily by selecting a block, press c and then make changes to the first line. Pressing <Esc> twice replaces all the text of the original selection. See :help v_b_c .


    On Windows the <mswin.vim> script seems to be getting sourced for many users.

    Result: more Windows-like behavior (ctrl-v is "paste", instead of visual-block selection). Hunt down your system vimrc and remove sourcing thereof if you don't like that behavior (or substitute <mrswin.vim> in its place, see VimTip63 ).

    With VimTip588 one can sort lines or blocks based on visual-block selection.


    With reference to the earlier post asking how to paste an inner block

    1. Select the inner block to copy using ctrl-v and highlighting with the hjkl keys
    2. yank the visual region (y)
    3. Select the inner block you want to overwrite (Ctrl-v then highlight with hjkl keys)
    4. paste the selection with P (that is, shift-P); this will overwrite while keeping the block formation

    The "yank" buffers in Vim are not the same as the Windows clipboard (i.e., cut-and-paste) buffers. If you're using the yank, it only puts it in a Vim buffer - that buffer is not accessible to the Windows paste command. You'll want to use the Edit | Copy and Edit | Paste (or their keyboard equivalents) if you're using the Windows GUI, or select with your mouse and use your X-Windows cut-n-paste mouse buttons if you're running UNIX.


    Double-quote and star gives one access to the Windows clipboard or the Unix equivalent. As an example, if I wanted to yank the current line into the clipboard I would type "*yy

    If I wanted to paste the contents of the clipboard into Vim at my current cursor location I would type "*p

    The double-quote and star trick works well with visual mode as well. ex: visually select the text to copy to the clipboard and then type "*y

    I find this very useful and I use it all the time but it is a bit slow typing "* all the time so I am thinking about creating a macro to speed it up a bit.


    Copy and Paste using the System Clipboard

    There are some caveats regarding how the "*y (copy into System Clipboard) command works. We have to be sure that we are using vim-full (sudo aptitude install vim-full on debian-based systems) or a Vim that has X11 support enabled. Only then will the "*y command work.
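
    A quick way to check whether the Vim in use was actually built with clipboard support is to look for +clipboard or +xterm_clipboard in its feature list (a small sketch):

    $ vim --version | grep clipboard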

    For our convenience as we are all familiar with using Ctrl+c to copy a block of text in most other GUI applications, we can also map Ctrl+c to "*y so that in Vim Visual Mode, we can simply Ctrl+c to copy the block of text we want into our system buffer. To do that, we simply add this line in our .vimrc file:

    map <C-c> "+y<CR>

    Restart Vim (or re-source the .vimrc) and we are good. Now whenever we are in Visual Mode, we can Ctrl+c to grab what we want and paste it into another application or another editor in a convenient and intuitive manner.

    [Oct 21, 2018] Moving lines between split windows in vim

    Notable quotes:
    "... "send the line I am on (or the test I selected) to the other window" ..."
    Oct 21, 2018 | superuser.com

    brad ,Nov 24, 2015 at 12:28

    I have two files, say a.txt and b.txt , in the same session of vim and I split the screen so I have file a.txt in the upper window and b.txt in the lower window.

    I want to move lines here and there from a.txt to b.txt : I select a line with Shift + v , then I move to b.txt in the lower window with Ctrl + w , paste with p , get back to a.txt with Ctrl + w and I can repeat the operation when I get to another line I want to move.

    My question: is there a quicker way to say vim "send the line I am on (or the test I selected) to the other window" ?

    Chong ,Nov 24, 2015 at 12:33

    Use q macro? q[some_letter] [whatever operations] q , then call the macro with [times to be called]@qChong Nov 24 '15 at 12:33

    Anthony Geoghegan ,Nov 24, 2015 at 13:00

    I presume that you're deleting the line that you've selected in a.txt . If not, you'd be pasting something else into b.txt . If so, there's no need to select the line first. – Anthony Geoghegan Nov 24 '15 at 13:00

    Anthony Geoghegan ,Nov 24, 2015 at 13:17

    This sounds like a good use case for a macro. Macros are commands that can be recorded and stored in a Vim register. Each register is identified by a letter from a to z. Recording

    From Recording keys for repeated jobs - Vim Tips

    To start recording, press q in Normal mode followed by a letter (a to z). That starts recording keystrokes to the specified register. Vim displays "recording" in the status line. Type any Normal mode commands, or enter Insert mode and type text. To stop recording, again press q while in Normal mode.

    For this particular macro, I chose the m (for move) register to store it.

    I pressed qm to record the following commands: dd (delete the current line into the unnamed register), Ctrl-W j (move to the window below), p (paste the line), and Ctrl-W k (move back to the upper window).

    When I typed q to finish recording the macro, the contents of the m register were:

    dd^Wjp^Wk
    
    Usage: run the macro with @m ; prefix it with a count (for example 5@m ) to repeat it that many times.

    brad ,Nov 24, 2015 at 14:26

    I asked to see if there is a command unknown to me that does the job: it seems there is none. In absence of such a command, this can be a good solution. – brad Nov 24 '15 at 14:26

    romainl ,Nov 26, 2015 at 9:54

    @brad, you can find all the commands available to you in the documentation. If it's not there it doesn't exist no need to ask random strangers. – romainl Nov 26 '15 at 9:54

    brad ,Nov 26, 2015 at 10:17

    @romainl, yes, I know this but vim documentation is really huge and, although it doesn't scare me, there is always the possibility to miss something. Moreover, it could also be that you can obtain the effect using the combination of 2 commands and in this case it would be hardly documented – brad Nov 26 '15 at 10:17

    [Oct 21, 2018] How to move around buffers in vim?

    Oct 21, 2018 | stackoverflow.com

    user3721893 ,Jul 23, 2014 at 5:43

    I normally work with more than 5 files at a time. I use buffers to open different files. I use commands such as :buf file1, :buf file2 etc. Is there a faster way to move to different files?

    eckes ,Jul 23, 2014 at 5:49

    What I use:

    And have a short look on :he buffer

    And the wiki entry on Easier Buffer Switching on the Vim Wiki: http://vim.wikia.com/wiki/Easier_buffer_switching

    SO already has a question regarding yours: How do you prefer to switch between buffers in Vim?

    romainl ,Jul 23, 2014 at 6:13

    A few mappings can make your life a lot easier.

    This one lists your buffers and prompts you for a number:

    nnoremap gb :buffers<CR>:buffer<Space>
    

    This one lists your buffers in the "wildmenu". Depends on the 'wildcharm' option as well as 'wildmenu' and 'wildmode' :

    nnoremap <leader>b :buffer <C-z>
    

    These ones allow you to cycle between all your buffers without much thinking:

    nnoremap <PageUp>   :bprevious<CR>
    nnoremap <PageDown> :bnext<CR>
    

    Also, don't forget <C-^> which allows you to alternate between two buffers.

    mikew ,Jul 23, 2014 at 6:38

    Once the buffers are already open, you can just type :b partial_filename to switch

    So if :ls shows that I have my ~/.vimrc open, then I can just type :b vimr or :b rc to switch to that buffer

    Brady Trainor ,Jul 25, 2014 at 22:13

    Below I describe some excerpts from sections of my .vimrc . It includes mapping the leader key, setting wilds tab completion, and finally my buffer nav key choices (all mostly inspired by folks on the interweb, including romainl). Edit: Then I ramble on about my shortcuts for windows and tabs.
    " easier default keys {{{1
    
    let mapleader=','
    nnoremap <leader>2 :@"<CR>
    

    The leader key is a prefix key for mostly user-defined key commands (some plugins also use it). The default is \ , but many people suggest the easier to reach , .

    The second line there is a command to @ execute from the " clipboard, in case you'd like to quickly try out various key bindings (without relying on :so % ). (My mnemonic is that Shift - 2 is @ .)

    " wilds {{{1
    
    set wildmenu wildmode=list:full
    set wildcharm=<C-z>
    set wildignore+=*~ wildignorecase
    

    For built-in completion, wildmenu is probably the part that shows up yellow on your Vim when using tab completion on command-line. wildmode is set to a comma-separated list, each coming up in turn on each tab completion (that is, my list is simply one element, list:full ). list shows rows and columns of candidates. full 's meaning includes maintaining existence of the wildmenu . wildcharm is the way to include Tab presses in your macros. The *~ is for my use in :edit and :find commands.

    " nav keys {{{1
    " windows, buffers and tabs {{{2
    " buffers {{{3
    
    nnoremap <leader>bb :b <C-z><S-Tab>
    nnoremap <leader>bh :ls!<CR>:b<Space>
    nnoremap <leader>bw :ls!<CR>:bw<Space>
    nnoremap <leader>bt :TSelectBuffer<CR>
    nnoremap <leader>be :BufExplorer<CR>
    nnoremap <leader>bs :BufExplorerHorizontalSplit<CR>
    nnoremap <leader>bv :BufExplorerVerticalSplit<CR>
    nnoremap <leader>3 :e#<CR>
    nmap <C-n> :bn<cr>
    nmap <C-p> :bp<cr>
    

    The ,3 is for switching between the "two" last buffers (Easier to reach than built-in Ctrl - 6 ). Mnemonic: Shift - 3 is # , and # is the register symbol for the last buffer. (See :marks .)

    ,bh is to select from hidden buffers ( ! ).

    ,bw is to bwipeout buffers by number or name. For instance, you can wipeout several while looking at the list, with ,bw 1 3 4 8 10 <CR> . Note that wipeout is more destructive than :bdelete . They have their pros and cons. For instance, :bdelete leaves the buffer in the hidden list, while :bwipeout removes global marks (see :help marks , and the description of uppercase marks).

    I haven't settled on these keybindings, I would sort of prefer that my ,bb was simply ,b (simply defining while leaving the others defined makes Vim pause to see if you'll enter more).

    Those shortcuts for :BufExplorer are actually the defaults for that plugin, but I have it written out so I can change them if I want to start using ,b without a hang.

    You didn't ask for this:

    If you still find Vim buffers a little awkward to use, try to combine the functionality with tabs and windows (until you get more comfortable?).

    " windows {{{3
    
    " window nav
    nnoremap <leader>w <C-w>
    nnoremap <M-h> <C-w>h
    nnoremap <M-j> <C-w>j
    nnoremap <M-k> <C-w>k
    nnoremap <M-l> <C-w>l
    " resize window
    nnoremap <C-h> <C-w><
    nnoremap <C-j> <C-w>+
    nnoremap <C-k> <C-w>-
    nnoremap <C-l> <C-w>>
    

    Notice how nice ,w is for a prefix. Also, I reserve Ctrl key for resizing, because Alt ( M- ) is hard to realize in all environments, and I don't have a better way to resize. I'm fine using ,w to switch windows.

    " tabs {{{3
    
    nnoremap <leader>t :tab
    nnoremap <M-n> :tabn<cr>
    nnoremap <M-p> :tabp<cr>
    nnoremap <C-Tab> :tabn<cr>
    nnoremap <C-S-Tab> :tabp<cr>
    nnoremap tn :tabe<CR>
    nnoremap te :tabe<Space><C-z><S-Tab>
    nnoremap tf :tabf<Space>
    nnoremap tc :tabc<CR>
    nnoremap to :tabo<CR>
    nnoremap tm :tabm<CR>
    nnoremap ts :tabs<CR>
    
    nnoremap th :tabr<CR>
    nnoremap tj :tabn<CR>
    nnoremap tk :tabp<CR>
    nnoremap tl :tabl<CR>
    
    " or, it may make more sense to use
    " nnoremap th :tabp<CR>
    " nnoremap tj :tabl<CR>
    " nnoremap tk :tabr<CR>
    " nnoremap tl :tabn<CR>
    

    In summary of my window and tabs keys, I can navigate both of them with Alt , which is actually pretty easy to reach. In other words:

    " (modifier) key choice explanation {{{3
    "
    "       KEYS        CTRL                  ALT            
    "       hjkl        resize windows        switch windows        
    "       np          switch buffer         switch tab      
    "
    " (resize windows is hard to do otherwise, so we use ctrl which works across
    " more environments. i can use ',w' for windowcmds o.w.. alt is comfortable
    " enough for fast and gui nav in tabs and windows. we use np for navs that 
    " are more linear, hjkl for navs that are more planar.) 
    "
    

    This way, if the Alt is working, you can actually hold it down while you find your "open" buffer pretty quickly, amongst the tabs and windows.


    There are many ways to solve this. The best is the one that WORKS for YOU. You have lots of fuzzy match plugins that help you navigate. The 2 things that impress me most are

    1) CtrlP or Unite's fuzzy buffer search

    2) LustyExplorer and/or LustyJuggler

    And the simplest :

    :map <F5> :ls<CR>:e #
    

    Pressing F5 lists all buffers; just type the buffer number to complete the :e #N command ( #N refers to buffer number N).

    [Oct 21, 2018] Favorite (G)Vim plugins/scripts?

    Dec 27, 2009 | stackoverflow.com
    What are your favorite (G)Vim plugins/scripts?

    community wiki 2 revs ,Jun 24, 2009 at 13:35

    Nerdtree

    The NERD tree allows you to explore your filesystem and to open files and directories. It presents the filesystem to you in the form of a tree which you manipulate with the keyboard and/or mouse. It also allows you to perform simple filesystem operations.

    The tree can be toggled easily with :NERDTreeToggle which can be mapped to a more suitable key. The keyboard shortcuts in the NERD tree are also easy and intuitive.

    Edit: Added synopsis

    SpoonMeiser ,Sep 17, 2008 at 19:32

    For those of us not wanting to follow every link to find out about each plugin, care to furnish us with a brief synopsis? – SpoonMeiser Sep 17 '08 at 19:32

    AbdullahDiaa ,Sep 10, 2012 at 19:51

    and NERDTree with NERDTreeTabs are awesome combination github.com/jistr/vim-nerdtree-tabs – AbdullahDiaa Sep 10 '12 at 19:51

    community wiki 2 revs ,May 27, 2010 at 0:08

    Tim Pope has some kickass plugins. I love his surround plugin.

    Taurus Olson ,Feb 21, 2010 at 18:01

    Surround is a great plugin for sure. – Taurus Olson Feb 21 '10 at 18:01

    Benjamin Oakes ,May 27, 2010 at 0:11

    Link to all his vim contributions: vim.org/account/profile.php?user_id=9012 – Benjamin Oakes May 27 '10 at 0:11

    community wiki SergioAraujo, Mar 15, 2011 at 15:35

    Pathogen plugin and more things commented by Steve Losh

    Patrizio Rullo ,Sep 26, 2011 at 12:11

    Pathogen is the FIRST plugin you have to install on every Vim installation! It resolves the plugin management problems every Vim developer has. – Patrizio Rullo Sep 26 '11 at 12:11

    Profpatsch ,Apr 12, 2013 at 8:53

    I would recommend switching to Vundle . It's better by a long shot and truly automates. You can give vim-addon-manager a try, too. – Profpatsch Apr 12 '13 at 8:53

    community wiki JPaget, Sep 15, 2008 at 20:47

    Taglist , a source code browser plugin for Vim, is currently the top rated plugin at the Vim website and is my favorite plugin.

    mindthief ,Jun 27, 2012 at 20:53

    A more recent alternative to this is Tagbar , which appears to have some improvements over Taglist. This blog post offers a comparison between the two plugins. – mindthief Jun 27 '12 at 20:53

    community wiki 1passenger, Nov 17, 2009 at 9:15

    I love snipMate . It's similar to snippetsEmu, but has a much more readable syntax (like TextMate).

    community wiki cschol, Aug 22, 2008 at 4:19

    A very nice grep replacement for GVim is Ack , a search plugin written in Perl that beats Vim's internal grep implementation and externally invoked greps, too. By default it also skips version-control directories (e.g. '.svn') in the project directory. This blog shows a way to integrate Ack with vim.
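
    Typical usage with the ack.vim plugin looks like this (the pattern and path are placeholders):

    :Ack TODO src/

    The matches land in the quickfix list, so :cnext and :cprev step through them.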

    FUD, Aug 27, 2013 at 15:50

    github.com/mileszs/ack.vim – FUD Aug 27 '13 at 15:50

    community wiki Dominic Dos Santos ,Sep 12, 2008 at 12:44

    A.vim is a great little plugin. It allows you to quickly switch between header and source files with a single command. The default is :A , but I remapped it to F2 to reduce keystrokes.

    community wiki 2 revs, Aug 25, 2008 at 15:06

    I really like the SuperTab plugin, it allows you to use the tab key to do all your insert completions.

    community wiki Greg Hewgill, Aug 25, 2008 at 19:23

    I have recently started using a plugin that highlights differences in your buffer from a previous version in your RCS system (Subversion, git, whatever). You just need to press a key to toggle the diff display on/off. You can find it here: http://github.com/ghewgill/vim-scmdiff . Patches welcome!

    Nathan Fellman, Sep 15, 2008 at 18:51

    Do you know if this supports bitkeeper? I looked on the website but couldn't even see whom to ask. – Nathan Fellman Sep 15 '08 at 18:51

    Greg Hewgill, Sep 16, 2008 at 9:26

    It doesn't explicitly support bitkeeper at the moment, but as long as bitkeeper has a "diff" command that outputs a normal patch file, it should be easy enough to add. – Greg Hewgill Sep 16 '08 at 9:26

    Yogesh Arora, Mar 10, 2010 at 0:47

    does it support clearcase – Yogesh Arora Mar 10 '10 at 0:47

    Greg Hewgill, Mar 10, 2010 at 1:39

    @Yogesh: No, it doesn't support ClearCase at this time. However, if you can add ClearCase support, a patch would certainly be accepted. – Greg Hewgill Mar 10 '10 at 1:39

    Olical ,Jan 23, 2013 at 11:05

    This version can be loaded via pathogen in a git submodule: github.com/tomasv/vim-scmdiff – Olical Jan 23 '13 at 11:05

    community wiki 4 revs, May 23, 2017 at 11:45

    1. Elegant (mini) buffer explorer - This is the multiple file/buffer manager I use. Takes very little screen space. It looks just like most IDEs where you have a top tab-bar with the files you've opened. I've tested some other similar plugins before, and this is my pick.
    2. TagList - Small file explorer, without the "extra" stuff the other file explorers have. Just lets you browse directories and open files with the "enter" key. Note that this has already been noted by previous commenters to your questions.
    3. SuperTab - Already noted by WMR in this post, looks very promising. It's an auto-completion replacement key for Ctrl-P.
    4. Desert256 color Scheme - Readable, dark one.
    5. Moria color scheme - Another good, dark one. Note that it's gVim only.
    6. Enhanced Python syntax - If you're using Python, this is an enhanced syntax version. Works better than the original. I'm not sure, but this might be already included in the newest version. Nonetheless, it's worth adding to your syntax folder if you need it.
    7. Enhanced JavaScript syntax - Same as the above.
    8. EDIT: Comments - Great little plugin to [un]comment chunks of text. Language recognition included ("#", "/", "/* .. */", etc.) .

    community wiki Konrad Rudolph, Aug 25, 2008 at 14:19

    Not a plugin, but I advise any Mac user to switch to the MacVim distribution which is vastly superior to the official port.

    As for plugins, I used VIM-LaTeX for my thesis and was very satisfied with the usability boost. I also like the Taglist plugin which makes use of the ctags library.

    community wiki Yariv ,Nov 25, 2010 at 19:58

    clang complete - the best c++ code completion I have seen so far. By using an actual compiler (that would be clang) the plugin is able to complete complex expressions including STL and smart pointers.

    community wiki Greg Bowyer, Jul 30, 2009 at 19:51

    No one said matchit yet ? Makes HTML / XML soup much nicer http://www.vim.org/scripts/script.php?script_id=39

    community wiki 2 revs, 2 users 91% ,Nov 24, 2011 at 5:18

    Tomas Restrepo posted on some great Vim scripts/plugins . He has also pointed out some nice color themes on his blog, too. Check out his Vim category .

    community wiki HaskellElephant ,Mar 29, 2011 at 17:59,

    With version 7.3, undo branches were added to Vim. It's a very powerful feature, but it was hard to use until Steve Losh made Gundo , which makes it usable with an ASCII representation of the undo tree and a diff of each change. A must for using undo branches.
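
    A minimal sketch for the vimrc (key choice assumed):

    nnoremap <F6> :GundoToggle<CR>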

    community wiki, Auguste ,Apr 20, 2009 at 8:05

    Matrix Mode .

    community wiki wilhelmtell ,Dec 10, 2010 at 19:11

    My latest favourite is Command-T . Granted, to install it you need to have Ruby support and you'll need to compile a C extension for Vim. But oy-yoy-yoy does this plugin make a difference in opening files in Vim!

    Victor Farazdagi, Apr 19, 2011 at 19:16

    Definitely! Don't let the Ruby + C compiling stop you; you will be amazed at how well this plugin enhances your toolset. I had been ignoring this plugin for too long, installed it today, and already find myself using NERDTree less and less. – Victor Farazdagi Apr 19 '11 at 19:16

    datentyp ,Jan 11, 2012 at 12:54

    With ctrlp now there is something as awesome as Command-T written in pure Vimscript! It's available at github.com/kien/ctrlp.vim – datentyp Jan 11 '12 at 12:54

    FUD ,Dec 26, 2012 at 4:48

    just my 2 cents... being a naive user of both plugins, with the first few characters of a file name I saw much better results with the Command-T plugin and a lot of false positives with CtrlP. – FUD Dec 26 '12 at 4:48

    community wiki f3lix, Mar 15, 2011 at 12:55

    Conque Shell : Run interactive commands inside a Vim buffer

    Conque is a Vim plugin which allows you to run interactive programs, such as bash on linux or powershell.exe on Windows, inside a Vim buffer. In other words it is a terminal emulator which uses a Vim buffer to display the program output.

    http://code.google.com/p/conque/

    http://www.vim.org/scripts/script.php?script_id=2771
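
    A minimal usage sketch (assuming Conque's standard commands):

    :ConqueTerm bash          open bash inside the current buffer
    :ConqueTermSplit bash     open bash in a horizontal split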

    community wiki 2 revs ,Nov 20, 2009 at 14:51

    The vcscommand plugin provides global ex commands for manipulating version-controlled source files, and it supports CVS, SVN and some other repositories.

    You can do almost all repository related tasks from with in vim:
    * Taking the diff of current buffer with repository copy
    * Adding new files
    * Reverting the current buffer to the repository copy by nullifying the local changes....
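
    For illustration, the tasks above map onto commands like these (assuming the plugin's standard command names):

    :VCSDiff       diff the current buffer against the repository copy
    :VCSAdd        schedule the current file for addition
    :VCSRevert     replace the buffer contents with the repository version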

    community wiki Sirupsen ,Nov 20, 2009 at 15:00

    Just gonna name a few I didn't see here, but which I still find extremely helpful:

    community wiki thestoneage ,Dec 22, 2011 at 16:25

    One plugin that is missing in the answers is NERDCommenter , which lets you do almost anything with comments: add, toggle, or remove them, and more. See this blog entry for some examples.

    community wiki james ,Feb 19, 2010 at 7:17

    I like taglist and fuzzyfinder, those are very cool plugins

    community wiki JAVH ,Aug 15, 2010 at 11:54

    TaskList

    This script is based on the Eclipse Task List. It will search the file for FIXME, TODO, and XXX (or a custom list) and put them in a handy list for you to browse; selecting an entry jumps to its location in the document, so you can see exactly where each tag is. Something like an interactive 'cw'

    community wiki Peter Hoffmann ,Aug 29, 2008 at 4:07

    I really love the snippetsEmu Plugin. It emulates some of the behaviour of Snippets from the OS X editor TextMate, in particular the variable bouncing and replacement behaviour.

    community wiki Anon ,Sep 11, 2008 at 10:20

    Zenburn color scheme and good fonts - Droid Sans Mono ( http://en.wikipedia.org/wiki/Droid_(font) ) on Linux, Consolas on Windows.
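
    In a vimrc this could look like the following (a sketch; font names and sizes are placeholders):

    colorscheme zenburn
    if has('gui_running')
      if has('win32')
        set guifont=Consolas:h11
      else
        set guifont=Droid\ Sans\ Mono\ 11
      endif
    endif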

    Gary Willoughby ,Jul 7, 2011 at 21:21

    Take a look at DejaVu Sans Mono too dejavu-fonts.org/wiki/Main_Page – Gary Willoughby Jul 7 '11 at 21:21

    Santosh Kumar ,Mar 28, 2013 at 4:48

    Droid Sans Mono makes capital M and 0 appear the same. – Santosh Kumar Mar 28 '13 at 4:48

    community wiki julienXX ,Jun 22, 2010 at 12:05

    If you're on a Mac, you got to use peepopen , fuzzyfinder on steroids.

    Khaja Minhajuddin ,Apr 5, 2012 at 9:24

    Command+T is a free alternative to this: github.com/wincent/Command-T – Khaja Minhajuddin Apr 5 '12 at 9:24

    community wiki Peter Stuifzand ,Aug 25, 2008 at 19:16

    I use the following two plugins all the time:

    Csaba_H ,Jun 24, 2009 at 13:47

    vimoutliner is really good for managing small pieces of information (from tasks/todo-s to links) – Csaba_H Jun 24 '09 at 13:47

    ThiefMaster ♦ ,Nov 25, 2010 at 20:35

    Adding some links/descriptions would be nice – ThiefMaster ♦ Nov 25 '10 at 20:35

    community wiki chiggsy ,Aug 26, 2009 at 18:22

    For vim I like a little help with completions. Vim has tons of completion modes, but really, I just want vim to complete anything it can, whenever it can.

    I hate typing ending quotes, but fortunately this plugin obviates the need for such misery.

    Those two are my heavy hitters.

    This one may step up to roam my code like an unquiet shade, but I've yet to try it.

    community wiki Brett Stahlman, Dec 11, 2009 at 13:28

    Txtfmt (The Vim Highlighter) Screenshots

    The Txtfmt plugin gives you a sort of "rich text" highlighting capability, similar to what is provided by RTF editors and word processors. You can use it to add colors (foreground and background) and formatting attributes (all combinations of bold, underline, italic, etc...) to your plain text documents in Vim.

    The advantage of this plugin over something like Latex is that with Txtfmt, your highlighting changes are visible "in real time", and as with a word processor, the highlighting is WYSIWYG. Txtfmt embeds special tokens directly in the file to accomplish the highlighting, so the highlighting is unaffected when you move the file around, even from one computer to another. The special tokens are hidden by the syntax; each appears as a single space. For those who have applied Vince Negri's conceal/ownsyntax patch, the tokens can even be made "zero-width".

    community wiki 2 revs, Dec 10, 2010 at 4:37

    tcomment

    "I map the "Command + /" keys so i can just comment stuff out while in insert mode imap :i

    [Oct 21, 2018] Duplicate a whole line in Vim

    Notable quotes:
    "... Do people not run vimtutor anymore? This is probably within the first five minutes of learning how to use Vim. ..."
    "... Can also use capital Y to copy the whole line. ..."
    "... I think the Y should be "copy from the cursor to the end" ..."
    "... In normal mode what this does is copy . copy this line to just below this line . ..."
    "... And in visual mode it turns into '<,'> copy '> copy from start of selection to end of selection to the line below end of selection . ..."
    "... I like: Shift + v (to select the whole line immediately and let you select other lines if you want), y, p ..."
    "... Multiple lines with a number in between: y7yp ..."
    "... 7yy is equivalent to y7y and is probably easier to remember how to do. ..."
    "... or :.,.+7 copy .+7 ..."
    "... When you press : in visual mode, it is transformed to '<,'> so it pre-selects the line range the visual selection spanned over ..."
    Oct 21, 2018 | stackoverflow.com

    sumek, Sep 16, 2008 at 15:02

    How do I duplicate a whole line in Vim in a similar way to Ctrl + D in IntelliJ IDEA/Resharper or Ctrl + Alt + / in Eclipse?

    dash-tom-bang, Feb 15, 2016 at 23:31

    Do people not run vimtutor anymore? This is probably within the first five minutes of learning how to use Vim. – dash-tom-bang Feb 15 '16 at 23:31

    Mark Biek, Sep 16, 2008 at 15:06

    yy or Y to copy the line
    or
    dd to delete (cutting) the line

    then

    p to paste the copied or deleted text after the current line
    or
    P to paste the copied or deleted text before the current line

    camflan, Sep 28, 2008 at 15:55

    Can also use capital Y to copy the whole line. – camflan Sep 28 '08 at 15:55

    nXqd, Jul 19, 2012 at 11:35

    @camflan I think the Y should be "copy from the cursor to the end" – nXqd Jul 19 '12 at 11:35

    Amir Ali Akbari, Oct 9, 2012 at 10:33

    and 2yy can be used to copy 2 lines (and for any other n) – Amir Ali Akbari Oct 9 '12 at 10:33

    zelk, Mar 9, 2014 at 13:29

    To copy two lines, it's even faster just to go yj or yk, especially since you don't double up on one character. Plus, yk is a backwards version that 2yy can't do, and you can put the number of lines to reach backwards in y9j or y2k, etc.. Only difference is that your count has to be n-1 for a total of n lines, but your head can learn that anyway. – zelk Mar 9 '14 at 13:29

    DarkWiiPlayer, Apr 13 at 7:26

    I know I'm late to the party, but whatever; I have this in my .vimrc:
    nnoremap <C-d> :copy .<CR>
    vnoremap <C-d> :copy '><CR>
    

    the :copy command just copies the selected line or the range (always whole lines) to below the line number given as its argument.

    In normal mode what this does is copy . copy this line to just below this line .

    And in visual mode it turns into '<,'> copy '> copy from start of selection to end of selection to the line below end of selection .

    yolenoyer, Apr 11 at 16:34

    I like to use this mapping:
    :nnoremap yp Yp

    because it makes it consistent to use alongside the native YP command.

    Gabe add a comment, Jul 14, 2009 at 4:45

    I like: Shift + v (to select the whole line immediately and let you select other lines if you want), y, p

    jedi, Feb 11 at 17:20

    If you would like to duplicate a line and paste it right away below the current line, just like Ctrl + Shift + D in Sublime, then you can add this to your .vimrc file.

    imap <S-C-d> <Esc>Yp

    jedi, Apr 14 at 17:48

    This works perfectly fine for me: imap <S-C-d> <Esc>Ypi in insert mode and nmap <S-C-d> <Esc>Yp in normal mode – jedi Apr 14 at 17:48

    Chris Penner, Apr 20, 2015 at 4:33

    Default is yyp, but I've been using this rebinding for a year or so and love it:

    " set Y to duplicate lines, works in visual mode as well.
    nnoremap Y yyp
    vnoremap Y y`>pgv

    yemu, Oct 12, 2013 at 18:23

    yyp - paste after

    yyP - paste before

    Mikk, Dec 4, 2015 at 9:09

    @A-B-B However, there is a minor difference here - which line your cursor will land on. – Mikk Dec 4 '15 at 9:09

    theschmitzer, Sep 16, 2008 at 15:16

    yyp - remember it with "yippee!"

    Multiple lines with a number in between: y7yp

    graywh, Jan 4, 2009 at 21:25

    7yy is equivalent to y7y and is probably easier to remember how to do. – graywh Jan 4 '09 at 21:25

    Nefrubyr, Jul 29, 2014 at 14:09

    y7yp (or 7yyp) is rarely useful; the cursor remains on the first line copied so that p pastes the copied lines between the first and second line of the source. To duplicate a block of lines use 7yyP – Nefrubyr Jul 29 '14 at 14:09

    DarkWiiPlayer, Apr 13 at 7:28

    @Nefrubyr or :.,.+7 copy .+7 :P – DarkWiiPlayer Apr 13 at 7:28

    Michael, May 12, 2016 at 14:54

    For someone who doesn't know vi, some answers from above might mislead him with phrases like "paste ... after/before current line ".
    It's actually "paste ... after/before cursor ".

    yy or Y to copy the line
    or
    dd to delete the line

    then

    p to paste the copied or deleted text after the cursor
    or
    P to paste the copied or deleted text before the cursor


    For more key bindings, you can visit this site: vi Complete Key Binding List

    ap-osd, Feb 10, 2016 at 13:23

    For those starting to learn vi, here is a good introduction to vi by listing side by side vi commands to typical Windows GUI Editor cursor movement and shortcut keys. It lists all the basic commands including yy (copy line) and p (paste after) or P (paste before).

    vi (Vim) for Windows Users

    pjz, Sep 16, 2008 at 15:04

    yy

    will yank the current line without deleting it

    dd

    will delete the current line

    p

    will put a line grabbed by either of the previous methods

    Benoit, Apr 17, 2012 at 15:17

    Normal mode: see other answers.

    The Ex way:
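
    A few illustrative :t (copy) invocations (a sketch; the targets are arbitrary):

    :t.          duplicate the current line below itself
    :t$          copy the current line to the end of the file
    :'<,'>t0     copy the visually selected lines to the top of the file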

    If you need to move instead of copying, use :m instead of :t .

    This can be really powerful if you combine it with :g or :v :
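
    For example (a sketch; the patterns are arbitrary):

    :g/^#/t$      append a copy of every line starting with '#' to the end of the file
    :v/error/m$   move every line NOT containing 'error' to the end of the file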

    Reference: :help range, :help :t, :help :g, :help :m and :help :v

    Benoit, Jun 30, 2012 at 14:17

    When you press : in visual mode, it is transformed to '<,'> so it pre-selects the line range the visual selection spanned over. So, in visual mode, :t0 will copy the lines at the beginning. – Benoit Jun 30 '12 at 14:17

    Niels Bom, Jul 31, 2012 at 8:21

    For the record: when you type a colon (:) you go into command line mode where you can enter Ex commands. vimdoc.sourceforge.net/htmldoc/cmdline.html Ex commands can be really powerful and terse. The yyp solutions are "Normal mode" commands. If you want to copy/move/delete a far-away line or range of lines an Ex command can be a lot faster. – Niels Bom Jul 31 '12 at 8:21

    Burak Erdem, Jul 8, 2016 at 16:55

    :t. is the exact answer to the question. – Burak Erdem Jul 8 '16 at 16:55

    Aaron Thoma, Aug 22, 2013 at 23:31

    Y is usually remapped to y$ (yank (copy) until end of line (from current cursor position, not beginning of line)) though. With this line in .vimrc : :nnoremap Y y$ – Aaron Thoma Aug 22 '13 at 23:31

    Kwondri, Sep 16, 2008 at 15:37

    If you want another way :-)

    "ayy this will store the line in buffer a

    "ap this will put the contents of buffer a at the cursor.

    There are many variations on this.

    "a5yy this will store the 5 lines in buffer a

    see http://www.vim.org/htmldoc/help.html for more fun

    frbl, Jun 21, 2015 at 21:04

    Thanks, I used this as a bind: map <Leader>d "ayy"ap – frbl Jun 21 '15 at 21:04

    Rook, Jul 14, 2009 at 4:37

    Another option would be to go with:
    nmap <C-d> mzyyp`z

    gives you the advantage of preserving the cursor position.

    ,Sep 18, 2008 at 20:32

    You can also try <C-x><C-l> , which repeats a previous line from insert mode and brings up a completion window with all of the matching lines. It works almost like <C-p>

    Jorge Gajon, May 11, 2009 at 6:38

    This is very useful, but to avoid having to press many keys I have mapped it to just CTRL-L, this is my map: inoremap ^L ^X^L – Jorge Gajon May 11 '09 at 6:38

    cori, Sep 16, 2008 at 15:06

    1 gotcha: when you use "p" to put the line, it puts it after the line your cursor is on, so if you want to add the line after the line you're yanking, don't move the cursor down a line before putting the new line.

    Ghoti, Jan 31, 2016 at 11:05

    or use capital P - put before – Ghoti Jan 31 '16 at 11:05

    [Oct 21, 2018] Indent multiple lines quickly in vi

    Oct 21, 2018 | stackoverflow.com

    Allain Lalonde, Oct 25, 2008 at 3:27

    Should be trivial, and it might even be in the help, but I can't figure out how to navigate it. How do I indent multiple lines quickly in vi?

    Greg Hewgill, Oct 25, 2008 at 3:28

    Use the > command. To indent 5 lines, 5>> . To mark a block of lines and indent it, Vjj> to indent 3 lines (vim only). To indent a curly-braces block, put your cursor on one of the curly braces and use >% .

    If you're copying blocks of text around and need to align the indent of a block in its new location, use ]p instead of just p . This aligns the pasted block with the surrounding text.

    Also, the shiftwidth setting allows you to control how many spaces to indent.

    akdom, Oct 25, 2008 at 3:31

    <shift>-v also works to select a line in Vim. – akdom Oct 25 '08 at 3:31

    R. Martinho Fernandes, Feb 15, 2009 at 17:26

    I use >i} (indent inner {} block). Works in vim. Not sure it works in vi. – R. Martinho Fernandes Feb 15 '09 at 17:26

    Kamran Bigdely, Feb 28, 2011 at 23:25

    My problem(in gVim) is that the command > indents much more than 2 blanks (I want just two blanks but > indent something like 5 blanks) – Kamran Bigdely Feb 28 '11 at 23:25

    Greg Hewgill, Mar 1, 2011 at 18:42

    @Kamran: See the shiftwidth setting for the way to change that. – Greg Hewgill Mar 1 '11 at 18:42

    Greg Hewgill, Feb 28, 2013 at 3:36

    @MattStevens: You can find extended discussion about this phenomenon here: meta.stackexchange.com/questions/9731/ – Greg Hewgill Feb 28 '13 at 3:36

    Michael Ekoka, Feb 15, 2009 at 5:42

    When you select a block and use > to indent, it indents then goes back to normal mode. I have this in my .vimrc file:
    vnoremap < <gv
    vnoremap > >gv

    It lets you indent your selection as many times as you want.

    sundar, Sep 1, 2009 at 17:14

    To indent the selection multiple times, you can simply press . to repeat the previous command. – sundar Sep 1 '09 at 17:14

    masukomi, Dec 6, 2013 at 21:24

    The problem with . in this situation is that you have to move your fingers. With @mike's solution (same one i use) you've already got your fingers on the indent key and can just keep whacking it to keep indenting rather than switching and doing something else. Using period takes longer because you have to move your hands and it requires more thought because it's a second, different, operation. – masukomi Dec 6 '13 at 21:24

    Johan, Jan 20, 2009 at 21:11

    A big selection would be:
    gg=G

    It is really fast, and everything gets indented ;-)

    asgs, Jan 28, 2014 at 21:57

    I've an XML file and turned on syntax highlighting. Typing gg=G just puts every line starting from position 1. All the white spaces have been removed. Is there anything else specific to XML? – asgs Jan 28 '14 at 21:57

    Johan, Jan 29, 2014 at 6:10

    stackoverflow.com/questions/7600860/

    Amanuel Nega, May 19, 2015 at 19:51

    I think set cindent should be in vimrc or should run :set cindent before running that command – Amanuel Nega May 19 '15 at 19:51

    Amanuel Nega, May 19, 2015 at 19:57

    I think cindent must be set first. and @asgs i think this only works for cstyle programming languages. – Amanuel Nega May 19 '15 at 19:57

    sqqqrly, Sep 28, 2017 at 23:59

    I use block-mode visual selection:

    This is not a uni-tasker. It works:

    oligofren, Mar 27 at 15:23

    This is cumbersome, but is the way to go if you do formatting outside of core VIM (for instance, using vim-prettier instead of the default indenting engine). Using > will otherwise royally screw up the formatting done by Prettier. – oligofren Mar 27 at 15:23

    sqqqrly, Jun 15 at 16:30

    Funny, I find it anything but cumbersome. This is not a uni-tasker! Learning this method has many uses beyond indenting. – sqqqrly Jun 15 at 16:30

    user4052054, Aug 17 at 17:50

    I find it better than the accepted answer, as I can see what is happening, the lines I'm selecting and the action I'm doing, and not just type some sort of vim incantation. – user4052054 Aug 17 at 17:50

    Sergio, Apr 25 at 7:14

    Suppose | represents the position of the cursor in Vim. If the text to be indented is enclosed in a code block like:
    int main() {
    line1
    line2|
    line3
    }
    

    you can do >i{ which means " indent ( > ) inside ( i ) block ( { ) " and get:

    int main() {
        line1
        line2|
        line3
    }

    Now suppose the lines are contiguous but outside a block, like:

    do
    line2|
    line3
    line4
    done
    

    To indent lines 2 thru 4 you can visually select the lines and type > . Or even faster you can do >2j to get:

    do
        line2|
        line3
        line4
    done
    

    Note that >Nj means indent from current line to N lines below. If the number of lines to be indented is large, it could take some seconds for the user to count the proper value of N . To save valuable seconds you can activate the option of relative number with set relativenumber (available since Vim version 7.3).

    Sagar Jain, Apr 18, 2014 at 18:41

    The master of all commands is
    gg=G

    This indents the entire file!

    And below are some of the simple and elegant commands used to indent lines quickly in Vim or gVim.

    To indent the current line
    ==

    To indent the all the lines below the current line

    =G

    To indent n lines below the current line

    n==

    For example, to indent 4 lines below the current line

    4==

    To indent a block of code, go to one of the braces and use command

    =%

    These are the simplest, yet powerful commands to indent multiple lines.

    rojomoke, Jul 30, 2014 at 15:48

    This is just in vim, not vi . – rojomoke Jul 30 '14 at 15:48

    Sagar Jain, Jul 31, 2014 at 3:40

    @rojomoke: No, it works in vi as well – Sagar Jain Jul 31 '14 at 3:40

    rojomoke, Jul 31, 2014 at 10:09

    Not on my Solaris or AIX boxes it doesn't. The equals key has always been one of my standard ad hoc macro assignments. Are you sure you're not looking at a vim that's been linked to as vi ? – rojomoke Jul 31 '14 at 10:09

    rojomoke, Aug 1, 2014 at 8:22

    Yeah, on Linux, vi is almost always a link to vim. Try running the :ve command inside vi. – rojomoke Aug 1 '14 at 8:22

    datelligence, Dec 28, 2015 at 17:28

    I love this kind of answers: clear, precise and succinct. Worked for me in Debian Jessie. Thanks, @SJain – datelligence Dec 28 '15 at 17:28

    kapil, Mar 1, 2015 at 13:20

    To indent every line in a file type, esc then G=gg

    zundarz, Sep 10, 2015 at 18:41

    :help left

    In ex mode you can use :left or :le to align lines a specified amount. Specifically, :left will left-align lines in the [range]. It sets the indent in the lines to [indent] (default 0).

    :%le3 or :%le 3 or :%left3 or :%left 3 will align the entire file by padding with three spaces.

    :5,7 le 3 will align lines 5 through 7 by padding them with 3 spaces.

    :le without any value or :le 0 will left align with a padding of 0.

    This works in vim and gvim .

    Subfuzion, Aug 11, 2017 at 22:02

    Awesome, just what I was looking for (a way to insert a specific number of spaces -- 4 spaces for markdown code -- to override my normal indent). In my case I wanted to indent a specific number of lines in visual mode, so shift-v to highlight the lines, then :'<,'>le4 to insert the spaces. Thanks! – Subfuzion Aug 11 '17 at 22:02

    Nykakin, Aug 21, 2015 at 13:33

    There is one more way that hasn't been mentioned yet - you can use the :norm i command to insert the given text at the beginning of each line. To insert 10 spaces before lines 2-10:
    :2,10norm 10i
    

    Remember that there has to be a space character at the end of the command - this will be the character we want to have inserted. We can also indent lines with any other text; for example, to indent every line in the file with 5 underscore characters:

    :%norm 5i_
    

    Or something even more fancy:

    :%norm 2i[ ]
    

    More practical example is commenting Bash/Python/etc code with # character:

    :1,20norm i#
    

    To un-indent (remove leading characters), use x instead of i . For example, to remove the first 5 characters from every line:

    :%norm 5x
    

    Eliethesaiyan, Jun 13, 2016 at 14:18

    this starts from the left side of the file...not the current position of the block – Eliethesaiyan Jun 13 '16 at 14:18

    John Sonderson, Jan 31, 2015 at 19:17

    Suppose you use 2 spaces to indent your code. Type:
    :set shiftwidth=2
    

    Then:
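
    For example (a sketch; with shiftwidth=2 each of these shifts by two spaces):

    >>      indent the current line
    3>>     indent the current line and the two below it
    Vjj>    visually select three lines and indent them
    <<      un-indent the current line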

    You get the idea.

    ( Empty lines will not get indented, which I think is kind of nice. )


    I found the answer in the (g)vim documentation for indenting blocks:

    :help visual-block
    /indent
    

    If you want to give a count to the command, do this just before typing the operator character: "v{move-around}3>" (move lines 3 indents to the right).

    Michael, Dec 19, 2014 at 20:18

    To indent all file by 4:
    esc 4G=G
    

    underscore_d, Oct 17, 2015 at 19:35

    ...what? 'indent by 4 spaces'? No, this jumps to line 4 and then indents everything from there to the end of the file, using the currently selected indent mode (if any). – underscore_d Oct 17 '15 at 19:35

    Abhishesh Sharma, Jul 15, 2014 at 9:22

    :line_num_start,line_num_end>
    

    e.g.

    14,21> shifts lines 14 through 21 by one tab
    

    Increase the '>' symbol for more tabs

    e.g.

    14,21>>> for 3 tabs
    

    HoldOffHunger, Dec 5, 2017 at 15:50

    There are clearly a lot of ways to solve this, but this is the easiest to implement, as line numbers show by default in vim and it doesn't require math. – HoldOffHunger Dec 5 '17 at 15:50

    rohitkadam19, May 7, 2013 at 7:13

    5== will re-indent 5 lines from the current cursor position; you can type any number before == , and it will re-indent that many lines. This is in command (normal) mode.

    gg=G will indent whole file from top to bottom.

    Kamlesh Karwande, Feb 6, 2014 at 4:04

    I don't know why it's so difficult to find a simple answer like this one...

    I myself had to struggle a lot to learn this; it's very simple.

    Edit your .vimrc file in your home directory and add this line:

    set cindent
    

    then, in the file you want to indent properly,

    in normal/command mode type

    10==   (this will indent 10 lines from the current cursor location )
    gg=G   (complete file will be properly indented)
    

    Michael Durrant, Nov 4, 2013 at 22:57

    Go to the start of the text

    Eric Leschinski, Dec 23, 2013 at 3:30

    How to indent highlighted code in vi immediately by a # of spaces:

    Option 1: Indent a block of code in vi to three spaces with Visual Block mode:

    1. Select the block of code you want to indent. Do this using Ctrl+V in normal mode and arrowing down to select text. While it is selected, enter : to give a command to the block of selected text.
    2. The following will appear in the command line: :'<,'>
    3. To set indent to 3 spaces, type le 3 and press enter. This is what appears: :'<,'>le 3
    4. The selected text is immediately indented to 3 spaces.

    Option 2: Indent a block of code in vi to three spaces with Visual Line mode:

    1. Open your file in VI.
    2. Put your cursor over some code
    3. Be in normal mode press the following keys:
      Vjjjj:le 3
      

      Interpretation of what you did:

      V means start selecting text.

      jjjj arrows down 4 lines, highlighting 4 lines.

      : tells vi you will enter an instruction for the highlighted text.

      le 3 means set the indent of the highlighted lines to 3 spaces.

      The selected code is immediately increased or decreased to three spaces indentation.

    Option 3: use Visual Block mode and special insert mode to increase indent:

    1. Open your file in VI.
    2. Put your cursor over some code
    3. Be in normal mode press the following keys:

      Ctrl+V

      jjjj
      

      Shift+i

      (press spacebar 5 times)

      Esc

      All the highlighted text is indented an additional 5 spaces.

    ire_and_curses, Mar 6, 2011 at 17:29

    This answer summarises the other answers and comments of this question, and adds extra information based on the Vim documentation and the Vim wiki . For conciseness, this answer doesn't distinguish between Vi and Vim-specific commands.

    In the commands below, "re-indent" means "indent lines according to your indentation settings ." shiftwidth is the primary variable that controls indentation.

    General Commands

    >>   Indent line by shiftwidth spaces
    <<   De-indent line by shiftwidth spaces
    5>>  Indent 5 lines
    5==  Re-indent 5 lines
    
    >%   Increase indent of a braced or bracketed block (place cursor on brace first)
    =%   Reindent a braced or bracketed block (cursor on brace)
    <%   Decrease indent of a braced or bracketed block (cursor on brace)
    ]p   Paste text, aligning indentation with surroundings
    
    =i{  Re-indent the 'inner block', i.e. the contents of the block
    =a{  Re-indent 'a block', i.e. block and containing braces
    =2a{ Re-indent '2 blocks', i.e. this block and containing block
    
    >i{  Increase inner block indent
    <i{  Decrease inner block indent
    

    You can replace { with } or B, e.g. =iB is a valid block indent command. Take a look at "Indent a Code Block" for a nice example to try these commands out on.

    Also, remember that

    .    Repeat last command
    

    , so indentation commands can be easily and conveniently repeated.

    Re-indenting complete files

    Another common situation is requiring indentation to be fixed throughout a source file:

    gg=G  Re-indent entire buffer
    

    You can extend this idea to multiple files:

    " Re-indent all your c source code:
    :args *.c
    :argdo normal gg=G
    :wall
    

    Or multiple buffers:

    " Re-indent all open buffers:
    :bufdo normal gg=G:wall
    

    In Visual Mode

    Vjj> Visually mark and then indent 3 lines
    

    In insert mode

    These commands apply to the current line:

    CTRL-t   insert indent at start of line
    CTRL-d   remove indent at start of line
    0 CTRL-d remove all indentation from line
    

    Ex commands

    These are useful when you want to indent a specific range of lines, without moving your cursor.

    :< and :> Given a range, apply indentation e.g.
    :4,8>   indent lines 4 to 8, inclusive
    

    Indenting using markers

    Another approach is via markers :

    ma     Mark top of block to indent as marker 'a'
    

    ...move cursor to end location

    >'a    Indent from marker 'a' to current location
    

    Variables that govern indentation

    You can set these in your .vimrc file .

    set expandtab       "Use softtabstop spaces instead of tab characters for indentation
    set shiftwidth=4    "Indent by 4 spaces when using >>, <<, == etc.
    set softtabstop=4   "Indent by 4 spaces when pressing <TAB>
    
    set autoindent      "Keep indentation from previous line
    set smartindent     "Automatically inserts indentation in some cases
    set cindent         "Like smartindent, but stricter and more customisable
    

    Vim has intelligent indentation based on filetype. Try adding this to your .vimrc:

    if has ("autocmd")
        " File type detection. Indent based on filetype. Recommended.
        filetype plugin indent on
    endif
    

    References

    Amit, Aug 10, 2011 at 13:26

    Both this answer and the one above it were great. But I +1'd this because it reminded me of the 'dot' operator, which repeats the last command. This is extremely useful when needing to indent an entire block several shiftwidths (or indentation levels) without needing to keep pressing >} . Thanks a lot – Amit Aug 10 '11 at 13:26

    Wipqozn, Aug 24, 2011 at 16:00

    5>> Indent 5 lines : This command indents the fifth line, not 5 lines. Could this be due to my VIM settings, or is your wording incorrect? – Wipqozn Aug 24 '11 at 16:00

    ire_and_curses, Aug 24, 2011 at 16:21

    @Wipqozn - That's strange. It definitely indents the next five lines for me, tested on Vim 7.2.330. – ire_and_curses Aug 24 '11 at 16:21

    Steve, Jan 6, 2012 at 20:13

    >42gg Indent from where you are to line 42. – Steve Jan 6 '12 at 20:13

    aqn, Mar 6, 2013 at 4:42

    Great summary! Also note that the "indent inside block" and "indent all block" (<i{ >a{ etc.) also works with parentheses and brackets: >a( <i] etc. (And while I'm at it, in addition to <>'s, they also work with d,c,y etc.) – aqn Mar 6 '13 at 4:42

    NickSoft, Nov 5, 2013 at 16:19

    I didn't find a method I use in the comments, so I'll share it (I think vim only):
    1. Esc to enter command mode
    2. Move to the first character of the last line you want to indent
    3. ctrl - v to start block select
    4. Move to the first character of the first line you want to indent
    5. shift - i to enter special insert mode
    6. type as many spaces/tabs as you need to indent to (2 for example)
    7. press Esc and spaces will appear in all lines

    This is useful when you don't want to change indent/tab settings in vimrc, or to remember to change them while editing.

    To unindent I use the same ctrl - v block select to select spaces and delete it with d .

    svec, Oct 25, 2008 at 4:21

    Also try this for C-style indentation; do :help = for more info:

    ={

    That will auto-indent the current code block you're in.

    Or just:

    ==

    to auto-indent the current line.

    underscore_d, Oct 17, 2015 at 19:39

    doesn't work for me, just dumps my cursor to the line above the opening brace of 'the current code block i'm in'. – underscore_d Oct 17 '15 at 19:39

    John La Rooy, Jul 2, 2013 at 7:24

    Using Python a lot, I frequently find myself needing to shift blocks by more than one indent. You can do this by using any of the block selection methods, and then just enter the number of indents you wish to jump right before the >

    Eg. V5j3> will indent 5 lines 3 times - which is 12 spaces if you use 4 spaces for indents

    Juan Lanus, Sep 18, 2012 at 14:12

    The beauty of vim's UI is that it's consistent. Editing commands are made up of the command and a cursor move. The cursor moves are always the same:

    So, in order to use vim you have to learn to move the cursor and remember a repertoire of commands like, for example, > to indent (and < to "outdent").
    Thus, for indenting the lines from the cursor position to the top of the screen you do >H, >G to indent to the bottom of the file.

    If, instead of typing >H, you type dH then you are deleting the same block of lines, cH for replacing it, etc.

    Some cursor movements fit better with specific commands. In particular, the % command is handy to indent a whole HTML or XML block.
    If the file has syntax highlighting ( :syn on ), then setting the cursor in the text of a tag (say, on the "i" of <div> ) and entering >% will indent up to the closing </div> tag.

    This is how vim works: one has to remember only the cursor movements and the commands, and how to mix them.
    So my answer to this question would be "go to one end of the block of lines you want to indent, and then type the > command and a movement to the other end of the block" if indent is interpreted as shifting the lines, = if indent is interpreted as in pretty-printing.

    aqn, Mar 6, 2013 at 4:38

    I would say that vi/vim is mostly consistent. For instance, D does not behave the same as S and Y! :) – aqn Mar 6 '13 at 4:38

    Kent Fredric, Oct 25, 2008 at 9:16

    Key-Presses for more visual people:
    1. Enter Command Mode:
      Escape
    2. Move around to the start of the area to indent:
      hjkl↑↓←→
    3. Start a block:
      v
    4. Move around to the end of the area to indent:
      hjkl↑↓←→
    5. (Optional) Type the number of indentation levels you want
      0..9
    6. Execute the indentation on the block:
      >

    Shane Reustle, Mar 10, 2011 at 22:24

    This is great, but it uses spaces and not tabs. Any possible way to fix this? – Shane Reustle Mar 10 '11 at 22:24

    Kent Fredric, Mar 16, 2011 at 8:33

    If its using spaces instead of tabs, then its probably because you have indentation set to use spaces. =). – Kent Fredric Mar 16 '11 at 8:33

    Kent Fredric, Mar 16, 2011 at 8:36

    When the 'expandtab' option is off (this is the default) Vim uses <Tab>s as much as possible to make the indent. ( :help :> ) – Kent Fredric Mar 16 '11 at 8:36

    Shane Reustle, Dec 2, 2012 at 3:17

    The only tab/space related vim setting I've changed is :set tabstop=3. It's actually inserting this every time I use >>: "<tab><space><space>". Same with indenting a block. Any ideas? – Shane Reustle Dec 2 '12 at 3:17

    Kent Fredric, Dec 2, 2012 at 17:08

    The three settings you want to look at for "spaces vs tabs" are 1. tabstop 2. shiftwidth 3. expandtab. You probably have "shiftwidth=5 noexpandtab", so a "tab" is 3 spaces, and an indentation level is "5" spaces, so it makes up the 5 with 1 tab, and 2 spaces. – Kent Fredric Dec 2 '12 at 17:08

    mda, Jun 4, 2012 at 5:12

    For me, the MacVim (visual) solution was to select with the mouse and press ">", after putting the following lines in "~/.vimrc" (since I like spaces instead of tabs):
    set expandtab
    set tabstop=2
    set shiftwidth=2
    

    Also it's useful to be able to call MacVim from the command line (Terminal.app), so I have the following helper directory "~/bin", where I place a script called "macvim":

    #!/usr/bin/env bash
    /usr/bin/open -a /Applications/MacPorts/MacVim.app $@
    

    And of course in "~/.bashrc":

    export PATH=$PATH:$HOME/bin
    

    Macports messes with "~/.profile" a lot, so the PATH environment variable can get quite long.

    jash, Feb 17, 2012 at 15:16

    >} or >{ indent from current line up to next paragraph

    <} or <{ same un-indent

    Eric Kigathi, Jan 4, 2012 at 0:41

    A quick way to do this using VISUAL MODE uses the same process as commenting a block of code.

    This is useful if you would prefer not to change your shiftwidth or use any set directives and is flexible enough to work with TABS or SPACES or any other character.

    1. Position cursor at the beginning on the block
    2. v to switch to -- VISUAL MODE --
    3. Select the text to be indented
    4. Type : to switch to the prompt
    5. Replacing with 3 leading spaces:

      :'<,'>s/^/   /g

    6. Or replacing with leading tabs:

      :'<,'>s/^/\t/g

    7. Brief Explanation:

      '<,'> - Within the Visually Selected Range

      s/^/   /g - Insert 3 spaces at the beginning of every line within the whole range

      (or)

      s/^/\t/g - Insert Tab at the beginning of every line within the whole range

    pankaj ukumar, Nov 11, 2009 at 17:33

    do this
    $vi .vimrc
    

    and add this line

    autocmd FileType cpp setlocal expandtab shiftwidth=4 softtabstop=4 cindent
    

    this is only for cpp files; you can do this for another file type as well, just by modifying the filetype...

    SteveO, Nov 10, 2010 at 19:16

    I like to mark text for indentation:
    1. go to beginning of line of text then type ma (a is the label from the 'm'ark: it could be any letter)
    2. go to end line of text and type mz (again z could be any letter)
    3. :'a,'z> or :'a,'z< will indent or outdent (is this a word?)
    4. Voila! the text is moved (empty lines remain empty with no spaces)

    PS: you can use :'a,'z technique to mark a range for any operation (d,y,s///, etc) where you might use lines, numbers, or %

    Paul Tomblin, Oct 25, 2008 at 4:08

    As well as the offered solutions, I like to do things a paragraph at a time with >}

    aqn, Mar 6, 2013 at 4:47

    Yup, and this is why one of my big peeves is white spaces on an otherwise empty line: they mess up vim's notion of a "paragraph". – aqn Mar 6 '13 at 4:47

    Daniel Spiewak, Oct 25, 2008 at 4:00

    In addition to the answer already given and accepted, it is also possible to place a marker and then indent everything from the current cursor to the marker. Thus, enter ma where you want the top of your indented block, cursor down as far as you need and then type >'a (note that " a " can be substituted for any valid marker name). This is sometimes easier than 5>> or vjjj> .

    user606723, Mar 17, 2011 at 15:31

    This is really useful. I am going to have to look up what all works with this. I know d'a and y'a, what else? – user606723 Mar 17 '11 at 15:31

    ziggy, Aug 25, 2014 at 14:14

    This is very useful as it avoids the need to count how many lines you want to indent. – ziggy Aug 25 '14 at 14:14

    [Oct 21, 2018] vim - how to move a block or column of text

    Oct 21, 2018 | stackoverflow.com

    how to move a block or column of text


    David.Chu.ca ,Mar 6, 2009 at 20:47

    I have the following text as a simple case:

    ...
    abc xxx 123 456
    wer xxx 345 678676
    ...
    

    what I need to move a block of text xxx to another location:

    ...
    abc 123 xxx 456
    wer 345 xxx 678676
    ...
    

    I think I use visual mode to block a column of text, what are the other commands to move the block to another location?

    Paul ,Mar 6, 2009 at 20:52

    You should use blockwise visual mode ( Ctrl + v ). Then d to delete block, p or P to paste block.
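
    For the sample above, one possible key sequence (a sketch; it assumes the cursor starts on the first 'x' of the first line):

    Ctrl+v    start blockwise visual mode
    j         extend the block down to the second line
    3l        extend the block right so it covers "xxx " (including the trailing space)
    d         delete the block
    w         move to the start of "456"
    P         paste the block before the cursor column on both lines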

    Klinger ,Mar 6, 2009 at 20:53

    Try the link .


    Marking text (visual mode)

    Visual commands

    Cut and Paste

    Kemin Zhou ,Nov 6, 2015 at 23:59

    One of the few useful commands I learned at the beginning of learning Vim is :1,3 mo 5 . This means move text lines 1 through 3 to after line 5.

    Júda Ronén ,Jan 18, 2017 at 21:20

    And you can select the lines in visual mode, then press : to get :'<,'> (equivalent to the :1,3 part in your answer), and add mo N . If you want to move a single line, just :mo N . If you are really lazy, you can omit the space (e.g. :mo5 ). Use marks with mo '{a-zA-Z} . – Júda Ronén Jan 18 '17 at 21:20

    Miles ,Jun 29, 2017 at 23:44

    just m also works – Miles Jun 29 '17 at 23:44

    John Ellinwood ,Mar 6, 2009 at 20:52

    1. In VIM, press Ctrl + V to go in Visual Block mode
    2. Select the required columns with your arrow keys and press x to cut them in the buffer.
    3. Move cursor to row 1 column 9 and press P (that's capital P) in command mode.
    4. Press Ctrl + Shift + b to get in and out of it. ( source )

    SergioAraujo ,Jan 4 at 21:49

    Using an external command "awk".

    :%!awk '{print $1,$3,$2,$4}'
    

    With pure vim

    :%s,\v(\w+) (\w+) (\w+) (\w+),\1 \3 \2 \4,g
    

    Another vim solution using global command

    :g/./normal wdwwP
    

    [Oct 21, 2018] What is your most productive shortcut with Vim?

    Notable quotes:
    "... less productive ..."
    "... column oriented ..."
    Feb 17, 2013 | stackoverflow.com
    I've heard a lot about Vim, both pros and cons. It really seems you should be (as a developer) faster with Vim than with any other editor. I'm using Vim to do some basic stuff and I'm at best 10 times less productive with Vim.

    The only two things you should care about when you talk about speed (you may not care enough about them, but you should) are:

    1. Using alternatively left and right hands is the fastest way to use the keyboard.
    2. Never touching the mouse is the second way to be as fast as possible. It takes ages for you to move your hand, grab the mouse, move it, and bring it back to the keyboard (and you often have to look at the keyboard to be sure you returned your hand properly to the right place)

    Here are two examples demonstrating why I'm far less productive with Vim.

    Copy/Cut & paste. I do it all the time. With all the contemporary editors you press Shift with the left hand, and you move the cursor with your right hand to select text. Then Ctrl + C copies, you move the cursor and Ctrl + V pastes.

    With Vim it's horrible:

    Another example? Search & replace.

    And everything with Vim is like that: it seems I don't know how to handle it the right way.

    NB : I've already read the Vim cheat sheet :)

    My question is: What is the way you use Vim that makes you more productive than with a contemporary editor?

    community wiki 18 revs, 16 users 64%, Dec 22, 2011 at 11:43

    Your problem with Vim is that you don't grok vi .

    You mention cutting with yy and complain that you almost never want to cut whole lines. In fact programmers, editing source code, very often want to work on whole lines, ranges of lines and blocks of code. However, yy is only one of many way to yank text into the anonymous copy buffer (or "register" as it's called in vi ).

    The "Zen" of vi is that you're speaking a language. The initial y is a verb. The statement yy is a synonym for y_ . The y is doubled up to make it easier to type, since it is such a common operation.

    This can also be expressed as dd P (delete the current line and paste a copy back into place; leaving a copy in the anonymous register as a side effect). The y and d "verbs" take any movement as their "subject." Thus yW is "yank from here (the cursor) to the end of the current/next (big) word" and y'a is "yank from here to the line containing the mark named ' a '."

    If you only understand basic up, down, left, and right cursor movements then vi will be no more productive than a copy of "notepad" for you. (Okay, you'll still have syntax highlighting and the ability to handle files larger than a piddling ~45KB or so; but work with me here).

    vi has 26 "marks" and 26 "registers." A mark is set to any cursor location using the m command. Each mark is designated by a single lower case letter. Thus ma sets the ' a ' mark to the current location, and mz sets the ' z ' mark. You can move to the line containing a mark using the ' (single quote) command. Thus 'a moves to the beginning of the line containing the ' a ' mark. You can move to the precise location of any mark using the ` (backquote) command. Thus `z will move directly to the exact location of the ' z ' mark.
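
    A compact recap of the mark commands just described (normal-mode keystrokes):

    ma     set mark 'a' at the cursor position
    'a     jump to the beginning of the line holding mark 'a'
    `a     jump to the exact position of mark 'a'
    d`a    delete from the cursor to the exact position of mark 'a'
    y'a    yank whole lines, from the cursor's line through mark 'a''s line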

    Because these are "movements" they can also be used as subjects for other "statements."

    So, one way to cut an arbitrary selection of text would be to drop a mark (I usually use ' a ' as my "first" mark, ' z ' as my next mark, ' b ' as another, and ' e ' as yet another (I don't recall ever having interactively used more than four marks in 15 years of using vi ; one creates one's own conventions regarding how marks and registers are used by macros that don't disturb one's interactive context). Then we go to the other end of our desired text; we can start at either end, it doesn't matter. Then we can simply use d`a to cut or y`a to copy. Thus the whole process has a 5 keystrokes overhead (six if we started in "insert" mode and needed to Esc out command mode). Once we've cut or copied then pasting in a copy is a single keystroke: p .

    I say that this is one way to cut or copy text. However, it is only one of many. Frequently we can more succinctly describe the range of text without moving our cursor around and dropping a mark. For example if I'm in a paragraph of text I can use { and } movements to the beginning or end of the paragraph respectively. So, to move a paragraph of text I cut it using { d} (3 keystrokes). (If I happen to already be on the first or last line of the paragraph I can then simply use d} or d{ respectively.)

    The notion of "paragraph" defaults to something which is usually intuitively reasonable. Thus it often works for code as well as prose.

    Frequently we know some pattern (regular expression) that marks one end or the other of the text in which we're interested. Searching forwards or backwards are movements in vi . Thus they can also be used as "subjects" in our "statements." So I can use d/foo to cut from the current line to the next line containing the string "foo" and y?bar to copy from the current line to the most recent (previous) line containing "bar." If I don't want whole lines I can still use the search movements (as statements of their own), drop my mark(s) and use the `x commands as described previously.

    In addition to "verbs" and "subjects" vi also has "objects" (in the grammatical sense of the term). So far I've only described the use of the anonymous register. However, I can use any of the 26 "named" registers by prefixing the "object" reference with " (the double quote modifier). Thus if I use "add I'm cutting the current line into the ' a ' register and if I use "by/foo then I'm yanking a copy of the text from here to the next line containing "foo" into the ' b ' register. To paste from a register I simply prefix the paste with the same modifier sequence: "ap pastes a copy of the ' a ' register's contents into the text after the cursor and "bP pastes a copy from ' b ' to before the current line.
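
    And a compact recap of named registers (the register names are arbitrary):

    "add    cut the current line into register 'a'
    "ayy    yank the current line into register 'a'
    "Ayy    yank another line and append it to register 'a' (a capital letter appends)
    "ap     paste register 'a' after the cursor
    "bP     paste register 'b' before the current line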

    This notion of "prefixes" also adds the analogs of grammatical "adjectives" and "adverbs' to our text manipulation "language." Most commands (verbs) and movement (verbs or objects, depending on context) can also take numeric prefixes. Thus 3J means "join the next three lines" and d5} means "delete from the current line through the end of the fifth paragraph down from here."

    This is all intermediate level vi . None of it is Vim specific and there are far more advanced tricks in vi if you're ready to learn them. If you were to master just these intermediate concepts then you'd probably find that you rarely need to write any macros because the text manipulation language is sufficiently concise and expressive to do most things easily enough using the editor's "native" language.


    A sampling of more advanced tricks:

    There are a number of : commands, most notably the :% s/foo/bar/g global substitution technique. (That's not advanced but other : commands can be). The whole : set of commands was historically inherited by vi 's previous incarnations as the ed (line editor) and later the ex (extended line editor) utilities. In fact vi is so named because it's the visual interface to ex .

    : commands normally operate over lines of text. ed and ex were written in an era when terminal screens were uncommon and many terminals were "teletype" (TTY) devices. So it was common to work from printed copies of the text, using commands through an extremely terse interface (common connection speeds were 110 baud, or, roughly, 11 characters per second -- which is slower than a fast typist; lags were common on multi-user interactive sessions; additionally there was often some motivation to conserve paper).

    So the syntax of most : commands includes an address or range of addresses (line number) followed by a command. Naturally one could use literal line numbers: :127,215 s/foo/bar to change the first occurrence of "foo" into "bar" on each line between 127 and 215. One could also use some abbreviations such as . or $ for current and last lines respectively. One could also use relative prefixes + and - to refer to offsets after or before the current line, respectively. Thus: :.,$j meaning "from the current line to the last line, join them all into one line". :% is synonymous with :1,$ (all the lines).

    The :... g and :... v commands bear some explanation as they are incredibly powerful. :... g is a prefix for "globally" applying a subsequent command to all lines which match a pattern (regular expression) while :... v applies such a command to all lines which do NOT match the given pattern ("v" from "conVerse"). As with other ex commands these can be prefixed by addressing/range references. Thus :.,+21g/foo/d means "delete any lines containing the string "foo" from the current one through the next 21 lines" while :.,$v/bar/d means "from here to the end of the file, delete any lines which DON'T contain the string "bar."

    It's interesting that the common Unix command grep was actually inspired by this ex command (and is named after the way in which it was documented). The ex command :g/re/p (grep) was the way they documented how to "globally" "print" lines containing a "regular expression" (re). When ed and ex were used, the :p command was one of the first that anyone learned and often the first one used when editing any file. It was how you printed the current contents (usually just one page full at a time using :.,+25p or some such).

    Note that :% g/.../d or (its reVerse/conVerse counterpart: :% v/.../d are the most common usage patterns. However there are couple of other ex commands which are worth remembering:

    We can use m to move lines around, and j to join lines. For example if you have a list and you want to separate all the stuff matching (or conversely NOT matching some pattern) without deleting them, then you can use something like: :% g/foo/m$ ... and all the "foo" lines will have been moved to the end of the file. (Note the other tip about using the end of your file as a scratch space). This will have preserved the relative order of all the "foo" lines while having extracted them from the rest of the list. (This would be equivalent to doing something like: 1G!GGmap!Ggrep foo<ENTER>1G:1,'a g/foo'/d (copy the file to its own tail, filter the tail through grep, and delete all the stuff from the head).

    To join lines usually I can find a pattern for all the lines which need to be joined to their predecessor (all the lines which start with "^ " rather than "^ * " in some bullet list, for example). For that case I'd use: :% g/^ /-1j (for every matching line, go up one line and join them). (BTW: for bullet lists trying to search for the bullet lines and join to the next doesn't work for a couple reasons ... it can join one bullet line to another, and it won't join any bullet line to all of its continuations; it'll only work pairwise on the matches).

    Almost needless to mention, you can use our old friend s (substitute) with the g and v (global/converse-global) commands. Usually you don't need to do so. However, consider some case where you want to perform a substitution only on lines matching some other pattern. Often you can use a complicated pattern with captures and use back references to preserve the portions of the lines that you DON'T want to change. However, it will often be easier to separate the match from the substitution: :% g/foo/s/bar/zzz/g -- for every line containing "foo", substitute all "bar" with "zzz." (Something like :% s/\(.*foo.*\)bar\(.*\)/\1zzz\2/g would only work for those instances of "bar" which were PRECEDED by "foo" on the same line; it's ungainly enough already, and would have to be mangled further to catch all the cases where "bar" preceded "foo".)

    The point is that there is more than just p, s, and d in the ex command set.

    The : addresses can also refer to marks. Thus you can use: :'a,'bg/foo/j to join any line containing the string foo to its subsequent line, if it lies between the lines marked 'a' and 'b'. (Yes, all of the preceding ex command examples can be limited to subsets of the file's lines by prefixing them with these sorts of addressing expressions.)

    That's pretty obscure (I've only used something like that a few times in the last 15 years). However, I'll freely admit that I've often done things iteratively and interactively that could probably have been done more efficiently if I'd taken the time to think out the correct incantation.

    Another very useful vi or ex command is :r to read in the contents of another file. Thus: :r foo inserts the contents of the file named "foo" at the current line.

    More powerful is the :r! command. This reads the results of a command. It's the same as suspending the vi session, running a command, redirecting its output to a temporary file, resuming your vi session, and reading in the contents from the temp. file.
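    For example (standard Vim/ex usage; these particular commands are my own illustration, not taken from the original text), you can pull a date stamp or a directory listing straight into the buffer:

    :r !date
    :r !ls -l *.c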

    Even more powerful are the ! (bang) and :... ! ( ex bang) commands. These also execute external commands and read the results into the current text. However, they also filter selections of our text through the command! Thus we can sort all the lines in our file using 1G!Gsort ( G is the vi "goto" command; it defaults to going to the last line of the file, but can be prefixed by a line number, such as 1, the first line). This is equivalent to the ex variant :1,$!sort . Writers often use ! with the Unix fmt or fold utilities for reformatting or "word wrapping" selections of text. A very common macro is {!}fmt (reformat the current paragraph). Programmers sometimes use it to run their code, or just portions of it, through indent or other code reformatting tools.

    Using the :r! and ! commands means that any external utility or filter can be treated as an extension of our editor. I have occasionally used these with scripts that pulled data from a database, or with wget or lynx commands that pulled data off a website, or ssh commands that pulled data from remote systems.

    Another useful ex command is :so (short for :source ). This reads the contents of a file as a series of commands. When you start vi it normally, implicitly, performs a :source on your ~/.exinitrc file (and Vim usually does this on ~/.vimrc, naturally enough). The upshot is that you can change your editor profile on the fly by simply sourcing in a new set of macros, abbreviations, and editor settings. If you're sneaky you can even use this as a trick for storing sequences of ex editing commands to apply to files on demand.

    For example I have a seven line file (36 characters) which runs a file through wc, and inserts a C-style comment at the top of the file containing that word count data. I can apply that "macro" to a file by using a command like: vim +'so mymacro.ex' ./mytarget
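    A hypothetical reconstruction of such a macro file (this is a sketch of the idea using standard ex commands, not the author's actual 36-character file) might look like:

    " hypothetical mymacro.ex: prepend a C-style comment containing wc output for this file
    0r !wc %
    1s/^/\/* /
    1s/$/ *\//
    wq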

    (The + command line option to vi and Vim is normally used to start the editing session at a given line number. However it's a little known fact that one can follow the + by any valid ex command/expression, such as a "source" command as I've done here; for a simple example I have scripts which invoke: vi +'/foo/d|wq!' ~/.ssh/known_hosts to remove an entry from my SSH known hosts file non-interactively while I'm re-imaging a set of servers).

    Usually it's far easier to write such "macros" using Perl, AWK, or sed (which is, in fact, like grep, a utility inspired by the ed command).

    The @ command is probably the most obscure vi command. In occasionally teaching advanced systems administration courses for close to a decade I've met very few people who've ever used it. @ executes the contents of a register as if it were a vi or ex command.
    Example: I often use: :r!locate ... to find some file on my system and read its name into my document. From there I delete any extraneous hits, leaving only the full path to the file I'm interested in. Rather than laboriously Tab -ing through each component of the path (or worse, if I happen to be stuck on a machine without Tab completion support in its copy of vi ) I just use:

    1. 0i:r (to turn the current line into a valid :r command),
    2. "cdd (to delete the line into the "c" register) and
    3. @c execute that command.

    That's only 10 keystrokes (and the expression "cdd @c is effectively a finger macro for me, so I can type it almost as quickly as any common six letter word).


    A sobering thought

    I've only scratched the surface of vi 's power and none of what I've described here is even part of the "improvements" for which vim is named! All of what I've described here should work on any old copy of vi from 20 or 30 years ago.

    There are people who have used considerably more of vi 's power than I ever will.

    Jim Dennis, Feb 12, 2010 at 4:08

    @Wahnfieden -- grok is exactly what I meant: en.wikipedia.org/wiki/Grok (It's apparently even in the OED --- the closest we anglophones have to a canonical lexicon). To "grok" an editor is to find yourself using its commands fluently ... as if they were your natural language. – Jim Dennis Feb 12 '10 at 4:08

    knittl, Feb 27, 2010 at 13:15

    wow, a very well written answer! i couldn't agree more, although i use the @ command a lot (in combination with q : record macro) – knittl Feb 27 '10 at 13:15

    Brandon Rhodes, Mar 29, 2010 at 15:26

    Superb answer that utterly redeems a really horrible question. I am going to upvote this question, that normally I would downvote, just so that this answer becomes easier to find. (And I'm an Emacs guy! But this way I'll have somewhere to point new folks who want a good explanation of what vi power users find fun about vi. Then I'll tell them about Emacs and they can decide.) – Brandon Rhodes Mar 29 '10 at 15:26

    Marko, Apr 1, 2010 at 14:47

    Can you make a website and put this tutorial there, so it doesn't get buried here on stackoverflow? I have yet to read a better introduction to vi than this. – Marko Apr 1 '10 at 14:47

    CMS, Aug 2, 2009 at 8:27

    You are talking about text selecting and copying; I think that you should have a look at the Vim Visual Mode .

    In the visual mode, you are able to select text using Vim commands, then you can do whatever you want with the selection.

    Consider the following common scenarios (the standard keystrokes for each are shown after this list):

    You need to select to the next matching parenthesis.

    You want to select text between quotes.

    You want to select a curly brace block (very common in C-style languages).

    You want to select the entire file.
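    The keystrokes themselves were lost in this copy; the standard Vim commands for these scenarios (stock defaults, not quoted from the original answer) are:

    v%      select up to the matching parenthesis
    vi"     select the text inside the quotes (va" includes the quotes)
    vi{     select the inside of a curly brace block (viB also works)
    ggVG    select the entire file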

    Visual block selection is another really useful feature; it allows you to select a rectangular area of text. Just press Ctrl - V to start it, then select the text block you want and perform any type of operation such as yank, delete, paste, edit, etc. It's great for editing column-oriented text.

    finnw, Aug 2, 2009 at 8:49

    Every editor has something like this, it's not specific to vim. – finnw Aug 2 '09 at 8:49

    guns, Aug 2, 2009 at 9:54

    Yes, but it was a specific complaint of the poster. Visual mode is Vim's best method of direct text-selection and manipulation. And since vim's buffer traversal methods are superb, I find text selection in vim fairly pleasurable. – guns Aug 2 '09 at 9:54

    Hamish Downer, Mar 16, 2010 at 13:34

    I think it is also worth mentioning Ctrl-V to select a block - ie an arbitrary rectangle of text. When you need it it's a lifesaver. – Hamish Downer Mar 16 '10 at 13:34

    CMS, Apr 2, 2010 at 2:07

    @viksit: I'm using Camtasia, but there are plenty of alternatives: codinghorror.com/blog/2006/11/screencasting-for-windows.html – CMS Apr 2 '10 at 2:07

    Nathan Long, Mar 1, 2011 at 19:05

    Also, if you've got a visual selection and want to adjust it, o will hop to the other end. So you can move both the beginning and the end of the selection as much as you like. – Nathan Long Mar 1 '11 at 19:05

    community wiki
    12 revs, 3 users 99%
    ,Oct 29, 2012 at 18:51

    Some productivity tips:

    Smart movements

    Quick editing commands

    Combining commands

    Most commands accept a count and a direction, for example:
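    (The examples themselves were stripped from this copy; typical count-plus-motion combinations, standard Vim, look like the following.)

    3dw     delete the next three words
    5j      move five lines down
    2dd     delete two lines
    4yy     yank (copy) four lines
    d$      delete to the end of the line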

    Useful programmer commands

    Macro recording

    By using very specific commands and movements, VIM can replay those exact actions for the next lines. (e.g. A for append-to-end, b / e to move the cursor to the begin or end of a word respectively)
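    As a minimal sketch of macro recording (standard Vim; the concrete keystrokes are my own illustration, not part of the original answer): record an edit once, then replay it on the following lines.

    qa       start recording into register a
    A;<Esc>  append a ";" to the end of the current line
    j        move to the next line
    q        stop recording
    5@a      replay the macro on the next five lines
    @@       repeat the last macro once more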

    Example of well built settings

    # reset to vim-defaults
    if &compatible          # only if not set before:
      set nocompatible      # use vim-defaults instead of vi-defaults (easier, more user friendly)
    endif
    
    # display settings
    set background=dark     # enable for dark terminals
    set nowrap              # dont wrap lines
    set scrolloff=2         # 2 lines above/below cursor when scrolling
    set number              # show line numbers
    set showmatch           # show matching bracket (briefly jump)
    set showmode            # show mode in status bar (insert/replace/...)
    set showcmd             # show typed command in status bar
    set ruler               # show cursor position in status bar
    set title               # show file in titlebar
    set wildmenu            # completion with menu
    set wildignore=*.o,*.obj,*.bak,*.exe,*.py[co],*.swp,*~,*.pyc,.svn
    set laststatus=2        # use 2 lines for the status bar
    set matchtime=2         # show matching bracket for 0.2 seconds
    set matchpairs+=<:>     # specially for html
    
    # editor settings
    set esckeys             # map missed escape sequences (enables keypad keys)
    set ignorecase          # case insensitive searching
    set smartcase           # but become case sensitive if you type uppercase characters
    set smartindent         # smart auto indenting
    set smarttab            # smart tab handling for indenting
    set magic               # change the way backslashes are used in search patterns
    set bs=indent,eol,start # Allow backspacing over everything in insert mode
    
    set tabstop=4           # number of spaces a tab counts for
    set shiftwidth=4        # spaces for autoindents
    #set expandtab           # turn tabs into spaces
    
    set fileformat=unix     # file mode is unix
    #set fileformats=unix,dos    # only detect unix file format, displays that ^M with dos files
    
    # system settings
    set lazyredraw          # no redraws in macros
    set confirm             # get a dialog when :q, :w, or :wq fails
    set nobackup            # no backup~ files.
    set viminfo='20,\"500   # remember copy registers after quitting in the .viminfo file -- 20 jump links, regs up to 500 lines
    set hidden              # remember undo after quitting
    set history=50          # keep 50 lines of command history
    set mouse=v             # use mouse in visual mode (not normal,insert,command,help mode)
    
    
    # color settings (if terminal/gui supports it)
    if &t_Co > 2 || has("gui_running")
      syntax on          # enable colors
      set hlsearch       # highlight search (very useful!)
      set incsearch      # search incremently (search while typing)
    endif
    
    # paste mode toggle (needed when using autoindent/smartindent)
    map <F10> :set paste<CR>
    map <F11> :set nopaste<CR>
    imap <F10> <C-O>:set paste<CR>
    imap <F11> <nop>
    set pastetoggle=<F11>
    
    # Use of the filetype plugins, auto completion and indentation support
    filetype plugin indent on
    
    # file type specific settings
    if has("autocmd")
      # For debugging
      #set verbose=9
    
      # if bash is sh.
      let bash_is_sh=1
    
      # change to directory of current file automatically
      autocmd BufEnter * lcd %:p:h
    
      # Put these in an autocmd group, so that we can delete them easily.
      augroup mysettings
        au FileType xslt,xml,css,html,xhtml,javascript,sh,config,c,cpp,docbook set smartindent shiftwidth=2 softtabstop=2 expandtab
        au FileType tex set wrap shiftwidth=2 softtabstop=2 expandtab
    
    # Conform to PEP8
        au FileType python set tabstop=4 softtabstop=4 expandtab shiftwidth=4 cinwords=if,elif,else,for,while,try,except,finally,def,class
      augroup END
    
      augroup perl
        # reset (disable previous 'augroup perl' settings)
        au!  
    
        au BufReadPre,BufNewFile
        \ *.pl,*.pm
        \ set formatoptions=croq smartindent shiftwidth=2 softtabstop=2 cindent cinkeys='0{,0},!^F,o,O,e' " tags=./tags,tags,~/devel/tags,~/devel/C
        # formatoption:
        #   t - wrap text using textwidth
        #   c - wrap comments using textwidth (and auto insert comment leader)
        #   r - auto insert comment leader when pressing <return> in insert mode
        #   o - auto insert comment leader when pressing 'o' or 'O'.
        #   q - allow formatting of comments with "gq"
        #   a - auto formatting for paragraphs
        #   n - auto wrap numbered lists
        #   
      augroup END
    
    
      # Always jump to the last known cursor position. 
      # Don't do it when the position is invalid or when inside
      # an event handler (happens when dropping a file on gvim). 
      autocmd BufReadPost * 
        \ if line("'\"") > 0 && line("'\"") <= line("$") | 
        \   exe "normal g`\"" | 
        \ endif 
    
    endif # has("autocmd")
    

    The settings can be stored in ~/.vimrc, or system-wide in /etc/vimrc.local, and then be read from the /etc/vimrc file using:

    source /etc/vimrc.local
    

    (you'll have to replace the # comment character with " to make it work in Vim; I used # here only to get proper syntax highlighting).

    The commands I've listed here are pretty basic, and the main ones I use so far. They already make me quite a bit more productive, without having to know all the fancy stuff.

    naught101, Apr 28, 2012 at 2:09

    Better than '. is g;, which jumps back through the changelist . Goes to the last edited position, instead of last edited line – naught101 Apr 28 '12 at 2:09

    community wiki
    5 revs, 4 users 53%
    ,Apr 12, 2012 at 7:46

    The Control + R mechanism is very useful :-) In either insert mode or command-line mode (i.e. on the : line when typing commands), press Ctrl + R followed by a numbered or named register to insert its contents:
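    (The register list was lost in this copy; these are standard Vim registers, not quoted from the original answer.)

    <C-R>"   insert the unnamed register (last yank or delete)
    <C-R>0   insert the most recent yank
    <C-R>%   insert the current file name
    <C-R>/   insert the last search pattern
    <C-R>:   insert the last command-line
    <C-R>=   prompt for an expression and insert its result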

    See :help i_CTRL-R and :help c_CTRL-R for more details, and snoop around nearby for more CTRL-R goodness.

    vdboor, Jun 3, 2010 at 9:08

    FYI, this refers to Ctrl+R in insert mode . In normal mode, Ctrl+R is redo. – vdboor Jun 3 '10 at 9:08

    Aryeh Leib Taurog, Feb 26, 2012 at 19:06

    +1 for current/alternate file name. Control-A also works in insert mode for last inserted text, and Control-@ to both insert last inserted text and immediately switch to normal mode. – Aryeh Leib Taurog Feb 26 '12 at 19:06

    community wiki
    Benson
    , Apr 1, 2010 at 3:44

    Vim Plugins

    There are a lot of good answers here, and one amazing one about the zen of vi. One thing I don't see mentioned is that vim is extremely extensible via plugins. There are scripts and plugins to make it do all kinds of crazy things the original author never considered. Here are a few examples of incredibly handy vim plugins:

    rails.vim

    Rails.vim is a plugin written by tpope. It's an incredible tool for people doing Rails development. It does magical context-sensitive things that allow you to easily jump from a method in a controller to the associated view, over to a model, and down to unit tests for that model. It has saved me dozens if not hundreds of hours as a Rails developer.

    gist.vim

    This plugin allows you to select a region of text in visual mode and type a quick command to post it to gist.github.com . This allows for easy pastebin access, which is incredibly handy if you're collaborating with someone over IRC or IM.

    space.vim

    This plugin provides special functionality to the spacebar. It turns the spacebar into something analogous to the period, but instead of repeating actions it repeats motions. This can be very handy for moving quickly through a file in a way you define on the fly.

    surround.vim

    This plugin gives you the ability to work with text that is delimited in some fashion. It gives you objects which denote things inside of parens, things inside of quotes, etc. It can come in handy for manipulating delimited text.

    supertab.vim

    This script brings fancy tab completion functionality to vim. The autocomplete stuff is already there in the core of vim, but this brings it to a quick tab rather than multiple different multikey shortcuts. Very handy, and incredibly fun to use. While it's not VS's intellisense, it's a great step and brings a great deal of the functionality you'd like to expect from a tab completion tool.

    syntastic.vim

    This tool brings external syntax checking commands into vim. I haven't used it personally, but I've heard great things about it and the concept is hard to beat. Checking syntax without having to do it manually is a great time saver and can help you catch syntactic bugs as you introduce them rather than when you finally stop to test.

    fugitive.vim

    Direct access to git from inside of vim. Again, I haven't used this plugin, but I can see the utility. Unfortunately I'm in a culture where svn is considered "new", so I won't likely see git at work for quite some time.

    nerdtree.vim

    A tree browser for vim. I started using this recently, and it's really handy. It lets you put a treeview in a vertical split and open files easily. This is great for a project with a lot of source files you frequently jump between.

    FuzzyFinderTextmate.vim

    This is an unmaintained plugin, but still incredibly useful. It provides the ability to open files using a "fuzzy" descriptive syntax. It means that in a sparse tree of files you need only type enough characters to disambiguate the files you're interested in from the rest of the cruft.

    Conclusion

    There are a lot of incredible tools available for vim. I'm sure I've only scratched the surface here, and it's well worth searching for tools applicable to your domain. The combination of traditional vi's powerful toolset, vim's improvements on it, and plugins that extend vim even further makes it one of the most powerful ways to edit text ever conceived. Vim is easily as powerful as emacs, eclipse, visual studio, and textmate.

    Thanks

    Thanks to duwanis for his vim configs from which I have learned much and borrowed most of the plugins listed here.

    Tom Morris, Apr 1, 2010 at 8:50

    The magical tests-to-class navigation in rails.vim is one of the more general things I wish Vim had that TextMate absolutely nails across all languages: if I am working on Person.scala and I do Cmd+T, usually the first thing in the list is PersonTest.scala. – Tom Morris Apr 1 '10 at 8:50

    Gavin Gilmour, Jan 15, 2011 at 13:44

    I think it's time FuzzyFinderTextmate started to get replaced with github.com/wincent/Command-T – Gavin Gilmour Jan 15 '11 at 13:44

    Nathan Long, Mar 1, 2011 at 19:07

    +1 for Syntastic. That, combined with JSLint, has made my Javascript much less error-prone. See superuser.com/questions/247012/ about how to set up JSLint to work with Syntastic. – Nathan Long Mar 1 '11 at 19:07

    AlG, Sep 13, 2011 at 17:37

    @Benson Great list! I'd toss in snipMate as well. Very helpful automation of common coding stuff. if<tab> instant if block, etc. – AlG Sep 13 '11 at 17:37

    EarlOfEgo, May 12, 2012 at 15:13

    I think nerdcommenter is also a good plugin: here . Like its name says, it is for commenting your code. – EarlOfEgo May 12 '12 at 15:13

    community wiki
    4 revs, 2 users 89%
    ,Mar 31, 2010 at 23:01

    . Repeat last text-changing command

    I save a lot of time with this one.

    Visual mode was mentioned previously, but block visual mode has saved me a lot of time when editing fixed-size columns in a text file (accessed with Ctrl-V).

    vdboor, Apr 1, 2010 at 8:34

    Additionally, if you use a concise command (e.g. A for append-at-end) to edit the text, vim can repeat that exact same action for the next line you press the . key at. – vdboor Apr 1 '10 at 8:34

    community wiki
    3 revs, 3 users 87%
    ,Dec 24, 2012 at 14:50

    gi

    Go to last edited location (very useful if you performed some searching and then want to go back to edit)

    ^P and ^N

    Complete previous (^P) or next (^N) text.

    ^O and ^I

    Go to the previous location ( ^O - "O" for old) or to the next one ( ^I - "I" is just next to "O" on the keyboard). When you perform searches, edit files etc., you can navigate through these "jumps" forward and back.

    R. Martinho Fernandes, Apr 1, 2010 at 3:02

    Thanks for gi ! Now I don't need marks for that! – R. Martinho Fernandes Apr 1 '10 at 3:02

    Kungi, Feb 10, 2011 at 16:23

    I Think this can also be done with `` – Kungi Feb 10 '11 at 16:23

    Grant McLean, Aug 23, 2011 at 8:21

    @Kungi `. will take you to the last edit `` will take you back to the position you were in before the last 'jump' - which /might/ also be the position of the last edit. – Grant McLean Aug 23 '11 at 8:21

    community wiki
    Ronny Brendel
    , Mar 31, 2010 at 19:37

    I recently discovered this site: http://vimcasts.org/

    It's pretty new and really really good. The guy who is running the site switched from textmate to vim and hosts very good and concise casts on specific vim topics. Check it out!

    Jeromy Anglim, Jan 13, 2011 at 6:40

    If you like vim tutorials, check out Derek Wyatt's vim videos as well. They're excellent. – Jeromy Anglim Jan 13 '11 at 6:40

    community wiki
    2 revs, 2 users 67%
    ,Feb 27, 2010 at 11:20

    CTRL + A increments the number you are standing on.

    innaM, Aug 3, 2009 at 9:14

    ... and CTRL-X decrements. – innaM Aug 3 '09 at 9:14

    SolutionYogi, Feb 26, 2010 at 20:43

    It's a neat shortcut but so far I have NEVER found any use for it. – SolutionYogi Feb 26 '10 at 20:43

    matja, Feb 27, 2010 at 14:21

    if you run vim in screen and wonder why this doesn't work - ctrl+A, A – matja Feb 27 '10 at 14:21

    hcs42, Feb 27, 2010 at 19:05

    @SolutionYogi: Consider that you want to add line number to the beginning of each line. Solution: ggI1<space><esc>0qqyawjP0<c-a>0q9999@q – hcs42 Feb 27 '10 at 19:05

    blueyed, Apr 1, 2010 at 14:47

    Extremely useful with Vimperator, where it increments (or decrements, Ctrl-X) the last number in the URL. Useful for quickly surfing through image galleries etc. – blueyed Apr 1 '10 at 14:47

    community wiki
    3 revs
    ,Aug 28, 2009 at 15:23

    All in Normal mode:

    f<char> to move to the next instance of a particular character on the current line, and ; to repeat.

    F<char> to move to the previous instance of a particular character on the current line and ; to repeat.

    If used intelligently, the above two can make you killer-quick moving around in a line.

    * on a word to search for the next instance.

    # on a word to search for the previous instance.

    Jim Dennis, Mar 14, 2010 at 6:38

    Whoa, I didn't know about the * and # (search forward/back for word under cursor) binding. That's kinda cool. The f/F and t/T and ; commands are quick jumps to characters on the current line. f/F put the cursor on the indicated character while t/T puts it just up "to" the character (the character just before or after it according to the direction chosen. ; simply repeats the most recent f/F/t/T jump (in the same direction). – Jim Dennis Mar 14 '10 at 6:38

    Steve K, Apr 3, 2010 at 23:50

    :) The tagline at the top of the tips page at vim.org: "Can you imagine how many keystrokes could have been saved, if I only had known the "*" command in time?" - Juergen Salk, 1/19/2001" – Steve K Apr 3 '10 at 23:50

    puk, Feb 24, 2012 at 6:45

    As Jim mentioned, the "t/T" combo is often just as good, if not better; for example, ct( will erase the word and put you in insert mode, but keep the parentheses! – puk Feb 24 '12 at 6:45

    community wiki
    agfe2
    , Aug 19, 2010 at 8:08

    Session

    a. save session

    :mks sessionname

    b. force save session

    :mks! sessionname

    c. load session

    gvim or vim -S sessionname


    Adding and Subtracting

    a. Adding and Subtracting

    CTRL-A: Add [count] to the number or alphabetic character at or after the cursor. {not in Vi}

    CTRL-X: Subtract [count] from the number or alphabetic character at or after the cursor. {not in Vi}

    b. Window key unmapping

    On Windows, Ctrl-A is already mapped (by mswin.vim) to select the whole file, so you need to unmap it in your rc file: either comment out the CTRL-A mapping part of mswin.vim or add an unmap line to your rc file.

    c. With Macro

    The CTRL-A command is very useful in a macro. Example: Use the following steps to make a numbered list.

    1. Create the first list entry, make sure it starts with a number.
    2. qa - start recording into buffer 'a'
    3. Y - yank the entry
    4. p - put a copy of the entry below the first one
    5. CTRL-A - increment the number
    6. q - stop recording
    7. @a - repeat the yank, put and increment (prefix a count, e.g. 10@a, to repeat it that many times)
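    Put together, the whole thing is only a handful of keystrokes (my own condensed rendering of the steps above, using standard commands):

    qaYp<C-A>q    record: yank the line, put it below, increment the number
    8@a           replay eight more times to get a ten-item numbered list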

    Don Reba, Aug 22, 2010 at 5:22

    Any idea what the shortcuts are in Windows? – Don Reba Aug 22 '10 at 5:22

    community wiki
    8 revs, 2 users 98%
    ,Aug 18, 2012 at 21:44

    Last week at work our project inherited a lot of Python code from another project. Unfortunately the code did not fit into our existing architecture - it was all done with global variables and functions, which would not work in a multi-threaded environment.

    We had ~80 files that needed to be reworked to be object oriented - all the functions moved into classes, parameters changed, import statements added, etc. We had a list of about 20 types of fix that needed to be done to each file. I would estimate that doing it by hand one person could do maybe 2-4 per day.

    So I did the first one by hand and then wrote a vim script to automate the changes. Most of it was a list of vim commands e.g.

    " delete an un-needed function "
    g/someFunction(/ d
    
    " add wibble parameter to function foo "
    %s/foo(/foo( wibble,/
    
    " convert all function calls bar(thing) into method calls thing.bar() "
    g/bar(/ normal nmaf(ldi(`aPa.
    

    The last one deserves a bit of explanation:

    g/bar(/  executes the following command on every line that contains "bar("
    normal   execute the following text as if it was typed in in normal mode
    n        goes to the next match of "bar(" (since the :g command leaves the cursor position at the start of the line)
    ma       saves the cursor position in mark a
    f(       moves forward to the next opening bracket
    l        moves right one character, so the cursor is now inside the brackets
    di(      delete all the text inside the brackets
    `a       go back to the position saved as mark a (i.e. the first character of "bar")
    P        paste the deleted text before the current cursor position
    a.       go into insert mode and add a "."
    

    For a couple of more complex transformations such as generating all the import statements I embedded some python into the vim script.

    After a few hours of working on it I had a script that will do at least 95% of the conversion. I just open a file in vim then run :source fixit.vim and the file is transformed in a blink of the eye.

    We still have the work of changing the remaining 5% that was not worth automating and of testing the results, but by spending a day writing this script I estimate we have saved weeks of work.

    Of course it would have been possible to automate this with a scripting language like Python or Ruby, but it would have taken far longer to write and would be less flexible - the last example would have been difficult since regex alone would not be able to handle nested brackets, e.g. to convert bar(foo(xxx)) to foo(xxx).bar() . Vim was perfect for the task.

    Olivier Pons, Feb 28, 2010 at 14:41

    Thanks a lot for sharing that's really nice to learn from "useful & not classical" macros. – Olivier Pons Feb 28 '10 at 14:41

    Ipsquiggle, Mar 23, 2010 at 16:55

    %s/\(bar\)(\(.\+\))/\2.\1()/ would do that too. (Escapes are compatible with :set magic .) Just for the record. :) – Ipsquiggle Mar 23 '10 at 16:55

    Ipsquiggle, Mar 23, 2010 at 16:56

    Or if you don't like vim-style escapes, use \v to turn on Very Magic: %s/\v(bar)\((.+)\)/\2.\1()/ – Ipsquiggle Mar 23 '10 at 16:56

    Dave Kirby, Mar 23, 2010 at 17:16

    @lpsquiggle: your suggestion would not handle complex expressions with more than one set of brackets. e.g. if bar(foo(xxx)) or wibble(xxx): becomes if foo(xxx)) or wibble(xxx.bar(): which is completely wrong. – Dave Kirby Mar 23 '10 at 17:16

    community wiki
    2 revs
    ,Aug 2, 2009 at 11:17

    Use the builtin file explorer! The command is :Explore and it allows you to navigate through your source code very, very fast. I have these mappings in my .vimrc :
    map <silent> <F8>   :Explore<CR>
    map <silent> <S-F8> :sp +Explore<CR>
    

    The explorer allows you to make file modifications, too. I'll post some of my favorite keys; pressing <F1> will give you the full list.

    Svend, Aug 2, 2009 at 8:48

    I always thought the default methods for browsing kinda sucked for most stuff. It's just slow to browse, if you know where you wanna go. LustyExplorer from vim.org's script section is a much needed improvement. – Svend Aug 2 '09 at 8:48

    Taurus Olson, Aug 6, 2009 at 17:37

    Your second mapping could be more simple: map <silent> <S-F8> :Sexplore<CR> – Taurus Olson Aug 6 '09 at 17:37

    kprobst, Apr 1, 2010 at 3:53

    I recommend NERDtree instead of the built-in explorer. It has changed the way I used vim for projects and made me much more productive. Just google for it. – kprobst Apr 1 '10 at 3:53

    dash-tom-bang, Aug 24, 2011 at 0:35

    I never feel the need to explore the source tree, I just use :find, :tag and the various related keystrokes to jump around. (Maybe this is because the source trees I work on are big and organized differently than I would have done? :) ) – dash-tom-bang Aug 24 '11 at 0:35

    community wiki
    2 revs, 2 users 92%
    ,Jun 15, 2011 at 13:39

    I am a member of the American Cryptogram Association. The bimonthly magazine includes over 100 cryptograms of various sorts. Roughly 15 of these are "cryptarithms" - various types of arithmetic problems with letters substituted for the digits. Two or three of these are sudokus, except with letters instead of numbers. When the grid is completed, the nine distinct letters will spell out a word or words, on some line, diagonal, spiral, etc., somewhere in the grid.

    Rather than working with pencil, or typing the problems in by hand, I download the problems from the members area of their website.

    When working with these sudokus, I use vi, simply because I'm using facilities that vi has that few other editors have. Mostly in converting the lettered grid into a numbered grid, because I find it easier to solve, and then the completed numbered grid back into the lettered grid to find the solution word or words.

    The problem is formatted as nine groups of nine letters, with - s representing the blanks, written in two lines. The first step is to format these into nine lines of nine characters each. There's nothing special about this, just inserting eight linebreaks in the appropriate places.

    The result will look like this:

    T-O-----C
    -E-----S-
    --AT--N-L
    ---NASO--
    ---E-T---
    --SPCL---
    E-T--OS--
    -A-----P-
    S-----C-T
    

    So, first step in converting this into numbers is to make a list of the distinct letters. First, I make a copy of the block. I position the cursor at the top of the block, then type :y}}p . : puts me in command mode, y yanks the next movement command. Since } is a move to the end of the next paragraph, y} yanks the paragraph. } then moves the cursor to the end of the paragraph, and p pastes what we had yanked just after the cursor. So y}}p creates a copy of the next paragraph, and ends up with the cursor between the two copies.

    Next, I turn one of those copies into a list of distinct letters. That command is a bit more complex:

    :!}tr -cd A-Z | sed 's/\(.\)/\1\n/g' | sort -u | tr -d '\n'
    

    : again puts me in command mode. ! indicates that the content of the next yank should be piped through a command line. } yanks the next paragraph, and the command line then uses the tr command to strip out everything except for upper-case letters, the sed command to print each letter on a single line, and the sort command to sort those lines, removing duplicates, and then tr strips out the newlines, leaving the nine distinct letters in a single line, replacing the nine lines that had made up the paragraph originally. In this case, the letters are: ACELNOPST .

    Next step is to make another copy of the grid. And then to use the letters I've just identified to replace each of those letters with a digit from 1 to 9. That's simple: :!}tr ACELNOPST 0-9 . The result is:

    8-5-----1
    -2-----7-
    --08--4-3
    ---4075--
    ---2-8---
    --7613---
    2-8--57--
    -0-----6-
    7-----1-8
    

    This can then be solved in the usual way, or entered into any sudoku solver you might prefer. The completed solution can then be converted back into letters with :!}tr 1-9 ACELNOPST .

    There is power in vi that is matched by very few others. The biggest problem is that only a very few of the vi tutorial books, websites, help-files, etc., do more than barely touch the surface of what is possible.

    hhh, Jan 14, 2011 at 17:12

    and an irritation is that some distros such as ubuntu have aliases from the word "vi" to "vim", so people won't really see vi. Excellent example, have to try... +1 – hhh Jan 14 '11 at 17:12

    dash-tom-bang, Aug 24, 2011 at 0:45

    Doesn't vim check the name it was started with so that it can come up in the right 'mode'? – dash-tom-bang Aug 24 '11 at 0:45

    sehe, Mar 4, 2012 at 20:47

    I'm baffled by this repeated error: you say you need : to go into command mode, but then invariably you specify normal mode commands (like y}}p ) which cannot possibly work from the command mode?! – sehe Mar 4 '12 at 20:47

    sehe, Mar 4, 2012 at 20:56

    My take on the unique chars challenge: :se tw=1 fo= (preparation) VG:s/./& /g (insert spaces), gvgq (split onto separate lines), V{:sort u (sort and remove duplicates) – sehe Mar 4 '12 at 20:56

    community wiki
    jqno
    , Aug 2, 2009 at 8:59

    Bulk text manipulations!

    Either through macros:

    Or through regular expressions:

    (But be warned: if you do the latter, you'll have 2 problems :).)
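    The original examples were stripped from this copy; as an illustration (my own, using standard Vim commands), here is the same bulk edit done both ways, appending a semicolon to every line of a file:

    qqA;<Esc>jq   record a macro in register q: append ";", move down a line
    999@q         replay it down the rest of the file (it stops at the last line)
    :%s/$/;/      or do the same thing with one regular-expression substitution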

    Jim Dennis, Jan 10, 2010 at 4:03

    +1 for the Jamie Zawinski reference. (No points taken back for failing to link to it, even). :) – Jim Dennis Jan 10 '10 at 4:03

    jqno, Jan 10, 2010 at 10:06

    @Jim I didn't even know it was a Jamie Zawinski quote :). I'll try to remember it from now on. – jqno Jan 10 '10 at 10:06

    Jim Dennis, Feb 12, 2010 at 4:15

    I find the following trick increasingly useful ... for cases where you want to join lines that match (or that do NOT match) some pattern to the previous line: :% g/foo/-1j or :'a,'z v/bar/-1j for example (where the former is "all lines matching the pattern" while the latter is "lines between mark a and mark z which fail to match the pattern"). The part after the pattern in a g or v ex command can be any other ex command; -1j is just a relative line movement and join command. – Jim Dennis Feb 12 '10 at 4:15

    JustJeff, Feb 27, 2010 at 12:54

    of course, if you name your macro '2', then when it comes time to use it, you don't even have to move your finger from the '@' key to the 'q' key. Probably saves 50 to 100 milliseconds every time right there. =P – JustJeff Feb 27 '10 at 12:54

    Simon Steele, Apr 1, 2010 at 13:12

    @JustJeff Depends entirely on your keyboard layout, my @ key is at the other side of the keyboard from my 2 key. – Simon Steele Apr 1 '10 at 13:12

    community wiki
    David Pope
    , Apr 2, 2012 at 7:56

    I recently discovered q: . It opens the "command window" and shows your most recent ex-mode (command-mode) commands. You can move as usual within the window, and pressing <CR> executes the command. You can edit, etc. too. Priceless when you're messing around with some complex command or regex and you don't want to retype the whole thing, or if the complex thing you want to do was 3 commands back. It's almost like bash's set -o vi, but for vim itself (heh!).

    See :help q: for more interesting bits for going back and forth.

    community wiki
    2 revs, 2 users 56%
    ,Feb 27, 2010 at 11:29

    I just discovered Vim's omnicompletion the other day, and while I'll admit I'm a bit hazy on what does which, I've had surprisingly good results just mashing either Ctrl + x Ctrl + u or Ctrl + n / Ctrl + p in insert mode. It's not quite IntelliSense, but I'm still learning it.

    Try it out! :help ins-completion

    community wiki
    tfmoraes
    , Mar 14, 2010 at 19:49

    These are not shortcuts, but they are related:
    1. Make capslock an additional ESC (or Ctrl)
    2. map leader to "," (comma), with this command: let mapleader=","

    They boost my productivity.

    Olivier Pons, Mar 15, 2010 at 10:09

    Hey nice hint about the "\"! Far better to type "," than "\". – Olivier Pons Mar 15 '10 at 10:09

    R. Martinho Fernandes, Apr 1, 2010 at 3:30

    To make Caps Lock an additional Esc in Windows (what's a caps lock key for? An "any key"?), try this: web.archive.org/web/20100418005858/http://webpages.charter.net/ – R. Martinho Fernandes Apr 1 '10 at 3:30

    Tom Morris, Apr 1, 2010 at 8:45

    On Mac, you need PCKeyboardHack - details at superuser.com/questions/34223/ – Tom Morris Apr 1 '10 at 8:45

    Jeromy Anglim, Jan 10, 2011 at 4:43

    On Windows I use AutoHotKey with Capslock::Escape – Jeromy Anglim Jan 10 '11 at 4:43

    community wiki
    Costyn
    , Sep 20, 2010 at 10:34

    Another useful vi "shortcut" I frequently use is 'xp'. This will swap the character under the cursor with the next character.

    tester, Aug 22, 2011 at 17:19

    Xp to go the other way – tester Aug 22 '11 at 17:19

    kguest, Aug 27, 2011 at 8:21

    Around the time that Windows xp came out, I used to joke that this is the only good use for it. – kguest Aug 27 '11 at 8:21

    community wiki
    Peter Ellis
    , Aug 2, 2009 at 9:47

    <Ctrl> + W, V to split the screen vertically
    <Ctrl> + W, W to shift between the windows

    !python % [args] to run the script I am editing in this window

    zf in visual mode to fold arbitrary lines

    Andrew Scagnelli, Apr 1, 2010 at 2:58

    <Ctrl> + W and j/k will let you navigate absolutely (j down, k up, as with normal vim). This is great when you have 3+ splits. – Andrew Scagnelli Apr 1 '10 at 2:58

    coder_tim, Jan 30, 2012 at 20:08

    +1 for zf in visual mode, I like code folding, but did not know about that. – coder_tim Jan 30 '12 at 20:08

    puk, Feb 24, 2012 at 7:00

    after bashing my keyboard I have deduced that <C-w>n or <C-w>s is new horizontal window, <C-w>b is bottom right window, <C-w>c or <C-w>q is close window, <C-w>x is increase and then decrease window width (??), <C-w>p is last window, <C-w>backspace is move left(ish) window – puk Feb 24 '12 at 7:00

    sjas, Jun 25, 2012 at 0:25

    :help ctrl-w FTW... do yourself a favour, and force yourself to try these things for at least 15 minutes! – sjas Jun 25 '12 at 0:25

    community wiki
    2 revs
    ,Apr 1, 2010 at 17:00

    Visual Mode

    As several other people have said, visual mode is the answer to your copy/cut & paste problem. Vim gives you 'v', 'V', and C-v. Lower case 'v' in vim is essentially the same as the shift key in notepad. The nice thing is that you don't have to hold it down. You can use any movement technique to navigate efficiently to the starting (or ending) point of your selection. Then hit 'v', and use efficient movement techniques again to navigate to the other end of your selection. Then 'd' or 'y' allows you to cut or copy that selection.

    The advantage vim's visual mode has over Jim Dennis's description of cut/copy/paste in vi is that you don't have to get the location exactly right. Sometimes it's more efficient to use a quick movement to get to the general vicinity of where you want to go and then refine that with other movements than to think up a more complex single movement command that gets you exactly where you want to go.

    The downside to using visual mode extensively in this manner is that it can become a crutch that you use all the time which prevents you from learning new vi(m) commands that might allow you to do things more efficiently. However, if you are very proactive about learning new aspects of vi(m), then this probably won't affect you much.

    I'll also re-emphasize that the visual line and visual block modes give you variations on this same theme that can be very powerful...especially the visual block mode.

    On Efficient Use of the Keyboard

    I also disagree with your assertion that alternating hands is the fastest way to use the keyboard. It has an element of truth in it. Speaking very generally, repeated use of the same thing is slow. The most significant example of this principle is that consecutive keystrokes typed with the same finger are very slow. Your assertion probably stems from the natural tendency to use the s/finger/hand/ transformation on this pattern. To some extent it's correct, but at the extremely high end of the efficiency spectrum it's incorrect.

    Just ask any pianist. Ask them whether it's faster to play a succession of a few notes alternating hands or using consecutive fingers of a single hand in sequence. The fastest way to type 4 keystrokes is not to alternate hands, but to type them with 4 fingers of the same hand in either ascending or descending order (call this a "run"). This should be self-evident once you've considered this possibility.

    The more difficult problem is optimizing for this. It's pretty easy to optimize for absolute distance on the keyboard. Vim does that. It's much harder to optimize at the "run" level, but vi(m) with its modal editing gives you a better chance at being able to do it than any non-modal approach (ahem, emacs) ever could.

    On Emacs

    Lest the emacs zealots completely disregard my whole post on account of that last parenthetical comment, I feel I must describe the root of the difference between the emacs and vim religions. I've never spoken up in the editor wars and I probably won't do it again, but I've never heard anyone describe the differences this way, so here it goes. The difference is the following tradeoff:

    Vim gives you unmatched raw text editing efficiency. Emacs gives you unmatched ability to customize and program the editor.

    The blind vim zealots will claim that vim has a scripting language. But it's an obscure, ad-hoc language that was designed to serve the editor. Emacs has Lisp! Enough said. If you don't appreciate the significance of those last two sentences or have a desire to learn enough about functional programming and Lisp to develop that appreciation, then you should use vim.

    The emacs zealots will claim that emacs has viper mode, and so it is a superset of vim. But viper mode isn't standard. My understanding is that viper mode is not used by the majority of emacs users. Since it's not the default, most emacs users probably don't develop a true appreciation for the benefits of the modal paradigm.

    In my opinion these differences are orthogonal. I believe the benefits of vim and emacs as I have stated them are both valid. This means that the ultimate editor doesn't exist yet. It's probably true that emacs would be the easiest platform on which to base the ultimate editor. But modal editing is not entrenched in the emacs mindset. The emacs community could move that way in the future, but that doesn't seem very likely.

    So if you want raw editing efficiency, use vim. If you want the ultimate environment for scripting and programming your editor use emacs. If you want some of both with an emphasis on programmability, use emacs with viper mode (or program your own mode). If you want the best of both worlds, you're out of luck for now.

    community wiki
    konryd
    , Mar 31, 2010 at 22:44

    Spend 30 mins doing the vim tutorial (run vimtutor instead of vim in a terminal). You will learn the basic movements and some keystrokes; this will make you at least as productive with vim as with the text editor you used before. After that, well, read Jim Dennis' answer again :)

    dash-tom-bang, Aug 24, 2011 at 0:47

    This is the first thing I thought of when reading the OP. It's obvious that the poster has never run this; I ran through it when first learning vim two years ago and it cemented in my mind the superiority of Vim to any of the other editors I've used (including, for me, Emacs since the key combos are annoying to use on a Mac). – dash-tom-bang Aug 24 '11 at 0:47

    community wiki
    Johnsyweb
    , Jan 12, 2011 at 22:52

    What is the way you use Vim that makes you more productive than with a contemporary editor?

    Being able to execute complex, repetitive edits with very few keystrokes (often using macros ). Take a look at VimGolf to witness the power of Vim!

    After over ten years of almost daily usage, it's hard to imagine using any other editor.

    community wiki
    2 revs, 2 users 67%
    ,Jun 15, 2011 at 13:42

    Use \c anywhere in a search to ignore case (overriding your ignorecase or smartcase settings). E.g. /\cfoo or /foo\c will match foo, Foo, fOO, FOO, etc.

    Use \C anywhere in a search to force case matching. E.g. /\Cfoo or /foo\C will only match foo.

    community wiki
    2 revs, 2 users 67%
    ,Jun 15, 2011 at 13:44

    I was surprised to find no one mention the t movement. I frequently use it with parameter lists in the form of dt, or yt, (delete or yank up to the next comma).

    hhh, Jan 14, 2011 at 17:09

    or dfx, dFx, dtx, ytx, etc where x is a char, +1 – hhh Jan 14 '11 at 17:09

    dash-tom-bang, Aug 24, 2011 at 0:48

    @hhh yep, T t f and F are all pretty regular keys for me to hit... – dash-tom-bang Aug 24 '11 at 0:48

    markle976, Mar 30, 2012 at 13:52

    Yes! And don't forget ct (change to). – markle976 Mar 30 '12 at 13:52

    sjas, Jun 24, 2012 at 23:35

    t for teh win!!! – sjas Jun 24 '12 at 23:35

    community wiki
    3 revs
    ,May 6, 2012 at 20:50

    Odd nobody's mentioned ctags. Download "exuberant ctags" and put it ahead of the crappy preinstalled version you already have in your search path. Cd to the root of whatever you're working on; for example the Android kernel distribution. Type "ctags -R ." to build an index of source files anywhere beneath that dir in a file named "tags". This contains all tags, no matter the language nor where in the dir, in one file, so cross-language work is easy.

    Then open vim in that folder and read :help ctags for some commands. A few I use often:
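    (The list itself was lost in this copy, but the usual tag commands, all standard Vim and described under :help ctags, include the following.)

    :tag foo        jump to the definition of the tag "foo"
    Ctrl-]          jump to the definition of the identifier under the cursor
    Ctrl-T          jump back to where you were before the tag jump
    g]              like Ctrl-] but shows a list when there are several matches
    :tn / :tp       go to the next / previous matching tag
    :tselect foo    list all tags matching "foo" and pick one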

    community wiki
    2 revs, 2 users 67%
    ,Feb 27, 2010 at 11:19

    Automatic indentation:

    gg (go to start of document)
    = (indent time!)
    shift-g (go to end of document)

    You'll need 'filetype plugin indent on' in your .vimrc file, and probably appropriate 'shiftwidth' and 'expandtab' settings.

    xcramps, Aug 28, 2009 at 17:14

    Or just use the ":set ai" (auto-indent) facility, which has been in vi since the beginning. – xcramps Aug 28 '09 at 17:14

    community wiki
    autodidakto
    , Jul 24, 2010 at 5:41

    You asked about productive shortcuts, but I think your real question is: Is vim worth it? The answer to this stackoverflow question is -> "Yes"

    You must have noticed two things. Vim is powerful, and vim is hard to learn. Much of its power lies in its expandability and endless combinations of commands. Don't feel overwhelmed. Go slow. One command, one plugin at a time. Don't overdo it.

    All that investment you put into vim will pay back a thousand fold. You're going to be inside a text editor for many, many hours before you die. Vim will be your companion.

    community wiki
    2 revs, 2 users 67%
    ,Feb 27, 2010 at 11:23

    Multiple buffers, and in particular fast jumping between them to compare two files with :bp and :bn (properly remapped to a single Shift + p or Shift + n )

    vimdiff mode (splits in two vertical buffers, with colors to show the differences)

    Area-copy with Ctrl + v

    And finally, tab completion of identifiers (search for "mosh_tab_or_complete"). That's a life changer.

    community wiki
    David Wolever
    , Aug 28, 2009 at 16:07

    Agreed with the top poster - the :r! command is very useful.

    Most often I use it to "paste" things:

    :r!cat
    **Ctrl-V to paste from the OS clipboard**
    ^D
    

    This way I don't have to fiddle with :set paste .

    R. Martinho Fernandes, Apr 1, 2010 at 3:17

    Probably better to set the clipboard option to unnamed ( set clipboard=unnamed in your .vimrc) to use the system clipboard by default. Or if you still want the system clipboard separate from the unnamed register, use the appropriately named clipboard register: "*p . – R. Martinho Fernandes Apr 1 '10 at 3:17

    kevpie, Oct 12, 2010 at 22:38

    Love it! After being exasperated by pasting code examples from the web and I was just starting to feel proficient in vim. That was the command I dreamed up on the spot. This was when vim totally hooked me. – kevpie Oct 12 '10 at 22:38

    Ben Mordecai, Feb 6, 2013 at 19:54

    If you're developing on a Mac, Command+C and Command+V copy and paste using the system clipboard, no remap required. – Ben Mordecai Feb 6 '13 at 19:54

    David Wolever, Feb 6, 2013 at 20:55

    Only with GVim. From the console, pasting without :set paste doesn't work so well if autoindent is enabled. – David Wolever Feb 6 '13 at 20:55

    [Oct 21, 2018] What are the dark corners of Vim your mom never told you about?

    Notable quotes:
    "... Want to look at your :command history? q: Then browse, edit and finally to execute the command. ..."
    "... from the ex editor (:), you can do CTRL-f to pop up the command history window. ..."
    "... q/ and q? can be used to do a similar thing for your search patterns. ..."
    "... adjacent to the one I just edit ..."
    Nov 16, 2011 | stackoverflow.com

    Ask Question, Nov 16, 2011 at 0:44

    There are a plethora of questions where people talk about common tricks, notably " Vim+ctags tips and tricks ".

    However, I don't refer to commonly used shortcuts that someone new to Vim would find cool. I am talking about a seasoned Unix user (be they a developer, administrator, both, etc.), who thinks they know something 99% of us never heard or dreamed about. Something that not only makes their work easier, but also is COOL and hackish .

    After all, Vim resides in the most dark-corner-rich OS in the world, thus it should have intricacies that only a few privileged know about and want to share with us.

    user3218088, Jun 16, 2014 at 9:51

    :Sex -- Split window and open integrated file explorer (horizontal split) – user3218088 Jun 16 '14 at 9:51

    community wiki, 2 revs, Apr 7, 2009 at 19:04

    Might not be one that 99% of Vim users don't know about, but it's something I use daily and that any Linux+Vim poweruser must know.

    Basic command, yet extremely useful.

    :w !sudo tee %

    I often forget to sudo before editing a file I don't have write permissions on. When I come to save that file and get a permission error, I just issue that vim command in order to save the file without the need to save it to a temp file and then copy it back again.

    You obviously have to be on a system with sudo installed and have sudo rights.

    jm666, May 12, 2011 at 6:09

    cmap w!! w !sudo tee % – jm666 May 12 '11 at 6:09

    Gerardo Marset, Jul 5, 2011 at 0:49

    You should never run sudo vim . Instead you should export EDITOR as vim and run sudoedit . – Gerardo Marset Jul 5 '11 at 0:49

    migu, Sep 2, 2013 at 20:42

    @maximus: vim replaces % by the name of the current buffer/file. – migu Sep 2 '13 at 20:42

    community wiki
    Chad Birch
    , Apr 7, 2009 at 18:09

    Something I just discovered recently that I thought was very cool:
    :earlier 15m

    Reverts the document back to how it was 15 minutes ago. Can take various arguments for the amount of time you want to roll back, and is dependent on undolevels. Can be reversed with the opposite command :later

    ephemient, Apr 8, 2009 at 16:15

    @skinp: If you undo and then make further changes from the undone state, you lose that redo history. This lets you go back to a state which is no longer in the undo stack. – ephemient Apr 8 '09 at 16:15

    Etienne PIERRE, Jul 21, 2009 at 13:53

    Also very useful is g+ and g- to go backward and forward in time. This is so much more powerful than an undo/redo stack since you don't lose the history when you do something after an undo. – Etienne PIERRE Jul 21 '09 at 13:53

    Ehtesh Choudhury, Nov 29, 2011 at 12:09

    You don't lose the redo history if you make a change after an undo. It's just not easily accessed. There are plugins to help you visualize this, like Gundo.vim – Ehtesh Choudhury Nov 29 '11 at 12:09

    Igor Popov, Dec 29, 2011 at 6:59

    Wow, so now I can just do :later 8h and I'm done for today? :P – Igor Popov Dec 29 '11 at 6:59

    Ring Ø, Jul 11, 2014 at 5:14

    Your command assumes one will spend at least 15 minutes in vim ! – Ring Ø Jul 11 '14 at 5:14

    community wiki,2 revs, 2 users 92%, ,Mar 31, 2016 at 17:54

    :! [command] executes an external command while you're in Vim.

    But add a dot after the colon, :.! [command], and it'll dump the output of the command into your current window. That's : . !

    For example:

    :.! ls

    I use this a lot for things like adding the current date into a document I'm typing:

    :.! date

    saffsd, May 6, 2009 at 14:41

    This is quite similar to :r! The only difference as far as I can tell is that :r! opens a new line, :.! overwrites the current line. – saffsd May 6 '09 at 14:41

    hlovdal, Jan 25, 2010 at 21:11

    An alternative to :.!date is to write "date" on a line and then run !$sh (alternatively having the command followed by a blank line and run !jsh ). This will pipe the line to the "sh" shell and substitute with the output from the command. – hlovdal Jan 25 '10 at 21:11

    Nefrubyr, Mar 25, 2010 at 16:24

    :.! is actually a special case of :{range}!, which filters a range of lines (the current line when the range is . ) through a command and replaces those lines with the output. I find :%! useful for filtering whole buffers. – Nefrubyr Mar 25 '10 at 16:24

    jabirali, Jul 13, 2010 at 4:30

    @sundar: Why pass a line to sed, when you can use the similar built-in ed / ex commands? Try running :.s/old/new/g ;-) – jabirali Jul 13 '10 at 4:30

    aqn, Apr 26, 2013 at 20:52

    And also note that '!' is like 'y', 'd', 'c' etc. i.e. you can do: !!, number!!, !motion (e.g. !Gshell_command<cr> replace from current line to end of file ('G') with output of shell_command). – aqn Apr 26 '13 at 20:52

    community wiki 2 revs , Apr 8, 2009 at 12:17

    Not exactly obscure, but there are several "delete in" commands which are extremely useful, like the ones sketched below.
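    A few representative examples (standard text-object commands, shown here for illustration):

    di"    " delete inside double quotes
    di(    " delete inside parentheses
    da(    " delete the parentheses as well
    diw    " delete the word under the cursor
    dit    " delete inside an XML/HTML tag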

    Others can be found on :help text-objects

    sjh, Apr 8, 2009 at 15:33

    dab "delete arounb brackets", daB for around curly brackets, t for xml type tags, combinations with normal commands are as expected cib/yaB/dit/vat etc – sjh Apr 8 '09 at 15:33

    Don Reba, Apr 13, 2009 at 21:41

    @Masi: yi(va(p deletes only the brackets – Don Reba Apr 13 '09 at 21:41

    thomasrutter, Apr 26, 2009 at 11:11

    This is possibly the biggest reason for me staying with Vim. That and its equivalent "change" commands: ciw, ci(, ci", as well as dt<space> and ct<space> – thomasrutter Apr 26 '09 at 11:11

    Roger Pate Oct 12 '10 at 16:40 ,

    @thomasrutter: Why not dW/cW instead of dt<space>? –

    Roger Pate Oct 12 '10 at 16:43, Oct 12, 2010 at 16:43

    @Masi: With the surround plugin: ds(. –

    community wiki, 9 revs, 9 users 84%, ultraman, Apr 21, 2017 at 14:06

    de -- delete everything till the end of the word; press . to repeat it to your heart's desire.

    ci(xyz[Esc] -- This is a weird one. Here, the 'i' does not mean insert mode. Instead it means inside the parentheses. So this sequence cuts the text inside the parentheses you're standing in and replaces it with "xyz". It also works inside square and curly brackets -- just do ci[ or ci{ correspondingly. Naturally, you can do di( if you just want to delete the text without typing anything. You can also do a instead of i if you want to delete the parentheses as well and not just the text inside them.

    ci" - cuts the text in current quotes

    ciw - cuts the current word. This works just like the previous one except that ( is replaced with w .

    C - cut the rest of the line and switch to insert mode.

    ZZ -- save and close current file (WAY faster than Ctrl-F4 to close the current tab!)

    ddp - move current line one row down

    xp -- move current character one position to the right

    U - uppercase, so viwU uppercases the word

    ~ - switches case, so viw~ will reverse casing of entire word

    Ctrl+u / Ctrl+d scroll the page half-a-screen up or down. This seems to be more useful than the usual full-screen paging as it makes it easier to see how the two screens relate. For those who still want to scroll entire screen at a time there's Ctrl+f for Forward and Ctrl+b for Backward. Ctrl+Y and Ctrl+E scroll down or up one line at a time.

    A crazy but very useful command is zz -- it scrolls the screen to make the current line appear in the middle. This is excellent for putting the piece of code you're working on in the center of your attention. Sibling commands -- zt and zb -- make this line the top or the bottom one on the screen, which is not quite as useful.

    % finds and jumps to the matching parenthesis.

    de -- delete from cursor to the end of the word (you can also do dE to delete until the next space)

    bde -- delete the current word, from left to right delimiter

    df[space] -- delete up until and including the next space

    dt. -- delete until next dot

    dd -- delete this entire line

    ye (or yE) -- yanks text from here to the end of the word

    ce - cuts through the end of the word

    bye -- copies current word (makes me wonder what "hi" does!)

    yy -- copies the current line

    cc -- cuts the current line; you can also do S instead. There's also lowercase s which cuts the current character and switches to insert mode.

    viwy or viwc . Yank or change current word. Hit w multiple times to keep selecting each subsequent word, use b to move backwards

    vi{ - select all text inside curly brackets. va{ - select all text including the {}s

    vi(p - highlight everything inside the ()s and replace with the pasted text

    b and e move the cursor word-by-word, similarly to how Ctrl+Arrows normally do. The definition of a word is a little different though, as several consecutive delimiters are treated as one word. If you start in the middle of a word, pressing b will always get you to the beginning of the current word, and each consecutive b will jump to the beginning of the previous word. Similarly, and easy to remember, e gets the cursor to the end of the current, and each subsequent, word.

    similar to b / e, capital B and E move the cursor word-by-word using only whitespaces as delimiters.

    capital D (take a deep breath) Deletes the rest of the line to the right of the cursor, same as Shift+End/Del in normal editors (notice 2 keypresses -- Shift+D -- instead of 3)

    Nick Lewis, Jul 17, 2009 at 16:41

    zt is quite useful if you use it at the start of a function or class definition. – Nick Lewis Jul 17 '09 at 16:41

    Nathan Fellman, Sep 7, 2009 at 8:27

    vity and vitc can be shortened to yit and cit respectively. – Nathan Fellman Sep 7 '09 at 8:27

    Laurence Gonsalves, Feb 19, 2011 at 23:49

    All the things you're calling "cut" is "change". eg: C is change until the end of the line. Vim's equivalent of "cut" is "delete", done with d/D. The main difference between change and delete is that delete leaves you in normal mode but change puts you into a sort of insert mode (though you're still in the change command which is handy as the whole change can be repeated with . ). – Laurence Gonsalves Feb 19 '11 at 23:49

    Almo, May 29, 2012 at 20:09

    I thought this was for a list of things that not many people know. yy is very common, I would have thought. – Almo May 29 '12 at 20:09

    Andrea Francia, Jul 3, 2012 at 20:50

    bye does not work when you are in the first character of the word. yiw always does. – Andrea Francia Jul 3 '12 at 20:50

    community wiki 2 revs, 2 users 83%, ,Sep 17, 2010 at 16:55

    One that I rarely find in most Vim tutorials, but it's INCREDIBLY useful (at least to me), is the

    g; and g,

    to move (forward, backward) through the changelist.

    Let me show how I use it. Sometimes I need to copy and paste a piece of code or string, say a hex color code in a CSS file, so I search, jump (not caring where the match is), copy it and then jump back (g;) to where I was editing the code to finally paste it. No need to create marks. Simpler.

    Just my 2cents.
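    A minimal sketch of that workflow (the search target is just an example):

    /#ff6600<CR>    " jump to the colour code elsewhere in the file
    yiw             " yank it
    g;              " jump back to where the last change was made
    P               " paste it there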

    aehlke, Feb 12, 2010 at 1:19

    similarly, '. will go to the last edited line, And `. will go to the last edited position – aehlke Feb 12 '10 at 1:19

    Kimball Robinson, Apr 16, 2010 at 0:29

    Ctrl-O and Ctrl-I (tab) will work similarly, but not the same. They move backward and forward in the "jump list", which you can view by doing :jumps or :ju For more information do a :help jumplist – Kimball Robinson Apr 16 '10 at 0:29

    Kimball Robinson, Apr 16, 2010 at 0:30

    You can list the change list by doing :changes – Kimball Robinson Apr 16 '10 at 0:30

    Wayne Werner, Jan 30, 2013 at 14:49

    Hot dang that's useful. I use <C-o> / <C-i> for this all the time - or marking my place. – Wayne Werner Jan 30 '13 at 14:49

    community wiki, 4 revs, 4 users 36%, ,May 5, 2014 at 13:06

    :%!xxd

    Make vim into a hex editor.

    :%!xxd -r
    

    Revert.

    Warning: If you don't edit with binary (-b), you might damage the file. – Josh Lee in the comments.

    Christian, Jul 7, 2009 at 19:11

    And how do you revert it back? – Christian Jul 7 '09 at 19:11

    Naga Kiran, Jul 8, 2009 at 13:46

    :!xxd -r //To revert back from HEX – Naga Kiran Jul 8 '09 at 13:46

    Andreas Grech, Nov 14, 2009 at 10:37

    I actually think it's :%!xxd -r to revert it back – Andreas Grech Nov 14 '09 at 10:37

    dotancohen, Jun 7, 2013 at 5:50

    @JoshLee: If one is careful not to traverse newlines, is it safe to not use the -b option? I ask because sometimes I want to make a hex change, but I don't want to close and reopen the file to do so. – dotancohen Jun 7 '13 at 5:50

    Bambu, Nov 23, 2014 at 23:58

    @dotancohen: If you don't want to close/reopen the file you can do :set binary – Bambu Nov 23 '14 at 23:58

    community wiki AaronS, Jan 12, 2011 at 20:03

    gv

    Reselects last visual selection.

    community wiki 3 revs, 2 users 92%
    ,Jul 7, 2014 at 19:10

    Sometimes a setting in your .vimrc will get overridden by a plugin or autocommand. To debug this a useful trick is to use the :verbose command in conjunction with :set. For example, to figure out where cindent got set/unset:
    :verbose set cindent?

    This will output something like:

    cindent
        Last set from /usr/share/vim/vim71/indent/c.vim
    

    This also works with maps and highlights. (Thanks joeytwiddle for pointing this out.) For example:

    :verbose nmap U
    n  U             <C-R>
            Last set from ~/.vimrc
    
    :verbose highlight Normal
    Normal         xxx guifg=#dddddd guibg=#111111 font=Inconsolata Medium 14
            Last set from ~/src/vim-holodark/colors/holodark.vim

    Artem Russakovskii, Oct 23, 2009 at 22:09

    Excellent tip - exactly what I was looking for today. – Artem Russakovskii Oct 23 '09 at 22:09

    joeytwiddle, Jul 5, 2014 at 22:08

    :verbose can also be used before nmap l or highlight Normal to find out where the l keymap or the Normal highlight were last defined. Very useful for debugging! – joeytwiddle Jul 5 '14 at 22:08

    SidOfc, Sep 24, 2017 at 11:26

    When you get into creating custom mappings, this will save your ass so many times, probably one of the most useful ones here (IMO)! – SidOfc Sep 24 '17 at 11:26

    community wiki 3 revs, 3 users 70% ,May 31, 2015 at 19:30

    Not sure if this counts as dark-corner-ish at all, but I've only just learnt it...
    :g/match/y A

    will yank (copy) all lines containing "match" into the "a register (@a). (The capitalization as A makes vim append yankings instead of replacing the previous register contents.) I used it a lot recently when making Internet Explorer stylesheets.

    tsukimi, May 27, 2012 at 6:17

    You can use :g! to find lines that don't match a pattern e.x. :g!/set/normal dd (delete all lines that don't contain set) – tsukimi May 27 '12 at 6:17

    pandubear, Oct 12, 2013 at 8:39

    Sometimes it's better to do what tsukimi said and just filter out lines that don't match your pattern. An abbreviated version of that command though: :v/PATTERN/d Explanation: :v is an abbreviation for :g!, and the :g command applies any ex command to lines. :y[ank] works and so does :normal, but here the most natural thing to do is just :d[elete] . – pandubear Oct 12 '13 at 8:39

    Kimball Robinson, Feb 5, 2016 at 17:58

    You can also do :g/match/normal "Ayy -- the normal keyword lets you tell it to run normal-mode commands (which you are probably more familiar with). – Kimball Robinson Feb 5 '16 at 17:58

    community wiki 2 revs, 2 users 80% ,Apr 5, 2013 at 15:55

    :%TOhtml Creates an html rendering of the current file.

    kenorb, Feb 19, 2015 at 11:27

    Related: How to convert a source code file into HTML? at Vim SE – kenorb Feb 19 '15 at 11:27

    community wiki 2 revs, 2 users 86% ,May 11, 2011 at 19:30

    Want to look at your :command history? q: Then browse, edit, and finally press Enter to execute the command.

    Ever make similar changes to two files and switch back and forth between them? (Say, source and header files?)

    :set hidden
    :map <TAB> :e#<CR>
    

    Then tab back and forth between those files.

    Josh Lee, Sep 22, 2009 at 16:58

    I hit q: by accident all the time... – Josh Lee Sep 22 '09 at 16:58

    Jason Down, Oct 6, 2009 at 4:14

    Alternatively, from the ex editor (:), you can do CTRL-f to pop up the command history window. – Jason Down Oct 6 '09 at 4:14

    bradlis7, Mar 23, 2010 at 17:10

    @jleedev me too. I almost hate this command, just because I use it accidentally way too much. – bradlis7 Mar 23 '10 at 17:10

    bpw1621, Feb 19, 2011 at 15:01

    q/ and q? can be used to do a similar thing for your search patterns. – bpw1621 Feb 19 '11 at 15:01

    idbrii, Feb 23, 2011 at 19:07

    Hitting <C-f> after : or / (or any time you're in command mode) will bring up the same history menu. So you can remap q: if you hit it accidentally a lot and still access this awesome mode. – idbrii Feb 23 '11 at 19:07

    community wiki, 2 revs, 2 users 89%, ,Jun 4, 2014 at 14:52

    Vim will open a URL, for example
    vim http://stackoverflow.com/

    Nice when you need to pull up the source of a page for reference.

    Ivan Vučica, Sep 21, 2010 at 8:07

    For me it didn't open the source; instead it apparently used elinks to dump rendered page into a buffer, and then opened that. – Ivan Vučica Sep 21 '10 at 8:07

    Thomas, Apr 19, 2013 at 21:00

    Works better with a slash at the end. Neat trick! – Thomas Apr 19 '13 at 21:00

    Isaac Remuant, Jun 3, 2013 at 15:23

    @Vdt: It'd be useful if you posted your error. If it's this one: " error (netrw) neither the wget nor the fetch command is available" you obviously need to make one of those tools available from your PATH environment variable. – Isaac Remuant Jun 3 '13 at 15:23

    Dettorer, Oct 29, 2014 at 13:47

    I find this one particularly useful when people send links to a paste service and forgot to select a syntax highlighting, I generally just have to open the link in vim after appending "&raw". – Dettorer Oct 29 '14 at 13:47

    community wiki 2 revs, 2 users 94% ,Jan 20, 2015 at 23:14

    Macros can call other macros, and can also call itself.

    eg:

    qq0dwj@qq@q
    

    ...will delete the first word from every line until the end of the file.

    This is quite a simple example but it demonstrates a very powerful feature of vim
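    Spelled out key by key, the recording behind that example looks like this (qqq first is an optional but common way to make sure the register starts empty):

    qqq     " clear register q so the recursion starts from a clean register
    qq      " start recording into register q
    0dw     " delete the first word of the line
    j       " move down to the next line
    @q      " call the macro itself -- empty while recording, so nothing happens yet
    q       " stop recording
    @q      " run it; it keeps calling itself until j fails on the last line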

    Kimball Robinson, Apr 16, 2010 at 0:39

    I didn't know macros could repeat themselves. Cool. Note: qx starts recording into register x (he uses qq for register q). 0 moves to the start of the line. dw deletes a word. j moves down a line. @q will run the macro again (defining a loop). But you forgot to end the recording with a final "q", then actually run the macro by typing @q. – Kimball Robinson Apr 16 '10 at 0:39

    Yktula, Apr 18, 2010 at 5:32

    I think that's intentional, as a nested and recursive macro. – Yktula Apr 18 '10 at 5:32

    Gerardo Marset, Jul 5, 2011 at 1:38

    qqqqqifuu<Esc>h@qq@q – Gerardo Marset Jul 5 '11 at 1:38

    Nathan Long, Aug 29, 2011 at 15:33

    Another way of accomplishing this is to record a macro in register a that does some transformation to a single line, then linewise highlight a bunch of lines with V and type :normal! @a to apply your macro to every line in your selection. – Nathan Long Aug 29 '11 at 15:33

    dotancohen, May 14, 2013 at 6:00

    I found this post googling recursive VIM macros. I could find no way to stop the macro other than killing the VIM process. – dotancohen May 14 '13 at 6:00

    community wiki
    Brian Carper
    , Apr 8, 2009 at 1:15

    Assuming you have Perl and/or Ruby support compiled in, :rubydo and :perldo will run a Ruby or Perl one-liner on every line in a range (defaults to entire buffer), with $_ bound to the text of the current line (minus the newline). Manipulating $_ will change the text of that line.

    You can use this to do certain things that are easy to do in a scripting language but not so obvious using Vim builtins. For example to reverse the order of the words in a line:

    :perldo $_ = join ' ', reverse split
    

    To insert a random string of 8 characters (A-Z) at the end of every line:

    :rubydo $_ += ' ' + (1..8).collect{('A'..'Z').to_a[rand 26]}.join
    

    You are limited to acting on one line at a time and you can't add newlines.

    Sujoy, May 6, 2009 at 18:27

    what if i only want perldo to run on a specified line? or a selected few lines? – Sujoy May 6 '09 at 18:27

    Brian Carper, May 6, 2009 at 18:52

    You can give it a range like any other command. For example :1,5perldo will only operate on lines 1-5. – Brian Carper May 6 '09 at 18:52

    Greg, Jul 2, 2009 at 16:41

    Could you do $_ += '\nNEWLINE!!!' to get a newline after the current one? – Greg Jul 2 '09 at 16:41

    Brian Carper, Jul 2, 2009 at 17:26

    Sadly not, it just adds a funky control character to the end of the line. You could then use a Vim search/replace to change all those control characters to real newlines though. – Brian Carper Jul 2 '09 at 17:26

    Derecho, Mar 14, 2014 at 8:48

    Similarly, pydo and py3do work for python if you have the required support compiled in. – Derecho Mar 14 '14 at 8:48

    community wiki
    4 revs
    ,Jul 28, 2009 at 19:05

    ^O and ^I

    Go to older/newer position. When you are moving through the file (by searching, movement commands, etc.) vim remembers these "jumps", so you can repeat these jumps backward (^O - O for old) and forward (^I - just next to I on the keyboard). I find it very useful when writing code and performing a lot of searches.

    gi

    Go to position where Insert mode was stopped last. I find myself often editing and then searching for something. To return to editing place press gi.

    gf

    put cursor on file name (e.g. include header file), press gf and the file is opened

    gF

    similar to gf but recognizes format "[file name]:[line number]". Pressing gF will open [file name] and set cursor to [line number].

    ^P and ^N

    Auto complete text while editing (^P - previous match and ^N next match)

    ^X^L

    While editing, completes to a whole matching line (useful for programming). You write code and then you recall that you have the same code somewhere in the file. Just press ^X^L and the full line is completed.

    ^X^F

    Complete file names. You write "/etc/pass" Hmm. You forgot the file name. Just press ^X^F and the filename is completed

    ^Z or :sh

    Drop temporarily to the shell. If you need to run a quick command or two, see the round trip sketched below.
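    A minimal sketch of both round trips (fg is your shell's job-control command, not a Vim one):

    ^Z           suspend Vim and drop back to the shell
    $ ls -l      ...do your quick bashing...
    $ fg         resume Vim

    :sh          or: start a subshell from inside Vim
    $ exit       return to Vim when done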

    sehe, Mar 4, 2012 at 21:50

    With ^X^F my pet peeve is that filenames include = signs, making it do rotten things in many occasions (ini files, makefiles etc). I use se isfname-== to end that nuisance – sehe Mar 4 '12 at 21:50

    joeytwiddle, Jul 5, 2014 at 22:10

    +1 the built-in autocomplete is just sitting there waiting to be discovered. – joeytwiddle Jul 5 '14 at 22:10

    community wiki
    2 revs
    ,Apr 7, 2009 at 18:59

    This is a nice trick to reopen the current file with a different encoding:
    :e ++enc=cp1250 %:p
    

    Useful when you have to work with legacy encodings. The supported encodings are listed in a table under encoding-values (see help encoding-values ). Similar thing also works for ++ff, so that you can reopen file with Windows/Unix line ends if you get it wrong for the first time (see help ff ).
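    For the ++ff case mentioned above, a minimal sketch (standard ++ff values; apply to whichever file you already have open):

    :e ++ff=dos %      " reopen the current file treating line endings as CRLF
    :e ++ff=unix %     " or reopen it treating line endings as LF only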

    Sasha, Apr 7, 2009 at 18:43

    Never had to use this sort of a thing, but we'll certainly add to my arsenal of tricks... – Sasha Apr 7 '09 at 18:43

    Adriano Varoli Piazza, Apr 7, 2009 at 18:44

    great tip, thanks. For bonus points, add a list of common valid encodings. – Adriano Varoli Piazza Apr 7 '09 at 18:44

    Ivan Vučica, Jul 8, 2009 at 19:29

    I have used this today, but I think I didn't need to specify "%:p"; just opening the file and :e ++enc=cp1250 was enough. – Ivan Vučica Jul 8 '09 at 19:29

    laz, Jul 8, 2009 at 19:32

    would :set encoding=cp1250 have the same effect? – laz Jul 8 '09 at 19:32

    intuited, Jun 4, 2010 at 2:51

    `:e +b %' is similarly useful for reopening in binary mode (no munging of newlines) – intuited Jun 4 '10 at 2:51

    community wiki
    4 revs, 3 users 48%
    ,Nov 6, 2012 at 8:32

    " insert range ip's
    "
    "          ( O O )
    " =======oOO=(_)==OOo======
    
    :for i in range(1,255) | .put='10.0.0.'.i | endfor
    

    Ryan Edwards, Nov 16, 2011 at 0:42

    I don't see what this is good for (besides looking like a joke answer). Can anybody else enlighten me? – Ryan Edwards Nov 16 '11 at 0:42

    Codygman, Nov 6, 2012 at 8:33

    open vim and then do ":for i in range(1,255) | .put='10.0.0.'.i | endfor" – Codygman Nov 6 '12 at 8:33

    Ruslan, Sep 30, 2013 at 10:30

    @RyanEdwards filling /etc/hosts maybe – Ruslan Sep 30 '13 at 10:30

    dotancohen, Nov 30, 2014 at 14:56

    This is a terrific answer. Not the bit about creating the IP addresses, but the bit that implies that VIM can use for loops in commands . – dotancohen Nov 30 '14 at 14:56

    BlackCap, Aug 31, 2017 at 7:54

    Without ex-mode: i10.0.0.1<Esc>Y254p$<C-v>}g<C-a> – BlackCap Aug 31 '17 at 7:54

    community wiki
    2 revs
    ,Aug 6, 2010 at 0:30

    Typing == will correct the indentation of the current line based on the line above.

    Actually, you can do one = sign followed by any movement command. = {movement}

    For example, you can use the % movement which moves between matching braces. Position the cursor on the { in the following code:

    if (thisA == that) {
    //not indented
    if (some == other) {
    x = y;
    }
    }
    

    And press =% to instantly get this:

    if (thisA == that) {
        //not indented
        if (some == other) {
            x = y;
        }
    }
    

    Alternately, you could do =a{ within the code block, rather than positioning yourself right on the { character.

    Ehtesh Choudhury, May 2, 2011 at 0:48

    Hm, I didn't know this about the indentation. – Ehtesh Choudhury May 2 '11 at 0:48

    sehe, Mar 4, 2012 at 22:03

    No need, usually, to be exactly on the braces. Though frequently I'd just =} or vaBaB= because it is less dependent. Also, v}}:!astyle -bj matches my code style better, but I can get it back into your style with a simple %!astyle -aj – sehe Mar 4 '12 at 22:03

    kyrias, Oct 19, 2013 at 12:12

    gg=G is quite neat when pasting in something. – kyrias Oct 19 '13 at 12:12

    kenorb, Feb 19, 2015 at 11:30

    Related: Re-indenting badly indented code at Vim SE – kenorb Feb 19 '15 at 11:30

    Braden Best, Feb 4, 2016 at 16:16

    @kyrias Oh, I've been doing it like ggVG= . – Braden Best Feb 4 '16 at 16:16

    community wiki
    Trumpi
    , Apr 19, 2009 at 18:33

    imap jj <esc>
    

    hasen, Jun 12, 2009 at 6:08

    how will you type jj then? :P – hasen Jun 12 '09 at 6:08

    ojblass, Jul 5, 2009 at 18:29

    How often to you type jj? In English at least? – ojblass Jul 5 '09 at 18:29

    Alex, Oct 5, 2009 at 5:32

    I remapped capslock to esc instead, as it's an otherwise useless key. My mapping was OS wide though, so it has the added benefit of never having to worry about accidentally hitting it. The only drawback IS ITS HARDER TO YELL AT PEOPLE. :) – Alex Oct 5 '09 at 5:32

    intuited, Jun 4, 2010 at 4:18

    @Alex: definitely, capslock is death. "wait, wtf? oh, that was ZZ?....crap." – intuited Jun 4 '10 at 4:18

    brianmearns, Oct 3, 2012 at 12:45

    @ojblass: Not sure how many people ever write Matlab code in Vim, but ii and jj are commonly used for counter variables, because i and j are reserved for complex numbers. – brianmearns Oct 3 '12 at 12:45

    community wiki
    4 revs, 3 users 71%
    ,Feb 12, 2015 at 15:55

    Let's see some pretty little IDE editor do column transposition.
    :%s/\(.*\)^I\(.*\)/\2^I\1/
    

    Explanation

    \( and \) is how to remember stuff in regex-land. And \1, \2 etc is how to retrieve the remembered stuff.

    >>> \(.*\)^I\(.*\)
    

    Remember everything followed by ^I (tab) followed by everything.

    >>> \2^I\1
    

    Replace the above stuff with "2nd stuff you remembered" followed by "1st stuff you remembered" - essentially doing a transpose.

    chaos, Apr 7, 2009 at 18:33

    Switches a pair of tab-separated columns (separator arbitrary, it's all regex) with each other. – chaos Apr 7 '09 at 18:33

    rlbond, Apr 26, 2009 at 4:11

    This is just a regex; plenty of IDEs have regex search-and-replace. – rlbond Apr 26 '09 at 4:11

    romandas, Jun 19, 2009 at 16:58

    @rlbond - It comes down to how good is the regex engine in the IDE. Vim's regexes are pretty powerful; others.. not so much sometimes. – romandas Jun 19 '09 at 16:58

    Kimball Robinson, Apr 16, 2010 at 0:32

    The * will be greedy, so this regex assumes you have just two columns. If you want it to be nongreedy use {-} instead of * (see :help non-greedy for more information on the {} multiplier) – Kimball Robinson Apr 16 '10 at 0:32

    mk12, Jun 22, 2012 at 17:31

    This is actually a pretty simple regex, it's only escaping the group parentheses that makes it look complicated. – mk12 Jun 22 '12 at 17:31

    community wiki
    KKovacs
    , Apr 11, 2009 at 7:14

    Not exactly a dark secret, but I like to put the following mapping into my .vimrc file, so I can hit "-" (minus) anytime to open the file explorer and show the files adjacent to the one I'm editing. In the file explorer, I can hit another "-" to move up one directory, providing seamless browsing of complex directory structures (like the ones used by the MVC frameworks nowadays):
    map - :Explore<cr>
    

    These may be also useful for somebody. I like to scroll the screen and advance the cursor at the same time:

    map <c-j> j<c-e>
    map <c-k> k<c-y>
    

    Tab navigation - I love tabs and I need to move easily between them:

    map <c-l> :tabnext<enter>
    map <c-h> :tabprevious<enter>
    

    Only on Mac OS X: Safari-like tab navigation:

    map <S-D-Right> :tabnext<cr>
    map <S-D-Left> :tabprevious<cr>
    

    Roman Plášil, Oct 1, 2009 at 21:33

    You can also browse files within Vim itself, using :Explore – Roman Plášil Oct 1 '09 at 21:33

    KKovacs, Oct 15, 2009 at 15:20

    Hi Roman, this is exactly what this mapping does, but assigns it to a "hot key". :) – KKovacs Oct 15 '09 at 15:20

    community wiki
    rampion
    , Apr 7, 2009 at 20:11

    Often, I like changing current directories while editing - so I have to specify paths less.
    :cd %:h
    

    Leonard, May 8, 2009 at 1:54

    What does this do? And does it work with autchdir? – Leonard May 8 '09 at 1:54

    rampion, May 8, 2009 at 2:55

    I suppose it would override autochdir temporarily (until you switched buffers again). Basically, it changes directory to the root directory of the current file. It gives me a bit more manual control than autochdir does. – rampion May 8 '09 at 2:55

    Naga Kiran, Jul 8, 2009 at 13:44

    :set autochdir //this also serves the same functionality and it changes the current directory to that of file in buffer – Naga Kiran Jul 8 '09 at 13:44

    community wiki
    4 revs
    ,Jul 21, 2009 at 1:12

    I like to use 'sudo bash', and my sysadmin hates this. He locked down 'sudo' so it could only be used with a handful of commands (ls, chmod, chown, vi, etc), but I was able to use vim to get a root shell anyway:
    bash$ sudo vi +'silent !bash' +q
    Password: ******
    root#
    

    RJHunter, Jul 21, 2009 at 0:53

    FWIW, sudoedit (or sudo -e) edits privileged files but runs your editor as your normal user. – RJHunter Jul 21 '09 at 0:53

    sundar, Sep 23, 2009 at 9:41

    @OP: That was cunning. :) – sundar Sep 23 '09 at 9:41

    jnylen, Feb 22, 2011 at 15:58

    yeah... I'd hate you too ;) you should only need a root shell VERY RARELY, unless you're already in the habit of running too many commands as root which means your permissions are all screwed up. – jnylen Feb 22 '11 at 15:58

    d33tah, Mar 30, 2014 at 17:50

    Why does your sysadmin even give you root? :D – d33tah Mar 30 '14 at 17:50

    community wiki
    Taurus Olson
    , Apr 7, 2009 at 21:11

    I often use many windows when I work on a project and sometimes I need to resize them. Here's what I use:
    map + <C-W>+
    map - <C-W>-
    

    These mappings allow you to increase and decrease the size of the current window. It's quite simple but it's fast.

    Bill Lynch, Apr 8, 2009 at 2:49

    There's also Ctrl-W =, which makes the windows equal width. – Bill Lynch Apr 8 '09 at 2:49

    joeytwiddle, Jan 29, 2012 at 18:12

    Don't forget you can prepend numbers to perform an action multiple times in Vim. So to expand the current window height by 8 lines: 8<C-W>+ – joeytwiddle Jan 29 '12 at 18:12

    community wiki
    Roberto Bonvallet
    , May 6, 2009 at 7:38

    :r! <command>
    

    pastes the output of an external command into the buffer.

    Do some math and get the result directly in the text:

    :r! echo $((3 + 5 + 8))
    

    Get the list of files to compile when writing a Makefile:

    :r! ls *.c
    

    Don't look up that fact you read on wikipedia, have it directly pasted into the document you are writing:

    :r! lynx -dump http://en.wikipedia.org/wiki/Whatever
    

    Sudhanshu, Jun 7, 2010 at 8:40

    ^R=3+5+8 in insert mode will let you insert the value of the expression (3+5+8) in text with fewer keystrokes. – Sudhanshu Jun 7 '10 at 8:40

    dcn, Mar 27, 2011 at 10:13

    How can I get the result/output to a different buffer than the current? – dcn Mar 27 '11 at 10:13

    kenorb, Feb 19, 2015 at 11:31

    Related: How to dump output from external command into editor? at Vim SE – kenorb Feb 19 '15 at 11:31

    community wiki
    jqno
    , Jul 8, 2009 at 19:19

    Map F5 to quickly ROT13 your buffer:
    map <F5> ggg?G``
    

    You can use it as a boss key :).

    sehe, Mar 4, 2012 at 21:57

    I don't know what you are writing... But surely, my boss would be more curious when he saw me write ROT13 jumble :) – sehe Mar 4 '12 at 21:57

    romeovs, Jun 19, 2014 at 19:22

    or to spoof your friends: nmap i ggg?G`` . Or the diabolical: nmap i ggg?G``i ! – romeovs Jun 19 '14 at 19:22

    Amit Gold, Aug 7, 2016 at 10:14

    @romeovs 2nd one is infinite loop, use nnoremap – Amit Gold Aug 7 '16 at 10:14

    community wiki
    mohi666
    , Mar 4, 2011 at 2:20

    Not an obscure feature, but very useful and time saving.

    If you want to save a session of your open buffers, tabs, markers and other settings, you can issue the following:

    :mksession session.vim
    

    You can open your session using:

    vim -S session.vim
    

    TankorSmash, Nov 3, 2012 at 13:45

    You can also :so session.vim inside vim. – TankorSmash Nov 3 '12 at 13:45

    community wiki
    Grant Limberg
    , May 11, 2009 at 21:59

    I just found this one today via NSFAQ :

    Comment blocks of code.

    Enter Blockwise Visual mode by hitting CTRL-V.

    Mark the block you wish to comment.

    Hit I (capital I) and enter your comment string at the beginning of the line. (// for C++)

    Hit ESC and all lines selected will have // prepended to the front of the line.
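    A minimal sketch of both directions (the 2j count and the // string are just examples):

    <C-v>2jI//<Esc>    " blockwise-select three lines and insert // at the start of each
    <C-v>2jlx          " to uncomment: blockwise-select the two // columns and delete them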

    Neeraj Singh, Jun 17, 2009 at 16:56

    I added # to comment out a block of code in ruby. How do I undo it? – Neeraj Singh Jun 17 '09 at 16:56

    Grant Limberg, Jun 17, 2009 at 19:29

    well, if you haven't done anything else to the file, you can simply type u for undo. Otherwise, I haven't figured that out yet. – Grant Limberg Jun 17 '09 at 19:29

    nos, Jul 28, 2009 at 20:00

    You can just hit ctrl+v again, mark the //'s and hit x to "uncomment" – nos Jul 28 '09 at 20:00

    ZyX, Mar 7, 2010 at 14:18

    I use NERDCommenter for this. – ZyX Mar 7 '10 at 14:18

    Braden Best, Feb 4, 2016 at 16:23

    Commented out code is probably one of the worst types of comment you could possibly put in your code. There are better uses for the awesome block insert. – Braden Best Feb 4 '16 at 16:23

    community wiki
    2 revs, 2 users 84%
    Ian H
    , Jul 3, 2015 at 23:44

    I use vim for just about any text editing I do, so I often use copy and paste. The problem is that by default vim will often distort pasted text (auto-indentation gets applied to it). The way to stop this is to use
    :set paste
    

    before pasting in your data. This will keep it from messing up.

    Note that you will have to issue :set nopaste to recover auto-indentation. Alternative ways of pasting pre-formatted text are the clipboard registers ( * and + ), and :r!cat (you will have to end the pasted fragment with ^D).

    It is also sometimes helpful to turn on a high contrast color scheme. This can be done with

    :color blue
    

    I've noticed that it does not work on all the versions of vim I use but it does on most.

    jamessan, Dec 28, 2009 at 8:27

    The "distortion" is happening because you have some form of automatic indentation enabled. Using set paste or specifying a key for the pastetoggle option is a common way to work around this, but the same effect can be achieved with set mouse=a as then Vim knows that the flood of text it sees is a paste triggered by the mouse. – jamessan Dec 28 '09 at 8:27

    kyrias, Oct 19, 2013 at 12:15

    If you have gvim installed you can often (though it depends on what options your distro compiles vim with) use the X clipboard directly from vim through the * register. For example "*p to paste from the X clipboard. (It works from terminal vim, too, it's just that you might need the gvim package if they're separate) – kyrias Oct 19 '13 at 12:15

    Braden Best, Feb 4, 2016 at 16:26

    @kyrias for the record, * is the PRIMARY ("middle-click") register. The clipboard is + . – Braden Best Feb 4 '16 at 16:26

    community wiki
    viraptor
    , Apr 7, 2009 at 22:29

    Here's something not obvious. If you have a lot of custom plugins / extensions in your $HOME and you need to work from su / sudo / ... sometimes, then this might be useful.

    In your ~/.bashrc:

    export VIMINIT=":so $HOME/.vimrc"

    In your ~/.vimrc:

    if $HOME=='/root'
            if $USER=='root'
                    if isdirectory('/home/your_typical_username')
                            let rtuser = 'your_typical_username'
                    elseif isdirectory('/home/your_other_username')
                            let rtuser = 'your_other_username'
                    endif
            else
                    let rtuser = $USER
            endif
            let &runtimepath = substitute(&runtimepath, $HOME, '/home/'.rtuser, 'g')
    endif
    

    It will allow your local plugins to load - whatever way you use to change the user.

    You might also like to take the *.swp files out of your current path and into ~/vimtmp (this goes into .vimrc):

    if ! isdirectory(expand('~/vimtmp'))
       call mkdir(expand('~/vimtmp'))
    endif
    if isdirectory(expand('~/vimtmp'))
       set directory=~/vimtmp
    else
       set directory=.,/var/tmp,/tmp
    endif
    

    Also, some mappings I use to make editing easier - makes ctrl+s work like escape and ctrl+h/l switch the tabs:

    inoremap <C-s> <ESC>
    vnoremap <C-s> <ESC>
    noremap <C-l> gt
    noremap <C-h> gT
    

    Kyle Challis, Apr 2, 2014 at 21:18

    Just in case you didn't already know, ctrl+c already works like escape. – Kyle Challis Apr 2 '14 at 21:18

    shalomb, Aug 24, 2015 at 8:02

    I prefer never to run vim as root/under sudo - and would just run the command from vim e.g. :!sudo tee %, :!sudo mv % /etc or even launch a login shell :!sudo -i – shalomb Aug 24 '15 at 8:02

    community wiki
    2 revs, 2 users 67%
    ,Nov 7, 2009 at 7:54

    Ctrl-n while in insert mode will auto complete whatever word you're typing based on all the words that are in open buffers. If there is more than one match it will give you a list of possible words that you can cycle through using ctrl-n and ctrl-p.

    community wiki
    daltonb
    , Feb 22, 2010 at 4:28

    gg=G
    

    Corrects indentation for entire file. I was missing my trusty <C-a><C-i> in Eclipse but just found out vim handles it nicely.

    sjas, Jul 15, 2012 at 22:43

    I find G=gg easier to type. – sjas Jul 15 '12 at 22:43

    sri, May 12, 2013 at 16:12

    =% should do it too. – sri May 12 '13 at 16:12

    community wiki
    mohi666
    , Mar 24, 2011 at 22:44

    Ability to run Vim in client/server mode.

    For example, suppose you're working on a project with a lot of buffers, tabs and other info saved on a session file called session.vim.

    You can open your session and create a server by issuing the following command:

    vim --servername SAMPLESERVER -S session.vim
    

    Note that you can open regular text files when creating a server; it doesn't necessarily have to be a session.

    Now, suppose you're in another terminal and need to open another file. If you open it regularly by issuing:

    vim new_file.txt
    

    Your file would be opened in a separate Vim instance, which makes it hard to interact with the files in your session. In order to open new_file.txt in a new tab on your server, use this command:

    vim --servername SAMPLESERVER --remote-tab-silent new_file.txt
    

    If there's no server running, this file will be opened just like a regular file.

    Since providing those flags every time is tedious, you can create separate aliases for the client and the server.

    I placed the following in my bashrc file:

    alias vims='vim --servername SAMPLESERVER'
    alias vimc='vim --servername SAMPLESERVER --remote-tab-silent'
    

    You can find more information about this at: http://vimdoc.sourceforge.net/htmldoc/remote.html

    community wiki
    jm666
    , May 11, 2011 at 19:54

    Variation of sudo write:

    into .vimrc

    cmap w!! w !sudo tee % >/dev/null
    

    After reloading vim you can do a "sudo save" with:

    :w!!
    

    community wiki
    3 revs, 3 users 74%
    ,Sep 17, 2010 at 17:06

    HOWTO: Auto-complete Ctags when using Vim in Bash. For anyone else who uses Vim and Ctags, I've written a small auto-completer function for Bash. Add the following into your ~/.bash_completion file (create it if it does not exist):

    Thanks go to stylishpants for his many fixes and improvements.

    _vim_ctags() {
        local cur prev
    
        COMPREPLY=()
        cur="${COMP_WORDS[COMP_CWORD]}"
        prev="${COMP_WORDS[COMP_CWORD-1]}"
    
        case "${prev}" in
            -t)
                # Avoid the complaint message when no tags file exists
                if [ ! -r ./tags ]
                then
                    return
                fi
    
                # Escape slashes to avoid confusing awk
                cur=${cur////\\/}
    
                COMPREPLY=( $(compgen -W "`awk -vORS=" "  "/^${cur}/ { print \\$1 }" tags`" ) )
                ;;
            *)
                _filedir_xspec
                ;;
        esac
    }
    
    # Files matching this pattern are excluded
    excludelist='*.@(o|O|so|SO|so.!(conf)|SO.!(CONF)|a|A|rpm|RPM|deb|DEB|gif|GIF|jp?(e)g|JP?(E)G|mp3|MP3|mp?(e)g|MP?(E)G|avi|AVI|asf|ASF|ogg|OGG|class|CLASS)'
    
    complete -F _vim_ctags -f -X "${excludelist}" vi vim gvim rvim view rview rgvim rgview gview
    

    Once you restart your Bash session (or create a new one) you can type:

    Code:

    ~$ vim -t MyC<tab key>
    

    and it will auto-complete the tag the same way it does for files and directories:

    Code:

    MyClass MyClassFactory
    ~$ vim -t MyC
    

    I find it really useful when I'm jumping into a quick bug fix.

    Sasha, Apr 8, 2009 at 3:05

    Amazing....I really needed it – Sasha Apr 8 '09 at 3:05

    TREE, Apr 27, 2009 at 13:19

    can you summarize? If that external page goes away, this answer is useless. :( – TREE Apr 27 '09 at 13:19

    Hamish Downer, May 5, 2009 at 16:38

    Summary - it allows ctags autocomplete from the bash prompt for opening files with vim. – Hamish Downer May 5 '09 at 16:38

    community wiki
    2 revs, 2 users 80%
    ,Dec 22, 2016 at 7:44

    I often want to highlight a particular word/function name, but don't want to search to the next instance of it yet:
    map m* *#
    

    René Nyffenegger, Dec 3, 2009 at 7:36

    I don't understand this one. – René Nyffenegger Dec 3 '09 at 7:36

    Scotty Allen, Dec 3, 2009 at 19:55

    Try it :) It basically highlights a given word, without moving the cursor to the next occurrence (like * would). – Scotty Allen Dec 3 '09 at 19:55

    jamessan, Dec 27, 2009 at 19:10

    You can do the same with "nnoremap m* :let @/ = '\<' . expand('<cword>') . '\>'<cr>" – jamessan Dec 27 '09 at 19:10

    community wiki
    Ben
    , Apr 9, 2009 at 12:37

    % is also good when you want to diff files across two different copies of a project without wearing out the pinkies (from root of project1):
    :vert diffs /project2/root/%
    

    community wiki
    Naga Kiran
    , Jul 8, 2009 at 19:07

    :setlocal autoread

    Auto-reloads the current buffer; especially useful while viewing log files, it almost gives you the functionality of the Unix "tail" program from within vim.

    Checking for compile errors from within vim: set the makeprg variable depending on the language. For example, for Perl:

    :setlocal makeprg=perl\ -c\ %

    For PHP

    set makeprg=php\ -l\ %
    set errorformat=%m\ in\ %f\ on\ line\ %l

    Issuing ":make" runs the associated makeprg and displays the compilation errors/warnings in quickfix window and can easily navigate to the corresponding line numbers.

    community wiki
    2 revs, 2 users 73%
    ,Sep 14 at 20:16

    Want an IDE?

    :make will run the makefile in the current directory and parse the compiler output; you can then use :cn and :cp to step through the compiler errors, opening each file and jumping to the line number in question.
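    A minimal sketch of that cycle (all standard quickfix commands; the build itself is whatever your makefile does):

    :make       " build and collect errors
    :copen      " open the quickfix window listing them
    :cn         " jump to the next error
    :cp         " jump back to the previous one
    :cclose     " close the quickfix window when you're done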

    :syntax on turns on vim's syntax highlighting.

    community wiki
    Luper Rouch
    , Apr 9, 2009 at 12:53

    Input a character from its hexadecimal value (insert mode):
    <C-Q>x[type the hexadecimal byte]
    

    MikeyB, Sep 22, 2009 at 21:57

    <C-V> is the more generic command that works in both the text-mode and gui – MikeyB Sep 22 '09 at 21:57

    jamessan, Dec 27, 2009 at 19:06

    It's only <C-q> if you're using the awful mswin.vim (or you mapped it yourself). – jamessan Dec 27 '09 at 19:06

    community wiki
    Brad Cox
    , May 8, 2009 at 1:54

    I was sure someone would have posted this already, but here goes.

    Take any build system you please; make, mvn, ant, whatever. In the root of the project directory, create a file of the commands you use all the time, like this:

    mvn install
    mvn clean install
    ... and so forth

    To do a build, put the cursor on the line and type !!sh. I.e. filter that line; write it to a shell and replace with the results.

    The build log replaces the line, ready to scroll, search, whatever.

    When you're done viewing the log, type u to undo and you're back to your file of commands.

    ojblass, Jul 5, 2009 at 18:27

    This doesn't seem to fly on my system. Can you show an example only using the ls command? – ojblass Jul 5 '09 at 18:27

    Brad Cox, Jul 29, 2009 at 19:30

    !!ls replaces current line with ls output (adding more lines as needed). – Brad Cox Jul 29 '09 at 19:30

    jamessan, Dec 28, 2009 at 8:29

    Why wouldn't you just set makeprg to the proper tool you use for your build (if it isn't set already) and then use :make ? :copen will show you the output of the build as well as allowing you to jump to any warnings/errors. – jamessan Dec 28 '09 at 8:29

    community wiki
    2 revs, 2 users 95%
    ,Dec 28, 2009 at 8:38

    ==========================================================
    In normal mode
    ==========================================================
    gf ................ open file under cursor in same window --> see :h path
    Ctrl-w f .......... open file under cursor in new window
    Ctrl-w q .......... close current window
    Ctrl-w 6 .......... open alternate file --> see :h #
    gi ................ init insert mode in last insertion position
    '0 ................ place the cursor where it was when the file was last edited
    

    Braden Best, Feb 4, 2016 at 16:33

    I believe it's <C-w> c to close a window, actually. :h ctrl-w – Braden Best Feb 4 '16 at 16:33

    community wiki
    2 revs, 2 users 84%
    ,Sep 17, 2010 at 16:53

    Due to the latency and lack of colors (I love color schemes :) I don't like programming on remote machines in PuTTY. So I developed this trick to work around the problem. I use it on Windows.

    You will need rsync on both machines (Cygwin provides it on Windows) and the PuTTY suite (PuTTY, Plink, Pageant) on the local side.

    Setting up remote machine

    Configure rsync to make your working directory accessible. I use an SSH tunnel and only allow connections from the tunnel:

    address = 127.0.0.1
    hosts allow = 127.0.0.1
    port = 40000
    use chroot = false
    [bledge_ce]
        path = /home/xplasil/divine/bledge_ce
        read only = false
    

    Then start rsyncd: rsync --daemon --config=rsyncd.conf

    Setting up local machine

    Install rsync from Cygwin. Start Pageant and load your private key for the remote machine. If you're using SSH tunnelling, start PuTTY to create the tunnel. Create a batch file push.bat in your working directory which will upload changed files to the remote machine using rsync:

    rsync --blocking-io *.cc *.h SConstruct rsync://localhost:40001/bledge_ce
    

    SConstruct is a build file for scons. Modify the list of files to suit your needs. Replace localhost with the name of the remote machine if you don't use SSH tunnelling.

    Configuring Vim. That part is now easy. We will use the quickfix feature (:make and the error list), but the compilation will run on the remote machine. So we need to set makeprg:

    set makeprg=push\ &&\ plink\ -batch\ xplasil@anna.fi.muni.cz\ \"cd\ /home/xplasil/divine/bledge_ce\ &&\ scons\ -j\ 2\"
    

    This will first start the push.bat task to upload the files and then execute the commands on remote machine using SSH ( Plink from the PuTTY suite). The command first changes directory to the working dir and then starts build (I use scons).

    The results of the build will show up conveniently in your local gVim error list.

    matpie, Sep 17, 2010 at 23:02

    A much simpler solution would be to use bcvi: sshmenu.sourceforge.net/articles/bcvi – matpie Sep 17 '10 at 23:02

    Uri Goren, Jul 20 at 20:21

    cmder is much easier and simpler, it also comes with its own ssh client – Uri Goren Jul 20 at 20:21

    community wiki
    3 revs, 2 users 94%
    ,Jan 16, 2014 at 14:10

    I use Vim for everything. When I'm editing an e-mail message, I use:

    gqap (or gwap )

    extensively to easily and correctly reformat on a paragraph-by-paragraph basis, even with quote lead-in characters. In order to achieve this functionality, I also add:

    -c 'set fo=tcrq' -c 'set tw=76'

    to the command to invoke the editor externally. One noteworthy addition would be to add ' a ' to the fo (formatoptions) parameter. This will automatically reformat the paragraph as you type and navigate the content, but may interfere or cause problems with errant or odd formatting contained in the message.

    Andrew Ferrier, Jul 14, 2014 at 22:22

    autocmd FileType mail set tw=76 fo=tcrq in your ~/.vimrc will also work, if you can't edit the external editor command. – Andrew Ferrier Jul 14 '14 at 22:22

    community wiki
    2 revs, 2 users 94%
    ,May 6, 2009 at 12:22

    Put this in your .vimrc to have a command to pretty-print xml:
    function FormatXml()
        %s:\(\S\)\(<[^/]\)\|\(>\)\(</\):\1\3\r\2\4:g
        set filetype=xml
        normal gg=G
    endfunction
    
    command FormatXml :call FormatXml()
    

    David Winslow, Nov 24, 2009 at 20:43

    On linuxes (where xmllint is pretty commonly installed) I usually just do :%! xmllint - for this. – David Winslow Nov 24 '09 at 20:43

    community wiki
    searlea
    , Aug 6, 2009 at 9:33

    :sp %:h - directory listing / file-chooser using the current file's directory

    (belongs as a comment under rampion's cd tip, but I don't have commenting-rights yet)

    bpw1621, Feb 19, 2011 at 15:13

    ":e ." does the same thing for your current working directory which will be the same as your current file's directory if you set autochdir – bpw1621 Feb 19 '11 at 15:13

    community wiki
    2 revs
    ,Sep 22, 2009 at 22:23

    Just before copying and pasting to stackoverflow:
    :retab 1
    :% s/^I/ /g
    :% s/^/    /
    

    Now copy and paste code.

    As requested in the comments:

    retab 1. This sets the tab size to one. But it also goes through the code and adds extra tabs and spaces so that the formatting does not move any of the actual text (i.e. the text looks the same after the retab).

    % s/^I/ /g: Note the ^I is the result of hitting tab. This searches for all tabs and replaces them with a single space. Since we just did a retab this should not cause the formatting to change, but since putting tabs into a website is hit and miss it is good to remove them.

    % s/^/    /: Replace the beginning of the line with four spaces. Since you can't actually replace the beginning of the line with anything, it inserts four spaces at the beginning of the line (this is needed by SO formatting to make the code stand out).

    vehomzzz, Sep 22, 2009 at 20:52

    explain it please... – vehomzzz Sep 22 '09 at 20:52

    cmcginty, Sep 22, 2009 at 22:31

    so I guess this won't work if you use 'set expandtab' to force all tabs to spaces. – cmcginty Sep 22 '09 at 22:31

    Martin York, Sep 23, 2009 at 0:07

    @Casey: The first two lines will not apply. The last line will make sure you can just cut and paste into SO. – Martin York Sep 23 '09 at 0:07

    Braden Best, Feb 4, 2016 at 16:40

    Note that you can achieve the same thing with cat <file> | awk '{print " " $line}' . So try :w ! awk '{print " " $line}' | xclip -i . That's supposed to be four spaces between the "" – Braden Best Feb 4 '16 at 16:40

    community wiki
    Anders Holmberg
    , Dec 28, 2009 at 9:21

    When working on a project where the build process is slow I always build in the background and pipe the output to a file called errors.err (something like make debug 2>&1 | tee errors.err ). This makes it possible for me to continue editing or reviewing the source code during the build process. When it is ready (using pynotify on GTK to inform me that it is complete) I can look at the result in vim using quickfix . Start by issuing :cf[ile] which reads the error file and jumps to the first error. I personally like to use cwindow to get the build result in a separate window.
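    A minimal sketch of that flow, using the errors.err name from above (the make target is illustrative):

    make debug 2>&1 | tee errors.err     (run the build in a shell, in the background)

    :cfile errors.err    " in vim: read the error file and jump to the first error
    :cwindow             " open the quickfix window if there are errors
    :cnext               " step through the remaining ones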

    community wiki
    quabug
    , Jul 12, 2011 at 12:21

    set colorcolumn=+1 or set cc=+1 for vim 7.3

    Luc M, Oct 31, 2012 at 15:12

    A short explanation would be appreciated... I tried it and it could be very useful! You can even do something like set colorcolumn=+1,+10,+20 :-) – Luc M Oct 31 '12 at 15:12

    DBedrenko, Oct 31, 2014 at 16:17

    @LucM If you tried it why didn't you provide an explanation? – DBedrenko Oct 31 '14 at 16:17

    mjturner, Aug 19, 2015 at 11:16

    colorcolumn allows you to specify columns that are highlighted (it's ideal for making sure your lines aren't too long). In the original answer, set cc=+1 highlights the column after textwidth . See the documentation for more information. – mjturner Aug 19 '15 at 11:16

    community wiki
    mpe
    , May 11, 2009 at 4:39

    For making vim a little more like an IDE editor:
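    Judging from the comments below, the settings in question were along these lines (an assumed sketch, not necessarily the author's exact list):

    :set number       " show line numbers
    :set nonumber     " hide them again
    :set invnumber    " or simply toggle, as Rook suggests in the comments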

    Rook, May 11, 2009 at 4:42

    How does that make Vim more like an IDE ?? – Rook May 11 '09 at 4:42

    mpe, May 12, 2009 at 12:29

    I did say "a little" :) But it is something many IDEs do, and some people like it, eg: eclipse.org/screenshots/images/JavaPerspective-WinXP.pngmpe May 12 '09 at 12:29

    Rook, May 12, 2009 at 21:25

    Yes, but that's like saying yank/paste functions make an editor "a little" more like an IDE. Those are editor functions. Pretty much everything that goes with the editor that concerns editing text and that particular area is an editor function. IDE functions would be, for example, project/files management, connectivity with compiler & linker, error reporting, build automation tools, debugger... i.e. the stuff that has nothing to do with editing text. Vim has some functions & plugins so it can gravitate a little more towards being an IDE, but these are not the ones in question. – Rook May 12 '09 at 21:25

    Rook, May 12, 2009 at 21:26

    After all, an IDE = editor + compiler + debugger + building tools + ... – Rook May 12 '09 at 21:26

    Rook, May 12, 2009 at 21:31

    Also, just FYI, vim has an option to set invnumber. That way you don't have to "set nu" and "set nonu", i.e. remember two functions - you can just toggle. – Rook May 12 '09 at 21:31

    community wiki
    2 revs, 2 users 50%
    PuzzleCracker
    , Sep 13, 2009 at 23:20

    I love :ls command.

    aehlke, Oct 28, 2009 at 3:16

    Well what does it do? – aehlke Oct 28 '09 at 3:16

    user59634, Dec 7, 2009 at 10:51

    gives the current file name opened ? – user59634 Dec 7 '09 at 10:51

    Nona Urbiz, Dec 20, 2010 at 8:25

    :ls lists all the currently opened buffers. :be opens a file in a new buffer, :bn goes to the next buffer, :bp to the previous, :b filename opens buffer filename (it auto-completes too). buffers are distinct from tabs, which i'm told are more analogous to views. – Nona Urbiz Dec 20 '10 at 8:25

    community wiki
    2 revs, 2 users 80%
    ,Sep 17, 2010 at 16:45

    A few useful ones:
    :set nu # displays line numbers
    :44     # go to line 44
    '.      # go to the line of the last modification
    

    My favourite: Ctrl + n WORD COMPLETION!

    community wiki
    2 revs
    ,Jun 18, 2013 at 11:10

    In insert mode, ctrl + x, ctrl + p will complete (with menu of possible completions if that's how you like it) the current long identifier that you are typing.
    if (SomeCall(LONG_ID_ <-- type c-x c-p here
                [LONG_ID_I_CANT_POSSIBLY_REMEMBER]
                 LONG_ID_BUT_I_NEW_IT_WASNT_THIS_ONE
                 LONG_ID_GOSH_FORGOT_THIS
                 LONG_ID_ETC
                 ∶
    

    Justin L., Jun 13, 2013 at 16:21

    i type ctrl+p way too much by accident while trying to hit ctrl+[ >< – Justin L. Jun 13 '13 at 16:21

    community wiki
    Fritz G. Mehner
    , Apr 22, 2009 at 16:41

    Use the right mouse key to toggle insert mode in gVim with the following settings in ~/.gvimrc :
    "
    "------------------------------------------------------------------
    " toggle insert mode <--> 'normal mode with the <RightMouse>-key
    "------------------------------------------------------------------
    nnoremap  <RightMouse> <Insert>
    inoremap  <RightMouse> <ESC>
    "
    

    Andreas Grech, Jun 20, 2010 at 17:22

    This is stupid. Defeats the productivity gains from not using the mouse. – Andreas Grech Jun 20 '10 at 17:22

    Brady Trainor, Jul 5, 2014 at 21:07

    Maybe fgm has head gestures mapped to mouse clicks. – Brady Trainor Jul 5 '14 at 21:07

    community wiki
    AIB
    , Apr 27, 2009 at 13:06

    Replace all
      :%s/oldtext/newtext/igc
    

    Press a at the confirmation prompt to replace all :)

    Nathan Fellman, Jan 12, 2011 at 20:58

    or better yet, instead of typing a, just remove the c . c means confirm replacement – Nathan Fellman Jan 12 '11 at 20:58

    community wiki
    2 revs
    ,Sep 13, 2009 at 18:39

    None of the following is really diehard, but I find it extremely useful.

    Trivial bindings, but I just can't live without. It enables hjkl-style movement in insert mode (using the ctrl key). In normal mode: ctrl-k/j scrolls half a screen up/down and ctrl-l/h goes to the next/previous buffer. The µ and ù mappings are especially for an AZERTY-keyboard and go to the next/previous make error.

    imap <c-j> <Down>
    imap <c-k> <Up>
    imap <c-h> <Left>
    imap <c-l> <Right>
    nmap <c-j> <c-d>
    nmap <c-k> <c-u>
    nmap <c-h> <c-left>
    nmap <c-l> <c-right>
    
    nmap ù :cp<RETURN>
    nmap µ :cn<RETURN>
    

    A small function I wrote to highlight functions, globals, macros, structs and typedefs. (Might be slow on very large files.) Each type gets different highlighting (see ":help group-name" to get an idea of your current color theme's settings). Usage: save the file with <Leader>ww (default "\ww"). You need ctags for this.

    nmap <Leader>ww :call SaveCtagsHighlight()<CR>
    
    "Based on: http://stackoverflow.com/questions/736701/class-function-names-highlighting-in-vim
    function SaveCtagsHighlight()
        write
    
        let extension = expand("%:e")
        if extension!="c" && extension!="cpp" && extension!="h" && extension!="hpp"
            return
        endif
    
        silent !ctags --fields=+KS *
        redraw!
    
        let list = taglist('.*')
        for item in list
            let kind = item.kind
    
            if     kind == 'member'
                let kw = 'Identifier'
            elseif kind == 'function'
                let kw = 'Function'
            elseif kind == 'macro'
                let kw = 'Macro'
            elseif kind == 'struct'
                let kw = 'Structure'
            elseif kind == 'typedef'
                let kw = 'Typedef'
            else
                continue
            endif
    
            let name = item.name
            if name != 'operator=' && name != 'operator ='
                exec 'syntax keyword '.kw.' '.name
            endif
        endfor
        echo expand("%")." written, tags updated"
    endfunction
    

    I have the habit of writing lots of code and functions, and I don't like to write prototypes for them, so I made a function to generate a list of prototypes within a C-style source file. It comes in two flavors: one that removes the formal parameters' names and one that preserves them. I just refresh the entire list every time I need to update the prototypes; that avoids having prototypes and function definitions out of sync. This also needs ctags.

    "Usage: in normal mode, where you want the prototypes to be pasted:
    ":call GenerateProptotypes()
    function GeneratePrototypes()
        execute "silent !ctags --fields=+KS ".expand("%")
        redraw!
        let list = taglist('.*')
        let line = line(".")
        for item in list
            if item.kind == "function"  &&  item.name != "main"
                let name = item.name
                let retType = item.cmd
                let retType = substitute( retType, '^/\^\s*','','' )
                let retType = substitute( retType, '\s*'.name.'.*', '', '' ) 
    
                if has_key( item, 'signature' )
                    let sig = item.signature
                    let sig = substitute( sig, '\s*\w\+\s*,',        ',',   'g')
                    let sig = substitute( sig, '\s*\w\+\(\s)\)', '\1', '' )
                else
                    let sig = '()'
                endif
                let proto = retType . "\t" . name . sig . ';'
                call append( line, proto )
                let line = line + 1
            endif
        endfor
    endfunction
    
    
    function GeneratePrototypesFullSignature()
        "execute "silent !ctags --fields=+KS ".expand("%")
        let dir = expand("%:p:h")
        execute "silent !ctags --fields=+KSi --extra=+q ".dir."/*"
        redraw!
        let list = taglist('.*')
        let line = line(".")
        for item in list
            if item.kind == "function"  &&  item.name != "main"
                let name = item.name
                let retType = item.cmd
                let retType = substitute( retType, '^/\^\s*','','' )
                let retType = substitute( retType, '\s*'.name.'.*', '', '' ) 
    
                if has_key( item, 'signature' )
                    let sig = item.signature
                else
                    let sig = '(void)'
                endif
                let proto = retType . "\t" . name . sig . ';'
                call append( line, proto )
                let line = line + 1
            endif
        endfor
    endfunction
    

    community wiki: Yada, Nov 24, 2009 at 20:21

    I collected these over the years.
    " Pasting in normal mode should append to the right of cursor
    nmap <C-V>      a<C-V><ESC>
    " Saving
    imap <C-S>      <C-o>:up<CR>
    nmap <C-S>      :up<CR>
    " Insert mode control delete
    imap <C-Backspace> <C-W>
    imap <C-Delete> <C-O>dw
    nmap    <Leader>o       o<ESC>k
    nmap    <Leader>O       O<ESC>j
    " tired of my typo
    nmap :W     :w
    

    community wiki: jonyamo, May 10, 2010 at 15:01

    Create a function to execute the current buffer using its shebang (assuming one is set) and call it with Ctrl-x.
    map <C-X> :call CallInterpreter()<CR>
    
    au BufEnter *
    \ if match (getline(1),  '^\#!') == 0 |
    \   execute("let b:interpreter = getline(1)[2:]") |
    \ endif
    
    fun! CallInterpreter()
        if exists("b:interpreter")
            exec("! ".b:interpreter." %")
        endif
    endfun
    

    community wiki: Marcus Borkenhagen, Jan 12, 2011 at 15:22

    Map macros

    I rather often find it useful to define a key mapping on the fly, just as one would record a macro. The twist is that the mapping is recursive: it keeps executing itself until it fails.

    Example:

    enum ProcStats
    {
            ps_pid,
            ps_comm,
            ps_state,
            ps_ppid,
            ps_pgrp,
    :map X /ps_<CR>3xixy<Esc>X
    

    Gives:

    enum ProcStats
    {
            xypid,
            xycomm,
            xystate,
            xyppid,
            xypgrp,
    

    Just a silly example :).

    I am completely aware of all the downsides - it just so happens that I have found it rather useful on some occasions. Also, it can be interesting to watch it at work ;).

    00dani, Aug 2, 2013 at 11:25

    Macros are also allowed to be recursive, and they work in pretty much the same fashion when they are, so it's not strictly necessary to use a mapping for this. – 00dani Aug 2 '13 at 11:25
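
    For completeness, a minimal sketch of the recursive-macro equivalent of the X mapping above (register q is used purely for illustration):

    qqq                 " record nothing into register q, i.e. clear it
    qq                  " start recording into register q
    /ps_<CR>3xixy<Esc>  " the same edit as the mapping
    @q                  " call the macro from inside itself (a no-op while recording)
    q                   " stop recording
    @q                  " run it; it now repeats until the search fails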