Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

Unix Sysadmin Tips

There are some very useful blogs and columns that contain tips useful for most sysadmins. Among them is Unix as a Second Language by Sandra Henry-Stocker.

aliases

Aliases will not provide information on how to use commands, but can be a great boon to remembering them – especially those that are complex or require a string of options to do what you want. Here are some examples that I use to avoid command complexity:

alias dirsBySize='du -kx | egrep -v "\./.+/" | sort -n'
alias myip='hostname -I | awk '\''{print $1}'\'''
alias oct2dec='f(){ echo "obase=10; ibase=8; $1" | bc; unset -f f; }; f'
alias recent='ls -ltr | tail -5'
alias rel='lsb_release -r'
alias side-by-side='pr -mt '
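
Once defined (typically in ~/.bashrc), these aliases are used like ordinary commands. A quick illustration of a couple of them; the oct2dec output is simply what bc prints for octal 755:

$ oct2dec 755
493
$ recent            # five most recently modified files in the current directory
$ rel               # prints the Release line from lsb_release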

cheat

There's a very useful snap called "cheat" that can be used to print a cheat sheet for a particular command. It will contain a lot of useful examples of how to use the command. You do, however, have to be using a system that supports snaps (distribution-neutral packages) and install cheat.

Here's a truncated example of what you might see:

shs@firefly:~$ cheat grep
# To search a file for a pattern:
grep <pattern> <file>

# To perform a case-insensitive search (with line numbers):
grep -in <pattern> <file>

# To recursively grep for string <pattern> in <dir>:
grep -R <pattern> <dir>

# Read search patterns from a file (one per line):
grep -f <pattern-file> <file>

# Find lines NOT containing pattern:
grep -v <pattern> <file>

# To grep with regular expressions:
grep "^00" <file>                                               # Match lines starting with 00
grep -E "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" <file> # Find IP add
…

cheat sheets

You can also locate and use a prepared Linux cheat sheet, whether you print it and keep it on your desktop or download a PDF that you can open when needed. It's hard to know all of the commands available on Linux or all of the options available with any particular command. Good cheat sheets can save you a lot of trouble by providing common usage examples.

 



Old News ;-)

[Jun 26, 2021] Replace man pages with Tealdeer on Linux - Opensource.com

Jun 22, 2021 | opensource.com

Tealdeer is a Rust implementation of tldr, which provides easy-to-understand information about common commands. 21 Jun 2021, Sudeshna Sur (Red Hat, Correspondent)


Man pages were my go-to resource when I started exploring Linux. Certainly, man is the most frequently used command when a beginner starts getting familiar with the world of the command line. But man pages, with their extensive lists of options and arguments, can be hard to decipher, which makes it difficult to understand whatever you wanted to know. If you want an easier solution with example-based output, I think tldr is the best option.

What's Tealdeer?

Tealdeer is a wonderful implementation of tldr in Rust. It's a community-driven man page that gives very simple examples of how commands work. The best part about Tealdeer is that it has virtually every command you would normally use.

Install Tealdeer

On Linux, you can install Tealdeer from your software repository. For example, on Fedora :

$ sudo dnf install tealdeer

On macOS, use MacPorts or Homebrew.

Alternatively, you can build and install the tool with Rust's Cargo package manager:

$ cargo install tealdeer
Use Tealdeer

Entering tldr --list returns the list of man pages tldr supports, like touch, tar, dnf, docker, zcat, zgrep, and so on:

$ tldr --list
2to3
7z
7za
7zr
[
a2disconf
a2dismod
a2dissite
a2enconf
a2enmod
a2ensite
a2query
[ ... ]

Using tldr with a specific command (like tar) shows example-based man pages that describe all the options you can use with that command:

$ tldr tar

Archiving utility.
Often combined with a compression method, such as gzip or bzip2.
More information: <https://www.gnu.org/software/tar>.

[c]reate an archive and write it to a [f]ile:

tar cf target.tar file1 file2 file3

[c]reate a g[z]ipped archive and write it to a [f]ile:

tar czf target.tar.gz file1 file2 file3

[c]reate a g[z]ipped archive from a directory using relative paths:

tar czf target.tar.gz --directory=path/to/directory .

E[x]tract a (compressed) archive [f]ile into the current directory [v]erbosely:

tar xvf source.tar[.gz|.bz2|.xz]

E[x]tract a (compressed) archive [f]ile into the target directory:

tar xf source.tar[.gz|.bz2|.xz] --directory=directory

[c]reate a compressed archive and write it to a [f]ile, using [a]rchive suffix to determine the compression program:

tar caf target.tar.xz file1 file2 file3

To control the cache:

$ tldr --update
$ tldr --clear-cache

You can give Tealdeer output some color with the --color option, setting it to always, auto, or never. The default is auto, but I like the added context color provides, so I make mine permanent with this addition to my ~/.bashrc file:

alias tldr='tldr --color always'
Conclusion

The beauty of Tealdeer is you don't need a network connection to use it, except when you're updating the cache. So, even if you are offline, you can still search for and learn about your new favorite command. For more information, consult the tool's documentation.

Would you use Tealdeer? Or are you already using it? Let us know what you think in the comments below.

[Jun 19, 2021] How To Comment Out Multiple Lines At Once In Vim Editor - OSTechNix

Jun 19, 2021 | ostechnix.com

Method 1:

Step 1: Open the file using vim editor with command:

$ vim ostechnix.txt

Step 2: Highlight the lines that you want to comment out. To do so, go to the line you want to comment and move the cursor to the beginning of a line.

Press SHIFT+V to highlight the whole line after the cursor. After highlighting the first line, press UP or DOWN arrow keys or k or j to highlight the other lines one by one.

Here is how the lines will look after highlighting them.

Highlight lines in Vim editor

Step 3: After highlighting the lines that you want to comment out, type the following and hit ENTER key:

:s/^/# /

Please mind the space between # and the last forward slash ( / ).

Now you will see that the selected lines are commented out, i.e., the # symbol is added at the beginning of all the lines.

Comment out multiple lines at once in Vim editor

Here, s stands for "substitution". In our case, we substitute the caret symbol ^ (at the beginning of the line) with # (hash). As we all know, we put # in front of a line to comment it out.

Step 4: After commenting the lines, you can type :w to save the changes or type :wq to save the file and exit.

Let us move on to the next method.

Method 2:

Step 1: Open the file in vim editor.

$ vim ostechnix.txt

Step 2: Set line numbers by typing the following in vim editor and hit ENTER.

:set number
Set line numbers in Vim

Step 3: Then enter the following command:

:1,4s/^/#

In this case, we are commenting out the lines from 1 to 4 . Check the following screenshot. The lines from 1 to 4 have been commented out.

Comment out multiple lines at once in Vim editor

Step 4: Finally, unset the line numbers.

:set nonumber

Step 5: To save the changes type :w or :wq to save the file and exit.

The same procedure can be used for uncommenting the lines in a file. Open the file and set the line numbers as shown in Step 2. Finally, type the following command and hit ENTER at Step 3:

:1,3s/^#/

After uncommenting the lines, simply remove the line numbers by entering the following command:

:set nonumber

Let us go ahead and see the third method.

Method 3:

This one is similar to Method 2 but slightly different.

Step 1: Open the file in vim editor.

$ vim ostechnix.txt

Step 2: Set line numbers by typing:

:set number

Step 3: Type the following to comment out the lines.

:1,4s/^/# /

The above command will comment out lines from 1 to 4.

Comment out multiple lines at once in Vim editor

Step 4: Finally, unset the line numbers by typing the following.

:set nonumber
Method 4:

This method is suggested by one of our readers, Mr. Anand Nande, in the comment section below.

Step 1: Open file in vim editor:

$ vim ostechnix.txt

Step 2: Go to the line you want to comment. Press Ctrl+V to enter into 'Visual block' mode.

Enter into Visual block mode in Vim editor

Step 3: Press the UP or DOWN arrow or the letter k or j on your keyboard to select all the lines that you want to comment out in your file.

Select the lines to comment in Vim

Step 4: Press Shift+i to enter into INSERT mode. This will place your cursor on the first line.

Step 5: And then insert # (press Shift+3 ) before your first line.

Insert hash symbol before a line in Vim

Step 6: Finally, press the ESC key. This will insert # on all the other selected lines.

Comment out multiple lines at once in Vim editor

As you see in the above screenshot, all other selected lines including the first line are commented out.

Method 5:

This method is suggested by one of our Twitter followers and friends, Mr. Tim Chase. We can even target lines to comment out by regex. In other words, we can comment out all the lines that contain a specific word.

Step 1: Open the file in vim editor.

$ vim ostechnix.txt

Step 2: Type the following and press ENTER key:

:g/Linux/s/^/# /

The above command will comment out all lines that contain the word "Linux". Replace "Linux" with a word of your choice.

Comment out all lines that contains a specific word in Vim editor

As you see in the above output, all the lines have the word "Linux" , hence all of them are commented out.

And, that's all for now. I hope this was useful. If you know of any methods other than the ones given here, please let me know in the comment section below. I will check and add them to the guide.


Also, have a look at the comment section below. One of our visitors has shared a good guide about Vim usage.


[Jun 12, 2021] Ctrl-R -- Find and run a previous command

Jun 12, 2021 | anto.online

What if you needed to execute a specific command again, one which you used a while back? And you can't remember the first character, but you can remember you used the word "serve".

You can use the up key and keep on pressing up until you find your command. (That could take some time)

Or, you can press CTRL + R and type a few keywords you used in your last command. The shell will search your history and locate the command; press Enter once you have found it. The example below shows how you can press CTRL + R and then type "ser" to find the previously run "php artisan serve" command. For sure, this tip will help you speed up your command-line experience.

anto@odin:~$ 
(reverse-i-search)`ser': php artisan serve

You can also use the history command to output all the previously stored commands. The history command gives a list ordered by execution, oldest first.
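
Combining the two approaches is also handy: grep the history for a keyword, note the entry number, and re-run it with a bang command (the entry number below is made up for illustration):

$ history | grep serve
 1012  php artisan serve
$ !1012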

[Jun 12, 2021] The use of PS4='$LINENO: ' in debugging bash scripts

Jun 10, 2021 | www.redhat.com

Exit status

In Bash scripting, $? prints the exit status. If it returns zero, it means there is no error. If it is non-zero, then you can conclude the earlier task has some issue.

A basic example is as follows:

$ cat myscript.sh
#!/bin/bash
mkdir learning
echo $?

If you run the above script once, it will print 0 because the directory does not exist yet, so mkdir creates it and succeeds. Naturally, you will get a non-zero value if you run the script a second time, as seen below:

$ sh myscript.sh
mkdir: cannot create directory 'learning': File exists
1
Best practices

It is always recommended to enable debug mode by adding the set -x option to your shell script, as below:

$ cat test3.sh
#!/bin/bash
set -x
echo "hello World"
mkdiir testing

$ ./test3.sh
+ echo 'hello World'
hello World
+ mkdiir testing
./test3.sh: line 4: mkdiir: command not found

You can also write a debug function, which you can then call anywhere you need it, as in the example below:

$ cat debug.sh
#!/bin/bash
_DEBUG="on"
function DEBUG()
{
 [ "$_DEBUG" == "on" ] && $@
}
DEBUG echo 'Testing Debugging'
DEBUG set -x
a=2
b=3
c=$(( $a + $b ))
DEBUG set +x
echo "$a + $b = $c"

Which prints:

$ ./debug.sh
Testing Debugging
+ a=2
+ b=3
+ c=5
+ DEBUG set +x
+ '[' on == on ']'
+ set +x
2 + 3 = 5
Standard error redirection

You can redirect all the system errors to a custom file using standard errors, which can be denoted by the number 2 . Execute it in normal Bash commands, as demonstrated below:

$ mkdir users 2> errors.txt
$ cat errors.txt
mkdir: cannot create directory 'users': File exists

Most of the time, it is difficult to find the exact line number in scripts. To print the line number with the error, use the PS4 variable (supported with Bash 4.1 or later). Example below:

$ cat test3.sh
#!/bin/bash
PS4='$LINENO: '

set -x
echo "hello World"
mkdiir testing

You can easily see the line number while reading the errors:

$ ./test3.sh
5: echo 'hello World'
hello World
6: mkdiir testing
./test3.sh: line 6: mkdiir: command not found
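
You can take the same idea a bit further and include the script name and current function in the trace prefix. A minimal sketch; the file name, function, and line numbers are made up, and the deliberate mkdiir typo mirrors the example above:

$ cat test4.sh
#!/bin/bash
# Show file, line, and function in every trace line
PS4='+ ${BASH_SOURCE##*/}:${LINENO}:${FUNCNAME[0]:-main}: '
set -x

greet() {
    echo "hello from a function"
}

greet
mkdiir testing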

[Jun 12, 2021] What is your Linux server hardware decommissioning process

May 20, 2021
Jun 10, 2021 | www.redhat.com

by Ken Hess (Red Hat)

Even small to medium-sized companies have some sort of governance surrounding server decommissioning. They might not call it decommissioning but the process usually goes something like the following:

[Jun 12, 2021] 12 Useful Linux date Command Examples

Jun 10, 2021 | vitux.com

Displaying Date From String

We can display a formatted date from a date string provided by the user using the -d or --date option to the command. It will not affect the system date; it only parses the requested date from the string. For example,

$ date -d "Feb 14 1999"

Parsing string to date.

$ date --date="09/10/1960"

Parsing string to date.

Displaying Upcoming Date & Time With -d Option

Aside from parsing the date, we can also display an upcoming date using the -d option with the command. The date command is compatible with words that refer to time or date values such as next Sun, last Friday, tomorrow, yesterday, etc. For example,

Displaying Next Monday Date

$ date -d "next Mon"

Displaying upcoming date.

Displaying Past Date & Time With -d Option

Using the -d option to the command, we can also view past dates. For example,

Displaying Last Friday Date
$ date -d "last Fri"

Displaying past date

Parse Date From File

If you have static date strings recorded in a file, you can parse them in the preferred date format using the -f option with the date command. In this way, you can format multiple dates with one command. In the following example, I have created a file that contains a list of date strings and parsed it with the command.

$ date -f datefile.txt

Parse date from the file.
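
For illustration, such a file simply holds one date string per line; the sample output below assumes a UTC timezone, and the exact format will depend on your locale:

$ cat datefile.txt
Feb 14 1999
09/10/1960
$ date -f datefile.txt
Sun Feb 14 00:00:00 UTC 1999
Sat Sep 10 00:00:00 UTC 1960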

Setting Date & Time on Linux

We can not only view the date but also set the system date according to our preference. For this, you need a user with sudo access, and you can execute the command in the following way.

$ sudo date -s "Sun 30 May 2021 07:35:06 PM PDT"
Display File Last Modification Time

We can check a file's last modification time using the date command; for this, we need to add the -r option. It helps in tracking when a file was last modified. For example,

$ date -r /etc/hosts

[Jun 08, 2021] How to use TEE command in Linux

Apr 21, 2021 | linuxtechlab.com

3- Write output to multiple files

With the tee command, we have the option to copy the output to multiple files as well, and this can be done as follows,

# free -m | tee output1.txt output2.txt

... ... ...

5- Ignore any interrupts

There are instances where we might face some interruptions while running a command, but we can suppress that with the help of the '-i' option,

# ping -c 3 <host> | tee -i output1.txt
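
One more tee option worth knowing, not shown in the excerpt above, is -a, which appends to the target file instead of overwriting it:

# echo "ping test complete" | tee -a output1.txt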

[Jun 08, 2021] Recovery LVM Data from RAID

May 24, 2021 | blog.dougco.com


We had a client that had an OLD fileserver box, a Thecus N4100PRO. It was completely dust-ridden and the power supply had burned out.

Since these drives were in a RAID configuration, you could not hook any one of them up to a windows box, or a linux box to see the data. You have to hook them all up to a box and reassemble the RAID.

We took out the drives (3 of them) and then used an external SATA to USB box to connect them to a Linux server running CentOS. You can use parted to see what drives are now being seen by your linux system:

parted -l | grep 'raid\|sd'

Then using that output, we assembled the drives into a software array:

mdadm -A /dev/md0 /dev/sdb2 /dev/sdc2 /dev/sdd2

If we tried to only use two of those drives, it would give an error, since these were all in a linear RAID in the Thecus box.

If the last command went well, you can see the built array like so:

root% cat /proc/mdstat
Personalities : [linear]
md0 : active linear sdd2[0] sdb2[2] sdc2[1]
1459012480 blocks super 1.0 128k rounding

Note the personality shows the RAID type, in our case it was linear, which is probably the worst RAID since if any one drive fails, your data is lost. So good thing these drives outlasted the power supply! Now we find the physical volume:

pvdisplay /dev/md0

Gives us:

-- Physical volume --
PV Name /dev/md0
VG Name vg0
PV Size 1.36 TB / not usable 704.00 KB
Allocatable yes
PE Size (KByte) 2048
Total PE 712408
Free PE 236760
Allocated PE 475648
PV UUID iqwRGX-zJ23-LX7q-hIZR-hO2y-oyZE-tD38A3

Then we find the logical volume:

lvdisplay /dev/vg0

Gives us:

-- Logical volume --
LV Name /dev/vg0/syslv
VG Name vg0
LV UUID UtrwkM-z0lw-6fb3-TlW4-IpkT-YcdN-NY1orZ
LV Write Access read/write
LV Status NOT available
LV Size 1.00 GB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors 16384

-- Logical volume --
LV Name /dev/vg0/lv0
VG Name vg0
LV UUID 0qsIdY-i2cA-SAHs-O1qt-FFSr-VuWO-xuh41q
LV Write Access read/write
LV Status NOT available
LV Size 928.00 GB
Current LE 475136
Segments 1
Allocation inherit
Read ahead sectors 16384

We want to focus on the lv0 volume. You cannot mount it yet; first check the volumes with lvscan.

lvscan

It shows that the volumes are currently inactive:

inactive '/dev/vg0/syslv' [1.00 GB] inherit
inactive '/dev/vg0/lv0' [928.00 GB] inherit

So we set them active with:

vgchange vg0 -a y

And doing lvscan again shows:

ACTIVE '/dev/vg0/syslv' [1.00 GB] inherit
ACTIVE '/dev/vg0/lv0' [928.00 GB] inherit

Now we can mount with:

mount /dev/vg0/lv0 /mnt

And voila! We have our data up and accessible in /mnt to recover! Of course, your setup will most likely look different from what I have shown above, but hopefully this gives some helpful information for you to recover your own data.
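
One small refinement worth considering in this kind of recovery: mount the logical volume read-only so that nothing can accidentally modify the data you are trying to rescue:

mount -o ro /dev/vg0/lv0 /mnt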

[Jun 08, 2021] Bang commands: two potentially useful shortcuts for command line -- !! and !$ by Nikolai Bezroukov

softpanorama.org

Those shortcuts belong to the class of commands known as bang commands. An Internet search for this term provides a wealth of additional information (which you probably do not need ;-), so I will concentrate on just the most common bang commands that are potentially useful in the current command-line environment. Of them, !$ is probably the most useful and definitely the most widely used. For many sysadmins it is the only bang command that is regularly used.

  1. !! is the bang command that re-executes the last command. It is used mainly in the shortcut sudo !! -- re-running the command with elevated privileges after it failed under your user account. For example:

    fgrep 'kernel' /var/log/messages # this will fail due to insufficient privileges, as /var/log/messages is not readable by an ordinary user
    sudo !! # now we re-execute the command with elevated privileges
    
  2. !$ puts into the current command line the last argument from previous command . For example:

    mkdir -p /tmp/Bezroun/Workdir
    cd !$
    
    In this example the last command is equivalent to the command cd /tmp/Bezroun/Workdir. Please try this example. It is a pretty neat trick.

NOTE: You can also work with individual arguments using numbers.

For example:
cp !:2 !:3 # picks up the second and the third argument from the previous command
For this and other bang command capabilities, copying fragments of the previous command line using the mouse is often more convenient, and you do not need to remember extra stuff. After all, bang commands were created before the mouse was available, and most of them reflect the realities and needs of that bygone era. Still, I have met sysadmins who even now use this and some additional capabilities, like !!:s^<old>^<new> (which replaces the string 'old' with the string 'new' and re-executes the previous command).

The same is true for !* -- all arguments of the last command. I do not use them and have had trouble writing this part of this post, correcting it several times to make it right.
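
For completeness, here is what the !!:s^old^new substitution form mentioned above looks like in practice (the misspelled file name is just an illustration); bash echoes the corrected command and then runs it:

$ cat /var/log/messgaes
cat: /var/log/messgaes: No such file or directory
$ !!:s^messgaes^messages^
cat /var/log/messages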

Nowadays CTRL+R activates reverse search, which provides an easier way to navigate through your history than the capabilities bang commands provided in the past.

[May 28, 2021] Linux lsof Command Tutorial for Beginners (15 Examples) by Himanshu Arora

Images removed. See the original for the full text
May 23, 2021 | www.howtoforge.com

1. How to list all open files

To list all open files, run the lsof command without any arguments:

lsof

For example, here is a screengrab of part of the output the above command produced on my system:

The first column represents the process while the last column contains the file name. For details on all the columns, head to the command's man page .

2. How to list files opened by processes belonging to a specific user

The tool also allows you to list files opened by processes belonging to a specific user. This feature can be accessed by using the -u command-line option.

lsof -u [user-name]

For example:

lsof -u administrator
3. How to list files based on their Internet address

The tool lets you list files based on their Internet address. This can be done using the -i command-line option. For example, if you want, you can have IPv4 and IPv6 files displayed separately. For IPv4, run the following command:

lsof -i 4

...

4. How to list all files by application name

The -c command-line option allows you to get all files opened by program name.

$ lsof -c apache

You do not have to use the full program name as all programs that start with the word 'apache' are shown. So in our case, it will list all processes of the 'apache2' application.

The -c option is basically just a shortcut for the two commands:

$ lsof | grep apache
5. How to list files specific to a process

The tool also lets you display opened files based on process identification (PID) numbers. This can be done by using the -p command-line option.

lsof -p [PID]

For example:

lsof -p 856

Moving on, you can also exclude specific PIDs in the output by adding the ^ symbol before them. To exclude a specific PID, you can run the following command:

lsof -p [^PID]

For example:

lsof -p ^1

As you can see in the above screenshot, the process with id 1 is excluded from the list.
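
The same ^ negation also works with the -u option described earlier. For example, to list files opened by every user except root:

$ lsof -u ^root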

6. How to list IDs of processes that have opened a particular file

The tool allows you to list IDs of processes that have opened a particular file. This can be done by using the -t command line option.

$ lsof -t [file-name]

For example:

$ lsof -t /usr/lib/x86_64-linux-gnu/libpcre2-8.so.0.9.0
7. How to list all open files in a directory

If you want, you can also make lsof search for all open instances of a directory (including all the files and directories it contains). This feature can be accessed using the +D command-line option.

$ lsof +D [directory-path]

For example:

$ lsof +D /usr/lib/locale
8. How to list all Internet and x.25 (HP-UX) network files

This is possible by using the -i command-line option we described earlier. Just that you have to use it without any arguments.

$ lsof -i
9. Find out which program is using a port

The -i switch of the command allows you to find a process or application which listens to a specific port number. In the example below, I checked which program is using port 80.

$ lsof -i :80

Instead of the port number, you can use the service name as listed in the /etc/services file. Example to check which app listens on the HTTPS (443) port:

$ lsof -i :https

... ... ...

The above examples will check both TCP and UDP. If you like to check for TCP or UDP only, prepend the word 'tcp' or 'udp'. For example, which application is using port 25 TCP:

$ lsof -i tcp:25

or which app uses UDP port 53:

$ lsof -i udp:53
10. How to list open files based on port range

The utility also allows you to list open files based on a specific port or port range. For example, to display open files for port 1-1024, use the following command:

$ lsof -i :1-1024
11. How to list open files based on the type of connection (TCP or UDP)

The tool allows you to list files based on the type of connection. For example, for UDP specific files, use the following command:

$ lsof -i udp

Similarly, you can make lsof display TCP-specific files.

12. How to make lsof list Parent PID of processes

There's also an option that forces lsof to list the Parent Process IDentification (PPID) number in the output. The option in question is -R .

$ lsof -R

To get PPID info for a specific PID, you can run the following command:

$ lsof -p [PID] -R

For example:

$ lsof -p 3 -R
13. How to find network activity by user

By using a combination of the -i and -u command-line options, we can search for all network connections of a Linux user. This can be helpful if you inspect a system that might have been hacked. In this example, we check all network activity of the user www-data:

$ lsof -a -i -u www-data
14. List all memory-mapped files

This command lists all memory-mapped files on Linux.

$ lsof -d mem
15. List all NFS files

The -N option shows you a list of all NFS (Network File System) files.

$ lsof -N
Conclusion

Although lsof offers a plethora of options, the ones we've discussed here should be enough to get you started. Once you're done practicing with these, head to the tool's man page to learn more about it. Oh, and in case you have any doubts and queries, drop in a comment below.

Himanshu Arora has been working on Linux since 2007. He carries professional experience in system level programming, networking protocols, and command line. In addition to HowtoForge, Himanshu's work has also been featured in some of world's other leading publications including Computerworld, IBM DeveloperWorks, and Linux Journal.

By: ShabbyCat at: 2020-05-31 23:47:44

Great article! Another useful one is "lsof -i tcp:PORT_NUMBER" to list processes happening on a specific port, useful for node.js when you need to kill a process.

Ex: lsof -i tcp:3000

then say you want to kill the process 5393 (PID) running on port 3000, you would run "kill -9 5393"
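
Building on that comment, the -t option from example 6 above prints only the PIDs, which makes this a one-liner (the port number is just an example):

kill -9 $(lsof -t -i tcp:3000)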

[May 28, 2021] Top Hex Editors for Linux

Images removed. See the original for the full text
May 23, 2021 | www.tecmint.com

Xxd Hex Editor

Most (if not every) Linux distributions come with an editor that allows you to perform hexadecimal and binary manipulation. One of those tools is the command-line tool "" xxd , which is most commonly used to make a hex dump of a given file or standard input. It can also convert a hex dump back to its original binary form.

Hexedit Hex Editor

Hexedit is another hexadecimal command-line editor that might already be preinstalled on your OS.

Hexedit shows both the hexadecimal and ASCII view of the file at the same time.

[May 10, 2021] The Tilde Text Editor

Highly recommended!
This is an editor similar to FDE and can be used as external editor for MC
May 10, 2021 | os.ghalkes.nl

Tilde is a text editor for the console/terminal, which provides an intuitive interface for people accustomed to GUI environments such as Gnome, KDE and Windows. For example, the short-cut to copy the current selection is Control-C, and to paste the previously copied text the short-cut Control-V can be used. As another example, the File menu can be accessed by pressing Meta-F.

However, being a terminal-based program there are limitations. Not all terminals provide sufficient information to the client programs to make Tilde behave in the most intuitive way. When this is the case, Tilde provides work-arounds which should be easy to work with.

The main audience for Tilde is users who normally work in GUI environments, but sometimes require an editor for a console/terminal environment. This may be because the computer in question is a server which does not provide a GUI, or is accessed remotely over SSH. Tilde allows these users to edit files without having to learn a completely new interface, such as vi or Emacs do. A result of this choice is that Tilde will not provide all the fancy features that Vim or Emacs provide, but only the most used features.

News Tilde version 1.1.2 released

This release fixes a bug where Tilde would discard read lines before an invalid character, while requested to continue reading.

23-May-2020

Tilde version 1.1.1 released

This release fixes a build failure on C++14 and later compilers

12-Dec-2019

[May 10, 2021] Lazy Linux: 10 essential tricks for admins by Vallard Benincosa

IBM is notorious for destroying useful information . This article is no longer available from IBM.
Jul 20, 2008

Originally from: |IBM DeveloperWorks

How to be a more productive Linux systems administrator

Learn these 10 tricks and you'll be the most powerful Linux® systems administrator in the universe...well, maybe not the universe, but you will need these tips to play in the big leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples accompany each trick, so you can duplicate them on your own systems.

The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?

The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin's bag of tricks. These tips will save you time-and even if you don't get paid more money to be more efficient, you'll at least have more time to play Halo.

Trick 1: Unmounting the unresponsive DVD drive

The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it will eject immediately. He then complains that, in most enterprise Linux servers, if a process is running in that directory, then the ejection won't happen. For too long as a Linux administrator, I would reboot the machine and get my disk on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD drive. But this is ineffective.

Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:

# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

Now open up a second terminal and try to eject the DVD drive:

# eject

You'll get a message like:

umount: /media/cdrom: device is busy

Before you free it, let's find out who is using it.

# fuser /media/cdrom

You see the process was running and, indeed, it is our fault we can not eject the disk.

Now, if you are root, you can exercise your godlike powers and kill processes:

# fuser -k /media/cdrom

Boom! Just like that, freedom. Now solemnly unmount the drive:

# eject

fuser is good.

Trick 2: Getting your screen back when it's hosed

Try this:

# cat /bin/cat

Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?

You type reset. But wait you say, typing reset is too close to typing reboot or shutdown. Your palms start to sweat-especially if you are doing this on a production machine.

Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:

# reset

Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.

Trick 3: Collaboration with screen

David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."

"Fine," you say. "What machine are you on?"

David responds: " Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:

# su - david

Then you go over to posh:

# ssh posh

Once you are there, you run:

# screen -S foo

Then you holler at David:

"Hey David, run the following command on your terminal: # screen -x foo."

This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.

At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.

The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.

But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type: Ctrl-A D . (I mean, hold down the Ctrl key and strike the A key. Then push the D key.)

You can then reattach by running the screen -x foo command again.
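
A few related screen invocations are also handy with this trick (standard screen options):

# screen -ls         # list the screen sessions on this host
# screen -r foo      # reattach to a detached session named foo
# screen -d -r foo   # detach the session from wherever it is attached and reattach it here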

Trick 4: Getting back the root password

You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get on the machine and change the password. This doesn't work in all cases (like if you made a GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS Linux example.

First reboot the system. When it reboots you'll come to the GRUB screen as shown in Figure 1. Move the arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.


Figure 1. GRUB screen after reboot

Next, select the kernel that will boot with the arrow keys, and type E to edit the kernel line. You'll then see something like Figure 2:


Figure 2. Ready to edit the kernel line

Use the arrow key again to highlight the line that begins with kernel, and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments as shown in Figure 3:


Figure 3. Append the argument with the number 1

Then press Enter, B, and the kernel will boot up to single-user mode. Once here you can run the passwd command, changing password for user root:

sh-3.00# passwd
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

Now you can reboot, and the machine will boot up with your new password.

Trick 5: SSH back door

Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come into you.

In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door. To use it, you'll need a machine on the Internet that you can use as an intermediary.

In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.

Figure 4. Poking a hole in the firewall

Here's how to proceed:

  1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.
  2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you'll forward instructions of port 2222 on blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're not putting ginger out on the Internet naked.

    You can do this with the following syntax:

    ~# ssh -R 2222:localhost:22 thedude@blackbox.example.com

    Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

    thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

    to keep the machine busy. And minimize the window.

  3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

    root@tech:~# ssh thedude@blackbox.example.com

  4. Once tech is on the blackbox, they can SSH to ginger using the following command:

    thedude@blackbox:~$: ssh -p 2222 root@localhost

  5. Tech will then be prompted for a password. They should enter the root password of ginger.

  6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.)
Trick 6: Remote VNC session through an SSH tunnel

VNC or virtual network computing has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

For example, suppose in Trick 5, ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

  1. Start a VNC server session on ginger. This is done by running something like:

    root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99

    The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are using a really slow connection setting, 8 may be a better option. Using :99 specifies the port the VNC server will be accessible from. The VNC protocol starts at 5900 so specifying :99 means the server is accessible from port 5999.

    When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

  2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

    root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com

    Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

    thedude@blackbox:~$ vncviewer localhost:99

    That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

  3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:

    root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com

    This time the SSH flag we used was -L, which instead of pushing 5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

  4. From tech, VNC to ginger by running the command:

    root@tech:~# vncviewer localhost:99 .

    Tech will now have a VNC session directly to ginger.

While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.

Let me add a trick to this trick: If tech was running the Windows® operating system and didn't have a command-line SSH client, then tech can run Putty. Putty can be set to forward SSH ports by looking in the options in the sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like in Figure 5.


Figure 5. Putty can forward SSH ports for tunneling

If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.

Trick 7: Checking your bandwidth

Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS mount ginger's shared filesystem.

The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.

So they do this. But now the question is: How much bandwidth do they really have?

Gigabit Ethernet has a theoretical limit of 128MBps. Where does that number come from? Well,

1Gb = 1024Mb; 1024Mb/8 = 128MB; "b" = "bits," "B" = "bytes"

But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:

# wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz

You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user that is viewable on both nodes:

tar zxvf iperf*gz
cd iperf-2.0.2
./configure -prefix=/home/bob/perf
make
make install

On ginger, run:

# /home/bob/perf/bin/iperf -s -f M

This machine will act as the server and print out performance speeds in MBps.

On the beckham node, run:

# /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60

You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.

In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.

I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!

Trick 8: Command-line scripting and utilities

A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk, grep, and sed. There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.

For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:

# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
done >>/etc/hosts

Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.
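
The same loop can be written without expr by letting bash do the arithmetic; this is just an alternative sketch using the same made-up 192.168.99.x addresses:

# for i in $(seq -w 200); do printf "192.168.99.%d\tn%s\n" "$((10#$i))" "$i"; done >> /etc/hosts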

As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.

Assume the SSH is set up to authenticate without a password. Then run:

# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort | uniq

A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.

First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:

free -m | grep Mem | awk '{print $2}'

That command says to display the memory in megabytes (free -m), select the line that begins with Mem (grep Mem), and print the second field of that line, which is the total memory (awk '{print $2}').

This operation is performed on every node.

Once you have performed the command on every node, the entire output of all 200 nodes is piped (|d) to the sort command so that all the memory values are sorted.

Finally, you eliminate duplicates with the uniq command. This command will result in one of the following cases: a single value, meaning all the nodes report the same amount of memory, or more than one value, meaning the memory configuration differs between nodes.

This command isn't perfect. If you find that a value of memory is different than what you expect, you won't know on which node it was or how many nodes there were. Another command may need to be issued for that.

What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. This is its real value: speed to do a quick-and-dirty check.

Trick 9: Spying on the console

Some software prints error messages to the console that may not necessarily show up on your SSH session. Using the vcs devices can let you examine these. From within an SSH session, run the following command on a remote server: # cat /dev/vcs1. This will show you what is on the first console. You can also look at the other virtual terminals using 2, 3, etc. If a user is typing on the remote system, you'll be able to see what he typed.

In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.

Trick 10: Random system information collection

In Trick 8, you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.

First, let's gather information about the processor. This is easily done as follows:

# cat /proc/cpuinfo .

This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.

A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:

# cat /proc/cpuinfo | grep processor | wc -l .

I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.

Another piece of information I may require is disk information. This can be gotten with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes. # df -h also shows how the disk was partitioned.

And to end the list, here's a way to look at the firmware of your system-a method to get the BIOS level and the firmware on the NIC.

To check the BIOS version, you can run the dmidecode command. Unfortunately, you can't easily grep for just the value you need, so pipe the output through less and browse it. On my Lenovo T61 laptop, the output looks like this:

#dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...

This is much more efficient than rebooting your machine and looking at the POST output.

To examine the driver and firmware versions of your Ethernet adapter, run ethtool:

# ethtool -i eth0
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: 0.3-0

Conclusion

There are thousands of tricks you can learn from someone who's an expert at the command line. The best ways to learn are to:

I hope at least one of these tricks helped you learn something you didn't know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don't like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.

About the author

Vallard Benincosa is a lazy Linux Certified IT professional working for the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.

[May 09, 2021] Good Alternatives To Man Pages Every Linux User Needs To Know by Sk

Images removed. See the original for full text.
Notable quotes:
"... you need Ruby 1.8.7+ installed on your machine for this to work. ..."
| ostechnix.com

1. Bropages

The slogan of the Bropages utility is "just get to the point". It is true! The bro pages are just like man pages, but they display examples only. As the slogan says, they skip all the text and give you concise examples for command-line programs. The bropages can be easily installed using gem, so you need Ruby 1.8.7+ installed on your machine for this to work. To install Ruby on CentOS and Ubuntu, refer to the following guide. After installing gem, all you have to do to install bro pages is:

$ gem install bropages
... The usage is incredibly easy! ...just type:
$ bro find
... The good thing thing is you can upvote or downvote the examples.

As you see in the above screenshot, we can upvote the first command by entering the following command:

$ bro thanks
You will be asked to enter your Email ID. Enter a valid Email to receive the verification code. Then copy/paste the verification code in the prompt and hit ENTER to submit your upvote. The highest upvoted examples will be shown at the top.
Bropages.org requires an email address verification to do this
What's your email address?
[email protected]
Great! We're sending an email to [email protected]
Please enter the verification code: apHelH13ocC7OxTyB7Mo9p
Great! You're verified! FYI, your email and code are stored locally in ~/.bro
You just gave thanks to an entry for find!
You rock!
To upvote the second command, type:
$ bro thanks 2
Similarly, to downvote the first command, run:
$ bro ...no

... ... ...

2. Cheat

Cheat is another useful alternative to man pages to learn Unix commands. It allows you to create and view interactive Linux/Unix command cheatsheets on the command line. The recommended way to install Cheat is using the Pip package manager.

... ... ...

Cheat usage is trivial.

$ cheat find
You will be presented with the list of available examples of find command: ... ... ...

To view the help section, run:

$ cheat -h
For more details, see the project's GitHub repository.

3. TLDR Pages

TLDR is a collection of simplified and community-driven man pages. Unlike man pages, TLDR pages focus only on practical examples. TLDR can be installed using npm, so you need NodeJS installed on your machine for this to work.

To install NodeJS in Linux, refer to the following guide.

After installing npm, run the following command to install tldr:
$ npm install -g tldr
TLDR clients are also available for Android. Install any one of the apps from the Google Play Store to access the TLDR pages from your Android devices. There are many TLDR clients available. You can view them all here

3.1. Usage

To display the documentation of any command, for example find, run:

$ tldr find
You will see the list of available examples of the find command. ...To view the list of all commands in the cache, run:
$ tldr --list-all
...To update the local cache, run:
$ tldr -u
Or,
$ tldr --update
To display the help section, run:
$ tldr -h
For more details, refer to the TLDR GitHub page.

4. TLDR++

Tldr++ is yet another client to access the TLDR pages. Unlike the other Tldr clients, it is fully interactive .

5. Tealdeer

Tealdeer is a fast, unofficial tldr client that allows you to access and display Linux command cheatsheets in your terminal. The developer of Tealdeer claims it is very fast compared to the official tldr client and other community-supported tldr clients.

6. tldr.jsx web client

The tldr.jsx is a reactive web client for tldr-pages. If you don't want to install anything on your system, you can try this client online from any Internet-enabled device such as a desktop, laptop, tablet, or smartphone. All you need is a web browser. Open a web browser and navigate to the https://tldr.ostera.io/ page.

7. Navi interactive commandline cheatsheet tool

Navi is an interactive command-line cheatsheet tool written in Rust. Just like the Bro pages, Cheat, and Tldr tools, Navi provides a list of examples for a given command, skipping all the other comprehensive text. For more details, check the following link.

8. Manly

I came across this utility recently and thought it would be a worthy addition to this list. Say hello to Manly, a complement to man pages. Manly is written in Python, so you can install it using the Pip package manager.

Manly is slightly different from the above utilities. It will not display any examples, and you need to mention the flags or options along with the command. For example, the following won't work:

$ manly dpkg
But, if you mention any flag/option of a command, you will get a small description of the given command and its options.
$ manly dpkg -i -R
To view the help section, run:
$ manly --help
And also take a look at the project's GitHub page.

[Apr 22, 2021] TLDR pages- Simplified Alternative To Linux Man Pages That You'll Love

Images removed. See the original for full text.
Apr 22, 2021 | fossbytes.com

The GitHub page of TLDR pages for Linux/Unix describes it as a collection of simplified and community-driven man pages. It's an effort to make the experience of using man pages simpler with the help of practical examples. For those who don't know, TLDR is taken from common internet slang Too Long Didn't Read .

In case you wish to compare, let's take the example of tar command. The usual man page extends over 1,000 lines. It's an archiving utility that's often combined with a compression method like bzip or gzip. Take a look at its man page:

On the other hand, TLDR pages lets you simply take a glance at the command and see how it works. Tar's TLDR page simply looks like this and comes with some handy examples of the most common tasks you can complete with this utility:

Let's take another example and show you what TLDR pages has to offer when it comes to apt:

Having shown you how TLDR works and makes your life easier, let's tell you how to install it on your Linux-based operating system.

How to install and use TLDR pages on Linux?

The most mature TLDR client is based on Node.js and you can install it easily using NPM package manager. In case Node and NPM are not available on your system, run the following command:

sudo apt-get install nodejs
sudo apt-get install npm

In case you're using an OS other than Debian, Ubuntu, or Ubuntu's derivatives, you can use yum, dnf, or pacman package manager as per your convenience.
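
Once Node.js and npm are in place, the client itself installs globally through npm; here is a minimal sketch (the tar lookup at the end is only an illustration):

sudo npm install -g tldr      # install the tldr client system-wide
tldr tar                      # show the simplified page for tar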

[Apr 22, 2021] Alternatives of man in Linux command line

Images removed. See the original for full text.
Jan 01, 2020 | www.chuanjin.me

When we need help on the Linux command line, man is usually the first friend we check for more information. But it became my second-line support after I met other alternatives, e.g. tldr, cheat and eg.

tldr

tldr stands for "too long; didn't read"; it is a set of simplified and community-driven man pages. Maybe we forget the arguments to a command, or are just not patient enough to read the long man document; here tldr comes in, providing concise information with examples. I even contributed a couple of lines of code myself to help a little bit with the project on GitHub. It is very easy to install: npm install -g tldr, and there are many clients available for accessing the tldr pages. E.g., install the Python client with pip install tldr.

To display help information, run tldr -h or tldr tldr .

Take curl as an example

tldr++

tldr++ is an interactive tldr client written in Go; the GIF is taken from its official site.

cheat

Similarly, cheat allows you to create and view interactive cheatsheets on the command-line. It was designed to help remind *nix system administrators of options for commands that they use frequently, but not frequently enough to remember. It is written in Golang, so just download the binary and add it into your PATH .

eg

eg provides useful examples with explanations on the command line.

So I consult tldr , cheat or eg before I ask man and Google.

[Apr 22, 2021] 5 modern alternatives to essential Linux command-line tools by Ricardo Gerardi

While some of these tools do provide additional functionality, sticking to the classic tools often makes more sense. So user beware.
Jun 25, 2020 | opensource.com

In our daily use of Linux/Unix systems, we use many command-line tools to complete our work and to understand and manage our systems -- tools like du to monitor disk utilization and top to show system resources. Some of these tools have existed for a long time. For example, top was first released in 1984, while du 's first release dates to 1971.

Over the years, these tools have been modernized and ported to different systems, but, in general, they still follow their original idea, look, and feel.

These are great tools and essential to many system administrators' workflows. However, in recent years, the open source community has developed alternative tools that offer additional benefits. Some are just eye candy, but others greatly improve usability, making them a great choice to use on modern systems. These include the following five alternatives to the standard Linux command-line tools.

1. ncdu as a replacement for du

The NCurses Disk Usage ( ncdu ) tool provides similar results to du but in a curses-based, interactive interface that focuses on the directories that consume most of your disk space. ncdu spends some time analyzing the disk, then displays the results sorted by your most used directories or files, like this:

ncdu 1.14.2 ~ Use the arrow keys to navigate, press ? for help
--- /home/rgerardi ------------------------------------------------------------
96.7 GiB [##########] /libvirt
33.9 GiB [### ] /.crc
...
Total disk usage: 159.4 GiB Apparent size: 280.8 GiB Items: 561540

Navigate to each entry by using the arrow keys. If you press Enter on a directory entry, ncdu displays the contents of that directory:

--- /home/rgerardi/libvirt ----------------------------------------------------
/..
91.3 GiB [##########] /images
5.3 GiB [ ] /media

You can use that to drill down into the directories and find which files are consuming the most disk space. Return to the previous directory by using the Left arrow key. By default, you can delete files with ncdu by pressing the d key, and it asks for confirmation before deleting a file. If you want to disable this behavior to prevent accidents, use the -r option for read-only access: ncdu -r .
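
For example, to browse your home directory read-only while keeping the scan on a single filesystem, you could combine -r with -x (a small sketch; the -x flag, which stops ncdu from crossing filesystem boundaries, is not mentioned above):

$ ncdu -rx /home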

ncdu is available for many platforms and Linux distributions. For example, you can use dnf to install it on Fedora directly from the official repositories:

$ sudo dnf install ncdu

You can find more information about this tool on the ncdu web page .

2. htop as a replacement for top

htop is an interactive process viewer similar to top but that provides a nicer user experience out of the box. By default, htop displays the same metrics as top in a pleasant and colorful display.

By default, htop looks like this:

htop_small.png

(Ricardo Gerardi, CC BY-SA 4.0 )

In contrast to default top :

top_small.png

(Ricardo Gerardi, CC BY-SA 4.0 )

In addition, htop provides system overview information at the top and a command bar at the bottom to trigger commands using the function keys, and you can customize it by pressing F2 to enter the setup screen. In setup, you can change its colors, add or remove metrics, or change display options for the overview bar.

While you can configure recent versions of top to achieve similar results, htop provides saner default configurations, which makes it a nice and easy-to-use process viewer.

To learn more about this project, check the htop home page .

3. tldr as a replacement for man

The tldr command-line tool displays simplified command utilization information, mostly including examples. It works as a client for the community tldr pages project .

This tool is not a replacement for man . The man pages are still the canonical and complete source of information for many tools. However, in some cases, man is too much. Sometimes you don't need all that information about a command; you're just trying to remember the basic options. For example, the man page for the curl command has almost 3,000 lines. In contrast, the tldr for curl is 40 lines long and looks like this:

$ tldr curl

# curl
Transfers data from or to a server.
Supports most protocols, including HTTP, FTP, and POP3.
More information: <https://curl.haxx.se>.

- Download the contents of an URL to a file:

curl http://example.com -o filename

- Download a file, saving the output under the filename indicated by the URL:

curl -O http://example.com/filename

- Download a file, following [L]ocation redirects, and automatically [C]ontinuing (resuming) a previous file transfer:

curl -O -L -C - http://example.com/filename

- Send form-encoded data (POST request of type `application/x-www-form-urlencoded`):

curl -d 'name=bob' http://example.com/form

- Send a request with an extra header, using a custom HTTP method:

curl -H 'X-My-Header: 123' -X PUT http://example.com

- Send data in JSON format, specifying the appropriate content-type header:

curl -d '{"name":"bob"}' -H 'Content-Type: application/json' http://example.com/users/1234

... TRUNCATED OUTPUT

TLDR stands for "too long; didn't read," which is internet slang for a summary of long text. The name is appropriate for this tool because man pages, while useful, are sometimes just too long.

In Fedora, the tldr client was written in Python. You can install it using dnf . For other client options, consult the tldr pages project .

In general, the tldr tool requires access to the internet to consult the tldr pages. The Python client in Fedora allows you to download and cache these pages for offline access.
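
For example, assuming your client supports the --update flag shown in the earlier article, you can refresh the local cache before going offline and then query it as usual (a sketch):

$ tldr --update
$ tldr tar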

For more information on tldr , you can use tldr tldr .

4. jq as a replacement for sed/grep for JSON

jq is a command-line JSON processor. It's like sed or grep but specifically designed to deal with JSON data. If you're a developer or system administrator who uses JSON in your daily tasks, this is an essential tool in your toolbox.

The main benefit of jq over generic text-processing tools like grep and sed is that it understands the JSON data structure, allowing you to create complex queries with a single expression.

To illustrate, imagine you're trying to find the name of the containers in this JSON file:

{
"apiVersion" : "v1" ,
"kind" : "Pod" ,
"metadata" : {
"labels" : {
"app" : "myapp"
} ,
"name" : "myapp" ,
"namespace" : "project1"
} ,
"spec" : {
"containers" : [
{
"command" : [
"sleep" ,
"3000"
] ,
"image" : "busybox" ,
"imagePullPolicy" : "IfNotPresent" ,
"name" : "busybox"
} ,
{
"name" : "nginx" ,
"image" : "nginx" ,
"resources" : {} ,
"imagePullPolicy" : "IfNotPresent"
}
] ,
"restartPolicy" : "Never"
}
}

If you try to grep directly for name , this is the result:

$ grep name k8s-pod.json
"name" : "myapp" ,
"namespace" : "project1"
"name" : "busybox"
"name" : "nginx" ,

grep returned all lines that contain the word name . You can add a few more options to grep to restrict it and, with some regular-expression manipulation, you can find the names of the containers. To obtain the result you want with jq , use an expression that simulates navigating down the data structure, like this:

$ jq '.spec.containers[].name' k8s-pod.json
"busybox"
"nginx"

This command gives you the name of both containers. If you're looking for only the name of the second container, add the array element index to the expression:

$ jq '.spec.containers[1].name' k8s-pod.json
"nginx"

Because jq is aware of the data structure, it provides the same results even if the file format changes slightly. grep and sed may provide different results with small changes to the format.
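
As another small sketch, jq's -r (raw output) flag prints plain strings instead of JSON-quoted ones, which is convenient in shell pipelines; using the same file as above:

$ jq -r '.metadata.name' k8s-pod.json
myapp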

jq has many features, and covering them all would require another article. For more information, consult the jq project page , the man pages, or tldr jq .

5. fd as a replacement for find

fd is a simple and fast alternative to the find command. It does not aim to replace the complete functionality find provides; instead, it provides some sane defaults that help a lot in certain scenarios.

For example, when searching for source-code files in a directory that contains a Git repository, fd automatically excludes hidden files and directories, including the .git directory, as well as ignoring patterns from the .gitignore file. In general, it provides faster searches with more relevant results on the first try.

By default, fd runs a case-insensitive pattern search in the current directory with colored output. The same search using find requires you to provide additional command-line parameters. For example, to search all markdown files ( .md or .MD ) in the current directory, the find command is this:

$ find . -iname "*.md"

Here is the same search with fd :

$ fd .md

In some cases, fd requires additional options; for example, if you want to include hidden files and directories, you must use the option -H , while this is not required in find .
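
For example, to repeat the markdown search while also including hidden files and directories, a minimal sketch:

$ fd -H .md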

fd is available for many Linux distributions. Install it in Fedora using the standard repositories:

$ sudo dnf install fd-find

For more information, consult the fd GitHub repository .

... ... ...

S Arun-Kumar on 25 Jun 2020

I use "meld" in place of "diff" Ricardo Gerardi on 25 Jun 2020

Thanks ! I never used "meld". I'll give it a try.
Keith Peters on 25 Jun 2020

exa for ls

Ricardo Gerardi on 25 Jun 2020

Thanks. I'll give it a try.

brick on 27 Jun 2020

Another (fancy looking) alternative for ls is lsd.

Miguel Perez on 25 Jun 2020

Bat instead of cat, ripgrep instead of grep, httpie instead of curl, bashtop instead of htop, autojump instead of cd...

Drto on 25 Jun 2020

ack instead of grep for files. Million times faster.
Gordon Harris on 25 Jun 2020

The yq command line utility is useful too. It's just like jq, except for yaml files and has the ability to convert yaml into json.
Matt howard on 26 Jun 2020

Glances is a great top replacement too.

Paul M on 26 Jun 2020

Try "mtr" instead of traceroute
Try "hping2" instead of ping
Try "pigz" instead of gzip jmtd on 28 Jun 2020

I've never used ncdu, but I recommend "duc" as a du replacement https://github.com/zevv/duc/

You run a separate "duc index" command to capture disk space usage in a database file and then can explore the data very quickly with "duc ui" ncurses ui. There's also GUI and web front-ends that give you a nice graphical pie chart interface.

In my experience the index stage is faster than plain du. You can choose to re-index only certain folders if you want to update some data quickly without rescanning everything.

wurn on 29 Jun 2020

Imho, jq uses a syntax that's ok for simple queries but quickly becomes horrible when you need more complex queries. Pjy is a sensible replacement for jq, having an (improved) python syntax which is familiar to many people and much more readable: https://github.com/hydrargyrum/pjy
Jack Orenstein on 29 Jun 2020

Also along the lines of command-line alternatives, take a look at marcel, which is a modern shell: https://marceltheshell.org . The basic idea is to pipe Python values, instead of strings, between commands. It integrates smoothly with host commands (and, presumably, the alternatives discussed here), and also integrates remote access and database access.

Ricardo Fraile on 05 Jul 2020

"tuptime" instead of "uptime".
It tracks the history of the system, not only the current one.

The Cube on 07 Jul 2020

One downside of all of this is that there are even more things to remember. I learned find, diff, cat, vi (and ed), grep and a few others starting in 1976 on 6th edition. They have been enhanced some, over the years (for which I use man when I need to remember), and learned top and other things as I needed them, but things I did back then still work great now. KISS is still a "thing". Especially in scripts one is going to use on a wide variety of distributions or for a long time. These kind of tweaks are fun and all, but add complexity and reduce one's inter-system mobility. (And don't get me started on systemd 8P).

[Apr 22, 2021] replace(1) - Linux manual page

Apr 22, 2021 | www.man7.org
REPLACE(1)               MariaDB Database System              REPLACE(1)
NAME top
       replace - a string-replacement utility
SYNOPSIS top
       replace arguments
DESCRIPTION top
       The replace utility program changes strings in place in files or
       on the standard input.

       Invoke replace in one of the following ways:

           shell> replace from to [from to] ... -- file_name [file_name] ...
           shell> replace from to [from to] ... < file_name

       from represents a string to look for and to represents its
       replacement. There can be one or more pairs of strings.

       Use the -- option to indicate where the string-replacement list
       ends and the file names begin. In this case, any file named on
       the command line is modified in place, so you may want to make a
       copy of the original before converting it.  replace prints a
       message indicating which of the input files it actually modifies.

       If the -- option is not given, replace reads the standard input
       and writes to the standard output.

       replace uses a finite state machine to match longer strings
       first. It can be used to swap strings. For example, the following
       command swaps a and b in the given files, file1 and file2:

           shell> replace a b b a -- file1 file2 ...

       The replace program is used by msql2mysql. See msql2mysql(1).

       replace supports the following options.

       •   -?, -I

           Display a help message and exit.

       •   -#debug_options

           Enable debugging.

       •   -s

           Silent mode. Print less information what the program does.

       •   -v

           Verbose mode. Print more information about what the program
           does.

       •   -V

           Display version information and exit.
COPYRIGHT top
       Copyright 2007-2008 MySQL AB, 2008-2010 Sun Microsystems, Inc.,
       2010-2015 MariaDB Foundation

       This documentation is free software; you can redistribute it
       and/or modify it only under the terms of the GNU General Public
       License as published by the Free Software Foundation; version 2
       of the License.

       This documentation is distributed in the hope that it will be
       useful, but WITHOUT ANY WARRANTY; without even the implied
       warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
       See the GNU General Public License for more details.

       You should have received a copy of the GNU General Public License
       along with the program; if not, write to the Free Software
       Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
       02110-1335 USA or see http://www.gnu.org/licenses/.
SEE ALSO top
       For more information, please refer to the MariaDB Knowledge Base,
       available online at https://mariadb.com/kb/
AUTHOR top
       MariaDB Foundation (http://www.mariadb.org/).
COLOPHON top
       This page is part of the MariaDB (MariaDB database server)
       project.  Information about the project can be found at 
       ⟨http://mariadb.org/⟩.  If you have a bug report for this manual
       page, see ⟨https://mariadb.com/kb/en/mariadb/reporting-bugs/⟩.
       This page was obtained from the project's upstream Git repository
       ⟨https://github.com/MariaDB/server⟩ on 2021-04-01.  (At that
       time, the date of the most recent commit that was found in the
       repository was 2020-11-03.)  If you discover any rendering
       problems in this HTML version of the page, or you believe there
       is a better or more up-to-date source for the page, or you have
       corrections or improvements to the information in this COLOPHON
       (which is not part of the original manual page), send a mail to
       man-pages@man7.org.

[Apr 19, 2021] How To Display Linux Commands Cheatsheets Using Eg

Apr 19, 2021 | ostechnix.com

Eg is a free, open source program written in Python language and the code is freely available in GitHub. For those wondering, eg comes from the Latin word "Exempli Gratia" that literally means "for the sake of example" in English. Exempli Gratia is known by its abbreviation e.g. , in English speaking countries.

Install Eg in Linux

Eg can be installed using Pip package manager. If Pip is not available in your system, install it as described in the below link.

After installing Pip, run the following command to install eg on your Linux system:

$ pip install eg
Display Linux commands cheatsheets using Eg

Let us start by displaying the help section of eg program. To do so, run eg without any options:

$ eg

Sample output:

usage: eg [-h] [-v] [-f CONFIG_FILE] [-e] [--examples-dir EXAMPLES_DIR]
          [-c CUSTOM_DIR] [-p PAGER_CMD] [-l] [--color] [-s] [--no-color]
          [program]

eg provides examples of common command usage.

positional arguments:
  program               The program for which to display examples.

optional arguments:
  -h, --help            show this help message and exit
  -v, --version         Display version information about eg
  -f CONFIG_FILE, --config-file CONFIG_FILE
                        Path to the .egrc file, if it is not in the default
                        location.
  -e, --edit            Edit the custom examples for the given command. If
                        editor-cmd is not set in your .egrc and $VISUAL and
                        $EDITOR are not set, prints a message and does
                        nothing.
  --examples-dir EXAMPLES_DIR
                        The location to the examples/ dir that ships with eg
  -c CUSTOM_DIR, --custom-dir CUSTOM_DIR
                        Path to a directory containing user-defined examples.
  -p PAGER_CMD, --pager-cmd PAGER_CMD
                        String literal that will be invoked to page output.
  -l, --list            Show all the programs with eg entries.
  --color               Colorize output.
  -s, --squeeze         Show fewer blank lines in output.
  --no-color            Do not colorize output.

You can also bring up the help section using this command:

$ eg --help

Now let us see how to view example commands usage.

To display the cheatsheet of a Linux command, for example grep, run:

$ eg grep

Sample output:

grep
 print all lines containing foo in input.txt
 grep "foo" input.txt
 print all lines matching the regex "^start" in input.txt
 grep -e "^start" input.txt
 print all lines containing bar by recursively searching a directory
 grep -r "bar" directory
 print all lines containing bar ignoring case
 grep -i "bAr" input.txt
 print 3 lines of context before and after each line matching "foo"
 grep -C 3 "foo" input.txt
 Basic Usage
 Search each line in input_file for a match against pattern and print
 matching lines:
 grep "<pattern>" <input_file>
[...]

[Apr 19, 2021] How to Install and Use locate Command in Linux

Apr 19, 2021 | www.linuxshelltips.com

Before using the locate command you should check if it is installed on your machine. The locate command comes with the GNU findutils or GNU mlocate packages. You can simply run the following command to check if locate is installed or not.

$ which locate
Check locate Command

If locate is not installed by default then you can run the following commands to install.

$ sudo yum install mlocate     [On CentOS/RHEL/Fedora]
$ sudo apt install mlocate     [On Debian/Ubuntu/Mint]

Once the installation is completed, you need to run the following command to update the locate database so file locations can be found quickly. That is why results are faster when you use the locate command to find files in Linux.

$ sudo updatedb

The mlocate db file is located at /var/lib/mlocate/mlocate.db .

$ ls -l /var/lib/mlocate/mlocate.db
mlocate database

A good place to start and get to know about locate command is using the man page.

$ man locate
locate command manpage
How to Use locate Command to Find Files Faster in Linux

To search for any file, simply pass the file name as an argument to the locate command.

$ locate .bashrc
Locate Files in Linux

If you wish to see how many items matched, instead of printing the location of the files, you can pass the -c flag.

$ sudo locate -c .bashrc
Find File Count Occurrence

By default, the locate command is case sensitive. You can make the search case insensitive by using the -i flag.

$ sudo locate -i file1.sh
Find Files Case Sensitive in Linux

You can limit the search result by using the -n flag.

$ sudo locate -n 3 .bashrc
Limit Search Results

If you delete a file and do not update the mlocate database, locate will still print the deleted file in its output. You now have two options: either update the mlocate db periodically, or use the -e flag, which will skip deleted files.

$ locate -i -e file1.sh
Skip Deleted Files

You can check the statistics of the mlocate database by running the following command.

$ locate -S
mlocate database stats

If your db file is in a different location, you may want to use the -d flag followed by the mlocate db path and the filename to be searched for.

$ locate -d [ DB PATH ] [ FILENAME ]
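
For instance, pointing locate at the default database path shown earlier is a simple way to test this flag (a sketch):

$ locate -d /var/lib/mlocate/mlocate.db .bashrc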

Sometimes you may encounter errors; you can suppress the error messages by running the command with the -q flag.

$ locate -q [ FILENAME ]

That's it for this article. We have shown you all the basic operations you can do with the locate command. It will be a handy tool for you when working on the command line.

[Mar 28, 2021] How to Install and Configure VNC on Ubuntu 20.04

Mar 26, 2021 | linuxize.com

... ... ...

We'll be installing TigerVNC. It is an actively maintained high-performance VNC server. Type the following command to install the package:

sudo apt install tigervnc-standalone-server
Configuring VNC Access

Once the VNC server is installed, the next step is to create the initial user configuration and set up the password.

Set the user password using the vncpasswd command. Do not use sudo when running the command below:

vncpasswd

You will be prompted to enter and confirm the password and whether to set it as a view-only password. If you choose to set up a view-only password, the user will not be able to interact with the VNC instance with the mouse and the keyboard.

Password:
Verify:
Would you like to enter a view-only password (y/n)? n

The password file is stored in the ~/.vnc directory, which is created if not present.

Next, we need to configure TigerVNC to use Xfce. To do so, create the following file:

~/.vnc/xstartup
nano ~/.vnc/xstartup
#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
exec startxfce4

Save and close the file. The commands above are automatically executed whenever you start or restart the TigerVNC server.

The ~/.vnc/xstartup file also needs to have execute permissions. Use the chmod command to set the file permissions:

chmod u+x ~/.vnc/xstartup

If you need to pass additional options to the VNC server, create a file named config and add one option per line. Here is an example:


geometry=1920x1080
dpi=96

Start a VNC session by running the vncserver command:

vncserver
New 'server2.linuxize.com:1 (linuxize)' desktop at :1 on machine server2.linuxize.com

Starting applications specified in /home/linuxize/.vnc/xstartup
Log file is /home/linuxize/.vnc/server2.linuxize.com:1.log

Use xtigervncviewer -SecurityTypes VncAuth -passwd /home/linuxize/.vnc/passwd :1 to connect to the VNC server.

Note the :1 after the hostname in the output above. This indicates the number of the display port on which the vnc server is running. In this example, the server is running on TCP port 5901 (5900+1). If you create a second instance with vncserver it will run on the next free port i.e :2 , which means that the server is running on port 5902 (5900+2).

What is important to remember is that when working with VNC servers, :X is a display port that refers to 5900+X .


You can get a list of all the currently running VNC sessions by typing:

vncserver -list
TigerVNC server sessions:

X DISPLAY #	RFB PORT #	PROCESS ID
:1		      5901		    5710

Before continuing with the next step, stop the VNC instance using the vncserver command with the -kill option and the server number as an argument. In this example, the server is running on port 5901 ( :1 ), so we'll stop it with:

vncserver -kill :1
Killing Xtigervnc process ID 5710... success!
Creating a Systemd unit file

Instead of manually starting the VNC session, let's create a systemd unit file so that you start, stop, and restart the VNC service as needed.

Open your text editor and copy and paste the following configuration into it. Make sure to change the username on line 7 to match your username.

sudo nano /etc/systemd/system/vncserver@.service
/etc/systemd/system/vncserver@.service
[Unit]
Description=Remote desktop service (VNC)
After=syslog.target network.target

[Service]
Type=simple
User=linuxize
PAMName=login
PIDFile=/home/%u/.vnc/%H%i.pid
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill :%i > /dev/null 2>&1 || :'
ExecStart=/usr/bin/vncserver :%i -geometry 1440x900 -alwaysshared -fg
ExecStop=/usr/bin/vncserver -kill :%i

[Install]
WantedBy=multi-user.target

Save and close the file.

Notify systemd that a new unit file is created:

sudo systemctl daemon-reload

Enable the service to start on boot:


sudo systemctl enable vncserver@1.service

The number 1 after the @ sign defines the display port on which the VNC service will run. This means that the VNC server will listen on port 5901 , as we discussed in the previous section.
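
For example, assuming the template unit above, enabling a second instance for display :2 (listening on port 5902) would look like this sketch:

sudo systemctl enable vncserver@2.service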

Start the VNC service by executing:

sudo systemctl start vncserver@1.service

Verify that the service is successfully started with:

sudo systemctl status vncserver@1.service
vncserver@1.service - Remote desktop service (VNC)
     Loaded: loaded (/etc/systemd/system/vncserver@.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-03-26 20:00:59 UTC; 3s ago
...
Connecting to VNC server

VNC is not an encrypted protocol and can be subject to packet sniffing. The recommended approach is to create an SSH tunnel and securely forward traffic from your local machine on port 5901 to the server on the same port.

Set Up SSH Tunneling on Linux and macOS

If you run Linux, macOS, or any other Unix-based operating system on your machine, you can easily create an SSH tunnel with the following command:


ssh -L 5901:127.0.0.1:5901 -N -f -l vagrant 192.168.33.10

You will be prompted to enter the user password.

Make sure to replace username and server_ip_address with your username and the IP address of your server.

Set Up SSH Tunneling on Windows

If you run Windows, you can set up SSH Tunneling using the PuTTY SSH client .

Open Putty and enter your server IP Address in the Host name or IP address field.

Under the Connection menu, expand SSH and select Tunnels. Enter the VNC server port ( 5901 ) in the Source Port field, enter server_ip_address:5901 in the Destination field, and click on the Add button as shown in the image below:

Go back to the Session page to save the settings, so you do not need to enter them each time. To connect to the remote server, select the saved session and click on the Open button.

Connecting using Vncviewer

Now that the SSH tunnel is created, it is time to open your Vncviewer and to connect to the VNC Server at localhost:5901 .

You can use any VNC viewer such as TigerVNC, TightVNC, RealVNC, UltraVNC, Vinagre, and VNC Viewer for Google Chrome .

We'll be using TigerVNC. Open the viewer, enter localhost:5901 , and click on the Connect button.

Enter your user password when prompted, and you should see the default Xfce desktop. It will look something like this:

You can start interacting with the remote XFCE desktop from your local machine using your keyboard and mouse.

Conclusion

We've shown you how to get a VNC server installed, configured, and running on Ubuntu 20.04.

To configure your VNC server to start a display for more than one user, create the initial configuration and set up the password using the vncpasswd command. You will also need to create a new service file using a different port.

Feel free to leave a comment if you have any questions.

[Mar 24, 2021] How To Edit Multiple Files Using Vim Editor by Senthil Kumar

Images removed. Use the original for full text.
Mar 24, 2021 | ostechnix.com

March 17, 2018

...Now, let us edit these two files at a time using Vim editor. To do so, run:

$ vim file1.txt file2.txt

Vim will display the contents of the files in order. The first file's contents will be shown first, then the second file, and so on.

Edit Multiple Files Using Vim Editor

Switch between files

To move to the next file, type:

:n
Switch between files in Vim editor

To go back to previous file, type:

:N

Here, N is capital (Type SHIFT+n).

Start editing the files the way you normally do with the Vim editor. Press 'i' to switch to insert mode and modify the contents to your liking. Once done, press ESC to go back to normal mode.

Vim won't allow you to move to the next file if there are any unsaved changes. To save the changes in the current file, type:

ZZ

Please note that it is double capital letters ZZ (SHIFT+zz).

To abandon the changes and move to the previous file, type:

:N!

To view the files which are being currently edited, type:

:buffers
View files in buffer in Vim

You will see the list of loaded files at the bottom.

List of files in buffer in Vim

To switch to the next file, type :buffer followed by the buffer number. For example, to switch to the first file, type:

:buffer 1

Or, just do:

:b 1
Switch to next file in Vim

Just remember these commands to easily switch between buffers:

:bf            # Go to first file.
:bl            # Go to last file
:bn            # Go to next file.
:bp            # Go to previous file.
:b number  # Go to n'th file (E.g :b 2)
:bw            # Close current file.
Opening additional files for editing

We are currently editing two files namely file1.txt, file2.txt. You might want to open another file named file3.txt for editing. What will you do? It's easy! Just type :e followed by the file name like below.

:e file3.txt
Open additional files for editing in Vim

Now you can edit file3.txt.

To view how many files are being edited currently, type:

:buffers
View all files in buffers in Vim

Please note that you can not switch between opened files with :e using either :n or :N . To switch to another file, type :buffer followed by the file buffer number.

Copying contents of one file into another

You know how to open and edit multiple files at the same time. Sometimes, you might want to copy the contents of one file into another. It is possible too. Switch to a file of your choice. For example, let us say you want to copy the contents of file1.txt into file2.txt.

To do so, first switch to file1.txt:

:buffer 1

Place the cursor on the line that you want to copy and type yy to yank (copy) the line. Then, move to file2.txt:

:buffer 2

Place the cursor where you want to paste the copied line from file1.txt and type p. For example, to paste the copied line between line2 and line3, put the cursor on line2 and type p.

Sample output:

line1
line2
ostechnix
line3
line4
line5
Copying contents of one file into another file using Vim

To save the changes made in the current file, type:

ZZ

Again, please note that this is double capital ZZ (SHIFT+z).

To save the changes in all files and exit vim editor. type:

:wq

Similarly, you can copy any line from any file to other files.

Copying entire file contents into another

We know how to copy a single line. What about the entire file contents? That's also possible. Let us say, you want to copy the entire contents of file1.txt into file2.txt.

To do so, open the file2.txt first:

$ vim file2.txt

If the files are already loaded, you can switch to file2.txt by typing:

:buffer 2

Move the cursor to the place where you wanted to copy the contents of file1.txt. I want to copy the contents of file1.txt after line5 in file2.txt, so I moved the cursor to line 5. Then, type the following command and hit ENTER key:

:r file1.txt
Copying entire contents of a file into another file

Here, r means read .

Now you will see the contents of file1.txt is pasted after line5 in file2.txt.

line1
line2
line3
line4
line5
ostechnix
open source
technology
linux
unix
Copying entire file contents into another file using Vim

To save the changes in the current file, type:

ZZ

To save all changes in all loaded files and exit vim editor, type:

:wq
Method 2

Another method to open multiple files at once is to use either the -o or -O flag.

To open multiple files in horizontal windows, run:

$ vim -o file1.txt file2.txt
Open multiple files at once in Vim

To switch between windows, press CTRL-w w (i.e Press CTRL+w and again press w ). Or, use the following shortcuts to move between windows.

To open multiple files in vertical windows, run:

$ vim -O file1.txt file2.txt file3.txt
Open multiple files in vertical windows in Vim

To switch between windows, press CTRL-w w (i.e Press CTRL+w and again press w ). Or, use the following shortcuts to move between windows.

Everything else is same as described in method 1.

For example, to list currently loaded files, run:

:buffers

To switch between files:

:buffer 1

To open an additional file, type:

:e file3.txt

To copy entire contents of a file into another:

:r file1.txt

The only difference in method 2 is that once you save the changes in the current file using ZZ, the file closes itself automatically, and you need to close the files one by one by typing :wq. But if you follow method 1, typing :wq will save the changes in all files and close all of them at once.

For more details, refer man pages.

$ man vim

[Mar 24, 2021] How To Comment Out Multiple Lines At Once In Vim Editor by Senthil Kumar

Images removed. Use the original for full text.

Nov 22, 2017 | ostechnix.com

...enter the following command:

:1,3s/^/#

In this case, we are commenting out the lines from 1 to 3. Check the following screenshot. The lines from 1 to 3 have been commented out.

Comment out multiple lines at once in vim

To uncomment those lines, run:

:1,3s/^#/

Once you're done, unset the line numbers.

:set nonumber

Let us go ahead and see third method.

Method 3:

This one is the same as above but slightly different.

Open the file in vim editor.

$ vim ostechnix.txt

Set line numbers:

:set number

Then, type the following command to comment out the lines.

:1,4s/^/# /

The above command will comment out lines from 1 to 4.

Comment out multiple lines in vim

Finally, unset the line numbers by typing the following.

:set nonumber
Method 4:

This method is suggested by one of our reader Mr.Anand Nande in the comment section below.

Open file in vim editor:

$ vim ostechnix.txt

Press Ctrl+V to enter into 'Visual block' mode and press DOWN arrow to select all the lines in your file.

Select lines in Vim

Then, press Shift+i to enter INSERT mode (this will place your cursor on the first line). Press Shift+3 which will insert '#' before your first line.

Insert '#' before the first line in Vim

Finally, press ESC key, and you can now see all lines are commented out.

Comment out multiple lines using vim

Method 5:

This method is suggested by one of our Twitter follower and friend Mr.Tim Chase .

We can even target lines to comment out by regex. Open the file in vim editor.

$ vim ostechnix.txt

And type the following:

:g/\Linux/s/^/# /

The above command will comment out all lines that contain the word "Linux".

Comment out all lines that contain a specific word in Vim

And, that's all for now. I hope this helps. If you know any other easier method than the ones given here, please let me know in the comment section below. I will check and add them to the guide. Also, have a look at the comment section below. One of our visitors has shared a good guide about Vim usage.

NUNY3 November 23, 2017 - 8:46 pm

If you want to be productive in Vim you need to talk to Vim in the *language* Vim is using. Every solution that gets out of "normal mode" is most probably not the most effective.

METHOD 1
Using "normal mode". For example comment first three lines with: I#j.j.
This is strange isn't it, but:
I –> capital I jumps to the beginning of row and gets into insert mode
# –> type actual comment character
<Esc> –> exit insert mode and get back to normal mode
j –> move down a line
. –> repeat last command. Last command was: I#
j –> move down a line
. –> repeat last command. Last command was: I#
You get it: after you execute the command once, you just repeat the j. combination for the lines you would like to comment out.

METHOD 2
There is "command line mode" command to execute "normal mode" command.
Example: :%norm I#
Explanation:
% –> whole file (you can also use range if you like: 1,3 to do only for first three lines).
norm –> (short for normal)
I –> is normal command I that is, jump to the first character in line and execute insert
# –> insert actual character
You get it, for each range you select, for each of the line normal mode command is executed

METHOD 3
This is the method I love the most, because it uses Vim in the "I am talking to Vim" with Vim language principle.
This is by using extension (plug-in, add-in): https://github.com/tomtom/tcomment_vim extension.
How to use it? In NORMAL MODE of course to be efficient. Use: gc+action.

Examples:
gcap –> comment a paragraph
gcj –> comment current line and the line below
gc3j –> comment current line and 3 lines below
gcgg –> comment current line and all the lines including first line in file
gcG –> comment current line and all the lines including last line in file
gcc –> shortcut for comment a current line

You name it, it has all sorts of combinations. Remember, you have to talk with Vim to use it properly and efficiently.
Yes sure it also works with "visual mode", so you use it like: V select the lines you would like to mark and execute: gc

You see if I want to impress a friend I am using gc+action combination. Because I always get: What? How did you do it? My answer it is Vim, you need to talk with the text editor, not using dummy mouse and repeat actions.

NOTE: Please stop telling people to use the DOWN arrow key. Start using the h, j, k and l keys to move around. These keys are on the typist's home row. The DOWN, UP, LEFT and RIGHT keys are a bad habit used by beginners. It is very inefficient; you have to move your hand from the home row to the arrow keys.

VERY IMPORTANT: Do you want to get a one-million-dollar tip for using Vim? Start using Vim like it was designed for: use normal mode. Use its language: verbs, nouns, adverbs and adjectives. Interested in what I am talking about? You should be, if you are serious about using Vim. Read this one-million-dollar answer on the forum: https://stackoverflow.com/questions/1218390/what-is-your-most-productive-shortcut-with-vim/1220118#1220118

MDEBUSK November 26, 2019 - 7:07 am

I've tried the "boxes" utility with vim and it can be a lot of fun.

https://boxes.thomasjensen.com/

SÉRGIO ARAÚJO December 17, 2020 - 4:43 am

Method 6
:%norm I#

[Mar 24, 2021] What commands are missing from your bashrc file? - Enable Sysadmin

Mar 24, 2021 | www.redhat.com

The idea was that sharing this would inspire others to improve their bashrc savviness. Take a look at what our Sudoers group shared and, please, borrow anything you like to make your sysadmin life easier.

[ You might also like: Parsing Bash history in Linux ]

Jonathan Roemer
# Require confirmation before overwriting target files. This setting keeps me from deleting things I didn't expect to, etc
alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -i'

# Add color, formatting, etc to ls without re-typing a bunch of options every time
alias ll='ls -alhF'
alias ls="ls --color"
# So I don't need to remember the options to tar every time
alias untar='tar xzvf'
alias tarup='tar czvf'

# Changing the default editor, I'm sure a bunch of people have this so they don't get dropped into vi instead of vim, etc. A lot of distributions have system default overrides for these, but I don't like relying on that being around
alias vim='nvim'
alias vi='nvim'
Valentin Bajrami

Here are a few functions from my ~/.bashrc file:

# Easy copy the content of a file without using cat / selecting it etc. It requires xclip to be installed
# Example:  _cp /etc/dnsmasq.conf
_cp()
{
  local file="$1"
  local st=1
  if [[ -f $file ]]; then
    cat "$file" | xclip -selection clipboard
    st=$?
  else
    printf '%s\n' "Make sure you are copying the content of a file" >&2
  fi
  return $st    
}

# This is the function to paste the content. The content is now in your buffer.
# Example: _paste   

_paste()
{
  xclip -selection clipboard -o
}

# Generate a random password without installing any external tooling
genpw()
{
  alphanum=( {a..z} {A..Z} {0..9} ); for((i=0;i<=${#alphanum[@]};i++)); do printf '%s' "${alphanum[@]:$((RANDOM%255)):1}"; done; echo
}
# See what command you are using the most (this parses the history command)
cm() {
  history | awk ' { a[$4]++ } END { for ( i in a ) print a[i], i | "sort -rn | head -n10"}' | awk '$1 > max{ max=$1} { bar=""; i=s=10*$1/max;while(i-->0)bar=bar"#"; printf "%25s %15d %s %s", $2, $1,bar, "\n"; }'
}
Peter Gervase

For shutting down at night, I kill all SSH sessions and then kill any VPN connections:

#!/bin/bash
/usr/bin/killall ssh
/usr/bin/nmcli connection down "Raleigh (RDU2)"
/usr/bin/nmcli connection down "Phoenix (PHX2)"
Valentin Rothberg
alias vim='nvim'
alias l='ls -CF --color=always'
alias cd='cd -P' # follow symlinks
alias gits='git status'
alias gitu='git remote update'
alias gitum='git reset --hard upstream/master'
Steve Ovens
alias nano='nano -wET 4'
alias ls='ls --color=auto'
PS1="\[\e[01;32m\]\u@\h \[\e[01;34m\]\w  \[\e[01;34m\]$\[\e[00m\] "
export EDITOR=nano
export AURDEST=/var/cache/pacman/pkg
PATH=$PATH:/home/stratus/.gem/ruby/2.7.0/bin
alias mp3youtube='youtube-dl -x --audio-format mp3'
alias grep='grep --color'
alias best-youtube='youtube-dl -r 1M --yes-playlist -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]''
alias mv='mv -vv'
shopt -s histappend
HISTCONTROL=ignoreboth
Jason Hibbets

While my bashrc aliases aren't as sophisticated as the previous technologists, you can probably tell I really like shortcuts:

# User specific aliases and functions

alias q='exit'
alias h='cd ~/'
alias c='clear'
alias m='man'
alias lsa='ls -al'
alias s='sudo su -'
Bonus: Organizing bashrc files and cleaning up files

We know many sysadmins like to script things to make their work more automated. Here are a few tips from our Sudoers that you might find useful.

Chris Collins

I don't know who I need to thank for this, some awesome woman on Twitter whose name I no longer remember, but it's changed the organization of my bash aliases and commands completely.

I have Ansible drop individual <something>.bashrc files into ~/.bashrc.d/ with any alias or command or shortcut I want, related to any particular technology or Ansible role, and can manage them all separately per host. It's been the best single trick I've learned for .bashrc files ever.

Git stuff gets a ~/.bashrc.d/git.bashrc , Kubernetes goes in ~/.bashrc.d/kube.bashrc .

if [ -d ${HOME}/.bashrc.d ]
then
  for file in ~/.bashrc.d/*.bashrc
  do
    source "${file}"
  done
fi
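
A quick way to try this layout is to create the directory and drop a topic-specific file into it; a small sketch (the git alias is only an example):

mkdir -p ~/.bashrc.d
echo "alias gs='git status'" > ~/.bashrc.d/git.bashrc   # picked up by the loop above in the next shell
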
Peter Gervase

These aren't bashrc aliases, but I use them all the time. I wrote a little script named clean for getting rid of excess lines in files. For example, here's nsswitch.conf with lots of comments and blank lines:

[pgervase@pgervase etc]$ head authselect/nsswitch.conf
# Generated by authselect on Sun Dec  6 22:12:26 2020
# Do not modify this file manually.

# If you want to make changes to nsswitch.conf please modify
# /etc/authselect/user-nsswitch.conf and run 'authselect apply-changes'.
#
# Note that your changes may not be applied as they may be
# overwritten by selected profile. Maps set in the authselect
# profile always take precedence and overwrites the same maps
# set in the user file. Only maps that are not set by the profile

[pgervase@pgervase etc]$ wc -l authselect/nsswitch.conf
80 authselect/nsswitch.conf

[pgervase@pgervase etc]$ clean authselect/nsswitch.conf
passwd:     sss files systemd
group:      sss files systemd
netgroup:   sss files
automount:  sss files
services:   sss files
shadow:     files sss
hosts:      files dns myhostname
bootparams: files
ethers:     files
netmasks:   files
networks:   files
protocols:  files
rpc:        files
publickey:  files
aliases:    files

[pgervase@pgervase etc]$ cat `which clean`
#! /bin/bash
#
/bin/cat $1 | /bin/sed 's/^[ \t]*//' | /bin/grep -v -e "^#" -e "^;" -e "^[[:space:]]*$" -e "^[ \t]+"

[ Free online course: Red Hat Enterprise Linux technical overview . ]

[Mar 12, 2021] Connect computers through WebRTC.

Mar 12, 2021 | opensource.com

Snapdrop

If navigating a network through IP addresses and hostnames is confusing, or if you don't like the idea of opening a folder for sharing and forgetting that it's open for perusal, then you might prefer Snapdrop . This is an open source project that you can run yourself or use the demonstration instance on the internet to connect computers through WebRTC. WebRTC enables peer-to-peer connections through a web browser, meaning that two users on the same network can find each other by navigating to Snapdrop and then communicate with each other directly, without going through an external server.

snapdrop.jpg

(Seth Kenlon, CC BY-SA 4.0 )

Once two or more clients have contacted a Snapdrop service, users can trade files and chat messages back and forth, right over the local network. The transfer is fast, and your data stays local.

[Mar 12, 2021] How to measure elapsed time in bash by Dan Nanni

Mar 09, 2021 | www.xmodulo.com

When you call date with the +%s option, it shows the current system clock in seconds since 1970-01-01 00:00:00 UTC. Thus, with this option, you can easily calculate the time difference in seconds between two clock measurements.
When you call date with +%s option, it shows the current system clock in seconds since 1970-01-01 00:00:00 UTC. Thus, with this option, you can easily calculate time difference in seconds between two clock measurements.
start_time=$(date +%s)
# perform a task
end_time=$(date +%s)

# elapsed time with second resolution
elapsed=$(( end_time - start_time ))

Another (preferred) way to measure elapsed time in seconds in bash is to use the built-in bash variable called SECONDS . When you access the SECONDS variable in a bash shell, it returns the number of seconds that have passed so far since the current shell was launched. Since this method does not require running the external date command in a subshell, it is a more elegant solution.

start_time=$SECONDS
sleep 5
elapsed=$(( SECONDS - start_time ))
echo $elapsed

This will display elapsed time in terms of the number of seconds. If you want a more human-readable format, you can convert $elapsed output as follows.

eval "echo Elapsed time: $(date -ud "@$elapsed" +'$((%s/3600/24)) days %H hr %M min %S sec')"

This will produce output like the following.

Elapsed time: 0 days 13 hr 53 min 20 sec
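
Putting the pieces together, the SECONDS approach can be wrapped in a small helper function; a sketch (time_it is a hypothetical name, not part of the article):

# run any command and report how long it took, using the SECONDS builtin
time_it() {
  local start=$SECONDS
  "$@"
  echo "Elapsed time: $(( SECONDS - start )) sec"
}

time_it sleep 3    # prints "Elapsed time: 3 sec"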

[Mar 03, 2021] How to move /var directory to another partition

Mar 03, 2021 | linuxconfig.org

How to move /var directory to another partition

System Administration
18 November 2020


Your /var directory has filled up and you are left with no free disk space available. This is a typical scenario which can be easily fixed by mounting your /var directory on a different partition. Let's get started by attaching new storage, partitioning it, and creating the desired file system. The exact steps may vary and are not part of this config article. Once ready, obtain the partition UUID of your new var partition, e.g. /dev/sdc1:
# blkid | grep sdc1
/dev/sdc1: UUID="1de46881-1f49-440e-89dd-6c32592491a7" TYPE="ext4" PARTUUID="652a2fee-01"
Create a new mount point and mount your new partition:
# mkdir /mnt/newvar
# mount /dev/sdc1 /mnt/newvar
Confirm that it is mounted. Note, your output will be different:
# df -h /mnt/newvar
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc1       1.8T  1.6T  279G  85% /mnt/newvar
Copy current /var data to the new location:
# rsync -aqxP /var/* /mnt/newvar
Unmount new partition:
# umount /mnt/newvar
Edit your /etc/fstab to include the new partition, choosing the relevant file system:
UUID=1de46881-1f49-440e-89dd-6c32592491a7 /var        ext4    defaults        0       2
Reboot your system and you are done. Confirm that everything is working correctly, and optionally remove the old /var contents by booting into a live Linux system.
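A minimal sketch of that optional cleanup, assuming the old root filesystem is /dev/sda2 and you are booted into a live environment (the device name is illustrative):
# mkdir -p /mnt/oldroot
# mount /dev/sda2 /mnt/oldroot
# rm -rf /mnt/oldroot/var/*     # this path is shadowed by the new /var mount in normal operation
# umount /mnt/oldroot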

[Mar 03, 2021] partitioning - How to move boot and root partitions to another drive - Ask Ubuntu

Mar 03, 2021 | askubuntu.com



mlissner ,


I have two drives on my computer that have the following configuration:

Drive 1: 160GB, /home
Drive 2: 40GB, /boot and /

Unfortunately, drive 2 seems to be dying, because trying to write to it is giving me errors, and checking out the SMART settings shows a sad state of affairs.

I have plenty of space on Drive 1, so what I'd like to do is move the / and /boot partitions to it, remove Drive 2 from the system, replace Drive 2 with a new drive, then reverse the process.

I imagine I need to do some updating to grub, and I need to move some things around, but I'm pretty baffled how to exactly go about this. Since this is my main computer, I want to be careful not to mess things up so I can't boot.

Lucas ,

This is exactly what I had to do as well. I wrote a blog with full instructions on how to move the root partition / to /home. – Lucas, Sep 17 '18 at 15:12

maco ,


You'll need to boot from a live cd. Add partitions for them to disk 1, copy all the contents over, and then use sudo blkid to get the UUID of each partition. On disk 1's new /, edit the /etc/fstab to use the new UUIDs you just looked up.

Updating GRUB depends on whether it's GRUB1 or GRUB2. If GRUB1, you need to edit /boot/grub/device.map

If GRUB2, I think you need to mount your partitions as they would be in a real situation. For example:

sudo mkdir /media/root
sudo mount /dev/sda1 /media/root
sudo mount /dev/sda2 /media/root/boot
sudo mount /dev/sda3 /media/root/home

(Filling in whatever the actual partitions are that you copied things to, of course)

Then bind mount /proc and /dev in the /media/root:

sudo mount -B /proc /media/root/proc
sudo mount -B /dev /media/root/dev
sudo mount -B /sys /media/root/sys

Now chroot into the drive so you can force GRUB to update itself according to the new layout:

sudo chroot /media/root
sudo update-grub

The second command will make one complaint (I forget what it is though...), but that's ok to ignore.

Test it by removing the bad drive. If it doesn't work, the bad drive should still be able to boot the system, but I believe these are all the necessary steps. – maco

William Mortada ,

FYI to anyone viewing this these days, this does not apply to EFI setups. You need to mount /media/root/boot/efi , among other things. – wjandrea Sep 10 '16 at 7:54

sBlatt ,


If you replace the drive right away you can use dd (tried it on my server some months ago, and it worked like a charm).

You'll need a boot-CD for this as well.

  1. Start boot-CD
  2. Only mount Drive 1
  3. Run dd if=/dev/sdb1 of=/media/drive1/backuproot.img - sdb1 being your root ( / ) partition. This will save the whole partition in a file.
    • same for /boot
  4. Power off, replace disk, power on
  5. Run dd if=/media/drive1/backuproot.img of=/dev/sdb1 - write it back.
    • same for /boot

The above will create 2 partitions with the exact same size as they had before. You might need to adjust grub (check maco's post above).

If you want to resize your partitions (as i did):

  1. Create 2 Partitions on the new drive (for / and /boot ; size whatever you want)
  2. Mount the backup-image: mount /media/drive1/backuproot.img /media/backuproot/
  3. Mount the empty / partition: mount /dev/sdb1 /media/sdb1/
  4. Copy its contents to the new partition (i'm unsure about this command, it's really important to preserve ownership, cp -R won't do it!) cp -R --preserve=all /media/backuproot/* /media/sdb1
    • same for /boot/

This should do it. – sBlatt


It turns out that the new "40GB" drive I'm trying to install is smaller than my current "40GB" drive. I have both of them connected, and I'm booted into a liveCD. Is there an easy way to just dd from the old one to the new one, and call it a done deal? – mlissner Sep 4 '10 at 3:02

mlissner ,


My final solution to this was a combination of a number of techniques:

  1. I connected the dying drive and its replacement to the computer simultaneously.
  2. The new drive was smaller than the old, so I shrank the partitions on the old using GParted.
  3. After doing that, I copied the partitions on the old drive, and pasted them on the new (also using GParted).
  4. Next, I added the boot flag to the correct partition on the new drive, so it was effectively a mirror of the old drive.

This all worked well, but I needed to update grub2 per the instructions here .

After all this was done, things seem to work. – mlissner

j.karlsson ,

Finally, this solved it for me. I had a Virtualbox disk (vdi file) that I needed to move to a smaller disk. However Virtualbox does not support shrinking a vdi file, so I had to create a new virtual disk and copy over the linux installation onto this new disk. I've spent two days trying to get it to boot. – j.karlsson Dec 19 '19 at 9:48

[Mar 03, 2021] How to Migrate the Root Filesystem to a New Disk - Support - SUSE

Mar 03, 2021 | www.suse.com

How to Migrate the Root Filesystem to a New Disk

This document (7018639) is provided subject to the disclaimer at the end of this document.

Environment: SLE 11, SLE 12
Situation: The root filesystem needs to be moved to a new disk or partition.

Resolution

1. Use the media to go into rescue mode on the system. This is the safest way to copy data from the root disk so that it's not changing while we are copying from it. Make sure the new disk is available.

2. Copy data at the block(a) or filesystem(b) level depending on preference from the old disk to the new disk.
NOTE: If the dd command is not being used to copy data from an entire disk to an entire disk, the partition(s) will need to be created on the new disk prior to this step, so that the data can be copied from partition to partition.

a. Here is a dd command for copying at the block level (the disks do not need to be mounted):
# dd if=/dev/<old root disk> of=/dev/<new root disk> bs=64k conv=noerror,sync

The dd command is not verbose and depending on the size of the disk could take some time to complete. While it is running the command will look like it is just hanging. If needed, to verify it is still running, use the ps command on another terminal window to find the dd command's process ID and use strace to follow that PID and make sure there is activity.
# ps aux | grep dd
# strace -p<process id>

After confirming activity, hit CTRL + c to end the strace command. Once the dd command is complete the terminal prompt will return allowing for new commands to be run.
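With a reasonably recent GNU coreutils dd you can also skip the ps/strace dance and ask dd itself to report progress via the status=progress option:
# dd if=/dev/<old root disk> of=/dev/<new root disk> bs=64k conv=noerror,sync status=progress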

b. Alternatively to dd, mount the disks and then use an rsync command for copying at the filesystem level:
# mount /dev/<old root disk> /mnt
# mkdir /mnt2
(If the new disk's root partition doesn't have a filesystem yet, create it now.)
# mount /dev/<new root disk> /mnt2
# rsync -zahP /mnt/ /mnt2/

This command is much more verbose than dd and there shouldn't be any issues telling that it is working. This does generally take longer than the dd command.

3. Setting up the partition boot label with either fdisk(a) or parted(b)
NOTE: This step can be skipped if the boot partition is separate from the root partition and has not changed. Also, if dd was used on an entire disk to an entire disk in section "a" of step 2 you can still skip this step since the partition table will have been copied to the new disk (If the partitions are not showing as available yet on the new disk run "partprobe" or enter fdisk and save no changes. ). This exception does not include using dd on only a partition.

a. Using fdisk to label the new root partition (which contains boot) as bootable.
# fdisk /dev/<new root disk>

From the fdisk shell type 'p' to list and verify the root partition is there:
Command (m for help): p

If the "Boot" column of the root partition does not have an "*" symbol then it needs to be activated. Type 'a' to toggle the bootable partition flag:
Command (m for help): a
Partition number (1-4): <number from output of p for root partition>

After that use the 'p' command to verify the bootable flag is now enabled. Finally, save changes:
Command (m for help): w

b. Alternatively to fdisk, use parted to label the new root partition (which contains boot) as bootable.
# parted /dev/sda

From the parted shell type "print" to list and verify the root partition is there:
(parted) print

If the "Flags" column of the root partition doesn't include "boot" then it will need to be enabled:
(parted) set <root partition number> boot on

After that use the "print" command again to verify the flag is now listed for the root partition, then exit parted to save the changes:
(parted) quit

4. Updating Legacy GRUB(a) on SLE11 or GRUB2(b) on SLE12.
NOTE: Steps 4 through 6 will need to be done in a chroot environment on the new root disk. TID7018126 covers how to chroot in rescue mode: https://www.suse.com/support/kb/doc?id=7018126

a. Updating Legacy GRUB on SLE11
# vim /boot/grub/menu.lst

There are two changes that may need to occur in the menu.lst file.

1. If the contents of /boot are in the root partition which is being changed, we'll need to update the line "root (hd#,#)" which points to the disk with the contents of /boot.

Since the sd[a-z] device names are not persistent it's recommended to find the equivalent /dev/disk/by-id/ or /dev/disk/by-path/ disk name and to use that instead. Also, the device name might be different in chroot than it was before chroot. Run this command to verify the disk name in chroot:
# mount

For this line Grub uses "hd[0-9]" rather than "sd[a-z]" so sda would be hd0 and sdb would be hd1, and so on. Match to the disk as shown in the mount command within chroot. The partition number in Legacy Grub also starts at 0. So if it were sda1 it would be hd0,0 and if it were sdb2 it would be hd1,1. Update that line accordingly.

2. In the line starting with the word "kernel" (generally just below the root line we just went over) there should be a root=/dev/<old root disk> parameter. That will need to be updated to match the path and device name of the new root partition:
root=/dev/disk/by-id/<new root partition>
Also, if the swap partition was changed to the new disk you'll need to reflect that with the resume= parameter.
Save and exit after making the above changes as needed.
Next, run this command:
# yast2 bootloader
(You may get a warning message about the boot loader. This can be ignored.)
Go to the "Boot Loader Installation" tab with ALT + a. Verify it is set to boot from the correct partition. For example, if the content of /boot is in the root partition then make sure it is set to boot from the root partition. Lastly hit ALT + o so that it will save the configuration. While the YaST2 module is exiting it should also install the boot loader.
b. Updating GRUB2 on SLE12
# vim /etc/default/grub

The parameter to update is the GRUB_CMDLINE_LINUX_DEFAULT. If there is a "root=/dev/<old root disk>" parameter update it so that it is "root=/dev/<new root disk>". If there is no root= parameter in there add it. Each parameter is space separated so make sure there is a space separating it from the other parameters. Also, if the swap partition was changed to the new disk you'll need to reflect that with the resume= parameter.

Since the sd[a-z] device names are not persistent it's recommended to find the equivalent /dev/disk/by-id/ or /dev/disk/by-path/ disk name and to use that instead. Also, the device name might be different in chroot than it was before chroot. Run this command to verify the disk name in chroot before comparing with by-id or by-path:
# mount

It might look something like this afterward:
GRUB_CMDLINE_LINUX_DEFAULT="root=/dev/disk/by-id/<partition/disk name> resume=/dev/disk/by-id/<partition/disk name> splash=silent quiet showopts"

After saving changes to that file run this command to save them to the GRUB2 configuration:
# grub2-mkconfig -o /boot/grub2/grub.cfg
(You can ignore any errors about lvmetad during the output of the above command.)

After that run this command on the disk with the root partition. For example, if the root partition is sda2 run this command on sda:
# grub2-install /dev/<disk of root partition>

5. Correct the fstab file to match new partition name(s)
# vim /etc/fstab

Correct the root (/) partition mount row in the file so that it points to the new disk/partition name. If any other partitions were changed they will need to be updated as well. For example, change from:
/dev/<old root disk>              /   ext3   defaults   1 1
to:
/dev/disk/by-id/<new root disk>   /   ext3   defaults   1 1

The 3rd through 6th column may vary from the example. The important aspect is to change the row that is root (/) on the second column and adjust in particular the first column to reflect the new root disk/partition. Save and exit after making needed changes.
6. Lastly, run the following command to rebuild the ramdisk to match the updated information:
# mkinitrd

7. Exit chroot and reboot the system to test if it will boot using the new disk. Make sure to adjust the BIOS boot order so that the new disk is prioritized first.

Additional Information

The range of environments that can impact the necessary steps to migrate a root filesystem makes it near impossible to cover every case. Some environments could require tweaks in the steps needed to make this migration a success. As always in administration, have backups ready and proceed with caution.

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

[Mar 03, 2021] How to move Linux root partition to another drive quickly - by Dominik Gacek - Medium

Mar 03, 2021 | medium.com

How to move Linux root partition to another drive quickly

Dominik Gacek, Jun 21, 2019

There's a bunch of information over the internet on how to clone Linux drives or partitions between other drives and partitions using solutions like partclone, clonezilla, partimage, dd or similar, and while most of them work just fine, they're not always the fastest possible way to achieve the result.

Today I want to show you another approach that combines most of them, and I am finding it the easiest and fastest of all.

Assumptions:

  1. You are using GRUB 2 as a boot loader
  2. You have two disks/partitions where a destination one is at least the same size or larger than the original one.

Let's dive in into action.

Just "dd" it

The first thing that we have to do is to create a direct copy of our current root partition from our source disk onto our target one.

Before you start, you have to know the device names of your drives. To check that, type in:

sudo fdisk -l

You should see the list of all the disks and partitions inside your system, along with the corresponding device names, most probably something like /dev/sdx where the x will be replaced with the proper device letter. In addition to that you'll see all of the partitions for that device suffixed with a partition number, so something like /dev/sdx1

Based on the partition size, device identifier and the file-system, you can say what partitions you'll switch your installation from and which one will be the target one.

I am assuming here, that you already have the proper destination partition created, but if you do not, you can utilize one of the tools like GParted or similar to create it.

Once you have those identifiers, let's use dd to create a clone, with a command similar to:

sudo dd if=/dev/sdx1 of=/dev/sdy1 bs=64K conv=noerror,sync

Where /dev/sdx1 is your source partition, and /dev/sdy1 is your destination one.

It's really important to provide the proper devices into if and of arguments, cause otherwise you can overwrite your source disk instead!
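Before pressing Enter it is worth double-checking both identifiers; a quick sanity check, with example device names:

sudo lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdx /dev/sdy
# confirm that /dev/sdx holds the data (source) and /dev/sdy is the empty target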

The above process will take a while and once it's finished you should already be able to mount your new partition into the system by using two commands:

sudo mkdir /mnt/new
sudo mount /dev/sdy1 /mnt/new

There's also a chance that your device will be mounted automatically but that varies on a Linux distro of choice.

Once you execute it, if everything went smoothly you should be able to run

ls -l /mnt/new

And as the outcome you should see all the files from the core partition, being stored in the new location.

It finishes the first and most important part of the operation.

Now the tricky part

We now have our partition moved onto the shiny new drive, but there is a problem: since they are direct clones, both devices have the same UUID, so if we want to boot the installation from the new device properly, we'll have to adjust that as well.

First, execute the following command to see the current disk UUIDs:

blkid

You'll see all of the partitions with the corresponding UUID.
Now, if we want to change it we have to first generate a new one using:

uuidgen

which will generate a brand new UUID for us. Copy its result and execute a command similar to:

sudo tune2fs /dev/sdy1 -U cd6ecfb1-05e0-4dd7-89e7-8e78dad1fa0e

where in place of /dev/sdy1 you should provide your target partition device identifier, and in place of -U flag value, you should paste the value generated from uuidgen command.
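If you prefer, the two steps can be combined into one line; a small sketch (the partition name is only an example):

sudo tune2fs -U "$(uuidgen)" /dev/sdy1    # generate and assign a fresh UUID in one go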

Now the last thing to do is to update our fstab file on the new partition so that it contains the proper UUID. To do this, let's edit it with:

sudo vim /etc/fstab
# or nano or whatever editor of choice

you'll see something similar to the code below inside:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdc1 during installation
UUID=cd6ecfb1-05e0-4dd7-89e7-8e78dad1fa0e /         ext4  errors=remount-ro 0 1
# /home was on /dev/sdc2 during installation
UUID=667f98f4-9db1-415b-b326-65d16c528e29 /home     ext4  defaults          0 2
/swapfile                                 none      swap  sw                0 0
UUID=7AA7-10F1                            /boot/efi vfat  defaults          0 1

The first UUID line, the one mounted on /, is the important part for us. What we want to do is paste our new UUID there, replacing the current one specified for the / path.

And that's almost it

The last thing you have to do is simply update GRUB.

There are a number of options here, for the brave ones you can edit the /boot/grub/grub.cfg

Another option is to simply reinstall GRUB onto our new drive with a command like:

sudo grub-install /dev/sdy

Note that the target is the destination disk (/dev/sdy in our naming, not the old /dev/sdx); depending on your setup you may need to run this from a chroot, or point --boot-directory at the new root's /boot.

And if you do not want to bother with editing or reinstalling grub manually, you can simply use the tool called grub-customizer to have a simple and easy GUI for all of those operations.

Happy partitioning! :)

[Mar 03, 2021] HDD to SSD cloning on Linux without re-installing - PCsuggest

Mar 03, 2021 | www.pcsuggest.com

HDD to SSD cloning on Linux without re-installing

Updated - March 25, 2020 by Arnab Satapathi

No doubt the old spinning hard drives are the main bottleneck of any Linux PC. Overall system responsiveness is highly dependent on storage drive performance.

So, here's how you can clone an HDD to an SSD without re-installing the existing Linux distro. First, let's be clear about a few things.

Of course it's not the only way to clone Linux from HDD to SSD; rather, it's exactly what I did after buying an SSD for my laptop.

This tutorial should work on every Linux distro with a little modification depending on which distro you're using; I was using Ubuntu.


Hardware setup

As you're going to copy files from the hard drive to the SSD, you need to attach both disks at the same time to your PC/laptop.

For desktops it's easier, as there are always at least 2 SATA ports on the motherboard. You just have to connect the SSD to any of the free SATA ports and you're done.


On laptops it's a bit tricky, as there's no free SATA port. If the laptop has a DVD drive, then you could remove it and use a "2nd hard drive caddy".


It could be either 9.5 mm or 12.7 mm. Open up your laptop's DVD drive and get a rough measurement.

But if you don't want to play around with your DVD drive or there's no DVD at all, use a USB to SATA adapter .

Preferably a USB 3 adapter for better speed, like this one . However the "caddy" is the best you can do with your laptop.


You'll need a bootable USB drive for later steps, to boot any live Linux distro of your choice; I used Ubuntu.

You could use any method to create it; the dd approach will be the simplest. Here are detailed tutorials: one with MultiBootUSB and one on creating a bootable USB with GRUB.

Create Partitions on the SSD

After successfully attaching the SSD, you need to partition it according to its capacity and your choice. My SSD, a SAMSUNG 850 EVO, was absolutely blank, and yours might be as well. So, I had to create the partition table before creating disk partitions.

Now many questions arise: What kind of partition table? How many partitions? Is there any need for a swap partition?


Well, if your laptop/PC has a UEFI based BIOS and you want to use the UEFI functionality, you should use the GPT partition table.

For regular desktop use, 2 separate partitions are enough: a root partition and a home. But if you want to boot through UEFI, then you also need to create a FAT32 partition of 100 MB or more.

I think a 32 GB root partition is just enough, but you've to decide yours depending on future plans. However you can go with as low as 8 GB root partition, if you know what you're doing.


Of course you don't need a dedicated swap partition, at least that's what I think. If there's any need for swap in the future, you can just create a swap file.

So, here's how I partitioned the disk. It's formatted with the MBR partition table, a 32 GB root partition, and the rest of the 256 GB (232.89 GiB) is home.

These SSD partitions were created with GParted on the existing Linux system on the HDD. The SSD was connected to the DVD drive slot with a "caddy", showing up as /dev/sdb here.

Mount the HDD and SSD partitions

At the beginning of this step, you need to shutdown your PC and boot to any live Linux distro of your choice from a bootable USB drive.

The purpose of booting into a live Linux session is to copy everything from the old root partition in a cleaner way. I mean, why copy unnecessary files or directories under /dev, /proc, /sys, /var, /tmp?


And of course you know how to boot from a USB drive, so I'm not going to repeat the same thing. After booting to the live session, you've to mount both the HDD and SSD.

As I used an Ubuntu live session, I just opened up the file manager to mount the volumes. At this point you have to be absolutely sure about which are the old and new root and home partitions.

And if you didn't have any separate /home partition on the HDD previously, then you have to be careful while copying files, as there could be lots of content that won't fit inside the tiny root volume of the SSD in this case.

Finally, if you don't want to use any graphical tool like a file manager to mount the disk partitions, that's even better. An example is below: only commands, not much explanation.

sudo -i    # after booting to the live session

mkdir -p /mnt/{root1,root2,home1,home2}       # Create the directories

mount /dev/sdb1 /mnt/root1/       # mount the root partitions
mount /dev/sdc1 /mnt/root2/

mount /dev/sdb2 /mnt/home1/       # mount the home partitions
mount /dev/sdc2 /mnt/home2/
Copy contents from the HDD to SSD

In this step, we'll be using the rsync command to clone HDD to SSD while preserving proper file permissions. And we'll assume that all partitions are mounted like below.

  • Old root partition of the hard drive mounted on /media/ubuntu/root/
  • Old home partition of the hard drive on /media/ubuntu/home/
  • New root partition of the SSD, on /media/ubuntu/root1/
  • New home partition of the SSD mounted on /media/ubuntu/home1/

Actually, in my case both the root and home partitions were labelled root and home, so udisks2 created the mount directories like above.

Note: Most probably your mount points are different. Don't just copy paste the commands below, modify them according to your system and requirements.


First copy the contents of one root partition to another.

rsync -axHAWXS --numeric-ids --info=progress2 /media/ubuntu/root/ /media/ubuntu/root1/

You can also see the transfer progress, that's helpful.

The copying process will take about 10 minutes or so to complete, depending on the size of its contents.

Note: If there was no separate home partition on your previous installation and there's not enough space in the SSD's root partition, exclude the /home directory.

For that, we'll use the rsync command again.

rsync -axHAWXS --numeric-ids --info=progress2 --exclude={/home} /media/ubuntu/root/ /media/ubuntu/root1/

Now copy the contents of one home partition to another; this is a bit tricky if your SSD is smaller in size than the HDD. You have to use the --exclude flag with rsync to exclude certain large files or folders.

So, here for example, I wanted to exclude a few excessively large folders.

rsync -axHAWXS --numeric-ids --info=progress2 --exclude={home/b00m/OS,home/b00m/Downloads} /media/ubuntu/home/ /media/ubuntu/home1/

Excluding files and folders with rsync is a bit sketchy: the source folder is the starting point of any file or directory path. Make sure that the exclude path is properly implemented.


Note: You need to go through the below step only if you excluded the /home directory while cloning to SSD, as said above.

rsync -axHAWXS --numeric-ids --info=progress2 /media/ubuntu/root/home/ /media/ubuntu/home1/

Hope you've got the point: for a proper HDD to SSD cloning in Linux, copy the contents of the HDD's root partition to the new SSD's root partition, and do the same thing for the home partition too.

Install GRUB bootloader on the SSD

The SSD won't boot until there's a properly configured bootloader. And there's a very good chance that you were using GRUB as the boot loader.

So, to install GRUB, we have to chroot into the root partition of the SSD and install it from there. Before that, be sure about which device under the /dev directory is your SSD. In my case, it was /dev/sdb.

Note: You can just copy the first 512 bytes from the HDD and dump them to the SSD, but I'm not going that way this time.

So, the first step is chrooting; here are all the commands below, run as the superuser.

sudo -i               # login as super user

mount -o bind /dev/ /media/ubuntu/root1/dev/
mount -o bind /dev/pts/ /media/ubuntu/root1/dev/pts/ 
mount -o bind /sys/ /media/ubuntu/root1/sys/
mount -o bind /proc/ /media/ubuntu/root1/proc/

chroot /media/ubuntu/root1/


After successfully chrooting to the SSD's root partition, install GRUB. And there's also a catch, if you want to use a UEFI compatible GRUB, then it's another long path. But we'll be installing the legacy BIOS version of the GRUB here.

grub-install /dev/sdb --boot-directory=/boot/ --target=i386-pc

If GRUB is installed without any problem, then update the configuration file.

update-grub

These two commands above are to be run inside the chroot, and don't exit from the chroot now. Here's the detailed GRUB rescue tutorial, both for legacy BIOS and UEFI systems.

Update the fstab entry

You have to update the fstab entry properly so that the filesystems are mounted correctly while booting.

Use the blkid command to find the proper UUIDs of the partitions.

Now open up the /etc/fstab file with your favorite text editor and add the proper root and home UUID at proper locations.

nano /etc/fstab

The final fstab entries on my laptop's Ubuntu installation point / and /home at the UUIDs of the new SSD partitions.
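As an illustration only (the UUIDs below are made-up placeholders; use the values blkid reports for your new partitions), the relevant lines end up looking something like this:

# /etc/fstab on the SSD
UUID=1111aaaa-2222-bbbb-3333-cccc4444dddd  /      ext4  errors=remount-ro  0  1
UUID=5555eeee-6666-ffff-7777-888899990000  /home  ext4  defaults           0  2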

Shutdown and boot from the SSD

If you were using a USB to SATA converter to do all the above steps, then it's time to connect the SSD to a SATA port.

For desktops it's not a problem: just connect the SSD to any available SATA port. But many laptops refuse to boot if the DVD drive is replaced with an SSD or HDD. So, in that case, remove the hard drive and slip the SSD in its place.

After doing all the hardware stuff, it's better to check if the SSD is recognized by the BIOS/UEFI at all. Hit the BIOS setup button while powering it up, and check all the disks.

If the SSD is detected, then set it as the default boot device. Save all the changes to BIOS/UEFI and hit the power button again.

Now it's the moment of truth: if the HDD to SSD cloning was done right, then Linux should boot. It will boot much faster than before; you can check that with the systemd-analyze command.
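For example, a quick before/after comparison can be done with:

systemd-analyze            # total time spent in firmware, loader, kernel and userspace
systemd-analyze blame      # per-unit startup times, slowest first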

Conclusion

As said before, it's neither the only way nor the perfect one, but it was pretty simple for me. I got the idea from the OpenWrt extroot setup, but previously used the squashfs tools instead of rsync.

It took around 20 minutes to clone my HDD to SSD. But writing this tutorial took around 15 times longer than that.

Hope I'll be able to add the GRUB installation process for UEFI based systems to this tutorial soon, stay tuned !

Also please don't forget to share your thoughts and suggestions in the comment section.

Comments

  1. Sh3l says

    December 21, 2020

    Hello,
It seems you haven't gotten around to writing that UEFI based article yet. But right now I really need the steps necessary to clone HDD to SSD on a UEFI based system. Can you please let me know how to do it?

    • Arnab Satapathi says

      December 22, 2020

      Create an extra UEFI partition, along with root and home partitions, FAT32, 100 to 200 MB, install GRUB in UEFI mode, it should boot.
      Commands should be like this -
      mount /dev/sda2 /boot/efi
      grub-install /dev/sda --target=x86_64-efi

      sda2 is the EFI partition.

      This could be helpful- https://www.pcsuggest.com/grub-rescue-linux/#GRUB_rescue_on_UEFI_systems

      Then edit the grub.cfg file under /boot/grub/ , you're good to go.

If it's not booting try GRUB rescue, boot and install grub from there.

  2. Pronay Guha says

    November 9, 2020

I'm already using Kubuntu 20.04, and now I'm trying to add an SSD to my laptop. It is running Windows alongside. I want the data to be there, but instead of using the HDD, the Kubuntu OS should use the SSD. How to do it?

  3. none says

    May 23, 2020

    Can you explain what to do if the original HDD has Swap and you don't want it on the SSD?
Thanks.

    • Arnab Satapathi says

      May 23, 2020

      You can ignore the Swap partition, as it's not essential for booting.

Edit the /etc/fstab file, and use a swap file instead.

  4. none says

    May 21, 2020

    A couple of problems:
    In one section you mount homeS and rootS as root1 root2 home1 home2 but in the next sectionS you call them root root1 home home1
    In the blkid image sda is SSD and sdb is HDD but you said in the previous paragraph that sdb is your SSD
Thanks for the guide

    • Arnab Satapathi says

      May 23, 2020

      The first portion is just an example, not the actual commands.

There are some confusing paragraphs and formatting errors, I agree.

  5. oybek says

    April 21, 2020

    Thank you very much for the article
    Yesterday moved linux from hdd to ssd without any problem
Brilliant article

    • Pronay Guha says

      November 9, 2020

      hey, I'm trying to move Linux from HDD to SSD with windows as a dual boot option.
What changes should I do?

  6. Passingby says

    March 25, 2020

Thank you for your article. It was very helpful. But I see one disadvantage. When you copy with cp -a /media/ubuntu/root/ /media/ubuntu/root1/, a root folder will be created inside root1 instead of its contents being copied directly. To avoid this you must add (*) after the trailing /.
It should look like cp -a /media/ubuntu/root/* /media/ubuntu/root1/. In my opinion the rsync command is much better: you can see the files being copied, and when I used cp I could not tell whether the process had hung or not.

  7. David Keith says

    December 8, 2018

Just a quick note: rsync, scp, cp etc. all seem to have a file size limitation of approximately 100GB. So this tutorial will work well with the average filesystem, but will bomb repeatedly if the file size is extremely large.

  8. oldunixguy says

    June 23, 2018

    Question: If one doesn't need to exclude anything why not use "cp -a" instead of rsync?

Question: You say "use a UEFI compatible GRUB, then it's another long path" but you don't tell us how to do this for UEFI. How do we do it?

    • Arnab Satapathi says

      June 23, 2018

      1. Yeah, using cp -a is preferable if we don't have to exclude anything.
2. At the moment of writing, I didn't have any PC/laptop with UEFI firmware.

Thanks for the feedback, fixed the first issue.

  9. Alfonso says

    February 8, 2018

best tutorial ever, thank you!

    • Arnab Satapathi says

      February 8, 2018

You're most welcome; truly, I don't know how to respond to such praise. Thanks!

  10. Emmanuel says

    February 3, 2018

By far the best tutorial I've found "quickly" searching DuckDuckGo. Planning to migrate my system in early 2018. Thank you! I now visualize quite clearly the different steps I'll have to adapt and pass through. It also sticks to the KISS principle. Thank you again, the time you invested is very useful, at least for me!

    Best regards.

Emmanuel

    • Arnab Satapathi says

      February 3, 2018

      Wow! That's motivating, thanks Emmanuel.

[Mar 03, 2021] What Is /dev/shm And Its Practical Usage

Mar 03, 2021 | www.cyberciti.biz

Author: Vivek Gite Last updated: March 14, 2006 58 comments

/dev/shm is nothing but an implementation of the traditional shared memory concept. It is an efficient means of passing data between programs. One program will create a memory portion, which other processes (if permitted) can access. This results in speeding things up on Linux.

shm / shmfs is also known as tmpfs, which is a common name for a temporary file storage facility on many Unix-like operating systems. It is intended to appear as a mounted file system, but one which uses virtual memory instead of a persistent storage device.


If you type the mount command you will see /dev/shm as a tmpfs file system. Therefore, it is a file system which keeps all files in virtual memory. Everything in tmpfs is temporary in the sense that no files will be created on your hard drive. If you unmount a tmpfs instance, everything stored therein is lost. By default almost all Linux distros are configured to use /dev/shm:
$ df -h
Sample outputs:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/wks01-root
                      444G   70G  351G  17% /
tmpfs                 3.9G     0  3.9G   0% /lib/init/rw
udev                  3.9G  332K  3.9G   1% /dev
tmpfs                 3.9G  168K  3.9G   1% /dev/shm
/dev/sda1             228M   32M  184M  15% /boot
Nevertheless, where can I use /dev/shm?

You can use /dev/shm to improve the performance of application software such as Oracle or overall Linux system performance. On heavily loaded system, it can make tons of difference. For example VMware workstation/server can be optimized to improve your Linux host's performance (i.e. improve the performance of your virtual machines).

In this example, remount /dev/shm with 8G size as follows:
# mount -o remount,size=8G /dev/shm
To be frank, if you have more than 2GB RAM + multiple virtual machines, this hack always improves performance. In this example, you will get a tmpfs instance on /disk2/tmpfs which can allocate 5GB of RAM/swap with 5K inodes and is only accessible by root:
# mount -t tmpfs -o size=5G,nr_inodes=5k,mode=700 tmpfs /disk2/tmpfs
Where,
size=5G : limits the maximum size of this tmpfs instance to 5 GB
nr_inodes=5k : limits the number of inodes (files and directories) to about 5,000
mode=700 : makes the mount point accessible only by its owner (root)

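As a rough illustration of why a RAM-backed file system matters, you can compare write speed into tmpfs against write speed onto disk (file names are arbitrary and the numbers will vary from system to system):
# dd if=/dev/zero of=/dev/shm/testfile bs=1M count=512 conv=fdatasync
# dd if=/dev/zero of=/var/tmp/testfile bs=1M count=512 conv=fdatasync
# rm -f /dev/shm/testfile /var/tmp/testfile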
How do I restrict or modify size of /dev/shm permanently?

You need to add or modify an entry in the /etc/fstab file so that the system can read it after a reboot. Edit /etc/fstab as the root user, enter:
# vi /etc/fstab
Append or modify /dev/shm entry as follows to set size to 8G

none      /dev/shm        tmpfs   defaults,size=8G        0 0

Save and close the file. For the changes to take effect immediately remount /dev/shm:
# mount -o remount /dev/shm
Verify the same:
# df -h


[Mar 03, 2021] How to move the /root directory

Mar 03, 2021 | serverfault.com


I would like to move my root user's directory to a larger partition. Sometimes "he" runs out of space when performing tasks.

Here are my partitions:

host3:~# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1               334460    320649         0 100% /
tmpfs                   514128         0    514128   0% /lib/init/rw
udev                     10240       720      9520   8% /dev
tmpfs                   514128         0    514128   0% /dev/shm
/dev/sda9            228978900   1534900 215812540   1% /home
/dev/sda8               381138     10305    351155   3% /tmp
/dev/sda5              4806904    956852   3605868  21% /usr
/dev/sda6              2885780   2281584    457608  84% /var

The root user's home directory is /root. I would like to relocate this, and any other user's home directories, to a new location, perhaps on sda9. How do I go about this? – nicholas.alipaz


You should avoid symlinks; they can make nasty bugs appear... one day. And they are very hard to debug.

Use mount --bind :

# as root
cp -a /root /home/
echo "" >> /etc/fstab
echo "/home/root /root none defaults,bind 0 0" >> /etc/fstab

# do it now
cd / ; mv /root /root.old; mkdir /root; mount -a

The bind mount will be re-established at every reboot; you should reboot now if you want to catch errors soon. – shellholic



Never tried it, but you shouldn't have a problem with:
cd / to make sure you're not in the directory to be moved
mv /root /home/root
ln -s /home/root /root to symlink it back to the original location. – James L


[Mar 03, 2021] The dmesg command is used to print the kernel's message buffer.

Mar 03, 2021 | www.redhat.com

Originally from: 11 Linux commands I can't live without - Enable Sysadmin

Command 9: dmesg

The dmesg command is used to print the kernel's message buffer. This is another important command that you cannot work without. It is much easier to troubleshoot a system when you can see what is going on, and what happened behind the scenes.
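A few typical invocations (these flags are from the util-linux version of dmesg shipped with most current distros):

dmesg -T | tail -20          # human-readable timestamps, last 20 messages
dmesg --level=err,warn       # show only errors and warnings
dmesg -w                     # follow new kernel messages as they arrive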


[Mar 03, 2021] The classic case of "low free disk space"

Mar 03, 2021 | www.redhat.com

Originally from: Sysadmin university- Quick and dirty Linux tricks - Enable Sysadmin

Another example from real life: You are troubleshooting an issue and find out that one file system is at 100 percent of its capacity.

There may be many subdirectories and files in production, so you may have to come up with some way to classify the "worst directories" because the problem (or solution) could be in one or more.

In the next example, I will show a very simple scenario to illustrate the point.

(Demo recording: https://asciinema.org/a/dt1WZkdpfCALbQ5XeiJNYxSCS )

The sequence of steps is:

  1. We go to the file system where the disk space is low (I used my home directory as an example).
  2. Then, we use the command du -sk * to show the sizes of the directories in kilobytes.
  3. That requires some classification for us to find the big ones, but just sort is not enough because, by default, this command will not treat the numbers as values but just characters.
  4. We add -n to the sort command, which now shows us the biggest directories.
  5. In case we have to navigate to many other directories, creating an alias might be useful.

[Mar 01, 2021] Smart ways to compare files on Linux by Sandra Henry-Stocker

Feb 16, 2021 | www.networkworld.com

colordiff

The colordiff command enhances the differences between two text files by using colors to highlight the differences.


$ colordiff attendance-2020 attendance-2021
10,12c10
< Monroe Landry
< Jonathan Moody
< Donnell Moore
---
> Sandra Henry-Stocker

If you add a -u option, those lines that are included in both files will appear in your normal font color.

wdiff

The wdiff command uses a different strategy. It highlights the lines that are only in the first or second files using special characters. Those surrounded by square brackets are only in the first file. Those surrounded by braces are only in the second file.

$ wdiff attendance-2020 attendance-2021
Alfreda Branch
Hans Burris
Felix Burt
Ray Campos
Juliet Chan
Denver Cunningham
Tristan Day
Kent Farmer
Terrie Harrington
[-Monroe Landry                 <== lines only in file 1 start
Jonathon Moody
Donnell Moore-]                 <== lines only in file 1 stop
{+Sandra Henry-Stocker+}        <== line only in file 2
Leanne Park
Alfredo Potter
Felipe Rush
vimdiff

The vimdiff command takes an entirely different approach. It uses the vim editor to open the files in a side-by-side fashion. It then highlights the lines that are different using background colors and allows you to edit the two files and save each of them separately.

Unlike the commands described above, it runs on the desktop, not in a terminal window.


On Debian systems, you can install vimdiff with this command:

$ sudo apt install vim
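To compare the same two attendance files used in the earlier examples, you would then run:

$ vimdiff attendance-2020 attendance-2021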


kompare

The kompare command, like vimdiff, runs on your desktop. It displays differences between files so they can be viewed and merged, and it is often used by programmers to see and manage differences in their code. It can compare files or folders. It's also quite customizable.

Learn more at kde.org .

kdiff3

The kdiff3 tool allows you to compare up to three files and not only see the differences highlighted, but merge the files as you see fit. This tool is often used to manage changes and updates in program code.

Like vimdiff and kompare , kdiff3 runs on the desktop.

You can find more information on kdiff3 at sourceforge .

[Feb 28, 2021] Tagging commands on Linux by Sandra Henry-Stocker

Nov 20, 2020 | www.networkworld.com

Tags provide an easy way to associate strings that look like hash tags (e.g., #HOME ) with commands that you run on the command line. Once a tag is established, you can rerun the associated command without having to retype it. Instead, you simply type the tag. The idea is to use tags that are easy to remember for commands that are complex or bothersome to retype.

Unlike setting up an alias, tags are associated with your command history. For this reason, they only remain available if you keep using them. Once you stop using a tag, it will slowly disappear from your command history file. Of course, for most of us, that means we can type 500 or 1,000 commands before this happens. So, tags are a good way to rerun commands that are going to be useful for some period of time, but not for those that you want to have available permanently.

To set up a tag, type a command and then add your tag at the end of it. The tag must start with a # sign and should be followed immediately by a string of letters. This keeps the tag from being treated as part of the command itself. Instead, it's handled as a comment but is still included in your command history file. Here's a very simple and not particularly useful example:

$ echo "I like tags" #TAG

This particular echo command is now associated with #TAG in your command history. If you use the history command, you'll see it:


$ history | grep TAG
  998  08/11/20 08:28:29 echo "I like tags" #TAG     <==
  999  08/11/20 08:28:34 history | grep TAG

Afterwards, you can rerun the echo command shown by entering !? followed by the tag.

$ !? #TAG
echo "I like tags" #TAG
"I like tags"

The point is that you will likely only want to do this when the command you want to run repeatedly is so complex that it's hard to remember or just annoying to type. To list your most recently updated files, for example, you might use a tag #REC (for "recent") and associate it with the appropriate ls command. The command below lists files in your home directory regardless of where you are currently positioned in the file system, lists them in reverse date order, and displays only the five most recently created or changed files.

$ ls -ltr ~ | tail -5 #REC <== Associate the tag with a command
drwxrwxr-x  2 shs     shs        4096 Oct 26 06:13 PNGs
-rw-rw-r--  1 shs     shs          21 Oct 27 16:26 answers
-rwx------  1 shs     shs         644 Oct 29 17:29 update_user
-rw-rw-r--  1 shs     shs      242528 Nov  1 15:54 my.log
-rw-rw-r--  1 shs     shs      266296 Nov  5 18:39 political_map.jpg
$ !? #REC                       <== Run the command that the tag is associated with
ls -ltr ~ | tail -5 #REC
drwxrwxr-x  2 shs     shs        4096 Oct 26 06:13 PNGs
-rw-rw-r--  1 shs     shs          21 Oct 27 16:26 answers
-rwx------  1 shs     shs         644 Oct 29 17:29 update_user
-rw-rw-r--  1 shs     shs      242528 Nov  1 15:54 my.log
-rw-rw-r--  1 shs     shs      266296 Nov  5 18:39 political_map.jpg

You can also rerun tagged commands using Ctrl-r (hold Ctrl key and press the "r" key) and then typing your tag (e.g., #REC). In fact, if you are only using one tag, just typing # after Ctrl-r should bring it up for you. The Ctrl-r sequence, like !? , searches through your command history for the string that you enter.

Tagging locations

Some people use tags to remember particular file system locations, making it easier to return to directories they're working in without having to type complete directory paths.


$ cd /apps/data/stats/2020/11 #NOV
$ cat stats
$ cd
!? #NOV        <== takes you back to /apps/data/stats/2020/11

After using the #NOV tag as shown, whenever you need to move into the directory associated with #NOV , you have a quick way to do so – and one that doesn't require that you think too much about where the data files are stored.

NOTE: Tags don't need to be in all uppercase letters, though this makes them easier to recognize and unlikely to conflict with any commands or file names that are also in your command history.

Alternatives to tags

While tags can be very useful, there are other ways to do the same things that you can do with them.

To make commands easily repeatable, assign them to aliases.


$ alias recent="ls -ltr ~ | tail -5"

To make multiple commands easily repeatable, turn them into a script.

#!/bin/bash
echo "Most recently updated files:"
ls -ltr ~ | tail -5

To make file system locations easier to navigate to, create symbolic links.

$ ln -s /apps/data/stats/2020/11 NOV

To rerun recently used commands, use the up arrow key to back up through your command history until you reach the command you want to reuse and then press the enter key.

You can also rerun recent commands by typing something like "history | tail -20" and then typing "!" followed by the number to the left of the command you want to rerun (e.g., !999).
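For instance (the history numbers and commands here are only illustrative):

$ history | tail -3
 1055  21/02/21 19:01:12 df -h /home
 1056  21/02/21 19:01:35 who
 1057  21/02/21 19:01:41 history | tail -3
$ !1055
df -h /home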

Wrap-up

Tags are most useful when you need to run complex commands again and again in a limited timeframe. They're easy to set up and they fade away when you stop using them.

[Feb 28, 2021] Selectively reusing commands on Linux by Sandra Henry-Stocker

Feb 23, 2021 | www.networkworld.com

Reuse a command by typing a portion of it

One easy way to reuse a previously entered command (one that's still in your command history) is to type the beginning of the command. If the bottom of your history buffer looks like this, you could rerun the ps command that's used to count system processes simply by typing just !p .


$ history | tail -7
 1002  21/02/21 18:24:25 alias
 1003  21/02/21 18:25:37 history | more
 1004  21/02/21 18:33:45 ps -ef | grep systemd | wc -l
 1005  21/02/21 18:33:54 ls
 1006  21/02/21 18:34:16 echo "What's next?"
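Typing !p would then rerun the most recent command that begins with "p", which is the ps pipeline in this listing:

$ !p
ps -ef | grep systemd | wc -l
5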

You can also rerun a command by entering a string that was included anywhere within it. For example, you could rerun the ps command shown in the listing above by typing !?sys? (the question marks act as string delimiters).

$ !?sys?
ps -ef | grep systemd | wc -l
5

You could rerun the command shown in the listing above by typing !1004 but this would be more trouble if you're not looking at a listing of recent commands.

Run previous commands with changes

After the ps command shown above, you could count kworker processes instead of systemd processes by typing ^systemd^kworker^ . This replaces one process name with the other and runs the altered command. As you can see in the commands below, this string substitution allows you to reuse commands when they differ only a little.

$ ps -ef | grep systemd | awk '{ print $2 }' | wc -l
5
$ ^systemd^smbd^
ps -ef | grep smbd | awk '{ print $2 }' | wc -l
5
$ ^smbd^kworker^
ps -ef | grep kworker | awk '{ print $2 }' | wc -l
13

The string substitution is also useful if you mistype a command or file name.


$ sudo ls -l /var/log/samba/corse
ls: cannot access '/var/log/samba/corse': No such file or directory
$ ^se^es^
sudo ls -l /var/log/samba/cores
total 8
drwx------. 2 root root 4096 Feb 16 10:50 nmbd
drwx------. 2 root root 4096 Feb 16 10:50 smbd

Reach back into history

You can also reuse commands with a character string that asks, for example, to rerun the command you entered some number of commands earlier. Entering !-11 would rerun the command you typed 11 commands earlier. In the output below, the !-3 reruns the first of the three earlier commands displayed.

$ ps -ef | wc -l
132
$ who
shs      pts/0        2021-02-21 18:19 (192.168.0.2)
$ date
Sun 21 Feb 2021 06:59:09 PM EST
$ !-3
ps -ef | wc -l
133
Reuse command arguments

Another thing you can do with your command history is reuse arguments that you provided to various commands. For example, the character sequence !:1 represents the first argument provided to the most recently run command, !:2 the second, !:3 the third and so on. !:$ represents the final argument. In this example, the arguments are reversed in the second echo command.

$ echo be the light
be the light
$ echo !:3 !:2 !:1
echo light the be
light the be
$ echo !:3 !:$
echo light light
light light

If you want to run a series of commands using the same argument, you could do something like this:

$ echo nemo
nemo
$ id !:1
id nemo
uid=1001(nemo) gid=1001(nemo) groups=1001(nemo),16(fish),27(sudo)
$ df -k /home/!:$
df -k /home/nemo
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sdb1      446885824 83472864 340642736  20% /home

Of course, if the argument was a long and complicated string, it might actually save you some time and trouble to use this technique. Please remember this is just an example!

Wrap-Up

Simple history command tricks can often save you a lot of trouble by allowing you to reuse rather than retype previously entered commands. Remember, however, that using strings to identify commands will recall only the most recent use of that string and that you can only rerun commands in this way if they are being saved in your history buffer.


[Feb 27, 2021] Parallel Shell

Feb 27, 2021 | cs.boisestate.edu

2.3.2 Parallel Shell

The cluster comes with a simple parallel shell named pdsh. The pdsh shell is handy for running commands across the cluster. There is a man page that describes the capabilities of pdsh in detail. One of the useful features is the ability to target all or a subset of the cluster. For example: pdsh -a targets the command to all nodes of the cluster, including the master. pdsh -a -x node00 targets the command to all nodes of the cluster except the master. pdsh node[01-08] targets the command to the 8 nodes of the cluster named node01, node02, . . ., node08.

Another utility that is useful for formatting the output of pdsh is dshbak. Here we will show some handy uses of pdsh.
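For example, to check the uptime on every node and group identical responses together, a typical invocation is:

# pdsh -a uptime | dshbak -c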

[Feb 27, 2021] pdsh Utility Wrappers

Feb 27, 2021 | docstore.mik.ua

Administrators can build wrapper commands around pdsh for commands that are frequently used across multiple systems and Serviceguard clusters. Several such wrapper commands are provided with DSAU. These wrappers are Serviceguard cluster-aware and default to fanning out cluster-wide when used in a Serviceguard environment. These wrappers support most standard pdsh command line options and also support long options ( -- option syntax) .

cexec is a general purpose pdsh wrapper. In addition to the standard pdsh features, cexec includes a reporting feature. Use the --report_loc option to have cexec display the report location for a command. The command report records the command issued in addition to the nodes where the command succeeded, failed, or the nodes that were unreachable. The report can be used with the --retry option to replay the command against nodes that failed, succeeded, were unreachable, or all nodes.

ccp

ccp is a wrapper for pdcp and copies files cluster-wide or to the specified set of systems.

cps

cps fans out a ps command across a set of systems or cluster.

ckill

ckill allows the administrator to signal a process by name since the pid of a specific process will vary across a set of systems or the members of a cluster.

cuptime

cuptime displays the uptime statistics for a set of systems or a cluster.

cwall

cwall displays a wall(1M) broadcast message on multiple hosts.

All the wrappers support the CFANOUT_HOSTS environment variable when not executing in a Serviceguard cluster. The environment variable specifies a file containing the list of hosts to target, one hostname per line. This will be used if no other host specifications are present on the command line. When no target nodelist command line options are used and CFANOUT_HOSTS is undefined, the command will be executed on the local host.
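As a small sketch of that behavior (the file name and host names are placeholders), outside of a Serviceguard cluster you might do:

# cat /tmp/webhosts          # one hostname per line
web1
web2
# export CFANOUT_HOSTS=/tmp/webhosts
# cuptime                    # now fans out to web1 and web2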

For more information on these commands, refer to their reference manpages.

[Feb 27, 2021] Parallel Distributed Shell - Thread- [Pdsh-users] Re- Dshbak option

Feb 27, 2021 | sourceforge.net
Hm, this seems like a good idea, but I'm not sure dshbak is the right 
place for this. (That script is meant to simply reformat output which
is prefixed by "node: ")

If you'd like to track up/down nodes, you should check out Al Chu's
Cerebro and whatsup/libnodeupdown:

http://www.llnl.gov/linux/cerebro/cerebro.html
http://www.llnl.gov/linux/whatsup/

But I do realize that reporting nodes that did not respond to pdsh
would also be a good feature. However, it seems to me that pdsh itself
would have to do this work, because only it knows the list of hosts originally
targeted. (How would dshbak know this?)

As an alternative I sometimes use something like this:

 # pdsh -a true 2>&1 | sed 's/^[^:]*: //'  | dshbak -c
----------------
emcr[73,138,165,293,313,331,357,386,389,481,493,499,519,522,526,536,548,553,560,564,574,601,604,612,618,636,646,655,665,676,678,693,700-701,703,706,711,713,715,717-718,724,733,737,740,759,767,779,817,840,851,890]
----------------
 mcmd: connect failed: No route to host
----------------
emcrj
----------------
 mcmd: xpoll: protocol failure in circuit setup

i.e. strip off the leading pdsh@...: and send all errors to stdout. Then
 collect errors with dshbak to see which hosts are not reachable.

Maybe we should add an option to pdsh to issue a report of failed hosts
at the end of execution?

mark

 

[Feb 27, 2021] Shell scripting - want to login on some server, which are in same domain and execute command and exit

Feb 27, 2021 | unix.stackexchange.com

Install pdsh , then you can run commands like:

pdsh -w server[0-9],server10 'command1 ; command2 ;... ; command5' > logfile.txt

NOTE: if you don't want to enter passwords for each server, then you need to have an authorized_key installed on the remote servers. If necessary, you can use the environment variable PDSH_SSH_ARGS to specify ssh options, including which identity file to use ( -i ).
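As a minimal sketch (the key path is just a placeholder), you can append an identity file to the ssh options pdsh uses; PDSH_SSH_ARGS_APPEND adds to pdsh's default ssh arguments rather than replacing them:

$ export PDSH_SSH_ARGS_APPEND="-i ~/.ssh/cluster_key"
$ pdsh -R ssh -w server[0-9],server10 uptime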

The commands will be run in parallel on all servers, and output from them will be intermingled (with the hostname pre-pended to each output line). You can view the output nicely formatted and separated by host using pdsh 's dshbak utility:

dshbak logfile.txt | less

Alternatively, you can pipe through dshbak before redirecting to a logfile:

pdsh -w server[0-9],server10 'command1 ; command2 ;... ; command5' | 
    dshbak > logfile.txt

IMO it's better to save the raw log file and use dshbak when required, but that's just my subjective preference. For remote commands that produce only a single line of output (e.g. uname or uptime ), dshbak is overly verbose, as the raw output is already nicely concise. e.g. from my home network:

# pdsh -g all uptime
kali:  12:03:18 up 33 days, 23 min,  3 users,  load average: 0.21, 0.06, 0.06
hanuman:  12:03:37 up 10 days, 21:59,  2 users,  load average: 0.04, 0.05, 0.05
indra:  12:02:57 up 13 days,  1:12,  3 users,  load average: 0.30, 0.26, 0.30
ganesh:  12:03:10 up 6 days, 10:11,  6 users,  load average: 1.18, 1.34, 1.35

You can define hosts and groups of hosts in a file called /etc/genders and then specify the host group with pdsh -g instead of pdsh -w . e.g. with an /etc/genders file like this:

server1  all,web
server2  all,web
server3  all,mail
server4  all,mail
server5  all,mysql
server6  all,mysql

pdsh -g all uname -a will run uname -a on all servers. pdsh -g web uptime will run uptime only on server1 and server 2. pdsh -g web,mysql df -h / will run df on servers 1, 2, 5, and 6. and so on.

BTW, one odd thing about pdsh is that it is configured to use rsh by default instead of ssh . You need to either:

  1. use -R ssh on the pdsh command line (e.g. pdsh -R ssh -w server[0-9] ...
  2. export PDSH_RCMD_TYPE=ssh before running pdsh
  3. run echo ssh > /etc/pdsh/rcmd_default to set ssh as the permanent default.

If pdsh is not packaged for your distro, you can find it at LLNL: https://computing.llnl.gov/linux/pdsh.html


There are several other tools that do the same basic job as pdsh . I've tried several of them and found that they're generally more hassle to set up and use. pdsh pretty much just works with zero or minimal configuration.

[Feb 27, 2021] AIX for System Administrators

Feb 27, 2021 | aix4admins.blogspot.com

dsh -q displays the values of the dsh variables (DSH_NODE_LIST, DCP_NODE_RCP...)
dsh <command> runs command on each server in DSH_NODE_LIST
dsh <command> | dshbak same as above, just formats output to separate each host
dsh -w aix1,aix2 <command> execute command on the given servers (dsh -w aix1,aix2 "oslevel -s")
dsh -e <script> to run the given script on each server
(for me it was faster to dcp and after run the script with dsh on the remote server)

dcp <file> <location> copies a file to the given location (without location home dir will be used)

dping -n aix1, aix2 do a ping on the listed servers
dping -f <filename> do a ping for all servers given in the file (-f)

[Feb 20, 2021] Improve your productivity with this Linux keyboard tool - Opensource.com

Feb 20, 2021 | opensource.com

AutoKey is an open source Linux desktop automation tool that, once it's part of your workflow, you'll wonder how you ever managed without. It can be a transformative tool to improve your productivity or simply a way to reduce the physical stress associated with typing.

This article will look at how to install and start using AutoKey, cover some simple recipes you can immediately use in your workflow, and explore some of the advanced features that AutoKey power users may find attractive.

Install and set up AutoKey

AutoKey is available as a software package on many Linux distributions. The project's installation guide contains directions for many platforms, including building from source. This article uses Fedora as the operating platform.

AutoKey comes in two variants: autokey-gtk, designed for GTK-based environments such as GNOME, and autokey-qt, which is Qt-based.

You can install either variant from the command line:

sudo dnf install autokey-gtk

Once it's installed, run it by using autokey-gtk (or autokey-qt ).

Explore the interface

Before you set AutoKey to run in the background and automatically perform actions, you will first want to configure it. Bring up the configuration user interface (UI):

autokey-gtk -c

AutoKey comes preconfigured with some examples. You may wish to leave them while you're getting familiar with the UI, but you can delete them if you wish.

[Image: AutoKey's default configuration (Matt Bargenquast, CC BY-SA 4.0)]

The left pane contains a folder-based hierarchy of phrases and scripts. Phrases are text that you want AutoKey to enter on your behalf. Scripts are dynamic, programmatic equivalents that can be written using Python and achieve basically the same result of making the keyboard send keystrokes to an active window.

The right pane is where the phrases and scripts are built and configured.

Once you're happy with your configuration, you'll probably want to run AutoKey automatically when you log in so that you don't have to start it up every time. You can configure this in the Preferences menu ( Edit -> Preferences ) by selecting Automatically start AutoKey at login .

[Image: Automatically start AutoKey at login (Matt Bargenquast, CC BY-SA 4.0)]

Correct common typos with AutoKey

Correcting common typos is an easy job for AutoKey. For example, I consistently type "gerp" instead of "grep." Here's how to configure AutoKey to fix these types of problems for you.

Create a new subfolder where you can group all your "typo correction" configurations. Select My Phrases in the left pane, then File -> New -> Subfolder . Name the subfolder Typos .

Create a new phrase in File -> New -> Phrase , and call it "grep."

Configure AutoKey to insert the correct word by highlighting the phrase "grep" then entering "grep" in the Enter phrase contents section (replacing the default "Enter phrase contents" text).

Next, set up how AutoKey triggers this phrase by defining an Abbreviation. Click the Set button next to Abbreviations at the bottom of the UI.

In the dialog box that pops up, click the Add button and add "gerp" as a new abbreviation. Leave Remove typed abbreviation checked; this is what instructs AutoKey to replace any typed occurrence of the word "gerp" with "grep." Leave Trigger when typed as part of a word unchecked so that if you type a word containing "gerp" (such as "fingerprint"), it won't attempt to turn that into "fingreprint." It will work only when "gerp" is typed as an isolated word.

[Jan 27, 2021] Make Bash history more useful with these tips by Seth Kenlon

Notable quotes:
"... Manipulating history is usually less dangerous than it sounds, especially when you're curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's often best to use your session history to record your commands because, by slotting them into your history, you're running them and thereby testing the process. Very often, documenting without doing leads to overlooking small steps or writing minor details wrong. ..."
Jun 25, 2020 | opensource.com

To block adding a command to the history entries, you can place a space before the command, as long as you have ignorespace in your HISTCONTROL environment variable:
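A minimal sketch of that setting (the echo line is just a placeholder command):

$ export HISTCONTROL=ignorespace
$  echo "this will not be recorded"      # note the leading space

You can also delete a specific entry after the fact with the -d option: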

$ history | tail
535 echo "foo"
536 echo "bar"
$ history -d 536
$ history | tail
535 echo "foo"

You can clear your entire session history with the -c option:

$ history -c
$ history
$

History lessons

Manipulating history is usually less dangerous than it sounds, especially when you're curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's often best to use your session history to record your commands because, by slotting them into your history, you're running them and thereby testing the process. Very often, documenting without doing leads to overlooking small steps or writing minor details wrong.

Use your history sessions as needed, and exercise your power over history wisely. Happy history hacking!

[Jan 03, 2021] 9 things to do in your first 10 minutes on a new to you server

Jan 03, 2021 | opensource.com

1. First contact

As soon as I log into a server, the first thing I do is check whether it has the operating system, kernel, and hardware architecture needed for the tests I will be running. I often check how long a server has been up and running. While this does not matter very much for a test system because it will be rebooted multiple times, I still find this information helpful.

Use the following commands to get this information. I mostly use Red Hat Linux for testing, so if you are using another Linux distro, use *-release in the filename instead of redhat-release :

cat /etc/redhat-release
uname -a
hostnamectl
uptime

2. Is anyone else on board?

Once I know that the machine meets my test needs, I need to ensure no one else is logged into the system at the same time running their own tests. Although it is highly unlikely, given that the provisioning system takes care of this for me, it's still good to check once in a while -- especially if it's my first time logging into a server. I also check whether there are other users (other than root) who can access the system.

Use the following commands to find this information. The last command looks for users in the /etc/passwd file who have shell access; it skips other services in the file that do not have shell access or have a shell set to nologin :

who
who -Hu
grep 'sh$' /etc/passwd

3. Physical or virtual machine

Now that I know I have the machine to myself, I need to identify whether it's a physical machine or a virtual machine (VM). If I provisioned the machine myself, I could be sure that I have what I asked for. However, if you are using a machine that you did not provision, you should check whether the machine is physical or virtual.

Use the following commands to identify this information. If it's a physical system, you will see the vendor's name (e.g., HP, IBM, etc.) and the make and model of the server; whereas, in a virtual machine, you should see KVM, VirtualBox, etc., depending on what virtualization software was used to create the VM:

dmidecode -s system-manufacturer
dmidecode -s system-product-name
lshw -c system | grep product | head -1
cat /sys/class/dmi/id/product_name
cat /sys/class/dmi/id/sys_vendor

4. Hardware

Because I often test hardware connected to the Linux machine, I usually work with physical servers, not VMs. On a physical machine, my next step is to identify the server's hardware capabilities -- for example, what kind of CPU is running, how many cores does it have, which flags are enabled, and how much memory is available for running tests. If I am running network tests, I check the type and capacity of the Ethernet or other network devices connected to the server.

Use the following commands to display the hardware connected to a Linux server. Some of the commands might be deprecated in newer operating system versions, but you can still install them from yum repos or switch to their equivalent new commands:

lscpu or cat /proc/cpuinfo
lsmem or cat /proc/meminfo
ifconfig -a
ethtool <devname>
lshw
lspci
dmidecode

5. Installed software

Testing software always requires installing additional dependent packages, libraries, etc. However, before I install anything, I check what is already installed (including what version it is), as well as which repos are configured, so I know where the software comes from, and I can debug any package installation issues.

Use the following commands to identify what software is installed:

rpm -qa
rpm -qa | grep <pkgname>
rpm -qi <pkgname>
yum repolist
yum repoinfo
yum install <pkgname>
ls -l /etc/yum.repos.d/

6. Running processes and services

Once I check the installed software, it's natural to check what processes are running on the system. This is crucial when running a performance test on a system -- if a running process, daemon, test software, etc. is eating up most of the CPU/RAM, it makes sense to stop that process before running the tests. This also checks that the processes or daemons the test requires are up and running. For example, if the tests require httpd to be running, the service to start the daemon might not have run even if the package is installed.

Use the following commands to identify running processes and enabled services on your system:

pstree -pa 1
ps -ef
ps auxf
systemctl

7. Network connections

Today's machines are heavily networked, and they need to communicate with other machines or services on the network. I identify which ports are open on the server, if there are any connections from the network to the test machine, if a firewall is enabled, and if so, is it blocking any ports, and which DNS servers the machine talks to.

Use the following commands to identify network services-related information. If a deprecated command is not available, install it from a yum repo or use the equivalent newer command:

netstat -tulpn
netstat -anp
lsof -i
ss
iptables -L -n
cat /etc/resolv.conf

8. Kernel

When doing systems testing, I find it helpful to know kernel-related information, such as the kernel version and which kernel modules are loaded. I also list any tunable kernel parameters and what they are set to and check the options used when booting the running kernel.

Use the following commands to identify this information:

uname -r
cat /proc/cmdline
lsmod
modinfo <module>
sysctl -a
cat /boot/grub2/grub.cfg

[Jan 02, 2021] 10 shortcuts to master bash by Guest Contributor

TechRepublic

If you've ever typed a command at the Linux shell prompt, you've probably already used bash -- after all, it's the default command shell on most modern GNU/Linux distributions.

The bash shell is the primary interface to the Linux operating system -- it accepts, interprets and executes your commands, and provides you with the building blocks for shell scripting and automated task execution.

Bash's unassuming exterior hides some very powerful tools and shortcuts. If you're a heavy user of the command line, these can save you a fair bit of typing. This document outlines 10 of the most useful tools:

  1. Easily recall previous commands

    Bash keeps track of the commands you execute in a history buffer, and allows you to recall previous commands by cycling through them with the Up and Down cursor keys. For even faster recall, "speed search" previously-executed commands by typing the first few letters of the command followed by the key combination Ctrl-R; bash will then scan the command history for matching commands and display them on the console. Type Ctrl-R repeatedly to cycle through the entire list of matching commands.

  2. Use command aliases

    If you always run a command with the same set of options, you can have bash create an alias for it. This alias will incorporate the required options, so that you don't need to remember them or manually type them every time. For example, if you always run ls with the -l option to obtain a detailed directory listing, you can use this command:

    bash> alias ls='ls -l' 

    To create an alias that automatically includes the -l option. Once this alias has been created, typing ls at the bash prompt will invoke the alias and produce the ls -l output.

    You can obtain a list of available aliases by invoking alias without any arguments, and you can delete an alias with unalias.

  3. Use filename auto-completion

    Bash supports filename auto-completion at the command prompt. To use this feature, type the first few letters of the file name, followed by Tab. bash will scan the current directory, as well as all other directories in the search path, for matches to that name. If a single match is found, bash will automatically complete the filename for you. If multiple matches are found, you will be prompted to choose one.

  4. Use key shortcuts to efficiently edit the command line

    Bash supports a number of keyboard shortcuts for command-line navigation and editing. The Ctrl-A key shortcut moves the cursor to the beginning of the command line, while the Ctrl-E shortcut moves the cursor to the end of the command line. The Ctrl-W shortcut deletes the word immediately before the cursor, while the Ctrl-K shortcut deletes everything immediately after the cursor. You can undo a deletion with Ctrl-Y.

  5. Get automatic notification of new mail

    You can configure bash to automatically notify you of new mail, by setting the $MAILPATH variable to point to your local mail spool. For example, the command:

    bash> MAILPATH='/var/spool/mail/john'
    bash> export MAILPATH 

    Causes bash to print a notification on john's console every time a new message is appended to John's mail spool.

  6. Run tasks in the background

    Bash lets you run one or more tasks in the background, and selectively suspend or resume any of the current tasks (or "jobs"). To run a task in the background, add an ampersand (&) to the end of its command line. Here's an example:

    bash> tail -f /var/log/messages &
    [1] 614

    Each task backgrounded in this manner is assigned a job ID, which is printed to the console. A task can be brought back to the foreground with the command fg jobnumber, where jobnumber is the job ID of the task you wish to bring to the foreground. Here's an example:

    bash> fg 1

    A list of active jobs can be obtained at any time by typing jobs at the bash prompt.

  7. Quickly jump to frequently-used directories

    You probably already know that the $PATH variable lists bash's "search path" -- the directories it will search when it can't find the requested file in the current directory. However, bash also supports the $CDPATH variable, which lists the directories the cd command will look in when attempting to change directories. To use this feature, assign a directory list to the $CDPATH variable, as shown in the example below:

    bash> CDPATH='.:~:/usr/local/apache/htdocs:/disk1/backups'
    bash> export CDPATH

    Now, whenever you use the cd command, bash will check all the directories in the $CDPATH list for matches to the directory name.

  8. Perform calculations

    Bash can perform simple arithmetic operations at the command prompt. To use this feature, simply type in the arithmetic expression you wish to evaluate at the prompt within double parentheses, as illustrated below. Bash will attempt to perform the calculation and return the answer.

    bash> echo $((16/2))
    8
  9. Customise the shell prompt

    You can customise the bash shell prompt to display -- among other things -- the current username and host name, the current time, the load average and/or the current working directory. To do this, alter the $PS1 variable, as below:

    bash> PS1='\u@\h:\w \@> '
    
    bash> export PS1
    root@medusa:/tmp 03:01 PM>

    This will display the name of the currently logged-in user, the host name, the current working directory and the current time at the shell prompt. You can obtain a list of symbols understood by bash from its manual page.

  10. Get context-specific help

    Bash comes with help for all built-in commands. To see a list of all built-in commands, type help. To obtain help on a specific command, type help command, where command is the command you need help on. Here's an example:

    bash> help alias
    ...some help text...

    Obviously, you can obtain detailed help on the bash shell by typing man bash at your command prompt at any time.

[Jan 02, 2021] 11 Linux command line guides you shouldn't be without - Enable Sysadmin

Jan 02, 2021 | www.redhat.com

Here are some brief comments about each topic:

  1. How to use the Linux mtr command - The mtr (My Traceroute) command is a major improvement over the old traceroute and is one of my first go-to tools when troubleshooting network problems.
  2. Linux for beginners: 10 commands to get you started at the terminal - Everyone who works on the Linux CLI needs to know some basic commands for moving around the directory structure and exploring files and directories. This article covers those commands in a simple way that places them into a usable context for those of us new to the command line.
  3. Linux for beginners: 10 more commands for manipulating files - One of the most common tasks we all do, whether as a Sysadmin or a regular user, is to manage and manipulate files.
  4. More stupid Bash tricks: Variables, find, file descriptors, and remote operations - These tricks are actually quite smart, and if you want to learn the basics of Bash along with standard IO streams (STDIO), this is a good place to start.
  5. Getting started with systemctl - Do you need to enable, disable, start, and stop systemd services? Learn the basics of systemctl – a powerful tool for managing systemd services and more.
  6. How to use the uniq command to process lists in Linux - Ever had a list in which items can appear multiple times where you only need to know which items appear in the list but not how many times?
  7. A beginner's guide to gawk - gawk is a command line tool that can be used for simple text processing in Bash and other scripts. It is also a powerful language in its own right.
  8. An introduction to the diff command - Sometimes it is important to know the difference.
  9. Looking forward to Linux network configuration in the initial ramdisk (initrd) - The initrd is a critical part of the very early boot process for Linux. Here is a look at what it is and how it works.
  10. Linux troubleshooting: Setting up a TCP listener with ncat - Network troubleshooting sometimes requires tracking specific network packets based on complex filter criteria or just determining whether a connection can be made.
  11. Hard links and soft links in Linux explained - The use cases for hard and soft links can overlap but it is how they differ that makes them both important – and cool.

[Dec 30, 2020] Lazy Linux: 10 essential tricks for admins by Vallard Benincosa

The original link to this article by Vallard Benincosa, published on 20 Jul 2008 on IBM developerWorks, disappeared due to yet another reorganization of the IBM website that killed old content. Money-greedy incompetents are what the current upper IBM managers really are...
Jul 20, 2008 | benincosa.com

How to be a more productive Linux systems administrator

Learn these 10 tricks and you'll be the most powerful Linux® systems administrator in the universe...well, maybe not the universe, but you will need these tips to play in the big leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples accompany each trick, so you can duplicate them on your own systems.

The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?

The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin's bag of tricks. These tips will save you time -- and even if you don't get paid more money to be more efficient, you'll at least have more time to play Halo.

Trick 1: Unmounting the unresponsive DVD drive

The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it will eject immediately. He then complains that, in most enterprise Linux servers, if a process is running in that directory, then the ejection won't happen. For too long as a Linux administrator, I would reboot the machine and get my disk on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD drive. But this is ineffective.

Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:

# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

Now open up a second terminal and try to eject the DVD drive:

# eject

You'll get a message like:

umount: /media/cdrom: device is busy

Before you free it, let's find out who is using it.

# fuser /media/cdrom

You see the process was running and, indeed, it is our fault we can not eject the disk.

Now, if you are root, you can exercise your godlike powers and kill processes:

# fuser -k /media/cdrom

Boom! Just like that, freedom. Now solemnly unmount the drive:

# eject

fuser is good.

Trick 2: Getting your screen back when it's hosed

Try this:

# cat /bin/cat

Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?

You type reset . But wait you say, typing reset is too close to typing reboot or shutdown . Your palms start to sweat -- especially if you are doing this on a production machine.

Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:

# reset

Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.

Trick 3: Collaboration with screen

David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."

"Fine," you say. "What machine are you on?"

David responds: " Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:

# su - david

Then you go over to posh:

# ssh posh

Once you are there, you run:

# screen -S foo

Then you holler at David:

"Hey David, run the following command on your terminal: # screen -x foo ."

This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.

At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.

The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.

But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type: Ctrl-A D . (I mean, hold down the Ctrl key and strike the A key. Then push the D key.)

You can then reattach by running the screen -x foo command again.

Trick 4: Getting back the root password

You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get on the machine and change the password. This doesn't work in all cases (like if you made a GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS Linux example.

First reboot the system. When it reboots you'll come to the GRUB screen as shown in Figure 1. Move the arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.


Figure 1. GRUB screen after reboot

Next, select the kernel that will boot with the arrow keys, and type E to edit the kernel line. You'll then see something like Figure 2:


Figure 2. Ready to edit the kernel line

Use the arrow key again to highlight the line that begins with kernel , and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments as shown in Figure 3:


Figure 3. Append the argument with the number 1

Then press Enter , B , and the kernel will boot up to single-user mode. Once here you can run the passwd command, changing password for user root:

sh-3.00# passwd
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

Now you can reboot, and the machine will boot up with your new password.

Trick 5: SSH back door

Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come into you.

In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door . To use it, you'll need a machine on the Internet that you can use as an intermediary.

In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.


Figure 4. Poking a hole in the firewall

Here's how to proceed:

  1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.
  2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you'll forward instructions of port 2222 on blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're not putting ginger out on the Internet naked.

    You can do this with the following syntax:

    ~# ssh -R 2222:localhost:22 [email protected]

    Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

    thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

    to keep the machine busy. And minimize the window.

  3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

    root@tech:~# ssh [email protected] .

  4. Once tech is on the blackbox, they can SSH to ginger using the following command:

    thedude@blackbox:~$: ssh -p 2222 root@localhost

  5. Tech will then be prompted for a password. They should enter the root password of ginger.
  6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.)
Trick 6: Remote VNC session through an SSH tunnel

VNC or virtual network computing has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

For example, suppose in Trick 5 , ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

  1. Start a VNC server session on ginger. This is done by running something like:

    root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99

    The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are using a really slow connection, setting the depth to 8 may be a better option. Using :99 specifies the port the VNC server will be accessible from. The VNC protocol starts at 5900, so specifying :99 means the server is accessible from port 5999.

    When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

  2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

    root@ginger:~# ssh -R 5999:localhost:5999 [email protected]

    Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

    thedude@blackbox:~$ vncviewer localhost:99

    That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

  3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:

    root@tech:~# ssh -L 5999:localhost:5999 [email protected]

    This time the SSH flag we used was -L , which instead of pushing 5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

  4. From tech, VNC to ginger by running the command:

    root@tech:~# vncviewer localhost:99 .

    Tech will now have a VNC session directly to ginger.

While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.

Let me add a trick to this trick: If tech was running the Windows® operating system and didn't have a command-line SSH client, then tech can run Putty. Putty can be set to forward SSH ports by looking in the options in the sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like in Figure 5.


Figure 5. Putty can forward SSH ports for tunneling

If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.

Trick 7: Checking your bandwidth

Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS mount ginger's shared filesystem.

The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.

So they do this. But now the question is: How much bandwidth do they really have?

Gigabit Ethernet has a theoretical limit of 128MBps. Where does that number come from? Well,

1Gb = 1024Mb ; 1024Mb/8 = 128MB ; "b" = "bits," "B" = "bytes"

But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:

# wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz

You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user, which is visible on both nodes:

tar zxvf iperf*gz
cd iperf-2.0.2
./configure -prefix=/home/bob/perf
make
make install

On ginger, run:

# /home/bob/perf/bin/iperf -s -f M

This machine will act as the server and print out performance speeds in MBps.

On the beckham node, run:

# /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60

You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.

In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.

I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!

Trick 8: Command-line scripting and utilities

A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk , grep , and sed . There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.

For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:

# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
done >>/etc/hosts

Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.

As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.

Assume the SSH is set up to authenticate without a password. Then run:

# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort | uniq

A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.

First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:

free -tm | grep Mem | awk '{print $2}'

That command says to run free to report the memory sizes in megabytes, use grep to keep only the line containing Mem, and use awk to print the second field of that line, which is the total memory in the node.

This operation is performed on every node.

Once you have performed the command on every node, the entire output of all 200 nodes is piped to the sort command so that all the memory values are sorted.

Finally, you eliminate duplicates with the uniq command. This command will result in one of the following cases: a single value, meaning every node reports the same amount of memory, or several values, meaning the memory size differs from node to node.

This command isn't perfect. If you find that a value of memory is different than what you expect, you won't know which node it was on or how many nodes there were. Another command may need to be issued for that.
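A minimal sketch of such a follow-up check, assuming the same passwordless SSH setup as above, prints each node name next to its memory size so the odd node stands out:

# for num in $(seq -w 200); do echo -n "n$num "; ssh n$num free -tm | grep Mem | awk '{print $2}'; done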

What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. This is its real value: speed to do a quick-and-dirty check.

Trick 9: Spying on the console

Some software prints error messages to the console that may not necessarily show up on your SSH session. Using the vcs devices can let you examine these. From within an SSH session, run the following command on a remote server: # cat /dev/vcs1 . This will show you what is on the first console. You can also look at the other virtual terminals using 2, 3, etc. If a user is typing on the remote system, you'll be able to see what he typed.
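As a quick sketch, the first few consoles can be dumped in one pass from your SSH session (root privileges are assumed):

# for n in 1 2 3; do echo "=== /dev/vcs$n ==="; cat /dev/vcs$n; done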

In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.

Trick 10: Random system information collection

In Trick 8 , you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.

First, let's gather information about the processor. This is easily done as follows:

# cat /proc/cpuinfo

This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.

A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:

# cat /proc/cpuinfo | grep processor | wc -l

I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.

Another piece of information I may require is disk information. This can be gotten with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes. # df -h also shows how the disk was partitioned.

And to end the list, here's a way to look at the firmware of your system -- a method to get the BIOS level and the firmware on the NIC.

To check the BIOS version, you can run the dmidecode command. Unfortunately, you can't easily grep for the information, so piping it through less is the most practical way to read it. On my Lenovo T61 laptop, the output looks like this:

#dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...

This is much more efficient than rebooting your machine and looking at the POST output.
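If your version of dmidecode supports keyword queries (the -s option in recent releases), you can pull a single value directly instead of paging through the whole output; a small sketch:

# dmidecode -s bios-version
# dmidecode -s bios-release-date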

To examine the driver and firmware versions of your Ethernet adapter, run ethtool :

# ethtool -i eth0
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: 0.3-0

Conclusion

There are thousands of tricks you can learn from someone who's an expert at the command line. The best ways to learn are to:

I hope at least one of these tricks helped you learn something you didn't know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don't like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.

About the author

Vallard Benincosa is a lazy Linux Certified IT professional working for the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.

[Dec 10, 2020] Possibility to change only year or only month in date

Jan 01, 2017 | unix.stackexchange.com

Asked by SHW on Aug 22 '14.

Christian Severin , 2017-09-29 09:47:52

You can use e.g. date --set='-2 years' to set the clock back two years, leaving all other elements identical. You can change month and day of month the same way. I haven't checked what happens if that calculation results in a datetime that doesn't actually exist, e.g. during a DST switchover, but the behaviour ought to be identical to the usual "set both date and time to concrete values" behaviour. – Christian Severin Sep 29 '17 at 9:47

Michael Homer , 2014-08-22 09:44:23

Use date -s :
date -s '2014-12-25 12:34:56'

Run that as root or under sudo . Changing only one of the year/month/day is more of a challenge and will involve repeating bits of the current date. There are also GUI date tools built in to the major desktop environments, usually accessed through the clock.

To change only part of the time, you can use command substitution in the date string:

date -s "2014-12-25 $(date +%H:%M:%S)"

will change the date, but keep the time. See man date for formatting details to construct other combinations: the individual components are %Y, %m, %d, %H, %M, and %S.
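Building on the same idea, here is a sketch of changing only the year (2015 is just an example value) while reusing the current month, day, and time via command substitution; run it as root:

date -s "2015-$(date +%m-%d) $(date +%H:%M:%S)"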


I don't want to change the time – SHW Aug 22 '14 at 9:51

Michael Homer , 2014-08-22 09:55:00

There's no option to do that. You can use date -s "2014-12-25 $(date +%H:%M:%S)" to change the date and reuse the current time, though. – Michael Homer Aug 22 '14 at 9:55

chaos , 2014-08-22 09:59:58

System time

You can use date to set the system date. The GNU implementation of date (as found on most non-embedded Linux-based systems) accepts many different formats to set the time; here are a few examples:

set only the year:

date -s 'next year'
date -s 'last year'

set only the month:

date -s 'last month'
date -s 'next month'

set only the day:

date -s 'next day'
date -s 'tomorrow'
date -s 'last day'
date -s 'yesterday'
date -s 'friday'

set all together:

date -s '2009-02-13 11:31:30' #that's a magical timestamp

Hardware time

Now the system time is set, but you may want to sync it with the hardware clock:

Use --show to print the hardware time:

hwclock --show

You can set the hardware clock to the current system time:

hwclock --systohc

Or the system time to the hardware clock

hwclock --hctosys

> ,

garethTheRed , 2014-08-22 09:57:11

You change the date with the date command. However, the command expects a full date as the argument:
# date -s "20141022 09:45"
Wed Oct 22 09:45:00 BST 2014

To change part of the date, output the current date with the date part that you want to change as a string and all others as date formatting variables. Then pass that to the date -s command to set it:

# date -s "$(date +'%Y12%d %H:%M')"
Mon Dec 22 10:55:03 GMT 2014

changes the month to the 12th month - December.

The date formats are the same as above: %Y for the year, %m for the month, %d for the day of month, %H for the hour, %M for the minute, and %S for the second.

Balmipour , 2016-03-23 09:10:21

For those like me running ESXi 5.1, here's what the system answered:
~ # date -s "2016-03-23 09:56:00"
date: invalid date '2016-03-23 09:56:00'

I had to use a specific ESX command instead:

esxcli system time set  -y 2016 -M 03 -d 23  -H 10 -m 05 -s 00

Hope it helps!


Brook Oldre , 2017-09-26 20:03:34

I used the date command and time format listed below to successfully set the date from the terminal shell command performed on Android Things, which uses the Linux kernel.

date 092615002017.00

MMDDHHMMYYYY.SS

MM - Month - 09
DD - Day - 26
HH - Hour - 15
MM - Min - 00
YYYY - Year - 2017
.SS - Second - 00


[Nov 22, 2020] Programmable editor as sysadmin tool

Highly recommended!
Oct 05, 2020 | perlmonks.org

likbez

Classic programmable editors (vi/vim, emacs, THE, etc.) can serve as powerful sysadmin tools.

There are also some newer editors that use Lua as the scripting language, but none with Perl as a scripting language. See https://www.slant.co/topics/7340/~open-source-programmable-text-editors

Here, for example, is a fragment from an old collection of hardening scripts called Titan, written for Solaris by Brad M. Powell. The example below uses ed, which is the simplest, but probably not the optimal, choice unless your primary editor is VIM.

FixHostsEquiv() {

if [ -f /etc/hosts.equiv -a -s /etc/hosts.equiv ]; then
      t_echo 2 " /etc/hosts.equiv exists and is not empty. Saving a copy..."
      /bin/cp /etc/hosts.equiv /etc/hosts.equiv.ORIG

        if grep -s "^+$" /etc/hosts.equiv
        then
        ed - /etc/hosts.equiv <<- !
        g/^+$/d
        w
        q
        !
        fi
else
        t_echo 2 "        No /etc/hosts.equiv -  PASSES CHECK"
        exit 1
fi
}

For VIM/Emacs users the main benefit here is that you will know your editor better, instead of inventing/learning "yet another tool." That actually also is an argument against Ansible and friends: unless you operate a cluster or other sizable set of servers, why try to kill a bird with a cannon. Positive return on investment probably starts if you manage over 8 or even 16 boxes.

Perl can also be used. But I would recommend slurping the file into an array and operating on lines, as in an editor; a regex over the whole text is more difficult to write correctly than a regex for a single line, although experts have no difficulty using just that. But we seldom acquire skills we can do without :-)

On the other hand, that gives you a chance to learn the splice function ;-)

If the files are basically identical and need only slight customization, you can use the patch utility with pdsh, but you need to learn the ropes. Like Perl, the patch utility was written by Larry Wall and is a very flexible tool for such tasks. First collect the files from your servers into a central directory with pdsh/pdcp (which I think is a standard RPM on RHEL and other Linuxes) or another tool, then create a diff against one server to which you have already applied the change (diff is your command language at this point), verify on another server that the diff produces the right result, apply it, and then distribute the resulting files back to each server using pdsh/pdcp again. If you have a common NFS/GPFS/Lustre filesystem for all servers, this is even simpler, as you can store both the tree and the diffs on the shared filesystem.
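A minimal sketch of that workflow, assuming pdsh/pdcp (and its reverse-copy companion rpdcp) are installed and passwordless SSH works; the node names, file names, and paths are purely illustrative:

# 1. Pull the config file from every node into a central directory (rpdcp appends the hostname)
rpdcp -w n[001-016] /etc/ntp.conf /root/cfg/
# 2. Diff an unmodified copy against the copy from the node that was already fixed (n001 here)
diff -u /root/cfg/ntp.conf.n002 /root/cfg/ntp.conf.n001 > /root/cfg/ntp.diff
# 3. Verify the diff applies cleanly to another node's copy before touching anything
patch --dry-run /root/cfg/ntp.conf.n003 /root/cfg/ntp.diff
# 4. Push the corrected file back out to the remaining nodes
pdcp -w n[002-016] /root/cfg/ntp.conf.n001 /etc/ntp.conf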

The same central repository of config files can be used with vi and other approaches creating "poor man Ansible" for you .

[Oct 05, 2020] Modular Perl in Red Hat Enterprise Linux 8 - Red Hat Developer

Notable quotes:
"... perl-DBD-SQLite ..."
"... perl-DBD-SQLite:1.58 ..."
"... perl-libwww-perl ..."
"... multi-contextual ..."
Oct 05, 2020 | developers.redhat.com

Modular Perl in Red Hat Enterprise Linux 8 By Petr Pisar May 16, 2019

Red Hat Enterprise Linux 8 comes with modules as a packaging concept that allows system administrators to select the desired software version from multiple packaged versions. This article will show you how to manage Perl as a module.

Installing from a default stream

Let's install Perl:

# yum --allowerasing install perl
Last metadata expiration check: 1:37:36 ago on Tue 07 May 2019 04:18:01 PM CEST.
Dependencies resolved.
==========================================================================================
 Package                       Arch    Version                Repository             Size
==========================================================================================
Installing:
 perl                          x86_64  4:5.26.3-416.el8       rhel-8.0.z-appstream   72 k
Installing dependencies:
[ ]
Transaction Summary
==========================================================================================
Install  147 Packages

Total download size: 21 M
Installed size: 59 M
Is this ok [y/N]: y
[ ]
  perl-threads-shared-1.58-2.el8.x86_64                                                   

Complete!

Next, check which Perl you have:

$ perl -V:version
version='5.26.3';

You have Perl version 5.26.3. This is the default version supported for the next 10 years and, if you are fine with it, you don't have to know anything about modules. But what if you want to try a different version?

Discovering streams

Let's find out what Perl modules are available using the yum module list command:

# yum module list
Last metadata expiration check: 1:45:10 ago on Tue 07 May 2019 04:18:01 PM CEST.
[ ]
Name                 Stream           Profiles     Summary
[ ]
parfait              0.5              common       Parfait Module
perl                 5.24             common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl                 5.26 [d]         common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl-App-cpanminus   1.7044 [d]       common [d]   Get, unpack, build and install CPAN mod
                                                   ules
perl-DBD-MySQL       4.046 [d]        common [d]   A MySQL interface for Perl
perl-DBD-Pg          3.7 [d]          common [d]   A PostgreSQL interface for Perl
perl-DBD-SQLite      1.58 [d]         common [d]   SQLite DBI driver
perl-DBI             1.641 [d]        common [d]   A database access API for Perl
perl-FCGI            0.78 [d]         common [d]   FastCGI Perl bindings
perl-YAML            1.24 [d]         common [d]   Perl parser for YAML
php                  7.2 [d]          common [d],  PHP scripting language
                                      devel, minim
                                      al
[ ]

Here you can see a Perl module is available in versions 5.24 and 5.26. Those are called streams in the modularity world, and they denote an independent variant, usually a different version, of the same software stack. The [d] flag marks a default stream. That means if you do not explicitly enable a different stream, the default one will be used. That explains why yum installed Perl 5.26.3 and not some of the 5.24 micro versions.

Now suppose you have an old application that you are migrating from Red Hat Enterprise Linux 7, which was running in the rh-perl524 software collection environment, and you want to give it a try on Red Hat Enterprise Linux 8. Let's try Perl 5.24 on Red Hat Enterprise Linux 8.

Enabling a Stream

First, switch the Perl module to the 5.24 stream:

# yum module enable perl:5.24
Last metadata expiration check: 2:03:16 ago on Tue 07 May 2019 04:18:01 PM CEST.
Problems in request:
Modular dependency problems with Defaults:

 Problem 1: conflicting requests
  - module freeradius:3.0:8000020190425181943:75ec4169-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
  - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
  - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
 Problem 2: conflicting requests
  - module freeradius:3.0:820190131191847:fbe42456-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
  - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
  - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
Dependencies resolved.
==========================================================================================
 Package              Arch                Version              Repository            Size
==========================================================================================
Enabling module streams:
 perl                                     5.24

Transaction Summary
==========================================================================================

Is this ok [y/N]: y
Complete!

Switching module streams does not alter installed packages (see 'module enable' in dnf(8)
for details)

Here you can see a warning that the freeradius:3.0 stream is not compatible with perl:5.24 . That's because FreeRADIUS was built for Perl 5.26 only. Not all modules are compatible with all other modules.

Next, you can see a confirmation for enabling the Perl 5.24 stream. And, finally, there is another warning about installed packages. The last warning means that the system can still have RPM packages installed from the 5.26 stream, and you need to sort that out explicitly.

Changing modules and changing packages are two separate phases. You can fix it by synchronizing the distribution content like this:

# yum --allowerasing distrosync
Last metadata expiration check: 0:00:56 ago on Tue 07 May 2019 06:33:36 PM CEST.
Modular dependency problems:

 Problem 1: module freeradius:3.0:8000020190425181943:75ec4169-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
  - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
  - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
  - conflicting requests
 Problem 2: module freeradius:3.0:820190131191847:fbe42456-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
  - module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
  - module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
  - conflicting requests
Dependencies resolved.
==========================================================================================
 Package           Arch   Version                              Repository            Size
==========================================================================================
[ ]
Downgrading:
 perl              x86_64 4:5.24.4-403.module+el8+2770+c759b41a
                                                               rhel-8.0.z-appstream 6.1 M
[ ]
Transaction Summary
==========================================================================================
Upgrade    69 Packages
Downgrade  66 Packages

Total download size: 20 M
Is this ok [y/N]: y
[ ]
Complete!

And try the perl command again:

$ perl -V:version
version='5.24.4';

Great! It works. We switched to a different Perl version, and the different Perl is still invoked with the perl command and is installed to a standard path ( /usr/bin/perl ). No scl enable incantation is needed, in contrast to the software collections.

You could notice the repeated warning about FreeRADIUS. A future YUM update is going to clean up the unnecessary warning. Despite that, I can show you that other Perl-ish modules are compatible with any Perl stream.

Dependent modules

Let's say the old application mentioned before is using the DBD::SQLite Perl module. (This nomenclature is a little ambiguous: Red Hat Enterprise Linux has modules; Perl has modules. If I want to emphasize the difference, I will say the Modularity modules or the CPAN modules.) So, let's install CPAN's DBD::SQLite module. Yum can search for a packaged CPAN module, so give it a try:

# yum --allowerasing install 'perl(DBD::SQLite)'
[ ]
Dependencies resolved.
==========================================================================================
 Package          Arch    Version                             Repository             Size
==========================================================================================
Installing:
 perl-DBD-SQLite  x86_64  1.58-1.module+el8+2519+e351b2a7     rhel-8.0.z-appstream  186 k
Installing dependencies:
 perl-DBI         x86_64  1.641-2.module+el8+2701+78cee6b5    rhel-8.0.z-appstream  739 k
Enabling module streams:
 perl-DBD-SQLite          1.58
 perl-DBI                 1.641

Transaction Summary
==========================================================================================
Install  2 Packages

Total download size: 924 k
Installed size: 2.3 M
Is this ok [y/N]: y
[ ]
Installed:
  perl-DBD-SQLite-1.58-1.module+el8+2519+e351b2a7.x86_64
  perl-DBI-1.641-2.module+el8+2701+78cee6b5.x86_64

Complete!

Here you can see DBD::SQLite CPAN module was found in the perl-DBD-SQLite RPM package that's part of perl-DBD-SQLite:1.58 module, and apparently it requires some dependencies from the perl-DBI:1.641 module, too. Thus, yum asked for enabling the streams and installing the packages.

Before playing with DBD::SQLite under Perl 5.24, take a look at the listing of the Modularity modules and compare it with what you saw the first time:

# yum module list
[ ]
parfait              0.5              common       Parfait Module
perl                 5.24 [e]         common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl                 5.26 [d]         common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl-App-cpanminus   1.7044 [d]       common [d]   Get, unpack, build and install CPAN mod
                                                   ules
perl-DBD-MySQL       4.046 [d]        common [d]   A MySQL interface for Perl
perl-DBD-Pg          3.7 [d]          common [d]   A PostgreSQL interface for Perl
perl-DBD-SQLite      1.58 [d][e]      common [d]   SQLite DBI driver
perl-DBI             1.641 [d][e]     common [d]   A database access API for Perl
perl-FCGI            0.78 [d]         common [d]   FastCGI Perl bindings
perl-YAML            1.24 [d]         common [d]   Perl parser for YAML
php                  7.2 [d]          common [d],  PHP scripting language
                                      devel, minim
                                      al
[ ]

Notice that perl:5.24 is enabled ( [e] ) and thus takes precedence over perl:5.26, which would otherwise be a default one ( [d] ). Other enabled Modularity modules are perl-DBD-SQLite:1.58 and perl-DBI:1.641. Those were enabled when you installed DBD::SQLite. These two modules have no other streams.

In general, any module can have multiple streams. At most, one stream of a module can be the default one. And, at most, one stream of a module can be enabled. An enabled stream takes precedence over a default one. If there is no enabled or a default stream, content of the module is unavailable.
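To check which streams are currently enabled on a system, the module listing can be filtered (the --enabled switch is a standard option of RHEL 8's dnf-based yum):

# yum module list --enabled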

If, for some reason, you need to disable a stream, even a default one, you do that with yum module disable MODULE:STREAM command.

Enough theory, back to some productive work. You are ready to test the DBD::SQLite CPAN module now. Let's create a test database, a foo table inside with one textual column called bar , and let's store a row with Hello text there:

$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test});
    $dbh->do(q{CREATE TABLE foo (bar text)});
    $sth=$dbh->prepare(q{INSERT INTO foo(bar) VALUES(?)});
    $sth->execute(q{Hello})'

Next, verify the Hello string was indeed stored by querying the database:

$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); print $dbh->selectrow_array(q{SELECT bar FROM foo}), qq{\n}'
Hello

It seems DBD::SQLite works.

Non-modular packages may not work with non-default streams

So far, everything is great and working. Now I will show what happens if you try to install an RPM package that has not been modularized and is thus compatible only with the default Perl, perl:5.26:

# yum --allowerasing install 'perl(LWP)'
[ ]
Error: 
 Problem: package perl-libwww-perl-6.34-1.el8.noarch requires perl(:MODULE_COMPAT_5.26.2), but none of the providers can be installed
  - cannot install the best candidate for the job
  - package perl-libs-4:5.26.3-416.el8.i686 is excluded
  - package perl-libs-4:5.26.3-416.el8.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Yum will report an error about the perl-libwww-perl RPM package being incompatible. The LWP CPAN module that is packaged as perl-libwww-perl is built only for Perl 5.26, and therefore the RPM dependencies cannot be satisfied. When the perl:5.24 stream is enabled, the packages from the perl:5.26 stream are masked and become unavailable. However, this masking does not apply to non-modular packages like perl-libwww-perl. There are plenty of packages that have not been modularized yet. If you need some of them to be available and compatible with a non-default stream (e.g., not only with perl:5.26 but also with perl:5.24), do not hesitate to contact the Red Hat support team with your request.

Resetting a module

Let's say you tested your old application and now you want to find out if it works with the new Perl 5.26.

To do that, you need to switch back to the perl:5.26 stream. Unfortunately, switching from an enabled stream back to a default or to a yet another non-default stream is not straightforward. You'll need to perform a module reset:

# yum module reset perl
[ ]
Dependencies resolved.
==========================================================================================
 Package              Arch                Version              Repository            Size
==========================================================================================
Resetting module streams:
 perl                                     5.24                                           

Transaction Summary
==========================================================================================

Is this ok [y/N]: y
Complete!

Well, that did not hurt. Now you can synchronize the distribution again to replace the 5.24 RPM packages with 5.26 ones:

# yum --allowerasing distrosync
[ ]
Transaction Summary
==========================================================================================
Upgrade    65 Packages
Downgrade  71 Packages

Total download size: 22 M
Is this ok [y/N]: y
[ ]

After that, you can check the Perl version:

$ perl -V:version
version='5.26.3';

And, check the enabled modules:

# yum module list
[ ]
parfait              0.5              common       Parfait Module
perl                 5.24             common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl                 5.26 [d]         common [d],  Practical Extraction and Report Languag
                                      minimal      e
perl-App-cpanminus   1.7044 [d]       common [d]   Get, unpack, build and install CPAN mod
                                                   ules
perl-DBD-MySQL       4.046 [d]        common [d]   A MySQL interface for Perl
perl-DBD-Pg          3.7 [d]          common [d]   A PostgreSQL interface for Perl
perl-DBD-SQLite      1.58 [d][e]      common [d]   SQLite DBI driver
perl-DBI             1.641 [d][e]     common [d]   A database access API for Perl
perl-FCGI            0.78 [d]         common [d]   FastCGI Perl bindings
perl-YAML            1.24 [d]         common [d]   Perl parser for YAML
php                  7.2 [d]          common [d],  PHP scripting language
                                      devel, minim
                                      al
[ ]

As you can see, we are back at square one. The perl:5.24 stream is not enabled, and perl:5.26 is the default and therefore preferred. Only the perl-DBD-SQLite:1.58 and perl-DBI:1.641 streams remain enabled. It does not matter much because those are the only streams. Nonetheless, you can reset them back using yum module reset perl-DBI perl-DBD-SQLite if you like.

Multi-context streams

What happened with the DBD::SQLite? It's still there and working:

$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); print $dbh->selectrow_array(q{SELECT bar FROM foo}), qq{\n}'
Hello

That is possible because the perl-DBD-SQLite module is built for both 5.24 and 5.26 Perls. We call these modules multi-contextual. That's the case for perl-DBD-SQLite or perl-DBI, but not for FreeRADIUS, which explains the warning you saw earlier. If you want to see these low-level details, such as which contexts are available, which dependencies are required, or which packages are contained in a module, you can use the yum module info MODULE:STREAM command.
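For example, to inspect the contexts, dependencies, and package list of the 5.24 stream (output omitted here):

# yum module info perl:5.24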

Afterword

I hope this tutorial shed some light on modules -- the fresh feature of Red Hat Enterprise Linux 8 that enables us to provide you with multiple versions of software on top of one Linux platform. If you need more details, please read the documentation accompanying the product (namely, the user-space component management document and the yum(8) manual page) or ask the support team for help.

[Jul 14, 2020] Important Linux -proc filesystem files you need to know - Enable Sysadmin

Jul 14, 2020 | www.redhat.com

The /proc files I find most valuable, especially for inherited system discovery, are cmdline, cpuinfo, meminfo, and version, each covered below.

And the most valuable of those are cpuinfo and meminfo.

Again, I'm not stating that other files don't have value, but these are the ones I've found that have the most value to me. For example, the /proc/uptime file gives you the system's uptime in seconds. For me, that's not particularly valuable. However, if I want that information, I use the uptime command that also gives me a more readable version of /proc/loadavg as well.

By comparison:

$ cat /proc/uptime
46901.13 46856.69

$ cat /proc/loadavg 
0.00 0.01 0.03 2/111 2039

$ uptime
 00:56:13 up 13:01,  2 users,  load average: 0.00, 0.01, 0.03

I think you get the idea.

/proc/cmdline

This file shows the parameters passed to the kernel at the time it is started.

$ cat /proc/cmdline

BOOT_IMAGE=/vmlinuz-3.10.0-1062.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto spectre_v2=retpoline rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8

The value of this information is in how the kernel was booted because any switches or special parameters will be listed here, too. And like all information under /proc , it can be found elsewhere and usually with better formatting, but /proc files are very handy when you can't remember the command or don't want to grep for something.

/proc/cpuinfo

The /proc/cpuinfo file is the first file I check when connecting to a new system. I want to know the CPU make-up of a system and this file tells me everything I need to know.

$ cat /proc/cpuinfo 

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 142
model name      : Intel(R) Core(TM) i5-7360U CPU @ 2.30GHz
stepping        : 9
cpu MHz         : 2303.998
cache size      : 4096 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc eagerfpu pni pclmulqdq monitor ssse3 cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase avx2 invpcid rdseed clflushopt md_clear flush_l1d
bogomips        : 4607.99
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

This is a virtual machine and only has one vCPU. If your system contains more than one CPU, the CPU numbering begins at 0 for the first CPU.

/proc/meminfo

The /proc/meminfo file is the second file I check on a new system. It gives me a general and a specific look at a system's memory allocation and usage.

$ cat /proc/meminfo 
MemTotal:        1014824 kB
MemFree:          643608 kB
MemAvailable:     706648 kB
Buffers:            1072 kB
Cached:           185568 kB
SwapCached:            0 kB
Active:           187568 kB
Inactive:          80092 kB
Active(anon):      81332 kB
Inactive(anon):     6604 kB
Active(file):     106236 kB
Inactive(file):    73488 kB
Unevictable:           0 kB
Mlocked:               0 kB
***Output truncated***

I think most sysadmins either use the free or the top command to pull some of the data contained here. The /proc/meminfo file gives me a quick memory overview that I like and can redirect to another file as a snapshot.
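A small sketch of that snapshot idea (the destination path is only an example):

$ cat /proc/meminfo > /tmp/meminfo-$(date +%Y%m%d-%H%M%S).txt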

/proc/version

The /proc/version file provides more information than the related uname -a command does. Here are the two compared:

$ cat /proc/version
Linux version 3.10.0-1062.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Wed Aug 7 18:08:02 UTC 2019

$ uname -a
Linux centos7 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Usually, the uname -a command is sufficient to give you kernel version info but for those of you who are developers or who are ultra-concerned with details, the /proc/version file is there for you.

Wrapping up

The /proc filesystem has a ton of valuable information available to system administrators who want a convenient, non-command way of getting at raw system info. As I stated earlier, there are other ways to display the information in /proc . Additionally, some of the /proc info isn't what you'd want to use for system assessment. For example, use commands such as vmstat 5 5 or iostat 5 5 to get a better picture of system performance rather than reading one of the available /proc files.

[Jul 12, 2020] 6 handy Bash scripts for Git - Opensource.com

Jul 12, 2020 | opensource.com

6 handy Bash scripts for Git

These six Bash scripts will make your life easier when you're working with Git repositories. 15 Jan 2020 Bob Peterson (Red Hat)

I wrote a bunch of Bash scripts that make my life easier when I'm working with Git repositories. Many of my colleagues say there's no need; that everything I need to do can be done with Git commands. While that may be true, I find the scripts infinitely more convenient than trying to figure out the appropriate Git command to do what I want.

1. gitlog

gitlog prints an abbreviated list of current patches against the master version. It prints them from oldest to newest and shows the author and description, with H for HEAD, ^ for HEAD^, 2 for HEAD~2, and so forth. For example:

$ gitlog
-----------------------[ recovery25 ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time

If I want to see what patches are on a different branch, I can specify an alternate branch:

$ gitlog recovery24
2. gitlog.id

gitlog.id just prints the patch SHA1 IDs:

$ gitlog.id
-----------------------[ recovery25 ]-----------------------
56908eeb6940 2ca4a6b628a1 fc64ad5d99fe 02031a00a251 f6f38da7dd18 d8546e8f0023 fc3cc1f98f6b 12c3e0cb3523 76cce178b134 6fc1dce3ab9c 1b681ab074ca 26fed8de719b 802ff51a5670 49f67a512d8c f04f20193bbb 5f6afe809d23 2030521dc70e dada79b3be94 9b19a1e08161 78a035041d3e f03da011cae2 0d2b2e068fcd 2449976aa133 57dfb5e12ccd 53abedfdcf72 6fbdda3474b3 49544a547188 187032f7a63c 6f75dae23d93 95fc2a261b00 ebfb14ded191 f653ee9e414a 0e2911cb8111 73968b76e2e3 8a3e4cb5e92c a5f2da803b5b 7c9ef68388ed 71ca19d0cba8 340d27a33895 9b3c4e6efb10 d2e8c22be39b 9563e31f8bfd ebac7a38036c f703a3c27874 a3e86d2ef30e da3c604755b0 4525c2f5b46f a06a5b7dea02 8ba93c796d5c e8b5ff851bb9

Again, it assumes the current branch, but I can specify a different branch if I want.

3. gitlog.id2

gitlog.id2 is the same as gitlog.id but without the branch line at the top. This is handy for cherry-picking all patches from one branch to the current branch:

$ # create a new branch
$ git branch --track origin/master
$ # check out the new branch I just created
$ git checkout recovery26
$ # cherry-pick all patches from the old branch to the new one
$ for i in `gitlog.id2 recovery25` ; do git cherry-pick $i ; done

4. gitlog.grep

gitlog.grep greps for a string within that collection of patches. For example, if I find a bug and want to fix the patch that has a reference to function inode_go_sync , I simply do:

$ gitlog.grep inode_go_sync
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
152:-static void inode_go_sync(struct gfs2_glock *gl)
153:+static int inode_go_sync(struct gfs2_glock *gl)
163:@@ -296,6 +302,7 @@ static void inode_go_sync(struct gfs2_glock *gl)
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time

So, now I know that patch HEAD~9 is the one that needs fixing. I use git rebase -i HEAD~10 to edit patch 9, git commit -a --amend, then git rebase --continue to make the necessary adjustments.
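As a sketch, that fix-up sequence looks like this (change "pick" to "edit" on the offending patch when the interactive rebase opens):

$ git rebase -i HEAD~10
$ # ...edit the source and test the fix...
$ git commit -a --amend
$ git rebase --continue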

5. gitbranchcmp3

gitbranchcmp3 lets me compare my current branch to another branch, so I can compare older versions of patches to my newer versions and quickly see what's changed and what hasn't. It generates a compare script (that uses the KDE tool Kompare, which works on GNOME3 as well) to compare the patches that aren't quite the same. If there are no differences other than line numbers, it prints [SAME]. If there are only comment differences, it prints [same] (in lower case). For example:

$ gitbranchcmp3 recovery24
Branch recovery24 has 47 patches
Branch recovery25 has 50 patches

(snip)
38 87eb6901607a 340d27a33895 [same] gfs2: drain the ail2 list after io errors
39 90fefb577a26 9b3c4e6efb10 [same] gfs2: clean up iopen glock mess in gfs2_create_inode
40 ba3ae06b8b0e d2e8c22be39b [same] gfs2: Do proper error checking for go_sync family of glops
41 2ab662294329 9563e31f8bfd [SAME] gfs2: use page_offset in gfs2_page_mkwrite
42 0adc6d817b7a ebac7a38036c [SAME] gfs2: don't use buffer_heads in gfs2_allocate_page_backing
43 55ef1f8d0be8 f703a3c27874 [SAME] gfs2: Improve mmap write vs. punch_hole consistency
44 de57c2f72570 a3e86d2ef30e [SAME] gfs2: Multi-block allocations in gfs2_page_mkwrite
45 7c5305fbd68a da3c604755b0 [SAME] gfs2: Fix end-of-file handling in gfs2_page_mkwrite
46 162524005151 4525c2f5b46f [SAME] Rafael Aquini's slab instrumentation
47 a06a5b7dea02 [ ] GFS2: Add go_get_holdtime to gl_ops
48 8ba93c796d5c [ ] gfs2: introduce new function remaining_hold_time and use it in dq
49 e8b5ff851bb9 [ ] gfs2: Allow rgrps to have a minimum hold time

Missing from recovery25:
The missing:
Compare script generated at: /tmp/compare_mismatches.sh

6. gitlog.find

Finally, I have gitlog.find , a script to help me identify where the upstream versions of my patches are and each patch's current status. It does this by matching the patch description. It also generates a compare script (again, using Kompare) to compare the current patch to the upstream counterpart:

$ gitlog.find
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
lo 5bcb9be74b2a Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
fn 2c47c1be51fb Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
lo feb7ea639472 Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
ms f3915f83e84c Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
ms 35af80aef99b Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
fn 39c3a948ecf6 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
fn f53056c43063 Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
fn 184b4e60853d Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
Not found upstream
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
Not found upstream
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in dq
Not found upstream
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
Not found upstream
Compare script generated: /tmp/compare_upstream.sh

The patches are shown on two lines, the first of which is your current patch, followed by the corresponding upstream patch, and a 2-character abbreviation to indicate its upstream status: lo means the patch is in my local for-next branch, fn means it is in for-next for the next merge window, and ms means it is already in Linus's master branch; patches with no upstream counterpart are flagged as Not found upstream.

Some of my scripts make assumptions based on how I normally work with Git. For example, when searching for upstream patches, it uses my well-known Git tree's location. So, you will need to adjust or improve them to suit your conditions. The gitlog.find script is designed to locate GFS2 and DLM patches only, so unless you're a GFS2 developer, you will want to customize it to the components that interest you.

Source code

Here is the source for these scripts.

1. gitlog

#!/bin/bash
branch=$1

if test "x$branch" = x; then
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

patches=0
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc); done

if [[ $branch =~ .*for-next.* ]]
then
    start=HEAD
    #start=origin/for-next
else
    start=origin/master
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

/usr/bin/echo "-----------------------[" $branch "]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
    if [ $patches -eq 1 ]; then
        cnt=" ^"
    elif [ $patches -eq 0 ]; then
        cnt=" H"
    else
        if [ $patches -lt 10 ]; then
            cnt=" $patches"
        else
            cnt="$patches"
        fi
    fi
    /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s %n" $i
    patches=$(echo $patches - 1 | bc)
done
#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" $tracking..$branch
#git log --reverse --abbrev-commit --pretty=format:"%h %<|(32)%an %s" ^origin/master ^linux-gfs2/for-next $branch

2. gitlog.id

#!/bin/bash
branch=$1

if test "x$branch" = x; then
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

/usr/bin/echo "-----------------------[" $branch "]-----------------------"
git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '

3. gitlog.id2

#!/bin/bash
branch=$1

if test "x$branch" = x; then
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '

4. gitlog.grep

#!/bin/bash
param1=$1
param2=$2

if test "x$param2" = x; then
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
    string=$param1
else
    branch=$param1
    string=$param2
fi

patches=0
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc); done
/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
    if [ $patches -eq 1 ]; then
        cnt=" ^"
    elif [ $patches -eq 0 ]; then
        cnt=" H"
    else
        if [ $patches -lt 10 ]; then
            cnt=" $patches"
        else
            cnt="$patches"
        fi
    fi
    /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
    /usr/bin/git show --pretty=email --patch-with-stat $i | grep -n "$string"
    patches=$(echo $patches - 1 | bc)
done

5. gitbranchcmp3

#!/bin/bash
#
# gitbranchcmp3 <old branch> [<new_branch>]
#
oldbranch=$1
newbranch=$2
script=/tmp/compare_mismatches.sh

/usr/bin/rm -f $script
echo "#!/bin/bash" > $script
/usr/bin/chmod 755 $script
echo "# Generated by gitbranchcmp3.sh" >> $script
echo "# Run this script to compare the mismatched patches" >> $script
echo " " >> $script
echo "function compare_them()" >> $script
echo "{" >> $script
echo "  git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
echo "  git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
echo "  kompare /tmp/gronk1 /tmp/gronk2" >> $script
echo "}" >> $script
echo " " >> $script

if test "x$newbranch" = x; then
    newbranch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi

tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

declare -a oldsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$oldbranch | cut -d ' ' -f1 | paste -s -d ' '`)
declare -a newsha1s=(`git log --reverse --abbrev-commit --pretty=oneline $tracking..$newbranch | cut -d ' ' -f1 | paste -s -d ' '`)

#echo "old: " $oldsha1s
oldcount=${#oldsha1s[@]}
echo "Branch $oldbranch has $oldcount patches"
oldcount=$(echo $oldcount - 1 | bc)
#for o in `seq 0 ${#oldsha1s[@]}`; do
#    echo -n ${oldsha1s[$o]} " "
#    desc=`git show $i | head -5 | tail -1|cut -b5-`
#done

#echo "new: " $newsha1s
newcount=${#newsha1s[@]}
echo "Branch $newbranch has $newcount patches"
newcount=$(echo $newcount - 1 | bc)
#for o in `seq 0 ${#newsha1s[@]}`; do
#    echo -n ${newsha1s[$o]} " "
#    desc=`git show $i | head -5 | tail -1|cut -b5-`
#done
echo

for new in `seq 0 $newcount`; do
    newsha=${newsha1s[$new]}
    newdesc=`git show $newsha | head -5 | tail -1 | cut -b5-`
    oldsha="            "
    same="[    ]"
    for old in `seq 0 $oldcount`; do
        if test "${oldsha1s[$old]}" = "match"; then
            continue;
        fi
        olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
        if test "$olddesc" = "$newdesc"; then
            oldsha=${oldsha1s[$old]}
            #echo $oldsha
            git show $oldsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
            git show $newsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
            diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
            if [ $? -eq 0 ]; then
                # No differences
                same="[SAME]"
                oldsha1s[$old]="match"
                break
            fi
            git show $oldsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
            git show $newsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
            diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
            if [ $? -eq 0 ]; then
                # Differences in comments only
                same="[same]"
                oldsha1s[$old]="match"
                break
            fi
            oldsha1s[$old]="match"
            echo "compare_them $oldsha $newsha" >> $script
        fi
    done
    echo "$new $oldsha $newsha $same $newdesc"
done

echo
echo "Missing from $newbranch:"
the_missing=""
# Now run through the olds we haven't matched up
for old in `seq 0 $oldcount`; do
    if test ${oldsha1s[$old]} != "match"; then
        olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
        echo "${oldsha1s[$old]} $olddesc"
        the_missing=`echo "$the_missing ${oldsha1s[$old]}"`
    fi
done

echo "The missing: " $the_missing
echo "Compare script generated at: $script"
#git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 |paste -s -d ' '

6. gitlog.find

#!/bin/bash
#
# Find the upstream equivalent patch
#
# gitlog.find
#
cwd=$PWD
param1=$1
ubranch=$2
patches=0
script=/tmp/compare_upstream.sh
echo "#!/bin/bash" > $script
/usr/bin/chmod 755 $script
echo "# Generated by gitbranchcmp3.sh" >> $script
echo "# Run this script to compare the mismatched patches" >> $script
echo " " >> $script
echo "function compare_them()" >> $script
echo "{" >> $script
echo "  cwd=$PWD" >> $script
echo "  git show --pretty=email --patch-with-stat \$2 > /tmp/gronk2" >> $script
echo "  cd ~/linux.git/fs/gfs2" >> $script
echo "  git show --pretty=email --patch-with-stat \$1 > /tmp/gronk1" >> $script
echo "  cd $cwd" >> $script
echo "  kompare /tmp/gronk1 /tmp/gronk2" >> $script
echo "}" >> $script
echo " " >> $script

#echo "Gathering upstream patch info. Please wait."
branch=`git branch -a | grep "*" | cut -d ' ' -f2`
tracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`

cd ~/linux.git
if test "X${ubranch}" = "X"; then
    ubranch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi
utracking=`git rev-parse --abbrev-ref --symbolic-full-name @{u}`
#
# gather a list of gfs2 patches from master just in case we can't find it
#
#git log --abbrev-commit --pretty=format:" %h %<|(32)%an %s" master |grep -i -e "gfs2" -e "dlm" > /tmp/gronk
git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/gfs2/ > /tmp/gronk.gfs2
# ms = in Linus's master
git log --reverse --abbrev-commit --pretty=format:"ms %h %<|(32)%an %s" master fs/dlm/ > /tmp/gronk.dlm

cd $cwd
LIST=`git log --reverse --abbrev-commit --pretty=oneline $tracking..$branch | cut -d ' ' -f1 | paste -s -d ' '`
for i in $LIST; do patches=$(echo $patches + 1 | bc); done
/usr/bin/echo "-----------------------[" $branch "-" $patches "patches ]-----------------------"
patches=$(echo $patches - 1 | bc);
for i in $LIST; do
    if [ $patches -eq 1 ]; then
        cnt=" ^"
    elif [ $patches -eq 0 ]; then
        cnt=" H"
    else
        if [ $patches -lt 10 ]; then
            cnt=" $patches"
        else
            cnt="$patches"
        fi
    fi
    /usr/bin/git show --abbrev-commit -s --pretty=format:"$cnt %h %<|(32)%an %s" $i
    desc=`/usr/bin/git show --abbrev-commit -s --pretty=format:"%s" $i`
    cd ~/linux.git
    cmp=1
    up_eq=`git log --reverse --abbrev-commit --pretty=format:"lo %h %<|(32)%an %s" $utracking..$ubranch | grep "$desc"`
    # lo = in local for-next
    if test "X$up_eq" = "X"; then
        up_eq=`git log --reverse --abbrev-commit --pretty=format:"fn %h %<|(32)%an %s" master..$utracking | grep "$desc"`
        # fn = in for-next for next merge window
        if test "X$up_eq" = "X"; then
            up_eq=`grep "$desc" /tmp/gronk.gfs2`
            if test "X$up_eq" = "X"; then
                up_eq=`grep "$desc" /tmp/gronk.dlm`
                if test "X$up_eq" = "X"; then
                    up_eq="   Not found upstream"
                    cmp=0
                fi
            fi
        fi
    fi
    echo "$up_eq"
    if [ $cmp -eq 1 ]; then
        UP_SHA1=`echo $up_eq | cut -d ' ' -f2`
        echo "compare_them $UP_SHA1 $i" >> $script
    fi
    cd $cwd
    patches=$(echo $patches - 1 | bc)
done
echo "Compare script generated: $script"

[Jul 11, 2020] Own your own content Vallard's Blog

Jul 11, 2020 | benincosa.com

Posted on December 31, 2019 by Vallard

Reading Hacker News this morning, I came across this article on how the old Internet has died because we trusted all our content to Facebook and Google. While hyperbole abounds in the headline, and there are plenty of internet things out there that aren't owned by Google or Facebook (including this AWS-free blog), it is true that much of the information and content is in the hands of a giant ad-serving service and a social echo chamber. (Well, that is probably too harsh.)

I heard this advice many years ago: you should own your own content. While there isn't much value in my trivial or obscure blog that nobody reads, it matters to me, and it is the reason I've run it on my own software, on my own servers, for 10+ years. This blog, for example, runs on open source WordPress, on a Linux server hosted by a friend, and is managed by me as I log in and make changes.

But of course, that is silly! Why not publish on Medium like everyone else? Or publish on someone else's service? Isn't that the point of the internet? Maybe. But in another sense, to me, the point is freedom. Freedom to express, do what I want, say what I will with no restrictions. The ability to own what I say and freedom from others monetizing me directly. There's no walled garden and anyone can access the content I write in my own little funzone.

While that may seem like ridiculousness, to me it's part of my hobby, and something I enjoy. In the next decade, whether this blog remains up or is shut down, is not dependent upon the fates of Google, Facebook, Amazon, nor Apple. It's dependent upon me, whether I want it up or not. If I change my views, I can delete it. It won't just sit on the Internet because someone else's terms of service agreement changed. I am in control, I am in charge. That to me is important and the reason I run this blog, don't use other people's services, and why I advocate for owning your own content.

[Jul 09, 2020] My Favourite Secret Weapon strace

Jul 09, 2020 | zwischenzugs.com

Why strace?

I'm often asked in my technical troubleshooting job to solve problems that development teams can't solve. Usually these do not involve knowledge of API calls or syntax, rather some kind of insight into what the right tool to use is, and why and how to use it. Probably because they're not taught in college, developers are often unaware that these tools exist, which is a shame, as playing with them can give a much deeper understanding of what's going on and ultimately lead to better code.

My favourite secret weapon in this path to understanding is strace.

strace (or its equivalents on other platforms, such as truss and dtruss) is a tool that tells you which operating system (OS) calls your program is making.

An OS call (or just "system call") is your program asking the OS to provide some service for it. Since this covers a lot of the things that cause problems not directly to do with the domain of your application development (I/O, finding files, permissions etc) its use has a very high hit rate in resolving problems out of developers' normal problem space.
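A common way to run it (these are standard strace options; mycommand and the output path are just placeholders) is to follow child processes and write everything to a file that can be searched afterwards:

$ strace -f -s 512 -o /tmp/strace.out mycommand
$ grep ENOENT /tmp/strace.out    # for example, hunt for "No such file or directory" errors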

Usage Patterns

strace is useful in all sorts of contexts. Here's a couple of examples garnered from my experience.

My Netcat Server Won't Start!

Imagine you're trying to start an executable, but it's failing silently (no log file, no output at all). You don't have the source, and even if you did, the source code is neither readily available, nor ready to compile, nor readily comprehensible.

Simply running it through strace will likely give you clues as to what's gone on.

$  nc -l localhost 80
nc: Permission denied

Let's say someone's trying to run this and doesn't understand why it's not working (let's assume manuals are unavailable).

Simply put strace at the front of your command. Note that the following output has been heavily edited for space reasons (deep breath):

 $ strace nc -l localhost 80
 execve("/bin/nc", ["nc", "-l", "localhost", "80"], [/* 54 vars */]) = 0
 brk(0)                                  = 0x1e7a000
 access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9c0000
 access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
 open("/usr/local/lib/tls/x86_64/libglib-2.0.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
 stat("/usr/local/lib/tls/x86_64", 0x7fff5686c240) = -1 ENOENT (No such file or directory)
 [...]
 open("libglib-2.0.so.0", O_RDONLY)      = -1 ENOENT (No such file or directory)
 open("/etc/ld.so.cache", O_RDONLY)      = 3
 fstat(3, {st_mode=S_IFREG|0644, st_size=179820, ...}) = 0
 mmap(NULL, 179820, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f751c994000
 close(3)                                = 0
 access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
 open("/lib/x86_64-linux-gnu/libglib-2.0.so.0", O_RDONLY) = 3
 read(3, "\177ELF\2\1\1\3>\1\320k\1"..., 832) = 832
 fstat(3, {st_mode=S_IFREG|0644, st_size=975080, ...}) = 0
 mmap(NULL, 3072520, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f751c4b3000
 mprotect(0x7f751c5a0000, 2093056, PROT_NONE) = 0
 mmap(0x7f751c79f000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xec000) = 0x7f751c79f000
 mmap(0x7f751c7a1000, 520, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f751c7a1000
 close(3)                                = 0
 open("/usr/local/lib/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
[...]
 mmap(NULL, 179820, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f751c994000
 close(3)                                = 0
 access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
 open("/lib/x86_64-linux-gnu/libnss_files.so.2", O_RDONLY) = 3
 read(3, "\177ELF\2\1\1\3>\1\20\""..., 832) = 832
 fstat(3, {st_mode=S_IFREG|0644, st_size=51728, ...}) = 0
 mmap(NULL, 2148104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f751b8b0000
 mprotect(0x7f751b8bc000, 2093056, PROT_NONE) = 0
 mmap(0x7f751babb000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f751babb000
 close(3)                                = 0
 mprotect(0x7f751babb000, 4096, PROT_READ) = 0
 munmap(0x7f751c994000, 179820)          = 0
 open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 3
 fcntl(3, F_GETFD)                       = 0x1 (flags FD_CLOEXEC)
 fstat(3, {st_mode=S_IFREG|0644, st_size=315, ...}) = 0
 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9bf000
 read(3, "127.0.0.1\tlocalhost\n127.0.1.1\tal"..., 4096) = 315
 read(3, "", 4096)                       = 0
 close(3)                                = 0
 munmap(0x7f751c9bf000, 4096)            = 0
 open("/etc/gai.conf", O_RDONLY)         = 3
 fstat(3, {st_mode=S_IFREG|0644, st_size=3343, ...}) = 0
 fstat(3, {st_mode=S_IFREG|0644, st_size=3343, ...}) = 0
 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9bf000
 read(3, "# Configuration for getaddrinfo("..., 4096) = 3343
 read(3, "", 4096)                       = 0
 close(3)                                = 0
 munmap(0x7f751c9bf000, 4096)            = 0
 futex(0x7f751c4af460, FUTEX_WAKE_PRIVATE, 2147483647) = 0
 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
 connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
 getsockname(3, {sa_family=AF_INET, sin_port=htons(58567), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0
 close(3)                                = 0
 socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
 connect(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
 getsockname(3, {sa_family=AF_INET6, sin6_port=htons(42803), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
 close(3)                                = 0
 socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EACCES (Permission denied)
 close(3)                                = 0
 socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EACCES (Permission denied)
 close(3)                                = 0
 write(2, "nc: ", 4nc: )                     = 4
 write(2, "Permission denied\n", 18Permission denied
 )     = 18
 exit_group(1)                           = ?

To most people that see this flying up their terminal this initially looks like gobbledygook, but it's really quite easy to parse when a few things are explained.

For each line of the output you get the system call name, its arguments (in parentheses), and, after the = sign, its return value. For example:

open("/etc/gai.conf", O_RDONLY)         = 3

Therefore for this particular line, the system call is open , the arguments are the string /etc/gai.conf and the constant O_RDONLY , and the return value was 3 .

How to make sense of this?

Some of these system calls can be guessed or enough can be inferred from context. Most readers will figure out that the above line is the attempt to open a file with read-only permission.

In the case of the above failure, we can see that before the program calls exit_group, there are a couple of calls to bind that return "Permission denied":

 bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EACCES (Permission denied)
 close(3)                                = 0
 socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EACCES (Permission denied)
 close(3)                                = 0
 write(2, "nc: ", 4nc: )                     = 4
 write(2, "Permission denied\n", 18Permission denied
 )     = 18
 exit_group(1)                           = ?

We might therefore want to understand what "bind" is and why it might be failing.

You need to get a copy of the system call's documentation. On Ubuntu and related distributions of Linux, the documentation is in the manpages-dev package, and can be invoked with, for example, man 2 bind (I just used strace to determine which file man 2 bind opened and then did a dpkg -S to determine which package it came from!). You can also look it up online, but if you can auto-install via a package manager you're more likely to get docs that match your installation.

Right there in my man 2 bind page it says:

ERRORS
EACCES The address is protected, and the user is not the superuser.

So there is the answer – we're trying to bind to a port that can only be bound to if you are the super-user.
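As an aside (my own addition, not part of the original write-up): strace's -e option lets you filter the trace down to just the calls you care about, which makes a failure like this much easier to spot. Something like the following should show only the socket-related calls (output abridged):

 $ strace -e trace=socket,setsockopt,bind nc -l localhost 80
 socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 3
 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
 bind(3, {sa_family=AF_INET6, sin6_port=htons(80), ...}, 28) = -1 EACCES (Permission denied)
 [...]
 nc: Permission denied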

My Library Is Not Loading!

Imagine a situation where developer A's perl script works fine, but an identical one on developer B's machine does not (again, the output has been edited).
In this case, we strace the run on the machine where it works, to see how the library gets found:

$ strace perl a.pl
execve("/usr/bin/perl", ["perl", "a.pl"], [/* 57 vars */]) = 0
brk(0)                                  = 0xa8f000
[...]fcntl(3, F_SETFD, FD_CLOEXEC)           = 0
fstat(3, {st_mode=S_IFREG|0664, st_size=14, ...}) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0
brk(0xad1000)                           = 0xad1000
read(3, "use blahlib;\n\n", 4096)       = 14
stat("/space/myperllib/blahlib.pmc", 0x7fffbaf7f3d0) = -1 ENOENT (No such file or directory)
stat("/space/myperllib/blahlib.pm", {st_mode=S_IFREG|0644, st_size=7692, ...}) = 0
open("/space/myperllib/blahlib.pm", O_RDONLY) = 4
ioctl(4, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fffbaf7f090) = -1 ENOTTY (Inappropriate ioctl for device)
[...]mmap(0x7f4c45ea8000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 5, 0x4000) = 0x7f4c45ea8000
close(5)                                = 0
mprotect(0x7f4c45ea8000, 4096, PROT_READ) = 0
brk(0xb55000)                           = 0xb55000
read(4, "swrite($_[0], $_[1], $_[2], $_[3"..., 4096) = 3596
brk(0xb77000)                           = 0xb77000
read(4, "", 4096)                       = 0
close(4)                                = 0
read(3, "", 4096)                       = 0
close(3)                                = 0
exit_group(0)                           = ?

We observe that the file is found in what looks like an unusual place.

open("/space/myperllib/blahlib.pm", O_RDONLY) = 4

Inspecting the environment, we see that:

$ env | grep myperl
PERL5LIB=/space/myperllib

So the solution is to set the same env variable before running:

export PERL5LIB=/space/myperllib
Get to know the internals bit by bit

If you do this a lot, or idly run strace on various commands and peruse the output, you can learn all sorts of things about the internals of your OS. If you're like me, this is a great way to learn how things work. For example, just now I've had a look at the file /etc/gai.conf , which I'd never come across before writing this.

Once your interest has been piqued, I recommend getting a copy of "Advanced Programming in the Unix Environment" by Stevens & Rago, and reading it cover to cover. Not all of it will go in, but as you use strace more and more, and (hopefully) browse C code more and more, your understanding will grow.

Gotchas

If you're running a program that calls other programs, it's important to run with the -f flag, which "follows" child processes and straces them. -ff (combined with -o) creates a separate output file per process, with the pid suffixed to the name.

If you're on Solaris, this program doesn't exist – you need to use truss instead.

Many production environments will not have this program installed for security reasons. strace doesn't have many library dependencies (on my machine it has the same dependencies as 'echo'), so if you have permission, (or are feeling sneaky) you can just copy the executable up.

Other useful tidbits

You can attach to running processes (can be handy if your program appears to hang or the issue is not readily reproducible) with -p .

If you're looking at performance issues, then the time flags ( -t , -tt , -ttt , and -T ) can help significantly.
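One more flag worth knowing (my addition to the list above, so double-check on your system): -c suppresses the normal line-by-line output and instead prints a summary table of which system calls were made, how many times, and how much time was spent in each. It's a quick first pass before wading through a full trace:

$ strace -c ls >/dev/null
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
[...]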

vasudevram February 11, 2018 at 5:29 pm

Interesting post. One point: the errors start earlier than what you said. There is a call to access() near the top of the strace output, which fails:

access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)

vasudevram February 11, 2018 at 5:29 pm

I guess that could trigger the other errors.

Benji Wiebe February 11, 2018 at 7:30 pm

A failed access or open system call is not usually an error in the context of launching a program. Generally it is merely checking if a config file exists.

vasudevram February 11, 2018 at 8:24 pm

>A failed access or open system call is not usually an error in the context of launching a program.

Yes, good point, that could be so, if the programmer meant to ignore the error, and if it was not an issue to do so.

>Generally it is merely checking if a config file exists.

The file name being access'ed is "/etc/ld.so.nohwcap" – not sure if it is a config file or not.

[Jul 08, 2020] Exit Codes

From bash manual: The exit status of an executed command is the value returned by the waitpid system call or equivalent function. Exit statuses fall between 0 and 255, though, as explained below, the shell may use values above 125 specially. Exit statuses from shell builtins and compound commands are also limited to this range. Under certain circumstances, the shell will use special values to indicate specific failure modes.
For the shell’s purposes, a command which exits with a zero exit status has succeeded. A non-zero exit status indicates failure. This seemingly counter-intuitive scheme is used so there is one well-defined way to indicate success and a variety of ways to indicate various failure modes. When a command terminates on a fatal signal whose number is N, Bash uses the value 128+N as the exit status.
If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not executable, the return status is 126.
If a command fails because of an error during expansion or redirection, the exit status is greater than zero.
The exit status is used by the Bash conditional commands (see Conditional Constructs) and some of the list constructs (see Lists).
All of the Bash builtins return an exit status of zero if they succeed and a non-zero status on failure, so they may be used by the conditional and list constructs. All builtins return an exit status of 2 to indicate incorrect usage, generally invalid options or missing arguments.
Jul 08, 2020 | zwischenzugs.com

Not everyone knows that every time you run a shell command in bash, an 'exit code' is returned to bash.

Generally, if a command 'succeeds' you get an exit code of 0 . If it doesn't succeed, you get a non-zero code.

1 is a 'general error', and other codes can give you more information (e.g. which signal killed the process). 255 is the upper limit and is treated as an "internal error".

grep joeuser /etc/passwd # in case of success returns 0, otherwise 1

or

grep not_there /dev/null
echo $?

$? is a special bash variable that's set to the exit code of each command after it runs.

Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match or not return 0 ?
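For the record (my own sketch, not from the original post): grep exits with 0 when it finds at least one match, 1 when it finds none, and 2 on an error, so it drops straight into an if without needing $? at all:

if grep -q joeuser /etc/passwd
then
  echo "joeuser is present"
else
  echo "no such user"
fi

The -q flag suppresses the matched output, since here we only care about the exit code.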

[Jul 07, 2020] The Missing Readline Primer by Ian Miell

Highly recommended!
This is from the book Learn Bash the Hard Way, available for $6.99.
Jul 07, 2020 | zwischenzugs.com

The Missing Readline Primer (zwischenzugs, April 23, 2019)

Readline is one of those technologies that is so commonly used many users don't realise it's there.

I went looking for a good primer on it so I could understand it better, but failed to find one. This is an attempt to write a primer that may help users get to grips with it, based on what I've managed to glean as I've tried to research and experiment with it over the years.

Bash Without Readline

First you're going to see what bash looks like without readline.

In your 'normal' bash shell, hit the TAB key twice. You should see something like this:

    Display all 2335 possibilities? (y or n)

That's because bash normally has an 'autocomplete' function that allows you to see what commands are available to you if you tap tab twice.

Hit n to get out of that autocomplete.

Another useful function that's commonly used is that if you hit the up arrow key a few times, then the previously-run commands should be brought back to the command line.

Now type:

$ bash --noediting

The --noediting flag starts up bash without the readline library enabled.

If you hit TAB twice now you will see something different: the shell no longer 'sees' your tab and just sends a tab direct to the screen, moving your cursor along. Autocomplete has gone.

Autocomplete is just one of the things that the readline library gives you in the terminal. You might want to try hitting the up or down arrows as you did above to see that that no longer works as well.

Hit return to get a fresh command line, and exit your non-readline-enabled bash shell:

$ exit
Other Shortcuts

There are a great many shortcuts like autocomplete available to you if readline is enabled. I'll quickly outline four of the most commonly-used of these before explaining how you can find out more.

$ echo 'some command'

There should not be many surprises there. Now if you hit the 'up' arrow, you will see you can get the last command back on your line. If you like, you can re-run the command, but there are other things you can do with readline before you hit return.

If you hold down the ctrl key and then hit a at the same time your cursor will return to the start of the line. Another way of representing this 'multi-key' way of inputting is to write it like this: \C-a . This is one conventional way to represent this kind of input. The \C represents the control key, and the -a represents that the a key is depressed at the same time.

Now if you hit \C-e ( ctrl and e ) then your cursor has moved to the end of the line. I use these two dozens of times a day.

Another frequently useful one is \C-l , which clears the screen, but leaves your command line intact.

The last one I'll show you allows you to search your history to find matching commands while you type. Hit \C-r , and then type ec . You should see the echo command you just ran like this:

    (reverse-i-search)`ec': echo echo

Then do it again, but keep hitting \C-r over and over. You should see all the commands that have `ec` in them that you've input before (if you've only got one echo command in your history then you will only see one). As you see them you are placed at that point in your history and you can move up and down from there or just hit return to re-run if you want.

There are many more shortcuts that readline gives you. Next I'll show you how to view these.

Using `bind` to Show Readline Shortcuts

If you type:

$ bind -p

You will see a list of bindings that readline is capable of. There's a lot of them!

Have a read through if you're interested, but don't worry about understanding them all yet.

If you type:

$ bind -p | grep C-a

you'll pick out the 'beginning-of-line' binding you used before, and see the \C-a notation I showed you before.

As an exercise at this point, you might want to look for the \C-e and \C-r bindings we used previously.

If you want to look through the entirety of the bind -p output, then you will want to know that \M refers to the Meta key (which you might also know as the Alt key), and \e refers to the Esc key on your keyboard. The 'escape' key bindings are different in that you don't hit it and another key at the same time, rather you hit it, and then hit another key afterwards. So, for example, typing the Esc key, and then the ? key also tries to auto-complete the command you are typing. This is documented as:

    "\e?": possible-completions

in the bind -p output.
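You can also use bind to add a shortcut of your own for the current session. As a small sketch (the key and the readline function here are arbitrary choices of mine, not from the primer):

$ bind '"\C-g": clear-screen'

Now \C-g does the same thing as \C-l . Putting the part inside the single quotes ( "\C-g": clear-screen ) into your ~/.inputrc makes it permanent for any readline-enabled program.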

Readline and Terminal Options

If you've looked over the possibilities that readline offers you, you might have seen the \C-r binding we looked at earlier:

    "\C-r": reverse-search-history

You might also have seen that there is another binding that allows you to search forward through your history too:

    "\C-s": forward-search-history

What often happens to me is that I hit \C-r over and over again, and then go too fast through the history and fly past the command I was looking for. In these cases I might try to hit \C-s to search forward and get to the one I missed.

Watch out though! Hitting \C-s to search forward through the history might well not work for you.

Why is this, if the binding is there and readline is switched on?

It's because something picked up the \C-s before it got to the readline library: the terminal settings.

The terminal program you are running in may have standard settings that do other things on hitting some of these shortcuts before readline gets to see it.

If you type:

$ stty -e

you should get output similar to this:

speed 9600 baud; 47 rows; 202 columns;
lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl -echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo -extproc
iflags: -istrip icrnl -inlcr -igncr ixon -ixoff ixany imaxbel -iutf8 -ignbrk brkint -inpck -ignpar -parmrk
oflags: opost onlcr -oxtabs -onocr -onlret
cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -dsrflow -dtrflow -mdmbuf
discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      ^S      ^Z      0       ^W

You can see on the last four lines ( discard dsusp [...] ) there is a table of key bindings that your terminal will pick up before readline sees them. The ^ character (known as the 'caret') here represents the ctrl key that we previously represented with a \C .

If you think this is confusing I won't disagree. Unfortunately in the history of Unix and Linux documenters did not stick to one way of describing these key combinations.

If you encounter a problem where the terminal options seem to catch a shortcut key binding before it gets to readline, then you can use the stty program to unset that binding. In this case, we want to unset the 'stop' binding.

If you are in the same situation, type:

$ stty stop undef

Now, if you re-run stty -e , the last two lines might look like this:

[...]
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W

where the stop entry now has <undef> underneath it.

Strangely, for me C-r is also bound to 'reprint' above ( ^R ).

But (on my terminals at least) that gets to readline without issue as I search up the history. Why this is the case I haven't been able to figure out. I suspect that reprint is ignored by modern terminals that don't need to 'reprint' the current line.

While we are looking at this table:

discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W

it's worth noting a few other key bindings that are used regularly.

First, one you may well already be familiar with is \C-c , which interrupts a program, terminating it:

$ sleep 99
[[Hit \C-c]]
^C
$

Similarly, \C-z suspends a program, allowing you to 'foreground' it again and continue with the fg builtin.

$ sleep 10
[[ Hit \C-z]]
^Z
[1]+  Stopped                 sleep 10
$ fg
sleep 10

\C-d sends an 'end of file' character. It's often used to indicate to a program that input is over. If you type it on a bash shell, the bash shell you are in will close.

Finally, \C-w deletes the word before the cursor.

These are the most commonly-used shortcuts that are picked up by the terminal before they get to the readline library.

Daz April 29, 2019 at 11:15 pm

Hi Ian,

What OS are you running because stty -e gives the following on Centos 6.x and Ubuntu 18.04.2

stty -e
stty: invalid argument '-e'
Try 'stty --help' for more information.

Leon May 14, 2019 at 5:12 am

`stty -a` works for me (Ubuntu 14)

yachris May 16, 2019 at 4:40 pm

You might want to check out the 'rlwrap' program. It allows you to have readline behavior on programs that don't natively support readline, but which have a 'type in a command' type interface. For instance, we use Oracle here (alas :-) ) and the 'sqlplus' program, that lets you type SQL commands to an Oracle instance does not have anything like readline built into it, so you can't go back to edit previous commands. But running 'rlwrap sqlplus' gives me readline behavior in sqlplus! It's fantastic to have.

AriSweedler May 17, 2019 at 4:50 am

I was told to use this in a class, and I didn't understand what I did. One rabbit hole later, I was shocked and amazed at how advanced the readline library is. One thing I'd like to add is that you can write a '~/.inputrc' file and have those readline commands sourced at startup!

I do not know exactly when or how the inputrc is read.

Most of what I learned about inputrc stuff is from https://www.topbug.net/blog/2017/07/31/inputrc-for-humans/ .

Here is my inputrc, if anyone wants: https://github.com/AriSweedler/dotfiles/blob/master/.inputrc .

[Jul 04, 2020] Eleven bash Tips You Might Want to Know by Ian Miell

Highly recommended!
Notable quotes:
"... Material here based on material from my book Learn Bash the Hard Way . Free preview available here . ..."
"... natively in bash ..."
Jul 04, 2020 | zwischenzugs.com

Here are some tips that might help you be more productive with bash.

1) ^x^y^

A gem I use all the time.

Ever typed anything like this?

$ grp somestring somefile
-bash: grp: command not found

Sigh. Hit 'up', 'left' until at the 'p' and type 'e' and return.

Or do this:

$ ^rp^rep^
grep 'somestring' somefile
$

One subtlety you may want to note though is:

$ grp rp somefile
$ ^rp^rep^
$ grep rp somefile

If you wanted rep to be searched for, then you'll need to dig into the man page and use a more powerful history command:

$ grp rp somefile
$ !!:gs/rp/rep
grep rep somefile
$

... ... ...


Material here based on material from my book
Learn Bash the Hard Way .
Free preview available here .


3) shopt vs set

This one bothered me for a while.

What's the difference between set and shopt ?

We saw set before, but shopt looks very similar. Just inputting shopt shows a bunch of options:

$ shopt
cdable_vars    off
cdspell        on
checkhash      off
checkwinsize   on
cmdhist        on
compat31       off
dotglob        off

I found a set of answers here . Essentially, it looks like it's a consequence of bash (and other shells) being built on sh, with shopt added as another way to set extra shell options. But I'm still unsure. If you know the answer, let me know.
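One way to see the split in practice (a sketch of my own; the particular options are just examples): set -o / set +o toggles the options bash inherits from sh, while shopt -s / shopt -u toggles the bash-specific ones:

$ set -o noclobber     # sh-style option: > will now refuse to overwrite existing files
$ shopt -s dotglob     # bash-specific option: * now matches dotfiles too
$ set +o noclobber     # turn it back off
$ shopt -u dotglob     # likewise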

4) Here Docs and Here Strings

'Here docs' are files created inline in the shell.

The 'trick' is simple. Define a closing word, and the lines between that word and when it appears alone on a line become a file.

Type this:

$ cat > afile << SOMEENDSTRING
> here is a doc
> it has three lines
> SOMEENDSTRING alone on a line will save the doc
> SOMEENDSTRING
$ cat afile
here is a doc
it has three lines
SOMEENDSTRING alone on a line will save the doc

Notice that the doc is only terminated when the end string appears alone on a line: the third line above contains SOMEENDSTRING but does not end the file, while the final SOMEENDSTRING on its own line does.

Lesser known is the 'here string':

$ cat > asd <<< 'This file has one line'
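Here strings are also handy whenever you want to feed a variable straight into a command's standard input without an echo-and-pipe. A quick sketch (the variable is invented for the example):

$ GREETING='hello world'
$ wc -w <<< "$GREETING"
2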
5) String Variable Manipulation

You may have written code like this before, where you use tools like sed to manipulate strings:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="$(echo $VAR | sed 's/^HEADER\(.*\)FOOTER/\1/')"
$ echo $PASS

But you may not be aware that this is possible natively in bash .

This means that you can dispense with lots of sed and awk shenanigans.

One way to rewrite the above is:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="${VAR#HEADER}"
$ PASS="${PASS%FOOTER}"
$ echo $PASS

The second method is twice as fast as the first on my machine. And (to my surprise), it was roughly the same speed as a similar python script .

If you want to use glob patterns that are greedy (see globbing here ) then you double up:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ echo ${VAR##HEADER*}
$ echo ${VAR%%*FOOTER}
6) Variable Defaults

These are very handy when you're knocking up scripts quickly.

If you have a variable that's not set, you can 'default' them by using this. Create a file called default.sh with these contents

#!/bin/bash
FIRST_ARG="${1:-no_first_arg}"
SECOND_ARG="${2:-no_second_arg}"
THIRD_ARG="${3:-no_third_arg}"
echo ${FIRST_ARG}
echo ${SECOND_ARG}
echo ${THIRD_ARG}

Now run chmod +x default.sh and run the script with ./default.sh first second .

Observe how the third argument's default has been assigned, but not the first two.

You can also assign directly with ${VAR:=defaultval} (equals sign, not dash) but note that this won't work with positional variables in scripts or functions. Try changing the above script to see how it fails.

7) Traps

The trap built-in can be used to 'catch' when a signal is sent to your script.

Here's an example I use in my own cheapci script:

function cleanup() {
    rm -rf "${BUILD_DIR}"
    rm -f "${LOCK_FILE}"
    # get rid of /tmp detritus, leaving anything accessed 2 days ago+
    find "${BUILD_DIR_BASE}"/* -type d -atime +1 | xargs rm -rf
    echo "cleanup done"                                                                                                                          
} 
trap cleanup TERM INT QUIT

Any attempt to CTRL-C , CTRL-\ , or terminate the program using the TERM signal will result in cleanup being called first.

Be aware:

  • Trap logic can get very tricky (eg handling signal race conditions)
  • The KILL signal can't be trapped in this way

But mostly I've used this for 'cleanups' like the above, which serve their purpose.

8) Shell Variables

It's well worth getting to know the standard shell variables available to you . Here are some of my favourites:

RANDOM

Don't rely on this for your cryptography stack, but you can generate random numbers eg to create temporary files in scripts:

$ echo ${RANDOM}
16313
$ # Not enough digits?
$ echo ${RANDOM}${RANDOM}
113610703
$ NEWFILE=/tmp/newfile_${RANDOM}
$ touch $NEWFILE
REPLY

No need to give a variable name for read :

$ read
my input
$ echo ${REPLY}
LINENO and SECONDS

Handy for debugging

$ echo ${LINENO}
115
$ echo ${SECONDS}; sleep 1; echo ${SECONDS}; echo $LINENO
174380
174381
116

Note that there are two 'lines' above, even though you used ; to separate the commands.

TMOUT

You can timeout reads, which can be really handy in some scripts

#!/bin/bash
TMOUT=5
echo You have 5 seconds to respond...
read
echo ${REPLY:-noreply}

... ... ...

10) Associative Arrays

Talking of moving to other languages, a rule of thumb I use is that if I need arrays then I drop bash to go to python (I even created a Docker container for a tool to help with this here ).

What I didn't know until I read up on it was that you can have associative arrays in bash.

Type this out for a demo:

$ declare -A MYAA=([one]=1 [two]=2 [three]=3)
$ MYAA[one]="1"
$ MYAA[two]="2"
$ echo $MYAA
$ echo ${MYAA[one]}
$ MYAA[one]="1"
$ WANT=two
$ echo ${MYAA[$WANT]}

Note that this is only available in bashes 4.x+.
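One thing the demo doesn't show (my own addition, so treat it as a sketch): ${!MYAA[@]} expands to the keys, which gives you an easy way to loop over the whole array (note that the key order is not guaranteed):

$ for key in "${!MYAA[@]}"; do echo "${key} => ${MYAA[$key]}"; done
three => 3
two => 2
one => 1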

... ... ...

[Jul 04, 2020] Learn Bash Debugging Techniques the Hard Way by Ian Miell

Highly recommended!
Notable quotes:
"... NOTE: If you are on a Mac, then you might only get second-level granularity on the date! ..."
Jul 04, 2020 | zwischenzugs.com

... ... ... Managing Variables

Variables are a core part of most serious bash scripts (and even one-liners!), so managing them is another important way to reduce the possibility of your script breaking.

Change your script to add the 'set' line immediately after the first line and see what happens:

#!/bin/bash
set -o nounset
A="some value"
echo "${A}"
echo "${B}"

...I always set nounset on my scripts as a habit. It can catch many problems before they become serious.

Tracing Variables

If you are working with a particularly complex script, then you can get to the point where you are unsure what happened to a variable.

Try running this script and see what happens:

#!/bin/bash 
set -o nounset 
declare A="some value" 
function a { 
  echo "${BASH_SOURCE}>A A=${A} LINENO:${1}" 
} 
trap "a $LINENO" DEBUG 
B=value 
echo "${A}" 
A="another value" 
echo "${A}" 
echo "${B}"

There's a problem with this code. The output is slightly wrong. Can you work out what is going on? If so, try and fix it.

You may need to refer to the bash man page, and make sure you understand quoting in bash properly.

It's quite a tricky one to fix 'properly', so if you can't fix it, or work out what's wrong with it, then ask me directly and I will help.

Profiling Bash Scripts

Returning to the xtrace (or set -x flag), we can exploit its use of a PS variable to implement the profiling of a script:

#!/bin/bash
set -o nounset
set -o xtrace
declare A="some value"
PS4='$(date "+%s%N => ")'
B=
echo "${A}"
A="another value"
echo "${A}"
echo "${B}"
ls
pwd
curl -q bbc.co.uk

From this you should be able to tell what PS4 does. Have a play with it, and read up and experiment with the other PS variables to get familiar with what they do.

NOTE: If you are on a Mac, then you might only get second-level granularity on the date!
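PS4 can interpolate any shell variable, not just the date, so (my own variation on the exercise, not part of the original) you can turn xtrace output into a rough source-location trace as well as a profile:

PS4='+ ${BASH_SOURCE}:${LINENO}:${FUNCNAME[0]:-main}: '

With that in place, every traced line is prefixed with the script name, the line number, and the function it was executed in.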

Linting with Shellcheck

Finally, here is a very useful tip for understanding bash more deeply and improving any bash scripts you come across.

Shellcheck is a website and a package available on most platforms that gives you advice to help fix and improve your shell scripts. Very often, its advice has prompted me to research more deeply and understand bash better.

Here is some example output from a script I found on my laptop:

$ shellcheck shrinkpdf.sh
In shrinkpdf.sh line 44:
          -dColorImageResolution=$3             \
                                 ^-- SC2086: Double quote to prevent globbing and word splitting.
In shrinkpdf.sh line 46:
          -dGrayImageResolution=$3              \
                                ^-- SC2086: Double quote to prevent globbing and word splitting.
In shrinkpdf.sh line 48:
          -dMonoImageResolution=$3              \
                                ^-- SC2086: Double quote to prevent globbing and word splitting.
In shrinkpdf.sh line 57:
        if [ ! -f "$1" -o ! -f "$2" ]; then
                      ^-- SC2166: Prefer [ p ] || [ q ] as [ p -o q ] is not well defined.
In shrinkpdf.sh line 60:
        ISIZE="$(echo $(wc -c "$1") | cut -f1 -d\ )"
                      ^-- SC2046: Quote this to prevent word splitting.
                      ^-- SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.
In shrinkpdf.sh line 61:
        OSIZE="$(echo $(wc -c "$2") | cut -f1 -d\ )"
                      ^-- SC2046: Quote this to prevent word splitting.
                      ^-- SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.

The most common reminders are regarding potential quoting issues, but you can see other useful tips in the above output, such as preferred arguments to the test construct, and advice on "useless" echo s.

Exercise

1) Find a large bash script on a social coding site such as GitHub, and run shellcheck over it. Contribute back any improvements you find.


[Jul 02, 2020] 7 Bash history shortcuts you will actually use by Ian Miell

Highly recommended!
Notable quotes:
"... The "last argument" one: !$ ..."
"... The " n th argument" one: !:2 ..."
"... The "all the arguments": !* ..."
"... The "last but n " : !-2:$ ..."
"... The "get me the folder" one: !$:h ..."
"... I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need. ..."
"... Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than type !:1 (or having to remeber what it means). ..."
Oct 02, 2019 | opensource.com

7 Bash history shortcuts you will actually use. Save time on the command line with these essential Bash shortcuts. 02 Oct 2019, Ian Miell, Opensource.com

Most guides to Bash history shortcuts exhaustively list every single one available. The problem with that is I would use a shortcut once, then glaze over as I tried out all the possibilities. Then I'd move onto my working day and completely forget them, retaining only the well-known !! trick I learned when I first started using Bash.

So most of them were never committed to memory.

This article outlines the shortcuts I actually use every day. It is based on some of the contents of my book, Learn Bash the hard way (you can read a preview of it to learn more).

When people see me use these shortcuts, they often ask me, "What did you do there!?" There's minimal effort or intelligence required, but to really learn them, I recommend using one each day for a week, then moving to the next one. It's worth taking your time to get them under your fingers, as the time you save will be significant in the long run.

1. The "last argument" one: !$

If you only take one shortcut from this article, make it this one. It substitutes in the last argument of the last command into your line.

Consider this scenario:

$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory

Ach, I put the wrongfile filename in my command. I should have put rightfile instead.

You might decide to retype the last command and replace wrongfile with rightfile completely. Instead, you can type:

$ mv /path/to/rightfile !$
mv /path/to/rightfile /some/other/place

and the command will work.

There are other ways to achieve the same thing in Bash with shortcuts, but this trick of reusing the last argument of the last command is one I use the most.

2. The " n th argument" one: !:2

Ever done anything like this?

$ tar -cvf afolder afolder.tar
tar: failed to open

Like many others, I get the arguments to tar (and ln ) wrong more often than I would like to admit.


When you mix up arguments like that, you can run:

$ !:0 !:1 !:3 !:2
tar -cvf afolder.tar afolder

and your reputation will be saved.

The last command's items are zero-indexed and can be substituted in with the number after the !: .

Obviously, you can also use this to reuse specific arguments from the last command rather than all of them.

3. The "all the arguments": !*

Imagine I run a command like:

$ grep '(ping|pong)' afile

The arguments are correct; however, I want to match ping or pong in a file, but I used grep rather than egrep .

I start typing egrep , but I don't want to retype the other arguments. So I can use the !:1-$ shortcut to ask for all the arguments to the previous command from the second one (remember they're zero-indexed) to the last one (represented by the $ sign).

$ egrep !:1-$
egrep '(ping|pong)' afile
ping

You don't need to pick 1-$ ; you can pick a subset like 1-2 or 3-9 (if you had that many arguments in the previous command).

4. The "last but n " : !-2:$

The shortcuts above are great when I know immediately how to correct my last command, but often I run commands after the original one, which means that the last command is no longer the one I want to reference.

For example, using the mv example from before, if I follow up my mistake with an ls check of the folder's contents:

$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
$ ls /path/to/
rightfile

I can no longer use the !$ shortcut.

In these cases, I can insert a -n (where n is the number of commands to go back in the history) after the ! to grab the last argument from an older command:

$ mv /path/to/rightfile !-2:$
mv /path/to/rightfile /some/other/place

Again, once you learn it, you may be surprised at how often you need it.

5. The "get me the folder" one: !$:h

This one looks less promising on the face of it, but I use it dozens of times daily.

Imagine I run a command like this:

$ tar -cvf system.tar /etc/system
tar: /etc/system: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors.

The first thing I might want to do is go to the /etc folder to see what's in there and work out what I've done wrong.

I can do this at a stroke with:

$ cd !$:h
cd /etc

This one says: "Get the last argument to the last command ( /etc/system ) and take off its last filename component, leaving only the /etc ."

6. The "the current line" one: !#:1

For years, I occasionally wondered if I could reference an argument on the current line before finally looking it up and learning it. I wish I'd done so a long time ago. I most commonly use it to make backup files:

$ cp /path/to/some/file !#:1.bak
cp /path/to/some/file /path/to/some/file.bak

but once under the fingers, it can be a very quick alternative to retyping the whole path.

7. The "search and replace" one: !!:gs

This one searches across the referenced command and replaces what's in the first two / characters with what's in the second two.

Say I want to tell the world that my s key does not work and outputs f instead:

$ echo my f key doef not work
my f key doef not work

Then I realize that I was just hitting the f key by accident. To replace all the f s with s es, I can type:

$ !!:gs/f/s/
echo my s key does not work
my s key does not work

It doesn't work only on single characters; I can replace words or sentences, too:

$ !!:gs/does/did/
echo my s key did not work
my s key did not work

Test them out

Just to show you how these shortcuts can be combined, can you work out what these toenail clippings will output?

$ ping !#:0:gs/i/o
$ vi /tmp/!:0.txt
$ ls !$:h
$ cd !-2:h
$ touch !$!-3:$ !! !$.txt
$ cat !:1-$

Conclusion

Bash can be an elegant source of shortcuts for the day-to-day command-line user. While there are thousands of tips and tricks to learn, these are my favorites that I frequently put to use.

If you want to dive even deeper into all that Bash can teach you, pick up my book, Learn Bash the hard way or check out my online course, Master the Bash shell .


This article was originally posted on Ian's blog, Zwischenzugs.com , and is reused with permission.

Orr, August 25, 2019 at 10:39 pm

BTW – you inspired me to try and understand how to repeat the nth command entered on command line. For example I type 'ls' and then accidentally type 'clear'. !! will retype clear again but I wanted to retype ls instead using a shortcut.
Bash doesn't accept ':' so !:2 didn't work. !-2 did however, thank you!

Dima August 26, 2019 at 7:40 am

Nice article! Just another one cool and often used command: i.e.: !vi opens the last vi command with their arguments.

cbarrick on 03 Oct 2019

Your "current line" example is too contrived. Your example is copying to a backup like this:

$ cp /path/to/some/file !#:1.bak

But a better way to write that is with filename generation:

$ cp /path/to/some/file{,.bak}

That's not a history expansion though... I'm not sure I can come up with a good reason to use `!#:1`.

Darryl Martin August 26, 2019 at 4:41 pm

I seldom get anything out of these "bash commands you didn't know" articles, but you've got some great tips here. I'm writing several down and sticking them on my terminal for reference.

A couple additions I'm sure you know.

  1. I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need.
  2. I recently started using Alt-. as a substitute for "!$" to get the last argument. It expands the argument on the line, allowing me to modify it if necessary.

Ricardo J. Barberis on 06 Oct 2019

The problem with bash's history shorcuts for me is... that I never had the need to learn them.

Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than type !:1 (or having to remember what it means).

Examples:

Ctrl+R for a Reverse search
Ctrl+A to move to the begnining of the line (Home key also)
Ctrl+E to move to the End of the line (End key also)
Ctrl+K to Kill (delete) text from the cursor to the end of the line
Ctrl+U to kill text from the cursor to the beginning of the line
Alt+F to move Forward one word (Ctrl+Right arrow also)
Alt+B to move Backward one word (Ctrl+Left arrow also)
etc.

YMMV of course.

[Jul 02, 2020] Some Relatively Obscure Bash Tips zwischenzugs

Jul 02, 2020 | zwischenzugs.com

2) |&

You may already be familiar with 2>&1 , which redirects standard error to standard output, but until I stumbled on it in the manual, I had no idea that you can pipe both standard output and standard error into the next stage of the pipeline like this:

if doesnotexist |& grep 'command not found' >/dev/null
then
  echo oops
fi
3) $''

This construct allows you to specify specific bytes in scripts without fear of triggering some kind of encoding problem. Here's a command that will grep through files looking for UK currency ('£') signs in hexadecimal recursively:

grep -r $'\xc2\xa3' *

You can also use octal:

grep -r $'\302\243' *
4) HISTIGNORE

If you are concerned about security, and ever type in commands that might have sensitive data in them, then this one may be of use.

This environment variable does not put the commands specified in your history file if you type them in. The commands are separated by colons:

HISTIGNORE="ls *:man *:history:clear:AWS_KEY*"

You have to specify the whole line, so a glob character may be needed if you want to exclude commands and their arguments or flags.

5) fc

If readline key bindings aren't under your fingers, then this one may come in handy.

It calls up the last command you ran, and places it into your preferred editor (specified by the EDITOR variable). Once edited, it re-runs the command.

6) ((i++))

If you can't be bothered with faffing around with variables in bash with the $[] construct, you can use the C-style compound command.

So, instead of:

A=1
A=$[$A+1]
echo $A

you can do:

A=1
((A++))
echo $A

which, especially with more complex calculations, might be easier on the eye.

7) caller

Another builtin bash command, caller gives context about your shell's calling stack.

SHLVL is a related shell variable which gives the level of depth of the calling stack.

This can be used to create stack traces for more complex bash scripts.

Here's a die function, adapted from the bash hackers' wiki that gives a stack trace up through the calling frames:

#!/bin/bash
die() {
  local frame=0
  ((FRAMELEVEL=SHLVL - frame))
  echo -n "${FRAMELEVEL}: "
  while caller $frame; do
    ((frame++));
    ((FRAMELEVEL=SHLVL - frame))
    if [[ ${FRAMELEVEL} -gt -1 ]]
    then
      echo -n "${FRAMELEVEL}: "
    fi
  done
  echo "$*"
  exit 1
}

which outputs:

3: 17 f1 ./caller.sh
2: 18 f2 ./caller.sh
1: 19 f3 ./caller.sh
0: 20 main ./caller.sh
*** an error occured ***
8) /dev/tcp/host/port

This one can be particularly handy if you find yourself on a container running within a Kubernetes cluster service mesh without any network tools (a frustratingly common experience).

Bash provides you with some virtual files which, when referenced, can create socket connections to other servers.

This snippet, for example, makes a web request to a site and returns the output.

exec 9<>/dev/tcp/brvtsdflnxhkzcmw.neverssl.com/80
echo -e "GET /online HTTP/1.1\r\nHost: brvtsdflnxhkzcmw.neverssl.com\r\n\r\n" >&9
cat <&9

The first line opens up file descriptor 9 to the host brvtsdflnxhkzcmw.neverssl.com on port 80 for reading and writing. Line two sends the raw HTTP request to that socket connection's file descriptor. The final line retrieves the response.

Obviously, this doesn't handle SSL for you, so its use is limited now that pretty much everyone is running on https, but when running from application containers within a service mesh can still prove invaluable, as requests there are initiated using HTTP.

9) Co-processes

Since version 4 of bash it has offered the capability to run named coprocesses.

It seems to be particularly well-suited to managing the inputs and outputs to other processes in a fine-grained way. Here's an annotated and trivial example:

coproc testproc (
  i=1
  while true
  do
    echo "iteration:${i}"
    ((i++))
    read -r aline
    echo "${aline}"
  done
)

This sets up the coprocess as a subshell with the name testproc .

Within the subshell, there's a never-ending while loop that counts its own iterations with the i variable. It outputs two lines: the iteration number, and a line read in from standard input.

After creating the coprocess, bash sets up an array with that name with the file descriptor numbers for the standard input and standard output. So this:

echo "${testproc[@]}"

in my terminal outputs:

63 60

Bash also sets up a variable with the process identifier for the coprocess, which you can see by echoing it:

echo "${testproc_PID}"

You can now input data to the standard input of this coprocess at will like this:

echo input1 >&"${testproc[1]}"

In this case, the command resolves to: echo input1 >&60 , and the >&[INTEGER] construct ensures the redirection goes to the coprocess's standard input.

Now you can read the output of the coprocess's two lines in a similar way, like this:

read -r output1a <&"${testproc[0]}"
read -r output1b <&"${testproc[0]}"

You might use this to create an expect -like script if you were so inclined, but it could be generally useful if you want to manage inputs and outputs. Named pipes are another way to achieve a similar result.

Here's a complete listing for those who want to cut and paste:

#!/bin/bash
coproc testproc (
  i=1
  while true
  do
    echo "iteration:${i}"
    ((i++))
    read -r aline
    echo "${aline}"
  done
)
echo "${testproc[@]}"
echo "${testproc_PID}"
echo input1 >&"${testproc[1]}"
read -r output1a <&"${testproc[0]}"
read -r output1b <&"${testproc[0]}"
echo "${output1a}"
echo "${output1b}"
echo input2 >&"${testproc[1]}"
read -r output2a <&"${testproc[0]}"
read -r output2b <&"${testproc[0]}"
echo "${output2a}"
echo "${output2b}"

[Jul 01, 2020] Use curl to test an application's endpoint or connectivity to an upstream service endpoint

Notable quotes:
"... The -I option shows the header information and the -s option silences the response body. Checking the endpoint of your database from your local desktop: ..."
Jul 01, 2020 | opensource.com

curl

curl transfers a URL. Use this command to test an application's endpoint or connectivity to an upstream service endpoint. c url can be useful for determining if your application can reach another service, such as a database, or checking if your service is healthy.

As an example, imagine your application throws an HTTP 500 error indicating it can't reach a MongoDB database:

$ curl -I -s myapplication:5000
HTTP/1.0 500 INTERNAL SERVER ERROR

The -I option shows the header information and the -s option silences the response body. Checking the endpoint of your database from your local desktop:

$ curl -I -s database:27017
HTTP/1.0 200 OK

So what could be the problem? Check if your application can get to other places besides the database from the application host:

$ curl -I -s https://opensource.com
HTTP/1.1 200 OK

That seems to be okay. Now try to reach the database from the application host. Your application is using the database's hostname, so try that first:

$ curl database:27017
curl: (6) Couldn't resolve host 'database'

This indicates that your application cannot resolve the database because the URL of the database is unavailable or the host (container or VM) does not have a nameserver it can use to resolve the hostname.
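When all you care about is the status code, curl can also print it on its own with the -w (write-out) option; a minimal sketch reusing the hostname from the example above:

$ curl -s -o /dev/null -w "%{http_code}\n" http://myapplication:5000
500

-o /dev/null discards the body and -w prints just the fields you ask for, which makes this easy to drop into health-check scripts.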

[Jul 01, 2020] Stupid Bash tricks- History, reusing arguments, files and directories, functions, and more by Valentin Bajrami

A moderately interesting example here is changing sudo systemctl status into sudo systemctl start via !!:s/status/start/
But it probably can be optimized so that you do not need to type start (it can be deleted as the last word). So you can try !0 stop instead
Jul 01, 2020 | www.redhat.com

See also Bash bang commands- A must-know trick for the Linux command line - Enable Sysadmin

Let's say I run the following command:

$> sudo systemctl status sshd

Bash tells me the sshd service is not running, so the next thing I want to do is start the service. I had checked its status with my previous command. That command was saved in history , so I can reference it. I simply run:

$> !!:s/status/start/
sudo systemctl start sshd

The above expression has the following content: !! recalls the previous command, and :s/status/start/ substitutes the word status with start in it.

The result is that the sshd service is started.

Next, I increase the default HISTSIZE value from 500 to 5000 by using the following command:

$> echo "HISTSIZE=5000" >> ~/.bashrc && source ~/.bashrc

What if I want to display the last three commands in my history? I enter:

$> history 3
 1002  ls
 1003  tail audit.log
 1004  history 3

I run tail on audit.log by referring to the history line number. In this case, I use line 1003:

$> !1003
tail audit.log
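If you want to see what a history reference will expand to before it actually runs, the :p modifier prints the expansion without executing it (my own addition to the example above):

$> !1003:p
tail audit.log

The expanded command is also appended to your history, so hitting the up arrow then lets you edit it before running it for real.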
Reference the last argument of the previous command

When I want to list directory contents for different directories, I may change between directories quite often. There is a nice trick you can use to refer to the last argument of the previous command. For example:

$> pwd
/home/username/
$> ls some/very/long/path/to/some/directory
foo-file bar-file baz-file

In the above example, /some/very/long/path/to/some/directory is the last argument of the previous command.

If I want to cd (change directory) to that location, I enter something like this:

$> cd $_

$> pwd
/home/username/some/very/long/path/to/some/directory

Now simply use a dash character to go back to where I was:

$> cd -
$> pwd
/home/username/

[Jun 26, 2020] Vim show line numbers by default on Linux

Notable quotes:
"... Apart from regular absolute line numbers, Vim supports relative and hybrid line numbers too to help navigate around text files. The 'relativenumber' vim option displays the line number relative to the line with the cursor in front of each line. Relative line numbers help you use the count you can precede some vertical motion commands with, without having to calculate it yourself. ..."
"... We can enable both absolute and relative line numbers at the same time to get "Hybrid" line numbers. ..."
Feb 29, 2020 | www.cyberciti.biz

How do I show line numbers in Vim by default on Linux? Vim (Vi IMproved) is not just a free text editor; it is the number one editor for Linux sysadmin and software development work.

By default, Vim doesn't show line numbers on Linux and Unix-like systems; however, we can turn them on using the following instructions. My experience shows that line numbers are useful for debugging shell scripts, program code, and configuration files. Let us see how to display the line number in vim permanently.

Vim show line numbers by default

Turn on absolute line numbering by default in vim:

  1. Open vim configuration file ~/.vimrc by typing the following command:
    vim ~/.vimrc
  2. Append set number
  3. Press the Esc key
  4. To save the config file, type :w and hit Enter key
  5. You can temporarily disable the absolute line numbers within vim session, type:
    :set nonumber
  6. Want to enable disabled the absolute line numbers within vim session? Try:
    :set number
  7. We can see vim line numbers on the left side.
Relative line numbers

Apart from regular absolute line numbers, Vim supports relative and hybrid line numbers too to help navigate around text files. The 'relativenumber' vim option displays the line number relative to the line with the cursor in front of each line. Relative line numbers help you use the count you can precede some vertical motion commands with, without having to calculate it yourself. Once again edit the ~/.vimrc by running:
vim ~/.vimrc
Finally, turn relative line numbers on:
set relativenumber
Save and close the file in vim text editor.

How to show "Hybrid" line numbers in Vim by default

What happens when you put the following two config directives in ~/.vimrc ?
set number
set relativenumber

That is right. We can enable both absolute and relative line numbers at the same time to get "Hybrid" line numbers.

Conclusion

Today we learned about permanent line number settings for the vim text editor. By adding the "set number" config directive in Vim configuration file named ~/.vimrc, we forced vim to show line numbers each time vim started. See vim docs here for more info and following tutorials too:

[May 20, 2020] The mktemp Command Tutorial With Examples For Beginners

May 20, 2020 | www.ostechnix.com

Mktemp is part of GNU coreutils package. So don't bother with installation. We will see some practical examples now.

To create a new temporary file, simply run:

$ mktemp

You will see an output like below:

/tmp/tmp.U0C3cgGFpk


As you see in the output, a new temporary file with random name "tmp.U0C3cgGFpk" is created in /tmp directory. This file is just an empty file.

You can also create a temporary file with a specified suffix. The following command will create a temporary file with ".txt" extension:

$ mktemp --suffix ".txt"
/tmp/tmp.sux7uKNgIA.txt

How about a temporary directory? Yes, it is also possible! To create a temporary directory, use -d option.

$ mktemp -d

This will create a random empty directory in /tmp folder.

Sample output:

/tmp/tmp.PE7tDnm4uN


All files will be created with u+rw permission, and directories with u+rwx , minus umask restrictions. In other words, the resulting file will have read and write permissions for the current user, but no permissions for the group or others. And the resulting directory will have read, write and executable permissions for the current user, but no permissions for groups or others.

You can verify the file permissions using "ls" command:

$ ls -al /tmp/tmp.U0C3cgGFpk
-rw------- 1 sk sk 0 May 14 13:20 /tmp/tmp.U0C3cgGFpk

Verify the directory permissions using "ls" command:

$ ls -ld /tmp/tmp.PE7tDnm4uN
drwx------ 2 sk sk 4096 May 14 13:25 /tmp/tmp.PE7tDnm4uN

Check file and directory permissions in Linux




Create temporary files or directories with custom names using mktemp command

As I already said, all files and directories are created with random names. We can also create a temporary file or directory with a custom name. To do so, simply add at least 3 consecutive 'X's at the end of the file name like below.

$ mktemp ostechnixXXX
ostechnixq70

Similarly, to create directory, just run:

$ mktemp -d ostechnixXXX
ostechnixcBO

Please note that if you choose a custom name, the files/directories will be created in the current working directory, not in /tmp . In this case, you need to clean them up manually.

Also, as you may have noticed, the X's in the file name are replaced with random characters. You can, however, add any suffix of your choice.

For instance, I want to add "blog" at the end of the filename. Hence, my command would be:

$ mktemp ostechnixXXX --suffix=blog
ostechnixZuZblog

Now we do have the suffix "blog" at the end of the filename.

If you don't want to create any file or directory, you can simply perform a dry run like below.

$ mktemp -u
/tmp/tmp.oK4N4U6rDG

For help, run:

$ mktemp --help
Why do we actually need mktemp?

You might wonder why we need "mktemp" when we can easily create empty files using the "touch filename" command. The mktemp command is mainly used for creating temporary files/directories with random names, so we don't need to bother figuring out the names ourselves. Since mktemp randomizes the names, there won't be any name collisions. Also, mktemp creates files safely with permission 600 (rw) and directories with permission 700 (rwx), so other users can't access them. For more details, check the man pages.

$ man mktemp
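To tie this together, here is a small sketch of the pattern mktemp is usually used for in shell scripts: create a scratch file and register a trap so it is removed even if the script exits early. The file name and the work done with it are purely illustrative.

#!/bin/bash
# create a temporary work file safely; abort if mktemp fails
tmpfile=$(mktemp) || exit 1
# make sure it is removed when the script exits, for any reason
trap 'rm -f "$tmpfile"' EXIT

df -h > "$tmpfile"           # do some work with the scratch file
grep '/dev/sd' "$tmpfile"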

[May 06, 2020] Creating and managing partitions in Linux with parted Enable Sysadmin by Tyler Carrigan

Apr 30, 2020 | www.redhat.com


Listing partitions with parted

The first thing that you want to do anytime that you need to make changes to your disk is to find out what partitions you already have. Displaying existing partitions allows you to make informed decisions moving forward and helps you nail down the partition names you will need for future commands. Run the parted command to start parted in interactive mode and list partitions. It will default to your first listed drive. You will then use the print command to display disk information.

[root@rhel ~]# parted /dev/sdc
    GNU Parted 3.2
    Using /dev/sdc
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) print                                                            
    Error: /dev/sdc: unrecognised disk label
    Model: ATA VBOX HARDDISK (scsi)                                           
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: unknown
    Disk Flags:
    (parted)

Creating new partitions with parted

Now that you can see what partitions are active on the system, you are going to add a new partition to /dev/sdc . You can see in the output above that there is no partition table for this disk, so add one by using the mklabel command. Then use mkpart to add the new partition. You are creating a new primary partition with the ext4 file system type. For demonstration purposes, I chose to create a 50 MB partition.

(parted) mklabel msdos                                                    
    (parted) mkpart                                                           
    Partition type?  primary/extended? primary                                
    File system type?  [ext2]? ext4                                           
    Start? 1                                                                  
    End? 50                                                                   
    (parted)                                                                  
    (parted) print                                                            
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  50.3MB  49.3MB  primary  ext4         lba

Modifying existing partitions with parted

Now that you have created the new partition at 50 MB, you can resize it to 100 MB, and then shrink it back to the original 50 MB. First, note the partition number. You can find this information by using the print command. You are then going to use the resizepart command to make the modifications.

(parted) resizepart                                                       
    Partition number? 1                                                       
    End?  [50.3MB]? 100                                                       
        
    (parted) print                                                            
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End    Size    Type     File system  Flags
     1      1049kB  100MB  99.0MB  primary

You can see in the above output that I resized partition number one from 50 MB to 100 MB. You can then verify the changes with the print command. You can now resize it back down to 50 MB. Keep in mind that shrinking a partition can cause data loss.

    (parted) resizepart                                                       
    Partition number? 1                                                       
    End?  [100MB]? 50                                                         
    Warning: Shrinking a partition can cause data loss, are you sure you want to
    continue?
    Yes/No? yes                                                               
    
    (parted) print
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  50.0MB  49.0MB  primary

Removing partitions with parted

Now, let's look at how to remove the partition you created at /dev/sdc1 by using the rm command inside of the parted suite. Again, you will need the partition number, which is found in the print output.

NOTE: Be sure that you have all of the information correct here; there are no safeguards or "are you sure?" prompts. When you run the rm command, it deletes the partition number you give it.

    (parted) rm 1                                                             
    (parted) print                                                            
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start  End  Size  Type  File system  Flags
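The article walks through parted's interactive mode; for completeness, the same steps can also be scripted non-interactively with parted's -s (--script) option. A hedged sketch, assuming /dev/sdc is a disposable test disk as above:

sudo parted -s /dev/sdc mklabel msdos
sudo parted -s /dev/sdc mkpart primary ext4 1MiB 50MiB
sudo parted -s /dev/sdc print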

[Apr 03, 2020] Use Midnight Commander like a pro by Igor Klimer

Apr 03, 2020 | klimer.eu

Panels

Common actions

Panel options

Bonus assignments

[Mar 12, 2020] 7 tips to speed up your Linux command line navigation Enable Sysadmin

Mar 12, 2020 | www.redhat.com

A bonus shortcut

You can use the keyboard combination, Alt+. , to repeat the last argument.

Note: The shortcut is Alt+. (dot).

$ mkdir /path/to/mydir

$ cd <Alt+.>

You are now in the /path/to/mydir directory.
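If you prefer history expansion to keyboard shortcuts, bash's !$ designator (the last word of the previous command) gives the same result; this is standard bash behavior rather than something from the article above:

$ mkdir /path/to/mydir
$ cd !$
cd /path/to/mydir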

[Mar 05, 2020] Using Ctags with MC

Mar 05, 2020 | frankhesse.wordpress.com

It turns out that the Midnight Commander's built-in editor is surprisingly capable. Below is one of the features of mc 4.7, namely the use of the ctags / etags utilities together with mcedit to navigate through code.

Code Navigation
Setup
Support for this functionality appeared in mcedit from version 4.7.0-pre1.
To use it, you need to index the project directory with the ctags or etags utility; to do this, run the following commands:

$ cd /home/user/projects/myproj
$ find . -type f -name "*.[ch]" | etags -lc --declarations -

or
$ find . -type f -name "*.[ch]" | ctags --c-kinds=+p --fields=+iaS --extra=+q -e -L-



After the utility completes, a TAGS file will appear in the root directory of our project, which mcedit will use.
That is practically all that needs to be done for mcedit to be able to find the definitions of functions, variables, or properties of the object under study.

Using
Imagine that we need to determine the place where the definition of the locked property of an edit object is located in some source code of a rather large project.


/* Succesful, so unlock both files */
if (different_filename) {
if (save_lock)
edit_unlock_file (exp);
if (edit->locked)
edit->locked = edit_unlock_file (edit->filename);
} else {
if (edit->locked || save_lock)
edit->locked = edit_unlock_file (edit->filename);
}


To do this, put the cursor at the end of the word locked and press Alt+Enter; a list of possible options appears, as in the screenshot in the original article (not reproduced here).

After selecting the desired option, we get to the line with the definition.

[Mar 05, 2020] How to switch the editor in mc (midnight commander) from nano to mcedit?

Jan 01, 2014 | askubuntu.com




sdu ,

Using Ubuntu 10.10, the editor in mc (midnight commander) is nano. How can I switch to the internal mc editor (mcedit)?

Isaiah ,

Press the following keys in order, one at a time:
  1. F9 Activates the top menu.
  2. o Selects the Option menu.
  3. c Opens the configuration dialog.
  4. i Toggles the use internal edit option.
  5. s Saves your preferences.

Hurnst , 2014-06-21 02:34:51

Run MC as usual. On the command line right above the bottom row of menu selections type select-editor . This should open a menu with a list of all of your installed editors. This is working for me on all my current linux machines.

, 2010-12-09 18:07:18

You can also change the standard editor. Open a terminal and type this command:
sudo update-alternatives --config editor

You will get an list of the installed editors on your system, and you can chose your favorite.

AntonioK , 2015-01-27 07:06:33

If you want to leave mc and the system settings as they are, you may just launch it with the editor overridden for that invocation only:
$ EDITOR=mcedit mc

> ,

Open Midnight Commander, go to Options -> Configuration and check "use internal editor" Hit save and you are done.

[Mar 05, 2020] How to change your hostname in Linux Enable Sysadmin

Mar 05, 2020 | www.redhat.com

How to change your hostname in Linux What's in a name, you ask? Everything. It's how other systems, services, and users "see" your system.

Posted March 3, 2020 | by Tyler Carrigan (Red Hat)


Your hostname is a vital piece of system information that you need to keep track of as a system administrator. Hostnames are the designations by which we separate systems into easily recognizable assets. This information is especially important to make a note of when working on a remotely managed system. I have experienced multiple instances of companies changing the hostnames or IPs of storage servers and then wondering why their data replication broke. There are many ways to change your hostname in Linux; however, in this article, I'll focus on changing your name as viewed by the network (specifically in Red Hat Enterprise Linux and Fedora).

Background

A quick bit of background. Before the invention of DNS, your computer's hostname was managed through the HOSTS file located at /etc/hosts . Anytime that a new computer was connected to your local network, all other computers on the network needed to add the new machine into the /etc/hosts file in order to communicate over the network. As this method did not scale with the transition into the world wide web era, DNS was a clear way forward. With DNS configured, your systems are smart enough to translate unique IPs into hostnames and back again, ensuring that there is little confusion in web communications.

Modern Linux systems have three different types of hostnames configured. To minimize confusion, I list them here and provide basic information on each as well as a personal best practice:

The static hostname is the traditional hostname, stored in /etc/hostname and applied at boot.
The transient hostname is the dynamic name maintained by the kernel; it defaults to the static name but can be changed at runtime (for example by DHCP or mDNS).
The pretty hostname is a free-form, human-friendly UTF-8 name intended for presentation to users.

It is recommended to pick a pretty hostname that is unique and not easily confused with other systems. Allow the transient and static names to be variations on the pretty, and you will be good to go in most circumstances.

Working with hostnames

Now, let's look at how to view your current hostname. The most basic command used to see this information is hostname -f . This command displays the system's fully qualified domain name (FQDN). To relate back to the three types of hostnames, this is your transient hostname. A better way, at least in terms of the information provided, is to use the systemd command hostnamectl to view your transient hostname and other system information:

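The article's screenshots are not reproduced here; viewing the transient hostname from the command line looks roughly like this (the host name is just an example, and the hostnamectl output is trimmed):

$ hostname -f
rhel8.example.com

$ hostnamectl
   Static hostname: rhel8
   ...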

Before moving on from the hostname command, I'll show you how to use it to change your transient hostname. Using hostname <x> (where x is the new hostname), you can change your network name quickly, but be careful. I once changed the hostname of a customer's server by accident while trying to view it. That was a small but painful error that I overlooked for several hours. You can see that process below:

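Without the screenshot, the command itself is simply (using a hypothetical name):

$ sudo hostname temporary-name
$ hostname
temporary-name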

It is also possible to use the hostnamectl command to change your hostname. This command, in conjunction with the right flags, can be used to alter all three types of hostnames. As stated previously, for the purposes of this article, our focus is on the transient hostname. The command and its output look something like this:

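Again in place of the screenshot, the equivalent hostnamectl invocation for the transient name is shown below; hostnamectl also accepts --static and --pretty for the other two hostname types:

$ sudo hostnamectl set-hostname --transient temporary-name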

The final method to look at is the sysctl command. This command allows you to change the kernel parameter for your transient name without having to reboot the system. That method looks something like this:
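The sysctl variant, again with a hypothetical name, looks like this:

$ sudo sysctl kernel.hostname=temporary-name
kernel.hostname = temporary-name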

GNOME tip

Using GNOME, you can go to Settings -> Details to view and change the static and pretty hostnames (the screenshot is omitted here).

Wrapping up

I hope that you found this information useful as a quick and easy way to manipulate your machine's network-visible hostname. Remember to always be careful when changing system hostnames, especially in enterprise environments, and to document changes as they are made.


[Mar 05, 2020] Debug your shell scripts with bashdb by Ben Martin

Nov 24, 2008 | www.linux.com

Author: Ben Martin

The Bash Debugger Project (bashdb) lets you set breakpoints, inspect variables, perform a backtrace, and step through a bash script line by line. In other words, it provides the features you expect in a C/C++ debugger to anyone programming a bash script.

To see if your standard bash executable has bashdb support, execute the command shown below; if you are not taken to a bashdb prompt then you'll have to install bashdb yourself.

$ bash --debugger -c "set|grep -i dbg" ... bashdb

The Ubuntu Intrepid repository contains a package for bashdb, but there is no special bashdb package in the openSUSE 11 or Fedora 9 repositories. I built from source using version 4.0-0.1 of bashdb on a 64-bit Fedora 9 machine, using the normal ./configure; make; sudo make install commands.

You can start the Bash Debugger using the bash --debugger foo.sh syntax or the bashdb foo.sh command. The former method is recommended except in cases where I/O redirection might cause issues, and it's what I used. You can also use bashdb through ddd or from an Emacs buffer.

The syntax for many of the commands in bashdb mimics that of gdb, the GNU debugger. You can step into functions, use next to execute the next line without stepping into any functions, generate a backtrace with bt , exit bashdb with quit or Ctrl-D, and examine a variable with print $foo . Aside from the prefixing of the variable with $ at the end of the last sentence, there are some other minor differences that you'll notice. For instance, pressing Enter on a blank line in bashdb executes the previous step or next command instead of whatever the previous command was.

The print command forces you to prefix shell variables with the dollar sign ( $foo ). A slightly shorter way of inspecting variables and functions is to use the x foo command, which uses declare to print variables and functions.

Both bashdb and your script run inside the same bash shell. Because bash lacks some namespace properties, bashdb will include some functions and symbols into the global namespace which your script can get at. bashdb prefixes its symbols with _Dbg_ , so you should avoid that prefix in your scripts to avoid potential clashes. bashdb also uses some environment variables; it uses the DBG_ prefix for its own, and relies on some standard bash ones that begin with BASH_ .


To illustrate the use of bashdb, I'll work on the small bash script below, which expects a numeric argument n and calculates the nth Fibonacci number .

#!/bin/bash

version="0.01";

fibonacci() {
    n=${1:?If you want the nth fibonacci number, you must supply n as the first parameter.}
    if [ $n -le 1 ]; then
        echo $n
    else
        l=`fibonacci $((n-1))`
        r=`fibonacci $((n-2))`
        echo $((l + r))
    fi
}

for i in `seq 1 10`
do
    result=$(fibonacci $i)
    echo "i=$i result=$result"
done

The below session shows bashdb in action, stepping over and then into the fibonacci function and inspecting variables. I've made my input text bold for ease of reading. An initial backtrace ( bt ) shows that the script begins at line 3, which is where the version variable is written. The next and list commands then progress to the next line of the script a few times and show the context of the current execution line. After one of the next commands I press Enter to execute next again. I invoke the examine command through the single letter shortcut x . Notice that the variables are printed out using declare as opposed to their display on the next line using print . Finally I set a breakpoint at the start of the fibonacci function and continue the execution of the shell script. The fibonacci function is called and I move to the next line a few times and inspect a variable.

$ bash --debugger ./fibonacci.sh
...
(/home/ben/testing/bashdb/fibonacci.sh:3):
3:      version="0.01";
bashdb bt
->0 in file `./fibonacci.sh' at line 3
##1 main() called from file `./fibonacci.sh' at line 0
bashdb next
(/home/ben/testing/bashdb/fibonacci.sh:16):
16:     for i in `seq 1 10`
bashdb list
16:==>for i in `seq 1 10`
17:   do
18:     result=$(fibonacci $i)
19:     echo "i=$i result=$result"
20:   done
bashdb next
(/home/ben/testing/bashdb/fibonacci.sh:18):
18:     result=$(fibonacci $i)
bashdb
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"
bashdb x i result
declare -- i="1"
declare -- result=""
bashdb print $i $result
1
bashdb break fibonacci
Breakpoint 1 set in file /home/ben/testing/bashdb/fibonacci.sh, line 5.
bashdb continue
Breakpoint 1 hit (1 times).
(/home/ben/testing/bashdb/fibonacci.sh:5):
5:      fibonacci() {
bashdb next
(/home/ben/testing/bashdb/fibonacci.sh:6):
6:      n=${1:?If you want the nth fibonacci number, you must supply n as the first parameter.}
bashdb next
(/home/ben/testing/bashdb/fibonacci.sh:7):
7:      if [ $n -le 1 ]; then
bashdb x n
declare -- n="2"
bashdb quit

Notice that the number in the bashdb prompt toward the end of the above example is enclosed in parentheses. Each set of parentheses indicates that you have entered a subshell. In this example this is due to being inside a shell function.

In the below example I use a watchpoint to see if and where the result variable changes. Notice the initial next command. I found that if I didn't issue that next then my watch would fail to work. As you can see, after I issue c to continue execution, execution is stopped whenever the result variable is about to change, and the new and old value are displayed.

(/home/ben/testing/bashdb/fibonacci.sh:3):
3:      version="0.01";
bashdb<0> next
(/home/ben/testing/bashdb/fibonacci.sh:16):
16:     for i in `seq 1 10`
bashdb<1> watch result
0: ($result)==0 arith: 0
bashdb<2> c
Watchpoint 0: $result changed:
  old value: ''
  new value: '1'
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"
bashdb<3> c
i=1 result=1
i=2 result=1
Watchpoint 0: $result changed:
  old value: '1'
  new value: '2'
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"

To get around the strange initial next requirement I used the watche command in the below session, which lets you stop whenever an expression becomes true. In this case I'm not overly interested in the first few Fibonacci numbers so I set a watch to have execution stop when the result is greater than 4. You can also use a watche command without a condition; for example, watche result would stop execution whenever the result variable changed.

$ bash --debugger ./fibonacci.sh
(/home/ben/testing/bashdb/fibonacci.sh:3):
3:      version="0.01";
bashdb<0> watche result > 4
0: (result > 4)==0 arith: 1
bashdb<1> continue
i=1 result=1
i=2 result=1
i=3 result=2
i=4 result=3
Watchpoint 0: result > 4 changed:
  old value: '0'
  new value: '1'
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"

When a shell script goes wrong, many folks use the time-tested method of incrementally adding in echo or printf statements to look for invalid values or code paths that are never reached. With bashdb, you can save yourself time by just adding a few watches on variables or setting a few breakpoints.

[Mar 04, 2020] A command-line HTML pretty-printer Making messy HTML readable - Stack Overflow

Jan 01, 2019 | stackoverflow.com

A command-line HTML pretty-printer: Making messy HTML readable [closed]


knorv ,

Closed. This question is off-topic . It is not currently accepting answers.

jonjbar ,

Have a look at the HTML Tidy Project: http://www.html-tidy.org/

The granddaddy of HTML tools, with support for modern standards.

There used to be a fork called tidy-html5 which since became the official thing. Here is its GitHub repository .

Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to modern standards.

For your needs, here is the command line to call Tidy:

tidy inputfile.html
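A slightly fuller invocation might look like this (the option names are from tidy's documented flags; the file names are placeholders):

tidy -indent -quiet -output cleaned.html inputfile.html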

Paul Brit ,

Update 2018: The homebrew/dupes is now deprecated, tidy-html5 may be directly installed.
brew install tidy-html5

Original reply:

Tidy from OS X doesn't support HTML5 . But there is experimental branch on Github which does.

To get it:

 brew tap homebrew/dupes
 brew install tidy --HEAD
 brew untap homebrew/dupes

That's it! Have fun!

Boris , 2019-11-16 01:27:35

Error: No available formula with the name "tidy" . brew install tidy-html5 works. – Pysis Apr 4 '17 at 13:34

[Feb 29, 2020] files - How to get over device or resource busy

Jan 01, 2011 | unix.stackexchange.com

ripper234 , 2011-04-13 08:51:26

I tried to rm -rf a folder, and got "device or resource busy".

In Windows, I would have used LockHunter to resolve this. What's the linux equivalent? (Please give as answer a simple "unlock this" method, and not complete articles like this one . Although they're useful, I'm currently interested in just ASimpleMethodThatWorks™)

camh , 2011-04-13 09:22:46

The tool you want is lsof , which stands for list open files .

It has a lot of options, so check the man page, but if you want to see all open files under a directory:

lsof +D /path

That will recurse through the filesystem under /path , so beware doing it on large directory trees.

Once you know which processes have files open, you can exit those apps, or kill them with the kill(1) command.

kip2 , 2014-04-03 01:24:22

sometimes it's the result of mounting issues, so I'd unmount the filesystem or directory you're trying to remove:

umount /path

BillThor ,

I use fuser for this kind of thing. It will list which process is using a file or files within a mount.
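For example (the path is a placeholder; -v lists the processes verbosely, -m treats the argument as a mounted filesystem, and -k sends a signal to the processes found, so use it with care):

$ fuser -vm /path
$ fuser -km /path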

user73011 ,

Here is the solution:
  1. Go into the directory and type ls -a
  2. You will find a .xyz file
  3. vi .xyz and look into what is the content of the file
  4. ps -ef | grep username
  5. You will see the .xyz content in the 8th column (last row)
  6. kill -9 job_ids - where job_ids is the value of the 2nd column of corresponding error caused content in the 8th column
  7. Now try to delete the folder or file.

Choylton B. Higginbottom ,

I had this same issue, built a one-liner starting with @camh recommendation:
lsof +D ./ | awk '{print $2}' | tail -n +2 | xargs kill -9

The awk command grabs the PIDS. The tail command gets rid of the pesky first entry: "PID". I used -9 on kill, others might have safer options.

user5359531 ,

I experience this frequently on servers that have NFS network file systems. I am assuming it has something to do with the filesystem, since the files are typically named like .nfs000000123089abcxyz .

My typical solution is to rename or move the parent directory of the file, then come back later in a day or two and the file will have been removed automatically, at which point I am free to delete the directory.

This typically happens in directories where I am installing or compiling software libraries.

gloriphobia , 2017-03-23 12:56:22

I had this problem when an automated test created a ramdisk. The commands suggested in the other answers, lsof and fuser , were of no help. After the tests I tried to unmount it and then delete the folder. I was really confused for ages because I couldn't get rid of it -- I kept getting "Device or resource busy" !

By accident I found out how to get rid of a ramdisk. I had to unmount it the same number of times that I had run the mount command, i.e. sudo umount path

Due to the fact that it was created using automated testing, it got mounted many times, hence why I couldn't get rid of it by simply unmounting it once after the tests. So, after I manually unmounted it lots of times it finally became a regular folder again and I could delete it.

Hopefully this can help someone else who comes across this problem!

bil , 2018-04-04 14:10:20

Riffing off of Prabhat's question above, I had this issue in macos high sierra when I stranded an encfs process, rebooting solved it, but this
ps -ef | grep name-of-busy-dir

Showed me the process and the PID (column two).

sudo kill -15 pid-here

fixed it.

Prabhat Kumar Singh , 2017-08-01 08:07:36

If you have the server accessible, Try

Deleting that dir from the server

Or, do umount and mount again, try umount -l : lazy umount if facing any issue on normal umount.

I too had this problem where

lsof +D path : gives no output

ps -ef : gives no relevant information

[Feb 28, 2020] linux - Convert a time span in seconds to formatted time in shell - Stack Overflow

Jan 01, 2012 | stackoverflow.com

Convert a time span in seconds to formatted time in shell


Darren , 2012-11-16 18:59:53

I have a variable of $i which is seconds in a shell script, and I am trying to convert it to 24 HOUR HH:MM:SS. Is this possible in shell?

sampson-chen , 2012-11-16 19:17:51

Here's a fun hacky way to do exactly what you are looking for =)
date -u -d @${i} +"%T"

Explanation: the @ prefix makes date treat $i as seconds since the Unix epoch, and the -u (UTC) flag together with the +"%T" format prints it as HH:MM:SS. This works as long as the span is less than 24 hours.

glenn jackman ,

Another approach: arithmetic
i=6789
((sec=i%60, i/=60, min=i%60, hrs=i/60))
timestamp=$(printf "%d:%02d:%02d" $hrs $min $sec)
echo $timestamp

produces 1:53:09

Alan Tam , 2014-02-17 06:48:21

The -d argument applies to date from coreutils (Linux) only.

In BSD/OS X, use

date -u -r $i +%T

kossboss , 2015-01-07 13:43:36

Here are my algo/script helpers on my site: http://ram.kossboss.com/seconds-to-split-time-convert/ I used this elegant algo from here: Convert seconds to hours, minutes, seconds
convertsecs() {
 ((h=${1}/3600))
 ((m=(${1}%3600)/60))
 ((s=${1}%60))
 printf "%02d:%02d:%02d\n" $h $m $s
}
TIME1="36"
TIME2="1036"
TIME3="91925"

echo $(convertsecs $TIME1)
echo $(convertsecs $TIME2)
echo $(convertsecs $TIME3)

Example of my second to day, hour, minute, second converter:

# convert seconds to day-hour:min:sec
convertsecs2dhms() {
 ((d=${1}/(60*60*24)))
 ((h=(${1}%(60*60*24))/(60*60)))
 ((m=(${1}%(60*60))/60))
 ((s=${1}%60))
 printf "%02d-%02d:%02d:%02d\n" $d $h $m $s
 # PRETTY OUTPUT: uncomment below printf and comment out above printf if you want prettier output
 # printf "%02dd %02dh %02dm %02ds\n" $d $h $m $s
}
# setting test variables: testing some constant variables & evaluated variables
TIME1="36"
TIME2="1036"
TIME3="91925"
# one way to output results
((TIME4=$TIME3*2)) # 183850
((TIME5=$TIME3*$TIME1)) # 3309300
((TIME6=100*86400+3*3600+40*60+31)) # 8653231 s = 100 days + 3 hours + 40 min + 31 sec
# outputting results: another way to show results (via echo & command substitution with         backticks)
echo $TIME1 - `convertsecs2dhms $TIME1`
echo $TIME2 - `convertsecs2dhms $TIME2`
echo $TIME3 - `convertsecs2dhms $TIME3`
echo $TIME4 - `convertsecs2dhms $TIME4`
echo $TIME5 - `convertsecs2dhms $TIME5`
echo $TIME6 - `convertsecs2dhms $TIME6`

# OUTPUT WOULD BE LIKE THIS (If none pretty printf used): 
# 36 - 00-00:00:36
# 1036 - 00-00:17:16
# 91925 - 01-01:32:05
# 183850 - 02-03:04:10
# 3309300 - 38-07:15:00
# 8653231 - 100-03:40:31
# OUTPUT WOULD BE LIKE THIS (If pretty printf used): 
# 36 - 00d 00h 00m 36s
# 1036 - 00d 00h 17m 16s
# 91925 - 01d 01h 32m 05s
# 183850 - 02d 03h 04m 10s
# 3309300 - 38d 07h 15m 00s
# 1000000000 - 11574d 01h 46m 40s

Basile Starynkevitch ,

If $i represents some date in second since the Epoch, you could display it with
  date -u -d @$i +%H:%M:%S

but you seem to suppose that $i is an interval (e.g. some duration) and not a date, and then I don't understand what you want.

Shilv , 2016-11-24 09:18:57

I use C shell, like this:
#! /bin/csh -f

set begDate_r = `date +%s`
set endDate_r = `date +%s`

set secs = `echo "$endDate_r - $begDate_r" | bc`
set h = `echo $secs/3600 | bc`
set m = `echo "$secs/60 - 60*$h" | bc`
set s = `echo $secs%60 | bc`

echo "Formatted Time: $h HOUR(s) - $m MIN(s) - $s SEC(s)"
Continuing @Darren's answer, just to be clear: if you want the conversion in your own time zone, don't use the -u switch, as in: date -d @$i +%T or, in some cases, date -d @"$i" +%T

[Feb 16, 2020] Recover deleted files in Debian with TestDisk

Images deleted; see the original link for details
Feb 16, 2020 | vitux.com

... ... ...

You can verify if the utility is indeed installed on your system and also check its version number by using the following command:

$ testdisk --version

Or,

$ testdisk -v

[Screenshot: Check TestDisk version]

Step 2: Run TestDisk and create a new testdisk.log file

Use the following command in order to run the testdisk command line utility:

$ sudo testdisk

The output will give you a description of the utility. It will also let you create a testdisk.log file. This file will later include useful information about how and where your lost file was found, listed and resumed.

[Screenshot: Using TestDisk]

The above output gives you three options about what to do with this file:

Create: (recommended)- This option lets you create a new log file.

Append: This option lets you append new information to already listed information in this file from any previous session.

No Log: Choose this option if you do not want to record anything about the session for later use.

Important: TestDisk is a pretty intelligent tool. It does know that many beginners will also be using the utility for recovering lost files. Therefore, it predicts and suggests the option you should ideally be selecting on a particular screen. You can see the suggested options in highlighted form. You can select an option through the up and down arrow keys and then press Enter to make your choice.

In the above output, I would opt for creating a new log file. The system might ask you the password for sudo at this point.

Step 3: Select your recovery drive

The utility will now display a list of drives attached to your system. In my case, it is showing my hard drive as it is the only storage device on my system.

[Screenshot: Choose recovery drive]

Select Proceed, through the right and left arrow keys and hit Enter. As mentioned in the note in the above screenshot, correct disk capacity must be detected in order for a successful file recovery to be performed.

Step 4: Select Partition Table Type of your Selected Drive

Now that you have selected a drive, you need to specify its partition table type on the following screen:

[Screenshot: Choose partition table]

The utility will automatically highlight the correct choice. Press Enter to continue.

If you are sure that the testdisk intelligence is incorrect, you can make the correct choice from the list and then hit Enter.

Step 5: Select the 'Advanced' option for file recovery

When you have specified the correct drive and its partition type, the following screen will appear:

[Screenshot: Advanced file recovery options]

Recovering lost files is only one of the features of testdisk, the utility offers much more than that. Through the options displayed in the above screenshot, you can select any of those features. But here we are interested only in recovering our accidentally deleted file. For this, select the Advanced option and hit enter.

In this utility if you reach a point you did not intend to, you can go back by using the q key.

Step 6: Select the drive partition where you lost the file

If your selected drive has multiple partitions, the following screen lets you choose the relevant one from them.

[Screenshot: Choose partition from where the file shall be recovered]

I lost my file while I was using Linux, Debian. Make your choice and then choose the List option from the options shown at the bottom of the screen.

This will list all the directories on your partition.

Step 7: Browse to the directory from where you lost the file

When the testdisk utility displays all the directories of your operating system, browse to the directory from where you deleted/lost the file. I remember that I lost the file from the Downloads folder in my home directory. So I will browse to home:

[Screenshot: Select directory]

My username (sana):

[Screenshot: Choose user folder]

And then the Downloads folder:

[Screenshot: Choose downloads]

Tip: You can use the left arrow to go back to the previous directory.

When you have reached your required directory, you will see the deleted files in colored or highlighted form.

And, here I see my lost file "accidently_removed.docx" in the list. Of course, I intentionally named it this as I had to illustrate the whole process to you.

[Screenshot: Highlighted files]

Step 8: Copy the deleted file to be restored

By now, you must have found your lost file in the list. Use the C option to copy the selected file. This file will later be restored to the location you will specify in the next step:

Step 9: Specify the location where the found file will be restored

Now that we have copied the lost file that we have now found, the testdisk utility will display the following screen so that we can specify where to restore it.

You can specify any accessible location as it is only a simple UI thing to copy and paste the file to your desired location.

I am specifically selecting the location from where I lost the file, my Downloads folder:

[Screenshot: Choose location to restore file]

Step 10: Copy/restore the file to the selected location

After making the selection about where you want to restore the file, click the C button. This will restore your file to that location:

[Screenshot: Restored file successfully]

See the text in green in the above screenshot? This is actually great news. Now my file is restored on the specified location.

This might seem to be a slightly long process but it is definitely worth getting your lost file back. The restored file will most probably be in a locked state. This means that only an authorized user can access and open it.

We all need this tool time and again, but if you want to remove it until you need it again, you can do so with the following command:

$ sudo apt-get remove testdisk

You can also delete the testdisk.log file if you want. It is such a relief to get your lost file back!

Karim Buzdar, February 11, 2020

[Feb 16, 2020] A List Of Useful Console Services For Linux Users by sk

Images deleted; see the original link for details
Feb 13, 2020 | www.ostechnix.com
Cheatsheets for Linux/Unix commands

You have probably heard about cheat.sh . I use this service every day! It is one of the most useful services for Linux users. It displays concise Linux command examples.

For instance, to view the curl command cheatsheet , simply run the following command from your console:

$ curl cheat.sh/curl

It is that simple! You don't need to go through man pages or use any online resources to learn about commands. It can get you the cheatsheets of most Linux and Unix commands in a couple of seconds.

ls command cheatsheet:

$ curl cheat.sh/ls

find command cheatsheet:

$ curl cheat.sh/find

It is a highly recommended tool!




... ... ...

IP Address

We can find the local IP address using the ip command. But what about the public IP address? It is simple!

To find your public IP address, just run the following commands from your Terminal:

$ curl ipinfo.io/ip
157.46.122.176
$ curl eth0.me
157.46.122.176
$ curl checkip.amazonaws.com
157.46.122.176
$ curl icanhazip.com
2409:4072:631a:c033:cc4b:4d25:e76c:9042

There is also a console service to display the ip address in JSON format.

$ curl httpbin.org/ip
{
  "origin": "157.46.122.176"
}

... ... ...

Dictionary

Want to know the meaning of an English word? Here is how you can get the meaning of a word – gustatory

$ curl 'dict://dict.org/d:gustatory'
220 pan.alephnull.com dictd 1.12.1/rf on Linux 4.4.0-1-amd64 <auth.mime> <[email protected]>
250 ok
150 1 definitions retrieved
151 "Gustatory" gcide "The Collaborative International Dictionary of English v.0.48"
Gustatory \Gust"a*to*ry\, a.
Pertaining to, or subservient to, the sense of taste; as, the
gustatory nerve which supplies the front of the tongue.
[1913 Webster]
.
250 ok [d/m/c = 1/0/16; 0.000r 0.000u 0.000s]
221 bye [d/m/c = 0/0/0; 0.000r 0.000u 0.000s]
Text sharing

You can share texts via some console services. These text sharing services are often useful for sharing code.

Here is an example.

$ echo "Welcome To OSTechNix!" | curl -F 'f:1=<-' ix.io
http://ix.io/2bCA

The above command will share the text "Welcome To OSTechNix" via the ix.io site. Anyone can view this text from a web browser by navigating to the URL – http://ix.io/2bCA

Another example:

$ echo "Welcome To OSTechNix!" | curl -F file=@- 0x0.st
http://0x0.st/i-0G.txt
File sharing

Not just text: we can even share files with anyone using a console service called filepush .

$ curl --upload-file ostechnix.txt filepush.co/upload/ostechnix.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    72    0     0  100    72      0     54  0:00:01  0:00:01 --:--:--    54http://filepush.co/8x6h/ostechnix.txt
100   110  100    38  100    72     27     53  0:00:01  0:00:01 --:--:--    81

The above command will upload the ostechnix.txt file to the filepush.co site. You can access this file from anywhere by navigating to the link – http://filepush.co/8x6h/ostechnix.txt

Another text sharing console service is termbin :

$ echo "Welcome To OSTechNix!" | nc termbin.com 9999

There is also another console service named transfer.sh , but it wasn't working at the time of writing this guide.

Browser

There are many text browsers available for Linux. Browsh is one of them, and you can access it right from your terminal using the command:

$ ssh brow.sh

Browsh is a modern text browser that supports graphics, including video. Technically speaking, it is not so much a browser itself as a terminal front-end to a browser. It uses headless Firefox to render the web page and then converts it to ASCII art. Refer to the following guide for more details.

Create QR codes for given string

Do you want to create QR-codes for a given string? That's easy!

$ curl qrenco.de/ostechnix

The QR code for the "ostechnix" string is printed right in the terminal (output not reproduced here).

URL Shorteners

Want to shorten long URLs to make them easier to post or share with your friends? Use the Tinyurl console service to shorten them:

$ curl -s http://tinyurl.com/api-create.php?url=https://www.ostechnix.com/pigz-compress-and-decompress-files-in-parallel-in-linux/
http://tinyurl.com/vkc5c5p

[Jan 25, 2020] timeout is a command-line utility that runs a specified command and terminates it if it is still running after a given period of time

You can achieve a similar effect with the at command, which allows more flexible time patterns.
Jan 23, 2020 | linuxize.com

timeout is a command-line utility that runs a specified command and terminates it if it is still running after a given period of time. In other words, timeout allows you to run a command with a time limit. The timeout command is a part of the GNU core utilities package which is installed on almost any Linux distribution.

It is handy when you want to run a command that doesn't have a built-in timeout option.

In this article, we will explain how to use the Linux timeout command.

How to Use the timeout Command #

The syntax for the timeout command is as follows:

timeout [OPTIONS] DURATION COMMAND [ARG]

The DURATION can be a positive integer or a floating-point number, followed by an optional unit suffix: s for seconds (the default), m for minutes, h for hours, or d for days.

When no unit is used, it defaults to seconds. If the duration is set to zero, the associated timeout is disabled.

The command options must be provided before the arguments.

Here are a few basic examples demonstrating how to use the timeout command:
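For instance (the script name is hypothetical):

timeout 5 ping 8.8.8.8       # stop ping after five seconds
timeout 2m ./backup.sh       # give a (hypothetical) backup script two minutes
timeout 0.5 sleep 10         # fractional durations are accepted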

If you want to run a command that requires elevated privileges such as tcpdump , prepend sudo before timeout :

sudo timeout 300 tcpdump -n -w data.pcap
Sending Specific Signal #

If no signal is given, timeout sends the SIGTERM signal to the managed command when the time limit is reached. You can specify which signal to send using the -s ( --signal ) option.

For example, to send SIGKILL to the ping command after one minute you would use:

sudo timeout -s SIGKILL ping 8.8.8.8

The signal can be specified by its name like SIGKILL or its number like 9 . The following command is identical to the previous one:

sudo timeout -s 9 ping 8.8.8.8

To get a list of all available signals, use the kill -l command:

kill -l
Killing Stuck Processes #

SIGTERM , the default signal that is sent when the time limit is exceeded, can be caught or ignored by some processes. In such situations, the process continues to run after the termination signal is sent.

To make sure the monitored command is killed, use the -k ( --kill-after ) option followed by a time period. With this option, if the managed program is still running that long after the first signal was sent, the timeout command sends it a SIGKILL signal, which cannot be caught or ignored.

In the following example, timeout runs the command for one minute, and if it is not terminated, it will kill it after ten seconds:

sudo timeout -k 10 1m ping 8.8.8.8


Preserving the Exit Status #

timeout returns 124 when the time limit is reached. Otherwise, it returns the exit status of the managed command.

To return the exit status of the command even when the time limit is reached, use the --preserve-status option:

timeout --preserve-status 5 ping 8.8.8.8
Running in Foreground #

By default, timeout runs the managed command in the background. If you want to run the command in the foreground, use the --foreground option:

timeout --foreground 5m ./script.sh

This option is useful when you want to run an interactive command that requires user input.

Conclusion #

The timeout command is used to run a given command with a time limit.

timeout is a simple command that doesn't have a lot of options. Typically you will invoke timeout with only two arguments: the duration and the managed command.

If you have any questions or feedback, feel free to leave a comment.


[Jan 16, 2020] Watch Command in Linux

Jan 16, 2020 | linuxhandbook.com

Last Updated on January 10, 2020 by Abhishek

Watch is a great utility that automatically refreshes data. Some of the more common uses for this command involve monitoring system processes or logs, but it can be used in combination with pipes for more versatility.
watch [options] [command]
Watch command examples
[Screenshot: Watch Command]

Using watch command without any options will use the default parameter of 2.0 second refresh intervals.

As I mentioned before, one of the more common uses is monitoring system processes. Let's use it with the free command . This will give you up to date information about our system's memory usage.

watch free

Yes, it is that simple my friends.

Every 2.0s: free                                pop-os: Wed Dec 25 13:47:59 2019

              total        used        free      shared  buff/cache   available
Mem:       32596848     3846372    25571572      676612     3178904    27702636
Swap:             0           0           0
Adjust refresh rate of watch command

You can easily change how quickly the output is updated using the -n flag.

watch -n 10 free
Every 10.0s: free                               pop-os: Wed Dec 25 13:58:32 2019

              total        used        free      shared  buff/cache   available
Mem:       32596848     4522508    24864196      715600     3210144    26988920
Swap:             0           0           0

This changes from the default 2.0 second refresh to 10.0 seconds as you can see in the top left corner of our output.

Remove title or header info from watch command output
watch -t free

The -t flag removes the title/header information to clean up output. The information will still refresh every 2 seconds but you can change that by combining the -n option.

              total        used        free      shared  buff/cache   available
Mem:       32596848     3683324    25089268     1251908     3824256    27286132
Swap:             0           0           0
Highlight the changes in watch command output

You can add the -d option and watch will automatically highlight changes for us. Let's take a look at this using the date command. I've included a screen capture to show how the highlighting behaves.

[Screen capture: Watch Command]
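The capture is not reproduced here, but the command being demonstrated is simply:

watch -d date

(Adding -n 1 makes it refresh every second, which makes the highlighting easier to see.)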
Using pipes with watch

You can combine items using pipes. This is not a feature exclusive to watch, but it enhances the functionality of this software. Pipes rely on the | symbol. Not coincidentally, this is called a pipe symbol or sometimes a vertical bar symbol.

watch "cat /var/log/syslog | tail -n 3"

While this command runs, it will list the last 3 lines of the syslog file. The list will be refreshed every 2 seconds and any changes will be displayed.

Every 2.0s: cat /var/log/syslog | tail -n 3                                                      pop-os: Wed Dec 25 15:18:06 2019

Dec 25 15:17:24 pop-os dbus-daemon[1705]: [session uid=1000 pid=1705] Successfully activated service 'org.freedesktop.Tracker1.Min
er.Extract'
Dec 25 15:17:24 pop-os systemd[1591]: Started Tracker metadata extractor.
Dec 25 15:17:45 pop-os systemd[1591]: tracker-extract.service: Succeeded.

Conclusion

Watch is a simple, but very useful utility. I hope I've given you ideas that will help you improve your workflow.

This is a straightforward command, but there are a wide range of potential uses. If you have any interesting uses that you would like to share, let us know about them in the comments.

[Jan 16, 2020] Linux tools How to use the ss command by Ken Hess (Red Hat)

ss is the Swiss Army Knife of system statistics commands. It's time to say buh-bye to netstat and hello to ss.
Jan 13, 2020 | www.redhat.com

If you're like me, you still cling to soon-to-be-deprecated commands like ifconfig , nslookup , and netstat . The new replacements are ip , dig , and ss , respectively. It's time to (reluctantly) let go of legacy utilities and head into the future with ss . The ip command is worth a mention here because part of netstat 's functionality has been replaced by ip . This article covers the essentials for the ss command so that you don't have to dig (no pun intended) for them.


Formally, ss is the socket statistics command that replaces netstat . In this article, I provide netstat commands and their ss replacements. Michael Prokop, the developer of ss , made it easy for us to transition into ss from netstat by making some of netstat 's options operate in much the same fashion in ss .

For example, to display TCP sockets, use the -t option:

$ netstat -t
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 rhel8:ssh               khess-mac:62036         ESTABLISHED

$ ss -t
State         Recv-Q          Send-Q                    Local Address:Port                   Peer Address:Port          
ESTAB         0               0                          192.168.1.65:ssh                    192.168.1.94:62036

You can see that the information given is essentially the same, but to better mimic what you see in the netstat command, use the -r (resolve) option:

$ ss -tr
State            Recv-Q             Send-Q                          Local Address:Port                         Peer Address:Port             
ESTAB            0                  0                                       rhel8:ssh                             khess-mac:62036

And to see port numbers rather than their translations, use the -n option:

$ ss -ntr
State            Recv-Q             Send-Q                          Local Address:Port                         Peer Address:Port             
ESTAB            0                  0                                       rhel8:22                              khess-mac:62036

It isn't 100% necessary that netstat and ss mesh, but it does make the transition a little easier. So, try your standby netstat options before hitting the man page or the internet for answers, and you might be pleasantly surprised at the results.
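For quick reference, here are a few commonly used ss invocations (the flag combinations come from the ss man page; the last line uses ss's own filter expression syntax):

$ ss -tln         # listening TCP sockets, numeric ports
$ ss -tulnp       # listening TCP and UDP sockets with owning processes (run as root to see them all)
$ ss -t state established '( dport = :ssh or sport = :ssh )'   # established SSH connections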

For example, the netstat command with the old standby options -an yield comparable results (which are too long to show here in full):

$ netstat -an |grep LISTEN

tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN     
tcp6       0      0 :::22                   :::*                    LISTEN     
unix  2      [ ACC ]     STREAM     LISTENING     28165    /run/user/0/systemd/private
unix  2      [ ACC ]     STREAM     LISTENING     20942    /var/lib/sss/pipes/private/sbus-dp_implicit_files.642
unix  2      [ ACC ]     STREAM     LISTENING     28174    /run/user/0/bus
unix  2      [ ACC ]     STREAM     LISTENING     20241    /var/run/lsm/ipc/simc
<truncated>

$ ss -an |grep LISTEN

u_str             LISTEN              0                    128                                             /run/user/0/systemd/private 28165                  * 0                   
                                                            
u_str             LISTEN              0                    128                   /var/lib/sss/pipes/private/sbus-dp_implicit_files.642 20942                  * 0                   
                                                            
u_str             LISTEN              0                    128                                                         /run/user/0/bus 28174                  * 0                   
                                                            
u_str             LISTEN              0                    5                                                     /var/run/lsm/ipc/simc 20241                  * 0                   
<truncated>

The TCP entries fall at the end of the ss command's display and at the beginning of netstat 's. So, there are layout differences even though the displayed information is really the same.
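As another hedged example, the habitual netstat -tulpn combination for listing listening TCP and UDP sockets, numerically and with the owning process, carries over to ss with the same letters:

$ sudo ss -tulpn

Here -t and -u select TCP and UDP, -l restricts the output to listening sockets, -n skips name resolution, and -p shows the process that owns each socket.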

If you're wondering which netstat commands have been replaced by the ip command, here's one for you:

$ netstat -g
IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      all-systems.mcast.net
enp0s3          1      all-systems.mcast.net
lo              1      ff02::1
lo              1      ff01::1
enp0s3          1      ff02::1:ffa6:ab3e
enp0s3          1      ff02::1:ff8d:912c
enp0s3          1      ff02::1
enp0s3          1      ff01::1

$ ip maddr
1:	lo
	inet  224.0.0.1
	inet6 ff02::1
	inet6 ff01::1
2:	enp0s3
	link  01:00:5e:00:00:01
	link  33:33:00:00:00:01
	link  33:33:ff:8d:91:2c
	link  33:33:ff:a6:ab:3e
	inet  224.0.0.1
	inet6 ff02::1:ffa6:ab3e
	inet6 ff02::1:ff8d:912c
	inet6 ff02::1
	inet6 ff01::1

The ss command isn't perfect (sorry, Michael). In fact, there is one significant ss bummer. You can try this one for yourself to compare the two:

$ netstat -s 

Ip:
    Forwarding: 2
    6231 total packets received
    2 with invalid addresses
    0 forwarded
    0 incoming packets discarded
    3104 incoming packets delivered
    2011 requests sent out
    243 dropped because of missing route
<truncated>

$ ss -s

Total: 182
TCP:   3 (estab 1, closed 0, orphaned 0, timewait 0)

Transport Total     IP        IPv6
RAW	  1         0         1        
UDP	  3         2         1        
TCP	  3         2         1        
INET	  7         4         3        
FRAG	  0         0         0

If you figure out how to display the same info with ss , please let me know.

Maybe as ss evolves, it will include more features. I guess Michael or someone else could always just look at the netstat command to glean those statistics from it. For me, I prefer netstat , and I'm not sure exactly why it's being deprecated in favor of ss . The output from ss is less human-readable in almost every instance.

What do you think? What about ss makes it a better option than netstat ? I suppose I could ask the same question of the other net-tools utilities as well. I don't find anything wrong with them. In my mind, unless you're significantly improving an existing utility, why bother deprecating the other?

There, you have the ss command in a nutshell. As netstat fades into oblivion, I'm sure I'll eventually embrace ss as its successor.

Want more on networking topics? Check out the Linux networking cheat sheet .

Ken Hess is an Enable SysAdmin Community Manager and an Enable SysAdmin contributor. Ken has used Red Hat Linux since 1996 and has written ebooks, whitepapers, actual books, thousands of exam review questions, and hundreds of articles on open source and other topics. More about me

[Jan 16, 2020] Thirteen Useful Tools for Working with Text on the Command Line - Make Tech Easier

Jan 16, 2020 | www.maketecheasier.com

Thirteen Useful Tools for Working with Text on the Command Line, by Karl Wakim – Posted on Jan 9, 2020

GNU/Linux distributions include a wealth of programs for handling text, most of which are provided by the GNU core utilities. There's somewhat of a learning curve, but these utilities can prove very useful and efficient when used correctly.

Here are thirteen powerful text manipulation tools every command-line user should know.

1. cat

Cat was designed to concatenate files (hence the name) but is most often used to display a single file. Without any arguments, cat reads standard input until Ctrl + D is pressed (whether from the terminal or from another program's output when using a pipe). Standard input can also be explicitly specified with a -.

Cat has a number of useful options, notably -n, which numbers the output lines, as used in the example below.

In the following example, we are concatenating and numbering the contents of file1, standard input, and file3.

cat -n file1 - file3
Linux Text Tools Cat
2. sort

As its name suggests, sort sorts file contents alphabetically and numerically.

Linux Text Tools Sort
3. uniq

Uniq takes a sorted file and removes duplicate lines. It is often chained with sort in a single command.

Linux Text Tools Uniq
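As a quick sketch (the file name is hypothetical), sort and uniq are often combined to count how many times each line occurs:

sort names.txt | uniq -c | sort -rn

The -c option prefixes each line with its count, and the final sort -rn puts the most frequent lines first.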
4. comm

Comm is used to compare two sorted files, line by line. It outputs three columns: the first two columns contain lines unique to the first and second file respectively, and the third displays those found in both files.

Linux Text Tools Comm
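A minimal sketch, assuming both input files are already sorted (the file names are illustrative): the -1, -2, and -3 options suppress the corresponding columns, so the following prints only the lines present in both files:

comm -12 packages-before.txt packages-after.txt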
5. cut

Cut is used to retrieve specific sections of lines, based on characters, fields, or bytes. It can read from a file or from standard input if no file is specified.

Cutting by character position

The -c option specifies a single character position or one or more ranges of characters.

For example:

Linux Text Tools Cut Char

Cutting by field

Fields are separated by a delimiter consisting of a single character, which is specified with the -d option. The -f option selects a field position or one or more ranges of fields using the same format as above.

Linux Text Tools Cut Field
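For example, /etc/passwd is colon-delimited, so the login name and shell of every account can be pulled out like this:

cut -d: -f1,7 /etc/passwd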
6. dos2unix

GNU/Linux and Unix usually terminate text lines with a line feed (LF), while Windows uses carriage return and line feed (CRLF). Compatibility issues can arise when handling CRLF text on Linux, which is where dos2unix comes in. It converts CRLF terminators to LF.

In the following example, the file command is used to check the text format before and after using dos2unix .

Linux Text Tools Dos2unix
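A typical session might look like this (the file name is hypothetical, and the exact wording of the file output can vary):

file report.txt
report.txt: ASCII text, with CRLF line terminators
dos2unix report.txt
file report.txt
report.txt: ASCII text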
7. fold

To make long lines of text easier to read and handle, you can use fold , which wraps lines to a specified width.

Fold strictly matches the specified width by default, breaking words where necessary.

fold -w 30 longline.txt
Linux Text Tools Fold

If breaking words is undesirable, you can use the -s option to break at spaces.

fold -w 30 -s longline.txt
Linux Text Tools Fold Spaces
8. iconv

This tool converts text from one encoding to another, which is very useful when dealing with unusual encodings.

iconv -f input_encoding -t output_encoding -o output_file input_file

Note: you can list the available encodings with iconv -l
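For instance, converting a Latin-1 file to UTF-8 (file names are illustrative) might look like this:

iconv -f ISO-8859-1 -t UTF-8 -o notes-utf8.txt notes.txt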

9. sed

sed is a powerful and flexible stream editor, most commonly used to find and replace strings with the following syntax.

The following command will read from the specified file (or standard input), replacing the parts of text that match the regular expression pattern with the replacement string and outputting the result to the terminal.

sed s/pattern/replacement/g filename

To modify the original file instead, you can use the -i flag.

Linux Text Tools Sed
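Putting it together, an in-place replacement that also keeps a backup of the original (the pattern and file name are hypothetical) could look like this:

sed -i.bak 's/example.org/example.com/g' hosts.txt

With GNU sed, -i.bak edits the file in place and saves the untouched original as hosts.txt.bak.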
10. wc

The wc utility prints the number of bytes, characters, words, or lines in a file.

Linux Text Tools Wc
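For example, counting the lines in /etc/passwd gives a quick tally of the accounts on the system:

wc -l /etc/passwd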
11. split

You can use split to divide a file into smaller files, by number of lines, by size, or to a specific number of files.

Splitting by number of lines

split -l num_lines input_file output_prefix
Linux Text Tools Split Lines

Splitting by bytes

split -b bytes input_file output_prefix
Linux Text Tools Split Bytes

Splitting to a specific number of files

split -n num_files input_file output_prefix
Linux Text Tools Split Number
12. tac

Tac, which is cat in reverse, does exactly that: it displays files with the lines in reverse order.

Linux Text Tools Tac
13. tr

The tr tool is used to translate or delete sets of characters.

A set of characters is usually either a string or a range of characters; for instance, "a-z" denotes all lowercase letters.

Refer to the tr manual page for more details.

To translate one set to another, use the following syntax:

tr SET1 SET2

For instance, to replace lowercase characters with their uppercase equivalent, you can use the following:

tr "a-z" "A-Z"
Linux Text Tools Tr

To delete a set of characters, use the -d flag.

tr -d SET
Linux Text Tools Tr D

To delete the complement of a set of characters (i.e. everything except the set), use -dc .

tr -dc SET
Linux Text Tools Tr Dc
Conclusion

There is plenty to learn when it comes to Linux command line. Hopefully, the above commands can help you to better deal with text in the command line.

[Dec 12, 2019] Use timedatectl to Control System Time and Date in Linux

Dec 12, 2019 | www.maketecheasier.com

Mastering the Command Line: Use timedatectl to Control System Time and Date in Linux, by Himanshu Arora – Posted on Nov 11, 2014

The timedatectl command in Linux allows you to query and change the system clock and its settings. It comes as part of systemd, a replacement for the sysvinit daemon used in the GNU/Linux and Unix systems.

In this article, we will discuss this command and the features it provides using relevant examples.

Timedatectl examples

Note – All examples described in this article are tested on GNU bash, version 4.3.11(1).

Display system date/time information

Simply run the command without any command line options or flags, and it gives you information on the system's current date and time, as well as time-related settings. For example, here is the output when I executed the command on my system:

$ timedatectl
      Local time: Sat 2014-11-08 05:46:40 IST
  Universal time: Sat 2014-11-08 00:16:40 UTC
        Timezone: Asia/Kolkata (IST, +0530)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a

So you can see that the output contains information on the local time, UTC, and the time zone, as well as settings related to NTP, RTC, and DST for the localhost.

Update the system date or time using the set-time option

To set the system clock to a specified date or time, use the set-time option followed by a string containing the new date/time information. For example, to change the system time to 6:40 am, I used the following command:

$ sudo timedatectl set-time "2014-11-08 06:40:00"

and here is the output:

$ timedatectl
      Local time: Sat 2014-11-08 06:40:02 IST
  Universal time: Sat 2014-11-08 01:10:02 UTC
        Timezone: Asia/Kolkata (IST, +0530)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a

Observe that the Local time field now shows the updated time. Similarly, you can update the system date, too.

Update the system time zone using the set-timezone option

To set the system time zone to the specified value, you can use the set-timezone option followed by the time zone value. To help you with the task, the timedatectl command also provides another useful option: list-timezones, which gives you a list of available time zones to choose from.

For example, here is the scrollable list of time zones the timedatectl command produced on my system:

timedatectl-timezones
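Since the full list is long, it is usually narrowed down with grep; for example, to find the Kathmandu zone used below:

$ timedatectl list-timezones | grep -i kathmandu
Asia/Kathmandu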

To change the system's current time zone from Asia/Kolkata to Asia/Kathmandu, here is the command I used:

$ timedatectl set-timezone Asia/Kathmandu

and to verify the change, here is the output of the timedatectl command:

$ timedatectl
      Local time: Sat 2014-11-08 07:11:23 NPT
  Universal time: Sat 2014-11-08 01:26:23 UTC
        Timezone: Asia/Kathmandu (NPT, +0545)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a

You can see that the time zone was changed to the new value.

Configure RTC

You can also use the timedatectl command to configure RTC (real-time clock). For those who are unaware, RTC is a battery-powered computer clock that keeps track of the time even when the system is turned off. The timedatectl command offers a set-local-rtc option which can be used to maintain the RTC in either local time or universal time.

This option requires a boolean argument. If 0 is supplied, the system is configured to maintain the RTC in universal time:

$ timedatectl set-local-rtc 0

but in case 1 is supplied, it will maintain the RTC in local time instead.

$ timedatectl set-local-rtc 1

A word of caution : Maintaining the RTC in the local time zone is not fully supported and will create various problems with time zone changes and daylight saving adjustments. If at all possible, use RTC in UTC.

Another point worth noting is that if set-local-rtc is invoked and the --adjust-system-clock option is passed, the system clock is synchronized from the RTC again, taking the new setting into account. Otherwise the RTC is synchronized from the system clock.

Configure NTP-based network time synchronization

NTP, or Network Time Protocol, is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. It is intended to synchronize all participating computers to within a few milliseconds of UTC.

The timedatectl command provides a set-ntp option that controls whether NTP based network time synchronization is enabled. This option expects a boolean argument. To enable NTP-based time synchronization, run the following command:

$ timedatectl set-ntp true

To disable, run:

$ timedatectl set-ntp false
Conclusion

As evident from the examples described above, the timedatectl command is a handy tool for system administrators, who can use it to adjust various system clocks and RTC configurations as well as poll remote servers for time information. To learn more about the command, head over to its man page.

[Dec 12, 2019] Set Time-Date-Timezone using Command Line in Linux

Dec 12, 2019 | linoxide.com

Set Time/Date/Timezone in Ubuntu Linux, February 5, 2019 (updated September 27, 2019), by Pungki Arianto

Time is an important aspect in Linux systems especially in critical services such as cron jobs. Having the correct time on the server ensures that the server operates in a healthy environment that consists of distributed systems and maintains accuracy in the workplace.

In this tutorial, we will focus on how to set time/date/time zone and to synchronize the server clock with your Ubuntu Linux machine.

Check Current Time

You can verify the current time and date using the date and the timedatectl commands. These Linux commands can be executed straight from the terminal as a regular user or as a superuser. The usefulness of the two commands is seen when you want to correct a wrong time from the command line.

Using the date command

Log in as a root user and use the command as follows

$ date

Output

check date using date command

You can also use the same command to check a date 2 days ago

$ date --date="2 days ago"

Output

check date 2 days ago

Using timedatectl command

To check the status of the time on your system as well as the present time settings, use the timedatectl command as shown

# timedatectl

or

# timedatectl  status

how to set time

Changing Time

We use timedatectl to change the system time using the format HH:MM:SS, where HH stands for the hour in 24-hour format, MM for minutes, and SS for seconds.

To set the time to 09:08:07, use the command as follows (using timedatectl)

# timedatectl set-time 09:08:07
Using the date command

Changing the time this way means all the system processes run on the same clock, keeping desktop and server in step. From the command line, use the date command as follows

# date +%T -s "10:13:13"

Where,
• 10: Hour (hh)
• 13: Minute (mm)
• 13: Second (ss)

To set the time using the 12-hour clock (AM or PM), use %p in the following format.

# date +%T%p -s "6:10:30AM"
# date +%T%p -s "12:10:30PM"
Change Date

Generally, you want your system date and time to be set automatically. If for some reason you have to change it manually using the date command, you can use this command:

# date --set="20140125 09:17:00"

It will set the current date and time of your system to 'January 25, 2014' and '09:17:00 AM'. Please note that you must have root privileges to do this.

You can use timedatectl to set the time and the date respectively. The accepted format is YYYY-MM-DD, where YYYY represents the year, MM the month in two digits, and DD the day in two digits. To change the date to 15 January 2019, you would use the following command

# timedatectl set-time 2019-01-15
Create custom date format

To create custom date format, use a plus sign (+)

$ date +"Day : %d Month : %m Year : %Y"
Day: 05 Month: 12 Year: 2013

$ date +%D
12/05/13

The %D format follows the MM/DD/YY (month/day/year) layout.

You can also put the day name if you want. Here are some examples :

$ date +"%a %b %d %y"
Fri Dec 06 13

$ date +"%A %B %d %Y"
Friday December 06 2013

$ date +"%A %B %d %Y %T"
Friday December 06 2013 00:30:37

$ date +"%A %B-%d-%Y %c"
Friday December-06-2013 12:30:37 AM WIB

List/Change time zone

Changing the time zone is crucial when you want to ensure that everything synchronizes with the Network Time Protocol. The first thing to do is to list all the available time zones using the list-timezones option (you can pipe the output through grep to narrow it down)

# timedatectl list-timezones

The above command will present a scrollable format.

list time zones

The recommended time zone for servers is UTC, as it doesn't have daylight saving time. If you know the specific time zone you need, set it by name using the following command

# timedatectl set-timezone America/Los_Angeles

To display the time zone, execute

# timedatectl | grep "Time"

check timezone

Set the Local-rtc

The Real-time clock (RTC) which is also referred to as the hardware clock is independent of the operating system and continues to run even when the server is shut down.

To keep the RTC in universal time, use the following command

# timedatectl set-local-rtc 0

To keep the RTC in local time instead, use the following command

# timedatectl set-local-rtc 1
Check/Change CMOS Time

The computer's CMOS clock will automatically stay synchronized with the system clock as long as the CMOS battery is working correctly.

Use the hwclock command to check the CMOS date as follows

# hwclock

check time using hwclock

To synchronize the CMOS date with the system date, use the following command

# hwclock --systohc
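The reverse operation, setting the system clock from the CMOS (hardware) clock, uses the opposite flag:

# hwclock --hctosys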

Having the correct time in your Linux environment is critical because many operations depend on it, including logging events and cron jobs. We hope you found this article useful.


[Nov 09, 2019] Mirroring a running system into a ramdisk Oracle Linux Blog

Nov 09, 2019 | blogs.oracle.com


Mirroring a running system into a ramdisk, by Greg Marsden

In this blog post, Oracle Linux kernel developer William Roche presents a method to mirror a running system into a ramdisk.

A RAM mirrored System ?

There are cases where a system can boot correctly but after some time, can lose its system disk access - for example an iSCSI system disk configuration that has network issues, or any other disk driver problem. Once the system disk is no longer accessible, we rapidly face a hang situation followed by I/O failures, without the possibility of local investigation on this machine. I/O errors can be reported on the console:

 XFS (dm-0): Log I/O Error Detected....

Or losing access to basic commands like:

# ls
-bash: /bin/ls: Input/output error

The approach presented here allows a small system disk space to be mirrored in memory to avoid the above I/O failures situation, which provides the ability to investigate the reasons for the disk loss. The system disk loss will be noticed as an I/O hang, at which point there will be a transition to use only the ram-disk.

To enable this, the Oracle Linux developer Philip "Bryce" Copeland created the following method (more details will follow):

Disk and memory sizes:

As we are going to mirror the entire system installation to the memory, this system installation image has to fit in a fraction of the memory - giving enough memory room to hold the mirror image and necessary running space.

Of course this is a trade-off between the memory available to the server and the minimal disk size needed to run the system. For example a 12GB disk space can be used for a minimal system installation on a 16GB memory machine.

A standard Oracle Linux installation uses XFS as root fs, which (currently) can't be shrunk. In order to generate a usable "small enough" system, it is recommended to proceed to the OS installation on a correctly sized disk space. Of course, a correctly sized installation location can be created using partitions of large physical disk. Then, the needed application filesystems can be mounted from their current installation disk(s). Some system adjustments may also be required (services added, configuration changes, etc...).

This configuration phase should not be underestimated as it can be difficult to separate the system from the needed applications, and keeping both on the same space could be too large for a RAM disk mirroring.

The idea is not to keep an entire system load active when losing disks access, but to be able to have enough system to avoid system commands access failure and analyze the situation.

We are also going to avoid the use of swap. When the system disk access is lost, we don't want to require it for swap data. Also, we don't want to use more memory space to hold a swap space mirror. The memory is better used directly by the system itself.

The system installation can have a swap space (for example a 1.2GB space on our 12GB disk example) but we are neither going to mirror it nor use it.

Our 12GB disk example could be used with: 1GB /boot space, 11GB LVM Space (1.2GB swap volume, 9.8 GB root volume).

Ramdisk memory footprint:

The ramdisk size has to be a little larger (8M) than the root volume size that we are going to mirror, making room for metadata. But we can deal with 2 types of ramdisk: the classic block ramdisk (brd) and the compressed zram device.

We can expect roughly 30% to 50% memory space gain from zram compared to brd, but zram must use 4k I/O blocks only. This means that the filesystem used for root has to only deal with a multiple of 4k I/Os.

Basic commands:

Here is a simple list of commands to manually create and use a ramdisk and mirror the root filesystem space. We create a temporary configuration that needs to be undone or the subsequent reboot will not work. But we also provide below a way of automating at startup and shutdown.

Note the root volume size (considered to be ol/root in this example):

# lvs --units k -o lv_size ol/root
  LSize
  10268672.00k

Create a ramdisk a little larger than that (at least 8M larger):

# modprobe brd rd_nr=1 rd_size=$((10268672 + 8*1024))

Verify the created disk:

# lsblk /dev/ram0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
ram0   1:0    0 9.8G  0 disk

Put the disk under lvm control

# pvcreate /dev/ram0
  Physical volume "/dev/ram0" successfully created.
# vgextend ol /dev/ram0
  Volume group "ol" successfully extended
# vgscan --cache
  Reading volume groups from cache.
  Found volume group "ol" using metadata type lvm2
# lvconvert -y -m 1 ol/root /dev/ram0
  Logical volume ol/root successfully converted.

We now have ol/root mirror to our /dev/ram0 disk.

# lvs -a -o +devices
  LV              VG Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---  9.79g                                  40.70            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---  9.79g                                                   /dev/sda2(307)
  [root_rimage_1] ol Iwi-aor---  9.79g                                                   /dev/ram0(1)
  [root_rmeta_0]  ol ewi-aor---  4.00m                                                   /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---  4.00m                                                   /dev/ram0(0)
  swap            ol -wi-ao---- <1.20g                                                   /dev/sda2(0)

A few minutes (or seconds) later, the synchronization is completed:

# lvs -a -o +devices
  LV              VG Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---  9.79g                                  100.00           root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---  9.79g                                                   /dev/sda2(307)
  [root_rimage_1] ol iwi-aor---  9.79g                                                   /dev/ram0(1)
  [root_rmeta_0]  ol ewi-aor---  4.00m                                                   /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---  4.00m                                                   /dev/ram0(0)
  swap            ol -wi-ao---- <1.20g                                                   /dev/sda2(0)

We have our mirrored configuration running !

For security, we can also remove the swap and /boot, /boot/efi(if it exists) mount points:

# swapoff -a
# umount /boot/efi
# umount /boot

Stopping the system also requires some actions as you need to cleanup the configuration so that it will not be looking for a gone ramdisk on reboot.

# lvconvert -y -m 0 ol/root /dev/ram0
  Logical volume ol/root successfully converted.
# vgreduce ol /dev/ram0
  Removed "/dev/ram0" from volume group "ol"
# mount /boot
# mount /boot/efi
# swapon -a
What about in-memory compression ?

As indicated above, zRAM devices can compress data in-memory, but 2 main problems need to be fixed: lvm has to be told to accept zram devices, and the root file system has to perform only 4k I/Os.

Make lvm work with zram:

The lvm configuration file has to be changed to take into account the "zram" type of devices. Including the following "types" entry to the /etc/lvm/lvm.conf file in its "devices" section:

devices {
        types = [ "zram", 16 ]
}
Root file system I/Os:

A standard Oracle Linux installation uses XFS, and we can check the sector size used (depending on the disk type used) with

# xfs_info /
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=641792 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2567168, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

We can notice here that the sector size (sectsz) used on this root fs is a standard 512 bytes. This fs type cannot be mirrored with a zRAM device, and needs to be recreated with 4k sector sizes.

Transforming the root file system to 4k sector size:

This is simply a backup (to a zram disk) and restore procedure after recreating the root FS. To do so, the system has to be booted from another system image. Booting from an installation DVD image can be a good possibility.

sh-4.2# vgchange -a y ol
  2 logical volume(s) in volume group "ol" now active
sh-4.2# mount /dev/mapper/ol-root /mnt

sh-4.2# modprobe zram
sh-4.2# echo 10G > /sys/block/zram0/disksize
sh-4.2# mkfs.xfs /dev/zram0
meta-data=/dev/zram0             isize=256    agcount=4, agsize=655360 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
sh-4.2# mkdir /mnt2
sh-4.2# mount /dev/zram0 /mnt2
sh-4.2# xfsdump -L BckUp -M dump -f /mnt2/ROOT /mnt
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsdump: level 0 dump of localhost:/mnt
...
xfsdump: dump complete: 130 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /mnt2/ROOT OK (success)
xfsdump: Dump Status: SUCCESS
sh-4.2# umount /mnt

sh-4.2# mkfs.xfs -f -s size=4096 /dev/mapper/ol-root
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=641792 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2567168, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
sh-4.2# mount /dev/mapper/ol-root /mnt

sh-4.2# xfsrestore -f /mnt2/ROOT /mnt
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
...
xfsrestore: restore complete: 337 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /mnt2/ROOT OK (success)
xfsrestore: Restore Status: SUCCESS
sh-4.2# umount /mnt
sh-4.2# umount /mnt2

sh-4.2# reboot

$ xfs_info /
meta-data=/dev/mapper/ol-root    isize=256    agcount=4, agsize=641792 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2567168, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

With sectsz=4096, our system is now ready for zRAM mirroring.

Basic commands with a zRAM device:

# modprobe zram
# zramctl --find --size 10G
/dev/zram0
# pvcreate /dev/zram0
  Physical volume "/dev/zram0" successfully created.
# vgextend ol /dev/zram0
  Volume group "ol" successfully extended
# vgscan --cache
  Reading volume groups from cache.
  Found volume group "ol" using metadata type lvm2
# lvconvert -y -m 1 ol/root /dev/zram0
  Logical volume ol/root successfully converted.
# lvs -a -o +devices
  LV              VG Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---  9.79g                                  12.38            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---  9.79g                                                   /dev/sda2(307)
  [root_rimage_1] ol Iwi-aor---  9.79g                                                   /dev/zram0(1)
  [root_rmeta_0]  ol ewi-aor---  4.00m                                                   /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---  4.00m                                                   /dev/zram0(0)
  swap            ol -wi-ao---- <1.20g                                                   /dev/sda2(0)
# lvs -a -o +devices
  LV              VG Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---  9.79g                                  100.00           root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---  9.79g                                                   /dev/sda2(307)
  [root_rimage_1] ol iwi-aor---  9.79g                                                   /dev/zram0(1)
  [root_rmeta_0]  ol ewi-aor---  4.00m                                                   /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---  4.00m                                                   /dev/zram0(0)
  swap            ol -wi-ao---- <1.20g                                                   /dev/sda2(0)
# zramctl
NAME       ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lzo            10G 9.8G  5.3G  5.5G       1

The compressed disk uses a total of 5.5GB of memory to mirror a 9.8G volume size (using in this case 8.5G).

Removal is performed the same way as brd, except that the device is /dev/zram0 instead of /dev/ram0.

Automating the process:

Fortunately, the procedure can be automated on system boot and shutdown with the following scripts (given as examples).

The start method: /usr/sbin/start-raid1-ramdisk: [ https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/start-raid1-ramdisk ]

After a chmod 555 /usr/sbin/start-raid1-ramdisk, running this script on a 4k xfs root file system should show something like:

# /usr/sbin/start-raid1-ramdisk
  Volume group "ol" is already consistent.
RAID1 ramdisk: intending to use 10276864 K of memory for facilitation of [ / ]
  Physical volume "/dev/zram0" successfully created.
  Volume group "ol" successfully extended
  Logical volume ol/root successfully converted.
Waiting for mirror to synchronize...
LVM RAID1 sync of [ / ] took 00:01:53 sec
  Logical volume ol/root changed.
NAME       ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4           9.8G 9.8G  5.5G  5.8G       1

The stop method: /usr/sbin/stop-raid1-ramdisk: [ https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/stop-raid1-ramdisk ]

After a chmod 555 /usr/sbin/stop-raid1-ramdisk, running this script should show something like:

# /usr/sbin/stop-raid1-ramdisk
  Volume group "ol" is already consistent.
  Logical volume ol/root changed.
  Logical volume ol/root successfully converted.
  Removed "/dev/zram0" from volume group "ol"
  Labels on physical volume "/dev/zram0" successfully wiped.

A service Unit file can also be created: /etc/systemd/system/raid1-ramdisk.service [https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/raid1-ramdisk.service]

[Unit]
Description=Enable RAMdisk RAID 1 on LVM
After=local-fs.target
Before=shutdown.target reboot.target halt.target

[Service]
ExecStart=/usr/sbin/start-raid1-ramdisk
ExecStop=/usr/sbin/stop-raid1-ramdisk
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0

[Install]
WantedBy=multi-user.target
Conclusion:

When the system disk access problem manifests itself, the ramdisk mirror branch will provide the possibility to investigate the situation. The goal of this procedure is not to keep the system running on this memory mirror configuration, but to help investigate a bad situation.

When the problem is identified and fixed, I really recommend returning to a standard configuration -- enjoying the entire memory of the system, a standard system disk, a possible swap space, etc.

Hoping the method described here can help. I also want to thank Philip "Bryce" Copeland, who created the first prototype of the above scripts, and Mark Kanda, who helped test many aspects of this work, for their reviews.

[Nov 09, 2019] chkservice Is A systemd Unit Manager With A Terminal User Interface

The site is https://github.com/linuxenko/chkservice . The tool is written in C++.
It looks like in version 0.3 the author increased the complexity by adding features which are probably not needed at all.
Nov 07, 2019 | www.linuxuprising.com

chkservice systemd manager
chkservice, a terminal user interface (TUI) for managing systemd units, has been updated recently with window resize and search support.

chkservice is a simplistic systemd unit manager that uses ncurses for its terminal interface. Using it you can enable or disable, and start or stop, a systemd unit. It also shows each unit's status (enabled, disabled, static or masked).

You can navigate the chkservice user interface using keyboard shortcuts:

To enable or disable a unit press Space, and to start or stop a unit press s. You can access the help screen, which shows all available keys, by pressing ?.

The command line tool had its first release in August 2017, with no new releases until a few days ago when version 0.2 was released, quickly followed by 0.3.

With the latest 0.3 release, chkservice adds a search feature that allows easily searching through all systemd units.

To search, type / followed by your search query, and press Enter . To search for the next item matching your search query you'll have to type / again, followed by Enter or Ctrl + m (without entering any search text).

Another addition to the latest chkservice is window resize support. In the 0.1 version, the tool would close when the user tried to resize the terminal window. That's no longer the case now, chkservice allowing the resize of the terminal window it runs in.

And finally, the last addition to the latest chkservice 0.3 is G/g navigation support. Press G (Shift + g) to navigate to the bottom, and g to navigate to the top.

Download and install chkservice

The initial (0.1) chkservice version can be found in the official repositories of a few Linux distributions, including Debian and Ubuntu (and Debian or Ubuntu based Linux distribution -- e.g. Linux Mint, Pop!_OS, Elementary OS and so on).
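On Debian or Ubuntu (and their derivatives), installing that packaged version should be as simple as the following; run the tool with sudo if you want to change unit states rather than just view them:

sudo apt install chkservice
sudo chkservice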

There are some third-party repositories available as well, including a Fedora Copr, Ubuntu / Linux Mint PPA, and Arch Linux AUR, but at the time I'm writing this, only the AUR package was updated to the latest chkservice version 0.3.

You may also install chkservice from source. Use the instructions provided in the tool's readme to either create a DEB package or install it directly.

[Nov 08, 2019] Multiple Linux sysadmins working as root

No new interesting ideas for such an important topic whatsoever. One of the main problems here is documenting the actions of each administrator in such a way that the full set of actions is visible to everybody in a convenient and transparent manner. With multiple terminals open, the shell history is not a file from which you can deduce each sysadmin's actions, as the parts of the history from the other terminals are missing. Solaris actually had some ideas implemented in Solaris 10, but they never made it to Linux.
May 21, 2012 | serverfault.com

In our team we have three seasoned Linux sysadmins having to administer a few dozen Debian servers. Previously we have all worked as root using SSH public key authentication. But we had a discussion on what is the best practice for that scenario and couldn't agree on anything.

Everybody's SSH public key is put into ~root/.ssh/authorized_keys2

Using personalized accounts and sudo

That way we would login with personalized accounts using SSH public keys and use sudo to do single tasks with root permissions. In addition we could give ourselves the "adm" group that allows us to view log files.

Using multiple UID 0 users

This is a very unique proposal from one of the sysadmins. He suggests creating three users in /etc/passwd, all having UID 0 but different login names. He claims that this is not actually forbidden and allows everyone to be UID 0 while still being able to audit.

Comments:

The second option is the best one IMHO. Personal accounts, sudo access. Disable root access via SSH completely. We have a few hundred servers and half a dozen system admins; this is how we do it.
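A minimal sketch of that setup (the group name and file layout are just an assumption, adapt them to your distribution):

# /etc/ssh/sshd_config -- then reload the SSH daemon
PermitRootLogin no

# /etc/sudoers.d/admins -- edit with visudo -f /etc/sudoers.d/admins
%admins ALL=(ALL:ALL) ALL

Each sysadmin then logs in with a personal key-based account that belongs to the admins group.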

How does agent forwarding break exactly?

Also, if it's such a hassle using sudo in front of every task you can invoke a sudo shell with sudo -s or switch to a root shell with sudo su -

thepearson

With regard to the 3rd suggested strategy, other than perusal of the useradd -o -u userXXX options as recommended by @jlliagre, I am not familiar with running multiple users as the same uid. (Hence if you do go ahead with that, I would be interested if you could update the post with any issues (or successes) that arise...)

I guess my first observation regarding the first option "Everybody's SSH public key is put into ~root/.ssh/authorized_keys2", is that unless you absolutely are never going to work on any other systems;

  1. then at least some of the time, you are going to have to work with user accounts and sudo

The second observation would be, that if you work on systems that aspire to HIPAA, PCI-DSS compliance, or stuff like CAPP and EAL, then you are going to have to work around the issues of sudo because;

  1. It is an industry standard to provide non-root individual user accounts, that can be audited, disabled, expired, etc, typically using some centralized user database.

So; Using personalized accounts and sudo

It is unfortunate that as a sysadmin, almost everything you will need to do on a remote machine is going to require some elevated permissions; however, it is annoying that most of the SSH-based tools and utilities are busted while you are in sudo.

Hence I can pass on some tricks that I use to work around the annoyances of sudo that you mention. The first problem is that if root login is blocked using PermitRootLogin=no, or root does not have an ssh key, then it makes SCPing files something of a PITA.

Problem 1 : You want to scp files from the remote side, but they require root access, however you cannot login to the remote box as root directly.

Boring Solution : copy the files to home directory, chown, and scp down.

ssh userXXX@remotesystem , sudo su - etc, cp /etc/somefiles to /home/userXXX/somefiles , chown -R userXXX /home/userXXX/somefiles , use scp to retrieve files from remote.

Less Boring Solution : sftp supports the -s sftp_server flag, hence you can do something like the following (if you have configured password-less sudo in /etc/sudoers );

sftp  -s '/usr/bin/sudo /usr/libexec/openssh/sftp-server' \
userXXX@remotehost:/etc/resolv.conf

(you can also use this hack-around with sshfs, but I am not sure its recommended... ;-)

If you don't have password-less sudo rights, or for some configured reason that method above is broken, I can suggest one more less boring file transfer method, to access remote root files.

Port Forward Ninja Method :

Login to the remote host, but specify that the remote port 3022 (can be anything free, and non-reserved for admins, ie >1024) is to be forwarded back to port 22 on the local side.

 [localuser@localmachine ~]$ ssh userXXX@remotehost -R 3022:localhost:22
Last login: Mon May 21 05:46:07 2012 from 123.123.123.123
------------------------------------------------------------------------
This is a private system; blah blah blah
------------------------------------------------------------------------

Get root in the normal fashion...

-bash-3.2$ sudo su -
[root@remotehost ~]#

Now you can scp the files in the other direction avoiding the boring boring step of making a intermediate copy of the files;

[root@remotehost ~]#  scp -o NoHostAuthenticationForLocalhost=yes \
 -P3022 /etc/resolv.conf localuser@localhost:~
localuser@localhost's password: 
resolv.conf                                 100%  
[root@remotehost ~]#

Problem 2: SSH agent forwarding : If you load the root profile, e.g. by specifying a login shell, the necessary environment variables for SSH agent forwarding such as SSH_AUTH_SOCK are reset, hence SSH agent forwarding is "broken" under sudo su - .

Half baked answer :

Anything that properly loads a root shell, is going to rightfully reset the environment, however there is a slight work-around your can use when you need BOTH root permission AND the ability to use the SSH Agent, AT THE SAME TIME

This achieves a kind of chimera profile, that should really not be used, because it is a nasty hack , but is useful when you need to SCP files from the remote host as root, to some other remote host.

Anyway, you can enable that your user can preserve their ENV variables, by setting the following in sudoers;

 Defaults:userXXX    !env_reset

this allows you to create nasty hybrid login environments like so;

login as normal;

[localuser@localmachine ~]$ ssh userXXX@remotehost 
Last login: Mon May 21 12:33:12 2012 from 123.123.123.123
------------------------------------------------------------------------
This is a private system; blah blah blah
------------------------------------------------------------------------
-bash-3.2$ env | grep SSH_AUTH
SSH_AUTH_SOCK=/tmp/ssh-qwO715/agent.1971

create a bash shell, that runs /root/.profile and /root/.bashrc . but preserves SSH_AUTH_SOCK

-bash-3.2$ sudo -E bash -l

So this shell has root permissions, and root $PATH (but a borked home directory...)

bash-3.2# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel) context=user_u:system_r:unconfined_t
bash-3.2# echo $PATH
/usr/kerberos/sbin:/usr/local/sbin:/usr/sbin:/sbin:/home/xtrabm/xtrabackup-manager:/usr/kerberos/bin:/opt/admin/bin:/usr/local/bin:/bin:/usr/bin:/opt/mx/bin

But you can use that invocation to do things that require remote sudo root, but also the SSH agent access like so;

bash-3.2# scp /root/.ssh/authorized_keys ssh-agent-user@some-other-remote-host:~
/root/.ssh/authorized_keys              100%  126     0.1KB/s   00:00    
bash-3.2#

Tom H

The 3rd option looks ideal - but have you actually tried it out to see what's happening? While you might see the additional usernames in the authentication step, any reverse lookup is going to return the same value.

Allowing root direct ssh access is a bad idea, even if your machines are not connected to the internet / use strong passwords.

Usually I use 'su' rather than sudo for root access.

symcbean

I use (1), but I happened to type

rm -rf / tmp *

on one ill-fated day. I can see this being bad enough if you have more than a handful of admins.

(2) is probably better engineered - and you can become full-fledged root through sudo su -. Accidents are still possible though.

(3) I would not touch with a barge pole. I used it on Suns, in order to have a non-barebone-sh root account (if I remember correctly) but it was never robust - plus I doubt it would be very auditable.

Definitely answer 2.
  1. Means that you're allowing SSH access as root . If this machine is in any way public facing, this is just a terrible idea; back when I ran SSH on port 22, my VPS got multiple attempts hourly to authenticate as root. I had a basic IDS set up to log and ban IPs that made multiple failed attempts, but they kept coming. Thankfully, I'd disabled SSH access as the root user as soon as I had my own account and sudo configured. Additionally, you have virtually no audit trail doing this.
  2. Provides root access as and when it is needed. Yes, you barely have any privileges as a standard user, but this is pretty much exactly what you want; if an account does get compromised, you want it to be limited in its abilities. You want any super user access to require a password re-entry. Additionally, sudo access can be controlled through user groups, and restricted to particular commands if you like, giving you more control over who has access to what. Additionally, commands run as sudo can be logged, so it provides a much better audit trail if things go wrong. Oh, and don't just run "sudo su -" as soon as you log in. That's terrible, terrible practice.
  3. Your sysadmin's idea is bad. And he should feel bad. No, *nix machines probably won't stop you from doing this, but both your file system, and virtually every application out there expects each user to have a unique UID. If you start going down this road, I can guarantee that you'll run into problems. Maybe not immediately, but eventually. For example, despite displaying nice friendly names, files and directories use UID numbers to designate their owners; if you run into a program that has a problem with duplicate UIDs down the line, you can't just change a UID in your passwd file later on without having to do some serious manual file system cleanup.

sudo is the way forward. It may cause additional hassle with running commands as root, but it provides you with a more secure box, both in terms of access and auditing.

Rohaq

Definitely option 2, but use groups to give each user as much control as possible without needing to use sudo. sudo in front of every command loses half the benefit because you are always in the danger zone. If you make the relevant directories writable by the sysadmins without sudo you return sudo to the exception which makes everyone feel safer.

Julian

In the old days, sudo did not exist. As a consequence, having multiple UID 0 users was the only available alternative. But it's still not that good, notably with logging based on the UID to obtain the username. Nowadays, sudo is the only appropriate solution. Forget anything else.

It is documented as permissible in fact. BSD unices have had their toor account for a long time, and bashroot users tend to be accepted practice on systems where csh is standard (accepted malpractice ;)

Perhaps I'm weird, but method (3) is what popped into my mind first as well. Pros: you'd have every user's name in logs and would know who did what as root. Cons: they'd each be root all the time, so mistakes can be catastrophic.

I'd like to question why you need all admins to have root access. All 3 methods you propose have one distinct disadvantage: once an admin runs a sudo bash -l or sudo su - or such, you lose your ability to track who does what and after that, a mistake can be catastrophic. Moreover, in case of possible misbehaviour, this even might end up a lot worse.

Instead you might want to consider going another way:

For example, create a dedicated account (say, martin) that has privileges over the postfix subsystem only. This way, martin would be able to safely handle postfix, and in case of mistake or misbehaviour, you'd only lose your postfix system, not the entire server.

Same logic can be applied to any other subsystem, such as apache, mysql, etc.

Of course, this is purely theoretical at this point, and might be hard to set up. It does look like a better way to go tho. At least to me. If anyone tries this, please let me know how it went.

Tuncay Göncüoğlu

[Nov 08, 2019] Perl tricks for system administrators by Ruth Holloway

Notable quotes:
"... /home/<department>/<username> ..."
Jul 27, 2016 | opensource.com

Did you know that Perl is a great programming language for system administrators? Perl is platform-independent so you can do things on different operating systems without rewriting your scripts. Scripting in Perl is quick and easy, and its portability makes your scripts amazingly useful. Here are a few examples, just to get your creative juices flowing!

Renaming a bunch of files

Suppose you need to rename a whole bunch of files in a directory. In this case, we've got a directory full of .xml files, and we want to rename them all to .html . Easy-peasy!

#!/usr/bin/perl
use strict;
use warnings;

foreach my $file (glob "*.xml") {
    my $new = substr($file, 0, -3) . "html";
    rename $file, $new;
}

Then just cd to the directory where you need to make the change, and run the script. You could put this in a cron job, if you needed to run it regularly, and it is easily enhanced to accept parameters.

Speaking of accepting parameters, let's take a look at a script that does just that.

Creating a Linux user account


Suppose you need to regularly create Linux user accounts on your system, and the format of the username is first initial/last name, as is common in many businesses. (This is, of course, a good idea, until you get John Smith and Jane Smith working at the same company -- or want John to have two accounts, as he works part-time in two different departments. But humor me, okay?) Each user account needs to be in a group based on their department, and home directories are of the format /home/<department>/<username> . Let's take a look at a script to do that:

#!/usr/bin/env perl
use strict;
use warnings;

my $adduser = '/usr/sbin/adduser';

use Getopt::Long qw(GetOptions);

# If the user calls the script with no parameters,
# give them help!

if (not @ARGV) {
    usage();
}

# Gather our options; if they specify any undefined option,
# they'll get sent some help!

my %opts;
GetOptions(\%opts,
    'fname=s',
    'lname=s',
    'dept=s',
    'run',
) or usage();

# Let's validate our inputs. All three parameters are
# required, and must be alphabetic.
# You could be clever, and do this with a foreach loop,
# but let's keep it simple for now.

if (not $opts{fname} or $opts{fname} !~ /^[a-zA-Z]+$/) {
    usage("First name must be alphabetic");
}
if (not $opts{lname} or $opts{lname} !~ /^[a-zA-Z]+$/) {
    usage("Last name must be alphabetic");
}
if (not $opts{dept} or $opts{dept} !~ /^[a-zA-Z]+$/) {
    usage("Department must be alphabetic");
}

# Construct the username and home directory

my $username = lc(substr($opts{fname}, 0, 1) . $opts{lname});
my $home = "/home/$opts{dept}/$username";

# Show them what we've got ready to go.

print "Name: $opts{fname} $opts{lname}\n";
print "Username: $username\n";
print "Department: $opts{dept}\n";
print "Home directory: $home\n\n";

# use qq() here, so that the quotes in the --gecos flag
# get carried into the command!

my $cmd = qq($adduser --home $home --ingroup $opts{dept} \\
    --gecos "$opts{fname} $opts{lname}" $username);

print "$cmd\n";
if ($opts{run}) {
    system $cmd;
} else {
    print "You need to add the --run flag to actually execute\n";
}

sub usage {
    my ($msg) = @_;
    if ($msg) {
        print "$msg\n\n";
    }
    print "Usage: $0 --fname FirstName --lname LastName --dept Department --run\n";
    exit;
}

As with the previous script, there are opportunities for enhancement, but something like this might be all that you need for this task.
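For instance, an invocation might look like this (the script name and user details are hypothetical; the output mirrors the print statements in the script above):

$ ./add_user.pl --fname Jane --lname Smith --dept sales
Name: Jane Smith
Username: jsmith
Department: sales
Home directory: /home/sales/jsmith

/usr/sbin/adduser --home /home/sales/jsmith --ingroup sales \
    --gecos "Jane Smith" jsmith
You need to add the --run flag to actually execute

Adding --run to the command line actually executes the generated adduser command.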

One more, just for fun!

Change copyright text in every Perl source file in a directory tree

Now we're going to try a mass edit. Suppose you've got a directory full of code, and each file has a copyright statement somewhere in it. (Rich Bowen wrote a great article, Copyright statements proliferate inside open source code a couple of years ago that discusses the wisdom of copyright statements in open source code. It is a good read, and I recommend it highly. But again, humor me.) You want to change that text in each and every file in the directory tree. File::Find and File::Slurp are your friends!

#!/usr/bin/perl
use strict;
use warnings;

use File::Find qw(find);
use File::Slurp qw(read_file write_file);

# If the user gives a directory name, use that. Otherwise,
# use the current directory.

my $dir = $ARGV[0] || '.';

# File::Find::find is kind of dark-arts magic.
# You give it a reference to some code,
# and a directory to hunt in, and it will
# execute that code on every file in the
# directory, and all subdirectories. In this
# case, \&change_file is the reference
# to our code, a subroutine. You could, if
# what you wanted to do was really short,
# include it in a { } block instead. But doing
# it this way is nice and readable.

find(\&change_file, $dir);

sub change_file {
    my $name = $_;

    # If the file is a directory, symlink, or other
    # non-regular file, don't do anything

    if (not -f $name) {
        return;
    }

    # If it's not Perl, don't do anything.

    if (substr($name, -3) ne ".pl") {
        return;
    }
    print "$name\n";

    # Gobble up the file, complete with carriage
    # returns and everything.
    # Be wary of this if you have very large files
    # on a system with limited memory!

    my $data = read_file($name);

    # Use a regex to make the change. If the string appears
    # more than once, this will change it everywhere!

    $data =~ s/Copyright Old/Copyright New/g;

    # Let's not ruin our original files

    my $backup = "$name.bak";
    rename $name, $backup;
    write_file($name, $data);

    return;
}

Because of Perl's portability, you could use this script on a Windows system as well as a Linux system -- it Just Works because of the underlying Perl interpreter code. The create-an-account code above, however, is not portable; it is Linux-specific because it uses Linux commands such as adduser.

In my experience, I've found it useful to have a Git repository of these things somewhere that I can clone on each new system I'm working with. Over time, you'll think of changes to make to the code to enhance the capabilities, or you'll add new scripts, and Git can help you make sure that all your tools and tricks are available on all your systems.

I hope these little scripts have given you some ideas how you can use Perl to make your system administration life a little easier. In addition to these longer scripts, take a look at a fantastic list of Perl one-liners, and links to other Perl magic assembled by Mischa Peterson.

[Nov 08, 2019] Manage NTP with Chrony by David Both

Dec 03, 2018 | opensource.com

Chronyd is a better choice for most networks than ntpd for keeping computers synchronized with the Network Time Protocol.

"Does anybody really know what time it is? Does anybody really care?"
Chicago , 1969

Perhaps that rock group didn't care what time it was, but our computers do need to know the exact time. Timekeeping is very important to computer networks. In banking, stock markets, and other financial businesses, transactions must be maintained in the proper order, and exact time sequences are critical for that. For sysadmins and DevOps professionals, it's easier to follow the trail of email through a series of servers or to determine the exact sequence of events using log files on geographically dispersed hosts when exact times are kept on the computers in question.

I used to work at an organization that received over 20 million emails per day and had four servers just to accept and do a basic filter on the incoming flood of email. From there, emails were sent to one of four other servers to perform more complex anti-spam assessments, then they were delivered to one of several additional servers where the emails were placed in the correct inboxes. At each layer, the emails would be sent to one of the next-level servers, selected only by the randomness of round-robin DNS. Sometimes we had to trace a new message through the system until we could determine where it "got lost," according to the pointy-haired bosses. We had to do this with frightening regularity.

Most of that email turned out to be spam. Some people actually complained that their [joke, cat pic, recipe, inspirational saying, or other-strange-email]-of-the-day was missing and asked us to find it. We did reject those opportunities.

Our email and other transactional searches were aided by log entries with timestamps that -- today -- can resolve down to the nanosecond in even the slowest of modern Linux computers. In very high-volume transaction environments, even a few microseconds of difference in the system clocks can mean sorting thousands of transactions to find the correct one(s).

The NTP server hierarchy

Computers worldwide use the Network Time Protocol (NTP) to synchronize their times with internet standard reference clocks via a hierarchy of NTP servers. The primary servers are at stratum 1, and they are connected directly to various national time services at stratum 0 via satellite, radio, or even modems over phone lines. The time service at stratum 0 may be an atomic clock, a radio receiver tuned to the signals broadcast by an atomic clock, or a GPS receiver using the highly accurate clock signals broadcast by GPS satellites.

To prevent time requests from time servers lower in the hierarchy (i.e., with a higher stratum number) from overwhelming the primary reference servers, there are several thousand public NTP stratum 2 servers that are open and available for anyone to use. Many organizations with large numbers of hosts that need an NTP server will set up their own time servers so that only one local host accesses the stratum 2 time servers, then they configure the remaining network hosts to use the local time server which, in my case, is a stratum 3 server.

NTP choices

The original NTP daemon, ntpd , has been joined by a newer one, chronyd . Both keep the local host's time synchronized with the time server. Both services are available, and I have seen nothing to indicate that this will change anytime soon.

Chrony has several features that make it the better choice for most environments.

The NTP and Chrony RPM packages are available from standard Fedora repositories. You can install both and switch between them, but modern Fedora, CentOS, and RHEL releases have moved from NTP to Chrony as their default time-keeping implementation. I have found that Chrony works well, provides a better interface for the sysadmin, presents much more information, and increases control.

Just to make it clear, NTP is a protocol that is implemented with either NTP or Chrony. If you'd like to know more, read this comparison between NTP and Chrony as implementations of the NTP protocol.

This article explains how to configure Chrony clients and servers on a Fedora host, but the configuration for CentOS and RHEL current releases works the same.

Chrony structure

The Chrony daemon, chronyd , runs in the background and monitors the time and status of the time server specified in the chrony.conf file. If the local time needs to be adjusted, chronyd does it smoothly without the programmatic trauma that would occur if the clock were instantly reset to a new time.

Chrony's chronyc tool allows someone to monitor the current status of Chrony and make changes if necessary. The chronyc utility can be used as a command that accepts subcommands, or it can be used as an interactive text-mode program. This article will explain both uses.

Client configuration

The NTP client configuration is simple and requires little or no intervention. The NTP server can be defined during the Linux installation or provided by the DHCP server at boot time. The default /etc/chrony.conf file (shown below in its entirety) requires no intervention to work properly as a client. For Fedora, Chrony uses the Fedora NTP pool, and CentOS and RHEL have their own NTP server pools. Like many Red Hat-based distributions, the configuration file is well commented.

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool 2.fedora.pool.ntp.org iburst

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
keyfile /etc/chrony.keys

# Get TAI-UTC offset and leap seconds from the system tz database.
leapsectz right/UTC

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking

Let's look at the current status of NTP on a virtual machine I use for testing. The chronyc command, when used with the tracking subcommand, provides statistics that report how far off the local system is from the reference server.

[root@studentvm1 ~]# chronyc tracking
Reference ID : 23ABED4D (ec2-35-171-237-77.compute-1.amazonaws.com)
Stratum : 3
Ref time (UTC) : Fri Nov 16 16:21:30 2018
System time : 0.000645622 seconds slow of NTP time
Last offset : -0.000308577 seconds
RMS offset : 0.000786140 seconds
Frequency : 0.147 ppm slow
Residual freq : -0.073 ppm
Skew : 0.062 ppm
Root delay : 0.041452706 seconds
Root dispersion : 0.022665167 seconds
Update interval : 1044.2 seconds
Leap status : Normal
[root@studentvm1 ~]#

The Reference ID in the first line of the result is the server the host is synchronized to -- in this case, a stratum 3 reference server that was last contacted by the host at 16:21:30 2018. The other lines are described in the chronyc(1) man page .

The sources subcommand is also useful because it provides information about the time source configured in chrony.conf .

[root@studentvm1 ~]# chronyc sources
210 Number of sources = 5
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^+ 192.168.0.51 3 6 377 0 -2613us[-2613us] +/- 63ms
^+ dev.smatwebdesign.com 3 10 377 28m -2961us[-3534us] +/- 113ms
^+ propjet.latt.net 2 10 377 465 -1097us[-1085us] +/- 77ms
^* ec2-35-171-237-77.comput> 2 10 377 83 +2388us[+2395us] +/- 95ms
^+ PBX.cytranet.net 3 10 377 507 -1602us[-1589us] +/- 96ms
[root@studentvm1 ~]#

The first source in the list is the time server I set up for my personal network. The others were provided by the pool. Even though my NTP server doesn't appear in the Chrony configuration file above, my DHCP server provides its IP address for the NTP server. The "S" column -- Source State -- indicates with an asterisk ( * ) the server our host is synced to. This is consistent with the data from the tracking subcommand.

The -v option provides a nice description of the fields in this output.

[root@studentvm1 ~]# chronyc sources -v
210 Number of sources = 5

.-- Source mode '^' = server, '=' = peer, '#' = local clock.
/ .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| / '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
|| .- xxxx [ yyyy ] +/- zzzz
|| Reachability register (octal) -. | xxxx = adjusted offset,
|| Log2(Polling interval) --. | | yyyy = measured offset,
|| \ | | zzzz = estimated error.
|| | | \
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^+ 192.168.0.51 3 7 377 28 -2156us[-2156us] +/- 63ms
^+ triton.ellipse.net 2 10 377 24 +5716us[+5716us] +/- 62ms
^+ lithium.constant.com 2 10 377 351 -820us[ -820us] +/- 64ms
^* t2.time.bf1.yahoo.com 2 10 377 453 -992us[ -965us] +/- 46ms
^- ntp.idealab.com 2 10 377 799 +3653us[+3674us] +/- 87ms
[root@studentvm1 ~]#

If I wanted my server to be the preferred reference time source for this host, I would add the line below to the /etc/chrony.conf file.

server 192.168.0.51 iburst prefer

I usually place this line just above the first pool server statement near the top of the file. There is no special reason for this, except I like to keep the server statements together. It would work just as well at the bottom of the file, and I have done that on several hosts. This configuration file is not sequence-sensitive.

The prefer option marks this as the preferred reference source. As such, this host will always be synchronized with this reference source (as long as it is available). We can also use the fully qualified hostname for a remote reference server or the hostname only (without the domain name) for a local reference time source as long as the search statement is set in the /etc/resolv.conf file. I prefer the IP address to ensure that the time source is accessible even if DNS is not working. In most environments, the server name is probably the better option, because NTP will continue to work even if the server's IP address changes.

If you don't have a specific reference source you want to synchronize to, it is fine to use the defaults.

Configuring an NTP server with Chrony

The nice thing about the Chrony configuration file is that this single file configures the host as both a client and a server. To add a server function to our host -- it will always be a client, obtaining its time from a reference server -- we just need to make a couple of changes to the Chrony configuration, then configure the host's firewall to accept NTP requests.

Open the /etc/chrony.conf file in your favorite text editor and uncomment the local stratum 10 line. This enables the Chrony NTP server to continue to act as if it were connected to a remote reference server if the internet connection fails; this enables the host to continue to be an NTP server to other hosts on the local network.

Let's restart chronyd and track how the service is working for a few minutes. Before we enable our host as an NTP server, we want to test a bit.

[root@studentvm1 ~]# systemctl restart chronyd ; watch chronyc tracking

The results should look like this. The watch command runs the chronyc tracking command every two seconds so we can watch changes occur over time.

Every 2.0s: chronyc tracking studentvm1: Fri Nov 16 20:59:31 2018

Reference ID : C0A80033 (192.168.0.51)
Stratum : 4
Ref time (UTC) : Sat Nov 17 01:58:51 2018
System time : 0.001598277 seconds fast of NTP time
Last offset : +0.001791533 seconds
RMS offset : 0.001791533 seconds
Frequency : 0.546 ppm slow
Residual freq : -0.175 ppm
Skew : 0.168 ppm
Root delay : 0.094823152 seconds
Root dispersion : 0.021242738 seconds
Update interval : 65.0 seconds
Leap status : Normal

Notice that my NTP server, the studentvm1 host, synchronizes to the host at 192.168.0.51, which is my internal network NTP server, at stratum 4. Synchronizing directly to the Fedora pool machines would result in synchronization at stratum 3. Notice also that the amount of error decreases over time. Eventually, it should stabilize with a tiny variation around a fairly small range of error. The size of the error depends upon the stratum and other network factors. After a few minutes, use Ctrl+C to break out of the watch loop.

To turn our host into an NTP server, we need to allow it to listen on the local network. Uncomment the following line to allow hosts on the local network to access our NTP server.

# Allow NTP client access from local network.
allow 192.168.0.0/16

Note that the server can listen for requests on any local network it's attached to. The IP address in the "allow" line is just intended for illustrative purposes. Be sure to change the IP network and subnet mask in that line to match your local network's.

Restart chronyd .

[root@studentvm1 ~]# systemctl restart chronyd

To allow other hosts on your network to access this server, configure the firewall to allow inbound UDP packets on port 123. Check your firewall's documentation to find out how to do that.
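
On a Fedora, CentOS, or RHEL host running firewalld, for example, something like the following should work -- this is a sketch that assumes firewalld is managing your firewall:

firewall-cmd --permanent --add-service=ntp
firewall-cmd --reload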

Testing

Your host is now an NTP server. You can test it with another host or a VM that has access to the network on which the NTP server is listening. Configure the client to use the new NTP server as the preferred server in the /etc/chrony.conf file, then monitor that client using the chronyc tools we used above.

Chronyc as an interactive tool

As I mentioned earlier, chronyc can be used as an interactive command tool. Simply run the command without a subcommand and you get a chronyc command prompt.

[root@studentvm1 ~]# chronyc
chrony version 3.4
Copyright (C) 1997-2003, 2007, 2009-2018 Richard P. Curnow and others
chrony comes with ABSOLUTELY NO WARRANTY. This is free software, and
you are welcome to redistribute it under certain conditions. See the
GNU General Public License version 2 for details.

chronyc>

You can enter just the subcommands at this prompt. Try using the tracking , ntpdata , and sources commands. The chronyc command line allows command recall and editing for chronyc subcommands. You can use the help subcommand to get a list of possible commands and their syntax.

Conclusion

Chrony is a powerful tool for synchronizing the times of client hosts, whether they are all on the local network or scattered around the globe. It's easy to configure because, despite the large number of options available, only a few configurations are required for most circumstances.

After my client computers have synchronized with the NTP server, I like to set the system hardware clock from the system (OS) time by using the following command:

/sbin/hwclock --systohc

This command can be added as a cron job or a script in cron.daily to keep the hardware clock synced with the system time.
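
For example, a tiny script dropped into /etc/cron.daily/ will do the job; the script name below is just an illustration, and the file needs to be made executable:

#!/bin/bash
# /etc/cron.daily/hwclock-sync -- copy the system time to the hardware clock once a day
/sbin/hwclock --systohc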

Chrony and NTP (the service) both use the same configuration, and the files' contents are interchangeable. The man pages for chronyd , chronyc , and chrony.conf contain an amazing amount of information that can help you get started or learn about esoteric configuration options.

Do you run your own NTP server? Let us know in the comments and be sure to tell us which implementation you are using, NTP or Chrony.

[Nov 08, 2019] Vim universe. fzf - command line fuzzy finder by Alexey Samoshkin

Nov 08, 2019 | www.youtube.com

Zeeshan Jan , 1 month ago (edited)

Alexey, thanks for the great video. I have a question: how did you integrate fzf and bat? When I am in zsh using tmux and I type fzf and search for a file, I am not able to select multiple files using TAB; I can do this inside VIM but not in the tmux/iTerm terminal. I am also not able to see the preview, even though I have already installed bat using brew on my MacBook Pro. Also, when I type cd ** it doesn't work.

Paul Hale , 4 months ago

Thanks for the video. When searching in vim dotfiles are hidden. How can we configure so that dotfiles are shown but .git and .git subfolders are ignored?

[Nov 08, 2019] 10 resources every sysadmin should know about Opensource.com

Nov 08, 2019 | opensource.com

Cheat

Having a hard time remembering a command? Normally you might resort to a man page, but some man pages have a hard time getting to the point. It's the reason Chris Allen Lane came up with the idea (and more importantly, the code) for a cheat command .

The cheat command displays cheatsheets for common tasks in your terminal. It's a man page without the preamble. It cuts to the chase and tells you exactly how to do whatever it is you're trying to do. And if it lacks a common example that you think ought to be included, you can submit an update.

$ cheat tar
# To extract an uncompressed archive:
tar -xvf '/path/to/foo.tar'

# To extract a .gz archive:
tar -xzvf '/path/to/foo.tgz'
[ ... ]

You can also treat cheat as a local cheatsheet system, which is great for all the in-house commands you and your team have invented over the years. You can easily add a local cheatsheet to your own home directory, and cheat will find and display it just as if it were a popular system command.
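
As a sketch of what that might look like -- the ~/.cheat directory is what older versions of cheat use, and deploy-app is a made-up in-house command, so check your version's documentation for the exact cheatsheet path:

# Create a personal cheatsheet for a hypothetical in-house command
mkdir -p ~/.cheat
cat > ~/.cheat/deploy-app << 'EOF'
# To deploy the application to the staging environment:
deploy-app --env staging
EOF

# Now view it just like any other cheatsheet
cheat deploy-app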

[Nov 08, 2019] A Linux user's guide to Logical Volume Management Opensource.com

Nov 08, 2019 | opensource.com

In Figure 1, two complete physical hard drives and one partition from a third hard drive have been combined into a single volume group. Two logical volumes have been created from the space in the volume group, and a filesystem, such as an EXT3 or EXT4 filesystem has been created on each of the two logical volumes.

Figure 1: LVM allows combining partitions and entire hard drives into Volume Groups.

Adding disk space to a host is fairly straightforward but, in my experience, is done relatively infrequently. The basic steps needed are listed below. You can either create an entirely new volume group or you can add the new space to an existing volume group and either expand an existing logical volume or create a new one.

Adding a new logical volume

There are times when it is necessary to add a new logical volume to a host. For example, after noticing that the directory containing virtual disks for my VirtualBox virtual machines was filling up the /home filesystem, I decided to create a new logical volume in which to store the virtual machine data, including the virtual disks. This would free up a great deal of space in my /home filesystem and also allow me to manage the disk space for the VMs independently.

The basic steps for adding a new logical volume are as follows.

  1. If necessary, install a new hard drive.
  2. Optional: Create a partition on the hard drive.
  3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.
  4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.
  5. Create a new logical volume (LV) from the space in the volume group.
  6. Create a filesystem on the new logical volume.
  7. Add appropriate entries to /etc/fstab for mounting the filesystem.
  8. Mount the filesystem.

Now for the details. The following sequence is taken from an example I used as a lab project when teaching about Linux filesystems.

Example

This example shows how to use the CLI to extend an existing volume group to add more space to it, create a new logical volume in that space, and create a filesystem on the logical volume. This procedure can be performed on a running, mounted filesystem.

WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems including BTRFS and ZFS cannot be resized.

Install hard drive

If there is not enough space in the volume group on the existing hard drive(s) in the system to add the desired amount of space it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive, and then perform the following steps.

Create Physical Volume from hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.

pvcreate /dev/hdd

It is not necessary to create a partition of any kind on the new hard drive. This creation of the Physical Volume which will be recognized by the Logical Volume Manager can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

Extend the existing Volume Group

In this example we will extend an existing volume group rather than creating a new one; you can choose to do it either way. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example the existing Volume Group is named MyVG01.

vgextend /dev/MyVG01 /dev/hdd
Create the Logical Volume

First create the Logical Volume (LV) from existing free space within the Volume Group. The command below creates a LV with a size of 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.

lvcreate -L 50G --name Stuff MyVG01
Create the filesystem

Creating the Logical Volume does not create the filesystem. That task must be performed separately. The command below creates an EXT4 filesystem that fits the newly created Logical Volume.

mkfs -t ext4 /dev/MyVG01/Stuff
Add a filesystem label

Adding a filesystem label makes it easy to identify the filesystem later in case of a crash or other disk related problems.

e2label /dev/MyVG01/Stuff Stuff
Mount the filesystem

At this point you can create a mount point, add an appropriate entry to the /etc/fstab file, and mount the filesystem.
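
A minimal sketch of those last steps, assuming a mount point of /Stuff and the filesystem label set above:

mkdir /Stuff
echo "LABEL=Stuff  /Stuff  ext4  defaults  1 2" >> /etc/fstab
mount /Stuff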

You should also check to verify the volume has been created correctly. You can use the df , lvs, and vgs commands to do this.

Resizing a logical volume in an LVM filesystem

The need to resize a filesystem has been around since the beginning of the first versions of Unix and has not gone away with Linux. It has gotten easier, however, with Logical Volume Management.

  1. If necessary, install a new hard drive.
  2. Optional: Create a partition on the hard drive.
  3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.
  4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.
  5. Create one or more logical volumes (LV) from the space in the volume group, or expand an existing logical volume with some or all of the new space in the volume group.
  6. If you created a new logical volume, create a filesystem on it. If adding space to an existing logical volume, use the resize2fs command to enlarge the filesystem to fill the space in the logical volume.
  7. Add appropriate entries to /etc/fstab for mounting the filesystem.
  8. Mount the filesystem.
Example

This example describes how to resize an existing Logical Volume in an LVM environment using the CLI. It adds about 50GB of space to the /Stuff filesystem. This procedure can be used on a mounted, live filesystem only with the Linux 2.6 Kernel (and higher) and EXT3 and EXT4 filesystems. I do not recommend that you do so on any critical system, but it can be done and I have done so many times; even on the root (/) filesystem. Use your judgment.

WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems including BTRFS and ZFS cannot be resized.

Install the hard drive

If there is not enough space on the existing hard drive(s) in the system to add the desired amount of space it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive and then perform the following steps.

Create a Physical Volume from the hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.

pvcreate /dev/hdd

It is not necessary to create a partition of any kind on the new hard drive. This creation of the Physical Volume which will be recognized by the Logical Volume Manager can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

Add PV to existing Volume Group

For this example, we will use the new PV to extend an existing Volume Group. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example, the existing Volume Group is named MyVG01.

vgextend /dev/MyVG01 /dev/hdd
Extend the Logical Volume

Extend the Logical Volume (LV) from existing free space within the Volume Group. The command below expands the LV by 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.

lvextend -L +50G /dev/MyVG01/Stuff
Expand the filesystem

Extending the Logical Volume will also expand the filesystem if you use the -r option. If you do not use the -r option, that task must be performed separately. The command below resizes the filesystem to fit the newly resized Logical Volume.

resize2fs /dev/MyVG01/Stuff
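
If you prefer the one-step approach mentioned above, lvextend can grow the filesystem at the same time with the -r option; a sketch using the same names as in this example:

lvextend -r -L +50G /dev/MyVG01/Stuff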

You should check to verify the resizing has been performed correctly. You can use the df , lvs, and vgs commands to do this.

Tips

Over the years I have learned a few things that can make logical volume management even easier than it already is. Hopefully these tips can prove of some value to you.

I know that, like me, many sysadmins have resisted the change to Logical Volume Management. I hope that this article will encourage you to at least try LVM. I am really glad that I did; my disk management tasks are much easier since I made the switch.

[Nov 08, 2019] 10 killer tools for the admin in a hurry Opensource.com

Nov 08, 2019 | opensource.com

NixCraft
Use the site's internal search function. With more than a decade of regular updates, there's gold to be found here -- useful scripts and handy hints that can solve your problem straight away. This is often the second place I look after Google.

Webmin
This gives you a nice web interface to remotely edit your configuration files. It cuts down on a lot of time spent having to juggle directory paths and sudo nano , which is handy when you're handling several customers.

Windows Subsystem for Linux
The reality of the modern workplace is that most employees are on Windows, while the grown-up gear in the server room is on Linux. So sometimes you find yourself trying to do admin tasks from (gasp) a Windows desktop.

What do you do? Install a virtual machine? It's actually much faster and far less work to configure if you install the Windows Subsystem for Linux compatibility layer, now available at no cost on Windows 10.

This gives you a Bash terminal in a window where you can run Bash scripts and Linux binaries on the local machine, have full access to both Windows and Linux filesystems, and mount network drives. It's available in Ubuntu, OpenSUSE, SLES, Debian, and Kali flavors.
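
As a quick sketch of what that looks like in practice -- the file server name and share below are made up, and the drvfs mount assumes a reasonably recent Windows 10 build:

# Windows drives appear under /mnt inside the WSL Bash session
ls /mnt/c/Users

# Mount a network share using the drvfs filesystem type
sudo mkdir -p /mnt/share
sudo mount -t drvfs '\\fileserver\share' /mnt/share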

mRemoteNG
This is an excellent SSH and remote desktop client for when you have 100+ servers to manage.

Setting up a network so you don't have to do it again

A poorly planned network is the sworn enemy of the admin who hates working overtime.

IP Addressing Schemes that Scale
The diabolical thing about running out of IP addresses is that, when it happens, the network's grown large enough that a new addressing scheme is an expensive, time-consuming pain in the proverbial.

Ain't nobody got time for that!

At some point, IPv6 will finally arrive to save the day. Until then, these one-size-fits-most IP addressing schemes should keep you going, no matter how many network-connected wearables, tablets, smart locks, lights, security cameras, VoIP headsets, and espresso machines the world throws at us.

Linux Chmod Permissions Cheat Sheet
A short but sweet cheat sheet of Bash commands to set permissions across the network. This is so when Bill from Customer Service falls for that ransomware scam, you're recovering just his files and not the entire company's.

VLSM Subnet Calculator
Just put in the number of networks you want to create from an address space and the number of hosts you want per network, and it calculates what the subnet mask should be for everything.

Single-purpose Linux distributions

Need a Linux box that does just one thing? It helps if someone else has already sweated the small stuff on an operating system you can install and have ready immediately.

Each of these has, at one point, made my work day so much easier.

Porteus Kiosk
This is for when you want a computer totally locked down to just a web browser. With a little tweaking, you can even lock the browser down to just one website. This is great for public access machines. It works with touchscreens or with a keyboard and mouse.

Parted Magic
This is an operating system you can boot from a USB drive to partition hard drives, recover data, and run benchmarking tools.

IPFire
Hahahaha, I still can't believe someone called a router/firewall/proxy combo "I pee fire." That's my second favorite thing about this Linux distribution. My favorite is that it's a seriously solid software suite. It's so easy to set up and configure, and there is a heap of plugins available to extend it.

What about your top tools and cheat sheets?

So, how about you? What tools, resources, and cheat sheets have you found to make the workday easier? I'd love to know. Please share in the comments.

[Nov 02, 2019] LVM spanning over multiple disks What disk is a file on? Can I lose a drive without total loss

Notable quotes:
"... If you lose a drive in a volume group, you can force the volume group online with the missing physical volume, but you will be unable to open the LV's that were contained on the dead PV, whether they be in whole or in part. ..."
"... So, if you had for instance 10 LV's, 3 total on the first drive, #4 partially on first drive and second drive, then 5-7 on drive #2 wholly, then 8-10 on drive 3, you would be potentially able to force the VG online and recover LV's 1,2,3,8,9,10.. #4,5,6,7 would be completely lost. ..."
"... LVM doesn't really have the concept of a partition it uses PVs (Physical Volumes), which can be a partition. These PVs are broken up into extents and then these are mapped to the LVs (Logical Volumes). When you create the LVs you can specify if the data is striped or mirrored but the default is linear allocation. So it would use the extents in the first PV then the 2nd then the 3rd. ..."
"... As Peter has said the blocks appear as 0's if a PV goes missing. So you can potentially do data recovery on files that are on the other PVs. But I wouldn't rely on it. You normally see LVM used in conjunction with RAIDs for this reason. ..."
"... it's effectively as if a huge chunk of your disk suddenly turned to badblocks. You can patch things back together with a new, empty drive to which you give the same UUID, and then run an fsck on any filesystems on logical volumes that went across the bad drive to hope you can salvage something. ..."
Mar 16, 2015 | serverfault.com

LVM spanning over multiple disks: What disk is a file on? Can I lose a drive without total loss?

I have three 990GB partitions over three drives in my server. Using LVM, I can create one ~3TB partition for file storage.

1) How does the system determine what partition to use first?
2) Can I find what disk a file or folder is physically on?
3) If I lose a drive in the LVM, do I lose all data, or just data physically on that disk?

asked Dec 2 '10 by Luke has no name

3 Answers
  1. The system fills from the first disk in the volume group to the last, unless you configure striping with extents.
  2. I don't think this is possible, but where I'd start to look is in the lvs/vgs commands man pages.
  3. If you lose a drive in a volume group, you can force the volume group online with the missing physical volume, but you will be unable to open the LV's that were contained on the dead PV, whether they be in whole or in part.
  4. So, if you had for instance 10 LV's, 3 total on the first drive, #4 partially on first drive and second drive, then 5-7 on drive #2 wholly, then 8-10 on drive 3, you would be potentially able to force the VG online and recover LV's 1,2,3,8,9,10.. #4,5,6,7 would be completely lost.
-- Peter Grace

1) How does the system determine what partition to use first?

LVM doesn't really have the concept of a partition it uses PVs (Physical Volumes), which can be a partition. These PVs are broken up into extents and then these are mapped to the LVs (Logical Volumes). When you create the LVs you can specify if the data is striped or mirrored but the default is linear allocation. So it would use the extents in the first PV then the 2nd then the 3rd.

2) Can I find what disk a file or folder is physically on?

You can determine what PVs a LV has allocation extents on. But I don't know of a way to get that information for an individual file.
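
For what it's worth, the LVM tools can show which PVs an LV's extents are allocated on; a minimal sketch (the LV path below is just an example):

# Show the devices (PVs) backing each logical volume
lvs -o lv_name,vg_name,devices

# Show the full segment-to-PV mapping for a single LV
lvdisplay --maps /dev/vg0/mylv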

3) If I lose a drive in the LVM, do I lose all data, or just data physically on that disk?

As Peter has said the blocks appear as 0's if a PV goes missing. So you can potentially do data recovery on files that are on the other PVs. But I wouldn't rely on it. You normally see LVM used in conjunction with RAIDs for this reason.

-- 3dinfluence

I don't know the answer to #2, so I'll leave that to someone else. I suspect "no", but I'm willing to be happily surprised.

1 is: you tell it, when you combine the physical volumes into a volume group.

3 is: it's effectively as if a huge chunk of your disk suddenly turned to badblocks. You can patch things back together with a new, empty drive to which you give the same UUID, and then run an fsck on any filesystems on logical volumes that went across the bad drive to hope you can salvage something.

And to the overall, unasked question: yeah, you probably don't really want to do that.

[Oct 08, 2019] Forward root email on Linux server

Oct 08, 2019 | www.reddit.com

Hi, generally I configure /etc/aliases to forward root messages to my work email address. I found this useful, because sometimes I become aware of something wrong...
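
For reference, the relevant /etc/aliases entry looks something like this -- the address is an example, and the exact behaviour depends on your MTA:

# /etc/aliases
root: admin@example.com

After editing the file, run newaliases to rebuild the aliases database.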

I create specific email filter on my MUA to put everything with "fail" in subject in my ALERT subfolder, "update" or "upgrade" in my UPGRADE subfolder, and so on.

It is a bit annoying, because with more than 50 servers there is a lot of noise, anyway.

How do you manage that?

Thank you!

[Oct 02, 2019] raid5 - Can I recover a RAID 5 array if two drives have failed - Server Fault

Oct 02, 2019 | serverfault.com

Can I recover a RAID 5 array if two drives have failed?

I have a Dell 2600 with 6 drives configured in a RAID 5 on a PERC 4 controller. 2 drives failed at the same time, and according to what I know a RAID 5 is recoverable if 1 drive fails. I'm not sure if the fact I had six drives in the array might save my skin.

I bought 2 new drives and plugged them in but no rebuild happened as I expected. Can anyone shed some light?

4 Answers

Regardless of how many drives are in use, a RAID 5 array only allows for recovery in the event that just one disk at a time fails.

What 3molo says is a fair point but even so, not quite correct I think - if two disks in a RAID5 array fail at the exact same time then a hot spare won't help, because a hot spare replaces one of the failed disks and rebuilds the array without any intervention, and a rebuild isn't possible if more than one disk fails.

For now, I am sorry to say that your options for recovering this data are going to involve restoring a backup.

For the future you may want to consider one of the more robust forms of RAID (not sure what options a PERC4 supports) such as RAID 6 or a nested RAID array . Once you get above a certain amount of disks in an array you reach the point where the chance that more than one of them can fail before a replacement is installed and rebuilt becomes unacceptably high. -- Rob Moir

You can try to force one or both of the failed disks to be online from the BIOS interface of the controller. Then check that the data and the file system are consistent. -- Mircea Vutcovici

Direct answer is "No". Indirect -- "It depends". Mainly it depends on whether the disks are partially out of order, or completely. In case they're partially broken, you can give it a try -- I would copy (using a tool like ddrescue) both failed disks. Then I'd try to run the bunch of disks using Linux SoftRAID -- re-trying with the proper order of disks and stripe-size in read-only mode and counting CRC mismatches. It's quite doable, I should say -- this text in Russian mentions a 12-disk RAID50's recovery using LSR, for example. -- poige

It is possible if the RAID was built with one spare drive, and one of your failed disks died before the second one. So, you just need to try to reconstruct the array virtually with 3rd-party software. Found a small article about this process on this page: http://www.angeldatarecovery.com/raid5-data-recovery/

And, if you really need one of the dead drives, you can send it to a recovery shop. With those images you can reconstruct the RAID properly with good chances.

[Sep 23, 2019] How to recover deleted files with foremost on Linux - LinuxConfig.org

Sep 23, 2019 | linuxconfig.org
In this article we will talk about foremost , a very useful open source forensic utility which is able to recover deleted files using the technique called data carving . The utility was originally developed by the United States Air Force Office of Special Investigations, and is able to recover several file types (support for specific file types can be added by the user, via the configuration file). The program can also work on partition images produced by dd or similar tools.

In this tutorial you will learn how to install foremost, how to recover deleted files from a partition or a disk image, and how to add support for additional file types via the configuration file.

Foremost is a forensic data recovery program for Linux used to recover files using their headers, footers, and data structures through a process known as file carving.

Software Requirements and Conventions Used
Software Requirements and Linux Command Line Conventions

Category -- Requirements, Conventions or Software Version Used
System -- Distribution-independent
Software -- The "foremost" program
Other -- Familiarity with the command line interface
Conventions -- # requires given linux commands to be executed with root privileges, either directly as a root user or by use of the sudo command; $ requires given linux commands to be executed as a regular non-privileged user
Installation

Since foremost is already present in all the major Linux distributions repositories, installing it is a very easy task. All we have to do is to use our favorite distribution package manager. On Debian and Ubuntu, we can use apt :

$ sudo apt install foremost

In recent versions of Fedora, we use the dnf package manager to install packages , the dnf is a successor of yum . The name of the package is the same:

$ sudo dnf install foremost

If we are using ArchLinux, we can use pacman to install foremost . The program can be found in the distribution "community" repository:

$ sudo pacman -S foremost

Basic usage
WARNING
No matter which file recovery tool or process you are going to use to recover your files, before you begin it is recommended to perform a low-level hard drive or partition backup, hence avoiding an accidental data overwrite!!! In this case you may re-try to recover your files even after an unsuccessful recovery attempt. Check the following dd command guide on how to perform a hard drive or partition low-level backup.
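
A minimal sketch of such a low-level backup with dd -- the device name and image path are assumptions, so adjust them to your system and store the image on a different disk than the one you are recovering from:

sudo dd if=/dev/sdb1 of=/mnt/backup/sdb1.img bs=4M status=progress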

The foremost utility tries to recover and reconstruct files on the basis of their headers, footers and data structures, without relying on filesystem metadata . This forensic technique is known as file carving . The program supports many file types, as the output directory listing further below will show.

The most basic way to use foremost is by providing a source to scan for deleted files (it can be either a partition or an image file, as those generated with dd ). Let's see an example. Imagine we want to scan the /dev/sdb1 partition: before we begin, a very important thing to remember is to never store retrieved data on the same partition we are retrieving the data from, to avoid overwriting delete files still present on the block device. The command we would run is:

$ sudo foremost -i /dev/sdb1

By default, the program creates a directory called output inside the directory we launched it from and uses it as destination. Inside this directory, a subdirectory for each supported file type we are attempting to retrieve is created. Each directory will hold the corresponding file type obtained from the data carving process:

output
├── audit.txt
├── avi
├── bmp
├── dll
├── doc
├── docx
├── exe
├── gif
├── htm
├── jar
├── jpg
├── mbd
├── mov
├── mp4
├── mpg
├── ole
├── pdf
├── png  
├── ppt
├── pptx
├── rar
├── rif
├── sdw
├── sx
├── sxc
├── sxi
├── sxw
├── vis
├── wav
├── wmv
├── xls
├── xlsx
└── zip

When foremost completes its job, empty directories are removed. Only the ones containing files are left on the filesystem: this lets us immediately know what type of files were successfully retrieved. By default the program tries to retrieve all the supported file types; to restrict our search, we can, however, use the -t option and provide a list of the file types we want to retrieve, separated by a comma. In the example below, we restrict the search only to gif and pdf files:

$ sudo foremost -t gif,pdf -i /dev/sdb1

https://www.youtube.com/embed/58S2wlsJNvo

In this video we will test the forensic data recovery program Foremost to recover a single png file from /dev/sdb1 partition formatted with the EXT4 filesystem.



Specifying an alternative destination

As we already said, if a destination is not explicitly declared, foremost creates an output directory inside our cwd . What if we want to specify an alternative path? All we have to do is to use the -o option and provide said path as argument. If the specified directory doesn't exist, it is created; if it exists but it's not empty, the program complains:

ERROR: /home/egdoc/data is not empty
        Please specify another directory or run with -T.

To solve the problem, as suggested by the program itself, we can either use another directory or re-launch the command with the -T option. If we use the -T option, the output directory specified with the -o option is timestamped. This makes it possible to run the program multiple times with the same destination. In our case the directory that would be used to store the retrieved files would be:

/home/egdoc/data_Thu_Sep_12_16_32_38_2019
The configuration file

The foremost configuration file can be used to specify file formats not natively supported by the program. Inside the file we can find several commented examples showing the syntax that should be used to accomplish the task. Here is an example involving the png type (the lines are commented since the file type is supported by default):

# PNG   (used in web pages)
#       (NOTE THIS FORMAT HAS A BUILTIN EXTRACTION FUNCTION)
#       png     y       200000  \x50\x4e\x47?   \xff\xfc\xfd\xfe

The information to provide in order to add support for a file type is, from left to right, separated by a tab character: the file extension ( png in this case), whether the header and footer are case sensitive ( y ), the maximum file size in Bytes ( 200000 ), the header ( \x50\x4e\x47? ) and the footer ( \xff\xfc\xfd\xfe ). Only the latter is optional and can be omitted.

If the path of the configuration file is not explicitly provided with the -c option, a file named foremost.conf is searched for and used, if present, in the current working directory. If it is not found, the default configuration file, /etc/foremost.conf , is used instead.

Adding the support for a file type

By reading the examples provided in the configuration file, we can easily add support for a new file type. In this example we will add support for flac audio files. Flac (Free Lossless Audio Codec) is a non-proprietary lossless audio format which is able to provide compressed audio without quality loss. First of all, we know that the header of this file type in hexadecimal form is 66 4C 61 43 00 00 00 22 ( fLaC in ASCII), and we can verify it by using a program like hexdump on a flac file:

$ hexdump -C blind_guardian_war_of_wrath.flac | head
00000000  66 4c 61 43 00 00 00 22  12 00 12 00 00 00 0e 00  |fLaC..."........|
00000010  36 f2 0a c4 42 f0 00 4d  04 60 6d 0b 64 36 d7 bd  |6...B..M.`m.d6..|
00000020  3e 4c 0d 8b c1 46 b6 fe  cd 42 04 00 03 db 20 00  |>L...F...B.... .|
00000030  00 00 72 65 66 65 72 65  6e 63 65 20 6c 69 62 46  |..reference libF|
00000040  4c 41 43 20 31 2e 33 2e  31 20 32 30 31 34 31 31  |LAC 1.3.1 201411|
00000050  32 35 21 00 00 00 12 00  00 00 54 49 54 4c 45 3d  |25!.......TITLE=|
00000060  57 61 72 20 6f 66 20 57  72 61 74 68 11 00 00 00  |War of Wrath....|
00000070  52 45 4c 45 41 53 45 43  4f 55 4e 54 52 59 3d 44  |RELEASECOUNTRY=D|
00000080  45 0c 00 00 00 54 4f 54  41 4c 44 49 53 43 53 3d  |E....TOTALDISCS=|
00000090  32 0c 00 00 00 4c 41 42  45 4c 3d 56 69 72 67 69  |2....LABEL=Virgi|

As you can see the file signature is indeed what we expected. Here we will assume a maximum file size of 30 MB, or 30000000 Bytes. Let's add the entry to the file:

flac    y       30000000    \x66\x4c\x61\x43\x00\x00\x00\x22

The footer signature is optional so here we didn't provide it. The program should now be able to recover deleted flac files. Let's verify it. To test that everything works as expected I previously placed, and then removed, a flac file from the /dev/sdb1 partition, and then proceeded to run the command:

$ sudo foremost -i /dev/sdb1 -o $HOME/Documents/output

As expected, the program was able to retrieve the deleted flac file (it was the only file on the device, on purpose), although it renamed it with a random string. The original filename cannot be retrieved because, as we know, file metadata is contained in the filesystem, and not in the file itself:

/home/egdoc/Documents
└── output
    ├── audit.txt
    └── flac
        └── 00020482.flac



The audit.txt file contains information about the actions performed by the program, in this case:

Foremost version 1.5.7 by Jesse Kornblum, Kris
Kendall, and Nick Mikus
Audit File

Foremost started at Thu Sep 12 23:47:04 2019
Invocation: foremost -i /dev/sdb1 -o /home/egdoc/Documents/output
Output directory: /home/egdoc/Documents/output
Configuration file: /etc/foremost.conf
------------------------------------------------------------------
File: /dev/sdb1
Start: Thu Sep 12 23:47:04 2019
Length: 200 MB (209715200 bytes)

Num      Name (bs=512)         Size      File Offset     Comment

0:      00020482.flac         28 MB        10486784
Finish: Thu Sep 12 23:47:04 2019

1 FILES EXTRACTED

flac:= 1
------------------------------------------------------------------

Foremost finished at Thu Sep 12 23:47:04 2019
Conclusion

In this article we learned how to use foremost, a forensic program able to retrieve deleted files of various types. We learned that the program works by using a technique called data carving , and relies on files signatures to achieve its goal. We saw an example of the program usage and we also learned how to add the support for a specific file type using the syntax illustrated in the configuration file. For more information about the program usage, please consult its manual page.

[Sep 18, 2019] Delete Files That Have Not Been Accessed For A Given Time On Linux

Sep 18, 2019 | www.ostechnix.com

Delete Files That Have Not Been Accessed For A Given Time On Linux

by sk · Published September 16, 2019 · Updated September 17, 2019

We have already covered how to manually find and delete files older than X days using the "find" command in Linux . Today we will do the same, but only if the files have not been accessed for a certain period of time. Say hello to "Tmpwatch" , a command line utility to recursively delete files that haven't been accessed for a given time. Not just files -- tmpwatch will delete empty directories as well.

By default, Tmpwatch will decide which files/directories should be deleted based on their atime (access time). You can, of course, change this behaviour by using ctime (inode change time) or mtime (modification time) values as well. Normally, Tmpwatch can be used to delete the contents of the /tmp directory and other unused/unwanted stuff like old log files.

An important warning!!

Before you start using this tool, you must know that Tmpwatch will delete files and directories recursively based on the given criteria. Do not run tmpwatch in / (the root directory) . This directory contains important files which are required to keep the Linux system running. If you're not careful enough, tmpwatch will delete important system files and directories that match the given criteria anywhere under the root directory. There is no safeguard mechanism built into the Tmpwatch tool to prevent you from running it on the root directory, and there is no way to undo the operation. You have been warned!

Install Tmpwatch

Tmpwatch is available in the default repositories of most Linux distributions.

On Fedora, you can install it using command:

$ sudo dnf install tmpwatch

On CentOS:

$ sudo yum install tmpwatch

On openSUSE:

$ sudo zypper install tmpwatch

On Debian and its derivatives like Ubuntu, Tmpwatch is available under a different name, i.e. Tmpreaper . Tmpreaper is mostly based on `tmpwatch-1.2/1.4' by Erik Troan from Redhat. Now, tmpreaper is being maintained for Debian by Paul Slootman .

To install tmpreaper on Debian, Ubuntu, Linux Mint, run:

$ sudo apt install tmpreaper
Delete Files That Have Not Been Accessed For A Given Time Using Tmpwatch / Tmpreaper

Usage of Tmpwatch and Tmpreaper is almost same. If you're on Debian-based systems, replace "Tmpwatch" with "Tmpreaper" in the following examples.

Delete files which are not accessed more than X days

To delete files more than 10 days old, run:

tmpwatch 10d /var/log/

The above command will delete all the files and empty directories which have not been accessed for more than 10 days from the /var/log/ folder.

Delete files which are not modified more than X days

Like I already said, Tmpwatch will delete files based on their access time. You can also delete files based on their modification time (mtime) using -m option.

For example, the following command will delete files which have not been modified for the past 10 days in the /var/log/ folder.

tmpwatch -m 10d /var/log/

Here, -m refers to the modification time and d is the <time_spec> parameter. The <time_spec> parameter defines the age threshold for removing files. You can use the following time_spec suffixes for removing files: d for days, h for hours, m for minutes and s for seconds.

Hours is the default.

For instance, to delete files which are not modified for the past 10 hours , simply run:

tmpwatch -m 10 /var/log/

As you might have noticed, I haven't used the time_spec parameter in the above command. Because h (for hours) is the default, we don't have to mention it when deleting files that haven't been modified for the past X hours.

Delete Symlinks

If you want to delete symlinks, not just regular files and directories, use -s option like below.

tmpwatch -s 10 /var/log/
Delete all files

To remove all file types, not just regular files, symlinks, and directories, use -a option.

tmpwatch -a 10 /var/log/

The above command will delete all types of files including regular files, symlinks, and directories in the /var/log/ folder.

Exclude directories from deletion

Sometimes, you might want to delete files, but not directories. If so, the command would be:

tmpwatch -am 10 --nodirs /var/log/

The above command will delete all files except the directories which are not modified for the past 10 hours.

Perform a test run without actually delete anything

Sometimes, you might want to view which files are actually going to be deleted. This will be helpful when running Tmpwatch on an important directory. If so, run Tmpwatch in test mode with -t option.

tmpwatch -t 30 /var/log/

Sample output from CentOS 7 server:

removing file /var/log/wtmp
removing directory /var/log/ppp if empty
removing directory /var/log/tuned if empty
removing directory /var/log/anaconda if empty
removing file /var/log/dmesg.old
removing file /var/log/boot.log
removing file /var/log/dnf.librepo.log

On Debian-based systems, you will see an output like below.

$ tmpreaper -t 30 /var/log/
(PID 1803) Pretending to clean up directory `/var/log/'.
(PID 1804) Pretending to clean up directory `apache2'.
Pretending to remove file `apache2/error.log'.
Pretending to remove file `apache2/access.log'.
Pretending to remove file `apache2/other_vhosts_access.log'.
(PID 1804) Back from recursing down `apache2'.
(PID 1804) Pretending to clean up directory `dbconfig-common'.
Pretending to remove file `dbconfig-common/dbc.log'.
(PID 1804) Back from recursing down `dbconfig-common'.
(PID 1804) Pretending to clean up directory `dist-upgrade'.
(PID 1804) Back from recursing down `dist-upgrade'.
(PID 1804) Pretending to clean up directory `lxd'.
(PID 1804) Back from recursing down `lxd'.
Pretending to remove file `/var/log//cloud-init.log'.
(PID 1804) Pretending to clean up directory `landscape'.
Pretending to remove file `landscape/sysinfo.log'.
(PID 1804) Back from recursing down `landscape'.
[...]

This will only simulate the operation, but don't actually delete anything. Tmpwatch will simply perform a dry run and show you which files are going to be deleted in the output.

Force file deletion

If you want to forcibly delete the files, use -f option.

tmpwatch -f 10h /var/log/

Normally, the files owned by the current user, with no write access are not removed. The -f option will delete them as well.

Skip certain files from deletion

Tmpreaper has an option to skip files from deletion. This will be useful when you want to keep certain types of files and delete everything else. If so, use the --protect option like below.

tmpreaper --protect '*.txt' -t 10h /var/log/

This command will skip all files that have a .txt extension from deletion.

Sample output:

(PID 2623) Pretending to clean up directory `/var/log/'.
(PID 2624) Pretending to clean up directory `apache2'.
Pretending to remove file `apache2/error.log'.
Pretending to remove file `apache2/access.log'.
Pretending to remove file `apache2/other_vhosts_access.log'.
(PID 2624) Back from recursing down `apache2'.
(PID 2624) Pretending to clean up directory `dbconfig-common'.
Pretending to remove file `dbconfig-common/dbc.log'.
(PID 2624) Back from recursing down `dbconfig-common'.
(PID 2624) Pretending to clean up directory `dist-upgrade'.
(PID 2624) Back from recursing down `dist-upgrade'.
Pretending to remove empty directory `dist-upgrade'.
Entry matching `--protect' pattern skipped. `ostechnix.txt'
(PID 2624) Pretending to clean up directory `lxd'.

As you can see, Tmpreaper skips the *.txt files from deletion.

This option is not available in Tmpwatch, by the way.

Setting up cron job to delete files periodically

You may not want to manually run Tmpwatch/Tmpreaper all the time. In that case, you could set up a cron job to automate the cleanup process.

When installing Tmpreaper , it will create a daily cron job ( /etc/cron.daily/tmpreaper ). This job will read the options from the /etc/tmpreaper.conf file and act accordingly. Open the file and change the values as per your requirements. By default, Tmpreaper will delete files that are 7 days old. You can, however, change this by modifying the value "TMPREAPER_TIME=7d" in the tmpreaper.conf file.

If you use "Tmpwatch", you need to manually create cron job and put the cron entry in it.

# crontab -e

Add the following line:

0 1 * * * /usr/sbin/tmpwatch 30d /var/log/

With the above cron job, Tmpwatch will run every day at 1 AM and delete files that are 30 days old.

For more details about setting up cron jobs, refer to the following link.

Again, please be careful while using the Tmpwatch/Tmpreaper commands. Double-check the path before running them to avoid data loss.

For more details, refer man pages.

$ man tmpwatch

Or,

$ man tmpreaper

[Sep 16, 2019] Artistic Style - Index

Sep 16, 2019 | astyle.sourceforge.net

Artistic Style 3.1 A Free, Fast, and Small Automatic Formatter
for C, C++, C++/CLI, Objective‑C, C#, and Java Source Code

Project Page: http://astyle.sourceforge.net/
SourceForge: http://sourceforge.net/projects/astyle/

Artistic Style is a source code indenter, formatter, and beautifier for the C, C++, C++/CLI, Objective‑C, C# and Java programming languages.

When indenting source code, we as programmers have a tendency to use both spaces and tab characters to create the wanted indentation. Moreover, some editors by default insert spaces instead of tabs when pressing the tab key. Other editors (Emacs for example) have the ability to "pretty up" lines by automatically setting up the white space before the code on the line, possibly inserting spaces in code that up to now used only tabs for indentation.

The NUMBER of spaces for each tab character in the source code can change between editors (unless the user sets up the number to his liking...). One of the standard problems programmers face when moving from one editor to another is that code containing both spaces and tabs, which was perfectly indented, suddenly becomes a mess to look at. Even if you as a programmer take care to ONLY use spaces or tabs, looking at other people's source code can still be problematic.

To address this problem, Artistic Style was created – a filter written in C++ that automatically re-indents and re-formats C / C++ / Objective‑C / C++/CLI / C# / Java source files. It can be used from a command line, or it can be incorporated as a library in another program.
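
For illustration, a typical command-line invocation might look like the following; the style name, options, and file names are only examples, so check astyle --help for the authoritative list.

# re-indent one file with Allman-style braces and 4-space indentation;
# by default the original is kept next to it as main.cpp.orig
astyle --style=allman --indent=spaces=4 main.cpp

# recurse through a source tree (quote the wildcard so astyle expands it itself)
astyle --style=allman --indent=spaces=4 --recursive "src/*.cpp"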

[Sep 16, 2019] Usage -- PrettyPrinter 0.18.0 documentation

Sep 16, 2019 | prettyprinter.readthedocs.io

Usage

Install the package with pip :

pip install prettyprinter

Then, instead of

from pprint import pprint

do

from prettyprinter import cpprint

for colored output. For colorless output, remove the c prefix from the function name:

from prettyprinter import pprint
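
For a quick smoke test from the shell, a one-liner along these lines should print a small colorized dictionary (assuming Python 3 and the package installed as shown above; the sample data is arbitrary):

python3 -c "from prettyprinter import cpprint; cpprint({'name': 'prettyprinter', 'tags': ['pretty', 'print']})"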

[Sep 16, 2019] JavaScript code prettifier

Sep 16, 2019 | github.com


An embeddable script that makes source-code snippets in HTML prettier.

[Sep 16, 2019] Pretty-print for shell script

Sep 16, 2019 | stackoverflow.com

Benoit ,Oct 21, 2010 at 13:19

I'm looking for something similiar to indent but for (bash) scripts. Console only, no colorizing, etc.

Do you know of one ?

Jamie ,Sep 11, 2012 at 3:00

Vim can indent bash scripts. But not reformat them before indenting.
Backup your bash script, open it with vim, type gg=GZZ and indent will be corrected. (Note for the impatient: this overwrites the file, so be sure to do that backup!)

Though, some bugs with << (expecting EOF as first character on a line) e.g.

EDIT: ZZ not ZQ

Daniel Martí ,Apr 8, 2018 at 13:52

A bit late to the party, but it looks like shfmt could do the trick for you.

Brian Chrisman ,Sep 9 at 7:47

In bash I do this:
reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3 | sed -e "s/^\s\s\s\s//"
}

this eliminates comments and reindents the script "bash way".

If you have HEREDOCS in your script, they got ruined by the sed in the previous function.

So use:

reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3
}

But all your script will have a 4 spaces indentation.

Or you can do:

reindent () 
{ 
    rstr=$(mktemp -u "XXXXXXXXXX");
    source <(echo "Zibri () {";cat "$1"|sed -e "s/^\s\s\s\s/$rstr/"; echo "}");
    echo '#!/bin/bash';
    declare -f Zibri | head --lines=-1 | tail --lines=+3 | sed -e "s/^\s\s\s\s//;s/$rstr/    /"
}

which takes care also of heredocs.

> ,

Found this http://www.linux-kheops.com/doc/perl/perl-aubert/fmt.script .

Very nice, only one thing i took out is the [...]->test substitution.

[Sep 16, 2019] A command-line HTML pretty-printer Making messy HTML readable - Stack Overflow

Notable quotes:
"... Have a look at the HTML Tidy Project: http://www.html-tidy.org/ ..."
Sep 16, 2019 | stackoverflow.com

nisetama ,Aug 12 at 10:33

I'm looking for recommendations for HTML pretty printers which fulfill the following requirements:

> ,

Have a look at the HTML Tidy Project: http://www.html-tidy.org/

The granddaddy of HTML tools, with support for modern standards.

There used to be a fork called tidy-html5 which since became the official thing. Here is its GitHub repository .

Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to modern standards.

For your needs, here is the command line to call Tidy:

[Sep 13, 2019] How To Delete Files Older Or Newer Than N Days Using find (With Extra Examples) - Linux Uprising Blog

Sep 13, 2019 | www.linuxuprising.com

Only delete files matching .extension older than N days from a directory and all its subdirectories:

find /directory/path/ -type f -mtime +N -name '*.extension' -delete

You can add -maxdepth 1 to prevent the command from descending into subdirectories, so that only files and first-level directories are deleted:
find /directory/path/ -mindepth 1 -maxdepth 1 -mtime +N -delete

You may also use -ctime +N , used to match (and delete in this example) files that had their status last changed N days ago (the file attributes/metadata AND/OR file content was modified) , as opposed to -mtime , which only matches files based on when their content was last modified:
find /directory/path/ -mindepth 1 -ctime +N -delete
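
Before adding -delete, it is usually safer to preview what matches. A quick sketch, with a placeholder path, age and extension:

# list what would be removed, then re-run the same command with -delete appended
find /var/log/myapp -maxdepth 1 -type f -mtime +30 -name '*.log' -print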

[Sep 12, 2019] 9 Best File Comparison and Difference (Diff) Tools for Linux

Sep 12, 2019 | www.tecmint.com

3. Kompare

Kompare is a diff GUI wrapper that allows users to view differences between files and also merge them.

Some of its features include:

  1. Supports multiple diff formats
  2. Supports comparison of directories
  3. Supports reading diff files
  4. Customizable interface
  5. Creating and applying patches to source files

Kompare Tool – Compare Two Files in Linux

Visit Homepage : https://www.kde.org/applications/development/kompare/

4. DiffMerge

DiffMerge is a cross-platform GUI application for comparing and merging files. It has two functionality engines, the Diff engine which shows the difference between two files, which supports intra-line highlighting and editing and a Merge engine which outputs the changed lines between three files.

It has got the following features:

  1. Supports directory comparison
  2. File browser integration
  3. Highly configurable

DiffMerge – Compare Files in Linux

Visit Homepage : https://sourcegear.com/diffmerge/

5. Meld – Diff Tool

Meld is a lightweight GUI diff and merge tool. It enables users to compare files, directories plus version controlled programs. Built specifically for developers, it comes with the following features:

  1. Two-way and three-way comparison of files and directories
  2. Updates the file comparison as the user types
  3. Makes merges easier using auto-merge mode and actions on changed blocks
  4. Easy comparisons using visualizations
  5. Supports Git, Mercurial, Subversion, Bazaar plus many more

Meld – A Diff Tool to Compare File in Linux

Visit Homepage : http://meldmerge.org/

6. Diffuse – GUI Diff Tool

Diffuse is another popular, free, small and simple GUI diff and merge tool that you can use on Linux. Written in Python, It offers two major functionalities, that is: file comparison and version control, allowing file editing, merging of files and also output the difference between files.

You can view a comparison summary, select lines of text in files using the mouse pointer, match lines in adjacent files, and edit the files directly. Other features include:

  1. Syntax highlighting
  2. Keyboard shortcuts for easy navigation
  3. Supports unlimited undo
  4. Unicode support
  5. Supports Git, CVS, Darcs, Mercurial, RCS, Subversion, SVK and Monotone

DiffUse – A Tool to Compare Text Files in Linux

Visit Homepage : http://diffuse.sourceforge.net/

7. XXdiff – Diff and Merge Tool

XXdiff is a free, powerful file and directory comparator and merge tool that runs on Unix like operating systems such as Linux, Solaris, HP/UX, IRIX, DEC Tru64. One limitation of XXdiff is its lack of support for unicode files and inline editing of diff files.

It has the following list of features:

  1. Shallow and recursive comparison of two, three file or two directories
  2. Horizontal difference highlighting
  3. Interactive merging of files and saving of resulting output
  4. Supports merge reviews/policing
  5. Supports external diff tools such as GNU diff, SIG diff, Cleareddiff and many more
  6. Extensible using scripts
  7. Fully customizable using resource file plus many other minor features

xxdiff Tool

Visit Homepage : http://furius.ca/xxdiff/

8. KDiff3 – Diff and Merge Tool

KDiff3 is yet another cool, cross-platform diff and merge tool from KDevelop. It works on all Unix-like platforms including Linux, Mac OS X, and Windows.

It can compare or merge two to three files or directories and has the following notable features:

  1. Indicates differences line by line and character by character
  2. Supports auto-merge
  3. In-built editor to deal with merge-conflicts
  4. Supports Unicode, UTF-8 and many other codecs
  5. Allows printing of differences
  6. Windows explorer integration support
  7. Also supports auto-detection via byte-order-mark "BOM"
  8. Supports manual alignment of lines
  9. Intuitive GUI and many more

KDiff3 Tool for Linux

Visit Homepage : http://kdiff3.sourceforge.net/

9. TkDiff

TkDiff is also a cross-platform, easy-to-use GUI wrapper for the Unix diff tool. It provides a side-by-side view of the differences between two input files. It can run on Linux, Windows and Mac OS X.

Additionally, it has some other exciting features including diff bookmarks, a graphical map of differences for easy and quick navigation plus many more.

Visit Homepage : https://sourceforge.net/projects/tkdiff/

Having read this review of some of the best file and directory comparison and merge tools, you probably want to try out some of them. These may not be the only diff tools available on Linux, but they are known to offer some of the best features. Let us know of any other diff tools you have tested and think deserve to be mentioned among the best.

[Sep 07, 2019] How to Debug Bash Scripts by Mike Ward

Sep 05, 2019 | linuxconfig.org

05 September 2019

... ... ... How to use other Bash options

The Bash options for debugging are turned off by default, but once they are turned on by using the set command, they stay on until explicitly turned off. If you are not sure which options are enabled, you can examine the $- variable to see the current state of all the options.

$ echo $-
himBHs
$ set -xv && echo $-
himvxBHs

There is another useful switch we can use to help us find variables referenced without having any value set. This is the -u switch, and just like -x and -v it can also be used on the command line, as we see in the following example:

Setting the -u option at the command line

We mistakenly assigned a value of 7 to the variable called "level" and then tried to echo a variable named "score", which simply resulted in nothing at all being printed to the screen. Absolutely no debug information was given. Setting our -u switch allows us to see a specific error message, "score: unbound variable", that indicates exactly what went wrong.
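
A minimal sketch of that mistake, written as a script instead of a screenshot:

#!/bin/bash
set -u           # report unset variables instead of silently expanding them to nothing
level=7
echo "$score"    # "score" was never assigned; bash stops with "score: unbound variable"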

We can use those options in short Bash scripts to give us debug information to identify problems that do not otherwise trigger feedback from the Bash interpreter. Let's walk through a couple of examples.

#!/bin/bash

read -p "Path to be added: " $path

if [ "$path" = "/home/mike/bin" ]; then
        echo $path >> $PATH
        echo "new path: $PATH"
else
        echo "did not modify PATH"
fi
Using the -x option when running your Bash script

In the example above we run the addpath script normally and it simply does not modify our PATH . It does not give us any indication of why or clues to mistakes made. Running it again using the -x option clearly shows us that the left side of our comparison is an empty string. $path is an empty string because we accidentally put a dollar sign in front of "path" in our read statement. Sometimes we look right at a mistake like this and it doesn't look wrong until we get a clue and think, "Why is $path evaluated to an empty string?"

Looking at this next example, we also get no indication of an error from the interpreter. We only get one value printed per line instead of two. This is not an error that will halt execution of the script, so we're left to simply wonder without being given any clues. Using the -u switch, we immediately get a notification that our variable j is not bound to a value. So these are real time savers when we make mistakes that do not result in actual errors from the Bash interpreter's point of view.

#!/bin/bash

for i in 1 2 3
do
        echo $i $j
done
Using the -u option when running your script from the command line

Now surely you are thinking that sounds fine, but we seldom need help debugging mistakes made in one-liners at the command line or in short scripts like these. We typically struggle with debugging when we deal with longer and more complicated scripts, and we rarely need to set these options and leave them set while we run multiple scripts. Setting -xv options and then running a more complex script will often add confusion by doubling or tripling the amount of output generated.

Fortunately we can use these options in a more precise way by placing them inside our scripts. Instead of explicitly invoking a Bash shell with an option from the command line, we can set an option by adding it to the shebang line instead.

#!/bin/bash -x

This will set the -x option for the entire file or until it is unset during the script execution, allowing you to simply run the script by typing the filename instead of passing it to Bash as a parameter. A long script or one that has a lot of output will still become unwieldy using this technique however, so let's look at a more specific way to use options.




For a more targeted approach, surround only the suspicious blocks of code with the options you want. This approach is great for scripts that generate menus or detailed output, and it is accomplished by using the set keyword with plus or minus once again.

#!/bin/bash

read -p "Path to be added: " $path

set -xv
if [ "$path" = "/home/mike/bin" ]; then
        echo $path >> $PATH
        echo "new path: $PATH"
else
        echo "did not modify PATH"
fi
set +xv
Wrapping options around a block of code in your script

We surrounded only the blocks of code we suspect in order to reduce the output, making our task easier in the process. Notice we turn on our options only for the code block containing our if-then-else statement, then turn off the option(s) at the end of the suspect block. We can turn these options on and off multiple times in a single script if we can't narrow down the suspicious areas, or if we want to evaluate the state of variables at various points as we progress through the script. There is no need to turn off an option if we want it to continue for the remainder of the script execution.

For completeness' sake, we should also mention that there are debuggers written by third parties that allow us to step through the code execution line by line. You might want to investigate these tools, but most people find that they are not actually needed.

As seasoned programmers will suggest, if your code is too complex to isolate suspicious blocks with these options then the real problem is that the code should be refactored. Overly complex code means bugs can be difficult to detect and maintenance can be time consuming and costly.

One final thing to mention regarding Bash debugging options is that a file globbing option also exists and is set with -f . Setting this option turns off globbing (expansion of wildcards to generate file names) while it is enabled. The -f option can be used as a switch at the command line with bash, after the shebang in a file, or, as in this example, to surround a block of code.

#!/bin/bash

echo "ignore fileglobbing option turned off"
ls *

echo "ignore file globbing option set"
set -f
ls *
set +f
Using the -f option to turn off file globbing

How to use trap to help debug

There are more involved techniques worth considering if your scripts are complicated, including using an assert function as mentioned earlier. One such method to keep in mind is the use of trap. Shell scripts allow us to trap signals and do something at that point.

A simple but useful example you can use in your Bash scripts is to trap on EXIT .

#!/bin/bash

trap 'echo score is $score, status is $status' EXIT

if [ -z "$1" ]; then
        status="default"
else
        status=$1
fi

score=0
if [ ${USER} = 'superman' ]; then
        score=99
elif [ $# -gt 1 ]; then
        score=$2
fi
Using trap EXIT to help debug your script



As you can see just dumping the current values of variables to the screen can be useful to show where your logic is failing. The EXIT signal obviously does not need an explicit exit statement to be generated; in this case the echo statement is executed when the end of the script is reached.

Another useful trap to use with Bash scripts is DEBUG . This happens after every statement, so it can be used as a brute force way to show the values of variables at each step in the script execution.

#!/bin/bash

trap 'echo "line ${LINENO}: score is $score"' DEBUG

score=0

if [ "${USER}" = "mike" ]; then
        let "score += 1"
fi

let "score += 1"

if [ "" = "7" ]; then
        score=7
fi
exit 0
Using trap DEBUG to help debug your script

Conclusion

When you notice your Bash script not behaving as expected and the reason is not clear to you, consider what information would be useful to help you identify the cause, then use the most comfortable tools available to help you pinpoint the issue. The xtrace option -x is easy to use and probably the most useful of the options presented here, so consider trying it out the next time you're faced with a script that's not doing what you thought it would.

[Sep 06, 2019] Using Case Insensitive Matches with Bash Case Statements by Steven Vona

Jun 30, 2019 | www.putorius.net

If you want to match the pattern regardless of its case (capital letters or lowercase letters), you can set the nocasematch shell option with the shopt builtin. You can do this as the first line of your script. Since the script runs in a subshell, it won't affect your normal environment.

#!/bin/bash
 shopt -s nocasematch
 read -p "Name a Star Trek character: " CHAR
 case $CHAR in
   "Seven of Nine" | Neelix | Chokotay | Tuvok | Janeway )
       echo "$CHAR was in Star Trek Voyager"
       ;;&
   Archer | Phlox | Tpol | Tucker )
       echo "$CHAR was in Star Trek Enterprise"
       ;;&
   Odo | Sisko | Dax | Worf | Quark )
       echo "$CHAR was in Star Trek Deep Space Nine"
       ;;&
   Worf | Data | Riker | Picard )
       echo "$CHAR was in Star Trek The Next Generation" &&  echo "/etc/redhat-release"
       ;;
   *) echo "$CHAR is not in this script." 
       ;;
 esac

[Sep 04, 2019] Exec - Process Replacement Redirection in Bash by Steven Vona

Sep 02, 2019 | www.putorius.net

The Linux exec command is a bash builtin and a very interesting utility. It is not something most people who are new to Linux know. Most seasoned admins understand it but only use it occasionally. If you are a developer, programmer or DevOps engineer, it is probably something you use more often. Let's take a deep dive into the builtin exec command, what it does, and how to use it.

Table of Contents

Basics of the Sub-Shell

In order to understand the exec command, you need a fundamental understanding of how sub-shells work.

... ... ...

What the Exec Command Does

In its most basic function, the exec command changes the default behavior of creating a sub-shell to run a command. If you run exec followed by a command, that command will REPLACE the original process; it will NOT create a sub-shell.

An additional feature of the exec command is redirection and manipulation of file descriptors . Explaining redirection and file descriptors is outside the scope of this tutorial. If these are new to you, please read " Linux IO, Standard Streams and Redirection " to get acquainted with these terms and functions.

In the following sections we will expand on both of these functions and try to demonstrate how to use them.

How to Use the Exec Command with Examples

Let's look at some examples of how to use the exec command and its options.

Basic Exec Command Usage – Replacement of Process

If you call exec and supply a command without any options, it simply replaces the shell with command .

Let's run an experiment. First, I ran the ps command to find the process id of my second terminal window. In this case it was 17524. I then ran "exec tail" in that second terminal and checked the ps command again. If you look at the screenshot below, you will see the tail process replaced the bash process (same process ID).

Linux terminal screenshot showing the exec command replacing a parent process instead of creating a sub-shell.

Since the tail command replaced the bash shell process, the shell will close when the tail command terminates.

Exec Command Options

If the -l option is supplied, exec adds a dash at the beginning of the first (zeroth) argument given. So if we ran the following command:

exec -l tail -f /etc/redhat-release

It would produce the following output in the process list. Notice the highlighted dash in the CMD column.

The -c option causes the supplied command to run with an empty environment. Environment variables like PATH are cleared before the command is run. Let's try an experiment. We know that the printenv command prints all the settings for a user's environment. So here we will open a new bash process and run the printenv command to show we have some variables set. We will then run printenv again, but this time with the exec -c option.

animated gif showing the exec command output with the -c option supplied.

In the example above you can see that an empty environment is used when using exec with the -c option. This is why there was no output from the printenv command when it was run with exec.
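
A rough sketch of that experiment, typed at an interactive prompt:

bash                 # start a throwaway shell so exec does not replace your login shell
printenv | head -3   # normal environment: several variables are printed
exec -c printenv     # empty environment: nothing is printed, and the throwaway shell exits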

The last option, -a [name], will pass name as the first argument to command . The command will still run as expected, but the name of the process will change. In this next example we opened a second terminal and ran the following command:

exec -a PUTORIUS tail -f /etc/redhat-release

Here is the process list showing the results of the above command:

Linux terminal screenshot showing the exec command using the -a option to replace the name of the first argument

As you can see, exec passed PUTORIUS as the first argument to the command; therefore, it shows in the process list with that name.

Using the Exec Command for Redirection & File Descriptor Manipulation

The exec command is often used for redirection. When a file descriptor is redirected with exec it affects the current shell. It will exist for the life of the shell or until it is explicitly stopped.

If no command is specified, redirections may be used to affect the current shell environment.

– Bash Manual

Here are some examples of how to use exec for redirection and manipulating file descriptors. As we stated above, a deep dive into redirection and file descriptors is outside the scope of this tutorial. Please read " Linux IO, Standard Streams and Redirection " for a good primer and see the resources section for more information.

Redirect all standard output (STDOUT) to a file:
exec >file

In the example animation below, we use exec to redirect all standard output to a file. We then enter some commands that should generate some output. We then use exec to redirect STDOUT to the /dev/tty to restore standard output to the terminal. This effectively stops the redirection. Using the cat command we can see that the file contains all the redirected output.

Screenshot of Linux terminal using exec to redirect all standard output to a file
Open a file as file descriptor 6 for writing:
exec 6> file2write
Open file as file descriptor 8 for reading:
exec 8< file2read
Copy file descriptor 5 to file descriptor 7:
exec 7<&5
Close file descriptor 8:
exec 8<&-
Conclusion

In this article we covered the basics of the exec command. We discussed how to use it for process replacement, redirection and file descriptor manipulation.

In the past I have seen exec used in some interesting ways. It is often used as a wrapper script for starting other binaries. Using process replacement you can call a binary and when it takes over there is no trace of the original wrapper script in the process table or memory. I have also seen many System Administrators use exec when transferring work from one script to another. If you call a script inside of another script the original process stays open as a parent. You can use exec to replace that original script.
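
A sketch of that wrapper pattern; the install path and binary name are made up for illustration:

#!/bin/bash
# wrapper: prepare the environment, then replace this script with the real binary
export APP_HOME=/opt/myapp        # hypothetical install location
cd "$APP_HOME" || exit 1
exec ./bin/myapp "$@"             # after this line no trace of the wrapper remains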

I am sure there are people out there using exec in some interesting ways. I would love to hear your experiences with exec. Please feel free to leave a comment below with anything on your mind.

Resources

[Sep 03, 2019] bash - How to convert strings like 19-FEB-12 to epoch date in UNIX - Stack Overflow

Feb 11, 2013 | stackoverflow.com


hellish ,Feb 11, 2013 at 3:45

In UNIX how to convert to epoch milliseconds date strings like:
19-FEB-12
16-FEB-12
05-AUG-09

I need this to compare these dates with the current time on the server.

> ,

To convert a date to seconds since the epoch:
date --date="19-FEB-12" +%s

Current epoch:

date +%s

So, since your dates are in the past:

NOW=`date +%s`
THEN=`date --date="19-FEB-12" +%s`

let DIFF=$NOW-$THEN
echo "The difference is: $DIFF"

Using BSD's date command, you would need

$ date -j -f "%d-%B-%y" 19-FEB-12 +%s

Differences from GNU date :

  1. -j prevents date from trying to set the clock
  2. The input format must be explicitly set with -f
  3. The input date is a regular argument, not an option (viz. -d )
  4. When no time is specified with the date, use the current time instead of midnight.

[Sep 03, 2019] Linux - UNIX Convert Epoch Seconds To the Current Time - nixCraft

Sep 03, 2019 | www.cyberciti.biz

Print Current UNIX Time

Type the following command to display the seconds since the epoch:

date +%s


Sample outputs:
1268727836

Convert Epoch To Current Time

Type the command:

date -d @Epoch
date -d @1268727836
date -d "1970-01-01 1268727836 sec GMT"


Sample outputs:

Tue Mar 16 13:53:56 IST 2010

Please note that the @ feature only works with newer versions of date (GNU coreutils v5.3.0+). To convert a number of seconds back to a more readable form, use a command like this:

date -d @1268727836 +"%d-%m-%Y %T %z"


Sample outputs:

16-03-2010 13:53:56 +0530

[Sep 03, 2019] command line - How do I convert an epoch timestamp to a human readable format on the cli - Unix Linux Stack Exchange

Sep 03, 2019 | unix.stackexchange.com

Gilles ,Oct 11, 2010 at 18:14

date -d @1190000000 Replace 1190000000 with your epoch

Stefan Lasiewski ,Oct 11, 2010 at 18:04

$ echo 1190000000 | perl -pe 's/(\d+)/localtime($1)/e' 
Sun Sep 16 20:33:20 2007

This can come in handy for those applications which use epoch time in the logfiles:

$ tail -f /var/log/nagios/nagios.log | perl -pe 's/(\d+)/localtime($1)/e'
[Thu May 13 10:15:46 2010] EXTERNAL COMMAND: PROCESS_SERVICE_CHECK_RESULT;HOSTA;check_raid;0;check_raid.pl: OK (Unit 0 on Controller 0 is OK)

Stéphane Chazelas ,Jul 31, 2015 at 20:24

With bash-4.2 or above:
printf '%(%F %T)T\n' 1234567890

(where %F %T is the strftime() -type format)

That syntax is inspired from ksh93 .

In ksh93 however, the argument is taken as a date expression where various and hardly documented formats are supported.

For a Unix epoch time, the syntax in ksh93 is:

printf '%(%F %T)T\n' '#1234567890'

ksh93 however seems to use its own algorithm for the timezone and can get it wrong. For instance, in Britain, it was summer time all year in 1970, but:

$ TZ=Europe/London bash -c 'printf "%(%c)T\n" 0'
Thu 01 Jan 1970 01:00:00 BST
$ TZ=Europe/London ksh93 -c 'printf "%(%c)T\n" "#0"'
Thu Jan  1 00:00:00 1970

DarkHeart ,Jul 28, 2014 at 3:56

Custom format with GNU date :
date -d @1234567890 +'%Y-%m-%d %H:%M:%S'

Or with GNU awk :

awk 'BEGIN { print strftime("%Y-%m-%d %H:%M:%S", 1234567890); }'

Linked SO question: https://stackoverflow.com/questions/3249827/convert-from-unixtime-at-command-line

,

The two I frequently use are:
$ perl -leprint\ scalar\ localtime\ 1234567890
Sat Feb 14 00:31:30 2009

[Sep 03, 2019] Time conversion using Bash Vanstechelman.eu

Sep 03, 2019 | www.vanstechelman.eu

Time conversion using Bash

This article shows how you can obtain the UNIX epoch time (the number of seconds since 1970-01-01 00:00:00 UTC) using the Linux bash "date" command. It also shows how you can convert a UNIX epoch time to a human-readable time.

Obtain UNIX epoch time using bash
Obtaining the UNIX epoch time using bash is easy. Use the built-in date command and instruct it to output the number of seconds since 1970-01-01 00:00:00 UTC. You can do this by passing a format string as a parameter to the date command. The format string for UNIX epoch time is '%s'.

lode@srv-debian6:~$ date "+%s"
1234567890

To convert a specific date and time into UNIX epoch time, use the -d parameter. The next example shows how to convert the timestamp "February 20th, 2013 at 08:41:15" into UNIX epoch time.

lode@srv-debian6:~$ date "+%s" -d "02/20/2013 08:41:15"
1361346075

Converting UNIX epoch time to human readable time
Even though I didn't find it in the date manual, it is possible to use the date command to reformat a UNIX epoch time into a human readable time. The syntax is the following:

lode@srv-debian6:~$ date -d @1234567890
Sat Feb 14 00:31:30 CET 2009

The same thing can also be achieved using a bit of perl programming:

lode@srv-debian6:~$ perl -e 'print scalar(localtime(1234567890)), "\n"'
Sat Feb 14 00:31:30 2009

Please note that the printed time is formatted in the timezone for which your Linux system is configured. My system is configured for UTC+2, so you may get different output for the same command.
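
With GNU date you can also override the timezone for a single invocation; the TZ value here is just an example:

lode@srv-debian6:~$ TZ=UTC date -d @1234567890
Fri Feb 13 23:31:30 UTC 2009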

[Sep 03, 2019] Run PerlTidy to beautify the code

Notable quotes:
"... Once I installed Code::TidyAll and placed those files in the root directory of the project, I could run tidyall -a . ..."
Sep 03, 2019 | perlmaven.com

The Code-TidyAll distribution provides a command line script called tidyall that will use Perl::Tidy to change the layout of the code.

This tandem needs two configuration files.

The .perltidyrc file contains the instructions to Perl::Tidy that describe the layout of a Perl file. We used the following file, copied from the source code of the Perl Maven project.

-pbp
-nst
-et=4
--maximum-line-length=120

# Break a line after opening/before closing token.
-vt=0
-vtc=0

The tidyall command uses a separate file called .tidyallrc that describes which files need to be beautified.

[PerlTidy]
select = {lib,t}/**/*.{pl,pm,t}
select = Makefile.PL
select = {mod2html,podtree2html,pods2html,perl2html}
argv = --profile=$ROOT/.perltidyrc

[SortLines]
select = .gitignore
Once I installed Code::TidyAll and placed those files in the root directory of the project, I could run tidyall -a .

That created a directory called .tidyall.d/ where it stores cached versions of the files, and changed all the files that were matches by the select statements in the .tidyallrc file.

Then, I added .tidyall.d/ to the .gitignore file to avoid adding that subdirectory to the repository and ran tidyall -a again to make sure the .gitignore file is sorted.

[Sep 02, 2019] Switch statement for bash script

Sep 02, 2019 | www.linuxquestions.org
Switch statement for bash script
Hello, I am currently trying out the switch statement using a bash script.

CODE:
showmenu () {
echo "1. Number1"
echo "2. Number2"
echo "3. Number3"
echo "4. All"
echo "5. Quit"
}

while true
do
showmenu
read choice
echo "Enter a choice:"
case "$choice" in
"1")
echo "Number One"
;;
"2")
echo "Number Two"
;;
"3")
echo "Number Three"
;;
"4")
echo "Number One, Two, Three"
;;
"5")
echo "Program Exited"
exit 0
;;
*)
echo "Please enter number ONLY ranging from 1-5!"
;;
esac
done

OUTPUT:
1. Number1
2. Number2
3. Number3
4. All
5. Quit
Enter a choice:

So, when the code is run, a menu with options 1-5 is shown, then the user is asked to enter a choice, and finally an output is shown. But is it possible for the user to enter multiple choices? For example, if the user enters choices "1" and "3", the output should be "Number One" and "Number Three". Any idea?

Just something to get you started.

Code:

#! /bin/bash
showmenu ()
{
    typeset ii
    typeset -i jj=1
    typeset -i kk
    typeset -i valid=0  # valid=1 if input is good

    while (( ! valid ))
    do
        for ii in "${options[@]}"
        do
            echo "$jj) $ii"
            let jj++
        done
        read -e -p 'Select a list of actions : ' -a answer
        jj=0
        valid=1
        for kk in "${answer[@]}"
        do
            if (( kk < 1 || kk > "${#options[@]}" ))
            then
                echo "Error Item $jj is out of bounds" 1>&2
                valid=0
                break
            fi
            let jj++
        done
    done
}

typeset -r c1=Number1
typeset -r c2=Number2
typeset -r c3=Number3
typeset -r c4=All
typeset -r c5=Quit
typeset -ra options=($c1 $c2 $c3 $c4 $c5)
typeset -a answer
typeset -i kk
while true
do
    showmenu
    for kk in "${answer[@]}"
    do
        case $kk in
        1)
            echo 'Number One'
            ;;
        2)
            echo 'Number Two'
            ;;
        3)
            echo 'Number Three'
            ;;
        4)
            echo 'Number One, Two, Three'
            ;;
        5)
            echo 'Program Exit'
            exit 0
            ;;
        esac
    done 
done
wjs1990 ,Nov 16, 2009 (Original Poster)

Ok will try it out first. Thanks.

evo2 ,Nov 16, 2009

This can be done just by wrapping your case block in a for loop and changing one line.

Code:

#!/bin/bash
showmenu () {
    echo "1. Number1"
    echo "2. Number2"
    echo "3. Number3"
    echo "4. All"
    echo "5. Quit"
}

while true ; do
    showmenu
    read choices
    for choice in $choices ; do
        case "$choice" in
            1)
                echo "Number One" ;;
            2)
                echo "Number Two" ;;
            3)
                echo "Number Three" ;;
            4)
                echo "Numbers One, two, three" ;;
            5)
                echo "Exit"
                exit 0 ;;
            *)
                echo "Please enter number ONLY ranging from 1-5!"
                ;;
        esac
    done
done
You can now enter any number of choices separated by whitespace.

Cheers,

EVo2.
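
For illustration, a session with the modified script might look like this, where "1 3" is typed by the user:

1. Number1
2. Number2
3. Number3
4. All
5. Quit
1 3
Number One
Number Three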

[Sep 02, 2019] bash - Pretty-print for shell script

Oct 21, 2010 | stackoverflow.com



Benoit ,Oct 21, 2010 at 13:19

I'm looking for something similiar to indent but for (bash) scripts. Console only, no colorizing, etc.

Do you know of one ?

Jamie ,Sep 11, 2012 at 3:00

Vim can indent bash scripts. But not reformat them before indenting.
Backup your bash script, open it with vim, type gg=GZZ and indent will be corrected. (Note for the impatient: this overwrites the file, so be sure to do that backup!)

Though, some bugs with << (expecting EOF as first character on a line) e.g.

EDIT: ZZ not ZQ

Daniel Martí ,Apr 8, 2018 at 13:52

A bit late to the party, but it looks like shfmt could do the trick for you.

Brian Chrisman ,Aug 11 at 4:08

In bash I do this:
reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3 | sed -e "s/^\s\s\s\s//"
}

this eliminates comments and reindents the script "bash way".

If you have HEREDOCS in your script, they got ruined by the sed in the previous function.

So use:

reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3
}

But all your script will have a 4 spaces indentation.

Or you can do:

reindent () 
{ 
    rstr=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 16 | head -n 1);
    source <(echo "Zibri () {";cat "$1"|sed -e "s/^\s\s\s\s/$rstr/"; echo "}");
    echo '#!/bin/bash';
    declare -f Zibri | head --lines=-1 | tail --lines=+3 | sed -e "s/^\s\s\s\s//;s/$rstr/    /"
}

which takes care also of heredocs.

Pius Raeder ,Jan 10, 2017 at 8:35

Found this http://www.linux-kheops.com/doc/perl/perl-aubert/fmt.script .

Very nice, only one thing i took out is the [...]->test substitution.

[Sep 02, 2019] mvdan-sh A shell parser, formatter, and interpreter (POSIX-Bash-mksh)

Written in Go language
Sep 02, 2019 | github.com

sh

A shell parser, formatter and interpreter. Supports POSIX Shell , Bash and mksh . Requires Go 1.11 or later.

Quick start

To parse shell scripts, inspect them, and print them out, see the syntax examples .

For high-level operations like performing shell expansions on strings, see the shell examples .

shfmt

Go 1.11 and later can download the latest v2 stable release:

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/cmd/shfmt

The latest v3 pre-release can be downloaded in a similar manner, using the /v3 module:

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/v3/cmd/shfmt

Finally, any older release can be built with their respective older Go versions by manually cloning, checking out a tag, and running go build ./cmd/shfmt .

shfmt formats shell programs. It can use tabs or any number of spaces to indent. See canonical.sh for a quick look at its default style.

You can feed it standard input, any number of files or any number of directories to recurse into. When recursing, it will operate on .sh and .bash files and ignore files starting with a period. It will also operate on files with no extension and a shell shebang.

shfmt -l -w script.sh

Typically, CI builds should use the command below, to error if any shell scripts in a project don't adhere to the format:

shfmt -d .

Use -i N to indent with a number of spaces instead of tabs. There are other formatting options - see shfmt -h . For example, to get the formatting appropriate for Google's Style guide, use shfmt -i 2 -ci .

Packages are available on Arch , CRUX , Docker , FreeBSD , Homebrew , NixOS , Scoop , Snapcraft , and Void .

Replacing bash -n

bash -n can be useful to check for syntax errors in shell scripts. However, shfmt >/dev/null can do a better job as it checks for invalid UTF-8 and does all parsing statically, including checking POSIX Shell validity:

$ echo '${foo:1 2}' | bash -n
$ echo '${foo:1 2}' | shfmt
1:9: not a valid arithmetic operator: 2
$ echo 'foo=(1 2)' | bash --posix -n
$ echo 'foo=(1 2)' | shfmt -p
1:5: arrays are a bash feature

gosh

cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/v3/cmd/gosh

Experimental shell that uses interp . Work in progress, so don't expect stability just yet.

Fuzzing

This project makes use of go-fuzz to find crashes and hangs in both the parser and the printer. To get started, run:

git checkout fuzz
./fuzz

Caveats

$ echo '${array[spaced string]}' | shfmt
1:16: not a valid arithmetic operator: string
$ echo '${array[dash-string]}' | shfmt
${array[dash - string]}
$ echo '$((foo); (bar))' | shfmt
1:1: reached ) without matching $(( with ))

JavaScript

A subset of the Go packages are available as an npm package called mvdan-sh . See the _js directory for more information.

Docker

To build a Docker image, checkout a specific version of the repository and run:

docker build -t my:tag -f cmd/shfmt/Dockerfile .

Related projects

[Aug 29, 2019] Parsing bash script options with getopts by Kevin Sookocheff

Mar 30, 2018 | sookocheff.com

Parsing bash script options with getopts Posted on January 4, 2015 | 5 minutes | Kevin Sookocheff A common task in shell scripting is to parse command line arguments to your script. Bash provides the getopts built-in function to do just that. This tutorial explains how to use the getopts built-in function to parse arguments and options to a bash script.

The getopts function takes three parameters. The first is a specification of which options are valid, listed as a sequence of letters. For example, the string 'ht' signifies that the options -h and -t are valid.

The second argument to getopts is a variable that will be populated with the option or argument to be processed next. In the following loop, opt will hold the value of the current option that has been parsed by getopts .

while getopts ":ht" opt; do
  case ${opt} in
    h ) # process option h
      ;;
    t ) # process option t
      ;;
    \? ) echo "Usage: cmd [-h] [-t]"
      ;;
  esac
done

This example shows a few additional features of getopts . First, if an invalid option is provided, the option variable is assigned the value ? . You can catch this case and provide an appropriate usage message to the user. Second, this behaviour is only true when you prepend the list of valid options with : to disable the default error handling of invalid options. It is recommended to always disable the default error handling in your scripts.

The third argument to getopts is the list of arguments and options to be processed. When not provided, this defaults to the arguments and options provided to the application ( $@ ). You can provide this third argument to use getopts to parse any list of arguments and options you provide.

Shifting processed options

The variable OPTIND holds the number of options parsed by the last call to getopts . It is common practice to call the shift command at the end of your processing loop to remove options that have already been handled from $@ .

shift $((OPTIND -1))
Parsing options with arguments

Options that themselves have arguments are signified with a : . The argument to an option is placed in the variable OPTARG . In the following example, the option t takes an argument. When the argument is provided, we copy its value to the variable target . If no argument is provided getopts will set opt to : . We can recognize this error condition by catching the : case and printing an appropriate error message.

while getopts ":t:" opt; do
  case ${opt} in
    t )
      target=$OPTARG
      ;;
    \? )
      echo "Invalid option: $OPTARG" 1>&2
      ;;
    : )
      echo "Invalid option: $OPTARG requires an argument" 1>&2
      ;;
  esac
done
shift $((OPTIND -1))
An extended example – parsing nested arguments and options

Let's walk through an extended example of processing a command that takes options, has a sub-command, and whose sub-command takes an additional option that has an argument. This is a mouthful so let's break it down using an example. Let's say we are writing our own version of the pip command . In this version you can call pip with the -h option to display a help message.

> pip -h
Usage:
    pip -h                      Display this help message.
    pip install                 Install a Python package.

We can use getopts to parse the -h option with the following while loop. In it we catch invalid options with \? and shift all arguments that have been processed with shift $((OPTIND -1)) .

while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install                 Install a Python package."
      exit 0
      ;;
    \? )
      echo "Invalid Option: -$OPTARG" 1>&2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))

Now let's add the sub-command install to our script. install takes as an argument the Python package to install.

> pip install urllib3

install also takes an option, -t . -t takes as an argument the location to install the package to relative to the current directory.

> pip install urllib3 -t ./src/lib

To process this line we must find the sub-command to execute. This value is the first argument to our script.

subcommand=$1
shift # Remove `pip` from the argument list

Now we can process the sub-command install . In our example, the option -t is actually an option that follows the package argument so we begin by removing install from the argument list and processing the remainder of the line.

case "$subcommand" in
  install)
    package=$1
    shift # Remove `install` from the argument list
    ;;
esac

After shifting the argument list we can process the remaining arguments as if they are of the form package -t src/lib . The -t option takes an argument itself. This argument will be stored in the variable OPTARG and we save it to the variable target for further work.

case "$subcommand" in
  install)
    package=$1
    shift # Remove `install` from the argument list

  while getopts ":t:" opt; do
    case ${opt} in
      t )
        target=$OPTARG
        ;;
      \? )
        echo "Invalid Option: -$OPTARG" 1>&2
        exit 1
        ;;
      : )
        echo "Invalid Option: -$OPTARG requires an argument" 1>&2
        exit 1
        ;;
    esac
  done
  shift $((OPTIND -1))
  ;;
esac

Putting this all together, we end up with the following script that parses arguments to our version of pip and its sub-command install .

package=""  # Default to empty package
target=""  # Default to empty target

# Parse options to the `pip` command
while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install <package>       Install <package>."
      exit 0
      ;;
   \? )
     echo "Invalid Option: -$OPTARG" 1>&2
     exit 1
     ;;
  esac
done
shift $((OPTIND -1))

subcommand=$1; shift  # Remove 'pip' from the argument list
case "$subcommand" in
  # Parse options to the install sub command
  install)
    package=$1; shift  # Remove 'install' from the argument list

    # Process package options
    while getopts ":t:" opt; do
      case ${opt} in
        t )
          target=$OPTARG
          ;;
        \? )
          echo "Invalid Option: -$OPTARG" 1>&2
          exit 1
          ;;
        : )
          echo "Invalid Option: -$OPTARG requires an argument" 1>&2
          exit 1
          ;;
      esac
    done
    shift $((OPTIND -1))
    ;;
esac

After processing the above sequence of commands, the variable package will hold the package to install and the variable target will hold the target to install the package to. You can use this as a template for processing any set of arguments and options to your scripts.
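
To see what the parser produced, you could append a couple of echo statements to the script above and run it; the file name pip.sh is only an assumption for illustration:

echo "package = ${package}"
echo "target  = ${target}"

$ ./pip.sh install urllib3 -t ./src/lib
package = urllib3
target  = ./src/lib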

bash getopts

[Aug 29, 2019] How do I parse command line arguments in Bash - Stack Overflow

Jul 10, 2017 | stackoverflow.com

Livven, Jul 10, 2017 at 8:11

Update: It's been more than 5 years since I started this answer. Thank you for LOTS of great edits/comments/suggestions. In order to save maintenance time, I've modified the code block to be 100% copy-paste ready. Please do not post comments like "What if you changed X to Y ". Instead, copy-paste the code block, see the output, make the change, rerun the script, and comment "I changed X to Y and ...". I don't have time to test your ideas and tell you if they work.
Method #1: Using bash without getopt[s]

Two common ways to pass key-value-pair arguments are:

Bash Space-Separated (e.g., --option argument ) (without getopt[s])

Usage demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

cat >/tmp/demo-space-separated.sh <<'EOF'
#!/bin/bash

POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"

case $key in
    -e|--extension)
    EXTENSION="$2"
    shift # past argument
    shift # past value
    ;;
    -s|--searchpath)
    SEARCHPATH="$2"
    shift # past argument
    shift # past value
    ;;
    -l|--lib)
    LIBPATH="$2"
    shift # past argument
    shift # past value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument
    ;;
    *)    # unknown option
    POSITIONAL+=("$1") # save it in an array for later
    shift # past argument
    ;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters

echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of