Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

Linux Tips


There are several large collections of Linux tips on the Internet. They are a mixture of obsolete and useful tips, so some work needs to be done to separate the valuable information from the junk. Among them:

For YUM tips, one can look at Yum - Linux@Duke Project Wiki

Linux Gazette regularly publishes a tips column. See, for example, More 2 Cent Tips! LG #106

Some references from Linux Today might also be useful.



NEWS CONTENTS

Old News ;-)

[Jun 12, 2021] 7 'dmesg' Commands for Troubleshooting and Collecting Information of Linux Systems

Jun 09, 2021 | www.tecmint.com

List all Detected Devices

To discover which hard disks have been detected by the kernel, you can search for the keyword "sda" with grep, as shown below.

[root@localhost ~]# dmesg | grep sda

[    1.280971] sd 2:0:0:0: [sda] 488281250 512-byte logical blocks: (250 GB/232 GiB)
[    1.281014] sd 2:0:0:0: [sda] Write Protect is off
[    1.281016] sd 2:0:0:0: [sda] Mode Sense: 00 3a 00 00
[    1.281039] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    1.359585]  sda: sda1 sda2 < sda5 sda6 sda7 sda8 >
[    1.360052] sd 2:0:0:0: [sda] Attached SCSI disk
[    2.347887] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
[   22.928440] Adding 3905532k swap on /dev/sda6.  Priority:-1 extents:1 across:3905532k FS
[   23.950543] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro
[   24.134016] EXT4-fs (sda5): mounted filesystem with ordered data mode. Opts: (null)
[   24.330762] EXT4-fs (sda7): mounted filesystem with ordered data mode. Opts: (null)
[   24.561015] EXT4-fs (sda8): mounted filesystem with ordered data mode. Opts: (null)

NOTE: 'sda' is the first SATA hard drive, 'sdb' is the second SATA hard drive, and so on. Search for 'hda' or 'hdb' in the case of IDE hard drives.
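The same grep approach extends to other parts of the kernel log. A few more filters in the same spirit (a hedged aside; the -T and --level flags require a reasonably recent util-linux dmesg):

# human-readable timestamps instead of seconds-since-boot
dmesg -T

# show only error- and warning-level messages
dmesg --level=err,warn

# messages related to a specific device or driver, e.g. eth0
dmesg | grep -i eth0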

[Jun 08, 2021] Recovery LVM Data from RAID

May 24, 2021 | blog.dougco.com


We had a client that had an OLD fileserver box, a Thecus N4100PRO. It was completely dust-ridden and the power supply had burned out.

Since these drives were in a RAID configuration, you could not hook just one of them up to a Windows or Linux box to see the data. You have to hook them all up to a box and reassemble the RAID.

We took out the drives (3 of them) and then used an external SATA to USB box to connect them to a Linux server running CentOS. You can use parted to see what drives are now being seen by your linux system:

parted -l | grep 'raid\|sd'

Then using that output, we assembled the drives into a software array:

mdadm -A /dev/md0 /dev/sdb2 /dev/sdc2 /dev/sdd2

If we tried to only use two of those drives, it would give an error, since these were all in a linear RAID in the Thecus box.

If the last command went well, you can see the built array like so:

root% cat /proc/mdstat
Personalities : [linear]
md0 : active linear sdd2[0] sdb2[2] sdc2[1]
1459012480 blocks super 1.0 128k rounding

Note that the personality shows the RAID type; in our case it was linear, which is probably the worst RAID level, since if any one drive fails, your data is lost. So good thing these drives outlasted the power supply!
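As an aside (not part of the original write-up), mdadm itself can confirm what was assembled:

# detailed view of the assembled array
mdadm --detail /dev/md0

# inspect a member partition's RAID superblock before assembling
mdadm --examine /dev/sdb2

Now we find the physical volume: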

pvdisplay /dev/md0

Gives us:

--- Physical volume ---
PV Name               /dev/md0
VG Name               vg0
PV Size               1.36 TB / not usable 704.00 KB
Allocatable           yes
PE Size (KByte)       2048
Total PE              712408
Free PE               236760
Allocated PE          475648
PV UUID               iqwRGX-zJ23-LX7q-hIZR-hO2y-oyZE-tD38A3

Then we find the logical volume:

lvdisplay /dev/vg0

Gives us:

--- Logical volume ---
LV Name               /dev/vg0/syslv
VG Name               vg0
LV UUID               UtrwkM-z0lw-6fb3-TlW4-IpkT-YcdN-NY1orZ
LV Write Access       read/write
LV Status             NOT available
LV Size               1.00 GB
Current LE            512
Segments              1
Allocation            inherit
Read ahead sectors    16384

--- Logical volume ---
LV Name               /dev/vg0/lv0
VG Name               vg0
LV UUID               0qsIdY-i2cA-SAHs-O1qt-FFSr-VuWO-xuh41q
LV Write Access       read/write
LV Status             NOT available
LV Size               928.00 GB
Current LE            475136
Segments              1
Allocation            inherit
Read ahead sectors    16384

We want to focus on the lv0 volume. You cannot mount it yet; first check the volume state with lvscan.

lvscan

That shows the volumes are currently inactive:

inactive '/dev/vg0/syslv' [1.00 GB] inherit
inactive '/dev/vg0/lv0' [928.00 GB] inherit

So we set them active with:

vgchange vg0 -a y

And doing lvscan again shows:

ACTIVE '/dev/vg0/syslv' [1.00 GB] inherit
ACTIVE '/dev/vg0/lv0' [928.00 GB] inherit

Now we can mount with:

mount /dev/vg0/lv0 /mnt

And voilà! We have our data up and accessible in /mnt to recover! Of course, your setup is most likely going to look different from what I have shown you above, but hopefully this gives some helpful information for you to recover your own data.
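For reference, here is the whole sequence condensed into one place. The device, volume group, and logical volume names are the ones from this example and will almost certainly differ on your system:

parted -l | grep 'raid\|sd'                      # identify the member partitions
mdadm -A /dev/md0 /dev/sdb2 /dev/sdc2 /dev/sdd2  # reassemble the software array
pvdisplay /dev/md0                               # confirm the physical volume
lvscan                                           # check the state of the logical volumes
vgchange vg0 -a y                                # activate the volume group
mount /dev/vg0/lv0 /mnt                          # mount the data volume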

[Jun 02, 2021] The Poetterisation of GNU-Linux

10, 2013 | www.slated.org

I've found a disturbing trend in GNU/Linux, where largely unaccountable cliques of developers unilaterally decide to make fundamental changes to the way it works, based on highly subjective and arrogant assumptions, then forge ahead with little regard to those who actually use the software, much less the well-established principles upon which that OS was originally built. The long litany of examples includes Ubuntu Unity, Gnome Shell, KDE 4, the /usr partition, SELinux, PolicyKit, Systemd, udev and PulseAudio, to name a few.

I hereby dub this phenomenon the "Poetterisation of GNU/Linux".

The broken features, creeping bloat, and in particular the unhealthy tendency toward more monolithic, less modular code in certain Free Software projects are a very serious problem, and I am very seriously opposed to it. I abandoned Windows to get away from that sort of nonsense; I didn't expect to have to deal with it in GNU/Linux.

Clearly this situation is untenable.

The motivation for these arbitrary changes mostly seems to be rooted in the misguided concept of "popularity", which makes no sense at all for something that's purely academic and non-commercial in nature. More users does not equal more developers. Indeed more developers does not even necessarily equal more or faster progress. What's needed is more of the right sort of developers, or at least more of the existing developers to adopt the right methods.

This is the problem with distros like Ubuntu, as the most archetypal example. Shuttleworth pushed hard to attract more users, with heavy marketing and by making Ubuntu easy at all costs, but in so doing all he did was amass a huge burden, in the form of a large influx of users who were, by and large, purely consumers, not contributors.

As a result, many of those now using GNU/Linux are really just typical Microsoft or Apple consumers, with all the baggage that entails. They're certainly not assets of any kind. They have expectations forged in a world of proprietary licensing and commercially-motivated, consumer-oriented, Hollywood-style indoctrination, not academia. This is clearly evidenced by their belligerently hostile attitudes toward the GPL, FSF, GNU and Stallman himself, along with their utter contempt for security and other well-established UNIX paradigms, and their unhealthy predilection for proprietary software, meaningless aesthetics and hype.

Reading the Ubuntu forums is an exercise in courting abject despair, as one witnesses an ignorant horde demand GNU/Linux be mutated into the bastard son of Windows and Mac OS X. And Shuttleworth, it seems, is only too happy to oblige, eagerly assisted by his counterparts on other distros and upstream projects, such as Lennart Poettering and Richard Hughes, the former of whom has somehow convinced every distro to mutate the Linux startup process into a hideous monolithic blob, and the latter of whom successfully managed to undermine 40 years of UNIX security in a single stroke, by obliterating the principle that unprivileged users should not be allowed to install software system-wide.

GNU/Linux does not need such people, indeed it needs to get rid of them as a matter of extreme urgency. This is especially true when those people are former (or even current) Windows programmers, because they not only bring with them their indoctrinated expectations, misguided ideologies and flawed methods, but worse still they actually implement them , thus destroying GNU/Linux from within.

Perhaps the most startling example of this was the Mono and Moonlight projects, which not only burdened GNU/Linux with all sorts of "IP" baggage, but instigated a sort of invasion of Microsoft "evangelists" and programmers, like a Trojan horse, who subsequently set about stuffing GNU/Linux with as much bloated, patent encumbered garbage as they could muster.

I was part of a group who campaigned relentlessly for years to oust these vermin and undermine support for Mono and Moonlight, and we were largely successful. Some have even suggested that my diatribes, articles and debates (with Miguel de Icaza and others) were instrumental in securing this victory, so clearly my efforts were not in vain.

Amassing a large user-base is a highly misguided aspiration for a purely academic field like Free Software. It really only makes sense if you're a commercial enterprise trying to make as much money as possible. The concept of "market share" is meaningless for something that's free (in the commercial sense).

Of course Canonical is also a commercial enterprise, but it has yet to break even, and all its income is derived through support contracts and affiliate deals, none of which depends on having a large number of Ubuntu users (the Ubuntu One service is cross-platform, for example).

What GNU/Linux needs is a small number of competent developers producing software to a high technical standard, who respect the well-established UNIX principles of security, efficiency, code correctness, logical semantics, structured programming, modularity, flexibility and engineering simplicity (a.k.a. the KISS Principle), just as any scientist or engineer in the field of computer science and software engineering should.

What it doesn't need is people who shrug their shoulders and bleat "disks are cheap".

[May 28, 2021] Bash scripting: Moving from the backtick operator to $ parentheses

May 20, 2021 | www.redhat.com

You can achieve the same result by replacing the backticks with the $ parens, like in the example below:

⯠echo "There are $(ls | wc -l) files in this directory"
There are 3 files in this directory

Here's another example, still very simple but a little more realistic. I need to troubleshoot something in my network connections, so I decide to show my total and waiting connections minute by minute.

$ cat netinfo.sh
#!/bin/bash
while true
do
  ss -an > netinfo.txt
  connections_total=$(cat netinfo.txt | wc -l)
  connections_waiting=$(grep WAIT netinfo.txt | wc -l)
  printf "$(date +%R) - Total=%6d Waiting=%6d\n" $connections_total $connections_waiting
  sleep 60
done

$ ./netinfo.sh
22:59 - Total=  2930 Waiting=   977
23:00 - Total=  2923 Waiting=   963
23:01 - Total=  2346 Waiting=   397
23:02 - Total=  2497 Waiting=   541

It doesn't seem like a huge difference, right? I just had to adjust the syntax. Well, there are some implications involving the two approaches. If you are like me, who automatically uses the backticks without even blinking, keep reading.

Deprecation and recommendations

Deprecation sounds like a bad word, and in many cases, it might really be bad.

When I was researching the explanations for the backtick operator, I found some discussions about "are the backtick operators deprecated?"

The short answer is: not in the sense of "on the verge of becoming unsupported and stopping working." However, backticks should be avoided and replaced by the $ parens syntax.

The main reasons for that are (in no particular order):

1. Backtick operators can become messy if the internal commands also use backticks.

2. The $ parens operator is safer and more predictable.

Here are some examples of the behavioral differences between backticks and $ parens:

$ echo '\$x'
\$x

$ echo `echo '\$x'`
$x

$ echo $(echo '\$x')
\$x
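Nesting (reason 1 above) is where the difference really shows. A quick illustration; the commands themselves are arbitrary:

$ echo "current directory: $(basename $(pwd))"    # $ parens nest naturally
$ echo "current directory: `basename \`pwd\``"    # backticks must be escaped to nest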

You can find additional examples of the differences between backticks and $ parens behavior here.


Wrapping up

If you compare the two approaches, it seems logical to think that you should always/only use the $ parens approach. And you might think that the backtick operators are only used by sysadmins from an older era.

Well, that might be true, as sometimes I use things that I learned long ago, and in simple situations my "muscle memory" just codes them for me. For those ad-hoc commands that you know do not contain any nasty characters, you might be OK using backticks. But for anything more long-lived or more complex/sophisticated, please go with the $ parens approach.

[May 23, 2021] Adding arguments and options to your Bash scripts

May 23, 2021 | www.redhat.com

Handling options

The ability for a Bash script to handle command line options such as -h to display help gives you some powerful capabilities to direct the program and modify what it does. In the case of your -h option, you want the program to print the help text to the terminal session and then quit without running the rest of the program. The ability to process options entered at the command line can be added to the Bash script using the while command in conjunction with the getopts and case commands.

The getopts command reads any and all options specified at the command line and creates a list of those options. The while command loops through the list of options, setting the variable $option for each in the code below. The case statement is used to evaluate each option in turn and execute the statements in the corresponding stanza. The while statement will continue to assess the list of options until they have all been processed or an exit statement is encountered, which terminates the program.

Be sure to delete the help function call just before the echo "Hello world!" statement so that the main body of the program now looks like this.

############################################################
############################################################
# Main program                                             #
############################################################
############################################################
############################################################
# Process the input options. Add options as needed.        #
############################################################
# Get the options
while getopts ":h" option; do
   case $option in
      h) # display Help
         Help
         exit;;
   esac
done

echo "Hello world!"

Notice the double semicolon at the end of the exit statement in the case option for -h. This is required to delineate the end of each option's stanza, so add it to every stanza in this case statement.

Testing is now a little more complex. You need to test your program with several different options -- and no options -- to see how it responds. First, check that, with no options, it prints "Hello world!" as it should.

[student@testvm1 ~]$ hello.sh
Hello world!

That works, so now test the logic that displays the help text.

[student@testvm1 ~]$ hello.sh -h
Add a description of the script functions here.

Syntax: scriptTemplate [-g|h|t|v|V]
options:
g     Print the GPL license notification.
h     Print this Help.
v     Verbose mode.
V     Print software version and exit.

That works as expected, so now try some testing to see what happens when you enter some unexpected options.

[student@testvm1 ~]$ hello.sh -x
Hello world!
[student@testvm1 ~]$ hello.sh -q
Hello world!
[student@testvm1 ~]$ hello.sh -lkjsahdf
Add a description of the script functions here.

Syntax: scriptTemplate [-g|h|t|v|V]
options:
g     Print the GPL license notification.
h     Print this Help.
v     Verbose mode.
V     Print software version and exit.

[student@testvm1 ~]$
Handling invalid options

The program just ignores, without generating any errors, any options for which you haven't created specific responses. In the last entry, with the -lkjsahdf options, the program did recognize the "h" in the list and printed the help text. Testing has shown that one thing missing is the ability to handle incorrect input and terminate the program if any is detected.

You can add another case stanza to the case statement that will match any option for which there is no explicit match. This general case will match anything you haven't provided a specific match for. The case statement now looks like this.

while getopts ":h" option; do
   case $option in
      h) # display Help
         Help
         exit;;
     \?) # Invalid option
         echo "Error: Invalid option"
         exit;;
   esac
done

This bit of code deserves an explanation about how it works. It seems complex but is fairly easy to understand. The while – done structure defines a loop that executes once for each option in the getopts – option structure. The ":h" string -- which requires the quotes -- lists the possible input options that will be evaluated by the case – esac structure. Each option listed must have a corresponding stanza in the case statement. In this case, there are two. One is the h) stanza which calls the Help procedure. After the Help procedure completes, execution returns to the next program statement, exit;; which exits from the program without executing any more code even if some exists. The option processing loop is also terminated, so no additional options would be checked.

Notice the catch-all match of \? as the last stanza in the case statement. If any options are entered that are not recognized, this stanza prints a short error message and exits from the program.

Any additional specific cases must precede the final catch-all. I like to place the case stanzas in alphabetical order, but there will be circumstances where you want to ensure that a particular case is processed before certain other ones. The case statement is sequence sensitive, so be aware of that when you construct yours.

The last statement of each stanza in the case construct must end with the double semicolon ( ;; ), which is used to mark the end of each stanza explicitly. This allows those programmers who like to use explicit semicolons for the end of each statement instead of implicit ones to continue to do so for each statement within each case stanza.

Test the program again using the same options as before and see how this works now.

The Bash script now looks like this.

#!/bin/bash
############################################################
# Help                                                     #
############################################################
Help()
{
   # Display Help
   echo "Add description of the script functions here."
   echo
   echo "Syntax: scriptTemplate [-g|h|v|V]"
   echo "options:"
   echo "g     Print the GPL license notification."
   echo "h     Print this Help."
   echo "v     Verbose mode."
   echo "V     Print software version and exit."
   echo
}

############################################################
############################################################
# Main program                                             #
############################################################
############################################################
############################################################
# Process the input options. Add options as needed.        #
############################################################
# Get the options
while getopts ":h" option; do
   case $option in
      h) # display Help
         Help
         exit;;
     \?) # Invalid option
         echo "Error: Invalid option"
         exit;;
   esac
done


echo "hello world!"

Be sure to test this version of your program very thoroughly. Use random input and see what happens. You should also try testing valid and invalid options without using the dash ( - ) in front.

Using options to enter data

First, add a variable and initialize it: add the two "Set variables" lines in the program segment shown below. This initializes the $Name variable to "world" as the default.

<snip>
############################################################
############################################################
# Main program                                             #
############################################################
############################################################

# Set variables
Name="world"

############################################################
# Process the input options. Add options as needed.        #
<snip>

Change the last line of the program, the echo command, to this.

echo "hello $Name!"

You'll add the logic to input a name in a moment, but first test the program again. The result should be exactly the same as before.

[dboth@david ~]$ hello.sh
hello world!
[dboth@david ~]$
Now add an -n option that takes a name as an option argument. Extend the getopts string to ":hn:" (the trailing colon means -n requires an argument) and add a stanza to handle it, so the option-processing loop looks like this.

# Get the options
while getopts ":hn:" option; do
   case $option in
      h) # display Help
         Help
         exit;;
      n) # Enter a name
         Name=$OPTARG;;
     \?) # Invalid option
         echo "Error: Invalid option"
         exit;;
   esac
done

$OPTARG is always the variable name used for each new option argument, no matter how many there are. You must assign the value in $OPTARG to a variable name that will be used in the rest of the program. This new stanza does not have an exit statement. This changes the program flow so that after processing all valid options in the case statement, execution moves on to the next statement after the case construct.

Test the revised program.

[dboth@david ~]$ hello.sh
hello world!
[dboth@david ~]$ hello.sh -n LinuxGeek46
hello LinuxGeek46!
[dboth@david ~]$ hello.sh -n "David Both"
hello David Both!
[dboth@david ~]$

The completed program looks like this.

#!/bin/bash
############################################################
# Help                                                     #
############################################################
Help()
{
   # Display Help
   echo "Add description of the script functions here."
   echo
   echo "Syntax: scriptTemplate [-g|h|v|V]"
   echo "options:"
   echo "g     Print the GPL license notification."
   echo "h     Print this Help."
   echo "v     Verbose mode."
   echo "V     Print software version and exit."
   echo
}

############################################################
############################################################
# Main program                                             #
############################################################
############################################################

# Set variables
Name="world"

############################################################
# Process the input options. Add options as needed.        #
############################################################
# Get the options
while getopts ":hn:" option; do
   case $option in
      h) # display Help
         Help
         exit;;
      n) # Enter a name
         Name=$OPTARG;;
     \?) # Invalid option
         echo "Error: Invalid option"
         exit;;
   esac
done


echo "hello $Name!"

Be sure to test the help facility and how the program reacts to invalid input to verify that its ability to process those has not been compromised. If that all works as it should, then you have successfully learned how to use options and option arguments.
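One detail worth adding: the leading colon in the ":hn:" string puts getopts into silent error-reporting mode. In that mode, besides \? for invalid options, getopts returns a literal colon when an option that requires an argument (like -n) is given without one, placing the option letter in $OPTARG. A sketch of a stanza to catch that case, following the same pattern as above:

while getopts ":hn:" option; do
   case $option in
      h) # display Help
         Help
         exit;;
      n) # Enter a name
         Name=$OPTARG;;
      :) # Missing option argument, e.g. "hello.sh -n" with no name
         echo "Error: -$OPTARG requires an argument"
         exit;;
     \?) # Invalid option
         echo "Error: Invalid option"
         exit;;
   esac
done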

[Jan 02, 2021] How to convert from CentOS or Oracle Linux to RHEL

convert2rhel is an RPM package which contains a Python 2.x script written in a completely incomprehensible, over-modularized manner. Python obscurantism in action ;-)
It looks like a "blackbox" tool unless you know Python well. As such, it is dangerous to rely upon.
Jan 02, 2021 | access.redhat.com

[Jan 02, 2021] Linux sysadmin basics: Start NIC at boot

Nov 14, 2019 | www.redhat.com

If you've ever booted a Red Hat-based system and have no network connectivity, you'll appreciate this quick fix.


[Image: "Fast Ethernet PCI Network Interface Card SN5100TX.jpg" by Jana.Wiki, licensed under CC BY-SA 3.0]

It might surprise you to know that if you forget to flip the network interface card (NIC) switch to the ON position (shown in the image below) during installation, your Red Hat-based system will boot with the NIC disconnected:

[Image: Setting the NIC to the ON position during installation.]

But don't worry: in this article, I'll show you how to set the NIC to connect on every boot, and how to disable or enable your NIC on demand.

If your NIC isn't enabled at startup, you have to edit the /etc/sysconfig/network-scripts/ifcfg-NIC_name file, where NIC_name is your system's NIC device name. In my case, it's enp0s3. Yours might be eth0, eth1, em1, etc. List your network devices and their IP addresses with the ip addr command:

$ ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff

Note that my primary NIC (enp0s3) has no assigned IP address. I have virtual NICs because my Red Hat Enterprise Linux 8 system is a VirtualBox virtual machine. After you've figured out what your physical NIC's name is, you can now edit its interface configuration file:

$ sudo vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

and change the ONBOOT="no" entry to ONBOOT="yes" as shown below:

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s3"
UUID="77cb083f-2ad3-42e2-9070-697cb24edf94"
DEVICE="enp0s3"
ONBOOT="yes"

Save and exit the file.

You don't need to reboot to start the NIC, but after you make this change, the primary NIC will be on and connected upon all subsequent boots.
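On systems where NetworkManager manages the interface, the same persistent setting can be applied without editing the file by hand. A one-line alternative, assuming the connection is named enp0s3 as in this example:

$ sudo nmcli connection modify enp0s3 connection.autoconnect yes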

To enable the NIC, use the ifup command:

ifup enp0s3

Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

Now the ip addr command displays the enp0s3 device with an IP address:

$ ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.64/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
       valid_lft 86266sec preferred_lft 86266sec
    inet6 2600:1702:a40:88b0:c30:ce7e:9319:9fe0/64 scope global dynamic noprefixroute 
       valid_lft 3467sec preferred_lft 3467sec
    inet6 fe80::9b21:3498:b83c:f3d4/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff

To disable a NIC, use the ifdown command. Please note that issuing this command from a remote system will terminate your session:

ifdown enp0s3

Connection 'enp0s3' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

That's a wrap

It's frustrating to encounter a Linux system that has no network connection. It's more frustrating to have to connect to a virtual KVM or to walk up to the console to fix it. It's easy to miss the switch during installation; I've missed it myself. Now you know how to fix the problem and have your system network-connected on every boot, so before you drive yourself crazy with troubleshooting steps, try the ifup command to see if that's your easy fix.

Takeaways: ifup, ifdown, /etc/sysconfig/network-scripts/ifcfg-NIC_name

[Jan 02, 2021] Looking forward to Linux network configuration in the initial ramdisk (initrd)

Nov 24, 2020 | www.redhat.com
The need for an initrd

When you press a machine's power button, the boot process starts with a hardware-dependent mechanism that loads a bootloader . The bootloader software finds the kernel on the disk and boots it. Next, the kernel mounts the root filesystem and executes an init process.

This process sounds simple, and it might be what actually happens on some Linux systems. However, modern Linux distributions have to support a vast set of use cases for which this procedure is not adequate.

First, the root filesystem could be on a device that requires a specific driver. Before trying to mount the filesystem, the right kernel module must be inserted into the running kernel. In some cases, the root filesystem is on an encrypted partition and therefore needs a userspace helper that asks the user for the passphrase and feeds it to the kernel. Or, the root filesystem could be shared over the network via NFS or iSCSI, and mounting it may first require configuring IP addresses and routes on a network interface.


To overcome these issues, the bootloader can pass to the kernel a small filesystem image (the initrd) that contains scripts and tools to find and mount the real root filesystem. Once this is done, the initrd switches to the real root, and the boot continues as usual.

The dracut infrastructure

On Fedora and RHEL, the initrd is built through dracut . From its home page , dracut is "an event-driven initramfs infrastructure. dracut (the tool) is used to create an initramfs image by copying tools and files from an installed system and combining it with the dracut framework, usually found in /usr/lib/dracut/modules.d ."

A note on terminology: Sometimes, the names initrd and initramfs are used interchangeably. They actually refer to different ways of building the image. An initrd is an image containing a real filesystem (for example, ext2) that gets mounted by the kernel. An initramfs is a cpio archive containing a directory tree that gets unpacked as a tmpfs. Nowadays, the initrd images are deprecated in favor of the initramfs scheme. However, the initrd name is still used to indicate the boot process involving a temporary filesystem.

Kernel command-line

Let's revisit the NFS-root scenario that was mentioned before. One possible way to boot via NFS is to use a kernel command-line containing the root=dhcp argument.

The kernel command-line is a list of options passed to the kernel from the bootloader, accessible to the kernel and applications. If you use GRUB, it can be changed by pressing the e key on a boot entry and editing the line starting with linux .

The dracut code inside the initramfs parses the kernel command-line and starts DHCP on all interfaces if the command-line contains root=dhcp . After obtaining a DHCP lease, dracut configures the interface with the parameters received (IP address and routes); it also extracts the value of the root-path DHCP option from the lease. The option carries an NFS server's address and path (which could be, for example, 192.168.50.1:/nfs/client ). Dracut then mounts the NFS share at this location and proceeds with the boot.

If there is no DHCP server providing the address and the NFS root path, the values can be configured explicitly in the command line:

root=nfs:192.168.50.1:/nfs/client ip=192.168.50.101:::24::ens2:none

Here, the first argument specifies the NFS server's address, and the second configures the ens2 interface with a static IP address.

There are two syntaxes to specify network configuration for an interface:

ip=<interface>:{dhcp|on|any|dhcp6|auto6}[:[<mtu>][:<macaddr>]]

ip=<client-IP>:[<peer>]:<gateway-IP>:<netmask>:<client_hostname>:<interface>:{none|off|dhcp|on|any|dhcp6|auto6|ibft}[:[<mtu>][:<macaddr>]]

The first can be used for automatic configuration (DHCP or IPv6 SLAAC), and the second for static configuration or a combination of automatic and static. Here some examples:

ip=enp1s0:dhcp
ip=192.168.10.30::192.168.10.1:24::enp1s0:none
ip=[2001:0db8::02]::[2001:0db8::01]:64::enp1s0:none

Note that if you pass an ip= option, but dracut doesn't need networking to mount the root filesystem, the option is ignored. To force network configuration without a network root, add rd.neednet=1 to the command line.

You probably noticed that among automatic configuration methods, there is also ibft . iBFT stands for iSCSI Boot Firmware Table and is a mechanism to pass parameters about iSCSI devices from the firmware to the operating system. iSCSI (Internet Small Computer Systems Interface) is a protocol to access network storage devices. Describing iBFT and iSCSI is outside the scope of this article. What is important is that by passing ip=ibft to the kernel, the network configuration is retrieved from the firmware.

Dracut also supports adding custom routes, specifying the machine name and DNS servers, creating bonds, bridges, VLANs, and much more. See the dracut.cmdline man page for more details.
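As one illustration, the static example above could be extended with a DNS server and forced network configuration like this (option names per the dracut.cmdline man page; exact support varies by dracut version):

ip=192.168.10.30::192.168.10.1:24::enp1s0:none nameserver=192.168.10.1 rd.neednet=1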

Network modules

The dracut framework included in the initramfs has a modular architecture. It comprises a series of modules, each containing scripts and binaries to provide specific functionality. You can see which modules are available to be included in the initramfs with the command dracut --list-modules .

At the moment, there are two modules to configure the network: network-legacy and network-manager . You might wonder why different modules provide the same functionality.

network-legacy is older and uses shell scripts calling utilities like iproute2 , dhclient , and arping to configure interfaces. After the switch to the real root, a different network configuration service runs. This service is not aware of what the network-legacy module intended to do and the current state of each interface. This can lead to problems maintaining the state across the root switch boundary.

A prominent example of a state to be kept is the DHCP lease. If an interface's address changed during the boot, the connection to an NFS share would break, causing a boot failure.

To ensure a seamless transition, there is a need for a mechanism to pass the state between the two environments. However, passing the state between services having different configuration models can be a problem.

The network-manager dracut module was created to improve this situation. The module runs NetworkManager in the initrd to configure connection profiles generated from the kernel command-line. Once done, NetworkManager serializes its state, which is later read by the NetworkManager instance in the real root.

Fedora 31 was the first distribution to switch to network-manager in initrd by default. On RHEL 8.2, network-legacy is still the default, but network-manager is available. On RHEL 8.3, dracut will use network-manager by default.

Enabling a different network module

While the two modules should be largely compatible, there are some differences in behavior. Some of those are documented in the nm-initrd-generator man page. In general, it is suggested to use the network-manager module when NetworkManager is enabled.

To rebuild the initrd using a specific network module, use one of the following commands:

# dracut --add network-legacy  --force --verbose
# dracut --add network-manager --force --verbose

Since this change will be reverted the next time the initrd is rebuilt, you may want to make the change permanent in the following way:

# echo 'add_dracutmodules+=" network-manager "' > /etc/dracut.conf.d/network-module.conf
# dracut --regenerate-all --force --verbose

The --regenerate-all option also rebuilds all the initramfs images for the kernel versions found on the system.
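To verify which network module actually ended up in the current image, lsinitrd can list the dracut modules included (a quick check against the default initramfs of the running kernel):

# lsinitrd -m | grep -E 'network-(manager|legacy)'
network-manager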

The network-manager dracut module

As with all dracut modules, the network-manager module is split into stages that are called at different times during the boot (see the dracut.modules man page for more details).

The first stage parses the kernel command-line by calling /usr/libexec/nm-initrd-generator to produce a list of connection profiles in /run/NetworkManager/system-connections . The second part of the module runs after udev has settled, i.e., after userspace has finished handling the kernel events for devices (including network interfaces) found in the system.

When NM is started in the real root environment, it registers on D-Bus, configures the network, and remains active to react to events or D-Bus requests. In the initrd, NetworkManager is run in the configure-and-quit=initrd mode, which doesn't register on D-Bus (since it's not available in the initrd, at least for now) and exits after reaching the startup-complete event.

The startup-complete event is triggered after all devices with a matching connection profile have tried to activate, successfully or not. Once all interfaces are configured, NM exits and calls dracut hooks to notify other modules that the network is available.

Note that the /run/NetworkManager directory containing generated connection profiles and other runtime state is copied over to the real root so that the new NetworkManager process running there knows exactly what to do.

Troubleshooting

If you have network issues in dracut, this section contains some suggestions for investigating the problem.

The first thing to do is add rd.debug to the kernel command-line, enabling debug logging in dracut. Logs are saved to /run/initramfs/rdsosreport.txt and are also available in the journal.

If the system doesn't boot, it is useful to get a shell inside the initrd environment to manually check why things aren't working. For this, there is an rd.break command-line argument. Note that the argument spawns a shell when the initrd has finished its job and is about to give control to the init process in the real root filesystem. To stop at a different stage of dracut (for example, after command-line parsing), use the following argument:

rd.break={cmdline|pre-udev|pre-trigger|initqueue|pre-mount|mount|pre-pivot|cleanup}

The initrd image contains a minimal set of binaries; if you need a specific tool at the dracut shell, you can rebuild the image, adding what is missing. For example, to add the ping and tcpdump binaries (including all their dependent libraries), run:

# dracut -f  --install "ping tcpdump"

and then optionally verify that they were included successfully:

# lsinitrd | grep "ping\|tcpdump"
Arguments: -f --install 'ping tcpdump'
-rwxr-xr-x   1 root     root        82960 May 18 10:26 usr/bin/ping
lrwxrwxrwx   1 root     root           11 May 29 20:35 usr/sbin/ping -> ../bin/ping
-rwxr-xr-x   1 root     root      1065224 May 29 20:35 usr/sbin/tcpdump
The generator

If you are familiar with NetworkManager configuration, you might want to know how a given kernel command-line is translated into NetworkManager connection profiles. This can be useful to better understand the configuration mechanism and find syntax errors in the command-line without having to boot the machine.

The generator is installed in /usr/libexec/nm-initrd-generator and must be called with the list of kernel arguments after a double dash. The --stdout option prints the generated connections on standard output. Let's try to call the generator with a sample command line:

$ /usr/libexec/nm-initrd-generator --stdout -- \
          ip=enp1s0:dhcp:00:99:88:77:66:55 rd.peerdns=0

802-3-ethernet.cloned-mac-address: '99:88:77:66:55' is not a valid MAC address

In this example, the generator reports an error because there is a missing field for the MTU after enp1s0 . Once the error is corrected, the parsing succeeds and the tool prints out the connection profile generated:

$ /usr/libexec/nm-initrd-generator --stdout -- \
        ip=enp1s0:dhcp::00:99:88:77:66:55 rd.peerdns=0

*** Connection 'enp1s0' ***

[connection]
id=enp1s0
uuid=e1fac965-4319-4354-8ed2-39f7f6931966
type=ethernet
interface-name=enp1s0
multi-connect=1
permissions=

[ethernet]
cloned-mac-address=00:99:88:77:66:55
mac-address-blacklist=

[ipv4]
dns-search=
ignore-auto-dns=true
may-fail=false
method=auto

[ipv6]
addr-gen-mode=eui64
dns-search=
ignore-auto-dns=true
method=auto

[proxy]

Note how the rd.peerdns=0 argument translates into the ignore-auto-dns=true property, which makes NetworkManager ignore DNS servers received via DHCP. An explanation of NetworkManager properties can be found on the nm-settings man page.


Conclusion

The NetworkManager dracut module is enabled by default in Fedora and will also soon be enabled on RHEL. It brings better integration between networking in the initrd and NetworkManager running in the real root filesystem.

While the current implementation is working well, there are some ideas for possible improvements. One is to abandon the configure-and-quit=initrd mode and run NetworkManager as a daemon started by a systemd service. In this way, NetworkManager will be run in the same way as when it's run in the real root, reducing the code to be maintained and tested.

To completely drop the configure-and-quit=initrd mode, NetworkManager should also be able to register on D-Bus in the initrd. Currently, dracut doesn't have any module providing a D-Bus daemon because the image should be minimal. However, there are already proposals to include it as it is needed to implement some new features.

With D-Bus running in the initrd, NetworkManager's powerful API will be available to other tools to query and change the network state, unlocking a wide range of applications. One of those is to run nm-cloud-setup in the initrd. The service, shipped in the NetworkManager-cloud-setup Fedora package fetches metadata from cloud providers' infrastructure (EC2, Azure, GCP) to automatically configure the network.

[Jan 01, 2021] Looks like Oracle can potentially pick up to 65% of CentOS users

Jan 01, 2021 | forums.centos.org

What do you think of the recent Red Hat announcement about CentOS Linux/Stream?

I can use either CentOS Linux or Stream and it makes no difference to me: 6 votes (11%)
I will switch reluctantly to CentOS Stream but I'd rather not: 2 votes (4%)
I depend on CentOS Linux 8 and its stability and now I need a new alternative: 10 votes (19%)
I love the idea of CentOS Stream and can't wait to use it: 1 vote (2%)
I'm off to a different distribution before CentOS 8 sunsets at the end of 2021: 13 votes (24%)
I feel completely betrayed by this decision and will avoid Red Hat solutions in future: 22 votes (41%)

Total votes: 54

[Jan 01, 2021] Oracle Linux DTrace

Jan 01, 2021 | www.oracle.com

... DTrace gives the operational insights that have long been missing in the data center, such as memory consumption, CPU time or what specific function calls are being made.

Developers can learn about and experiment with DTrace on Oracle Linux by installing the appropriate RPMs:
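The excerpt ends before listing the packages. As an illustration only (the package name below is an assumption based on Oracle's DTrace packaging; verify with "dnf search dtrace" on your release), installation on Oracle Linux 8 would look roughly like:

# package name is an assumption; confirm with: dnf search dtrace
sudo dnf install dtrace-utils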

[Jan 01, 2021] Oracle Linux vs. Red Hat Enterprise Linux by Jim Brull

Jan 05, 2019 | www.centroid.com

... ... ...

Here's what we found.

[Jan 01, 2021] Consider looking at openSUSE (still run out of Germany)

Jan 01, 2021 | www.reddit.com

If you are on CentOS-7 then you will probably be okay until Red Hat pulls the plug on 2024-06-30, so don't do anything rash. If you are on CentOS-8 then your days are numbered (~365) because this OS will shift from major-minor point updates to a streaming model at the end of 2021. Let's look at two early founders: SUSE started in Germany in 1991, whilst Red Hat started in America a year later. SUSE sells support for SLE (SUSE Linux Enterprise), which means you need a license to install-run-update-upgrade it. Likewise, Red Hat sells support for RHEL (Red Hat Enterprise Linux). SUSE also offers "openSUSE Leap" (released once a year as a major-minor point release of SLE) and "openSUSE Tumbleweed" (which is a streaming thingy). A couple of days ago I installed "openSUSE Leap" onto an old HP-Compaq 6000 desktop just to try it out (the installer actually had a few features I liked better than the CentOS-7 installer). When I get back to the office in two weeks, I'm going to try installing "openSUSE Leap" onto an HP-DL385p_gen8. I'll work with this for a few months and, if I am comfortable, I will migrate my employer's solution over to "openSUSE Leap".

Parting thoughts:

  1. openSUSE is run out of Germany. IMHO, switching over to a European distro is similar to what those database people did who preferred MariaDB to MySQL when Oracle was still hoping that MySQL would die from neglect.

  2. Someone cracked off to me the other day that, now that IBM is pulling the strings at "Red Hat", the company should be renamed "Blue Hat".


general-noob 4 points · 3 days ago

I downloaded and tried it last week and was actually pretty impressed. I have only ever tested SUSE in the past. Honestly, I'll stick with Red Hat/CentOS whatever, but I was still impressed. I'd recommend people take a look.

servingwater 2 points · 3 days ago

I have been playing with OpenSUSE a bit, too. Very solid this time around. In the past I never had any luck with it. But Leap 15.2 is doing fine for me. Just testing it virtually. TW also is pretty sweet and if I were to use a rolling release, it would be among the top contenders.

One thing I don't like with openSUSE is that you can't really, or are not supposed to I guess, disable the root account. You can't do it at install: if you leave the root account blank, SUSE will just assign the password for the user you created to it.
Of course, afterwards you can disable it with the proper commands, but it becomes a pain with YaST, as it seems YaST insists on being opened by root.

neilrieck 2 points · 2 days ago

Thanks for that "heads up" about root

gdhhorn 1 point · 2 days ago

One thing I don't like with openSUSE is that you can't really, or are not supposed to I guess, disable the root account. You can't do it at install: if you leave the root account blank, SUSE will just assign the password for the user you created to it.

I'm running Leap 15.2 on the laptops my kids run for school. During installation, I simply deselected the option for the account used to be an administrator; this required me to set a different password for administrative purposes.

Perhaps I'm misunderstanding your comment.

servingwater 1 point · 2 days ago

I think you might.
My point is/was that if I choose my regular user to be admin, I don't expect the system to create and activate a root account anyway and then just assign it my password.
I expect the root account to be disabled.

gdhhorn 2 points · 2 days ago

I didn't realize it made a user, 'root,' and auto generated a password. I'd always assumed if I said to make the user account admin, that was it.

TIL, thanks.

servingwater 1 point · 2 days ago

I was surprised, too. I was a bit "shocked" when I realized, after the install, that I could log in as root with my user password.
At the very least, IMHO, it should then still have you set the root password, even if you choose to make your user admin.
That would, for one, let you know that openSUSE is not disabling root, and two, give you a chance to give it a different password.
But other than that subjective issue, I found openSUSE Leap a very solid distro.

[Jan 01, 2021] What about the big academic labs? (Fermilab, CERN, DESY, etc)

Jan 01, 2021 | www.reddit.com

The big academic labs (Fermilab, CERN, and DESY, to name only three of many) used to run something called Scientific Linux, which was also maintained by Red Hat. See: https://scientificlinux.org/ and https://en.wikipedia.org/wiki/Scientific_Linux . Shortly after Red Hat acquired CentOS in 2014, Red Hat convinced the big academic labs to begin migrating over to CentOS (no one at that time thought that Red Hat would become Blue Hat).

phil_g 14 points · 2 days ago

To clarify, as a user of Scientific Linux:

Scientific Linux is not and was not maintained by Red Hat. Like CentOS, when it was truly a community distribution, Scientific Linux was an independent rebuild of the RHEL source code published by Red Hat. It is maintained primarily by people at Fermilab. (It's slightly different from CentOS in that CentOS aimed for binary compatibility with RHEL, while that is not a goal of Scientific Linux. In practice, SL often achieves binary compatibility, but if you have issues with that, it's more up to you to fix them than the SL maintainers.)

I don't know anything about Red Hat convincing institutions to stop using Scientific Linux; the first I heard about the topic was in April 2019 when Fermilab announced there would be no Scientific Linux 8. (They may reverse that decision. At the moment, they're "investigating the best path forward", with a decision to be announced in the first few months of 2021.)

neilrieck 4 points · 2 days ago

I fear you are correct. I just stumbled onto this article: https://www.linux.com/training-tutorials/scientific-linux-great-distro-wrong-name/ Even the Wikipedia article states "This product is derived from the free and open-source software made available by Red Hat, but is not produced, maintained or supported by them." But it does seem that Scientific Linux was created as a replacement for Fermilab Linux. I've also seen references to CC7 meaning "CERN CentOS 7". CERN is keeping their Linux page up to date, because what I am seeing here ( https://linux.web.cern.ch/ ) today is not what I saw two weeks ago.

There are

Niarbeht 16 points · 2 days ago

There are

Uh oh, guys, they got him!

deja_geek 9 points · 2 days ago

RedHat didn't convince them to stop using Scientific Linux, Fermilab no longer needed to have their own rebuild of RHEL sources. They switched to CentOS and modified CentOS if they needed to (though I don't really think they needed to)

meat_bunny 10 points · 2 days ago

Maintaining your own distro is a pain in the ass.

My crystal ball says they'll just use whatever RHEL rebuild floats to the top in a few months like the rest of us.

carlwgeorge 2 points · 2 days ago

SL has always been an independent rebuild. It has never been maintained, sponsored, or owned by Red Hat. They decided on their own to not build 8 and instead collaborate on CentOS. They even gained representation on the CentOS board (one from Fermi, one from CERN).

I'm not affiliated with any of those organizations, but my guess is they will switch to some combination of CentOS Stream and RHEL (under the upcoming no/low cost program).

VestoMSlipher 1 point · 11 hours ago

https://linux.web.cern.ch/#information-on-change-of-end-of-life-for-centos-8

[Jan 01, 2021] CentOS HAS BEEN CANCELLED !!!

Jan 01, 2021 | forums.centos.org

Re: CentOS HAS BEEN CANCELLED !!!

Post by whoop » 2020/12/08 20:00:36

Is anybody considering switching to RHEL's free non-production developer subscription? As I understand it, it is free and receives updates.
The only downside as I understand it is that you have to renew your license every year (and that you can't use it in commercial production).

[Dec 30, 2020] Switching from CentOS to Oracle Linux: a hands-on example

In view of such effective and free promotion of Oracle Linux by IBM/Red Hat brass as the top replacement for CentOS, the script can probably be slightly enhanced.
The script works well for simple systems, but it still has some sharp edges, and checks for common bottlenecks should be added. For example, free space in /boot should be checked if it is a separate filesystem; currently it is not. Also, if the script is invoked a second time after a failure of the step "Installing base packages for Oracle Linux...", it can remove hundreds of system RPMs (including sshd, cron, and several other vital packages ;-).
Failures at this step are probably the most common type of failure in conversion, and inexperienced sysadmins, or even experienced sysadmins in a hurry, often make this blunder of running the script a second time.
This probably happens due to the presence of the line 'yum remove -y "${new_releases[@]}"' in the function remove_repos: in their excessive zeal to restore the system after an error, the programmers did not realize that in certain situations the packages they want to delete via yum have dependencies, and a lot of them (line 65 in the current version of the script); yum blindly deletes over 300 packages, including such vital ones as sshd and cron. Because of this, execution of the script should probably be blocked if Oracle repositories are already present. This check is absent.
After this "mass extinction of RPM packages" event, you need to be pretty well versed in yum to recover. The names of the deleted packages are in the yum log, so you can reinstall them, and sometimes that helps. In other cases the system remains unbootable, and restoring from backup is the only option.
Due to the sudden surge of popularity of Oracle Linux after the Red Hat CentOS 8 fiasco, the script would definitely benefit from better diagnostics; the current diagnostics are very rudimentary. It might also make sense to make the steps modular in the classic /etc/init.d fashion and make the initial steps skippable, so that the script can be resumed after an error. Most of the steps have few dependencies, which can be resolved by saving variables during the first run and sourcing them if the first step is not step 1.
Also, it makes sense to check the amount of free space in the /boot filesystem if /boot is separate: the script requires approximately 100MB of free space there. Failure to write a new kernel due to lack of free space leads to a "half-baked" installation, which is difficult to recover from without senior sysadmin skills.
See additional considerations on how to enhance the script at http://www.softpanorama.org/Commercial_linuxes/Oracle_linux/conversion_of_centos_to_oracle_linux.shtml
Dec 15, 2020 Simon Coter Blog

... ... ...

We published a blog post earlier this week that explains why, but here is the TL;DR version:

For these reasons, we created a simple script to allow users to switch from CentOS to Oracle Linux about five years ago. This week, we moved the script to GitHub to allow members of the CentOS community to help us improve and extend the script to cover more CentOS respins and use cases.

The script can switch CentOS Linux 6, 7 or 8 to the equivalent version of Oracle Linux. Let's take a look at just how simple the process is.

Download the centos2ol.sh script from GitHub

The simplest way to get the script is to use curl :

$ curl -O https://raw.githubusercontent.com/oracle/centos2ol/main/centos2ol.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 10747 100 10747 0 0 31241 0 --:--:-- --:--:-- --:--:-- 31241

If you have git installed, you could clone the git repository from GitHub instead.

Run the centos2ol.sh script to switch to Oracle Linux

To switch to Oracle Linux, just run the script as root using sudo :

$ sudo bash centos2ol.sh

Sample output of the script run.

As part of the process, the default kernel is switched to the latest release of Oracle's Unbreakable Enterprise Kernel (UEK) to enable extensive performance and scalability improvements to the process scheduler, memory management, file systems, and the networking stack. We also replace the existing CentOS kernel with the equivalent Red Hat Compatible Kernel (RHCK), which may be required by specific hardware or applications that impose strict kernel version restrictions.

Switching the default kernel (optional)

Once the switch is complete, but before rebooting, the default kernel can be changed back to the RHCK. First, use grubby to list all installed kernels:

[demo@c8switch ~]$ sudo grubby --info=ALL | grep ^kernel
[sudo] password for demo:
kernel="/boot/vmlinuz-5.4.17-2036.101.2.el8uek.x86_64"
kernel="/boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64"
kernel="/boot/vmlinuz-4.18.0-193.el8.x86_64"
kernel="/boot/vmlinuz-0-rescue-0dbb9b2f3c2744779c72a28071755366"

In the output above, the first entry (index 0) is UEK R6, based on the mainline kernel version 5.4. The second kernel is the updated RHCK (Red Hat Compatible Kernel) installed by the switch process, the third one is the kernel that was installed by CentOS, and the final entry is the rescue kernel.

Next, use grubby to verify that UEK is currently the default boot option:

[demo@c8switch ~]$ sudo grubby --default-kernel
/boot/vmlinuz-5.4.17-2036.101.2.el8uek.x86_64

To replace the default kernel, you need to specify either the path to its vmlinuz file or its index. Use grubby to get that information for the replacement:

[demo@c8switch ~]$ sudo grubby --info /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
index=1
kernel="/boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64"
args="ro crashkernel=auto resume=/dev/mapper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet $tuned_params"
root="/dev/mapper/cl-root"
initrd="/boot/initramfs-4.18.0-240.1.1.el8_3.x86_64.img $tuned_initrd"
title="Oracle Linux Server (4.18.0-240.1.1.el8_3.x86_64) 8.3"
id="0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64"

Finally, use grubby to change the default kernel, either by providing the vmlinuz path:

[demo@c8switch ~]$ sudo grubby --set-default /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
The default is /boot/loader/entries/0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64

Or its index:

[demo@c8switch ~]$ sudo grubby --set-default-index 1
The default is /boot/loader/entries/0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64

Changing the default kernel can be done at any time, so we encourage you to take UEK for a spin before switching back.
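Should you later want UEK back as the default, the same grubby interface works in reverse. Going by the listing above, UEK sits at index 0 on this machine (indexes can differ, so check with --info=ALL on yours first):

[demo@c8switch ~]$ sudo grubby --set-default-index 0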

It's easy to access; try it out.

For more information visit oracle.com/linux.

[Dec 30, 2020] HPE ClearOS

Dec 30, 2020 | arstechnica.com

The last of the RHEL downstreams up for discussion today is Hewlett-Packard Enterprise's in-house distro, ClearOS. Hewlett-Packard makes ClearOS available as a pre-installed option on its ProLiant server line, and the company offers a free Community version to all comers.

ClearOS is an open source software platform that leverages the open source model to deliver a simplified, low cost hybrid IT experience for SMBs. The value of ClearOS is the integration of free open source technologies making it easier to use. By not charging for open source, ClearOS focuses on the value SMBs gain from the integration so SMBs only pay for the products and services they need and value.

ClearOS is mostly notable here for its association with industry giant HPE and its availability as an OEM distro on ProLiant servers. It seems to be a bit behind the times -- the most recent version is ClearOS 7.x, which is in turn based on RHEL 7. In addition to being a bit outdated compared with other options, it also appears to be a rolling release -- more comparable to CentOS Stream than to the CentOS Linux that came before it.

ClearOS is probably most interesting to small business types who might consider buying ProLiant servers with RHEL-compatible OEM Linux pre-installed later.

[Dec 30, 2020] Where do I go now that CentOS Linux is gone- Check our list - Ars Technica

Dec 30, 2020 | arstechnica.com

Springdale Linux

I've seen a lot of folks mistakenly recommending the deceased Scientific Linux distro as a CentOS replacement -- that won't work, because Scientific Linux itself was deprecated in favor of CentOS. However, Springdale Linux is very similar -- like Scientific Linux, it's a RHEL rebuild distro made by and for the academic scientific community. Unlike Scientific Linux, it's still actively maintained!

Springdale Linux is maintained and made available by Princeton and Rutgers universities, who use it for their HPC projects. It has been around for quite a long time. One Springdale Linux user from Carnegie Mellon describes their own experience with Springdale (formerly PUIAS -- Princeton University Institute for Advanced Study) as a 10-year ride.

Theresa Arzadon-Labajo, one of Springdale Linux's maintainers, gave a pretty good seat-of-the-pants overview in a recent mailing list discussion:

The School of Mathematics at the Institute for Advanced Study has been using Springdale (formerly PUIAS, then PU_IAS) since its inception. All of our *nix servers and workstations (yes, workstations) are running Springdale. On the server side, everything "just works", as is expected from a RHEL clone. On the workstation side, most of the issues we run into have to do with NVIDIA drivers, and glibc compatibility issues (e.g Chrome, Dropbox, Skype, etc), but most issues have been resolved or have a workaround in place.

... Springdale is a community project, and [it] mostly comes down to the hours (mostly Josko) that we can volunteer to the project. The way people utilize Springdale varies. Some are like us and use the whole thing. Others use a different OS and use Springdale just for its computational repositories.

Springdale Linux should be a natural fit for universities and scientists looking for a CentOS replacement. It will likely work for most anyone who needs it -- but its relatively small community and firm roots in academia will probably make it the most comfortable for those with similar needs and environments.

[Dec 30, 2020] GhostBSD and a few others are spearheading a charge into the face of The Enemy, making BSD palatable for those of us steeped in Linux as the only alternative to we know who.

Dec 30, 2020 | distrowatch.com

64"best idea" ... (by Otis on 2020-12-25 19:38:01 GMT from United States)
@62 dang it BSD takes care of all that anxiety about systemd and the other bloaty-with-time worries as far as I can tell. GhostBSD and a few others are spearheading a charge into the face of The Enemy, making BSD palatable for those of us steeped in Linux as the only alternative to we know who.

[Dec 30, 2020] Scientific Linux website states that they are going to reconsider (in 1st quarter of 2021) whether they will produce a clone of rhel version 8. Previously, they stated that they would not.

Dec 30, 2020 | distrowatch.com

Centos (by David on 2020-12-22 04:29:46 GMT from United States)
I was using Centos 8.2 on an older, desktop home computer. When Centos dropped long term support on version 8, I was a little peeved, but not a whole lot, since it is free, anyway. Out of curiosity I installed Scientific Linux 7.9 on the same computer, and it works better than Centos 8. Then I tried installing SL 7.9 on my old laptop -- it even worked on that!

Previously, when I had tried to install Centos 8 on the laptop, an old Dell Inspiron 1501, the graphics were garbage -- the screen displayed kind of a color mosaic -- and the keyboard/everything else was locked up. I also tried Centos 7.9 on it and installation from the minimal DVD produced a bunch of errors and then froze part way through.

I will stick with Scientific Linux 7 for now. In 2024 I will worry about which distro to migrate to. Note: the Scientific Linux website states that they are going to reconsider (in the 1st quarter of 2021) whether they will produce a clone of RHEL version 8. Previously, they stated that they would not.

[Dec 30, 2020] Springdale vs. CentOS

Dec 30, 2020 | distrowatch.com

Springdale vs. CentOS (by whoKnows on 2020-12-23 05:39:01 GMT from Switzerland)

@51 • Personal opinion only. (by R. Cain)

"Personal opinion only. [...] After all the years of using Linux, and experiencing first-hand the hobby mentality that has taken over [...], I prefer to use a distribution which has all the earmarks of [...] being developed AND MAINTAINED by a professional organization."

Yeah, your answer is exactly what I expected it to be.

The thing with Springdale is as follows: it's maintained by the very professional team of IT specialists at the Institute for Advanced Study (Princeton University) for their own needs. That's why there's no fancy website, RHEL Wiki, live ISOs and such.

They also maintain several other repositories for add-on packages (computing, unsupported [with audio/video codecs] ...).

In other words, if you're a professional who needs an RHEL clone, you'll be fine with it; if you're a hobbyist who needs a how-to on everything and anything, you can still use the knowledge base of RHEL/CentOS/Oracle ...

If you're a small business that needs professional support, you'd get RHEL - unlike CentOS, Springdale is not a commercial distribution selling you support and schooling. Springdale is made by professionals, for professionals.

https://www.ias.edu/math/computing/Springdale-Linux
https://researchcomputing.princeton.edu/faq/what-is-a-cluster

[Dec 29, 2020] Migrating from CentOS to Oracle Linux: a short hands-on report (Le blog technique de Microlinux)

Highly recommended!
Google translation
Notable quotes:
"... Free to use, free to download, free to update. Always ..."
"... Unbreakable Enterprise Kernel ..."
"... (What You Get Is What You Get ..."
Dec 30, 2020 | blog.microlinux.fr

In 2010 I had the opportunity to get my hands dirty with Oracle Linux during an installation and training mission carried out on behalf of ASF (Autoroutes du Sud de la France), which is now called Vinci Autoroutes. I had just published Linux aux petits oignons with Eyrolles, and since the CentOS 5.3 distribution on which it was based looked 99% like Oracle Linux 5.3 under the hood, I had been chosen by ASF to train their future Linux administrators.

All these years, I knew that Oracle Linux existed, as did a whole series of other Red Hat clones like CentOS, Scientific Linux, White Box Enterprise Linux, Princeton University's PUIAS project, etc. I didn't pay it any more attention, since CentOS perfectly met all my server needs.

Following the disastrous announcement of the CentOS project, I had a discussion with my compatriot Michael Kofler, a Linux guru who has published a series of excellent books on our favorite operating system, and who has migrated from CentOS to Oracle Linux for the Linux administration courses he teaches at the University of Graz. This was not our first discussion on the subject, as the CentOS project had already accumulated a series of rather worrying delays in version 8 updates. In comparison, Oracle Linux does not suffer from these structural problems, so I kept this option in a corner of my head.

A problematic reputation

Oracle suffers from a problematic reputation within the free software community, for a variety of reasons. It is the company that ruined OpenOffice and Java, got its hooks into MySQL and let Solaris sink. Oracle CEO Larry Ellison made headlines for his unhinged support of Donald Trump. As for the company's commercial policy, it has been marked by notorious aggressiveness in the hunt for patents.

On the other hand, we have free and open source apps like VirtualBox, which run perfectly on millions of developer workstations all over the world. And then there is the very discreet Oracle Linux, which has worked perfectly and without making any noise since 2006, and which is also a free and open source operating system.

Install Oracle Linux

For a first test, I installed Oracle Linux 7.9 and 8.3 in two virtual machines on my workstation. Since it is a Red Hat Enterprise Linux-compliant clone, the installation procedure is identical to that of RHEL and CentOS, with a few small details.

Oracle Linux Installation

Info: Normally, I never pay attention to the banner ads that scroll through graphical installers. This time, though, the slogan "Free to use, free to download, free to update. Always" caught my attention.

An indestructible kernel?

Oracle Linux provides its own Linux kernel, newer than the one provided by Red Hat, named the Unbreakable Enterprise Kernel (UEK). This kernel is installed by default and replaces the older kernels provided upstream for versions 7 and 8. Here's what it looks like on Oracle Linux 7.9.

$ uname -a
Linux oracle-el7 5.4.17-2036.100.6.1.el7uek.x86_64 #2 SMP Thu Oct 29 17:04:48 
PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
Well-organized package repositories

At first glance, the organization of the official and semi-official package repositories seems much clearer and better structured than under CentOS. For details, I refer you to the respective explanatory pages for the 7.x and 8.x versions.

Well-structured documentation

Like the repository organization, Oracle Linux's documentation is worth mentioning here, because it is simply exemplary. The main index refers to the different versions of Oracle Linux, and from there you can access a whole series of documents in HTML and PDF formats that explain in detail the peculiarities of the system and its day-to-day management. As I work through this documentation, I keep discovering a multitude of pleasant little details, such as the fact that Oracle packages display metadata for security updates, which is not the case for CentOS packages.

Migrating from CentOS to Oracle Linux

The Switch your CentOS systems to Oracle Linux web page identifies a number of reasons why Oracle Linux is a better choice than CentOS when you want a company-grade, free-as-in-beer operating system that provides low-risk updates for each version over a decade. This page also features a script, centos2ol.sh, that transforms an existing CentOS system into an Oracle Linux system on the fly with two commands.

So I tested this script on a CentOS 7 server from Online/Scaleway.

# curl -O https://linux.oracle.com/switch/centos2ol.sh
# chmod +x centos2ol.sh
# ./centos2ol.sh

The script grinds away for about twenty minutes; we restart the machine and end up with a clean Oracle Linux system. To do some cleaning, just remove the repository files saved by the conversion.

# rm -f /etc/yum.repos.d/*.repo.deactivated
Migrating a CentOS 8.x server?

At first glance, the centos2ol.sh script only supported the migration of CentOS 7.9 to Oracle Linux 7.9. On a whim, I sent an email to the address at the bottom of the page, asking if support for CentOS 8.x was expected in the near future.

A very nice exchange of emails ensued with a guy from Oracle, who patiently answered all the questions I asked him. And just twenty-four hours later, he sent me a link to an Oracle Github repository with an updated version of the script that supports the on-the-fly migration of CentOS 8.x to Oracle Linux 8.x.

So I tested it with a fresh installation of a CentOS 8 server at Online/Scaleway.

# yum install git
# git clone https://github.com/oracle/centos2ol.git
# cd centos2ol/
# chmod +x centos2ol.sh
# ./centos2ol.sh

Again, it grinds away for a good twenty minutes, and after the reboot we end up with a public machine running Oracle Linux 8.

Conclusion

I will probably have a lot more to say about this. For my part, I find this first experience with Oracle Linux rather conclusive, and if I decided to share it here, it is because it will probably solve a problem common to a lot of production server admins who cannot accept their system becoming a moving target overnight.

Post scriptum for the wary purists

Finally, for all of you who want to use a free and open source clone of Red Hat Enterprise Linux without selling your soul to the devil, know that Springdale Linux is a solid alternative. It is maintained by Princeton University in the United States according to the WYGIWYG principle (What You Get Is What You Get): it is provided rough and ready, without any documentation, but it works just as well.



[Dec 29, 2020] Oracle Linux is "CentOS done right"

Notable quotes:
"... If you want a free-as-in-beer RHEL clone, you have two options: Oracle Linux or Springdale/PUIAS. My company's currently moving its servers to OL, which is "CentOS done right". Here's a blog article about the subject: ..."
"... Each version of OL is supported for a 10-year cycle. Ubuntu has five years of support. And Debian's support cycle (one year after subsequent release) is unusable for production servers. ..."
"... [Red Hat looks like ]... of a cartoon character sawing off the tree branch they are sitting on." ..."
Dec 21, 2020 | distrowatch.com

Microlinux

And what about Oracle Linux? (by Microlinux on 2020-12-21 08:11:33 GMT from France)

If you want a free-as-in-beer RHEL clone, you have two options: Oracle Linux or Springdale/PUIAS. My company's currently moving its servers to OL, which is "CentOS done right". Here's a blog article about the subject:

https://blog.microlinux.fr/migration-centos-oracle-linux/

Currently Rocky Linux is not much more than a README file on Github and a handful of Slack (ew!) discussion channels.

Each version of OL is supported for a 10-year cycle. Ubuntu has five years of support. And Debian's support cycle (one year after subsequent release) is unusable for production servers.

dragonmouth

@Jesse on CentOS: (by dragonmouth on 2020-12-21 13:11:04 GMT from United States)

"There is no rush and I recommend waiting a bit for the dust to settle on the situation before leaping to an alternative. "

For private users there may be plenty of time to find an alternative. However, corporate IT departments are not like jet skis able to turn on a dime. They are more like supertankers or aircraft carriers that take miles to make a turn. By the time all the committees meet and come to some decision, by the time all the upper managers who don't know what the heck they are talking about expound their opinions and by the time the CentOS replacement is deployed, a year will be gone. For corporations, maybe it is not a time to PANIC, yet, but it is high time to start looking for the O/S that will replace CentOS.

Ricardo

"This looks like the vendor equivalent..." (by Ricardo on 2020-12-21 18:06:49 GMT from Argentina)

"[Red Hat looks like] ... a cartoon character sawing off the tree branch they are sitting on."

Jesse, I couldn't have articulated it better. I'm stealing that phrase :)

Cheers and happy holidays to everyone!

[Dec 28, 2020] Time to move to Oracle Linux

Dec 28, 2020 | www.cyberciti.biz
Kyle Dec 9, 2020 @ 2:13

It's an ibm money grab. It's a shame, I use centos to develop and host web applications on my linode. Obviously a small timer like that I can't afford red hat, but use it at work. Centos allowed me to come home and take skills and dev on my free time and apply it to work.

I also use Ubuntu, but it looks like the shift will be greater to Ubuntu.

Noname Dec 9, 2020 @ 4:20

As others said here, this is a money grab. Methinks IBM was the worst thing that happened to Linux since systemd...

Yui Dec 9, 2020 @ 4:49

Hello CentOS users,

I also work for a non-profit (cancer and other research) and use CentOS for HPC. We chose CentOS over Debian due to the 10-year support cycle, and CentOS goes well with an HPC cluster. We also wanted every single penny to go to research purposes and not waste our donations and grants on software costs. What are my CentOS alternatives for HPC? Thanks in advance for any help you are able to provide.

Holmes Dec 9, 2020 @ 5:06

Folks who rely on CentOS saw this coming when Red Hat bought them 6 years ago. Last year IBM bought Red Hat. Now, IBM+Red Hat found a way to kill the stable releases in order to get people signing up for RHEL subscriptions. Doesn't that sound exactly like the "EEE" (embrace, extend, and exterminate) model?

Petr Dec 9, 2020 @ 5:08

For me it's simple.
I will keep my openSUSE Leap and expand its footprint until another RHEL-compatible distro is out. If I need a RHEL-compatible distro for testing until then, I will use Oracle with the RHEL kernel.
OpenSUSE is the closest to RHEL in terms of stability (if not better) and I am very used to it. Time to get some SLES certifications as well.

Someone Dec 9, 2020 @ 5:23

While I like Debian, and better still Devuan (systemd ), some RHEL/CentOS features like kickstart and delta RPMs don't seem to be there (or as good). Debian preseeding is much more convoluted than kickstart, for example.

Vonskippy Dec 10, 2020 @ 1:24

That's ok. For us, we left RHEL (and the CentOS testing cluster) when the satan spawn known as SystemD became the standard. We're now a happy and successful FreeBSD shop.

[Dec 28, 2020] This quick and dirty hack worked fine to convert centos 8 to oracle linux 8

Notable quotes:
"... this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv: ..."
Dec 28, 2020 | blog.centos.org

Phil says: December 9, 2020 at 2:10 pm

this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv:

repobase=http://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage
wget \
${repobase}/redhat-release-8.3-1.0.0.1.el8.x86_64.rpm \
${repobase}/oracle-release-el8-1.0-1.el8.x86_64.rpm \
${repobase}/oraclelinux-release-8.3-1.0.4.el8.x86_64.rpm \
${repobase}/oraclelinux-release-el8-1.0-9.el8.x86_64.rpm
rpm -e centos-linux-release --nodeps
dnf --disablerepo='*' localinstall ./*rpm 
:> /etc/dnf/vars/ociregion
dnf remove centos-linux-repos
dnf --refresh distro-sync
# since I wanted to try out the unbreakable enterprise kernel:
dnf install kernel-uek
reboot
dnf remove kernel

[Dec 28, 2020] Red Hat interpretation of the CentOS 8 fiasco

Highly recommended!
" People are complaining because you are suddenly killing CentOS 8 which has been released last year with the promise of binary compatibility to RHEL 8 and security updates until 2029."
One of immanent features of GPL is that it allow clones to exist. Which means the Oracle Linix, or Rocky Linux, or Lenin Linux will simply take CentOS place and Red hat will be in disadvantage, now unable to control the clone to the extent they managed to co-opt and control CentOS. "Embrace and extinguish" change i now will hand on Red Hat and probably will continue to hand for years from now. That may not be what Redhat brass wanted: reputational damage with zero of narrative effect on the revenue stream. I suppose the majority of CentOS community will finally migrate to emerging RHEL clones. If that was the Red Hat / IBM goal - well, they will reach it.
Notable quotes:
"... availability gap ..."
"... Another long-winded post that doesn't address the single, core issue that no one will speak to directly: why can't CentOS Stream and CentOS _both_ exist? Because in absence of any official response from Red Hat, the assumption is obvious: to drive RHEL sales. If that's the reason, then say it. Stop being cowards about it. ..."
"... We might be better off if Red Hat hadn't gotten involved in CentOS in the first place and left it an independent project. THEY choose to pursue this path and THEY chose to renege on assurances made around the non-stream distro. Now they're going to choose to deal with whatever consequences come from the loss of goodwill in the community. ..."
"... If the problem was in money, all RH needed to do was to ask the community. You would have been amazed at the output. ..."
"... You've alienated a few hunderd thousand sysadmins that started upgrading to 8 this year and you've thrown the scientific Linux community under a bus. You do realize Scientific Linux was discontinued because CERN and FermiLab decided to standardize on CentOS 8? This trickled down to a load of labs and research institutions. ..."
"... Nobody forced you to buy out CentOS or offer a gratis distribution. But everybody expected you to stick to the EOL dates you committed to. You boast about being the "Enterprise" Linux distributor. Then, don't act like a freaking start-up that announces stuff today and vanishes a year later. ..."
"... They should have announced this at the START of CentOS 8.0. Instead they started CentOS 8 with the belief it was going to be like CentOS7 have a long supported life cycle. ..."
"... IBM/RH/CentOS keeps replaying the same talking points over and over and ignoring the actual issues people have ..."
"... What a piece of stinking BS. What is this "gap" you're talking about? Nobody in the CentOS community cares about this pre-RHEL gap. You're trying to fix something that isn't broken. And doing that the most horrible and bizzarre way imaginable. ..."
"... As I understand it, Fedora - RHEL - CENTOS just becomes Fedora - Centos Stream - RHEL. Why just call them RH-Alpha, RH-Beta, RH? ..."
Dec 28, 2020 | blog.centos.org

Let's go back to 2003 where Red Hat saw the opportunity to make a fundamental change to become an enterprise software company with an open source development methodology.

To do so Red Hat made a hard decision and in 2003 split Red Hat Linux into Red Hat Enterprise Linux (RHEL) and Fedora Linux. RHEL was the occasional snapshot of Fedora Linux that was a product -- slowed, stabilized, and paced for production. Fedora Linux and the Project around it were the open source community for innovating -- speedier, prone to change, and paced for exploration. This solved the problem of trying to hold to two, incompatible core values (fast/slow) in a single project. After that, each distribution flourished within its intended audiences.

But that split left two important gaps. On the project/community side, people still wanted an OS that strived to be slower-moving, stable-enough, and free of cost -- an availability gap . On the product/customer side, there was an openness gap -- RHEL users (and consequently all rebuild users) couldn't contribute easily to RHEL. The rebuilds arose and addressed the availability gap, but they were closed to contributions to the core Linux distro itself.

In 2012, Red Hat's move toward offering products beyond the operating system resulted in a need for an easy-to-access platform for open source development of the upstream projects -- such as Gluster, oVirt, and RDO -- that these products are derived from. At that time, the pace of innovation in Fedora made it not an easy platform to work with; for example, the pace of kernel updates in Fedora led to breakage in these layered projects.

We formed a team I led at Red Hat to go about solving this problem, and, after approaching and discussing it with the CentOS Project core team, Red Hat and the CentOS Project agreed to " join forces ." We said joining forces because there was no company to acquire, so we hired members of the core team and began expanding CentOS beyond being just a rebuild project. That included investing in the infrastructure and protecting the brand. The goal was to evolve into a project that also enabled things to be built on top of it, and a project that would be exponentially more open to contribution than ever before -- a partial solution to the openness gap.

Bringing home the CentOS Linux users, folks who were stuck in that availability gap, closer into the Red Hat family was a wonderful side effect of this plan. My experience going from participant to active open source contributor began in 2003, after the birth of the Fedora Project. At that time, as a highly empathetic person I found it challenging to handle the ongoing emotional waves from the Red Hat Linux split. Many of my long time community friends themselves were affected. As a company, we didn't know if RHEL or Fedora Linux were going to work out. We had made a hard decision and were navigating the waters from the aftershock. Since then we've all learned a lot, including the more difficult dynamics of an open source development methodology. So to me, bringing the CentOS and other rebuild communities into an actual relationship with Red Hat again was wonderful to see, experience, and help bring about.

Over the past six years since finally joining forces, we made good progress on those goals. We started Special Interest Groups (SIGs) to manage the layered project experience, such as the Storage SIG, Virt Sig, and Cloud SIG. We created a governance structure where there hadn't been one before. We brought RHEL source code to be housed at git.centos.org . We designed and built out a significant public build infrastructure and CI/CD system in a project that had previously been sealed-boxes all the way down.


cmdrlinux says: December 19, 2020 at 2:36 pm

"This brings us to today and the current chapter we are living in right now. The move to shift focus of the project to CentOS Stream is about filling that openness gap in some key ways. Essentially, Red Hat is filling the development and contribution gap that exists between Fedora and RHEL by shifting the place of CentOS from just downstream of RHEL to just upstream of RHEL."

Another long-winded post that doesn't address the single, core issue that no one will speak to directly: why can't CentOS Stream and CentOS _both_ exist? Because in absence of any official response from Red Hat, the assumption is obvious: to drive RHEL sales. If that's the reason, then say it. Stop being cowards about it.

Mark Danon says: December 19, 2020 at 4:14 pm

Redhat has no obligation to maintain both CentOS 8 and CentOS stream. Heck, they have no obligation to maintain CentOS either. Maintaining both will only increase the workload of CentOS maintainers. I don't suppose you are volunteering to help them do the work? Be thankful for a distribution that you have been using so far, and move on.

Dave says: December 20, 2020 at 7:16 am

We might be better off if Red Hat hadn't gotten involved in CentOS in the first place and left it an independent project. THEY choose to pursue this path and THEY chose to renege on assurances made around the non-stream distro. Now they're going to choose to deal with whatever consequences come from the loss of goodwill in the community.

If they were going to pull this stunt they shouldn't have gone ahead with CentOS 8 at all and fulfilled any lifecycle expectations for CentOS 7.

Konstantin says: December 21, 2020 at 12:24 am

Sorry, but that's a BS. CentOS Stream and CentOS Linux are not mutually replaceable. You cannot sell that BS to any people actually knowing the intrinsics of how CentOS Linux was being developed.

If the problem was in money, all RH needed to do was to ask the community. You would have been amazed at the output.

No, it is just a primitive, direct and lame way to either force "free users" to either pay, or become your free-to-use beta testers (CentOS Stream *is* beta, whatever you say).

I predict you will be somewhat amazed at the actual results.

Not talking about the breach of trust. Now how much would cost all your (RH's) further promises and assurances?

Chris Mair says: December 20, 2020 at 3:21 pm

To: [email protected]
To: [email protected]

Hi,

Re: https://blog.centos.org/2020/12/balancing-the-needs-around-the-centos-platform/

you can spin this to the moon and back. The fact remains you just killed CentOS Linux and your users' trust by moving the EOL of CentOS Linux 8 from 2029 to 2021.

You've alienated a few hunderd thousand sysadmins that started upgrading to 8 this year and you've thrown the scientific Linux community under a bus. You do realize Scientific Linux was discontinued because CERN and FermiLab decided to standardize on CentOS 8? This trickled down to a load of labs and research institutions.

Nobody forced you to buy out CentOS or offer a gratis distribution. But everybody expected you to stick to the EOL dates you committed to. You boast about being the "Enterprise" Linux distributor. Then, don't act like a freaking start-up that announces stuff today and vanishes a year later.

The correct way to handle this would have been to kill the future CentOS 9, giving everybody the time to cope with the changes.

I've earned my RHCE in 2003 (yes that's seventeen years ago). Since then, many times, I've recommended RHEL or CentOS to the clients I do free lance work for. Just a few weeks ago I was asked to give an opinion on six CentOS 7 boxes about to be deployed into a research system to be upgraded to 8. I gave my go. Well, that didn't last long.

What do you expect me to recommend now? Buying RHEL licenses? That may or may be not have a certain cost per year and may or may be not supported until a given date? Once you grant yourself the freedom to retract whatever published information, how can I trust you? What added values do I get over any of the community supported distributions (given I can support myself)?

And no, CentOS Stream cannot "cover 95% (or so) of current user workloads". Stream was introduced as "a rolling preview of what's next in RHEL".

I'm not interested at all in a "a rolling preview of what's next in RHEL". I'm interested in a stable distribution I can trust to get updates until the given EOL date.

You've made me look elsewhere for that.

-- Chris

Chip says: December 20, 2020 at 6:16 pm

I guess my biggest issue is They should have announced this at the START of CentOS 8.0. Instead they started CentOS 8 with the belief it was going to be like CentOS7 have a long supported life cycle. What they did was basically bait and switch. Not cool. Especially not cool for those running multiple nodes on high performance computing clusters.

Alex says: December 21, 2020 at 12:51 am

I have over 300,000 Centos nodes that require Long term support as it's impossible to turn them over rapidly. I also have 154,000 RHEL nodes. I now have to migrate 454,000 nodes over to Ubuntu because Redhat just made the dumbest decision short of letting IBM acquire them I've seen. Whitehurst how could you let this happen? Nothing like millions in lost revenue from a single customer.

Nika jous says: December 21, 2020 at 1:43 pm

Just migrated to openSUSE. Rather than crying over a dead OS, it's better to act yourself. Red Hat is a sinking ship; it probably won't last the next decade. Legendary failures like IBM never get the upper hand in the Linux world. It's too competitive now. Customers have more options to choose from. I think the person who made this decision is probably ignorant of the current market, or a top-grade fool.

Ang says: December 22, 2020 at 2:36 am

IBM/RH/CentOS keeps replaying the same talking points over and over and ignoring the actual issues people have. You say you are reading them, but choose to ignore it and that is even worse!

People still don't understand why CentOS stream and CentOS can't co-exist. If your goal was not to support CentOS 8, why did you put 2029 date or why did you even release CentOS 8 in the first place?

Hell, you could have at least had the goodwill with the community to make CentOS 8 last until end of CentOS 7! But no, you discontinued CentOS 8 giving people only 1 year to respond, and timed it right after EOL of CentOS6.

Why didn't you even bother asking the community first and come to a compromise or something?

Again, not a single person had a problem with CentOS stream, the problem was having the rug pulled under their feet! So stop pretending and address it properly!

Even worse, you knew this was an issue, it's like literally #1 on your issue list "Shift Board to be more transparent in support of becoming a contributor-focused open source project"

And you FAILED! Where was the transparency?!

Ang says: December 22, 2020 at 2:36 am

A link to the issue: https://git.centos.org/centos/board/issue/1

AP says: December 22, 2020 at 6:55 am

What a piece of stinking BS. What is this "gap" you're talking about? Nobody in the CentOS community cares about this pre-RHEL gap. You're trying to fix something that isn't broken. And doing that in the most horrible and bizarre way imaginable.

Len Inkster says: December 22, 2020 at 4:13 pm

As I understand it, Fedora - RHEL - CentOS just becomes Fedora - CentOS Stream - RHEL. Why not just call them RH-Alpha, RH-Beta, RH?

Anyone who wants to continue with CentOS? Fork the project and maintain it yourselves. That's how we got to CentOS from Linus Torvalds' original Linux.

Peter says: December 22, 2020 at 5:36 pm

I can only describe this as a disappointment, if not a betrayal, of the whole CentOS user base. This decision was clearly made without considering the impact on the majority of CentOS community use cases.

If you need upstream contributions channel for RHEL, create it, do not destroy the stable downstream. Clear and simple. All other 'explanations' are cover ups for real purpose of this action.

This stinks of politics within IBM/RH meddling with CentOS. I hope Rocky will bring the desired stability that the community was relying on with CentOS.

Goodbye CentOS, it was nice 15 years.

Ken Sanderson says: December 23, 2020 at 1:57 pm

We've just agreed to cancel our RHEL subscriptions and will be moving them and our Centos boxes away as well. It was a nice run, but while it will be painful, it is a chance to move far, far away from the terrible decisions made here.

[Dec 28, 2020] Red Hat Goes Full IBM and Says Farewell to CentOS - ServeTheHome

Dec 28, 2020 | www.servethehome.com

The intellectually easy answer to what is happening is that IBM is putting pressure on Red Hat to hit bigger numbers in the future. Red Hat sees a captive audience in its CentOS userbase and is looking to convert a percentage to paying customers. Everyone else can go to Ubuntu or elsewhere if they do not want to pay...

[Dec 28, 2020] Call our sales people and open your wallet if you use CentOS in prod

Dec 28, 2020 | freedomben.medium.com

It seemed obvious (via Occam's Razor) that CentOS had cannibalized RHEL sales for the last time and was being put out to die. Statements like:

If you are using CentOS Linux 8 in a production environment, and are
concerned that CentOS Stream will not meet your needs, we encourage you
to contact Red Hat about options.

That line sure seemed like horrific marketing speak for "call our sales people and open your wallet if you use CentOS in prod" (cue evil mustache-stroking capitalist villain).

... CentOS will no longer be downstream of RHEL as it was previously. CentOS will now be upstream of the next RHEL minor release.

... ... ...

I'm watching Rocky Linux closely myself. While I plan to use CentOS for the vast majority of my needs, Rocky Linux may have a place in my life as well, for example powering my home router. Generally speaking, I want my router to be as boring as absolutely possible. That said, even that may not stay true forever if, for example, CentOS gets good WireGuard support.

Lastly, but certainly not least, Red Hat has talked about upcoming low/no-cost RHEL options. Keep an eye out for those! I have no idea the details, but if you currently use CentOS for personal use, I am optimistic that there may be a way to get RHEL for free coming soon. Again, this is just my speculation (I have zero knowledge of this beyond what has been shared publicly), but I'm personally excited.

[Dec 27, 2020] Why Red Hat dumped CentOS for CentOS Stream by Steven J. Vaughan-Nichols

Red Hat always had an uneasy relationship with CentOS. Red Hat brass always viewed it as something that steals Red Hat licenses, so this "stop the steal" move might not be IBM-inspired, but it is firmly in the IBM tradition. And like many similar IBM moves, it will backfire.
The hiring of CentOS developers in 2014 gave them unprecedented control over the project. Why on Earth would they now want independent projects like Rocky Linux to re-emerge and fill the vacuum? They can't avoid this side effect of using the GPL -- it allows clones. Why it is better to face a project that is hostile to Red Hat than to keep an "in-house" domesticated project is unclear to me. As many large enterprises deploy a mix of Red Hat and CentOS, the initial reaction might go in the opposite direction to what the Red Hat brass expected: they will get fewer licenses, not more, by adopting the "one IBM way".
Dec 21, 2020 | www.zdnet.com

On Hacker News, the leading comment was: "Imagine if you were running a business, and deployed CentOS 8 based on the 10-year lifespan promise. You're totally screwed now, and Red Hat knows it. Why on earth didn't they make this switch starting with CentOS 9???? Let's not sugar coat this. They've betrayed us."

Over at Reddit/Linux, another person snarled, "We based our Open Source project on the latest CentOS releases since CentOS 4. Our flagship product is running on CentOS 8 and we *sure* did bet the farm on the promised EOL of 31st May 2029."

A popular tweet from The Best Linux Blog In the Unixverse, nixcraft, an account with over 200-thousand subscribers, went: "Oracle buys Sun: Solaris Unix, Sun servers/workstation, and MySQL went to /dev/null. IBM buys Red Hat: CentOS is going to >/dev/null. Note to self: If a big vendor such as Oracle, IBM, MS, and others buys your fav software, start the migration procedure ASAP."

Many others joined in this choir of annoyed CentOS users that it was IBM's fault that their favorite Linux was being taken away from them. Still, others screamed Red Hat was betraying open-source itself.

... ... ...

Still another ex-Red Hat official said: If it wasn't for CentOS, Red Hat would have been a 10-billion dollar company before Red Hat became a billion-dollar business.

... ... ...

[Dec 27, 2020] There are now countless Internet servers out there that run CentOS. This is why the Debian project is so important.

Dec 27, 2020 | freedomben.medium.com

There are companies that sell appliances based on CentOS. Websense/Forcepoint is one of them. The Websense appliance runs the base OS of CentOS, on top of which runs their Web-filtering application. Same with RSA. Their NetWitness SIEM runs on top of CentOS.

Likewise, there are now countless Internet servers out there that run CentOS. There's now a huge user base of CentOS out there.

This is why the Debian project is so important. I will be converting everything that is currently CentOS to Debian. Those who want to use the Ubuntu fork of Debian, that is also probably a good idea.

[Dec 23, 2020] Red Hat and GPL: the uneasy romance ended long ago, but Red Hat still depends on the GPL, as it does not develop many components and gets them for free from the community and other vendors

It is all about money and about executive bonuses: shortsighted executives want more and more money, as if the current huge revenue is not enough...
Dec 23, 2020 | www.zdnet.com

A former Red Hat executive confided, "CentOS was gutting sales. The customer perception was 'it's from Red Hat and it's a clone of RHEL, so it's good to go!' It's not. It's a second-rate copy." From where this person sits, "This is 100% defensive to stave off more losses to CentOS."

Still another ex-Red Hat official said: If it wasn't for CentOS, Red Hat would have been a 10-billion dollar company before Red Hat became a billion-dollar business.

Yet another Red Hat staffer snapped, "Look at the CentOS FAQ. It says right there:

CentOS Linux is NOT supported in any way by Red Hat, Inc.

CentOS Linux is NOT Red Hat Linux, it is NOT Fedora Linux. It is NOT Red Hat Enterprise Linux. It is NOT RHEL. CentOS Linux does NOT contain Red Hat® Linux, Fedora, or Red Hat® Enterprise Linux.

CentOS Linux is NOT a clone of Red Hat® Enterprise Linux.

CentOS Linux is built from publicly available source code provided by Red Hat, Inc for Red Hat Enterprise Linux in a completely different (CentOS Project maintained) build system.

We don't owe you anything."

[Dec 10, 2020] Here's a hot tip for the IBM geniuses that came up with this. Rebrand CentOS as New Coke, and you've got yourself a real winner.

Dec 10, 2020 | blog.centos.org

Ward Mundy says: December 9, 2020 at 3:12 am

Happy to report that we've invested exactly one day in CentOS 7 to CentOS 8 migration. Thanks, IBM. Now we can turn our full attention to Debian and never look back.

Here's a hot tip for the IBM geniuses that came up with this. Rebrand CentOS as New Coke, and you've got yourself a real winner.

[Dec 10, 2020] Does Oracle Linux have staying power against Red Hat

Notable quotes:
"... If you need official support, Oracle support is generally cheaper than RedHat. ..."
"... You can legally run OL free and have access to patches/repositories. ..."
"... Full binary compatibility with RedHat so if anything is certified to run on RedHat, it automatically certified for Oracle Linux as well. ..."
"... Premium OL subscription includes a few nice bonuses like DTrace and Ksplice. ..."
"... Forgot to mention that converting RedHat Linux to Oracle is very straightforward - just matter of updating yum/dnf config to point it to Oracle repositories. Not sure if you can do it with CentOS (maybe possible, just never needed to convert CentOS to Oracle). ..."
Dec 10, 2020 | blog.centos.org

Matthew Stier says: December 8, 2020 at 8:11 pm

My office switched the bulk of our RHEL to OL years ago, and find it a great product, and great support, and only needing to get support for systems we actually want support on.

Oracle provided scripts to convert EL5, EL6, and EL7 systems, and I was able to convert some EL4 systems I still have running. (It's a matter of going through the list of installed packages, using 'rpm -e --justdb' to remove each package from the rpmdb, and re-installing the package (without dependencies) from the OL ISO.)
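A rough sketch of that trick might look as follows; the /mnt/ol mount point and the package loop are illustrative assumptions here, not the commenter's actual script:

# For each installed package: drop only its entry from the RPM database
# (the files stay on disk), then lay the same-named Oracle Linux package
# over it from the mounted OL ISO, skipping dependency checks.
for pkg in $(rpm -qa --qf '%{NAME}\n'); do
    rpm -e --justdb --nodeps "$pkg"
    rpm -ivh --nodeps /mnt/ol/Packages/"${pkg}"-[0-9]*.rpm
done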

art_ok:

We have been using Oracle Linux exclusively for the last 5-6 years for everything - thousands of servers, both for internal use and for a hundred or so customers.

Not a single time have we regretted it, had any issues, or been tempted to move to RedHat, let alone CentOS.

I found Oracle Linux has several advantages over RedHat/CentOS:

- If you need official support, Oracle support is generally cheaper than RedHat.
- You can legally run OL free and have access to patches/repositories.
- Full binary compatibility with RedHat, so if anything is certified to run on RedHat, it is automatically certified for Oracle Linux as well.
- It is very easy to switch between supported and free setups (say, you have a proof-of-concept setup running free OL, but then it is promoted to production status - just a matter of registering the box with Oracle, no need to reinstall/reconfigure anything).
- You can easily move licenses/support from one box to another, so you always run the same OS and do not have to think and decide (RedHat for production / CentOS for dev/test).
- You have a choice to run the good old RedHat kernel or use the newer Oracle kernel (which is pretty much a vanilla kernel with minimal modifications - just newer). We generally run Oracle kernels on all boxes unless we have to support a particularly pedantic customer who insists on using the old RedHat kernel.
- Premium OL subscription includes a few nice bonuses like DTrace and Ksplice.

Overall, it is a pleasure to work with and support OL.

Negatives:

- I found the RedHat knowledge base / documentation is much better than Oracle's.
- Oracle does not offer extensive support for "advanced" products like JBoss, Directory Server, etc. Obviously Oracle has its own equivalent commercial offerings (WebLogic, etc.) and prefers customers to use them.
- Some complain about the quality of Oracle's support. Can't really comment on that; I had little exposure to RedHat support, maybe used it a couple of times, and it was good. Oracle support can be slower, but in most cases it is good/sufficient. Actually, over the last few years support quality for Linux has improved noticeably - guess Oracle pushes their cloud very aggressively and as a result invests in Linux support (as Oracle cloud aka OCI runs on Oracle Linux).
art_ok:

Forgot to mention that converting RedHat Linux to Oracle is very straightforward - just a matter of updating the yum/dnf config to point it at the Oracle repositories. Not sure if you can do it with CentOS (maybe possible, just never needed to convert CentOS to Oracle).
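For illustration, a repo stanza along these lines points dnf at Oracle's public yum server. The file name is hypothetical, and the definitive repo definitions should be taken from yum.oracle.com rather than from this sketch:

# /etc/yum.repos.d/oracle-baseos.repo (hypothetical file name)
[ol8_baseos_latest]
name=Oracle Linux 8 BaseOS Latest ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/$basearch/
gpgkey=https://yum.oracle.com/RPM-GPG-KEY-oracle-ol8
gpgcheck=1
enabled=1

With a stanza like that in place, a 'dnf --refresh distro-sync' (as in the hack quoted earlier on this page) would pull in the Oracle builds.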

[Dec 10, 2020] Backlash against Red Hat management started

In the end IBM/Red Hat might even lose money, as powerful organizations, such as universities, might abandon Red Hat as the platform. Or maybe not: Red Hat managed to push systemd down users' throats without any major hit to the revenue, so why not repeat the trick with CentOS? In any case IBM now owns enterprise Linux, and the bitter complaints and threats of retribution in this forum are just a symptom that development is now completely driven by the corporate brass, and all key decisions belong to them.
Community-wise, this is plain bad news for Open Source and all Open Source communities. IBM explained to them very clearly: you do not matter. And an organized minority always beats a disorganized majority. Actually, most large organizations will probably stick with a Red Hat-compatible OS, moving to Oracle Linux or Rocky Linux, if it materializes, rather than to Debian.
What is interesting is that most people here believe that when security patches stop, that's the end of life for that particular Linux version. It is an interesting superstition, and it shows how conditioned by corporations Linux folk are, and how far they actually are from the BSD folk. Security is an architectural thing first and foremost. Patches are just a band-aid; they can't change the general security situation in Linux no matter how hard anyone tries. But they now serve as a powerful tool of corporate mind control over the user population. Fear is a powerful instrument of mind control.
In reality, the security of most systems on an internal network does not change one bit with patches. And on an external network, only the applications with open ports matter; that's why ssh should be restricted to the subnets that actually use it, not opened to the whole world, as the sketch below illustrates.
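As a concrete illustration of that last point, a firewalld rich rule of roughly this shape limits sshd to a management subnet (10.0.42.0/24 is a made-up example; substitute your own):

# Allow ssh only from the management subnet, then drop the blanket rule.
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" service name="ssh" accept'
firewall-cmd --permanent --remove-service=ssh
firewall-cmd --reload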
Notable quotes:
"... Bad idea. The whole point of using CentOS is it's an exact binary-compatible rebuild of RHEL. With this decision RH is killing CentOS and inviting to create a new *fork* or use another distribution ..."
"... We all knew from the moment IBM bought Redhat that we were on borrowed time. IBM will do everything they can to push people to RHEL even if that includes destroying a great community project like CentOS. ..."
"... First CoreOS, now CentOS. It's about time to switch to one of the *BSDs. ..."
"... I guess that means the tens of thousands of cores of research compute I manage at a large University will be migrating to Debian. ..."
"... IBM is declining, hence they need more profit from "useless" product line. So disgusting ..."
"... An entire team worked for months on a centos8 transition at the uni I work at. I assume a small portion can be salvaged but reading this it seems most of it will simply go out the window ..."
"... Unless the community can center on a new single proper fork of RHEL, it makes the most sense (to me) to seek refuge in Debian as it is quite close to CentOS in stability terms. ..."
"... Another one bites the dust due to corporate greed, which IBM exemplifies ..."
"... More likely to drive people entirely out of the RHEL ecosystem. ..."
"... Don't trust Red Hat. 1 year ago Red Hat's CTO Chris Wright agreed in an interview: 'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."' https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/ ..."
"... 'To be exact, CentOS Stream is an upstream development platform for ecosystem developers. It will be updated several times a day. This is not a production operating system. It's purely a developer's distro.' ..."
"... Read again: CentOS Stream is not a production operating system. 'Nuff said. ..."
"... This makes my decision to go with Ansible and CentOS 8 in our enterprise simple. Nope, time to got with Puppet or Chef. ..."
"... Ironic, and it puts those of us who have recently migrated many of our development serves to CentOS8 in a really bad spot. Luckily we haven't licensed RHEL8 production servers yet -- and now that's never going to happen. ..."
"... What IBM fails to understand is that many of us who use CentOS for personal projects also work for corporations that spend millions of dollars annually on products from companies like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM. ..."
"... IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses from its customers.. ..."
"... Hoping that stabbing Open Source community in the back, will make it switch to commercial licenses is absolutely preposterous. This shows how disconnected they're from reality and consumed by greed and it will simply backfire on them, when we switch to Debian or any other LTS alternative. ..."
"... Centos was handy for education and training purposes and production when you couldn't afford the fees for "support", now it will just be a shadow of Fedora. ..."
"... There was always a conflict of interest associated with Redhat managing the Centos project and this is the end result of this conflict of interest. ..."
"... The reality is that someone will repackage Redhat and make it just like Centos. The only difference is that Redhat now live in the same camp as Oracle. ..."
"... Everyone predicted this when redhat bought centos. And when IBM bought RedHat it cemented everyone's notion. ..."
"... I am senior system admin in my organization which spends millions dollar a year on RH&IBM products. From tomorrow, I will do my best to convince management to minimize our spending on RH & IBM ..."
"... IBM are seeing every CentOS install as a missed RHEL subscription... ..."
"... Some years ago IBM bought Informix. We switched to PostgreSQL, when Informix was IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will happen with our CentOS installations. What's wrong with IBM? ..."
"... Remember when RedHat, around RH-7.x, wanted to charge for the distro, the community revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time. ..."
"... As I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life, before they become another ordinary dept of IBM, producing IBM Linux. ..."
"... Happy to donate and be part of the revolution away the Corporate vampire Squid that is IBM ..."
"... Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you *lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money relying on a promise you made, we thought, in good faith. ..."
Dec 10, 2020 | blog.centos.org

Internet User says: December 8, 2020 at 5:13 pm

This is a pretty clear indication that you people are completely out of touch with your users.

Joel B. D. says: December 8, 2020 at 5:17 pm

Bad idea. The whole point of using CentOS is that it's an exact binary-compatible rebuild of RHEL. With this decision RH is killing CentOS and inviting users to create a new *fork* or use another distribution. Do you realize how much market share you will be losing and how much chaos you will be creating with this?

"If you are using CentOS Linux 8 in a production environment, and are concerned that CentOS Stream will not meet your needs, we encourage you to contact Red Hat about options". So this is the way RH is telling us they don't want anyone to use CentOS anymore and switch to RHEL?

Michael says: December 8, 2020 at 8:31 pm

That's exactly what they're saying. We all knew from the moment IBM bought Redhat that we were on borrowed time. IBM will do everything they can to push people to RHEL even if that includes destroying a great community project like CentOS.

OS says: December 8, 2020 at 6:20 pm

First CoreOS, now CentOS. It's about time to switch to one of the *BSDs.

JD says: December 8, 2020 at 6:35 pm

Wow. Well, I guess that means the tens of thousands of cores of research compute I manage at a large University will be migrating to Debian. I've just started preparing to shift from Scientific Linux 7 to CentOS due to SL being discontinued by 2024. Glad I've only just started - not much work to throw away.

ShameOnIBM says: December 8, 2020 at 7:07 pm

IBM is declining, hence they need more profit from "useless" product line. So disgusting

MLF says: December 8, 2020 at 7:15 pm

An entire team worked for months on a centos8 transition at the uni I work at. I assume a small portion can be salvaged but reading this it seems most of it will simply go out the window. Does anyone know if this decision of dumping centos8 is final?

MM says: December 8, 2020 at 7:28 pm

Unless the community can center on a new single proper fork of RHEL, it makes the most sense (to me) to seek refuge in Debian as it is quite close to CentOS in stability terms.

Already existing functioning distribution ecosystem, can probably do good with influx of resources to enhance the missing bits, such as further improving SELinux support and expanding Debian security team.

I say this without any official or unofficial involvement with the Debian project, other than being a user.

And we have just launched hundreds of CentOS 8 servers.

Faisal Sehbai says: December 8, 2020 at 7:32 pm

Another one bites the dust due to corporate greed, which IBM exemplifies. This is why I shuddered when they bought RH. There is nothing that IBM touches that gets better, other than the bottom line of their suits!

Disgusting!

William Smith says: December 8, 2020 at 7:39 pm

This is a big mistake. RedHat did this with RedHat Linux 9, the market-leading Linux, and created Fedora, now an also-ran to Ubuntu. I spent a lot of time during Covid converting from earlier versions to 8, and now will have to review that work with my customer.

Daniele Brunengo says: December 8, 2020 at 7:48 pm

I just finished building a CentOS 8 web server, worked out all the nooks and crannies and was very satisfied with the result. Now I have to do everything from scratch? The reason why I chose this release was that every website and its brother were giving a 2029 EOL. Changing that is the worst betrayal of trust possible for the CentOS community. It's unbelievable.

David Potterveld says: December 8, 2020 at 8:08 pm

What a colossal blunder: a pivot from the long-standing mission of an OS providing stability, to an unstable development platform, in a manner that betrays its current users. They should remove the "C" from CentOS because it no longer has any connection to a community effort. I wonder if this is a move calculated to drive people from a free near clone of RHEL to a paid RHEL subscription? More likely to drive people entirely out of the RHEL ecosystem.

a says: December 8, 2020 at 9:08 pm

From a RHEL perspective I understand why they'd want it this way. CentOS was probably cutting deep into potential RedHat license sales. Though why or how RedHat would have a say in how CentOS is being run in the first place is.. troubling.

From a CentOS perspective you may as well just take the project out back and close it now. If people wanted to run beta-test tier RHEL they'd run Fedora. "LATER SECURITY FIXES AND UNTESTED 'FEATURES'?! SIGN ME UP!" -nobody

I'll probably run CentOS 7 until the end and then swap over to Debian when support starts hurting me. What a pain.

Ralf says: December 8, 2020 at 9:08 pm

Don't trust Red Hat. 1 year ago Red Hat's CTO Chris Wright agreed in an interview: 'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."' https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/

I'm a current user of old school CentOS, so keep your promise, Mr CTO.

Tamas says: December 8, 2020 at 10:01 pm

That was quick: "Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."

https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/

Konstantin says: December 9, 2020 at 3:36 pm

From the same article: 'To be exact, CentOS Stream is an upstream development platform for ecosystem developers. It will be updated several times a day. This is not a production operating system. It's purely a developer's distro.'

Read again: CentOS Stream is not a production operating system. 'Nuff said.

Samuel C. says: December 8, 2020 at 10:53 pm

This makes my decision to go with Ansible and CentOS 8 in our enterprise simple. Nope, time to go with Puppet or Chef. IBM did what I thought they would: screw up Red Hat. My company is dumping IBM software everywhere - this means we need to dump CentOS now too.

Brendan says: December 9, 2020 at 12:15 am

Ironic, and it puts those of us who have recently migrated many of our development servers to CentOS8 in a really bad spot. Luckily we haven't licensed RHEL8 production servers yet -- and now that's never going to happen.

vinci says: December 8, 2020 at 11:45 pm

I can't believe what IBM is actually doing. This is a direct move against all that open source means. They want to do exactly the same thing they're doing with awx (vs. ansible tower). You're going against everything that stands for open source. And on top of that you choose to stop offering support for CentOS 8, all of a sudden! What a horrid move on your part. The only reliable choice that remains is probably going to be Debian/Ubuntu. What a waste...

Peter Vonway says: December 8, 2020 at 11:56 pm

What IBM fails to understand is that many of us who use CentOS for personal projects also work for corporations that spend millions of dollars annually on products from companies like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM.

Scott says: December 9, 2020 at 8:38 am

This is exactly it. IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses from its customers.. while not taking into account the fact that Red Hat's strong adoption into the enterprise is a direct consequence of engineers using the nonproprietary version to develop things at home in their spare time.

Having an open source, non support contract version of your OS is exactly what drives adoption towards the supported version once the business decides to put something into production.

They are choosing to kill the golden goose in order to get the next few eggs faster. IBM doesn't care about anything but its large enterprise customers. Very stereotypically IBM.

OSLover says: December 9, 2020 at 12:09 am

So sad. Not only breaking the support promise but so quickly (2021!)

Business wise, a lot of business software is providing CentOS packages and support. Like hosting panels, backup software, virtualization, Management. I mean A LOT of money worldwide is in dark waters now with this announcement. It took years for CentOS to appear in their supported and tested distros. It will disappear now much faster.

Community wise, this is plain bad news for Open Source and all Open Source communities. This is sad. I wonder, are open source developers nowadays happy to spend so many hours on something that will in the end benefit IBM "subscribers" only? I don't think they are.

What a sad way to end 2020.

technick says: December 9, 2020 at 12:09 am

I don't want to give up on CentOS but this is a strong life changing decision. My background is linux engineering with over 15+ years of hardcore experience. CentOS has always been my go-to when an organization didn't have the appetite for RHEL and the $75 a year license fee per instance. I fought off Ubuntu takeovers at 2 of the last 3 organizations I've been with successfully. I can't, won't fight off any more, and will start advocating for Ubuntu or pure Debian moving forward.

RIP CentOS. Red Hat killed a great project. I wonder if Ansible will be next?

ConcernedAdmin says: December 9, 2020 at 12:47 am

Hoping that stabbing Open Source community in the back, will make it switch to commercial licenses is absolutely preposterous. This shows how disconnected they're from reality and consumed by greed and it will simply backfire on them, when we switch to Debian or any other LTS alternative. I can't imagine moving everything I so caressed and loved to a mess like Ubuntu.

John says: December 9, 2020 at 1:32 am

Asinine. This is completely ridiculous. I have migrated several servers from CentOS 7 to 8 recently, with more to go. We also have a RHEL subscription for outward facing servers, CentOS internal. This type of change should absolutely have been announced for CentOS 9. Saying one year from now, when it was supposed to be until 2029, is garbage. A complete betrayal. One year to move everything??? Stupid.

Now I'm going to be looking at a couple of other options, but it won't be RHEL after this type of move. This has destroyed my trust in RHEL, as I'm sure IBM pushed for this. You will be losing my RHEL money once I choose and migrate. I get that companies exist to make money and that's fine. This, though, is purely a naked money grab that betrays an established timeline and is about to force massive work on lots of people in a tiny timeframe, saying "f you, customers." You will no longer get my money for doing that to me

Concerned Fren says: December 9, 2020 at 1:52 am

In hindsight it's clear to see that the only reason RHEL took over CentOS was to kill the competition.

This is also highly frustrating as I just completed new CentOS 8 and RHEL 8 builds for non-production and production servers and had already begun deployments. Now I'm left in the situation of finding a new Linux distribution for our enterprise while I sweat out the last few years of RHEL7/CentOS7. Ubuntu is probably a no-go; their enterprise tooling is somewhat lacking, and I am of the opinion that they will likely be gobbled up by Microsoft in the next few years.

Unfortunately, the short-sighted RH/IBMer that made this decision failed to realize that a lot of admins that used CentOS at home and in the enterprise also advocated and drove sales towards Red Hat as well. Now with this announcement I'm afraid the damage is done, and even if you were to take back your announcement, trust has been broken and the blowback will ultimately mean the death of CentOS and reduced sales of RHEL. There is, however, an opportunity for another corporation such as SUSE, which is owned by Micro Focus, to capitalize on this epic blunder simply by announcing an LTS version of openSUSE Leap. This would in turn move people/corporations to the SUSE platform, which in turn would drive sales for SLES.

William Ashford says: December 9, 2020 at 2:02 am

So the inevitable has come to pass, what was once a useful Distro will disappear like others have. Centos was handy for education and training purposes and production when you couldn't afford the fees for "support", now it will just be a shadow of Fedora.

Christian Reiss says: December 9, 2020 at 6:28 am

This is disgusting. Bah. As a CTO I will now - today - assemble my teams and develop a plan to migrate all DataCenters back to Debian for good. I will also instantly instruct the termination of all mirroring of your software.

For the software (CentOS) I hope for a quick death that will not drag on for years.

Ian says: December 9, 2020 at 2:10 am

This is a bit sad. There was always a conflict of interest associated with Redhat managing the Centos project and this is the end result of this conflict of interest.

There is a genuine benefit associated with the existence of CentOS for Redhat, however it would appear that that benefit isn't great enough, and some arse clown thought that forcing users to migrate would increase Redhat's revenue.

The reality is that someone will repackage Redhat and make it just like Centos. The only difference is that Redhat now live in the same camp as Oracle.

cody says: December 9, 2020 at 4:53 am

Everyone predicted this when redhat bought centos. And when IBM bought RedHat it cemented everyone's notion.

Ganesan Rajagopal says: December 9, 2020 at 5:09 am

Thankfully we just started our migration from CentOS 7 to 8 and this surely puts a stop to that. Even if CentOS backtracks on this decision because of community backlash, the reality is the trust is lost. You've just given a huge leg for Ubuntu/Debian in the enterprise. Congratulations!

Bomel says: December 9, 2020 at 6:22 am

I am a senior system admin in my organization, which spends millions of dollars a year on RH&IBM products. From tomorrow, I will do my best to convince management to minimize our spending on RH & IBM, and start looking for alternatives to replace existing RH & IBM products under my watch.

Steve says: December 9, 2020 at 8:57 am

IBM are seeing every CentOS install as a missed RHEL subscription...

Ralf says: December 9, 2020 at 10:29 am

Some years ago IBM bought Informix. We switched to PostgreSQL, when Informix was IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will happen with our CentOS installations. What's wrong with IBM?

Michel-André says: December 9, 2020 at 5:18 pm

Hi all,

Remember when RedHat, around RH-7.x, wanted to charge for the distro, the community revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.

Even though RedHat/CentOS has a very large share of the Linux server market, it will suffer the same fate as Novell (which had 85% of the market), disappearing into darkness!

Michel-André

PeteVM says: December 9, 2020 at 5:27 pm

As I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life, before they become another ordinary dept of IBM, producing IBM Linux.

CentOS is dead. Time to either go back to Debian and its derivatives, or just pay for RHEL, or IBMEL, and suck it up.

JadeK says: December 9, 2020 at 6:36 pm

I am mid-migration from Rhel/Cent6 to 8. I now have to stop a major project for several hundred systems. My group will have to go back to rebuild every CentOS 8 system we've spent the last 6 months deploying.

Congrats fellas, you did it. You perfected the transition to Debian from CentOS.

Godimir Kroczweck says: December 9, 2020 at 8:21 pm

I find it kind of funny, I find it kind of sad. The dreams in which I'm moving 1.5K+ machines to whatever distro I have yet to find as a fitting replacement are the..

Wait. How could anyone, in all seriousness, consider cutting short an already published EOL a good idea?

I literally had to convince people to move from Ubuntu and Debian installations to CentOS for the sake of stability and longer support, only to end up looking like a clown now, because with a single move the distro has been deprived of both.

Paul R says: December 9, 2020 at 9:14 pm

Happy to donate and be part of the revolution away the Corporate vampire Squid that is IBM

Nicholas Knight says: December 9, 2020 at 9:34 pm

Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you *lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money relying on a promise you made, we thought, in good faith. It is now clear Red Hat no longer knows what "good faith" means, and acts only as a Trumpian vacuum of wealth.

[Dec 10, 2020] GPL bites Red Hat in the butt: they might face the emergence of a CentOS alternative due to the wave of support for such a distro

Dec 10, 2020 | blog.centos.org

Sam Callis says: December 8, 2020 at 3:58 pm

I have been using CentOS for over 10 years and one of the things I loved about it was how stable it has been. Now, instead of being a stable release, it is changing to the beta testing ground for RHEL 8.

And instead of 10 years of support, you need to update to the latest dot release. This has me very concerned.

Sieciowski says: December 9, 2020 at 11:19 am

well, 10 years - have you ever contributed anything to the CentOS community, paid them a wage, or at least donated some decent hardware for development, or have you just been a parasite all this time, and now you're surprised that someone expects you to buy your own lunches for a change?

If you think you could have done it even better, why not take the RH sources and make your own FreeRHos-whatever distro, then support, maintain and patch all the subsequent versions for free?

Joe says: December 9, 2020 at 11:47 am

That's ridiculous. RHEL has benefitted from the free testing and corner case usage of CentOS users and made money hand-over-fist on RHEL. Shed no tears for using CentOS for free. That is the benefit of opening the core of your product.

Ljubomir Ljubojevic says: December 9, 2020 at 12:31 pm

You are missing a very important point. The goal of the CentOS project was to rebuild RHEL, nothing else. If money was the problem, they could have asked for donations and it would be clear whether there could be financial support for the rebuild or not.

Putting the entire community in front of a done deal is disheartening, and no one will trust Red Hat to be pro-community anymore; as for the Red Hat employees that sit on the CentOS board, who can trust their integrity after this fiasco?

Matt Phelps says: December 8, 2020 at 4:12 pm

This is a breach of trust from the already published timeline of CentOS 8 where the EOL was May 2029. One year's notice for such a massive change is unacceptable.

Move this approach to CentOS 9

fahrradflucht says: December 8, 2020 at 5:37 pm

This! People already started deploying CentOS 8 with the expectation of 10 years of updates. - Even a migration to RHEL 8 would imply completely reprovisioning the systems which is a big ask for systems deployed in the field.

Gregory Kurtzer says: December 8, 2020 at 4:27 pm

I am considering creating another rebuild of RHEL and may even be able to hire some people for this effort. If you are interested in helping, please join the HPCng slack (link on the website hpcng.org).

Greg (original founder of CentOS)

A says: December 8, 2020 at 7:11 pm

Not a programmer, but I'd certainly use it. I hope you get it off the ground.

Michael says: December 8, 2020 at 8:26 pm

This sounds like a great idea and getting control away from corporate entities like IBM would be helpful. Have you considered reviving the Scientific Linux project?

Bond Masuda says: December 8, 2020 at 11:53 pm

Feel free to contact me. I'm a long time RH user (since pre-RHEL when it was RHL) in both server and desktop environments. I've built and maintained some RPMs for some private projects that used CentOS as foundation. I can contribute compute and storage resources. I can program in a few different languages.

Rex says: December 9, 2020 at 3:46 am

Dear Greg,

Thank you for considering starting another RHEL rebuild. If and when you do, please consider making your new website a Brave Verified Content Creator. I earn a little bit of money every month using the Brave browser, and I end up donating it to Wikipedia every month because there are so few Brave Verified websites.

The verification process is free, and takes about 15 to 30 minutes. I believe that the Brave browser now has more than 8 million users.

dovla091 says: December 9, 2020 at 10:47 am

Wikipedia. The so-called organization that gets tons of money from tech oligarchs and yet whines about needing money and support? (If you don't believe me, just check their biggest donors.) They are also keen to be insanely biased and allow whoever pays the most to write on their site... Seriously, find another organisation to donate your money to

dan says: December 9, 2020 at 4:00 am

Please keep us updated. I can't donate much, but I'm sure many would love to donate to this cause.

Chad Gregory says: December 9, 2020 at 7:21 pm

Not sure what I could do, but I will keep an eye out for things I could help with. This change to CentOS really pisses me off, as I have stood up 2 CentOS servers for my work's production environment in the last year.

Vasile M says: December 8, 2020 at 8:43 pm

LOL... CentOS is RH from 2014 to date. What did you expect? CentOS being so good and stable cuts into RHEL sales... RH and now IBM just think of profit. It was expected; search the net for comments back in 2014.

[Dec 10, 2020] Amazon Linux 2

Dec 10, 2020 | aws.amazon.com

Amazon Linux 2 is the next generation of Amazon Linux, a Linux server operating system from Amazon Web Services (AWS). It provides a secure, stable, and high performance execution environment to develop and run cloud and enterprise applications. With Amazon Linux 2, you get an application environment that offers long term support with access to the latest innovations in the Linux ecosystem. Amazon Linux 2 is provided at no additional charge.

Amazon Linux 2 is available as an Amazon Machine Image (AMI) for use on Amazon Elastic Compute Cloud (Amazon EC2). It is also available as a Docker container image and as a virtual machine image for use on Kernel-based Virtual Machine (KVM), Oracle VM VirtualBox, Microsoft Hyper-V, and VMware ESXi. The virtual machine images can be used for on-premises development and testing. Amazon Linux 2 supports the latest Amazon EC2 features and includes packages that enable easy integration with AWS. AWS provides ongoing security and maintenance updates for Amazon Linux 2.

[Dec 10, 2020] A letter to IBM brass

Notable quotes:
"... Redhat endorsed that moral contract when you brought official support to CentOS back in 2014. ..."
"... Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the community. ..."
"... Also, a lot of smaller developers won't support RHEL anymore because their target weren't big companies, making less and less products available without the need of self supporting RPM builds. ..."
"... Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future. ..."
"... This means that we'll now have to turn to other linux flavors, like Debian, or OpenSUSE, of which at least some have hardware vendor support too, but with a lesser lifecycle. ..."
"... I think you destroyed a large part of the RHEL / CentOS community with this move today. ..."
"... Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more uncertain. ..."
Dec 10, 2020 | blog.centos.org

Orsiris de Jong says: December 9, 2020 at 9:41 am

Dear IBM,

Like a lot of us here, I've been in the CentOS / RHEL community for more than 10 years.
The reasons for that choice were stability, long term support and good hardware vendor support.

Like many others, I've built much of my skills upon this linux flavor for years, and have been involved in the community through numerous bug reports, bug fixes, and howto writeups.

Using CentOS was the good alternative to RHEL on a lot of non critical systems, and for smaller companies like the one I work for.

The moral contract has always been a rock solid "Community Enterprise OS" in exchange for community support, bug reports & fixes, and growing interest from developers.

Redhat endorsed that moral contract when you brought official support to CentOS back in 2014.

Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the community.

Also, a lot of smaller developers won't support RHEL anymore because their target weren't big companies, making less and less products available without the need of self supporting RPM builds.

This will make RHEL less and less widely used by startups, enthusiasts and others.

CentOS Stream being the upstream of RHEL, I highly doubt system architects and developers are willing to be beta testers for RHEL.

Providing a free RHEL subscription for Open Source projects just sounds like your next step to keep a bit of the exodus from happening, but I'd bet that "free" subscription will get more and more restrictions later on, pushing to a full RHEL support contract.

As a lot of people here, I won't go the Oracle way, they already did a very good job destroying other company's legacy.

Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future.

This means that we'll now have to turn to other linux flavors, like Debian, or OpenSUSE, of which at least some have hardware vendor support too, but with a lesser lifecycle.

I think you destroyed a large part of the RHEL / CentOS community with this move today.

Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more uncertain.

... ... ...

[Dec 10, 2020] CentOS will be RHEL's beta, but CentOS denies this

IBM has a history of taking over companies and turning them into junk, so I am not that surprised. I am surprised that it took IBM brass so long to kill CentOS after the Red Hat acquisition.
Notable quotes:
"... By W3Tech 's count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third, 17.5%. RHEL? It's a distant fourth with 1.8%. ..."
"... Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle . That means if you're using CentOS 7, you'll see support through June 30, 2024 ..."
Dec 10, 2020 | www.zdnet.com

I'm far from alone. By W3Tech 's count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third, 17.5%. RHEL? It's a distant fourth with 1.8%.

If you think you just realized why Red Hat might want to remove CentOS from the server playing field, you're far from the first to think that.

Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle . That means if you're using CentOS 7, you'll see support through June 30, 2024

[Dec 10, 2020] Time to bring back Scientific Linux

Notable quotes:
"... I bet Fermilab are thrilled back in 2019 they announced that they wouldn't develop Scientific Linux 8, and focus on CentOS 8 instead. ..."
Dec 10, 2020 | www.reddit.com

I bet Fermilab are thrilled; back in 2019 they announced that they wouldn't develop Scientific Linux 8 and would focus on CentOS 8 instead. https://listserv.fnal.gov/scripts/wa.exe?A2=SCIENTIFIC-LINUX-ANNOUNCE;11d6001.1904

clickwir 19 points· 1 day ago

Time to bring back Scientific Linux.

[Dec 10, 2020] CentOS Project: Embraced, extended, extinguished.

Notable quotes:
"... My gut feeling is that something like Scientific Linux will make a return and current CentOS users will just use that. ..."
Dec 10, 2020 | www.reddit.com

KugelKurt 18 points· 1 day ago

I wonder what Red Hat's plan is WRT companies like Blackmagic Design that ship CentOS as part of their studio equipment.

The cost of a RHEL license isn't the issue when the overall cost of the equipment is in the tens of thousands, but unless I missed a change in Red Hat's trademark policy, Blackmagic cannot distribute a modified version of RHEL without removing all trademarks first.

I don't think a rolling release distribution is what BMD wants.

My gut feeling is that something like Scientific Linux will make a return and current CentOS users will just use that.

[Dec 10, 2020] Oracle Linux -- A better alternative to CentOS

Currently limited to CentOS 6 and CentOS 7.
Dec 10, 2020 | linux.oracle.com
Oracle Linux: A better alternative to CentOS

We firmly believe that Oracle Linux is the best Linux distribution on the market today. It's reliable, it's affordable, it's 100% compatible with your existing applications, and it gives you access to some of the most cutting-edge innovations in Linux like Ksplice and DTrace.

But if you're here, you're a CentOS user. Which means that you don't pay for a distribution at all, for at least some of your systems. So even if we made the best paid distribution in the world (and we think we do), we can't actually get it to you... or can we?

We're putting Oracle Linux in your hands by doing two things: we've made the software itself completely free (source code, binaries, and updates, with no subscription required), and we've created a simple script to switch your existing CentOS systems over to Oracle Linux.

We think you'll like what you find, and we'd love for you to give it a try.

FAQ

Wait, doesn't Oracle Linux cost money?
Oracle Linux support costs money. If you just want the software, it's 100% free. And it's all in our yum repo at yum.oracle.com . Major releases, errata, the whole shebang. Free source code, free binaries, free updates, freely redistributable, free for production use. Yes, we know that this is Oracle, but it's actually free. Seriously.
Is this just another CentOS?
Inasmuch as they're both 100% binary-compatible with Red Hat Enterprise Linux, yes, this is just like CentOS. Your applications will continue to work without any modification whatsoever. However, there are several important differences that make Oracle Linux far superior to CentOS.
How is this better than CentOS?
Well, for one, you're getting the exact same bits our paying enterprise customers are getting. So that means a few things. Importantly, it means virtually no delay between when Red Hat releases a kernel and when Oracle Linux does:

(Chart: delay in kernel security advisories since January 2018, CentOS vs Oracle Linux; CentOS shows large fluctuations in delays.)

So if you don't want to risk another CentOS delay, Oracle Linux is a better alternative for you. It turns out that our enterprise customers don't like to wait for updates -- and neither should you.

What about the code quality?
Again, you're running the exact same code that our enterprise customers are, so it has to be rock-solid. Unlike CentOS, we have a large paid team of developers, QA, and support engineers that work to make sure this is reliable.
What if I want support?
If you're running Oracle Linux and want support, you can purchase a support contract from us (and it's significantly cheaper than support from Red Hat). No reinstallation, no nothing -- remember, you're running the same code as our customers.

Contrast that with the CentOS/RHEL story. If you find yourself needing to buy support, have fun reinstalling your system with RHEL before anyone will talk to you.

Why are you doing this?
This is not some gimmick to get you running Oracle Linux so that you buy support from us. If you're perfectly happy running without a support contract, so are we. We're delighted that you're running Oracle Linux instead of something else.

At the end of the day, we're proud of the work we put into Oracle Linux. We think we have the most compelling Linux offering out there, and we want more people to experience it.

How do I make the switch?
Run the following as root:

curl -O https://linux.oracle.com/switch/centos2ol.sh
sh centos2ol.sh

What versions of CentOS can I switch?
centos2ol.sh can convert your CentOS 6 and 7 systems to Oracle Linux.
What does the script do?
The script has two main functions: it switches your yum configuration to use the Oracle Linux yum server to update some core packages and installs the latest Oracle Unbreakable Enterprise Kernel. That's it! You won't even need to restart after switching, but we recommend you do to take advantage of UEK.
Is it safe?
The centos2ol.sh script takes precautions to back up and restore any repository files it changes, so if it does not work on your system it will leave it in working order. If you encounter any issues, please get in touch with us by emailing [email protected] .

[Dec 10, 2020] The demise of CentOS and independent training providers

Dec 10, 2020 | blog.centos.org

Anthony Mwai says: December 8, 2020 at 8:44 pm

IBM is messing up RedHat after the takeover last year. This is the most unfortunate news for the Free Open-Source community. Companies have been using CentOS as a testing bed before committing to purchase RHEL subscription licenses.

We need to rethink before rolling out RedHat/CentOS 8 training in our Centre.

Joe says: December 9, 2020 at 1:03 pm

You can use Oracle Linux in exactly the same way as you did CentOS except that you have the option of buying support without reinstalling a "commercial" variant.

Everything's in the public repos except a few addons like ksplice. You don't even have to go through the e-delivery to download the ISOs any more, they're all linked from yum.oracle.com

TechSmurf says: December 9, 2020 at 12:38 am

Not likely. Oracle Linux has extensive use by paying Oracle customers as a host OS for their database software and in general purposes for Oracle Cloud Infrastructure.

Oracle customers would be even less thrilled about Streams than CentOS users. I hate to admit it, but Oracle has the opportunity to take a significant chunk of the CentOS user base, myself included, if they don't do anything Oracle-ish.

I'll be pretty surprised if they don't completely destroy their own windfall opportunity, though.

David Anderson says: December 8, 2020 at 7:16 pm

"OEL is literally a rebranded RH."

So, what's not to like? I also was under the impression that OEL was a paid offering, but apparently this is wrong - https://www.oracle.com/ar/a/ocom/docs/linux/oracle-linux-ds-1985973.pdf - "Oracle Linux is easy to download and completely free to use, distribute, and update."

Bill Murmor says: December 9, 2020 at 5:04 pm

So, what's the problem?

IBM has discontinued CentOS. Oracle is producing a working replacement for CentOS. If, at some point, Oracle attacks their product's users in the way IBM has here, then one can move to Debian, but for now, it's a working solution, as CentOS no longer is.

k1 says: December 9, 2020 at 7:58 pm

Because it's a trust issue. RedHat has lost trust. Oracle never had it in the first place.

[Dec 10, 2020] Oracle has a converter script for CentOS 7. And here is a quick hack to convert CentOS 8 to Oracle Linux

You can use Oracle Linux exactly like CentOS, only better
Ang says: December 9, 2020 at 5:04 pm "I never thought we'd see the day Oracle is more trustworthy than RedHat/IBM. But I guess such things do happen with time..."
Notable quotes:
"... The link says that you don't have to pay for Oracle Linux . So switching to it from CentOS 8 could be a very easy option. ..."
"... this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv: ..."
Dec 10, 2020 | blog.centos.org

Charlie F. says: December 8, 2020 at 6:37 pm

Oracle has a converter script for CentOS 7, and they will sell you OS support after you run it:

https://linux.oracle.com/switch/centos/

It would be nice if Oracle would update that for CentOS 8.

David Anderson says: December 8, 2020 at 7:15 pm

The link says that you don't have to pay for Oracle Linux . So switching to it from CentOS 8 could be a very easy option.

Max Grü says: December 9, 2020 at 2:05 pm

Oracle Linux is free. The only thing that costs money is support for it. I quote "Yes, we know that this is Oracle, but it's actually free. Seriously."

Phil says: December 9, 2020 at 2:10 pm

this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv:

# fetch the Oracle Linux release and repo packages straight from the OL8 BaseOS repo
repobase=http://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage
wget \
${repobase}/redhat-release-8.3-1.0.0.1.el8.x86_64.rpm \
${repobase}/oracle-release-el8-1.0-1.el8.x86_64.rpm \
${repobase}/oraclelinux-release-8.3-1.0.4.el8.x86_64.rpm \
${repobase}/oraclelinux-release-el8-1.0-9.el8.x86_64.rpm
# swap the CentOS release package for the Oracle ones installed from the local files
rpm -e centos-linux-release --nodeps
dnf --disablerepo='*' localinstall ./*rpm
# blank the $ociregion dnf variable so the public yum.oracle.com mirrors are used
:> /etc/dnf/vars/ociregion
# drop the CentOS repo definitions and sync every package to its Oracle Linux build
dnf remove centos-linux-repos
dnf --refresh distro-sync
# since I wanted to try out the unbreakable enterprise kernel:
dnf install kernel-uek
reboot
dnf remove kernel
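
After the conversion, a quick sanity check is worth doing. A minimal sketch (the exact version strings will differ on your system):

# Oracle Linux ships its own release file alongside redhat-release
cat /etc/oracle-release
cat /etc/redhat-release
# confirm the running kernel (kernel-uek if you rebooted into UEK)
uname -r
# make sure only Oracle Linux repositories remain enabled
dnf repolist enabled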

[Dec 10, 2020] Linux Subshells for Beginners With Examples - LinuxConfig.org

Dec 10, 2020 | linuxconfig.org

Bash allows two different subshell syntaxes, namely $() and back-tick surrounded statements. Let's look at some easy examples to start:

$ echo '$(echo 'a')'
$(echo a)
$ echo "$(echo 'a')"
a
$ echo "a$(echo 'b')c"
abc
$ echo "a`echo 'b'`c"
abc


In the first command, as an example, we used ' single quotes. This resulted in our subshell command, inside the single quotes, to be interpreted as literal text instead of a command. This is standard Bash: ' indicates literal, " indicates that the string will be parsed for subshells and variables.

In the second command we swap the ' to " and thus the string is parsed for actual commands and variables. The result is that a subshell is being started, thanks to our subshell syntax ( $() ), and the command inside the subshell ( echo 'a' ) is being executed literally, and thus an a is produced, which is then inserted in the overarching / top level echo . The command at that stage can be read as echo "a" and thus the output is a .

In the third command, we further expand this to make it clearer how subshells work in-context. We echo the letter b inside the subshell, and this is joined on the left and the right by the letters a and c yielding the overall output to be abc in a similar fashion to the second command.

In the fourth and last command, we exemplify the alternative Bash subshell syntax of using back-ticks instead of $() . It is important to know that $() is the preferred syntax, and that in some remote cases the back-tick based syntax may yield some parsing errors where the $() does not. I would thus strongly encourage you to always use the $() syntax for subshells, and this is also what we will be using in the following examples.
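
To see why $() is preferred, compare how the two syntaxes nest. Both commands below produce the same output, but the back-tick version needs its inner back-ticks escaped, which quickly becomes unreadable (a small illustration):

$ echo "$(echo "$(echo 'nested')")"
nested
$ echo "`echo \`echo 'nested'\``"
nested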

Example 2: A little more complex
$ touch a
$ echo "-$(ls [a-z])"
-a
$ echo "-=-||$(ls [a-z] | xargs ls -l)||-=-"
-=-||-rw-rw-r-- 1 roel roel 0 Sep  5 09:26 a||-=-

Here, we first create an empty file by using the touch a command. Subsequently, we use echo to output something which our subshell $(ls [a-z]) will generate. Sure, we can execute the ls directly and yield more or less the same result, but note how we are adding - to the output as a prefix.

In the final command, we insert some characters at the front and end of the echo command, which makes the output look a bit nicer. We use a subshell to first find the a file we created earlier ( ls [a-z] ) and then - still inside the subshell - pass the results of this command (which would be only a literal a, i.e. the file we created in the first command) to ls -l using the pipe ( | ) and the xargs command. For more information on xargs, please see our articles xargs for beginners with examples and multi threaded xargs with examples.

Example 3: Double quotes inside subshells and sub-subshells!
echo "$(echo "$(echo "it works")" | sed 's|it|it surely|')"
it surely works


Cool, no? Here we see that double quotes can be used inside the subshell without generating any parsing errors. We also see how a subshell can be nested inside another subshell. Are you able to parse the syntax? The easiest way is to start "in the middle or core of all subshells" which is in this case would be the simple echo "it works" .

This command will output it works as a result of the subshell call $(echo "it works") . Picture it works in place of the subshell, i.e.

echo "$(echo "it works" | sed 's|it|it surely|')"
it surely works

This looks simpler already. Next, it is helpful to know that the sed command will do a substitution (thanks to the s command just before the | separator) of the text it to it surely. You can read the sed command as replace __it__ with __it surely__. The output of the subshell will thus be it surely works, i.e.

echo "it surely works"
it surely works
Conclusion

In this article, we have seen that subshells surely work (pun intended), and that they can be used in a wide variety of circumstances, due to their ability to be inserted inline and within the context of the overarching command. Subshells are very powerful, and once you start using them, well, there will likely be no stopping. Very soon you will be writing something like:

$ VAR="goodbye"; echo "thank $(echo "${VAR}" | sed 's|^| and |')" | sed 's|k |k you|'

This one is for you to try and play around with! Thank you and goodbye

[Dec 09, 2020] Is Oracle A Real Alternative To CentOS

Notable quotes:
"... massive amount of extra packages and full rebuild of EPEL (same link): https://yum.oracle.com/oracle-linux-8.html ..."
Dec 09, 2020 | centosfaq.org

Is Oracle A Real Alternative To CentOS? December 8, 2020 | Frank Cox | 33 Comments

Is Oracle a real alternative to CentOS? I'm asking because I genuinely don't know; I've never paid any attention to Oracle's Linux offering before now.

But today I've seen a couple of the folks here mention Oracle Linux and I see that Oracle even offers a script to convert CentOS 7 to Oracle. Nothing about CentOS 8 in that script, though.

https://linux.oracle.com/switch/centos/

That page seems to say that Oracle Linux is everything that CentOS was prior to today's announcement.

But someone else here just said that the first thing Oracle Linux does is to sign you up for an Oracle account.

So, for people who know a lot more about these things than I do, what's the downside of using Oracle Linux versus CentOS? I assume that things like epel/rpmfusion/etc will work just as they do under CentOS since it's supposed to be bit-for-bit compatible like CentOS was. What does the "sign up with Oracle" stuff actually do, and can you cancel, avoid, or strip it out if you don't want it?

Based on my extremely limited knowledge around Oracle Linux, it sounds like that might be a go-to solution for CentOS refugees.

But is it, really?

Karl Vogel says: December 9, 2020 at 3:05 am

... ... ..

Go to https://linux.oracle.com/switch/CentOS/ , poke around a bit, and you end up here:
https://yum.oracle.com/oracle-linux-downloads.html

I just went to the ISO page and I can grab whatever I like without signing up for anything, so nothing's changed since I first used it.

... ... ...

Gianluca Cecchi says: December 9, 2020 at 3:30 am

[snip]

Only to point out that while in CentOS (8.3, but the same in 7.x) the situation is like this:

[g.cecchi@skull8 ~]$ ll /etc/redhat-release /etc/centos-release
-rw-r--r-- 1 root root 30 Nov 10 16:49 /etc/centos-release
lrwxrwxrwx 1 root root 14 Nov 10 16:49 /etc/redhat-release -> centos-release
[g.cecchi@skull8 ~]$ cat /etc/centos-release
CentOS Linux release 8.3.2011

in Oracle Linux (eg 7.7) you get two different files:

$ ll /etc/redhat-release /etc/oracle-release
-rw-r--r-- 1 root root 32 Aug 8 2019 /etc/oracle-release
-rw-r--r-- 1 root root 52 Aug 8 2019 /etc/redhat-release
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
$ cat /etc/oracle-release
Oracle Linux Server release 7.7

This is generally done so that software officially certified only on the upstream enterprise vendor's distro, which tests the contents of the redhat-release file, is still satisfied. Using the lsb_release command on an Oracle Linux 7.6 machine:

# lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: OracleServer
Description:    Oracle Linux Server release 7.6
Release:        7.6
Codename:       n/a
#

Gianluca

Rainer Traut says: December 9, 2020 at 4:18 am

On 08.12.20 at 18:54, Frank Cox wrote:

Yes, it is better than CentOS and in some aspects better than RHEL:

– faster security updates than CentOS, directly behind RHEL
– better kernels than RHEL and CentOS (UEKs) with more features
– free to download (no subscription needed): https://yum.oracle.com/oracle-linux-isos.html
– free to use: https://yum.oracle.com/oracle-linux-8.html
– massive amount of extra packages and a full rebuild of EPEL (same link): https://yum.oracle.com/oracle-linux-8.html

Rainer Traut says: December 9, 2020 at 4:26 am

Hi,

On 08.12.20 at 19:03, Jon Pruente wrote:

KVM is a subscription feature. They want you to run Oracle VM Server for x86 (which is based on Xen) so they can try to upsell you to use the Oracle Cloud. There's other things, but that stood out immediately.

Oracle Linux FAQ (PDF): https://www.oracle.com/a/ocom/docs/027617.pdf

There is no subscription needed. All needed repositories for the oVirt based virtualization are freely available.

https://docs.oracle.com/en/virtualization/oracle-linux-virtualization-manager/getstart/manager-install.html#manager-install-prepare

Rainer Traut says: December 10, 2020 at 4:40 am

On 09.12.20 at 17:52, Frank Cox wrote:

I'll try to answer to the best of my knowledge.

I have an Oracle account but never used it for/with Oracle Linux. There are Oracle communities where you need an Oracle account: https://community.oracle.com/tech/apps-infra/categories/oracle_linux

Niki Kovacs says: December 10, 2020 at 10:22 am

On 10/12/2020 at 17:18, Frank Cox wrote:

That's it. I know Oracle's history, but I think for Oracle Linux, they may be much better than their reputation. I'm currently fiddling around with it, and I like it very much. Plus there's a nice script to turn an existing CentOS installation into an Oracle Linux system.

Cheers,

Niki

--
Microlinux – Solutions informatiques durables
7, place de l'église – 30730 Montpezat Site : https://www.microlinux.fr Blog : https://blog.microlinux.fr Mail : [email protected] Tél. : 04 66 63 10 32
Mob. : 06 51 80 12 12

Ljubomir Ljubojevic says: December 10, 2020 at 12:53 pm

There is always Springdale Linux made by Princeton University: https://puias.math.ias.edu/

Johnny Hughes says: December 10, 2020 at 4:10 pm

On 10.12.20 at 19:53, Ljubomir Ljubojevic wrote:

I did a conversion of a test webserver from C8 to Springdale. It went smoothly.

Niki Kovacs says: December 12, 2020 at 11:29 am

On 08/12/2020 at 18:54, Frank Cox wrote:

I spent the last three days experimenting with it. Here's my take on it: https://blog.microlinux.fr/migration-centos-oracle-linux/

tl;dr: Very nice if you don't have any qualms about the company.

Cheers,

Niki


Frank Cox says: December 12, 2020 at 11:52 am

That's a really excellent article, Nicholas. Thanks ever so much for posting about your experience.

Peter Huebner says: December 15, 2020 at 5:07 am

On Tuesday, 15.12.2020 at 10:14 +0100, Ruslanas Gžibovskis wrote:

According to the Oracle license terms and official statements, it is "free to download, use and share. There is no license cost, no need for a contract, and no usage audits."

Recommendation only: "For business-critical infrastructure, consider Oracle Linux Support." Only optional, not a mandatory requirement. see: https://www.oracle.com/linux

No need for such a construct. Oracle Linux can be used on any production system without the legal requirement to obtain an extra commercial license. Same as with CentOS.

So Oracle Linux can currently be used free as in "free beer" for any system, even for commercial purposes. Nevertheless, Oracle can change those license terms in the future, but this applies just as well to all other company-backed Linux distributions.
--
Peter Huebner

[Sep 05, 2020] documentation - How do I get the list of exit codes (and-or return codes) and meaning for a command-utility

Sep 05, 2020 | unix.stackexchange.com

What exit code should I use?

There is no "recipe" to get the meanings of an exit status of a given terminal command.

My first attempt would be the manpage:

user@host:~# man ls 
   Exit status:
       0      if OK,

       1      if minor problems (e.g., cannot access subdirectory),

       2      if serious trouble (e.g., cannot access command-line argument).

Second: Google. See wget as an example.

Third: The exit statuses of the shell, for example Bash. Bash and its builtins may use values above 125 specially: 127 for command not found, 126 for command found but not executable. For more information see the Bash exit codes.
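
A quick way to see these shell-reserved codes in action (a minimal sketch using throwaway names):

$ nosuchcommand; echo $?
bash: nosuchcommand: command not found
127
$ touch notexecutable.sh; ./notexecutable.sh; echo $?
bash: ./notexecutable.sh: Permission denied
126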

Some list of sysexits on both Linux and BSD/OS X with preferable exit codes for programs (64-78) can be found in /usr/include/sysexits.h (or: man sysexits on BSD):

0   /* successful termination */
64  /* base value for error messages */
64  /* command line usage error */
65  /* data format error */
66  /* cannot open input */
67  /* addressee unknown */
68  /* host name unknown */
69  /* service unavailable */
70  /* internal software error */
71  /* system error (e.g., can't fork) */
72  /* critical OS file missing */
73  /* can't create (user) output file */
74  /* input/output error */
75  /* temp failure; user is invited to retry */
76  /* remote error in protocol */
77  /* permission denied */
78  /* configuration error */
/* maximum listed value */

The above list allocates previously unused exit codes from 64-78. The range of unallotted exit codes will be further restricted in the future.

However, the above values are mainly used in sendmail and by pretty much nobody else, so they aren't anything remotely close to a standard (as pointed out by @Gilles).
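
Even so, if you want to follow the sysexits.h convention in your own scripts, a minimal sketch looks like this (the EX_* names mirror the header; the values come from the list above):

#!/bin/bash
# exit codes following the sysexits.h convention
EX_OK=0          # successful termination
EX_USAGE=64      # command line usage error
EX_NOINPUT=66    # cannot open input

# require exactly one argument, and require it to be a readable file
[ $# -eq 1 ] || { echo "usage: $0 <file>" >&2; exit $EX_USAGE; }
[ -r "$1" ] || { echo "cannot read $1" >&2; exit $EX_NOINPUT; }
wc -l < "$1"
exit $EX_OK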

In the shell, the exit statuses are as follows (based on Bash):

1       Catchall for general errors
2       Misuse of shell builtins (according to Bash documentation)
126     Command invoked cannot execute
127     Command not found
128     Invalid argument to exit
128+n   Fatal error signal "n" (e.g. kill -9 yields 137 = 128 + 9)
130     Script terminated by Control-C (130 = 128 + 2)
255     Exit status out of range

According to the above table, exit codes 1 - 2, 126 - 165, and 255 have special meanings, and should therefore be avoided for user-specified exit parameters.

Please note that out of range exit values can result in unexpected exit codes (e.g. exit 3809 gives an exit code of 225, 3809 % 256 = 225).
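
You can verify the wrap-around behaviour in a subshell, without terminating your current shell (a minimal sketch):

$ (exit 3809); echo $?
225
$ (exit 1); echo $?
1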

See:

You will have to look into the code/documentation. However, the thing that comes closest to a "standardization" is errno.h. (answered Jan 22 '14 by Thorsten Staerk)

thanks for pointing the header file.. tried looking into the documentation of a few utils.. hard time finding the exit codes, seems most will be the stderrs... – precise Jan 22 '14 at 9:13

[May 06, 2020] Creating and managing partitions in Linux with parted Enable Sysadmin by Tyler Carrigan

Apr 30, 2020 | www.redhat.com


Listing partitions with parted

The first thing that you want to do anytime that you need to make changes to your disk is to find out what partitions you already have. Displaying existing partitions allows you to make informed decisions moving forward and helps you nail down the partition names you will need for future commands. Run the parted command to start parted in interactive mode and list partitions. It will default to your first listed drive. You will then use the print command to display disk information.

[root@rhel ~]# parted /dev/sdc
    GNU Parted 3.2
    Using /dev/sdc
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) print                                                            
    Error: /dev/sdc: unrecognised disk label
    Model: ATA VBOX HARDDISK (scsi)                                           
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: unknown
    Disk Flags:
    (parted)

Creating new partitions with parted

Now that you can see what partitions are active on the system, you are going to add a new partition to /dev/sdc . You can see in the output above that there is no partition table for this disk, so add one by using the mklabel command. Then use mkpart to add the new partition. You are creating a new primary partition with the ext4 filesystem. For demonstration purposes, I chose to create a 50 MB partition.

(parted) mklabel msdos                                                    
    (parted) mkpart                                                           
    Partition type?  primary/extended? primary                                
    File system type?  [ext2]? ext4                                           
    Start? 1                                                                  
    End? 50                                                                   
    (parted)                                                                  
    (parted) print                                                            
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  50.3MB  49.3MB  primary  ext4         lba
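The same label and partition can also be created non-interactively, which is handy for scripting; a sketch against the same /dev/sdc scratch disk (adjust the device to your system):

parted --script /dev/sdc mklabel msdos
parted --script /dev/sdc mkpart primary ext4 1MiB 50MiB
parted --script /dev/sdc print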

Modifying existing partitions with parted

Now that you have created the new partition at 50 MB, you can resize it to 100 MB, and then shrink it back to the original 50 MB. First, note the partition number. You can find this information by using the print command. You are then going to use the resizepart command to make the modifications.

(parted) resizepart                                                       
    Partition number? 1                                                       
    End?  [50.3MB]? 100                                                       
        
    (parted) print                                                            
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End    Size    Type     File system  Flags
     1      1049kB  100MB  99.0MB  primary

You can see in the above output that I resized partition number one from 50 MB to 100 MB. You can then verify the changes with the print command. You can now resize it back down to 50 MB. Keep in mind that shrinking a partition can cause data loss.

    (parted) resizepart                                                       
    Partition number? 1                                                       
    End?  [100MB]? 50                                                         
    Warning: Shrinking a partition can cause data loss, are you sure you want to
    continue?
    Yes/No? yes                                                               
    
    (parted) print
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  50.0MB  49.0MB  primary

Removing partitions with parted

Now, let's look at how to remove the partition you created at /dev/sdc1 by using the rm command inside of the parted suite. Again, you will need the partition number, which is found in the print output.

NOTE: Be sure that you have all of the information correct here; there are no safeguards or "Are you sure?" prompts. When you run the rm command, it will delete the partition number you give it.

    (parted) rm 1                                                             
    (parted) print                                                            
    Model: ATA VBOX HARDDISK (scsi)
    Disk /dev/sdc: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:
    
    Number  Start  End  Size  Type  File system  Flags
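Deletion also works non-interactively; a sketch, assuming partition 1 on /dev/sdc is expendable:

parted --script /dev/sdc rm 1
parted --script /dev/sdc print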

[Mar 05, 2020] Bash IDE - Visual Studio Marketplace

Notable quotes:
"... all your shell scripts ..."
Mar 05, 2020 | marketplace.visualstudio.com
Bash IDE

A Visual Studio Code extension utilizing the bash language server, which is based on Tree Sitter and its grammar for Bash, and supports explainshell integration.

Configuration

To get documentation for flags on hover (thanks to explainshell), run the explainshell Docker container :

docker run --rm --name bash-explainshell -p 5000:5000 chrismwendt/codeintel-bash-with-explainshell

And add this to your VS Code settings:

    "bashIde.explainshellEndpoint": "http://localhost:5000",

For security reasons, it defaults to "" , which disables explainshell integration. When set, this extension will send requests to the endpoint and display documentation for flags.

Once https://github.com/idank/explainshell/pull/125 is merged, it would be possible to set this to "https://explainshell.com" , however doing this is not recommended as it will leak all your shell scripts to a third party -- do this at your own risk, or better always use a locally running Docker image.

[Feb 18, 2020] Articles on Linux by Ken Hess

Jul 13, 2019 | www.linuxtoday.com

[Feb 05, 2020] How to disable the startup graphic in CentOS

This is still a problem today... See also centOS 7 hung at "Starting Plymouth switch root service" and https://www.youtube.com/watch?v=oFl40XzlXp4

Feb 05, 2020 | forums.centos.org

disable startup graphic
Post by neuronetv » 2014/08/20 22:24:51

I can't figure out how to disable the startup graphic in centos 7 64bit. In centos 6 I always did it by removing "rhgb quiet" from /boot/grub/grub.conf but there is no grub.conf in centos 7. I also tried yum remove rhgb but that wasn't present either.
<moan> I've never understood why the devs include this startup graphic, I see loads of users like me who want a text scroll instead.</moan>
Thanks for any help.
Re: disable startup graphic

Post by TrevorH » 2014/08/20 23:09:40

The file to amend now is /boot/grub2/grub.cfg and also /etc/default/grub. If you only amend the defaults file then you need to run grub2-mkconfig -o /boot/grub2/grub.cfg afterwards to get a new file generated, but you can also edit the grub.cfg file directly, though your changes will be wiped out on the next kernel install if you don't also edit the 'default' file.
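TrevorH's permanent method boils down to two commands; a sketch (it assumes the kernel line contains exactly " rhgb quiet", and keeps a backup of the defaults file):

sed -i.bak 's/ rhgb quiet//' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg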
Re: disable startup graphic

Post by neuronetv » 2014/08/21 13:12:45

thanks for that, I did the edits and now the scroll is back.
Re: disable startup graphic

Post by larryg » 2014/08/21 19:27:16

The preferred method to do this is using the command plymouth-set-default-theme.

If you enter this command, without parameters, as user root you'll see something like
>plymouth-set-default-theme
charge
details
text

This lists the themes installed on your computer. The default is 'charge'. If you want to see the boot up details you used to see in version 6, try
>plymouth-set-default-theme details

Followed by the command
>dracut -f

Then reboot.

This process modifies the boot loader so you won't have to update your grub.conf file manually every time a new kernel is installed.

There are numerous themes available you can download from CentOS or in general. Just google 'plymouth themes' to see other possibilities, if you're looking for graphics type screens.
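Taken together, larryg's procedure is a short sequence; a sketch for CentOS 7 (installed themes live under /usr/share/plymouth/themes/, and which ones you have may vary):

ls /usr/share/plymouth/themes/        # see which themes are installed
plymouth-set-default-theme details    # switch to the verbose 'details' theme
dracut -f                             # rebuild the initramfs so the change persists
reboot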

Re: disable startup graphic

Post by TrevorH » 2014/08/21 22:47:49

Editing /etc/default/grub to remove rhgb quiet makes it permanent too.
Re: disable startup graphic

Post by MalAdept » 2014/11/02 20:23:37

I tried both TrevorH's and LarryG's methods, and LarryG wins.

Editing /etc/default/grub to remove "rhgb quiet" gave me the scrolling boot messages I want, but it reduced maximum display resolution (nouveau driver) from 1920x1080 to 1024x768! I put "rhgb quiet" back in and got my 1920x1080 back.

Then I tried "plymouth-set-default-theme details; dracut -f", and got verbose booting without loss of display resolution. Thanks LarryG!

Re: disable startup graphic

Post by dunwell » 2015/12/13 00:17:18

I have used this mod to get back the details for grub boot, thanks to all for that info.

However when I am watching it, it fills the page and then, rather than scrolling up as it did in V5, it blanks and starts again at the top. Of course there is a FAIL message right before it blanks :lol: that I want to see, and I can't hit Scroll Lock fast enough to catch it. Anyone know how to get the details to scroll up rather than the blank and re-write?

Alan D.

Re: disable startup graphic

Post by aks » 2015/12/13 09:15:51

Yeah, the scroll lock/ctrl+q/ctrl+s will not work with systemd; you can't pause the screen like you used to be able to (it was a design choice, due to parallel daemon launching, apparently).
If you do boot, you can always use journalctl to view the logs.
In Fedora you can use journalctl --list-boots to list boots (not 100% sure about CentOS 7.x - perhaps in 7.1 or 7.2?). You can also use things like journalctl --boot=-1 (the last boot), and parse the log at your leisure.
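For example (standard journalctl options; boot -1 is the boot before the current one):

journalctl --list-boots          # enumerate recorded boots
journalctl -b -1 -p err          # only error-priority messages from the previous boot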
Re: disable startup graphic

Post by dunwell » 2015/12/13 14:18:29

aks wrote: Yeah, the scroll lock/ctrl+q/ctrl+s will not work with systemd; you can't pause the screen like you used to be able to (it was a design choice, due to parallel daemon launching, apparently). If you do boot, you can always use journalctl to view the logs. In Fedora you can use journalctl --list-boots to list boots (not 100% sure about CentOS 7.x - perhaps in 7.1 or 7.2?). You can also use things like journalctl --boot=-1 (the last boot), and parse the log at your leisure.

Thanks for the followup aks. Actually I have found that Scroll Lock does pause (Ctrl-S/Q does not), but it all goes by so fast that I'm not quick enough to stop it before the screen blanks and starts writing again. What I am really wondering is how to get the screen to scroll up when it gets to the bottom rather than blanking and starting to write again at the top. That is annoying! :x

Alan D.

Re: disable startup graphic

Post by aks » 2015/12/13 19:14:29

Yes it is, and no you can't. Kudos to Lennart for making our lives so much shitter....

[Feb 05, 2020] How do deactivate plymouth boot screen?

Jan 01, 2012 | askubuntu.com




Jo-Erlend Schinstad , 2012-01-25 22:06:57

Lately, booting Ubuntu on my desktop has become seriously slow. We're talking two minutes. It used to take 10-20 seconds. Because of plymouth, I can't see what's going on. I would like to deactivate it, but not really uninstall it. What's the quickest way to do that? I'm using Precise, but I suspect a solution for 11.10 would work just as well.


Did you try: sudo update-initramfs – mgajda Jun 19 '12 at 0:54

Panther:

Easiest quick fix is to edit the grub line as you boot.

Hold down the shift key so you see the menu. Hit the e key to edit

Edit the 'linux' line, remove the 'quiet' and 'splash'

To disable it in the long run

Edit /etc/default/grub

Change the line – GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to

GRUB_CMDLINE_LINUX_DEFAULT=""

And then update grub

sudo update-grub


Removing quiet and splash removes the splash, but I still only have a purple screen with no text. What I want to do, is to see the actual boot messages. – Jo-Erlend Schinstad Jan 25 '12 at 22:25

Tuminoid:

How about pressing CTRL+ALT+F2 for a console, allowing you to see what's going on. You can go back to GUI/Plymouth by CTRL+ALT+F7 .

Don't have my laptop here right now, but IIRC Plymouth has an upstart job in /etc/init , named plymouth???.conf; renaming that probably achieves what you want in a more permanent manner.


No, there's nothing on the other consoles. – Jo-Erlend Schinstad Jan 25 '12 at 22:22

[Nov 09, 2019] Mirroring a running system into a ramdisk Oracle Linux Blog

Nov 09, 2019 | blogs.oracle.com


By Greg Marsden

In this blog post, Oracle Linux kernel developer William Roche presents a method to mirror a running system into a ramdisk.

A RAM mirrored System ?

There are cases where a system can boot correctly but after some time, can lose its system disk access - for example an iSCSI system disk configuration that has network issues, or any other disk driver problem. Once the system disk is no longer accessible, we rapidly face a hang situation followed by I/O failures, without the possibility of local investigation on this machine. I/O errors can be reported on the console:

 XFS (dm-0): Log I/O Error Detected....

Or losing access to basic commands like:

# ls
-bash: /bin/ls: Input/output error

The approach presented here allows a small system disk space to be mirrored in memory to avoid the above I/O failures situation, which provides the ability to investigate the reasons for the disk loss. The system disk loss will be noticed as an I/O hang, at which point there will be a transition to use only the ram-disk.

To enable this, the Oracle Linux developer Philip "Bryce" Copeland created the following method (more details will follow):

Disk and memory sizes:

As we are going to mirror the entire system installation to the memory, this system installation image has to fit in a fraction of the memory - giving enough memory room to hold the mirror image and necessary running space.

Of course this is a trade-off between the memory available to the server and the minimal disk size needed to run the system. For example a 12GB disk space can be used for a minimal system installation on a 16GB memory machine.

A standard Oracle Linux installation uses XFS as root fs, which (currently) can't be shrunk. In order to generate a usable "small enough" system, it is recommended to proceed to the OS installation on a correctly sized disk space. Of course, a correctly sized installation location can be created using partitions of large physical disk. Then, the needed application filesystems can be mounted from their current installation disk(s). Some system adjustments may also be required (services added, configuration changes, etc...).

This configuration phase should not be underestimated, as it can be difficult to separate the system from the needed applications, and keeping both on the same space could make the image too large for RAM disk mirroring.

The idea is not to keep the entire system load active after losing disk access, but to have enough of a system left to avoid command failures and to analyze the situation.

We are also going to avoid the use of swap. When the system disk access is lost, we don't want to require it for swap data. Also, we don't want to use more memory space to hold a swap space mirror. The memory is better used directly by the system itself.

The system installation can have a swap space (for example a 1.2GB space on our 12GB disk example) but we are neither going to mirror it nor use it.

Our 12GB disk example could be used with: 1GB /boot space, 11GB LVM Space (1.2GB swap volume, 9.8 GB root volume).

Ramdisk memory footprint:

The ramdisk size has to be a little larger (8M) than the root volume size that we are going to mirror, making room for metadata. But we can deal with 2 types of ramdisk: the standard block ramdisk (brd) and the compressed zram device.

We can expect roughly 30% to 50% memory space gain from zram compared to brd, but zram must use 4k I/O blocks only. This means that the filesystem used for root has to only deal with a multiple of 4k I/Os.

Basic commands:

Here is a simple list of commands to manually create and use a ramdisk and mirror the root filesystem space. We create a temporary configuration that needs to be undone or the subsequent reboot will not work. But we also provide below a way of automating at startup and shutdown.

Note the root volume size (considered to be ol/root in this example):

# lvs --units k -o lv_size ol/root
  LSize
  10268672.00k

Create a ramdisk a little larger than that (at least 8M larger):

# modprobe brd rd_nr=1 rd_size=$((10268672 + 8*1024))

Verify the created disk:

# lsblk /dev/ram0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
ram0   1:0   0 9.8G  0 disk

Put the disk under lvm control

# pvcreate /dev/ram0
  Physical volume "/dev/ram0" successfully created.
# vgextend ol /dev/ram0
  Volume group "ol" successfully extended
# vgscan --cache
  Reading volume groups from cache.
  Found volume group "ol" using metadata type lvm2
# lvconvert -y -m 1 ol/root /dev/ram0
  Logical volume ol/root successfully converted.

We now have ol/root mirror to our /dev/ram0 disk.

# lvs -a -o +devices
  LV              VG Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---   9.79g                                  40.70            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---   9.79g                                                   /dev/sda2(307)
  [root_rimage_1] ol Iwi-aor---   9.79g                                                   /dev/ram0(1)
  [root_rmeta_0]  ol ewi-aor---   4.00m                                                   /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---   4.00m                                                   /dev/ram0(0)
  swap            ol -wi-ao---- <1.20g                                                    /dev/sda2(0)

A few minutes (or seconds) later, the synchronization is completed:

# lvs -a -o +devices
  LV              VG Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---   9.79g                                  100.00           root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---   9.79g                                                   /dev/sda2(307)
  [root_rimage_1] ol iwi-aor---   9.79g                                                   /dev/ram0(1)
  [root_rmeta_0]  ol ewi-aor---   4.00m                                                   /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---   4.00m                                                   /dev/ram0(0)
  swap            ol -wi-ao---- <1.20g                                                    /dev/sda2(0)

We have our mirrored configuration running!
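While the mirror synchronizes, progress can be followed via the Cpy%Sync column; for instance (volume names as in the example above):

# watch -n 5 'lvs -o lv_name,copy_percent ol/root'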

For security, we can also remove the swap and /boot, /boot/efi(if it exists) mount points:

# swapoff -a
# umount /boot/efi
# umount /boot

Stopping the system also requires some actions as you need to cleanup the configuration so that it will not be looking for a gone ramdisk on reboot.

# lvconvert -y -m 0 ol/root /dev/ram0
  Logical volume ol/root successfully converted.
# vgreduce ol /dev/ram0
  Removed "/dev/ram0" from volume group "ol"
# mount /boot
# mount /boot/efi
# swapon -a
What about in-memory compression ?

As indicated above, zRAM devices can compress data in-memory, but 2 main problems need to be fixed: lvm must be configured to accept zram devices, and the root filesystem must issue only 4k I/Os. Both are addressed below.

Make lvm work with zram:

The lvm configuration file has to be changed to take into account the "zram" type of devices. Including the following "types" entry to the /etc/lvm/lvm.conf file in its "devices" section:

devices {
        types = [ "zram", 16 ]
}
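A quick sanity check that lvm now accepts the device is a dry run (assuming a zram device has already been created as /dev/zram0):

# pvcreate --test /dev/zram0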
Root file system I/Os:

A standard Oracle Linux installation uses XFS, and we can check the sector size used (depending on the disk type used) with

# xfs_info /
meta-data=/dev/mapper/ol-root  isize=256    agcount=4, agsize=641792 blks
         =                     sectsz=512   attr=2, projid32bit=1
         =                     crc=0        finobt=0 spinodes=0
data     =                     bsize=4096   blocks=2567168, imaxpct=25
         =                     sunit=0      swidth=0 blks
naming   =version 2            bsize=4096   ascii-ci=0 ftype=1
log      =internal             bsize=4096   blocks=2560, version=2
         =                     sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                 extsz=4096   blocks=0, rtextents=0

We can notice here that the sector size (sectsz) used on this root fs is a standard 512 bytes. This fs type cannot be mirrored with a zRAM device, and needs to be recreated with 4k sector sizes.

Transforming the root file system to 4k sector size:

This is simply a backup (to a zram disk) and restore procedure after recreating the root FS. To do so, the system has to be booted from another system image. Booting from an installation DVD image can be a good possibility.

sh-4.2# vgchange -a y ol
  2 logical volume(s) in volume group "ol" now active
sh-4.2# mount /dev/mapper/ol-root /mnt
sh-4.2# modprobe zram
sh-4.2# echo 10G > /sys/block/zram0/disksize
sh-4.2# mkfs.xfs /dev/zram0
meta-data=/dev/zram0           isize=256    agcount=4, agsize=655360 blks
         =                     sectsz=4096  attr=2, projid32bit=1
         =                     crc=0        finobt=0, sparse=0
data     =                     bsize=4096   blocks=2621440, imaxpct=25
         =                     sunit=0      swidth=0 blks
naming   =version 2            bsize=4096   ascii-ci=0 ftype=1
log      =internal log         bsize=4096   blocks=2560, version=2
         =                     sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                 extsz=4096   blocks=0, rtextents=0
sh-4.2# mkdir /mnt2
sh-4.2# mount /dev/zram0 /mnt2
sh-4.2# xfsdump -L BckUp -M dump -f /mnt2/ROOT /mnt
xfsdump: using file dump (drive_simple) strategy
xfsdump: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsdump: level 0 dump of localhost:/mnt
...
xfsdump: dump complete: 130 seconds elapsed
xfsdump: Dump Summary:
xfsdump:   stream 0 /mnt2/ROOT OK (success)
xfsdump: Dump Status: SUCCESS
sh-4.2# umount /mnt
sh-4.2# mkfs.xfs -f -s size=4096 /dev/mapper/ol-root
meta-data=/dev/mapper/ol-root  isize=256    agcount=4, agsize=641792 blks
         =                     sectsz=4096  attr=2, projid32bit=1
         =                     crc=0        finobt=0, sparse=0
data     =                     bsize=4096   blocks=2567168, imaxpct=25
         =                     sunit=0      swidth=0 blks
naming   =version 2            bsize=4096   ascii-ci=0 ftype=1
log      =internal log         bsize=4096   blocks=2560, version=2
         =                     sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                 extsz=4096   blocks=0, rtextents=0
sh-4.2# mount /dev/mapper/ol-root /mnt
sh-4.2# xfsrestore -f /mnt2/ROOT /mnt
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 3.1.7 (dump format 3.0) - type ^C for status and control
xfsrestore: searching media for dump
...
xfsrestore: restore complete: 337 seconds elapsed
xfsrestore: Restore Summary:
xfsrestore:   stream 0 /mnt2/ROOT OK (success)
xfsrestore: Restore Status: SUCCESS
sh-4.2# umount /mnt
sh-4.2# umount /mnt2
sh-4.2# reboot
$ xfs_info /
meta-data=/dev/mapper/ol-root  isize=256    agcount=4, agsize=641792 blks
         =                     sectsz=4096  attr=2, projid32bit=1
         =                     crc=0        finobt=0 spinodes=0
data     =                     bsize=4096   blocks=2567168, imaxpct=25
         =                     sunit=0      swidth=0 blks
naming   =version 2            bsize=4096   ascii-ci=0 ftype=1
log      =internal             bsize=4096   blocks=2560, version=2
         =                     sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                 extsz=4096   blocks=0, rtextents=0

With sectsz=4096, our system is now ready for zRAM mirroring.

Basic commands with a zRAM device:

# modprobe zram
# zramctl --find --size 10G
/dev/zram0
# pvcreate /dev/zram0
  Physical volume "/dev/zram0" successfully created.
# vgextend ol /dev/zram0
  Volume group "ol" successfully extended
# vgscan --cache
  Reading volume groups from cache.
  Found volume group "ol" using metadata type lvm2
# lvconvert -y -m 1 ol/root /dev/zram0
  Logical volume ol/root successfully converted.
# lvs -a -o +devices
  LV              VG Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---   9.79g                                  12.38            root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---   9.79g                                                   /dev/sda2(307)
  [root_rimage_1] ol Iwi-aor---   9.79g                                                   /dev/zram0(1)
  [root_rmeta_0]  ol ewi-aor---   4.00m                                                   /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---   4.00m                                                   /dev/zram0(0)
  swap            ol -wi-ao---- <1.20g                                                    /dev/sda2(0)
# lvs -a -o +devices          (once synchronization completes)
  LV              VG Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
  root            ol rwi-aor---   9.79g                                  100.00           root_rimage_0(0),root_rimage_1(0)
  [root_rimage_0] ol iwi-aor---   9.79g                                                   /dev/sda2(307)
  [root_rimage_1] ol iwi-aor---   9.79g                                                   /dev/zram0(1)
  [root_rmeta_0]  ol ewi-aor---   4.00m                                                   /dev/sda2(2814)
  [root_rmeta_1]  ol ewi-aor---   4.00m                                                   /dev/zram0(0)
  swap            ol -wi-ao---- <1.20g                                                    /dev/sda2(0)
# zramctl
NAME       ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lzo            10G 9.8G  5.3G  5.5G       1

The compressed disk uses a total of 5.5GB of memory to mirror a 9.8G volume size (using in this case 8.5G).

Removal is performed the same way as brd, except that the device is /dev/zram0 instead of /dev/ram0.

Automating the process:

Fortunately, the procedure can be automated on system boot and shutdown with the following scripts (given as examples).

The start method: /usr/sbin/start-raid1-ramdisk: [ https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/start-raid1-ramdisk ]

After a chmod 555 /usr/sbin/start-raid1-ramdisk, running this script on a 4k xfs root file system should show something like:

# /usr/sbin/start-raid1-ramdisk
  Volume group "ol" is already consistent.
RAID1 ramdisk: intending to use 10276864 K of memory for facilitation of [ / ]
  Physical volume "/dev/zram0" successfully created.
  Volume group "ol" successfully extended
  Logical volume ol/root successfully converted.
Waiting for mirror to synchronize...
LVM RAID1 sync of [ / ] took 00:01:53 sec
  Logical volume ol/root changed.
NAME       ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4           9.8G 9.8G  5.5G  5.8G       1

The stop method: /usr/sbin/stop-raid1-ramdisk: [ https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/stop-raid1-ramdisk ]

After a chmod 555 /usr/sbin/stop-raid1-ramdisk, running this script should show something like:

# /usr/sbin/stop-raid1-ramdisk
  Volume group "ol" is already consistent.
  Logical volume ol/root changed.
  Logical volume ol/root successfully converted.
  Removed "/dev/zram0" from volume group "ol"
  Labels on physical volume "/dev/zram0" successfully wiped.

A service Unit file can also be created: /etc/systemd/system/raid1-ramdisk.service [https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/raid1-ramdisk.service]

[Unit]
Description=Enable RAMdisk RAID 1 on LVM
After=local-fs.target
Before=shutdown.target reboot.target halt.target

[Service]
ExecStart=/usr/sbin/start-raid1-ramdisk
ExecStop=/usr/sbin/stop-raid1-ramdisk
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0

[Install]
WantedBy=multi-user.target
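Assuming the unit file above is installed as /etc/systemd/system/raid1-ramdisk.service, it is activated like any other service:

# systemctl daemon-reload
# systemctl enable raid1-ramdisk.service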
Conclusion:

When the system disk access problem manifests itself, the ramdisk mirror branch will provide the possibility to investigate the situation. The goal of this procedure is not to keep the system running on this memory-mirror configuration, but to help investigate a bad situation.

When the problem is identified and fixed, I really recommend coming back to a standard configuration -- enjoying the entire memory of the system, a standard system disk, a possible swap space, etc.

Hoping the method described here can help. I also want to thank, for their reviews, Philip "Bryce" Copeland (who also created the first prototype of the above scripts) and Mark Kanda (who helped test many aspects of this work).

[Nov 08, 2019] A Linux user's guide to Logical Volume Management Opensource.com

Nov 08, 2019 | opensource.com

In Figure 1, two complete physical hard drives and one partition from a third hard drive have been combined into a single volume group. Two logical volumes have been created from the space in the volume group, and a filesystem, such as EXT3 or EXT4, has been created on each of the two logical volumes.

Figure 1: LVM allows combining partitions and entire hard drives into Volume Groups.

Adding disk space to a host is fairly straightforward but, in my experience, is done relatively infrequently. The basic steps needed are listed below. You can either create an entirely new volume group or you can add the new space to an existing volume group and either expand an existing logical volume or create a new one.

Adding a new logical volume

There are times when it is necessary to add a new logical volume to a host. For example, after noticing that the directory containing virtual disks for my VirtualBox virtual machines was filling up the /home filesystem, I decided to create a new logical volume in which to store the virtual machine data, including the virtual disks. This would free up a great deal of space in my /home filesystem and also allow me to manage the disk space for the VMs independently.

The basic steps for adding a new logical volume are as follows.

  1. If necessary, install a new hard drive.
  2. Optional: Create a partition on the hard drive.
  3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.
  4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.
  5. Create a new logical volume (LV) from the space in the volume group.
  6. Create a filesystem on the new logical volume.
  7. Add appropriate entries to /etc/fstab for mounting the filesystem.
  8. Mount the filesystem.

Now for the details. The following sequence is taken from an example I used as a lab project when teaching about Linux filesystems.

Example

This example shows how to use the CLI to extend an existing volume group to add more space to it, create a new logical volume in that space, and create a filesystem on the logical volume. This procedure can be performed on a running, mounted filesystem.

WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems including BTRFS and ZFS cannot be resized.

Install hard drive

If there is not enough space in the volume group on the existing hard drive(s) in the system to add the desired amount of space it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive, and then perform the following steps.

Create Physical Volume from hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.

pvcreate /dev/hdd

It is not necessary to create a partition of any kind on the new hard drive. This creation of the Physical Volume which will be recognized by the Logical Volume Manager can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

Extend the existing Volume Group

In this example we will extend an existing volume group rather than creating a new one; you can choose to do it either way. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example the existing Volume Group is named MyVG01.

vgextend /dev/MyVG01 /dev/hdd
Create the Logical Volume

First create the Logical Volume (LV) from existing free space within the Volume Group. The command below creates a LV with a size of 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.

lvcreate -L +50G --name Stuff MyVG01
Create the filesystem

Creating the Logical Volume does not create the filesystem. That task must be performed separately. The command below creates an EXT4 filesystem that fits the newly created Logical Volume.

mkfs -t ext4 /dev/MyVG01/Stuff
Add a filesystem label

Adding a filesystem label makes it easy to identify the filesystem later in case of a crash or other disk related problems.

e2label /dev/MyVG01/Stuff Stuff
Mount the filesystem

At this point you can create a mount point, add an appropriate entry to the /etc/fstab file, and mount the filesystem.
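A sketch of those last two steps, using the volume from this example (the /Stuff mount point name is my choice here):

mkdir -p /Stuff
echo '/dev/MyVG01/Stuff  /Stuff  ext4  defaults  1 2' >> /etc/fstab
mount /Stuff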

You should also check to verify the volume has been created correctly. You can use the df , lvs, and vgs commands to do this.

Resizing a logical volume in an LVM filesystem

The need to resize a filesystem has been around since the beginning of the first versions of Unix and has not gone away with Linux. It has gotten easier, however, with Logical Volume Management.

  1. If necessary, install a new hard drive.
  2. Optional: Create a partition on the hard drive.
  3. Create a physical volume (PV) of the complete hard drive or a partition on the hard drive.
  4. Assign the new physical volume to an existing volume group (VG) or create a new volume group.
  5. Create one or more logical volumes (LV) from the space in the volume group, or expand an existing logical volume with some or all of the new space in the volume group.
  6. If you created a new logical volume, create a filesystem on it. If adding space to an existing logical volume, use the resize2fs command to enlarge the filesystem to fill the space in the logical volume.
  7. Add appropriate entries to /etc/fstab for mounting the filesystem.
  8. Mount the filesystem.
Example

This example describes how to resize an existing Logical Volume in an LVM environment using the CLI. It adds about 50GB of space to the /Stuff filesystem. This procedure can be used on a mounted, live filesystem only with the Linux 2.6 Kernel (and higher) and EXT3 and EXT4 filesystems. I do not recommend that you do so on any critical system, but it can be done and I have done so many times; even on the root (/) filesystem. Use your judgment.

WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted filesystem. Many other filesystems including BTRFS and ZFS cannot be resized.

Install the hard drive

If there is not enough space on the existing hard drive(s) in the system to add the desired amount of space it may be necessary to add a new hard drive and create the space to add to the Logical Volume. First, install the physical hard drive and then perform the following steps.

Create a Physical Volume from the hard drive

It is first necessary to create a new Physical Volume (PV). Use the command below, which assumes that the new hard drive is assigned as /dev/hdd.

pvcreate /dev/hdd

It is not necessary to create a partition of any kind on the new hard drive. This creation of the Physical Volume which will be recognized by the Logical Volume Manager can be performed on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the entire hard drive, creating a partition first does not offer any particular advantages and uses disk space for metadata that could otherwise be used as part of the PV.

Add PV to existing Volume Group

For this example, we will use the new PV to extend an existing Volume Group. After the Physical Volume has been created, extend the existing Volume Group (VG) to include the space on the new PV. In this example, the existing Volume Group is named MyVG01.

vgextend /dev/MyVG01 /dev/hdd
Extend the Logical Volume

Extend the Logical Volume (LV) from existing free space within the Volume Group. The command below expands the LV by 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is Stuff.

lvextend -L +50G /dev/MyVG01/Stuff
Expand the filesystem

Extending the Logical Volume will also expand the filesystem if you use the -r option. If you do not use the -r option, that task must be performed separately. The command below resizes the filesystem to fit the newly resized Logical Volume.

resize2fs /dev/MyVG01/Stuff
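Note that lvextend can fold both steps into one with the -r option mentioned above; an equivalent for this example:

lvextend -L +50G -r /dev/MyVG01/Stuff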

You should check to verify the resizing has been performed correctly. You can use the df , lvs, and vgs commands to do this.

Tips

Over the years I have learned a few things that can make logical volume management even easier than it already is. Hopefully these tips can prove of some value to you.

I know that, like me, many sysadmins have resisted the change to Logical Volume Management. I hope that this article will encourage you to at least try LVM. I am really glad that I did; my disk management tasks are much easier since I made the switch. – David Both

[Nov 02, 2019] LVM spanning over multiple disks What disk is a file on? Can I lose a drive without total loss

Notable quotes:
"... If you lose a drive in a volume group, you can force the volume group online with the missing physical volume, but you will be unable to open the LV's that were contained on the dead PV, whether they be in whole or in part. ..."
"... So, if you had for instance 10 LV's, 3 total on the first drive, #4 partially on first drive and second drive, then 5-7 on drive #2 wholly, then 8-10 on drive 3, you would be potentially able to force the VG online and recover LV's 1,2,3,8,9,10.. #4,5,6,7 would be completely lost. ..."
"... LVM doesn't really have the concept of a partition it uses PVs (Physical Volumes), which can be a partition. These PVs are broken up into extents and then these are mapped to the LVs (Logical Volumes). When you create the LVs you can specify if the data is striped or mirrored but the default is linear allocation. So it would use the extents in the first PV then the 2nd then the 3rd. ..."
"... As Peter has said the blocks appear as 0's if a PV goes missing. So you can potentially do data recovery on files that are on the other PVs. But I wouldn't rely on it. You normally see LVM used in conjunction with RAIDs for this reason. ..."
"... it's effectively as if a huge chunk of your disk suddenly turned to badblocks. You can patch things back together with a new, empty drive to which you give the same UUID, and then run an fsck on any filesystems on logical volumes that went across the bad drive to hope you can salvage something. ..."
Mar 16, 2015 | serverfault.com

I have three 990GB partitions over three drives in my server. Using LVM, I can create one ~3TB partition for file storage.

1) How does the system determine what partition to use first?
2) Can I find what disk a file or folder is physically on?
3) If I lose a drive in the LVM, do I lose all data, or just data physically on that disk?

– asked by Luke has no name, Dec 2 '10
  1. The system fills from the first disk in the volume group to the last, unless you configure striping with extents.
  2. I don't think this is possible, but where I'd start to look is in the lvs/vgs commands man pages.
  3. If you lose a drive in a volume group, you can force the volume group online with the missing physical volume, but you will be unable to open the LVs that were contained on the dead PV, whether they be in whole or in part. So, if you had for instance 10 LVs: 3 wholly on the first drive, #4 partially on the first and second drives, 5-7 wholly on drive #2, and 8-10 on drive 3, you would potentially be able to force the VG online and recover LVs 1, 2, 3, 8, 9, 10; LVs 4, 5, 6, 7 would be completely lost. – Peter Grace

1) How does the system determine what partition to use first?

LVM doesn't really have the concept of a partition it uses PVs (Physical Volumes), which can be a partition. These PVs are broken up into extents and then these are mapped to the LVs (Logical Volumes). When you create the LVs you can specify if the data is striped or mirrored but the default is linear allocation. So it would use the extents in the first PV then the 2nd then the 3rd.

2) Can I find what disk a file or folder is physically on?

You can determine what PVs a LV has allocation extents on. But I don't know of a way to get that information for an individual file.
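The extent-to-PV mapping of each LV can be listed with the segments view of lvs; a sketch (run it against your own volume group):

lvs --segments -o lv_name,seg_start,seg_size,devices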

3) If I lose a drive in the LVM, do I lose all data, or just data physically on that disk?

As Peter has said the blocks appear as 0's if a PV goes missing. So you can potentially do data recovery on files that are on the other PVs. But I wouldn't rely on it. You normally see LVM used in conjunction with RAIDs for this reason.

– 3dinfluence

I don't know the answer to #2, so I'll leave that to someone else. I suspect "no", but I'm willing to be happily surprised.

1 is: you tell it, when you combine the physical volumes into a volume group.

3 is: it's effectively as if a huge chunk of your disk suddenly turned to badblocks. You can patch things back together with a new, empty drive to which you give the same UUID, and then run an fsck on any filesystems on logical volumes that went across the bad drive to hope you can salvage something.

And to the overall, unasked question: yeah, you probably don't really want to do that.

[Nov 02, 2019] Raid-5 is obsolete if you use large drives, such as 2TB or 3TB disks. You should instead use raid-6 (two disks can fail)

Notable quotes:
"... RAID5 can survive a single drive failure. However, once you replace that drive, it has to be initialized. Depending on the controller and other things, this can take anywhere from 5-18 hours. During this time, all drives will be in constant use to re-create the failed drive. It is during this time that people worry that the rebuild would cause another drive near death to die, causing a complete array failure. ..."
"... If during a rebuild one of the remaining disks experiences BER, your rebuild stops and you may have headaches recovering from such a situation, depending on controller design and user interaction. ..."
"... RAID5 + a GOOD backup is something to consider, though. ..."
"... Raid-5 is obsolete if you use large drives , such as 2TB or 3TB disks. You should instead use raid-6 ..."
"... RAID 6 offers more redundancy than RAID 5 (which is absolutely essential, RAID 5 is a walking disaster) at the cost of multiple parity writes per data write. This means the performance will be typically worse (although it's not theoretically much worse, since the parity operations are in parallel). ..."
Oct 03, 2019 | hardforum.com

RAID5 can survive a single drive failure. However, once you replace that drive, it has to be initialized. Depending on the controller and other things, this can take anywhere from 5-18 hours. During this time, all drives will be in constant use to re-create the failed drive. It is during this time that people worry that the rebuild would cause another drive near death to die, causing a complete array failure.

This isn't the only danger. The problem with 2TB disks, especially if they are not 4K sector disks, is that they have a relatively high BER for their capacity, so the likelihood of BER actually occurring and translating into an unreadable sector is something to worry about.

If during a rebuild one of the remaining disks experiences BER, your rebuild stops and you may have headaches recovering from such a situation, depending on controller design and user interaction.

So I would say that with modern high-BER drives: RAID5 protects against zero complete disk failures, and RAID6 against only one, since one parity disk's worth of redundancy is effectively consumed by BER protection.

So essentially you'll lose one parity disk alone for the BER issue. Not everyone will agree with my analysis, but considering RAID5 with today's high-capacity drives 'safe' is open for debate.

RAID5 + a GOOD backup is something to consider, though.

So you're saying BER is the error count that 'escapes' the ECC correction? I do not believe that is correct, but I'm open to good arguments or links.

    As I understand it, the BER is what prompts bad sectors: where the number of errors exceeds the ECC's correcting ability, you will have an unrecoverable sector (Current Pending Sector in SMART output).

    Also these links are interesting in this context:

    http://blog.econtech.selfip.org/200...s-not-fully-readable-a-lawsuit-in-the-making/

    The short story first: Your consumer level 1TB SATA drive has a 44% chance that it can be completely read without any error. If you run a RAID setup, this is really bad news because it may prevent rebuilding an array in the case of disk failure, making your RAID not so Redundant.
    Not sure on the numbers the article comes up with, though.

    Also this one is interesting:
    http://lefthandnetworks.typepad.com/virtual_view/2008/02/what-does-data.html

    BER simply means that while reading your data from the disk drive you will get an average of one non-recoverable error in so many bits read, as specified by the manufacturer.
    Rebuilding the data on a replacement drive with most RAID algorithms requires that all the other data on the other drives be pristine and error free. If there is a single error in a single sector, then the data for the corresponding sector on the replacement drive cannot be reconstructed, and therefore the RAID rebuild fails and data is lost. The frequency of this disastrous occurrence is derived from the BER.  Simple calculations will show that the chance of data loss due to BER is much greater than all other reasons combined.
    These links do suggest that BER works to produce unrecoverable sectors, and not 'escape' them as 'undetected' bad sectors, if I understood you correctly.
    parityOCP said:
    That guy's a bit of a scaremonger to be honest. He may have a point with consumer drives, but the article is sensationalised to a certain degree. However, there are still a few outfits that won't go past 500GB/drive in an array (even with enterprise drives), simply to reduce the failure window during a rebuild.
    Why is he a scaremonger? He is correct. Have you read his article? In fact, he has copied his argument from Adam Leventhal(?), who was one of the ZFS developers, I believe.

    Adam's argument goes like this:
    Disks are getting larger all the time; in fact, storage increases exponentially. At the same time, bandwidth is increasing not nearly as fast - we are still at about 100MB/sec even after decades. So, bandwidth has increased maybe 20x after decades, while storage has increased from 10MB to 3TB = 300,000 times.

    The trend is clear. In the future when we have 10TB drives, they will not be much faster than today. This means that repairing a raid with 3TB disks today will take several days, maybe even one week. With 10TB drives, it will take several weeks, maybe a month.

    Repairing a raid stresses the other disks a lot, which means they can break too. Experienced sysadmins report that this happens quite often during a repair. Maybe because those disks come from the same batch, they have the same weakness. Some sysadmins therefore mix disks from different vendors and batches.

    Hence, I would not want to run a raid with 3TB disks and only use raid-5. During those days, if just one more disk crashes, you have lost all your data.

    Hence, that article is correct, and he is not a scaremonger. Raid-5 is obsolete if you use large drives, such as 2TB or 3TB disks. You should instead use raid-6 (two disks can fail). That is the conclusion of the article: use raid-6 with large disks, forget raid-5. This is true, and not scaremongery.

    In fact, ZFS therefore has something called raidz3 - which means that three disks can fail without problems. To the OT: no, raid-5 is not safe. Neither is raid-6, because neither of them can always repair, nor even always detect, corrupted data. There are cases when they don't even notice that you got corrupted bits. See my other thread for more information about this. That is the reason people are switching to ZFS - which always CAN detect and repair those corrupted bits. I suggest you sell your hardware raid card and use ZFS, which requires no hardware raid. ZFS just uses JBOD.

    Here are research papers on raid-5, raid-6 and ZFS and corruption:
    http://hardforum.com/showpost.php?p=1036404173&postcount=73

    brutalizer said:
    The trend is clear. In the future when we have 10TB drives, they will not be much faster than today. This means that repairing a raid with 3TB disks today will take several days, maybe even one week. With 10TB drives, it will take several weeks, maybe a month.
    While I agree with the general claim that the larger HDDs (1.5, 2, 3TBs) are best used in RAID 6, your claim about rebuild times is way off.

    I think it is not unreasonable to assume that the 10TB drives will be able to read and write at 200 MB/s or more. We already have 2TB drives with 150MB/s sequential speeds, so 200 MB/s is actually a conservative estimate.

    10e12/200e6 = 50000 secs = 13.9 hours. Even if there is 100% overhead (half the throughput), that is less than 28 hours to do the rebuild. It is a long time, but it is nowhere near a month! Try to back your claims in reality.

    And you have again made the false claim that "ZFS - which always CAN detect and repair those corrupted bits". ZFS can usually detect corrupted bits, and can usually correct them if you have duplication or parity, but nothing can always detect and repair. ZFS is safer than many alternatives, but nothing is perfectly safe. Corruption can and has happened with ZFS, and it will happen again.

See also: RAID 5 vs RAID 10: Recommended RAID For Safety and Performance (https://www.cyberciti.biz/tips/raid5-vs-raid-10-safety-performance.html):

RAID 6 offers more redundancy than RAID 5 (which is absolutely essential, RAID 5 is a walking disaster) at the cost of multiple parity writes per data write. This means the performance will be typically worse (although it's not theoretically much worse, since the parity operations are in parallel).

[Oct 02, 2019] raid5 - Can I recover a RAID 5 array if two drives have failed - Server Fault

Oct 02, 2019 | serverfault.com

I have a Dell 2600 with 6 drives configured in a RAID 5 on a PERC 4 controller. 2 drives failed at the same time, and according to what I know a RAID 5 is recoverable if 1 drive fails. I'm not sure if the fact I had six drives in the array might save my skin.

I bought 2 new drives and plugged them in but no rebuild happened as I expected. Can anyone shed some light?


Regardless of how many drives are in use, a RAID 5 array only allows for recovery in the event that just one disk at a time fails.

What 3molo says is a fair point but even so, not quite correct I think - if two disks in a RAID5 array fail at the exact same time then a hot spare won't help, because a hot spare replaces one of the failed disks and rebuilds the array without any intervention, and a rebuild isn't possible if more than one disk fails.

For now, I am sorry to say that your options for recovering this data are going to involve restoring a backup.

For the future you may want to consider one of the more robust forms of RAID (not sure what options a PERC4 supports) such as RAID 6 or a nested RAID array . Once you get above a certain number of disks in an array, you reach the point where the chance that more than one of them can fail before a replacement is installed and rebuilt becomes unacceptably high. – Rob Moir

You can try to force one or both of the failed disks to be online from the BIOS interface of the controller. Then check that the data and the file system are consistent. – Mircea Vutcovici

The direct answer is "No"; the indirect one is "It depends". Mainly it depends on whether the disks are partially or completely out of order. If they are partially broken, you can give it a try: I would copy both failed disks (using a tool like ddrescue). Then I'd try to run the bunch of disks using Linux SoftRAID, re-trying with the proper order of disks and stripe size in read-only mode and counting CRC mismatches. It's quite doable, I should say; this text in Russian mentions a 12-disk RAID50's recovery using LSR, for example. – poige

It is possible if the raid had one spare drive, and one of your failed disks died before the second one. In that case, you just need to try to reconstruct the array virtually with 3rd-party software. A small article about this process can be found on this page: http://www.angeldatarecovery.com/raid5-data-recovery/

And, if you really need one of the dead drives, you can send it to a recovery shop. With these disk images you can reconstruct the raid properly with good chances.
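The disk imaging step poige mentions could look like this (GNU ddrescue; the device and target paths are examples, and the image must be stored on a different, healthy disk):

ddrescue -n /dev/sdc /mnt/images/sdc.img /mnt/images/sdc.map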

[Sep 23, 2019] How to recover deleted files with foremost on Linux - LinuxConfig.org

Sep 23, 2019 | linuxconfig.org
In this article we will talk about foremost , a very useful open source forensic utility which is able to recover deleted files using the technique called data carving . The utility was originally developed by the United States Air Force Office of Special Investigations, and is able to recover several file types (support for specific file types can be added by the user, via the configuration file). The program can also work on partition images produced by dd or similar tools.

In this tutorial you will learn:

- How to install foremost
- How to use foremost to recover deleted files
- How to restrict recovery to specific file types
- How to specify an alternative output directory
[Figure: foremost manual page] Foremost is a forensic data recovery program for Linux used to recover files using their headers, footers, and data structures through a process known as file carving.

Software Requirements and Conventions Used
Software Requirements and Linux Command Line Conventions

Category      Requirements, Conventions or Software Version Used
System        Distribution-independent
Software      The "foremost" program
Other         Familiarity with the command line interface
Conventions   # - requires given linux commands to be executed with root privileges,
              either directly as a root user or by use of the sudo command
              $ - requires given linux commands to be executed as a regular non-privileged user
Installation

Since foremost is already present in all the major Linux distributions' repositories, installing it is a very easy task. All we have to do is to use our favorite distribution package manager. On Debian and Ubuntu, we can use apt :

$ sudo apt install foremost

In recent versions of Fedora, we use the dnf package manager to install packages (dnf is the successor of yum). The name of the package is the same:

$ sudo dnf install foremost

If we are using ArchLinux, we can use pacman to install foremost . The program can be found in the distribution "community" repository:

$ sudo pacman -S foremost



Basic usage
WARNING
No matter which file recovery tool or process you are going to use to recover your files, before you begin it is recommended to perform a low-level hard drive or partition backup, thus avoiding an accidental data overwrite. That way you can re-try the recovery even after an unsuccessful attempt. Check the following dd command guide on how to perform a low-level hard drive or partition backup.

The foremost utility tries to recover and reconstruct files on the basis of their headers, footers and data structures, without relying on filesystem metadata. This forensic technique is known as file carving. The program supports many file types, including for example avi, bmp, doc, gif, jpg, mp4, pdf, png, rar, wav, xls and zip.

The most basic way to use foremost is by providing a source to scan for deleted files (it can be either a partition or an image file, such as those generated with dd). Let's see an example. Imagine we want to scan the /dev/sdb1 partition: before we begin, a very important thing to remember is to never store retrieved data on the same partition we are retrieving the data from, to avoid overwriting deleted files still present on the block device. The command we would run is:

$ sudo foremost -i /dev/sdb1

By default, the program creates a directory called output inside the directory we launched it from and uses it as destination. Inside this directory, a subdirectory for each supported file type we are attempting to retrieve is created. Each directory will hold the corresponding file type obtained from the data carving process:

output
├── audit.txt
├── avi
├── bmp
├── dll
├── doc
├── docx
├── exe
├── gif
├── htm
├── jar
├── jpg
├── mbd
├── mov
├── mp4
├── mpg
├── ole
├── pdf
├── png  
├── ppt
├── pptx
├── rar
├── rif
├── sdw
├── sx
├── sxc
├── sxi
├── sxw
├── vis
├── wav
├── wmv
├── xls
├── xlsx
└── zip

When foremost completes its job, empty directories are removed. Only the ones containing files are left on the filesystem: this lets us immediately know what types of files were successfully retrieved. By default the program tries to retrieve all the supported file types; to restrict our search, we can, however, use the -t option and provide a list of the file types we want to retrieve, separated by a comma. In the example below, we restrict the search only to gif and pdf files:

$ sudo foremost -t gif,pdf -i /dev/sdb1

https://www.youtube.com/embed/58S2wlsJNvo

In this video we will test the forensic data recovery program Foremost to recover a single png file from /dev/sdb1 partition formatted with the EXT4 filesystem.



Specifying an alternative destination

As we already said, if a destination is not explicitly declared, foremost creates an output directory inside our cwd. What if we want to specify an alternative path? All we have to do is to use the -o option and provide said path as an argument. If the specified directory doesn't exist, it is created; if it exists but is not empty, the program complains:

ERROR: /home/egdoc/data is not empty
        Please specify another directory or run with -T.

To solve the problem, as suggested by the program itself, we can either use another directory or re-launch the command with the -T option. If we use the -T option, the output directory specified with the -o option is timestamped. This makes it possible to run the program multiple times with the same destination. In our case the directory that would be used to store the retrieved files would be:

/home/egdoc/data_Thu_Sep_12_16_32_38_2019
The configuration file

The foremost configuration file can be used to specify file formats not natively supported by the program. Inside the file we can find several commented examples showing the syntax that should be used to accomplish the task. Here is an example involving the png type (the lines are commented since the file type is supported by default):

# PNG   (used in web pages)
#       (NOTE THIS FORMAT HAS A BUILTIN EXTRACTION FUNCTION)
#       png     y       200000  \x50\x4e\x47?   \xff\xfc\xfd\xfe

The information to provide in order to add support for a file type is, from left to right, separated by a tab character: the file extension ( png in this case), whether the header and footer are case sensitive ( y ), the maximum file size in bytes ( 200000 ), the header ( \x50\x4e\x47? ) and the footer ( \xff\xfc\xfd\xfe ). Only the latter is optional and can be omitted.

If the path of the configuration file is not explicitly provided with the -c option, a file named foremost.conf in the current working directory is searched for and used, if present. If it is not found, the default configuration file, /etc/foremost.conf, is used instead.

Adding the support for a file type

By reading the examples provided in the configuration file, we can easily add support for a new file type. In this example we will add support for flac audio files. FLAC (Free Lossless Audio Codec) is a non-proprietary lossless audio format which is able to provide compressed audio without quality loss. First of all, we know that the header of this file type in hexadecimal form is 66 4C 61 43 00 00 00 22 ( fLaC in ASCII), and we can verify it by using a program like hexdump on a flac file:

$ hexdump -C blind_guardian_war_of_wrath.flac | head
00000000  66 4c 61 43 00 00 00 22  12 00 12 00 00 00 0e 00  |fLaC..."........|
00000010  36 f2 0a c4 42 f0 00 4d  04 60 6d 0b 64 36 d7 bd  |6...B..M.`m.d6..|
00000020  3e 4c 0d 8b c1 46 b6 fe  cd 42 04 00 03 db 20 00  |>L...F...B.... .|
00000030  00 00 72 65 66 65 72 65  6e 63 65 20 6c 69 62 46  |..reference libF|
00000040  4c 41 43 20 31 2e 33 2e  31 20 32 30 31 34 31 31  |LAC 1.3.1 201411|
00000050  32 35 21 00 00 00 12 00  00 00 54 49 54 4c 45 3d  |25!.......TITLE=|
00000060  57 61 72 20 6f 66 20 57  72 61 74 68 11 00 00 00  |War of Wrath....|
00000070  52 45 4c 45 41 53 45 43  4f 55 4e 54 52 59 3d 44  |RELEASECOUNTRY=D|
00000080  45 0c 00 00 00 54 4f 54  41 4c 44 49 53 43 53 3d  |E....TOTALDISCS=|
00000090  32 0c 00 00 00 4c 41 42  45 4c 3d 56 69 72 67 69  |2....LABEL=Virgi|

As you can see the file signature is indeed what we expected. Here we will assume a maximum file size of 30 MB, or 30000000 Bytes. Let's add the entry to the file:

flac    y       30000000    \x66\x4c\x61\x43\x00\x00\x00\x22

The footer signature is optional so here we didn't provide it. The program should now be able to recover deleted flac files. Let's verify it. To test that everything works as expected I previously placed, and then removed, a flac file from the /dev/sdb1 partition, and then proceeded to run the command:

$ sudo foremost -i /dev/sdb1 -o $HOME/Documents/output

As expected, the program was able to retrieve the deleted flac file (it was, on purpose, the only file on the device), although it renamed it with a random string. The original filename cannot be retrieved because, as we know, file metadata is contained in the filesystem, and not in the file itself:

/home/egdoc/Documents
└── output
    ├── audit.txt
    └── flac
        └── 00020482.flac



The audit.txt file contains information about the actions performed by the program, in this case:

Foremost version 1.5.7 by Jesse Kornblum, Kris
Kendall, and Nick Mikus
Audit File

Foremost started at Thu Sep 12 23:47:04 2019
Invocation: foremost -i /dev/sdb1 -o /home/egdoc/Documents/output
Output directory: /home/egdoc/Documents/output
Configuration file: /etc/foremost.conf
------------------------------------------------------------------
File: /dev/sdb1
Start: Thu Sep 12 23:47:04 2019
Length: 200 MB (209715200 bytes)

Num      Name (bs=512)         Size      File Offset     Comment

0:      00020482.flac         28 MB        10486784
Finish: Thu Sep 12 23:47:04 2019

1 FILES EXTRACTED

flac:= 1
------------------------------------------------------------------

Foremost finished at Thu Sep 12 23:47:04 2019
Conclusion

In this article we learned how to use foremost, a forensic program able to retrieve deleted files of various types. We learned that the program works by using a technique called data carving, and relies on file signatures to achieve its goal. We saw an example of the program's usage and we also learned how to add support for a specific file type using the syntax illustrated in the configuration file. For more information about the program usage, please consult its manual page.

[Aug 31, 2019] Linux on your laptop A closer look at EFI boot options

Aug 31, 2019 | www.zdnet.com
Before EFI, the standard boot process for virtually all PC systems was called "MBR", for Master Boot Record; today you are likely to hear it referred to as "Legacy Boot". This process depended on using the first physical block on a disk to hold some information needed to boot the computer (thus the name Master Boot Record); specifically, it held the disk address at which the actual bootloader could be found, and the partition table that defined the layout of the disk. Using this information, the PC firmware could find and execute the bootloader, which would then bring up the computer and run the operating system.

This system had a number of rather obvious weaknesses and shortcomings. One of the biggest was that you could only have one bootable object on each physical disk drive (at least as far as the firmware boot was concerned). Another was that if that first sector on the disk became corrupted somehow, you were in deep trouble.
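You can see this layout for yourself by dumping the first 512-byte sector of a disk, a read-only operation (/dev/sda is an assumption here): it contains 446 bytes of bootloader code, a 64-byte partition table and the 2-byte 55 aa boot signature at the very end.

$ sudo dd if=/dev/sda bs=512 count=1 2>/dev/null | xxd | tail -3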

Over time, as part of the Extensible Firmware Interface, a new approach to boot configuration was developed. Rather than storing critical boot configuration information in a single "magic" location, EFI uses a dedicated "EFI boot partition" on the disk. This is a completely normal, standard disk partition, of the same kind that may be used to hold the operating system or system recovery data.

The only requirement is that it be FAT formatted, and that it have the boot and esp partition flags set (esp stands for EFI System Partition). The specific data and programs necessary for booting are then kept in directories on this partition, typically in directories named to indicate what they are for. So if you have a Windows system, you would typically find directories called 'Boot' and 'Microsoft', and perhaps one named for the manufacturer of the hardware, such as HP. If you have a Linux system, you would find directories called opensuse, debian, ubuntu, or any number of others depending on what particular Linux distribution you are using.
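On a running Linux system the EFI System Partition is usually mounted at /boot/efi, so these directories can be inspected directly; a sketch (the exact entries will of course vary per machine):

# Which partition carries the boot/esp flags?
$ sudo parted -l | grep -i esp

# One directory per installed OS or hardware vendor
$ ls /boot/efi/EFI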

It should be obvious from the description so far that it is perfectly possible with the EFI boot configuration to have multiple boot objects on a single disk drive.

Before going any further, I should make it clear that if you install Linux as the only operating system on a PC, it is not necessary to know all of this configuration information in detail. The installer should take care of setting all of this up, including creating the EFI boot partition (or using an existing EFI boot partition), and further configuring the system boot list so that whatever system you install becomes the default boot target.

If you were to take a brand new computer with UEFI firmware, and load it from scratch with any of the current major Linux distributions, it would all be set up, configured, and working just as it is when you purchase a new computer preloaded with Windows (or when you load a computer from scratch with Windows). It is only when you want to have more than one bootable operating system – especially when you want to have both Linux and Windows on the same computer – that things may become more complicated.

The problems that arise with such "multiboot" systems are generally related to getting the boot priority list defined correctly.

When you buy a new computer with Windows, this list typically includes the Windows bootloader on the primary disk, and then perhaps some other peripheral devices such as USB, network interfaces and such. When you install Linux alongside Windows on such a computer, the installer will add the necessary information to the EFI boot partition, but if the boot priority list is not changed, then when the system is rebooted after installation it will simply boot Windows again, and you are likely to think that the installation didn't work.

There are several ways to modify this boot priority list, but exactly which ones are available and whether or how they work depends on the firmware of the system you are using, and this is where things can get really messy. There are just about as many different UEFI firmware implementations as there are PC manufacturers, and the manufacturers have shown a great deal of creativity in the details of this firmware.

First, in the simplest case, there is a software utility included with Linux called efibootmgr that can be used to modify, add or delete the boot priority list. If this utility works properly, and the changes it makes are permanent on the system, then you would have no other problems to deal with, and after installing it would boot Linux and you would be happy. Unfortunately, while this is sometimes the case it is frequently not. The most common reason for this is that changes made by software utilities are not actually permanently stored by the system BIOS, so when the computer is rebooted the boot priority list is restored to whatever it was before, which generally means that Windows gets booted again.
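A minimal efibootmgr session looks like this; the entry numbers are illustrative, yours will differ:

# List the current boot entries and their priority order
$ sudo efibootmgr -v

# Put entry 0003 (say, your Linux bootloader) first in the boot order
$ sudo efibootmgr -o 0003,0000,0001

# Or boot entry 0003 on the next reboot only, leaving the order untouched
$ sudo efibootmgr -n 0003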

The other common way of modifying the boot priority list is via the computer BIOS configuration program. The details of how to do this are different for every manufacturer, but the general procedure is approximately the same. First you have to press the BIOS configuration key (usually F2, but not always, unfortunately) during system power-on (POST). Then choose the Boot item from the BIOS configuration menu, which should get you to a list of boot targets presented in priority order. Then you need to modify that list; sometimes this can be done directly in that screen, via the usual F5/F6 up/down key process, and sometimes you need to proceed one level deeper to be able to do that. I wish I could give more specific and detailed information about this, but it really is different on every system (sometimes even on different systems produced by the same manufacturer), so you just need to proceed carefully and figure out the steps as you go.

I have seen a few rare cases of systems where neither of these methods works, or at least they don't seem to be permanent, and the system keeps reverting to booting Windows. Again, there are two ways to proceed in this case. The first is by simply pressing the "boot selection" key during POST (power-on). Exactly which key this is varies, I have seen it be F12, F9, Esc, and probably one or two others. Whichever key it turns out to be, when you hit it during POST you should get a list of bootable objects defined in the EFI boot priority list, so assuming your Linux installation worked you should see it listed there. I have known of people who were satisfied with this solution, and would just use the computer this way and have to press boot select each time they wanted to boot Linux.

The alternative is to actually modify the files in the EFI boot partition, so that the (unchangeable) Windows boot procedure would actually boot Linux. This involves overwriting the Windows file bootmgfw.efi with the Linux file grubx64.efi. I have done this, especially in the early days of EFI boot, and it works, but I strongly advise you to be extremely careful if you try it, and make sure that you keep a copy of the original bootmgfw.efi file. Finally, just as a final (depressing) warning, I have also seen systems where this seemed to work, at least for a while, but then at some unpredictable point the boot process seemed to notice that something had changed and it restored bootmgfw.efi to its original state – thus losing the Linux boot configuration again. Sigh.
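If you do decide to try the file-swap approach, here is a hedged sketch, keeping the backup just advised; the mount point and the distribution directory name are assumptions:

$ cd /boot/efi/EFI/Microsoft/Boot
# Keep a copy of the original Windows loader!
$ sudo cp bootmgfw.efi bootmgfw.efi.orig
$ sudo cp /boot/efi/EFI/opensuse/grubx64.efi bootmgfw.efi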

So, that's the basics of EFI boot, and how it can be configured. But there are some important variations possible, and some caveats to be aware of.

[Aug 03, 2019] Creating Bootable Linux USB Drive with Etcher

Aug 03, 2019 | linuxize.com

There are several different applications available for free use which will allow you to flash ISO images to USB drives. In this example, we will use Etcher. It is a free and open-source utility for flashing images to SD cards & USB drives and supports Windows, macOS, and Linux.

Head over to the Etcher downloads page , and download the most recent Etcher version for your operating system. Once the file is downloaded, double-click on it and follow the installation wizard.

Creating Bootable Linux USB Drive using Etcher is a relatively straightforward process, just follow the steps outlined below:

  1. Connect the USB flash drive to your system and Launch Etcher.
  2. Click on the Select image button and locate the distribution .iso file.
  3. If only one USB drive is attached to your machine, Etcher will automatically select it. Otherwise, if more than one SD card or USB drive is connected, make sure you have selected the correct USB drive before flashing the image.

[Mar 25, 2019] How to Monitor Disk IO in Linux Linux Hint

Mar 25, 2019 | linuxhint.com

Monitoring Specific Storage Devices or Partitions with iostat:

By default, iostat monitors all the storage devices of your computer. But, you can monitor specific storage devices (such as sda, sdb etc) or specific partitions (such as sda1, sda2, sdb4 etc) with iostat as well.

For example, to monitor the storage device sda only, run iostat as follows:

$ sudo iostat sda

Or

$ sudo iostat -d 2 sda

As you can see, only the storage device sda is monitored.

You can also monitor multiple storage devices with iostat.

For example, to monitor the storage devices sda and sdb , run iostat as follows:

$ sudo iostat sda sdb

Or

$ sudo iostat -d 2 sda sdb

If you want to monitor specific partitions, then you can do so as well.

For example, let's say, you want to monitor the partitions sda1 and sda2 , then run iostat as follows:

$ sudo iostat sda1 sda2

Or

$ sudo iostat -d 2 sda1 sda2

As you can see, only the partitions sda1 and sda2 are monitored.

Monitoring LVM Devices with iostat:

You can monitor the LVM devices of your computer with the -N option of iostat.

To monitor the LVM devices of your Linux machine as well, run iostat as follows:

$ sudo iostat -N -d 2

You can also monitor specific LVM logical volume as well.

For example, to monitor the LVM logical volume centos-root (let's say), run iostat as follows:

$ sudo iostat -N -d 2 centos-root

Changing the Units of iostat:

By default, iostat generates reports in kilobytes (kB) unit. But there are options that you can use to change the unit.

For example, to change the unit to megabytes (MB), use the -m option of iostat.

You can also change the unit to human readable with the -h option of iostat. Human readable format will automatically pick the right unit depending on the available data.

To change the unit to megabytes, run iostat as follows:

$ sudo iostat -m -d 2 sda

To change the unit to human readable format, run iostat as follows:

$ sudo iostat -h -d 2 sda

I copied a file and, as you can see, the unit is now in megabytes (MB).

It changed back to kilobytes (kB) as soon as the file copy was over.

Extended Display of iostat:

If you want, you can display a lot more information about disk i/o with iostat. To do that, use the -x option of iostat.

For example, to display extended information about disk i/o, run iostat as follows:

$ sudo iostat -x -d 2 sda

You can find what each of these fields (rrqm/s, %wrqm etc) means in the man page of iostat.

Getting Help:

If you need more information on each of the supported options of iostat and what each of the fields of iostat means, I recommend you take a look at the man page of iostat.

You can access the man page of iostat with the following command:

$ man iostat

So, that's how you use iostat in Linux. Thanks for reading this article.

[Mar 10, 2019] How do I detach a process from Terminal, entirely?

Mar 10, 2019 | superuser.com

stackoverflow.com, Aug 25, 2016 at 17:24

I use Tilda (drop-down terminal) on Ubuntu as my "command central" - pretty much the way others might use GNOME Do, Quicksilver or Launchy.

However, I'm struggling with how to completely detach a process (e.g. Firefox) from the terminal it's been launched from - i.e. prevent such a (non-)child process from polluting the terminal's output or from being killed when the terminal closes.

For example, in order to start Vim in a "proper" terminal window, I have tried a simple script like the following:

exec gnome-terminal -e "vim $@" &> /dev/null &

However, that still causes pollution (also, passing a file name doesn't seem to work).

lhunath, Sep 23, 2016 at 19:08

First of all; once you've started a process, you can background it by first stopping it (hit Ctrl - Z ) and then typing bg to let it resume in the background. It's now a "job", and its stdout / stderr / stdin are still connected to your terminal.

You can start a process as backgrounded immediately by appending a "&" to the end of it:

firefox &

To run it in the background silenced, use this:

firefox </dev/null &>/dev/null &

Some additional info:

nohup is a program you can use to run your application with such that its stdout/stderr can be sent to a file instead and such that closing the parent script won't SIGHUP the child. However, you need to have had the foresight to have used it before you started the application. Because of the way nohup works, you can't just apply it to a running process .

disown is a bash builtin that removes a shell job from the shell's job list. What this basically means is that you can't use fg , bg on it anymore, but more importantly, when you close your shell it won't hang or send a SIGHUP to that child anymore. Unlike nohup , disown is used after the process has been launched and backgrounded.

What you can't do, is change the stdout/stderr/stdin of a process after having launched it. At least not from the shell. If you launch your process and tell it that its stdout is your terminal (which is what you do by default), then that process is configured to output to your terminal. Your shell has no business with the processes' FD setup, that's purely something the process itself manages. The process itself can decide whether to close its stdout/stderr/stdin or not, but you can't use your shell to force it to do so.

To manage a background process' output, you have plenty of options from scripts, "nohup" probably being the first to come to mind. But for interactive processes you start but forgot to silence ( firefox < /dev/null &>/dev/null & ) you can't do much, really.

I recommend you get GNU screen . With screen you can just close your running shell when the process' output becomes a bother and open a new one ( ^Ac ).


Oh, and by the way, don't use " $@ " where you're using it.

$@ means, $1 , $2 , $3 ..., which would turn your command into:

gnome-terminal -e "vim $1" "$2" "$3" ...

That's probably not what you want because -e only takes one argument. Use $1 to show that your script can only handle one argument.

It's really difficult to get multiple arguments working properly in the scenario that you gave (with the gnome-terminal -e ) because -e takes only one argument, which is a shell command string. You'd have to encode your arguments into one. The best and most robust, but rather cludgy, way is like so:

gnome-terminal -e "vim $(printf "%q " "$@")"

Limited Atonement ,Aug 25, 2016 at 17:22

nohup cmd &

nohup detaches the process completely (daemonizes it)

Randy Proctor ,Sep 13, 2016 at 23:00

If you are using bash , try disown [ jobspec ] ; see bash(1) .

Another approach you can try is at now . If you're not superuser, your permission to use at may be restricted.
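For example, a command handed to the at daemon runs with no controlling terminal at all (the job path here is hypothetical):

$ echo "/path/to/long_running_job > /tmp/job.log 2>&1" | at now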

Stephen Rosen ,Jan 22, 2014 at 17:08

Reading these answers, I was under the initial impression that issuing nohup <command> & would be sufficient. Running zsh in gnome-terminal, I found that nohup <command> & did not prevent my shell from killing child processes on exit. Although nohup is useful, especially with non-interactive shells, it only guarantees this behavior if the child process does not reset its handler for the SIGHUP signal.

In my case, nohup should have prevented hangup signals from reaching the application, but the child application (VMWare Player in this case) was resetting its SIGHUP handler. As a result when the terminal emulator exits, it could still kill your subprocesses. This can only be resolved, to my knowledge, by ensuring that the process is removed from the shell's jobs table. If nohup is overridden with a shell builtin, as is sometimes the case, this may be sufficient, however, in the event that it is not...


disown is a shell builtin in bash , zsh , and ksh93 ,

<command> &
disown

or

<command> & disown

if you prefer one-liners. This has the generally desirable effect of removing the subprocess from the jobs table. This allows you to exit the terminal emulator without accidentally signaling the child process at all. No matter what the SIGHUP handler looks like, this should not kill your child process.

After the disown, the process is still a child of your terminal emulator (play with pstree if you want to watch this in action), but after the terminal emulator exits, you should see it attached to the init process. In other words, everything is as it should be, and as you presumably want it to be.

What to do if your shell does not support disown ? I'd strongly advocate switching to one that does, but in the absence of that option, you have a few choices.

  1. screen and tmux can solve this problem, but they are much heavier weight solutions, and I dislike having to run them for such a simple task. They are much more suitable for situations in which you want to maintain a tty, typically on a remote machine.
  2. For many users, it may be desirable to see if your shell supports a capability like zsh's setopt nohup . This can be used to specify that SIGHUP should not be sent to the jobs in the jobs table when the shell exits. You can either apply this just before exiting the shell, or add it to shell configuration like ~/.zshrc if you always want it on.
  3. Find a way to edit the jobs table. I couldn't find a way to do this in tcsh or csh , which is somewhat disturbing.
  4. Write a small C program to fork off and exec() . This is a very poor solution, but the source should only consist of a couple dozen lines. You can then pass commands as commandline arguments to the C program, and thus avoid a process specific entry in the jobs table.

Sheljohn ,Jan 10 at 10:20

  1. nohup $COMMAND &
  2. $COMMAND & disown
  3. setsid command

I've been using number 2 for a very long time, but number 3 works just as well. Also, disown has a 'nohup' flag of '-h', can disown all processes with '-a', and can disown all running processes with '-ar'.

Silencing is accomplished by '$COMMAND &>/dev/null'.

Hope this helps!

dunkyp, Mar 25, 2009 at 1:51

I think screen might solve your problem

Nathan Fellman ,Mar 23, 2009 at 14:55

in tcsh (and maybe in other shells as well), you can use parentheses to detach the process.

Compare this:

> jobs # shows nothing
> firefox &
> jobs
[1]  + Running                       firefox

To this:

> jobs # shows nothing
> (firefox &)
> jobs # still shows nothing
>

This removes firefox from the jobs listing, but it is still tied to the terminal; if you logged in to this node via 'ssh', trying to log out will still hang the ssh process.

To disassociate from the controlling tty, run the command through a sub-shell, e.g.:

(command)&

When exit is used the terminal closes, but the process is still alive.

check -

(sleep 100) & exit

Open other terminal

ps aux | grep sleep

Process is still alive.

[Mar 10, 2019] linux - How to attach terminal to detached process

Mar 10, 2019 | unix.stackexchange.com



Gilles ,Feb 16, 2012 at 21:39

I have detached a process from my terminal, like this:
$ process &

That terminal is now long closed, but process is still running and I want to send some commands to that process's stdin. Is that possible?

Samuel Edwin Ward ,Dec 22, 2018 at 13:34

Yes, it is. First, create a pipe: mkfifo /tmp/fifo . Use gdb to attach to the process: gdb -p PID

Then close stdin: call close (0) ; and open it again: call open ("/tmp/fifo", 0600)

Finally, write away (from a different terminal, as gdb will probably hang):

echo blah > /tmp/fifo

NiKiZe ,Jan 6, 2017 at 22:52

When original terminal is no longer accessible...

reptyr might be what you want, see https://serverfault.com/a/284795/187998

Quote from there:

Have a look at reptyr , which does exactly that. The github page has all the information.
reptyr - A tool for "re-ptying" programs.

reptyr is a utility for taking an existing running program and attaching it to a new terminal. Started a long-running process over ssh, but have to leave and don't want to interrupt it? Just start a screen, use reptyr to grab it, and then kill the ssh session and head on home.

USAGE

reptyr PID

"reptyr PID" will grab the process with id PID and attach it to your current terminal.

After attaching, the process will take input from and write output to the new terminal, including ^C and ^Z. (Unfortunately, if you background it, you will still have to run "bg" or "fg" in the old terminal. This is likely impossible to fix in a reasonable way without patching your shell.)

manatwork ,Nov 20, 2014 at 22:59

I am quite sure you can not.

Check using ps x . If a process has a ? as controlling tty , you can not send input to it any more.

9942 ?        S      0:00 tail -F /var/log/messages
9947 pts/1    S      0:00 tail -F /var/log/messages

In this example, you can send input to 9947 doing something like echo "test" > /dev/pts/1 . The other process ( 9942 ) is not reachable.

Next time, you could use screen or tmux to avoid this situation.

Stéphane Gimenez ,Feb 16, 2012 at 16:16

EDIT : As Stephane Gimenez said, it's not that simple. It's only allowing you to print to a different terminal.

You can try to write to this process using /proc . It should be located in /proc/ pid /fd/0 , so a simple :

echo "hello" > /proc/PID/fd/0

should do it. I have not tried it, but it should work, as long as this process still has a valid stdin file descriptor. You can check it with ls -l on /proc/ pid /fd/ .

See nohup for more details about how to keep processes running.

Stéphane Gimenez ,Nov 20, 2015 at 5:08

Just ending the command line with & will not completely detach the process, it will just run it in the background. (With zsh you can use &! to actually detach it, otherwise you have to disown it later).

When a process runs in the background, it won't receive input from its controlling terminal anymore. But you can send it back into the foreground with fg and then it will read input again.

Otherwise, it's not possible to externally change its filedescriptors (including stdin) or to reattach a lost controlling terminal unless you use debugging tools (see Ansgar's answer , or have a look at the retty command).

[Mar 10, 2019] linux - Preventing tmux session created by systemd from automatically terminating on Ctrl+C - Stack Overflow

Mar 10, 2019 | stackoverflow.com



Jim Stewart ,Nov 10, 2018 at 12:55

For a few days now I have been successfully running the new Minecraft Bedrock Edition dedicated server on my Ubuntu 18.04 LTS home server. Because it should be available 24/7 and start up automatically after boot, I created a systemd service for a detached tmux session:

tmux.minecraftserver.service

[Unit]
Description=tmux minecraft_server detached

[Service]
Type=forking
WorkingDirectory=/home/mine/minecraftserver
ExecStart=/usr/bin/tmux new -s minecraftserver -d "LD_LIBRARY_PATH=. /home/mine/minecraftser$
User=mine

[Install]
WantedBy=multi-user.target

Everything works as expected but there's one tiny thing that keeps bugging me:

How can I prevent tmux from terminating its whole session when I press Ctrl+C? I just want to terminate the Minecraft server process itself instead of the whole tmux session. When starting the server from the command line in a manually created tmux session this does work (the session stays alive), but not when the session was brought up by systemd.

FlKo ,Nov 12, 2018 at 6:21

When starting the server from the command line in a manually created tmux session this does work (session stays alive) but not when the session was brought up by systemd .

The difference between these situations is actually unrelated to systemd. In one case, you're starting the server from a shell within the tmux session, and when the server terminates, control returns to the shell. In the other case, you're starting the server directly within the tmux session, and when it terminates there's no shell to return to, so the tmux session also dies.

tmux has an option to keep the session alive after the process inside it dies (look for remain-on-exit in the manpage), but that's probably not what you want: you want to be able to return to an interactive shell, to restart the server, investigate why it died, or perform maintenance tasks, for example. So it's probably better to change your command to this:

'LD_LIBRARY_PATH=. /home/mine/minecraftserver/ ; exec bash'

That is, first run the server, and then, after it terminates, replace the process (the shell which tmux implicitly spawns to run the command, but which will then exit) with another, interactive shell. (For some other ways to get an interactive shell after the command exits, see e. g. this question – but note that the <(echo commands) syntax suggested in the top answer is not available in systemd unit files.)
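For completeness, if you actually wanted the remain-on-exit behaviour mentioned above, it is a one-line tmux option (dead panes can later be revived with respawn-pane); a sketch:

$ tmux set-option -g remain-on-exit on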

FlKo ,Nov 12, 2018 at 6:21

I was able to solve this by using systemd's ExecStartPost and tmux's send-keys like this:
[Unit]
Description=tmux minecraft_server detached

[Service]
Type=forking
WorkingDirectory=/home/mine/minecraftserver
ExecStart=/usr/bin/tmux new -d -s minecraftserver
ExecStartPost=/usr/bin/tmux send-keys -t minecraftserver "cd /home/mine/minecraftserver/" Enter "LD_LIBRARY_PATH=. ./bedrock_server" Enter

User=mine

[Install]
WantedBy=multi-user.target

[Jan 29, 2019] RHEL7 is a fine OS, the only thing it s missing is a really good init system.

Highly recommended!
Or in other words, a simple, reliable and clear solution (which has some faults due to its age) was replaced with a gigantic KISS violation. No engineer worth the name will ever do that. And if it needs doing, any good engineer will make damned sure to achieve maximum compatibility and a clean way back. The systemd people seem to be hell-bent on making it as hard as possible to not use their monster. That alone is a good reason to stay away from it.
Notable quotes:
"... We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile. ..."
"... I think we should call systemd the Master Control Program since it seems to like making other programs functions its own. ..."
"... RHEL7 is a fine OS, the only thing it's missing is a really good init system. ..."
Oct 14, 2018 | linux.slashdot.org

Reverend Green ( 4973045 ) , Monday December 11, 2017 @04:48AM ( #55714431 )

Re: Does systemd make ... ( Score: 5 , Funny)

Systemd is nothing but a thinly-veiled plot by Vladimir Putin and Beyonce to import illegal German Nazi immigrants over the border from Mexico who will then corner the market in kimchi and implement Sharia law!!!

Anonymous Coward , Monday December 11, 2017 @01:38AM ( #55714015 )

Re:It violates fundamental Unix principles ( Score: 4 , Funny)

The Emacs of the 2010s.

DontBeAMoran ( 4843879 ) , Monday December 11, 2017 @01:57AM ( #55714059 )
Re:It violates fundamental Unix principles ( Score: 5 , Funny)

We are systemd. Lower your memory locks and surrender your processes. We will add your calls and code distinctiveness to our own. Your functions will adapt to service us. Resistance is futile.

serviscope_minor ( 664417 ) , Monday December 11, 2017 @04:47AM ( #55714427 ) Journal
Re:It violates fundamental Unix principles ( Score: 4 , Insightful)

I think we should call systemd the Master Control Program since it seems to like making other programs functions its own.

Anonymous Coward , Monday December 11, 2017 @01:47AM ( #55714035 )
Don't go hating on systemd ( Score: 5 , Funny)

RHEL7 is a fine OS, the only thing it's missing is a really good init system.

[Jan 26, 2019] Ten Things I Wish I'd Known About about bash

Highly recommended!
Jan 06, 2018 | zwischenzugs.com
Intro

Recently I wanted to deepen my understanding of bash by researching as much of it as possible. Because I felt bash is an often-used (and under-understood) technology, I ended up writing a book on it .

A preview is available here .

You don't have to look hard on the internet to find plenty of useful one-liners in bash, or scripts. And there are guides to bash that seem somewhat intimidating through either their thoroughness or their focus on esoteric detail.

Here I've focussed on the things that either confused me or increased my power and productivity in bash significantly, and tried to communicate them (as in my book) in a way that emphasises getting the understanding right.

Enjoy!


1) `` vs $()

These two operators do the same thing. Compare these two lines:

$ echo `ls`
$ echo $(ls)

Why these two forms existed confused me for a long time.

If you don't know, both forms substitute the output of the command contained within it into the command.

The principal difference is that nesting is simpler.

Which of these is easier to read (and write)?

    $ echo `echo \`echo \\\`echo inside\\\`\``

or:

    $ echo $(echo $(echo $(echo inside)))

If you're interested in going deeper, see here or here .

2) globbing vs regexps

Another one that can confuse if never thought about or researched.

While globs and regexps can look similar, they are not the same.

Consider this command:

$ rename -n 's/(.*)/new$1/' *

The two asterisks are interpreted in different ways.

The first is ignored by the shell (because it is in quotes), and is interpreted as '0 or more characters' by the rename application. So it's interpreted as a regular expression.

The second is interpreted by the shell (because it is not in quotes), and gets replaced by a list of all the files in the current working folder. It is interpreted as a glob.

So by looking at man bash can you figure out why these two commands produce different output?

$ ls *
$ ls .*

The second looks even more like a regular expression. But it isn't!

3) Exit Codes

Not everyone knows that every time you run a shell command in bash, an 'exit code' is returned to bash.

Generally, if a command 'succeeds' you get an exit code of 0. If it doesn't succeed, you get a non-zero code: 1 is a 'general error', and others can give you more information (e.g. which signal killed the process).

But these rules don't always hold:

$ grep not_there /dev/null
$ echo $?

$? is a special bash variable that's set to the exit code of each command after it runs.

Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match or not return 0 ?
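A quick way to settle it at the prompt: grep returns 0 when it finds a match and 1 when it does not:

$ grep root /etc/passwd > /dev/null; echo $?
0
$ grep not_there /etc/passwd > /dev/null; echo $?
1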

Grok this and a lot will click into place in what follows.

4) if statements, [ and [[

Here's another 'spot the difference' similar to the backticks one above.

What will this output?

if grep not_there /dev/null
then
    echo hi
else
    echo lo
fi

grep's return code makes code like this work more intuitively as a side effect of its use of exit codes.

Now what will this output?

a) hihi
b) lolo
c) something else

if [ $(grep not_there /dev/null) = '' ]
then
    echo -n hi
else
    echo -n lo
fi
if [[ $(grep not_there /dev/null) = '' ]]
then
    echo -n hi
else
    echo -n lo
fi

The difference between [ and [[ was another thing I never really understood. [ is the original form for tests, and then [[ was introduced, which is more flexible and intuitive. In the first if block above, the if statement barfs because the $(grep not_there /dev/null) is evaluated to nothing, resulting in this comparison:

[ = '' ]

which makes no sense. The double bracket form handles this for you.

This is why you occasionally see comparisons like this in bash scripts:

if [ x$(grep not_there /dev/null) = 'x' ]

so that if the command returns nothing it still runs. There's no need for it, but that's why it exists.

5) set s

Bash has configurable options which can be set on the fly. I use two of these all the time:

set -e

exits from a script if any command returned a non-zero exit code (see above).

This outputs the commands that get run as they run:

set -x

So a script might start like this:

#!/bin/bash
set -e
set -x
grep not_there /dev/null
echo $?

What would that script output?

6) <()

This is my favourite. It's so under-used, perhaps because it can be initially baffling, but I use it all the time.

It's similar to $() in that the output of the command inside is re-used.

In this case, though, the output is treated as a file. This file can be used as an argument to commands that take files as an argument.

Confused? Here's an example.

Have you ever done something like this?

$ grep somestring file1 > /tmp/a
$ grep somestring file2 > /tmp/b
$ diff /tmp/a /tmp/b

That works, but instead you can write:

diff <(grep somestring file1) <(grep somestring file2)

Isn't that neater?

7) Quoting

Quoting's a knotty subject in bash, as it is in many software contexts.

Firstly, variables in quotes:

A='123'  
echo "$A"
echo '$A'

Pretty simple – double quotes dereference variables, while single quotes go literal.

So what will this output?

mkdir -p tmp
cd tmp
touch a
echo "*"
echo '*'

Surprised? I was.

8) Top three shortcuts

There are plenty of shortcuts listed in man bash , and it's not hard to find comprehensive lists. This list consists of the ones I use most often, in order of how often I use them.

Rather than trying to memorize them all, I recommend picking one, and trying to remember to use it until it becomes unconscious. Then take the next one. I'll skip over the most obvious ones (eg !! – repeat last command, and ~ – your home directory).

!$

I use this dozens of times a day. It repeats the last argument of the last command. If you're working on a file, and can't be bothered to re-type it command after command it can save a lot of work:

grep somestring /long/path/to/some/file/or/other.txt
vi !$

!:1-$

This bit of magic takes this further. It takes all the arguments to the previous command and drops them in. So:

grep isthere /long/path/to/some/file/or/other.txt
egrep !:1-$
fgrep !:1-$

The ! means 'look at the previous command', the : is a separator, and the 1 means 'take the first word', the - means 'until' and the $ means 'the last word'.

Note: you can achieve the same thing with !* . Knowing the above gives you the control to limit to a specific contiguous subset of arguments, eg with !:2-3 .

:h

I use this one a lot too. If you put it after a filename, it will change that filename to remove everything up to the folder. Like this:

grep isthere /long/path/to/some/file/or/other.txt
cd !$:h

which can save a lot of work in the course of the day.

9) startup order

The order in which bash runs startup scripts can cause a lot of head-scratching. I keep this diagram handy (from this great page):

[diagram: bash startup-file order, color-coded by shell type]

It shows which scripts bash decides to run from the top, based on decisions made about the context bash is running in (which decides the colour to follow).

So if you are in a local (non-remote), non-login, interactive shell (eg when you run bash itself from the command line), you are on the 'green' line, and these are the order of files read:

/etc/bash.bashrc
~/.bashrc
[bash runs, then terminates]
~/.bash_logout

This can save you a hell of a lot of time debugging.

10) getopts (cheapci)

If you go deep with bash, you might end up writing chunky utilities in it. If you do, then getting to grips with getopts can pay large dividends.

For fun, I once wrote a script called cheapci which I used to work like a Jenkins job.

The code here implements the reading of the two required and 14 non-required arguments. Better to learn this than to build up a bunch of bespoke code that can get very messy pretty quickly as your utility grows.
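A minimal getopts skeleton showing the shape of such option handling; the option letters here are illustrative, not cheapci's actual interface:

#!/bin/bash
usage() { echo "Usage: $0 -r <repo> [-q] [-d <dir>]"; exit 1; }
quiet=0
dir=.
while getopts "r:qd:" opt
do
    case "$opt" in
        r) repo="$OPTARG" ;;   # required option with a value
        q) quiet=1 ;;          # optional flag
        d) dir="$OPTARG" ;;    # optional option with a value
        *) usage ;;
    esac
done
shift $((OPTIND-1))            # what remains in "$@" are positional arguments
[ -z "$repo" ] && usage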


This is based on some of the contents of my book Learn Bash the Hard Way, available for $7:

[Dec 23, 2018] Rule #0 of any checklist

Notable quotes:
"... The Checklist Manifesto ..."
"... The book talks about how checklists reduce major errors in surgery. Hospitals that use checklists are drastically less likely to amputate the wrong leg . ..."
"... any checklist should start off verifying that what you "know" to be true is true ..."
"... Before starting, ask the "Is it plugged in?" question first. What happened today was an example of when asking "Is it plugged in?" would have helped. ..."
"... moral of the story: Make sure that your understanding of the current state is correct. If you're a developer trying to fix a problem, make sure that you are actually able to understand the problem first. ..."
Dec 23, 2018 | hexmode.com

A while back I mentioned Atul Gawande 's book The Checklist Manifesto . Today, I got another example of how to improve my checklists.

The book talks about how checklists reduce major errors in surgery. Hospitals that use checklists are drastically less likely to amputate the wrong leg .

So, the takeaway for me is this: any checklist should start off verifying that what you "know" to be true is true . (Thankfully, my errors can be backed out with very little long term consequences, but I shouldn't use this as an excuse to forego checklists.)

Before starting, ask the "Is it plugged in?" question first. What happened today was an example of when asking "Is it plugged in?" would have helped.

Today I was testing the thumbnailing of some MediaWiki code and trying to understand the $wgLocalFileRepo variable. I copied part of an /images/ directory over from another wiki to my test wiki. I verified that it thumbnailed correctly.

So far so good.

Then I changed the directory parameter and tested. No thumbnail. Later, I realized this is to be expected because I didn't copy over the original images. So that is one issue.

I erased (what I thought was) the thumbnail image and tried again on the main repo. It worked again–I got a thumbnail.

I tried copying over the images directory to the new directory, but the new thumbnailing directory structure didn't produce a thumbnail.

I tried over and over with the same thumbnail and was confused because it kept telling me the same thing.

I added debugging statements and still got nowhere.

Finally, I just did an ls on the directory to verify it was there. It was. And it had files in it.

But not the file I was trying to produce a thumbnail of.

The system that "worked" had the thumbnail, but not the original file.

So, moral of the story: Make sure that your understanding of the current state is correct. If you're a developer trying to fix a problem, make sure that you are actually able to understand the problem first.

Maybe your perception of reality is wrong. Mine was. I was sure that the thumbnails were being generated each time until I discovered that I hadn't deleted the thumbnails, I had deleted the original.

[Dec 13, 2018] Red Hat Linux Professional Users Groups

Compare with the Oracle recommendations; some settings might be wrong. For what Oracle recommends, see Oracle kernel parameters tuning on Linux
Dec 13, 2018 | www.linkedin.com

Oracle recommendations:

ip_local_port_range   minimum 9000, maximum 65000   /proc/sys/net/ipv4/ip_local_port_range
rmem_default          262144                        /proc/sys/net/core/rmem_default
rmem_max              4194304                       /proc/sys/net/core/rmem_max
wmem_default          262144                        /proc/sys/net/core/wmem_default
wmem_max              1048576                       /proc/sys/net/core/wmem_max
tcp_wmem              262144                        /proc/sys/net/ipv4/tcp_wmem
tcp_rmem              4194304                       /proc/sys/net/ipv4/tcp_rmem

Minesh Patel , Site Reliability Engineer, Austin, Texas Area

Tuning the TCP I/O settings on Red Hat can reduce intermittent or random slowness problems if you are running with the default values.

For Red Hat Linux, 131071 is the default value. Double it, from 131071 to 262144:
cat /proc/sys/net/core/rmem_max
131071 → 262144
cat /proc/sys/net/core/rmem_default
129024 → 262144
 cat /proc/sys/net/core/wmem_default
129024 → 262144
 cat /proc/sys/net/core/wmem_max
131071 → 262144
To improve failover performance in a RAC cluster, consider changing the following IP kernel parameters as well:
net.ipv4.tcp_keepalive_time
net.ipv4.tcp_keepalive_intvl
net.ipv4.tcp_retries2
net.ipv4.tcp_syn_retries
# sysctl -w net.ipv4.ip_local_port_range="1024 65000"

To make the change permanent, add the following line to the /etc/sysctl.conf file, which is used during the boot process:

net.ipv4.ip_local_port_range=1024 65000

The first number is the first local port allowed for TCP and UDP traffic, and the second number is the last port number.
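After editing /etc/sysctl.conf you can load and verify the new values immediately, without a reboot:

# sysctl -p
# sysctl net.ipv4.ip_local_port_range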

[Nov 12, 2018] Linux Find Out Which Process Is Listening Upon a Port

Jun 25, 2012 | www.cyberciti.biz

How do I find out which running processes are associated with each open port? How do I find out what process has open TCP port 111 or UDP port 7000 under Linux?

You can use the following programs to find out about port numbers and their associated processes:

  1. netstat – a command-line tool that displays network connections, routing tables, and a number of network interface statistics.
  2. fuser – a command line tool to identify processes using files or sockets.
  3. lsof – a command line tool to list open files under Linux / UNIX to report a list of all open files and the processes that opened them.
  4. /proc/$pid/ file system – Under Linux, /proc includes a directory for each running process (including kernel processes) at /proc/PID, containing information about that process, notably including the name of the process that opened the port.

You must run the above command(s) as the root user.

netstat example

Type the following command:
# netstat -tulpn
Sample outputs:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1138/mysqld     
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      850/portmap     
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1607/apache2    
tcp        0      0 0.0.0.0:55091           0.0.0.0:*               LISTEN      910/rpc.statd   
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1467/dnsmasq    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      992/sshd        
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1565/cupsd      
tcp        0      0 0.0.0.0:7000            0.0.0.0:*               LISTEN      3813/transmission
tcp6       0      0 :::22                   :::*                    LISTEN      992/sshd        
tcp6       0      0 ::1:631                 :::*                    LISTEN      1565/cupsd      
tcp6       0      0 :::7000                 :::*                    LISTEN      3813/transmission
udp        0      0 0.0.0.0:111             0.0.0.0:*                           850/portmap     
udp        0      0 0.0.0.0:662             0.0.0.0:*                           910/rpc.statd   
udp        0      0 192.168.122.1:53        0.0.0.0:*                           1467/dnsmasq    
udp        0      0 0.0.0.0:67              0.0.0.0:*                           1467/dnsmasq    
udp        0      0 0.0.0.0:68              0.0.0.0:*                           3697/dhclient   
udp        0      0 0.0.0.0:7000            0.0.0.0:*                           3813/transmission
udp        0      0 0.0.0.0:54746           0.0.0.0:*                           910/rpc.statd

TCP port 3306 was opened by mysqld process having PID # 1138. You can verify this using /proc, enter:
# ls -l /proc/1138/exe
Sample outputs:

lrwxrwxrwx 1 root root 0 2010-10-29 10:20 /proc/1138/exe -> /usr/sbin/mysqld

You can use grep command to filter out information:
# netstat -tulpn | grep :80
Sample outputs:

tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1607/apache2
Video demo

https://www.youtube.com/embed/h3fJlmuGyos

fuser command

Find out the processes PID that opened tcp port 7000, enter:
# fuser 7000/tcp
Sample outputs:

7000/tcp:             3813

Finally, find out process name associated with PID # 3813, enter:
# ls -l /proc/3813/exe
Sample outputs:

lrwxrwxrwx 1 vivek vivek 0 2010-10-29 11:00 /proc/3813/exe -> /usr/bin/transmission

/usr/bin/transmission is a bittorrent client, enter:
# man transmission
OR
# whatis transmission
Sample outputs:

transmission (1)     - a bittorrent client
Task: Find Out Current Working Directory Of a Process

To find out current working directory of a process called bittorrent or pid 3813, enter:
# ls -l /proc/3813/cwd
Sample outputs:

lrwxrwxrwx 1 vivek vivek 0 2010-10-29 12:04 /proc/3813/cwd -> /home/vivek

OR use pwdx command, enter:
# pwdx 3813
Sample outputs:

3813: /home/vivek
Task: Find Out Owner Of a Process

Use the following command to find out the owner of a process PID called 3813:
# ps aux | grep 3813
OR
# ps aux | grep '[3]813'
Sample outputs:

vivek     3813  1.9  0.3 188372 26628 ?        Sl   10:58   2:27 transmission

OR try the following ps command:
# ps -eo pid,user,group,args,etime,lstart | grep '[3]813'
Sample outputs:

3813 vivek    vivek    transmission                   02:44:05 Fri Oct 29 10:58:40 2010

Another option is /proc/$PID/environ, enter:
# cat /proc/3813/environ
OR
# grep --color -w -a USER /proc/3813/environ
Sample outputs (note the --color option):

[Fig.01: grep output]

lsof Command Example

Type the command as follows:

lsof -i :portNumber 
lsof -i tcp:portNumber 
lsof -i udp:portNumber 
lsof -i :80
lsof -i :80 | grep LISTEN


Sample outputs:

apache2   1607     root    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
apache2   1616 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
apache2   1617 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
apache2   1618 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
apache2   1619 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)
apache2   1620 www-data    3u  IPv4   6472      0t0  TCP *:www (LISTEN)

Now, you get more information about pid # 1607 or 1616 and so on:
# ps aux | grep '[1]616'
Sample outputs:
www-data 1616 0.0 0.0 35816 3880 ? S 10:20 0:00 /usr/sbin/apache2 -k start
I recommend the following command to grab info about pid # 1616:
# ps -eo pid,user,group,args,etime,lstart | grep '[1]616'
Sample outputs:

1616 www-data www-data /usr/sbin/apache2 -k start     03:16:22 Fri Oct 29 10:20:17 2010


Help: I Discover an Open Port Which I Don't Recognize At All

The file /etc/services is used to map port numbers and protocols to service names. Try matching port numbers:
$ grep port /etc/services
$ grep 443 /etc/services

Sample outputs:

https		443/tcp				# http protocol over TLS/SSL
https		443/udp
Check For rootkit

I strongly recommend that you find out which processes are really running, especially on servers connected to high-speed Internet access. You can look for a rootkit, which is a program designed to take fundamental control (in Linux/UNIX terms "root" access, in Windows terms "Administrator" access) of a computer system without authorization by the system's owners and legitimate managers. See how to detect and check for rootkits under Linux .
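Two commonly used scanners, as a hedged example (install them from your distribution's repositories first; package names assume Debian/Ubuntu):

# Debian/Ubuntu
$ sudo apt install chkrootkit rkhunter

# Run the scans as root
$ sudo chkrootkit
$ sudo rkhunter --check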

Keep an Eye On Your Bandwidth Graphs

Usually, rooted servers are used to send large amounts of spam, distribute malware, or launch DoS-style attacks on other computers.


See the following man pages for more information:
$ man ps
$ man grep
$ man lsof
$ man netstat
$ man fuser

Posted by: Vivek Gite


[Nov 08, 2018] How to find which process is regularly writing to disk?

Notable quotes:
"... tick...tick...tick...trrrrrr ..."
"... /var/log/syslog ..."
Nov 08, 2018 | unix.stackexchange.com

Cedric Martin , Jul 27, 2012 at 4:31

How can I find which process is constantly writing to disk?

I like my workstation to be close to silent and I just built a new system (P8B75-M + Core i5 3450s -- the 's' because it has a lower max TDP) with quiet fans etc. and installed Debian Wheezy 64-bit on it.

And something is getting on my nerves: I can hear some kind of pattern as if the hard disk were writing or seeking something ( tick...tick...tick...trrrrrr , rinse and repeat every second or so).

I had a similar issue in the past (many, many years ago) and it turned out it was some CUPS log or something, and I simply redirected that one (unimportant) log to a (real) RAM disk.

But here I'm not sure.

I tried the following:

ls -lR /var/log > /tmp/a.tmp && sleep 5 && ls -lR /var/log > /tmp/b.tmp && diff /tmp/?.tmp

but nothing is changing there.

Now the strange thing is that I also hear the pattern when the prompt asking me to enter my LVM decryption passphrase is showing.

Could it be something in the kernel/system I just installed or do I have a faulty harddisk?

hdparm -tT /dev/sda reports a correct HD speed (130 MB/s non-cached, SATA 6 Gb/s) and I've already installed and compiled big sources (Emacs) without issue, so I don't think the system is bad.

(The HD is a Seagate Barracuda 500GB)

Mat , Jul 27, 2012 at 6:03

Are you sure it's a hard drive making that noise, and not something else? (Check the fans, including PSU fan. Had very strange clicking noises once when a very thin cable was too close to a fan and would sometimes very slightly touch the blades and bounce for a few "clicks"...) – Mat Jul 27 '12 at 6:03

Cedric Martin , Jul 27, 2012 at 7:02

@Mat: I'll take the hard drive outside of the case (the connectors should be long enough) to be sure and I'll report back ; ) – Cedric Martin Jul 27 '12 at 7:02

camh , Jul 27, 2012 at 9:48

Make sure your disk filesystems are mounted relatime or noatime. File reads can be causing writes to inodes to record the access time. – camh Jul 27 '12 at 9:48

mnmnc , Jul 27, 2012 at 8:27

Did you try to examine what a program like iotop is showing? It will tell you exactly which process is currently writing to the disk.

example output:

Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
    1 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % init
    2 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [kthreadd]
    3 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/0]
    6 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/0]
    7 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [watchdog/0]
    8 rt/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [migration/1]
 1033 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [flush-8:0]
   10 be/4 root        0.00 B/s    0.00 B/s  0.00 %  0.00 % [ksoftirqd/1]

Cedric Martin , Aug 2, 2012 at 15:56

thanks for that tip. I didn't know about iotop . On Debian I did an apt-cache search iotop to find out that I had to apt-get iotop . Very cool command! – Cedric Martin Aug 2 '12 at 15:56

ndemou , Jun 20, 2016 at 15:32

I use iotop -o -b -d 10 which every 10secs prints a list of processes that read/wrote to disk and the amount of IO bandwidth used. – ndemou Jun 20 '16 at 15:32

scai , Jul 27, 2012 at 10:48

You can enable IO debugging via echo 1 > /proc/sys/vm/block_dump and then watch the debugging messages in /var/log/syslog . This has the advantage of obtaining some type of log file with past activities whereas iotop only shows the current activity.
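
A minimal session built on this suggestion (and on dan3's warning below) might look like the following; run it as root, and read via dmesg to avoid the syslog feedback loop:

echo 1 > /proc/sys/vm/block_dump    # enable block I/O debugging
sleep 30                            # let some disk activity accumulate
dmesg | tail -n 20                  # inspect the logged reads/writes
echo 0 > /proc/sys/vm/block_dump    # switch the debugging off again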

dan3 , Jul 15, 2013 at 8:32

It is absolutely crazy to leave sysloging enabled when block_dump is active. Logging causes disk activity, which causes logging, which causes disk activity etc. Better stop syslog before enabling this (and use dmesg to read the messages) – dan3 Jul 15 '13 at 8:32

scai , Jul 16, 2013 at 6:32

You are absolutely right, although the effect isn't as dramatic as you describe it. If you just want to have a short peek at the disk activity there is no need to stop the syslog daemon. – scai Jul 16 '13 at 6:32

dan3 , Jul 16, 2013 at 7:22

I've tried it about 2 years ago and it brought my machine to a halt. One of these days when I have nothing important running I'll try it again :) – dan3 Jul 16 '13 at 7:22

scai , Jul 16, 2013 at 10:50

I tried it, nothing really happened. Especially because of file system buffering. A write to syslog doesn't immediately trigger a write to disk. – scai Jul 16 '13 at 10:50

Volker Siegel , Apr 16, 2014 at 22:57

I would assume there is general rate limiting in place for the log messages, which handles this case too(?) – Volker Siegel Apr 16 '14 at 22:57

Gilles , Jul 28, 2012 at 1:34

Assuming that the disk noises are due to a process causing a write and not to some disk spindown problem , you can use the audit subsystem (install the auditd package ). Put a watch on the sync calls and its friends:
auditctl -S sync -S fsync -S fdatasync -a exit,always

Watch the logs in /var/log/audit/audit.log . Be careful not to do this if the audit logs themselves are flushed! Check in /etc/auditd.conf that the flush option is set to none .

If files are being flushed often, a likely culprit is the system logs. For example, if you log failed incoming connection attempts and someone is probing your machine, that will generate a lot of entries; this can cause a disk to emit machine gun-style noises. With the basic log daemon sysklogd, check /etc/syslog.conf : if a log file name is not preceded by - , then that log is flushed to disk after each write.
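
To illustrate that last point, a sysklogd-style /etc/syslog.conf fragment might contain:

# flushed to disk after every write:
mail.info            /var/log/mail.info
# buffered thanks to the leading '-', so far fewer disk writes:
mail.info            -/var/log/mail.info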

Gilles , Mar 23 at 18:24

@StephenKitt Huh. No. The asker mentioned Debian so I've changed it to a link to the Debian package. – Gilles Mar 23 at 18:24

cas , Jul 27, 2012 at 9:40

It might be your drives automatically spinning down, lots of consumer-grade drives do that these days. Unfortunately on even a lightly loaded system, this results in the drives constantly spinning down and then spinning up again, especially if you're running hddtemp or similar to monitor the drive temperature (most drives stupidly don't let you query the SMART temperature value without spinning up the drive - cretinous!).

This is not only annoying, it can wear out the drives faster as many drives have only a limited number of park cycles. e.g. see https://bugs.launchpad.net/ubuntu/+source/hdparm/+bug/952556 for a description of the problem.

I disable idle-spindown on all my drives with the following bit of shell code. You could put it in an /etc/rc.boot script, or in /etc/rc.local or similar.

# disable the idle-spindown timeout on every SATA/SCSI drive
for disk in /dev/sd? ; do
  /sbin/hdparm -q -S 0 "$disk"   # the glob already expands to full /dev/sdX paths
done

Cedric Martin , Aug 2, 2012 at 16:03

that you can't query SMART readings without spinning up the drive leaves me speechless :-/ Now obviously the "spinning down" issue can become quite complicated. Regarding disabling the spinning down: wouldn't that in itself cause the HD to wear out faster? I mean: it's never ever "resting" as long as the system is on then? – Cedric Martin Aug 2 '12 at 16:03

cas , Aug 2, 2012 at 21:42

IIRC you can query some SMART values without causing the drive to spin up, but temperature isn't one of them on any of the drives i've tested (incl models from WD, Seagate, Samsung, Hitachi). Which is, of course, crazy because concern over temperature is one of the reasons for idling a drive. re: wear: AIUI 1. constant velocity is less wearing than changing speed. 2. the drives have to park the heads in a safe area and a drive is only rated to do that so many times (IIRC up to a few hundred thousand - easily exceeded if the drive is idling and spinning up every few seconds) – cas Aug 2 '12 at 21:42

Micheal Johnson , Mar 12, 2016 at 20:48

It's a long debate regarding whether it's better to leave drives running or to spin them down. Personally I believe it's best to leave them running - I turn my computer off at night and when I go out but other than that I never spin my drives down. Some people prefer to spin them down, say, at night if they're leaving the computer on or if the computer's idle for a long time, and in such cases the advantage of spinning them down for a few hours versus leaving them running is debatable. What's never good though is when the hard drive repeatedly spins down and up again in a short period of time. – Micheal Johnson Mar 12 '16 at 20:48

Micheal Johnson , Mar 12, 2016 at 20:51

Note also that spinning the drive down after it's been idle for a few hours is a bit silly, because if it's been idle for a few hours then it's likely to be used again within an hour. In that case, it would seem better to spin the drive down promptly if it's idle (like, within 10 minutes), but it's also possible for the drive to be idle for a few minutes when someone is using the computer and is likely to need the drive again soon. – Micheal Johnson Mar 12 '16 at 20:51


I just found that S.M.A.R.T. was causing an external USB disk to spin up again and again on my Raspberry Pi. Although SMART is generally a good thing, I decided to disable it, and since then it seems that the unwanted disk activity has stopped.

[Nov 08, 2018] Determining what process is bound to a port

Mar 14, 2011 | unix.stackexchange.com
I know that using the command:
lsof -i TCP

(or some variant of parameters with lsof) I can determine which process is bound to a particular port. This is useful say if I'm trying to start something that wants to bind to 8080 and some else is already using that port, but I don't know what.

Is there an easy way to do this without using lsof? I spend time working on many systems and lsof is often not installed.

Cakemox , Mar 14, 2011 at 20:48

netstat -lnp will list the pid and process name next to each listening port. This will work under Linux, but not all others (like AIX). Add -t if you want TCP only.
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:24800           0.0.0.0:*               LISTEN      27899/synergys
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      3361/python
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      2264/mysqld
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      22964/apache2
tcp        0      0 192.168.99.1:53         0.0.0.0:*               LISTEN      3389/named
tcp        0      0 192.168.88.1:53         0.0.0.0:*               LISTEN      3389/named

etc.

xxx , Mar 14, 2011 at 21:01

Cool, thanks. Looks like that works under RHEL, but not under Solaris (as you indicated). Anybody know if there's something similar for Solaris? – user5721 Mar 14 '11 at 21:01

Rich Homolka , Mar 15, 2011 at 19:56

netstat -p above is my vote. also look at lsof . – Rich Homolka Mar 15 '11 at 19:56

Jonathan , Aug 26, 2014 at 18:50

As an aside, for windows it's similar: netstat -aon | more – Jonathan Aug 26 '14 at 18:50

sudo , May 25, 2017 at 2:24

What about for SCTP? – sudo May 25 '17 at 2:24

frielp , Mar 15, 2011 at 13:33

On AIX, netstat & rmsock can be used to determine process binding:
[root@aix] netstat -Ana|grep LISTEN|grep 80
f100070000280bb0 tcp4       0      0  *.37               *.*        LISTEN
f1000700025de3b0 tcp        0      0  *.80               *.*        LISTEN
f1000700002803b0 tcp4       0      0  *.111              *.*        LISTEN
f1000700021b33b0 tcp4       0      0  127.0.0.1.32780    *.*        LISTEN

# Port 80 maps to f1000700025de3b0 above, so we type:
[root@aix] rmsock f1000700025de3b0 tcpcb
The socket 0x25de008 is being held by process 499790 (java).

Olivier Dulac , Sep 18, 2013 at 4:05

Thanks for this! Is there a way, however, to just display what process listens on the socket (instead of using rmsock, which attempts to remove it)? – Olivier Dulac Sep 18 '13 at 4:05

Vitor Py , Sep 26, 2013 at 14:18

@OlivierDulac: "Unlike what its name implies, rmsock does not remove the socket, if it is being used by a process. It just reports the process holding the socket." ( ibm.com/developerworks/community/blogs/cgaix/entry/ ) – Vitor Py Sep 26 '13 at 14:18

Olivier Dulac , Sep 26, 2013 at 16:00

@vitor-braga: Ah thx! I thought it was trying to remove it but just said which process holds it when it couldn't. Apparently it doesn't even try to remove it when a process holds it. That's cool! Thx! – Olivier Dulac Sep 26 '13 at 16:00

frielp , Mar 15, 2011 at 13:27

Another tool available on Linux is ss . From the ss man page on Fedora:
NAME
       ss - another utility to investigate sockets
SYNOPSIS
       ss [options] [ FILTER ]
DESCRIPTION
       ss is used to dump socket statistics. It allows showing information 
       similar to netstat. It can display more TCP and state informations  
       than other tools.

Example output below - the final column shows the process binding:

[root@box] ss -ap
State      Recv-Q Send-Q      Local Address:Port          Peer Address:Port
LISTEN     0      128                    :::http                    :::*        users:(("httpd",20891,4),("httpd",20894,4),("httpd",20895,4),("httpd",20896,4)
LISTEN     0      128             127.0.0.1:munin                    *:*        users:(("munin-node",1278,5))
LISTEN     0      128                    :::ssh                     :::*        users:(("sshd",1175,4))
LISTEN     0      128                     *:ssh                      *:*        users:(("sshd",1175,3))
LISTEN     0      10              127.0.0.1:smtp                     *:*        users:(("sendmail",1199,4))
LISTEN     0      128             127.0.0.1:x11-ssh-offset                  *:*        users:(("sshd",25734,8))
LISTEN     0      128                   ::1:x11-ssh-offset                 :::*        users:(("sshd",25734,7))

Eugen Constantin Dinca , Mar 14, 2011 at 23:47

For Solaris you can use pfiles and then grep by sockname: or port: .

A sample (from here ):

pfiles `ptree | awk '{print $1}'` | egrep '^[0-9]|port:'

rickumali , May 8, 2011 at 14:40

I was once faced with trying to determine what process was behind a particular port (this time it was 8000). I tried a variety of lsof and netstat, but then took a chance and tried hitting the port via a browser (i.e. http://hostname:8000/ ). Lo and behold, a splash screen greeted me, and it became obvious what the process was (for the record, it was Splunk ).

One more thought: "ps -e -o pid,args" (YMMV) may sometimes show the port number in the arguments list. Grep is your friend!

Gilles , Oct 8, 2015 at 21:04

In the same vein, you could telnet hostname 8000 and see if the server prints a banner. However, that's mostly useful when the server is running on a machine where you don't have shell access, and then finding the process ID isn't relevant. – Gilles May 8 '11 at 14:45


[Oct 16, 2018] How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command by Prakash Subramanian

Oct 15, 2018 | www.2daygeek.com
It's an important topic for Linux admins (such a wonderful topic), so everyone must be aware of it and practice how to use it efficiently.

In Linux, whenever we install a package which has services or daemons, all the service "init & systemd" scripts are added by default, but the service is not enabled.

Hence, we need to enable or disable the service manually when required. Three major init systems are available in Linux; they are very well known and still in use.

What is init System?

In Linux/Unix based operating systems, init (short for initialization) is the first process that started during the system boot up by the kernel.

It holds process ID (PID) 1 and keeps running in the background until the system is shut down.

Init looks at the /etc/inittab file to decide the Linux run level, then it starts all other processes & applications in the background as per that run level.

The BIOS, MBR, GRUB and kernel stages are kicked off before reaching the init process as part of the Linux boot process.

Below are the available run levels for Linux (there are seven runlevels, from zero to six):

  • 0 - halt the system
  • 1 - single-user mode
  • 2 - multi-user mode without networking (NFS)
  • 3 - full multi-user mode (text)
  • 4 - unused / user-definable
  • 5 - multi-user mode with a graphical display manager (X11)
  • 6 - reboot the system

The three init systems below are widely used in Linux.

What is System V (Sys V)?

System V (Sys V) is the first, traditional init system for Unix-like operating systems. init is the first process started during system boot by the kernel, and it's the parent process for everything.

Most Linux distributions started out using the traditional init system called System V (Sys V). Over the years, several replacement init systems were released to address its design limitations, such as launchd, the Service Management Facility, systemd and Upstart.

But systemd has been adopted by several major Linux distributions over the traditional SysV init systems.

What is Upstart?

Upstart is an event-based replacement for the /sbin/init daemon which handles starting of tasks and services during boot, stopping them during shutdown and supervising them while the system is running.

It was originally developed for the Ubuntu distribution, but is intended to be suitable for deployment in all Linux distributions as a replacement for the venerable System-V init.

It was used in Ubuntu from 9.10 to 14.10 and in RHEL 6 based systems; after that, it was replaced with systemd.

What is systemd?

Systemd is a new init system and system manager which was implemented/adapted into all the major Linux distributions over the traditional SysV init systems.

systemd is compatible with SysV and LSB init scripts. It can work as a drop-in replacement for sysvinit system. systemd is the first process get started by kernel and holding PID 1.

It's the parent process for everything; Fedora 15 was the first distribution to adopt systemd instead of Upstart. systemctl is the command-line utility and primary tool to manage systemd daemons/services, with actions such as start, restart, stop, enable, disable, reload & status.

systemd uses .service files instead of the bash scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups; you can see the system hierarchy by exploring the /sys/fs/cgroup/systemd directory.

How to Enable or Disable Services on Boot Using chkconfig Command?

The chkconfig utility is a command-line tool that allows you to specify in which runlevel to start a selected service, as well as to list all available services along with their current settings.

Also, it allows us to enable or disable a service at boot. Make sure you have superuser privileges (either root or sudo) to use this command.

All the service scripts are located in /etc/rc.d/init.d .

How to list All Services in run-level

The --list parameter displays all the services along with their current status (in which run-levels each service is enabled or disabled).

# chkconfig --list
NetworkManager     0:off    1:off    2:on    3:on    4:on    5:on    6:off
abrt-ccpp          0:off    1:off    2:off    3:on    4:off    5:on    6:off
abrtd              0:off    1:off    2:off    3:on    4:off    5:on    6:off
acpid              0:off    1:off    2:on    3:on    4:on    5:on    6:off
atd                0:off    1:off    2:off    3:on    4:on    5:on    6:off
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off
.
.

How to Check the Status of a Specific Service

If you would like to see the status of a particular service across run-levels, use the following format and grep for the required service.

In this case, we are going to check the auditd service status:
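
A likely form of the command, matching the --list output shown earlier:

# chkconfig --list | grep auditd
auditd             0:off    1:off    2:on    3:on    4:on    5:on    6:off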

[Oct 15, 2018] Breaking News! SUSE Linux Sold for $2.5 Billion It's FOSS by Abhishek Prakash

Acquisition by a private equity shark is never good news for a software vendor...
Jul 03, 2018 | itsfoss.com

British software company Micro Focus International has agreed to sell SUSE Linux and its associated software business to Swedish private equity group EQT Partners for $2.535 billion. – rm, 3 months ago


Novell acquired SUSE in 2003 for $210 million. – asoc, 4 months ago


"It has over 1400 employees all over the globe "
They should be updating their CVs.

[Sep 10, 2018] Find Version of Currently Running Ubuntu

Looks like lsb_release is limited to Ubuntu and Debian...
Sep 10, 2018 | www.mysysad.com
lsb_release -a

Here is the syntax to determine which kernel version you are currently running:

uname -a
uname -r

[Sep 04, 2018] Unifying custom scripts system-wide with rpm on Red Hat-CentOS

Highly recommended!
Aug 24, 2018 | linuxconfig.org
Objective

Our goal is to build rpm packages with custom content, unifying scripts across any number of systems, including versioning, deployment and undeployment.

Requirements

Privileged access to the system for install, normal access for build. Difficulty: MEDIUM.

Introduction

One of the core features of any Linux system is that they are built for automation. If a task may need to be executed more than one time - even with some part of it changing on the next run - a sysadmin is provided with countless tools to automate it, from simple shell scripts run by hand on demand (thus eliminating typo errors, or just saving some keyboard hits) to complex scripted systems where tasks run from cron at a specified time, interacting with each other, working with the result of another script, maybe controlled by a central management system, etc.

While this freedom and rich toolset indeed add to productivity, there is a catch: as a sysadmin, you write a useful script on a system, which proves to be useful on another, so you copy the script over. On a third system the script is useful too, but with a minor modification - maybe a new feature useful only on that system, reachable with a new parameter. With generalization in mind, you extend the script to provide the new feature, and to complete the task it was written for as well. Now you have two versions of the script, the first on the first two systems, the second on the third system.

You have 1024 computers running in the datacenter, and 256 of them will need some of the functionality provided by that script. In time you will have 64 versions of the script all over, every version doing its job. On the next system deployment you need a feature you recall you coded into some version, but which one? And on which systems is it?

On RPM based systems, such as Red Hat flavors, a sysadmin can take advantage of the package manager to create order in the custom content, including simple shell scripts that may provide nothing but the tools the admin wrote for convenience.

In this tutorial we will build a custom rpm for Red Hat Enterprise Linux 7.5 containing two bash scripts, parselogs.sh and pullnews.sh to provide a way that all systems have the latest version of these scripts in the /usr/local/sbin directory, and thus on the path of any user who logs in to the system.




Distributions, major and minor versions

In general, the major and minor version of the build machine should be the same as those of the systems the package is to be deployed to, as should the distribution, to ensure compatibility. If there are various versions of a given distribution, or even different distributions with many versions in your environment (oh, joy!), you should set up build machines for each. To cut the work short, you can just set up a build environment for each distribution and each major version, and have them on the lowest minor version existing in your environment for the given major version. Of course they don't need to be physical machines, and only need to be running at build time, so you can use virtual machines or containers.

In this tutorial our work is much easier: we only deploy two scripts that have no dependencies at all (except bash ), so we will build noarch packages, which stands for "not architecture dependent"; we'll also not specify the distribution the package is built for. This way we can install and upgrade them on any distribution that uses rpm , and on any version - we only need to ensure that the build machine's rpm-build package is at the oldest version in the environment.

Setting up the build environment

To build custom rpm packages, we need to install the rpm-build package:

# yum install rpm-build
From now on, we do not use the root user, and for a good reason: building packages does not require root privileges, and you don't want to break your build machine.

Building the first version of the package

Let's create the directory structure needed for building:

$ mkdir -p rpmbuild/SPECS
Our package is called admin-scripts, version 1.0. We create a specfile that specifies the metadata, contents and tasks performed by the package. This is a simple text file we can create with our favorite text editor, such as vi . The previously installed rpmbuild package will fill your empty specfile with template data if you use vi to create an empty one, but for this tutorial consider the specification below called admin-scripts-1.0.spec :



Name:           admin-scripts
Version:        1
Release:        0
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release
Place the specfile in the rpmbuild/SPECS directory we created earlier.

We need the sources referenced in the specfile - in this case, the two shell scripts. Let's create the directory for the sources (named after the package with the major version appended):

$ mkdir -p rpmbuild/SOURCES/admin-scripts-1/scripts
And copy/move the scripts into it:
$ ls rpmbuild/SOURCES/admin-scripts-1/scripts/
parselogs.sh  pullnews.sh



As this tutorial is not about shell scripting, the contents of these scripts are irrelevant. As we will create a new version of the package, and pullnews.sh is the script we will demonstrate with, its source in the first version is as below:
#!/bin/bash
echo "news pulled"
exit 0
Do not forget to add the appropriate rights to the files in the source - in our case, execution right:
chmod +x rpmbuild/SOURCES/admin-scripts-1/scripts/*.sh
Now we create a tar.gz archive from the source in the same directory:
cd rpmbuild/SOURCES/ && tar -czf admin-scripts-1.tar.gz admin-scripts-1
We are ready to build the package:
rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.0.spec
We'll get some output about the build, and if anything goes wrong, errors will be shown (for example, missing file or path). If all goes well, our new package will appear in the RPMS directory generated by default under the rpmbuild directory (sorted into subdirectories by architecture):
$ ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm
We have created a simple yet fully functional rpm package. We can query it for all the metadata we supplied earlier:
$ rpm -qpi rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm 
Name        : admin-scripts
Version     : 1
Release     : 0
Architecture: noarch
Install Date: (not installed)
Group       : Application/Other
Size        : 78
License     : GPL
Signature   : (none)
Source RPM  : admin-scripts-1-0.src.rpm
Build Date  : 2018. aug.  1., Wed, 13.27.34 CEST
Build Host  : build01.foobar.com
Relocations : (not relocatable)
Packager    : John Doe 
URL         : www.foobar.com/admin-scripts
Summary     : FooBar Inc. IT dept. admin scripts
Description :
Package installing latest version the admin scripts used by the IT dept.
And of course we can install it (with root privileges):

Installing custom scripts with rpm
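
Installing straight from the freshly built file is a standard rpm invocation:

# rpm -ivh rpmbuild/RPMS/noarch/admin-scripts-1-0.noarch.rpm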



As we installed the scripts into a directory that is on every user's $PATH , you can run them as any user in the system, from any directory:
$ pullnews.sh 
news pulled
The package can be distributed as it is, and can be pushed into repositories available to any number of systems. Doing so is out of the scope of this tutorial - however, building another version of the package is certainly not.

Building another version of the package

Our package and the extremely useful scripts in it become popular in no time, considering they are reachable anywhere with a simple yum install admin-scripts within the environment. Soon there will be many requests for improvements - in this example, many votes come from happy users who think pullnews.sh should print another line on execution; this feature would save the whole company. We need to build another version of the package, as we don't want to install another script, but a new version of it with the same name and path, since the sysadmins in our organization already rely on it heavily.

First we change the source of the pullnews.sh in the SOURCES to something even more complex:

#!/bin/bash
echo "news pulled"
echo "another line printed"
exit 0
We need to recreate the tar.gz with the new source content - we can use the same filename as the first time, as we don't change the version, only the release (and so the Source0 reference will still be valid). Note that we delete the previous archive first:
cd rpmbuild/SOURCES/ && rm -f admin-scripts-1.tar.gz && tar -czf admin-scripts-1.tar.gz admin-scripts-1
Now we create another specfile with a higher release number:
cp rpmbuild/SPECS/admin-scripts-1.0.spec rpmbuild/SPECS/admin-scripts-1.1.spec
We don't change much in the package itself, so we simply record the new version as shown below:
Name:           admin-scripts
Version:        1
Release:        1
Summary:        FooBar Inc. IT dept. admin scripts
Packager:       John Doe 
Group:          Application/Other
License:        GPL
URL:            www.foobar.com/admin-scripts
Source0:        %{name}-%{version}.tar.gz
BuildArch:      noarch

%description
Package installing latest version the admin scripts used by the IT dept.

%prep
%setup -q


%build

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/local/sbin
cp scripts/* $RPM_BUILD_ROOT/usr/local/sbin/

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root,-)
%dir /usr/local/sbin
/usr/local/sbin/parselogs.sh
/usr/local/sbin/pullnews.sh

%doc

%changelog
* Wed Aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line
* Wed Aug 1 2018 John Doe 
- release 1.0 - initial release



All done; we can now build another version of our package containing the updated script. Note that we reference the specfile with the higher release number as the source of the build:
rpmbuild --bb rpmbuild/SPECS/admin-scripts-1.1.spec
If the build is successful, we now have two versions of the package under our RPMS directory:
ls rpmbuild/RPMS/noarch/
admin-scripts-1-0.noarch.rpm  admin-scripts-1-1.noarch.rpm
And now we can install the "advanced" script, or upgrade it if it is already installed.

Upgrading custom scripts with rpm
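
For reference, upgrading in place is done with rpm -U , which installs the package if it is absent and upgrades it otherwise:

# rpm -Uvh rpmbuild/RPMS/noarch/admin-scripts-1-1.noarch.rpm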

And our sysadmins can see that the feature request has landed in this version:

rpm -q --changelog admin-scripts
* sze aug 22 2018 John Doe 
- release 1.1 - pullnews.sh v1.1 prints another line

* sze aug 01 2018 John Doe 
- release 1.0 - initial release
Conclusion

We wrapped our custom content into versioned rpm packages. This means no older versions are left scattered across systems; everything is in its place, at the version we installed or upgraded to. RPM gives us the ability to replace old stuff needed only in previous versions, to add custom dependencies, or to provide tools or services our other packages rely on. With effort, we can pack nearly any of our custom content into rpm packages and distribute it across our environment, not only with ease, but with consistency.

[Jan 14, 2018] How to remount filesystem in read write mode under Linux

Jan 14, 2018 | kerneltalks.com

Most of the time, on newly created file systems or NFS filesystems, we see an error like the one below:

root@kerneltalks # touch file1
touch: cannot touch 'file1': Read-only file system

This is because the file system is mounted read-only. In such a scenario you have to mount it in read-write mode. Before that, we will see how to check if the file system is mounted read-only, and then how to remount it read-write.


How to check if a file system is read-only

To confirm the file system is mounted in read-only mode, use the command below:

# cat /proc/mounts | grep datastore
/dev/xvdf /datastore ext3 ro,seclabel,relatime,data=ordered 0 0

Grep for your mount point in cat /proc/mounts and observe the third column, which shows all options used for the mounted file system. Here ro denotes the file system is mounted read-only.

You can also get these details using the mount -v command:

root@kerneltalks # mount -v | grep datastore
/dev/xvdf on /datastore type ext3 (ro,relatime,seclabel,data=ordered)

In this output, file system options are listed in parentheses in the last column.


Remount the file system in read-write mode

To remount the file system in read-write mode, use the command below:

root@kerneltalks # mount -o remount,rw /datastore
root@kerneltalks # mount -v | grep datastore
/dev/xvdf on /datastore type ext3 (rw,relatime,seclabel,data=ordered)

Observe that after remounting, the ro option changed to rw . The file system is now mounted read-write and you can write files to it.

Note: it is recommended to fsck the file system before remounting it.

You can check the file system by running fsck on its volume.

root@kerneltalks # df -h /datastore
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/xvda2      10G   881M  9.2G   9%    /
root@kerneltalks # fsck /dev/xvdf
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
/dev/xvdf: clean, 12/655360 files, 79696/2621440 blocks

Sometimes corrections need to be made to the file system, which may require a reboot to make sure no processes are accessing the file system.

[Jan 14, 2018] Linux yes Command Tutorial for Beginners (with Examples)

Jan 14, 2018 | www.howtoforge.com

You can see that the user has to type 'y' for each query. It's in situations like these that yes can help. For the above scenario specifically, you can use yes in the following way:

yes | rm -ri test

Q3. Is there any use of yes when it's used alone?

Yes, there's at least one use: testing how well a computer system handles high load. The reason is that the tool utilizes 100% of the processor on systems that have a single processor. If you want to apply this test to a system with multiple processors, you need to run one yes process per processor, as sketched below.
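
A minimal sketch of such a load test, assuming the coreutils nproc and psmisc killall tools are available:

# start one busy 'yes' per CPU core
for i in $(seq "$(nproc)"); do yes > /dev/null & done
# watch the load with top or uptime, then clean up
killall yes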

[Jan 14, 2018] Working with Vim Editor Advanced concepts

Jan 14, 2018 | linuxtechlab.com

Opening multiple files with VI/VIM editor

To open multiple files, the command is the same as for a single file; we just add the other file names as well.

$ vi file1 file2 file3

Now to browse to next file, we can use

$ :n

or we can also use

$ :e filename

Run external commands inside the editor

We can run external Linux/Unix commands from inside the vi editor, i.e. without exiting the editor. To issue a command from the editor, go back to Command Mode if in Insert Mode, and use the bang, i.e. '!', followed by the command to run. The syntax for running a command is,

$ :! command

An example for this would be

$ :! df -H

Searching for a pattern

To search for a word or pattern in the text file, we use the following two commands in command mode: / searches forward & ? searches backward.

Both of these commands are used for the same purpose, the only difference being the direction in which they search. An example would be,

$ /search_pattern (searches forward, towards the end of the file)

$ ?search_pattern (searches backward, towards the beginning of the file)

Searching & replacing a pattern

We might need to search for & replace a word or a pattern in our text files. Rather than finding every occurrence of the word in the whole text file & replacing it by hand, we can issue a command from command mode to replace the word automatically. Note that :s acts on the current line only, so to operate on the whole file we prefix it with % . The syntax for search & replace is,

$ :%s/pattern_to_be_found/New_pattern/g

Suppose we want to find the word "alpha" & replace it with the word "beta"; the command would be

$ :%s/alpha/beta/g

If we want to replace only the first occurrence of the word "alpha" on each line, then the command would be

$ :%s/alpha/beta/

Using Set commands

We can also customize the behaviour and the look and feel of the vi/vim editor by using the set command. Here is a list of some options that can be used with the set command to modify the behaviour of the vi/vim editor,
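
A few commonly used examples:

$ :set number will display line numbers in front of each line

$ :set ignorecase will make searches case-insensitive

$ :set autoindent will carry the indentation of the current line over to newly inserted lines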

Some other commands to modify the vi editor are,

$ :colorscheme <name> is used to change the color scheme of the editor (VIM only)

$ :syntax on will turn on syntax highlighting for .xml, .html files, etc. (VIM only)

This completes our tutorial; do mention your queries, questions, or suggestions in the comment box below.

[Oct 31, 2017] Shell config subfiles by Tom Ryder

Notable quotes:
"... Note that we unset the config variable after we're done, otherwise it'll be in the namespace of our shell where we don't need it. You may also wish to check for the existence of the ~/.bashrc.d directory, check there's at least one matching file inside it, or check that the file is readable before attempting to source it, depending on your preference. ..."
"... Thanks to commenter oylenshpeegul for correcting the syntax of the loops. ..."
Jan 30, 2015 | sanctum.geek.nz

Large shell startup scripts ( .bashrc , .profile ) over about fifty lines or so with a lot of options, aliases, custom functions, and similar tweaks can get cumbersome to manage over time, and if you keep your dotfiles under version control it's not terribly helpful to see large sets of commits just editing the one file when it could be more instructive if broken up into files by section.

Given that shell configuration is just shell code, we can apply the source builtin (or the . builtin for POSIX sh ) to load several files at the end of a .bashrc , for example:

source ~/.bashrc.options
source ~/.bashrc.aliases
source ~/.bashrc.functions

This is a better approach, but it still binds us into using those filenames; we still have to edit the ~/.bashrc file if we want to rename them, or remove them, or add new ones.

Fortunately, UNIX-like systems have a common convention for this, the .d directory suffix, in which sections of configuration can be stored to be read by a main configuration file dynamically. In our case, we can create a new directory ~/.bashrc.d :

$ ls ~/.bashrc.d
options.bash
aliases.bash
functions.bash

With a slightly more advanced snippet at the end of ~/.bashrc , we can then load every file with the suffix .bash in this directory:

# Load any supplementary scripts
for config in "$HOME"/.bashrc.d/*.bash ; do
    source "$config"
done
unset -v config

Note that we unset the config variable after we're done, otherwise it'll be in the namespace of our shell where we don't need it. You may also wish to check for the existence of the ~/.bashrc.d directory, check there's at least one matching file inside it, or check that the file is readable before attempting to source it, depending on your preference.
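
A defensive variant along those lines might look like this (a sketch; keep or drop the checks to taste):

# Load any supplementary scripts, defensively
if [ -d "$HOME"/.bashrc.d ]; then
    for config in "$HOME"/.bashrc.d/*.bash ; do
        [ -r "$config" ] && source "$config"
    done
    unset -v config
fi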

The same method can be applied with .profile to load all scripts with the suffix .sh in ~/.profile.d , if we want to write in POSIX sh , with some slightly different syntax:

# Load any supplementary scripts
for config in "$HOME"/.profile.d/*.sh ; do
    . "$config"
done
unset -v config

Another advantage of this method is that if you have your dotfiles under version control, you can arrange to add extra snippets on a per-machine basis unversioned, without having to update your .bashrc file.

Here's my implementation of the above method, for both .bashrc and .profile :

Thanks to commenter oylenshpeegul for correcting the syntax of the loops.

[Oct 27, 2017] Neat trick of using su command for killing all processes for a particular user

Oct 27, 2017 | unix.stackexchange.com

If you pass -1 as the process ID argument to either the kill shell command or the kill C function , then the signal is sent to all the processes it can reach, which in practice means all the processes of the user running the kill command or syscall.

su -c 'kill -TERM -1' bob

In C (error checking omitted):

/* needs <unistd.h> and <signal.h>; error checking omitted */
if (fork() == 0) {
    setuid(uid);                /* become the target user */
    signal(SIGTERM, SIG_DFL);   /* reset SIGTERM to its default action */
    kill(-1, SIGTERM);          /* PID -1: signal every process we can reach */
}

[Oct 27, 2017] c - How do I kill all a user's processes using their UID - Unix Linux Stack Exchange

Oct 27, 2017 | unix.stackexchange.com

osgx ,Aug 4, 2011 at 10:07

Use pkill -U UID or pkill -u UID , or the username instead of the UID. Sometimes skill -u USERNAME may work; another tool is killall -u USERNAME .

skill was Linux-specific and is now outdated, while pkill is more portable (Linux, Solaris, BSD).

pkill allows both numeric and symbolic UIDs, effective and real: http://man7.org/linux/man-pages/man1/pkill.1.html

pkill - ... signal processes based on name and other attributes

    -u, --euid euid,...
         Only match processes whose effective user ID is listed.
         Either the numerical or symbolical value may be used.
    -U, --uid uid,...
         Only match processes whose real user ID is listed.  Either the
         numerical or symbolical value may be used.

The man page of skill says it only accepts a username, not a user ID: http://man7.org/linux/man-pages/man1/skill.1.html

skill, snice ... These tools are obsolete and unportable. The command syntax is poorly defined. Consider using the killall, pkill

  -u, --user user
         The next expression is a username.

killall is not marked as outdated in Linux, but it also will not work with a numeric UID; only a username: http://man7.org/linux/man-pages/man1/killall.1.html

killall - kill processes by name

   -u, --user
         Kill only processes the specified user owns.  Command names
         are optional.

I think any utility that finds processes via a Linux/Solaris-style /proc (procfs) will use the full list of processes (doing a readdir of /proc ). I think they will iterate over the numeric subfolders of /proc and check every process found for a match.

To get the list of users, use getpwent (it returns one user per call).

The skill (procps & procps-ng) and killall (psmisc) tools both use the getpwnam library call to parse the argument of the -u option, and only a username will be parsed. pkill (procps & procps-ng) uses both atol and getpwnam to parse the -u / -U argument, allowing both numeric and textual user specifiers.

; ,Aug 4, 2011 at 10:11

pkill is not obsolete. It may be unportable outside Linux, but the question was about Linux specifically. – Lars Wirzenius Aug 4 '11 at 10:11

Petesh ,Aug 4, 2011 at 10:58

to get the list of users use the one liner: getent passwd | awk -F: '{print $1}' – Petesh Aug 4 '11 at 10:58

; ,Aug 4, 2011 at 12:07

what about I give a command like: "kill -ju UID" from C system() call? – user489152 Aug 4 '11 at 12:07

osgx ,Aug 4, 2011 at 15:01

is it an embedded linux? you have no skill, pkill and killall? Even busybox embedded shell has pkill and killall. – osgx Aug 4 '11 at 15:01

michalzuber ,Apr 23, 2015 at 7:47

killall -u USERNAME worked like charm – michalzuber Apr 23 '15 at 7:47

[Oct 19, 2017] Bash One-Liners bashoneliners.com

Oct 19, 2017 | www.bashoneliners.com
Kill a process running on port 8080
 $ lsof -i :8080 | awk 'NR > 1 {print $2}' | xargs --no-run-if-empty kill

-- by Janos on Sept. 1, 2017, 8:31 p.m.
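
A shorter variant uses lsof's -t flag, which prints bare PIDs suitable for command substitution (though it loses the no-op safety of xargs --no-run-if-empty when nothing matches):

 $ kill $(lsof -t -i :8080)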

Make a new folder and cd into it.
 $ mkcd(){ NAME=$1; mkdir -p "$NAME"; cd "$NAME"; }

-- by PrasannaNatarajan on Aug. 3, 2017, 6:49 a.m.

Go up to a particular folder
 $ alias ph='cd ${PWD%/public_html*}/public_html'

-- by Jab2870 on July 18, 2017, 6:07 p.m.

Explanation

I work on a lot of websites and often need to go up to the public_html folder.

This command creates an alias so that however many folders deep I am, I will be taken up to the correct folder.

alias ph='....' : This creates a shortcut so that when command ph is typed, the part between the quotes is executed

cd ... : This changes directory to the directory specified

PWD : This is a global bash variable that contains the current directory

${...%/public_html*} : This removes /public_html and anything after it from the specified string

Finally, /public_html at the end is appended onto the string.

So, to sum up, when ph is run, we ask bash to change the directory to the current working directory with anything after public_html removed.

Open another terminal at current location
 $ $TERMINAL & disown

-- by Jab2870 on July 18, 2017, 3:04 p.m.

Explanation

Opens another terminal window at the current location.

Use Case

I often cd into a directory and decide it would be useful to open another terminal in the same folder, maybe for an editor or something. Previously, I would open the terminal and repeat the cd command.

I have aliased this command to open so I just type open and I get a new terminal already in my desired folder.

The & disown part of the command stops the new terminal from being dependent on the first, meaning that you can still use the first, and if you close the first, the second will remain open.

Limitations

It relies on you having the $TERMINAL environment variable set. If you don't have this set, you could easily change it to something like the following:

gnome-terminal & disown or konsole & disown

Preserve your fingers from cd ..; cd ..; cd..; cd..;
 $ up(){ DEEP=$1; for i in $(seq 1 ${DEEP:-"1"}); do cd ../; done; }

-- by alireza6677 on June 28, 2017, 5:40 p.m.

Generate a sequence of numbers
 $ echo {01..10}

-- by Elkku on March 1, 2015, 12:04 a.m.

Explanation

This example will print:

01 02 03 04 05 06 07 08 09 10

While the original one-liner is indeed IMHO the canonical way to loop over numbers, the brace expansion syntax of Bash 4.x has some kick-ass features such as correct padding of the number with leading zeros.

Limitations

The zero-padding feature works only in Bash >=4.

Tweet

Related one-liners
Generate a sequence of numbers
 $ for ((i=1; i<=10; ++i)); do echo $i; done

-- by Janos on Nov. 4, 2014, 12:29 p.m.

Explanation

This is similar to seq , but portable. seq does not exist on all systems and is no longer recommended. Other variations emulating various uses of seq :

# seq 1 2 10
for ((i=1; i<=10; i+=2)); do echo $i; done

# seq -w 5 10
for ((i=5; i<=10; ++i)); do printf '%02d\n' $i; done
Find recent logs that contain the string "Exception"
 $ find . -name '*.log' -mtime -2 -exec grep -Hc Exception {} \; | grep -v :0$

-- by Janos on July 19, 2014, 7:53 a.m.

Explanation

The find :

  • -name '*.log' -- match files ending with .log
  • -mtime -2 -- match files modified within the last 2 days
  • -exec CMD ARGS \; -- for each file found, execute command, where {} in ARGS will be replaced with the file's path

The grep :

  • -c is to print the count of the matches instead of the matches themselves
  • -H is to print the name of the file, as grep normally won't print it when there is only one filename argument
  • The output lines will be in the format path:count . Files that didn't match "Exception" will still be printed, with 0 as count
  • The second grep filters the output of the first, excluding lines that end with :0 (= the files that didn't contain matches)

Extra tips:

  • Change "Exception" to the typical relevant failure indicator of your application
  • Add -i for grep to make the search case insensitive
  • To make the find match strictly only files, add -type f
  • Schedule this as a periodic job, and pipe the output to a mailer, for example | mailx -s 'error counts' [email protected]
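Putting the extra tips together into a single line (the recipient root is just a placeholder):

 $ find . -type f -name '*.log' -mtime -2 -exec grep -Hci Exception {} \; | grep -v ':0$' | mailx -s 'error counts' root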
Remove offending key from known_hosts file with one swift move
 $ sed -i 18d .ssh/known_hosts

-- by EvaggelosBalaskas on Jan. 16, 2013, 2:29 p.m.

Explanation

Using sed to remove a specific line.

The -i parameter is to edit the file in-place.

Limitations

This works as posted in GNU sed . In BSD sed , the -i flag requires a parameter to use as the suffix of a backup file. You can set it to empty to not use a backup file:
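 $ sed -i '' 18d .ssh/known_hosts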

[Oct 16, 2017] Indenting Here-Documents - bash Cookbook

Oct 16, 2017 | www.safaribooksonline.com

Indenting Here-Documents Problem

The here-document is great, but it's messing up your shell script's formatting. You want to be able to indent for readability. Solution

Use <<- and then you can use tab characters (only!) at the beginning of lines to indent this portion of your shell script.

   $ cat myscript.sh
        ...
             grep $1 <<-'EOF'
                lots of data
                can go here
                it's indented with tabs
                to match the script's indenting
                but the leading tabs are
                discarded when read
                EOF
            ls
        ...
        $
Discussion

The hyphen just after the << is enough to tell bash to ignore the leading tab characters. This is for tab characters only and not arbitrary white space. This is especially important with the EOF or any other marker designation. If you have spaces there, it will not recognize the EOF as your ending marker, and the "here" data will continue through to the end of the file (swallowing the rest of your script). Therefore, you may want to always left-justify the EOF (or other marker) just to be safe, and let the formatting go on this one line.

[Oct 16, 2017] Indenting bourne shell here documents

Oct 16, 2017 | prefetch.net

The Bourne shell provides here documents to allow a block of data to be passed to a process through STDIN. The typical format for a here document is something similar to this:

command <<ARBITRARY_TAG
data to pass 1
data to pass 2
ARBITRARY_TAG

This will send the data between the ARBITRARY_TAG statements to the standard input of the process. In order for this to work, you need to make sure that the data is not indented. If you indent it for readability, you will get a syntax error similar to the following:

./test: line 12: syntax error: unexpected end of file

To allow your here documents to be indented, you can append a "-" to the redirection operator (making it <<-) like so:

if [ "${STRING}" = "SOMETHING" ]
then
        somecommand <<-EOF
        this is a string1
        this is a string2
        this is a string3
        EOF
fi

You will need to use tabs to indent the data, but that is a small price to pay for added readability. Nice!

[Oct 09, 2017] TMOUT - Auto Logout Linux Shell When There Isn't Any Activity by Aaron Kili

Oct 07, 2017 | www.tecmint.com
... ... ..

To enable automatic user logout, we will be using the TMOUT shell variable, which terminates a user's login shell in case there is no activity for a given number of seconds that you can specify.

To enable this globally (system-wide for all users), set the above variable in the /etc/profile shell initialization file.
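For example (a minimal sketch; the 300-second timeout is an arbitrary choice), the following lines could be appended to /etc/profile:

# terminate idle login shells after 300 seconds of inactivity
TMOUT=300
readonly TMOUT      # prevent users from unsetting the limit
export TMOUT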

[Jul 29, 2017] shell - How does this bash code detect an interactive session - Stack Overflow

Notable quotes:
"... ', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or ' ..."
Jul 29, 2017 | stackoverflow.com

user1284631 , asked Jun 5 '13 at 8:44

Following some issues with scp (it did not like the presence of the bash bind command in my .bashrc file, apparently), I followed the advice of a clever guy on the Internet (I just cannot find that post right now) that put at the top of its .bashrc file this:
[[ ${-#*i} != ${-} ]] || return

in order to make sure that the bash initialization is NOT executed unless in interactive session.

Now, that works. However, I am not able to figure how it works. Could you enlighten me?

According to this answer , the $- is the current options set for the shell and I know that the ${} is the so-called "substring" syntax for expanding variables.

However, I do not understand the ${-#*i} part. And why $-#*i is not the same as ${-#*i} .

blue , answered Jun 5 '13 at 8:49

${parameter#word}

${parameter##word}

The word is expanded to produce a pattern just as in filename expansion. If the pattern matches the beginning of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the '#' case) or the longest matching pattern (the '##' case) deleted.

If parameter is '@' or '*', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or '*', the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list.

Source: http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html

So basically what happens in ${-#*i} is that *i is expanded, and if it matches the beginning of the value of $- , then the result of the whole expansion is $- with the shortest matching pattern between *i and $- deleted.

Example

VAR "baioasd" 
echo ${VAR#*i};

outputs oasd .

In your case

If the shell is interactive, $- will contain the letter 'i', so when you strip the variable $- of the pattern *i you will get a string that is different from the original $- ( [[ ${-#*i} != ${-} ]] yields true). If the shell is not interactive, $- does not contain the letter 'i' so the pattern *i does not match anything in $- and [[ ${-#*i} != $- ]] yields false, and the return statement is executed.

perreal , answered Jun 5 '13 at 8:53

See this :

To determine within a startup script whether or not Bash is running interactively, test the value of the '-' special parameter. It contains i when the shell is interactive

Your substitution removes the string up to, and including, the i and tests whether the substituted version is equal to the original string. They will be different if there is an i in ${-} .
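For comparison, an equivalent and arguably more readable idiom (not from the thread) uses a case statement on $-:

# put near the top of ~/.bashrc: stop sourcing when not interactive
case $- in
    *i*) ;;        # interactive shell: continue with the rest of the file
    *) return;;    # non-interactive (scp, scripts): bail out
esac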

[Jul 25, 2017] Local variables

Notable quotes:
"... completely local and separate ..."
Jul 25, 2017 | wiki.bash-hackers.org

local to a function:

myfunc() {
    local var=VALUE

    # alternative, only when used INSIDE a function
    declare var=VALUE

    ...
}

The local keyword (or declaring a variable using the declare command) tags a variable to be treated completely local and separate inside the function where it was declared:

foo=external

printvalue() {
    local foo=internal
    echo $foo
}

# this will print "external"
echo $foo

# this will print "internal"
printvalue

# this will print - again - "external"
echo $foo

[Jul 25, 2017] Environment variables

Notable quotes:
"... environment variables ..."
"... including the environment variables ..."
Jul 25, 2017 | wiki.bash-hackers.org

The environment space is not directly related to the topic about scope, but it's worth mentioning.

Every UNIX® process has a so-called environment . Other items, in addition to variables, are saved there, the so-called environment variables . When a child process is created (in Bash e.g. by simply executing another program, say ls to list files), the whole environment including the environment variables is copied to the new process. Reading that from the other side means: Only variables that are part of the environment are available in the child process.

A variable can be tagged to be part of the environment using the export command:

# create a new variable and set it:
# -> This is a normal shell variable, not an environment variable!
myvariable="Hello world."

# make the variable visible to all child processes:
# -> Make it an environment variable: "export" it
export myvariable

Remember that the exported variable is a copy . There is no provision to "copy it back to the parent." See the article about Bash in the process tree !
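A quick way to see the copy semantics (a sketch using child shells):

myvariable="Hello world."
export myvariable
bash -c 'echo "$myvariable"'        # the child sees: Hello world.
bash -c 'myvariable="changed"'      # the change happens only inside the child
echo "$myvariable"                  # the parent still prints: Hello world.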


1) under specific circumstances, also by the shell itself

[Jul 25, 2017] Block commenting

Jul 25, 2017 | wiki.bash-hackers.org

: (colon) and input redirection. The : does nothing, it's a pseudo command, so it does not care about standard input. In the following code example, you want to test mail and logging, but not dump the database, or execute a shutdown:

#!/bin/bash
# Write info mails, do some tasks and bring down the system in a safe way
echo "System halt requested"
mail -s "System halt" netadmin@example.com
logger -t SYSHALT "System halt requested"

##### The following "code block" is effectively ignored
: <<"SOMEWORD"
/etc/init.d/mydatabase clean_stop
mydatabase_dump /var/db/db1 /mnt/fsrv0/backups/db1
logger -t SYSHALT "System halt: pre-shutdown actions done, now shutting down the system"
shutdown -h NOW
SOMEWORD
##### The ignored codeblock ends here
What happened? The : pseudo command was given some input by redirection (a here-document) - the pseudo command didn't care about it, effectively, the entire block was ignored.

The here-document tag was quoted here to avoid substitutions in the "commented" text! Check redirection with here-documents for more information.

[Jul 25, 2017] Doing specific tasks: concepts, methods, ideas

Notable quotes:
"... under construction! ..."
Jul 25, 2017 | wiki.bash-hackers.org

[Feb 14, 2017] Ms Dos style aliases for linux

I think alias ipconfig='ifconfig' is really useful for people who work with Linux from a Windows PC desktop/laptop.
Feb 14, 2017 | bash.cyberciti.biz
# MS-DOS / XP cmd like stuff
   alias edit="$VISUAL"
   alias copy='cp'
   alias cls='clear'
   alias del='rm'
   alias dir='ls'
   alias md='mkdir'
   alias move='mv'
   alias rd='rmdir'
   alias ren='mv'
   alias ipconfig='ifconfig'

[Feb 04, 2017] Quickly find differences between two directories

You may be surprised, but the GNU diff used in Linux understands the situation when its two arguments are directories and behaves accordingly
Feb 04, 2017 | www.cyberciti.biz

The diff command compares files line by line. It can also compare two directories:

# Compare two folders using diff ##
diff /etc /tmp/etc_old  
Rafal Matczak September 29, 2015, 7:36 am
§ Quickly find differences between two directories
And quicker:
 diff -y <(ls -l ${DIR1}) <(ls -l ${DIR2})  
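For a full recursive comparison of the two trees, GNU diff also accepts -r (recurse into subdirectories) and -q (report only which files differ):

 $ diff -rq /etc /tmp/etc_old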

[Feb 04, 2017] Restoring deleted /tmp folder

Jan 13, 2015 | cyberciti.biz

As my journey with the Linux and Unix shell continues, I have made a few mistakes. I accidentally deleted the /tmp folder. To restore it, all you have to do is:

mkdir /tmp
chmod 1777 /tmp
chown root:root /tmp
ls -ld /tmp
 

[Feb 04, 2017] Use CDPATH to access frequent directories in bash - Mac OS X Hints

Feb 04, 2017 | hints.macworld.com
The variable CDPATH defines the search path for the cd command's destination directories, serving much like a "home for directories". The danger is in creating too complex a CDPATH; often a single directory works best. For example: export CDPATH=/srv/www/public_html. Now, instead of typing cd /srv/www/public_html/CSS I can simply type: cd CSS
Use CDPATH to access frequent directories in bash
Mar 21, '05 10:01:00AM • Contributed by: jonbauman

I often find myself wanting to cd to the various directories beneath my home directory (i.e. ~/Library, ~/Music, etc.), but being lazy, I find it painful to have to type the ~/ if I'm not in my home directory already. Enter CDPATH , as described in man bash :

The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr".
Personally, I use the following command (either on the command line for use in just that session, or in .bash_profile for permanent use):
CDPATH=".:~:~/Library"

This way, no matter where I am in the directory tree, I can just cd dirname , and it will take me to the directory that is a subdirectory of any of the ones in the list. For example:
$ cd
$ cd Documents 
/Users/baumanj/Documents
$ cd Pictures
/Users/username/Pictures
$ cd Preferences
/Users/username/Library/Preferences
etc...
[ robg adds: No, this isn't some deeply buried treasure of OS X, but I'd never heard of the CDPATH variable, so I'm assuming it will be of interest to some other readers as well.]

cdable_vars is also nice
Authored by: clh on Mar 21, '05 08:16:26PM

Check out the bash command shopt -s cdable_vars

From the man bash page:

cdable_vars

If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.

With this set, if I give the following bash command:

export d="/Users/chap/Desktop"

I can then simply type

cd d

to change to my Desktop directory.

I put the shopt command and the various export commands in my .bashrc file.

[Feb 04, 2017] Copy file into multiple directories

Feb 04, 2017 | www.cyberciti.biz
Instead of running:
cp /path/to/file /usr/dir1
cp /path/to/file /var/dir2
cp /path/to/file /nas/dir3

Run the following command to copy file into multiple dirs:

echo /usr/dir1 /var/dir2 /nas/dir3 | xargs -n 1 cp -v /path/to/file

[Feb 04, 2017] 20 Unix Command Line Tricks – Part I

Feb 04, 2017 | www.cyberciti.biz
Locking a directory

For privacy of my data I wanted to lock down /downloads on my file server. So I ran:

chmod 0000 /downloads

The root user still has access, but the ls and cd commands will not work for other users. To go back:

chmod 0755 /downloads

Clear gibberish all over the screen

Just type:

reset

Becoming human

Pass the -h or -H (and other options) command line option to GNU or BSD utilities to get the output of commands like ls, df, du in human-understandable formats:

# print sizes in human readable format (e.g., 1K 234M 2G)
ls -lh

# show output in bytes, KB, MB, or GB
df -h
df -k

# print sizes in human readable format (e.g., 1K 234M 2G)
free -b
free -k
free -m
free -g

# show disk usage in human readable format
du -h

# get file system perms in human readable format
stat -c %A /boot

# compare human readable numbers
sort -h file

# display the CPU information in human readable format on a Linux
lscpu
lscpu -e
lscpu -e=cpu,node

# show the size of each file but in a more human readable way
tree -h
tree -h /boot

Show information about known users in the Linux based system

Just type:

## linux version ##
lslogins

## BSD version ##
logins

Sample outputs:

UID USER      PWD-LOCK PWD-DENY LAST-LOGIN GECOS
  0 root             0        0   22:37:59 root
  1 bin              0        1            bin
  2 daemon           0        1            daemon
  3 adm              0        1            adm
  4 lp               0        1            lp
  5 sync             0        1            sync
  6 shutdown         0        1 2014-Dec17 shutdown
  7 halt             0        1            halt
  8 mail             0        1            mail
 10 uucp             0        1            uucp
 11 operator         0        1            operator
 12 games            0        1            games
 13 gopher           0        1            gopher
 14 ftp              0        1            FTP User
 27 mysql            0        1            MySQL Server
 38 ntp              0        1            
 48 apache           0        1            Apache
 68 haldaemon        0        1            HAL daemon
 69 vcsa             0        1            virtual console memory owner
 72 tcpdump          0        1            
 74 sshd             0        1            Privilege-separated SSH
 81 dbus             0        1            System message bus
 89 postfix          0        1            
 99 nobody           0        1            Nobody
173 abrt             0        1            
497 vnstat           0        1            vnStat user
498 nginx            0        1            nginx user
499 saslauth         0        1            "Saslauthd user"
Confused by the output of the top command?

Seriously, you need to try out htop instead of top:

sudo htop

Want to run the same command again?

Just type !! . For example:

/myhome/dir/script/name arg1 arg2

# To run the same command again
!!

## To run the last command again as root user
sudo !!

The !! repeats the most recent command. To run the most recent command beginning with "foo":

!foo

# Run the most recent command beginning with "service" as root
sudo !service

Use !$ to run a command with the last argument of the most recent command:

# Edit nginx.conf
sudo vi /etc/nginx/nginx.conf

# Test nginx.conf for errors
/sbin/nginx -t -c /etc/nginx/nginx.conf

# After testing the file with "/sbin/nginx -t -c /etc/nginx/nginx.conf", you
# can edit it again with vi
sudo vi !$

Get a reminder when you have to leave

If you need a reminder to leave your terminal, type the following command:

leave +hhmm

Where +hhmm specifies when to be reminded: with the leading plus sign, hh hours and mm minutes from now; without it, hhmm is an absolute time of day.

Home sweet home

Want to go to the directory you were just in? Run:
cd -
Need to quickly return to your home directory? Enter:
cd
The variable CDPATH defines the search path for the cd command:

export CDPATH=/var/www:/nas10

Now, instead of typing cd /var/www/html/ I can simply type the following to cd into the /var/www/html path:

cd html

Editing a file being viewed with less pager

To edit a file being viewed with the less pager, press v . The file will be opened for editing in $EDITOR:

less *.c
less foo.html
## Press v to edit file ##
## Quit from editor and you would return to the less pager again ##

List all files or directories on your system

To see all of the directories on your system, run:

find / -type d | less

# List all directories in your $HOME
find $HOME -type d -ls | less

To see all of the files, run:

find / -type f | less

# List all files in your $HOME
find $HOME -type f -ls | less

Build directory trees in a single command

You can create whole directory trees with a single mkdir command by passing the -p option:

mkdir -p /jail/{dev,bin,sbin,etc,usr,lib,lib64}
ls -l /jail/

Copy file into multiple directories

Instead of running:

cp /path/to/file /usr/dir1
cp /path/to/file /var/dir2
cp /path/to/file /nas/dir3

Run the following command to copy the file into multiple dirs:

echo /usr/dir1 /var/dir2 /nas/dir3 | xargs -n 1 cp -v /path/to/file

Creating a shell function is left as an exercise for the reader
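For the record, a minimal sketch of such a function (the name cp2dirs is arbitrary):

# copy one file into many directories: cp2dirs FILE DIR...
cp2dirs() {
    local file=$1
    shift
    local dir
    for dir in "$@"; do
        cp -v -- "$file" "$dir"
    done
}

# usage: cp2dirs /path/to/file /usr/dir1 /var/dir2 /nas/dir3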

Quickly find differences between two directories

The diff command compares files line by line. It can also compare two directories:

ls -l /tmp/r
ls -l /tmp/s

# Compare two folders using diff ##
diff /tmp/r/ /tmp/s/


[Jan 26, 2012] A last-resort trick to recover your machine from the brink of death

Have you heard of the magic SysRq key?

No?

Well, it's magic. It's directly shunted to the Linux kernel. You press ALT, press the PrintScreen (SysRq) key, and while holding them both down, press one of the letters (each letter has a different function assigned to it).

It's not normally enabled, but you can enable it by putting

kernel.sysrq = 1

in your machine's /etc/sysctl.conf file. Oh, and then rebooting.
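(If you would rather not reboot, the same setting can also be applied to the running kernel; this is standard sysctl usage:)

# enable the magic SysRq key immediately, no reboot required
sysctl -w kernel.sysrq=1
# or, equivalently:
echo 1 > /proc/sys/kernel/sysrq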

Here's why it's useful.

So, what does SysRq do, really?

Hit Alt+SysRq+K - the windowing system will restart. More effective than Ctrl+Alt+Backspace.

Suppose a GUI application you just opened is starting to swallow massive amounts of RAM. Like, one gigabyte, perhaps? Your machine is locking up, and you feel the mouse start to stutter at first, then freeze completely - while the hard disk light in your computer's front panel is lighting up frantically, gasping for air (aka memory).

You now have three choices:

  1. Sit it out and let the Linux kernel detect this situation and kill the abusive application. This can take way more than 15 minutes.
  2. Press the computer's power off button for 5 seconds. This shuts your machine down uncleanly and leads to data loss.
  3. Hit the magic SysRq combo: Alt+SysRq+K.

Should you choose option 3, the graphical subsystem dies immediately. That's because Alt+SysRq+K kills any application that holds the keyboard open - and, you guessed it, the graphical subsystem is holding it open. This premature death of the GUI causes all GUI applications to die in a cascade, including the abusive application.

Two to ten seconds later, you will be presented with a login prompt.

Sure, you lost changes to all files you haven't saved, and all the tabs in your Web browser… but at least you didn't have to reboot uncleanly, did you?

But, Ctrl+Alt+Backspace?

Once the machine is in a critically heavy memory crunch, Ctrl+Alt+Backspace will take too much time to work, because the windowing system will be pressed for memory to even execute. The magic SysRq key has the luxury of not having that problem - if Ctrl+Alt+Backspace were an IV drip, SysRq would be like a central line.

Why this key combination exists

The reason this key combo exists is simple. Alt+SysRq+K is called SAK (System Attention Key). It was designed back in the days of, um, yore, to kill all applications snooping on the keyboard - so administrators wishing to log in could safely do so without anyone sniffing their passwords.

As a preventative security measure, it sure works against keyloggers and other malware that may be snooping on your keyboard, may I say. And it most definitely works against your run-of-the-mill temporary memory shortage ;-).

Advantages/disadvantages

Well, the major disadvantages are:

But, on a memory crunch, this beats rebooting hands-down. And that's the biggest advantage.

Controlling runaway processes on Linux • Rudd-O.com

Sometimes, buggy memory hogs can choke your machine. Here, two tricks: one to recover from a memory choke, another to prevent memory chokes forever.

Misbehaving application frozen?

Has an application stopped responding on your machine? Well, as long as your machine is still responsive, you can use these tricks to nuke it safely.

On KDE

Hit Ctrl+Alt+Esc. Your mouse cursor will change - from an arrow to a small skull/crossbones combo. Hit the stubborn application with the skull.

It'll die.

On GNOME

Add a new launcher to your panel and set it so it executes the xkill application. Now, when an application starts stupidifying itself, just hit the launcher you created (the cursor will change to a square-type "target"), then hit the application with your mouse cursor.

It'll die.

Application choking your machine?

Of course, if your machine is already too slow to use these tricks, they won't help you much. Here's why, sometimes, your machine ditches itself into a molasses pit, and how to rescue it from certain death.

Memory and pathologies

…almost all applications have this pathological idea (encouraged by the operating system) that memory is a limitless resource…

You see, almost all applications have this pathological idea (encouraged by the operating system) that memory is a limitless resource - and when they go overboard, the operating system just dips into the hard disk to simulate memory.

Sometimes, bugs in an application do cause them to go haywire, requesting memory like there's no end. Once your machine goes down that lane, there isn't a simple way to recover it, short of powering it off forcibly.

A fortress-like operating system like Linux isn't supposed to die, and yet it does. However, the fortress I'm so proud of can, and does, provide you with effective measures against premature deaths of this sort.

A last-resort trick to recover your machine from the brink of death

(See the magic SysRq key discussion in the previous item above.)

A definitive cure to runaway applications

Become an administrator (root) and use your favorite text editor to open the file /etc/security/limits.conf:

# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit

Add the following line anywhere in the file:

*      soft      as      512000
limits.conf is a little-known godsend, and a definite requirement to keep large computing farms or terminal servers under control.

You will need to restart any sessions (graphical/terminal) for this change to take effect. Additionally, you will need to restart the graphical session manager (GDM or KDM).

Of course, 512000 is just my favorite setting - but that's because I have the privilege of using multi-gigabyte memory sticks on my machine. If you have much less memory than me, you will want to tune this, while keeping in mind that modern applications can and do take more than 300 MB under exceptional circumstances.

Like, for example, Firefox with 50 tabs/windows open. Or Evolution managing 2 GB of e-mail. Hey, both circumstances happen to me, but I guess I'm an oddball.

What this "magic incantation" in limits.conf does

limits.conf is the file that lets you set per-user/process/system resource limits. There are several limits to choose from.

One of them is the address space (as). The address space limit caps the amount of virtual memory (in kilobytes) that a process may request from the operating system. Any requests above the configured limit are simply refused.
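You can check the limit currently in effect for your session with ulimit:

ulimit -v     # prints the address-space limit in KB, or "unlimited"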

Advantages/disadvantages

The fortunate side effect of this recipe is that the majority of applications will disappear and die a horrible death if they request memory indiscriminately. Which beats having to turn the machine off.

The unfortunate side effect of refused requests for more memory is that the majority of applications will disappear and die a horrible death if they request memory indiscriminately. That's because they don't know how to cope with more memory. Until software developers actually implement error handling for lack of memory, this will be a small nuisance - hey, save often and you'll be safe ;-).

Temporarily disabling memory limits for picky applications

Another disadvantage: certain applications don't run when a limit is set. WINE is among those applications. However, you can use a terminal window to disable this limit temporarily (within any applications started from the terminal):

[rudd-o@andrea /media/windows/UnrealTournament]$ ulimit -v unlimited
[rudd-o@andrea /media/windows/UnrealTournament]$ wine System/UnrealTournament.exe

That's it for this tutorial

So, the infrequent dead application, SAKing your computer, or powering it off… what do you prefer? Me… well, once I upgraded my new "desktop" machine (1U Dell PowerEdge SC1425) to 2 GB, I haven't looked back.

But my old machine did thank me a lot for keeping it off memory crunches :-). It's now safe in PVR heaven, pirating, er, time-shifting TV shows for my pleasure.

[Aug 09, 2011] Creating a Linux ramdisk

While performing some testing a few weeks ago, I needed to create a ramdisk on one of my redhat AS 4.0 servers. I knew Solaris supported tmpfs, and after a bit of googling was surprised to find that Linux supported the tmpfs pseudo-file system as well. To create a ramdisk on a Linux host, you first need to find a suitable place to mount the tmpfs file system. For my tests, I used mkdir to create a directory called /var/ramdisk:

$ mkdir /var/ramdisk

Once the mount point is identified, you can use the mount command to mount a tmpfs file system on top of that mount point:

$ mount -t tmpfs none /var/ramdisk -o size=28m

Now each time you access /var/ramdisk, your reads and writes will be coming directly from memory. Nice!
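Since a tmpfs mount does not survive a reboot, you can make it permanent with an /etc/fstab entry along these lines (same 28m size as above):

none  /var/ramdisk  tmpfs  size=28m  0 0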

[Dec 17, 2009] Top Ten Things I Miss in Windows

Thoughts on Technology

Klipper/Copy & Paste Manager

I use this one a lot when I am either coding or writing a research paper for school. More often than not I find I have copied something new only to discover I need to paste a link or block of code again from two copies back. Having a tray icon where I can recall the last ten copies or so is mighty useful.

Gnome-Do

Most anyone who uses the computer in their everyday work will tell you that fewer mouse clicks means faster speed and thus (typically) more productivity. Gnome-Do is a program that allows you to cut down on mouse clicks (so long as you know what program you are looking to load). The gist of what it does is this: you assign a series of hot keys to call up the search bar (personally I use control+alt+space) and then you start typing in the name of an application or folder you want to open and it will start searching for it - once the correct thing is displayed all you need to do is tap enter to load it up. The best part is that it remembers which programs you use most often. Meaning that most times you only need to type the first letter or two of a commonly used application for it to find the one you are looking for.

[Aug 7, 2009] atd daemon is not running on Suse 10 SP2 by default, so at commands fail.

It needs to be manually enabled via chkconfig and started with the service command to ensure consistency in behavior with Solaris, AIX and HP-UX.
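Something along these lines (standard chkconfig/service usage):

chkconfig atd on     # enable atd in the default runlevels
service atd start    # start it for the current session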

[Aug 5, 2009] 10 Essential UNIX-Linux Command Cheat Sheets TECH SOURCE FROM BOHOL

Linux has become so idiot proof nowadays that there is less and less need to use the command line. However, the commands and shell scripts have remained powerful for advanced users to utilize to help them do complicated tasks quickly and efficiently.

To those of you who are aspiring to become a UNIX/Linux guru, you have to know loads of commands and learn how to effectively use them. But there is really no need to memorize everything since there are plenty of cheat sheets available on the web and in books. To spare you from the hassles of searching, I have here a collection of 10 essential UNIX/Linux cheat sheets that can greatly help you on your quest for mastery...

[Aug 4, 2009] Tech Tip View Config Files Without Comments Linux Journal

I've been using this grep invocation for years to trim comments out of config files. Comments are great but can get in your way if you just want to see the currently running configuration. I've found files hundreds of lines long which had fewer than ten active configuration lines; it's really hard to get an overview of what's going on when you have to wade through hundreds of lines of comments.

$ grep ^[^#] /etc/ntp.conf

The regex ^[^#] matches the first character of any line, as long as that character is not a #. Because blank lines don't have a first character, they're not matched either, resulting in a nice compact output of just the active configuration lines.
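A slightly stricter variant (an alternative, not the author's) also drops indented comments and whitespace-only lines:

$ grep -Ev '^[[:space:]]*(#|$)' /etc/ntp.conf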

[Aug 3, 2009] 10 super-cool Linux hacks you did not know about

1. Run top in batch mode

2. Write to more than one file at once with tee

3. Unleash the accounting power with pacct

4. Dump utmp and wtmp logs

5. Monitor CPU and disk usage with iostat

6. Monitor memory usage with vmstat

7. Combine the power of iostat and vmstat with dstat

8. Collect, report or save system activity information with sar

9. Create UDP server-client - version 1

10. Configure UDP server-client - version 2

Linux Commando Remap Caps Lock key for virtual console windows

Remap Caps Lock key for virtual console windows

My last blog entry explains how to use xmodmap to remap the Caps Lock key to the Escape key in X. That takes care of the keyboard mapping when you are in X. What about when you are in a virtual console window? You need to follow the steps below. Make sure that you sudo root before you execute the following commands.
  1. Find out the keycode of the key that you want remapped.

Execute the showkey command as root in a virtual console:

    $ showkey
    kb mode was UNICODE
    
    press any key (program terminates after 10s of last keypress)...
    0x9c
    Hit the Caps Lock key, wait 10 seconds (default timeout), and the showkey command will exit on its own.
    $ showkey
    kb mode was UNICODE
    
    press any key (program terminates after 10s of last keypress)...
    0x9c
    0x3a 
    0xba
    The keycode for the Caps Lock key is 0x3a in hex, or 58 in decimal.

  2. Find out the symbolic name (key symbol) of the key that you want to map to.
    You can list all the supported symbolic names by dumpkeys -l and grep for esc:
    $ dumpkeys -l |grep -i esc 
    0x001b Escape
    0x081b Meta_Escape
  3. Remap the keycode 58 to the Escape key symbol.
    $ (echo `dumpkeys |grep -i keymaps`;  \
       echo keycode 58 = Escape)          \
       | loadkeys -
    Thanks to cjwatson who pointed me to prepending the keymaps statement from dumpkeys. The keymaps statement is a shorthand notation defining what key modifiers you are defining with the key. See man keymaps(5) for more info.
To make the new key mapping permanent, you need to put the loadkeys command in a bootup script.

For my Debian Etch system, I put the
(echo `dumpkeys |grep -i keymaps`; echo keycode 58 = Escape) |loadkeys - command in /etc/rc.local.

Remapping Keys Under Linux

To swap caps lock and control:

# Make the Caps Lock key be a Control key:
xmodmap -e "remove lock = Caps_Lock"
xmodmap -e "add control = Caps_Lock"

# Make the Left Control key be a Caps Lock key:
xmodmap -e "remove control = Control_L"
xmodmap -e "add lock = Control_L"

Questions Answered Below

The instructions in this page apply only to Linux in an X environment (like KDE).

Terminology

How These Relate To One Another

Keycodes, keysyms, and modifiers relate in the following way:

keycode → keysym → modifier (optional)

So for example, on my keyboard:

keycode 38 (the 'a' key) → keysym 0x61 (the symbol 'a')

keycode 50 (the left 'shift' key) → keysym 0xffe1 (the action 'the left shift key is down') → the shift modifier

Note that technically, each keycode can be mapped to more than one keysym. The first mapping applies when no modifier is pressed; the second applies when the shift key is pressed. (I haven't figured out how to use the third and fourth yet.) So for example, the second mapping on my 'a' key is:

keycode 38 (the 'a' key) → keysym 0x41 (the symbol 'A')

In other words, when modifier 'shift' is active, my 'a' key generates an 'A' instead of an 'a'.

Viewing Your Settings

Changing Your Settings

Say you want to map the caps lock key to be the control modifier. You have two sensible choices for how to do this:

Caps Lock key → Caps Lock action → Control modifier

Caps Lock key → Control_L action → Control modifier

To do the first, you need to change the action → modifier mapping. Do this as follows:

xmodmap -e "remove lock = Caps_Lock"
xmodmap -e "add control = Caps_Lock"

To do the second, you need to change the keycode → action mapping, so you'll need to know the keycode of your caps lock key. To find the keycode for your caps lock key use xev, as described above. Mine is 66. So:

xmodmap -e "keycode 66 = Control_L"

Help!

If you mess things up, the simplest way to fix things is to log out of the window manager and log back in.

For More Information

Notes

By David Vespe, April 2006

Remap Caps Lock

The Caps Lock key on most PC keyboards is in the position where the Control key is on many other keyboards, and vice versa. This can make it difficult for programmers to use the "wrong" kind of keyboard.

This page describes how to RemapCapsLock on different keys in different OperatingSystems.

One really stupid thing about PeeCee keyboards is that manufacturers even realized that putting caps-lock on home row was a bad idea because people kept hitting it with the 'a' key. Did they move it? No, that would be too sensible. They carved a little piece of it off to leave a bigger gap. So now if I re-map a standard PC-10* keyboard so that left-control is in a sensible place, it is still harder to use than it should be. :(

Many people (the majority, clearly) feel that the placement of CTRL below the SHIFT key is a better location for it. However, the backspace key is way out of the way -- it would be better if the CAPSLOCK and backspace keys were swapped.


Unix, Console

If you have loadkeys (as you would under Linux), this should do the trick:
 loadkeys /usr/share/keymaps/i386/qwerty/emacs2.kmap.gz
To reset to the defaults (you may have to switch to another tty and back to undo ctrl-lock):
 loadkeys -d
Unix, X

Under Redhat 8.0, just enable the following line in /etc/X11/XF86Config

        Option      "XkbOptions"        "ctrl:swapcaps"
Replace "swapcaps" with "nocaps" to turn both keys into "Control."

With X, there are at least 2 different ways to remap the keys. One is using xmodmap. For example, man xmodmap shows how to swap the left control key and the CapsLock key:

 ! Swap Caps_Lock and Control_L
 !
 remove Lock = Caps_Lock
 remove Control = Control_L
 keysym Control_L = Caps_Lock
 keysym Caps_Lock = Control_L
 add Lock = Caps_Lock
 add Control = Control_L                 
Many people don't want a CapsLock key at all. They can change the CapsLock key to a Control key by using the following lines in xmodmap:
 clear Lock
 keycode 0x7e = Control_R
 add Control = Control_R 
Maybe you have to change the keycode 0x7e. You can find the keycodes with xev. Furthermore, this only works if you don't have a right control key. I hope somebody has a solution which does not have this restriction.

This solution might be the easiest one. If you do not mind having a dead key on your keyboard you can disable CapsLock altogether:

 "remove lock = Caps_Lock"  (or just: "clear lock")
A better solution might be this sequence, which is keycode independent and does not remove existing control keys:
 remove Lock = Caps_Lock
 remove Control = Control_L
 keysym Caps_Lock = Control_L
 add Lock = Caps_Lock
 add Control = Control_L
Now, you can use another solution which uses xkb. For that, you will have to find the symbols directory on your unix system. There, you add a file which might be called 'ctrl' containing the following:
 // eliminate the caps lock key completely (replace with control)
 partial modifier_keys
 xkb_symbols "nocaps" {
     key <CAPS>  {  symbols[Group1]= [ Control_L ] };
     modifier_map  Control { <CAPS>, <LCTL> };
 };
This eliminates the caps lock key if included in a keymap. We can do this by changing the file en_US:
 xkb_symbols "pc101" {
     include "ctrl(nocaps)"
     key <RALT> { [ Mode_switch,  Multi_key ] };
augment "us(pc101)" include "iso9995-3(basic101)"

modifier_map Mod3 { Mode_switch }; };

You can then add the keyboard using a line like:
 /usr/X11R6/lib/X11/xkb/xkbcomp -w 1 -R/usr/X11R6/lib/X11/xkb -xkm -m en_US keymap/xfree86 0:0
Now, unfortunately there are probably errors in the text above. Please correct and make it working for other systems than RedHat Linux.

Red Hat Knowledgebase How can I find information on the maximum amount of memory my system can handle

The dmidecode command can be used to display information from the systems' BIOS that includes the maximum memory that the BIOS will support. This information is displayed by dmidecode as type 16 (Physical Memory Array) which can be filtered with the command dmidecode -t 16.

For instance, the following output shows a system that can support a maximum of 16GB of RAM.

Handle 0x0032, DMI type 16, 15 bytes
Physical Memory Array
	Location: System Board Or Motherboard
	Use: System Memory
	Error Correction Type: None
	Maximum Capacity: 16 GB
	Error Information Handle: Not Provided
	Number Of Devices: 4
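Relatedly, dmidecode type 17 (Memory Device) lists the individual modules, which shows how much of that maximum is already populated:

dmidecode -t 17 | grep -E 'Size|Locator'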

[Jan 27, 2009] Linux Keyboard Shortcuts Safe Way to Exit During System Freezes

Jan 26, 2009 | Linux Today

"Alt + SysR + K
Kill all processes (including X), which are running on the currently active virtual console.

"Alt + SysRq + E
Send the TERM signal to all running processes except init, asking them to exit."

Magic Tricks With the Sysreq Key(Dec 10, 2008)
Sounds of Crashing Hard Drives(Nov 24, 2008)
Fix Unresponsive or Frozen Linux Computers Using Shortcuts(Nov 11, 2008)
Linux Kernel Magic SysRq Keys in openSUSE for Crash Recovery(Sep 29, 2008)
Rebooting the Magic Way(Aug 22, 2008)
Fix a Frozen System with the Magic SysRq Keys(Sep 17, 2007)

Tips & Tricks

partprobe - inform the OS of partition table changes

One of the major benefits to using Red Hat Enterprise Linux is that once the operating system is up and running, it tends to stay that way. This also holds true when it comes to reconfiguring a system; mostly. One Achilles heel for Linux, until the past couple of years, has been the fact that the Linux kernel only reads partition table information at system initialization, necessitating a reboot any time you wish to add new disk partitions to a running system.

The good news, however, is that disk re-partitioning can now also be handled 'on-the-fly' thanks to the 'partprobe' command, which is part of the 'parted' package.

Using 'partprobe' couldn't be more simple. Any time you use 'fdisk', 'parted' or any other favorite partitioning utility you may have to modify the partition table for a drive, run 'partprobe' after you exit the partitioning utility and 'partprobe' will let the kernel know about the modified partition table information. If you have several disk drives and want to specify a specific drive for 'partprobe' to scan, you can run 'partprobe <device_node>'
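A typical session looks like this (the device name is just an example):

fdisk /dev/sdb        # create or modify partitions, then write and quit
partprobe /dev/sdb    # tell the kernel to re-read the partition table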

Of course, given a particular hardware configuration, shutting down your system to add hardware may be unavoidable; it's still nice to be given the option of not having to do so, and 'partprobe' fills that niche quite nicely.

partprobe [-d] [-s] [devices...]

DESCRIPTION

This manual page documents briefly the partprobe command.

partprobe is a program that informs the operating system kernel of partition table changes, by requesting that the operating system re-read the partition table.

OPTIONS

This program uses short UNIX style options.
-d
Don't update the kernel.
-s
Show a summary of devices and their partitions.
-h
Show summary of options.
-v
Show version of program.

CENTOS/RHEL 5 CONFIGURATION TIPS

Yum & Repositories
I noticed this issue with both CentOS 4 and 5 - Yum will often choose bad mirrors from the mirrorlist file - for example, choosing overseas servers, when an official NZ server exists. And in some cases, the servers it has chosen are horribly slow.

You will probably find that you get better download speeds by editing /etc/yum.repos.d/CentOS-Base.repo and commenting out the mirrorlist lines and setting the baseurl line to point to your preferred local mirror.

Yum-updatesd
CentOS 5 has a new daemon called yum-updatesd, which replaces the old cron job yum update scripts. This script will check frequently for updates, and can be configured to download and/or install them.

However, this daemon is bad for a server, since it doesn't run at a fixed time - I really don't want my server downloading and updating software during the busiest time of day thank-you-very-much!

So, it's bad for a server. Let's disable it with:

service yum-updatesd stop
chkconfig --level 2345 yum-updatesd off

Plus I don't like the idea of having a full blown daemon where a simple cronjob will do the trick perfectly fine - seems like overkill. (although it appears yum-updatesd has some useful features like dbus integration for desktop users)

So, I replace it with my favorite cronjob script approach, by running the following (as root of course):

cat << "EOF" > /etc/cron.daily/yumupdate
 #!/bin/sh
 # install any yum updates
/usr/bin/yum -R 10 -e 0 -d 1 -y update yum > /var/log/yum.cron.log 2>&1
/usr/bin/yum -R 120 -e 0 -d 1 -y update  >> /var/log/yum.cron.log 2>&1
if [ -s /var/log/yum.cron.log ]; then
        /bin/cat /var/log/yum.cron.log | mail root -s "Yum update information" 2>&1
fi
EOF

and if you want to clear up the package cache every week:
cat << "EOF" > /etc/cron.weekly/yumclean
 #!/bin/sh
 # remove downloaded packages
/usr/bin/yum -e 0 -d 0 clean packages
EOF


This will install 2 scripts that get run around 4:00am (as set in /etc/crontab) which will check for updates and download and install any automatically. If there were any updates, it will send out an email, if there were none, it doesn't send anything.

(of course, you need sendmail/whatever_fucking_email_server_you_like configured correctly to get the alerts!)

You can change yum to just download and not install the updates (just RTFM), but I've never had an update break anything - update compatibility and quality is always very high - so I use automatic updates.

CentOS 4 had something very similar to this, with the addition of a bootscript to turn the cronjobs on and off.

* Please check out the update at the bottom of this page for further information on this.


Apache Quirks
If you are using indexing in apache (indexing is when you can browse folders/files), you may find that the browsing page looks small and nasty.

The fix is to edit /etc/httpd/conf/httpd.conf and change the following line:

IndexOptions FancyIndexing VersionSort NameWidth=* HTMLTable
to
IndexOptions FancyIndexing VersionSort NameWidth=*

This should make the index full screen again. I'm not sure if this is an apache bug, a distro bug or some other weird issue, because I'm sure HTMLTable isn't supposed to be all small like that.

(FYI: CentOS 4 did not have the HTMLTable option active)

SSL Certificates
Redhat have moved things around with SSL certificates a lot. What seems to have happened (I have only taken a quick look into this) is that they were going to provide a new tool to generate SSL certificates called "genkey" but pulled it out before release.

To make things more fun, they also removed the good old Makefile that was in /etc/httpd/conf/ that allowed you to generate SSL certificates & keys.

However, I found the same Makefile again in /etc/pki/tls/certs/


Vi vs. Vim
If you use vi/vim, you should check this posting out.

That's all the issues that I've come across for now - if I find any more things to note, I'll update this page with the information and put a note on my blog.

/etc/sysconfig/network

Note the NETMASK should be defined in /etc/sysconfig/network-scripts/ifcfg-eth0

/etc/sysconfig/network

The /etc/sysconfig/network file is used to specify information about the desired network configuration. The following values may be used:
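For example, a minimal illustrative file (all values are placeholders):

NETWORKING=yes
HOSTNAME=host.example.com
GATEWAY=192.168.1.254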

How to see the parameters of an ext3 filesystem

# tune2fs -l /dev/mapper/vg00-lv06
tune2fs 1.38 (30-Jun-2005)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: c0615eba-5bb6-443d-81c7-7f3c1eb829b2
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal filetype needs_recovery sparse_super
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 1310720
Block count: 2621440
Reserved block count: 131072
Free blocks: 2365370
Free inodes: 1309130
First block: 0
Block size: 4096
Fragment size: 4096
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 16384
Inode blocks per group: 512
Filesystem created: Mon May 21 11:16:17 2007
Last mount time: Tue May 6 17:40:40 2008
Last write time: Tue May 6 17:40:40 2008
Mount count: 3
Maximum mount count: 500
Last checked: Thu Apr 3 11:51:39 2008
Check interval: 5184000 (2 months)
Next check after: Mon Jun 2 11:51:39 2008
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
Default directory hash: tea
Directory Hash Seed: 36a54cdf-3f8e-482b-9e2c-a48b6ac1d27e
Journal backup: inode blocks

[Nov 30, 2007] freshmeat.net Project details for Expect-lite

About:
Expect-lite is a wrapper for expect, created to make expect programming even easier. The wrapper permits the creation of expect script command files by using special character(s) at the beginning of each line to indicate the expect-lite action. Basic expect-lite scripts can be created by simply cutting and pasting text from a terminal window into a script, and adding '>' '

Release focus: Major feature enhancements

Changes:
The entire command script read subsystem has changed. The previous system read directly from the script file. The new system reads the script file into a buffer, which can be randomly accessed. This permits looping (realistically only repeat loops). Infinite loop protection has been added. Variable increment and decrement have been added to support looping.

Author:
Craig Miller [contact developer]

[Nov 30, 2007] Got more than a gig of RAM and 32-bit Linux? Here's how to use it, by Bruce Byfield

September 21, 2007 | Linux.com
Nowadays, many machines are running with 2-4 gigabytes of RAM, and their owners are discovering a problem: When they run 32-bit GNU/Linux distributions, their extra RAM is not being used. Fortunately, correcting the problem is only a matter of installing or building a kernel with a few specific parameters enabled or disabled.

The problem exists because 32-bit Linux kernels are designed to access only 1GB of RAM by default. The workaround for this limitation is vaguely reminiscent of the virtual memory solution once used by DOS, with a high memory area of virtual memory being constantly mapped to physical addresses. This high memory can be enabled for up to 4GB by one kernel parameter, or up to 64GB on a Pentium Pro or higher processor with another parameter. However, since these parameters have not been needed on most machines until recently, the standard kernels in many distributions have not enabled them.

Increasingly, many distributions are enabling high memory for 4GB. Ubuntu default kernels have been enabling this process at least since version 6.10, and so have Fedora 7's. By contrast, Debian's default 486 kernels do not. Few distros, if any, enable 64GB by default.

To check whether your kernel is configured to use all your RAM, enter the command free -m. This command gives you the total amount of unused RAM on your system, as well as the size of your swap file, in megabytes. If the total memory shown is 885MB, then no high memory is enabled on your system (the rest of the first gigabyte is reserved by the kernel for its own purposes). Similarly, if the result shows over 1 gigabyte but less than 4GB when you know you have more, then the 4GB parameter is enabled, but not the 64GB one. In either case, you will need to add a new kernel to take full advantage of your RAM.
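To see how the running kernel was built, you can also grep its config (the config file path varies by distribution):

grep -i highmem /boot/config-$(uname -r)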

[Nov 20, 2007] Games with dircolors

eval `dircolors ~/.dir_colors`
alias ls="ls --color=auto"

The command 'dircolors' takes its data from the file ~/.dir_colors and
creates the environment variable LS_COLORS. The command 'ls --color' takes
its colors from the environment variable LS_COLORS.

So, write a suitable ~/.dir_colors file, and execute the command
'dircolors'. To get a starting file for editing, do this:

dircolors -p > ~/.dir_colors

The ~/.dir_colors file so created includes directions on coding the colors
for different kinds of files.
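For example, typical entries produced by 'dircolors -p' look like this (the codes are standard ANSI color attributes):

DIR 01;34 # directories: bold blue
LINK 01;36 # symbolic links: bold cyan
*.tar 01;31 # tar archives: bold red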

See man dircolors.

[Nov 1, 2007] Changing Gnome behavior to standard UNIX CDE style (applications displayed on a particular desktop are visible only in that desktop's toolbar)

You need to uncheck this key:

/apps/panel/applets/windows_list_screen/pref/display_in_all_workspaces
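The same key can also be flipped from the command line with gconftool-2 (a sketch assuming GNOME 2's gconf; the key path is copied verbatim from above):

gconftool-2 --type bool --set /apps/panel/applets/windows_list_screen/pref/display_in_all_workspaces false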

Controlling runaway processes on Linux • Rudd-O.com

... Have you heard of the magic SysRq key?

No?

Well, it's magic. It's directly shunted to the Linux kernel. You press ALT, press the PrintScreen (SysRq) key, and while holding them both down, press one of the letters (each letter has a different function assigned to it).

It's not normally enabled, but you can enable it by putting

kernel.sysrq = 1

in your machine's /etc/sysctl.conf file. Oh, and then rebooting.
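As an aside, a reboot is not strictly required: the setting can also be applied immediately.

# enable the magic SysRq key right away
sysctl -w kernel.sysrq=1

# or, equivalently
echo 1 > /proc/sys/kernel/sysrq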

Here's why it's useful.

So, what does SysRq do, really?

Hit Alt+SysRq+K - the windowing system will restart. More effective than Ctrl+Alt+Backspace.

Suppose a GUI application you just opened is starting to swallow massive amounts of RAM. Like, one gigabyte, perhaps? Your machine is locking up, and you feel the mouse start to stutter at first, then freeze completely, while the hard disk light on your computer's front panel flashes frantically, gasping for memory.

You now have three choices:

  1. Sit it out and let the Linux kernel detect this situation and kill the abusive application. This can take way more than 15 minutes.
  2. Press the computer's power off button for 5 seconds. This shuts your machine down uncleanly and leads to data loss.
  3. Hit the magic SysRq combo: Alt+SysRq+K.

Should you choose option 3, the graphical subsystem dies immediately. That's because Alt+SysRq+K kills any application that holds the keyboard open - and, you guessed it, the graphical subsystem is holding it open. This premature death of the GUI causes all GUI applications to die in a cascade, including the abusive application.

Two to ten seconds later, you will be presented with a login prompt.

Sure, you lost changes to all files you haven't saved, and all the tabs in your Web browser… but at least you didn't have to reboot uncleanly, did you?

But, Ctrl+Alt+Backspace?

Once the machine is in a critically heavy memory crunch, Ctrl+Alt+Backspace will take too much time to work, because the windowing system will be pressed for memory to even execute. The magic SysRq key has the luxury of not having that problem ;-) - if Ctrl+Alt+Backspace were an IV drip, SysRq would be like a central line.

Why this key combination exists

The reason this key combo exists is simple. Alt+SysRq+K is called SAK (System Attention Key). It was designed back in the days of, um, yore, to kill all applications snooping on the keyboard - so administrators wishing to log in could safely do so without anyone sniffing their passwords.

As a preventative security measure, it sure works against keyloggers and other malware that may be snooping on your keyboard, may I say. And it most definitely works against your run-of-the-mill temporary memory shortage ;-).

Advantages/disadvantages

Well, the major disadvantage, as noted above, is that you lose unsaved work in every GUI application.

But on a memory crunch, this beats rebooting hands-down. And that's the biggest advantage.


Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright for original materials belongs to their respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site.

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Last updated: January 03, 2021