
Slightly Skeptical View on Enterprise Unix Administration



The KISS rule can be expanded as: Keep It Simple, Sysadmin ;-)

This page is written as a protest against overcomplexity and the bizarre data center atmosphere typical of "semi-outsourced" or fully outsourced datacenters ;-). Unix/Linux sysadmins are being killed by the overcomplexity of the environment, new "for profit" technocults like DevOps, and outsourcing. Large swaths of Linux knowledge (and many excellent books) were made obsolete by Red Hat with the introduction of systemd. Especially affected are the older, most experienced members of the team, who hold a unique store of organizational knowledge and whose careers allowed them to watch the development of Linux almost from version 0.92.

System administration is still a unique area where people with the ability to program can display their creativity with relative ease and can still enjoy the "old style" atmosphere of software development, in which you yourself write the specification, implement it, test the program, and then use it in daily work. This is a very exciting, unique opportunity that no DevOps can ever provide.

But the conditions are getting worse and worse. That's why an increasing number of sysadmins are far from excited about working in those positions, or outright want to quit the field (or, at least, work 4 days a week). And that includes sysadmins who have tremendous speed and the capability to process and learn new information. Even for them "enough is enough". The answer is different for each individual sysadmin, but it is usually some variation of the following themes:

  1. Too rapid a pace of change, with a lot of "change for the sake of change" often serving as a smokescreen for outsourcing efforts (VMware yesterday, Azure today, Amazon cloud tomorrow, etc.)
  2. Excessive automation can be a problem. It increases the number of layers between the fundamental process and the sysadmin, and thus makes troubleshooting much harder. Moreover, it often does not produce tangible benefits in comparison with simpler tools, while dramatically increasing the level of complexity of the environment. See Unix Configuration Management Tools for a deeper discussion of this issue.
  3. Job insecurity due to outsourcing/offshoring -- constant pressure to cut headcount in the name of "efficiency", which in reality is more connected with the size of top brass bonuses than with anything related to how the IT datacenter functions. Sysadmins over 50 are an especially vulnerable category here, and if they are laid off they have almost no chance of getting back into the IT workforce at the previous level of salary/benefits. Often the only job they can find is at Home Depot or similar retail outlets. See Over 50 and unemployed
  4. A back-breaking level of overcomplexity and bizarre tech decisions crippling the data center (aka crapification). A "Potemkin village culture" often prevails in the evaluation of software in large US corporations. The surface shine is more important than the substance. The marketing brochures and manuals are no different from mainstream news media stories in the level of BS they spew. IBM is especially guilty (look how they marketed IBM Watson; as Oren Etzioni, CEO of the Allen Institute for AI, noted, "the only intelligent thing about Watson was IBM PR department [push]").
  5. Bureaucratization/fossilization of large companies' IT environments. That includes using "Performance Reviews" (the variant of waterboarding prevalent in IT ;-) for the enforcement of management policies, priorities, whims, etc. See Office Space (1999) - IMDb for a humorous take on IT culture. That creates alienation from the company (as it should). One can think of the modern corporate data center as an organization where the administration has tremendously more power in the decision-making process and eats up more of the corporate budget, while the people who do the actual work are increasingly ignored and their share of the budget gradually shrinks. Purchasing "non-standard" software or hardware is often so complicated that it is never even attempted, even when the benefits are tangible.
  6. "Neoliberal austerity" (which is essentially another name for the "war on labor") -- Drastic cost cutting measures at the expense of workforce such as elimination of external vendor training, crapification of benefits, limitation of business trips and enforcing useless or outright harmful for business "new" products instead of "tried and true" old with  the same function.  They are often accompanied by the new cultural obsession with "character" (as in "he/she has a right character" -- which in "Neoliberal speak" means he/she is a toothless conformist ;-), glorification of groupthink, and the intensification of surveillance.

As Charlie Schluting noted in 2010 (Enterprise Networking Planet, April 7, 2010):

What happened to the old "sysadmin" of just a few years ago? We've split what used to be the sysadmin into application teams, server teams, storage teams, and network teams. There were often at least a few people, the holders of knowledge, who knew how everything worked, and I mean everything. Every application, every piece of network gear, and how every server was configured -- these people could save a business in times of disaster.

Now look at what we've done. Knowledge is so decentralized we must invent new roles to act as liaisons between all the IT groups.

Architects now hold much of the high-level "how it works" knowledge, but without knowing how any one piece actually does work.

In organizations with more than a few hundred IT staff and developers, it becomes nearly impossible for one person to do and know everything. This movement toward specializing in individual areas seems almost natural. That, however, does not provide a free ticket for people to turn a blind eye.

Specialization

You know the story: Company installs new application, nobody understands it yet, so an expert is hired. Often, the person with a certification in using the new application only really knows how to run that application. Perhaps they aren't interested in learning anything else, because their skill is in high demand right now. And besides, everything else in the infrastructure is run by people who specialize in those elements. Everything is taken care of.

Except, how do these teams communicate when changes need to take place? Are the storage administrators teaching the Windows administrators about storage multipathing; or worse, logging in and setting it up because it's faster for the storage gurus to do it themselves? A fundamental level of knowledge is often lacking, which makes it very difficult for teams to brainstorm about new ways to evolve IT services. The business environment has made it OK for IT staffers to specialize and only learn one thing.

If you hire someone certified in the application, operating system, or network vendor you use, that is precisely what you get. Certifications may be a nice filter to quickly identify who has direct knowledge in the area you're hiring for, but often they indicate specialization or compensation for lack of experience.

Resource Competition

Does your IT department function as a unit? Even 20-person IT shops have turf wars, so the answer is very likely, "no." As teams are split into more and more distinct operating units, grouping occurs. One IT budget gets split between all these groups. Often each group will have a manager who pitches his needs to upper management in hopes they will realize how important the team is.

The "us vs. them" mentality manifests itself at all levels, and it's reinforced by management having to define each team's worth in the form of a budget. One strategy is to illustrate a doomsday scenario. If you paint a bleak enough picture, you may get more funding. Only if you are careful enough to illustrate the failings are due to lack of capital resources, not management or people. A manager of another group may explain that they are not receiving the correct level of service, so they need to duplicate the efforts of another group and just implement something themselves. On and on, the arguments continue.

Most often, I've seen competition between server groups result in horribly inefficient uses of hardware. For example, what happens in your organization when one team needs more server hardware? Assume that another team has five unused servers sitting in a blade chassis. Does the answer change? No, it does not. Even in test environments, sharing doesn't often happen between IT groups.

With virtualization, some aspects of resource competition get better and some remain the same. When first implemented, most groups will be running their own type of virtualization for their platform. The next step, I've most often seen, is for test servers to get virtualized. If a new group is formed to manage the virtualization infrastructure, virtual machines can be allocated to various application and server teams from a central pool and everyone is now sharing. Or, they begin sharing and then demand their own physical hardware to be isolated from others' resource hungry utilization. This is nonetheless a step in the right direction. Auto migration and guaranteed resource policies can go a long way toward making shared infrastructure, even between competing groups, a viable option.

Blamestorming

The most damaging side effect of splitting into too many distinct IT groups is the reinforcement of an "us versus them" mentality. Aside from the notion that specialization creates a lack of knowledge, blamestorming is what this article is really about. When a project is delayed, it is all too easy to blame another group. The SAN people didn't allocate storage on time, so another team was delayed. That is the timeline of the project, so all work halted until that hiccup was restored. Having someone else to blame when things get delayed makes it all too easy to simply stop working for a while.

More related to the initial points at the beginning of this article, perhaps, is the blamestorm that happens after a system outage.

Say an ERP system becomes unresponsive a few times throughout the day. The application team says it's just slowing down, and they don't know why. The network team says everything is fine. The server team says the application is "blocking on IO," which means it's a SAN issue. The SAN team says there is nothing wrong, and other applications on the same devices are fine. You've run through nearly every team, but still have no answer. The SAN people don't have access to the application servers to help diagnose the problem. The server team doesn't even know how the application runs.

See the problem? Specialized teams are distinct and by nature adversarial. Specialized staffers often relegate themselves into a niche knowing that as long as they continue working at large enough companies, "someone else" will take care of all the other pieces.

I unfortunately don't have an answer to this problem. Maybe rotating employees between departments will help. They gain knowledge and also get to know other people, which should lessen the propensity to view them as outsiders.

The tragic part of the current environment is that it is like shifting sands. And it is not only due to the "natural process of crapification of operating systems", in which the OS gradually loses its architectural integrity. The pace of change is simply too fast for mere humans to adapt to. And most of it represents "change for the sake of change", not some valuable improvement or extension of capabilities.

If you are a sysadmin who writes his own scripts, you are writing on a sandy beach: you spend a lot of time thinking over and debugging your scripts, which raises your productivity and diminishes the number of possible errors, but the next OS version or organizational change wipes out a considerable part of your work and you need to revise your scripts again. The tale of Sisyphus can now be re-interpreted as a prescient warning about the thankless task of the sysadmin, forever learning new stuff and maintaining his own script library ;-)  Sometimes a lot of work is wiped out because the corporate brass decides to switch to a different flavor of Linux, or we add "yet another flavor" due to a large acquisition. Add to this the inevitable technological changes, and the question arises: can't you find a more respectable profession, one in which 66% of your knowledge is not replaced within the next ten years? For a talented and not too old person, staying in the sysadmin profession is probably a mistake, or at least a very questionable decision.

The Balkanization of Linux is also demonstrated by the Tower of Babel of system programming languages (C, C++, Perl, Python, Ruby, Go, Java, to name a few) and by systems that supposedly should help you but mostly do quite the opposite (Puppet, Ansible, Chef, etc.). Add to this the monitoring infrastructure (say Nagios) and you definitely have an information overload.

Inadequate training just adds to the stress. First of all, corporations no longer want to pay for it, so you are on your own and need to do it mostly in your free time, as the workload is substantial in most organizations. Of course the summer "dead season" at least partially exists, but it is rather short. Using free or low-cost courses when they are available, or buying your own books and trying to learn new stuff from them, is of course the mark of any good sysadmin, but it should not be the only source of new knowledge. Communication with colleagues who have a high level of knowledge in selected areas is as important or even more important. But this is very difficult, as the sysadmin often works in isolation. Professional groups like Linux user groups exist mostly in the metropolitan areas of large cities. Coronavirus made those groups even more problematic.

The days when you could travel to a vendor training center for a week and communicate with admins from other organizations (which was probably the most valuable part of the whole exercise) are long in the past. I can tell you that training by Sun (Solaris) and IBM (AIX) in the late 1990s was of really high quality, delivered by highly qualified instructors from whom you could learn a lot outside the main topic of the course. Unlike "Trump University", Sun courses could probably have been called "Sun University." Most training now is via the Web, and the chances for face-to-face communication have disappeared. Also, the stress has shifted from learning "why" to learning "how"; the "why" topics are typically reserved for "advanced" courses.

Then there is the necessity to relearn stuff again and again, even though the new technologies/daemons/versions of the OS are often either the same as, or inferior to, the previous ones, or represent an open scam in which training is just a way to extract money from lemmings (Agile, most of the DevOps hoopla, etc.). This is the typical neoliberal mentality ("greed is good") implemented in education. There is also a tendency to treat virtual machines and cloud infrastructure as separate technologies, which require separate training and separate sets of certifications (AWS, Azure). This is a kind of infantilization of the profession, in which a person who learned a lot of stuff over the previous 10 years needs to forget it and relearn most of it again and again.

Of course, sysadmins are not the only ones who suffer. Computer scientists also now struggle with the excessive level of complexity and the too quickly shifting sands. Look at the tragedy of Donald Knuth and his lifelong project to create a comprehensive monograph for system programmers (The Art of Computer Programming). He was flattened by the shifting sands and probably will not be able to finish even volume 4 (out of the seven that were planned) in his lifetime.

Of course, much depends on the evolution of hardware and the changes it forces, such as the mass introduction of large SSDs, multi-core CPUs, and large RAM.

Nobody is now surprised to see a server with 128GB of RAM, a laptop with 16GB of RAM, or a cellphone with 4GB of RAM and a 2GHz CPU (please note that the original IBM PC started with a 1 MB address space, of which only 640KB was available for programs, and a 4.77 MHz (not GHz) single-core CPU without a floating-point unit). Hardware evolution, while painful, is inevitable, and it changes the software landscape. Thank God hardware progress has slowed down recently as it reached the physical limits of the technology (we probably will not see 2-nanometer lithography CPUs with 8GHz clock speeds in our lifetimes), and progress is now mostly measured by the number of cores packed into the same die.

Then there is another set of significant changes, caused not by the progress of hardware (or software) but mainly by fashion and by the desire of certain (and powerful) large corporations to entrench their market position. Such changes are more difficult to accept. It is difficult or even impossible to predict which technology will become fashionable tomorrow. For example, how long will DevOps remain in fashion?

Typically such a techno-fashion lasts around a decade. After that it fades into oblivion, or is even debunked, and the former idols are shattered (the verification craze is a nice example here). For example, this strange re-invention of the ideas of the "glass-walls datacenter" under the banner of DevOps (and old-timers still remember that IBM datacenters were hated with a passion, and this hate created an additional non-technological incentive for mini-computers and later for the IBM PC) is characterized by a level of hype usually reserved for women's fashion. Moreover, sometimes it looks to me as if the movie The Devil Wears Prada is a subtle parable about sysadmin work.

Add to this a horrible job market, especially for university graduates and older sysadmins (see Over 50 and unemployed), and one starts to suspect that the life of the modern sysadmin is far from paradise. When you read some of the job descriptions on sites like Monster, Dice or Indeed, you ask yourself whether those people really want to hire anybody, or whether the posting is just a smokescreen for an H1B candidate's job certification. The level of detail is often so precise that it is almost impossible to fit the specialization. They do not care about the level of talent and do not want to train a suitable candidate; they want a person who fits 100% from day 1. Also, positions are often available mostly in places like New York or San Francisco, where both rent and property prices are high and growing while income growth has been stagnant.

The vandalism of Unix performed by Red Hat with RHEL 7 makes the current environment somewhat unhealthy. It is clear that this was done to enhance Red Hat's marketing position, in the interests of the Red Hat and IBM brass, not in the interest of the community. This is a typical Microsoft-style trick, which made dozens of high quality books written by very talented authors instantly semi-obsolete, and the question arises whether it makes sense to write any book about RHEL administration other than for a solid advance. Of course, systemd generated some backlash, but Red Hat's position as the Microsoft of Linux allows them to shove their inferior technical decisions down users' throats. In a way it reminds me of the way Microsoft dealt with Windows 7, replacing it with Windows 10 and essentially destroying the previous Windows interface ecosystem, putting keyboard users at a disadvantage (while preserving binary compatibility). Red Hat essentially did the same to server sysadmins.

Dr. Nikolai Bezroukov

P.S. See also

P.P.S. Here are my notes/reflections on sysadmin problems that often arise in the rather strange (and sometimes pretty toxic) IT departments of large corporations:



Old News ;-)


A highly relevant joke about the life of a sysadmin: "I appreciate Woody Allen's humor because one of my safety valves is an appreciation for life's absurdities. His message is that life isn't a funeral march to the grave. It's a polka."

-- Dennis Kucinich

If you are frustrated, read Admin Humor

[May 10, 2021] The Tilde Text Editor

Highly recommended!
This is an editor similar to FDE and can be used as an external editor for MC
May 10, 2021 | os.ghalkes.nl

Tilde is a text editor for the console/terminal, which provides an intuitive interface for people accustomed to GUI environments such as Gnome, KDE and Windows. For example, the short-cut to copy the current selection is Control-C, and to paste the previously copied text the short-cut Control-V can be used. As another example, the File menu can be accessed by pressing Meta-F.

However, being a terminal-based program there are limitations. Not all terminals provide sufficient information to the client programs to make Tilde behave in the most intuitive way. When this is the case, Tilde provides work-arounds which should be easy to work with.

The main audience for Tilde is users who normally work in GUI environments, but sometimes require an editor for a console/terminal environment. This may be because the computer in question is a server which does not provide a GUI, or is accessed remotely over SSH. Tilde allows these users to edit files without having to learn a completely new interface, such as vi or Emacs do. A result of this choice is that Tilde will not provide all the fancy features that Vim or Emacs provide, but only the most used features.
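Since the note above mentions pairing Tilde with MC (Midnight Commander), here is a minimal sketch of one way to wire that up. It assumes Tilde is installed as the command tilde and that mc has been told not to use its internal editor (Options -> Configuration -> uncheck "Use internal edit"), so that it falls back to the editor named in the EDITOR variable:

# make tilde the default editor for programs that honor $EDITOR,
# including mc once its internal editor is disabled
export EDITOR=tilde
export VISUAL=tilde

After that, pressing F4 on a file in mc should open it in Tilde.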

News Tilde version 1.1.2 released

This release fixes a bug where Tilde would discard read lines before an invalid character when requested to continue reading.

23-May-2020

Tilde version 1.1.1 released

This release fixes a build failure on C++14 and later compilers

12-Dec-2019

[May 10, 2021] Split a String in Bash

May 10, 2021 | www.xmodulo.com

When you need to split a string in bash, you can use bash's built-in read command. This command reads a single line of input from stdin and splits it on a delimiter. The split elements are then stored in either an array or separate variables supplied with the read command. The default delimiters are whitespace characters (' ', '\t', '\r', '\n'). If you want to split a string on a custom delimiter, you can specify the delimiter in the IFS variable before calling read.

# strings to split
var1="Harry Samantha Bart   Amy"
var2="green:orange:black:purple"

# split a string by one or more whitespaces, and store the result in an array
read -a my_array <<< $var1

# iterate the array to access individual split words
for elem in "${my_array[@]}"; do
    echo $elem
done

echo "----------"
# split a string by a custom delimiter
IFS=':' read -a my_array2 <<< $var2
for elem in "${my_array2[@]}"; do
    echo $elem
done
Harry
Samantha
Bart
Amy
----------
green
orange
black
purple
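The read built-in can also split directly into separate named variables instead of an array, which is convenient when the number of fields is known in advance. A minimal sketch (the variable names are just for illustration); any extra fields land in the last variable:

IFS=':' read -r color1 color2 rest <<< "$var2"
echo "$color1"    # green
echo "$color2"    # orange
echo "$rest"      # black:purple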

[May 10, 2021] How to manipulate strings in bash

May 10, 2021 | www.xmodulo.com

Remove a Trailing Newline Character from a String in Bash

If you want to remove a trailing newline or carriage return character from a string, you can use the bash's parameter expansion in the following form.

${string%$var}

This expression implies that if the "string" contains a trailing character stored in "var", the result of the expression will become the "string" without the character. For example:

# input string with a trailing newline character
input_line=$'This is my example line\n'
# define a trailing character.  For carriage return, replace it with $'\r' 
character=$'\n'

echo -e "($input_line)"
# remove a trailing newline character
input_line=${input_line%$character}
echo -e "($input_line)"
(This is my example line
)
(This is my example line)
Trim Leading/Trailing Whitespaces from a String in Bash

If you want to remove whitespaces at the beginning or at the end of a string (also known as leading/trailing whitespaces) from a string, you can use sed command.

my_str="   This is my example string    "

# original string with leading/trailing whitespaces
echo -e "($my_str)"

# trim leading whitespaces in a string
my_str=$(echo "$my_str" | sed -e "s/^[[:space:]]*//")
echo -e "($my_str)"

# trim trailing whitespaces in a string
my_str=$(echo "$my_str" | sed -e "s/[[:space:]]*$//")
echo -e "($my_str)"
(   This is my example string    )
(This is my example string    )      ← leading whitespaces removed
(This is my example string)          ← trailing whitespaces removed
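The two sed invocations above can also be combined into a single call that trims both ends at once; a minimal sketch using the same my_str variable:

# trim leading and trailing whitespace in one sed call
my_str=$(echo "$my_str" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')
echo "($my_str)"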

If you want to stick with bash's built-in mechanisms, the following bash function can get the job done.

trim() {
    local var="$*"
    # remove leading whitespace characters
    var="${var#"${var%%[![:space:]]*}"}"
    # remove trailing whitespace characters
    var="${var%"${var##*[![:space:]]}"}"   
    echo "$var"
}

my_str="   This is my example string    "
echo "($my_str)"

my_str=$(trim "$my_str")
echo "($my_str)"

[May 10, 2021] String Operators - Learning the bash Shell, Second Edition

May 10, 2021 | www.oreilly.com

Table 4-1. Substitution Operators

Operator Substitution

${varname:-word}

If varname exists and isn't null, return its value; otherwise return word.

Purpose: Returning a default value if the variable is undefined.

Example: ${count:-0} evaluates to 0 if count is undefined.

${varname:=word}

If varname exists and isn't null, return its value; otherwise set it to word and then return its value. Positional and special parameters cannot be assigned this way.

Purpose: Setting a variable to a default value if it is undefined.

Example: ${count:=0} sets count to 0 if it is undefined.

${varname:?message}

If varname exists and isn't null, return its value; otherwise print varname: followed by message, and abort the current command or script (non-interactive shells only). Omitting message produces the default message "parameter null or not set".

Purpose: Catching errors that result from variables being undefined.

Example: ${count:?"undefined!"} prints "count: undefined!" and exits if count is undefined.

${varname:+word}

If varname exists and isn't null, return word; otherwise return null.

Purpose: Testing for the existence of a variable.

Example: ${count:+1} returns 1 (which could mean "true") if count is defined.

${varname:offset}
${varname:offset:length}

Performs substring expansion. It returns the substring of $varname starting at offset and up to length characters. The first character in $varname is position 0. If length is omitted, the substring starts at offset and continues to the end of $varname. If offset is less than 0 then the position is taken from the end of $varname. If varname is @, the length is the number of positional parameters starting at parameter offset.

Purpose: Returning parts of a string (substrings or slices).

Example: If count is set to frogfootman, ${count:4} returns footman. ${count:4:4} returns foot.
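These operators are easy to verify interactively. Below is a minimal bash sketch exercising each of them; the variable names are chosen only for illustration:

count=frogfootman

echo "${count:-0}"          # frogfootman (count is set, so the default is ignored)
echo "${missing:-0}"        # 0 (missing is unset, so the default is returned)

echo "${missing2:=42}"      # 42 (missing2 is unset, so it is set to 42 and returned)
echo "$missing2"            # 42 (the assignment persisted)

echo "${count:+defined}"    # defined (count is set and non-null)
echo "${missing:+defined}"  # empty output (missing is unset)

echo "${count:4}"           # footman
echo "${count:4:4}"         # foot

# ${missing:?"is not set"} would print an error and abort a script at this point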


Table 4-2. Pattern-Matching Operators

Operator Meaning

${variable#pattern}

If the pattern matches the beginning of the variable's value, delete the shortest part that matches and return the rest.

${variable##pattern}

If the pattern matches the beginning of the variable's value, delete the longest part that matches and return the rest.

${variable%pattern}

If the pattern matches the end of the variable's value, delete the shortest part that matches and return the rest.

${variable%%pattern}

If the pattern matches the end of the variable's value, delete the longest part that matches and return the rest.

${variable/pattern/string}
${variable//pattern/string}

The longest match to pattern in variable is replaced by string. In the first form, only the first match is replaced. In the second form, all matches are replaced. If the pattern begins with a #, it must match at the start of the variable. If it begins with a %, it must match at the end of the variable. If string is null, the matches are deleted. If variable is @ or *, the operation is applied to each positional parameter in turn and the expansion is the resultant list.
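The first four operators are demonstrated with pathnames further down this page; the replacement forms can be sketched in bash as follows (the variable contents are chosen only for illustration):

file=long.file.name

echo "${file/file/data}"    # long.data.name (only the first match is replaced)
echo "${file//./_}"         # long_file_name (all matches are replaced)
echo "${file/#long/short}"  # short.file.name (pattern anchored at the start)
echo "${file/%name/txt}"    # long.file.txt (pattern anchored at the end)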

[May 10, 2021] Concatenating Strings with the += Operator

May 10, 2021 | linuxize.com

Another way of concatenating strings in bash is by appending variables or literal strings to a variable using the += operator:

VAR1="Hello, "
VAR1+=" World"
echo "$VAR1"
Hello, World

The following example uses the += operator to concatenate strings in a bash for loop:

languages.sh
VAR=""
for ELEMENT in 'Hydrogen' 'Helium' 'Lithium' 'Beryllium'; do
  VAR+="${ELEMENT} "
done

echo "$VAR"
Hydrogen Helium Lithium Beryllium
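The same += operator can also append elements to a bash array instead of a flat string, which avoids having to manage separators by hand. A minimal sketch:

ELEMENTS=()
for ELEMENT in 'Hydrogen' 'Helium' 'Lithium' 'Beryllium'; do
  ELEMENTS+=("$ELEMENT")    # append one array element per iteration
done

echo "${#ELEMENTS[@]}"      # 4
echo "${ELEMENTS[2]}"       # Lithium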

[May 10, 2021] String Operators (Korn Shell) - Daniel Han's Technical Notes

May 10, 2021 | sites.google.com

4.3 String Operators

The curly-bracket syntax allows for the shell's string operators . String operators allow you to manipulate values of variables in various useful ways without having to write full-blown programs or resort to external UNIX utilities. You can do a lot with string-handling operators even if you haven't yet mastered the programming features we'll see in later chapters.

In particular, string operators let you do the following: ensure that variables exist (i.e., are defined and have non-null values), set default values for variables, catch errors that result from variables not being set, and remove portions of variables' values that match patterns.

4.3.1 Syntax of String Operators

The basic idea behind the syntax of string operators is that special characters that denote operations are inserted between the variable's name and the right curly brackets. Any argument that the operator may need is inserted to the operator's right.

The first group of string-handling operators tests for the existence of variables and allows substitutions of default values under certain conditions. These are listed in Table 4.1 . [6]

[6] The colon ( : ) in each of these operators is actually optional. If the colon is omitted, then change "exists and isn't null" to "exists" in each definition, i.e., the operator tests for existence only.

Table 4.1: Substitution Operators
Operator Substitution
${varname:-word} If varname exists and isn't null, return its value; otherwise return word.
Purpose: Returning a default value if the variable is undefined.
Example: ${count:-0} evaluates to 0 if count is undefined.
${varname:=word} If varname exists and isn't null, return its value; otherwise set it to word and then return its value.[7]
Purpose: Setting a variable to a default value if it is undefined.
Example: ${count:=0} sets count to 0 if it is undefined.
${varname:?message} If varname exists and isn't null, return its value; otherwise print varname: followed by message, and abort the current command or script. Omitting message produces the default message "parameter null or not set".
Purpose: Catching errors that result from variables being undefined.
Example: ${count:?"undefined!"} prints "count: undefined!" and exits if count is undefined.
${varname:+word} If varname exists and isn't null, return word; otherwise return null.
Purpose: Testing for the existence of a variable.
Example: ${count:+1} returns 1 (which could mean "true") if count is defined.

[7] Pascal, Modula, and Ada programmers may find it helpful to recognize the similarity of this to the assignment operators in those languages.

The first two of these operators are ideal for setting defaults for command-line arguments in case the user omits them. We'll use the first one in our first programming task.

Task 4.1

You have a large album collection, and you want to write some software to keep track of it. Assume that you have a file of data on how many albums you have by each artist. Lines in the file look like this:

14 Bach, J.S.
1       Balachander, S.
21      Beatles
6       Blakey, Art

Write a program that prints the N highest lines, i.e., the N artists by whom you have the most albums. The default for N should be 10. The program should take one argument for the name of the input file and an optional second argument for how many lines to print.

By far the best approach to this type of script is to use built-in UNIX utilities, combining them with I/O redirectors and pipes. This is the classic "building-block" philosophy of UNIX that is another reason for its great popularity with programmers. The building-block technique lets us write a first version of the script that is only one line long:

sort -nr $1 | head -${2:-10}

Here is how this works: the sort (1) program sorts the data in the file whose name is given as the first argument ( $1 ). The -n option tells sort to interpret the first word on each line as a number (instead of as a character string); the -r tells it to reverse the comparisons, so as to sort in descending order.

The output of sort is piped into the head (1) utility, which, when given the argument -N, prints the first N lines of its input on the standard output. The expression -${2:-10} evaluates to a dash (-) followed by the second argument if it is given, or to -10 if it's not; notice that the variable in this expression is 2, which is the second positional parameter.

Assume the script we want to write is called highest . Then if the user types highest myfile , the line that actually runs is:

sort -nr myfile | head -10

Or if the user types highest myfile 22 , the line that runs is:

sort -nr myfile | head -22

Make sure you understand how the :- string operator provides a default value.

This is a perfectly good, runnable script-but it has a few problems. First, its one line is a bit cryptic. While this isn't much of a problem for such a tiny script, it's not wise to write long, elaborate scripts in this manner. A few minor changes will make the code more readable.

First, we can add comments to the code; anything between # and the end of a line is a comment. At a minimum, the script should start with a few comment lines that indicate what the script does and what arguments it accepts. Second, we can improve the variable names by assigning the values of the positional parameters to regular variables with mnemonic names. Finally, we can add blank lines to space things out; blank lines, like comments, are ignored. Here is a more readable version:

#
#       highest filename [howmany]
#
#       Print howmany highest-numbered lines in file filename.
#       The input file is assumed to have lines that start with
#       numbers.  Default for howmany is 10.
#

filename=$1

howmany=${2:-10}
sort -nr $filename | head -$howmany

The square brackets around howmany in the comments adhere to the convention in UNIX documentation that square brackets denote optional arguments.

The changes we just made improve the code's readability but not how it runs. What if the user were to invoke the script without any arguments? Remember that positional parameters default to null if they aren't defined. If there are no arguments, then $1 and $2 are both null. The variable howmany ( $2 ) is set up to default to 10, but there is no default for filename ( $1 ). The result would be that this command runs:

sort -nr | head -10

As it happens, if sort is called without a filename argument, it expects input to come from standard input, e.g., a pipe (|) or a user's terminal. Since it doesn't have the pipe, it will expect the terminal. This means that the script will appear to hang! Although you could always type [CTRL-D] or [CTRL-C] to get out of the script, a naive user might not know this.

Therefore we need to make sure that the user supplies at least one argument. There are a few ways of doing this; one of them involves another string operator. We'll replace the line:

filename=$1

with:

filename=${1:?"filename missing."}

This will cause two things to happen if a user invokes the script without any arguments: first the shell will print the somewhat unfortunate message:

highest: 1: filename missing.

to the standard error output. Second, the script will exit without running the remaining code.

With a somewhat "kludgy" modification, we can get a slightly better error message. Consider this code:

filename=$1
filename=${filename:?"missing."}

This results in the message:

highest: filename: missing.

(Make sure you understand why.) Of course, there are ways of printing whatever message is desired; we'll find out how in Chapter 5 .

Before we move on, we'll look more closely at the two remaining operators in Table 4.1 and see how we can incorporate them into our task solution. The := operator does roughly the same thing as :- , except that it has the "side effect" of setting the value of the variable to the given word if the variable doesn't exist.

Therefore we would like to use := in our script in place of :- , but we can't; we'd be trying to set the value of a positional parameter, which is not allowed. But if we replaced:

howmany=${2:-10}

with just:

howmany=$2

and moved the substitution down to the actual command line (as we did at the start), then we could use the := operator:

sort -nr $filename | head -${howmany:=10}

Using := has the added benefit of setting the value of howmany to 10 in case we need it afterwards in later versions of the script.

The final substitution operator is :+ . Here is how we can use it in our example: Let's say we want to give the user the option of adding a header line to the script's output. If he or she types the option -h , then the output will be preceded by the line:

ALBUMS  ARTIST

Assume further that this option ends up in the variable header , i.e., $header is -h if the option is set or null if not. (Later we will see how to do this without disturbing the other positional parameters.)

The expression:

${header:+"ALBUMS  ARTIST\n"}

yields null if the variable header is null, or ALBUMS  ARTIST\n if it is non-null. This means that we can put the line:

print -n ${header:+"ALBUMS  ARTIST\n"}

right before the command line that does the actual work. The -n option to print causes it not to print a LINEFEED after printing its arguments. Therefore this print statement will print nothing-not even a blank line-if header is null; otherwise it will print the header line and a LINEFEED (\n).

4.3.2 Patterns and Regular Expressions

We'll continue refining our solution to Task 4-1 later in this chapter. The next type of string operator is used to match portions of a variable's string value against patterns. Patterns, as we saw in Chapter 1, are strings that can contain wildcard characters (*, ?, and [] for character sets and ranges).

Wildcards have been standard features of all UNIX shells going back (at least) to the Version 6 Bourne shell. But the Korn shell is the first shell to add to their capabilities. It adds a set of operators, called regular expression (or regexp for short) operators, that give it much of the string-matching power of advanced UNIX utilities like awk (1), egrep (1) (extended grep (1)) and the emacs editor, albeit with a different syntax. These capabilities go beyond those that you may be used to in other UNIX utilities like grep , sed (1) and vi (1).

Advanced UNIX users will find the Korn shell's regular expression capabilities occasionally useful for script writing, although they border on overkill. (Part of the problem is the inevitable syntactic clash with the shell's myriad other special characters.) Therefore we won't go into great detail about regular expressions here. For more comprehensive information, the "last word" on practical regular expressions in UNIX is sed & awk , an O'Reilly Nutshell Handbook by Dale Dougherty. If you are already comfortable with awk or egrep , you may want to skip the following introductory section and go to "Korn Shell Versus awk/egrep Regular Expressions" below, where we explain the shell's regular expression mechanism by comparing it with the syntax used in those two utilities. Otherwise, read on.

4.3.2.1 Regular expression basics

Think of regular expressions as strings that match patterns more powerfully than the standard shell wildcard schema. Regular expressions began as an idea in theoretical computer science, but they have found their way into many nooks and crannies of everyday, practical computing. The syntax used to represent them may vary, but the concepts are very much the same.

A shell regular expression can contain regular characters, standard wildcard characters, and additional operators that are more powerful than wildcards. Each such operator has the form x(exp), where x is the particular operator and exp is any regular expression (often simply a regular string). The operator determines how many occurrences of exp a string that matches the pattern can contain. See Table 4.2 and Table 4.3.

Table 4.2: Regular Expression Operators
Operator Meaning
*(exp) 0 or more occurrences of exp
+(exp) 1 or more occurrences of exp
?(exp) 0 or 1 occurrences of exp
@(exp1|exp2|...) exp1 or exp2 or ...
!(exp) Anything that doesn't match exp [8]

[8] Actually, !( exp ) is not a regular expression operator by the standard technical definition, though it is a handy extension.

Table 4.3: Regular Expression Operator Examples
Expression Matches
x x
*(x) Null string, x, xx, xxx, ...
+(x) x, xx, xxx, ...
?(x) Null string, x
!(x) Any string except x
@(x) x (see below)

Regular expressions are extremely useful when dealing with arbitrary text, as you already know if you have used grep or the regular-expression capabilities of any UNIX editor. They aren't nearly as useful for matching filenames and other simple types of information with which shell users typically work. Furthermore, most things you can do with the shell's regular expression operators can also be done (though possibly with more keystrokes and less efficiency) by piping the output of a shell command through grep or egrep .

Nevertheless, here are a few examples of how shell regular expressions can solve filename-listing problems. Some of these will come in handy in later chapters as pieces of solutions to larger tasks.

  1. The emacs editor supports customization files whose names end in .el (for Emacs LISP) or .elc (for Emacs LISP Compiled). List all emacs customization files in the current directory.
  2. In a directory of C source code, list all files that are not necessary. Assume that "necessary" files end in .c or .h , or are named Makefile or README .
  3. Filenames in the VAX/VMS operating system end in a semicolon followed by a version number, e.g., fred.bob;23 . List all VAX/VMS-style filenames in the current directory.

Here are the solutions:

  1. In the first of these, we are looking for files that end in .el with an optional c. The expression that matches this is *.el?(c).
  2. The second example depends on the four standard subexpressions *.c, *.h, Makefile, and README. The entire expression is !(*.c|*.h|Makefile|README), which matches anything that does not match any of the four possibilities.
  3. The solution to the third example starts with *\; : the shell wildcard * followed by a backslash-escaped semicolon. Then, we could use the regular expression +([0-9]), which matches one or more characters in the range [0-9], i.e., one or more digits. This is almost correct (and probably close enough), but it doesn't take into account that the first digit cannot be 0. Therefore the correct expression is *\;[1-9]*([0-9]), which matches anything that ends with a semicolon, a digit from 1 to 9, and zero or more digits from 0 to 9.
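These patterns also work in bash once extended globbing is enabled, so they are easy to try outside ksh. A minimal sketch (the directory contents are hypothetical):

shopt -s extglob                 # enable ksh-style extended patterns in bash

ls *.el?(c)                      # emacs customization files ending in .el or .elc
ls !(*.c|*.h|Makefile|README)    # everything that is not a "necessary" C source file
ls *\;[1-9]*([0-9])              # VAX/VMS-style names such as fred.bob;23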

Regular expression operators are an interesting addition to the Korn shell's features, but you can get along well without them-even if you intend to do a substantial amount of shell programming.

In our opinion, the shell's authors missed an opportunity to build into the wildcard mechanism the ability to match files by type (regular, directory, executable, etc., as in some of the conditional tests we will see in Chapter 5 ) as well as by name component. We feel that shell programmers would have found this more useful than arcane regular expression operators.

The following section compares Korn shell regular expressions to analogous features in awk and egrep . If you aren't familiar with these, skip to the section entitled "Pattern-matching Operators."

4.3.2.2 Korn shell versus awk/egrep regular expressions

Table 4.4 is an expansion of Table 4.2 : the middle column shows the equivalents in awk / egrep of the shell's regular expression operators.

Table 4.4: Shell Versus egrep/awk Regular Expression Operators
Korn Shell egrep/awk Meaning
*(exp) exp* 0 or more occurrences of exp
+(exp) exp+ 1 or more occurrences of exp
?(exp) exp? 0 or 1 occurrences of exp
@(exp1|exp2|...) exp1|exp2|... exp1 or exp2 or ...
!(exp) (none) Anything that doesn't match exp

These equivalents are close but not quite exact. Actually, an exp within any of the Korn shell operators can be a series of exp1 | exp2 |... alternates. But because the shell would interpret an expression like dave|fred|bob as a pipeline of commands, you must use @(dave|fred|bob) for alternates by themselves.

For example, @(dave|fred|bob) matches dave, fred, or bob.

It is worth re-emphasizing that shell regular expressions can still contain standard shell wildcards. Thus, the shell wildcard ? (match any single character) is the equivalent of . in egrep or awk, and the shell's character set operator [...] is the same as in those utilities. [9] For example, the expression +([0-9]) matches a number, i.e., one or more digits. The shell wildcard character * is equivalent to the shell regular expression *(?).

[9] And, for that matter, the same as in grep , sed , ed , vi , etc.

A few egrep and awk regexp operators do not have equivalents in the Korn shell. These include the beginning- and end-of-line operators (^ and $) and the beginning- and end-of-word operators (\< and \>).

The first two pairs are hardly necessary, since the Korn shell doesn't normally operate on text files and does parse strings into words itself.

4.3.3 Pattern-matching Operators

Table 4.5 lists the Korn shell's pattern-matching operators.

Table 4.5: Pattern-matching Operators
Operator Meaning
${variable#pattern} If the pattern matches the beginning of the variable's value, delete the shortest part that matches and return the rest.
${variable##pattern} If the pattern matches the beginning of the variable's value, delete the longest part that matches and return the rest.
${variable%pattern} If the pattern matches the end of the variable's value, delete the shortest part that matches and return the rest.
${variable%%pattern} If the pattern matches the end of the variable's value, delete the longest part that matches and return the rest.

These can be hard to remember, so here's a handy mnemonic device: # matches the front because number signs precede numbers; % matches the rear because percent signs follow numbers.

The classic use for pattern-matching operators is in stripping off components of pathnames, such as directory prefixes and filename suffixes. With that in mind, here is an example that shows how all of the operators work. Assume that the variable path has the value /home/billr/mem/long.file.name; then:

Expression                   Result
${path##/*/}                       long.file.name
${path#/*/}              billr/mem/long.file.name
$path              /home/billr/mem/long.file.name
${path%.*}         /home/billr/mem/long.file
${path%%.*}        /home/billr/mem/long

The two patterns used here are /*/, which matches anything between two slashes, and .*, which matches a dot followed by anything.

We will incorporate one of these operators into our next programming task.

Task 4.2

You are writing a C compiler, and you want to use the Korn shell for your front-end.[10]

[10] Don't laugh-many UNIX compilers have shell scripts as front-ends.

Think of a C compiler as a pipeline of data processing components. C source code is input to the beginning of the pipeline, and object code comes out of the end; there are several steps in between. The shell script's task, among many other things, is to control the flow of data through the components and to designate output files.

You need to write the part of the script that takes the name of the input C source file and creates from it the name of the output object code file. That is, you must take a filename ending in .c and create a filename that is similar except that it ends in .o .

The task at hand is to strip the .c off the filename and append .o . A single shell statement will do it:

objname=${filename%.c}.o

This tells the shell to look at the end of filename for .c . If there is a match, return $filename with the match deleted. So if filename had the value fred.c , the expression ${filename%.c} would return fred . The .o is appended to make the desired fred.o , which is stored in the variable objname .

If filename had an inappropriate value (without .c ) such as fred.a , the above expression would evaluate to fred.a.o : since there was no match, nothing is deleted from the value of filename , and .o is appended anyway. And, if filename contained more than one dot-e.g., if it were the y.tab.c that is so infamous among compiler writers-the expression would still produce the desired y.tab.o . Notice that this would not be true if we used %% in the expression instead of % . The former operator uses the longest match instead of the shortest, so it would match .tab.o and evaluate to y.o rather than y.tab.o . So the single % is correct in this case.

A longest-match deletion would be preferable, however, in the following task.

Task 4.3

You are implementing a filter that prepares a text file for printer output. You want to put the file's name-without any directory prefix-on the "banner" page. Assume that, in your script, you have the pathname of the file to be printed stored in the variable pathname .

Clearly the objective is to remove the directory prefix from the pathname. The following line will do it:

bannername=${pathname##*/}

This solution is similar to the first line in the examples shown before. If pathname were just a filename, the pattern */ (anything followed by a slash) would not match and the value of the expression would be pathname untouched. If pathname were something like fred/bob, the prefix fred/ would match the pattern and be deleted, leaving just bob as the expression's value. The same thing would happen if pathname were something like /dave/pete/fred/bob: since the ## deletes the longest match, it deletes the entire /dave/pete/fred/.

If we used #*/ instead of ##*/, the expression would have the incorrect value dave/pete/fred/bob, because the shortest instance of "anything followed by a slash" at the beginning of the string is just a slash (/).

The construct ${variable##*/} is actually equivalent to the UNIX utility basename (1). basename takes a pathname as argument and returns the filename only; it is meant to be used with the shell's command substitution mechanism (see below). basename is less efficient than ${variable##*/} because it runs in its own separate process rather than within the shell. Another utility, dirname (1), does essentially the opposite of basename: it returns the directory prefix only. It is equivalent to the Korn shell expression ${variable%/*} and is less efficient for the same reason.
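A quick way to see the equivalence (and why the parameter expansions are cheaper, since they avoid spawning a separate process):

pathname=/home/billr/mem/long.file.name

echo "${pathname##*/}"   # long.file.name -- same result as: basename "$pathname"
echo "${pathname%/*}"    # /home/billr/mem -- same result as: dirname "$pathname"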

4.3.4 Length Operator

There are two remaining operators on variables. One is ${#varname}, which returns the length of the value of the variable as a character string. (In Chapter 6 we will see how to treat this and similar values as actual numbers so they can be used in arithmetic expressions.) For example, if filename has the value fred.c, then ${#filename} would have the value 6. The other operator (${#array[*]}) has to do with array variables, which are also discussed in Chapter 6.
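A minimal sketch of the length operator in use; since the result is an ordinary number, it can be used directly in tests and arithmetic:

filename=fred.c
echo "${#filename}"              # 6

if [ ${#filename} -gt 5 ]; then
    echo "long name"
fi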

http://docstore.mik.ua/orelly/unix2.1/ksh/ch04_03.htm

[May 10, 2021] Lazy Linux: 10 essential tricks for admins by Vallard Benincosa

IBM is notorious for destroying useful information. This article is no longer available from IBM.
Jul 20, 2008

Originally from: IBM DeveloperWorks

How to be a more productive Linux systems administrator

Learn these 10 tricks and you'll be the most powerful Linux® systems administrator in the universe...well, maybe not the universe, but you will need these tips to play in the big leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples accompany each trick, so you can duplicate them on your own systems.

The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?

The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin's bag of tricks. These tips will save you time-and even if you don't get paid more money to be more efficient, you'll at least have more time to play Halo.

Trick 1: Unmounting the unresponsive DVD drive

The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it will eject immediately. He then complains that, in most enterprise Linux servers, if a process is running in that directory, then the ejection won't happen. For too long as a Linux administrator, I would reboot the machine and get my disk on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD drive. But this is ineffective.

Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:

# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

Now open up a second terminal and try to eject the DVD drive:

# eject

You'll get a message like:

umount: /media/cdrom: device is busy

Before you free it, let's find out who is using it.

# fuser /media/cdrom

You see the process was running and, indeed, it is our fault we can not eject the disk.

Now, if you are root, you can exercise your godlike powers and kill processes:

# fuser -k /media/cdrom

Boom! Just like that, freedom. Now solemnly unmount the drive:

# eject

fuser is good.
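If you want to see which user and command are holding the mount point before killing anything, fuser's verbose mode (or lsof) will show that. Both are standard options; this is just a suggested extra step, not part of the original trick:

# fuser -v /media/cdrom
# lsof /media/cdrom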

Trick 2: Getting your screen back when it's hosed

Try this:

# cat /bin/cat

Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?

You type reset. But wait, you say, typing reset is too close to typing reboot or shutdown. Your palms start to sweat, especially if you are doing this on a production machine.

Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:

# reset

Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.

Trick 3: Collaboration with screen

David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."

"Fine," you say. "What machine are you on?"

David responds: "Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:

# su - david

Then you go over to posh:

# ssh posh

Once you are there, you run:

# screen -S foo

Then you holler at David:

"Hey David, run the following command on your terminal: # screen -x foo."

This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.

At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.

The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.

But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type: Ctrl-A D . (I mean, hold down the Ctrl key and strike the A key. Then push the D key.)

You can then reattach by running the screen -x foo command again.

Trick 4: Getting back the root password

You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get on the machine and change the password. This doesn't work in all cases (like if you made a GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS Linux example.

First reboot the system. When it reboots you'll come to the GRUB screen as shown in Figure 1. Move the arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.


Figure 1. GRUB screen after reboot

Next, select the kernel that will boot with the arrow keys, and type E to edit the kernel line. You'll then see something like Figure 2:


Figure 2. Ready to edit the kernel line

Use the arrow key again to highlight the line that begins with kernel, and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments as shown in Figure 3:


Figure 3. Append the argument with the number 1
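
Since the figures are not reproduced here, the edited kernel line ends up looking roughly like the following (an illustrative line only; the kernel version and root device will differ on your system):

kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet 1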

Then press Enter, then B, and the kernel will boot up to single-user mode. Once there, you can run the passwd command to change the password for user root:

sh-3.00# passwd
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

Now you can reboot, and the machine will boot up with your new password.

Trick 5: SSH back door

Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come into you.

In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door. To use it, you'll need a machine on the Internet that you can use as an intermediary.

In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.

Figure 4. Poking a hole in the firewall

Here's how to proceed:

  1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.
  2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you'll forward connections made to port 2222 on blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're not putting ginger out on the Internet naked.

    You can do this with the following syntax:

    ~# ssh -R 2222:localhost:22 thedude@blackbox.example.com

    Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

    thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

    to keep the machine busy. Then minimize the window. (A tidier alternative that avoids the keep-alive loop is sketched after this list.)

  3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

    root@tech:~# ssh thedude@blackbox.example.com

  4. Once tech is on the blackbox, they can SSH to ginger using the following command:

    thedude@blackbox:~$ ssh -p 2222 root@localhost

  5. Tech will then be prompted for a password. They should enter the root password of ginger.

  6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.)
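
As a variation on step 2 (my own hedged sketch using standard OpenSSH options, not part of the original article), you can hold the tunnel open without the keep-alive loop: -N tells ssh not to run a remote command, and ServerAliveInterval sends periodic keep-alives so the connection doesn't time out.

~# ssh -N -R 2222:localhost:22 -o ServerAliveInterval=60 thedude@blackbox.example.com
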
Trick 6: Remote VNC session through an SSH tunnel

VNC or virtual network computing has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

For example, suppose in Trick 5, ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

  1. Start a VNC server session on ginger. This is done by running something like:

    root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99

    The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are using a really slow connection, setting the depth to 8 may be a better option. Using :99 specifies the port the VNC server will be accessible from. The VNC protocol starts at 5900, so specifying :99 means the server is accessible on port 5999.

    When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

  2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

    root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com

    Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

    thedude@blackbox:~$ vncviewer localhost:99

    That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

  3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:

    root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com

    This time the SSH flag we used was -L, which instead of pushing 5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

  4. From tech, VNC to ginger by running the command:

    root@tech:~# vncviewer localhost:99

    Tech will now have a VNC session directly to ginger.

While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.

Let me add a trick to this trick: If tech was running the Windows® operating system and didn't have a command-line SSH client, then tech can run PuTTY. PuTTY can be set to forward SSH ports by looking in the options in the sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like in Figure 5.


Figure 5. PuTTY can forward SSH ports for tunneling

If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.

Trick 7: Checking your bandwidth

Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS mount ginger's shared filesystem.

The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.

So they do this. But now the question is: How much bandwidth do they really have?

Gigabit Ethernet has a theoretical limit of 128MBps. Where does that number come from? Well,

1Gb = 1024Mb; 1024Mb/8 = 128MB; "b" = "bits," "B" = "bytes"

But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:

# wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz

You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user that is viewable on both nodes:

tar zxvf iperf*gz
cd iperf-2.0.2
./configure --prefix=/home/bob/perf
make
make install

On ginger, run:

# /home/bob/perf/bin/iperf -s -f M

This machine will act as the server and print out performance speeds in MBps.

On the beckham node, run:

# /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60

You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.

In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.

I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!

Trick 8: Command-line scripting and utilities

A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk, grep, and sed. There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.

For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:

# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
done >>/etc/hosts

Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.

As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.

Assume the SSH is set up to authenticate without a password. Then run:

# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort | uniq

A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.

First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:

free -tm | grep Mem | awk '{print $2}'

That command says to run free -tm to report memory in megabytes, keep only the line that starts with Mem using grep, and print the second field (the total memory) with awk.

This operation is performed on every node.

Once you have performed the command on every node, the entire output of all 200 nodes is piped (|d) to the sort command so that all the memory values are sorted.

Finally, you eliminate duplicates with the uniq command. If every node has the same amount of memory, this prints a single value; if any node differs, you'll see more than one value.

This command isn't perfect. If you find that a value of memory is different than what you expect, you won't know on which node it was or how many nodes there were. Another command may need to be issued for that.

What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. This is its real value: speed to do a quick-and-dirty check.
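
If that quick check does turn up an unexpected value, a follow-up loop that prefixes each result with the node name will tell you which machine is the odd one out (a sketch along the same lines as the command above, not from the original article):

# for num in $(seq -w 200); do echo -n "n$num: "; ssh n$num free -tm | grep Mem | awk '{print $2}'; done

You can then eyeball the output, or grep it for the value you didn't expect.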

Trick 9: Spying on the console

Some software prints error messages to the console that may not necessarily show up on your SSH session. Using the vcs devices can let you examine these. From within an SSH session, run the following command on a remote server: # cat /dev/vcs1. This will show you what is on the first console. You can also look at the other virtual terminals using 2, 3, etc. If a user is typing on the remote system, you'll be able to see what he typed.

In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.

Trick 10: Random system information collection

In Trick 8, you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.

First, let's gather information about the processor. This is easily done as follows:

# cat /proc/cpuinfo

This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.

A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:

# cat /proc/cpuinfo | grep processor | wc -l

I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.

Another piece of information I may require is disk information. This can be gotten with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes. # df -h also shows how the disk was partitioned.

And to end the list, here's a way to look at the firmware of your system-a method to get the BIOS level and the firmware on the NIC.

To check the BIOS version, you can run the dmidecode command. Unfortunately, you can't easily grep for just that information, so piping the output through less is the practical way to view it. On my Lenovo T61 laptop, the output looks like this:

# dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...

This is much more efficient than rebooting your machine and looking at the POST output.

To examine the driver and firmware versions of your Ethernet adapter, run ethtool:

# ethtool -i eth0
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: 0.3-0

Conclusion

There are thousands of tricks you can learn from someone who's an expert at the command line. The best ways to learn are to:

I hope at least one of these tricks helped you learn something you didn't know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don't like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.

About the author

Vallard Benincosa is a lazy Linux Certified IT professional working for the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.

[May 09, 2021] Good Alternatives To Man Pages Every Linux User Needs To Know by Sk

Images removed. See the original for full text.
Notable quotes:
"... you need Ruby 1.8.7+ installed on your machine for this to work. ..."
| ostechnix.com

1. Bropages

The slogan of the Bropages utility is "just get to the point". It is true! The bropages are just like man pages, but they display examples only. As the slogan says, it skips all the descriptive text and gives you concise examples for command line programs. Bropages can be easily installed using gem, so you need Ruby 1.8.7+ installed on your machine for this to work. To install Ruby on Rails in CentOS and Ubuntu, refer to the following guide. After installing gem, all you have to do to install bro pages is:

$ gem install bropages
... The usage is incredibly easy! ...just type:
$ bro find
... The good thing is you can upvote or downvote the examples.

As you see in the above screenshot, we can upvote the first command by entering the following command:

$ bro thanks
You will be asked to enter your Email ID. Enter a valid email to receive the verification code. Then copy/paste the verification code in the prompt and hit ENTER to submit your upvote. The highest upvoted examples will be shown at the top.
Bropages.org requires an email address verification to do this
What's your email address?
sk@senthilkumar.com
Great! We're sending an email to sk@senthilkumar.com
Please enter the verification code: apHelH13ocC7OxTyB7Mo9p
Great! You're verified! FYI, your email and code are stored locally in ~/.bro
You just gave thanks to an entry for find!
You rock!
To upvote the second command, type:
$ bro thanks 2
Similarly, to downvote the first command, run:
$ bro ...no

... ... ...

2. Cheat

Cheat is another useful alternative to man pages to learn Unix commands. It allows you to create and view interactive Linux/Unix command cheatsheets on the command line. The recommended way to install Cheat is using the Pip package manager.
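
If Pip is already available, the install is typically a one-liner (assuming the package is published on PyPI under the name cheat):

$ pip install cheat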

... ... ...

Cheat usage is trivial.

$ cheat find
You will be presented with the list of available examples of the find command: ... ... ...

To view the help section, run:

$ cheat -h
For more details, see the project's GitHub repository.

3. TLDR Pages

TLDR is a collection of simplified and community-driven man pages. Unlike man pages, TLDR pages focus only on practical examples. TLDR can be installed using npm, so you need NodeJS installed on your machine for this to work.

To install NodeJS in Linux, refer to the following guide.

After installing npm, run the following command to install tldr:
$ npm install -g tldr
TLDR clients are also available for Android. Install any one of the below apps from Google Play Store to access the TLDR pages from your Android devices. There are many TLDR clients available. You can view them all here

3.1. Usage

To display the documentation of any command, for example find, run:

$ tldr find
You will see the list of available examples of the find command. ...To view the list of all commands in the cache, run:
$ tldr --list-all
...To update the local cache, run:
$ tldr -u
Or,
$ tldr --update
To display the help section, run:
$ tldr -h
For more details, refer to the TLDR GitHub page.

4. TLDR++

Tldr++ is yet another client to access the TLDR pages. Unlike the other Tldr clients, it is fully interactive.

5. Tealdeer

Tealdeer is a fast, unofficial tldr client that allows you to access and display Linux command cheatsheets in your Terminal. The developer of Tealdeer claims it is very fast compared to the official tldr client and other community-supported tldr clients.
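
If you have a Rust toolchain, the usual route is cargo (a hedged sketch; Tealdeer's binary is installed as tldr, and prebuilt packages also exist in several distribution repositories):

$ cargo install tealdeer
$ tldr --update
$ tldr tar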

6. tldr.jsx web client

The tldr.jsx is a reactive web client for tldr-pages. If you don't want to install anything on your system, you can try this client online from any Internet-enabled device such as a desktop, laptop, tablet, or smartphone. All you need is a web browser. Open a web browser and navigate to the https://tldr.ostera.io/ page.

7. Navi interactive commandline cheatsheet tool

Navi is an interactive commandline cheatsheet tool written in Rust. Just like Bro pages, Cheat, and the Tldr tools, Navi provides a list of examples for a given command, skipping all other comprehensive text parts. For more details, check the following link.

8. Manly

I came across this utility recently and I thought that it would be a worthy addition to this list. Say hello to Manly, a complement to man pages. Manly is written in Python, so you can install it using the Pip package manager.
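
A minimal install and first query might look like this (assuming the package name manly on PyPI; the flags are just an example):

$ pip install manly
$ manly grep -r -i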

Manly is slightly different from the utilities above. It will not display any examples, and you also need to mention the flags or options along with the command. Say, for example, the following won't work:

$ manly dpkg
But, if you mention any flag/option of a command, you will get a small description of the given command and its options.
$ manly dpkg -i -R
To view the help section, run:
$ manly --help
And also take a look at the project's GitHub page.

[May 08, 2021] How To Clone Your Linux Install With Clonezilla

Notable quotes:
"... Note: Clonezilla ISO is under 300 MiB in size. As a result, any flash drive with at least 512 MiB of space will work. ..."
May 08, 2021 | www.addictivetips.com

... one of the most popular (and reliable) ways to back up your data is with Clonezilla. This tool lets you clone your Linux install. With it, you can load a live USB and easily "clone" hard drives, operating systems and more.

Downloading Clonezilla

Clonezilla is available only as a live operating system. There are multiple versions of the live disk. That being said, we recommend just downloading the ISO file. The stable version of the software is available at Clonezilla.org. On the download page, select your CPU architecture from the dropdown menu (32 bit or 64 bit).

Then, click "filetype" and click ISO. After all of that, click the download button.

Making The Live Disk

Regardless of the operating system, the fastest and easiest way to make a Linux live-disk is with the Etcher USB imaging tool. Head over to this page to download it. Follow the instructions on the page, as it will explain the three-step process it takes to make a live disk.

Note: Clonezilla ISO is under 300 MiB in size. As a result, any flash drive with at least 512 MiB of space will work.
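
If you prefer the command line to Etcher, writing the ISO straight to the flash drive with dd also works (a hedged alternative; clonezilla-live.iso stands in for whatever file you downloaded, /dev/sdX is a placeholder for your flash drive, and dd will overwrite whatever device you point it at, so double-check with lsblk first):

$ lsblk
$ sudo dd if=clonezilla-live.iso of=/dev/sdX bs=4M status=progress && sync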

Device To Image Cloning

Backing up a Linux installation directly to an image file with Clonezilla is a simple process. To start off, select the "device-image" option in the Clonezilla menu. On the next page, the software gives a whole lot of different ways to create the backup.

The hard drive image can be saved to a Samba server, an SSH server, NFS, etc. If you're savvy with any of these, select it. If you're a beginner, connect a USB hard drive (or mount a second hard drive connected to the PC) and select the "local_dev" option.

Selecting "local_dev" prompts Clonezilla to ask the user to set up a hard drive as the destination for the hard drive menu. Look through the listing and select the hard drive you'd like to use. Additionally, use the menu selector to choose what directory on the drive the hard drive image will save to.

With the storage location set up, the process can begin. Clonezilla asks to run the backup wizard. There are two options: "Beginner" and "Expert". Select "Beginner" to start the process.

On the next page, tell Clonezilla how to save the hard drive. Select "savedisk" to copy the entire hard drive to one file. Select "saveparts" to backup the drive into separate partition images.

Restoring Backup Images

To restore an image, load Clonezilla and select the "device-image" option. Next, select "local_dev". Use the menu to select the hard drive previously used to save the hard drive image. In the directory browser, select the same options you used to create the image.

Clonezilla - Downloads

[May 08, 2021] LFCA- Learn User Account Management Part 5

May 08, 2021 | www.tecmint.com

The /etc/gshadow File

This file contains encrypted or 'shadowed' passwords for group accounts and, for security reasons, cannot be accessed by regular users. It's only readable by the root user and users with sudo privileges.

$ sudo cat /etc/gshadow

tecmint:!::

From the far left, the file contains the following fields: the group name, the encrypted group password (an ! or * means no password is set), a comma-separated list of group administrators, and a comma-separated list of group members.
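
In practice you rarely edit /etc/gshadow by hand; the administrator and member fields are normally maintained with gpasswd from the shadow-utils suite (an illustrative sketch; the user names are made up, and tecmint is the group shown above):

$ sudo gpasswd -A alice tecmint     # make alice an administrator of the tecmint group
$ sudo gpasswd -a bob tecmint       # add bob as an ordinary member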

[May 05, 2021] Machines are expensive

May 05, 2021 | www.unz.com

Mancubus says: May 5, 2021 at 12:54 pm GMT

I keep happening on these mentions of manufacturing jobs succumbing to automation, and I can't think of where these people are getting their information.

I work in manufacturing. Production manufacturing, in fact, involving hundreds, thousands, tens of thousands of parts produced per week. Automation has come a long way, but it also hasn't. A layman might marvel at the technologies while taking a tour of the factory, but upon closer inspection, the returns are greatly diminished in the last two decades. Advances have afforded greater precision, cheaper technologies, but the only reason China is a giant of manufacturing is because labor is cheap. They automate less than Western factories, not more, because humans cost next to nothing, but machines are expensive.

[May 03, 2021] Do You Replace Your Server Or Go To The Cloud- The Answer May Surprise You

May 03, 2021 | www.forbes.com

Is your server or servers getting old? Have you pushed it to the end of its lifespan? Have you reached that stage where it's time to do something about it? Join the crowd. You're now at the decision point that so many other business people are finding themselves at this year. And the decision is this: do you replace that old server with a new server, or do you go to the cloud?

Everyone's talking about the cloud nowadays so you've got to consider it, right? This could be a great new thing for your company! You've been told that the cloud enables companies like yours to be more flexible and save on their IT costs. It allows free and easy access to data for employees from wherever they are, using whatever devices they want to use. Maybe you've seen the recent survey by accounting software maker MYOB that found that small businesses that adopt cloud technologies enjoy higher revenues. Or perhaps you've stumbled on this analysis that said that small businesses are losing money as a result of ineffective IT management that could be much improved by the use of cloud based services. Or the poll of more than 1,200 small businesses by technology reseller CDW which discovered that " cloud users cite cost savings, increased efficiency and greater innovation as key benefits" and that " across all industries, storage and conferencing and collaboration are the top cloud services and applications."

So it's time to chuck that old piece of junk and take your company to the cloud, right? Well just hold on.

There's no question that if you're a startup or a very small company or a company that is virtual or whose employees are distributed around the world, a cloud based environment is the way to go. Or maybe you've got high internal IT costs or require more computing power. But maybe that's not you. Maybe your company sells pharmaceutical supplies, provides landscaping services, fixes roofs, ships industrial cleaning agents, manufactures packaging materials or distributes gaskets. You are not featured in Fast Company and you have not been invited to present at the next Disrupt conference. But you know you represent the very core of small business in America. I know this too. You are just like one of my company's 600 clients. And what are these companies doing this year when it comes time to replace their servers?

These very smart owners and managers of small and medium sized businesses who have existing applications running on old servers are not going to the cloud. Instead, they've been buying new servers.

Wait, buying new servers? What about the cloud?

At no less than six of my clients in the past 90 days it was time to replace servers. They had all waited as long as possible, conserving cash in a slow economy, hoping to get the most out of their existing machines. Sound familiar? But the servers were showing their age, applications were running slower and now as the companies found themselves growing their infrastructure their old machines were reaching their limit. Things were getting to a breaking point, and all six of my clients decided it was time for a change. So they all moved to cloud, right?


Nope. None of them did. None of them chose the cloud. Why? Because all six of these small business owners and managers came to the same conclusion: it was just too expensive. Sorry media. Sorry tech world. But this is the truth. This is what's happening in the world of established companies.

Consider the options. All of my clients evaluated cloud based hosting services from Amazon, Microsoft and Rackspace. They also interviewed a handful of cloud based IT management firms who promised to move their existing applications (Office, accounting, CRM, databases) to their servers and manage them offsite. All of these popular options are viable and make sense, as evidenced by their growth in recent years. But when all the smoke cleared, all of these services came in at about the same price: approximately $100 per month per user. This is what it costs for an existing company to move their existing infrastructure to a cloud based infrastructure in 2013. We've got the proposals and we've done the analysis.

You're going through the same thought process, so now put yourself in their shoes. Suppose you have maybe 20 people in your company who need computer access. Suppose you are satisfied with your existing applications and don't want to go through the agony and enormous expense of migrating to a new cloud based application. Suppose you don't employ a full time IT guy, but have a service contract with a reliable local IT firm.

Now do the numbers: $100 per month x 20 users is $2,000 per month, or $24,000 PER YEAR, for a cloud based service. How many servers can you buy for that amount? Imagine putting that proposal out to an experienced, battle-hardened, profit generating small business owner who, like all the smart business owners I know, looks hard at the return on investment before parting with their cash.

For all six of these clients the decision was a no-brainer: they all bought new servers and had their IT guy install them. But can't the cloud bring down their IT costs? All six of these guys use their IT guy for maybe half a day a month to support their servers (sure he could be doing more, but small business owners always try to get away with the minimum). His rate is $150 per hour. That's still way below using a cloud service.

No one could make the numbers work. No one could justify the return on investment. The cloud, at least for established businesses who don't want to change their existing applications, is still just too expensive.

Please know that these companies are, in fact, using some cloud-based applications. They all have virtual private networks set up and their people access their systems over the cloud using remote desktop technologies. Like the respondents in the above surveys, they subscribe to online backup services, share files on DropBox and Microsoft's file storage, make their calls over Skype, take advantage of Gmail and use collaboration tools like Google Docs or Box. Many of their employees have iPhones and Droids and like to use mobile apps which rely on cloud data to make them more productive. These applications didn't exist a few years ago and their growth and benefits cannot be denied.

Paul-Henri Ferrand, President of Dell North America, doesn't see this trend continuing. "Many smaller but growing businesses are looking and/or moving to the cloud," he told me. "There will be some (small businesses) that will continue to buy hardware but I see the trend is clearly toward the cloud. As more business applications become more available for the cloud, the more likely the trend will continue."

He's right. Over the next few years the costs will come down. Your beloved internal application will become out of date and your only option will be to migrate to a cloud based application (hopefully provided by the same vendor to ease the transition). Your technology partners will help you and the process will be easier, and less expensive than today. But for now, you may find it makes more sense to just buy a new server. It's OK. You're not alone.

Besides Forbes, Gene Marks writes weekly for The New York Times and Inc.com .


[Apr 29, 2021] Linux tips for using GNU Screen - Opensource.com

Apr 29, 2021 | opensource.com

Using GNU Screen

GNU Screen's basic usage is simple. Launch it with the screen command, and you're placed into the zeroeth window in a Screen session. You may hardly notice anything's changed until you decide you need a new prompt.

When one terminal window is occupied with an activity (for instance, you've launched a text editor like Vim or Jove, or you're processing video or audio, or running a batch job), you can just open a new one. To open a new window, press Ctrl+A, release, and then press c. This creates a new window on top of your existing window.

You'll know you're in a new window because your terminal appears to be clear of anything aside from its default prompt. Your other terminal still exists, of course; it's just hiding behind the new one. To traverse through your open windows, press Ctrl+A, release, and then n for next or p for previous. With just two windows open, n and p functionally do the same thing, but you can always open more windows (Ctrl+A then c) and walk through them.
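
A short illustrative session (the session name and the workload are made up) that ties these pieces together, using a named session so it's easier to find later:

$ screen -S build          # start a named Screen session
$ make                     # run something long-lived inside it
  (press Ctrl+A, then d, to detach and leave it running)
$ screen -ls               # list the sessions that are still alive
$ screen -r build          # reattach to the named session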

Split screen

GNU Screen's default behavior is more like a mobile device screen than a desktop: you can only see one window at a time. If you're using GNU Screen because you love to multitask, being able to focus on only one window may seem like a step backward. Luckily, GNU Screen lets you split your terminal into windows within windows.

To create a horizontal split, press Ctrl+A and then s. This places one window above another, just like window panes. The split space is, however, left unpurposed until you tell it what to display. So after creating a split, you can move into the split pane with Ctrl+A and then Tab. Once there, use Ctrl+A then n to navigate through all your available windows until the content you want to be displayed is in the split pane.

You can also create vertical splits with Ctrl+A then | (that's a pipe character, or the Shift option of the \ key on most keyboards).

[Apr 22, 2021] TLDR pages- Simplified Alternative To Linux Man Pages That You'll Love

Images removed. See the original for full text.
Apr 22, 2021 | fossbytes.com

The GitHub page of TLDR pages for Linux/Unix describes it as a collection of simplified and community-driven man pages. It's an effort to make the experience of using man pages simpler with the help of practical examples. For those who don't know, TLDR is taken from the common internet slang Too Long Didn't Read.

In case you wish to compare, let's take the example of tar command. The usual man page extends over 1,000 lines. It's an archiving utility that's often combined with a compression method like bzip or gzip. Take a look at its man page:

On the other hand, TLDR pages lets you simply take a glance at the command and see how it works. Tar's TLDR page simply looks like this and comes with some handy examples of the most common tasks you can complete with this utility:

Let's take another example and show you what TLDR pages has to offer when it comes to apt:

Having shown you how TLDR works and makes your life easier, let's tell you how to install it on your Linux-based operating system.

How to install and use TLDR pages on Linux?

The most mature TLDR client is based on Node.js and you can install it easily using the NPM package manager. In case Node and NPM are not available on your system, run the following commands:

sudo apt-get install nodejs
sudo apt-get install npm

In case you're using an OS other than Debian, Ubuntu, or Ubuntu's derivatives, you can use yum, dnf, or pacman package manager as per your convenience.
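
With Node and NPM in place, installing and trying the client is just (tar used purely as an example command):

sudo npm install -g tldr
tldr tar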

[Apr 22, 2021] Alternatives of man in Linux command line

Images removed. See the original for full text.
Jan 01, 2020 | www.chuanjin.me

When we need help on the Linux command line, man is usually the first friend we check for more information. But it became my second line of support after I met other alternatives, e.g. tldr, cheat and eg.

tldr

tldr stands for "too long; didn't read"; it is a collection of simplified and community-driven man pages. Maybe we forget the arguments to a command, or are just not patient enough to read the long man document; here tldr comes in, providing concise information with examples. I even contributed a couple of lines of code myself to help a little bit with the project on GitHub. It is very easy to install: npm install -g tldr, and there are many clients available to pick from to access the tldr pages. E.g., install the Python client with pip install tldr.

To display help information, run tldr -h or tldr tldr .

Take curl as an example

tldr++

tldr++ is an interactive tldr client written in Go; I just borrowed the GIF from its official site.

cheat

Similarly, cheat allows you to create and view interactive cheatsheets on the command-line. It was designed to help remind *nix system administrators of options for commands that they use frequently, but not frequently enough to remember. It is written in Golang, so just download the binary and add it to your PATH.

eg

eg provides useful examples with explanations on the command line.
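
Like tldr and cheat, eg is a one-line install followed by a one-line query (find is used just as an example; the install command matches the one shown in the Eg article further down this page):

$ pip install eg
$ eg find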

So I consult tldr , cheat or eg before I ask man and Google.

[Apr 22, 2021] 5 modern alternatives to essential Linux command-line tools by Ricardo Gerardi

While some of those tools do provide additional functionality, in many cases sticking to the classic tools makes more sense. So user beware.
Jun 25, 2020 | opensource.com

In our daily use of Linux/Unix systems, we use many command-line tools to complete our work and to understand and manage our systems -- tools like du to monitor disk utilization and top to show system resources. Some of these tools have existed for a long time. For example, top was first released in 1984, while du 's first release dates to 1971.

Over the years, these tools have been modernized and ported to different systems, but, in general, they still follow their original idea, look, and feel.

These are great tools and essential to many system administrators' workflows. However, in recent years, the open source community has developed alternative tools that offer additional benefits. Some are just eye candy, but others greatly improve usability, making them a great choice to use on modern systems. These include the following five alternatives to the standard Linux command-line tools.

1. ncdu as a replacement for du

The NCurses Disk Usage ( ncdu ) tool provides similar results to du but in a curses-based, interactive interface that focuses on the directories that consume most of your disk space. ncdu spends some time analyzing the disk, then displays the results sorted by your most used directories or files, like this:

ncdu 1.14.2 ~ Use the arrow keys to navigate, press ? for help
--- /home/rgerardi ------------------------------------------------------------
96.7 GiB [##########] /libvirt
33.9 GiB [### ] /.crc
...
Total disk usage: 159.4 GiB Apparent size: 280.8 GiB Items: 561540

Navigate to each entry by using the arrow keys. If you press Enter on a directory entry, ncdu displays the contents of that directory:

--- /home/rgerardi/libvirt ----------------------------------------------------
/..
91.3 GiB [##########] /images
5.3 GiB [ ] /media

You can use that to drill down into the directories and find which files are consuming the most disk space. Return to the previous directory by using the Left arrow key. By default, you can delete files with ncdu by pressing the d key, and it asks for confirmation before deleting a file. If you want to disable this behavior to prevent accidents, use the -r option for read-only access: ncdu -r .

ncdu is available for many platforms and Linux distributions. For example, you can use dnf to install it on Fedora directly from the official repositories:

$ sudo dnf install ncdu

You can find more information about this tool on the ncdu web page .

2. htop as a replacement for top

htop is an interactive process viewer similar to top but that provides a nicer user experience out of the box. By default, htop displays the same metrics as top in a pleasant and colorful display.

By default, htop looks like this:

htop_small.png

(Ricardo Gerardi, CC BY-SA 4.0 )

In contrast to default top :

top_small.png

(Ricardo Gerardi, CC BY-SA 4.0 )

In addition, htop provides system overview information at the top and a command bar at the bottom to trigger commands using the function keys, and you can customize it by pressing F2 to enter the setup screen. In setup, you can change its colors, add or remove metrics, or change display options for the overview bar.

While you can configure recent versions of top to achieve similar results, htop provides saner default configurations, which makes it a nice and easy-to-use process viewer.
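
htop is packaged for most distributions; on Fedora, for example, it installs from the standard repositories just like ncdu above:

$ sudo dnf install htop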

To learn more about this project, check the htop home page .

3. tldr as a replacement for man

The tldr command-line tool displays simplified command utilization information, mostly including examples. It works as a client for the community tldr pages project .

This tool is not a replacement for man . The man pages are still the canonical and complete source of information for many tools. However, in some cases, man is too much. Sometimes you don't need all that information about a command; you're just trying to remember the basic options. For example, the man page for the curl command has almost 3,000 lines. In contrast, the tldr for curl is 40 lines long and looks like this:

$ tldr curl

# curl
Transfers data from or to a server.
Supports most protocols, including HTTP, FTP, and POP3.
More information: <https://curl.haxx.se>.

- Download the contents of an URL to a file:

curl http://example.com -o filename

- Download a file, saving the output under the filename indicated by the URL:

curl -O http://example.com/filename

- Download a file, following [L]ocation redirects, and automatically [C]ontinuing (resuming) a previous file transfer:

curl -O -L -C - http://example.com/filename

- Send form-encoded data (POST request of type `application/x-www-form-urlencoded`):

curl -d 'name=bob' http://example.com/form

- Send a request with an extra header, using a custom HTTP method:

curl -H 'X-My-Header: 123' -X PUT http://example.com

- Send data in JSON format, specifying the appropriate content-type header:

curl -d '{"name":"bob"}' -H 'Content-Type: application/json' http://example.com/users/1234

... TRUNCATED OUTPUT

TLDR stands for "too long; didn't read," which is internet slang for a summary of long text. The name is appropriate for this tool because man pages, while useful, are sometimes just too long.

In Fedora, the tldr client was written in Python. You can install it using dnf . For other client options, consult the tldr pages project .

In general, the tldr tool requires access to the internet to consult the tldr pages. The Python client in Fedora allows you to download and cache these pages for offline access.

For more information on tldr , you can use tldr tldr .

4. jq as a replacement for sed/grep for JSON

jq is a command-line JSON processor. It's like sed or grep but specifically designed to deal with JSON data. If you're a developer or system administrator who uses JSON in your daily tasks, this is an essential tool in your toolbox.

The main benefit of jq over generic text-processing tools like grep and sed is that it understands the JSON data structure, allowing you to create complex queries with a single expression.

To illustrate, imagine you're trying to find the name of the containers in this JSON file:

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "labels": {
      "app": "myapp"
    },
    "name": "myapp",
    "namespace": "project1"
  },
  "spec": {
    "containers": [
      {
        "command": [
          "sleep",
          "3000"
        ],
        "image": "busybox",
        "imagePullPolicy": "IfNotPresent",
        "name": "busybox"
      },
      {
        "name": "nginx",
        "image": "nginx",
        "resources": {},
        "imagePullPolicy": "IfNotPresent"
      }
    ],
    "restartPolicy": "Never"
  }
}

If you try to grep directly for name , this is the result:

$ grep name k8s-pod.json
    "name": "myapp",
    "namespace": "project1"
        "name": "busybox"
        "name": "nginx",

grep returned all lines that contain the word name . You can add a few more options to grep to restrict it and, with some regular-expression manipulation, you can find the names of the containers. To obtain the result you want with jq , use an expression that simulates navigating down the data structure, like this:

$ jq '.spec.containers[].name' k8s-pod.json
"busybox"
"nginx"

This command gives you the name of both containers. If you're looking for only the name of the second container, add the array element index to the expression:

$ jq '.spec.containers[1].name' k8s-pod.json
"nginx"

Because jq is aware of the data structure, it provides the same results even if the file format changes slightly. grep and sed may provide different results with small changes to the format.

jq has many features, and covering them all would require another article. For more information, consult the jq project page , the man pages, or tldr jq .

5. fd as a replacement for find

fd is a simple and fast alternative to the find command. It does not aim to replace the complete functionality find provides; instead, it provides some sane defaults that help a lot in certain scenarios.

For example, when searching for source-code files in a directory that contains a Git repository, fd automatically excludes hidden files and directories, including the .git directory, as well as ignoring patterns from the .gitignore file. In general, it provides faster searches with more relevant results on the first try.

By default, fd runs a case-insensitive pattern search in the current directory with colored output. The same search using find requires you to provide additional command-line parameters. For example, to search all markdown files ( .md or .MD ) in the current directory, the find command is this:

$ find . -iname "*.md"

Here is the same search with fd :

$ fd .md

In some cases, fd requires additional options; for example, if you want to include hidden files and directories, you must use the option -H , while this is not required in find .

fd is available for many Linux distributions. Install it in Fedora using the standard repositories:

$ sudo dnf install fd-find

For more information, consult the fd GitHub repository .

... ... ...

S Arun-Kumar on 25 Jun 2020

I use "meld" in place of "diff"

Ricardo Gerardi on 25 Jun 2020

Thanks! I never used "meld". I'll give it a try.

Keith Peters on 25 Jun 2020

exa for ls

Ricardo Gerardi on 25 Jun 2020

Thanks. I'll give it a try.

brick on 27 Jun 2020

Another (fancy looking) alternative for ls is lsd.

Miguel Perez on 25 Jun 2020

Bat instead of cat, ripgrep instead of grep, httpie instead of curl, bashtop instead of htop, autojump instead of cd...

Drto on 25 Jun 2020

ack instead of grep for files. Million times faster.

Gordon Harris on 25 Jun 2020

The yq command line utility is useful too. It's just like jq, except for yaml files and has the ability to convert yaml into json.

Matt howard on 26 Jun 2020

Glances is a great top replacement too

Paul M on 26 Jun 2020

Try "mtr" instead of traceroute
Try "hping2" instead of ping
Try "pigz" instead of gzip

jmtd on 28 Jun 2020

I've never used ncdu, but I recommend "duc" as a du replacement https://github.com/zevv/duc/

You run a separate "duc index" command to capture disk space usage in a database file and then can explore the data very quickly with "duc ui" ncurses ui. There's also GUI and web front-ends that give you a nice graphical pie chart interface.

In my experience the index stage is faster than plain du. You can choose to re-index only certain folders if you want to update some data quickly without rescanning everything.

wurn on 29 Jun 2020

Imho, jq uses a syntax that's ok for simple queries but quickly becomes horrible when you need more complex queries. Pjy is a sensible replacement for jq, having an (improved) python syntax which is familiar to many people and much more readable: https://github.com/hydrargyrum/pjy

Jack Orenstein on 29 Jun 2020

Also along the lines of command-line alternatives, take a look at marcel, which is a modern shell: https://marceltheshell.org . The basic idea is to pipe Python values instead of strings, between commands. It integrates smoothly with host commands (and, presumably, the alternatives discussed here), and also integrates remote access and database access.

Ricardo Fraile on 05 Jul 2020

"tuptime" instead of "uptime".
It tracks the history of the system, not only the current one.

The Cube on 07 Jul 2020

One downside of all of this is that there are even more things to remember. I learned find, diff, cat, vi (and ed), grep and a few others starting in 1976 on 6th edition. They have been enhanced some, over the years (for which I use man when I need to remember), and learned top and other things as I needed them, but things I did back then still work great now. KISS is still a "thing". Especially in scripts one is going to use on a wide variety of distributions or for a long time. These kind of tweaks are fun and all, but add complexity and reduce one's inter-system mobility. (And don't get me started on systemd 8P).

[Apr 22, 2021] replace(1) - Linux manual page

Apr 22, 2021 | www.man7.org
REPLACE(1)               MariaDB Database System              REPLACE(1)
NAME top
       replace - a string-replacement utility
SYNOPSIS top
       replace arguments
DESCRIPTION top
       The replace utility program changes strings in place in files or
       on the standard input.

       Invoke replace in one of the following ways:

           shell> replace from to [from to] ... -- file_name [file_name] ...
           shell> replace from to [from to] ... < file_name

       from represents a string to look for and to represents its
       replacement. There can be one or more pairs of strings.

       Use the -- option to indicate where the string-replacement list
       ends and the file names begin. In this case, any file named on
       the command line is modified in place, so you may want to make a
       copy of the original before converting it.  replace prints a
       message indicating which of the input files it actually modifies.

       If the -- option is not given, replace reads the standard input
       and writes to the standard output.

       replace uses a finite state machine to match longer strings
       first. It can be used to swap strings. For example, the following
       command swaps a and b in the given files, file1 and file2:

           shell> replace a b b a -- file1 file2 ...

       The replace program is used by msql2mysql. See msql2mysql(1).

       replace supports the following options.

       •   -?, -I

           Display a help message and exit.

       •   -#debug_options

           Enable debugging.

       •   -s

           Silent mode. Print less information what the program does.

       •   -v

           Verbose mode. Print more information about what the program
           does.

       •   -V

           Display version information and exit.
COPYRIGHT top
       Copyright 2007-2008 MySQL AB, 2008-2010 Sun Microsystems, Inc.,
       2010-2015 MariaDB Foundation

       This documentation is free software; you can redistribute it
       and/or modify it only under the terms of the GNU General Public
       License as published by the Free Software Foundation; version 2
       of the License.

       This documentation is distributed in the hope that it will be
       useful, but WITHOUT ANY WARRANTY; without even the implied
       warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
       See the GNU General Public License for more details.

       You should have received a copy of the GNU General Public License
       along with the program; if not, write to the Free Software
       Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
       02110-1335 USA or see http://www.gnu.org/licenses/.
SEE ALSO top
       For more information, please refer to the MariaDB Knowledge Base,
       available online at https://mariadb.com/kb/
AUTHOR top
       MariaDB Foundation (http://www.mariadb.org/).
COLOPHON top
       This page is part of the MariaDB (MariaDB database server)
       project.  Information about the project can be found at 
       ⟨http://mariadb.org/⟩.  If you have a bug report for this manual
       page, see ⟨https://mariadb.com/kb/en/mariadb/reporting-bugs/⟩.
       This page was obtained from the project's upstream Git repository
       ⟨https://github.com/MariaDB/server⟩ on 2021-04-01.  (At that
       time, the date of the most recent commit that was found in the
       repository was 2020-11-03.)  If you discover any rendering
       problems in this HTML version of the page, or you believe there
       is a better or more up-to-date source for the page, or you have
       corrections or improvements to the information in this COLOPHON
       (which is not part o
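
A couple of quick usage sketches based on the description above (not from the man page itself; my.cnf and input.txt are hypothetical file names):

    shell> cp my.cnf my.cnf.bak                      # keep a copy, since -- edits files in place
    shell> replace old_datadir new_datadir -- my.cnf
    shell> replace foo bar < input.txt > output.txt  # filter mode: reads stdin, writes stdout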

[Apr 19, 2021] How To Display Linux Commands Cheatsheets Using Eg

Apr 19, 2021 | ostechnix.com

Eg is a free, open source program written in Python, and the code is freely available on GitHub. For those wondering, eg comes from the Latin phrase "Exempli Gratia", which literally means "for the sake of example" and is better known by its abbreviation, e.g.

Install Eg in Linux

Eg can be installed using the Pip package manager. If Pip is not available on your system, install it first.

After installing Pip, run the following command to install eg on your Linux system:

$ pip install eg
Display Linux commands cheatsheets using Eg

Let us start by displaying the help section of eg program. To do so, run eg without any options:

$ eg

Sample output:

usage: eg [-h] [-v] [-f CONFIG_FILE] [-e] [--examples-dir EXAMPLES_DIR]
          [-c CUSTOM_DIR] [-p PAGER_CMD] [-l] [--color] [-s] [--no-color]
          [program]

eg provides examples of common command usage.

positional arguments:
  program               The program for which to display examples.

optional arguments:
  -h, --help            show this help message and exit
  -v, --version         Display version information about eg
  -f CONFIG_FILE, --config-file CONFIG_FILE
                        Path to the .egrc file, if it is not in the default
                        location.
  -e, --edit            Edit the custom examples for the given command. If
                        editor-cmd is not set in your .egrc and $VISUAL and
                        $EDITOR are not set, prints a message and does
                        nothing.
  --examples-dir EXAMPLES_DIR
                        The location to the examples/ dir that ships with eg
  -c CUSTOM_DIR, --custom-dir CUSTOM_DIR
                        Path to a directory containing user-defined examples.
  -p PAGER_CMD, --pager-cmd PAGER_CMD
                        String literal that will be invoked to page output.
  -l, --list            Show all the programs with eg entries.
  --color               Colorize output.
  -s, --squeeze         Show fewer blank lines in output.
  --no-color            Do not colorize output.

You can also display the help section using this command:

$ eg --help

Now let us see how to view example command usage.

To display the cheatsheet of a Linux command, for example grep, run:

$ eg grep

Sample output:

grep
 print all lines containing foo in input.txt
 grep "foo" input.txt
 print all lines matching the regex "^start" in input.txt
 grep -e "^start" input.txt
 print all lines containing bar by recursively searching a directory
 grep -r "bar" directory
 print all lines containing bar ignoring case
 grep -i "bAr" input.txt
 print 3 lines of context before and after each line matching "foo"
 grep -C 3 "foo" input.txt
 Basic Usage
 Search each line in input_file for a match against pattern and print
 matching lines:
 grep "<pattern>" <input_file>
[...]
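
A few more usage sketches built from the flags shown in the help output above (assuming eg is installed as described):

$ eg --list                     # show all programs that have eg entries
$ eg -p 'less -R' --color grep  # page the grep cheatsheet through less, keeping color
$ eg --squeeze find             # show the find cheatsheet with fewer blank lines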

[Apr 19, 2021] IBM returns to sales growth after a year of declines on cloud strength

They are probably mistaken about the one trillion dollar market opportunity.
Apr 19, 2021 | finance.yahoo.com

The 109-year-old firm is preparing to split itself into two public companies, with the namesake firm narrowing its focus on the so-called hybrid cloud, where it sees a $1 trillion market opportunity.

[Apr 19, 2021] How to Install and Use locate Command in Linux

Apr 19, 2021 | www.linuxshelltips.com

Before using the locate command, you should check whether it is installed on your machine. The locate command comes with the GNU findutils or GNU mlocate packages. Simply run the following command to check if locate is installed.

$ which locate

If locate is not installed by default, you can run one of the following commands to install it.

$ sudo yum install mlocate     [On CentOS/RHEL/Fedora]
$ sudo apt install mlocate     [On Debian/Ubuntu/Mint]

Once the installation is complete, run the following command to update the locate database. This prebuilt database is why results come back so quickly when you use the locate command to find files in Linux.

$ sudo updatedb

The mlocate db file is located at /var/lib/mlocate/mlocate.db .

$ ls -l /var/lib/mlocate/mlocate.db

A good place to start getting to know the locate command is its man page.

$ man locate
How to Use locate Command to Find Files Faster in Linux

To search for any file, simply pass the file name as an argument to the locate command.

$ locate .bashrc

If you wish to see how many items matched instead of printing the file locations, pass the -c flag.

$ sudo locate -c .bashrc

By default, the locate command is case sensitive. You can make the search case insensitive by using the -i flag.

$ sudo locate -i file1.sh

You can limit the number of search results using the -n flag.

$ sudo locate -n 3 .bashrc

When you delete a file and do not update the mlocate database, the deleted file will still show up in the output. You have two options: either update the mlocate database periodically, or use the -e flag to skip deleted files.

$ locate -i -e file1.sh

You can check the statistics of the mlocate database by running the following command.

$ locate -S

If your db file is in a different location, use the -d flag followed by the mlocate db path and the filename to search for.

$ locate -d [ DB PATH ] [ FILENAME ]
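
For example, here is a small sketch of building and querying a custom database for just your home directory (this uses mlocate's updatedb options; /tmp/home.db is a hypothetical path):

$ updatedb -l 0 -o /tmp/home.db -U $HOME
$ locate -d /tmp/home.db -i '*.conf'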

If you encounter errors, you can suppress the error messages by running the command with the -q flag.

$ locate -q [ FILENAME ]

That's it for this article. We have shown you all the basic operations you can do with the locate command. It will be a handy tool when working on the command line.

[Apr 13, 2021] West Virginia will now give you $12,000 to move to its state and work remotely

Apr 13, 2021 | finance.yahoo.com


Brian Sozzi · Editor-at-Large Mon, April 12, 2021, 12:54 PM

West Virginia is opening up its arms -- and importantly its wallet -- to lure in those likely to be working from home for some time after the COVID-19 pandemic .

The state announced on Monday it would give people $12,000 cash with no strings attached to move to its confines. Also included is one year of free recreation at the state's various public lands, which it values at $2,500. Once all the particulars of the plan are added up, West Virginia says the total value to a person is $20,000.

The initiative is being made possible after a $25 million donation from Intuit's executive chairman (and former long-time CEO) Brad D. Smith and his wife Alys.

"I have the opportunity to spend a lot of time speaking with my peers in the industry in Silicon Valley as well as across the world. Most are looking at a hybrid model, but many of them -- if not all of them -- have expanded the percentage of their workforce that can work full-time remotely," Smith told Yahoo Finance Live about the plan.

Smith earned his bachelor's degree in business administration from Marshall University in West Virginia.


Added Smith, "I think we have seen the pendulum swing all the way to the right when everyone had to come to the office and then all the way to left when everyone was forced to shelter in place. And somewhere in the middle, we'll all be experimenting in the next year or so to see where is that sweet-spot. But I do know employees now have gotten a taste for what it's like to be able to live in a new area with less commute time, less access to outdoor amenities like West Virginia has to offer. I think that's absolutely going to become part of the consideration set in this war for talent."

That war for talent post-pandemic could be about to heat up within corporate America, and perhaps spur states to follow West Virginia's lead.

The likes of Facebook, Twitter and Apple are among those big companies poised to have hybrid workforces for years after the pandemic. That has some employees considering moves to lower cost states and those that offer better overall qualities of life.

A recent study out of Gartner found that 82% of respondents intend to permit remote working some of the time as employees return to the workplace. Meanwhile, 47% plan to let employees work remotely permanently.

Brian Sozzi is an editor-at-large and anchor at Yahoo Finance . Follow Sozzi on Twitter @BrianSozzi and on LinkedIn .

[Apr 10, 2021] How to Use the xargs Command in Linux

Apr 10, 2021 | www.maketecheasier.com

... ... ...

Cut/Copy Operations

Xargs, along with the find command, can also be used to copy or move a set of files from one directory to another. For example, to move all the text files that are more than 10 minutes old from the current directory to its parent directory, use the following command:

find . -name "*.txt" -mmin +10 | xargs -n1 -I '{}' mv '{}' ../

The -I command line option tells xargs to define a replace-string, which gets replaced with the names read from the output of the find command. Here the replace-string is {}, but it could be anything. For example, you can use "file" as a replace-string.

find . -name "*.txt" -mmin 10 | xargs -n1 -I 'file' mv 'file' ./practice
How to Tell xargs When to Quit

Suppose you want to list the details of all the .txt files present in the current directory. As already explained, it can be easily done using the following command:

find . -name "*.txt" | xargs ls -l

But there is one problem: the xargs command will execute the ls command even if the find command fails to find any .txt file.


Even when there are no .txt files in the directory, xargs still executes the ls command. To change this behavior, use the -r command line option:

find . -name "*.txt" | xargs -r ls -l

[Apr 01, 2021] How to use range and sequence expression in bash by Dan Nanni

Mar 29, 2021 | www.xmodulo.com

When you are writing a bash script, there are situations where you need to generate a sequence of numbers or strings . One common use of such sequence data is for loop iteration. When you iterate over a range of numbers, the range may be defined in many different ways (e.g., [0, 1, 2,..., 99, 100], [50, 55, 60,..., 75, 80], [10, 9, 8,..., 1, 0], etc). Loop iteration may not be just over a range of numbers. You may need to iterate over a sequence of strings with particular patterns (e.g., incrementing filenames; img001.jpg, img002.jpg, img003.jpg). For this type of loop control, you need to be able to generate a sequence of numbers and/or strings flexibly.

While you can use a dedicated tool like seq to generate a range of numbers, it is really not necessary to add such an external dependency to your bash script when bash itself provides a powerful built-in range feature called brace expansion. In this tutorial, let's find out how to generate a sequence of data in bash using brace expansion, along with some useful examples.

Brace Expansion

Bash's built-in range function is realized by so-called brace expansion . In a nutshell, brace expansion allows you to generate a sequence of strings based on supplied string and numeric input data. The syntax of brace expansion is the following.

{<string1>,<string2>,...,<stringN>}
{<start-number>..<end-number>}
{<start-number>..<end-number>..<increment>}
<prefix-string>{......}
{......}<suffix-string>
<prefix-string>{......}<suffix-string>

All these sequence expressions are iterable, meaning you can use them for while/for loops . In the rest of the tutorial, let's go over each of these expressions to clarify their use cases.


Use Case #1: List a Sequence of Strings

The first use case of brace expansion is a simple string list, which is a comma-separated list of string literals within the braces. Here we are not generating a sequence of data, but simply listing a pre-defined sequence of string data.

{<string1>,<string2>,...,<stringN>}

You can use this brace expansion to iterate over the string list as follows.

for fruit in {apple,orange,lemon}; do
    echo $fruit
done
apple
orange
lemon

This expression is also useful to invoke a particular command multiple times with different parameters.

For example, you can create multiple subdirectories in one shot with:

$ mkdir -p /home/xmodulo/users/{dan,john,alex,michael,emma}

To create multiple empty files:

$ touch /tmp/{1,2,3,4}.log
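
Along the same lines, an empty element in the list lets you duplicate a path in place, which is a handy trick for quick backups (a small sketch of standard bash behavior):

$ cp /etc/fstab{,.bak}    # expands to: cp /etc/fstab /etc/fstab.bak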
Use Case #2: Define a Range of Numbers


The most common use case of brace expansion is to define a range of numbers for loop iteration. For that, you can use the following expressions, where you specify the start/end of the range, as well as an optional increment value.

{<start-number>..<end-number>}
{<start-number>..<end-number>..<increment>}

To define a sequence of integers between 10 and 20:

echo {10..20}
10 11 12 13 14 15 16 17 18 19 20

You can easily integrate this brace expansion in a loop:

for num in {10..20}; do
    echo $num
done

To generate a sequence of numbers with an increment of 2 between 0 and 20:

echo {0..20..2}
0 2 4 6 8 10 12 14 16 18 20

You can generate a sequence of decrementing numbers as well:

echo {20..10}
20 19 18 17 16 15 14 13 12 11 10
echo {20..10..-2}
20 18 16 14 12 10

You can also pad the numbers with leading zeros, in case you need to use the same number of digits. For example:

echo {00..20..2}
00 02 04 06 08 10 12 14 16 18 20
Use Case #3: Generate a Sequence of Characters


Brace expansion can be used to generate not just a sequence of numbers, but also a sequence of characters.

{<start-character>..<end-character>}

To generate a sequence of alphabet characters between 'd' and 'p':

echo {d..p}
d e f g h i j k l m n o p

You can generate a sequence of upper-case alphabets as well.

for char1 in {A..B}; do
    for char2 in {A..B}; do
        echo "${char1}${char2}"
    done
done
AA
AB
BA
BB
Use Case #4: Generate a Sequence of Strings with Prefix/Suffix

It's possible to add a prefix and/or a suffix to a given brace expression as follows.

<prefix-string>{......}
{......}<suffix-string>
<prefix-string>{......}<suffix-string>

Using this feature, you can easily generate a list of sequentially numbered filenames:

# create incrementing filenames
for filename in img_{00..5}.jpg; do
    echo $filename
done
img_00.jpg
img_01.jpg
img_02.jpg
img_03.jpg
img_04.jpg
img_05.jpg
Use Case #5: Combine Multiple Brace Expansions


Finally, it's possible to combine multiple brace expansions, in which case the combined expressions will generate all possible combinations of sequence data produced by each expression.

For example, we have the following script that prints all possible combinations of two-character alphabet strings using double-loop iteration.

for char1 in {A..Z}; do
    for char2 in {A..Z}; do
        echo "${char1}${char2}"
    done
done

By combining two brace expansions, the following single loop can produce the same output as above.

for str in {A..Z}{A..Z}; do
    echo $str
done
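
Combined expansions also work directly as command arguments, not just in loops. For instance, a single mkdir call can create a whole directory tree (a small sketch):

$ mkdir -p project/{src,test}/module{1..3}
$ ls project/src
module1  module2  module3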
Conclusion

In this tutorial, I described bash's built-in mechanism called brace expansion, which allows you to easily generate a sequence of arbitrary strings in a single command line. Brace expansion is useful not just in bash scripts, but also in your interactive command line environment (e.g., when you need to run the same command multiple times with different arguments). If you know any useful brace expansion tips and use cases, feel free to share them in the comments.

If you find this tutorial helpful, I recommend you check out the series of bash shell scripting tutorials provided by Xmodulo.

[Mar 30, 2021] How to catch and handle errors in bash

Mar 30, 2021 | www.xmodulo.com

How to catch and handle errors in bash

Last updated on March 28, 2021 by Dan Nanni


In an ideal world, things always work as expected, but you know that's hardly the case. The same goes in the world of bash scripting. Writing a robust, bug-free bash script is always challenging even for a seasoned system administrator. Even if you write a perfect bash script, the script may still go awry due to external factors such as invalid input or network problems. While you cannot prevent all errors in your bash script, at least you should try to handle possible error conditions in a more predictable and controlled fashion.

That is easier said than done, especially since error handling in bash is notoriously difficult. The bash shell does not have any fancy exception handling mechanism like try/catch constructs. Some bash errors may be silently ignored but may have consequences down the line. The bash shell does not even have a proper debugger.

In this tutorial, I'll introduce basic tips to catch and handle errors in bash . Although the presented error handling techniques are not as fancy as those available in other programming languages, hopefully by adopting the practice, you may be able to handle potential bash errors more gracefully.

Bash Error Handling Tip #1: Check the Exit Status


As the first line of defense, it is always recommended to check the exit status of a command, as a non-zero exit status typically indicates some type of error. For example:

if ! some_command; then
    echo "some_command returned an error"
fi

Another (more compact) way to trigger error handling based on an exit status is to use an OR list:

<command1> || <command2>

With this OR statement, <command2> is executed if and only if <command1> returns a non-zero exit status. So you can replace <command2> with your own error handling routine. For example:

error_exit()
{
    echo "Error: $1"
    exit 1
}

run-some-bad-command || error_exit "Some error occurred"

Bash provides a built-in variable called $? , which tells you the exit status of the last executed command. Note that when a bash function is called, $? reads the exit status of the last command called inside the function. Since some non-zero exit codes have special meanings , you can handle them selectively. For example:

# run some command
status=$?
if [ $status -eq 1 ]; then
    echo "General error"
elif [ $status -eq 2 ]; then
    echo "Misuse of shell builtins"
elif [ $status -eq 126 ]; then
    echo "Command invoked cannot execute"
elif [ $status -eq 128 ]; then
    echo "Invalid argument"
fi
Bash Error Handling Tip #2: Exit on Errors in Bash


When you encounter an error in a bash script, by default, bash prints an error message to stderr but continues executing the rest of the script. In fact you see the same behavior in a terminal window; even if you type a wrong command by accident, it will not kill your terminal. You will just see the "command not found" error, but your terminal/bash session will still remain.

This default shell behavior may not be desirable for some bash script. For example, if your script contains a critical code block where no error is allowed, you want your script to exit immediately upon encountering any error inside that code block. To activate this "exit-on-error" behavior in bash, you can use the set command as follows.

set -e
#
# some critical code block where no error is allowed
#
set +e

Once called with -e option, the set command causes the bash shell to exit immediately if any subsequent command exits with a non-zero status (caused by an error condition). The +e option turns the shell back to the default mode. set -e is equivalent to set -o errexit . Likewise, set +e is a shorthand command for set +o errexit .

However, one special error condition not captured by set -e is when an error occurs somewhere inside a pipeline of commands. This is because a pipeline returns a non-zero status only if the last command in the pipeline fails. Any error produced by previous command(s) in the pipeline is not visible outside the pipeline, and so does not kill a bash script. For example:

set -e
true | false | true   
echo "This will be printed"  # "false" inside the pipeline not detected

If you want any failure in pipelines to also exit a bash script, you need to add -o pipefail option. For example:

set -o pipefail -e
true | false | true          # "false" inside the pipeline detected correctly
echo "This will not be printed"

Therefore, to protect a critical code block against any type of command errors or pipeline errors, use the following pair of set commands.

set -o pipefail -e
#
# some critical code block where no error or pipeline error is allowed
#
set +o pipefail +e
Bash Error Handling Tip #3: Try and Catch Statements in Bash


Although the set command allows you to terminate a bash script upon any error that you deem critical, this mechanism is often not sufficient in more complex bash scripts where different types of errors could happen.

To be able to detect and handle different types of errors/exceptions more flexibly, you will need try/catch statements, which however are missing in bash. At least we can mimic the behaviors of try/catch as shown in this trycatch.sh script:

function try()
{
    [[ $- = *e* ]]; SAVED_OPT_E=$?
    set +e
}

function throw()
{
    exit $1
}

function catch()
{
    export exception_code=$?
    (( $SAVED_OPT_E )) && set +e
    return $exception_code
}

Here we define several custom bash functions to mimic the semantic of try and catch statements. The throw() function is supposed to raise a custom (non-zero) exception. We need set +e , so that the non-zero returned by throw() will not terminate a bash script. Inside catch() , we store the value of exception raised by throw() in a bash variable exception_code , so that we can handle the exception in a user-defined fashion.

Perhaps an example bash script will make it clear how trycatch.sh works. See the example below that utilizes trycatch.sh .

# Include trycatch.sh as a library
source ./trycatch.sh

# Define custom exception types
export ERR_BAD=100
export ERR_WORSE=101
export ERR_CRITICAL=102

try
(
    echo "Start of the try block"

    # When a command returns a non-zero, a custom exception is raised.
    run-command || throw $ERR_BAD
    run-command2 || throw $ERR_WORSE
    run-command3 || throw $ERR_CRITICAL

    # This statement is not reached if there is any exception raised
    # inside the try block.
    echo "End of the try block"
)
catch || {
    case $exception_code in
        $ERR_BAD)
            echo "This error is bad"
        ;;
        $ERR_WORSE)
            echo "This error is worse"
        ;;
        $ERR_CRITICAL)
            echo "This error is critical"
        ;;
        *)
            echo "Unknown error: $exit_code"
            throw $exit_code    # re-throw an unhandled exception
        ;;
    esac
}

In this example script, we define three types of custom exceptions. We can choose to raise any of these exceptions depending on a given error condition. The OR list <command> || throw <exception> allows us to invoke throw() function with a chosen <exception> value as a parameter, if <command> returns a non-zero exit status. If <command> is completed successfully, throw() function will be ignored. Once an exception is raised, the raised exception can be handled accordingly inside the subsequent catch block. As you can see, this provides a more flexible way of handling different types of error conditions.


Granted, this is not a full-blown try/catch constructs. One limitation of this approach is that the try block is executed in a sub-shell . As you may know, any variables defined in a sub-shell are not visible to its parent shell. Also, you cannot modify the variables that are defined in the parent shell inside the try block, as the parent shell and the sub-shell have separate scopes for variables.

Conclusion

In this bash tutorial, I presented basic error handling tips that may come in handy when you want to write a more robust bash script. As expected, these tips are not as sophisticated as the error handling constructs available in other programming languages. If the bash script you are writing requires more advanced error handling than this, perhaps bash is not the right language for your task. You probably want to turn to other languages such as Python.

Let me conclude the tutorial by mentioning one essential tool that every shell script writer should be familiar with. ShellCheck is a static analysis tool for shell scripts. It can detect and point out syntax errors, bad coding practice and possible semantic issues in a shell script with much clarity. Definitely check it out if you haven't tried it.
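
If ShellCheck is installed (it is packaged for most distributions), running it is as simple as pointing it at a script; myscript.sh here is a hypothetical file name:

$ shellcheck myscript.sh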

If you find this tutorial helpful, I recommend you check out the series of bash shell scripting tutorials provided by Xmodulo.

[Mar 28, 2021] The Fake News about Fake Agile

Adherents of an obscure cult behave exactly the way described. Funny that the author is one of the cultists, a true believer in the Agile methodology.
Aug 23, 2019 | www.iconagility.com

All politics about fake news aside (PLEASE!), I've heard a growing number of reports, sighs and cries about Fake Agile. It's frustrating when people just don't get it, especially when they think they do. We can point fingers and vilify those who think differently -- or we can try to understand why this "us vs them" mindset is splintering the Agile community....

[Mar 24, 2021] How To Edit Multiple Files Using Vim Editor by Senthil Kumar

Images removed. Use the original for full text.
Mar 24, 2021 | ostechnix.com

March 17, 2018

...Now, let us edit these two files at the same time using the Vim editor. To do so, run:

$ vim file1.txt file2.txt

Vim will display the contents of the files in order: the first file's contents are shown first, then the second file, and so on.

Switch between files

To move to the next file, type:

:n

To go back to previous file, type:

:N

Here, N is capital (Type SHIFT+n).

Start editing the files the way you normally do with Vim. Press 'i' to switch to insert mode and modify the contents as you like. Once done, press ESC to go back to normal mode.

Vim won't allow you to move to the next file if there are any unsaved changes. To save the changes in the current file, type:

ZZ

Please note that it is double capital letters ZZ (SHIFT+zz).

To abandon the changes and move to the previous file, type:

:N!

To view the files which are being currently edited, type:

:buffers

You will see the list of loaded files at the bottom.


To switch to the next file, type :buffer followed by the buffer number. For example, to switch to the first file, type:

:buffer 1

Or, just do:

:b 1

Just remember these commands to easily switch between buffers:

:bf            # Go to first file.
:bl            # Go to last file
:bn            # Go to next file.
:bp            # Go to previous file.
:b number  # Go to n'th file (E.g :b 2)
:bw            # Close current file.
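
A few related commands are also worth remembering (standard Vim behavior, not specific to this article):

:wa            # Write (save) all changed buffers.
:qa!           # Quit all windows, discarding any unsaved changes.
:ball          # Open a window for every loaded buffer.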
Opening additional files for editing

We are currently editing two files, namely file1.txt and file2.txt. You might want to open another file named file3.txt for editing. What will you do? It's easy! Just type :e followed by the file name, like below.

:e file3.txt

Now you can edit file3.txt.

To view how many files are being edited currently, type:

:buffers

Please note that you cannot switch between files opened with :e using :n or :N. To switch to another file, type :buffer followed by the buffer number.

Copying contents of one file into another

You know how to open and edit multiple files at the same time. Sometimes, you might want to copy the contents of one file into another. It is possible too. Switch to a file of your choice. For example, let us say you want to copy the contents of file1.txt into file2.txt.

To do so, first switch to file1.txt:

:buffer 1

Move the cursor to the line you want to copy and type yy to yank (copy) it. Then, move to file2.txt:

:buffer 2

Place the cursor where you want to paste the copied line from file1.txt and type p. For example, to paste the copied line between line2 and line3, put the cursor on line2 and type p.

Sample output:

line1
line2
ostechnix
line3
line4
line5

To save the changes made in the current file, type:

ZZ

Again, please note that this is double capital ZZ (SHIFT+zz).

To save the changes in all files and exit the vim editor, type:

:wq

Similarly, you can copy any line from any file to other files.

Copying entire file contents into another

We know how to copy a single line. What about the entire file contents? That's also possible. Let us say, you want to copy the entire contents of file1.txt into file2.txt.

To do so, open the file2.txt first:

$ vim file2.txt

If the files are already loaded, you can switch to file2.txt by typing:

:buffer 2

Move the cursor to the place where you want to copy the contents of file1.txt. I want to copy the contents of file1.txt after line5 in file2.txt, so I moved the cursor to line 5. Then, type the following command and hit the ENTER key:

:r file1.txt

Here, r means read .

Now you will see the contents of file1.txt is pasted after line5 in file2.txt.

line1
line2
line3
line4
line5
ostechnix
open source
technology
linux
unix

To save the changes in the current file, type:

ZZ

To save all changes in all loaded files and exit vim editor, type:

:wq
Method 2

Another method to open multiple files at once is to use either the -o or -O flag.

To open multiple files in horizontal windows, run:

$ vim -o file1.txt file2.txt

To switch between windows, press CTRL-w w (i.e., press CTRL+w and then press w again). Or use the CTRL-w movement keys (CTRL-w h/j/k/l) to move between windows.

To open multiple files in vertical windows, run:

$ vim -O file1.txt file2.txt file3.txt

To switch between windows, press CTRL-w w (i.e., press CTRL+w and then press w again). Or use the CTRL-w movement keys (CTRL-w h/j/k/l) to move between windows.

Everything else is the same as described in method 1.

For example, to list currently loaded files, run:

:buffers

To switch between files:

:buffer 1

To open an additional file, type:

:e file3.txt

To copy entire contents of a file into another:

:r file1.txt

The only difference in method 2 is that once you save the changes in the current file using ZZ, the file closes itself, and you need to close the files one by one by typing :wq. With method 1, typing :wq saves the changes in all files and closes them all at once.

For more details, refer man pages.

$ man vim

[Mar 24, 2021] How To Comment Out Multiple Lines At Once In Vim Editor by Senthil Kumar

Images removed. Use the original for full text.

Nov 22, 2017 | ostechnix.com

...enter the following command:

:1,3s/^/#

In this case, we are commenting out the lines from 1 to 3. Check the following screenshot. The lines from 1 to 3 have been commented out.


To uncomment those lines, run:

:1,3s/^#/

Once you're done, unset the line numbers.

:set nonumber

Let us go ahead and see third method.

Method 3:

This one is similar to the above method, but slightly different.

Open the file in vim editor.

$ vim ostechnix.txt

Set line numbers:

:set number

Then, type the following command to comment out the lines.

:1,4s/^/# /

The above command will comment out lines from 1 to 4.


Finally, unset the line numbers by typing the following.

:set nonumber
Method 4:

This method is suggested by one of our readers, Mr. Anand Nande, in the comment section below.

Open file in vim editor:

$ vim ostechnix.txt

Press Ctrl+V to enter into 'Visual block' mode and press DOWN arrow to select all the lines in your file.


Then, press Shift+i to enter INSERT mode (this will place your cursor on the first line). Press Shift+3 which will insert '#' before your first line.


Finally, press ESC key, and you can now see all lines are commented out.

Method 5:

This method is suggested by one of our Twitter followers and friend, Mr. Tim Chase.

We can even target lines to comment out by regex. Open the file in vim editor.

$ vim ostechnix.txt

And type the following:

:g/Linux/s/^/# /

The above command will comment out all lines that contain the word "Linux".


And, that's all for now. I hope this helps. If you know any other easier method than the ones given here, please let me know in the comment section below. I will check and add them to the guide. Also, have a look at the comment section below; one of our visitors has shared a good guide about Vim usage.

NUNY3 November 23, 2017 - 8:46 pm

If you want to be productive in Vim you need to talk to Vim in the *language* Vim is using. Every solution that drops out of "normal mode" is most probably not the most effective.

METHOD 1
Using "normal mode". For example comment first three lines with: I#j.j.
This is strange isn't it, but:
I –> capital I jumps to the beginning of row and gets into insert mode
# –> type actual comment character
–> exit insert mode and gets back to normal mode
j –> move down a line
. –> repeat last command. Last command was: I#
j –> move down a line
. –> repeat last command. Last command was: I#
You get it: After you execute a command, you just repeat j. cobination for the lines you would like to comment out.

METHOD 2
There is "command line mode" command to execute "normal mode" command.
Example: :%norm I#
Explanation:
% –> whole file (you can also use range if you like: 1,3 to do only for first three lines).
norm –> (short for normal)
I –> is normal command I that is, jump to the first character in line and execute insert
# –> insert actual character
You get it, for each range you select, for each of the line normal mode command is executed

METHOD 3
This is the method I love the most, because it follows the "I am talking to Vim in Vim's language" principle.
It works by using an extension (plug-in, add-in): https://github.com/tomtom/tcomment_vim
How to use it? In NORMAL MODE of course, to be efficient. Use: gc+action.

Examples:
gcap –> comment a paragraph
gcj –> comment the current line and the line below
gc3j –> comment the current line and 3 lines below
gcgg –> comment the current line and all lines up to and including the first line in the file
gcG –> comment the current line and all lines down to and including the last line in the file
gcc –> shortcut for commenting the current line

You name it, it has all sorts of combinations. Remember, you have to talk with Vim to use it properly and efficiently.
Yes, sure, it also works with "visual mode": press V, select the lines you would like to mark, and execute: gc

You see, if I want to impress a friend I use the gc+action combination. Because I always get: What? How did you do it? My answer is: it is Vim, you need to talk with the text editor, not use the dummy mouse and repeat actions.

NOTE: Please stop telling people to use the DOWN arrow key. Start using the h, j, k and l keys to move around. These keys are on the typist's home row. The DOWN, UP, LEFT and RIGHT keys are a bad habit used by beginners. It is very inefficient: you have to move your hand from the home row to the arrow keys.

VERY IMPORTANT: Do you want the one million dollar tip for using Vim? Start using Vim the way it was designed to be used: normal mode. Use its language: verbs, nouns, adverbs and adjectives. Interested in what I am talking about? You should be, if you are serious about using Vim. Read this one million dollar answer on the forum: https://stackoverflow.com/questions/1218390/what-is-your-most-productive-shortcut-with-vim/1220118#1220118

MDEBUSK November 26, 2019 - 7:07 am

I've tried the "boxes" utility with vim and it can be a lot of fun.

https://boxes.thomasjensen.com/

SÉRGIO ARAÚJO December 17, 2020 - 4:43 am

Method 6
:%norm I#

[Mar 24, 2021] How To Setup Backup Server Using Rsnapshot by Senthil Kumar

Apr 13, 2017 | ostechnix.com

... ... ...

Now, edit rsnapshot config file using command:

$ sudo nano /etc/rsnapshot.conf

The default configuration should just work fine. All you need to do is define the backup directories and backup intervals.

First, let us set up the root backup directory, i.e. choose the directory where we want to store the filesystem backups. In this case, I will store the backups in the /rsnapbackup/ directory.


# All snapshots will be stored under this root directory.
#
snapshot_root   /rsnapbackup/

Again, you should use the TAB key between the snapshot_root element and your backup directory.

Scroll down a bit, and make sure the following lines (marked in bold) are uncommented:

[...]
#################################
# EXTERNAL PROGRAM DEPENDENCIES #
#################################

# LINUX USERS: Be sure to uncomment "cmd_cp". This gives you extra features.
# EVERYONE ELSE: Leave "cmd_cp" commented out for compatibility.
#
# See the README file or the man page for more details.
#
cmd_cp /usr/bin/cp

# uncomment this to use the rm program instead of the built-in perl routine.
#
cmd_rm /usr/bin/rm

# rsync must be enabled for anything to work. This is the only command that
# must be enabled.
#
cmd_rsync /usr/bin/rsync

# Uncomment this to enable remote ssh backups over rsync.
#
cmd_ssh /usr/bin/ssh

# Comment this out to disable syslog support.
#
cmd_logger /usr/bin/logger

# Uncomment this to specify the path to "du" for disk usage checks.
# If you have an older version of "du", you may also want to check the
# "du_args" parameter below.
#
cmd_du /usr/bin/du

[...]

Next, we need to define the backup intervals:

#########################################
# BACKUP LEVELS / INTERVALS #
# Must be unique and in ascending order #
# e.g. alpha, beta, gamma, etc. #
#########################################

retain alpha 6
retain beta 7
retain gamma 4
#retain delta 3

Here, retain alpha 6 means that every time rsnapshot alpha runs, it will make a new snapshot, rotate the old ones, and retain the most recent six (alpha.0 - alpha.5). You can define your own intervals. For more details, refer to the rsnapshot man pages.


Next, we need to define the backup directories. Find the following directives in your rsnapshot config file and set the backup directory locations.

###############################
### BACKUP POINTS / SCRIPTS ###
###############################

# LOCALHOST
backup /root/ostechnix/ server/

Here, I am going to back up the contents of the /root/ostechnix/ directory and save them in the /rsnapbackup/server/ directory. Please note that I didn't specify the full path (/rsnapbackup/server/) in the above configuration, because we already defined the root backup directory earlier.

Likewise, define your remote client system's backup location.

# REMOTEHOST
backup sk@192.168.43.192:/home/sk/test/ client/

Here, I am going to back up the contents of my remote client system's /home/sk/test/ directory and save them in the /rsnapbackup/client/ directory on my backup server. Again, note that I didn't specify the full path (/rsnapbackup/client/) in the above configuration, because we already defined the root backup directory.

Save and close the /etc/rsnapshot.conf file.

Once you have made all your changes, run the following command to verify that the config file is syntactically valid.

rsnapshot configtest

If all is well, you will see the following output.

Syntax OK
Testing backups

Run the following command to test backups.

rsnapshot alpha

This takes a few minutes, depending on the size of the backups.

Verifying backups

Check whether the backups are really stored in the root backup directory on the backup server.

ls /rsnapbackup/

You will see the following output:

alpha.0

Check the alpha.0 directory:

ls /rsnapbackup/alpha.0/

You will see two directories automatically created: one for the local backup (server), and another for the remote systems (client).

client/ server/

Check the client system backups:

ls /rsnapbackup/alpha.0/client

Check the server system (local system) backups:

ls /rsnapbackup/alpha.0/server
Automate backups

You don't want to run the rsnapshot command by hand every time you need a backup. Define a cron job to automate the backups.

sudo vi /etc/cron.d/rsnapshot

Add the following lines:

0 */4 * * *     /usr/bin/rsnapshot alpha
50 23 * * *     /usr/bin/rsnapshot beta
00 22 1 * *     /usr/bin/rsnapshot delta

The first line indicates that six alpha snapshots will be taken each day (at hours 0, 4, 8, 12, 16, and 20), beta snapshots every night at 11:50pm, and delta snapshots at 10pm on the first day of each month (if you use the delta level, remember to uncomment the corresponding retain delta line in the config). You can adjust the timing as you wish. Save and close the file.

Done! Rsnapshot will automatically take backups at the times defined in the cron job. For more details, refer to the man pages.

man rsnapshot

That's all for now. Hope this helps. I will be back soon with another interesting guide. If you find this guide useful, please share it on your social and professional networks and support OSTechNix.

Cheers!

[Mar 24, 2021] How To Backup Your Entire Linux System Using Rsync by Senthil Kumar

Apr 25, 2017 | ostechnix.com

... ... ..

To backup the entire system, all you have to do is open your Terminal and run the following command as root user:

$ sudo rsync -aAXv / --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt

This command will back up the entire root (/) directory, excluding the /dev, /proc, /sys, /tmp, /run, /mnt, /media, and /lost+found directories, and save the data in the /mnt folder.
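
If the exclusion list grows, the same backup can be written with an exclude file instead of a long --exclude list. A minimal sketch, assuming a hypothetical /root/backup-excludes.txt (the file name is just an example):

cat > /root/backup-excludes.txt <<'EOF'
/dev/*
/proc/*
/sys/*
/tmp/*
/run/*
/mnt/*
/media/*
/lost+found
EOF
sudo rsync -aAXv --exclude-from=/root/backup-excludes.txt / /mnt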

[Mar 24, 2021] CYA - System Snapshot And Restore Utility For Linux by Senthil Kumar

Jul 23, 2018 | ostechnix.com

CYA, which stands for Cover Your Assets, is a free, open source system snapshot and restore utility for any Unix-like operating system that uses the BASH shell. CYA is portable and supports many popular filesystems such as EXT2/3/4, XFS, UFS, GPFS, ReiserFS, JFS, BtrFS, and ZFS. Please note that CYA will not back up the actual user data; it only backs up and restores the operating system itself. CYA is essentially a system restore utility. By default, it will back up all key directories like /bin/, /lib/, /usr/, /var/ and several others. You can, however, define your own directories and file paths to include in the backup, so CYA will pick those up as well. It is also possible to define directories/files to skip in the backup. For example, you can skip /var/logs/ if you don't need the log files. Under the hood, CYA uses the rsync backup method; however, CYA is a little bit easier than plain rsync when creating rolling backups.

When restoring your operating system, CYA will roll back the OS using the backup profile you created earlier. You can restore either the entire system or specific directories only. You can also easily access the backup files even without a complete rollback, using your terminal or file manager. Another notable feature is that you can generate a custom recovery script to automate the mounting of your system partition(s) when you restore from a live CD, USB, or network image. In a nutshell, CYA can help you restore your system to a previous state when you end up with a broken system caused by a software update, configuration changes, or intrusions/hacks.

... ... ...

Conclusion

Unlike Systemback and other system restore utilities, CYA is not a distribution-specific restore utility. It supports any Linux operating system that uses BASH. It is one of the must-have applications in your arsenal. Install it right away and create snapshots; you won't regret it when you accidentally crash your Linux system.

[Mar 24, 2021] What commands are missing from your bashrc file- - Enable Sysadmin

Mar 24, 2021 | www.redhat.com

The idea was that sharing this would inspire others to improve their bashrc savviness. Take a look at what our Sudoers group shared and, please, borrow anything you like to make your sysadmin life easier.

[ You might also like: Parsing Bash history in Linux ]

Jonathan Roemer
# Require confirmation before overwriting target files. This setting keeps me from deleting things I didn't expect to, etc
alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -i'

# Add color, formatting, etc to ls without re-typing a bunch of options every time
alias ll='ls -alhF'
alias ls="ls --color"
# So I don't need to remember the options to tar every time
alias untar='tar xzvf'
alias tarup='tar czvf'

# Changing the default editor, I'm sure a bunch of people have this so they don't get dropped into vi instead of vim, etc. A lot of distributions have system default overrides for these, but I don't like relying on that being around
alias vim='nvim'
alias vi='nvim'
Valentin Bajrami

Here are a few functions from my ~/.bashrc file:

# Easy copy the content of a file without using cat / selecting it etc. It requires xclip to be installed
# Example:  _cp /etc/dnsmasq.conf
_cp()
{
  local file="$1"
  local st=1
  if [[ -f $file ]]; then
    cat "$file" | xclip -selection clipboard
    st=$?
  else
    printf '%s\n' "Make sure you are copying the content of a file" >&2
  fi
  return $st    
}

# This is the function to paste the content. The content is now in your buffer.
# Example: _paste   

_paste()
{
  xclip -selection clipboard -o
}

# Generate a random password without installing any external tooling
genpw()
{
  alphanum=( {a..z} {A..Z} {0..9} ); for ((i=0; i<${#alphanum[@]}; i++)); do printf '%s' "${alphanum[$((RANDOM % ${#alphanum[@]}))]}"; done; echo
}
# See what command you are using the most (this parses the history command)
cm() {
  history | awk ' { a[$4]++ } END { for ( i in a ) print a[i], i | "sort -rn | head -n10"}' | awk '$1 > max{ max=$1} { bar=""; i=s=10*$1/max;while(i-->0)bar=bar"#"; printf "%25s %15d %s %s", $2, $1,bar, "\n"; }'
}
Peter Gervase

For shutting down at night, I kill all SSH sessions and then kill any VPN connections:

#!/bin/bash
/usr/bin/killall ssh
/usr/bin/nmcli connection down "Raleigh (RDU2)"
/usr/bin/nmcli connection down "Phoenix (PHX2)"
Valentin Rothberg
alias vim='nvim'
alias l='ls -CF --color=always'
alias cd='cd -P' # follow symlinks
alias gits='git status'
alias gitu='git remote update'
alias gitum='git reset --hard upstream/master'
Steve Ovens
alias nano='nano -wET 4'
alias ls='ls --color=auto'
PS1="\[\e[01;32m\]\u@\h \[\e[01;34m\]\w  \[\e[01;34m\]$\[\e[00m\] "
export EDITOR=nano
export AURDEST=/var/cache/pacman/pkg
PATH=$PATH:/home/stratus/.gem/ruby/2.7.0/bin
alias mp3youtube='youtube-dl -x --audio-format mp3'
alias grep='grep --color'
alias best-youtube='youtube-dl -r 1M --yes-playlist -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]''
alias mv='mv -vv'
shopt -s histappend
HISTCONTROL=ignoreboth
Jason Hibbets

While my bashrc aliases aren't as sophisticated as the previous technologists, you can probably tell I really like shortcuts:

# User specific aliases and functions

alias q='exit'
alias h='cd ~/'
alias c='clear'
alias m='man'
alias lsa='ls -al'
alias s='sudo su -'
Bonus: Organizing bashrc files and cleaning up files

We know many sysadmins like to script things to make their work more automated. Here are a few tips from our Sudoers that you might find useful.

Chris Collins

I don't know who I need to thank for this, some awesome woman on Twitter whose name I no longer remember, but it's changed the organization of my bash aliases and commands completely.

I have Ansible drop individual <something>.bashrc files into ~/.bashrc.d/ with any alias or command or shortcut I want, related to any particular technology or Ansible role, and can manage them all separately per host. It's been the best single trick I've learned for .bashrc files ever.

Git stuff gets a ~/.bashrc.d/git.bashrc , Kubernetes goes in ~/.bashrc.d/kube.bashrc .

if [ -d ${HOME}/.bashrc.d ]
then
  for file in ~/.bashrc.d/*.bashrc
  do
    source "${file}"
  done
fi
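
As a small illustration of this layout (file name and aliases are examples only, not part of the original tip), a per-topic file could also be dropped in manually instead of via Ansible:

mkdir -p ~/.bashrc.d
cat > ~/.bashrc.d/git.bashrc <<'EOF'
# git-related shortcuts, picked up by the loop in ~/.bashrc
alias gits='git status'
alias gitl='git log --oneline --graph'
EOF
source ~/.bashrc   # reload so the new file takes effect in the current shell
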
Peter Gervase

These aren't bashrc aliases, but I use them all the time. I wrote a little script named clean for getting rid of excess lines in files. For example, here's nsswitch.conf with lots of comments and blank lines:

[pgervase@pgervase etc]$ head authselect/nsswitch.conf
# Generated by authselect on Sun Dec  6 22:12:26 2020
# Do not modify this file manually.

# If you want to make changes to nsswitch.conf please modify
# /etc/authselect/user-nsswitch.conf and run 'authselect apply-changes'.
#
# Note that your changes may not be applied as they may be
# overwritten by selected profile. Maps set in the authselect
# profile always take precedence and overwrites the same maps
# set in the user file. Only maps that are not set by the profile

[pgervase@pgervase etc]$ wc -l authselect/nsswitch.conf
80 authselect/nsswitch.conf

[pgervase@pgervase etc]$ clean authselect/nsswitch.conf
passwd:     sss files systemd
group:      sss files systemd
netgroup:   sss files
automount:  sss files
services:   sss files
shadow:     files sss
hosts:      files dns myhostname
bootparams: files
ethers:     files
netmasks:   files
networks:   files
protocols:  files
rpc:        files
publickey:  files
aliases:    files

[pgervase@pgervase etc]$ cat `which clean`
#! /bin/bash
#
/bin/cat $1 | /bin/sed 's/^[ \t]*//' | /bin/grep -v -e "^#" -e "^;" -e "^[[:space:]]*$" -e "^[ \t]+"

[ Free online course: Red Hat Enterprise Linux technical overview . ]

[Mar 24, 2021] How to read data from text files by Roberto Nozaki

Mar 24, 2021 | www.redhat.com

The following is the script I use to test the servers:

#!/bin/bash

input_file=hosts.csv
output_file=hosts_tested.csv

echo "ServerName,IP,PING,DNS,SSH" > "$output_file"

tail -n +2 "$input_file" | while IFS=, read -r host ip _
do
    if ping -c 3 "$ip" > /dev/null; then
        ping_status="OK"
    else
        ping_status="FAIL"
    fi

    if nslookup "$host" > /dev/null; then
        dns_status="OK"
    else
        dns_status="FAIL"
    fi

    if nc -z -w3 "$ip" 22 > /dev/null; then
        ssh_status="OK"
    else
        ssh_status="FAIL"
    fi

    echo "Host = $host IP = $ip PING_STATUS = $ping_status DNS_STATUS = $dns_status SSH_STATUS = $ssh_status"
    echo "$host,$ip,$ping_status,$dns_status,$ssh_status" >> "$output_file"
done
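
The script expects a CSV file with a header row (skipped by tail -n +2) and the server name and IP address in the first two columns. A hypothetical hosts.csv could look like this (names and addresses are examples only):

ServerName,IP
server1.example.com,192.168.0.10
server2.example.com,192.168.0.11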

[Mar 17, 2021] Year of Living Remotely by Angus Loten

Mar 12, 2021 | www.wsj.com

In the last week of April, Zoom reported that the number of daily users on its platform grew to more than 300 million , up from 10 million at the end of 2019.

Wayne Kurtzman, a research director at International Data Corp., said the crisis has accelerated the adoption of videoconferencing and other collaboration tools by roughly five years.

It has also driven innovation. New features expected in the year ahead include the use of artificial intelligence to enable real-time transcription and translation, informing people when they were mentioned in a meeting and why, and creating a short "greatest hits" version of meetings they may have missed, Mr. Kurtzman said.

Many businesses also ramped up their use of software bots , among other forms of automation, to handle routine workplace tasks like data entry and invoice processing.

The attention focused on keeping operations running saw many companies pull back on some long-running IT modernization efforts, or plans to build out ambitious data analytics and business intelligence systems.

Bob Parker, a senior vice president for industry research at IDC, said many companies were simply channeling funds to more urgent needs. But another key obstacle was an inability to access on-site resources to continue pre-Covid initiatives, he said, "especially for projects requiring significant process re-engineering," such as enterprise resource planning implementations and upgrades.

Related Video

[Mar 14, 2021] while loops in Bash

Mar 14, 2021 | www.redhat.com
while true
do
  df -k | grep home
  sleep 1
done

In this case, you're running the loop with a true condition, which means it will run forever or until you hit CTRL-C. Therefore, you need to keep an eye on it (otherwise, it will keep consuming the system's resources).

Note : If you use a loop like this, you need to include a command like sleep to give the system some time to breathe between executions. Running anything non-stop could become a performance issue, especially if the commands inside the loop involve I/O operations.

2. Waiting for a condition to become true

There are variations of this scenario. For example, you know that at some point, the process will create a directory, and you are just waiting for that moment to perform other validations.

You can have a while loop to keep checking for that directory's existence and only write a message while the directory does not exist.


If you want to do something more elaborate, you could create a script and show a clearer indication that the loop condition became true:

#!/bin/bash

while [ ! -d directory_expected ]
do
   echo "`date` - Still waiting" 
   sleep 1
done

echo "DIRECTORY IS THERE!!!"
3. Using a while loop to manipulate a file

Another useful application of a while loop is to combine it with the read command to have access to columns (or fields) quickly from a text file and perform some actions on them.

In the following example, you are simply picking the columns from a text file with a predictable format and printing the values that you want to use to populate an /etc/hosts file.


Here the assumption is that the file has columns delimited by spaces or tabs and that there are no spaces in the content of the columns; if there were, the content of the fields could shift and not give you what you needed.

Notice that you're just doing a simple operation to extract and manipulate information and not concerned about the command's reusability. I would classify this as one of those "quick and dirty tricks."

Of course, if this was something that you would repeatedly do, you should run it from a script, use proper names for the variables, and all those good practices (including transforming the filename in an argument and defining where to send the output, but today, the topic is while loops).

#!/bin/bash

cat servers.txt | grep -v CPU | while read servername cpu ram ip
do
   echo $ip $servername
done

[Mar 14, 2021] 7Zip 21.0 Provides Native Linux Support by Georgio Baremmi

Mar 12, 2021 | www.putorius.net

7zip is a wildly popular Windows program that is used to create archives. By default it uses the 7z format, which it claims is 30-70% better than the normal zip format. It also claims to compress to the regular zip format 2-10% more effectively than other zip-compatible programs. It supports a wide variety of archive formats including (but not limited to) zip, gzip, bzip2, tar, and rar. Linux has had p7zip for a long time. However, this is the first time the 7Zip developers have provided native Linux support.

Jump to Installation Instructions

p7zip vs 7Zip – What's the Difference

Linux has had p7zip for some time now. p7zip is a port of the Windows 7zip package to Linux/Unix. For the average user there is no difference; the p7zip package is a direct port of 7zip.

Why Bother Using 7zip if p7zip is available?

The main reason to use the new native Linux version of 7Zip is updates. The p7zip package that comes with my Fedora installation is version 16.02 from 2016. However, the newly installed 7zip version is 21.01 (alpha) which was released just a few days ago.

Details from p7zip Package


7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21

Details from Native 7Zip Package

7-Zip (z) 21.01 alpha (x64) : Copyright (c) 1999-2021 Igor Pavlov : 2021-03-09
Install Native 7Zip on Linux Command Line

First, we need to download the tar.xz package from the 7Zip website.

wget https://www.7-zip.org/a/7z2101-linux-x64.tar.xz

Next, we extract the tar archive. Here I am extracting it to /home/gbaremmi/bin/ since that directory is in my PATH.

tar xvf 7z2101-linux-x64.tar.xz -C ~/bin/

That's it, you are now ready to use 7Zip.

If you previously had the p7zip package installed, you now have two similar commands. The p7zip package provides the 7z command, while the new native version of 7Zip provides the 7zz command.

Using Native 7Zip (7zz) in Linux

7Zip comes with a great many options. The full suite of options is beyond the scope of this article. Here we will cover basic archive creation and extraction.

Creating a 7z Archive with Native Linux 7Zip (7zz)

To create a 7z archive, we will call the newly installed 7zz utility and pass the a (add files to archive) command. We will then supply the name of the archive and the files we want added.

[gbaremmi@putor ~]$ 7zz a words.7z dict-words/*

7-Zip (z) 21.01 alpha (x64) : Copyright (c) 1999-2021 Igor Pavlov : 2021-03-09
 compiler: 9.3.0 GCC 9.3.0 64-bit locale=en_US.UTF-8 Utf16=on HugeFiles=on CPUs:4 Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz (40651),ASM,AES

Scanning the drive:
25192 files, 6650099 bytes (6495 KiB)

Creating archive: words.7z
Add new data to archive: 25192 files, 6650099 bytes (6495 KiB)
                         
Files read from disk: 25192
Archive size: 2861795 bytes (2795 KiB)
Everything is Ok

In the above example we are adding all the files in the dict-words directory to the words.7z archive.
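
To inspect an archive without extracting it, the l (list) command can be used, assuming 7zz follows the same command set as the classic 7z utility:

7zz l words.7z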

Extracting Files from an Archive with Native Linux 7Zip (7zz)

Extracting an archive is very similar. Here we are using the e (extract) command.

[gbaremmi@putor new-dict]$ 7zz e words.7z 

7-Zip (z) 21.01 alpha (x64) : Copyright (c) 1999-2021 Igor Pavlov : 2021-03-09
 compiler: 9.3.0 GCC 9.3.0 64-bit locale=en_US.UTF-8 Utf16=on HugeFiles=on CPUs:4 Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz (40651),ASM,AES

Scanning the drive for archives:
1 file, 2861795 bytes (2795 KiB)

Extracting archive: words.7z
--
Path = words.7z
Type = 7z
Physical Size = 2861795
Headers Size = 186150
Method = LZMA2:23
Solid = +
Blocks = 1

Everything is Ok                    

Files: 25192
Size:       6650099
Compressed: 2861795

That's it! We have now installed native 7Zip and used it to create and extract our first archive.

Resources and Further Reading

[Mar 12, 2021] Connect computers through WebRTC.

Mar 12, 2021 | opensource.com

Snapdrop

If navigating a network through IP addresses and hostnames is confusing, or if you don't like the idea of opening a folder for sharing and forgetting that it's open for perusal, then you might prefer Snapdrop . This is an open source project that you can run yourself or use the demonstration instance on the internet to connect computers through WebRTC. WebRTC enables peer-to-peer connections through a web browser, meaning that two users on the same network can find each other by navigating to Snapdrop and then communicate with each other directly, without going through an external server.

snapdrop.jpg (Seth Kenlon, CC BY-SA 4.0)

Once two or more clients have contacted a Snapdrop service, users can trade files and chat messages back and forth, right over the local network. The transfer is fast, and your data stays local.

[Mar 12, 2021] 10 Best Compression Tools for Linux - Make Tech Easier

Mar 12, 2021 | www.maketecheasier.com

By Rubaiat Hossain, Mar 8, 2021

File compression is an integral part of system administration. Finding the best compression method requires significant determination. Luckily, there are many robust compression tools for Linux that make backing up system data easier. Here, we present ten of the best Linux compression tools that can be useful to enterprises and users in this regard.

1. LZ4

LZ4 is the compression tool of choice for admins who need lightning-fast compression and decompression speed. It utilizes the LZ4 lossless algorithm, which belongs to the family of LZ77 byte-oriented compression algorithms. Moreover, LZ4 comes coupled with a high-speed decoder, making it one of the best Linux compression tools for enterprises.
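
A minimal usage sketch, assuming the lz4 command-line tool is installed (file names are examples):

lz4 bigfile bigfile.lz4        # compress; the original file is kept by default
lz4 -d bigfile.lz4 restored    # decompress into a new file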

2. Zstandard

Zstandard is another fast compression tool for Linux that can be used for personal and enterprise projects. It's backed by Facebook and offers excellent compression ratios. Some of its most compelling features include the adaptive mode, which can control compression ratios based on I/O, the ability to trade speed for better compression, and the dictionary compression scheme. Zstandard also has a rich API with bindings for all major programming languages.
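
A brief sketch of typical zstd usage (standard zstd flags; file names are examples):

zstd -19 -T0 data.tar    # high compression using all CPU cores, produces data.tar.zst
zstd -d data.tar.zst     # decompress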

3. lzop

lzop is a robust compression tool that utilizes the Lempel-Ziv-Oberhumer (LZO) compression algorithm. It provides breakneck compression speed by trading away compression ratio. For example, it produces slightly larger files compared to gzip but requires only 10 percent of the CPU runtime. Moreover, lzop can deal with system backups in multiple ways, including backup mode, single file mode, archive mode, and pipe mode.

4. Gzip

Gzip is certainly one of the most widely used compression tools for Linux admins. It is compatible with every GNU software, making it the perfect compression tool for remote engineers. Gzip leverages the Lempel-Ziv coding in deflate mode for file compression. It can reduce the size of source codes by up to 90 percent. Overall, this is an excellent choice for seasoned Linux users as well as software developers.

5. bzip2

bzip2 , a free compression tool for Linux, compresses files using the Burrows-Wheeler block-sorting compression algorithm and Huffman coding. It also supports several additional compression methods, such as run-length encoding, delta encoding, sparse bit array, and Huffman tables. It can also recover data from media drives in some cases. Overall, bzip2 is a suitable compression tool for everyday usage due to its robust compression abilities and fast decompression speed.

6. p7zip

p7zip is the port of 7-zip's command-line utility. It is a high-performance archiving tool with solid compression ratios and support for many popular formats, including tar, xz, gzip, bzip2, and zip. It uses the 7z format by default, which provides 30 to 50 percent better compression than standard zip compression . Moreover, you can use this tool for creating self-extracting and dynamically-sized volume archives.

7. pigz

pigz or parallel implementation of gzip is a reliable replacement for the gzip compression tool. It leverages multiple CPU cores to increase the compression speed dramatically. It utilizes the zlib and pthread libraries for implementing the multi-threading compression process. However, pigz can't decompress archives in parallel. Hence, you will not be able to get similar speeds during compression and decompression.
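
A short example, assuming pigz is installed; the output is ordinary gzip format, so it can also be decompressed with plain gzip:

pigz -p 4 backup.tar     # compress with 4 threads, produces backup.tar.gz
unpigz backup.tar.gz     # decompress (largely single-threaded, as noted above)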

8. pixz

pixz is a parallel implementation of the XZ compressor with support for data indexing. Instead of producing one big block of compressed data like xz, it creates a set of smaller blocks. This makes randomly accessing the original data straightforward. Moreover, pixz also makes sure that the file permissions are preserved the way they were during compression and decompression.

9. plzip

plzip is a lossless data compressor tool that makes creative use of the multi-threading capabilities supported by modern CPUs. It is built on top of the lzlib library and provides a command-line interface similar to gzip and bzip2. One key benefit of plzip is its ability to fully leverage multiprocessor machines. plzip definitely warrants a try for admins who need a high-performance Linux compression tool to support parallel compression.

10. XZ Utils

XZ Utils is a suite of compression tools for Linux that can compress and decompress .xz and .lzma files. It primarily uses the LZMA2 algorithm for compression and can perform integrity checks of compressed data with ease. Since this tool is available in popular Linux distributions by default, it can be a viable choice for compression in many situations.

Wrapping Up

A plethora of reliable Linux compression tools makes it easy to archive and back up essential data . You can choose from many lossless compressors with high compression ratios such as LZ4, lzop, and bzip2. On the other hand, tools like Zstandard and plzip allow for more advanced compression workflows.

[Mar 12, 2021] How to measure elapsed time in bash by Dan Nanni

Mar 09, 2021 | www.xmodulo.com
When you call date with the +%s option, it shows the current system clock in seconds since 1970-01-01 00:00:00 UTC. Thus, with this option, you can easily calculate the time difference in seconds between two clock measurements.
start_time=$(date +%s)
# perform a task
end_time=$(date +%s)

# elapsed time with second resolution
elapsed=$(( end_time - start_time ))

Another (preferred) way to measure elapsed time in seconds in bash is to use the built-in bash variable SECONDS. When you access the SECONDS variable in a bash shell, it returns the number of seconds that have passed since the current shell was launched. Since this method does not require running the external date command in a subshell, it is a more elegant solution.

start_time=$SECONDS
sleep 5
elapsed=$(( SECONDS - start_time ))
echo $elapsed

This will display elapsed time in terms of the number of seconds. If you want a more human-readable format, you can convert $elapsed output as follows.

eval "echo Elapsed time: $(date -ud "@$elapsed" +'$((%s/3600/24)) days %H hr %M min %S sec')"

This will produce output like the following.

Elapsed time: 0 days 13 hr 53 min 20 sec

[Mar 07, 2021] A brief introduction to Ansible roles for Linux system administration by Shiwani Biradar

Jan 26, 2021 | www.redhat.com

Nodes

In Ansible architecture, you have a controller node and managed nodes. Ansible is installed on only the controller node. It's an agentless tool and doesn't need to be installed on the managed nodes. Controller and managed nodes are connected using the SSH protocol. All tasks are written into a "playbook" using the YAML language. Each playbook can contain multiple plays, which contain tasks, and tasks contain modules. Modules are reusable standalone scripts that manage some aspect of a system's behavior. Ansible modules are also known as task plugins or library plugins.
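
To make the playbook/play/task/module hierarchy concrete, here is a minimal hypothetical playbook (the host group webservers and the package name are examples only):

---
- name: example play
  hosts: webservers
  become: true
  tasks:
    - name: ensure httpd is installed    # a task that calls the yum module
      yum:
        name: httpd
        state: present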

Roles

Playbooks for complex tasks can become lengthy and therefore difficult to read and understand. The solution to this problem is Ansible roles. Using roles, you can break long playbooks into multiple files, making each playbook simple to read and understand. Roles are a collection of templates, files, variables, modules, and tasks. The primary purpose behind roles is to reuse Ansible code. DevOps engineers and sysadmins should always try to reuse their code. An Ansible role can contain multiple playbooks, and it can easily reuse code written by anyone if the role is suitable for a given case. For example, you could write a playbook for Apache hosting and then reuse this code by changing the content of index.html to alter options for some other application or service.

The following is an overview of the Ansible role structure. It consists of many subdirectories, such as:

|-- README.md
|-- defaults
|-------main.yml
|-- files
|-- handlers
|-------main.yml
|-- meta
|-------main.yml
|-- tasks
|-------main.yml
|-- templates
|-- tests
|-------inventory
|-- vars
|-------main.yml

Initially, all files are created empty by using the ansible-galaxy command. So, depending on the task, you can use these directories. For example, the vars directory stores variables. In the tasks directory, you have main.yml , which is the main playbook. The templates directory is for storing Jinja templates. The handlers directory is for storing handlers.
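
As an illustration of how these pieces fit together (the role name apache and its content are hypothetical), a role's tasks/main.yml and a playbook that applies it might look like:

# roles/apache/tasks/main.yml
- name: install apache
  yum:
    name: httpd
    state: present

# site.yml -- applies the role to a host group
---
- hosts: webservers
  become: true
  roles:
    - apache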

Advantages of Ansible roles:

Ansible roles are structured directories containing sub-directories.

But did you know that Red Hat Enterprise Linux also provides some Ansible System Roles to manage operating system tasks?

System roles

The rhel-system-roles package is available in the Extras (EPEL) channel. The rhel-system-roles package is used to configure RHEL hosts. There are seven default rhel-system-roles available:

The rhel-system-roles package is derived from the open source linux-system-roles project, which is available on Ansible Galaxy. The rhel-system-roles package is supported by Red Hat, so you can think of rhel-system-roles as the downstream of linux-system-roles. To install rhel-system-roles on your machine, use:

$ sudo yum -y install rhel-system-roles
or
$ sudo dnf -y install rhel-system-roles

These roles are located in the /usr/share/ansible/roles/ directory.


This is the default path, so whenever you use playbooks to reference these roles, you don't need to explicitly include the absolute path. You can also refer to the documentation for using Ansible roles. The path for the documentation is /usr/share/doc/rhel-system-roles.

The documentation directory for each role has detailed information about that role. For example, the README.md file contains an example of using that role. The documentation is self-explanatory.

The following is an example of a role.

Example

If you want to change the SELinux mode of the localhost machine or any host machine, then use the system roles. For this task, use rhel-system-roles.selinux

For this task the ansible-playbook looks like this:

---
- name: a playbook for SELinux mode
  hosts: localhost
  roles:
    - rhel-system-roles.selinux
  vars:
    selinux_state: disabled

After running the playbook, you can verify whether the SELinux mode changed or not.
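
One quick way to check is with getenforce or sestatus on the target host; note that switching to or from disabled typically requires a reboot before the new mode is reported:

getenforce
sestatus | grep -i mode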

[ Looking for more on system automation? Get started with The Automated Enterprise, a free book from Red Hat . ]


[Mar 05, 2021] Edge servers can be strategically placed within the topography of a network to reduce the latency of connecting with them and serve as a buffer to help mitigate overloading a data center

Mar 05, 2021 | opensource.com

... Edge computing is a model of infrastructure design that places many "compute nodes" (a fancy word for a server ) geographically closer to people who use them most frequently. It can be part of the open hybrid-cloud model, in which a centralized data center exists to do all the heavy lifting but is bolstered by smaller regional servers to perform high frequency -- but usually less demanding -- tasks...

Historically, a computer was a room-sized device hidden away in the bowels of a university or corporate head office. Client terminals in labs would connect to the computer and make requests for processing. It was a centralized system with access points scattered around the premises. As modern networked computing has evolved, this model has been mirrored unexpectedly. There are centralized data centers to provide serious processing power, with client computers scattered around so that users can connect. However, the centralized model makes less and less sense as demands for processing power and speed are ramping up, so the data centers are being augmented with distributed servers placed on the "edge" of the network, closer to the users who need them.

The "edge" of a network is partly an imaginary place because network boundaries don't exactly map to physical space. However, servers can be strategically placed within the topography of a network to reduce the latency of connecting with them and serve as a buffer to help mitigate overloading a data center.

... ... ...

While it's not exclusive to Linux, container technology is an important part of cloud and edge computing. Getting to know Linux and Linux containers helps you learn to install, modify, and maintain "serverless" applications. As processing demands increase, it's more important to understand containers, Kubernetes and KubeEdge , pods, and other tools that are key to load balancing and reliability.

... ... ...

The cloud is largely a Linux platform. While there are great layers of abstraction, such as Kubernetes and OpenShift, when you need to understand the underlying technology, you benefit from a healthy dose of Linux knowledge. The best way to learn it is to use it, and Linux is remarkably easy to try . Get the edge on Linux so you can get Linux on the edge.

[Mar 04, 2021] Tips for using screen - Enable Sysadmin

Mar 04, 2021 | www.redhat.com

Rather than trying to limit yourself to just one session or remembering what is running on which screen, you can set a name for the session by using the -S argument:

[root@rhel7dev ~]# screen -S "db upgrade"
[detached from 25778.db upgrade]

[root@rhel7dev ~]# screen -ls
There are screens on:
    25778.db upgrade    (Detached)
    25706.pts-0.rhel7dev    (Detached)
    25693.pts-0.rhel7dev    (Detached)
    25665.pts-0.rhel7dev    (Detached)
4 Sockets in /var/run/screen/S-root.

[root@rhel7dev ~]# screen -x "db upgrade"
[detached from 25778.db upgrade]

[root@rhel7dev ~]#

To exit a screen session, you can type exit or hit Ctrl+A and then D .

Now that you know how to start, stop, and label screen sessions let's get a little more in-depth. To split your screen session in half vertically hit Ctrl+A and then the | key ( Shift+Backslash ). At this point, you'll have your screen session with the prompt on the left:


To switch to your screen on the right, hit Ctrl+A and then the Tab key. Your cursor is now in the right session, but there's no prompt. To get a prompt hit Ctrl+A and then C . I can do this multiple times to get multiple vertical splits to the screen:


You can now toggle back and forth between the two screen panes by using Ctrl+A+Tab .

What happens when you cat out a file that's larger than your console can display and so some content scrolls past? To scroll back in the buffer, hit Ctrl+A and then Esc . You'll now be able to use the cursor keys to move around the screen and go back in the buffer.

There are other options for screen; to see them all, hit Ctrl+A and then the question mark.
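
For quick reference, these are the bindings used in this article (all start with the Ctrl+A prefix):

Ctrl+A d      detach from the current session
Ctrl+A |      split the window vertically
Ctrl+A Tab    move focus to the next region
Ctrl+A c      open a new shell in the focused region
Ctrl+A Esc    enter scrollback (copy) mode
Ctrl+A ?      show all key bindings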


[ Free online course: Red Hat Enterprise Linux technical overview . ]

Further reading can be found in the man page for screen. This article is a quick introduction to using the screen command so that a disconnected remote session does not end up killing a process accidentally. Another program that is similar to screen is tmux, and you can read about tmux in this article.

[Mar 03, 2021] How to move /var directory to another partition

Mar 03, 2021 | linuxconfig.org

18 November 2020

The /var directory has filled up and you are left with no free disk space available. This is a typical scenario which can be easily fixed by mounting your /var directory on a different partition. Let's get started by attaching new storage, partitioning it, and creating the desired file system. The exact steps may vary and are not part of this config article. Once ready, obtain the partition UUID of your new var partition, e.g. /dev/sdc1:
# blkid | grep sdc1
/dev/sdc1: UUID="1de46881-1f49-440e-89dd-6c32592491a7" TYPE="ext4" PARTUUID="652a2fee-01"
Create a new mount point and mount your new partition:
# mkdir /mnt/newvar
# mount /dev/sdc1 /mnt/newvar
Confirm that it is mounted. Note, your output will be different:
# df -h /mnt/newvar
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc1       1.8T  1.6T  279G  85% /mnt/newvar
Copy current /var data to the new location:
# rsync -aqxP /var/* /mnt/newvar
Unmount new partition:
# umount /mnt/newvar
Edit your /etc/fstab to include new partition and choosing a relevant file-system:
UUID=1de46881-1f49-440e-89dd-6c32592491a7 /var        ext4    defaults        0       2
Reboot your system and you are done. Confirm that everything is working correctly, and optionally remove the old var directory by booting into some live Linux system, etc.
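
After the reboot, a quick sanity check can confirm that /var is now served by the new partition (commands assume the setup above):

# findmnt /var
# df -h /var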

[Mar 03, 2021] partitioning - How to move boot and root partitions to another drive - Ask Ubuntu

Mar 03, 2021 | askubuntu.com

mlissner asked:

I have two drives on my computer that have the following configuration:

Drive 1: 160GB, /home
Drive 2: 40GB, /boot and /

Unfortunately, drive 2 seems to be dying, because trying to write to it is giving me errors, and checking out the SMART settings shows a sad state of affairs.

I have plenty of space on Drive 1, so what I'd like to do is move the / and /boot partitions to it, remove Drive 2 from the system, replace Drive 2 with a new drive, then reverse the process.

I imagine I need to do some updating to grub, and I need to move some things around, but I'm pretty baffled how to exactly go about this. Since this is my main computer, I want to be careful not to mess things up so I can't boot. (asked Sep 1 '10 by mlissner)

Lucas commented: This is exactly what I had to do as well. I wrote a blog with full instructions on how to move the root partition / to /home. (Sep 17 '18)

maco answered:

You'll need to boot from a live cd. Add partitions for them to disk 1, copy all the contents over, and then use sudo blkid to get the UUID of each partition. On disk 1's new /, edit the /etc/fstab to use the new UUIDs you just looked up.

Updating GRUB depends on whether it's GRUB1 or GRUB2. If GRUB1, you need to edit /boot/grub/device.map

If GRUB2, I think you need to mount your partitions as they would be in a real situation. For example:

sudo mkdir /media/root
sudo mount /dev/sda1 /media/root
sudo mount /dev/sda2 /media/root/boot
sudo mount /dev/sda3 /media/root/home

(Filling in whatever the actual partitions are that you copied things to, of course)

Then bind mount /proc and /dev in the /media/root:

sudo mount -B /proc /media/root/proc
sudo mount -B /dev /media/root/dev
sudo mount -B /sys /media/root/sys

Now chroot into the drive so you can force GRUB to update itself according to the new layout:

sudo chroot /media/root
sudo update-grub

The second command will make one complaint (I forget what it is though...), but that's ok to ignore.

Test it by removing the bad drive. If it doesn't work, the bad drive should still be able to boot the system, but I believe these are all the necessary steps. (answered Sep 1 '10 by maco)

FYI to anyone viewing this these days, this does not apply to EFI setups. You need to mount /media/root/boot/efi, among other things. – wjandrea, Sep 10 '16

sBlatt answered:

If you replace the drive right away you can use dd (tried it on my server some months ago, and it worked like a charm).

You'll need a boot-CD for this as well.

  1. Start boot-CD
  2. Only mount Drive 1
  3. Run dd if=/dev/sdb1 of=/media/drive1/backuproot.img - sdb1 being your root ( / ) partition. This will save the whole partition in a file.
    • same for /boot
  4. Power off, replace disk, power on
  5. Run dd if=/media/drive1/backuproot.img of=/dev/sdb1 - write it back.
    • same for /boot

The above will create 2 partitions with the exact same size as they had before. You might need to adjust grub (check maco's post).

If you want to resize your partitions (as i did):

  1. Create 2 Partitions on the new drive (for / and /boot ; size whatever you want)
  2. Mount the backup-image: mount /media/drive1/backuproot.img /media/backuproot/
  3. Mount the empty / partition: mount /dev/sdb1 /media/sdb1/
  4. Copy its contents to the new partition (i'm unsure about this command, it's really important to preserve ownership, cp -R won't do it!) cp -R --preserve=all /media/backuproot/* /media/sdb1
    • same for /boot/

This should do it. (answered Sep 1 '10 by sBlatt)


It turns out that the new "40GB" drive I'm trying to install is smaller than my current "40GB" drive. I have both of them connected, and I'm booted into a liveCD. Is there an easy way to just dd from the old one to the new one, and call it a done deal? – mlissner Sep 4 '10 at 3:02

mlissner answered:

My final solution to this was a combination of a number of techniques:

  1. I connected the dying drive and its replacement to the computer simultaneously.
  2. The new drive was smaller than the old, so I shrank the partitions on the old using GParted.
  3. After doing that, I copied the partitions on the old drive, and pasted them on the new (also using GParted).
  4. Next, I added the boot flag to the correct partition on the new drive, so it was effectively a mirror of the old drive.

This all worked well, but I needed to update grub2 per the instructions here .

After all this was done, things seem to work. (answered Sep 4 '10 by mlissner)


Finally, this solved it for me. I had a Virtualbox disk (vdi file) that I needed to move to a smaller disk. However Virtualbox does not support shrinking a vdi file, so I had to create a new virtual disk and copy over the linux installation onto this new disk. I've spent two days trying to get it to boot. – j.karlsson Dec 19 '19 at 9:48

[Mar 03, 2021] How to Migrate the Root Filesystem to a New Disk - Support - SUSE

Mar 03, 2021 | www.suse.com

This document (7018639) is provided subject to the disclaimer at the end of this document.

Environment: SLE 11, SLE 12

Situation: The root filesystem needs to be moved to a new disk or partition.

Resolution:

1. Use the media to go into rescue mode on the system. This is the safest way to copy data from the root disk so that it's not changing while we are copying from it. Make sure the new disk is available.

2. Copy data at the block(a) or filesystem(b) level depending on preference from the old disk to the new disk.
NOTE: If the dd command is not being used to copy data from an entire disk to an entire disk the partition(s) will need to be created prior to this step on the new disk so that the data can copied from partition to partition.

a. Here is a dd command for copying at the block level (the disks do not need to be mounted):
# dd if=/dev/<old root disk> of=/dev/<new root disk> bs=64k conv=noerror,sync

The dd command is not verbose and depending on the size of the disk could take some time to complete. While it is running the command will look like it is just hanging. If needed, to verify it is still running, use the ps command on another terminal window to find the dd command's process ID and use strace to follow that PID and make sure there is activity.
# ps aux | grep dd
# strace -p<process id>

After confirming activity, hit CTRL + c to end the strace command. Once the dd command is complete the terminal prompt will return allowing for new commands to be run.

b. Alternatively to dd, mount the disks and then use an rsync command for copying at the filesystem level:
# mount /dev/<old root disk> /mnt
# mkdir /mnt2
(If the new disk's root partition doesn't have a filesystem yet, create it now.)
# mount /dev/<new root disk> /mnt2
# rsync -zahP /mnt/ /mnt2/

This command is much more verbose than dd and there shouldn't be any issues telling that it is working. This does generally take longer than the dd command.

3. Setting up the partition boot label with either fdisk(a) or parted(b)
NOTE: This step can be skipped if the boot partition is separate from the root partition and has not changed. Also, if dd was used on an entire disk to an entire disk in section "a" of step 2, you can still skip this step, since the partition table will have been copied to the new disk (if the partitions are not showing as available yet on the new disk, run "partprobe" or enter fdisk and save with no changes). This exception does not apply when dd was used on only a partition.

a. Using fdisk to label the new root partition (which contains boot) as bootable.
# fdisk /dev/<new root disk>

From the fdisk shell type 'p' to list and verify the root partition is there.
Command (m for help): p
If the "Boot" column of the root partition does not have an "*" symbol then it needs to be activated. Type 'a' to toggle the bootable partition flag: Command (m for help): a Partition number (1-4): <number from output p for root partition>

After that use the 'p' command to verify the bootable flag is now enabled. Finally, save changes: Command (m for help): w

b. Alternatively to fdisk, use parted to label the new root partition (which contains boot) as bootable.
# parted /dev/sda

From the parted shell type "print" to list and verify the root partition is there.
(parted) print

If the "Flags" column of the root partition doesn't include "boot", then it will need to be enabled.
(parted) set <root partition number> boot on

After that, use the "print" command again to verify the flag is now listed for the root partition, then exit parted to save the changes:
(parted) quit

4. Updating Legacy GRUB(a) on SLE11 or GRUB2(b) on SLE12.
NOTE: Steps 4 through 6 will need to be done in a chroot environment on the new root disk. TID7018126 covers how to chroot in rescue mode: https://www.suse.com/support/kb/doc?id=7018126

a. Updating Legacy GRUB on SLE11
# vim /boot/grub/menu.lst

There are two changes that may need to occur in the menu.lst file. 1. If the contents of /boot are in the root partition which is being changed, we'll need to update the line "root (hd#,#)" which points to the disk with the contents of /boot.

Since the sd[a-z] device names are not persistent it's recommended to find the equivalent /dev/disk/by-id/ or /dev/disk/by-path/ disk name and to use that instead. Also, the device name might be different in chroot than it was before chroot. Run this command to verify the disk name in chroot:
# mount

For this line Grub uses "hd[0-9]" rather than "sd[a-z]" so sda would be hd0 and sdb would be hd1, and so on. Match to the disk as shown in the mount command within chroot. The partition number in Legacy Grub also starts at 0. So if it were sda1 it would be hd0,0 and if it were sdb2 it would be hd1,1. Update that line accordingly.

2. in the line starting with the word "kernel" (generally just below the root line we just went over) there should be a root=/dev/<old root disk> parameter. That will need to be updated to match the path and device name of the new root partition. root=/dev/disk/by-id/<new root partition> Also, if the swap partition was changed to the new disk you'll need to reflect that with the resume= parameter.
Save and exit after making the above changes as needed.
Next, run this command:
# yast2 bootloader
(You may get a warning message about the boot loader. This can be ignored.)

Go to the "Boot Loader Installation" tab with ALT + a. Verify it is set to boot from the correct partition. For example, if the content of /boot is in the root partition, then make sure it is set to boot from the root partition. Lastly, hit ALT + o so that it will save the configuration. While the YaST2 module is exiting, it should also install the boot loader.
b. Updating GRUB2 on SLE12
# vim /etc/default/grub

The parameter to update is the GRUB_CMDLINE_LINUX_DEFAULT. If there is a "root=/dev/<old root disk>" parameter update it so that it is "root=/dev/<new root disk>". If there is no root= parameter in there add it. Each parameter is space separated so make sure there is a space separating it from the other parameters. Also, if the swap partition was changed to the new disk you'll need to reflect that with the resume= parameter.

Since the sd[a-z] device names are not persistent it's recommended to find the equivalent /dev/disk/by-id/ or /dev/disk/by-path/ disk name and to use that instead. Also, the device name might be different in chroot than it was before chroot. Run this command to verify the disk name in chroot before comparing with by-id or by-path:
# mount

It might look something like this afterward:
GRUB_CMDLINE_LINUX_DEFAULT="root=/dev/disk/by-id/<partition/disk name> resume=/dev/disk/by-id/<partition/disk name> splash=silent quiet showopts"

After saving changes to that file, run this command to save them to the GRUB2 configuration:
# grub2-mkconfig -o /boot/grub2/grub.cfg
(You can ignore any errors about lvmetad in the output of the above command.)

After that, run this command on the disk with the root partition. For example, if the root partition is sda2, run this command on sda:
# grub2-install /dev/<disk of root partition>

5. Correct the fstab file to match new partition name(s)
# vim /etc/fstab

Correct the root (/) partition mount row in the file so that it points to the new disk/partition name. If any other partitions were changed, they will need to be updated as well. For example, change from:
/dev/<old root disk> / ext3 defaults 1 1
to:
/dev/disk/by-id/<new root disk> / ext3 defaults 1 1

The 3rd through 6th column may vary from the example. The important aspect is to change the row that is root (/) on the second column and adjust in particular the first column to reflect the new root disk/partition. Save and exit after making needed changes.
6. Lastly, run the following command to rebuild the ramdisk to match the updated information:
# mkinitrd

7. Exit chroot and reboot the system to test if it will boot using the new disk. Make sure to adjust the BIOS boot order so that the new disk is prioritized first.

Additional Information

The range of environments that can impact the necessary steps to migrate a root filesystem makes it near impossible to cover every case. Some environments could require tweaks to the steps needed to make this migration a success. As always in administration, have backups ready and proceed with caution.

Disclaimer

This Support Knowledgebase provides a valuable tool for SUSE customers and parties interested in our products and solutions to acquire information, ideas and learn from one another. Materials are provided for informational, personal or non-commercial use within your organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.

[Mar 03, 2021] How to move Linux root partition to another drive quickly - by Dominik Gacek - Medium

Mar 03, 2021 | medium.com

Jun 21, 2019

There's a bunch of information around the internet on how to clone Linux drives or partitions to other drives and partitions using solutions like partclone, clonezilla, partimage, dd or similar, and while most of them work just fine, they're not always the fastest possible way to achieve the result.

Today I want to show you another approach that combines most of them, and I am finding it the easiest and fastest of all.

Assumptions:

  1. You are using GRUB 2 as a boot loader
  2. You have two disks/partitions where a destination one is at least the same size or larger than the original one.

Let's dive into action.

Just "dd" it

The first thing we have to do is create a direct copy of our current root partition from the source disk onto the target one.

Before you start, you have to know the device names of your drives; to check, type in:

sudo fdisk -l

You should see a list of all the disks and partitions in your system, along with the corresponding device names, most probably something like /dev/sdx, where x is the device letter. In addition, you'll see all of the partitions for that device suffixed with a partition number, e.g. /dev/sdx1.

Based on the partition size, device identifier and the file system, you can tell which partition you'll move your installation from and which one will be the target.

I am assuming here, that you already have the proper destination partition created, but if you do not, you can utilize one of the tools like GParted or similar to create it.
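If the fdisk output is hard to read, one alternative worth knowing (my addition, not part of the article) is lsblk, which shows sizes, filesystems and mount points in a single table:

sudo lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT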

Once you have those identifiers, let's use dd to create a clone with a command similar to:

sudo dd if=/dev/sdx1 of=/dev/sdy1 bs=64K conv=noerror,sync

Where /dev/sdx1 is your source partition, and /dev/sdy1 is your destination one.

It's really important to provide the proper devices to the if and of arguments, because otherwise you can overwrite your source disk instead!
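A side note that's not in the original article: GNU dd (coreutils 8.24 and newer) can report progress while it copies, which is handy for a long clone. Same placeholder device names as above:

sudo dd if=/dev/sdx1 of=/dev/sdy1 bs=64K conv=noerror,sync status=progress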

The above process will take a while, and once it's finished you should already be able to mount your new partition into the system by using two commands:

sudo mkdir /mnt/new
sudo mount /dev/sdy1 /mnt/new

There's also a chance that your device will be mounted automatically, but that depends on your Linux distro of choice.

Once you execute it, if everything went smoothly you should be able to run

ls -l /mnt/new

And as the outcome you should see all the files from the original root partition stored in the new location.

It finishes the first and most important part of the operation.
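One optional step the article skips: if the destination partition is larger than the source, the cloned filesystem still reports the old size. Assuming an ext2/3/4 filesystem and the same placeholder device name, it can be checked and grown like this (with the partition unmounted):

sudo umount /mnt/new           # the filesystem must not be mounted for these two steps
sudo e2fsck -f /dev/sdy1       # force a consistency check after the raw copy
sudo resize2fs /dev/sdy1       # grow the filesystem to fill the whole partition
sudo mount /dev/sdy1 /mnt/new  # mount it again for the steps that follow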

Now the tricky part

We now have our partition moved onto the shiny new drive, but there's a problem: since they're direct clones, both devices have the same UUID. If we want to boot the installation from the new device properly, we'll have to adjust that as well.

First, execute the following command to see the current disk UUIDs:

blkid

You'll see all of the partitions with the corresponding UUID.
Now, if we want to change it we have to first generate a new one using:

uuidgen

which will generate a brand new UUID for us; then let's copy its result and execute a command similar to:

sudo tune2fs /dev/sdy1 -U cd6ecfb1-05e0-4dd7-89e7-8e78dad1fa0e

where in place of /dev/sdy1 you should provide your target partition device identifier, and in place of -U flag value, you should paste the value generated from uuidgen command.
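As a small convenience (my shortcut, not the article's), the two steps can be combined so the fresh UUID goes straight to tune2fs; /dev/sdy1 is still a placeholder:

sudo tune2fs -U "$(uuidgen)" /dev/sdy1
sudo blkid /dev/sdy1          # verify that the new UUID took effect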

Now the last thing to do is to update the fstab file on the new partition so that it contains the proper UUID. To do this, let's edit it with:

sudo vim /mnt/new/etc/fstab
# or nano or whatever editor of choice

you'll see something similar to the code below inside:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdc1 during installation
UUID=cd6ecfb1-05e0-4dd7-89e7-8e78dad1fa0e / ext4 errors=remount-ro 0 1
# /home was on /dev/sdc2 during installation
UUID=667f98f4-9db1-415b-b326-65d16c528e29 /home ext4 defaults 0 2
/swapfile none swap sw 0 0
UUID=7AA7–10F1 /boot/efi vfat defaults 0 1

The UUID on the line mounted at / is the important part for us: what we want to do is paste our new UUID there, replacing the current one specified for the / path.

And that's almost it

The last part you have to do is to simply update the grub.

There are a number of options here; the brave ones can edit /boot/grub/grub.cfg directly.

Another option is to simply reinstall GRUB onto our new drive with a command like:

sudo grub-install /dev/sdy

where /dev/sdy is the destination drive (the whole disk, not a partition).

And if you do not want to bother with editing or reinstalling grub manually, you can simply use the tool called grub-customizer to have a simple and easy GUI for all of those operations.
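Whichever route you take, it is usually also worth regenerating the GRUB configuration afterwards so the new UUID is picked up; a hedged sketch for Debian/Ubuntu-style systems:

sudo update-grub        # wrapper around grub-mkconfig -o /boot/grub/grub.cfg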

Happy partitioning! :)

[Mar 03, 2021] HDD to SSD cloning on Linux without re-installing - PCsuggest

Mar 03, 2021 | www.pcsuggest.com

HDD to SSD cloning on Linux without re-installing

Updated - March 25, 2020 by Arnab Satapathi

No doubt the old spinning hard drives are the main bottleneck of any Linux PC. Overall system responsiveness is highly dependent on storage drive performance.

So, here's how you can clone a HDD to a SSD without re-installing the existing Linux distro. But first, let's be clear about a few things.

Of course it's not the only way to clone Linux from HDD to SSD; rather, it's exactly what I did after buying an SSD for my laptop.

This tutorial should work on every Linux distro with a little modification, depending on which distro you're using; I was using Ubuntu.


Hardware setup

As you're going to copy files from the hard drive to the SSD, you need to attach both disks to your PC/laptop at the same time.

For desktops it's easier, as there are always at least 2 SATA ports on the motherboard. You just have to connect the SSD to any free SATA port and you're done.


On laptops it's a bit tricky, as there's no free SATA port. If the laptop has a DVD drive, then you could remove it and use a "2nd hard drive caddy".


It could be either 9.5 mm or 12.7 mm. Open up your laptop's DVD drive and get a rough measurement.

But if you don't want to play around with your DVD drive or there's no DVD at all, use a USB to SATA adapter .

Preferably a USB 3 adapter for better speed. However, the caddy is the best you can do with your laptop.


You'll need a bootable USB drive for later steps, booting any live Linux distro of your choice; I used Ubuntu.

You could use any method to create it; the dd approach is the simplest. There are detailed tutorials on doing it with MultiBootUSB and on creating a bootable USB with GRUB.

Create Partitions on the SSD

After successfully attaching the SSD, you need to partition it according to its capacity and your needs. My SSD, a SAMSUNG 850 EVO, was absolutely blank, and yours might be too. So, I had to create the partition table before creating disk partitions.

Now many questions arise, like: What kind of partition table? How many partitions? Is there any need for a swap partition?


Well, if your laptop/PC has UEFI firmware and you want to use the UEFI functionality, you should use the GPT partition table.

For regular desktop use, 2 separate partitions are enough: a root partition and a home partition. But if you want to boot through UEFI, then you also need to create a FAT32 partition of 100 MB or more.

I think a 32 GB root partition is enough, but you have to decide based on your future plans. You can go as low as an 8 GB root partition if you know what you're doing.


Of course you don't need a dedicated swap partition, at least in my opinion. If there's any need for swap in the future, you can just create a swap file.
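For completeness, here is a minimal sketch of creating such a swap file later (my addition, not the article's; the 2G size and /swapfile path are arbitrary examples):

sudo fallocate -l 2G /swapfile       # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # make it permanent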

So, here's how I partitioned the disk: it's formatted with the MBR partition table, with a 32 GB root partition, and the rest of the 256 GB (232.89 GiB) is home.

These SSD partitions were created with GParted on the existing Linux system on the HDD. The SSD was connected to the DVD drive slot with a caddy, showing up as /dev/sdb here.

Mount the HDD and SSD partitions

At the beginning of this step, you need to shutdown your PC and boot to any live Linux distro of your choice from a bootable USB drive.

The purpose of booting into a live Linux session is to copy everything from the old root partition in a cleaner way. I mean, why copy unnecessary files or directories under /dev, /proc, /sys, /var, /tmp?


And of course you know how to boot from a USB drive, so I'm not going to repeat that. After booting into the live session, you have to mount both the HDD and the SSD.

As I used an Ubuntu live session, I just opened up the file manager to mount the volumes. At this point you have to be absolutely sure which are the old and new root and home partitions.

And if you didn't have a separate /home partition on the HDD previously, then you have to be careful while copying files, as there could be lots of content that won't fit inside the tiny root volume of the SSD.

Finally, if you don't want to use a graphical tool like a file manager to mount the disk partitions, that's even better. An example below: only commands, not much explanation.

sudo -i    # after booting to the live session

mkdir -p /mnt/{root1,root2,home1,home2}       # Create the directories

mount /dev/sdb1 /mnt/root1/       # mount the root partitions
mount /dev/sdc1 /mnt/root2/

mount /dev/sdb2 /mnt/home1/       # mount the home partitions
mount /dev/sdc2 /mnt/home2/
Copy contents from the HDD to SSD

In this step, we'll be using the rsync command to clone the HDD to the SSD while preserving proper file permissions. And we'll assume that all partitions are mounted like below.

  • Old root partition of the hard drive mounted on /media/ubuntu/root/
  • Old home partition of the hard drive on /media/ubuntu/home/
  • New root partition of the SSD, on /media/ubuntu/root1/
  • New home partition of the SSD mounted on /media/ubuntu/home1/

Actually in my case, both the root and home partitions were labelled as root and home, so udisks2 created the mount directories as above.

Note: Most probably your mount points are different. Don't just copy paste the commands below, modify them according to your system and requirements.


First copy the contents of one root partition to another.

rsync -axHAWXS --numeric-ids --info=progress2 /media/ubuntu/root/ /media/ubuntu/root1/

You can also see the transfer progress, that's helpful.

The copying process will take about 10 minutes or so to complete, depending on the size of its contents.

Note: If there was no separate home partition on your previous installation and there's not enough space in the SSD's root partition, exclude the /home directory.

For that, we'll use the rsync command again.

rsync -axHAWXS --numeric-ids --info=progress2 --exclude=/home /media/ubuntu/root/ /media/ubuntu/root1/

Now copy the contents of one home partition to another, and this is a bit tricky if your SSD is smaller in size than the HDD. You have to use the --exclude flag with rsync to exclude certain large files or folders.

So here, for example, I wanted to exclude a few excessively large folders.

rsync -axHAWXS --numeric-ids --info=progress2 --exclude={b00m/OS,b00m/Downloads} /media/ubuntu/home/ /media/ubuntu/home1/

Excluding files and folders with rsync is a bit tricky: the source folder is the starting point of any file or directory path, so make sure the exclude paths are given relative to it.


Note: You need to go through the below step only if you excluded the /home directory while cloning to SSD, as said above.

rsync -axHAWXS --numeric-ids --info=progress2 /media/ubuntu/root/home/ /media/ubuntu/home1/

Hope you've got the point: for proper HDD to SSD cloning in Linux, copy the contents of the HDD's root partition to the new SSD's root partition, and do the same thing for the home partition too.

Install GRUB bootloader on the SSD

The SSD won't boot until there's a properly configured bootloader. And there's a very good chance that you were using GRUB as the boot loader.

So, to install GRUB, we have to chroot into the root partition of the SSD and install it from there. Before that, be sure about which device under the /dev directory is your SSD. In my case, it was /dev/sdb.

Note: You can just copy the first 512 bytes from the HDD and dump them to the SSD, but I'm not going that way this time.

So, the first step is chrooting; here are all the commands below, running all of them as the super user.

sudo -i               # login as super user

mount -o bind /dev/ /media/ubuntu/root1/dev/
mount -o bind /dev/pts/ /media/ubuntu/root1/dev/pts/ 
mount -o bind /sys/ /media/ubuntu/root1/sys/
mount -o bind /proc/ /media/ubuntu/root1/proc/

chroot /media/ubuntu/root1/


After successfully chrooting into the SSD's root partition, install GRUB. And there's also a catch: if you want to use a UEFI-compatible GRUB, it's another, longer path. But we'll be installing the legacy BIOS version of GRUB here.

grub-install /dev/sdb --boot-directory=/boot/ --target=i386-pc

If GRUB is installed without any problem, then update the configuration file.

update-grub

These two commands above are to be run inside the chroot, and don't exit from the chroot now. Here's the detailed GRUB rescue tutorial, both for legacy BIOS and UEFI systems.

Update the fstab entry

You have to update the fstab entries so that the filesystems are mounted properly while booting.

Use the blkid command to find the proper UUIDs of the partitions.

Now open up the /etc/fstab file with your favorite text editor and add the proper root and home UUID at proper locations.

nano /etc/fstab

[Image: the final fstab entry from my laptop's Ubuntu installation]
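As an illustration only (the placeholders below are not from the article; use the exact values your blkid prints for the SSD partitions), the updated entries would look something like:

UUID=<uuid-of-ssd-root-partition>   /       ext4   errors=remount-ro   0   1
UUID=<uuid-of-ssd-home-partition>   /home   ext4   defaults            0   2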

Shutdown and boot from the SSD

If you were using a USB to SATA converter to do all the above steps, then it's time to connect the SSD to a SATA port.

For desktops it's not a problem, just connect the SSD to any of its available SATA ports. But many laptops refuse to boot if the DVD drive is replaced with an SSD or HDD. So, in that case, remove the hard drive and slip the SSD in its place.

After doing all the hardware stuff, it's better to check if the SSD is recognized by the BIOS/UEFI at all. Hit the BIOS setup button while powering it up, and check all the disks.

If the SSD is detected, then set it as the default boot device in the BIOS boot selection menu. Save all the changes to BIOS/UEFI and hit the power button again.

Now it's the moment of truth: if the HDD to SSD cloning was done right, Linux should boot. It will boot much faster than before; you can check that with the systemd-analyze command.
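For reference (my addition, not the article's), the two most useful invocations are:

systemd-analyze              # total time spent in firmware, loader, kernel and userspace
systemd-analyze blame        # per-unit startup times, slowest first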

Conclusion

As said before, it's neither the only way nor the perfect one, but it was pretty simple for me. I got the idea from the OpenWrt extroot setup, but previously used the squashfs tools instead of rsync.

It took around 20 minutes to clone my HDD to the SSD. But, well, writing this tutorial took around 15 times longer than that.

Hope I'll be able to add the GRUB installation process for UEFI based systems to this tutorial soon, stay tuned !

Also please don't forget to share your thoughts and suggestions on the comment section. Your comments

  1. Sh3l says

    December 21, 2020

    Hello,
    It seems you haven't gotten around writing that UEFI based article yet. But right now I really need the steps necessary to clone hdd to ssd in UEFI based system. Can you please let me know how to do it? Reply

    • Arnab Satapathi says

      December 22, 2020

      Create an extra UEFI partition, along with root and home partitions, FAT32, 100 to 200 MB, install GRUB in UEFI mode, it should boot.
      Commands should be like this -
      mount /dev/sda2 /boot/efi
      grub-install /dev/sda --target=x86_64-efi

      sda2 is the EFI partition.

      This could be helpful- https://www.pcsuggest.com/grub-rescue-linux/#GRUB_rescue_on_UEFI_systems

      Then edit the grub.cfg file under /boot/grub/ , you're good to go.

      If it's not booting try GRUB rescue, boot and install grub from there. Reply

  2. Pronay Guha says

    November 9, 2020

    I'm already using Kubuntu 20.04, and now I'm trying to add an SSD to my laptop. It is running windows alongside. I want the data to be there but instead of using HDD, the Kubuntu OS should use SSD. How to do it? Reply

  3. none says

    May 23, 2020

    Can you explain what to do if the original HDD has Swap and you don't want it on the SSD?
    Thanks. Reply

    • Arnab Satapathi says

      May 23, 2020

      You can ignore the Swap partition, as it's not essential for booting.

      Edit the /etc/fstab file, and use a swap file instead. Reply

  4. none says

    May 21, 2020

    A couple of problems:
    In one section you mount homeS and rootS as root1 root2 home1 home2 but in the next sectionS you call them root root1 home home1
    In the blkid image sda is SSD and sdb is HDD but you said in the previous paragraph that sdb is your SSD
    Thanks for the guide Reply

    • Arnab Satapathi says

      May 23, 2020

      The first portion is just an example, not the actual commands.

      There's some confusing paragraphs and formatting error, I agree. Reply

  5. oybek says

    April 21, 2020

    Thank you very much for the article
    Yesterday moved linux from hdd to ssd without any problem
    Brilliant article Reply

    • Pronay Guha says

      November 9, 2020

      hey, I'm trying to move Linux from HDD to SSD with windows as a dual boot option.
      What changes should I do? Reply

  6. Passingby says

    March 25, 2020

    Thank you for your article. It was very helpful. But i see one disadvantage. When you copy like cp -a /media/ubuntu/root/ /media/ubuntu/root1/ In root1 will be created root folder, but not all its content separately without folder. To avoid this you must add (*) after /
    It should be looked like cp -a /media/ubuntu/root/* /media/ubuntu/root1/ For my opinion rsync command is much more better. You see like files copping. And when i used cp, i did not understand the process hanged up or not. Reply

  7. David Keith says

    December 8, 2018

    Just a quick note: rsync, scp, cp etc. all seem to have a file size limitation of approximately 100GB. So this tutorial will work well with the average filesystem, but will bomb repeatedly if the file size is extremely large. Reply

  8. oldunixguy says

    June 23, 2018

    Question: If one doesn't need to exclude anything why not use "cp -a" instead of rsync?

    Question: You say "use a UEFI compatible GRUB, then it's another long path" but you don't tell us how to do this for UEFI. How do we do it? Reply

    • Arnab Satapathi says

      June 23, 2018

      1. Yeah, using cp -a is preferable if we don't have to exclude anything.
      2. At the moment of writing, I didn't had any PC/laptop with a UEFI firmware.

      Thanks for the feedback, fixed the first issue. Reply

  9. Alfonso says

    February 8, 2018

    best tutorial ever, thank you! Reply

    • Arnab Satapathi says

      February 8, 2018

      You're most welcome, truly I don't know how to respond such a praise. Thanks! Reply

  10. Emmanuel says

    February 3, 2018

    Far the best tutorial I've found "quickly" searching DuckDuckGo. Planning to migrate my system on early 2018. Thank you! I now visualize quite clearly the different steps I'll have to adapt and pass through. it also stick to the KISS* thank you again, the time you invested is very useful, at least for me!

    Best regards.

    Emmanuel Reply

    • Arnab Satapathi says

      February 3, 2018

      Wow! That's motivating, thanks Emmanuel.

[Mar 03, 2021] What Is /dev/shm And Its Practical Usage

Mar 03, 2021 | www.cyberciti.biz

Author: Vivek Gite. Last updated: March 14, 2006

/dev/shm is nothing but an implementation of the traditional shared memory concept. It is an efficient means of passing data between programs. One program creates a memory portion, which other processes (if permitted) can access. This results in speeding things up on Linux.

shm / shmfs is also known as tmpfs, which is a common name for a temporary file storage facility on many Unix-like operating systems. It is intended to appear as a mounted file system, but one which uses virtual memory instead of a persistent storage device.
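A quick way to see the RAM-backed behaviour (a throwaway sketch, not from the article; the file name is arbitrary) is to write a file there and watch the tmpfs usage change:

$ df -h /dev/shm                                        # note the current usage
$ dd if=/dev/zero of=/dev/shm/testfile bs=1M count=256  # create a 256 MB file in RAM
$ df -h /dev/shm                                        # usage grows by 256 MB
$ rm /dev/shm/testfile                                  # the memory is released again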


If you type the mount command you will see /dev/shm listed as a tmpfs file system. Therefore, it is a file system which keeps all files in virtual memory. Everything in tmpfs is temporary in the sense that no files will be created on your hard drive. If you unmount a tmpfs instance, everything stored therein is lost. By default almost all Linux distros are configured to use /dev/shm:
$ df -h
Sample outputs:

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/wks01-root
                      444G   70G  351G  17% /
tmpfs                 3.9G     0  3.9G   0% /lib/init/rw
udev                  3.9G  332K  3.9G   1% /dev
tmpfs                 3.9G  168K  3.9G   1% /dev/shm
/dev/sda1             228M   32M  184M  15% /boot
Nevertheless, where can I use /dev/shm?

You can use /dev/shm to improve the performance of application software such as Oracle, or overall Linux system performance. On a heavily loaded system, it can make a ton of difference. For example, VMware Workstation/Server can be optimized to improve your Linux host's performance (i.e. improve the performance of your virtual machines).

In this example, remount /dev/shm with 8G size as follows:
# mount -o remount,size=8G /dev/shm
To be frank, if you have more than 2GB of RAM plus multiple virtual machines, this hack always improves performance. In this example, you will create a tmpfs instance on /disk2/tmpfs which can allocate 5GB of RAM/swap across 5k inodes and is only accessible by root:
# mount -t tmpfs -o size=5G,nr_inodes=5k,mode=700 tmpfs /disk2/tmpfs
Where, size=5G sets the maximum size of the filesystem to 5 GB, nr_inodes=5k limits it to 5,000 inodes, and mode=700 makes it accessible only by root.

How do I restrict or modify size of /dev/shm permanently?

You need to add or modify the entry in the /etc/fstab file so that the system can read it after a reboot. Edit /etc/fstab as the root user:
# vi /etc/fstab
Append or modify the /dev/shm entry as follows to set its size to 8G:

none      /dev/shm        tmpfs   defaults,size=8G        0 0

Save and close the file. For the changes to take effect immediately remount /dev/shm:
# mount -o remount /dev/shm
Verify the same:
# df -h


[Mar 03, 2021] How to move the /root directory

Mar 03, 2021 | serverfault.com


I would like to move my root user's directory to a larger partition. Sometimes "he" runs out of space when performing tasks.

Here are my partitions:

host3:~# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1               334460    320649         0 100% /
tmpfs                   514128         0    514128   0% /lib/init/rw
udev                     10240       720      9520   8% /dev
tmpfs                   514128         0    514128   0% /dev/shm
/dev/sda9            228978900   1534900 215812540   1% /home
/dev/sda8               381138     10305    351155   3% /tmp
/dev/sda5              4806904    956852   3605868  21% /usr
/dev/sda6              2885780   2281584    457608  84% /var

The root user's home directory is /root. I would like to relocate this, and any other user's home directories, to a new location, perhaps on sda9. How do I go about this?

Tags: debian, user-management, linux. Asked Nov 30 '10 at 17:27 by nicholas.alipaz

3 Answers

You should avoid symlinks, it can make nasty bugs to appear... one day. And very hard to debug.

Use mount --bind :

# as root
cp -a /root /home/
echo "" >> /etc/fstab
echo "/home/root /root none defaults,bind 0 0" >> /etc/fstab

# do it now
cd / ; mv /root /root.old; mkdir /root; mount -a

The bind mount will be re-created at every reboot; you should reboot now if you want to catch errors early.

Answered Nov 30 '10 at 17:51 by shellholic
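A quick way to confirm the bind mount is in place (my addition, not part of the answer):

findmnt /root          # should list /home/root as the source of /root
ls -la /root           # contents should match /home/root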



Never tried it, but you shouldn't have a problem with:
cd /                     # make sure you're not in the directory to be moved
mv /root /home/root
ln -s /home/root /root   # symlink it back to the original location

Answered Nov 30 '10 at 17:32 by James L


[Mar 03, 2021] The dmesg command is used to print the kernel's message buffer.

Mar 03, 2021 | www.redhat.com

11 Linux commands I can't live without - Enable Sysadmin

Command 9: dmesg

The dmesg command is used to print the kernel's message buffer. This is another important command that you cannot work without. It is much easier to troubleshoot a system when you can see what is going on, and what happened behind the scenes.
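A few handy invocations (my addition, assuming the util-linux dmesg found on current distros):

dmesg -T | tail -20          # last 20 kernel messages with human-readable timestamps
dmesg --level=err,warn       # only errors and warnings
dmesg -w                     # follow new messages as they arrive, like tail -f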


[Mar 03, 2021] The classic case of "low free disk space"

Mar 03, 2021 | www.redhat.com

Originally from: Sysadmin university- Quick and dirty Linux tricks - Enable Sysadmin

Another example from real life: You are troubleshooting an issue and find out that one file system is at 100 percent of its capacity.

There may be many subdirectories and files in production, so you may have to come up with some way to classify the "worst directories" because the problem (or solution) could be in one or more.

In the next example, I will show a very simple scenario to illustrate the point.

(Demo: https://asciinema.org/a/dt1WZkdpfCALbQ5XeiJNYxSCS )

The sequence of steps is:

  1. We go to the file system where the disk space is low (I used my home directory as an example).
  2. Then, we use the command du -k * to show the sizes of the directories in kilobytes.
  3. That requires some classification for us to find the big ones, but sort alone is not enough because, by default, this command treats the numbers as plain characters rather than as values.
  4. We add -n to the sort command, which now shows us the biggest directories.
  5. In case we have to navigate to many other directories, creating an alias might be useful (a combined one-liner is sketched below).
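A minimal sketch of the whole sequence (my condensation of the steps above, not the article's exact commands; the path is an example):

cd /home/myuser                                 # the nearly-full filesystem
du -sk * | sort -n | tail -10                   # per-entry totals in KB, ten biggest last
alias topdirs='du -sk * | sort -n | tail -10'   # optional alias for repeated use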

[Mar 01, 2021] Serious 10-year-old flaw in Linux sudo command; a new version patches it

Mar 01, 2021 | www.networkworld.com

Linux users should immediately patch a serious vulnerability to the sudo command that, if exploited, can allow unprivileged users gain root privileges on the host machine.

Called Baron Samedit, the flaw has been "hiding in plain sight" for about 10 years, and was discovered earlier this month by researchers at Qualys and reported to sudo developers, who came up with patches Jan. 19, according to a Qualys blog . (The blog includes a video of the flaw being exploited.)


A new version of sudo -- sudo v1.9.5p2 -- has been created to patch the problem, and notifications have been posted for many Linux distros including Debian, Fedora, Gentoo, Ubuntu, and SUSE, according to Qualys.

According to the common vulnerabilities and exposures (CVE) description of Baron Samedit ( CVE-2021-3156 ), the flaw can be exploited "via 'sudoedit -s' and a command-line argument that ends with a single backslash character."


According to Qualys, the flaw was introduced in July 2011 and affects legacy versions from 1.8.2 to 1.8.31p2 as well as default configurations of versions from 1.9.0 to 1.9.5p1.
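Two quick checks an administrator can run (my addition; the second is the heuristic test described in the Qualys advisory, and the exact messages may vary by build):

sudo --version | head -1     # patched releases report 1.9.5p2 or later
sudoedit -s /                # as a regular user: an error starting with "sudoedit:"
                             # suggests a vulnerable sudo, while "usage:" suggests a patched one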

[Mar 01, 2021] Smart ways to compare files on Linux by Sandra Henry-Stocker

Feb 16, 2021 | www.networkworld.com

colordiff

The colordiff command enhances the differences between two text files by using colors to highlight the differences.


$ colordiff attendance-2020 attendance-2021
10,12c10
< Monroe Landry
< Jonathan Moody
< Donnell Moore
---
> Sandra Henry-Stocker

If you add a -u option, those lines that are included in both files will appear in your normal font color.

wdiff

The wdiff command uses a different strategy. It highlights the lines that are only in the first or second files using special characters. Those surrounded by square brackets are only in the first file. Those surrounded by braces are only in the second file.

$ wdiff attendance-2020 attendance-2021
Alfreda Branch
Hans Burris
Felix Burt
Ray Campos
Juliet Chan
Denver Cunningham
Tristan Day
Kent Farmer
Terrie Harrington
[-Monroe Landry                 <== lines in file 1 start
Jonathon Moody
Donnell Moore-]                 <== lines only in file 1 stop
{+Sandra Henry-Stocker+}        <== line only in file 2
Leanne Park
Alfredo Potter
Felipe Rush
vimdiff

The vimdiff command takes an entirely different approach. It uses the vim editor to open the files in a side-by-side fashion. It then highlights the lines that are different using background colors and allows you to edit the two files and save each of them separately.

Unlike the commands described above, it runs on the desktop, not in a terminal window.


On Debian systems, you can install vimdiff with this command:

$ sudo apt install vim
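Basic usage (my addition) is simply to pass both files; vim -d is an equivalent spelling:

$ vimdiff attendance-2020 attendance-2021
$ vim -d attendance-2020 attendance-2021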


kompare

The kompare command, like vimdiff, runs on your desktop. It displays differences between files to be viewed and merged and is often used by programmers to see and manage differences in their code. It can compare files or folders. It's also quite customizable.

Learn more at kde.org .

kdiff3

The kdiff3 tool allows you to compare up to three files and not only see the differences highlighted, but merge the files as you see fit. This tool is often used to manage changes and updates in program code.

Like vimdiff and kompare , kdiff3 runs on the desktop.
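Typical invocations (my addition): compare two or three files, or do a three-way merge with an explicit output file:

$ kdiff3 file1 file2
$ kdiff3 base yours theirs -o merged_result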

You can find more information on kdiff3 at sourceforge .

[Feb 28, 2021] Tagging commands on Linux by Sandra Henry-Stocker

Nov 20, 2020 | www.networkworld.com

Tags provide an easy way to associate strings that look like hash tags (e.g., #HOME ) with commands that you run on the command line. Once a tag is established, you can rerun the associated command without having to retype it. Instead, you simply type the tag. The idea is to use tags that are easy to remember for commands that are complex or bothersome to retype.

Unlike setting up an alias, tags are associated with your command history. For this reason, they only remain available if you keep using them. Once you stop using a tag, it will slowly disappear from your command history file. Of course, for most of us, that means we can type 500 or 1,000 commands before this happens. So, tags are a good way to rerun commands that are going to be useful for some period of time, but not for those that you want to have available permanently.

To set up a tag, type a command and then add your tag at the end of it. The tag must start with a # sign and should be followed immediately by a string of letters. This keeps the tag from being treated as part of the command itself. Instead, it's handled as a comment but is still included in your command history file. Here's a very simple and not particularly useful example:

[ Also see Invaluable tips and tricks for troubleshooting Linux . ]
$ echo "I like tags" #TAG

This particular echo command is now associated with #TAG in your command history. If you use the history command, you'll see it:


$ history | grep TAG
  998  08/11/20 08:28:29 echo "I like tags" #TAG     <==
  999  08/11/20 08:28:34 history | grep TAG

Afterwards, you can rerun the echo command shown by entering !? followed by the tag.

$ !? #TAG
echo "I like tags" #TAG
"I like tags"

The point is that you will likely only want to do this when the command you want to run repeatedly is so complex that it's hard to remember or just annoying to type repeatedly. To list your most recently updated files, for example, you might use a tag #REC (for "recent") and associate it with the appropriate ls command. The command below lists files in your home directory regardless of where you are currently positioned in the file system, lists them in reverse date order, and displays only the five most recently created or changed files.

$ ls -ltr ~ | tail -5 #REC <== Associate the tag with a command
drwxrwxr-x  2 shs     shs        4096 Oct 26 06:13 PNGs
-rw-rw-r--  1 shs     shs          21 Oct 27 16:26 answers
-rwx------  1 shs     shs         644 Oct 29 17:29 update_user
-rw-rw-r--  1 shs     shs      242528 Nov  1 15:54 my.log
-rw-rw-r--  1 shs     shs      266296 Nov  5 18:39 political_map.jpg
$ !? #REC                       <== Run the command that the tag is associated with
ls -ltr ~ | tail -5 #REC
drwxrwxr-x  2 shs     shs        4096 Oct 26 06:13 PNGs
-rw-rw-r--  1 shs     shs          21 Oct 27 16:26 answers
-rwx------  1 shs     shs         644 Oct 29 17:29 update_user
-rw-rw-r--  1 shs     shs      242528 Nov  1 15:54 my.log
-rw-rw-r--  1 shs     shs      266296 Nov  5 18:39 political_map.jpg

You can also rerun tagged commands using Ctrl-r (hold Ctrl key and press the "r" key) and then typing your tag (e.g., #REC). In fact, if you are only using one tag, just typing # after Ctrl-r should bring it up for you. The Ctrl-r sequence, like !? , searches through your command history for the string that you enter.

Tagging locations

Some people use tags to remember particular file system locations, making it easier to return to directories they're working in without having to type complete directory paths.


$ cd /apps/data/stats/2020/11 #NOV
$ cat stats
$ cd
!? #NOV        <== takes you back to /apps/data/stats/2020/11

After using the #NOV tag as shown, whenever you need to move into the directory associated with #NOV , you have a quick way to do so – and one that doesn't require that you think too much about where the data files are stored.

NOTE: Tags don't need to be in all uppercase letters, though this makes them easier to recognize and unlikely to conflict with any commands or file names that are also in your command history.

Alternatives to tags

While tags can be very useful, there are other ways to do the same things that you can do with them.

To make commands easily repeatable, assign them to aliases.


$ alias recent="ls -ltr ~ | tail -5"

To make multiple commands easily repeatable, turn them into a script.

#!/bin/bash
echo "Most recently updated files:"
ls -ltr ~ | tail -5

To make file system locations easier to navigate to, create symbolic links.

$ ln -s /apps/data/stats/2020/11 NOV

To rerun recently used commands, use the up arrow key to back up through your command history until you reach the command you want to reuse and then press the enter key.

You can also rerun recent commands by typing something like "history | tail -20" and then typing "!" followed by the number to the left of the command you want to rerun (e.g., !999).

Wrap-up

Tags are most useful when you need to run complex commands again and again in a limited timeframe. They're easy to set up and they fade away when you stop using them.

[Feb 28, 2021] Selectively reusing commands on Linux by Sandra Henry-Stocker

Feb 23, 2021 | www.networkworld.com

Reuse a command by typing a portion of it

One easy way to reuse a previously entered command (one that's still in your command history) is to type the beginning of the command. If the bottom of your history buffer looks like this, you could rerun the ps command that's used to count system processes simply by typing !p.


$ history | tail -7
 1002  21/02/21 18:24:25 alias
 1003  21/02/21 18:25:37 history | more
 1004  21/02/21 18:33:45 ps -ef | grep systemd | wc -l
 1005  21/02/21 18:33:54 ls
 1006  21/02/21 18:34:16 echo "What's next?"

You can also rerun a command by entering a string that was included anywhere within it. For example, you could rerun the ps command shown in the listing above by typing !?sys? (the question marks act as string delimiters).

$ !?sys?
ps -ef | grep systemd | wc -l
5

You could rerun the command shown in the listing above by typing !1004 but this would be more trouble if you're not looking at a listing of recent commands.

Run previous commands with changes

After the ps command shown above, you could count kworker processes instead of systemd processes by typing ^systemd^kworker^ . This replaces one process name with the other and runs the altered command. As you can see in the commands below, this string substitution allows you to reuse commands when they differ only a little.

$ ps -ef | grep systemd | awk '{ print $2 }' | wc -l
5
$ ^systemd^smbd^
ps -ef | grep smbd | awk '{ print $2 }' | wc -l
5
$ ^smbd^kworker^
ps -ef | grep kworker | awk '{ print $2 }' | wc -l
13

The string substitution is also useful if you mistype a command or file name.


$ sudo ls -l /var/log/samba/corse
ls: cannot access '/var/log/samba/corse': No such file or directory
$ ^se^es^
sudo ls -l /var/log/samba/cores
total 8
drwx------. 2 root root 4096 Feb 16 10:50 nmbd
drwx------. 2 root root 4096 Feb 16 10:50 smbd
Reach back into history

You can also reuse commands with a character string that asks, for example, to rerun the command you entered some number of commands earlier. Entering !-11 would rerun the command you typed 11 commands earlier. In the output below, the !-3 reruns the first of the three earlier commands displayed.

$ ps -ef | wc -l
132
$ who
shs      pts/0        2021-02-21 18:19 (192.168.0.2)
$ date
Sun 21 Feb 2021 06:59:09 PM EST
$ !-3
ps -ef | wc -l
133
Reuse command arguments

Another thing you can do with your command history is reuse arguments that you provided to various commands. For example, the character sequence !:1 represents the first argument provided to the most recently run command, !:2 the second, !:3 the third and so on. !:$ represents the final argument. In this example, the arguments are reversed in the second echo command.

$ echo be the light
be the light
$ echo !:3 !:2 !:1
echo light the be
light the be
$ echo !:3 !:$
echo light light
light light

If you want to run a series of commands using the same argument, you could do something like this:

$ echo nemo
nemo
$ id !:1
id nemo
uid=1001(nemo) gid=1001(nemo) groups=1001(nemo),16(fish),27(sudo)
$ df -k /home/!:$
df -k /home/nemo
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sdb1      446885824 83472864 340642736  20% /home

Of course, if the argument was a long and complicated string, it might actually save you some time and trouble to use this technique. Please remember this is just an example!

Wrap-Up

Simple history command tricks can often save you a lot of trouble by allowing you to reuse rather than retype previously entered commands. Remember, however, that using strings to identify commands will recall only the most recent use of that string and that you can only rerun commands in this way if they are being saved in your history buffer.


[Feb 28, 2021] Keep out ahead of shadow IT by Steven A. Lowe

Sep 28, 2015 | www.networkworld.com

Shadow IT has been presented as a new threat to IT departments because of the cloud. Not true -- the cloud has simply made it easier for non-IT personnel to acquire and create their own solutions without waiting for IT's permission. Moreover, the cloud has made this means of technical problem-solving more visible, bringing shadow IT into the light. In fact, "shadow IT" is more of a legacy pejorative for what should better be labeled "DIY IT." After all, shadow IT has always been about people solving their own problems with technology.

Here we take a look at how your organization can best go about leveraging the upside of DIY IT.

What sends non-IT problem-solvers into the shadows

The IT department is simply too busy, overworked, understaffed, underutilized, and sometimes even too disinterested to take on every marketing Web application idea or mobile app initiative for field work that comes its way. There are too many strategic initiatives, mission-critical systems, and standards committee meetings, so folks outside IT are often left with little recourse but to invent their own solutions using whatever technical means and expertise they have or can find.

How can this be a bad thing?

  1. They are sharing critical, private data with the wrong people somehow.
  2. Their data is fundamentally flawed, inaccurate, or out of date.
  3. Their data would be of use to many others, but they don't know it exists.
  4. Their ability to solve their own problems is a threat to IT.

Because shadow IT practitioners are subject matter experts in their domain, the second drawback is unlikely. The third is an opportunity lost, but that's not scary enough to sweat. The first and fourth are the most likely to instill fear -- with good reason. If something goes wrong with a home-grown shadow IT solution, the IT department will likely be made responsible, even if you didn't know it existed.


The wrong response to these fears is to try to eradicate shadow IT. Because if you really want to wipe out shadow IT, you would have to have access to all the network logs, corporate credit card reports, phone bills, ISP bills, and firewall logs, and it would take some effort to identify and block all unauthorized traffic in and out of the corporate network. You would have to rig the network to refuse to connect to unsanctioned devices, as well as block access to websites and cloud services like Gmail, Dropbox, Salesforce, Google apps, Trello, and so on. Simply knowing all you would have to block access to would be a job in itself.

Worse, if you clamp down on DIY solutions you become an obstacle, and attempts to solve departmental problems will submerge even further into the shadows -- but they will never go away. The business needs underlying DIY IT are too important.

The reality is that if you shift your strategy to embrace DIY solutions the right way, people will be able to safely solve their own problems without much IT involvement, and IT will be able to accomplish more on the projects where its expertise and oversight are truly critical.

Embrace DIY IT

Seek out shadow IT projects and help them, but above all respect the fact that this problem-solving technique exists. The folks who launch a DIY project are not your enemies; they are your co-workers, trying to solve their own problems, hampered by limited resources and understanding. The IT department may not have many more resources to spread around, but you have an abundance of technical know-how. Sharing that does not deplete it.

You can find the trail of shadow IT by looking at network logs, scanning email traffic and attachments, and so forth. You must be willing to support these activities whether or not you like them: they exist, and they likely have good reasons for existing. It doesn't matter that they were not built with your permission or to your specifications. Assume that they are necessary and help the people behind them do it right.

Take the lead -- and lead

IT departments have the expertise to help others select the right technical solution for their needs. I'm not talking about RFPs, vendor/product evaluation meetings, software selection committees -- those are typically time-wasting, ivory-tower circuses that satisfy no one. I'm talking about helping colleagues figure out what it is they truly want and teaching them how to evaluate and select a solution that works for them -- and is compliant with a small set of minimal, relevant standards and policies.

That expertise could be of enormous benefit to the rest of the company, if only it were shared. An approachable IT department that places a priority on helping people solve their own problems -- instead of expending enormous effort trying to prevent largely unlikely, possibly even imaginary problems -- is what you should be striving for.

Think of it as being helpful without being intrusive. Sharing your expertise and taking the lead in helping non-IT departments help themselves not only shows consideration for your colleagues' needs, but it also helps solve real problems for real people -- while keeping the IT department informed about the technology choices made throughout the organization. Moreover, it sets up the IT department for success instead of surprises when the inevitable integration and data migration requests appear.

Plus, it's a heck of a lot cheaper than reinventing the wheel unnecessarily.

Create policies everyone can live with

IT is responsible for critical policies concerning the use of devices, networks, access to information, and so on. It is imperative that IT have in place a sane set of policies to safeguard the company from loss, liability, leakage, incomplete/inaccurate data, and security threats both internal and external. But everyone else has to live with these policies, too. If they are too onerous or convoluted or byzantine, they will be ignored.

Therefore, create policies that respect everyone's concerns and needs, not IT's alone. Here's the central question to ask yourself: Are you protecting the company or merely the status quo?

Security is a legitimate concern, of course, but most SaaS vendors understand security at least as well as you do, if not better. Being involved in the DIY procurement process (without being a bottleneck or a dictator) lets you ensure that minimal security criteria are met.

Data integrity is likewise a legitimate concern, but control of company data is largely an illusion. You can make data available or not, but you cannot control how it is used once accessed. Train and trust your people, and verify their activities. You should not and cannot make all decisions for them in advance.

Regulatory compliance, auditing, traceability, and so on are legitimate concerns, but they do not trump the rights of workers to solve their own problems. All major companies in similar fields are subject to the same regulations as your company. How you choose to comply with those regulations is up to you. The way you've always done it is not the only way, and it's probably not even the best way. Here, investigating what the major players in your field do, especially if they are more modern, efficient, and "cloudy" than you, is a great start.

The simplest way to manage compliance is to isolate the affected software from the rest of the system, since compliance is more about auditing and accountability than prescriptive processes. The major movers and shakers in the Internet space are all over new technologies, techniques, employee empowerment, and streamlining initiatives. Join them, or eat their dust.

Champion DIY IT

Once you have a sensible set of policies in place, it's high time to shine a light on shadow IT -- a celebratory spotlight, that is.

By championing DIY IT projects, you send a clear message that co-workers have no need to hide how they go about solving their problems. Make your intentions friendly and clear up front: that you are intent on improving business operations, recognizing and rewarding innovators and risk-takers, finding and helping those who need assistance, and promoting good practices for DIY IT. A short memo/email announcing this from a trusted, well-regarded executive is highly recommended.

Here are a few other ideas to help you embrace DIY IT:

... ... ...

DIY IT can be a great benefit to your organization by relieving the load on the IT department and enabling more people to tap technical tools to be more productive in their work -- a win for everyone. But it can't happen without sane and balanced policies, active support from IT, and a companywide awareness that this sort of innovation and initiative is valued.

[Feb 27, 2021] 3 solid self-review tips for sysadmins by Anthony Critelli

The most solid tip is not to take this self-review seriously ;-)
And contrary to Anthony Critelli's opinion, this is not about "selling yourself". It is about management control of the workforce. In other words, annual performance reviews are a mechanism of repression.
Using corporate bullsh*t is probably the simplest and most advisable strategy during those exercises. I like the recommendation "Tie your accomplishments to business goals and values" below. Never be frank in such situations.
Feb 25, 2021 | www.redhat.com

... you sell yourself by reminding your management team that you provide a great deal of objective value to the organization and that you deserve to be compensated accordingly. When I say compensation, I don't just mean salary. Compensation means different things to different people: Maybe you really want more pay, extra vacation time, a promotion, or even a lateral move. A well-written self-review can help you achieve these goals, assuming they are available at your current employer.

... ... ...

Tie your accomplishments to business goals and values

...It's hard to argue that decreasing user downtime from days to hours isn't a valuable contribution.

... ... ...

... I select a skill, technology, or area of an environment that I am weak in, and I discuss how I would like to build my knowledge. I might discuss how I want to improve my understanding of Kubernetes as we begin to adopt a containerization strategy, or I might describe how my on-call effectiveness could be improved by deepening my knowledge of a particular legacy environment.

... ... ...

Many of my friends and colleagues don't look forward to review season. They find it distracting and difficult to write a self-review. Often, they don't even know where to begin writing about their work from the previous year.

[Feb 20, 2021] Improve your productivity with this Linux keyboard tool - Opensource.com

Feb 20, 2021 | opensource.com

AutoKey is an open source Linux desktop automation tool that, once it's part of your workflow, you'll wonder how you ever managed without. It can be a transformative tool to improve your productivity or simply a way to reduce the physical stress associated with typing.

This article will look at how to install and start using AutoKey, cover some simple recipes you can immediately use in your workflow, and explore some of the advanced features that AutoKey power users may find attractive.

Install and set up AutoKey

AutoKey is available as a software package on many Linux distributions. The project's installation guide contains directions for many platforms, including building from source. This article uses Fedora as the operating platform.

AutoKey comes in two variants: autokey-gtk, designed for GTK-based environments such as GNOME, and autokey-qt, which is Qt-based.

You can install either variant from the command line:

sudo dnf install autokey-gtk

Once it's installed, run it by using autokey-gtk (or autokey-qt).

Explore the interface

Before you set AutoKey to run in the background and automatically perform actions, you will first want to configure it. Bring up the configuration user interface (UI):

autokey-gtk -c

AutoKey comes preconfigured with some examples. You may wish to leave them while you're getting familiar with the UI, but you can delete them if you wish.

The left pane contains a folder-based hierarchy of phrases and scripts. Phrases are text that you want AutoKey to enter on your behalf. Scripts are dynamic, programmatic equivalents that can be written using Python and achieve basically the same result of making the keyboard send keystrokes to an active window.

The right pane is where the phrases and scripts are built and configured.

Once you're happy with your configuration, you'll probably want to run AutoKey automatically when you log in so that you don't have to start it up every time. You can configure this in the Preferences menu ( Edit -> Preferences ) by selecting Automatically start AutoKey at login .

Correct common typos with AutoKey

Common typos are an easy problem for AutoKey to fix. For example, I consistently type "gerp" instead of "grep." Here's how to configure AutoKey to correct these sorts of mistakes for you.

Create a new subfolder where you can group all your "typo correction" configurations. Select My Phrases in the left pane, then File -> New -> Subfolder . Name the subfolder Typos .

Create a new phrase in File -> New -> Phrase , and call it "grep."

Configure AutoKey to insert the correct word by highlighting the phrase "grep" then entering "grep" in the Enter phrase contents section (replacing the default "Enter phrase contents" text).

Next, set up how AutoKey triggers this phrase by defining an Abbreviation. Click the Set button next to Abbreviations at the bottom of the UI.

In the dialog box that pops up, click the Add button and add "gerp" as a new abbreviation. Leave Remove typed abbreviation checked; this is what instructs AutoKey to replace any typed occurrence of the word "gerp" with "grep." Leave Trigger when typed as part of a word unchecked so that if you type a word containing "gerp" (such as "fingerprint"), it won't attempt to turn that into "fingreprint." It will work only when "gerp" is typed as an isolated word.

[Feb 19, 2021] Install only security updates via yum

The simplest way is # yum -y update --security
For RHEL 7, the plugin yum-plugin-security is already part of yum itself; there is no need to install anything.
Jan 16, 2020 | access.redhat.com

It is now possible to limit yum to install only security updates (as opposed to bug fixes or enhancements) on Red Hat Enterprise Linux 5, 6, and 7. To do so, simply install the yum-security plugin:

For Red Hat Enterprise Linux 7 and 8

The plugin is already part of yum itself; there is no need to install anything.

For Red Hat Enterprise Linux 5 and 6

# yum install yum-security
# yum list-sec
# yum list-security --security

For Red Hat Enterprise Linux 5, 6, 7 and 8

# yum updateinfo info security
# yum -y update --security

NOTE: yum update --security installs the latest available version of any package that has at least one security erratum, so it can also pull in non-security errata when they provide a newer version of the same package. To apply only the minimal changes required by the security errata, use update-minimal:

# yum update-minimal --security -y
# yum update --cve <CVE>

e.g.

# yum update --cve CVE-2008-0947
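Before updating, you can also check whether a given CVE is applicable to the system at all by querying the update metadata (a hedged example that reuses the CVE id above; the --cve option comes from the same security plugin, and the output depends on your installed packages):

# yum updateinfo list --cve CVE-2008-0947
# yum updateinfo info --cve CVE-2008-0947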

11 September 2014 5:30 PM R. Hinton Community Leader

For those seeking to discover what CVEs are addressed in a given existing RPM, try this method that Marc Milgram from Red Hat kindly provided at this discussion .

1) First download the specific rpm you are interested in.
2) Use the below command...

$ rpm -qp --changelog openssl-0.9.8e-27.el5_10.4.x86_64.rpm | grep CVE
- fix CVE-2014-0221 - recursion in DTLS code leading to DoS
- fix CVE-2014-3505 - doublefree in DTLS packet processing
- fix CVE-2014-3506 - avoid memory exhaustion in DTLS
- fix CVE-2014-3508 - fix OID handling to avoid information leak
- fix CVE-2014-3510 - fix DoS in anonymous (EC)DH handling in DTLS
- fix for CVE-2014-0224 - SSL/TLS MITM vulnerability
- fix for CVE-2013-0169 - SSL/TLS CBC timing attack (#907589)
- fix for CVE-2013-0166 - DoS in OCSP signatures checking (#908052)
  environment variable is set (fixes CVE-2012-4929 #857051)
- fix for CVE-2012-2333 - improper checking for record length in DTLS (#820686)
- fix for CVE-2012-2110 - memory corruption in asn1_d2i_read_bio() (#814185)
- fix for CVE-2012-0884 - MMA weakness in CMS and PKCS#7 code (#802725)
- fix for CVE-2012-1165 - NULL read dereference on bad MIME headers (#802489)
- fix for CVE-2011-4108 & CVE-2012-0050 - DTLS plaintext recovery
- fix for CVE-2011-4109 - double free in policy checks (#771771)
- fix for CVE-2011-4576 - uninitialized SSL 3.0 padding (#771775)
- fix for CVE-2011-4619 - SGC restart DoS attack (#771780)
- fix CVE-2010-4180 - completely disable code for
- fix CVE-2009-3245 - add missing bn_wexpand return checks (#570924)
- fix CVE-2010-0433 - do not pass NULL princ to krb5_kt_get_entry which
- fix CVE-2009-3555 - support the safe renegotiation extension and
- fix CVE-2009-2409 - drop MD2 algorithm from EVP tables (#510197)
- fix CVE-2009-4355 - do not leak memory when CRYPTO_cleanup_all_ex_data()
- fix CVE-2009-1386 CVE-2009-1387 (DTLS DoS problems)
- fix CVE-2009-1377 CVE-2009-1378 CVE-2009-1379
- fix CVE-2009-0590 - reject incorrectly encoded ASN.1 strings (#492304)
- fix CVE-2008-5077 - incorrect checks for malformed signatures (#476671)
- fix CVE-2007-3108 - side channel attack on private keys (#250581)
- fix CVE-2007-5135 - off-by-one in SSL_get_shared_ciphers (#309881)
- fix CVE-2007-4995 - out of order DTLS fragments buffer overflow (#321221)
- CVE-2006-2940 fix was incorrect (#208744)
- fix CVE-2006-2937 - mishandled error on ASN.1 parsing (#207276)
- fix CVE-2006-2940 - parasitic public keys DoS (#207274)
- fix CVE-2006-3738 - buffer overflow in SSL_get_shared_ciphers (#206940)
- fix CVE-2006-4343 - sslv2 client DoS (#206940)
- fix CVE-2006-4339 - prevent attack on PKCS#1 v1.5 signatures (#205180)
11 September 2014 5:34 PM R. Hinton Community Leader

Additionally,

If you are interested to see if a given CVE, or list of CVEs are applicable, you can use this method:

1) Get the list of all applicable CVEs from Red Hat that you wish to check.
- If you want to limit the search to a specific rpm such as "openssl", then at the Red Hat link mentioned above you can enter "openssl" and filter only the openssl items, or filter against any other search term.
- Place these into a file, one per line, such as this limited example:
NOTE: The CVEs below come from limiting the list to "openssl" in the manner described above; the list is not complete, and there are plenty more for your date range.

CVE-2014-0016
CVE-2014-0017
CVE-2014-0036
CVE-2014-0041
...

2) Keep in mind the information in this article, and run something like the following as root (a "for" loop will work in a bash shell):

[root@yoursystem]# for i in $(cat listofcves.txt); do yum update --cve "$i"; done

If the CVE applies, it will prompt you to take the update; if it does not apply, it will tell you so.

Alternatively, I prepended "echo n |" to the "yum update" command so that yum exits with an "n" answer if it finds a hit:

[root@yoursystem]# for i in $(cat listyoumade.txt); do echo n | yum update --cve "$i"; done

Then redirect the output to another file to make your determinations.
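For example, one way to capture the result for each CVE in a single file (a quick sketch; the echoed marker line is only there to make the output easy to search):

[root@yoursystem]# for i in $(cat listyoumade.txt); do echo "=== $i ==="; echo n | yum update --cve "$i"; done > cve-check.txt 2>&1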

7 January 2015 9:54 AM f3792625

'yum info-sec' actually lists all patches; you need to use 'yum info-sec --security'.

10 February 2016 1:00 PM Rackspace Customer

How is the Severity information of RHSA updates populated?

Specifically the article shows the following output:

# yum updateinfo list
This system is receiving updates from RHN Classic or RHN Satellite.
RHSA-2014:0159 Important/Sec. kernel-headers-2.6.32-431.5.1.el6.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-devel-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-libs-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 Moderate/Sec.  mysql-server-5.1.73-3.el6_5.x86_64
RHBA-2014:0158 bugfix         nss-sysinit-3.15.3-6.el6_5.x86_64
RHBA-2014:0158 bugfix         nss-tools-3.15.3-6.el6_5.x86_64

On all of my systems, the output seems to be missing the severity information:

# yum updateinfo list
This system is receiving updates from RHN Classic or RHN Satellite.
RHSA-2014:0159 security       kernel-headers-2.6.32-431.5.1.el6.x86_64
RHSA-2014:0164 security       mysql-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 security       mysql-devel-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 security       mysql-libs-5.1.73-3.el6_5.x86_64
RHSA-2014:0164 security       mysql-server-5.1.73-3.el6_5.x86_64
RHBA-2014:0158 bugfix         nss-sysinit-3.15.3-6.el6_5.x86_64
RHBA-2014:0158 bugfix         nss-tools-3.15.3-6.el6_5.x86_64

I can't see how to configure it to transform "security" to "Severity/Sec."

20 September 2016 8:27 AM Walid Shaari

Same here. What I did was to use info-sec with filters, like below:

test-node# yum info-sec|grep  'Critical:'
  Critical: glibc security and bug fix update
  Critical: samba and samba4 security, bug fix, and enhancement update
  Critical: samba security update
  Critical: samba security update
  Critical: nss and nspr security, bug fix, and enhancement update
  Critical: nss, nss-util, and nspr security update
  Critical: nss-util security update
  Critical: samba4 security update

20 June 2017 1:49 PM b.scalio

What's annoying is that "yum update --security" shows 20 packages to update for security, but when listing the installable errata in Satellite it shows 102 errata available, and the two lists don't line up.

20 June 2017 2:05 PM Pavel Moravec

You might be hitting https://bugzilla.redhat.com/show_bug.cgi?id=1408508 where the generated metadata has an empty package list for some errata in some circumstances, causing yum to think such an erratum is not applicable (as no package would be updated by applying that erratum).

I recommend finding one of the errata that the Satellite WebUI offers but yum isn't aware of, and (z)grepping that erratum id within the yum cache - if you find something like:

<pkglist>
  <collection short="">
    <name>rhel-7-server-rpms__7Server__x86_64</name>
  </collection>
</pkglist>

with no package in it, then you have hit that bug.

14 August 2017 1:25 AM PixelDrift.NET Support Community Leader

I've got an interesting requirement in that a customer wants to allow only updates of packages with attached security errata (to limit unnecessary drift/updates of the OS platform), i.e., restrict, warn about, or block the use of a generic 'yum update' by an admin, as it will update all packages.

There are other approaches which I have currently implemented, including limiting what is made available to the servers through Satellite so yum update doesn't 'see' non-security errata. But I guess what I'm really interested in is limiting (through client config) the inadvertent use of "yum update" by an administrator, or redirecting/mapping 'yum update' to 'yum update --security'. I appreciate that an admin can work around any restriction, but it's really to limit accidental use of a full 'yum update' by well-intentioned admins.

Current approaches are to alias yum, move yum and write a shim in its place (to warn/redirect if yum update is called), or patch the yum package itself (which I'd like to avoid). Any other suggestions appreciated.
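A minimal sketch of such a shim, assuming /usr/local/bin comes before /usr/bin in the admins' PATH (this is only an illustration of the idea, not a hardened or supported solution):

#!/bin/bash
# /usr/local/bin/yum -- illustrative wrapper placed in front of the real /usr/bin/yum.
# If a plain "yum update" is requested without --security, warn and run the
# security-only variant instead; everything else is passed through untouched.
if [ "$1" = "update" ] && [[ "$*" != *--security* ]]; then
    echo "Plain 'yum update' is discouraged on this host; running 'yum update --security' instead." >&2
    shift
    exec /usr/bin/yum update --security "$@"
fi
exec /usr/bin/yum "$@"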

16 January 2018 5:00 PM DSI POMONA

Why not create a specific content view for security patching purposes?

In that content view, you create a filter that includes only security updates.

In your patch management process, you can create a script that changes the content view of a host (or host group) on the fly, then applies security patches, and finally switches back to the original content view (if you want to leave the admin the option of installing additional programs when necessary).

Hope this helps.

15 August 2019 12:12 AM IT Accounts NCVER

Hi,

Is it necessary to reboot the system after applying security updates?

15 August 2019 1:17 AM Marcus West

If it's a kernel update, you will have to. For other packages, a reboot is recommended to ensure that you are not still running the old libraries in memory. If you are just patching one particular independent service (e.g., httpd), you can probably get away without a full system reboot.

More information can be found in the solution "Which packages require a system reboot after the update?".
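As a related aside, on RHEL 7 the needs-restarting utility from yum-utils can help answer this question after patching (a hedged example; the -r option may not be available on older releases):

# yum install yum-utils
# needs-restarting -r        # reports whether a full reboot is recommended
# needs-restarting           # lists running processes that still use files updated since they started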

[Feb 03, 2021] A new useful buzzword -- Hyper-converged infrastructure

Feb 03, 2021 | en.wikipedia.org

From Wikipedia, the free encyclopedia

Hyper-converged infrastructure (HCI) is a software-defined IT infrastructure that virtualizes all of the elements of conventional "hardware-defined" systems. HCI includes, at a minimum, virtualized computing (a hypervisor), software-defined storage, and virtualized networking (software-defined networking). HCI typically runs on commercial off-the-shelf (COTS) servers.

The primary difference between converged infrastructure (CI) and hyper-converged infrastructure is that in HCI, both the storage area network and the underlying storage abstractions are implemented virtually in software (at or via the hypervisor) rather than physically, in hardware. Because all of the software-defined elements are implemented within the context of the hypervisor, management of all resources can be federated (shared) across all instances of a hyper-converged infrastructure.

Expected benefits

Hyperconvergence evolves away from discrete, hardware-defined systems that are connected and packaged together toward a purely software-defined environment where all functional elements run on commercial, off-the-shelf (COTS) servers, with the convergence of elements enabled by a hypervisor. [1] [2] HCI infrastructures are usually made up of server systems equipped with direct-attached storage (DAS). [3] HCI includes the ability to plug and play into a data-center pool of like systems. [4] [5] All physical data-center resources reside on a single administrative platform for both hardware and software layers. [6] Consolidation of all functional elements at the hypervisor level, together with federated management, eliminates traditional data-center inefficiencies and reduces the total cost of ownership (TCO) for data centers. [7] [8] [9]

Potential impact

The potential impact of the hyper-converged infrastructure is that companies will no longer need to rely on different compute and storage systems, though it is still too early to prove that it can replace storage arrays in all market segments. [10] It is likely to further simplify management and increase resource-utilization rates where it does apply. [11] [12] [13]

[Feb 02, 2021] A Guide to systemd journal clean up process

Images removed. See the original for full version.
Jan 29, 2021 | www.debugpoint.com

... ... ...

The systemd journal Maintenance

Using systemd's journalctl utility, you can query these logs and perform various operations on them: for example, viewing the log files from different boots, or checking the last warnings and errors from a specific process or application. If you are not familiar with these operations, I would suggest you quickly go through the tutorial "Use journalctl to View and Analyze Systemd Logs [With Examples]" before you follow this guide.

Where are the physical journal log files?

The systemd journald daemon collects logs from every boot; that is, it classifies the log files per boot.

The logs are stored in binary form under /var/log/journal, in a folder named after the machine id.

For example:

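(The screenshots from the original article are omitted; below is a rough equivalent listing, with a made-up machine id.)

$ ls /var/log/journal/
1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d
$ ls /var/log/journal/1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d/
system.journal  user-1000.journal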

Also remember that, depending on the system configuration, runtime journal files are stored at /run/log/journal/; these are removed on each boot.

Can I manually delete the log files?

You can, but don't do it. Instead, follow the below instructions to clear the log files to free up disk space using journalctl utilities.

How much disk space is used by systemd log files?

Open up a terminal and run the below command.

journalctl --disk-usage

This shows how much disk space is actually used by the journal files on your system.
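The output is a single line along these lines (the size shown here is just an illustration):

Archived and active journals take up 1.2G in the file system.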

If you have a graphical desktop environment, you can open the file manager and browse to the path /var/log/journal and check the properties.

systemd journal clean process

The most effective way of controlling the log files is via the journald.conf configuration file. Ideally, you should not delete the log files manually, even though journalctl provides the means to do so.

Let's first take a look at how you can delete them manually; then I will explain the configuration changes in journald.conf so that you do not need to delete the files from time to time. Instead, systemd takes care of it automatically based on your configuration.

Manual delete

First, you have to flush and rotate the log files. Rotating marks the currently active log files as archived and creates fresh log files from that moment on. The --flush switch asks the journal daemon to flush any log data stored in /run/log/journal/ into /var/log/journal/, if persistent storage is enabled.


Then, after the flush and rotate, you need to run journalctl with the --vacuum-size, --vacuum-time, or --vacuum-files switches to force systemd to clear the logs.

Example 1:

sudo journalctl --flush --rotate
sudo journalctl --vacuum-time=1s

The above set of commands removes all archived journal log files except anything from the last second, which effectively clears everything. So be careful when running this command.

You can also use other time suffixes after the number as needed, such as s, m, h, days, weeks, months, and years.

Example 2:

sudo journalctl --flush --rotate
sudo journalctl --vacuum-size=400M

This removes archived journal log files until their total size is below 400MB. Remember, this switch applies only to archived log files, not to active journal files. You can also use the size suffixes K, M, G, and T.

Example 3:

sudo journalctl --flush --rotate
sudo journalctl --vacuum-files=2

The --vacuum-files switch removes archived journal files so that no more than the specified number remain. So, in the above example, only the last two journal files are kept and everything else is removed. Again, this only works on archived files.

You can combine the switches if you want, but I would recommend against it. In any case, make sure to run with the --rotate switch first.

Automatic delete using config files

While the above methods are good and easy to use, it is recommended that you control the journal log file cleanup process using the journald configuration file at /etc/systemd/journald.conf.

Systemd provides many parameters to manage the log files effectively. By combining these parameters, you can limit the disk space used by the journal files. Let's take a look.

journald.conf parameters (with example values):

SystemMaxUse - Specifies the maximum disk space that can be used by the journal in persistent storage. Example: SystemMaxUse=500M
SystemKeepFree - Specifies the amount of space that the journal should leave free when adding journal entries to persistent storage. Example: SystemKeepFree=100M
SystemMaxFileSize - Controls how large individual journal files can grow in persistent storage before being rotated. Example: SystemMaxFileSize=100M
RuntimeMaxUse - Specifies the maximum disk space that can be used in volatile storage (within the /run filesystem). Example: RuntimeMaxUse=100M
RuntimeKeepFree - Specifies the amount of space to be set aside for other uses when writing data to volatile storage (within the /run filesystem). Example: RuntimeKeepFree=100M
RuntimeMaxFileSize - Specifies the amount of space that an individual journal file can take up in volatile storage (within the /run filesystem) before being rotated. Example: RuntimeMaxFileSize=200M
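As an illustration, putting a few of these limits into /etc/systemd/journald.conf might look like this (the values are examples, not recommendations; the parameters go in the [Journal] section):

[Journal]
SystemMaxUse=500M
SystemKeepFree=100M
SystemMaxFileSize=100M
RuntimeMaxUse=100M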

If you add these values to /etc/systemd/journald.conf on a running system, then you have to restart journald after updating the file. To restart it, use the following command:

sudo systemctl restart systemd-journald

Verification of log files

It is wise to check the integrity of the log files after you clean them up. To do that, run the command below. It shows PASS or FAIL for each journal file.

journalctl --verify

... ... ...

[Feb 02, 2021] 5 Most Notable Open Source Centralized Log Management Tools, by James Kiarie

Feb 01, 2021 | www.tecmint.com

... ... ...

1. Elastic Stack (Elasticsearch, Logstash & Kibana)

Elastic Stack, commonly abbreviated as ELK, is a popular three-in-one log centralization, parsing, and visualization stack that gathers large sets of data and logs from multiple servers onto one server.

ELK stack comprises 3 different products:

Logstash

Logstash is a free and open-source data pipeline that collects logs and event data, and can also process and transform the data into the desired output. Data is sent to Logstash from remote servers using agents called 'beats'. The 'beats' ship a huge volume of system metrics and logs to Logstash, where they are processed. Logstash then feeds the data to Elasticsearch.

Elasticsearch

Built on Apache Lucene , Elasticsearch is an open-source and distributed search and analytics engine for nearly all types of data – both structured and unstructured. This includes textual, numerical, and geospatial data.

It was first released in 2010. Elasticsearch is the central component of the ELK stack and is renowned for its speed, scalability, and REST APIs. It stores, indexes, and analyzes huge volumes of data passed on from Logstash .

Kibana

Data is finally passed on to Kibana , which is a WebUI visualization platform that runs alongside Elasticsearch . Kibana allows you to explore and visualize time-series data and logs from elasticsearch. It visualizes data and logs on intuitive dashboards which take various forms such as bar graphs, pie charts, histograms, etc.


2. Graylog

Graylog is yet another popular and powerful centralized log management tool that comes with both open-source and enterprise plans. It accepts data from clients installed on multiple nodes and, just like Kibana , visualizes the data on dashboards on a web interface.

Graylog plays a monumental role in making business decisions touching on user interaction with a web application. It collects vital analytics on the app's behavior and visualizes the data on various graphs such as bar graphs, pie charts, and histograms, to mention a few. The data collected informs key business decisions.

For example, you can determine peak hours when customers place orders using your web application. With such insights in hand, the management can make informed business decisions to scale up revenue.

Unlike the Elastic Stack, Graylog offers a single-application solution for data collection, parsing, and visualization. It removes the need to install multiple components separately, as you must with the ELK stack. Graylog collects and stores data in MongoDB, which is then visualized on user-friendly and intuitive dashboards.

Graylog is widely used by developers in different phases of app deployment in tracking the state of web applications and obtaining information such as request times, errors, etc. This helps them to modify the code and boost performance.

3. Fluentd

Written in a combination of Ruby and C, Fluentd is a cross-platform, open source log monitoring tool that unifies log and data collection from multiple data sources. It is licensed under the Apache 2.0 license. In addition, there's a subscription model for enterprise use.

Fluentd processes both structured and semi-structured sets of data. It analyzes application logs, event logs, and clickstreams, and aims to be a unifying layer between log inputs and outputs of varying types.

It structures data in a JSON format allowing it to seamlessly unify all facets of data logging including the collection, filtering, parsing, and outputting logs across multiple nodes.

Fluentd comes with a small footprint and is resource-friendly, so you won't have to worry about running out of memory or your CPU being overutilized. Additionally, it boasts of a flexible plugin architecture where users can take advantage of over 500 community-developed plugins to extend its functionality.

4. LOGalyze

LOGalyze is a powerful network monitoring and log management tool that collects and parses logs from network devices, Linux, and Windows hosts. It was initially commercial but is now completely free to download and install without any limitations.

LOGalyze is ideal for analyzing server and application logs and presents them in various report formats such as PDF, CSV, and HTML. It also provides extensive search capabilities and real-time event detection of services across multiple nodes.

Like the aforementioned log monitoring tools, LOGalyze also provides a neat and simple web interface that allows users to log in and monitor various data sources and analyze log files .

5. NXlog

NXlog is yet another powerful and versatile tool for log collection and centralization. It's a multi-platform log management utility that is tailored to pick up policy breaches, identify security risks and analyze issues in system, application, and server logs.

NXlog can collate event logs from numerous endpoints in varying formats, including syslog and Windows event logs. It can perform a range of log-related tasks such as log rotation, log rewriting, and log compression, and can also be configured to send alerts.

You can download NXlog in two editions: the community edition, which is free to download and use, and the enterprise edition, which is subscription-based.


[Jan 27, 2021] Make Bash history more useful with these tips by Seth Kenlon

Notable quotes:
"... Manipulating history is usually less dangerous than it sounds, especially when you're curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's often best to use your session history to record your commands because, by slotting them into your history, you're running them and thereby testing the process. Very often, documenting without doing leads to overlooking small steps or writing minor details wrong. ..."
Jun 25, 2020 | opensource.com

To keep a command out of your history, you can place a space before it, as long as you have ignorespace in your HISTCONTROL environment variable. You can also delete an entry that has already been recorded by passing its number to history -d:

$ history | tail
535 echo "foo"
536 echo "bar"
$ history -d 536
$ history | tail
535 echo "foo"

You can clear your entire session history with the -c option:

$ history -c
$ history

History lessons

Manipulating history is usually less dangerous than it sounds, especially when you're curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's often best to use your session history to record your commands because, by slotting them into your history, you're running them and thereby testing the process. Very often, documenting without doing leads to overlooking small steps or writing minor details wrong.

Use your history sessions as needed, and exercise your power over history wisely. Happy history hacking!

[Jan 03, 2021] 9 things to do in your first 10 minutes on a new to you server

Jan 03, 2021 | opensource.com

1. First contact

As soon as I log into a server, the first thing I do is check whether it has the operating system, kernel, and hardware architecture needed for the tests I will be running. I often check how long a server has been up and running. While this does not matter very much for a test system because it will be rebooted multiple times, I still find this information helpful.

Use the following commands to get this information. I mostly use Red Hat Linux for testing, so if you are using another Linux distro, use *-release in the filename instead of redhat-release :

cat /etc/redhat-release
uname -a
hostnamectl
uptime

2. Is anyone else on board?

Once I know that the machine meets my test needs, I need to ensure no one else is logged into the system at the same time running their own tests. Although it is highly unlikely, given that the provisioning system takes care of this for me, it's still good to check once in a while -- especially if it's my first time logging into a server. I also check whether there are other users (other than root) who can access the system.

Use the following commands to find this information. The last command looks for users in the /etc/passwd file who have shell access; it skips other services in the file that do not have shell access or have a shell set to nologin :

who
who -Hu
grep "sh$" /etc/passwd

3. Physical or virtual machine

Now that I know I have the machine to myself, I need to identify whether it's a physical machine or a virtual machine (VM). If I provisioned the machine myself, I could be sure that I have what I asked for. However, if you are using a machine that you did not provision, you should check whether the machine is physical or virtual.

Use the following commands to identify this information. If it's a physical system, you will see the vendor's name (e.g., HP, IBM, etc.) and the make and model of the server; whereas, in a virtual machine, you should see KVM, VirtualBox, etc., depending on what virtualization software was used to create the VM:

dmidecode -s system-manufacturer
dmidecode -s system-product-name
lshw -c system | grep product | head -1
cat /sys/class/dmi/id/product_name
cat /sys/class/dmi/id/sys_vendor

4. Hardware

Because I often test hardware connected to the Linux machine, I usually work with physical servers, not VMs. On a physical machine, my next step is to identify the server's hardware capabilities -- for example, what kind of CPU is running, how many cores does it have, which flags are enabled, and how much memory is available for running tests. If I am running network tests, I check the type and capacity of the Ethernet or other network devices connected to the server.

Use the following commands to display the hardware connected to a Linux server. Some of the commands might be deprecated in newer operating system versions, but you can still install them from yum repos or switch to their equivalent new commands:

lscpu or cat /proc/cpuinfo
lsmem or cat /proc/meminfo
ifconfig -a
ethtool <devname>
lshw
lspci
dmidecode

5. Installed software

Testing software always requires installing additional dependent packages, libraries, etc. However, before I install anything, I check what is already installed (including what version it is), as well as which repos are configured, so I know where the software comes from, and I can debug any package installation issues.

Use the following commands to identify what software is installed:

rpm -qa
rpm -qa | grep <pkgname>
rpm -qi <pkgname>
yum repolist
yum repoinfo
yum install <pkgname>
ls -l /etc/yum.repos.d/

6. Running processes and services

Once I check the installed software, it's natural to check what processes are running on the system. This is crucial when running a performance test on a system -- if a running process, daemon, test software, etc. is eating up most of the CPU/RAM, it makes sense to stop that process before running the tests. This also checks that the processes or daemons the test requires are up and running. For example, if the tests require httpd to be running, the service to start the daemon might not have run even if the package is installed.

Use the following commands to identify running processes and enabled services on your system:

pstree -pa 1
ps -ef
ps auxf
systemctl

7. Network connections

Today's machines are heavily networked, and they need to communicate with other machines or services on the network. I identify which ports are open on the server, if there are any connections from the network to the test machine, if a firewall is enabled, and if so, is it blocking any ports, and which DNS servers the machine talks to.

Use the following commands to identify network services-related information. If a deprecated command is not available, install it from a yum repo or use the equivalent newer command:

netstat -tulpn
netstat -anp
lsof -i
ss
iptables -L -n
cat /etc/resolv.conf

8. Kernel

When doing systems testing, I find it helpful to know kernel-related information, such as the kernel version and which kernel modules are loaded. I also list any tunable kernel parameters and what they are set to and check the options used when booting the running kernel.

Use the following commands to identify this information:

uname -r
cat /proc/cmdline
lsmod
modinfo <module>
sysctl -a
cat /boot/grub2/grub.cfg

[Jan 02, 2021] 10 shortcuts to master bash by Guest Contributor

TechRepublic

If you've ever typed a command at the Linux shell prompt, you've probably already used bash -- after all, it's the default command shell on most modern GNU/Linux distributions.

The bash shell is the primary interface to the Linux operating system -- it accepts, interprets and executes your commands, and provides you with the building blocks for shell scripting and automated task execution.

Bash's unassuming exterior hides some very powerful tools and shortcuts. If you're a heavy user of the command line, these can save you a fair bit of typing. This document outlines 10 of the most useful tools:

  1. Easily recall previous commands

    Bash keeps track of the commands you execute in a history buffer, and allows you to recall previous commands by cycling through them with the Up and Down cursor keys. For even faster recall, "speed search" previously-executed commands by typing the first few letters of the command followed by the key combination Ctrl-R; bash will then scan the command history for matching commands and display them on the console. Type Ctrl-R repeatedly to cycle through the entire list of matching commands.

  2. Use command aliases

    If you always run a command with the same set of options, you can have bash create an alias for it. This alias will incorporate the required options, so that you don't need to remember them or manually type them every time. For example, if you always run ls with the -l option to obtain a detailed directory listing, you can use this command:

    bash> alias ls='ls -l' 

    To create an alias that automatically includes the -l option. Once this alias has been created, typing ls at the bash prompt will invoke the alias and produce the ls -l output.

    You can obtain a list of available aliases by invoking alias without any arguments, and you can delete an alias with unalias.

  3. Use filename auto-completion

    Bash supports filename auto-completion at the command prompt. To use this feature, type the first few letters of the file name, followed by Tab. bash will scan the current directory, as well as all other directories in the search path, for matches to that name. If a single match is found, bash will automatically complete the filename for you. If multiple matches are found, you will be prompted to choose one.

  4. Use key shortcuts to efficiently edit the command line

    Bash supports a number of keyboard shortcuts for command-line navigation and editing. The Ctrl-A key shortcut moves the cursor to the beginning of the command line, while the Ctrl-E shortcut moves the cursor to the end of the command line. The Ctrl-W shortcut deletes the word immediately before the cursor, while the Ctrl-K shortcut deletes everything immediately after the cursor. You can undo a deletion with Ctrl-Y.

  5. Get automatic notification of new mail

    You can configure bash to automatically notify you of new mail, by setting the $MAILPATH variable to point to your local mail spool. For example, the command:

    bash> MAILPATH='/var/spool/mail/john'
    bash> export MAILPATH 

    Causes bash to print a notification on john's console every time a new message is appended to John's mail spool.

  6. Run tasks in the background

    Bash lets you run one or more tasks in the background, and selectively suspend or resume any of the current tasks (or "jobs"). To run a task in the background, add an ampersand (&) to the end of its command line. Here's an example:

    bash> tail -f /var/log/messages &
    [1] 614

    Each task backgrounded in this manner is assigned a job ID, which is printed to the console. A task can be brought back to the foreground with the command fg jobnumber, where jobnumber is the job ID of the task you wish to bring to the foreground. Here's an example:

    bash> fg 1

    A list of active jobs can be obtained at any time by typing jobs at the bash prompt.

  7. Quickly jump to frequently-used directories

    You probably already know that the $PATH variable lists bash's "search path" -- the directories it will search when it can't find the requested file in the current directory. However, bash also supports the $CDPATH variable, which lists the directories the cd command will look in when attempting to change directories. To use this feature, assign a directory list to the $CDPATH variable, as shown in the example below:

    bash> CDPATH='.:~:/usr/local/apache/htdocs:/disk1/backups'
    bash> export CDPATH

    Now, whenever you use the cd command, bash will check all the directories in the $CDPATH list for matches to the directory name.

  8. Perform calculations

    Bash can perform simple arithmetic operations at the command prompt. To use this feature, simply type in the arithmetic expression you wish to evaluate at the prompt within double parentheses, as illustrated below. Bash will attempt to perform the calculation and return the answer.

    bash> echo $((16/2))
    8
  9. Customise the shell prompt

    You can customise the bash shell prompt to display -- among other things -- the current username and host name, the current time, the load average and/or the current working directory. To do this, alter the $PS1 variable, as below:

    bash> PS1='\u@\h:\w \@> '
    
    bash> export PS1
    root@medusa:/tmp 03:01 PM>

    This will display the name of the currently logged-in user, the host name, the current working directory and the current time at the shell prompt. You can obtain a list of symbols understood by bash from its manual page.

  10. Get context-specific help

    Bash comes with help for all built-in commands. To see a list of all built-in commands, type help. To obtain help on a specific command, type help command, where command is the command you need help on. Here's an example:

    bash> help alias
    ...some help text...

    Obviously, you can obtain detailed help on the bash shell by typing man bash at your command prompt at any time.

[Jan 02, 2021] How to convert from CentOS or Oracle Linux to RHEL

convert2rhel is an RPM package which contains a Python 2.x script written in a completely incomprehensible, over-modularized manner. Python obscurantism in action ;-)
It looks like a "black box" tool unless you know Python well. As such it is dangerous to rely upon.
Jan 02, 2021 | access.redhat.com

[Jan 02, 2021] Linux sysadmin basics- Start NIC at boot

Nov 14, 2019 | www.redhat.com

If you've ever booted a Red Hat-based system and have no network connectivity, you'll appreciate this quick fix.


It might surprise you to know that if you forget to flip the network interface card (NIC) switch to the ON position (shown in the image below) during installation, your Red Hat-based system will boot with the NIC disconnected:

Image: Setting the NIC to the ON position during installation.

But, don't worry, in this article I'll show you how to set the NIC to connect on every boot and I'll show you how to disable/enable your NIC on demand.

If your NIC isn't enabled at startup, you have to edit the /etc/sysconfig/network-scripts/ifcfg-NIC_name file, where NIC_name is your system's NIC device name. In my case, it's enp0s3. Yours might be eth0, eth1, em1, etc. List your network devices and their IP addresses with the ip addr command:

$ ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff

Note that my primary NIC (enp0s3) has no assigned IP address. I have virtual NICs because my Red Hat Enterprise Linux 8 system is a VirtualBox virtual machine. After you've figured out what your physical NIC's name is, you can now edit its interface configuration file:

$ sudo vi /etc/sysconfig/network-scripts/ifcfg-enp0s3

and change the ONBOOT="no" entry to ONBOOT="yes" as shown below:

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="enp0s3"
UUID="77cb083f-2ad3-42e2-9070-697cb24edf94"
DEVICE="enp0s3"
ONBOOT="yes"

Save and exit the file.

You don't need to reboot to start the NIC, but after you make this change, the primary NIC will be on and connected upon all subsequent boots.
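If you prefer not to open an editor, the same change can usually be made non-interactively. Here is a sketch (it assumes the interface is named enp0s3 and, for the nmcli variant, that the connection is managed by NetworkManager under that name):

$ sudo sed -i 's/^ONBOOT="no"/ONBOOT="yes"/' /etc/sysconfig/network-scripts/ifcfg-enp0s3
$ sudo nmcli connection modify enp0s3 connection.autoconnect yes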

To enable the NIC, use the ifup command:

ifup enp0s3

Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

Now the ip addr command displays the enp0s3 device with an IP address:

$ ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:81:d0:2d brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.64/24 brd 192.168.1.255 scope global dynamic noprefixroute enp0s3
       valid_lft 86266sec preferred_lft 86266sec
    inet6 2600:1702:a40:88b0:c30:ce7e:9319:9fe0/64 scope global dynamic noprefixroute 
       valid_lft 3467sec preferred_lft 3467sec
    inet6 fe80::9b21:3498:b83c:f3d4/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:4e:69:84 brd ff:ff:ff:ff:ff:ff

To disable a NIC, use the ifdown command. Please note that issuing this command from a remote system will terminate your session:

ifdown enp0s3

Connection 'enp0s3' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)

That's a wrap

It's frustrating to encounter a Linux system that has no network connection. It's more frustrating to have to connect to a virtual KVM or to walk up to the console to fix it. It's easy to miss the switch during installation; I've missed it myself. Now you know how to fix the problem and have your system network-connected on every boot, so before you drive yourself crazy with troubleshooting steps, try the ifup command to see if that's your easy fix.

Takeaways: ifup, ifdown, /etc/sysconfig/network-scripts/ifcfg-NIC_name

[Jan 02, 2021] Looking forward to Linux network configuration in the initial ramdisk (initrd)

Nov 24, 2020 | www.redhat.com
The need for an initrd

When you press a machine's power button, the boot process starts with a hardware-dependent mechanism that loads a bootloader . The bootloader software finds the kernel on the disk and boots it. Next, the kernel mounts the root filesystem and executes an init process.

This process sounds simple, and it might be what actually happens on some Linux systems. However, modern Linux distributions have to support a vast set of use cases for which this procedure is not adequate.

First, the root filesystem could be on a device that requires a specific driver. Before trying to mount the filesystem, the right kernel module must be inserted into the running kernel. In some cases, the root filesystem is on an encrypted partition and therefore needs a userspace helper that asks the passphrase to the user and feeds it to the kernel. Or, the root filesystem could be shared over the network via NFS or iSCSI, and mounting it may first require configured IP addresses and routes on a network interface.


To overcome these issues, the bootloader can pass to the kernel a small filesystem image (the initrd) that contains scripts and tools to find and mount the real root filesystem. Once this is done, the initrd switches to the real root, and the boot continues as usual.

The dracut infrastructure

On Fedora and RHEL, the initrd is built through dracut . From its home page , dracut is "an event-driven initramfs infrastructure. dracut (the tool) is used to create an initramfs image by copying tools and files from an installed system and combining it with the dracut framework, usually found in /usr/lib/dracut/modules.d ."

A note on terminology: Sometimes, the names initrd and initramfs are used interchangeably. They actually refer to different ways of building the image. An initrd is an image containing a real filesystem (for example, ext2) that gets mounted by the kernel. An initramfs is a cpio archive containing a directory tree that gets unpacked as a tmpfs. Nowadays, the initrd images are deprecated in favor of the initramfs scheme. However, the initrd name is still used to indicate the boot process involving a temporary filesystem.

Kernel command-line

Let's revisit the NFS-root scenario that was mentioned before. One possible way to boot via NFS is to use a kernel command-line containing the root=dhcp argument.

The kernel command-line is a list of options passed to the kernel from the bootloader, accessible to the kernel and applications. If you use GRUB, it can be changed by pressing the e key on a boot entry and editing the line starting with linux .

The dracut code inside the initramfs parses the kernel command-line and starts DHCP on all interfaces if the command-line contains root=dhcp . After obtaining a DHCP lease, dracut configures the interface with the parameters received (IP address and routes); it also extracts the value of the root-path DHCP option from the lease. The option carries an NFS server's address and path (which could be, for example, 192.168.50.1:/nfs/client ). Dracut then mounts the NFS share at this location and proceeds with the boot.

If there is no DHCP server providing the address and the NFS root path, the values can be configured explicitly in the command line:

root=nfs:192.168.50.1:/nfs/client ip=192.168.50.101:::24::ens2:none

Here, the first argument specifies the NFS server's address, and the second configures the ens2 interface with a static IP address.

There are two syntaxes to specify network configuration for an interface:

ip=<interface>:{dhcp|on|any|dhcp6|auto6}[:[<mtu>][:<macaddr>]]

ip=<client-IP>:[<peer>]:<gateway-IP>:<netmask>:<client_hostname>:<interface>:{none|off|dhcp|on|any|dhcp6|auto6|ibft}[:[<mtu>][:<macaddr>]]

The first can be used for automatic configuration (DHCP or IPv6 SLAAC), and the second for static configuration or a combination of automatic and static. Here some examples:

ip=enp1s0:dhcp
ip=192.168.10.30::192.168.10.1:24::enp1s0:none
ip=[2001:0db8::02]::[2001:0db8::01]:64::enp1s0:none

Note that if you pass an ip= option, but dracut doesn't need networking to mount the root filesystem, the option is ignored. To force network configuration without a network root, add rd.neednet=1 to the command line.
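For example, reusing the static example above, the following kernel command-line fragment configures enp1s0 in the initrd even when the root filesystem is on a local disk (addresses and interface name are illustrative):

ip=192.168.10.30::192.168.10.1:24::enp1s0:none rd.neednet=1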

You probably noticed that among automatic configuration methods, there is also ibft . iBFT stands for iSCSI Boot Firmware Table and is a mechanism to pass parameters about iSCSI devices from the firmware to the operating system. iSCSI (Internet Small Computer Systems Interface) is a protocol to access network storage devices. Describing iBFT and iSCSI is outside the scope of this article. What is important is that by passing ip=ibft to the kernel, the network configuration is retrieved from the firmware.

Dracut also supports adding custom routes, specifying the machine name and DNS servers, creating bonds, bridges, VLANs, and much more. See the dracut.cmdline man page for more details.

Network modules

The dracut framework included in the initramfs has a modular architecture. It comprises a series of modules, each containing scripts and binaries to provide specific functionality. You can see which modules are available to be included in the initramfs with the command dracut --list-modules .
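For instance, to check whether the two network modules discussed next are available on your build host (the exact module list varies between dracut versions):

# dracut --list-modules | grep network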

At the moment, there are two modules to configure the network: network-legacy and network-manager . You might wonder why different modules provide the same functionality.

network-legacy is older and uses shell scripts calling utilities like iproute2 , dhclient , and arping to configure interfaces. After the switch to the real root, a different network configuration service runs. This service is not aware of what the network-legacy module intended to do and the current state of each interface. This can lead to problems maintaining the state across the root switch boundary.

A prominent example of a state to be kept is the DHCP lease. If an interface's address changed during the boot, the connection to an NFS share would break, causing a boot failure.

To ensure a seamless transition, there is a need for a mechanism to pass the state between the two environments. However, passing the state between services having different configuration models can be a problem.

The network-manager dracut module was created to improve this situation. The module runs NetworkManager in the initrd to configure connection profiles generated from the kernel command-line. Once done, NetworkManager serializes its state, which is later read by the NetworkManager instance in the real root.

Fedora 31 was the first distribution to switch to network-manager in initrd by default. On RHEL 8.2, network-legacy is still the default, but network-manager is available. On RHEL 8.3, dracut will use network-manager by default.

Enabling a different network module

While the two modules should be largely compatible, there are some differences in behavior. Some of those are documented in the nm-initrd-generator man page. In general, it is suggested to use the network-manager module when NetworkManager is enabled.

To rebuild the initrd using a specific network module, use one of the following commands:

# dracut --add network-legacy  --force --verbose
# dracut --add network-manager --force --verbose

Since this change will be reverted the next time the initrd is rebuilt, you may want to make the change permanent in the following way:

# echo 'add_dracutmodules+=" network-manager "' > /etc/dracut.conf.d/network-module.conf
# dracut --regenerate-all --force --verbose

The --regenerate-all option also rebuilds all the initramfs images for the kernel versions found on the system.
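To verify which network module actually ended up in the current image, lsinitrd (shipped with dracut) can list the modules included in the default initramfs; a quick sanity check, not part of the original article:

# lsinitrd -m | grep network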

The network-manager dracut module

As with all dracut modules, the network-manager module is split into stages that are called at different times during the boot (see the dracut.modules man page for more details).

The first stage parses the kernel command-line by calling /usr/libexec/nm-initrd-generator to produce a list of connection profiles in /run/NetworkManager/system-connections . The second part of the module runs after udev has settled, i.e., after userspace has finished handling the kernel events for devices (including network interfaces) found in the system.

When NM is started in the real root environment, it registers on D-Bus, configures the network, and remains active to react to events or D-Bus requests. In the initrd, NetworkManager is run in the configure-and-quit=initrd mode, which doesn't register on D-Bus (since it's not available in the initrd, at least for now) and exits after reaching the startup-complete event.

The startup-complete event is triggered after all devices with a matching connection profile have tried to activate, successfully or not. Once all interfaces are configured, NM exits and calls dracut hooks to notify other modules that the network is available.

Note that the /run/NetworkManager directory containing generated connection profiles and other runtime state is copied over to the real root so that the new NetworkManager process running there knows exactly what to do.
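If you drop into an emergency shell inside the initrd (see rd.break in the Troubleshooting section below), you can inspect what the generator produced; a simple check, assuming the directory named above:

# ls /run/NetworkManager/system-connections/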

Troubleshooting

If you have network issues in dracut, this section contains some suggestions for investigating the problem.

The first thing to do is add rd.debug to the kernel command-line, enabling debug logging in dracut. Logs are saved to /run/initramfs/rdsosreport.txt and are also available in the journal.
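For example, after booting with rd.debug added to the kernel command-line, the saved report can be read directly, or the same messages can be pulled from the journal (a quick sketch):

# less /run/initramfs/rdsosreport.txt
# journalctl -b | grep -i dracut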

If the system doesn't boot, it is useful to get a shell inside the initrd environment to manually check why things aren't working. For this, there is an rd.break command-line argument. Note that the argument spawns a shell when the initrd has finished its job and is about to give control to the init process in the real root filesystem. To stop at a different stage of dracut (for example, after command-line parsing), use the following argument:

rd.break={cmdline|pre-udev|pre-trigger|initqueue|pre-mount|mount|pre-pivot|cleanup}

The initrd image contains a minimal set of binaries; if you need a specific tool at the dracut shell, you can rebuild the image, adding what is missing. For example, to add the ping and tcpdump binaries (including all their dependent libraries), run:

# dracut -f  --install "ping tcpdump"

and then optionally verify that they were included successfully:

# lsinitrd | grep "ping\|tcpdump"
Arguments: -f --install 'ping tcpdump'
-rwxr-xr-x   1 root     root        82960 May 18 10:26 usr/bin/ping
lrwxrwxrwx   1 root     root           11 May 29 20:35 usr/sbin/ping -> ../bin/ping
-rwxr-xr-x   1 root     root      1065224 May 29 20:35 usr/sbin/tcpdump
The generator

If you are familiar with NetworkManager configuration, you might want to know how a given kernel command-line is translated into NetworkManager connection profiles. This can be useful to better understand the configuration mechanism and find syntax errors in the command-line without having to boot the machine.

The generator is installed in /usr/libexec/nm-initrd-generator and must be called with the list of kernel arguments after a double dash. The --stdout option prints the generated connections on standard output. Let's try to call the generator with a sample command line:

$ /usr/libexec/nm-initrd-generator --stdout -- \
          ip=enp1s0:dhcp:00:99:88:77:66:55 rd.peerdns=0

802-3-ethernet.cloned-mac-address: '99:88:77:66:55' is not a valid MAC
address

In this example, the generator reports an error because there is a missing field for the MTU after enp1s0 . Once the error is corrected, the parsing succeeds and the tool prints out the connection profile generated:

$ /usr/libexec/nm-initrd-generator --stdout -- \
        ip=enp1s0:dhcp::00:99:88:77:66:55 rd.peerdns=0

*** Connection 'enp1s0' ***

[connection]
id=enp1s0
uuid=e1fac965-4319-4354-8ed2-39f7f6931966
type=ethernet
interface-name=enp1s0
multi-connect=1
permissions=

[ethernet]
cloned-mac-address=00:99:88:77:66:55
mac-address-blacklist=

[ipv4]
dns-search=
ignore-auto-dns=true
may-fail=false
method=auto

[ipv6]
addr-gen-mode=eui64
dns-search=
ignore-auto-dns=true
method=auto

[proxy]

Note how the rd.peerdns=0 argument translates into the ignore-auto-dns=true property, which makes NetworkManager ignore DNS servers received via DHCP. An explanation of NetworkManager properties can be found on the nm-settings man page.


Conclusion

The NetworkManager dracut module is enabled by default in Fedora and will also soon be enabled on RHEL. It brings better integration between networking in the initrd and NetworkManager running in the real root filesystem.

While the current implementation is working well, there are some ideas for possible improvements. One is to abandon the configure-and-quit=initrd mode and run NetworkManager as a daemon started by a systemd service. In this way, NetworkManager will be run in the same way as when it's run in the real root, reducing the code to be maintained and tested.

To completely drop the configure-and-quit=initrd mode, NetworkManager should also be able to register on D-Bus in the initrd. Currently, dracut doesn't have any module providing a D-Bus daemon because the image should be minimal. However, there are already proposals to include it as it is needed to implement some new features.

With D-Bus running in the initrd, NetworkManager's powerful API will be available to other tools to query and change the network state, unlocking a wide range of applications. One of those is to run nm-cloud-setup in the initrd. The service, shipped in the NetworkManager-cloud-setup Fedora package, fetches metadata from cloud providers' infrastructure (EC2, Azure, GCP) to automatically configure the network.

[Jan 02, 2021] 11 Linux command line guides you shouldn't be without - Enable Sysadmin

Jan 02, 2021 | www.redhat.com

Here are some brief comments about each topic:

  1. How to use the Linux mtr command - The mtr (My Traceroute) command is a major improvement over the old traceroute and is one of my first go-to tools when troubleshooting network problems.
  2. Linux for beginners: 10 commands to get you started at the terminal - Everyone who works on the Linux CLI needs to know some basic commands for moving around the directory structure and exploring files and directories. This article covers those commands in a simple way that places them into a usable context for those of us new to the command line.
  3. Linux for beginners: 10 more commands for manipulating files - One of the most common tasks we all do, whether as a Sysadmin or a regular user, is to manage and manipulate files.
  4. More stupid Bash tricks: Variables, find, file descriptors, and remote operations - These tricks are actually quite smart, and if you want to learn the basics of Bash along with standard IO streams (STDIO), this is a good place to start.
  5. Getting started with systemctl - Do you need to enable, disable, start, and stop systemd services? Learn the basics of systemctl – a powerful tool for managing systemd services and more.
  6. How to use the uniq command to process lists in Linux - Ever had a list in which items can appear multiple times where you only need to know which items appear in the list but not how many times?
  7. A beginner's guide to gawk - gawk is a command line tool that can be used for simple text processing in Bash and other scripts. It is also a powerful language in its own right.
  8. An introduction to the diff command - Sometimes it is important to know the difference.
  9. Looking forward to Linux network configuration in the initial ramdisk (initrd) - The initrd is a critical part of the very early boot process for Linux. Here is a look at what it is and how it works.
  10. Linux troubleshooting: Setting up a TCP listener with ncat - Network troubleshooting sometimes requires tracking specific network packets based on complex filter criteria or just determining whether a connection can be made.
  11. Hard links and soft links in Linux explained - The use cases for hard and soft links can overlap but it is how they differ that makes them both important – and cool.

[Jan 02, 2021] Reference file descriptors

Jan 02, 2021 | www.redhat.com

In the Bash shell, file descriptors (FDs) are important in managing the input and output of commands. Many people have issues understanding file descriptors correctly. Each process has three default file descriptors, namely:

Code  Meaning          Location      Description
0     Standard input   /dev/stdin    Keyboard, file, or some stream
1     Standard output  /dev/stdout   Monitor, terminal, display
2     Standard error   /dev/stderr   Error messages; usually shown on the display

Now that you know what the default FDs do, let's see them in action. I start by creating a directory named foo , which contains file1 .

$> ls foo/ bar/
ls: cannot access 'bar/': No such file or directory
foo/:
file1

The output No such file or directory goes to Standard Error (stderr) and is also displayed on the screen. I will run the same command, but this time use 2> to omit stderr:

$> ls foo/ bar/ 2>/dev/null
foo/:
file1

It is possible to send the output of the ls command to Standard Output (stdout) and to a file simultaneously, while ignoring stderr. For example:

$> { ls foo bar | tee -a ls_out_file ;} 2>/dev/null
foo:
file1

Then:

$> cat ls_out_file
foo:
file1

The following command sends stdout to a file and stderr to /dev/null so that the error won't display on the screen:

$> ls foo/ bar/ >to_stdout 2>/dev/null
$> cat to_stdout
foo/:
file1

The following command sends stdout and stderr to the same file:

$> ls foo/ bar/ >mixed_output 2>&1
$> cat mixed_output
ls: cannot access 'bar/': No such file or directory
foo/:
file1

This is what happened in the last example, where stdout and stderr were redirected to the same file:

    ls foo/ bar/ >mixed_output 2>&1
             |          |
             |          Redirect stderr to where stdout is sent
             |                                                        
             stdout is sent to mixed_output

Another short trick (> Bash 4.4) to send both stdout and stderr to the same file uses the ampersand sign. For example:

$> ls foo/ bar/ &>mixed_output

Here is a more complex redirection:

exec 3>&1 >write_to_file; echo "Hello World"; exec 1>&3 3>&-

This is what occurs:
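Step by step, the one-liner above is equivalent to the following sequence (comments added; write_to_file is just the example filename):

exec 3>&1             # duplicate the current stdout (the terminal) as FD 3
exec 1>write_to_file  # redirect stdout to the file
echo "Hello World"    # this output goes into write_to_file, not to the screen
exec 1>&3             # restore stdout from the copy saved on FD 3
exec 3>&-             # close FD 3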

Often it is handy to group commands and then send their Standard Error to a single file. For example:

$> { ls non_existing_dir; non_existing_command; echo "Hello world"; } 2> to_stderr
Hello world

As you can see, only "Hello world" is printed on the screen, but the output of the failed commands is written to the to_stderr file.

[Jan 01, 2021] Looks like Oracle can potentially pick up up to 65% of CentOS users

Jan 01, 2021 | forums.centos.org

What do you think of the recent Red Hat announcement about CentOS Linux/Stream?

I can use either CentOS Linux or Stream and it makes no difference to me -- 6 votes (11%)
I will switch reluctantly to CentOS Stream but I'd rather not -- 2 votes (4%)
I depend on CentOS Linux 8 and its stability and now I need a new alternative -- 10 votes (19%)
I love the idea of CentOS Stream and can't wait to use it -- 1 vote (2%)
I'm off to a different distribution before CentOS 8 sunsets at the end of 2021 -- 13 votes (24%)
I feel completely betrayed by this decision and will avoid Red Hat solutions in future -- 22 votes (41%)

Total votes: 54

[Jan 01, 2021] Oracle Linux DTrace

Jan 01, 2021 | www.oracle.com

... DTrace gives the operational insights that have long been missing in the data center, such as memory consumption, CPU time or what specific function calls are being made.

Developers can learn about and experiment with DTrace on Oracle Linux by installing the appropriate RPMs:

[Jan 01, 2021] Oracle Linux vs. Red Hat Enterprise Linux by Jim Brull

Jan 05, 2019 | www.centroid.com

... ... ...

Here's what we found.

[Jan 01, 2021] Consider looking at openSUSE (still run out of Germany)

Jan 01, 2021 | www.reddit.com

If you are on CentOS-7 then you will probably be okay until RedHat pulls the plug on 2024-06-30, so don't do anything rash. If you are on CentOS-8 then your days are numbered (to ~365) because this OS will shift from major-minor point updates to a streaming model at the end of 2021. Let's look at two early founders: SUSE started in Germany in 1991 whilst RedHat started in America a year later. SUSE sells support for SLE (SUSE Linux Enterprise), which means you need a license to install-run-update-upgrade it. Likewise, RedHat sells support for RHEL (Red Hat Enterprise Linux). SUSE also offers "openSUSE Leap" (released once a year as a major-minor point release of SLE) and "openSUSE Tumbleweed" (which is a streaming thingy). A couple of days ago I installed "openSUSE Leap" onto an old HP-Compaq 6000 desktop just to try it out (the installer actually had a few features I liked better than the CentOS-7 installer). When I get back to the office in two weeks, I'm going to try installing "openSUSE Leap" onto an HP-DL385p_gen8. I'll work with it for a few months and, if I am comfortable, I will migrate my employer's solution over to "openSUSE Leap".

Parting thoughts:

  1. openSUSE is run out of Germany. IMHO switching over to a European distro is similar to those database people who preferred MariaDB to MySQL when Oracle was still hoping that MySQL would die from neglect.

  2. Someone cracked off to me the other day that now that IBM is pulling strings at "Red Hat", that the company should be renamed "Blue Hat"


general-noob 4 points · 3 days ago

I downloaded and tried it last week and was actually pretty impressed. I have only ever tested SUSE in the past. Honestly, I'll stick with Red Hat/CentOS whatever, but I was still impressed. I'd recommend people take a look.

servingwater 2 points · 3 days ago

I have been playing with OpenSUSE a bit, too. Very solid this time around. In the past I never had any luck with it. But Leap 15.2 is doing fine for me. Just testing it virtually. TW also is pretty sweet and if I were to use a rolling release, it would be among the top contenders.

One thing I don't like with OpenSUSE is that you can't really disable the root account (or are not supposed to, I guess). You can't do it at install: if you leave the root password blank, SUSE will just assign the password of the user you created to it.
Of course, afterwards you can disable it with the proper commands, but it becomes a pain with YaST, as YaST insists on being opened by root.

neilrieck 2 points · 2 days ago

Thanks for that heads-up about root

gdhhorn 1 point · 2 days ago

One thing I don't like with OpenSUSE is that you can't really disable the root account (or are not supposed to, I guess). You can't do it at install: if you leave the root password blank, SUSE will just assign the password of the user you created to it.

I'm running Leap 15.2 on the laptops my kids run for school. During installation, I simply deselected the option for the account used to be an administrator; this required me to set a different password for administrative purposes.

Perhaps I'm misunderstanding your comment.

servingwater 1 point · 2 days ago

I think you might.
My point is/was that if I choose my regular user to be admin, I don't expect the system to create and activate a root account anyway and then just assign it my password.
I expect the root account to be disabled.

gdhhorn 2 points · 2 days ago

I didn't realize it made a user, 'root,' and auto generated a password. I'd always assumed if I said to make the user account admin, that was it.

TIL, thanks.

servingwater 1 point · 2 days ago

I was surprised, too. I was a bit "shocked" when I realized, after the install, that I could log in as root with my user password.
At the very least, IMHO, it should still have you set a root password, even if you choose to make your user admin.
For one, it lets you know that OpenSUSE is not disabling root, and for another, it gives you a chance to give it a different password.
But other than that subjective issue I found OpenSUSE Leap a very solid distro.

[Jan 01, 2021] What about the big academic labs? (Fermilab, CERN, DESY, etc)

Jan 01, 2021 | www.reddit.com

The big academic labs (Fermilab, CERN and DESY, to name only three of many) used to run something called Scientific Linux, which was also maintained by Red Hat. See: https://scientificlinux.org/ and https://en.wikipedia.org/wiki/Scientific_Linux . Shortly after Red Hat acquired CentOS in 2014, Red Hat convinced the big academic labs to begin migrating over to CentOS (no one at that time thought that Red Hat would become Blue Hat).

phil_g 14 points · 2 days ago

To clarify, as a user of Scientific Linux:

Scientific Linux is not and was not maintained by Red Hat. Like CentOS, when it was truly a community distribution, Scientific Linux was an independent rebuild of the RHEL source code published by Red Hat. It is maintained primarily by people at Fermilab. (It's slightly different from CentOS in that CentOS aimed for binary compatibility with RHEL, while that is not a goal of Scientific Linux. In practice, SL often achieves binary compatibility, but if you have issues with that, it's more up to you to fix them than the SL maintainers.)

I don't know anything about Red Hat convincing institutions to stop using Scientific Linux; the first I heard about the topic was in April 2019 when Fermilab announced there would be no Scientific Linux 8. (They may reverse that decision. At the moment, they're "investigating the best path forward", with a decision to be announced in the first few months of 2021.)

neilrieck 4 points · 2 days ago

I fear you are correct. I just stumbled onto this article: https://www.linux.com/training-tutorials/scientific-linux-great-distro-wrong-name/ Even the wikipedia article states "This product is derived from the free and open-source software made available by Red Hat, but is not produced, maintained or supported by them." But it does seem that Scientific Linux was created as a replacement for Fermilab Linux. I've also seen references to CC7 to mean "Cern Centos 7". CERN is keeping their Linux page up to date because what I am seeing here ( https://linux.web.cern.ch/ ) today is not what I saw 2-weeks ago.

There are

Niarbeht 16 points · 2 days ago

There are

Uh oh, guys, they got him!

deja_geek 9 points · 2 days ago

RedHat didn't convince them to stop using Scientific Linux, Fermilab no longer needed to have their own rebuild of RHEL sources. They switched to CentOS and modified CentOS if they needed to (though I don't really think they needed to)

meat_bunny 10 points · 2 days ago

Maintaining your own distro is a pain in the ass.

My crystal ball says they'll just use whatever RHEL rebuild floats to the top in a few months like the rest of us.

carlwgeorge 2 points · 2 days ago

SL has always been an independent rebuild. It has never been maintained, sponsored, or owned by Red Hat. They decided on their own to not build 8 and instead collaborate on CentOS. They even gained representation on the CentOS board (one from Fermi, one from CERN).

I'm not affiliated with any of those organizations, but my guess is they will switch to some combination of CentOS Stream and RHEL (under the upcoming no/low cost program).

VestoMSlipher 1 point · 11 hours ago

https://linux.web.cern.ch/#information-on-change-of-end-of-life-for-centos-8

[Jan 01, 2021] CentOS HAS BEEN CANCELLED !!!

Jan 01, 2021 | forums.centos.org

Re: CentOS HAS BEEN CANCELLED !!!

Post by whoop » 2020/12/08 20:00:36

Is anybody considering switching to RHEL's free non-production developer subscription? As I understand it, it is free and receives updates.
The only downside as I understand it is that you have to renew your license every year (and that you can't use it in commercial production).

[Jan 01, 2021] package management - yum distro-sync

Jan 01, 2021 | askubuntu.com

In redhat-based distros, the yum tool has a distro-sync command which will synchronize installed packages with the current repositories. This command is useful for returning to a base state if base packages have been modified from an outside source. The documentation for the command says:

distribution-synchronization or distro-sync Synchronizes the installed package set with the latest packages available, this is done by either obsoleting, upgrading or downgrading as appropriate. This will "normally" do the same thing as the upgrade command however if you have the package FOO installed at version 4, and the latest available is only version 3, then this command will downgrade FOO to version 3.
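A short usage sketch (FOO is the placeholder package name from the excerpt above):

# Synchronize every installed package with the enabled repositories
yum distro-sync

# Limit the operation to a single package
yum distro-sync FOO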

[Dec 30, 2020] Switching from CentOS to Oracle Linux: a hands-on example

In view of such effective and free promotion of Oracle Linux by IBM/Red Hat brass as the top replacement for CentOS, the script can probably be slightly enhanced.
The script works well for simple systems, but it still has some sharp edges. Checks for common bottlenecks should be added: for example, the amount of free space in /boot should be checked if it is a separate filesystem, and that is not done. Also, if the script is invoked a second time after a failure of the step "Installing base packages for Oracle Linux...", it can remove hundreds of system RPMs (including sshd, cron, and several other vital packages ;-).
Failures at this step are probably the most common type of failure in the conversion, and inexperienced sysadmins, or even experienced sysadmins in a hurry, often make the blunder of running the script a second time.
This probably happens due to the line 'yum remove -y "${new_releases[@]}"' in the function remove_repos (line 65 in the current version of the script): in their excessive zeal to restore the system after an error, the programmers did not take into account that in certain situations the packages they want to delete via YUM have dependencies, and a lot of them, so yum blindly deletes over 300 packages, including such vital ones as sshd, cron, etc. Because of this, execution of the script should probably be blocked if Oracle repositories are already present; this check is absent.
After this "mass extinction of RPM packages" event, you need to be pretty well versed in yum to recover. The names of the deleted packages are in the yum log, so you can reinstall them, and sometimes that helps. In other cases the system remains unbootable and a restore from backup is the only option.
Due to the sudden surge in popularity of Oracle Linux caused by the Red Hat CentOS 8 fiasco, the script can definitely benefit from better diagnostics; the current diagnostics are very rudimentary. It might also make sense to make the steps modular in the classic /etc/init.d fashion and make the initial steps skippable, so that the script can be resumed after an error. Most of the steps have few dependencies, which can be resolved by saving variables during the first run and sourcing them if the first step to be executed is not step 1.
Also, it makes sense to check the amount of free space in the /boot filesystem if /boot is separate. The script requires approximately 100MB of free space there, and failure to write a new kernel due to lack of free space leads to a "half-baked" installation that is difficult to recover from without senior sysadmin skills.
See additional considerations about how to enhance the script at http://www.softpanorama.org/Commercial_linuxes/Oracle_linux/conversion_of_centos_to_oracle_linux.shtml
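In the spirit of the checks suggested above, a minimal pre-flight sketch could look like the following; the 100MB figure comes from this note, and the repository file glob is an assumption about how Oracle names its repo files, so adjust both to your environment:

# Refuse to run if Oracle Linux repositories are already configured
if ls /etc/yum.repos.d/oracle-linux-*.repo >/dev/null 2>&1; then
    echo "Oracle Linux repos already present; refusing to run the conversion again." >&2
    exit 1
fi

# Warn if /boot (when it is a separate filesystem) has less than ~100MB free
avail_mb=$(df -m --output=avail /boot | tail -1 | tr -d ' ')
[ "$avail_mb" -ge 100 ] || echo "Warning: only ${avail_mb}MB free in /boot" >&2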
Dec 15, 2020 Simon Coter Blog

... ... ...

We published a blog post earlier this week that explains why , but here is the TL;DR version:

For these reasons, we created a simple script to allow users to switch from CentOS to Oracle Linux about five years ago. This week, we moved the script to GitHub to allow members of the CentOS community to help us improve and extend the script to cover more CentOS respins and use cases.

The script can switch CentOS Linux 6, 7 or 8 to the equivalent version of Oracle Linux. Let's take a look at just how simple the process is.

Download the centos2ol.sh script from GitHub

The simplest way to get the script is to use curl :

$ curl -O https://raw.githubusercontent.com/oracle/centos2ol/main/centos2ol.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 10747 100 10747 0 0 31241 0 --:--:-- --:--:-- --:--:-- 31241

If you have git installed, you could clone the git repository from GitHub instead.

Run the centos2ol.sh script to switch to Oracle Linux

To switch to Oracle Linux, just run the script as root using sudo :

$ sudo bash centos2ol.sh

Sample output of the script run.

As part of the process, the default kernel is switched to the latest release of Oracle's Unbreakable Enterprise Kernel (UEK) to enable extensive performance and scalability improvements to the process scheduler, memory management, file systems, and the networking stack. We also replace the existing CentOS kernel with the equivalent Red Hat Compatible Kernel (RHCK) which may be required by any specific hardware or application that has imposed strict kernel version restrictions.

Switching the default kernel (optional)

Once the switch is complete, but before rebooting, the default kernel can be changed back to the RHCK. First, use grubby to list all installed kernels:

[demo@c8switch ~]$ sudo grubby --info=ALL | grep ^kernel
[sudo] password for demo:
kernel="/boot/vmlinuz-5.4.17-2036.101.2.el8uek.x86_64"
kernel="/boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64"
kernel="/boot/vmlinuz-4.18.0-193.el8.x86_64"
kernel="/boot/vmlinuz-0-rescue-0dbb9b2f3c2744779c72a28071755366"

In the output above, the first entry (index 0) is UEK R6, based on the mainline kernel version 5.4. The second kernel is the updated RHCK (Red Hat Compatible Kernel) installed by the switch process, while the third one is the kernel that was installed by CentOS, and the final entry is the rescue kernel.

Next, use grubby to verify that UEK is currently the default boot option:

[demo@c8switch ~]$ sudo grubby --default-kernel
/boot/vmlinuz-5.4.17-2036.101.2.el8uek.x86_64

To replace the default kernel, you need to specify either the path to its vmlinuz file or its index. Use grubby to get that information for the replacement:

[demo@c8switch ~]$ sudo grubby --info /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
index=1
kernel="/boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64"
args="ro crashkernel=auto resume=/dev/mapper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet $tuned_params"
root="/dev/mapper/cl-root"
initrd="/boot/initramfs-4.18.0-240.1.1.el8_3.x86_64.img $tuned_initrd"
title="Oracle Linux Server (4.18.0-240.1.1.el8_3.x86_64) 8.3"
id="0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64"

Finally, use grubby to change the default kernel, either by providing the vmlinuz path:

[demo@c8switch ~]$ sudo grubby --set-default /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64
The default is /boot/loader/entries/0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64

Or its index:

[demo@c8switch ~]$ sudo grubby --set-default-index 1
The default is /boot/loader/entries/0dbb9b2f3c2744779c72a28071755366-4.18.0-240.1.1.el8_3.x86_64.conf with index 1 and kernel /boot/vmlinuz-4.18.0-240.1.1.el8_3.x86_64

Changing the default kernel can be done at any time, so we encourage you to take UEK for a spin before switching back.

It's easy to access, try it out.

For more information visit oracle.com/linux .

[Dec 30, 2020] Lazy Linux: 10 essential tricks for admins by Vallard Benincosa

The original link to the article by Vallard Benincosa, published on 20 Jul 2008 in IBM DeveloperWorks, disappeared due to yet another reorganization of the IBM website that killed old content. Money-greedy incompetents are what current upper IBM managers really are...
Jul 20, 2008 | benincosa.com

How to be a more productive Linux systems administrator

Learn these 10 tricks and you'll be the most powerful Linux® systems administrator in the universe...well, maybe not the universe, but you will need these tips to play in the big leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples accompany each trick, so you can duplicate them on your own systems.

The best systems administrators are set apart by their efficiency. And if an efficient systems administrator can do a task in 10 minutes that would take another mortal two hours to complete, then the efficient systems administrator should be rewarded (paid more) because the company is saving time, and time is money, right?

The trick is to prove your efficiency to management. While I won't attempt to cover that trick in this article, I will give you 10 essential gems from the lazy admin's bag of tricks. These tips will save you time -- and even if you don't get paid more money to be more efficient, you'll at least have more time to play Halo.

Trick 1: Unmounting the unresponsive DVD drive

The newbie states that when he pushes the Eject button on the DVD drive of a server running a certain Redmond-based operating system, it will eject immediately. He then complains that, in most enterprise Linux servers, if a process is running in that directory, then the ejection won't happen. For too long as a Linux administrator, I would reboot the machine and get my disk on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD drive. But this is ineffective.

Here's how you find the process that holds your DVD drive and eject it to your heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the DVD drive:

# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done

Now open up a second terminal and try to eject the DVD drive:

# eject

You'll get a message like:

umount: /media/cdrom: device is busy

Before you free it, let's find out who is using it.

# fuser /media/cdrom

You see the process was running and, indeed, it is our fault we can not eject the disk.

Now, if you are root, you can exercise your godlike powers and kill processes:

# fuser -k /media/cdrom

Boom! Just like that, freedom. Now solemnly unmount the drive:

# eject

fuser is good.

Trick 2: Getting your screen back when it's hosed

Try this:

# cat /bin/cat

Behold! Your terminal looks like garbage. Everything you type looks like you're looking into the Matrix. What do you do?

You type reset . But wait you say, typing reset is too close to typing reboot or shutdown . Your palms start to sweat -- especially if you are doing this on a production machine.

Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead, do it:

# reset

Now your screen is back to normal. This is much better than closing the window and then logging in again, especially if you just went through five machines to SSH to this machine.

Trick 3: Collaboration with screen

David, the high-maintenance user from product engineering, calls: "I need you to help me understand why I can't compile supercode.c on these new machines you deployed."

"Fine," you say. "What machine are you on?"

David responds: " Posh." (Yes, this fictional company has named its five production servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another machine become David:

# su - david

Then you go over to posh:

# ssh posh

Once you are there, you run:

# screen -S foo

Then you holler at David:

"Hey David, run the following command on your terminal: # screen -x foo ."

This will cause your and David's sessions to be joined together in the holy Linux shell. You can type or he can type, but you'll both see what the other is doing. This saves you from walking to the other floor and lets you both have equal control. The benefit is that David can watch your troubleshooting skills and see exactly how you solve problems.

At last you both see what the problem is: David's compile script hard-coded an old directory that does not exist on this new server. You mount it, recompile, solve the problem, and David goes back to work. You then go back to whatever lazy activity you were doing before.

The one caveat to this trick is that you both need to be logged in as the same user. Other cool things you can do with the screen command include having multiple windows and split screens. Read the man pages for more on that.

But I'll give you one last tip while you're in your screen session. To detach from it and leave it open, type: Ctrl-A D . (I mean, hold down the Ctrl key and strike the A key. Then push the D key.)

You can then reattach by running the screen -x foo command again.

Trick 4: Getting back the root password

You forgot your root password. Nice work. Now you'll just have to reinstall the entire machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to get on the machine and change the password. This doesn't work in all cases (like if you made a GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS Linux example.

First reboot the system. When it reboots you'll come to the GRUB screen as shown in Figure 1. Move the arrow key so that you stay on this screen instead of proceeding all the way to a normal boot.


Figure 1. GRUB screen after reboot

Next, select the kernel that will boot with the arrow keys, and type E to edit the kernel line. You'll then see something like Figure 2:


Figure 2. Ready to edit the kernel line

Use the arrow key again to highlight the line that begins with kernel , and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments as shown in Figure 3:


Figure 3. Append the argument with the number 1

Then press Enter , B , and the kernel will boot up to single-user mode. Once here you can run the passwd command, changing password for user root:

sh-3.00# passwd
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully

Now you can reboot, and the machine will boot up with your new password.

Trick 5: SSH back door

Many times I'll be at a site where I need remote support from someone who is blocked on the outside by a company firewall. Few people realize that if you can get out to the world through a firewall, then it is relatively easy to open a hole so that the world can come into you.

In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH back door . To use it, you'll need a machine on the Internet that you can use as an intermediary.

In our example, we'll call our machine blackbox.example.com. The machine behind the company firewall is called ginger. Finally, the machine that technical support is on will be called tech. Figure 4 explains how this is set up.


Figure 4. Poking a hole in the firewall

Here's how to proceed:

  1. Check that what you're doing is allowed, but make sure you ask the right people. Most people will cringe that you're opening the firewall, but what they don't understand is that it is completely encrypted. Furthermore, someone would need to hack your outside machine before getting into your company. Instead, you may belong to the school of "ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me if this doesn't go your way.
  2. SSH from ginger to blackbox.example.com with the -R flag. I'll assume that you're the root user on ginger and that tech will need the root user ID to help you with the system. With the -R flag, you'll forward instructions of port 2222 on blackbox to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're not putting ginger out on the Internet naked.

    You can do this with the following syntax:

    ~# ssh -R 2222:localhost:22 thedude@blackbox.example.com

    Once you are into blackbox, you just need to stay logged in. I usually enter a command like:

    thedude@blackbox:~$ while [ 1 ]; do date; sleep 300; done

    to keep the machine busy. And minimize the window.

  3. Now instruct your friends at tech to SSH as thedude into blackbox without using any special SSH flags. You'll have to give them your password:

    root@tech:~# ssh thedude@blackbox.example.com .

  4. Once tech is on the blackbox, they can SSH to ginger using the following command:

    thedude@blackbox:~$: ssh -p 2222 root@localhost

  5. Tech will then be prompted for a password. They should enter the root password of ginger.
  6. Now you and support from tech can work together and solve the problem. You may even want to use screen together! (See Trick 3.)
Trick 6: Remote VNC session through an SSH tunnel

VNC or virtual network computing has been around a long time. I typically find myself needing to use it when the remote server has some type of graphical program that is only available on that server.

For example, suppose in Trick 5 , ginger is a storage server. Many storage devices come with a GUI program to manage the storage controllers. Often these GUI management tools need a direct connection to the storage through a network that is at times kept in a private subnet. Therefore, the only way to access this GUI is to do it from ginger.

You can try SSH'ing to ginger with the -X option and launch it that way, but many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much more network-friendly tool and is readily available for nearly all operating systems.

Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports instead. Here's what you do:

  1. Start a VNC server session on ginger. This is done by running something like:

    root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99

    The options tell the VNC server to start up with a resolution of 1024x768 and a pixel depth of 24 bits per pixel. If you are using a really slow connection, a depth of 8 may be a better option. Using :99 specifies the port the VNC server will be accessible from. The VNC protocol starts at 5900, so specifying :99 means the server is accessible from port 5999.

    When you start the session, you'll be asked to specify a password. The user ID will be the same user that you launched the VNC server from. (In our case, this is root.)

  2. SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger. This is done from ginger by running the command:

    root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com

    Once you run this command, you'll need to keep this SSH session open in order to keep the port forwarded to ginger. At this point if you were on blackbox, you could now access the VNC session on ginger by just running:

    thedude@blackbox:~$ vncviewer localhost:99

    That would forward the port through SSH to ginger. But we're interested in letting tech get VNC access to ginger. To accomplish this, you'll need another tunnel.

  3. From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox. This would be done by running:

    root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com

    This time the SSH flag we used was -L , which instead of pushing 5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session open. Now you're ready to VNC from tech!

  4. From tech, VNC to ginger by running the command:

    root@tech:~# vncviewer localhost:99 .

    Tech will now have a VNC session directly to ginger.

While the effort might seem like a bit much to set up, it beats flying across the country to fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.

Let me add a trick to this trick: If tech was running the Windows® operating system and didn't have a command-line SSH client, then tech can run Putty. Putty can be set to forward SSH ports by looking in the options in the sidebar. If the port were 5902 instead of our example of 5999, then you would enter something like in Figure 5.


Figure 5. Putty can forward SSH ports for tunneling

If this were set up, then tech could VNC to localhost:2 just as if tech were running the Linux operating system.

Trick 7: Checking your bandwidth

Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a client node named beckham. Company A has decided they really want to get more bandwidth out of ginger because they have lots of nodes they want to have NFS mount ginger's shared filesystem.

The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together. This is cheapest because usually you have an extra on-board NIC and an extra port on your switch somewhere.

So they do this. But now the question is: How much bandwidth do they really have?

Gigabit Ethernet has a theoretical limit of about 125MBps. Where does that number come from? Well,

1Gb/s = 1000Mb/s ; 1000Mb/8 = 125MB ; "b" = "bits," "B" = "bytes"

But what is it that we actually see, and what is a good way to measure it? One tool I suggest is iperf. You can grab iperf like this:

# wget http://dast.nlanr.net/Projects/Iperf2.0/iperf-2.0.2.tar.gz

You'll need to install it on a shared filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it in the home directory of the bob user that is viewable on both nodes:

tar zxvf iperf*gz
cd iperf-2.0.2
./configure -prefix=/home/bob/perf
make
make install

On ginger, run:

# /home/bob/perf/bin/iperf -s -f M

This machine will act as the server and print out performance speeds in MBps.

On the beckham node, run:

# /home/bob/perf/bin/iperf -c ginger -P 4 -f M -w 256k -t 60

You'll see output in both screens telling you what the speed is. On a normal server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.

In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this gives you a good indication that your bandwidth is going to be about what you'd expect. If you see something much less, then you should check for a problem.

I recently ran into a case in which the bonding driver was used to bond two NICs that used different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet cards together!

Trick 8: Command-line scripting and utilities

A Linux systems administrator becomes more efficient by using command-line scripting with authority. This includes crafting loops and knowing how to parse data using utilities like awk , grep , and sed . There are many cases where doing so takes fewer keystrokes and lessens the likelihood of user errors.

For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you are about to install. The long way would be to add IP addresses in vi or your favorite text editor. However, it can be done by taking the already existing /etc/hosts file and appending the following to it by running this on the command line:

# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
done >>/etc/hosts

Two hundred host names, n001 through n200, will then be created with IP addresses 192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of inadvertently creating duplicate IP addresses or host names, so this is a good example of using the built-in command line to eliminate user errors. Please note that this is done in the bash shell, the default in most Linux distributions.

As another example, let's suppose you want to check that the memory size is the same in each of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or parallel shell would be the best practice, but for the sake of illustration, here's a way to do this using SSH.

Assume the SSH is set up to authenticate without a password. Then run:

# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort | uniq

A command line like this looks pretty terse. (It can be worse if you put regular expressions in it.) Let's pick it apart and uncover the mystery.

First you're doing a loop through 001-200. This padding with 0s in the front is done with the -w option to the seq command. Then you substitute the num variable to create the host you're going to SSH to. Once you have the target host, give the command to it. In this case, it's:

free -m | grep Mem | awk '{print $2}'

That command says to: run free -m to display the memory in megabytes, grep the line that begins with Mem, and use awk to print the second field of that line, which is the total memory in the node.

This operation is performed on every node.

Once you have performed the command on every node, the entire output of all 200 nodes is piped ( | d) to the sort command so that all the memory values are sorted.

Finally, you eliminate duplicates with the uniq command. This command will result in one of the following cases: a single value if all the nodes report the same amount of memory, or several different values if some nodes have a different memory configuration.

This command isn't perfect. If you find that a value of memory is different than what you expect, you won't know on which node it was or how many nodes there were. Another command may need to be issued for that.

What this trick does give you, though, is a fast way to check for something and quickly learn if something is wrong. This is its real value: speed to do a quick-and-dirty check.

Trick 9: Spying on the console

Some software prints error messages to the console that may not necessarily show up on your SSH session. Using the vcs devices can let you examine these. From within an SSH session, run the following command on a remote server: # cat /dev/vcs1 . This will show you what is on the first console. You can also look at the other virtual terminals using 2, 3, etc. If a user is typing on the remote system, you'll be able to see what he typed.

In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best way to view this information; it also provides the additional benefit of out-of-band viewing capabilities. Using the vcs device provides a fast in-band method that may be able to save you some time from going to the machine room and looking at the console.
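If you want to keep an eye on the console rather than take a one-off snapshot, the cat command above can be wrapped in watch (run as root; watch refreshes every two seconds by default):

# watch cat /dev/vcs1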

Trick 10: Random system information collection

In Trick 8 , you saw an example of using the command line to get information about the total memory in the system. In this trick, I'll offer up a few other methods to collect important information from the system you may need to verify, troubleshoot, or give to remote support.

First, let's gather information about the processor. This is easily done as follows:

# cat /proc/cpuinfo .

This command gives you information on the processor speed, quantity, and model. Using grep in many cases can give you the desired value.

A check that I do quite often is to ascertain the quantity of processors on the system. So, if I have purchased a dual processor quad-core server, I can run:

# cat /proc/cpuinfo | grep processor | wc -l .

I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to send me another processor.

Another piece of information I may require is disk information. This can be gotten with the df command. I usually add the -h flag so that I can see the output in gigabytes or megabytes. # df -h also shows how the disk was partitioned.

And to end the list, here's a way to look at the firmware of your system -- a method to get the BIOS level and the firmware on the NIC.

To check the BIOS version, you can run the dmidecode command. Unfortunately, you can't easily grep for just that piece of information, so pipe the output through less and search for it. On my Lenovo T61 laptop, the output looks like this:

#dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...

This is much more efficient than rebooting your machine and looking at the POST output.

To examine the driver and firmware versions of your Ethernet adapter, run ethtool :

# ethtool -i eth0
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: 0.3-0

Conclusion

There are thousands of tricks you can learn from someone who's an expert at the command line. The best ways to learn are to:

I hope at least one of these tricks helped you learn something you didn't know. Essential tricks like these make you more efficient and add to your experience, but most importantly, tricks give you more free time to do more interesting things, like playing video games. And the best administrators are lazy because they don't like to work. They find the fastest way to do a task and finish it quickly so they can continue in their lazy pursuits.

About the author

Vallard Benincosa is a lazy Linux Certified IT professional working for the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.

[Dec 30, 2020] HPE ClearOS

Dec 30, 2020 | arstechnica.com

The last of the RHEL downstreams up for discussion today is Hewlett-Packard Enterprise's in-house distro, ClearOS . Hewlett-Packard makes ClearOS available as a pre-installed option on its ProLiant server line, and the company offers a free Community version to all comers.

ClearOS is an open source software platform that leverages the open source model to deliver a simplified, low cost hybrid IT experience for SMBs. The value of ClearOS is the integration of free open source technologies making it easier to use. By not charging for open source, ClearOS focuses on the value SMBs gain from the integration so SMBs only pay for the products and services they need and value.

ClearOS is mostly notable here for its association with industry giant HPE and its availability as an OEM distro on ProLiant servers. It seems to be a bit behind the times -- the most recent version is ClearOS 7.x, which is in turn based on RHEL 7. In addition to being a bit outdated compared with other options, it also appears to be a rolling release -- more comparable to CentOS Stream than to the CentOS Linux that came before it.

ClearOS is probably most interesting to small business types who might consider buying ProLiant servers with RHEL-compatible OEM Linux pre-installed later.

[Dec 30, 2020] Where do I go now that CentOS Linux is gone- Check our list - Ars Technica

Dec 30, 2020 | arstechnica.com

Springdale Linux

I've seen a lot of folks mistakenly recommending the deceased Scientific Linux distro as a CentOS replacement -- that won't work, because Scientific Linux itself was deprecated in favor of CentOS. However, Springdale Linux is very similar -- like Scientific Linux, it's a RHEL rebuild distro made by and for the academic scientific community. Unlike Scientific Linux, it's still actively maintained!

Springdale Linux is maintained and made available by Princeton and Rutgers universities, who use it for their HPC projects. It has been around for quite a long time. One Springdale Linux user from Carnegie Mellon describes their own experience with Springdale (formerly PUIAS -- Princeton University Institute for Advanced Study) as a 10-year ride.

Theresa Arzadon-Labajo, one of Springdale Linux's maintainers, gave a pretty good seat-of-the-pants overview in a recent mailing list discussion :

The School of Mathematics at the Institute for Advanced Study has been using Springdale (formerly PUIAS, then PU_IAS) since its inception. All of our *nix servers and workstations (yes, workstations) are running Springdale. On the server side, everything "just works", as is expected from a RHEL clone. On the workstation side, most of the issues we run into have to do with NVIDIA drivers, and glibc compatibility issues (e.g Chrome, Dropbox, Skype, etc), but most issues have been resolved or have a workaround in place.

... Springdale is a community project, and [it] mostly comes down to the hours (mostly Josko) that we can volunteer to the project. The way people utilize Springdale varies. Some are like us and use the whole thing. Others use a different OS and use Springdale just for its computational repositories.

Springdale Linux should be a natural fit for universities and scientists looking for a CentOS replacement. It will likely work for most anyone who needs it -- but its relatively small community and firm roots in academia will probably make it the most comfortable for those with similar needs and environments.

[Dec 30, 2020] GhostBSD and a few others are spearheading a charge into the face of The Enemy, making BSD palatable for those of us steeped in Linux as the only alternative to we know who.

Dec 30, 2020 | distrowatch.com

64"best idea" ... (by Otis on 2020-12-25 19:38:01 GMT from United States)
@62 dang it BSD takes care of all that anxiety about systemd and the other bloaty-with-time worries as far as I can tell. GhostBSD and a few others are spearheading a charge into the face of The Enemy, making BSD palatable for those of us steeped in Linux as the only alternative to we know who.

[Dec 30, 2020] Scientific Linux website states that they are going to reconsider (in 1st quarter of 2021) whether they will produce a clone of rhel version 8. Previously, they stated that they would not.

Dec 30, 2020 | distrowatch.com

Centos (by David on 2020-12-22 04:29:46 GMT from United States)
I was using CentOS 8.2 on an older desktop home computer. When CentOS dropped long term support on version 8, I was a little peeved, but not a whole lot, since it is free anyway. Out of curiosity I installed Scientific Linux 7.9 on the same computer, and it works better than CentOS 8. Then I tried installing SL 7.9 on my old laptop -- it even worked on that!

Previously, when I had tried to install CentOS 8 on the laptop, an old Dell Inspiron 1501, the graphics were garbage -- the screen displayed kind of a color mosaic -- and the keyboard/everything else was locked up. I also tried CentOS 7.9 on it, and installation from the minimal DVD produced a bunch of errors and then froze part way through.

I will stick with Scientific Linux 7 for now. In 2024 I will worry about which distro to migrate to. Note: the Scientific Linux website states that they are going to reconsider (in the 1st quarter of 2021) whether they will produce a clone of RHEL version 8. Previously, they stated that they would not.

[Dec 30, 2020] Springdale vs. CentOS

Dec 30, 2020 | distrowatch.com

52Springdale vs. CentOS (by whoKnows on 2020-12-23 05:39:01 GMT from Switzerland)

@51 • Personal opinion only. (by R. Cain)

"Personal opinion only. [...] After all the years of using Linux, and experiencing first-hand the hobby mentality that has taken over [...], I prefer to use a distribution which has all the earmarks of [...] being developed AND MAINTAINED by a professional organization."

Yeah, your answer is exactly what I expected it to be.

The thing with Springdale is as follows: it's maintained by a very professional team of IT specialists at the Institute for Advanced Study (Princeton University) for their own needs. That's why there's no fancy website, RHEL wiki, live ISOs and such.

They also maintain several other repositories for add-on packages (computing, unsupported [with audio/video codecs] ...).

In other words, if you're a professional who needs an RHEL clone, you'll be fine with it; if you're a hobbyist who needs a how-to on everything and anything, you can still use the knowledge base of RHEL/CentOS/Oracle ...

If you're a 'small business' that needs professional support, you'd get RHEL -- unlike CentOS, Springdale is not a commercial distribution selling you support and schooling. Springdale is made by professionals and for professionals.

https://www.ias.edu/math/computing/Springdale-Linux
https://researchcomputing.princeton.edu/faq/what-is-a-cluster

[Dec 29, 2020] Migrating from CentOS to Oracle Linux: a short report on the experience (Le blog technique de Microlinux)

Highly recommended!
Google translation
Notable quotes:
"... Free to use, free to download, free to update. Always ..."
"... Unbreakable Enterprise Kernel ..."
"... (What You Get Is What You Get ..."
Dec 30, 2020 | blog.microlinux.fr

In 2010 I had the opportunity to get my hands dirty with Oracle Linux during an installation and training assignment carried out for ASF (Autoroutes du Sud de la France, now called Vinci Autoroutes). I had just published my book "Linux aux petits oignons" with Eyrolles, and since the CentOS 5.3 distribution on which it was based looked 99% like Oracle Linux 5.3 under the hood, ASF chose me to train their future Linux administrators.

All these years, I knew that Oracle Linux existed, as did a whole series of other Red Hat clones like CentOS, Scientific Linux, White Box Enterprise Linux, Princeton University's PUIAS project, etc. I didn't pay it much more attention than that, since CentOS perfectly met all my server needs.

Following the disastrous announcement of the CentOS project, I had a discussion with my compatriot Michael Kofler, a Linux guru who has published a series of excellent books on our favorite operating system, and who has migrated from CentOS to Oracle Linux for the Linux administration courses he teaches at the University of Graz. This was not our first discussion on the subject, as the CentOS project had already been accumulating a series of rather worrying delays in the version 8 updates. In comparison, Oracle Linux does not suffer from these structural problems, so I had kept this option in a corner of my head.

A problematic reputation

Oracle suffers from a problematic reputation within the free software community, for a variety of reasons. It is the company that ruined OpenOffice and Java, got its hooks into MySQL and let Solaris sink. Oracle CEO Larry Ellison has also made a name for himself with his unhinged support for Donald Trump. As for the company's commercial policy, it has been marked by notorious aggressiveness in the hunt for patents.

On the other hand, we have applications like VirtualBox, free and free of charge, which run perfectly on millions of developer workstations all over the world. And then there is the very discreet Oracle Linux, which has been working perfectly and without making any noise since 2006, and which is also a free and no-cost operating system.

Install Oracle Linux

For a first test, I installed Oracle Linux 7.9 and 8.3 in two virtual machines on my workstation. Since it is a clone compliant with Red Hat Enterprise Linux, the installation procedure is identical to that of RHEL and CentOS, apart from a few small details.

Oracle Linux Installation

Info: normally I never pay attention to the banner ads that scroll through graphical installers. This time, the slogan "Free to use, free to download, free to update. Always" did catch my attention.

An indestructible kernel?

Oracle Linux provides its own Linux kernel, newer than the one provided by Red Hat and named Unbreakable Enterprise Kernel (UEK). This kernel is installed by default and replaces the older kernels provided upstream for versions 7 and 8. Here's what it looks like on Oracle Linux 7.9:

$ uname -a
Linux oracle-el7 5.4.17-2036.100.6.1.el7uek.x86_64 #2 SMP Thu Oct 29 17:04:48 
PDT 2020 x86_64 x86_64 x86_64 GNU/Linux
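
If you would rather keep booting the Red Hat Compatible Kernel (RHCK), a quick sketch of how that could be done on EL7 with grubby (the vmlinuz path below is hypothetical and must match a kernel actually installed on the system):

# rpm -qa 'kernel*' | sort              # list installed kernels: UEK and the Red Hat compatible kernel side by side
# grubby --default-kernel               # show which kernel currently boots by default
# grubby --set-default=/boot/vmlinuz-3.10.0-1160.el7.x86_64   # hypothetical RHCK version string
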
Well-organized package repositories

At first glance, the organization of the official and semi-official package repositories seems much clearer and better organized than under CentOS. For details, I refer you to the respective explanatory pages for the 7.x and 8.x versions.

Well-structured documentation

Like the organization of the repositories, Oracle Linux's documentation is worth mentioning here, because it is simply exemplary. The main index refers to the different versions of Oracle Linux, and from there you can access a whole series of documents in HTML and PDF formats that explain in detail the peculiarities of the system and its day-to-day management. As I work through this documentation, I keep discovering a multitude of pleasant little details, such as the fact that Oracle packages display metadata for security updates, which is not the case for CentOS packages.

Migrating from CentOS to Oracle Linux

The Switch your CentOS systems to Oracle Linux web page identifies a number of reasons why Oracle Linux is a better choice than CentOS when you want an enterprise-grade, free-as-in-beer operating system that provides low-risk updates for each version over a decade. This page also features a script, centos2ol.sh , that transforms an existing CentOS system into an Oracle Linux system on the fly with just two commands.

So I tested this script on a CentOS 7 server from Online/Scaleway.

# curl -O https://linux.oracle.com/switch/centos2ol.sh
# chmod +x centos2ol.sh
# ./centos2ol.sh

The script churns away for about twenty minutes, we reboot the machine, and we end up with a clean Oracle Linux system. To tidy up, just remove the deactivated repository files:

# rm -f /etc/yum.repos.d/*.repo.deactivated
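
A quick way to check the result of the conversion (my own sketch, not part of the original article):

# cat /etc/oracle-release    # should now identify the system as Oracle Linux
# yum repolist               # should list Oracle Linux repositories
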
Migrating a CentOS 8.x server?

At first glance, the centos2ol.sh script only handled the migration from CentOS 7.9 to Oracle Linux 7.9. On a whim, I sent an email to the address at the bottom of the page, asking whether support for CentOS 8.x was planned for the near future.

A very nice exchange of emails ensued with a guy from Oracle, who patiently answered all the questions I asked him. And just twenty-four hours later, he sent me a link to an Oracle Github repository with an updated version of the script that supports the on-the-fly migration of CentOS 8.x to Oracle Linux 8.x.

So I tested it with a fresh installation of a CentOS 8 server at Online/Scaleway.

# yum install git
# git clone https://github.com/oracle/centos2ol.git
# cd centos2ol/
# chmod +x centos2ol.sh
# ./centos2ol.sh

Again, it churns away for a good twenty minutes, and after the reboot we end up with a public machine running Oracle Linux 8.

Conclusion

I will probably have a lot more to say about this. For my part, I find this first experience with Oracle Linux rather convincing, and if I decided to share it here, it is because it will probably solve a common problem for a lot of admins of production servers who cannot accept their system becoming a moving target overnight.

Post scriptum for the risk-averse purists

Finally, for all of you who want to use a free and no-cost clone of Red Hat Enterprise Linux without selling your soul to the devil, know that Springdale Linux is a solid alternative. It is maintained by Princeton University in the United States according to the WYGIWYG principle (What You Get Is What You Get); it is delivered bare-bones and without any documentation, but it works just as well.



[Dec 29, 2020] Oracle Linux is "CentOS done right"

Notable quotes:
"... If you want a free-as-in-beer RHEL clone, you have two options: Oracle Linux or Springdale/PUIAS. My company's currently moving its servers to OL, which is "CentOS done right". Here's a blog article about the subject: ..."
"... Each version of OL is supported for a 10-year cycle. Ubuntu has five years of support. And Debian's support cycle (one year after subsequent release) is unusable for production servers. ..."
"... [Red Hat looks like ]... of a cartoon character sawing off the tree branch they are sitting on." ..."
Dec 21, 2020 | distrowatch.com

Microlinux

And what about Oracle Linux? (by Microlinux on 2020-12-21 08:11:33 GMT from France)

If you want a free-as-in-beer RHEL clone, you have two options: Oracle Linux or Springdale/PUIAS. My company's currently moving its servers to OL, which is "CentOS done right". Here's a blog article about the subject:

https://blog.microlinux.fr/migration-centos-oracle-linux/

Currently Rocky Linux is not much more than a README file on Github and a handful of Slack (ew!) discussion channels.

Each version of OL is supported for a 10-year cycle. Ubuntu has five years of support. And Debian's support cycle (one year after subsequent release) is unusable for production servers.

dragonmouth

9@Jesse on CentOS: (by dragonmouth on 2020-12-21 13:11:04 GMT from United States)

"There is no rush and I recommend waiting a bit for the dust to settle on the situation before leaping to an alternative. "

For private users there may be plenty of time to find an alternative. However, corporate IT departments are not like jet skis able to turn on a dime. They are more like supertankers or aircraft carriers that take miles to make a turn. By the time all the committees meet and come to some decision, by the time all the upper managers who don't know what the heck they are talking about expound their opinions and by the time the CentOS replacement is deployed, a year will be gone. For corporations, maybe it is not a time to PANIC, yet, but it is high time to start looking for the O/S that will replace CentOS.

Ricardo

"This looks like the vendor equivalent..." (by Ricardo on 2020-12-21 18:06:49 GMT from Argentina)

[Red Hat looks like ]... of a cartoon character sawing off the tree branch they are sitting on."

Jesse, I couldn't have articulated it better. I'm stealing that phrase :)

Cheers and happy holidays to everyone!

[Dec 28, 2020] Time to move to Oracle Linux

Dec 28, 2020 | www.cyberciti.biz
Kyle Dec 9, 2020 @ 2:13

It's an IBM money grab. It's a shame; I use CentOS to develop and host web applications on my Linode. Obviously, a small-timer like me can't afford Red Hat, but I use it at work. CentOS allowed me to come home, take those skills, develop in my free time, and apply it back at work.

I also use Ubuntu, but it looks like the shift will be more toward Ubuntu.

Noname Dec 9, 2020 @ 4:20

As others said here, this is a money grab. Methinks IBM was the worst thing that happened to Linux since systemd...

Yui Dec 9, 2020 @ 4:49

Hello CentOS users,

I also work for a non-profit (Cancer and other research) and use CentOS for HPC. We chose CentOS over Debian due to the 10-year support cycle, and CentOS goes well with HPC clusters. We also wanted every single penny to go to research purposes and not to waste our donations and grants on software costs. What are my CentOS alternatives for HPC? Thanks in advance for any help you are able to provide.

Holmes Dec 9, 2020 @ 5:06

Folks who rely on CentOS saw this coming when Red Hat bought them 6 years ago. Last year IBM bought Red Hat. Now, IBM+Red Hat have found a way to kill the stable releases in order to get people signing up for RHEL subscriptions. Doesn't that sound exactly like the "EEE" (embrace, extend, and exterminate) model?

Petr Dec 9, 2020 @ 5:08

For me it's simple.
I will keep my openSUSE Leap and expand its footprint.
Until another RHEL compatible distro is out. If I need a RHEL compatible distro for testing, until then, I will use Oracle with the RHEL kernel.
OpenSUSE is the closest to RHEL in terms of stability (if not better) and I am very used to it. Time to get some SLES certifications as well.

Someone Dec 9, 2020 @ 5:23

While I like Debian, and better still Devuan (systemd ), some RHEL/CentOS features like kickstart and delta RPMs don't seem to be there (or as good). Debian preseeding is much more convoluted than kickstart, for example.

Vonskippy Dec 10, 2020 @ 1:24

That's ok. For us, we left RHEL (and the CentOS testing cluster) when the satan spawn known as SystemD became the standard. We're now a happy and successful FreeBSD shop.

[Dec 28, 2020] This quick and dirty hack worked fine to convert centos 8 to oracle linux 8

Notable quotes:
"... this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv: ..."
Dec 28, 2020 | blog.centos.org

Phil says: December 9, 2020 at 2:10 pm

this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv:

repobase=http://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage
wget \
${repobase}/redhat-release-8.3-1.0.0.1.el8.x86_64.rpm \
${repobase}/oracle-release-el8-1.0-1.el8.x86_64.rpm \
${repobase}/oraclelinux-release-8.3-1.0.4.el8.x86_64.rpm \
${repobase}/oraclelinux-release-el8-1.0-9.el8.x86_64.rpm
rpm -e centos-linux-release --nodeps
dnf --disablerepo='*' localinstall ./*rpm 
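# blank out the ociregion dnf variable so the Oracle repo URLs resolve to the public yum.oracle.com server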
:> /etc/dnf/vars/ociregion
dnf remove centos-linux-repos
dnf --refresh distro-sync
# since I wanted to try out the unbreakable enterprise kernel:
dnf install kernel-uek
reboot
dnf remove kernel

[Dec 28, 2020] Red Hat interpretation of the CentOS 8 fiasco

Highly recommended!
" People are complaining because you are suddenly killing CentOS 8 which has been released last year with the promise of binary compatibility to RHEL 8 and security updates until 2029."
One of immanent features of GPL is that it allow clones to exist. Which means the Oracle Linix, or Rocky Linux, or Lenin Linux will simply take CentOS place and Red hat will be in disadvantage, now unable to control the clone to the extent they managed to co-opt and control CentOS. "Embrace and extinguish" change i now will hand on Red Hat and probably will continue to hand for years from now. That may not be what Redhat brass wanted: reputational damage with zero of narrative effect on the revenue stream. I suppose the majority of CentOS community will finally migrate to emerging RHEL clones. If that was the Red Hat / IBM goal - well, they will reach it.
Notable quotes:
"... availability gap ..."
"... Another long-winded post that doesn't address the single, core issue that no one will speak to directly: why can't CentOS Stream and CentOS _both_ exist? Because in absence of any official response from Red Hat, the assumption is obvious: to drive RHEL sales. If that's the reason, then say it. Stop being cowards about it. ..."
"... We might be better off if Red Hat hadn't gotten involved in CentOS in the first place and left it an independent project. THEY choose to pursue this path and THEY chose to renege on assurances made around the non-stream distro. Now they're going to choose to deal with whatever consequences come from the loss of goodwill in the community. ..."
"... If the problem was in money, all RH needed to do was to ask the community. You would have been amazed at the output. ..."
"... You've alienated a few hunderd thousand sysadmins that started upgrading to 8 this year and you've thrown the scientific Linux community under a bus. You do realize Scientific Linux was discontinued because CERN and FermiLab decided to standardize on CentOS 8? This trickled down to a load of labs and research institutions. ..."
"... Nobody forced you to buy out CentOS or offer a gratis distribution. But everybody expected you to stick to the EOL dates you committed to. You boast about being the "Enterprise" Linux distributor. Then, don't act like a freaking start-up that announces stuff today and vanishes a year later. ..."
"... They should have announced this at the START of CentOS 8.0. Instead they started CentOS 8 with the belief it was going to be like CentOS7 have a long supported life cycle. ..."
"... IBM/RH/CentOS keeps replaying the same talking points over and over and ignoring the actual issues people have ..."
"... What a piece of stinking BS. What is this "gap" you're talking about? Nobody in the CentOS community cares about this pre-RHEL gap. You're trying to fix something that isn't broken. And doing that the most horrible and bizzarre way imaginable. ..."
"... As I understand it, Fedora - RHEL - CENTOS just becomes Fedora - Centos Stream - RHEL. Why just call them RH-Alpha, RH-Beta, RH? ..."
Dec 28, 2020 | blog.centos.org

Let's go back to 2003 where Red Hat saw the opportunity to make a fundamental change to become an enterprise software company with an open source development methodology.

To do so Red Hat made a hard decision and in 2003 split Red Hat Linux into Red Hat Enterprise Linux (RHEL) and Fedora Linux. RHEL was the occasional snapshot of Fedora Linux that was a product -- slowed, stabilized, and paced for production. Fedora Linux and the Project around it were the open source community for innovating -- speedier, prone to change, and paced for exploration. This solved the problem of trying to hold to two, incompatible core values (fast/slow) in a single project. After that, each distribution flourished within its intended audiences.

But that split left two important gaps. On the project/community side, people still wanted an OS that strived to be slower-moving, stable-enough, and free of cost -- an availability gap . On the product/customer side, there was an openness gap -- RHEL users (and consequently all rebuild users) couldn't contribute easily to RHEL. The rebuilds arose and addressed the availability gap, but they were closed to contributions to the core Linux distro itself.

In 2012, Red Hat's move toward offering products beyond the operating system resulted in a need for an easy-to-access platform for open source development of the upstream projects -- such as Gluster, oVirt, and RDO -- that these products are derived from. At that time, the pace of innovation in Fedora made it not an easy platform to work with; for example, the pace of kernel updates in Fedora led to breakage in these layered projects.

We formed a team I led at Red Hat to go about solving this problem, and, after approaching and discussing it with the CentOS Project core team, Red Hat and the CentOS Project agreed to " join forces ." We said joining forces because there was no company to acquire, so we hired members of the core team and began expanding CentOS beyond being just a rebuild project. That included investing in the infrastructure and protecting the brand. The goal was to evolve into a project that also enabled things to be built on top of it, and a project that would be exponentially more open to contribution than ever before -- a partial solution to the openness gap.

Bringing home the CentOS Linux users, folks who were stuck in that availability gap, closer into the Red Hat family was a wonderful side effect of this plan. My experience going from participant to active open source contributor began in 2003, after the birth of the Fedora Project. At that time, as a highly empathetic person I found it challenging to handle the ongoing emotional waves from the Red Hat Linux split. Many of my long time community friends themselves were affected. As a company, we didn't know if RHEL or Fedora Linux were going to work out. We had made a hard decision and were navigating the waters from the aftershock. Since then we've all learned a lot, including the more difficult dynamics of an open source development methodology. So to me, bringing the CentOS and other rebuild communities into an actual relationship with Red Hat again was wonderful to see, experience, and help bring about.

Over the past six years since finally joining forces, we made good progress on those goals. We started Special Interest Groups (SIGs) to manage the layered project experience, such as the Storage SIG, Virt Sig, and Cloud SIG. We created a governance structure where there hadn't been one before. We brought RHEL source code to be housed at git.centos.org . We designed and built out a significant public build infrastructure and CI/CD system in a project that had previously been sealed-boxes all the way down.


cmdrlinux says: December 19, 2020 at 2:36 pm

"This brings us to today and the current chapter we are living in right now. The move to shift focus of the project to CentOS Stream is about filling that openness gap in some key ways. Essentially, Red Hat is filling the development and contribution gap that exists between Fedora and RHEL by shifting the place of CentOS from just downstream of RHEL to just upstream of RHEL."

Another long-winded post that doesn't address the single, core issue that no one will speak to directly: why can't CentOS Stream and CentOS _both_ exist? Because in absence of any official response from Red Hat, the assumption is obvious: to drive RHEL sales. If that's the reason, then say it. Stop being cowards about it.

Mark Danon says: December 19, 2020 at 4:14 pm

Redhat has no obligation to maintain both CentOS 8 and CentOS stream. Heck, they have no obligation to maintain CentOS either. Maintaining both will only increase the workload of CentOS maintainers. I don't suppose you are volunteering to help them do the work? Be thankful for a distribution that you have been using so far, and move on.

Dave says: December 20, 2020 at 7:16 am

We might be better off if Red Hat hadn't gotten involved in CentOS in the first place and left it an independent project. THEY choose to pursue this path and THEY chose to renege on assurances made around the non-stream distro. Now they're going to choose to deal with whatever consequences come from the loss of goodwill in the community.

If they were going to pull this stunt they shouldn't have gone ahead with CentOS 8 at all and fulfilled any lifecycle expectations for CentOS 7.

Konstantin says: December 21, 2020 at 12:24 am

Sorry, but that's a BS. CentOS Stream and CentOS Linux are not mutually replaceable. You cannot sell that BS to any people actually knowing the intrinsics of how CentOS Linux was being developed.

If the problem was in money, all RH needed to do was to ask the community. You would have been amazed at the output.

No, it is just a primitive, direct and lame way to force "free users" to either pay or become your free-to-use beta testers (CentOS Stream *is* beta, whatever you say).

I predict you will be somewhat amazed at the actual results.

Not talking about the breach of trust. Now how much would cost all your (RH's) further promises and assurances?

Chris Mair says: December 20, 2020 at 3:21 pm

To: centos-devel@centos.org
To: centos-questions@redhat.com

Hi,

Re: https://blog.centos.org/2020/12/balancing-the-needs-around-the-centos-platform/

you can spin this to the moon and back. The fact remains you just killed CentOS Linux and your users' trust by moving the EOL of CentOS Linux 8 from 2029 to 2021.

You've alienated a few hundred thousand sysadmins who started upgrading to 8 this year, and you've thrown the Scientific Linux community under a bus. You do realize Scientific Linux was discontinued because CERN and FermiLab decided to standardize on CentOS 8? This trickled down to a load of labs and research institutions.

Nobody forced you to buy out CentOS or offer a gratis distribution. But everybody expected you to stick to the EOL dates you committed to. You boast about being the "Enterprise" Linux distributor. Then, don't act like a freaking start-up that announces stuff today and vanishes a year later.

The correct way to handle this would have been to kill the future CentOS 9, giving everybody the time to cope with the changes.

I earned my RHCE in 2003 (yes, that's seventeen years ago). Since then, many times, I've recommended RHEL or CentOS to the clients I do freelance work for. Just a few weeks ago I was asked to give an opinion on six CentOS 7 boxes about to be deployed into a research system to be upgraded to 8. I gave my go. Well, that didn't last long.

What do you expect me to recommend now? Buying RHEL licenses? Which may or may not have a certain cost per year and may or may not be supported until a given date? Once you grant yourself the freedom to retract whatever published information, how can I trust you? What added value do I get over any of the community supported distributions (given that I can support myself)?

And no, CentOS Stream cannot "cover 95% (or so) of current user workloads". Stream was introduced as "a rolling preview of what's next in RHEL".

I'm not interested at all in a "a rolling preview of what's next in RHEL". I'm interested in a stable distribution I can trust to get updates until the given EOL date.

You've made me look elsewhere for that.

-- Chris

Chip says: December 20, 2020 at 6:16 pm

I guess my biggest issue is that they should have announced this at the START of CentOS 8.0. Instead they launched CentOS 8 with the belief that it was going to have a long supported life cycle like CentOS 7. What they did was basically a bait and switch. Not cool. Especially not cool for those running multiple nodes on high performance computing clusters.

Alex says: December 21, 2020 at 12:51 am

I have over 300,000 CentOS nodes that require long term support, as it's impossible to turn them over rapidly. I also have 154,000 RHEL nodes. I now have to migrate 454,000 nodes over to Ubuntu because Red Hat just made the dumbest decision I've seen, short of letting IBM acquire them. Whitehurst, how could you let this happen? Nothing like millions in lost revenue from a single customer.

Nika jous says: December 21, 2020 at 1:43 pm

Just migrated to openSUSE. Rather than crying over a dead OS, it's better to act yourself. Red Hat is a sinking ship; it probably won't last the next decade. A legendary failure like IBM will never have the upper hand in the Linux world. It's too competitive now. Customers have more options to choose from. I think the person who made this decision is probably ignorant of the current market, or a top-grade fool.

Ang says: December 22, 2020 at 2:36 am

IBM/RH/CentOS keeps replaying the same talking points over and over and ignoring the actual issues people have. You say you are reading them, but choose to ignore it and that is even worse!

People still don't understand why CentOS Stream and CentOS can't co-exist. If your goal was not to support CentOS 8, why did you publish a 2029 EOL date, and why did you even release CentOS 8 in the first place?

Hell, you could have at least had the goodwill with the community to make CentOS 8 last until end of CentOS 7! But no, you discontinued CentOS 8 giving people only 1 year to respond, and timed it right after EOL of CentOS6.

Why didn't you even bother asking the community first and come to a compromise or something?

Again, not a single person had a problem with CentOS Stream; the problem was having the rug pulled out from under their feet! So stop pretending and address it properly!

Even worse, you knew this was an issue, it's like literally #1 on your issue list "Shift Board to be more transparent in support of becoming a contributor-focused open source project"

And you FAILED! Where was the transparency?!

Ang says: December 22, 2020 at 2:36 am

A link to the issue: https://git.centos.org/centos/board/issue/1

AP says: December 22, 2020 at 6:55 am

What a piece of stinking BS. What is this "gap" you're talking about? Nobody in the CentOS community cares about this pre-RHEL gap. You're trying to fix something that isn't broken, and doing it in the most horrible and bizarre way imaginable.

Len Inkster says: December 22, 2020 at 4:13 pm

As I understand it, Fedora - RHEL - CentOS just becomes Fedora - CentOS Stream - RHEL. Why not just call them RH-Alpha, RH-Beta, RH?

Anyone who wants to continue with CENTOS? Fork the project and maintain it yourselves. That how we got to CENTOS from Linus Torvalds original Linux.

Peter says: December 22, 2020 at 5:36 pm

I can only describe this as a disappointment, if not a betrayal, of the whole CentOS user base. This decision was clearly made without considering its impact on the majority of CentOS community use cases.

If you need an upstream contribution channel for RHEL, create it; do not destroy the stable downstream. Clear and simple. All other 'explanations' are cover-ups for the real purpose of this action.

This stinks of politics within IBM/RH meddling with CentOS. I hope Rocky will bring the desired stability that the community was relying on with CentOS.

Goodbye CentOS, it was a nice 15 years.

Ken Sanderson says: December 23, 2020 at 1:57 pm

We've just agreed to cancel our RHEL subscriptions and will be moving them and our CentOS boxes away as well. It was a nice run, but while it will be painful, it is a chance to move far, far away from the terrible decisions made here.

[Dec 28, 2020] Red Hat Goes Full IBM and Says Farewell to CentOS - ServeTheHome

Dec 28, 2020 | www.servethehome.com

The intellectually easy answer to what is happening is that IBM is putting pressure on Red Hat to hit bigger numbers in the future. Red Hat sees a captive audience in its CentOS userbase and is looking to convert a percentage into paying customers. Everyone else can go to Ubuntu or elsewhere if they do not want to pay...

[Dec 28, 2020] Call our sales people and open your wallet if you use CentOS in prod

Dec 28, 2020 | freedomben.medium.com

It seemed obvious (via Occam's Razor) that CentOS had cannibalized RHEL sales for the last time and was being put out to die. Statements like:

If you are using CentOS Linux 8 in a production environment, and are
concerned that CentOS Stream will not meet your needs, we encourage you
to contact Red Hat about options.

That line sure seemed like horrific marketing speak for "call our sales people and open your wallet if you use CentOS in prod." ( cue evil mustache-stroking capitalist villain ).

... CentOS will no longer be downstream of RHEL as it was previously. CentOS will now be upstream of the next RHEL minor release .

... ... ...

I'm watching Rocky Linux closely myself. While I plan to use CentOS for the vast majority of my needs, Rocky Linux may have a place in my life as well, for example powering my home router. Generally speaking, I want my router to be as boring as absolutely possible. That said, even that may not stay true forever if, for example, CentOS gets good WireGuard support.

Lastly, but certainly not least, Red Hat has talked about upcoming low/no-cost RHEL options. Keep an eye out for those! I have no idea the details, but if you currently use CentOS for personal use, I am optimistic that there may be a way to get RHEL for free coming soon. Again, this is just my speculation (I have zero knowledge of this beyond what has been shared publicly), but I'm personally excited.

[Dec 27, 2020] Why Red Hat dumped CentOS for CentOS Stream by Steven J. Vaughan-Nichols

Red Hat has always had an uneasy relationship with CentOS. Red Hat brass always viewed it as something that steals Red Hat licenses. So this "stop the steal" move might not be IBM-inspired, but it is firmly in the IBM tradition. And like many similar IBM moves it will backfire.
The hiring of CentOS developers in 2014 gave Red Hat unprecedented control over the project. Why on Earth would they now want independent projects like Rocky Linux to re-emerge and fill the vacuum? They cannot avoid this side effect of using the GPL -- it allows clones. Why a project hostile to Red Hat is better than an "in-house", domesticated project is unclear to me. As many large enterprises deploy a mix of Red Hat and CentOS, the initial reaction might be the opposite of what the Red Hat brass expected: they will get fewer licenses, not more, by adopting the "one IBM way".
Dec 21, 2020 | www.zdnet.com

On Hacker News , the leading comment was: "Imagine if you were running a business, and deployed CentOS 8 based on the 10-year lifespan promise . You're totally screwed now, and Red Hat knows it. Why on earth didn't they make this switch starting with CentOS 9???? Let's not sugar coat this. They've betrayed us."

Over at Reddit/Linux , another person snarled, "We based our Open Source project on the latest CentOS releases since CentOS 4. Our flagship product is running on CentOS 8 and we *sure* did bet the farm on the promised EOL of 31st May 2029."

A popular tweet from The Best Linux Blog In the Unixverse, nixcraft , an account with over 200-thousand subscribers, went: "Oracle buys Sun: Solaris Unix, Sun servers/workstations, and MySQL went to /dev/null. IBM buys Red Hat: CentOS is going to >/dev/null . Note to self: If a big vendor such as Oracle, IBM, MS, and others buys your fav software, start the migration procedure ASAP."

Many others joined in this choir of annoyed CentOS users that it was IBM's fault that their favorite Linux was being taken away from them. Still, others screamed Red Hat was betraying open-source itself.

... ... ...

Still another ex-Red Hat official said that if it wasn't for CentOS, Red Hat would have been a 10-billion dollar company before Red Hat became a billion-dollar business .

... ... ...

[Dec 27, 2020] There are now countless Internet servers out there that run CentOS. This is why the Debian project is so important.

Dec 27, 2020 | freedomben.medium.com

There are companies that sell appliances based on CentOS. Websense/Forcepoint is one of them. The Websense appliance runs the base OS of CentOS, on top of which runs their Web-filtering application. Same with RSA. Their NetWitness SIEM runs on top of CentOS.

Likewise, there are now countless Internet servers out there that run CentOS. There's now a huge user base of CentOS out there.

This is why the Debian project is so important. I will be converting everything that is currently CentOS to Debian. For those who want to use the Ubuntu fork of Debian, that is also probably a good idea.

[Dec 23, 2020] Red Hat and GPL: uneasy romance ended long ego, but Red Hat still depends on GPL as it does not develop many components and gets them for free from the community and other vendors

It's all about money and executive bonuses: shortsighted executives want more and more money, as if the current huge revenue is not enough...
Dec 23, 2020 | www.zdnet.com

A former Red Hat executive confided, "CentOS was gutting sales. The customer perception was 'it's from Red Hat and it's a clone of RHEL, so it's good to go!' It's not. It's a second-rate copy." From where this person sits, "This is 100% defensive to stave off more losses to CentOS."

Still another ex-Red Hat official said that if it wasn't for CentOS, Red Hat would have been a 10-billion dollar company before Red Hat became a billion-dollar business .

Yet another Red Hat staffer snapped, "Look at the CentOS FAQ . It says right there:

CentOS Linux is NOT supported in any way by Red Hat, Inc.

CentOS Linux is NOT Red Hat Linux, it is NOT Fedora Linux. It is NOT Red Hat Enterprise Linux. It is NOT RHEL. CentOS Linux does NOT contain Red Hat® Linux, Fedora, or Red Hat® Enterprise Linux.

CentOS Linux is NOT a clone of Red Hat® Enterprise Linux.

CentOS Linux is built from publicly available source code provided by Red Hat, Inc for Red Hat Enterprise Linux in a completely different (CentOS Project maintained) build system.

We don't owe you anything."

[Dec 23, 2020] Patch Command Tutorial With Examples For Linux by İsmail Baydan

Sep 03, 2017 | www.poftut.com

patch is a command used to apply patch files to files such as source code and configuration files. A patch file holds the differences between the original file and the new file. To produce the difference, or patch, we use the diff tool.

Software consists of a large body of source code. The source code is written by developers and changes over time. Shipping a whole new copy of every file for each change is neither practical nor fast, so distributing only the changes is the better approach. The changes are applied to the old files, and the resulting patched files are compiled to build the new version of the software.

Syntax
patch [options] [originalfile [patchfile]]

patch -pnum < patchfile
Help
$ patch --help
Create Patch File

Now we will create a patch file. For this step we need some simple source code in two different versions. We will call the source code file myapp.c .

myapp_old.c
#include <stdio.h>  
  
void main(){  
  
printf("Hi poftut");  
  
}
myapp.c
#include <stdio.h>  
  
void main(){  
  
printf("Hi poftut");  
 
printf("This is new line as a patch"); 
  
}

Now we will create a patch file named myapp.patch .

$ diff -u myapp_old.c myapp.c > myapp.patch

We can print the myapp.patch file with the following command:

$ cat myapp.patch
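
For reference, the generated unified diff will look roughly like the following (timestamps are omitted, and the exact hunk header and context lines depend on the whitespace in your copies of the files):

--- myapp_old.c
+++ myapp.c
@@ -4,4 +4,6 @@
 
 printf("Hi poftut");
 
+printf("This is new line as a patch");
+
 }
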
Apply Patch File

Now we have a patch file, and we assume we have transferred it to the system that holds the old source code file, myapp_old.c . We will simply apply the patch file:

$ patch < myapp.patch
Take Backup Before Applying Patch

One useful feature is taking a backup before applying a patch. We will use the -b option to take the backup. In our example we will patch our source code file with myapp.patch .

$ patch -b < myapp.patch

The backup will have the same name as the source code file, with the .orig extension added. So the backup file name will be myapp.c.orig

Set Backup File Version

When taking a backup, there may already be a backup file, so we need to save multiple backup files without overwriting them. The -V option sets the versioning mechanism for the backups. In this example we will use numbered versioning.

$ patch -b -V numbered < myapp.patch

As we can see from the screenshot, the new backup file gets a numbered name like myapp.c.~1~

Validate Patch File Without Applying or Dry run

We may want to only validate or preview the result of the patching. The --dry-run option does exactly that: it only simulates the patching process and does not actually change any file.

$ patch --dry-run < myapp.patch
Reverse Patch

Sometimes we may need to apply a patch in reverse, so that the changes are undone. We can use the -R parameter for this operation. In the example we will patch myapp_old.c rather than myapp.c

$ patch -R myapp_old.c < myapp.patch

As we can see, the new changes are reverted.

2 thoughts on "Patch Command Tutorial With Examples For Linux"
  1. David K Hill 07/11/2019 at 4:15 am

    Thanks for the writetup to help me to demystify the patching process. The hands on tutorial definitely helped me.

    The ability to reverse the patch was most helpful!

  2. Javed 28/12/2019 at 7:16 pm

    very well and detailed explanation of the patch utility. Was able to simulate and practice it for better understanding, thanks for your efforts !

[Dec 23, 2020] HowTo Apply a Patch File To My Linux

Dec 23, 2020 | www.cyberciti.biz

A note about working on an entire source tree

First, make a copy of the source tree:
## Original source code is in lighttpd-1.4.35/ directory ##
$ cp -R lighttpd-1.4.35/ lighttpd-1.4.35-new/

Cd to lighttpd-1.4.35-new directory and make changes as per your requirements:
$ cd lighttpd-1.4.35-new/
$ vi geoip-mod.c
$ vi Makefile

Finally, create a patch with the following command:
$ cd ..
$ diff -rupN lighttpd-1.4.35/ lighttpd-1.4.35-new/ > my.patch

You can use the my.patch file to patch the lighttpd-1.4.35 source code on a different computer/server using the patch command as discussed above:
patch -p1
See the man pages of patch and diff for more information and usage: patch(1), diff(1)
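
A minimal sketch of the application step on the target machine, assuming my.patch was created as above and copied next to the original source tree (the -p1 level matches the paths recorded by the diff -rupN invocation; the truncated command above presumably reads the patch file the same way):

$ cd lighttpd-1.4.35/
$ patch -p1 < ../my.patch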

[Dec 10, 2020] Here's a hot tip for the IBM geniuses that came up with this. Rebrand CentOS as New Coke, and you've got yourself a real winner.

Dec 10, 2020 | blog.centos.org

Ward Mundy says: December 9, 2020 at 3:12 am

Happy to report that we've invested exactly one day in CentOS 7 to CentOS 8 migration. Thanks, IBM. Now we can turn our full attention to Debian and never look back.

Here's a hot tip for the IBM geniuses that came up with this. Rebrand CentOS as New Coke, and you've got yourself a real winner.

[Dec 10, 2020] Does Oracle Linux have staying power against Red Hat

Notable quotes:
"... If you need official support, Oracle support is generally cheaper than RedHat. ..."
"... You can legally run OL free and have access to patches/repositories. ..."
"... Full binary compatibility with RedHat so if anything is certified to run on RedHat, it automatically certified for Oracle Linux as well. ..."
"... Premium OL subscription includes a few nice bonuses like DTrace and Ksplice. ..."
"... Forgot to mention that converting RedHat Linux to Oracle is very straightforward - just matter of updating yum/dnf config to point it to Oracle repositories. Not sure if you can do it with CentOS (maybe possible, just never needed to convert CentOS to Oracle). ..."
Dec 10, 2020 | blog.centos.org

Matthew Stier says: December 8, 2020 at 8:11 pm

My office switched the bulk of our RHEL systems to OL years ago, and we find it a great product with great support, and we only need to buy support for the systems we actually want supported.

Oracle provided scripts to convert EL5, EL6, and EL7 systems, and I was able to convert some EL4 systems I still have running. (It's a matter of going through the list of installed packages, using 'rpm -e --justdb' to remove each package from the rpmdb, and re-installing the package (without dependencies) from the OL ISO.)
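
A rough sketch of that per-package dance for a single package (the package name and ISO mount path below are placeholders, not taken from the comment above):

# rpm -e --justdb --nodeps <package>                      # drop the entry from the RPM database only; files stay in place
# rpm -ivh --nodeps /mnt/ol-iso/Packages/<package>.rpm    # reinstall the Oracle Linux build of the same package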

art_ok 1 point· 5 minutes ago

We have been using Oracle Linux exclusively for the last 5-6 years for everything - thousands of servers, both for internal use and for a hundred or so customers.

Not once have we regretted it, had any issues, or been tempted to move to RedHat, let alone CentOS.

I found Oracle Linux has several advantages over RedHat/CentOS:

If you need official support, Oracle support is generally cheaper than RedHat.

You can legally run OL free and have access to patches/repositories.

Full binary compatibility with RedHat, so if anything is certified to run on RedHat, it is automatically certified for Oracle Linux as well.

It is very easy to switch between the supported and free setups (say, you have a proof-of-concept setup running free OL which is then promoted to production status - it's just a matter of registering the box with Oracle, with no need to reinstall or reconfigure anything).

You can easily move licenses/support from one box to another, so you always run the same OS and do not have to decide upfront (RedHat for production / CentOS for dev/test).

You have a choice of running the good old RedHat kernel or the newer Oracle kernel (which is pretty much a vanilla kernel with minimal modifications - just newer). We generally run Oracle kernels on all boxes unless we have to support a particularly pedantic customer who insists on using the old RedHat kernel.

Premium OL subscription includes a few nice bonuses like DTrace and Ksplice.

Overall, it is pleasure to work and support OL.

Negatives:

I found RedHat's knowledge base / documentation is much better than Oracle's.

Oracle does not offer extensive support for "advanced" products like JBoss, Directory Server, etc. Obviously Oracle has its own equivalent commercial offerings (WebLogic, etc.) and prefers customers to use them.

Some complain about the quality of Oracle's support. Can't really comment on that; I had little exposure to RedHat support, maybe used it a couple of times and it was good. Oracle support can be slower, but in most cases it is good/sufficient. Actually, over the last few years support quality for Linux has improved noticeably - I guess Oracle is pushing their cloud very aggressively and as a result is investing in Linux support (as Oracle cloud, aka OCI, runs on Oracle Linux).
art_ok 1 point· just now

Forgot to mention that converting RedHat Linux to Oracle is very straightforward - it's just a matter of updating the yum/dnf config to point it to Oracle repositories. Not sure if you can do it with CentOS (maybe it's possible, I just never needed to convert CentOS to Oracle).

[Dec 10, 2020] Backlash against Red Hat management started

In the end IBM/Red Hat might even lose money, as powerful organizations such as universities might abandon Red Hat as their platform. Or maybe not. Red Hat managed to push systemd down everyone's throat without any major hit to revenue. Why not repeat the trick with CentOS? In any case IBM now owns enterprise Linux, and the bitter complaints and threats of retribution in this forum are just a symptom that development is now completely driven by corporate brass, and all key decisions belong to them.
Community-wise, this is plain bad news for Open Source and all Open Source communities. IBM explained to them very clearly: you do not matter. And an organized minority always beats a disorganized majority. Actually, most large organizations will probably stick with a Red Hat compatible OS, moving to Oracle Linux or to Rocky Linux if it materializes, not to Debian.
What is interesting is that most people here believe that when security patches stop, that's the end of life for a particular Linux version. It is an interesting superstition, and it shows how conditioned by corporations Linux folk are and how far from the BSD folk they actually are. Security is an architectural thing first and foremost. Patches are just a band-aid and cannot change the general security situation in Linux no matter how hard anyone tries. But they now serve as a powerful tool of corporate mind control over the user population. Fear is a powerful instrument of mind control.
In reality, the security of most systems on an internal network does not change one bit with patches. And on an external network, only the applications with open ports matter (that's why ssh should be restricted to the subnets actually used, not opened to the whole world).
Notable quotes:
"... Bad idea. The whole point of using CentOS is it's an exact binary-compatible rebuild of RHEL. With this decision RH is killing CentOS and inviting to create a new *fork* or use another distribution ..."
"... We all knew from the moment IBM bought Redhat that we were on borrowed time. IBM will do everything they can to push people to RHEL even if that includes destroying a great community project like CentOS. ..."
"... First CoreOS, now CentOS. It's about time to switch to one of the *BSDs. ..."
"... I guess that means the tens of thousands of cores of research compute I manage at a large University will be migrating to Debian. ..."
"... IBM is declining, hence they need more profit from "useless" product line. So disgusting ..."
"... An entire team worked for months on a centos8 transition at the uni I work at. I assume a small portion can be salvaged but reading this it seems most of it will simply go out the window ..."
"... Unless the community can center on a new single proper fork of RHEL, it makes the most sense (to me) to seek refuge in Debian as it is quite close to CentOS in stability terms. ..."
"... Another one bites the dust due to corporate greed, which IBM exemplifies ..."
"... More likely to drive people entirely out of the RHEL ecosystem. ..."
"... Don't trust Red Hat. 1 year ago Red Hat's CTO Chris Wright agreed in an interview: 'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."' https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/ ..."
"... 'To be exact, CentOS Stream is an upstream development platform for ecosystem developers. It will be updated several times a day. This is not a production operating system. It's purely a developer's distro.' ..."
"... Read again: CentOS Stream is not a production operating system. 'Nuff said. ..."
"... This makes my decision to go with Ansible and CentOS 8 in our enterprise simple. Nope, time to got with Puppet or Chef. ..."
"... Ironic, and it puts those of us who have recently migrated many of our development serves to CentOS8 in a really bad spot. Luckily we haven't licensed RHEL8 production servers yet -- and now that's never going to happen. ..."
"... What IBM fails to understand is that many of us who use CentOS for personal projects also work for corporations that spend millions of dollars annually on products from companies like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM. ..."
"... IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses from its customers.. ..."
"... Hoping that stabbing Open Source community in the back, will make it switch to commercial licenses is absolutely preposterous. This shows how disconnected they're from reality and consumed by greed and it will simply backfire on them, when we switch to Debian or any other LTS alternative. ..."
"... Centos was handy for education and training purposes and production when you couldn't afford the fees for "support", now it will just be a shadow of Fedora. ..."
"... There was always a conflict of interest associated with Redhat managing the Centos project and this is the end result of this conflict of interest. ..."
"... The reality is that someone will repackage Redhat and make it just like Centos. The only difference is that Redhat now live in the same camp as Oracle. ..."
"... Everyone predicted this when redhat bought centos. And when IBM bought RedHat it cemented everyone's notion. ..."
"... I am senior system admin in my organization which spends millions dollar a year on RH&IBM products. From tomorrow, I will do my best to convince management to minimize our spending on RH & IBM ..."
"... IBM are seeing every CentOS install as a missed RHEL subscription... ..."
"... Some years ago IBM bought Informix. We switched to PostgreSQL, when Informix was IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will happen with our CentOS installations. What's wrong with IBM? ..."
"... Remember when RedHat, around RH-7.x, wanted to charge for the distro, the community revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time. ..."
"... As I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life, before they become another ordinary dept of IBM, producing IBM Linux. ..."
"... Happy to donate and be part of the revolution away the Corporate vampire Squid that is IBM ..."
"... Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you *lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money relying on a promise you made, we thought, in good faith. ..."
Dec 10, 2020 | blog.centos.org

Internet User says: December 8, 2020 at 5:13 pm

This is a pretty clear indication that you people are completely out of touch with your users.

Joel B. D. says: December 8, 2020 at 5:17 pm

Bad idea. The whole point of using CentOS is it's an exact binary-compatible rebuild of RHEL. With this decision RH is killing CentOS and inviting to create a new *fork* or use another distribution. Do you realize how much market share you will be losing and how much chaos you will be creating with this?

"If you are using CentOS Linux 8 in a production environment, and are concerned that CentOS Stream will not meet your needs, we encourage you to contact Red Hat about options". So this is the way RH is telling us they don't want anyone to use CentOS anymore and switch to RHEL?

Michael says: December 8, 2020 at 8:31 pm

That's exactly what they're saying. We all knew from the moment IBM bought Redhat that we were on borrowed time. IBM will do everything they can to push people to RHEL even if that includes destroying a great community project like CentOS.

OS says: December 8, 2020 at 6:20 pm

First CoreOS, now CentOS. It's about time to switch to one of the *BSDs.

JD says: December 8, 2020 at 6:35 pm

Wow. Well, I guess that means the tens of thousands of cores of research compute I manage at a large University will be migrating to Debian. I've just started preparing to shift from Scientific Linux 7 to CentOS due to SL being discontinued by 2024. Glad I've only just started - not much work to throw away.

ShameOnIBM says: December 8, 2020 at 7:07 pm

IBM is declining, hence they need more profit from "useless" product line. So disgusting

MLF says: December 8, 2020 at 7:15 pm

An entire team worked for months on a centos8 transition at the uni I work at. I assume a small portion can be salvaged but reading this it seems most of it will simply go out the window. Does anyone know if this decision of dumping centos8 is final?

MM says: December 8, 2020 at 7:28 pm

Unless the community can center on a new single proper fork of RHEL, it makes the most sense (to me) to seek refuge in Debian as it is quite close to CentOS in stability terms.

Already existing functioning distribution ecosystem, can probably do good with influx of resources to enhance the missing bits, such as further improving SELinux support and expanding Debian security team.

I say this without any official or unofficial involvement with the Debian project, other than being a user.

And we have just launched hundred of Centos 8 servers.

Faisal Sehbai says: December 8, 2020 at 7:32 pm

Another one bites the dust due to corporate greed, which IBM exemplifies. This is why I shuddered when they bought RH. There is nothing that IBM touches that gets better, other than the bottom line of their suits!

Disgusting!

William Smith says: December 8, 2020 at 7:39 pm

This is a big mistake. RedHat did this with RedHat Linux 9 the market leading Linux and created Fedora, now an also-ran to Ubuntu. I spent a lot of time during Covid to convert from earlier versions to 8, and now will have to review that work with my customer.

Daniele Brunengo says: December 8, 2020 at 7:48 pm

I just finished building a CentOS 8 web server, worked out all the nooks and crannies and was very satisfied with the result. Now I have to do everything from scratch? The reason why I chose this release was that every website and its brother were giving a 2029 EOL. Changing that is the worst betrayal of trust possible for the CentOS community. It's unbelievable.

David Potterveld says: December 8, 2020 at 8:08 pm

What a colossal blunder: a pivot from the long-standing mission of an OS providing stability, to an unstable development platform, in a manner that betrays its current users. They should remove the "C" from CentOS because it no longer has any connection to a community effort. I wonder if this is a move calculated to drive people from a free near clone of RHEL to a paid RHEL subscription? More likely to drive people entirely out of the RHEL ecosystem.

a says: December 8, 2020 at 9:08 pm

From a RHEL perspective I understand why they'd want it this way. CentOS was probably cutting deep into potential RedHat license sales. Though why or how RedHat would have a say in how CentOS is being run in the first place is.. troubling.

From a CentOS perspective you may as well just take the project out back and close it now. If people wanted to run beta-test tier RHEL they'd run Fedora. "LATER SECURITY FIXES AND UNTESTED 'FEATURES'?! SIGN ME UP!" -nobody

I'll probably run CentOS 7 until the end and then swap over to Debian when support starts hurting me. What a pain.

Ralf says: December 8, 2020 at 9:08 pm

Don't trust Red Hat. 1 year ago Red Hat's CTO Chris Wright agreed in an interview: 'Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."' https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/

I'm a current user of old school CentOS, so keep your promise, Mr CTO.

Tamas says: December 8, 2020 at 10:01 pm

That was quick: "Old school CentOS isn't going anywhere. Stream is available in parallel with the existing CentOS builds. In other words, "nothing changes for current users of CentOS."

https://www.zdnet.com/article/red-hat-introduces-rolling-release-centos-stream/

Konstantin says: December 9, 2020 at 3:36 pm

From the same article: 'To be exact, CentOS Stream is an upstream development platform for ecosystem developers. It will be updated several times a day. This is not a production operating system. It's purely a developer's distro.'

Read again: CentOS Stream is not a production operating system. 'Nuff said.

Samuel C. says: December 8, 2020 at 10:53 pm

This makes my decision to go with Ansible and CentOS 8 in our enterprise simple. Nope, time to got with Puppet or Chef. IBM did what I thought they would screw up Red Hat. My company is dumping IBM software everywhere - this means we need to dump CentOS now too.

Brendan says: December 9, 2020 at 12:15 am

Ironic, and it puts those of us who have recently migrated many of our development serves to CentOS8 in a really bad spot. Luckily we haven't licensed RHEL8 production servers yet -- and now that's never going to happen.

vinci says: December 8, 2020 at 11:45 pm

I can't believe what IBM is actually doing. This is a direct move against all that open source means. They want to do exactly the same thing they're doing with awx (vs. ansible tower). You're going against everything that stands for open source. And on top of that you choose to stop offering support for Centos 8, all of a sudden! What a horrid move on your part. This only reliable choice that remains is probably going to be Debian/Ubuntu. What a waste...

Peter Vonway says: December 8, 2020 at 11:56 pm

What IBM fails to understand is that many of us who use CentOS for personal projects also work for corporations that spend millions of dollars annually on products from companies like IBM and have great influence over what vendors are chosen. This is a pure betrayal of the community. Expect nothing less from IBM.

Scott says: December 9, 2020 at 8:38 am

This is exactly it. IBM is cashing in on its Red Hat acquisition by attempting to squeeze extra licenses from its customers.. while not taking into account the fact that Red Hat's strong adoption into the enterprise is a direct consequence of engineers using the nonproprietary version to develop things at home in their spare time.

Having an open source, non support contract version of your OS is exactly what drives adoption towards the supported version once the business decides to put something into production.

They are choosing to kill the golden goose in order to get the next few eggs faster. IBM doesn't care about anything but its large enterprise customers. Very stereotypically IBM.

OSLover says: December 9, 2020 at 12:09 am

So sad. Not only breaking the support promise but so quickly (2021!)

Business wise, a lot of business software is providing CentOS packages and support. Like hosting panels, backup software, virtualization, Management. I mean A LOT of money worldwide is in dark waters now with this announcement. It took years for CentOS to appear in their supported and tested distros. It will disappear now much faster.

Community wise, this is plain bad news for Open Source and all Open Source communities. This is sad. I wonder, are open source developers nowadays happy to spend so many hours on something that will in the end benefit only IBM "subscribers"? I don't think they are.

What a sad way to end 2020.

technick says: December 9, 2020 at 12:09 am

I don't want to give up on CentOS but this is a strong life changing decision. My background is linux engineering with over 15+ years of hardcore experience. CentOS has always been my go to when an organization didn't have the appetite for RHEL and the $75 a year license fee per instance. I fought off Ubuntu take overs at 2 of the last 3 organizations I've been with successfully. I can't, won't fight off any more and start advocating for Ubuntu or pure Debian moving forward.

RIP CentOS. Red Hat killed a great project. I wonder if Ansible will be next?

ConcernedAdmin says: December 9, 2020 at 12:47 am

Hoping that stabbing Open Source community in the back, will make it switch to commercial licenses is absolutely preposterous. This shows how disconnected they're from reality and consumed by greed and it will simply backfire on them, when we switch to Debian or any other LTS alternative. I can't think moving everything I so caressed and loved to a mess like Ubuntu.

John says: December 9, 2020 at 1:32 am

Assinine. This is completely ridiculous. I have migrated several servers from CentOS 7 to 8 recently with more to go. We also have a RHEL subscription for outward facing servers, CentOS internal. This type of change should absolutely have been announced for CentOS 9. This is garbage saying 1 year from now when it was supposed to be till 2029. A complete betrayal. One year to move everything??? Stupid.

Now I'm going to be looking at a couple of other options but it won't be RHEL after this type of move. This has destroyed my trust in RHEL as I'm sure IBM pushed for this. You will be losing my RHEL money once I chose and migrate. I get companies exist to make money and that's fine. This though is purely a naked money grab that betrays an established timeline and is about to force massive work on lots of people in a tiny timeframe saying "f you customers.". You will no longer get my money for doing that to me

Concerned Fren says: December 9, 2020 at 1:52 am

In hind sight it's clear to see that the only reason RHEL took over CentOS was to kill the competition.

This is also highly frustrating as I just completed new CentOS8 and RHEL8 builds for non-production and production servers and had already begun deployments. Now I'm left in the situation of finding a new Linux distribution for our enterprise while I sweat out the last few years of RHEL7/CentOS7. Ubuntu is probably a no go; their enterprise tooling is somewhat lacking, and I am of the opinion that they will likely be gobbled up by Microsoft in the next few years.

Unfortunately, the short-sighted RH/IBMer that made this decision failed to realize that a lot of admins who used CentOS at home and in the enterprise also advocated and drove sales towards RedHat as well. Now with this announcement I'm afraid the damage is done, and even if you were to take back your announcement, trust has been broken and the blowback will ultimately mean the death of CentOS and reduced sales of RHEL. There is, however, an opportunity for another corporation such as SUSE, which is owned by Micro Focus, to capitalize on this epic blunder simply by announcing an LTS version of openSUSE Leap. This would in turn move people/corporations to the SUSE platform, which in turn would drive sales for SLES.

William Ashford says: December 9, 2020 at 2:02 am

So the inevitable has come to pass, what was once a useful Distro will disappear like others have. Centos was handy for education and training purposes and production when you couldn't afford the fees for "support", now it will just be a shadow of Fedora.

Christian Reiss says: December 9, 2020 at 6:28 am

This is disgusting. Bah. As a CTO I will now - today - assemble my teams and develop a plan to migrate all DataCenters back to Debian for good. I will also instantly instruct the termination of all mirroring of your software.

For the software (CentOS) I hope for a quick death that will not drag on for years.

Ian says: December 9, 2020 at 2:10 am

This is a bit sad. There was always a conflict of interest associated with Redhat managing the Centos project and this is the end result of this conflict of interest.

There is a genuine benefit associated with the existence of Centos for Redhat however it would appear that that benefit isn't great enough and some arse clown thought that by forcing users to migrate it will increase Redhat's revenue.

The reality is that someone will repackage Redhat and make it just like Centos. The only difference is that Redhat now live in the same camp as Oracle.

cody says: December 9, 2020 at 4:53 am

Everyone predicted this when redhat bought centos. And when IBM bought RedHat it cemented everyone's notion.

Ganesan Rajagopal says: December 9, 2020 at 5:09 am

Thankfully we just started our migration from CentOS 7 to 8 and this surely puts a stop to that. Even if CentOS backtracks on this decision because of community backlash, the reality is the trust is lost. You've just given a huge leg for Ubuntu/Debian in the enterprise. Congratulations!

Bomel says: December 9, 2020 at 6:22 am

I am senior system admin in my organization which spends millions dollar a year on RH&IBM products. From tomorrow, I will do my best to convince management to minimize our spending on RH & IBM, and start looking for alternatives to replace existing RH & IBM products under my watch.

Steve says: December 9, 2020 at 8:57 am

IBM are seeing every CentOS install as a missed RHEL subscription...

Ralf says: December 9, 2020 at 10:29 am

Some years ago IBM bought Informix. We switched to PostgreSQL, when Informix was IBMized. One year ago IBM bought Red Hat and CentOS. CentOS is now IBMized. Guess what will happen with our CentOS installations. What's wrong with IBM?

Michel-André says: December 9, 2020 at 5:18 pm

Hi all,

Remember when RedHat, around RH-7.x, wanted to charge for the distro, the community revolted so much that RedHat saw their mistake and released Fedora. You can fool all the people some of the time, and some of the people all the time, but you cannot fool all the people all the time.

Even though RedHat/CentOS has a very large share of the Linux server market, it will suffer the same fate as Novell (had 85% of the matket), disappearing into darkness !

Mihel-André

PeteVM says: December 9, 2020 at 5:27 pm

As I predicted, RHEL is destroying CentOS, and IBM is running Red Hat into the ground in the name of profit$. Why is anyone surprised? I give Red Hat 12-18 months of life, before they become another ordinary dept of IBM, producing IBM Linux.

CentOS is dead. Time to either go back to Debian and its derivatives, or just pay for RHEL, or IBMEL, and suck it up.

JadeK says: December 9, 2020 at 6:36 pm

I am mid-migration from Rhel/Cent6 to 8. I now have to stop a major project for several hundred systems. My group will have to go back to rebuild every CentOS 8 system we've spent the last 6 months deploying.

Congrats fellas, you did it. You perfected the transition to Debian from CentOS.

Godimir Kroczweck says: December 9, 2020 at 8:21 pm

I find it kind of funny, I find it kind of sad. The dreams in which I'm moving 1.5K+ machines to whatever distro I have yet to find fitting as a replacement are the..

Wait. How could anyone, in all seriousness, consider cutting down an already published EOL a good idea?

I literally had to convince people to move from Ubuntu and Debian installations to CentOS for the sake of stability and longer support, just to end up looking like a clown now, because with a single move the distro was deprived of both.

Paul R says: December 9, 2020 at 9:14 pm

Happy to donate and be part of the revolution away the Corporate vampire Squid that is IBM

Nicholas Knight says: December 9, 2020 at 9:34 pm

Red Hat's word now means nothing to me. Disagreements over future plans and technical direction are one thing, but you *lied* to us about CentOS 8's support cycle, to the detriment of *everybody*. You cost us real money relying on a promise you made, we thought, in good faith. It is now clear Red Hat no longer knows what "good faith" means, and acts only as a Trumpian vacuum of wealth.

[Dec 10, 2020] GPL bites Red Hat in the butt: they might face the emergence of a CentOS alternative due to the wave of support for such a distro

Dec 10, 2020 | blog.centos.org

Sam Callis says: December 8, 2020 at 3:58 pm

I have been using CentOS for over 10 years and one of the things I loved about it was how stable it has been. Now, instead of being a stable release, it is changing to the beta testing ground for RHEL 8.

And instead of 10 years of a support you need to update to the latest dot release. This has me, very concerned.

Sieciowski says: December 9, 2020 at 11:19 am

well, 10 years - have you ever contributed with anything for the CentOS community, or paid them a wage or at least donated some decent hardware for development or maybe just being parasite all the time and now are you surprised that someone has to buy it's your own lunches for a change?

If you think you might have done it even better why not take RH sources and make your own FreeRHos whatever distro, then support, maintain and patch all the subsequent versions for free?

Joe says: December 9, 2020 at 11:47 am

That's ridiculous. RHEL has benefitted from the free testing and corner case usage of CentOS users and made money hand-over-fist on RHEL. Shed no tears for using CentOS for free. That is the benefit of opening the core of your product.

Ljubomir Ljubojevic says: December 9, 2020 at 12:31 pm

You are missing a very important point. The goal of the CentOS project was to rebuild RHEL, nothing else. If money was the problem, they could have asked for donations and it would be clear whether there can be financial support for the rebuild or not.

Putting the entire community in front of a done deal is disheartening, and no one will trust that Red Hat is pro-community; not to mention the Red Hat employees that sit on the CentOS board - who can trust their integrity after this fiasco?

Matt Phelps says: December 8, 2020 at 4:12 pm

This is a breach of trust from the already published timeline of CentOS 8 where the EOL was May 2029. One year's notice for such a massive change is unacceptable.

Move this approach to CentOS 9

fahrradflucht says: December 8, 2020 at 5:37 pm

This! People already started deploying CentOS 8 with the expectation of 10 years of updates. - Even a migration to RHEL 8 would imply completely reprovisioning the systems which is a big ask for systems deployed in the field.

Gregory Kurtzer says: December 8, 2020 at 4:27 pm

I am considering creating another rebuild of RHEL and may even be able to hire some people for this effort. If you are interested in helping, please join the HPCng slack (link on the website hpcng.org).

Greg (original founder of CentOS)

A says: December 8, 2020 at 7:11 pm

Not a programmer, but I'd certainly use it. I hope you get it off the ground.

Michael says: December 8, 2020 at 8:26 pm

This sounds like a great idea and getting control away from corporate entities like IBM would be helpful. Have you considered reviving the Scientific Linux project?

Bond Masuda says: December 8, 2020 at 11:53 pm

Feel free to contact me. I'm a long time RH user (since pre-RHEL when it was RHL) in both server and desktop environments. I've built and maintained some RPMs for some private projects that used CentOS as foundation. I can contribute compute and storage resources. I can program in a few different languages.

Rex says: December 9, 2020 at 3:46 am

Dear Greg,

Thank you for considering starting another RHEL rebuild. If and when you do, please consider making your new website a Brave Verified Content Creator. I earn a little bit of money every month using the Brave browser, and I end up donating it to Wikipedia every month because there are so few Brave Verified websites.

The verification process is free, and takes about 15 to 30 minutes. I believe that the Brave browser now has more than 8 million users.

dovla091 says: December 9, 2020 at 10:47 am

Wikipedia. The so called organization that get tons of money from tech oligarchs and yet the whine about we need money and support? (If you don't believe me just check their biggest donors) also they keen to be insanely biased and allow to write on their web whoever pays the most... Seriously, find other organisation to donate your money

dan says: December 9, 2020 at 4:00 am

Please keep us updated. I can't donate much, but I'm sure many would love to donate to this cause.

Chad Gregory says: December 9, 2020 at 7:21 pm

Not sure what I could do but I will keep an eye out things I could help with. This change to CentOS really pisses me off as I have stood up 2 CentOS servers for my works production environment in the last year.

Vasile M says: December 8, 2020 at 8:43 pm

LOL... CentOS is RH from 2014 to date. What you expected? As long as CentOS is so good and stable, that cuts some of RHEL sales... RH and now IBM just think of profit. It was expected, search the net for comments back in 2014.

[Dec 10, 2020] Amazon Linux 2

Dec 10, 2020 | aws.amazon.com

Amazon Linux 2 is the next generation of Amazon Linux, a Linux server operating system from Amazon Web Services (AWS). It provides a secure, stable, and high performance execution environment to develop and run cloud and enterprise applications. With Amazon Linux 2, you get an application environment that offers long term support with access to the latest innovations in the Linux ecosystem. Amazon Linux 2 is provided at no additional charge.

Amazon Linux 2 is available as an Amazon Machine Image (AMI) for use on Amazon Elastic Compute Cloud (Amazon EC2). It is also available as a Docker container image and as a virtual machine image for use on Kernel-based Virtual Machine (KVM), Oracle VM VirtualBox, Microsoft Hyper-V, and VMware ESXi. The virtual machine images can be used for on-premises development and testing. Amazon Linux 2 supports the latest Amazon EC2 features and includes packages that enable easy integration with AWS. AWS provides ongoing security and maintenance updates for Amazon Linux 2.
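
If you just want to poke at Amazon Linux 2 locally, one quick route is the container image mentioned above (a sketch, assuming Docker is installed; amazonlinux is the image name on Docker Hub):

docker pull amazonlinux:2
docker run -it --rm amazonlinux:2 bash
# inside the container:
cat /etc/os-release    # reports Amazon Linux 2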

[Dec 10, 2020] A letter to IBM brass

Notable quotes:
"... Redhat endorsed that moral contract when you brought official support to CentOS back in 2014. ..."
"... Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the community. ..."
"... Also, a lot of smaller developers won't support RHEL anymore because their target weren't big companies, making less and less products available without the need of self supporting RPM builds. ..."
"... Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future. ..."
"... This means that we'll now have to turn to other linux flavors, like Debian, or OpenSUSE, of which at least some have hardware vendor support too, but with a lesser lifecycle. ..."
"... I think you destroyed a large part of the RHEL / CentOS community with this move today. ..."
"... Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more uncertain. ..."
Dec 10, 2020 | blog.centos.org

Orsiris de Jong says: December 9, 2020 at 9:41 am

Dear IBM,

As a lot of us here, I've been in the CentOS / RHEL community for more than 10 years.
Reasons of that choice were stability, long term support and good hardware vendor support.

Like many others, I've built much of my skills upon this linux flavor for years, and have been implicated into the community for numerous bug reports, bug fixes, and howto writeups.

Using CentOS was the good alternative to RHEL on a lot of non critical systems, and for smaller companies like the one I work for.

The moral contract has always been a rock solid "Community Enterprise OS" in exchange of community support, bug reports & fixes, and growing interest from developers.

Redhat endorsed that moral contract when you brought official support to CentOS back in 2014.

Now that you decided to turn your back on the community, even if another RHEL fork comes out, there will be an exodus of the community.

Also, a lot of smaller developers won't support RHEL anymore because their target weren't big companies, making less and less products available without the need of self supporting RPM builds.

This will make RHEL less and less widely used by startups, enthusiasts and others.

CentOS Stream being the upstream of RHEL, I highly doubt system architects and developers are willing to be beta testers for RHEL.

Providing a free RHEL subscription for Open Source projects just sounds like your next step to keep a bit of the exodus from happening, but I'd bet that "free" subscription will get more and more restrictions later on, pushing to a full RHEL support contract.

As a lot of people here, I won't go the Oracle way, they already did a very good job destroying other company's legacy.

Gregory Kurtzer's fork will take time to grow, but in the meantime, people will need a clear vision of the future.

This means that we'll now have to turn to other linux flavors, like Debian, or OpenSUSE, of which at least some have hardware vendor support too, but with a lesser lifecycle.

I think you destroyed a large part of the RHEL / CentOS community with this move today.

Maybe you'll get more RHEL subscriptions in the next months yielding instant profits, but the long run growth is now far more uncertain.

... ... ...

[Dec 10, 2020] CentOS will be RHEL's beta, but CentOS denies this

IBM has a history of taking over companies and turning them into junk, so I am not that surprised. I am surprised that it took IBM brass so long to kill CentOS after the Red Hat acquisition.
Notable quotes:
"... By W3Tech 's count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third, 17.5%. RHEL? It's a distant fourth with 1.8%. ..."
"... Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle . That means if you're using CentOS 7, you'll see support through June 30, 2024 ..."
Dec 10, 2020 | www.zdnet.com

I'm far from alone. By W3Tech 's count, while Ubuntu is the most popular Linux server operating system with 47.5%, CentOS is number two with 18.8% and Debian is third, 17.5%. RHEL? It's a distant fourth with 1.8%.

If you think you just realized why Red Hat might want to remove CentOS from the server playing field, you're far from the first to think that.

Red Hat will continue to support CentOS 7 and produce it through the remainder of the RHEL 7 life cycle . That means if you're using CentOS 7, you'll see support through June 30, 2024

[Dec 10, 2020] Time to bring back Scientific Linux

Notable quotes:
"... I bet Fermilab are thrilled back in 2019 they announced that they wouldn't develop Scientific Linux 8, and focus on CentOS 8 instead. ..."
Dec 10, 2020 | www.reddit.com

I bet Fermilab are thrilled back in 2019 they announced that they wouldn't develop Scientific Linux 8, and focus on CentOS 8 instead. https://listserv.fnal.gov/scripts/wa.exe?A2=SCIENTIFIC-LINUX-ANNOUNCE;11d6001.1904 l

clickwir 19 points· 1 day ago

Time to bring back Scientific Linux.

[Dec 10, 2020] CentOS Project: Embraced, extended, extinguished.

Notable quotes:
"... My gut feeling is that something like Scientific Linux will make a return and current CentOS users will just use that. ..."
Dec 10, 2020 | www.reddit.com

KugelKurt 18 points· 1 day ago

I wonder what Red Hat's plan is WRT companies like Blackmagic Design that ship CentOS as part of their studio equipment.

The cost of a RHEL license isn't the issue when the overall cost of the equipment is in the tens of thousands but unless I missed a change in Red Hat's trademark policy, Blackmagic cannot distribute a modified version of RHEL and without removing all trademarks first.

I don't think a rolling release distribution is what BMD wants.

My gut feeling is that something like Scientific Linux will make a return and current CentOS users will just use that.

[Dec 10, 2020] Oracle Linux -- A better alternative to CentOS

Currently limited to CentOS 6 and CentOS 7.
Dec 10, 2020 | linux.oracle.com
Oracle Linux: A better alternative to CentOS

We firmly believe that Oracle Linux is the best Linux distribution on the market today. It's reliable, it's affordable, it's 100% compatible with your existing applications, and it gives you access to some of the most cutting-edge innovations in Linux like Ksplice and DTrace.

But if you're here, you're a CentOS user. Which means that you don't pay for a distribution at all, for at least some of your systems. So even if we made the best paid distribution in the world (and we think we do), we can't actually get it to you... or can we?

We're putting Oracle Linux in your hands by doing two things:

We think you'll like what you find, and we'd love for you to give it a try.

FAQ

Wait, doesn't Oracle Linux cost money?
Oracle Linux support costs money. If you just want the software, it's 100% free. And it's all in our yum repo at yum.oracle.com . Major releases, errata, the whole shebang. Free source code, free binaries, free updates, freely redistributable, free for production use. Yes, we know that this is Oracle, but it's actually free. Seriously.
Is this just another CentOS?
Inasmuch as they're both 100% binary-compatible with Red Hat Enterprise Linux, yes, this is just like CentOS. Your applications will continue to work without any modification whatsoever. However, there are several important differences that make Oracle Linux far superior to CentOS.
How is this better than CentOS?
Well, for one, you're getting the exact same bits our paying enterprise customers are getting . So that means a few things. Importantly, it means virtually no delay between when Red Hat releases a kernel and when Oracle Linux does:


[Chart: delay in kernel security advisories since January 2018, CentOS vs Oracle Linux; CentOS shows large fluctuations in delays]

So if you don't want to risk another CentOS delay, Oracle Linux is a better alternative for you. It turns out that our enterprise customers don't like to wait for updates -- and neither should you.

What about the code quality?
Again, you're running the exact same code that our enterprise customers are, so it has to be rock-solid. Unlike CentOS, we have a large paid team of developers, QA, and support engineers that work to make sure this is reliable.
What if I want support?
If you're running Oracle Linux and want support, you can purchase a support contract from us (and it's significantly cheaper than support from Red Hat). No reinstallation, no nothing -- remember, you're running the same code as our customers.

Contrast that with the CentOS/RHEL story. If you find yourself needing to buy support, have fun reinstalling your system with RHEL before anyone will talk to you.

Why are you doing this?
This is not some gimmick to get you running Oracle Linux so that you buy support from us. If you're perfectly happy running without a support contract, so are we. We're delighted that you're running Oracle Linux instead of something else.

At the end of the day, we're proud of the work we put into Oracle Linux. We think we have the most compelling Linux offering out there, and we want more people to experience it.

How do I make the switch?
Run the following as root:

curl -O https://linux.oracle.com/switch/centos2ol.sh
sh centos2ol.sh

What versions of CentOS can I switch?
centos2ol.sh can convert your CentOS 6 and 7 systems to Oracle Linux.
What does the script do?
The script has two main functions: it switches your yum configuration to use the Oracle Linux yum server to update some core packages and installs the latest Oracle Unbreakable Enterprise Kernel. That's it! You won't even need to restart after switching, but we recommend you do to take advantage of UEK.
Is it safe?
The centos2ol.sh script takes precautions to back up and restore any repository files it changes, so if it does not work on your system it will leave it in working order. If you encounter any issues, please get in touch with us by emailing oraclelinux-info_ww_grp@oracle.com .
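
After running centos2ol.sh, a quick sanity check might look like the following (a sketch; exact version strings will differ):

cat /etc/oracle-release    # e.g. Oracle Linux Server release 7.x
uname -r                   # contains "uek" if the Unbreakable Enterprise Kernel is booted
yum repolist               # enabled repos should now point at yum.oracle.com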

[Dec 10, 2020] The demise of CentOS and independent training providers

Dec 10, 2020 | blog.centos.org

Anthony Mwai

says: December 8, 2020 at 8:44 pm

IBM is messing up RedHat after the take over last year. This is the most unfortunate news to the Free Open-Source community. Companies have been using CentOS as a testing bed before committing to purchase RHEL subscription licenses.

We need to rethink before rolling out RedHat/CentOS 8 training in our Centre.

Joe says: December 9, 2020 at 1:03 pm

You can use Oracle Linux in exactly the same way as you did CentOS except that you have the option of buying support without reinstalling a "commercial" variant.

Everything's in the public repos except a few addons like ksplice. You don't even have to go through the e-delivery to download the ISOs any more, they're all linked from yum.oracle.com

TechSmurf says: December 9, 2020 at 12:38 am

Not likely. Oracle Linux has extensive use by paying Oracle customers as a host OS for their database software and in general purposes for Oracle Cloud Infrastructure.

Oracle customers would be even less thrilled about Streams than CentOS users. I hate to admit it, but Oracle has the opportunity to take a significant chunk of the CentOS user base if they don't do anything Oracle-ish, myself included.

I'll be pretty surprised if they don't completely destroy their own windfall opportunity, though.

David Anderson says: December 8, 2020 at 7:16 pm

"OEL is literally a rebranded RH."

So, what's not to like? I also was under the impression that OEL was a paid offering, but apparently this is wrong - https://www.oracle.com/ar/a/ocom/docs/linux/oracle-linux-ds-1985973.pdf - "Oracle Linux is easy to download and completely free to use, distribute, and update."

Bill Murmor says: December 9, 2020 at 5:04 pm

So, what's the problem?

IBM has discontinued CentOS. Oracle is producing a working replacement for CentOS. If, at some point, Oracle attacks their product's users in the way IBM has here, then one can move to Debian, but for now, it's a working solution, as CentOS no longer is.

k1 says: December 9, 2020 at 7:58 pm

Because it's a trust issue. RedHat has lost trust. Oracle never had it in the first place.

[Dec 10, 2020] Oracle has a converter script for CentOS 7. And here is a quick hack to convert CentOS 8 to Oracle Linux

You can use Oracle Linux exactly like CentOS, only better
Ang says: December 9, 2020 at 5:04 pm "I never thought we'd see the day Oracle is more trustworthy than RedHat/IBM. But I guess such things do happen with time..."
Notable quotes:
"... The link says that you don't have to pay for Oracle Linux . So switching to it from CentOS 8 could be a very easy option. ..."
"... this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv: ..."
Dec 10, 2020 | blog.centos.org

Charlie F. says: December 8, 2020 at 6:37 pm

Oracle has a converter script for CentOS 7, and they will sell you OS support after you run it:

https://linux.oracle.com/switch/centos/

It would be nice if Oracle would update that for CentOS 8.

David Anderson says: December 8, 2020 at 7:15 pm

The link says that you don't have to pay for Oracle Linux . So switching to it from CentOS 8 could be a very easy option.

Max Grü says: December 9, 2020 at 2:05 pm

Oracle Linux is free. The only thing that costs money is support for it. I quote "Yes, we know that this is Oracle, but it's actually free. Seriously."

Phil says: December 9, 2020 at 2:10 pm

this quick n'dirty hack worked fine to convert centos 8 to oracle linux 8, ymmv:

repobase=http://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage
wget \
${repobase}/redhat-release-8.3-1.0.0.1.el8.x86_64.rpm \
${repobase}/oracle-release-el8-1.0-1.el8.x86_64.rpm \
${repobase}/oraclelinux-release-8.3-1.0.4.el8.x86_64.rpm \
${repobase}/oraclelinux-release-el8-1.0-9.el8.x86_64.rpm
rpm -e centos-linux-release --nodeps
dnf --disablerepo='*' localinstall ./*rpm 
:> /etc/dnf/vars/ociregion
dnf remove centos-linux-repos
dnf --refresh distro-sync
# since I wanted to try out the unbreakable enterprise kernel:
dnf install kernel-uek
reboot
dnf remove kernel

[Dec 10, 2020] Linux Subshells for Beginners With Examples - LinuxConfig.org

Dec 10, 2020 | linuxconfig.org

Bash allows two different subshell syntaxes, namely $() and back-tick surrounded statements. Let's look at some easy examples to start:

$ echo '$(echo 'a')'
$(echo a)
$ echo "$(echo 'a')"
a
$ echo "a$(echo 'b')c"
abc
$ echo "a`echo 'b'`c"
abc


In the first command, as an example, we used ' single quotes. This resulted in our subshell command, inside the single quotes, to be interpreted as literal text instead of a command. This is standard Bash: ' indicates literal, " indicates that the string will be parsed for subshells and variables.

In the second command we swap the ' to " and thus the string is parsed for actual commands and variables. The result is that a subshell is being started, thanks to our subshell syntax ( $() ), and the command inside the subshell ( echo 'a' ) is being executed literally, and thus an a is produced, which is then inserted in the overarching / top level echo . The command at that stage can be read as echo "a" and thus the output is a .

In the third command, we further expand this to make it clearer how subshells work in-context. We echo the letter b inside the subshell, and this is joined on the left and the right by the letters a and c yielding the overall output to be abc in a similar fashion to the second command.

In the fourth and last command, we exemplify the alternative Bash subshell syntax of using back-ticks instead of $() . It is important to know that $() is the preferred syntax, and that in some remote cases the back-tick based syntax may yield some parsing errors where the $() does not. I would thus strongly encourage you to always use the $() syntax for subshells, and this is also what we will be using in the following examples.
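
One concrete illustration of why $() is preferred is nesting: $() subshells nest without any escaping, while nested back-ticks have to be escaped (a small sketch; both lines print the current date):

$ echo "$(echo "$(date)")"
$ echo "`echo \`date\``"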

Example 2: A little more complex
$ touch a
$ echo "-$(ls [a-z])"
-a
$ echo "-=-||$(ls [a-z] | xargs ls -l)||-=-"
-=-||-rw-rw-r-- 1 roel roel 0 Sep  5 09:26 a||-=-

Here, we first create an empty file by using the touch a command. Subsequently, we use echo to output something which our subshell $(ls [a-z]) will generate. Sure, we can execute the ls directly and yield more or less the same result, but note how we are adding - to the output as a prefix.

In the final command, we insert some characters at the front and end of the echo command which makes the output look a bit nicer. We use a subshell to first find the a file we created earlier ( ls [a-z] ) and then - still inside the subshell - pass the results of this command (which would be only a, literally, i.e. the file we created in the first command) to ls -l using the pipe ( | ) and the xargs command. For more information on xargs, please see our articles xargs for beginners with examples and multi threaded xargs with examples .

Example 3: Double quotes inside subshells and sub-subshells!
echo "$(echo "$(echo "it works")" | sed 's|it|it surely|')"
it surely works


Cool, no? Here we see that double quotes can be used inside the subshell without generating any parsing errors. We also see how a subshell can be nested inside another subshell. Are you able to parse the syntax? The easiest way is to start "in the middle or core of all subshells" which is in this case would be the simple echo "it works" .

This command will output it works as a result of the subshell call $(echo "it works") . Picture it works in place of the subshell, i.e.

echo "$(echo "it works" | sed 's|it|it surely|')"
it surely works

This looks simpler already. Next it is helpful to know that the sed command will do a substitute (thanks to the s command just before the | command separator) of the text it to it surely . You can read the sed command as replace __it__ with __it surely__. The output of the subshell will thus be it surely works , i.e.

echo "it surely works"
it surely works
Conclusion

In this article, we have seen that subshells surely work (pun intended), and that they can be used in a wide variety of circumstances, due to their ability to be inserted inline and within the context of the overarching command. Subshells are very powerful and once you start using them, well, there will likely be no stopping. Very soon you will be writing something like:

$ VAR="goodbye"; echo "thank $(echo "${VAR}" | sed 's|^| and |')" | sed 's|k |k you|'

This one is for you to try and play around with! Thank you and goodbye
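
If you would like to check your answer, one way to unpack that last command is to work from the innermost subshell outwards (the intermediate string in the second step is spelled out by hand):

$ echo "goodbye" | sed 's|^| and |'
 and goodbye
$ echo "thank  and goodbye" | sed 's|k |k you|'
thank you and goodbye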

[Dec 10, 2020] Top 10 Awesome Linux Screen Tricks by Isaias Irizarry

May 13, 2011 | blog.urfix.com

Screen, or as I like to refer to it, "Admin's little helper"
Screen is a window manager that multiplexes a physical terminal between several processes

Here are a couple of quick reasons you might use screen:

Let's say you have an unreliable internet connection: you can use screen, and if you get knocked out of your current session you can always connect back to it.

Or let's say you need more terminals; instead of opening a new terminal or a new tab, just create a new terminal inside of screen

Here are the screen shortcuts to help you on your way Screen shortcuts

and here are some of the Top 10 Awesome Linux Screen tips urfix.com uses all the time if not daily.

1) Attach screen over ssh
ssh -t remote_host screen -r

Directly attach a remote screen session (saves a useless parent bash process)

2) Share a terminal screen with others
% screen -r someuser/
3) Triple monitoring in screen
tmpfile=$(mktemp) && echo -e 'startup_message off\nscreen -t top htop\nsplit\nfocus\nscreen -t nethogs nethogs wlan0\nsplit\nfocus\nscreen -t iotop iotop' > $tmpfile && sudo screen -c $tmpfile

This command starts screen with 'htop', 'nethogs' and 'iotop' in split-screen. You have to have these three commands (of course) and specify the interface for nethogs – mine is wlan0, I could have acquired the interface from the default route extending the command but this way is simpler.

htop is a wonderful top replacement with many interactive commands and configuration options. nethogs is a program which tells which processes are using the most bandwidth. iotop tells which processes are using the most I/O.

The command creates a temporary "screenrc" file which it uses for doing the triple-monitoring. You can see several examples of screenrc files here: http://www.softpanorama.org/Utilities/Screen/screenrc_examples.shtml
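
If you would rather keep the layout around than regenerate it through mktemp each time, the same configuration can live in a small standalone screenrc (a sketch; ~/.screenrc-monitor is just an assumed file name, started with screen -c ~/.screenrc-monitor):

startup_message off
screen -t top htop
split
focus
screen -t nethogs nethogs wlan0
split
focus
screen -t iotop iotop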

4) Share a 'screen'-session
screen -x

After person A starts his screen session with `screen`, person B can attach to the screen of person A with `screen -x`. Good to know if you need to get or give support from/to others.

5) Start screen in detached mode
screen -d -m [<command>]

Start screen in detached mode, i.e., already running in the background. The command is optional, but what is the purpose of starting a blank screen process that way?
It's useful when invoking from a script (I manage to run many wget downloads in parallel, for example); see the sketch below.
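
As a sketch of that script use case, the following kicks off several downloads, each in its own detached screen session (the URLs and session names are placeholders):

for url in http://example.com/a.iso http://example.com/b.iso; do
    screen -d -m -S "dl_$(basename $url)" wget -c "$url"
done
screen -ls    # lists the detached download sessions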

6) Resume a detached screen session, resizing to fit the current terminal
screen -raAd.

By default, screen tries to restore its old window sizes when attaching to resizable terminals. This command is the command-line equivalent to typing ^A F to fit an open screen session to the window

7) use screen as a terminal emulator to connect to serial consoles
screen /dev/tty<device> 9600

Use GNU/screen as a terminal emulator for anything serial console related.

screen /dev/tty

eg.

screen /dev/ttyS0 9600

8) ssh and attach to a screen in one line.
ssh -t user@host screen -x <screen name>

If you know the benefits of screen, then this might come in handy for you. Instead of ssh'ing into a machine and then running a screen command, this can all be done in one line. Just have the person on the machine you're ssh'ing into run something like
screen -S debug
Then you would run
ssh -t user@host screen -x debug
and be attached to the same screen session.

9) connect to all screen instances running
screen -ls | grep pts | gawk '{ split($1, x, "."); print x[1] }' | while read i; do gnome-terminal -e screen\ -dx\ $i; done

connects to all the screen instances running.

10) Quick enter into a single screen session
alias screenr='screen -r $(screen -ls | egrep -o -e '[0-9]+' | head -n 1)'

There you have 'em folks the top 10 screen commands. enjoy!

[Dec 10, 2020] Possibility to change only year or only month in date

Jan 01, 2017 | unix.stackexchange.com

Asked Aug 22 '14 at 9:40 by SHW


Christian Severin , 2017-09-29 09:47:52

You can use e.g. date --set='-2 years' to set the clock back two years, leaving all other elements identical. You can change month and day of month the same way. I haven't checked what happens if that calculation results in a datetime that doesn't actually exist, e.g. during a DST switchover, but the behaviour ought to be identical to the usual "set both date and time to concrete values" behaviour. – Christian Severin Sep 29 '17 at 9:47

Michael Homer , 2014-08-22 09:44:23

Use date -s :
date -s '2014-12-25 12:34:56'

Run that as root or under sudo . Changing only one of the year/month/day is more of a challenge and will involve repeating bits of the current date. There are also GUI date tools built in to the major desktop environments, usually accessed through the clock.

To change only part of the time, you can use command substitution in the date string:

date -s "2014-12-25 $(date +%H:%M:%S)"

will change the date, but keep the time. See man date for formatting details to construct other combinations: the individual components are %Y , %m , %d , %H , %M , and %S .
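
Following the same command-substitution pattern, changing a single component means regenerating the rest from the current date. For example, to set only the month (here to June, an arbitrary example value) while keeping the year, day and time:

date -s "$(date +%Y)-06-$(date +%d) $(date +%H:%M:%S)"

and to set only the year (here to 2015):

date -s "2015-$(date +%m-%d) $(date +%H:%M:%S)"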


I don't want to change the time – SHW Aug 22 '14 at 9:51

Michael Homer , 2014-08-22 09:55:00

There's no option to do that. You can use date -s "2014-12-25 $(date +%H:%M:%S)" to change the date and reuse the current time, though. – Michael Homer Aug 22 '14 at 9:55

chaos , 2014-08-22 09:59:58

System time

You can use date to set the system date. The GNU implementation of date (as found on most non-embedded Linux-based systems) accepts many different formats to set the time, here a few examples:

set only the year:

date -s 'next year'
date -s 'last year'

set only the month:

date -s 'last month'
date -s 'next month'

set only the day:

date -s 'next day'
date -s 'tomorrow'
date -s 'last day'
date -s 'yesterday'
date -s 'friday'

set all together:

date -s '2009-02-13 11:31:30' #that's a magical timestamp

Hardware time

Now the system time is set, but you may want to sync it with the hardware clock:

Use --show to print the hardware time:

hwclock --show

You can set the hardware clock to the current system time:

hwclock --systohc

Or the system time to the hardware clock

hwclock --hctosys


garethTheRed , 2014-08-22 09:57:11

You change the date with the date command. However, the command expects a full date as the argument:
# date -s "20141022 09:45"
Wed Oct 22 09:45:00 BST 2014

To change part of the date, output the current date with the date part that you want to change as a string and all others as date formatting variables. Then pass that to the date -s command to set it:

# date -s "$(date +'%Y12%d %H:%M')"
Mon Dec 22 10:55:03 GMT 2014

changes the month to the 12th month - December.

The date formats are: %Y (year), %m (month), %d (day of month), %H (hour), %M (minute) and %S (second).

Balmipour , 2016-03-23 09:10:21

For ones like me running ESXi 5.1, here's what the system answered:
~ # date -s "2016-03-23 09:56:00"
date: invalid date '2016-03-23 09:56:00'

I had to use a specific ESXi command instead:

esxcli system time set  -y 2016 -M 03 -d 23  -H 10 -m 05 -s 00

Hope it helps !


Brook Oldre , 2017-09-26 20:03:34

I used the date command and the time format listed below to successfully set the date from the terminal shell on Android Things, which uses the Linux kernel.

date 092615002017.00

Format: MMDDhhmmYYYY.SS
MM   - Month  - 09
DD   - Day    - 26
hh   - Hour   - 15
mm   - Minute - 00
YYYY - Year   - 2017
.SS  - Second - 00


[Dec 09, 2020] Is Oracle A Real Alternative To CentOS

Notable quotes:
"... massive amount of extra packages and full rebuild of EPEL (same link): https://yum.oracle.com/oracle-linux-8.html ..."
Dec 09, 2020 | centosfaq.org

December 8, 2020 | Frank Cox | 33 comments

Is Oracle a real alternative to CentOS? I'm asking because I genuinely don't know; I've never paid any attention to Oracle's Linux offering before now.

But today I've seen a couple of the folks here mention Oracle Linux and I see that Oracle even offers a script to convert CentOS 7 to Oracle. Nothing about CentOS 8 in that script, though.

https://linux.oracle.com/switch/CentOS/

That page seems to say that Oracle Linux is everything that CentOS was prior to today's announcement.

But someone else here just said that the first thing Oracle Linux does is to sign you up for an Oracle account.

So, for people who know a lot more about these things than I do, what's the downside of using Oracle Linux versus CentOS? I assume that things like epel/rpmfusion/etc will work just as they do under CentOS since it's supposed to be bit-for-bit compatible like CentOS was. What does the "sign up with Oracle" stuff actually do, and can you cancel, avoid, or strip it out if you don't want it?

Based on my extremely limited knowledge around Oracle Linux, it sounds like that might be a go-to solution for CentOS refugees.

But is it, really?

Karl Vogel says: December 9, 2020 at 3:05 am

... ... ..

Go to https://linux.oracle.com/switch/CentOS/ , poke around a bit, and you end up here:
https://yum.oracle.com/oracle-linux-downloads.html

I just went to the ISO page and I can grab whatever I like without signing up for anything, so nothing's changed since I first used it.

... ... ...

Gianluca Cecchi says: December 9, 2020 at 3:30 am

[snip]

Only to point out that while in CentOS (8.3, but the same in 7.x) the situation is like this:

[g.cecchi@skull8 ~]$ ll /etc/redhat-release /etc/centos-release
-rw-r--r-- 1 root root 30 Nov 10 16:49 /etc/centos-release
lrwxrwxrwx 1 root root 14 Nov 10 16:49 /etc/redhat-release -> centos-release
[g.cecchi@skull8 ~]$ cat /etc/centos-release
CentOS Linux release 8.3.2011

in Oracle Linux (eg 7.7) you get two different files:

$ ll /etc/redhat-release /etc/oracle-release
-rw-r--r-- 1 root root 32 Aug  8  2019 /etc/oracle-release
-rw-r--r-- 1 root root 52 Aug  8  2019 /etc/redhat-release
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.7 (Maipo)
$ cat /etc/oracle-release
Oracle Linux Server release 7.7

This is generally done so that software officially certified only for the upstream enterprise vendor, and which checks the contents of the redhat-release file, is satisfied. Using the lsb_release command on an Oracle Linux 7.6 machine:

# lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: OracleServer
Description:    Oracle Linux Server release 7.6
Release:        7.6
Codename:       n/a
#

Gianluca

Rainer Traut says: December 9, 2020 at 4:18 am

On 08.12.20 at 18:54, Frank Cox wrote:

Yes, it is better than CentOS and in some aspects better than RHEL:

- faster security updates than CentOS, directly behind RHEL
- better kernels than RHEL and CentOS (UEKs) with more features
- free to download (no subscription needed):
https://yum.oracle.com/oracle-linux-isos.html
- free to use:
https://yum.oracle.com/oracle-linux-8.html
- massive amount of extra packages and a full rebuild of EPEL (same link): https://yum.oracle.com/oracle-linux-8.html

Rainer Traut says: December 9, 2020 at 4:26 am

Hi,

On 08.12.20 at 19:03, Jon Pruente wrote:

KVM is a subscription feature. They want you to run Oracle VM Server for x86 (which is based on Xen) so they can try to upsell you to use the Oracle Cloud. There's other things, but that stood out immediately.

Oracle Linux FAQ (PDF): https://www.oracle.com/a/ocom/docs/027617.pdf

There is no subscription needed. All needed repositories for the oVirt based virtualization are freely available.

https://docs.oracle.com/en/virtualization/oracle-linux-virtualization-manager/getstart/manager-install.html#manager-install-prepare

Rainer Traut says: December 10, 2020 at 4:40 am

On 09.12.20 at 17:52, Frank Cox wrote:

I'll try to answer to the best of my knowledge.

I have an Oracle account but have never used it for/with Oracle Linux. There are Oracle communities where you need an Oracle account: https://community.oracle.com/tech/apps-infra/categories/oracle_linux

Niki Kovacs says: December 10, 2020 at 10:22 am

On 10/12/2020 at 17:18, Frank Cox wrote:

That's it. I know Oracle's history, but I think for Oracle Linux, they may be much better than their reputation. I'm currently fiddling around with it, and I like it very much. Plus there's a nice script to turn an existing CentOS installation into an Oracle Linux system.

Cheers,

Niki


Ljubomir Ljubojevic says: December 10, 2020 at 12:53 pm

There is always Springdale Linux made by Princeton University: https://puias.math.ias.edu/

Johnny Hughes says: December 10, 2020 at 4:10 pm

On 10.12.20 at 19:53, Ljubomir Ljubojevic wrote:

I did a conversion of a test webserver from C8 to Springdale. It went smoothly.

Niki Kovacs says: December 12, 2020 at 11:29 am

On 08/12/2020 at 18:54, Frank Cox wrote:

I spent the last three days experimenting with it. Here's my take on it: https://blog.microlinux.fr/migration-CentOS-oracle-linux/

tl;dr: Very nice if you don't have any qualms about the company.

Cheers,

Niki


Frank Cox says: December 12, 2020 at 11:52 am

That's a really excellent article, Nicholas. Thanks ever so much for posting about your experience.

Peter Huebner says: December 15, 2020 at 5:07 am

On Tuesday, 15.12.2020, at 10:14 +0100, Ruslanas Gžibovskis wrote:

According to the Oracle license terms and official statements, it is "free to download, use and share. There is no license cost, no need for a contract, and no usage audits."

Recommendation only: "For business-critical infrastructure, consider Oracle Linux Support." It is optional, not a mandatory requirement. See: https://www.oracle.com/linux

No need for such a construct. Oracle Linux can be used on any production system without the legal requirement to obtain an extra commercial license. Same as with CentOS.

So Oracle Linux can currently be used free as in "free beer" for any system, even for commercial purposes. Nevertheless, Oracle can change those license terms in the future, but this applies just as well to all other company-backed Linux distributions.
--
Peter Huebner

[Nov 29, 2020] Provisioning a system

Nov 29, 2020 | opensource.com

We've gone over several things you can do with Ansible on your system, but we haven't yet discussed how to provision a system. Here's an example of provisioning a virtual machine (VM) with the OpenStack cloud solution.

- name: create a VM in openstack
  os_server:
    name: cloudera-namenode
    state: present
    cloud: openstack
    region_name: andromeda
    image: 923569a-c777-4g52-t3y9-cxvhl86zx345
    flavor_ram: 20146
    flavor: big
    auto_ip: yes
    volumes: cloudera-namenode

All OpenStack modules start with os, which makes it easier to find them. The above configuration uses the os_server module, which lets you add or remove an instance. It includes the name of the VM, its state, its cloud options, and how it authenticates to the API. More information about cloud.yml is available in the OpenStack docs, but if you don't want to use cloud.yml, you can use a dictionary that lists your credentials using the auth option. If you want to delete the VM, just change state: to absent.
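For example, a minimal sketch of the auth dictionary approach might look like the following; the endpoint, user, project, and the openstack_password variable are illustrative placeholders, not values taken from the article:

- name: create a VM with inline credentials instead of cloud.yml
  os_server:
    name: cloudera-namenode
    state: present
    auth:
      auth_url: https://openstack.example.com:5000/v3   # placeholder Keystone endpoint
      username: admin                                   # placeholder user
      password: "{{ openstack_password }}"              # assumed vaulted variable
      project_name: analytics                           # placeholder project
    region_name: andromeda
    image: 923569a-c777-4g52-t3y9-cxvhl86zx345
    flavor: big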

Say you have a list of servers you shut down because you couldn't figure out how to get the applications working, and you want to start them again. You can use os_server_action to restart them (or rebuild them if you want to start from scratch).

Here is an example that starts the server and tells the modules the name of the instance:

- name: restart some servers
  os_server_action:
    action: start
    cloud: openstack
    region_name: andromeda
    server: cloudera-namenode

Most OpenStack modules use similar options. Therefore, to rebuild the server, we can use the same options but change the action to rebuild and add the image we want it to use:

os_server_action:
  action: rebuild
  image: 923569a-c777-4g52-t3y9-cxvhl86zx345

[Nov 29, 2020] bootstrap.yml

Nov 29, 2020 | opensource.com

For this laptop experiment, I decided to use Debian 32-bit as my starting point, as it seemed to work best on my older hardware. The bootstrap YAML script is intended to take a bare-minimal OS install and bring it up to some standard. It relies on a non-root account to be available over SSH and little else. Since a minimal OS install usually contains very little that is useful to Ansible, I use the following to hit one host and prompt me to log in with privilege escalation:

$ ansible-playbook bootstrap.yml -i '192.168.0.100,' -u jfarrell -Kk

The script makes use of Ansible's raw module to set some base requirements. It ensures Python is available, upgrades the OS, sets up an Ansible control account, transfers SSH keys, and configures sudo privilege escalation. When bootstrap completes, everything should be in place to have this node fully participate in my larger Ansible inventory. I've found that bootstrapping bare-minimum OS installs is nuanced (if there is interest, I'll write another article on this topic).
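The article does not reproduce bootstrap.yml itself, but a minimal sketch of the idea, assuming a Debian-based target and an "ansible" control account (both assumptions, not the author's actual names), could look like this:

- name: Bootstrap a bare-minimal OS install
  hosts: all
  gather_facts: false        # fact gathering needs Python, which may not be installed yet
  become: true
  tasks:
    - name: Ensure Python is present so regular modules can run
      raw: test -e /usr/bin/python3 || (apt-get update && apt-get install -y python3)

    - name: Create the Ansible control account
      user:
        name: ansible
        state: present

    - name: Install the control account's SSH public key
      authorized_key:
        user: ansible
        key: "{{ lookup('file', 'files/ansible.pub') }}"   # assumed key location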

The account YAML setup script is used to set up (or reset) user accounts for each family member. This keeps user IDs (UIDs) and group IDs (GIDs) consistent across the small number of machines we have, and it can be used to fix locked accounts when needed. Yes, I know I could have set up Network Information Service or LDAP authentication, but the number of accounts I have is very small, and I prefer to keep these systems very simple. Here is an excerpt I found especially useful for this:

---
- name: Set user accounts
  hosts: all
  gather_facts: false
  become: yes
  vars_prompt:
    - name: passwd
      prompt: "Enter the desired ansible password:"
      private: yes

  tasks:
    - name: Add child 1 account
      user:
        state: present
        name: child1
        password: "{{ passwd | password_hash('sha512') }}"
        comment: Child One
        uid: 888
        group: users
        shell: /bin/bash
        generate_ssh_key: yes
        ssh_key_bits: 2048
        update_password: always
        create_home: yes

The vars_prompt section prompts me for a password, which is passed through a Jinja2 filter to produce the desired password hash. This means I don't need to hardcode passwords into the YAML file, and I can run it to change passwords as needed.

The software installation YAML file is still evolving. It includes a base set of utilities for the sysadmin and then the stuff my users need. This mostly consists of ensuring that the same graphical user interface (GUI) and all the same programs, games, and media files are installed on each machine. Here is a small excerpt of the software for my young children:

- name: Install kids software
  apt:
    name: "{{ packages }}"
    state: present
  vars:
    packages:
      - lxde
      - childsplay
      - tuxpaint
      - tuxtype
      - pysycache
      - pysiogame
      - lmemory
      - bouncy

I created these three Ansible scripts using a virtual machine. When they were perfect, I tested them on the D620. Then converting the Mini 9 was a snap; I simply loaded the same minimal Debian install then ran the bootstrap, accounts, and software configurations. Both systems then functioned identically.

For a while, both sisters enjoyed their respective computers, comparing usage and exploring software features.

The moment of truth

A few weeks later came the inevitable. My older daughter finally came to the conclusion that her pink Dell Mini 9 was underpowered. Her sister's D620 had superior power and screen real estate. YouTube was the new rage, and the Mini 9 could not keep up. As you can guess, the poor Mini 9 fell into disuse; she wanted a new machine, and sharing her younger sister's would not do.

I had another D620 in my pile. I replaced the BIOS battery, gave it a new SSD, and upgraded the RAM. Another perfect example of breathing new life into old hardware.

I pulled my Ansible scripts from source control, and everything I needed was right there: bootstrap, account setup, and software. By this time, I had forgotten a lot of the specific software installation information. But details like account UIDs and all the packages to install were all clearly documented and ready for use. While I surely could have figured it out by looking at my other machines, there was no need to spend the time! Ansible had it all clearly laid out in YAML.

Not only was the YAML documentation valuable, but Ansible's automation made short work of the new install. The minimal Debian OS install from a USB stick took about 15 minutes. The subsequent shaping up of the system using Ansible for end-user deployment took only another nine minutes. End-user acceptance testing was successful, and a new era of computing calmness was brought to my family (other parents will understand!).

Conclusion

Taking the time to learn and practice Ansible with this exercise showed me the true value of its automation and documentation abilities. Spending a few hours figuring out the specifics for the first example saves time whenever I need to provision or fix a machine. The YAML is clear, easy to read, and -- thanks to Ansible's idempotency -- easy to test and refine over time. When I have new ideas or my children have new requests, using Ansible to control a local virtual machine for testing is a valuable time-saving tool.

Doing sysadmin tasks in your free time can be fun. Spending the time to automate and document your work pays rewards in the future; instead of needing to investigate and relearn a bunch of things you've already solved, Ansible keeps your work documented and ready to apply so you can move onto other, newer fun things!

[Nov 25, 2020] What you need to know about Ansible modules by Jairo da Silva Junior

Mar 04, 2019 | opensource.com

Ansible works by connecting to nodes and sending small programs called modules to be executed remotely. This makes it a push architecture, where configuration is pushed from Ansible to servers without agents, as opposed to the pull model, common in agent-based configuration management systems, where configuration is pulled.

These modules are mapped to resources and their respective states , which are represented in YAML files. They enable you to manage virtually everything that has an API, CLI, or configuration file you can interact with, including network devices like load balancers, switches, firewalls, container orchestrators, containers themselves, and even virtual machine instances in a hypervisor or in a public (e.g., AWS, GCE, Azure) and/or private (e.g., OpenStack, CloudStack) cloud, as well as storage and security appliances and system configuration.

With Ansible's batteries-included model, hundreds of modules are included and any task in a playbook has a module behind it.

The contract for building modules is simple: JSON on stdout. The configurations declared in YAML files are delivered over the network via SSH/WinRM -- or any other connection plugin -- as small scripts to be executed on the target server(s). Modules can be written in any language capable of returning JSON, although most Ansible modules (except for Windows PowerShell ones) are written in Python using the Ansible API (this eases the development of new modules).

Modules are one way of expanding Ansible capabilities. Other alternatives, like dynamic inventories and plugins, can also increase Ansible's power. It's important to know about them so you know when to use one instead of the other.

Plugins are divided into several categories with distinct goals, like Action, Cache, Callback, Connection, Filters, Lookup, and Vars. The most popular plugins are:

Ansible's official docs are a good resource on developing plugins .

When should you develop a module?

Although many modules are delivered with Ansible, there is a chance that your problem is not yet covered or it's something too specific -- for example, a solution that might make sense only in your organization. Fortunately, the official docs provide excellent guidelines on developing modules .

IMPORTANT: Before you start working on something new, always check for open pull requests, ask developers at #ansible-devel (IRC/Freenode), or search the development list and/or existing working groups to see if a module exists or is in development.

Signs that you need a new module instead of using an existing one include:

In the ideal scenario, the tool or service already has an API or CLI for management, and it returns some sort of structured data (JSON, XML, YAML).

Identifying good and bad playbooks
"Make love, but don't make a shell script in YAML."

So, what makes a bad playbook?

- name: Read a remote resource
  command: "curl -v http://xpto/resource/abc"
  register: resource
  changed_when: False

- name: Create a resource in case it does not exist
  command: "curl -X POST http://xpto/resource/abc -d '{ config:{ client: xyz, url: http://beta, pattern: *.* } }'"
  when: "resource.stdout | 404"

# Leave it here in case I need to remove it hehehe
#- name: Remove resource
#  command: "curl -X DELETE http://xpto/resource/abc"
#  when: resource.stdout == 1

Aside from being very fragile -- what if the resource state includes a 404 somewhere? -- and demanding extra code to be idempotent, this playbook can't update the resource when its state changes.

Playbooks written this way disrespect many infrastructure-as-code principles. They're not readable by human beings, are hard to reuse and parameterize, and don't follow the declarative model encouraged by most configuration management tools. They also fail to be idempotent and to converge to the declared state.

Bad playbooks can jeopardize your automation adoption. Instead of harnessing configuration management tools to increase your speed, they have the same problems as an imperative automation approach based on scripts and command execution. This creates a scenario where you're using Ansible just as a means to deliver your old scripts, copying what you already have into YAML files.

Here's how to rewrite this example to follow infrastructure-as-code principles.

- name: XPTO
  xpto:
    name: abc
    state: present
    config:
      client: xyz
      url: http://beta
      pattern: "*.*"

The benefits of this approach, based on custom modules , include:

Implementing a custom module

Let's use WildFly , an open source Java application server, as an example to introduce a custom module for our not-so-good playbook:

- name: Read datasource
  command: "jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:read-resource()'"
  register: datasource

- name: Create datasource
  command: "jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:add(driver-name=h2, user-name=sa, password=sa, min-pool-size=20, max-pool-size=40, connection-url=.jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE..)'"
  when: 'datasource.stdout | outcome => failed'

Problems:

A custom module for this would look like:

- name: Configure datasource
  jboss_resource:
    name: "/subsystem=datasources/data-source=DemoDS"
    state: present
    attributes:
      driver-name: h2
      connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
      jndi-name: "java:jboss/datasources/DemoDS"
      user-name: sa
      password: sa
      min-pool-size: 20
      max-pool-size: 40

This playbook is declarative, idempotent, more readable, and converges to the desired state regardless of the current state.

Why learn to build custom modules?

Good reasons to learn how to build custom modules include:

" abstractions save us time working, but they don't save us time learning." -- Joel Spolsky, The Law of Leaky Abstractions
Custom Ansible modules 101 The Ansible way An alternative: drop it in the library directory library/ # if any custom modules, put them here (optional)
module_utils/ # if any custom module_utils to support modules, put them here (optional)
filter_plugins/ # if any custom filter plugins, put them here (optional)

site.yml # master playbook
webservers.yml # playbook for webserver tier
dbservers.yml # playbook for dbserver tier

roles/
common/ # this hierarchy represents a "role"
library/ # roles can also include custom modules
module_utils/ # roles can also include custom module_utils
lookup_plugins/ # or other types of plugins, like lookup in this case

TIP: You can use this directory layout to overwrite existing modules if, for example, you need to patch a module.

First steps

You could do it on your own -- including using another language -- or you could use the AnsibleModule class, as it makes it easier to put JSON on stdout (exit_json(), fail_json()) in the way Ansible expects (msg, meta, has_changed, result), and it's also easier to process the input (params[]) and log its execution (log(), debug()).

from ansible.module_utils.basic import AnsibleModule


def main():
    arguments = dict(name=dict(required=True, type='str'),
                     state=dict(choices=['present', 'absent'], default='present'),
                     config=dict(required=False, type='dict'))

    module = AnsibleModule(argument_spec=arguments, supports_check_mode=True)
    try:
        # has_changed and result are placeholders to be filled in by the module's own logic
        if module.check_mode:
            # Do not do anything; only verify the current state and report it
            module.exit_json(changed=has_changed, meta=result, msg='Did something or not...')

        if module.params['state'] == 'present':
            # Verify the presence of the resource
            # Is the desired state (module.params['param_name']) equal to the current state?
            module.exit_json(changed=has_changed, meta=result)

        if module.params['state'] == 'absent':
            # Remove the resource in case it exists
            module.exit_json(changed=has_changed, meta=result)

    except Exception as err:
        module.fail_json(msg=str(err))


if __name__ == '__main__':
    main()

NOTES: The check_mode ("dry run") allows a playbook to be executed in a mode that only verifies whether changes are required, without performing them. Also, the module_utils directory can be used for code shared among different modules.

For the full Wildfly example, check this pull request .

Running tests

The Ansible way

The Ansible codebase is heavily tested, and every commit triggers a build in its continuous integration (CI) server, Shippable , which includes linting, unit tests, and integration tests.

For integration tests, it uses containers and Ansible itself to perform the setup and verify phase. Here is a test case (written in Ansible) for our custom module's sample code:

- name: Configure datasource
  jboss_resource:
    name: "/subsystem=datasources/data-source=DemoDS"
    state: present
    attributes:
      connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE"
      ...
  register: result

- name: assert output message that datasource was created
  assert:
    that:
      - "result.changed == true"
      - "'Added /subsystem=datasources/data-source=DemoDS' in result.msg"

An alternative: bundling a module with your role

Here is a full example inside a simple role:

Molecule + Vagrant + pytest: molecule init (inside roles/)

It offers greater flexibility to choose:

But your tests would have to be written using pytest with Testinfra or Goss, instead of plain Ansible. If you'd like to learn more about testing Ansible roles, see my article about using Molecule .


[Nov 25, 2020] My top 5 Ansible modules by Mark Phillips

Nov 25, 2019 | opensource.com

5. authorized_key

Secure shell (SSH) is at the heart of Ansible, at least for almost everything besides Windows. Key (no pun intended) to using SSH efficiently with Ansible is keys ! Slight aside -- there are a lot of very cool things you can do for security with SSH keys. It's worth perusing the authorized_keys section of the sshd manual page . Managing SSH keys can become laborious if you're getting into the realms of granular user access, and although we could do it with either of my next two favourites, I prefer to use the module because it enables easy management through variables .
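A minimal sketch of the module in use (the user name and key file are placeholders, not taken from the article) might be:

- name: Ensure the admin public key is present for user jane
  authorized_key:
    user: jane                                      # placeholder account
    state: present
    key: "{{ lookup('file', 'files/jane.pub') }}"   # placeholder key file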

4. file

Besides the obvious function of placing a file somewhere, the file module also sets ownership and permissions. I'd say that's a lot of bang for your buck with one module. I'd proffer a substantial portion of security relates to setting permissions too, so the file module plays nicely with authorized_keys .

3. template

There are so many ways to manipulate the contents of files, and I see lots of folk use lineinfile . I've used it myself for small tasks. However, the template module is so much clearer because you maintain the entire file for context. My preference is to write Ansible content in such a way that anyone can understand it easily -- which to me means not making it hard to understand what is happening. Use of template means being able to see the entire file you're putting into place, complete with the variables you are using to change pieces.

2. uri

Many modules in the current distribution leverage Ansible as an orchestrator. They talk to another service, rather than doing something specific like putting a file into place. Usually, that talking is over HTTP too. In the days before many of these modules existed, you could program an API directly using the uri module. It's a powerful access tool, enabling you to do a lot. I wouldn't be without it in my fictitious Ansible shed.
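As a rough sketch of that kind of direct API call (the endpoint URL is a placeholder), a health-check task with uri might look like:

- name: Check that the application answers on its health endpoint
  uri:
    url: "http://localhost:8080/health"   # placeholder endpoint
    method: GET
    status_code: 200
    return_content: yes
  register: health_check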

1. shell

The joker card in our pack. The Swiss Army knife. If you're absolutely stuck for how to control something else, use shell. Some will argue we're now talking about making Ansible a Bash script -- but I would say it's still better, because with the use of the name parameter in your plays and roles, you document every step. To me, that's as big a bonus as anything. Back in the days when I was still consulting, I once helped a database administrator (DBA) migrate to Ansible. The DBA wasn't one for change and pushed back at changing working methods. So, to ease into the Ansible way, we called some existing DB management scripts from Ansible using the shell module, with an informative name statement to accompany each task.
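A hedged sketch of that pattern (the script path and option are placeholders, not the DBA's actual scripts) could be:

- name: Run the DBA's existing nightly maintenance script
  shell: /usr/local/scripts/db_maintenance.sh --nightly   # placeholder path and option
  args:
    chdir: /usr/local/scripts
  register: maintenance_output

The name line is what shows up in the play output, so every step documents itself.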

You can achieve a lot with these five modules. Yes, modules designed to do a specific task will make your life even easier. But with a smidgen of engineering simplicity, you can achieve a lot with very little. Ansible developer Brian Coca is a master at it, and his tips and tricks talk is always worth a watch.

[Nov 25, 2020] 10 Ansible modules for Linux system automation by Ricardo Gerardi

Nov 25, 2020 | opensource.com

These handy modules save time and hassle by automating many of your daily tasks, and they're easy to implement with a few commands.

26 Oct 2020 | Ricardo Gerardi (Red Hat)

Ansible is a complete automation solution for your IT environment. You can use Ansible to automate Linux and Windows server configuration, orchestrate service provisioning, deploy cloud environments, and even configure your network devices.

Ansible modules abstract actions on your system so you don't need to worry about implementation details. You simply describe the desired state, and Ansible ensures the target system matches it.

This module availability is one of Ansible's main benefits, and it is often referred to as Ansible having "batteries included." Indeed, you can find modules for a great number of tasks, and while this is great, I frequently hear from beginners that they don't know where to start.

Although your choice of modules will depend exclusively on your requirements and what you're trying to automate with Ansible, here are the top ten modules you need to get started with Ansible for Linux system automation.

1. copy

The copy module allows you to copy a file from the Ansible control node to the target hosts. In addition to copying the file, it allows you to set ownership, permissions, and SELinux labels to the destination file. Here's an example of using the copy module to copy a "message of the day" configuration file to the target hosts:

- name: Ensure MOTD file is in place
  copy:
    src: files/motd
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644

For less complex content, you can copy the content directly to the destination file without having a local file, like this:

- name: Ensure MOTD file is in place
  copy:
    content: "Welcome to this system."
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644

This module works idempotently , which means it will only copy the file if the same file is not already in place with the same content and permissions.

The copy module is a great option to copy a small number of files with static content. If you need to copy a large number of files, take a look at the synchronize module. To copy files with dynamic content, take a look at the template module next.
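For comparison, a minimal synchronize task (the source and destination paths are illustrative assumptions) might look like:

- name: Copy a large tree of static files to the web root
  synchronize:
    src: files/webroot/      # assumed local directory on the control node
    dest: /var/www/html/     # assumed destination on the target host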

2. template

The template module works similarly to the copy module, but it processes content dynamically using the Jinja2 templating language before copying it to the target hosts.

For example, define a "message of the day" template that displays the target system name, like this:

$ vi templates/motd.j2
Welcome to {{ inventory_hostname }}.

Then, instantiate this template using the template module, like this:

- name: Ensure MOTD file is in place
  template:
    src: templates/motd.j2
    dest: /etc/motd
    owner: root
    group: root
    mode: 0644

Before copying the file, Ansible processes the template and interpolates the variable, replacing it with the target host system name. For example, if the target system name is rh8-vm03 , the result file is:

Welcome to rh8-vm03.

While the copy module can also interpolate variables when using the content parameter, the template module allows additional flexibility by creating template files, which enable you to define more complex content, including for loops, if conditions, and more. For a complete reference, check Jinja2 documentation .

This module is also idempotent, and it will not copy the file if the content on the target system already matches the template's content.

3. user

The user module allows you to create and manage Linux users in your target system. This module has many different parameters, but in its most basic form, you can use it to create a new user.

For example, to create the user ricardo with UID 2001, part of the groups users and wheel , and password mypassword , apply the user module with these parameters:

- name: Ensure user ricardo exists
  user:
    name: ricardo
    group: users
    groups: wheel
    uid: 2001
    password: "{{ 'mypassword' | password_hash('sha512') }}"
    state: present

Notice that this module tries to be idempotent, but it cannot guarantee that for all its options. For instance, if you execute the previous module example again, it will reset the password to the defined value, changing the user in the system for every execution. To make this example idempotent, use the parameter update_password: on_create , ensuring Ansible only sets the password when creating the user and not on subsequent runs.
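A sketch of the same task made idempotent with that parameter:

- name: Ensure user ricardo exists without resetting an existing password
  user:
    name: ricardo
    group: users
    groups: wheel
    uid: 2001
    password: "{{ 'mypassword' | password_hash('sha512') }}"
    update_password: on_create   # only set the password when the account is first created
    state: present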

You can also use this module to delete a user by setting the parameter state: absent .

The user module has many options for you to manage multiple user aspects. Make sure you take a look at the module documentation for more information.

4. package

The package module allows you to install, update, or remove software packages from your target system using the operating system standard package manager.

For example, to install the Apache web server on a Red Hat Linux machine, apply the module like this:

- name: Ensure Apache package is installed
  package:
    name: httpd
    state: present

This module is distribution agnostic, and it works by using the underlying package manager, such as yum/dnf for Red Hat-based distributions and apt for Debian. Because of that, it only does basic tasks like install and remove packages. If you need more control over the package manager options, use the specific module for the target distribution.

Also, keep in mind that, even though the module itself works on different distributions, the package name for each can be different. For instance, in Red Hat-based distribution, the Apache web server package name is httpd , while in Debian, it is apache2 . Ensure your playbooks deal with that.
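One way to deal with that, sketched here using the os_family fact (an illustrative approach, not something prescribed by the article), is a small conditional on the package name:

- name: Ensure the Apache package is installed on Red Hat or Debian family hosts
  package:
    name: "{{ 'httpd' if ansible_facts['os_family'] == 'RedHat' else 'apache2' }}"
    state: present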

This module is idempotent, and it will not act if the current system state matches the desired state.

5. service

Use the service module to manage the target system services using the required init system; for example, systemd .

In its most basic form, all you have to do is provide the service name and the desired state. For instance, to start the sshd service, use the module like this:

- name: Ensure SSHD is started
  service:
    name: sshd
    state: started

You can also ensure the service starts automatically when the target system boots up by providing the parameter enabled: yes .
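Combined with the previous example, that looks like:

- name: Ensure SSHD is started now and enabled at boot
  service:
    name: sshd
    state: started
    enabled: yes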

As with the package module, the service module is flexible and works across different distributions. If you need fine-tuning over the specific target init system, use the corresponding module; for example, the module systemd .

Similar to the other modules you've seen so far, the service module is also idempotent.

6. firewalld

Use the firewalld module to control the system firewall with the firewalld daemon on systems that support it, such as Red Hat-based distributions.

For example, to open the HTTP service on port 80, use it like this:

- name: Ensure port 80 (http) is open
  firewalld:
    service: http
    state: enabled
    permanent: yes
    immediate: yes

You can also specify custom ports instead of service names with the port parameter. In this case, make sure to specify the protocol as well. For example, to open TCP port 3000, use this:

- name: Ensure port 3000/TCP is open
  firewalld:
    port: 3000/tcp
    state: enabled
    permanent: yes
    immediate: yes

You can also use this module to control other firewalld aspects like zones or complex rules. Make sure to check the module's documentation for a comprehensive list of options.
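As a small sketch of the zone option (the choice of the dmz zone is an assumption made for illustration):

- name: Ensure the HTTP service is allowed in the dmz zone
  firewalld:
    service: http
    zone: dmz
    state: enabled
    permanent: yes
    immediate: yes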

7. file

The file module allows you to control the state of files and directories -- setting permissions, ownership, and SELinux labels.

For instance, use the file module to create a directory /app owned by the user ricardo , with read, write, and execute permissions for the owner and the group users :

- name: Ensure directory /app exists
  file:
    path: /app
    state: directory
    owner: ricardo
    group: users
    mode: 0770

You can also use this module to set file properties on directories recursively by using the parameter recurse: yes or delete files and directories with the parameter state: absent .
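Sketched out, those two variations look like this (the /app/old path is a placeholder):

- name: Apply ownership and permissions to the whole /app tree
  file:
    path: /app
    state: directory
    owner: ricardo
    group: users
    mode: 0770
    recurse: yes

- name: Remove an obsolete directory
  file:
    path: /app/old        # placeholder path
    state: absent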

This module works with idempotency for most of its parameters, but some of them may make it change the target path every time. Check the documentation for more details.

8. lineinfile

The lineinfile module allows you to manage single lines on existing files. It's useful to update targeted configuration on existing files without changing the rest of the file or copying the entire configuration file.

For example, add a new entry to your hosts file like this:

- name: Ensure host rh8-vm03 in hosts file
  lineinfile:
    path: /etc/hosts
    line: 192.168.122.236 rh8-vm03
    state: present

You can also use this module to change an existing line by applying the parameter regexp to look for an existing line to replace. For example, update the sshd_config file to prevent root login by modifying the line PermitRootLogin yes to PermitRootLogin no :

- name: Ensure root cannot login via ssh
  lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^PermitRootLogin'
    line: PermitRootLogin no
    state: present

Note: Use the service module to restart the SSHD service to enable this change.

This module is also idempotent, but, in case of line modification, ensure the regular expression matches both the original and updated states to avoid unnecessary changes.

9. unarchive

Use the unarchive module to extract the contents of archive files such as tar or zip files. By default, it copies the archive file from the control node to the target machine before extracting it. Change this behavior by providing the parameter remote_src: yes .

For example, extract the contents of a .tar.gz file that has already been downloaded to the target host with this syntax:

- name: Extract contents of app.tar.gz
  unarchive:
    src: /tmp/app.tar.gz
    dest: /app
    remote_src: yes

Some archive technologies require additional packages to be available on the target system; for example, the package unzip to extract .zip files.

Depending on the archive format used, this module may or may not work idempotently. To prevent unnecessary changes, you can use the parameter creates to specify a file or directory that this module would create when extracting the archive contents. If this file or directory already exists, the module does not extract the contents again.
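A sketch using creates, assuming the archive contains an install.sh at its top level (an assumption, not stated in the article):

- name: Extract app.tar.gz only if it has not been extracted before
  unarchive:
    src: /tmp/app.tar.gz
    dest: /app
    remote_src: yes
    creates: /app/install.sh   # skip extraction if this file already exists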

10. command

The command module is a flexible one that allows you to execute arbitrary commands on the target system. Using this module, you can do almost anything on the target system as long as there's a command for it.

Even though the command module is flexible and powerful, it should be used with caution. Avoid using the command module to execute a task if there's another appropriate module available for that. For example, you could create users by using the command module to execute the useradd command, but you should use the user module instead, as it abstracts many details away from you, taking care of corner cases and ensuring the configuration only changes when necessary.

For cases where no modules are available, or to run custom scripts or programs, the command module is still a great resource. For instance, use this module to run a script that is already present in the target machine:

- name: Run the app installer
  command: "/app/install.sh"

By default, this module is not idempotent, as Ansible executes the command every single time. To make the command module idempotent, you can use when conditions to only execute the command if the appropriate condition exists, or the creates argument, similarly to the unarchive module example.
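For instance, a sketch of the creates variant, assuming the installer drops a marker file (a hypothetical name) when it finishes:

- name: Run the app installer only once
  command: "/app/install.sh"
  args:
    creates: /app/.installed   # assumed marker file written by the installer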

What's next?

Using these modules, you can configure entire Linux systems by copying, templating, or modifying configuration files, creating users, installing packages, starting system services, updating the firewall, and more.

If you are new to Ansible, make sure you check the documentation on how to create playbooks to combine these modules to automate your system. Some of these tasks require running with elevated privileges to work. For more details, check the privilege escalation documentation.

As of Ansible 2.10, modules are organized in collections. Most of the modules in this list are part of the ansible.builtin collection and are available by default with Ansible, but some of them are part of other collections. For a list of collections, check the Ansible documentation.

[Nov 22, 2020] Programmable editor as sysadmin tool

Highly recommended!
Oct 05, 2020 | perlmonks.org

likbez

There are several classic programmable editors (vi/vim, emacs, THE, etc.).

There are also some newer editors that use Lua as the scripting language, but none with Perl as a scripting language. See https://www.slant.co/topics/7340/~open-source-programmable-text-editors

Here, for example, is a fragment from an old collection of hardening scripts called Titan, written for Solaris by Brad M. Powell. The example below uses ed, which is the simplest (though probably not optimal) choice unless your primary editor is vi/vim.

FixHostsEquiv() {

if [ -f /etc/hosts.equiv -a -s /etc/hosts.equiv ]; then
      t_echo 2 " /etc/hosts.equiv exists and is not empty. Saving a copy..."
      /bin/cp /etc/hosts.equiv /etc/hosts.equiv.ORIG

        if grep -s "^+$" /etc/hosts.equiv
        then
        ed - /etc/hosts.equiv <<- !
        g/^+$/d
        w
        q
        !
        fi
else
        t_echo 2 "        No /etc/hosts.equiv -  PASSES CHECK"
        exit 1
fi
}

For VIM/Emacs users the main benefit here is that you will know your editor better, instead of inventing/learning "yet another tool." That actually also is an argument against Ansible and friends: unless you operate a cluster or other sizable set of servers, why try to kill a bird with a cannon. Positive return on investment probably starts if you manage over 8 or even 16 boxes.

Perl also can be used. But I would recommend slurping the file into an array and operating on lines as you would in an editor; a regex over the whole text is more difficult to write correctly than a regex for a single line, although experts have no difficulty using just that. But we seldom acquire skills we can do without :-)

On the other hand, that gives you a chance to learn the splice function ;-)

If the files are basically identical and need only slight customization, you can use the patch utility with pdsh, but you need to learn the ropes. Like Perl, the patch utility was also written by Larry Wall and is a very flexible tool for such tasks. First collect the files from your servers into some central directory with pdsh/pdcp (which I think is a standard RPM on RHEL and other Linuxes) or another tool, then create diffs against one server to which you have already applied the change (diff is your command language at this point), verify on another server that the diff produces the right result, apply it, and then distribute the resulting files back to each server, again using pdsh/pdcp. If you have a common NFS/GPFS/Lustre filesystem for all servers, this is even simpler, as you can store both the tree and the diffs on the common filesystem.

The same central repository of config files can be used with vi and other approaches, creating a "poor man's Ansible" for you.

[Nov 22, 2020] Which programming languages are useful for sysadmins, by Jonathan Roemer

I am surprised that Perl is No. 3. It should be No. 1, as it is definitely superior to both shell and Python for most sysadmin scripts, and it has far more in common with bash (which remains the major language) than Python does.
It looks like Python, as the language taught at universities, dominates because the number of weak sysadmins, who merely mention it but do not actually use it, exceeds the number of strong sysadmins (who have really written at least one complex sysadmin script) by several orders of magnitude.
Jul 24, 2020 | www.redhat.com
What&#039;s your favorite programming/scripting language for sysadmin work?

Life as a systems engineer is a process of continuous improvement. In the past few years, as software-defined-everything has started to overhaul how we work in the same way virtualization did, knowing how to write and debug software has been a critical skill for systems engineers. Whether you are automating a small, repetitive, manual task, writing daily reporting tools, or debugging a production outage, it is vital to choose the right tool for the job. Below, are a few programming languages that I think all systems engineers will find useful, and also some guidance for picking your next language to learn.

Bash

The old standby, Bash (and, to a certain extent, POSIX sh) is the go-to for many systems engineers. The quick access to system primitives makes it ideal for ad-hoc data transformations. Slap together curl and jq with some conditionals, and you've got everything from a basic health check to an automated daily reporting tool. However, once you get a few levels of iteration deep, or you're making multiple calls to jq , you probably want to pull out a more fully-featured programming language.

Python

Python's easy onboarding, wide range of libraries, and large community make it ideal for more demanding sysadmin tasks. Daily reports might start as a few hundred lines of Bash that are run first thing in the morning. Once this gets large enough, however, it makes sense to move this to Python. A quick import json for simple JSON object interaction, and import jinja2 for quickly templating out a daily HTML-formatted email.

The languages your tools are built in

One of the powers of open source is, of course, access to the source! However, it is hard to realize this value if you don't have an understanding of the languages these tools are built in. An understanding of Go makes digging into the Datadog or Kubernetes codebases much easier. Being familiar with the development and debugging tools for C and Perl allow you to quickly dig down into aberrant behavior.

The new hotness

Even if you don't have Go or Rust in your environment today, there's a good chance you'll start seeing these languages more often. Maybe your application developers are migrating over to Elixir. Keeping up with the evolution of our industry can frequently feel like a treadmill, but this can be mitigated somewhat by getting ahead of changes inside of your organization. Keep an ear to the ground and start learning languages before you need them, so you're always prepared.

[ Download now: A sysadmin's guide to Bash scripting . ]

[Nov 22, 2020] Read a file line by line

Jul 07, 2020 | www.redhat.com

Assume I have a file with a lot of IP addresses and want to operate on those IP addresses. For example, I want to run dig to retrieve reverse-DNS information for the IP addresses listed in the file. I also want to skip IP addresses that start with a comment (# or hashtag).

I'll use fileA as an example. Its contents are:

10.10.12.13  some ip in dc1
10.10.12.14  another ip in dc2
#10.10.12.15 not used IP
10.10.12.16  another IP

I could copy and paste each IP address, and then run dig manually:

$> dig +short -x 10.10.12.13

Or I could do this:

$> while read -r ip _; do [[ $ip == \#* ]] && continue; dig +short -x "$ip"; done < ipfile

What if I want to swap the columns in fileA? For example, I want to put IP addresses in the right-most column so that fileA looks like this:

some ip in dc1 10.10.12.13
another ip in dc2 10.10.12.14
not used IP #10.10.12.15
another IP 10.10.12.16

I run:

$> while  read -r ip rest; do printf '%s %s\n' "$rest" "$ip"; done < fileA

[Nov 22, 2020] Save terminal output to a file under Linux or Unix bash

Apr 19, 2020 | www.cyberciti.biz

[Nov 22, 2020] Top 7 Linux File Compression and Archive Tools

Notable quotes:
"... It's currently support 188 file extensions. ..."
Nov 22, 2020 | www.2daygeek.com

6) How to Use the zstd Command to Compress and Decompress File on Linux

The zstd command name stands for Zstandard; it is a real-time lossless data compression algorithm that provides high compression ratios.

It was created by Yann Collet at Facebook. It offers a wide range of options for compression and decompression.

It also provides a special mode for small data, known as dictionary compression.

To compress the file using the zstd command.

# zstd [Files To Be Compressed] -o [FileName.zst]

To decompress the file using the zstd command.

# zstd -d [FileName.zst]

To decompress the file using the unzstd command.

# unzstd [FileName.zst]
7) How to Use the PeaZip Command to Compress and Decompress Files on Linux

PeaZip is a free and open-source file archive utility, based on Open Source technologies of 7-Zip, p7zip, FreeArc, PAQ, and PEA projects.

It's a cross-platform, full-featured, and user-friendly alternative to the WinRar and WinZip archive manager applications.

It supports its native PEA archive format (featuring volume spanning, compression and authenticated encryption).

It was developed for Windows and later added support for Unix/Linux as well. It currently supports 188 file extensions.

[Nov 18, 2020] Why the lone wolf mentality is a sysadmin mistake by Scott McBrien

Jul 10, 2019 | www.redhat.com

If you have worked in system administration for a while, you've probably run into a system administrator who doesn't write anything down and keeps their work a closely-guarded secret. When I've run into administrators like this, I often ask why they do this, and the response is usually a joking, "Job security." Which, may not actually be all that joking.

Don't be that person. I've worked in several shops, and I have yet to see someone "work themselves out of a job." What I have seen, however, is someone that can't take a week off without being called by the team repeatedly. Or, after this person left, I have seen a team struggle to detangle the mystery of what that person was doing, or how they were managing systems under their control.

[Nov 04, 2020] Utility dirhist -- History of changes in one or several directories was posted on GitHub

Designed to run from cron. It uses a different, simpler approach than etckeeper: it does not use Git or any other version control system, as they proved to be of questionable utility unless there are multiple sysadmins on the server (and it avoids the problem, connected with the use of Git, of incorrect assignment of file attributes when restoring system files).

If it detects a changed file, it creates a new tar file for each analyzed directory, for example /etc, /root, and /boot.

Detects all "critical" changed file, diffs them with previous version, and produces report.

By default, all information is stored in /var/Dirhist_base. Directories to watch and files that are considered important are configurable via two config files, dirhist_ignore.lst and dirhist_watch.lst, which by default are located at the root of the /var/Dirhist_base tree (as /var/Dirhist_base/dirhist_ignore.lst and /var/Dirhist_base/dirhist_watch.lst).

You can specify any number of watched directories and, within each directory, any number of watched files and subdirectories. The format used is similar to YAML dictionaries, or Windows 3.x INI files. If any of the "watched" files or directories change, the utility can email the report to selected email addresses to alert about those changes. This is useful when several sysadmins manage the same server. It can also be used to check whether changes made were documented in Git or another version management system (this process can be automated using the admpolice utility).

[Nov 02, 2020] The Pros and Cons of Ansible - UpGuard

Nov 02, 2020 | www.upguard.com

Ansible has no notion of state. Since it doesn't keep track of dependencies, the tool simply executes a sequential series of tasks, stopping when it finishes, fails or encounters an error . For some, this simplistic mode of automation is desirable; however, many prefer their automation tool to maintain an extensive catalog for ordering (à la Puppet), allowing them to reach a defined state regardless of any variance in environmental conditions.

[Nov 02, 2020] YAML for beginners - Enable Sysadmin

Nov 02, 2020 | www.redhat.com

YAML Ain't a Markup Language (YAML), and as configuration formats go, it's easy on the eyes. It has an intuitive visual structure, and its logic is pretty simple: indented bullet points inherit properties of parent bullet points.

But this apparent simplicity can be deceptive.


It's easy (and misleading) to think of YAML as just a list of related values, no more complex than a shopping list. There is a heading and some items beneath it. The items below the heading relate directly to it, right? Well, you can test this theory by writing a little bit of valid YAML.

Open a text editor and enter this text, retaining the dashes at the top of the file and the leading spaces for the last two items:

---
Store: Bakery
  Sourdough loaf
  Bagels

Save the file as example.yaml (or similar).

If you don't already have yamllint installed, install it:

$ sudo dnf install -y yamllint

A linter is an application that verifies the syntax of a file. The yamllint command is a great way to ensure your YAML is valid before you hand it over to whatever application you're writing YAML for (Ansible, for instance).

Use yamllint to validate your YAML file:

$ yamllint --strict shop.yaml || echo "Fail"
$

But when converted to JSON with a simple converter script , the data structure of this simple YAML becomes clearer:

$ ~/bin/json2yaml.py shop.yaml
{"Store": "Bakery Sourdough loaf Bagels"}

Parsed without the visual context of line breaks and indentation, the actual scope of your data looks a lot different. The data is mostly flat, almost devoid of hierarchy. There's no indication that the sourdough loaf and bagels are children of the name of the store.


How data is stored in YAML

YAML can contain different kinds of data blocks:

- Sequence: values listed in a specific order; each item starts with a dash and a space.
- Mapping: key and value pairs; each key must be unique, and the order doesn't matter.

There's a third type called scalar, which is arbitrary data (encoded in Unicode) such as strings, integers, dates, and so on. In practice, these are the words and numbers you type when building mapping and sequence blocks, so you won't think about these any more than you ponder the words of your native tongue.

When constructing YAML, it might help to think of YAML as either a sequence of sequences or a map of maps, but not both.

YAML mapping blocks

When you start a YAML file with a mapping statement, YAML expects a series of mappings. A mapping block in YAML doesn't close until it's resolved, and a new mapping block is explicitly created. A new block can only be created either by increasing the indentation level (in which case, the new block exists inside the previous block) or by resolving the previous mapping and starting an adjacent mapping block.

The reason the original YAML example in this article fails to produce data with a hierarchy is that it's actually only one data block: the key Store has a single value of Bakery Sourdough loaf Bagels . YAML ignores the whitespace because no new mapping block has been started.

Is it possible to fix the example YAML by prepending each sequence item with a dash and space?

---
Store: Bakery
  - Sourdough loaf
  - Bagels

Again, this is valid YAML, but it's still pretty flat:

$ ~/bin/json2yaml.py shop.yaml
{"Store": "Bakery - Sourdough loaf - Bagels"}

The problem is that this YAML file opens a mapping block and never closes it. To close the Store block and open a new one, you must start a new mapping. The value of the mapping can be a sequence, but you need a key first.

Here's the correct (and expanded) resolution:

---
Store:
  Bakery:
    - 'Sourdough loaf'
    - 'Bagels'
  Cheesemonger:
    - 'Blue cheese'
    - 'Feta'

In JSON, this resolves to:

{"Store": {"Bakery": ["Sourdough loaf", "Bagels"],
"Cheesemonger": ["Blue cheese", "Feta"]}}

As you can see, this YAML directive contains one mapping ( Store ) to two child values ( Bakery and Cheesemonger ), each of which is mapped to a child sequence.

YAML sequence blocks

The same principles hold true should you start a YAML directive as a sequence. For instance, this YAML directive is valid:

---
- Flour
- Water
- Salt

Each item is distinct when viewed as JSON:

["Flour", "Water", "Salt"]

But this YAML file is not valid because it attempts to start a mapping block at an adjacent level to a sequence block :

---
- Flour
- Water
- Salt
Sugar: caster

It can be repaired by moving the mapping block into the sequence:

---
- Flour
- Water
- Salt
- Sugar: caster

You can, as always, embed a sequence into your mapping item:

---
- Flour
- Water
- Salt
- Sugar:
    - caster
    - granulated
    - icing

Viewed through the lens of explicit JSON scoping, that YAML snippet reads like this:

["Flour", "Salt", "Water", {"Sugar": ["caster", "granulated", "icing"]}]


YAML syntax

If you want to comfortably write YAML, it's vital to be aware of its data structure. As you can tell, there's not much you have to remember. You know about mapping and sequence blocks, so you know everything you need to work with. All that's left is to remember how they do and do not interact with one another. Happy coding!

[Nov 02, 2020] Deconstructing an Ansible playbook by Peter Gervase

Oct 21, 2020 | www.redhat.com

This article describes the different parts of an Ansible playbook starting with a very broad overview of what Ansible is and how you can use it. Ansible is a way to use easy-to-read YAML syntax to write playbooks that can automate tasks for you. These playbooks can range from very simple to very complex and one playbook can even be embedded in another.

Installing httpd with a playbook

Now that you have that base knowledge let's look at a basic playbook that will install the httpd package. I have an inventory file with two hosts specified, and I placed them in the web group:

[root@ansi